Lecture Notes in Networks and Systems 407
Miguel Botto-Tobar · Omar S. Gómez · Raul Rosero Miranda · Angela Díaz Cadena · Sergio Montes León · Washington Luna-Encalada Editors
Trends in Artificial Intelligence and Computer Engineering Proceedings of ICAETT 2021
Lecture Notes in Networks and Systems Volume 407
Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).
More information about this series at https://link.springer.com/bookseries/15179
Editors Miguel Botto-Tobar Eindhoven University of Technology Guayaquil, Ecuador
Omar S. Gómez Escuela Superior Politécnica de Chimborazo Riobamba, Chimborazo, Ecuador
Raul Rosero Miranda Escuela Superior Politécnica de Chimborazo Riobamba, Chimborazo, Ecuador
Angela Díaz Cadena Universitat de Valencia Valencia, Valencia, Spain
Sergio Montes León Universidad de las Fuerzas Armadas (ESPE) Quito, Ecuador
Washington Luna-Encalada Escuela Superior Politécnica de Chimborazo Riobamba, Chimborazo, Ecuador
ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-030-96146-6 ISBN 978-3-030-96147-3 (eBook) https://doi.org/10.1007/978-3-030-96147-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The 3rd International Conference on Advances in Emerging Trends and Technologies (ICAETT) was held on the main campus of the Escuela Superior Politécnica de Chimborazo, in Riobamba, Ecuador, from November 10 to 12, 2021. It was proudly organized by the Facultad de Informática y Electrónica (FIE) at the Escuela Superior Politécnica de Chimborazo and supported by GDEON. The ICAETT series aims to bring together top researchers and practitioners working in different domains of computer science to exchange their expertise and to discuss the perspectives of development and collaboration [1, 2]. The content of this volume is related to the following subjects:
• e-Business
• e-Learning
• Intelligent systems
• Machine vision
• Security
• Technology trends
ICAETT 2021 received 96 submissions written in English by 140 authors from 15 different countries. All papers were peer-reviewed by the ICAETT 2021 program committee, consisting of 162 high-quality researchers. To ensure a high-quality and thoughtful review process, we assigned each paper to at least three reviewers. Based on the peer reviews, 22 full papers were accepted, resulting in a 23% acceptance rate, which was within our goal of less than 40%. We would like to express our sincere gratitude to the invited speakers for their inspirational talks, to the authors for submitting their work to this conference, and to the reviewers for sharing their experience during the selection process.
November 2021
Miguel Botto-Tobar Omar S. Gómez Raúl Rosero Miranda Angela Díaz-Cadena Sergio Montes León Washington Luna-Encalada
Organization
General Chairs Miguel Botto-Tobar Omar S. Gómez
Eindhoven University of Technology, The Netherlands Escuela Superior Politécnica de Chimborazo, Ecuador
Organizing Committee Miguel Botto-Tobar Omar S. Gómez Raúl Rosero Miranda Ángela Díaz Cadena Sergio Montes León Washington Luna-Encalada
Eindhoven University of Technology, The Netherlands Escuela Superior Politécnica de Chimborazo, Ecuador Escuela Superior Politécnica de Chimborazo, Ecuador Universitat de Valencia, Spain Universidad de las Fuerzas Armadas (ESPE), Ecuador Escuela Superior Politécnica de Chimborazo, Ecuador
Steering Committee Miguel Botto-Tobar Ángela Díaz Cadena
Eindhoven University of Technology, The Netherlands Universitat de Valencia, Spain
Publication Chair Miguel Botto-Tobar
Eindhoven University of Technology, The Netherlands
Program Chairs Technology Trends Miguel Botto-Tobar Sergio Montes León Hernán Montes León
Eindhoven University of Technology, The Netherlands Universidad de las Fuerzas Armadas (ESPE), Ecuador Universidad Rey Juan Carlos, Spain
Electronics Ana Zambrano Vizuete David Rivas Edgar Maya-Olalla Hernán Domínguez-Limaico
Escuela Politécnica Nacional, Ecuador Universidad de las Fuerzas Armadas (ESPE), Ecuador Universidad Técnica del Norte, Ecuador Universidad Técnica del Norte, Ecuador
Intelligent Systems Guillermo Pizarro Vásquez Janeth Chicaiza Gustavo Andrade Miranda
Universidad Politécnica Salesiana, Ecuador Universidad Técnica Particular de Loja, Ecuador Universidad de Guayaquil, Ecuador
Machine Vision Julian Galindo Erick Cuenca Pablo Torres-Carrión
LIG-IIHM, France Université de Montpellier, France Universidad Técnica Particular de Loja, Ecuador
Communication Óscar Zambrano Vizuete Pablo Palacios Jativa
Universidad Técnica del Norte, Ecuador Universidad de Chile, Chile
Security Luis Urquiza-Aguiar Joffre León-Acurio
Escuela Politécnica Nacional, Ecuador Universidad Técnica de Babahoyo, Ecuador
e-Learning Miguel Zúñiga-Prieto Doris Macias
Universidad de Cuenca, Ecuador Universitat Politécnica de Valencia, Spain
e-Business Angela Díaz Cadena
Universitat de Valencia, Spain
e-Government and e-Participation Alex Santamaría Philco
Universidad Laica Eloy Alfaro de Manabí, Ecuador
Program Committee Abdón Carrera Rivera Adrián Cevallos Navarrete Alba Morales Tirado Alejandro Ramos Nolazco Alex Santamaría Philco
Alex Cazañas Gordon Alexandra Velasco Arévalo Alexandra Elizabeth Bermeo Arpi Alfonso Guijarro Rodríguez Alfredo Núñez Allan Avendaño Sudario Almílcar Puris Cáceres Ana Guerrero Alemán Ana Santos Delgado Ana Núñez Ávila Andrea Mory Alvarado Andrés Calle Bustos Andrés Jadan Montero Andrés Molina Ortega Andrés Robles Durazno Andrés Vargas González Andrés Barnuevo Loaiza Andrés Chango Macas
University of Melbourne, Australia Griffith University, Australia University of Greenwich, UK Instituto Tecnológico y de Estudios Superiores Monterrey, Mexico Universitat Politècnica de València, Spain/Universidad Laica Eloy Alfaro de Manabí, Ecuador The University of Queensland, Australia Universität Stuttgart, Germany Universidad de Cuenca, Ecuador Universidad de Guayaquil, Ecuador New York University, USA Università degli Studi di Roma “La Sapienza,” Italy Universidad Técnica Estatal de Quevedo, Ecuador University of Adelaide, Australia Universidade Federal de Santa Catarina (UFSC), Brazil Universitat Politècnica de València, Spain Universidad Católica de Cuenca, Ecuador Universitat Politècnica de València, Spain Universidad de Buenos Aires, Argentina Universidad de Chile, Chile Edinburgh Napier University, UK Syracuse University, USA Universidad de Santiago de Chile, Chile Universidad Politécnica de Madrid, Spain
Andrés Cueva Costales Andrés Parra Sánchez Ángel Plaza Vargas Angel Vazquez Pazmiño Ángela Díaz Cadena Angelo Vera Rivera Antonio Villavicencio Garzón Audrey Romero Pelaez Bolívar Chiriboga Ramón Byron Acuna Acurio Carla Melaños Salazar Carlos Barriga Abril Carlos Valarezo Loiza Cesar Mayorga Abril César Ayabaca Sarria Christian Báez Jácome Cintya Aguirre Brito Cristian Montero Mariño Daniel Magües Martínez Daniel Silva Palacios Daniel Armijos Conde Danilo Jaramillo Hurtado David Rivera Espín David Benavides Cuevas Diana Morillo Fueltala Diego Vallejo Huanga Edwin Guamán Quinche Efrén Reinoso Mendoza Eric Moyano Luna Erick Cuenca Pauta Ernesto Serrano Guevara Estefania Yánez Cardoso Esther Parra Mora Fabián Corral Carrera Felipe Ebert Fernando Borja Moretta Franklin Parrales Bravo Gabriel López Fonseca Gema Rodriguez-Perez Georges Flament Jordán Germania Rodríguez Morales Ginger Saltos Bernal Gissela Uribe Nogales
University of Melbourne, Australia University of Melbourne, Australia Universidad de Guayaquil, Ecuador Université Catholique de Louvain, Belgium Universitat de València, Spain George Mason University, USA Universitat Politècnica de Catalunya, Spain Universidad Politécnica de Madrid, Spain University of Melbourne, Australia Flinders University, Australia Universidad Politécnica de Madrid, Spain University of Nottingham, UK Manchester University, UK Universidad Técnica de Ambato, Ecuador Escuela Politécnica Nacional (EPN), Ecuador Wageningen University & Research, The Netherlands University of Portsmouth, UK University of Melbourne, Australia Universidad Autónoma de Madrid, Spain Universitat Politècnica de València, Spain Queensland University of Technology, Australia Universidad Politécnica de Madrid, Spain University of Melbourne, Australia Universidad de Sevilla, Spain Brunel University London, UK Universitat Politècnica de València, Spain Universidad del País Vasco, Spain Universitat Politècnica de València, Spain University of Southampton, UK Université de Montpellier, France Université de Neuchâtel, Switzerland University of Southampton, UK University of Queensland, Australia Universidad Carlos III de Madrid, Spain Universidade Federal de Pernambuco (UFPE), Brazil University of Edinburgh, UK Universidad Complutense de Madrid, Spain Sheffield Hallam University, UK LibreSoft/Universidad Rey Juan Carlos, Spain University of York, UK Universidad Politécnica de Madrid, Spain University of Portsmouth, UK Australian National University, Australia
Glenda Vera Mora Guilherme Avelino Héctor Dulcey Pérez Henry Morocho Minchala Holger Ortega Martínez Iván Valarezo Lozano Jacqueline Mejia Luna Jaime Jarrin Valencia Janneth Chicaiza Espinosa Jefferson Ribadeneira Ramírez Jeffrey Naranjo Cedeño Jofre León Acurio Jorge Quimí Espinosa Jorge Cárdenas Monar Jorge Illescas Pena Jorge Lascano Jorge Rivadeneira Muñoz Jorge Charco Aguirre José Carrera Villacres José Quevedo Guerrero Josue Flores de Valgas Juan Barros Gavilanes Juan Jiménez Lozano Juan Romero Arguello Juan Zaldumbide Proaño Juan Balarezo Serrano Juan Lasso Encalada Juan Maestre Ávila Juan Miguel Espinoza Soto Juliana Cotto Pulecio Julio Albuja Sánchez Julio Proaño Orellana Julio Balarezo Karla Abad Sacoto Leopoldo Pauta Ayabaca Lorena Guachi Guachi Lorenzo Cevallos Torres Lucia Rivadeneira Barreiro Luis Carranco Medina Luis Pérez Iturralde Luis Torres Gallegos Luis Benavides
Universidad Técnica de Babahoyo, Ecuador Universidade Federal do Piauí (UFP), Brazil Swinburne University of Technology, Australia Moscow Automobile and Road Construction State Technical University (Madi), Russia University College London, UK University of Melbourne, Australia Universidad de Granada, Spain Universidad Politécnica de Madrid, Spain Universidad Politécnica de Madrid, Spain Escuela Superior Politécnica de Chimborazo, Ecuador Universidad de Valencia, Spain Universidad Técnica de Babahoyo, Ecuador Universitat Politècnica de Catalunya, Spain Australian National University, Australia Edinburgh Napier University, UK University of Utah, USA University of Southampton, UK Universitat Politècnica de València, Spain Université de Neuchâtel, Switzerland Universidad Politécnica de Madrid, Spain Universitat Politécnica de València, Spain INP Toulouse, France Universidad de Palermo, Argentina University of Manchester, UK University of Melbourne, Australia Monash University, Australia Universitat Politècnica de Catalunya, Spain Iowa State University, USA Universitat de València, Spain Universidad de Palermo, Argentina James Cook University, Australia Universidad de Castilla La Mancha, Spain Universidad Técnica de Ambato, Ecuador Universidad Autónoma de Barcelona, Spain Universidad Católica de Cuenca, Ecuador Università della Calabria, Italy Universidad de Guayaquil, Ecuador Nanyang Technological University, Singapore Kansas State University, USA Universidad de Sevilla, Spain Universitat Politècnica de València, Spain Universidad de Especialidades Espíritu Santo, Ecuador
Luis Urquiza-Aguiar Manuel Beltrán Prado Manuel Sucunuta España Marcia Bayas Sampedro Marco Falconi Noriega Marco Tello Guerrero Marco Molina Bustamante Marco Santórum Gaibor María Escalante Guevara María Molina Miranda María Montoya Freire María Ormaza Castro María Miranda Garcés Maria Dueñas Romero Mariela Barzallo León Mauricio Verano Merino Maykel Leiva Vázquez Miguel Botto-Tobar Miguel Arcos Argudo Mónica Baquerizo Anastacio Mónica Villavicencio Cabezas Omar S. Gómez Orlando Erazo Moreta Pablo León Paliz Pablo Ordoñez Ordoñez Pablo Palacios Jativa Pablo Saá Portilla Patricia Ludeña González Paulina Morillo Alcívar Rafael Campuzano Ayala Rafael Jiménez Ramiro Santacruz Ochoa Richard Ramírez Anormaliza
Roberto Larrea Luzuriaga Roberto Sánchez Albán Rodrigo Saraguro Bravo
Universitat Politècnica de Catalunya, Spain University of Queensland, Australia Universidad Politécnica de Madrid, Spain Vinnitsa National University, Ukraine Universidad de Sevilla, Spain Rijksuniversiteit Groningen, The Netherlands Universidad Politécnica de Madrid, Spain Escuela Politécnica Nacional, Ecuador/Université Catholique de Louvain, Belgium University of Michigan, USA Universidad Politécnica de Madrid, Spain Aalto University, Finland University of Southampton, UK University of Leeds, UK RMIT University, Australia University of Edinburgh, UK Eindhoven University of Technology, The Netherlands Universidad de Guayaquil, Ecuador Eindhoven University of Technology, The Netherlands Universidad Politécnica de Madrid, Spain Universidad Complutense de Madrid, Spain Université du Quebec À Montréal, Canada Escuela Superior Politécnica del Chimborazo (ESPOCH), Ecuador Universidad de Chile, Chile/Universidad Técnica Estatal de Quevedo, Ecuador Université de Neuchâtel, Switzerland Universidad Politécnica de Madrid, Spain Universidad de Chile, Chile University of Melbourne, Australia Politecnico di Milano, Italy Universitat Politècnica de València, Spain Grenoble Institute of Technology, France Escuela Politécnica del Litoral (ESPOL), Ecuador Universidad Nacional de La Plata, Argentina Universidad Estatal de Milagro, Ecuador/Universitat Politècnica de Catalunya, Spain Universitat Politècnica de València, Spain Université de Lausanne, Switzerland Escuela Superior Politécnica del Litoral (ESPOL), Ecuador
Rodrigo Cueva Rueda Rodrigo Tufiño Cárdenas
Samanta Cueva Carrión Sergio Montes León Tania Palacios Crespo Tony Flores Pulgar Vanessa Echeverría Barzola Vanessa Jurado Vite Verónica Yépez Reyes Victor Hugo Rea Sánchez Voltaire Bazurto Blacio Washington Velásquez Vargas Wayner Bustamante Granda Wellington Cabrera Arévalo Xavier Merino Miño Yan Pacheco Mafla Yessenia Cabrera Maldonado Yuliana Jiménez Gaona
Universitat Politècnica de Catalunya, Spain Universidad Politécnica Salesiana, Ecuador/Universidad Politécnica de Madrid, Spain Universidad Politécnica de Madrid, Spain Universidad de las Fuerzas Armadas (ESPE), Ecuador University College London, UK Université de Lyon, France Université Catholique de Louvain, Belgium Universidad Politécnica Salesiana, Ecuador South Danish University, Denmark Universidad Estatal de Milagro, Ecuador University of Victoria, Canada Universidad Politécnica de Madrid, Spain Universidad de Palermo, Argentina University of Houston, USA Instituto Tecnológico y de Estudios Superiores Monterrey, Mexico Royal Institute of Technology, Sweden Pontificia Universidad Católica de Chile, Chile Università di Bologna, Italy
Organizing Institutions
References
1. M. Botto-Tobar, J. León-Acurio, A. D. Cadena, and P. M. Díaz, Preface, vol. 1067, 2020.
2. M. Botto-Tobar, O. S. Gómez, R. R. Miranda, and Á. D. Cadena, Preface, vol. 1302, 2021.
Contents
e-Business
Monitoring Tool to Improve Strategic and Operational Planning Processes in Universities . . . 3
Walter Zambrano-Romero, Edison Solorzano-Solorzano, Maria Azua-Campos, Mauricio Quimiz-Moreira, and Miguel Rodriguez-Veliz
Digital Media Ecosystem: A Core Component Analysis According to Expert Judgment . . . 16
Gabriel Saltos-Cruz, Santiago Peñaherrera-Zambrano, José Herrera-Herrera, Fernando Naranjo-Holguín, and Wilson Araque-Jaramillo
Cost System for Small Livestock Farmers . . . 29
Jasleidy Astrid Prada Segura and Liyid Yoresy López Parra
e-Learning
Effects of Virtual Reality and Music Therapy on Academic Stress Reduction Using a Mobile Application . . . 45
Cristian A. Cabezas, Alexander R. Arcos, José L. Carrillo-Medina, and Gloria I. Arias-Almeida
ROBOFERT: Human-Robot Advanced Interface for Robotic Fertilization Process . . . 60
Christyan Cruz Ulloa, Anne Krus, Guido Torres Llerena, Antonio Barrientos, Jaime Del Cerro, and Constantino Valero
Teaching of Mathematics Using Digital Tools Through the Flipped Classroom: Systematization of Experiences in Elementary Education . . . 74
Roxana Zuñiga-Quispe, Yesbany Cacha-Nuñez, and Ivan Iraola-Real
Assessing Digital Competencies of Students in the Fifth Cycle of Primary Education: A Diagnostic Study in the Context of Covid-19 . . . 85
Yesbany Cacha-Nuñez, Roxana Zuñiga-Quispe, and Ivan Iraola-Real
External Factors and Their Impact on Satisfaction with Virtual Education in Peruvian University Students . . . 97
Ysabel Anahí Oyardo-Ruiz, Leydi Elizabeth Enciso-Suarez, and Ivan Iraola-Real
Comparative Study of Academic Performance in the 2018 PISA Test in Latin America . . . 109
Eda Vasquez-Anaya, Lucero Mogrovejo-Torres, Vanessa Aliaga-Ahuanari, and Ivan Iraola-Real
Can Primary School Children Be Digital Learners? A Peruvian Case Study on Teaching with Digital Tool . . . 119
Gisela Chapa-Pazos, Estrella Cotillo-Galindo, and Ivan Iraola-Real
Performance Evaluation of Teaching of the Professional School of Education of a Private University of Peru . . . 128
Ivan Iraola-Real, Elvis Gonzales Choquehuanca, Gustavo Villar-Mayuntupa, Fernando Alvarado-Rojas, and Hugo Del Rosario
Systematic Mapping of Literature About the Early Diagnosis of Alzheimer's Disease Through the Use of Video Games . . . 139
María Camila Castiblanco, Leidy Viviana Cortés Carvajal, César Pardo, and Laura Daniela Lasso Arciniegas
Intelligent Systems
Design of the Process for Methane-Methanol at Soft Conditions Applied to Selection the Best Descriptors for Periodic Structures Using Artificial Intelligence . . . 157
Josue Lozada, E. Reguera, and C. I. Aguirre-Velez
An Interface for Audio Control Using Gesture Recognition and IMU Data . . . 168
Victor H. Vimos, Ángel Leonardo Valdivieso Caraguay, Lorena Isabel Barona López, David Pozo Espín, and Marco E. Benalcázar
Advantages of Machine Learning in Networking-Monitoring Systems to Size Network Appliances and Identify Incongruences in Data Networks . . . 181
Anthony J. Bustamante, Niskarsha Ghimire, Preet R. Sanghavi, Arpit Pokharel, and Victor E. Irekponor
Machine Vision System for Troubleshooting Welded Printed Circuit Boards with Through Hole Technology Using Convolutional Neural Networks and Classic Computer Vision Techniques . . . 199
Alberto-Santiago Ramirez-Farfan and Miguel-Angel Quiroz-Martinez
Security
Cybersecurity Mechanisms for Information Security in Patients of Public Hospitals in Ecuador . . . 211
Mauricio Quimiz-Moreira, Walter Zambrano-Romero, Cesar Moreira-Zambrano, Maritza Mendoza-Zambrano, and Emilio Cedeño-Palma
Technology Trends
Software Regression Testing in Industrial Settings: Preliminary Findings from a Literature Review . . . 227
Raúl H. Rosero, Omar S. Gómez, Eduardo R. Villa, Raúl A. Aguilar, and César J. Pardo
Simulation of the Physicochemical Properties of Anatase TiO2 with Oxygen Vacancies and Doping of Different Elements for Photocatalysis Processes . . . 238
Heraclio Heredia-Ureta, Ana E. Torres, Edilso F. Reguera, and Carlos I. Aguirre-Vélez
A Blockchain-Based Approach for Issuing Health Insurance Contracts and Claims . . . 250
Julio C. Mendoza-Tello, Tatiana Mendoza-Tello, and Jenny Villacís-Ramón
Integration of Administrative Records for Social Protection Policies-A Systematic Literature Review of Cases in the Latin American and Caribbean Region . . . 261
Yasmina Vizuete-Salazar, Miguel Flores, Ana Belén Cabezas, Lisette Zambrano, and Andrés Vinueza
Implementation and Analysis of the Results of the Application of the Methodology for Hybrid Multi-cloud Replication Systems . . . 273
Diego P. Rodriguez Alvarado and Erwin J. Sacoto-Cabrera
Diagnostic Previous to the Design of a Network of Medium-Sized University Campuses: An Improvement to the Methodology of Santillán-Lima . . . 287
Juan Carlos Santillán-Lima, Fernando Molina-Granja, Washington Luna-Encalada, and Raúl Lozada-Yánez
Security Techniques in Communications Networks Applied to the Custody of Digital Evidence . . . 298
Juan Carlos Santillán-Lima, Perkins Haro-Parra, Washington Luna-Encalada, Raúl Lozada-Yánez, and Fernando Molina-Granja
Author Index . . . 311
e-Business
Monitoring Tool to Improve Strategic and Operational Planning Processes in Universities
Walter Zambrano-Romero(B), Edison Solorzano-Solorzano, Maria Azua-Campos, Mauricio Quimiz-Moreira, and Miguel Rodriguez-Veliz
Universidad Técnica de Manabí, Portoviejo, Manabí, Ecuador
{walter.zambrano,edison.solorzano,maria.azua,mauricio.quimiz,miguel.rodriguez}@utm.edu.ec
Abstract. Close collaboration among the institutional planning department, the strategic leaders, the faculties, and the departments is an important pillar for improving the strategic and operational planning of the Universidad Técnica de Manabí. This research presents an explanatory case study of how a web application automated the processes that support the creation of the institutional strategic plan, the plans of the majors, and the operational plans of the academic and administrative units, together with the monitoring and evaluation that determine whether the objectives and goals are met. The methodology applied is the V model, and the tools used are the PHP programming language, the Laravel framework, and the PostgreSQL database manager. The application was deployed in 10 faculties with 33 majors and 9 departments. It expedited the registration of evidence in the comprehensive institutional planning system (SIPI), reducing the time needed to validate the evidence entered from 90 days with spreadsheets in 2017 to 45 days with the SIPI system in 2018. It also enables constant monitoring of the objectives and goals planned by each academic and administrative unit and generates multiple reports that support strategic decision-making in university management, with a positive impact on the monitoring of the performance indicators of the higher education institution.
Keywords: Higher education institutions · Information systems · Strategic planning · Strategic software · Software as a service
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 3–15, 2022. https://doi.org/10.1007/978-3-030-96147-3_1

1 Introduction

Web systems have become very important today. The massive use of the Internet has driven the migration of desktop applications to the web, which makes it necessary to determine the best options for this transfer [18], given the advantages the web offers, such as higher throughput and a high availability rate.
Strategic and operational planning within higher education institutions (HEIs) plays a fundamental role in meeting objectives. According to [5], strategic planning is a holistic approach that arises both from the weaknesses and strengths of the institution itself and from the opportunities and threats produced by external factors, and it requires a planning process and the support of those who make up the organization. For this reason [6], the key point in strategic planning for the success of the institution is that everyone must work to meet the objectives set; the success of the organization depends on its functioning as a single body. Strategic planning has three main elements: (1) developing the plan, (2) implementing the plan, and (3) evaluating the plan and providing feedback. In addition, according to [9], strategic planning is the most common process used in the management of institutions and can be complemented with mechanisms for evaluating and monitoring performance from the perspective of the organization. In this regard, [15] indicates that managing processes in any organization is a complex task that must be approached with a review at both the micro and macro levels in order to cover the needs of all the areas that make up the organization. Once management commits to this continuous and permanent task, the quality of its processes must be measured, and actions must then be taken to optimize resources and increase the impact of its activities. HEIs, as organizations, are no exception, because the quality of higher education must cover all their substantive functions, for the benefit of the university community and society in general. Strategic planning processes are urgent requirements of the Ecuadorian educational system; universities deserve special attention in these processes due to their proactive nature and their anticipation of needs and changes. Without an adequate strategic intention, which must be part of the organizational culture of senior managers and of the institution in general, this transformation will not be possible [8].
At the national level, universities are in a process of modeling their institutional planning; institutions have begun to generate standards that allow the adequate use of information to promote assertive decision-making. In this way [4], an integration between strategic planning and ICTs is achieved through information systems. As [6] indicates, the use of an adequate management tool in planning provides the opportunity to increase academic, scientific, and cultural quality by facilitating the process of competing with leadership in the increasingly demanding market for university education.
Throughout its trajectory, the Universidad Técnica de Manabí has generated four strategic plans, with their respective annual operating plans (POA), with the participation of all those involved. The processes were carried out manually through spreadsheets and file transfers, which caused delays in the delivery of information; the strategic objectives could not be fully met, and there was no adequate process flow for monitoring each of the planning stages, namely preparation of the plan, monitoring of its implementation, evaluation, and feedback. As a higher education institution, it had no tool that allowed control and monitoring of the fulfillment of the PEDI and POA.
With the development and implementation of an information system for the Directorate of Institutional Planning of the Universidad Técnica de Manabí (UTM), processes were managed in a more agile and efficient way. Thanks to this tool, suitable process flows were established that allow adequate monitoring and feedback, so that the results obtained support institutional decision-making and enable progress in each of the areas of academia, management, research, and engagement with society, bringing the institution closer to the local reality, considering that strategic planning is an innovation in the higher education system of our country.
The present research work aims to identify the contribution made to institutional planning processes by the implementation of a web tool as a computer strategy for optimizing these processes. The proposal arises from the problem faced by higher education institutions, and in this particular case the Universidad Técnica de Manabí: ineffective information management due to the manual handling of planning processes such as the preparation, monitoring, and evaluation of strategic and operational planning.
2 Material and Method

This research is bibliographic and experimental in nature. It was conducted at the Universidad Técnica de Manabí (UTM), located in the city of Portoviejo, Ecuador, with the objective of improving institutional planning processes in academic and administrative units through the use of the Integral Institutional Planning System (SIPI) application. The tools used were PHP and PostgreSQL together with the Laravel framework, highlighting the use of the MVC (model-view-controller) pattern, the Google Chrome and Mozilla Firefox web browsers, and add-ons such as Firebug for error detection and for verifying response times. Development followed the V-model methodology, and the project is presented as an explanatory case study in which each of the phases executed to obtain the results is described [20]. The phases were carried out as detailed below.

2.1 Specifications Phase

First, the requirements analysis was performed, which [18] defines as the phase responsible for modeling the knowledge of the existing problem and the need to solve it, and which [10] describes as the analysis of the processes carried out, in this case within the Planning Directorate (DPI) of the UTM as well as in the administrative and academic units, directorates, and institutes of the university in the field of strategic and operational planning. Special emphasis was placed on the requirements of institutional planning, gathered through interviews with the Director of the DPI and the Director of ICTs, who outlined the limitations in the execution of the strategic plan for institutional development (PEDI) and the annual operating plan (POA).
The UTM needed a computer strategy to automate its institutional planning processes. A preliminary analysis showed that the processes were carried out manually, using spreadsheets and email, which caused delays in delivery times, duplicated information, and inconsistencies in the data, significantly affecting the fulfillment of the strategic objectives initially established.
W. Zambrano-Romero et al.
Given this, a computer solution arose to manage institutional planning processes in an automated way and achieve a higher degree of compliance with the strategic objectives. The roles necessary for the correct functioning of the computer system were defined, together with the functions to be performed by the heads of the units and the access levels that allow greater control over the functionality and benefits established within the SIPI (Integrated System of Institutional Planning) web tool.

2.2 High-Level and Detailed Design Phase

In this phase, the structure of the computer system was designed. It is based on the MVC (Model-View-Controller) pattern under the Laravel framework, which separates the application logic from the data displayed, including the extraction of data from the database server and its delivery to any user of the platform [7], as shown in Fig. 1.
Fig. 1. Web service architecture
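As an illustration of the separation this architecture enforces, the following sketch shows a model, a view, and a controller in miniature. It is written in Python purely for illustration (the actual SIPI tool is built in PHP with Laravel and PostgreSQL), and every name in it (PlanModel, PlanController, render_view) is hypothetical, not taken from the SIPI code base.

```python
# Illustrative sketch of the model-view-controller separation described above.
# All names here are hypothetical; the actual SIPI tool is implemented in PHP
# with the Laravel framework and a PostgreSQL database.

class PlanModel:
    """Model: encapsulates data access (in SIPI, queries against PostgreSQL)."""

    def __init__(self, storage: dict):
        self._storage = storage  # stands in for the database connection

    def find_plan(self, unit: str) -> dict:
        return self._storage.get(unit, {})


def render_view(plan: dict) -> str:
    """View: formats data for display; contains no data-access logic."""
    return ", ".join(f"{key}: {value}" for key, value in sorted(plan.items()))


class PlanController:
    """Controller: receives the request, queries the model, returns the view."""

    def __init__(self, model: PlanModel):
        self._model = model

    def show(self, unit: str) -> str:
        return render_view(self._model.find_plan(unit))


# Usage: a request for one unit's plan flows controller -> model -> view.
db = {"Civil Engineering": {"goals": 12, "completed": 9}}
controller = PlanController(PlanModel(db))
print(controller.show("Civil Engineering"))  # completed: 9, goals: 12
```

The benefit the text describes follows from this layout: the view can change (a new report format) without touching the model, and the data source can change without touching the view.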
2.3 Implementation Phase

The implementation was carried out in two phases. The first consisted of installing and configuring the tools needed to operate the computer system in the ICT Directorate, after which the first version of the site was published. A Secure Sockets Layer (SSL) certificate was used so that data exchanged between the user and the web application is encrypted and cannot be intercepted by external agents.
The second phase was the training of the personnel involved in the institutional planning process, namely the heads of each unit of the Universidad Técnica de Manabí. The application can be used from any web browser, regardless of the operating system. Its main functionalities include the registration and evaluation of each phase of the processes concerning the POA or PEDI, at the level of each administrative unit of the institution, and the visualization of the status of each task through real-time monitoring during development. The tool was developed with user-friendly interfaces, as shown in Fig. 2.
Fig. 2. SIPI POA tool of the academic unit
2.4 Unit, Integration, and Operational Test Phase

Testing with Firebug verified the proper functioning of the computer system; its server response times were within the allowed limits, as shown in Fig. 3. The web application was developed using integrated development environments (IDEs), which allow it to run locally without a dedicated physical server for the web service. As the event log shows, a request was made correctly to the web server hosting the tool, and the data obtained are displayed in Fig. 4.
Fig. 3. Verification of the response time of the Web Service
Fig. 4. Connection verification from the application and data collection
3 Results and Discussion

Web systems play a fundamental role in higher education institutions, providing tools that promote the automation of processes so that employees and teachers feel supported in their daily work, comfortably and in line with the needs of each institution [12]. This computer strategy has provided mechanisms for the monitoring and control of institutional planning operations. A relevant aspect is that this type of strategy within an organization is not static; rather, one of the most challenging tasks is maintaining a tool that generates information for decision-making. The strategic and operational planning tool organizes the creation of each plan, as shown in Fig. 5: block one defines the institutional strategic objectives; block two defines the accumulated goals, which constitute the long-term results to be achieved; and block three closes the chain, establishing the relationship between the institutional strategic objectives and the institutional accumulated goals, from which the annual operating plan of each department or career is derived.
Fig. 5. Creation of institutional objectives and goals by academic units
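The three blocks described above can be sketched as a small data model. This is an illustrative reconstruction only; the class and field names below are assumptions, not the actual SIPI database schema.

```python
# Hypothetical data model for the three blocks described above; the class and
# field names are illustrative, not taken from the SIPI database schema.
from dataclasses import dataclass, field


@dataclass
class AccumulatedGoal:
    description: str
    target: int  # block 2: a long-term result to be achieved


@dataclass
class StrategicObjective:
    name: str  # block 1: an institutional strategic objective
    goals: list = field(default_factory=list)  # its accumulated goals


@dataclass
class OperationalPlanEntry:
    unit: str  # department or career
    objective: StrategicObjective  # block 3: link back to block 1
    goal: AccumulatedGoal
    year: int


# Block 1: define an institutional strategic objective.
objective = StrategicObjective("Strengthen research output")
# Block 2: attach an accumulated goal to it.
goal = AccumulatedGoal("Indexed publications", target=40)
objective.goals.append(goal)
# Block 3: derive an annual operating plan (POA) entry for one unit.
poa = OperationalPlanEntry("Civil Engineering", objective, goal, year=2020)
```

The point of the chain is traceability: from any POA entry one can walk back to the accumulated goal and the strategic objective that justify it.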
The evaluation of strategic planning in higher education institutions is a complex process, as [13] indicates. For this reason, countless methods with divergent opinions have been used to identify the most suitable indicators for a given function, categorized into components of teaching, research, social responsibility, well-being, internationalization, management, and resources [17]. On the other hand,
[3] indicates that there are evaluation tools the education sector has used to maintain sustainable monitoring and benchmarking through software as a service. Once implemented, the SIPI tool was applied in 10 faculties with 33 majors and 9 departments, and improvements were identified in the response times of each institutional planning process, specifically in two processes: the entry of evidence and the evaluation of the execution of the annual operating plan. The response times of the first year the tool was in operation were compared with the times recorded in 2017, when these processes were kept in spreadsheets, as shown in Tables 1 and 2.

Table 1. Record of the time spent in the processes of entering and evaluating the information of the annual operating plan with spreadsheets, 2017. Participants in planning 2017: faculties, departments, and institutes.

First semester - entry of information for evaluation in spreadsheets, with means of verification on CD: 45 days
First semester - validation and evaluation of evidence by the Planning Directorate from the spreadsheets: 90 days
Second semester - entry of information for evaluation in spreadsheets, with means of verification on CD: 38 days
Second semester - validation and evaluation of evidence by the Planning Directorate from the spreadsheets: 90 days
Comparing the records in Table 1 and Table 2 shows that the execution time of the processes of entering means of verification and self-evaluation fell from 45 days (2017) to 15 days (2018) in the first semester, equivalent to a 67% time saving, and from 38 days (2017) to 15 days (2018) in the second semester, equivalent to a 61% saving. In turn, the validation and evaluation of the evidence by the Institutional Planning Department, up to the issuing of the evaluation report, fell from the 90 days the manual spreadsheet process took in 2017 to 45 days in 2018 with the SIPI computing strategy, a 50% reduction. Since 2018 was the first year of implementation, further reductions in these two key processes can be expected in future years as the web tool is optimized and users gain expertise in its operation. Fig. 6 shows the evaluation of the POA of an academic unit by the institutional planning department.
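The percentage savings quoted above follow directly from the day counts in Tables 1 and 2; as a quick check, they can be recomputed as the relative reduction in days, rounded to the nearest percent.

```python
# The reductions reported in the text (67%, 61%, and 50%) follow from the day
# counts in Tables 1 and 2.

def reduction_pct(days_before: int, days_after: int) -> int:
    """Time saved as a percentage of the original duration, rounded."""
    return round(100 * (days_before - days_after) / days_before)

# Entry of evidence / self-evaluation: spreadsheets (2017) vs. SIPI (2018).
print(reduction_pct(45, 15))  # 67  (first semester)
print(reduction_pct(38, 15))  # 61  (second semester)
# Validation and evaluation by the Planning Directorate.
print(reduction_pct(90, 45))  # 50
```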
Table 2. Record of the time spent in the processes of entering and evaluating the information of the annual operating plan with the SIPI, 2018. Participants in planning 2018: faculties, departments, and institutes.

First semester - entry of self-assessment and means of verification in the SIPI: 15 days
First semester - evaluation and validation of evidence by the Planning Directorate in the SIPI: 45 days
Second semester - entry of self-assessment and means of verification in the SIPI: 15 days
Second semester - evaluation and validation of evidence by the Planning Directorate in the SIPI: 45 days
The report issued by the system offers the benefit of knowing the progress of planning execution in each administrative or academic agency, as shown in Fig. 6. The report can be broken down by semester (first or second), or a general report can be generated showing the behaviour of the goals for each objective across the two semesters and their annual fulfillment. These graphs are used to prepare the execution report of each agency and the institutional report, so the reduction in the time needed to produce this type of statistics is significant for the entire institution.
Fig. 6. Evaluation of institutional objectives and goals by academic units
According to [14], of the 53 federal universities investigated for their efficiency and quality as high-level educational institutions (HLEI), 6 did not follow up the PDI at all, 7 monitored its progress using specific software, and 16 used a spreadsheet. Notably, 34 institutions used other forms of monitoring, including a management report (14%), a special committee (13%), regular meetings (13%), custom software development (8%), occasional meetings (5%), and the technical advice of a contracted company (2%). In periods tentatively established by the Planning Directorate of the Universidad Técnica de Manabí, the evaluation of each operational plan is carried out; at the end of this process, graphical results are obtained that show the planning status of each university dependency.
Fig. 7. Result of execution of the POA of the Civil Engineering major in the first semester of the year 2020.
Fig. 7 shows this type of report for the Civil Engineering major in the first semester of 2020. The DPI, as the unit responsible for monitoring the institutional planning processes, relies on the various graphic reports, Excel matrices, and PDF report documents that can be generated in the SIPI, including the report for the Council of Citizen Participation, the general institutional compliance report, a summary of POA goals, a summary of the pieces of evidence of all the dependencies, objectives, and participating units, a summary of objectives and goals, and the financing of goals and objectives, among others. These reports have optimized the follow-up and feedback provided to those responsible for executing planning in the administrative and academic units. The institutional strategic managers, made up of the highest authorities, can also view these reports, which allow them to make timely decisions based on the strengths and weaknesses evidenced in them; the reports are easily obtained through the web tool, whereas generating them manually would take weeks or months (Fig. 8).
Fig. 8. Result of execution of the POA of the Civil Engineering major in the first semester of the year 2020.
University decision-making is a complicated process involving several dimensions, such as structure, logic, processes, information, interaction, and communication. A decision can be characterized as "a human and daily process" in which the subjective dimension of the decision-maker is decisive [19]. For quality in the processes of HEIs, and to achieve optimal measurement and control of them, it is important to have a measurement tool that allows monitoring of the fulfillment of what was planned, and thus to determine the impact on the university community and society. In the exercise of institutional and operational planning, the collaboration of all those involved is essential for the fulfillment of all phases [15]. In addition, it is important to emphasize that the SIPI tool allows adaptability to changes in the indicators for strategic development and their target values for academic and administrative units; according to [16], this plays a crucial role in planning, which is defined as an organizational and technical activity oriented to satisfying the external demands of the university through the effectiveness of the indicators of each unit.
The SIPI tool encourages the participation of those involved in the strategic and operational planning process, which, according to [11], promotes new strategic ideas and greater effectiveness in the development of the POA and PEDI. In addition, it is important to emphasize that participation was permanent during each of the phases of development, evaluation, and feedback for the fulfillment of the strategic objectives at both the unit and university level [1, 2].
4 Conclusions

This work highlights the importance of using a computer tool to implement mechanisms for measuring the quality of HEIs. In the case of the Universidad Técnica de Manabí, the SIPI computer system has promoted improvements in the administration and control of the strategic and operational planning of the UTM. The comprehensive institutional planning software enabled a close relationship between the faculties and the institutional planning management, helping to reduce the time needed to validate the evidence entered into the system from 90 days to 45 days and to carry out constant monitoring of the objectives and goals planned by each academic and administrative unit, thereby improving the strategic and operational management of the UTM. The introduction of the comprehensive institutional planning system contributed to better monitoring of the achievement of the university's strategic development indicators and, as a result, to applying strategies in the first semester of the year to the indicators that lacked evidence, in coordination with the planning team and the faculty management team. Higher education institutions need a tool to measure and control their processes in order to evaluate the impact of their activities. The SIPI is aligned with most of the modern strategic planning systems used in universities; as future work, the software will incorporate a mobile app and sentiment analysis for the construction of strategic planning for institutional development.
References

1. Amrollahi, A., Rowlands, B.: Collaborative open strategic planning: a method and case study. Inf. Technol. People (2017)
2. Angiola, N., Bianchi, P., Damato, L.: How to improve performance of public universities? A strategic management approach. Publ. Admin. Q. 43(3) (2019)
3. de Araújo Góes, H.C., Magrini, A.: Higher education institution sustainability assessment tools: considerations on their use in Brazil. Int. J. Sustainabil. High. Educ. 17(3), 322–341 (2016)
4. Baldeon Egas, P.F., Albuja Mariño, P.A., Rivero Padrón, Y.: Las tecnologías de la información y la comunicación en la gestión estratégica universitaria: experiencias en la Universidad Tecnológica Israel. Conrado 15(68), 83–88 (2019)
5. Başarı, G., Aktepebaşı, A., Tuncel, E., Yağcı, E., Akdağ, S.: Statistical reasoning of education managers opinions on institutional strategic planning. In: Aliev, R., Kacprzyk, J., Pedrycz, W., Jamshidi, M., Sadikoglu, F. (eds.) International Conference on Theory and Applications of Fuzzy Systems and Soft Computing, pp. 399–403. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-04164-9_53
6. Bustos, J., Zapata, M., Valdivia, M.T.R.: Más allá de la gestión estratégica en educación superior: aplicación del cuadro de mando integral. Oikos: Revista de la Escuela de Administración y Economía, p. 5 (2008)
7. He, R.Y.: Design and implementation of web based on Laravel framework. In: 2014 International Conference on Computer Science and Electronic Technology (ICCSET 2014), pp. 301–304. Atlantis Press (2015)
8. Hernández, N.B., Guerrero, R.O., Quiñonez, W.A.: Universidad y planificación estratégica en el Ecuador. Didasc@lia: Didáctica y Educación 7(2), 171–180 (2016). ISSN 2224-2643
9. Hernández, O., Santos, M., Gallardo, S.: SGE: information system for strategic planning management applied to an electric utility. In: Proceedings of the World Congress on Engineering and Computer Science, vol. 1 (2015)
10. López, A.H.A., Mosquera, J.M.L., Castro, J.A.Q., Vieda, E.A.L., Herrera, T.O.R.: Software para autoevaluación de programas académicos en instituciones de educación superior. In: Memorias de Congresos UTP, pp. 40–46 (2019)
11. López-Gorozabel, O., Cedeño-Palma, E., Pinargote-Ortega, J., Zambrano-Romero, W., Pazmiño-Campuzano, M.: Bootstrap as a tool for web development and graphic optimization on mobile devices. In: XV Multidisciplinary International Congress on Science and Technology, pp. 290–302. Springer, Cham (2020)
12. Martins, V.A.: Proposta de um mapa estratégico para uma universidade pública. Revista Evidenciação Contábil & Finanças 3(2), 88–103 (2015)
13. Megnounif, A., Kherbouche, A., Chermitti, N.: Contribution to the quality assessment in higher education: the case study of the Faculty of Technology, Tlemcen, Algeria. Procedia Soc. Behav. Sci. 102, 276–287 (2013)
14. Mendonça, L.C., dos Anjos, F.H., de Souza Bermejo, P.H., Sant'Ana, T.D., Borges, G.H.A.: Strategic planning in the public sector: how can Brazilian public universities transform their management, computerise processes and improve monitoring? In: International Conference on Electronic Government and the Information Systems Perspective, pp. 294–306. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64248-2_21
15. Obredor-Baldovino, T., et al.: University quality measurement model based on balanced scorecard. In: International Conference on Applied Informatics, pp. 131–144. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32475-9_10
16. Ovchinkin, O., Pykhtin, A., Bessonova, E., Kharchenko, E., Chernykh, N.: System of internal monitoring of performance of indicators of efficiency of the university. Int. J. Adv. Trends Comput. Sci. Eng. 8(6), 3312–3317 (2019)
17. Sánchez Quintero, J.: A proposal of quality indicators to self-assessment and accreditation of undergraduate programs in management. Estudios Gerenciales 30(133), 419–429 (2014)
18. Silva, J.M., Silva, J.R.: A new hierarchical approach to requirement analysis of problems in automated planning. Eng. Appl. Artif. Intell. 81, 373–386 (2019)
19. Vidal, J.P., Andre, C.: Toma de decisiones en instituciones de educación superior en la Amazonía: hacia una síntesis de racionalidades. Estado, Gobierno y Gestión Pública (27), 149–171 (2016)
20. Yin, R.: Case Study Research and Applications: Design and Methods. Sage Publications, New York (2018)
Digital Media Ecosystem: A Core Component Analysis According to Expert Judgment

Gabriel Saltos-Cruz1(B), Santiago Peñaherrera-Zambrano1, José Herrera-Herrera1, Fernando Naranjo-Holguín2, and Wilson Araque-Jaramillo3
1 Universidad Técnica de Ambato, Ambato, Tungurahua, Ecuador {jg.saltos,spenaherrera,josebherrera}@uta.edu.ec 2 Niagara University, Lewiston, New York, USA [email protected] 3 Universidad Andina Simón Bolívar, Quito, Ecuador [email protected]
Abstract. Digital marketing plays an important role in implementing the marketing strategies of organizations. This article aims to identify the elements and resources capable of configuring a digital media ecosystem to increase the effectiveness of business strategies. To fulfill the proposed objective, a systematic literature review related to digital media was carried out, covering several elements: (1) Website, (2) Social, (3) Paid search advertising, (4) Mobile, (5) Adaptive SEO, (6) Inbound marketing, (7) Social media, (8) Email/CRM. The analyzed data were extracted from Scopus, an index of peer-reviewed abstracts and citations. This research identified the intervening elements most recommended by experts in each digital channel. The main contribution of this study is the analysis and clustering of the elements of a digital media ecosystem based on the most relevant literature, supported by an empirical study to validate our results. Keywords: Social media marketing · Social media analytics · Digital marketing · Inbound marketing
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 16–28, 2022. https://doi.org/10.1007/978-3-030-96147-3_2

1 Introduction

Since the onset of the COVID-19 pandemic, numerous efforts have emerged to mitigate its impact on public health and the global economy. Academics, managers, and researchers have all contributed, but effective contributions in the business arena have been limited. The use of information technology may be the only resource that can counteract the economic effects of the pandemic on SMEs: different types of digital technology enable business partners to work without physical contact. SMEs require digital technologies to increase their performance, especially during the COVID-19 pandemic, whose effects have caused crises in capital markets and supply chains, and disruptions in the flow of goods around the world [1, 2]. Today, discussions of contemporary topics, such as social development, policies to reduce poverty, and plans to increase sustainable production, occur thanks to the use of new
information and communication technologies. In this context, SMEs play an important role, since they are an engine of the economy and development of any country; they contribute to increasing the GDP of the manufacturing sector. One of the most relevant characteristics of SMEs is their ability to adapt to changes in the environment, since they can easily expand and downsize according to market behaviour. In addition, they constitute a rich source of employment, due to their low levels of industrialization and automation [2]. SMEs are a source of inclusive growth and economic sustainability, as has been evident since the global economic crisis of 2008–2009. In that period, SMEs took on great importance as a pillar of technological innovation, economic growth, poverty alleviation, and social cohesion. When an SME enters a digital transformation process, it generates a competitive advantage: this evolutionary process facilitates entry into large markets, which brings sustainability to this type of business structure. The entry of companies into the digital world deploys mechanisms of social interaction, collective action, collaboration among stakeholders, and knowledge sharing. However, the greatest benefit SMEs receive when entering the digital world is the contribution of new management practices to the organizational system, ranging from knowledge sharing and innovation to the exploration of new business models. Given the unstable environment caused by the COVID-19 pandemic, it is essential to study new tools and instruments to improve the performance of SMEs [3]. In Ecuador, digital marketing is evolving in step with new information and communication technologies. Figures on SME development in 2018 show that 66.7% of Ecuadorian SMEs invested in digital technology.
This investment in hardware was distributed as 98% in laptops and 24% in smartphones. The increase in demand is due to the functionalities of, and access to, business on the Internet. These resources facilitate the exchange of information through mobile and web technology applications. In addition, the option to carry out virtual tasks (inquiries, banking transactions, and online purchases including deliveries) offers consumers a new way of interacting commercially with the market [4]. Based on the above, it is important to determine a system of instruments and tools that contribute to the development of digital marketing in Ecuador. From this premise arises the research question: What are the interacting elements necessary to exploit a digital transformation strategy? To answer it, a digital media ecosystem was developed to explain the interacting elements. The extraction of key concepts was performed through an exhaustive review of the literature indexed in Scopus, validated by experts from industry and academia.
2 Review of Related Literature

Digital marketing is a structured set of applications and marketing strategies executed in digital media. The concept has evolved from product-based marketing 1.0 to consumer-centric marketing 2.0, advancing to marketing 3.0, focused on values and social causes, and to the most recent marketing 4.0, which combines traditional marketing, digital media, and artificial intelligence for maximum customer loyalty [5]. The first stage of digital
marketing lacked interaction between the company and users. Users could navigate between different contents and, in some cases, leave comments in a resource called a "guestbook" [6]. Similarly, the Internet has enabled the rapid evolution of digital advertising with new tools and challenges, including search engine marketing, brand positioning, email marketing, and proper user tracking through web analytics. In the 1980s, the creation of digital databases changed the dynamics of buyer-seller relationships, allowing companies to access, store, and track information. In the 1990s, after the emergence of digital marketing, companies had access for the first time to databases for interactive control with users. Simultaneously, the launch of the first search engine, "Archie", changed the way companies used technology [7]. In that decade, the so-called Web 1.0 became widespread and integrated, a time when modems were slow and noisy, completely different from today's modems. Searching for information was a slow and complex process. The first search engines to emerge in Web 1.0 were Google and Bing. Today, Google has become the largest information search platform in existence. Early search engines followed a standard similar to directory search engines, and data management consisted of a set of documents networked by hyperlinks. The major achievement was democratic access to information, since participation among users was worldwide [8]. Web 2.0 is a way of using the Internet with a more dynamic structure, modern formats, and more functionality. It allows users to have their own voice on the Internet, manage their content, comment on the content of others, and send and receive information with other users of the same status or organization. Users of Web 2.0 can not only access information but also create content with the different tools provided by the Internet.
In terms of design, Web 2.0 is based on Web standards that make a site usable, accessible, and efficient for everyone. With Web 2.0, companies stopped focusing on sales, giving importance to audiences and contact with users through the Internet. This new resource allowed companies to make the flow of information dependent on the behaviour of visitors; in other words, access to the network and to company content was provided in a convenient and centralized way. The challenges of the Internet and digitization imply continuous experimentation with new business models for cost reduction, service customization, and value-chain strengthening. In the last decade, the set of resources configured to face these technological challenges came to be called the digital media ecosystem [5]. A digital media ecosystem is a multi-channel, multi-platform network built to meet different data management needs for informational, commercial, and advertising purposes. The main elements include: (i) Website, (ii) Social, (iii) Paid search advertising, (iv) Mobile, (v) Adaptive SEO, (vi) Inbound marketing, (vii) Social media, (viii) Email/CRM, and others [9]. To analyze the state of research on digital media ecosystems, the most relevant information from the Scopus database was studied. We searched for the most important key categories among the most cited articles and authors of the last ten years. Table 1 contains the collected information in categorical order.
Digital Media Ecosystem: A Core Component Analysis
19
Table 1. Review of the literature on elements of the digital media ecosystem.

Theme | Key concepts related to the research objectives
Website | Blog, Keyword analysis, Visitor tracking, Landing page, Lead conversion, Responsive design, Retina display, Google Analytics, Webmaster tools, Web technology
Social | Google+, Facebook, Twitter, LinkedIn, YouTube, Review sites, Social insight
Paid search advertising | Campaign management, Geofencing, Remarketing, Call tracking, Pay-per-click, Display advertising, Retargeting, Facebook Ads, Promotional tweets, LinkedIn ads, Video ads, Search engines, Conversion tracking
Mobile | QR codes, Local search, Check-ins, Mobile app, SMS
Adaptive SEO | Content syndication, Forums, Citations, Industry directories, Search engines, Competitive analysis
Inbound marketing | Press releases, Case studies, Articles, eBooks, How-to guides, White papers, Content marketing, Automation
Social media | Hashtag registration, Social bookmarking, Social selling, Networking, KPIs
Email/CRM | Communication, E-newsletter, Lead management, Segmentation, List building, Lead nurturing, Emails, Lead scoring, Campaign tracking
2.1 The Website as a Digital Medium

Today, intuitive and easy-to-use interfaces have become an indispensable tool for companies. The tools provided by the Internet help users stay informed, orient themselves, communicate and interact with organizations, offering opportunities and benefits for all users. Organizations that do not take advantage of the Internet lose potential customers and therefore sales opportunities. Companies that do not stay at the forefront of the technological advances presented by the Internet and digital tools will not be able to build a solid relationship with their audience. The Web has become the main source of information, leading many users to search the Internet for a company’s products or services before making a purchase, which gives them timely, up-to-date information and easy access to purchases [10]. In a Website context, the main target is not a mass audience. The most important factor is the number of end users attracted by the usability of Web resources. The main intention of the use and maintenance of a Website is to reach the desired audience (the buyer persona). The use of Web resources ensures that transactions are easy, fast, secure, and accessible 24 hours a day. At this point, audiences are replaced by communities that allow users to interact with companies’ web pages as part of an online communication development. Interactivity can play a key role on the Internet, since the promotion and mutual strengthening of relationships increases the possibilities of converting users into captive customers. If organizations ignore the use of a Website,
20
G. Saltos-Cruz et al.
they will not be able to adapt to change and the global trend of digital marketing [11]. Search engines, which are specific Websites, constitute a fundamental tool for users who require some type of information. Web search service providers have long faced conflicts over the sustainability of this activity, as users seek free access to information. Before the launch of the first search algorithm developed by Google, most Web sites had supported their revenues through advertising on their platforms. Advertising companies attached to search engine companies have focused their attention on the speed, accuracy and efficiency of the main search engines (Yahoo!, Google, MSN and AOL), in addition to the implementation of paid search procedures. Paid search mechanisms are an increasingly important, popular, and unique form of contextual interaction on the Web. Within the use of search engines, the display of ads based on search topics has become possible. Through paid or sponsored search, the content provider pays the Web search engine to display sponsored links and algorithmic links in response to user queries [12].

2.2 Social Network

In a business context, the World Wide Web hosts two types of platforms, the Website and Social networks. The Social network has become a means of interacting with users, sharing content and attracting new followers. This resource has great potential, as its predominance over the Website has strengthened in recent years. The popularity of the Social network has awakened the interest of large advertising and IT corporations in exploiting its potential. Each Social network has its particular potential, which forces companies to manage them independently. The complexity of the Social network requires the configuration of complementary strategies to promote the positioning of the advertised resource [13].
Since the introduction of digital media to the Internet, generational evolution has conditioned the creation of better interfaces to support new technologies. A clear example is the migration of connectivity from old desktop computers to laptops, and from palmtops and tablets to smartphones. This latest hard-technology leap has forced the emergence of the seventh generation of digital media. The key driver for the expansion of these devices is the creation of new interfaces that enhance the user experience. The first smartphones entered the market between 2007 and 2008, and their diffusion was so rapid that between 2011 and 2012 the so-called “second great technological bubble” occurred, which was directly related to communication. In 2020, as a technological frontier was surpassed, the number of companies dedicated to the design of mobile applications decreased. This decrease is explained by the fact that mobile applications must support a multiplatform criterion, being available at least for the Android operating system and the iOS platform. This reality forces contracting companies to ensure that the cost-benefit is clear and measurable. Within this context, mobile advertising has developed hand in hand with programming, since it can be incorporated into free applications that provide attractive usability for users [14]. It is important to highlight the potential of mobile marketing as an online shopping platform. The success of these applications lies in the possibility of shortening the purchase cycle through fast, direct and secure access. The procedure consists of the
following steps: (1) the recruitment of potential buyers, (2) direct and secure access to the stock of online stores, (3) the management of payments through accounts or cards certified by financial institutions, and (4) the home delivery service. Until 2019, mobile marketing was a complement to digital marketing; today, after the world suffered the COVID-19 pandemic, it is the center of digital strategies. The scope of mobile marketing actions is getting wider and wider, including all new web technologies such as those embedded in wearable devices and tablets [15]. Unlike web directories maintained by human editors, search engines operate through algorithms. There are also combinations of algorithms and manual curation for efficient retrieval of the information required by the user [16]. The results provided by search engines are not always adequate. To provide higher-quality content to an audience within an organic positioning context, the use of content syndication is recommended.

2.3 Analytics

Analytics marketing is a concept that integrates data analysis on the Internet into digital marketing. This data analysis is performed on digital attributes and user activities within a website or web application. Today, there is an immeasurable amount of data for the study of online consumer behaviour. The challenge for programmers, designers and data architects revolves around capturing, sorting and converting online users’ data. The results of this analysis directly benefit companies that have entered the digital world. The main objective of web analytics is to support business decision-making, as well as to determine the most suitable marketing strategy to succeed in a web environment. Intuition based on experience can be of great help for conventional marketing, but when designing a digital marketing campaign, data and statistics are required to support decision-making based on results [17].
A major difference between conventional marketing and digital marketing is that the former is based on more tangible information, whereas in the latter the performance of strategies cannot be observed as directly [18]. Web analytics allow companies to make better use of all the data that the Internet can provide. Nowadays, these data relate not only to the company but also to the interaction between users and brands. In the era of digital marketing, it is important to determine the best way to reach the audience. Analytics marketing can achieve higher business performance and greater visibility for a lower investment. Digital marketing shares objectives with traditional marketing, but changes the methods, strategies and especially the procedures for data collection. This function is very useful for companies and brands looking for new ways to connect with consumers and showcase their offerings. The use of metrics guarantees a higher rate of return on advertising investments. For this, Web analytics rely on multiple platforms, such as Google Analytics, which provides general information about the website; Google Attribution, focused on conversion attribution; YouTube Analytics; Facebook Insights, which contains metrics for leads and engagement; and others [19]. There are also paid solutions designed to integrate information from different platforms and provide a complete view of the entire customer journey, regardless of the platform chosen. A distinction can be made between two main types of analytics: (i) descriptive analytics (focuses on describing what is happening at a specific time), and (ii)
predictive and prescriptive analytics (based on descriptive data, attempts to foresee the future and guide the campaigns, products, and services to be launched for better results). Web analytics must focus on the most relevant indicators, considering the large amount of data available on user activity on the Internet. It is important to clarify that much of this data is not aligned with the strategic objective of the business. There are several important factors to consider when analyzing the data collected: (i) the objectives must be very clear when looking for information in data analytics (according to these objectives, the data are structured and provide guidelines for the management of the business), (ii) choosing the right performance indicators or KPIs, (iii) the profitability of the resources used is the ultimate goal to be achieved, and (iv) the basic pillar of a successful marketing strategy is to correctly segment the audience [20].
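To make the KPI discussion concrete, the sketch below computes a few indicators commonly reported by web-analytics platforms. All field names and figures are hypothetical, not taken from this study.

```python
# Illustrative KPI calculations of the kind discussed above.
# All campaign figures and field names are hypothetical.

campaign = {
    "impressions": 120_000,  # times an ad was shown
    "clicks": 3_600,         # times it was clicked
    "conversions": 180,      # purchases attributed to the campaign
    "ad_spend": 900.0,       # cost of the campaign
    "revenue": 2_700.0,      # revenue attributed to the campaign
}

ctr = campaign["clicks"] / campaign["impressions"]            # click-through rate
conversion_rate = campaign["conversions"] / campaign["clicks"]
cpa = campaign["ad_spend"] / campaign["conversions"]          # cost per acquisition
roas = campaign["revenue"] / campaign["ad_spend"]             # return on ad spend

print(f"CTR={ctr:.2%}, conv={conversion_rate:.2%}, CPA={cpa:.2f}, ROAS={roas:.1f}")
```

With these figures the script prints CTR=3.00%, conv=5.00%, CPA=5.00, ROAS=3.0, which illustrates point (iii): profitability is judged by whether the campaign returns more than it costs.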
3 Methodology

This research was quantitative, non-experimental, and cross-sectional. The unit of analysis is composed of specialists who work in a dependent relationship with companies and specialists who advise companies. The theoretical methods used were: (i) the historical-logical method, which allowed identifying the construction of the research object over time, (ii) the deductive method, used to identify the fundamental categories of the media ecosystem, and (iii) the analytical-synthetic method, which contributed to the study of the elements of the media ecosystem. The empirical method used was data collection; the technique selected for this purpose was the telephone survey.

3.1 Participants

The digital marketing specialists who made up the unit of analysis belonged to two groups: expert professionals who had a direct working relationship with the companies (115), and professionals who performed external consulting activities for the companies (158). In both cases, to ensure representativeness, the companies belonged to different sectors, namely: education, textile manufacturing, footwear production, intermediary commerce of basic necessities, and financial services. The sampling frame for the selection of the study group was composed of data provided by the universities in the center of the country (Ecuador), the chamber of commerce of the cities in Zone 3 and the IRS (Internal Revenue Service). The specialists (key informants) were chosen considering two filters to ensure scientific rigour.

3.2 Data Collection Instrument

Three instruments were used to collect data. The first instrument was a structured questionnaire with the categories to be explored: (1) Website, (2) Social, (3) Paid search advertising, (4) Mobile, (5) Adaptive SEO, (6) Inbound marketing, (7) Social media, (8) Email/CRM.
The set of questions used a five-point Likert scale (1 = None, 2 = Scarce, 3 = Acceptable, 4 = High, 5 = Very high) that evaluated each participant’s knowledge of the proposed topics. The second instrument was a form to determine the “Knowledge or Information Coefficient”, serving as a filter to
select the best experts. Finally, the third instrument comprised eight categories and 62 items valued on a five-level Likert scale of relevance (5 = Totally relevant; 0 = Totally not relevant).

3.3 Data Gathering Procedure

In the systematic review, a search strategy aimed at reviewing relevant articles in the Scopus index was applied. Search strings included inclusive syntax for digital marketing topics (Website OR Social OR “Paid search advertising” OR Mobile OR “Adaptive SEO” OR “Inbound marketing” OR “Social media” OR “Email marketing” AND “Digital marketing”). Search restrictions were based on the following exclusion criteria: (1) non-peer-reviewed preprints, (2) articles older than 10 years, (3) duplicates in each database, and (4) articles not indexed in Scopus. For the selection of suitable specialists for the study, two processes were carried out. First, an electronic survey on digital marketing was applied, assessing the minimum knowledge of 776 professionals. At the same time, the predisposition of the specialists to participate in the research was determined (by accepting informed consent for data use). Second, the 325 professionals who passed the first filter were subjected to a competency test submitted on an electronic form. The competency assessment fields were the sources of argumentation or rationale for assessing experts before a Delphi study (academic assessment, professional assessment, and work experience) [21]. This procedure made it possible to select the individuals with the highest coefficient of competence as experts (273 professionals). In addition, it allowed creating a database with sociodemographic information and contact information for the telephone survey that motivated this research.
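The exclusion criteria of the systematic review amount to a simple filter over bibliographic records. The toy sketch below illustrates the idea; the record fields are hypothetical and not the Scopus export format, and criterion (4) is handled implicitly by searching only the Scopus index.

```python
# Toy illustration of the review's exclusion criteria over hypothetical records.
def select(records, current_year=2021, window=10):
    seen_dois, kept = set(), []
    for rec in records:
        if rec["doi"] in seen_dois:               # criterion (3): duplicates
            continue
        seen_dois.add(rec["doi"])
        if not rec["peer_reviewed"]:              # criterion (1): non-peer-reviewed preprints
            continue
        if current_year - rec["year"] > window:   # criterion (2): older than 10 years
            continue
        kept.append(rec)
    return kept

records = [
    {"title": "A", "year": 2020, "peer_reviewed": True,  "doi": "10.1/a"},
    {"title": "A", "year": 2020, "peer_reviewed": True,  "doi": "10.1/a"},  # duplicate
    {"title": "B", "year": 2005, "peer_reviewed": True,  "doi": "10.1/b"},  # too old
    {"title": "C", "year": 2019, "peer_reviewed": False, "doi": "10.1/c"},  # preprint
]
print([r["title"] for r in select(records)])  # ['A']
```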
The application of the survey followed this procedure: (1) the competence of the experts was validated, (2) the sampling frame was organized and the groups of professionals to be surveyed were allocated to each interviewer, (3) the interviewers contacted the professionals and requested permission to record the telephone survey, (4) at the time of the survey execution, the objective of the survey and how to answer each question were explained, and feedback was given only if the respondent required it, (5) the surveyor recorded the responses directly into a Microsoft Excel spreadsheet, and (6) when the process was completed, the surveyors sent the spreadsheets for review.

3.4 Analysis of Data

The data analysis process included the following sequential steps: (1) the coder evaluated the quality of the data recorded on the spreadsheet and loaded them into the SPSS Statistics 25.0 statistical software, and (2) once the database of responses was loaded, internal consistency was calculated with Cronbach’s alpha test to determine the degree of reliability of the instrument. The multivariate factor-reduction analysis contained the following calculations [22, 23]: (1) sampling adequacy using the Kaiser-Meyer-Olkin test and Bartlett’s test of sphericity, (2) orthogonal factor rotation using the Varimax method, (3) dimension reduction using principal components analysis, and (4) discrimination of factor loadings less than 0.4 in the rotated matrix.
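As a sketch of the first of these calculations, the snippet below implements the textbook formulas for Cronbach’s alpha and Bartlett’s test of sphericity with NumPy/SciPy on simulated Likert responses; the study’s actual 273 × 62 survey matrix is not reproduced here, so the data are synthetic.

```python
# Reliability and sphericity checks as described above, on simulated data.
import numpy as np
from scipy import stats

def cronbach_alpha(X):
    """Cronbach's alpha for a (respondents x items) matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def bartlett_sphericity(X):
    """Chi-square test of the correlation matrix against the identity."""
    X = np.asarray(X, dtype=float)
    n, p_items = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p_items + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p_items * (p_items - 1) // 2
    return chi2, df, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(0)
# Simulated five-point Likert responses driven by one common factor,
# so the items are internally consistent (273 respondents, 8 items).
factor = rng.normal(size=(273, 1))
X = np.clip(np.rint(3 + factor + 0.8 * rng.normal(size=(273, 8))), 1, 5)

alpha = cronbach_alpha(X)
chi2, df, p = bartlett_sphericity(X)
print(f"alpha={alpha:.3f}, chi2={chi2:.1f}, df={df}, p={p:.4f}")
```

On such correlated data, alpha comes out well above the usual 0.7 threshold and Bartlett’s p-value is effectively zero, which is the pattern the paper reports for its instrument (alpha = 0.891, significant sphericity).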
4 Results

4.1 Systematic Review of the Digital Media Ecosystem

We identified 924 articles that satisfied the inclusion criteria. In the first phase, these papers were reviewed by focusing on their titles and abstracts to identify whether they included the search terms. The database of articles was subjected to a search for duplicates, discarding 49.13% of them. In addition, we analyzed the existence of scientific articles that were non-peer-reviewed; 9% met this exclusion criterion. The next step was the review of the complete texts of the articles to eliminate documents not aligned with the objectives of our research (27%). Publication dates were then examined to rule out articles older than 10 years (74%). Finally, the remaining 82 articles were organized for study (Fig. 1).
[Fig. 1 flowchart: selection of database (Scopus); identification of the search string (Website OR Social OR “Paid search advertising” OR Mobile OR “Adaptive SEO” OR “Inbound marketing” OR “Social media” OR “Email marketing” AND “Digital marketing”); consolidation of databases (924 articles); review of titles and summaries; elimination of duplicates; elimination of non-peer-reviewed articles; review of full texts; final database; intermediate counts of 454, 425 and 313 articles, with 82 in the final database.]

Fig. 1. Summary of literature selection
4.2 Factor Analysis

The Cronbach’s alpha indicator presented a value of 0.891 across the measures of all the elements of the digital media ecosystem (62 items). In addition, when analyzing the reliability test with item elimination, no item negatively affected the degree of consistency, since all of them exceeded the minimum value required. The Kaiser-Meyer-Olkin indicator of sampling adequacy showed a value of 0.879. This value allowed us to continue with the factor extraction process, since it surpassed the minimum value required. Bartlett’s test of sphericity, measured through
the chi-square, yielded a value of 1,644.521, with 20 degrees of freedom and a significance level of less than 0.05. The analysis of total variance explained showed an optimal level of explanation of the variance of the mathematical model for the eight groups (Table 2).

Table 2. Total variance explained

Component | Initial eigenvalues (Total / % variance / % cumulative) | Extraction sums of squared loadings (Total / % variance / % cumulative) | Rotation sums of squared loadings (Total / % variance / % cumulative)
1 | 9.951 / 16.050 / 16.050 | 9.951 / 16.050 / 16.050 | 8.729 / 14.079 / 14.079
2 | 9.139 / 14.740 / 30.790 | 9.139 / 14.740 / 30.790 | 6.881 / 11.098 / 25.177
3 | 6.696 / 10.800 / 41.590 | 6.696 / 10.800 / 41.590 | 6.698 / 10.803 / 35.979
4 | 5.019 / 8.095 / 49.685 | 5.019 / 8.095 / 49.685 | 5.913 / 9.536 / 45.516
5 | 3.952 / 6.374 / 56.058 | 3.952 / 6.374 / 56.058 | 4.553 / 7.343 / 52.859
6 | 3.840 / 6.193 / 62.251 | 3.840 / 6.193 / 62.251 | 4.409 / 7.111 / 59.970
7 | 3.643 / 5.876 / 68.127 | 3.643 / 5.876 / 68.127 | 4.159 / 6.708 / 66.677
8 | 2.337 / 3.769 / 71.896 | 2.337 / 3.769 / 71.896 | 3.236 / 5.219 / 71.896
9 | 0.975 / 1.572 / 73.468 | |
Note: Extraction method: principal components.
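A quick arithmetic check of Table 2: with 62 items, each “% variance” entry is the eigenvalue divided by 62, and the cumulative column over the nine listed components reproduces the 73.47% total variance cited in the Discussion (the small discrepancy with the printed 73.468 comes from rounding of the individual entries).

```python
# Check of Table 2's variance columns: % variance = eigenvalue / number of items.
eigenvalues = [9.951, 9.139, 6.696, 5.019, 3.952, 3.840, 3.643, 2.337, 0.975]
n_items = 62

pct_variance = [100 * e / n_items for e in eigenvalues]
cumulative = []
total = 0.0
for pct in pct_variance:
    total += pct
    cumulative.append(total)

print(round(pct_variance[0], 3))  # 16.05, matching component 1's "% variance"
print(round(cumulative[-1], 2))   # ~73.47, matching the total variance explained
```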
5 Discussion

A major difference between traditional marketing and digital marketing is that the former is based on more tangible information, whereas in the latter the performance of strategies cannot be observed as directly. Specifically, while traditional marketing can be monitored through sales or channel coverage, digital marketing relies on leads or engagement. The empirical analysis allowed validating the digital media ecosystem; the values that support this criterion are: (i) a Cronbach’s alpha of 0.891, (ii) a Kaiser-Meyer-Olkin measure of 0.879, (iii) Bartlett’s sphericity test with chi-square = 1,644.521, 20 degrees of freedom and a significance level lower than 0.05, (iv) a total explained variance of 73.47% of the behaviour of the digital media ecosystem, and (v) factor loadings higher than 0.7, except for Social/Review sites, which presented a value of 0.687, close to the minimum required level. Unlike other studies [9], this study presents a media ecosystem with instruments, tools, and metrics for the management of a digital marketing system in small and medium enterprises, based on 82 academic papers and supported by the criteria of 273 Ecuadorian specialists in this field.
6 Conclusions

A business Website allows organizations to be more competitive and visible, influencing the image and credibility generated among users. In addition, it allows effective bidirectional communication, building direct relationships and eliminating intermediaries in transactions with consumers. Paid search advertising is an increasingly important, popular and unique form of contextual interaction with information on the Web. The use of search engines has enabled the display of ads based on search topics. Through paid or sponsored search, the content provider pays the Web search engine to display sponsored links and algorithmic links in response to user queries. The use of Social media resources for the formation of Social network groups has helped organizations develop a competitive advantage based on their image. This image is positioned in the consumer’s mind based on the particularities of the Social network and the digital resources that allow intertwining emotions and feelings. This configuration develops an intimate relationship that strengthens the connection between the brand and its followers. Before the pandemic, mobile marketing was a complement to digital marketing; today, after the world was impacted by COVID-19, it is the center of digital strategies. The scope of mobile marketing actions is becoming wider and wider, including all new web technologies such as those embedded in handheld devices and tablets. The main benefit of content syndication (Adaptive SEO) is the publication of information linked to the author’s profile and image to increase traffic and reputation. In many cases, if the results are ranked in the top five options of the search engine and are syndicated to content and image, there is a high probability that the user will choose this author. The digital media ecosystem has been developed based on the criteria of the leading authors according to the Scopus scientific database.
By using a systematic literature review, it was possible to discover the existence of eight fundamental categories that should be integrated into a digital marketing system. The general conclusions of the theoretical abstraction suggest the existence of eight dimensions, namely: (1) Website, (2) Social, (3) Paid search advertising, (4) Mobile, (5) Adaptive SEO, (6) Inbound marketing, (7) Social media, (8) Email/CRM. The empirical validation corroborates the relationship between the 62 observable variables and the eight latent variables described above.
References

1. Nasution, M., Rafiki, A., Lubis, A., Rossanty, Y.: Entrepreneurial orientation, knowledge management, dynamic capabilities towards e-commerce adoption of SMEs in Indonesia. J. Sci. Technol. Policy Manage. 12(2), 256–282 (2021). https://doi.org/10.1108/JSTPM-03-2020-0060
2. Papadopoulos, T., Baltas, K., Balta, M.: The use of digital technologies by small and medium enterprises during COVID-19: implications for theory and practice. Int. J. Inf. Manage. 55(1), 1–4 (2020). https://doi.org/10.1016/J.IJINFOMGT.2020.102192
3. Qalati, S., Li, W., Ahmed, N., Mirani, M., Khan, A.: Examining the factors affecting SME performance: the mediating role of social media adoption. Sustainability 13(1), 1–24 (2021). https://doi.org/10.3390/su13010075
4. Arteaga, J., Coronel, V., Acosta, M.: Marketing’s influence in the PYME’s development in Ecuador. Espacios 39(47), 40–54 (2018). https://www.revistaespacios.com/a18v39n47/18394701.html
5. Kim, J., Kang, S., Lee, K.: Evolution of digital marketing communication: bibliometric analysis and network visualization from key articles. J. Bus. Res. 130(1), 552–563 (2021). https://doi.org/10.1016/j.jbusres.2019.09.043
6. Bennett, W., Segerberg, A.: The logic of connective action. Inf. Commun. Soc. 15(5), 739–768 (2012). https://doi.org/10.1080/1369118X.2012.670661
7. Zhang, G., Zhang, G., Yang, Q., Cheng, S., Zhou, T.: Evolution of the Internet and its cores. New J. Phys. 10(1), 1–12 (2008). https://doi.org/10.1088/1367-2630/10/12/123027
8. Kollmann, T.: Grundlagen des Web 1.0, Web 2.0, Web 3.0 und Web 4.0 [Foundations of Web 1.0, Web 2.0, Web 3.0 and Web 4.0]. In: Kollmann, T. (ed.) Handbuch Digitale Wirtschaft, pp. 133–155. Springer, Wiesbaden (2020). https://doi.org/10.1007/978-3-658-17291-6_8
9. Colapinto, C.: Moving to a multichannel and multiplatform company in the emerging and digital media ecosystem: the case of Mediaset Group. Int. J. Media Manage. 12(1), 59–75 (2010). https://doi.org/10.1080/14241277.2010.510459
10. Kumar, G., Kumar, A.: A work on digital marketing processes at digitally inspired India. Int. J. Recent Technol. Eng. 8(2), 212–214 (2019). https://doi.org/10.35940/ijrte.B1351.0882S819
11. Li, X., Wang, Y., Yu, Y.: Present and future hotel website marketing activities: change propensity analysis. Int. J. Hosp. Manag. 47, 131–139 (2015). https://doi.org/10.1016/J.IJHM.2015.02.007
12. Ahuja, V., Medury, Y.: CRM in a Web 2.0 world: using corporate blogs for campaign management. J. Direct Data Digital Market. Pract. 13(1), 11–24 (2011). https://doi.org/10.1057/DDDMP.2011.15
13. Khan, M.: Social media engagement: what motivates user participation and consumption on YouTube? Comput. Hum. Behav. 66, 237–247 (2017). https://doi.org/10.1016/J.CHB.2016.09.024
14. Elkind, E., Faliszewski, P.: Approximation algorithms for campaign management. In: Saberi, A. (ed.) WINE 2010. LNCS, vol. 6484, pp. 473–482. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17572-5_40
15. Isoraite, M.: Remarketing features. Int. J. Trend Sci. Res. Dev. 3(6), 6 (2019). https://www.ijtsrd.com/management/marketing-management/28031/remarketing-features/m-isoraite
16. Lambrecht, A., Tucker, C.: When does retargeting work? Information specificity in online advertising. J. Mark. Res. 50(5), 561–576 (2013). https://doi.org/10.1509/JMR.11.0503
17. Chan, H.: Intelligent value-based customer segmentation method for campaign management: a case study of automobile retailer. Expert Syst. Appl. 1, 2754–2762 (2008). https://doi.org/10.1016/j.eswa.2007.05.0
18. Dutt, R., Deb, A., Ferrara, E.: “Senator, We Sell Ads”: analysis of the 2016 Russian Facebook Ads campaign. In: Akoglu, L., Ferrara, E., Deivamani, M., Baeza-Yates, R., Yogesh, P. (eds.) ICIIT 2018. CCIS, vol. 941, pp. 151–168. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-3582-2_12
19. Choi, H., Varian, H.: Predicting the present with Google Trends. Econ. Record 88(1), 2–9 (2012). https://doi.org/10.1111/j.1475-4932.2012.00809.x
20. Philip, C., Zhang, C.: Data-intensive applications, challenges, techniques and technologies: a survey on Big Data. Inf. Sci. 275, 314–347 (2014). https://doi.org/10.1016/J.INS.2014.01.015
21. Cabrera, N., Cantelar, N., Cantelar, B., Chao, M., Valcárcel, N.: Modelo educativo para la gestión académica en el Instituto de Medicina Tropical [An educational model for academic management at the Institute of Tropical Medicine]. Revista Habanera de Ciencias Médicas 19(3), 1–13 (2020)
22. Díaz, C., González, G., Jara, L., Muñoz, J.: Validation of a classroom management questionnaire for pre- and in-service teachers of English. Revista Colombiana de Educación 75, 263–286 (2018)
23. Barrios, M., et al.: La evaluación psicométrica [Psychometric evaluation], 1st edn. Editorial UOC, Bogotá (2017)
Cost System for Small Livestock Farmers

Jasleidy Astrid Prada Segura and Liyid Yoresy López Parra

Corporación Universitaria Minuto de Dios, Bogotá D.C., Colombia
{jpradasegur,llopezparr3}@uniminuto.edu.co
Abstract. The objective of this research is to design a cost system for a milk-producing farm located in Vereda Soatama, in the municipality of Villa Pinzón, Colombia, using direct observation, interviews, a survey, and content analysis as techniques, applying documentary and field research with a triangulated approach. The information was collected through field visits and a survey. The results of the research determined that it is necessary to provide an activity-based costing (ABC) system through Excel templates, so that the producer of Finca El Palmar can keep track of the expenses and costs incurred and identify monthly profits or losses, facilitating decision-making to improve the farm.

Keywords: Agricultural accounting · Small cattle raisers · Costing system

JEL: Q12
1 Introduction

Colombia presents stable economic growth, and the standard of living of its inhabitants depends mainly on their ability to access land. In the rural sector, artisanal livestock farming stands out: an activity carried out in any of the country’s thermal floors that is the basis of livelihood for many families, given the instability of the prices of other products produced in the country. It is therefore essential to study this activity in order to make farmers more competitive in livestock production. The main problem of small cattle ranchers (the El Palmar farm among them) is that they do not have an information system adapted to their needs, since their work is done by hand; because they do not keep reliable track of their costs and expenses, they cannot know whether their farm management is profitable. This research will answer the question “How to determine the costs, expenses and profits of the farm El Palmar?” Tools were designed for data collection. Moreno (2009) mentions that the ABC system is an instrument designed to solve some of the problems of the modern company, because it takes as its object of analysis the different activities performed by the company, deepening its study of the cost drivers of each of the activities, as a tool for cost reduction (p. 3). According to Rojas Conde (2017, p. 101), the application of a cost system would provide agricultural companies with a way of presenting real financial information, by providing certainty about the production costs of each item produced, and in this way that

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 29–42, 2022. https://doi.org/10.1007/978-3-030-96147-3_3
information allows you to know which activities will provide a greater benefit in the period. Additionally, Ruiz and Pinilla (2018) mention that family production systems are usually informal and producers are interested in knowing the net result of the exercise, not the different variables that generate it. Likewise, agricultural production is usually carried out through traditional cultural mechanisms, so producers do not want to change their way or methodology of work (p. 4). After analyzing the information, the cost system will be provided to the farm for its proper application, to facilitate decision-making according to the results obtained.

1.1 Definition of the Problem

The El Palmar farm in the Soatama village of the municipality of Villa Pinzón manages an artisanal production system, which diminishes its competitive and financial capacity since it does not have adequate control of the expenses and costs of daily production and does not know whether it obtains losses or profits. It is necessary to identify a strategy that allows the farm to measure and control the costs and expenses of its productive activity. Carrying out dairy production requires a great deal of work, which is what allows the producer’s objectives to be met. This activity can be called a long-term investment process, since consolidating a farm with good animals and technifying the grazing areas is not achieved in the short term, especially because it requires simplifying tasks and increasing the farm’s competitiveness and profitability.

1.2 Definition of the Problem

How to design an activity-based costing system for small livestock farmers that allows them to keep track of their costs and expenses monthly and reflect their actual financial situation?
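The ABC logic described by Moreno (2009), tracing costs to activities through cost drivers, can be sketched as follows; the activities, drivers and figures are illustrative assumptions, not the farm’s actual data.

```python
# Minimal activity-based costing sketch for a dairy operation.
# Activities, cost drivers and amounts are illustrative only.

# activity: (monthly cost in currency units, total driver volume)
activities = {
    "milking":    (600.0, 300),  # driver: milking hours
    "feeding":    (900.0, 450),  # driver: kg of feed handled
    "veterinary": (250.0, 25),   # driver: animal check-ups
}

# Cost-driver rate = activity cost / driver volume
rates = {name: cost / volume for name, (cost, volume) in activities.items()}

# Driver units consumed by one month of milk production
consumption = {"milking": 300, "feeding": 450, "veterinary": 25}
monthly_cost = sum(rates[name] * units for name, units in consumption.items())

litres_produced = 3_500
print(f"monthly cost: {monthly_cost:.2f}, cost per litre: {monthly_cost / litres_produced:.3f}")
```

Because milk production consumes all driver volume in this toy example, the product absorbs the full 1750.0 of activity cost (0.50 per litre); with several products, the driver rates would allocate each activity’s cost in proportion to driver use, which is exactly what the Excel templates proposed in this research are meant to track.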
2 Methodology
The research follows a mixed approach, since it gathers documentary and fieldwork information. As Caro-González et al. (2014) note, mixed or hybrid methods represent a set of systematic, empirical, and critical research processes and involve the collection and analysis of both quantitative and qualitative data, as well as their integration and joint discussion, in order to make inferences from all the information collected (called meta-inferences) and achieve a greater understanding of the phenomenon under study (p. 10). The research is developed through triangulation, defined as follows: triangulation allows a vision of the problem from various angles and positions, to the extent that the information on a given topic and problem is confronted with the information extracted from various sources, with that produced by the application of various techniques, and with that obtained by various researchers. Triangulation is not difficult, because it is what is often done in daily life when one wants to verify information. If we take an example from the daily life of an educational institution, we could say that the information evidenced in a written text is confronted with the information provided by teachers, students, parents and, finally, with the rector's version. In turn, it is
Cost System for Small Livestock Farmers
31
possible to verify the information by means of an observation, a survey, or an interview. For greater credibility, the information obtained is compared with that which would be obtained by two or more researchers from their points of view (Rojas 2011, pp. 31–32). The research has a descriptive-explanatory scope, as defined by Cauas (2015): this type of study is fundamentally aimed at the description of social or educational phenomena in each temporal and spatial circumstance. The different levels of research differ in the type of question they can ask; in a descriptive study a series of questions is selected and each of them is measured independently, in such a way as to describe what is being investigated. This type of study may offer the possibility of carrying out some level of prediction, albeit elementary (p. 6). The research method is inductive, supported by techniques such as a survey and an interview. According to Newman (2006), the inductive method is known as experimental, and its steps are: 1) observation, 2) hypothesis formulation, 3) verification, 4) thesis, 5) law and 6) theory. Falsification theory works with the inductive method, so inductive conclusions can only be absolute when the group to which they refer is small (p. 9).
3 Background
The livestock sector in Colombia is represented mostly by small farmers, who subsist on the production obtained and the few profits they derive from their farms. Livestock activity requires a great deal of knowledge regarding the costs and expenses incurred in the production of meat, milk, and their derivatives. It is important to apply the three cost elements (Raw Material, Labor and Indirect Manufacturing Costs) correctly, since applying any element incorrectly in production can result in erroneous information or losses (Peña 2016, p. 2). According to Luna Jara (2020), IAS 41, on agricultural accounting, is relevant for companies with agricultural and livestock activities since it makes it possible to know costs and optimize resources. According to León (2016), it is necessary to implement a costing system to improve the profitability of small farmers and ranchers (p. 8). The characteristics of small dual-purpose livestock herds in the municipality of Charalá (Santander) show that the cost control system applied contributes to generating higher productivity (Archila 2015, pp. 14–15). Veiga Carballido (2013) determines that, for the accounting valuation of biological assets, the market value should be considered as a reference; fair value should be applied, and otherwise the criterion will be historical cost. After carrying out a descriptive analysis of the agricultural sector and of the methods and regulations on the value of the fixed assets of the companies it encompasses, based on fair value, a methodological gap has been detected, since there are expert intermediaries in the value of biological assets whose data are not explored (p. 7). According to Ramírez Muñoz (2020), the IFRS international accounting standards, in force in more than one hundred countries, constitute a great challenge for Colombia, which must continually promote the implementation of these standards regardless of the economic sector.
Thus, in Colombia, the agricultural sector in the segment of biological assets is one of the strongest sectors, since the productive chain of livestock and agriculture is quite wide; reason for which the analysis concentrates on the biological assets that constitute the raison d’être of production (p. 6).
4 Theoretical Framework
4.1 Cost Systems for Livestock Farmers
The creation of a cost system allows small farmers to know their economic status, as mentioned by Campoverde Largo (2018): its purpose is to provide a tool for small livestock producers and to propose a flexible system that allows them to identify the costs involved in the livestock production process. In addition, it is essential for micro-producers to have a cost system that allows them to safely manage all the fundamental costs involved in the livestock production process and to determine the real value of their investments for correct decision making (p. 6). Costs are part of the development of activities that can generate profit or loss, as mentioned by Luna Jara (2020): the determination of the production costs incurred in the production of biological assets in accordance with the International Accounting Standard, specifically IAS 41, is directly related to the processes of livestock activities (fattening, development, and production) prior to commercializing the cattle in national markets; its proper application allows the company to know if its activities are generating profit or loss (p. 7).
4.2 ABC Cost Systems Used in Dairy Production
According to Moreno (2009), costs in the production of raw milk and live cattle lack rigor in their determination; that is why the ABC system is proposed as a tool that offers a rigorous methodology for the treatment of the indirect costs incurred in production (p. 10). For Rojas Conde (2017), in the livestock company, as in any other type of company, a series of production costs arise from using or consuming some factors in order to generate products that satisfy the needs of a market. Throughout this production process, two types of costs are generated: costs recorded in the Profit and Loss Account of the operation, versus other costs involved in the process that are not included in the cost structure (p. 21).
According to Ruiz and Pinilla (2018), in the ABC system an activity is any discrete task that an organization undertakes to make or deliver a product or service. Products or services consume activities, and activities consume resources. Activity-based costing (ABC) is a two-step product costing method that assigns costs first to activities and then to products, based on each product's use of those activities. The cost elements of the processes help decision making: as Atehortúa (2008) mentions, an adequate method to determine production costs is to structure the cost centers of the production processes identified (pastures, breeding and raising) and the profit centers (milk production); this tool becomes the major source of internal information of companies, enabling adequate administrative decisions (p. 2).
4.3 Knowledge of Production Costs
According to Vilchez Castro (2018), livestock activity requires a lot of knowledge regarding the costs and expenses incurred in the production of meat, milk, and their derivatives. The importance of applying the three cost elements, Raw Material, Labor and Indirect
Manufacturing Costs lies in the fact that applying any element incorrectly in production can result in erroneous information or losses for the company. In milk, cheese and meat production, the application of these elements helps the company classify the costs originated by each line of production. Cattle do not generate costs only when there is production: they also generate costs during their growth and development, in addition to the changes of biological transformation, so it is necessary to keep a control and record from their purchase or birth until their final destination, either as milk producers or for the sale of meat. With this control the company will be able to recognize when the livestock is a biological asset and when it is an agricultural product (milk, or the products resulting from its transformation after harvesting or collection, are considered as such), and the fair value of the profit or loss caused by the biological assets or agricultural products can be measured and reflected in the Income Statement, adjusted according to the established standards (pp. 2–3).
4.4 Analysis of How Production Costs Are Used as Management Tools that Influence the Profitability of Small-Scale Farmers
The analysis of the form of production is very important. As mentioned by Alvarado Herrera (2015), cost analysis constitutes an increasingly important tool for dairy farm managers, since they must select, among many alternatives, the best use of their resources to achieve success; that is, to obtain lower costs through their efficient use.
Carrying out a cost study makes it possible to establish points for the efficient use of resources, whether feeding, labor, sanitation or facilities; in this way the work helps organizations to set prices, so that the liter of milk goes to market at a value directly related to the farmer's economic benefit, thereby encouraging milk production and, at the same time, establishing a daily balance between supply and demand for the product (Guillen 1971, p. 18).
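To make the two-step ABC allocation described in Sect. 4.2 concrete (resources are assigned to activities, then activity costs are assigned to products by cost-driver usage), the following sketch can be considered. The activities echo the macro-processes used later in this study, but every figure, driver and product name is an invented placeholder, not data from the El Palmar farm:

```python
# Hypothetical two-stage activity-based costing (ABC) sketch.
# Stage 1 result: monthly resource cost already pooled into each activity (COP).
activity_costs = {
    "paddock_adequacy": 120_000,
    "pre_milking":       80_000,
    "milking":          200_000,
    "post_milking":      50_000,
}

# Cost-driver volume consumed by each product per activity (illustrative).
driver_usage = {
    "paddock_adequacy": {"milk": 3, "calves": 1},   # hectares grazed
    "pre_milking":      {"milk": 90, "calves": 0},  # labor hours
    "milking":          {"milk": 90, "calves": 0},  # labor hours
    "post_milking":     {"milk": 30, "calves": 0},  # labor hours
}

def abc_allocate(activity_costs, driver_usage):
    """Stage 2: spread each activity's cost over products by driver share."""
    product_cost = {}
    for activity, cost in activity_costs.items():
        usage = driver_usage[activity]
        total_driver = sum(usage.values())
        for product, qty in usage.items():
            product_cost[product] = product_cost.get(product, 0) + cost * qty / total_driver
    return product_cost

costs = abc_allocate(activity_costs, driver_usage)
print(costs)
liters_per_month = 3 * 10 * 30   # 3 cows x 10 L/day x 30 days, hypothetical
print(round(costs["milk"] / liters_per_month, 2))   # hypothetical unit cost per liter
```

The design point is that indirect costs are never assigned to milk directly; they always pass through an activity and its driver, which is what gives ABC its traceability.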
5 Results
For the development of the ABC cost system for the El Palmar cattle ranch, data were collected through an interview with the owner and a field visit, in order to identify the activities involved in completing the dairy production process, since no control system is managed on the ranch. The ABC costing system was structured by macro-processes, which are described below (Fig. 1).
Adequacy of Paddocks: This macro-process focuses on the activities that must be carried out to keep the land of the El Palmar farm in a suitable condition for the production process; the farm has 4 hectares divided into 3 paddocks. In the data collection format of Table 1, the producer must specify the date on which the farm's maintenance is carried out, since this activity is not done daily. Figure 2 shows the depreciation process undergone by biological assets, in this case livestock.
Fig. 1. Milk production system with macroprocesses. Source: own elaboration.
Table 1. Data collection format - Paddock adequacy
Source: Own elaboration
Pre-milking: This activity is essential to achieve the productive milking process, which considers the transfer to the milking site, calf suckling, docking at the milking site and teat washing. The number of grams of concentrate or salts per head served during the daily milking session should be determined. The amount fed to each cow varies according to milk productivity. Milking: The third macro-process is aimed at being the productive unit of the El Palmar farm. It identifies how many liters of milk are collected per cow in the morning. Post-milking: The fourth macro-process has the objective of collecting the time spent on the activities necessary to carry out the process each day.
Fig. 2. Depreciation of biological assets Source: Own elaboration.
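Figure 2 refers to the depreciation of biological assets. As a hedged illustration, straight-line depreciation of a dairy cow might be charged to monthly production cost as below; the purchase value, residual value and productive life are hypothetical figures, and the paper does not state which depreciation method the farm uses:

```python
def straight_line_depreciation(purchase_value, residual_value, productive_years):
    """Annual and monthly straight-line depreciation of a biological asset."""
    annual = (purchase_value - residual_value) / productive_years
    return annual, annual / 12

# Hypothetical cow: bought for 2,500,000 COP, residual (cull) value 700,000 COP,
# 6 productive years in the milking herd.
annual, monthly = straight_line_depreciation(2_500_000, 700_000, 6)
print(annual)    # depreciation charged per year
print(monthly)   # portion charged each month to milk production cost
```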
Reproduction: The fifth macro-process aims to identify the type of cargo handled on the farm. Only one data collection tool was used, corresponding to Table 2. Table 2. Data collection format - Data associated with reproduction.
Source: Own elaboration
Analysis: The owner of the El Palmar farm currently manages a dairy production system with 6 head of cattle, only 3 of them in production. He performs several activities daily during the process, but states that he does not keep any type of control, since his work is done by hand. Hence the need to create an ABC cost system to keep track of the costs incurred in the farm's production, and thus determine whether the monthly result is a profit or a loss.
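The monthly cost-and-profit control that the producer lacks can be sketched as a simple record that aggregates cost entries by macro-process and compares the total with milk income. All entries, quantities and prices below are illustrative placeholders, not the farm's figures:

```python
from collections import defaultdict

# Each entry: (macro-process, amount in COP). Amounts are illustrative.
cost_entries = [
    ("paddock_adequacy", 40_000), ("pre_milking", 15_000),
    ("milking", 60_000), ("post_milking", 10_000),
    ("reproduction", 25_000), ("milking", 20_000),
]

def monthly_summary(entries, liters_sold, price_per_liter):
    """Aggregate costs by macro-process and return (cost breakdown, result)."""
    by_process = defaultdict(int)
    for process, amount in entries:
        by_process[process] += amount
    total_cost = sum(by_process.values())
    income = liters_sold * price_per_liter
    return dict(by_process), income - total_cost

by_process, result = monthly_summary(cost_entries, liters_sold=900, price_per_liter=1_200)
print(by_process)
print("profit" if result > 0 else "loss", abs(result))
```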
6 Survey Analysis The survey was applied to 20 farmers in Vereda Soatama, where the following analysis is presented for each question: Approximately 30% of the farmers have been working as cattlemen for less than 20 years and 20% have been working as cattlemen for almost 50 years.
Graphic 1. Time on the job: How long have you been a cattle rancher? (10–20 years: 30%; 21–30 years: 25%; remaining shares 25% and 20% in longer ranges)
Ninety percent of the cattle ranchers are not formalized and 10% are.
Graphic 2. Formalized: Is it formalized? (YES 10%, NO 90%)
90% of the farmers in Soatama do not belong to any association that helps them to become formalized; only 10% belong to an association of dairy farmers through the SENA.
Graphic 3. Association membership: Do you belong to any guild or association that allows you to formalize? (YES 10%, NO 90%)
Approximately 45% of the farmers in the Soatama vereda have up to 10 head of cattle.
Graphic 4. Heads of cattle: How many head of cattle do you own? (1–10 head: 45%; remaining shares 30%, 15% and 10% across larger ranges)
In Soatama, 90% of the cattle ranchers’ production is dual-purpose.
Graphic 5. Type of production: What type of production do you use on your livestock farm? (Milk 20%, Meat 0%, Dual purpose 80%)
The inputs used to improve production are fertilizers on the land, salt, concentrate and chopped potatoes. 35% of the farmers use salt the most and 5% use cattle purge.
Graphic 6. Use of production inputs: What inputs do you use for production improvement? (Salt 35%; remaining shares of 20%, 20%, 10%, 10% and 5% among concentrates, potato and other inputs)
Eighty-five percent of the farmers do not keep accounts on the farm and in turn state that they have no knowledge of how to do so. An additional 15% keep their accounts in a notebook and only one farmer keeps them in Excel, but with the help of his daughter.
Graphic 7. Bookkeeping: Do you do bookkeeping on your farm? (YES 15%, NO 85%)
Eighty percent of the farmers do not know how much they spend or how much they earn each month because their work is done by hand; 20% say that they do know how much they earn each month because they keep track of it in a notebook or in their minds.
Graphic 8. Spending: Do you know how much you spend and how much profit you have left each month? How do you keep track of it? (YES 20%, NO 80%)
100% of the farmers say that it would be useful because they would be able to identify how much they spend and how much profit they make each month, and depending on the results, it would help them make decisions.
Graphic 9. Profitability on your livestock farm: Do you think it would be useful to know the monthly profitability on your livestock farm? (YES 100%, NO 0%)
All of the farmers would make a change: some would concentrate on meat production only, improve the breed, or improve production inputs, while others would replace the cattle with sheep, specifying that they would sell the sheep's wool and their offspring.
Graphic 10. Case of not obtaining profitability: In case of not obtaining profitability with the management of the farm, would you be willing to make any change? Which one? (YES 100%, NO 0%)
There is only one farmer who has technified his farm, but not 100%, and he states that he can see the difference. In addition, 10% state that they do not technify their farms due to their health and economic conditions.
Graphic 11. Technifying: Have you ever thought about technifying your farm? Why? (YES 90%, NO 10%)
Seventy-five percent of the farmers have obtained financing from the Banco Agrario with the deeds to their farms; 25% have not received financing because they are not farm owners.
Graphic 12. Financing: When you have required financing from banks, have you obtained it? (YES 75%, NO 25%)
100% of the farmers have not received any assistance from the state.
Graphic 13. State aid received: Have you received financial aid or support from the state? (YES 0%, NO 100%)
Seventy percent would be willing, although they do not know which model to use; they also state that it could not be computer-based, due to the lack of this resource. The remaining 30% say no, because they do not know the model and lack the knowledge to apply it.
Graphic 14. Implement model: Would you be willing to implement a model to control your costs and expenses? (YES 70%, NO 30%)
7 Discussion
From the results obtained and analyzed in the interview, we can affirm that the producer of the El Palmar farm does not control costs and expenses due to lack of knowledge of a costing system, and does not know with certainty the profitability or loss that the farm generates, because the work is manual. However, the producer states a desire to keep track of the costs and expenses incurred in livestock production in order to identify profits. This is achieved with the implementation of an ABC costing system through macro-processes, which can be managed as a means of controlling production costs. It will help the producer make decisions about the farm to generate better economic benefits, improving production management and the control of the elements involved in the entire livestock production process, and making it possible to know the real profit and/or loss.
8 Conclusions
The ABC cost system was delivered in manual form to the producer of Finca El Palmar. This format will allow a monthly control of income, costs, and expenses, making it possible to show the profit or loss for the month. It was determined that on the El Palmar farm the cost elements incurred during the production process are the purchase of inputs (grass, salt, concentrates, fertilizers, purgatives) to improve production, labor, and other indirect costs such as medicines, gasoline and transportation, among others. The activities that occur during the production process are classified into 6 macro-processes: paddock maintenance, pre-milking, milking, post-milking, reproduction, and other expenses incurred in the process. The management tool to be applied is the ABC costing system, which will help improve profitability by enabling timely decisions if the company is generating losses, or otherwise a better management of its resources.
References
Alvarado Herrera, I.C.: Cost structure for small livestock farmers in the San Felipe irrigation system (2015)
Atehortúa: Costing analysis for a specialized dairy production system, an approach to economic analysis in dairy farming: case study. Dyna 75(155), 37–46 (2008)
Camargo Palacios, L.C., Flórez Rodríguez, Y.M.: Implementation of a training model for entrepreneurial members of the María Luisa Moreno Foundation FIMLM, in Funza, Cundinamarca (2017)
Campoverde Largo, J.G.: Proposal for a cost system in a cattle ranch in the province of Sucumbíos, period 2018. Bachelor's thesis, Universidad Tecnológica Israel, Quito, Ecuador (2020)
Casilimas, C.A.S.: Qualitative research. Icfes (1996)
Hernández Leyva, A.R.: Procedure for recording, calculating, and analyzing real livestock costs at the Holguín Livestock Farm Unit. Bachelor's thesis, University of Holguín, Faculty of Economic Sciences, Department of Accounting and Finance (2017)
Luna Jara, Z.T.: Analysis of production and marketing costs according to IAS 41 of the livestock activity (2020)
Pinilla, A.D.P., Ruiz-Urquijo, J.C.: Measurement of production costs for a milk producing farm in Ubaté (2015)
Pinilla, A.D.P., Ruiz-Urquijo, J.C.: Cost system for a milk producing farm: case study for La Esmeralda farm, municipality of Ubaté, Cundinamarca. In: Costs of Livestock Production: Case Studies in the Colombian High Tropics (2018)
Quel Garofalo, R.S.: Proposal for the improvement of financial procedures for the accounting area of the company Sociedad Industrial Ganadera el Ordeño SA, located in the city of Quito. Bachelor's thesis, UCE, Quito (2015)
Reyes Díaz, J.A.: Development and implementation of intensive cattle raising for a better commercialization of beef in the farm El Cortijo Las Marías. Bachelor's thesis, Universidad Autónoma de Occidente (2012)
Rojas Conde, P.R., Azañedo Martínez, A.A.: The cost of production system in milk production, in livestock enterprises in the Province of Ambo, period 2015 (2017)
Rojas, V.M.N.: Research methodology. Ediciones de la U, Bogotá (2011)
Sabiote, C.R., Llorente, T.P., Pérez, J.G.: Analytical triangulation as a resource for the validation of recurrent survey studies and replication research in higher education. RELIEVE Electron. J. Educ. Res. Eval. 12(2), 289–305 (2006)
Suárez, G.R.G.: Importance of accounting in the agricultural and livestock sector (2015)
Tello Cuya, M.I.: Analysis of the farm plan tool in the innovation process of livestock systems in Muy Muy and Matiguás, Nicaragua (2013)
Vargas-Benavides, P.: Implementation of a record keeping system in a cattle ranch (2009)
Vilchez Castro, R., Ticliahuanca Cruz, A.M.: Implementing a production cost system in the Cattlemen of Miraflores de Buena Vista, year 2011 to 2015 (2018)
Villegas, M.Á.V., Moreno, M.C.M.: An activity-based costing system for dual-purpose livestock farming units. Case: Agropecuaria El Lago S.A. INNOVAR J. Admin. Soc. Sci. 19(35), 99–117 (2009)
e-Learning
Effects of Virtual Reality and Music Therapy on Academic Stress Reduction Using a Mobile Application Cristian A. Cabezas(B) , Alexander R. Arcos, José L. Carrillo-Medina, and Gloria I. Arias-Almeida Universidad de Las Fuerzas Armadas ESPE, Sangolquí, Ecuador {cacabezas5,ararcos1,jlcarrillo,giarias}@espe.edu.ec
Abstract. In recent years, music therapy has proven to be an effective strategy to reduce stress. This research examines the effects of applying a social support strategy to reduce stress levels in higher education students, combining music therapy with the use of immersive virtual reality through a mobile application. This proposal implements a system that performs the following tasks: i) collects information on the emotions of the participants through a facial recognition module, ii) provides a relaxation experience within an animated and musicalized virtual environment, iii) measures stress levels through the application of a psychological stress questionnaire, and iv) evaluates the effects produced on the user. Experiments were carried out with the participation of several students from Universidad de las Fuerzas Armadas ESPE. Results show that the use of music therapy, combined with virtual reality, could be a powerful strategy to decrease stress by up to 50%. Keywords: Virtual reality · Music therapy · Mobile application · Reduce stress · Academic
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 45–59, 2022. https://doi.org/10.1007/978-3-030-96147-3_4
1 Introduction
The COVID-19 pandemic has created a worldwide state of emergency, and more than a year after its onset we continue with biosecurity measures, according to the policies implemented in each country, which have forced people to adapt and seek new ways to cope with their daily reality [1]. One of the strategies to mitigate the spread of this disease is isolation, which has led people to carry out activities such as work and study virtually. Social distancing and the change of lifestyle have affected people's emotional and psychological health, which in the long run can produce depression, anxiety and especially stress [2, 3]. The latter can be one of the main generators of pathologies and chronic conditions such as irritability, decreased self-esteem, insomnia, asthma, hypertension, ulcers, etc. [4]. Under these circumstances, it becomes necessary to implement early strategies for the prevention and treatment of psychological effects, which can be caused by multiple factors [5]. According to some studies, it turns out that
46
C. A. Cabezas et al.
young people show greater stress than older people [1, 6], even more so when they are pursuing an academic education [7]. Academic stress is a common pathology that appears when a student faces a set of academic situations and demands, such as assignments, homework, school projects, tests or quizzes. This problem can affect people's well-being and emotional stability. Many studies conclude that the performance and development of competencies of higher education students are altered by the influence of stressful academic situations and circumstances, combined with personal, family and pandemic-related situations [8, 9]. For this reason, it is necessary to seek strategies to address these problems in order to achieve a stable psychological condition and quality mental health [10]. In this scenario, many studies have proposed methods to reduce stress levels, among them physiological, natural, and technological ones; they aim to relax and provide peace of mind, relieving the patient's stress symptoms [11, 12]. Music is related to a person's emotions: a song can cause happiness or sadness, having the ability to impact their state of mind [13]. It can also be used to improve the physiological state [14], since its words and harmonic rhythm provide a relaxing and positive stimulus for mood change and stress reduction [15]; its sounds, melodies and relaxing harmonies, when listened to, satisfy physical, emotional, mental, social and cognitive needs [16]. Recent studies report that music therapy helps to balance a person's emotional state [17] and to achieve a better intrapersonal and interpersonal integration and, therefore, a better quality of life [18]. In recent years virtual reality has gained much attention among developers.
This technology allows the immersion of a person in a computer-created environment [19], giving a sense of presence and interaction whose depth depends on the level of immersion [20]. Developing this type of application requires devices for visualizing and manipulating virtual reality environments. The choice of virtual reality devices depends on their intended use, since they are expensive and represent a large investment. For this reason, this project uses technologies that are low-cost, energy-efficient and easy to transport; in this case, mobile devices are a good alternative. There are a number of successful cases where this type of low-cost technology works properly and efficiently, which has led to its use in fields such as education [21], entertainment [22], social media [23] and health care [24], among others. In the health field, virtual reality has been used for therapeutic purposes, providing the patient with a simulated experience for the treatment of psychological conditions [2]; reported studies indicate good results [11]. For reducing stress levels, virtual reality can be applied with two approaches: the first employs relaxing generic environments, while the second requires people to interact with objects in the virtual environment [25]. This research examines the effects of applying a social support strategy to reduce stress levels in higher education students, combining music therapy with immersive virtual reality through a mobile application. The proposed system collects information about the emotions of the participants (facial recognition through convolutional neural networks), provides the participant with an experience within a virtual musical and animated environment as a relaxation tool, applying an
academic stress questionnaire (self-perceived stress through the SISCO SV-21 academic stress questionnaire) and, with the results obtained, evaluating the effects produced by the pre-use and post-use of the mobile application.
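The pre-test/post-test evaluation mentioned above can be sketched numerically. The SISCO SV-21 scoring procedure is not detailed in this section, so the sketch simply averages Likert-type answers into a 0–100 score; this linear mapping and the answer values are assumptions for illustration only, not the questionnaire's official scoring:

```python
def stress_score(likert_answers, scale_max=5):
    """Map Likert answers (1..scale_max) to a 0-100 self-perceived stress score.
    This linear mapping is an assumption for illustration, not the official
    SISCO SV-21 scoring procedure."""
    n = len(likert_answers)
    return 100 * (sum(likert_answers) - n) / (n * (scale_max - 1))

# Invented example answers for the 21 items, before and after the intervention.
pre  = [4, 5, 3, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 3, 4, 4, 5, 4, 4]
post = [2, 3, 2, 2, 3, 2, 1, 2, 2, 3, 2, 2, 2, 3, 2, 1, 2, 2, 3, 2, 2]

pre_s, post_s = stress_score(pre), stress_score(post)
reduction = 100 * (pre_s - post_s) / pre_s   # relative stress reduction (%)
print(round(pre_s, 1), round(post_s, 1), round(reduction, 1))
```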
2 Methods and Materials
Virtual reality is getting a lot of attention in the development of therapeutic applications for stress management [26]. On the other hand, music therapy is an intervention very commonly used in stress reduction. This paper proposes a mobile application that integrates these two approaches, with the aim of facilitating stress management.
2.1 System Design
This application is intended to function as a tool for psychologists to reduce stress levels through the collection of data on emotions, self-perceived stress and its effects. The SCRUM software development methodology was applied because of its high flexibility and adaptability [27]. The functional scheme of the mobile application, shown in Fig. 1, is composed of four modules: a) the emotion recognition module, which captures the visible emotional state of the participant; b) the stress assessment module, which administers a pre-test before using the application and a post-test at the end of the intervention, according to an adaptation of the SISCO SV-21 questionnaire (described in the metrics section), to obtain information on self-perceived stress and its effects; c) the virtual reality interfaces, which generate an environment of a virtualized forest where the participant interacts with 3D objects such as percussion instruments; and d) the data reported by the application on the analysis of emotions before and after its use.
2.2 Emotion Recognition
Emotion recognition plays an important role in many fields, such as forensic crime detection, assessment of psychologically affected patients, tutoring of students in academia, and observation of victims in court [28]. The aim of this module is to recognize facial emotions; for this purpose we make use of the well-known convolutional neural networks (CNN), which detect in an image of a human face 6 emotions: anger, fear, happy, sad, surprise, disgust.
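As a plain-Python illustration of how such a classifier turns an input face into per-emotion percentages, the sketch below implements the final prediction stage described later in this section (global average pooling over feature maps, then a softmax over the six classes). The feature maps and weights are random placeholders standing in for the trained CNN, so only the mechanics are shown, not real recognition:

```python
import math
import random

EMOTIONS = ["anger", "fear", "happy", "sad", "surprise", "disgust"]

def softmax(logits):
    """Numerically stable softmax: exponentiate shifted logits, normalize."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(feature_maps, weights, bias):
    """Global average pooling over each HxW map, then a dense softmax head."""
    pooled = [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
              for fmap in feature_maps]
    logits = [sum(w * p for w, p in zip(w_row, pooled)) + b
              for w_row, b in zip(weights, bias)]
    return dict(zip(EMOTIONS, softmax(logits)))

# Random placeholders: 16 channels of 8x8 activations and a 6x16 weight matrix.
random.seed(0)
fmaps = [[[random.gauss(0, 1) for _ in range(8)] for _ in range(8)] for _ in range(16)]
W = [[random.gauss(0, 0.1) for _ in range(16)] for _ in range(6)]
b = [0.0] * 6
probs = predict(fmaps, W, b)
print({k: round(v, 3) for k, v in probs.items()})
```

The real module would obtain `fmaps` from the convolutional layers of the trained network; here the point is only that the head always yields six probabilities summing to one, which the API can report as percentages.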
The CNN network proposed in the paper "Real-time Convolutional Neural Networks for Emotion and Gender Classification" [29] was used as the basis for development, since its architecture is small and reduces the number of parameters used in training while improving performance, making it possible to detect faces, identify gender and, in our particular case, classify emotions quickly and efficiently. Based on this premise, the CNN network was deployed as a cloud service (Digital Ocean - jukeapi.ml) accessed through an API (application programming interface). The image of the face of an application user is analyzed and the detected emotions are classified; these entries are stored in a database and can be consulted through the mobile application. To create the API, the Flask framework was chosen because of
C. A. Cabezas et al.
Fig. 1. Functional scheme of the proposed application: a) emotion recognition module, b) stress assessment module, c) virtual reality interfaces and d) reported data about emotions.
Fig. 2. Emotion recognition implementation architecture: a) batch normalization for image processing, b) residual convolutions, each with batch normalization and the ReLU activation function, c) combination of global average pooling and a softmax activation for prediction.
the several advantages it provides in the creation of such services [30]. The structure of this module is shown in Fig. 2. The mobile application sends an image containing the user's face to the API, in which each of the six emotions is analyzed using the CNN. The model for emotion recognition is formed by a) a batch normalization for image processing prior to analysis, and b) four separate residual convolutions, in which each convolution is formed
Effects of Virtual Reality and Music Therapy on Academic Stress Reduction
by a batch normalization and the ReLU activation function. Finally, c) a combination of global average pooling plus a softmax activation produces a prediction as a percentage of each emotion reflected by the user's face. The API returns a percentage for each of the six emotions indicated above, which is stored in a knowledge database for further analysis. The purpose of recording emotions through this module is to compare each emotion registered at the beginning and at the end of the use of the application, and thus obtain an objective comparison of the change in the participant's emotions [29].

2.3 Virtual Environment and Music Therapy

Virtual reality (VR) is used increasingly and more frequently to build interactive 3D environments in different research fields, thanks to the great flexibility offered by game engines [24]. The Unity 3D graphics engine was used to build the virtual environments of the application (with the Unity SDK for Google Cardboard, chosen for its ease of implementation); applications of this type can be visualized using viewers for mobile devices. Fifty-two students were surveyed to collect information about their preferred relaxing environment and their favorite music, choosing among “rainforest”, “beach”, “riverbank”, “home” and “anywhere”; this task is based on previous studies [31, 32]. This information was then used to design the virtual environments. According to the results obtained, the relaxing environment preferred by most of the students surveyed was the rainforest, which was selected for implementation due to its therapeutic and relaxing effect [32]. A 3D model of this environment was built to provide the user with a realistic experience of being surrounded by nature with a 360° perspective [31].
A VR viewer for mobile devices was used to visualize the constructed virtual environment, as well as for testing and validation of the application, due to its easy accessibility and affordability. Additionally, music is used as therapy: while navigating through the virtual environment, the user's favorite music is played. Several studies [27, 33] have shown the effectiveness of music therapy, as it awakens in the participant different feelings and emotions, such as joy and tranquility, which positively stimulates the participant's emotional state. The experience within the virtual environment is not limited to listening; an interactive activity was created to follow the rhythm of the song. A bass drum and a snare drum (3D objects) are modeled as percussion instruments, as these most commonly carry the rhythm of a song [34]. These objects are generated according to the rhythmic pattern of the musical piece the participant has selected (see Fig. 3a) and can be destroyed with a drumstick whose movement is linked to that of the participant's head (see Fig. 3b-c). When the bass drum or the snare is destroyed, an animation is shown and a sound simulating the percussion instrument is played (see Fig. 3d) [35], which makes the user's experience more dynamic and satisfying, increasing their state of relaxation [36]. Beat Detection Algorithm. The beat detection algorithm is based on the search for a rhythmic pattern in an audio signal. The rhythm was used to synchronize certain effects, such as the generation of the 3D percussion instruments with which the user interacts, objects
Fig. 3. View of the virtual environment through a virtual reality device: a) view of the bass drum and snare drum, b) explosion of the bass drum when hit by the drumstick, c) view of the snare drum, d) explosion of the snare drum when hit by the drumstick.
that make the experience more vivid and dynamic within the virtual environment [35]. For the implementation of the algorithm, the best balance between accuracy and processing speed was sought, considering that it should be as efficient as possible because it runs on a mobile device; for this reason, the method used in this study is based on one of the beat detection algorithms of Frédéric Patin [37]. The algorithm relies on a statistical model based on the amplitude, or energy, of the sound: the average value of this energy over an interval of a couple of seconds is computed and compared with the current sound energy, and if the difference between the two exceeds a previously determined threshold, a pulse is considered to have occurred. The objective is to capture the lowest frequencies, in a range between 60 Hz and 180 Hz for the bass threshold, in which the sound of a bass drum is found, and a mid/bass
range between 500 Hz and 2000 Hz, in which most snare drums are considered to lie [38]. A window of 1024 samples is used with a sampling rate of 44100 Hz, giving a buffer of 43 elements to store 1 s of audio. The values of the samples are obtained from the fast Fourier transform (FFT). The energy calculation (Eq. 1) assumes that k and k + n are the limits of the frequency range currently being processed and that FFT[i] is the amplitude of the frequency at position i.

E = (1/n) Σ_{i=k}^{k+n} FFT[i]   (1)

The resulting value is stored together with the 42 remaining samples over 1 s to establish the historical record (Eq. 2).

H = [E_{t0}, E_{t1}, ..., E_{t42}]   (2)

The arithmetic average is then calculated (Eq. 3).

avg(H) = (1/43) Σ_{i=0}^{42} H[i]   (3)

If the energy at the current time is higher than the calculated average (Eq. 4), a pulse is considered to have been detected, which, depending on the thresholds defined for bass and mid/bass, generates a bass drum or a snare drum within the virtual environment.

E > avg(H)   (4)
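As a concrete illustration, the per-band energy test of Eqs. 1–4 can be sketched as follows. The 43-element history follows the description above; the `BeatDetector` class name and the sensitivity factor on the average are our own additions (the paper only speaks of a "previously determined threshold"):

```python
from collections import deque

class BeatDetector:
    """Energy-based pulse detection over one FFT frequency band (Eqs. 1-4)."""

    def __init__(self, k, n, history_len=43, threshold=1.3):
        self.k = k                    # first FFT bin of the band
        self.n = n                    # number of bins in the band
        self.threshold = threshold    # sensitivity factor applied to the average
        self.history = deque(maxlen=history_len)  # ~1 s of past band energies

    def step(self, fft_magnitudes):
        # Eq. 1: average amplitude of the band in the current window
        band = fft_magnitudes[self.k : self.k + self.n]
        energy = sum(band) / self.n
        # Eqs. 3-4: compare against the average of the stored history (Eq. 2);
        # no beat is reported until a full second of history exists
        is_beat = (
            len(self.history) == self.history.maxlen
            and energy > self.threshold * (sum(self.history) / len(self.history))
        )
        self.history.append(energy)   # keep the rolling 1 s record
        return is_beat
```

With a 1024-sample window at 44100 Hz each FFT bin spans roughly 43 Hz, so the 60–180 Hz bass-drum band corresponds approximately to bins 1–4 and the 500–2000 Hz snare band to bins 12–46.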
2.4 Evaluation Metrics

Qualitative and quantitative data were collected by analyzing emotions through a percentage measure for each of the six emotions considered. Through this metric, the application reports whether the user's emotional state changes while using it and, if so, how this may influence their self-perceived stress level; these values were analyzed at the beginning and at the end of the use of the application. An adaptation of the SISCO SV-21 academic stress questionnaire [39] was used; this questionnaire collects information regarding the academic stress (stressors, symptoms, and coping strategies) of the respondents. The adaptation was made with the advice of an expert in psychology in order to obtain indicators related to stress symptoms. Of the original 21 items, seven were eliminated from the stressors dimension and seven from the coping-strategies dimension because they were not related to the object of study, leaving seven items (see Table 1: items 1–7) that measure the most common stress symptoms (reactions) of the participants. In addition, eight items were taken from the first version of the SISCO SV questionnaire [40] that had been dropped in the second version, because they provide indicators of other, less common stress symptoms (see Table 1: items 8–15), giving a final total of fifteen
items, evaluated on a Likert scale of five numerical values, where 1 means “a little” and 5 means “a lot” (the scale used throughout this study). In addition, the survey asked participants to enter demographic data (name, age, academic level, place of residence). The first two questions of the questionnaire were not modified: the first serves as a filter to find out whether the participant has experienced stress during the course of an academic period (Question 1: During the course of this semester, have you had moments of worry or nervousness/stress?), and the second provides information about the level of self-perceived stress (Question 2: In order to obtain greater precision, indicate your level of stress using the Likert scale). At the end of the study, a short questionnaire was given to the participants to find out their appraisal of the mobile application; each question is evaluated on a 5-point Likert scale. The questions asked were: 1) How likely is it that you would recommend the application to a friend or family member? 2) From your point of view, how easy was it to use? 3) How likely is it that you would use the application again?

Table 1. Items of the SISCO SV-21 symptom dimension (reactions)

Item 1: Chronic fatigue (permanent tiredness)
Item 2: Feelings of depression and sadness (feeling down)
Item 3: Anxiety (nervousness), anguish or despair
Item 4: Difficulty concentrating
Item 5: Feelings of aggressiveness or increased irritability
Item 6: Conflicts or tendency to argue, contradict or fight
Item 7: Unwillingness to do academic work
Item 8: Sleep disorders (insomnia or nightmares)
Item 9: Headaches or migraines
Item 10: Digestion problems, stomach pain or diarrhea
Item 11: Scratching, nail-biting, rubbing, etc.
Item 12: Drowsiness or increased need for sleep
Item 13: Restlessness (inability to relax and be calm)
Item 14: Isolation from others
Item 15: Increased or decreased food consumption
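The paper reports Likert averages "transformed into a percentage measure" without spelling out the formula. One plausible linear mapping is sketched below; both helper names and the mapping itself are our assumptions, not the authors' published transformation:

```python
def likert_mean(responses):
    """Average of responses on the 1-5 Likert scale used in the study."""
    return sum(responses) / len(responses)

def likert_to_percent(score, low=1, high=5):
    """Linearly map a Likert score onto 0-100 %.

    `low` maps to 0 % and `high` to 100 %; this is one plausible choice,
    assumed here because the paper does not state its exact transformation.
    """
    return (score - low) / (high - low) * 100.0
```

Under this convention, for example, the pre-test symptom average of 3.1 reported in Sect. 3 would correspond to 52.5 %.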
3 Results and Discussion

Sampling was used to create groups from the target population, students in the last semesters of Software Engineering at the Universidad de las Fuerzas Armadas ESPE - Ecuador, chosen for the availability of time and of the technological resources to which the students had
access. The selection of participants was carried out under the supervision of an expert in psychology involved in the research. Six students were selected, between 21 and 25 years of age, 3 men and 3 women, who underwent the music therapy with virtual reality intervention. The participants used the application during five sessions, each lasting between 10 and 15 min depending on the duration of the musical piece selected by each participant and on the time taken for the data collection process. The analysis of the results was also conducted with the advice of a psychologist. The percentage values stored in the database of the emotion recognition module were obtained for the evaluation of positive and negative emotions; likewise, for the evaluation of the effects of stress and of the self-perceived stress level, the values of the forms completed by the participants were obtained, averaged, and transformed into a percentage measure to ease interpretation. Figure 4a shows the variation of positive emotions when using the application: the average values of positive emotions of all participants at the beginning of each session are represented in red, and the average values at the end of the sessions in blue. The participants started with an average of 21.53% positive emotions, which rose to 70.72% at the end, an increase of 49.19% with a standard deviation of 11.32 over the five sessions. Based on the data of the 6 participants, the increase in positive emotions was larger in female students (54.44%) than in male participants (43.94%).
Fig. 4. Results for a) positive emotions and b) negative emotions: measurements before (red) and after (blue) each session.
On the other hand, negative emotions showed the opposite behavior. As shown in Fig. 4b, the average of negative emotions of all participants at the beginning of each session is shown in red and the average at the end of the sessions in blue. The participants started with an average of 6.96%, which was reduced to 1.71% by the end of the intervention, a decrease of 5.25% in the average of negative emotions with a standard deviation of 2.13 over the five sessions.
The participants used the application during the entire study for an average of 1 h 8 min 36 s; female participants used it for an average of 1 h 7 min 58 s, while male participants used it for an average of 1 h 9 min 15 s, as shown in Table 2, which summarizes the characteristics of the students, including the change in their emotions and their level of self-perceived stress.

Table 2. Summary of participant characteristics: gender, semester, self-perceived stress level, application usage time, increase in positive emotions, decrease in negative emotions, and stress level reduction.

Student    Gender  Semester  Stress level  Usage time  Increase pos. emotions  Decrease neg. emotions  Stress level reduction
Student 1  Male    9         4             1:09:12     47.19                   4.69                    2
Student 2  Female  9         4             1:08:57     43.59                   6.96                    2
Student 3  Male    9         4             1:10:44     50.97                   6.49                    3
Student 4  Female  6         4             1:06:45     68.02                   6.46                    3
Student 5  Male    7         4             1:07:48     33.66                   1.21                    2
Student 6  Female  4         5             1:08:11     51.73                   5.68                    3
Average                                    1:08:36     49.19 ± 11.3            5.25 ± 2.13             2.5 ± 0.55
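The averages and standard deviations reported for the six participants can be reproduced with standard sample statistics. A sketch using Python's `statistics` module, with the per-student values taken from Table 2 (the table rounds the first deviation to ±11.3; the text gives 11.32):

```python
from statistics import mean, stdev

# Per-student values from Table 2
increase_pos = [47.19, 43.59, 50.97, 68.02, 33.66, 51.73]  # positive emotions (%)
decrease_neg = [4.69, 6.96, 6.49, 6.46, 1.21, 5.68]        # negative emotions (%)
stress_drop = [2, 2, 3, 3, 2, 3]                           # Likert-point reduction

for label, column in [("positive", increase_pos),
                      ("negative", decrease_neg),
                      ("stress", stress_drop)]:
    # sample mean and sample standard deviation, as in the table's last row
    print(label, round(mean(column), 2), round(stdev(column), 2))
# positive 49.19 11.32
# negative 5.25 2.13
# stress 2.5 0.55
```

Note that `stdev` computes the sample (n − 1) standard deviation, which matches the reported ±11.32, ±2.13, and ±0.55.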
The results of applying the SISCO SV-21 questionnaire to confirm the change in self-perceived stress and its symptoms (reactions) are shown in Fig. 5. They show that negative reactions were reduced with the use of the application. Participants started with an average score of 3.1 on the 15 test questions, evaluated on a 5-point Likert scale (see Table 1). At the end of the five sessions, participants experienced a reduction of their symptoms (reactions) down to an average of 2 points. When the SISCO SV-21 questionnaire was first applied, a higher percentage of participants reported sleep disorders, drowsiness, and listlessness toward their academic activities; at the end of the five sessions, a greater reduction was observed in some stress reactions. According to the questionnaire items, the following results are reported: a 30% reduction in sleep disorder (item 1), chronic fatigue (item 2) and drowsiness (item 6); a 26% reduction in skin itching and nail-biting (item 5), tendency to get involved in conflicts (item 12), and increased or decreased food consumption (item 15); a 36% reduction in irritability or aggressiveness (item 11); and a 24% reduction in isolation from others (item 13) and reluctance to perform
Fig. 5. Results for the items of the SISCO SV-21 symptom dimension (reactions) before (red) and after (blue) the five sessions; each item is specified in Table 1 and is evaluated on a 5-point Likert scale.
academic activities (item 14). The items not specified showed an average decrease of one point or less. The information on the level of self-perceived stress for each participant can be seen in Fig. 6. At the beginning of the study, the participants presented an average self-perceived stress level of 4.2, and at the end of the sessions there was a significant average reduction of 2.5 points in the score, with a standard deviation of 0.55. In addition, female participants showed a reduction of 53.3%, higher than that of male participants, whose reduction was 46.7%. The results of this study provide evidence on the feasibility and effectiveness of a mobile application that uses music therapy and virtual reality environments to improve users' emotions and decrease their self-perceived stress levels. The use of virtual reality as a technological tool for mental health support is important and useful, since it helps users to change their mood positively and, to some extent, improve their physiological state. There are two motivations for using mobile devices. First, this type of device makes virtual reality technology more accessible, albeit with certain limitations. Second, considering the repercussions of social isolation in the current global context, it is more viable to provide people with a mobile application that can be used at any time and place without risking their health. The results show that positive emotions increased significantly at the end of each session, which corroborates the positive experience the participants had when using the application. Emotions improved by an average of 49.19% while self-perceived stress levels decreased by 50%, suggesting an inverse relationship between people's mood and their stress level.
It could be observed that, although the female participants used the application for less time, they obtained better results than the male participants, both in the increase of positive emotions and in the reduction of
Fig. 6. Self-perceived stress: measurements before (red) and after (blue) the five sessions.
their stress level. Although the results reflect no relationship between the time of use of the application and the reduction in self-perceived stress, there could be a direct relationship between the time of use and the increase in indicators of positive emotions. In the questionnaire on the participants' appraisal of the application, it was found that 100% of the participants would recommend its use to others, 50% reported that it was easy to use, and 83% would use it again. According to these results, there is a high level of acceptance among the participants. In addition, our study provides a different approach from similar studies, since one of our indicators is based on the detection of the participant's emotions, which, with the support of a psychology specialist, has been considered to have a close relationship with the stress levels they present [56, 57]. It should be noted that this study has some limitations. First, the time of use of the application was very short; it would be interesting to test how longer periods of use affect the results and whether they yield better outcomes. Second, only one relaxing environment was used to stimulate relaxation and positive emotions and reduce stress; it would be possible to experiment with other environments and analyze how they affect the participant's emotions, also considering the duration or type of music selected.
4 Conclusions

This paper proposed a mobile application based on music therapy and virtual reality to manage and reduce stress. The application can be used directly by the end user or by a specialist, in both cases as a support or complement to relaxation and stress management therapies.
This mobile application records information about the participant's emotions and applies the SISCO SV-21 academic stress questionnaire. It then selects a song based on the participant's preference; the song is immediately analyzed to identify its rhythmic pattern and generate three-dimensional objects (musical instruments). Within the virtual environment the participant destroys these objects, and upon this action an animation and a sound associated with the instrument are played, which makes the user's experience more dynamic and satisfying and relaxes the participant. This therapy was performed during five sessions in order to analyze and validate the stress levels and symptoms (reactions) of the participants. The results show that the use of the mobile application can improve positive emotions, achieving an average increase of 49.19%. It also promoted relaxation, which led to a reduction of self-perceived stress levels by 50%. As the pandemic continues, stress problems remain active; thus, this work could be a starting point for further studies on stress-related emotion management using virtual reality-based technology.

Acknowledgments. This research was carried out with the cooperation of Ms. Gabriela Tualombo, psychologist, whose valuable professional support allowed the application of the necessary knowledge on the analysis of self-perceived stress and its symptoms, through the adaptation and application of the SISCO SV-21 academic stress questionnaire and the interpretation of the results. This study is part of the research project “DEVELOPMENT OF A MOBILE-WEB SYSTEM AS A PREVENTIVE SOLUTION FOR THE REGISTRATION AND MONITORING OF PEOPLE WHO ARE IN DOMICILIARY ISOLATION AND ARE POSSIBLE CASES OF CORONAVIRUS”: Project “APP COVID-LIFE”.
We would also like to thank all the research assistants of the project, as well as the Universidad de las Fuerzas Armadas ESPE and the Research Group of Technology Applied to Biomedicine - GITbio, for their support in developing this work.
References

1. Salari, N., et al.: Prevalence of stress, anxiety, depression among the general population during the COVID-19 pandemic: a systematic review and meta-analysis. Globalization and Health 16 (2020). https://doi.org/10.1186/s12992-020-00589-w
2. Taneja, A., Vishal, S., Mahesh, V., Geethanjali, B.: Virtual reality based neuro-rehabilitation for mental stress reduction. In: 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN) (2017). https://doi.org/10.1109/ICSCN.2017.8085665
3. Wang, C., et al.: Immediate psychological responses and associated factors during the initial stage of the 2019 coronavirus disease (COVID-19) epidemic among the general population in China. Int. J. Environ. Res. Publ. Health 17, 1729 (2020). https://doi.org/10.3390/ijerph17051729
4. Jani, H.: Benefiting from online mental status examination system and mental health diagnostic system. In: The 3rd International Conference on Information Sciences and Interaction Sciences, pp. 66–70 (2010). https://doi.org/10.1109/ICICIS.2010.5534712
5. Li, Z., et al.: Vicarious traumatization in the general public, members, and non-members of medical teams aiding in COVID-19 control (2020). https://doi.org/10.1101/2020.02.29.20029322
6. Zamora, Z., Romero, E.: Estrés en Personas Mayores y Estudiantes Universitarios: Un Estudio Comparativo. Psicología Iberoamericana, pp. 56–68 (2010)
7. Ozamiz-Etxebarria, N., Dosil-Santamaria, M., Picaza-Gorrochategui, M., Idoiaga-Mondragon, N.: Niveles de estrés, ansiedad y depresión en la primera fase del brote del COVID-19 en una muestra recogida en el norte de España. Cadernos de Saúde Pública 36 (2020). https://doi.org/10.1590/0102-311X00054020
8. Caldera, J.F., Pulido, B.E., Martínez, M.G.: Niveles de estrés y rendimiento académico en estudiantes de la carrera de Psicología del Centro Universitario de Los Altos (2007)
9. Barraza, A.: El estrés académico en los alumnos de postgrado. Revista de Psicología Científica VI(2), 17–37 (2004)
10. Ichiro, T., Yuko Mizuno, M.: A survey of the lifestyle and mental status of co-medical students. IEEE, 19–23 (2010)
11. Kanehira, R., Ito, Y., Suzuki, M., Hideo, F.: Enhanced relaxation effect of music therapy with VR. In: 2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 1374–1378 (2018). https://doi.org/10.1109/FSKD.2018.8686951
12. Tejada, S., Santillán, S., Diaz, R., Chávez, M., Huyhua, S., Sánchez, J.: Music therapy in the reduction of academic stress in university students. Med. Nat. 14, 86–90 (2020)
13. Yusoff, M., Hadie, S., Yasin, M.: The roles of emotional intelligence, neuroticism, and academic stress on the relationship between psychological distress and burnout in medical students. BMC Med. Educ. 21 (2021). https://doi.org/10.1186/s12909-021-02733-5
14. Yokoyama, K., Ushida, J., Sugiura, Y., Mizuno, M., Mizuno, Y., Takata, K.: Heart rate indication using musical data. IEEE Trans. Biomed. Eng. 49, 729–733 (2002). https://doi.org/10.1109/TBME.2002.1010857
15. Patrick, G.: The effects of vibroacoustic music on symptom reduction. IEEE Eng. Med. Biol. Mag. 18, 97–100 (1999). https://doi.org/10.1109/51.752987
16. Latif, R.: Preferred sound type for stress therapy. In: 2018 4th International Conference on Computer and Information Sciences (ICCOINS) (2018). https://doi.org/10.1109/ICCOINS.2018.8510560
17. Chavan, D., Kumbhar, M., Chavan, R.: The human stress recognition of brain, using music therapy. In: 2016 International Conference on Computation of Power, Energy Information and Communication (ICCPEIC) (2016). https://doi.org/10.1109/ICCPEIC.2016.7557197
18. Baharum, A., Seong, T., Zain, N., Yusop, N., Omar, M., Rusli, N.: Releasing stress using music mood application: DeMuse. In: 2017 International Conference on Information and Communication Technology Convergence (ICTC) (2017). https://doi.org/10.1109/ICTC.2017.8191001
19. Skulimowski, S., Badurowicz, M.: Wearable sensors as feedback method in virtual reality antistress therapy. In: 2017 International Conference on Electromagnetic Devices and Processes in Environment Protection with Seminar Applications of Superconductors (ELMECO & AoS) (2017). https://doi.org/10.1109/ELMECO.2017.8267716
20. Crosswell, L., Yun, G.: Examining virtual meditation as a stress management strategy on college campuses through longitudinal, quasi-experimental research. Behaviour & Information Technology, pp. 1–15 (2020). https://doi.org/10.1080/0144929X.2020.1838609
21. Arents, V., de Groot, P., Struben, V., van Stralen, K.: Use of 360° virtual reality video in medical obstetrical education: a quasi-experimental design. BMC Med. Educ. 21 (2021). https://doi.org/10.1186/s12909-021-02628-5
22. Wang, M.: Application and realistic dilemma of VR technology in film and television production. J. Phys.: Conf. Ser. 1881, 022030 (2021). https://doi.org/10.1088/1742-6596/1881/2/022030
23. Li, J., Vinayagamoorthy, V., Williamson, J., Shamma, D., Cesar, P.: Social VR: a new medium for remote communication and collaboration. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (2021). https://doi.org/10.1145/3411763.3441346
24. Wheeler, G., et al.: Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity. Healthcare Technol. Lett. 5, 148–153 (2018). https://doi.org/10.1049/htl.2018.5064
25. Lindner, P., Hamilton, W., Miloff, A., Carlbring, P.: How to treat depression with low-intensity virtual reality interventions: perspectives on translating cognitive behavioral techniques into the virtual reality modality and how to make anti-depressive use of virtual reality–unique experiences. Front. Psychiat. 10 (2019). https://doi.org/10.3389/fpsyt.2019.00792
26. Naylor, M., Ridout, B., Campbell, A.: A scoping review identifying the need for quality research on the use of virtual reality in workplace settings for stress management. Cyberpsychol. Behav. Soc. Netw. 23, 506–518 (2020). https://doi.org/10.1089/cyber.2019.0287
27. Srivastava, A., Bhardwaj, S., Saraswat, S.: SCRUM model for agile methodology. In: 2017 International Conference on Computing, Communication and Automation (ICCCA) (2017). https://doi.org/10.1109/CCAA.2017.8229928
28. Kulkarni, P., Rajesh, T.M.: Analysis on techniques used to recognize and identify human emotions. Int. J. Electrical Comput. Eng. (IJECE) 10, 3307 (2020). https://doi.org/10.11591/ijece.v10i3.pp3307-3314
29. Arriaga, O., Valdenegro-Toro, M., Plöger, P.: Real-time convolutional neural networks for emotion and gender classification. arXiv (2017)
30. Mufid, M., Basofi, A., Al Rasyid, M., Rochimansyah, I., Rokhim, A.: Design an MVC model using Python for Flask framework development. In: 2019 International Electronics Symposium (IES) (2019). https://doi.org/10.1109/ELECSYM.2019.8901656
31. Liszio, S., Masuch, M.: Interactive immersive virtual environments cause relaxation and enhance resistance to acute stress. Annu. Rev. Cyberther. Telemed. 17, 65 (2019). https://doi.org/10.1109/AIVR50618.2020.00022
32. Rozmi, M., Rambli, D., Sulaiman, S., Zamin, N., Muhaiyuddin, N., Mean, F.: Design considerations for a virtual reality-based nature therapy to release stress. In: 2019 International Conference on Advances in the Emerging Computing Technologies (AECT) (2020). https://doi.org/10.1109/AECT47998.2020.9194175
33. Vilaseca, M.: Musicoterapia y comunicación en adultos hospitalizados con Trastorno afectivo bipolar. Master's thesis, Universidad Internacional de La Rioja (2018)
34. Matney, B.: Drum set training in music therapy: a resource for students, clinicians, and educators. Music. Ther. Perspect. 39, 95–104 (2020). https://doi.org/10.1093/mtp/miaa007
35. Hoffman, H.: Interacting with virtual objects via embodied avatar hands reduces pain intensity and diverts attention. Sci. Rep. 11 (2021). https://doi.org/10.1038/s41598-021-89526-4
36. Lin, C., Tan, W., Lee, S., Tseng, S., Lin, Y., Hsu, W.: The influence of experience satisfaction and sports attitude on somatosensory experience of information technology products: a case study of Wii Sports. In: 2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST) (2017). https://doi.org/10.1109/ICAwST.2017.8256523
37. Patin, F.: GameDev.net – beat detection algorithms. http://archive.gamedev.net/archive/reference/programming/features/beatdetection
38. Albargues, E.: Procesos en postproducción de una batería acústica (2017)
39. Barraza, A.: Inventario SISCO SV-21: Inventario SIStémico COgnoscitivista para el estudio del Estrés Académico, Segunda versión de 21 ítems. Ecorfan, Mexico (2018)
40. Barraza, A.: El Inventario SISCO del Estrés Académico. Investigación Educativa Duranguense, pp. 90–93 (2007)
ROBOFERT: Human-Robot Advanced Interface for Robotic Fertilization Process

Christyan Cruz Ulloa1(B), Anne Krus2, Guido Torres Llerena1, Antonio Barrientos1, Jaime Del Cerro1, and Constantino Valero2

1 Centro de Automática y Robótica, Consejo Superior de Investigaciones Científicas, Universidad Politécnica de Madrid, 28006 Madrid, Spain
[email protected]
2 Departamento de Ingeniería Agroforestal, ETSI Agronómica, Alimentaria y de Biosistemas, Universidad Politécnica de Madrid, 28040 Madrid, Spain
[email protected]
Abstract. Interfaces for human-robot interaction in fields such as precision agriculture (PA) have made it possible to improve production processes by applying specialized treatments that require a high degree of precision at the plant level. Current fertilization processes are generalized over vast cultivation areas without considering each plant's specific needs, generating collateral effects on the environment. The Sureveg CORE Organic Cofund ERA-Net project seeks to evaluate the benefits of growing vegetables in rows with the support of robotic systems. A robotic platform equipped with sensory, actuation, and communication systems and a robotic arm has been implemented to develop this proof of concept. The proposed method focuses on the development of a human-machine interface (HMI) that integrates information coming from the different systems of the robotized platform in the field and suggests that an operator (in a remote station) take a fertilization action based on specific vegetative needs, in order to improve vegetable production. The proposed interface was implemented using the Robot Operating System (ROS) and allows visualizing the state of the robot within the crop through a highly realistic environment developed in Unity3D; it also shows specific information on the plants' vegetative data and fertilization needs and suggests actions to the user. The tests to validate the method were carried out in the fields of the ETSIAAB-UPM. According to the multi-spectral data taken before (2 weeks after planting) and after (3 months of growth), the main results show that the mean NDVI values of the row-crop vegetables reached normal levels around 0.4 with respect to the initial NDVI values, and that growth was homogeneous, validating the influence of ROBOFERT.

Keywords: Virtual reality · ROS · Robotics · Precision agriculture · Human machine interface · Image processing
Supported by the European project “Sureveg: Strip-cropping and recycling for biodiverse and resource-efficient intensive vegetable production”, belonging to the action ERA-Net CORE Organic Cofund: https://projects.au.dk/coreorganiccofund/.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 60–73, 2022. https://doi.org/10.1007/978-3-030-96147-3_5
ROBOFERT

1 Introduction
Recent developments in specialized software with better communication managers and graphics engines, combined with the increase in low-cost robotic systems, have made it possible to implement more robust integrated systems to execute specialized robotic applications. The remote execution, monitoring, and control of these processes have been possible through the implementation of IHMs, allowing an operator to supervise and control a process from a remote station. One approach for this type of robotic application in recent years focuses on precision agriculture. The Sureveg project (“Strip-cropping and recycling of waste for biodiverse and resoURce-Efficient intensive VEGetable production”) aims to develop robotic systems and intelligent machines to intensify organic vegetable crops in rows [6]. Within the field of PA, several projects have involved robotic systems for the execution of tasks [18,22], mainly fertilization and irrigation [7], location of plants and fruits [13], and mapping [20], through the use of sensory systems such as RGB and multispectral cameras [21] and lasers [24]. The usual monitoring of robotic applications in PA is carried out through conventional interfaces focused on the transmission of RGB video captured from one or more cameras, external or on board the robot, which show the user the status of the crops or the robot [1]; some perform initial processing of the images, segmenting areas of interest [3]. However, the biggest deficit found after reviewing the state of the art lies in the lack of information shown to the operator during the online process for subsequent actions. The main contribution of this work is to improve the intensive production of vegetables grown in rows by implementing a robust method for remote monitoring of vegetative variables, crop status, and the status of the robotic platform, through visual information captured from the platform's onboard sensors.
These data are displayed in a highly realistic representation of the crop field, managed by an IHM. The system also provides an initial suggestion to apply or not the treatment on the current plant and allows the operator to remotely activate the organic fertilization using the robot in the field through different buttons. This paper's interface consists of a virtual environment (VR) modeled from the real system, previously captured by a 3D system. It consists of a group of 20 m-long cultivation rows for various types of vegetables such as cabbage and beans. The simulated environment tracks the number of fertilized plants, the activation and deactivation of the robotic arm, the application of the fertilizer to each plant, and the location of the fertilized plants; this process works in conjunction with the physical robot, and the process is monitored through the robot's camera. Through the proposed method, “ROBOFERT” seeks to improve the growth of vegetables grown in rows. After carrying out tests in the fields of ETSIAAB-UPM (with row plantations of cabbage), the main results have shown an improvement in the growth of vegetables cultivated in rows. This hypothesis has been corroborated using NDVI indices computed from multispectral images of the cultivation before fertilization (initial phase of plant growth) and after (three months of vegetable growth).
C. Cruz Ulloa et al.
This paper is structured in five sections. Section 2 describes the most relevant works in the state of the art; Sect. 3 details materials and methods; Sect. 4 contains the experiments and results. Finally, Sect. 5 presents the main findings.
2 Related Work
The proposed method has taken as a reference the most relevant works within the field of interfaces applied to robotic agriculture, analyzing their strengths and weaknesses in order to develop a robust and functional system. The main topics involved in the proposed development are discussed below.

2.1 Interfaces in Robotics Applied to Agriculture
The proposed immersive interface focuses on introducing the operator to the mission scenario through a rich and detailed reproduction of it. There are three fundamental types of immersive technologies: virtual reality (VR), augmented reality (AR), and mixed reality (MR) [23,29]. AR superimposes images or virtual elements on a video of the real environment; related projects in PA include identification of insects for pesticide application [19], control and monitoring of agricultural tractors [26], and mapping with UAVs [11]. VR shows digitally synthesized scenarios with interactive elements; scenarios scanned in 3D have been used for fertilization training, pest location, and plantation monitoring [16,24], and VR is also applied to greenhouses [5]. Among the different robotics projects in PA, NARCH (National Agricultural Research Center for Hokkaido Region) is one of the most relevant, with an adaptive interface, both virtual and physical, for maneuverability. This project is based on the teleoperation and maneuverability of an agricultural tractor [18]. Another is SAVSAR (Semi-Autonomous Vineyard Spraying Agricultural Robot), whose adaptive interface includes the robot battery level, control of robot and camera movements, front and rear camera views of the robot, and additional information from sensors (visual and auditory); the robot performs inspection and distribution of fertilizer on a vineyard [15]. ROBOTIC WEED CUT is remotely operated through an adaptive interface with two prototypes: the first with on-screen navigation buttons, emergency stop, geographic orientation, and external camera visualization; the second similar to the first but adding the robot's maneuverability controls, battery level, target count, and visualization of the fruit to be sprayed [12,17].
RHEA is a project on collaboration between terrestrial and aerial robots, which operate telematically and autonomously. The aerial robot flies over the work area and locates the plants and weeds to be sprayed; the virtual interface consists of a virtual environment modeled from the real system, previously captured by a 3D system. The crops are sprayed autonomously, with the appropriate path traced from the data collected by the aerial robot; teleoperated control is carried out in the virtual environment [27].
Figure 1 shows the increase in articles related to interfaces for agricultural robotics; the Web of Knowledge shows an increase of around 22% of articles per year. This corroborates the importance and rise of this type of interface.
Fig. 1. Numbers of works in the Web of Knowledge including Interfaces for Robotic Agriculture as topic.
In Table 1, the advantages of the main methods developed are compared with those of the proposed method. The proposed interface shows a clear advantage and contribution with respect to the main related works or similar methods in this field. The metrics of the components compared are: A: RGB images, B: multispectral images, C: security systems, D: assisted fertilization, E: real-time model visualisation in an immersive interface.

Table 1. Comparative of the proposed method with current developments.

Projects                       Features (A-E)
NARCH [18]                     x
SAVSAR [1]                     x
SEARFS [10]                    x x x x
VINEYARD [2]                   x x
ROBOTIC ARM [12]               x x x
ROBOT WEED CONTROL [28]        x x x
ROBOTIC WEED CUT [17]          x x x
TERRAIN MAPS [24]              x
TRACTOR RA [26]                x x x x
RHEA [27]                      x x x x
ROBOFERT                       x x x x x
The system proposed to integrate all subsystems and manage the communications between the field robot and the interface is ROS, because of its nodes, topics, and multi-platform structure [2]. Other related works use Matlab [10,28] or serial protocols [9,25].
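The node/topic architecture that motivated this choice can be illustrated with a minimal publish/subscribe sketch in plain Python. This is a simplified stand-in for ROS topics, not the rospy API; the topic name and message fields are illustrative:

```python
# Minimal publish/subscribe sketch mimicking the ROS node/topic pattern.
# Illustrative stand-in only, not the actual rospy/ROS 2 API.

class TopicBus:
    """Routes messages from publishers to subscribers by topic name."""

    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)


bus = TopicBus()
received = []

# An "interface node" subscribes to the platform's pose topic.
bus.subscribe("/Geometry.Pose", received.append)

# A "platform node" publishes its position along the crop row.
bus.publish("/Geometry.Pose", {"x": 1.2, "y": 0.0, "plant": 6})

print(received)  # one pose message delivered to the interface
```

The decoupling shown here (publishers never reference subscribers directly) is what makes the ROS choice attractive for multi-platform systems: field and interface nodes only need to agree on topic names and message shapes.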
3 Materials and Methods
For this research, a robotic platform has been implemented (Fig. 2) to carry out fertilization in the field. It consists of an aluminum structure with wheels, coupled with an Igus CPR 5-DOF robot by means of a custom-designed plate for the base. The platform is instrumented with an RGB camera, a Parrot Sequoia multispectral camera, three laser LiDAR sensors, and an actuation system (a nozzle and pumping system to spray the liquid). Figure 2 shows the robotic platform on the crop row. For the validation of the proposed method, fertilization tests have been carried out with the robotic platform in the crop fields of ETSIAAB-UPM (40°26′38.9″N, 3°44′19.3″W) in Madrid, which contain rows of cabbages in the initial growth phase, a stage at which the application of fertilizer is convenient.
(a) Robot CPRIgus 5DOF used.
(b) Robotic mobile structure on ETSIAABUPM fields.
Fig. 2. Robotic mobile platform description.
The proposed method focuses on developing a continuous and robust interaction between the monitoring of the crop's vegetative variables, the state of the robot in the field, and the decision-making of the user, for the execution of organic fertilization tasks at the individual plant level on row crops, seeking efficient intensive production of cultivated vegetables.
3.1 Interface for Monitoring and Control
The implemented interface is shown in Fig. 3a-b. It has been developed in Unity3D, which meets the requirements of the proposed method: a powerful graphics engine and wireless communication with ROS through ROS-Bridge (to manage the sending and receiving of data through topics). The laptop running the Unity interface features an Intel Core i7 10th
(a) Representation of a plant on the crop that doesn’t require fertilization.
(b) Representation of a plant on the crop that requires fertilization. Fig. 3. Advanced interface for monitoring and control the fertilization process.
Gen processor and an Nvidia GeForce GTX 1660 Ti graphics card to support data flow management and visuals. The interface is made up of an environment that recreates the crop field's details, mainly crop rows, vegetables, trees, agricultural machinery, plants, etc., in order to represent the states of the platform at each moment. On the left side, in the blue squares of Figs. 3a-b, the information concerning the current plant (fertilization suggestion and nozzle status), the on-board camera's image, and the platform control buttons are shown on the environment. On the right side, the multi-spectral information concerning the row of the real crop is shown, and in the upper part of the IHM, the system's suggestion for the platform's displacement is shown. The data received in the interface mainly show the variables of vegetative interest of the crop and of the robot in the field. In turn, the data sent by the interface manage the start of the planning and execution of the fertilization process with the robotic arm. The data shown in the interface, coming from the sensors read through ROS, are:
– Background Field: representation of the robot model in the field.
– Fertilization Status: current status of the nozzle.
– Plant Number: number of the plant processed in the row.
– Plant Status: suggestion issued by the system to apply or not the treatment.
– System Output (S-O): system suggestion for the platform's advance.
– In the crop row's multispectral image, a red circle marks the plants that require fertilization; a green circle indicates that they do not require treatment.
– RGB image of the current process transmitted by the on-board camera.

On the other hand, the operator can act, based on the information provided and the system's suggestion, through the buttons:

– Turn on all the systems.
– Yellow button: start planning the robot movements to apply the treatment around the current plant and start the movement. This is developed on the onboard computer of the robot using the MoveIt planner tool, based on the row PC. This fertilization strategy was implemented using the CCD method to solve the inverse kinematics based on the vegetable characterization; this process is described in detail in [7].
– Back button: process stopped.
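As a rough illustration of the inverse-kinematics step, the following is a minimal 2-D cyclic coordinate descent sketch, assuming "CCD" refers to cyclic coordinate descent; the link lengths and target point are illustrative values, not the actual 5-DOF arm parameters described in [7]:

```python
import math

def ccd_ik(lengths, target, iterations=50):
    """Cyclic coordinate descent IK for a planar chain (illustrative only).

    Sweeps joints from the end effector back to the base, rotating each
    joint so the end effector moves toward the target point.
    """
    angles = [0.0] * len(lengths)

    def forward(angles):
        # Positions of every joint plus the end effector.
        points, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for length, angle in zip(lengths, angles):
            a += angle
            x += length * math.cos(a)
            y += length * math.sin(a)
            points.append((x, y))
        return points

    for _ in range(iterations):
        for i in reversed(range(len(lengths))):
            points = forward(angles)
            jx, jy = points[i]
            ex, ey = points[-1]
            # Rotate joint i so the joint->effector ray points at the target.
            to_eff = math.atan2(ey - jy, ex - jx)
            to_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += to_tgt - to_eff
    return angles, forward(angles)[-1]

# Hypothetical 3-link arm (lengths in meters) reaching toward a plant base.
angles, effector = ccd_ik([0.3, 0.25, 0.15], target=(0.4, 0.2))
print(effector)  # should land close to (0.4, 0.2)
```

CCD is attractive for this kind of task because each sweep only needs forward kinematics and two `atan2` calls per joint, with no Jacobian inversion.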
3.2 Structure of Sub-systems Connection
Figure 4 shows the main interaction scheme of the subsystems for the bidirectional data flow between the interface and the robotic platform in the field. The structure has five subsystems that carry out specific functions in the fertilization process. Four of these correspond to the platform, and the remaining subsystem corresponds to the interface (with wireless communication). These subsystems are:
– Point Cloud Data Processing: Field reconstruction through point clouds (PC), using the laser system, described in [8,14]. – Trajectory Planner: Collision-free trajectory planning based on PC. – Fertilization: Nozzle drive.
Fig. 4. Scheme of interconnections between subsystems for the fertilization process (the User Interface is remote; the rest of the subsystems are in the field).
– Real Robot CPR: execution of the movements of the robotic arm.
– User Interface: IHM for monitoring and control (remote).

The execution of processes and the data flow are carried out on two computers. The first is on board the platform (Ubuntu operating system) and executes tasks for reading the sensors and controlling the robot's actions. The second computer (Windows operating system) is in the remote station and contains the interface developed in Unity. The computers communicate using the TCP/IP protocol over a Wi-Fi network. ROS manages the information flow; the different data are published through specific topics (/Sensor.CompressedImage for images, /Geometry.Pose for the platform's position, /Sensor.Joy for the state of the robot joints) via ROS-Bridge to the interface.
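Communication through ROS-Bridge is JSON over a WebSocket. The shapes of the two basic messages can be sketched as follows; the `op` fields follow the rosbridge v2.0 protocol, the topic names are those listed above, and the payload fields are simplified assumptions rather than the full ROS message definitions:

```python
import json

# Subscription request the Unity interface would send for the camera stream.
subscribe_msg = json.dumps({
    "op": "subscribe",
    "topic": "/Sensor.CompressedImage",
})

# Publication the field computer would send with the platform's position
# (payload simplified; a real geometry pose also carries orientation).
publish_msg = json.dumps({
    "op": "publish",
    "topic": "/Geometry.Pose",
    "msg": {"position": {"x": 1.2, "y": 0.0, "z": 0.0}},
})

for raw in (subscribe_msg, publish_msg):
    decoded = json.loads(raw)
    print(decoded["op"], decoded["topic"])
```

Because the wire format is plain JSON, the Windows/Unity side needs no native ROS installation, which is what allows the two-operating-system split described above.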
4 Experiments and Results
The experiments were carried out in a row of transplanted cabbage crops. A previous analysis was made of these plants (in a first continuous pass) to evaluate their vegetative needs and determine which of them need fertilization. To carry out the experiments, the platform moves from the beginning to the end of the row (with the data already analyzed). In the interface, the system that controls the interface's output data previously loads the vegetative information of the row in order to issue suggestions to the operator. The results of the field execution and interface operation are shown in Appendix A.

4.1 Field Tests
The System Output (S-O) of the interface suggests that the user place the platform on the first plant to start the process; once the platform is placed, the plant's data and the suggestion of whether or not to apply fertilization are displayed for as long as the task lasts. The S-O sets a message of “Wait to Finish”; once the task is completed, the new S-O message is “Next Plant”, indicating the platform's advance. Together with the visualization of the robot through the image transmitted by the onboard camera, these data allow the user to decide whether to press the buttons that start the planning and execution of the robot arm movements. Figures 3a-b show two cases where the treatment should and should not be applied. The first case, in Fig. 3a, corresponds to a plant on which the treatment must be applied; the data shown are plant number (6), plant status (Apply Fert), and fertilization status (Applying), and the System Output tells the user to wait for the process to finish. The second case, in Fig. 3b, corresponds to a plant that does not require treatment. The data shown are plant number (7), plant status (Not Fert), and fertilization status (OFF). In this case, the S-O indicates that the operator must advance to the next plant.
4.2 Execution of Fertilization Through Trajectory Planning
Figure 5a shows the data captured by the sensory system in RViz (ROS's visualization tool): the platform with the robotic arm suspended in the center. The current position is represented in orange, while the path that the arm will travel to apply the fertilizer is superimposed in gray, like a semicircle around the base of the plant.
(a) Interface showing data of a plant that doesn't need fertilization and current robot status.

(b) Interface showing data of a plant that needs fertilization and current robot status.
Fig. 5. Interface showing different cases of plants in the row crop.
The PC data for the 3D reconstruction of the current row (Fig. 5b) are represented in green-blue (a false depth color: the bluer, the higher) and allow the robot to locate itself within the environment and plan trajectories as a function of the relative arm/plant position. Figure 5a shows the tool placed at the end of the robotic arm, inclined towards the plant's base to describe a semicircular trajectory. The system activates the nozzle based on the arm's current position (close to the base and with an optimal angle to spray the liquid organic fertilizer on the base); it remains activated until the semicircular path is finished.

4.3 Comparison of Multi-spectral Images for Testing the Influence of the Proposed Method
To verify the influence of the proposed method on the growth of the planted vegetables and the regulation of NDVI indices, we have compared the NDVI indices of a section of the crop row at the initial growth stage with those of the same vegetables after three months. Figure 6a shows the mosaic of RGB (upper) and multi-spectral (lower) images for the transplanted vegetables in the initial stage, with a mean diameter of 10 cm. The circles in red
show those vegetables that require fertilization. This analysis has been carried out based on the NDVI (Normalized Difference Vegetation Index), which evaluates the quality and development of vegetables. This part of the work is detailed in [4]. Figure 6b shows the vegetables after three months of growth. After applying the proposed method to the cultivation, the mosaics of RGB (upper) and multispectral (lower) images show an optimal development of the vegetables, allowing verification of the proposed method. The same vegetables as in Fig. 6a have been marked in red to show the change with respect to those initially transplanted. The result is a crop that has developed consistent and uniform growth. Finally, Fig. 7 shows the variation of the NDVI indices before and after: before (Fig. 6a), the three plants marked with a circle have an NDVI index lower than 0.3, which means that their vegetative conditions are not optimal; afterwards (Fig. 6b), once the proposed method has been applied and the plants have grown, this index improves considerably, and all plants have an average NDVI of 0.4.
(a) Crop images before fertilization
(b) Post-fertilization crop images
Fig. 6. RGB and multi-spectral analysis of the influence of the proposed method on the growth of vegetables. Plants are numbered from left to right [1-9] as reference.
Fig. 7. Comparison of the NDVI mean value calculated after (blue) and before (gray) plant growth.
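The NDVI values compared above are computed per pixel from the near-infrared and red reflectance bands as NDVI = (NIR - Red)/(NIR + Red). A minimal sketch with illustrative reflectance values (not the paper's data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for paired band values."""
    return [(n - r) / (n + r) if (n + r) else 0.0 for n, r in zip(nir, red)]

# Illustrative reflectance values for three plants: one stressed plant
# (low NIR, so NDVI < 0.3) and two healthy ones.
nir = [0.30, 0.55, 0.60]
red = [0.20, 0.22, 0.20]

values = ndvi(nir, red)
print([round(v, 2) for v in values])  # [0.2, 0.43, 0.5]
```

Healthy vegetation reflects strongly in NIR and absorbs red light, so higher NDVI indicates better vegetative condition, which is why the paper's threshold of roughly 0.3 separates plants needing fertilization from those that do not.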
5 Conclusions

We have presented ROBOFERT, a method that proposes improving the production of vegetables grown in rows through organic fertilization executed at the plant level through an advanced interface. The proposed method focuses on monitoring the variables of vegetative interest in a cultivation row and controlling a mobile platform equipped with a robotic arm and sensory and actuation equipment to carry out fertilization tasks. Through an IHM, the information from the sensory systems is displayed, processed, and transmitted by ROS to an operator located in an external monitoring station, who can decide whether or not to execute the fertilization process based on the information and suggestion issued by the IHM. To validate this proof of concept, tests with the robotic platform and the IHM have been carried out in fields with row cabbage crops. The influence of the proposed method has been verified by analyzing NDVI indices from multispectral images captured before and after cultivation. The incidence of virtual reality interfaces and robotic tools in PA has shown promising results. The NDVI indices allow validating this method: in the initial phase of transplantation, the NDVI index in 3 of the plants in the section of the crop row was less than optimal (<0.3).

(>50%: high heterogeneity).
We have presented ROBOFERT, a method that proposes improving the production of vegetables grown in rows through organic fertilization executed at the plant level through an advanced interface. The proposed method focuses on monitoring the variables of vegetative interest in a cultivation row and control of a mobile platform equipped with a robotic arm, sensory and actuation equipment to develop fertilization tasks. Through an IHM, the information from the sensory systems is displayed and processed and transmitted by ROS to an operator located in an external monitoring station, which can or not, execute the fertilization process based on the information and suggestion issued by the IHM. To validate this proof of concept, tests with the robotic platform and the IHM have been carried out in fields with row cabbage crops. The influence of using the proposed method has been verified by analyzing NDVI indices of multispectral sensors based on images captured before and after cultivation. The incidence of virtual reality interfaces and robotic tools in the PA has shown promising results. NDVI indexes allow validating this method. In the initial phase of transplantation, the NDVI index in 3 of the plants in the section of the crop row was less than optimal ( 50% high heterogeneity). Moreover, according to these criteria, the CV of the Pre Test was 53%, showing that the fifth cycle students started mathematics classes with a level of “great heterogeneity” in their learning. Then in the Process Test, it showed a CV of 38%, showing a “heterogeneous” level of mathematical mastery (indicating a lesser degree of difference in the achievement of mathematical competencies). In addition, the Post Test showed a CV of 17% demonstrating “moderate homogeneity” in the level of achievement of mathematical competencies at the end of the course.
R. Zuñiga-Quispe et al.
The analysis of the means and the coefficients of variation allows us to partially conclude that academic performance in mathematics is increasing and, at the same time, reaching a level of homogeneity of the mathematical competencies in line with the expected learning.

Table 2. Means and coefficients of variation.

Virtual evaluations   Minimum   Maximum   Mean    SD     CV
Pre test              0         20        12.00   6.36   0.53 (53%)
Process test          0         20        14.17   5.43   0.38 (38%)
Post test             4         20        18.17   3.05   0.17 (17%)

Note: In Table 2, SD is standard deviation and CV is coefficient of variation.
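The coefficient of variation in Table 2 is simply the standard deviation divided by the mean (CV = SD / mean), which can be checked against the reported values:

```python
# Check Table 2's coefficients of variation: CV = SD / mean.
rows = {
    "Pre test": (12.00, 6.36),
    "Process test": (14.17, 5.43),
    "Post test": (18.17, 3.05),
}

cv = {name: round(sd / mean, 2) for name, (mean, sd) in rows.items()}
print(cv)  # {'Pre test': 0.53, 'Process test': 0.38, 'Post test': 0.17}
```

All three reproduce the table's CV column, confirming the internal consistency of the reported statistics.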
3.2 Difference of Means

Table 3 shows the analysis with the Student's t statistic according to gender. In this analysis, according to the significance values, no differences are observed between the scores obtained by male and female students (p > .05).

Table 3. Difference of means.

Virtual evaluations   Gender   Mean    t       p
Pre test              Men      13.05   1.069   .293
                      Women    10.75   1.074   .291
Process test          Men      13.68   −.573   .571
                      Women    14.75   −.584   .563
Post test             Men      17.89   −.480   .634
                      Women    18.50   −.498   .623

Note: In Table 3, t is the Student's t statistic and p the significance.
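The comparison in Table 3 is an independent-samples Student's t test. The pooled-variance t statistic can be sketched as follows; the score lists are synthetic illustrations on the 0-20 scale, not the study's raw data, and a library such as scipy.stats.ttest_ind would additionally return the p-value:

```python
import math
from statistics import mean, variance

def student_t(a, b):
    """Independent-samples Student's t statistic (pooled variance)."""
    na, nb = len(a), len(b)
    # Pooled variance weights each group's variance by its degrees of freedom.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Synthetic scores (illustrative only, not the study data).
men = [12, 14, 15, 11, 13]
women = [13, 12, 16, 14, 11]

t = student_t(men, women)
print(round(t, 3))
```

A small |t| (here well below common critical values) is what yields p > .05, as in Table 3, i.e. no detectable gender difference.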
3.3 Exploratory Analysis with Boxplot

Then, for a better exploratory analysis, a box-plot analysis was performed [17] to identify particular cases that could stand out or present deficiencies in the level of achievement of mathematical competencies. For this analysis, the “virtual evaluation” variable (pre test, process test, and post test) was entered on the X axis and the “academic performance” variable (0-20) on the Y axis. Thus, in the boxplot (see Fig. 3) it can be seen that for the Pre Test the central cases (50% of students, located between the 25th and 75th percentiles, Tukey's first and third hinges) lie between scores 5 and 15; and from the 75th percentile of Tukey's third hinge up to the greatest value, 25% of the sample (approximately 8 students) were
Teaching of Mathematics Using Digital Tools Through the Flipped Classroom
located between scores 15 and 20. However, a total of approximately 8 students lie between 10 and 0 points. For the Process Test, the central cases (50% of students, located between the 25th and 75th percentiles, Tukey's first and third hinges) lie between scores 10 and 20; including the values from the 75th percentile of Tukey's third hinge up to the greatest value (25% of the sample, approximately 8 students), 25 students in total are located between scores 10 and 20. Again, approximately 8 students are found between 10 and 0 points. For the Post Test, the central cases are close to score 15, and from the 75th percentile of Tukey's third hinge up to the greatest value, 25% of the sample (approximately 8 students) were located between scores 15 and 20 (25 students in total). It can also be seen in the Post Test that between the 25th percentile and the lowest value there are approximately 8 students with scores close to 10 and 15. But unlike the Pre Test and the Process Test, an atypical case can be observed (student 31), who has the lowest score of the sample (04), evidencing the need for more personalized attention in the teaching of mathematics.
Fig. 3. Boxplot analysis (Mathematic evaluation).
The box-plot analysis confirmed that there was an increase in the students' academic performance, showing that approximately 75% of them showed an increase in the achievement of the mathematical competencies expected by MINEDU [5]. Next, exploratory analyses were carried out, and in the Post Test two atypical cases were identified for each group of men and women, and one extreme case was identified in the group of men. These outliers and extreme cases require special attention, in the form of tutoring, to optimize their performance in mathematics (see Fig. 4).
Fig. 4. Boxplot analysis (Post test performance).
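The outlier criterion behind these boxplots is Tukey's rule: values beyond 1.5 × IQR from the first or third quartile are flagged as atypical. A minimal sketch with synthetic post-test scores including one very low case, like the one observed above (the values are illustrative, not the study data):

```python
from statistics import quantiles

def tukey_outliers(scores):
    """Flag values beyond 1.5 * IQR from the first/third quartiles."""
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [s for s in scores if s < low or s > high]

# Synthetic post-test scores (0-20) with one very low case, as in Fig. 4.
scores = [15, 16, 17, 18, 18, 19, 20, 4]
print(tukey_outliers(scores))  # [4]
```

Here the quartiles sit at 15.75 and 18.25, so the lower fence is 12.0 and only the score of 4 is flagged, mirroring how the atypical student (score 04) stands out against an otherwise homogeneous group.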
4 Discussion

The purpose of this study was to analyze the experiences of teaching mathematics through the flipped classroom with fifth-cycle students of a private educational institution in Callao (Lima, Peru), following a methodology of systematization of experiences using the flipped classroom for the teaching of mathematics to elementary school students. This study confirmed that digital tools such as the Classroom platform [5] are optimal technological resources for the development of learning sessions through the flipped classroom [2]. Moreover, precisely because this concerns teaching and learning, the usefulness of technological tools and the flipped classroom for developing synchronous sessions and spaces for asynchronous activities is also evidenced [8]. In addition, it is essential to mention that the systematized experience evidences the expected achievements in the mathematical competencies proposed by MINEDU [5]. Indeed, the analyses with the coefficients of variation [16] showed that the students in the initial evaluation (pre-test) had very heterogeneous grades; but over the development of the learning sessions they developed the skills for solving problems of quantity; regularity, equivalence, and change; form, movement, and location; and data management, reaching in the post-test levels of homogeneity in line with the expected learning achievements [5]. With the present findings, it is not intended to affirm that digital tools and the flipped classroom are completely efficient technological resources for the development of mathematics learning sessions. It should also be taken into account that the educational background related to mathematical performance in Peru is not entirely satisfactory [1, 6].
It should also be considered that, with the Covid-19 pandemic, schools were closed worldwide, affecting more than 1,520 million students and 63 million teachers, and harming most the lower social classes, who would be affected by not being able to access virtual education
[1]; and, of course, they would not be able to develop their mathematical competencies. However, given the criticisms that virtual education may receive due to its alleged inefficiency caused by the lack of teachers, the present study helps to confirm that it is feasible to guarantee gradual learning of mathematics in primary school children.
5 Conclusions and Future Works

At the end of the experience, the expected results were obtained: implementing the flipped classroom through the Classroom platform for the teaching of mathematics was favorable thanks to the constant use of digital tools in synchronous and asynchronous classes. In addition, the students demonstrated various soft skills, such as planning and time management, as evidenced in the timely fulfillment of their asynchronous activities. Interaction with others and the strengthening of teamwork were also observed in the activities of their synchronous classes. Likewise, the achievement of mathematical competencies through the flipped classroom was confirmed, as evidenced by the grades and by the students' autonomy when participating in problem solving in the synchronous classes. Similarly, a significant adaptation to change in this type of teaching could be noted. It is necessary that teachers continue to innovate in virtual teaching, making correct use of digital tools, and that educational institutions have their own platforms for the development of the flipped classroom. In addition, it is important to mention that this study can be used to analyze the various soft skills possessed by the students who improved their grades, as opposed to the students who did not meet the expected achievement. Furthermore, its validity can be confirmed in larger samples, and its implementation in face-to-face education as a teaching-learning tool can be evaluated. On the other hand, studies can be carried out to analyze students' level of self-learning as a result of the asynchronous activities. Finally, it is important to disseminate this experience to publicize the results of implementing the flipped classroom for the teaching of mathematics, supporting the work of teachers.
References

1. Organisation for Economic Co-operation and Development (OECD): Peru: Student performance (PISA 2015). http://gpseducation.oecd.org/CountryProfile?primaryCountry=PER&treshold=10&topic=PI. Accessed 21 Apr 2021
2. Bergmann, J., Sams, A., Cols, S.: What is flipped learning? Flipped Learning Network (FLN) (2014). http://www.flippedlearning.org/cms/lib07/VA01923112/Centricity/Domain/46/FLIP_handout_FNL_Web.pdf. Accessed 10 May 2021
3. Lepp, L., Aaviku, T., Leijen, L., Pedaste, M., Saks, K.: Teaching during COVID-19: the decisions made in teaching. Educ. Sci. 11(47) (2021). https://doi.org/10.3390/educsci11020047
4. Alcántara, A.: Educación superior y Covid-19: una perspectiva comparada. In: Casanova, H. (ed.) Educación y pandemia. Una visión académica, pp. 75–82. Universidad Nacional Autónoma de México: Instituto de Investigaciones sobre la Universidad y la Educación (2020)
5. Ministerio de Educación del Perú: Programa curricular de educación primaria (2016). http://www.minedu.gob.pe/curriculo/pdf/programa-nivel-primaria-ebr.pdf. Accessed 08 Jun 2021
6. Ministerio de Educación del Perú: PISA: Evaluación PISA (2018). https://es.calameo.com/read/006286625977c1ced4d6c?view=slide&page=1. Accessed 06 Jul 2021
7. Hernandez, C., Tecpan, S.: Aula invertida mediada por el uso de plataformas virtuales: un estudio de caso en la formación de profesores de física. Estudios Pedagógicos 43(3), 193–204 (2017). https://scielo.conicyt.cl/pdf/estped/v43n3/art11.pdf. Accessed 21 Apr 2021
8. Basso, M., Bravo, M., Castro, A., Moraga, C.: Propuesta de modelo tecnológico para flipped classroom (T-FliC) en educación superior. Revista Electrónica Educare 22(2), 1–17 (2018)
9. Moodle: Start building your online learning site in minutes (2021). https://moodle.com/getstarted/. Accessed 14 Mar 2021
10. Google: Obtén más tiempo para enseñar e inspirar a los alumnos con Classroom (2021). https://edu.google.com/intl/es-419/products/classroom/. Accessed 16 Apr 2021
11. Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura, Ministerio de Educación del Perú: Docentes y sus aprendizajes en modalidad virtual. Punto & Grafía SAC, Lima (2017)
12. Vega, J., Niño, F., Cárdenas, Y.: Enseñanza de las matemáticas básicas en un entorno e-Learning: un estudio de caso de la Universidad Manuela Beltrán Virtual. Revista EAN 79, 172–187 (2015). http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0120-81602015000200011. Accessed 30 Apr 2021
13. Goodwin, J.: Research in Psychology: Methods and Design. Brujas, Córdoba (2010)
14. Congreso de la República del Perú: Ley de Protección de Datos Personales, Ley N.° 29733. Lima, Perú (2013)
15. Congreso de la República del Perú: Código del Niño y del Adolescente, Ley N.° 27337. Lima, Perú (2000)
16. Rustom, A.: Estadística descriptiva, probabilidad e inferencia: una visión conceptual y aplicada. Universidad de Chile, Santiago (2012)
17. Helsel, D.: Statistics for Censored Environmental Data Using Minitab and R, 2nd edn. Wiley, Denver (2011)
Assessing Digital Competencies of Students in the Fifth Cycle of Primary Education: A Diagnostic Study in the Context of Covid-19

Yesbany Cacha-Nuñez, Roxana Zuñiga-Quispe, and Ivan Iraola-Real(B)

University of Sciences and Humanities, Lima 15314, Peru
{yescachan,roxzunigaq}@uch.pe, [email protected]
Abstract. The context of virtual education requires not only the availability of Information and Communication Technologies (ICT), but also the development of the digital competencies that mediate the teaching and learning processes. These digital competencies, together with the development of autonomy, are established as transversal competencies in the National Curriculum of the Peruvian Ministry of Education. It is in this context that evaluating students’ digital competencies becomes urgent. For these reasons, the present study aims to diagnose the level of digital competencies of primary school students in Lima, Peru. The sample consisted of 118 students: 67 females (56.8%) and 51 males (43.2%); 71 (60.2%) were in fifth grade and 47 (39.8%) in sixth grade. A scale assessing 5 digital competencies was applied, which showed an adequate level of reliability (α = 0.83). Finally, the results showed that the students have optimal levels of development of digital competencies; however, the exploratory analysis identified 2 students with difficulties in the digital content creation competency and 2 with difficulties in the digital security competency.

Keywords: Digital skills · Virtual learning · Primary education · Information technology
1 Introduction

In the 21st century, known as the technological era, students demand that teachers keep up to date and innovate their strategies to face the changes this education requires [1]. The integration of technology in the educational field has enhanced teaching and learning; in the current context, this virtual educational proposal requires students to develop digital skills for meaningful learning and to reduce the educational gaps that widened during the pandemic. Moreover, the pandemic added another degree of difficulty to education because it occurred abruptly and, in most cases, there was no contingency plan for moving to virtual classes, not only in Peru but worldwide [2].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 85–96, 2022. https://doi.org/10.1007/978-3-030-96147-3_7
In this context, it cannot be denied that virtual education has generated new forms of educational inclusion, reaching students with disabilities, mothers, and workers who for various reasons were unable to study in person. However, it must be borne in mind that, in the current context of Covid-19, approximately 166 nations worldwide were affected by the pandemic and forced to close their schools, affecting approximately 1,520 million students and more than 60 million teachers [3]. What also became evident was the abrupt need to take on virtual education with technological innovations [4]. This generated the need to rethink the way of teaching using technological tools without losing sight of the pedagogical purpose of the educational process. Thus, in this context, in addition to professional or academic competencies, the need for digital competencies became evident. One could, of course, assume the supremacy of digital competencies over classical competencies, perhaps considering the latter obsolete [5]. However, what is relevant in education is to train learners while guaranteeing learning, and therefore to understand that digital competencies are a means of education, not an end.
2 Digital Competencies

Given the pace of technological advances, there is a need to conceptualize the term digital competencies, which is also required by different lines of research. UNESCO defines them as the “spectrum of competencies that facilitate the use of digital devices, communication applications and networks to access and better manage information” (para. 3) [6]. Developing digital competencies empowers people with respect to social and cultural issues [7], as these competencies foster citizen participation in an increasingly digitized society [8]. In the educational field, they represent, from a general point of view, a set of knowledge, skills and abilities that facilitate learning based on innovation [7]. Information and Communication Technologies (ICT), and the amount of information about their use in the educational system, represent new challenges for teachers: they must understand that technological training alone is not enough, since what matters most is to assume the development of digital competencies as a benefit for teaching and learning [9]. This training and internalization process must be dynamic, owing to the rapid change of technology and the characteristics of students known as digital natives, who increasingly prefer hypertextual content, abundant information, parallel activities, and playful activities instead of quizzes [10]. Digital competencies are therefore considered indispensable in education with new technologies, enabling students to incorporate themselves effectively into society and their future work; thus education includes ICT as a valuable tool for the professional world and for generating equal opportunities [11].
Developing these skills in students consists not only of providing them with technological tools, but also of knowing which standards must be met to be digitally competent. The following table shows the digital skills to be developed in students (Table 1):
Table 1. Adapted from the common framework of digital competence in education [12]

Digital competency: Description
Information and information literacy: Correct selection of web pages, evaluation of information and management of virtual terms
Communication and collaboration: Use digital tools to communicate, share information and digital content, and collaborate through digital channels
Creation of digital content: Represent writing in a virtual format and create digital content
Security: Protection of personal data and technological devices
Technological problem solving: Update programs and applications, share information, and install peripheral equipment
2.1 Assessment of Digital Competencies

Structured tests can be used to assess students’ digital competencies. Within the Digital Competence Assessment (DCA) framework there is the instant DCA, which is considered a quick means of assessment that educational institutions can adopt. This test considers three dimensions: the technological, the cognitive and the ethical [13]. At the level of Peruvian basic education, these dimensions are reflected in the guidelines of the Ministry of Education [14], which orient teachers’ work to strengthen digital skills; their purpose is to provide pedagogical guidance supported by the use of ICT. Many of these guidelines were provided through the “Aprendo en Casa” (I learn at home) program [15]. However, it is important to mention that this was a contingency measure, because there was no adequate preparation to deal with virtual education in the state of emergency [2]. Even though the National Curriculum for Regular Basic Education in Peru has proposed, since 2016, to work on the transversal competencies “works in virtual environments generated by ICT” and “manages their learning autonomously” [16], it is evident that there was no adequate preparation either at the governmental level or in the schools themselves. After analyzing the virtual educational context brought about by the Covid-19 pandemic [3], it is important to reflect on the value of diagnosing the level of development of digital competencies of teachers and students in basic education. These are the reasons why the present study proposes, as its objective, to diagnose the level of digital competencies of elementary school students in Lima, Peru.
3 Methodology

The work methodology is presented next. It consists of preliminary analyses to establish the validity and reliability of the applied scale, followed by descriptive statistical analyses to identify the means, and then exploratory analyses to identify particular cases of students with difficulties in digital competencies. Due to the
characteristics of the study, a quantitative research approach [17] of an evaluative type [18] was adopted.

3.1 Participants

Due to the context, convenience sampling was applied [19]. 118 elementary school students were selected: 67 females (56.8%) and 51 males (43.2%). Of the total, 71 (60.2%) were in the fifth grade of primary education and 47 (39.8%) were in the sixth grade.

3.2 Instrument

Digital Competences for Students Scale [20]. This is a multifactorial scale with 15 items distributed in 5 dimensions corresponding to the digital competences in the theory of Tourón, Martín, Navarro, Pradas and Íñigo [20], covering digital knowledge, social interaction, collaborative activities, computer security, digital creation and technological problem solving. It also takes the DCA into account, since the instrument evaluates technological, cognitive and ethical aspects. In the present study, the validity of the scale was evaluated by means of an exploratory factor analysis, in which the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy supported the validity of the scale with a score of .72, and Bartlett’s test of sphericity was also significant (χ2 = 559.317, df = 105, p < .001). Finally, when the internal consistency of the general scale was evaluated, Cronbach’s alpha coefficient showed reliability with a score of .83.
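As a hedged illustration of how a reliability coefficient of this kind is computed (the item responses below are invented for the example; the study's raw data are not published), Cronbach's alpha can be obtained from a respondents-by-items matrix as follows:

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents' item scores.

    responses: one row per respondent, each row a list of Likert item
    scores (e.g., 1-5). Uses sample variances throughout.
    """
    k = len(responses[0])                       # number of items
    items = list(zip(*responses))               # transpose: one tuple per item
    sum_item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 5-respondent, 3-item Likert data (NOT the study's data)
demo = [
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
    [2, 3, 3],
    [4, 5, 5],
]
alpha = cronbach_alpha(demo)  # about 0.92 for this toy matrix
```

The coefficient approaches 1 as items co-vary strongly with the total score; values around .80 or above, such as the .83 reported here, are conventionally read as adequate internal consistency.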
4 Results

4.1 Descriptive Statistics

The analysis of descriptive statistics was carried out in order to fulfill the proposed objective of diagnosing the levels of achievement of digital competencies of elementary school students. According to the response options of the Likert-type items (1 never to 5 always), the following evaluation criteria were established for the development of digital competencies: 1 “very basic progress”, 2 “basic progress”, 3 “progress in process”, 4 “optimal progress” and 5 “satisfactory progress”. Accordingly, Table 2 shows that the mean for “digital competencies in general” is 3.89, which indicates that students have “optimal progress” with respect to digital knowledge, social interaction, collaborative activities, computer security, digital creation and technological problem solving. However, since the analysis of the mean does not indicate total mastery of digital competencies (Fig. 1), and because this is a diagnostic study aimed at identifying students with greater difficulties, an exploratory analysis by sections and by competency was undertaken. Thus, Table 2 shows that the competency “information and information literacy” has a mean of 3.78, which indicates that students have “optimal progress” in efficiently selecting Internet pages to perform their school tasks; they have also been able to use Drive efficiently and to recover deleted files. Then, it can be observed
Fig. 1. Scatterplot of digital skills according to educational level and gender.
that the competency “communication and collaboration” obtained an average of 3.93, demonstrating “optimal progress” in achieving agile communication and in working as a team using digital tools. Regarding the competency “digital content creation”, an average of 3.61 was observed, again showing “optimal progress” in the elaboration of concept maps, videos and audios and in solving mathematical exercises using ICT. Then, for the “digital security” competency, an average of 4.02 was observed, showing “optimal progress” in protecting files, documents, and personal and family information when using digital resources. Finally, in the “digital problem solving” competency, an average of 4.11 was observed, also showing “optimal progress” in updating computer programs, sharing files over the Internet and connecting peripheral devices to the personal computer (printers, microphones, cameras, etc.).

Table 2. Descriptive statistics

Variables/dimensions                         N    Minimum  Maximum  Mean  SD
1. Information and Information Literacy      118  2        5        3.78  0.801
2. Communication and Collaboration           118  2        5        3.93  0.859
3. Digital Content Creation                  118  1        5        3.61  1.015
4. Digital Safety                            118  1        5        4.02  0.916
5. Digital Problem Solving                   118  2        5        4.11  0.935
6. Digital Competences (General)             118  2        5        3.89  0.680
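The mapping from a mean score to the five progress labels used in the analysis can be sketched as follows. This is a minimal illustration under an assumption: the paper does not publish its exact cut-off rule, so simple rounding to the nearest scale point is used here.

```python
LABELS = {
    1: "very basic progress",
    2: "basic progress",
    3: "progress in process",
    4: "optimal progress",
    5: "satisfactory progress",
}

def progress_label(mean_score):
    """Map a Likert-scale mean (1.0-5.0) to a progress label by rounding."""
    return LABELS[min(5, max(1, round(mean_score)))]

# Means reported in Table 2
label_general = progress_label(3.89)   # "optimal progress"
label_security = progress_label(4.02)  # "optimal progress"
```

Under this rule every dimension mean in Table 2 (3.61 to 4.11) rounds to level 4, which is consistent with the uniform “optimal progress” reading given in the text.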
4.2 Exploratory Analysis with Boxplots

Because the present study has a diagnostic orientation, exploratory analyses were performed. For this type of analysis, box plots were used to identify particular cases [21] of greater or lesser progress in digital competencies. Thus, in Fig. 2, which analyzes the “information and information literacy” competency, it can be seen that for fifth grade the median is close to level 4 (optimal progress), while for sixth grade it is located at level 4 (optimal progress). For both grades, the central cases (the 50% of students located between the first and third Tukey hinges, i.e., the 25th and 75th percentiles) lie between levels 3 and 5. Also for both grades, the cases from the 75th percentile to the highest value lie between levels 4 and 5; that is, 25% of students in each grade (18 in fifth grade and 12 in sixth grade) show “optimal progress” or “satisfactory progress”. Only fifth grade shows values below level 2, reflecting “very basic progress” and “basic progress”.
Fig. 2. Boxplot for the competency “information and information literacy” with grade level.
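The outlier identification underlying these boxplots (atypical cases lying beyond the Tukey fences, i.e., more than 1.5 interquartile ranges outside the hinges) can be sketched as follows; the scores below are invented for illustration, not the study's data:

```python
from statistics import quantiles

def tukey_outliers(scores):
    """Return values lying outside the 1.5 * IQR Tukey fences."""
    # First and third hinges approximated by the inclusive quartiles
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in scores if x < lo or x > hi]

# Hypothetical competency scores on the 1-5 scale (not the study's data)
fifth_grade = [3, 4, 4, 4, 5, 5, 4, 3, 4, 1]
outliers = tukey_outliers(fifth_grade)  # the score of 1 is flagged
```

A student flagged this way corresponds to the “atypical cases” the paper reports at level 1, which is how the 2 + 2 students with difficulties were detected.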
Figure 3, which analyzes the competency “communication and collaboration”, shows that for fifth grade the median is at level 4 (optimal progress). The central cases (the 50% of students located between the first and third Tukey hinges, i.e., the 25th and 75th percentiles) lie between levels 3 and 5; from the smallest value to the 25th percentile (first Tukey hinge), 25% of students lie between levels 2 and 4, equivalent to “basic progress” and “optimal progress”, and from the 75th percentile (third Tukey hinge) to the highest value, the other 25% of students (approximately 18) lie between levels 4 and 5, corresponding to “optimal progress” and “satisfactory progress”. In the case of sixth grade, the median is located close to level 4 (optimal progress), and the central cases (the 50% of students between the first and third Tukey hinges) also lie between levels 3 and 5; from the smallest value to the 25th percentile, 25% are located in levels 2 and 4,
equivalent to “basic progress” and “optimal progress”, and from the 75th percentile (third Tukey hinge) to the highest value, the other 25% of students (approximately 12) lie between levels 4 and 5, corresponding to “optimal progress” and “satisfactory progress”.
Fig. 3. Boxplot for the competency “communication and collaboration” with grade level.
Figure 4, which analyzes the competency “creation of digital content”, shows that for both fifth and sixth grade the median is close to level 4 (optimal progress). For both grades, the central cases (the 50% of students located between the first and third Tukey hinges, i.e., the 25th and 75th percentiles) lie between levels 3 and 5. Also for both grades, the cases from the 75th percentile to the highest value lie between levels 4 and 5; that is, 25% of students in each grade (18 in fifth grade and 12 in sixth grade) reflect “optimal progress” or “satisfactory progress”. Nevertheless, in fifth grade two atypical cases at level 1 were identified, showing “very basic progress”; it is necessary to provide these students with guidance on the use of digital resources. Figure 5, which analyzes the “digital safety” competency, shows that for fifth grade the median is at level 4, which represents an “optimal level”. The central cases (the 50% of students between the first and third Tukey hinges) lie between levels 3 and 5; from the smallest value to the 25th percentile (first Tukey hinge), 25% lie between levels 1 and 4, equivalent to “very basic progress” and “optimal progress”, and from the 75th percentile (third Tukey hinge) to the highest value, the other 25% of students (approximately 18) lie between levels 4 and 5, corresponding to “optimal progress” and “satisfactory progress”. However, two extreme cases close to level 2, representing a “basic level”, were identified. For sixth grade, the median lies between level 4 (optimal progress) and level 5 (satisfactory progress). This can be seen in the central cases (the 50% of students between the first and third Tukey hinges), which are located between levels 3 and
Fig. 4. Boxplot for the competency “digital content creation” with grade level.
5; from the smallest value to the 25th percentile (first Tukey hinge), 25% of sixth grade students (approximately 12) lie at the “basic progress” and “optimal progress” levels, and from the 75th percentile (third Tukey hinge) to the highest value, the other 25% of students lie between levels 4 and 5, demonstrating better performance in this competency compared with fifth grade. For fifth grade, two outliers were identified (cases 62 and 79), who show “very basic progress” in mastery of the digital security competency.
Fig. 5. Boxplot for the competency “digital safety” with grade level.
Finally, Fig. 6, which analyzes the “digital problem solving” competency, shows that in both fifth and sixth grade the median exceeds level 4, which represents an optimal level of development of this competency. Confirming this similarity, in both groups the central cases (the 50% of students located between the first and third Tukey hinges, i.e., the 25th and 75th percentiles) lie between a point close to level 3 and level 5. Moreover, from the smallest value to the 25th percentile (first Tukey hinge), 25% of the sample (approximately 18 students in fifth grade and 12 in sixth grade) lie between levels 1 and 4, equivalent to the “very basic” and “optimal” levels.
Fig. 6. Boxplot for the competency “digital problem solving” with grade level.
5 Discussion

Developing digital competencies in virtual education is paramount, since their implementation and incorporation are part of developing conceptual, procedural and attitudinal knowledge in the use of ICT and in learning. Among related findings, a survey of fifth and sixth graders in three schools confirms that digital competencies are considered key and basic because they benefit all individuals: they allow them to participate in different social networks and to use and organize information efficiently, enabling the updating of skills and knowledge that will be useful throughout their lives [22]. Likewise, according to one specific study, it is important to develop and reinforce the security competency because, if this is not done, risky behaviors can occur, such as chatting with strangers, playing online for many hours or falling into addictions, as well as accidentally sharing family members’ personal information through networks [23]. In this problem, the security factor is essential for the proper use of digital tools in the context of the massification of virtual education. Although the students in this study present optimal results in each digital competency, it is still unknown how they subjectively conceive the protection of personal and family data, which requires further study.
On the other hand, one study mentions that a disadvantage of developing digital competencies is that students can access varied information very easily, which favors distraction, for example in the use of social networks [24]. Another study, of 160 Mexican students, indicates that 78% of students lack mastery of digital competencies, which prevents them from expressing themselves using images, videos and other virtual media [9]. It can thus be inferred that the development of digital competencies requires a more in-depth evaluation; a limitation of the present study, for example, is that it was not possible to evaluate competencies related to the responsible use of digital tools and information. In addition, it remains pending to identify the type of digital tools that children use: although they are usually called “digital natives”, this is not evident when they use virtual platforms or office programs. In general, it is necessary to evaluate digital skills especially with educational technological resources.
6 Conclusions and Future Works

The objective of this research was to diagnose the level of digital competencies of primary school students in Lima, Peru. These competencies should be related to the transversal competencies proposed by MINEDU, seeking to improve student learning in this new educational modality. The general conclusions are the following:

• It is necessary to develop digital competencies in students for a sound teaching-learning process, all the more so in times of compulsory virtual education, as this helps to eliminate gaps of space and time through the adequate use of digital tools.
• Students’ digital competencies should be evaluated because they are linked to their academic progress; in addition, each dimension of the competencies brings with it several fields that must be mastered, since they allow students to develop in other aspects of life. Such is the case of the “digital content creation” competency, whose lack of mastery prevents students from expressing themselves using digital tools, or the “security” competency, whose poor mastery does not guarantee the protection of personal information.
• By contrast, good mastery of the “information and information literacy” competency enables students to obtain information from reliable sources; the “communication and collaboration” competency enables students to interact efficiently with peers or teachers; and the “digital problem solving” competency enables students to update programs or applications.
• According to the results obtained, the time of study and practice in this virtual modality has allowed students to reach optimal levels in the development of digital competencies.
• While it is true that most students show optimal levels in the development of digital skills, the analysis of students with difficulties identified 2 cases in the digital content creation competency and 2 others in the digital security competency; it is therefore considered that these students should receive reinforcement from the teacher.
Finally, we invite researchers to continue this line of research in order to examine the relationship between learning and mastery of digital competencies through quantitative studies of a correlational type. It is also intended to continue the research starting with the 4 students who showed the lowest level in these competencies, which can be achieved through qualitative studies.
References

1. Valcárcel, A.: Las competencias digitales en el ámbito educativo. Universidad de Salamanca (2016)
2. United Nations Educational Scientific Cultural Organization: Covid-19 y educación superior: De los efectos inmediatos al día después. Análisis de impactos, respuestas políticas y recomendaciones (2020). http://www.iesalc.unesco.org/wp-content/uploads/2020/05/COVID-19-ES-130520.pdf. Accessed 06 July 2021
3. Alcántara, A.: Educación superior y Covid-19: Una perspectiva comparada. In: Casanova, H. (Comp.) Educación y pandemia. Una visión académica, pp. 75–82. Universidad Nacional Autónoma de México: Instituto de Investigaciones sobre la Universidad y la Educación (2020)
4. Lepp, L., Aaviku, T., Leijen, Ä., Pedaste, M., Saks, K.: Teaching during COVID-19: the decisions made in teaching. Educ. Sci. 11(2), 47 (2021). https://doi.org/10.3390/educsci11020047
5. Levano-Francia, L., Diaz, S.S., Guillén-Aparicio, P., Tello-Cabello, S., Herrera-Paico, N., Collantes-Inga, Z.: Competencias digitales y educación. Propósitos y Representaciones 7(2), 329 (2019). https://doi.org/10.20511/pyr2019.v7n2.329
6. United Nations Educational Scientific Cultural Organization: Las competencias digitales son esenciales para el empleo y la inclusión social (2018). https://es.unesco.org/news/competencias-digitales-son-esenciales-empleo-y-inclusion-social. Accessed 06 July 2021
7. García-Quismondo, M., Cruz-Palacios, E.: Gaming como Instrumento Educativo para una Educación en competencias Digitales desde los Academic Skills Centres. Revista General de Información y Documentación 28(2), 489–506 (2018)
8. Iordache, C., Mariën, I., Baelden, D.: Developing digital skills and competences: a QuickScan analysis of 13 digital literacy models. Italian J. Sociol. Educ. 9(1), 6–30 (2017)
9. Pech, S.H.Q., González, A.Z., Herrera, P.J.C.: Competencia digital en niños de educación básica del sureste de México.
RICSH Revista Iberoamericana de las Ciencias Sociales y Humanísticas 9(17), 289–311 (2020). https://doi.org/10.23913/ricsh.v9i17.199
10. Prensky, M.: Digital natives, digital immigrants. On the Horizon 9(5), 1–6 (2001)
11. Gutierrez, K., García, V., Aquino, S.: El desarrollo de las competencias digitales de niños de quinto y sexto año en el marco del programa de MiCompu.Mx en Tabasco. Perspectivas docentes 61 (2017)
12. Tourón, J., Martín, D., Navarro, E., Pradas, S., Íñigo, V.: Validación de constructo de un instrumento para medir la competencia digital docente de los profesores (CDD). Revista Española de Pedagogía 76(269), 25–54 (2018)
13. Calvani, A., Cartelli, A., Fini, A., Ranieri, M.: Models and instruments for assessing digital competence at school. J. E-Learn. Knowl. Soc. 4(3), 183–193 (2008)
14. Ministerio de Educación del Perú: Orientaciones pedagógicas para el servicio educativo de educación básica durante el año 2020 en el marco de la emergencia sanitaria por el coronavirus COVID-19, 1st edn. Ministerio de Educación del Perú, Lima (2020)
15. Ministerio de Educación del Perú: Aprendo en casa. Ministerio de Educación del Perú, Lima (2020)
16. Ministerio de Educación del Perú: Currículo Nacional de Educación Básica, Lima (2016)
17. Cadena, P.: Quantitative methods, qualitative methods or combination of research: an approach in the social sciences. Revista Mexicana de Ciencias Agrícolas 8(7), 1603–1617 (2017)
18. Kushner, S.: Evaluative Research Methods: Managing the Complexities of Judgment in the Field. Information Age Publishing, New York (2017)
19. Saumure, K., Given, L.: Convenience sample. In: The SAGE Encyclopedia of Qualitative Research Methods. Sage Publishing, Thousand Oaks, CA (2008)
20. Cacha-Nuñez, Y., Zuñiga-Quispe, R., Iraola-Real, I., Gonzales-Macavilca, M.: Analysis of digital and mathematical competences in elementary school students. In: V IEEE World Engineering Education Conference (EDUNINE) (2021)
21. Helsel, D.: Statistics for Censored Environmental Data Using Minitab and R, 2nd edn. Wiley, Denver (2011)
22. Gutiérrez, K., García, V., Aquino, S.: El desarrollo de las competencias digitales de niños de quinto y sexto año en el marco del programa de MiCompu.Mx en Tabasco. Textos y contextos (2017)
23. Muñoz-Repiso, A.G.-V., Blanco, L.S., Martín, S.C., Gómez-Pablos, V.B.: Evaluación de las competencias digitales sobre seguridad de los estudiantes de Educación Básica. Revista de Educación a Distancia (RED) 19(61), 5 (2019). https://doi.org/10.6018/red/61/05
24. Diaz, D.: TIC en Educación Superior: Ventajas y desventajas. Educación y tecnología 4, 44–50 (2014)
External Factors and Their Impact on Satisfaction with Virtual Education in Peruvian University Students

Ysabel Anahí Oyardo-Ruiz, Leydi Elizabeth Enciso-Suarez, and Ivan Iraola-Real(B)

University of Sciences and Humanities, Lima 15314, Peru
[email protected], [email protected]
Abstract. Satisfaction with virtual education is susceptible to several factors; therefore, the present research aimed to determine the impact of external factors on satisfaction with the virtual education provided by a private university in Lima, Peru. The sample consisted of 43 students, 2 males (4.7%) and 41 females (95.3%), between 20 and 53 years of age (Mage = 28.84, SD = 7.24), who completed surveys showing the following results: students who used only mobile data had a higher degree of satisfaction with virtual education; the student’s experience in virtual education was positively related to satisfaction with virtual education; and both the difficulties experienced by the student and the number of suggestions for improving virtual education were negatively related to student satisfaction.

Keywords: Distance education · Satisfaction · Online learning · Quality of education · Efficiency of education
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 97–108, 2022. https://doi.org/10.1007/978-3-030-96147-3_8

1 Introduction

The “virtual education” study modality has existed for a long time; depending on the context, it is offered on its own or alongside face-to-face education, replacing some agents of education. However, the one agent that cannot be replaced is the student, whereas media such as the classroom, books, blackboard and notebooks have virtual counterparts [1]. The learner is essential because, in this type of education, he or she must develop the autonomy and commitment to regulate his or her own educational activity [2]. Virtual teaching comprises both synchronous and asynchronous learning; in the latter, students learn at their own pace because the teacher’s presence is not required: a student who needs to listen to a lecture one or more times can do so without fear of delaying the other students in the class [3].

1.1 Virtual University Education

Virtual education transfers educational experiences outside the traditional classroom, that is, learning at any time and on any occasion, without geographical or scheduling
barriers, relying on the Internet for access to learning materials and for interaction with experts and other learners [4]. This educational offer of the millennium must be supported by optimal, innovative and motivating virtual tools and by modern technology, such as computers, digital technology, the Internet, associated software, and didactic material that facilitates meaningful learning [5]. For this reason, it is important that users feel that the tool they work with is friendly, easy to understand, and attractive, with appropriate content, so that they can acquire the required competencies and reach the end of the learning process having fulfilled the goals set [6]. Currently, universities abroad and in Peru are promoting the development of virtual courses on their respective platforms [4]. Several research studies agree that applying certain technologies to virtual education enables more effective learning, developing skills previously attributed only to face-to-face teaching [7]. However, institutions that offer virtual programs should provide innovative, efficient, and high-quality services, based not only on technological tools and communication equipment but also on the needs of the user [6]. It should be considered that students abandon their virtual education for different reasons, ranging from personal and socio-economic issues to external factors that university students experience, such as a lack of adequate support from their families and workplaces [8], or additional roles such as being a spouse, parent, and colleague [9]. In addition, the impact of these external factors spans the period before and during virtual courses, because they affect students’ decisions.
For example, when students have a heavy workload and little time for study [10]. In virtual education it is important to consider both external and internal factors because they can affect the decisions that university students make regarding whether to persist in or abandon virtual study programs; among these factors are basic physical limitations of work, academic aptitude, family and personal problems, motivation to study, integration into the virtual study group, and interaction [11]. It is therefore important for the university to be aware of the fundamental needs and factors of the learners, since these factors will influence the efficiency of their learning [12]. Such efficiency is related to student satisfaction, taking into account that a student feels satisfied when there is congruence between the expectations prior to the course and the results achieved after the experience [13]; therefore, the student's point of view is a key element for improving the management and development of educational programs, which in turn is key to student satisfaction [14]. In accordance with the above, the purpose of this research is to determine the impact of external factors on satisfaction with the virtual education provided by a private university in Lima, Peru. Additionally, the following specific objectives are proposed: to describe the aspects necessary for virtual education, and to analyze the relationship between external factors and satisfaction with virtual education among students of the professional school of primary education of a private university in Lima, Peru.
External Factors and Their Impact on Satisfaction
2 Methodology This article is applied research with a quantitative, explanatory methodological approach of a descriptive correlational type. 2.1 Participants The sample was random and representative, totaling 43 students, 2 of whom were male (4.7%) and 41 female (95.3%) (Fig. 1). Ages ranged from 20 to 53 years (Mage = 28.84, SD = 7.24). The students surveyed were in the sixth, seventh, eighth, ninth, and tenth cycles of the professional career of primary education at a private university in Lima, Peru (Fig. 2).
Fig. 1. Frequency analysis by gender (1 men and 2 women).
2.2 Instruments "We Want to Know Your Reality" Survey. This sociodemographic survey collects personal and educational data, as well as data associated with the use of technology and connectivity. It has 20 open and closed items (dichotomous and multiple choice) and one optional item. Virtual Distance Education Survey. This survey evaluates and diagnoses the university's virtual education. It has 7 closed items (dichotomous and multiple choice).
Fig. 2. Frequency analysis by age.
Satisfaction with Virtual Education Scale. This is a version of Ed Diener's life satisfaction scale [15], adapted to university virtual education contexts. It consists of 5 items with 5 response options on a Likert scale (Strongly disagree – Strongly agree). In the present study, validity was analyzed by means of exploratory factor analysis: the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy yielded a value of 0.82, Bartlett's test of sphericity was significant (χ2 = 92.571, df = 10, p < 0.001), and the scale was confirmed to be unidimensional, which demonstrates its validity [16]. The reliability analysis then yielded a Cronbach's alpha coefficient of 0.86, demonstrating that the scale is reliable [17].
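As an illustration of the reliability step, the Cronbach's alpha computation can be sketched in Python; the Likert responses below are synthetic stand-ins (the real survey data are not reproduced here):

```python
import random
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (each row = item scores).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = len(items[0])
    columns = list(zip(*items))
    item_vars = sum(statistics.variance(col) for col in columns)
    total_var = statistics.variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic 5-item Likert responses (1-5) for 43 hypothetical respondents,
# driven by a common latent factor so the items are internally consistent.
random.seed(0)
rows = []
for _ in range(43):
    latent = random.gauss(3, 1)
    rows.append([min(5, max(1, round(latent + random.gauss(0, 0.7))))
                 for _ in range(5)])

print(f"Cronbach's alpha = {cronbach_alpha(rows):.2f}")
```

With a strong common factor, alpha lands near the 0.86 range reported for the real scale; when all items are identical, the formula returns exactly 1.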
3 Results After analyzing the validity and reliability of the scales, we proceeded to analyze the relationships between the variables. According to the first specific objective, which was to describe the aspects necessary for virtual education, the following analyses were performed: frequencies, box plots, and correlation of variables. 3.1 Frequency Analysis When analyzing the respondents, it was identified that 3 (7%) were in the sixth cycle of studies, 6 (14%) in the seventh cycle, 27 (62.8%) in the eighth cycle, 3 (7%) in the ninth cycle, and 4 (9.3%) in the tenth cycle (Fig. 3).
Fig. 3. Frequency analysis by study cycle (sixth to tenth).
Fig. 4. Frequency analysis by marital status (1 single and 2 married).
Family demographic analyses were then performed, identifying that, of the 43 students, 18.6% were married and 81.4% were single (Fig. 4). It was also identified that 39.53% of the respondents have children, and of these, 94.12% are of school age (Fig. 5).
Fig. 5. Frequency analysis by number of children.
Finally, we analyzed the demographic factors of work, identifying that 46.51% of the respondents worked virtually an average of 3 to 5 chronological hours per day (Fig. 6).
Fig. 6. Frequency analysis by employment status.
3.2 Boxplot Analysis To obtain more specific results for each study cycle, box plots were used to evaluate the results [18]. For this analysis, the variable "student cycle" was entered on the X axis and the variable "satisfaction" on the Y axis, with the levels of the scale (1 totally disagree – 5 totally agree). Thus, in the first boxplot (Fig. 7) it was identified that for the five cycles (6th to 10th) the medians are located between values 3 and 4, corresponding to the estimation categories "neither disagree nor agree" and "agree", respectively. In addition, for the five samples, the central cases (the 50% of students located between the 25th and 75th percentiles, i.e., the first and third Tukey hinges) are located between values 2 and 4. Only the cases of the ninth cycle are higher: its central cases and highest value (75th percentile) are located between values 3 and 4. It is also observed that in the eighth cycle, 25% of the sample is located between the third Tukey hinge (75th percentile) and the highest value, corresponding to values 3 and 5, the estimation categories "neither disagree nor agree" and "totally agree".
Fig. 7. Simple box plot by student cycle (sixth to tenth).
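The quartile and whisker computations behind these box plots can be sketched as follows (in Python, on hypothetical scores; the linear-interpolation percentile convention is an assumption, since statistical packages estimate Tukey's hinges in slightly different ways):

```python
def percentile(sorted_vals, q):
    """Linear-interpolation percentile of an already-sorted list."""
    pos = (len(sorted_vals) - 1) * q / 100
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (pos - lo) * (sorted_vals[hi] - sorted_vals[lo])

def boxplot_summary(scores):
    """Median, hinges (25th/75th percentiles, approximating Tukey's hinges),
    and outliers flagged by the usual 1.5 * IQR whisker rule."""
    s = sorted(scores)
    q1, med, q3 = (percentile(s, q) for q in (25, 50, 75))
    iqr = q3 - q1
    outliers = [v for v in s if v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr]
    return {"q1": q1, "median": med, "q3": q3, "outliers": outliers}

# Hypothetical satisfaction scores (1-5 Likert) for one study cycle.
print(boxplot_summary([2, 3, 3, 3, 4, 4, 4, 4, 5, 5]))
```

For these hypothetical scores the central box spans values 3 to 4, mirroring the pattern described for the study cycles above.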
After this analysis, the variable "student work status" was entered on the X axis and the variable "satisfaction" on the Y axis, with the levels of the scale (1 totally disagree – 5 totally agree). Thus, in the second boxplot (Fig. 8) it was identified that, for the sample of 43 students, the medians are located between values 3 and 4, corresponding to the estimation categories "neither disagree nor agree" and "agree", respectively. In addition, for the first sample, those who work virtually (20 students), the central cases (the 50% of students located between the 25th and 75th percentiles, i.e., the first and third Tukey hinges) are located between values 2 and 4. For the sample of those who do not work (23), the central cases were also identified within values 2 and 4, but it was observed that the highest values (from the 75th percentile upward) are located between values 3 and 5, corresponding to the categories "neither disagree nor agree" and "totally agree".
Fig. 8. Simple box plot by student’s employment status.
For this last analysis, the variable "student's Internet connection" was entered on the X axis and the variable "satisfaction" on the Y axis, with the levels of the scale (1 totally disagree – 5 totally agree). Thus, in the third boxplot (Fig. 9) it was identified that for the three samples (wifi network, mobile data, and both) the medians are located between values 2 and 4, corresponding to the estimation categories "disagree" and "agree", respectively. In addition, for the wifi and both samples, the central cases (the 50% of students located between the 25th and 75th percentiles, i.e., the first and third Tukey hinges) lie between values 2 and 4. Only for the mobile-data sample are the scores higher: its central cases and highest value (75th percentile) are located between values 3 and 4. It is also observed that in the mobile-data sample there was an outlier (respondent 29) whose degree of satisfaction was 5 (totally agree). 3.3 Relation Between Variables According to the second objective, which was to analyze the relationship between external factors and satisfaction with virtual education among students of the professional school of primary education of a private university in Lima, Peru, a correlation analysis was performed by interpreting the rho coefficients and applying Cohen's (1992) criteria [19], which consider relationships mild when located between .10 and .23, moderate from .24 to .36, and strong from .37 upward. Based on these criteria, it was identified that "student experience in virtual education" was positively, strongly, and significantly related to "satisfaction with virtual education", demonstrating that this factor is directly proportional to student satisfaction: the greater the experience in virtual education, the greater the student satisfaction.
Fig. 9. Simple box diagram by student connection.
Then, the variable "student difficulties for virtual education" was negatively, strongly, and significantly related to "satisfaction with virtual education", showing that these factors are inversely related to student satisfaction: the greater the difficulties presented by the student (communication with teachers and classmates, difficulty concentrating, quality of/access to the Internet, lack of a comfortable space, etc.), the lower the student satisfaction. Finally, the variable "improvements for virtual education" was negatively, strongly, and significantly related to "satisfaction with virtual education", showing an inversely proportional relationship with student satisfaction; that is, the greater the need for improvements, the lower the student satisfaction (and vice versa) (Table 1).

Table 1. Correlation between variables.

Variables (vs. satisfaction with virtual education)   rho       p
1. Virtual means of student-teacher communication      0.32     0.16
2. Virtual means of class development                 −0.05     0.76
3. Student experience in virtual education             0.46**   0.002
4. Benefits for students in virtual education         −0.24     0.13
5. Student's difficulties for virtual education       −0.37*    0.02
6. Improvements for virtual education                 −0.42**   0.01
7. Virtual partials                                   −0.28     0.07

Note. *, **, *** show significant relationships. *p < 0.05, **p < 0.01, ***p < 0.001 (bilateral).
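The correlation-and-classification step can be sketched in Python on hypothetical ordinal data; Spearman's rank correlation is used here as one plausible reading of the "rho" coefficients reported in Table 1:

```python
def ranks(values):
    """1-based ranks; tied values share the average of their positions."""
    s = sorted(values)
    return [s.index(v) + (s.count(v) + 1) / 2 for v in values]

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    return pearson(ranks(x), ranks(y))

def cohen_label(rho):
    """Cohen's (1992) criteria as applied in the text:
    |rho| in .10-.23 mild, .24-.36 moderate, .37 or more strong."""
    r = abs(rho)
    if r >= 0.37:
        return "strong"
    if r >= 0.24:
        return "moderate"
    if r >= 0.10:
        return "mild"
    return "negligible"

# Hypothetical ordinal data: prior virtual-education experience vs. satisfaction.
experience = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
satisfaction = [2, 2, 3, 3, 4, 3, 4, 4, 5, 5]
rho = spearman_rho(experience, satisfaction)
print(f"rho = {rho:.2f} -> {cohen_label(rho)}")
```

Applied to the coefficients of Table 1, this classification labels 0.46 and −0.42 as strong and −0.28 as moderate, matching the interpretation given in the text.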
4 Discussion The purpose of this research was to determine the impact of external factors on satisfaction with the virtual education provided by a private university in Lima, Peru. This study confirmed that students feel satisfied when there is congruence between pre-course expectations and the results achieved after the experience [13]; therefore, the students' point of view is a key element for improving the management and development of educational programs, which in turn is the key element for their satisfaction [14]. For example, it can be concluded that student experience in virtual education was related to satisfaction with virtual education. These results confirm the findings of Song et al. (2004) [20], whose study of 76 postgraduate students showed that their experience with virtual distance education influenced satisfaction with the quality of virtual education. Likewise, it can be concluded that both the difficulties students face in virtual education and the number of suggested improvements were negatively related to student satisfaction, demonstrating that the greater the difficulties (communication with teachers and classmates, difficulty concentrating, quality of/access to the Internet, lack of a comfortable space, etc.), the lower the satisfaction with virtual education. These results are supported by the empirical study of Zambrano (2016) [21], who, after analyzing a sample of 37 university students in a virtual-modality Theology program in Quito, showed that both the difficulties students face in virtual education and the number of suggested improvements influenced satisfaction with its quality.
Finally, this shows that external factors related to experience in virtual education, difficulties in virtual education, and suggestions for improvement do have an impact on whether student satisfaction with this form of education is low or high.
5 Conclusions and Future Works It can be concluded that satisfaction with virtual education is directly related to the experience of university teaching delivered through digital resources, and that institutional difficulties (communication with teachers) or personal difficulties (concentration problems or Internet access) are related to dissatisfaction. As future research, at the methodological level the sample could be expanded to include equal numbers of men and women, as well as students from the first cycles of study. For a better diagnosis of the level of satisfaction with virtual education, qualitative strategies such as focus groups could be used. Finally, demographic factors such as marital status, number of children, etc., could be considered for a more complete analysis.
References 1. Calvo, D., Ospina, D., Peláez, L.: Didáctica: Aproximaciones a un concepto caracterizado para la educación virtual 1(93), 8 (2013). https://dialnet.unirioja.es/servlet/articulo?codigo=4897873. Accessed 21 Apr 2020 2. Wang, Q.: Developing a technology-supported learning model for elementary education level. Mimbar Sekolah Dasar 6(1), 141–146 (2019) 3. Hrastinski, S.: Asynchronous and synchronous e-learning. EDUCAUSE Q. 31(4), 51–55 (2008) 4. Martínez, C.: La educación a distancia: sus características y necesidad en la educación actual 17(33), 21 (2008). https://dialnet.unirioja.es/servlet/articulo?codigo=5057022. Accessed 30 Apr 2020 5. Clover, I.: Advantages and disadvantages of eLearning (2017). https://elearningindustry.com/advantages-and-disadvantages-of-elearning. Accessed 30 Apr 2020 6. Ceballos, O., Mejía, L., Botero, J.: Importancia de la medición y evaluación de la usabilidad de un objeto virtual de aprendizaje 13(25), 24–27 (2019). https://dialnet.unirioja.es/servlet/articulo?codigo=7151035. Accessed 30 Apr 2020 7. Klamma, R., et al.: Social software for life-long learning. Educ. Technol. Soc. 10(3), 72–83 (2007) 8. Mehmet, K., Fatih, E., Mehmet, K., Kursat, C.: Challenges faced by adult learners in online distance education: a literature review 11(1), 5–22 (2019). https://eric.ed.gov/?q=+the+external+factors+of+a+student+affects+his+distance+education&id=EJ1213733. Accessed 10 May 2020 9. Thompson, J.J., Porto, S.C.S.: Supporting wellness in adult online education. Open Praxis 6(1), 17–28 (2014). https://doi.org/10.5944/openpraxis.6.1.100 10. Park, H., Choi, J.: Factors influencing adult learners' decision to drop out or persist in online learning. Educ. Technol. Soc. 12(4), 207–217 (2009). https://doi.org/10.2307/jeductechsoci.12.4.207 11. Choi, J., Kim, U.: Factors affecting adult student dropout rates in the Korean cyber-university degree programs. J. Contin. High. Educ. 66(1), 1–12 (2018). https://doi.org/10.1080/07377363.2017.1400357 12. Alward, E., Phelps, Y.: Impactful leadership traits of virtual leaders in higher education 23(3), 72–93 (2019). https://eric.ed.gov/?q=+virtual+education+is+efficient&id=EJ1228828. Accessed 08 June 2020
13. Allen, M., Omori, K., Burrell, N., Mabry, E., Timmerman, E.: Satisfaction with distance education. In: Moore, M.G. (ed.) Handbook of Distance Education, 3rd edn., pp. 143–154. Routledge, New York (2013) 14. Jiménez, A., Terriquez, B., Robles, F.: Evaluación de la satisfacción académica de los estudiantes de la Universidad Autónoma de Nayarit. Revista Fuente 2(6), 46–56 (2011) 15. Diener, E., Emmons, A., Larsen, J., Griffin, S.: The satisfaction with life scale. J. Pers. Assess. 49, 71–75 (1985) 16. Field, A.: Discovering Statistics Using SPSS, 3rd edn. Sage Publications, London (2009) 17. Aiken, R.: Psychological Testing and Assessment, 11th edn. Allyn & Bacon, Boston (2002) 18. Helsel, D.: Statistics for Censored Environmental Data Using Minitab and R, 2nd edn. Wiley, Denver (2011) 19. Cohen, J.: A power primer. Psychol. Bull. 112(1), 155–159 (1992) 20. Song, L., Singleton, S., Hill, R., Koh, H.: Improving online learning: student perceptions of useful and challenging characteristics. Internet High. Educ. 7(1), 59–70 (2004) 21. Zambrano, J.: Factores predictores de la satisfacción de estudiantes de cursos virtuales. RIED. Revista Iberoamericana de Educación a Distancia 19(2), 217–235 (2016)
Comparative Study of Academic Performance in the 2018 PISA Test in Latin America Eda Vasquez-Anaya, Lucero Mogrovejo-Torres, Vanessa Aliaga-Ahuanari, and Ivan Iraola-Real(B) Universidad de Ciencias y Humanidades, Lima 15314, Peru {edavasqueza,luceromogrovejot,vanaliagaa}@uch.pe, [email protected]
Abstract. Achievements in mathematical, communicative, and scientific competencies in basic education are necessary for future professional training; thus, academic performance not only reflects the knowledge acquired in the teaching-learning process but also provides valuable information on the factors involved. The present research therefore aimed to comparatively analyze the academic performance of Latin America in the PISA 2018 Assessment, with emphasis on the performance of Peruvian students. Through a process of documentary analysis, the results from 10 Latin American countries that are part of the OECD were studied, in which adolescents between 15 and 16 years old were evaluated. The results showed that some countries, despite holding the best positions in Latin America, exhibited a downward trend in academic performance, while countries such as Peru showed upward trends, despite still occupying low positions in the world ranking. Keywords: PISA test · Academic performance · Educational evaluation
1 Introduction The academic performance of a student has been one of the pedagogical terms that has generated the most definitions; this plurality reflects the importance of academic performance as a variable of analysis in the educational field. Etymologically, "performance" refers to the product or utility obtained from a thing or process [1]. In this sense, Morales, Morales, and Holguín [2] point out that the term "school performance" made its appearance in the pedagogical literature in a context marked by an industrial economic model, in which an increase in productivity and quality in the teaching-learning process was sought, objectively verifiable through measurement scales that quantify the results obtained. For Chadwick [3], academic achievement was the reflection of indicators and characteristics that the student had acquired through the teaching-learning process, enabling him or her to achieve a level of performance and academic achievements during the school years, reflected in the grade obtained. In the 21st century, for García, Alvarado, and Jiménez [4], academic achievement became the level of knowledge demonstrated by a student in each subject, according to his or her age and performance, © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 109–118, 2022. https://doi.org/10.1007/978-3-030-96147-3_9
E. Vasquez-Anaya et al.
associated with various social and economic factors that are interrelated in the student's experience. In addition, academic achievement is the evaluation of the knowledge acquired by the student as the result of a formative process, which is also reflected in daily behaviors oriented to problem solving. In this way, a dual nature of students' academic performance can be appreciated, with clearly defined scenarios. First, social academic performance, which is reflected in the educational institution and the levels achieved, graded as the expected achievements are obtained. Second, individual academic performance, which is manifested in the application of the acquired knowledge outside the school setting, that is, the practical application of what has been learned [5]. As can be seen, academic performance has gone from being a static figure to a dynamic reflection, not only of the educational process but also of a wide range of factors that directly and indirectly affect students' learning. 1.1 The PISA Assessment In response to the problem of assessing the academic performance of students nearing the age of high school graduation, the Organization for Economic Co-operation and Development (OECD, 2015a) [6], through the Programme for International Student Assessment (PISA), periodically assesses 15-year-old students in 79 countries in the areas of reading literacy, mathematics proficiency, and science proficiency, taking samples from each country ranging from 4,500 to 10,000 students [7]. The PISA assessment has been carried out every three years since 2000, with the aim of providing member countries with an effective tool for evaluating performance and the achievement of educational goals [8]. For this purpose, PISA proposes the evaluation of a series of skills that students must fulfill, in addition to providing information on the personal, family, and student environment of the participants [6].
In each edition of the PISA assessment, special attention is given to one of the competencies: in 2000 emphasis was placed on reading, in 2003 on mathematics, and in 2006 on science; following this sequence, the 2018 edition again had reading as its core competency. In the PISA tests, the assessments of reading, mathematics, and science competencies seek to identify abilities and aptitudes in students that go far beyond the educational sphere, not only in academic results but also in the application of all this learning in daily life [7]. For Walter Peñaloza, education should promote the development of human beings and their potential; it must incorporate the richness of the people's own culture and of the universal cultural heritage, promote people's ability to take a position in the surrounding culture, and awaken their creative power, valuing learning that can produce results throughout life, i.e., that transcends the student's school years [9]. As mentioned above, these competencies are three and are defined as follows: a) Reading competence, understood as the ability to understand, use, and reflect on written texts; b) Mathematical competence, defined as that which gives the student the faculty to reason, analyze, and communicate mathematical operations in a wide variety of contexts, emphasizing the ability to apply this knowledge in daily life [10]; and finally, c) Competence in the area of science, where the student's scientific learning is expected to be complemented by an ability to manage and understand the surrounding environment, as well as its possibilities and limitations [11].
The PISA tests are indicators of the objectives to be met in order to reach an international level that comes ever closer to the average [12]; in Peru, efforts to improve students' academic performance through the Census Evaluation of Students (ECE) have been reflected in small achievements in the areas addressed by the present research. In accordance with all of the above, the purpose of this research is to comparatively analyze the academic performance of Latin America in the PISA 2018 Assessment, with emphasis on the performance and short-term projections of Peruvian students.
2 Methodology For this article, the documentary analysis method was used, which consists of the study of a document without considering the medium on which it is stored, whether written, audio, video, etc. [13]. In performing this analysis, two aspects were worked on: an external one, in which several sources such as the Organization for Economic Co-operation and Development [6] and the Ministry of Education of Peru [12] were identified and located, and an internal one, which focused on the analysis of the content of the documents. 2.1 Procedure In the present study, a documentary analysis of the OECD reports on the PISA 2018 assessment was conducted, in which students from 78 countries participated, with ages ranging from 15 years 3 months to 16 years 2 months at the time of the assessment [14]. A base of 5,516,736 students was taken, representing 67.84% of the total number of students in the 10 Latin American countries that are part of the OECD (Table 1).

Table 1. Participating population of the PISA 2018 test in Latin America.

Country             Population 15 years old   Population represented by the sample (expansion)   Population represented by the sample (%)
Argentina           702 788                   566 486                                            80.6
Brazil              3 132 463                 2 036 861                                          65.0
Chile               239 492                   213 832                                            89.3
Colombia            856 081                   529 976                                            61.9
Costa Rica          72 444                    45 475                                             62.8
Mexico              2 231 751                 1 480 904                                          66.4
Panamá              72 084                    38 540                                             53.5
Peru                580 690                   424 586                                            73.1
Dominican Republic  192 198                   140 330                                            73.0
Uruguay             50 965                    39 746                                             78.0
TOTAL               8 130 956                 5 516 736                                          67.8

Note: Source: (OCDE 2019a) [14].
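The coverage percentages in Table 1 are simply the expanded sample divided by the 15-year-old population; a quick arithmetic check with three of the table's figures:

```python
# Figures taken from Table 1 (population of 15-year-olds and the
# population finally represented by the sample, after expansion).
population = {"Peru": 580_690, "Chile": 239_492, "Panamá": 72_084}
represented = {"Peru": 424_586, "Chile": 213_832, "Panamá": 38_540}

for country in population:
    pct = 100 * represented[country] / population[country]
    print(f"{country}: {pct:.1f}%")
```

The computed values (73.1%, 89.3%, 53.5%) reproduce the percentages in the table's final column.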
2.2 Measure Documentary analysis guide: this instrument allowed the analysis to be performed according to three categories: reading proficiency, mathematical proficiency, and science proficiency, considering the PISA 2018 assessment instrument, a virtual exam focused on reading proficiency (67.8%), with mathematics (15.4%) and science (16.8%) as secondary competencies [8]. For the assessment scale of competencies, PISA applies IRT (Item Response Theory); this means that the higher a student scores, the better his or her positioning on the assessment scale corresponding to the competency [15]. For reading proficiency, the scale is divided into six levels, where the highest level is 6 and the lowest is 1; the latter is divided into three sublevels, 1a, 1b, and 1c, for a better evaluation of the results. For mathematical competence, the scale is divided into six levels, where the maximum is 6 and the minimum is 1. For science competence, the scale is also divided into six levels, where 6 is the maximum and 1 the minimum, the latter being divided into 1a and 1b according to scores. In all three cases, a level below the minimum is not considered an achievement (Fig. 1).
Fig. 1. Rating scale in PISA 2018 Test. Source: (Office of Quality Learning Measurement 2019).
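As a hedged illustration of the IRT idea mentioned above (not PISA's actual operational model, which is considerably more elaborate), the two-parameter logistic model gives the probability that a student of ability theta answers an item of difficulty b and discrimination a correctly:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT model: P(correct) = 1 / (1 + exp(-a * (theta - b))),
    where a is item discrimination and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A more able student has a higher chance of success on the same item,
# which is why higher scores map to higher proficiency levels on the scale.
for theta in (-1.0, 0.0, 1.0):
    print(f"theta={theta:+.1f}: P = {p_correct(theta, a=1.2, b=0.0):.2f}")
```

When ability equals difficulty (theta = b), the success probability is exactly 0.5, and it rises monotonically with ability; the a = 1.2 and b = 0.0 values here are illustrative only.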
3 Results After the documentary analysis of the PISA 2018 tests for the Latin American region, it was decided to analyze the results in three categories according to the competencies assessed. 3.1 Reading Literacy An overview shows that once again Chile presents the best performance in this competency, ranking 43rd with 452 points, followed in Latin America by Uruguay in 48th place with 427 points and Costa Rica in 49th place with 426 points; all these countries, along with Mexico (420), Brazil (413), and Colombia (412), are at Level 2, above the baseline, although the Latin American region is still positioned below the overall OECD average. Argentina with 402, Peru with 401, and Panama with 377 points are in Level 1a, below the minimum expected. Finally, the Dominican Republic is positioned last, in Level 1b, with 342 points [16] (Fig. 2).
Fig. 2. PISA 2018 tests: Reading. Results of participating Latin American countries [15].
In contrast to these meager results, it is worth noting Peru's upward trend with respect to previous editions, compared with countries such as Chile and Colombia which, although better positioned, have shown a downward trend over the last two editions of the PISA assessment [17] (Table 2). 3.2 Mathematics Competence In mathematics, Uruguay scored 418 points and Chile 417, placing them at the top of the region, followed by Mexico (409), Costa Rica (402), and Peru (400); Colombia (391), Brazil (384), and Argentina (379) complete the Latin American countries, all of which are in Level 1. In addition,
Table 2. Change in reading scores by average measure – Latin America (2009–2018).

Country             2009  2012  2015  2018  Average trend 2009–2018
Perú                370   384   398   401   +10.3
Argentina           398   396   –     402   +2.0
Chile               449   441   459   452   +1.0
Brazil              412   407   407   413   +0.3
Uruguay             426   411   437   427   +0.3
Colombia            413   403   425   412   −0.3
México              425   424   423   420   −1.7
Costa Rica          443   441   427   426   −5.3
Dominican Republic  –     –     358   342   −16.1
Panamá              –     –     –     377   –

Note: Source: Ministry of Education and Personal Development (2019) [7].
below the baseline, at the bottom of the table, are Panama (353) and the Dominican Republic (325), below Level 1, reflecting a crisis in these countries' education in this competency. Once again, the results are discouraging for the region at the global level: all Latin American countries fail to surpass the baseline and are well below the OECD average [16] (Fig. 3).

Fig. 3. PISA 2018 tests: Mathematics. Results from participating Latin American countries [15]. (Ranking: 58. Uruguay 418; 59. Chile 417; 61. México 409; 63. Costa Rica 402; 64. Perú 400; 69. Colombia 391; 70. Brasil 384; 71. Argentina 379; 76. Panamá 353; 78. Rep. Dominicana 325.)
Despite this, once again Peru and Colombia are the only countries in the region that have shown an upward trend for several years; all the other Latin American countries that have taken part in this assessment show slight to moderate downward trends [16] (Table 3).

Table 3. Variation in Mathematics scores by average measure – Latin America (2009–2018).

Country             2009  2012  2015  2018  Average trend
Perú                365   368   387   400   +11.7
Colombia            381   376   390   391   +3.3
Brazil              386   389   377   384   −0.7
Chile               421   423   423   417   −1.3
Costa Rica          409   407   400   402   −2.3
Dominican Republic  –     –     328   325   −3.0
Uruguay             427   409   418   418   −3.0
México              419   413   408   409   −3.3
Argentina           388   388   –     379   −4.5
Panamá              –     –     –     353   –

Note: Source: (OCDE, 2019a) [14].
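One plausible way to summarize a country's trajectory is a least-squares slope per assessment cycle over its available scores; the OECD's published average trends use a more elaborate estimator, so the value computed below for Peru's mathematics series (+12.4) differs slightly from the +11.7 reported in Table 3:

```python
def slope_per_cycle(scores):
    """Ordinary least-squares slope of score on assessment index,
    skipping cycles in which the country did not participate (None)."""
    pairs = [(i, s) for i, s in enumerate(scores) if s is not None]
    n = len(pairs)
    mx = sum(i for i, _ in pairs) / n
    my = sum(s for _, s in pairs) / n
    num = sum((i - mx) * (s - my) for i, s in pairs)
    den = sum((i - mx) ** 2 for i, _ in pairs)
    return num / den

peru_math = [365, 368, 387, 400]   # 2009, 2012, 2015, 2018 (Table 3)
print(f"{slope_per_cycle(peru_math):+.1f} points per cycle")
```

Passing None for a missing cycle (e.g., Argentina's 2015 score) keeps the fit restricted to the editions in which the country actually participated.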
3.3 Science Competence This competency is once again led by Chile, which obtained 444 points, followed at a distance by Uruguay (426), Mexico (419), Costa Rica (416), and Colombia (413), which are in Level 2, surpassing the baseline, though still below the OECD average (489). Below the baseline, in Level 1, Peru, Argentina, and Brazil, in a three-way tie at 404 points, reflect deficient performance in this competency. Panama (365) and the Dominican Republic (336) close the table. Once again, the same pattern as in the previous competencies is repeated, placing the Latin American countries in the last places of the ranking [16] (Fig. 4). Despite these results, it should be noted that Peru and Colombia show a constant increase in this area, which demonstrates, in the case of Peru, that the policies implemented in education have borne fruit, even if scarcely, given the level at which the country currently stands; across the three areas, Peru is the only country in the region that continues to show upward trends (Table 4).
116
E. Vasquez-Anaya et al.
[Figure 4 appeared here: a bar chart of PISA 2018 Science scores for the participating Latin American countries, with their positions in the overall ranking.]
Fig. 4. PISA 2018 tests: Science. Results from participating Latin American countries [15].

Table 4. Change in Science scores by average measure – Latin America (2009–2018)

Country              2009   2012   2015   2018   Average trend
Perú                 369    373    397    404    +11.7
Dominican Republic   –      –      332    336    +4.0
Colombia             402    399    416    413    +3.7
Argentina            401    406    –      404    +1.5
México               416    415    416    419    +1.0
Brazil               405    402    401    404    −0.3
Uruguay              427    416    435    426    −0.3
Chile                447    445    447    444    −1.0
Costa Rica           430    429    420    416    −4.7
Panamá               –      –      –      365    –
Note: Source: (OECD, 2019a) [14].
4 Discussion

According to what has been analyzed, academic performance is shaped by several factors. PISA does not only evaluate students in reading, mathematics, and science; the analysis it provides goes much further, drawing governments' attention to the educational and social policies applied and their direct influence on students' academic performance [18]. In this sense, it is worrying that the measures adopted have failed to position Latin America above the baseline established by the OECD, and more worrying still that countries that were better positioned, such as Chile, Colombia, and Mexico, have in the last two editions of PISA (2015 and 2018)
presented downward trends [19]. Consequently, PISA is more than an evaluation; it is a wake-up call to governments that some decisions are not taking the expected course. On the other hand, the Peruvian case is striking: in the three competencies evaluated, the upward trend is notable, which, while an indicator that things are moving in the right direction, also reflects that progress remains very limited by international standards. In this regard, it is worth mentioning the Student Census Evaluations (ECE), carried out year after year by the Ministry of Education to learn about students' achievements [12]. The ECEs play a key role in measuring academic performance, providing detailed information so that educational staff and families can make decisions for the benefit of the quality of teaching and learning. They also make it possible to track improvement or decline in students' academic performance, as well as the factors that may affect it. The efforts of the Ministry of Education are reflected in international evaluations such as PISA, where a slight increase in academic performance can be observed. The big question that remains is whether these results should make Peruvian citizens proud: although students' academic performance has indeed improved, the results are still well below the baseline, let alone the OECD average. It is therefore necessary to continue working on and implementing educational policies that favor learning not merely to sit an exam in mathematics, communication, and science, but learning for life, as Walter Peñaloza (2005) stated [9].
5 Conclusions and Future Works

This research provides a detailed analysis of the educational reality of Latin America with respect to international assessments. At the same time, it contributes a reflection on the factors associated with educational levels in the countries of the North compared with the Latin American ones: factors such as investment in education, educational implementation, and educational quality, which make competition unequal when the same standardized test is applied to countries from different regions and contexts. This study concludes that the scores obtained by the Latin American countries in the PISA evaluations are deficient; even though in the Peruvian case an increase is observed in the latest evaluations, these achievements remain poor in comparison with other countries. As future research it is intended: first, to carry out a study surveying students and teachers from Latin American countries to identify their attitudes and suggestions regarding how their performance should be evaluated; then, to carry out a study of experiences of international evaluations according to participants' own knowledge; and also to carry out applied research with a quasi-experimental or action-research methodology evaluating a new form of school assessment that considers more subjects, such as arts, sports, languages, and history.
References

1. Albán, J., Calero, J.: El rendimiento académico: aproximación necesaria a un problema pedagógico actual. Revista Conrado 13(58), 213–220 (2017)
2. Morales, L., Morales, V., Holguín, S.: Rendimiento Escolar. Revista Electrónica Humanidades, Tecnología y Ciencia del Instituto Politécnico Nacional 15, 11–15 (2017)
3. Chadwick, C.: Teorías del aprendizaje y su implicancia en el trabajo en el aula. Revista de Educación 70(1), 35–46 (1979)
4. García, M., Alvarado, J., Jiménez, A.: La predicción del rendimiento académico: regresión lineal versus regresión logística. Psicothema 12(Supl. 2), 248–525 (2000)
5. Martínez, S.: Aplicación de técnicas de superaprendizaje y su incidencia en el rendimiento académico de los estudiantes de Educación General Básica Elemental de la Escuela de Educación Básica Rubén Silva, del cantón Patate, provincia de Tungurahua. Tesis de grado, Universidad Técnica de Ambato, Ambato, Ecuador. https://repositorio.uta.edu.ec/bitstream/123456789/25448/1/Mart%C3%ADnez%20P%C3%A9rez%20Sonia%20Gabriela%201804675641.pdf (2017). Accessed 20 June 2020
6. OCDE: Competencias en Iberoamérica: Análisis de PISA 2015. https://www.oecd.org/skills/piaac/Competencias-en-Iberoamerica-Analisis-de-PISA-2015.pdf (2015a). Accessed 10 July 2020
7. Ministerio de Educación y Formación Profesional: Informe PISA 2018. Programa para la Evaluación Internacional de los Estudiantes, pp. 15–119. https://www.observatoriodelainfancia.es/ficherosoia/documentos/5943_d_InformePISA2018-Espana1.pdf (2019). Accessed 25 May 2020
8. OCDE: Marco de Evaluación y de Análisis de PISA para el Desarrollo: Lectura, matemáticas y ciencias, Versión preliminar. https://www.oecd.org/pisa/aboutpisa/ebook%20-%20PISA-D%20Framework_PRELIMINARY%20version_SPANISH.pdf (2017). Accessed 30 Apr 2020
9. Peñaloza, W.E.: El currículo integral. Universidad Nacional Mayor de San Marcos, Lima (2005)
10. Estrada, A.: Estilos de aprendizaje y rendimiento académico. Revista Boletín Redipe 7(7), 218–228 (2018)
11. OCDE: El programa PISA de la OCDE: ¿Qué es y para qué sirve? https://www.oecd.org/pisa/39730818.pdf (2015b). Accessed 16 Apr 2020
12. Ministerio de Educación del Perú: Marco de Fundamentación de las Pruebas de la Evaluación Censal de Estudiantes. http://umc.minedu.gob.pe/wp-content/uploads/2017/02/Marco-de-Fundamentaci%C3%B3n-ECE.pdf (2016). Accessed 01 July 2020
13. Buckworth, J.: Issues in the teaching practicum. In: The Challenge of Teaching, pp. 9–17. https://www.researchgate.net/publication/312000389_Issues_in_the_Teaching_Practicum (2017). Accessed 10 July 2020
14. OECD: PISA 2018 Results (Volume I): What Students Know and Can Do. PISA, OECD Publishing, Paris (2019a)
15. Schleicher, A.: Insights and Interpretations. https://www.oecd.org/pisa/PISA%202018%20Insights%20and%20Interpretations%20FINAL%20PDF.pdf. Accessed 06 July 2020
16. OECD: Programme for International Student Assessment (PISA): Results from PISA 2018 – Country Notes. https://www.oecd.org/pisa/publications/pisa-2018-snapshots.htm (2018). Accessed 20 June 2020
17. Bos, M., Viteri, A., Zoido, P.: PISA 2018 en América Latina: ¿Cómo nos fue en lectura? Nota Informativa 18, 1–4 (2019)
18. OECD: PISA 2018 Results (Volume II): Where All Students Can Succeed. PISA, OECD Publishing, Paris (2019b). https://doi.org/10.1787/b5fd1b8f-en. Accessed 28 Feb 2020
19. OECD: PISA 2018 Results (Volume III): What School Life Means for Students' Lives. PISA, OECD Publishing, Paris (2019c). https://doi.org/10.1787/acd78851-en. Accessed 20 July 2020
Can Primary School Children Be Digital Learners? A Peruvian Case Study on Teaching with Digital Tools

Gisela Chapa-Pazos, Estrella Cotillo-Galindo, and Ivan Iraola-Real(B)

Universidad de Ciencias y Humanidades, Lima 15314, Perú
{yovchapap,belcotillog}@uch.pe, [email protected]
Abstract. Digital tools are means of communication that provide information so that students can innovate in their own learning. The purpose of this research is therefore to analyze the pedagogical use of digital tools in the teaching of mathematics in Peruvian elementary education. Through a process of convenience sampling, two elementary school teachers from a private school (Lima, Peru) were identified, and the participants were interviewed using an interview guide. Regarding the start of the session, the tools proved necessary to improve learning; in the development, they allow the acquisition of strategies; and in the closing, they support the evaluation of each learning process. Digital tools thus play a very important role, because they allow learning to be innovative and challenging; with some virtual tools, students are more dynamic and interact through audiovisual media and didactic applications.

Keywords: Basic education · Educational technology · Information and communication technologies · Lesson preparation
1 Introduction

Currently there are different technological tools that enable communication and at the same time facilitate student learning. These tools are of great interest in the educational field at the current juncture, as students face innovative challenges. The incorporation of technological elements expands the possibilities of using resources that adapt to the needs of students and keep pace with the dynamics of current teaching-learning processes [1]. These tools, incorporated into the students' new virtual space, allow them to interact without constraints; the implementation of new technologies has allowed students to develop their learning processes in a more didactic, effective, and lasting way, with the teacher's accompaniment playing an important role, since he or she serves as a guide and tutor throughout this process [2]. According to Pérez de la Maza [3], ICTs enable autonomous work and the development of self-control capacities, and they adapt to the personal characteristics of students, which favors the integration of learning and teaching. Likewise, ICTs provide multisensory stimulation; they favor motivation, reinforcement, and attention, and they decrease frustration
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 119–127, 2022.
https://doi.org/10.1007/978-3-030-96147-3_10
120
G. Chapa-Pazos et al.
in the face of errors. In short, they are an active, versatile, flexible, and adaptable learning element. Given all these relationships, it can be highlighted that students show more willingness to learn, thanks to motivation [4, 5], and more interest when their learning is based on ICT. The use of these technologies thus has a great impact, owing to the change of methodology in classrooms and to the new pedagogical models applied, which requires ongoing teacher training [6]. Indeed, today it is unthinkable to conceive of life in the classroom without the presence of ICTs. The school world has had to face numerous challenges to cope with these changes, posing new learning models, new procedures and didactic strategies, new methodologies, and new resources that facilitate the integration of ICT in the teaching-learning process [7]. According to Viveiros [8], the consequences of the use of ICTs at the individual level and in society can be easily identified by children. Among the relevant functions of ICT in primary education, four can be identified: first, as a support tool for the creation and presentation of students' work; second, as a didactic resource, used as an auxiliary mainly through games and/or exercises that develop general competencies; third, as a source of information for the virtual tools used by students; and fourth, as support for development and interaction in distance classes. Interactivity is therefore a characteristic of a significant part of educational software and can be an added value for motivation, one that helps digital education complement teaching and learning today and in the future.
In the current context, the use of the Internet has revolutionized the concept of interactivity; the empowerment of information has inevitably led to changes in society's perspective that, beyond being significant, have generated and will continue to generate great impact, with increasingly accelerated trends [9]. Education cannot remain apart from this process, which requires modifications and optimization of its pedagogical processes; this is what will be analyzed in this study.

1.1 Related Research

The need to identify whether primary education students are potential digital learners requires considering what government institutions have developed in this regard. For example, the Ministry of Education of Peru (MINEDU), at the beginning of the Covid-19 pandemic, raised the need for virtual education and developed the "I learn at home" program aimed at private and public schools throughout the country [10]. With this virtual education proposal, MINEDU also proposes the development of the necessary digital skills [11], competencies that are also supported by the transversal competencies of the National Basic Education Curriculum, a document that proposes educating with ICT and promoting autonomous learning in students [12]. Several studies confirm that students adapt very well to virtual education. For example, it is argued that students, being digital natives, will have a better chance of developing at a cognitive level, and it is the teacher who must face the challenge of developing his or her own digital skills [13]. Other comparative studies have identified that students in private schools better develop their
digital skills [14]. It has also been identified that, between men and women, men develop digital skills for fun and women for socialization [15], and for empathy and social support among women [16]. In this way, these studies allow us to conclude that incoming students are already part of the ICT environment and that this favors their cognitive development [17]. For these reasons, the present research aims to analyze the pedagogical use of digital tools in the teaching of mathematics in Peruvian primary education.
2 Methodology

2.1 Sample

The sample consisted of two teachers from a private school in northern Lima, Peru, who studied primary education and interculturality at a private university. The participating teachers worked in online or e-learning sessions. The participants were selected through convenience sampling owing to their proximity and accessibility [18].

2.2 Instruments

Semi-Structured Interview Guide. This guide was built based on the pedagogical processes (motivation, prior knowledge, purpose, problematization, management, accompaniment, and evaluation), which constituted the research categories. The interview consists of six open questions. For example, for the motivation category the following question was posed: "How do you pedagogically use digital tools to motivate students at the beginning of the learning sessions? Why do you do that?" For the prior knowledge category, the following question was posed: "How do you use digital tools to collect the students' prior knowledge during the learning session? Could you explain it?" This interview guide was validated by expert judgment [19], in which three teachers with experience in elementary education participated. Moreover, for the objectivity of the instrument, emphasis was placed on the objectives to be evaluated by the judges [20].

2.3 Procedures

After reviewing the bibliography and designing the research, the instrument was applied to the two selected teachers. For this, the necessary coordination was carried out, proceeding according to international research ethics criteria and emphasizing the voluntary and anonymous participation of the interviewees. After the interviews, transcripts were made and textual coding and categorization of the information were carried out; thus, the outline of the discussion of the results was established, taking as reference the pre-established categories and the emerging categories.
This structure was based on the didactic processes established for the learning sessions. Finally, it was possible to identify the points of convergence between the two interviewees and the reviewed documents, and thus to draw the conclusions and the proposal for future work.
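The coding and convergence step described above can be sketched programmatically. The sketch below is purely illustrative: the category names follow the pedagogical processes listed in Sect. 2.2, but the coded data are hypothetical. It shows one simple way to record which interviewee addressed which category and to extract the points of convergence (categories addressed by both interviewees):

```python
# Hypothetical coding result: category -> set of interviewees whose
# transcript excerpts were assigned to that category.
coded_excerpts = {
    "motivation":       {"Teacher 1", "Teacher 2"},
    "prior_knowledge":  {"Teacher 1", "Teacher 2"},
    "problematization": {"Teacher 1", "Teacher 2"},
    "evaluation":       {"Teacher 2"},
}

def convergence_points(excerpts):
    """Return, sorted, the categories addressed by every interviewee."""
    everyone = set().union(*excerpts.values())
    return sorted(cat for cat, who in excerpts.items() if who == everyone)

print(convergence_points(coded_excerpts))
# -> ['motivation', 'prior_knowledge', 'problematization']
```

In practice such coding is done with qualitative-analysis software or by hand; this sketch only illustrates the logic of identifying convergence between pre-established categories.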
3 Analysis and Discussion of Results

3.1 Digital Tools at the Start of Mathematics Learning Sessions

The use of digital tools to start a learning session has an essential role, owing to their ability to expand the use of resources according to the needs of students and to current teaching and learning processes [1]. They are beneficial in the teaching work for motivating students, identifying their prior knowledge, problematizing with them the topics to be taught, and achieving the purpose and organization of the learning session. This use of digital tools at the beginning of the mathematics learning session is analyzed below through the following subcategories.

1. Motivation

Within the pedagogical processes, motivation plays an important role. Motivation is that internal, positive attitude towards new learning; it is what moves the subject to learn [5]. There is no doubt that motivation plays a fundamental role in the process by which the human brain acquires new learning [4]. In virtual education environments, moreover, the pedagogical use of digital tools is essential for motivating learning. Thus, the teachers interviewed stated the following reasons for using digital tools to motivate learning:

[The digital tools…] I use them to express and create through this language that is based on images and sounds, stimulating creativity with the students. Because the role of teachers in virtual education is not only to fill the student with content but also to raise their level of motivation (Teacher 1).

In addition, the following interviewee stated:

Digital tools are very important in motivation; they allow the activation of students' creation and imagination, thus increasing their interest through interaction with these tools. [I use digital tools] so that students reflect and develop their critical level (Teacher 2).
As has been observed, teachers use digital tools to motivate students and achieve their critical reflection, raising motivation through interactive digital image and sound media. Continuing with the category of the use of digital tools at the beginning of the mathematics learning sessions, the following subcategory is analyzed.

2. Collection of Prior Knowledge

Within the pedagogical processes, prior knowledge is indispensable. It provides the alternative of integrating the needs, interests, and prior knowledge of students into the teaching and learning process that takes place in the classroom, generating activities that promote immersion in the situations presented [21]. Thus, the teachers interviewed
explained their reasons for using digital tools to collect prior knowledge in the learning process:

Digital tools play an important role in the collection of prior knowledge, using interactive videos that reflect the reality of society; in this way the student can begin to create a cognitive conflict. Likewise, the flipped classroom method, because it puts an end to traditional classes and allows students to learn individually, activating their prior knowledge (Teacher 1).

In the same way, the following interviewee stated:

[I use digital tools] through the flipped classroom because it allows students to recognize their prior knowledge and to ask challenging questions, sharing videos and slides prepared according to the reality and objective of the session (Teacher 2).

In both testimonies, it was possible to identify that teachers collect prior knowledge through audiovisual resources or challenging questions that pose the problem. In addition, the two teachers agreed on the use of the flipped classroom to develop constructive learning in students (Flipped Learning Network [FLN]) [22]. With respect to problematization, the following subcategory is analyzed:

3. Problematization

In this pedagogical process, students are challenged to expose what they think about the situations presented to them, so that the teacher can know what they think and foster the increase of the students' critical level; the student thus faces the situations proposed for discussion and feels the need to acquire knowledge he or she does not yet have [23]. Likewise, the teachers interviewed stated the following reasons for using digital tools to problematize in learning:

Digital tools are important in challenging situations because they allow establishing bonds of trust between student and teacher.
In this way, comparative examples of the situation are used, or the Class Dojo application, starting with the question wheel (Teacher 1).

In the same way, the following teacher said:

[I use digital tools] exposing cases through dramatizations using concrete material on the topic to be worked on, and also through comparative tables elaborated manually for the students to observe; proposing them a problem and having them support their answer by means of concrete or graphic material (Teacher 2).

In the testimonies of the teachers interviewed, it is observed that they use digital tools such as virtual applications, together with concrete material, to promote problematization and to recognize the purpose and organization of the session, which is analyzed in the following subcategory.
4. Purpose and Organization

The virtual tutor must keep the different collaborative spaces of his or her students active, facilitating communication and access to new links and content, and above all encouraging dialogue among the members of the group. The teacher thus becomes a tutor who accompanies the group and facilitates the achievement of common objectives; likewise, he or she must proactively encourage the inclusion and participation of all members in these new spaces [24]. The teachers interviewed stated the following reasons for using digital tools in the purpose and organization of the pedagogical process:

[The teachers] select the information according to the purpose and according to the accessibility of the student, so that the child can interact in the virtual environments and so that mathematics is fun (Teacher 1).

Likewise, the other teacher said:

Digital tools are important because digital resources are used in different formats; visual organizers are used so that students can build their own learning (Teacher 2).

It has been observed that, for the learning of mathematics, teachers select different formats and types of digital resources according to the purpose of the learning and the accessibility of students. To deepen this reflection, the following category is analyzed.

3.2 Digital Tools in the Development of Mathematics Learning Sessions

The school environment has had to face several challenges in confronting new changes in the teaching-learning process, using new methodologies and new resources that facilitate the integration of ICT into this process [7]. These virtual tools in the development of the mathematics learning session are analyzed below in the following subcategory.

1. Management and Support

Management and accompaniment are teacher actions that encourage students to reflect, criticize, and analyze.
As stated by the Ministry of Education of Peru [25], pedagogical accompaniment is the act of offering continuous advice to learners in the teaching-learning processes. Thus, the teachers explained their reasons for using digital tools in management and accompaniment:

The digital tools are interesting because, through observation of the development of the session and brainstorming questions, they help us manage and clarify the doubts of the student; by asking them questions to manage their own learning (Teacher 1).
The second teacher specified:

[Digital tools are linked when] the student participates actively, and through this process he or she executes, discovers, and thus generates a critical learning that will cause a conflict; through the activities in which he participates, he discovers the new learning (Teacher 2).

It could be observed that digital tools allow accompaniment in the development of students' new learning, favoring the evaluation of learning that also characterizes the closing of the sessions, analyzed next.

3.3 Digital Tools in the Closing of Mathematics Learning Sessions

It should be noted that the use of these tools plays a very important role throughout the learning process. The use of technologies entails a great impact, owing to the change of methodology in the classrooms and to the new pedagogical models applied, which requires ongoing teacher training [6]. The digital tools used in the closing of the mathematics learning session are analyzed in the following subcategory:

1. Evaluation

In this process, it is necessary for the virtual teacher to be clear about the achievement he or she expects from the students. This is evidenced when a group of students establishes communication processes with the teacher through virtual media, in such a way that the ability to discuss, discern, reflect, question, or innovate in aspects related to mathematical knowledge or interaction with the media is made possible, so that a review, clarification, or feedback on the topics addressed can take place [26]. This could be evidenced in the following testimonies:

Digital tools are beneficial in this formative and certifying evaluation process because they allow us to see what the student has achieved with the competences worked on, through Pim Pom questions (Teacher 1).
Digital tools help in the formative evaluation because it is transversal; it is done throughout the process of the virtual classes, using the Class Dojo application, which motivates them to participate and interact with each other (Teacher 2).

At the closing of the learning sessions, evaluation should be planned considering the skills, competencies, and performances, in order to assess the progress of the students and of the session [27]. As the testimonies show, this evaluation can feasibly be carried out with digital resources.
4 Conclusions and Future Works

This research showed that the digital tools that have been incorporated into the students' new virtual space allow them to interact without constraints; this implementation of
new technologies has allowed students to develop their learning processes in a more didactic, effective, and lasting way, with the teacher's accompaniment playing an important role, serving as guide and tutor throughout the process. Thus, it can be concluded that the use of digital technologies works globally to improve student learning, changing not only the way of teaching but also the way of learning. Likewise, this analysis commits to future research on the use of digital tools in different educational contexts (rural, intercultural, special education, etc.), with different methodologies (quantitative) and with different samples of students, parents, and educational authorities.
References

1. García, N., Pérez, C.: Creación de ambientes digitales de aprendizaje. Editorial Digital UNID, México, D.F., México (2015)
2. Lai, W.: ICT supporting the learning process: the premise, reality, and promise. In: Voogt, J., Knezek, G. (eds.) International Handbook of Information Technology in Primary and Secondary Education. Springer, Boston, United States (2008)
3. Pérez de la Maza, L.: Programa de Estructuración Ambiental por Ordenador para personas con trastornos del espectro autista: PEAPO. In: Soto Pérez, F.J., Rodríguez Vázquez, J. (coords.) Las nuevas tecnologías en la respuesta educativa a la diversidad. Serigráfica, Murcia (2002)
4. Carrillo, M., Padilla, J., Rosero, T., Villagómez, M.S.: La motivación y el aprendizaje. Alteridad 4(2), 20 (2011). https://doi.org/10.17163/alt.v4n2.2009.03
5. Ryan, R., Deci, E.: Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. Guilford Publications, New York (2017)
6. Freeman, A., Adams, S., Cummins, M., David, A., Hall, C.: Informe Horizon. The New Media Consortium, Texas (2017)
7. Roblizo, M., Cózar, R.: Usos y competencias en TIC en los futuros maestros de educación infantil y primaria: hacia una alfabetización tecnológica real para docentes. Pixel-Bit. Revista de Medios y Educación (47) (2015). https://www.redalyc.org/articulo.oa?id=368/36841180002. Accessed 28 Feb 2020
8. Viveiros, J.: Internet en la educación primaria. ECU, San Vicente (Alicante), Spain (2015). https://elibro.net/es/ereader/bibliouch/43988?page=11. Accessed 16 Apr 2020
9. Levano-Francia, L., Diaz, S.S., Guillén-Aparicio, P., Tello-Cabello, S., Herrera-Paico, N., Collantes-Inga, Z.: Competencias digitales y educación. Propós. Represent. (2019). https://doi.org/10.20511/pyr2019.v7n2.329
10. Ministerio de Educación del Perú: Aprendo en casa. Ministerio de Educación del Perú, Lima (2020)
11. Ministerio de Educación del Perú: Orientaciones pedagógicas para el servicio educativo de educación básica durante el año 2020 en el marco de la emergencia sanitaria por el coronavirus COVID-19, 1st edn. Ministerio de Educación del Perú, Lima (2020)
12. Ministerio de Educación del Perú: Currículo Nacional de Educación Básica. Ministerio de Educación del Perú, Lima (2016)
13. Pech, S.H.Q., González, A.Z., Herrera, P.J.C.: Competencia digital en niños de educación básica del sureste de México. RICSH 9(17), 289–311 (2020)
14. Mortis, S., Tánori, J., Angulo, J., Villegas, M.: Contextual attribute variables in the use of ICT in primary level students from Southern Sonora, Mexico. Estudios Sobre Educación 35, 499–515 (2018). https://doi.org/10.15581/004.35.499-515
15. Gebhardt, E., Thomson, S., Ainley, J., Hillman, K.: Gender Differences in Computer and Information Literacy: An In-depth Analysis of Data from ICILS. Springer Open, Switzerland (2019)
16. Shaw, L., Gant, L.: In defense of the Internet: the relationship between Internet communication and depression, loneliness, self-esteem, and perceived social support. CyberPsychol. Behav. 5(2), 157–172 (2002)
17. Pérez, M.A.C., Vinueza, M.A.P., Jaramillo, A.F.A., Parra, A.D.A.: Las Tecnologías de la Información y la Comunicación (TIC) como forma investigativa interdisciplinaria con un enfoque intercultural para el proceso de formación de los estudiantes. e-Cienc. Inform. (2018). https://doi.org/10.15517/eci.v1i1.33052
18. Dörnyei, Z.: Research Methods in Applied Linguistics. Oxford University Press, New York (2007)
19. Gelman, A., Hennig, C.: Beyond subjective and objective in statistics. J. R. Stat. Soc. Ser. A 180, 967–1033 (2017)
20. Alarcón, L.A.G., Trápaga, J.A.B., Navarro, R.E.: Content validity by experts judgment: proposal for a virtual tool. Apertura 9(2), 42–53 (2017). https://doi.org/10.32870/Ap.v9n2.993
21. Villalta, M., Guzmán, A., Nussbaum, M.: Teaching processes and technology use in the classroom. Rev. Complutense Educ. (2015). https://doi.org/10.5209/rev_RCED.2015.v26.n2.43303. Accessed 10 July 2020
22. Flipped Learning Network (FLN): The Four Pillars of F-L-I-P™ (2014). www.flippedlearning.org/definition. Accessed 06 July 2020
23. Abreu, J.B., Magalhães da Silva Freitas, N.: Proposições de inovação didática na perspectiva dos três momentos pedagógicos: tensões de um processo formativo. Ens. Pesqui. Educ. Ciênc. (Belo Horizonte) (2017). https://doi.org/10.1590/1983-21172017190123
24. Shah, C., Leeder, C.: Exploring collaborative work among graduate students through the C5 model of collaboration: a diary study. J. Inf. Sci. 42(5), 609–629 (2016). https://doi.org/10.1177/0165551515603322
25. Ministerio de Educación del Perú: Compromisos de Gestión Escolar y Plan Anual de Trabajo de la IE. MINEDU, Lima (2017)
26. Sucerquia, E., Londoño, A., Jaramillo, C., De Carvalho, M.: Distance-virtual education: development and characteristics in mathematics courses. Rev. Virtual Univ. Católica del Norte 48, 33–55 (2016). http://revistavirtual.ucn.edu.co/index.php/RevistaUCN/article/view/760/1286. Accessed 08 Jan 2020
27. Ministerio de Educación del Perú: Modelo de Sesión de Aprendizaje basado en enfoques y desempeños (2018). https://www.mineduperu.com/2018/12/modelo-de-sesion-de-aprendizaje-basado.html. Accessed 07 July 2020
Performance Evaluation of Teaching of the Professional School of Education of a Private University of Peru Ivan Iraola-Real(B) , Elvis Gonzales Choquehuanca , Gustavo Villar-Mayuntupa , Fernando Alvarado-Rojas , and Hugo Del Rosario Universidad de Ciencias y Humanidades, Lima 15314, Peru {iiraola,egonzales,gvillar,falvarado,hdelrosario}@uch.edu.pe
Abstract. University education requires constant evaluation of teaching performance to guarantee innovation and improvement of the quality of the educational service. In Peru, these processes are monitored by the National System of Evaluation and Accreditation of Educational Quality (SINEACE) and by the National Superintendency of Higher University Education (SUNEDU). In line with these entities, the University of Sciences and Humanities set out to evaluate the performance of the teachers of a professional program of primary education at a private university in Lima, Peru. To this end, a survey completed by 127 students was applied to evaluate 29 teachers (16 women (55.2%) and 13 men (44.8%)). It was concluded that teachers present an optimal performance in teaching and in pedagogical management, but significant deficiencies in research and intellectual production. Keywords: Teaching performance evaluation · Education · Educational evaluation
1 Introduction
Due to the relevance of teaching and learning processes, the role of teachers is established as essential in education [1]. This is done to achieve educational excellence, which is often reflected in the pedagogical performance of teachers, who thus express their professional competencies in achieving the expected learning [2]. At the level of university higher education, the evaluation of teaching performance should reflect in practice the objectives set out in the educational proposal, so that results follow the established goals and the evaluation process can be oriented to continuous improvement [3]. For these reasons, at the international level, particular value is attributed to the evaluation of teacher performance, established as a diagnostic [1], formative, and summative research process oriented to the improvement of teaching and learning, as well as a strategic process for decision making in the improvement of the educational service and in decisions on hiring, promotion, salary-scale modification, etc. [4].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 128–138, 2022. https://doi.org/10.1007/978-3-030-96147-3_11
Thus, teacher
performance evaluation processes transcend the academic sphere and allow the construction of an evaluative culture for administrative and pedagogical purposes [5], necessary for educational quality [6].

1.1 Evaluation of Teaching Performance in the Peruvian University Context
At the higher education level, the mission and vision of a university are reflected in the curriculum [7], the tool aimed at preparing students for the academic and working world [8]. To guarantee these achievements in Peru, the National Superintendency of University Higher Education (in Spanish, Superintendencia Nacional de Educación Superior Universitaria, SUNEDU) and the National System of Evaluation and Accreditation of Educational Quality (in Spanish, Sistema Nacional de Evaluación y Acreditación de la Calidad Educativa, SINEACE) propose the fulfillment of standards to guarantee the quality of educational services in universities [6]. In addition, achieving educational quality requires complying with the stipulations of University Law 30220 (in force since 2014), which specifies that the university curriculum requires innovation along with the evaluation of its execution and of teaching performance. This evaluative process requires the participation of the educational community in order to monitor the adequate development of the curricula and the achievement of the graduate profiles [9]. At the same time, it allows the achievement of the SINEACE standards (Fig. 1):
Fig. 1. SINEACE standards [10].
The SINEACE standards are related to strategic management, comprehensive training (the central axis that evaluates the processes of teaching and learning, research, and social responsibility), and institutional support [10]. Although there are several standards, it should be considered that the evaluation process can be conducted partially, to obtain a better analysis of the results [11].
1.2 Education Quality Standards and Teacher Performance Evaluation
Assuming that evaluation is a developmental process that guarantees teaching efficiency and educational quality [12], the University of Sciences and Humanities (UCH) articulates this process within its curricular proposal for comprehensive training, which contemplates training in theoretical and practical knowledge, artistic and sports activities, research training, pre-professional practice from the first years, and counseling [13]. All these dimensions of comprehensive training are developed through various subjects that are evaluated with surveys in which students give their opinions on teaching performance. These surveys address several dimensions, but given the nature of the present study, three are specified: (a) the teaching dimension, which evaluates proficiency in the course, the teaching and learning processes, the teacher–student relationship, and the administration of the course; (b) the pedagogical management dimension, which evaluates curricular management and the management of didactic materials; and (c) the research dimension, which evaluates research competencies, production, and intellectual activities (Fig. 2).
Fig. 2. SINEACE standards and the dimensions of the UCH teaching performance evaluation.
So, the UCH proposal can be evaluated in various activities and not only from within the subjects [13]. However, partial evaluation can be conducted for better development of the process [11]. Thus, for the evaluation of teaching performance, the second comprehensive training standard that specifically refers to the evaluation of teaching and learning processes is relevant [10].
Thus, for the above reasons, the objective of this research is to evaluate the performance of a sample of teachers in a professional program of primary education at a private university in Lima, Peru. Specifically, it seeks to:
– Estimate teaching performance concerning the development of teaching and learning processes.
– Assess teaching performance concerning pedagogical, curricular, and didactic materials management.
– Evaluate teaching performance concerning research competencies, production, and intellectual activity.
2 Methodology
The present study assumed a descriptive quantitative methodology [14] with an evaluative research orientation [15].

2.1 Participants
In this study, 127 students participated, evaluating 29 teachers from different areas of specialization who oversee the education of future professionals in primary education. Of the teachers, 16 were women (55.2%) and 13 were men (44.8%); all work at a private university in Lima, Peru.

2.2 Instruments
Teacher Performance Survey. This survey evaluates teaching performance in three dimensions: the teaching dimension, the pedagogical management dimension, and the research dimension. It consists of 20 items with ten response options on a Likert-type scale. The survey was constructed by the teaching team and subjected to a process of analysis by expert judgment [16]. A statistical validity and reliability procedure was then conducted. In the validity analysis with the Kaiser-Meyer-Olkin (KMO) sampling adequacy test, a value of .65 was obtained, indicating that the scale is valid [17]. In the reliability and internal consistency analysis with Cronbach's alpha coefficient, the scale reflected a coefficient of .92, an adequate value [17], being greater than or equal to .70 or .60 [18] (see Table 1).
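As a minimal sketch of the reliability analysis described above, the following computes Cronbach's alpha from an item-by-respondent score matrix using only plain Python. The 4-item, 5-respondent matrix is illustrative dummy data, not the study's survey responses.

```python
# Hedged sketch: Cronbach's alpha for a Likert-type survey.
# The score matrix below is made-up illustrative data.

def cronbach_alpha(items):
    """items: list of columns, one list of scores per item."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

scores = [  # one inner list per item; entries are respondents' scores
    [18, 16, 20, 14, 17],
    [17, 15, 19, 13, 18],
    [19, 16, 20, 12, 17],
    [18, 14, 19, 13, 16],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # the paper's threshold for adequacy is >= .70 (or .60)
```

With real survey data, the same function would be applied to the 20 item columns of the instrument.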
3 Results
Once the validity and reliability of the scale had been confirmed, we proceeded to analyze the means obtained by the teachers for each evaluative dimension.
Table 1. Reliability and validity of the survey.

| Dimension | Reliability: Cronbach's α | Reliability: item-total | Validity: KMO | Validity: Bartlett's test |
|---|---|---|---|---|
| Optimal values | ≥ .60 or .70 | ≥ .30 | ≥ .50 | p < .05 |
| Teaching dimension | .97 | .96–.97 | .65 | .000 |
| Pedagogical management dimension | .92 | .91–.95 | | |
| Research dimension | .98 | .96–.97 | | |
| Survey total | .92 | .90–.91 | | |

Note: The KMO value is good [17]. In addition, according to Aiken [18], the scale's Cronbach's alpha indicates that it is reliable, having exceeded the value of .70, and every item-total correlation must be greater than or equal to .30.
3.1 Mean Analysis (Teaching Dimension)
For the analysis of the averages, the following criteria are considered: deficient performance (14 or less), regular performance (15 to 16), optimal performance (17 to 18), and outstanding performance (19 to 20). Table 2 below shows averages ranging from 17.17 to 18.31, indicating that the teachers show "optimal performance".

Table 2. Mean analysis, teaching dimension.

| Teaching dimension | Minimum | Maximum | Mean | SD |
|---|---|---|---|---|
| Subject mastery | 0 | 20 | 17.66 | 4.253 |
| Teaching and learning process | 0 | 20 | 17.34 | 4.253 |
| Student–teacher relationship | 0 | 20 | 18.31 | 4.028 |
| Course administration | 0 | 20 | 17.17 | 3.947 |
| Average in teaching | 0 | 20 | 17.62 | 3.950 |
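The performance bands above can be sketched as a small classifier. The exact band edges for non-integer means are an assumption here (half-open cut-offs, so that a mean of 18.31 still counts as "optimal", matching how the paper classifies the Table 2 means).

```python
# Hedged sketch of the performance bands from Sect. 3.1.
# Assumption: decimal means between the paper's integer ranges fall into
# the lower band (e.g. 18.31 -> "optimal"), as the paper's own reading implies.

def performance_band(mean_score):
    if mean_score < 15:
        return "deficient"
    if mean_score < 17:
        return "regular"
    if mean_score < 19:
        return "optimal"
    return "outstanding"

# Dimension means from Table 2 all fall in the "optimal" band:
for mean in (17.66, 17.34, 18.31, 17.17, 17.62):
    print(mean, performance_band(mean))
```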
Then, Fig. 3 shows the respective ratings of each of the 29 teachers. Additionally, to identify the level of heterogeneity or homogeneity among the teaching performance ratings, the coefficients of variation (CV) were analyzed according to the criteria of Rustom [19] (CV < 5%: great homogeneity; CV 5%–20%: moderate homogeneity; CV 20%–50%: heterogeneous; CV > 50%: great heterogeneity). Under these criteria, the CV in the teaching dimension was 22% (0.22), showing that teachers have a heterogeneous performance in the proficiency in the subject they teach, in the teaching and learning processes, in the relationship with students, and in the administration of the course.
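The CV analysis above can be sketched directly from the table statistics (CV = SD / mean, a standard definition), together with the Rustom cut-offs quoted in the text. The inputs below are the means and SDs reported in Tables 2–4.

```python
# Hedged sketch: coefficient of variation with the Rustom cut-offs
# cited in the text; inputs are the reported dimension averages.

def cv_label(sd, mean):
    cv = sd / mean
    if cv < 0.05:
        return cv, "great homogeneity"
    if cv <= 0.20:
        return cv, "moderate homogeneity"
    if cv <= 0.50:
        return cv, "heterogeneous"
    return cv, "great heterogeneity"

print(cv_label(3.950, 17.62))  # teaching dimension: CV ~ 0.22, heterogeneous
print(cv_label(4.531, 17.21))  # pedagogical management: CV ~ 0.26, heterogeneous
print(cv_label(8.675, 8.48))   # research dimension: CV ~ 1.02, great heterogeneity
```

These reproduce the 22%, 26%, and 102% figures reported in the results.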
Fig. 3. Performance evaluation in the teaching dimension.
3.2 Mean Analysis (Pedagogical Management Dimension)
For the pedagogical management dimension, Table 3 shows averages ranging from 17.07 to 17.34, again showing that teachers have an "optimal performance".

Table 3. Mean analysis, pedagogical management dimension.

| Pedagogical management dimension | Minimum | Maximum | Mean | SD |
|---|---|---|---|---|
| Curricular management | 0 | 20 | 17.34 | 4.514 |
| Teaching materials management | 0 | 20 | 17.07 | 4.855 |
| Average pedagogical management | 0 | 20 | 17.21 | 4.531 |
Thus, Fig. 4 shows the respective ratings of each of the 29 teachers. Moreover, under these criteria, the CV in the pedagogical management dimension was 26% (0.26), showing that teachers have a heterogeneous performance in curricular management and the management of didactic materials.

3.3 Mean Analysis (Investigation Dimension)
For the research dimension, Table 4 shows averages ranging from 6.97 to 9.66, showing that teachers have a "deficient performance".
Fig. 4. Evaluation of the teaching performance of pedagogical management.
Table 4. Mean analysis, investigation dimension.

| Investigation dimension | Minimum | Maximum | Mean | SD |
|---|---|---|---|---|
| Investigative competences | 0 | 20 | 9.66 | 7.938 |
| Intellectual production | 0 | 20 | 6.97 | 9.631 |
| Intellectual activity | 0 | 20 | 8.07 | 9.369 |
| Average research dimension | 0 | 20 | 8.48 | 8.675 |
Thus, Fig. 5 shows the respective ratings of each of the 29 teachers. Under these criteria, the CV in the research dimension was 102% (1.02), evidencing, according to the criteria of Rustom [19], great heterogeneity in research competencies, production, and intellectual activity.

Fig. 5. Evaluation of the teaching performance of the research dimension.

3.4 Relationship Between Variables
Finally, to explore the relationships between the dimensions evaluated in the teachers, correlation analyses were performed with Pearson's r, interpreted according to Cohen's criteria [20] as mild (r = .10–.23), moderate (r = .24–.36), or strong (r = .37 or more). Thus, Table 5 shows that the teaching dimension is positively, strongly, and significantly related to the pedagogical management dimension (r = .85***, p < .001) and to the research dimension (r = .50**, p < .01). Finally, the pedagogical management dimension was positively, strongly, and significantly related to the research dimension (r = .49**, p < .01).

Table 5. Relations between dimensions.

| Variables | 1 | 2 | 3 |
|---|---|---|---|
| 1. Teaching dimension | (.97) | | |
| 2. Pedagogical management dimension | .85*** | (.92) | |
| 3. Research dimension | .50** | .49** | (.98) |

Note. *, **, *** show the significant relationships: *p < .05, **p < .01, ***p < .001 (bilateral). Cronbach's alpha in parentheses.
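The correlation analysis can be sketched as follows: Pearson's r computed from paired scores, then labeled with the Cohen bands quoted in the text. The two score lists are illustrative dummy data, not the study's teacher ratings.

```python
# Hedged sketch: Pearson's r with the Cohen interpretation bands
# cited in Sect. 3.4. Score lists below are made-up illustrative data.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cohen_label(r):
    r = abs(r)
    if r >= 0.37:
        return "strong"
    if r >= 0.24:
        return "moderate"
    if r >= 0.10:
        return "mild"
    return "negligible"

teaching = [18, 16, 20, 14, 17, 19]
management = [17, 15, 19, 13, 18, 19]
r = pearson_r(teaching, management)
print(round(r, 2), cohen_label(r))
```

With the real data, each pair of dimension averages per teacher would feed `pearson_r`, yielding the coefficients reported in Table 5.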
4 Discussion
According to the national standards of educational quality evaluation [9] and University Law 30220 (in force since 2014), the curriculum in higher education must be innovated through the evaluation of teaching performance, an evaluative process that must be executed with the participation of the educational community [9], among them the students. The present study therefore pursued the objective of evaluating the performance of a sample of teachers in a professional program of primary education at a private university in Lima, Peru. To delimit the investigation, only the evaluation conducted by the students was used, omitting teacher self-evaluation and evaluation based on teacher monitoring reports. Specifically, the teaching dimension was evaluated, referring to proficiency in the subject, the adequate development of the teaching and learning processes, the relationship with students, and the administration of the course, as well as the dimension of pedagogical management of the curriculum and the management of teaching materials [13]. It was evidenced that teachers have an optimal but heterogeneous performance in both dimensions, which calls for personalized attention to the teachers with deficient performance. These results allow us to understand that, in general, teachers have achieved results in line with the established goals and that this evaluation process can be oriented to continuous improvement [3]. Student performance could not be evaluated in this set of dimensions; although the teachers show the expected results, the impact of their teaching on student learning could not be confirmed. However, according to the proposal of the university studied [13], the Peruvian University Law [9], and the Peruvian educational quality standards established by SINEACE [10], teachers have a deficient research performance that would make it difficult to train students in research skills.
Therefore, to improve teaching and learning, this evaluation fulfills a diagnostic function [1] that supports decision making and the improvement of the educational service [4]. Finally, by assuming the evaluation of teaching performance to guarantee educational efficiency [12], the professional school could guarantee the training of well-rounded teachers who, in turn, form well-rounded professionals in charge of primary education. However, it is important to mention a limitation of the present study: other forms of research activity, such as bibliographic work for the preparation of educational materials or lectures at conferences, were not considered; these should be taken into account so as not to rush to conclusions about teachers' research.
5 Conclusions and Future Works
According to this diagnostic study, at a general level teachers show an optimal performance in teaching and pedagogical management; however, there are significant deficiencies in research. Given this diagnosis, there is a need to pay particular attention to the teachers with limitations in teaching and pedagogical management, as well as to implement strategies with all teachers to develop their research competencies, for example by applying a participatory action-research process with training, consulting, and joint academic publications.
References
1. Gálvez, E., Milla, R.: Teaching performance evaluation model: preparation for student learning within the framework for teacher good performance. Propós. Represent. 6(2), 431–452 (2018). http://www.scielo.org.pe/pdf/pyr/v6n2/en_a09v6n2.pdf. Accessed 28 Feb 2020
2. Benítez, M., Cabay, C., Encalada, G.: Formación inicial del docente de educación física y su desempeño profesional. Revista Digital de Educación Física, EmásF 8(48), 83–95 (2017). https://emasf.webcindario.com/Formacion_inicial_del_docente_de_EF_y_su_desempeño_profesional.pdf. Accessed 28 Feb 2020
3. Gómez, L.F., López Valdés, M.G.: La evaluación del desempeño docente en la educación superior. Propós. Represent. (2019). https://doi.org/10.20511/pyr2019.v7n2.255
4. Stroebe, W.: Student evaluations of teaching: no measure for the TEF. Times Higher Education (2016). https://www.timeshighereducation.com/comment/studentevaluations-teachingno-measure-tef. Accessed 30 Apr 2021
5. Amaranti, M.: Uso de resultados de la evaluación docente para mejorar la calidad de la docencia universitaria (2017). www.congresouniversidad.cu/revista/index.php/rcu/article/download/804/759/. Accessed 25 May 2021
6. British Council: The reform of the Peruvian university system: internationalization, progress, challenges and opportunities (2016). https://www.britishcouncil.pe/sites/default/files/the_reform_of_the_peruvian_university_system_interactive_version_23_02_2017.pdf. Accessed 08 June 2021
7. Ried, D.: A model for curricular quality assessment and improvement. Am. J. Pharm. Educ. 75(10), 1–10 (2011)
8. Ruiz, J.: Evaluación del diseño de una asignatura por competencias, dentro del EEES, en la carrera de Pedagogía: estudio de un caso real. Rev. Educ. 35(1), 435–460 (2008)
9. Congreso de la República del Perú: Nueva Ley Universitaria. Ley N° 30220. Perú, Lima (2014)
10. Sistema Nacional de Evaluación, Acreditación y Certificación de la Calidad Educativa (SINEACE): Modelo de Acreditación para Programas de Estudios de Educación Superior Universitaria. Perú, Lima (2016)
11. Díaz, A.: Evaluación curricular y evaluación de programas con fines de acreditación: cercanías y desencuentros. Conferencia del Congreso Nacional de Investigación Educativa, Sonora, México (2005)
12. Morrison, G., Ross, S., Kemp, J., Kalman, H.: Designing Effective Instruction, 6th edn. John Wiley & Sons, Hoboken (2010)
13. Universidad de Ciencias y Humanidades (UCH): Formación Integral (2019). https://www.uch.edu.pe/universidad/formacion-integral. Accessed 30 Apr 2020
14. Appelbaum, M., Cooper, H., Kline, R., Mayo-Wilson, E., Nezu, E., Rao, S.: Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73(1), 3–25 (2018). https://psycnet.apa.org/fulltext/2018-00750-002.pdf. Accessed 25 May 2020
15. Stufflebeam, L., Shinkfield, J.: Evaluation Theory, Models, and Applications. Jossey-Bass, San Francisco (2007)
16. Escobar-Pérez, J., Cuervo-Martínez, A.: Validez de contenido y juicio de expertos: una aproximación a su utilización. Av. Med. 6(1), 27–36 (2008). https://www.researchgate.net/publication/302438451_Validez_de_contenido_y_juicio_de_expertos_Una_aproximacion_a_su_utilizacion. Accessed 28 Feb 2020
17. Field, A.: Discovering Statistics Using SPSS, 3rd edn. Sage Publications, London (2009)
18. Aiken, R.: Psychological Testing and Assessment, 11th edn. Allyn & Bacon, Boston (2002)
19. Rustom, A.: Estadística Descriptiva, Probabilidad e Inferencia: Una Visión Conceptual y Aplicada. Universidad de Chile, Santiago (2012)
20. Cohen, J.: A power primer. Psychol. Bull. 112, 155–159 (1992)
Systematic Mapping of Literature About the Early Diagnosis of Alzheimer's Disease Through the Use of Video Games
María Camila Castiblanco, Leidy Viviana Cortés Carvajal, César Pardo(B), and Laura Daniela Lasso Arciniegas
GTI Research Group, University of Cauca, Carrera 2º Calle 15n., Popayán, Colombia
{mcastiblanco,leidyv,cpardo,lauralasso}@unicauca.edu.co
Abstract. One of the cognitive diseases that most affects the adult population is Alzheimer's disease (AD), and there is still no treatment with 100% effectiveness; therefore, early diagnosis is of great help for potential patients. Different solutions have been proposed over time that address the detection or discrimination of different neurocognitive disorders, including AD. The purpose of this systematic mapping is to review recent research into the early diagnosis of AD through the use of video games, an innovative proposal with great future potential for patients, caregivers, and health personnel. The methodology applied consists of three phases (planning, execution, and documentation), together with a set of objectives and research questions to classify the results found, as well as inclusion, exclusion, and quality assessment criteria. The results of this investigation show that there are few studies related to the early diagnosis and/or discrimination of different types of dementia that include the use of video games and the digitization of desktop tests such as MOCA and MMSE, but without being automatic; therefore, there is a need to develop more solutions to guide the design, development, and adoption of video games for the diagnosis of AD. Keywords: Alzheimer's disease (AD) · Montreal Cognitive Assessment (MOCA) · Mini-Mental State Examination (MMSE) · Neurocognitive disorders · Serious games · Pervasive games
1 Introduction
Worldwide, people are affected by dementia, a group of neurocognitive disorders caused by different diseases and lesions that damage brain function; one of the most common is Alzheimer's disease (in this article, the abbreviation AD will be used to refer to it) [9]. According to the World Health Organization (WHO), the current incidence of dementia exceeds 50 million individuals worldwide, and this figure is expected to increase to 152 million by 2050 [10, 11]; for this reason, the WHO has recognized the disease as a public health priority. AD is a disease of great importance worldwide and, being a disease without a cure, its early detection is of great relevance for its treatment, because: (i) it slows the progress
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 139–153, 2022. https://doi.org/10.1007/978-3-030-96147-3_12
of the disease [3]; (ii) it reduces medical costs thanks to early detection, as well as institutional costs and pharmacological treatments in long-term care; and (iii) it yields socioemotional benefits that allow the improvement and temporary prolongation of the patient's cognitive function [9]. Currently, the diagnostic methods for AD consist of the application of desktop and psychomotor tests in which a cognitive evaluation of the patient is carried out through a series of questions; some of the most used are the Montreal Cognitive Assessment (MOCA) and the Mini-Mental State Examination (MMSE) [7]. However, this type of evaluation takes time and exposes the patient to stress from not knowing the certainty of their answers, which may affect the result of the diagnosis and is commonly known as the "white coat effect" [8]. The results of this study indicate that the proposed solutions are promising for supporting the early diagnosis of AD, including the development of serious and/or pervasive video games supported by existing gamification techniques to provide a different user experience [12]. It is important to note that these solutions automate the traditional MOCA and MMSE tests for AD diagnosis; however, they do not provide automatic detection. Taking the above into account, the objective of this article is to present a systematic mapping of the literature that identifies the works related to the early diagnosis of AD through the use of video games, as well as the creation and implementation of the proposed solutions, the effectiveness of the proposals, the mechanisms or techniques in place to improve data processing, and their benefits and/or limitations.
Additionally, it has been possible to observe some solutions that address the detection or discrimination of AD, mild cognitive impairment, and dementia; however, there has been little research on the support and benefits they bring not only to patients but also to caregivers and medical staff in the process of diagnosing AD. In the same way, it was possible to observe that articles related to the early detection of Alzheimer's using video games are scarce, which indicates that this area of knowledge still requires different solutions. The paper is organized in the following way: Sect. 2 presents the protocol used to carry out the systematic mapping. Section 3 deals with the results for the established research questions. Section 4 presents the discussion through a set of main observations, the limitations of the mapping, and its implications for the research field. Finally, Sect. 5 presents the conclusions and future work.
2 Research Protocol
A systematic mapping allows a global view of a topic of interest and identifies the primary articles that contribute to the research topic [16]. This article is based on the guide by Petersen et al. [15], in which three phases are carried out: (i) planning, (ii) execution, and (iii) documentation. Each of them is presented in detail below.
2.1 Planning
In this phase, the following activities were carried out: (i) definition of the research scope, (ii) definition of the research questions, (iii) determination of the search strategy, (iv) proposal of the inclusion and exclusion criteria, (v) definition of the quality evaluation criteria, (vi) determination of the data extraction strategy, and (vii) choice of the synthesis methods.

2.1.1 Definition of the Research Scope
The PICOC (Population, Intervention, Comparison, Outcome, Context) method was used to establish a research question and clarify the scope of the systematic mapping [17, 18]. The results of this method are generated from the following elements and questions: (i) Population: What is the population of interest? (ii) Intervention: How does the population intervene? (iii) Comparison: What is the point of comparison? (iv) Outcome: What are the desired results? (v) Context: What is the study context where the research will be conducted? The answers for this study can be found in Table 1. The research question designed for the systematic mapping was the following: What works or initiatives are related to the development of proposals for the early diagnosis of AD through the implementation of serious and/or pervasive games?

Table 1. PICOC definition.

| PICOC | Description |
|---|---|
| Population | People at risk, such as adults over 40 years of age, people with low or no education, people with a family history, or people who have the disease but have not yet been diagnosed with Alzheimer's |
| Intervention | Identify the video games developed for the early detection of AD |
| Comparison | AD detection methods based on the application of tests through video games |
| Outcome | A systematic mapping report of the literature that includes the classification and synthesis of the most relevant articles published on AD and on video games applied to its diagnosis |
| Context | A systematic investigation to consolidate peer-reviewed academic research: classification and comparison, trends, and future research directions |
Based on the research question, the following objectives (O) were raised:
– O1: Determine the demographic scope and identify relevant sources of information for detecting the level of AD suffering.
– O2: Help researchers and interested parties know the quality, validation, processes, and methods used by the authors of the studies found.
– O3: Identify the state of development of the proposals in terms of the results obtained.
– O4: Identify the main trends in diagnosing the level of the disease in patients.
Based on these objectives, the research questions presented in the following section were posed.
2.1.2 Research Questions
Taking into account that the objective of this mapping is to identify the solutions proposed for the early diagnosis of AD through the use of video games, Table 2 presents the research questions, their motivation, and their relationship to the proposed objectives (objective related, OR).

Table 2. Research questions.

| # | Question | Motivation | OR |
|---|---|---|---|
| Q1 | What is the geographical distribution of the information sources? | The geographical distribution is represented as regions, countries, universities, and research teams leading communities related to the population at risk of and/or suffering from AD | O1 |
| Q2 | What is the local distribution of information sources? | Discover the relevant venues, for example conferences and journals, that contain most of the topics of our research interest | O1 |
| Q3 | What are the most cited primary studies? | Identify the most cited authors and works on the subject consulted | O2 |
| Q4 | What research methods are applied in the primary studies? | Identify the research methods of most interest to our search that are applied in the selected articles | O2 |
| Q5 | What kinds of methods or tests have been proposed or used to facilitate the diagnosis of Alzheimer's disease through the use of video games (serious and pervasive)? | Determine the most commonly used tests in research regarding AD and the diagnosis of this condition | O3 |
| Q6 | How effective has the application of tests to evaluate AD through the use of video games (serious and pervasive) been? | Determine the level of effectiveness of the test results and video games, to identify the proposals with the best implementation and classification | O3 |
| Q7 | What are the benefits and/or limitations of the use of video games (serious and pervasive) for AD patients? | Determine the benefits, consequences, limitations, and challenges of the proposed solutions for patients with AD | O3 |
| Q8 | What is the trend of the proposals developed for the diagnosis of AD by means of video games? | Determine the influence of video games on the early diagnosis of AD | O4 |
2.1.3 Search Strategy The search string takes into account the keywords related to the proposed objective, joined by the logical connectors “AND” and “OR”. Additionally, the
Systematic Mapping of Literature About the Early Diagnosis
143
asterisk and quotation marks were included to refine the number of results obtained; likewise, the string was adapted for application in the different search engines, among them: (i) Google Scholar, (ii) IEEE Xplore, (iii) SpringerLink, (iv) Scopus and (v) Science Direct. In addition, studies that contributed to the research and were classified as gray literature were included. The defined search string was the following: (alzheimer*) AND (“serious games” OR “serious game” OR “pervasive games” OR “pervasive game” OR “ubiquitous games” OR “ubiquitous game”) AND (test* AND (“detection” OR diagno* OR “identification” OR “assessment”)). Another important aspect of the advanced search is the time window, which runs from 2017 to the present (2021). Table 3 shows the adaptation of the search string for each of the search engines consulted. Table 3. Search engine and search string.
Search string
Science Direct: (“alzheimer” OR “alzheimer’s”) AND ((diagnostic OR identification OR detection)) AND (serious OR pervasive AND (videogames OR videogame OR “video game” OR “video games” OR “digital games” OR “digital game”))
IEEE Xplore: ((“Full Text Only”: “pervasive games” OR “Full Text Only”: “ubiquitous games” OR “Full Text Only”: “serious games”) AND (“Full Text Only”: test* AND (“Full Text Only”: “diagnostic” OR “Full Text Only”: “detection” OR “Full Text Only”: assess* OR “Full Text Only”: “evaluation” OR “Full Text Only”: “identification”)) AND (“Full Text Only”: alzheimer*))
Scopus: TITLE-ABS-KEY((“alzheimer” OR “alzheimer’s”)) AND ALL(((diagnostic OR identification OR detection)) AND (serious OR pervasive AND (videogames OR videogame OR “video game” OR “video games” OR “digital games” OR “digital game”))) AND PUBYEAR > 2016
Springer Link: (“alzheimer” OR “alzheimer’s”) AND ((diagnostic OR identification OR detection)) AND (serious OR pervasive AND (videogames OR videogame OR “video game” OR “video games” OR “digital games” OR “digital game”)) AND NOT (rehabilitation) AND NOT (training) AND NOT (chemistry)
Google Scholar: (alzheimer*) AND (“serious games” OR “serious game” OR “pervasive games” OR “pervasive game” OR “ubiquitous games” OR “ubiquitous game”) AND (test* AND (“detection” OR diagno* OR “identification” OR “assessment”)) -chemistry -rehabilitation
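Assembling the base boolean string programmatically makes the per-engine adaptations above less error-prone. A minimal sketch (the helper names `any_of` and `all_of` are ours, not from the study):

```python
# Sketch: build the base search string of Sect. 2.1.3 from its term groups.
# Helper names are illustrative, not part of the study's tooling.

def any_of(*terms):
    """OR-join alternative terms, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def all_of(*clauses):
    """AND-join clauses."""
    return "(" + " AND ".join(clauses) + ")"

game_terms = any_of("serious games", "serious game", "pervasive games",
                    "pervasive game", "ubiquitous games", "ubiquitous game")
goal_terms = any_of("detection", "diagno*", "identification", "assessment")

query = all_of("alzheimer*", game_terms, all_of("test*", goal_terms))
print(query)
```

Per-engine variants (field prefixes, `PUBYEAR` filters, exclusion terms) can then be generated from the same term groups instead of being edited by hand.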
2.1.4 Inclusion and Exclusion Criteria After collecting the results from the various search engines, an analysis of the studies found was performed with the aim of reducing the number of articles and focusing them on the objective of this study. This process was carried out with a set of inclusion and exclusion criteria. Inclusion criteria: (i) articles in English; (ii) articles within the topic of interest, related to solutions that can be used to diagnose Alzheimer’s disease through pervasive and serious video
games; and (iii) full articles published between 2017–2021 in peer-reviewed workshops, congresses, magazines or conferences. Exclusion criteria: (i) duplicate articles previously found in another search engine; (ii) articles proposing solutions for Alzheimer’s diagnosis that omit video games as a detection mechanism; (iii) articles with solutions based on pervasive and serious video games for other types of dementia that do not delve into Alzheimer’s disease or its diagnosis; (iv) articles naming pervasive and/or serious video games and Alzheimer’s disease but that do not go deeper into the subject; (v) opinion articles; (vi) articles with no clear research methodology; and (vii) articles that name pervasive and serious video games and/or Alzheimer’s disease but do not focus on the diagnosis. Therefore, articles that meet at least one inclusion factor are covered, and articles that meet some exclusion criterion are not taken into account. 2.1.5 Quality Evaluation Criteria In order to measure the quality of the selected articles, a questionnaire with a scoring system of three values (−1, 0 and +1) was proposed. Table 4 presents the criteria used to evaluate the primary articles, together with the scoring system. The total score for each article is obtained by adding the values assigned to each criterion; this score lies in a range between −5 and +5. An article with a low rating is not excluded from the systematic mapping; the results will be used to find the most relevant studies that provide more information for future work. The results obtained when applying the quality evaluation criteria can be seen in https://bit.ly/3j95GXb.
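The scoring scheme can be applied mechanically: one value per criterion C1–C5, summed to a total in [−5, +5]. A short sketch of the tally, where the sample answers are hypothetical, not taken from any evaluated article:

```python
# Sketch of the quality-scoring scheme of Sect. 2.1.5: map each textual
# answer to -1/0/+1 and sum over criteria C1-C5.
SCORES = {"yes": 1, "partially": 0, "no": -1,
          "highly relevant": 1, "relevant": 0, "not relevant": -1}

def article_score(answers):
    """answers: dict mapping criterion id -> textual answer."""
    total = sum(SCORES[a.lower()] for a in answers.values())
    assert -5 <= total <= 5  # range stated in the paper
    return total

# Hypothetical answers for one article
sample = {"C1": "Yes", "C2": "Partially", "C3": "Yes",
          "C4": "Highly relevant", "C5": "No"}
print(article_score(sample))  # 1 + 0 + 1 + 1 - 1 = 2
```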
2.1.6 Data Extraction Strategy The link https://bit.ly/3yktkGj presents a set of possible responses related to the research questions defined above, with the purpose of minimizing subjectivity and ensuring the criteria are applied to all primary studies from a set of possible answers. 2.1.7 Synthesis Methods This method is based on answering the research questions through the analysis of the selected articles, and is divided into the following stages: (i) identification of the title, date of publication and keywords, (ii) abstract, and (iii) other relevant aspects. The link https://bit.ly/37eE4Kw presents the relationship between the research questions and the primary studies identified. 2.2 Execution When applying the search string in the search engines, it had to be adapted to the filters and parameterization provided by each engine, finally obtaining the search strings shown in Table 3. After the modifications were made, the search was run in each engine. Table 5 shows the total number of articles (found, relevant, repeat relevant and selected primary).
Table 4. Quality evaluation criteria questionnaire and score.
C1. The research provides a clear view of the application of video game mechanisms for the early detection of AD. Score: +1 Yes; 0 Partially; −1 No.
C2. The research presents in detail the results obtained after using the video game. Score: +1 Yes; 0 Partially; −1 No.
C3. The research proposes an effective solution for the early diagnosis of AD. Score: +1 Yes; 0 Partially; −1 No.
C4. The research has been published in a relevant magazine, conference or congress. The quartile classification proposed by Scimago (https://www.scimagojr.com) was used to classify the journals, and the ranking of Computing Research & Education (CORE, http://portal.core.edu.au/conf-ranks/) for congresses and conferences. Score: +1 Highly relevant (Q1 quartile for journals, A* for congresses and conferences); 0 Relevant (Q2 and Q3 quartiles for journals, A and B for congresses and conferences); −1 Not relevant (Q4 quartile for journals, C for congresses and conferences, or no classification).
C5. The research assesses the effectiveness of the presented proposal for the early diagnosis of AD. Score: +1 Yes; 0 Partially; −1 No.
Table 5. Total number of articles per search engine (found, relevant, repeat relevant and selected primary).
1. Science Direct: 85 found, 2 relevant, 0 repeat relevant, 2 selected primary.
2. IEEE Xplore: 43 found, 0 relevant, 0 repeat relevant, 0 selected primary.
3. Scopus: 38 found, 4 relevant, 1 repeat relevant, 3 selected primary.
4. Springer Link: 9 found, 0 relevant, 0 repeat relevant, 0 selected primary.
5. Google Scholar: 230 found, 3 relevant, 1 repeat relevant, 2 selected primary.
Total: 405 found, 9 relevant, 2 repeat relevant, 7 selected primary.
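The per-engine counts in Table 5 are internally consistent (selected primary = relevant − repeat relevant for each engine), which can be cross-checked mechanically:

```python
# Cross-check Table 5: selected primary = relevant - repeat, per engine,
# and recompute the column totals.
rows = [  # (engine, found, relevant, repeat_relevant, selected_primary)
    ("Science Direct",  85, 2, 0, 2),
    ("IEEE Xplore",     43, 0, 0, 0),
    ("Scopus",          38, 4, 1, 3),
    ("Springer Link",    9, 0, 0, 0),
    ("Google Scholar", 230, 3, 1, 2),
]
for name, found, relevant, repeat, primary in rows:
    assert primary == relevant - repeat, name

totals = [sum(r[i] for r in rows) for i in range(1, 5)]
print(totals)  # [405, 9, 2, 7], matching the Total row
```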
3 Results The results of the analysis of the selected primary studies are presented below, answering the research questions. Each answer is referenced so that the reader can delve further into it later. The results obtained when applying the quality evaluation criteria can be seen in Table 6. Table 7 presents the relationship between the research questions and the primary studies identified.
Table 6. Results of quality assessment criteria (criterion score per reference).
C1: [1] +1, [2] −1, [3] −1, [4] +1, [5] +1, [6] +1, [7] −1.
C2: [1] +1, [2] −1, [3] −1, [4] +1, [5] +1, [6] +1, [7] −1.
C3: [1] +1, [2] −1, [3] −1, [4] +1, [5] 0, [6] +1, [7] −1.
C4: [1] −1, [2] 0, [3] −1, [4] +1, [5] +1, [6] +1, [7] −1.
C5: [1] +1, [2] −1, [3] −1, [4] +1, [5] +1, [6] +1, [7] −1.
Score: [1] +3, [2] −4, [3] −5, [4] +5, [5] +4, [6] +5, [7] −5.
3.1 Q1: What Is the Geographic Distribution of the Information Sources? The link https://bit.ly/3rNpWRR presents the geographic distribution of the articles selected as primary sources; the studies are located in 4 countries: (i) Spain [1, 4–6] (57.1%), (ii) Portugal [2] (14.2%), (iii) Germany [3] (14.2%) and (iv) France [7] (14.2%). 3.2 Q2: What Is the Local Distribution of Information Sources? Table 8 presents the local distribution of information sources, showing the scientific events and journals where the articles have been published.
Table 7. Contributions of primary studies to each research question.
Q1: [1–7]. Q2: [1–7]. Q3: [1–7]. Q4: [1–7]. Q5: [1–7]. Q6: [1, 4–6]. Q7: [1–7]. Q8: [1–7].
Table 8. Local distribution of information sources.
1. Computational Science and Its Applications – ICCSA 2019: [1] (14.3%)
2. Entertainment Computing – ICEC 2018: [2] (14.3%)
3. Procedia Computer Science – 2017: [3] (14.3%)
4. PeerJ 2017: [4] (14.3%)
5. PeerJ 2018: [5] (14.3%)
6. Methods of Information in Medicine 2017: [6] (14.3%)
7. 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA): [7] (14.3%)
3.3 Q3: What Are the Most Cited Primary Studies? The link https://bit.ly/3fkL3Gj presents the number of citations for each primary article. It should be noted that the number of citations was compared across the search engines and the highest count was selected. 3.4 Q4: What Are the Most Frequently Applied Research Methods and in What Study Context? Twenty-nine percent of the primary articles analyzed [1, 2] use a mixed methodology, that is to say, the information collected from patients through the games is accompanied by an analysis represented in 3 main phases: (i) data collection; (ii) preprocessing; and (iii) classification. In addition, another 29% of the articles [3, 7] are conducted as a systematic review of the literature: the objective in [3] is mainly focused on the identification and classification of serious
148
M. Camila Castiblanco et al.
game mechanisms used to obtain an early diagnosis of AD or dementia through the applications available for smartphones; in [7], the objective was to create an immersive virtual environment with multiple tasks based on existing printed (paper) tests for AD diagnosis. Similarly, another 29% of the articles [5, 6] conduct a pilot study, applying an analysis of psychometric validity, where it is verified that the instrument actually measures what it claims to measure (more on psychometric validity can be found in [13]); [5] includes predictive validity and innovative machine learning techniques, while criterion validity and external validity are used in [6]. The remaining 13% corresponds to only one article [4], in which a methodology is proposed and the research mainly focuses on how the concept of design science (DS) is applied in information systems (IS). It should also be noted that it is a cross-sectional or prevalence study whose validation model is based on the leave-one-out cross-validation (LOOCV) strategy, where the training set is formed with all available observations except one, which is used as the test set, and several iterations are performed so that all observations are taken into account. Therefore, the algorithm is applied once for each instance, using all other instances as a training set and the selected instance as a single-item test set [14]. This process is closely related to the statistical jackknife estimation method. 3.5 Q5: What Kind of Methods or Tests Have Been Proposed or Used to Facilitate the Diagnosis of Alzheimer’s Disease Through the Use of Video Games (Serious and Pervasive)?
The following are the different proposals of each primary article analyzed for the diagnosis of AD and/or the discrimination among patients with Mild Cognitive Impairment (hereinafter MCI), patients with AD and healthy patients, together with a brief description of the video games proposed to support the diagnosis of AD. In [1] and [6], a group of different serious games called the “Panoramix” battery is proposed, among them: (i) Episodix, based on the gamification of the California Verbal Learning Test (CVLT), which is widely used to assess episodic memory; (ii) Attentix, which evaluates the player’s attention span; (iii) Executix, which addresses executive functions and is based on the gamification of the Tower of Hanoi game; (iv) Workix, which is based on the Corsi Cubes test and evaluates working memory; (v) Semantix, based on the gamification of the Pyramids and Palm Trees test to assess semantic memory; (vi) Prospectix, which addresses prospective memory through the Pursuit Rotor Task test; and (vii) Gnosix, which focuses on visual gnosias, i.e., the ability to visually recognize different items and assign meaning to them. In [2], three serious games are developed to stimulate and evaluate the cognitive performance of the users: (i) all sheep must be separated according to their color into the corresponding area or field, the black sheep remaining on the right side while the white sheep remain on the left side; (ii) the player will be asked
to identify the number of sheep that appear in the enclosure; and (iii) the objective is for the player to remember the order in which the cows are presented so that, at the end, the player can indicate the order in which the cows were milked. In [3], researchers are developing early detection mechanisms for Alzheimer’s disease and dementia; some of the proposed video games are: (i) Sea Hero Quest, whose early results show that orientation problems seem to start earlier than memory problems, an important finding for AD&D research; (ii) Neuroracer/Project Evo, which is able to differentiate between healthy older users with amyloid deposits and a comparison group without these deposits; and (iii) Brain Health, an application from the Acuity Game series that attempts to assess the user’s brain health by measuring game performance. [4] and [5] are based on a gamification of the neuropsychological California Verbal Learning Test (CVLT), with the purpose of assessing episodic memory through the video game “Episodix” proposed in [5]. In [7], virtual reality is used, inspired by the usual tests MMSE, MoCA and Dubois Five Words; in this study, an application composed of 7 tasks is proposed: orientation, attention, memory, executive and visuospatial functions, practice, language and abstraction. 3.6 Q6: How Effective Has the Application of Tests to Evaluate AD Through the Use of Video Games (Serious and Pervasive) Been? The effectiveness of the tests to evaluate AD varies with the diversity of approaches used in each study. For example, in [1], F1-score, accuracy and Cohen’s kappa (K) were used to measure the performance of the machine learning algorithms; Random Forest (hereafter RF) obtained 0.99, 1.00 and 0.98, respectively.
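These three metrics recur throughout the studies below; for a binary classifier they can all be computed from the confusion matrix. A self-contained sketch with hypothetical label vectors (not data from the studies):

```python
# Accuracy, F1-score and Cohen's kappa from binary predictions.
def metrics(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    n = len(y_true)
    acc = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Expected chance agreement, then kappa = (po - pe) / (1 - pe)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - pe) / (1 - pe)
    return acc, f1, kappa

# Hypothetical labels: 1 = AD/MCI, 0 = control
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(metrics(y_true, y_pred))  # (0.75, 0.75, 0.5)
```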
In [4], the effectiveness results depend on the way in which the Episodix video game is applied; the best results were achieved by including a cognitive break, where the accuracy, F1-score, specificity and sensitivity of the linear regression (LR) and RF algorithms reach the maximum value of 1.0. In [5], the best results are obtained by the gradient boosting algorithm, with F1-score = 0.92 and K = 0.89. In [6], results from different games can be evidenced, such as (i) Semantix, (ii) Procedurix and (iii) Episodix; the best result, with 100% accuracy, is obtained by combining the related datasets in each set. On the other hand, [2, 3] and [7] present no data regarding effectiveness: [2] does not report a metric to calculate the performance of the games, but shows the performance of the players or patients; [3] focuses on identifying applications available on smartphones for the early diagnosis of AD or dementia through serious gaming mechanisms; and finally, in [7], since it is a recent study, no application tests have been performed on patients. 3.7 Q7: Benefits and/or Limitations of the Use of Video Games (Serious and Pervasive) in AD Patients? When analyzing the articles, it is concluded that 100% [1–7] agree on the benefits of video games in patients with AD, taking into account that it is not necessary to consider
the educational level or the technological skills of the user/patient when applying the proposed solutions. Likewise, users are less dependent on professionals when using video games: by playing on their own, they feel less overwhelmed, which allows the influence of the “white coat” effect mentioned in other studies, such as [8], to be reduced or eliminated. In addition, [2] presents video games with the potential to be used for cognitive stimulation at a low cost of acquisition and implementation, allowing the development of an accessible design that can be calibrated to the needs and characteristics of the target population. On the other hand, limitations are also evident, such as: (i) further research is needed to obtain a representative and normative dataset necessary for clinical validity [1]; (ii) further research is needed to improve the resolution of the game with respect to the identification of specific cognitive deficits, as well as to achieve a complete validation of the psychometric properties of the proposed games [4, 5]; (iii) the serious gaming applications are not developed on web or desktop platforms but are limited to smartphones and tablets, since older adults can use these devices without prior knowledge [3]; (iv) none of the proposed solutions provides automated detection; (v) given that the research belongs to a recent area of study, there is no model that formalizes the development of serious games for a context such as the support and diagnosis of AD [2]; (vi) the sample size reported in some studies is small [6]; (vii) visual problems generate incompatibility with a virtual assessment [7]; and (viii) another limitation to overcome is the reluctance of many patients to use the mobile device because of their unwillingness to learn how to use the technological device [2]. 3.8 Q8: What Is the Trend of the Proposals Developed for the Diagnosis of AD by Means of Video Games? 57.1% of the proposals developed [1, 4–6] apply machine learning in order to improve the classification and analysis of the data found, and also make use of the extra trees (ET) classification algorithm. Additionally, in [2] and [3], applications for smart devices such as tablets and smartphones are used to implement the video games. Finally, in [7], virtual reality is applied as the mechanism for the development of the application.
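Most of these machine-learning proposals validate their classifiers with schemes such as the LOOCV strategy described in Sect. 3.4, which can be sketched in a few lines. The 1-nearest-neighbour classifier below is only a stand-in for the algorithms actually used in the studies (RF, gradient boosting, ET), and the toy dataset is hypothetical:

```python
# Leave-one-out cross-validation: each observation is held out once as a
# single-item test set while all others form the training set.

def nn_predict(train, query):
    """Label of the training point closest to `query` (1-NN stand-in)."""
    return min(train, key=lambda p: abs(p[0] - query))[1]

def loocv_accuracy(data):
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]   # all observations except one
        hits += (nn_predict(train, x) == y)
    return hits / len(data)

# Toy 1-D dataset: (feature, label), 0 = control, 1 = impaired
data = [(0.1, 0), (0.2, 0), (0.3, 0), (1.1, 1), (1.2, 1), (1.3, 1)]
print(loocv_accuracy(data))  # 1.0 on this cleanly separated toy set
```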
4 Discussion The following is an analysis of the results obtained from the systematic mapping conducted, in order to show the improvements or changes that can be made to the proposals found. 4.1 Main Observations AD is a type of dementia characterized by affecting the daily life of the older adults diagnosed with it [9]. In the early stages of AD, some of the symptoms that occur are:
(i) problems remembering events of daily life, (ii) difficulty in memorizing activities, (iii) drastic mood changes, and (iv) dependence to perform basic tasks, among others. On the other hand, as AD is related to pathological changes in different types of dementia, it is considered a mixed pathology [9], which is why the early detection of AD represents a great difficulty and necessity [1]. Thanks to the implementation of technology in the healthcare field, innovative solutions have emerged to support physicians in the diagnosis of different diseases, in this case, the early diagnosis of AD through the use of video games. Considering the primary articles identified and analyzed, it is evident that there is a relationship between AD and MCI: since they share symptoms, MCI is considered a potential precursor of AD [9]. On the other hand, there is little information about the early diagnosis of AD based on video games; the closest that can be found is the discrimination among patients with AD, patients with MCI and healthy patients, known as controls [1, 5]. Likewise, most of the studies found are associated with the diagnosis of MCI, and given the relationship between MCI and AD, it is important to find commonalities that can support efforts in each one. In addition, it is evident that factors such as minimizing the patient’s stress are not considered, even though doing so can improve the results for MCI and AD and contributes to reducing the time spent in performing the test and obtaining the diagnosis. There are also important benefits of implementing video games in the early diagnosis of AD, among them: no educational level or technological skills are required [1], the user is less dependent on professionals [3], and agility in obtaining and evaluating the patient’s results allows a continuous evaluation, among others [2]. Thus, the use of this type of video games becomes a promising solution.
4.2 Limitations During the application of the search string to compile the primary articles, important limitations for the development of the mapping became evident, such as: (i) the established string includes some articles that do not contribute relevant information to the defined approach, but their analysis was carried out anyway to avoid losing articles of importance for this study; (ii) most of the studies found are based on the diagnosis of MCI, which is the reason why few articles were selected as primary; (iii) solutions based on video games are presented, however, there is no evidence of a fixed methodology for their development; (iv) statistical data on the population with AD are not constantly updated, so the exact percentages of the condition are not known; (v) when adapting the search string for each of the engines, it became evident that one of them (ScienceDirect) allows a maximum of 8 logical operators, which does not match the number used in the main search string, so different changes were made in order to comply with the restriction and thus avoid excluding articles that provide relevant information to the research; (vi) in the initial stage of the mapping, the research question “What type of solutions have been proposed to measure the level of risk of suffering AD through video games?” was posed with the objective of deepening the research; however, no answer was found in the primary articles analyzed, which is why it was eliminated; finally, (vii) several authors were contacted via email to request some of the articles needed for the development of the mapping, since there was no access to them.
5 Conclusions and Future Work The population suffering from AD is expected to grow exponentially over time; since it is a disease without a cure, its diagnosis in early stages is of vital importance to delay symptoms, reduce treatment costs and provide quality of life to the patient, so the early diagnosis of AD has become a topic of high interest in the fields of health and science. From the literature analyzed, it can be concluded that video games are increasingly used for the premature diagnosis and/or discrimination of different types of dementia; in addition, they are easy for older adults to use and generate a sense of confidence and comfort. It is noted that there is no methodology for the development of these games, although it is clear that the inclusion of machine learning techniques helps to obtain a better classification of the data, thus generating better results for clinical diagnosis support. Finally, with regard to further research, we hope to propose a solution that clarifies the elements to be taken into account by IT professionals in the development of such video games. We also expect to develop a video game with the aim of supporting health personnel in the early diagnosis of AD.
References
1. Valladares-Rodríguez, S., Anido-Rifón, L., Fernández-Iglesias, M.J., Facal-Mayo, D.: A machine learning approach to the early diagnosis of Alzheimer’s disease based on an ensemble of classifiers. In: Misra, S., et al. (eds.) ICCSA 2019. LNCS, vol. 11619, pp. 383–396. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-24289-3_28
2. Neto, H.S., Cerejeira, J., Roque, L.: Cognitive screening of older adults using serious games: an empirical study. Entertainment Comput. 28, 11–20 (2018). https://doi.org/10.1016/j.entcom.2018.08.002
3. Polzer, N., Gewald, H.: A structured analysis of smartphone applications to early diagnose Alzheimer’s disease or dementia. Procedia Comput. Sci. 113, 448–453 (2017). https://doi.org/10.1016/j.procs.2017.08.293
4. Valladares-Rodriguez, S., Perez-Rodriguez, R., Facal, D., Fernandez-Iglesias, M.J., Anido-Rifon, L., Mouriño-Garcia, M.: Design process and preliminary psychometric study of a video game to detect cognitive impairment in senior adults. PeerJ 2017(6), 1–35 (2017). https://doi.org/10.7717/peerj.3508
5. Valladares-Rodriguez, S., Fernández-Iglesias, M.J., Anido-Rifón, L., Facal, D., Pérez-Rodríguez, R.: Episodix: a serious game to detect cognitive impairment in senior adults. A psychometric study. PeerJ 6, e5478 (2018). https://doi.org/10.7717/peerj.5478
6. Valladares-Rodriguez, S., Pérez-Rodriguez, R., Fernandez-Iglesias, J.M., Anido-Rifón, L.E., Facal, D., Rivas-Costa, C.: Learning to detect cognitive impairment through digital games and machine learning techniques: a preliminary study. Methods Inf. Med. 57(04), 197–207 (2018). https://doi.org/10.3414/ME17-02-0011
7. Florian, M., Margaux, S., Khalifa, D.: Cognitive tasks modelization and description in VR environment for Alzheimer’s disease state identification. In: 2020 10th International Conference on Image Processing Theory, Tools and Applications (IPTA 2020) (2020). https://doi.org/10.1109/IPTA50016.2020.9286627
8.
Mario, B., Massimiliano, M., Chiara, M., Alessandro, S.: White-coat effect among older patients with suspected cognitive impairment: prevalence and clinical implications. Int. J. Geriatr. Psychiatry 24(5), 509–517 (2009). https://doi.org/10.1002/gps.2145
9. Alzheimer’s Association: 2018 Alzheimer’s disease facts and figures. Alzheimer’s Dement. 14(3), 367–429 (2018). https://doi.org/10.1016/j.jalz.2018.02.001
10. World Health Organization: Dementia (2020). https://bit.ly/3C4MYbS. Accessed 01 June 2021
11. Subdirección de Enfermedades No Trasmisibles, Grupo Gestión Integrada para la Salud Mental: Boletín de salud mental: Demencia, Octubre de 2017, p. 19 (2017). https://bit.ly/3ygLx7R
12. Baptista, G., Oliveira, T.: Gamification and serious games: a literature meta-analysis and integrative model. Comput. Hum. Behav. 92, 306–315 (2019). https://doi.org/10.1016/j.chb.2018.11.030
13. Truijens, F.L., Cornelis, S., Desmet, M., De Smet, M.M., Meganck, R.: Validity beyond measurement: why psychometric validity is insufficient for valid psychotherapy research. Front. Psychol. (2019). https://doi.org/10.3389/fpsyg.2019.00532
14. Sammut, C., Webb, G.I. (eds.): Leave-one-out cross-validation. In: Encyclopedia of Machine Learning, pp. 600–601. Springer US, Boston, MA (2010)
15. Petersen, K., Vakkalanka, S., Kuzniarz, L.: Guidelines for conducting systematic mapping studies in software engineering: an update. Inf. Softw. Technol. 64, 1–18 (2015). https://doi.org/10.1016/j.infsof.2015.03.007
16. Kitchenham, B., Charters, S.: Guidelines for Performing Systematic Literature Reviews in Software Engineering, vol. 2 (2007)
17. Bruzza, M., Cabrera, A., Tupia, M.: Survey of the state of art based on PICOC about the use of artificial intelligence tools and expert systems to manage and generate tourist packages. In: 2017 International Conference on Infocom Technologies and Unmanned Systems: Trends and Future Directions (ICTUS 2017), vol. 2018-January, pp. 290–296 (2018). https://doi.org/10.1109/ICTUS.2017.8286021
18. Ghani, I., Yasin, I.: Software security engineering in extreme programming methodology: a systematic literature review. Sci. Int. (Lahore) 25(2), 215–221 (2013). https://bit.ly/2VnLTee. Accessed 28 Jul 2021
Intelligent Systems
Design of the Process for Methane-Methanol at Soft Conditions Applied to Selection the Best Descriptors for Periodic Structures Using Artificial Intelligence Josue Lozada(B) , E. Reguera, and C. I. Aguirre-Velez Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada, Instituto Politécnico Nacional, Unidad Legaria, México City, México
Abstract. Methane is a relevant energy vector in modern society considering both its natural abundance and the possibility of producing it from residual biomass through an anaerobic process. While methane is a supercritical gas at room temperature, methanol is a liquid, which facilitates its storage and handling as a fuel. This explains the interest in and convenience of having a technology for methane-to-methanol conversion. The selective oxidation of methane to methanol in a direct way is currently a challenge because, depending on the reaction route, different results can be obtained. The use of zeolite materials as supports for copper complexes promises a clear way to find conditions under which to realize a catalytic process. There are different parameters involved in zeolites with copper complexes that contribute to creating a chemical environment for the interaction between methanol and active sites in the structure. Taking into account several structural and physical-chemical parameters, it is possible to select the best structures for this conversion process with the help of artificial intelligence algorithms. Keywords: Zeolites · Database · Machine learning
1 Introduction Different approaches have been proposed to meet the challenge of converting methane to methanol, but none is industrially feasible. One way to carry out this conversion is to use zeolites combined with copper complexes to obtain better yields under “soft” conditions [1–4]. There is a large number of zeolite materials that could be useful, but an experimental exploration of all zeolites demands resources and time. For this reason, it is necessary to identify the best structures for the selective oxidation of methane to methanol, taking into account several physical-chemical parameters involved in the chemical reaction. The systems studied so far show relevant factors that contribute to the catalytic process, such as the percentage of copper, the Si/Al and Cu/Al ratios in the structure, the pore size, the type of oxidant, the chemical environment and others. In order to select the © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 157–167, 2022. https://doi.org/10.1007/978-3-030-96147-3_13
J. Lozada et al.
best structures to carry out this conversion process taking into account all these factors, methodologies based on artificial intelligence algorithms have been proposed. However, even though intense research has been carried out on this type of system, some questions remain to be answered. For example: how many sites are present in the structure, and how many of those are active? What is the underlying reaction mechanism? What is the specific nature of the active sites? And finally, how does the topology of the zeolite determine the reactivity? [1, 5]. In this paper we discuss the contribution of different factors, structural and physical-chemical, of zeolites combined with copper complexes involved in the methane-methanol conversion, in order to build a database useful for AI algorithms. Preliminary results of an algorithm application are shown.
2 Data Pre-processing

2.1 Materials

Zeolites are microporous crystalline aluminosilicates organized in three-dimensional topologies, which are based on the wide range of ring structures that can result from the condensation of aluminum and silicon as oxides around an equally wide range of structure-directing materials. By 2012, 206 types of zeolites had been identified according to their structure, of which 40 are natural; the rest are synthetic. Zeolites are composed of tetrahedra made up of a cation and four oxygen atoms, that is, a TO4 formula, where the cation T can be silicon (Si), aluminum (Al), or germanium (Ge), although silicon predominates. The tetrahedra interconnect since adjacent tetrahedra share oxygen atoms, as shown in Fig. 1.
Fig. 1. Example of the structure for a zeolite where the cation is T (grey) and O is atom of oxygen (black), the tetrahedra are interconnected periodically to give rise to the complete structure [17].
The inclusion of aluminum is chemically offset by the inclusion of K, Na, and Ca or, less frequently, by Li, Mg, Sr, and Ba. These seven cations, although part of the zeolites, do not become part of the TO2 framework [9]. Since the first demonstration that copper lodged in zeolites could convert methane to methanol, considerable progress has been made to improve the performance of this conversion. Relevant results found for these systems include: (i) the methanol yield per gram of material as a function of the Si/Al ratio; (ii) the role of zeolite topology, aluminum
Design of the Process for Methane-Methanol at Soft Conditions
content, and sensitivity to general reaction conditions affect the conversion; (iii) the copper must be activated by an oxidant to create one or more active sites; (iv) the best-yield zeolites in terms of methanol per copper atom present are MOR, CHA, ZSM-5, and MAZ; (v) the larger-pore zeolites (those based on rings of ten members or more) are somewhat less capable of stabilizing the copper sites required for efficient and selective methanol formation at low reaction temperatures [1–8]. The ability to convert methane to methanol selectively has its origin in stabilizing intermediate species, such as methoxy (a radical consisting of a methyl group attached to oxygen), and thus protecting them from over-oxidation (which can lead to carbon dioxide); therefore, the zeolites to be studied are those that contain rings of between 6 and 10 members, since they can stabilize the active site [1, 7, 8]. Considering that the experimental methods of zeolite synthesis yield a statistical distribution of aluminum atoms throughout the zeolite framework, the aluminum distribution within a single unit cell is considered a relevant variable that will form part of the database [1]. Many studies on the direct conversion of methane over copper-containing zeolites indicate using O2, N2O, H2O2, or H2O as oxidants for industrial application [1–8], unlike zeolites combined with Fe, in which activation sites cannot be formed with O2. It has therefore been predicted that the copper ion has a better reactivity toward methane than other transition-metal ions. It has also been claimed that the activation barrier energy changes depending on the zeolite structure: the aluminum position and its bonding structure with metal cations affect the M-O-M angle, which influences the activation barrier energy. It has been reported that small-pore Cu zeolites produce almost twice as much methanol per Cu atom in comparison to medium- and large-pore zeolites.
The activation energy necessary to break the C-H bond of methane, which is the rate-determining step in methane conversion, is controlled by the Cu-O-Cu angle, which depends on the crystallographic location in the zeolite structure and on the active copper species [2], as shown in Fig. 2.
Fig. 2. Example of Cu-O-Cu angle for some common active sites which may form in the pores of the material [2].
A great variety of Cu zeolite catalysts have been evaluated during the last decades for a cyclical process of several steps, and the representative results for the design of the process to obtain a high methanol yield are: (i) highly dispersed copper-oxo active
species; (ii) copper active species formed in small-pore channels; (iii) an appropriate activation temperature; and (iv) ion exchange.

2.2 Machine Learning in Chemistry

Today, machine learning algorithms are used successfully for classification, regression, clustering, or dimensionality reduction of large sets of especially high-dimensional input data [10]. Several studies have already been carried out on the implementation of artificial intelligence algorithms in catalysis chemistry. Some of the applications of machine learning in materials science are the discovery of new stable materials and the prediction of their structure; the calculation of materials properties; the construction of functionals in density functional theory (DFT); and the physical understanding and optimization of the performance of a certain task by using examples and/or past experience [10]. In the general machine learning workflow, first a subset of the relevant population for which the target property values are known is chosen, or data is created if necessary. This process is accompanied by the selection of a machine learning algorithm that will be used to fit the desired target quantity [10]. Most of the work consists of generating, finding, and cleaning the data to make sure it is consistent, accurate, etc. Second, we need to decide how to map the properties of the system, that is, the input for the model, in a way that is suitable for the chosen algorithm. This involves translating the raw information into certain features that will be used as inputs to the algorithm [10]. Once this process is finished, the model is trained by optimizing its performance, generally measured through some type of cost function; typically this involves setting hyper-parameters that control the training process and the structure and properties of the model.
The data is divided into several sets; ideally, a validation data set separate from the test and training sets is used for hyper-parameter optimization, monitoring the error on the validation set [10]. Before the model is ready for applications, it must be evaluated on previously unseen data, denoted as the test set, to estimate its generalizability and extrapolation; different methods, such as cross validation and Monte Carlo cross validation, are based on keeping some data hidden from the model during the training process [10]. The best choice of representation depends on the target property. The extracted features should ideally be uncorrelated descriptors, since correlated features can hinder the efficiency and accuracy of the model; when this happens, feature selection is needed to simplify the models and improve their training efficiency.
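The train/validation/test splitting described above can be outlined in a short Python sketch (the fractions, seed, and dataset here are illustrative, not those used in the study):

```python
import random

def split_dataset(samples, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle and split a dataset into training, validation and test sets.

    The validation set is reserved for hyper-parameter optimization and the
    test set stays hidden until the final evaluation, as described above.
    """
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

# Illustrative: 100 dummy sample ids -> 70/15/15 split
train, val, test = split_dataset(range(100))
```

Keeping the shuffle seeded makes the split reproducible, which matters when comparing hyper-parameter settings against the same validation set.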
3 Results

The success of machine learning algorithms and their efficiency depend mainly on the amount and quality of the available data, which is one of the most important challenges in materials science, since data determined experimentally is very expensive.
For this purpose, the databases to be used are the following: (i) the International Zeolite Association (IZA) database and (ii) the Predicted Crystallography Open Database (PCOD). We consider 133 zeolite structures extracted from the IZA database, labeled with three capital letters. The best descriptors to describe a high methanol yield are discussed in terms of zeolite topology, active species, and reaction parameters; we also used software to calculate other descriptors that were not found in the databases. In view of this, the zeolite topology was described with the following descriptors.

Framework density (FD): the density of the structure is a criterion for distinguishing zeolites and zeolite-like materials from denser materials. It is defined as:

$FD = \frac{\text{number of T atoms}}{1000\,\text{Å}^3}$   (1)

For denser non-zeolitic structures FD > 21, whereas for fully articulated zeolites FD ∈ [12.1, 20.6]; values FD < 12 have only been found for interrupted frameworks such as cloverite (-CLO), which are not considered in the database. This descriptor is related to the volume of the pores but does not reflect the size of the pore openings [9, 14].

Coordination sequences (CS): each T atom (Si, Al) is connected to N1 = 4 neighboring T atoms (Si, Al) through oxygen bridges; these neighbors connect in the same way to N2 T atoms (Si, Al) in the next shell, where each T atom (Si, Al) is counted only once [9, 14].

Topological density (TD): the coordination sequence (CS) can be used to compute a topological density; for any T atom the sequence can be described exactly by a set of p quadratic equations:

$N_k = a_i k^2 + b_i k + c_i$, for $k = i + np$, $n = 0, 1, 2, \ldots$ and $i = 1, 2, 3, \ldots, p$   (2)

where $N_k$ represents the number of T atoms in shell k, with $a_i, b_i, c_i \in \mathbb{R}$. Therefore, we can define the exact topological density TD as the mean $\langle a_i \rangle$ of all $a_i$ divided by the dimensionality of the topology (3 for zeolites):

$TD = \frac{\langle a_i \rangle}{3}$   (3)

The value of $\langle a_i \rangle$ has been approximated as the mean of $a_i$ over the last 100 terms of a CS with 1000 terms, weighted with the multiplicity of the atom's position, divided by three. This descriptor is related to the number of T atoms of either Si or Al in the structure [9, 14]. We use the open-source software Zeo++ [15] for performing high-throughput geometry-based analysis of porous materials and their voids. The main code provides capabilities to calculate the following quantities, which are included in our database: (i) number of channels; (ii) the pore diameter in angstroms of the largest embedded sphere in the pores (Di); (iii) the largest free sphere (Df); (iv) the largest included sphere along the path of the free sphere (Dif); (v) unit cell volume; (vi) density of the cell; (vii) accessible surface area in the material per unit cell (ASA); (viii) non-accessible surface area in the material per unit cell (NASA); (ix) number of pockets (NP); (x) accessible surface area in the channels (CSA) [12]. Our theoretical model also assumes that the most likely active site to form in the pores of the material is the mono-µ-oxo di-copper site, as shown in Fig. 3.
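As a hedged illustration of Eqs. (1)–(3): FD is a simple ratio, and TD can be estimated by fitting the quadratic growth of the coordination sequence tail (the sequence and cell parameters below are synthetic, not a real zeolite):

```python
def framework_density(n_t_atoms, cell_volume_A3):
    """FD: number of T atoms per 1000 cubic angstroms (Eq. 1)."""
    return 1000.0 * n_t_atoms / cell_volume_A3

def topological_density(cs):
    """Estimate TD from a coordination sequence cs = [N_1, N_2, ...].

    For large k, N_k grows as a*k^2 (Eq. 2); we fit a over the second half
    of the sequence by least squares and return TD = a / 3 (Eq. 3).
    """
    n = len(cs)
    start = n // 2
    num = sum(cs[k - 1] * k * k for k in range(start + 1, n + 1))
    den = sum(k ** 4 for k in range(start + 1, n + 1))
    a = num / den
    return a / 3.0

# Synthetic check: a sequence that is exactly N_k = 2.4*k^2 gives TD = 0.8
cs = [2.4 * k * k for k in range(1, 201)]
td = topological_density(cs)
# Hypothetical cell: 192 T atoms in 10000 cubic angstroms -> FD = 19.2
fd = framework_density(192, 10000.0)
```

A real computation would average the fitted $a_i$ over the p subsequences weighted by site multiplicity, as described above; the single quadratic fit here is a simplification.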
Fig. 3. Mono-µ-oxo di-copper is the most likely active site in the materials considered; the red atom represents oxygen and the brown atoms are copper [1].
As for the reaction parameters, O2 and H2O are the oxidants that we consider for the design of the experiment; we also assumed a cyclical three-step process consisting of oxygen activation, methane reaction, and methanol extraction. In typical operation, the Cu-zeolite is activated for several hours at about 450 °C in an oxygen atmosphere and treated with an inert gas such as He to remove the O2 used in the activation; at that point, methane is reacted for some time at about 200 °C–360 °C (473 K–633 K), and the produced methanol or methoxy group is desorbed or extracted from the Cu-zeolite using a solvent such as water to obtain methanol. For the choice of the variable to predict, the yield of methanol (R), the following kinetic model can be used as a viable proposal for each of the materials. Consider the simple two-step methane-to-methanol model A → B → C, where A is the reactant (methane), B is the desired product (methanol), and C represents any undesirable over-oxidized product. We can define the concentrations of A and B at a given time t as:
$A(t) = A_0 e^{-k_1 t}$   (4)

$B(t) = \frac{A_0 k_1}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right)$   (5)
where $A_0$ is the initial concentration of reactant (methane) and $k_1$, $k_2$ are the rate constants of the reactions from A to B and from B to C, respectively. Then we define the selectivity towards B, $S_B(t)$, and the conversion of A, $X(t)$, as follows:

$S_B(t) = \frac{B(t)}{A_0 - A(t)}$   (6)

$X(t) = \frac{A_0 - A(t)}{A_0}$   (7)
From the above, the concentration can be written in terms of the conversion:

$X(t) = \frac{A_0 - A(t)}{A_0} = 1 - \frac{A(t)}{A_0} \;\Rightarrow\; A(t) = A_0\,(1 - X(t))$   (8)
On the other hand:

$A(t) = A_0 e^{-k_1 t} = A_0(1 - X(t)) \;\Rightarrow\; e^{-k_1 t} = 1 - X(t) \;\Rightarrow\; t = \frac{\ln(1 - X(t))}{-k_1}$   (9)
This allows us to express the selectivity only as a function of the conversion and the ratio of the rate constants. Substituting Eq. (9) into Eq. (5):

$S_B(X) = \frac{1}{X\,(k_2/k_1 - 1)}\left( e^{-k_1 \frac{\ln(1-X)}{-k_1}} - e^{-k_2 \frac{\ln(1-X)}{-k_1}} \right)$

which simplifies to

$S_B(X) = \frac{(1 - X) - (1 - X)^{k_2/k_1}}{X\left(\frac{k_2}{k_1} - 1\right)}$   (10)
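The kinetic model can be checked numerically. The sketch below (with hypothetical rate constants) verifies that the time-domain concentrations (Eqs. 4–5) and the closed-form selectivity–conversion relation (Eq. 10) agree:

```python
import math

def concentrations(t, A0, k1, k2):
    """First-order A -> B -> C kinetics (Eqs. 4 and 5)."""
    A = A0 * math.exp(-k1 * t)
    B = A0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return A, B

def selectivity_from_conversion(X, ratio):
    """Selectivity toward methanol as a function of conversion (Eq. 10).

    `ratio` is k2/k1; the selectivity-conversion trade-off is fully
    determined by this ratio.
    """
    return (1 - X - (1 - X) ** ratio) / (X * (ratio - 1))

# Hypothetical constants: k2/k1 = 2, evaluated where the conversion X = 0.5
A0, k1, k2 = 1.0, 1.0, 2.0
t = math.log(2.0) / k1                 # time at which X = 0.5 (Eq. 9)
A, B = concentrations(t, A0, k1, k2)
S_direct = B / (A0 - A)                # definition of selectivity (Eq. 6)
S_closed = selectivity_from_conversion(0.5, k2 / k1)
```

For $k_2/k_1 = 2$ the closed form reduces to $S_B(X) = 1 - X$, so both routes give 0.5 at half conversion, illustrating how over-oxidation caps the achievable yield.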
where the ratio $k_2/k_1$ depends on the contributions of free energies [13]. This implies that a broader thermodynamic model could be used to obtain better predictions of the methane-to-methanol conversion process. This is a proposal for the variable to be predicted, but it has to be improved, which entails the use of density functional theory; so far such calculations are not available in databases, since they carry a great computational cost, but they would give better results for theoretical modeling. In this first approach, the selectivities were proposed randomly, because the main objective was to select the experimental attributes that have the most impact on the process and then apply an artificial intelligence algorithm in order to find patterns in the data that would not be observed experimentally, as shown in Table 1. For the implementation of the artificial intelligence algorithm, the best options are models that predict the value of a numerical variable within a continuous range. In this case, starting from a data set x (zeolite topology, active species, and reaction parameters), we try to estimate the value of y (yield of methanol); this is a statistical problem for which multiple regression algorithms have been proposed. The most commonly used measure of error to estimate the quality of the model is the mean squared error, applied to the N samples of the data set on which we are estimating the quality of our model:

$MSE = \frac{1}{N}\sum_{i=1}^{N} \left(f(x_i) - y_i\right)^2$   (11)
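For illustration, the MSE of Eq. (11) and a one-descriptor least-squares fit can be sketched as follows (toy data; the actual experiments used IBM Watson tooling):

```python
def mse(predictions, targets):
    """Mean squared error over N samples (Eq. 11)."""
    return sum((f - y) ** 2 for f, y in zip(predictions, targets)) / len(targets)

def fit_line(xs, ys):
    """Ordinary least squares for a single descriptor: y ~ w*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

# Toy data: a noiseless linear relation is recovered exactly, so MSE -> 0
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]
w, b = fit_line(xs, ys)
err = mse([w * x + b for x in xs], ys)
```

With several descriptors the same idea generalizes to multivariate linear regression, which is the class of models discussed below.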
In this case the value of the MSE for our model is good because the data follow normal distributions with small variances, as shown for example in Fig. 4, and the cross validation results were also good. Thus, regression models are suitable, in particular some classical
Table 1. Descriptors for the methane-methanol process.

Descriptor           Features
Zeolite topology     Framework density (FD); Topological density (TD); Ring sizes; Number of channels; Diameter of the largest embedded sphere in the pores (Di); Diameter of the largest free sphere (Df); Diameter along the path of the free sphere (Dif); Unit cell volume; Density of the cell; Accessible surface (ASA); Non-accessible surface (NASA); Number of pockets (NP); Accessible surface in the channels (CSA)
Active species       Mono-µ-oxo di-copper
Reaction parameters  Oxidant; Reaction temperature (300 K); Methane conversion (X); Contribution of free energies ΔGa; Selectivity of methanol (S)
linear regression models. Therefore, in this model the errors due to bias and variance are in equilibrium. It is worth mentioning that the database with the most relevant parameters for the catalysis process has already been designed, as well as the methodology to follow for the design of the material, with which we already have the most important part of the study. The first simulations of this work were carried out in IBM Watson [16], where the AutoAI tool was used, which allows the training of the algorithm to be performed efficiently and quickly. The first tests showed that the linear regression algorithm is a strong candidate to find the structural variables that participate in obtaining high methanol yields, as shown in Fig. 5. Consequently, the density of the unit cell is one parameter that affects the methane-methanol process. Certainly, it is necessary to improve the model, since the selectivities proposed in this first part were hypothetical; the thermodynamic model must be improved to obtain better predictions of the conversion process and thus propose the best structures for it. However, the main problem, the choice of the best descriptors and the first tests using artificial intelligence algorithms for the methane-to-methanol process, has already been solved in our database.
Fig. 4. a) Histogram for the framework density; b) histogram for the topological density.
Fig. 5. Importance of the descriptors.
4 Conclusions

An analysis of the main descriptors was discussed above. The theoretical model still needs to be improved for application on an industrial scale, since the kinetic and
thermodynamic part of this chemical reaction is the most difficult for which to calculate a suitable descriptor, as it involves the use of other models for computing certain physical-chemical parameters inherent to the structure. It should also be clarified that some parameters, such as the Si/Al ratio, are still missing; although this ratio reveals the structures with higher performance, for the designed database it would take a long time to carry out the yield calculations. In principle, then, the present proposal is adequate but needs more studies to obtain better predictions. Finally, for the implementation of artificial intelligence algorithms, there seems to be growing evidence that machine learning algorithms are the most appropriate to implement in this process. The first preliminary tests of the implemented algorithm show good results that can be improved, which opens a way to the design of other materials using zeolites with this database, since the structure of the zeolites is fully described by it; in later studies, better models for the proposed active sites as well as a better route of direct activation for the process will be sought. Currently, these data are not available, but new theoretical-experimental models are being sought to propose both the best structures at low temperatures and the possible active sites that can be formed in the pores of the material. In this way, this analysis will help chemical intuition to select the best materials, reducing the time needed for the preparation of these materials under different reaction conditions, reducing the cost of the search for the best materials, and accelerating the overall process.
References

1. Newton, M.A., Knorpp, A.J., Sushkevich, V.L., Palagin, D., van Bokhoven, J.A.: Active sites and mechanisms in the direct conversion of methane to methanol using Cu in zeolitic hosts: a critical examination. Royal Society of Chemistry, pp. 1325–1616 (2020). https://doi.org/10.1039/c7cs00709d
2. Park, M.B., Park, E.D., Ahn, W.-S.: Recent progress in direct conversion of methane to methanol over copper-exchanged zeolites: a mini review. Frontiers in Chemistry 7, 514, pp. 1–7 (2019). https://doi.org/10.3389/fchem.2019.00514
3. Wang, X., et al.: Copper-modified zeolites and silica for conversion of methane to methanol. Catalysts 8(11), 545 (2018). https://doi.org/10.3390/catal8110545
4. Grundner, S., et al.: Single-site trinuclear copper oxygen clusters in mordenite for selective conversion of methane to methanol. Nat. Commun. 6, 7546 (2015). https://doi.org/10.1038/ncomms8546
5. Alayon, E.M., Nachtegaal, M., Ranocchiari, M., van Bokhoven, J.A.: Catalytic conversion of methane to methanol over Cu–mordenite. Chem. Commun. 48(3), 404 (2012). https://doi.org/10.1039/C1CC15840F
6. Ikuno, T., et al.: Methane oxidation to methanol catalyzed by Cu-oxo clusters stabilized in NU-1000 metal–organic framework. J. Am. Chem. Soc. 139(30), 10294–10301 (2017). https://doi.org/10.1021/jacs.7b02936
7. Mahyuddin, M.H., Shiota, Y., Yoshizawa, K.: Methane selective oxidation to methanol by metal-exchanged zeolites: a review of active sites and their reactivity. Catal. Sci. Technol. 8(9), 1744–1768 (2019). https://doi.org/10.1039/C8CY02414F
8. Wang, G., Huang, L., Chen, W., Zhou, J., Zheng, A.: Rationally designing mixed Cu–(µ-O)–M (M = Cu, Ag, Zn, Au) centers over zeolite materials with high catalytic activity towards methane activation. Phys. Chem. Chem. Phys. 20(41), 26522–26531 (2018). https://doi.org/10.1039/C8CP04872J
9. IZA database. http://www.iza-structure.org/databases/
10. Schmidt, J., Marques, M.R.G., Botti, S., Marques, M.A.L.: Recent advances and applications of machine learning in solid-state materials science: a review article. npj Computational Materials 5, 83, pp. 1–36 (2019). https://doi.org/10.1038/s41524-019-0221-0
11. Ohyama, J., et al.: Data science assisted investigation of catalytically active copper hydrate in zeolites for direct oxidation of methane to methanol using H2O2. Sci. Rep. 11, 2067 (2021). https://doi.org/10.1038/s41598-021-81403-4
12. Willems, T.F., Rycroft, C.H., Kazi, M., Meza, J.C., Haranczyk, M.: Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials. Microporous and Mesoporous Materials
13. Latimer, A.A., Kakekhani, A., Kulkarni, A.R., Nørskov, J.K.: Direct methane to methanol: the selectivity-conversion limit and design strategies: a review. ACS Catalysis 8, 6894–6907 (2018). https://doi.org/10.1021/acscatal.8b00220
14. Baerlocher, C., McCusker, L.B., Olson, D.H.: Atlas of Zeolite Framework Types, 6th edn. Elsevier Science (2007)
15. Zeo++. http://zeoplusplus.org/
16. IBM Cloud. https://dataplatform.cloud.ibm.com/login
17. Peral Yuste, A.: Síntesis de Zeolita ZSM-5 con Porosidad Jerarquizada como Catalizador para el Craqueo de Poliolefinas. BURGC Digital, pp. 12–13 (2009)
An Interface for Audio Control Using Gesture Recognition and IMU Data

Víctor H. Vimos1(B), Ángel Leonardo Valdivieso Caraguay1, Lorena Isabel Barona López1, David Pozo Espín2, and Marco E. Benalcázar1

1 Laboratorio de Investigación en Inteligencia y Visión Artificial, Escuela Politécnica Nacional, Quito 170517, Ecuador
{victor.vimos,angel.valdivieso,lorena.barona,marco.benalcazar}@epn.edu.ec
2 Facultad de Ingeniería y Ciencias Aplicadas, Universidad de Las Américas, Quito 170523, Ecuador
[email protected]
https://laboratorio-ia.epn.edu.ec/es/
Abstract. Hand gesture recognition systems using electromyography sensors in conjunction with data from inertial measurement units are currently widely used for musical interfaces. However, armbands are susceptible to displacements, causing a decrease in accuracy when they are used in such applications. In this study, a hand gesture recognition model applied to a musical interface has been tested using two different commercial armbands, Myo and GForce. Both armbands use the same pre-trained gesture recognition model, and the same hand gestures are recognized. We evaluate the robustness of the pre-trained model and the accuracy reached when correcting the displacement of the sensors. The tests performed to evaluate the system show classification accuracies of 94.33% and 90.70%, respectively, considering the same pre-trained model. The accuracy results obtained with both sensors are similar, which evidences the robustness of the tested model and the importance of correcting the displacements.

Keywords: Hand gesture recognition · MYO · GFORCE · IMU · Music control interface · SVM

1 Introduction
Hand Gesture Recognition (HGR) systems can use electromyography (EMG) data and Inertial Measurement Units (IMU) to build human-machine interfaces that are responsible for determining which gesture was performed and when [1]. Hand gestures are a common and effective type of non-verbal communication which can be learned easily through direct observation [2]. In recent years, several applications of HGR in many fields have been tested successfully [3–8]. In musical applications, HGR has been adopted by the community to build new interfaces [9,10]. More recently, musical applications use electromyography

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 168–180, 2022.
https://doi.org/10.1007/978-3-030-96147-3_14
together with inertial signals to enhance the control actions applied in musical interfaces [11–13]. Nevertheless, the use of EMG and IMU data has not reached its full potential, nor has it been widely adopted [14]. This is due mainly to three factors. First, the performance of HGR systems used in musical control interfaces (MCI) can still be improved (i.e., recognition accuracy, processing time, number of gestures) [15,16]. Second, the protocol used for evaluating these models is not rigorous in terms of processing time, classification, recognition accuracy, and the EMG variability between people. Third, HGR implementations are commonly cumbersome because they are not easy or intuitive to use (i.e., an HGR implementation is expected to run in real time, be non-invasive, and have no sensor displacement problems) [17–19]. Moreover, HGR systems mostly require some training or a strict procedure before usage. In the world of Interfaces for Musical Expression (IME), there are two design approaches: (a) start with a conceptual idea and build your own device to fit that idea, or (b) start with an available controller and see how it can be used in an IME context. This paper follows the second approach and focuses on the third issue above, exploring two different commercial electromyography armbands (Myo and GForce) in a musical setting. We use the EMG and IMU data provided by the armbands as inputs to interact with a musical interface. The proposal is intuitive and provides flexibility in the placement of the EMG sensor while maintaining the system accuracy. The results obtained using the two different sensors demonstrate the robustness of the model used and the ease of adapting the model to different hardware.
2 Materials and Methods

In this section, the hand gesture recognition system, the music software, and the music control interface are described. The sensor used in our proposal to acquire the data and test the model is the Myo Armband. This armband is an electronic device that measures superficial EMG signals. It consists of 8 bipolar channels working at a sampling frequency of 200 Hz, and an IMU with a 3D gyroscope, a 3D accelerometer, and a magnetometer (Fig. 1a). The data are sent via Bluetooth to a computer, and the EMG signals feed the HGR system. Additionally, we use a different armband (GForce) for testing as well. This armband is provided by a different manufacturer (OYMotion), and its hardware characteristics differ from those of the armband with which the data were acquired (Fig. 1b).

2.1 Hand Gesture Recognition
Hand Gesture Recognition systems used as human-machine interfaces are responsible for determining which gesture is performed and when. Hand gestures are a common and effective type of non-verbal communication. The hand gesture recognition architecture is generally composed of five modules: data
Fig. 1. a) Myo Armband sensor used to train and test. b) GForce Armband sensor used for testing. Both armband sensors are able to work with the same trained model.
acquisition, pre-processing, feature extraction, classification, and post-processing [20,21]. The HGR architecture is shown in Fig. 2.
Fig. 2. Hand gesture recognition system.
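The five-module pipeline of Fig. 2 can be outlined in a hedged Python sketch (the paper's implementation is in Matlab; `classify` stands in for the trained model, and the threshold value is illustrative):

```python
def energy(window):
    """EMG window energy, used as the activation gate in pre-processing."""
    return sum(abs(x * abs(x) - p * abs(p)) for p, x in zip(window, window[1:]))

def majority_vote(labels, default="noGesture"):
    """Post-processing: return the label with a strict majority, else default."""
    for label in set(labels):
        if labels.count(label) > len(labels) / 2:
            return label
    return default

def recognize(window, classify, threshold):
    """Pipeline sketch: energy gate -> per-window labels -> majority vote.

    `classify` is a placeholder for the trained classifier; it returns one
    label per feature vector extracted from the window.
    """
    if energy(window) < threshold:
        return "noGesture"
    return majority_vote(classify(window))

# Stub classifier illustrating the flow (not a real model)
label = recognize([5.0, -5.0] * 80, lambda w: ["fist", "fist", "fist", "open"], 0.17)
```

The energy gate skips classification of low-activity windows, and the vote smooths spurious per-feature predictions, matching the pre- and post-processing roles described in this section.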
In our research, the software used as a musical environment to interact with the armbands is Reaper [35]. It allows us to link our software through a protocol for music applications. The hand gestures to be recognized and mapped to actions in Reaper are shown in Fig. 3.

Data Acquisition: To create the hand gesture recognition model, this work uses a dataset collected in previous research (EPN-EMG612 [22]), which can be found in [23]. Additionally, the code for this paper has been uploaded to GitHub [24]. The EMGs are from people who wore the bracelet placed only on the right forearm, no matter whether they were right- or left-handed. The dataset is composed of 612 users and was divided into two groups: 50% for training and 50% for offline testing. In our research, once the final model was trained, we tested it with 10 different people to evaluate it and interact online with the musical interface.
Fig. 3. Hand gestures recognized in this work to manage the MCI. a) waveOut, b) waveIn, c) fist, d) open, e) pinch, f) no Gesture.
Pre-processing: Neither filtering nor rectification was applied to the EMG data. However, as part of the pre-processing module, the EMG energy (Eq. 1) is used to identify whether the current analyzed window needs to be classified or not. Every EMG window must exceed an energy threshold to be passed to the classifier. A threshold of 17% of the energy (Eq. 1) was adopted in this research as a design criterion, based on multiple tests with different energy thresholds. This process avoids the classification of unnecessary gestures when the threshold is not reached and therefore reduces the computational cost.

Feature Extraction: Five functions to extract features are used. These functions are applied over every 160-point EMG window according to the current procedure, for both tested armbands. The set of functions is briefly explained as follows:

1. Energy (E): It is a feature for measuring the energy distribution, and it can be represented as [25]:

$E = \sum_{i=2}^{L} \left| x_i\,|x_i| - x_{i-1}\,|x_{i-1}| \right|$   (1)

where $x_i$ is a sample of the EMG signal and L is the total length of the EMG signal.

2. Root Mean Square (RMS): It describes the muscle force and non-fatigue contraction [26]. Mathematically, the RMS can be defined as:

$RMS = \sqrt{\frac{1}{L}\sum_{i=1}^{L} x_i^2}$   (2)

where $x_i$ is a sample of the EMG signal and L is the total number of points of the EMG.
172
V. H. Vimos et al.
3. Standard Deviation (SD): This feature measures the dispersion of the EMG signal. It shows how the data are scattered with respect to the average, and it is expressed as:

$SD = \sqrt{\frac{1}{L-1}\sum_{i=1}^{L} |x_i - u|^2}$   (3)

where $x_i$ is a sample of the EMG signal, u is the average, and L is the total number of points of the EMG.

4. Mean Absolute Value (MAV): It is a popular feature used in EMG-based hand gesture recognition applications. The mean absolute value is the average of the absolute value of the EMG signal amplitude, and it is defined as follows:

$MAV = \frac{1}{L}\sum_{i=1}^{L} |x_i|$   (4)
where x_i is a sample of the EMG signal, and L is the total number of points of the EMG signal.

5. Absolute Envelope (AE): It uses the Hilbert transform to calculate the instantaneous attributes of a time series, especially amplitude and frequency [27]:

AE = |AE| = \sqrt{f(t)^2 + (H\{f(t)\})^2} \quad (5)

where H{f(t)} is the Hilbert transform of f(t), and f(t) is the EMG signal.

Classification: A Support Vector Machine (SVM) was implemented to carry out the hand gesture classification. The SVM is a machine learning technique used to find the optimal separation hyperplane in data classification [28, 29]. It uses a kernel function on the input data to remap it into a new hyperplane that facilitates the separation between classes. In this research, a third-order polynomial kernel with a one-vs-one strategy was implemented to carry out the classification procedure. To build our HGR model, an SVM multi-class classification was used. The multi-class problem is broken down into multiple binary classification cases, which is also called one-vs-one coding [30]. The number of classifiers necessary for one-vs-one multi-class classification is n(n − 1)/2, where n is the number of gesture classes.

Table 1. SVM configuration

Matlab variable  | Value
-----------------|---------------------------------------------------------
Kernel function  | polynomial
Polynomial order | 3
Box constraint   | 1 (variable value for regularization)
Standardize      | (Feature_i − μ)/σ, where μ = mean, σ = standard deviation
Coding           | one vs one
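As an illustration, the time-domain features above fit in a few lines of code. The sketch below is not the authors' Matlab implementation; it is a minimal NumPy version, with the Hilbert transform of Eq. (5) computed directly via the FFT-based analytic-signal construction.

```python
import numpy as np

def emg_features(x):
    """Time-domain features (2)-(5) for one window of an EMG signal x."""
    x = np.asarray(x, dtype=float)
    L = len(x)
    rms = np.sqrt(np.mean(x ** 2))                             # Eq. (2)
    sd = np.sqrt(np.sum(np.abs(x - x.mean()) ** 2) / (L - 1))  # Eq. (3)
    mav = np.mean(np.abs(x))                                   # Eq. (4)
    # Hilbert transform via the FFT: zero out negative frequencies and
    # double the positive ones, giving the analytic signal x + i*H{x}.
    X = np.fft.fft(x)
    h = np.zeros(L)
    h[0] = 1.0
    if L % 2 == 0:
        h[L // 2] = 1.0
        h[1:L // 2] = 2.0
    else:
        h[1:(L + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    ae = np.abs(analytic)                                      # Eq. (5), per sample
    return {"RMS": rms, "SD": sd, "MAV": mav, "AE": ae}
```

For a sanity check, a pure sinusoid spanning whole cycles gives a flat envelope equal to the amplitude, RMS equal to amplitude/√2, and MAV close to 2/π of the amplitude.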
An Interface for Audio Control Using Gesture Recognition and IMU Data
173
In the one-vs-one approach, each classifier separates points of two different classes, and combining all one-vs-one classifiers yields a multi-class classifier. The parameters used to configure the SVM are listed in Table 1. We use the SVM because it is a classifier that allows portability of HGR systems due to its low computational cost and real-time operation [28–30]. In addition, in experiments conducted in [31, 32], the authors demonstrate that the SVM is able to reach higher performance than K-Nearest-Neighbors (KNN) for EMG signal classification. Additionally, in another study [33], the authors compare different algorithms, such as SVM, KNN, Artificial Neural Networks (ANN), Random Forest (RF), and Naive Bayes (NB), to classify hand gestures. The results show that the SVM reaches high performance classifying 7 hand gestures.

Post-processing: Its objective is to filter spurious predictions in order to produce a smoother response [21, 34] and to adapt the classifier responses to the tested musical application. For each observation of the EMG, we obtain a vector of 4 labels, where each label corresponds to a feature vector of the observation. We use simple majority voting to assign a label to the current gesture: we assign the label that has the most occurrences in the vector of labels; otherwise, we assign the label no Gesture.

2.2 Music Control Interface
Reaper: Reaper [35] is a digital audio workstation that offers a full multitrack audio and MIDI recording, editing, processing, mixing, and mastering toolset.

Open Sound Control Protocol: Open Sound Control (OSC) [36] is a protocol for communication among computers, sound synthesizers, and other multimedia devices. It is optimized for modern networking technology, bringing its benefits to the world of electronic musical instruments. The advantages of OSC include interoperability, accuracy, and flexibility.

Architecture: The musical control interface is composed of 5 stages; the general scheme is shown in Fig. 4. The first stage (in green) is the acquisition of EMG data using the Myo or GForce armband. In the second stage (in blue), we implemented an application to connect the hand gesture recognition system with Reaper. This application was built using Matlab (Fig. 5) and recognizes the gestures defined in the scheme shown in Fig. 2. After recognizing a gesture, stage two sends a command to Reaper using the Open Sound Control protocol. In the third stage (in purple), the commands sent by stage two are received in Reaper through the OSC protocol. In the fourth stage (in brown), Reaper performs actions according to the codes received from stage two. Finally, the fifth stage (in red) is the system's audio output.
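To make the wire format concrete: an OSC message is a single UDP datagram with a fixed binary layout, consisting of a null-padded address pattern, a type-tag string, and big-endian arguments. The sketch below hand-encodes that layout following the OSC 1.0 specification; the address and the IP/port shown in the comment are placeholders, not the configuration actually used by the authors.

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Build a binary OSC message: address, type tags, big-endian arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, str):
            tags += "s"
            payload += osc_string(a)
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return osc_string(address) + osc_string(tags) + payload

# Sending it to Reaper is then one UDP datagram, e.g. (placeholder endpoint):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/track/1/volume", 0.8), ("192.168.0.10", 8000))
```

The same encoder covers any address/argument combination the interface needs, which is what makes OSC interoperable across audio applications.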
Fig. 4. Implemented scheme for the musical interface.
OSC Reaper Set-up: The IP address and port number configured in Reaper must be the same as the parameters configured in Matlab (Fig. 5). A demo configuration video is available in [37].

Matlab Hand Gesture Recognition App: To implement and test the proposed research, a Matlab graphical interface was developed (Fig. 5). The app allows us to manage the tracks uploaded in Reaper.

OSC Matlab Set-up: After executing the Matlab app, in the run tab (Fig. 5), the IP address and the port number must be filled with the same values as those previously configured in Reaper.

Gestures Mapping: The Hand Gesture Recognition system is used in the MCI to select parameters in Reaper according to specific mapped actions (Fig. 6).
Fig. 5. a) Application running with Myo Armband b) Application running with GForce Armband.
Fig. 6. Gestures mapping to Reaper actions. Users perform the same gestures with the Myo or GForce armband. The HGR model created is able to recognize gestures acquired with both sensors.

The same commands are sent through the UDP protocol regardless of the armband currently selected. A brief explanation of each mapped gesture follows:

– waveOut: The waveOut gesture plays the current track; performing the gesture twice makes the MCI pause the track.
– waveIn: When the waveIn gesture is performed, Matlab sends the command that makes Reaper completely stop the current session.
– fist: The fist gesture allows us to mix tracks #1 and #2. Every change is applied immediately after performing the gesture. If the fist gesture is executed twice, we return to the previous track in mix mode (from track #1 to track #2).
– open: The open gesture is used to exit the hand gesture recognition system. All variables and configurations are automatically saved in the Matlab workspace after the software is closed; we can then perform any action without taking the recognition software into account. When we reconnect to the software, the configuration and track processes are online again.
– pinch: By performing the pinch gesture, we can turn the volume up or down: to raise the volume, the right hand must go up; to lower it, the right hand must go down until the desired volume is reached.
– no Gesture: When no action is detected by the Hand Gesture Recognition system, an indicator highlighted in green is shown in the interface (Fig. 5).
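The mapping above amounts to a small dispatch table from gesture labels to commands. The sketch below illustrates the idea only: the OSC addresses and the pitch-to-volume conversion are hypothetical stand-ins, since the actual command codes are defined by the Reaper OSC configuration the authors used.

```python
# Hypothetical OSC addresses; the real bindings live in Reaper's OSC
# pattern configuration file, not in this sketch.
GESTURE_ACTIONS = {
    "waveOut": ("/play", []),              # repeating the gesture pauses
    "waveIn":  ("/stop", []),
    "fist":    ("/mix/toggle", []),        # mix tracks #1 and #2
    "open":    ("/app/exit", []),
    "pinch":   ("/track/1/volume", None),  # value filled from the IMU
}

def map_gesture(gesture, imu_pitch=None):
    """Translate a recognized gesture label into an (address, args) pair."""
    if gesture == "noGesture" or gesture not in GESTURE_ACTIONS:
        return None  # nothing is sent for the relaxed state
    address, args = GESTURE_ACTIONS[gesture]
    if gesture == "pinch":
        # Map hand pitch (assumed -90..90 degrees from the IMU) to 0..1.
        args = [max(0.0, min(1.0, (imu_pitch + 90.0) / 180.0))]
    return address, args
```

Keeping the mapping in one table makes it easy to re-bind gestures to different Reaper actions without touching the recognition code.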
3 MCI Test
To evaluate our MCI, we conducted a test with 10 users between 20 and 50 years old. All users wore the bracelet on the forearm of the right hand, and the bracelet
was placed with different rotations across users. The general comment from the users was that they had no problem understanding how to operate the music control interface. The main challenge concerning MCI accuracy is related to the HGR system. The users quickly understood the selection mechanisms and were able to manage tracks from the sound bank previously loaded in Reaper. The test users evaluated our MCI, trained according to the instructions for the HGR model proposed in [38]. In [39], there is a demonstration video of the system.
4 Results
In this section, we present the MCI performance based on the accuracy results of the tests performed with both sensors (Myo and GForce armband). In addition, we compare the results among users and gestures. The classification results for the HGR model tested with the Myo armband are presented in Fig. 7; the classification accuracy obtained was 94.33%. The classification results for the HGR model tested with the GForce armband are presented in Fig. 8; for the GForce test, the same users from the Myo group participated, and the classification accuracy obtained was 90.70%. The approach used in our research considers the orientation correction, which helps to achieve high classification results. The control of the musical interface is directly linked to the correct recognition of gestures, so the performance of the music control interface has the same accuracy as the HGR system. The best precision result in classification using the Myo was obtained for the waveIn gesture, with 100%; the best sensitivity result was also for the waveIn gesture, with 96%. With the GForce armband, the best precision result was obtained for the waveOut gesture, with 95.9%, and the best sensitivity result was also for the waveOut gesture, with 93%. The noGesture class is not considered for precision and sensitivity because it is the relaxed state in our research. As an example of the implementation of the HGR system using the Myo and GForce armbands, we have included a link to a video of a test carried out [39].
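For reference, the per-gesture precision and sensitivity quoted here come straight out of a confusion matrix such as those in Figs. 7 and 8. A minimal sketch of the computation, using an illustrative toy matrix rather than the paper's data:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, sensitivity (recall) and accuracy from a confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                    # correct predictions per class
    precision = tp / cm.sum(axis=0)     # column sums = predicted counts
    sensitivity = tp / cm.sum(axis=1)   # row sums = true counts
    accuracy = tp.sum() / cm.sum()
    return precision, sensitivity, accuracy
```

For a toy matrix `[[90, 10], [5, 95]]`, the precision of class 0 is 90/95, its sensitivity is 90/100, and the overall accuracy is 185/200.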
Fig. 7. Confusion matrix with 94.33% accuracy, tested using the MYO armband.
Fig. 8. Confusion matrix with 90.70% accuracy, tested using the GForce armband.
5 Conclusions
The control of the musical interface is directly linked to the correct execution of the hand gestures. Each action performed by the users shows the effectiveness and success of the actions performed in the musical interface. The results show that the gesture recognition model evaluated with different sensors is robust. The OSC protocol allows us to connect, control, and interact remotely with different music devices as well as with different digital audio workstations, for example Reaper. Our evaluation shows that different online applications can be improved by a better HGR system that applies correction techniques, regardless of the armband used, Myo or GForce. The model created with the Myo armband data can be used with the GForce armband: the results show similar performance values even though the armbands are different and their EMG signal amplitudes differ. The model presented is robust to artifacts and responds with great performance to new EMG signals acquired by the GForce.

Acknowledgments. The authors gratefully acknowledge the financial support provided by Unidad de Innovación y Tecnología (UITEC) - Universidad de las Américas (UDLA) for the development of the research.
References

1. Jaramillo-Yánez, A., Benalcázar, M., Mena-Maldonado, E.: Real-time hand gesture recognition using surface electromyography and machine learning: a systematic literature review. Sensors 20, 2467 (2020). https://doi.org/10.3390/s20092467
2. Archer, D.: Unspoken diversity: cultural differences in gestures. Qual. Sociol. 20, 79–105 (1997). https://doi.org/10.1023/A:1024716331692
3. Bermeo-Calderon, J., Velasco, M., Rojas, J., Villarreal-Lopez, J., Restrepo, E.: Movement control system for a transradial prosthesis using myoelectric signals. In: International Conference on Advanced Engineering Theory and Applications, pp. 273–282 (2019). ISBN 978-3-030-53021-1
4. Lu, L., Mao, J., Wang, W., Ding, G., Zhang, Z.: A study of personal recognition method based on EMG signal. IEEE Trans. Biomed. Circuits Syst. 14, 681–691 (2020)
5. Tavakoli, M., Benussi, C., Lourenco, J.: Single channel surface EMG control of advanced prosthetic hands: a simple, low cost and efficient approach. Expert Syst. Appl. 79, 322–332 (2017). https://doi.org/10.1016/j.eswa.2017.03.012
6. Ullah, A., Ali, S., Khan, I., Khan, M., Faizullah, S.: Effect of analysis window and feature selection on classification of hand movements using EMG signal. In: Proceedings of SAI Intelligent Systems Conference, pp. 400–415 (2020). ISBN 978-3-030-55190-2
7. Viriyasaksathian, B., Khemmachotikun, S., Kaimuk, P., Wongsawat, Y.: EMG-based upper-limb rehabilitation via music synchronization with augmented reality. In: 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO 2011), pp. 2856–2859 (2011). https://doi.org/10.1109/ROBIO.2011.6181738
8. Wang, N., Lao, K., Zhang, X.: Design and myoelectric control of an anthropomorphic prosthetic hand. J. Bionic Eng. 14, 47–59 (2017). https://doi.org/10.1016/S1672-6529(16)60377-3
9. Donnarumma, M., Caramiaux, B., Tanaka, A.: Muscular interactions: combining EMG and MMG sensing for musical practice. In: KAIST (2013)
10. Tsubouchi, Y., Suzuki, K.: BioTones: a wearable device for EMG auditory biofeedback. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pp. 6543–6546 (2010)
11. Kerber, F., Lessel, P., Krüger, A.: Same-side hand interactions with arm-placed devices using EMG. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1367–1372 (2015)
12. Tanaka, A., Knapp, R.: Multimodal interaction in music using the electromyogram and relative position sensing. In: NIME 2002 (2002)
13. Von Zezschwitz, E., et al.: An overview of current trends, developments, and research in human-computer interaction (2014). ISSN 1862-5207
14. Benson, C., Manaris, B., Stoudenmier, S., Ward, T.: SoundMorpheus: a myoelectric-sensor based interface for sound spatialization and shaping. In: NIME, pp. 332–337 (2016)
15. Nymoen, K., Haugen, M., Jensenius, A.: MuMYO: evaluating and exploring the MYO armband for musical interaction. Louisiana State University (2015)
16. Xiao, Z., Chhatre, N., Kuatsjah, E., Menon, C.: Towards an FMG based augmented musical instrument interface. In: 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 582–587 (2018)
17. Cui, X.: Interactive platform of gesture and music based on MYO armband and processing. J. Phys. Conf. Ser. 1288, 012010 (2019)
18. Di Donato, B., Dooley, J., Hockman, J., Hall, S.: MyoSpat: a hand-gesture controlled system for sound and light projections manipulation (2017)
19. Wynnychuk, J., Porcher, R., Brajovic, L., Brajovic, M., Platas, N.: sutoolz 1.0 alpha: 3D software music interface. In: Proceedings of the 2002 Conference on New Interfaces for Musical Expression, pp. 1–2 (2002)
20. Barona López, L., et al.: An energy-based method for orientation correction of EMG bracelet sensors in hand gesture recognition systems. Sensors 20, 6327 (2020)
21. Vimos, V., Benalcázar, M., Oña, A., Cruz, P.: A novel technique for improving the robustness to sensor rotation in hand gesture recognition using sEMG. In: International Conference on Computer Science, Electronics and Industrial Engineering (CSEI), pp. 226–243 (2019)
22. Benalcázar, M., Barona, L., Valdivieso, L., Aguas, X., Zea, J.: EMG-EPN-612 hand gestures dataset (2020). https://doi.org/10.5281/zenodo.4023305
23. Artificial Intelligence and Computer Vision Research Lab, EPN: EMG-EPN-612 (2020). https://laboratorio-ia.epn.edu.ec/es/recursos/dataset/2020_emg_dataset_612
24. Artificial Intelligence and Computer Vision Research Lab, EPN: Code for the paper "An Audio Control Interface Using Hand Gesture Recognition and IMU Data" (2021). https://github.com/laboratorioAI/EMG_EPN_SOUND_MIX
25. Reig Albiñana, D.: Implementación de Algoritmos para la Extracción de Patrones Característicos en Sistemas de Reconocimiento de Voz en Matlab (2015)
26. Hudgins, B., Parker, P., Scott, R.: A new strategy for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 40, 82–94 (1993). https://doi.org/10.1109/10.204774
27. Feldman, M.: Hilbert transform, envelope, instantaneous phase, and frequency. Encycl. Struct. Health Monit. (2009). https://doi.org/10.1002/9780470061626.shm046
28. Winarno, H., Poernama, A., Soesanti, I., Nugroho, H.: Evaluation on EMG electrode reduction in recognizing the pattern of hand gesture by using SVM method. J. Phys. Conf. Ser. 1577, 012044 (2020)
29. Zhang, Z., Tang, Y., Zhao, S., Zhang, X.: Real-time surface EMG pattern recognition for hand gestures based on support vector machine. In: 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1258–1262 (2019)
30. Vapnik, V.: The Nature of Statistical Learning Theory. Statistics for Engineering and Information Science. Springer (2000)
31. Paul, Y., Goyal, V., Jaswal, R.: Comparative analysis between SVM & KNN classifier for EMG signal classification on elementary time domain features. In: 2017 4th International Conference on Signal Processing, Computing and Control (ISPCC), pp. 169–175 (2017)
32. Hasan, M.: Comparison between kNN and SVM for EMG signal classification. Int. J. Recent Innov. Trends Comput. Commun. 3, 6799–6801 (2015)
33. Senturk, Z., Bakay, M.: Machine learning based hand gesture recognition via EMG data. ADCAIJ Adv. Distrib. Comput. Artif. Intell. J. 10 (2021). https://doi.org/10.14201/ADCAIJ2021102123136
34. Benalcázar, M., Anchundia, C., Zea, J., Zambrano, P., Jaramillo, A., Segura, M.: Real-time hand gesture recognition based on artificial feed-forward neural networks and EMG. In: European Signal Processing Conference, pp. 1492–1496 (2018). https://doi.org/10.23919/EUSIPCO.2018.8553126
35. Reaper: Reaper Digital Audio Workstation (2020). https://www.reaper.fm/
36. Open Sound Control (2021). http://opensoundcontrol.org/
37. Artificial Intelligence and Computer Vision Research Lab, EPN: Configuration video for the proposed Music Control Interface using HGR (2021). https://www.youtube.com/watch?v=2CcTPGCVHiI&ab_channel=ArtificialIntelligenceResearchLab
38. Artificial Intelligence and Computer Vision Research Lab, EPN: Code for the paper "An Energy-Based Method for Orientation Correction of EMG Bracelet Sensors in Hand Gesture Recognition Systems" (2020). https://github.com/laboratorioAI/2020_ROT_SVM_EPN
39. Artificial Intelligence and Computer Vision Research Lab, EPN: Demo video for the proposed Music Control Interface using HGR (2021). https://www.youtube.com/watch?v=IKiI3JAoi3k&ab_channel=ArtificialIntelligenceResearchLab
Advantages of Machine Learning in Networking-Monitoring Systems to Size Network Appliances and Identify Incongruences in Data Networks Anthony J. Bustamante(B) , Niskarsha Ghimire , Preet R. Sanghavi , Arpit Pokharel , and Victor E. Irekponor INICTEL-UNI – Instituto Nacional de Investigación y Capacitación en Telecomunicaciones, Lima, Peru
Abstract. This paper shows two potential uses that Machine Learning makes possible in data networks. Multiple algorithms have been used to predict the future behavior of common components in data networks so that actions can be taken based on the predicted results. The first advantage identified is sizing resources in digital network appliances; the second is the identification of discordances in the traffic going through the networks, which could be used to reduce security risks. The focus of the paper is on monitoring systems working with protocols such as SNMP, Netflow, and Syslog to collect data and act based on it. Different algorithms have been tested, from linear algorithms up to neural networks, to predict future behavior, reduce security gaps, and size resources. Overall, this paper clearly demonstrates two benefits of using Machine Learning in data networks. Keywords: SNMP · Machine learning · Neural network · Nagios · Proxy · Server · Syslog · Netflow · SIEM
1 Introduction

Data networks are an essential part of the infrastructure of every company. To gain a better understanding of what happens in their networks, operational teams rely on monitoring protocols, software, and tools to collect information and act based on that data; common platforms such as PRTG, SolarWinds, Nagios, Zabbix, and Splunk make use of SNMP, Netflow, Syslog, NETCONF, gNMI, and other important protocols to collect it. Nonetheless, the data is usually treated passively and not exploited to minimize potential threats or malfunctions in the network. Additionally, because these monitoring systems actively collect data from network appliances, using this data to size any network component can be consequential as long as some analysis and Machine Learning techniques are applied. Currently, some of the most recent SIEM tools are incorporating AI into their platforms. Nevertheless, common platforms used by operators in network providers are still passive
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 181–195, 2022.
https://doi.org/10.1007/978-3-030-96147-3_15
182
A. J. Bustamante et al.
and do not have machine learning algorithms embedded in them. Given the rapid growth of Artificial Intelligence, this paper compares multiple ML algorithms to show the benefits that monitoring systems could gain if such an implementation were made in all these tools. It also highlights the importance, for operators, analysts, and engineers of data networks, of knowing and adopting tools based on Machine Learning: to know what behavior to expect from every device in the network, to set policies to analyze and react when the real-time results do not fit the predicted results, and to size the resources assigned to every device in a network or to a future new device.
2 Previous Work

Several projects have applied Machine Learning in different fields. However, when it comes to sizing networking appliances, or any other digital appliance, the task is still performed empirically, based on workers' experience; in fact, this paper is the result of the existing need to dimension resources in digital appliances. We have leveraged similar works on monitoring systems. In [6], researchers monitored voltage conditions in electrical machines using Machine Learning, which gave us the first idea of applying this concept to networking appliances. In [7], ML was used to identify user behaviors in smart homes and give an extra layer of intelligence to the infrastructure by identifying the houses' response patterns to the surrounding environment; this was also a main motivation to look into networking applications for better control of data networks from a security perspective. In terms of security gaps, most of the work with Machine Learning has been done on classification and recognition. For instance, in [1] researchers tested ML to distinguish IoT packets from DDoS attack packets; other security-related works are shown in [2–5], which together show the advantages that Machine Learning and Deep Learning have demonstrated in classifying packets and different types of malware. These works were also helpful as a basis for this new contribution, which predicts data network behavior and was constructed based on different regression models.
3 Design of the Experiment and Topology Used

The whole dataset was collected from a real environment using the SNMP and Netflow protocols and stored in a monitoring server for three and a half months (see Fig. 1). This data was used for training and also to compare the predictions made by the models with the actual values afterward. The monitoring server used is a general platform which could be used in almost any scenario that involves network devices. The pre-processing of the data and the design of the models were done mainly using the Tensorflow and Scikit-learn libraries.
3.1 Components Used in the Experiment

• Servers and terminals: Appliances that consume and produce traffic every day.
• Firewall: Perimeter appliance that protects the entire network from potential attacks from the Internet and keeps the servers from being accessed by everyone.
• Proxy Server: Appliance used for accessing the Internet and hiding sources.
• Nagios software: Tool used to store and process the data collected from the network.
• SNMP protocol: Used to send hardware feature values to the Nagios server.
• Netflow protocol: Used to send bandwidth consumption values to the Nagios server.

3.2 Values Evaluated and Compared with the Machine Learning Algorithms

• Bandwidth consumption: This parameter refers to the quantity of traffic going through the interfaces of the appliances. The values were obtained from the appliances with the help of the Nagios server and were used to train a model that dimensions and correlates future traffic according to the number of new users making use of the data network. The values are in Mb/s (megabits per second).
• Memory: Consumption taken from the firewall and proxy server to train and predict the future consumption correlated with new users in the network. The values are in MB (megabytes) or percentage (%).
Fig. 1. Environment under test
• CPU: Consumption taken from the firewall and proxy server to train and predict the future consumption correlated with new users in the network. The values are in percentage (%).

3.3 Topology of the Environment Tested

3.4 Process

The process of the experiment is shown in Fig. 2.
Fig. 2. Process of the experiment
3.5 Machine Learning Algorithms Tested

Multiple regression algorithms have been tested, among them:

• Random Forest Regression
• Linear SVR
• XGB Regression
• AdaBoost Regression
• Linear Regression
• K Neighbors Regression
• Decision Tree Regression
• ElasticNet Regression
• Kernel Ridge
• ARD Regression
• Bayesian Ridge
• Artificial Neural Networks
• SGD Regression
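A sketch of how such a comparison can be run with Scikit-learn (one of the libraries the paper names). The dataset below is synthetic, standing in for the SNMP/Netflow measurements, and only a few of the regressors above are included to keep it short.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the monitoring dataset: resource consumption
# driven by the number of connected/active/idle clients (illustrative only).
X = rng.uniform(0, 1000, size=(500, 3))
y = 0.02 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0.0, 1.0, 500)  # CPU (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Bayesian Ridge": BayesianRidge(),
    "Random Forest Regression": RandomForestRegressor(random_state=0),
    "K Neighbors Regression": KNeighborsRegressor(),
}
# R2 on held-out data, the same metric reported in the tables below.
scores = {name: r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
          for name, model in models.items()}
for name, r2 in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} R2 = {r2:.3f}")
```

Swapping in the remaining regressors is a one-line change per model, which is what makes this kind of broad comparison cheap to run.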
4 Analysis of the Results

4.1 Sizing of Resources in Network Devices

Sizing is a fundamental part of any networking appliance, since its performance depends on the amount of resources it has: a device with inadequate resources could suffer from a lack of them and consequently malfunction, while a device with more resources than needed wastes them. This is why one focus of this paper has been to prove that, by using Machine Learning, sizing of almost any appliance in a network is feasible. By sizing we mean knowing the quantity of resources that an appliance needs to work adequately. Machine Learning has been applied here to size the amount of bandwidth, CPU, and memory needed to handle the normal and future quantity of users going through the proxy server and the perimeter firewall (see Fig. 1).

4.2 Dataset Used and Preprocessing

Independent variables stored for 3.5 months and used for training are shown in Tables 1 and 2.

Table 1. Independent variables for sizing the proxy server.

Timestamp  | Connected clients | Active clients | Idle clients
1617256800 | 1226.17218        | 902.13341      | 324.03877
1617278400 | 1102.97019        | 833.09531      | 269.87489
1617300000 | 1098.53606        | 825.73575      | 272.80031
…          | …                 | …              | …
1624899600 | 1154.77044        | 787.35489      | 367.41556
Table 2. Independent variables for sizing the Firewall (input/output traffic values in Megabit/s).

Timestamp  | In (DMZ X) | Out (DMZ X) | In (DMZ Y) | Out (DMZ Y) | In (Inside) | Out (Inside) | In (Outside) | Out (Outside)
1617256800 | 5.03856    | 5.79119     | 0.81554    | 0.33079     | 26.92665    | 20.89671     | 17.91395     | 25.81333
1617278400 | 2.41819    | 3.10967     | 0.27499    | 0.19576     | 14.63181    | 8.44165      | 6.85066      | 12.56545
1617300000 | 4.05644    | 4.29181     | 0.23258    | 0.23839     | 7.25339     | 9.66778      | 8.49781      | 7.67136
…          | …          | …           | …          | …           | …           | …            | …            | …
1625119200 | 3.62238    | 3.31488     | 0.20555    | 0.14403     | 8.87727     | 13.28067     | 9.65857      | 8.16344
Dependent variables stored for 3.5 months and used for training are shown in Tables 3 and 4.

Table 3. Dependent variables for sizing the proxy server resources.

Timestamp  | CPU (%) | Memory (%) | Downloaded traffic (Mb/s) | Uploaded traffic (Mb/s)
1617256800 | 6.85147 | 19.37597   | 27.81626                  | 25.97085
1617278400 | 6.52981 | 18.17731   | 16.17160                  | 16.16772
1617300000 | 5.45794 | 18.17917   | 8.52629                   | 8.60174
…          | …       | …          | …                         | …
1624899600 | 8.35956 | 20.67278   | 34.00921                  | 39.97731
Table 4. Dependent variables for sizing the Firewall resources.

Timestamp  | CPU (%)  | Memory (MB)
1617256800 | 19.27353 | 1061.00000
1617278400 | 11.31096 | 1060.97231
1617300000 | 11.16260 | 1061.91866
…          | …        | …
1625119200 | 10.02100 | 1177.30011
Feature scaling was used to adapt all the independent variables to the machine learning models, as well as dimensionality reduction (Principal Component Analysis, Linear Discriminant Analysis, and Kernel Principal Component Analysis) to look for better results and to plot them for a better understanding.

Relationship used for the proxy server:

Dependent\ variable = ML(Timestamp + Connected\ clients + Active\ clients + Idle\ clients) \quad (1)

Relationship used for the Firewall:

Dependent\ variable = ML\left(Timestamp + \sum_{j=1}^{8} x_j\right); \quad x_j = \text{each independent variable} \quad (2)
Relationship between the samples taken for training:

t_1 = timestamp_i - timestamp_{i+1} = 21600; \quad timestamp \in \{1617253200 - 1618462800\} \quad (3)

t_2 = timestamp_j - timestamp_{j+1} = 1800; \quad timestamp \in \{1618462800 - 1625119200\} \quad (4)
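A minimal NumPy sketch of the preprocessing just described: feature scaling (standardization) followed by PCA via an eigendecomposition of the covariance matrix. It stands in for the Scikit-learn StandardScaler/PCA pipeline presumably used; LDA and kernel PCA are omitted for brevity.

```python
import numpy as np

def standardize(X):
    """Feature scaling: zero mean and unit variance per column."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pca_project(X, k):
    """Project standardized data onto its top-k principal components."""
    Xs = standardize(X)
    cov = np.cov(Xs, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    components = eigvecs[:, ::-1][:, :k]    # top-k directions by variance
    return Xs @ components
```

With the eight firewall traffic columns of Table 2 as input, `pca_project(X, 2)` would give the 2-D view useful for plotting the results.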
4.3 Results for the Proxy Server
Table 5. Relationship between real and predicted data, coefficient of determination (R2).

Algorithm                  | Out bandwidth | In bandwidth | Memory      | CPU
Random Forest Regression   | 0.247065116   | 0.255616893  | 0.74126049  | 0.007941604
XGB Regression             | 0.301586934   | 0.260489906  | 0.781896352 | −0.001054798
Linear Regression          | 0.387332754   | 0.337005877  | 0.734945417 | −2.037963969
Decision Tree Regression   | 0.060999887   | −0.745792592 | 0.653401011 | −0.016982945
Kernel Ridge               | 0.333234867   | 0.275492705  | 0.646786887 | 0.13156448
Bayesian Ridge             | 0.390060873   | 0.338232598  | 0.735258209 | −1.926475949
SGD Regression             | 0.38868041    | 0.330002602  | 0.734497347 | −1.95893386
LinearSVR                  | 0.266724041   | 0.160373202  | 0.729940706 | −1.877575129
AdaBoost Regression        | 0.212147312   | −1.082329199 | 0.654620984 | 0.008122456
K Neighbors Regression     | 0.341996373   | −0.279617307 | −0.405480254| −1.074741271
ElasticNet Regression      | 0.389181653   | 0.340197237  | 0.733987964 | −2.035745504
ARD Regression             | 0.389280862   | 0.338420068  | 0.734876713 | −2.048984006
Artificial Neural Networks | 0.380246639   | 0.369896064  | 0.752408945 | −2.915065047
According to Table 5, the results were good enough, with at least two algorithms usable for this purpose, and the plotted results gave a much deeper understanding, indicating that the predicted consumption patterns were quite accurate, matching all the peaks and minimum values. These results are sufficient for predicting resource consumption patterns and sizing any appliance based on the clients that use it. Additionally, some algorithms with negative R-squared values were also able to predict the patterns and peaks successfully, although the minimum values did not match entirely. Since the main goal when sizing resources is to assure sufficient resources for a device without wasting any, most of the algorithms fulfilled the objective and were able to predict peaks and consumption patterns. Figures 3, 4, 5 and 6 show the real values compared with the predicted ones after training and obtaining the final models.
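The point that a negative R-squared can coexist with a visually accurate pattern deserves a concrete example: R2 = 1 − SS_res/SS_tot goes negative whenever the model's residuals exceed the variance around the mean, for instance when a prediction has a constant offset yet still matches every peak. The toy values below are illustrative, not the paper's data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: R2 = 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([10.0, 20.0, 30.0, 40.0])
print(r_squared(y, y))         # perfect prediction: 1.0
print(r_squared(y, y + 25.0))  # same shape, constant offset: -4.0
```

This is exactly why, for sizing, the plotted curves are worth inspecting alongside the R2 score.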
Outbound Bandwidth to hold up all the clients using the proxy server:
Fig. 3. Real outbound bandwidth (Mb/s) vs machine learning predictions
Inbound Bandwidth to hold up all the clients using the proxy server:
Fig. 4. Real inbound bandwidth (Mb/s) vs machine learning predictions
Memory needed to hold up all the clients using the proxy server:

Fig. 5. Real memory consumption vs machine learning predictions

CPU needed to hold up all the clients using the proxy server:

Fig. 6. CPU consumption vs machine learning predictions
4.4 Results for the Firewall

The sizing for the firewall was made by using the traffic going through its interfaces; nonetheless, any other key values, e.g., connected clients or number of sessions, can be used. The results are shown in Table 6. In Fig. 7 it can be observed that the CPU patterns and peaks were matched with good accuracy, as the R-squared value shows. For the memory predictions, although the R-squared results were not as good, the plotted results show that the predictions are remarkably close, confirming that for the sizing of networking resources such as CPU, memory, or bandwidth, the R-squared metric should not be taken as the final or only measurement.
Table 6. Relationship between real and predicted data, coefficient of determination (R2).

Algorithm                  | CPU          | Memory
Random Forest Regression   | 0.673350443  | −0.017572947
XGB Regression             | 0.528125493  | −0.227372098
Linear Regression          | 0.489247577  | 0.096718418
Decision Tree Regression   | 0.21104306   | −0.220418086
Kernel Ridge               | −0.032501288 | −3.962110139
Bayesian Ridge             | 0.485579189  | 0.097014879
SGD Regression             | 0.489639624  | 0.074946666
LinearSVR                  | 0.607582551  | 0.032614694
AdaBoost Regression        | 0.574563053  | −0.428028681
K Neighbors Regression     | 0.665671332  | 0.009711926
ElasticNet Regression      | 0.470128097  | 0.096017301
ARD Regression             | 0.49526098   | 0.096604593
Artificial Neural Networks | 0.69835033   | −1.146687522
CPU needed to hold up all the traffic going through the Firewall:
Fig. 7. CPU consumption vs machine learning predictions
Memory needed to hold up all the traffic going through the Firewall (Fig. 8):
Fig. 8. Memory consumption vs machine learning predictions
4.5 Predicting Future Behaviors for Networking Components
Another proven use of Machine Learning in data networks is predicting future network behaviors in order to monitor anomalies and reduce security gaps. The same regression models were tested for this purpose, and the results are shown in Table 7. To carry out these tests, the independent variables in Table 2 were used to find relationships among all the traffic crossing the firewall appliance; the results were quite good, demonstrating that multiple security gaps, especially flood attacks, can be addressed. The final purpose of this advantage is to predict the behavior of a network and act, based on preprogrammed algorithms, when incongruences are found after a comparison with the real data. According to Table 7, the neural network model had the best performance, obtaining the most accurate results, which were plotted for comparison (see Figs. 10, 11, 12, 13, 14, 15, 16 and 17). Figure 9 shows the deep learning topology used, with "y" representing the traffic predicted for the chosen interface. The relationship used to determine the correlation between interfaces is:

R_y = ML Σ_{j=n}^{8} (Timestamp + x_j − y);  n > 0 & y = {x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8}   (5)
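A minimal stand-in for this idea — predicting one interface's traffic from a timestamp feature and another, correlated interface — can be sketched with a small multilayer perceptron. The paper's actual topology (Fig. 9) is deeper; the data and network size below are synthetic and illustrative only.

```python
# Synthetic stand-in for Eq. (5): predict traffic on one interface (y) from a
# timestamp feature and another, correlated interface (x_j).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
t = np.arange(1000)                                      # sample index
inside_in = 50 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 1, t.size)
outside_out = 0.9 * inside_in + rng.normal(0, 1, t.size) # correlated interface

X = np.column_stack([t % 288, inside_in])                # timestamp + other interface
y = outside_out
split = 700                                              # train only on the past

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=1))
model.fit(X[:split], y[:split])
print("held-out R^2:", r2_score(y[split:], model.predict(X[split:])))
```

In a deployment, a large gap between such a prediction and the measured traffic would be the "incongruence" that triggers the preprogrammed response.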
Table 7. Relationship between real and predicted data with R Square.

| Algorithm | In (DMZ X) | Out (DMZ X) | In (DMZ Y) | Out (DMZ Y) | In (Inside) | Out (Inside) | In (Outside) | Out (Outside) |
|---|---|---|---|---|---|---|---|---|
| Random Forest Regression | 0.9026 | 0.7163 | 0.6539 | −0.3087 | 0.8956 | 0.9449 | 0.7966 | 0.9131 |
| XGB Regression | 0.9060 | 0.8292 | 0.7194 | −0.2058 | 0.9236 | 0.9335 | 0.7562 | 0.9122 |
| Linear Regression | 0.8761 | 0.6933 | 0.4731 | −1.5149 | 0.9048 | 0.9502 | 0.8229 | 0.9030 |
| Decision Tree Regression | 0.9135 | 0.2725 | 0.5255 | −0.5943 | 0.8566 | 0.9294 | 0.7883 | 0.8596 |
| Kernel Ridge | 0.0918 | 0.3126 | 0.0133 | −5.5037 | 0.6395 | 0.2638 | 0.4121 | 0.6433 |
| Bayesian Ridge | 0.8768 | 0.7035 | 0.4824 | −1.5444 | 0.9049 | 0.9504 | 0.8249 | 0.9032 |
| SGD Regression | 0.7824 | 0.7012 | 0.4925 | −1.5856 | 0.9073 | 0.9461 | 0.7715 | 0.9018 |
| LinearSVR | 0.6709 | 0.5083 | 0.2591 | −1.5525 | 0.9049 | 0.9839 | 0.9568 | 0.9017 |
| AdaBoost Regression | 0.9001 | 0.3504 | 0.5905 | −7.7126 | 0.6881 | 0.7990 | 0.1882 | 0.7295 |
| K Neighbors Regression | 0.6550 | 0.7898 | 0.7127 | −0.2044 | 0.8940 | 0.8202 | 0.9217 | 0.9292 |
| ElasticNet Regression | 0.8626 | 0.7245 | 0.5248 | −1.5010 | 0.9055 | 0.9551 | 0.8321 | 0.9048 |
| ARD Regression | 0.8768 | 0.7036 | 0.4824 | −1.5438 | 0.9048 | 0.9504 | 0.8249 | 0.9032 |
| Artificial Neural Networks | 0.9844 | 0.8673 | 0.8191 | 0.0060 | 0.9996 | 0.9999 | 0.8518 | 0.9319 |
Fig. 9. Neural network used to train and predict network behaviors
4.6 Behaviors Predicted for All the Interfaces of the Firewall
Table 7 shows that, when it comes to predicting behaviors with Machine Learning, the remaining algorithms tested also offered good results. However, the ANN clearly had the best performance overall, and its results were very similar to the expected ones, as shown in Figs. 10, 11, 12, 13, 14, 15, 16 and 17.
4.7 Behaviors Plotted and Comparison with Real Data Obtained from the Firewall
Fig. 10. Outbound traffic (Outside Interface) vs ML prediction
Fig. 11. Inbound traffic (Outside Interface) vs ML prediction
Fig. 12. Outbound traffic (Inside Interface) vs ML prediction
Fig. 13. Inbound traffic (Inside Interface) vs ML prediction
Fig. 14. Outbound traffic (DMZ X Interface) vs ML prediction
Fig. 15. Inbound traffic (DMZ X Interface) vs ML prediction
Fig. 16. Outbound traffic (DMZ Y Interface) vs ML prediction
Fig. 17. Inbound traffic (DMZ Y Interface) vs ML prediction
5 Conclusions
Two useful applications of Machine Learning in data networks have been presented. The results were good enough to demonstrate that Machine Learning is a valuable addition to this field, whether for sizing resources in network devices or for predicting the future behavior of networking components. Some things to keep in mind are:
• For sizing, the results should not be measured solely by the R-squared coefficient; the plotted values must also be taken into account, since the correlation of real and predicted values is compared in real time and the probability of predicting every single value is low. Despite this, the patterns and peaks can be predicted very well, yielding a final model for calculating future resources for as many devices as desired. The final models and ML formulas can also be inserted into monitoring tools such as Nagios, which was used in this paper, or into newly created platforms (e.g. apps).
• For predicting network behaviors, this paper also aimed at reducing security gaps, with very good results: Artificial Neural Networks proved to be the best model for predicting future behaviors in data networks. This shows that the future state of a network can be anticipated, not just in terms of traffic, as demonstrated here, but also for other values not addressed here that could easily be extrapolated. As a next step, new applications for predicting protocols, connections and sessions will be addressed, as well as inserting these prediction models into monitoring systems and networking devices to give them a higher degree of intelligence, so that they act immediately in any scenario that does not comply with the normal behaviors and predictions.
References
1. Doshi, R., Apthorpe, N., Feamster, N.: Machine learning DDoS detection for consumer internet of things devices. In: 2018 IEEE Security and Privacy Workshops (SPW), pp. 29–35 (2018). https://doi.org/10.1109/SPW.2018.00013
2. Pei, J., Chen, Y., Ji, W.: A DDoS attack detection method based on machine learning. In: ICSP 2019, J. Phys.: Conf. Ser. 1237, 032040, pp. 3–4 (2019). https://doi.org/10.1088/1742-6596/1237/3/032040
3. Bensaoud, A., Abudawaood, N., Kalita, J.: Classifying malware images with convolutional neural network models. arXiv:2010.16108 (2020). https://doi.org/10.6633/IJNS.202011_22(6).17
4. Gao, X., Changzhen, H., Shan, C., Liu, B., Niu, Z., Xie, H.: Malware classification for the cloud via semi-supervised transfer learning. J. Inf. Secur. Appl. (2020). https://doi.org/10.1016/j.jisa.2020.102661
5. Yan, J., Yan, G., Jin, D.: Classifying malware represented as control flow graphs using deep graph convolutional neural network. In: 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 52–63 (2019). https://doi.org/10.1109/DSN.2019.00020
6. Wong, T.K., Mun, H., Phang, S., Lum, K., Tan, W.: Real-time machine health monitoring system using machine learning with IoT technology. In: 14th EURECA 2020, pp. 2–10 (2021). https://doi.org/10.1051/matecconf/202133502005
7. Xiao, G.: Machine learning in smart home energy monitoring system. ESAET 2021, 6–7 (2021). https://doi.org/10.1088/1755-1315/769/4/042035
Machine Vision
System for Troubleshooting Welded Printed Circuit Boards with Through Hole Technology Using Convolutional Neural Networks and Classic Computer Vision Techniques

Alberto-Santiago Ramirez-Farfan1 and Miguel-Angel Quiroz-Martinez2(B)

1 Student of the META Master's Program, Universidad Politécnica Salesiana, Guayaquil, Ecuador
[email protected]
2 Computer Science Department, Universidad Politécnica Salesiana, Guayaquil, Ecuador
[email protected]
Abstract. Manual inspection in printed circuit board manufacturing is highly susceptible to failure through human error. This opens the way for automated visual inspection, and several methods exist for detection based on images captured by a camera. The objective of this work is to develop a computer vision system using convolutional neural networks and classical computer vision techniques for locating soldering faults on printed circuit boards with through-hole technology. For this purpose, the OpenCV library on Python is used to detect the region of interest within the image prior to the analysis and classification performed by the convolutional neural network ResNET50. Two types of faults were considered: lack of solder and solder bridges. The results obtained in the experimental classification tests show an accuracy higher than 90%. This makes automated visual inspection viable in the testing and inspection processes for soldering errors on printed circuit boards. The dataset is available at: https://github.com/asrf001/DatasetPCB.git.

Keywords: Automated visual inspection · Convolutional neural networks · Classical computer vision · OpenCV · ResNET50
1 Introduction

Computer vision gives computer systems a source of graphical information from the surrounding environment that can be used to address and pose new high-level problems [1]. In the manufacturing process of electronic equipment, on a large scale or in small production runs, manual visual techniques are still used during the testing and inspection stage to detect errors on printed circuit boards (PCBs) [2]. The purpose of this process is to recognize and detect possible failures in the production line due to soldering defects such as bridging or missing solder [3]. Considering the time spent performing this work manually, it gives way to the use of artificial intelligence and automatic optical inspection

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 199–208, 2022. https://doi.org/10.1007/978-3-030-96147-3_16
to detect possible failures by processing images captured by a camera after the PCB soldering process, prior to the electrical test process [4].

As presented in [5], there are several categories for using automated visual inspection within the PCB manufacturing process, such as detecting faults in the silkscreen printing, in the conductive tracks present on the PCB, and in the soldering process of the components. The latter is the objective of this research. One of the most popular techniques for object detection is the use of convolutional neural networks (CNNs), owing to their practicality, robustness, and processing speed [6]. There are also problems that do not require great computational power and can be solved using classical computer vision techniques [7]. This, in turn, raises the question of whether these two methods can be combined to develop a soldering fault detection system. In a complementary way, the classical vision technique allows the area of interest to be extracted so that this region can then be analyzed with the neural network to classify the faults present, as stated in [8].

Therefore, the objective of the present work is to develop a computer vision system using convolutional neural networks and classical computer vision techniques for localizing soldering faults in printed circuit boards with through-hole technology. For this purpose, the OpenCV library on Python is used to detect the region of interest within the image prior to the analysis and classification performed by the convolutional neural network ResNET50.

In [9], image subtraction techniques are applied to detect failures in the PCB conductive tracks, identifying functional defects that will be critical for the subsequent discarding of these parts, while [10] improves on plain image subtraction by adding other classical computer vision techniques.
One computer vision technique for detection is the use of feature pyramid networks, which facilitate recognition at different scales and are not affected by changes in the size of the objects to be detected, since these are compensated by the different scales of the detection pyramid [11]. One way to force learning and improve evaluation results is data augmentation, a technique that provides a larger amount of data; however, the manual effort involved and the experience required to tune its parameters must be taken into consideration [12]. One method employed for data augmentation is random erasing within the area of interest, which increases the generalizability of the network [13]. As stated in [14], automated visual inspection for fault detection provides dynamic and varied information on potential faults in manufacturing processes; in turn, it becomes a tool that improves and enhances the production and manufacturing capacity of printed circuit boards. The classification results, in the experimental phase, reached an accuracy of over 90%. This makes it feasible to use automated visual inspection in the testing and error-inspection processes in the manufacture of through-hole technology printed circuit boards.
2 Materials and Methods

In order to locate solder faults on printed circuit boards, the process was divided into two stages. The first stage used classical computer vision techniques to locate the area of interest.
The second stage evaluated this region with the trained neural network. For this case, the ResNet50 model was used, which allows a deeper convolutional neural network to be trained and implemented with lower training complexity [15]. The process is detailed as follows.

2.1 Lighting System

Lighting conditions are an important aspect of image recognition. The images were conditioned to ensure that acquisition and processing remained stable. The variation of luminosity in the images was controlled through the use of artificial light in the environment where the sensor and the pieces to be inspected were placed. By controlling the ambient light, changes in brightness that cause variance in image acquisition were avoided; such changes cause glitches or errors in image processing. Classical vision techniques are very sensitive to changes in brightness, whereas neural networks are not. For the present work, a low-angle lighting system was chosen, as shown in Fig. 1. This distribution was used because the other two configurations caused shadows on the part to be analyzed, since the sensor is 12 cm away from the part.
Fig. 1. Lighting distribution according to the angle of incidence. Low-angle illumination was used for the present work.
A comparative table of system brightness under different conditions is presented. Table 1 shows the variation of the system as a function of ambient light. The variation ranges from 9 lx to 600 lx, which led to significant changes in the images when applying classical computer vision techniques. A low-angle lighting system was installed to minimize this effect; this controlled system provides constant brightness of 350 to 360 lx, as shown in Table 1. Additionally, the structure where the sensor and samples are placed was coated to avoid the incidence of ambient light and improve image acquisition.
Table 1. Comparison of system brightness under different conditions.

| Condition | Brightness |
|---|---|
| Ambient light, cloudy day | 9–18 lx |
| Ambient light, sunny day | 40–50 lx |
| Low-angle artificial light | 350–360 lx |
2.2 Image Acquisition

The focal length was considered in the selection of the sensor. With an appropriate focal length, recognition of the proposed faults is guaranteed. Failures were labelled as P for solder bridges and F for lack of solder. Webcams used for video conferencing have a focal length between 60 and 80 cm; this focal distance does not allow the recognition of the solder points of the system. For this application, a sensor with a focal length of 12 to 16 cm was used. Two types of images were used: high resolution with a 4:3 ratio, 23 megapixels and a size of 5520 × 4140 pixels, and standard resolution with a 16:9 ratio, 1 megapixel and a size of 1280 × 720 pixels. The standard resolution was used for the real-time test. Table 2 shows the area of the proposed faults in relation to the whole circuit board. The faults proposed for this system are shown in Fig. 2: Fig. 2(a) shows a two-point solder bridge, while Fig. 2(b) and (c) show unsoldered points, styles C-90-40 and C-50-25, respectively.
Fig. 2. Image of a 92 × 94 mm PCB captured with a focal length of 12 cm.

Table 2. Size and ratio of the labels to the original image.

|  | Original PCB | Figure 2(a) | Figure 2(b) | Figure 2(c) |
|---|---|---|---|---|
| Ratio | 100% | 0.17% | 0.10% | 0.04% |
| Size | 92 × 94 mm | 5 × 3 mm | 3 × 3 mm | 2 × 2 mm |
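The ratios in Table 2 follow directly from the stated dimensions; a quick arithmetic check reproduces them (the smallest label comes out near 0.04–0.05%, depending on rounding):

```python
# Checking Table 2: fault-label area as a share of the 92 x 94 mm board area.
pcb_area = 92 * 94  # mm^2

labels = {"Fig. 2(a)": (5, 3), "Fig. 2(b)": (3, 3), "Fig. 2(c)": (2, 2)}
for name, (w, h) in labels.items():
    ratio = 100 * w * h / pcb_area
    print(f"{name}: {ratio:.2f}% of the board area")
```

These tiny ratios are what make the lack-of-solder faults harder to detect than the larger solder bridges.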
2.3 Image Processing

First, a grayscale transformation was performed. A grayscale transformation combines the three channels of a red, green, and blue image; the general formula weights each channel to obtain a new image:

Gray = 0.299 * R + 0.587 * G + 0.114 * B   (1)

A variant of the general grayscale was used for this system: the grayscale given by the difference of the red and green channels. This combination of channels makes it possible to easily differentiate the solder points from the green colour of the anti-solder mask:

Diff(x, y) = R(x, y) − G(x, y)   (2)

With this variant, a higher contrast of the image to be processed is obtained. Another possible technique is to change to a chromatic colour space, which reduces the effect produced by changes of brightness in the image.

The next step is to apply smoothing to the image. The blur effect reduces the detail in the image; cv2.medianBlur was used, although cv2.GaussianBlur is also an option. After smoothing the image, the edges were identified by applying cv2.Canny, which detects all the edges present in the image. This step was performed to differentiate the background from the area of interest.

Next, the contours of the image were obtained. To improve contour detection, dilation (cv2.dilate) and erosion (cv2.erode) were applied to the edges of the image to avoid discontinuous edges. After the dilation and erosion processes, the contours were detected with cv2.findContours, which returns all contours present in the image. To identify the main contour, the area of every contour was considered: the largest contour corresponds to the contour of the printed circuit board. Using the points describing this contour, a minimum-area rectangle covering the area of interest was approximated. As a final step, the minimum-area rectangle was used to create a clipping mask, with which the area of interest was obtained (Fig. 3). This area of interest was then processed with the neural network.

2.4 Neural Network Training

The ResNet50 model was used, which is a variation of the ResNet model. It contains 48 convolution layers, a MaxPool layer, and an Average Pool layer [16]. The model has more than 36 million parameters, of which 99.71% are trainable.
The model resizes the image to a minimum side of 800 pixels and a maximum side of 1333 pixels. The size of the images and the size of the anchor windows must be considered when determining the labels. The dataset is composed of 3196 images. A model previously trained on the COCO dataset was used as a basis [17], since starting from a pretrained base model gives better results when training on a new data set [18]. The labels used are P and F for the two proposed faults. Images were captured at different scales.
Fig. 3. Extraction of the area of interest from a captured image prior to evaluation.
Different scales were used to divide the evaluation of the network into two stages. This division makes it possible to distinguish the lack-of-solder failure on the 92 × 94 mm PCB; smaller PCBs were evaluated directly, without dividing the image. For data augmentation, the images were rotated every 90°, and vertical and horizontal mirroring was applied. Internally, the network applied a random transformation to each image prior to processing, modifying characteristics such as scale, brightness and contrast, among others. This process gives the system greater robustness to variations in image capture.
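The 90° rotations plus mirroring described above can be sketched as follows; each source image yields eight dihedral variants (vertical mirroring is covered automatically, since it equals a horizontal mirror plus a 180° rotation):

```python
import numpy as np

def augment(img):
    """Return the 8 dihedral variants used for data augmentation:
    rotations every 90 degrees, plus the horizontally mirrored set."""
    variants = []
    for base in (img, np.fliplr(img)):   # original and horizontal mirror
        for k in range(4):               # 0, 90, 180, 270 degree rotations
            variants.append(np.rot90(base, k))
    return variants

sample = np.arange(12).reshape(3, 4)     # stand-in for an image array
aug = augment(sample)
print(len(aug))                          # 8 variants per source image
```

Applied to the 3196-image dataset, this kind of augmentation multiplies the effective training data without new capture effort.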
3 Experiments and Results

This section validates and tests the performance of the proposed method. Different performance metrics were used, such as the accuracy percentage per label and the learning curve based on the training loss function of the model. Experiments were performed on the 3196 samples, distributed 70% for training and 30% for validation. The system processed between 15 and 17 epochs out of the 50 configured. The samples used were images at different scale levels in order to force the training. Figure 4 shows the training results for the two labels: an accuracy of 84.7% was obtained for the F label and 87% for the P label. Lack-of-solder errors took more effort to detect because of the ratio of the failure area to the whole image; as Table 2 shows, this ratio is between 0.1% and 0.04% of the total area of the captured image. The bridging errors, covering a larger area, ended up with a higher accuracy percentage. Analyzing the graph of the loss function, we observed that it does not decrease uniformly, due to the similarity between the labels of the proposed failures (see Fig. 5).
Fig. 4. Percentage of accuracy F label (green) and P label (blue).
Fig. 5. Training loss function curve of the model.
Implementing the model for detection and observing the experimental results showed that this detection method can achieve good results on the proposed data set. For the 92 × 94 mm PCBs, a two-stage scan is used given their size, while the 34 × 45 mm PCBs were processed directly in a single scan. Figure 6 shows the evaluation of the upper part of the circuit, with 21 detections of the F label and 2 of the P label. Averaging the accuracy of these detections, we obtained 97.32% for the F label and 98.1% for the P label, with a minimum accuracy of 86.1% and a maximum of 99.8% in fault detection. On the other hand, Fig. 7 shows the evaluation of the bottom of the circuit, with 19 detections of label F and 2 of label P. Averaging the accuracy of these detections, we obtained 95.65% for label F and 93.45% for label P, with a minimum accuracy of 80.7% and a maximum of 99.6% in fault detection.
Finally, the results show that, within the detection and evaluation ranges based on the optimal focal length, the system achieved a high success rate for the failures proposed in the present work. Detection capacity can be enhanced by retraining the current neural network with new data sets that contribute to its learning.
Fig. 6. PCB fault detection in the upper zone using the ResNet50 model.
Fig. 7. PCB fault detection in the lower zone using the ResNet50 model.
4 Discussion

In this work, the feasibility of jointly using classical computer vision techniques with convolutional neural networks was verified. There are problems where classic artificial vision techniques alone obtain good results, but these techniques are sensitive to changes in brightness level; neural networks minimize such problems, since training is carried out with various types of images.

Our system detects faults in the soldering of printed circuits with through-hole technology by evaluating a deep convolutional network, ResNET50, with preprocessing based on classical techniques. Still, detection problems appear when the focal length between the sensor and the sample is not considered. To solve this, an inspection region is defined within the allowed focal-length ranges, and the sample is evaluated by moving the sensor over it. In cases where the boards are small, a single evaluation can be performed directly, since they fall entirely within the analysis area at the working focal length. Another limitation is the flexibility of the neural network with other types of samples. To address it, the current network can be trained with a new dataset depending on the objective to be achieved, for example, to identify or detect faults in surface-mount soldering. Experimentation offers satisfactory results, and retraining the neural network is interesting since it broadens the spectrum of its range of action.
5 Conclusions

In this work, a classical vision system and a convolutional neural network, ResNet50, were used in a complementary way, obtaining an accuracy higher than 90% in the experimental tests. This makes feasible the use of a computer vision system employing convolutional neural networks and classical vision techniques in the testing and error-inspection processes of through-hole technology printed circuit boards. One of the most interesting findings of this work is that, although the training accuracy is around 85%, the practical implementation achieved an average accuracy of over 95%. This is due to the different scales used in the image capture for the creation of the dataset. However, when images of other PCBs not considered in training and validation are provided, false positives are prone to occur, owing to the inherent characteristics of the different solder pads that may be present on boards outside the proposed data set. To minimize this effect, the system can be retrained, increasing its processing capacity and improving its effectiveness against sample variances. In future work, the presented technique can be adapted to a component detection system for printed circuit boards or to surface-solder detection systems, and it can be implemented in other programming languages given the flexibility of the functions used, which are found in many image processing tools.

Acknowledgments. This work has been supported by the GIIAR research group and the Universidad Politécnica Salesiana.
References
1. Klette, R.: Image processing. In: Concise Computer Vision. UTCS, pp. 43–87. Springer, London (2014). https://doi.org/10.1007/978-1-4471-6320-6_2
2. Lin, Y.L., Chiang, Y.M., Hsu, H.C.: Capacitor detection in PCB using YOLO algorithm. In: 2018 International Conference on System Science and Engineering, ICSSE 2018 (2018)
3. Dai, W., Mujeeb, A., Erdt, M., Sourin, A.: Towards automatic optical inspection of soldering defects. In: Proceedings - 2018 International Conference on Cyberworlds, CW 2018 (2018)
4. Schwebig, A.I.M., Tutsch, R.: Compilation of training datasets for use of convolutional neural networks supporting automatic inspection processes in industry 4.0 based electronic manufacturing. J. Sensors Sens. Syst. (2020)
5. Anitha, D.B., Rao, M.: A survey on defect detection in bare PCB and assembled PCB using image processing techniques. In: 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET) (2018)
6. Zhang, Q., Zhang, M., Chen, T., Sun, Z., Ma, Y., Yu, B.: Recent advances in convolutional neural network acceleration. Neurocomputing (2019)
7. Adibhatla, V.A.: Detecting defects in PCB using deep learning via convolution neural networks. In: 2018 13th International Microsystems, Packaging, Assembly and Circuits Technology Conference, pp. 202–205 (2018)
8. Tello, G., Al-Jarrah, O.Y., Yoo, P.D., Al-Hammadi, Y., Muhaidat, S., Lee, U.: Deep-structured machine learning model for the recognition of mixed-defect patterns in semiconductor fabrication processes. IEEE Trans. Semicond. Manuf. 31, 315–322 (2018)
9. Raihan, F., Ce, W.: PCB defect detection using OpenCV with image subtraction method. In: Proceedings of 2017 International Conference on Information Management and Technology, ICIMTech 2017 (2018)
10. Kaur, B., Kaur, G., Kaur, A.: Detection of defective printed circuit boards using image processing. Int. J. Comput. Vis. Robot. 8, 418–434 (2018)
11. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 936–944 (2017)
12. Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: practical automated data augmentation with a reduced search space. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2020)
13. Zhong, Z., Zheng, L., Kang, G., Li, S., Yang, Y.: Random erasing data augmentation. In: Proceedings of the AAAI Conference on Artificial Intelligence (2020)
14. Cheong, L.K., Suandi, S.A., Rahman, S.: Defects and components recognition in printed circuit boards using convolutional neural network. In: Zawawi, M.A.M., Teoh, S.S., Abdullah, N.B., Mohd Sazali, M.I.S. (eds.) 10th International Conference on Robotics, Vision, Signal Processing and Power Applications. LNEE, vol. 547, pp. 75–81. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-6447-1_10
15. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition (ResNet). In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
16. Ji, Q., Huang, J., He, W., Sun, Y.: Optimized deep convolutional neural networks for identification of macular diseases from optical coherence tomography images. Algorithms 12 (2019)
17. Lin, T.Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) Computer Vision – ECCV 2014. LNCS, vol. 8693. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
18. Vieira, J.P.A., Moura, R.S.: An analysis of convolutional neural networks for sentence classification. In: 2017 43rd Latin American Computer Conference, CLEI 2017 (2017)
Security
Cybersecurity Mechanisms for Information Security in Patients of Public Hospitals in Ecuador

Mauricio Quimiz-Moreira(B), Walter Zambrano-Romero, Cesar Moreira-Zambrano, Maritza Mendoza-Zambrano, and Emilio Cedeño-Palma

Universidad Técnica de Manabí, Portoviejo, Manabí, Ecuador
{mauricio.quimiz,walter.zambrano,armando.moreira,emilio.cedeno}@utm.edu.ec
Abstract. Information is considered the most important asset for most organizations in the healthcare sector. It is sensitive and critical, as it contains detailed patient data on socioeconomic status, analyses, diagnoses and medical treatments administered by healthcare centers; unauthorized alteration or theft of this data is a latent problem that can affect both the information systems managed in these environments and the patients themselves. Cybersecurity in public hospitals must follow clear assurance policies in order to respond quickly and effectively to any type of threat, which are becoming more and more advanced. This paper analyzes the current state of security and confidentiality of patient information in type II public hospitals in Ecuador, establishing an analysis methodology based on national and international laws, standards, agreements and regulations, applied to each of the health entities taken as a sample. Finally, guidelines are defined for adequate treatment of patient information, an essential component of data confidentiality. As a result, it was found that patient information in the health units is handled through control mechanisms in accordance with HIPAA and ISO 27799. The study also shows the shortcomings in the assurance of patient information, despite the efforts made by the institutions, so processes for continuous improvement are proposed.

Keywords: Privacy · Integrity · Patient information · Cybersecurity · Confidentiality
1 Introduction

Every company or organization, regardless of its nature and activity, requires security measures and controls to protect the integrity of its information from the actions of cybercriminals, who compromise the most valuable asset and cause serious damage to institutional prestige, disclosure of information, or considerable economic losses. Security is a worldwide problem, as shown by the publication of [1], which states that there are imminent risks in more than 70 countries, including Ecuador. [2] indicates that, according to the information security firm GMS, in the last two years computer
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 211–224, 2022. https://doi.org/10.1007/978-3-030-96147-3_17
attacks on hospitals, laboratories and other institutions in the health sector increased by 600%. The most recent event in the health area in Latin America was the global cyberattack that affected companies and public institutions around the world and left among its victims Colombian government entities such as the National Health Institute (INS) [3]. One reason is that on the black market a medical record fetches more than US$50, exceeding the value of credit card information, which ranges between US$5 and US$15. A card only allows access to one channel of information, whereas a patient record includes all the patient's personal data, home address, financial record, diseases, etc. [2]. From this starting point, two important concepts described by [4] are addressed: information security and computer security, closely linked terms, since the first secures information as a whole while the second considers exclusively technical controls to secure digital information. Both rest on the basic principles of the so-called security triangle, which encompasses three elements: availability, confidentiality and integrity of information, each contributing to making information security comprehensive [5]. These concepts are of great importance, as information is a very valuable and sensitive asset for organizations and companies [6]. A question then arises: how valuable is information for a health-oriented organization? There may be many answers, but all agree that the relevance lies in securing the sensitive information handled by these entities, such as patient data [7]. There are several Ecuadorian laws, rules, regulations and agreements oriented to the management and security of information for public or private organizations that manage public resources, as indicated by [4], among which we cite:
✓ Constitution of the Republic of Ecuador 2008.
✓ National Plan for Good Living 2017–2022.
✓ Law of the National System of Public Data Registry.
✓ Law of Electronic Commerce, Electronic Signatures and Data Messages.
✓ Organic Law on Transparency and Access to Public Information.
✓ Organic Law of Telecommunications.
✓ Organic Law of Jurisdictional Guarantees and Constitutional Control.
✓ Organic Integral Penal Code (COIP).
✓ Internal Control Standards of the Office of the Comptroller General of the State.
✓ International Agreements.

Those oriented to the health field are:
• Organic Law of Public Health.
• Organic Law of the National Health System.
• Ministerial Agreements oriented to the assurance of public health information.

Although public health organizations in Ecuador can use these laws, rules, regulations and agreements to improve information security, there is no specific document that establishes guidelines for adequate management and privacy of patients' medical information in Ecuador [8]. Some countries have established laws and regulations for the protection of clinical information, as is the case of the Health Insurance
Portability and Accountability Act (HIPAA) in the United States (enacted by the Senate and House of Representatives of the United States of America in Congress assembled, 1996), or Regulation 2016/679 within the European Union [9, 10], which serve as references for the assurance of patient information in those regions [11]. The health sector in Ecuador is segmented, as described by [11]: the Ministry of Public Health (MSP), the Ecuadorian Social Security Institute (IESS), the Armed Forces Social Security Institute (ISSFA) and the National Police Social Security Institute (ISSPOL), with the first of these entities covering the majority of the population and the last three covering workers, military and police, respectively. There are also the health services provided by municipalities and prefectures, which work together with organizations linked to health issues at the local level [12]. In this setting, the tasks related to the management and security of patient information, which fall to the Information Technology (IT) area, have been cut back, giving rise to a high rate of vulnerabilities in patient data caused by the failure to apply norms, laws or standards, and in many cases by the carelessness of medical, administrative and technology personnel [13].
However, some information assurance work in the health area has taken place under Ecuadorian laws and regulations, as well as national and international standards. One case is the technical manual of processes based on international standards for IT risk management established by [14] to improve processes and prepare for possible vulnerabilities; another is the adaptation of the ISO 27001 and HIPAA standards for security risk reduction described by [13]. These initiatives, many of them undergraduate and graduate student projects that benefited certain institutions, often remain mere good intentions: they are not given the follow-up needed to be replicated in all health entities, and ignoring them can cause serious effects such as disclosure, alteration or theft of patient information, all the more so in units that work with a clinical history in electronic format (EHR) [15], causing serious damage to the institution as well as to the patient. The deployment of Internet of Things (IoT) infrastructures [16, 17] must also be considered; they are present in different fields, among them the health sector, so more emphasis must be placed on information security, because modern cyber threats target connected health devices [7, 18] and can compromise patient information. With the above described, it is essential to study the current situation of patient information security in type II public hospitals in Ecuador through the analysis of norms and standards, taking as the scope of the study the health entities of the cities of Portoviejo and Manta. Therefore, an analysis and evaluation of these norms in relation to the security of patients' clinical information in the Verdi Cevallos Balda, IESS Portoviejo and Rodríguez Zambrano (Manta) hospitals is proposed, establishing the theoretical basis of the study.
2 Materials and Methods The research was conducted at the Verdi Cevallos Balda Hospital, the Hospital of the Ecuadorian Institute of Social Security (IESS) in Portoviejo and Rodríguez Zambrano
in Manta, all linked to the security of patients' clinical information. For the development of the present work, introspective experiential research was applied, making use of current bibliographic material on laws, norms, standards and regulations, combined with field research applying techniques such as observation, surveys and analysis in the IT areas of the health centers under study. Consequently, the methodological process establishes the following activities: systematization of information; knowledge and characterization of experiences on information security; definition and structure of the controls to be evaluated; application of the evaluation; and management and collection of relevant data to generate a proposal for change. An analysis was performed of the norms, laws and standards considered in this study, establishing criteria based on ISO 27799, the HIPAA Law and Regulation 2016/679 to verify the scope of importance of each of them, as shown in Table 1.

Table 1. Comparison of laws and regulations

Criteria | ISO 27799 | Law HIPAA | Regulation 2016/679
General criteria (information security principles):
Confidentiality | X | X | X
Integrity | X | X | X
Availability | X | X | X
Associates sanctions to health entities for non-compliance in securing patient information | | X | X
Supports user consent for appropriate use of patient's personal information | | X | X
Supports the option to request correction or cancellation of information that the user considers inadequate | | X | X
Specific criteria:
Focused on the specific security of patient data | X | X |
Suggests the implementation of controls at a general level in the healthcare environment according to needs | X | |
Applicable internationally | X | |
Considers risk analysis to be fundamental for the improvement of security | X | |
Pseudonymization of patient information | | | X
Classification of health information | X | | X

Source: Own development.
In order to carry out the evaluation tasks in the health care facilities, the criteria of the IT area managers and the standards and laws analyzed in Table 1 were taken into consideration, contemplating the main elements that interact with inpatient and/or outpatient information.

Table 2. Hospital discharges according to province and location of the health facility.

Province | Hospital discharges 2019
Azuay | 80549
Bolívar | 10032
Cañar | 19153
Carchi | 10094
Cotopaxi | 26392
Chimborazo | 36958
El Oro | 52519
Esmeraldas | 31912
Guayas | 292213
Imbabura | 31688
Loja | 39190
Los Ríos | 49790
Manabí | 88788
Morona | 16827
Napo | 8435
Pastaza | 8954
Pichincha | 252413
Tungurahua | 46302
Zamora | 8335
Galápagos | 1911
Sucumbíos | 13687
Orellana | 6933
Santo Domingo de los Tsáchilas | 39275
Santa Elena | 22961

Source: Own development.
As shown in Table 2, the sample considered for this study was the province of Manabí, since it is the province with the third-highest number of hospital discharges at the national level according to the Ecuadorian Institute of Statistics and Censuses (INEC) [19, 20]. The hospitals considered are shown in Table 3, where each name is detailed and an identification code is defined that was applied during the analysis.
Table 3. Level II Hospitals in the province of Manabí.

Institution | City | Code
Hospital IESS | Portoviejo | HIESS
Hospital Verdi Cevallos Balda | Portoviejo | HVCB
Hospital Rodríguez Zambrano | Manta | HRZ

Source: Own development.
The health entities considered in Table 3 were chosen according to the number of hospital discharges reported by the Ecuadorian Institute of Statistics and Censuses (INEC) [20], in order to have an adequate discrimination criterion. Through the survey it was possible to collect essential information for the study and obtain adequate analysis data to propose timely improvements within the information security scheme. The survey is divided into 14 groups with a total of 60 questions, taking the ISO 27799 standard as a basis and combining it with the HIPAA Law and Regulation 2016/679-EU. The scope of each group is described below:

Group 1: Information security policies. - Considers top management's guidelines for information security, which are embodied in the information security policies, their validity, and their regular review and validation process.

Group 2: Organization of information security. - Considers the roles and responsibilities in the field of information security within the internal environment, and the costs of the security measures to be implemented, establishing a gradual process until acceptable security is achieved. Also covers the use of mobile devices and teleworking.

Group 3: Security linked to human resources. - Considers an adequate personnel selection process, assignment of functions and responsibilities, continuous training plans, and an adequate process for job separation or change, ensuring the confidentiality of the people who handle patient data.

Group 4: Asset management. - Considers the organization's asset inventory, the personnel responsible for the assets, plans for the proper use of each asset, classification, labeling and management of information, and management of removable media.

Group 5: Access control. - Establishment of access control policies for networks and services in the organizational network, and user management considering levels of access to information or systems, authentication, and review and readjustment of privileges.

Group 6: Management of patient data. - Considers the pseudonymization of patient data in the event of sharing information with third parties, maintaining an adequate process of patient consent for the use of his/her data for the purposes deemed appropriate, as well as allowing the withdrawal of these permissions whenever the patient deems it appropriate.

Group 7: Physical and environmental security. - Considers physical security perimeters and places restricted to internal or external users, equipment for the protection of the technological infrastructure such as the electrical system, battery system and wiring, and equipment maintenance plans.
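The pseudonymization required by Group 6 (and central to Regulation 2016/679) can be illustrated with a few lines of code. The sketch below is not from the study; it uses Python's standard `hmac` module, and the key, identifier and record fields are hypothetical:

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identifier.

    A keyed hash (HMAC-SHA-256) cannot be reversed without the secret
    key, which must be kept separately from the shared data set.
    """
    return hmac.new(secret_key, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical key held only by the hospital, never given to third parties.
key = b"example-key-kept-by-the-hospital"

record = {"patient_id": "1310123456", "diagnosis": "J18.9"}
# What a third party would receive: records remain linkable across data
# sets via the pseudonym, but carry no direct identifier.
shared = {"pseudonym": pseudonymize(record["patient_id"], key),
          "diagnosis": record["diagnosis"]}
```

Because the same key always yields the same pseudonym, records can still be joined for research or audit purposes, while re-identification requires the separately stored key, which is the property Regulation 2016/679 associates with pseudonymization.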
Group 8: Operations security. - Considers procedures and operations such as change and capacity management, protection against malicious code, backup copies, event logging and tracking, software control in production, software installation restrictions, and information systems audits.

Group 9: Communications security. - Considers network security management, including communication controls, network segmentation, adequate protocols for the transfer of secure information, adequate management of electronic messaging, confidentiality and information disclosure agreements, high availability of communications for access to information, and the encryption methods used to protect information stored or in transit.

Group 10: Acquisition, development and maintenance of information systems. - Information systems security requirements, security in development and support processes, secure test environments, and secure deployment processes to production environments.

Group 11: Relationship with suppliers or third parties. - Considers appropriate information security policies for relationships with suppliers or third parties and the acquisition of services or delivery of secure information, with clear conditions of consent for the use of personal data.

Group 12: Information security incident management. - Incident management planning and information security improvements, responsibilities and procedures, event reporting, weakness reporting, event evaluation, incident response, learning, and evidence gathering. Includes notifying the competent authorities about incidents and communicating information leaks to affected users.

Group 13: Business continuity. - Contingency plans that allow for business continuity in the event that some of the areas of interest are affected; verification, review and evaluation of the plan are important.

Group 14: Compliance. - Adopt information security standards appropriate to the environment, aligned with the policies, laws and regulations involved in the assurance of patients' health information.
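For the transfer controls in Group 9, one concrete measure is to refuse legacy protocols whenever patient data travels over the network. The fragment below is an illustrative sketch, not part of the study, using Python's standard `ssl` module to build a client-side context that enforces TLS 1.2 or newer with certificate verification:

```python
import ssl

# Context for outbound connections carrying patient data:
# certificate verification and hostname checking on (the library
# defaults, restated here for clarity), and no protocol older
# than TLS 1.2, so a downgrade attempt fails instead of succeeding.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
```

A context like this would then be passed to the networking layer in use, for example `http.client.HTTPSConnection(host, context=context)`, so every connection that moves clinical data is encrypted in transit.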
3 Results and Discussions

This section describes the three compliance levels used to rate the controls in place at the health units; these parameters are defined according to their use in most audits conducted under ISO standards and by experts in computer security auditing:

• NC = Does not comply
• CP = Partially complies
• CS = Complies satisfactorily

The results obtained from applying the bank of questions linked to the controls of the standard analyzed, as the instrument for studying patient information security in type II public hospitals in Ecuador, are shown for the three health centers in Tables 4, 5 and 6, with the percentage graph for each health unit in Fig. 1. The survey with its questions can be found in the annexes of this research.
Table 4. Tabulated data of HRZ responses

Detail | NC | CP | CS | No. questions
Group 1 | 0 | 1 | 1 | 2
Group 2 | 3 | 1 | 0 | 4
Group 3 | 0 | 3 | 2 | 5
Group 4 | 1 | 3 | 2 | 6
Group 5 | 0 | 3 | 3 | 6
Group 6 | 1 | 2 | 0 | 3
Group 7 | 0 | 2 | 3 | 5
Group 8 | 5 | 3 | 0 | 8
Group 9 | 0 | 4 | 1 | 5
Group 10 | 3 | 1 | 0 | 4
Group 11 | 2 | 1 | 0 | 3
Group 12 | 0 | 2 | 0 | 2
Group 13 | 2 | 1 | 0 | 3
Group 14 | 1 | 2 | 1 | 4
Total | 18 | 29 | 13 | 60
% | 30% | 48% | 22% |

Source: Own development.
Table 5. Tabulated data of HIESS responses

Detail | NC | CP | CS | No. questions
Group 1 | 0 | 1 | 1 | 2
Group 2 | 2 | 2 | 0 | 4
Group 3 | 0 | 3 | 2 | 5
Group 4 | 2 | 2 | 2 | 6
Group 5 | 1 | 3 | 2 | 6
Group 6 | 2 | 1 | 0 | 3
Group 7 | 0 | 3 | 2 | 5
Group 8 | 3 | 5 | 0 | 8
Group 9 | 0 | 3 | 2 | 5
Group 10 | 2 | 2 | 0 | 4
Group 11 | 2 | 1 | 0 | 3
Group 12 | 0 | 2 | 0 | 2
Group 13 | 2 | 1 | 0 | 3
Group 14 | 0 | 3 | 1 | 4
Total | 16 | 32 | 12 | 60
% | 27% | 53% | 20% |

Source: Own development.
Table 6. Tabulated data of HVCB responses

Detail | NC | CP | CS | No. questions
Group 1 | 0 | 1 | 1 | 2
Group 2 | 3 | 1 | 0 | 4
Group 3 | 0 | 3 | 2 | 5
Group 4 | 2 | 2 | 2 | 6
Group 5 | 1 | 3 | 2 | 6
Group 6 | 1 | 2 | 0 | 3
Group 7 | 0 | 2 | 3 | 5
Group 8 | 4 | 4 | 0 | 8
Group 9 | 0 | 4 | 1 | 5
Group 10 | 3 | 1 | 0 | 4
Group 11 | 2 | 1 | 0 | 3
Group 12 | 1 | 1 | 0 | 2
Group 13 | 2 | 1 | 0 | 3
Group 14 | 0 | 3 | 1 | 4
Total | 19 | 29 | 12 | 60
% | 32% | 48% | 20% |

Source: Own development.
Figure 1 shows, for the three health units, the general status of information security according to the groups presented, covering a total of 60 questions that establish non-compliance, partial compliance or satisfactory compliance for each of the controls evaluated. The percentages are similar in each of the health units: 27% to 32% non-compliance (NC), 48% to 53% partial compliance (CP), and 20% to 22% satisfactory compliance (CS). This shows that there are security gaps identified in each group that must be taken into account in order to achieve comprehensive security.
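These percentages follow directly from the per-group tallies. As a check, the HRZ column of Table 4 can be aggregated in a few lines; the tally list transcribes Table 4, and the function name is ours:

```python
# Per-group (NC, CP, CS) tallies for hospital HRZ, transcribed from Table 4.
hrz_groups = [(0, 1, 1), (3, 1, 0), (0, 3, 2), (1, 3, 2), (0, 3, 3),
              (1, 2, 0), (0, 2, 3), (5, 3, 0), (0, 4, 1), (3, 1, 0),
              (2, 1, 0), (0, 2, 0), (2, 1, 0), (1, 2, 1)]

def compliance_summary(groups):
    """Aggregate (NC, CP, CS) tallies into totals and rounded percentages."""
    nc, cp, cs = (sum(g[i] for g in groups) for i in range(3))
    total = nc + cp + cs
    return {"NC": round(100 * nc / total),
            "CP": round(100 * cp / total),
            "CS": round(100 * cs / total),
            "questions": total}

print(compliance_summary(hrz_groups))
# → {'NC': 30, 'CP': 48, 'CS': 22, 'questions': 60}
```

The output reproduces the 30%/48%/22% row of Table 4; feeding in the Table 5 and Table 6 tallies reproduces the other two hospitals' percentages in the same way.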
Fig. 1. General compliance with controls. Source: Own development.
The security of patient data is of vital importance, but other parameters that contribute to comprehensive security should also be considered, starting with a review of the groups showing non-compliance. Within the questions evaluated in group 1, despite generally established security policies, there are no regular reviews of compliance with them, nor a feedback process to readjust policies that no longer meet current needs. In group 2, the organization of information security shows shortcomings in establishing responsibilities and functions and in planning the costs of the security measures to be implemented; these should be clearly identified in the policies and scheduled for gradual execution. In group 4, it is detected that, despite an existing inventory of assets, there is no adequate classification, labeling and management of information. The questions in group 6, referring to patient data management, indicate that it is not clear how the pseudonymization of patient data is established, nor is there an adequate process for patients to accept or reject the use of their data. The questions of group 8 point to drawbacks in the segmentation of the networks: no encryption methods are established for information transmitted by digital means, nor are confidentiality and information disclosure agreements established between information systems. The questions in group 10 show shortcomings in the processes for the acquisition, development and maintenance of information systems; establishing secure testing is indispensable before implementing new systems or software in production. Group 11 questions show that agreements with suppliers or third parties do not clearly establish confidentiality, integrity and availability of information, and should be supported by patients' consent for adequate use of their data.
The questions in group 13 clearly indicate that there is no business continuity plan for problems that compromise the security of the information, including its confidentiality, integrity or availability. The groups not discussed here are no less important and should also be reviewed in detail, since some of their questions are fulfilled only partially and others satisfactorily. The purpose is to secure patient information in the health units, so a process for establishing controls according to the HIPAA Law, Regulation
2016/679-EU, ISO 27799, the Organic Law of Public Health, the Organic Law of the National Health System and Ministerial Agreement 5216-MSP is proposed in Fig. 2, considering its application in the three health units evaluated, given the similarity of the results obtained.
Fig. 2. Process for establishing security controls. Source: Own development.
Next, clauses are defined for a proposed standard that complements ISO 27799, the HIPAA Law and Regulation 2016/679 and that, according to the study conducted, should be considered in the health units, as shown in Table 7.

Table 7. Proposed standard complementing ISO 27799, HIPAA and Regulation 2016/679.

Clause | Justification | Support
Scope | Define a scope of security in the healthcare environment, considering the appropriate criteria with respect to patients' clinical information | ISO 27799
Information security policies | Policies should be framed within the scope of confidentiality, integrity and availability | ISO 27799; Law HIPAA; Regulation 2016/679
Information security organization | Define internal responsibilities in relation to information security | ISO 27799
Human resources | Establish adequate recruitment and education processes for staff regarding information security | ISO 27799; Law HIPAA; Regulation 2016/679
Asset management | Properly classify information and establish who is responsible for the assets | ISO 27799; Regulation 2016/679
Access control | Manage and control user access to systems and patient information | ISO 27799; Regulation 2016/679
Data encryption | Establish encryption mechanisms for data that is stored or in transit | ISO 27799
Physical and environmental security | Establish physical and environmental security perimeters within the healthcare environment | ISO 27799; Law HIPAA; Regulation 2016/679
Security operations | Monitoring and control of systems and information vulnerabilities; establish regular auditing processes | ISO 27799; Regulation 2016/679
Provider and patient relations | Establish criteria for pseudonymization of patient information when sharing information between institutions; consider patients' consent for the processing of their information, and its correction or cancellation if necessary | ISO 27799; Regulation 2016/679
Security incident management | Establish procedures on security incidents and how they should be reported to the authorities and to the owners of the information in case it is compromised | ISO 27799; Law HIPAA; Regulation 2016/679
Business continuity | Design, plan and execute a contingency plan in case of interruptions in health services | ISO 27799
Compliance | Align with local laws, norms, standards and regulations to establish sanctions in case of non-compliance with security policies | ISO 27799; Law HIPAA; Regulation 2016/679

Source: Own development.
4 Conclusions

The evaluation mechanisms applied to the security controls show that the security of patient information is not adequately ensured, because the ISO 27799 standard, the HIPAA Law and Regulation 2016/679-EU are not an integral part of the patient information assurance process. The agreements and standards at the local level are very
general, since Ecuador does not have a Personal Data Protection Law such as European Union Regulation 2016/679. The health units evaluated establish general controls for information assurance, but these are not sufficient: specific controls would support adequate classification, labeling and treatment of patient information, which are not clearly in place. Lacking these components, no procedures were evidenced for accepting or rejecting the use of information, nor an adequate procedure for its deletion. The proposal presented is intended to help improve security with a specific focus on patient data, so that it is treated properly within the health unit or when required by external entities. The application of security controls on human resources, within the selection process, the relocation of personnel and the assignment of functions, is therefore of vital importance. The allocation of economic and human resources for information security is a visible limitation in public entities, owing to the limited budget assigned by the state, which leads IT managers to look for partial (patch) solutions that can be breached at any time.
References

1. El Diario: Ciberataque pone en riesgo sistemas informáticos de más de 70 países, incluido Ecuador (2017). http://www.eldiario.ec/noticias-manabi-ecuador/432814-ciberataque-pone-en-riesgo-sistemas-informaticos-de-mas-de-70-paises-incluido-ecuador/
2. El Comercio: Los ciberataques a la salud se incrementaron en un 600% (2016). https://www.elcomercio.com/guaifai/ciberataques-hospitales-informacion-seguridad-datospersonales.html
3. Redacción Médica: El ciberataque mundial afectó el Instituto Nacional de Salud en Colombia (2017). https://www.redaccionmedica.ec/secciones/latinoamerica/el-ciberataque-mundial-afect-el-instituto-nacional-de-salud-en-colombia--90215
4. Bracho, C., Cuzme, F., Pupiales, C., Suárez, L., Peluffo, D., Moreira, C.: Auditoría de seguridad informática siguiendo la metodología OSSTMMv3: caso de estudio. Maskana 8, 307–319 (2017). https://publicaciones.ucuenca.edu.ec/ojs/index.php/maskana/article/view/1471/1144
5. Casas, P.: El Triángulo de la Seguridad | Seguridad en Cómputo (2017). http://blogs.acatlan.unam.mx/lasc/2015/11/19/el-triangulo-de-la-seguridad/
6. Cuzme, F., León, M., Suárez, L., Dominguez, M.: Offensive security: ethical hacking. Adv. Intell. Syst. Comput. 1, 127–140 (2019)
7. Klonoff, D.: Cybersecurity for connected diabetes devices. J. Diabet. Sci. Technol. 9(5), 1143–1147 (2015). https://doi.org/10.1177/1932296815583334
8. Ayala, M.: Sistema de gestión de seguridad de información para mejorar el proceso de gestión del riesgo en un hospital nacional. Universidad César Vallejo (2017). http://repositorio.ucv.edu.pe/handle/UCV/13753
9. Jalali, M., Kaiser, J.: Cybersecurity in hospitals: a systematic, organizational perspective. J. Med. Internet Res. 20(5), e10059 (2018). https://doi.org/10.2196/10059
10. Pillo, D., Enríquez, R.: Gobierno de TI con énfasis en seguridad de la información para hospitales públicos. Maskana 8(0), 42–55 (2017). https://publicaciones.ucuenca.edu.ec/ojs/index.php/maskana/article/view/1451/1125
11. Buitrón, M., Gea, E., García, M.: Tecnologías en información y comunicación sanitaria. Rev. PUCE 102, 273–289 (2016). http://www.revistapuce.edu.ec/index.php/revpuce/article/view/15/17
12. Sánchez, I., Santos, A., Fernandez, M., Piattini, E.: HC+: Desarrollo de un marco metodológico para la mejora de calidad y la seguridad en los procesos de los Sistemas de Información en ambientes sanitarios (2013). http://repositorio.educacionsuperior.gob.ec/handle/28000/2510
13. Barragán, C.: Adaptación de las normas ISO 27001 e HIPPA para la reducción de riesgos en la seguridad en hospitales nivel I del IESS (2017). http://dspace.espoch.edu.ec/bitstream/123456789/7544/1/20T00915.pdf
14. Chuncha, S.: Manual técnico de procesos basado en normativa internacional para la gestión de riesgos informáticos en el departamento de sistemas del Hospital Provincial Docente Ambato. Universidad Técnica de Ambato (2017). http://repositorio.uta.edu.ec/handle/123456789/8100
15. Sánchez, A., Fernández, J., Toval, A., Hernández, I., Sánchez, A., Carrillo, J.: Guía de buenas prácticas de seguridad informática en el tratamiento de datos de salud para el personal sanitario en atención primaria. Atencion Primaria 46(4), 214–222 (2014). https://doi.org/10.1016/j.aprim.2013.10.008
16. Pacheco, J., Hariri, S.: IoT security framework for smart cyber infrastructures. In: IEEE 1st International Workshops on Foundations and Applications of Self* Systems (FAS*W), pp. 242–247. IEEE (2016). https://doi.org/10.1109/FAS-W.2016.58
17. Ross, R., Michael, M., Janet, C.: Systems Security Engineering, pp. 1–78 (2016)
18. Dimitrov, D.: Medical internet of things and big data in healthcare. Healthc. Inform. Res. 22(3), 156–163 (2016). https://doi.org/10.4258/hir.2016.22.3.156
19. Instituto Ecuatoriano de Estadísticas y Censos INEC: Anuario de Camas y Egresos Hospitalarios (2016). http://www.ecuadorencifras.gob.ec/camas-y-egresos-hospitalarios-2016/
20. Yunga, J.: Anuario de Estadística: Recursos y Actividades de Salud (2014). http://www.ecuadorencifras.gob.ec/documentos/web-inec/Estadisticas_Sociales/Recursos_Actividades_de_Salud/Publicaciones/Anuario_Rec_Act_Salud_2014.pdf
Technology Trends
Software Regression Testing in Industrial Settings: Preliminary Findings from a Literature Review

Raúl H. Rosero1(B), Omar S. Gómez1, Eduardo R. Villa1, Raúl A. Aguilar3, and César J. Pardo2

1 GrIISoft Research Group, Escuela Superior Politécnica de Chimborazo, Riobamba 060155, Ecuador
[email protected]
2 Facultad de Ingeniería Electrónica y Telecomunicaciones, Programa de Ingeniería de Sistemas, Grupo de Investigación y Desarrollo de Tecnologías de la Información (GTI), Universidad del Cauca, Popayán, Colombia
3 Facultad de Matemáticas, Universidad Autónoma de Yucatán, Mérida, Yucatán, México
Abstract. In the professional field of software development, unit tests are often used as a mechanism for verifying the code produced. With the design and execution of test cases it is possible to verify new or modified code, as well as existing versions of it. In order to prevent new defects from spreading across existing versions of the code, it is necessary to strategically re-run different test cases. This activity is known as software regression testing and can constitute a high percentage of the total cost of the verification process. Researchers are continually working on making software regression testing efficient. In order to shed light on the application of software regression testing in industry, this paper presents a literature review on different aspects of regression testing applied in industrial settings. As a result, 40 primary studies reporting the use of regression testing in industry were identified. We observe that the main regression testing technique used is the selection of test cases, followed by their prioritization. The use of combinations of metrics based on coverage, requirements, risk, defects and cost-efficiency is also observed. Keywords: Software engineering · Software engineering in industry · Software regression testing · Software verification · Literature review
1 Introduction

Software testing constitutes an important element in the software development process: it contributes to the quality of the software developed or maintained. A particular type of software testing is known as software regression testing, whose aim is to determine a subset of test cases that should be re-executed in order to check for either the injection or the propagation of defects [1]. During software development or production stages, the software product can be modified for different reasons, such as the fixing of existing defects or the addition of new
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 227–237, 2022. https://doi.org/10.1007/978-3-030-96147-3_18
228
R. H. Rosero et al.
functionalities; this requires the application of regression testing, an activity that, according to reports, can cost up to 80% of the budget of a software development project or up to 50% of the maintenance budget [2, 3]. The frequency and the moment at which regression testing is applied depend on the development context. In traditional software development processes, regression testing is run after changes are made, usually at night or before new versions of the product are released [4]; in agile development processes, on the other hand, regression testing is run every time the code is saved and compiled. In any case, the goal of regression testing is to help ensure that new changes have not affected the behavior or functionality of the software product. Different regression testing approaches, as well as tools, are documented in the literature. Examples of these approaches are the prioritization, selection, or reduction of the set of test cases to be executed. However, research on this topic is still ongoing in industrial settings [5, 6], and more research is observed in academic contexts than in industrial ones [5]. The main objective of this work is to present a global vision of the main software regression testing approaches, metrics, and indicators reported in industry. To this end, a literature review [7] was conducted. The rest of the document is structured as follows: Sect. 2 presents the relevant aspects of software regression testing. Section 3 presents the method used for conducting the literature review; Sect. 4 then reports and discusses the results. Finally, Sect. 5 presents some conclusions.
2 Overview of Software Regression Testing Approaches At first glance, conducting regression testing consists in re-testing (or re-executing) all of the available test cases. However, this "retest all" approach is expensive and time-consuming. In order to reduce it, approaches such as minimization, selection, prioritization and optimization have been proposed. In the following, we briefly describe each of these approaches. 2.1 Regression by Minimization According to [8] and [9], minimization (or reduction) consists in the elimination of redundant test cases. It is defined as follows: given a test suite P, a set of requirements {r1, r2, …, rn} that must be satisfied to provide "adequate" testing of a program, and subsets P1, P2, …, Pn of P, one associated with each of the ri, such that any test case pj belonging to Pi can be used to satisfy ri, the problem is to determine a representative set P' of test cases, a subset of P, that satisfies all the requirements ri. 2.2 Regression by Selection In this approach the test suite is also reduced, but the strategy focuses on detecting the modified parts of a software program (code), and normally works based on static white-box analysis. According to [10], given a program P, a modified version P' of P, and a test suite T, the problem consists in determining a subset T' of T with which to test P'.
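The minimization problem in Sect. 2.1 is an instance of minimal set cover, which is NP-complete, so heuristics are typically used in practice. A minimal sketch of the classical greedy heuristic follows; the test and requirement names are illustrative, not taken from any primary study:

```python
def minimize_suite(suite):
    """Greedy test-suite minimization.

    suite: dict mapping each test case id to the set of requirements
    it satisfies (the P_i sets of Sect. 2.1, indexed by test case).
    Returns a representative subset that still satisfies all requirements.
    """
    uncovered = set().union(*suite.values())  # all requirements r_i
    reduced = []
    while uncovered:
        # Pick the test case satisfying the most still-uncovered requirements.
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        reduced.append(best)
        uncovered -= suite[best]
    return reduced

# Example: t1 alone covers r1 and r2, so t2 becomes redundant.
suite = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r3"}}
print(minimize_suite(suite))  # ['t1', 't3']
```

The greedy choice does not guarantee a minimal suite, but it is the usual practical compromise for this formulation.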
Software Regression Testing in Industrial Settings
2.3 Regression by Prioritization The objective of this approach [11] is to determine an ordering of a set of test cases based on a characteristic or property, such as the ability to detect failures; its main objective is to find the ideal permutation of the sequence of test cases. Formally: given a test suite P, the set of permutations PP of P, and a function f: PP → R from PP to the real numbers, find P' ∈ PP such that (∀P'')(P'' ∈ PP)(P'' ≠ P')[f(P') ≥ f(P'')]. 2.4 Regression by Optimization This strategy uses combinations of the aforementioned approaches; multi-objective optimization and artificial intelligence techniques are also used [12]. A thorough review of the mentioned approaches, including a discussion of aspects such as their implementation and domain, can be found in [5].
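Searching all permutations for the optimum in Sect. 2.3 is intractable in general, so greedy surrogates for f are common in practice. The sketch below implements the well-known "additional coverage" heuristic; the coverage data and test names are hypothetical:

```python
def additional_prioritization(coverage):
    """Greedy 'additional' test case prioritization.

    coverage: dict mapping each test case id to the set of program
    entities (e.g., branches) it covers, used here as a surrogate
    for the objective function f of Sect. 2.3.
    """
    remaining = sorted(coverage)   # sorted for a deterministic tie-break
    covered = set()
    order = []
    while remaining:
        nxt = max(remaining, key=lambda t: len(coverage[t] - covered))
        if not coverage[nxt] - covered:
            covered = set()        # classic reset once no test adds coverage
            nxt = max(remaining, key=lambda t: len(coverage[t] - covered))
        order.append(nxt)
        remaining.remove(nxt)
        covered |= coverage[nxt]
    return order

cov = {"tA": {1, 2, 3}, "tB": {1}, "tC": {4, 5}}
print(additional_prioritization(cov))  # ['tA', 'tC', 'tB']
```

Here tA is scheduled first (three new entities), tC second (two new), and tB last, illustrating how the permutation is built incrementally rather than searched exhaustively.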
3 Method In order to examine evidence published in recent years on software regression testing in industry and thereby characterize the information found, the methodology proposed in [7] was used as a guideline. 3.1 Research Questions The purpose of a literature review is to bring together a body of existing knowledge, in our case concerning the application of software regression testing in the software industry. By posing a set of research questions, we aim to identify aspects and factors of the regression testing approaches applied and evaluated in the software industry. The questions posed for this literature review are as follows:
• RQ1 What software regression testing approaches are the most used in industry?
• RQ2 What software regression testing metrics are usually taken into account?
• RQ3 Under what process stage is regression testing usually applied?
• RQ4 In which software domains is regression testing applied?
• RQ5 At what level of verification is software regression testing applied?
3.2 Information Sources To obtain a broad vision of this subject, different sources of information were reviewed, including journals and conference proceedings [13]. Databases such as Scopus, Science Direct, Springer, IEEE Xplore and the ACM Digital Library, as well as Google Scholar, were considered in the present review. These databases cover publications within the software engineering field [14]. Unpublished work in progress was not considered due to quality concerns (only peer-reviewed publications were considered).
3.3 Searching Criteria The search criteria used for this review include different terminology; the search string created for this review is as follows: ("Software regression testing" OR "software regression test") AND (industry OR industrial) AND publication year between 1990 and 2020. We considered works from 1990 to 2020 related to software regression testing in industry. Another criterion was the language; in this case English was the only language selected. The initial search returned 480 documents. 3.4 Study Selection Process After a first filter that excluded documents by their title, 420 documents remained; a second filter discarded documents by their abstract, leaving 320 documents. We then read the contents of these documents, selecting 44 of them. We were not able to access the contents of four papers, so we ended up with 40 documents as primary studies. The primary studies selected cover information on the factors of interest to our review, such as metrics, regression testing approaches, methods used, size of the software applications and, finally, the industrial context in which software regression testing is applied. 3.5 Information Extraction and Synthesis Using the described procedure, 40 primary studies were selected. These studies report empirical approaches that consider regression testing by prioritization, selection and optimization, with different application methods. We examined each of the primary studies with regard to the research questions previously presented. Concerning the software products used in the reported regression testing approaches, we classified product size as small, medium, or large (S, M, L) according to the number of lines of code (LOC): up to 10k LOC small, from 10k to 100k LOC medium, and greater than 100k LOC large. Works that did not report this information were left out of the sample.
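The size classification described above can be expressed directly as code; a trivial sketch (the handling of the exact 10k and 100k boundaries is our reading of the text):

```python
def product_size(loc):
    """Classify a software product by its lines of code (LOC), following
    the thresholds used in this review: up to 10k LOC small (S),
    from 10k to 100k LOC medium (M), and above 100k LOC large (L)."""
    if loc <= 10_000:
        return "S"
    if loc <= 100_000:
        return "M"
    return "L"

print(product_size(8_500), product_size(45_000), product_size(250_000))  # S M L
```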
4 Results In this section, we briefly summarize the findings with regard to each of the research questions previously defined. Regarding RQ1 (What software regression testing approaches are the most used in industry?), we observed that the test case selection technique is the most commonly used (24 papers) [18–21, 25–27, 29, 31, 32, 35–37, 40, 41, 43, 45, 47, 48, 50–54], followed by the prioritization technique (14 papers) [15–17, 22–24, 28, 30, 34, 39, 43, 45, 47, 50]. We observe that the approach that considers artificial intelligence methods such as data mining is the least used, with 2 papers found [32, 39]. It is worth mentioning that the minimization approach is not reported in the primary studies we examined. Regarding RQ2 (What software regression testing metrics are usually taken into account?), we observe that efficiency in detecting software defects is the most
used metric, with 22 papers [20–23, 25, 26, 28, 30–32, 34, 36, 37, 39–43, 47, 49–51, 53]. The second most used metric is test case reduction, with 8 papers [16, 27, 29, 35, 36, 40, 44, 48]. We also observe the execution time reduction metric, with 6 papers [23, 34, 39, 41, 45, 46]. Other metrics such as precision (4 papers) [34, 36, 38, 40], class coverage, and requirements-based metrics are also mentioned. Regarding RQ3 (Under what process stage is regression testing usually applied?), we observe that software regression testing is mostly applied under a corrective approach (software maintenance stage), with 22 papers [17, 18, 20, 22, 24–26, 28–32, 36, 38, 39, 43, 44, 47, 53]. On the other hand, we found 17 papers [15, 16, 19, 21, 23, 27, 34, 35, 37, 40–44, 48, 49, 54] that apply software regression testing under a progressive approach (software development stage). In the case of RQ4 (In which software domains is regression testing applied?), we observe that software regression testing approaches have been applied mostly in the context of information systems (17 papers) [15, 21, 23–25, 28, 32, 35, 36, 41, 43, 45, 46, 48, 51, 52, 54], such as healthcare services, commerce, resource management and finance. Other application domains observed are telecommunications (10 papers) [16, 20, 22, 29, 30, 34, 40, 47, 49, 53], software development (web browser development, 6 papers [24–26, 29, 32, 40]) and gaming (4 papers) [26, 31, 33, 39]. We also observe primary studies discussing regression testing in the context of real-time systems such as robotics and manufacturing (4 papers) [17–19, 42].
Finally, for RQ5 (At what level of verification is software regression testing applied?), we observe that software regression testing is mainly used at the unit testing level (21 papers) [16, 19–23, 26, 33, 34, 37–44, 46, 48, 49, 52], followed by acceptance testing (17 papers) [15, 17, 18, 24, 25, 27–31, 35, 36, 45, 50, 51, 53, 54] and integration testing (2 papers) [32, 47]. A summary of the aforementioned results is available in Table 1 of Annex A.
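The paper counts reported above can be double-checked by expanding the bracketed citation lists programmatically; a small sketch (the parser below is ours, not part of the review's method):

```python
import re

def expand_refs(ref_list):
    """Expand a citation list such as '18-21, 25-27, 29' into the
    individual reference numbers it denotes."""
    refs = []
    for part in ref_list.split(","):
        part = part.strip()
        m = re.fullmatch(r"(\d+)\s*[-\u2013]\s*(\d+)", part)
        if m:
            refs.extend(range(int(m.group(1)), int(m.group(2)) + 1))
        else:
            refs.append(int(part))
    return refs

selection = expand_refs("18-21, 25-27, 29, 31, 32, 35-37, 40, 41, 43, 45, 47, 48, 50-54")
prioritization = expand_refs("15-17, 22-24, 28, 30, 34, 39, 43, 45, 47, 50")
print(len(selection), len(prioritization))  # 24 14
```

Expanding the RQ1 lists this way reproduces the 24 and 14 paper counts stated in the text.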
5 Conclusions Secondary studies like the one reported here are subject to limitations; in this sense, the findings of our review have limitations related to the research design, the databases, and the applied bibliometric approaches. Concerning the research design, the choice of primary studies is based on a search string, which does restrict the scope of the study. Although increasing the number of keywords in the search string may improve the sample's scope, it also tends to add noise (irrelevant documents) and makes the sample increasingly challenging to handle from a practical perspective. Nevertheless, the primary studies analyzed here constitute a representative sample of the subject addressed. We also reviewed the literature beyond the primary studies included in the analysis and integrated important references into the literature review. Because a literature review is a demanding endeavor, some recent relevant papers may have been left out of our review. Our findings show that the regression testing approach most applied in industrial settings is the selection of test cases (RQ1). The metric most commonly used is efficiency in the detection of defects (RQ2). Regression testing is mainly applied in the maintenance stage of the software process (RQ3). With regard to the application domains, it is
observed that medium-sized information systems are the most common (RQ4). Finally, unit testing is the level at which software regression testing is most commonly applied (RQ5). Compared with our previous research [5], where we observed that most software regression testing approaches are reported mainly in academic settings, industry tends to use only a subset of the existing regression testing approaches reported by academia. Acknowledgments. The authors thank the Escuela Superior Politécnica de Chimborazo, in particular the Faculty of Informatics and Electronics, for their support in carrying out this work. We also thank the anonymous reviewers. César Pardo acknowledges the contribution of the Universidad del Cauca, where he works as an associate professor.
Annex A
Table 1. Characterization of the software regression testing approaches used in industry.

Ref. | RT approach    | Verification level | App. domain           | App. size | Process stage | Metric
[15] | Prioritization | Acceptance         | Information system    | Medium    | Development   | RC
[16] | Prioritization | Unit testing       | Telecommunications    | Small     | Development   | TCRS
[17] | Prioritization | Acceptance         | Robotics              | Large     | Maintenance   | CR
[18] | Selection      | Acceptance         | Manufacturing         | Large     | Maintenance   | ID
[19] | Selection      | Unit testing       | Manufacturing         | Large     | Development   | CR
[20] | Selection      | Unit testing       | Networking            | Large     | Maintenance   | DDE
[21] | Selection      | Unit testing       | Information system    | Large     | Development   | DDE
[22] | Prioritization | Unit testing       | Networking            | Small     | Maintenance   | DDE
[23] | Prioritization | Unit testing       | Information system    | Medium    | Development   | DDE, ETR
[24] | Prioritization | Acceptance         | Information system    | Medium    | Maintenance   | TCS
[25] | Selection      | Acceptance         | Information system    | Large     | Maintenance   | DDE
[26] | Selection      | Unit testing       | Videoconference       | Medium    | Maintenance   | DDE
[27] | Selection      | Acceptance         | Software testing tool | Small     | Development   | TCRS
[28] | Prioritization | Acceptance         | Information system    | Medium    | Maintenance   | DDE
[29] | Selection      | Acceptance         | Electronics           | Large     | Maintenance   | TCRS
[30] | Prioritization | Acceptance         | Networking            | Medium    | Maintenance   | DDE
[31] | Selection      | Acceptance         | Gaming                | Medium    | Maintenance   | DDE
[32] | Optimization   | Integration        | Information system    | Large     | Maintenance   | DDE
[33] | Selection      | Unit testing       | Web browser           | Large     | Maintenance   | PC
[34] | Prioritization | Unit testing       | Telecommunications    | Small     | Development   | DDE, P, ETR
[35] | Selection      | Acceptance         | Information system    | Medium    | Development   | TCRS
[36] | Selection      | Acceptance         | Information system    | Small     | Maintenance   | P, DDE, TCRS
[37] | Selection      | Unit testing       | Software tool         | Medium    | Development   | DDE
[38] | Selection      | Unit testing       | Software tool         | Large     | Maintenance   | P, R
[39] | Prioritization | Unit testing       | Entertainment         | Medium    | Maintenance   | ETR, DDE
[40] | Optimization   | Unit testing       | Telecommunications    | Medium    | Development   | DDR, TCRS, P
[41] | Selection      | Unit testing       | Information system    | Large     | Development   | DDR, ETR
[42] | Selection      | Unit testing       | Electronics           | Large     | Development   | DDR
[43] | Prioritization | Unit testing       | Information system    | Medium    | Maintenance   | DDE
[44] | Selection      | Unit testing       | Software tool         | Large     | Maintenance   | TCRS
[45] | Prioritization | Acceptance         | Information system    | Large     | Development   | ETR
[46] | Selection      | Unit testing       | Information system    | Large     | Development   | DDR, ETR
[47] | Prioritization | Integration        | Telecommunications    | Large     | Maintenance   | DDE
[48] | Selection      | Unit testing       | Information system    | Large     | Development   | TCRS
[49] | Selection      | Unit testing       | Telecommunications    | Large     | Development   | DDE
[50] | Prioritization | Acceptance         | Software tools        | Small     | Maintenance   | DDE
[51] | Selection      | Acceptance         | Information system    | Large     | Maintenance   | DDE
[52] | Selection      | Unit testing       | Information system    | Large     | Maintenance   | CC
[53] | Selection      | Acceptance         | Telecommunications    | Medium    | Maintenance   | DDE
[54] | Selection      | Acceptance         | Information system    | Small     | Development   | ReqC

Terminology: TCRS (Test Case Suite Reduction), ETR (Execution Time Reduction), RC (Risk Coverage), CR (Cost Reduction), CC (Class Coverage), PC (Partition Coverage), ReqC (Requirements Coverage), TCS (Test Case Similarity), DDE (Defect Detection Efficiency), ID (Index of Diagnosability), DDR (Defect Detection Reduction), P (Precision), R (Recall).
References

1. Rothermel, G., Harrold, M.J.: A safe, efficient algorithm for regression test selection. In: Proceedings of the Conference on Software Maintenance, 1993, CSM-1993, pp. 358–367 (1993)
2. Leung, H.K.N., White, L.: Insights into regression testing [software testing]. In: Proceedings Conference on Software Maintenance - 1989, pp. 60–69 (1989)
3. Beizer, B.: Software Testing Techniques, 2nd edn. Van Nostrand Reinhold Co., New York (1990)
4. Humble, J., Farley, D.: Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Pearson Education, Boston (2010)
5. Rosero, R.H., Gómez, O.S., Rodríguez, G.: 15 years of software regression testing techniques — a survey. Int. J. Softw. Eng. Knowl. Eng. 26, 675–689 (2016)
6. Rosero, R.H., Gómez, O.S., Rodríguez, G.: Regression testing of database applications under an incremental software development setting. IEEE Access 5, 18419–18428 (2017)
7. Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner, M., Bailey, J., Linkman, S.: Systematic literature reviews in software engineering – a systematic literature review. Inf. Softw. Technol. 51, 7–15 (2009)
8. Rothermel, G., Harrold, M.J.: Empirical studies of a safe regression test selection technique. IEEE Trans. Softw. Eng. 24, 401–419 (1998)
9. Yoo, S., Harman, M.: Regression testing minimization, selection and prioritization: a survey. Softw. Test. Verif. Reliab. 22, 67–120 (2012)
10. Rothermel, G., Harrold, M.J.: Analyzing regression test selection techniques. IEEE Trans. Softw. Eng. 22, 529–551 (1996)
11. Rothermel, G., Untch, R.H., Chu, C., Harrold, M.J.: Prioritizing test cases for regression testing. IEEE Trans. Softw. Eng. 27, 929–948 (2001)
12. Mohanty, R., Ravi, V., Patra, M.R.: The application of intelligent and soft-computing techniques to software engineering problems: a review. Int. J. Inf. Decis. Sci. 2, 233–272 (2010)
13. Santos, A., Gómez, O., Juristo, N.: Analyzing families of experiments in SE: a systematic mapping study. IEEE Trans. Softw. Eng. 46, 566–583 (2020)
14. Dieste, O., Grimán, A., Juristo, N.: Developing search strategies for detecting relevant experiments. Empir. Softw. Eng. 14, 513–539 (2009)
15. Lübke, D.: Selecting and prioritizing regression test suites by production usage risk in time-constrained environments. In: Winkler, D., Biffl, S., Mendez, D., Bergsmann, J. (eds.) SWQD 2020. LNBIP, vol. 371, pp. 31–50. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-35510-4_3
16. Magalhães, C., Andrade, J., Perrusi, L., Mota, A., Barros, F., Maia, E.: HSP: a hybrid selection and prioritisation of regression test cases based on information retrieval and code coverage applied on an industrial case study. J. Syst. Softw. 159, 110430 (2020)
17. Wu, Z., Yang, Y., Li, Z., Zhao, R.: A time window based reinforcement learning reward for test case prioritization in continuous integration. In: Proceedings of the 11th Asia-Pacific Symposium on Internetware, pp. 1–6. Association for Computing Machinery, New York (2019)
18. Pal, D., Vain, J.: A systematic approach on modeling refinement and regression testing of real-time distributed systems. IFAC-PapersOnLine 52, 1091–1096 (2019)
19. Correia, D.: An industrial application of test selection using test suite diagnosability. In: Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 1214–1216. Association for Computing Machinery, New York (2019)
20. Marijan, D., Gotlieb, A., Liaaen, M.: A learning algorithm for optimizing continuous integration development and testing practice. Softw. Pract. Exp. 49, 192–213 (2019)
21. Ali, S., Hafeez, Y., Hussain, S., Yang, S.: Enhanced regression testing technique for agile software development and continuous integration strategies. Softw. Qual. J. 28(2), 397–423 (2019). https://doi.org/10.1007/s11219-019-09463-4
22. Wang, X., Zeng, H., Gao, H., Miao, H., Lin, W.: Location-based test case prioritization for software embedded in mobile devices using the law of gravitation. Mobile Inf. Syst. 2019, e9083956 (2019)
23. Tulasiraman, M., Vivekanandan, N., Kalimuthu, V.: Multi-objective test case prioritization using improved Pareto-optimal clonal selection algorithm. 3D Res. 9(3), 1–13 (2018). https://doi.org/10.1007/s13319-018-0182-y
24. Aman, H., Nakano, T., Ogasawara, H., Kawahara, M.: A topic model and test history-based test case recommendation method for regression testing. In: 2018 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 392–397 (2018)
25. Akin, A., Sentürk, S., Garousi, V.: Transitioning from manual to automated software regression testing: experience from the banking domain. In: 2018 25th Asia-Pacific Software Engineering Conference (APSEC), pp. 591–597 (2018)
26. Marijan, D., Liaaen, M.: Practical selective regression testing with effective redundancy in interleaved tests. In: Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, pp. 153–162. Association for Computing Machinery, New York (2018)
27. Araújo, J., Araújo, J., Magalhães, C., Andrade, J., Mota, A.: Feasibility of using source code changes on the selection of text-based regression test cases. In: Proceedings of the 2nd Brazilian Symposium on Systematic and Automated Software Testing, pp. 1–6. Association for Computing Machinery, New York (2017)
28. Aman, H., Nakano, T., Ogasawara, H., Kawahara, M.: A test case recommendation method based on morphological analysis, clustering and the Mahalanobis-Taguchi method. In: 2017 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 29–35 (2017)
29. Ramler, R., Salomon, C., Buchgeher, G., Lusser, M.: Tool support for change-based regression testing: an industry experience report. In: Winkler, D., Biffl, S., Bergsmann, J. (eds.) SWQD 2017. LNBIP, vol. 269, pp. 133–152. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-49421-0_10
30. EmaShankari, K.H., ThirumalaiSelvi, R., Balasubramanian, N.V.: Industry based regression testing using IIGRTCP algorithm and RFT tool. In: Lecture Notes in Engineering and Computer Science, pp. 473–478. Newswood Limited (2016)
31. de Oliveira Neto, F.G., Torkar, R., Machado, P.D.L.: Full modification coverage through automatic similarity-based test case selection. Inf. Softw. Technol. 80, 124–137 (2016)
32. Anderson, J., Salem, S., Do, H.: Striving for failure: an industrial case study about test failure prediction. In: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, pp. 49–58 (2015)
33. Horváth, F., et al.: Test suite evaluation using code coverage based metrics. In: Proceedings of the 14th Symposium on Programming Languages and Software Tools (SPLST'15) (2015)
34. Marijan, D.: Multi-perspective regression test prioritization for time-constrained environments. In: 2015 IEEE International Conference on Software Quality, Reliability and Security, pp. 157–162 (2015)
35. Arora, A., Chauhan, N.: A regression test selection technique by optimizing user stories in an agile environment. In: 2014 IEEE International Advance Computing Conference (IACC), pp. 1454–1458 (2014)
36. Fourneret, E., Cantenot, J., Bouquet, F., Legeard, B., Botella, J.: SeTGaM: generalized technique for regression testing based on UML/OCL models. In: 2014 Eighth International Conference on Software Security and Reliability (SERE), pp. 147–156 (2014)
37. Gligoric, M., Negara, S., Legunsen, O., Marinov, D.: An empirical evaluation and comparison of manual and automated test selection. In: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, pp. 361–372. Association for Computing Machinery, New York (2014)
38. Anderson, J., Salem, S., Do, H.: Improving the effectiveness of test suite through mining historical data. In: Proceedings of the 11th Working Conference on Mining Software Repositories, pp. 142–151. ACM, New York (2014)
39. Marijan, D., Gotlieb, A., Sen, S.: Test case prioritization for continuous regression testing: an industrial case study. In: 2013 IEEE International Conference on Software Maintenance, pp. 540–543 (2013)
40. Xu, Z., Liu, Y., Gao, K.: A novel fuzzy classification to enhance software regression testing. In: 2013 IEEE Symposium on Computational Intelligence and Data Mining (CIDM), pp. 53–58 (2013)
41. Rogstad, E., Briand, L., Torkar, R.: Test case selection for black-box regression testing of database applications. Inf. Softw. Technol. 55, 1781–1795 (2013)
42. Buchgeher, G., Ernstbrunner, C., Ramler, R., Lusser, M.: Towards tool-support for test case selection in manual regression testing. In: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation Workshops, pp. 74–79 (2013)
43. Prakash, N., Rangaswamy, T.R.: Modular based multiple test case prioritization. In: 2012 IEEE International Conference on Computational Intelligence and Computing Research, pp. 1–7 (2012)
44. Chen, W., et al.: Change impact analysis for large-scale enterprise systems. In: ICEIS (2012)
45. Srikanth, H., Cohen, M.B.: Regression testing in software as a service: an industrial case study. In: 2011 27th IEEE International Conference on Software Maintenance (ICSM), pp. 372–381 (2011)
46. Rogstad, E., Briand, L., Dalberg, R., Rynning, M., Arisholm, E.: Industrial experiences with automated regression testing of a legacy database application. In: 2011 27th IEEE International Conference on Software Maintenance (ICSM), pp. 362–371 (2011)
47. Engström, E., Runeson, P., Ljung, A.: Improving regression testing transparency and efficiency with history-based prioritization – an industrial case study. In: 2011 Fourth IEEE International Conference on Software Testing, Verification and Validation, pp. 367–376 (2011)
48. Juergens, E., Hummel, B., Deissenboeck, F., Feilkas, M., Schlögel, C., Wübbeke, A.: Regression test selection of manual system tests in practice. In: 2011 15th European Conference on Software Maintenance and Reengineering, pp. 309–312 (2011)
49. Wikstrand, G., Feldt, R., Gorantla, J.K., Zhe, W., White, C.: Dynamic regression test selection based on a file cache – an industrial evaluation. In: 2009 International Conference on Software Testing Verification and Validation, pp. 299–302 (2009)
50. Ramasamy, K., Arul Mary, S.: Incorporating varying requirement priorities and costs in test case prioritization for new and regression testing. In: 2008 International Conference on Computing, Communication and Networking, pp. 1–9 (2008)
51. Sneed, H.M.: Selective regression testing of a host to DotNet migration. In: 2006 22nd IEEE International Conference on Software Maintenance, pp. 104–112 (2006)
52. Skoglund, M., Runeson, P.: A case study of the class firewall regression test selection technique on a large scale distributed software system. In: 2005 International Symposium on Empirical Software Engineering, p. 10 (2005)
53. White, L., Robinson, B.: Industrial real-time regression testing and analysis using firewalls. In: 20th IEEE International Conference on Software Maintenance, 2004, Proceedings, pp. 18–27 (2004)
54. Briand, L.C., Labiche, Y., Soccar, G.: Automating impact analysis and regression test selection based on UML designs. In: International Conference on Software Maintenance, 2002, Proceedings, pp. 252–261 (2002)
Simulation of the Physicochemical Properties of Anatase TiO2 with Oxygen Vacancies and Doping of Different Elements for Photocatalysis Processes Heraclio Heredia-Ureta1,2,3(B) , Ana E. Torres4 , Edilso F. Reguera1 , and Carlos I. Aguirre-Vélez1 1 Instituto Politécnico Nacional, Centro de Investigación en Ciencia Aplicada y Tecnología
Avanzada, Unidad Legaria, Mexico City, Mexico 2 Tecnológico Nacional de México Campus Culiacán, Culiacán, Mexico 3 Universidad Autónoma de Sinaloa, Culiacán, Mexico 4 Instituto de Ciencias Aplicadas y Tecnología, Universidad Nacional Autónoma de México,
CU, Coyoacán, 04510 Mexico City, Mexico
Abstract. The photocatalytic activity of titanium dioxide in the visible range is a major target in materials science. In order to promote the absorption of visible solar radiation by this semiconductor, its structure needs to be modified. The TiO2 anatase phase (TiO2-a) has been studied theoretically under several modifications: oxygen vacancies (TiO2-OV); oxygen vacancies mediated by atomic hydrogen (TiO2-H); nitrogen doping (TiO2-N); sulfur doping (TiO2-S); and fluorine doping (TiO2-F). The resulting band structures, densities of states, band gaps, conductivities and optical properties are analyzed for these different conditions. Keywords: TiO2 with oxygen vacancies · Doped TiO2 · Band gap engineering
1 Introduction Solar energy is a clean and abundant renewable energy source on Earth. The energy from the sun that hits the Earth in one hour is much more than what humanity needs for a year [1]. For this reason, great research efforts have been made to take advantage of this energy [2–4]; one example is photocatalysis. A photocatalytic process is useful for breaking down harmful organic and inorganic pollutants present in air and in aqueous systems [5–7], and for hydrogen production by water splitting [8–11]. The basic process occurs when an electron (e−) in the valence band (VB) is excited by a photon with an energy greater than the band gap of the material, carrying it to the conduction band (CB) and leaving a positively charged hole (h+) in the VB; thus an electron-hole (e−-h+) pair is created. These photoexcited charge carriers are mainly responsible for the reduction and oxidation processes, respectively [12]. The photo-excited charge carriers migrate to © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 238–249, 2022. https://doi.org/10.1007/978-3-030-96147-3_19
the surface of the photocatalyst and initiate secondary reactions with the components adsorbed on the surface of the solid semiconductor. However, these photoexcited charge carriers can recombine (emitting a photon or a phonon), eliminating the e−-h+ pair initially generated. If e−-h+ recombination happens quickly, there may not be enough time for the carriers to reach the surface of the semiconductor and carry out the reduction and oxidation processes, and the efficiency of the photocatalyst therefore drops drastically [13]. Since Fujishima and Honda [14] successfully applied titanium dioxide (TiO2) to the photoelectrochemical splitting of water, TiO2 has been widely used as a remediation technology for the removal of pollutants from air, drinking water and wastewater [15], and for the production of hydrogen [16–18]. TiO2 exists in three different polymorphic forms: anatase (TiO2-a), rutile (TiO2-r) and brookite (TiO2-b), with band gaps of 3.2, 3.0 and ∼3.2 eV, respectively, activated in the ultraviolet region (wavelength N > F > S dopant. Also, the results for doped TiO2-a (101) with OV (without OV) show isolated
sub-bands near the middle of the band gap, suggesting that the recombination problem will be greater following the order S > F > N > C (C > N > F > S) of dopants. Conductivity and other important optical properties are omitted. Although DOS, BS, band gap, conductivity and optical properties are all very important measurements for predicting the properties of non-metallic doped TiO2, we did not find any reported work that included all of them. This work presents a theoretical study of TiO2-a under different modifications: 1) oxygen vacancies (TiO2-OV); 2) oxygen vacancies mediated by two hydrogen atoms (TiO2-H); 3) nitrogen doping (TiO2-N); 4) sulfur doping (TiO2-S); and 5) fluorine doping (TiO2-F). The elements N, S and F were chosen for doping TiO2-a since they are the closest to oxygen in the periodic table and promising results have been obtained with them [29–35, 39–42, 44]. In this study, the BS, DOS, band gap, conductivity and optical properties were obtained for the different modifications made to TiO2-a.
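The ultraviolet activation thresholds quoted above follow directly from the photon-energy condition described in the introduction (photon energy greater than the band gap): the absorption edge is λ = hc/E_gap. A quick sketch of this conversion:

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(band_gap_ev):
    """Longest wavelength (in nm) whose photons can bridge the band gap,
    from lambda = h*c / E_gap."""
    return HC_EV_NM / band_gap_ev

# Anatase (3.2 eV) absorbs only below ~387 nm (ultraviolet);
# rutile (3.0 eV) reaches ~413 nm, the very edge of the visible range.
print(round(absorption_edge_nm(3.2)), round(absorption_edge_nm(3.0)))  # 387 413
```

This is why reducing the effective band gap (via vacancies or doping) is the route to visible-light photocatalytic activity pursued in this work.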
2 Models and Computation Details

DFT calculations were carried out using the Vienna ab initio simulation package (VASP) [45, 46]. The interaction between the ionic cores and the valence electrons was described by the projector augmented wave (PAW) approach [47, 48], with a cutoff energy of 500 eV, a break condition for the ionic relaxation loop of −2 × 10⁻² and a global break condition for the electronic SC-loop of 1 × 10⁻⁸. The Perdew-Burke-Ernzerhof (PBE) functional [49] was used to treat the nonlocal exchange and correlation energies. Due to the self-interaction error (SIE), local and semi-local functionals systematically underestimate band gaps [50–52]. For this reason, a fraction of exact (Fock) exchange can be included through a hybrid functional [53, 54], such as the Heyd-Scuseria-Ernzerhof (HSE) functional [55, 56]. The HSE hybrid functional is generally very accurate, but it also carries a high computational cost. An alternative is adding Hubbard's U correction [57] to the PBE functional (PBE + U), which can mitigate some of its deficiencies [58] while maintaining computational efficiency. The exchange interaction (J) may be incorporated into the on-site Coulomb repulsion term (U) to define the effective Hubbard parameter Ueff = U − J [57]. The PBE + U approach introduces an on-site correction in order to describe systems with localized d and f electrons, which can predict band gaps comparable with experimental results. We apply this correction to the electrons in the 3d orbitals of titanium. Because the precision of PBE + U is highly dependent on the chosen value of Ueff, we systematically tested a wide range of Ueff values and chose Ueff = 4.7 eV, which reproduces approximately the experimental value of 3.2 eV. This Ueff value is close to the 4.2 eV used in other works [58–60], so we consider it a good approximation.
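As an illustration of this Dudarev-type PBE + U setup (in VASP, LDAUTYPE = 2 uses only Ueff = U − J), the following sketch writes a plausible INCAR fragment for the Ti 3d correction. The exact input file used by the authors is not given in the text, so everything beyond Ueff = 4.7 eV and ENCUT = 500 eV is an assumption.

```python
# Sketch of DFT+U (Dudarev) settings for anatase TiO2: Ueff = U - J is applied
# to the Ti 3d states (LDAUL = 2), while O receives no correction (-1).
# Illustrative only; not the authors' exact INCAR.

def incar_dftu_fragment(u: float, j: float) -> str:
    ueff = u - j  # effective Hubbard parameter; the only quantity Dudarev's scheme uses
    lines = [
        "ENCUT = 500",          # plane-wave cutoff (eV), as stated in the text
        "LDAU = .TRUE.",        # switch on DFT+U
        "LDAUTYPE = 2",         # Dudarev scheme: only Ueff = U - J matters
        "LDAUL = 2 -1",         # d states on Ti, no correction on O
        f"LDAUU = {ueff:.1f} 0.0",
        "LDAUJ = 0.0 0.0",
    ]
    return "\n".join(lines)

fragment = incar_dftu_fragment(u=4.7, j=0.0)
```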
The k-point sampling uses a Monkhorst–Pack [61] mesh of 6 × 6 × 4 (16 × 16 × 6) for the electronic-property calculations (geometry optimization). The optimization of the cell geometry for TiO2-a was carried out following the procedure explained in [62]. The 2 × 2 × 1 supercells of pure and modified TiO2-a are shown in Fig. 1, where the red, blue, brown, dark blue, yellow and pink spheres represent oxygen, titanium, hydrogen, nitrogen, sulfur and fluorine atoms, respectively. The structures were visualized with the crystallography software VESTA [63], and 20% of extra supercell size (in x, y and z) is illustrated to better show the periodicity.
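The Monkhorst–Pack construction places q evenly spaced points along each reciprocal axis at u_r = (2r − q − 1)/(2q), r = 1..q. A minimal generator for the fractional coordinates of such a mesh (a sketch of the scheme from [61], not VASP's internal implementation):

```python
from itertools import product

def monkhorst_pack(q1: int, q2: int, q3: int):
    """Fractional k-point coordinates of a Monkhorst-Pack q1 x q2 x q3 mesh,
    using u_r = (2r - q - 1) / (2q), r = 1..q (Monkhorst & Pack, 1976)."""
    def axis(q):
        return [(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
    return list(product(axis(q1), axis(q2), axis(q3)))

# The 6 x 6 x 4 mesh used here for the electronic-property calculations:
kpts = monkhorst_pack(6, 6, 4)
```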
242
H. Heredia-Ureta et al.
3 Results

Figure 2 shows the conductivity, BS and DOS obtained for the supercells of Fig. 1. The energies plotted in Fig. 2 are given as the difference between the energy of the system and the Fermi energy [27]. The conductivity and DOS shown in the boxes to the left of each panel were computed with BoltzTraP2 [64]. The DOS from these calculations differs slightly from the one computed in VASP because the BoltzTraP2 interpolation is less accurate. We carried out this conductivity calculation for a qualitative comparison of the conductivity behavior between the pure and modified material.
Fig. 1. Figures of the pure and modified TiO2 -a supercells: a) TiO2 -a, b) TiO2 -OV, c) TiO2 -H, d) TiO2 -N, e) TiO2 -S and f) TiO2 -F.
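The five modified supercells in Fig. 1 differ from the pristine one only by removing or substituting atoms. A minimal sketch of these two operations on a list of atomic sites (the coordinates below are placeholders for illustration, not the relaxed positions used in this work):

```python
# Sketch of how modified supercells can be derived from the pristine one:
# remove an O atom for a vacancy (TiO2-OV), or substitute it for doping
# (TiO2-N/S/F). Atom records are (element, fractional position); the positions
# are placeholders, not the relaxed coordinates of the paper.

def make_vacancy(atoms, site):
    """Return a copy of the structure with the atom at index `site` removed."""
    return [a for i, a in enumerate(atoms) if i != site]

def substitute(atoms, site, new_element):
    """Return a copy with the atom at index `site` replaced by `new_element`."""
    return [(new_element, a[1]) if i == site else a for i, a in enumerate(atoms)]

pristine = [("Ti", (0.0, 0.0, 0.0)), ("O", (0.0, 0.0, 0.21)), ("O", (0.5, 0.5, 0.04))]
tio2_ov = make_vacancy(pristine, site=1)                 # oxygen vacancy
tio2_n = substitute(pristine, site=1, new_element="N")   # N-doped
```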
Figure 2 shows that the modifications of TiO2-a create sub-bands that reduce the band gap; however, isolated sub-bands close to the middle of the band gap act as effective recombination centers, reducing the efficiency of the photocatalyst [27]. If the sub-bands are close to the VB or CB, they act as less effective recombination centers that work as trap centers, so the charge carriers are trapped at these energy levels for a certain time [27]. The conductivity (sigma [1/(ohm·m)]) increases in modified TiO2-a when the sub-bands are created close to the VB or CB. In Figs. 2d and 2e, sub-bands are created separated from each other and farther from the VB, producing a smaller DOS and a lower conductivity, so the transport of the photo-generated charge carriers is poorer. In Fig. 2f, the sub-bands are joined to each other and closer to the bands, which produces a large DOS and a high conductivity; in this way the transport of the photo-generated charge carriers increases.
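The qualitative link between discrete sub-band states and the plotted DOS can be reproduced with a simple Gaussian-smearing sketch (toy eigenvalues; VASP and BoltzTraP2 use more sophisticated schemes, so this is only an illustration of the technique):

```python
import numpy as np

def gaussian_dos(eigenvalues, energies, sigma=0.05):
    """Density of states from a list of band eigenvalues (eV) by Gaussian
    smearing; each state contributes one normalized Gaussian of width sigma."""
    e = np.asarray(energies, dtype=float)[:, None]
    eig = np.asarray(eigenvalues, dtype=float)[None, :]
    g = np.exp(-((e - eig) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return g.sum(axis=1)

# Two toy "sub-band" states inside the gap produce two peaks in the DOS:
grid = np.linspace(-1.0, 1.0, 2001)
dos = gaussian_dos([-0.3, 0.4], grid, sigma=0.05)
```

Isolated states deep in the gap show up as sharp, separated peaks (small DOS between them), while states merged near a band edge produce one large feature, mirroring the contrast between Figs. 2d/2e and 2f.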
On the other hand, the band gap is reduced more when the sub-bands created are more separated from each other. The band gap values obtained were 2.121, 2.329, 2.604, 2.867, 3.1 and 3.2 eV for TiO2-H, TiO2-S, TiO2-N, TiO2-OV, TiO2-F and TiO2-a, respectively.
Fig. 2. Conductivity, BS and DOS obtained for the supercells of Fig. 1.
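Under the standard conversion λ(nm) ≈ 1239.84/Eg(eV) (textbook physics, not a result of the paper), the band gaps listed above translate directly into absorption-edge wavelengths:

```python
# Absorption-edge wavelengths implied by the computed band gaps, using
# lambda (nm) = 1239.84 / Eg (eV), i.e. h*c expressed in eV*nm.

HC_EV_NM = 1239.84  # h*c in eV*nm

gaps_ev = {"TiO2-H": 2.121, "TiO2-S": 2.329, "TiO2-N": 2.604,
           "TiO2-OV": 2.867, "TiO2-F": 3.1, "TiO2-a": 3.2}

edges_nm = {name: HC_EV_NM / eg for name, eg in gaps_ev.items()}
# Pure anatase absorbs only below ~387 nm (UV), while e.g. TiO2-S
# extends the edge into the visible region near ~532 nm.
```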
The optical properties of pure and modified TiO2-a can be described by the complex dielectric function ε(ω) = ε1(ω) + iε2(ω), where ω, i, ε1(ω) and ε2(ω) represent the angular frequency (2πf), the imaginary unit ((−1)^1/2), and the real and imaginary parts of the complex dielectric function, respectively. Optical properties such as the absorption coefficient α(ω), the energy loss spectrum L(ω), the extinction
coefficient κ(ω), reflectivity R(ω) and the refractive index n(ω) are given by [65]:

α(ω) = √2 ω [ (ε1²(ω) + ε2²(ω))^1/2 − ε1(ω) ]^1/2   (1)

L(ω) = Im[−1/ε(ω)] = ε2(ω) / (ε1²(ω) + ε2²(ω))   (2)

κ(ω) = α(ω) / (2ω)   (3)

R(ω) = | (√(ε1(ω) + iε2(ω)) − 1) / (√(ε1(ω) + iε2(ω)) + 1) |²   (4)

n(ω) = (1/√2) [ (ε1²(ω) + ε2²(ω))^1/2 + ε1(ω) ]^1/2   (5)
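Equations (1)–(5) can be evaluated directly from ε1(ω) and ε2(ω). A short sketch with illustrative (not computed) dielectric values, which also checks the internal consistency n + iκ = √ε:

```python
import math

def optical_constants(eps1: float, eps2: float):
    """Refractive index n and extinction coefficient kappa from the real and
    imaginary parts of the dielectric function:
    n = sqrt((|eps| + eps1)/2), kappa = sqrt((|eps| - eps1)/2)."""
    mod = math.hypot(eps1, eps2)          # (eps1^2 + eps2^2)^(1/2)
    n = math.sqrt((mod + eps1) / 2.0)
    kappa = math.sqrt((mod - eps1) / 2.0)
    return n, kappa

def reflectivity(eps1: float, eps2: float) -> float:
    """Normal-incidence reflectivity, Eq. (4), via sqrt(eps) = n + i*kappa."""
    n, k = optical_constants(eps1, eps2)
    return ((n - 1) ** 2 + k ** 2) / ((n + 1) ** 2 + k ** 2)

n, k = optical_constants(5.0, 2.0)  # illustrative values, not data from this work
```

As a sanity check, n² − κ² must equal ε1 and 2nκ must equal ε2 for any input, which follows from (n + iκ)² = ε.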
Figure 3 shows the graphs of the absorption coefficient, energy loss spectrum, extinction coefficient and reflectivity for pure and modified TiO2-a.
Fig. 3. Optical analysis in the zz direction: a) Absorption coefficient, b) Energy loss spectrum, c) Extinction coefficient and d) Reflectivity.
The absorption coefficient, energy loss spectrum and extinction coefficient of all modified TiO2-a show better behavior than the pure structure, except for TiO2-F in the interval from 475 to 800 nm. Figures 3a and 3c show that pure TiO2-a has better absorption and extinction coefficients than its modifications over almost the entire 320 to 425 nm interval. They also show that the TiO2-S (TiO2-H) modification has the highest absorption and extinction coefficients, standing out from the others in the interval of 450 to 540 nm (540 to 800 nm). Figure 3b shows that pure TiO2-a has similar (higher) energy loss spectrum values than its modifications TiO2-H and TiO2-S (TiO2-F and TiO2-OV) in almost the entire interval from 300 to 400 nm, with the exception of the interval from 325 to 370 nm, where pure TiO2-a has the lowest energy loss spectrum. Figure 3b also shows that TiO2-S, TiO2-H and TiO2-OV have the best energy loss spectra, exceeding the others in the intervals of 400 to 510 nm, 510 to 730 nm and 730 to 800 nm, respectively. Figure 3d shows that the reflectivity of all modified TiO2-a is similar to or lower than that of pure TiO2-a and, over the entire wavelength interval, remains below 0.35 in all cases, which is good for materials used in photocatalysis. From the analysis of the graphs in Fig. 3, pure TiO2-a is more efficient at absorbing solar energy in the region close to UV (300 to 400 nm), while all its modifications except TiO2-F are more efficient at absorbing solar energy in the near-visible and infrared region (425 to 800 nm). However, pure TiO2-a will be more (less) efficient than its modifications TiO2-S, TiO2-H, TiO2-OV and TiO2-N at generating e−-h+ pairs in the UV (visible and infrared) interval.
The TiO2-S and TiO2-H modifications are the most efficient for generating e−-h+ pairs, since they show a higher absorption coefficient, energy loss spectrum and extinction coefficient than the other modifications in the 400 to 510 nm and 525 to 730 nm intervals, respectively. In addition, the TiO2-S and TiO2-H modifications show an absorption coefficient, energy loss spectrum and extinction coefficient similar to or slightly lower than pure TiO2-a in the region close to UV, which suggests that their overall performance in generating e−-h+ pairs will not be affected, since the UV region accounts for only about 5% of solar energy.
4 Conclusions

The different theoretical measurements used allowed us to carry out a thorough analysis of the performance of the different modifications made to TiO2-a. The BS, DOS, optical-property and conductivity calculations provide important information about the band gap, the efficiency of charge-carrier transport, the location of recombination centers and the efficiency of converting absorbed solar radiation into e−-h+ pairs in the analyzed wavelength interval. From the results, sulfur doping (TiO2-S) has the best performance: it has a good band gap of 2.329 eV, optical properties superior to pure TiO2-a in the range of 425 to 800 nm, optical properties superior to the other modifications in the interval of 400 to 510 nm, and very good optical properties also in the intervals of 300 to 400 nm and 510 to 800 nm. In addition, according to the
analysis of BS and DOS, the recombination of carriers will not increase significantly with respect to pure TiO2-a, since the sub-bands created are close to the VB. For this reason, this modification has the best balance between band-gap improvement and an almost negligible increase in recombination, so it is believed that this TiO2-a modification would yield a good increase in photocatalytic performance. On the other hand, the TiO2-H configuration has superior optical properties in the interval from 540 to 800 nm and a lower band gap (2.121 eV), but it has the disadvantage that it will considerably increase recombination with respect to pure TiO2-a by creating an isolated sub-band near the middle of the band gap.
5 Future Work

As future work we will analyze the performance of different modifications of the TiO2 (001) slab with vacuum (modeling the surface effects) for photocatalysis processes. The possible modifications of TiO2-a include combinations of:

1) Oxygen vacancies.
2) Non-metallic dopants (H, S, N, F).
3) Surface metallic dopants (Pt, Ag, Au).
4) Different locations for the oxygen vacancies and dopants.
References 1. Lewis, N.S.: Toward cost-effective solar energy use. Science 315(5813), 798–801 (2007) 2. Norton, B.: Harnessing Solar Heat. Lecture Notes in Energy, vol. 18. Springer, Dordrecht (2014) 3. Ahmadi, M.H., et al.: Solar power technology for electricity generation: a critical review. Energy Sci. Eng. 6(5), 340–361 (2018) 4. Razi, F., Dincer, I.: A critical evaluation of potential routes of solar hydrogen production for sustainable development. J. Clean. Prod. 264, 121582 (2020) 5. Pichat, P.: Photocatalysis and Water Purification: From Fundamentals to Recent Applications, Wiley-VCH Verlag GmbH, Weinheim (2013) 6. Anpo, M., Takeuchi, M.: The design and development of highly reactive titanium oxide photocatalysts operating under visible light irradiation. J. Catal. 216(1–2), 505–516 (2003) 7. Konstantinou, I., Albanis, T.A.: Photocatalytic transformation of pesticides in aqueous titanium dioxide suspensions using artificial and solar light: intermediates and degradation pathways. Appl. Catal. B 42(4), 319–335 (2003) 8. Chen, X., Li, C., Grätzel, M., Kostecki, R., Mao, S.S.: Nanomaterials for renewable energy production and storage. Chem. Soc. Rev. 41(23), 7909–7937 (2012) 9. Walter, M.G., et al.: Solar water splitting cells. Chem. Rev. 110(11), 6446–6473 (2010) 10. Hisatomi, T., Kubota, J., Domen, K.: Recent advances in semiconductors for photocatalytic and photoelectrochemical water splitting. Chem. Soc. Rev. 43(22), 7520–7535 (2014) 11. Kudo, A., Miseki, Y.: Heterogeneous photocatalyst materials for water splitting. Chem. Soc. Rev. 38(1), 253–278 (2009)
12. Banerjee, S., Pillai, S.C., Falaras, P., O’Shea, K.E., Byrne, J.A., Dionysiou, D.D.: New insights into the mechanism of visible light photocatalysis. J. Phys. Chem. Lett. 5(15), 2543–2554 (2014) 13. Suib, S.L.: New and Future Developments in Catalysis: Batteries, Hydrogen Storage and Fuel Cells. Elsevier, Amsterdam (2013) 14. Fujishima, A., Honda, K.: Electrochemical photolysis of water at a semiconductor electrode. Nature 238(5358), 37–38 (1972) 15. Barakat, M., Kumar, R.: Photocatalytic activity enhancement of titanium dioxide nanoparticles. In: Photocatalytic Activity Enhancement of Titanium Dioxide Nanoparticles. SMS, pp. 1–29. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-24271-2_1 16. Ni, M., Leung, M.K.H., Leung, D.Y.C., Sumathy, K.: A review and recent developments in photocatalytic water-splitting using TiO2 for hydrogen production. Renew. Sustain. Energy Rev. 11(3), 401–425 (2007) 17. Naldoni, A., et al.: Photocatalysis with reduced TiO2: from black TiO2 to co-catalyst-free hydrogen production. ACS Catal. 9, 345–364 (2019) 18. Kumaravel, V., Mathew, S., Bartlett, J., Pillai, S.C.: Photocatalytic hydrogen production using metal doped TiO2: a review of recent advances. Appl. Catal. B 244, 1021–1064 (2019) 19. Pelaez, M., et al.: A review on the visible light active titanium dioxide photocatalysts for environmental applications. Appl. Catal. B 125, 331–349 (2012) 20. Fox, M.A., Dulay, M.T.: Heterogeneous photocatalysis. Chem. Rev. 93(1), 341–357 (1993) 21. Long, R., English, N.J.: Tailoring the electronic structure of TiO2 by cation codoping from hybrid density functional theory calculations. Phys. Rev. B 83(15), 155209 (2011) 22. Xu, X., Randorn, C., Efstathiou, P., Irvine, J.T.S.: A red metallic oxide photocatalyst. Nat. Mater. 11(7), 595–598 (2012) 23. Chen, X., Liu, L., Yu, P.Y., Mao, S.S.: Increasing solar absorption for photocatalysis with black hydrogenated titanium dioxide nanocrystals. Science 331(6018), 746–750 (2011) 24. 
Nakamura, I., Negishi, N., Kutsuna, S., Ihara, T., Sugihara, S., Takeuchi, K.: Role of oxygen vacancy in the plasma-treated TiO2 photocatalyst with visible light activity for NO removal. J. Mol. Catal. A Chem. 161(1–2), 205–212 (2000) 25. Liu, G., et al.: Enhanced photoactivity of oxygen-deficient anatase TiO2 sheets with dominant 001 facets. J. Phys. Chem. C 113(52), 21784–21788 (2009) 26. Yang, Y., et al.: An unusual strong visible-light absorption band in red anatase TiO2 photocatalyst induced by atomic hydrogen-occupied oxygen vacancies. Adv. Mater. 30(6), 1704479 (2018) 27. Muller, R.S., Kamins, T.I., Chan, M.: Device Electronics for Integrated Circuits, 3rd edn. Wiley, New York (2003) 28. Shayegan, Z., Lee, C.S., Haghighat, F.: TiO2 photocatalyst for removal of volatile organic compounds in gas phase – a review. Chem. Eng. J. 334, 2408–2439 (2018) 29. Daghrir, R., Drogui, P., Robert, D.: Modified TiO2 for environmental photocatalytic applications: a review. Ind. Eng. Chem. Res. 52(10), 3581–3599 (2013) 30. Zeng, L., et al.: A modular calcination method to prepare modified N-doped TiO2 nanoparticle with high photocatalytic activity. Appl. Catal. B 183, 308–316 (2016) 31. Bianchi, C.L., et al.: N-doped TiO2 from TiCl3 for photodegradation of air pollutants. Catal. Today 144(1–2), 31–36 (2009) 32. Cong, Y., Zhang, J., Chen, F., Anpo, M.: Synthesis and characterization of nitrogen-doped TiO2 nanophotocatalyst with high visible light activity. J. Phys. Chem. C 111(19), 6976–6982 (2007) 33. Jo, W.K., Kang, H.J.: Aluminum sheet-based S-doped TiO2 for photocatalytic decomposition of toxic organic vapors. Chin. J. Catal. 35(7), 1189–1195 (2014)
34. Jo, W.K., Kim, J.T.: Decomposition of gas-phase aromatic hydrocarbons by applying an annular-type reactor coated with sulfur-doped photocatalyst under visible-light irradiation. J. Chem. Technol. Biotechnol. 85(4), 485–492 (2009) 35. Ohno, T., Akiyoshi, M., Umebayashi, T., Asai, K., Mitsui, T., Matsumura, M.: Preparation of S-doped TiO2 photocatalysts and their photocatalytic activities under visible light. Appl. Catal. A 265(1), 115–121 (2004) 36. Khan, R., Kim, S.W., Kim, T.J., Nam, C.M.: Comparative study of the photocatalytic performance of boron–iron Co-doped and boron-doped TiO2 nanoparticles. Mater. Chem. Phys. 112(1), 167–172 (2008) 37. Kim, M.S., Liu, G., Nam, W.K., Kim, B.W.: Preparation of porous carbon-doped TiO2 film by sol–gel method and its application for the removal of gaseous toluene in the optical fiber reactor. J. Ind. Eng. Chem. 17(2), 223–228 (2011) 38. Dong, F., Guo, S., Wang, H., Li, X., Wu, Z.: Enhancement of the visible light photocatalytic activity of C-Doped TiO2 nanomaterials prepared by a green synthetic approach. J. Phys. Chem. C 115(27), 13285–13292 (2011) 39. Li, D., Ohashi, N., Hishita, S., Kolodiazhnyi, T., Haneda, H.: Origin of visible-light-driven photocatalysis: a comparative study on N/F-doped and N-F-codoped TiO2 powders by means of experimental characterizations and theoretical calculations. J. Solid State Chem. 178(11), 3293–3302 (2005) 40. Chen, Q.L., Tang, C.Q.: First-principles calculations on electronic structures of N/F-doped and N-F-codoped TiO2 anatase (101) surfaces. Acta Phys. Chim. Sin. 25(05), 915–920 (2009) 41. Di Valentin, C., Pacchioni, G.: Trends in non-metal doping of anatase TiO2: B, C N and F. Catal. Today 206, 12–18 (2013) 42. Muhich, C.L., Westcott, J.Y., Fuerst, T., Weimer, A.W., Musgrave, C.B.: Increasing the photocatalytic activity of anatase TiO2 through B, C, and N doping. J. Phys. Chem. C 118(47), 27415–27427 (2014) 43. 
Wang, M., Feng, M., Zuo, X.: First principles study of the electronic structure and magnetism of oxygen-deficient anatase TiO2 (001) surface. Appl. Surf. Sci. 292, 475–479 (2014) 44. González-Torres, J.C., Poulain, E., Domínguez-Soria, V., García-Cruz, R., Olvera-Neria, O.: C-, N-, S-, and F-doped anatase TiO2 (101) with oxygen vacancies: photocatalysts active in the visible region. Int. J. Photoenergy 2018, 1–12 (2018) 45. Kresse, G., Hafner, J.: Ab-initio molecular-dynamics simulation of the liquid-metal–amorphous-semiconductor transition in germanium. Phys. Rev. B 49(20), 14251–14269 (1994) 46. Kresse, G., Furthmüller, J.: Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54(16), 11169–11186 (1996) 47. Blöchl, P.E.: Projector augmented-wave method. Phys. Rev. B 50(24), 17953–17979 (1994) 48. Kresse, G., Joubert, D.: From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59(3), 1758–1775 (1999) 49. Hammer, B., Hansen, L.B., Nørskov, J.K.: Improved adsorption energetics within density-functional theory using revised Perdew-Burke-Ernzerhof functionals. Phys. Rev. B 59(11), 7413–7421 (1999) 50. Hüfner, S.: Electronic structure of NiO and related 3d-transition-metal compounds. Adv. Phys. 43(2), 183–356 (1994) 51. Jiao, Z.Y., Ma, S.H., Guo, Y.L.: Simulation of optical function for phosphide crystals following the DFT band structure calculations. Comput. Theoret. Chem. 970(1–3), 79–84 (2011) 52. Wang, Y., et al.: Electronic structure of III-V zinc-blende semiconductors from first principles. Phys. Rev. B 87(23), 235203 (2013) 53. Perdew, J.P.: Climbing the ladder of density functional approximations. MRS Bull. 38(9), 743–750 (2013). https://doi.org/10.1557/mrs.2013.178
54. Becke, A.D.: A new mixing of Hartree-Fock and local density-functional theories. J. Chem. Phys. 98, 1372–1377 (1993) 55. Heyd, J., Scuseria, G.E., Ernzerhof, M.: Hybrid functionals based on a screened Coulomb potential. J. Chem. Phys. 118(18), 8207–8215 (2003) 56. Heyd, J., Scuseria, G.E., Ernzerhof, M.: Erratum: “Hybrid functionals based on a screened Coulomb potential” [J. Chem. Phys. 118, 8207 (2003)]. J. Chem. Phys. 124(21), 219906 (2006) 57. Dudarev, S.L., Botton, G.A., Savrasov, S.Y., Humphreys, C.J., Sutton, A.P.: Electron-energy-loss spectra and the structural stability of nickel oxide: an LSDA+U study. Phys. Rev. B 57(3), 1505–1509 (1998) 58. Morgan, B.J., Scanlon, D.O., Watson, G.W.: Small polarons in Nb- and Ta-doped rutile and anatase TiO2. J. Mater. Chem. 19(29), 5175 (2009) 59. Morgan, B.J., Watson, G.W.: A DFT+U description of oxygen vacancies at the TiO2 rutile (110) surface. Surf. Sci. 601(21), 5034–5041 (2007) 60. Ma, X., Wu, Y., Lu, Y., Xu, J., Wang, Y., Zhu, Y.: Effect of compensated codoping on the photoelectrochemical properties of anatase TiO2 photocatalyst. J. Phys. Chem. C 115(34), 16963–16969 (2011) 61. Monkhorst, H.J., Pack, J.D.: Special points for Brillouin-zone integrations. Phys. Rev. B 13(12), 5188–5192 (1976) 62. Torres, A.E., Rodríguez-Pineda, J., Zanella, R.: Relevance of dispersion and the electronic spin in the DFT+U approach for the description of pristine and defective TiO2 anatase. ACS Omega 6(36), 23170–23180 (2021) 63. Momma, K., Izumi, F.: VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data. J. Appl. Crystallogr. 44(6), 1272–1276 (2011) 64. Madsen, G.K.H., Carrete, J., Verstraete, M.J.: BoltzTraP2, a program for interpolating band structures and calculating semi-classical transport coefficients. Comput. Phys. Commun. 231, 140–145 (2018) 65. Saha, S., Sinha, T.P., Mookerjee, A.: Electronic structure, chemical bonding, and optical properties of paraelectric BaTiO3. Phys. Rev.
B 62(13), 8828–8834 (2000)
A Blockchain-Based Approach for Issuing Health Insurance Contracts and Claims Julio C. Mendoza-Tello1(B) , Tatiana Mendoza-Tello2 , and Jenny Villacís-Ramón3 1 Faculty of Engineering and Applied Sciences, Central University of Ecuador, Quito, Ecuador
[email protected]
2 Insurance Consultant, E3-12 Cristóbal Colón Avenue, Quito, Ecuador 3 IT Consultant, NE35-49 Julio Moreno Street, Quito, Ecuador
Abstract. Health insurance provides economic coverage and medical assistance to its members. For this, the collaboration of several entities is necessary, such as: the insured party, the insurer and health service providers (medical center and pharmacy). A set of rules generates a flow of information from two activities: policy issuance and claims management. However, these have the following drawbacks: human supervision errors and lack of access for data visibility. Given these challenges, this document describes how the characteristics of blockchain-based smart contracts can support these tasks without requiring trusted third parties. For this, a model and layered overview are described using smart contracts. Conclusions and future work are explained at the end of the document. Keywords: Health · Insurance · Blockchain · Smart contract
1 Introduction The increase in medical expenses and expectations of obtaining better service lead to obtaining health insurance. Health insurance acts as an umbrella of protection and provides benefits in three ways. First, it provides access to preventive medicine. It is likely that people with health insurance will access early detection tests for diseases (such as cancer). Timely treatment can reduce the negative effects of a disease. In this way, the progression of the disease can be minimized, and risky surgical procedures can be avoided. Second, it provides access to disease treatment. Insurance companies offer health care plans according to the preference of the insured party. After paying a deductible, the insured person can receive specialized care from a health service provider such as a hospital, a medical center, or a pharmacy. Third, it provides access to economic coverage, discounts, and reimbursements for medical expenses. Insurance companies conduct negotiations with health service providers for discounts on care for their insured clients. This allows an insurer to cover a part of the health care expenses, including the provision of medicines, surgical procedures, and disease treatment [1]. In this way, an insured person has access to reimbursements and economic discounts on the bill. Otherwise, an uninsured person could pay too much for these services; higher health care costs impede access to specialized treatment, which leads to the deterioration of health. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 250–260, 2022. https://doi.org/10.1007/978-3-030-96147-3_20
A Blockchain-Based Approach for Issuing Health Insurance Contracts and Claims
251
A person obtains health insurance through a membership application form. A person who purchases insurance is called an insured party. An insurer provides risk coverage upon payment of a fee (called an insurance premium). This fee is paid by the insured party to the insurer. The insured party is involved in three scenarios: issuance of the insurance contract (policy), payment of the insurance premium, and management of the claim. An insured party receives health coverage based on collaborative efforts between the insurer and health service providers. However, this process is not without errors. Lack of visibility and access to data causes long delays for both issuance and claims requests. Most management systems require the supervision of humans susceptible to errors for the activities of collection and processing of information. In addition, the insurance process is affected by falsification and abuse of services by policyholders and providers who wish to receive a higher disbursement for an insured event [2]. In practice, issuance and claims processes are governed by rules. From this point of view, blockchain-based smart contracts can help automate these activities without the need for a central supervisory authority. In addition, these innovations store immutable records and provide transparent processes. The sections of this document are structured as follows. Section 2 describes the methodology and theoretical background, provides an overview of blockchain and smart contracts working together and explains the concept of health insurance. Section 3 explains how blockchain characteristics can support the issuance of health insurance contracts and claims. With these requirements, a layer-based model is designed based on three scenarios: issuance, payment, and claims. Finally, conclusions, contributions and future research are described in Sect. 4.
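The economic relationships just described (premium, deductible, reimbursement) can be sketched as a toy calculation; all figures and the coinsurance rate below are hypothetical, not taken from any real policy or from the paper.

```python
# Toy reimbursement rule (illustrative; real health policies are more complex):
# the insurer pays a share of the eligible expenses above the deductible.

def reimbursement(expenses: float, deductible: float, coverage_rate: float) -> float:
    """Amount the insurer pays; the insured party bears the deductible and
    the remaining (1 - coverage_rate) share (coinsurance)."""
    eligible = max(0.0, expenses - deductible)
    return coverage_rate * eligible

# An insured event of 1200 with a 200 deductible and 80% coverage:
paid_by_insurer = reimbursement(expenses=1200.0, deductible=200.0, coverage_rate=0.8)
paid_by_insured = 1200.0 - paid_by_insurer
```

Encoding such a rule as executable logic is exactly what makes the issuance and claim scenarios amenable to smart contracts, as developed in the next sections.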
2 Methodology and Background

This research involves two fields of study, namely (Fig. 1):

– Blockchain as an underlying technology to design smart contracts, which provide automation, immutability, transparency, and decentralization to the model.
– Healthcare insurance as a field of application. Three scenarios are identified: issuance of the insurance contract, payment of the insurance premium, and the claim.

A brief literature review is provided, and it is explained how blockchain implements smart contracts. With these considerations, a layered model is designed to explain how blockchain characteristics can support the health insurance issuance and claim process. Finally, usage scenarios based on smart contracts are described.

2.1 How Do Smart Contracts and Blockchain Work Together?

Blockchain is a P2P distributed ledger with the versatility to register any digital asset without requiring a trusted third party. This technology uses complex cryptographic algorithms that guarantee the integrity and anonymity of the operations of the scheme. That is, it relies on the veracity of cryptographic proof rather than a centralized control authority. For this reason, blockchain is a disruptive technology for society and information technology [3].
252
J. C. Mendoza-Tello et al.
Fig. 1. Blockchain characteristics in several healthcare insurance tasks.
Blockchain began with bitcoin; in fact, blockchain is the underlying technology of all cryptocurrencies. Each bitcoin is referenced through a token, which can be transferred from user A to user B. This is possible because cryptocurrencies are linked to user-controlled addresses that demonstrate ownership. In this case, user A gives control of those tokens to user B. There is no supervisory authority that certifies transactional validity; instead, asymmetric cryptography methods validate and prove the ownership of a cryptocurrency. A blockchain data model defines sequentially linked blocks. Each block has two sections: a header and transactions. The header identifies, validates, and links each block with the previous one. Recorded events are immutable due to two mechanisms. First, the previous block header hash: this procedure links a new block to the previous one. Second, Merkle tree generation: this procedure groups transactions in pairs using hash functions. In this scheme, all transactions are certified through a timestamp provided by cryptographic proof that rejects any tampering within the chain. In this process, two types of participants (or nodes) stand out, namely: lightweight and mining. First, a lightweight node performs transactions using a simplified payment method [4]. Second, a mining node has special functionalities that validate, authenticate, and record the transactions carried out by lightweight nodes [5]. Furthermore, a mining node groups a set of transactions within blocks. Each block is validated using a cryptographic consensus mechanism that must meet a difficulty target. Currently, the blockchain structure is used to implement smart contracts. A smart contract is a set of instructions that are executed according to a programming condition [6]. A smart contract is implemented through the publication of a transaction and can be invoked by an external entity (e.g., a user application) or by another smart contract.
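The Merkle tree generation described above (pairwise hashing of transaction hashes up to a single root) can be sketched in a few lines. This follows the Bitcoin convention of double SHA-256 and duplicating the last hash when a level has an odd count; it is a sketch of the scheme, not production code:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used by Bitcoin for block and transaction hashes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list) -> bytes:
    """Hash transaction hashes pairwise, level by level, up to a single root.
    Any change in any transaction changes the root, which is what makes the
    recorded events tamper-evident."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd count: duplicate the last hash
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256d(f"tx{i}".encode()) for i in range(3)]
root = merkle_root(txs)
```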
A smart contract data model defines three main fields: (a) program code, which defines procedures, functions and rules in a Turing-complete language; a contract implemented on the blockchain is immutable, that is, its code cannot be modified, and the code is validated by mining nodes using a decentralized consensus mechanism; (b) an account balance, that is, the contract can send and receive cryptocurrencies; and (c) storage, through the recording of transactions on the blockchain. Figure 2 shows how smart contracts and the blockchain work together.
Fig. 2. Smart contracts and blockchain working together.
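The three smart-contract fields named above (immutable code, account balance, storage) can be modeled with a minimal toy object. This is a plain Python sketch of the concept, not Solidity or a real EVM, and all names are hypothetical:

```python
# Toy model of a smart contract's three fields: immutable code, an account
# balance, and persisted storage. Illustrative only; no consensus or chain here.

class SmartContract:
    def __init__(self, code):
        self._code = code          # immutable once deployed (by convention here)
        self.balance = 0           # can send and receive cryptocurrency
        self.storage = {}          # state recorded on chain

    def invoke(self, function: str, *args):
        """Run one of the contract's functions; on a real chain, mining nodes
        would validate the resulting state transition by consensus."""
        return self._code[function](self, *args)

def deposit(contract, amount):
    contract.balance += amount
    contract.storage["last_deposit"] = amount
    return contract.balance

policy = SmartContract(code={"deposit": deposit})
policy.invoke("deposit", 100)
```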
2.2 Health Insurance

An insurance contract is an agreement that allows the transfer of risk from the insured party to the insurer. This helps minimize loss and guarantees the continuity of the business or insured entity. A health insurance contract allows an insurer to cover a portion of the medical expenses of an insured party. For this, the insured party must pay a fee called an insurance premium to the insurer. In general, this contract covers expenses for: healthcare, admission and treatment, nursing, medications, ambulance, paramedical care, hospitalization, and surgical procedures. Basically, a health insurance company reimburses its policyholders subject to a deductible value. For this, contract clauses specify the benefits and costs of using health providers that are internal and external to the network. Several insurance plans have been developed and were extensively studied by McGuire [7], namely: conventional, health maintenance organization, point of service, preferred provider organizations, and high-deductible health plans.

2.3 Related Works

Information Review. The literature was referenced through Scopus. The information search included bibliographic databases such as Elsevier, IEEE Xplore, Springer, Emerald Insight, and ACM Digital. The search was conducted following Kitchenham's recommendations [8]; thus, four tasks were performed. First, the search strategy. The keywords used were: insurance, blockchain, and smart contract. Initially, 63 documents were found. However, most of these studies did not agree with the central focus of the study; therefore, 40 documents were discarded and a sample of 23 results was obtained in this initial phase of evaluation. Second, identification of inclusion and exclusion criteria. Only three types of documents were selected, namely: journal articles, conference papers, and book chapters. The language chosen was English.
Only documents published since 2009 were selected (the year bitcoin and blockchain were created [4]). Third, quality evaluation. Only full studies were chosen; that is, documents that describe analysis, methodology, results, and conclusions. In this final evaluation, 16 documents were rejected and only 7 documents were selected (Table 1).
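The two-stage screening described above (63 documents found, 23 retained after relevance screening, 7 selected after quality evaluation) can be sketched as a simple filtering pipeline. The predicates and document fields below are illustrative placeholders, not the authors' actual criteria:

```python
# Illustrative sketch of the two-stage SLR screening described above.
# The filter predicates are hypothetical stand-ins for the real criteria.

def screen(documents, stage1, stage2):
    """Apply relevance screening, then full-text quality evaluation."""
    retained = [d for d in documents if stage1(d)]   # initial relevance filter
    selected = [d for d in retained if stage2(d)]    # quality evaluation
    return retained, selected

# Toy corpus mirroring the counts reported in the review: 63 found,
# 23 retained after relevance screening, 7 selected after quality checks.
docs = [{"id": i,
         "relevant": i < 23,        # 23 of 63 match the study focus
         "full_study": i < 7}       # 7 of those are full studies
        for i in range(63)]

retained, selected = screen(docs,
                            stage1=lambda d: d["relevant"],
                            stage2=lambda d: d["full_study"])
print(len(docs), len(retained), len(selected))  # 63 23 7
```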
254
J. C. Mendoza-Tello et al.
Fourth, synthesis. MS Excel and Mendeley were used to summarize and manage the bibliographic sources. This review culminated on April 10, 2021.

Table 1. Prior research of blockchain focused on insurance

Research | Finding
SWOT analysis assesses the maturity of blockchain [9] | Blockchain interaction for the average user is complex due to the limited scalability of this technology
Insurance contracts against natural hazards [10] | Methodology for implementing a blockchain-based digital insurance contract against natural hazards
Design of travel insurance based on blockchain [11] | Blockchain-based smart contracts to implement functions of the travel insurance system
Design of a car insurance system based on blockchain [12] | Sensors and smart contracts for providing on-demand coverage
Blockchain and liability insurance [13] | Blockchain-based architecture that complies with the General Data Protection Regulation
Authentication scheme based on homomorphic encryption [14] | Blockchain transaction authentication method based on homomorphic encryption
Access control scheme based on blockchain [15] | Storage scheme based on InterPlanetary File System (IPFS) and blockchain
Review Synthesis. Blockchain research for insurance is in the early stages of conceptualization and development. A SWOT analysis assesses the maturity of blockchain for implementation in the insurance market [9]. A methodology for implementing legal insurance contracts was proposed by Pagano [10]. In addition, schemes for travel insurance [11], car insurance [12], and liability insurance [13] were designed using homomorphic encryption [14], IoT sensors, and decentralized storage [15]. However, little research focuses on the health insurance process, namely: underwriting, sales, fraud detection, reinsurance, and claim management. For this reason, this document proposes a blockchain-based model for issuing health insurance contracts and claims.
3 Results: Blockchain-Based Model for Issuing Health Insurance Contracts and Claims

In this section, a model is proposed. For this, blockchain characteristics are identified. In addition, a layered overview and usage scenarios are defined.

3.1 Blockchain Characteristics Applied to the Model

An insurance contract transfers risk from an insured party to the insurer. Thus, an economic umbrella is provided that reduces the possibility of ruin due to a risk or catastrophic
event. Health insurance acquisitions and claims are activities that can be supported by blockchain characteristics. Five blockchain characteristics are identified, namely:

– Automation. Blockchain has the versatility to transport tokens in its structure. A token is an alphanumeric string that represents a digital asset, such as a digital currency or a smart contract. Digital currency is used for both insurance premium payments and economic reimbursements for medical care. This information flow is automated through smart contracts. Code is implemented to support three scenarios: issuance of an insurance contract, payment of an insurance premium, and a healthcare insurance claim. At the same time, the clinical history is compiled and updated in real time. Visibility and data access are improved by using private keys; that is, any user can grant or deny access permissions to doctors and health service operators. This allows the insurer to develop procedures based on transnational policies.
– Digital authentication. Authentication is the gateway to the scheme and enables functionality according to predefined rules. This provides security and ensures non-repudiation of behavior.
– Transparency and immutability. Any user can verify the correctness of operations and data flowing from one point to another. In addition, blockchain has the property of durability; that is, the recorded data is permanent over time. Once an insurance contract is recorded in the blockchain, it cannot be manipulated or deleted. Information about the medication suggested by a doctor and provided by a pharmacy cannot be adulterated or modified. Similarly, no entity can ignore the obligations and benefits received throughout the insurance process. In addition, blockchain applies an exact timestamp to operations carried out by various entities without requiring a central control authority. Fraud is mitigated due to the impossibility of tampering with documentation.
This implicit versatility also provides trust and ensures non-repudiation of behavior.
– Decentralization. The scheme relies on cryptographic algorithms rather than a trusted third party. The P2P implementation is fault tolerant and reduces the likelihood of a system failure. Any node can recover the system, and its decentralized storage guarantees the provision of reliable data to the user. That is, the implicit decentralization of the protocol eliminates the possibility of delay in billing, provision of health services, and claims systems. An insurer can create receipts anywhere in the scheme and ensure an auditable record of insurance claims. A decentralized health record speeds up the issuance of a policy because it simply proceeds with the verification of data, instead of entering the same information again. All operations are validated and accepted by all nodes. This nullifies the possibility of fraud in the system. This decentralization implies that all nodes are witnesses to the behavior of the network. All users can see the same reputation score of an insurer, based on its level of compliance and the satisfaction of the insured party. Similarly, an insurer can see the same reputation score of an insured party, based on the fulfillment of the policy payment. An insurer can continuously collect data from multiple sources to assess risks and define custom prices.
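The immutability and timestamping properties listed above can be illustrated with a minimal hash-chained ledger. This is a didactic Python sketch only (SHA-256 over a JSON payload, no consensus or mining); it is not the system the paper describes:

```python
import hashlib
import json

def make_block(prev_hash, data, timestamp):
    """Create a block whose hash commits to its predecessor, payload, and timestamp."""
    payload = json.dumps({"prev": prev_hash, "data": data, "ts": timestamp},
                         sort_keys=True)
    return {"prev": prev_hash, "data": data, "ts": timestamp,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash; tampering with any recorded contract breaks the chain."""
    for i, block in enumerate(chain):
        payload = json.dumps({"prev": block["prev"], "data": block["data"],
                              "ts": block["ts"]}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"contract": "policy-001", "premium": 120}, 1)
chain = [genesis, make_block(genesis["hash"], {"claim": "claim-001", "amount": 80}, 2)]
assert verify(chain)

chain[0]["data"]["premium"] = 10   # attempt to adulterate the recorded contract
assert not verify(chain)           # tampering is detected
```

Because each block's hash covers the previous block's hash, altering any recorded contract invalidates every later block, which is what makes the recorded data tamper-evident.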
3.2 Layered Overview

In Fig. 3, a bottom-up schema describes the model functionality based on four layers. First, the P2P mining layer. It is composed of mining nodes that maintain the basic operation of the scheme. For this, decentralized consensus algorithms are used to validate the integrity and correctness of each transaction within its respective block. In turn, it sequentially integrates new blocks into the chain. Second, the blockchain layer. It specifies the data model schema. Each transaction is grouped within a block, and each new block is linked to the previous one. In this way, each transaction is referenced through a token, and each token is a reference to digital currencies and smart contracts. The references are addresses under the user's control, which demonstrates the privilege of use. Third, the smart contract layer. It specifies the business rules of the schema. For this, code is implemented to satisfy user requirements. This process designs smart contracts as follows:

– Authentication smart contract. It defines access rules based on user and privilege. Transparently, a smart contract is created to store user information. A user is created with a password, and personal data is encrypted using a 256-bit hash. In addition, a system administrator grants access privileges, which enable the use of system options, including other smart contracts.
– Insurance smart contract. It defines the contractual terms of the insurance policy. Basically, the contract has an identifier and a 256-bit hash that contains encrypted information about: the policyholder, the holder's account number for crediting the reimbursement of medical expenses, contract and premium value, dependents, health declaration, and contract status. Encrypted information can be transformed into plain text and stored in an external repository.
– Claim smart contract. It validates health insurance claims generated through medical care and prescriptions.
The data set to validate includes the identity of the insured party, medical prescriptions, medicines, and prices. In short, this smart contract checks whether or not to make a refund. Collaborative tasks between the insurance and claim smart contracts determine the total amount of reimbursement to the client.
– EHR smart contract. It stores the electronic health record (EHR) of the insured party.

Fourth, the user layer. It describes the users who interact with the scheme. In this context, four user entities are defined, namely:

– Insured party. The beneficiary of the insurance contract, who may be represented by a contractor. An insurance holder can enroll several dependents in the same policy.
– Insurer. A company that provides insurance to its clients or policyholders. For the event of registering new insurance contracts, two users are defined: a technical underwriter and a medical underwriter. In addition, for an insurance claims event, two users are defined: a technical auditor and a medical auditor.
– Medical center. An entity that provides health care services to the insured party.
– Pharmacy. An entity that provides drugs to the insured party.
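The interplay between the insurance and claim smart contracts described above (issuance, premium payment, and dual sign-off before reimbursement) can be sketched in Python. The class, field names, and the flat 80% coverage rate are illustrative assumptions only; a real deployment would use a smart-contract language such as Solidity:

```python
class InsuranceContract:
    """Didactic sketch of the insurance/claim smart-contract rules described above.
    The 0.8 coverage rate and field names are illustrative assumptions."""

    def __init__(self, contract_id, holder, premium, coverage_rate=0.8):
        self.contract_id = contract_id
        self.holder = holder
        self.premium = premium
        self.coverage_rate = coverage_rate
        self.status = "issued"            # contract recorded on the chain

    def pay_premium(self, amount):
        """Premium payment activates the policy (a token transfer in the real scheme)."""
        if amount >= self.premium:
            self.status = "active"
        return self.status

    def claim(self, insured_party, amount, medical_ok, technical_ok):
        """Reimburse only if the policy is active, the claimant matches the holder,
        and both the medical and technical auditors have signed off."""
        if self.status != "active" or insured_party != self.holder:
            return 0.0
        if not (medical_ok and technical_ok):
            return 0.0
        return round(amount * self.coverage_rate, 2)

policy = InsuranceContract("POL-001", "alice", premium=120.0)
policy.pay_premium(120.0)
print(policy.claim("alice", 500.0, medical_ok=True, technical_ok=True))   # 400.0
print(policy.claim("alice", 500.0, medical_ok=True, technical_ok=False))  # 0.0
```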
Fig. 3. A layered overview for issuing health insurance contracts and claims.
3.3 Usage Scenarios

After successful user authentication, three possible scenarios are identified. First, issuance of the insurance contract (or insurance policy). An insured party fills out a healthcare insurance affiliation application form. This form is reviewed from two standpoints: medical and technical. In this case, two users of the insurance company intervene: a medical underwriter (who reviews and signs the health report of the insured party) and a technical underwriter (who reviews and signs the technical report of the application form). If both reports are favorable, the scheme generates an insurance contract signed by the technical underwriter (on behalf of the insurer). If the contractual terms are acceptable, the insured party signs the insurance policy. At this point, the insurance smart contract is registered in the blockchain. Figure 4 shows the steps for issuing a healthcare insurance policy using a blockchain-based smart contract. Second, payment of the insurance premium. A client wallet transfers tokens (digital money) to an insurer wallet through an insurance smart contract. The contract status is updated within the blockchain. Figure 5 shows the payment process of the insurance fee using digital tokens.
Fig. 4. Issuance of an insurance contract using blockchain.
Fig. 5. Payment of the insurance fee using blockchain.
Third, healthcare insurance claim. A medical center provides health services to the insured party, such as hospital care, outpatient services, organ transplantation, or medical emergencies. The medical center records the details of the service provided to a client using a claim form. The medical prescription and surgical procedures (if applicable) are recorded and, at the same time, the EHR is updated. A claim_id is generated that links the insured event to the claim made. With this claim_id, the pharmacy service can include the medicines purchased in the same insured event. All information is stored in a claim smart contract, which consults the policy status and notifies the insurer. The reimbursement process for medical assistance is then initiated automatically. In this process, two insurance company users are involved: a medical auditor (who supervises and signs the health report of the claim form) and a technical auditor (who supervises and signs the technical report of the claim form). According to the reports of both auditors, the claim smart contract reimburses medical expenses to the insured party. Figure 6 shows the claim process for an insured event using blockchain.
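The linkage of medical-center and pharmacy records through a shared claim identifier, and the aggregation of all entries of one insured event into a reimbursable total, can be sketched as follows. The record fields and the flat coverage rate are hypothetical, not the paper's specification:

```python
from collections import defaultdict

# Ledger of claim entries keyed by claim_id, so the pharmacy can attach
# medicine purchases to the same insured event opened by the medical center.
ledger = defaultdict(list)

def record(claim_id, source, item, price):
    """Append one service or medicine entry to the insured event."""
    ledger[claim_id].append({"source": source, "item": item, "price": price})

def total_reimbursable(claim_id, coverage_rate=0.8):
    """Sum all entries linked to the insured event (illustrative flat rate)."""
    return round(sum(e["price"] for e in ledger[claim_id]) * coverage_rate, 2)

record("CLM-7", "medical_center", "outpatient consultation", 50.0)
record("CLM-7", "medical_center", "minor surgical procedure", 300.0)
record("CLM-7", "pharmacy", "prescribed antibiotics", 25.0)
print(total_reimbursable("CLM-7"))  # 300.0, i.e. (50 + 300 + 25) * 0.8
```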
Fig. 6. Healthcare insurance claim using blockchain.
4 Conclusions

4.1 Contributions

This research highlights the characteristics of blockchain that support issuing health insurance contracts and claims, namely: automation, authentication, transparency, immutability, and decentralization. Based on these characteristics, a layered model is defined to abstract the functionality of the schema components, namely: P2P mining, blockchain, smart contract, and user. Consequently, three usage scenarios are described, namely: issuance of an insurance contract (policy), payment of an insurance premium, and management of claims. This preliminary research provides an overview and explains three considerations: (a) automated insurance business rules in smart contracts, (b) guaranteed durability of transactions due to the property of immutability, and (c) trust provided by a decentralized consensus mechanism.

4.2 Limitations and Future Research

The benefits of blockchain technology for the insurance field are evident. However, not all health insurance rules can be modeled, especially those related to fraud detection. In some cases, the variety of medical treatments involves complex validations that require great experience and knowledge. Future work is required to apply machine learning algorithms in smart contracts. In this way, the large amount of information generated from claims management can be used to detect fraudulent behavior. Finally, blockchain is an
emerging technology that faces scalability and standardization challenges. It is necessary to define regulations that provide guidelines that maximize the use of technology.
References

1. Zamonsky, L.: Healthcare Insurance and You - The Savvy Consumer's Guide. Apress (2013)
2. Kose, I., Gokturk, M., Kilic, K.: An interactive machine-learning-based electronic fraud and abuse detection system in healthcare insurance. Appl. Soft Comput. J. 36, 283–299 (2015). https://doi.org/10.1016/j.asoc.2015.07.018
3. Mendoza-Tello, J.C., Mora, H., Pujol-López, F.A., Lytras, M.D.: Disruptive innovation of cryptocurrencies in consumer acceptance and trust. IseB 17(2–4), 195–222 (2019). https://doi.org/10.1007/s10257-019-00415-w
4. Nakamoto, S.: Bitcoin: A Peer-to-Peer Electronic Cash System (2009)
5. Antonopoulos, A.M.: Mastering Bitcoin, Sebastopol, CA (2016)
6. Szabo, N.: Formalizing and securing relationships on public networks. First Monday 2 (1997). https://doi.org/10.5210/fm.v2i9.548
7. McGuire, T.G.: Demand for health insurance. In: Handbook of Health Economics, pp. 317–396. Elsevier B.V. (2011)
8. Kitchenham, B.: Guidelines for performing systematic literature reviews in software engineering, Durham, UK (2007)
9. Gatteschi, V., Lamberti, F., Demartini, C., Pranteda, C., Santamaría, V.: Blockchain and smart contracts for insurance: is the technology mature enough? Future Internet 10, 8–13 (2018). https://doi.org/10.3390/fi10020020
10. Pagano, A.J., Romagnoli, F., Vannucci, E.: Implementation of blockchain technology in insurance contracts against natural hazards: a methodological multi-disciplinary approach. Environ. Clim. Technol. 23, 211–229 (2019). https://doi.org/10.2478/rtuect-2019-0091
11. Liu, J.L., Wu, X.Y., Yu, W.J., Wang, Z.C., Zhao, H.L., Cang, N.M.: Research and design of travel insurance system based on blockchain. In: ICIIBMS 2019 - 4th International Conference on Intelligent Informatics and Biomedical Sciences, pp. 121–124 (2019). https://doi.org/10.1109/ICIIBMS46890.2019.8991444
12. Lamberti, F., Gatteschi, V., Demartini, C., Pelissier, M., Gómez, A., Santamaría, V.: Blockchains can work for car insurance. IEEE Consum. Electron. Mag. 7, 72–81 (2018). https://doi.org/10.1109/MCE.2018.2816247
13. Campanile, L., Iacono, M., Levis, A.H., Marulli, F., Mastroianni, M.: Privacy regulations, smart roads, blockchain, and liability insurance: putting technologies to work. IEEE Secur. Priv. 19, 34–43 (2021). https://doi.org/10.1109/MSEC.2020.3012059
14. Xiao, L., Deng, H., Tan, M., Xiao, W.: Insurance block: a blockchain credit transaction authentication scheme based on homomorphic encryption. In: Zheng, Z., Dai, H.-N., Tang, M., Chen, X. (eds.) BlockSys 2019. CCIS, vol. 1156, pp. 747–751. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-2777-7_61
15. Sun, J., Yao, X., Wang, S., Wu, Y.: Non-repudiation storage and access control scheme of insurance data based on blockchain in IPFS. IEEE Access 8, 155145–155155 (2020). https://doi.org/10.1109/ACCESS.2020.3018816
Integration of Administrative Records for Social Protection Policies – A Systematic Literature Review of Cases in the Latin American and Caribbean Region

Yasmina Vizuete-Salazar1, Miguel Flores4, Ana Belén Cabezas2, Lisette Zambrano3(B), and Andrés Vinueza3

1 Facultad de Sistemas, Escuela Politécnica Nacional, Quito, Ecuador
[email protected]
2 Departamento de Estudios Políticos, Facultad Latinoamericana de Ciencias Sociales Flacso, Quito, Ecuador
3 Departamento de Ciencia de Datos, Digital Mind, Quito, Ecuador
[email protected], [email protected]
4 Grupo MODES, SIGTI, Departamento de Matemática, Facultad de Ciencias, Escuela Politécnica Nacional, Quito, Ecuador
[email protected]
Abstract. The objective of this study is to explore, analyze, evaluate and compile experiences and good practices in relation to the use and implementation of administrative records in social information systems in the Latin American and Caribbean region. A systematic literature review has been carried out on the current situation of the use of information systems for the registration, updating, validation and management of data on potential beneficiaries of government social protection programs. As a result of this review, it has been possible to identify methods for updating and validating data, as well as methods for updating socioeconomic indicators, which reflect the integration of ICTs into government policies for better targeting of resources, giving way to Smart Government.

Keywords: Social Information System · Administrative records · Social protection programs · Systematic literature review · Smart Government · Latin America and Caribbean region
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 261–272, 2022. https://doi.org/10.1007/978-3-030-96147-3_21

1 Introduction

In the context of data-driven societies, Social Information Systems (SIS) and social registries (beneficiary registries) become the way to organize, store, process and distribute data and information from social protection programs, including in their processes the use of administrative records from different sources managed by the State or other entities. The countries of Latin America and the Caribbean seek to establish mechanisms to bring public policies closer to low-income families. Therefore, they
have developed SIS to fulfill this purpose. The countries' SIS are at different levels of construction, structure, methodology and coverage. There are countries with SIS that have advanced processes for integrating administrative records, including the use of technological platforms for citizen use. Other countries are still developing their information systems and working toward their consolidation, so that their performance reaches the expected results in terms of coverage and applicability [1].

The objective of this study is to conduct a systematic literature review for the exploration, analysis, evaluation and compilation of experiences and good practices in relation to the use, exploitation and implementation of administrative records in social information systems in the Latin American and Caribbean region. A diagnosis was made of the current situation regarding the use of information systems for the registration, updating, validation and management of data on potential beneficiaries of government social protection programs. The research seeks to identify the characteristics, means of interoperability, variables and databases used in the SIS. These elements allow updating data, cleaning information and maintaining a dynamic mechanism to identify people who can benefit from government social programs, such as policies and subsidies to improve the quality of life of citizens, as well as to reduce social gaps and inequalities. The structures and characteristics of the information systems have been explored to improve the understanding of the functioning of the programs that use them, as well as the use of the administrative records of the entities as sources of information for updating household data and socioeconomic indices. This article presents a bibliographic review of the information available in each of the countries on their SIS and is organized as follows: the methodology used; the results obtained; the analysis of the results; and the conclusions.

2 Methodology
The methodology used to collect the information from social information systems was based on qualitative archiving and bibliographic analysis techniques. This technique categorizes the information collected in order to carry out a comparative analysis of each country and identify outstanding aspects of good practice in the use of administrative records. The research method used is the systematic literature review (SLR). It is based on the guidelines presented in [2], which consist of the following stages: 1) Planning: the SLR protocol is developed; then the research questions, objectives, search strings, sources used to perform the searches, language of the papers, and exclusion and inclusion criteria are defined. 2) Execution: the execution of the protocol, whereby the defined search string is executed in the selected sources, according to the inclusion and exclusion criteria. 3) Reporting: this consists of communicating the results of the SLR.

2.1 Planning
The main objective of this work is to gain in-depth knowledge of the SIS of Latin American and Caribbean countries. The research questions defined are:
RQ1. Which SIS used by Latin American and Caribbean countries allow updating data on the target population of social programs?
RQ2. Which characteristics and sources of information do the SIS have for updating household data and socioeconomic indices?

Identification and Source Selection. The documents to be selected were prioritized, including those published by international organizations such as the IDB, World Bank and ECLAC, as well as the institutional documents of each of the institutions in charge of the information systems. We also worked with the Scopus and Google Scholar databases, which offer a wide range of scientific literature [3]. In addition, we used the official websites of the information systems identified in each country, thus expanding the sources of information for the systematic literature review.

Search String Definition. We combined terms with logical operators such as "AND" and "OR". The resulting search string was defined as follows: TITLE-KEY-ABS(("social information systems" OR "administrative records" OR "social registries") AND ("Latin America countries" OR "Caribbean countries")).

Inclusion and Exclusion Criteria. The selected documents were delimited in terms of the inclusion criteria (IC) and exclusion criteria (EC), as detailed below:

Inclusion Criteria
– Documents that consider studies on data collection programs in the SIS.
– Technical documents explaining the use of administrative records, analysis methodologies and the calculation of social indexes.
– Legislation regulating the use and processing of administrative records as a tool for updating data.
– Documents showing updated descriptions of the most recent situations of application of the systems used.

Exclusion Criteria
– Documents not describing government use of administrative records.

In addition, only papers written in English or Spanish were reviewed.
However, it is important to mention that there is no defined time limit for the search of documents, because the objective of this review is to compile all available information.

2.2 Execution
In this stage, the previously defined protocol is executed: the search string is run against the selected sources, and the search results are filtered according to the inclusion and exclusion criteria. The output of this stage is a synthesis and record of the relevant information from the documents found.
Three phases were carried out (Fig. 1). As a result of the initial phase, it was found that Berner and Van Hemelryck (2020) [1] make a detailed compilation of the social information systems in the region, so the search strings were adjusted to identify bibliography referring to each of the social registries or to the institutions responsible for them. For example, the search was carried out by the name "Instituto Mixto de Ayuda Social", by its acronym "IMAS", or by its program "Sistema Nacional de Información y Registro Único de Beneficiarios del Estado".
Fig. 1. Phases of the process followed in this systematic literature review
Therefore, 72 results were obtained from international organizations, websites of government entities, Scopus and Google Scholar. However, the scientific databases yielded few documents for the search strings; that is, there is little scientific evidence on the application of administrative records in the formulation of social policy by governments, which shows the relevance of this review.

Extraction and Synthesis Strategy. The selected references were used for the collection of information from the SIS. The extraction of information was based on qualitative archiving techniques and bibliographic analysis to answer each of the research questions. The information was categorized for a comparative analysis of each country and to identify relevant aspects of good practices in the use of administrative records.
3 Results

In this section, we communicate the results of the SLR.

3.1 Description of the Social Information System and/or Receiver Registry
The following countries have been identified as having a Social Registry or a Social Information System: Argentina [4], Bolivia [5], Brazil [6], Colombia [7],
Costa Rica [8,9], Chile [10–12], the Dominican Republic [13,14], Ecuador [15], El Salvador [16], Honduras [17], Mexico [18], Panama [19–22], Paraguay [23–25], Peru [26] and Uruguay [27] (15 countries in total). There is no available information about Guatemala, Haiti, Nicaragua, Guyana, Suriname and Venezuela. The Dominican Republic, Honduras, Panama, and Paraguay do not use administrative records in their processes. Argentina, Bolivia, Brazil, Colombia, Costa Rica, Chile, Ecuador, El Salvador, Mexico, Peru, and Uruguay have incorporated into their schemes the use of administrative records for data validation and/or updates of the socioeconomic status of the family, as shown in Fig. 2. The SIS of these countries use administrative records from the following categories of databases: civil registration; social security or private security benefits; tax, fiscal or financial records; payrolls of employees of municipalities and/or central government; lists of authorities elected in local and state-level voting; formal work registration; income per capita or per family; vehicle ownership; consumption of drinking water, gas, telephone line or electricity; health insurance; real estate; and alimony.
Fig. 2. Latin American and Caribbean countries that use administrative records
3.2 Methods for Updating Recipient Data
The collection of information from recipients or beneficiaries is carried out through mixed mechanisms. In the first instance, door-to-door sweeps of home visits are carried out, by means of summons or requests for data collection through the completion of a social form. In a second instance, the data collected are corroborated using administrative records to minimize biases and inconsistencies that arise at the time of the survey. These methods are applied in Ecuador, Costa Rica, Brazil, Argentina, Bolivia, El Salvador, Peru, Colombia and Mexico. These visits or surveys are carried out for a period of between three
and five years, in the periods between population censuses, in order to obtain complementary information on the family situation [4,9,11,15,16,18,26,28–31]. Regarding the use of administrative databases, the government entities in charge of the SIS sign interinstitutional agreements with the entities that provide the data, in order to guarantee a regular flow of data and interoperability with the cooperating institutions [6,9,15,16,18,26,28,30,32,33].

Brazil developed the Meu CadÚnico application, which allows citizens registered in the Cadastro Único to access their own family data. The application shows whether the family registry is outdated, has any inconsistency, or whether the family is included in a verification process (a new update of family data is required). This tool aims to reduce the need to go to a Cadastro Único physical service post. Families must periodically update their records to remain in the social programs database [6]. Colombia also updates the information automatically through the institutional web page; the information is updated every two months [30].

Argentina's National Tax and Social Identification System (SINTyS) carries out activities to establish the exchange of information between state institutions. The databases of each jurisdiction are integrated in the provincial server to standardize them and perform minimum validations, and are then reviewed in SINTyS to ensure their quality and integrity. The interconnected databases are used to validate individuals' data, control the coverage and eligibility of social and health programs, and verify labor and tax status [33].

In the case of Peru, the sources of information used are those contained in the Social Information Exchange Mechanism (MIIS), which has data on socioeconomic, demographic, vulnerability, disability, ethnic and geographic characteristics.
This information is available at the level of individuals, households, dwellings, population centers, communities, population groups or geographic jurisdictions [26].

The information from Chile's Social Household Registry (RSH) contains a socioeconomic characterization form that collects information on households and their members. It considers location, housing and household members, education, health, occupation and income. There is an online data collection mechanism through a platform that citizens access to enter or update their information. After collecting primary information, it uses databases from different state institutions, with specific information on the variables with which the Socioeconomic Qualification is constructed (income from work, pensions, capital, health contributions, among others) [34].

In the case of Uruguay, 94% of citizens and residents are registered in the Social Area Information System. The use of administrative data allows a statistical assessment of the overall housing situation, comfort, education and household composition. The information draws on existing administrative databases, using the national identification number to link records [35].

Based on this evidence, the SIS used by Latin American and Caribbean countries have been identified. In addition, methods for updating recipient data have been described that allow keeping the population classification up to date
based on the use of administrative records. In this way, the first research question posed in this study is answered.
3.3 Methods for Updating Socioeconomic Indicators
This section describes the methods used with administrative records to update socioeconomic indicators. The cases presented from Peru, Chile and Uruguay show that administrative databases are used to update household socioeconomic indicators, taking advantage of these sources for periodic updates of the household qualification. The cases of Chile and Uruguay show that it is possible to make inferences about socioeconomic status from administrative records. Data processing for the definition of the Socioeconomic Classifier (SC) in Peru is established in the Methodology for the Determination of the Socioeconomic Classification, which proposes the following steps: evaluation of private insurance, vehicle ownership, household income, evaluation of electricity consumption, and calculation of the Household Focalization Index, according to the information in the Single Socioeconomic Form, to determine the SC [26]. The following validation process is then carried out: 1) review of the information through access to databases; 2) implementation of action plans to identify inconsistencies in the information; 3) monitoring of patterns and trends; 4) analysis of the consistency rules of the data collection instruments applied to the households; 5) analysis of the data collection instruments applied to the households [26]. Additionally, the rectification process consists of the following steps: 1) monitor the socioeconomic classification; 2) implement action plans based on the monitoring; 3) improve the socioeconomic classification based on the monitoring [26]. Chile uses the Socioeconomic Score (SES), the percentage range in which a household is placed by inferring its income level from the total effective income of household members divided by the number of members. Subsequently, an adjustment is applied to the actual household income based on variables such as education, real estate, vehicles and alimony payments [10,11,34].
In the case of Uruguay, the Critical Deprivation Index (CDI) performs a proxy means test, based on a probabilistic model that estimates the probability that the household belongs to the first quintile of per capita income. The CDI is estimated based on the structural variables of the household and its members, referring to housing, household composition, educational context, among others, resulting in a score that ranks households according to their level of vulnerability. The sources of information used are Housing, Comfort, Education and Household Composition [27,35,36]. In this section, the sources of information used by the SIS in their methods for updating socioeconomic indicators have been identified. As common characteristics, it is evident that the SIS have developed methods to interconnect the various data provider entities, which has allowed them to have a greater volume of administrative records. The countries cited in this section have developed
268
Y. Vizuete-Salazar et al.
mechanisms for interoperability and transfer of administrative records from various sources of information, which becomes a strength for the SIS because the updating of socioeconomic indicators is carried out based on continuously generated data. With this section, the second research question of this study is answered.
3.4 Good Practices Identified
Additionally, as a result of the literature review, the following good practices are identified. These good practices have allowed the SIS in each of the aforementioned countries to be strengthened, especially through the use of direct citizen-access applications and the verification of the quality of the databases to be exchanged.
1. Brazil, Colombia and Chile have citizen service points in municipal or government entities. In addition, updating mechanisms have been implemented through web portals or direct citizen-access applications, bringing registration closer to the potential beneficiaries of social programs. These mechanisms have also become the means of communication used to notify citizens that they must update their data.
2. The good practice of Uruguay's SIIAS is to improve the levels of efficiency and effectiveness of management, strengthening the capacity to formulate public policies through the exchange of data between ministries, autonomous entities, municipalities and other state agencies [35].
3. In Chile, public entities (municipalities) or private nonprofit entities may be designated as executors of the application process for admission, updating, rectification and/or supplementation of the information in the Social Household Registry, subject to prior agreement with the Ministry of Social Development. In this way, the citizen service points are expanded [34].
4. Argentina's SINTyS system verifies the quality of the databases to be exchanged, ensuring that the data it receives from member organizations are processed in accordance with standards common to all participants. This constitutes a good data management practice, because data traceability is guaranteed from the point where the data are generated [33].
5. In Brazil, families are notified to update their data. If they do not make periodic updates, they are deactivated from the registers. The respective legal regulations are issued to this effect [6].
4 Discussion
One of the main results of this review is that, in the countries studied, the central objective of the SIS is to characterize the population according to its socioeconomic condition in order to target social benefit programs. In this sense, Social Information Systems are in a process of consolidation focused on the use of administrative records, aimed at the permanent updating
of socioeconomic indices, the inclusion of individuals, and the verification of information collected in door-to-door surveys. Although practices in the management of this information differ, some countries make better use of their administrative databases and have adequately regulated the integration of existing information from different institutions. One of the countries that stands out for its high level of coverage is Uruguay, with a coverage of 94% of its total population as of 2017. This is because the country's information system (SIIAS) automatically enters the data of potential beneficiaries from the administrative records of the different state entities. Another noteworthy case in the collection of information and use of administrative records is Chile, which maintains a coverage of 78% and uses an online and face-to-face information collection methodology together with an automatic updating process based on the information constantly available in the administrative records. Colombia and Peru show that administrative databases are used to update the socioeconomic indicators of families, making use of these information sources for periodic updates of the family qualification. In other words, several countries obtain high performance from their administrative records, as well as from their interoperability and integration, which allows for adequate management of state resources and the development of dynamic updating processes. Moreover, the biases and inconsistencies that arise at the time of the face-to-face survey are minimized. It has also been noted that, while administrative records have been used for beneficiary registration or information verification, door-to-door surveys have not been eliminated.
Door-to-door collection is mainly used for primary data gathering, as in the cases of Peru, Colombia and Ecuador, while in other cases it is used for information verification, as in the case of Brazil (in the verification process). This is mainly due to the maturity of the systems that provide data on individuals. However, there are already cases, such as Argentina and Uruguay, where interoperability between state entities is being standardized, so that quality information can be obtained to update data on individuals and their socioeconomic status.
5 Conclusions
This study has answered the two research questions posed. The SIS used by Latin American and Caribbean countries to update data on the target population of social programs have been identified, as well as their characteristics and the sources of information used for the processes of updating, verifying, rectifying and integrating family data and socioeconomic indexes. This requires interinstitutional cooperation agreements, institutional strengthening and the development of a computer tool to link and integrate databases. The establishment of interoperability among information-provider institutions will reduce data manipulation prior to processing.
In conclusion, all SIS seek to implement information updating processes on a regular basis, and these involve the use of administrative databases for data validation and the reduction of inconsistencies. Information from administrative records is reliable, since it does not present the biases inherent in face-to-face surveys. In addition, these systems allow governments to better target public resources, since they focus on qualified populations in conditions of poverty, vulnerability and social exclusion. The design of public social policy is based on data-driven models, so that decision-makers can follow up more effectively. The use of administrative records in the region is beginning to show good results. Governments still have work to do to achieve the necessary interoperability among their entities and to update their population data quickly and automatically. The challenge is posed.
Acknowledgments. Our thanks to the Secretary of Higher Education, Science, Technology and Innovation of Ecuador (SENESCYT) for supporting this work in part.
References
1. Berner, H., Van Hemelryck, T.: Sistemas de información social y registros de destinatarios de la protección social no contributiva en América Latina: avances y desafíos frente al COVID-19 (2020)
2. Kitchenham, B.A., Budgen, D., Brereton, P.: Evidence-Based Software Engineering and Systematic Reviews. Chapman and Hall/CRC (2015). https://doi.org/10.1201/b19467
3. Dieste, O., Grimán, A., Juristo, N.: Developing search strategies for detecting relevant experiments. Empir. Softw. Eng. 14(5), 513–539 (2008)
4. Argentina.gob.ar: Sisfam (2021). https://www.argentina.gob.ar/politicassociales/siempro/sisfam
5. The World Bank: Plataforma de Registro Integrado de Programas Sociales del Estado Plurinacional de Bolivia (PREGIPS) (2019). https://documents1.worldbank.org/curated/en/404561561971962868/pdf/Herramienta-de-Evaluaci%C3%B3nde-Registros-Sociales-Plataforma-de-Registro-Integrado-de-Programas-Socialesdel-Estado-Plurinacional-de-Bolivia-PREGIPS.pdf
6. Governo Federal: Ministério da Cidadania, June 2021. https://www.gov.br/cidadania/pt-br/acoes-e-programas/cadastro-unico
7. Departamento Nacional de Planificación (2020). https://colaboracion.dnp.gov.co/CDT/Prensa/Sisben-Abece.pdf
8. SINIRUBE: Sistema Nacional de Información y Registro Único de Beneficiarios del Estado, June 2020. https://www.sinirube.go.cr/
9. Instituto Mixto de Ayuda Social, June 2021. https://www.imas.go.cr/es/general/organo-desconcentrado-sistema-nacional-de-informacion-y-registro-unico-debeneficiarios-del
10. Banco Mundial: Registro social de hogares de Chile, February 2018. http://www.desarrollosocialyfamilia.gob.cl/storage/docs/RSH paper.pdf
11. Gobierno de Chile, June 2021. https://www.gob.cl/noticias/registro-social-de-hogares-preguntas-y-respuestas-sobre-el-sistemade-informacion-que-permite-acceder-a-los-beneficios-del-estado/
12. Ministerio de Desarrollo Social y Familia: Registro social de hogares (2020). http://www.registrosocial.gob.cl/docs/Protocolos-de-ingreso-y-actualizaci%C3%B3n-Febrero-2020.pdf
13. Ministerio de Desarrollo Social y Familia: Comisión Económica para América Latina y el Caribe, June 2015. https://www.cepal.org/sites/default/files/events/files/2015-06-16 presentacion siuben.pdf
14. SIUBEN: Calidad de vida - tercer estudio socioeconómico de hogares (2018). https://siuben.gob.do/wp-content/uploads/2020/06/siuben-calidad-de-vida2018-digital.pdf
15. Registro Interconectado de Programas Sociales (2019). http://rips.registrosocial.gob.ec/Rips-web/
16. PACSES: Registro Único de Participantes (2014). http://rup.proteccionsocial.gob.sv/Archivos/documentos/DocumentoTecnicoRUP.pdf
17. Centro Nacional de Información del Sector Social: Registro Único de Participantes (2021). https://ceniss.gob.hn/rup.html
18. SEDESOL: Sistema de información social integral, November 2018. https://www.coneval.org.mx/Eventos/Documents/Sistema-Informacion-Social-IntegralCONEVAL.pdf
19. Dirección de Inclusión y Desarrollo Social: Oportunidades y desafíos en el desarrollo de los registros únicos de beneficiarios e implementación de mecanismos de acompañamiento familiar (2017). https://www.sisca.int/centro-dedocumentacion/encuentros-virtuales/jornada-de-alto-nivel-politico-y-tecnicoipm/intercambio-de-experiencias-oportunidades-en-el-desarrollo-de-los-registrosde-beneficiarios-e-implementacion-de-mecanismos-de-acompanamiento-fa
20. Gobierno de la República de Panamá: Ley no 54 - registro nacional de beneficiarios y dicta otras disposiciones (2016). https://siteal.iiep.unesco.org/sites/default/files/sit accion files/pa 0060 0.pdf
21. Ministerio de Desarrollo Social: Selección de hogares de la red de oportunidades (2009). https://www.mides.gob.pa/programas/p-120-a-los-65/?csrt=11349557298616286365
22. Ministerio de Desarrollo Social de Panamá: MDS y CEPAL evalúan avances y retos para la implementación del registro social de hogares en Paraguay (2021). https://www.mds.gov.py/index.php/noticias/mds-y-cepal-evaluan-avances-y-retos-parala-implementacion-del-registro-social-de-hogares-en-paraguay
23. Agencia de Información Paraguaya: Más de 10.000 beneficiarios del programa Abrazo se benefician con seguro de vida (2017). https://www.ip.gov.py/ip/masde-10-000-beneficiarios-del-programa-abrazo-se-benefician-con-seguro-de-vida/
24. MADES: Presentan avances del índice de calidad de vida con la incorporación de variables ambientales (2020). http://www.mades.gov.py/2020/08/18/presentan-avances-del-indice-de-calidad-de-vida-con-la-incorporacion-devariables-ambientales/
25. Sistema Integrado de Información Social: ¿Qué es el SIIS? (2015). https://www.siis.gov.py/que-es-el-siis
26. Ministerio de Desarrollo e Inclusión Social, July 2016. http://extwprlegs1.fao.org/docs/pdf/per159937anx.pdf
27. SIIAS: Sistema de Información Integrada del Área Social, July 2018. http://siias.mides.gub.uy/59483/como-se-integra-la-informacion-de-las-personas-al-siias
28. Comité Interinstitucional del Registro Social: Norma técnica para la recopilación, actualización, uso y transferencia de información del registro social (2020)
29. Roxana, M.V.: Sistema de identificación de la población objetivo: SIPO en Costa Rica. 0530. Banco Mundial, June 2005. https://mef.gob.pe/contenidos/pol econ/documentos/Focalizacion Costa Rica.pdf
30. Departamento Nacional de Planificación: Ministerio de desarrollo e inclusión social (2020). https://colaboracion.dnp.gov.co/CDT/Prensa/Sisben-Abece.pdf
31. Jiménez, F.D.: Efectividad en la selección de beneficiarios de los programas Avancemos y Bienestar Familiar. Economía y Sociedad 22(52), 1 (2017). https://doi.org/10.15359/eys.22-52.1
32. Víquez, R.M.: Norma técnica para la recopilación, actualización, uso y transferencia de información del registro social (2020)
33. Sistema de Identificación Nacional Tributario y Social: SINTyS - Sistema de Identificación Nacional Tributario y Social, June 2021. https://www.argentina.gob.ar/politicassociales/sintys
34. Ministerio de Desarrollo Social y Familia: Registro social de hogares, February 2020. http://www.registrosocial.gob.cl/docs/Protocolos-de-ingreso-yactualizaci%C3%B3n-Febrero-2020.pdf
35. SIIAS: Sistema de Información Integrada del Área Social - documentos técnicos, July 2018. http://siias.mides.gub.uy/92604/documentos-tecnicos
36. Ministerio de Desarrollo Social y Familia: ¿Qué es el índice de carencias críticas? Serie de documentos "Aportes a la conceptualización de la pobreza y la focalización de las políticas" (2014). http://dinem.mides.gub.uy/innovaportal/file/61719/1/quees-el-indice-de-carencias-criticas.-2014.pdf
Implementation and Analysis of the Results of the Application of the Methodology for Hybrid Multi-cloud Replication Systems

Diego P. Rodriguez Alvarado(B) and Erwin J. Sacoto-Cabrera
GIHP4C - Universidad Politécnica Salesiana, Cuenca, Ecuador
[email protected], [email protected]
http://www.ups.edu.ec

Abstract. The main problem of local storage systems is the physical threats that can damage the integrity of the hardware and thus cause the loss of all stored information. In this paper we present a methodology to perform data replication in PostgreSQL databases, using the OpenNebula cloud management software as a prototype of a private cloud and Amazon Web Services as a public cloud. The results obtained made it possible to determine the authenticity of the replicated information and to test dynamic replication using multiple clouds.

Keywords: Cloud computing · Multi-cloud · OpenNebula · Amazon AWS

1 Introduction
Cloud computing is an up-and-coming technology that has been widely deployed and has become the main backbone of the digital transformation, supporting other technologies from mobile phones to wearable devices, connected vehicles and the future networked society [4,21,44]. Cloud computing is defined by NIST (the National Institute of Standards and Technology) in [5] as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, in recent years, the growing trend of network service users towards cloud computing has encouraged web service providers to offer services with different functional and non-functional (quality of service) characteristics and to combine them into sets of services [29]. In the same way, the authors of [15,47] emphasize that there are three service models in cloud computing: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).

Universidad Politécnica Salesiana.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 273–286, 2022.
https://doi.org/10.1007/978-3-030-96147-3_22

274
D. P. Rodriguez Alvarado and E. J. Sacoto-Cabrera

Services such as DaaS (Data as a Service), WaaS (Workplace as a Service) and AaaS (All as a Service) are now also considered. In all the described models, services are sold on demand, by the minute or by the hour, and clients can access them as they wish for a given period of time. The best-known online cloud computing offerings are Microsoft Apps, Google Apps, Amazon Apps, IBM Apps and Alibaba Cloud. Clouds can also be public, private or hybrid. A public cloud sells everything over the Internet. A private cloud is a data-center organization that offers services to a restricted group of users. A hybrid cloud is a cloud computing environment in which an organization provides and manages some resources internally and others externally, as described in [6,45]. Cloud computing is steadily maturing into one of the most publicised concepts in the world of computing and has the potential to deliver exciting things in many sectors, from industry to social media to the home [23,38]. In several industrial sectors, the use of services and resources from multiple clouds is driven by the needs of consumers, expressed in simple requirements such as service quality and cost; this is called Multi-Cloud, as described in [23]. In the same way, the authors in [23,24,30] focus their research on identifying the state of the art in building Multi-Clouds and the enhancements that current solutions need in order to meet the needs of software developers. In this regard, in the NIST report [22], the authors state that clouds can be used serially, when services are moved from one cloud to another, or simultaneously, when services from different clouds are used at the same time. The scenario in which services migrate from a private cloud to a public cloud (serial use), or in which some services run on the private cloud while others run on a public cloud (simultaneous use), is called a Hybrid Cloud.
1.1 Paper Contributions and Outline
In this paper, we propose an architecture that enables replication from an on-premise cloud to a hybrid multi-cloud, and we present the results of the methodology applied. Specifically, this paper studies the feasibility of a methodology for migrating from an OpenNebula on-premise cloud to Amazon AWS as a public cloud. The proposed methodology is analyzed in four stages. The first stage is the port forwarding and security group configuration in AWS. The second stage is the OpenNebula on-premise cloud configuration. The third stage is the Amazon Web Services (AWS) public cloud configuration. The fourth stage configures dynamic replication of an online database from the on-premise cloud to the public cloud.
2 Related Work
This work is inspired by the multi-cloud concept presented in [25]. Multi-cloud technology uses multiple clouds from different providers without the complexity of a single platform, i.e., it consists of many autonomous cloud environments with no agreement with a third party or among cloud providers, as described in [37].
Implementation and Analysis of the Results
275
Within the multi-cloud domain, some research complements the study of cloud and multi-cloud computing through different application scenarios and new concepts. For instance, in the context of multi-cloud, in [1,14,18] the authors discuss related approaches proposed for multi-cloud environments and cloud elasticity. In [36], the authors describe frameworks for multi-cloud platforms as well as tools to select appropriate platforms for specific applications. In the framework of new multi-cloud concepts, in [38] the authors introduce multi-site virtual cluster and cross-cloud concepts, as well as concepts such as recovery, redundancy and security information for multi-clouds. In this respect, from [17] it can be understood that, with multiple clouds, the user or company in question uses different cloud services for different business applications. For example, they may store data in one private cloud, share documents in the Microsoft cloud platform and perform data analysis in another cloud. Likewise, cross-cloud architectures are designed to make data transfer and application utilisation across clouds more agile and cohesive. The reasons for using the multi-cloud approach are varied. For example, a 5G business model may seek to use a multi-cloud approach for service-access considerations, where the virtual operator may operate on one cloud but requires access to different services offered by infrastructure providers, as detailed in [5,39–43]. Finally, several problems have been encountered in multi-cloud integration, as described in [18,26]. In this regard, in [6], the authors describe the need to use optimisation techniques to maximise the Service Level Agreement (SLA) through VM/container usage and traffic consolidation across multi-clouds. In [26,48,50] the authors describe how the problems generated by migration solutions have made the seamless migration of on-premises applications to the cloud a difficult task, sometimes based on trial and error.
Regarding security, in [2,9] the authors propose a security solution that aims to offer secure data storage for mobile cloud computing users based on a multi-cloud scheme. However, these papers focus on specific aspects of migration from on-premise clouds to multi-clouds and do not discuss a dynamic migration methodology and its outcome for hybrid multi-clouds. The rest of this paper is organized as follows: In Sect. 3, we describe the system architecture, the on-premise and public cloud characteristics, and the scheme in detail. Section 4 provides details of the methodology and system configuration used to implement the system. Section 5 shows and discusses the results. Finally, in Sect. 6, we present the conclusion and future work.
3 System Architecture
The system architecture is inspired by the model described in [10,11], where the authors propose an architecture based on an on-premise cloud and public clouds from vendors such as Microsoft, Google and Amazon. The proposed system architecture is suitable for the dynamic data replication strategies in cloud environments described in [27], which provide consistency, availability, durability and scaling. The proposed system architecture, shown in Fig. 1, uses Amazon's AWS cloud service to replicate a service of the on-premise cloud
developed on OpenNebula. Amazon AWS provides two complementary replication features: Multi-Availability Zone (AZ) deployments and read replicas [3]. The proposed system architecture is made up of the following parts:
– OpenNebula.- A flexible cloud platform tool that organizes storage, network and virtualization technologies to allow the dynamic location of services in distributed infrastructures [7,13,35]. OpenNebula Community is an open-source data center virtualization technology, offering feature-rich, flexible solutions for the comprehensive management of virtualized data centers to enable on-premise infrastructure-as-a-service clouds, as described in [51]. We installed and configured the OpenNebula cloud management software as a prototype implementation of a private cloud, with the following characteristics:
  – Software setup:
    • Cloud management platform - OpenNebula
    • Hypervisor - KVM
    • Operating system - Ubuntu 16.04 LTS
    • OpenNebula Sunstone
  – Hardware setup:
    • Requires a host machine with at least two cores.
    • Supports a broad range of x64 multicore processors.
    • Provides at least 8 GB of RAM to take full advantage of the OpenNebula cloud.
  – Hardware virtualization support:
    • To support 64-bit virtual machines, hardware virtualization support (Intel VT-x or AMD RVI) must be enabled in the virtualization software.
  – Network adapters:
    • One Gigabit Ethernet controller.
We developed an on-premise cloud because OpenNebula has a toolkit that manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service [35].
– Amazon AWS.- Amazon Web Services is a collection of remote computing services that together form a cloud computing platform, offered over the Internet by amazon.com, as described in [3,32,49]. Specifically, we use AWS Elastic Compute Cloud (EC2), with which virtual machines can be created and run in one of Amazon's data centres.
These virtual machines take a slice of that centre and simulate the hardware of a physical server [12]. In this regard, we set up the AWS EC2 instance with the following features:
  – Software setup:
    • Operating system - Ubuntu 16.04 LTS
    • Hypervisor - HVM
  – Hardware setup:
    • Requires an instance of the t2.micro family of AWS EC2 instances.
  – Network adapters:
    • One Gigabit Ethernet controller.
– Replication tool: We used PostgreSQL as the replication tool; its open-source streaming replication allows implementing database replication in networked environments in a collaborative way [33,46]. We use a dynamic replication scheme that uses the changes made to the on-premise cloud database to store replicas of the data dynamically in the AWS cloud, i.e., every change registered in the private database is immediately saved in its replica on AWS.
– PostgreSQL database: To perform the replication tests between the on-premise cloud and the AWS cloud, we used PostgreSQL, a well-known object-relational database manager used in open-source environments [19,34].
Fig. 1. System architecture
4 Methodology and System Configuration
Cloud computing facilities rely on virtualization technologies such as Software-Defined Networks (SDN) and Virtual Network Functions (VNF), among others, which allow clouds to acquire or release computing resources on demand, in such a way that the loss of any system component does not lead to system failure [8,28]. The proposed methodology is a dynamic replication scheme for the data saved on the on-premise cloud (PostgreSQL): it detects a change in the database and replicates it immediately to reduce the response time, and it shows that the number of replicas active at the same time must be managed according to CPU usage.
4.1 Methodology
OpenNebula provides many different interfaces that can be used to interact with the functionality offered to manage physical and virtual resources [51]. There are four main perspectives for interacting with OpenNebula: cloud interfaces for cloud consumers, administration interfaces for cloud administrators, extensible low-level APIs for cloud integrators (in Ruby, Java and XML-RPC), and a marketplace for appliance builders.
Specifically, we used the methodology proposed by the Open Cloud Computing Interface (OCCI) [16], a boundary API that acts as a service front-end to an IaaS provider's internal infrastructure management framework. In this regard, OpenNebula implements a full EC2 Query interface, which enables seamless integration with Amazon EC2 clouds. To perform the replication from the on-premise cloud to the AWS cloud successfully, a network technique called port forwarding must be applied. In [27], the authors explain different services that can be configured with this technique.
– There must be satisfactory connectivity between the two instances, and it must be bidirectional.
– The databases must be of the same version in both instances.
– The primary database must listen to the public IP address of the AWS instance.
– The master database must allow replication by the "replicator" user from the public IP address of the instance in AWS.
– The "replicator" role must be created only in the main database.
All of the configurations below are designed to meet these requirements. Because each OpenNebula component is connected to the Internet through a separate third-layer device, the settings must be implemented on the third-layer device and, in the case of AWS, the security group to which the instance belongs must be edited.
4.2 Configuration
The configuration to implement the described methodology consists of four steps. Firstly, we configure port forwarding and security groups on AWS. Secondly, we configure the private cloud. Thirdly, we configure the public cloud. Fourthly, we configure the database on the on-premise cloud.
– Allow Internet connection to the instance in OpenNebula: To allow the instance in AWS to replicate the database, we must configure port forwarding in the third-layer device to which the OpenNebula network interface is connected; it is important to define the ports used in the connection. Figure 2(a) shows the configuration that we applied in our third-layer device. We then add the inbound rule in the security group of the instance in AWS; Fig. 2(b) shows the configuration of the inbound rule added to the instance's security group.
– Private cloud configuration: To configure the private cloud, we used two virtual machines. The first, the "Front", includes the OpenNebula Sunstone interface for web management of the cloud, as well as control of the installed network interfaces and of file storage. The second, the "Node", is where the instance files are stored and which allocates RAM memory to the instances created. Figure 3(a) shows the cloud dashboard with the instances and the cloud resources used.
Implementation and Analysis of the Results
279
(a) Port Forwarding Configuration (b) Inbound rule configuration in AWS security groups
Fig. 2. Instance access configuration.
(a) Opennebula Dashboard
(b) EC2 Instance
Fig. 3. Opennebula and EC2 dashboard
– Public Cloud Configuration: the AWS EC2 service allows creating instances of various operating systems. The creation of our instance was guided by the EC2 console to choose the operating system and the type of instance we need; AWS offers 55 instance types that differ in resources, and we used the "t2.micro" type with the Ubuntu 16.04 operating system. Figure 3(b) shows the instance in AWS. After creating the instance, we can verify that it has been assigned a public IPv4 address; this is the IP used to make connections between instances and to connect the databases to perform the replication.
– Databases Configuration: the database was installed on both clouds and configured to listen on addresses other than localhost, because a basic PostgreSQL installation listens only on the local address.
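The listener change described in the last bullet is made in postgresql.conf on both hosts. A minimal sketch follows; the exact values applied in Fig. 4(a) may differ, and in production the listener should be restricted to the specific replication peers:

```
# postgresql.conf -- listen on all interfaces instead of only localhost
# (illustrative; restrict to the actual replication peers in production)
listen_addresses = '*'
port = 5432
```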
280
D. P. Rodriguez Alvarado and E. J. Sacoto-Cabrera
(a) Address Listener Configuration
(b) Creation of the Role
Fig. 4. Databases configuration
Figure 4(a) shows the configuration applied in both databases. After this configuration, we created a "replicator" role in the main database. Figure 4(b) shows the replicator role used to replicate the database to AWS. At this stage, we configure the IP of the replicator host in PostgreSQL.
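The role in Fig. 4(b) can be created on the main database with a statement of roughly this form; the password is an illustrative assumption, since the actual value is not given in the text:

```sql
-- Replication role, created only on the main database (illustrative password)
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';
```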
(a) PostgreSQL File Configuration
(b) Replicator Configuration
Fig. 5. PostgreSQL and replicator configuration
Figure 5(a) shows the settings in the configuration file of the main database that allow replication by the created role; the IP was obtained from the AWS console. After performing the configuration on the main database, we set up a configuration file called "recovery.conf" on the replicator host. Figure 5(b) shows the configuration added on the replicator host; this file contains the IP of the main host and the user name and password of the replicator user. The last step was to give permissions to the configuration file and start the PostgreSQL service.
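For the PostgreSQL versions that still use recovery.conf (up to version 11, consistent with the Ubuntu 16.04 instance used here), the two files described above take roughly this form; the placeholder addresses and the password are illustrative assumptions:

```
# pg_hba.conf on the main database -- allow the replicator role
# to open replication connections from the AWS instance (illustrative)
host  replication  replicator  <AWS-public-IP>/32  md5

# recovery.conf on the replicator host (illustrative)
standby_mode = 'on'
primary_conninfo = 'host=<main-host-IP> port=5432 user=replicator password=secret'
```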
Fig. 6. Permission settings
Figure 6 shows the commands to give permissions and start the PostgreSQL service.
5 Test and Results
The replication tests were performed with the Apache JMeter tool [20,31].1 Specifically, we performed a data insertion into a database created in the instance hosted in OpenNebula; the same data was replicated to the instance hosted in AWS without user intervention. To verify the results, the same queries were performed on the two instances to compare the data obtained, from which we concluded that the replication was successful.
Fig. 7. JDBC connection configuration
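The fields shown in Fig. 7 correspond to a JMeter JDBC Connection Configuration element of roughly this form; the pool variable name, database name, and credentials are illustrative assumptions:

```
# JMeter JDBC Connection Configuration (illustrative values)
Variable Name for created pool:  dbSession
Database URL:                    jdbc:postgresql://<public-IP>:5432/<database>
JDBC Driver class:               org.postgresql.Driver
Username:                        <user>
Password:                        <password>
```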
Figure 7 shows the configuration used to connect JMeter to the database. The PostgreSQL JDBC libraries must be downloaded to make the connections; JMeter connects to the database and stores the session in a variable in order to perform queries over the connection. This configuration was performed with the public IPs of OpenNebula and AWS.

5.1 Test 1 and Result 1
For our first test, we configured a Java Database Connectivity (JDBC) request to insert data. The data was inserted into the main database, and the replicated database automatically received all the inserted information without user intervention. Figure 8(a) shows the configuration. After the execution of the first test, we observed with the JMeter tool that the data had been entered correctly; Fig. 8(b) shows the results that the tool presents.
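The exact JDBC request is shown only in Fig. 8(a); given the public.data table and the id and random columns used in the later queries, the statement would be of roughly this form (the values are illustrative):

```sql
-- Illustrative insert into the table queried in tests 2-5
INSERT INTO public.data (id, random) VALUES (5400, 0.73);
```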
(a) JDBC Request to insert data
(b) Data Insertion Results
Fig. 8. Test 1 and Result 1
1 JMeter is an application designed to test and measure the performance and functional behaviour of client/server applications, such as web applications or FTP applications, as described in [31].
5.2 Test 2 and Result 2
To perform the second test, we run the following query: “select * from public.data where id = 5400;” on the main database to obtain a specific record and then compare it with the same test run on the replicated database. Figure 9(a) shows the result of the first query executed on the main database.
(a) First Query Executed on the Main Database (b) First Query Executed on the Replicated Database
Fig. 9. Result 2 and Result 3
5.3 Test 3 and Result 3
To perform the third test, we run the query "select * from public.data where id = 5400;" on the replicated database to obtain the same record we selected in test 2. Figure 9(b) shows the result of this query executed on the replicated database; it is the same as the test 2 result shown in Fig. 9(a).

5.4 Test 4 and Result 4
To perform the fourth test, we run the query "select sum(random) from public.data;" on the main database to obtain the sum of the random values inserted in test 1, to be compared with the same sum computed on the replicated database in test 5. Figure 10(a) shows the result of this summation on the main database.
(a) Second Query Executed on the Main Database (b) Second Query Executed on the Replicated Database
Fig. 10. Result 4 and Result 5
5.5 Test 5 and Result 5
To perform the fifth test, we run the query "select sum(random) from public.data;" on the replicated database to obtain the sum of the random values inserted in test 1 and compare it with the sum obtained in test 4. Figure 10(b) shows the test 5 result; it matches the test 4 result shown in Fig. 10(a).
6 Conclusions
In this paper, a methodology for data replication between an on-premise cloud and an AWS public cloud, intended as a continuous information backup solution, was evaluated as a multi-cloud application. Our main results suggest that the methodology proposed in Sect. 4.1 and evaluated through different tests in Sect. 5 allows efficient replication between public and private clouds. The use of a multi-cloud approach has many advantages, such as fast access to data in the on-premise cloud and data security in the public cloud. In addition, several cloud computing providers are making promising improvements every day that will attract more replication methodologies.
7 Future Works
Future work will include a replication methodology for different cloud computing providers, with selected data replicated to various public clouds so that not all the information is sent to a single provider; this will permit distributing the information across multiple public or private clouds.
References
1. Al-Dhuraibi, Y., Paraiso, F., Djarallah, N., Merle, P.: Elasticity in cloud computing: state of the art and research challenges. IEEE Trans. Serv. Comput. 11(2), 430–447 (2017)
2. Alqahtani, H.S., Sant, P.: A multi-cloud approach for secure data storage on smart device. In: 2016 Sixth International Conference on Digital Information and Communication Technology and its Applications (DICTAP), pp. 63–69. IEEE (2016)
3. Amazon EC2: Amazon Web Services, November 2012 (2015). http://aws.amazon.com/es/ec2/
4. Sunyaev, A.: Cloud computing. In: Internet Computing, pp. 195–236. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-34957-8_7
5. Aranda, J., Cabrera, E.J.S., Haro-Mendoza, D., Salinas, F.A.: 5G networks: a review from the perspectives of architecture, business models, cybersecurity, and research developments. Novasinergia 4(1), 6–41 (2021). ISSN: 2631-2654
6. Balne, S.: Review on challenges in SAAS model in cloud computing. J. Innov. Dev. Pharm. Techn. Sci. 2, 8–11 (2019)
7. Baranov, A.V., et al.: Approaches to cloud infrastructures integration. Comput. Res. Model. 8(3), 583–590 (2016)
8. Barham, P., et al.: Xen and the art of virtualization. ACM SIGOPS Oper. Syst. Rev. 37(5), 164–177 (2003)
9. Bohli, J.M., Gruschka, N., Jensen, M., Iacono, L.L., Marnau, N.: Security and privacy-enhancing multicloud architectures. IEEE Trans. Depend. Secur. Comput. 10(4), 212–224 (2013)
10. Bsoul, M., Al-Khasawneh, A., Kilani, Y., Obeidat, I.: A threshold-based dynamic data replication strategy. J. Supercomput. 60(3), 301–310 (2012)
11. Chang, R.S., Chang, H.P.: A dynamic data replication strategy using access-weights in data grids. J. Supercomput. 45(3), 277–295 (2008)
12. Cloud, A.E.C.: Amazon Web Services. Retrieved November 9, 2011 (2011)
13. Cordeiro, T.D., et al.: Open source cloud computing platforms. In: 2010 Ninth International Conference on Grid and Cloud Computing, pp. 366–371. IEEE (2010)
14. Coutinho, E.F., de Carvalho Sousa, F.R., Rego, P.A.L., Gomes, D.G., de Souza, J.N.: Elasticity in cloud computing: a survey. Ann. Telecommun. 70(7), 289–309 (2015)
15. Dudin, E., Smetanin, Y.G.: A review of cloud computing. Sci. Tech. Inf. Process. 38(4), 280–284 (2011)
16. Edmonds, A., Metsch, T., Parák, B.: Open Cloud Computing Interface - HTTP Protocol. Technical report, The Open Grid Forum Document Series (2013)
17. Elkhatib, Y., Blair, G.S., Surajbali, B.: Experiences of using a hybrid cloud to construct an environmental virtual observatory. In: Proceedings of the 3rd International Workshop on Cloud Data and Platforms, pp. 13–18 (2013)
18. Ferrer, A.J., Pérez, D.G., González, R.S.: Multi-cloud platform-as-a-service model, functionalities and approaches. Procedia Comput. Sci. 97, 63–72 (2016)
19. Ginestà, M.G., Mora, O.P.: Bases de datos en PostgreSQL. [S.l.]: [s.n.] (2012)
20. Halili, E.H.: Apache JMeter. Packt Publishing, Birmingham (2008)
21. Helali, L., Omri, M.N.: A survey of data center consolidation in cloud computing systems. Comput. Sci. Rev. 39, 100366 (2021)
22. Hogan, M., Hogan, M., Liu, F., Sokol, A., Tong, J.: NIST Cloud Computing Standards Roadmap: Version 1.0 (2011)
23. Hong, J., Dreibholz, T., Schenkel, J.A., Hu, J.A.: An overview of multi-cloud computing. In: Barolli, L., Takizawa, M., Xhafa, F., Enokido, T. (eds.) WAINA 2019. AISC, vol. 927, pp. 1055–1068. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-15035-8_103
24. Hong, J., Dreibholz, T., Schenkel, J.A., Hu, J.A.: An overview of multi-cloud computing. In: Barolli, L., Takizawa, M., Xhafa, F., Enokido, T. (eds.) Web, Artificial Intelligence and Network Applications, pp. 1055–1068. Springer International Publishing, Cham (2019)
25. Imran, H.A., et al.: Multi-cloud: a comprehensive review. In: 2020 IEEE 23rd International Multitopic Conference (INMIC), pp. 1–5. IEEE (2020)
26. Jamshidi, P., Pahl, C., Chinenyeze, S., Liu, X.: Cloud migration patterns: a multi-cloud service architecture perspective. In: Toumani, F., et al. (eds.) ICSOC 2014. LNCS, vol. 8954, pp. 6–19. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22885-3_2
27. Jayalakshmi, D., Rashmi, R.T., Srinivasan, R.: Dynamic data replication strategy in cloud environments. In: 2015 Fifth International Conference on Advances in Computing and Communications (ICACC), pp. 102–105. IEEE (2015)
28. Jeon, M., Lim, K.H., Ahn, H., Lee, B.D.: Dynamic data replication scheme in the cloud computing environment. In: 2012 Second Symposium on Network Cloud Computing and Applications, pp. 40–47. IEEE (2012)
29. Jula, A., Sundararajan, E., Othman, Z.: Cloud computing service composition: a systematic literature review. Exp. Syst. Appl. 41(8), 3809–3824 (2014)
30. Keahey, K., Armstrong, P., Bresnahan, J., LaBissoniere, D., Riteau, P.: Infrastructure outsourcing in multi-cloud environment. In: Proceedings of the 2012 Workshop on Cloud Services, Federation, and the 8th Open Cirrus Summit, pp. 33–38 (2012)
31. Matam, S., Jain, J.: Pro Apache JMeter: Web Application Performance Testing. Apress, Berkeley (2017). https://doi.org/10.1007/978-1-4842-2961-3
32. Mathew, S., Varia, J.: Overview of Amazon Web Services. Amazon Whitepapers (2014)
33. Moiz, S.A., Sailaja, P., Venkataswamy, G., Pal, S.N.: Database replication: a survey of open source and commercial tools. Int. J. Comput. Appl. 13(6), 1–8 (2011)
34. Momjian, B.: PostgreSQL: Introduction and Concepts, vol. 192. Addison-Wesley, New York (2001)
35. Moniruzzaman, A., Nafi, K.W., Hossain, S.A.: An experimental study of load balancing of OpenNebula open-source cloud computing platform. In: 2014 International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1–6. IEEE (2014)
36. Paraiso, F., Haderer, N., Merle, P., Rouvoy, R., Seinturier, L.: A federated multi-cloud PaaS infrastructure. In: 2012 IEEE Fifth International Conference on Cloud Computing, pp. 392–399. IEEE (2012)
37. Paraiso, F., Merle, P., Seinturier, L.: soCloud: a service-oriented component-based PaaS for managing portability, provisioning, elasticity, and high availability across multiple clouds. Computing 98(5), 539–565 (2016)
38. Petcu, D.: Multi-cloud: expectations and current approaches. In: Proceedings of the 2013 International Workshop on Multi-cloud Applications and Federated Clouds, pp. 1–6 (2013)
39. Sacoto Cabrera, E.: Análisis basado en teoría de juegos de modelos de negocio de operadores móviles virtuales en redes 4G y 5G. Ph.D. thesis, Universitat Politècnica de València (2021)
40. Sacoto-Cabrera, E.J., Guijarro, L., Vidal, J.R., Pla, V.: Economic feasibility of virtual operators in 5G via network slicing. Future Gener. Comput. Syst. 109, 172–187 (2020)
41. Sacoto-Cabrera, E.J., León-Paredes, G., Verdugo-Romero, W.: LoRaWAN: application of nonlinear optimization to base stations location. In: Rocha, Á., López-López, P.C., Salgado-Guerrero, J.P. (eds.) Communication, Smart Technologies and Innovation for Society. SIST, vol. 252, pp. 515–524. Springer, Singapore (2022). https://doi.org/10.1007/978-981-16-4126-8_46
42. Sacoto-Cabrera, E.J., Sanchis-Cano, A., Guijarro, L., Vidal, J.R., Pla, V.: Strategic interaction between operators in the context of spectrum sharing for 5G networks. Wirel. Commun. Mobile Comput. 2018 (2018)
43. Sacoto Cabrera, E.J., Guijarro, L., Maillé, P.: Game theoretical analysis of a multi-MNO MVNO business model in 5G networks. Electronics 9(6), 933 (2020)
44. Sadeeq, M.M., Abdulkareem, N.M., Zeebaree, S.R., Ahmed, D.M., Sami, A.S., Zebari, R.R.: IoT and cloud computing issues, challenges and opportunities: a review. Qubahan Acad. J. 1(2), 1–7 (2021)
45. Sala-Zárate, M., Colombo-Mendoza, L.: Cloud computing: a review of PaaS, IaaS, SaaS services and providers. Lámpsakos 7, 47–57 (2012)
46. Schönig, H.J.: PostgreSQL Replication. Packt Publishing Ltd., Birmingham (2015)
47. Srivastava, P., Khan, R.: A review paper on cloud computing. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 8(6), 17–20 (2018)
48. Tricomi, G., Panarello, A., Merlino, G., Longo, F., Bruneo, D., Puliafito, A.: Orchestrated multi-cloud application deployment in OpenStack with TOSCA. In: 2017 IEEE International Conference on Smart Computing (SMARTCOMP), pp. 1–6. IEEE (2017)
49. Vadicherla, P.: Amazon Web Services based migration strategy for legacy systems - review. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 10(3), 832–835 (2021)
50. Zardari, M.A., Jung, L.T., Zakaria, M.N.B.: Hybrid multi-cloud data security (HMCDS) model and data classification. In: 2013 International Conference on Advanced Computer Science Applications and Technologies, pp. 166–171. IEEE (2013)
51. Zhang, Z., Wu, C., Cheung, D.W.: A survey on cloud interoperability: taxonomies, standards, and practice. ACM SIGMETRICS Perform. Eval. Rev. 40(4), 13–22 (2013)
Diagnostic Previous to the Design of a Network of Medium-Sized University Campuses: An Improvement to the Methodology of Santillán-Lima

Juan Carlos Santillán-Lima1, Fernando Molina-Granja2(B), Washington Luna-Encalada1, and Raúl Lozada-Yánez1
1 Escuela Superior Politécnica de Chimborazo, Riobamba, Ecuador
{carlos.santillan01,wluna,raul.lozada}@espoch.edu.ec
2 Universidad Nacional de Chimborazo, Riobamba, Ecuador
[email protected]
Abstract. This work updates the telecommunications infrastructure design methodology for medium-sized university campuses, using the Bolívar State University as a case study, through a previous diagnosis that focuses on analyzing the different campuses as a whole and then subdividing them and analyzing in detail the behavior of each component of the network. This allows the reuse of equipment and makes the network operational with simple adjustments to the physical infrastructure, with great budget savings. This article proposes an improvement to the methodology for the design of telecommunications infrastructure for medium-sized university campuses and demonstrates a significant improvement in terms of access to services and compliance with institutional accreditation parameters, as well as a reduction in costs.

Keywords: Network diagnosis · Network redesign · Telecommunications infrastructure · Network improvement
1 Introduction
It is important to understand that the use of Information and Communication Technologies (ICTs) in the university environment arises at the moment it is proposed to respond to the challenges of a technified society like the one we are experiencing [1]. Among these great challenges, we find that the requirements arising from the different users of a university's network infrastructure are ever greater [2], as is the accelerated development described by Yábar [3]. The use of ICTs leads us to seek new alternatives for the design and implementation of telecommunications networks in university environments that offer not only large bandwidths but also "connectivity" and "student access" [3] to the various services offered by each university, as well as "technological innovation" [4], for which it is necessary to use transmission media and technologies that guarantee a scalable
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Botto-Tobar et al. (Eds.): ICAETT 2021, LNNS 407, pp. 287–297, 2022.
https://doi.org/10.1007/978-3-030-96147-3_23
288
J. C. Santillán-Lima et al.
telecommunications infrastructure that is sustainable over time, both economically and in the services it provides [2]. However, a network is rarely designed from scratch, except when a new campus is created, which is why it is necessary to diagnose the networks that currently exist in order to improve them, finding the errors that the network has both physically and logically. Network outages are a major concern of network designers, service providers, and network administrators. As network technology becomes more critical to businesses, in this case universities, effective network management in the presence of failures is increasingly urgent [5]. Information technologies and IT governance are currently considered within organizations as the backbone to improve existing processes so that they respond to strategic objectives, mission, and vision; in this way, companies obtain productivity, competitiveness, consolidation of the business model, and business progress [6]. Organizations currently rely to a large extent on Information Technology (IT) for their development and for the performance of their processes. Large amounts of economic capital are invested in technology with the purpose of creating added value in IT, and its proper use allows companies to obtain quality products with efficiency, effectiveness, and safety [7]. The network needs to be analyzed as a whole, that is, as a high-level abstraction of its structure and the technologies that constitute it. It is easier to predict the behavior of the network as a whole and to know whether the network has errors and interruptions. At the same time, the whole can be subdivided into subnets whose behavior is likewise easy to predict; in the case of university campuses, these could be the faculties, the buildings that make up the faculties, and even laboratories.
The network administration also needs a deep diagnosis of its internal structure to be successful in its repair, restructuring, or redesign. Knowing its internal structure means knowing the current state of its components, such as switches, routers, access points, cabling, fiber optics, the status of its connection terminals and connected elements, and other physical components of the network, as well as its logical structure and the programming of each physical component. An outage or failure is one of the essential metrics of network performance and is directly related to the availability of a network. Disruption of network elements can degrade network availability and can eventually cause service interruption [8]. Understanding the health of a network through failure and outage analysis is important for evaluating network availability, identifying problem areas to improve network availability, and modeling exact network behavior [9, 10]. Efforts in this direction include studies of network performance problems such as loss, latency, and jitter by Fraleigh, Choi, and Wang [11], and failure measurement and interruption analysis in backbone networks by Iannaccone et al. [8, 12–15].
2 Methodology
Santillán, in two previous articles, proposes the design of a telecommunications network infrastructure for medium-sized campuses from scratch; in these investigations the
authors address the design of the wired and wireless infrastructure. Because in reality it is very difficult to find this type of scenario, except for the creation of a university or university campus from scratch, an update to this methodology is proposed in this research by adding a previous diagnosis of the current university networks to which the methodology is to be applied [2]. Next, a series of steps is proposed that make up the update to the methodology:

a. As a first step, we must subdivide the network of our university into subnets; in the case of large universities it can be subdivided by campus, or, if it does not have several campuses, by faculties and administrative buildings.
b. After applying the first step, these subnets are analyzed to determine whether the network operates optimally, in terms of transmission speeds, packet loss, connectivity from a point of the subnet to the Internet and to the internal servers of the university, as well as connectivity between subnets.
c. If, when applying the previous step, we find errors in the operation of the subnets, we proceed to subdivide them into smaller subnets until we find the source of the error.
d. The steps described above save a lot of time in the detection of errors, but to fully verify the real state of the university network infrastructure, a deep diagnosis of its internal structure is needed to know the current state of its components, such as the switches, routers, access points, cabling, fiber optics, the status of their connection terminals and connected elements, and other physical components of the network, as well as their logical structure and the programming of each physical component.
e. If logical errors are found, the components are reprogrammed, and updated information security techniques are applied to the reconfigured equipment.
f. If physical errors are found, the simplest components are changed, such as connectors, short-range UTP cabling, access point locations, faulty computers, and pigtails, among others.
g. If the errors are due to the lack of structured cabling components, such as a security rack, the missing components are installed, since their absence allows the manipulation of the equipment by users, causing errors in network connectivity.
h. If the errors are caused by long-range fiber optics, routers or switches, servers, and other components that cannot be repaired or that no longer have a warranty, it is recommended to proceed with the application of the method proposed in [2].
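Steps (a) and (b) lend themselves to simple automation. The sketch below is not part of the original methodology; it is an illustrative Python probe (subnet names, hosts, and ports are hypothetical) that reports, per subnet, whether representative targets are reachable over TCP, mirroring the connectivity checks of step (b):

```python
import socket

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def diagnose(subnets: dict[str, list[tuple[str, int]]]) -> dict[str, bool]:
    """For each subnet, report whether all of its probe targets answer."""
    return {name: all(reachable(h, p) for h, p in targets)
            for name, targets in subnets.items()}

if __name__ == "__main__":
    # Hypothetical probe targets: an internal server and a gateway per campus.
    campuses = {
        "main_campus": [("10.0.1.1", 80), ("10.0.1.10", 5432)],
        "aguacoto":    [("10.0.2.1", 80)],
    }
    for campus, ok in diagnose(campuses).items():
        print(f"{campus}: {'OK' if ok else 'FAIL'}")
```

A subnet that fails such a probe would then be subdivided and re-probed, following step (c), until the faulty segment is isolated.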
3 Results
When applying the proposed methodology to the State University of Bolívar, the following results were found.

3.1 Subdivision of the General Network of the State University of Bolívar
The State University of Bolívar has three university campuses, which are taken as the first subdivision of the total network of the State University of Bolívar.
After subdividing, each of the networks is analyzed, obtaining the following results:
• The following issues were found at the main campus of the Bolívar State University: lack of and failures in connectivity in the wireless network, failures in connectivity in the wired network, low bandwidth in the network, and loss of packets when connecting to the Internet.
• At the El Aguacoto campus, low transmission speeds, low wireless coverage, packet loss, low bandwidth, and the lack of a firewall/security device were detected.
• At the San Miguel university campus, no wireless connectivity, sporadic Internet outages, and packet loss were detected.

3.2 Subdivision of the Bolívar State University Campuses
After performing the subdivision by campus, we found the following drawbacks on the campuses:

Main Campus Bolívar State University
At the main campus of the State University of Bolívar there are four faculties, in which the following issues were found:
• Faculty of administrative sciences, business management and informatics: lack of and failures in connectivity in the wireless network, failures in connectivity in the wired network, low bandwidth in the network, loss of packets when connecting to the Internet, and connectivity failures to other faculties and administrative buildings.
• Faculty of health and human sciences: lack of and failures in connectivity in the wireless network, failures in connectivity in the wired network, low bandwidth in the network, loss of packets when connecting to the Internet, and connectivity failures to other faculties and administrative buildings.
• Faculty of jurisprudence and political science: lack of and failures in connectivity in the wireless network, failures in connectivity in the wired network, low bandwidth in the network, loss of packets when connecting to the Internet, and connectivity failures to other faculties and administrative buildings.
• Faculty of education, social, philosophical and humanistic sciences: lack of and failures in connectivity in the wireless network, failures in connectivity in the wired network, low bandwidth in the network, loss of packets when connecting to the Internet, and connectivity failures to other faculties and administrative buildings.
• Administrative buildings: lack of and failures in connectivity in the wireless network, failures in connectivity in the wired network, low bandwidth in the network, loss of packets when connecting to the Internet, and connectivity failures to other faculties and administrative buildings.
After analyzing the subdivisions of the university campus, we can identify the origin of the problems described above:
• Lack of and failures in connectivity in the wireless network: due to poor dimensioning of the wireless network and, in certain cases, the age-related deterioration of the access points.
• Connectivity failures in the wired network: due to faulty network points and, in certain cases, to UTP cables that have already reached the end of their useful life.
• Low bandwidth in the network: due to the speed contracted by the institution, which is not in line with the high demand from users, and, to an even greater extent, to poor use of the network.
• Loss of packets when connecting to the Internet and connectivity failures to other faculties and administrative buildings: this error is due to a poor implementation of the existing backbone, since it does not meet the speeds required by the university and contains several sections that use UTP, generating bottlenecks in the flow of information.
Campus El Aguacoto
The El Aguacoto campus was subdivided as follows:
• Datacenter building: a lack of network points in the classrooms and in some offices was observed, together with limited wireless connectivity outdoors.
• Teaching cubicles: lack of network points in some offices, limited wireless connectivity outdoors, low bandwidth in the network, packet loss when connecting to the Internet, and connectivity failures to other buildings.
• Research building: limited wireless connectivity outdoors, low network bandwidth, packet loss when connecting to the Internet, and connectivity failures to other buildings.
• Library: limited wireless connectivity outdoors, low network bandwidth, packet loss when connecting to the Internet, and connectivity failures to other buildings.
• Dairy factory: limited outdoor wireless connectivity, low network bandwidth, packet loss when connecting to the Internet, and connectivity failures to other buildings.
When analyzing the subdivisions of the university campus, we can identify the origin of the problems described above:
• Limited outdoor wireless connectivity: lack of outdoor access points.
• Limited wired connectivity: lack of network points.
• Low bandwidth in the network: due to the speed contracted by the institution, which is not in line with the high demand of users, and, to an even greater extent, to poor use of the network.
• Loss of packets when connecting to the Internet and connectivity failures to other buildings: this error is due to a wireless backbone that has reached the end of its useful life and currently does not support the bandwidth required by the university.
Campus San Miguel
The San Miguel campus is made up of:
• First floor of the central building: no wireless connectivity, low network bandwidth, connectivity failures to the second floor.
• Second floor of the central building: no wireless connectivity, low bandwidth, connectivity failures to the first floor.
• Leveling unit offices: no wireless connectivity, low bandwidth, connectivity failures to the second floor.
• Auditorium: no connectivity.
• Outdoors: no connectivity.
• Library: lack of access points.
The problems described are caused by the following reasons:
• No connectivity: need for a design that meets the requirements of these campus components.
• Low bandwidth in the network: due to the speed contracted by the institution, which is not in line with the high demand of users, and, to an even greater extent, to poor use of the network.
• Connectivity failures: in the case of this campus, the failures are mainly due to bad practice in applying structured cabling standards, which causes UTP cables to be moved from their corresponding ports, and to the need to replace campus network equipment from time to time as it reaches the end of its useful life.

3.3 Actions Applied
The actions described below have been budgeted at around five hundred thousand US dollars and will be executed in two phases. The first phase, currently in execution, comprises the main campus and part of the Aguacoto campus wireless network, while the second phase corresponds to the backbone of the Aguacoto campus network, the remaining part of the Aguacoto campus wireless network, and the San Miguel campus.

Main Campus Bolívar State University
To solve the problem of wireless connectivity, a redesign of this network is carried out, taking into account the number of users that the network can host, the number of recurring users, and the applicable regulations. For the determination of Internet traffic, the parameters of the ETSI EG 202 057-4 standard referring to "Web-browsing - HTML", "Bulk Data" and "E-mail (server access)" are used, as shown in Table 1 [16].
To solve connectivity failures in the wired network, defective network points and UTP cables that had already reached the end of their useful life were detected and replaced. The bandwidth was doubled, reaching 200 Mbps, and a document on the correct use of the network still needs to be written. Finally, a new backbone has been designed, since the previous one did not meet the necessary requirements and generated great dissatisfaction among network users; the design is fully compatible with the current datacenter and the equipment found at the UEB headquarters (see Fig. 1).
Table 1. Regulations for determining traffic

Application | Degree of symmetry | Typical amount of data | One-way delay
Web-browsing - HTML | Primarily one-way | ∼10 KB | Preferred