Advances in Intelligent Systems and Computing Volume 1297
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, and life science are covered. The list of topics spans all the areas of modern intelligent systems and computing, such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, perception and vision, DNA and immune based systems, self-organizing and adaptive systems, e-learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **
More information about this series at http://www.springer.com/series/11156
Jezreel Mejia · Mirna Muñoz · Álvaro Rocha · Yadira Quiñonez
Editors
New Perspectives in Software Engineering Proceedings of the 9th International Conference on Software Process Improvement (CIMPS 2020)
Editors

Jezreel Mejia
Centro de Investigación en Matemáticas A.C., Unidad Zacatecas, Zacatecas, Mexico

Mirna Muñoz
Centro de Investigación en Matemáticas A.C., Unidad Zacatecas, Zacatecas, Mexico

Álvaro Rocha
Departamento de Engenharia Informática, Universidade de Coimbra, Coimbra, Portugal

Yadira Quiñonez
Facultad de Informática Mazatlán, Universidad Autónoma de Sinaloa, Mazatlán, Mexico
ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-63328-8  ISBN 978-3-030-63329-5 (eBook)
https://doi.org/10.1007/978-3-030-63329-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Introduction
This book contains a selection of papers accepted for presentation and discussion at the 2020 International Conference on Software Process Improvement (CIMPS 2020). This conference had the support of the Mathematics Research Center/Centro de Investigación en Matemáticas (CIMAT A.C.), the Facultad de Informática Mazatlán, Universidad Autónoma de Sinaloa, México (FIMAZ-UAS), and the Iberian Association for Information Systems and Technologies/Associação Ibérica de Sistemas e Tecnologias de Informação (AISTI). It took place at FIMAZ-UAS, Mazatlán, Sinaloa, México, from 21st to 25th October 2020.

The International Conference on Software Process Improvement (CIMPS) is a global forum for researchers and practitioners who present and discuss the most recent innovations, trends, results, experiences, and concerns in the several perspectives of software engineering, with a clear relationship to, but not limited to, software processes, security in information and communication technology, and the big data field. One of its main aims is to strengthen the drive toward a holistic symbiosis among academia, society, industry, government, and the business community, promoting the creation of networks by disseminating the results of recent research in order to align with their needs.

CIMPS 2020 built on the successes of CIMPS’12, CIMPS’13, and CIMPS’14, which took place in Zacatecas, Zac.; CIMPS’15, which took place in Mazatlán, Sinaloa; CIMPS’16, which took place in Aguascalientes, Aguascalientes, México; CIMPS’17, which took place again in Zacatecas, Zac., México; CIMPS’18, which took place in Guadalajara, Jalisco, México; and the last edition, CIMPS’19, which took place in León, Guanajuato, México.

The Program Committee of CIMPS 2020 was composed of a multidisciplinary group of experts intimately concerned with software engineering and information systems and technologies. They had the responsibility of evaluating, in a ‘blind review’ process, the papers received for each of the main themes proposed for the conference: Organizational Models, Standards and Methodologies; Knowledge Management; Software Systems, Applications and Tools; Information and Communication Technologies; and Processes in non-software domains (mining, automotive, aerospace, business, health care,
manufacturing, etc.) with a demonstrated relationship to software engineering challenges. CIMPS 2020 received contributions from several countries around the world. The articles accepted for presentation and discussion at the conference are published by Springer (this book), and extended versions of the best selected papers will be published in relevant journals, including SCI/SSCI- and Scopus-indexed journals. We acknowledge all those who contributed to the staging of CIMPS 2020 (authors, committees, and sponsors); their involvement and support are very much appreciated.

October 2020
Jezreel Mejia Mirna Muñoz Yadira Quiñonez Álvaro Rocha
Organization
Conference General Chairs

Jezreel Mejía, Mathematics Research Center, Research Unit Zacatecas, Mexico
Mirna Muñoz, Mathematics Research Center, Research Unit Zacatecas, Mexico

The general chairs and co-chair are researchers in computer science at the Research Center in Mathematics, Zacatecas, México. Their research field is software engineering, focusing on process improvement, multimodel environments, project management, acquisition and outsourcing processes, solicitation and supplier agreement development, agile methodologies, metrics, validation and verification, and information technology security. They have published several technical papers on acquisition process improvement, project management, TSPi, CMMI, and multimodel environments. They have been members of the team that translated CMMI-DEV v1.2 and v1.3 to Spanish.

General Support

CIMPS General Support represents centers, organizations, or networks. These members collaborate with different European, Latin American, and North American organizations. The following people have been members of the CIMPS conference since its foundation, nine years ago.

Gonzalo Cuevas Agustín, Polytechnic University of Madrid, Spain
Jose A. Calvo-Manzano Villalón, Polytechnic University of Madrid, Spain
Tomas San Feliu Gilabert, Polytechnic University of Madrid, Spain
Álvaro Rocha, Universidade de Lisboa, Portugal
Local Committee

CIMPS established a local committee with members from the Mathematics Research Center, Research Unit Zacatecas, Mexico, and the Faculty of Informatics of the Autonomous University of Sinaloa (FIMAZ-UAS), Mexico. The list below comprises the local committee members.

CIMAT UNIT ZACATECAS
Isaac Rodríguez Maldonado (Support), Mexico
Ana Patricia Montoya Méndez (Support), Mexico
Héctor Octavio Girón Bobadilla (Support), Mexico
Israel Faustino Cruz (Support), Mexico
Einar Jhordany Serna (Support), Mexico
Edgar Bonilla Rivas (Support), Mexico
Elizabeth Villanueva Rosas, Mexico
Ana Patricia Montoya Méndez, Mexico

FIMAZ-UAS
Rogelio Estrada Lizárraga (Local Chair), Mexico
Alma Yadira Quiñonez Carrillo (Local Co-chair), Mexico
Rosa Leticia Ibarra Martínez (Public Relations), Mexico
Delma Lidia Mendoza Tirado (Public Relations), Mexico
Carmen Mireya Sánchez Arellano (Finance), Mexico
Bertha Elena Félix Colado (Finance), Mexico
Héctor Luis López López (Logistics), Mexico
Diana Patricia Camargo Saracho (Logistics), Mexico
Miguel Ángel Astorga Sánchez (Staff), Mexico
Juan Ignacio Franco González (Staff), Mexico
Oscar Manuel Peña Bañuelos (Conferences), Mexico
Ana María Delgado Burgueño (Conferences), Mexico
Rogelio Alfonso Noris Covarrubias (Webmaster), Mexico
Manuel Iván Tostado Ramírez (Scientific Local Committee), Mexico
Ana Paulina Alfaro Rodríguez (Scientific Local Committee), Mexico
Lucio Guadalupe Quirino Rodríguez (Scientific Local Committee), Mexico
Alán Josué Barraza Osuna (Scientific Local Committee), Mexico
Rafael Mendoza Zatarain (Scientific Local Committee), Mexico
José Nicolás Zaragoza González (Scientific Local Committee), Mexico
Sandra Olivia Qui Orozco (Scientific Local Committee), Mexico
Juan Francisco Peraza Garzón (Scientific Local Committee), Mexico
Raquel Aguayo Gonzalez (Logistics), Mexico
Scientific Program Committee

CIMPS established an international committee of selected well-known experts in software engineering who are willing to be mentioned in the program and to review a set of papers each year. The list below comprises the Scientific Program Committee members.

Adriana Peña Pérez-Negrón, University of Guadalajara, Mexico
Alejandro Rodríguez González, Polytechnic University of Madrid, Spain
Alejandra García Hernández, Autonomous University of Zacatecas, Mexico
Álvaro Rocha, Universidade de Lisboa, Portugal
Ángel M. García Pedrero, Polytechnic University of Madrid, Spain
Antoni Lluis Mesquida Calafat, University of Islas Baleares, Spain
Antonio de Amescua Seco, University Carlos III of Madrid, Spain
Baltasar García Perez-Schofield, University of Vigo, Spain
Benjamín Ojeda Magaña, University of Guadalajara, Mexico
Carlos Abraham Carballo Monsivais, CIMAT Unit Zacatecas, Mexico
Carla Pacheco, Technological University of Mixteca, Oaxaca, Mexico
Carlos Alberto Fernández y Fernández, Technological University of Mixteca, Oaxaca, Mexico
Claudio Meneses Villegas, Catholic University of the North, Chile
Edgar Alan Calvillo Moreno, Technological University of Aguascalientes, Mexico
Edgar Oswaldo Díaz, INEGI, Mexico
Eleazar Aguirre Anaya, National Polytechnic Institute, Mexico
Fernando Moreira, University of Portucalense, Portugal
Francisco Jesus Rey Losada, University of Vigo, Spain
Gabriel A. García Mireles, University of Sonora, Mexico
Giner Alor Hernández, Technological University of Orizaba, Mexico
Gloria P. Gasca Hurtado, University of Medellin, Colombia
Gonzalo Cuevas Agustín, Polytechnic University of Madrid, Spain
Gonzalo Luzardo, Higher Polytechnic School of Litoral, Ecuador
Gustavo Illescas, National University of Central Buenos Aires Province, Argentina
Héctor Cardona Reyes, CIMAT Unit Zacatecas, Mexico
Héctor Duran Limón, University of Guadalajara CUCEA, Mexico
Himer Ávila George, University of Guadalajara, Mexico
Hugo Arnoldo Mitre Hernández, CIMAT Unit Zacatecas, Mexico
Hugo O. Alejandrez-Sánchez, National Center for Research and Technological Development (CENIDET), Mexico
Iván García Pacheco, Technological University of Mixteca, Oaxaca, Mexico
Jaime Muñoz Arteaga, Autonomous University of Aguascalientes, Mexico
Jezreel Mejía Miranda, CIMAT Unit Zacatecas, Mexico
Jorge Luis García Alcaraz, Autonomous University of Juárez City, Mexico
José Alberto Benítez Andrades, University of León, Spain
Jose A. Calvo-Manzano Villalón, Polytechnic University of Madrid, Spain
José Antonio Cervantes Álvarez, University of Guadalajara, Mexico
José Antonio Orizaga Trejo, University of Guadalajara CUCEA, Mexico
José E. Guzmán-Mendoza, Polytechnic University of Aguascalientes, Mexico
José Guadalupe Arceo Olague, Autonomous University of Zacatecas, Mexico
José Luis Sánchez Cervantes, Technological University of Orizaba, Mexico
Juan Francisco Peraza Garzón, Autonomous University of Zacatecas, Mexico
Juan Manuel Toloza, National University of Central Buenos Aires Province, Argentina
Leopoldo Gómez Barba, University of Guadalajara, Mexico
Lisbeth Rodriguez Mazahua, Technological University of Orizaba, Mexico
Lohana Lema Moreta, University of the Holy Spirit, Ecuador
Luis Omar Colombo Mendoza, Technological University of Orizaba, Mexico
Luz Sussy Bayona Ore, Autonomous University of Peru
Magdalena Arcilla Cobián, National Distance Education University, Spain
Manuel Mora, Autonomous University of Aguascalientes, Mexico
Manuel Pérez Cota, University of Vigo, Spain
María de León Sigg, Autonomous University of Zacatecas, Mexico
María del Pilar Salas Zárate, Technological University of Orizaba, Mexico
Mario Andrés Paredes Valverde, University of Murcia, Spain
Mary Luz Sánchez-Gordón, Østfold University College, Norway
Miguel Ángel De la Torre Gómora, University of Guadalajara CUCEI, Mexico
Mirna Muñoz Mata, CIMAT Unit Zacatecas, Mexico
Omar S. Gómez, Higher Polytechnic School of Chimborazo, Ecuador
Patricia María Henríquez Coronel, University Eloy Alfaro de Manabí, Ecuador
Perla Velasco-Elizondo, Autonomous University of Zacatecas, Mexico
Ramiro Goncalves, University of Trás-os-Montes, Portugal
Raúl Aguilar Vera, Autonomous University of Yucatán, Mexico
Ricardo Colomo Palacios, Østfold University College, Norway
Santiago Matalonga, University of the West of Scotland, UK
Sergio Galván Cruz, Autonomous University of Aguascalientes, Mexico
Sodel Vázquez Reyes, Autonomous University of Zacatecas, Mexico
Sonia López Ruiz, University of Guadalajara, Mexico
Stewart Santos Arce, University of Guadalajara, Mexico
Sulema Torres Ramos, University of Guadalajara, Mexico
Tomas San Feliu Gilabert, Polytechnic University of Madrid, Spain
Ulises Juárez Martínez, Technological University of Orizaba, Mexico
Vianca Vega, Catholic University of the North, Chile
Víctor Saquicela, University of Cuenca, Ecuador
Viviana Y. Rosales Morales, University of Veracruz, Mexico
Yadira Quiñonez, Autonomous University of Sinaloa, Mexico
Yasmin Hernández, INEEL, Mexico
Yilmaz Murat, Çankaya University, Turkey
Contents
Organizational Models, Standards and Methodologies

A Case Study of Improving a Very Small Entity with an Agile Software Development Based on the Basic Profile of the ISO/IEC 29110 (3)
Mario Negrete, Uriel Infante, and Mirna Muñoz

Building a Guideline to Reinforce Agile Software Development with the Basic Profile of ISO/IEC 29110 in Very Small Entities (20)
Sergio Galván-Cruz, Mirna Muñoz, Jezreel Mejía, Claude Y. Laporte, and Mario Negrete

Best Practices for Software Development: A Systematic Literature Review (38)
Rodrigo Ordoñez-Pacheco, Karen Cortes-Verdin, and Jorge Octavio Ocharán-Hernández

How Capital Structure Boosts ICTs Adoption in Mexican and Colombian Small Firms: A PLS-SEM Multigroup Analysis (56)
Héctor Cuevas-Vargas, Héctor Abraham Cortés-Palacios, Gildardo Adolfo Vargas-Aguirre, and Salvador Estrada

CHAT SPI: Knowledge Extraction Proposal Using DialogFlow for Software Process Improvement in Small and Medium Enterprises (71)
Jezreel Mejía, Isaac Rodríguez-Maldonado, and Yadira Quiñonez

Process Model to Develop Educational Applications for Hospital School Programs (86)
Jaime Muñoz-Arteaga, César Velázquez Amador, Héctor Cardona Reyes, Miguel Ortiz Esparza, and Gerardo Ortiz Aguiñaga

COMET-OCEP: A Software Process for Research and Development (99)
Jesús Fonseca, Miguel De-la-Torre, Salvador Cervantes, Eric Granger, and Jezreel Mejia

Knowledge Management

Knowledge Transfer in Software Development Teams Using Gamification: A Systematic Literature Review (115)
Saray Galeano-Ospino, Liliana Machuca-Villegas, and Gloria Piedad Gasca-Hurtado

Architecture of a Platform on Sharing Endogenous Knowledge to Adapt to Climate Change (131)
Halguieta Trawina, Ibrahima Diop, Sadouanouan Malo, and Yaya Traore

Discovery and Enrichment of Knowledges from a Semantic Wiki (142)
Julie Thiombiano, Yaya Traoré, Sadouanouan Malo, and Oumarou Sié

Towards a Knowledge Condensation Tool to Capture Expertise (154)
Jose R. Martínez-Garcia, Ramón R. Palacio, Francisco-Edgar Castillo-Barrera, Gilberto Borrego, and Hector D. Marquez-Encinas

Software Systems, Applications and Tools

Building Microservices for Scalability and Availability: Step by Step, from Beginning to End (169)
Víctor Saquicela, Geovanny Campoverde, Johnny Avila, and Maria Eugenia Fajardo

Evolution of Naturalistic Programming: A Need (185)
Lizbeth A. Hernández-González, Ulises Juárez-Martínez, and Luisa M. Alducin-Francisco

Use of e-Health as an Accessibility and Management Strategy Within Health Centers in Ecuador Through the Implementation of a Progressive Web Application as a Tool for Technological Development and Innovation (199)
Joel Rivera, Freddy Tapia, Diego Terán, Hernán Aules, and Sylvia Moncayo

Proposal for a New Method to Improve the Trajectory Generation of a Robotic Arm Using a Distribution Function (213)
Yadira Quiñonez, Oscar Zatarain, Carmen Lizarraga, and Jezreel Mejía

Towards Development of a Mobile Application to Evaluate Mental Health: Systematic Literature Review (232)
Jorge A. Solís-Galván, Sodel Vázquez-Reyes, Margarita Martínez-Fierro, Perla Velasco-Elizondo, Idalia Garza-Veloz, and Claudia Caldera-Villalobos

Model Proposed for the Production of User-Oriented Virtual Reality Scenarios for Training in the Driving of Unmanned Vehicles (258)
Cristian Trujillo-Espinoza, Héctor Cardona-Reyes, and José E. Guzmán-Mendoza

Virtual Reality and Tourism: Visiting Machu Picchu (269)
Jean Diestro Mandros, Roberto Garcia Mercado, and Sussy Bayona-Oré

EEG Data Modeling for Brain Connectivity Estimation in 3D Graphs (280)
Aurora Espinoza-Valdez, Adriana Peña Pérez Negrón, Ricardo A. Salido-Ruiz, and David Bonilla Carranza

Cutting-Edge Technology for Video Games (291)
Adriana Peña Pérez Negrón, David Bonilla Carranza, and Jorge Berumen Mora

Design Techniques for Usability in m-Commerce Context: A Systematic Literature Review (305)
Israel Monzón, Paula Angeleri, and Abraham Dávila

A Taxonomy on Continuous Integration and Deployment Tools and Frameworks (323)
Patricia Ortegon Cano, Ayrton Mondragon Mejia, Silvana De Gyves Avila, Gloria Eva Zagal Dominguez, Ismael Solis Moreno, and Arianne Navarro Lepe

Accessifier: A Plug-in to Verify Accessibility Requirements for Web Widgets (337)
Gabriel Alberto García-Mireles and Ivan Moreno-Soto

M-Learning and Student-Centered Design: A Systematic Review of the Literature (349)
Yesenia Hernández-Velázquez, Carmen Mezura-Godoy, and Viviana Yarel Rosales-Morales

Implementation of Software for the Determination of Modeling Error in a Tubular Reactor (364)
José Cortes-Barreda, Juan Hernandez-Espinosa, Galo R. Urrea-García, Guadalupe Luna-Solano, and Denis Cantu-Lozano

Author Index (377)
Organizational Models, Standards and Methodologies
A Case Study of Improving a Very Small Entity with an Agile Software Development Based on the Basic Profile of the ISO/IEC 29110

Mario Negrete1, Uriel Infante1, and Mirna Muñoz2(B)

1 Rocktech, Blvd. Bosques del Campestre 201, Las Quintas, 2do. Piso, 37125 León, Mexico
{mario.negrete,uriel.infante}@rocktech.mx
2 Centro de Investigación en Matemáticas, Parque Quantum, Ciudad del Conocimiento, Avenida Lassec, Andador Galileo Galilei, Manzana 3, Lote 7, 98160 Zacatecas, Mexico
[email protected]
Abstract. Nowadays, agile methodologies are gaining attention in the software development market because they are oriented to satisfying business needs. Agile methodologies such as Scrum, Lean, and XP are among the most popular. Some organizations have implemented them in their processes, using the sprints of Scrum to distribute and monitor the workload of the project. Many organizations have problems adopting these concepts because of the common belief that agile means fast, leading them to skip work products, processes, tasks, or even roles, which causes more trouble than benefit for the team. This paper presents an overview of a guide created from agile methodologies and the Basic profile of the ISO/IEC 29110, as well as a case study performed in an organization to validate the benefits of using this guide. In addition, the results obtained were compared with the results of improvements in agile methodologies proposed by other authors.

Keywords: Agile methodology · Guide · Basic profile · ISO/IEC 29110 · Scrum · Case study
1 Introduction

Many software models, methodologies, frameworks, and standards were created to help software development organizations meet market demands such as quality, improvement, and continuous change. The Capability Maturity Model Integration (CMMI) and the ISO/IEC 12207 provide proven practices that define what an organization shall do to improve its software development processes. Their use brings benefits such as cost reduction, failure reduction, and increments in quality, productivity, and customer satisfaction [1]. Models like MoProSoft (a model for the Mexican software development industry) are oriented to Small and Medium-Sized Enterprises (SMEs) with 50 to 210 people [2]. Besides, CMMI and MoProSoft have maturity levels for their implementation. These resources are designed to improve SMEs and big organizations; however, they
are very difficult to implement in a Very Small Entity (VSE), causing problems among the team and preventing the expected benefits. A VSE is an organization, department, group, or unit composed of 2 to 25 people. Over recent years, agile software development has gained attention for its use in many software organizations, especially SMEs and VSEs, that need to handle continuous change in requirements, customer satisfaction, and delivery of the product [3]. In this context, some organizations noticed that the agile approach could simplify and improve their processes, making them more flexible. Therefore, they moved from models like CMMI to something simpler to implement in their processes [4]. In this research, we report the results obtained through the use of a guide developed from the agile methods Scrum and XP and the Basic profile of the ISO/IEC 29110 standard. This guide aims to help VSEs reinforce their agile environments with engineering practices from an international standard that improve their development process while attending to customer demands regarding budget, effort, planning, progress, changes, and quality. The obtained results were compared with those found in the related works to highlight the benefits obtained. After this introduction, the paper is structured as follows: Sect. 2 reports the background of the research work and the key concepts; Sect. 3 presents the related works and the comparison with the results obtained by other proposals; Sect. 4 gives an overview of the guide; Sect. 5 reports the case study implementation; finally, Sect. 6 includes conclusions, limitations, recommendations, and next steps.
2 Background

When some VSEs tried to implement the agile approach in their current process without a guide, they ran into issues because of inadequate implementation; such situations can cause more problems than benefits. In this context, some software standards were developed to help them handle aspects such as budget, schedule, effort, and resources [2]. This becomes critical because the quality of the products or services of an organization is a factor that can indicate the future of the business, based on customer confidence. Consultants and assessors are usually employed as a primary solution, but this solution is sometimes out of a VSE's budget [7].

2.1 Approaches

A characteristic of agile methods is that they adopt the Agile Manifesto, created by the Agile Alliance, whose values and principles aim at developing software quickly and handling the customer's changes [12]. Some agile methods are used in software development to perform project management activities and are useful when a project demands solutions that evolve across time and imply self-organizing, cross-functional teams. The software industry has
adopted agile methods as a formula to deliver a product in the expected time with an adequate budget. Nowadays, some of the agile methods most used in the software development market are Scrum, XP, and Lean. Table 1 shows the methods and approaches that are commonly used by organizations.

Table 1. Agile approaches

Scrum
Definition: Scrum is an incremental and iterative framework for software development and management, which focuses on delivering a product in time with the maximum value [3]. Scrum uses an empirical process based on 3 pillars: transparency, inspection, and adaptation [6].
Roles: Scrum has three roles: Product Owner (responsible for the success of the product), Scrum Master (responsible for ensuring work proceeds as planned), and the Development Team, who must be capable of organizing themselves and completing their tasks.

Lean
Definition: Lean is an agile toolkit for managing software development. It aims to eliminate any activities that can be considered unnecessary, to magnify the customer value, with benefits such as quality builds, the transmission of knowledge, and fast delivery [3]. Lean uses a tool called Kanban, usually integrated with Scrum, to monitor the workflow of the project, visualizing the work to reduce cost, limit the work in progress, increase quality, and deliver the product faster [8].
Roles: Not defined.

eXtreme Programming (XP)
Definition: XP is a framework based on delivering business value through short cycles of time. XP focuses on development details using pair programming, coding standards, continuous integration, small releases, and other practices [9].
Roles: XP has seven roles: tracker (responsible for checking the progress of the project), customer (shares the creation of user stories), programmer (builds the code components), coach (mentors the team in the architecture and vision of the project), manager (shares the responsibilities of the coach and the tracker), tester (checks the quality of code), and doomsayer (responsible for risk management).

2.2 ISO/IEC 29110

The ISO/IEC 29110 is a standard series developed to fulfill the needs of VSEs, intending to facilitate the implementation of software engineering international standards according to the business needs of this type of organization [2]. The agile guide applied here is based on the activities, tasks, work products, and roles of the Basic profile of the ISO/IEC 29110.
The Basic profile is the only profile that can be certified by an official organization. As Table 2 shows, the ISO/IEC 29110 provides a set of 4 profiles [2].

Table 2. ISO/IEC 29110 profiles

Entry | This profile is designed for startups and VSEs with small projects. It has two processes: project management and software implementation
Basic | This profile addresses VSEs developing one product with a single team, with two processes: project management and software implementation
Intermediate | Designed for VSEs with more than one project in parallel. It has two processes, project management and software implementation, and one conditional process, acquisition management
Advanced | Addresses VSEs that want to rise as a competitive software development business. It has two processes, project management and software implementation, and two conditional processes, acquisition management, and software transition and disposal
3 Related Works

The following related works show proposals that were implemented to improve agile development in different ways, collecting quantitative results.

In [3], the authors proposed L-ScrumBan, a process that combines Lean thinking with agile approaches. L-ScrumBan is an agile framework for managing the software development process with five tools, roles, types of meetings, and a process. A survey was used
as a research methodology to validate the framework against its five goals, which are the following:

• Agile methodologies can improve the software development process (the best result was very high).
• Scrum is the most suitable methodology for software development, but it does not have a way to visualize the workflow of the project (the best result was very high).
• Lean principles can help to develop software, but they lack the technical and managerial capabilities (the best result was very high).
• Kanban is limited to being just a management tool (the best result was very high).
• L-ScrumBan can guide the team through the lifetime of the project (the best result was very high).

In [5], the authors presented a metric, Product Backlog Rating (PBR), to measure the quality of the testing process in Scrum, performing a case study to evaluate the sprint. PBR works with the product backlog items (PBL), considering the complexity level and the Test Assessment Rating (TAR). The case study was applied to an organization that had been using Scrum for eight years, with six to seven developers and sprints of three to four weeks. The interpretation scale was worst (1), bad (2), moderate (3), good (4), and excellent (5). Five PBL were evaluated, getting the following TAR: PBL1, 3.41 points (moderate); PBL2, 4.25 points (good); PBL3, 4.58 points (good); PBL4, 3.70 points (moderate); and PBL5, 2.58 points (bad).

In [1], the authors reported Scrum+, the result of using the means-end analysis technique from the Artificial Intelligence research field to improve Scrum with the ISO/IEC 29110 Entry profile without losing the agile part. The proposal concludes that Scrum+ preserves 89% of the original Scrum while reaching a 79% level of coverage. This proposal does not include a case study to share results, impacts, and limitations.

In [4], the authors reported a mapping between CMMI and Scrum, which was applied in six organizations, comparing the previous results of these organizations with only an agile approach. The satisfaction obtained using CMMI version 1.3 with these new practices was 37%, with 17% partial satisfaction and 46% unsatisfied. They conclude that CMMI and Scrum can work together at maturity levels 2 and 3, but they report no case study results about impacts and limitations.

In [6], the authors proposed quality control activities in Scrum with the concept of a test backlog. Using a four-month case study in a medium-sized software development organization, they validated their model with the following results: comparing two sprints, the first sprint had twenty-six bugs in twenty days, but in the second sprint the quality of the product went up and the bug frequency decreased, delivering products with quality within time, cost, and scope.

In [10], the authors presented Q-Scrum, a model that integrates the ISO/IEC 29110 with Scrum using the roles, work products, and activities. The conclusion was that Scrum alone cannot achieve the requirements of the standard; however, the proposal was not implemented, so it reports no limitations or results.
The related works were selected using a systematic mapping, choosing authors who have published results about improvements in an agile environment. These results were compared with the results of the agile guide to refine the benefits obtained.
4 Guide to Reinforce VSEs

The guide provides VSEs with a way to reinforce their agile processes and facilitates the implementation of better practices of an international standard in an agile approach for software development, obtaining benefits such as project planning, cost estimation, effort estimation, progress tracking, quality control, management of changes and work products, clarity of each role, agile risk management, and rework reduction [11]. The Project Management and Software Implementation processes of the ISO/IEC 29110 were considered to create the guide to reinforce the agile approach. The guide covers all the aspects of the lifecycle, such as planning, development, and testing. From here to the end of the paper, this guide will be referred to as the "agile guide".

4.1 Agile Guide Structure

The agile guide follows the Scrum structure, which consists of seven events: E1 Project Vision Meeting, E2 Estimation Meeting, E3 Sprint Planning, E4 Sprint, E5 Daily Scrum, E6 Sprint Review, and E7 Sprint Retrospective. Through the project execution, these events produce work products, some of them typical of the agile environment and others taken from the work products of the Basic profile of ISO/IEC 29110. Figure 1 shows the Project Management process of the agile guide, which uses the Scrum structure. As the figure shows, during the project execution the process starts with Event 1, where the VSE chooses the activities to be performed according to the Mission Statement. Event 2 consists of a thirty-minute estimation meeting where the Development Team identifies the user stories of the Product Backlog and the number of sprints needed to achieve the project goals. In Event 3, the strategy to follow during the project is created according to the number of sprints. Event 4 consists of a Sprint, with a duration of two to four weeks, in which the Development Team creates the deliverable. Event 5 is the Daily Scrum, a fifteen-minute meeting where the Development Team and the Scrum Master match the project progress against the goal of that sprint, identifying possible delays or other issues during development. Event 6 is the Sprint Review, performed at the end of the Sprint to review the feature added to the deliverable and to adjust the Product Backlog. If the project is finished, the team delivers the product; otherwise, the Development Team shows the results of the Sprint to the Product Owner. Event 7 is the Sprint Retrospective, performed after the end of the Sprint, in which the Development Team, Scrum Master, and Product Owner check the following questions: What went well? What did not go so well? What could be improved?
Fig. 1. Overview of the project management process of the agile guide [11].
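To make the estimation of Event 2 concrete, the following minimal Python sketch shows one way a Development Team could record Product Backlog user stories with points of effort and derive the number of sprints needed. It is only an illustration under the 1:1 points-to-hours relation used later in Sect. 5.4; the class, field, and function names are ours, not part of the agile guide or the standard.

import math
from dataclasses import dataclass

@dataclass
class UserStory:
    identifier: str
    description: str
    effort_points: float  # assumed 1 point = 1 hour, as in Sect. 5.4

def sprints_needed(backlog, capacity_per_sprint):
    # Estimate how many sprints are required to burn down the whole
    # backlog, given the team capacity (points of effort) per sprint.
    total_points = sum(story.effort_points for story in backlog)
    return math.ceil(total_points / capacity_per_sprint)

backlog = [
    UserStory("US-01", "Sign up and log in", 40.0),
    UserStory("US-02", "Book appointments for pets", 60.0),
    UserStory("US-03", "Schedule medical consultations", 55.0),
]
print(sprints_needed(backlog, capacity_per_sprint=160.0))  # prints 1

The output of such an estimate (here, the number of sprints) is what the Development Team brings to the Product Owner before Sprint Planning in Event 3.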
Figure 2 shows the Software Implementation process proposed in the agile guide using the Scrum structure. During the project execution, only Events 3, 4, and 6 are executed, as described next. In Event 3, the change requests and the new user stories are evaluated to know whether they can be considered, the Project Repository is defined, the Product Backlog items are assigned, and the verification and validation lists are determined. Event 4 consists of coding the Software Components; creating or updating the Product Operation Guide, the Software User Documentation, and the Meeting Records; and applying testing until a satisfactory result is obtained. Event 6 is the Sprint Review, performed at the end of the Sprint to add the increment to the deliverable and update the Maintenance Record. Figures 1 and 2 are an adaptation of the original diagrams of the guide.
Fig. 2. Overview of software implementation process of the agile guide [11].
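As a toy illustration of how the verification and validation lists of Event 3 can drive Event 4, the Python sketch below iterates until every check passes, mirroring the rule that testing is applied until a satisfactory result is obtained. All names here are hypothetical; the guide does not prescribe any implementation.

def run_checks(checks):
    # Each check is a (name, function) pair whose function returns True when it passes.
    return {name: fn() for name, fn in checks}

def event4_until_satisfactory(checks, fix, max_rounds=10):
    # Apply testing and corrections until all checks pass (or give up),
    # a sketch of the Event 4 loop of the Software Implementation process.
    for _ in range(max_rounds):
        results = run_checks(checks)
        failed = [name for name, ok in results.items() if not ok]
        if not failed:
            return True
        for name in failed:
            fix(name)  # e.g. record a Correction Register entry and rework
    return False

state = {"login works": False, "docs updated": True}
checks = [(name, (lambda n=name: state[n])) for name in state]

def fix(name):
    state[name] = True  # pretend the Development Team fixed the issue

print(event4_until_satisfactory(checks, fix))  # prints True after one round of fixes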
Besides, due to the importance of the work products for understanding the agile guide, Table 3 shows the mapping between the work products of the ISO/IEC 29110 Basic profile and their equivalents in an agile environment, taking into account their features, usage, and description as part of the agile guide.

Table 3. Agile guide and ISO/IEC 29110 work products [11].

ISO/IEC 29110 Basic Profile | Agile environment | Description
Acceptance Record | Acceptance Record | Contains the list of work products delivered to the customer
Agreement | Mission Statement | Description of the work to be done during the project
Change Request | Change Request | Issue, modification, or improvement requested by the customer
Correction Register | Defects of the Software Component, potential improvements | Actions required to fix an issue or deviation blocking the plan execution
Maintenance Documentation | Maintenance Documentation | Software configuration description and environment needed to develop and test
Meeting Record | Meeting Record | Record filled in with the agreements between the participants at the end of a meeting
Product Operation Guide | Product Operation Guide | The information needed to install and handle the software
Progress Status Record | Task Board, Project Risk, Burndown Chart | Reports the progress status of the project against the Project Plan
Project Plan | Project Plan | Shows the project features to be executed in order to achieve the project goals with quality and according to the schedule
Project Repository | Project Repository | Stores the project work products and deliverables in an electronic medium
Project Repository Backup | Project Repository Backup | Backup of the Project Repository to recover the information if needed
Requirements Specification | User Stories | Software requirements
Software | Increment | Deliverable produced at the end of the Sprint for the customer, made up of a collection of Software Components
Software Component | Software Component | Code units linked in a set
Software Design | Product Design | Graphical and textual information of the software architecture
Software Product | Software Product | Set of software products
Software User Documentation | Software User Documentation | Manual with the description of the usage of the software from the perspective of the user interface
Test Cases and Test Procedures | User Acceptance Test, Unit Tests | Procedures and other elements needed to test the code
Test Report | Test Report | Results of the test execution
Traceability Record | Traceability Record | Records of the relationship among the User Stories, Product Design, Software Components, User Acceptance Tests, and Unit Tests
Verification Record | Verification Record | Records of the verification execution
Validation Record | Validation Record | Records of the validation execution
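As an illustration of the Traceability Record row of Table 3, the short Python sketch below models the relationship the record preserves among User Stories, Product Design, Software Components, User Acceptance Tests, and Unit Tests. The structure and all names are hypothetical, chosen for the example; they are not prescribed by the agile guide or the standard.

from dataclasses import dataclass, field

@dataclass
class TraceabilityRecord:
    # One record per user story, relating it to the design, code,
    # and test artifacts that realize and verify it (cf. Table 3).
    user_story: str
    product_design: str
    software_components: list = field(default_factory=list)
    user_acceptance_tests: list = field(default_factory=list)
    unit_tests: list = field(default_factory=list)

record = TraceabilityRecord(
    user_story="US-01 Sign up and log in",
    product_design="Authentication module, design v1",
    software_components=["login_view", "auth_service"],
    user_acceptance_tests=["UAT-01 valid credentials accepted"],
    unit_tests=["test_password_hashing", "test_session_token"],
)
print(record.user_story, "->", record.software_components)

Kept up to date at every Sprint Review, such records let the team answer which components and tests are affected when a user story changes.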
5 Case Study

A case study was used to evaluate and validate the results of the agile guide implementation because it is an empirical research method for software engineering [13]. The scenario was a group developing a new software project in a company located in León, Guanajuato, Mexico. The team was composed of five people who, on average, had been developing with agile methods for one and a half years. The main objective of the development was to create a pet-care mobile application and its entire infrastructure.

5.1 Case Study Design

The objective of the case study was to implement the agile guide in a new project aimed at generating a Minimum Viable Product (MVP) of a software product, to learn which points of improvement could be detected in the processes after finishing the MVP. Table 4 shows the research questions of the case study.

5.2 Preparing Data Collection

The collection of information during the case study execution was done using an online paid set of tools to manage the tasks, repositories, and schedule of the project. Besides, the information was statistically analyzed to check the results of the metrics. The information obtained was gathered in a Google datasheet.
Table 4. Case study's research questions.

Research question | Description
Did the agile guide implementation help to get benefits for the team? | This research question allows knowing whether the agile guide can help improve quality, reduce rework, enhance tracking of project progress, etc.
Did the results show quantitative and adequate metrics? | This research question allows knowing whether the selected metrics can help to see the positive, negative, and neutral results
Was any problem detected during the project execution? | This research question allows knowing whether the project had problems that affected the results
5.3 Collecting the Evidence

Before beginning the implementation, the objective was to identify, using a survey, how the team did project management and handled other software implementation matters. The survey covered requirements, planning, project status, risks, changes, team, records, quality, and customers, and it was applied in an hour and a half. The results showed that, compared against the agile guide processes, the team had a weak implementation of project management but, at the same time, was very strong in software implementation. The issues encountered in project management were: (1) the requirements were not being captured correctly; (2) there was no change control protocol; and (3) there was confusion among the team members about their roles in the project, resulting in a multi-task profile for each team member. Therefore, it was concluded that the team had knowledge of the Scrum agile method, but, in the context of a VSE, some of the steps of the agile method had been wrongly adapted to fulfill the company business process, so the team had to face a certain number of problems. On the software implementation side, the development team had more experience: version control, continuous deployment automation, and software environment management were strong tasks, and the testing procedure was the only issue found. Taking into account the weaknesses and strengths of the organization, the following steps show the process followed to reinforce the agile environment by implementing proven practices of the Basic profile using the agile guide, as well as the challenges encountered:

1. Definition of roles: From the beginning of the project, it was defined how the tasks would be distributed using the Scrum conventions and some pair programming techniques used in XP, depending on the current roles: a Scrum Master, a Product Owner, and three members of the Development Team.
2. Work products application: The team was trained on the work products and the process to comply with the events of the agile guide. The Scrum Master assumed
the role of overseeing its implementation and delegating activities to the Development Team. Besides, each work product was categorized according to the process it belonged to: Project Management or Software Implementation. Then, the first step was to create the project repository to store all the work products. This repository consisted of resources and tools that served as evidence of the work product implementation, among them online file storage services for documents, version control software to keep track of the code changes, and project management tools to track user stories, tasks, and testing reports. Some of the work products were adapted to fit the current work scheme of the team.
3. Events: The events of the Project Management and Software Implementation processes of the agile guide were introduced to the team in the same way as the work products, adapting their content to fulfill the requirements of the team's development process. The first meeting was the Project Vision Meeting, held to review the Mission Statement with the client and to define the product backlog, the number of sprints, and the composition of the development team, allowing the events of the project to be scheduled. In this case study, as mentioned before, the development team was formed by three developers, and it was decided that the project would be completed in two sprints. After that, the events took place in the established order for each sprint as follows: Estimation Meeting, Sprint Planning, Sprint, Daily Scrum, Sprint Review, and Sprint Retrospective. According to the size of the team and the objectives of the project, some events were gathered and held in one meeting, as in the cases of Estimation Meeting/Sprint Planning and Sprint Review/Sprint Retrospective.

5.4 Reporting

Following the objective of the research questions, the selected metrics help to answer: How efficiently and effectively did the team accept the agile guide into the current process? And what was the performance of the team after using the agile guide? The results of the selected metrics for the first and second sprints allow identifying the difference between the first performance of the implementation and the results after the MVP conclusion, as described next:

1. Sprint Goal Success. This metric measures the accomplishment of the goal proposed by the Product Owner. The goal is elaborated through a specific set of product backlog items.

a) The first sprint goal was the creation of two applications, a web platform and a mobile app, working together to allow the user to book appointments, check recommendations, and schedule medical consultations for their pets. This goal was completed.
b) The second sprint goal was the creation of the first module of the mobile app, supporting sign-up and login. This goal was completed.

2. Burndown Chart. This chart shows the speed of the team through the project as objectives and requirements are accomplished. Besides, it allows the team to compare actual progress with the time-based estimate.
a) The burndown chart of the first sprint is shown in Fig. 3. The vertical axis represents the points of effort of the sprint; the horizontal axis is the number of days of the sprint. The forecast for this sprint was that the 155 points would be finished in twenty-one days. In this sprint, the time defined to finish was achieved.
Fig. 3. Sprint 1 - burndown chart
b) The burndown chart of the second sprint is shown in Fig. 4. The vertical axis represents the points of effort of the sprint; the horizontal axis is the number of days of the sprint. The forecast for this sprint was that the 163.5 points would be finished in twenty-one days. In this sprint, the time defined to finish was not achieved; the delay was three days.
Fig. 4. Sprint 2 – burndown chart
3. Defect Density. This metric aims to measure the number of defects per code component size based on the points of effort.
16
M. Negrete et al.
a) The defect density in the first sprint was 8.39%, obtained from the number of defects found in the sprint (13) divided by the total effort of the sprint in hours (155).
b) The defect density in the second sprint was 4.89%, obtained from the number of defects found in the sprint (8) divided by the total effort of the sprint in hours (163.5).

4. Team Velocity. Team velocity is measured as the sum of the points of effort of the sprints divided by the number of sprints. The points of effort were defined in a 1:1 relation with hours. The team velocity for this case study was 159 user points.

5. Team Satisfaction. This metric assesses how satisfied the team members are with their work and with each other, helping to avoid team conflicts or other possible issues. The survey has the following questions: (1) Do you like the current workflow? (2) How open is the team to change? (3) Can you work with your team without problems? (4) Do you feel valued for your contributions? A scale was defined to answer these questions: strongly agree, agree, neutral, disagree, and strongly disagree.

a) The team satisfaction in the first sprint was measured using a survey in every sprint retrospective. The average for each question was: (1) Agree, (2) Strongly agree, (3) Agree, (4) Strongly agree.
b) The team satisfaction in the second sprint was measured as follows: (1) Agree, (2) Strongly agree, (3) Strongly agree, (4) Strongly agree.
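The quantitative metrics above reduce to simple arithmetic. The following Python sketch, assuming the 1:1 points-to-hours relation stated in item 4, reproduces the reported figures (13 defects over 155 points in Sprint 1, 8 over 163.5 in Sprint 2, and a velocity of about 159 points); the function names are ours, introduced only for illustration.

def defect_density(defects, effort_points):
    # Defects per point of effort, expressed as a percentage.
    return 100.0 * defects / effort_points

def team_velocity(points_per_sprint):
    # Average points of effort completed per sprint.
    return sum(points_per_sprint) / len(points_per_sprint)

def burndown(total_points, points_done_per_day):
    # Remaining effort after each day, i.e. the series that a burndown
    # chart such as Figs. 3 and 4 plots against the sprint days.
    remaining = [total_points]
    for done in points_done_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

sprints = [155.0, 163.5]
print(f"Sprint 1 defect density: {defect_density(13, sprints[0]):.2f}%")  # 8.39%
print(f"Sprint 2 defect density: {defect_density(8, sprints[1]):.2f}%")   # 4.89%
print(f"Team velocity: {team_velocity(sprints):.2f} points")              # 159.25
print(burndown(20.0, [5.0, 5.0, 10.0]))  # [20.0, 15.0, 10.0, 0.0]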
6 Discussion, Conclusion, and Next Steps

As discussion, we focus on the answers to the research questions established in the case study:

• Regarding the question about whether the agile guide implementation helped to get benefits for the team, the answer is yes: the team found new benefits from the use of the agile guide, such as better estimation, improved project planning, tracking of the project progress, and reduced rework. In addition, based on the metrics and the retrospective results, it can be seen that, for this specific organization, the project finished in better terms than previous projects that did not use the agile guide.
• Regarding the question about whether the results showed quantitative and adequate metrics, the answer is yes: the results provide quantitative and qualitative perspectives with enough information to compare with the results of the related works.
• Regarding the question about whether any problem was detected during the project execution, the answer is yes: the lack of a member dedicated to testing caused delays in the delivery of the project. This member had multiple responsibilities in the organization and other projects, which reduced his available time.

Besides, the comparison of the obtained results with the related works is provided next: (1) L-ScrumBan was validated using a survey of experts, showing that Lean principles and Kanban can improve the agile development process, but it lacks
of case study to prove their implementation; (2) Scrum + shows that covers the 89% of the original Scrum combined with the ISO/IEC 29110. This agile guide is following the Entry profile, which is certifiable and, according to the results the sprint goal success, was completed and the organization aims to get the certification to prove the use of the standard; (3) Q-Scrum only confirms that the Scrum alone cannot achieve the requirements of the standard. According to the burndown chart result, the requirements were completed only with three days of delay of the second sprint showing the effectiveness of the agile guide; (4) the Product Backlog Rating was validated using a case study getting information about the testing part, The results of the defect density show 8.39% in 159 h of development being a good sign of implementation; (5) The test backlog shows that using this adaption is possible to reduce the bugs; and (6) CMMI with Scrum mapping shows that the CMMI in maturity levels 2 and 3 can work together with Scrum. The limitation of this proposal is that there is no evidence about the implementation of team satisfaction using this approach; the team satisfaction survey shows that in a general way the team agrees with this agile guide. However, none of them provides metrics neither qualitative result to analyze the efficiency and effectiveness of their proposals, because it is complicated to use traditional metrics such as Capability Maturity Model Integration (CMMI) or Quality Improvement Paradigm (QIP) because they are not appropriate for the agile approach. Then, they are not capable to measure the “agility” of an agile process [5]. This fact highlights the contribution of this research work because we showed the quantitative analysis of the application of an agile guide, considering the team and the information through the implementation. According to some case studies, the implementation of the Basic profile of the ISO/IEC 29110, even to academia for undergraduate and graduate levels, is more appropriate than other models with other targets [2]. Besides, this standard helps organizations, which do not have the expertise to search and to adopt new practices [14]. Based on our experience, the use of Scrum and XP with the elements of the ISO/IEC 29110 brings benefits without causing issues to the team in a short time. The metrics obtained using only the agile methodology, and reinforced with the agile guide can confirm that the adoption of these activities was easy for the team, following the demands of the customer. The benefits obtained after using the agile guide were the following: • The project planning. The creation of work products helps the team to check the information at any time. Besides, from the beginning of the project with the mission statement to the end with the acceptance record, all the things such as tasks, roles, components, events, responsibilities, etc., were controlled. Another important thing was the evidence provided from all the work products. • Cost and Effort estimation. The team did not use to consider the points of effort, or the cost of the project in every module. After the agile guide implementation, there were used tools, techniques, and tactics to keep tracking through the project execution. • Tracking of the project progress. Some work products and events help to track the teamwork. The daily meeting provides a solution to know where there could be a bottleneck. 
the retrospective helps to improve the current process; and the burndown chart is used to estimate the velocity of the team.
• Reduction of risks and rework. Before the implementation of the agile guide, it was complicated to assign some tasks to the team because there were no defined responsibilities, and some tasks had to be done many times to minimize errors. After the use of the agile guide, the management helped to handle these situations by defining roles and tasks.
These conclusions follow from the results obtained through this case study after implementing the agile guide. The next step of this research work is to propose an improvement to the agile guide in the testing area, to be validated through a new case study considering metrics oriented to quantitative testing results, in order to help small organizations adopt the agile guide better and with fewer issues.
References
1. Galván-Cruz, S., Mora, M., O’Connor, R.: A means-ends design of SCRUM+: an agile-disciplined balanced SCRUM enhanced with the ISO/IEC 29110 standard. In: Mejia, J., Muñoz, M., Rocha, Á., Quiñonez, Y., Calvo-Manzano, J. (eds.) Trends and Applications in Software Engineering, CIMPS 2017. Advances in Intelligent Systems and Computing, vol. 688. Springer, Cham (2018)
2. Muñoz, M., Mejía, J., Peña, A., Lara, G., Laporte, C.Y.: Transitioning international software engineering standards to academia: analyzing the results of the adoption of ISO/IEC 29110 in four Mexican universities. Comput. Stand. Interfaces 66, 103340 (2019). https://doi.org/10.1016/j.csi.2019.03.008, ISSN 0920-5489
3. Albarqi, A., Qureshi, R.: The proposed L-Scrumban methodology to improve the efficiency of agile software development, pp. 23–35, May 2018
4. Bahaa Farid, A., Abd Elghany, A.S., Mostafa Helmy, Y.: Implementing project management category process areas of CMMI version 1.3 using Scrum practices, and assets. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 7(2) (2016). http://dx.doi.org/10.14569/IJACSA.2016.070234
5. Kayes, I., Sarker, M., Chakareski, J.: Product backlog rating: a case study on measuring test quality in Scrum. Innov. Syst. Softw. Eng. 12, 303–317 (2016). https://doi.org/10.1007/s11334-016-0271-0
6. Aamir, M., Khan, M.N.A.: Incorporating quality control activities in Scrum in relation to the concept of test backlog. Sādhanā 42, 1051–1061 (2017). https://doi.org/10.1007/s12046-017-0688-7
7. Meathawachananont, A., Buranarach, M., Amsuriya, P., Chaimongkhon, S., Krairaksa, K., Supnithi, T.: Software process capability self-assessment support system based on task and work product characteristics: a case study of ISO/IEC 29110 standard. IEICE Trans. Inf. Syst. E103.D(2), 339–347 (2020). https://doi.org/10.1587/transinf.2018EDP7303
8. Janes, A.: A guide to lean software development in action. In: 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Graz, pp. 1–2 (2015). https://doi.org/10.1109/icstw.2015.7107412
9. Blankenship, J., Bussa, M., Millett, S.: Pro Agile .NET Development with Scrum, 1st edn., pp. 1–53. Apress, New York (2011)
10. Pasini, A.C., Esponda, S., Boracchia, M., Pesado, P.M.: Q-Scrum: una fusión de Scrum y el estándar ISO/IEC 29110. In: XVIII Congreso Argentino de Ciencias de la Computación (2013)
11. ISO/IEC: Systems and software engineering - Lifecycle profiles for Very Small Entities (VSEs) - Part 5-4-2: Agile Software Development Guidelines for the Basic profile. To be published
12. Martin, R., Martin, M.: Agile Principles, Patterns, and Practices in C#. Prentice Hall (2007). ISBN-10/ASIN 0131857258
13. Runeson, P., Höst, M.: Guidelines for conducting and reporting case study research in software engineering. Empir. Softw. Eng. 14, 131–164 (2009). https://doi.org/10.1007/s10664-008-9102-8
14. Laporte, C.Y., O’Connor, R.: Software process improvement standards and guides for very small organizations: an overview of eight implementations. CrossTalk J. Defense Softw. Eng. 30(3), 23–27 (2017). ISSN 2160-1577
Building a Guideline to Reinforce Agile Software Development with the Basic Profile of ISO/IEC 29110 in Very Small Entities Sergio Galván-Cruz1(B) , Mirna Muñoz2 , Jezreel Mejía2 , Claude Y. Laporte3 , and Mario Negrete2 1 Universidad Autónoma de Aguascalientes, Av. Universidad 940 C.U, 20130 Aguascalientes,
AGS, Mexico [email protected] 2 Centro de Investigación en Matemáticas, Parque Quantum, Ciudad del Conocimiento, Av. Lassec, Andador Galileo Galilei, Manzana, 3 Lote, 7CP 98160 Zacatecas, ZAC, Mexico {mirna.munoz,jmejia,mario.negrete}@cimat.mx 3 Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada [email protected]
Abstract. The importance of Very Small Entities (VSEs) in the development chain of the software industry highlights the need to provide support for them to develop quality software products within budget and schedule. However, most VSEs do not have experience in the implementation of software engineering standards. Additionally, a large percentage of them use agile methods in an effort to produce software that meets the time demanded by the market. This paper presents a proposal of an international guide for VSEs that want to reinforce their agile environment, allowing them to develop software using an agile approach based on Scrum and XP together with practices of the ISO/IEC 29110. The agile guide will also facilitate the implementation of an agile approach for VSEs that are already using the software Basic profile of the ISO/IEC 29110 series. This paper presents the problems that VSEs using an agile method have, the process followed to develop the agile guide, and a description of the agile guide.
Keywords: ISO/IEC 29110 · Scrum · XP · VSEs · Compliance · Guide · Software development · Agile
1 Introduction
Producing software products with quality while meeting the budget and schedule are key variables for Software Development Organizations (SDOs) to have a competitive advantage. Therefore, the use of systems and software engineering standards or models is becoming highly important for SDOs. These standards or models, like the
ISO/IEC/IEEE 12207 standard and the CMMI® for Development model, have been developed to document proven management and engineering practices. Thus, large, medium-sized, small, or very small SDOs can obtain their benefits through a correct implementation of these standards or models [1, 2]. However, in practice, only medium and large-sized SDOs have the required resources (e.g., technical expertise, number of employees, financial budget, and time made available by customers) for implementing them [4]. For small (organizations having from 26 to 50 employees) or very small (up to 25 employees) software development organizations, having the required resources is a big challenge. These types of organizations are characterized by operating with very limited budgets, less technical expertise, less management expertise, a lack of interest in using heavy software development methodologies, a highly dynamic and informal organizational culture, and strong time pressure from customers [4]. Thus, in these organizations, despite the potential benefits of, and initial interest in, implementing well-defined processes supported by international standards or models, high organizational, technical, and economic barriers prevent their adoption [4, 5]. As a solution to this situation, a new software process standard, the ISO/IEC 29110, was released for this kind of organization; it aims to help VSEs with up to 25 people in the implementation of proven practices related to the Project Management (PM) process and the Systems or Software Implementation (SI) process [6]. It should be noted that the ISO/IEC 29110 standard defines the minimum activities and work products that small or very small organizations are required to perform [7, 8]. However, because of the novelty of the ISO/IEC 29110 standard, the Agile-based System Development Methodologies (ASDMs) are not currently linked with this standard, even though one of the main features of the ISO/IEC 29110 is that it can be used with any development approach or methodology [9]. This becomes a problem because, for very small entities (VSEs), especially in software development, ASDMs like SCRUM and XP [10, 11] are widely used. ASDMs offer VSEs benefits such as avoiding the heaviness and rigor of traditional software development methodologies. Given that ASDMs are becoming common system development methodologies (SDMs) for VSEs, and that the ISO/IEC 29110 can be used with any development approach or methodology, this paper focuses on providing a proposal to support VSEs in the implementation and use of the software Basic profile of the ISO/IEC 29110 series while using agile methods, specifically SCRUM and XP. This is important because the software industry increasingly demands high-quality software from suppliers that demonstrate the use of best practices from international models and standards. Hence, being hired as a supplier for a medium or big company is sometimes conditional on the VSE being able to demonstrate the use of quality models or standards. After the introduction, the rest of the paper is structured as follows: Sect. 2 presents the background, composed of an overview of key concepts; Sect. 3 shows the research basis, such as the goal, questions, and methodology; Sect. 4 describes how the agile guide was built; Sect. 5 shows the proposed agile guide; and Sect. 6 shows the conclusion and future work.
2 Background
This section covers the two concepts on which this research work is based, i.e., the ASDMs (XP and Scrum) and the ISO/IEC 29110 series.
2.1 ASDMs
According to ISO, using agile development methods allows focusing on providing rapid and frequent deliveries of high-value software [12]. There are many agile approaches; two of the most popular are Scrum and eXtreme Programming (XP) [10, 11].
• Scrum. According to the Agile Alliance (2018), “Scrum is a process framework used to manage product development and other knowledge work”. It also remarks that, if Scrum is used in the correct way, development teams can establish hypotheses about how they think things work, test them in the project, and make the appropriate adjustments, all based on their experience. Kumar [10] highlights that SCRUM is the agile method most commonly used for developing software products. It can manage projects of any scope, with special emphasis on projects with aggressive delivery dates and complex requirements. The Agile Alliance recommends Scrum because it is best suited for projects that can be split into small iterations (2–4 weeks) [13].
• Extreme Programming (XP). According to the Agile Alliance, XP is an agile software development method focused on the quality of the software. XP can also improve the quality of life of the development team. XP places special emphasis on correct software development practices. The use of XP is recommended when a project has dynamically changing requirements, its risks derive from these changes, it has small development teams, and the technology allows unit tests [13]. Besides, the objective of XP is to improve software quality and to improve the responses to change requests from customers [14]. XP is based on five essential points: planning, management, coding, designing, and testing [10].
2.2 ISO/IEC 29110
The present ISO/IEC 29110 series of systems and software engineering standards and guides aims to help VSEs in the implementation of proven engineering practices to develop non-critical products [9]. Using it, a VSE can obtain advantages such as an increase in product quality, a reduction in development time, and the development of software products within the established budget and schedule [15, 16]. This standard provides a roadmap of four profiles (i.e., Entry, Basic, Intermediate, and Advanced). Two processes are the foundation of the four software profiles: the Project Management process and the Software Implementation process. Besides, the guides provide a set of process elements, i.e., objective, activities, tasks, roles, and work products [9]. As Fig. 1 shows, according to Laporte [17], the ISO/IEC 29110 series covers a wide spectrum of development approaches in which it can be implemented. The ISO/IEC 29110 can be used with both low-ceremony development approaches, which are
those using little documentation and a light development process, and high-ceremony development approaches, which are those using a well-documented development process, traceability among artifacts, and a change control board. It also covers both waterfall and iterative approaches, and all possible combinations among these development approaches, including agile approaches such as XP, Scrum, and adaptive development.
Fig. 1. Spectrum of development approaches for the implementation of ISO/IEC 29110 [17].
3 Research Goals, Questions and Methodology
It is well known that agile methods lack rigor because they are focused on agility [10]. Therefore, this research arises from the need to help VSEs that currently implement an agile methodology (SCRUM or XP) and wish to reinforce their agile environment by implementing engineering practices using an international standard such as the ISO/IEC 29110. Thus, the goal of this research is to provide a proposal that allows VSEs to implement proven practices of the software Basic profile of ISO/IEC 29110 in an agile environment. This proposal is focused on the Basic profile because, at the present time, it is the only profile of ISO/IEC 29110 that can be certified. To address this research, two questions were established: 1) Which are the main problems that an organization using agile software development has? and 2) What information should a guide contain to support the improvement of the processes of VSEs using agile software development environments with the practices of ISO/IEC 29110? To answer the established questions, a 4-step methodology was defined:
1. Identify the problems of VSEs using an agile approach, by analyzing a sample of VSEs using an agile software development approach.
2. Perform a compliance analysis between the agile methods (SCRUM and XP) and the software Basic profile of ISO/IEC 29110.
3. Define a set of criteria to build a proposal of a guide.
4. Build the proposal of a guide using the criteria established in Step 3.
The next section describes how each step of the 4-step methodology was performed. It is important to mention that a team of 6 members was assembled to develop this research. Three of them are software engineering experts in the implementation of agile methods and of models and standards, and the other three members of the team are software engineers with experience in agile methods in VSEs.
4 Four-Step Methodology to Develop a Proposal of a Guide
4.1 Step 1 - Identify the Problems that VSEs Have During the Implementation of an Agile Method
To achieve this step, from 2017 to 2019 we analyzed software development projects of a sample of 8 Mexican VSEs using an agile environment. The analysis focused on identifying how they develop software products with respect to the activities and tasks documented in the software Basic profile of the ISO/IEC 29110. Table 1 shows the problems detected for the Project Management process. The table is divided into four sections, presenting the problems detected in the four activities of the PM process. It is important to highlight that we considered it a problem when there was no evidence that the VSE carried out the practice.

Table 1. Problems detected from the Project Management process (in the original table, an “X” marks, for each problem, which of the eight VSEs, VSE1 to VSE8, presented it)

Project Planning Activity
• Do not use any formal way to receive and/or formalize the request of a new software product (e.g., a statement of work)
• Do not establish the project goals in a formal way
• Do not carry out the definition of delivery instructions
• Do not identify and document the resources necessary for the product
• Do not have a defined team composition
• Do not calculate the efforts and costs of the project
• Do not perform risk management in an adequate way, or do not perform it at all
• Do not define how to control the work products throughout the project
• Do not carry out the verification of the project artifacts
• Do not carry out the validation of the project artifacts
• Do not carry out the planned activities
• Control none, or only some, of the work products throughout the project
• Do not manage the change requests
• Do not perform the repository backups
• Do not carry out a project plan

Project Execution Activity
• Do not have records of the meetings carried out with the team
• Do not have records of the meetings carried out with the customer

Project Assessment and Control Activity
• Keep project information outdated
• Do not record the project reviews
• Do not perform the management of corrective actions

Project Closure Activity
• Do not formalize the project closure
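As an aside, a checklist of this shape (problems as rows, VSE1 to VSE8 as columns, as in Table 1 above and Table 2 below) is easy to tabulate programmatically. The following minimal Python sketch is ours, not part of the study: the two problem names are taken from Table 1, but the VSE assignments shown are illustrative placeholders rather than the paper's data.

# Minimal sketch (ours) of how the per-VSE problem checklist behind Table 1
# can be tabulated. The problem names come from the table; the VSE sets are
# illustrative placeholders, not the data reported in the paper.
from collections import defaultdict

VSES = [f"VSE{i}" for i in range(1, 9)]

# checklist[problem] = VSEs for which there was no evidence that the
# practice was carried out (an "X" in the original table).
checklist = {
    "Do not carry out a project plan": {"VSE2", "VSE5", "VSE7"},
    "Do not manage the change requests": {"VSE1", "VSE2", "VSE4",
                                          "VSE5", "VSE6", "VSE7", "VSE8"},
}

problems_per_vse = defaultdict(int)
for problem, vses in checklist.items():
    print(f"{problem}: detected in {len(vses)} of {len(VSES)} VSEs")
    for vse in vses:
        problems_per_vse[vse] += 1

for vse in VSES:
    print(f"{vse}: {problems_per_vse[vse]} problems detected")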
Table 2 shows the problems detected from the Software Implementation process. The table is divided into six sections, presenting the problems detected in the six activities of the SI process.

Table 2. Problems detected from the software implementation process (as in Table 1, the original marks with an “X” which of the eight VSEs presented each problem)

Software Implementation Activity
• Do not document the implementation environment

Software Requirements Analysis Activity
• Do not perform traceability records of the requirements

Software Architectural and Detailed Design Activity
• Do not design or update the unit test cases
• Do not perform traceability records of the requirements with the software design
• Do not perform traceability records of the requirements with the software

Software Construction Activity
• Do not perform the unit test cases

Product Integration and Test Activity
• Do not use the test cases and the test procedures
• Do not register the results of tests
• Do not perform traceability records of the requirements with the performed tests

Product Delivery Activity
• Do not develop user, operation, and maintenance documentation

4.2 Step 2 - Perform a Compliance Analysis Between Agile Methods (SCRUM and XP) and the Basic Profile of ISO/IEC 29110
To achieve this analysis, as Fig. 2 shows, a template was created to register the mapping data. Starting from the left side, in column one, the task ID of ISO/IEC 29110 is listed as the foundation. In columns two and three, the events or practices from Scrum or XP
that match the ISO task are registered. When there is no matching Scrum event or XP practice, the task from ISO to be kept as part of the guide is registered in column four. In column five, the justification of the mapping done in the previous columns is registered. From columns six to nine, the elements found, such as inputs, tasks, roles, and work products, are registered. Finally, in column ten, the reference source from which the information was obtained is registered. Figure 2 shows an example of the mapping performed for the Project Plan Execution activity.
Fig. 2. Mapping to identify the compliance between the software Basic profile of ISO/IEC 29110 and the agile methods Scrum and XP.
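To make the layout of the template concrete, the following sketch (ours; the field names are shorthand for the ten columns described above, and the example values are illustrative placeholders, not content quoted from the actual mapping) represents one row of the template:

# Sketch (ours) of one row of the ten-column mapping template described
# above. Field names abbreviate the column headings; the example values
# are illustrative placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class MappingRow:
    iso_task_id: str          # column 1: ISO/IEC 29110 task ID
    scrum_events: List[str]   # column 2: matching Scrum events
    xp_practices: List[str]   # column 3: matching XP practices
    iso_task_kept: str        # column 4: ISO task kept when there is no match
    justification: str        # column 5: rationale for the mapping
    inputs: List[str]         # column 6: finding elements - inputs
    tasks: List[str]          # column 7: finding elements - tasks
    roles: List[str]          # column 8: finding elements - roles
    work_products: List[str]  # column 9: finding elements - work products
    source: str               # column 10: reference source

row = MappingRow(
    iso_task_id="PM.1.1",
    scrum_events=["Project Vision Meeting"],
    xp_practices=[],
    iso_task_kept="",  # empty because an agile match exists
    justification="Illustrative placeholder text",
    inputs=["Statement of Work"],
    tasks=["Review the Statement of Work"],
    roles=["Product Owner", "Customer"],
    work_products=["Statement of Work [Reviewed]"],
    source="ISO/IEC TR 29110-5-1-2",
)
print(row.iso_task_id, "->", row.scrum_events or [row.iso_task_kept])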
To perform this analysis, the members of the team were divided into subgroups of two people: one expert and one software engineer. To reduce biases, the subgroups followed the idea of the pair programming technique: the mapping was performed jointly by the two members, who discussed each mapping decision. Besides, the
teams used a peer review technique to verify the mapping. Finally, the team held work meetings to validate the results.
4.3 Step 3 - Define a Set of Criteria to Build the Agile Guide
After having completed the mapping between the Basic profile of the ISO/IEC 29110 and the agile methods, the team defined a set of criteria that helped them build the agile guide. Figure 3 shows the workflow diagram used to decide when to include a practice from an agile method and when to include a practice from the Basic profile of the ISO/IEC 29110. This approach allowed us to better develop a guide to reinforce the agile software development environment.
Fig. 3. Defined criteria and its implementation to select the elements to build the agile guide.
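The core of the workflow in Fig. 3 can be summarized, under our reading, as a simple decision function: if a Scrum event or XP practice covers an ISO/IEC 29110 task, the agile element is selected; otherwise, the practice from the Basic profile is included. The sketch below simplifies the figure's criteria and is only illustrative (the example task IDs and matches are hypothetical):

# Sketch (ours) summarizing the selection criteria of Fig. 3: keep the agile
# element when it covers the ISO/IEC 29110 task; otherwise include the
# practice from the Basic profile. The real workflow has more detail.
from typing import Optional

def select_guide_element(iso_task: str,
                         scrum_match: Optional[str],
                         xp_match: Optional[str]) -> str:
    """Decide which element covers an ISO/IEC 29110 task in the agile guide."""
    if scrum_match:
        return f"Scrum event '{scrum_match}' covers {iso_task}"
    if xp_match:
        return f"XP practice '{xp_match}' covers {iso_task}"
    # No agile coverage: keep the ISO/IEC 29110 task as part of the guide.
    return f"Keep ISO/IEC 29110 task {iso_task} in the guide"

print(select_guide_element("PM.1.1", "Project Vision Meeting", None))
print(select_guide_element("SI.3.3", None, "Test-first programming"))
print(select_guide_element("PM.2.4", None, None))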
4.4 Step 4 - Build the Agile Guide Applying the Established Criteria
The ISO agile guide was built by identifying how each practice of the Basic profile of the ISO/IEC 29110 is covered by performing an agile event, focusing on inputs, work products, tasks, roles, and output work products. Figure 4 shows the work done for the task titled PM.1.1 Review the Statement of Work, which is the first task of the PM.1 Project Planning activity of the Project Management process. As Fig. 4 shows, a template was designed to register the information as follows: (1) the identification of the process and/or activity; (2) the definition of the event and/or practices executed in the agile environment; (3) a table with the previously performed mapping; (4) the justification of the mapping;
Fig. 4. Example of the mapping for the task review the statement of work of the basic profile ISO/IEC 29110 to an agile environment.
(5) and the elements identified in the agile environment that are performed to cover the ISO/IEC 29110 tasks. The same work was done for the 26 tasks of the Project Management process and for the 41 tasks of the Software Implementation process.
5 Proposed Guide
After applying the defined criteria to select the practices to be included in the guide, the results showed that most of the Scrum practices are related to the Project Management process, while most of the XP practices are related to the Software Implementation process. Therefore, the team decided to take the events of Scrum as the foundation of the guide to cover both the Project Management and Software Implementation processes. This section provides an overview of the proposed guide obtained by performing the 4-step methodology. The proposed guide, named from here to the end of the paper the “agile guide”, aims to provide a guideline for VSEs that are using an agile approach based on Scrum and XP and want to reinforce their agile environment to develop software with practices of the ISO/IEC 29110 [18]. The agile guide also aims to facilitate the implementation of an agile approach in VSEs that are already using the software Basic profile of the ISO/IEC 29110 series.
Figure 5 shows the events of Scrum related to the Project Management process. As the figure shows, the Scrum events related to the Project Management process are shown in blue and those not related to it are greyed. Therefore, except for the Sprint, most of the events of Scrum (i.e., Project Vision Meeting, Estimation Meeting, Sprint Planning, Sprint Review, and Sprint Retrospective) should be performed to cover the activities related to the Project Management process.
Fig. 5. Events of Scrum from the Project Management process [18].
Figure 6 presents the events of Scrum related to the Software Implementation process. As the figure shows, the Scrum events related to the Software Implementation process are shown in blue and those not related to it are greyed. Three events (i.e., Sprint, Sprint Planning, and Sprint Review) should be performed to cover the activities related to the Software Implementation process.

Fig. 6. Events of Scrum from the Software Implementation process [18].

Next, a brief description of the structure of the agile guide and its elements is presented. It is important to highlight that the agile guide was structured following the way in which the software Basic profile is described, to facilitate the understanding of VSEs on how to implement the proven practices while they continue working as they are used to. The agile guide for VSEs is structured as follows:
• Input Work Products for the Project Management events: this section provides a mapping between the work products of the software Basic profile of the ISO/IEC 29110 and the work products of an agile environment. The agile guide includes input
work products for the Project Management process and the Software Implementation process.
• Output Work Products for the Project Management events: this section provides a mapping between the work products of the software Basic profile of the ISO/IEC 29110 and the work products of an agile environment. The guide includes output work products for the Project Management process and the Software Implementation process.
• Internal Work Products of the guide for the Project Management events: this section provides a mapping between the internal work products of the software Basic profile of the ISO/IEC 29110 and the work products of an agile environment that are inputs for some events; these products are not focused on the customer but are useful for the team. The guide includes internal work products for the Project Management process and the Software Implementation process.
• Roles involved for reinforcing an agile environment: this section contains a mapping between the roles of the software Basic profile of the ISO/IEC 29110 and the roles of an agile environment.
• Events description: this section provides a detailed description of how to reinforce an agile environment using proven practices of the software Basic profile of the ISO/IEC 29110. This agile guide was developed such that VSEs would execute the tasks as they would follow a recipe to prepare a complex meal. The following elements are included in the guide: (1) the name of the event; (2) the event description; (3) a diagram of the event overview; (4) a diagram of the event tasks; (5) a table in which the data of
role, task list, input work products, and output work products are registered; and (6) a table in which a set of additional tasks, to be performed during the execution of the event, is suggested (this table is useful when a VSE wants to initiate a certification). It is important to highlight that each Scrum event (Project Vision Meeting, Estimation Meeting, Sprint Planning, Sprint, Sprint Review, and Sprint Retrospective) was developed using the same elements. To illustrate one element of the agile guide, we provide an example of an event. Figure 7 shows the overview of the Project Vision Meeting event.
Fig. 7. Project vision meeting event [18].
As Fig. 7 shows, this diagram provides an overview of the inputs and outputs of the Project Vision Meeting event. This event receives the Mission Statement as an input, and it produces, as outputs, the following work products: the Mission Statement [Reviewed], the Development Team [Integrated], the Product Backlog [Developed and Prioritized], the Number of Sprints [Defined], and the Project Plan. Figure 8 shows the diagram of the tasks for the Project Vision Meeting event. The agile guide describes the events in more depth; this sequence tries to lead the user to a better understanding of every task executed in the event. As Fig. 8 shows, this diagram describes the flow of the tasks performed in the Project Vision Meeting event, where the Mission Statement is an input of task TE.PM.1.1, which produces, as output, the Mission Statement [Reviewed]. This work product is the input for task TE.PM.1.2, as well as for TE.PM.1.3, TE.PM.1.4, and TE.PM.1.6. The execution of TE.PM.1.2 produces the Development Team [Integrated] as output; the execution of TE.PM.1.3 produces the Product Backlog [Developed]; the execution of TE.PM.1.4 produces the Product Backlog [Prioritized]; and the execution of TE.PM.1.6 produces the Project Plan. The input of TE.PM.1.5 is the Development Team [Integrated], and it provides the Number of Sprints [Defined] as output. The tasks marked with an asterisk (*) are additional tasks that should be implemented if the VSE wants to initiate a formal certification of the two processes of the software Basic profile. Table 3 shows the detailed description of the Project Vision Meeting tasks, and Table 4 shows the complementary tasks suggested for the Project Vision Meeting event.
Fig. 8. Tasks of the project vision meeting event [18].
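Read as data, the task flow of Fig. 8 is a small pipeline in which every input is either the initial Mission Statement or the output of an earlier task. The following minimal sketch (ours, not part of the guide) encodes the task IDs and work products exactly as described above and checks that consistency; the asterisk marks the additional task for VSEs aiming at certification.

# Sketch (ours) of the Project Vision Meeting task flow of Fig. 8 as an
# input/output table. Task IDs and work product names are those described
# above; the representation itself is only illustrative.
TASKS = {
    "TE.PM.1.1": {"inputs": ["Mission Statement"],
                  "outputs": ["Mission Statement [Reviewed]"]},
    "TE.PM.1.2": {"inputs": ["Mission Statement [Reviewed]"],
                  "outputs": ["Development Team [Integrated]"]},
    "TE.PM.1.3": {"inputs": ["Mission Statement [Reviewed]"],
                  "outputs": ["Product Backlog [Developed]"]},
    "TE.PM.1.4": {"inputs": ["Mission Statement [Reviewed]",
                             "Product Backlog [Developed]"],
                  "outputs": ["Product Backlog [Prioritized]"]},
    "TE.PM.1.5": {"inputs": ["Development Team [Integrated]"],
                  "outputs": ["Number of Sprints [Defined]"]},
    "*TE.PM.1.6": {"inputs": ["Mission Statement [Reviewed]"],
                   "outputs": ["Project Plan"]},
}

# Consistency check: every input must be the initial work product or the
# output of an earlier task in the flow.
available = {"Mission Statement"}
for task_id, io in TASKS.items():
    missing = [wp for wp in io["inputs"] if wp not in available]
    assert not missing, f"{task_id} is missing {missing}"
    available.update(io["outputs"])
    print(f"{task_id}: {io['inputs']} -> {io['outputs']}")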
Also, the guide provides the description of each task in a table following the same representation as the tasks diagram. This table aims to provide a ‘recipe’ of the events to help a VSE implement them in a quick and easy way. As Table 3 shows, it includes four columns: the first column lists the roles involved in executing a task, the second column describes the task, the third column lists the inputs of the task, and the fourth column lists the outputs of the task. Some of the role abbreviations used in Table 3 are: Customer (CUS), Product Owner (PO), and Scrum Master (SM). If a VSE wants to initiate a formal certification of the two processes of the software Basic profile in an agile environment, it has to execute additional tasks. Complementary to Table 3, the guide includes a second table, Table 4, which provides the additional tasks to be executed during the execution of Event 1 - Project Vision Meeting. The same process was followed for the other six events: E2. Estimation Meeting; E3. Sprint Planning; E4. Sprint; E5. Daily Scrum; E6. Sprint Review; and E7. Sprint Retrospective. For reasons of space, these events are not presented in this paper. The verification of the ISO agile guide has been done by experts of the ISO Working Group 24 (WG24). The experts reviewed it and made comments to identify defects and improve the proposed guide during meetings of the working group. Once the experts of WG24 were satisfied with the proposed agile guide, experts of the 58 countries [19] that participate in the development of software and systems engineering standards reviewed the guide and made comments to improve it and correct defects of the ISO agile guide.
Table 3. Event 1 - Project Vision Meeting task list [18]
Roles: CUS, PO, SM
Task list: TE.PM.1.1 Review the Mission Statement supplied by the Customer. Using the Mission Statement, the Development Team and the Product Owner should get an understanding of the requested product. Note: If the VSE has not documented its Mission Statement, a template is provided in Annex A of this guide.
Input work products: Mission Statement
Output work products: Mission Statement [Reviewed]

Roles: CUS, PO, SM
Task list: TE.PM.1.2 Integrate the Development Team. During the Project Vision Meeting, the Development Team is integrated and documented as part of the Project Plan.
Input work products: Mission Statement [Reviewed]
Output work products: Development Team [Integrated]

Roles: CUS, PO, SM
Task list: TE.PM.1.3 Develop the Product Backlog. Taking the Mission Statement as a base, the Development Team together with the Customer should identify the activities that must be performed to meet the project, and name them the Product Backlog. Note: At this point, only the elements of the Product Backlog are identified.
Input work products: Mission Statement [Reviewed], Development Team [Integrated]
Output work products: Mission Statement [Reviewed], Product Backlog [Developed]

Roles: CUS, PO, SM
Task list: TE.PM.1.4 Prioritize the Product Backlog according to Customer business goals provided in the Mission Statement. Note 1: Due to the nature of Scrum, when a Sprint is finished, the Development Team should deliver an increment to the Customer; therefore, the Development Team should ensure that the product is being developed according to the customer's priority. Note 2: Delivery instructions are reinforced by the Definition of Done, defined for each Sprint Backlog item.
Input work products: Product Backlog [Developed], Mission Statement [Reviewed]
Output work products: Mission Statement [Reviewed], Product Backlog [Prioritized]

Roles: CUS, PO, SM
Task list: TE.PM.1.5 Determine the Number of Sprints to be performed during the project. Note 1: Due to the nature of Scrum, a Sprint is the time-box in which the Development Team should produce an increment of the product to be delivered to the customer. Note 2: The duration of a Sprint is typically between 2 and 4 weeks. Note 3: Once the Number of Sprints is determined, it is added to the Project Plan.
Input work products: Mission Statement [Reviewed], Product Backlog [Prioritized], Development Team [Integrated]
Output work products: Mission Statement [Reviewed], Number of Sprints [Defined]

Table 4. Event 1 - Project Vision Meeting additional task
Roles: PO, SM
Task list: TE.PM.1.6 Include Product Description, Scope, Objectives, and Deliverables in the Project Plan. Note: In an agile environment, the Product Backlog description, Scope, Objectives, and Deliverables provide a better understanding of the project.
Input work products: Mission Statement [Reviewed]
Output work products: Project Plan
6 Conclusion and Future Work
Nowadays, agile methodologies have had a relevant impact on VSEs, so that a high percentage of them use an agile environment to develop their software products. The main problem detected by performing this research was that VSEs do not adequately implement agile methods; therefore, they have different levels of control and maturity regarding the use of the agile environment. Based on this fact, there is a need to reinforce the frameworks provided by agile methods to ensure the quality of their products.
Based on the results of the analysis of the problems presented by 8 VSEs, focused on their software development environments and the ISO/IEC 29110, it can be concluded that the proven practices provided by the software Basic profile of ISO/IEC 29110 can be easily implemented to reinforce an agile environment. The development of the agile guide aims to help VSEs using agile environments: on the one hand, to understand how they can meet a standard such as the ISO/IEC 29110 while performing their agile practices and events; on the other hand, to provide practices that can reinforce the agile development process used to develop software products.
Regarding the ISO agile guide, two of the most popular agile methodologies were taken as a foundation: Scrum and eXtreme Programming. The guide will allow VSEs using an agile software development environment to implement, in an easy and quick way, proven practices from the software Basic profile of the ISO/IEC 29110 without changing their agile approach. It is important to mention that the ISO agile guide presented in this paper is a proposal that is presently being reviewed by the experts of ISO. As future work, after this international guide is approved and published by ISO in 2021, it could be replicated for the other 3 software profiles of ISO/IEC 29110 (i.e., the Entry, Intermediate, and Advanced profiles). Also, beyond the Basic profile analyzed in this research, the agile guide could add other agile approaches such as UPEDU, Crystal, FDD, etc. Finally, a full compliance study is still missing to evaluate the qualitative and quantitative results of the implementation of the agile guide in VSEs. To complement the agile guide for VSEs that have adopted, or want to adopt, a DevOps approach, Working Group 24 has recently started the development of an ISO/IEC 29110 DevOps Guide [20].
References
1. Laporte, C.Y., O’Connor, R.V., Paucar, L.H.G.: The implementation of ISO/IEC 29110 software engineering standards and guides in very small entities. In: Maciaszek, L.A., Filipe, J. (eds.) Evaluation of Novel Approaches to Software Engineering. ENASE 2015. Communications in Computer and Information Science, vol. 599. Springer, Cham (2016)
2. Larrucea, X., O’Connor, R.V., Colomo-Palacios, R., Laporte, C.Y.: Software process improvement in very small organizations. IEEE Softw. 33(2), 85–89 (2016)
3. Laporte, C.Y., Mejia, J.: Delivering software- and systems-engineering standards for small teams. Computer 53(8) (2020)
4. Ania, I., Mejía, M.: Considering the growth of the software services industry in Mexico. Inf. Technol. Dev. 13(3), 269–291 (2007)
5. Bourque, P., Fairley, R.E. (eds.): Guide to the Software Engineering Body of Knowledge, Version 3.0. IEEE Computer Society (2014)
6. Laporte, C.Y., O’Connor, R.: Software process improvement standards and guides for very small organizations: an overview of eight implementations. CrossTalk J. Defense Softw. Eng. 30(3), 23–27 (2017)
7. Takeuchi, M., Kohtake, N., Shirasaka, S., Koishi, Y., Shioya, K.: Report on an assessment experience based on ISO/IEC 29110. J. Softw. Evol. Process 26(3), 306–312 (2014)
8. O’Connor, R.V., Laporte, C.Y.: Software project management in very small entities with ISO/IEC 29110. In: Winkler, D., O’Connor, R.V., Messnarz, R. (eds.) Systems, Software and Services Process Improvement. EuroSPI 2012. Communications in Computer and Information Science, vol. 301. Springer, Heidelberg (2012)
9. ISO/IEC: Software engineering - Lifecycle profiles for Very Small Entities (VSEs) - Part 5-1-2: Management and engineering guide: Generic profile group: Basic profile. ISO/IEC TR 29110-5-1-2:2011. Technical report (2011). http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
10. Kumar, G., Bhatia, P.K.: Impact of agile methodology on software development process. Int. J. Comput. Technol. Electron. Eng. (IJCTEE) 2(4), 46–50 (2012)
11. Dingsøyr, T., Nerur, S., Balijepally, V., Moe, N.B.: A decade of agile methodologies: towards explaining agile software development (2012)
12. ISO/IEC/IEEE: International Standard ISO/IEC/IEEE 26515:2018 - Systems and software engineering - Developing information for users in an agile environment (2018)
13. Agile Alliance Glossary. https://www.agilealliance.org/agile101/agile-glossary/
14. Blankenship, J., Bussa, M., Millett, S.: Pro Agile .NET Development with Scrum, p. 372. Apress (2011). ISBN 978-1-4302-3534-7 (eBook)
15. Laporte, C.Y., Munoz, M., Mejia, J., O’Connor, R.V.: Applying software engineering standards in very small entities: from startups to grownups. IEEE Softw. 35(1), 99–103 (2017). https://doi.org/10.1109/MS.2017.4541041
16. Laporte, C.Y., O’Connor, R.V.: Systems and software engineering standards for very small entities: accomplishments and overview. Computer 49(8), 84–87 (2016). https://doi.org/10.1109/MC.2016.242
17. Laporte, C.Y.: Public site of the ISO working group mandated to develop ISO/IEC 29110 standards and guides for Very Small Entities involved in the development or maintenance of systems and/or software. http://profs.etsmtl.ca/claporte/English/VSE/index.html
18. ISO/IEC TR 29110-5-4 - Systems and software engineering - Lifecycle profiles for Very Small Entities (VSEs) - Part 5-4: Agile software development guidelines. International Organization for Standardization/International Electrotechnical Commission (to be published)
19. ISO/IEC JTC 1/SC7 Software and systems engineering. https://www.iso.org/committee/45086.html
20. ISO/IEC TR 29110-5-5 - Systems and software engineering - Lifecycle profiles for Very Small Entities (VSEs) - Part 5-5: DevOps guidelines. International Organization for Standardization/International Electrotechnical Commission (to be published)
Best Practices for Software Development: A Systematic Literature Review Rodrigo Ordoñez-Pacheco, Karen Cortes-Verdin(B) , and Jorge Octavio Ocharán-Hernández School of Statistics and Informatics, Universidad Veracruzana, Av. Xalapa esq. Avila Camacho S/N, C.P., 91020 Xalapa, Veracruz, Mexico [email protected], {kcortes,jocharan}@uv.mx
Abstract. Software process standardization is crucial for organizations dedicated to software development in order to consistently produce quality products on predictable schedules. For this matter, the adoption of best practices is an essential factor in standardization processes. However, best practices tend to be described as common sense, opinions, or casual advice, and are poorly formalized and documented. Thus, it is important to know the state of the art in the field of best practices in software development and their identification. Here we report on a systematic literature review (SLR) that aims to identify (a) what best practices are, (b) what their distinctive characteristics are, (c) which methods, techniques, or strategies are used to identify them, and (d) how their performance is evaluated. From a total of 24 primary studies selected, we identified seven different definitions of best practice in software development and two best practice classification schemes based on characteristics such as name, definition, stakeholders, and context. Besides, we found one method, three strategies, and five techniques used for best practice identification, and two methods to evaluate their performance as separate entities in the software development life cycle. The results of this SLR will help in the identification and evaluation of best practices for software development organizations that aim to standardize their processes.
Keywords: Best practice · Software process improvement · Software development · Systematic literature review
1 Introduction
Over the past decades, software development organizations have made a substantial effort to standardize their processes in order to increase the quality of their products. One of the most popular strategies to achieve this is the adoption of best practices in software development. A “best practice” refers to an activity that has been proven to give good results in the past [1]. The empirical nature of best practices usually results in them being described as common sense, casual advice, or subjective opinion. Due to this, the documentation regarding best practices is scarce at best. Nonetheless, resources like SWEBOK or the IEEE/ISO software engineering related standards indicate that there is
an effort to formalize what would be considered “best practice” knowledge. Software development organizations have already included many of these practices as the de facto standard. However, some of them have not been thoroughly tested, lack formal implementations, or have unpredictable results. The publication of the Capability Maturity Model (CMM) in 1993 was one of the early attempts to standardize the adoption of best practices within the software development process. However, the CMM's focus was oriented towards the improvement of software development organizations and not towards the nature of best practices. Almost two decades later, in 2010, Capers [2] emphasized how the lack of professionalization in development practices still plagues the industry. As a result, he reports that approximately 35% of software projects are canceled, and 75% of the ones that are finished are delivered late. Furthermore, he argues that best practices are not universal; they might have different results depending on the type and size of software being produced. All of this emphasizes how important it is to delve deeper into the nature of best practices. The School of Statistics and Informatics at the Universidad Veracruzana, in Mexico, has recently decided to establish a software development center in which all students will have the opportunity to participate in the development of software for real-world projects. This was done to attract people from different market sectors interested in collaborating with the University in the development of such projects. Although the University has some experience in the development of successful software projects, such experience has not been formalized. It is thought that there is a significant number of best practices that can be obtained from various sources. Therefore, it is desirable to recover such experience so it can be used in the software development center. Thus, during the first semester of 2020, a systematic literature review (SLR) was conducted to identify research that helps define, identify, and evaluate best practices. Such an SLR is presented in this paper, which is organized as follows: Sect. 2 presents related work on the definition of best practices, their components, their methods of identification, and their evaluation; next, Sect. 3 addresses the SLR method and is organized in two sections: planning and conduction; Sect. 4 deals with the results; and, finally, conclusions and future work are given.
2 Related Work
A systematic literature review of best practices was presented in [3]. This SLR intended to investigate the challenges and best practices in global software development (GSD) companies when adopting the Follow-The-Sun (FTS) strategy. FTS is a strategy where “software development is distributed over 24 h per day, in order to reduce development time” [4]. In this strategy, the handover of tasks is performed at the end of the day, when a different team in a different location and time zone starts its workday. This work found 36 best practices and challenges in 27 primary studies published since 1990. From their findings, the authors also present research opportunities. The main findings reveal that most of the best practices found are general best practices adapted to GSD, and that the challenges are mainly related to FTS implementation. Garousi et al. [5] present a systematic literature review about challenges and best practices in industry-academia collaborations in software engineering. So far, the
attempts at industry-academia collaboration have not been successful, so it seems these two significant communities have very little interest in collaborating. However, there are several academic researchers and industry practitioners interested in studying industry-academia collaboration, and their work has increased in recent years. The authors focused on what they call “experience” papers (not empirical applications of theoretical approaches), since their focus was on the challenges, best practices (patterns), and anti-patterns of industry-academia collaborations. They used thematic analysis and found 17 best-practice themes and ten challenge themes. The outcome of this work, however, is not related to software development best practices but to best practices supporting industry-academia collaborations. A startup company is a new company with a strong and innovative technological basis. In the software development industry, startup companies are addressing the need for innovative software products. However, such companies are affected by process maturity, funding, economic and political environments, and marketing, to name only a few factors. In [6], Klotins et al. conducted a systematic literature mapping for the identification of the software engineering knowledge areas needed in startup companies. For their study, the authors considered the SWEBOK knowledge areas. They identified 54 best practices and 11 of the 15 SWEBOK knowledge areas. However, the authors found that there are “gaps in practices supporting successful transition through the startup life cycle” [6]. In turn, Marques and Robledo presented “a SLR for finding best practices taught to students in academia” in [37]. They argue that there has been much discussion about the teaching of software engineering best practices but not about the meaning of “best practice.” They use the ACM-IEEE Software Engineering Guideline as a basis to determine what must be taught to undergraduate students. They identified 17 primary studies with a total of 70 best practices. However, the authors stated that not all of the best practices identified corresponded to software engineering best practices, since some were instructional best practices. The authors also commented on the limited number of primary studies finally selected, and that the best practice validation and reporting were very diverse; some of the studies did not even have validation. On the definition of “best practice,” the authors found that many studies do not define or describe it. To the best of our knowledge, no additional systematic reviews about software development best practices that approach them from an academic perspective were found. It can be seen from the discussion above that none of these SLRs addresses methods for the identification of best practices, best practice components, or best practice evaluation.
3 Method
We conducted a systematic literature review following the guidelines provided by Kitchenham and Charters [8] for software engineering systematic literature reviews and the guidelines on snowballing by Wohlin [9]. The method is covered in the following two subsections: Planning and Conduction.
3.1 Planning
During this phase, the research questions were formulated, the search process was established, the primary sources were chosen, and the selection criteria were defined, as described below.
Research Questions
In order to drive the review process, four research questions were formulated. Each of the questions reflects upon an essential field of what could be referred to as “best practice knowledge.”
1. [RQ1] How are best practices defined in software development? To further an understanding of best practices in software development, it is necessary to achieve a somewhat agreed definition of them. Thus, it is vital to search and analyze how different authors conceptualize best practices. The theory, or lack thereof, behind the term usually hints at the state in which the concept is. Thus, it creates an opportunity to identify whether a consensus has been reached or whether additional research is needed.
2. [RQ2] What are the characteristics or distinctive elements of best practices? Best practices can differ widely depending on processes, standards, policies, or even personal preferences, but they share elements in common. Whether to classify them or to improve the notion of what a best practice is, it is crucial to analyze their main components.
3. [RQ3] Which methods, techniques, or strategies are used to identify best practices? As said before, best practices can come in many shapes and sizes. Sometimes they can even pass unnoticed by the very same people performing them. To solve this, a mechanism must be found so they can be accurately identified and classified.
4. [RQ4] In which ways is the performance of software development best practices evaluated? Usually, when best practices are prescribed and the software product has been released, its quality is measured in a variety of ways. Costs, defects, and deadlines are frequent evaluators of the software product, but none of them evaluate best practices as separate entities. Since best practices and their results may vary, it is necessary to find a way in which they can be evaluated individually. This would result in the ability to recommend or warn about specific practices.
Search Process
The search process followed a mixed search strategy in order to perform the study selection for this review; first, an automated search was conducted, and then a manual search using snowballing. For this reason, once the research questions were established,
several keywords were identified. Such keywords represent the main concepts around which this review is centered. Some synonyms were also included as keywords, as shown in Table 1 below.

Table 1. Keywords and synonyms found.
Keywords: Synonyms
Best practices: Good practices
Software development: –
Software engineering: –
Classification: Categorization, organization
Identification: Detection
Standardization: –
Evaluation: Assessment
Three search strings were constructed based on the keywords found. These search strings are logical formulations that can be used in most automated search repositories. The search strings are listed in Table 2.

Table 2. Search strings
A: ((best practices) OR (good practices)) AND ((software engineering) OR (software development)) AND ((identification) OR (detection))
B: ((best practices) OR (good practices)) AND ((software engineering) OR (software development)) AND ((classification) OR (organization) OR (categorization))
C: ((best practices) OR (good practices)) AND ((software development) OR (software engineering)) AND ((evaluation) OR (assessment))
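The three strings share one pattern: each keyword group from Table 1 is OR-ed with its synonyms, and the groups are AND-ed together. The following sketch (ours, not part of the review protocol) reproduces the strings of Table 2 from the keyword groups; note that it normalizes the order of terms inside each group.

# Sketch (ours) reproducing the search strings of Table 2 from the keyword
# groups of Table 1. The helper functions are illustrative only.
def or_group(terms):
    return "(" + " OR ".join(f"({t})" for t in terms) + ")"

def search_string(*groups):
    return " AND ".join(or_group(g) for g in groups)

BEST_PRACTICES = ["best practices", "good practices"]
SOFTWARE = ["software engineering", "software development"]

STRINGS = {
    "A": search_string(BEST_PRACTICES, SOFTWARE,
                       ["identification", "detection"]),
    "B": search_string(BEST_PRACTICES, SOFTWARE,
                       ["classification", "organization", "categorization"]),
    "C": search_string(BEST_PRACTICES, SOFTWARE,
                       ["evaluation", "assessment"]),
}

for key, s in STRINGS.items():
    print(key, s)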
Sources
The automated search was performed on four digital libraries: IEEE Xplore Digital Library, ACM Digital Library, Elsevier Science Direct, and Springer Link. The main reasons behind the selection of these sources were the complete access provided by the Universidad Veracruzana (by means of CONRICyT) to the IEEE and ACM repositories, and their ease of use and availability. As noted by Kuhrmann et al. [10], these four sources are considered “appropriate data sources” for software engineering related research. However, not all repositories operate in the same way, and some of them required that the search strings be adjusted. Before the final search strings were obtained, several candidate search strings were used to ensure the relevance of the results. The process included:
• Using search string variations.
• Testing the search strings with different field parameters.
• Reading at least the first 50 entries of the search results to evaluate their relevance.
• Comparing search string performance.
As an abridged example, only the search strings for IEEE Xplore are shown in Table 3. The complete table with the search strings for all sources can be consulted at https://www.uv.mx/its/files/2020/08/BPSD_Appendix.pdf.

Table 3. Excerpt of the search strings used (source: IEEE Xplore Digital Library).
A: ("All Metadata":"best practices" OR "All Metadata":"good practices") AND ("All Metadata":"software engineering" OR "All Metadata":"software development") AND ("All Metadata":"identification" OR "All Metadata":"detection")
B: ("All Metadata":"best practices" OR "All Metadata":"good practices") AND ("All Metadata":"software engineering" OR "All Metadata":"software development") AND ("All Metadata":"classification" OR "All Metadata":"organization" OR "All Metadata":"categorization")
C: ("All Metadata":"best practices" OR "All Metadata":"good practices") AND ("All Metadata":"software engineering" OR "All Metadata":"software development") AND ("All Metadata":"evaluation" OR "All Metadata":"assessment")
Primary Study Selection
To choose the primary studies to be analyzed, a set of criteria was established to ensure the quality and relevance of the selected publications. The inclusion and exclusion criteria are shown in Table 4. After the selection of primary studies had been made, the guidelines presented by Garousi and Felderer [11] were followed for data extraction. Using Google Spreadsheets, the data was recorded and logged; an example can be seen in Table 5. This was done to better organize the information and ease its retrieval. Finally, using the information from the previous step, a narrative synthesis was performed in order to answer the four research questions. This consisted of reading the selected papers, identifying the research questions they answered, separating the themes approached, and writing an interpreted conclusion on the content.
Table 4. Selection criteria.

Inclusion criteria:
IC1 | Peer-reviewed article from journal or conference
IC2 | "best practices," "good practices," "software engineering," or "software development" are present in the title or abstract
IC3 | Published between January 1993 and March 2020

Exclusion criteria:
EC1 | Full text is not available or behind a paywall
EC2 | It is a secondary research paper
EC3 | The paper is an editorial or about a workshop
EC4 | The introduction, motivations, or objectives do not answer any of the research questions
Table 5. Example of the form used for data extraction.

Search type: | Automated search
Title:       | "Theorizing about software development practices." [28]
Author(s):   | Päivärinta, T., & Smolander, K.
Year:        | 2014
Source:      | Elsevier S.D.
Pub. Type:   | Journal
Data:        | "A practice means a more and less organized situated activity that is conducted recurrently by human agents or, as Bordieu defines them, practices are "the recognizable patterned actions in which both individuals and groups engage. They are not a mechanical reaction to rules, norms or models, but a strategic, yet regulated improvisation responding to the dialectical relationship between a specific situation in a field and habitus" (…)" [28, p.125]
Answers:     | RQ1 (RQ2, RQ3, RQ4)
3.2 Conduction

The selection process was divided into three phases: Initial Selection, Secondary Selection, and Final Selection. During these phases, different criteria were used to perform the primary study selection. The reason behind this decision was that the digital libraries offered different options to customize the search. For example, the IEEE Xplore Digital Library and the ACM Digital Library were the only ones that allowed the customization of search fields such as title, abstract, full text, and the like.

During the Initial Selection, criteria IC1, IC2, IC3, EC1, and EC2 were applied to determine whether the candidate study was to be included or not. The main concern of the Initial Selection was to select candidate studies that showed the most apparent relevance for this review. During this selection, papers were assigned different statuses: "included", "to be determined" (for papers that needed further clarification), and "eliminated." Next, in the Secondary Selection, candidate papers with the status of "to be determined" were analyzed using the criteria IC2, EC2, and EC3. Studies that passed were labeled as "included," while the rest were labeled "eliminated." In the Final Selection, EC4 was applied to all "included" studies to assure their relevance. An abridged explanation of these phases is shown in Table 6 below; a sketch of this phased filtering is shown after the table.

Table 6. Selection phases and criteria applied.

Phase               | Criteria                | Description
Initial selection   | IC1, IC2, IC3, EC1, EC2 | The paper was published in a journal or conference within the specified time period. At least one of the terms in IC2 is found in the title. The paper is completely available and is a primary study.
Secondary selection | IC2, EC2, EC3           | The abstract was read, and IC2 terms were looked for in it. To comply with EC2 and EC3, details of the publication were searched.
Final selection     | EC4                     | Further reading of the Introduction or Objectives of the paper allows selection according to EC4.
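The three-phase status workflow described above can be pictured as a simple filter pipeline. The sketch below is illustrative only: the statuses and criteria IDs come from Table 6, while the record fields and helper names are assumptions, not part of the review protocol:

```python
# Sketch: three-phase selection over candidate study records.
# Each criterion is modeled as a predicate over a study dict; statuses follow
# the paper's "included" / "to be determined" / "eliminated" labels.

TERMS = ["best practices", "good practices",
         "software engineering", "software development"]

def ic2(study, field):
    """IC2: at least one key term appears in the given text field."""
    text = study.get(field, "").lower()
    return any(term in text for term in TERMS)

def initial_selection(study):
    # IC1/IC3/EC1/EC2 are assumed to be precomputed boolean flags here.
    if not (study["peer_reviewed"] and study["in_time_span"]
            and study["full_text_available"] and not study["secondary"]):
        return "eliminated"
    return "included" if ic2(study, "title") else "to be determined"

def secondary_selection(study):
    # Re-check IC2 on the abstract; EC2/EC3 via publication details.
    ok = (ic2(study, "abstract") and not study["secondary"]
          and not study["editorial"])
    return "included" if ok else "eliminated"

def final_selection(study):
    # EC4: the introduction/objectives must answer at least one RQ.
    return "included" if study["answers_any_rq"] else "eliminated"
```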
Because of space and format issues, only the candidate studies extracted from the IEEE Xplore Digital Library are presented here as an example (for further reference, all the selection process data can be consulted at https://www.uv.mx/its/files/2020/08/BPSD_Appendix.pdf). The classification of the candidate studies into the "included," "to be determined," or "eliminated" categories is shown in Table 7.

Table 7. Candidate studies and initial selection results of the IEEE Xplore Digital Library.

Search string | Candidate papers | Eliminated | To be determined | Included
A             | 56               | 39         | 10               | 7
B             | 138              | 93         | 35               | 10
C             | 136              | 114        | 19               | 3
As shown in Table 8, the Secondary Selection was conducted on the papers labeled as "to be determined." This consisted of reading the abstract and using the corresponding criteria for the selection, as explained previously. To conclude the process, the Final Selection was performed on the remaining papers. This procedure can also be visualized in Table 8. As mentioned before, the complete data for all sources can be consulted at https://www.uv.mx/its/files/2020/08/BPSD_Appendix.pdf.

As the last step, a selective snowballing search was made on the findings from the selection process. The use of EC4 during the Final Selection found five papers that referenced possibly useful studies. Since the papers found through this search technique were fewer in number, an abridged version of the previously mentioned process was conducted. Information from all included studies was managed through Zotero and Mendeley.

Table 8. Secondary and final selection results of the IEEE Xplore Digital Library.

              | Secondary selection     | Final selection
Search string | Eliminated | Included   | Eliminated | Included
A             | 49         | 7          | 52         | 4
B             | 123        | 15         | 131        | 7
C             | 126        | 10         | 132        | 4
Even though the time span of the search was 27 years, it was found that 58.3% of the included papers were published within the last decade. It seems that researchers are taking an interest in formalizing the knowledge around best practices. From the 3,343 candidate studies initially found across all four sources and through snowballing, only 24 primary studies were considered relevant to this review. Springer Link was the source that provided the most candidate studies (38.3%), while the IEEE Xplore Digital Library provided the least (9.9%). However, of the final 24 included studies, the latter provided the most (62.5%), with the ACM Digital Library second (16.6%), Elsevier Science Direct third (12.5%), and, in last place with one included paper each, Springer Link (4.1%) and the Journal of Object Technology (4.1%). It was noted that a vast majority of the included studies (87.5%) were conference articles, while the rest (12.5%) were published in journals. Table 9 shows an excerpt from the list of the included studies. The complete list can be seen at https://www.uv.mx/its/files/2020/08/BPSD_Appendix.pdf.
Table 9. Included studies by date, title, source, and type of publication.

ID  | Year | Title                                                                                              | Source                                      | Type
S01 | 1993 | "The Motorola software engineering benchmark program: Organization, directions, and results" [12] | IEEE Xplore                                 | Conf.
S02 | 2000 | "Software engineering best practices applied to the modeling process" [13]                        | IEEE Xplore (SB from [17])                  | Conf.
S03 | 2003 | "Industrial strength software and quality: Software and engineering at Siemens" [14]              | IEEE Xplore                                 | Conf.
S04 | 2003 | "Précis of best practices for Pakistan's software industry" [15]                                  | IEEE Xplore                                 | Conf.
S05 | 2003 | "The best practice promise and myth" [16]                                                         | Journal of Object Technology (SB from [25]) | Journal
…   | …    | …                                                                                                  | …                                           | …
3.3 Quality Assessment

To ensure the quality of the included studies, the guidelines published by Garousi et al. [36] were used. Table 10 shows the quality assessment instrument that was designed. The quality assurance questions (QAQ) were designed to evaluate the degree to which the included studies answered the research question(s) and how well the included studies were structured.

Table 10. Quality assessment instrument used for included study evaluation.

Paper ID: ______    Date: ______
Answer each one of the questions according to the primary study characteristics (0: No, 0.5: Partially, 1: Yes)

Question                                                                      | Score
1. To what degree does the study answer the research question(s)?            |
2. Does the study give a detailed description of the used methodology?       |
3. Does the study clearly state its objectives or research questions?        |
4. Is the study appropriately structured according to its objective?         |
5. Does the study give enough evidence to support its claims or conclusions? |
6. Do other authors cite the study?                                          |
TOTAL:                                                                        |
All included studies were evaluated against the previously mentioned quality assessment instrument; possible scores ranged from 0 to 6 points. After the evaluations, the average score of the studies was 4.9 points. This reflects that, on average, the included studies provide well-structured and reliable information relevant to this review. The scores showed that 95.8% (23) of the studies scored above 3 points, and only 4.1% (1) scored 2.5 points [25]. It is also worth mentioning that, regarding QAQ1, 83.3% (20) of the included studies had detailed answers to one or more research questions, while the other 16.6% (4) had partial answers to one or more research questions. For further reference, the complete list of scores can be consulted at https://www.uv.mx/its/files/2020/08/BPSD_Appendix.pdf.
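As an illustration of the scoring arithmetic (a total per study out of 6 points, then the average across studies), the following minimal sketch uses made-up scores; the 0/0.5/1 scale comes from Table 10, but the sample data and names are assumptions:

```python
# Sketch: aggregating quality assessment scores (0, 0.5, or 1 per question).
# The two example studies and their answers are hypothetical.

scores = {
    "S01": [1, 1, 0.5, 1, 1, 0.5],   # answers to QAQ1..QAQ6
    "S02": [0.5, 1, 1, 1, 0.5, 1],
}

totals = {sid: sum(qs) for sid, qs in scores.items()}   # 0..6 points each
average = sum(totals.values()) / len(totals)

for sid, total in totals.items():
    print(f"{sid}: {total} / 6")
print(f"Average score: {average:.1f}")
```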
4 Results

The following section presents the answers found for each of the research questions. The information provided here is a narrative synthesis based on the data extracted from the included studies. Afterwards, the Threats to Validity section discusses the limitations of the review.

4.1 Answers to Research Questions

[RQ1] How are Best Practices Defined in Software Development?
It was found that only ten papers [13, 15–17, 21, 22, 25, 28–31] used a citation or went beyond a casual definition of best practices. Of these, six used citations [15, 21, 22, 25, 29, 30] to define a best practice; the rest [13, 16, 17, 28] discuss the concept in detail. Although almost none of the definitions used were identical, several similarities could be found between them. Table 11 below shows the definitions found.

It is worth noting that Päivärinta and Smolander [28] discuss the conceptualization of software development practices in more depth. Their research builds a theoretical model that tries to explain the fundamental concepts of practice. They argue that a practice can be defined as a somewhat organized activity that is conducted recurrently by human agents [28]. Also, they mention that practices are not static and that they can and have emerged spontaneously or as a response to rising trends. Their flexible and organic vision of practice is shared by Dodani [16], who, in turn, adds that a practice is in "continuous evolution."
Table 11. Definitions found in included studies ([RQ1] How are best practices defined in software development?).

Study    | Definition
S06 [17] | "A repeatable activity defined in such a way that someone other than the definer can implement it with demonstrable repeatability" [17]
S02 [13] | "A management or technical practice that has consistently demonstrated to improve one or more of: Productivity, Cost, Schedule, Quality, User Satisfaction, Predictability of Cost and Schedule" [13]
S14 [25] | "A well-defined method that contributes to a successful step in product development" [25]; "A method or technique that [has] consistently shown results superior to those achieved with other means" [25]
S19 [30] | "The most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people" [30]
S05 [16] | "A proven practice in a given context. (…). Among the proven practices, it is the best at achieving some result. (…). It is well documented, used widely by the community, and continually evolving." [16]
S20 [31] | "However, best practices are (…): "commercially proven approaches to software development that, when used in combination, strike at the root causes of software development problems. They are best practices not so much because you can precisely quantify their value but rather because they are commonly used in industry by successful organizations"." [31]

[RQ2] What are the Characteristics or Distinctive Elements of Best Practices?
There was a noticeable absence of papers that explicitly discussed the specific characteristics or elements of a best practice. However, at least three of them [19, 30, 35] explain how best practices can be classified following a set of criteria. Alwazae et al. [30] created a very well-structured template to classify best practices; they insist on the necessity of detailed documentation of best practices. Harutyunyan and Riehle [35] designed a similar artifact to record best practices; their focus on practicality results in a less complex document. Finally, Zhu et al. [19] built a repository of knowledge to ease the search, distribution, and implementation of best practices within
a software development organization. Their main contribution to this review is the creation of a sorting mechanism based on tags, and the grouping of these tags into "facets." Despite not mentioning a precise classification of best practices, their system prototypes show that some consideration was given to figuring out what "fields" a best practice should have. It can be concluded that there are four common components in a best practice: name, definition, stakeholders, and context. Table 12 further illustrates the contributions of these three papers.

Table 12. General characteristics of best practices according to the reviewed papers ([RQ2] What are the characteristics or distinctive elements of best practices?).

Publications: "An infrastructure for indexing and organizing best practices" [19]; "Applying a template for best practice documentation" [30]; "Industry best practices for open source governance and component reuse" [35]

Characteristic | Description
Name           | Usually, a commonly known name decided by what made it popular in the first place. This field varies immensely. A best practice can be named after something formal, like a published standard, or something casual, like an acronym or catchphrase
Definition     | A short explanation of what the practice does and which problem(s) are solved by it
Stakeholders   | This can refer to its enforcers, subjects, or observers. This is the human factor that drives the practice forward. Depending on the best practice, the number of stakeholders can vary from large organizations to a few people in a group
Context        | Although not very specific, this field includes a set of variables that further detail the actions, requirements, limitations, or cost of the best practice. It is important to note that best practices should not be considered "one-size-fits-all." Instead, each practice can be beneficial or detrimental, depending on its context
[RQ3] Which Methods, Techniques, or Strategies are Used to Identify Best Practices?
Most of the included studies answered this question. In total, 13 papers [12, 14, 17, 18, 21–24, 27, 31, 32, 34, 35] approached the topic. Still, not all papers answered the question with the same level of detail. Four papers [12, 18, 31, 34] briefly mentioned techniques such as interviews and questionnaires as the tools used in their respective research to extract or identify best practices. These, however, did not include detailed descriptions of their methods, techniques, or strategies.

Shull and Turner [17] discuss how their best practice repository is fed by stakeholder input. A validation process follows this, but they emphasize that best practices should be proposed by the community. Majchrzak [23] and Harutyunyan and Riehle [35] describe how they used semi-structured interviews to extract best practices. On the other hand, Achatz and Paulisch [14] describe how Siemens made an effort to create conferences, workshops, and formal meetings, and to use its intranet, to identify and disseminate best practices within the organization. Petrillo and Pimenta [24] focus on the detection of best and bad practices in game development. They propose the analysis of a post-mortem document, an artifact that presents what went right and wrong during the project. Washburn et al. [32] take up this approach and analyze 155 post-mortems from different game development projects, resulting in the identification of what they call "do's and don'ts."

Lastly, the only paper that answers the question in a detailed manner is the one by Calvo-Manzano et al. [21], which in turn is cited by two other papers [22, 27]. Their proposed method is divided into two phases: during the first, best practices are identified within the working context of an organization; during the second, the organization's best practice documentation is analyzed and compared against the previous findings. The other two papers [22, 27] present real-world cases in which this method was used, evaluated, and validated. These approaches are summarized in Table 13 below.

Table 13. Approaches found in reviewed studies for best practice identification ([RQ3]).

Method                           | Techniques                                                                                                                                   | Strategies
Two-phased analysis [21, 22, 27] | Interviews and semi-structured interviews [12, 17, 23]; Questionnaires [12, 18, 31, 34]; Conferences [14]; Formal meetings [14]; Workshops [14] | Stakeholders' proposals [17]; Post-mortem analysis [24, 32]; Intranet usage (forums) [14]
[RQ4] In Which Ways is the Performance of Software Development Best Practices Evaluated?
Answering this research question was a complicated task. Usually, when measuring performance in a software-related field, what tends to be evaluated is the software or the project itself. Defects, costs, milestones, or accomplished goals are familiar ways in which software or project performance is measured. However, best practices are very rarely evaluated as separate entities, each with independent and varying results. In this regard, the review found only three papers [20, 30, 33] that approach the topic.

Alwazae et al. [30] include the fields "BP Properties" and "BP Implementation," along with performance-related considerations, within their best practice template. For example, they consider that a classified best practice should specify which areas of the software process it improves, or how much it diminishes defects in code. It is implied that these observations, if logged, usually come from personal or organizational experience.

Cohen and Money [20] present a formal approach to evaluate best practices as separate entities. They insist on isolating best practices from different software development
methodologies. In their opinion, a best practice has independent results with cumulative properties that can be added to or subtracted from projects as seen fit. Similarly, Karnouskos et al. [33] propose a six-step methodology to evaluate practices in software development. They use the characteristics of the ISO/IEC 25010 quality model as a reference when ranking practices; however, their approach is not constrained by said standard. Their proposal states that the selection of practices should be oriented by what the stakeholders consider important.

4.2 Threats to Validity

This section presents the main threats to the validity of the findings discussed. Due to the resources available via the University, certain limitations are important to note. Full access to the content provided in both Elsevier Science Direct and Springer Link was not possible. This, however, does not mean that said sources were not consulted; it means that only the studies available to the University (via CONRICyT) were included. The number of studies not considered for this review because of their "paid access" status was 3,353 from Elsevier Science Direct and 18,710 from Springer Link. Very likely, both numbers include duplicate studies. A team of three researchers conducted this review; one of them was the leading researcher during the selection process. So, even though this review was done under the supervision of the whole team, personal bias must be taken into consideration.
5 Conclusions

This paper presents the results of a Systematic Literature Review focused on the definition, detection, classification, and evaluation of software development best practices. During the paper selection, it was noted that a substantial quantity of papers had incidental or superficial mentions of best practices within their titles and abstracts. The apparent, but not real, relevance of these papers slowed the selection process. In the end, a body of 24 primary studies was selected and reviewed. It is worth mentioning that these papers represented a minuscule percentage of the initial candidate studies. In this regard, the proportion of found versus included papers coincides with the results reported in Marques and Robledo's [7] review.

It was determined that an apparent increase in best-practice-related research has taken place in the last ten years. Publications like the ones authored by Jones [2] and Thompson and Fox [18] might have been a source of inspiration. Nonetheless, it is crucial to emphasize that most of the initial candidate studies were irrelevant to the review because of their casual approach towards best practices. The absence of proper studies reveals a lack of structure and formalization in best practice knowledge.

This review concluded that even though a more or less well-defined concept of a software development best practice is already shared within the community, its standardization by a recognized institution would be helpful. On the other hand, the identification, classification, and evaluation of best practices remain practically obscure areas. While there is a general idea of how to perform these activities, structured methods are needed. This could help ease the confusion caused by the very well-known conflict of agile versus traditional methodologies. A proper source of best practice knowledge could serve as a bridge between these two.

This paper would like to draw attention to four of the included papers: Päivärinta and Smolander [28], Alwazae et al. [30], Calvo-Manzano et al. [21], and Cohen and Money [20]. These four papers presented the most detailed findings of this review. They approach the fundamental problems of best practice definition, classification, extraction, and evaluation, respectively, in a structured and clear manner. Their contribution is, perhaps, a basis from which future research could draw inspiration.

For future work, it remains to apply the SLR findings to the identification of software development best practices in the authors' organization. Such identification will help in the development of software projects at its academic software development center.
References

1. Institute of Electrical and Electronics Engineers: IEEE Recommended Practice on Software Reliability. IEEE Std 1633-2016 (Revision of IEEE Std 1633-2008). IEEE Press, New York (2016)
2. Jones, C.: Software Engineering Best Practices. McGraw-Hill Education, New York (2009)
3. Kroll, J., Hashmi, S.I., Audy, J.L.N., Richardson, I.: A systematic literature review of best practices and challenges in follow-the-sun software development. In: IEEE 8th International Conference on Global Software Engineering Workshops, pp. 18–23. IEEE Press, New York (2013)
4. Cameron, A.: A novel approach to distributed concurrent software development using a follow-the-sun technique. Unpublished EDS working paper (2004)
5. Garousi, V., Petersen, K., Ozkan, V.: Challenges and best practices in industry-academia collaborations in software engineering: a systematic literature review. J. Inf. Softw. Technol. 79, 106–127 (2016)
6. Klotins, E., Unterkalmsteiner, M.: Software engineering knowledge areas in startup companies: a mapping study. In: Fernandes, J., Machado, R., Wnuk, K. (eds.) Software Business ICSOB 2015, pp. 245–257. Springer, Heidelberg (2015)
7. Marques, M., Robledo, J.: What software engineering "best practices" are we teaching students - a systematic literature review. In: IEEE Frontiers in Education Conference (FIE), pp. 1–8. IEEE Press, New York (2018)
8. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering. Technical report, Ver. 2.3, EBSE, Durham (2007)
9. Wohlin, C.: Guidelines for snowballing in systematic literature studies and a replication in software engineering. In: 18th International Conference on Evaluation and Assessment in Software Engineering, pp. 2–4. Association for Computing Machinery, New York (2014)
10. Kuhrmann, M., Fernández, D.M., Daneva, M.: On the pragmatic design of literature studies in software engineering: an experience-based guideline. Empirical Softw. Eng. 22(6), 2852–2891 (2017). https://doi.org/10.1007/s10664-016-9492-y
11. Garousi, V., Felderer, M.: Experience-based guidelines for effective and efficient data extraction in systematic reviews in software engineering. In: 21st International Conference on Evaluation and Assessment in Software Engineering EASE 2017, pp. 1–10. Association for Computing Machinery, New York (2017)
12. Fritsch, J.: The Motorola software engineering benchmark program: organization, directions, and results. In: IEEE 17th International Computer Software and Applications Conference, pp. 284–290. IEEE Press, New York (1993)
13. Withers, D.H.: Software engineering best practices applied to the modeling process. In: 2000 Winter Simulation Conference, pp. 432–439. IEEE Press, New York (2000)
14. Achatz, R., Paulisch, F.: Industrial strength software and quality: software and engineering at Siemens. In: Third International Conference on Quality Software, pp. 321–326. IEEE Press, New York (2003)
15. Lodhi, F., Tariq, A., Naveed, S., Gul, S., Khalid, M.: Précis of best practices for Pakistan's local software industry. In: 7th International Multi Topic Conference, pp. 451–456. IEEE Press, New York (2003)
16. Dodani, M.H.: The best practice promise and myth. J. Object Technol. 2(4), 65–68 (2003)
17. Shull, F., Turner, R.: An empirical approach to best practice identification and selection: the US department of defense acquisition best practices clearinghouse. In: 2005 International Symposium on Empirical Software Engineering, pp. 133–140. IEEE Press, New York (2005)
18. Thompson, J.B., Fox, A.J.: Best practice: is this the cinderella area of software engineering? In: 18th Conference on Software Engineering Education & Training CSEET 2005, pp. 1–8. IEEE Press, New York (2005)
19. Zhu, L., Staples, M., Gorton, I.: An infrastructure for indexing and organizing best practices. In: Second International Workshop on Realising Evidence-Based Software Engineering (REBSE 2007), pp. 1–6. IEEE Press, New York (2007)
20. Cohen, S.J., Money, W.H.: Bridge methods: using a balanced project practice portfolio to integrate agile and formal process methodologies. In: 42nd Hawaii International Conference on System Sciences, pp. 1–10. IEEE Press, New York (2009)
21. Calvo-Manzano, J.A., Cuevas, G., Muñoz, M.A., San Feliu, T., Álvaro, R., Sánchez, Á.: Identifying best practices for a software development organization through knowledge management. In: 5th Iberian Conference on Information Systems and Technologies, Santiago de Compostela, pp. 1–6. IEEE Press, New York (2010)
22. Calvo-Manzano, J.A., Cuevas, G., Mejía, J., Muñoz, M., San Feliu, T., Sánchez, Á., Rocha, Á.: Approach to identify internal best practices in a software organization. Commun. Comput. Inf. Sci. 99, 107–118 (2010)
23. Majchrzak, T.A.: Best practices for technical aspects of software testing in enterprises. In: 2010 International Conference on Information Society, pp. 195–202. IEEE Press, New York (2010)
24. Petrillo, F., Pimenta, M.: Is agility out there? In: 28th ACM International Conference on Design of Communication, pp. 9–15. Association for Computing Machinery, New York (2010)
25. Tighy, G.: Evaluation of software engineering management best practices in the Western Cape. In: 4th IEEE Software Engineering Colloquium (SE), pp. 21–23. IEEE Press, New York (2012)
26. Camacho, C., Marczak, S., Conte, T.: On the identification of best practices for improving the efficiency of testing activities in distributed software projects: preliminary findings from an empirical study. In: IEEE 8th International Conference on Global Software Engineering Workshops, pp. 1–4. IEEE Press, New York (2013)
27. Mejía, J., Muñoz, M., Rojo, G.E., Tinajero, I., Ramírez, H., García, J.: Knowledge extraction tacit for software process improvement in a governmental organization. In: 9th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–7. IEEE Press, New York (2014)
28. Päivärinta, T., Smolander, K.: Theorizing about software development practices. Sci. Comput. Program. 101, 124–135 (2015)
29. del Pilar Salas-Zárate, M., Alor-Hernández, G., Valencia-García, R., Rodríguez-Mazahua, L., Rodríguez-González, A., López Cuadrado, J.L.: Analyzing best practices on web development frameworks: the lift approach. Sci. Comput. Program. 102, 1–19 (2015)
30. Alwazae, M., Perjons, E., Johannesson, P.: Applying a template for best practice documentation. Procedia Comput. Sci. 72, 252–260 (2015)
31. Aniche, M.F.: Detection strategies of smells in web software development. In: IEEE International Conference on Software Maintenance and Evolution (ICSME), pp. 598–601. IEEE Press, New York (2015)
32. Washburn Jr., M., Sathiyanarayanan, P., Nagappan, M., Zimmermann, T., Bird, C.: What went right and what went wrong: an analysis of 155 post-mortems from game development. In: 38th International Conference on Software Engineering, pp. 280–289. IEEE Press, New York (2016)
33. Karnouskos, S., Sinha, R., Leitao, P., Ribeiro, L., Strasser, T.I.: Assessing the integration of software agents and industrial automation systems with ISO/IEC 25010. In: IEEE 16th International Conference on Industrial Informatics, pp. 61–66. IEEE Press, New York (2018)
34. Borg, M., Olsson, T., Franke, U., Assar, S.: Digitalization of Swedish government agencies. In: 40th International Conference on Software Engineering: Software Engineering in Society, pp. 37–46. IEEE Press, New York (2018)
35. Harutyunyan, N., Riehle, D.: Industry best practices for open source governance and component reuse. In: 24th European Conference on Pattern Languages of Programs, pp. 1–14. Association for Computing Machinery, New York (2019)
36. Garousi, V., Felderer, M., Mäntylä, M.V.: Guidelines for including grey literature and conducting multivocal literature reviews in software engineering. Inf. Softw. Technol. 106, 101–121 (2019). https://doi.org/10.1016/j.infsof.2018.09.006
How Capital Structure Boosts ICTs Adoption in Mexican and Colombian Small Firms: A PLS-SEM Multigroup Analysis Héctor Cuevas-Vargas1(B) , Héctor Abraham Cortés-Palacios2 , Gildardo Adolfo Vargas-Aguirre2 , and Salvador Estrada3 1 Universidad Tecnológica del Suroeste de Guanajuato, Guanajuato, Mexico
[email protected] 2 Universidad Autónoma de Aguascalientes, Aguascalientes, Mexico
[email protected], [email protected] 3 Universidad de Guanajuato, Guanajuato, Mexico [email protected]
Abstract. The scientific literature shows very limited empirical evidence addressing the relationship between capital structure and Information and Communication Technologies (ICTs) adoption; therefore, this empirical study aimed to examine the relationship between capital structure decisions and ICTs adoption in Mexican and Colombian SMEs, and to compare this relationship between the two Latin American countries through a multigroup analysis. Based on a sample of 320 SMEs, Partial Least Squares Structural Equation Modeling (PLS-SEM) was applied to assess the structural model as a hierarchical component model. The results allow us to infer that capital structure has predictive power over the ICTs adoption of small businesses in Mexico and Colombia. In the same vein, the findings indicate that capital structure has a positive and significant effect on ICTs adoption, both in Mexican and in Colombian firms. On the other hand, no significant difference was found between the Mexican and the Colombian firms.

Keywords: ICTs adoption · Capital structure · Mexican SMEs · Colombian SMEs · Multigroup analysis
1 Introduction

The usage of Information and Communication Technologies (ICT) is profoundly transforming the interaction between economic agents around the world. Candelon et al. [1] state that the world is entering a Schumpeterian cycle of creative destruction, driven by technology, that is about to disrupt and radically change several industries, such as automotive, financial services, health care, and even governments. According to the authors, the usage and exploitation of technology will have implications for the reallocation of wealth, value, and power among the participants of the economy.
On one hand, Small and Medium Enterprises (SMEs) are important players in their respective economies, but what is happening regarding their advances in technological usage? Mendo and Fitzgerald [2] describe a paradoxical situation where, despite the growth of e-commerce models, UK SMEs were reluctant to adopt technology-based business models. Part of this resistance lay in the heterogeneity of SMEs; thus, to address it, the authors point out the necessity of abandoning the paradigm of a one-size-fits-all recipe regarding the adoption of technologies. In a similar way, Arendt [3] explored the barriers for SMEs to adopt information and communication technologies and found that one of the barriers to ICT adoption was financing, since for this kind of project it is hard to determine the rate of return, and the payoff typically arrives in the long term.

On the other hand, it is important to note that SMEs in Colombia and Mexico are very similar, since the business fabric is made up, for the most part, of small-scale businesses. This holds even in terms of their size classification: small and medium-sized companies, defined according to Colombian law as those with a staff of fewer than 200 employees, represent 99.5% of the national business park [4], where a microenterprise has between 1 and 10 employees, a small one between 11 and 50 employees, and a medium-sized one between 51 and 200 employees, for all sectors. The same occurs in the Mexican case, where they represent 99.7% of all formal firms [5], with the classification of micro and small companies being the same as in Colombia; only in the case of Mexican medium-sized companies is there a difference, since they may have up to 250 employees in the industrial sector.

According to ECLAC [6], Colombia's economy is based especially on the production of primary goods for export and consumer goods for the domestic market. In Mexico, most export revenues come from manufacturing, followed by domestic natural resources. Under the overall production scheme, the external sector has specialized in manufacturing and has operated largely under the maquila regime, which depends largely on imports of intermediate goods [7]. For this reason, one of the structural challenges facing Colombian SMEs, according to the large SME survey, is access to formal financing [8]. Likewise, in the case of Mexican SMEs, only 8% had access to financing in 2017, according to data from ENAPROCE, of which 81.3% was used mainly to purchase supplies, 27.5% to acquire machinery and equipment, and 25.6% to pay other credits [9]. On the other hand, in the descriptive study carried out with Colombian and Mexican SMEs by Cuevas-Vargas et al. [10], Mexican SMEs showed a higher relevance index in the adoption of ICTs, suggesting that the country influences the organizational structure of SMEs. However, it is highlighted that the level of ICT adoption is very limited in both countries.

With this problematic situation in mind, the aim of this empirical research is to examine the relationship between capital structure decisions and ICTs adoption in Mexican and Colombian SMEs. The research also aims to assess whether a significant difference can be found between the ways that Mexican and Colombian SMEs manage their capital structure decisions for adopting ICTs in their businesses. Accordingly, there are three main contributions of this study.
Firstly, to provide empirical evidence of the relationships proposed in the theoretical research model in the context of SMEs in two emerging countries, Colombia and Mexico; second, the measurement of the model as a hierarchical component model; third, the methodology used, through PLS-SEM multigroup analysis and the application of the importance-performance map analysis (IPMA), to prioritize managerial actions.

Thus, this work is structured in five sections: the introduction just described, followed by a review of the literature and the formulation of the research hypothesis. The third section presents the methodology and shows the reliability and validity of the scales. In the fourth section, the results and their discussion are presented, and the last section contains the conclusions, limitations, and future lines of research.
2 Literature Review

Capital structure is understood as the proportion between debt and equity that finances a firm. The seminal article by Modigliani and Miller [11] states that there is no single right proportion among sources of financing, since the proportion does not affect the value of the firm. There are other theories, such as agency theory, which states that the relationship between managers and owners influences the capital structure through the use of securities as a form of organizing the firm [12–14]. There is also the pecking order theory, which characterizes the selection of the capital structure as a continuum determined by the existence of asymmetrical information and the actions of management in favor of stockholders' interests [15, 16]. Finally, there is the trade-off theory, which states the existence of an objective function that allows selecting a capital structure that maximizes the value of the firm [15, 17].

To address the theory regarding ICT adoption, it is necessary to review the seminal book by Rogers [18, p. 12], in which the author states that an innovation is an idea, practice, or object that is perceived as new by an individual or other unit of adoption. This definition is useful since, in that way, SMEs are the unit of adoption, and the integration of information and communication technologies constitutes an innovation. Rogers [18] states that the process of innovation diffusion is a process of communication that uses certain channels over time among the members of a social system. There are also other specific theoretical frameworks regarding the adoption of technologies by firms, such as Mendo and Fitzgerald [2], who present a set of theories that explain the adoption of technologies and electronic business models. In their work, the authors allude to a convergence of organizational learning, the institutional view of the influences of the environment where the firm is immersed, and the opportunities identified by the managerial team of the firm. Regarding practical issues, Antlová [19] found that SMEs are likely to use technology to manage and process data rather than to share knowledge.

Regarding the capital structure choice, some factors found empirically may be related to ICT adoption. In that sense, there is the seminal paper by Titman and Wessels [20], who found that the uniqueness of the line of business (one of whose measurements is the proportion of research and development expenses) is inversely related to the capital structure, implying lower debt ratios. In relation to the capital structure of information and communication technology companies, most of the research involves the sources of financing that these companies use. Tech companies can draw on large amounts of debt and equity financing during their start-up year [21]. In general, companies whose main asset is ICT use bank debt as their main financing, while other organizations seek to raise venture capital [22]. Furthermore, Minola et al. [23] reveal that start-ups that use technology often have greater access to, and preference for, the use of capital instead of debt financing.

Giotopoulos et al. [24] conducted a survey of 3,500 Greek Small and Medium Enterprises (SMEs). The authors measured ICT adoption with five indicators (intention, infrastructure, integration, and electronic sales and procurement). In their results, the authors found that organizational factors, such as decentralized decision-making, well-educated and skilled workers, and visionary leadership, have a favorable impact on the likelihood of adopting technologies.

Kılıçaslan et al. [25] explored labor productivity growth in Turkish manufacturing. An important thing to point out in this study is the measurement of ICT, which was partly based on capital investment, thus divided into ICT and non-ICT investment. In their descriptive results, the authors find that ICT capital is higher in most of the manufacturing sectors, excepting beverages, leather, chemicals, metals, and furniture.

It is relevant to explore the interactions of capital structure in technological firms. One example is the study by Chen et al. [26], which examined the relationship between investment in R&D and capital structure in small and medium enterprises (SMEs) of the IT industry in Taiwan. The authors found that when IT SMEs engage in R&D activities, the firms tend to have lower debt. Another study is from Neville and Lucey [27], who analyzed the capital structure of 100 Irish technological SMEs. It is relevant to remark on the importance of internal sources as the main source of financing and the switch from equity to debt over time. Ultimately, there is a positive relationship between internal finance and revenue, and a negative relationship between debt and revenue.

ICT adoption is part of innovation matters. In this case, it is important to explore the study by O'Brien [28], who focused on how innovation strategies impact the capital structure decision. In his results, O'Brien found that firms that pursue an innovation strategy also tend to create financial slack, which in turn implies that they will lower their debt financing.

Finally, it is necessary to introduce a contextual framework. In that sense, it is relevant to expose the work by Demuner et al. [29], who tried to characterize ICT adoption in the specific context of Mexican SMEs. In their results, the authors found that there are some advances in the adoption of technology, but these technologies are mainly basic, such as PCs and some internet applications. These findings, nevertheless, are consistent with the characterization provided by Antlová [19]. In this sense, the hypothesis of this study is framed as follows:

H1: The higher the level of capital structure, the higher the level of ICT adoption.
3 Methods

This empirical research is of an explanatory type, with a quantitative approach and a non-experimental, cross-sectional, and predictive design, using the statistical technique of PLS-SEM through the statistical software SmartPLS® 3 [30], which helps to overcome possible problems of data non-normality since it works with non-parametric tests [31]. Firstly, the measurement model was estimated, and then the structural model with the complete sample was assessed as a hierarchical component model [32] using the indicator repetition approach [33, 34], a necessary action when working with higher-order or second-order models [31, 35]. Second, the multigroup analysis technique (PLS-MGA) was used to identify the way in which managers of Colombian and Mexican SMEs manage their capital structure decisions to adopt ICTs in their firms, and then to evaluate whether there is a statistically significant difference through the PLS-MGA [36]. Finally, we applied the importance-performance map analysis.

3.1 Sample Design and Data Collection

The present empirical study took as reference the National Statistical Directory of Economic Units (DENUE) of INEGI [37] for the Mexican case and the Bogotá Chamber of Commerce [38] for the Colombian case. We worked with a confidence level of 95% and a margin of error of 5%, obtaining a sample of 245 SMEs from the department of Bogotá, Colombia, and 205 SMEs from Aguascalientes, Mexico. For this, a survey was designed with the purpose of being answered by the managers or owners of this type of organization. It was carried out as a personal interview with a total sample of 450 firms selected through random sampling in both Latin American countries, from January to April 2018, obtaining at the end 320 completed and valid surveys, which represent the sample of this study. The sample of Mexican SMEs belongs to the industrial sector, whose companies have between 11 and 250 employees. The sample of Colombian SMEs belongs to the industrial and services sectors, whose companies have between 5 and 200 employees.

According to the distribution of the sample, 175 SMEs are Mexican and 145 Colombian. Of the total Mexican sample, 82.9% are small businesses (11 to 50 employees) and 17.1% are medium-sized (51 to 250 employees). On the other hand, 47.6% of the Colombian sample are micro businesses (5 to 10 employees), 47.6% are small businesses (11 to 50 employees), and only 4.8% are medium-sized firms (51 to 200 employees). Furthermore, 62.9% of the Mexican firms are mature, since they have been operating for over 10 years; in the same vein, 54.5% of the Colombian firms are mature. In the case of the Mexican SMEs, 88% are run by men and only 12% by women; there is a greater representation of women in the Colombian case, since 33.8% are led by women and 66.2% by men. 72% of Mexican SMEs are family businesses and only 28% are non-family businesses, whereas in the case of Colombian SMEs only 46.9% are family firms and 53.1% are non-family. Finally, regarding the level of training of their managers, 53.1% of Mexican managers or owners have a bachelor's degree, 7.5% a postgraduate degree, 10.9% a technical or commercial career, 16.6% an upper secondary level, and 12% only basic education; in the case of Colombian SMEs, 69% of their managers or owners have a bachelor's degree, 11% a postgraduate degree, 19.3% a technical or commercial career, and only 0.7% an upper secondary level.
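For reference, subsample sizes of this order are what the standard finite-population sample-size formula yields at a 95% confidence level and 5% margin of error. The sketch below is illustrative only; the population size used is a hypothetical placeholder, since the exact sizes of the source frames are not reported here:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Finite-population sample size: n0 = z^2 p(1-p)/e^2, then correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)           # finite-population correction
    return math.ceil(n)

# Hypothetical directory frame of 50,000 registered SMEs.
print(sample_size(50_000))  # -> 382; smaller frames yield smaller samples
```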
3.2 Variables

To measure capital structure, a higher-order scale (HOC) was used, which consists of two dimensions: (1) the sources of internal financing, made up of three indicators that measure the reinvestment of profits, the contributions of the members, and the incorporation of new partners [39–41]; and (2) the sources of external financing, made up of nine indicators, which measure loans from family members, private lenders or unregulated loan houses, government loans, supplier credit, commercial banks, savings banks, credit cards, leasing, and financial factoring [39–41]. All of them were measured with a Likert-type scale from 1 to 5 points, where the possible answers range from low importance to high importance.

To measure the adoption of ICTs, a higher-order scale (HOC) adapted from Chen and Tsou [42] was used, measured through four reflective dimensions: (1) ICT infrastructure, measured with four indicators covering the acquisition of hardware and software, app adoption, and employee training; (2) strategic alignment, measured with four indicators covering the alignment of the company's ICT strategy with the corporate strategy to achieve greater effectiveness; (3) organizational structure, measured with five indicators covering the changes made when adopting ICTs to empower employees, facilitate integration between departments, and help managers make timely decisions; and (4) individual learning, measured with five indicators covering learning skills and acquired knowledge that allow employees to effectively handle ICTs. All of them were measured with a Likert-type scale from 1 to 5 points, where the possible answers range from totally disagree to totally agree.

3.3 Reliability and Validity

To assess the reliability and validity of the scales, the measurement model was estimated using the PLS-SEM statistical technique with SmartPLS® 3 [30]. Based on the results obtained and presented in Table 1, it was found that there is high internal consistency in the six lower-order reflective constructs, since the composite reliability (CR) exceeds the value of 0.708 recommended by Hair et al. [31], Cronbach's Alpha [43] for each of the constructs is greater than 0.7, as suggested by Hair et al. [44] and Nunnally and Bernstein [45], and the average variance extracted (AVE) exceeds the suggested critical value of 0.5 [46, 47]. Likewise, it was found that the standardized outer loadings of the indicators easily exceed the value of 0.7 [31] and are statistically significant (p < 0.001), which guarantees the communality of each indicator; and since the AVE values are greater than 0.5, each of the scales used is guaranteed to have convergent validity [31]. On the other hand, the two higher-order constructs also fulfill the CR, Cronbach's alpha, and AVE thresholds.
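To make the reliability criteria just listed concrete, here is a minimal sketch of the standard formulas for CR and AVE from standardized outer loadings (Cronbach's alpha is included for completeness, computed from raw item scores). The loading values used are placeholders in the range reported for ICT infrastructure, not the study's actual data:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    errors = 1 - lam ** 2              # error variance of standardized indicators
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def ave(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of raw scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical loadings in the range reported for "ICT infrastructure".
print(round(composite_reliability([0.94, 0.95, 0.95, 0.96]), 3))  # ~0.974
print(round(ave([0.94, 0.95, 0.95, 0.96]), 3))                    # ~0.903
```

With loadings of this magnitude, CR and AVE land close to the values reported for ICT infrastructure in Table 1, which is why high, homogeneous loadings translate into strong internal consistency.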
Regarding the evidence of discriminant validity, this was evaluated through two tests, shown in Table 2. First, above the diagonal, the Fornell-Larcker criterion test is shown, which was determined using the square root of each construct's AVE; these values (in bold in the original table) form the diagonal of the table and, according to Fornell and Larcker [46], are higher than their corresponding correlations with any other construct, as observed above the diagonal. Second, below the diagonal, the Heterotrait-Monotrait ratio test (HTMT.90) [48] is presented, as it is considered a better-performing criterion to determine the discriminant validity of the constructs [48, 49]. It was determined by means of complete bootstrapping using 5,000 subsamples, and the results presented below the diagonal demonstrate that the values of the correlations between the reflective constructs are below 0.90 [48, 50, 51].
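The Fornell-Larcker check described above reduces to a simple matrix comparison: each construct's √AVE must exceed its correlations with every other construct. A minimal sketch, using the two higher-order constructs' AVE values and their reported correlation for illustration:

```python
import numpy as np

def fornell_larcker_ok(ave_values, corr):
    """True if sqrt(AVE) of each construct exceeds all its inter-construct correlations."""
    sqrt_ave = np.sqrt(np.asarray(ave_values))
    corr = np.asarray(corr, dtype=float)
    off_diag = corr - np.diag(np.diag(corr))       # zero out the diagonal
    return bool(np.all(sqrt_ave > np.abs(off_diag).max(axis=1)))

# Higher-order constructs: AVEs 0.558 and 0.744, correlation 0.272 (Tables 1-2).
ave_values = [0.558, 0.744]
corr = [[1.0, 0.272],
        [0.272, 1.0]]
print(fornell_larcker_ok(ave_values, corr))  # True: 0.747 and 0.862 exceed 0.272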
Table 1. Reflective measurement model assessment.

Lower-order constructs (convergent validity and internal consistency):

Construct                | Indicators | Outer loadings range (>0.7) | AVE (>0.5) | CR (>0.7) | α (>0.7)
Internal financing       | FFI2-FFI3  | 0.879-0.907                 | 0.798      | 0.888     | 0.748
External financing       | FFE3-FF5   | 0.770-0.847                 | 0.673      | 0.860     | 0.756
ICT infrastructure       | ITI1-ITI4  | 0.940-0.958                 | 0.904      | 0.974     | 0.965
Strategic alignment      | SA1-SA4    | 0.916-0.942                 | 0.873      | 0.956     | 0.951
Organizational structure | OS1-OS5    | 0.910-0.946                 | 0.861      | 0.969     | 0.960
Individual learning      | IL1-IL5    | 0.910-0.948                 | 0.869      | 0.971     | 0.962

Higher-order constructs (HOC):

HOC               | Dimensions (path coefficients)                                                                                          | AVE   | CR    | α
Capital structure | Internal financing (0.836); External financing (0.911)                                                                 | 0.558 | 0.863 | 0.801
ICT adoption      | ICT infrastructure (0.898); Strategic alignment (0.918); Organizational structure (0.937); Individual learning (0.929) | 0.744 | 0.981 | 0.980

Source: Own calculations based on data. Results obtained with SmartPLS® 3 [30]. Note: The t-values of all outer loadings and path coefficients were significant (p < 0.001).
For this reason, based on the evaluation of these criteria, it can be concluded that the data of this study are reliable and valid to test the hypothesis with PLS-SEM and with PLS-MGA.
Table 2. Discriminant validity for the lower- and higher-order constructs.

Lower-order constructs        | IF    | EF    | ICTI  | SA    | OS    | IL
Internal financing (IF)       | 0.893 | 0.535 | 0.095 | 0.138 | 0.137 | 0.175
External financing (EF)       | 0.707 | 0.820 | 0.265 | 0.315 | 0.274 | 0.244
ICT infrastructure (ICTI)     | 0.109 | 0.309 | 0.951 | 0.802 | 0.788 | 0.759
Strategic alignment (SA)      | 0.217 | 0.371 | 0.836 | 0.934 | 0.808 | 0.794
Organizational structure (OS) | 0.160 | 0.321 | 0.808 | 0.844 | 0.928 | 0.842
Individual learning (IL)      | 0.205 | 0.282 | 0.788 | 0.828 | 0.876 | 0.932

HOCs              | Capital structure | ICT adoption
Capital structure | 0.747             | 0.272
ICT adoption      | 0.305             | 0.862

Source: Own contribution based on data from results obtained with SmartPLS® 3 [30]. Note: The diagonal numbers (in bold in the original) represent the square root of the AVE values (for lower- and higher-order constructs). Above the diagonal, the Fornell-Larcker criterion test is presented; below the diagonal, the HTMT.90 correlation ratio test is presented.

4 Results and Discussion

To test the research hypothesis, the structural model was assessed using bootstrapping with 5,000 subsamples through SmartPLS® 3 [30]; as can be seen in Table 3, there is enough evidence to obtain confidence intervals to evaluate the precision of the parameters. These results show that the structural model has predictive relevance Q², since when
evaluating the predictive power of exogenous constructs over endogenous ones using the blindfolding technique [52, 53], a value of 0.051 was obtained; in agreement with Hair et al. [31], a Q² value greater than zero for a specific reflective endogenous construct shows the predictive relevance of the path model for that particular dependent construct. Therefore, the research model has predictive relevance.

Table 3. PLS-SEM results of the structural model.

Hypothesis                       | Standardized coefficient β | t-value | p-value | Decision  | Q²
Capital structure → ICT adoption | 0.272***                   | 4.892   | 0.000   | Supported | 0.051

Source: Own contribution based on data from results obtained with SmartPLS® 3 [30]. Note: *** = p < 0.001; ** = p < 0.05.
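The bootstrap procedure behind the t-value in Table 3 can be illustrated with a simplified stand-in: resample the data with replacement, re-estimate the path coefficient each time, and divide the original estimate by the standard deviation of the bootstrap estimates. The sketch below uses a plain standardized slope on synthetic data rather than the full PLS algorithm, so it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)

def path_coefficient(x, y):
    """Standardized slope between composite scores (stand-in for a PLS path)."""
    return np.corrcoef(x, y)[0, 1]

# Synthetic construct scores with a modest true relationship.
n = 320
x = rng.normal(size=n)
y = 0.27 * x + rng.normal(size=n)

original = path_coefficient(x, y)
boot = np.empty(5000)
for b in range(5000):                      # 5,000 bootstrap subsamples
    idx = rng.integers(0, n, size=n)       # resample rows with replacement
    boot[b] = path_coefficient(x[idx], y[idx])

t_value = original / boot.std(ddof=1)      # bootstrap t-statistic
print(f"beta = {original:.3f}, t = {t_value:.2f}")
```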
Regarding H1 , the results obtained and shown in Table 3 (β = 0.272, p < 0.001) indicate that capital structure has positive and significant effects on ICT adoption, therefore, H1 is accepted, since it has been found that capital structure has a significant impact of 27.2% on ICT adoption by SMEs that make up the sample. Likewise, it was possible to identify that capital structure has a significant impact on ICT adoption by Colombian SMEs (β = 0.395, p < 0.001). Likewise, it demonstrated to have a significant impact on the adoption of ICTs by Mexican SMEs (β = 0.346, p < 0.001). In this sense, there are various studies of theoretical and practical evidence on the direct and indirect positive effects of ICTs adoption on the performance of SMEs [54–57], and suggest that ICTs can improve the general, financial and operational performance of SMEs, if they are used by the firms properly. Therefore, it is necessary for SMEs in
64
H. Cuevas-Vargas et al.
both Mexico and Colombia to redouble their efforts to invest in and improve their ICT adoption practices. Moreover, it is important to note that our findings are similar to those reported in [56, 58], which affirm that, in order to benefit from the adoption of ICTs, offer better services, and explore new business opportunities, certain conditions must be met: ICT infrastructure, specialized ICT personnel, a favorable environment for the company, and monetary resources to invest in ICTs. It is also necessary to underline the importance of investments being long-term, because their positive impact is reflected only after a period of adoption [59, 60]. Likewise, organizations that adopt ICTs have to adjust their structure, make internal changes (such as training their personnel), and align ICT adoption with their corporate strategy. However, when evaluating the model through PLS-MGA, no statistically significant differences were found in the way that managers or owners of Colombian SMEs manage their financing for the adoption of ICTs compared to their Mexican counterparts, as shown by the difference in their path coefficients presented in Table 4. According to [61], a result is significant at a 5% probability of error if the p-value is less than 0.05 or greater than 0.95; in this study, a path coefficient difference of 0.050 was obtained with a p-value of 0.318. Therefore, there is no statistically significant difference in the management of the capital structure and its relationship with the adoption of ICTs between the managers or owners of Colombian and Mexican SMEs. This indicates that the way in which Colombian SMEs manage their capital structure to adopt ICTs is very similar to that of Mexican SMEs.

Table 4. PLS-MGA results

Path | Path coefficient group: Colombia | Path coefficient group: Mexico | Path coefficient difference | p-value Col vs Mex
Capital structure → ICT adoption | 0.395*** (t = 5.287) | 0.346*** (t = 4.639) | 0.050 N.S. | 0.318

Source: Own calculations based on data from results obtained with SmartPLS® 3, Ringle et al. [30]. Note: *** = p < 0.001; ** = p < 0.05; N.S. = Non-significant
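To make the decision rule concrete, the sketch below encodes the PLS-MGA significance criterion just cited from [61]. It is an illustrative Python fragment, not part of the SmartPLS® analysis actually used in the study.

```python
def mga_significant(p_value: float, alpha: float = 0.05) -> bool:
    """PLS-MGA criterion: a group difference is significant at the 5% level
    when the one-sided p-value is below alpha or above 1 - alpha."""
    return p_value < alpha or p_value > 1.0 - alpha

# Values reported in Table 4 (path coefficient difference of 0.050 after rounding):
print(mga_significant(0.318))  # False -> no significant Colombia/Mexico difference
```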
On the other hand, when applying the importance-performance map analysis, it was found that, despite the absence of statistically significant differences in the multigroup analysis, Mexican SMEs showed a better performance in ICT adoption (51.73) than Colombian SMEs (46.37). This level of performance is nevertheless relatively low, which may be due, in the first instance, to the lack of financing and, secondly, to the lack of an ICT adoption strategy. It is evident that SMEs in both countries have low ICT infrastructure and provide little training to their employees in the use of ICTs, that ICT adoption is not aligned with the corporate strategy of the SMEs, that their organizational structure has not been changed when adopting ICTs to facilitate integration among departments and empower their employees, and that employees do not apply ICTs effectively; this problem is most notable in the case of Colombian SMEs. In the same vein, it was
found that Colombian SMEs show a better performance of their capital structure (71.02), as well as of both internal and external sources of financing (71.14 and 71.08, respectively), indicating that financing has not been a barrier to adopting ICTs and that the problem lies mainly in the lack of a proper ICT adoption strategy. For their part, Mexican SMEs showed a weak performance of their capital structure (48.27) and of internal and external financing (49.43 and 47.07, respectively), as shown in Table 5.

Table 5. Importance-performance map analysis of exogenous variables on ICT adoption

Endogenous variables | Total sample N = 320 | Colombian SMEs N = 145 | Mexican SMEs N = 175
ICT adoption performance | 49.29 | 46.37 | 51.73
ICT infrastructure performance | 43.71 | 40.94 | 46.03
Strategic alignment performance | 51.65 | 49.06 | 53.71
Organizational structure perf. | 51.69 | 48.61 | 54.20
Individual learning performance | 49.36 | 46.44 | 51.82

Exogenous variables | Perf. (Total) | Imp. (Total) | Perf. (Col) | Imp. (Col) | Perf. (Mex) | Imp. (Mex)
Capital structure | 58.38 | 0.304 | 71.02 | 0.477 | 48.27 | 0.466
Sources of internal financing | 59.15 | 0.922 | 71.14 | 0.874 | 49.43 | 0.973
Sources of external financing | 57.98 | 1.061 | 71.08 | 1.076 | 47.07 | 1.027
Members' contributions | 60.39 | 0.070 | 60.39 | 0.070 | 50.14 | 0.107
Incorporation of new members | 57.81 | 0.064 | 57.81 | 0.064 | 48.85 | 0.131
Government loans | 48.59 | 0.051 | 48.59 | 0.051 | 33.14 | 0.065
Supplier credit | 67.19 | 0.058 | 67.18 | 0.058 | 58.43 | 0.082
Commercial banks | 56.48 | 0.059 | 56.48 | 0.059 | 46.71 | 0.079

Source: Own calculations based on data from results obtained with SmartPLS® 3, Ringle et al. [30]. Note: Perf. = performance; Imp. = importance
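For readers unfamiliar with where the performance scores in Table 5 come from, the sketch below illustrates the usual importance-performance map computation: indicator scores are rescaled to a 0–100 range and aggregated with their outer weights, while importance is the construct's total effect on the target construct. This is a generic illustration with hypothetical data, not the authors' SmartPLS® computation.

```python
import numpy as np

def ipma_performance(indicators: np.ndarray, scale_min: float,
                     scale_max: float, outer_weights: np.ndarray) -> float:
    """Mean construct score after rescaling each indicator to 0-100,
    aggregated with normalized outer weights."""
    rescaled = (indicators - scale_min) / (scale_max - scale_min) * 100.0
    weights = outer_weights / outer_weights.sum()
    return float((rescaled @ weights).mean())

# Hypothetical 5-point Likert answers from three respondents on three indicators:
x = np.array([[4, 3, 5], [2, 4, 3], [5, 5, 4]], dtype=float)
print(ipma_performance(x, 1, 5, np.array([0.4, 0.3, 0.3])))  # construct performance
```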
It should be noted that external financing sources proved to be of greater importance for improving the performance of ICT adoption in both countries. In particular, supplier credit has been the financing source with the highest performance in financing SMEs in both countries, but it is not the variable with the greatest importance for improving the level of ICT adoption. For that reason, the empirical evidence indicates that if Colombian SMEs want to improve their level of ICT adoption based on their
capital structure, they should pay special attention to commercial bank loans, followed by government loans and the contributions of their members, since the Colombian SMEs that have done so have seen a greater impact on their ICT adoption. On the other hand, we suggest that Mexican SMEs improve their level of ICT adoption by taking advantage of their internal sources of financing, such as the contributions of their members and the incorporation of new partners, followed by loans from commercial banks; by leveraging these funding sources they will improve their ICT adoption, as long as the investment is accompanied by an appropriate ICT adoption strategy.
5 Conclusions

Capital structure and ICT adoption have attracted intense debate and attention in the business field in recent decades. Despite extensive empirical analysis of leverage decisions in large companies, research on the capital structure of SMEs is relatively recent, and analysis of ICT financing decisions in Latin American SMEs, including those of Mexico and Colombia, is still scarce. Therefore, this work analyzed the influence that capital structure has on the adoption of ICTs and presents evidence of the findings for these two countries. The results of this research have important implications for policy at the firm, industry, and micro levels. Firstly, this study found that there are no significant differences between Colombia and Mexico in terms of how owners or managers manage the capital structure and the positive impact it has on the adoption of ICTs. However, in the importance-performance map analysis of ICT adoption, we found that the restriction on access to financing is greater in Mexico, which, in the short, medium, and long term, would imply greater ICT development in Colombia; a stronger impulse from public and private policies is therefore needed in Mexico to allow organizations greater access to resources. The outcomes indicate that performance in the adoption of ICTs is low and that there are no significant differences between the two countries. It is important to note, however, that SMEs in Mexico and Colombia reflect what is recurrent in Latin American countries: first, they do not have the economic resources to invest in ICTs and, second, ICT adoption is not a priority in their corporate strategy. On the other hand, it is important to note that SMEs in both countries obtained better results in the adoption of ICTs when they used external sources of financing rather than internal ones, which contrasts with the results obtained in various studies, for example [23, 27, 62–65], and with the postulate of the pecking order theory developed by Myers [8], which suggests that internal sources of financing take priority, while the use of external sources is deferred until internal sources are exhausted. On the contrary, other studies agree with our findings regarding the positive impact of external financing on new technologies, since it tends to reduce information asymmetry [66–70]. In general, our analysis proposes that the pecking order theory is not followed by SMEs in technology adoption and that this traditional financial belief is not valid for this sector, although it has found favorable results in other countries. Therefore, organizations in both Latin American countries are advised to strengthen internal management and to use and invest internal resources for the adoption of ICTs, since this minimizes the risk
of failure and of asymmetric information (which increases the cost of financing), allocating resources rationally and using them effectively and efficiently in the firm's strategy in order to maximize corporate value. Among the limitations of this research, the sample only included SMEs from one state (Aguascalientes) in the case of Mexico and from one department (Bogotá) in the case of Colombia; future studies should therefore consider not only SMEs but also large firms, and other regions of these two Latin American countries. Another limitation is that this research was cross-sectional, so it would be important to carry out a longitudinal study. Furthermore, this study points to some future research areas. First, it suggests including the life cycle of organizations and debt maturity, differentiating small from medium-sized companies, and likewise including the ownership structure; this especially affects SMEs, since a large part of a firm's investments could be explained by its ownership structure and life cycle. It would also be important to know the mediating effect of ICTs on the relationship between financing and innovation, and how much business performance increases when SMEs improve their capital structure.
References

1. Candelon, F., Reeves, M., Wu, D.: The new digital world: hegemony or harmony? The Boston Consulting Group, pp. 1–6 (2017)
2. Mendo, F.A., Fitzgerald, G.: Theoretical approaches to study SMEs eBusiness progression. J. Comput. Inf. Technol. 13(2), 123–136 (2005). https://doi.org/10.2498/cit.2005.02.04
3. Arendt, L.: Barriers to ICT adoption in SMEs: how to bridge the digital divide? J. Syst. Inf. Technol. 10(2), 93–108 (2008). https://doi.org/10.1108/13287260810897738
4. Lozano, M.M., Restrepo Sánchez, L.M.: Nacimiento y supervivencia de las empresas en Colombia, Bogotá (2016). http://www.confecamaras.org.co/phocadownload/Cuadernos_de_analisis_economico/Cuaderno_de_Analisis_Economico_N_11.pdf
5. INEGI: National Statistical Directory of Economic Units (DENUE) (2020). https://www.inegi.org.mx/app/mapa/denue/default.aspx. Accessed 02 Aug 2020
6. CEPAL: Estudio Económico de América Latina y el Caribe 2014. Naciones Unidas, Santiago de Chile (2014)
7. López Velarde, L.A.: Desarrollo económico y empleo en México, 2000–2014: La Alianza del Pacífico ¿una alternativa? In: Artigas, B. (ed.) Sociedad, desarrollo y políticas públicas I, pp. 455–477. Universidad Autónoma Metropolitana (2018)
8. ANIF: Retos del Financiamiento Pyme en Colombia: Gran Encuesta Pyme de ANIF, vol. 126, no. 126, pp. 310–312 (2020). http://www.anif.co
9. ENAPROCE: Encuesta Nacional sobre Productividad y Competitividad de las Micro, Pequeñas y Medianas Empresas (2018). https://www.inegi.org.mx/programas/enaproce/2018/
10. Cuevas-Vargas, H., Velázquez-Espinoza, N., Cortés-Palacios, H.A., Ramírez-Razo, M.d.S.: Innovación, tecnología y estructura de capital. Un estudio comparativo de las pequeñas empresas mexicanas y colombianas. Ideas Concyteg 15(256), 17–33 (2020). https://sices.guanajuato.gob.mx/resources/ideas/ebooks/256/descargas.pdf
11. Modigliani, F., Miller, M.H.: The cost of capital, corporation finance and the theory of investment. Am. Econ. Rev. 48(3), 261–297 (1958). https://www.jstor.org/stable/1812919
12. Fama, E.F.: Agency problems and the theory of the firm. J. Polit. Econ. 88(2), 288–307 (1980). https://doi.org/10.1017/CBO9780511817410.022
13. Jensen, M.C., Meckling, W.H.: Theory of the firm: managerial behavior, agency costs and ownership structure. J. Financ. Econ. 3(4), 305–360 (1976). https://doi.org/10.1016/0304-405X(76)90026-X
14. Ross, S.A.: The economic theory of agency: the principal's problem. Am. Econ. Rev. 63(2), 134–139 (1973)
15. Myers, S.C.: The capital structure puzzle. J. Finan. 39(3), 575–592 (1984)
16. Myers, S.C., Majluf, N.S.: Corporate financing and investment decisions when firms have information that investors do not have. J. Financ. Econ. 13(2), 187–221 (1984). https://doi.org/10.1016/0304-405X(84)90023-0
17. Myers, S.C.: Determinants of corporate borrowing. J. Financ. Econ. 5(2), 147–175 (1977). https://www.sciencedirect.com/science/article/pii/0304405X77900150
18. Rogers, E.M.: Diffusion of Innovations, 3rd edn. The Free Press, New York (2005)
19. Antlová, K.: Informační management: motivation and barriers of ICT adoption in small and medium-sized enterprises. Inf. Manag. 2, 140–155 (2009)
20. Titman, S., Wessels, R.: The determinants of capital structure choice. J. Finan. 43(1), 1–19 (1988). https://doi.org/10.1111/j.1540-6261.1988.tb02585.x
21. Coleman, S., Robb, A.: Capital structure theory and new technology firms: is there a match? Manag. Res. Rev. 35(2), 106–120 (2012). https://doi.org/10.1108/01409171211195143
22. Minola, T., Giorgino, M.: External capital for NTBFs: the role of bank and venture capital. Int. J. Entrep. Innov. Manag. 14(2–3), 222–247 (2011). https://doi.org/10.1504/IJEIM.2011.041733
23. Minola, T., Cassia, L., Criaco, G.: Financing patterns in new technology-based firms: an extension of the pecking order theory. Int. J. Entrep. Small Bus. 19(2), 212–233 (2013). https://doi.org/10.1504/IJESB.2013.054964
24. Giotopoulos, I., Kontolaimou, A., Korra, E., Tsakanikas, A.: What drives ICT adoption by SMEs? Evidence from a large-scale survey in Greece. J. Bus. Res. 81, 60–69 (2017). https://doi.org/10.1016/j.jbusres.2017.08.007
25. Kılıçaslan, Y., Sickles, R.C., Atay Kayış, A., Üçdoğruk Gürel, Y.: Impact of ICT on the productivity of the firm: evidence from Turkish manufacturing. J. Prod. Anal. 47(3), 277–289 (2017). https://doi.org/10.1007/s11123-017-0497-3
26. Chen, H.L., Hsu, W.T., Huang, Y.S.: Top management team characteristics, R&D investment and capital structure in the IT industry. Small Bus. Econ. 35(3), 319–333 (2010). https://doi.org/10.1007/s11187-008-9166-2
27. Neville, C., Lucey, B.M.: Capital structure and Irish tech SMEs. SSRN Electron. J. (2017). https://doi.org/10.2139/ssrn.2910979
28. O'Brien, J.P.: The capital structure implications of pursuing a strategy of innovation. Strateg. Manag. J. 24(5), 415–431 (2003). https://doi.org/10.1002/smj.308
29. Demuner Flores, M.d.R., Becerril Torres, O.U., Nava Rogel, R.M.: Rev. Glob. Negocios 2(1), 15–28 (2014)
30. Ringle, C.M., Wende, S., Becker, J.: SmartPLS 3. SmartPLS GmbH, Boenningstedt (2015). http://www.smartpls.com
31. Hair, J.F., Hult, G.T.M., Ringle, C., Sarstedt, M.: A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). SAGE Publications Inc., Thousand Oaks (2017)
32. Lohmöller, J.-B.: Latent Variable Path Modeling with Partial Least Squares. Physica, Heidelberg (1989)
33. Ringle, C.M., Sarstedt, M., Straub, D.W.: A critical look at the use of PLS-SEM in MIS Quarterly. MIS Q. 36(1), iii–xiv (2012). https://ssrn.com/abstract=2176426
34. Wetzels, M., Odekerken-Schröder, G., van Oppen, C.: Using PLS path modeling for assessing hierarchical construct models: guidelines and empirical illustration. MIS Q. 33(1), 177–195 (2009). https://doi.org/10.2307/20650284
35. Cuevas-Vargas, H.: La influencia de la innovación y la tecnología en la competitividad de las Pymes manufactureras del estado de Aguascalientes. Universidad Autónoma de Aguascalientes (2016)
36. Henseler, J., Ringle, C.M., Sinkovics, R.R.: The use of partial least squares path modeling in international marketing. Adv. Int. Mark. 20, 277–319 (2009). https://doi.org/10.1108/S1474-7979(2009)0000020014
37. INEGI: Directorio Estadístico Nacional de Unidades Económicas (2017). https://www.inegi.org.mx/app/mapa/denue/. Accessed 04 Feb 2017
38. Bogotá Chamber of Commerce: Bases de datos e información empresarial. Cámara de Comercio de Bogotá (2018). https://www.ccb.org.co/Fortalezca-su-empresa/Temas-destacados/Bases-de-datos-e-informacion-empresarial
39. AECA: La innovación en la empresa: factor de supervivencia. Principios de Organización y Sistemas, documento 7. Asociación Española de Contabilidad y Administración de Empresas, Madrid (1995)
40. Ferraro, C., Goldstein, E.: Políticas de acceso al financiamiento para las pequeñas y medianas empresas en América Latina. Comisión Económica para América Latina y el Caribe, p. 41 (2011)
41. Cortés-Palacios, H.A.: Influencia del financiamiento y de la innovación en el desempeño de las Pymes manufactureras en el estado de Aguascalientes. Universidad Autónoma de Aguascalientes (2016)
42. Chen, J.-S., Tsou, H.-T.: Information technology adoption for service innovation practices and competitive advantage: the case of financial firms. Inf. Res. Int. Electron. J. (2007)
43. Cronbach, L.J.: Coefficient alpha and the internal structure of tests. Psychometrika 16(3), 297–334 (1951). https://doi.org/10.1007/BF02310555
44. Hair, J.F., Black, W.C., Babin, B.J., Anderson, R.E.: Multivariate Data Analysis, 7th edn. Pearson Education, New Jersey (2014)
45. Nunnally, J.C., Bernstein, I.H.: Psychometric Theory. McGraw-Hill, New York (1994)
46. Fornell, C., Larcker, D.F.: Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 18(1), 39–50 (1981). https://doi.org/10.2307/3151312
47. Hair, J.F., Sarstedt, M., Ringle, C.M., Mena, J.A.: An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 40(3), 414–433 (2012). https://doi.org/10.1007/s11747-011-0261-6
48. Henseler, J., Ringle, C.M., Sarstedt, M.: A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43(1), 115–135 (2014). https://doi.org/10.1007/s11747-014-0403-8
49. Cuevas-Vargas, H., Parga-Montoya, N., Fernández-Escobedo, R.: Effects of entrepreneurial orientation on business performance: the mediating role of customer satisfaction—a formative-reflective model analysis. SAGE Open 9(2), 1–14 (2019). https://doi.org/10.1177/2158244019859088
50. Gold, A.H., Malhotra, A., Segars, A.H.: Knowledge management: an organizational capabilities perspective. J. Manag. Inf. Syst. 18(1), 185–214 (2001). https://doi.org/10.1080/07421222.2001.11045669
51. Teo, T.S.H., Srivastava, S.C., Jiang, L.: Trust and electronic government success: an empirical study. J. Manag. Inf. Syst. 25(3), 99–132 (2008). https://doi.org/10.2753/MIS0742-1222250303
52. Geisser, S.: A predictive approach to the random effect model. Biometrika 61(1), 101–107 (1974). https://doi.org/10.1093/biomet/61.1.101
53. Stone, M.: Cross-validatory choice and assessment of statistical predictions. J. Roy. Stat. Soc. Ser. B 36(2), 111–147 (1974)
54. Cuevas-Vargas, H., Parga-Montoya, N.: Adopción de tecnologías de información y comunicación en la pyme de un país emergente: implicaciones en la innovación al proceso para un mejor desempeño empresarial. Conciencia Tecnológica, no. 56, pp. 43–53 (2018). http://www.redalyc.org/pdf/120/12020109.pdf
55. Cuevas-Vargas, H., Parga-Montoya, N., Hernández-Castorena, O.: Information and communication technologies to achieve an optimal relationship between supply chain management, innovation, and performance. In: García-Alcaraz, J.L., Leal Jamil, G., Avelar-Sosa, L., Briones Peñalver, A.J. (eds.) Handbook of Research on Industrial Applications for Improved Supply Chain Performance, pp. 262–284. IGI Global, Hershey (2020)
56. Manochehri, N.N., Al-Esmail, R.A., Ashrafi, R.: Examining the impact of information and communication technologies (ICT) on enterprise practices: a preliminary perspective from Qatar. Electron. J. Inf. Syst. Dev. Ctries. 51(1), 1–16 (2012). https://doi.org/10.1002/j.1681-4835.2012.tb00360.x
57. Steinfield, C., LaRose, R., Chew, H.E., Tong, S.T.: Small and medium-sized enterprises in rural business clusters: the relation between ICT adoption and benefits derived from cluster membership. Inf. Soc. 28(2), 110–120 (2012). https://doi.org/10.1080/01972243.2012.651004
58. Ollo-López, A., Aramendía-Muneta, M.E.: ICT impact on competitiveness, innovation and environment. Telemat. Inf. 29(2), 204–210 (2012). https://doi.org/10.1016/j.tele.2011.08.002
59. Bayo-Moriones, A., Billón, M., Lera-López, F.: Perceived performance effects of ICT in manufacturing SMEs. Ind. Manag. Data Syst. 113(1), 117–135 (2013). https://doi.org/10.1108/02635571311289700
60. Consoli, D.: Literature analysis on determinant factors and the impact of ICT in SMEs. Procedia Soc. Behav. Sci. 62, 93–97 (2012). https://doi.org/10.1016/j.sbspro.2012.09.016
61. Sarstedt, M., Henseler, J., Ringle, C.M.: Multigroup analysis in partial least squares (PLS) path modeling: alternative methods and empirical results. Adv. Int. Mark. 22, 195–218 (2011). https://doi.org/10.1108/S1474-7979(2011)0000022012
62. Berger, A.N., Udell, G.F.: The economics of small business finance: the roles of private equity and debt markets in the financial growth cycle. J. Bank. Financ. 22(6–8), 613–673 (1998). https://doi.org/10.1016/S0378-4266(98)00038-7
63. Brown, J.R., Fazzari, S.M., Petersen, B.C.: Financing innovation and growth: cash flow, external equity, and the 1990s R&D boom. J. Finan. 64(1), 151–185 (2009). https://doi.org/10.1111/j.1540-6261.2008.01431
64. Khan, M.K., Kaleem, A., Zulfiqar, S., Akram, U.: Innovation investment: behaviour of Chinese firms towards financing sources. Int. J. Innov. Manag. 23(7), 1–29 (2019). https://doi.org/10.1142/S1363919619500701
65. Revest, V., Sapio, A.: Financing technology-based small firms in Europe: what do we know? Small Bus. Econ. 39(1), 179–205 (2012). https://doi.org/10.1007/s11187-010-9291-6
66. Agénor, P.R., Canuto, O.: Access to finance, product innovation and middle-income traps. Res. Econ. 71(2), 337–355 (2017). https://doi.org/10.1016/j.rie.2017.03.004
67. Casson, P.D., Martin, R., Nisar, T.M.: The financing decisions of innovative firms. Res. Int. Bus. Financ. 22(2), 208–221 (2008). https://doi.org/10.1016/j.ribaf.2007.05.001
68. Nightingale, P., Coad, A.: Muppets and Gazelles: Political and Methodological Biases in Entrepreneurship Research. Falmer, Brighton (2013)
69. Ogbonna, U.G., Ejem, C.A.: Dynamic modeling of market value and capital structure in Nigerian firms. Int. J. Econ. Financ. Issues 10(1), 1–5 (2020). https://doi.org/10.32479/ijefi.8848
70. Schneider, C., Veugelers, R.: On young highly innovative companies: why they matter and how (not) to policy support them. Ind. Corp. Chang. 19(4), 969–1007 (2010). https://doi.org/10.1093/icc/dtp052
CHAT SPI: Knowledge Extraction Proposal Using DialogFlow for Software Process Improvement in Small and Medium Enterprises

Jezreel Mejía1, Isaac Rodríguez-Maldonado1, and Yadira Quiñonez2(B)

1 Centro de Investigación en Matemáticas (CIMAT, A.C.), Unidad Zacatecas, Calle Lasec, Andador Galileo Galilei, Andador 3, Lote 7, CP 98160 Zacatecas, Mexico
{jmejia,isaac.rodriguez}@cimat.mx
2 Universidad Autónoma de Sinaloa, Mazatlán, Mexico
[email protected]
Abstract. The development of software products and services within organizations generates a large amount of information that is not registered, generally known as tacit knowledge. This knowledge is the most valuable information for an organization, since it helps to improve and to avoid the repetition of errors. This need is even more accentuated in the organizations called SMEs (Small and Medium Enterprises), since they do not have a process culture and, as a result, lack defined processes. Therefore, the large growth of this information in SMEs creates the need for a platform that automates knowledge extraction. To achieve this, this article presents a proposal to extract knowledge using DialogFlow for Software Process Improvement in SMEs. The proposal modifies how DialogFlow works and establishes a knowledge repository based on ISO/IEC 29110. This modification allowed the development of CHAT SPI.

Keywords: Software Process Improvement · Knowledge extraction · DialogFlow · ISO/IEC 29110
1 Introduction

According to Nonaka [1], knowledge is an integral concept with profound meanings, with the belief that it increases an organization's capacity for effective action. Knowledge is also defined as "justified true beliefs". Knowledge, according to Davenport and Allee [2], is professional intellect, know-what, know-how, and know-why, along with the experience, concepts, values, beliefs, and ways of working that can be shared and communicated. In this sense, the development of software products and services generates a large amount of unregistered information within organizations, and this can represent the most valuable information for an organization, since knowledge helps to improve and to avoid the repetition of errors.
In this context, knowledge management analyzes and controls the knowledge flows generated in the organization [3], in order to manage and formalize tacit knowledge into explicit knowledge, making use of knowledge extraction. Knowledge extraction has already been implemented in different organizations, as indicated in [4, 5]. Therefore, to implement Software Process Improvement (SPI), knowledge extraction is a key element for learning the organization's knowledge so that it can be implemented in its processes [6]. SPI is supported by standards, models, and methodologies, some of which are focused on Small and Medium-Sized Enterprises (SMEs), such as ISO/IEC 29110 [7]. Currently, SMEs are important worldwide, as they make up more than 90% of organizations [4]. SMEs also generate a large amount of information that does not exist in a formalized way; this organizational knowledge is only known from person to person and is not extracted in an automated way [4]. Therefore, the large growth of this information in SMEs creates the need for a platform that automates the extraction of knowledge aligned to a standard for SMEs, in this case ISO/IEC 29110. To achieve this, the goal of this article is to present a proposal for knowledge extraction using DialogFlow for SPI in small and medium software development enterprises, and to establish a knowledge base on ISO/IEC 29110. After this introduction, the structure of the paper is as follows: the background of the research work and the key concepts are reported in Sect. 2; related works are presented in Sect. 3; Sect. 4 describes how DialogFlow works; Sect. 5 presents CHAT SPI and how it operates; the case study implementation is reported in Sect. 6; finally, conclusions and future work are included in Sect. 7.
2 Background

This section describes the basic concepts used in the development of this proposal: knowledge, ISO/IEC 29110, software process improvement, and chatbots.

2.1 Knowledge

Knowledge is grouped according to the different characteristics by which it can be identified; it can be tacit, explicit, or organizational by nature. Tacit knowledge is the knowledge that is in people's heads or in the heads of a community of people, such as a company; it is tacit knowledge that makes people smart. Explicit knowledge is the knowledge that has been explicitly given to a community of people, and it is what they consider knowing [8]. Organizational knowledge is information that is important to the company, combined with experience and understanding, and is preserved. This knowledge is divided into different levels [9]:

• At the first level of the hierarchy is data. Data describe the properties of objects or events. Data exist on their own and are not included in any activity until some point in time; data collection also occurs without having to resolve any issue.
• At the second level of the hierarchy is information. Information is characterized by a more specific objective, for which the data are combined with the procedure for their processing. Information is contained in the answers to questions that begin with the words "what", "who", "where", "when", "how", etc. A dataset becomes information only when it is relevant to the needs of the company.
• At the third level of the hierarchy is knowledge. Knowledge is transmitted through formation and answers the question "how". Any newly acquired knowledge must be verified against existing concepts; if the information can be grouped with existing concepts, it can be added to them.

2.2 ISO/IEC 29110

ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) constitute the specialized system for worldwide standardization [7]. The national member bodies of ISO and IEC participate in the development of International Standards through technical committees established by the respective organization to attend to areas of technical activity. ISO/IEC 29110 [10] is an international software engineering standard that defines life cycle profiles for Very Small Entities (VSEs) that develop non-critical software. It aims to meet the specific needs of VSEs and to address the problem of low adoption of standards by small companies (NYCE). ISO/IEC 29110 is structured through two processes, Project Management (PM) and Software Implementation (SI), each with its objectives and activities.

• PM process: The purpose of the PM process is to establish and carry out in a systematic way the tasks of the software implementation project, making it possible to fulfil the project's objectives in terms of expected quality, time, and cost [5].
• SI process: The purpose of the Software Implementation process is the systematic performance of the Analysis, Design, Construction, Integration, and Testing activities for new or modified software products in accordance with the specified requirements [5].

2.3 Software Process Improvement

To understand the concept of process improvement, it is necessary to understand two concepts: a) software process, "defined as a structure for the activities, actions and tasks required to build high quality software" [11]; and b) process improvement, "a program of activities designed to improve the performance and maturity of the organization's processes and the results of that program" [12]. Therefore, Software Process Improvement (SPI) is a scheme of activities to improve the performance and maturity of organizational process activities, actions, and tasks in order to build high-quality software products and services.

2.4 Chatbot

A chatbot is conversational software used for communication between a computer and a human. Most chatbots are simply "FAQ bots": all they do is provide scripted answers
to a group of matching FAQs. They lack intelligence and are unable to understand the meaning of customers' questions in order to answer accordingly; they literally just look at the customer's words and compare them to a database of written questions and answers [13]. Building chatbots that can interact with the inherent nature of humans is therefore a challenge in computing, artificial intelligence, and natural language processing. A conversational chatbot, however, can go beyond just answering questions to focus on intent, executing specific actions that are already defined.
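A minimal sketch of the "FAQ bot" behavior just described: the customer's words are compared with a small database of stored questions and the answer of the closest match is returned. The questions and answers here are hypothetical.

```python
FAQ = {
    "what is a work product": "An input or output artifact of a task, e.g. the Project Plan.",
    "what is a role": "A responsibility assigned to a person, e.g. Project Manager.",
}

def faq_answer(question: str) -> str:
    """Return the stored answer whose question shares the most words with the input."""
    words = set(question.lower().replace("?", "").split())
    best = max(FAQ, key=lambda q: len(words & set(q.split())))
    return FAQ[best]

print(faq_answer("What is a work product?"))
```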
3 Related Works

With the premise of extracting knowledge, some authors have improved chatbots by implementing the convolutional neural network technique [13], which allows the user's response to be analyzed and compared with any established domain element. Another way to extract knowledge is through ontologies for smart services, a generic knowledge model not limited to a specific domain; it can be applied to any robot service, as it provides a high-level scheme that allows the dynamic generation and management of the basic information needed for all robot services [14]. The following subsections describe these approaches to knowledge extraction in terms of tools, techniques, and domains.

3.1 Tools

One tool presents an approach to building a chatbot called FIT-EBot as a smart assistant to provide administrative and learning support to students in a higher education environment [15]. In line with the recent trend of chatbots in higher education, the FIT-EBot can act as a teacher or administrative assistant to quickly respond to common student inquiries such as subject matter, exercises, course registration, course schedules, assignments, scholarships, regulations, etc. The FIT-EBot is implemented on the DialogFlow framework, a Google technology integrated with artificial intelligence techniques to analyze user messages and generate responses; its aim is to identify user intent and extract context information. Another tool is a chatbot based on Artificial Intelligence (AI) [8] that responds in natural language. It is designed to understand human voice or text messages, and it responds to the user's communication by looking in its database. The AI-based chatbot retrieves the response from a standard knowledge management repository, and several users can query the knowledge base through the chatbot, allowing the number of users to increase exponentially. Another ontology-based tool was developed in [16] to study complex application domains with object-oriented modelling languages. In this research, the authors highlighted that an important component of knowledge management in an organization, capable of ensuring the effective functioning of the knowledge spiral, is the creation of knowledge-based systems (KBS) capable of accumulating knowledge and using it in management decision making. In the initial stages of creating an integrated knowledge base, when
there is no vision of the application domain, it is reasonable to use a case-based knowledge representation in which knowledge is formulated in the form of decision-making precedents.

3.2 Techniques

The convolutional neural network (CNN) technique [17], also called deep learning, belongs to a branch of machine learning that mimics the neuronal system of the human brain (a neural network). The increasing amount of available processing power makes it possible to simulate nerves and neurons (called nodes) in an Artificial Neural Network (ANN) model. This model is composed of three kinds of layers, namely the input layer, the output layer, and one or more hidden layers. CNNs are regularized versions of multilayer perceptrons; multilayer perceptrons usually refer to fully connected networks, in which each neuron in one layer is connected to all neurons in the next layer. In a chatbot, this model would be applied as follows: first, heuristics are used by the system to locate the best response from its database of predefined responses; second, selecting the most appropriate response template may involve simple algorithms such as keyword matching, or it may require more complex processing with machine learning or deep learning for the dialogue selection process, which is essentially a prediction problem. Regardless of the use of heuristics, these systems only output predefined responses and do not generate new output. The authors in [17] present a retrieval-based conversational system that uses matching algorithms between the input utterance and the candidate responses; a combination of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) is used inside this model.

3.3 Open Domain Chatbots

Seq2seq [13] shows that a simple deep-neural language model with an attention mechanism can be used to train an open domain chatbot. According to the authors, this model managed to reduce the complexities of language understanding, feature extraction, domain recognition, intent detection, semantic slot filling, and language generation involved in chatbot training. The proposal is an efficient and scalable model for training an end-to-end chatbot capable of interacting with humans naturally in an open domain Question Answering (QA) setting. The model is built on the Google TensorFlow (version 1.13) framework, which facilitates training on the Cornell Movie Conversation datasets on GPUs. The proposed chatbot is first transformed into Question Answering (QA) and then modelled with the Seq2Seq encoder, the proposed decoder, and the attention mechanism. The Seq2Seq architecture uses multi-layered and modified Recurrent Neural Networks (Gated Recurrent Units [GRUs]) to map the input sequence to a vector of fixed dimensionality. Another proposal, by the authors of [14], is an ontological intelligent service robot, a generic knowledge model not limited to a specific domain. The robot framework used by the authors in their experiments needs a service description that specifies information including domain knowledge and instance-level knowledge in order to perform a robot service. The service description is packaged in a service package and then provided to the
framework to start the service. The information in the service package can be integrated into the Intelligent Service Robot Ontology (ISRO). All the knowledge in ISRO is based on an ontology description in OWL format, because an ontology is easily shared and interchangeable. By considering both low-level information and symbolic knowledge, ISRO [14] achieves integrated knowledge management that supports abstraction and concretion by defining the semantic relationships between them.

3.4 Result

As a result of the review of related works, it is important to highlight the use of chatbots in several environments, such as: 1) higher education environments, with FIT-EBot, which can interact as a teacher or administrative assistant to quickly respond to student questions; and 2) a chatbot based on Artificial Intelligence (AI) that responds to the user's messages by looking in its database [8]. Beyond the use of chatbots as such, modifications have been proposed to improve this kind of service, such as the convolutional neural network (CNN) technique, also called deep learning, a branch of machine learning that imitates the human brain's neural system. The above cases are based on a single domain in which the chatbot is an expert, but the authors of [13, 14] propose techniques that allow the use of several domains or several knowledge sources. The tools reviewed provide a knowledge base and interaction with the user, and this interaction is carried out by a chatbot whose function is to let the user extract information through an FAQ. As proposed by the authors of [17], changing the domain or using several domains has already been done. Therefore, after analyzing these results, we chose to work with DialogFlow for the management of the chatbot, together with a database that supports the handling of domains, enabling knowledge extraction with a chatbot over the ISO/IEC 29110 domain.
4 How DialogFlow (Google) Works

DialogFlow is a natural language understanding platform that allows the design of a conversational user interface and supports integration into web applications, devices, interactive voice response systems, among others. To interpret and process natural language, a very robust language analyzer is needed [18]. A DialogFlow virtual agent (chatbot) handles conversations with end users through a chat API [19]. It is a natural language understanding module that understands the nuances of human language and facilitates the translation of text into structured data. A DialogFlow agent is like a human agent in a call center: like a human agent, it needs training to understand the intended conversation situations, but the training does not need to be too explicit. In the normal flow, the user writes a question through the chat API, and the chatbot translates the text into structured data defined with respect to the knowledge domain in order to recover a possible answer. As shown in Fig. 1, DialogFlow starts its flow from an end-user expression, either a question or an action. The agent then finds the intent corresponding to the expression and, once it finds the intent, it scans its training phrases to give a
response to the end-user, concluding the flow. As shown in Fig. 2, DialogFlow receives a question and, from an FAQ script, gives an answer to the user.
Fig. 1. DialogFlow information flow.
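To make this flow concrete, the snippet below shows how an application would typically send an end-user expression to a DialogFlow agent and read back the matched intent and its scripted response, using Google's Python client for DialogFlow. The project and session identifiers are placeholders.

```python
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def detect_intent(project_id: str, session_id: str, text: str,
                  language_code: str = "en") -> str:
    """Send one end-user expression to the agent and return its scripted answer."""
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    return result.fulfillment_text

# Example call with placeholder identifiers:
# answer = detect_intent("my-gcp-project", "session-001", "What is a work product?")
```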
5 CHAT SPI

As shown in Fig. 1, DialogFlow operates like any chatbot: by default it receives a question from the user and provides scripted answers from a group of matching FAQs. This way of working is not usable for knowledge extraction in software process improvement, since it requires a domain expert to ask the questions about the reference domain, in this case ISO/IEC 29110. Here, the chatbot must be transformed into the domain expert: it is the chatbot that should ask the questions, while the domain user gives the answers, thereby feeding the synonyms and lexicon of the established domain. To solve this, CHAT SPI was created and several entities of the architecture were modified. The modified architecture for a successful extraction of knowledge in the software process improvement domain is described below.

Modified architecture: As shown in Fig. 2, the architecture operates as follows. DialogFlow formalizes the user's response; this response is given to DialogFlow in a token structure made up of significant words from the user, with the concatenation phrases removed, and DialogFlow returns a formal response to the API, which determines what the next question is. The system also has a SQL database, which stores the user responses, what is needed for user sessions, and the traceability between user responses and the domain. It also contains a graph-oriented database holding the structure of the ISO/IEC 29110 standard, which constitutes the Chat API domain.

CHAT SPI: The API proposed within CHAT SPI performs four principal functions: 1) communication with the DialogFlow agent; 2) management of the SQL database; 3) questionnaire creation; and 4) queries to the graph-oriented database (the domain). These functions are described below:
Fig. 2. Final architecture of the system.
1. Communication with the DialogFlow agent: First, the proposed API analyzes the user's response to divide it into tokens; once analyzed, the concatenation words are removed (e.g., and, or, is, in, …). When the tokens are cleaned, they are concatenated and sent to the DialogFlow agent as a sentence. This sentence must have at least three words and a context related to the established domain, which is given by the proposed API. DialogFlow then analyzes the received sentence and returns to the API a formalized phrase according to the established domain. This is repeated until the entire user response has been analyzed.
2. Management of the SQL database: The API stores the user's responses in the SQL database so that they can be recovered and analyzed later. In the same manner, it stores the traceability created between the user's response and the established domain, as well as API variables needed for its operation, such as the session and the sequential delivery of questions from a questionnaire.
3. Questionnaire creation: The questions must be structured according to the established domain, in this case ISO/IEC 29110. The question structure has the following elements:
• Question structure:
1. Start question: Once the user's response is entered into Chat SPI, it is analyzed to identify any domain elements: a work product (either input or output) or a role.
2. Sequential question: Once the user's response is entered into the Chat API, it is analyzed by Chat SPI and the DialogFlow agent to structure the next question.
The next question is structured taking into account the domain elements not yet identified in the user's response: a work product (either input or output) or a role.
3. Specific questions: This kind of question identifies the status of the work product (input or output) or of the involved role.
4. Confirmation questions: This question only confirms the role identified in the user's response.
5. Link questions: If the user's response contains a domain element (work product or role) not identified in the established domain (in this case ISO/IEC 29110), the link question links the element identified in the user's response with a task of the established domain.
The start question (1) is mandatory, and questions 2–5 can be combined depending on the user's response.
4. Queries to the graph-oriented database Neo4j: The API queries the Neo4j database, which holds the structure and information of ISO/IEC 29110. This information is requested by the API to create the traceability between the answers and the domain, and the questions presented to users are structured by CHAT SPI from it.

Domain: As mentioned earlier, each chatbot must define its own domain; therefore, a software process improvement domain had to be created. In this case, an ISO/IEC 29110 domain was established and implemented in a graph-oriented database. The standard ISO/IEC 29110 [20] is structured into the Project Management and Software Implementation processes. These processes contain activities, each activity has tasks that must be completed, and these tasks are related to a role and to work products, which can be input or output products. The process structure has the following levels of depth:
• Processes
• Activities
• Tasks
• Work products
• Roles
Once the structure has been identified, it is transferred, and its knowledge is stored in the graph-oriented database. For example, the purpose of the Project Management process in ISO/IEC 29110 is to establish and carry out in a systematic way the tasks of the software implementation project, making it possible to comply with the project's objectives in terms of expected quality, time, and cost. This process has the following activities:
• PM.1 Project Planning
• PM.2 Project Plan Execution
• PM.3 Project Assessment and Control
• PM.4 Project Closure
Taking PM.4 as an example (see Table 1), the standard indicates that this activity involves the roles of project manager (PM) and customer (CUS), who perform the two tasks PM.4.1 and PM.4.2, as shown in Table 1. To perform this activity they need the input work products (inputs), and as a result of the activity the output work products (outputs) are generated.

Table 1. Project Closure.

Role | Task List | Input | Output
PM, CUS | PM.4.1. Formalize the completion of the project according to the Delivery Instructions established in the Project Plan, providing acceptance support, and getting the Acceptance Record signed. | Project Plan (Delivery Instructions); Software Configuration [delivered] | Acceptance Record; Software Configuration [accepted]
PM | PM.4.2. Update Project Repository. | Software Configuration [accepted]; Project Repository | Project Repository [updated]
As shown in Fig. 3 below, the structure of the standard resembles a graph; transferring this structure to a graph yields the following result:
Fig. 3. Neo4j database.
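As an illustration, the fragment below sketches how the PM.4.1 row of Table 1 could be loaded into Neo4j with the official Python driver. The labels and relationship types (Task, Role, WorkProduct, PERFORMS, INPUT, OUTPUT) are our own illustrative choices, not necessarily those of the authors' database; as explained below, the direction of the relationship distinguishes inputs from outputs.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Relationship direction encodes the work product status:
# (work product)-[:INPUT]->(task) and (task)-[:OUTPUT]->(work product).
CYPHER = """
MERGE (t:Task {id: 'PM.4.1', name: 'Formalize the completion of the project'})
MERGE (r:Role {id: 'PM', name: 'Project Manager'})
MERGE (win:WorkProduct {name: 'Project Plan'})
MERGE (wout:WorkProduct {name: 'Acceptance Record'})
MERGE (r)-[:PERFORMS]->(t)
MERGE (win)-[:INPUT]->(t)
MERGE (t)-[:OUTPUT]->(wout)
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.run(CYPHER)
driver.close()
```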
As can be seen in Fig. 3, the blue node represents the task, the red node represents the role, and the orange nodes represent the work products. To differentiate work products between inputs and outputs, the direction of the relationship indicates origin or destination: if the relationship starts at a task and ends at a work product, it is an output; conversely, if the relationship starts at a work product and ends at a task, it is an input.

How Chat SPI works: As shown in Fig. 4, the API starts by showing an initial question to the user. Once the user has responded, the response is cleaned of concatenation words; once the text is cleaned, a phrase from the user's response is sent to the DialogFlow agent, which sends a formalized phrase back to the API. This formalized phrase is analyzed to see whether it complies with the established domain (ISO/IEC 29110). If it complies, the response is saved and Chat SPI continues with the next question.
Fig. 4. Chat SPI Flowchart.
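A simplified sketch of the loop in Fig. 4 is shown below. The stop-word list and the helper names (ask_user, detect_intent, matches_domain, save_answer) are hypothetical placeholders for the corresponding Chat SPI components, not the authors' actual code.

```python
# Illustrative sketch of the Chat SPI main loop (Fig. 4); helpers are hypothetical.
CONCATENATION_WORDS = {"and", "or", "is", "in", "the", "a", "an", "to", "of"}

def clean_response(answer: str) -> str:
    """Drop concatenation words and rebuild the phrase sent to DialogFlow."""
    tokens = [t for t in answer.lower().split() if t not in CONCATENATION_WORDS]
    return " ".join(tokens)

def run_questionnaire(questions, ask_user, detect_intent, matches_domain, save_answer):
    for question in questions:
        answer = ask_user(question)          # show the question, read the response
        phrase = clean_response(answer)      # remove concatenation words
        formalized = detect_intent(phrase)   # DialogFlow returns a formalized phrase
        if matches_domain(formalized):       # complies with the ISO/IEC 29110 domain?
            save_answer(question, answer, formalized)
        # otherwise the sequential/specific questions of Fig. 5 are generated
```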
If the user's response does not contain the domain information addressed by the start question, Chat SPI performs the sequential questions about work products or roles (see the Questionnaire entity), as shown in Fig. 5. If a user response is partial, Chat SPI asks again about the partial answer.
Fig. 5. Question generation module flowchart.
As shown in Fig. 5, the first question launched concerns the task (1: start question), and the next questions (2–5) are launched depending on the user's response. If the user did not mention all the necessary domain elements, such as work products or roles, the sequential question (2) is displayed. If the user gives an incomplete response, Chat SPI displays the specific question (3), which seeks to complete the knowledge. If the user's response contains an element that does not match any domain element, question (4) is displayed to confirm which term corresponds to the established domain elements. Finally, if the user's response contains a domain element (work product or role) not identified in the established domain, the link question (5) is displayed to link the identified element with a task of the established domain.
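These rules can be summarized as a small decision function. The sketch below is a schematic reading of Fig. 5, with hypothetical boolean flags standing for the result of analyzing the user's response.

```python
def next_question_type(missing_elements: bool, incomplete_response: bool,
                       unmatched_term: bool, unknown_domain_element: bool) -> int:
    """Map the analysis of a user response to the question types of Fig. 5."""
    if unknown_domain_element:
        return 5  # link question: tie the new element to a task of the domain
    if unmatched_term:
        return 4  # confirmation question: confirm the corresponding domain term
    if incomplete_response:
        return 3  # specific question: complete the missing knowledge
    if missing_elements:
        return 2  # sequential question: ask about missing work products/roles
    return 1      # move on to the next start question
```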
6 Case Study

After creating the foundations for the proposed CHAT SPI, a case study was carried out with three experts in ISO/IEC 29110. Between 2019 and 2020, they participated in 10 internal audits of software development SMEs seeking certification against the standard; these organizations have obtained certification and, in some cases, recertification. The case study was conducted through the interaction of each of these experts with CHAT SPI. To carry out the experiment, the following activities were established:
1. Set the domain in Chat SPI, in this case ISO/IEC 29110.
2. Define the questions of the questionnaire with respect to the established domain.
3. Schedule the interviews.
4. Train the expert users in Chat SPI.
5. Expert user interaction with Chat SPI.
Schedule the interviews: Once activities 1–2 had been completed, the questionnaire was established and the questions were structured, and a meeting was scheduled with the three experts for training in Chat SPI.

Train the expert users in Chat SPI: During the training, the experts received an explanation of how the questionnaire works; a basic example was given so that they would know the kind of questions and the answers to be typed into Chat SPI in response to the displayed questions; and, finally, the day to complete the questionnaire was scheduled.

Expert user interaction with Chat SPI: The interaction took place on the scheduled day and was saved in a file for the purpose of learning about the interaction and correcting errors. After the expert users' interaction with CHAT SPI, the questions generated by the DialogFlow agent and presented in CHAT SPI were analyzed to evaluate:

• Construction of the questions: whether the questions are structured with respect to the established domain.
• User response: whether the content of the response allows continuity with the questions generated after its analysis.
• 29110 domain: that the domain is built in an appropriate way with a correct flow, allowing the questions for each task of the standard to be delivered and the user's knowledge of it to be evaluated.
• Domain lexicon: common process words and synonyms of roles and work products, and how these synonyms enrich the training phrases.

A summary of the interaction between an expert user and Chat SPI is shown below as an example:

1. START QUESTION: With emphasis on the work products you use, explain how you carry out the following task: "Monitor the Project Plan execution".
Answer: By using a tool to visualize the planned tasks, the task's due date is reviewed, and the work is done in that order.
2. SEQUENTIAL QUESTION: What roles are involved in the task "Monitor the Project Plan execution"?
Answer: The Project Manager.
3. START QUESTION: Emphasizing the work products you use, explain how you carry out the following task: "Analyze and evaluate the Change impact".
Answer: A meeting is held with the team members where each one brings the advantages and difficulties that would arise from the perspective of each role; they are recorded in the minutes of the meeting, and with the help of the technical leader a resolution is reached.
4. SEQUENTIAL QUESTION: What roles are involved in the task "Analyze and evaluate the Change impact"?
Answer: Analyst, designer, and project manager.
5. SEQUENTIAL QUESTION: What roles are involved in the task "Conduct revision meetings with the Work Team"?
Answer: Project Manager and Technical Leader.
Regarding the interaction between the expert users and Chat SPI, it can be seen that the construction of the questions was coherent with the ISO/IEC 29110 domain and that the questionnaire followed the flow shown in Fig. 8. This shows that the questions are displayed as required to extract the knowledge.
7 Conclusions and Further Research Work

Currently, organizations seek to implement or define their processes in order to improve the quality of their software products and services. This need is even more accentuated in the companies called SMEs. In these organizations, the development of software products and services generates a large amount of information that is not registered; many of these organizations lose this knowledge, and it is not reflected in their software development processes. Therefore, the proposal presented in this article aims to support knowledge extraction. The review of related works allowed us to identify that chatbots are the most appropriate instrument, as they are developed for user interaction in the form of a questionnaire for information extraction. From this analysis, the DialogFlow framework was selected as the basis for knowledge extraction for Software Process Improvement; however, a modification of this framework was performed, which allowed CHAT SPI to be proposed. This proposal was evaluated with a case study, in which the responses of the experts in ISO/IEC 29110 allowed Chat SPI to be evaluated, and the following results were obtained:

• Construction of the questions: The questions were constructed correctly, without losing either the sense of the question or the sequence needed to extract information.
• User response: The users' responses do allow the construction of the next question; even so, Chat SPI needs to be fed with more synonyms and user language.
• 29110 domain: The questions were developed in order, following the domain established in the graph-oriented database.
• Domain lexicon: The proposal must be applied to more respondents so that the amount of lexicon understood by Chat SPI grows and improves.

As future work, the CHAT SPI proposal must be applied to more respondents in order to learn and improve the amount of lexicon it understands. Moreover, the knowledge base will integrate domains other than ISO/IEC 29110, for example another standard or a methodology such as Scrum.
References

1. Nonaka, I., Lewin, A.Y.: A dynamic theory of organizational knowledge creation. Organ. Sci. 5(1), 14–37 (1994)
2. Davenport, T., Jarvenpaa, S., Beers, M.: Improving knowledge work processes. Sloan Manag. Rev. 37, 53–66 (1996)
3. Castillo-Rojas, W., Medina Quispe, F., Molina, F.F.: Una metodología para procesos Data WareHousing basada en la experiencia (A methodology for data warehousing processes based on experience), vol. 26, pp. 83–103 (2018)
4. Mejía, J., Muñoz, M., Guillermo, R., Iván, E.T., Heltton, E.R., Jesús, M.G.: Knowledge extraction tacit for software process improvement in a governmental organization. In: 9th Iberian Conference on Information Systems and Technologies, pp. 1–7. IEEE Press, Barcelona (2014)
5. Muñoz, M., Mejía, J., García, J., Minero, J.J.: Introducing the process improvement in higher education institutions. In: 10th Iberian Conference on Information Systems and Technologies, pp. 1–7. IEEE Press (2015)
6. Mejía, J., Íñiguez, F., Muñoz, M.: Data analysis for software process improvement: a systematic literature review. In: Rocha, Á., Correia, A., Adeli, H., Reis, L., Costanzo, S. (eds.) Recent Advances in Information Systems and Technologies. Advances in Intelligent Systems and Computing, vol. 569, pp. 48–59. Springer, Cham (2017)
7. ISO/IEC 17024:2012(en). https://www.iso.org/obp/ui#iso:std:iso-iec:17024:ed-2:v1:en
8. Narendra, U.P., Pradeep, B.S., Prabhakar, M.: Externalization of tacit knowledge in a knowledge management system using chat bots. In: 3rd International Conference on Science in Information Technology, pp. 613–617. IEEE Press, Bandung (2017)
9. Avdeenko, T.: Intelligent support of knowledge transformation based on integration of case-based and rule-based reasoning. In: ACM International Conference Proceedings Series, pp. 66–71 (2017)
10. Certificación ISO/IEC 29110 – NYCE. https://www.nyce.org.mx/certificacion-isoiec-29110/
11. Pressman, R.S.: Ingeniería del Software. Un Enfoque Práctico (2010)
12. CMMI® para Desarrollo, Versión 1.3. Equipo del Producto CMMI (2010). http://www.sei.cmu.edu. Accessed 29 Mar 2019
13. Abdullahi, S.S., Yiming, S., Abdullahi, A., Aliyu, U.: Open domain chatbot based on attentive end-to-end Seq2Seq mechanism. In: ACM International Conference Proceedings Series, pp. 339–344 (2019)
14. Chang, D.S., Cho, G.H., Choi, Y.S.: Ontology-based knowledge model for human-robot interactive services. In: Proceedings of the ACM Symposium on Applied Computing, pp. 2029–2038 (2020)
15. Hien, H.T., Cuong, P.N., Nam, L.N.H., Nhung, H.L.T.K., Thang, L.D.: Intelligent assistants in higher-education environments: the FIT-EBot, a chatbot for administrative and learning support. In: ACM International Conference Proceedings Series, pp. 69–76 (2018)
16. Ruiz-Castilla, J.S., Ledeneva, Y., Cervantes, J., Trueba, A.: A model for knowledge management in software industry. In: Figueroa-García, J., López-Santana, E., Ferro-Escobar, R. (eds.) Applied Computer Sciences in Engineering. Communications in Computer and Information Science, vol. 657, pp. 3–14. Springer, Cham (2016)
17. Bautista, A.M., Feliu, T.S.: Defect prediction in software repositories with artificial neural networks. In: Mejia, J., Munoz, M., Rocha, Á., Calvo-Manzano, J. (eds.) Trends and Applications in Software Engineering. Advances in Intelligent Systems and Computing, vol. 405, pp. 165–174. Springer, Cham (2016)
18. DialogFlow documentation. https://cloud.google.com/dialogflow/docs
19. Dialogflow basics. https://cloud.google.com/dialogflow/docs/basics
20. Technical Report ISO/IEC TR 29110-5-1-2, vol. 2010, p. 42 (2011). www.iso.org
Process Model to Develop Educational Applications for Hospital School Programs
Jaime Muñoz-Arteaga1(B), César Velázquez Amador1, Héctor Cardona Reyes2, Miguel Ortiz Esparza1, and Gerardo Ortiz Aguiñaga2
1 Universidad Autónoma de Aguascalientes (UAA), Aguascalientes, Mexico
{jaime.munoz,eduardo.velazquez,miguel.ortiz}@edu.uaa.mx
2 CIMAT, Zacatecas, Mexico
{hector.cardona,gerardo.ortiz}@cimat.mx
Abstract. This work promotes the development of educational applications to support learning activities in a hospital school program. A hospital school program fulfills the right of children and young people to be educated while they are ill or undergoing rehabilitation. To this end, a user-centered, multidisciplinary, and incremental process model is proposed as a guide to design and develop educational applications, addressing the lack of educational resources suited to hospital school programs. A case study applies the proposed process model to develop interactive applications that support the learning of a hospitalized child, in particular in the area of basic math. Keywords: Educational applications · Spiral model process · Inclusive education · Hospital school program · Learner profile
1 Introduction
Information and communications technology has been used to improve the teaching and learning process over the last decade. The use of these technologies in educational environments is one of the defining characteristics of the evolution of teaching and learning in recent years. Nowadays, the use of intelligent devices is radically transforming the way users communicate and access information. These technological advances enable new forms of learning for a large diversity of users, including hospitalized children who face long hospital stays; even when hospitals can offer access to devices, children still need guidance on using the technology for their education [1, 2]. An educational application is an interactive application that enables virtual teaching and self-learning; such applications run on mobile devices such as tablets, cell phones, and smartphones [3, 4]. According to the Trends Shaping Education report [5], educational applications are among the fastest-growing technologies expected to have a significant impact on education in the coming years.
Children with chronic diseases must face not only medical procedures in the hospital, but also the absence of a regular life, with a strong impact on their school, social, and family contexts. This calls for a specific educational intervention for a population that is vulnerable for health reasons and loses continuity in its development process. In general, a hospital school program is offered to fulfill the right of children and young people to be educated while they are ill or undergoing rehabilitation [6]. The present work promotes the development of educational applications as technological support for the teaching and learning activities required by hospitalized children in elementary school. The structure of this work is as follows. The introduction identifies the context and some definitions related to the present research. The second section describes educational issues related to hospital school programs. The third section proposes in detail a process model following a user-centered approach to develop educational applications. The next section applies the model in a case study where interactive applications are developed to support the learning of hospitalized children at elementary school in Mexico. The fifth section analyzes the results and some development issues. Finally, conclusions and future work are presented.
2 A Problem Outline
Educational applications are increasingly accepted as technological support in the teaching-learning process. Research in special education has for many years identified issues affecting hospitalized children during elementary school [6, 7]. Like healthy children, hospitalized children have learning needs in training, recreation, and leisure; however, regular education is often not available to them, nor are the educators and instructional means to guide their learning process. The following problems are related to hospital school programs at the basic education level:
• There are few systematic approaches that consider educational applications as an alternative solution to support hospital school programs [7, 8].
• There is no software process model suitable for developing educational resources for hospital school programs.
• No open repository of educational resources designed for hospital school programs is available [9].
• Personalized teaching is not possible, particularly when there is a large number of students in a classroom [10].
• There is no effective training program for teachers on the use of mobile devices to support math and reading disorders [11].
• In general, teachers do not have access to best practices for using educational applications to assist children's learning needs [12].
3 Process Model
Nowadays, children at elementary school can use several technological resources as complementary support for learning activities, but hospitalized children require a
larger diversity of technological resources. A multidisciplinary team is required, including teachers, doctors, nurses, a psycho-pedagogue, software designers, and programmers. They are involved in the considerable effort of creating resources designed for use in formal and non-formal educational situations. The whole responsibility for the content being created lies with the multidisciplinary team, which must guarantee that it is the most suitable content for each specific learning/teaching situation [2]. The following roles frequently participate when technology is used in a hospital school program:
• The physician treats the patient and makes a diagnosis, using symptoms and test results to determine the patient's illness.
• The nurse cares for patients, administers medicine, communicates with doctors, checks vital signs, and notifies the doctor about academic activities at the hospital.
• The special education teacher is responsible for selecting the activities, resources, and strategies that are most appropriate for the student profile.
• The psycho-pedagogue focuses on teaching children life-preparing knowledge such as social or non-curricular skills and cultural norms.
• The student is the hospitalized child who requires elementary school education and uses the educational content proposed by the teacher.
• The software analyst works closely with the final users, identifying skills that are meaningful to measure the child's performance and interacting with the teacher and the student in order to facilitate the use of technology.
• The software designer is responsible for designing interactive content considering the learning needs of the student.
• The programmer is responsible for coding the software, based on the user requirements the educational applications are expected to support.
In addition, the development of educational applications requires the use of models and prototypes and frequent releases for user assessment; committing to a large software project up front is not feasible, since changes may be required at any time [11]. Therefore, it is essential to have a software engineering process model that helps manage the main activities (requirements, design, development, evaluation, among others) and the participation of each role at the different stages of the process model [13]. To mitigate the problems outlined above, the present work prescribes a certain order for the development of educational applications for hospital school programs. This work proposes a process model (see Fig. 1) to design and develop educational applications that teachers can use as a resource to help children in a hospital school program. The proposed model is inspired by Boehm's spiral software process model [14] and considers the collaborative production of artifacts and educational resources (see Fig. 1). The process model advocates a user-centered approach because the users (such as teachers and children in the hospital) can participate in several phases of the development of an educational application. The process model in Fig. 1 is iterative and evolutionary. In the analysis phase it is necessary to define the learning objective for the production of new educational resources, as well as the organization of human resources. This phase involves
Fig. 1. Process model to design and develop educational applications for hospital school programs (inspired by Boehm's model).
instructional designers, media specialists, and subject matter specialists. The design phase focuses on the specification of prototypes and the risks related to the new educational resources. The third phase takes the previous prototype into account for the software development of the educational application; the applications are tested to find coding errors and are sent to the users for feedback. User experience and usability evaluations of the educational resources are conducted in the fourth phase. Based on the user assessment, the development process enters the next iteration.
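To make this iteration structure concrete, the following minimal Python sketch (our illustration, not an artifact of the proposed model) shows how each iteration traverses the four phases and feeds the user assessment back into the next cycle; the phase names come from Fig. 1, while the `assess` callback and its return values are hypothetical placeholders for the teachers' and children's feedback.

```python
# Minimal sketch of the iterative process model: each iteration runs the
# four phases, and the assessment produced in the evaluation phase drives
# the next iteration (hypothetical names, for illustration only).
PHASES = ("analysis", "design", "development", "evaluation")

def run_iteration(iteration, requirements, assess):
    """Run the four phases over the current requirements and collect feedback."""
    artifacts = {}
    for phase in PHASES:
        artifacts[phase] = f"iteration {iteration}: {phase} of {requirements}"
    # The assess callback stands in for the user assessment of the iteration.
    return assess(artifacts)

def spiral(requirements, assess, max_iterations=3):
    for iteration in range(1, max_iterations + 1):
        feedback = run_iteration(iteration, requirements, assess)
        if not feedback:            # no pending user needs: stop iterating
            break
        requirements = feedback     # refined requirements for the next loop
    return requirements
```

In practice, of course, the assessment is carried out by the teachers and the hospitalized child; the sketch only captures the control flow of the spiral.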
4 Case Study
Elementary school is compulsory in Mexico; this education is free and accessible for all Mexican children, even hospitalized ones [15]. To this end, there is a national program called "Let's Continue Learning … in the Hospital" (SIGAMOS, from the Spanish "Sigamos aprendiendo en el hospital") [16]. The USAER (Service Units in Support of Regular Education) teachers are officially recognized by the Mexican Ministry of Public Education [17]. They provide educational attention for children and young people who cannot attend regular school for health reasons, so
that once their health situation has been taken care of, they can rejoin regular school and continue their formal education [18]. In this case study, the USAER teachers form a multidisciplinary team including teachers, psychologists, social workers, and pedagogues. The USAER teachers have attended several children with a certain kind of cancer at the Miguel Hidalgo Hospital, Aguascalientes, Mexico. The present work implemented the process model of Fig. 1 for a large software development effort, because every child requires several educational applications to support individualized learning assistance. As there is no room to present the full development of the project, this case study focuses on offering customized educational applications according to the learning requirements of a single hospitalized child. For this purpose, a physician, a nurse, and the USAER teachers selected for this case study a 7-year-old child presenting a delay in math learning. In addition, a software analyst, a programmer, and a user interface designer from the Universidad Autónoma de Aguascalientes proposed technological solutions for the hospital school program of this case study. 4.1 First Iteration The first iteration of the proposed process model was carried out to identify the requirements of the teacher and of the student attending the hospital school program. Analysis Phase. First of all, it was necessary to identify the learner profile [10], which was obtained through a questionnaire based on the Monterrey test [19] used by USAER teachers. The values of the learner profile obtained from this test are represented here as a radar chart (see Fig. 2); these values represent cognitive skills (perception, attention, coordination, reasoning, and language level) as well as basic skills (motor, auditory, visual, and social).
Fig. 2. Learner profile related to basic math skills.
The radar representation of Fig. 2 shows some mild math-related difficulties for the student in attention, reasoning, and cognitive skills.
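As an aside, a radar chart like the one in Fig. 2 can be produced with a few lines of matplotlib. The sketch below is our illustration: the skill names are taken from the text, while the scores and the 0–5 scale are assumed example values, not the child's actual test results.

```python
# Sketch of a learner-profile radar chart (illustrative values only).
import numpy as np
import matplotlib.pyplot as plt

skills = ["perception", "attention", "coordination", "reasoning",
          "language", "motor", "auditory", "visual", "social"]
scores = [3, 2, 4, 2, 3, 4, 4, 3, 3]          # hypothetical 0-5 scale

angles = np.linspace(0, 2 * np.pi, len(skills), endpoint=False).tolist()
scores_closed = scores + scores[:1]            # close the polygon
angles_closed = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles_closed, scores_closed)
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(skills)
ax.set_ylim(0, 5)
plt.title("Learner profile (basic math skills)")
plt.show()
```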
Once the learner profile had been identified by the USAER teachers and the hospital physician, multidisciplinary work was carried out to design digital educational applications targeting math learning problems such as mispronouncing and misunderstanding numbers and a lack of number-ordering skills. Design Phase. In a meeting, the USAER teachers requested some logical activities for the student, such as ordering objects and identifying and pronouncing numbers. To avoid the risk of losing user requirements, the software analyst and the USAER teachers designed some sketches of the user interfaces that would support these logical activities in the educational applications.
Fig. 3. Sketch to select and put trees in order.
Fig. 4. Sketch related to identifying and pronouncing numbers.
The first sketch (Fig. 3) describes a scenario where the student orders objects by selecting trees and placing them in a certain order. In the second sketch (see Fig. 4), the student selects and pronounces a number, which is then displayed in graphical and textual form. In addition, thanks to the physician and the nurse of this case study, it was possible to use a hospital room adapted as a classroom with internet access and some technological resources, such as a laptop computer and some electronic tablets. Development Phase. The programmer identified several educational applications in open repositories; for example, the following interactive applications were identified on Google Play: number spelling learning, "los números en español del 1 al 100", and find the hidden objects [20]. Although these applications were useful to support logical activities, they did not cover all user requirements; notably, they did not have a version in the student's native language. Evaluation Phase. Since the available online educational applications did not cover the user requirements of this case study, a second iteration was necessary to develop new educational applications with a Spanish-language mobile user interface, integrating several functions to cover the math learning needs of the hospitalized student.
4.2 Second Iteration
The second iteration was conducted to develop educational applications taking into account the user requirements and prototypes from the previous iteration of the spiral process.
Analysis Phase. As in the work of Heshmat et al. [21], the software analyst analyzed the user requirements and prototypes of the previous iteration; a strategy of using several prototypes was proposed to cover the user learning needs. The subsequent software development had to follow the specification of the previous prototype, and the prototypes were to be tested by the teachers in order to offer the best one to the hospitalized student. Design Phase. In this phase, the analyst and designer proposed to the USAER teachers a prototype in terms of wireframes for every educational application; the risk to manage in this iteration was preserving the coherence between the software code and the design model. Accordingly, the wireframe of Fig. 5 corresponds to the sketch of Fig. 3, where the user can select and put a set of trees in order, generally from the smallest to the tallest tree.
Fig. 5. Wireframe for an educational application to select and put trees in order.
Fig. 6. Wireframe for an educational application to identify and pronounce numbers
Figure 6 shows a wireframe supporting the sketch of Fig. 4, where a sound sensor captures the student's pronunciation of a number; the number is then displayed in symbolic as well as graphical representation. In addition, a message is displayed with the result of validating the user input. Development Phase. The two educational applications were coded for the Android operating system; the development of this code was relatively easy since it followed the models defined in the previous phases. Figure 7 shows the user interface of the educational application where the student interactively selects trees and puts them in order from smallest to tallest. If the student's selection is right, a confirmation message is sent to the user; for a wrong selection, a message is displayed inviting the student to reorder the trees. Figure 8 shows the user interface of the new educational application where the student selects and pronounces the number eight; the number is then reproduced and displayed in textual and graphical representation (8 red apples in a tree). A message is displayed with the result of validating the student's answer. In addition, the student can control the volume level and the navigation of the content.
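The validation logic behind both applications is simple. The following Python sketch is our illustration of it (the actual applications were written for Android); the Spanish number words, the assumed speech-recognizer transcript, and the feedback messages are stand-ins, not the apps' real strings.

```python
# Illustrative validation logic for the two educational applications:
# checking tree ordering, and checking a pronounced number.
NUMBER_WORDS = {"uno": 1, "dos": 2, "tres": 3, "cuatro": 4,
                "cinco": 5, "seis": 6, "siete": 7, "ocho": 8}  # assumed subset

def check_tree_order(selected_heights):
    """Return a feedback message for the tree-ordering activity."""
    if selected_heights == sorted(selected_heights):
        return "Well done! The trees are ordered from smallest to tallest."
    return "Not yet. Try reordering the trees from smallest to tallest."

def check_pronounced_number(transcript, expected):
    """Compare the recognized spoken word with the expected number."""
    value = NUMBER_WORDS.get(transcript.strip().lower())
    if value == expected:
        return f"Correct! You said {value}."
    return "Almost. Listen and try pronouncing the number again."

# Example: the speech recognizer (assumed) returns "ocho" for the number 8.
print(check_pronounced_number("ocho", 8))
```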
Fig. 7. User interface of an educational application to select and put trees in order.
Fig. 8. User interface of an educational application to identify and pronounce numbers.
Fig. 9. Usability test applied to new developed educational applications.
Evaluation Phase. Based on Nielsen's usability testing [22], a usability evaluation was carried out by the USAER teachers in order to measure several usability aspects of the two new educational applications. As shown in Fig. 9, the educational application to identify and pronounce numbers (red columns) presents better usability than the application to put trees in order (blue columns). Indeed, the application with red columns obtained better values for user satisfaction, efficiency, usability, and learnability; the remaining values were similar.
4.3 Third Iteration
The third iteration addressed the training initiative for the teachers of hospitalized children. Analysis Phase. The USAER teachers now had the newly developed educational applications, so a training program was required so that these teachers could use and test them with the hospitalized student. Since USAER previously had no guide or recommendations for using educational applications, it was proposed to emphasize the use of several interactive applications as effective educational material. Design Phase. The risk in this phase was resistance to change when identifying effective educational resources in the form of mobile applications, particularly among the older teachers. Training the teachers was the strategy chosen to overcome this risk. For this,
the designer and programmer proposed to design and develop the content of a training course for USAER teachers, so that they could gain experience using digital applications on mobile devices [23].
Fig. 10. Online course under the platform Moodle
The online course was composed of four units (see Fig. 10). The first unit presented the use of technology in education, in particular the use of open educational resources on the web and on mobile devices [24]. The second unit offered a list of educational applications to help with math and reading learning activities at elementary school. The following unit proposed strategies to introduce educational applications in the context of a hospital school program. Finally, the fourth unit proposed scenarios for applying a usability test and evaluating the user experience. Development Phase. The training course was delivered in this phase to 8 USAER teachers over two weeks at the facilities of the Autonomous University of Aguascalientes. The training sessions in the laboratory (see Fig. 11) made it possible to conduct several practical activities, such as accessing content from the internet, using mobile devices, and using educational resources (such as learning objects and educational applications) related to learning deficits. Next, the USAER teachers were required to define an instructional design for the courses of their hospitalized children; the designer and programmer selected several educational applications as educational support for learning difficulties such as basic math difficulties. Evaluation Phase. All USAER teachers of this case study used and tested the new educational applications in the classroom with the student for two months. After several learning activities, the student preferred the educational application that helps to identify and pronounce numbers. The student had fun every time he pronounced a number, and he could easily understand the textual, graphical, and sound representations. It was
Fig. 11. A training session for USAER teachers using new educational applications
possible for the student to build scenarios about numbers, which helped hospitalized students mitigate some troubles related to basic math.
Fig. 12. Learner profile comparing pre-test and post-test related to basic math skills.
Once the student had used the new educational applications in the classroom of the hospital school program, it was possible to observe an evolution in basic math. This is shown in Fig. 12 as a radar representation, where the student shows a slight improvement in reasoning and attention, and a small improvement in memory and cognitive skills. The student in this case study needs to continue using more educational applications recommended by the USAER teachers in order to further reduce his math troubles.
5 Discussion
The proposed model of Fig. 1 was implemented here in a case study presenting only three iterations, as part of a larger production of educational applications for a hospital school program. After searching for existing educational applications, at the end of the first iteration the teachers evaluated the software and provided feedback. Based on the customer
assessment, the development process enters the next iteration and then follows a linear approach to implement the feedback provided by the user. The iterations along the spiral model carry on throughout the life of the software. The USAER teachers, the software analyst, and the designer collaborated under a user-centered approach in order to share the effort of producing the new educational applications. This collaboration was so extensive that the programmer proposed developing content to train the teachers, so they could have the opportunity to use the new educational applications to support the learning needs of the hospitalized student. The evaluation was twofold, since both usability testing of the educational applications and user experience testing were required. A usability test was applied to identify the educational application that best supports the learning needs of the student. Even though the resulting learning experience shows only a slight improvement in math, better results require more new educational applications supporting the next learning activities specified by the USAER teachers. In addition, it is necessary to consider some human factors (psychological, gender, and social) related to learning activities; the study of these factors goes beyond the scope of the present work.
6 Conclusion
This work proposes a process model inspired by Boehm's model as a conceptual guide for developing educational applications to be used on mobile devices during learning activities in basic education for hospitalized children. The process model was useful for teachers to identify different aspects to evaluate in a child's progress through activities with interactive applications, which let children learn in a ludic manner. The proposed process model of Fig. 1 has the following characteristics:
• User-Centered Design: The process model incorporates the participation of teachers and end-users in its phases; in the early stages (analysis and design) the participation of teachers is encouraged to cover the basic math learning needs of the children, and in later stages (prototyping and evaluation) it is verified that these needs have been considered.
• Iterative and Incremental: The process model has the flexibility to perform several iterations of the process; each iteration makes it possible to identify better educational applications.
• Model-Driven Approach: The educational applications are developed from specifications at several levels of abstraction, such as user requirements, sketches, wireframes, and graphical user interface prototypes. In addition, the use of models helps to carry out the evaluation, in this case study the user experience and usability tests.
• Training Teachers: The training courses offered teachers useful educational content, in particular the best practices for using the new educational applications, and a space to answer questions in case of doubt. This makes it more likely that the educational applications will be adopted as an effective educational resource.
Finally, future work includes designing a repository to share educational resources for hospital school programs. Another future initiative is the use of risk to balance agile methods [25], using tangible user interfaces, in particular to support reading, mathematics, and writing skills for children with learning disabilities at elementary school. Acknowledgements. The authors gratefully appreciate the collaboration with the USAER teachers. This work has been made possible thanks to the constant support of several institutions, such as UAA, IEA ("Instituto de Educación de Aguascalientes"), and CONACYT.
References
1. Antonenko, P.: A framework for aligning needs, abilities and affordances to inform design and practice of educational technologies. Br. J. Educ. Technol. 48, 916–927 (2016)
2. Prendes, P., Serrano, J.L.: Integración de TIC en aulas hospitalarias como recursos para la mejora de los procesos educativos. Book chapter (2015)
3. Camacho, M.: Los dispositivos móviles en educación y su impacto en el aprendizaje. Samsung Electronics Iberia, S.A.U. (2016). ISBN 978-84-945619-9-3
4. Dore, R.A., Shirilla, M., Hopkins, E., Collins, M., Scott, M., Schatz, J.: Education in the app store: using a mobile game to support US preschoolers' vocabulary learning. J. Child. Media 13, 1–20 (2019)
5. OECD: Trends Shaping Education. OECD Publishing, Paris (2019)
6. González, C., Ottaviano, M., Violant, V.: Uso de las TIC para la atención educativa hospitalaria y domiciliaria, una nueva formación de postgrado en la Universidad de La Laguna, España (2013)
7. Ratnapalan, S., Rayar, M.S., Crawley, M.: Educational services for hospitalized children. Paediatr. Child Health 14, 433–436 (2009)
8. Carroll, N., Richardson, I.: Aligning healthcare innovation and software requirements through design thinking. In: IEEE/ACM International Workshop on Software Engineering in Healthcare Systems (SEHS), pp. 1–7. IEEE (2016)
9. Ortiz, M., Muñoz-Arteaga, J., Canul-Reich, J., Broisin, J.: An eco-system architectural model for delivering educational services to children with learning problems in basic mathematics. Int. J. Inf. Technol. Syst. Approach (IJITSA) 12, 61–81 (2019)
10. Xu, D., Wang, Z., Chen, K., Huang, W.: Personalized learning path recommender based on user profile using social tags. IEEE Xplore 1, 511–514 (2012)
11. Mortis, S.V., Muñoz-Arteaga, J., Zapata, A.: Reducción de brecha digital e inclusión educativa: experiencias en norte, centro y sur de México. Miguel Ángel Porrúa Publishing, México (2017)
12. Mize, M., Bryant, D.P., Bryant, B.R.: Teaching reading to students with learning disabilities: effects of combined iPad-assisted and peer-assisted instruction on oral reading fluency performance. Assist. Technol. J. 15, 1–8 (2019)
13. Quispe, A., Bernal, C., Salazar, G.: Use of applications for children with learning difficulties. CAMPUS Rev. 12, 13–26 (2017)
14. Boehm, B.: A spiral model of software development and enhancement. Computer 21, 61–72 (1988). https://doi.org/10.1109/2.59
15. Universidad de Sonora: La problemática de la enseñanza y el aprendizaje de las matemáticas en la escuela primaria III. Divulgación de Investigación de la SEP (2010)
16. Secretaría de Salud de México: SIGAMOS (Programa Sigamos aprendiendo en el Hospital) (2018). https://www.gob.mx/salud/acciones-y-programas/programa-sigamos-aprendiendo-en-el-hospital?state=published
17. USAER: Unidades de Servicio y Apoyo a la Educación Regular. Instituto de Educación del Estado de Aguascalientes (2019). http://www.iea.gob.mx/webiea/sistema_educativo/sistema_eduespecial.aspx
18. SEP-Secretaría de Educación Pública: Educación especial (2019). https://www.educacionespecial.sep.gob.mx/2016/index_disca.html
19. Gómez-Palacio, M., Guajardo, E., Cárdenas, M., Maldonado, H.: Prueba Monterrey, 5th version. Para grupos integrados, México (1981)
20. Google Play apps (2020). https://play.google.com/store/apps
21. Heshmat, M., Mostafa, N., Park, J.: Towards patient-oriented design: a case of the Egyptian private outpatient clinics. In: Proceedings of the 50th Hawaii International Conference on System Sciences (2017)
22. Nielsen, J.: Usability Engineering. Academic Press, Cambridge (1994)
23. Chelkowski, L., Yan, Z., Asaro-Saddler, K.: The use of mobile devices with students with disabilities: a literature review. Prevent. School Failure: Alternat. Educ. Child. Youth 63(3), 277–295 (2019)
24. Herro, D., Kiger, D., Owens, C.: Mobile technology: case-based suggestions for classroom integration and teacher educators. J. Digit. Learn. Teach. Educ. 30, 30–40 (2013)
25. Boehm, B., Turner, R.: Using risk to balance agile and plan-driven methods. Comput. Rev. 36, 57–66 (2003)
COMET-OCEP: A Software Process for Research and Development
Jesús Fonseca1, Miguel De-la-Torre1(B), Salvador Cervantes1, Eric Granger2, and Jezreel Mejia3
1 Centro Universitario de Los Valles, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
[email protected], {miguel.dgomora,salvador.cervantes}@academicos.udg.mx
2 École de technologie supérieure, Montreal, Canada
[email protected]
3 Centro de Investigación en Matemáticas A. C., Zacatecas, Mexico
[email protected]
Abstract. Research and development (R&D) activities are employed to find new theories that are commonly turned into software products, but the gap between proof-of-concept software and the final implementation is often hard to narrow. In a previous work, a combination of the COMET (collaborative object modeling and architectural design method) and OCEP (Open Community engagement model) methodologies was successfully applied to the development of a person re-identification system. In this paper, the COMET-OCEP software process is conceptually related to other processes, and its guidelines are detailed. Additionally, the advantages of the COMET-OCEP process are highlighted through the analysis of a test case. Keywords: COMET-OCEP · Research and development · Software process
1 Introduction
Research and development (R&D) activities are fundamental in most innovative organizations and involve the creation, study, and implementation of new theories to provide solutions to emerging problems [1]. Despite the success of continuous R&D efforts in both the software industry and academia, there is still room to shorten the gap between research products (proof-of-concept results, research reports, conference or journal papers, etc.) and software products for final users. Although software is a key component in a variety of products, some software elements are created in the context of a specific project, commonly following an undocumented and informal software process, and the resulting products are rarely in a final, user-friendly version [2, 3]. On the other hand, programming practices used to prove scientific theories in final-user software may be biased by the test bench employed during the proof-of-concept stage [4, 5]. Aiming to reduce conceptual mistakes in the R&D process, organizations, researchers, and
academia have encouraged the application of software engineering and quality assurance processes to produce scientific software. This practice starts with new scientific ideas and pursues the production of usable end-user software artifacts that incorporate new, cutting-edge technology. Moreover, software companies demand that R&D departments construct mature software to transform data into valuable information and insights that are not available at first glance [6–8]. Methodologies recently employed for R&D, including KDD, COMET, OCEP, and Scrum, were analyzed, and five characteristics were found to be desirable in a software process for R&D. First, a small working team or a single developer (or researcher) requires a sequential process. Second, in an R&D process, several small proof-of-concept experiments are usually required before an architecture is established, to generate a mature model before construction, e.g., at the requirements elicitation stage. Third, in an R&D process all algorithmic decisions should be sustained by an underlying theory, and the reasoning behind every design decision should be clearly documented; this is a natural step for researchers, who need to publish details of their findings and methodologies that allow reproducible results. Fourth, incremental and adaptive development is another characteristic of R&D, due to new findings and evolving technology and science. Finally, a research-driven process cannot be coherent without delivery-driven artifacts generated at every development stage. In this paper, the COMET-OCEP software process first mentioned in [9] is detailed and conceptually compared to other processes in the literature. In the COMET-OCEP process, the implementation stages of the original definition of COMET [32] are replaced by the OCEP software process, with a strong focus on incremental construction driven by research objectives, providing a software process appropriate for the generation of innovative software products. The findings previously described in [9] are significantly extended through a clearer description of each stage, and the analysis is enhanced with a deeper literature survey and a test case. The rest of the paper is organized as follows. Section 2 gives a general overview of software processes used in R&D. Section 3 details the COMET-OCEP software process, describing each stage and its importance. Section 4 describes the application of the COMET-OCEP software process to the Perception framework. Finally, Sect. 5 gives conclusions and insights on the relevance of the COMET-OCEP software process, as well as future research directions.
2 Software Development in R&D
The impact of R&D on the information technology (IT) industry has been studied around the world, suggesting that production increases with the intensity of R&D activities, as well as with the firm's age and size [10]. However, the complex and expensive R&D activities in the IT industry are commonly separated from formal software practices, which constitutes a serious risk to the production of reliable scientific results [5]. In a research scenario, the concept of value usually stands for both academic value and business value. Moreover, the domain problem often makes it hard to acquire the skills and domain knowledge needed to properly adapt agile values [11]. Current research activities commonly produce scientific findings with the aid of software. Such findings are
commonly applied to a specific research field [2] or used to develop novel software adapted for scientific research [3, 12, 13]. In fact, R&D processes rarely provide a formal process to produce a final-user application. The research community has noted this issue and proposed adaptations of agile methodologies to develop data science processes [7, 8] and large-scale applications [14]. Other researchers have applied agile methodologies in industry with a high success rate [15]; however, their adoption is organization-dependent and requires a progressive effort to improve overall performance across projects, and their application in research scenarios is not always well documented, as their focus is value delivery and adaptation to change. On the other hand, scientific advances reported in scientific-style articles are suitable for improving either products or services, yet it is common to find developers who misinterpret the scientific contributions, do not perceive their economic potential, or simply regard them as too formal [16, 17]. Approaches proposed for research and development of data science applications include Knowledge Discovery in Databases (KDD) and its derivatives, the most common being the CRoss Industry Standard Process for Data Mining (CRISP-DM) [6, 7]. In these approaches, proof-of-concept prototypes and research-driven artifacts are important, besides the software production artifacts. On the other hand, successful research-based development has been reported; for example, the Telecommunications Software and Systems Group (TSSG) performs applied research and commercialization projects following an agile development process that promotes a team-based approach. This software process was successfully applied to FeedHenry, which started as a research project and scaled into a startup [18]. Additionally, the proliferation of small software development corporations has favored the adaptation of software development processes initially proposed for large-scale companies. As an example, the Rational Unified Process (RUP) has been tailored to be implemented with thirteen or even eight different roles within the development team [19, 20]. Agile methodologies have also been successfully adapted to developer teams as small as one or two developers per project, with limitations in task parallelization [21]. However, these methodologies are not necessarily applicable to R&D environments. Finally, new and innovative software is commonly developed under uncertain conditions: the problem may be poorly defined for the application; the solution may be nonexistent or only scarcely sketched by some experts; and business constraints may require a quick, good-enough solution that may not correspond to the first approximation. Software must be quickly presented to customers to obtain adequate feedback on the new features that are incrementally incorporated into the system. Such a strategy accelerates development by reducing documentation efforts, at the expense of a high risk of losing the knowledge needed for software maintenance and evolution [22]. Summarizing, Table 1 presents the five characteristics found to be required for software production in any organization driven by an R&D environment [36].
Table 1. Desirable characteristics for software production in an R&D environment.
C1. Sequential software process: the research-based model should be mature before software construction.
C2. Modeling phases: mature modeling provides the means for mature construction; new concepts and theories should be proved before software construction through disposable prototyping.
C3. Adaptive to theoretical advances: research is based on new theories and concepts; the rationale behind each algorithm implementation is supported by theoretical findings, and the process should provide adaptability.
C4. Incremental software process: due to new findings and evolving technologies, the software is constructed incrementally.
C5. Delivery-driven artifacts: artifacts are produced as fast as possible, considering quality, and tailored to the objective of the artifact, differentiating between artifacts for research, modeling, and construction.
2.1 Software Processes in R&D
Publications related to scientific software engineering were surveyed, starting from the first publication on the application of the COMET-OCEP software process [9]. A search of the EBSCO database with the keywords "software engineering" and "scientific software" returned 16 documents; after filtering out unrelated investigations, only 8 were relevant, and they are discussed below. Authors like Shlomo and Johanson identified a key difference between developing commercial software and computational scientific software: variability [23, 24]. Scientific software development has been identified as highly iterative and incremental, as new knowledge needs to be applied or updated to evaluate its value. From this perspective, several proposals have emerged to tackle this scenario from two perspectives. When the research team is also the development team, the use of general guidelines and project templates has been reported. Serena Bonaretti et al. reported the use of general software guidelines to develop open and reproducible software for research on femoral knee cartilage from MR images [25]; such guidelines include the use of open source software, open file formats, code reuse, and licensed distribution, but no further process was reported. Michael Riesch et al. proposed a project skeleton for fast scientific software development that reduces the software engineering effort [26]. The report on the FEMPAR framework only mentions the use of the object-oriented programming paradigm for code reusability [27], and [28] mentions the importance of the licensing process. The importance of software engineering principles is emphasized in [29] for the bioinformatics area, but no specific process is mentioned there. Additionally, Test-Driven Development (TDD) has been identified as a reliable way to develop scientific software [30]. In TDD, a software project is created by incrementally writing unit tests and the bare minimum software required to pass them, which fits the incremental and iterative properties found in research. This flexibility and agility have also been shown by other authors, who remark how agile methodologies have successfully implemented complex scientific software to better fit changing requirements [31].
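To illustrate the TDD style referred to above, the following sketch shows the test-first pattern with Python's unittest: the tests are written first, and the function is the bare minimum needed to make them pass. The cumulative match curve routine is our hypothetical example of a scientific function, not code from any of the surveyed papers.

```python
# A minimal TDD illustration: tests first, then the smallest implementation
# that satisfies them.
import unittest

def cumulative_match_curve(ranks, max_rank):
    # Fraction of queries whose correct match appears at rank <= k,
    # for k = 1 .. max_rank (a common re-identification metric).
    n = len(ranks)
    return [sum(r <= k for r in ranks) / n for k in range(1, max_rank + 1)]

class TestCMC(unittest.TestCase):
    def test_all_matches_at_rank_one(self):
        self.assertEqual(cumulative_match_curve([1, 1, 1], 2), [1.0, 1.0])

    def test_half_matches_at_rank_one(self):
        self.assertEqual(cumulative_match_curve([1, 2], 2), [0.5, 1.0])

if __name__ == "__main__":
    unittest.main()
```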
A comparison of software processes commonly reported in the literature is summarized in Table 2, considering the five characteristics of an R&D software process from Table 1. Unlike Scrum [37] and COMET [34], the OCEP approach lacks a specification model for its specification stage; however, OCEP could be enriched with one of the tools provided by the other software processes, such as user stories or UML use cases. Another interesting characteristic of COMET is its iterative throwaway prototyping, where developers create software to validate requirements and improve their understanding. As agile software processes focus on quick value delivery, both Scrum and OCEP provide fast design and implementation stages based on collaborative effort. On the other hand, COMET provides a more traditional approach with incremental prototyping.
Table 2. Comparison between known software development processes and the identified required characteristics for an R&D software process (see Table 1). The approaches compared are Waterfall, Recurrent waterfall, Prototyping, Spiral, USDP, Agile, TDD, OCEP, and COMET; each cell marks whether the approach satisfies a characteristic C1–C5 fully (X), partially (Partial), or not at all.
The main difference between these software processes lies in the validation stage. Scrum reviews the overall sprint through two meetings: one to review the software artifact (the sprint review meeting) and another to review the internal teamwork (the sprint retrospective meeting). Although OCEP is an agile process, its review includes a demonstration in the refinement stage, providing publishable data for researchers. The evolution stage is similar in OCEP and COMET, with an iterative approach where new requirements and modifications are included through new iterations; Scrum handles this through the product owner role, in charge of the product backlog. Only OCEP establishes the researcher role, defined as a stakeholder who provides specialized requirements.
3 A Software Development Process for R&D
The COMET-OCEP software process is divided into two phases. The design phase favors theoretical activities and a mature software architecture and design, whereas the agile incremental construction phase facilitates building usable releases that implement the identified theory. The design phase concentrates effort in the requirements elicitation stage, where researchers are well positioned to find, and eventually publish, useful results; in parallel, developers enhance their understanding of the application domain and the proposed algorithmic solution. New technology requires a mature, well-understood, and well-documented process to obtain a solid product. For this reason, it is advantageous to spend enough time in the analysis and design stages before the construction phase. Figure 1 depicts the phases and stages of the COMET-OCEP software process. The subsections below detail each stage of the design and construction phases, emphasizing their main activities and artifacts. 3.1 Design Phase - Research The definition of research and business objectives is obtained through three incrementally granular modeling sub-stages. Researchers and developers define the overall problem and the proposed solution. In an R&D environment, it is important that the development team either includes at least one researcher or consists of a single developer with solid research experience. Communication is essential, and visual representations (e.g., UML, sketches) facilitate interactions. The design document produced should be platform independent, identify the elements needed to construct the system, and mention existing technologies that address requirements. If a technology
Fig. 1. COMET-OCEP software process for R&D projects. Solid lines indicate mandatory workflows whereas dashed lines indicate the optional prototyping stages. The direction of data flow is indicated by arrows.
is available to solve a requirement, the cost of adopting it should be weighed against developing from scratch. Requirement modeling starts by defining a subset of functionalities, goals, and use cases using UML, and the documentation is organized in a software requirements specification (SRS). Initially, techniques like interviews, brainstorming meetings, and developer-as-apprentice can be used to collect the list of requirements. Developers should define the essential documentation for the system requirements, and iterative throwaway prototyping is recommended to clarify stakeholder requirements and to communicate complex research strategies addressing unsolved issues. Well-documented design patterns and idioms, as well as innovative code from proof-of-concept prototypes, constitute a useful source of knowledge. Analysis modeling employs the SRS to define static and dynamic models of the system suitable to support the requirements. The static model involves the communication channels and the structure that provide services to actors as described in the SRS. Entity class diagrams are encouraged to conceptualize the data structures and their relationships, promoting a data-oriented analysis of complex data types. Context diagrams are suggested to place the designed solution in the context of external systems and possible interactions. In a research context, entity class and context diagrams are integrated at later stages, sharing resources or supporting joint research activities. On the other hand, dynamic modeling involves the operations and messages passed between objects. Abstracting operations between objects is useful for the design stages, as these operations can be included in the software architecture. Communication diagrams are commonly employed to depict abstract operations; each operation is identified in the diagram as suggested by COMET, and narrative definitions are added when appropriate, including the logic of each operation, the data passed between objects, and the goal of the actions. Design modeling establishes the software architecture based on the SRS and the static and dynamic models. The use of architectural patterns is encouraged, and UML class diagrams are suggested for the architecture specification. Class definitions can be extended by tagging classes with stereotypes detected during previous modeling stages, to clarify the role of the classes in the overall solution. For complex methods, a more detailed description can be generated with step-by-step specifications; for instance, classes tagged with a research-related stereotype might contain the solution to a complex problem in the research domain.
3.2 Construction Phase - Development
The construction phase is oriented toward generating incremental publishable releases of the software by iterating across the four stages on coherent subsets of the design document. At least three iterations are recommended during the construction phase. The initial iteration develops the base functionalities required in further iterations. Then, one or more development iterations construct software increments, adding value to the project, supplying answers to research questions, and completing requirements. At the end, the final iteration consists of system integration
and evaluation before release. The four stages of the construction phase are detailed below. Detailed design starts with a definition meeting, where the SRS is reviewed by the researcher and developers to select the highest-priority use cases according to functionality dependencies and research requirements. The top-priority use cases are further refined into functionality sets, and researchers define the test cases to validate them. This process constitutes an extension of the analysis and design stages; intensive use of the requirements specification and software architecture is encouraged to detect any inconsistency before implementation. Development is driven by the software architecture and evaluated by the test cases defined by the researchers during the detailed design stage. Initially, activities relate to software management and configuration, including software repositories and the development environment. Test cases may include components that are not yet implemented, which may be treated as black-box components during unit testing. By the end of the first iteration, the base functionalities used as dependencies are implemented and stored in the software repository as initial artifacts. Further iterations develop use cases employing the previously implemented base functionalities. Unit tests are performed during this stage, and only the component under development is tested, as integration happens at further stages. At the end, all components are integrated to generate a complete version. The refine stage consists of evaluating the implementation from a research point of view to validate its consistency. As each increment was implemented using a static test case, new data should be provided in order to obtain new statistical or research-oriented results. Regardless of the results, the increment proceeds to the publish stage, accompanied by possible suggestions or open research issues. The initial and final iterations are important to validate common dependencies and the overall application. Publish is the final stage before starting a new iteration and involves testing and integrating the software release generated during development. Although integration or system testing is not required at first, the initial software increment is shipped as-is in a software release; subsequent development iterations incrementally integrate new features into the software base. Releases may be published on some medium, such as a website or software repository, according to the agreements between the researchers and developers. The final iteration shall include a completely functional product, including documentation. From this stage, it is possible to go back to any previous stage to add new functionalities or correct errors.
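As an illustration of treating a not-yet-implemented component as a black box during unit testing, as mentioned in the development stage above, the sketch below uses Python's unittest.mock; the `PersonMatcher` class and the `describe` method are hypothetical names of ours, not part of the COMET-OCEP definition.

```python
# Sketch: a missing descriptor module is replaced by a mock so the
# component under development can be unit-tested in isolation.
import unittest
from unittest import mock

class PersonMatcher:
    """Component under test; it depends on a descriptor not yet implemented."""
    def __init__(self, descriptor):
        self.descriptor = descriptor

    def same_person(self, image_a, image_b, threshold=0.5):
        fa = self.descriptor.describe(image_a)
        fb = self.descriptor.describe(image_b)
        distance = sum((a - b) ** 2 for a, b in zip(fa, fb)) ** 0.5
        return distance < threshold

class TestPersonMatcher(unittest.TestCase):
    def test_identical_descriptions_match(self):
        descriptor = mock.Mock()
        descriptor.describe.return_value = [0.1, 0.2, 0.3]  # black-box stand-in
        matcher = PersonMatcher(descriptor)
        self.assertTrue(matcher.same_person("img_a", "img_b"))

if __name__ == "__main__":
    unittest.main()
```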
4 Test Case in Person Re-identification
In order to test and improve COMET-OCEP, an application for the quick development, prototyping, and evaluation of person re-identification models was built. The development team
consisted of a single developer with little experience in computer vision and two researchers in the area. The researchers wanted a tool to evaluate the performance of a model at each stage and to test new configurations quickly, without defining the whole data processing pipeline. The framework was intended to be suitable for developing end-user applications as well as research-oriented tools for the evaluation, comparison, and implementation of common algorithms. In a first approach, seeking adaptability and quick response to change, the Scrum process was selected to implement the required software. However, the developer's lack of expertise resulted in poor planning and modeling, leading to project failure. A key factor in this failure was the poor modeling and planning caused by the limited experience and the knowledge gap between the researchers and the developer, a factor that has been identified by other authors [16, 17]. This experience generated interesting insights that inspired the COMET-OCEP process. The development team was asked to follow the proposed software process and restart the project from the ground up. By following the COMET-OCEP process, the project was completed, and the inexperienced developer quickly gained hands-on experience in the complex problem domain proposed by the researchers. The following subsections explain the reported gains, as extracted from the personal experience of the development team. 4.1 COMET-OCEP Design Phase Requirement Modeling Stage: research needs were analyzed, and the new development skills to acquire were identified: 1) how to implement machine learning and computer vision algorithms; 2) what workflow is followed by re-identification systems; 3) what file formats can be employed for model definition and persistence; 4) what metrics allow the comparison of distinct methods, and how to visualize them; and 5) how to integrate machine learning models into applications. The developer and researchers discussed the building blocks and well-known algorithms involved in general pattern recognition, including image preprocessing algorithms, feature extraction, and classification [33]. Weekly meetings were planned to evaluate and discuss the requirements and prototypes created by the development team; after each meeting, the requirements were refined and the prototypes improved. Three throwaway prototypes were built. The first focused on finding a strategy for model persistence and was implemented as a command-line application for supervised classification of images. The second was a research-oriented throwaway prototype, implemented as a Jupyter notebook. The third was built to integrate functionalities into the graphical user interfaces; it was composed of various modules that allowed generating the user interface sketches for the stakeholders' approval, and the use cases were detailed from this last prototype. After the requirements were approved by the researchers, the developer had gained hands-on experience and ready-to-use building blocks for further stages. Analysis Modeling Stage. It started by abstracting the data composition and values involved in a person re-identification model into an entity class diagram. Dynamic modeling helped to summarize and visualize the messages and operations involved in the process, as well as the interactions the user may perform with the system.
Design Modeling Stage. Two object-oriented architectures were created: one for the workbench and another for the framework. The framework architecture was based on the entity class diagram, enriched with the operations defined in the dynamic model. Each model is composed of a data source and a chain of algorithmic modules for preprocessing, description, and classification1.
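A minimal sketch of how such a model could be realized with scikit-learn (which the implementation uses; see Table 3) is shown below. The module names follow the design described above, while the concrete algorithms, the toy histogram descriptor, and the file name are our illustrative assumptions, not the actual Perception code.

```python
# Sketch: a re-identification model as a chain of modules, with persistence.
import numpy as np
import joblib
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

def scale_pixels(X):
    # Preprocessing module: map raw 8-bit pixel values to [0, 1].
    return np.asarray(X, dtype=float) / 255.0

class HistogramDescriptor(BaseEstimator, TransformerMixin):
    # Description module: a toy per-image intensity histogram feature.
    def __init__(self, bins=16):
        self.bins = bins
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.stack([np.histogram(x, bins=self.bins, range=(0.0, 1.0))[0]
                         for x in X])

# Chain the modules in the order described in the design model.
model = Pipeline([
    ("preprocessing", FunctionTransformer(scale_pixels)),
    ("description", HistogramDescriptor(bins=16)),
    ("classification", SVC()),
])

# Persistence: serialize the trained pipeline so end-user applications can
# reload it without redefining the processing chain (hypothetical file name).
# model.fit(train_images, train_labels)
# joblib.dump(model, "reid_model.joblib")
# model = joblib.load("reid_model.joblib")
```

Serializing the whole pipeline is one plausible way to address the model persistence question explored in the first throwaway prototype.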
4.2 COMET-OCEP Construction Phase

At the first construction increment, a meeting was planned every week to review advances, and an increment was expected every two weeks. Before starting an increment, the requirements were revised, and the next functionality to be implemented was selected and detailed. During the detailed design stage, details of the algorithms to be implemented were specified for each module, as well as the metrics employed for evaluation. Similarly, GUI sketches were presented for approval before being implemented as windows in the workbench. When the development stage focused on the workbench, it addressed metric visualization and metric controls for the metrics module and the user experience. In contrast, when development addressed the Perception framework, a research question was solved for each module: which algorithm is suitable to be efficiently implemented in a person re-identification system? For implementation, the selected settings showed the advantage of using an interpreted, multiplatform programming language. The libraries and development tools used in the implementation are summarized in Table 3.

Table 3. Libraries and tools employed in implementation.

| Package | Description |
|---|---|
| Python 3.6.7 | Base Python programming interpreter |
| Scikit-Learn 0.20.0 | Machine learning library |
| Scikit-Image 0.14.1 | Image processing library |
| NumPy 1.15.2 | Matrix and numeric computation library |
| Matplotlib 3.0.0 | Library for publishable plotting |
| Seaborn 0.9.0 | Statistical data visualisation library |
| Visual Studio Code 1.26.1 | Code editor with IntelliSense, compatible with unittest for Python |
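As a hedged illustration of how these libraries could compose the preprocessing, description, and classification modules mentioned above, the sketch below trains and persists a toy classifier; the HOG feature choice, the helper names, and the persistence format are assumptions for illustration, not the actual Perception code.

```python
# Illustrative sketch (not the actual Perception framework): a minimal
# preprocessing -> description -> classification pipeline with model
# persistence, using the libraries listed in Table 3.
import pickle
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def describe(image):
    """Description module: HOG features over the preprocessed (grayscale) image."""
    return hog(rgb2gray(image))

def train(images, labels):
    """Classification module: fit a simple nearest-neighbor classifier."""
    features = np.array([describe(img) for img in images])
    return KNeighborsClassifier(n_neighbors=1).fit(features, labels)

def save_model(classifier, path):
    """Model persistence, the concern explored by the first throwaway prototype."""
    with open(path, "wb") as f:
        pickle.dump(classifier, f)
```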
Publication Stage: software administration was governed by the overall design and driven by use cases. The git software configuration management tool was used to administrate software increments and launches, and an online repository on bitbucket.org was used to maintain the subsequent versions privately².

¹ Artifacts from the design phase are available at https://drive.google.com/drive/folders/1Qkz-5eyJ7MoUzrtzx5IFF3VoagZGVMwL?usp=sharing.
² The Perception software is currently being registered, and will be available at the same address as the documents from the design phase.
Refinement Stage: this stage occurred differently in each increment. Testing took place at different stages of the development process. During each increment, unit tests were run to validate each algorithm and module. Integration tests were performed at the end of each review once the module and algorithms were approved and integrated into the framework. Finally, system tests were performed at every delivery to the stakeholders. A total of 584 lines of code for 33 tests were written for the unittest module to run unit, integration, and system tests. At the final iteration, the Perception framework was ready and integrated. It features a multi-language graphical evaluation and training tool for person re-identification systems which generates pre-trained models to be used in custom applications. By observing good documentation guidelines, Perception also includes PDF and HTML internal documentation to be published along with the tool itself. Figure 2 shows the final product overview.
Fig. 2. Products included as part of Perception. From left to right: an extensible multi-language GUI automatically loaded from the system language; evaluation and training graphical tools, including a metrics module widget; PDF and HTML internal documentation generated from the documentation, as well as a user manual.
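To illustrate the three-level testing strategy described in the refinement stage, the following is a minimal unittest sketch; the test cases, the module name, and the imported helpers are hypothetical and do not reproduce the project's actual 33 tests.

```python
# Hypothetical unittest sketch mirroring the unit/system testing strategy;
# `describe` and `train` are the illustrative helpers from the earlier sketch.
import unittest
import numpy as np
from pipeline_sketch import describe, train  # hypothetical module name

class TestDescriptionModule(unittest.TestCase):
    def test_feature_vector_is_one_dimensional(self):
        image = np.zeros((64, 32, 3))  # dummy RGB image
        self.assertEqual(describe(image).ndim, 1)

class TestSystem(unittest.TestCase):
    def test_trained_model_predicts_a_known_identity(self):
        images = [np.zeros((64, 32, 3)), np.ones((64, 32, 3))]
        model = train(images, labels=[0, 1])
        prediction = model.predict([describe(images[0])])[0]
        self.assertIn(prediction, (0, 1))

if __name__ == "__main__":
    unittest.main()
```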
5 Conclusion

In this paper, a software process based on the COMET and OCEP methodologies was proposed for research and development (R&D) projects. It combines efforts to achieve a solid design base to drive R&D tasks through agile software practices. An iterative design phase allows a development team composed of researchers and developers to design a software solution focusing on the research objectives and their inclusion in an innovative solution. The resulting architecture is implemented following agile principles, with a focus on answering research questions and integrating the answers into a publishable software solution. As a test case, a framework for person re-identification was designed and implemented by a development team composed of two researchers and one developer. The design document generated at the design (COMET) phase proved suitable for agile incremental development at the construction (OCEP) phase. Even with limited or no experience in machine learning, after designing throwaway prototypes, the
developer became a machine learning practitioner, capable of understanding and implementing research requirements and enhancing them through software engineering practices. With two research reports and a usable software solution, COMET-OCEP showed itself to be a reliable software process for building innovative R&D applications, compared to the earlier attempt with Scrum. Additionally, the advantages of prototype-based requirements refinement, with multiple disposable proof-of-concept pieces of software, encouraged the generation of innovative architectures and can contribute new scientific knowledge when results are properly published. Partial products of this practice included demonstrative programs, pieces of code to reproduce experiments, and reusable software modules. Examples of research-oriented demonstrative programs include proof-of-concept software to analyze the increasing amount of data generated from everyday human activity. Small, demonstrative prototypes provide insight into the validity of new algorithms. Future research should include the use of COMET-OCEP in more scenarios, with distinct types of projects including web-based systems, distributed databases, and research-oriented software. A wider variety of team sizes and members, as well as different application domains, may be analysed to evaluate its applicability.
References

1. Kenton, W.: Research and Development (R&D), Investopedia, 05 July 2020. https://www.investopedia.com/terms/r/randd.asp. Accessed 04 Aug 2020
2. Kellogg, L., Bangerth, W., Hwang, L.J., Heister, T., Gassmoller, R.: The role of scientific communities in creating reusable software: lessons from geophysics. Comput. Sci. Eng. 21, 25–35 (2018)
3. Ahalt, S., et al.: Water science software institute: agile and open source scientific software development. Comput. Sci. Eng. 16(3), 18–26 (2014)
4. Kelly, D.F.: A software chasm: software engineering and scientific computing. IEEE Softw. 24(6), 119–120 (2007)
5. Storer, T.: Bridging the chasm: a survey of software engineering practice in scientific programming. ACM Comput. Surv. 50(4), 1–32 (2017)
6. Sarkar, D., Raghav, B., Tushar, S.: The Python machine learning ecosystem. In: Sarkar, D., Raghav, B., Tushar, S. (eds.) Practical Machine Learning with Python, pp. 67–118. Apress, Berkeley (2018)
7. do Nascimento, G.S., de Oliveira, A.A.: An agile knowledge discovery in databases software process. In: Data and Knowledge Engineering. Springer, Heidelberg (2012)
8. Alnoukari, M., Alzoabi, Z., Hanna, S.: Applying adaptive software development (ASD) agile modeling on predictive data mining applications: ASD-DM methodology. In: 2008 International Symposium on Information Technology (2008)
9. Fonseca Bustos, J., De la Torre Gómora, M.Á., Álvarez, S.C.: Software engineering process for developing a person re-identification framework. In: 7th International Conference on Software Process Improvement (CIMPS), Guadalajara, Jalisco (2018)
10. Thangavelu, S., Jyotishi, A.: Influence of R&D and IPR regulations on the performance of IT firms in India: an empirical analysis using Tobin's Q approach. In: Proceedings of the 2017 ACM SIGMIS Conference on Computers and People Research, Bangalore, India (2017)
11. Morris, C., Segal, J.: Lessons learned from a scientific software development project. IEEE Softw. 29(4), 9–12 (2012)
12. Hannay, J.E., MacLeod, C., Singer, J., Langtangen, H.P., Pfahl, D., Wilson, G.: How do scientists develop and use scientific software? In: ICSE Workshop on Software Engineering for Computational Science and Engineering (2009)
13. Marban, O., Segovia, J., Menasalvas, E., Fernandez-Baizan, C.: Toward data mining engineering: a software engineering approach. Inf. Syst. 34(1), 87–107 (2009)
14. VMEdu: A Guide to the Scrum Body of Knowledge (SBOK Guide), S. Study, Ed., VMEdu (2016)
15. CollabNet: The 13th annual state of agile report 2019. https://www.stateofagile.com/#ufh-i521251909-13th-annual-state-of-agile-report/473508. Accessed 05 Dec 2019
16. Yamashita, A.: Integration of SE research and industry: reflections, theories and illustrative example. In: IEEE/ACM 2nd International Workshop on Software Engineering Research and Industrial Practice, Florence, Italy (2015)
17. Gorton, I.: Cyberinfrastructures: bridging the divide between scientific research and software engineering. Computer 47(8), 48–55 (2014)
18. Dowling, P.: Successfully transitioning a research project to a commercial spin-out using an agile software process. J. Softw. Evol. Process 26(5), 468–475 (2014)
19. Borges, P., Monteiro, P., Machado, R.J.: Tailoring RUP to small software development teams. In: 37th EUROMICRO Conference on Software Engineering and Advanced Applications, Oulu, Finland (2011)
20. Monteiro, P., Borges, P., Machado, R.J., Ribeiro, P.: A reduced set of RUP roles to small software development teams. In: International Conference on Software and System Process (ICSSP), Zurich, Switzerland (2012)
21. Septian, W., Gata, W.: Software development framework on small team using agile framework for small projects (AFSP) with neural network estimation. In: 11th International Conference on Information Communication Technology and System (ICTS), Surabaya (2017)
22. Nascimento, L.M.A., Horta Travassos, G.: Software knowledge registration practices at software innovation startups: results of an exploratory study. In: Proceedings of the 31st Brazilian Symposium on Software Engineering, Fortaleza, CE, Brazil (2017)
23. Shlomo, M., Yotam, L.: Customized project charter for computational scientific software products. J. Comput. Methods Sci. Eng. 18(1), 165–176 (2018)
24. Johanson, A., Hasselbring, W.: Software engineering for computational science: past, present, future. Comput. Sci. Eng. 20(2), 90–109 (2018)
25. Bonaretti, S., Gold, G.E., Beaupre, G.S.: pyKNEEr: an image analysis workflow for open and reproducible research on femoral knee cartilage. PLoS ONE 15(1), 1–19 (2020)
26. Riesch, M., Nguyen, T.D., Jirauschek, C.: Bertha: project skeleton for scientific software. PLoS ONE 15(3), 1–12 (2020)
27. Badia, S., Martín, A.F., Principe, J.: FEMPAR: an object-oriented parallel finite element framework. Archives Comput. Methods Eng. 25(2), 195–271 (2018)
28. Netto, M.A.S., Calheiros, R.N., Rodrigues, E.R., Cunha, R.L.F., Buyya, R.: HPC cloud for scientific and business applications: taxonomy, vision, and research challenges. ACM Comput. Surv. 51(1), 8:1–8:29 (2018)
29. López-Fernández, H., Reboiro-Jato, M., Glez-Peña, D., Laza, R., Pavón, R., Fdez-Riverola, F.: GC4S: a bioinformatics-oriented Java software library of reusable graphical user interface components. PLoS ONE 13(9), 1–19 (2018)
30. Nanthaamornphong, A., Carver, J.C.: Test-driven development in HPC science: a case study. Comput. Sci. Eng. 20(5), 98–113 (2018)
31. Rashid, N., Khan, S.U.: Using agile methods for the development of green and sustainable software: success factors for GSD vendors. J. Softw. Evol. Process 30(8), e1927 (2018)
32. Gomaa, H.: Software Modeling and Design: UML, Use Cases, Patterns, and Software Architectures. Cambridge University Press, Cambridge (2011)
33. Gonzalez, R., Woods, R.E.: Digital Image Processing. Pearson, New York (2018)
34. Pisano, F.M.: Applying use case driven UML-based COMET method for autonomous flight management on IMA platform. In: IEEE/AIAA 34th Digital Avionics Systems Conference (DASC) (2015)
35. Oktaba, H., Alquicira Esquivel, C., Su Ramos, A., Martínez Martínez, A., Quintanilla Osorio, G., Ruvalcaba López, M., López Lira Hinojo, F., Rivera López, M.E., Orozco Mendoza, M.J., Fernández Ordóñez, Y., Flores Lemus, M.Á.: Modelo de Procesos para la Industria de Software: MoProSoft, Ver. 1.3, UNAM, Mexico (2005)
36. Mikulskiene, B.: Research and Development Project Management. Mykolas Romeris University, Lithuania (2014)
37. Schwaber, K., Sutherland, J.: The Scrum Guide. Scrum.org (2017)
Knowledge Management
Knowledge Transfer in Software Development Teams Using Gamification: A Systematic Literature Review

Saray Galeano-Ospino1, Liliana Machuca-Villegas1,2, and Gloria Piedad Gasca-Hurtado1

1 Universidad de Medellín, Carrera 87 no. 30-65, 50026 Medellín, Colombia
{sgaleano,gpgasca}@udem.edu.co
2 Universidad del Valle, Calle 13, 100-00, 760032 Cali, Colombia
[email protected]
Abstract. One of the objectives of knowledge management is knowledge transfer. In software development projects, collaborative work is the key to teamwork. These projects are classified as knowledge-intensive, and their activities are related to the materialization of an organization's knowledge. Therefore, software development teams require knowledge transfer capabilities. However, there are difficulties associated with communication and collaboration that create challenges for software development teams and require mitigation. One way to mitigate such problems is the use of gamification. This paper presents the results of a systematic literature review to identify gamification-based strategies that encourage knowledge transfer in software development teams. These strategies were classified, and interesting approaches were found that position gamification as a key strategy for software development teams. The use of gamification achieves positive results in generating knowledge transfer capabilities in software development teams.

Keywords: Knowledge transfer · Software development teams · Gamification · Software development collaboration
1 Introduction

Knowledge management (KM) identifies, captures, and leverages an organization's collective knowledge to make it competitive [1]. KM improves communication and knowledge among an organization's employees through a set of techniques and tools involved in the processes of storing, distributing, sharing, and communicating data and information. In turn, it allows for continuous learning through previously captured and stored lessons learned [2]. One of the main objectives of the KM process is to transfer knowledge. This objective focuses on ensuring that knowledge reaches the right person in the organization [3]. Under this scenario, it is necessary to count on the collaboration of the workers so that
the knowledge is shared and used in the best way to benefit the interests of the organization. This implies considering the social and human factors related to knowledge management within organizations [4]. Moreover, the success of software development projects is linked to social and human factors such as the collaboration skills of team members [5]. The exchange of team knowledge plays a fundamental role in this type of project [6]. These projects are knowledge-intensive, and software development activities are considered the materialization of the organization's knowledge [7]. The proximity of software development activities to KM practices facilitates knowledge transfer in software development projects [7]. Therefore, software development teams can develop the capacity to make their knowledge available to their coworkers [8].

However, although knowledge should be shared and socialized, difficulties have been identified with the form of communication between project stakeholders [9]. For example, knowledge is often handled in its tacit form, as the way it is shared centers on conversations between all those involved in the process (team, client, sponsors, etc.). It therefore remains individualized or isolated knowledge, with less value than shared knowledge that could be used to its full potential [4]. In addition, some difficulties associated with collaboration affect teamwork, such as communication and the direction given by the project leader [10, 11].

Software development teams need to work collaboratively to achieve their goals. Under this scenario, it is important to propose strategies to mitigate the collaboration challenges that the knowledge transfer process places on software development teams, such as sharing knowledge, information, and experience, and working as a team. For this, it is necessary to study strategies such as gamification to guide collaborative work and improve interaction and knowledge transfer among team members [8]. Therefore, this paper presents the results of a systematic literature review (SLR) based on the guidelines proposed by Pérez Rave in [12]. This review is the starting point for facing the challenges mentioned. The SLR aims to evaluate the literature to identify gamification-based strategies that encourage collaborative knowledge transfer in software development teams.

The rest of the paper is structured as follows. Section 2 presents the theoretical context supporting the review. Section 3 describes the methodology used for the SLR. Section 4 presents the results. Section 5 discusses these results, and Sect. 6 describes the conclusions and future work.
2 Theory

The theoretical basis for this research is described below.

2.1 Knowledge Management

It is essential to define what knowledge is and how it is classified before explaining the concept of KM. For this research, knowledge can be understood through the communication process determined by communication theory around the subject (person), the object, the medium, and the message. This theory specifies that when the subject receives the
object's message, through a medium, in some code, it is filtered by the subject's cognitive capacities and knowledge models [13]. There are two fundamental classifications of recognized knowledge: explicit and tacit [14]. Explicit knowledge refers to knowledge that can be articulated, codified, and communicated in a symbolic or natural language. Tacit knowledge is personal knowledge that is not easy to express through formal language, so it is not easy to transmit and share with others [9]. This type of knowledge is rooted in individual actions and experience, as well as in the ideals, values, emotions, intuition, ideas, and subjective aspects of each person [13].

Interest in knowledge within organizations has led to KM for the benefit of the organization. KM is therefore a process of identifying, capturing, and leveraging collective knowledge to help the organization compete [1]. It is also conceived as the systemic, organization-specific process whose purpose is to acquire, organize, and communicate both tacit and explicit knowledge so that others can make use of it and be more productive in their work [1]. KM's role is to capture, organize, update, and share the knowledge created so that it can be used to complete specific processes within the organization [13].

2.2 Knowledge Transfer

Since transfer means passing an element from one place to another, knowledge transfer means moving useful information and experience from one context to another [15]. It should be noted that knowledge transfer differs from knowledge sharing: the fact that a person shares knowledge does not mean that a transfer has already taken place. Knowledge is transferred from entity A (person, business unit, or company) to entity B when B can apply it in a useful way in its context [15]. Knowledge cannot be fully transferred during a conversation [16]; however, sharing knowledge is a facilitator of knowledge transfer [15]. In this way, a KM strategy facilitates and improves the flow of learning to transfer skills and experiences from where they reside to where they are needed, across time, space, and geographical distribution [13]. Many organizations follow different KM strategies that match their culture, priorities, and capacities [17]. Knowledge transfer therefore emphasizes systematic approaches to transfer: obtaining, organizing, restructuring, storing or memorizing, building for deployment, and distributing knowledge to the action points where it will be used to do the work [17].

2.3 Knowledge Management in Software Development Teams

KM in software development organizations is a broad field with several disciplines that can influence the fulfillment of organizational objectives [4]. Some software development companies have made efforts to take advantage of KM's benefits in their activities, reduce the time and costs of their processes, and increase the quality of their products [18]. Software engineering is a discipline based mainly on knowledge, whose main products and resources are represented in intellectual capital [19]. Consequently, software development is a collaborative process that needs to bring together experience, technological skills, and process knowledge. KM in software development is essential to capture the
non-externalized knowledge of team members, build organizational knowledge from a repository of personal knowledge, and avoid knowledge loss with the departure of an experienced worker [4].

2.4 Software Development Team Collaboration

Collaboration is one of the essential factors in software development teams [8]. It contributes to the construction of new knowledge through proposals of good practices for the benefit of the work teams. Collaboration allows knowledge to be socialized, improving the participation of team members [11]. In this way, collaborative work is one of the most important factors in software development teams. Furthermore, the adequate integration of a work team influences its performance, so the skills, knowledge, and interactive styles of each member must complement one another to obtain a highly capable team [8].

2.5 Gamification

The principles behind gamification have existed for decades, but the term itself became widespread in 2010 with the initial definition of "the application of game design elements in non-game contexts" [20]. One of the main reasons why gamification has become so popular in recent years is that games have a strong "pull" factor. Games elicit positive emotions, relieve stress, create stronger social relationships, give a sense of achievement, and improve cognitive skills [21]. Nowadays, gamification is being used in different work contexts to make the development of activities more attractive and fun [22], and to promote motivation, commitment, performance, and collaboration among participants [23]. Gamification aims to use the elements of play to influence a change in a user's behavior [24].
3 Materials and Methods

This study aims to identify gamification-based strategies that encourage collaboration in knowledge transfer in software development teams. To this end, a systematic literature review was carried out during March 2020, based on the guidelines proposed by Pérez Rave in [12]. The study aims to answer the following research questions:

RQ1. What knowledge transfer strategies have been used in software development teams?
RQ2. What gamification strategies have been used in software development teams?
RQ3. What gamification strategies have been used in the context of knowledge transfer in software development teams?

Three search strings were defined, one for each research question (Table 1). The databases used in the search process were Scopus, Springer, and IEEE; these databases gather studies related to the area of computer science. The search returned a total of 299 primary studies.
Table 1. Search strings.

| Research question | Search string |
|---|---|
| RQ1 | ("knowledge transfer" AND "Software Development" AND "knowledge management" OR "Software Development team*") |
| RQ2 | (gamification AND "Software development" AND "software engineering" OR "Software Development team*") |
| RQ3 | (gamification AND "knowledge transfer" AND "Software Development" OR "Software Development team*") |
Inclusion and exclusion criteria were defined to select the studies as follows:

• Inclusion criteria: i) publications written in Spanish and English; ii) publications from the last five years (2015–2019); iii) papers that presented specific strategies; and iv) research and conference papers.
• Exclusion criteria: i) review and philosophical papers, such as literature reviews and reflective studies.

Table 2 shows the number of studies found for each search string and database.

Table 2. Indicator results by search strings in the selected databases.

| Search string indicators | Springer | Scopus | IEEE | Total |
|---|---|---|---|---|
| Search string associated to RQ1 | 3 | 144 | 18 | 165 |
| Search string associated to RQ2 | 9 | 90 | 25 | 124 |
| Search string associated to RQ3 | 0 | 5 | 5 | 10 |
| Total | 12 | 239 | 48 | 299 |
Table 3 shows the number of studies selected from each database after applying the inclusion and exclusion criteria.

Table 3. Number of studies selected after applying inclusion and exclusion criteria.

| Search string indicators | Springer | Scopus | IEEE | Total |
|---|---|---|---|---|
| After the inclusion and exclusion criteria RQ1 | 0 | 5 | 6 | 11 |
| After the inclusion and exclusion criteria RQ2 | 2 | 4 | 12 | 18 |
| After the inclusion and exclusion criteria RQ3 | 0 | 3 | 1 | 4 |
| Total | 2 | 12 | 19 | 33 |
After searching the primary studies, the titles and abstracts were analyzed to verify their relationship to the research topic. As a result of this process, 33 papers were selected (Table 3); the list of primary studies is available in [25]. A more detailed review of these studies was then carried out, with a complete reading of each of them. These selected papers constitute the basis for presenting the current state of knowledge transfer and gamification strategies in software development teams.
4 Results

Comparison criteria were defined to extract the information from the selected papers. They facilitated the analysis of the papers and the structuring of the answers to the research questions. The criteria defined are:

• KT - Knowledge transfer strategy
• GM - Gamification strategy
• SDLC - Software Development Life Cycle phase
• Use of an agile or traditional methodology by the development team
• Social or human factor that promotes the strategy (Factor)
The results for the KT and GM criteria are presented in Table 4. The studies that merged KT and GM in a single proposal are also presented.

Table 4. Comparison criteria in the papers analyzed.

| Research question | KT | GM | KT and GM |
|---|---|---|---|
| RQ1 | 8 | n/a | n/a |
| RQ2 | n/a | 24 | n/a |
| RQ3 | n/a | n/a | 1 |

*Not applicable (n/a)
In this research, a knowledge transfer strategy or gamification strategy is considered as a set of actions aimed at improving the processes developed by a software development team.
According to the statements of the strategies found in the SLR, the strategies were typified as follows:

i. Software tool
ii. Games
iii. Frameworks
iv. Methodologies
v. Models
vi. Others
Table 5 classifies the strategies identified in the primary studies according to their typology. Some examples of these strategies, according to their typology, are presented in Table 6.

Table 5. Classification of strategies by typology.

| Comparison criteria | i) | ii) | iii) | iv) | v) | vi) | Total | % |
|---|---|---|---|---|---|---|---|---|
| KT | 2 | 0 | 2 | 2 | 0 | 2 | 8 | 24 |
| GM | 12 | 1 | 7 | 3 | 1 | 0 | 24 | 73 |
| KT + GM | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 3 |
| Total | 14 | 2 | 9 | 5 | 1 | 2 | 33 | 100% |
| Percentage | 43 | 6 | 27 | 15 | 3 | 6 | | |
To identify the phases of the software development life cycle (SDLC) [50] in which the need for knowledge transfer arises, the papers were classified as follows: a) Requirement Phase, b) Analysis Phase, c) Design Phase, d) Development Phase, e) Testing Phase, f) Maintenance Phase (Table 7). The studies were also classified according to the development methodology used in the context of the identified strategy types; Table 8 shows the type of strategy and its relationship to the methodology. Within the results, the distinction between studies belonging to KT and GM is maintained, and the percentage calculated for each classification is based on the total values presented in Table 5.
Table 6. Typology of classified strategies.

| Type of strategy | KT | GM | KT + GM |
|---|---|---|---|
| i) | Social Screencasting System [26] | Agile Workbench [33], BuildingK [11], DMGame [34–36], DFLOW [37], CodeFights [38], Virtual JIRA Software add-on [39, 40], REVISE [41], Stack Overflow [42] | |
| ii) | | | SCRUT [49] |
| iii) | KT Framework [27], SMARTKT [28] | Scrumban [43], Agon Framework [44], GamiTracifY [45], G.A.M.E. (Gathering, Analysis, Modeling and Execution) [46], Scrum Paper City [47] | |
| iv) | Grounded Theory Methodology (GTM) [29, 30] | VENVI [48] | |
| v) | | | |
| vi) | Mindstorm [31], Process for knowledge transfer [32] | | |
Finally, to identify the social and human factors covered by the strategies, the studies were classified according to the factor perceived in them. The distinction between studies belonging to KT and GM is maintained. Table 9 shows the results obtained from this relationship; a study can relate to one or more factors. The base values for calculating the percentages are presented in Table 5.
Table 7. Classification of the type of strategies with the phases of the software development life cycle (SDLC).

| Type of strategy | a) | b) | c) | d) | e) | f) | Total | % |
|---|---|---|---|---|---|---|---|---|
| i) | [36, 39, 41, 42] | | [11, 33–35, 37] | [26, 40, 51, 52] | [38] | | 14 | 43 |
| ii) | [53] | | | [49] | | | 2 | 6 |
| iii) | [44] | | [47] | [27, 43, 45, 46, 54, 55] | | [28] | 9 | 27 |
| iv) | | [58] | | [29, 30, 48, 57] | | | 5 | 15 |
| v) | | | | [56] | | | 1 | 3 |
| vi) | | | [31] | [32] | | | 2 | 6 |
| Total | 6 | 1 | 7 | 17 | 1 | 1 | 33 | 100% |
| Percentage | 18 | 3 | 21 | 52 | 3 | 3 | | |
Table 8. Classification of the type of strategies and development methodology.

| Type of strategy | KT Agile | KT Traditional | KT Total | GM Agile | GM Traditional | GM Total |
|---|---|---|---|---|---|---|
| i) | 0 | 1 | 1 | 8 | 0 | 8 |
| ii) | 0 | 0 | 0 | 1 | 0 | 1 |
| iii) | 0 | 0 | 0 | 4 | 0 | 4 |
| iv) | 1 | 0 | 1 | 1 | 0 | 1 |
| v) | 0 | 0 | 0 | 1 | 0 | 1 |
| vi) | 1 | 0 | 1 | 0 | 0 | 0 |
| Total | 2 | 1 | 3/8 | 15 | 0 | 15/24 |
| Percentage | 25 | 12.5 | 37.5 | 63 | 0 | 63 |
Table 9. Classification of the type of strategy and the social or human factor it encourages.

| Type of strategy | KT A | KT B | KT C | KT Total | GM A | GM B | GM C | GM Total |
|---|---|---|---|---|---|---|---|---|
| i) | 1 | 0 | 1 | 2 | 3 | 0 | 3 | 6 |
| ii) | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| iii) | 1 | 0 | 0 | 1 | 4 | 1 | 1 | 6 |
| iv) | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 2 |
| v) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| vi) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total | 2 | 0 | 2 | 4/8 | 9 | 1 | 5 | 15/24 |
| Percentage | 25 | 0 | 25 | 50 | 38 | 4 | 21 | 58 |

A. Collaboration; B. Communication; C. Motivation
5 Discussion

The findings are expressed in response to the questions posed.

5.1 RQ1. What Knowledge Transfer Strategies Have Been Used in Software Development Teams?

Knowledge transfer strategies represent 24% of the studies analyzed (Table 5). Among these strategies, software tools, games, frameworks, methodologies, and others are identified. The results show that software tools, frameworks, and methodologies are the most used types of strategies. An example of a software tool used by the studies analyzed
is a web application [26]. In the "Others" typology, the strategies identified were a process that transfers knowledge from one place to another [32] and a Mindstorm technique [31]. These strategies were applied in the analysis, development, and maintenance phases of the SDLC, with the development phase predominating. This highlights the application of knowledge transfer in coding and code review tasks. For these tasks to succeed, it is essential to consider collaboration among team members. For example, Zieris [30] promotes pair programming, a collaborative practice between two developers on one computer. This practice generates benefits in software development such as i) fewer defects in the code, ii) more readable code, and iii) better maintenance. Here, knowledge transfer plays an important role.

In addition, 25% of these strategies use agile methodologies and 12.5% a traditional one (Table 8). This shows a trend towards agile methodologies, a consequence of their good practices for guiding collaborative work in a development team. The results also show that knowledge transfer strategies encourage collaboration (25%) and motivation (25%) as social and human factors (Table 9). In the case of collaboration, these strategies share knowledge using software tools and frameworks. In [26], a system is used to continuously and automatically record the use of a tool, in order to show the whole team how it works. In [28], the SMARTKT (Smart Knowledge Transfer) framework is proposed; its purpose is to graph the knowledge of an application to help developers in the testing and maintenance phases extract the knowledge related to the software development domain, the specific field of the application, and their interrelations.

5.2 RQ2. What Gamification Strategies Have Been Used in Software Development Teams?

Gamification strategies account for 73% of the studies analyzed. The most used type of strategy is the software tool, with 50%. Other types of strategies are frameworks (29%), methodologies (13%), games (4%), and models (4%) (Table 5). Software-tool strategies are implemented through games on different technological platforms, such as web applications, mobile applications, or other technology. Studies proposing frameworks and methodologies use games, models, or software platforms to implement them; likewise, game-type strategies use frameworks and software tools for their implementation.

These strategies were developed in the requirements, design, development, and testing phases. The SDLC phase where the use of a gamification strategy is most facilitated is the development phase, especially in coding and code review tasks, with code review being the most common. The success of the code review task depends on close collaboration between reviewers and developers. These strategies are 63% focused on agile methodologies, such as Scrum (Table 8). Most of these strategies are implemented in the design phase. This phase requires the intervention of different project stakeholders, such as the developers, the user interface designers, the user representative, and the project manager. Furthermore, this percentage may be related to the fact that the strategies were implemented in industry software development teams and to the growing interest of the technology sector in adopting
agile development methodologies. An example of this is presented in the case study described in [43]. The authors take an agile hybrid method called Scrumban (based on a set of elements borrowed from Scrum and Kanban) and add game elements aimed at socially connecting team members and motivating them to use their skills and knowledge more effectively.

Most gamification strategies promote social and human factors: collaboration (38%), motivation (21%), and communication (4%) (Table 9). It is important to highlight that, to promote the collaboration factor, the software-tool type of strategy was mostly used, in the requirements and development phases. Within the development teams, these strategies used game elements such as points, leaderboards, levels, rewards, and trophies. On the other hand, the proposal presented by Jurado [11] takes a knowledge management approach; however, that research focuses on the construction and refinement of knowledge, not on its transfer.

5.3 RQ3. What Gamification Strategies Have Been Used in the Context of Knowledge Transfer in Software Development Teams?

Of the studies reviewed, only one presents a gamification-based strategy to promote knowledge transfer in a software development team [49]. The strategy was implemented in the development phase of the SDLC, in the code review task, seeking to influence collaboration among team members. Points are used as the gamification element, assigned to the participants (programmers or reviewers) for sharing knowledge with team members. However, the study does not validate that programmers or reviewers can apply that knowledge in a useful way in their context. Knowledge sharing is essential as a facilitator of knowledge transfer [15], but it is not enough to make knowledge transfer happen.

Finally, the SLR results indicate that the software tool is the type of strategy most used for knowledge transfer and gamification. These strategies can be applications or platforms. However, other types of strategies have been used in software development teams, such as games, methodologies, models, and frameworks, which influence collaboration, communication, and motivation in development teams. The phases of the development life cycle in which the strategies were studied are requirements, design, development, and maintenance; the phase where the use of a strategy is most facilitated is the development phase. This trend may be related to the fact that the studies analyzed are focused on industry. Most studies aim to improve software development activities with gamification-based strategies, in some cases oriented to the construction of knowledge and the motivation to share it, but no concrete solution is reported to the failures of collaboration in knowledge transfer in software development teams.
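As a purely illustrative sketch of the points mechanic described for [49] above, the snippet below models points awarded to programmers and reviewers for knowledge-sharing events during code review; the event names and point values are hypothetical assumptions, not taken from the reviewed study.

```python
# Hypothetical sketch of a points-based gamification mechanic for code review:
# participants earn points for knowledge-sharing events, and a leaderboard
# ranks them. Event names and point values are illustrative assumptions.
from collections import defaultdict

POINTS = {"review_comment": 2, "shared_reference": 3, "accepted_fix": 5}

class KnowledgeSharingGame:
    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, participant, event):
        """Award points to a programmer or reviewer for a sharing event."""
        self.scores[participant] += POINTS[event]

    def leaderboard(self):
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)

game = KnowledgeSharingGame()
game.record("reviewer_1", "review_comment")
game.record("programmer_1", "accepted_fix")
print(game.leaderboard())  # [('programmer_1', 5), ('reviewer_1', 2)]
```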
6 Conclusions and Future Work

In this research, an SLR was performed to characterize the state of the art in gamification-based strategies that encourage collaboration in knowledge transfer
in software development teams. These strategies were classified as knowledge transfer and gamification strategies. In addition, the SDLC phase in which each strategy was implemented, the development methodology used by the software development team, and whether the strategy promotes any social and human factors were analyzed.

The types of strategies most used as knowledge transfer strategies are software tools, frameworks, and methodologies. These strategies enhance the coding tasks of a development team and promote social and human factors such as motivation and collaboration. The Social Screencasting System [26] is an example of a software tool, and SMARTKT [28] is an example of a framework. Among gamification strategies, the most used type is the software tool. These strategies generally first design a game using gamification elements, then develop applications or platforms, and finally test them on a software development team task. Some examples of this type of strategy are DMGame [34–36], the JIRA Software add-on [39, 40], and Stack Overflow [42]. Generally, the software tool and framework strategy types are used to support collaboration in knowledge transfer in a software development team. The SDLC phase that most facilitates the use of a strategy is the development phase, in coding and code review tasks, using an agile development methodology.

Gamification promotes collaboration in knowledge transfer. Accordingly, as future work, we propose the design of gamification-based strategies to encourage collaboration in knowledge transfer in software development teams.
References

1. Alavi, M., Leidner, D.: Knowledge management and knowledge management systems: conceptual foundations and research issues. MIS Q. 107–136 (2001)
2. Torres, K., Lamenta, P.: Knowledge management and information systems in organizations. Negotium 11(32), 3–20 (2015)
3. Dalkir, K.: Knowledge Management in Theory and Practice. MIT Press, Cambridge (2013)
4. Yanzer Cabral, A.R., Ribeiro, M.B., Noll, R.P.: Knowledge management in agile software projects: a systematic review. J. Inf. Knowl. Manag. 13(1), 1450010 (2014). https://doi.org/10.1142/S0219649214500105
5. Machuca-Villegas, L., Gasca-Hurtado, G.P.: Gamification for improving software project: systematic mapping in project management. In: 2018 13th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6 (2018)
6. Amin, A., Basri, S., Hassan, M.F., Rehman, M.: Software engineering occupational stress and knowledge sharing in the context of global software development. In: 2011 National Postgraduate Conference (NPC), pp. 10–14 (2011). https://doi.org/10.1109/NatPC.2011.6136269
7. de Vasconcelos, J.B., Kimble, C., Carreteiro, P., Rocha, Á.: The application of knowledge management to software evolution. Int. J. Inf. Manage. 37, 1499–1506 (2017). https://doi.org/10.1016/j.ijinfomgt.2016.05.005
8. Hernández, L., Muñoz, M., Mejia, J., Peña, A.: Gamification in software engineering teamworks: a systematic literature review. In: 2016 International Conference on Software Process Improvement (CIMPS), pp. 1–8 (2017). https://doi.org/10.1109/cimps.2016.7802799
9. Nonaka, I., Nishida, K.: The concept of "Ba": building a foundation for knowledge creation. Calif. Manage. Rev. 40(3), 40–54 (1998)
10. Olgun, S., Yilmaz, M., Clarke, P.M., O'Connor, R.V.: A systematic investigation into the use of game elements in the context of software business landscapes: a systematic literature review. Commun. Comput. Inf. Sci. 770, 384–398 (2017). https://doi.org/10.1007/978-3-319-67383-7_28
11. Jurado, J.L., Fernandez, A., Collazos, C.A.: Applying gamification in the context of knowledge management. In: ACM International Conference Proceeding Series, 10–13 October, pp. 21–22 (2015). https://doi.org/10.1145/2809563.2809606
12. Pérez Rave, J.I.: Revisión sistemática de literatura en Ingeniería, Ampliada y actualizada, Segunda edición, pp. 0–183 (2019)
13. Flores Rios, B.L.: Modelo de evolución de la gestión del conocimiento en MiPyMes, de acuerdo con el nivel de madurez en un programa de mejora de procesos de software (2016)
14. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Harvard Business School, Boston (1991)
15. Camacho, J.J., Sánchez-Torres, J.M., Galvis-Lista, E.: Understanding the process of knowledge transfer in software engineering: a systematic literature review. Int. J. Soft Comput. Softw. Eng. 3, 219–229 (2013). https://doi.org/10.7321/jscse.v3.n3.33
16. Kayani, J., Zia, M.Q.: The analysis of knowledge, knowledge management and knowledge management cycles: a broad review. Int. J. Acad. Res. Econ. Manag. Sci. 1, 2226–3624 (2012)
17. Wiig, K.M.: Integrating intellectual capital and knowledge management. Long Range Plann. 30(3), 399–405 (1997). https://doi.org/10.1016/s0024-6301(97)90256-9
18. Rodríguez Elias, O.M.: Knowledge management as support in the software maintenance process (2003)
19. Capote, J., Llanten Astaiza, C.J., Pardo Calvache, C.J., et al.: Knowledge management as a support mechanism for improving software programmes in micro, small and medium-sized companies. Rev. Ing. e Investig. 28, 137–145 (2008)
20. Engedal, J.Ø.: Gamification. Thesis (2015)
21. Daneva, M., Pastor, O. (eds.): Requirements Engineering: Foundation for Software Quality. 22nd International Working Conference, REFSQ 2016, Gothenburg, Sweden, 14–17 March 2016, Proceedings. LNCS 9619 (2016). https://doi.org/10.1007/978-3-319-30282-9
22. Pedreira, O., García, F., Brisaboa, N., Piattini, M.: Gamification in software engineering - a systematic mapping. Inf. Softw. Technol. 57, 157–168 (2015). https://doi.org/10.1016/j.infsof.2014.08.007
23. Machuca-Villegas, L., Gasca-Hurtado, G.P.: Estrategias de gamificación con fines de mejora de procesos software en la gestión de proyectos. RISTI - Rev. Iber. Sist. e Tecnol. Inf. 490–499 (2019). https://doi.org/10.17013/risti.n.pi-pf
24. Machuca-Villegas, L., Gasca-Hurtado, G.P.: Aproximación de un modelo basado en gamificación para influir en la productividad de equipos de desarrollo de software. Cist 2019, 19–22 (2019)
25. Galeano-Ospino, S., Machuca-Villegas, L., Gasca-Hurtado, G.P.: Prioritized papers to be analyzed (2020). https://monoapps.co/saray/Prioritized_Papers_To_Be_Analized.pdf
26. Lubick, K., Barik, T., Murphy-Hill, E.: Can social screencasting help developers learn new tools? In: 2015 IEEE/ACM 8th International Workshop on Cooperative and Human Aspects of Software Engineering, pp. 113–114 (2015). https://doi.org/10.1109/CHASE.2015.18
27. Sodanil, M., Quirchmayr, G., Porrawatpreyakorn, N., Tjoa, A.M.: A knowledge transfer framework for secure coding practices. In: Proceedings of the 2015 12th International Joint Conference on Computer Science and Software Engineering (JCSSE), pp. 120–125 (2015). https://doi.org/10.1109/JCSSE.2015.7219782
28. Majumdar, S., Papdeja, S., Das, P.P., Ghosh, S.K.: SMARTKT: a search framework to assist program comprehension using smart knowledge transfer. In: Proceedings of the International Conference on Software Quality, Reliability and Security (QRS) 2019, pp. 97–108 (2019). https://doi.org/10.1109/QRS.2019.00026
29. Zieris, F., Prechelt, L.: Observations on knowledge transfer of professional software developers during pair programming. In: Proceedings of the International Conference on Software Engineering, pp. 242–250 (2016). https://doi.org/10.1145/2889160.2889249
30. Zieris, F.: Qualitative analysis of knowledge transfer in pair programming. In: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, vol. 2, pp. 855–858 (2015)
31. Fabri, J.A., L'Erario, A., Palácios, R.H.C., Godoy, W.: Applying mindstorm in teaching and learning process and software project management. In: 2015 IEEE Frontiers in Education Conference (FIE) (2015). https://doi.org/10.1109/FIE.2015.7344054
32. Gupta, R.K., Anand, T.: Knowledge transfer for global roles in GSE. In: Proceedings of the 2017 IEEE 12th International Conference on Global Software Engineering (ICGSE), pp. 81–85 (2017). https://doi.org/10.1109/ICGSE.2017.2
33. Sharma, V.S., Kaulgud, V., Duraisamy, P.: A gamification approach for distributed agile delivery. In: Proceedings of the International Conference on Software Engineering, 16 May 2016, pp. 42–45 (2016). https://doi.org/10.1145/2896958.2896966
34. Piras, L., Dellagiacoma, D., Perini, A., et al.: Design thinking and acceptance requirements for designing gamified software. In: Proceedings of the International Conference on Research Challenges in Information Science, pp. 1–12, May 2019. https://doi.org/10.1109/RCIS.2019.8876973
35. Perini, A., Seyff, N., Stade, M., Susi, A.: Exploring RE knowledge for gamification: can RE achieve a high score? In: Proceedings of the 2018 1st International Workshop on Affective Computing for Requirements Engineering, pp. 14–19 (2018). https://doi.org/10.1109/AffectRE.2018.00009
36. Kifetew, F.M., Munante, D., Perini, A., et al.: Gamifying collaborative prioritization: does pointsification work? In: Proceedings of the 2017 IEEE 25th International Requirements Engineering Conference (RE), pp. 322–331 (2017). https://doi.org/10.1109/RE.2017.66
37. Minelli, R., Mocci, A., Lanza, M.: Free hugs - praising developers for their actions. In: Proceedings of the International Conference on Software Engineering, vol. 2, pp. 555–558 (2015). https://doi.org/10.1109/ICSE.2015.342
38. Fraser, G.: Gamification of software testing. In: Proceedings of the 2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST), pp. 2–7 (2017). https://doi.org/10.1109/AST.2017.20
39. Marques, R., Costa, G., Mira Da Silva, M., Gonçalves, P.: Gamifying software development scrum projects. In: Proceedings of the 2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 141–144 (2017). https://doi.org/10.1109/VS-GAMES.2017.8056584
40. Marques, R., Costa, G., Da Silva, M.M., et al.: Improving scrum adoption with gamification. In: Americas Conference on Information Systems 2018: Digital Disruption (AMCIS), pp. 1–10 (2018)
41. Unkelos-Shpigel, N., Hadar, I.: Inviting everyone to play: gamifying collaborative requirements engineering. In: 2015 IEEE Fifth International Workshop on Empirical Requirements Engineering, pp. 13–16 (2015). https://doi.org/10.1109/EmpiRE.2015.7431301
42. Papoutoglou, M., Kapitsaki, G.M., Mittas, N.: Linking personality traits and interpersonal skills to gamification awards. In: 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 214–221 (2018). https://doi.org/10.1109/SEAA.2018.00042
43. Yilmaz, M., O'Connor, R.V.: A scrumban integrated gamification approach to guide software process improvement: a Turkish case study. Teh. Vjesn. 23, 237–245. https://doi.org/10.17559/TV-20140922220409
44. Piras, L., Giorgini, P., Mylopoulos, J.: Acceptance requirements and their gamification solutions. In: Proceedings of the 2016 IEEE 24th International Requirements Engineering Conference (RE), pp. 365–370 (2016). https://doi.org/10.1109/RE.2016.43
45. Parizi, R.M.: On the gamification of human-centric traceability tasks in software testing and coding. In: 2016 IEEE/ACIS 14th International Conference on Software Engineering Research, Management and Applications (SERA), pp. 193–200 (2016). https://doi.org/10.1109/SERA.2016.7516146
46. Brito, J., Vieira, V., Duran, A.: Towards a framework for gamification design on crowdsourcing systems: the G.A.M.E. approach. In: 2015 12th International Conference on Information Technology - New Generations (ITNG), pp. 445–450 (2015). https://doi.org/10.1109/ITNG.2015.78
47. Hof, S., Kropp, M., Landolt, M.: Use of gamification to teach agile values and collaboration. In: Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE), pp. 323–328 (2017). https://doi.org/10.1145/3059009.3059043
48. Isaac, J., Babu, S.V.: Supporting computational thinking through gamification. In: Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), pp. 245–246 (2016). https://doi.org/10.1109/3DUI.2016.7460062
49. Unkelos-Shpigel, N., Hadar, I.: Gamifying software engineering tasks based on cognitive principles: the case of code review. In: Proceedings of the 8th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), pp. 119–120 (2015). https://doi.org/10.1109/CHASE.2015.21
50. Lekh, R., Pooja: Exhaustive study of SDLC phases and their best practices to create CDP model for process improvement. In: 2015 International Conference on Advances in Computer Engineering and Applications (ICACEA), pp. 997–1003 (2015). https://doi.org/10.1109/ICACEA.2015.7164852
51. Muñoz, M., Peña, A., Mejia, J., et al.: Gamification to identify software development team members' profiles. In: Communications in Computer and Information Science, pp. 219–228. Springer, Cham (2018)
52. Sun, W., Marakas, G., Aguirre-Urreta, M.: The effectiveness of pair programming: software professionals' perceptions. IEEE Softw. 33, 72–79 (2015)
53. Ghanbari, H., Similä, J., Markkula, J.: Utilizing online serious games to facilitate distributed requirements elicitation. Elsevier Ltd (2015)
54. Unkelos-Shpigel, N., Hadar, I.: Let's make it fun: gamifying and formalizing code review. In: ENASE 2016 - Proceedings of the 11th International Conference on Evaluation of Novel Software Approaches to Software Engineering, pp. 391–395. https://doi.org/10.5220/0005937203910395
55. Parizi, R.M., Dehghantanha, A.: On the understanding of gamification in blockchain systems. In: Proceedings of the 2018 IEEE 6th International Conference on Future Internet of Things and Cloud Workshops (W-FiCloud), pp. 214–219 (2018). https://doi.org/10.1109/W-FiCloud.2018.00041
56. Tsunoda, M., Yumoto, H.: Applying gamification and posing to software development. In: Proceedings of the Asia-Pacific Software Engineering Conference (APSEC), December, pp. 638–642 (2018). https://doi.org/10.1109/APSEC.2018.00081
57. Tsunoda, M., Hayashi, T., Sasaki, S., et al.: How do gamification rules and personal preferences affect coding? In: 2018 9th International Workshop on Empirical Software Engineering in Practice (IWESEP), pp. 13–18 (2018). https://doi.org/10.1109/IWESEP.2018.00011
58. Silva, D., Coelho, A., Duarte, C., Henriques, P.C.: Gamification at SCRAIM. In: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST), vol. 176, pp. 141–147 (2017). https://doi.org/10.1007/978-3-319-51055-2_18
Architecture of a Platform on Sharing Endogenous Knowledge to Adapt to Climate Change

Halguieta Trawina1, Ibrahima Diop2, Sadouanouan Malo3, and Yaya Traore1

1 Université Joseph KI-ZERBO, Ouagadougou, Burkina Faso
[email protected], [email protected]
2 Université Assane SECK, Ziguinchor, Senegal
[email protected]
3 Université Nazi Boni, Bobo-Dioulasso, Burkina Faso
[email protected]
Abstract. Burkina Faso, a country in the heart of the Sahel, is one of the most vulnerable areas in Africa; 86% of its rural population is engaged in agricultural activities. This predominantly rural population, with little theoretical and scientific knowledge of climate change, suffers the consequences of this phenomenon, including crop losses, lack of pasture, and food shortages. How can rural agriculture be made resilient to climate change by taking into account and popularizing the local knowledge of rural people? In this paper we focus on farmers' perceptions of climate change and strategies for endogenous resilience. We present the architecture of a platform for sharing and co-constructing knowledge on the adaptation strategies developed by farmers to face climate risks in Burkina Faso. This architecture is a semantic wiki based on ontologies that formalize endogenous knowledge on agricultural techniques adapted to climate change.

Keywords: Architecture · Semantic web · Semantic wiki · Social web · Ontologies · Endogenous knowledge · Agricultural techniques · Local knowledge
1 Introduction

In Burkina Faso, agricultural activity occupies 86% of the rural population, whose practices remain traditional and whose production depends on climatic conditions [1]. This vulnerable population lives in the areas most sensitive to climate change. If we take into account the rapid pace of climate change and its impacts, such as the occurrence of extreme weather events, temperature changes, and unpredictable seasonality, then there is an urgent need for this population to adapt [2]. Adaptation in this context is a social process that must be anchored in specific contexts and informed by the needs and aspirations of those most directly affected. In our study, this adaptation consists of the appropriation, use, and popularization of endogenous knowledge on agricultural techniques. For a
better use of this endogenous knowledge, there should be a consensus on what endogenous knowledge is, who should benefit from it, how to approach it, and how to measure its effectiveness. An effective adaptation in the context of our study must therefore:

– be focused on the vulnerabilities, capacities and aspirations of the affected populations;
– tackle the multiple interrelated obstacles specific to each location;
– use top-down and bottom-up approaches.

The adaptation or appropriation of endogenous knowledge must take into account intersectionality, in particular the different experiences of actors in the field, in addition to factors such as education and access to technologies. The general objective of the work presented in this paper is to increase the resilience and adaptation capacities of rural populations facing the challenges of climate change. The approach that we propose promotes innovation and offers opportunities to exchange knowledge and experiences between disciplines and sectors, because it involves a synergy of actions between emerging technologies and proven endogenous ancestral knowledge. We therefore offer a platform that promotes:

– the opening of data on endogenous agricultural techniques, in order to make the mechanisms of resilience accessible;
– the linking of these data through a good structuring of the knowledge of the field and the processes;
– finally, a combination of endogenous knowledge and scientific knowledge (climatic and territorial data), in order to share them between the actors in this field.

We therefore propose a social and semantic web platform, which is well suited to the management of this endogenous knowledge in a context of climate change [3]. Its implementation is based on the use of semantic wikis and domain ontologies, which promotes the sharing and co-construction of local know-how on the Web; a minimal sketch of how such an ontology could represent an endogenous technique is given below.

This paper presents the architecture of the social and semantic web platform. After this introduction, in Sect. 2 we provide the background. Then, we propose a solution to the problems identified in the context and detail the architecture of the proposed platform in Sects. 3 and 4. We end the article with a conclusion and perspectives.
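The following snippet is a purely illustrative sketch of how a domain ontology could formalize one endogenous technique as linked data, using the rdflib Python library; the namespace, class, and property names are assumptions, not the ontology actually developed for the platform.

```python
# Hypothetical sketch: formalizing an endogenous agricultural technique (Zaï)
# as RDF triples so a semantic wiki can link and share it.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EK = Namespace("http://example.org/endogenous-knowledge#")  # assumed namespace

g = Graph()
g.bind("ek", EK)

# The Zaï technique as an instance of an assumed AgriculturalTechnique class.
g.add((EK.Zai, RDF.type, EK.AgriculturalTechnique))
g.add((EK.Zai, RDFS.label, Literal("Zaï", lang="fr")))
g.add((EK.Zai, EK.addressesClimateRisk, EK.Drought))  # assumed property
g.add((EK.Zai, EK.restores, EK.DegradedLand))         # assumed property

print(g.serialize(format="turtle"))  # Turtle output ready for a semantic wiki
```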
2 Context
Climate change has very serious repercussions on people, their environment and agriculture. The Intergovernmental Panel on Climate Change (IPCC), in its report [2], indicated that climate change, through its experienced or expected impacts, poses enormous challenges for the economic and social development of the poor and vulnerable regions of the world. In the short term, the expected adverse effects of climate change would result from the increase in the frequency and intensity of extreme events such as
droughts, floods and heat waves. In the long term, the expected impacts of climate change would come from the modification of the structure and function of ecosystems that it induces. Hence the need for societies to take measures allowing them to be more resilient to observed climate risks, or to anticipate their effects and impacts. In the same logic, the 2016 report [1] on endogenous knowledge gives an important place to local know-how (stone bunds [4]; the Zaï [5]; the half-moons [6]; etc.) for better resilience to climate change. The inventory of ancestral practices has shown that they are of capital interest for restoring the productivity of degraded land and rehabilitating agriculture in the case of Burkina Faso [1]. The observation today is that this peasant knowledge, which is of major interest, seems to be gradually gaining credit due to its adaptability to agro-ecological and social contexts, but also and above all its accessibility to all. That is why taking into account and sharing this endogenous knowledge has become more than a necessity. What about sharing knowledge for adaptation to climate change? One of the recommendations of the Paris COP21 agreement [7] is to share the knowledge based on these endogenous techniques, which are very easy to acquire at very low cost, for the benefit of the populations who need them most. The agreement invites member countries to work to facilitate the sharing of endogenous knowledge for better adaptation to climate change, and the 2017 United Nations Climate Change Conference in Bonn [7] reiterated the same recommendation. To emphasize the importance of knowledge sharing, Steffen S. Janus notes in his handbook [8] that the solutions to many development problems around the world are generally local, but are unfortunately never reproduced in other countries for lack of sharing. There is a growing and urgent need to mobilize and share adaptation techniques: although this knowledge exists in abundance, the actors on the ground do not have sufficient access to it.
3 Related Work
Climate change affects all agricultural sectors in multiple ways, which vary from one region to another. It makes seasonal climatic conditions more difficult to predict and increases the frequency and intensity of severe weather events such as floods, cyclones and hurricanes [9, 10]. In [11], the authors focus on the effects of climate change on agriculture. Considering the extent of the damage, they recommend that measures specific to the situation be taken to mitigate the impact of weather conditions on the farming system. The authors call this approach agrometeorology, and suggest that it be popularized through ICT because of the lack of information on agrometeorology on the web. Other authors, such as [8], believe that the solutions to the harmful effects of climate change are local. Unfortunately, these solutions very often remain local for lack of popularization on a large scale, and sharing this local knowledge is an effective way to mitigate the effects of climate change.
However, the authors distinguish between so-called tacit knowledge (which is difficult to express) and explicit knowledge (which is easily recorded in writing). The authors of [12] go in the same direction. They suggest that this sharing be done through an integrated platform (IAIF) that can be interrogated for the discovery of new knowledge. To make this possible, they suggest that the platform integrate data sources already available on the Internet, which makes it possible to recover a huge amount of knowledge on agriculture and its related fields. These data come in different forms and formats such as relational databases, XML, RSS, web pages and others. The goal is to better combine, organize and aggregate these resources so as to optimize their use. All these authors have clearly understood the importance of using the web and sharing knowledge for resilience to climate change and a better adaptation of agriculture, without however specifying what types of knowledge should be shared. There are also platforms (websites and web portals) that currently enable knowledge to be published and disseminated. Among them we can cite: AgroInformatics1, AfricaAdapt2, ENDA energy3, SAWAP4, WASCAL5, CILSS6, IAA7, PNUD/climate change adaptation8 and africanriskcapacity9. Through these platforms, researchers publish articles to value and promote the sharing of knowledge in the domains of agriculture, climate change and adaptation to climate change. The following table (cf. Table 1) summarizes what these platforms offer as content, namely the sharing of knowledge on climate change, on adaptation to CC, and on endogenous knowledge. From this table, we can see that no platform addresses all three aspects simultaneously, which raises two major concerns: interoperability between platforms, and the lack of platforms dedicated to the sharing of endogenous knowledge in a context of climate change.

• The heterogeneity of data sources and their lack of structure make the platforms non-interoperable

According to [12], a huge amount of knowledge on agriculture and its related fields (climate data, rainfall, etc.) is published on the Internet in different forms and formats such as relational databases, XML, RSS and web pages [13].
1 https://agroinformatics.org/.
2 http://www.africa-adapt.net.
3 http://endaenergie.org/.
4 http://www.sawap.net.
5 https://wascal.org.
6 http://www.cilss.int/.
7 https://www.africaadaptationinitiative.org/fr.
8 https://www.adaptation-undp.org/.
9 https://www.africanriskcapacity.org/.
Table 1. Synthesis of knowledge sharing platforms for adaptation to climate change.

Platforms                        KSP/CC   KSP/Adaptation   KSP/Endogenous
AfricaAdapt                      Yes      Yes              No
ENDA energy                      No       No               Yes
GAN                              No       Yes              Yes
SAWAP                            No       Yes              Yes
WASCAL                           No       Yes              Yes
CILSS                            Yes      Yes              No
IAA                              Yes      Yes              No
PNUD/climate change adaptation   No       Yes              No
African risk capacity            Yes      Yes              No

KSP: Knowledge Sharing Platform; CC: Climate Change
Faced with this concern of non-interoperability, Moalla et al. [14] state in their article that interoperability issues arise from the increasing complexity of products and services. Interoperability, they say, aims at increasing the ability of heterogeneous systems and organizations to coordinate their activities efficiently.

• The absence of platforms dedicated solely to sharing local know-how

This absence limits the popularization of local know-how for better adaptation to climate change. Sharing is more oriented towards scientific knowledge, which poses the problem of the co-construction of knowledge by field actors, hence the need to take it into account. However, it is important that these information resources be organized, centralized, combined and aggregated so as to optimize their use and provide richer information and more functionality to end users [10]. The advances brought by Semantic Web [15] and Semantic Wiki [16, 17] technologies have made more and more knowledge repositories available on the Internet [12] and will make it possible to better respond to the interoperability, heterogeneity and content-format problems of current sharing platforms.
4 The Solution, a Social and Semantic Web Platform for Sharing Endogenous Knowledge
In [18], we described our solution approach for the sharing of endogenous knowledge in the context of climate change. We distinguish two great families of knowledge: tacit knowledge (which is difficult to express) and explicit knowledge (which comes from experimentation or scientific data). Referring to this categorization of knowledge, endogenous knowledge (or local know-how) is considered tacit knowledge because it is unconsciously
understood and difficult to express, but acquired through experience and direct action, and generally shared through very interactive conversations, narrations and common experience [19]. Thus, the solution we propose links these two types of knowledge (tacit and explicit) for the adaptation of agriculture to climate change through a single social and semantic web platform, which is composed of:

– a semantic wiki for the collection and sharing of (tacit) endogenous knowledge;
– a scraping module for the (explicit) scientific knowledge sharing platforms on climate change that exist on the web;
– knowledge bases for the representation of tacit and explicit knowledge for adaptation to climate change;
– and a system for discovering links and establishing relationships between those two knowledge bases.

This solution will provide a platform that integrates data on endogenous as well as scientific knowledge. In addition, we propose a co-construction of this knowledge, because the actors holding it (the peasants) will also be able to add new knowledge described in an approximate language [20]. In this way it will be possible to ensure the sharing, after co-construction, of endogenous knowledge in the context of climate change. The following section presents the architecture of that platform.
5 Architecture of a Social and Semantic Web Platform for Sharing Endogenous Knowledge
In this part, we present the architecture of the social and semantic web platform, which should allow farming communities and other actors to share and co-construct their knowledge based on local know-how in the context of climate change. It is made up of five layers (see Fig. 1 below): (A) the users layer, which describes the users and their roles in the co-construction of knowledge; (B) the semantic wiki layer, which provides a query interface; (C) the so-called data persistence layer, composed of endogenous (tacit) and scientific (explicit) knowledge bases; (D) the scraping layer; and finally (E) the layer representing the link discovery module.

• Users Layer (Layer A)

For our solution, five categories of users have been identified:

– Peasants or farmers: the group of users holding local know-how, which it is important to describe in an approximate language as explained in [12].
– Agricultural technicians and engineers: responsible for identifying endogenous knowledge. They form the group of users who will work alongside actors in the rural world, assisting them in identifying local techniques or knowledge. Having received training in agriculture and/or livestock, they will be able to transcribe the information received.
Fig. 1. Architecture of the social and semantic web platform for the sharing of endogenous knowledge (or local know-how).
– Experts in the field: the group of users made up of researchers, academics, etc. who are interested in research on climate change, adaptation strategies, local knowledge, etc.
– Knowledge engineers: the group of actors in charge of formalizing knowledge on local know-how.

• Semantic Wiki Layer (Layer B)

This semantic wiki layer is made up of a set of modules developed using third-party components that provide the various functionalities of the wiki. It allows users to add information on the metadata of articles and to characterize their relationships. Our wiki can be compared to IkeWiki, proposed by Schaffert in [21] as a support for collaborative knowledge engineering. Like that wiki [12], which supports different levels of formalization ranging from informal texts to formal ontologies, the semantic wiki layer of our architecture will provide an easy-to-use interactive interface based on approximate language, as explained in [13].

• Persistence Layer (Layer C)
The persistence layer consists of the knowledge bases for endogenous knowledge, qualified as tacit knowledge according to [19], and those consisting of the scientific knowledge of researchers, qualified as explicit knowledge according to [19]. Thanks to the link discovery layer, it will be possible to correlate the two categories of knowledge. This knowledge is stored in RDF(S) and OWL format, because the wiki only supports RDF(S)/OWL reasoning.

• Web Scraping Layer (Layer D)

As mentioned in the context, there are platforms on the web where researchers publish articles related to climate change, adaptation to climate change, or any other scientific data or knowledge that we wish to exploit as structured data and then integrate into our platform. The role of layer (D), the scraping layer, is to scrape all of the platforms identified by their URLs, extract the data called explicit knowledge (which has been the subject of approved experimentation), and inject it into the knowledge base layer in RDF format. In other words, this layer extracts data from web pages and turns it into structured data (RDF(S), OWL, SPARQL), which is transmitted to the persistence layer, or knowledge base layer (Fig. 2).
Fig. 2. Web scraping
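To make the role of this layer concrete, the following is a minimal sketch (not from the paper) of how such a scraper could be written, assuming the jsoup library for HTML extraction and Apache Jena for RDF construction; the Dublin Core properties, the scraped URL handling and the output file name are illustrative choices.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.DCTerms;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.io.FileOutputStream;

public class ScrapingLayer {
    public static void main(String[] args) throws Exception {
        // 1. Scrape one of the platforms identified by its URL.
        Document page = Jsoup.connect("http://www.africa-adapt.net").get();

        // 2. Describe the extracted content as RDF triples.
        Model model = ModelFactory.createDefaultModel();
        model.createResource(page.location())
             .addProperty(DCTerms.title, page.title())
             .addProperty(DCTerms.description, page.select("p").text());

        // 3. Hand the structured data over to the knowledge base layer (Turtle here).
        try (FileOutputStream out = new FileOutputStream("explicit-knowledge.ttl")) {
            model.write(out, "TURTLE");
        }
    }
}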
• Discovery Layer (Layer E)

This layer finds links between the tacit and explicit knowledge bases. In the long term, this can allow us to find applications of endogenous knowledge within scientific knowledge.
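The paper does not fix a concrete matching method for this layer, so the following sketch only illustrates one plausible realization: comparing the rdfs:label values of the two knowledge bases with a string-similarity measure (Apache Commons Text) and materializing a link when they are close enough. The file names, the correspondsTo property and the 0.9 threshold are our assumptions.

import org.apache.commons.text.similarity.JaroWinklerSimilarity;
import org.apache.jena.rdf.model.*;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;
import org.apache.jena.vocabulary.RDFS;

public class LinkDiscovery {
    public static void main(String[] args) {
        Model tacit = RDFDataMgr.loadModel("endogenous.ttl");      // tacit knowledge base (assumed file)
        Model explicitKb = RDFDataMgr.loadModel("scientific.ttl"); // explicit knowledge base (assumed file)
        Property correspondsTo =
                tacit.createProperty("http://example.org/endogenous#correspondsTo"); // placeholder property
        JaroWinklerSimilarity sim = new JaroWinklerSimilarity();

        StmtIterator it = tacit.listStatements(null, RDFS.label, (RDFNode) null);
        while (it.hasNext()) {
            Statement ts = it.next();
            StmtIterator jt = explicitKb.listStatements(null, RDFS.label, (RDFNode) null);
            while (jt.hasNext()) {
                Statement es = jt.next();
                // Link resources whose labels are highly similar.
                if (sim.apply(ts.getString(), es.getString()) > 0.9) {
                    tacit.add(ts.getSubject(), correspondsTo, es.getSubject());
                }
            }
        }
        RDFDataMgr.write(System.out, tacit, RDFFormat.TURTLE);
    }
}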
6 Discussion
As detailed in the related work section, several web platforms and portals already exist that aim to promote content on local and endogenous know-how for the adaptation of agricultural techniques.
Other platforms proposed in the literature also exist. AFROWeeds (African Weeds of Rice), proposed by [22], integrates a collaborative web space and provides access to a common knowledge base and different tools such as identification systems, information systems, document provision, a forum, wikini, etc.; however, this platform only takes into account data in the field of rice cultivation. API-AGRO, proposed by [23], is an agricultural data exchange platform federating public and private actors. It is based on the use of APIs (Application Programming Interfaces), a technology that facilitates data sharing with clear rules of diffusion and use between the different actors of the agricultural ecosystem. Beyond the fact that all of these web portals and platforms aim to enable the sharing of information, we note that endogenous knowledge is not put in the foreground; moreover, they do not take into account the semantic aspect of the data. In its report "Climate change and sustainable agriculture in Burkina Faso: resilience strategies based on local knowledge", [1] made a very valuable inventory of endogenous agricultural techniques for the resilience of rural populations in Burkina Faso, but this remains static data that cannot be exploited by software agents. In our approach, we exploit the work of [1] and, by integrating semantic and social web technologies, we propose a platform for sharing endogenous or local knowledge in a context of climate change. The architecture of the social and semantic wiki proposed in this article contributes to this concern to organize, centralize, build and co-construct the endogenous knowledge of populations in order to share it for better resilience to climate change.
7 Conclusion and Perspectives
In this article, we gave a brief view of the consequences of climate change for agriculture, which occupies a very prominent place in our communities, especially in rural areas. We then looked at the means of resilience to these climate changes, focusing more specifically on local agricultural know-how as a CC adaptation solution. The main question we asked ourselves, and to which we have tried to provide answers, is how to adapt to climate change through the sharing of this endogenous knowledge. In the literature, we noted that attempts at a solution already exist for the promotion and sharing of this knowledge through standard platforms (websites, portals, …), but the observation is that there is a lack of interoperability between those platforms; also, the data published on them are in standard formats (HTML, PDF, …) and not formalized, therefore understandable only by humans and not by software agents. That is why, in our solution proposal, we recommend using new web technologies, namely the semantic and social wiki, to propose a knowledge base built in RDF(S) and OWL format from tacit knowledge (endogenous knowledge) and explicit knowledge (knowledge already published by researchers). Our social and semantic wiki solution will allow the construction and co-construction of the knowledge base of endogenous agricultural techniques for better adaptation to CC. Finally, we proposed the architecture of our social and semantic wiki platform.
In perspective, we plan in our next articles to come back in more detail to the description of each layer of this architecture, then to take stock of existing ontologies in the fields of agriculture, climate change, and adaptation or resilience measures, after which we will propose a methodology for constructing the ontology for our local-knowledge solution in the context of climate change. Also, we believe that this analysis of knowledge, which we have circumscribed to Burkina Faso, may well be extended to other West African countries that are also affected by climate change.
References
1. Dipama, P.J.M.: Changement climatique et agriculture durable au Burkina Faso: stratégies de résilience basées sur les savoirs locaux, rapport d'étude. Promouvoir la résilience des économies en zones semi-arides (PRESA) et Innovation environnement développement Afrique (IEDA), p. 36 (2016)
2. GIEC: Bilan 2007 des changements climatiques. Contribution des Groupes de travail I, II (2007)
3. Trawina, H., Malo, S., Diop, I., Traore, Y.: Towards a social and semantic web platform for sharing endogenous knowledge to adapt to climate change. In: 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–5. IEEE (2020)
4. Lu pour vous: Burkina Faso: des pratiques agricoles endogènes pour (…) - IED afrique | Innovations Environnement Développement
5. Kabore, P.N., Barbier, B., Ouoba, P., Kiema, A., Some, L., Ouedraogo, A.: Perceptions du changement climatique, impacts environnementaux et stratégies endogènes d'adaptation par les producteurs du Centre-nord du Burkina Faso / Farmers' perceptions of climate change, environmental impacts and endogenous adaptive strategies in the North Central of Burkina Faso (2019)
6. Zougmore: Rôle des nutriments dans le succès des techniques de conservation des eaux et des sols (cordons pierreux, bandes enherbées, zaï et demi-lunes) au Burkina Faso
7. COP21: Accord de Paris (2015)
8. Janus, S.S.: Le partage des connaissances pour des organisations plus efficaces (2017)
9. FAO-Adapt: Programme-cadre sur l'adaptation au changement climatique (2011)
10. Diop, I.: Vers un système de gestion des connaissances du changement climatique. Université Gaston Berger de Saint-Louis (2014)
11. Sheokand, R.N., Singh, S.: ICT based agro-informatics for precision and climate resilient agriculture. In: Proceedings of AIPA (2012)
12. Shoaib, M., Basharat, A.: Semantic web based integrated agriculture information framework. In: International Conference on Computer Research and Development, pp. 285–289, January 2010
13. Ganguly, A.R., Gupta, A., Khan, S.: Data mining technologies and decision support systems for business and scientific applications. In: Encyclopedia of Data Warehousing and Mining, ResearchGate (2005)
14. Moalla, N., Panetto, H., Boucher, X.: Interopérabilité et partage de connaissances. Revue des Sciences et Technologies de l'Information (2012)
15. Laublet, P., Reynaud, C., Charlet, J.: Sur quelques aspects du web sémantique. Assises du GDR I3, 59–78 (2002)
16. Traore, Y.: Extraction de connaissances dans une plateforme web social et sémantique basée sur un wiki. Thèse, Université Gaston Berger de Saint-Louis (2017)
17. Berners-Lee, T.: Linked Data - Design Issues, 27 July 2006
18. Trawina, H., Malo, S., Diop, I., Traore, Y.: Towards a social and semantic web platform for sharing endogenous knowledge to adapt to climate change (2020)
19. Janus, S.S.: Becoming a Knowledge-Sharing Organization: A Handbook for Scaling Up Solutions through Knowledge Capturing and Sharing. The World Bank (2016)
20. Afo-Loko, O.: Adoption de l'agriculture intelligente à Tandjoare-Togo comme option d'atténuation des changements climatiques: une étude de cas du labour de conservation (2016)
21. Schaffert, S.: IkeWiki: a semantic wiki for collaborative knowledge management. In: Proceedings of the 15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, pp. 388–396 (2006)
22. Bourgeois, T.L., Grard, P., Marnotte, P.: La plateforme AfroWeeds. Le partage d'information et l'aide à l'identification des adventices des rizières en Afrique dans une démarche collaborative régionale, p. 10 (2010)
23. Sine, M., Haezebrouck, T.P., Emonet, E.: API-AGRO - création d'une plateforme d'échange de données agricoles fédératrice d'acteurs publics et privés. Innovations Agronomiques, pp. 201–214 (2019)
Discovery and Enrichment of Knowledges from a Semantic Wiki Julie Thiombiano1(B) , Yaya Traoré1 , Sadouanouan Malo2 , and Oumarou Sié1 1 Université Joseph Ki-Zerbo, Ouagadougou, Burkina Faso
[email protected], [email protected], [email protected] 2 Université Nazi Boni, Bobo Dioulasso, Burkina Faso [email protected]
Abstract. Semantic wikis extend wiki platforms with the ability to represent structured information. They allow users to semantically annotate wiki content to enrich it with semantic properties, and then propose new ways to find, navigate and present wiki content through reasoning. Semantic wikis are used by many communities for a wide variety of purposes, such as organizing domain knowledge, and they have had reasonable success as collaborative platforms in the context of social semantic applications. In this paper, we consider a semantic wiki based on the IDOMEN ontology. We use the wiki infrastructure for ontology maintenance, with pages containing categories that are IDOMEN concepts, and typed links as relations and attributes. IDOMEN is used within the wiki to support page annotation (tag organization), semantic searching, guided editing, and consistency checking. Our work consists, on the one hand, of discovering new knowledge by pairing the concepts of the domain ontology with the clusters of tags obtained from the tags provided by users; and, on the other hand, of enriching the ontology by placing the newly discovered knowledge in it. Keywords: Clustering · Ontology · Semantic wiki · Semantic web · Knowledge discovery · Ontology enrichment
1 Introduction
Our work is part of the project1 "An agent-based simulation platform for awareness raising in the context of controlling and preventing the spread of an epidemic: the case of meningitis in Burkina Faso". The goal is to use knowledge engineering methods and, particularly, semantic web technologies to propose solutions for sharing knowledge about meningitis. A knowledge base on meningitis is being built through a platform. This base will allow the sharing of knowledge relating to the subject (meningitis). An ontology, IDOMEN (Infectious Disease Ontology for MENingitis) (Fig. 1), was developed by [1] and identifies all known concepts of meningitis and the relationships between them. The platform is inspired by the Semantic MediaWiki engine [2] and Sweet
1 http://www.ceamitic.sn/plateforme-de-simulation-a-base-dagents/.
Wiki [3]. Users create and share information freely by simply editing pages. In order for the data contained in the platform's pages to be usable (for example, to access content or search for useful information) by human agents as well as by software agents, each page must have a semantic layer associated with its content, allowing that page to be annotated. In our case, semantic annotation consists of associating with the page a set of categories, previously defined as concepts in the ontology, and tags that can be freely associated by the user. This annotation produces a lot of metadata that will probably contain new and useful knowledge. The challenge is to discover new knowledge from the tags associated with pages and to enrich the ontology by placing the discovered knowledge in it. This allows, on the one hand, a better management of shared knowledge and, on the other hand, keeps the ontology from becoming outdated. Recent approaches use data mining techniques for knowledge discovery and ontology enrichment. In fact, the two domains, data mining and ontological metadata, are closely linked [4]: on the one hand, ontologies make it possible to valorize the results obtained from data mining and, on the other hand, data mining techniques make it possible to build and enrich ontologies. In this paper we propose an approach to enrich an ontology with new knowledge discovered using a data mining technique, clustering. Our approach is built around the k-means algorithm and consists of pairing the clusters provided by clustering with ontology concepts to discover new knowledge; once this is done, we proceed to ontology enrichment by placing new concepts on one side and new concept labels on the other. In the remainder of the article, Sect. 2 introduces the preliminaries and the problem statement, Sect. 3 addresses the work related to our approach, followed by the description of our approach in Sect. 4. Section 5 addresses the validation of the approach, and in Sect. 6 we conclude and give perspectives.
2 Preliminaries and Problem Statement

2.1 Preliminaries

Definition 1. An ontology [5] represents the relevant concepts of a domain and the relationships between those concepts. Each concept is defined by a set of consensual terms that is not specific to an individual but accepted by a community of users. An ontology O is formally defined by O = (S, L) [6], where:
– S represents the conceptual structure and L represents the lexical structure.
– The conceptual structure is defined by S = (C, R, ≤C, σR), where C and R are disjoint sets containing concepts and associative relationships, ≤C defines the hierarchy of concepts, and σR : R → C × C is the signature of a relationship between concepts.
– The lexicon contains all the labels that are associated with the concepts and relationships of the conceptual component of the ontology. It is defined by:
L = (LC, LR, FC, FR), where LC and LR are disjoint sets of concept labels and relationship labels. FC is a function defined on all concepts by: ∀ l ∈ LC, FC(l) = {c | c ∈ C}, and FR is a function defined on all relationships by:
∀ l ∈ LR, FR(l) = {r | r ∈ R}. These functions are used to access the concepts and relationships designated by a label.
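As a purely illustrative reading of this definition (the names and types below are ours, not the paper's), the lexicon and its two functions can be pictured as label-to-set lookups:

import java.util.Map;
import java.util.Set;

public class Lexicon {
    // L_C and L_R: labels mapped to the concepts / relationships they designate.
    private final Map<String, Set<String>> conceptLabels;   // label -> concepts
    private final Map<String, Set<String>> relationLabels;  // label -> relationships

    public Lexicon(Map<String, Set<String>> conceptLabels,
                   Map<String, Set<String>> relationLabels) {
        this.conceptLabels = conceptLabels;
        this.relationLabels = relationLabels;
    }

    // F_C(l): the set of concepts designated by label l.
    public Set<String> fc(String label) {
        return conceptLabels.getOrDefault(label, Set.of());
    }

    // F_R(l): the set of relationships designated by label l.
    public Set<String> fr(String label) {
        return relationLabels.getOrDefault(label, Set.of());
    }
}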
Fig. 1. Extract from IDOMEN
WikiMeningitis
The knowledge sharing platform WikiMeningitis is a collaborative system inspired by semantic wiki engines to share knowledge about meningitis in Burkina Faso. It is under development and offers an easy-to-use interface, collaborative content management, a semantic annotation module, a knowledge discovery and ontology enrichment module, and a storage medium for data in heterogeneous formats, in a MySQL relational database or in a Resource Description Framework (RDF) triplestore. Using Semantic Web technology, it offers a semantic search module that uses the Simple Protocol and RDF Query Language (SPARQL) query engine to query the RDF triplestore. A module for discovering knowledge and enriching the ontology is in the finalization phase. The semantic search entry forms are developed using Java Server Pages (JSP) technology. A page created in the platform must be annotated, and its content is stored in both the MySQL database and the RDF triplestore. The triplestore and database exchange through the annotation system based on tags and categories that are ontology concepts. The figure below shows the architecture of the platform (Fig. 2). When a page is published by a user, it is structured (Fig. 3) by content, keyword and category.

• Content: refers to knowledge that a user has about meningitis and wants to share.
• Keyword: freely associated by users to annotate the content of the page.
• Category: designates the class to which the content will be assigned. The categories are concepts of the ontology.

As WikiMeningitis is mainly composed of pages created by users, and of categories and tags used to annotate pages, we consider in this paper the following wiki formalism [6]: WS = (C, ⊆, P, T, RC, RT), where:
Fig. 2. Architecture of the platform
Fig. 3. Graphical representation of the page structure
– C represents all categories;
– ⊆ represents the subcategory relationship between the categories of C;
– P represents all the pages;
– T represents all the tags;
– RC represents a binary relationship between C and P. A couple (p, c) ∈ RC indicates that the page p ∈ P is classified in the category c ∈ C;
– RT represents a binary relationship between P and T. A couple (p, t) ∈ RT denotes the fact that the page p ∈ P is tagged with the tag t ∈ T.
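To connect this formalism with the binary matrix used as clustering input in Sect. 4 (and in the validation, Table 1), here is a minimal sketch, with our own illustrative names, that derives the tag-by-page matrix from the RT relation: cell (t, p) is 1 exactly when (p, t) ∈ RT.

import java.util.List;

public class DatasetBuilder {
    // An element of RT: page p tagged with tag t.
    record Tagging(String page, String tag) {}

    // Builds the binary DS matrix: rows are tags, columns are pages.
    static int[][] buildMatrix(List<String> tags, List<String> pages, List<Tagging> rt) {
        int[][] ds = new int[tags.size()][pages.size()];
        for (Tagging x : rt) {
            ds[tags.indexOf(x.tag())][pages.indexOf(x.page())] = 1;
        }
        return ds;
    }
}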
Definition 2. Jaro-Winkler distance [7]. The higher the Jaro-Winkler distance between two strings, the more similar they are. This measure is particularly suitable for processing short strings such as keywords. The Jaro-Winkler distance dw of two strings s1 and s2 is:

dw = dj + l · p · (1 − dj)    (1)
– where dj is the Jaro distance between s1 and s2,
– l is the length of the common prefix (at most 4),
– p is a coefficient favouring strings with a common prefix (Winkler proposes the value p = 0.1).

2.2 Problem Statement

A semantic platform that leans on an ontological knowledge base is a powerful tool for user collaboration and knowledge sharing. However, over time the power of this tool will decrease if it is not updated, because a domain's knowledge constantly evolves, so the knowledge base may become outdated. In addition, a large amount of data is produced by user activity and stored in the platform, and this data may contain new knowledge. This leads us to the problem solved in this paper: how to extract new knowledge from this data, and how to enrich the ontology by placing that knowledge in it?
3 Related Work
Most of the work on concept identification in the ontology enrichment process is based on data mining techniques. The frequent pattern extraction technique allows the extraction of subsets of elements whose support is greater than or equal to a minimum threshold set by the user. The work of [6] adds a semantic aspect to frequent pattern extraction, searching for frequent patterns potentially useful to guide the discovery of categories and sub-categories in a semantic wiki. [8] proposes a methodology to update a basic ontology (Ob) from a general ontology (Og) that includes in its structural representation all the concepts of the basic ontology and the new concept to be added. [4] chooses the technique of extracting sequential patterns from a textual corpus for the automatic enrichment of a domain ontology; their approach integrates the discovery of new concepts and their placement in the ontology. [15] uses for ontology enrichment the discovery of multi-relational association rules coded in SWRL (Semantic Web Rule Language) from ontological knowledge bases. This method has the particularity of taking into account intentional knowledge, and since the discovered rules are represented in SWRL, they can be directly integrated into the ontology. Some works [9, 10] have chosen to use classification approaches in the process of extracting new concepts. This technique attempts to link candidate terms to ontological concepts, and when the number of candidate terms is high, they switch to the information gain technique to extract only the most representative terms. Using a principle similar to the one explained in the work of [9, 10], the works [11, 12] perform a clustering of terms according to their frequency of appearance in the corpus. Based on the extraction of ontological relationships from the Turkish Wikipedia, [13] proposes a weakly supervised ontology enrichment algorithm where, for a given word, a few little-known concepts ontologically linked through similarity scores computed via word2vec models may lead to the discovery of other related concepts. Using semi-automatically collected web documents as a corpus, [14] proposes a process of ontology enrichment using statistics and linguistics, applying evaluation techniques in the field of tourism. With the pattern extraction technique, the difficulty lies in the exploitation of the patterns, because the number
of extracted patterns is often very high: for example, for the Apriori algorithm with n elements, up to 2^n patterns can be generated, which can be a problem for their exploitation. As for work using the classification technique, while this method reduces the problem of the often large number of patterns, it should be noted that it requires experts to build a labeled extraction base, which takes time and also requires supervision. Also, to our knowledge, we have not encountered any work using the clustering technique for ontology enrichment from a semantic wiki. This is why we propose an ontology enrichment approach based on a form of clustering, k-means, which addresses the mentioned limitations for extracting new concepts. Our goal is to form clusters of tags and make matches in order to discover new knowledge. For the placement, we draw inspiration from the work of [4].
4 Description of the Approach
The approach is structured in four steps, as shown in Fig. 4. The first three steps perform the knowledge discovery, and the last step performs the ontology enrichment.

– Data pre-processing: consists of building a data set from the shared data stored in the platform. This data set must satisfy the input needs of the search algorithm, resulting in a binary data matrix.
– Clustering: consists of clustering the different elements according to their similarities using the k-means algorithm with the Euclidean distance. One of the reasons why we chose k-means is that we wanted the task to be unsupervised; moreover, we initially assign to k the value corresponding to the number of concepts existing in the ontology. Since we have binary data and the Euclidean distance uses numerical data, we used the frequencies of appearance of the elements to form the clusters. We then ensure that the resulting clusters are well formed by applying the Jaccard distance to the data set to obtain a matrix of distances between tags; this allows us to check the inter- and intra-cluster similarities. The result of this step is well-formed clusters.
– Pairing: this step matches domain ontology concepts with potential new knowledge, using the Jaro-Winkler distance [7] to find the similarity between two terms. For a given cluster Cl and an ontology concept C:
  • if at least one element of Cl is similar to C, then all other elements of Cl will be used to enrich the labels of the concept C;
  • otherwise, Cl is new knowledge, with the centroid as a concept and the other elements of the cluster serving as its labels.
This discovery step is performed by the FindSim algorithm (Fig. 6).

– Placement: this step places the newly discovered terms (concepts and/or concept labels) in the existing ontology. The OntoUpdate algorithm (Fig. 5) enriches the ontology and uses the two algorithms from [4] below to perform this task:
  • Algorithm Placement: used when a new concept must be placed together with its labels. The algorithm receives as input all the concepts and label relationships.
  • Algorithm Place-Lab: used for enrichment that concerns only the labels of a concept; this algorithm places a new label in the ontology. Since labels are inherited from parent concepts by their children, the algorithm checks that the label does not already exist for a child concept, to avoid redundancies (lines 1-4-7).
Fig. 4. Illustration of the proposed approach
Fig. 5. OntoUpdate algorithm
Fig. 6. FindSim algorithm
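Since FindSim is given only as a figure, the following hedged sketch restates its pairing logic in Java, using the JaroWinklerSimilarity implementation from Apache Commons Text; the 0.9 similarity threshold is our assumption, not a value from the paper.

import org.apache.commons.text.similarity.JaroWinklerSimilarity;
import java.util.List;
import java.util.Set;

public class FindSim {
    private static final JaroWinklerSimilarity SIM = new JaroWinklerSimilarity();

    static void pair(List<Set<String>> clusters, Set<String> concepts) {
        for (Set<String> cluster : clusters) {
            String matched = null;
            for (String element : cluster) {
                for (String concept : concepts) {
                    if (SIM.apply(element, concept) > 0.9) { matched = concept; break; }
                }
                if (matched != null) break;
            }
            if (matched != null) {
                // Some element matches an ontology concept: the remaining
                // cluster elements become new labels of that concept.
                System.out.println("Enrich labels of " + matched + " with " + cluster);
            } else {
                // No match: the cluster centroid becomes a new concept and the
                // other elements serve as its labels.
                System.out.println("New concept from cluster " + cluster);
            }
        }
    }
}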
5 Approach Validation
Let us consider the following data extracted from WikiMeningitis (Fig. 7) and part of IDOMEN (Fig. 8).
Fig. 7. Data extracted from WikiMeningitis
Fig. 8. Existing Ontology
The different tags correspond to the following terms in the field of meningitis:

– A = risk factors
– B = climatic factors
– C = social factors
– D = vomitting
– H = meningitis sign
– G = photophobia
– Ca = stiff neck
– E = infectious agent carrier of meningitis
– F = healthy carrier of meningitis

The dataset we get is the following DS matrix (Table 1):

Table 1. Dataset DS

     P01  P02  P03  P04  P05
A    0    1    1    1    1
B    0    1    1    1    1
C    0    1    1    1    1
D    0    1    0    0    1
E    0    0    1    1    1
F    1    0    1    0    1
G    1    1    0    0    0
H    0    0    0    1    0
Ca   0    0    1    0    0
We move on to the next step, which is to form clusters (Fig. 9).
Fig. 9. Building the clusters
Now let's check the quality of the clusters formed (Table 2).

Table 2. Distance matrix

     A    B    C    D    E    F    G    H    Ca
A    0
B    0    0
C    0    0    0
D    0.4  0.4  0.4  0
E    0.2  0.2  0.2  0.6  0
F    0.6  0.6  0.6  0.6  0.4  0
G    0.8  0.8  0.8  0.4  1    0.6  0
H    0.6  0.6  0.6  0.6  0.4  0.8  0.6  0
Ca   0.6  0.6  0.6  0.6  0.4  0.4  0.6  0.4  0
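The values in Table 2 can be reproduced by counting, for each pair of tags, the proportion of pages on which their binary vectors disagree; for instance, A = (0,1,1,1,1) and E = (0,0,1,1,1) differ only on P02, giving 1/5 = 0.2. A minimal sketch of this computation follows (note that the reported values match this simple-matching form of the distance over the DS matrix):

public class TagDistance {
    // Fraction of positions on which two binary tag vectors disagree.
    static double distance(int[] a, int[] b) {
        int mismatches = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) mismatches++;
        }
        return (double) mismatches / a.length;
    }

    public static void main(String[] args) {
        int[] a = {0, 1, 1, 1, 1}; // tag A over pages P01..P05
        int[] e = {0, 0, 1, 1, 1}; // tag E
        System.out.println(distance(a, e)); // 0.2, as in Table 2
    }
}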
By comparing the distances in the table above, we can see that the intra-group similarities are stronger than the inter-group similarities. For example:

– d(A, B) < d(A, E)
– d(A, C) < d(A, E)
– d(A, B) < d(A, F)
– d(A, C) < d(A, F)
So the elements {A, B, C} are more similar to each other than the elements {A, B, E}, {A, B, F}, {A, C, E} or {A, C, F}. After applying the FindSim algorithm, we have a set of new knowledge that we place; the resulting ontology after enrichment is as follows (Fig. 10):
Fig. 10. Result of enrichment
The new terms discovered and placed are:

– the new concept E and its label F;
– the labels B and C, which enrich the concept A;
– the label D (corresponding to "vomitting" in IDOMEN), which enriches the concept H.

The different elements have been correctly placed in IDOMEN.
6 Conclusion and Perspectives
In this paper we propose an approach to discover knowledge from a semantic wiki and to enrich an ontology with it. Our approach allowed us to discover new knowledge from the dataset considered, which enriched the basic ontology. This approach has the advantage of extracting valuable knowledge from the mountains of wiki data to enrich the ontology. However, efforts remain to be made to perfect the method. Indeed, as a first step we are going to evaluate the efficiency of the algorithms we use by testing them on more data provided by WikiMeningitis. As a second step, we will evaluate the method with several experts in the field of meningitis. Afterwards, the next step will be to explore other clustering methods, such as hierarchical and density-based clustering, and also to propose a supervised multi-label classification method to categorize pages from their creation according to keywords.
References
1. Béré, W., Camara, G., Malo, S., Lo, M., Ouaro, S.: Towards meningitis ontology for the annotation of text corpora, pp. 322–331 (2018)
2. Krötzsch, M., Vrandecic, D., Völkel, M.: Semantic MediaWiki. In: ISWC 2006: 5th International Semantic Web Conference, Athens, GA, USA, 5–9 November 2006 (2006)
3. Buffa, M., Gandon, F., Ereteo, G.: Wiki et web sémantique. In: Trichet, F. (ed.) IC 2007: 18e Journées Francophones d'Ingénierie des Connaissances (2007)
4. Di Jorio, L., Abrouk, L., Fiot, C.: Enrichissement d'ontologie basé sur les motifs séquentiels. lirmm-00176073 (2007)
5. Gruber, T.R.: Toward principles for the design of ontologies used for knowledge sharing. Int. J. Hum.-Comput. Stud., Spec. Issue: Role Formal Ontol. Inf. Technol., 907–928 (1993)
6. Traoré, Y., Malo, S., Diop, C.T., Lo, M., Ouaro, S.: Extraction des connaissances dans un wiki sémantique: apport des ontologies dans le prétraitement. In: 5th Journées Francophones sur les Ontologies (JFO), 14–16 November 2014, Hammamet, Tunisie, pp. 127–138 (2014)
7. Winkler, W.E.: The state of record linkage and current research problems. Statistics of Income Division, Internal Revenue Service, Publication R99/04 (1999)
8. Ngom, A.N., Traore, Y., Diallo, P.F., Kamarasangare, F., Lo, M.: A method to update an ontology: simulation. In: IKE 2016, the 2016 International Conference on Information and Knowledge Engineering, Las Vegas, USA (2016)
9. Han, E.H., Karypis, G.: Centroid based document classification: analysis and experimental results. In: Proceedings of the 4th European Conference on Principles of Data Mining and Knowledge Discovery, pp. 424–431 (2000)
10. Neshatian, K., Hejazi, M.R.: Text categorization and classification in terms of multi-attribute concepts for enriching existing ontologies. In: Proceedings of the 2nd Workshop on Information Technology and its Disciplines, pp. 43–48 (2004)
11. Agirre, E., Ansa, O., Hovy, E., Martinez, D.: Enriching very large ontologies using the WWW. In: Proceedings of the ECAI 2000 Workshop on Ontology Learning (2000)
12. Parekh, V., Gwo, J., Finin, T.: Mining domain specific texts and glossaries to evaluate and enrich domain ontologies. In: International Conference on Information and Knowledge Engineering, June 2004
13. Pembeci, İ.: Using word embeddings for ontology enrichment. Int. J. Intell. Syst. Appl. Eng. 4(3), 49–56 (2016)
14. Kuntarto, G.P., Gunawan, I.P., Moechtar, F.L., Ahmadin, Y., Santoso, B.I.: Dwipa Ontology III: implementation of ontology method enrichment on tourism domain. Int. J. Smart Sens. Intell. Syst. 10(4) (2017)
15. d'Amato, C., Staab, S., Tettamanzi, A.G.B., Duc, M.T., Gandon, F.: Ontology enrichment by discovering multi-relational association rules from ontological knowledge bases. In: SAC 2016 - 31st ACM Symposium on Applied Computing, April 2016, Pisa, Italy, pp. 333–338 (2016). https://doi.org/10.1145/2851613.2851842, hal-01322947
Towards a Knowledge Condensation Tool to Capture Expertise Jose R. Martínez-Garcia1(B) , Ramón R. Palacio2 , Francisco-Edgar Castillo-Barrera1 , Gilberto Borrego3 , and Hector D. Marquez-Encinas2 1 Ciencias de la Computación, Universidad Autonoma de San Luis Potosi, San Luis Potosi,
Mexico [email protected], [email protected] 2 Unidad Navojoa, Instituto Tecnologico de Sonora, Navojoa, Mexico [email protected], [email protected] 3 Computación y Diseño, Instituto Tecnologico de Sonora, Ciudad Obregon, Mexico [email protected]
Abstract. Finding relevant expertise is a critical need in software organizations, since developers use it to support their knowledge needs. In software organizations, expertise can be found in architectural knowledge (AK), which incorporates the experience and problem reasoning of human resources through two primary sources: experts and artifacts. Consequently, knowing how to model it could grant access to best practices, training practices, and data across applications; this has generated considerable interest from organizations, which seek to develop formal approaches to condense the available knowledge from different sources in a systematic manner. In this work, we present a semantic knowledge model, developed using knowledge modeling techniques, that aims to link artifacts and experts within the organization; moreover, we present ExCap, a tool that implements the presented semantic model. Keywords: Knowledge · Expertise · Software engineering · Knowledge representation · Expertise location
1 Introduction
Knowledge plays a key role in supporting the software development process [1], which consists of activities that demand the application of knowledge representations for their understanding and execution [2, 3]. During these activities developers deal with different situations, such as: i) decision-making: developers are constantly making both technical and managerial decisions, commonly based on personal experience obtained from previous projects or informal conversations [4, 5]; ii) training or updating in technologies: sometimes developers need to use methods or technologies they are not familiar with [6]; and iii) the problem domain: developers often need to solve problems during software development (e.g., bug fixing, requirement misunderstandings, programming errors, or architecture design).
Consequently, developers constantly need knowledge beyond what they already possess; this means that they need expertise. K. Anders Ericsson defines expertise as the characteristics and skills that make an individual suitable to solve problems fast and effectively; moreover, it includes the resources used by this individual [7, 8]. Within software development, expertise can be found in architectural knowledge (AK), which refers to the elements employed to construct an architectural design (e.g., structures, properties, and relationships). AK incorporates the experience and problem reasoning of human resources into the organizational culture through two primary sources: experts and artifacts [9, 10]. Artifacts denote physical and digital documents (e.g., requirements, vision) and source code [11, 12], while experts are software developers specialized in problem-solving tasks at different stages of the process [7, 13]. As mentioned before, developers deal with different situations during software development; therefore, becoming an expert in such situations requires a large amount of knowledge; furthermore, it can be time-consuming and often technically difficult to achieve [14]. Developers accumulate expertise from the technologies they use and the problems they solve within projects (e.g., bug solving, requirements, architecture design); however, members of the organization do not benefit from this expertise. Much research in recent years has highlighted the importance of AK management; researchers suggest that AK should be captured and managed along with the system description to improve the software development process by reusing AK [15, 16]. Current literature on AKM generally speaks of mechanisms that put the knowledge from formal artifacts in an explicit form. In this sense, Borrego et al. present the knowledge condensation concept, which describes the process of capturing and classifying expertise before it is lost, with the aim of easing knowledge retrieval [17]. Compared with AK management, knowledge condensation centers on tacit and informal components that lack mechanisms to make them explicit; this involves artifacts generated and shared informally among developers (e.g., unstructured text and electronic media, books, or source code). The knowledge condensation concept comes from agile development environments, where tacit knowledge is preferred [9, 18]: the agile method states that face-to-face conversation is the most effective way to share knowledge; moreover, only the information that the team considers sufficient to understand the project is documented [19]. Little of this knowledge becomes explicit, and it usually stays in the log files of unstructured textual and electronic media (UTEM) [10]; over time this knowledge loses meaning and context and is thus prone to vaporize. Therefore, knowledge condensation aims to classify, retrieve, and share among stakeholders valuable knowledge that was in an unsuitable form for its recovery. Although the intention of knowledge condensation is not to convert AK into a formal notation, it represents a step forward towards AK formalization, i.e., transforming explicit documented knowledge into explicit formalized knowledge [20]. Acquiring relevant expertise is an essential task for developers, and knowing how to model it in an explicit form will allow taking advantage of the best practices, training knowledge, and data across multiple applications in the organization.
Furthermore, capturing expertise could improve the way it is consumed and also help with the situations
mentioned earlier; thus, organizations must develop formal approaches to condense the available knowledge from different sources in a systematic manner. Current proposals condense only one of the two primary AK sources. Proposals using artifacts as the expertise source have focused mostly on one type of artifact, source code [6, 21, 22], rather than other artifacts such as definition details; moreover, in these proposals only the creator is aware of the expertise's existence, instead of the whole organization. On the other hand, some proposals use experts as a source of expertise, since developers often have knowledge of, or strategies for, a particular problem. However, constant questions to colleagues could lead to an erosion of interpersonal relationships, an important aspect for building trusting relationships in collaborative environments, especially in agile development. Moreover, sometimes experts are unavailable for diverse reasons (e.g., they skip the day, are on vacation, or leave the company). To the best of our knowledge, current proposals do not link the artifacts and the experts (providers); therefore, this paper proposes a tool to capture expertise by means of a semantic model, which links both artifacts and experts. The remainder of this work is structured as follows: Sect. 2 describes related work on proposals that condense knowledge; Sect. 3 explains the integration of artifacts and experts by means of semantic modelling; Sect. 4 presents a tool to condense expertise; a discussion is presented in Sect. 5; and finally, conclusions and future work are presented in Sect. 6.
2 Related Work
Our approach to capturing expertise is not the first to condense knowledge from developers in software organizations. Here we analyze related proposals to understand how they capture and consume expertise. Current proposals focus only on a particular AK source; among artifact proposals, the main focus is source code. For example, Ponzanelli et al. [23] present SeaHawk, an Eclipse plugin that assists programmers with code snippets from Stack Overflow; SeaHawk uses language-independent annotations to link documents with code snippets. Similarly, Brandt et al. [24] present Blueprint, a web search interface integrated into the Adobe Flex Builder development environment; Blueprint augments queries with code context through a code-centric view of search results embedded in the editor. Also, McMillan et al. [22] present Portfolio, a code search system that supports developers by retrieving functions and describing their usage; moreover, using a combination of models, the system captures how developers share functions. Apart from that, only a few works use other types of artifacts; for instance, Borrego et al. [25] present a complementary tool for Slack, consisting of a classification mechanism based on social tagging; this mechanism takes advantage of the architectural knowledge in unstructured textual and electronic means (UTEM). In parallel, Bonilla-Morales [26] proposes a tool to reuse use case diagrams; the tool allows storing information from use case diagrams. On the other hand, some proposals that use experts as an AK source try to make organizations aware of their expertise. For example, Bhat et al. [4] propose a recommendation system that identifies developers who could be involved in the design of a software
system; the approach quantifies the skills of developers to match and recommend an individual suited to the needs of the system to be designed. Alternatively, Matter et al. [27] propose a vocabulary-based model that suggests developers with the appropriate expertise to handle a bug report; the model uses developers' contributions to suggest someone. Moreover, Kagdi et al. [28] present xFinder, a tool that recommends experts based on their expertise, measured using the developers' commit contributions. Furthermore, Minto and Murphy [29] propose Emergent Expertise Locator (EEL), an approach to ease the process of finding the right expert to ask about a problem during a software development task. As can be seen, numerous proposals have asserted the importance of knowledge condensation in software development organizations. These proposals distinguish between artifacts and experts as the primary sources of expertise for organizations. Artifact proposals exploit the expertise generated by developers during the software development process, with two characteristic limitations: a narrow focus on source code, and an informal accumulation of knowledge produced by developers but unnoticed by organizations. Expert proposals, on the other hand, employ developers as a relevant source of expertise in the software development process, to perform tasks such as handling a bug report, system design, and problem solving. Although the empirical evidence of the described works supports the value of experts in helping, the proposals give little importance to the artifacts experts generate or use; furthermore, constant questions to experts could lead to an erosion of interpersonal relationships. In the current literature, we found relevant challenges to discuss. First, proposals that condense knowledge must grant access both to individuals with a particular expertise level and to their artifacts. This access will ensure the availability of the required resources even if the provider is not available, and also reduce the erosion of interpersonal relationships. Second, a proposal should incorporate more sources than just code, since other digital sources (e.g., bookmarks, books, manuals, and tutorials) are frequently used. One way to address these challenges could be semantic knowledge modeling, with the aim of building a search engine for both structured and unstructured data, to search across and centralize applications, databases, files, and spreadsheets.
3 Semantic Knowledge Modelling
From the previous section, we found that expertise accumulates without the organization being aware of its existence, and that, since each proposal uses its own inputs and outputs, expertise cannot be centralized. In this sense, semantic knowledge modeling could be a way to link experts and the artifacts developers produce and consume during software development. In previous work, Martinez-Garcia et al. (2020) presented an ontology to link artifacts and the developers that use them [30]. The present work is an extension: here we describe the semantic model used to develop the ontology and how our proposed tool incorporates the elements of the model; we aim to highlight how semantic modeling could improve knowledge condensation. The semantic model was developed using knowledge modeling techniques; these techniques consist of performing tasks such as knowledge acquisition, requirements specification and knowledge conceptualization. The knowledge acquisition task consists of acquiring
the knowledge from the domain experts (e.g., through interviews, focus groups, or surveys) or from the literature. The requirements specification task consists of following guidelines that help to capture knowledge from users and to produce a vocabulary of the main concepts. Finally, the conceptualization task consists of organizing and modeling the acquired knowledge using external representations (e.g., UML, IDEF5). Software organizations produce and consume knowledge, which is formally known as architectural knowledge (AK). Figure 1 shows the main elements of the semantic model: projects, artifacts, and developers. Developer represents a description of a team member in a software development project; this description could include properties such as a name, email, or cellphone. Projects represent a description of the current or past works of a developer; the aim is to create a developer skills profile based on the project record; the properties used to describe a project are a language, platform, role, and project name. Artifacts represent a description of the different resources used by developers to solve problems or doubts while developing software (e.g., bookmarks, code snippets, documentation, and tutorials); artifact descriptions include properties such as a platform and a subject.
Fig. 1. Semantic knowledge model of architectural knowledge (AK).
The presented model allows linking its elements through the following characteristics. First, we have data values, which help give a description of an element; for example, in the case of the developer element, they give a description of their personal data. Second, properties are binary relations on instances of elements from the model; their purpose is to link two instances. For example, the property "isUsedBy" links an instance of Artifacts with an instance of Developers; thus, knowing the provider of an artifact grants access to the expert and their experience. The property "belongsTo" links an instance of Artifacts with a Project; the aim here is to locate in which projects developers create
or generate an artifact. Finally, the "worksIn" property links developers and projects to build a background of developers' skills based on current and past projects.
4 ExCap: A Tool to Capture Expertise in Software Development
Here we present ExCap (Expertise Capture), a tool to condense expertise in software development. ExCap uses all the digital artifacts that developers create and consume during a software development project. Figure 2 shows a description of the current architecture of the ExCap tool, which consists of two layers: (i) a Client Layer and (ii) a Server Layer.
Fig. 2. Architecture of ExCap tool
(i) In the Client Layer, ExCap works as a background daemon process, using digital documents as a source of expertise for the developers. In this sense, the tool considers any type of digital document to condense, such as code classes, manuals, books, or video tutorials. The ExCap tool was implemented to capture and search expertise in software organizations; it uses a Java interface that makes it easier for the user to carry out these two activities, with a drag-and-drop function to facilitate artifact sharing in the tool. (ii) In the Server Layer, we have two different sections which help the application to perform its functions. First, we have a section that manages the user data: ExCap connects to a database to perform basic functions such as login and project registration, using a communication protocol with the database through HTTP and
MySQL; this section also holds the personal settings. Second, the server has an FTP communication protocol, which is configured to work directly with the application to perform its functions; these functions consist of tasks such as filtering, uploading files, and creating new projects. All the expertise shared by developers by means of ExCap is represented as an expertise graph, which consists of interlinked instances of the elements described in the semantic modelling (see Fig. 1). From the expertise graph, an RDF file is generated. The Resource Description Framework (RDF) is a data model based on making statements about resources in expressions of the form subject-predicate-object, commonly known as triples. The subject denotes the resource, and the predicate denotes aspects of the resource; moreover, it expresses the relationship between the subject and the object. To manipulate RDF files we use Apache Jena, an open source Java framework for building semantic web and linked data applications. Next, we describe the tool functionality, which consists of tabs that incorporate the elements from the semantic modeling presented (see Fig. 1).
4.1 Developers
First, the system requests a login before starting the daemon; otherwise, the user must create an account, filling in the necessary data (see Fig. 3). The user profile helps us to keep a record of the expertise of a developer working in the organization. This record includes information about the artifacts produced and consumed by developers, as well as where these artifacts were applied. This tab helps to create developer instances from the model presented before; we used email and name as data values to give a description of the developer.
Fig. 3. Developers semantic incorporation in ExCap
4.2 Projects
In the projects tab, the developer can create a new project description; the creation of projects is an essential aspect to link developers' experience with their projects. Developers can create or update current or past projects they have participated in. Projects are classified based on their platform (e.g., web, desktop, or mobile) (Fig. 4).
Fig. 4. Project semantic incorporation in ExCap
4.3 Artifacts
In the upload tab, the user can capture knowledge: the user selects a file and drags and drops it into the tool. In this case, the tool considers digital artifacts such as video tutorials, Word or PDF documents (which could be manuals), and any file that could help a developer to resolve a doubt or problem. As with projects, artifacts are classified based on their platform; furthermore, we use keywords to associate these artifacts with a problem or topic (Fig. 5).
Fig. 5. Artifacts semantic incorporation in ExCap
4.4 Knowledge Condensation Use Case Scenario in the ExCap Tool
In this section we describe a scenario of the knowledge condensation process using the ExCap tool.
Fig. 6. Expertise condensation scenario
Figure 6 describes the process of knowledge condensation in the ExCap tool. The developer must start the tool and sign up to build a developer profile (1); thus, every registered user generates an instance of a developer. Developers register their current or past projects (2); those projects will be associated with the current developer to describe her/his skills. Developers can share resources that might be useful to other colleagues (3). We consider digital artifacts such as manuals, source code, and video tutorials. Both artifacts and projects are classified based on a development platform: desktop, web, or mobile. Furthermore, artifacts include keywords to relate resources to a particular topic or situation in which they can be used. Moreover, artifacts are associated with a particular project where a developer used them. The instances created within the ExCap tool are linked among each other; thus, the condensed expertise turns into a graph with all the knowledge from the organization (4).
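As an illustration, not taken from the ExCap source code, the following Apache Jena sketch builds a tiny expertise graph using the classes and properties of Fig. 1; the namespace and all identifiers are our own assumptions:

```java
import org.apache.jena.rdf.model.*;
import java.io.FileOutputStream;

public class ExpertiseGraphSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical namespace; ExCap's real ontology IRI may differ.
        String ns = "http://example.org/excap#";
        Model model = ModelFactory.createDefaultModel();

        Property isUsedBy = model.createProperty(ns, "isUsedBy");
        Property belongsTo = model.createProperty(ns, "belongsTo");
        Property worksIn = model.createProperty(ns, "worksIn");

        // One developer, one project, one shared artifact (all placeholders).
        Resource dev = model.createResource(ns + "dev/jgarcia")
                .addProperty(model.createProperty(ns, "name"), "J. Garcia")
                .addProperty(model.createProperty(ns, "email"), "jgarcia@example.org");
        Resource project = model.createResource(ns + "project/payroll-web")
                .addProperty(model.createProperty(ns, "platform"), "web");
        Resource artifact = model.createResource(ns + "artifact/jpa-manual")
                .addProperty(model.createProperty(ns, "subject"), "persistence");

        // Link the instances as in the semantic model of Fig. 1.
        artifact.addProperty(isUsedBy, dev);
        artifact.addProperty(belongsTo, project);
        dev.addProperty(worksIn, project);

        // Serialize the expertise graph to an RDF/XML file.
        model.write(new FileOutputStream("expertise.rdf"), "RDF/XML");
    }
}
```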
5 Discussion
Some relevant aspects related to this work that are important to discuss are: (i) the way current approaches condense expertise in software development; (ii) our semantic knowledge model; and (iii) ExCap as an alternative to condense knowledge.
(i) The importance of expertise has generated interest among organizations, which are developing proposals to capture and share it within the organization. These proposals focus on a particular source of AK (artifacts or experts). Regarding artifact
proposals, the main goal is to reuse the resources produced and consumed by developers; these proposals have some limitations: a narrow focus on source code, and an informal accumulation of knowledge realized by developers but unnoticed by organizations [22–24]. Apart from that, expert proposals employ developers as a source of expertise; experienced developers often have strategies for a particular problem; in general, these proposals do not grant access to the artifacts produced by the developers [4, 28]. Therefore, the developers' availability limits access to expertise. Furthermore, constant questions to experts could lead to interpersonal relationship erosion. (ii) We address the limitations mentioned before with semantic knowledge modeling, which consists of information structures that allow storing statements about facts. The main elements of the structure are artifacts, projects, and developers. Developer represents a description of a team member in a software development project. Projects represent a description of the current or past works of a developer; the aim is to create a developer skills profile based on the project record. Therefore, the model links a developer's experience through the projects on which they have worked or are currently working; the artifacts are linked to a particular project where they were used, and to the developer that produced or consumed them. This model will allow researchers in software engineering to centralize applications, which consequently could grant access to the expertise produced and consumed in software organizations. (iii) Finally, given the narrow focus of existing proposals on source code, we propose an alternative to capture expertise in software development. ExCap captures the expertise found in digital documents such as books, source code files, or video tutorials. ExCap incorporates the elements from the presented semantic knowledge modeling; the aim is to link the digital documents to a developer and the situation where they were employed. Finally, since our proposal uses the presented model, it allows us to integrate with other proposals that capture expertise from software development organizations.
6 Conclusion and Future Work
In this work, we addressed the problem of capturing relevant expertise within software organizations. First, we presented a semantic knowledge modeling for both structured and unstructured data; the goal is to search and centralize applications, databases, and files. Second, we described ExCap, a tool that incorporates the elements of our semantic modeling. ExCap captures artifacts used by developers, particularly digital documents (e.g., books, manuals, source code files, or video tutorials). Finally, the proposed knowledge modeling can be used to centralize applications such as those mentioned in the related work. As future work, we plan to develop different approaches to condense knowledge using this model. In addition, we plan to measure usability with software development groups to capture their perception of the tool.
References 1. Colomo-Palacios, R., Fernandes, E., Soto-Acosta, P., Larrucea, X.: A case analysis of enabling continuous software deployment through knowledge management. Int. J. Inf. Manage. 40, 186–189 (2018). https://doi.org/10.1016/j.ijinfomgt.2017.11.005
2. Levy, M., Hazzan, O.: Knowledge management in practice: the case of agile software development. In: Proceedings of the 2009 ICSE Workshop on Cooperative and Human Aspects on Software Engineering, CHASE 2009, pp. 60–65 (2009). https://doi.org/10.1109/CHASE. 2009.5071412 3. Schneider, K.: Experience and Knowledge Management in Software Engineering. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-95880-2 4. Bhat, M., Shumaiev, K., Koch, K., Hohenstein, U., Biesdorf, A., Matthes, F.: An expert recommendation system for design decision making: who should be involved in making a design decision? In: Proceedings – 2018 IEEE 15th International Conference on Software Architecture, ICSA 2018, pp. 85–94. Institute of Electrical and Electronics Engineers Inc. (2018). https://doi.org/10.1109/ICSA.2018.00018 5. Viana, D., Rabelo, J., Conte, T., Vieira, A., Barroso, E., Dib, M.: A qualitative study about the life cycle of lessons learned. In: 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE 2013 – Proceedings, pp. 73–76 (2013). https://doi.org/10.1109/CHASE.2013.6614734 6. Moreno, L., Bavota, G., Di Penta, M., Oliveto, R., Marcus, A.: How can I use this method? In: International Conference on Software Engineering, pp. 880–890 (2015). https://doi.org/ 10.1109/ICSE.2015.98 7. Ericsson, K.A., Prietula, M.J., Cokely, E.T.: The making of an expert. Harv. Bus. Rev., 115– 121 (2007). https://doi.org/10.1201/b17434 8. Sonnentag, S., Niessen, C., Volmer, J.: Expertise in software design. In: Ericsson, K.A., Charness, N., Feltovich, P., Hoffman, R.R. (eds.) Cambridge Handbook of Expertise and Expert Performance, pp. 373–387. Cambridge University Press, Cambridge (2006) 9. Clerc, V., Lago, P., Van Vliet, H.: Architectural knowledge management practices in agile global software development. In: Proceedings – 2011 6th IEEE International Conference on Global Software Engineering Workshops, ICGSE Workshops 2011, pp. 1–8 (2011). https:// doi.org/10.1109/ICGSE-W.2011.17 10. Borrego, G., Moran, A.L., Palacio, R., Rodriguez, O.M.: Understanding architectural knowledge sharing in AGSD teams: an empirical study. In: 2016 IEEE 11th International Conference on Global Software Engineering (ICGSE), pp. 109–118. IEEE (2016). https://doi.org/ 10.1109/ICGSE.2016.29 11. Jedlitschka, A., Ciolkowski, M., Denger, C., Freimut, B., Schlichting, A.: Relevant information sources for successful technology transfer: a survey using inspections as an example. In: First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), pp. 31–40. IEEE (2007). https://doi.org/10.1109/ESEM.2007.60 12. Sharif, K.Y., Buckley, J.: Observation of open source programmers’ information seeking. In: IEEE International Conference on Program Comprehension, pp. 307–308 (2009). https://doi. org/10.1109/ICPC.2009.5090071 13. Rupakheti, C.R., Hou, D.: Satisfying programmers’ information needs in API-based programming. In: IEEE International Conference on Program Comprehension, pp. 250–253 (2011). https://doi.org/10.1109/ICPC.2011.16 14. Baltes, S., Diehl, S.: Towards a theory of software development expertise. In: ESEC/FSE 2018 – Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 187–200. Association for Computing Machinery, Inc., New York (2018). https://doi.org/10.1145/323 6024.3236061 15. 
Capilla, R., Jansen, A., Tang, A., Avgeriou, P., Babar, M.A.: 10 years of software architecture knowledge management: practice and future. J. Syst. Softw. 116, 191–205 (2016). https://doi. org/10.1016/j.jss.2015.08.054
16. Tang, A., Avgeriou, P., Jansen, A., Capilla, R., Ali Babar, M.: A comparative study of architecture knowledge management tools. J. Syst. Softw. 83, 352–370 (2010). https://doi.org/10. 1016/j.jss.2009.08.032 17. Borrego, G., Morán, A.L., Palacio, R.R., Vizcaíno, A., García, F.O.: Towards a reduction in architectural knowledge vaporization during agile global software development. Inf. Softw. Technol. (2019). https://doi.org/10.1016/J.INFSOF.2019.04.008 18. Razzak, M.A., Ahmed, R.: Knowledge sharing in distributed agile projects: techniques, strategies and challenges. In: 2014 Federated Conference on Computer Science and Information Systems, FedCSIS 2014, pp. 1431–1440. Institute of Electrical and Electronics Engineers Inc. (2014). https://doi.org/10.15439/2014F280 19. Manifesto for Agile Software Development. https://agilemanifesto.org/. Accessed 30 Apr 2019 20. Nonaka, I.: The knowledge-creating company (2007). https://doi.org/10.1016/b978-0-75067009-8.50016-1 21. McMillan, C., Grechanik, M., Poshyvanyk, D., Fu, C., Xie, Q.: Exemplar: a source code search engine for finding highly relevant applications. IEEE Trans. Softw. Eng. 38, 1069–1087 (2012). https://doi.org/10.1109/TSE.2011.84 22. McMillan, C., Poshyvanyk, D., Grechanik, M., Xie, Q., Fu, C.: Portfolio: searching for relevant functions and their usages in millions of lines of code. ACM Trans. Softw. Eng. Methodol. 22, 1–30 (2013). https://doi.org/10.1145/2522920.2522930 23. Ponzanelli, L., Bavota, G., Di Penta, M., Oliveto, R., Lanza, M.: Mining stackoverflow to turn the IDE into a self-confident programming Prompter. In: 11th Working Conference on Mining Software Repositories, MSR 2014 – Proceedings, pp. 102–111. Association for Computing Machinery, Inc., New York (2014). https://doi.org/10.1145/2597073.2597077 24. Brandt, J., Dontcheva, M., Weskamp, M., Klemmer, S.R.: Example-centric programming. In: Proceedings of the 28th International Conference on Human Factors in Computing Systems – CHI 2010, p. 513. ACM Press, New York (2010). https://doi.org/10.1145/1753326.1753402 25. Borrego, G., Salazar-Lugo, G., Parra, M., Palacio, R.: Slack’s knowledge classification mechanism for architectural knowledge condensation. In: International Conference on Computational Science and Computational Intelligence, Las Vegas, Nevada, USA, pp. 1121–1126 (2019). https://doi.org/10.1109/CSCI49370.2019.00212 26. Bonilla-Morales, B., Crespo, S., Clunie, C.: Reuse of Use Cases Diagrams: An Approach Based on Ontologies and Semantic Web Technologies (2012) 27. Matter, D., Kuhn, A., Nierstrasz, O.: Assigning bug reports using a vocabulary-based expertise model of developers. In: Proceedings of the 2009 6th IEEE International Working Conference on Mining Software Repositories, MSR 2009, pp. 131–140 (2009). https://doi.org/10.1109/ MSR.2009.5069491 28. Kagdi, H., Hammad, M., Maletic, J.I.: Who can help me with this source code change? In: IEEE International Conference on Software Maintenance, ICSM, pp. 157–166 (2008). https:// doi.org/10.1109/ICSM.2008.4658064 29. Minto, S., Murphy, G.C.: Recommending emergent teams. In: Proceedings – ICSE 2007 Workshops: Fourth International Workshop on Mining Software Repositories, MSR 2007 (2007). https://doi.org/10.1109/MSR.2007.27 30. Martínez-García, J.R., Castillo-Barrera, F.-E., Palacio, R.R., Borrego, G., Cuevas-Tello, J.C.: Ontology for knowledge condensation to support expertise location in the code phase during software development process. IET Softw. 14, 234–241 (2020). https://doi.org/10.1049/ietsen.2019.0272
Software Systems, Applications and Tools
Building Microservices for Scalability and Availability: Step by Step, from Beginning to End Víctor Saquicela(B), Geovanny Campoverde, Johnny Avila, and Maria Eugenia Fajardo University of Cuenca, Av. 12 de Abril, Cuenca, Ecuador {victor.saquicela,geovanny.campoverde,johnny.avilam, mariaeugenia.fajardo}@ucuenca.edu.ec
Abstract. Developing applications based on microservices is gaining traction over monolithic applications. Similarly to REST-based applications, their architecture may provide benefits in tasks related to their development and deployment. In this paper, we present an approach for the development and deployment of applications based on microservices using the following resources: a microservices technology software architecture, a continuous integration framework, and an environment for the deployment of microservices with high scalability and availability.
Keywords: Microservices · Continuous integration · Architecture styles · Scalability
1 Introduction
In recent years, with the arrival of new requirements, technologies, paradigms, methodologies, and Web 2.0 applications in the software industry, and due to some of the limitations of monolithic applications based on SOAP services and traditional Representational State Transfer (REST) services, microservices based on REST have increased their presence on the Web, mainly due to their relative simplicity and their natural suitability for the Web. However, using microservices still requires much human intervention, since the majority of their software components work autonomously and contain lists of the available configurations. This makes the microservices-based software development process difficult, affecting the efficiency of application development. Traditionally, monolithic applications have focused on defining vertical architectures, normally applied to SOAP and REST services and their corresponding middleware. More recently, these approaches have started to be adapted into more lightweight approaches for the development of applications based on microservices [2,5,6,17].
In this paper, the challenge of automating the application development process using microservices is addressed by: (1) defining a microservices technology architecture that allows standardization in the use of latest-generation technologies, (2) defining a continuous integration technologies framework that supports the development of microservices, and (3) defining an environment for the deployment of microservices, allowing high scalability and availability of applications. The main contribution of our work is the partial automation of the process of developing applications using microservices from start to finish. The remainder of this paper is structured as follows: we first introduce some background and related work in the context of the development of applications based on microservices. Then, we describe our approach for the microservice architecture, the continuous integration framework, and the environment for deploying microservices. Finally, we present some conclusions and identify future lines of work.
2 Background and Related Works
In this section, we present some background information related to the Service Oriented Architecture (SOA) and, specifically, to microservices. We introduce this topic by describing the current state of the art and identifying its limitations. A Web service is a method of communication between two electronic devices over the web (Internet). The W3C defines a Web service as a software system designed to support interoperability in machine-to-machine interaction over a network. It comprises an interface described in a machine-processable format. Other systems interact with the Web service in a manner prescribed by its description using messages, typically transmitted using HTTP with an XML serialization in conjunction with other Web-related standards1. Essentially, a Web service is a modular, self-describing, and self-contained software application that is discoverable and accessible through standard interfaces over the network [4,20]. Web service technology allows for uniform access via Web standards to software components residing on various platforms and written in different programming languages. From a technological point of view, the community distinguishes two types of Web services: classical Web services based on WSDL/SOAP (big Web services, WS-*) and Web APIs (RESTful services). The first have defined a stack of standards, and the second are characterized by simplicity. SOAP relies on a comprehensive stack of technology standards. SOAP plays a major role in the interoperability within and among enterprises, and serves as a basic element for the rapid development of low-cost and easy-to-compose distributed applications in heterogeneous environments [13]. APIs are characterized by their relative simplicity and their suitability for the Web. Web APIs, according to the REST paradigm [8], are commonly referred to as RESTful services. RESTful services are centered around resources, which are interconnected
http://en.wikipedia.org/wiki/Webservice.
by hyperlinks and grouped into collections, whose retrieval and manipulation is enabled through a fixed set of operations commonly implemented using HTTP. Moreover, RESTful services are lighter in their technological stack. Representational State Transfer (REST) is an architecture style for applications within distributed environments and applies especially to Web services. The creation of the adjective RESTful provided a simple way to express whether something works according to the principles of REST [8]. REST is an architecture style rather than a concrete architecture. RESTful APIs are characterized by resource-representation decoupling, so that resource content can be accessed via different formats. The REST architectural style proposes a uniform interface that, if applied to a Web service, induces desirable properties, such as performance, scalability, and modifiability, enabling Web services that facilitate working on the Web. In the REST architectural style, data and functionality are considered resources. These resources are accessed using Uniform Resource Identifiers (URI) and exploited using a set of simple, well-defined operations. The concept of microservices appears as a natural evolution of REST services within the Web. A microservice is a small piece of code that can be deployed, executed, tested, and scaled independently [19]. The goal of a microservice is to abstract the whole functionality of a system component in a single application, making it easier to maintain and integrate with the remaining components. Like any other application, depending on the system requirements and the development team's knowledge, microservices can be developed in a wide range of programming languages and use a variety of technologies to complete the tasks for which they were created. Regarding microservices architecture, the authors of [5] apply a systematic mapping study methodology to identify, classify, and evaluate the current state of the art on architecting microservices. This study allows classifying, comparing, and evaluating architectural solutions, methods, and techniques specific to microservices. Traditionally, monolithic systems have been developed aiming to group the functionality and its services in a single code base. Due to the simplicity of their structure, developing monolithic applications is usually less expensive than the alternatives. However, an application that concentrates all its functionality is not necessarily better, especially if it tends to grow in complexity, users, developers, and payload. Monolithic applications have enormous disadvantages compared to the use of microservices. On one hand, the implementation of new functionalities and maintainability can be as complex as extending the system. A single change in the code may imply that many developers have to intervene to analyze the impact of the change and give their approval [3]. On the other hand, the use of resources can be oversized, since the entire system is seen as a single module, thus wasting resources on components that do not necessarily require them, leaving aside components that need large amounts of resources to function properly [6]. For this reason, migrating to a microservices architecture is viable since the system is easily upgradeable and modifiable, because it is composed
of small software components that work independently. Moreover, the facility to add resources to the services according to the needs that each one presents allows achieving better performance in both the functionality of the system and the flow of information. The continuous integration process is a highly accepted practice within the software development industry. Although there is no established framework to serve as a guide, one can find a wide variety of useful implementations that can be adapted according to the needs of each software development company. As explained in [17], the use of these practices allows teams to make continuous deliveries of products in short periods, being more productive and effective. As indicated in [21], the ease of delivering functionalities and changes is one of the biggest advantages of applying a continuous integration process. For instance, if there is an error in a recent deployment or change, the development team receives feedback immediately so that the necessary corrective measures can be taken to update the change and upload it again to the application server. Additionally, it is important to indicate that this process is of great help when using agile frameworks such as Scrum2, since it allows product deliveries to be iterative and immediate, making it possible for stakeholders to observe constant progress of the project and for any change to be easily incorporated [23]. The success of a microservices architecture corresponds directly to the adequate coordination and joint work of all the teams involved in the development, as well as in quality assurance and operations. Thus, it is necessary to use a set of good practices and recommendations that allow working with agile development methodologies, which make it possible to quickly deliver functional software components. This concept is known as DevOps, where the terms development and operations are combined to generate a much more flexible and robust setting where all teams collaborate properly, and everything is perfectly orchestrated and automated to convey a system from development to production deployment [7,24]. Another important feature of DevOps relies on the proper use of tools that allow controlling and automating each of the stages of the software development life cycle, which in turn allows generating an own framework where agile development methodologies are included within a continuous integration process. DevOps emerges as a movement to improve communication, collaboration, and integration between the development and IT operations teams [22]. Automation is a crucial concept for DevOps success, since every process in the software development cycle must be automated: application building from the code on the repository, automated tests, automated integration, and automated deployment on each environment from development to production. From here, it is clear that continuous integration and continuous delivery are techniques that enable DevOps on any development team. Continuous integration is a common practice in the software development industry. In [16], a continuous integration process is proposed using Jenkins, with a master-slave configuration within a real-life scenario for automating test execution
https://www.scrum.org/.
and releasing code to production environments on multiple sites and multiple platforms. In [15], Jenkins is used for creating pipelines to completely automate the continuous integration and continuous delivery process for High-Performance Computing (HPC) software. Here, every commit is automatically transformed into releasable software, ensuring that all of them pass through the same validation and building process and eliminating possible errors in the cycle when it is done manually. In [1], the authors approach how to take advantage of Jenkins' flexibility and plugins to evolve from pure continuous integration to continuous delivery. Here, Jenkins is used as an orchestrator between several tools that help in CI/CD, like Artifactory, Chef, Puppet, SonarQube, and others. Finally, in [12] a pipeline for CI/CD with Docker is presented. A continuous integration process produces Docker images that pass through a validation process and, if there are no errors, the same images are deployed by continuous delivery tools. In this work, an efficient work model is presented, which starts from the creation of a microservice within a continuous integration process until it becomes part of a scalable and robust architecture for highly available environments. Additionally, we describe the use of open source technologies at each stage and show how they help in the automation of the entire process. Finally, it is expected that this document will be taken as a reference guide for software projects that wish to implement innovative technologies that allow the horizontal growth of their systems due to the versatility of the use of microservices.
3 Microservice Software Architecture
We present an architecture approach for developing scalable, maintainable, and robust microservices based on the Spring Boot framework and its compatible libraries, as shown in Fig. 1, intended to become an ordered guide for the development of a microservice. Observe in the figure that the architecture is divided into layers, from the security layer to the data connection layer; all these layers will be briefly described below. In this work, we assume that there are functional and non-functional requirements acquired at some earlier stage in the software development process. Based on these requirements, the microservice will be formed of all or some of the components described in the figure.
3.1 Microservices with Spring Boot
Spring Boot3 is a software development framework used to create microservices-based applications in Java. It also helps developers to build, deploy, and run standalone applications with minimal effort and very simple configurations. Spring Boot brings many benefits, such as autoconfiguration and a large number of compatible libraries and starter dependencies that ease the software development cycle. Therefore, Spring Boot was selected as the standard for the development of applications based on microservices by the development team of the University of Cuenca in Ecuador.
https://spring.io/projects/spring-boot.
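As a minimal sketch of such a standalone application (class and package names are our own and not taken from the system described here):

```java
package ec.edu.ucuenca.demo; // hypothetical package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Enables component scanning and Spring Boot's autoconfiguration,
// producing a standalone application with an embedded web server.
@SpringBootApplication
public class MicroserviceApplication {
    public static void main(String[] args) {
        SpringApplication.run(MicroserviceApplication.class, args);
    }
}
```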
Fig. 1. Proposed microservice architecture
3.2 RESTful Services
RESTful services are widely used in the software industry due to their simplicity. REST adopts the precepts of the web, such as its architecture and its HTTP protocol, which provides functionalities for communication between system components, like well-defined actions (GET, PUT, POST, DELETE), package forwarding, caching, and encryption and security, each of them important for building fast, robust, scalable, and secure service-based applications. RESTful services can be implemented easily in Spring Boot by taking advantage of its auto-configurable characteristics and its starter dependencies. As a matter of fact, developers are able to build and publish RESTful services using just a few annotations in the code. Therefore, RESTful services are the architectural style used in this work for the development of applications based on microservices.
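A sketch of what these annotations look like follows; the resource path and the in-memory store are illustrative assumptions rather than code from the described system:

```java
import org.springframework.web.bind.annotation.*;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A few annotations are enough to expose a resource over HTTP.
@RestController
@RequestMapping("/api/projects") // hypothetical resource path
public class ProjectController {

    // In-memory store standing in for the business/persistence layers.
    private final List<String> projects = new CopyOnWriteArrayList<>();

    @GetMapping // GET /api/projects
    public List<String> findAll() {
        return projects;
    }

    @PostMapping // POST /api/projects
    public String create(@RequestBody String name) {
        projects.add(name);
        return name;
    }
}
```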
3.3 Business Logic
Business logic is the core of each application; this layer comprises all the functionalities, which are called when a REST service is invoked. The business logic takes the data received at the REST layer, proceeds to validate it, and sends the results to the persistence layer for inserting, updating, deleting, or querying objects.
3.4 Persistence
Data is the main component of any information system; every application needs to use data stores to persist, access, or analyze information. Data stores can be relational databases, NoSQL databases, and others [9].
Spring (the predecessor of Spring Boot) has created various projects that provide frameworks to help in the data interaction between applications and data stores of different technologies. Spring Data gathers all these previous projects into a single project whose goal is to simplify data access over the different data store technologies, in a few related Spring Boot starter dependencies. In the architecture proposed in this work, Spring Data JPA4 is used for object persistence; Spring Data JPA is the implementation of the JPA technology for Spring Data. JPA, widely documented in [11], is a standard for data persistence implemented by the most important data providers or ORMs, particularly Hibernate5, which facilitates data persistence by mapping objects to database relations (a mapped object is known as an entity). To achieve this, JPA uses well-defined annotations over the code to define entities, fields, and relations in a database. Spring Data JPA can be easily included in Spring Boot projects just by adding the Spring starter data JPA dependency to the project and annotating the code to define entities. Spring Data also provides repository classes to interact between entities and the database.
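A compact sketch of an entity and its Spring Data repository is shown below; the entity name and fields are illustrative assumptions:

```java
import javax.persistence.*;
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.List;

// JPA maps this class to a database relation.
@Entity
public class Project {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;     // hypothetical fields
    private String platform;

    // getters and setters omitted for brevity
}

// Spring Data JPA generates the implementation at runtime,
// including queries derived from the method name.
interface ProjectRepository extends JpaRepository<Project, Long> {
    List<Project> findByPlatform(String platform);
}
```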
3.5 Security
Security is a crucial aspect of microservice-based applications. Compared to monolithic applications, security tends to become complex in microservices as the application grows and uses more components and services. Microservices security must be approached in two ways [18]: the first one is directly related to the number of microservices and the network monitoring that must be implemented in every service; the second one is related to internal communication between the different components of an information system, and how a vulnerability in just one component could compromise the security of the entire system. To analyze the first point, it is necessary to understand that although microservices break the problem down into small, easy-to-maintain parts, this granulates security and increases complexity as new components appear in the system. In large systems, security control could become a very complex task if a robust set of monitoring and control tools is not implemented. Securing a large network of microservices involves an analysis of each packet transmitted in each communication interaction, looking for any anomalies in data transmission. Additionally, the security of every microservice is also important to analyze, since the presence of a vulnerability could introduce incorrect information and compromise the integrity of the data, not only in that microservice but in the whole system. This happens because if a component sends wrong data to another, the second component will introduce wrong data too and generate a chain reaction that damages the data of the entire information system. To deal with security in microservices, as in the architecture shown in Fig. 1, JWT (JSON Web Token)6 is selected as the access control, security, and authorization mechanism for the communication and information exchange between
https://spring.io/projects/spring-data-jpa. https://hibernate.org/. https://jwt.io/.
the different components in a system. JWT is an open standard used in secure data transmission; it sends information through an encoded JSON which has a header, a payload, and a signature created with a secret key. The encoded JSON is sent as a token on the header of the HTTP request; then, the microservice decodes the token and validates it using the same encryption algorithm and secret key that were used for the token creation. Finally, the microservice executes the request only if the token is correctly validated.
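The paper does not name a specific JWT implementation; as a hedged sketch, the following uses the jjwt library to perform the validation step just described:

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.security.Keys;
import java.security.Key;

public class JwtValidator {
    // The same secret key must be used for signing and for validation.
    private final Key key = Keys.hmacShaKeyFor(
            System.getenv("JWT_SECRET").getBytes()); // hypothetical env var

    /** Returns the claims if the token is valid; otherwise throws. */
    public Claims validate(String token) {
        try {
            return Jwts.parserBuilder()
                    .setSigningKey(key)
                    .build()
                    .parseClaimsJws(token) // verifies signature and expiry
                    .getBody();
        } catch (JwtException e) {
            throw new SecurityException("Invalid JWT, rejecting request", e);
        }
    }
}
```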
3.6 Indexing
Full-text search requires the implementation of complex algorithms called search engines. Fortunately, there are free and open source search engines that can be implemented over Hibernate in Spring Boot, like Apache Lucene7, Elasticsearch8, or Solr9. Full-text search is achieved in basically two steps: first, data is scanned and all the words are indexed in an index store; then, the search is executed over the indexes and not over the data. Full-text search does not only look for exact string matching; search engines can find words that sound similar, have similar writing, words that may be synonyms, or even that result from the conjugation of verbs. The engine then sorts the results by relevance and displays them to the end user. To use full-text search in Spring Boot, developers only have to import the correct dependencies, depending on the index technology they want to use, and annotate the entities and fields as indexed.
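As a sketch, the following uses Hibernate Search 5-style annotations to mark an entity as indexed; the exact annotation set depends on the search-engine version, which the paper does not specify, so treat this as an assumption:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

// @Indexed tells Hibernate Search to maintain a full-text index
// for this entity; @Field marks which properties are searchable.
@Entity
@Indexed
public class Artifact {
    @Id
    private Long id;

    @Field
    private String title;    // hypothetical searchable fields

    @Field
    private String keywords;
}
```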
3.7 Cache
Spring Boot brings the possibility of including a cache between the microservice and the data store; this cache is used for storing the results of a function execution in an intermediate memory. When a function with the annotation @Cacheable is executed, this technology stores the returned results and the parameters used for the execution; after that, if the function is called again with the same parameters, the microservice will return the results directly from its cache instead of executing the process. The cache increases the performance of the microservice when it is used in the right way. However, it may involve data integrity errors if it is used without prior analysis of the methods that will use the cache and of the data retention time in this memory.
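A minimal sketch of the annotation described above, assuming caching is enabled with @EnableCaching and using a hypothetical cache name:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ArtifactLookupService {

    // Results are stored per parameter value: a second call with the
    // same id is answered from the cache instead of re-executing.
    @Cacheable("artifacts") // hypothetical cache name
    public String findDescriptionById(Long id) {
        return expensiveDatabaseLookup(id);
    }

    private String expensiveDatabaseLookup(Long id) {
        // Placeholder for a real persistence-layer call.
        return "artifact-" + id;
    }
}
```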
3.8 Data Audit
A data audit is directly linked with security: every change in data must be stored in a history together with information about it, including the old and new values, the user
https://lucene.apache.org/. https://www.elastic.co/. https://lucene.apache.org/solr/.
who made the change, the time when the data was updated, the IP address of the computer where the change was made, and other relevant information. Spring Boot uses the Hibernate Envers10 technology and annotations to achieve data audits.
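In a sketch, enabling such an audit trail can be as small as one Envers annotation on an entity (assuming the Envers dependency is on the classpath; the entity itself is illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

// Envers creates a parallel audit table and records every
// insert, update, and delete together with a revision entry.
@Entity
@Audited
public class Developer {
    @Id
    private Long id;
    private String email;
}
```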
3.9 Connection Pool
The last layer in the architecture proposed in Fig. 1 is the connection pool. The connection pool is a technique for sharing database connections between multiple clients to achieve better performance in data access and storage. The application uses it because, when a request comes to the microservice, database connections have already been opened to serve that request, so the application does not spend time opening and closing connections with the database. In microservices, connection pools are mandatory due to the number of clients that could be requesting information. When a large number of clients try to open a connection to a database, it could result in a timeout error because the limit of permitted connections is exceeded. Connection pools deal with this problem by maintaining a constant number of open connections and coordinating the requests of the clients among them. HikariCP11 is widely used in Java applications due to its simple configuration and flexibility. HikariCP reuses the database connection properties used by Spring Boot; hence, it is enough to add certain additional parameters in the configuration file, and HikariCP will automatically manage the connections between the microservice and the database.
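For illustration, the pool can also be configured programmatically through HikariCP's own API; the JDBC URL, credentials, and pool size below are placeholders, not the actual deployment values:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolFactory {
    public static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/microservicedb"); // placeholder
        config.setUsername("app_user");                                  // placeholder
        config.setPassword(System.getenv("DB_PASSWORD"));                // placeholder
        // Keep a fixed number of connections open and reuse them.
        config.setMaximumPoolSize(10);
        return new HikariDataSource(config);
    }
}
```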
4 Continuous Integration Process
The development of microservices adapts perfectly to the continuous integration cycle, due to their facility to encapsulate specific functionalities in a single module. In Fig. 2, a continuous integration framework is presented, where all the components and tools utilized are open source and have been tailored to cover the full cycle from development to deployment of a microservice on an application server. As the figure shows, the continuous integration cycle involves several stages, ranging from the creation of the code by the development team, through versioning tools, integration and deployment, and implementation of the microservice on a server, to monitoring and reporting. In the figure, the software quality component is omitted, since there is currently a unit dedicated exclusively to the entire automatic testing and validation process.
4.1 Development Teams
Microservice development starts with cloning the initial archetype from an archetype repository to the developer machines. This initial code contains all
https://hibernate.org/orm/envers/. https://github.com/brettwooldridge/HikariCP.
Fig. 2. Continuous integration cycle
the necessary libraries so that the members of the development team can connect their databases, external information repositories, and libraries. Eclipse12, NetBeans13, or Spring Tools can be used to edit the code and run local tests, with Eclipse IDE being the most recommended due to its adequate customization for working with Spring Boot and Java. These IDEs provide the necessary tools to perform the coding, compilation, and testing of the developed microservices before they are sent to a test or production environment.
4.2 Version Control System
A fundamental part of the continuous integration process is the implementation of a version control system, which mainly serves to keep versions of the code in an external repository that can be accessed by all members of a development team. As explained in [10], a version control system allows maintaining full control of all the changes that have been made in the source code of the applications. Each action carried out in the code is recorded in such a way that the author, time, and date of the changes can be known exactly. The version control system implemented in the presented cycle is Git, using the GitLab tool14,
https://www.eclipse.org/. https://netbeans.org/. https://about.gitlab.com/.
which in turn acts as an information repository. Here, the code developed by the programmers is consolidated with all the changes and made available so that it can be uploaded to an application server.
4.3 Integration and Deployment
Once the code is available in GitLab, it must be compiled to generate a functional microservice. The Jenkins tool is used for this task and the ones that follow; it is in charge of automating much of the continuous integration process, as explained in [5]. Jenkins is the tool in charge of connecting to the GitLab repository, obtaining the code of the branch corresponding to the environment one wants to deploy, and generating a software component called an artifact. This component is created using Maven15, which is a tool to create compiled software packages in the Java language. This artifact is stored in its artifact repository as an image. Additionally, it is important to indicate that the libraries necessary for compilation are obtained from an artifact repository called JFrog Artifactory16, where the organization's own libraries are also stored.
4.4 Implementation
The next step in the continuous integration process is to create a container with the compiled image of the microservice using Docker. To do this, Jenkins downloads the image from the artifact repository, accesses the server where the container is going to be deployed using SSH, and runs a command to create a Java container with the code of the developed microservice. At the end of this process, the container is deployed on the server where the command was executed, and it will be available for use through the ports assigned by the server. The container, with the microservice installed internally, will start using its internal configurations. If we want to extract the configurations so that they live in an external repository, a container volume is necessary. The advantage of extracting the configuration is that it will be easily maintainable without the need to recompile the microservice, avoiding starting the continuous integration process again.
4.5 Reports
Reports are generated automatically by Jenkins and sent out to the operations and development teams. These reports contain the result of the entire continuous integration process, indicating whether there were any errors in the compilation or in the deployment. Report generation is very important, since it serves as feedback for both the development and the operations teams and helps improve the continuous integration process. The continuous integration process covers the phases from the beginning, in the creation
https://maven.apache.org/. https://jfrog.com/artifactory/.
of the microservice, to its deployment in any environment, including development, testing, pre-production, or post-production. Additionally, it is important to emphasize that each stage of the continuous integration process is accompanied by the use of free, flexible software tools that integrate easily to automate all processes. In this way, the development teams only upload their changes to the version control system, and these are reflected directly on an application server after the whole process has been completed.
5 Microservice Life Cycle
A microservice follows a continuous integration process to be deployed on an application server. This section explains how a microservice interacts in a real production environment, where it communicates with other microservices or external services like information repositories, databases, etc. To this end, an architecture based on microservices for high-availability applications has been implemented. In Fig. 3, the architecture model is presented; it corresponds to the successful implementation of the architecture at the University of Cuenca, where monolithic systems were abandoned in favor of microservices in a balanced and easily scalable environment. This has been possible due to the creation of a highly available environment where a load balancer, a discovery server, centralized indexing, centralized logs, and a centralized cache are all available, all of these using Docker containers within an OpenStack infrastructure.
5.1 Load Balancing
In the proposed architecture, a load balancer is implemented since, in a real application, there are thousands of requests that need to be handled efficiently and with a fast response. The load balancer takes all the incoming requests and sends them to different instances of the microservices, and the microservices use a second internal load balancer to send requests among themselves. Once a request has been sent from a web application, mobile application, or another service, a load balancer called Ribbon takes such a request and, using the round-robin method, sends it to an instance of the microservice. To determine which instance the request should be sent to, the load balancer uses a proxy server called Zuul17. This server contains the information provided by a discovery tool called Eureka18, which describes in detail all the microservices deployed with their respective instances. If the microservice needs information from another microservice, it uses a second load balancer with Zuul to send a request and obtain a response. This architecture does not use the same load balancer for external and internal requests, because it is necessary to have an isolated environment for the security of the internal communication of the different instances of the microservices.
https://github.com/Netflix/zuul. https://github.com/Netflix/eureka.
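With Spring Cloud Netflix, wiring services into this discovery and routing scheme is largely declarative. The following sketch is our assumption of how the pieces fit, not the university's actual code; in practice the two classes live in separate applications:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Each microservice registers itself with Eureka, so Zuul and
// Ribbon can discover its instances and balance requests.
@SpringBootApplication
@EnableEurekaClient
class ServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceApplication.class, args);
    }
}

// The edge gateway routes external traffic to registered services.
@SpringBootApplication
@EnableZuulProxy
class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
```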
Fig. 3. Microservice life cycle
5.2 Centralized Configuration Server
As we have explained in previous sections, microservices use files to keep all the configuration isolated from the code. Nonetheless, when we have many instances of the same microservice, the scenario changes due to the difficulty of maintaining as many files as there are instances. Fortunately, a centralized configuration server has been implemented to deal with this problem. This server, implemented with GitLab, holds the configuration files for all the deployed microservices. If a new instance of a microservice needs to be created, it takes the configuration file from the server and uses it to start. Thus, it is very easy to make any change to the configuration of a microservice, since by modifying the file corresponding to its configuration, all of its instances will inherit the new change while maintaining consistency.
5.3 Centralized Indexing
The indexes are implemented to improve responses to full-text queries. When we have multiple instances of a microservice, each instance may, at a certain point in time, have different indexes, because data insertions may occur at any of the instances, giving the end user the feeling that the system has some error or that the data is inconsistent. To solve this problem in a balanced microservices environment, a centralized index system using Elasticsearch, Hibernate, and Lucene has been implemented. These tools allow each inserted datum to be indexed in a
common space for all instances of a microservice, in such a way that the same response information is always available when the request goes to any instance.
5.4 Centralized Cache
Managing an independent cache in each microservice instance is just as inefficient as having indexes in each instance. To see this, note that if data is updated, only the cache of the instance where the request entered will be updated, which gives the feeling of inconsistency in the data. Eventually, all caches will have the same information, since queries can enter through any instance and, if the data is not found in the cache, the instance will access the database to obtain the information and leave the new data loaded in the cache. To deal with this problem, the use of a Redis19 cluster is proposed, which is a service created and optimized to centralize the cache. The implementation of this cache cluster guarantees that all the instances of a microservice will always have the same information available, thus ensuring that the user obtains correct data and, above all, an immediate response to any update and query of records.
5.5 Centralized Logs
The administration of logs is also very important in the management of computer systems, since it allows developers to know in detail each of the actions that have been carried out. In a microservices architecture, we once again face the problem of the multiple instances that may exist. Each instance generates its respective logs, which implies that if one wants to audit any microservice, one must access the logs of each instance to obtain the complete record of events. The solution to this problem is to implement a centralized log system, which allows the events of all the instances of a microservice to be registered in a single place. Thus, in the proposed architecture, the Graylog20 tool is used, which uses Elasticsearch as its search engine on a Mongo21 database that is responsible for storing all events. Graylog is able to recognize all the instances of the same microservice, organize their logs properly, group them, and store them in the database, in such a way that the people in charge of monitoring see a single event repository for each microservice.
6 Conclusions and Future Work
In this work, a complete cycle for the implementation of microservices in high-availability environments has been presented. The process begins with the creation of the microservice based on an archetype, which then becomes part of a continuous integration process where it circulates through various stages
https://redis.io/. https://www.graylog.org/. https://www.mongodb.com/.
until it reaches a production environment where it will be deployed in a highly available and easily scalable architecture. One of the main advantages of using a microservices architecture like the one that has been proposed, is that the systems can grow horizontally, incrementally, and quickly because the software deliveries are continuous. The future works that are considered for the process mentioned above are the following: the implementation of exhaustive automatic quality tests to measure the overhead, performance, availability, and scalability of the architecture; the process formalization, evaluation using operational metrics and monitoring in a determined period allow improving the described process. Acknowledgments. This work was carried out at the Direction of Information and Communication Technologies of the University of Cuenca, with the support of several people.
References
1. Armenise, V.: Continuous delivery with Jenkins: Jenkins solutions to implement continuous delivery. In: 2015 IEEE/ACM 3rd International Workshop on Release Engineering, pp. 24–27. IEEE (2015)
2. Balalaie, A., Heydarnoori, A., Jamshidi, P.: Microservices architecture enables DevOps: migration to a cloud-native architecture. IEEE Software 33(3), 42–52 (2016)
3. Bucchiarone, A., Dragoni, N., Dustdar, S., Larsen, S.T., Mazzara, M.: From monolithic to microservices: an experience report from the banking domain. IEEE Software 35(3), 50–55 (2018)
4. Curbera, F., Nagy, W., Weerawarana, S.: Web services: why and how. In: Workshop on Object-Oriented Web Services-OOPSLA, vol. 2001 (2001)
5. Di Francesco, P., Malavolta, I., Lago, P.: Research on architecting microservices: trends, focus, and potential for industrial adoption. In: 2017 IEEE International Conference on Software Architecture (ICSA), pp. 21–30. IEEE (2017)
6. Djogic, E., Ribic, S., Donko, D.: Monolithic to microservices redesign of event driven integration platform. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1411–1414. IEEE (2018)
7. Ebert, C., Gallardo, G., Hernantes, J., Serrano, N.: DevOps. IEEE Software 33(3), 94–100 (2016)
8. Fielding, R.T.: REST: architectural styles and the design of network-based software architectures. Doctoral dissertation, University of California (2000)
9. Gutierrez, F.: Pro Spring Boot. Springer (2016)
10. Hethey, J.M.: GitLab Repository Management. Packt Publishing Ltd. (2013)
11. Keith, M., Schincariol, M., Keith, J.: Pro JPA 2: Mastering the Java™ Persistence API. Apress (2011)
12. Mironov, O.: DevOps pipeline with Docker (2018)
13. Papazoglou, M.P., Traverso, P., Dustdar, S., Leymann, F.: Service-oriented computing: a research roadmap. Int. J. Cooperat. Inf. Syst. 17(2), 223–255 (2008)
14. Jiménez Quintana, J.Y.: Proposed environment for the continuous integration of performance tests
184
V. Saquicela et al.
15. Sampedro, Z., Holt, A., Hauser, T.: Continuous integration and delivery for HPC: using singularity and Jenkins. In: Proceedings of the Practice and Experience on Advanced Research Computing, pp. 1–6 (2018) 16. Seth, N., Khare, R.: ACI (automated continuous integration) using Jenkins: key for successful embedded software development. In: 2015 2nd International Conference on Recent Advances in Engineering & Computational Sciences (RAECS), pp. 1–6. IEEE (2015) 17. Shahin, M., Babar, M.A., Zhu, L.: Continuous integration, delivery and deployment: a systematic review on approaches, tools, challenges and practices. IEEE Access 5, 3909–3943 (2017) 18. Sun, Y., Nanda, S., Jaeger, T.: Security-as-a-service for microservices-based cloud applications. In: 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), pp. 50–57. IEEE (2015) 19. Th¨ ones, J.: Microservices. IEEE Software 32(1), 116 (2015) 20. Tsalgatidou, A., Pilioura, T.: An overview of standards and related technology in web services. Distrib. Parallel Databases 12(2–3), 135–162 (2002) 21. Vassallo, C., Palomba, F., Gall, H.C.: Continuous refactoring in CI: a preliminary study on the perceived advantages and barriers. In: 2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), pp. 564–568. IEEE (2018) 22. Waller, J., Ehmke, N.C., Hasselbring, W.: Including performance benchmarks into continuous integration to enable DevOps. ACM SIGSOFT Software Eng. Not. 40(2), 1–4 (2015) 23. Zhao, Y., Serebrenik, A., Zhou, Y., Filkov, V., Vasilescu, B.: The impact of continuous integration on other software development practices: a large-scale empirical study. In: 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 60–71. IEEE (2017) 24. Zhu, L., Bass, L., Champlin-Scharff, G.: DevOps and its practices. IEEE Software 33(3), 32–34 (2016)
Evolution of Naturalistic Programming: A Need

Lizbeth A. Hernández-González1(B), Ulises Juárez-Martínez2, and Luisa M. Alducin-Francisco2

1 Faculty of Statistics and Informatics, University of Veracruz, Xalapa, Mexico
[email protected]
2 Technological Institute of Orizaba, National Technological of Mexico, Orizaba, Mexico
[email protected], [email protected]
Abstract. For years, researchers and programmers have studied, proposed, and reflected on how to enhance the expressive power of programming languages. One of the most popular thoughts on this matter is that natural language can be used as a programming tool. In this work, we review the state of the art of naturalistic programming, whose goal is to achieve greater expressiveness in programming languages and a closer approach to the user domain. We review the only three naturalistic general-purpose languages that can generate executable code – Pegasus, Cal-4700, and SN – by comparing them against a conceptual model. We discuss the similarities and differences between the languages. Then, we introduce and discuss some examples of computer programs developed with these languages, showing their level of expressiveness. Finally, we reflect on the findings, which show the need for the paradigm to evolve by acquiring a software development method in order to be considered a formal proposal in Software Engineering.

Keywords: Naturalistic programming · Naturalistic paradigm · Natural language programming · Expressiveness · Software engineering methods
1 Introduction

For decades, programmers have been in search of greater expressiveness when programming. In this sense, programming languages have evolved chiefly toward a closer approach to the user domain in order to achieve greater expressiveness. However, there is still a gap between the problem domain and the solution domain that has not been properly addressed. This gap significantly contributes to customer dissatisfaction and high costs because of a lack of clarity in abstractions and information loss across the phases of the software development process. The naturalistic programming paradigm (NPP) seeks to incorporate elements of natural language into the software development process to reduce ambiguity and thus reach greater expressiveness – i.e. greater clarity when representing ideas. Naturalistic programming was first proposed in 2003 [1]. Nowadays, 17 years later, a conceptual model for naturalistic programming exists [2], along with several general-purpose programming languages supporting the NPP. Nevertheless, no case study reported in the literature
discusses or explores a formal, detailed process for implementing the NPP in software development. That said, naturalistic programming needs to reach greater maturity for the scientific community to fully take advantage of its benefits. This article discusses state-of-the-art initiatives in naturalistic programming, analyzing today's most representative general-purpose naturalistic languages to trace a trend line in the continuous evolution of the NPP. The remainder of this paper is structured as follows: In Sect. 2, we discuss the background of naturalistic programming. In Sect. 3, we discuss the only three state-of-the-art naturalistic languages – Pegasus, Cal-4700, and SN – and compare their distinctive features with respect to the conceptual model for naturalistic programming proposed in [2]. In Sect. 4, we introduce and discuss some examples of computer programs written with these three languages, to observe the level of expressiveness of each one. In Sect. 5, we discuss our findings and propose our reflection regarding the potential trends in the evolution of the NPP. Finally, our conclusions and remarks for future work are presented in Sect. 6 and Sect. 7, respectively.
2 Background

Programming paradigms have remarkably evolved through the history of computing. A programming paradigm can be seen as a particular style of programming. Each programming paradigm proposes its own form of representing user problems using language abstractions. In this sense, the Object-Oriented Paradigm (OOP) is perhaps one of the most common general-purpose paradigms. The OOP has contributed to a more natural decomposition of problems. As an evolving paradigm, OOP is currently an implementation method in which programs are organized as a cooperative collection of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes, united through inheritance relationships [3].

To complement the OOP, the Aspect-Oriented Paradigm (AOP) emerged in 1997 [4] as a way of representing transversal actions of a program that are difficult to represent with the OOP. Some examples of these actions are non-functional requirements of computing systems, such as security, concurrency, or network traffic. According to the researchers in [4], AOP supports programmers by separating components from aspects. In this sense, coordination between objects and aspects is performed by a weaver, thus increasing the modularization of both components and aspects. Depending on the generated events, reusable modules (e.g. aspects) are activated.

Despite novel initiatives, the gap between the problem domain and the solution domain still needs to be addressed, especially due to the lack of expressiveness in current programming initiatives. The search for greater expressiveness (“the state of showing what someone thinks or feels,” The Cambridge English Dictionary) in programming languages dates back to the 1970s. Pioneers Felleisen [5], Paterson and Hewitt [6], Chandra [7], and Hoare [8] studied the expressive power of programming languages from different perspectives. For instance, in [5], Felleisen developed a formal notation to compare expressiveness levels among different programming languages, considering the great number of affirmations found in the literature at that time. Eventually, works
such as those reported in [9] and [10] took Felleisen's work as a basis to propose their own contributions. Consequently, in the search for greater expressiveness, researchers have sought to respond to expressiveness challenges from different programming approaches. The most common of these approaches views natural human language as a tool for programming. According to Lieberman and Liu [11], the possibility of programming directly in natural language, without the need for formal programming languages, is an old dream in computer science. Nevertheless, given the high level of complexity of natural language, this dream has been difficult to fulfill, even though the literature reports great progress.

The NPP was first reported in 2003 [1] as a way to achieve greater levels of expressiveness when programming. This paradigm entails the use of natural language elements to develop programming languages that are reflective (they “know” themselves) and can generate more expressive and executable computer programs. Consequently, naturalistic programming has the potential to eliminate ambiguity, or at least minimize it as much as possible, hence increasing customer satisfaction and decreasing rework costs. Another important characteristic of natural language is its ability to refer to elements that have been previously or subsequently defined.

An interesting approach to naturalistic languages is discussed in [13] and [14]. The authors proposed a natural language command interpreter (NLCI) that accepts action commands in English and translates them into executable code using an ontology that models an API. Other naturalistic initiatives can be listed as follows:

• Macho [15] synthesizes executable Java programs using a combination of natural language, unit tests (one or more examples of correct input and output), and a large database of open-source Java code. The system also filters potential candidate programs; the best candidates are executed on each example in the unit test, and their output is compared with the reference.
• Metafor [16] is useful as a brainstorming tool, converting user stories into object-oriented Python code. Since it is not yet possible to convert arbitrary English descriptions into code, Metafor relies on an expressive subset of English.
• Kaimeng [17] is a quasi-natural language designed for programming at a higher level than traditional programming languages. The underlying premise of Kaimeng is that knowledge is always expressed in natural language. Kaimeng shares some similarities with the OOP: it relies on terms such as class, instance, and inherited knowledge, and it also keeps some control structures. Additionally, a sentence can correspond to a code fragment written in a traditional programming language, such as C.

Other programming initiatives have gone beyond what is reported in [15, 16], and [17]. In [18], the authors proposed a method for generating source code. They used as a basis descriptions from use cases following certain grammatical rules. Then, they created an XML intermediate file to subsequently generate the corresponding Java code. As can be observed, current programming initiatives are proposed under the OOP, thus creating source code that is often incomplete. The goal of the NPP is to move away from abstractions of other paradigms, instead using sentences with meaning in natural language in order to generate fully executable code. The programming initiatives discussed in this section demonstrate the importance of and interest in achieving
greater expressiveness in programming languages, which is at the core of the naturalistic programming philosophy. In this sense, Yin [17] states a key insight of the naturalistic paradigm: “Although free natural language programming is still hard, programming with an unlimited amount of words in natural form under restricted lexical and syntactic rules is practical.” If we view the concept of naturalistic as something “similar to what exists in nature” (The Cambridge English Dictionary), naturalistic programming is not about representing natural language in its entirety – since there would be too many variations – but rather about incorporating key elements of natural languages, such as reflectivity, into programming languages to enhance their expressive power. According to Moreno [19], reflectivity refers to the ability of a program to “know” itself, reason, and make decisions to ultimately adapt its structure and behavior. In the context of naturalistic programming, a program is said to be reflective if it can identify and refer to elements previously mentioned – e.g. “Claudia is a student. Enroll her in the programming course.” In this sense, the goal of the NPP is to have general-purpose languages that are expressive, reflective, and able to generate fully executable code.
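For contrast, consider a conventional object-oriented rendering of that same idea. The sketch below (in TypeScript, with hypothetical class and method names chosen only for illustration) shows how the anaphoric reference “her” must become an explicit variable reference in a mainstream language:

class Student {
  constructor(public name: string) {}
}

class Course {
  private roster: Student[] = [];
  constructor(public title: string) {}
  enroll(student: Student): void {
    this.roster.push(student);
  }
}

// "Claudia is a student. Enroll her in the programming course."
// The pronoun "her" has no direct counterpart: the programmer must
// introduce a named variable and pass it explicitly.
const claudia = new Student("Claudia");
const programming = new Course("Programming");
programming.enroll(claudia);

A reflective naturalistic language resolves the pronoun itself, which is precisely the kind of expressiveness the NPP pursues.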
3 State-of-the-Art

The NPP has slowly evolved since the moment it was first proposed. In [12], the authors conducted an in-depth study of naturalistic programming technologies, discussing current domain-specific and general-purpose languages. Since the NPP seeks to rely on reflective, executable general-purpose languages that use natural language elements, this section discusses just that. We review the only three representative naturalistic programming languages that include the aspects previously mentioned, emphasizing their naturalistic foundations, as well as a conceptual model for naturalistic programming that strengthens the purpose of the NPP with solid reference elements.

3.1 Pegasus

Pegasus is the first naturalistic language reported in the literature [20]. According to its developers, the main disadvantage of programming paradigms other than the NPP is that developers must translate their program ideas into programming language structures, while the most natural way of writing computer programs would be to use the programmer's native language without having to learn new programming languages over and over. In Pegasus, the task of translating from one language to another, or from one programming language to another, is handled by its translator components [21]. The goal is that programmers can express their program ideas as everyday speech. The smallest unit of processing in Pegasus is the “idea”, which is bound to human perceptions. This concept is formalized in [20] to make it accessible to a computer. To this end, the researchers agreed to use both natural language and formal notation to avoid misinterpretations. In [20], the authors define the concept of naturalistic programming as “writing computer programs with the help of natural language.” Taking into account the researchers' previous reflections, in [21] they listed the advantages of the NPP as follows:
• The NPP releases programmers from having to learn new programming languages over and over.
• The NPP improves documentation. Developers can create programs in their native language, which in turn can be translated into other languages, thus reducing ambiguity and misunderstandings.
• Computer programs need not be written multiple times in different programming languages. This means that under the NPP, the same algorithm need not be implemented over and over in order to be available in contemporary programs.
• The NPP flattens the learning curve of programming languages.
• The NPP allows programmers to focus more on solving a particular program-related problem, rather than solving abstraction issues or technical problems.
• The NPP offers more opportunities for code reuse.
• Pegasus could even ask users for clarification about the context.

3.2 A Conceptual Model for Naturalistic Programming

A key contribution to the NPP is its conceptual model. According to the model developed in [2], a programming paradigm must be generic enough to identify domain abstractions, while simultaneously describing enough expressive elements to support general-purpose languages. The conceptual model for naturalistic programming contains the elements required for developing general-purpose languages, considering the following elements (definitions from The Oxford English Dictionary and The Cambridge English Dictionary):

• Endophora. An indirect reference defined in the same text, divided into two classifications:
  • Anaphora. Indirect reference whose referent has been previously defined in the same text: “Take the key, use it in the lock.”
  • Cataphora. Indirect reference whose referent is subsequently defined in the same text: “After taking it, use the key in the lock.”
• Phrase. A group of words that is part of, rather than the whole of, a sentence. Imperative statements are the most common phrases; they lack a grammatical subject, yet they do have linguistic meaning.

The basic elements of the conceptual model for naturalistic programming are as follows [2]:

1. A noun as a base abstraction that is either singular or plural.
2. An adjective as a complement to the noun.
3. A verb as an action undertaken by a noun.
4. A circumstance as a determinant that responds to events.
5. A phrase used to define instructions with a complexity beyond that of the subject and the predicate.
6. The model must enable the definition of instructions in terms of previously described elements (anaphoras).
7. As types are defined explicitly and statically, they are used in the construction of instructions comprising phrases.
8. The conceptual model is implemented via a textual language that offers a level of expressiveness similar to a natural language, but which is formalized to avoid ambiguity.

The conceptual model contributes to reducing the gap between the problem and solution domains.

3.3 SN

As a proof of concept of the conceptual model, the authors in [2] propose the SN (Sicut Naturali) naturalistic language prototype. SN can handle indirect references and temporality. Moreover, it is formal enough to reduce ambiguity by describing abstractions using a subset of English. SN emerged from the need for a general-purpose naturalistic programming language that is highly expressive and easy to understand by both developers and programmers. Moreover, SN intends to translate program ideas without losing or distorting information. SN includes the following elements [2], derived from the conceptual model:

1. Nouns (singular and plural).
2. Adjectives to extract noun features.
3. Attributes to describe the elements that make up a noun.
4. Verbs to define actions undertaken by a noun.
5. Circumstances to establish the conditions under which nouns, adjectives, and verbs react.
6. Derived attributes, whose value depends on either other attributes or verbs.
7. Noun phrases, whose core is the noun.
8. Sentences.
9. A naturalistic iterator. SN allows developers to work with reflective instructions (anaphoric or cataphoric references) to carry out iterations.
10. Naturalistic decisions.

3.4 Cal-4700

The Cal-4700 [22] initiative, reported in the grey literature, is a proposal for plain-English programming that seeks to release developers from the burden of using a formal programming language by allowing them to use natural language, similar to writing pseudo-code, thus skipping the translation step. According to its developers, a father-and-son team, Cal-4700 is a plain English compiler that can successfully and positively respond to the following questions [22]:

1. Is it easier to program when you do not have to transform your natural language thoughts into an alternative syntax?
2. Can natural languages be parsed in a relatively “sloppy” manner, in the same way as humans apparently parse them, and still ensure a stable enough environment for productive programming?
3. Can low-level programs, such as compilers, be written conveniently and efficiently in high-level languages, such as English?

To respond to these questions, Cal-4700 works as a parser, similar to the parsing centers of the human brain.

3.5 Comparison of Naturalistic Languages

Table 1 allows for comparison of the three general-purpose naturalistic languages previously discussed – Pegasus, Cal-4700, and SN – since they are the only three languages that support the NPP, considering that they have natural language elements, they are reflective (i.e. they can identify previously mentioned elements and refer to them), and they can generate fully executable code using phrases written in natural language without using abstractions from other programming paradigms.

Table 1. Comparison of naturalistic languages.

Model Elements                          | Pegasus         | SN              | Cal-4700
Singular and plural nouns               | x (Implicit)    | x               | x
Adjective                               | x (Implicit)    | x               | x
Verb                                    | x               | x               | x
Circumstance (events)                   | x               | x               | x
Phrase (a)                              | x               | x               | x
Anaphoric reference                     | x               | x               | x
Types defined explicitly and statically | –               | x               | x
Naturalistic assignments                | x               | x               | x
Attributes                              | x               | x               | x
Sentences                               | x               | x               | –
Naturalistic iterator/loops             | x               | x               | x
Naturalistic decisions (conditionals)   | x               | x               | x
Cataphoric reference (Optional)         | x (Using loops) | x (Using loops) | x
External files for execution            | x               | –               | x
Other elements                          |                 |                 |

(a) Instructions with complexity greater than a subject + verb phrase.
The comparison is based on the naturalistic model proposed in [2] and the characteristics or properties that each language defines. We tested the functionality of SN and Cal-4700 by actually executing them using the corresponding documentation and
executable files. We gathered information on SN execution from the repositories of the Technological Institute of Orizaba. As for Cal-4700, we retrieved the necessary information for its implementation from the official Osmosian website (http://www.osmosian.com/) and [22]. Conversely, as regards Pegasus, we report its functionality simply by using the information and implementation examples exhibited and reviewed on the official Pegasus project website (http://www.pegasus-project.org/en/Welcome.html), since no compiler is available for testing.

Overall, all three languages can create common statements in natural language; that is, statements using nouns, verbs, adjectives, and attributes. Similarly, the three languages can identify phrases (most commonly imperative statements), make anaphoric references, execute program-like instructions (such as responding to events and defining conditional statements), and use naturalistic iterators. Pegasus does not allow users to define data types explicitly; they are probably predefined in its dictionary. The definition of both nouns and adjectives is not explicit, either, yet the examples introduced on the Pegasus official website clearly show their uses. Conversely, SN and Cal-4700 can define “things” using singular and plural statements; in Cal-4700, “thing” memory is managed by the programmer. On the other hand, Cal-4700 does not support strict grammatical structures, yet it does support imperative statements. All three languages implement cataphoric references, which they use within loops. Additionally, the fact that no external files are necessary for execution is a remarkable attribute of the SN language. Pegasus relies on a database for its execution, whereas Cal-4700 uses its own file, “the noodle”, when compiling a program. Finally, SN is installed like any other programming language and compiled like any other computer program. Pegasus allows users to choose the programming language of the generated code, whereas SN and Cal-4700 can generate fully executable code. Additional distinctive features of each language can be described as follows:

• Pegasus can translate ideas in many natural languages, such as English, German, Chinese, Spanish, Hindi, and Russian, among others, as well as in many programming languages, such as Java, C, C++, Ruby, Python, Haskell, and Prolog [21].
• Cal-4700 allows the use of flags, data type conversion routines, records, and “possessive phrases” (normally used to access fields in records). However, it does not support nested IFs or nested loops, objects, real numbers, or equations, since these concepts are not used in natural speech [22]. Even though the authors do not particularly encourage the use of comments in programming, they understand their importance. Hence, Cal-4700 allows users to make three types of comments to document programs: simple comments, remarks (to make a permanent remark in the code), and qualifiers (considered part of the program, affecting how the compiled code executes).
• According to the SN developers, the purpose of naturalistic languages is to allow users to write programs without making annotations. Hence, SN does not allow comments. SN is one of a kind, since it was built with respect to a grammar and without using either ontologies or databases. Additionally, SN implements non-functional requirements through circumstances and allows using embedded grammar to perform, for
instance, mathematical calculations. Furthermore, SN supports noun phrases (a noun phrase being a word or group of words functioning in a sentence as subject, object, or prepositional object) and derived attributes, whose value depends on other attributes or verbs [2].

The purpose of this section was to concisely stress what current naturalistic programming languages have to offer, while simultaneously reflecting on their potential improvement opportunities. As can be seen, each author has their own way of interpreting the implementation of a naturalistic language. Executing computer programs with SN and Cal-4700 allowed us to experience what the programmer's interaction with such languages would be like. As a remarkable observation, we concluded that the NPP is a promising paradigm, but it must further evolve to be considered a formal software development proposal.
4 Examples of Naturalistic Programming Languages

The goal of this section is to exemplify how Pegasus, SN, and Cal-4700 are used to write computer programs, which can contribute to increasing our understanding of the NPP. An example of a program written with Pegasus was retrieved from the official website [21], because there is no compiler available for testing. As for the SN and Cal-4700 examples, we developed them as part of an academic project.

4.1 Pegasus

The next listing shows the use of data queries in natural language, macro instructions, and database systems. As can be observed, the code is written as if someone were telling a story in natural speech. Additionally, we can notice the lack of an explicit definition of data types, nouns, adjectives, or verbs, as previously mentioned.

“Computer, take all employees who have been working for us longer than 20 years and whose last name starts with A to O and send them an invitation to the celebration of the jubilee.”

“Sending an invitation to a ceremony to someone (to a person): Create a new letter with the address of the person as addressees. Write “Dear (the first name of the person)…”.

4.2 Cal-4700

The listing included in Fig. 1 was written using the Cal-4700 editor, available for MS-Windows, and exemplifies the naturalistic representation of a primitive lottery. We let readers decide on the level of expressiveness of such instructions. As can be observed, the program uses a random number generator, a loop, an addition, and a conditional. Like Pegasus, Cal-4700 defines several types of nouns in its support file to help programmers. Additionally, Cal-4700 only compiles files without extensions and uses its own editor to invoke the compiler and generate a program. Files must be stored in a unique directory also containing “the noodle” file, which acts as the brain of Cal-4700. Finally, the name of the executable file is that of the folder storing the files.
Fig. 1. Naturalistic representation of a primitive lottery in Cal-4700.
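As a point of comparison for expressiveness, the sketch below implements in conventional TypeScript the same four ingredients attributed above to the program in Fig. 1 (a random number generator, a loop, an addition, and a conditional). The rules of the draw are invented for illustration and do not claim to mirror the exact semantics of the Cal-4700 listing.

// A primitive lottery: draw five random numbers, add them up,
// and decide whether the total wins.
const WINNING_THRESHOLD = 40; // illustrative rule, not from the paper

let total = 0;
for (let draw = 1; draw <= 5; draw++) {            // loop
  const ball = Math.floor(Math.random() * 10) + 1; // random number generator
  total += ball;                                   // addition
  console.log(`Draw ${draw}: ${ball}`);
}

if (total > WINNING_THRESHOLD) {                   // conditional
  console.log(`Total ${total}: you win!`);
} else {
  console.log(`Total ${total}: better luck next time.`);
}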
Using Cal-4700, programmers can construct console applications or screen applications (graphically). Its documentation was created with a “what-you-see-is-what-you-get” document editor. For further reference on Cal-4700, its Integrated Development Environment, and applications such as robotic creations, see [22].

4.3 SN

The listing included in Fig. 2 introduces example code written with SN using its basic editor, also available for MS-Windows. The code likewise implements a lottery simulator. It can be seen that there is no need to use run, start up, shut down, and close loop instructions, resulting in shorter code. SN generates binary code (bytecode), and the name of the main class (executable) is the name designated in the source code, after the main instruction. In Fig. 2, the executable class generated is Lottery.class.
Fig. 2. Naturalistic Lottery Simulator in SN.
Additionally, the listing in Fig. 3 calculates the average of three numbers. In this case, SN uses an embedded grammar to represent mathematical instructions plainly, thus respecting their inherent expressiveness. For further reference on SN and its functionality, the authors in [23] present a thorough discussion of a set of computer programs developed with this language.
Fig. 3. Embedded Grammars in SN.
5 Discussion

As noted in Sects. 3 and 4, Pegasus, Cal-4700, and SN offer more than what is necessary to be referred to as general-purpose languages. The code snippets presented in Sect. 4 were discussed chiefly in order to understand the intention of the NPP and to verify whether the features of the three programming languages contribute to the naturalistic programming philosophy. As a result of our review, a series of inquiries surfaced, such as: how flexible is Pegasus when defining nouns? Since we do not have access to the compiler, all theoretical assumptions about Pegasus remain just that. Other inquiries can be discussed as follows:

• Can all imperative statements be removed? As regards mathematical instructions, Knöll [21] suggests that they be written as is, since mathematics is a formal discipline whose representations are expressive enough by themselves. We agree with this statement; otherwise, it might seem like a setback in formal representation. SN uses embedded grammars and gives programmers the choice to either type the formal representation of mathematical operations or use their naturalistic counterparts. In cases such as mathematical operations, it is more convenient to keep imperative instructions.
• Which of the three languages is more declarative or expressive? All three languages differ somewhat in the way they were implemented. This is understandable given the variations that a language must allow to express ideas. Regarding the level of expressiveness, we focus on SN and Cal-4700, since we tested programs using the corresponding compilers. Cal-4700 is a particularly good option that includes advanced features such as the use of flags, data type conversion routines, records, and “possessive phrases”. Nevertheless, it would be desirable to avoid instructions such as run, start up, shut down, and allocate memory (for memory management), since they are not used in natural language. Similarly, it is important to consider the inclusion of mathematical representations in a formal way, respecting their expressiveness. As for SN, we think that being able to implement non-functional requirements by using circumstances is also an advanced feature; however, it lacks data type conversion routines, for example. It is important to note that SN is still a prototype. An essential consideration that would promote the use of these languages is the ability to build applications with access to databases. So, there are still issues to solve.
Another aspect worthy of analysis is the learning curve of a programming language. Under the NPP, the goal is to write computer programs more naturally and expressively; however, the degree of naturalness of a programming language in comparison to others must be determined by conducting a controlled case study, not just isolated examples. Currently, no case study has been reported where naturalistic languages are used within a software development process through detailed steps. Additionally, it is important to consider the size of a program and how it should be analyzed and structured in order to be constructed. Questions regarding the learnability, flexibility, and expressive power of naturalistic programming languages pave the way for further research on the NPP and its evolution trends.

According to researchers, the conceptual model for naturalistic programming, along with the implementation of all three languages – Pegasus, Cal-4700, and SN – in software development projects, will contribute to a better understanding of how the NPP can evolve and toward which trends. To this end, the first step would be to set specific analysis and design guidelines for the software development process under the NPP, such as noun identification and representation guidelines. Additionally, it is important to consider how the NPP can significantly improve the software development process. Having analysis and design methods is key to NPP acceptance. The underlying implications of such naturalistic methods in the software development process could be used to further strengthen computer programs in terms of agility, defect density, and development time, among others. As a result, moving from one phase to the other in the software development process could entail less information loss and fewer rework costs, yet this must be confirmed through tests. There is a latent need to verify and validate NPP functionality in order to weigh the advantages of naturalistic programming against its underlying challenges. In the end, a method would allow the NPP to be considered a serious software development paradigm; otherwise, its functionality is not validated by concrete data.
6 Conclusions

One of the goals of this paper is to highlight the state of the art of the NPP in order to promote its use as a general-purpose software development paradigm. The main advantages of naturalistic programming are attractive. We discussed the three general-purpose languages – Pegasus, Cal-4700, and SN – that follow the naturalistic programming philosophy; that is, they integrate natural language elements, are reflective, and generate more expressive and executable programs. After comparing their features against the conceptual model for naturalistic programming [2], we identified the achievements of each of these languages and the need to apply them more intensively to enhance the status of the NPP as a serious software development paradigm. However, to this end, the scientific community needs to rely on a formal, detailed method setting specific guidelines throughout the entire software development process. History demonstrates that once programming languages exist to support a programming paradigm, the next step is to define the necessary software design and analysis tasks. Without such progress, any advantage of a programming paradigm remains a theoretical assumption.
7 Future Work

Following our review of the naturalistic programming languages Pegasus, Cal-4700, and SN, we can highlight several lines of research supporting the evolution of the NPP, and thus its use in Software Engineering. The first of these lines is briefly mentioned in the discussion section above and addresses the possibility of a naturalistic method for software development. However, the efforts to create this method must not be superficial. For instance, the software analysis stage alone carries its own challenges:

• AOP and OOP tend to be orthogonal. In other words, the notation used in AOP to represent aspects in software modeling and implementation is directly linked to object-oriented languages. In the history of the evolution of programming languages, the NPP follows the AOP. Hence, it is important to identify those elements of notation that can be considered under a naturalistic perspective, regardless of the programming paradigm that originated them.
• It is important to overcome the challenge of identifying nouns, adjectives, attributes, and verbs.
• How can a method be found to establish anaphoric references in an orderly fashion?
• How can a method be found to properly write the corresponding sentences and phrases?
• SN supports nominal phrases and derived attributes. How can these features support the development of highly expressive computer programs?
Acknowledgments. We thank the Technological Institute of Orizaba and the University of Veracruz for the support granted to this research.
References

1. Lopes, C.V., Dourish, P., Lorenz, D.H., Lieberherr, K.: Beyond AOP: toward naturalistic programming. SIGPLAN Not. 38(12), 34–43 (2003)
2. Pulido-Prieto, O., Juarez-Martinez, U.: A model for naturalistic programming with implementation. Appl. Sci. 9(18), 3936 (2019)
3. Booch, G., Maksimchuk, R.A., Engle, M.W., Young, B., Conallen, J., Houston, K.: Object-Oriented Analysis and Design with Applications, 3rd edn. Addison-Wesley (2007)
4. Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Lopes, C., Loingtier, J., et al.: Aspect-oriented programming. In: Proceedings of the European Conference on Object-Oriented Programming (ECOOP), pp. 220–242. Springer-Verlag (1997)
5. Felleisen, M.: On the expressive power of programming languages. Sci. Comput. Program. 17(1), 35–75 (1991)
6. Paterson, M.S., Hewitt, C.E.: Comparative schematology. pp. 119–127. Association for Computing Machinery, New York (1970)
7. Chandra, A.K., Manna, Z.: On the power of programming features. Comput. Lang. 1(3), 219–232 (1975)
8. Hoare, C.A.R.: The varieties of programming languages. In: International Joint Conference on Theory and Practice of Software Development, Lecture Notes in Computer Science, pp. 1–18. Springer, Berlin (1989)
9. Felleisen, M., Findler, R.B., Flatt, M., Krishnamurthi, S., Barzilay, E., McCarthy, J., et al.: A programmable programming language. Commun. ACM 61(3), 62–71 (2018)
10. Davidson, J., Michaelson, G.: Expressiveness, meanings and machines. Computability 7(4), 367–394 (2018)
11. Lieberman, H., Liu, H.: Feasibility studies for programming in natural language. In: Human-Computer Interaction Series, pp. 45–473. Springer, Dordrecht (2006)
12. Pulido-Prieto, O., Juarez-Martinez, U.: A survey of naturalistic programming technologies. ACM Comput. Surv. 50(5), 70:1–70:35 (2017)
13. Landhäußer, M., Weigelt, S., Tichy, W.F.: NLCI: a natural language command interpreter. Autom. Softw. Eng. 24(4), 839–861 (2016)
14. Weigelt, S., Landhäußer, M., Blersch, M.: How to prepare an API for programming in natural language. In: Proceedings of the Posters and Demo Track of the 15th International Conference on Semantic Systems (SEMANTiCS 2019). CEUR Workshop Proceedings, vol. 2451, pp. 141–145. ACM, New York (2019)
15. Cozzie, A., Finnicum, M., King, S.: Macho: programming with man pages. In: Proceedings of the 13th USENIX Conference on Hot Topics in Operating Systems, p. 7 (2011)
16. Liu, H., Lieberman, H.: Metafor: visualizing stories as code. In: IUI 2005, pp. 305–307. Association for Computing Machinery, California (2005)
17. Yin, P.: Natural language programming based on knowledge. In: 2010 International Conference on Artificial Intelligence and Computational Intelligence, vol. 2, pp. 69–73 (2010)
18. Mefteh, M., Bouassida, N., Ben-Abdallah, H.: Towards naturalistic programming: mapping language-independent requirements to constrained language specifications. Sci. Comput. Program. 166, 89–119 (2018)
19. Moreno, F., Jiménez, J., Castañeda, S.: Una propuesta para la clasificación de la programación reflexiva orientada al desarrollo de sistemas autónomos. Ingeniería y Competitividad 16(2), 91–104 (2014)
20. Knöll, R., Mezini, M.: Pegasus: first steps toward a naturalistic programming language. In: Companion to the 21st ACM SIGPLAN Symposium on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA 2006, pp. 542–559. ACM, New York (2006)
21. Knöll, R.: Pegasus Natural Programming. http://www.pegasus-project.org/en/Welcome.html
22. Rzeppa, G., Rzeppa, D.: The Osmosian order of plain English programmers blog. https://osmosianplainenglishprogramming.blog
23. Alducin-Francisco, L.M., Juarez-Martinez, U., Pelaez-Camarena, S.G., Rodriguez-Mazahua, L., Abud-Figueroa, M.A., Pulido-Prieto, O.: Perspectives for software development using the naturalistic language SN. In: 2019 8th International Conference on Software Process Improvement (CIMPS), pp. 1–11 (2019)
Use of e-Health as an Accessibility and Management Strategy Within Health Centers in Ecuador Through the Implementation of a Progressive Web Application as a Tool for Technological Development and Innovation

Joel Rivera1, Freddy Tapia1(B), Diego Terán1, Hernán Aules2, and Sylvia Moncayo3

1 Department of Computer Science, Universidad de las Fuerzas Armadas - ESPE, Sangolquí, Ecuador
{jarivera11,fmtapia,dfterne}@espe.edu.ec.com
2 Department of Mathematics, Universidad Central del Ecuador, Quito, Ecuador
[email protected]
3 Institute of Languages, Universidad de las Fuerzas Armadas – ESPE, Sangolquí, Ecuador
[email protected]
Abstract. In Ecuador, there has been strong investment in the health field in recent years due to the increase in patients and medical specialties, together with a deficit in the handling and distribution of information caused by a lack of trained staff and little optimization and control of technological resources. This generates problems in the administration and assignment of medical appointments, with the waiting times for the activation of medical appointments being the main problem. Nowadays, health systems increasingly require more innovative processes that contribute to a better provision of services, with patients as the direct beneficiaries and all health center staff as the indirect ones. For that, it is necessary to have a centralized service that allows friendlier and faster access to the information. Taking these scenarios into account, Information and Communications Technology arises as a way to address all the previously described aspects; therefore, the purpose of this work is to design and develop a Progressive Web Application for mobile devices that provides easy access and usability to patients, while helping to reduce the waiting times generated in the activation of medical appointments.

Keywords: e-Health · Medical appointments · Progressive web apps · Cloud computing
1 Introduction

Because health services and areas have increased, and because in many cases the traditional medical appointment processes have been reconsidered and redefined to satisfy the demand for existing services, users often have no knowledge of the procedure to follow for the request or assignment of medical appointments, and the information offered in the health centers is erroneous or nonexistent [1]. These factors cause poor management in the generation of medical appointments and in most cases generate losses and reallocation of appointments, which can become critical when a patient requires rapid and specialized care, in addition to causing a great loss of resources for the health centers [2].

This proposal aims to optimize technological resources in order to improve the medical appointment problems that exist within health centers in Ecuador, and it will be applied to a case study with the aim of validating it. For this, we propose to design a Progressive Web App (PWA), which uses a real-time connection to the data of the medical center and the native features of the patient's mobile device. This technology will provide better usability and portability when patients interact with medical appointments. The main contributions of this paper include: (1) determining the concepts associated with this proposal regarding e-Health and prototype deployment models; (2) determining the necessary infrastructure to automate the processes used in the administration of medical appointments in the health center; (3) designing and developing a Progressive Web App that allows the integration of the administration processes, the medical appointment process, and the implemented mobility systems.

The remainder of this article is organized as follows. Section 2 discusses the related works, while Sect. 3 presents the experimental setup and methodology. Section 4 describes the data analysis, including an explanation of the mathematical model applied to validate the preliminary results. Section 5 details the conclusions. Finally, Sect. 6 presents the recommendations and future works.
2 Related Works

According to Cruz, using e-Health through mobile applications can help the health field improve, maintain, and evolve the models for administration and management processes in a manner sustained over time [3]. In Latin America, there are several challenges regarding the development of e-Health-based ecosystems; notably, these services have not been widely implemented due to persistent organizational, physical, and technological problems in the region [4]. In Ecuador, the implementation of e-Health has not currently been prioritized due to the lack of coordination and participation among the different medical centers and the public and private institutions that are part of the health ecosystem. It is evident that these weaknesses still persist in the country and that they require a strengthening of the information and communication technology systems, generating material for future academic research within the field of health [5]. According to López et al., a prototype was built based on services implemented in the cloud to schedule medical appointments and to reduce the level of absenteeism through telephone calls and text messages, thus demonstrating that the implementation of e-Health-based systems improves the quality of management and administration in
terms of health processes [6]. According to Terán et al., their work mentions some aspects that are necessary to improve the implementation of an e-Health solution; the authors specifically emphasize the need to manage medical appointments through a web application in order to reduce mobility times and absenteeism rates [7]. Based on this need, this project seeks to improve on and propose complementary scenarios to the initial proposal, which, based on a study, are considered feasible and adaptable to the Ecuadorian reality.
3 Experimental Setup and Methodology

3.1 Absenteeism

According to the Ministry of Public Health of Ecuador, there is a high rate of absenteeism in medical appointments scheduled at the Ecuadorian Institute of Social Security (IESS); according to the regulatory body, there is an average of 17.60% absenteeism in medical appointments [2]. It is necessary to mention that no more recent public information has been released by the regulatory body.

3.2 E-Health

E-Health has gradually evolved to the point of referring to all the functions involved in the structure of the health system; therefore, e-Health is not just an improvement to the patient's health or the exchange of files between health institutions [3]. E-Health tries to create the necessary reforms in the health system to obtain a general improvement worldwide [4]. E-Health proposes solutions and improvements in the area of health; that is why the basic characteristics and specifications that allow interoperability between the technologies and the information and communication systems related to medical care must be taken into account, in order to prevent harm to both patients and health processes [5]. The present project proposes to improve the processes of administration and assignment of medical appointments using these characteristics, which will be applied as follows:

• Efficiency: Through the prototype, it will be possible to reduce patient care times for the assignment of medical appointments in the different offices of the health center.
• Improvement in the quality of health care: Ease of use of the prototype will be guaranteed for both patients and health center staff, so that medical facilities can be used optimally.
• Evidence-based: This project includes statistical calculations that support the optimization of patient care and of the assignment of medical appointments.
• Empowerment of consumers and patients: This project will be integrated with current technological systems, making available to patients all information related to the different services and offices of the health center.
• Encouragement: This project will help foster a new link between health centers and patients through the appropriate use of information technologies and will also help promote the use of e-Health in health centers.
• Ethics: This project will always keep the patient's information private in order to avoid misuse of information.
3.3 Cloud Computing

With regard to e-Health, cloud computing provides a solid, efficient, and reliable infrastructure through the internet using a “pay-per-use” model, which makes it possible to maintain computing facilities, data storage, and software that facilitate the procedures and routines of the different medical services in a flexible and scalable way, in addition to significantly reducing the costs of operation and maintenance [6, 7]. In the health field, there are general aspects of cloud computing that must be considered:

• Availability: Services and data must be available all the time without performance degradation in order to work continuously and effectively.
• Reliability: The use of cloud computing in a sensitive field, as in the case of health, requires reliability of the services provided.
• Data management: Good database management is necessary to handle a great diversity and amount of data.
• Scalability: e-Health systems could serve hundreds of health care providers and millions of patients.
• Security: Because several e-Health services would be offered to and used by different users, it is necessary to implement different authentication methods and access controls.
• Privacy: Because e-Health services handle sensitive patient information, it is important to maintain the privacy of that information.

3.4 Progressive Web Apps – PWA

A PWA is a set of techniques, APIs, and strategies that allow developers to provide users with the experience of native applications on mobile devices, meeting the following basic characteristics [8]:

• Fast: Content is shown on the user's device in less than a couple of seconds almost all of the time.
• Reliable: They work even without a stable internet connection, and they also work on old devices.
• Attractive: By enabling notifications even on the web, users can be notified of events carried out by the application even if the browser is closed. PWAs can even be installed directly on the home screen of the device, and developers can also select the application icon.

The real advantage and utility of PWAs are user experiences that progressively improve as browsers improve. This user experience improves through a set of features that give the application the reliability and speed to achieve minimal content loading times, regardless of the quality of the internet connection [8, 9].
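The reliability characteristic rests on the standard Service Worker API. The sketch below shows the usual offline-first pattern in TypeScript; the file names and cached resources are illustrative assumptions and are not taken from the prototype described in this paper.

// app.ts - register the service worker that makes a PWA installable and offline-capable.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then(() => console.log("Service worker registered"))
    .catch((err) => console.error("Service worker registration failed", err));
}

// sw.ts (compiled to /sw.js) - cache-first strategy: answer from the cache
// when possible and fall back to the network, so the app keeps responding
// on slow or absent connections.
const CACHE = "app-shell-v1";

self.addEventListener("install", (event: any) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(["/", "/index.html"]))
  );
});

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});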
3.5 System Infrastructure

Because the present project is conceived as a service focused on patients and health center staff, the following aspects should be prioritized [10]:

• Internet access is a requirement for the correct operation of the application, and the health center must have an open Wi-Fi network so that patients who do not have a mobile data service can use it.
• The level of technological literacy required to use the proposed application is medium, because the application will be maintained in a hybrid web environment through a PWA.

Taking into account the aforementioned aspects, the services of the application can be adequately offered to the community of the health center through a technological infrastructure consisting of a hosting service, which will be responsible for hosting the web application, and a server for the REST API [11]. The connection to the servers will be through the internet, and patients will be able to access the internet through the internal network of the health center located on floors one, two, five, and nine, or through a mobile data network, as indicated in Fig. 1.
Fig. 1. Infrastructure diagram.
3.6 System Architecture

The prototype uses a distributed architecture [12] in which part of the server is responsible for executing the REST web services that facilitate communication between the various platforms to be managed [13]. As mentioned above, the application will be a PWA, which works in a hybrid way using web resources and the native resources of mobile devices, as indicated in Fig. 2.
Fig. 2. Architecture diagram.
In the architecture diagram, it can be noticed that the database will be hosted on a PaaS server implemented in the Google Cloud Platform using a free student license; that database will be responsible for storing the health center information used by the prototype. The database information will be retrieved by the backend server, which will be developed in NodeJS and deployed on the Microsoft Azure platform; that server will be in charge of exposing all the REST web services necessary for the PWA developed in Ionic to access the information. The application developed with the Ionic framework runs on Angular, thus allowing access to the REST services through HTTPS requests, which retrieve the information from the database so that it can be presented. Finally, the application will be hosted on a free hosting service, which will facilitate access to it through a URL that can be opened from any internet browser regardless of the operating system.

3.7 Application Prototype

For the prototype of the application, the health center “AsistaNet” will be taken as a case study, in which the application will be tested in order to collect and analyze data regarding the waiting times in the activation of medical appointments. For the prototype, the REST API server will be responsible for providing REST web services for the management of the database information; it will be developed in NodeJS because it offers great versatility and speed in both development and implementation. Additionally, the prototype will consist of 8 screens, of which 4 are new developments of the present project, 3 are improved, and one is taken from a previously implemented project [14], which focuses on the accessibility and mobility of health centers. Below are the 4 new screens of the application, which focus on the flow of activation of appointments in the health center (Figs. 3 and 4).
Fig. 3. Interfaces for scanning identity card
Fig. 4. Interfaces of medical appointments
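As a minimal sketch of the communication path described in Sect. 3.6 (the Ionic front end calling a NodeJS REST backend over HTTPS), the TypeScript fragment below exposes one endpoint with Express. The route, field names, and port are hypothetical and only illustrate the pattern; they are not the project's actual API.

import express from "express";

const app = express();
app.use(express.json()); // parse JSON request bodies

// Hypothetical endpoint: activate a previously scheduled appointment.
// In the real backend, this handler would update the record stored in
// the database described in the architecture diagram.
app.post("/api/appointments/:id/activate", (req, res) => {
  const { id } = req.params;
  res.json({ id, status: "activated", activatedAt: new Date().toISOString() });
});

app.listen(3000, () => console.log("REST API listening on port 3000"));

On the Ionic side, the PWA would issue the matching HTTPS request, for example through Angular's HttpClient: this.http.post('/api/appointments/42/activate', {}).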
4 Data Analysis

In the present investigation, the interpolation of the data was first carried out using the ksmooth statistical technique for fitting an interpolating polynomial to the data. This procedure was performed because the intervening variables are of the random type: the variable day is the independent variable, while the time variables collected in the morning, afternoon, and night, with the use of the prototype (October) and without the prototype (September), were considered the dependent variables. It can then be observed in Fig. 5 that when the prototype is used, the interpolation polynomial lies below the interpolation polynomial without the prototype, which clearly indicates that times are optimized.
Fig. 5. Data collected in the morning
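For reference, R's ksmooth computes the Nadaraya–Watson kernel-weighted average rather than a literal polynomial fit; under that reading, the smoothed curve plotted in Figs. 5–7 has the form below, where $K$ is a box or normal kernel and $h$ is the bandwidth (this formula is standard background, not stated in the paper):

$\hat{y}(x) = \dfrac{\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right) y_i}{\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)}$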
In a similar way, the different interpolation polynomials are analyzed for the data collected in the afternoon, in which mathematically similar behavior is observed (see Fig. 6).
Fig. 6. Data collected in the afternoon
Here the interpolation polynomial with the prototype is again below the polynomial without the prototype, so it can be said that in the afternoon, with the prototype, the activation times of medical appointments are more efficient. Finally, when data are collected at night, we again observe behavior similar to that of the morning and afternoon, as shown in Fig. 7.
Fig. 7. Data collected at night
In the previous figure, it can be noticed that when the prototype is used, the data interpolation polynomial is always below the interpolating polynomial without the prototype, so the optimization and efficiency of the prototype hold in all three scenarios (morning, afternoon, and night). Secondly, a new statistical test is carried out to corroborate the above. For this, the Poisson distribution is used, whose mathematical parameter lambda (λ) is obtained by calculating the expected value, which consists of the sum of the products of the independent variable with the dependent ones, applying the following mathematical model:

$$E(X) = \lambda = \sum_{i=1}^{n} x_i \cdot y_i \tag{1}$$
In particular, the expected value was determined for the case of the data without the prototype, collected on the mornings of September:

$$E(X) = \lambda = \sum_{i=1}^{n} x_i \cdot y_i = (1 \cdot 0.454 + 2 \cdot 0.452 + \ldots + 25 \cdot 0.433) \tag{2}$$
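In R, this expected value is a single sum of products; in the sketch below, only the first two and the last values of the series are taken from (2), the rest being placeholders.

```r
day <- 1:25
y   <- c(0.454, 0.452, rep(0.45, 22), 0.433)  # placeholder series except the quoted values
lambda <- sum(day * y)                        # E(X) as defined in Eq. (1)
```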
In the same way, the different lambdas of the variables are determined analogously, as indicated in Table 1.

Table 1. Calculation of the different λ parameters

| Parameters | September without prototype | October with prototype |
|------------|-----------------------------|------------------------|
| Morning    | 150.3208                    | 26.87125               |
| Afternoon  | 205.2347                    | 43.78642               |
| Night      | 88.90638                    | 23.47398               |
Making the respective comparisons of the different statistical parameters, it can be seen that the numerical values with the use of the prototype are below the values without it, by a fairly wide margin (see Table 2).

Table 2. Ranges of the parameters

| Parameter | Range                        |
|-----------|------------------------------|
| Morning   | 150.3208 − 26.87125 = 123.45 |
| Afternoon | 205.2347 − 43.78642 = 161.45 |
| Night     | 88.90638 − 23.47398 = 65.432 |
That is, the values range from Xmin = 65.432 to Xmax = 123.45, determining values which serve as indicators of the parameter (λ) for the analysis corresponding to the experiment. As mentioned earlier, the Poisson distribution is a discrete probability distribution that applies to the occurrences of an event during a certain interval. The investigation was carried out over 25 days, during which the time taken to activate a medical appointment was collected daily, both with the prototype and without it, where the random variable X represents the activation time. It should also be mentioned that each of these random variables represents the total number of occurrences of a phenomenon during a fixed period of time or in a fixed region of space, for which the following probability distribution model is applied:

$$X \sim P(\lambda); \qquad P(X = k) = \frac{e^{-\lambda}\,\lambda^{k}}{k!}, \quad k = 0, 1, 2, \ldots, 25 \tag{3}$$
where lambda (λ) is the number determined from the expected value of the random variable X in the observation period. Then, particularly for the case corresponding to the first and second days without the prototype, carried out in the mornings of September:

$$P(X = 1) = \frac{e^{-150.3208} \cdot 150.3208^{1}}{1!} = 7.8257 \times 10^{-64}, \qquad P(X = 2) = \frac{e^{-150.3208} \cdot 150.3208^{2}}{2!} = 5.8819 \times 10^{-62} \tag{4}$$
And so on, the calculations are made successively for all the intervening variables; to optimize the calculation procedures, the R language is used with the mathematical algorithm shown in Table 3.
Table 3. Algorithm implemented in R

```r
> for (i in 1:25) cat(i, "\t", dpois(i, 150.3208), "\n")
```

The 25 probabilities produced, in order: 7.825735e−64, 5.881854e−62, 2.947216e−60, 1.107570e−58, 3.329816e−57, 8.342343e−56, 1.791468e−54, 3.366186e−53, 5.622309e−52, 8.451500e−51, 1.154942e−49, 1.446765e−48, 1.672915e−47, 1.796242e−46, 1.800083e−45, 1.691187e−44, 1.495415e−43, 1.248845e−42, 9.880386e−42, 7.426138e−41, 5.315728e−40, 3.632112e−39, 2.373834e−38, 1.486820e−37, 8.939996e−37.
From the previous data, the different Poisson distributions were finally plotted for the different lambdas obtained when the experiment is carried out without the prototype in the month of September (see Fig. 8).
Fig. 8. Poisson distribution without prototype
Then the graphs of the random variables are made with the Poisson distribution for when the prototype is used in the month of October (see Fig. 9). In Figs. 8 and 9, it is clearly observed that when the prototype is used, the Poisson distribution already begins its functional growth quickly in the interval of 10–20 days, optimizing the time, whereas without the prototype the Poisson distribution only begins its functional growth in the interval of 20–25 days. This allows the mathematical behavior of the different lambdas to be inferred, and it is concluded that the implementation of the prototype would be a very good alternative to optimize time.
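A minimal R sketch of the comparison behind Figs. 8 and 9, using the morning lambdas from Table 1 (the exact plotting options of the original figures are not documented, so this only reproduces the idea):

```r
k  <- 1:25
op <- par(mfrow = c(1, 2))
plot(k, dpois(k, 26.87125), type = "h", main = "With prototype (Oct)")
plot(k, dpois(k, 150.3208), type = "h", main = "Without prototype (Sep)")  # mass ~1e-64..1e-37 over k = 1..25
par(op)
```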
Fig. 9. Poisson distribution with prototype
Regarding the absenteeism index, information was requested from the AsistaNet Health Center for May 2019 to September 2019, in addition to the month of October 2019, in which the application was implemented (see Table 4).

Table 4. Health center absenteeism from May 2019 to October 2019

| Month          | Medical appointments | Attended appointments | Absenteeism | % Absenteeism |
|----------------|----------------------|-----------------------|-------------|---------------|
| May 2019       | 17624                | 16001                 | 1623        | 9.21          |
| June 2019      | 16798                | 14964                 | 1834        | 10.92         |
| July 2019      | 17321                | 15738                 | 1583        | 9.14          |
| August 2019    | 15438                | 13684                 | 1754        | 11.36         |
| September 2019 | 16566                | 14824                 | 1742        | 10.52         |
| October 2019   | 16927                | 15606                 | 1321        | 7.80          |
| Average        | 16779                | 15136                 | 1643        |               |
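The percentages in Table 4 follow directly from the appointment counts; for example, for October 2019, a trivial check in R:

```r
appointments <- 16927
attended     <- 15606
absenteeism  <- appointments - attended     # 1321
round(100 * absenteeism / appointments, 2)  # 7.8, the % absenteeism reported in Table 4
```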
It can be noticed that in the month of October the absenteeism rate is 7.80%, a lower percentage than that registered in previous months, in which the health center did not have an application to help with the administration and management of medical appointments.
5 Conclusions
Many of the health centers in Ecuador do not take adequate advantage of the available technological resources, and in some cases they have outdated technology. This represents a problem since, to innovate or optimize processes that improve the quality of the different medical services, technological resources need to be used correctly. In Ecuador, the implementation of e-Health services is undeveloped or under development, due to the lack of knowledge of, or adaptability to, these new proposals, which could be very supportive for each segment of the health field. Through the use of PWA technology, it was possible to provide a prototype with greater portability, thus allowing a greater number of patients to be covered. Through the implementation of the prototype, a significant reduction in waiting times for the activation of medical appointments was observed, which benefits both the health center and the patients, since the waiting lines and the number of personnel needed for patient care in the reception area are greatly reduced. Through the use of e-Health concepts and characteristics as the basis of good practices in health processes, it was possible to provide a solution that helps with the administration and management of medical appointments, as well as improving access and information management.
6 Recommendations
It is recommended to consider longer time spans for data collection regarding medical appointment activation times, which will help to determine with more precision the average time required for this process. It is recommended to extend the period for the collection of absenteeism data generated in the health center. It is recommended to optimize the process of reading patient identification, since if the patient does not have a mobile device with a high-resolution camera, reading problems could arise. For future work, the scheduling of medical appointments could be carried out directly from the device, taking into consideration that the greatest obstacle to this is the direct integration with the Ecuadorian Institute of Social Security (IESS) system.
References 1. Mesa, J.: Factores Determinantes del Absentismo en Consultas Externas de la Agencia Sanitaria Costa del Sol, Julio 2015. https://riuma.uma.es/xmlui/bitstream/handle/10630/10149/ TD_Jabalera_Mesa.pdf?sequence=1 2. S. N. d. P. d. S. d. Salud: La Propuesta Estratégica Integral para los usuarios que por primera o segunda vez no asisten a una cita agendada por Contact Center. Ministerio de Salud Pública, Quito (2017) 3. Cruz-Cortes, M.E.: Mobile Apps, a tool for public health. Mexican J. Med. Res. ICSa 8, 33–39 (2020)
4. Curioso, W.: Building capacity and training for digital health: challenges and opportunities in Latin America. J. Med. Internet Res. 21(12), e16513 (2019) 5. Vayas, E., Jimenez, A.: E-Health in Ecuador: experiences and good practice. In: Sixth International Conference on eDemocracy & eGovernment (ICEDEG) (2019) 6. López, R., Chiriboga, M.: The present situation of e-health and mHealth in Ecuador. Latin Am. J. Telehealth 4, 261–267 (2017) 7. Terán, D., Rivera, J., Tapia, F., Aules, H.: Use of e-health as a mobility and accessibility strategy within health centers in Ecuador with the aim of reducing absenteeism to medical consultations. In: ICAIW, p. 15 (2019) 8. Chiara, C., Chiara, R., Norbert, G.: mHealth and telemedicine apps: in search of a common regulation. ecancer Med. Sci. 12, 853 (2018) 9. García, M.: eHealth (tecnología y medicina). In: Conferencia de Directores y Decanos de Ingeniería Informática (2017) 10. DeNardis, L.: E-health Standards and Interoperability. ITU-T Technology Watch Report (2012) 11. Goessling, S.: Architecting Cloud Computing Solutions. Packt Publishing, Birmingham (2018) 12. Haufe, K., Dzombeta, S., Brandis, K.: Proposal for a security management in cloud computing for health care. Sci. World J. 2014, 7 (2014) 13. Sheppard, D.: Beginning Progressive Web App Development: Creating a Native App Experience on the Web. Apress, New York (2017) 14. Hume, D.: Progressive Web Apps. Manning Publications, Shelter Island (2017) 15. Santos, L., Takako, P.: Analyzing the availability and performance of an e-health system integrated with edge, fog and cloud infrastructures. J. Cloud Comput. 7, 16 (2018) 16. Suciu, G., Martian, A., Craciunescu, R.: Big data, internet of things and cloud convergence – an architecture for secure e-health applications. J. Med. Syst. 39, 141 (2015) 17. Bharat Peddi, S.V., Yassine, A., Shirmohammadi, S.: Cloud based virtualization for a calorie measurement e-health mobile application. In: IEEE International Conference on Multimedia and Expo Workshops (2015) 18. Kamel, M., Al-Shorbaji, N.: On the Internet of Things, smart cities and the WHO healthy cities. Int. J. Health Geogr. 13, 10 (2014)
Proposal for a New Method to Improve the Trajectory Generation of a Robotic Arm Using a Distribution Function Yadira Quiñonez1 , Oscar Zatarain1 , Carmen Lizarraga1 , and Jezreel Mejía2(B) 1 Universidad Autónoma de Sinaloa, Mazatlán, Mexico
{yadiraqui,carmen.lizarraga}@uas.edu.mx, [email protected] 2 Centro de Investigación en Matemáticas, Zacatecas, Mexico [email protected]
Abstract. Robotic control is one of the most important problems and is considered the central part of trajectory planning and motion control, and several methods can be found to generate the trajectory of a robotic arm. However, those methods imply many calculation processes and operations, or other problems that decrease the accuracy of the results and increase the compile time. For these reasons, a novel method is proposed to calculate the trajectory and obtain very accurate results with an insignificant compile time. Also, it is easy to implement, and it can produce different velocity and acceleration shapes to obtain a smooth trajectory, opening new ways of control applications. The values for different initial and final positions using the proposed distribution function, LSPB, and cubic polynomials have been compared for trajectories of 1 and 0.5 s. The paper ends with a critical discussion of experimental results. Keywords: Trajectory generation · Motion control · Robotic arm · Distribution function · LSPB method · Cubic polynomial
1 Introduction
There are many fields of application for manipulator robots, such as industrial robotics [1, 2] and robotic systems applied in the medical area [3, 4], in which adequate knowledge of variables and parameters is required to obtain exact trajectory accuracy. In this sense, the trajectory of a robotic arm has been a subject widely studied by different researchers using different techniques. These techniques help us to understand and perform different tasks; for example, in industry it is common to use different manipulators in manufacturing processes, which means the robot arm does repeated work while executing a trajectory. Hence, this topic has been improved over time until today, because manipulators must have greater accuracy when making a trajectory. Nowadays, many different options exist to control a robotic arm. Using any of those options requires applying a method to calculate the trajectory, as well as designing the control used to generate the trajectory, for example, using
interfaces, joysticks, genetic algorithms, voice commands, keyboards, industrial controls, etc. However, many authors have noted that these methods have some issues and imply a lot of computation and compile time to execute the desired trajectory; these disadvantages arise when choosing the method to calculate the trajectory. Among the most famous methods to calculate the trajectory of a robotic arm [5, 6] are cubic polynomials [7], the trapezoidal velocity profile, also called Linear Segment with Parabolic Blend (LSPB) [8], and the Euler angles [6], to mention some of them. These methods have robust characteristics, such as smooth motion in the joint space, and the motion of the robotic arm is predictable in the task space. However, polynomials, LSPB, and Euler angles have specific problems, as many authors have mentioned. Concerning cubic polynomials, the authors in [7] mention that it is difficult to calculate an optimal trajectory, because it is necessary to calculate the optimal time, and this implies more calculations. Therefore, the behavior becomes more complex and it is difficult to obtain a smooth trajectory with precise results. In another work [9], the authors comment on other disadvantages of the (N − 1)-order polynomial, mainly emphasizing that the robotic arm has an oscillatory behavior when velocity and acceleration are generated for a determined position, and as a result of the oscillations, the robotic arm performs an unnatural motion. Also, when the order of the polynomial is increased, the system of constraint equations becomes difficult to solve, and therefore the compile time is prolonged. According to [8], the trapezoidal trajectory does not require a lot of calculations; however, an additional response time is necessary, and it presents constant acceleration, velocity, and deceleration regions; as a consequence, vibrations are generated which restrict position accuracy. Finally, Dong et al. [6] mention that calculating the trajectory of a robotic arm using the Euler angles can cause a malfunction due to the nonlinearity of the three Euler angles. To solve all the above-mentioned problems regarding arm control and the calculation of the trajectory, a distribution function is proposed that makes it easier to compute an exact trajectory for any parameters and that can grant many advantages. This solution became the goal of this paper: to improve the control and the calculation of the trajectory of a robotic arm.
2 Related Works
Robotic control is one of the most important problems and is considered the main part of trajectory planning and motion control. The primary solution for robotic control is forward and inverse kinematics, because these provide the angles of the robotic arm joints; however, this process is quite complex and may have no solution or multiple solutions. Therefore, different methods are required to improve the computational processing. The authors of [10] performed a kinematic analysis for the trajectory planning problem with multiple manipulators using the Denavit-Hartenberg (DH) method and solving the inverse kinematics equation through matrix operations and an algebraic method. The work in [11] proposed eight inverse kinematics solutions using analytical, geometric, and algebraic methods combined with the Paden-Kahan subproblem as well as matrix theory. There are other works [12–16] that proposed some conventional tools to discover the kinematic solutions of the manipulator robot.
Other kinds of methods to control a robotic arm have been implemented in different branches; for example, the authors of [17] designed an interface with open control algorithms to generate and control three types of movements: joint interpolation, linear interpolation, and circular arcs. According to the results presented in that work, some position errors are found at the end effector, and the process shows oscillations around the desired coordinates. Other authors have proposed using high-order polynomials: the authors of [18] promote using 3rd, 4th and 5th-degree polynomial trajectories to present three trajectory optimization problems, and it has been mentioned that a slower trajectory occurs with a high-degree polynomial, and the polynomial coefficients often cannot be obtained. According to [8], the trapezoidal trajectory has constant velocity and acceleration, and a large jerk causes vibration, the accuracy of the results decreases, and additional response time is required. Another research work is presented in [19], where the authors propose the optimization of trajectory planning using one of the most interesting distribution functions, the Gaussian distribution. As a result of the analysis of the related works, it can be highlighted that several research works present solutions to robotic control; however, these algorithms present several disadvantages; for example, the trajectory can oscillate between feasible and infeasible during optimization, and an optimization process is needed, which implies more calculations to generate an optimized trajectory.
3 Construction of the Distribution Function and Properties
Regardless of the approach used to generate the trajectory, a smooth trajectory is always essential, so the velocity was modelled to achieve this purpose, as presented in Fig. 1. This function was obtained by running many simulations in MATLAB and observing the motion of the robotic arm:
Fig. 1. Velocity representation which was modelled without a determinate position.
The velocity has a function which is presented below:

$$\dot{S}(x) = \begin{cases} -\dfrac{\ln(n)\, n^{x-\zeta+1}\left(n^{a+1}x^{2} - t_s\, n^{x+\frac{t_s}{x}}\right)}{x^{2}\left(n^{x+\frac{t_s}{x}} - n^{a+1}\right)^{2}} & \text{if } x \neq 0 \\[2mm] 0 & \text{if } x = 0 \end{cases} \tag{1}$$
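As a numerical sanity check of (1) as reconstructed here, the R transcription below (an illustration, not the authors' MATLAB code) verifies that the velocity integrates to the full unit sweep when ζ = 1, the 0-to-1 case with n = 2, t_s = 1 and a = 0 that appears later in Table 1.

```r
S_dot <- function(x, z, n = 2, ts = 1, a = 0) {
  w <- n^(x + ts / x)
  -log(n) * n^(x - z + 1) * (n^(a + 1) * x^2 - ts * w) / (x^2 * (w - n^(a + 1))^2)
}
integrate(S_dot, lower = 0.01, upper = 1, z = 1)$value  # ~1, i.e. Sf - q0 = 1
```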
Note that $\dot{S}(x)$ is always positive, that is, it stays greater than or equal to zero; also, the velocity changes its shape each time $n$ increases, and the trajectory flattens before reaching $t_s$ (in Fig. 1, $t_s = 1$). However, stopping and decreasing the velocity before reaching the end-point is required, so that the motion is not forced when the robotic arm reaches the end-point. Knowing the velocity function can help to determine the position, but the distribution function is explained first. The velocity of the distribution function presented below is the function (1).
3.1 Distribution Function
A circle is used in which its arc is divided $n$ times, where $n$ can be any real number such that $n > 1$, and in a time equal to $t_s$, $n^{t_s-\zeta}$ steps are traveled. Before reaching $n^{t_s-\zeta}$, the number of steps moved is $n^{x-\zeta}$, and $n^{x+\frac{t_s}{x}-1} - n^{a}$ is the distance to travel, or the number of steps that need to be done in the time $t_s$. The number $-n^{a}$ is necessary to change the shape of the velocity, because the distance $n^{x+\frac{t_s}{x}-1} - n^{a}$ is modified, which helps to obtain different trajectories to reach a specific end-point. Then, the number of steps traveled and the number of missing steps are represented as $0 < n^{x-\zeta} \le n^{x+\frac{t_s}{x}-1} - n^{a}$. We are going to say that for any $S_f \le 1$ there exists $\zeta$ such that (see Fig. 2):
Fig. 2. Construction of the function S(x).
$$0 \le S(x) = \frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1} - n^{a}} + O_{S(x)} \le S(t_s) = \frac{n^{t_s-\zeta}}{\left|n^{t_s} - n^{a}\right|} + O_{S(x)} = S_f \le 1 \tag{2}$$
where

$$0 \le O_{S(x)} = q_0 \le 1 \tag{3}$$
$O_{S(x)}$ is a constant value which represents the origin or, in this case, the start position $q_0$ where the joint is located for any $x \le t_s$, and $S_f$ is the final position of the joint. So, the distribution function is presented below:

$$S(x)=\begin{cases}\dfrac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+O_{S(x)} & \text{where } a\le\frac{t_s}{2}-1 \text{ and } 0<x\\[2mm] 0 & \text{if } x<a \text{ or } x\le0\end{cases}\tag{4}$$

Now, to know that $S(x)$ is truly a distribution function, $S(x)$ needs to fulfil the following characteristics: $\lim_{x\to\infty}S(x)=1$; $\lim_{x\to-\infty}S(x)=0$; if $x_1\le x_2$ then $S(x_1)\le S(x_2)$; and $S(y^{+})=\lim_{x\to y^{+}}S(x)=S(y)$. In the next section, the application of the distribution function is given.

Proving the first characteristic: knowing that $0\le\frac{n^{t_s-\zeta}}{n^{t_s+\frac{t_s}{t_s}-1}-n^{a}}\le1$, we have to know the value of $\frac{n^{t_s-\zeta}}{n^{t_s}-n^{a}}$. Then the final position $S_f$ has to be selected, with $0\le\frac{n^{t_s-\zeta}}{n^{t_s}-n^{a}}+O_{S(x)}\le S_f$; this implies $0\le\frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}\le S_f-O_{S(x)}$. If $S_f=1$ and $O_{S(x)}=0$ then $\frac{n^{t_s-\zeta}}{n^{t_s}-n^{a}}=1$, and if $O_{S(x)}=1$ then $\frac{n^{t_s-\zeta}}{n^{t_s}-n^{a}}=0$, if and only if $S_f\le1$ and $0\le O_{S(x)}\le1$. Therefore $\lim_{x\to t_s}S(x)=\frac{n^{t_s-\zeta}}{n^{t_s}-n^{a}}+O_{S(x)}\le1$. So, $S_f=1$ is necessary to obtain $\lim_{x\to t_s}S(x)=1$, and if $S_f<1$ then $\lim_{x\to t_s}S(x)<1$; this characteristic is needed in trajectory generation methods because, for a determined time, an end-point that is not always 1 is sought. Another proof is the following:
$$\lim_{x\to\infty}\frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}=\lim_{x\to\infty}\frac{n^{x-\zeta}\big/n^{x+\frac{t_s}{x}-\zeta}}{n^{x+\frac{t_s}{x}-\zeta}\big/n^{x+\frac{t_s}{x}-\zeta}-n^{a}\big/n^{x+\frac{t_s}{x}-\zeta}}=\lim_{x\to\infty}\frac{n^{-\frac{t_s}{x}}}{1-\dfrac{n^{a}}{n^{x+\frac{t_s}{x}-1}}}=\frac{n^{0}}{1-\dfrac{n^{a}}{\infty}}=1\tag{5}$$

Proving $\lim_{x\to-\infty}S(x)=0$: we have that $\lim_{x\to-\infty}S(x)=\lim_{x\to0}S(x)$ and $S(x)=0$ for $x\le0$; therefore $\lim_{x\to-\infty}S(x)=\lim_{x\to0}S(x)=0$.
If $x_1\le x_2$, then $0<n^{x_1-\zeta}\le n^{x_2-\zeta}$; that means $S(x_2)$ has fewer or equally many missing steps than $S(x_1)$. Then $0<n^{x_1+\frac{t_s}{x_1}-1}-n^{a}\le n^{x_2+\frac{t_s}{x_2}-1}-n^{a}$ and $\frac{1}{n^{x_2+\frac{t_s}{x_2}-1}-n^{a}}\le\frac{1}{n^{x_1+\frac{t_s}{x_1}-1}-n^{a}}$; due to $0<n^{x_1-\zeta}\le n^{x_2-\zeta}$, therefore $\frac{n^{x_1-\zeta}}{n^{x_1+\frac{t_s}{x_1}-1}-n^{a}}\le\frac{n^{x_2-\zeta}}{n^{x_2+\frac{t_s}{x_2}-1}-n^{a}}$.
For the fourth characteristic, it is proved using the definition of limit. Knowing that $f(x)=n^{x-\zeta}$ is a continuous function, $g(x)=n^{x+\frac{t_s}{x}-1}-n^{a}$ with $g(x)\neq0$ has to be a continuous function for every $y>0$. If $\lim_{x\to y}\left(n^{x+\frac{t_s}{x}-1}-n^{a}\right)=n^{y+\frac{t_s}{y}-1}-n^{a}$, then for any $\varepsilon>0$ there exists $\delta>0$ such that if $0<|x-y|<\delta$ then $\left|\left(n^{x+\frac{t_s}{x}-1}-n^{a}\right)-\left(n^{y+\frac{t_s}{y}-1}-n^{a}\right)\right|<\varepsilon$. Therefore $\lim_{x\to y}g(x)=n^{y+\frac{t_s}{y}-1}-n^{a}$ is a continuous function for any $y>0$. Due to $f(x)$ being a continuous function and $g(x)$ too, $\frac{f(x)}{g(x)}=\frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}$ is a continuous function (see Fig. 3).
Fig. 3. Representation of the distribution function with $t_s > 0$ and $a < \frac{t_s}{2} - 1$.
This distribution function can be obtained by integrating $\dot{S}(x)$; knowing that $C$ can be any constant value, then:

$$\int\dot{S}(x)\,dx=\int-\frac{\ln(n)\,n^{x-\zeta+1}\left(n^{a+1}x^{2}-t_s\,n^{x+\frac{t_s}{x}}\right)}{x^{2}\left(n^{x+\frac{t_s}{x}}-n^{a+1}\right)^{2}}\,dx=\frac{n^{x-\zeta+1}}{n^{x+\frac{t_s}{x}}-n^{a+1}}+C=\frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+O_{S(x)}\tag{5}$$
Then, $\dot{S}(x)\ge0$ and

$$\int_{0}^{t_s}\dot{S}(x)\,dx=S_f\le1\tag{6}$$
That means $\dot{S}(x)$ is a density function, according to [20]. Knowing everything which has been presented, the number $\zeta$ can be calculated; this parameter helps us to obtain any trajectory between two points, which are $O_{S(x)}=q_0$ and the end-point $S_f$, with just one computation. This parameter reduces the compile time, and an exact trajectory is obtained. So, for any $x$, $q_0$ and $S_f$ such that $S_f\ge S(x)+q_0$ and $S(x)+q_0\ge q_0$ for each $x\ge0$, the equality is fulfilled when the total time is reached:

$$q_0\le\frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+q_0\le S_f\tag{7}$$

Developing the inequation, $\zeta\ge x-\frac{\ln\left(\left(S_f-q_0\right)\left(n^{x+\frac{t_s}{x}-1}-n^{a}\right)\right)}{\ln n}$, and calculating the limit for when $x\to t_s$:

$$\zeta=\lim_{x\to t_s}\left(x-\frac{\ln\left(\left(S_f-q_0\right)\left(n^{x+\frac{t_s}{x}-1}-n^{a}\right)\right)}{\ln n}\right)=t_s-\frac{\ln\left(\left(S_f-q_0\right)\left(n^{t_s}-n^{a}\right)\right)}{\ln n}\tag{8}$$

Also, it is needed to know that:

$$\zeta=t_s-\frac{\ln\left(\left(q_0-S_f\right)\left(n^{t_s}-n^{a}\right)\right)}{\ln(n)}\tag{9}$$

for when $q_0\ge-\frac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+q_0\ge S_f$, which helps to make the trajectory from a major point to a minor one.
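As a hedged illustration of (8) and (9), written in R rather than the authors' MATLAB, the closed form for ζ is one line per case and reproduces values from Table 1 in Sect. 5 (positions expressed as fractions of π; n = 2, t_s = 1 and a = 0 assumed):

```r
zeta <- function(q0, Sf, n = 2, ts = 1, a = 0) {
  if (q0 < Sf) ts - log((Sf - q0) * (n^ts - n^a)) / log(n)  # Eq. (8)
  else         ts - log((q0 - Sf) * (n^ts - n^a)) / log(n)  # Eq. (9)
}
zeta(0, 1)       # 1      -> the 0-to-180-degree row of Table 1
zeta(0, 45/180)  # 3      -> the 0-to-45-degree row
zeta(0, 75/180)  # 2.2630 -> the 0-to-75-degree row
```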
major point to a minor one. Using the distribution function to generate the trajectory, an acceleration function is obtained and is equal to: ¨ S(x) =
⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩
ln(n)nx−ζ +1
nx+
ts +a+1 x
ts ln(n)+n2a+2 ln(n) x4 −4nx+ x
+a+1
ts ln(n)ts x2 + 2nx+ x 3 x+ txs 4 a+1 x n −n
+a+1
−2n2x+
2ts x
2ts ts ts x+ n2x+ x ln(n)+nx+ x
+a+1
ln(n) ts2
(10)
0 ifx = 0
Then, the position, velocity, and acceleration can be presented as in Fig. 4. Note that the velocity and acceleration change slightly, and the position too, but in both cases the trajectory starts at the start-point 0 and the end-point is 1. Changing the parameter $a$ can help to obtain different shapes of velocity and acceleration, and primarily different positions through time, which could be an advantage to avoid objects while the robotic arm is making a trajectory; also, the smoothest possible trajectory can be selected.
Fig. 4. Position, velocity, and acceleration of the trajectory in one second, starting in 0 and finalizing in 1 using different values of a to change the velocity and acceleration.
4 Trajectory Generation
4.1 Joint Space
Now, the function that is going to be used for making the trajectory is presented below:

$$S(x)=\begin{cases}\dfrac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+q_0 & \text{if } 0<x\le t_s \text{ and } q_0<S_f \text{ (1st condition)}\\[2mm] -\dfrac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+q_0 & \text{if } 0<x\le t_s \text{ and } S_f<q_0 \text{ (2nd condition)}\\[2mm] 0 & \text{if } x\le0\end{cases}\tag{11}$$

where $n$ is any real number greater than 1; nevertheless, the accuracy of the calculation of $\zeta$ can decrease a little, and the two trajectories differ slightly, when choosing a real number lying between two natural numbers rather than a natural number, for example $n=2$ versus $n=\sqrt{2}$. Using $n=2$ does not produce large values of $\zeta$ and allows many advantages to be shown easily when the total time is $t_s=1$; for that reason, $n=2$ is used to obtain the results and show the examples. In Fig. 5, two different values of $n$ are used to generate two different shapes of trajectories, to make clearer what happens when $n$ is bigger.
Example 1. Using $n=2$ and $n=10$ with $t_s=1$ and $q_0=0.5$, two trajectories are generated that reach the same end-point, $S_f=0.95$. Two different shapes of the two trajectories are obtained in Fig. 5.
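A compact R transcription of (11), reusing zeta() from the sketch above, reproduces Example 1; again, this is an illustration under the stated assumptions, not the original MATLAB implementation.

```r
S <- function(x, q0, Sf, n = 2, ts = 1, a = 0) {
  z    <- zeta(q0, Sf, n, ts, a)
  step <- n^(x - z) / (n^(x + ts / x - 1) - n^a)
  ifelse(x <= 0, 0, sign(Sf - q0) * step + q0)  # sign selects the 1st or 2nd condition
}
x <- seq(0.01, 1, by = 0.01)
plot(x, S(x, q0 = 0.5, Sf = 0.95, n = 2), type = "l")  # n = 2 shape
lines(x, S(x, q0 = 0.5, Sf = 0.95, n = 10), lty = 2)   # n = 10 shape, same end-point
tail(S(x, 0.5, 0.95), 1)                               # 0.95 at x = ts
```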
Fig. 5. On the right side, the trajectory S(x) for n = 2 is presented, and on the left side, the trajectory S(x) for n = 10, in order to obtain a different shape of the trajectory planning.
Example 2. In this example, (8) is used in the first case and (9) in the second case to compute $\zeta$. It is easy to note that $S(x)$ starts at $q_0$, which is the initial position when $x = 0$, and a trajectory is generated to reach $S_f$, which is the final position; for each different $S_f$ there exists a different $\zeta$, which makes it possible to reach the final position starting from any $q_0$. In this example, the first case is used to generate the trajectory in Fig. 6, and (8) is used to calculate $\zeta$.
Fig. 6. Example of trajectory made by S(x) in the first case, with parameters q0 equal to 0.15, Sf equal to 0.45, a equal to 0, ts equal to 0.4 s and ζ equal to 3.783042037.
Note that $t_s$ is quite small; nevertheless, an exact trajectory can be obtained, as presented in Fig. 6. In Sect. 5, results using different methods to calculate the trajectory with a time of 0.5 s are presented, to show that the distribution function is more exact than LSPB and cubic polynomials.
Example 3. Another example shows when (9) is employed. Taking $S_f = -0.1$ and $q_0 = 0.12659$, the trajectory is shown in Fig. 7; in this case $S_f < q_0$, and for that reason (9) is used to calculate $\zeta$ and then (11) generates the trajectory (see Fig. 7):
Fig. 7. Example of trajectory made by S(x) with the second case and ζ equal to π .
Another characteristic that can be exploited is taking $a = \{0.5, \ldots, 0.75, \ldots, 0.85, \ldots, 0.99\}$ to obtain a motion which is not linear at all; that is, the farther $a$ is from $t_s$, the more linear the trajectory obtained, and the nearer $a$ is to $t_s$, the more curved the motion. Figure 8 represents the trajectory taking $S_f = -0.55$, $q_0 = 0.45$ and $a$ approximating $t_s$.
Fig. 8. Trajectory representation for when a is too close to ts .
This is a unique property for $t_s = 1$, because $0 < n^{x-\zeta} \le n^{x+\frac{t_s}{x}-1} - n^{a}$ is always fulfilled. On the other hand, for $t_s > 1$, $a$ and $t_s$ must be selected carefully so that $0 < n^{x-\zeta} \le n^{x+\frac{t_s}{x}-1} - n^{a}$ always holds. The trajectory can still have a curved motion even when $t_s > 1$, but as $t_s$ gets bigger, the trajectory approaches a linear motion. The trajectory can be observed in Fig. 9.
Fig. 9. Trajectory representation for when ts = 4 and 2 ≥ a ≥ 0.
To represent the trajectory $S(x)$, it is necessary to multiply (11) by $\pi$ to obtain the radians that each joint has to rotate, or equivalently by 180° to obtain the degrees. So, the following function is obtained, which was programmed in MATLAB:

$$S_p(x)=\begin{cases}\pi\left(\dfrac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+q_0\right) & \text{if } 0<x\le t_s \text{ and } q_0<S_f \text{ (1st condition)}\\[2mm] \pi\left(-\dfrac{n^{x-\zeta}}{n^{x+\frac{t_s}{x}-1}-n^{a}}+q_0\right) & \text{if } 0<x\le t_s \text{ and } S_f<q_0 \text{ (2nd condition)}\\[2mm] 0 & \text{if } x\le0\end{cases}\tag{12}$$
4.2 Task Space
Example 4. The next two examples of the trajectory are represented in the task space $(x, y, z)$, so a linear and a curved trajectory can be represented easily. $S(x)$ gives the values in the task space; that is, $x = S(x_1)$, $y = S(x_2)$ and $z = S(x_3)$, with different final positions $S_{fx} = S(t_s) = x_f$, $S_{fy} = S(t_s) = y_f$ and $S_{fz} = S(t_s) = z_f$, and different initial positions $q_x = x_0$, $q_y = y_0$ and $q_z = z_0$. The examples are represented in Figs. 10 and 11. The position, velocity and acceleration are then obtained as presented in Fig. 4 for $t_s = 1$ and $a \le 0$. This could be one of the greatest velocities and accelerations of the robotic arm trajectory, because the articulation goes from the start position $q_0 = 0$ to the final position $S_f = 1$, which means the articulation rotates from 0 to 180° ($\pi$ radians) in one second or, in the task space, the end-point is 1. Changing the value of $a$, the velocity and acceleration can change to a greater or lesser magnitude, and a different shape of the trajectory planning can be obtained, so most of the time a smooth trajectory can be achieved. It is very necessary to choose a value of $a$ that is not too close to $t_s$, as was presented in Fig. 8. When $a = 0.99$, the articulation accelerates in the last instant before reaching the total time 1, which provokes a sudden acceleration and velocity.
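For Example 4, one S(x) profile per coordinate gives the task-space path; the sketch below reuses S() from Sect. 4.1 with the parameters quoted for Fig. 11, and is illustrative only.

```r
t  <- seq(0.01, 1, by = 0.01)
px <- S(t, q0 =  0.65, Sf =  0.85, a = 0.55)  # x(t)
py <- S(t, q0 =  0.45, Sf = -0.85, a = 0.95)  # y(t): second condition, since Sf < q0
pz <- S(t, q0 = -0.15, Sf =  0.85, a = 0.99)  # z(t)
c(tail(px, 1), tail(py, 1), tail(pz, 1))      # end-point (0.85, -0.85, 0.85), as in Fig. 11
```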
Fig. 10. Representation of the trajectory at the final position $S_{f1}$, $S_{f2}$, $S_{f3}$ with linear motion represented in the task space and parameters $S_{f1} = x_f = 0.85$, $S_{f2} = y_f = -0.85$, $S_{f3} = z_f = 0.85$, $q_x = 0.65$, $q_y = 0.45$, $q_z = -0.15$, $t_s = 1$, $a_x = -0.50$, $a_y = -100$ and $a_z = -200$.
Fig. 11. Representation of the trajectory at the final position $S_{f1}$, $S_{f2}$, $S_{f3}$ with curved motion represented in the task space and parameters $S_{f1} = x_f = 0.85$, $S_{f2} = y_f = -0.85$, $S_{f3} = z_f = 0.85$, $q_x = 0.65$, $q_y = 0.45$, $q_z = -0.15$, $t_s = 1$, $a_x = 0.55$, $a_y = 0.95$ and $a_z = 0.99$.
At this point, the degrees through which the joint moves are represented in the interval $[-1, 1]$ for each joint, and this is well known. So, to program the trajectory in MATLAB, $S(x)$ is multiplied by $\pi$ as in (12); in that way, the radians that the joint needs to rotate are obtained. In the same way, the functions $\dot{S}(x)$ and $\ddot{S}(x)$ (functions (1) and (10)), that is, the velocity and acceleration, have $\pi$·rad/s and $\pi$·rad/s² as units. With all these examples, the function $S(x) - q_0$ represents the answer to the question: how much does the joint need to increase or decrease, starting at a specific position, to reach a different one? The question is answered by making only one calculation, which is the parameter $\zeta$. This reduction of the process to calculate the trajectory gives many answers without wasting a lot of time compiling the simulation, and it can be operated easily. Moreover, according to Maxima, a computer algebra system, Fig. 12 presents a concise trajectory where $S_f = 0.516$ and $q_0 = 0.51599$, giving $\zeta = 17.60964047\ldots$; $\zeta = 17.609$ is used to show how accurate the distribution function can be in generating a trajectory, obtaining a relative error equal to $8.527131783 \times 10^{-7}\%$.
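That end-point check can be replayed in R under the same assumptions (n = 2, t_s = 1, a = 0); the truncation of ζ is what produces the tiny residual error.

```r
q0 <- 0.51599; Sf <- 0.516
1 - log(Sf - q0) / log(2)                   # zeta = 17.60964..., as quoted above
S_end <- 2^(1 - 17.609) / (2^1 - 2^0) + q0  # end-point with the truncated zeta
100 * abs(S_end - Sf) / Sf                  # relative error on the order of 1e-7 percent
```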
Fig. 12. Result of the end-point in Maxima to measure the relative error.
5 Experimental Results
The GUIDE of MATLAB was used to compare the results of the cubic polynomial method and the Linear Segment with Parabolic Blend (LSPB). Every result was obtained with a time of $t_s = 1$ (total time equal to 1), and different $q_0$ (initial position) and $S_f$ (final position) values were selected to compare the resulting trajectories with the distribution function. First, using the LSPB method, some quantities have to be computed, for example the maximum velocity, the acceleration, the acceleration time and the total time of the trajectory, as mentioned in [21, 22]. So, the maximum acceleration with a total time equal to 1 was calculated, and then the velocity. The maximum acceleration with a total time equal to 1 is presented below:

$$a_{max} \ge \frac{4\left(S_f - q_0\right)}{t_s^{2}} \tag{13}$$

Then, the time $t_b$ and the velocity, represented as $V_{max}$, were calculated; the formulas are presented in [21], and the function to calculate the trajectory using the LSPB method is presented in [22], since LSPB obtains the trajectory using the desired velocity. To generate a trajectory using cubic polynomials, $a_0$, $a_1$, $a_2$, and $a_3$ have to be calculated [21]; four decimal places were used when calculating the $a_n$ ($n = \{0, 1, 2, 3\}$) values to reduce the compile time. Table 1 shows the results of each method, presented in radians; the initial and final positions are shown in degrees, and T is the total time each method needed to reach the final position ($S_f$).
Table 1. Results to compare the trajectories with the three methods presented, using a desired total time equal to T = 1

| q0 (°) | Sf (°) | Distribution F − ζ (rad) | ζ       | Tζ (s) | Cubic polynomials − an (rad) | Tan (s) | LSPB (rad) | Vmax       | TLSPB    |
|--------|--------|--------------------------|---------|--------|------------------------------|---------|------------|------------|----------|
| 1.6    | 1.601  | 0.0279427                | 18.4576 | 1      | 0.0279                       | 1       | 0.0281     | 3.5 × 10⁻⁴ | 1.000 s  |
| 0      | 45     | 0.785398                 | 3       | 1      | 0.7854                       | 1.5     | 0.7853     | 1.5        | 1.0011 s |
| 0      | 75     | 1.309                    | 2.2630  | 1      | 1.3090                       | 1.5     | 1.3089     | 2.5        | 1.0011 s |
| 15     | 90     | 1.5708                   | 2.2630  | 1      | 1.5708                       | 2       | 1.5707     | 2.5        | 1.0011 s |
| 25     | 175    | 3.05433                  | 1.2630  | 1      | 3.0543                       | 2.5     | 3.0540     | 4.8        | 1.0038 s |
| 0      | 180    | 3.14159                  | 1       | 1      | 3.1416                       | 3       | 3.1412     | 6          | 1.0038 s |
Next, Fig. 13 shows the results obtained from the trajectories of the distribution function and the LSPB method.
Fig. 13. Trajectory generation using the distribution function and LSPB.
At this point, note that LSPB requires more time to reach the desired position and loses accuracy with large jerks (see Fig. 14); it requires a time greater than 1 to obtain an accurate result, and the more time the LSPB method is given, the more accurate it will be.
Fig. 14. End-point located in the time Tζ = 1 s using the distribution function and TLSPB = 1.0038 s using LSPB.
The second most accurate method is cubic polynomials, but it requires a little additional time to obtain reliable results and a less violent motion; also, the trajectory is generated only for a total time equal to 1. With a time of less than 1 s, the trajectory is not completed or, in the best case, is not accurate. The distribution function is not affected by the time, and a smooth trajectory is obtained by changing the parameter $a$ for $t_s = 0.5$ s (see Fig. 15, where the acceleration is shown for a total time equal to 0.5 s); thus, the results are compared with the cubic polynomials using a time T = 0.5 s.
Fig. 15. Acceleration result using the distribution function with a time $t_s = 0.5$ and $a = -0.62$ to get a smooth trajectory, starting at $q_0 = 0$ with end-point $S_f = 180° = (1)\pi$ rad.
Table 2 shows that the accuracy of the cubic polynomial decreases when the total time is shorter. For that reason, the cubic polynomial is not recommended when the total time is less than 3 s; with large jerks, this method becomes inaccurate and generates trajectories that are not smooth. On the other hand, the distribution function can be used to obtain smooth trajectories with a total time equal to 0.5, as presented in Fig. 15; it is also not affected by the time chosen. Figures 16 and 17 represent the final result using the distribution function and the cubic polynomials; the first joint was used to generate the trajectory, while the second and third joints kept the same position.

Table 2. Results to compare the trajectories using the distribution function and cubic polynomials with a total time equal to T = 0.5

| q0 (°) | Sf (°) | Distribution F − ζ  | ζ      | Tζ (s) | Cubic polynomials − an   | Tan (s) |
|--------|--------|---------------------|--------|--------|--------------------------|---------|
| 0      | 45     | 0.25π rad = 45°     | 2.8892 | 0.5    | 0.7179 rad = 41.13264°   | 0.5     |
| 0      | 135    | 0.75π rad = 135°    | 1.3043 | 0.5    | 2.1583 rad = 123.661481° | 0.5     |
| 0      | 175    | (35/36)π rad = 175° | 0.9299 | 0.5    | 2.792 rad = 159.96982°   | 0.5     |
| 0      | 180    | (1)π rad = 180°     | 0.8892 | 0.5    | 2.8718 rad = 164.542°    | 0.5     |
Fig. 16. The final result of the distribution function using the GUIDE of MATLAB.
Fig. 17. The final result of the Cubic Polynomials using the GUIDE of MATLAB.
6 Conclusion
The distribution function presented in this paper is a novel method to calculate trajectories without decreasing the accuracy of the results. With the desired total time, the parameter $a$ can be chosen to obtain a smooth trajectory; therefore, different shapes of the velocity and acceleration functions can be obtained, as well as the smoothest trajectory. Moreover, the distribution function does not present issues with large jerks or short trajectories; as a result, it can lead to smaller torque resolutions. For those reasons, in future works it is desirable to use an LCD touch screen to control a robotic arm in which small trajectories or any other kind of control can be presented, giving trajectories with small or large jerks. Also, this distribution function is easy to apply and generates a trajectory with an insignificant compile time, because this method requires only one computation to create the trajectory, which is $\zeta$, unlike other methods such as cubic polynomials and LSPB, which imply more calculations. The computation of $\zeta$ could also be used to design controls, for example PID controllers or other electronic controls, improving the designs and making them more economical. In future works, the aim is to generate different trajectory motions with via points or using other interesting functions to generate trajectories, and to find more applications that can help solve more problems using this distribution function.
References 1. Grau, A., Indri, M., Bello, L.L., Sauter, T.: Industrial robotics in factory automation: from the early stage to the Internet of Things. In: 43rd Annual Conference of the IEEE Industrial Electronics Society, Beijing, pp. 6159–6164. IEEE Press (2017) 2. Yenorkar, R., Chaskar, U.M.: GUI based pick and place robotic arm for multipurpose industrial applications. In: Second International Conference on Intelligent Computing and Control Systems, Madurai, India, pp. 200–203 (2018) 3. Burgner-Kahrs, J., Rucker, D.C., Choset, H.: Continuum robots for medical applications: a survey. IEEE Trans. Robot. 31(6), 1261–1280 (2015) 4. Murali, A., Sen, S., Kehoe, B., Garg, A., Mcfarland, S., Patil, S., Boyd, W.D., Lim, S., Abbeel, P., Goldberg, K.: Learning by observation for surgical subtasks: multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms. In: IEEE International Conference on Robotics and Automation, Seattle, WA, pp. 1202–1209. IEEE Press (2015) 5. Williams, R.L.: Simplified robotics joint-space trajectory generation with a via point using a single polynomial. J. Robot. 2013, 1–6 (2013) 6. Dong, M., Yao, G., Li, J., Zhang, L.: Research on attitude interpolation and tracking control based on improved orientation vector SLERP method. Robotica 38(4), 719–731 (2020) 7. Sidobre, D., Desormeaux, K.: Smooth cubic polynomial trajectories for human-robot interactions. J. Intell. Robot. Syst. 95, 851–869 (2018) 8. Hong-Jun, H., Yungdeug, S., Jang-Mok, K.: A trapezoidal velocity profile generator for position control using a feedback strategy. Energies 12(7), 1–14 (2019) 9. Sciavicco, L., Siciliano, B.: Modelling and Control of Robot Manipulators. Springer, London (2010) 10. Liu, X., Qiu, C., Zeng, Q., Li, A.: Kinematics analysis and trajectory planning of collaborative welding robot with multiple manipulators. Procedia CIRP 81, 1034–1039 (2019) 11. Zhao, R., Shi, Z., Guan, Y., Shao, Z., Zhang, Q., Wang, G.: Inverse kinematic solution of 6R robot manipulators based on screw theory and the Paden-Kahan subproblem. Int. J. Adv. Robot. Syst. 15(6), 1–11 (2018) 12. Wang, Y., Su, C., Wang, H., Zhang Z., Sheng, C., Cui, W., Liang, X., Lu, X.: A convenient kinematic calibration and inverse solution method for 4-DOF robot. In: Chinese Control and Decision Conference, Nanchang, China, pp. 5747–5750. IEEE Press (2019) 13. Csanádi, B., Tar, J.K., Bitó, J.F.: Matrix inversion-free quasi-differential approach in solving the inverse kinematic task. In: 17th International Symposium on Computational Intelligence and Informatics, Budapest, pp. 000061–000066. IEEE Press (2016) 14. Liu, W., Chen, D., Steil, J.J.: Analytical inverse kinematics solver for anthropomorphic 7DOF redundant manipulators with human-like configuration constraints. J. Intell. Robot. Syst. 86(1), 63–79 (2017) 15. Kuhlemann, I., Schweikard, A., Ernst, F., Jauer, P.: Robust inverse kinematics by configuration control for redundant manipulators with seven DOF. In: 2nd International Conference on Control, Automation and Robotics, Hong Kong, pp. 49–55. IEEE Press (2016) 16. Gong, M., Li, X., Zhang, L.: Analytical inverse kinematics and self-motion application for 7-DOF redundant manipulator. IEEE Access 7, 18662–18674 (2019) 17. Aroca-Trujillo, J.L., Pérez-Ruiz, A., Rodriguez-Serrezuela, R.: Generation and control of basic geometric trajectories for a robot manipulator using CompactRIO®. J. Robotics. 2017, 1–11 (2017) 18. 
Barghi-Jond, H., Nabiyev, V., Benveniste, R.: Trajectory planning using high order polynomials under acceleration constraint. J. Optim. Ind. Eng. 10(21), 1–6 (2017) 19. Mukadam, M., Yan, X., Boots, B.: Gaussian process motion planning. In: International Conference on Robotics and Automation, pp. 9–15, IEEE Press, Stockholm (2016)
20. Park, K.I.: Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer, Cham (2018) 21. Zhiyong, Z., Dongjian, H., Lei, T., Lingshuai, M.: Picking robot arm trajectory planning method. Sens. Transd. 162, 11–20 (2014) 22. García Martínez, J.R., Rodríguez Reséndiz, J., Martínez Prado, M.Á., Cruz Miguel, E.E.: Assessment of jerk performance s-curve and trapezoidal velocity profiles. In: International Engineering Congress, Santiago de Queretaro, pp. 1–7. IEEE Press (2017)
Towards Development of a Mobile Application to Evaluate Mental Health: Systematic Literature Review Jorge A. Solís-Galván, Sodel Vázquez-Reyes(B) , Margarita Martínez-Fierro, Perla Velasco-Elizondo, Idalia Garza-Veloz, and Claudia Caldera-Villalobos Universidad Autónoma de Zacatecas, Campus Siglo XXI, Carr. Zacatecas-Guadalajara Km. 6, Ejido “La Escondida”, 98160 Zacatecas, Mexico {36171296,vazquezs,margaritamf,pvelasco,idaliagv}@uaz.edu.mx, [email protected]
Abstract. Mental disorders such as depression, anxiety, and stress are increasingly present in the lives of many people, which has led these disorders to be a latent public health problem today. This situation has prompted the development of new solutions focused on improving people’s mental health status, some of them based on mobile applications. This article presents the results of a systematic literature review carried out with the aim of identifying mobile applications that address the most common mental health conditions, focusing on the design, development and/or evaluation of this kind of application. 152 primary studies were selected; 86 of them reported on evaluations of mental health applications, and 72 addressed more than one mental disorder, highlighting depression, stress, and anxiety. Keywords: Mobile application · Mental health · Mental disorders · Depression · Anxiety · Stress · Anguish
1 Introduction
“Mental health is a complex phenomenon determined by multiple social, environmental, biological, and psychological factors, and includes conditions such as depression, epilepsy, dementia, anxiety, developmental disorders in childhood, and schizophrenia” [1]. The World Health Organization (WHO) predicts by 2030 that mental illness will be the main burden of disease worldwide [2]. In Mexico, the results of the National Survey of Psychiatric Epidemiology (ENEP for its initials in Spanish) indicated that approximately one in five individuals has at least one mental disorder at some point in their life [3]. Unfortunately, there is a lack of information about the use of services for mental health problems by Mexican university students, a population that according to various researches is at greater risk of suffering from a mental disorder. Although students can benefit from the health services offered
by their universities, little is known about their willingness to seek help from these services [4]. Due to the consequences of mental health conditions such as depression, which is now the world’s leading cause of disability, the need for innovative solutions is evident [2]. The prevalence of mobile phone use today has enabled mobile app-based mental health interventions to become an increasingly popular approach to combat traditional barriers to accessing mental health services [5]. Due to the impact of mental health conditions on people, some of them, such as depression, with the greatest impact on young people [6], the Programa Académico de Medicina Humana de la Universidad Autónoma de Zacatecas seeks to find out the state of mental health of the student population, to identify the conditions that could be affecting students and to try to find solutions at early stages. To achieve this goal, the development of a solution based on a mobile application has been considered. The systematic literature review aims to identify the mobile applications that have been developed in the field of mental health, in such a way that it is possible to locate the successes and mistakes made in the development of these applications, in order to use this knowledge to develop the desired solution at the Universidad Autónoma de Zacatecas.
2 Methods
For the development of this research, the three main phases of the Systematic Literature Review (SLR) were taken as the basis: planning the review, conducting the review and reporting the results [7]. The following sections detail how each of the phases was executed.
2.1 Planning the Review
Planning is the first phase of a systematic review. Within this phase, the first activity was to identify the need for the review.
2.1.1 Identify the Need to Perform the SLR
The Programa Académico de Medicina Humana has factors that could influence the origin of mental health conditions in students. According to the research, the symptoms of depression and physical damage in students in the health area may be related to their excessive workload [8]. From this research, the need was identified for an auxiliary solution for the detection of mental health conditions, which should be developed from the best practices that have been used to develop tools in the field of mental health. Hence the need for this review, which aims to identify the mental health applications that have been developed, in order to detect the successes and mistakes that have been made.
2.1.2 Research Questions
For the review process, research questions were formulated that were useful throughout the process: 1) Which existing mobile applications are currently used to treat the most common mental health-related conditions such as depression, stress, anguish, and anxiety? 2) What strategies were used for the development of mobile applications focused on mental health?
2.1.3 Search String
The keywords that were part of the search string were extracted from the research questions. Indicating synonyms and related terms, in addition to the use of logical connectors, the search string was generated, which is shown in Table 1.

Table 1. Keywords and search string

| Keywords | Synonyms or related words |
|----------|---------------------------|
| Mobile application | Mobile App, Cellphone App, Smartphone app, App |
| Detection | Discovery, Location |
| Mental health, Depression, Anxiety, Stress, Anguish | |

Search string: (((mobile OR smartphone OR cellphone) AND (app OR application)) OR app OR application) AND (detection OR discovery OR location) AND ((mental AND health) OR depression OR anxiety OR anguish OR stress)
2.1.4 Data Sources Selection
As the last step of this phase, the data sources were chosen for executing the search. One of the criteria used to choose these sources was that they are considered highly relevant in the field of Software Engineering; even though this work is carried out in conjunction with the Health Sciences area, the aim of this review is focused on finding software solutions. Finally, the chosen sources were: 1) IEEE Xplore, 2) ACM Digital Library, 3) ScienceDirect and 4) Journal of Medical Internet Research (JMIR).
2.2 Conducting the Review
In the second phase of the SLR process, the primary studies are selected. This section describes the main activities that were carried out to meet the objective of the phase.
2.2.1 Inclusion and Exclusion Criteria
Inclusion criteria: 1) the title and abstract of the study are in Spanish or English; 2) the study was published between 2010 and 2020; 3) the study has at least two keywords in the
title and abstract; and 4) the study is a review, evaluation, design or development of a mobile mental health application. Exclusion criteria: 1) the study is not accessible; 2) it is a repeated study; and 3) it does not contain information about a review, evaluation, design or development of a mobile mental health application.
2.2.2 Primary Studies Selection
For the selection process, 4 main steps were identified. In step 1, the search string generated in the first phase was used, adapting it to the conditions of each of the search engines. In step 2, only studies whose title and abstract are in Spanish or English were selected, whose publication year is between 2010 and 2020, and whose title and abstract contain at least two of the keywords previously indicated in Table 1. In step 3, the titles and abstracts of all the studies were read to identify those that met the requested features. Finally, in the last step, the remaining criteria were applied; for some studies it was necessary to read them completely to identify those that met all the criteria. Studies that passed this last step were selected as primary studies. Figure 1 shows the filtering of the studies, from the 450,915 obtained at the beginning down to the 152 primary studies. Appendix A shows information on each of the primary studies. The complete references of the primary studies can be accessed at the following link: https://bit.ly/31w0T9n.
Fig. 1. Selection process of primary studies.
2.2.3 Data Extraction
From the studies selected as primary, the information from each of them was extracted; a template in a text document was used as a basis. The data extracted from the studies were: title, author(s), publication year, country, keywords, data source, goal, problem, strategy, findings and summary.
3 Results
This section presents the analysis of the main results of the SLR.
3.1 Studies Selection and Inclusion
Of the total of 450,915 studies obtained in the initial search of the data sources, 448,287 were excluded after applying the first three inclusion criteria. 2,182 studies were excluded after reading the title and abstract, leaving a total of 346 studies. Finally, 194 studies were filtered out, leaving a total of 152 studies that met the inclusion and exclusion criteria; these studies represent the primary studies.
3.2 Primary Studies Features
The selected primary studies were published between 2012 and 2020. Figure 2 shows the distribution of these studies by year of publication; the distribution shows an increasing trend of studies related to technological solutions in the field of mental health over the years. This gives relevance to our proposal for the mobile application to assess mental health.
Fig. 2. Primary studies distribution by publication year.
The primary studies had their origin in 4 continents of the world, highlighting cities in Europe: Manchester, Cambridge, Lancaster, Hamburg, Oxford and Bristol; in North America: Toronto, San Francisco, Palo Alto, Seattle and Chicago; in Asia: Hong Kong, Beijing, Seoul, Bangalore and Manila; and in Oceania: Sydney, Melbourne and Geelong.
The country where the subject has been studied the most is the United States: 50 primary studies were published in that country. According to the inclusion and exclusion criteria, the articles that reported results on any of the following processes were selected: 1) design of a mental health application, 2) development process of a mental health application, 3) evaluation of an application, or 4) analysis of mental health applications. Due to the heterogeneity of the selected studies, most of them reported on more than one of the mentioned categories: 86 studies reported only on the evaluation of one or more applications, 22 reported on the analysis of mental health applications, 20 reported the design, development, and evaluation of an application, 15 the design and development processes, 5 reported only on the process of developing a mental health application, 2 reported on the development and evaluation of a mental health application, and only 1 reported on the design process of a mental health application. Regarding the mental health conditions addressed in each of the primary studies, they were categorized based on the classification found in the Diagnostic and Statistical Manual of Mental Disorders of the American Psychiatric Association [9]. Of the total primary studies, 72 were classified in the category of “mental disorders” because more than one mental disorder was addressed; within these studies, the conditions most frequently addressed jointly were depression, anxiety, and stress. 41 studies involved conditions belonging to the depressive disorders classification; 19 involved conditions classified as trauma- and stressor-related disorders; 14 addressed conditions classified as anxiety disorders; 2 focused on conditions classified in the schizophrenia spectrum and other psychotic disorders; 2 addressed bipolar disorder; and the remaining 2 involved a condition classified in the category of conditions that need further study, which refers to suicidal behavior. The complete distribution of the primary studies by disorders is shown in Fig. 3.
Fig. 3. Primary studies distribution by disorder.
Depressive, anxiety, and stress disorders have been identified as the most studied conditions; this may be due to depression affecting millions of people in the world today.
In addition, depression is recognized as the main cause of disability worldwide. After analyzing these results, it is possible to recognize the need to detect these conditions in their early stages. 3.3 Mental Health Applications Among the primary studies, 130 in total involved the design, development, and/or evaluation process of one or more mobile applications, mentioning 121 applications. Of these 130 studies, 122 describe mobile applications; in 5 of the remaining primary studies the names of the mobile applications studied were not reported, and the mobile applications described in the remaining 3 primary studies were not included in the list because they only reached the prototype phase. Detailed information on the applications is shown in Appendix B. The features offered by the applications extracted from the studies are very diverse. However, some common functionalities stand out, such as: registration of user activities; monitoring of the symptoms of a particular condition; a diary in which users can record how they felt during the day; information about the conditions being addressed; and the application of questionnaires to detect a specific mental health condition; some more specialized applications even record compliance with medication in patients who require it. Regarding the operating system on which the mobile applications run, 47 applications were developed for both dominant operating systems in the mobile phone market, Android and iOS; 39 were developed only for Android; 21 were implemented only for iOS; and for the remaining 14 applications the target operating system was not reported. These results show the preference of application developers for implementing their solutions on the two strongest operating systems on the market today. Some studies mentioned that developing applications compatible with both operating systems makes it possible to reach a greater number of users. 3.4 Strategies for the Development of Mobile Mental Health Applications Within the set of primary studies, the design and/or development process of 44 mental health applications was reported, and most of these studies highlighted the strategies used in developing these applications. The most mentioned strategy was the use of a user-centered design approach, applying techniques such as the creation of "personas", participatory design, empathy maps, focus groups, and interviews. Another strategy mentioned was consultation with professionals in the mental health area in order to ground the development of the mobile application in theory. In studies reporting the evaluation of one or more mental health applications, randomized controlled trials were conducted with potential users of the applications. The objective of some of these trials was to evaluate the acceptance and usability of the applications. Within the usability evaluations, aspects such as ease of use, ease of learning, quality of information, and satisfaction, among others, were studied. Among the most widely used scales to assess usability were the System Usability Scale (SUS) and the Mobile Application Rating Scale (MARS).
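For reference, SUS scoring follows a fixed formula: each of the ten items is answered on a 1 to 5 scale, odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0 to 100 score. A minimal Python sketch (the example responses are invented):

def sus_score(responses):
    # Compute the System Usability Scale score from ten 1-5 responses.
    # Odd items contribute response - 1; even items contribute 5 - response.
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # prints 80.0

MARS, by contrast, is scored as mean ratings over its engagement, functionality, aesthetics, and information subscales rather than as a single weighted sum.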
Studies that analyzed sets of mental health applications used an app review approach, most of them in the leading app stores: the App Store for iOS apps and the Play Store for Android apps. After the applications were obtained and filtered, they were evaluated quantitatively and qualitatively. Quantitative evaluations were performed based on the number of downloads and rating scores; qualitative evaluations were based on user reviews of the applications. In general, the problems that stood out the most concerned the usability of the applications.
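A hedged sketch of the quantitative side of such a review follows; the primary studies do not report a single agreed formula, so the log-scaled combination of downloads and ratings below is purely illustrative:

import math

def quantitative_score(downloads, rating, w_downloads=0.5, w_rating=0.5):
    # Combine download counts and store ratings into one illustrative score.
    # Downloads are log-scaled so extremely popular apps do not dominate;
    # ratings (1-5) are normalized to 0-1. The weights are assumptions.
    popularity = math.log10(downloads + 1) / 9.0  # ~1.0 near a billion downloads
    quality = (rating - 1) / 4.0
    return w_downloads * popularity + w_rating * quality

apps = {"AppA": (500_000, 4.4), "AppB": (12_000, 4.8)}
print(sorted(apps, key=lambda a: quantitative_score(*apps[a]), reverse=True))

The qualitative side (classifying user reviews into problem categories) is the part the reviewed studies emphasize most, and it does not reduce to a formula.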
4 Conclusions and Future Work The main aim of the SLR was to identify mobile applications that address the most common mental health conditions. After the primary study selection process, out of a total of 450,915 studies, 152 met the inclusion and exclusion criteria. Analysis of the results shows that a wide variety of mobile applications have been developed to improve the mental health of users. Most of these applications mainly address conditions such as depression, anxiety, and stress. However, a search in the iOS and Android application stores carried out in conjunction with this review showed that more than 50% of the applications could not be found in the application stores, which suggests that those projects were not continued. Some of the primary studies that addressed the analysis of mental health applications mention the main problems that these types of tools have. The studies considered most relevant and the problems they identified are discussed below. Estrada, Wadley, and Lederman report the results of an analysis of the comments provided by users in the application stores; from the analysis, four main user concerns were identified: 1) little emotional support from the application, 2) the application can become a distraction from real life, 3) the information provided to the application may differ from what users tell other people, and 4) the use of applications can discourage face-to-face interactions. Taking these results as a reference, it is possible to identify some of the features that matter most to users of mental health applications and to apply this knowledge to the tool to be developed: emotional support from the app, the relation between real life and the application, the connection between the information provided to the app and the user's actions, and the usefulness of the app in encouraging the user to attend therapy. Nicholas, Larsen, Proudfoot, and Christensen reported in their study that the mobile apps they analyzed had problems of adherence to mental health theory. Most of the mobile applications did not refer to the guidelines used in clinical practice or to information related to psychoeducation. Even though all the applications specified the mental disorder for which they were developed, after evaluating their content it was identified that few were developed according to the specific features needed to address each disorder. Another problem found is related to privacy and security: very few applications provided the user with a statement about the protection, storage, and distribution of their information. A very serious problem is the fact that the applications do not respond to situations in which users present extreme moods, including suicidal ideation. The results of this study highlight the deficiencies that exist in the mental health mobile applications
available on the market, such as bugs, issues related to information privacy, the lack of features to address each disorder, the response of the application to extreme situations, usability problems, and so on; as software developers, it is helpful to keep these shortcomings in mind when working on developments, in this case related to mental health. In this way, it is possible to develop quality, evidence-based interventions for mental health that help to address the main mental disorders affecting a large part of the population. In the study by Alqahtani and Orji, a usability-focused analysis was conducted from user reviews. The study analyzed the comments of users of applications related to mental health, anxiety, depression, or emotions. The issues identified in the comments were classified as bugs, UI design issues, lack of explanations, data loss, connectivity issues, and battery and memory usage issues. Taking the results of these studies as a reference, it is possible to identify some recommendations to avoid the aforementioned problems. Since deficiencies related to usability have been identified as having a very strong impact on the use and adoption of mobile health applications, one of the main recommendations is to use methods such as user-centered design; in this way, it is possible to develop mobile applications that take into account the needs and preferences of end users. A suggestion that is constantly mentioned emphasizes developing applications based on mental health theory. It is also important that these applications adhere to the clinical practices used by health professionals; in this way, it will be possible to develop tools with specific features to address each disorder correctly. The privacy and security of users' information is a very important element in health-related applications, which is why it is important to implement mechanisms that protect users' information, as well as to inform users of how their personal information will be treated. A final, but no less important, recommendation is to perform usability tests in order to identify errors from the end user's point of view and solve them before the application is made available to the public. These recommendations should not be taken lightly. As noted above, the development of health-focused applications has grown in recent years; however, most of these applications have problems that can result in low rates of adoption and/or use of the tool. Therefore, it is considered necessary to take each of the points mentioned above into account when developing tools in the field of mental health. The solution to be developed requires particular features detected previously, for example: the application of inventories for the detection of conditions such as anxiety, depression, and stress; a daily record of students' mood; and a web platform that shows the statistics of the inventory results and the students' state of mind, so that the authorities of the Universidad Autónoma de Zacatecas can make decisions proactively. As a consequence, it is unfeasible to use any of the applications found to solve the problems previously raised. However, the strategies, successes, and good practices used in the development of these applications, such as user-centered design, consultation with health professionals, and usability tests, will be taken as the basis for the development of the proposed application.
As future work, a mobile application will be developed to assist in the early detection of conditions such as depression, stress, anxiety, and anguish.
Acknowledgement. Thanks to Dr. Aldonso Becerra-Sánchez for his advice on defining the background and planning the review used in the early stage of this work.
Appendix A. Primary Studies

Id | Name | Year | Disorder
PS1 | Mobile Mental Wellness Training for Stress Management: Feasibility and Design Implications Based on a One-Month Field Study | 2013 | Stress
PS2 | A Comparison of Two Delivery Modalities of a Mobile Phone-Based Assessment for Serious Mental Illness: Native Smartphone Application vs Text-Messaging Only Implementations | 2013 | Mental disorders
PS3 | The State of Mental Digi-Therapeutics: A Systematic Assessment of Depression and Anxiety Apps Available for Arabic Speakers | 2020 | Mental disorders
PS4 | Usability Issues in Mental Health Applications | 2019 | Mental disorders
PS5 | A mobile application to complement face-to-face interactions in psychological intervention for social anxiety management | 2019 | Social anxiety
PS6 | The Use and Effectiveness of Mobile Apps for Depression: Results from a Fully Remote Clinical Trial | 2016 | Depression
PS7 | Towards Early Detection of Depression through Smartphone Sensing | 2019 | Depression (MDD)
PS8 | The Prevalence and Usage of Mobile Health Applications among Mental Health Patients in Saudi Arabia | 2018 | Mental disorders
PS9 | Effects of a 12-min Smartphone-Based Mindful Breathing Task on Heart Rate Variability for Students with Clinically Relevant Chronic Pain, Depression, and Anxiety: Protocol for a Randomized Controlled Trial | 2019 | Mental disorders
PS10 | Engagement in mobile phone app for self-monitoring of emotional wellbeing predicts changes in mental health: MoodPrism | 2018 | Mental disorders
PS11 | A randomized controlled trial of three smartphone apps for enhancing public mental health | 2018 | Mental disorders
PS12 | Development and Pilot Evaluation of Smartphone-Delivered Cognitive Behavior Therapy Strategies for Mood and Anxiety-Related Problems: MoodMission | 2018 | Mental disorders
PS13 | Applying the Principles for Digital Development: Case Study of a Smartphone App to Support Collaborative Care for Rural Patients with Posttraumatic Stress Disorder or Bipolar Disorder | 2018 | Bipolar disorder
PS14 | Acceptability of mHealth augmentation of Collaborative Care: A mixed methods pilot study | 2018 | Mental disorders
PS15 | There is a non-evidence-based app for that: A systematic review and mixed methods analysis of depression- and anxiety-related apps that incorporate unrecognized techniques | 2020 | Mental disorders
PS16 | Transdiagnostic Mobile Health: Smartphone Intervention Reduces Depressive Symptoms in People with Mood and Psychotic Disorders | 2019 | Mental disorders
PS17 | Self-Reflected Well-Being via a Smartphone App in Clinical Medical Students: Feasibility Study | 2018 | Mental disorders
PS18 | Creating Live Interactions to Mitigate Barriers (CLIMB): A Mobile Intervention to Improve Social Functioning in People with Chronic Psychotic Disorders | 2016 | Psychosis
PS19 | Does a Mobile Phone Depression-Screening App Motivate Mobile Phone Users with High Depressive Symptoms to Seek a Health Care Professional's Help? | 2016 | Depression
PS20 | MoodHacker Mobile Web App with Email for Adults to Self-Manage Mild-to-Moderate Depression: Randomized Controlled Trial | 2016 | Depression
PS21 | Adding a smartphone app to Internet-based self-help for social anxiety: a randomized controlled trial | 2018 | Social anxiety
PS22 | Behavior Analytics of Users Completing Ecological Momentary Assessments in the Form of Mental Health Scales and Mood Logs on a Smartphone App | 2019 | Mental disorders
PS23 | A Mobile Application for Campus-based Psychosocial Wellness Program | 2016 | Mental disorders
PS24 | Smartphone app to investigate the relationship between social connectivity and mental health | 2017 | Mental disorders
PS25 | A Stress Management App Intervention for Cancer Survivors: Design, Development, and Usability Testing | 2018 | Stress
PS26 | Addressing Depression Comorbid with Diabetes or Hypertension in Resource-Poor Settings: A Qualitative Study About User Perception of a Nurse-Supported Smartphone App in Peru | 2019 | Depression
PS27 | Counseling with Guided Use of a Mobile Well-Being App for Students Experiencing Anxiety or Depression: Clinical Outcomes of a Feasibility Trial Embedded in a Student Counseling Service | 2019 | Mental disorders
PS28 | Consumer smartphone apps marketed for child and adolescent anxiety: A systematic review and content analysis | 2018 | Anxiety
PS29 | A Stress Relief App Intervention for Newly Employed Nursing Staff: Quasi-Experimental Design | 2019 | Stress
PS30 | A multi-faceted approach to characterizing user behavior and experience in a digital mental health intervention | 2019 | Mental disorders
PS31 | Mood and Stress Evaluation of Adult Patients with Moyamoya Disease in Korea: Ecological Momentary Assessment Method Using a Mobile Phone App | 2019 | Stress
PS32 | A Novel Mobile Phone App Intervention with Phone Coaching to Reduce Symptoms of Depression in Survivors of Women's Cancer: Pre-Post Pilot Study | 2020 | Depression
PS33 | Developing Mental or Behavioral Health Mobile Apps for Pilot Studies by Leveraging Survey Platforms: A Do-it-Yourself Process | 2020 | Depression
PS34 | Response Time as an Implicit Self-Schema Indicator for Depression Among Undergraduate Students: Preliminary Findings from a Mobile App–Based Depression Assessment | 2019 | Depression
PS35 | Intermittent mindfulness practice can be beneficial, and daily practice can be harmful. An in depth, mixed methods study of the "Calm" app's (mostly positive) effects | 2020 | Mental disorders
PS36 | A New Mental Health Mobile App for Well-Being and Stress Reduction in Working Women: Randomized Controlled Trial | 2019 | Stress
PS37 | A Systematic, Multi-domain Review of Mobile Smartphone Apps for Evidence-Based Stress Management | 2016 | Stress
PS38 | Development and Preliminary Feasibility Study of a Brief Behavioral Activation Mobile Application (Behavioral Apptivation) to be used in Conjunction with Ongoing Therapy | 2018 | Depression
PS39 | Pilot randomized controlled trial of a Spanish-language Behavioral Activation mobile app (¡Aptívate!) for the treatment of depressive symptoms among United States Latinx adults with limited English proficiency | 2019 | Depression
PS40 | Pilot Randomized Trial of a Self-Help Behavioral Activation Mobile App for Utilization in Primary Care | 2019 | Depression
PS41 | A Mobile Phone App to Improve the Mental Health of Taxi Drivers: Single-Arm Feasibility Trial | 2020 | Mental disorders
PS42 | Mobile Apps for Suicide Prevention: Review of Virtual Stores and Literature | 2017 | Suicide risk
PS43 | A Mental Health Chatbot for Regulating Emotions (SERMO) Concept and Usability Test | 2020 | Mental disorders
PS44 | Salutary effects of an attention bias modification mobile application on biobehavioral measures of stress and anxiety during pregnancy | 2017 | Mental disorders
PS45 | Development of an Ambulatory Biofeedback App to Enhance Emotional Awareness in Patients with Borderline Personality Disorder: Multicycle Usability Testing Study | 2019 | Mental disorders
PS46 | Integration of a Technology-Based Mental Health Screening Program into Routine Practices of Primary Health Care Services in Peru (The Allillanchu Project): Development and Implementation | 2018 | Mental disorders
PS47 | Brief report: Feasibility of a mindfulness and self-compassion based mobile intervention for adolescents | 2016 | Mental disorders
PS48 | A mobile application for panic disorder and agoraphobia: Insights from a multi-methods feasibility study | 2020 | Panic disorder
PS49 | Long-Term Outcomes of a Therapist-Supported, Smartphone-Based Intervention for Elevated Symptoms of Depression and Anxiety: Quasi experimental, Pre-Postintervention Study | 2019 | Mental disorders
PS50 | A Feasibility Trial of Power Up: Smartphone App to Support Patient Activation and Shared Decision Making for Mental Health in Young People | 2019 | Mental disorders
PS51 | Development and Long-Term Acceptability of ExPRESS, a Mobile Phone App to Monitor Basic Symptoms and Early Signs of Psychosis Relapse | 2019 | Psychosis
PS52 | 'It feels different from real life': Users' Opinions of Mobile Applications for Mental Health | 2015 | Mental disorders
PS53 | A Mobile App–Based Intervention for Depression: End-User and Expert Usability Testing Study | 2018 | Depression
PS54 | Cognitive and Behavioral Skills Exercises Completed by Patients with Major Depression During Smartphone Cognitive Behavioral Therapy: Secondary Analysis of a Randomized Controlled Trial | 2018 | Depression
PS55 | Young People's Response to Six Smartphone Apps for Anxiety and Depression: Focus Group Study | 2019 | Mental disorders
PS56 | Automated Mobile Phone–Based Mental Health Resource for Homeless Youth: Pilot Study Assessing Feasibility and Acceptability | 2019 | Mental disorders
PS57 | Feasibility of a Therapist-Supported, Mobile Phone–Delivered Online Intervention for Depression: Longitudinal Observational Study | 2019 | Depression
PS58 | A Peer-Led Electronic Mental Health Recovery App in a Community-Based Public Mental Health Service: Pilot Trial | 2019 | Mental disorders
PS59 | Early Signs Monitoring to Prevent Relapse in Psychosis and Promote Well-Being, Engagement, and Recovery: Protocol for a Feasibility Cluster Randomized Controlled Trial Harnessing Mobile Phone Technology Blended with Peer Support | 2020 | Mental disorders
PS60 | Validity of Mind Monitoring System as a Mental Health Indicator using Voice | 2016 | Mental disorders
PS61 | Efficacy of an internet and app-based gratitude intervention in reducing repetitive negative thinking and mechanisms of change in the intervention's effect on anxiety and depression: Results from a randomized controlled trial | 2019 | Mental disorders
PS62 | A Behavioral Activation Mobile Health App for Smokers with Depression: Development and Pilot Evaluation in a Single-Arm Trial | 2019 | Depression
PS63 | Youth Codesign of a Mobile Phone App to Facilitate Self-Monitoring and Management of Mood Symptoms in Young People with Major Depression, Suicidal Ideation, and Self-Harm | 2018 | Mental disorders
PS64 | Gamification in Stress Management Apps: A Critical App Review | 2017 | Stress
PS65 | Efficacy of the Mindfulness Meditation Mobile App "Calm" to Reduce Stress Among College Students: Randomized Controlled Trial | 2019 | Stress
PS66 | Smartphone-based ecological momentary assessment for Chinese patients with depression: An exploratory study in Taiwan | 2016 | Depression
PS67 | Effect of Brief Biofeedback via a Smartphone App on Stress Recovery: Randomized Experimental Study | 2019 | Stress
PS68 | An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study | 2018 | Depression (MDD)
PS69 | Use of a smartphone application to screen for depression and suicide in South Korea | 2017 | Mental disorders
PS70 | Accuracy of a Chatbot (Ada) in the Diagnosis of Mental Disorders: Comparative Case Study with Lay and Expert Users | 2019 | Mental disorders
PS71 | Depression Screening Using Daily Mental-Health Ratings from a Smartphone Application for Breast Cancer Patients | 2016 | Depression
PS72 | Associations Among Emotional State, Sleep Quality, and Resting-State EEG Spectra: A Longitudinal Study in Graduate Students | 2020 | Mental disorders
PS73 | Free mobile apps on depression for Indian users: A brief overview and critique | 2017 | Depression
PS74 | Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps | 2016 | Depression
PS75 | Uptake and usage of IntelliCare: A publicly available suite of mental health and well-being apps | 2016 | Mental disorders
PS76 | Android and iPhone Mobile Apps for Psychosocial Wellness and Stress Management: Systematic Search in App Stores and Literature Review | 2020 | Stress
PS77 | Evaluation of an mHealth App (DeStressify) on University Students' Mental Health: Pilot Trial | 2018 | Mental disorders
PS78 | Designing Daybuilder: An Experimental App to Support People with Depression | 2012 | Depression
PS79 | A randomized controlled trial on a smartphone self-help application (Be Good to Yourself) to reduce depressive symptoms | 2018 | Depression
PS80 | Stress management for middle managers via an acceptance and commitment-based smartphone application: A randomized controlled trial | 2014 | Stress
PS81 | A fully automated conversational agent for promoting mental well-being: A pilot RCT using mixed methods | 2017 | Mental disorders
PS82 | Efficacy and Moderation of Mobile App–Based Programs for Mindfulness-Based Training, Self-Compassion Training, and Cognitive Behavioral Psychoeducation on Mental Health: Randomized Controlled Noninferiority Trial | 2018 | Mental disorders
PS83 | Smartphone Cognitive Behavioral Therapy as an Adjunct to Pharmacotherapy for Refractory Depression: Randomized Controlled Trial | 2017 | Depression
PS84 | Interaction and Engagement with an Anxiety Management App: Analysis Using Large-Scale Behavioral Data | 2018 | Anxiety
PS85 | Use of a Mobile Phone App to Treat Depression Comorbid with Hypertension or Diabetes: A Pilot Study in Brazil and Peru | 2019 | Depression
PS86 | The challenger app for social anxiety disorder: New advances in mobile psychological treatment | 2015 | Social anxiety
PS87 | iCare-Stress: An Integrated Mental Health Software | 2017 | Mental disorders
PS88 | Guided Self-Help Works: Randomized Waitlist Controlled Trial of Pacifica, a Mobile App Integrating Cognitive Behavioral Therapy and Mindfulness for Stress, Anxiety, and Depression | 2019 | Mental disorders
PS89 | MedLink: A mobile intervention to address failure points in the treatment of depression in general medicine | 2015 | Depression
PS90 | IntelliCare: An Eclectic, Skills-Based App Suite for the Treatment of Depression and Anxiety | 2017 | Mental disorders
PS91 | Comparing usage of a web and app stress management intervention: An observational study | 2018 | Stress
PS92 | Incorporation of a Stress Reducing Mobile App in the Care of Patients with Type 2 Diabetes: A Prospective Study | 2017 | Stress
PS93 | Anti-depression and anti-suicidal application | 2020 | Depression
PS94 | Immediate Mood Scaler: Tracking Symptoms of Depression and Anxiety Using a Novel Mobile Mood Scale | 2017 | Mental disorders
PS95 | Assessing Real-Time Moderation for Developing Adaptive Mobile Health Interventions for Medical Interns: Micro-Randomized Trial | 2020 | Mental disorders
PS96 | Capturing and Analyzing Pervasive Data for SmartHealth | 2014 | Mental disorders
PS97 | Mobile Apps for Bipolar Disorder: A Systematic Review of Features and Content Quality | 2015 | Bipolar disorder
PS98 | The WorkingWell Mobile Phone App for Individuals with Serious Mental Illnesses: Proof-of-Concept, Mixed-Methods Feasibility Study | 2018 | Mental disorders
PS99 | Smartphone-based safety planning and self-monitoring for suicidal patients: Rationale and study protocol of the CASPAR (Continuous Assessment for Suicide Prevention and Research) study | 2018 | Suicide risk
PS100 | Reviewing the data security and privacy policies of mobile apps for depression | 2019 | Depression
PS101 | Testing an app-assisted treatment for suicide prevention in a randomized controlled trial: Effects on suicide risk and depression | 2019 | Mental disorders
PS102 | Evaluation of a Mobile Device Survey System for Behavioral Risk Factors (SHAPE): App Development and Usability Study | 2019 | Mental disorders
PS103 | Developing an Application for Dealing with Depression through the Analysis of Information and Requirements found in Groups from a Social Network | 2018 | Depression
PS104 | Psychologist in a Pocket: Lexicon Development and Content Validation of a Mobile-Based App for Depression Screening | 2016 | Depression
PS105 | A Randomized Controlled Trial of the PTSD Coach Mobile Health App at Reducing Pain and Psychological Symptoms among Injured Emergency Department Patients: Preliminary Results | 2019 | Posttraumatic stress disorder (PTSD)
PS106 | How private is your mental health app data? An empirical study of mental health app privacy policies and practices | 2019 | Mental disorders
PS107 | Designing a Mobile Application to Support the Indicated Prevention and Early Intervention of Childhood Anxiety | 2015 | Anxiety
PS108 | eMindLog: Self-Measurement of Anxiety and Depression Using Mobile Technology | 2017 | Mental disorders
PS109 | Worker Preferences for a Mental Health App Within Male-Dominated Industries: Participatory Study | 2018 | Mental disorders
PS110 | Development and initial evaluation of a mobile application to help with mindfulness training and practice | 2017 | Mental disorders
PS111 | Efficacy of the Digital Therapeutic Mobile App BioBase to Reduce Stress and Improve Mental Well-Being Among University Students: Randomized Controlled Trial | 2020 | Stress
PS112 | Co-designing the Aboriginal and Islander Mental Health Initiative for Youth (AIMhi-Y) App: Results of a formative mixed methods study | 2020 | Mental disorders
PS113 | Using Mobile Health Gamification to Facilitate Cognitive Behavioral Therapy Skills Practice in Child Anxiety Treatment: Open Clinical Trial | 2018 | Anxiety
PS114 | Using Mobile Apps to Assess and Treat Depression in Hispanic and Latino Populations: Fully Remote Randomized Clinical Trial | 2018 | Depression
PS115 | Exploring the Time Trend of Stress Levels While Using the Crowdsensing Mobile Health Platform, TrackYourStress, and the Influence of Perceived Stress Reactivity: Ecological Momentary Assessment Pilot Study | 2019 | Stress
PS116 | Functionality of Top-Rated Mobile Apps for Depression: Systematic Search and Evaluation | 2020 | Depression
PS117 | Validation of an mHealth App for Depression Screening and Monitoring (Psychologist in a Pocket): Correlational Study and Concurrence Analysis | 2019 | Depression
PS118 | Testing the acceptability and initial efficacy of a smartphone-app mindfulness intervention for college student veterans with PTSD | 2020 | Posttraumatic stress disorder (PTSD)
PS119 | Development of a Mobile Phone App to Support Self-Monitoring of Emotional Well-Being: A Mental Health Digital Innovation | 2016 | Mental disorders
PS120 | Availability, readability, and content of privacy policies and terms of agreements of mental health apps | 2019 | Mental disorders
PS121 | Posttraumatic Stress Disorder and Mobile Health: App Investigation and Scoping Literature Review | 2017 | Posttraumatic stress disorder (PTSD)
PS122 | SmileTeq: An Assistive and Recommendation Mobile Application for People with Anxiety, Depression or Stress | 2019 | Mental disorders
PS123 | Physician Anxiety and Burnout: Symptom Correlates and a Prospective Pilot Study of App-Delivered Mindfulness Training | 2020 | Anxiety
PS124 | Beam: a mobile application to improve happiness and mental health | 2014 | Mental disorders
PS125 | Mobile app for stress monitoring using voice features | 2015 | Stress
PS126 | A Mobile Phone–Based Intervention to Improve Mental Health Among Homeless Young Adults: Pilot Feasibility Trial | 2019 | Mental disorders
PS127 | Mental Health Apps in China: Analysis and Quality Assessment | 2019 | Mental disorders
PS128 | Speech Analysis and Depression | 2016 | Depression
PS129 | Finding a Depression App: A Review and Content Analysis of the Depression App Marketplace | 2015 | Depression
PS130 | Using a Smartphone App and Clinician Portal to Enhance Brief Cognitive Behavioral Therapy for Childhood Anxiety Disorders | 2020 | Anxiety
PS131 | Self-Directed Engagement with a Mobile App (Sinasprite) and Its Effects on Confidence in Coping Skills, Depression, and Anxiety: Retrospective Longitudinal Study | 2018 | Mental disorders
PS132 | Feasibility of mobile mental wellness training for older adults | 2018 | Depression
PS133 | Imagine your mood: Study design and protocol of a randomized controlled micro-trial using app-based experience sampling methodology to explore processes of change during relapse prevention interventions for recurrent depression | 2017 | Depression (MDD)
PS134 | User Experience of Cognitive Behavioral Therapy Apps for Depression: An Analysis of App Functionality and User Reviews | 2018 | Depression
PS135 | Development of a Mobile Application for People with Panic Disorder as augmentation for an Internet-based Intervention | 2013 | Panic disorder
PS136 | Exploring User Learnability and Learning Performance in an App for Depression: Usability Study | 2017 | Depression
PS137 | Usability of a Smartphone Application to Support the Prevention and Early Intervention of Anxiety in Youth | 2017 | Anxiety
PS138 | Towards Situation-aware Mobile Applications in Mental Health | 2016 | Mental disorders
PS139 | Development of a Digital Content-Free Speech Analysis Tool for the Measurement of Mental Health and Follow-Up for Mental Disorders: Protocol for a Case-Control Study | 2020 | Mental disorders
PS140 | Mental Health App Design – A Journey from Concept to Completion | 2015 | Anxiety
PS141 | Utilizing a Personal Smartphone Custom App to Assess the Patient Health Questionnaire-9 (PHQ-9) Depressive Symptoms in Patients with Major Depressive Disorder | 2015 | Depression (MDD)
PS142 | Daily longitudinal self-monitoring of mood variability in bipolar disorder and borderline personality disorder | 2016 | Mental disorders
PS143 | An App That Incorporates Gamification, Mini-Games, and Social Connection to Improve Men's Mental Health and Well-Being (MindMax): Participatory Design Process | 2018 | Mental disorders
PS144 | Naturalistic evaluation of a sport-themed mental health and wellbeing app aimed at men (MindMax), that incorporates applied video games and gamification | 2020 | Mental disorders
PS145 | Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design | 2018 | Depression
PS146 | Effects of a Mindfulness Meditation App on Subjective Well-Being: Active Randomized Controlled Trial and Experience Sampling Study | 2019 | Mental disorders
PS147 | A review of popular smartphone apps for depression and anxiety: Assessing the inclusion of evidence-based content | 2019 | Mental disorders
PS148 | Experiences of General Practitioners and Practice Support Staff Using a Health and Lifestyle Screening App in Primary Health Care: Implementation Case Study | 2018 | Mental disorders
PS149 | Human-Centered Development of an Activity Diary App for People with Depression | 2019 | Depression
PS150 | Adapting a Psychosocial Intervention for Smartphone Delivery to Middle-Aged and Older Adults with Serious Mental Illness | 2017 | Mental disorders
PS151 | An Online- and Mobile-Based Application to Facilitate Exposure for Childhood Anxiety Disorders | 2019 | Anxiety
PS152 | Emotion-Polarity Visualizer on Smartphone | 2019 | Mental disorders
Appendix B. Mobile Applications
Application name | Platform
ClinTouch | Android & iOS
Appsiety | Android
Ipst | Android & iOS
Health Tips | Android & iOS
EVO | Android & iOS
Me | Android
Mindfulness Meditation app | Not reported
MoodMission | Android
MoodPrism | Android & iOS
MoodKit | Android & iOS
SPIRIT | Android
Ginger Emotional Support | Android & iOS
FOCUS | Not reported
Particip8 | Android & iOS
CLIMB | iOS
Depression Monitor | iOS
MoodHacker | Android & iOS
Challenger | iOS
Moment Health | Android
PsychUP | Android
StressProffen | Android & iOS
SR_APP | Not reported
PsyMate | Android & iOS
iCanThrive | Android
K-CESD-R | Android & iOS
Calm | Android & iOS
Florescer | Android & iOS
Behavioral Apptivation | iOS
Apptivate | Android & iOS
Moodivate | iOS
Driving to Health | Android & iOS
SERMO | Android & iOS
PersonalZen | iOS
SenseIT | Android
BodiMojo | iOS
PowerUp | Android & iOS
BlueWatch | iOS
Music eScape | iOS
Headspace | iOS
What's Up | iOS
Mindshift | iOS
Meru Health | Not reported
Stay Strong App | iOS
EMPOWER | Not reported
MIMOSYS | Android
GET.ON Gratitude | Android & iOS
Actify | iOS
iHOPE | Android & iOS
Happify | Android & iOS
Wysa | Android & iOS
Ada | Android & iOS
Pit-a-Pat | Android & iOS
Daily Sampling System | Android
Aspire | Android & iOS
Day to Day | Android
Daily Feats | Android
Worry Knot | Android
ME Locate | Android
Social Force | Android
My Mantra | Android
Thought Challenger | Android
iCope | Android
Purple Chill | Android
MoveMe | Android
Slumber Time | Android
DeStressify | Android & iOS
DayBuilder | iOS
Be Good to Yourself | iOS
Viary | iOS
Shim | Not reported
Living with heart | Android & iOS
Kokoro-app | iOS
CONEMO | Not reported
iCare-Stress | Android
Pacifica, now Sanvello | Android & iOS
MedLink | Android
Healthy Mind | Android
Serenita | Android & iOS
Anti depression and anti suicidal | Android
Immediate Mood Scaler | iOS
INTERN HEALTH | Android & iOS
SmartMood | Not reported
WorkingWell | Android
BackUP | Android & iOS
mEMA | Android & iOS
LifeApp'tite | Not reported
SHAPE | Android & iOS
Yuu | Android
PTSD Coach | Android & iOS
REACH | Android & iOS
eMindLog | Android & iOS
HeadGear | Android & iOS
Mindfulness | Android
BioBase | Android & iOS
AIMhi-Y | Not reported
TrackYourStress | Android & iOS
Psychologist in a Pocket | Android
SmileTeq | Not reported
Unwinding Anxiety | Android & iOS
Beam | Android
StressID | Android
Pocket Helper | Not reported
HearMeOut | Android
SmartCat | Android
Sinasprite | Android & iOS
Oiva | Android & iOS
Imagine your mood | Not reported
GET.ON PAPP | Android & iOS
Thought Challenger | Android
SituMan | Android & iOS
MoodBuster | Android
VoiceSense | Android & iOS
Self-help for Anxiety Management | Android & iOS
Mindful Moods | Android & iOS
MoodZoom | Android
MindMax | Android & iOS
Wildflowers | iOS
Check Up GP | Not reported
Dacemo | iOS
AnxietyCoach | iOS
PNViz | Android
References
1. del C. Martínez-Martínez, M., Muñoz-Zurita, G., Rojas-Valderrama, K., Sánchez-Hernández, J.A.: Prevalence of depressive symptoms of undergraduate medicine students from Puebla, Mexico. Aten. Fam. 23(4), 145–149 (2016). https://doi.org/10.1016/j.af.2016.10.004
2. Torous, J., Wadley, G., Wolters, M.K., Calvo, R.A.: 4th symposium on computing and mental health: designing ethical eMental health services. In: Conference on Human Factors in Computing Systems – Proceedings, pp. 1–9 (2019). https://doi.org/10.1145/3290607.3298997
3. Medina-Mora, M.E., et al.: Prevalencia de trastornos mentales y uso de servicios: Resultados de la Encuesta Nacional de Epidemiología Psiquiátrica en México. Salud Ment. 26(4), 1–16 (2003)
4. Benjet, C., et al.: Psychopathology and self-harm among incoming first-year students in six Mexican universities. Salud Publica Mex. 61(1), 16–26 (2019). https://doi.org/10.21149/9158
5. Donker, T., Petrie, K., Proudfoot, J., Clarke, J., Birch, M.-R., Christensen, H.: Smartphones for smarter delivery of mental health programs: a systematic review. J. Med. Internet Res. 15, 1–19 (2013). https://doi.org/10.2196/jmir.2791
6. Organización Mundial de la Salud: "Depresión," World Health Organization (2020). https://www.who.int/es/news-room/fact-sheets/detail/depression. Accessed 25 Feb 2020
7. Kitchenham, B., Charters, S.: Guidelines for performing Systematic Literature Reviews in Software Engineering (2007)
8. Soria Trujano, R., Morales Pérez, A.K., Ávila Ramos, E.: Depresión y problemas de salud en estudiantes universitarios de la carrera de Medicina. Diferencias de género. Altern. en Psicol. 18(31), 45–59 (2015)
9. Asociación Americana de Psiquiatría: Manual diagnóstico y estadístico de los trastornos mentales (DSM-5®), 5a ed. Arlington, VA (2014)

Model Proposed for the Production of User-Oriented Virtual Reality Scenarios for Training in the Driving of Unmanned Vehicles

Cristian Trujillo-Espinoza1, Héctor Cardona-Reyes2(B), and José E. Guzmán-Mendoza3

1 Center for Research in Mathematics, Quantum: Knowledge City, Zacatecas, Mexico
Model Proposed for the Production of User-Oriented Virtual Reality Scenarios for Training in the Driving of Unmanned Vehicles Cristian Trujillo-Espinoza1 , Héctor Cardona-Reyes2(B) , and José E. Guzmán-Mendoza3 1 Center for Research in Mathematics, Quantum: Knowledge City, Zacatecas, Mexico
[email protected] 2 CONACYT Research Fellow, CIMAT Zacatecas, Zacatecas, Mexico
[email protected] 3 Polytechnic University of Aguascalientes, Aguascalientes, Mexico
[email protected]
Abstract. This work presents a model for producing user-oriented virtual reality scenarios in which users can be trained to pilot unmanned aerial vehicles (UAVs) such as drones. The cost of drones and the costs associated with their operation (fuel, maintenance, training personnel, etc.) are much lower than those incurred by the operation of manned aircraft. For this reason, the adoption of drones in different domains has increased, particularly in industry. This brings with it a series of difficulties that must be considered when using these vehicles, such as correct operation by the user. To face this perspective, a model for producing user-centered virtual reality scenarios is proposed. Task design is considered a fundamental part, along with the elements of the virtual reality environment and the types of user interaction, with the aim of producing virtual reality scenarios that match the user's needs. The main actors in this model are UAV pilots. The main objective of the model is to train the user to control a UAV through different user tasks. Finally, a virtual reality scenario is presented as an implementation of the model to introduce users to drone flight. Keywords: Virtual reality · Training · User-centered design · Learning environments
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 J. Mejia et al. (Eds.): CIMPS 2020, AISC 1297, pp. 258–268, 2021. https://doi.org/10.1007/978-3-030-63329-5_17
1 Introduction Today the field of Unmanned Aerial Vehicles (UAVs), particularly drones, has experienced a significant boom, and this is reflected in the large number of users who increasingly purchase a vehicle of this type, whether because of its affordable cost or because of the
ease with which various tasks can now be performed that were previously expensive, time-consuming, or considered a risk factor. The incorporation of this type of vehicle in various contexts has led to smaller, lighter drones that can be configured to the needs of the task for which they are intended. The contexts into which these types of vehicles have generally ventured are industry, government, the education sector, and entertainment, the latter having the greatest boom [17]. Figure 1 presents the projected evolution in the short, medium, and long term (from 2020 to 2030) of UAVs and how this impacts applications in various sectors, such as civil works, mining, agriculture, and entertainment, as well as the benefits this brings: the accessibility of UAVs and the reduction of costs in the processes involved.
Fig. 1. Possible evolution of shared use of airspace, source [5].
UAVs have generated a technological ecosystem in which various technological solutions can be produced; this allows for increasingly smaller UAVs with longer battery life, configurable and adaptable to their assigned tasks and to the needs of their users. In addition, this brings with it other factors that feed the technological ecosystem, such as better user-oriented software design, greater precision of operation, and the incorporation of interaction strategies that facilitate the operation of these UAVs. This has brought benefits such as greater accessibility and cost reduction [7]. As part of the technological ecosystem surrounding UAVs, user-oriented strategies are needed that allow users to acquire the skills necessary for the correct driving of these vehicles, in a way that is also safe and does not put the physical components of the vehicle at risk. Therefore, this work proposes the use of virtual reality for the generation of
scenarios that allow the user to acquire the skills needed to drive a UAV within a learning scheme tailored to the user's needs. This work consists of seven sections. The next section presents the works related to the proposal, and the concepts underlying the proposal are presented in section three. The problem section addresses the challenges involved in producing these types of user-oriented scenarios. Section five presents the elements that make up the proposed model, and a discussion of the proposed model is presented in section six. Finally, conclusions and future work are presented in section seven.
2 Related Works This section presents a review of the literature on works related to the proposal. The works analyzed include the use of virtual reality to pilot drones, web-based prototypes using maps, and techniques for operating drones using video game engines to generate virtual reality environments. The works found in the literature are described below: • Postal et al. [20] present the development of a virtual reality environment aimed at training drone pilots; the proposed immersive interface improves the user experience compared to the traditional control interface during the proposed training tasks. Various devices were used to run the virtual reality environment, such as an Oculus Rift1 virtual reality system to visualize the scene and a Kinect to control the drone through body movements. • Nguyen et al. [18], on the other hand, propose a web-based system called DroneVR. This system is based on real-world flight data using OpenStreetMap2, allowing the definition of flight paths. It also uses techniques such as Gaussian models and Kalman filters to detect moving objects and to predict and follow their movements, respectively. • Liu et al. [16] propose a solution using the Unreal Engine3 video game engine; likewise, it relies on a virtual reality system to present the produced scenario. The video game engine makes it possible to simulate the physical and mechanical characteristics of a drone. Tests were carried out with students, who were able to carry out training activities in an immersive environment. • Car Parking Simulator4 is a video game based on a car driving simulator with a high level of interactivity. It presents very realistic physics, and the objective is for the user, using their hands as the means of interaction, to drive a car and perform the parking task across various levels of difficulty. See Fig. 2. • Totally Realistic Space Combat Simulator5 is a simulator in which the user can pilot a ship and face combat with various enemies. The advantage of this simulator is that
Model Proposed for the Production of User-Oriented Virtual Reality
261
Fig. 2. Examples of vehicle simulators, A: Car Parking Simulator. B: Totally Realistic Space Combat. C: Machine Inspector.
it is designed for low-cost platforms, such as the Oculus Go and Gear VR, among others that only work with three degrees of freedom (3DOF) [13]. See Fig. 2. • Machine Inspector6, more than a game, is a demo with a high degree of interactivity within a virtual reality environment. Its main objective is the assembly and disassembly of mechanical parts, serving as a basis for training in the assembly of motors and mechanical devices. See Fig. 2. This review shows the several approaches under which the user is offered an alternative for training in the driving of various types of UAV: some strategies are based on video games to engage the user in fulfilling various tasks, others try to imitate the use of vehicles as realistically as possible, as in the case of Car Parking Simulator, and others are based on innovative proposals that combine several devices, such as Kinect, sensors, and virtual reality systems. The following section presents the theoretical foundations related to this proposal. 6 https://sidequestvr.com/app/545/machine-inspector.
3 Theoretical Foundations 3.1 Virtual Reality According to Jerald et al. [10], the term virtual reality is used to describe imaginary worlds that only exist in computers and in the mind. Other definitions at a technical level, such as that of Gigante et al. [6], establish that virtual reality is: "The illusion of participation in a synthetic environment as opposed to an external observation of this environment. Virtual reality relies on stereoscopic three-dimensional (3D) vision and orientation sensor vision devices, hand and body movement sensors, and binaural sound. Virtual reality is a multisensory and immersive experience". We can therefore conclude that in virtual reality a virtual world is a description of a collection of objects in space and of the rules and relationships that govern these objects. In virtual reality systems, these virtual worlds are generated by a computer [9]. 3.2 Types of Virtual Reality According to the levels of interaction and immersion shown in Fig. 3, there are three main types of virtual reality [3], which are described below:
Fig. 3. Taxonomy of interaction levels HCI and HRWI (Human-Real World Interaction), source: Rubio-Tamayo [23].
• Desktop or non-immersive virtual reality systems [3]: These consist of a computer with graphics processing capabilities to run multimedia content such as video games and simulations, among others; the means of interaction can be the keyboard, the mouse, or joysticks. • Semi-immersive virtual reality systems [3]: These are intended to provide users with a sense of immersion in a virtual environment through stereoscopic images. • Total immersion virtual reality systems [9]: This is a more sophisticated form of immersion, which includes a helmet-mounted pair of screens that the user places on their head. With this, the user experiences a sensation of complete immersion, since they are completely isolated from the outside world and concentrate only on the three-dimensional elements presented by the screens. 3.3 Immersion According to Jerald et al. [10], immersion is the degree to which a virtual reality system projects sensory stimuli to users in an immersive, vivid, interactive way, obtaining feedback in real time. 3.4 User-Centered Design User-Centered Design (UCD) approaches recommend several design principles, including an early focus on users and tasks, empirical measurement, and iterative design. UCD approaches enable developers to understand the potential users' needs of their software and how their goals and activities can best be supported by the software [22]. According to Akanmu et al. [1], UCD is widely regarded as an approach for delivering user-friendly and user-centered products. Methods in its framework, such as ethnography, surveys, interviews, and direct and indirect observation, have been stressed as functional modes of inquiry for identifying users' needs and investigating their psychological inclination toward information systems.
The need for UCD adoption is not disregarded in virtual reality design either. Most emphatically, the need to involve prospective users, who are often domain experts, in the design process is emphasized in order to capture the domain knowledge required for the production of training scenarios, which is fundamental to virtual reality functionality. 3.5 User Experience The definition of User Experience (UX) according to Hartson et al. [8] indicates that: UX is the totality of the effect or effects felt by a user as a result of interaction with, and the usage context of, a system, device, or product, including the influence of usability, usefulness, and emotional impact during the interaction, and savoring the memory after the interaction. "Interaction with" is broad and embraces seeing, touching, and thinking about the system or product, including admiring it and its presentation before any physical interaction. 3.6 Unmanned Aerial Vehicles Unmanned aerial vehicles (UAVs) are aviation systems known to be remotely piloted and are commonly called drones [19]. These vehicles fly without a pilot on board, although they may involve human pilots controlling the vehicle from meters or kilometers away. In some cases, drones require close remote control by a human pilot; however, there are also fully autonomous drones with the ability to decide how to execute complex tasks. The correct operation of a UAV during a mission must consider the influence of a range of factors that cause deviations of some parameters of motion from their optimal values. UAV operation undoubtedly involves ensuring a certain level of flight safety. The safe completion of the flight task depends to a large extent on the availability, duration, and number of negative factors. If the effect of the negative factors is not such that their action violates the safety of the flight task, it can be assumed that the conditions correspond to normal flight conditions. Normal flight conditions are the key to the successful completion of the UAV's flight task. However, during flight, the UAV operates in real airspace, interacts with technical systems, and is vulnerable to human actions. All of this creates the preconditions for a whole range of factors that negatively affect the safety of air traffic and can give rise to a specific flight situation [11]. A specific situation is the result of dangerous factors and has a significant impact on flight safety. After analyzing the hazardous factors, the probability of their occurrence can be assessed. On this basis, the dangerous factors are grouped according to the level of their influence, and the most important of them are singled out [12]. Factors characterized by low risk may be left out of consideration if their elimination is dominated by material costs.
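A minimal sketch of such a grouping, assuming a conventional probability times severity risk assessment; the factor list, scales, and thresholds below are illustrative assumptions, not values taken from [11, 12]:

def risk_level(probability, severity):
    # Classify a hazardous factor by occurrence probability (0-1)
    # and severity (1-5); the thresholds are illustrative.
    score = probability * severity
    if score >= 2.0:
        return "high"    # must be mitigated before flight
    if score >= 0.5:
        return "medium"  # mitigate if the cost is reasonable
    return "low"         # may be accepted if elimination is too costly

factors = {
    "gps_signal_loss": (0.3, 4),
    "strong_wind_gusts": (0.5, 3),
    "battery_degradation": (0.2, 2),
}
for name, (p, s) in factors.items():
    print(name, risk_level(p, s))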
4 Problem Outline Even after a considerable amount of simulation-based training, pilots still often experience anxiety and tension during their first flight, much of which can be traced to the lack of realism
in virtual simulators. These simulators are intended to allow the pilot to become familiar with the instrumentation of the aircraft and to have a realistic feeling of flight. Unfortunately, existing simulators do not provide an immersive experience of a real flight [2]. Another problem encountered is the high cost of an unmanned aerial vehicle, in terms of both the price of the aircraft and its operating costs. Even so, the price of drones and the costs associated with their operation (fuel, maintenance, training personnel, etc.) remain much lower than those incurred by the operation of manned aircraft [5].
5 Proposed Model This section presents the elements that make up the proposed user-centered model for the generation of virtual reality scenarios to train UAV pilots. The proposed model is presented in Fig. 4.
Fig. 4. User-centered model proposed for the generation of scenarios in virtual reality.
As can be seen in Fig. 4, the model starts with the main actor, the user, who has a need to acquire skills to pilot a UAV. To achieve this, a set of tasks must be designed according to the context in which the user is going to work and the way the user is going to interact with the system; it is also important to know the characteristics of the UAV identified for the user's needs and the guidelines that exist for piloting it. What is obtained in these stages provides the elements to begin the design of the virtual reality scenario, where the workflow for the design and creation of the scenario is considered according to the tasks defined in the task design stage. In the simulation, it is important to capture the UAV characteristics established in the UAV characteristics identification stage. Finally, the tools that will allow the implementation of the scenario in virtual reality are identified, together with the elements with which the user will interact within the scenario. The stages that make up the proposed model of Fig. 4 are presented below, preceded by a brief sketch of how the model's artifacts relate to one another.
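The following Python sketch is only an illustration of how the model's artifacts could be related in software; all type and field names are hypothetical and simply mirror the stages of Fig. 4.

from dataclasses import dataclass
from typing import List

@dataclass
class UserProfile:
    experience_level: str   # e.g. "novice" or "experienced"
    training_goal: str

@dataclass
class TrainingTask:
    name: str
    success_criteria: str

@dataclass
class UAVSpec:
    rotor_type: str         # e.g. "quadcopter"
    guidelines: List[str]   # piloting guidelines for this UAV class

@dataclass
class ScenarioSpec:
    # Output of the scenario design stage: workflow, simulation, and VR elements.
    user: UserProfile
    tasks: List[TrainingTask]
    uav: UAVSpec
    environment: str        # e.g. "urban area with obstacles"

def design_scenario(user, tasks, uav):
    # Novice users get a constrained environment; this rule is illustrative.
    env = "urban area with obstacles" if user.experience_level == "novice" else "open city"
    return ScenarioSpec(user, tasks, uav, env)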
5.1 Task Design The design of user tasks considers a prior analysis of the user's needs regarding the tasks deemed necessary to pilot an unmanned aerial vehicle, as well as whether the user has little or extensive experience in handling drones. Figure 5 presents an example of a user task design represented in Concur Task Tree (CTT) notation [15], in which the user performs a drone piloting task that requires avoiding obstacles; for the task to be considered completed, all obstacles must be evaded correctly. A minimal sketch of such a task tree follows the figure.
Fig. 5. Example of user task design using CTT notation, for a training scenario to pilot a drone.
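A CTT model is essentially a tree of tasks joined by temporal operators (enabling, choice, iteration, and so on). The sketch below encodes the obstacle avoidance task of Fig. 5 as nested nodes; the reduced operator set and the node names are illustrative assumptions, not the exact model of the figure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    operator: str = ""   # temporal operator toward the next sibling, e.g. ">>" (enabling)
    children: List["Task"] = field(default_factory=list)

pilot_drone = Task("PilotDrone", children=[
    Task("TakeOff", operator=">>"),
    Task("AvoidObstacles", operator=">>", children=[
        Task("DetectObstacle", operator=">>"),
        Task("EvadeObstacle", operator="*"),   # iterated until every obstacle is passed
    ]),
    Task("Land"),
])

def show(task, depth=0):
    # Print the task hierarchy with its operators, one node per line.
    print("  " * depth + task.name + (" " + task.operator if task.operator else ""))
    for child in task.children:
        show(child, depth + 1)

show(pilot_drone)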
5.2 UAV Features For the generation of virtual reality scenarios, it is necessary to know the characteristics of UAVs; this type of vehicle has very diverse architectures, and therefore several guidelines must be considered. Among the considerations: a UAV may be single-rotor, whether conventional, coaxial, or nano; it may be a multirotor, characterized as a tricopter, quadcopter, or hexacopter; or it may be a hybrid of those mentioned. It is therefore necessary to identify the characteristics of a UAV suited to the user's needs and with the capacity to support the designed tasks. 5.3 Scenario Design At this stage, the scenario design includes three elements, which are described below: a. Workflow: This refers to all the activities that the user is going to carry out within the virtual reality scenario, ranging from the instructions for interacting with the scenario to the objectives to be covered when executing the training tasks. b. Simulation: Based on the defined user tasks and the drone guidelines, the characteristics of the simulation to be recreated are determined.
Fig. 6. Virtual reality scenario for basic level training for piloting a drone. A: A user task in which several obstacles must be evaded. B: A user task in which a specific urban area must be traveled.
obstacles, while Fig. 6-B presents a city scenario where the pilot can develop the skills to travel through a specific area.
c. Virtual Reality: It includes the virtual reality elements, such as the model of the drone with physical characteristics according to the identified UAV guidelines; the objects that will interact with the user and the static objects of the scene are also considered, as well as the design elements that make the setting pleasant and motivating for the user.
5.4 Feedback
Several strategies available in the literature are proposed for evaluating the user experience, for example the User Experience Questionnaire (UEQ) [4], which helps to know the impression the virtual reality scenario makes (Attractiveness), whether the user likes it and feels familiar with it, and how easy it is to learn to use the software (Perspicuity); also the Computer System Usability Questionnaire (CSUQ), which is very useful when it is required to know whether the system is easy to use [14]. The virtual reality scenario will be able to generate information on the user's performance; this will serve to know the level of acquisition of skills to pilot a drone, and as the user progresses, the feedback information will allow generating new scenarios and new user tasks according to new training needs. Evaluations such as AttrakDiff [21, 24] are also considered, which allow knowing and evaluating aspects related to the functionality and usefulness of the proposed scenarios, as well as aspects related to the skills and attitudes of people regarding how the users interact with the virtual scenarios.
6 Discussion
This work presents a model proposal to produce user-centered virtual reality scenarios to train UAV pilots. The model proposes design artifacts that allow a correct generation of virtual reality scenarios according to UAV training needs. In order to prepare this proposal, the existing works that address UAV simulation (see Sect. 2) through virtual reality and other devices that allow immersion within the
proposed environments were reviewed in the literature. Among the identified works we can find experimental studies, the creation of virtual environments where the user is immersed and can interact using their hands, applications that focus on the industrial sector, and even video games with various themes, ranging from piloting a ship to parking a car correctly. The proposed model considers four major elements: the user, the design of tasks, the design of scenarios, and feedback. We consider that these elements allow, above all, designing scenarios in a user-centered way, considering the characteristics of the user and allowing to obtain a profile with their characteristics and abilities. This has the advantage that the same scenario can be used by different users who meet the profile. The design of tasks seeks to capture the real-world task as faithfully as possible so that it can be taken to a design context considering the appropriate UAV. The virtual reality scenario design includes the interaction elements involved, such as the virtual drone, interactive objects appropriate to the tasks to be performed, and other objects that complement the scenario. Finally, feedback is a fundamental part of knowing the user's perspective at the time of use and their perception of the scenarios presented for their training tasks. Hence the importance of having a user-centered model that allows producing new scenarios according to the needs and the skills acquired by the user.
7 Conclusions and Future Works
The use of virtual reality scenarios will allow pilots to have more effective and realistic training when operating an unmanned aerial vehicle, in such a way that the pilot feels safer and less tense when operating in real life. Training pilots in virtual reality scenarios is therefore sought so that the companies, institutions, etc., that train them can reduce costs, since physically repairing a drone is usually more expensive than doing it virtually, and it would also shorten the learning curve. As future work, a low-cost virtual reality system will be developed for pilots to train their flight skills in a more realistic environment, including testing its usability and user experience.
References
1. Akanmu, S.A., Jamaludin, Z.: A user-centered design methodology for students' data-focused infovis. In: 2014 3rd International Conference on User Science and Engineering (i-USEr), pp. 115–118. IEEE (2014)
2. Cardenas, I.S., Letdara, C.N., Selle, B., Kim, J.H.: Immersifly: next generation of immersive pilot training. In: 2017 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 1203–1206. IEEE (2017)
3. Cruz, J.A.F., Gallardo, P.C., Villarreal, E.A.: La realidad virtual, una tecnología innovadora aplicable al proceso de enseñanza de los estudiantes de Ingeniería. Apertura 6(2), 1–10 (2014)
4. Devy, N.P.I.R., Wibirama, S., Santosa, P.I.: Evaluating user experience of English learning interface using user experience questionnaire and system usability scale. In: 2017 1st International Conference on Informatics and Computational Sciences (ICICoS), pp. 101–106. IEEE (2017)
5. Gobierno de España: Plan estratégico para el desarrollo del sector civil de los drones en España 2018–2021 (2020). https://www.mitma.gob.es/el-ministerio/planes-estrategicos/drones-espania-2018-2021
6. Gigante, M.A.: Virtual reality: definitions, history and applications. In: Virtual Reality Systems, pp. 3–14. Elsevier (1993)
7. GisandBeers.com: Documento del plan estratégico de drones (2018). http://www.gisandbeers.com/documento-del-plan-estrategico-de-drones/
8. Hartson, R., Pyla, P.S.: The UX Book: Process and Guidelines for Ensuring a Quality User Experience. Elsevier, Amsterdam (2012)
9. Himma, K.E., Tavani, H.T.: The Handbook of Information and Computer Ethics. Wiley, Hoboken (2008)
10. Jerald, J.: The VR Book: Human-Centered Design for Virtual Reality. Morgan & Claypool, San Rafael (2015)
11. Kharchenko, V., Kuzmenko, N., Kukush, A., Ostroumov, I.: Multi-parametric data recovery for unmanned aerial vehicle navigation system. In: 2016 4th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC), pp. 295–299. IEEE (2016)
12. Kharchenko, V., Ostroumov, I.V.: Multiple-choice classification in air navigation system. In: Proceedings of the NAU 2, pp. 5–9 (2008)
13. Lang, B., Batallé, J.: An introduction to positional tracking and degrees of freedom (DOF). Road to Virtual Reality (2018)
14. Lewis, J.R.: Measuring perceived usability: the CSUQ, SUS, and UMUX. Int. J. Hum.-Comput. Interact. 34(12), 1148–1156 (2018)
15. Li, J., Liying, F., Qing, X., Shi, Z., Yiliu, X.: Interface generation technology based on concur task tree. In: 2010 International Conference on Information, Networking and Automation (ICINA), vol. 2, pp. V2-350. IEEE (2010)
16. Liu, H., Bi, Z., Dai, J., Yu, Y., Shi, Y.: UAV simulation flight training system. In: 2018 International Conference on Virtual Reality and Visualization (ICVRV), pp. 150–151. IEEE (2018)
17. Martínez, F.P.: Presente y futuro de la tecnología de la realidad virtual. Creatividad y Sociedad (2011)
18. Nguyen, V.T., Jung, K., Dang, T.: DroneVR: a web virtual reality simulator for drone operator. In: AIVR, pp. 257–262 (2019)
19. Otto, A., Agatz, N., Campbell, J., Golden, B., Pesch, E.: Optimization approaches for civil applications of unmanned aerial vehicles (UAVs) or aerial drones: a survey. Networks 72(4), 411–458 (2018)
20. Postal, G.R., Pavan, W., Rieder, R.: A virtual environment for drone pilot training using VR devices. In: 2016 XVIII Symposium on Virtual and Augmented Reality (SVR), pp. 183–187. IEEE (2016)
21. Vivas Bravo, R.: Proceso para la Evaluación de Aspectos Relacionados con la Experiencia de Usuario para Entornos Virtuales de Aprendizaje. Master's thesis, Universidad San Buenaventura Cali, Cali, Colombia (2013)
22. Salah, D.: A framework for the integration of user centered design and agile software development processes. In: Proceedings of the 33rd International Conference on Software Engineering, pp. 1132–1133 (2011)
23. Tamayo, J.L.R., Barrio, M.G.: Realidad virtual (HMD) e interacción desde la perspectiva de la construcción narrativa y la comunicación: propuesta taxonómica. Icono 14 14(2), 12 (2016)
24. Walsh, T., Varsaluoma, J., Kujala, S., Nurkka, P., Petrie, H., Power, C.: Axe UX: exploring long-term user experience with iScale and AttrakDiff. In: Proceedings of the 18th International Academic MindTrek Conference: Media Business, Management, Content & Services, pp. 32–39 (2014)
Virtual Reality and Tourism: Visiting Machu Picchu Jean Diestro Mandros2 , Roberto Garcia Mercado2 , and Sussy Bayona-Oré1,2(B) 1 Universidad Autónoma del Perú, Villa el Salvador, Peru
[email protected] 2 Universidad San Martin de Porres, Lima, Peru
[email protected], [email protected]
Abstract. In COVID-19 times, the mobilization of people has become a latent danger due to possible contagion; consequently, people are deprived of visiting archaeological sites in person. Thus, virtual reality emerges as an alternative solution. Individuals can immerse themselves within virtual environments to explore monuments, museums, and other cultural heritage sites. The purpose of this article is to present a system to make virtual visits to the Citadel of Machu Picchu. The UP4VED methodology was used for the system construction. The tests of the 3D model were carried out with hotel guests, who showed their satisfaction with the immersive virtual visit to the citadel, considering it an easy-to-use technology that facilitates interaction.
Keywords: Virtual reality · UP4VED · Tourism · Machu Picchu
1 Introduction
Information and Communication Technologies (ICT) bring with them new ways of doing things, and when incorporated into organizations they become a strategic business ally. In particular, we refer to 3D virtual reality technology. The application of 3D virtual reality occurs in organizations in many sectors, including the culture and tourism sectors. Virtual reality offers the possibility of visiting places of the past, and there are many examples of interactive visualizations of different historical places [1]. At a time when people cannot travel to tourist sites due to prevention restrictions, virtual technology presents itself as an alternative solution to visit archaeological sites. Thus, recent developments in virtual reality technology present an opportunity for the tourism industry. The virtual reality experience can influence the decision-making process and the attitude of users towards tourist destinations [2]. Cultural tourism has been identified as an important social and economic contribution, which motivates researchers to explore the application of virtual reality and augmented reality [3]. Cultural heritage sites are ideal subjects for interactive visualizations in virtual reality, as are tourist sites and museums [4]. Virtual reality as a technology to be applied in tourism has been recognized for over 20 years, which has renewed interest in academic and business circles [5]. Likewise,
virtual reality offers a sensory experience of tourist destinations and attractions, in such a way that it increases a person's interest in visiting the destination in the future [6]. Web3D technologies provide the necessary autonomy and are more accessible to the public, since they can be used directly from the web browser without any need for identification or installation of additional software [7]. When manipulating virtual reality software, the user can feel a natural interaction using the keyboard and mouse [8]. Virtual reality applications are diverse and used in several areas, from virtual tours to virtual museums [9] and virtual art galleries [10]. In the mining sector, it is used for the training of workers and the simulation of activities [11]. In the tourism sector, tourists can visit remote places in real time without restrictions. Peru is characterized by its tourist attractions; among the most important is the citadel of Machu Picchu, an architectural complex and ancient Andean Inca town. It is important to make use of 3D virtual technology to visit the citadel of Machu Picchu, located in the department of Cusco, using virtual environments as a means. In this way, virtual technology, whose fundamental characteristics are immersion and interaction, makes the virtual visit possible through the Oculus Rift. This article presents a virtual reality application built using the UP4VED methodology [12] and made available to a hotel where interested guests can take a virtual tour of Machu Picchu. This article is organized into five sections, including this introduction. Section 2 presents the theoretical basis of the research. Section 3 describes the UP4VED methodology that has been used for the development of the software. Section 4 describes the results. Finally, Sect. 5 presents the conclusions.
2 Background
2.1 Machu Picchu
The citadel of Machu Picchu is located in the province of Urubamba in the department of Cusco in Peru, close to Cusco, the imperial city. It is surrounded by the Machu Picchu and Huayna Picchu mountains as part of its orographic formation [13], in the Central Mountain Range of the Peruvian Andes and in the Yungas region. At the foot of the mountains runs the Vilcanota-Urubamba River. It is located 450 m above the level of the valley and 2438 m above sea level. The built-up area is approximately 530 m long by 200 m wide. Machu Picchu is considered one of the most impressive remains of the Inca period cities and one of the new seven wonders of the modern world [14].
2.2 Virtual Reality
Virtual reality is the group of technologies that bring the user closer to computer-generated three-dimensional objects and environments. Virtual reality is a human-computer technology whose purpose is to engage the user immersively in an artificial setting in a natural way [15]. Another definition describes virtual reality as the technology that allows the creation of three-dimensional spaces by means of a computer; in this
way, reality can be simulated. The purpose of VR is to allow a person to experience and manipulate the environment as if it were the real world [16]. Virtual reality is divided into two types: non-immersive virtual reality and immersive virtual reality. Non-immersive virtual reality allows interaction through the mouse and keyboard on a graphic monitor, while immersive virtual reality presents possibilities in different tourism-related applications, such as visiting cultural heritage sites, supporting the decision-making process, travel planning, or tourism promotion [17], generating an immersive experience in the third dimension.
2.3 Unreal Engine 4
Unreal Engine 4 [18] is geared towards the development of 3D environments and can be used by designers, programmers, professionals, or amateurs in the 3D industry. Among its main features are: (1) DirectX 11 and 12 rendering and support: with HD scene options, it handles thousands of dynamic lights simulating real or focused light in the same scene, and instances of different types of 3D materials can coexist in the same scene; (2) the visual effects editor provides the tools needed to create fire, smoke, snow, and sand, in addition to fast and efficient management of GPUs and CPUs; (3) creation of open worlds: the open world environment and the landscape system can be created, as well as foliage whose size, color, and density over the terrain can be adjusted; (4) different ways to reproduce sound in a 3D environment can be built; (5) instant previews can be taken from any point of the application without waiting to save or render; (6) Blueprints, a form of visual scripting that enables a fast learning curve for building prototypes using C++ programming in visual form.
3 Methodology
UP4VED is used to implement the proposal. UP4VED is a development methodology based on the unified process that has a set of practices for the construction of virtual environments, bringing together the proposals set out in the existing methodologies for the development of virtual environments [12]. The content of UP4VED is organized through a hierarchy of packages, each of which includes disciplines, roles, tasks, work products, and guides. Cardona states that UP4VED's life cycle is composed of the description of the activities that make up its development phases: beginning, elaboration, construction, and transition. Figure 1 shows the stages established by the UP4VED methodology. Section 4 presents the implementation of the methodology. The main activities in the Beginning phase are:
• Define the stakeholders of the project.
• Establish the project risk assessment.
• Establish the business process.
• Perform the business rules.
• Beginning: The objectives and the scope of the project, related to the type and requirements of the virtual environment, are established. The needs of stakeholders are identified and recorded as user stories.
• Elaboration: The architecture for the life cycle of the virtual environment and its application context is defined, the design is carried out, and the foundations are laid to guide the implementation of a virtual environment.
• Construction: The virtual environment is developed through each of the iterations defined for the development. In this phase, the 2D and 3D components that make up the virtual environment must be completed, tested, and integrated, so that a publishable and usable version of the virtual environment is achieved.
• Transition: The virtual environment is delivered fully operational and functional to its end users, after the completion of various tests, especially those of usability.
Fig. 1. Stages of the UP4VED methodology
The main activities in the Elaboration phase are:
• Obtain the details of the requirements.
• Elaborate the use case diagram, the use cases, and their specifications.
• Describe the elements of the virtual environment.
• Define the static and dynamic structures.
• Perform the image collection, the storyboard, and the user interfaces.
• Design the logical and physical structure of the system.
The main activities in the Construction phase are:
• Create the virtual environment assets (3D creation and texturing).
• Import them into the graphics engine.
• Develop the project.
The main activity in the Transition phase is:
• Test the virtual environment.
4 Results
The activities carried out in each of the stages defined by UP4VED are presented below.
4.1 Beginning Phase
The project formally begins with the initial meeting with the client representing the hotel, and the minutes of the meeting are issued. The project's stakeholders were defined as the supervisor of the hotel's reservation area, the sales manager, the project manager, and the development team. Project risks were determined, such as possible changes in requirements, noncompliance with project dates, changes in the scope of the project, or temporary lack of key personnel. The business process and rules were established.
4.2 Elaboration Phase
The system requirements are presented in Table 1.
Table 1. System requirements.
REQ 1 – Guiding guest. Description: The guest will be guided by the application in an interactive way through Machu Picchu, starting from the main gate to the Intihuatana. For this purpose, the guest will use an analogical control. Each time the guest advances through the virtual stage, background music will play and will stop when the review begins. Restrictions: The application can only be installed on a PC. The appliance can only be used with Oculus Rift lenses.
REQ 2 – Set up virtual environment. Description: The immersive virtual reality application must have an option to set up the application so that it supports different types of PC configuration; the following options must be displayed as a menu: speed of frames per second, number of polygons. Restrictions: The application can only be installed on a PC. The appliance can only be used with Oculus Rift lenses.
REQ 3 – Walk (run). Description: The guest, when walking through Machu Picchu, should seem to be walking around the place; there should be the sound of footsteps and a camera movement similar to when one is walking or running. Restrictions: The application can only be installed on a PC. The appliance can only be used with Oculus Rift lenses.
REQ 4 – Show floating review. Description: When a guest passes through a representative site within Machu Picchu, a floating message should be displayed indicating a summary of the site in the form of a window with configurable borders. This message will be disabled when moving away from the site. Restrictions: The application can only be installed on a PC. The appliance can only be used with Oculus Rift lenses.
REQ 5 – Listen to review. Description: When the guest passes by a representative site within Machu Picchu, a voice will explain the nearby place in detail. Restrictions: The application can only be installed on a PC. The appliance can only be used with Oculus Rift lenses.
REQ NF 1 – Commanded movement. Description: The virtual reality application must be able to support an analogical control for PC. Restrictions: The application can only be installed on a PC. The control used will be XBOX 360 or higher.
In the system analysis phase, use cases were defined. The following system actors were also defined: (1) the hotel receptionist, who is the user in charge of starting the Machu Picchu virtual guide application and of setting the graphics quality parameters depending on the PC where the system is installed, and (2) the guest, who is the user in charge of moving through the Machu Picchu virtual guide application; when interacting with some specific areas of Machu Picchu, a brief review will be shown through floating messages and sounds with a female voice. The description of the elements of the virtual environment was made. Taking into consideration the design of the virtual tour, they were classified as:
• Static structures, generated in three dimensions (3D) with textures that simulate real life. These elements do not require animation and are part of the surrounding architecture (chairs, tables, foot-drafts). In this case the static elements are: (1) the stone walls of the Machu Picchu complex, (2) a field almost 2 km long, on top of which all the elements of the ruins of Machu Picchu rest, (3) the mountain environment, a cylinder that surrounds the Machu Picchu complex, and (4) the rocks and small stones, because Machu Picchu has areas that are in ruins, where only rocks and small stones surrounding some architectural structures remain.
• Dynamic structures, which have animation or interact with the user directly, such as: (1) the grass inside the complex that moves to simulate the wind, (2) the trees and small vegetation, (3) the small animals with feeding and observation movements, and (4) the visible messages that appear in different
zones of Machu Picchu; they appear when the user gets closer and disappear when the user moves away.
• The images are collected and, through the process of generating the prototype, classified into reference images, architectural-plan images, and texture images. The reference images help the 3D designer to understand the way the architecture of Machu Picchu is organized (Fig. 2 shows the right and left side of Machu Picchu); the architectural-plan images help the 3D designer to generate the terrain with the real dimensions and height; and the texture images are used only to generate textures, i.e., "the skin" of a 3D object, since objects are only three-dimensional lines when created (e.g., photographs of the walls of Machu Picchu, see Fig. 3).
Fig. 2. Machu Picchu (right side and left side)
Fig. 3. Sacred Square and the texturization of the Sacred Square
In order to develop the requirements of an immersive virtual reality project, it is necessary to understand how the user interfaces and the path will be, that is, the interaction between the user and the virtual environment and how the user will visualize it with the Oculus Rift. This is where the storyboard comes in. The tourist, on starting the virtual environment, will go to the door of the western urban sector known as the cover group. As he
advances, he will see on his right the Temple of the Sun and the royal tomb, as well as the royal palace; from there he will continue his tour through a place full of giant rocks known as the quarry ("Cantera"), and then head 20 m ahead to the priest's house and the main temple. Then he can go up some stairs to the Intihuatana and climb through some rocks to access the main square, finally heading to the starting area of the urban sector, where the industrial area, the three covers, and the high group are located. Once the use cases, the storyboard, and the virtual environment interfaces are defined, it is described how the solution will be built in order to satisfy the requirements. The architecture design is defined; the software, the design technique, and the texturing and creation of materials are selected. Then the elements of the system are defined (see Table 2).
Table 2. System elements.
Command Xbox 360. Function: Used to mobilize the user through the virtual environment. Technical detail: Wireless technology: 2.4 GHz. Maximum distance: 30 feet. Integrated communication port. Adjustable AA battery.
PC Asus. Function: PC used to run the Machu Picchu virtual guide application. Technical detail: Brand: ASUS ROG. Processor type: Intel Core i7 3rd Gen. Processor speed: 2.3 GHz. Video card: Nvidia mx660 2 GB. Memory: 32 GB. Hard disk capacity: 750 GB. Screen size: 15.6". Operating system: Windows 8.1.
Oculus Rift DK2. Function: Used for the user to visualize Machu Picchu in an immersive way. Technical detail: Head tracking: 6 degrees of freedom with low latency. Field of vision: 100 degrees diagonally. Display technology: OLED. Resolution: 1920 × 1080 (960 × 1080 per eye). Inputs: DVI/HDMI and USB. Platforms: PC. Weight: 440 g.
Figure 4 shows the design-level solution of the immersive virtual environment.
4.3 Construction Phase
In this phase the virtual environment is developed. For this purpose, the assets are created (3D creation and texturing). In order to create the virtual environment of Machu Picchu, the main walls as well as the terrain must be generated. To do this, a 3D surface of the "Poly" type, whose faces and vertices can be modified, is created in the 3D Studio Max program (see Fig. 5).
Fig. 4. Solution design diagram
Fig. 5. 3D surface; wall with the selected texture, modeled and textured
4.4 Transition Phase
In this phase, the fully operational and functional virtual environment is delivered to the hotel manager for implementation, after the tests were carried out. The results of the implementation of the immersive virtual reality system significantly improve the way the guest can learn about a tourist site on request. In this way, a new peripheral tourist information service is provided to hotel guests.
To determine whether the virtual tour met this objective, a comparison was made between the indicators before and after the implementation of the virtual system, through a survey of 16 hotel guests. A 22-question questionnaire was designed, based on the SERVQUAL model. The results show that the guests' perception of the hotel's tourist information services changed in the tangibility index (two questions) from 0.5 to 0.7. This indicates to the hotel that the replacement of the old ways of communicating tourist information is positively accepted by the guests. The limitations of this study are that the design only covers the urban sector of the citadel of Machu Picchu and has only the basic functions of exploration and of written and oral descriptions at the informative level of the citadel.
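As an illustration of how such a dimension index can be obtained, the sketch below computes it as the mean of the raw item scores normalized to [0, 1]; the exact procedure and the response data are not given in the paper, so both the function and the scores are hypothetical.

```python
def dimension_index(scores, scale_max=5):
    """Mean of the raw Likert responses for the items of one SERVQUAL
    dimension, normalized to the [0, 1] range."""
    return sum(scores) / (len(scores) * scale_max)

# Hypothetical responses to the two tangibility questions from the 16 guests
before = dimension_index([2, 3, 2, 3] * 8)   # -> 0.5
after  = dimension_index([4, 3, 4, 3] * 8)   # -> 0.7
```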
5 Conclusions
This article presents the implementation of virtual environments using 3D technology to make a virtual visit to the Machu Picchu citadel. The UP4VED methodology was used for the construction. 3D virtual technology allows for immersive and interactive experiences. In this way, users who wish to visit the citadel can do so using the Oculus Rift. The application was tested with hotel guests, who showed their satisfaction with the product. In times like these of COVID-19, 3D technologies can be applied to develop products for education, health, tourism, and culture, among others.
References
1. Deggim, S., Kersten, T., Tschirschwitz, F., Hinrichsen, N.: Segeberg 1600—reconstructing a historic town for virtual reality visualisation as an immersive experience. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 42(2/W8), 87–94 (2017)
2. Tussyadiah, L., Wang, D., Jia, C.: Virtual reality and attitudes toward tourism destinations. In: Schegg, R., Stangl, B. (eds.) Information and Communication Technologies in Tourism 2017, Rome, Italy, pp. 229–239. Springer, Cham (2017)
3. Han, D.-I.D., Weber, J., Bastiaansen, M., Mitas, O., Lub, X.: Virtual and augmented reality technologies to enhance the visitor experience in cultural tourism. In: tom Dieck, M.C., Jung, T. (eds.) Augmented Reality and Virtual Reality. PI, pp. 113–128. Springer, Cham (2019)
4. Büyüksalih, G., Kan, T., Özkan, G.E., Meriç, M., Isın, L., Kersten, T.P.: Preserving the knowledge of the past through virtual visits: from 3D laser scanning to virtual reality visualisation at the Istanbul Çatalca İnceğiz caves. PFG – J. Photogramm. Remote Sens. Geoinf. Sci. 88(2), 133–146 (2020)
5. Jung, T., tom Dieck, M.C., Lee, H., Chung, N.: Effects of virtual reality and augmented reality on visitors' experience in museums. In: Inversini, A., Schegg, R. (eds.) Information and Communication Technologies in Tourism, pp. 621–635 (2016)
6. Gibson, A., O'Rawe, M.: Virtual reality as a travel promotional tool: insights from a consumer travel fair. In: Jung, T., tom Dieck, M. (eds.) Augmented Reality and Virtual Reality, pp. 93–107. Cham (2018)
7. Bayona, S., Gonzales, K., Cuadros, R.: Implementation of 3D virtual environment using Web3D technologies. In: 2016 11th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6. IEEE (2016)
8. Bozgeyikli, L., Bozgeyikli, E., Raij, A., Alqasemi, R., Katkoori, S., Dubey, R.: Vocational rehabilitation of individuals with autism spectrum disorder with virtual reality. ACM Trans. Access. Comput. 10(2), 1–25 (2017)
9. Zhang, J., Yang, Y.: Design and implementation of virtual museum based on Web3D. Trans. Edutainment III, LNCS 5940, 154–165 (2009)
10. Nadal, J.: http://www.recercat.net/handle/2072/196454
11. Wyk, E., Villiers, R.: Virtual reality training applications for the mining industry. In: 6th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, pp. 53–63. ACM, New York (2008)
12. Cardona, J.: UP4VED. http://www.jdcardona.com/post/34147285538/up4ved
13. https://www.cuscoperu.com/en/travel/machu-picchu/citadel-of-machu-picchu
14. https://es.wikipedia.org/wiki/Machu_Picchu
15. Schultheis, M., Rizzo, A.: The application of virtual reality technology in rehabilitation. Rehabil. Psychol. 46(3), 296–311 (2001)
16. Wolfartsberger, J.: Analyzing the potential of virtual reality for engineering design review. Autom. Constr. 104, 27–37 (2019)
17. Beck, J., Rainoldi, M., Egger, R.: Virtual reality in tourism: a state-of-the-art review. Tourism Rev. 74(3), 586–612 (2019)
18. Unreal Engine. https://www.unrealengine.com/vr-page
EEG Data Modeling for Brain Connectivity Estimation in 3D Graphs Aurora Espinoza-Valdez, Adriana Peña Pérez Negrón(B) , Ricardo A. Salido-Ruiz, and David Bonilla Carranza Universidad de Guadalajara CUCEI, Blvd. Marcelino García Barragán #1421, 44430 Guadalajara, Jalisco, Mexico {aurora.espinoza,ricardo.salido, jose.bcarranza}@academicos.udg.mx, [email protected]
Abstract. The quantitative study of brain electrical activity through graphs is a useful tool to visualize the connectivity topology among different brain zones. Different systems use graphs for brain connectivity analysis; however, in this proposal, the analysis is dynamic regarding the number of electrodes. Also, the data feed a 3D visualization for rapid comprehension of the vertices' connections, related not only to their number of edges, as usually presented, but also to their connections' weight. Furthermore, the 3D visualization allows manipulating six brain zones independently, as well as the structure of the graph.
Keywords: Brain electrical activity · Brain zones · Graphs topology connectivity · EEG · 3D graphs connectivity
1 Introduction
The brain is one of the most complex organs in the human body; in it, functional, connected interactions take place. The study of brain connectivity helps to understand how electric (EEG, electroencephalography), magnetic (MEG, magnetoencephalography), or metabolic (fMRI, functional Magnetic Resonance Imaging) activity is propagated and connected through different brain zones [1]. There are a number of approaches for brain functional connectivity analysis, which describe connection dynamics through matrix representations. In turn, matrix representations are better characterized by graphs. Graph theory is a powerful means to explain neural network dynamics in normal or pathological processes. The complexity of graphs, due to the number of edges between vertices, has motivated the development of graph visualizers [2]. However, even though there are a number of 3D graph visualizers for brain data, most of them are for Magnetic Resonance Imaging (MRI) data, and just a few visualizers exist for other modalities such as MEG or EEG, which are more economical and more widely used in the scientific community.
EEG 3D graph visualizers are focused on the evaluation of brain connectivity. Therefore, this paper proposes this type of visualizer, but with new visualization features that help support the interpretation of brain zone connections from the graph theory perspective.
1.1 Background
Among the approaches for graph visualization of brain connectivity created on different computer platforms, Matlab®, Python, Delphi, and R stand out. Most of these visualizers are set for the Windows or Linux operating systems, with some exceptions [2]. Within 3D network graph visualization for Matlab® there is the BrainNet Viewer, for network visualization, which has a 3D brain surface [2], and the BCT, for network calculation, which performs network analysis [2, 3]. Also for Matlab® there are PANDA, for network construction from dMRI data, which runs only on Linux [2]; GRETNA, for network construction and calculation, which builds data from resting-state fMRI [4]; and GAT, for network construction and calculation from MRI data [5]. For EEG and Matlab there is also eConnectome, for connectivity calculation, which pre-processes data for connectivity analysis [6]. A recent visualizer for Matlab, BRAPH, allows creating graphs from different data types, such as structural MRI, functional MRI (fMRI), positron emission tomography (PET), and EEG, for brain connectivity analysis based on graph theory [7]. There are fewer approaches for Python. For 3D graphs, there is PyNN, for neuronal network modeling, but just for Linux [8]; the Connectome Viewer, for network visualization with a 3D brain surface, also just for Linux [9]; and NetworkX, for network calculation and visualization based on 2D graphs [10]. Visualizers in other programming languages such as C++, Delphi, and R are Caret, for network visualization with a 3D surface [11]; Pajek, for network visualization based on 2D graphs [12]; and Brainwaver, for network construction and calculation, which creates graphs from fMRI (using wavelets as a reference for signal comparisons) [13]. In the system presented here, we propose working the selection of electrodes in a dynamic way and then constructing 3D graphs, including the feature of working with independent brain zones.
2 EEG Data
Visualizers of electroencephalographic signals require pre-processing the data; that is, data must be in a certain format for the application. EEG signals are collected in a multichannel way in the time domain, which requires understanding the EEG channel relations through a matrix representation to build the graph. However, the EEG data is segmented in time intervals, which requires the use of time windows for its analysis. Once a time window is selected, it is broken down into frequency bands, making it possible to get as many matrix representations as frequency bands and time windows are required, registering them as needed. The matrix representations are clustered in tensors of order n, where n is the number of matrix representations for each
time window. For each window time, it can be n matrix representations, and therefore n number of graphs. Then, for each m EEG time interval, there are m × n graphs, where the automatic representations of their relations support a required fast analysis. A matrix representation requires different matrix types from the different EEG relations of the registered channels. A matrix can show, for example, the time domain relation such as the Pearson coefficient correlation to understand similarities on the waveform or signals face of the channels. Other more popular EEG matrix representations are the coherence and partial directed coherence (PDC) relations for the frequency domain of the channels, which are interpreted as the functional connectivity coefficient among the different brain zones with or without information flux direction. Next are described some of the most common EEG matrix representations. 2.1 Coherence Coherence is a function in the frequency domain with magnitudes normalized in values between 0 and 1. Coherence indicates the extent to which an input signal corresponds to an output signal in the frequency domain (see Fig. 1), that is, the quadratic correlation coefficient for the amplitude and phase between two signals [14].
Fig. 1. Coherence as a function of frequency
Often it is not necessary or interesting to understand the behavior of the coherence function over all frequency values. For this reason, only a representative value (e.g., the mean or the highest value) of the function is extracted for a defined frequency interval. In other words, only one value per frequency band of interest is calculated to relate all the involved elements (i.e., the EEG electrodes). It is then possible to construct a matrix whose elements are the average, the highest value, or another criterion over the coherence values in the selected frequency band. In this way, the relation between elements (electrodes) is represented by a single value through the matrix representation, as shown in Fig. 2.
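As an illustration of this band-averaging step, the following is a minimal sketch, assuming SciPy's magnitude-squared coherence estimator; the function and parameter names are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy.signal import coherence

def coherence_matrix(eeg, fs, band, nperseg=256):
    """Band-averaged magnitude-squared coherence between all channel pairs.

    eeg  : array of shape (n_channels, n_samples) for one time window
    fs   : sampling rate in Hz
    band : (low, high) frequency band of interest, e.g. (8, 13) for alpha
    """
    n = eeg.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=nperseg)
            mask = (f >= band[0]) & (f <= band[1])
            C[i, j] = C[j, i] = cxy[mask].mean()   # one value per channel pair
    np.fill_diagonal(C, 1.0)   # a channel is fully coherent with itself
    return C

# Example: alpha-band coherence matrix for a 5-s, 32-channel window at 250 Hz
eeg = np.random.randn(32, 5 * 250)                 # placeholder data
C_alpha = coherence_matrix(eeg, fs=250, band=(8, 13))
```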
Fig. 2. Coherence matrix (right) for a selected frequency band, related to the connectivity of electrodes (left). The intensity level in the matrix is related to the coherence coefficient, where white is 0 and black is 1. Here, the flow direction is not considered.
2.2 Partial Directed Coherence (PDC)
The partial directed coherence (PDC) [15], also normalized between 0 and 1, shows the direct flow between two channels i, j. PDC is normalized to show the relation between the output of a channel j towards a channel i with respect to all the outputs of the source channel j, so the PDC highlights the flow relative to its origin. The same connectivity shown in Fig. 2 can be represented by a PDC matrix, as shown in Fig. 3.
Fig. 3. PDC matrix (right) for a selected frequency band related to the connectivity of electrodes (left). The intensity level in the matrix is related to coherence coefficient, where white is 0, and black is 1. Here, the flow direction is considered.
In the coherence or PDC matrix, each value represents the average coherence between the i, j EEG channels in a specific frequency band. These matrices represent undirected and directed graphs, respectively.
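For reference, the standard PDC definition from [15] can be written as follows; the MVAR notation is assumed from that literature and is not spelled out in this paper.

```latex
% PDC from channel j to channel i at frequency f (Baccalá & Sameshima [15]),
% where \bar{A}(f) = I - \sum_{r=1}^{p} A_r e^{-i 2\pi f r} is obtained from
% the coefficient matrices A_r of a fitted MVAR model of order p.
\pi_{ij}(f) = \frac{\left|\bar{A}_{ij}(f)\right|}
                   {\sqrt{\sum_{k=1}^{N} \left|\bar{A}_{kj}(f)\right|^{2}}}
```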
2.3 Graphs
A graph is defined as an ordered triplet (V(G), E(G), Ψ_G) that consists of a non-empty set V(G) of vertices, a set E(G) of edges, and the incidence function Ψ_G that associates each edge with an unordered pair, not necessarily distinct, of vertices of G. That is, Ψ_G: E(G) → W(G), where W(G) is the set of vertex pairs defined as W(G) = {{u, v} | u, v ∈ V(G)}; if Ψ_G(e) = {u, v} for e ∈ E(G), then e joins u and v [16]. From this general definition, the following basic graph definitions are derived.
Definition 1. The distance d_ij between two vertices v_i and v_j is the length of the shortest path between them, helpful to establish metrics and graph properties [16]. In this case, the distance for a directed or undirected graph is calculated without self-loops. Distance is also useful to calculate centralities.
Definition 2. The degree k_i of a vertex v_i of an undirected graph is the number of edges adjacent to the vertex. For a directed graph, there are two types: (1) the in-degree k_in of a vertex is the number of arrows into the vertex v_i; (2) the out-degree k_out of a vertex is the number of arrows that go out of the vertex v_i [16]. The degree distribution indicates how many vertices have a certain degree in the network. The concepts of degree and distance in graphs are intuitive indicators of the topological characteristics of a connectivity graph. They help to determine the number of vertex connections, their distance, and their neighborhood interrelation. Furthermore, degree and distance also allow determining the nodes in the generated structure.
Definition 3. A graph G is connected if there is a path between any two vertices of G [16].
Definition 4. For a graph G defined over V, the adjacency matrix is defined as A(G) = (a_ij), where a_ij = c (c ∈ ℝ) if (i, j) ∈ E, and a_ij = 0 if (i, j) ∉ E [16]. By expressing a graph through a matrix, matrix theory can be applied to get information from the graph. The adjacency matrix fulfills the properties for matrix analysis; that is, all the values can be analyzed if G is a complete graph. For the coherence metric, the adjacency matrix is symmetric, yielding an undirected graph; for PDC, an asymmetric adjacency matrix is obtained, yielding a directed graph.
Definition 5. For a graph G, the Laplacian matrix is defined as L(G) = (l_ij), where l_ij = d(i) if i = j; l_ij = −1 if i ≠ j and (i, j) ∈ E; and l_ij = 0 otherwise [17]. The Laplacian matrix of a graph G contains interesting properties of the graph topology, for example, to know whether it is a connected graph, since connectivity must be guaranteed to apply some graph theory metrics. This also allows determining the graph connection homogeneity and the temporal dynamics. In fact, a series of metrics associated with the topology of the graph can be described, which allows drawing conclusions about its behavior by analyzing how the vertices of the graph are connected.
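The definitions above map directly onto standard graph libraries. The following is a minimal sketch, assuming NetworkX and a precomputed coherence matrix; the variable names and the placeholder data are illustrative.

```python
import networkx as nx
import numpy as np

# Placeholder symmetric coherence matrix (Definition 4, undirected case)
C = np.random.rand(32, 32)
C = (C + C.T) / 2
np.fill_diagonal(C, 0)                     # no self-loops, as in Definition 1

G = nx.from_numpy_array(C)                 # weighted undirected graph

degrees = dict(G.degree())                 # vertex degrees (Definition 2)
if nx.is_connected(G):                     # connectedness (Definition 3)
    # shortest-path distances between all vertex pairs (Definition 1)
    dist = dict(nx.all_pairs_shortest_path_length(G))
L = nx.laplacian_matrix(G).toarray()       # Laplacian matrix (Definition 5)

# For an asymmetric PDC matrix P, a directed graph would be used instead:
# G_dir = nx.from_numpy_array(P, create_using=nx.DiGraph)
# with dict(G_dir.in_degree()) and dict(G_dir.out_degree()) for Definition 2.
```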
3 Pre-processing Data
Currently, there are a number of full analyses based on graph topology in free software [4, 7], but in this case, a static comparison is also proposed, along with connectivity property analyses to quantify the temporal dynamics of the graph. Likewise, the differences between the matrix families are observed for connectivity analysis. Figure 4 presents the structure of the design of the visualizer, which contains different blocks of data processing; the following subsections explain these blocks in detail.
3.1 Time Window Selection
The EEG electrophysiological signals are segmented for their analysis into time intervals, in which an event of interest can be selected. Typically, several segments of interest are selected, generating a great amount of data. It has to be kept in mind that for each segment, a matrix representation supports the connectivity analysis.
3.2 Electrodes Selection
The graph visualizer uses pre-processed data from the EEG adjacency matrices describing the multichannel relations. The number of vertices is taken from the matrix dimensions, which cannot be modified. In our proposed visualizer, the electrodes of the drawn EEG graph can be selected before the connectivity analysis. This option also allows creating individual graphs for each brain zone for separate analysis.
3.3 Matrix Representation Selection
According to the functional connectivity analysis, different matrix representations over the EEG data can be estimated over different brain connections. Among them, the best known are the coherence and the PDC, the latter based on Granger causality, whose analysis is used to determine the direct causal interaction among electrophysiological signals [6, 15]. Both estimators have a coefficient given as a function of frequency. That is to say, to make a frequency band analysis, all the coefficients in the frequency band of interest must be averaged. In EEG, usually five frequency bands are studied, δ, θ, α, β, and γ, generating a matrix representation for each one and the elected estimator. Therefore, the chosen brain connectivity estimator can yield five different graphs.
3.4 Graph Analysis
Figure 5 shows the steps for graph analysis. From the coherence or PDC matrix of the EEG, the first step is to implement a dynamic threshold according to requirements. The dynamic threshold supports dynamic connectivity according to: 1) the most significant connections; 2) the graph connectivity; and 3) an average of a set of connections (a minimal sketch of the first criterion is shown below). Afterward, different graph metrics can be established for a detailed analysis. Then a 3D graph can be drawn, which can vary through time.
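The following sketch illustrates the first criterion (keeping only the most significant connections); the density parameter and the function name are illustrative assumptions, not the visualizer's actual interface.

```python
import numpy as np

def dynamic_threshold(A, density=0.2):
    """Keep only the strongest connections of a connectivity matrix A.

    density : fraction of the strongest off-diagonal weights to keep,
              one possible realization of 'most significant connections'.
    """
    A = A.copy()
    np.fill_diagonal(A, 0)
    weights = np.sort(A[A > 0].ravel())[::-1]   # weights in descending order
    if weights.size == 0:
        return A
    k = max(1, int(density * weights.size))
    tau = weights[k - 1]                        # data-driven threshold
    A[A < tau] = 0
    return A
```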
Fig. 4. Visualizer design diagram.
3.5 3D Visualizer
Using the pre-processed data from the EEG, a matrix is generated. With such a matrix, 3D images can be created. As can be observed in Fig. 4, the number of connections to a
Fig. 5. Steps to process data for graph analysis.
vertex is represented by the vertex size. Likewise, the thickness of an edge corresponds to the strength of the connection, a number between 0 and 1. For a symmetric matrix, an undirected graph is inferred; in the case of an asymmetric matrix, a directed graph is assumed, and the edges are represented by arrows denoting the direction of the connection. The 3D visualizer is a JavaScript development using the React development tools and the three.js library. A JSON (JavaScript Object Notation) file with the graphs is the means to create independent images. It is a web application. Figure 6 depicts four application screens. First, the user accesses the application with a password. Then, in the menu, the JSON file can be uploaded, or a tool to convert CSV files to JSON files can be applied. Through the JSON file, the graph is constructed.
Fig. 6. Application start menu for the session, menu to initialize the visualizer, instructions to convert CSV to JSON files, and the display of the graph.
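As an illustration of such a conversion tool, the sketch below turns a CSV adjacency matrix into a nodes/links JSON file; the JSON schema is hypothetical, since the format actually expected by the visualizer is not described in the paper.

```python
import csv
import json

def csv_to_graph_json(csv_path, json_path):
    """Convert a square adjacency matrix stored as CSV (header row with
    electrode names) into a hypothetical nodes/links JSON graph file."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    names = rows[0]
    values = [[float(x) for x in r] for r in rows[1:]]
    graph = {
        "nodes": [{"id": name} for name in names],
        "links": [
            {"source": names[i], "target": names[j], "weight": values[i][j]}
            for i in range(len(names))
            for j in range(len(names))
            if i != j and values[i][j] > 0
        ],
    }
    with open(json_path, "w") as f:
        json.dump(graph, f, indent=2)
```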
Once created, the 3D images are interactive. The user can zoom in on the 3D figure, rotate it to see it from different perspectives, and select certain nodes and manipulate them, that is, move or rotate them without losing the connections, as can be observed in Fig. 7.
Fig. 7. Snapshots of the graph views.
Any view of the 3D connectivity graph representation can be downloaded as an image. Also, another JSON file can be uploaded to change the 3D graph at any time. Finally, a sequence of graphs can be used to generate an animation with the changes through the different connections. In Fig. 8, the changes in the connections and in the vertex degree (vertex size) can be appreciated across different time sequences.
Fig. 8. Dynamic graph across the study time.
4 Conclusions
This paper presents the design of a 3D visualizer of brain connectivity for EEG data. The proposal contemplates the frequency bands and the electrodes selected for the analysis. It is based on graph theory to characterize brain connectivity in 3D images. Through the connectivity of the graph, several metrics can be applied to a dynamic topology of the graphs. There are a number of tools for brain connectivity visualization; however, in the proposed system, the number of electrodes used to build the graphs can be selected by brain region. Furthermore, the interactive 3D graph carries visual information that supports a fast visual analysis of the graph structure among brain zones.
References
1. Ioannides, A.A.: Dynamic functional connectivity. Curr. Opin. Neurobiol. 17(2), 161–170 (2007). https://doi.org/10.1016/j.conb.2007.03.008
2. Xia, M., Wang, J., He, Y.: BrainNet Viewer: a network visualization tool for human brain connectomics. PLoS ONE 8(7) (2013)
3. Rubinov, M., Sporns, O.: Complex network measures of brain connectivity: uses and interpretations. Neuroimage 52(3), 1059–1069 (2010)
4. Wang, J., Wang, X., Xia, M., Liao, X., Evans, A., He, Y.: GRETNA: a graph theoretical network analysis toolbox for imaging connectomics. Front. Hum. Neurosci. 9, 386 (2015)
5. Hosseini, S.M.H., Hoeft, F., Kesler, S.R.: GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks. PLoS ONE 7(7), e40709 (2012). https://doi.org/10.1371/journal.pone.0040709
6. He, B., Dai, Y., Astolfi, L., Babiloni, F., Yuan, H., Yang, L.: eConnectome: a MATLAB toolbox for mapping and imaging of brain functional connectivity. J. Neurosci. Methods 195(2), 261–269 (2011). https://doi.org/10.1016/j.jneumeth.2010.11.015
7. Mijalkov, M., Kakaei, E., Pereira, J.B., Westman, E., Volpe, G.: BRAPH: a graph theory software for the analysis of brain connectivity. PLoS ONE 12(8), e0178798 (2017)
8. Davison, A.P., Brüderle, D., Eppler, J., Kremkow, J., Muller, E., Pecevski, D., Perrinet, L., Yger, P.: PyNN: a common interface for neuronal network simulators. Front. Neuroinform. 2, 11 (2009)
9. Gerhard, S., Daducci, A., Lemkaddem, A., Meuli, R., Thiran, J.P., Hagmann, P.: The connectome viewer toolkit: an open source framework to manage, analyze, and visualize connectomes. Front. Neuroinform. 5, 3 (2011)
10. Hagberg, A., Swart, P., Chult, D.S.: Exploring network structure, dynamics, and function using NetworkX. No. LA-UR-08-05495. Los Alamos National Lab. (LANL), Los Alamos, NM, United States (2008)
11. Kuhn, M.: Building predictive models in R using the caret package. J. Stat. Softw. 28(5), 1–26 (2008)
12. Batagelj, V., Mrvar, A.: Pajek—analysis and visualization of large networks. In: Jünger, M., Mutzel, P. (eds.) Graph Drawing Software. Mathematics and Visualization, pp. 77–103. Springer, Heidelberg (2004)
13. Achard, S.: Brainwaver: basic wavelet analysis of multivariate time series with a visualisation and parameterization using graph theory. R package version 1 (2012)
14. Marple Jr., S.L., Carey, W.M.: Digital spectral analysis with applications (1989)
15. Baccalá, L., Sameshima, K.: Partial directed coherence: a new concept in neural structure determination. Biol. Cybern. 84, 463–474 (2001). https://doi.org/10.1007/PL00007990
16. Diestel, R.: Graph Theory. Graduate Texts in Mathematics, vol. 173. Springer (2005)
17. Chung, F.R.K.: Spectral Graph Theory. American Mathematical Society, Rhode Island (2009)
Cutting-Edge Technology for Video Games Adriana Peña Pérez Negrón, David Bonilla Carranza(B) , and Jorge Berumen Mora Universidad de Guadalajara CUCEI, Blvd. Marcelino García Barragán #1421, 44430 Guadalajara, Jalisco, Mexico [email protected], [email protected], [email protected]
Abstract. Video games have become the dominant industry for media entertainment on their different platforms: consoles, computers, and mobile devices. Video games are also dabbling in many different areas, such as training, education, and health. This multimillion-dollar industry is pushing the development of new technologies and, at the same time, taking advantage of them. In a game, the player pretends a reality, and in video games this pretended reality is increasingly becoming an immersive experience, thanks to those new technologies. This paper discusses the implications of the use of cutting-edge technologies in video games aimed at enhancing the player experience by creating more believable gaming situations and placing the player at the center of the game.
Keywords: Player experience · Gameplay · Video game trends · Technology trends
1 Introduction
Ernest Adams [1] defined a game as a play activity in a pretended reality, where the players try to achieve arbitrary nontrivial goals, acting according to specific rules. As the author stated, this is not a rigorous but a convenient game description that covers most cases. This definition highlights four essential elements of games [1]: 1. Play, the participatory, interactive element in this entertainment activity; 2. Pretending, creating a notion of reality in the players' minds; 3. Goals, the objectives of the game, which should not be trivial in order to generate a challenge; and 4. Rules, the definitions and instructions that the players accept in order to play the game. Video games are part of the games universe, but mediated by a digital device. Video games represent a billion-dollar industry [2, 3]. The introduction of mobile platforms in 2008 for their distribution, and their diversification, has strongly contributed to expanding the use of video games all over the world; since then, more game engines, tools, and platforms have become available for the creation of digital games [3].
Although video games constitute an important part of the software industry, the research community has not studied them enough. Likewise, even though video games are software, their development differs from 'traditional' or non-game software development [4]. The development of video games involves the collaborative labor of design and programming, a joint of technical and creative efforts [5]. Unlike conventional games, from a functional point of view, video games do not require written rules. However, as players need to understand how to play, the rules are somehow hidden in the game, demanding a proper design to allow learning the rules through playing. In video games, the pace or rate of the events that occur is set by the video game and not only by the player, as in traditional games. Likewise, a video game automatically determines when the player reaches a goal, indicating victory or defeat. Furthermore, because video games take place in a computer-generated scenario, they put on the table pictures, animation, movies, music, dialog, and sound effects that conventional games cannot provide [1], giving way to the player experience. The player experience is established from the gameplay, the gaming process of the player within the game, influenced by factors such as genre, platform, setting, hardware, or mood, which set the experience outcome [6, 7]. Linking the game experience with game design elements is still ongoing work. However, through a combination of metrical game data and users' feedback (e.g., biofeedback or surveys), it might be possible to relate the game experience to design elements [7]. In any case, new technologies, or new ways to implement design elements in video games, should have an impact on the player experience. New technologies can also be included in the video game through input/output devices to enhance the player experience, for example, through three-dimensional (3D) content in Virtual Reality (VR), or a mix of the video game with real life through Augmented Reality (AR), both representative of the user interface (UI). The UI translates the player's inputs into actions in the game, passing them to the core mechanics and presenting feedback elements. The UI has a direct impact on how the player perceives the game. Furthermore, sensors can bring information from the player and the real world into the game. All of this makes the game visible, audible, and playable as part of the generation of the player experience [1]. Another important feature in video games worth mentioning, related to the UI, is the camera perspective. In the first-person perspective, the player sees the game as if through their own eyes, while in a third-person perspective, the player sees their avatar. As can be inferred from the above, the computational aspect of video games allows including cutting-edge technologies to create new and innovative gameplay experiences. In this context, new computing paradigms, such as Artificial Intelligence (AI), play an important role in, for example, generating strategies that determine a variety of available options to enrich the game, producing creatures to fight in believable simulations, or creating fictitious players in order to give a solo player the multiplayer experience. Video games are part of the daily life of new generations, with benefits that are still in the process of being understood.
1.1 Advantages of Video Games

Despite the widespread use of video games, research on this topic is still at an early stage [3, 4, 8]. Several studies from a psychological point of view have focused on their negative impact, with themes such as violence, addiction, and depression. Lately, however, research has also focused on their psychological benefits, related to cognitive, motivational, emotional, self-esteem, and social factors. Improvements in specific skills, such as reaction time, hand-eye coordination, and spatial visualization, have also been found [9, 10].
Video games have also been linked to several benefits in education. An important advantage in this field is their motivational aspect: people like to play games. Video games have been used to support, for example, language development, math, reading, and social skills [10].
Another important field that has taken advantage of the motivational impact of video games is health. For instance, it has been found that the attention needed to play can distract a person from the sensation of pain. In this regard, video games have been proposed to distract children during cancer chemotherapy and the treatment of burn injuries [11, 12]. In therapy, video games help people deal with repetitive, boring exercises, and they support rehabilitation in several areas such as perceptual disorders, memory, or language difficulties [12]. Likewise, active video games can promote healthy habits. In this context, personalization of the player's interests through content- or rule-based filtering to infer player preferences, together with new technologies like wearable and mobile devices, has been used to encourage people to exercise outdoors [13].
This paper presents cutting-edge technologies that influence the player experience or represent a possibility for inclusion in video games to enhance it. These technologies cover diverse approaches that include data treatment, 3D-based technologies, the mixing of real life with digital data, and biofeedback through sensors and wearable devices.
2 Cutting-Edge Technologies for Video Games

Technology is evolving, enabling fast changes in our daily life, and the video game industry has contributed to some of these advances while benefiting from them. Some of these new technologies are already part of the video game community, and others represent an area of opportunity that might be integrated to enrich the player experience. Although it is always risky to predict new trends, a selection of cutting-edge technologies was made by connecting them to video games.

2.1 Cloud Computing

Cloud computing represents the on-demand availability of computing resources over the Internet, a tendency that is gaining strength to support video games without requiring powerful gaming devices. By streaming the video game, this technology is the base of other technologies that present advantages for video games. Among the best-known streaming platforms in video games is Playstation™, which allows downloading the game or streaming it, requiring an Internet bandwidth of
at least 5 Mbps. GeForce NOW™ is also a recognized platform for streaming games; and Google Stadia™ is a video game subscription on the Cloud for computers, mobile devices, and smart TVs, with several AAA titles (games with super-production characteristics).
Streaming games are the base for massively multiplayer online games (MMOG), more commonly known as MMO; these games enable players to cooperate and compete with each other on a large scale. Thousands of players can be connected simultaneously and make the most of 5G, Edge Computing, and Big Data.
5G data networks will allow more and faster data interchange with the Cloud, and more stable connections. For video games, this means less latency, higher-fidelity graphics, and better performance in MMO. This technology will mainly promote mobile-based games, although, up to now, no game requires it.
Edge Computing is a distributed computing paradigm that brings computation and data storage closer to where they are needed, improving response time and saving bandwidth. Although, at the moment, Edge Computing has been used only for processing time-sensitive data, at some point it might benefit video games in the same respects as 5G.
Big Data refers to massive volumes of structured or unstructured data, and the field studies how to treat such volumes to extract information. One of the main reasons for the proliferation of online and social gaming is that big data on players' behavior helps to understand what is working well in the game, and is therefore used to become or stay successful; this is the area of game telemetry [14].

2.2 Artificial Intelligence

All video games include some type of Artificial Intelligence (AI) [15]; some of the most common uses of AI in video games are creating opponents, generating content, and treating data to understand players' behavior. In video games, AI takes care of the behavior and decision-making process of non-player characters (NPC). Machine-learning techniques enable NPCs to improve their performance by learning and adapting to the player's strengths and weaknesses or by imitating their opponents' tactics [15, 16].
According to [17], one of the biggest AI-driven games of modern times is 'Alien: Isolation', a horror game based on the aesthetic and style of Ridley Scott's movie. The NPC was designed with tried and tested AI techniques, because the alien demands a large amount of experience compared with short-lived NPC characters. The NPC architecture for 'Alien: Isolation' uses two management systems: a macro-level director that periodically points the alien toward the player, and a micro-level alien system that hunts the player.
AI has also been applied to create content in the game. Procedural content generation has become a standard practice in the industry. In this context, AI has been applied to generate, for example, game assets, taking that work off designers. Furthermore, AI has also been utilized to generate game levels based on the player's preferences [18].
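To make the two-layer architecture just described more tangible, the following minimal sketch (our own illustration with hypothetical names, not the actual 'Alien: Isolation' code) shows a macro-level director that periodically leaks an approximate player position to a micro-level hunting agent, which otherwise moves on its own:

```python
import math
import random

class Director:
    """Macro layer: periodically points the hunter toward the player."""
    def __init__(self, hint_interval=30.0):
        self.hint_interval = hint_interval  # seconds between hints
        self.timer = 0.0

    def update(self, dt, player_pos, hunter):
        self.timer += dt
        if self.timer >= self.hint_interval:
            self.timer = 0.0
            hunter.receive_hint(player_pos)

class Hunter:
    """Micro layer: searches around its last hint and hunts the player."""
    def __init__(self, pos):
        self.pos = pos
        self.target = None

    def receive_hint(self, player_pos):
        # The hint is deliberately imprecise so the NPC feels fair.
        self.target = (player_pos[0] + random.uniform(-5, 5),
                       player_pos[1] + random.uniform(-5, 5))

    def update(self, dt, speed=2.0):
        if self.target is None:
            return
        dx, dy = self.target[0] - self.pos[0], self.target[1] - self.pos[1]
        dist = math.hypot(dx, dy)
        if dist < 0.1:
            self.target = None  # arrived: resume idle searching
        else:
            step = min(speed * dt, dist)
            self.pos = (self.pos[0] + step * dx / dist,
                        self.pos[1] + step * dy / dist)

# One simplified frame of a game loop:
director, hunter = Director(), Hunter(pos=(0.0, 0.0))
director.update(dt=31.0, player_pos=(10.0, 4.0), hunter=hunter)
hunter.update(dt=0.016)
```

Splitting knowledge in this way keeps the NPC threatening without letting it 'cheat' by always knowing the player's exact location, which matches the unpredictability players report.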
Games routinely collect data about how players experience the game; this information is fed into AI algorithms and is ultimately used to predict game adaptations according to the players' predilections.
Another AI advantage for video games is its potential to create playmates. In the action-adventure first-person shooter (FPS) video game 'Wolfenstein: The Old Blood', developed by MachineGames™ and released in May 2015 for Microsoft Windows, PlayStation 4, and Xbox One consoles, AI can control a playmate. The story is about two sisters looking for their father and helping the French Resistance against the Nazis. The story can be played alone or through two interconnected campaigns. The player takes the role of one of the sisters, and in cooperation mode the other sister can be another player or an artificial character. AI supports the companion sister's actions, for example, teleporting the player to avoid losing health points or eliminating enemies, particularly those who cause substantial damage to the player. If possible, the artificial companion will hide and never start a fight on its own [19].

2.3 3D Technologies

The development of computer graphics has always been linked to video games. Lately, the use of 3D graphics is regular practice to generate excellent scenarios and avatars for the players. 3D models are mathematical representations of polygonal objects, with texture, shadows, transparencies, reflections, lights, and point of view, among other characteristics, used to generate 3D content. Through the rendering process, the 3D model is displayed as a 3D graphic, which is fundamental for Virtual Reality [5].
Virtual Reality (VR) consists of computer-generated 3D scenarios with which the user can interact. Most computer games can be successfully transformed into VR, taking advantage of modern devices for immersion. New games and new content based on VR are emerging for high-end and mobile games [20]. The 'Half-Life: Alyx' game is considered one of the first proper high-budget VR efforts. In it, the main method for user interaction is a gravity gun, which can be picked up to manipulate any object in the scenario. Its gravity gloves also have a grab button so that, through a wrist movement, the user can manipulate the grabbed object [21]. Figure 1 shows a scene from the 'Half-Life: Alyx' announcement trailer, where the user is loading the gun.

Fig. 1. A Half-Life: Alyx screenshot from the announcement trailer.

VR is also giving rise to hyper location-based entertainment (LBE), in which what the player sees corresponds to the physical space of the room, where they can run, duck, or reach and feel virtual objects, getting tactile feedback [22].
For 3D content, the 3D models can be artistically made with a computer-aided design application, but also directly scanned from a physical object through a 3D scanner. This technology is helpful for digitalizing objects to be included in the game, for example, a replica of the player. Moreover, 3D printing can be included in games that require inspecting objects to solve certain puzzles; at some point, the player could also print objects to get tactile feedback from the video game. In this area, the Thing3D™ startup enables the creation of connected action figures that interface with video games. The FabZat™ system, integrated into a mobile game, allows customizing a game character to be 3D printed; and the Toyze™ app allows customizing popular game characters, also to be 3D printed [23].
Holograms, large-scale holograms illuminated with lasers, can show 3D objects that can be observed from different perspectives. Through holography, some parts of the game could be displayed outside the computer to create immersion. This is not a new idea: the arcade game 'Time Traveler', released in 1991, was the first holographic video game. In a special arcade cabinet, the characters are displayed using reflection to make them appear free-standing, see Fig. 2.
Fig. 2. Time Traveler arcade game.
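To make the notion of a 3D model as a mathematical representation concrete, the short sketch below (an illustration of the general idea, not any particular engine's API) stores a polygonal object as vertices plus triangle indices and applies a rotation, one of the transforms a renderer evaluates before projecting the model to the screen:

```python
import math

# A unit square modeled as two triangles (vertex list + index list).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
triangles = [(0, 1, 2), (0, 2, 3)]

def rotate_y(v, angle):
    """Rotate a vertex around the Y axis, as a renderer would before projection."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

rotated = [rotate_y(v, math.radians(45)) for v in vertices]
print(rotated)  # transformed vertices, ready for projection and shading
```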
2.4 Real-World Embedded in Video Games

In video games, real-life objects and situations have been introduced to create new and different playing experiences. This includes the use of sensors, the user's location, or the display of graphics over real scenarios. Through portable and wearable devices, the real world can also be connected to the virtual world, evolving into new types of mixed situations. A usual starting point to differentiate and understand the degree of mixture of real and virtual factors in these technologies is the well-known continuum of Milgram and Kishino [24], which places the real world on the left edge and VR on the right one; moving to the right of the continuum increases the degree of computer-generated stimuli. In the middle of the continuum lies Mixed Reality (MR), within which Augmented Reality (AR) and Augmented Virtuality (AV) can be found. However, as pointed out by [25], the boundaries between these hybrid experiences have not been entirely established by researchers and practitioners; following these authors, MR, AR and AV are briefly described next.
Mixed Reality (MR): a distinction that differentiates 'pure' MR from AR and AV is that in MR the digital content is merged into the real world, interacting in real time [25]. MR has been proposed to overcome the lack of facial expressions on avatars, or to include natural gestures to increase co-presence, by using cameras that capture the user's real body and merge it into a virtual environment [26]. In video games, this type of merging can place the real player into the video game, increasing the realism of the players' avatars.
Augmented Reality (AR) represents the interaction with real-world objects somehow improved or augmented by computer-generated perceptual information. The 'Pokémon Go' game is the best-known example of AR in video games. It uses the mobile device's GPS to locate, capture, battle, and train Pokémon virtual creatures that seem to be in a real-world location. Going a little further with AR, the 'Minecraft Earth' game allows constructing models that can be seen in a real-world setting through a mobile device. The game also puts out objects that can be seen through the mobile device, so the player can tap on them and discover new resources for building virtual worlds [27].
Augmented Virtuality (AV) superimposes real-world elements onto the virtual environment [23]. The new game pods or LBE are a good example of this technology; as mentioned, in these locations a real-world building structure is part of the game displayed to the players through an HMD (see Fig. 3): walls, stairs, and textures become part of the game [22].
The Internet of Things (IoT) interrelates computing devices with mechanical and digital machines to transfer data over a network. Through a light sensor, the action-adventure game 'Boktai: The Sun is in Your Hand' encourages daytime play in some parts of the game. Although this is a local sensor, it is an example of what sensors can contribute to a game. Another IoT element that could be included in games is custom scents through scented pods [28]. Also, the use of two sensors has been proposed to calculate depth, extracting 3D features from the environment; this could be used to establish the objects surrounding the player and incorporate them into the game [29].
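Gating gameplay with such a sensor takes very little code. The sketch below (with a hypothetical read_light_sensor function; real sensor APIs differ per platform) mimics the 'Boktai'-style idea of rewarding daylight play:

```python
import random

def read_light_sensor():
    """Placeholder for a platform-specific sensor API; returns illuminance in lux."""
    return random.uniform(0, 100000)

def sunlight_available(lux, threshold=10000):
    """Treat readings above the threshold as daylight (the threshold is illustrative)."""
    return lux >= threshold

if sunlight_available(read_light_sensor()):
    print("Solar weapon charged: daylight detected")
else:
    print("Too dark: find sunlight to charge the solar weapon")
```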
Fig. 3. The Singularity™ in an AV pod or LBE.
Through IoT, in the exergame 'Zombies, Run!', an apocalyptic story, the player, as the Runner 5 character, runs in real life against brain-hungry, virus-spreading, bloody zombies, while collecting items and uncovering mysteries to rebuild a town in the game. Players can run in teams, in which case, if a team member survives, all team members are considered survivors [30].
Robots are autonomous entities that can take care of different tasks without external influence. In video games, robots can be used to collect information about the environment, give tactile feedback to the player, or even become the player's ally, giving some type of advice.

2.5 Direct Control

Interaction with video games is typically accomplished through game controllers or the standard keyboard and mouse, but other technologies allow us to control the game or the avatar. The avatar is the player's representation within the video game; to control it, different techniques follow the player's physical movements, allowing direct control over it.
Hand and face gesture tracking: there are already sensors that recognize our hand gestures and can substitute the game controller. Also, facial recognition can be the way to convey our face onto our avatar, permeating our emotions into it [29].
'The Curious Tale of the Stolen Pets', released near the end of 2019 for the Oculus Quest™ and PC VR platforms, is among the first games developed with hand tracking. In the game, the player reaches and touches objects as if with his/her own hands, see Fig. 4. The game scales the world to the player's height, making it suitable for kids [31].
Fig. 4. Hand tracking in 'The Curious Tale of the Stolen Pets' game.
Eye tracking: eye movement has also been included in video games as an input source. It can be used in the game for natural targeting; NPCs can also be aware of the player's gaze and act in consequence; and it can be used to move the point of view of the displayed scenario according to the player's gaze direction [32].
Voice recognition: computer commands by voice or speech recognition have been on the market since the late 1990s. In video games, they can now let the player command a squad, adapted for example to team-based FPS games for better immersion. The Intel RealSense™ technology couples 3D cameras and a microphone with an SDK to implement the aforementioned 3D scanning, gesture and face tracking, and voice recognition; this will soon bring an excellent opportunity for video games to incorporate them [29, 33].
The virtual suit, the ultimate form of direct control over the game, is a complete suit that puts the player into the game with tactile feedback over the whole body. On the market is the Teslasuit™ with 68 channels; unfortunately, it is out of reach for most players due to its very high price. Teslasuit™ also introduced a glove that allows feeling virtual textures and gathering biometric data [34, 35].
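As a small illustration of the eye-tracking use mentioned above, the sketch below (our own example; commercial eye trackers expose their own SDKs) maps a gaze point on the screen to a camera panning speed, leaving the camera still while the player looks near the center:

```python
def gaze_to_camera_delta(gaze_x, gaze_y, screen_w, screen_h,
                         dead_zone=0.15, max_speed=60.0):
    """Turn a gaze point into camera yaw/pitch speed (degrees per second)."""
    # Normalize to [-1, 1] with (0, 0) at the screen center.
    nx = (gaze_x / screen_w) * 2.0 - 1.0
    ny = (gaze_y / screen_h) * 2.0 - 1.0
    yaw = 0.0 if abs(nx) < dead_zone else nx * max_speed
    pitch = 0.0 if abs(ny) < dead_zone else -ny * max_speed
    return yaw, pitch

# Gazing at the right edge of a 1920x1080 screen pans the camera right.
print(gaze_to_camera_delta(1900, 540, 1920, 1080))
```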
2.6 Biosignals

Signals from our body can become input data to control the game, or a means to understand the player's emotions or state of mind.
Heart rate sensors can be applied in games to indicate fear, stress, or tension. This technology is used in the 'Nevermind' game, which becomes harder when it detects the player's fear [36].
Brain-computer interfaces (BCI): while in the past these were possible only through invasive devices, non-invasive ones are now emerging. Although currently used only for therapeutic purposes, BCI could, at some point, represent an alternative for video games to understand the emotional state of the player [37].
The breathing rate is the number of breaths a person takes per minute. When a person exercises, the body requires more oxygen and breathing increases to cope with this demand. Hyperventilation, or excessive breathing, can also be caused by an emotional reaction such as anxiety.
Galvanic skin response (GSR) devices measure skin conductance based on the amount of moisture on the skin according to the sweating rate. The sweating rate is associated with physiological or psychological arousal, because the sympathetic nervous system controls the sweat glands [38].
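Adapting a game to such signals is conceptually simple. The sketch below (hypothetical thresholds; not taken from 'Nevermind') smooths a heart-rate stream and maps the arousal above a resting baseline to a difficulty multiplier:

```python
def smooth(prev_bpm, new_bpm, alpha=0.2):
    """Exponential smoothing so a single noisy reading does not spike difficulty."""
    return (1 - alpha) * prev_bpm + alpha * new_bpm

def adapt_difficulty(bpm, resting_bpm=70.0):
    """Map heart rate above the resting baseline to a difficulty multiplier."""
    arousal = max(0.0, (bpm - resting_bpm) / resting_bpm)
    if arousal > 0.5:
        return 1.5  # highly stressed player: the game pushes back harder
    if arousal > 0.2:
        return 1.2
    return 1.0

bpm = 70.0
for reading in (72, 85, 98, 110):  # simulated sensor stream
    bpm = smooth(bpm, reading)
    print(round(bpm, 1), adapt_difficulty(bpm))
```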
3 Video Games with Cutting-Edge Technology

The game genre plays an important role in selecting the technology to be applied in a game, in which the type of immersion and the input/output devices have a direct impact on the game experience. There is no consensus on game genre classification. Although the diversity of games has expanded, sometimes mixing genres, the basic game genres are listed next [1]:
• Action games are those that include physical challenges. The two best-known subtypes of action games are:
  – Shooter games
  – Fighting games
• Strategy games are those with tactical or logistical challenges.
• Role-playing games (RPG) are those in which the player assumes the role of a character, emphasizing character advancement more than collaborative storytelling.
• Sports games simulate the practice of a sport.
• Racing/Driving games simulate imaginary or real vehicles.
• Construction and management games primarily offer economic and conceptual challenges for the player to build something within an ongoing process.
• Adventure games mainly provide exploration and puzzle-solving.
• Puzzle games offer logic and conceptual challenges.
Among the best-known cross-genre games are Action-Adventure games, composed of exploration and puzzle-solving combined with physical challenges.
Table 1 summarizes the examples of video games using cutting-edge technology presented in this paper. The first column gives the name of the game, the second the technology it exemplifies, the third the official website of the game, and the fourth its genre. Besides the basic genres, the table includes the sandbox genre (Minecraft Earth), which gives players a high degree of creativity to complete tasks; the exergame genre (Zombies, Run!), also known as fitness games, video games that build in a form of exercise for the player; and the horror genre (Nevermind), designed to scare the player.

Table 1. Example of video games with cutting-edge technology

Video game | Technology | Website | Genre
Alien: Isolation | AI | https://www.sega.com/games/alien-isolation%E2%84%A2/ | Action-Adventure
Wolfenstein: The Old Blood | AI | https://bethesda.net/es/game/wolfenstein-youngblood/ | FPS
Half-Life: Alyx | VR | https://half-life.com/es/alyx | FPS
Time Traveler | Hologram | N/A | Action-Adventure
Pokémon Go | AR | www.pokemongo.com | Adventure
Minecraft Earth | AR | https://www.minecraft.net/es-es/about-earth/ | First-person sandbox
Boktai: The Sun is in Your Hand | Sensors | https://www.konami.com/mg/archive/other/boktai/ | Action-Adventure-Role play
Zombies, Run! | IoT, AR | https://zombiesrungame.com/ | Exergame
The Curious Tale of the Stolen Pets | Tracking | https://www.fasttravelgames.com/thecurioustaleofthestolenpets/ | Puzzle-Adventure
Nevermind | Biosignals | https://nevermindgame.com/about | First-person horror
Regarding the game experience, these games present particularities because of their use of technology. AI is common practice in action games, used to create enemies or to act as a game partner. 'Alien: Isolation' is acknowledged in the gamer community because it produces the feeling of facing an unpredictable NPC. In the case of 'Wolfenstein: The Old Blood', the AI of the NPC is also used as the player's partner; here the player needs to be patient, but as the character learns from the player's style it better matches the player's expectations.
In the AR games, players of 'Pokémon Go' with mid-range mobile devices often prefer a fluid game over this technology and deactivate the AR feature. 'Minecraft Earth' is still not multiplayer like its original version, leading players to expect this feature in the AR version. The 'Zombies, Run!' game, with its combination of AR and IoT, has demonstrated its motivational effect in getting people to exercise outdoors.
Regarding the use of trackers, as in 'The Curious Tale of the Stolen Pets', some of the players' complaints relate to the latency between the tracker and the game update. Finally, in the 'Nevermind' game, players noticed that the screen blurs as their heart rate increases, so they try to manage it.
4 Conclusions

Video games are a continually growing entertainment field that has contributed to technology development. As a multimillion-dollar industry based on technology, the trend in video games is to include cutting-edge technology. This paper presented some implications of implementing pioneering technologies that are already applied, or are yet to come, in video games. The cutting-edge technologies were classified as part of: a) Cloud Computing, including the use of Big Data for game telemetry; b) Artificial Intelligence, mainly for NPCs and procedural content generation; c) 3D-based technologies such as VR, 3D printing, and holograms; d) the real world embedded in video games, with technologies that include AR, AV and MR, as well as IoT; and e) technologies that allow direct control of the game, mainly through body trackers and voice recognition.
The application of new technology trends in video games aims to generate an increasingly immersive player experience, in which Virtual Reality and Augmented Reality enhanced by IoT can put the player inside the game, and this is happening right now. Different examples of video games on the market were presented in connection with the use of new technologies, and others were presented as a possibility. Finally, some implications regarding the game experience were related to games that have included cutting-edge technology. As future work, such implications will be studied mainly for real-world-embedded game technologies.
References

1. Adams, E.: Fundamentals of Game Design, 2nd edn. Pearson Education (2010)
2. Video Game Industry Statistics, Trends & Data. https://www.wepc.com/news/video-game-statistics/#video-gaming-industry-overview
3. Young, C.J.: Game changers: everyday gamemakers and the development of the video game industry. Doctoral dissertation, University of Toronto, Canada (2018)
4. Murphy-Hill, E., Zimmermann, T., Nagappan, N.: Cowboys, ankle sprains, and keepers of quality: how is video game development different from software development? In: International Conference on Software Engineering, pp. 1–11 (2014)
5. Bonilla, D., Peña, A., Contreras, M.: Teaching approach for the development of virtual reality videogames. In: International Conference on Software Process Improvement, pp. 276–288. Springer, Cham (2019)
6. Taffazoli, H., Tiedemann, T.: Confined and condemned: the impact a restricted user interface has on the player experience in survival horror games. Bachelor thesis, Södertörn University (2017)
7. Nacke, L., Drachen, A., Kuikkaniemi, K., Niesenhaus, J., Korhonen, H.J., Hoogen, W.M., Poels, K., IJsselsteijn, W.A., De Kort, Y.A.: Playability and player experience research. In: DiGRA 2009: Breaking New Ground: Innovation in Games, Play, Practice and Theory (2009)
8. Wang, B., Taylor, L., Sun, Q.: Families that play together stay together: investigating family bonding through video games. New Media Soc. 20(11), 4074–4094 (2018)
9. Granic, I., Lobel, A., Engels, R.C.: The benefits of playing video games. Am. Psychol. 69(1), 66 (2014)
10. Griffiths, M.D.: The educational benefits of videogames. Educ. Health 20(3), 47–51 (2002)
11. Griffiths, M.: Can videogames be good for your health? J. Health Psychol. 9(3), 339–344 (2004)
12. Miller, K., Rodger, S., Bucolo, S., Greer, R., Kimble, R.M.: Multi-modal distraction. Using technology to combat pain in young children with burn injuries. Burns 36(5), 647–658 (2010)
13. González, C.S.G., del Río, N.G., Adelantado, V.N.: Exploring the benefits of using gamification and videogames for physical exercise: a review of state of art. IJIMAI 5(2), 46–52 (2018)
14. Rands, K.: How big data is disrupting the gaming industry (2018). https://www.cio.com/article/3251172/how-big-data-is-disrupting-the-gaming-industry.html
15. Rhalibi, A.E., Wong, K.W., Price, M.: Artificial intelligence for computer games. Int. J. Comput. Game Technol. 2009, 3 (2009). Article ID 251652. https://doi.org/10.1155/2009/251652
16. Thomson, T.: The perfect organism – the AI of Alien: Isolation (2017). https://becominghuman.ai/the-perfect-organism-d350c05d8960
17. Togelius, J., Yannakakis, G.N., Stanley, K.O., Browne, C.: Search-based procedural content generation. In: Di Chio, C., et al. (eds.) Applications of Evolutionary Computation. EvoApplications. Lecture Notes in Computer Science, vol. 6024. Springer, Heidelberg (2010)
18. Cooperation mode vs single-player in Wolfenstein Youngblood. https://guides.gamepressure.com/wolfenstein-youngblood/guide.asp?ID=50969
19. Koss, H.: What does the future of the gaming industry look like? (2020). https://builtin.com/media-gaming/future-of-gaming
20. Kovak, N.: Virtual reality in gaming (2011). https://thinkmobiles.com/blog/virtual-reality-gaming/
21. Papavasiliou, S.: Half-Life: Alyx makes VR worthwhile (2020). https://thelumberjack.org/2020/04/19/half-life-alyx-makes-vr-worthwhile/
22. Sag, A.: Location-based VR: the next phase of immersive entertainment (2019). https://www.forbes.com/sites/moorinsights/2019/01/04/location-based-vr-the-next-phase-of-immersive-entertainment/#750fe6163f57
23. Anusci, V.: How to 3D print video game characters (2015). https://all3dp.com/3d-print-video-game-characters/
24. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 77(12), 1321–1329 (1994)
25. Flavián, C., Ibáñez-Sánchez, S., Orús, C.: The impact of virtual, augmented and mixed reality technologies on the customer experience. J. Bus. Res. 100, 547–560 (2019)
26. Jo, D., Kim, K.H., Kim, G.J.: Effects of avatar and background types on users' co-presence and trust for mixed reality-based teleconference systems. In: Proceedings of the 30th Conference on Computer Animation and Social Agents, pp. 27–36 (2017)
27. Keith, S.: Minecraft Earth is coming – it will change the way you see your town (2019). https://www.theguardian.com/games/2019/oct/01/minecraft-earth-launch-games-microsoft-augmented-reality
28. Smells like there could be drama in the scented proprietary pods business (2019). https://www.theverge.com/2019/1/7/18171432/moodo-artiris-parfum-pod-connected-scent-ces-2019
29. Intel RealSense. https://www.intelrealsense.com/stereo-depth/
30. Zombies, Run! The original zombie-infested 5k. https://www.zombierun.com/zombiehow-itworks
31. Sutrich, N.: Hands on with The Curious Tale of the Stolen Pets, an Oculus Quest hand tracking game (2020). https://www.androidcentral.com/hands-curious-tale-stolen-pets-oculus-quest-hand-tracking-game
32. Eye tracking in gaming, how does it work? https://help.tobii.com/hc/en-us/articles/115003295025-Eye-tracking-in-gaming-how-does-it-work-
33. An introduction to Intel RealSense technology for game developers. https://gamedevelopment.tutsplus.com/articles/an-introduction-to-intel-realsense-technology-for-game-developers--cms-24740
34. Teslasuit. https://teslasuit.io/
35. Virtual reality gloves by Teslasuit. https://teslasuit.io/blog/vr-glove-by-teslasuit/
36. Nevermind. https://nevermindgame.com/about
37. Brain-computer interfaces: the video game controllers of the future. https://www.factor-tech.com/roundup/this-week-facial-recognition-used-to-capture-fugitive-spacex-commits-to-city-to-city-rocket-travel-and-uk-reveals-it-launched-a-cyber-attack-on-islamic-state/
38. Camara, C., Peris-Lopez, P., Tapiador, J.E., Suarez-Tangil, G.: Non-invasive multi-modal human identification system combining ECG, GSR, and airflow biosignals. J. Med. Biol. Eng. 35(6), 735–748 (2015)
Design Techniques for Usability in m-Commerce Context: A Systematic Literature Review

Israel Monzón1, Paula Angeleri2, and Abraham Dávila3(B)

1 Escuela de Graduados, Pontificia Universidad Católica del Perú, Lima, Perú. [email protected]
2 Facultad de Ingeniería y Tecnología Informática, Universidad de Belgrano, Buenos Aires, Argentina. [email protected]
3 Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Perú. [email protected]
Abstract. The intensive use of mobile devices around the world has generated a new scenario for electronic commerce called m-Commerce, which has particular characteristics due to the nature of these devices. In this context, usability has become a key element, since the size of the devices imposes restrictions on application designers, and this can determine the acceptance or rejection of the software. The objective of this paper is to analyze, in a comparative way, the different techniques for the design of user interfaces that provide a high level of usability in mobile applications. For this study, a systematic literature review was performed in recognized databases. As a result, 20 studies were identified, comprising 13 techniques, 5 approaches and 2 methods that can be used for the design of highly usable Apps. Many of these techniques can be used both in mobile commerce and in any other field of mobile e-Business.

Keywords: Usability · Ease of use · Mobile commerce · Mobile electronic commerce · m-Commerce · Mobile interface
1 Introduction

The intensive use of mobile devices worldwide is increasing every year [1, 2]. In the United States of America, since 2015, the majority of citizens have had a smartphone [3], and since 2019, 37% of them have connected to the internet through these devices [4]. In China alone, in 2018, mobile internet users reached 753 million, and 97.5% of them use a mobile phone to access the internet [5]. It has been estimated that by 2020 there will be about 7.26 billion mobile device users around the world [2], and that by 2025 the number of subscribers to mobile services will reach 5.8 billion [6].
Mobile devices, such as smartphones and tablets, are increasingly used for retail sales via the internet [7], connecting buyers with sellers to specify their offers and demands [8]. This high level of device use in the field of e-commerce (electronic commerce) has led to the establishment of the term m-commerce, or mobile commerce [9].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
J. Mejia et al. (Eds.): CIMPS 2020, AISC 1297, pp. 305–322, 2021. https://doi.org/10.1007/978-3-030-63329-5_21
In the United States of America, in 2018, m-commerce sales exceeded 208 billion dollars, which represented almost 50% of e-commerce sales [7]. In addition, there is a wide variety of types of mobile commerce services and applications [10, 11], some of which are more used than others [12]. The services for mobile commerce are also varied, including, among others, the payment of top-ups, the payment of bills or invoices, tickets, and purchasing services [13]; thus, every mobile device with internet access can enable mobile commerce [14].
However, the characteristics of mobile devices have an effect on the usability of application user interfaces [15], where design challenges encompass interactional [16] or usability challenges [17], and the needs and preferences of users [18]. In the field of m-commerce, certain problems have been identified, such as: difficulties of use and visibility during purchase flows [19], too many screens to go through to consult products [20], carelessness in data validation, confusion in the use of buttons [21], inconveniences in finding or displaying information [22], buttons not large enough to be touched [20], lack of design considerations to meet the expectations and needs of the elderly [23] or visually impaired persons [24], difficulty in menu visibility [25], and very small print fonts [20]. Other areas, such as health [26] and government [27], have also reported problems. Furthermore, since utility is as important as usability [28], other problems come up, such as content visualization issues, or applications (Apps) that do not work as they should.
In this context, usability is a key element in gaining users' trust in mobile commerce applications for making their purchases [29, 30]. Better user satisfaction can be achieved by improving the performance of these applications [31]. Usability helps to determine whether interfaces are good, bad or poor [32], to find errors in the interfaces that keep them from being completely pleasant or easy to use [33, 34], and to take into account certain restrictions or limitations to be considered by designers of mobile device applications [35, 36]. However, there is a great variety of techniques and methods related to the design of interfaces for mobile applications with a high degree of usability, so it is necessary to identify their similarities and differences to know which are most convenient for the implementation of a mobile interface.
This article presents a comparative analysis of design techniques for usability in the context of m-commerce, for which a systematic literature review (SLR) was carried out. The article is organized as follows: Sect. 2 presents a frame of reference and related work; Sect. 3, the definition of the SLR; Sect. 4, the analysis of the results; and Sect. 5, the final discussion.
2 Background and Related Work

In this section, the key concepts and works related to this study are introduced.

2.1 e-Commerce and m-Commerce

The development of electronic commerce (e-Commerce) emerged with the beginning of the Internet and the need for business expansion, taking as a reference electronic data interchange (EDI) for the transfer of business documents between computers [37].
A consensus definition indicates that e-commerce comprises commercial activities carried out between parties (which can be individuals, organizations or public administrations), where technology is used for data exchange and operations are carried out on a monetary basis to obtain a profit [38]; these activities are mainly carried out through the internet, the web or mobile applications [39]. In this context, terms have also been defined in relation to the actors involved, such as [39]: Business to Business (B2B), Business to Consumer (B2C), and Consumer to Consumer (C2C), as well as other schemes or approaches [37, 40, 41].
Mobile commerce (m-Commerce) is characterized mainly by the fact that its commercial or business activities are based on wireless communication through a mobile device, such as a cell phone, a personal digital assistant, a smartphone or a device installed in vehicles [42], among others [14]. There are other definitions that highlight one aspect or another [37, 43] and [42], but they can be reduced to the application of e-commerce in the context of mobile equipment. Since mobile devices have an important restriction, namely the size of the screen, the design of the graphical user interface becomes critical for m-Commerce.

2.2 User Interface and Usability

The user interface is defined, according to [32], as the set of computer components with which users interact with systems to perform the tasks they desire; interfaces are considered important because they are the users' first contact with the computer and because they support user tasks. A good user interface design should provide an easy, natural and attractive way to interact with the computer, reducing user frustration [32, 44] and thus influencing the usability of the product [44]. Furthermore, in the case of modern mobile devices, most interaction is carried out through touch commands on the screen, which allows for a wide range of tactile interactions [45].
According to [46], usability is defined as the ability of the software product to be understood, learned, used and found attractive by the user, when used under specified conditions. In [28], usability is defined as the quality attribute that assesses the ease of use of user interfaces. Also in [28], it is stated that usability works together with the quality attribute of utility, both having the same degree of importance, because together they help determine whether something is useful or not. In [47], it is pointed out that usability, utility and likability are consistent attributes that offset cost, which determines the acceptance of things by users and buyers.

2.3 Usability Evaluation in m-Commerce Applications

Usability evaluations in the context of m-commerce include the following: (i) in [20], the usability of the product pages of two clothing websites is evaluated, reporting some problems (low visibility of product information, low navigability due to button placement, and difficulties in activating functionalities due to bad lettering design and lack of contrast); (ii) in [48], the usability of the graphical user interface of a mobile commerce system is evaluated by two different groups of users, where both groups identified GUI design errors (difficulty in accessing the most important functionalities, text associated with buttons that did not reflect the expected action, and inconsistency between the input data and the expected outputs) and therefore pointed out some recommendations; (iii) in [49], problems with mobile shopping applications were found (such as the size of items on the screen, confusion leading to the execution of unwanted actions, inability to compare equivalent products from different brands, lack of filters for extensive searches, and lack of online help or similar); (iv) [50] reports problems such as ambiguous texts, barely understandable short descriptions, small font size, and icons that do not represent their functionality well; (v) in [14], a categorization of mobile devices is established that determines some characteristics affecting the usability of mobile devices.

2.4 Related Studies

In the previous review of the literature, similar works [16, 51, 52] and complementary works [36, 53–60] were identified. Similar works include: [16], where the design challenges related to hierarchical menus and navigation and exploration for mobile devices are addressed; [52], where three navigation techniques are compared in the execution of tasks, with a focus on user performance and satisfaction; and [51], where an SLR is performed on the application of techniques and tools for the development of mobile user interfaces from 2014 to 2018. Among the complementary works are: [53], where an SLR is carried out and 7 design elements of interest for mobile commerce are identified; [36], where a series of design considerations is presented for the development of mobile web interfaces in the B2B context; [56], where an SLR is carried out on experience-based methods for evaluating mobile usability in the health sector; and [57], where a method is described that systematically establishes the process of creating usability-oriented user interfaces for mobile applications.
3 Systematic Literature Review (SLR)

In this study, a systematic literature review was performed with the main goal of analyzing the different user interface design techniques that provide a high level of usability in mobile applications, based on the guidelines in [61] and some recommendations from [62].

3.1 Identifying the Need for the Review

This research is of interest for the following reasons: (i) smartphones generate favorable conversion rates for retail sales on the Internet, so companies look for potential customers in this context [7]; (ii) usability is a critical element for e-commerce applications and represents a challenge considering the space restrictions of phone screens [14]; (iii) the perceived difficulty of navigating a website often causes users not to find a product and to leave the site without buying it [28]; (iv) design problems regarding usability on mobile devices are a reason for the delay in the acceptance of m-commerce [14]; and, furthermore, (v) a bad user interface causes stress, dissatisfaction, and reduced productivity, and therefore economic losses [32].
3.2 Research Purpose and Questions

This study seeks to analyze the design techniques for graphical user interfaces used in the mobile environment, from the software product perspective. The scope of the research considers articles referring to the design of user interfaces with regard to the usability sub-characteristics of software product quality. Studies referring to the software development process were not taken into account, since the idea was to provide a solution from a different point of view. Another limitation of the research scope was frameworks for adapting an HTML desktop page to a mobile user interface, because this is not the best technique for achieving a high level of usability in mobile interfaces. Considering this scope, a set of research questions and their respective motivations were established, which are presented in Table 1.

Table 1. Research questions

Research question | Motivation
RQ1 What techniques are used to design usable GUIs in the context of mobile commerce? | Analyze the different techniques (approaches or methods^a) that are used to create the GUI design or GUI elements for mobile commerce applications
RQ2 What are the similarities and differences of the identified GUI design techniques? | Identify relevant similarities and differences between the techniques so that they can be used as the basis for creating new techniques
RQ3 What usability elements do these techniques support? | Know the usability attributes that are covered by the different techniques in GUI design
RQ4 What GUI elements do these techniques support? | Know the different elements of the GUI (for example: page, menu, search result) to which the GUI design techniques are directed
^a Words such as "method" were used in the search string, although the scope of the search refers to software product design rather than the software design process, because some authors may have used this term.
3.3 Research Protocol Search Strategy. PICO (Population, Intervention, Comparison and Outcomes) strategy was used to define the search, as suggested by [61]. The population is the set of entities that will be the object of the review, in this case the graphic interface design is established. The intervention is the population that will be evaluated, in this case they are mobile devices. The comparison does not apply, since one is not taken as a reference. The outcome is the expected information, which in this case are techniques, methods or approaches. Table 2 presents the search string elaborated with the alternative terms for each PICO element.
Table 2. Search string

PICO element | Search string
Population | (design OR interface OR screen OR visual* OR navigat*) AND (usability OR friendly OR eas* OR effic* OR effective* OR quick* OR fast* OR aesthetic OR pleasant OR satisf* OR accessib*)
Intervention | Mobile OR tablet OR smartphone OR handheld
Outcomes | Technique OR method OR approach
Final string | Population AND Intervention AND Outcomes
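As a reproducibility aid, the PICO blocks in Table 2 can also be composed programmatically. The short sketch below (our own illustration, not part of the original protocol) builds the final string submitted to the databases:

```python
population = ("(design OR interface OR screen OR visual* OR navigat*) AND "
              "(usability OR friendly OR eas* OR effic* OR effective* OR quick* "
              "OR fast* OR aesthetic OR pleasant OR satisf* OR accessib*)")
intervention = "mobile OR tablet OR smartphone OR handheld"
outcomes = "technique OR method OR approach"

# Final string: Population AND Intervention AND Outcomes
query = " AND ".join(f"({block})" for block in (population, intervention, outcomes))
print(query)
```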
The search string was last applied in September 2019 on a set of digital databases relevant to our study domain [61]: (Sco) Scopus, (SDi) Elsevier Science Direct, (ACM) ACM Digital Library, (IEEE) IEEE Xplore, (WoS) Web of Science, (Proq) ProQuest, and (EHo) EBSCOhost Web. In addition, only publications from 2000 onwards were considered, because in 2001 the transmission standards for 3G mobile communication systems were created, which further encouraged the conditions for the growth of m-commerce [63]. The inclusion (IC) and exclusion (EC) criteria established for the SLR are presented in Table 3. A quality assessment questionnaire based on [64] was also applied; this study defines the questionnaire and a score according to the level of compliance: 1 point if the study complies, 0.5 points if it partially complies, and 0 if it does not comply. Table 4 presents the questions applied.

Table 3. Inclusion criteria (IC) and exclusion criteria (EC)

Id | Criteria
IC.1 | Conference or magazine articles obtained from relevant databases
IC.2 | Articles that correspond to the temporal requirements
IC.3 | Articles written in the English language
IC.4 | Articles that correspond to primary studies
EC.1 | Articles whose title, summary or content have no relation to the research topic
EC.2 | Duplicate articles
EC.3 | If there are articles with similar titles, the oldest ones will be rejected
EC.4 | Articles that correspond to magazines, conference summaries, secondary studies or tertiary studies
The procedure followed involved six stages. In the first stage, the search string was executed and inclusion criteria IC.1 and IC.2 were applied. In the second stage, the titles of the articles were reviewed, applying exclusion criteria EC.1 and EC.4. In the third stage, similar or complementary titles from the same authors were reviewed, applying exclusion criteria EC.2 and EC.3. In the fourth stage, the abstracts were read and exclusion criteria EC.1 and EC.3 were applied. In the fifth stage, the content of the articles was reviewed, using inclusion criteria IC.3 and IC.4 and exclusion criteria EC.1 and EC.4; in addition, the bibliographic references of the selected articles were reviewed in order to find possible additional studies. In the sixth stage, the quality evaluation was performed using the predefined questionnaire.

Table 4. Quality assessment questionnaire

No. | Question
1 | Do the articles come from rigorous research?
2 | Do the articles refer to different types of mobile devices?
3 | Do the articles refer to more than one usability sub-characteristic?
4 | Do the articles provide graphics of the solutions?
5 | Do the articles provide links to their solutions so that they can be evaluated?
6 | Do the articles answer all the research questions?
7 | Have the articles validated the techniques through usability quality evaluation(s)?
8 | Is the context of the article or its solution related to mobile commerce?

Data Extraction Strategy. For data extraction, a form was defined considering the data required to answer the research questions and the bibliographic data of each article. The structure contains: (i) a study identifier used within the SLR; (ii) title; (iii) country of origin; (iv) type of article; (v) name of the publication; and (vi) the data necessary to answer each question.
4 Analysis of Results

The execution of the selection process and the results obtained are summarized in Fig. 1 (Result of study selection). Starting from 14,808 records retrieved from Sco, IEEE, ACM, SDi, WoS, Proq and EHo, the stages proceeded as follows: 1st stage, review of publication year >= 2000 (IC.1, IC.2): 27 excluded, 14,781 included; 2nd stage, review of titles and document type (EC.1, EC.4): 14,392 excluded, 389 included; 3rd stage, review of duplicates and similar titles (EC.2, EC.3): 92 excluded, 297 included; 4th stage, review of abstracts (EC.1, IC.3): 231 excluded, 66 included; 5th stage, review of content (EC.1, EC.4, IC.3, IC.4): 40 excluded, 26 included; 6th stage, quality assessment: 6 excluded, 20 included.

Fig. 1. Result of study selection
At the end of the 6th stage, 20 relevant studies were obtained. The bibliographic references of the selected articles were also reviewed, yielding 6 additional articles; the resulting set is presented in Table 5. The quality evaluation of these studies was then applied; its results are presented in Table 6, where 6 articles [S05, S08, S12, S18, S21 and S22] were removed for failing to reach the established acceptance threshold.
Table 5. List of selected studies

Id | Title | Authors | Year | Ref.
S01 | Roller Interface for Mobile Device Applications | Wang | 2007 | [65]
S02 | Energy-Efficient Graphical User Interface Design | Vallerio | 2006 | [66]
S03 | Complementary menus: Combining adaptable and adaptive approaches for menu interface | Park | 2011 | [67]
S04 | Designing interfaces in a mobile environment: An implementation on a programming language | Rias | 2010 | [68]
S05 | One-handed Mobile Video Browsing | Hürst | 2008 | [69]
S06 | From appearing to disappearing ephemeral adaptation for small screens | Bouzit | 2014 | [70]
S07 | Improving mobile web search experience with slide-film interface | Shtykh | 2008 | [71]
S08 | Menu structuring for mobile devices | Sauerwein | 2008 | [72]
S09 | A method for searching photos on a mobile phone by using the fisheye view technique | Chun | 2011 | [73]
S10 | RegionalSliding: Facilitating small target selection with marking menu for one-handed thumb use on touchscreen-based mobile devices | Xu | 2015 | [74]
S11 | The mobile tree browser: A space filling information visualization for browsing labelled hierarchies on mobile devices | Craig | 2015 | [75]
S12 | Visualization by information type on mobile device | Yoo | 2006 | [76]
S13 | Wavelet Menus on Handheld Devices: Stacking Metaphor for Novice Mode and Eyes-Free Selection for Expert Mode | Francone | 2010 | [77]
S14 | X-O arch menu: Combining precise positioning with efficient menu selection on touch devices | Thalmann | 2014 | [78]
S15 | A mobile interface for navigating hierarchical information space | Chhetri | 2015 | [79]
S16 | Toward More Efficient User Interfaces for Mobile Video Browsing: An In-Depth Exploration of the Design Space | Huber | 2010 | [80]
S17 | Leaf Menus: Linear Menus with Stroke Shortcuts for Small Handheld Devices | Roudaut | 2009 | [81]
S18 | Map-based Music Interfaces for Mobile Devices | Frank | 2008 | [82]
S19 | Performance of smartphone users with half-pie and linear menus | Yang | 2017 | [83]
S20 | The role of responsive design in web development | Almeida | 2017 | [84]
S21 | Mobile device interfaces illiterate | Nasution | 2016 | [85]
S22 | "Just-in-place" information for mobile device interfaces | Kjeldskov | 2002 | [86]
S23 | Framy - Visualising geographic data on mobile interfaces | Paolino | 2008 | [87]
S24 | Halo: a technique for visualizing off-screen objects | Baudisch | 2003 | [88]
S25 | Focus+context visualization techniques for displaying large lists with multiple points of interest on small tactile screens | Huot | 2007 | [89]
S26 | On the effectiveness of Overview+Detail visualization on mobile devices | Burigat | 2013 | [90]
The acceptance threshold was set at 3.2 points, that is, 40% of the maximum possible score, which is 8 (one point for each question in the questionnaire). The study was then carried out with the remaining 20 articles.

Table 6. Quality assessment of selected studies

Score | Total | Studies
6.0 | 1 | S16
5.5 | 4 | S01, S15, S23, S26
5.0 | 4 | S04, S07, S19, S24
4.5 | 7 | S02, S03, S06, S09, S10, S11, S13
4.0 | 2 | S20, S25
3.5 | 2 | S14, S17
3.0 | 5 | S05, S08, S12, S18, S22
2.0 | 1 | S21
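The threshold rule is easy to express in code. The sketch below (with illustrative per-question scores, since only the totals are reported in Table 6) sums each study's 8 question scores and keeps those reaching 40% of the maximum of 8 points:

```python
# Per-question scores: 1 = complies, 0.5 = partially complies, 0 = does not comply.
scores = {
    "S16": [1, 1, 1, 1, 0.5, 1, 0.5, 0],      # total 6.0 (kept)
    "S21": [0.5, 0, 0.5, 0.5, 0, 0.5, 0, 0],  # total 2.0 (excluded)
}

MAX_SCORE = 8
THRESHOLD = 0.4 * MAX_SCORE  # = 3.2 points

selected = {sid: sum(qs) for sid, qs in scores.items() if sum(qs) >= THRESHOLD}
print(selected)  # {'S16': 6.0}, matching the exclusion of S21 in Table 6
```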
4.1 RQ1. What Techniques Are Used for GUI Design in the Context of Mobile Commerce?

Among the 20 selected studies (100%), 13 techniques (65%), 5 approaches (25%) and 2 methods (10%) for GUI design were found. Based on the orientation of their application, they can be classified into 5 categories:
• Interface oriented (15%): web responsive [S20], energy efficient [S02] and roller interface [S01].
• Search oriented (20%): navigation between nodes [S04], sliding film [S07], scanning mobile videos [S16], and searching for images using the fisheye technique [S09].
• Display oriented (30%): mobile tree browser [S11], framy (geographic data) [S23], halo (off-screen objects) [S24], focus + context [S25], improved radial edge tree [S15], and overview and detail [S26].
• Menu-interface oriented (30%): wavelet menu [S13], X-O arch menu [S14], leaf menu [S17], half-pie menu [S19], ephemeral adaptation by disappearance [S06], and coupling of adaptable and adaptive approaches [S03].
• Selection oriented (5%): regional sliding [S10].
4.2 RQ2. What Are the Similarities and Differences of the Identified GUI Design Techniques?

The following similarities among these GUI design techniques were found:
• Based on other techniques: 35% of the techniques found are similar in that their creation is inspired by existing techniques or solutions, or they are complemented by the use of third-party techniques. These include the following 7 studies: [S02] [S06] [S09] [S10] [S13] [S14] and [S15].
• Use of visual effects: 25% of the techniques found share the use of visual effects to facilitate the visualization of, reach of, or search for data or elements. These include the following 5 studies: [S06] [S09] [S11] [S13] and [S17].
• Use of colors for specific purposes: 20% of the techniques found incorporate the use of colors to achieve certain specific objectives, or to make it easier for the user to identify or search for certain words, elements or targets of interest. These include the following 4 studies: [S02] [S04] [S15] and [S23].
• Use of algorithms: 20% of the techniques found use an algorithm to complete the visualization of the data or to detect certain elements or targets. These include the following 4 studies: [S03] [S06] [S10] and [S15].
• Avoiding or managing occlusion: 20% of the techniques found directly or indirectly avoid or manage the occlusion of interface objects. These include the following 4 studies: [S10] [S14] [S17] and [S24].
• Accessibility for a greater number of users: (i) 15% of the techniques found offer two alternatives of use: one for novice users and another for expert users; among these are the following 3 studies: [S13] [S14] and [S17]; and (ii) 10% of the techniques found include the participation of left-handed people in their design; among these are the following 2 studies: [S11] and [S17].
• Automatic updating of information: 10% of the techniques found automatically update information in the GUI in order to find targets of interest to the user. Among these are the following 2 studies: [S23] and [S24].
• Providing more than one type of information at the same time: 10% of the techniques found handle more than one type of information in the GUI at the same time. Among these are the following 2 studies: [S25] and [S26].
Among the most outstanding differences between these GUI design techniques, the following can be pointed out:
• Differences in GUI layout: 55% of the studies reviewed differ in having various forms of GUI for presenting interfaces, menu items, data or targets of interest to the user. This is because they try to make the most of the screen space of mobile devices. Among these are the following 11 studies: [S01] [S06] [S07] [S13] [S14] [S15] [S17] [S19] [S23] [S24] and [S25].
• A technique aimed at reducing the battery power consumption of mobile devices: [S02].
• A technique that manages a hybrid menu, that is, with two presentations for showing the menu items; at the same time, it allows pre-display of the sub-menus to facilitate memorization and search: [S13].
• A technique that allows giving information about a precise position: [S14].
• A technique that can display or activate a menu near the edges of the screen: [S17].

4.3 RQ3. What Usability Sub-characteristics Do These Techniques Support?

In accordance with [46], an international consensus standard (ISO/IEC 25010), it was found that most of these techniques cover the following usability sub-characteristics: the ability to be used (operability), the aesthetics of the user interface, and the ability to be learned (learnability). The sub-characteristics found least often in these primary studies are protection against errors, accessibility and the ability to recognize their adequacy (appropriateness recognizability). Table 7 presents the usability sub-characteristics and the related studies.

Table 7. Usability sub-characteristics supported by these techniques
Usability sub-characteristic | Studies | Total | %
Ability to be used | S01, S02, S03, S04, S06, S07, S09, S10, S11, S13, S14, S15, S16, S17, S19, S20, S23, S24, S25, S26 | 20 | 100
User interface aesthetics | S01, S03, S04, S07, S09, S11, S13, S14, S16, S23, S24, S25, S26 | 13 | 65
Ability to be learned | S02, S04, S07, S10, S13, S14, S16, S17, S19 | 9 | 45
Protection against errors | S04, S09, S10, S19, S24, S25, S26 | 7 | 35
Accessibility | S11, S17 | 2 | 10
Ability to recognize their adequacy | S04, S13 | 2 | 10
4.4 RQ4. What GUI Elements Do These Techniques Support?

The techniques identified support various GUI elements. Table 8 presents the analyzed articles classified according to the GUI element they support. It can be seen that the largest number of studies address the menu element, which is decisive for organizing content. Hierarchical structures, links and maps are also studied with interest, since they provide a complementary or alternative scheme to menus.
Table 8. GUI elements supported by these techniques

GUI element | Studies | Total
Menu | S03, S06, S10, S13, S14, S17, S19 | 7
Hierarchical structure | S01, S11, S15 | 3
Link | S04, S10, S20 | 3
Map | S23, S24, S26 | 3
Web page | S01, S20 | 2
Application interface | S01, S02 | 2
Image or photo | S09, S20 | 2
Search result | S07 | 1
Linear structure | S01 | 1
Large list | S25 | 1
Button | S20 | 1
Map label | S10 | 1
Video | S16 | 1
5 Final Discussion and Future Work

A variety of techniques, approaches, and methods for designing usable m-commerce applications have been identified, which can be classified according to the solution they provide. In our case, we defined five categories in RQ1 based on solution orientation: interface-oriented, search-oriented, display-oriented, menu-oriented, and selection-oriented. Although these techniques do not all share the same similarities, some common traits could be found: the use of visual effects, the use of colors for specific purposes, the use of algorithms, the avoidance or management of occlusion, and greater accessibility for more users are the similarities these techniques share most often. Among the most notable differences, many of these techniques differ in how they present their GUI design layout, because of the way each addresses the small-screen space problem of mobile devices. The usability sub-characteristics most supported by these techniques are the ability to be used (100%), user interface aesthetics (65%) and the ability to be learned (45%), as stated in Table 7; therefore, the ability to be used should be the first sub-characteristic taken into account when developing software for mobile user interfaces. The GUI elements supported by these techniques are quite varied, the menu being the element most frequently found among them. Finally, this work could be complemented with other lines of research, such as a systematic literature review of application interface design in the m-commerce context that refers exclusively to software life cycle processes, focusing on analysis and design. Such a second study would cover customer-driven or user-driven development and the user-centered design (UCD) process.
References 1. Poushter, J., Bishop, C., Chwe, H.: Social Media Use Continues to Rise in Developing Countries but Plateaus Across Developed Ones, Washington, DC (2018). https://www.pew research.org/global/2018/06/19/social-media-use-continues-to-rise-in-developing-countr ies-but-plateaus-across-developed-ones/ 2. O’Dea, S.: Forecast Number of Mobile Users Worldwide from 2019 to 2023. Statista (2020). https://www.statista.com/statistics/218984/number-of-global-mobile-users-since-2010/ 3. Smith, A.: U.S. Smartphone Use in 2015 (2015). https://doi.org/10.1590/s1809-982320130 00400007 4. Anderson, M.: Mobile Technology and Home Broadband 2019 (2019). https://www.pewres earch.org/internet/2019/06/13/mobile-technology-and-home-broadband-2019/ 5. CNNIC: Statistical Report on Internet Development in China (2018). http://www1.cnnic.cn/ IDR/ReportDownloads/201302/P020130221391269963814.pdf 6. GSM Association: The Mobile Economy 2019, London (2019). https://www.gsma.com/r/ mobileeconomy/3/ 7. Taffera-Santos, N.: The Future of Retail 2019: Top 10 Trends that Will Shape Retail in the Year Ahead, New York (2019). https://on.emarketer.com/rs/867-SLG-901/images/eMarketer_Fut ure_of_Retail_Report_Braze_2019.pdf. Accessed 27 May 2020 8. Bayles, M.: Will Your Smartphone Get You a Job? (2019). https://doi.org/10.5860/choice.511007 9. Singh, A.: Impact of mobile commerce in e-commerce in perspective of Indian scenario. Asian J. Technol. Manag. Res. 06(02), 1–6 (2016) 10. Kale, A., Rajivkumar, M.: M-commerce: services and applications. Int. J. Adv. Sci. Res. 3(1), 19–21 (2018) 11. Du, S., Li, H.: The knowledge mapping of mobile commerce research: a visual analysis based on I-model. Sustainability 11(6), 1–26 (2019). https://doi.org/10.3390/su11061580 12. Chen, Z., Li, R., Chen, X., Xu, H.: A survey study on consumer perception of mobilecommerce applications. Procedia Environ. Sci. 11, 118–124 (2011). https://doi.org/10.1016/ j.proenv.2011.12.019 13. George, A.S., Singh, T.: M-commerce: evaluation of the technology drawbacks. In: 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), pp. 1–6 (2015). https://doi.org/10.1109/icrito.2015.7359251 14. Jakimoski, K.: Analysis of the usability of M-commerce applications. Int. J. U- E-Serv. Sci. Technol. 7(6), 13–20 (2014). https://doi.org/10.14257/ijunesst.2014.7.6.02 15. Iqbal, M.W., Ahmad, N., Shahzad, S.K.: Usability evaluation of adaptive features in smartphones. In: International Conference on Knowledge Based and Intelligent Information and Engineering Systems (KES2017) in Procedia Computer Science, vol. 112, pp. 2185–2194 (2017). https://doi.org/10.1016/j.procs.2017.08.258 16. Huang, K.-Y.: Challenges in human-computer interaction design for mobile devices. In: World Congress on Engineering and Computer Science 2009 (WCECS 2009), vol. 1, pp. 1–6 (2009). https://doi.org/10.1007/s007790200022 17. Lentz, J.: User interface design for the mobile web. best practices for designing applications for multiple device platforms. IBM Dev. Mob. Dev. (2011). https://developer.ibm.com/art icles/wa-interface/, Accessed 20 May 2020 18. Kravets, U.: The future of responsive design. going beyond browser size. IBM Dev. Mob. Dev. (2018). https://developer.ibm.com/articles/responsive-design-future/ 19. Matlock, D., Rendell, A., Heath, B., Swaid, S.: M-commerce apps usability: the case of mobile hotel booking apps. In: International Conference on Software Engineering Research and Practice (SERP 2018), pp. 42–45 (2018)
20. Bozzi, C., Mont’Alvão, C.: An analysis of usability issues on fashion M-commerce websites’ product page. In: 20th Congress of the International Ergonomics Association (IEA 2018), vol. 824, pp. 3–12 (2018). https://doi.org/10.1007/978-3-319-96071-5_1 21. Hussain, A., Mkpojiogu, E.O.C., Yahaya, N.B., Bakar, N.Z.B.A.: A mobile usability assessment of an M-shopping app. J. Adv. Res. Dyn. Control Syst. 10(10-Spec. Issue), 1212–1217 (2018) 22. Hussain, A., Mkpojiogu, E.O.C., Abubakar, H., Hassan, H.M.: A mobile usability test assessment of an online shopping application. J. Comput. Theor. Nanosci. 16(5/6), 2511–2516 (2019). https://doi.org/10.1166/jctn.2019.7923 23. Wong, C.Y., Ibrahim, R., Hamid, T.A., Mansor, E.I.: Usability and design issues of smartphone user interface and mobile apps for older adults. In: International Conference on User Science and Engineering (i-USEr 2018), vol. CCIS 886, pp. 93–104 (2018). https://doi.org/10.1007/ 978-981-13-1628-9 24. Khan, A., Khusro, S., Alam, I.: BlindSense: an accessibility-inclusive universal user interface for blind people. Eng. Technol. Appl. Sci. Res. 8(2), 2775– 2784 (2018). https://www.researchgate.net/publication/324605587_BlindSense_An_Accessi bility-inclusive_Universal_User_Interface_for_Blind_People 25. Hussain, A., Mkpojiogu, E.O.C., Jamaludin, N.H., Moh, S.T.L.: A usability evaluation of lazada mobile application. In: AIP Conference Proceedings, vol. 1891, no. 1, p. 020059 (2017). https://doi.org/10.1063/1.5005392 26. Alqahtani, F., Orji, R.: Usability issues in mental health applications. In: 27th Conference on User Modeling, Adaptation and Personalization Adjunct (UMAP 2019 Adjunct) (2019). https://doi.org/10.1145/3314183.3323676 27. Bilal, M., Yu, Z., Song, S., Wang, C.: Evaluate accessibility and usability issues of particular China and Pakistan government websites. In: 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), pp. 316–322 (2019). https://doi.org/10.1109/icaibd. 2019.8836990 28. Nielsen, J.: Usability 101: Introduction to Usability. Nielsen Norman Group (2012). http:// www.nngroup.com/articles/usability-101-introduction-to-usability/ 29. Bin Hussain, A., Mahmood, A.T., Naser, R.K.: Investigating the effect of M-commerce design usability on customers’ trust. In: AIP Conference Proceedings, vol. 1891, no. 1, p. 020077 (2017). https://doi.org/10.1063/1.5005410 30. Jung, W.: The effect of representational UI Design quality of mobile shopping applications on users’ intention to shop. Procedia Comput. Sci 2017(121), 166–169 (2017). https://doi. org/10.1016/j.procs.2017.11.023 31. Ye, P.-H., Liu, L.-Q.: Influence factors of users satisfaction of mobile commerce - an empirical research in China. In: 3rd International Conference on Management Science and Engineering (MSE 2017), vol. 50 (2017). https://doi.org/10.2991/mse-17.2017.50 32. Stone, D., Jarrett, C., Woodroffe, M., Minocha, S.: Introducing user interface design. In: User Interface Design and Evaluation, pp. 3–24 (2005) 33. Nielsen, J.: 10 Usability Heuristics for User Interface Design. Nielsen Norman Group (1994). https://www.nngroup.com/articles/ten-usability-heuristics/ 34. Nielsen, J.: Top 10 Mistakes in Web Design. Nielsen Norman Group (2011). https://www. nngroup.com/articles/top-10-mistakes-web-design/ 35. Glavinic, V., Ljubic, S., Kukec, M.: Transformable menu component for mobile device applications: working with both adaptive and adaptable user interfaces. Int. J. Interact. Mob. Technol. 2(3), 22–27 (2008) 36. 
Łobaziewicz, M.: The design of B2B system user interface for mobile systems. Procedia Comput. Sci. 2015(65), 1124–1133 (2015). https://doi.org/10.1016/j.procs.2015.09.036 37. Qin, Z.: Introduction to E-commerce (2009)
38. ISO/IEC 15944-1: Information Technology - Business Agreement Semantic Descriptive Techniques - Part 1: Operational Aspects of Open-Edi for Implementation (2002) 39. Laudon, K.C., Guercio Traver, C.: E-commerce: Negocios, Tecnología, Sociedad, Novena Edi (2013) 40. Manzoor, A.: E-commerce: An Introduction (2010) 41. Radovilsky, Z.: Business Models for E-Commerce (2015) 42. Sandhu, P.: Mobile commerce: beyond E-commerce. Int. J. Comput. Sci. Technol. 3(1), 759– 763 (2012). https://doi.org/10.1109/ICRITO.2015.7359251 43. Sreenivasan, J., Mohd Noor, M.N.: A conceptual framework on mobile commerce acceptance and usage among Malaysian consumers: the influence of location, privacy, trust and purchasing power. WSEAS Trans. Inf. Sci. Appl. 7(5), 661–670 (2010) 44. Preece, J., Rogers, Y., Sharp, H.: Interaction Design: Beyond Human–Computer Interaction (2002) 45. Villamor, C., Willis, D., Luke, W.: Touch gesture reference guide, pp. 253–306 (2010). https:// doi.org/10.1007/978-1-4842-4865-2_11 46. Portal ISO 25000: ISO/IEC 25010. http://iso25000.com/index.php/normas-iso-25000/iso25010 47. Shackel, B.: Usability - context, framework, definition, design and evaluation. Interact. Comput. 21(5–6), 339–346 (2009). https://doi.org/10.1016/j.intcom.2009.04.007 48. Novak, G., Lundberg, L.: Usability evaluation of an M-commerce system using proxy users. In: International Conference on Human-Computer Interaction (HCII 2015), vol. 529, pp. 164– 169 (2015). https://doi.org/10.1007/978-3-319-21383-5_28 49. Öztürk, Ö., Rizvanoˇglu, K.: M-commerce usability: an explorative study on Turkish private shopping apps and mobile sites. In: International Conference of Design, User Experience, and Usability (DUXU/HCII 2013), vol. 8015, pp. 623–630 (2013). https://doi.org/10.1007/ 978-3-642-39253-5_69 50. Cooharojananone, N., Muadthong, A., Limniramol, R., Tetsuro, K., Hitoshi, O.: The evaluation of M-commerce interface on smart phone in Thailand. In: 13th International Conference on Advanced Communication Technology, pp. 1428–1433 (2011) 51. Qasim, I., Azam, F., Anwar, M.W., Tufail, H., Qasim, T.: Mobile user interface development techniques: a systematic literature review. In: 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 1029–1034 (2019). https://doi.org/10.1109/IEMCON.2018.8614764 52. Burigat, S., Chittaro, L., Gabrielli, S.: Navigation techniques for small-screen devices: an evaluation on maps and web pages. Int. J. Hum Comput Stud. 66(2), 78–97 (2008). https:// doi.org/10.1016/j.ijhcs.2007.08.006 53. Ahmad, Z., Ibrahim, R.: Mobile commerce (M-commerce) interface design: a review of literature. IOSR J. Comput. Eng. 19(3), 66–70 (2017). https://doi.org/10.9790/0661-190304 6670 54. Miremadi, A., Aminilari, M., Hassanian-Esfahani, R.: A new trust model for B2C E-commerce based on 3D user interfaces. In: 7th International Conference on e-Commerce in Developing Countries: with Focus on e-Security, pp. 1–12 (2013). https://doi.org/10.1109/ecdc.2013.655 6747 55. Jin, B.S., Ji, Y.G.: Usability risk level evaluation for physical user interface of mobile phone. Comput. Ind. 61(4), 350–363 (2010). https://doi.org/10.1016/j.compind.2009.12.006 56. Zapata, B.C., Fernández-Alemán, J.L., Idri, A., Toval, A.: Empirical studies on usability of mHealth apps: a systematic literature review. J. Med. Syst. 39(2), 1–19 (2015) 57. Wetchakorn, T., Prompoon, N.: Method for Mobile user interface design patterns creation for IOS platform. 
In: 2015 12th International Joint Conference on Computer Science and Software Engineering (JCSSE), pp. 150–155 (2015). https://doi.org/10.1109/jcsse.2015.721 9787
58. Li, N., Hua, Q., Wang, S., Yu, K., Wang, L.: Research on a pattern-based user interface development method. In: 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 1, pp. 443–447 (2015). https://doi.org/10.1109/ihmsc.2015.203 59. Paz, F., Pow-Sang, J.A.: Usability evaluation methods for software development: a systematic mapping review. In: 2015 8th International Conference on Advanced Software Engineering and Its Applications (ASEA), pp. 1–4 (2015). https://doi.org/10.1109/asea.2015.8 60. Paz, F., Pow-Sang, J.A.: Current trends in usability evaluation methods: a systematic review. In: 2014 7th International Conference on Advanced Software Engineering and Its Applications (ASEA), pp. 11–15 (2014). https://doi.org/10.1109/asea.2014.10 61. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering: version 2.3 (2007) 62. Biolchini, J., Gomes Mian, P., Cruz Natali, A.C., Horta Travassos, G.: Systematic review in software engineering (2005). https://doi.org/10.1145/2372233.2372235 63. Elsen, I., Hartung, F., Horn, U., Kampmann, M., Peters, L.: Streaming technology in 3G mobile communication systems. Comput. (Long. Beach. Calif.) 34(9), 46–52 (2001). https:// doi.org/10.1109/2.947089 64. Rouhani, B.D., Mahrin, M.N.Z.R., Nikpay, F., Ahmad, R.B., Nikfard, P.: A systematic literature review on enterprise architecture implementation methodologies. Inf. Softw. Technol. 62(2015), 1–20 (2015). https://doi.org/10.1016/j.infsof.2015.01.012 65. Wang, L., Sajeev, A.S.M.: Roller interface for mobile device applications. In: Eighth Australasian User Interface Conference (AUIC2007) in Conferences in Research and Practice in Information Technology (CRPIT), vol. 64, pp. 7–13 (2007) 66. Vallerio, K.S., Zhong, L., Jha, N.K.: Energy-efficient graphical user interface design. IEEE Trans. Mob. Comput. 5(7), 846–859 (2006). https://doi.org/10.1109/TMC.2006.97 67. Park, J., Han, S.H.: Complementary menus: combining adaptable and adaptive approaches for menu interface. Int. J. Ind. Ergon. 41(3), 305–316 (2011). https://doi.org/10.1016/j.ergon. 2011.01.010 68. Rias, R.M., Ismail, F.: Designing interfaces in a mobile environment: an implementation on a programming language. In: 2010 International Conference on User Science and Engineering (i-USEr), pp. 232–237 (2010). https://doi.org/10.1109/iuser.2010.5716758 69. Hürst, W., Merkle, P.: One-handed mobile video browsing. In: 1st International Conference on Designing Interactive User Experiences for TV and Video (uxTV 2008), pp. 169–178 (2008). https://doi.org/10.1145/1453805.1453839 70. Bouzit, S., Chêne, D., Calvary, G.: From appearing to disappearing ephemeral adaptation for small screens. In: Australian Computer-Human Interaction Conference on Designing Futures: the Future of Design (OzCHI 2014), pp. 41–48 (2014). https://doi.org/10.1145/2686612.268 6619 71. Shtykh, R.Y., Jin, Q.: Improving mobile web search experience with slide-film interface. In: 2008 IEEE International Conference on Signal Image Technology and Internet Based Systems, pp. 659–664 (2008). https://doi.org/10.1109/SITIS.2008.43 72. Sauerwein, K., Prevost, N., de Luca, A.: Menu structuring for mobile devices. In: INFORMATIK 2008. Beherrschbare Systeme. Dank Informatik. Band 1, vol. 1, pp. 307–312 (2008) 73. Chun, J., Han, S.H., Im, H., Park, Y.S.: A method for searching photos on a mobile phone by using the fisheye view technique. Int. J. Ind. Ergon. 41, 280–288 (2011). https://doi.org/10. 
1016/j.ergon.2011.02.009 74. Xu, W., Yu, C., Liu, J., Shi, Y.: RegionalSliding: facilitating small target selection with marking menu for one-handed thumb use on touchscreen-based mobile devices. Pervasive Mob. Comput. 17, 63–78 (2015). https://doi.org/10.1016/j.pmcj.2014.02.005
75. Craig, P., Huang, X.: The mobile tree browser: a space filling information visualization for browsing labelled hierarchies on mobile devices. In: 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, pp. 2240–2247 (2015). https://doi.org/10.1109/cit/iucc/dasc/picom.2015.331 76. Yoo, H.Y., Cheon, S.H.: Visualization by information type on mobile device. In: Asia-Pacific Symposium on Information Visualisation (APVIS 2006) in Conferences in Research and Practice in Information Technology, vol. 60, pp. 143–146 (2006) 77. Francone, J., Bailly, G., Lecolinet, E., Mandran, N., Nigay, L.: Wavelet menus on handheld devices: stacking metaphor for novice mode and eyes-free selection for expert mode. In: ACM International Conference on Advanced Visual Interfaces (AVI 2010), pp. 173–180 (2010). https://doi.org/10.1145/1842993.1843025 78. Thalmann, F., Heckel, M., Von Zadow, U., Dachselt, R.: X-O arch menu: combining precise positioning with efficient menu selection on touch devices. In: Ninth ACM International Conference on Interactive Tabletops and Surfaces (ITS 2014), pp. 317–322 (2014). https:// doi.org/10.1145/2669485.2669539 79. Chhetri, A.P., Zhang, K., Jain, E.: A mobile interface for navigating hierarchical information space. J. Vis. Lang. Comput. 31, 48–69 (2015). https://doi.org/10.1016/j.jvlc.2015.10.002 80. Huber, J., Steimle, J., Mühlhäuser, M.: Toward more efficient user interfaces for mobile video browsing: an in-depth exploration of the design space. In: 18th ACM International Conference on Multimedia (MM 2010), pp. 341–350 (2010). https://doi.org/10.1145/1873951.1873999 81. Roudaut, A., Bailly, G., Lecolinet, E., Nigay, L.: Leaf menus: linear menus with stroke shortcuts for small handheld devices. In: IFIP Conference on Human-Computer Interaction. INTERACT 2009: Human-Computer Interaction – INTERACT 2009, vol. 5726, pp. 616–619 (2009). https://doi.org/10.1007/978-3-642-03655-2_69 82. Frank, J., Lidy, T., Hlavac, P., Rauber, A.: Map-based music interfaces for mobile devices. In: 16th ACM International Conference on Multimedia (MM 2008), pp. 981–982 (2008). https:// doi.org/10.1145/1459359.1459539 83. Yang, H.-H., Chen, Z.-N., Hung, C.-W.: Performance of smartphone users with half-pie and linear menus. Behav. Inf. Technol. 36(9), 935–954 (2017). https://doi.org/10.1080/0144929X. 2017.1312529 84. Almeida, F., Monteiro, J.: The Role of Responsive Design in Web Development. Webology 14(2), 48–65 (2017) 85. Nasution, M.I.P., Andriana, S.D., Syafitri, P.D., Rahayu, E., Lubis, M.R.: Mobile device interfaces illiterate In: 2015 International Conference on Technology, Informatics, Management, Engineering & Environment (TIME-E 2015), pp. 117–120 (2015). https://doi.org/10.1109/ time-e.2015.7389758 86. Kjeldskov, J.: ‘Just-in-place’ information for mobile device interfaces. In: International Conference on Mobile Human-Computer Interaction. Mobile HCI 2002: Human Computer Interaction with Mobile Devices, vol. 2411, pp. 271–275 (2002). https://doi.org/10.1007/3-54045756-9 87. Paolino, L., Sebillo, M., Tortora, G., Vitiello, G.: Framy - visualising geographic data on mobile interfaces. J. Locat. Based Serv. 2(3), 236–252 (2008). https://doi.org/10.1080/174 89720802487949 88. Baudisch, P., Rosenholtz, R.: Halo: a technique for visualizing off-screen locations. In: SIGCHI Conference on Human Factors in Computing Systems (CHI 2003), April 2013, pp. 
481–488 (2003)
89. Huot, S., Lecolinet, E.: Focus+context visualization techniques for displaying large lists with multiple points of interest on small tactile screens. In: IFIP Conference on Human-Computer Interaction. INTERACT 2007: Human-Computer Interaction – INTERACT 2007, vol. 4663, pp. 219–233 (2007). https://doi.org/10.1007/978-3-540-74800-7 90. Burigat, S., Chittaro, L.: On the effectiveness of Overview+Detail visualization on mobile devices. Pers. Ubiquitous Comput. 17(2), 371–385 (2013). https://doi.org/10.1007/s00779011-0500-3
A Taxonomy on Continuous Integration and Deployment Tools and Frameworks Patricia Ortegon Cano(B) , Ayrton Mondragon Mejia, Silvana De Gyves Avila, Gloria Eva Zagal Dominguez, Ismael Solis Moreno, and Arianne Navarro Lepe IBM, Mexico Software Lab, Carretera al Castillo 2200, 45680 El Salto Jalisco, Mexico {patricia.ortegon,ayrton.mondragon,silvana.degyves, gloria.eva.zagal,arianne.navarro}@ibm.com, [email protected]
Abstract. Software development has become a critical activity in modern companies. Competitive advantage and customer satisfaction depend strongly on the efficient delivery of new software capabilities. This requires the application of agile methodologies and the use of tools and frameworks which allow a fast and reliable integration and evolution of software systems. Currently, there exist several tools which are used for different purposes all along the software development process. Therefore, it is important for researchers in software engineering and for developers to have a wide overview and understanding of the capabilities and limitations of current technology. This paper presents an insightful study of the state of the art on continuous integration and deployment tools. It depicts a taxonomy of current approaches based on their usability during the software development process. Furthermore, it discusses the current challenges and proposes a set of study opportunities which could guide further research in this field. Keywords: Agile methodologies · Continuous integration · Continuous deployment · Software engineering · Versioning control · Code validation · Software integration
1 Introduction

During the past two decades the software process has seen an explosion of lightweight and adaptive models. They assume that writing code quickly, having it evaluated by the customers, being "wrong", and refactoring quickly is more cost effective compared to traditional approaches. These models (e.g. Lean, Agile, Scrum, etc.) provide an attractive line of work that satisfies the current software development needs of the industry, and are the starting point of more effective and lighter-weight methodologies [1]. Among the activities described by these methodologies, one critical step is the constant incorporation of new software modules or modifications to existing ones. This is not a trivial task in software development, given the large number of programmers involved and the velocity at which software is produced, particularly when agile methodologies are established.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 J. Mejia et al. (Eds.): CIMPS 2020, AISC 1297, pp. 323–336, 2021. https://doi.org/10.1007/978-3-030-63329-5_22
The integration and deployment of software requires awareness of the different pieces of code delivered by the entire development team, code blending to integrate new modules and modifications into the existing software trunk, code validation to ensure that all additions and modifications work together correctly, and versioning control to manage the software evolution across several integration cycles. In the IT context, the term integration refers to the action of combining different source codes to determine how they work as a whole. In agile methodologies, integration takes place in a constant manner, which is called "Continuous Integration" (CI). This idea was introduced as part of the concepts within the Extreme Programming methodology [2, 3]. CI is considered a software development practice that represents a key factor in terms of quality assurance. It promotes that every member of a team integrates their work frequently (at least once per day, but ideally after every change), so that the rest of the team is always working with the latest version of the system. This integration is verified by an automated build with automated tests, in order to identify integration issues as soon as possible [2, 4]. It validates the system via automated regression tests and, sometimes, through dynamic and static code validation [5]. The combination of these practices helps to ensure the health of the system. On the other hand, Continuous Delivery & Deployment (CD) is a practice that seeks to automate the entire software release process. The objective is to do CI and then automatically prepare and track a release to production [6]. From now on, we will refer to this integrated practice as CI/CD. The usage of CI/CD during the software development process has an impact not only in terms of quality, but also in terms of time. By continuously integrating small amounts of code into a system, instead of big chunks of code at every major change, developers can reduce integration times and identify and solve issues in a fast and effective way, instead of spending weeks only performing integration and then dealing with a large number of errors [3]. In order to efficiently implement CI/CD in development teams and find its technological limitations, it is important to understand the wide variety of tools and frameworks currently available. However, there is a lack of studies portraying the CI/CD ecosystem which could support developers and software engineering researchers in those challenging tasks. The objective of this paper is to present a study on CI/CD, identifying its main components, discussing current challenges, and revealing further research and development opportunities. This approach is intended to give researchers and software engineers a wide picture of current CI/CD tools and frameworks, which could ease their integration for achieving more efficient and productive software development environments. Furthermore, it also looks to identify current technological gaps and discuss possible approaches, which could serve as a starting point for researchers in the quest for novel CI/CD mechanisms. The main contributions of this paper can be summarized as follows:
• The identification of CI/CD stages and a classification of current tools and frameworks in each stage. This is critical to understand existing capabilities and limitations.
• The identification of challenges and research opportunities which can guide the development and integration of new solutions.
The rest of this paper is structured as follows: Sect. 2 introduces the topic of CI/CD and provides background on the field, describing each identified phase. Section 3 presents a taxonomy of current CI/CD tools and frameworks based on their usability during the software development process. Section 4 discusses notorious challenges in the field of CI/CD. Section 5 presents the identified research and development opportunities. Finally, Sect. 6 describes the conclusions of this work.
2 Background

2.1 Continuous Integration

Software development methodologies are in constant evolution. In a market where releases cannot take place every few years but instead are needed every few months, approaches that support the development of software systems in short periods of time have been proposed. Usually, these are based on adaptive models, such as Lean, Scrum, Agile, etc., and adapted to specific needs. One common characteristic among these approaches is the importance of detecting issues at early stages of the development process. A software development practice oriented to identifying issues as early as possible is known as Continuous Integration (CI). CI is the process of incorporating and validating work frequently. When applying CI, developers generate private builds and validate them. Once 100% of the tests have passed, they integrate their work with the project's integration branch and create a new build [7]. These activities typically take place once per day, but it is encouraged to integrate as soon as a relevant change has been performed, which can lead to multiple builds per day. In the event of a failure, which can occur before or after integration, developers must identify the problem and fix it [4]. This prevents issues from being carried into future builds. Continuous Integration can be performed following an automatic, manual or hybrid approach. The use of tools to automate CI avoids repetitive, error-prone manual methods. From an automatic point of view, CI is a process that builds, tests, analyzes, and deploys an application. It helps to ensure that the application functions correctly, follows best practices, and is deployable. This process runs with each source-code change and provides immediate feedback to the development team [8]. Some of the principles and activities that support CI include: committing code frequently, categorizing developer tests, using a dedicated integration build machine, using continuous feedback mechanisms, maintaining a single source repository, using self-testing builds, fixing broken builds, and automating deployment, among others [4, 7].

2.2 Benefits of Continuous Integration

One of the key benefits provided by CI is the reduction of risks. The integration of features and changes multiple times a day eliminates blind spots, giving the team better project visibility, letting them know where they are and what needs to be fixed [3], and reducing the risk of a lack of cohesive, deployable software and of a low-quality product [4]. The use of automated tools has an impact across most of the activities within a project, such as code compilation, database integration, testing, inspection,
deployment, and feedback [4]. It ensures that team members do not perform repeated manual activities, and also that every one of these activities is always performed in the same way. This generates reductions in terms of time, money and effort. Also, the automated tests and validations help developers to identify and fix bugs as soon as they appear, thus generating a high-quality product. Another benefit offered by CI is the ability to generate a deployable solution at any time [4]. Finally, the effective use of CI increases confidence in the software product. Since it has been tested and validated after every change, team members are aware of the product state, which gives them confidence to make new changes when needed [9]. It is important to mention that, like every methodology and software development practice, the use of CI depends on the type of project, budget, resources and available time. Almost all types of projects benefit from CI; however, it is mostly used in complex projects based on Java, .NET and Ruby on Rails [7].

2.3 Components and Main Phases of a Continuous Integration System

The main components within a Continuous Integration system are the developer, the version control repository, the CI server, the build script and the feedback mechanism. The developer is the person in charge of generating new code and fixing problems when they are identified. The version control system is a tool that keeps track of every modification to the code. Some of its responsibilities include backup/restore, synchronization, short-term/long-term undo, traceability, sandboxing, and branching/merging. It includes a repository to keep track of all the changes [10]. The CI server is a source code management system that hosts source code and the hardware infrastructure to execute several types of tests [11]. The CI server needs a script to execute in an automated manner. This script, also known as the build script, defines the activities that the CI server will perform, which include: clean, compile, integrate, test, and deploy [4, 8]. The feedback mechanism is also part of the CI server and is the component in charge of informing the team whether a build succeeded or an error has arisen. The interaction among CI components is illustrated in Fig. 1 and described as follows:
Fig. 1. Continuous integration system [3] (developers, version control system, CI server, build script, feedback mechanism).
1. The developer commits code to the version control repository. In the meantime, the CI server on the integration build machine is polling this repository for changes (e.g., every few minutes, depending on the current setup).
2. After a commit occurs, the CI server detects a change in the version control repository, retrieves the latest copy of the code from the repository and executes the build script, which integrates the software.
3. The CI server generates feedback by e-mailing build results to specified project members.
4. The CI server continues to poll for changes in the version control repository.

2.4 Continuous Delivery and Deployment

Implementing an automated way of building, testing and packaging each relevant change in the code and keeping it in a common repository (CI) is key to reducing time in the development process and improving software quality. When extending CI to automate more phases towards deployment, the next steps involve Continuous Delivery and Continuous Deployment, also known as CD. Continuous Delivery provides developers with the ability to release working versions of their code frequently. The purpose of this practice is that the software product being developed can be released at any time. Code must be ready in the master branch, in order to be deployed manually into the production environment. Continuous Deployment, on the other hand, refers to deploying every change that passes all the stages of the pipeline into production in an automated way. The idea behind Continuous Deployment is to take the pipeline through its final step automatically [6]. Combining CI with CD introduces ongoing automation not only to validate that code integration is performed without bugs (building and testing), but also to automate delivery and deployment. This process is known as the "CI/CD pipeline", where CD is performed after a successful CI. The structure of the CI/CD pipeline is depicted in Fig. 2.
Fig. 2. CI/CD pipeline: Continuous Integration (build, test, merge), Continuous Delivery (automatically release), Continuous Deployment (automatically deploy).
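As a concrete illustration of the interaction just described, the following minimal Java sketch shows the poll-build-feedback cycle of Sect. 2.3; all interfaces, class and method names here are hypothetical, not taken from any specific tool.

```java
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch of the CI poll-build-feedback loop described in Sect. 2.3. */
public class MinimalCiServer {

    /** Abstraction over the version control repository (e.g., Git or Subversion). */
    interface VersionControl {
        String latestRevision();        // identifier of the newest commit
        void checkout(String revision); // fetch that revision into the workspace
    }

    /** Abstraction over the build script: clean, compile, integrate, test, deploy. */
    interface BuildScript {
        boolean run(String revision);   // true if the build and its tests pass
    }

    /** Feedback mechanism, e.g., e-mail or a dashboard notification. */
    interface Feedback {
        void notify(String revision, boolean success);
    }

    private final VersionControl vcs;
    private final BuildScript build;
    private final Feedback feedback;
    private String lastBuilt = "";

    MinimalCiServer(VersionControl vcs, BuildScript build, Feedback feedback) {
        this.vcs = vcs;
        this.build = build;
        this.feedback = feedback;
    }

    /** Poll the repository every few minutes; build and report on each change. */
    void pollLoop() throws InterruptedException {
        while (true) {
            String head = vcs.latestRevision();
            if (!head.equals(lastBuilt)) {    // step 2: a change was detected
                vcs.checkout(head);
                boolean ok = build.run(head); // the build script integrates the software
                feedback.notify(head, ok);    // step 3: report build results
                lastBuilt = head;
            }
            TimeUnit.MINUTES.sleep(5);        // step 4: keep polling
        }
    }
}
```

Real CI servers generalize this loop with configurable triggers (for instance, repository hooks instead of fixed-interval polling), distributed build agents and richer feedback channels, but the underlying control flow is essentially the one sketched here.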
The next section provides a taxonomy that includes the most relevant tools and frameworks used to implement CI/CD.
3 Taxonomy of Tools and Frameworks

As discussed in the previous section, CI/CD is a process that helps improve the development of a software application, reducing time by reacting quickly to
any bug in the code, and performing commits more often to keep changes manageable. To do this, there are several tools that help teams implement the process and, in this section, we categorize those tools based on their main purpose. The existing tools for CI/CD can be grouped into 5 major categories [4, 11, 12]:
• Version control management
• Static code analysis
• Build automation
• Test automation
• CI/CD servers
These categories are explained in the next sub-sections with some examples of the most common tools.

3.1 Version Control Management

A version control system is a tool with which all developers can keep track of the changes to a file or set of files over time; if there is any conflict or bug in the production application, the code can be restored to a previous version without major changes or long waiting times. There are two main types of version control applications: the Version Control System (VCS) and the Distributed Version Control System (DVCS). A VCS is characterized by a centralized repository managed by the team, whereas in a DVCS the system itself manages the central repository and gives teams the ability to fully mirror the repository and create branches to work on. If any server dies, the repositories can easily be copied back to the server and restored, since every clone of the repository is a full backup of all the data [10]. The version control tool is usually connected to the build automation tool, in order to have the latest changes available for developers. Some common version control applications are:
• Git is a free DVCS that allows users to store repositories, merge changes and keep track of all the changes that have been made to the code. Git stores the data as snapshots of the project over time; this means it takes a snapshot every time you commit or save the state of your project [10]. There are also web applications hosted online that help you keep track of your changes in a visual way, like GitHub, where you can merge, modify and commit your changes in a web page and manage your files from there [10].
• Mercurial is a free, open source, distributed source control management tool. Like Git, Mercurial is a DVCS that allows developers to clone the entire repository to their workstations. It works very similarly to Git, taking snapshots of the files to keep track of the ones that have changed. It is more user-friendly to work with, since the commands are simpler and deep technical expertise is not needed to use them [13].
• Apache Subversion is an open source VCS where the user has a central repository to manage. It keeps track of the changes to a file or set of files, users can restore a previous version if needed, and it can operate across networks, which allows it to be used by users in different locations [14].
• CVS is a VCS that allows you to record the history of source files and documents. Developers can work on branches and CVS can merge the code from different locations; it can run in a client/server mode, so teams can work on the same project even if they are located in different geographies [15].

3.2 Static Code Analysis

Static code analysis tools find code defects by examining the code without executing the program. As such, static code analysis has become an essential part of CI/CD. It is usually integrated during the build process or in the testing stage. These tools are very efficient in identifying hundreds of types of defects such as concurrency, data flow, dynamic memory, and numerical defects. Static analysis is particularly well suited to security, because many security problems occur in corner cases and hard-to-reach states that can be difficult to exercise by actually running the code [16]. Some common static analysis tools are:
• Veracode is a static analysis tool built on the SaaS model. This tool is mainly used to analyze the code from a security point of view [17].
• Coverity Scan is a free service for static code analysis of open source projects. It supports 14 languages including JavaScript, .NET, Java, and Python, and integrates with Jenkins, GitHub and Travis CI [18].
• CodeSonar employs a unified dataflow and symbolic execution analysis that examines the computation of the complete application. It includes support for C, C++, C#, Java and Python [19].

3.3 Build Automation

Build automation is a process for automating the steps of creating a new software build, that is, the steps needed to convert the source code into binary code by compiling it. A "build" refers to a release version of a software project. These tools have a build scheduler which is responsible for executing the build. It can be configured in different manners: periodically, at defined times, or triggered when it detects a new change in the version control system. Selecting the build automation tool would be one of the first steps when migrating to a CI process (a minimal invocation sketch is shown after the list below). Some commonly used build automation tools are:
• Apache Ant is a Java library and command-line tool. It works by driving processes described in configuration files to create a build. The best-known usage of Ant is building Java applications. Ant provides built-in tasks to compile, assemble, test and run Java applications, but it can also be used effectively to build non-Java applications, such as C or C++ programs [20].
• Apache Maven is a software project management and comprehension tool. Maven can manage a project's build and documentation from a central repository [21].
• Make is a build automation tool that builds executable programs and libraries from source code. It reads files called "Makefiles" which specify how to produce the target application build [22].
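As a rough sketch of what a CI server's build step does with such tools, the fragment below shells out to the standard Maven command `mvn -B verify` (batch mode, compile plus tests) and fails fast on a non-zero exit code; the wrapper class itself is hypothetical and shown only for illustration.

```java
import java.io.IOException;

/** Hypothetical wrapper a CI job might use to run a Maven build and fail fast. */
public class BuildStep {
    public static void main(String[] args) throws IOException, InterruptedException {
        // -B runs Maven in batch (non-interactive) mode, as CI servers do;
        // "verify" compiles the code and runs the unit and integration tests.
        Process build = new ProcessBuilder("mvn", "-B", "verify")
                .inheritIO()               // stream the build output to the CI log
                .start();
        int exitCode = build.waitFor();
        if (exitCode != 0) {
            System.err.println("Build failed with exit code " + exitCode);
            System.exit(exitCode);         // signal a broken build to the CI server
        }
    }
}
```

A CI server wraps exactly this kind of invocation, adding workspace checkout before it and the feedback step after it, as described in Sect. 2.3.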
When including a CD pipeline, it is important to keep control of the deployment infrastructure. A configuration management tool continuously monitors and ensures that the organizational infrastructure is configured to the correct specifications. These are two of the most popular tools to handle this task:
• Chef is a configuration management tool that uses an imperative approach, meaning that you use Ruby to write your configuration settings. In this type of tool, you specify the configuration state for your nodes or for your system, and the tool ensures it is maintained that way [23].
• Puppet is a configuration tool that uses a declarative language in which you define the resource requirements, and the tool takes this definition and ensures it is maintained that way. This tool is easier to use for those with a system administration background [24].

3.4 Test Automation

Continuous integration requires that for every commit the whole application is built and a set of automated tests is run against it. To make this process fast and efficient, test orchestration is fundamental. It involves setting up the environment and selecting a set of test cases that are executed on it. Then, results are reported so the team can identify errors in the test coverage [25]. The CI pipeline is commonly focused on unit and integration testing. However, when we add a CD pipeline, we need to add support for functional and performance testing in order to be ready to deploy to production. Here, we present some of the most popular automation tools, keeping in mind that the kind of project and the programming language are also important when selecting the best fit.
• xUnit refers to a family of unit test frameworks that started with SmalltalkUnit in 1999. It was later ported to Java, creating JUnit, followed by many other languages: CppUnit (C++), NUnit (.NET), PyUnit (Python), PHPUnit (PHP). They all share the same architecture and are widely supported by current CI servers [26].
• JUnit is a widely used and extended unit test framework. It is implemented in and used with Java. The JUnit reporting format is the standard in Jenkins. Multiple tools like ReportUnit and junit2html can be used to convert it to HTML [27, 28] (a minimal JUnit test is sketched after the summary in Sect. 3.6).
• Selenium is one of the most popular frameworks for testing web applications. Selenium WebDriver provides a test domain-specific language (Selenese) to write tests in a number of popular programming languages, including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. The tests can then run against most modern web browsers. It needs to be integrated with a unit testing framework to run functional tests [29].
• Cucumber is a Behavior Driven Development (BDD) framework which is used with Selenium for performing acceptance testing. It is supported by most of the CI servers [30].
• Apache JMeter was designed to load test functional behavior and measure performance. It was originally used for testing web applications but has since expanded to other test functions. It can be integrated with Jenkins [31].
• CodeCover is a test coverage tool widely used for Java and COBOL. It is well integrated with a host of development and testing tools including Ant, Jenkins, JUnit and Eclipse [32].
• FrogLogic Coco provides test code coverage for C, C++ and C#. It integrates with all major build, CI, and test tools [33].

3.5 CI/CD Servers

Continuous Integration servers are systems that integrate, build, and test software automatically and regularly. CI servers can simplify and automate the execution of other routine development tasks, and they usually provide a convenient dashboard where build and other task results are published [4]. Some CI servers also support a CD pipeline, an automated implementation of an application's build, deploy, test, and release process. Some of the most important CI/CD servers are:
• Jenkins (CI/CD) is an open source automation server that helps with the automation of building, testing and deploying software applications. It is a server-based system and supports version control systems such as Git and Mercurial, among others. Builds can be triggered by a commit in a version control system, by scheduling a build job, or by requesting a specific build. This tool can be extended with a large number of available plugins [34].
• Travis-CI (CI/CD) is free for open source projects and paid for private ones. It is a server that hosts CI services used to build and test projects, and it is connected to GitHub [35]. GitHub notifies Travis-CI every time a new commit is pushed to a repository or a pull request is submitted. It can be configured for specific branches only or to match a specific pattern. Once it has been notified, it takes the specified branch or commit, builds it and runs the specified test cases. Once this process is complete, the service notifies the developer with the results [36].
• UrbanCode Build (CI/CD) is a distributed, multi-platform, enterprise-scale build management solution that uses a template-driven system to configure and run software builds. This tool can detect a change, build, test, and deliver feedback [37].
• Octopus (CD) is an automated deployment and release management server. It is designed to simplify the deployment of ASP.NET applications, Windows services and databases [38].

3.6 Summary

In this section we classified and described, based on 5 categories, a set of tools and frameworks commonly used to implement the CI/CD pipeline. Table 1 summarizes the findings, adding details on the supported programming languages and CI/CD-related tasks.
Table 1. Commonly used CI/CD tools.
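To make the test automation stage of Sect. 3.4 concrete, the following is a minimal JUnit 5 test of the kind a CI server runs on every commit; the class under test (PriceCalculator) is a hypothetical example, and the JUnit dependency is assumed to be declared in the project's build file.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

/** Hypothetical class under test: applies a fractional discount to a price. */
class PriceCalculator {
    static double discounted(double price, double discount) {
        if (discount < 0.0 || discount > 1.0) {
            throw new IllegalArgumentException("discount must be in [0, 1]");
        }
        return price * (1.0 - discount);
    }
}

/** Unit tests executed automatically by the CI build (e.g., via mvn verify). */
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscount() {
        assertEquals(90.0, PriceCalculator.discounted(100.0, 0.10), 1e-9);
    }

    @Test
    void rejectsInvalidDiscount() {
        assertThrows(IllegalArgumentException.class,
                () -> PriceCalculator.discounted(100.0, 1.5));
    }
}
```

A single failing assertion breaks the build, which is precisely the fast feedback CI aims for; coverage tools such as CodeCover can then report which lines these tests exercised.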
4 Current Challenges on CI/CD

We have discussed the different stages involved in a CI/CD pipeline and the benefits that can be obtained from implementing these automated tools in the development life cycle of a software project. But we have also found some challenges that companies can face while adopting these technologies.

Implementation in Legacy Projects. Many companies have processes that cover some of the stages described in a CI/CD pipeline. They have been honing their own tools over the years and making them work for their specific projects [39]. Shifting to an official CI/CD pipeline implementation will require changing parts of the development workflow, deprecating some of the automation tools in place, and demanding new technical skills and qualifications. This must be done with extra caution, or the productivity of the development team could be impacted.
Lack of Suitable Tools and Technologies. A robust, out-of-the-box, comprehensive, and yet highly customizable solution for CI/CD does not exist yet [40]. As described in the previous sections, there are multiple options for each of the stages in the pipeline, and some of them may be more appropriate for some projects than for others. There are projects with very specific requirements where none of the existing tools will be adequate for automation, or where automation will be difficult (e.g., monolithic applications, projects with hardware dependencies, or multiple environments that are hard to recreate during testing). Dealing with such applications that are not amenable to CI/CD is also challenging. The existing tools and technologies have limitations that could interfere with the goals of continuous practices [41].

Lack of a Proper Test Strategy. One of the most strategic stages of a CI/CD pipeline is testing. Automated continuous testing allows a software development team to find more bugs and small problems in early stages, before they become large ones. But the challenges that come with this practice are among the most prominent roadblocks to adopting a continuous integration pipeline [40]. It is common for teams that are not using continuous practices to lack automated testing, and depending on the size of the code base, this can demand a great effort from the team. This situation arises for different reasons, such as poor infrastructure for test automation, the time-consuming and laborious process of automating manual tests, and dependencies between hardware and software. When including a continuous delivery practice in the pipeline, performance tests are also needed; these are among the most challenging types of tests to automate, due to the lack of tools that can integrate all the steps involved [42].

Lack of Meaningful Dashboards and Metrics. Different members of a team will have different priorities, and even across different stages of the development process these priorities can change. Having a well-designed dashboard with meaningful indicators is key to taking advantage of all the data that is generated when a CI/CD pipeline is implemented [40]. This is not always easy to accomplish and could lead to frustration with the tool on the part of the team, when members are not able to see the information concerning their work, or when they get too much information coming from different logs, which makes it hard to find what is really relevant.

Managing Multiple Environments. Having multiple environments is a best practice in CI/CD implementation. It isolates the production environment and adds levels of control for assuring the quality of the changes. Development, staging and production are the most frequently used, but this can vary depending on the business process. These multiple environments represent a challenge, since it is key to keep track of their similarities and peculiarities. Without a plan in place, these environments can quickly multiply, consuming a lot of computing and storage resources, and become difficult to control.
5 Research and Development Opportunities Quality assurance is one of the most important steps of software change management, and the practices included in a CI/CD pipeline are becoming one of the most popular
tools of this process, helping to improve the quality of the final product [43]. We have presented some of the most common challenges in implementing this workflow, which also come with development opportunities. Outdated or missing documentation is a common scenario in software development teams; it usually causes delays when performing maintenance work and can also have an impact on the quality of the product. Continuous documentation integrated into the CI/CD pipeline could be an effective way to always have documentation up to date. Creating new tools that integrate the documentation process into the development life cycle in an easier and more automated way could help to improve the quality of the final product [44]. As mentioned before, the lack of adequate reporting tools is a challenge that could affect the acceptance of continuous practices in a team. This creates an area of opportunity: further research is needed in order to create tools that can exploit all the information created during the execution of the pipeline, as well as the way the results are presented and monitored depending on the different stakeholders in the projects [3]. The CI/CD pipeline improves the quality of the products, though the defects and problems it finds are determined by the test cases or checkpoints a human being (the developer) created. This is an area of opportunity for machine learning algorithms: pipeline jobs able to make their own decisions based on data analytics, and not only on the set of conditions they must meet.
6 Conclusions

We have presented a taxonomy of the CI/CD pipeline, giving a brief explanation of all the steps involved as well as a list of some of the most common tools for each stage. We categorized these tools based on their main purpose into build automation, version control, testing frameworks, static code analysis and CI/CD servers. In this paper we also discussed some of the most important challenges identified when migrating to or implementing a CI/CD process. We observed that, along with technology challenges, there are others related to team culture and business strategy that could slow down the implementation. We also identified a number of research opportunities, including continuous documentation, mature reporting tools and the use of machine learning algorithms across CI/CD pipelines to further automate software development processes. In conclusion, we can say that there is a vast number of options when it comes to selecting tools for continuous practices. Those tools try to adapt to and cover all needs during the CI/CD process, but this is a difficult task to achieve because of the different types of software projects. New technologies like cloud development and containers offer the possibility of extending the resources required for different processes in a more flexible way, but this will continue to be a challenge, since not all companies are ready to move to those environments. Along with the technological challenges, the cultural and organizational challenge is quite important: it is necessary to break down barriers among teams and promote a collaborative culture. Communication and collaboration among the different stakeholders involved in the different stages of a CI/CD pipeline, along with the right tools that can foster those
requirements will be key to achieving the goal of these practices: improving the quality of the final product. This process has been evolving over the years and will continue to do so; new practices will be added and new tools will be created, and it will be challenging for teams to adapt to those changes. However, with a well-established, documented process and resources where companies and teams can look for information, it will be easier to adapt and implement a CI/CD process.
References

1. Richard, K., Leffingwell, D.: SAFe 5.0 Distilled: Achieving Business Agility with the Scaled Agile Framework. Addison-Wesley Professional, Boston (2020)
2. Zampetti, F., Scalabrino, S., Oliveto, R., Canfora, G., Di Penta, M.: How open source projects use static code analysis tools in continuous integration pipelines. In: IEEE/ACM 14th International Conference on Mining Software Repositories, Buenos Aires (2017)
3. Meyer, M.: Continuous integration and its tools. IEEE Softw. 31(3), 14–16 (2014)
4. Duvall, P., Glover, A., Matyas, S.: Continuous Integration: Improving Software Quality and Reducing Risk. Addison-Wesley Professional, Boston (2007)
5. Ambler, S.W., Lines, M.: Disciplined Agile Delivery: A Practitioner's Guide to Agile Software Delivery in the Enterprise. IBM Press, Upper Saddle River (2012)
6. Humble, J., Farley, D.: Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Addison-Wesley Professional, Boston (2010)
7. Pathania, N.: Learning Continuous Integration with Jenkins. Packt Publishing, Birmingham (2017)
8. Berntson, C., Kawalerowicz, M.: Continuous Integration in .NET. Manning Publications, Shelter Island (2011)
9. Hilton, M., Tunnell, T., Huang, K., Marinov, D., Dig, D.: Usage, costs, and benefits of continuous integration in open-source projects. In: 31st IEEE/ACM International Conference on Automated Software Engineering (2016)
10. Chacon, S., Straub, B.: Pro Git. Apress, Berkeley (2014)
11. Yu, L., Alégroth, E., Chatzipetrou, P., Gorschek, T.: Utilising CI environment for efficient and effective testing of NFRs. Elsevier Inf. Softw. Tech. 117, 1–6 (2020)
12. Prakash, W., Burns, E.: Hudson Continuous Integration in Practice. Oracle Press (2014)
13. Mercurial Community: Mercurial - Work easier, Work faster. https://www.mercurial-scm.org. Accessed 07 Aug 2020
14. Apache Software Foundation: Apache Subversion (2020). https://subversion.apache.org. Accessed 07 Aug 2020
15. GNU: CVS - Open Source Version Control (2020). https://www.nongnu.org/cvs. Accessed 07 Aug 2020
16. Novak, J., Krajnc, A., Žontar, R.: Taxonomy of static code analysis tools. In: 33rd International Convention MIPRO. IEEE (2010)
17. Veracode: Static Analysis (SAST) (2020). https://www.veracode.com/products/binary-static-analysis-sast. Accessed 07 Aug 2020
18. Synopsys Inc.: Coverity Scan - Static Analysis (2020). https://scan.coverity.com. Accessed 07 Aug 2020
19. GrammaTech: CodeSonar - Static Analysis SAST Software for Secure SDLC (2020). https://www.grammatech.com/products/codesonar. Accessed 07 Aug 2020
20. The Apache Software Foundation: Apache Ant Project (2020). https://ant.apache.org. Accessed 07 Aug 2020
21. The Apache Software Foundation: Apache Maven Project (2020). https://maven.apache.org. Accessed 07 Aug 2020
22. GNU: Make - GNU Project - Free Software Foundation (2020). https://www.gnu.org/software/make. Accessed 07 Aug 2020
23. Chef: Chef Compliance. https://www.chef.io/home. Accessed 07 Aug 2020
24. Puppet: Relay by Puppet - Put your Infrastructure on Autopilot (2020). https://puppet.com. Accessed 07 Aug 2020
25. Knauss, E., Pelliccione, P., Heldal, R., Ågren, M., Hellman, S., Maniette, D.: Continuous integration beyond the team: a tooling perspective on challenges in the automotive industry. In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (2016)
26. Vemula, R.: Test-driven approach using xUnit.net. In: Real-Time Web Application Development. Apress, Berkeley (2017)
27. Venkatesan, K.P., Gade Rozario, R., Fiaidhi, J.: JUnit framework for unit testing. In: TechRxiv (2020)
28. The JUnit Team: JUnit 5 (2020). https://junit.org. Accessed 07 Aug 2020
29. Software Freedom Conservancy: SeleniumHQ Browser Automation (2020). https://www.selenium.dev. Accessed 07 Aug 2020
30. SmartBear Software: Cucumber (2020). https://cucumber.io. Accessed 07 Aug 2020
31. Apache Software Foundation: Apache JMeter (2020). https://jmeter.apache.org. Accessed 07 Aug 2020
32. CodeCover: CodeCover. http://codecover.org. Accessed 07 Aug 2020
33. Froglogic: Coco Code Coverage (2020). https://www.froglogic.com/coco. Accessed 07 Aug 2020
34. Jenkins: Jenkins - Build Great Things at Any Scale (2020). https://www.jenkins.io. Accessed 07 Aug 2020
35. GitHub Inc.: GitHub - Built for Developers (2020). https://github.com. Accessed 07 Aug 2020
36. Travis CI: Travis CI - Test and Deploy with Confidence. https://travis-ci.org. Accessed 07 Aug 2020
37. UrbanCode: UrbanCode Build. https://www.urbancode.com/product/build. Accessed 07 Aug 2020
38. Octopus Deploy: Continuous Delivery, Deployment and DevOps Platform - Octopus Deploy (2020). https://octopus.com. Accessed 07 Aug 2020
39. Arachchi, S., Perera, I.: Continuous integration and continuous delivery pipeline automation for agile software project management. In: 2018 Moratuwa Engineering Research Conference (MERCon), pp. 156–161 (2018)
40. Chen, L.: Continuous delivery: huge benefits, but challenges too. IEEE Softw. 32(2), 50–54 (2015)
41. Gallaba, K.: Improving the robustness and efficiency of continuous integration and deployment. In: IEEE International Conference on Software Maintenance and Evolution (ICSME), Cleveland, OH, USA (2019)
42. De Gyves Avila, S., Ortegon Cano, P., Mondragon Mejia, A., Solis Moreno, I., Navarro Lepe, A.: A data driven platform for improving performance assessment of software defined storage solutions. In: International Conference on Software Process Improvement (2019)
43. Rahman, M.M., Roy, C.K.: Impact of continuous integration on code reviews. In: IEEE/ACM 14th International Conference on Mining Software Repositories (2017)
44. Shimel, A.: Documenting DevOps: Agile, Automation and Continuous Documentation (2015). https://devops.com/documenting-devops-agile-automation-and-continuous-documentation. Accessed 07 Aug 2020
Accessifier: A Plug-in to Verify Accessibility Requirements for Web Widgets

Gabriel Alberto García-Mireles and Ivan Moreno-Soto

Departamento de Matemáticas, Universidad de Sonora, 83000 Hermosillo, Sonora, Mexico
[email protected], [email protected]
Abstract. Accessibility refers to the extent to which a system can be used by people with the widest range of capabilities. In the case of current web-based systems, where web pages can be built dynamically, some concerns have arisen about their accessibility. A major research area has studied the accessibility of static web pages, but little research has focused on how dynamic web pages address accessibility recommendations. Thus, we developed Accessifier, a plug-in for the Chrome browser, considering the Web Accessibility Initiative recommendations. We implemented requirements to assess the accessibility of the following widgets: buttons, menu buttons, breadcrumbs, carousels, menu bars, and modal dialogs. Accessifier was validated by assessing the widgets found in 15 popular websites. Our results showed that Accessifier can identify 87% of widgets, and 75% of the errors detected by the tool were also found by manual assessment. Thus, preliminary results showed that Accessifier can verify accessibility requirements for some web widgets.

Keywords: WAI-ARIA · Accessibility · Web widgets · Plug-in · Tool · Assessment
1 Introduction

Nowadays, millions of individuals depend on the content provided through different digital media to work, achieve educational purposes, and satisfy entertainment goals. Thus, websites have become an important medium to support daily activities, including those of people who live with some type of disability. According to the World Health Organization, around 15% of the world population lives with some form of disability [1]. In Mexico, the estimated prevalence of disability is around 13% of the total population [2]. Besides, the population aging process and the prevalence of chronic diseases can increase these percentages [1]. Accessibility is defined as the extent to which a system can be used by people with the widest range of capabilities to achieve a specified goal in a specified context [3]. For web technology, the Web Content Accessibility Guidelines (WCAG) are a main source of recommended practices for enhancing accessibility [4]. These guidelines make web content more accessible whether it is consulted on desktop or mobile devices [4]. Besides, people without disabilities can also perceive improvements in the usability of websites when there is an effort to make them more accessible [5].
The Web Accessibility Initiative (WAI) has released several guidelines, and they are mainly used to assess static web pages [6–8]. However, current web applications create dynamic web pages that can display distinct content depending on either the response from users or system events. In addition, web designers and developers can include widgets with specific behavior, such as carousels or sliders, that cannot be determined only by checking HTML tags. Few research studies have been conducted on developing tools that support the accessibility of web applications or rich internet applications (RIA) [6]. In order to assess their accessibility, we need to apply another set of guidelines, WAI-ARIA [9], which provides semantic information for the web widgets used in contemporary web applications (e.g., carousels and sliders, among others). In this work, we propose a plug-in, named Accessifier, to verify a set of web widgets. Accessifier is based on the recommended practices for implementing WAI-ARIA [10]. This work describes the plug-in and a preliminary validation. The paper is structured as follows: Sect. 2 briefly describes related work on supporting the accessibility of web applications. Section 3 outlines the main features of Accessifier, while Sect. 4 presents the methodology. Next, Sect. 5 presents the results of assessing the plug-in, and Sect. 6 presents the discussion. Finally, Sect. 7 presents our conclusions and future work.
2 Related Work

A major stream of research has been conducted on assessing the accessibility of websites. There are reports from diverse domains, such as e-government services [11] and higher education websites [8, 12], among others. Indeed, large IT companies are also interested in enhancing the accessibility of their products and services, particularly when accessibility conformance must be documented and reported to clients [13]. Assessing the conformance of websites with recommended guidelines, such as WCAG, is a common approach [6]. In general, automatic assessment tools implement WCAG recommended practices to determine whether HTML code is accessible [7, 14]. These tools assess static web pages, where the main issues reported are related to alternative texts, color contrast, link visibility, and list elements, among others [8, 11]. Despite advances in assessing static web pages, current websites contain pages that are dynamically created, using technologies such as AJAX, to provide users with an interaction similar to a desktop application [15]. A significant part of websites are widgets that enable more effective interaction and presentation of web content [16]. Although an individual without a disability can visually identify changes in the web content or page structure caused by JavaScript commands, these updates are difficult to perceive with assistive technologies such as screen readers [17]. Thus, these widgets might not be accessible. To improve the accessibility of dynamic web pages, the World Wide Web Consortium (W3C) published the WAI-ARIA (WAI-Accessible Rich Internet Applications) accessibility guidelines [9]. Their purpose is to make web content and web applications more accessible. In particular, they support the accessibility of dynamic content and widgets developed with AJAX, HTML, and JavaScript, among other technologies. Besides, the authoring practices [10] guide how to use the WAI-ARIA guidelines by describing recommended usage patterns. These practices recommend adding semantic information to widgets, where HTML tags should include WAI-ARIA attributes for roles, states, and properties.
The incorporation of WAI-ARIA attributes into HTML elements is a way for a web developer to provide adequate semantics for custom widgets, making them accessible and interoperable with assistive technologies. Roles identify widgets, and they do not change with time or user actions. Role information is used by assistive technology to provide the normal processing specified by the role. States and properties declare attributes of an element that affect and describe an interaction [10]. Accessibility assessment tools have limited capabilities and analyze only the original HTML structure of a web page [16]. Indeed, their major shortcoming is the inability to check whether the web application widgets follow the WAI-ARIA recommendations [18]. Therefore, several efforts have been made to develop automatic evaluation tools targeted at dynamic web pages. For instance, Doush et al. [18] propose a conceptual framework for the automatic evaluation of the accessibility of RIAs, but the proposal does not provide a tool. Other studies have explored distinct mechanisms to automatically assess the accessibility of dynamic web pages. One of them checks DHTML accessibility based on static JavaScript code [19]. Another identifies differences between assessing the static HTML code and the DOM tree generated for the web page [20]. Besides, Watanabe et al. [6] propose test cases based on screen readers' usage scenarios to automatically test the DOM structure of a web page with the purpose of identifying accessibility barriers. Despite the efforts to automatically assess the accessibility of interactive widgets and web pages, no approach has effectively achieved this goal [16]. A main issue is that existing tools still cannot automatically identify and classify a web page code segment as a RIA component [16]. Another approach to assessing the accessibility of web widgets is to consider the semantic information provided in the HTML code. Watanabe et al. [6] developed a prototype, named the aria-check tool, that uses behavior-based acceptance tests to evaluate specific RIA accessibility requirements of a widget. In this case, the prototype only implements ARIA conformance evaluation for the tab panel widget. The tool executes two procedures: an initial verification of the HTML tags with the WAI-ARIA elements that describe the widget's behavior, and the execution of acceptance test cases to evaluate whether the web page was developed considering the WAI-ARIA requirements. The goal of the verification procedure is to provide developers with the technological flaws in the widgets' structure and presentation; this phase does not evaluate updated elements. Before conducting dynamic accessibility evaluations, Watanabe et al. [6] suggest that it is necessary to implement setup verification of a specific web widget considering its markup structure and ARIA attributes.
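As a concrete illustration of roles, states, and properties (our own minimal example, not markup from any cited study), a custom menu button could expose its semantics to assistive technologies as follows; aria-haspopup is a property, while aria-expanded is a state that scripts must update when the menu opens or closes:

```html
<!-- Illustrative WAI-ARIA semantics for a custom menu button (sketch). -->
<div role="button" tabindex="0" id="options-button"
     aria-haspopup="true" aria-expanded="false">
  Options
</div>
<ul role="menu" aria-labelledby="options-button" hidden>
  <li role="menuitem">Profile</li>
  <li role="menuitem">Sign out</li>
</ul>
```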
3 Accessifier: A Plug-in for Verifying Accessibility Requirements

Accessifier is a plug-in for the Chrome browser that assesses WAI-ARIA recommendations for a set of widgets that may be included in a web page. The plug-in was developed in JavaScript using both the jQuery library and the Chrome plug-in API. Accessifier has three main elements: a popup dialog that displays the set of widgets that can be assessed (Fig. 1), a set of scripts that records the keyboard events included in the web page, and the set of tests that verifies the semantic information of the identified widgets. After installing Accessifier
in the browser, an icon is added beside the navigation bar. Once a web developer has loaded a web page, the icon can be clicked to select the widgets to be tested and execute the appropriate tests.
Fig. 1. Popup dialog for selecting widgets.
When the selected tests are executed, the web page is reloaded to inject code that registers keyboard events. Then, the plug-in verifies the specific WAI-ARIA recommendations for the semantic information that each widget should include. For instance, Table 1 depicts the requirements for a carousel that were derived from the WAI-ARIA recommended practices [10]. In total, Accessifier implements requirements for the following widgets: button, link, breadcrumb, persistent menu, menu button, modal dialog, and carousel. Besides, the plug-in checks that a widget belongs to the page tab sequence by means of the tabindex attribute. In addition, it verifies that the JavaScript code handles keyboard events. Moreover, the plug-in assesses widgets and native HTML code that include an explicit role attribute. After executing the testing procedure, Accessifier shows a modal dialog reporting the errors found in each one of the widgets. When widgets pass the requirements of the tool, the report only includes the number of widgets identified. For example, Fig. 2 shows, at the top, the code of a carousel that has defects and, at the bottom, the specific defects found.
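The paper does not include Accessifier's source code, but the reload-and-inject step can be illustrated with a small sketch: a script injected before the page's own scripts run can wrap addEventListener so that keyboard-handler registrations become observable. The property name below is our own assumption, used only for illustration.

```javascript
// Sketch of an injected script that records keyboard-event registrations
// (illustrative only; not Accessifier's actual implementation).
(function () {
  const targets = []; // elements that registered keydown/keypress handlers
  const original = EventTarget.prototype.addEventListener;

  EventTarget.prototype.addEventListener = function (type, listener, options) {
    if (type === 'keydown' || type === 'keypress') {
      targets.push(this);
    }
    return original.call(this, type, listener, options);
  };

  // Hypothetical hook so the extension's tests can inspect the record later.
  window.__keyboardHandlerTargets = targets;
})();
```

Reloading the page before injection matters because the wrapper must be in place before the site's own scripts attach their handlers.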
Table 1. Carousel widget requirements derived from WAI-ARIA practices.

Elements | Accessibility requirements
Role | The role is defined as region or group. Each slide has role = "group".
Structure | There are slide controls for the pause, previous, and next actions. The controls appear before the slides. Each slide has its own container. Each slide has aria-roledescription = "slide".
Initial state | Each button is related to JavaScript code for handling keyboard events.
Text alternatives | aria-label or aria-labelledby is used to provide text alternatives. Either aria-label or aria-labelledby is used, but not both in the same slide. An aria-labelledby can refer to an existing ID. Each slide has an accessible attribute.
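To make the requirements in Table 1 concrete, the fragment below sketches carousel markup that would satisfy them. It is our illustrative example, not markup taken from any assessed website:

```html
<!-- Illustrative carousel markup for the Table 1 requirements (sketch). -->
<div role="region" aria-roledescription="carousel" aria-label="Featured news">
  <!-- Slide controls appear before the slides. -->
  <button type="button">Pause</button>
  <button type="button">Previous slide</button>
  <button type="button">Next slide</button>

  <!-- Each slide has its own container with role="group",
       aria-roledescription="slide", and exactly one labelling mechanism. -->
  <div role="group" aria-roledescription="slide" aria-label="Slide 1 of 2">
    ...
  </div>
  <div role="group" aria-roledescription="slide" aria-labelledby="slide2-title">
    <h3 id="slide2-title">Second story</h3>
  </div>
</div>
```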
Fig. 2. Example of a report for an image carousel widget.
4 Methodology

Based on the testing procedures of accessibility assessment tools [6, 7], we developed the methodology of this study. The goal is to determine the extent to which Accessifier correctly
identifies the semantic information in the studied widgets. Thus, we followed these steps: selecting websites, identifying widgets, performing automatic verification using Accessifier, conducting a manual accessibility assessment, and comparing the testing results.

Selecting websites. We selected 15 websites from the 50 reported as the most popular ones. The list of sites was gathered from alexa.com; when evaluating accessibility tools, several researchers select websites listed on this site [7, 21, 22]. For each of the 15 websites selected, we assessed only the home page, which has at least one widget that Accessifier can verify.

Identifying widgets. Based on the widgets that our tool can verify (button, link, breadcrumb, carousel, persistent menu, modal dialog, and menu button), we identified the number of widgets included in each web page. This task allows us to identify differences between the actual widgets in a static web page and those found by Accessifier.

Performing automatic verification using Accessifier. To assess a widget, Accessifier identifies its HTML elements by searching for them in the Document Object Model (DOM) of the web page under assessment. To identify a widget, Accessifier searches the HTML elements for WAI-ARIA attributes. For instance, for a carousel, Accessifier searches for any element containing the class attribute with the values "carousel" or "slider". Another way to identify a carousel widget is to check for the attribute aria-roledescription with the value "carousel" in a container that does not depend on another carousel container. For each identified widget, Accessifier verifies that the WAI-ARIA semantic information is included and whether onkeydown and onkeypress events are handled in JavaScript. When Accessifier does not find the expected WAI-ARIA attributes or code related to a keyboard event, it reports an error.

Conducting the manual accessibility assessment. The manual assessment was carried out on the widgets identified by Accessifier. By visual observation, the corresponding widget's HTML code was reviewed to identify whether the WAI-ARIA attributes were included. For interactive behavior, the aria-expanded attribute was checked to verify that the element gets focus and is appropriately activated when the Enter key is pressed or the left mouse button is clicked. For menu bars, the arrow keys were used to carry out navigation testing.

Comparing testing results. To verify the assessment results, we compared the automatic testing results provided by Accessifier with the manual testing results. An error is reported when both the automatic and the manual tests find it. A widget correctly implements the accessibility requirements when neither the plug-in nor the manual testing procedure finds errors. A false positive is reported when Accessifier reports an error but the manual testing procedure does not find evidence of it. Conversely, a false negative is reported when Accessifier does not report an error but the manual testing shows it.

Our equipment for testing Accessifier was an Asus laptop with an Intel Core i7, 8 GB of RAM, and GNU/Linux. We installed the Chromium browser, version 83.0.4103.61, to carry out the testing procedure. Accessifier was installed in the browser and was available as an icon to execute the tests.
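A minimal sketch of the carousel identification and verification steps described above might look as follows. It uses jQuery, which the paper states Accessifier is built on, but the function names, selectors, and error messages are our own assumptions:

```javascript
// Approximation of the detection/verification logic (illustrative sketch).
function findCarousels($) {
  // Heuristic 1: class attribute containing "carousel" or "slider".
  const byClass = $('[class*="carousel"], [class*="slider"]');

  // Heuristic 2: explicit role description, excluding nested carousels.
  const byAria = $('[aria-roledescription="carousel"]').filter(function () {
    return $(this).parents('[aria-roledescription="carousel"]').length === 0;
  });

  return $.uniqueSort(byClass.add(byAria).get());
}

function verifyCarousel($, element) {
  const errors = [];
  const role = $(element).attr('role');
  if (role !== 'region' && role !== 'group') {
    errors.push('Carousel container is missing role "region" or "group"');
  }
  $(element).find('[aria-roledescription="slide"]').each(function () {
    // A slide must use aria-label or aria-labelledby, but not both.
    const hasLabel = this.hasAttribute('aria-label');
    const hasLabelledby = this.hasAttribute('aria-labelledby');
    if (hasLabel === hasLabelledby) {
      errors.push('Slide must use exactly one of aria-label/aria-labelledby');
    }
  });
  return errors;
}
```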
5 Results

Table 2 shows the 15 websites verified in the testing procedure. Besides, the table also presents the number of widgets found on June 20, 2020, by visual inspection of the home
page of each website. In total, there are 1849 widgets to assess. The most common are links (1585) and buttons (186), because these are also addressed by native HTML code. Other widgets found were menu buttons (43), menu bars (11), carousels (10), and modal dialogs (14). In the assessed pages we did not find any breadcrumb widget. Buttons, menu buttons, and links that belong to a menu bar are not counted as independent widgets because they are part of the menu bar. Considering the number of widgets per website, Google has 20 while Wikipedia has 325.

Table 2. Websites assessed and their corresponding widgets.
Website | Button | Link | Menu button | Menu bar | Carousel | Modal dialog | Total
google.com | 5 | 12 | 2 | – | – | 1 | 20
youtube.com | 30 | 111 | 4 | 1 | – | 6 | 152
facebook.com | 7 | 47 | – | – | – | 3 | 57
baidu.com | 13 | 37 | – | 2 | – | 1 | 53
wikipedia.org | 2 | 323 | – | – | – | – | 325
zoom.us | 60 | 99 | 2 | 1 | 3 | 1 | 166
live.com | 1 | 26 | – | – | 1 | 1 | 29
netflix.com | 7 | 25 | – | – | – | – | 32
microsoft.com | 14 | 50 | 10 | 1 | 1 | – | 76
office.com | 4 | 63 | – | – | 3 | – | 70
blogspot.com | – | 16 | – | 1 | – | – | 17
bing.com | 9 | 56 | 5 | 1 | – | – | 71
stackoverflow.com | 8 | 178 | 7 | 1 | – | – | 194
aliexpress.com | 18 | 289 | 3 | 2 | – | – | 312
ebay.com | 8 | 253 | 10 | 1 | 2 | 1 | 275
Total | 186 | 1585 | 43 | 11 | 10 | 14 | 1849
Table 3 presents the number of widgets identified by Accessifier in each of the assessed websites. Since Accessifier searches for elements inside the DOM tree of a web page, we can find differences from the number of elements identified in the static web page [20]. The global difference in the number of widgets is 12.94%. The lack of specific WAI-ARIA attributes, e.g., role, hinders the possibility of a widget being identified by Accessifier. Considering the sites with the largest number of widgets identified by Accessifier (youtube.com, zoom.us, aliexpress.com), the tool does not identify the persistent menu group and counts each of its elements independently. Besides, Table 3 also presents the number of errors found by Accessifier. We did not find errors in four sites (baidu.com, wikipedia.org, blogspot.com, aliexpress.com), and six sites showed at least 70% consistency between the number of errors
identified by Accessifier and the manual verification. In total, our tool found 180 errors, but the manual verification confirmed only 135, which represents 75% correctness. The remaining 25% corresponds to the number of false positives (45). We found the most false positives on the websites of Microsoft (17), YouTube (8), and Zoom (5). In the case of the Microsoft website, the false positives correspond to carousels and persistent menus. The errors reported by Accessifier for these widgets are a missing role, incomplete controls, and a missing slide label for the carousel, and more than one list of elements in the persistent menu.

Table 3. Accessibility-related errors found in the selected websites.
Website | Number of widgets identified | Errors found by Accessifier | Verified errors | False positive | False negative
google.com | 16 | 5 | 1 | 4 | 0
youtube.com | 212 | 30 | 22 | 8 | 0
facebook.com | 54 | 10 | 7 | 3 | 0
baidu.com | 53 | 0 | 0 | 0 | 2
wikipedia.org | 325 | 0 | 0 | 0 | 0
zoom.us | 181 | 33 | 28 | 5 | 16
live.com | 25 | 13 | 11 | 2 | 28
netflix.com | 31 | 4 | 2 | 2 | 0
microsoft.com | 51 | 36 | 19 | 17 | 15
office.com | 29 | 6 | 3 | 3 | 52
blogspot.com | 16 | 0 | 0 | 0 | 0
bing.com | 39 | 12 | 12 | 0 | 42
stackoverflow.com | 66 | 3 | 2 | 1 | 6
aliexpress.com | 269 | 0 | 0 | 0 | 0
ebay.com | 335 | 28 | 28 | 0 | 35
False negative errors arose when we conducted the manual assessment of the widgets included in the web pages. The websites with the most false negatives were Office (52), Bing (42), and eBay (35). The widgets related to false negatives are persistent menus and carousels. Accessifier does not identify errors for each of these widgets for the following reasons: the structure of the widgets found on the websites is too complex for the tool to properly analyze them, slide containers and element lists are nested in other containers, and slides have further nested containers inside them for the elements they contain. These errors are related to the WAI-ARIA requirement concerning the container structure of slides.
The analysis of errors by widget is presented in Table 4. Modal dialog (15) and menu button (22) show the best results. For the former, Accessifier found the same number of errors as the manual testing procedure; for the latter, Accessifier showed 95% consistency with the manual testing procedure. However, we found an important number of false positives for the button, menu bar, and carousel widgets. For the false positive errors related to button widgets, the reason is that keyboard interaction is implemented without registering events at loading time; this is common in google.com and youtube.com. The false positive errors reported for carousels are due to the failure of the tool to detect semantic data on the carousel container. The false positive errors related to menu bars are due to Accessifier identifying a navigation element as a persistent menu when it is used as a footer on the web page. Finally, the false negative errors related to the link widget (7) are due to the failure to detect empty links and elements with an incorrect role "link" that act as menu buttons.

Table 4. Number of errors by widget.
Widget | Errors identified by Accessifier | Verified errors | False positives | False negatives
Button | 64 | 45 | 19 | 0
Link | 4 | 2 | 2 | 7
Menu button | 22 | 21 | 1 | 0
Menu bar | 23 | 14 | 9 | 60
Carousel | 52 | 38 | 14 | 129
Modal dialog | 15 | 15 | 0 | 0
6 Discussion

We have presented the Accessifier plug-in, developed to verify accessibility requirements on web pages that include widgets. The accessibility requirements were derived from the WAI-ARIA recommended practices [10]. As a result of validating the plug-in, we verified the accessibility requirements of 15 popular websites. In total, Accessifier identified 87% of the widgets included in the websites. The differences can be explained by the distinct environments used to identify widgets, since the plug-in searches for elements in the browser, using the DOM structure, while the manual assessment counts widgets by visual inspection of the content of a static web page [20]. Accessifier found 180 errors in total, but only 135 of them were identified in the manual assessment; thus, the plug-in has 75% correctness. The remaining errors (45) were false positives. Carousels and menu bars presented the largest numbers of false positives. For the former widget, the problem was the failure of Accessifier to identify semantic data in attributes; for the latter, the problem was that Accessifier does not include the capability to search for menu bars in the footer of a web page. Considering
the false negatives related to the link widget, Accessifier does not detect empty links or elements with the role link that act as menu buttons. Thus, like other tools that assess WAI-ARIA requirements, Accessifier shows potential to verify accessibility requirements, but it requires more work to improve its robustness. It can therefore serve as an exploratory tool with the potential to verify semantic data embedded in the HTML elements of web widgets. Comparing the effectiveness results of Accessifier with those of other accessibility assessment tools, we find similarities. For instance, Vigo et al. [23] analyzed six tools that used WCAG to assess the accessibility of websites. They found that tools implement only between 14% and 38% of the criteria, and some of them produce lower correctness scores (66 to 71%) due to an increase in false positives. Similarly, an assessment of eight plug-ins for evaluating accessibility found that individual tools have poor coverage of the WCAG 2.1 success criteria, between 10% and 40% per tool [7]. Considering usability, it is reported that developers may spend a lot of effort trying to understand accessibility evaluations because of the variations in how these tools are implemented, how they are used, and how their results are interpreted [7]. Our work was inspired by the proposal of Watanabe et al. [6], in the sense that we also verify the semantic information recommended by WAI-ARIA for inclusion in HTML elements. While Watanabe et al. [6] reported the implementation of rules to verify only the tab panel widget, we implemented rules to verify seven widgets: buttons, links, breadcrumbs, menu buttons, menu bars, carousels, and modal dialogs. Like Watanabe et al. [6], we also consider that a first step to assess accessibility in dynamic web pages is verifying the WAI-ARIA semantic information that should be available in widgets. There is a lack of tools for assessing accessibility in websites that use interactive widgets. Indeed, Watanabe et al. [21] conclude that WAI-ARIA is not considered in many web projects. To ensure web accessibility, web developers should be aware of and implement guidelines such as WAI-ARIA, but this scenario barely occurs [24]. Besides, current automatic accessibility evaluation tools cannot assess the whole set of requirements included in the WAI-ARIA specification [16]. Thus, research and development work lies ahead to implement assessment tools that address WAI-ARIA requirements and present appropriate evidence of their effectiveness [6]. On the other hand, this report has several limitations. First, we conducted exploratory research to develop a tool that supports the verification of WAI-ARIA requirements for a set of widgets, and we tested our tool on 15 websites. Thus, our results are not conclusive for the whole set of widgets in the WAI-ARIA authoring practices, nor for the results reported by our tool, since the set of assessed websites was small. Besides, native HTML elements, such as links and buttons, need to be checked against the WCAG guidelines [4]. Finally, the manual assessment and the comparison between the automatic and manual assessments were conducted by one of the authors of this study. To mitigate this threat, the assessment procedures were documented, and the other author verified the results.
7 Conclusions

Accessifier is a plug-in that supports the verification of accessibility requirements for web widgets considering the WAI-ARIA recommendations. The tool implements rules
to determine whether WAI-ARIA-specific attributes are included in seven distinct widgets. The plug-in identified 87% of the widgets used in 15 websites, and 75% of the errors found were verified by a manual assessment. Thus, the plug-in has the potential to support the verification of the semantic data of the WAI-ARIA recommendations for dynamic web pages. This could be a first step before a dynamic accessibility evaluation is carried out. Indeed, Accessifier could support the development process of web applications by determining whether WAI-ARIA attributes are included in web widgets. As further work, we consider implementing accessibility rules for the remaining widgets in the WAI-ARIA recommended authoring practices. In addition, it is necessary to review the relationship between the WAI-ARIA practices and the WCAG practices to unify assessment criteria and generate an assessment report. On the other hand, developing tools for assessing the dynamic behavior of web widgets remains relevant for improving the accessibility of current web pages.
References

1. World Health Organization (WHO): World Report on Disability. Summary (2011). https://www.who.int/disabilities/world_report/2011/report/en/
2. Instituto Nacional de Estadística y Geografía (INEGI): La discapacidad en México, datos al 2014 (2014). https://www.inegi.org.mx/app/biblioteca/ficha.html?upc=702825090203
3. ISO: Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. ISO/IEC 25010 (2011)
4. W3C: Web Content Accessibility Guidelines (WCAG) 2.1 (2018). https://www.w3.org/TR/WCAG21/
5. Petrie, H., Kheir, O.: The relationship between accessibility and usability of websites. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 397–406 (2007). https://doi.org/10.1145/1240624.1240688
6. Watanabe, W.M., Fortes, R.P.M., Dias, A.L.: Acceptance tests for validating ARIA requirements in widgets. Univers. Access Inf. Soc. 16, 3–27 (2017). https://doi.org/10.1007/s10209-015-0437-9
7. Frazão, T., Duarte, C.: Comparing accessibility evaluation plug-ins. In: Proceedings of the 17th International Web for All Conference (W4A 2020), pp. 1–11 (2020). https://doi.org/10.1145/3371300.3383346
8. Ismail, A., Kuppusamy, K.S.: Web accessibility investigation and identification of major issues of higher education websites with statistical measures: a case study of college websites. J. King Saud Univ. - Comput. Inf. Sci. (2019). https://doi.org/10.1016/j.jksuci.2019.03.011
9. W3C: WAI-ARIA Overview (2016). https://www.w3.org/WAI/standards-guidelines/aria/
10. W3C: WAI-ARIA Authoring Practices 1.1 (2019). https://www.w3.org/TR/wai-aria-practices/
11. Bhagat, S., Joshi, P.: Evaluation of accessibility and accessibility audit methods for e-governance portals. In: International Conference Proceedings Series Part F1481, pp. 220–226 (2019). https://doi.org/10.1145/3326365.3326394
12. Laitano, M.I.: Accesibilidad web en el espacio universitario público argentino. Rev. Española Doc. Científica 38, e079 (2015). https://doi.org/10.3989/redc.2015.1.1136
13. Snider, S., Scott, W.L., Trewin, S.: Accessibility information needs in the enterprise. ACM Trans. Access. Comput. 12 (2020). https://doi.org/10.1145/3368620
14. W3C: Web Accessibility Evaluation Tools List (2016). https://www.w3.org/WAI/ER/tools/
15. Casteleyn, S., Garrigós, I., Mazón, J.N.: Ten years of Rich Internet Applications: a systematic mapping study, and beyond. ACM Trans. Web 8, 1–46 (2014). https://doi.org/10.1145/2626369
16. Antonelli, H.L., Sensiate, L., Watanabe, W.M., De Mattos Fortes, R.P.: Challenges of automatically evaluating Rich Internet Applications accessibility. In: SIGDOC 2019 - Proceedings of the 37th ACM International Conference on the Design of Communication, pp. 1–6 (2019). https://doi.org/10.1145/3328020.3353950
17. Carvalho, L.P., Ferreira, L.P., Freire, A.P.: Accessibility evaluation of Rich Internet Applications interface components for mobile screen readers. In: Proceedings of the ACM Symposium on Applied Computing, 04–08 April, pp. 181–186 (2016). https://doi.org/10.1145/2851613.2851680
18. Abu Doush, I., Alkhateeb, F., Al Maghayreh, E., Al-Betar, M.A.: The design of RIA accessibility evaluation tool. Adv. Eng. Softw. 57, 1–7 (2013). https://doi.org/10.1016/j.advengsoft.2012.11.004
19. Tateishi, T., Miyashita, H., Naoshi, T., Saito, S., Ono, K.: DHTML accessibility checking based on static JavaScript analysis. In: Stephanidis, C. (ed.) Universal Access in Human-Computer Interaction. Applications and Services. LNCS, UAHCI 2007, vol. 4556, pp. 167–176 (2007). https://doi.org/10.1007/978-3-540-73283-9_20
20. Fernandes, N., Lopes, R., Carriço, L.: On web accessibility evaluation environments. In: W4A 2011 - International Cross-Disciplinary Conference on Web Accessibility, pp. 1–10 (2011). https://doi.org/10.1145/1969289.1969295
21. Watanabe, W.M., Dias, A.L., Fortes, R.P.D.M.: Fona: quantitative metric to measure focus navigation on rich internet applications. ACM Trans. Web 9, 1–28 (2015). https://doi.org/10.1145/2812812
22. Hanson, V.L., Richards, J.T.: Progress on website accessibility? ACM Trans. Web 7 (2013). https://doi.org/10.1145/2435215.2435217
23. Vigo, M., Brown, J., Conway, V.: Benchmarking web accessibility evaluation tools, pp. 1–10 (2013). https://doi.org/10.1145/2461121.2461124
24. Holloway, C.: Disability interaction (DIX): a manifesto. Interactions 26, 44–49 (2019). https://doi.org/10.1145/3310322
M-Learning and Student-Centered Design: A Systematic Review of the Literature

Yesenia Hernández-Velázquez (1), Carmen Mezura-Godoy (1), and Viviana Yarel Rosales-Morales (2)

(1) Faculty of Statistics and Informatics, Universidad Veracruzana, Xalapa, Mexico
[email protected], [email protected]
(2) Cátedras CONACYT - Faculty of Statistics and Informatics, Universidad Veracruzana, Xalapa, Mexico
[email protected]
Abstract. Currently, the development of M-Learning applications with a user-centered approach has become an emerging area of great importance, since these applications are expected to generate a better user experience (UX). This paper presents a systematic literature review (SLR) on the characteristics that current M-Learning applications cover to better satisfy the needs of students, as well as the models or architectures that are followed for their development. The literature review was performed on the most popular digital databases. The review process consisted of defining keywords for the search; subsequently, a selection was made based on inclusion and exclusion criteria; and finally, an analysis of the works was carried out. The review provided the main contributions to the field of study, as well as the areas of opportunity, allowing a discussion of possible future directions of research.

Keywords: M-Learning · User-centered design · Models · Architectures
1 Introduction

Mobile learning (M-Learning) is defined as the set of learning activities supported by mobile technology, which allows access to knowledge from anywhere and at any time. In this way, it allows users to receive and assimilate information and turn it into learning using any technological means [1–3]. The ubiquity of M-Learning provides a better learning experience through access to extensible learning environments, supporting the customization and adaptation of content, as well as offering the possibility of new ways of assessing learning and fostering collaboration outside the classroom [4]. To ensure a better user experience in M-Learning applications, the specialized literature proposes considering the following characteristics: portability, availability, adaptability, persistence, usability, and interactivity, among others [5–8]. These characteristics allow users to access learning content at any time and in any place. However, because the ubiquity of M-Learning allows a greater diversity of users with different
learning needs, it is important to consider different designs and interaction means for each student. Given this scenario, the user experience differs based on the context and characteristics of the users. In these circumstances, in the work proposed by Kumar [9], to improve the user experience (UX) through personalization or adaptability of the M-Learning application, it is suggested to consider the following aspects for the presentation of educational content: learning styles, usage preferences, and the different learning times of students, as well as the characteristics and limitations of the mobile devices they use. The UX seeks to maintain an optimal relationship between the user and the technological environment; from this perspective, it is necessary to design an M-Learning system with a focus on user-centered design (UCD). UCD is a philosophy to guarantee the success of a software product, which is based on user participation in all phases of product design [10]. It is supported by different disciplines such as human factors, ergonomics, human-computer interaction, user experience, usability, accessibility, information architecture, and interaction design, among others [11]. This paper aims to present a systematic review of the literature on the characteristics of educational and e-learning applications that must be considered in M-Learning to better meet the needs of students and provide a better user experience, as well as the software models or architectures that are followed in their development. There are studies that present reviews of the state of the art on M-Learning applications, such as [9, 12–16], which focus on the types of interaction with the student, usability assessments, and trends from mobile learning studies. Specifically, our work is distinguished by identifying, based on an analysis of the literature, desirable characteristics to consider when developing student-centered M-Learning applications. Therefore, this work first describes the characteristics that, according to the literature, help mobile educational applications offer a better experience of use. Subsequently, and based on these characteristics, an SLR of works proposed in the literature is presented, covering descriptions of mobile educational applications, to gather evidence from this field of study. Based on the evaluation of the evidence, conclusions are drawn regarding the defined research questions [17].
2 M-Learning and Student-Centered Design

The use of M-Learning cannot be reduced simply to the acquisition of educational content through technological resources; it requires an adequate framework that implies a conceptual-level design, with strategies, methodologies, and regulations included [5, 6]. Therefore, in this section, we present the topics and features that seek to improve the user experience in M-Learning.

M-Learning: The term M-Learning, which emerged in the 1980s from the development of wireless data networks, is considered an extension of E-Learning through mobile devices. The purpose of M-Learning is to provide ubiquity in the learning process; therefore, according to [5–8], for mobile applications to give the best results it is desirable to consider features or aspects such as accessibility, standardized content, content granularity, reuse, portability, durability or persistence, usability, self-containment, authorship, and mobile device capabilities, described in Table 1.
Table 1. Desirable M-Learning characteristics.

Characteristics | Description
Accessibility | Localizable content and efficient retrieval using standards
Standardized content | Educational content follows a homogeneous structure that supports familiarity with the content
Granularity | Educational material can be decomposed into minimum learning units for classification and assembly
Reuse | Educational content with a defined objective, capable of being used in different teaching contexts
Portability | Ability to host the application on different platforms transparently, without affecting the content and structure
Durability | A mechanism that allows incorporating new content or modifications to existing content
Usability | The use of the educational application must be effective, efficient, and satisfactory for the users
Self-contained | By itself, the educational application will meet the proposed objective, incorporating links to documents only to deepen the subject
Authorship | Incorporate the sources of educational resources to comply with copyright laws
Mobile device capabilities | Consider the capabilities of mobile devices for content delivery
User-Centered Design
• User-centered design (UCD): According to [18], UCD is an approach for the development of interactive systems aimed at making systems usable through a cyclical process. To establish UCD as the means by which both learning and the use of the application become unique for the student, the literature refers to the incorporation of two characteristics: i) configuration of the application and ii) personalization/adaptability [19, 20], which are described in Table 2.
• Educational content-oriented characteristics: In addition, to provide educational content with a structure that encourages comprehensive learning and student autonomy, [21] suggests that the elements of an M-Learning application should be: learning objective, informative content, learning activities, evaluation, feedback, collaboration between participants, and communication between participants, described in Table 3.
Table 2. Characteristics for a student-centered design.

Characteristic | Description
Application settings | Allow the student to establish their status as a student (learning style, prior knowledge, technological skills), time preferences, colors, among others
Customization/adaptability | Elements such as educational content, examples, exercises, and assessment should be focused on the needs of the student
Table 3. Characteristics oriented to educational content.

Characteristic | Description
Learning objective | The disciplinary competencies to be achieved must be described
Informative content | Corresponds to the presentation of the text, images, and videos that give students the information needed to acquire the educational competence
Learning activities | Use different activities according to each of the disciplines in which the competence will be acquired
Evaluation | Perform a test to assess the skills acquired by the student
Feedback | Allow students to receive feedback on the knowledge they have acquired and the areas to be strengthened
Collaboration between participants | Allow learning activities that encourage the participants in the teaching-learning process to collaborate
Communication with participants | Offer a mechanism that allows the different actors (teachers and students) involved in learning to share their experiences
It is worth noting the importance of incorporating an evaluation in an M-Learning application, since it is a critical and necessary component to monitor the student's academic achievements and gauge the level of understanding acquired [22]. Also, according to [23, 24], feedback is a determining factor in the acceptance and use of M-Learning applications, providing insight that helps students make adjustments to their performance and promoting self-assessment through constructive criticism.
3 Research Method

For the selection and analysis of related works, the phases and activities described in [25] were used: 1) planning the search, 2) conducting the search, and 3) presenting the review report. Fig. 1 summarizes the methodology followed to select and analyze
related works. The research was selected by applying the inclusion criteria and evaluating the quality of the papers, which allowed 26 works to be included out of a total of 63 works reviewed. Journal papers and proceedings of international conferences were included in this research.
Fig. 1. Diagram of the search methodology for related works.
Within the selection process, the first step consisted of compiling the publications found in the identified sources. The digital databases consulted were Wiley Online Library, Springer Link, ACM Digital Library, IEEE Xplore, and Science Direct, as well as works found in well-known conferences or journals in the area. The search and retrieval of related works were based on the three research questions described next.

Research Questions. The research questions addressed are:
R1: How many research works related to the development of mobile educational applications aimed at a better user experience have been proposed in the last ten years?
R2: Do the works propose architectural views of components, software architectures, or models of mobile educational applications?
R3: What characteristics proposed in the specialized literature on M-Learning are found most frequently in related works?

In this context, R1 is proposed to specify the number of works in the literature related to the development of mobile educational applications that provide elements to improve the student's experience of use. The intention of posing R2 is to determine the level of maturity of the works and the abstraction and technical details that can be found in them. Finally, R3 is established to determine which characteristics of M-Learning, of the desirable structure of educational applications, and of user-centered design are taken into account in the research proposed in the literature, as well as the way they are incorporated.
Search String. To select the search string, as seen in Fig. 1, it was considered that it should provide maximum coverage but have an adequate size to narrow the results [20]. The terms used are based on the research questions and were selected using three different domains as a starting point: i) mobile phones as the target device, including smartphones and tablets; ii) mobile learning as the specified application field; and iii) user-centered design as the subject studied, along with other similar terms. Other search strings derived from the first one were used in the different electronic databases mentioned above (an illustrative form of such a query is sketched at the end of this section).

Inclusion and Exclusion Criteria. For the selection and analysis of works, only those that met these inclusion criteria were selected: 1) the document focuses on mobile phones as the target device, including smartphones and tablets; 2) the document reports on the development of mobile educational applications; and 3) the document provides information on user-centered design. The exclusion criteria used to filter the documents from the list of related works were the following: 1) the document is not written in English; 2) the document describes only a pedagogical study of the use of mobile educational applications; and 3) the document is not published in an international journal or conference proceedings.

Quality Assessment. To measure the relevance of the papers to the study, a checklist was generated to assess the correspondence between the abstract, the content, and the results of each reported research work. The checklist was as follows: QA1: Are the research objectives specified? QA2: Has the work been cited by other authors? QA3: Does the study report credible findings with supporting data? All titles and abstracts were read, and the inclusion, exclusion, and quality evaluation criteria were verified. The papers were classified according to the type of proposal described. Finally, characteristics tables and statistics were generated.
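The paper does not reproduce the exact string, so the query below is only our reconstruction from the three domains named above, given as an illustration of its general shape:

```
("m-learning" OR "mobile learning") AND
("mobile application" OR smartphone OR tablet) AND
("user-centered design" OR "user experience" OR usability)
```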
4 Analysis of Results

The analysis of the works was carried out by answering the three proposed research questions.

R1: How many research works related to the development of mobile educational applications aimed at a better user experience have been proposed in the last ten years? In the educational field, various studies that address the use of smartphones to offer educational content can be found in the literature. However, since this is a line of work that includes both technological and pedagogical aspects, the works are oriented along these two dimensions. Based on the inclusion and exclusion criteria described in the methodology section, 63 papers on the development of mobile educational applications were selected. The number of publications found per digital library is presented in the bar graph in Fig. 2: three works from the years 2011, 2012, and 2014, respectively, were included; 7 papers from the years 2016, 2018, and 2019; and finally 2 works from 2017. As a result of this review, 26 works that incorporate some of the characteristics described as desirable in the second section of this document are highlighted. These
[Fig. 2 is a bar chart showing, for each digital library (Wiley Online Library, Springer Link, ACM Digital Library, IEEE Xplore, Science Direct, and others), the number of excluded papers and related papers.]
Fig. 2. Works reviewed by digital library.
generally related works can be classified into two groups. The first group corresponds to works that offer educational content through mobile devices; these works mainly describe the functionalities present in the applications and the software components that were used. The second group corresponds to works where models or architectures are presented, through which the aim is to identify general characteristics to be considered in mobile educational applications.

R2: Do the works propose architectural views of components, software architectures, or models of mobile educational applications? As seen in Table 4, 12 works were found that describe the development of mobile educational applications and the software components used for their implementation. On the other hand, Tables 5 and 6 cover 14 works: 10 papers propose software architectures that were evaluated through the development of a mobile educational application; one presents a conceptual model describing the elements to consider in a mobile educational application; and three models include a software architecture based on the model and the development of a mobile educational application.

R3: What characteristics proposed in the specialized literature on M-Learning are found most frequently in related works? Tables 1, 2, and 3 describe a series of characteristics that are taken as a reference to answer the third question posed. First, we describe the results obtained for the 12 works that present developments of mobile educational applications and describe an architectural view of components. Concerning the desirable characteristics of M-Learning, as seen in Table 4, more than 91% of the works allow portability; however, more than 54% of these works require an internet connection to comply with this feature. On the other hand, more than 83% of these 12 works do not require consulting additional material because the application is self-contained. Finally, 75% of the works incorporate some means of interaction that does not limit the student to only reviewing educational content through mobile devices. Regarding the application structure through which the defined learning objective is to be met, 100% of the works incorporate educational content, more than 50% include examples, and feedback is included in more than 58%. However, exercises and evaluation, the characteristics that show whether the student is acquiring knowledge, are present in only 50% and 33%, respectively. Concerning the characteristics that allow the content of the application to focus on the preferences, learning styles, learning progress, and mobile device of the student, it
Table 4. List of proposals that present mobile developments and their characteristics.

Characteristic type | Characteristic | List of related works
Learning | Educational content | [26–37]
Learning | Examples | [27, 28, 32, 35–37]
Learning | Exercises | [26, 28, 29, 34–37]
Learning | Evaluation | [26, 28, 31, 32]
Learning | Feedback | [26, 28, 30, 32–34, 36]
Learning | Collaborative activities | [26, 32]
Learning | Communication between participants | [26, 32, 36]
User-centered design | Application settings | [28, 31, 33, 36]
User-centered design | Adaptation and customization | [31, 33, 34, 36]
M-Learning | Reuse of content | [26]
M-Learning | Authorship | [36]
M-Learning | Usability | [35]
M-Learning | Interactivity | [26, 27, 29, 30, 32–35, 37]
M-Learning | Standardized content | [28, 29, 32, 35, 36]
M-Learning | Content granularity | [26, 31]
M-Learning | Portability | [26–30, 32–37]
M-Learning | Durability | –
M-Learning | Accessibility | [26–32, 35, 36]
M-Learning | Self-contained | [26–30, 32–36]
Table 5. List of works that present SW models or architectures

Model | Software architecture | Component architectural view
[19, 38–40] | [38–50] | [38–50]
Concerning the characteristics that allow the content of the application to focus on the preferences, learning styles, learning progress, and mobile device of the student, it is observed that in 33% of the works the application can be configured and presents content adaptability. On the other hand, Table 5 presents the 14 related works that describe models or software architectures: the first column shows the 4 works describing a model, the second column shows the works that incorporate a software architecture, and the last column shows which of these works also present an architectural view of components. Table 6 describes the features present in the works that propose models or software architectures. Most of these works consider the desirable characteristics of M-Learning: portability and self-containment are considered in more than 85% of the works.
Table 6. List of proposed SW models or architectures and their characteristics

Characteristic type | Characteristic | List of related works
Learning | Educational content | [19, 38–50]
 | Examples | [19, 39, 41–43, 45, 47–50]
 | Exercises | [19, 38, 39, 41–43, 45, 47–50]
 | Evaluation | [19, 39–43, 47–50]
User-centered design / M-Learning | Feedback | [43, 49, 50]
 | Collaborative activities | [48]
 | Communication between participants | [48]
 | Application settings | [19, 39–42, 44]
 | Adaptation/customization | [19, 38–44, 46, 47, 49, 50]
 | Reuse of content | [19, 39, 42, 43, 47–50]
 | Authorship | [42, 48]
 | Usability | [42, 43, 45, 47–50]
 | Interactivity | [43, 45, 48]
 | Standardized content | [19, 38, 39, 41–43, 48–50]
 | Content granularity | [19, 38, 39, 41–44, 47, 49, 50]
 | Portability | [38, 39, 41–50]
 | Durability | –
 | Accessibility | [38, 39, 42, 43, 45–50]
 | Self-contained | [38, 39, 41–50]
This shows the importance of access to educational content on different platforms and of applications that, as far as possible, do not require consulting external sources to complement the educational content. Similarly, accessibility and granularity are found in more than 71% of the works. These works describe the importance of having educational content that can be viewed anywhere and at any time, referring to the use of standards, as well as the possibility of having minimum learning units that can be organized and reused. Regarding the structure of the educational application, all the works incorporate educational content, more than 78% provide the student with exercises, more than 71% present an evaluation to gauge the degree of assimilation of the content, and 71% describe practical examples. Finally, with regard to the characteristics that allow the content of the application to focus on preferences, learning styles, learning progress, and the student's mobile device, it is observed that more than 42% of the works allow the application to be configured. On the other hand, adaptability is the feature most frequently addressed in the proposed architectures and models.

Table 7 shows the relationship between the works that include interaction and the means they use to offer this characteristic. As can be seen, the most used interactive means are offering content with augmented reality and solving practical exercises. On the other hand, collaborative means are the least used alternatives, which keeps students from sharing doubts and knowledge that could improve the learning process.

Table 7. List of interaction media and related works
Type of interactivity | Work list
Forums and collaborative activity | [26]
Virtual tours | [27, 30, 37]
Content with augmented reality | [35, 37, 43, 45]
Gamification | [29, 32]
Practical exercises | [29, 33, 34, 48]
Table 8 shows the relationship between the 26 papers included in this research and the types of adaptability. The first column lists the adaptability types, the second column the related works that describe only the components with which the applications were developed, and the third column the works that describe software models or architectures.

Table 8. Adaptability or personalization types

Adaptability or personalization types | Component architectural view | Software models or architectures
To device | – | [38, 42, 44, 46, 47, 49, 50]
To context | [34] | [40, 43, 44]
In the learning style | [31, 33, 36] | [19, 38, 41, 44, 46]
To the advancement of learning | – | [19, 39, 44]
5 Discussion

The results obtained show that the literature contains works presenting efforts in the field of developing mobile educational applications that seek to promote a better user experience for students. First, the findings in the 12 papers describing the development of M-Learning applications through an architectural view of components are described. Then, the findings of the 14 papers presenting models or software architectures are presented.

Architectural Views of M-Learning Application Components

Firstly, the analyzed papers describe mobile educational applications that mostly incorporate a learning structure with educational content, examples, and exercises; however, only some works include evaluation means, and the evaluation is static, that is, it is not adapted to the context of the student. It is also noted that only three works include some means of feedback that would allow students to be aware of their level of knowledge and progress; likewise, feedback to the user on the use of the application through contextual help on the components of the application interface is lacking. Regarding the applications that allow adapting or personalizing their content, there are different means to determine what type of content to present; however, in most of them the advancement of learning, the context, and the characteristics of mobile devices are not contemplated comprehensively, which would allow focusing the content on the needs of the student, affecting the user experience. On the other hand, regarding the incorporation of socialization and collaboration of the participants in the teaching-learning process, so far three works were found; however, these works do not take into account the context and the particular characteristics of the student, limiting the use of the application to specific scenarios and affecting the user experience. Another characteristic closely related to granting a better user experience is usability; particularly in this group of works, it is necessary to validate that this characteristic is being fulfilled through usability evaluations, as well as to include elements in the application design that increase efficiency, effectiveness, and satisfaction. Finally, in the review of the state of the art, it was identified that the development of the proposed mobile educational applications mainly focuses on presenting educational content via mobile devices without incorporating the intrinsic characteristics of mobile devices, of student-centered design, and of M-Learning that the literature describes as desirable to improve the use and acceptance of mobile educational applications.

Software Models or Architectures

In this review, it can be seen that there are proposals that consider an educational content structure made up of learning material, examples, exercises, and evaluation of what has been learned, and in some cases some feedback mechanism; however, they mainly focus on telling students where they were wrong and what the correct answer was. Another shortcoming of the evaluation is that it is not a dynamic evaluation that adapts to the context and preferences of the student. On the other hand, it may also be noticed that only one proposal was found where socialization and collaborative activities are taken into account; however, it does not consider adapting the educational content, limiting the realization of activities to specific scenarios and affecting the user experience of the student. It can also be observed that in this group of works only three proposals consider adaptability based on the advancement of learning as a fundamental element. Moreover, different works incorporate the features of mobile devices for adaptability; however, they are limited to incorporating the type of device in some cases and internet connectivity in others, leaving aside the battery level and hardware characteristics, among others.
Finally, it can be concluded that the analyzed works regarding models or architectures for the development of mobile educational applications lack the following elements: 1) a comprehensive student-centered design, 2) feedback in terms of educational content, 3) examples and exercises, 4) dynamic adaptive assessment, and 5) learning preferences of the student.
6 Conclusion and Future Work

This study carried out a systematic literature review on the development of mobile educational applications that incorporate features that can improve the user experience. A search strategy was designed and applied, and three research questions were specified and answered. After the search and analysis processes, 26 of the 63 publications were analyzed; they provided the elements needed to answer the research questions. It was also identified that, according to [8, 51, 52], the proposed works on the development of mobile educational applications only partially incorporate the factors that, from the student's perspective, increase the acceptance and use of mobile educational applications (collaboration among students, ease of use, new forms of interaction with the application, and the ability to adapt the content to the personal qualities of the student). Based on the results, research opportunities to continue exploring could be identified, such as: 1) adaptive mobile educational applications that allow configurable student models, 2) dynamic assessment methods, and 3) collaborative activities that take the context into account, among others.

Acknowledgments. This work was partially developed under the support of the National Council of Science and Technology (CONACYT) within the scope of the Cátedras CONACYT project "Infraestructura para Agilizar el Desarrollo de Sistemas Centrados en el Usuario", Ref. 3053. In addition, we thank CONACYT for the doctoral scholarship number 333330 of the first author, as well as Universidad Veracruzana for its support in the development of this research.
References

1. Santiago, R., Trabaldo, S.: Mobile Learning: Nuevas realidades en el aula. Digital-Text (2015)
2. Lee, M.J.W., Chan, A.: Pervasive, lifestyle integrated mobile learning for distance learners: an analysis and unexpected results from a podcasting study. Open Learn. J. Open Distance e-Learn. 22(3), 201–218 (2007)
3. Traxler, J.: Learning in a mobile age. Int. J. Mob. Blended Learn. (IJMBL) 1(1), 1–12 (2009)
4. Nedungadi, P., Raman, R.: A new approach to personalization: integrating e-learning and m-learning. Educ. Technol. Res. Dev. 60(4), 659–678 (2012)
5. Mehdipour, Y., Zerehkafi, H.: Mobile learning for education: benefits and challenges. Int. J. Comput. Eng. Res. 3, 93–101 (2013)
6. Ally, M.: Mobile Learning: Transforming the Delivery of Education and Training. AU Press (2009)
7. SCORM: Advanced Distributed Learning Initiative (2004). https://adlnet.gov/projects/scorm2004-4th-edition/
8. Alrasheedi, M., Capretz, L.F.: Determination of critical success factors affecting mobile learning: a meta-analysis approach. Turkish Online J. Educ. Technol. 14(2), 41–51 (2018)
9. Kumar, B.A., Goundar, M.S., Chand, S.S.: Usability guideline for mobile learning applications: an update. Educ. Inf. Technol. 24(6), 3537–3553 (2019)
10. Abras, C., Maloney-Krichmar, D., Preece, J.: User-centered design. In: Bainbridge, W. (ed.) Encyclopedia of Human-Computer Interaction, vol. 37, no. 4, pp. 445–456. Sage Publications, Thousand Oaks (2004)
11. Garreta Domingo, M., Mor Pera, E.: Diseño centrado en el usuario. UOC, Cataluña (2010)
12. Crompton, H., Burke, D., Gregory, K.H.: The use of mobile learning in PK-12 education: a systematic review. Comput. Educ. 110, 51–63 (2017)
13. Al-Emran, M., Mezhuyev, V., Kamaludin, A.: Technology acceptance model in m-learning context: a systematic review. Comput. Educ. 125, 389–412 (2018)
14. Fombona, J., Pascual-Sevillana, Á., Gonzalez-Videgaray, M.: M-learning and augmented reality: a review of the scientific literature on the WoS repository. Comunicar Med. Educ. Res. J. 25(2), 63–71 (2017)
15. Sarrab, M., Al Shibli, I., Badursha, N.: Understanding the factors driving m-learning adoption: a literature review. Int. Rev. Res. Open Distrib. Learn. 17(4), 331–349 (2016)
16. Wen-Hsiung, W., Yen-Chun, J.W., Chun-Yu, C., Hao-Yun, K., Che-Hung, L., Sih-Han, H.: Review of trends from mobile learning studies: a meta-analysis. J. Educ. Technol. Soc. 20(2), 113–126 (2017)
17. Penzenstadler, B., Bauer, V., Calero, C., Franch, X.: Sustainability in software engineering: a systematic literature review. In: 16th International Conference on Evaluation & Assessment in Software Engineering (EASE 2012) (2012)
18. Jokela, T., Iivari, N., Matero, J., Karukka, M.: The standard of user-centered design and the standard definition of usability: analyzing ISO 13407 against ISO 9241-11. In: Proceedings of the Latin American Conference on Human-Computer Interaction, pp. 53–60 (2003)
19. Peng, H., Ma, S., Spector, J.M.: Personalized adaptive learning: an emerging pedagogical approach enabled by a smart learning environment. Smart Learn. Environ. 6(1), 1–14 (2019)
20. Schardt, C., Adams, M.B., Owens, T., Keitz, S., Fontelo, P.: Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Med. Inform. Decis. Mak. 7(1), 16 (2007)
21. Krahenbuhl, K.S.: Student-centered education and constructivism: challenges, concerns, and clarity for teachers. Clearing House: J. Educ. Strateg. Issues Ideas 89(3), 97–105 (2016)
22. Goldin, I., Narciss, S., Foltz, P., Bauer, M.: New directions in formative feedback in interactive learning environments. Int. J. Artif. Intell. Educ. 27(3), 385–392 (2017). https://doi.org/10.1007/s40593-016-0135-7
23. Muñoz, J., González, C.: Evaluación en Sistemas de Aprendizaje Móvil: una revisión de la literatura. Iberian J. Inf. Syst. Technol. E22, 187–199 (2019)
24. Díaz López, M.M.: The impact of feedback and formative evaluation on biosciences teaching-learning. Revista Cubana de Educación Médica Superior 32(3), 147–156 (2018)
25. Kitchenham, B.: Procedures for Performing Systematic Reviews, vol. 33, pp. 1–26. Keele University, Keele, UK (2004)
26. Nikou, S.A., Economides, A.A.: Mobile-based micro-learning and assessment: impact on learning performance and motivation of high school students. J. Comput. Assist. Learn. 34(3), 269–278 (2018)
27. Bian, H.X.: Application of virtual reality in music teaching system. Int. J. Emerg. Technol. Learn. 11(11), 21–25 (2016)
28. de la Torre, I., de Abajo, B.S., López-Coronado, M.: M-learning App multilingüe en Android para la ayuda al estudio de asignaturas del Grado de Ingeniería en Telecomunicación. In: Conference Proceedings EDUNOVATIC 2018: 3rd Virtual International Conference on Education, Innovation and ICT (2018)
29. Marqués, Á.C., Pérez, J.J.N.: Liad@s, una App interactiva para la prevención de la violencia de género en adolescentes. In: Conference Proceedings EDUNOVATIC 2018: 3rd Virtual International Conference on Education, Innovation and ICT (2019)
30. Degli Innocenti, E., Geronazzo, M., Vescovi, D., Nordahl, R., Serafin, S., Ludovico, L.A., Avanzini, F.: Mobile virtual reality for musical genre learning in primary education. Comput. Educ. 139, 102–117 (2019)
31. Surahman, E., Alfindasari, D.: Developing adaptive mobile learning with the principle of coherence Mayer on biology subjects of high school to support the open and distance education. In: 3rd International Conference on Education and Training (ICET 2017) (2017)
32. Palomo-Duarte, M., Berns, A., Isla-Montes, J.-L., Dodero, J.-M., Kabtoul, O.: A collaborative mobile learning system to facilitate foreign language learning and assessment processes. In: Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain (2016)
33. Guo, X., Wu, M.: An empirical study of rural primary school students' English knowledge construction based on interactive mobile learning application. In: ICDEL 2018: Proceedings of the 2018 International Conference on Distance Education and Learning, New York (2018)
34. De la Peña Esteban, F.D., Lizcano Casas, D., Martínez Rey, M.A., Lara Torralbo, J.A.: Application of simulators for teaching engineering subject. In: Proceedings of the First International Conference on Data Science, E-learning and Information Systems, Madrid, Spain (2018)
35. Ramos, M.J.H., Comendador, B.E.V.: ARTitser: a mobile augmented reality in classroom interactive learning tool on biological science for junior high school students. In: Proceedings of the 2019 5th International Conference on Education and Training Technologies, Seoul, Republic of Korea (2019)
36. Adamu, M.S.: Developing a mobile learning app: a user-centric approach. In: AfriCHI 2016: Proceedings of the First African Conference on Human Computer Interaction, Nairobi, Kenya (2016)
37. Kyza, E.A., Georgiou, Y.: Digital tools for enriching informal inquiry-based mobile learning: the design of the TraceReaders location-based augmented reality learning platform. In: VRCAI 2016: Proceedings of the 3rd Asia-Europe Symposium on Simulation & Serious Gaming, Zhuhai, China (2016)
38. Abech, M., da Costa, C.A., Barbosa, J.L.V., Rigo, S.J., da Rosa Righi, R.: A model for learning objects adaptation in light of mobile and context-aware computing. Pers. Ubiquit. Comput. 20(2), 167–184 (2016)
39. Battou, A., El Mezouary, A., Cherkaoui, C., Mammass, D.: An adaptive learning system architecture based on a granular learning object framework. Int. J. Comput. Appl. 32(5), 18–27 (2011)
40. Xie, H., Zou, D., Zhang, R., Wang, M., Kwan, R.: Personalized word learning for university students: a profile-based method for e-learning systems. J. Comput. High. Educ. 31(2), 273–289 (2019)
41. Saryar, S., Kolekar, S.V., Pai, R.M., Pai, M.M.: Mobile learning recommender system based on learning styles. In: Soft Computing and Signal Processing, Singapore (2019)
42. Badidi, E.: A cloud-based framework for personalized mobile learning provisioning using learning objects metadata adaptation. In: CSEDU 2016: Proceedings of the 8th International Conference on Computer Supported Education, Portugal (2016)
43. Alvarado, L.A.R., Domínguez, E.L., Velázquez, Y.H., Isidro, S.D., Toledo, C.B.E.: Layered software architecture for the development of mobile learning objects with augmented reality. IEEE Access 6, 57897–57909 (2018)
44. Huang, H.C., Wang, T.Y., Hsieh, F.M.: Constructing an adaptive mobile learning system for the support of personalized learning and device adaptation. Procedia Soc. Behav. Sci. 64, 332–341 (2012)
45. Pagaduan, R.A., Caliwag, J.A., Castillo, R.E., Felonia, P.E., Gonzales, R.A.C.: DataMinerSchema: a mobile learning application for analytical modeling primer utilizing augmented reality technique. In: Proceedings of the 2019 2nd International Conference on Information Science and Systems (2019)
46. Bhuttoo, V., Soman, K., Sungkur, R.K.: Responsive design and content adaptation for e-learning on mobile devices. In: 2017 1st International Conference on Next Generation Computing Applications (NextComp) (2017)
47. Vásquez-Ramírez, R., Alor-Hernández, G., Rodríguez-González, A.: Athena: a hybrid management system for multi-device educational content. Comput. Appl. Eng. Educ. 22(4), 750–763 (2014)
48. Oyelere, S.S., Suhonen, J., Wajiga, G.M., Sutinen, E.: Design, development, and evaluation of a mobile learning application for computing education. Educ. Inf. Technol. 23(1), 467–495 (2017)
49. Vásquez-Ramírez, R., Bustos-Lopez, M., Montes, A.J.H., Alor-Hernández, G., Sanchez-Ramirez, C.: An open cloud-based platform for multi-device educational software generation. In: Trends and Applications in Software Engineering (2016)
50. Vásquez-Ramírez, R., Bustos-Lopez, M., Alor-Hernández, G., Sanchez-Ramírez, C., García-Alcaraz, J.L.: AthenaCloud: a cloud-based platform for multi-device educational software generation. Comput. Sci. Inf. Syst. 13(3), 957–981 (2016)
51. Hamidi, H., Chavoshi, A.: Analysis of the essential factors for the adoption of mobile learning in higher education: a case study of students of the University of Technology. Telemat. Inform. 35(4), 1053–1070 (2018)
52. Liu, Y., Li, H., Carlsson, C.: Factors driving the adoption of m-learning: an empirical study. Comput. Educ. 55(3), 1211–1219 (2010)
Implementation of Software for the Determination of Modeling Error in a Tubular Reactor

José Cortes-Barreda, Juan Hernandez-Espinosa, Galo R. Urrea-García, Guadalupe Luna-Solano, and Denis Cantu-Lozano

Tecnológico Nacional de México/Instituto Tecnológico de Orizaba, Av. Instituto Tecnológico No. 852, C.P. 94320 Orizaba, Veracruz, Mexico
[email protected]
Abstract. Results of the temperature and composition adjustment for the compensation of modeling errors in the process of obtaining maleic anhydride (C2H2(CO)2O) by means of the partial oxidation of benzene in a tubular reactor are presented in this work. For process characterization, a 1% step change in reactor jacket temperature was made after 5 s of operation, and the molar flow response of maleic anhydride and the process temperature response were obtained. The obtained results were adjusted using an uncertainty method to compensate for the temperature and composition errors.

Keywords: Reactor · Composition · Temperature
1 Introduction

Process simulation in Chemical Engineering allows understanding the physicochemical behavior of industrial equipment at different process steps; simulation represents an inexpensive tool compared to developing experimental work and provides results of considerable precision regarding process behavior. Chemical reactors are the most important stages of transformation processes, and their design and modeling represent a crucial part of the research and development of transformation processes [1]. A mathematical model must have the capacity to represent a system using a set of equations with high precision. Mathematical models are described in terms of differential equations, which are obtained from conservation laws and basic principles. Obtaining a mathematical model is the most important step in the analysis of a system. In the present work, the compensation of modeling errors for the maleic anhydride process by partial oxidation of benzene is presented. The open loop responses of temperature and composition will be adjusted using an uncertainty method to approximate the rigorous mathematical model response.
In Fig. 1 a scheme of a tubular reactor is presented. It is also known as a packed bed tubular reactor or plug flow reactor; it is composed of a cylindrical body and is commonly packed with a solid catalyst. Reagents are fed in the gas phase and react at the active sites of the catalyst as the fluid passes through the reactor bed.
Fig. 1. Tubular reactor scheme [2].
Model solution is strongly dependent on transport, thermodynamic, and kinetic parameters. As the feed stream flows continuously through the tubular reactor, the temperature is commonly affected by competing heat generation and heat removal dynamics, which makes temperature and composition depend on the axial reactor position. The mathematical model solution was carried out by means of the finite differences method, in which partial differential equations (PDE) are approximated by a finite set of ordinary differential equations (ODE) [3]. The ODEs are integrated by means of the 4th order Runge-Kutta method, obtaining the temperature and composition dynamics at different reactor points as well as the temperature and composition profiles. Alternatively, the temperature and composition dynamics can be characterized by first order plus dead time (FOPDT) models, which are simplified linear approximations of the nonlinear process behavior, valid in a close range of operating conditions. For this purpose, a 1% step change in reactor jacket temperature was made after the reactor had reached steady state operation, and the temperature and maleic anhydride molar flow responses at the reactor outlet were obtained. The uncertainty model quantifies the dispersion attributed to a measurement result; in this document, an uncertainty algorithm for modeling error estimation is presented, aimed at compensating variations between the actual and the modeled process behavior.
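For clarity, the spatial discretization referred to here is assumed to use standard central differences at each interior node i with grid spacing Δx (the paper does not state the stencils explicitly):

∂f/∂x ≈ (f(i+1) − f(i−1)) / (2Δx)
∂²f/∂x² ≈ (f(i+1) − 2f(i) + f(i−1)) / Δx²

With these substitutions, each spatial node contributes one ordinary differential equation in time, and the resulting set is integrated with the 4th order Runge-Kutta method.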
2 Model Description

Maleic anhydride production by partial oxidation of benzene is represented by a reaction mechanism consisting of three simultaneous reactions, which are assumed to be pseudo first order [4]:

C6H6 + 4O2 --k1--> C4H2O3 + CO + CO2 + 2H2O. (1)
C6H6 + 6O2 --k2--> 3CO + 3CO2 + 3H2O. (2)
C4H2O3 + 2O2 --k3--> 2CO + 2CO2 + H2O. (3)

where: C6H6 is benzene, O2 is oxygen, CO is carbon monoxide, CO2 is carbon dioxide, H2O is water, and C4H2O3 is maleic anhydride.
The reaction of interest is (1), which represents maleic anhydride production, while reactions (2) and (3) are undesired, since both lead to over-oxidation of the reactant and the product. In Eqs. (1)–(3), k1, k2, and k3 are reaction rate constants, given by the Arrhenius equation:

ki(t, x) = Ai · e^(−Ei / (R·Ts(t,x))). (4)

where: Ai is the frequency factor for reaction i, Ei is the activation energy for reaction i, R is the universal gas constant, and Ts is the solid catalyst temperature (K).

Two mass balances were considered, representing the behavior of the molar flow of benzene and of maleic anhydride in the reactor. The partial differential equations are as follows:

∂fb/∂t = Deff ∂²fb/∂x² − v ∂fb/∂x − k1 fb − k2 fb. (5)
∂fm/∂t = Deff ∂²fm/∂x² − v ∂fm/∂x + k1 fb − k3 fm. (6)

Two heat balances, describing the behavior of the fluid phase temperature, Tf, and the solid phase catalyst temperature, Ts, are presented:

∂Tf/∂t = Keff ∂²Tf/∂x² − v ∂Tf/∂x − Ufw (Tf − Tj) − Usf (Ts − Tf). (7)
∂Ts/∂t = −Usf (Ts − Tf) + cs (H1 k1 fb + H2 k2 fb + H3 k3 fm). (8)

The following boundary conditions were considered for the model solution [5]:

Deff ∂²fb/∂x² = 0, z = 0, t ≥ 0. (9)
Deff ∂²fm/∂x² = 0, z = 0, t ≥ 0. (10)
Keff ∂²Tf/∂x² = 0, z = 0, t ≥ 0. (11)
Deff ∂²fb/∂x² = 0, z = 1, t ≥ 0. (12)
Deff ∂²fm/∂x² = 0, z = 1, t ≥ 0. (13)
Keff ∂²Tf/∂x² = 0, z = 1, t ≥ 0. (14)

The model parameters presented in Tables 1 and 2 were considered for the model solution. In Fig. 2, a flow diagram is presented describing the solution sequence of the software implemented in the FORTRAN programming language for the solution of the mathematical model represented by Eqs. (4)–(14).
Table 1. Reaction rate and enthalpy parameters [4].

Reaction | Ai (s−1) | Ei (J mol−1) | Hi (J mol−1)
(1) | 86,760 | 71,711.7 | −1,490,000
(2) | 37,260 | 71,711.7 | −2,322,000
(3) | 149.4 | 36,026.3 | −832,000
Table 2. Process conditions and parameters for partial oxidation of benzene [4].

Parameter | Value (units)
Molar flow of benzene, fb | 0.009 (mol s−1)
Molar flow of maleic anhydride, fm | 0.0 (mol s−1)
Gas velocity, v | 2.48 (m s−1)
Effective mass diffusion coefficient, D | 0.00317 (m2 s−1)
Effective heat diffusion coefficient, K | 0.0317 (m2 s−1)
Effective heat transfer coefficient of fluid phase-temperature from the wall, Ufw | 26 (s−1)
Fluid temperature, Tf | 733 (K)
Solid catalyst temperature, Ts | 633 (K)
Wall temperature, Tj | 690 (K)
Effective heat transfer coefficient solid-fluid phase, Usf | 30 (s−1)
Solid-phase heat balance constant, cs | 0.729 (s K J−1)
2.1 Flow Diagram Description

Process parameters have been adopted from [4]. The sequence of steps for the programming of the reactor model is:

Step 1: The Arrhenius equation (4) is applied to obtain the reaction rate constants.
Step 2: The finite difference method is used to discretize the partial derivatives with respect to the axial position, in such a way that each PDE becomes a set of ODEs.
Step 3: The resulting ODEs are integrated by the 4th order Runge-Kutta method.
Step 4: The temporal variation of the benzene molar flow, maleic anhydride molar flow, fluid temperature, and catalyst temperature is obtained.

For this program, the Rapid Application Development (RAD) methodology was followed, which allows handling low-code rapid application development. A sketch of Steps 1–3 is shown below.
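The following fragment illustrates Steps 1–3 as a minimal method-of-lines sketch, not the authors' original program: the node count, the time step, the decision to hold the inlet node at feed conditions, the simplified zero-curvature treatment of the outlet, and all variable names are assumptions made for this sketch.

  module reactor_model
    implicit none
    integer, parameter :: n = 101                      ! axial nodes (assumed)
    double precision, parameter :: dx = 3.0d0/(n - 1)  ! grid spacing for a 3 m bed
    ! Parameters taken from Tables 1 and 2
    double precision, parameter :: deff = 0.00317d0, keff = 0.0317d0, v = 2.48d0
    double precision, parameter :: ufw = 26.0d0, usf = 30.0d0, cs = 0.729d0
    double precision, parameter :: h1 = -1.49d6, h2 = -2.322d6, h3 = -8.32d5
    double precision, parameter :: rgas = 8.314d0
    double precision :: tj = 690.0d0                   ! jacket/wall temperature (K)
  contains
    subroutine rhs(y, dydt)
      ! y packs the four fields on the grid:
      ! y(1:n) = fb, y(n+1:2n) = fm, y(2n+1:3n) = Tf, y(3n+1:4n) = Ts
      double precision, intent(in)  :: y(4*n)
      double precision, intent(out) :: dydt(4*n)
      double precision :: k1, k2, k3
      integer :: i
      dydt = 0.0d0                       ! inlet node held at feed values (assumed)
      do i = 2, n - 1
        ! Step 1: Arrhenius rate constants, Eq. (4), parameters of Table 1
        k1 = 86760.0d0*exp(-71711.7d0/(rgas*y(3*n+i)))
        k2 = 37260.0d0*exp(-71711.7d0/(rgas*y(3*n+i)))
        k3 = 149.4d0*exp(-36026.3d0/(rgas*y(3*n+i)))
        ! Step 2: central differences turn Eqs. (5)-(8) into one ODE per node
        dydt(i) = deff*(y(i+1) - 2d0*y(i) + y(i-1))/dx**2 &
                - v*(y(i+1) - y(i-1))/(2d0*dx) - (k1 + k2)*y(i)
        dydt(n+i) = deff*(y(n+i+1) - 2d0*y(n+i) + y(n+i-1))/dx**2 &
                - v*(y(n+i+1) - y(n+i-1))/(2d0*dx) + k1*y(i) - k3*y(n+i)
        dydt(2*n+i) = keff*(y(2*n+i+1) - 2d0*y(2*n+i) + y(2*n+i-1))/dx**2 &
                - v*(y(2*n+i+1) - y(2*n+i-1))/(2d0*dx) &
                - ufw*(y(2*n+i) - tj) - usf*(y(3*n+i) - y(2*n+i))
        dydt(3*n+i) = -usf*(y(3*n+i) - y(2*n+i)) &
                + cs*(h1*k1*y(i) + h2*k2*y(i) + h3*k3*y(n+i))
      end do
      ! Outlet nodes: simplified zero-curvature closure for Eqs. (12)-(14)
      dydt(n) = dydt(n-1); dydt(2*n) = dydt(2*n-1)
      dydt(3*n) = dydt(3*n-1); dydt(4*n) = dydt(4*n-1)
    end subroutine rhs

    subroutine rk4_step(y, dt)
      ! Step 3: one 4th-order Runge-Kutta step for the full ODE set
      double precision, intent(inout) :: y(4*n)
      double precision, intent(in)    :: dt
      double precision :: s1(4*n), s2(4*n), s3(4*n), s4(4*n)
      call rhs(y, s1)
      call rhs(y + 0.5d0*dt*s1, s2)
      call rhs(y + 0.5d0*dt*s2, s3)
      call rhs(y + dt*s3, s4)
      y = y + dt/6.0d0*(s1 + 2d0*s2 + 2d0*s3 + s4)
    end subroutine rk4_step
  end module reactor_model

  program open_loop
    use reactor_model
    implicit none
    double precision :: y(4*n)
    integer :: step
    y(1:n) = 0.009d0                 ! benzene feed (Table 2)
    y(n+1:2*n) = 0.0d0               ! no maleic anhydride at t = 0
    y(2*n+1:3*n) = 733.0d0           ! fluid temperature (K)
    y(3*n+1:4*n) = 633.0d0           ! catalyst temperature (K)
    do step = 1, 50000
      call rk4_step(y, 2.0d-4)       ! time step assumed for the sketch
    end do
    print *, 'outlet maleic anhydride molar flow (mol/s): ', y(2*n)
  end program open_loop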
Fig. 2. Process flow diagram: code for the open loop model solution
2.2 Response of Temperature and Composition of Maleic Anhydride

In exothermic reactions, heat is released as reagents are consumed continuously at the catalyst active sites while the fluid phase travels through the reactor bed, giving rise to a sudden temperature increase inside the reactor, as can be seen in the fluid and catalyst temperature profiles presented in Figs. 3 and 4, respectively. These profiles represent the temperature dependence on the reactor axial position for both the fluid and catalyst phases. A cooling jacket is required in this equipment in order to avoid a runaway temperature increase; in this case, the maximum temperature value (856.2 K) is reached close to the reactor inlet (0.64 m). After passing the hot spot, the temperature decreases softly as a consequence of heat removal through the cooling jacket.
Fig. 3. Fluid temperature profile (temperature (K) versus reactor length (m)).
Fig. 4. Catalyst temperature profile (temperature (K) versus reactor length (m)).
In Fig. 4 it can be seen that the catalyst undergoes noticeable heating due to the reaction of the components in the process; the response of the catalyst shows that the hot spot is located at 0.864 m with a temperature of 695.9 K. In Figs. 5 and 6, the maleic anhydride and benzene molar flow profiles are presented. They exhibit opposite behavior through the reactor bed: as the maleic anhydride molar flow increases, the benzene molar flow decreases, almost reaching depletion at the reactor exit.
Fig. 5. Maleic anhydride molar flow profile (molar flow (mol/s) versus reactor length (m)).
Fig. 6. Benzene molar flow profile (molar flow (mol/s) versus reactor length (m)).
2.3 Plant Identification

One of the most used representations of process dynamics is by means of simplified first order plus dead time (FOPDT) models. These models are commonly used because they capture the dominant process behavior in the operating range of interest and can be obtained through a simple step change test of the open loop process. For process characterization, a 1% step change in reactor jacket temperature was made once the process reached steady state operation, from which the maleic anhydride molar flow and temperature responses were obtained.
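For reference, the standard transfer-function form of such a model (a textbook definition, e.g. [6]; the paper itself works in the time domain) is

G(s) = K e^(−θs) / (τs + 1)

where K is the process gain, τ the time constant, and θ the dead time.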
Programming of Step Change Test. The jacket temperature is given a 1% increase with respect to its nominal operating value, equivalent to an increase of 6.9 K, applied at 5 s of operation. The following fragment shows the code programmed in the FORTRAN language:

  if (t .gt. 5.0d0) then
     tc = tcn + 6.9d0
  else
     tc = tcn
  end if
2.4 Molar Flow and Temperature Response

In Fig. 7, it can be seen that the positive step change in jacket temperature causes a decrease in the maleic anhydride molar flow and increases the products of the undesired reactions (2) and (3), which, in turn, consume maleic anhydride and benzene. The resulting temperature increase at location 0.05 of the dimensionless reactor length can be observed in Fig. 8 as the response to the +1% step change in jacket temperature.
Fig. 7. Response of maleic anhydride concentration (molar flow (mol/s) versus time (s)).
From Fig. 8, it can be deduced that an increase in the reactor jacket temperature will have an effect at each reactor control point, since it will increase the energy in each section of the process.

Fig. 8. Temperature response at point 0.05 (temperature (K) versus time (s)).

2.5 FOPDT Parameters

The equations for calculating the control variables are as follows [6]:

K = Δcs / 7.33. (15)
τ = (3/2)(t2 − t1). (16)
θ = t2 − τ. (17)
where: K is the process gain, τ the time constant, and θ the dead time; m is the step change magnitude of the input variable, in this case the jacket temperature; c is the total steady-state change of the process output variable after the step change of the input variable; t1 and t2 are the times at which 28.3% and 63.2% of the total steady-state change occur, respectively, for the process output variables. After the step change in jacket temperature, FOPDT model parameters were obtained for the maleic anhydride molar flow at the reactor output and for the fluid temperature at several positions inside the reactor. These parameters are presented in Table 3.

Table 3. System parameters for a 1% change in jacket temperature

Parameter | Molar flow | Temperature at position 0.05 | 0.10 | 0.20 | 0.30
K | 0.000027 (mol/sK) | 2.652649 (−) | 5.259208 (−) | 9.838871 (−) | 4.58566 (−)
τ | 1.26 (s) | 0.06291 (s) | 0.117715 (s) | 0.207 (s) | 0.098655 (s)
θ | 0.69 (s) | 0.007904 (s) | 0.017955 (s) | 0.036 (s) | 0.016965 (s)
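A minimal sketch of how Eqs. (15)–(17) can be applied to a recorded step response follows. It is illustrative only: the subroutine name, the sampling arrays, and the use of the 28.3%/63.2% two-point thresholds of [6] are assumptions of this sketch; the fixed divisor 7.33 corresponds to the step magnitude used in Eq. (15).

  subroutine fopdt_fit(nsamp, t, c, gain, tau, theta)
    ! Two-point FOPDT characterization of a recorded step response.
    implicit none
    integer, intent(in) :: nsamp
    double precision, intent(in)  :: t(nsamp), c(nsamp)  ! time and output samples
    double precision, intent(out) :: gain, tau, theta
    double precision, parameter :: dm = 7.33d0           ! step magnitude of Eq. (15)
    double precision :: dc, t1, t2
    integer :: i
    dc = c(nsamp) - c(1)              ! total steady-state change of the output
    gain = dc/dm                      ! Eq. (15)
    t1 = -1.0d0; t2 = -1.0d0
    do i = 2, nsamp
      if (t1 < 0.0d0 .and. abs(c(i) - c(1)) >= 0.283d0*abs(dc)) t1 = t(i)
      if (t2 < 0.0d0 .and. abs(c(i) - c(1)) >= 0.632d0*abs(dc)) t2 = t(i)
    end do
    tau = 1.5d0*(t2 - t1)             ! Eq. (16)
    theta = t2 - tau                  ! Eq. (17)
  end subroutine fopdt_fit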
2.6 Compensation of Process Modeling Error

Although FOPDT models can simplify the representation of process dynamics, they can also induce modeling errors, since they are only approximations of the rigorous process dynamics. In order to compensate deviations of the process model parameters, an estimation of the model uncertainty can be obtained from the actual model response. We depart from the time-domain representation of a generic FOPDT mathematical model of a chemical reactor:

dc1/dt = f1 + g1 u1. (18)
dTi/dt = f2 + g2 u2. (19)

The following equations allow obtaining an estimation of the process uncertainty; they are based on adaptive feedback control system theory [7], proposed to address process uncertainty through a modeling error compensation approach:

dη1/dt = −τe1 (η1(t) − dCm(t)ref/dt + de1/dt − f1 − g1 μ1). (20)
f1 = −(1/τ1) cm(t). (21)
f2 = −(1/τ2) Ti(t). (22)
g1 = k1/τ1. (23)
g2 = k2/τ2. (24)
2.7 Process Description for the Implementation of Software in the Determination of Modeling Error in a Tubular Reactor

The process model solution algorithm is programmed in the FORTRAN language [8]. The sequence of steps for the programming of the reactor model is:

Step 1: A step change of +1% in the temperature of the reactor jacket was applied at 5 s, obtaining the responses of the molar flow of maleic anhydride at the outlet of the reactor and of the temperature at several points distributed along the reactor.
Step 2: The FOPDT model parameters that characterize the process, i.e., gain (K), time constant (τ), and dead time (θ), were obtained from Eqs. (15), (16), and (17).
Step 3: The results obtained in Step 2 were applied for the determination of the process uncertainty through Eqs. (18)–(24). Initial values of the model uncertainty, η, were assigned for concentration and temperature. The model uncertainty derivative was integrated inside a DO loop in the software.
Step 4: The results of Step 3 were compared with the values of the open-loop system to determine the errors of the real model against the estimated model, and the results were plotted.

Fig. 9 presents a flow diagram of the algorithm programmed in the FORTRAN language for modeling error estimation; a sketch of the estimator loop follows the figure.
Fig. 9. Process flow diagram: code for modeling error compensation.
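The following fragment sketches the estimator loop behind Steps 2–4 as a reduced-order observer realization of the modeling-error idea of Eq. (20), applied to the molar-flow channel. It is illustrative and is not the authors' code: the observer time constant taue, the explicit Euler integration, the deviation-variable initial conditions, and the synthetic unmodeled term (which stands in for the rigorous model of Sect. 2) are all assumptions of this sketch.

  program mec_demo
    implicit none
    ! FOPDT parameters of the molar-flow channel (Table 3)
    double precision, parameter :: kp = 0.000027d0, tau = 1.26d0
    double precision, parameter :: taue = 0.05d0   ! observer time constant (assumed)
    double precision, parameter :: dt = 1.0d-4
    double precision :: c, w, u, f1, g1, eta_hat, eta_true
    integer :: i
    c = 0.0d0; w = 0.0d0                 ! deviation variables start at zero
    u = 6.9d0                            ! +1% jacket temperature step (K)
    g1 = kp/tau                          ! Eq. (23)
    do i = 1, 200000
      f1 = -c/tau                        ! Eq. (21) in deviation form
      eta_true = 1.0d-5*sin(2.0d-4*dble(i))   ! synthetic unmodeled dynamics
      eta_hat = (c - w)/taue             ! current modeling error estimate
      ! "plant": nominal FOPDT right-hand side of Eq. (18) plus the unknown term
      c = c + dt*(f1 + g1*u + eta_true)
      ! observer state: integrates the nominal model plus the estimate, so that
      ! d(eta_hat)/dt = (eta_true - eta_hat)/taue
      w = w + dt*(f1 + g1*u + eta_hat)
    end do
    print *, 'modeling error estimate at final time: ', eta_hat
  end program mec_demo

With this construction the estimate tracks the unknown term with a first-order lag of time constant taue, which is consistent with the compensation behavior that Eq. (20) is meant to provide.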
3 Modeling Error Results

In Fig. 10 the temperature modeling error is presented for the maleic anhydride production process. It can be seen that the temperature undergoes important variations at positions between 0.7 and 1.5 m of reactor length. These variations can be attributed to dominant nonlinear phenomena that are not included in the linear FOPDT model approximation. The model presents a constant error in the concentration measurements, caused by temperature variations in the reactor, as can be seen in Fig. 11.
Fig. 10. Modeling error in the process of obtaining maleic anhydride (temperature error versus reactor length (m)).
Fig. 11. Error in temperature measurement (error versus reactor length (m)).
4 Conclusions

Numerical model solution algorithms represent a method of great importance for estimating the viability of a process, due to their low cost with respect to collecting data on an experimental basis. The development and use of special purpose software allows obtaining and analyzing information in chemical process research and provides an excellent option for reducing operating time and experimental tests. The benzene partial oxidation solution method showed a small error in the output molar flow and in the temperature compared with the values estimated in open loop. Therefore, a small deviation of the maleic anhydride molar flow at the reactor outlet could be expected due to uncertainty or deviations in the model parameters.

Acknowledgments. The authors acknowledge Tecnológico Nacional de México for Research Project Number 7809.20-P.
References

1. Levenspiel, O.: Chemical Reaction Engineering, 3rd edn. John Wiley, New York (1999)
2. Fogler, H.: Elements of Chemical Reaction Engineering, 6th edn. Pearson Prentice Hall, Upper Saddle River (2020)
3. Chapra, S.C., Canale, R.P.: Numerical Methods for Engineers, 8th edn. McGraw-Hill, New York (2015)
4. Van den Berg, F.W.J., Hoefsloot, H.C.J., Boelens, H.F.M., Smilde, A.K.: Selection of optimal sensor position in a tubular reactor using robust degree of observability criteria. Chem. Eng. Sci. 55(2), 827–837 (2000)
5. Hernandez Espinoza, J., Urrea Garcia, G.R., Luna Solano, G.: Estructura de Control Variable en Cascada para Compensar Variaciones en Parámetros en un Reactor Tubular. Congreso Nacional de Control Automático, Monterrey, Nuevo León, Mexico, October 4–6, 2017
6. Smith, C.A., Corripio, A.B.: Principles and Practice of Automatic Process Control, 3rd edn. Wiley, New York (2005)
7. Álvarez-Ramírez, J.: Adaptive control of feedback linearizable systems: a modeling error compensation approach. Int. J. Robust Nonlinear Control 9, 361–377 (1999)
8. Chapman, S.J.: Fortran for Scientists and Engineers. McGraw-Hill, New York (2018)
Author Index

A: Aguiñaga, Gerardo Ortiz, 86; Alducin-Francisco, Luisa M., 185; Amador, César Velázquez, 86; Angeleri, Paula, 305; Aules, Hernán, 199; Avila, Johnny, 169
B: Bayona-Oré, Sussy, 269; Berumen Mora, Jorge, 291; Bonilla Carranza, David, 291; Borrego, Gilberto, 154
C: Caldera-Villalobos, Claudia, 232; Campoverde, Geovanny, 169; Cano, Patricia Ortegon, 323; Cantu-Lozano, Denis, 364; Cardona-Reyes, Héctor, 258; Carranza, David Bonilla, 280; Castillo-Barrera, Francisco-Edgar, 154; Cervantes, Salvador, 99; Cortes-Barreda, José, 364; Cortés-Palacios, Héctor Abraham, 56; Cortes-Verdin, Karen, 38; Cuevas-Vargas, Héctor, 56
D: Dávila, Abraham, 305; De Gyves Avila, Silvana, 323; De-la-Torre, Miguel, 99; Diestro Mandros, Jean, 269; Diop, Ibrahima, 131; Dominguez, Gloria Eva Zagal, 323
E: Esparza, Miguel Ortiz, 86; Espinoza-Valdez, Aurora, 280; Estrada, Salvador, 56
F: Fajardo, Maria Eugenia, 169; Fonseca, Jesús, 99
G: Galeano-Ospino, Saray, 115; Galván-Cruz, Sergio, 20; Garcia Mercado, Roberto, 269; García-Mireles, Gabriel Alberto, 337; Garza-Veloz, Idalia, 232; Gasca-Hurtado, Gloria Piedad, 115; Granger, Eric, 99; Guzmán-Mendoza, José E., 258
H: Hernandez-Espinosa, Juan, 364; Hernández-González, Lizbeth A., 185; Hernández-Velázquez, Yesenia, 349
I: Infante, Uriel, 3
J: Juárez-Martínez, Ulises, 185
L: Laporte, Claude Y., 20; Lepe, Arianne Navarro, 323; Lizarraga, Carmen, 213; Luna-Solano, Guadalupe, 364
M: Machuca-Villegas, Liliana, 115; Malo, Sadouanouan, 131, 142; Marquez-Encinas, Hector D., 154; Martínez-Fierro, Margarita, 232; Martínez-Garcia, Jose R., 154; Mejia, Ayrton Mondragon, 323; Mejía, Jezreel, 20, 71, 99, 213; Mezura-Godoy, Carmen, 349; Moncayo, Sylvia, 199; Monzón, Israel, 305; Moreno, Ismael Solis, 323; Moreno-Soto, Ivan, 337; Muñoz, Mirna, 3, 20; Muñoz-Arteaga, Jaime, 86
N: Negrete, Mario, 3, 20; Negrón, Adriana Peña Pérez, 280
O: Ocharán-Hernández, Jorge Octavio, 38; Ordoñez-Pacheco, Rodrigo, 38
P: Palacio, Ramón R., 154; Peña Pérez Negrón, Adriana, 291
Q: Quiñonez, Yadira, 71, 213
R: Reyes, Héctor Cardona, 86; Rivera, Joel, 199; Rodríguez-Maldonado, Isaac, 71; Rosales-Morales, Viviana Yarel, 349
S: Salido-Ruiz, Ricardo A., 280; Saquicela, Víctor, 169; Sié, Oumarou, 142; Solís-Galván, Jorge A., 232
T: Tapia, Freddy, 199; Terán, Diego, 199; Thiombiano, Julie, 142; Traoré, Yaya, 131, 142; Trawina, Halguieta, 131; Trujillo-Espinoza, Cristian, 258
U: Urrea-García, Galo R., 364
V: Vargas-Aguirre, Gildardo Adolfo, 56; Vázquez-Reyes, Sodel, 232; Velasco-Elizondo, Perla, 232
Z: Zatarain, Oscar, 213