Lecture Notes in Networks and Systems 784
Katerina Kabassi Phivos Mylonas Jaime Caro Editors
Novel & Intelligent Digital Systems: Proceedings of the 3rd International Conference (NiDS 2023) Volume 2
Series Editor Janusz Kacprzyk , Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).
Editors Katerina Kabassi Department of Environment Ionian University Zakynthos, Greece
Phivos Mylonas Department of Informatics and Computer Engineering University of West Attica Egaleo, Greece
Jaime Caro College of Engineering University of the Philippines Diliman, Philippines
ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-44145-5 ISBN 978-3-031-44146-2 (eBook) https://doi.org/10.1007/978-3-031-44146-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Preface
The 3rd International Conference on Novel & Intelligent Digital Systems (NiDS 2023) was held in Athens, Greece, from September 28 to 29, 2023, under the auspices of the Institute of Intelligent Systems (IIS). The conference was organized in hybrid mode, allowing participants to attend either online or onsite. The hosting institution of NiDS 2023 was the University of West Attica (Greece).

NiDS 2023 places significant importance on innovations in intelligent systems and on the collaborative research that empowers and enriches artificial intelligence (AI) in software development. It encourages high-quality research, establishing a forum for investigating the obstacles and cutting-edge breakthroughs in AI, and stimulates an exchange of ideas that strengthens and expands the network of researchers, academics, and industry representatives in this domain. NiDS is designed for experts, researchers, and scholars in artificial and computational intelligence, as well as computer science in general, offering them the opportunity to delve into relevant, interconnected, and mutually complementary fields.

Topics within the scope of the NiDS series include, but are not limited to: Adaptive Systems, Affective Computing, Augmented Reality, Big Data, Bioinformatics, Cloud Computing, Cognitive Systems, Collaborative Learning, Cybersecurity, Data Analytics, Data Mining and Knowledge Extraction, Decision-Making Systems, Deep Learning, Digital Marketing, Digital Technology, Distance Learning, E-Commerce, Educational Data Mining, E-Learning, Environmental Informatics, Expert Systems, Fuzzy Systems, Genetic Algorithm Applications, Human–Machine Interaction, Information Retrieval, Intelligent Information Systems, Intelligent Modeling, Machine Learning, Medical Informatics, Mobile Computing, Multi-Agent Systems, Natural Language Processing, Neural Networks, Pattern Recognition, Personalized Systems and Services, Pervasive Multimedia Systems, Recommender Systems, Reinforcement Learning, Semantic Web Applications, Sentiment Analysis, Serious Gaming, Smart Cities, Smart Grid, Social Media Applications, Social Network Analytics, Text Mining, Ubiquitous Computing, User Modeling, Virtual Reality, and Web Intelligence.

The call for scientific papers sought contributions presenting significant and original research findings on the use of advanced computer technologies and interdisciplinary approaches to empower, support, and improve intelligent systems. The international Program Committee consisted of leading members of the intelligent systems community, as well as highly promising younger researchers. The General Conference Chair was Mirjana Ivanovic from the University of Novi Sad (Serbia), while the Program Committee Chairs were Katerina Kabassi from the Ionian University (Greece), Phivos Mylonas from the University of West Attica (Greece), and Jaime Caro from the University of the Philippines Diliman (Philippines).

The keynote speakers of NiDS 2023 were: (a) Stefano A. Cerri, Emeritus Professor, University of Montpellier (France), with the talk "Towards foundational principles in Interactive AI: From stamp collecting to Physics", and (b) Prof. Michael L. Tee, Vice Chancellor for Planning & Development, University of the Philippines Manila, with the talk "Healthcare in the time of AI".

Each scientific paper underwent a thorough review by two to three reviewers, including a senior reviewer, using a double-blind process, reflecting our dedication to maintaining NiDS's status as a premier, selective, and high-quality conference. We believe that the selected full papers present highly significant research, while the short papers introduce intriguing and novel ideas.
In the final decisions, the reviewers' evaluations were generally followed. The management of reviews and the preparation of the proceedings were facilitated through EasyChair.
We would like to thank all those who contributed to the conference: the authors, the Program Committee members, and the Organization Committee with its chair, Kitty Panourgia, as well as the Institute of Intelligent Systems.

Katerina Kabassi
Phivos Mylonas
Jaime Caro
Committees
Conference Committee

General Conference Chair
Mirjana Ivanovic, University of Novi Sad, Serbia

Honorary Chair
Cleo Sgouropoulou, University of West Attica, Greece

Program Committee Chairs
Katerina Kabassi, Ionian University, Greece
Phivos Mylonas, University of West Attica, Greece
Jaime Caro, University of the Philippines Diliman, Philippines

Program Advising Chairs
Claude Frasson, University of Montreal, Canada
Vassilis Gerogiannis, University of Thessaly, Greece
Alaa Mohasseb, University of Portsmouth, UK

Workshop and Tutorial Chairs
Andreas Kanavos, Ionian University, Greece
Stergios Palamas, Ionian University, Greece

Poster and Demos Chairs
Nikos Antonopoulos, Ionian University, Greece
Gerasimos Vonitsanos, University of Patras, Greece

Doctoral Consortium Chairs
Karima Boussaha, University of Oum El Bouaghi, Algeria
Zakaria Laboudi, University of Oum El Bouaghi, Algeria

Organization Chair
Kitty Panourgia, Neoanalysis Ltd., Greece

Publicity Chair
Sudhanshu Joshi, Doon University, India
The Conference is held under the auspices of the Institute of Intelligent Systems.
Program Committee
Jozelle Addawe, University of the Philippines Baguio, Philippines
Shahzad Ashraf, Hohai University, China
Maumita Bhattacharya, Charles Sturt University, Australia
Siddhartha Bhattacharyya, RCC Institute of Information Technology, India
Karima Boussaha, University of Oum El Bouaghi, Algeria
Ivo Bukovsky, CTU, Czech Republic
George Caridakis, University of the Aegean, Greece
Jaime Caro, University of the Philippines Diliman, Philippines
Adriana Coroiu, Babeș-Bolyai University, Romania
Samia Drissi, University of Souk Ahras, Algeria
Eduard Edelhauser, University of Petrosani, Romania
Ligaya Leah Figueroa, University of the Philippines Diliman, Philippines
Claude Frasson, University of Montreal, Canada
Peter Hajek, University of Pardubice, Czech Republic
Richelle Ann Juayong, University of the Philippines Diliman, Philippines
Katerina Kabassi, Ionian University, Greece
Dimitrios Kalles, Hellenic Open University, Greece
Zoe Kanetaki, University of West Attica, Greece
Georgia Kapitsaki, University of Cyprus, Cyprus
Panagiotis Karkazis, University of West Attica, Greece
Efkleidis Keramopoulos, International Hellenic University, Greece
Petia Koprinkova-Hristova, Bulgarian Academy of Sciences, Bulgaria
Sofia Kouah, University of Larbi Ben M'hidi O.E.B, Algeria
Akrivi Krouska, University of West Attica, Greece
Florin Leon, Technical University of Iasi, Romania
Jasmine Malinao, UPV Tacloban College, Philippines
Andreas Marougkas, University of West Attica, Greece
Phivos Mylonas, University of West Attica, Greece
Stavros Ntalampiras, University of Milan, Italy
Christos Papakostas, University of West Attica, Greece
Kyparisia Papanikolaou, ASPETE, Greece
Nikolaos Polatidis, University of Brighton, UK
Filippo Sciarrone, Universitas Mercatorum, Italy
Cleo Sgouropoulou, University of West Attica, Greece
Geoffrey Solano, University of the Philippines Manila, Philippines
Dimitris Sotiros, WUST, Poland
Oleg Sychev, Volgograd State Technical University, Russia
Christos Troussas, University of West Attica, Greece
Aurelio Vilbar, University of the Philippines Cebu, Philippines
Panagiotis Vlamos, Ionian University, Greece
Athanasios Voulodimos, NTUA, Greece
Ioannis Voyiatzis, University of West Attica, Greece
Zakaria Laboudi, University of Oum El Bouaghi, Algeria
Contents
Smart Energy Management Systems . . . 1
Mohamed Salah Benkhalfallah, Sofia Kouah, and Meryem Ammi

The Interaction of People with Disabilities with the Intelligent Packaging . . . 9
Maria Poli, Konstantinos Malagas, Spyridon Nomikos, Apostolos Papapostolou, and Grigorios Vlassas

Home Bound: Virtual Home for Reminiscence Therapy of Dementia Patients . . . 21
Lonnie France E. Gonzales, Frances Lei R. Ramirez, Samuel Kirby H. Aguilar, Richelle Ann B. Juayong, Jaime D. L. Caro, Veeda Michelle M. Anlacan, and Roland Dominic Jamora

Empowering Responsible Digital Citizenship Through an Augmented Reality Educational Game . . . 31
Marios Iakovidis, Christos Papakostas, Christos Troussas, and Cleo Sgouropoulou

Model Decomposition of Robustness Diagram with Loop and Time Controls to Sequence Diagrams . . . 40
Kliezl P. Eclipse and Jasmine A. Malinao

Enhancing Predictive Battery Maintenance Through the Use of Explainable Boosting Machine . . . 55
Sadiqa Jafari and Yung-Cheol Byun

Revolutionizing Agricultural Education with Virtual Reality and Gamification: A Novel Approach for Enhancing Knowledge Transfer and Skill Acquisition . . . 67
Panagiotis Strousopoulos, Christos Troussas, Christos Papakostas, Akrivi Krouska, and Cleo Sgouropoulou

The Game Designer's Perspectives and the DIZU-EVG Instrument for Educational Video Games . . . 81
Yavor Dankov

Intelligent Assessment of the Acoustic Ecology of the Urban Environment . . . 91
Nikolay Rashevskiy, Danila Parygin, Konstantin Nazarov, Ivan Sinitsyn, and Vladislav Feklistov
DuckyCode: A Hybrid Platform with Graphical and Tangible User Interfaces to Program Educational Robots . . . 101
Theodosios Sapounidis, Pavlos Mantziaris, and Ioannis Kedros

Blockchain-Enhanced Labor Protection: An Innovative Complaint Platform for Transparent Workplace Compliance and Fair Competition . . . 110
John Christidis, Helen C. Leligou, and Pericles Papadopoulos

IoT-Based Intelligent Medical Decision Support System for Cardiovascular Diseases . . . 122
Nadjem Eddine Menaceur, Sofia Kouah, and Makhlouf Derdour

Embracing Blockchain Technology in Logistics . . . 127
Denis Sinkevich, Anton Anikin, and Vladislav Smirnov

Artificial Intelligence and Internet of Medical Things for Medical Decision Support Systems: Comparative Analysis . . . 134
Asma Merabet, Asma Saighi, and Zakaria Laboudi

A Conceptual Framework for a Critical Approach to the Digital World: Integrating Digital Humanities and Informal Learning into Educational Design . . . 141
Maria-Sofia Georgopoulou, Christos Troussas, and Cleo Sgouropoulou

Optimizing Image and Signal Processing Through the Application of Various Filtering Techniques: A Comparative Study . . . 151
Aliza Lyca Gonzales, Anna Liza Ramos, Jephta Michail Lacson, Kyle Spencer Go, and Regie Boy Furigay

Hybrid-Service Learning During Disasters: Coaching Teachers Develop Sustainability-Integrated Materials . . . 171
Aurelio Vilbar

Secure Genotype Imputation Using the Hidden Markov Model with Homomorphic Encryption . . . 181
Chloe S. de Leon and Richard Bryann Chua

Combining Convolutional Neural Networks and Rule-Based Approach for Detection and Classification of Tomato Plant Disease . . . 191
Erika Rhae Magabo, Anna Liza Ramos, Aaron De Leon, and Christian Arcedo
Moving in Space: Development Process Analysis on a Virtual Reality Therapy Application for Children with Cerebral Palsy . . . 205
Josiah Cyrus Boque, Marie Eliza R. Aguila, Cherica A. Tee, Jaime D. L. Caro, Bryan Andrei Galecio, Isabel Teresa Salido, Romuel Aloizeus Apuya, Michael L. Tee, Veeda Michelle M. Anlacan, and Roland Dominic G. Jamora

Design Thinking (DT) and User Experience (UX) as Springboard to Teacher-Made Mobile Applications . . . 215
Jeraline Gumalal and Aurelio Vilbar

The Designer-Oriented Process Analysis of Utilizing the DIZU-EVG Instrument for Educational Video Games . . . 221
Yavor Dankov

A State-of-the-Art Review of the Mutation Analysis Technique for Testing Multi-agent Systems . . . 230
Soufiene Boukeloul, Nour El Houda Dehimi, and Makhlouf Derdour

HomeWorks: Designing Mobile Applications to Introduce a Digital Platform for the Household Services Sector . . . 236
Kurt Ian Khalid I. Israel, Kim Bryann V. Tuico, Richelle Ann B. Juayong, Jaime D. L. Caro, and Jozelle C. Addawe

Developing and Accessing Policies in Maritime Education Using Multicriteria Analysis and Fuzzy Cognitive Map Models . . . 247
Stefanos Karnavas, Dimitrios Kardaras, Stavroula Barbounaki, Christos Troussas, Panagiota Tselenti, and Athanasios Kyriazis

Applying Machine Learning and Agent Behavior Trees to Model Social Competition . . . 256
Alexander Anokhin, Tatyana Ereshchenko, Danila Parygin, Danila Khoroshun, and Polina Kalyagina

An Integrated Platform for Educational and Research Management Using Institutional Digital Resources . . . 266
Konstantinos Chytas, Anastasios Tsolakidis, Evangelia Triperina, Nikitas N. Karanikolas, and Christos Skourlas

Classification of Alzheimer's Disease Subjects from MRI Using Deep Convolutional Neural Networks . . . 277
Orestis Papadimitriou, Athanasios Kanavos, Phivos Mylonas, and Manolis Maragoudakis
A Computing System for the Recording and Integrating Medical Data of Patients Undergoing Hemodialysis Treatment . . . 287
Dimitrios Tsakiridis and Anastasios Vasiliadis

Bidirectional Transformers as a Means of Efficient Building of Knowledge Bases: A Case Study with XLM-RoBERTa . . . 292
Alexander Katyshev, Anton Anikin, and Alexey Zubankov

UP V-Ikot: Augmented Reality Mobile Application to Assist Campus Visits and Tours Inside the UP Diliman Campus . . . 298
Amadeus Rex N. Lisondra, Ryosuke Josef S. Nakano, Miguel S. Pardiñas, Josiah Cyrus Boque, Jaime D. L. Caro, and Richelle Ann B. Juayong

Social Media to Develop Students' Creative Writing Performance . . . 308
Loubert John P. Go, Ericka Mae Encabo, Aurelio P. Vilbar, and Yolanda R. Casas

Importance and Effectiveness of Delurking . . . 315
Maria Anastasia Katikaridi

Iskowela: Designing a Student Recruitment Platform and Marketing Tool for Educational Institutions . . . 321
Jeremy King L. Tsang, Gorge Lichael Vann N. Vedasto, Jozelle C. Addawe, Richelle Ann B. Juayong, and Jaime D. L. Caro

Meta-features Based Architecture for the Automatic Selection of Prediction Models for MOOCs . . . 331
Houssam Ahmed Amin Bahi, Karima Boussaha, and Zakaria Laboudi

Mapping of Robustness Diagram with Loop and Time Controls to Petri Net with Considerations on Soundness . . . 338
Cris Niño N. Sulla and Jasmine A. Malinao

Computer-Aided Development of Adaptive Learning Games . . . 354
Alexander Khayrov, Olga Shabalina, Natalya Sadovnikova, Alexander Kataev, and Tayana Petrova

Author Index . . . 363
Smart Energy Management Systems

Mohamed Salah Benkhalfallah1,2(B), Sofia Kouah1,2, and Meryem Ammi3

1 Artificial Intelligence and Autonomous Things Laboratory, Department of Mathematics and Informatics, University of Oum El Bouaghi, Oum El Bouaghi, Algeria
{mohamedsalah.benkhalfallah,sofia.kouah}@univ-oeb.dz
2 University of Oum El Bouaghi, Oum El Bouaghi, Algeria
3 Forensic Science Department, Naïf Arab University for Security Sciences, Riyadh, Saudi Arabia
[email protected]
Abstract. The development of advanced energy management systems has become increasingly important in recent years, particularly with the rapid growth of Smart Cities. In response, attention has turned to Smart energy management systems, which are considered key to promoting economic advancement and environmental sustainability. Intelligent energy management systems aim to optimize energy consumption, reduce waste, and improve efficiency, and existing research provides valuable insights into the opportunities and challenges of developing such systems. The objective of this paper is to provide a comprehensive overview of Smart energy management systems and their benefits in addressing the increasing energy demands of urbanization while safeguarding the cost-effectiveness, reliability, and sustainability of contemporary services. The paper examines various solution aspects that support the adoption of Smart energy management systems in Smart Cities.

Keywords: Smart City · Smart Energy · Artificial Intelligence · Machine Learning · Deep Learning · Internet of Things
1 Introduction

Smart Energy is a primary goal of countries and of regional and international institutions worldwide. It is considered the key engine and active component of the growth and development of Smart Cities, given its indispensable significance across all sectors of human activity. Nevertheless, it is important to note that the major portion of the energy used today is derived from conventional sources. Consequently, nations across the globe are actively working toward a paradigm shift to Smart Energy, ensuring optimal utilization and equitable distribution of resources for both current and future generations. This transition is poised to meet increasing energy requirements while rationalizing consumption, improving energy effectiveness, generating novel employment opportunities, safeguarding the ecosystem, and promoting the ethos of sustainable development [1].

This work explores the importance of Smart energy in a Smart City context. The focus of the paper is to analyze the main features and scrutinize the challenges and opportunities associated with Smart energy management, while accentuating its significance for economic growth, environmental sustainability, and overall development [1]. Thus, the paper provides an overview of intelligent energy management. First, it presents a succinct synopsis of the field and context of the study. Second, it reviews some pertinent work, followed by a discussion of potential obstacles that may be confronted throughout the endeavor. After that, it elucidates the problem statement and outlines the general objectives, ongoing investigations, and prospective trajectories for future research. Finally, it concludes with a summary of the key findings.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 1–8, 2023. https://doi.org/10.1007/978-3-031-44146-2_1
2 Domain and Context

This work is part of the Smart City context. A Smart City, also known as a digital city or eco-city, seeks to enhance the quality of life of its citizens by mitigating poverty and unemployment, providing efficient, integrated, and transparent urban services, ensuring safety and security, protecting the environment, managing energy resources effectively, and ensuring sustainable development that meets the requirements of current and future generations. This involves a high degree of integration of physical, digital, and human systems, using new information and communication technologies and digital techniques creatively to create a Smart Economy, Smart Industry, Smart Energy, Smart Management, Smart Mobility, Smart Services, and a Smart Lifestyle [2]. Our work focuses on an increasingly important issue for Smart Cities: energy management, which according to the international standard ISO 50001:2018 is a systematic approach for saving and monitoring energy, improving its performance, increasing its resource efficiency, and significantly reducing its costs [3].
3 State of the Art

To provide a comprehensive understanding of Smart energy management, this section reviews some relevant literature.

Yujie Wang et al. [4] built a low-carbon city using a digital twin to reduce energy consumption while maintaining or increasing the current, widely understood level of economic activity. Their work is based on Digital Twin, the Internet of Things, Big Data, and Cloud computing. The authors of [5] planned, designed, and evaluated the implementation of an IoT-based Smart energy system for Smart Cities. They proposed a system that utilizes Smart green energy and enables comprehensive monitoring, secure communications, and cost savings by efficiently controlling energy consumption and forecasting demand. The study leveraged IoT technology and a deep reinforcement learning algorithm with adaptive regression, which considered the main parameters of operational logs, power wastage, power requirements, and average failure rate. In [6], W. S. Costa et al. presented a prototype of a "tomada inteligente" ("smart outlet"), called Smart-Plug, to monitor energy consumption in Smart homes simply and inexpensively. The model measures voltage and current, calculates active power and power factor, and then reports the energy consumption to the consumer using Power Line Communication (PLC) technology, complementing the consumption readings provided by Smart metering. This work aims to provide values of energy quality and energy demand with acceptable accuracy for the residential consumer. Mogadem et al. [7] investigated the concerns and comparisons surrounding the Internet of Energy (IoE) before and after participation in such services, presenting the perceptions, challenges, and threats of IoE with a focus on security, taking into account the key components (Smart Grids, the Internet of Vehicles, and the Cloud) and their relationship with IoE. Various mechanisms such as the cloud, software-defined networks, and blockchain have also been combined to address the flaws in traditional energy structures. Many attacks have been catalogued, along with mitigation tactics from academic and industry practice and damage-containment strategies, to ensure stronger security with practical solutions that protect power data and privacy rights and help design better energy security frameworks in terms of impacts and repercussions.

In the same context, other works have introduced innovative approaches to intelligent energy management systems. Foroozandeh et al. [8] proposed a single contract power optimization model for Smart buildings, utilizing intelligent energy management to optimize energy consumption. Alzoubi [9] suggested the use of machine learning to predict and control energy consumption in Smart homes. Gherairi [10] designed an intelligent energy management system for Smart homes that utilizes multi-agent systems to optimize energy consumption. Elweddad and Gunaser [11] proposed an energy management system that uses machine learning algorithms and a genetic algorithm to predict microgrid operation. Finally, Dimitroulis and Alamaniotis [12] presented a fuzzy logic energy management system for residential prosumers, optimizing on-grid electrical systems.
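To make concrete the kind of computation a device like the Smart-Plug of [6] performs, the sketch below derives RMS voltage, RMS current, active power, and power factor from synchronized waveform samples. The sample values and function names here are illustrative assumptions, not taken from the cited prototype:

```python
import math

def power_metrics(voltage_samples, current_samples):
    """Compute RMS voltage/current, active power, and power factor
    from synchronized instantaneous samples of v(t) and i(t)."""
    n = len(voltage_samples)
    v_rms = math.sqrt(sum(v * v for v in voltage_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in current_samples) / n)
    # Active power is the mean of instantaneous power v(t) * i(t).
    p_active = sum(v * i for v, i in zip(voltage_samples, current_samples)) / n
    s_apparent = v_rms * i_rms  # apparent power (VA)
    pf = p_active / s_apparent if s_apparent else 0.0
    return v_rms, i_rms, p_active, pf

# Illustrative 50 Hz sine waves sampled at 10 kHz over one full period,
# with the current lagging the voltage by 30 degrees.
N, F, FS = 200, 50.0, 10000.0
volts = [325.0 * math.sin(2 * math.pi * F * k / FS) for k in range(N)]
amps = [10.0 * math.sin(2 * math.pi * F * k / FS - math.pi / 6) for k in range(N)]
v_rms, i_rms, p, pf = power_metrics(volts, amps)
print(round(v_rms, 1), round(p, 1), round(pf, 3))
```

For a pure sinusoid the computed power factor approaches cos(30°) ≈ 0.866, which is a quick sanity check for an implementation like this.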
The reviewed works (see Table 1) have made significant contributions toward the development of intelligent energy management systems by focusing on energy optimization, efficiency, and reduction of environmental impact. The utilization of various paradigms and technologies, such as artificial intelligence, machine learning, fuzzy logic, and multi-agent systems [13, 14], has enabled accurate predictions, effective control of energy consumption, and a reduction in energy waste. These works cover diverse energy-related issues and use different models and techniques to achieve their objectives. Overall, the literature review highlights the importance of energy conservation, cost savings, and the need for secure energy systems.

Table 1. Comparison of some related works

| Authors | Contribution | Approach | Technologies / Paradigms |
| --- | --- | --- | --- |
| Yujie Wang et al. [4] | Low-carbon city development | Digital twin-based approach | Digital Twin, IoT, Big Data, Cloud computing |
| X. Zhang et al. [5] | Smart green energy; comprehensive monitoring; secure communications; cost savings; controlling energy consumption; forecasting demand | IoT-based approach | IoT, deep reinforcement learning algorithm, adaptive regression |
| W. S. Costa et al. [6] | Energy consumption feedback to residents; measurement of voltage and current; calculation of active power and power factor | Smart-Plug-based approach | Power Line Communication (PLC) |
| M. M. Mogadem et al. [7] | Address flaws in traditional energy structures; present mitigation tactics for security threats | Internet of Energy-based approach | Smart Grids, IoT, Cloud, Internet of Vehicles, blockchain, software-defined networks |
| Foroozandeh et al. [8] | Energy optimization in Smart buildings; propose an Energy Resource Management (ERM) approach | Energy Resource Management approach | Intelligent energy management with a single contract power optimization model |
| Alzoubi [9] | Energy consumption prediction and control in Smart homes | Machine learning-based approach | Machine learning |
| Gherairi [10] | Intelligent energy management system for Smart homes | Multi-agent system-based approach | Multi-agent systems |
| Elweddad and Gunaser [11] | Microgrid operation prediction and optimization | Machine learning and genetic algorithm-based approach | Machine learning, genetic algorithm |
| Dimitroulis and Alamaniotis [12] | Optimized on-grid electrical systems for residential prosumers | Fuzzy logic-based approach | Fuzzy logic |
Smart Energy Management Systems
4 Research Challenges

In this context, several challenges arise, among others: ensuring that energy data is continuously collected, monitored, analyzed, and evaluated in an interactive way to meet energy management requirements. Accurate and manageable energy monitoring is crucial, as is the flexibility of power plants in responding to changing electricity consumption patterns, particularly during peak hours. Other challenges include ensuring the reliability of the power supply, which should be continuous, uninterrupted, and adaptable to unforeseen environmental events (fighting climate change), optimizing energy costs, and the technical challenge of the obsolescence of Smart objects [15]. Further challenges are the integration of renewable energy sources into the grid, the development of effective energy storage solutions, data management, and the need for standardized communication protocols for Smart devices. Cybersecurity is also a critical issue that should be addressed, as Smart energy systems are vulnerable to cyber-attacks, and the data they generate can be sensitive and private. Overall, addressing these challenges requires a coordinated effort from researchers, officials, industry, and consumers to create a sustainable and efficient energy system.
5 Some Suggestions for Meeting or Mitigating These Challenges

The smart energy management landscape can witness significant improvements in efficiency, reliability, cost-effectiveness, and sustainability by proactively addressing the previous challenges, among them:
• implementing a robust real-time data collection system using advanced sensor technologies;
• improving power plant agility and adaptability by applying sophisticated control and optimization algorithms;
• enhancing network resilience against unexpected environmental events and reacting quickly to anomalies;
• integrating smart meters and pricing mechanisms to reduce costs;
• investing in research and development of storage technologies, such as advanced batteries;
• fostering an ecosystem of innovation and collaboration to combat smart device obsolescence;
• creating unified communication protocols for smart devices to facilitate interoperability and the seamless integration of their elastic resources; and
• developing strong encryption, authentication, and intrusion detection systems [16, 17].
6 Research Problem Statement

Smart cities have advanced significantly, and this progress has affected various sectors, including energy, where several challenges have emerged, such as climate change, unanticipated environmental constraints, capricious consumption patterns, and wanton energy dissipation. As a result, there is an urgent need to tackle these issues and find sustainable solutions for energy usage in Smart cities. The problem addressed in our PhD research mainly concerns how to combine unexpected environmental events and constraints, new information and communication technologies, modern techniques and paradigms, as well as IoT and artificial intelligence, to provide acceptable efficiency and advanced intelligent energy management while rationalizing energy consumption and reducing its costs.
M. S. Benkhalfallah et al.
7 Outline of Objectives

The ultimate goal is to propose an intelligent, cost-effective, and economically viable energy management strategy that harnesses the power of artificial intelligence models, machine learning, multi-agent systems, cutting-edge models, and an array of contemporary technologies, interlaced with the seamless integration of the Internet of Things and intelligent control mechanisms. This global vision aims to pave the way toward a sustainable future by providing precise solutions, overcoming the challenges associated with energy management, monitoring energy in real time, efficiently improving consumption, reducing costs, embracing sustainable energy practices, ensuring a reliable and flexible energy supply for individuals and societies, and promoting environmentally friendly practices [18].
8 Ongoing Works and Future Directions of the Research

To establish a consistent and methodical comparison between the presented works and other related ones, we must first choose which kind of energy to study and the most relevant criteria that impact energy consumption and management. This is the subject of ongoing work that aims to provide a rigorous review of existing works founded on AI, IoT, and the newest technologies such as Cloud, Edge, and Fog computing. Regarding energy indicators, an energy management system, as suggested by ISO 50001, defines a comprehensive suite of energy indicators within the Plan-Do-Check-Act (PDCA) cycle for energy management, including case-study Key Performance Indicators (KPIs), for instance for data centers [3]. Factors that also impact energy consumption include weather variations, time changes, economic activity, preferential tariffs as contractual incentives, eco-citizen campaigns, and changes induced by occasional events [3]. Through the judicious employment of these discerning indicators, one can glean invaluable insights into energy consumption patterns, thereby fostering a coherent framework for astute energy management practices. Furthermore, in the pursuit of advancing the frontiers of energy management, it is prudent to explore deep learning techniques and Multi-Agent Systems (MAS) as promising avenues for future research. The integration of these cutting-edge methodologies has the potential to engender a more sophisticated and refined approach to smart energy management, thereby heralding an era of heightened efficiency and optimal resource allocation.
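As a minimal illustration of how such indicators can be operationalized (the sample meter readings and function names below are invented for illustration, not taken from [3]), a data-center KPI such as Power Usage Effectiveness (PUE) and a generic energy-intensity indicator can be computed directly from metered values:

```python
# Illustrative sketch of computing energy performance indicators (EnPIs).
# PUE (Power Usage Effectiveness) is a standard data-center KPI; the readings
# below are hypothetical example values.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal value: 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def energy_intensity(total_kwh: float, output_units: float) -> float:
    """Generic EnPI: energy consumed per unit of output (e.g. kWh per server)."""
    return total_kwh / output_units

# Hypothetical monthly meter readings for a small data center
readings = {"total_facility_kwh": 120_000.0, "it_equipment_kwh": 75_000.0}
print(f"PUE: {pue(**readings):.2f}")  # 1.60
print(f"kWh per server: {energy_intensity(readings['it_equipment_kwh'], 500):.1f}")
```

Tracking such ratios over successive PDCA cycles is what turns raw meter data into the actionable indicators discussed above.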
9 Conclusion

Expectations of using Smart energy in urban environments hold great promise as a future option for sustainable energy, to meet the growing demand for urban energy and coordinate its consumption while simultaneously curbing carbon emissions. With the relentless march of technological advancements, the energy landscape is poised to undergo a transformative paradigm shift, characterized by heightened efficiency, unwavering usability, cost-effectiveness, ubiquitous accessibility, and resolute sustainability. As a consequence, the utilization of Smart energy stands poised to emerge as a beacon
of hope, enabling urban environments to achieve a harmonious equilibrium between burgeoning energy needs and ecological conscientiousness [19]. Our comprehensive review covers the cutting-edge technologies and avant-garde methodologies pertinent to the realm of Smart energy. Our analysis also provides valuable information that can be used to guide future research and development endeavors in the Smart energy field and offers a pivotal catalyst to foster the creation of a more sustainable urban environment, characterized by enhanced livability and ecological equilibrium.
References

1. Sadiq, M., Ou, J.P., Duong, K.D., Van, L., Ngo, T.Q., Bui, T.X.: The influence of economic factors on the sustainable energy consumption: evidence from China. Econ. Res. Ekonomska Istraživanja 36(1), 1751–1773 (2023)
2. Popova, Y., Zagulova, D.: UTAUT model for smart city concept implementation: use of web applications by residents for everyday operations. Informatics 9(1), 27 (2022)
3. Iturralde Carrera, L.A., Álvarez González, A.L., Rodríguez-Reséndiz, J., Álvarez-Alvarado, J.M.: Selection of the energy performance indicator for hotels based on ISO 50001: a case study. Sustainability 15(2), 1568 (2023)
4. Wang, Y., Kang, X., Chen, Z.: A survey of digital twin techniques in smart manufacturing and management of energy applications. Green Energy Intell. Transp. 1(2), 100014 (2022)
5. Zhang, X., Manogaran, G., Muthu, B.: IoT enabled integrated system for green energy into smart cities. Sustain. Energy Technol. Assess. 46, 101208 (2021)
6. da Silva Costa, W., dos Santos, W.G., de Oliveira Rocha, H.R., Segatto, M.E., Silva, J.A.: Power line communication based smartplug prototype for power consumption monitoring in smart homes. IEEE Lat. Am. Trans. 19(11), 1849–1857 (2021)
7. Mogadem, M.M., Li, Y., Meheretie, D.L.: A survey on Internet of Energy security: related fields, challenges, threats and emerging technologies. Cluster Comput., 1–37 (2021)
8. Foroozandeh, Z., Ramos, S., Soares, J., Vale, Z., Dias, M.: Single contract power optimization: a novel business model for smart buildings using intelligent energy management. Int. J. Electr. Power Energy Syst. 135, 107534 (2022)
9. Alzoubi, A.: Machine learning for intelligent energy consumption in smart homes. Int. J. Computations Inform. Manuf. (IJCIM) 2(1) (2022)
10. Gherairi, S.: Design and implementation of an intelligent energy management system for smart home utilizing a multi-agent system. Ain Shams Eng. J. 14(3), 101897 (2023)
11. Elweddad, M.A., Guneser, M.T.: Intelligent energy management and prediction of micro grid operation based on machine learning algorithms and genetic algorithm. Int. J. Renew. Energy Res. (IJRER) 12(4), 2002–2014 (2022)
12. Dimitroulis, P., Alamaniotis, M.: A fuzzy logic energy management system of on-grid electrical system for residential prosumers. Electr. Power Syst. Res. 202, 107621 (2022)
13. Sofia, K., Ilham, K.: Multi-layer agent based architecture for Internet of Things systems. J. Inf. Technol. Res. (JITR) 11(4), 32–52 (2018)
14. Kouah, S., Saïdouni, D.E., Kitouni, I.: Open fuzzy synchronized Petri net: formal specification model for multi-agent systems. Int. J. Intell. Inf. Technol. (IJIIT) 12(1), 63–94 (2016)
15. Ali, M., Prakash, K., Hossain, M.A., Pota, H.R.: Intelligent energy management: evolving developments, current challenges, and research directions for sustainable future. J. Clean. Prod. 314, 127904 (2021)
16. Ang, T.-Z., Salem, M., Kamarol, M., Das, H.S., Nazari, M.A., Prabaharan, N.: A comprehensive study of renewable energy sources: classifications, challenges and suggestions. Energ. Strat. Rev. 43, 100939 (2022)
17. Zhao, X., Ma, X., Chen, B., Shang, Y., Song, M.: Challenges toward carbon neutrality in China: strategies and countermeasures. Resour. Conserv. Recycl. 176, 105959 (2022)
18. Nutakki, M., Mandava, S.: Review on optimization techniques and role of artificial intelligence in home energy management systems. Eng. Appl. Artif. Intell. 119, 105721 (2023)
19. Marti, L., Puertas, R.: Sustainable energy development analysis: energy trilemma. Sustain. Technol. Entrepreneurship 1(1), 100007 (2022)
The Interaction of People with Disabilities with the Intelligent Packaging

Maria Poli1(B), Konstantinos Malagas2, Spyridon Nomikos3, Apostolos Papapostolou3, and Grigorios Vlassas4
1 Department of Interior Architecture, University of West Attica, Aigaleo, Greece
[email protected]
2 Laboratory of Informatics and New Technologies in Shipping, Transport, and Insular Development (LINTSTID), Department of Shipping Trade and Transport, University of the Aegean, Chios, Greece
[email protected]
3 Department of Graphic Design and Visual Communication, University of West Attica, Aigaleo, Greece
{nomic,pap}@uniwa.gr
4 Department of Tourism Management, University of West Attica, Aigaleo, Greece
[email protected]
Abstract. Technology offers significant services to people with disabilities that can improve their daily life. The current study examines the contribution of intelligent packaging to serving the specific needs of three categories of disabled persons, those with mobility, vision, and hearing problems, within and outside the home. The research has been applied in the Greek context. Focus group meetings and detailed discussions with the participants (n = 12) were applied to study their relevant perceptions. The study participants identified significant problems in their daily lives; however, they pointed out that adopting intelligent packaging offers them valuable services and contributes to their well-being. In addition, a more holistic approach should be adopted, as these people are equal members of society and require advanced services in every aspect of their daily lives (inside and outside the home), like other citizens. The primary analysis of the relevant literature and the in-depth discussion in the focus group form the basis for the generation of an initial general research framework that can be applied in further research. Keywords: Intelligent packaging · accessibility · technology · people with disabilities · conceptual framework
1 Introduction

In a society where technology is developing speedily and cities are becoming sophisticated, new advanced services meet every human need or fantasy. The numbers of elderly and disabled people are rapidly increasing, and their needs are changing, requiring more modern and personalized services. The shopping experience is important for people with disabilities (PwD), and technology offers tailor-made solutions for them. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 9–20, 2023. https://doi.org/10.1007/978-3-031-44146-2_2
Emphasizing disability as a social issue raises the question of social equality and the full participation of those people in daily life, based on the principles of 'universal design' [1]. 'Disabled persons' and 'persons with reduced mobility' means those 'who have a permanent or temporary physical, mental, intellectual or sensory impairment which, in interaction with various barriers, may hinder their full and effective accessibility…' [2]. Technology, sustainability, and social well-being are associated issues that can apply to PwD daily activities, and they are the main characteristics of the 'smart sustainable city'. The application of these issues proves that society is based on equality and mainly on the support of its citizens, offering equal opportunities and respecting all. In addition, technology offers valuable services to PwD, in particular in shopping.

The terms Smart, Intelligent, and Hybrid are used to describe the situation that exists and conceptually surrounds an object, the system around it, or the use of technology and their relationships [3]. Specifically, Smart is characterized in terms of an object's use, Intelligent in terms of its interaction, and Hybrid in terms of technology [3]. These advanced technologies are successfully implemented in packaging. According to [4], the packaging material incorporates the appropriate technology and provides information to consumers about the content, protects it from the environment, contributes to efficient handling and storage, communicates with the user through the product's design and visual configuration, and finally contributes to the product's utility [5]. One of the main relevant technologies in wide use, Radio Frequency Identification (RFID), implements radio waves to automatically identify people or objects. Finally, smart and intelligent solutions are important for the majority of PwD, improving their daily lives within and outside the home.
The current study examined this issue using a focus group in which PwD with problems in movement, sight, and hearing, and their carers/escorts, participated (n = 12). Questions and detailed discussion were applied to discover the problems and challenges that arise for PwD within and outside the home regarding the adoption of smart systems in daily life, particularly in shopping and packaging. The participants pointed out that a more holistic approach is needed, in which the adoption of smart technologies assists them within and outside the home. The study is based on the primary analysis of the relevant literature, and the detailed discussion in the focus group led to the generation of a general research framework that can be applied in other studies, so that more detailed and robust results can be collected.
2 Literature Review

In this part of the study, the disability, packaging, and RFID issues are briefly presented.

2.1 Disability

Disability is extremely diverse and a global public health issue. In addition, disability is a complex, special, and profound social issue that needs additional attention, considering that more than 1.3 billion people live with some form and degree of disability, representing at least 17% of the world's population and making up the world's largest minority group. The majority (almost 80%) of disabilities are acquired later in life, while
as consumers they command more than $13 trillion in annual disposable income [6]. In particular, 360 million people worldwide have moderate to profound hearing loss, 285 million people are visually impaired (39 million of whom are blind), and 75 million people require a wheelchair (of whom only 5–15% have access to one) [7]. According to the Department of Economic and Social Affairs of the United Nations [8], the number of people experiencing disability is increasing dramatically due to an increase in chronic health conditions, demographic trends, and an aging population. Almost everyone is likely to experience some form of disability – temporary or permanent – at some point in life. Article 25 of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) reinforces the right of PwD to achieve the highest standard of health without discrimination. However, the reality is that few countries provide adequate quality services to those people. Disability and poverty coexist and have a higher prevalence in lower-income countries [9]. The main components of disability and human functioning are impairment, activity, participation, and environment. Based on the ICF 2002 model, the term 'functionality' refers to functions, activities, and participation of the body, while the term 'disability' refers to impairments, activity limitations, and participation limitations, and it is inextricably linked to the concept of human functioning [10, 11]. According to the WHO classification (ICF 2002), the proposed model is based on the interaction of biological, psychological, and social factors, in order to reach a comprehensive understanding of the issue of disability. Thus, the organism's medical or biological 'dysfunction' has social ramifications, affecting those people's daily activities and social life. Therefore, a more social approach to disability has come to dominate in recent years, and technology can contribute to this issue.
2.2 Packaging and RFID

The shopping experience and an efficient and helpful packaging system are important for PwD. According to Yam [12] and the American Heritage Dictionary, the word 'intelligent' is defined as 'showing sound judgment and rationality'. Intelligent packaging offers the 'whole product' intelligent functions (such as detection, recording, tracking, communication, and the application of scientific logic), which facilitate buying decision-making, extend shelf-life, enhance safety, improve quality, provide information, and alert for possible problems [12]. Therefore, the application of 'intelligent packaging' offers the ability to monitor the product inside and outside of the package, communicate with customers, and warn them in time. Smart packaging offers improved functionality, allows stakeholders to track a product's inventory and journey from shipment to delivery, provides easy information on changing regulations, and monitors supply chain operations [13]. In particular, 'smart labels' are used to control the quality or other product characteristics and provide indications of suitability for consumption [14]. Although smart packaging offers high-quality services to all stakeholders (consumers, industry, wholesalers, etc.), intelligent packaging provides more advanced and useful services to all, including PwD. Identification technology such as RFID provides valuable solutions, through which access control and physical security, product tracking in supply chains, and recognition at points of sale are achieved. RFID systems are a subset of automatic identification systems and use radio waves to automatically identify people or objects [15].
Thus, RFID is a tag- or reader-based automatic identification system used to identify items and accumulate data without human intervention. RFID tags carry an identification number stored in a database, and the system can act upon retrieving the information associated with that number from the database. RFID offers a way to connect offline objects to the internet, linking the physical object and its physical location to the object's digital 'monitor', which can include extensive information about its characteristics and life-cycle history, or even the methods by which it may be called upon to interact with the Internet [16]. This digitization can take the form of database registry files and processes, software programs, autonomous agents, or other forms of digital information and/or algorithms [17]. RFID technology offers many applications [18]; RFID tags are categorized into active and passive and are widely used as intelligent packaging tags.
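The tag-to-database lookup described above can be sketched as follows. All tag IDs, product records, and function names here are hypothetical, and a real deployment would use RFID reader middleware and a persistent database rather than an in-memory dictionary:

```python
# Minimal sketch of RFID-based automatic identification: a reader reports a
# tag ID, which is resolved against a product registry and its sighting is
# logged, without human intervention. All identifiers are invented examples.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ProductRecord:
    name: str
    category: str
    history: list = field(default_factory=list)  # (timestamp, location) events

# In-memory stand-in for the tag database
registry = {
    "E200-3412-0001": ProductRecord("Milk 1L", "dairy"),
    "E200-3412-0002": ProductRecord("Aspirin 500mg", "pharmaceutical"),
}

def on_tag_read(tag_id: str, location: str) -> Optional[ProductRecord]:
    """Handle a reader event: resolve the tag and log the sighting."""
    record = registry.get(tag_id)
    if record is not None:
        record.history.append((datetime.now().isoformat(), location))
    return record

rec = on_tag_read("E200-3412-0002", "pharmacy shelf 4")
if rec:
    print(f"{rec.name} ({rec.category}), {len(rec.history)} sighting(s)")
```

The accumulated `history` list is what gives intelligent packaging its traceability: each read adds a time-stamped location event to the object's digital record.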
3 Methodology

The study is part of a longer research project and used qualitative research to acquire all the information required to meet the study's objectives. Qualitative research aims to collect and analyze a variety of non-numerical data to better understand the participants' concepts, opinions, or experiences [19]. In particular, a focus group was organized with the aim of gathering information on issues related to the adoption of new technologies, in particular intelligent packaging and the relevant services offered within and outside the home. Focus groups constitute a non-standard technique of information gathering, based on apparently informal discussion and interaction among a group of people, where the roles of the moderator and observer are important in leading the discussion according to 'the cognitive purposes outlined', observing non-verbal behaviors, and collecting non-verbal information [20]. The focus group technique provides sufficiently detailed information collection in a short amount of time and at a low cost [21], and it is easy to organize [22]. In this specific case, the methodology was to gather and discuss good practices that participants may be familiar with. Using them, selected participants are encouraged with open-ended questions in a discussion-type atmosphere in order to create a comparative analysis of the research objectives. In addition, trust building is fundamental in focus group meetings and impacts the collection of the required information and the development of effective generative conversations, and this was applied in the organization of this focus group. Trottier [23] pointed out that 'generative dialogue' contributes to real change; connects people; allows individuals, groups, and organizations to discuss various issues in depth and become real people; creates new relations; and manifests a shift in emotional and mental frameworks. These targets were achieved in the current focus group.
Specifically, in this research the following principles were followed:
• a carefully planned discussion;
• emphasis on gaining insights into the defined area of interest;
• structuring based on an open-ended course of questions, designed to elicit ideas and opinions focused on the study's objectives; and
• a tolerant, non-threatening environment.
The discussion lasted 90 min, and the moderator (the first author of the current study) and observer (assistant to the moderator) managed the meeting. The research was carried out in an area of the University of West Attica (Library) (Athens, Greece), with a circular layout, as is usually used in focus groups [24]. The participants belonged to the three (3) main groups of disability (mobility, blind, and deaf), while their carers/escorts (professionals or relatives) were also engaged. In particular, three (3) PwD from each group and three (3) escorts participated, in total 12 people (9 PwD and 3 escorts) (n = 12). The highest possible representativeness of the participants was sought, based on criteria of age, geographical spread, gender, educational background, and profession. Also, all the participants had medium to high knowledge of digital technologies. The structure of the discussion and the questions in the focus group were managed as follows:
• The focus group moderator made a short presentation of the relevant literature.
• Three (3) levels of questions were used. Specifically:
• Integration questions, used to introduce the participants so that they felt comfortable with each other and integrated into the discussion centered on the topic of interest (theoretical framework).
• Probing questions, focusing and presenting arguments on new technologies, packaging, and technology-enabled markets, as well as the possibilities provided for improving quality of life, sustainability, autonomy, and accessibility. A collection of opinions, and also a description of everyday life within and outside the home, was achieved.
• Exit question, confirming that nothing was lost in the discussion and that everyone had the opportunity and time to contribute their views.
In the end, the moderator and the carers/escorts discussed some general issues relevant to the study's objectives.
In addition, the moderator and the observer had the following characteristics:
• Both were quite familiar with the questions asked and the subject of discussion.
• They introduced and guided the discussion while appreciating that all group participants have something to offer regardless of their education, experience, or background.
• They kept their personal opinions and ideas out of the session.
• They recorded any dynamic interaction and non-verbal communication, ensuring that each participant had a chance to express his/her opinion.
• Finally, the discussion was recorded, and a content analysis followed.
Regarding the content analysis, the coding of the collected information concerns phrases, sentences, and keywords that reflect the study's objectives. From a methodological point of view, the most important element in content analysis is the attempt to systematically represent the factors that lead to a specific behavior, as they are presented and perceived by the participants themselves. This analysis includes (a) the coding of cases until the point where new variations are no longer found, (b) the categorization and linking of the categories, and (c) the continuous search for similarities and differences between the cases and circumstances that arise, in order to ensure that the complexity and variety of the data have been fully researched [25–27]. The following Fig. 1 highlights the main stages of the current study, whose main target is to present a general research framework that can be applied in relevant studies.
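A first-pass, keyword-based version of the coding and categorization steps (a)–(b) can be sketched as below. The codebook and sample utterances are invented for illustration; in the actual study, coding was of course performed manually by the researchers:

```python
# Illustrative sketch of the coding stage of content analysis: assigning
# transcript fragments to categories via a keyword codebook, then counting
# category frequencies. The codebook and utterances are hypothetical.

from collections import Counter

codebook = {
    "accessibility": ["ramp", "elevator", "entrance", "sidewalk"],
    "technology": ["rfid", "smart", "sensor", "screen"],
    "safety": ["safety", "security", "danger"],
}

def code_fragment(text: str) -> set:
    """Return every category whose keywords appear in the fragment."""
    lowered = text.lower()
    return {cat for cat, words in codebook.items()
            if any(w in lowered for w in words)}

utterances = [
    "The ramp at the supermarket entrance is always occupied.",
    "Smart sticks with sensors solve orientation and security issues.",
    "Narrow elevators are a daily problem.",
]

counts = Counter(cat for u in utterances for cat in code_fragment(u))
print(counts.most_common())
```

Such a sketch only automates the mechanical tallying; the qualitative work of refining categories until no new variations appear (step a) remains a human judgment.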
Fig. 1. The current study’s outlook
The detailed discussion with the participants and their carers/escorts generated interesting results.
4 The Outcomes from the Implementation of the Discussion in the Focus Group

Table 1 below shows that there are significant problems inside and outside the home for the particular disabled persons. Also, Fig. 2 below presents the interaction of these people with the inside (home) and outside environment. The main outcome of the in-depth discussion with the focus group participants was the important role of technology, which significantly improves their daily life inside and outside the home. In particular, inside their houses, all the participants identified significant accessibility problems. In addition, all agreed that the operation of smart appliances, such as smart refrigerators, smart ovens, and smart air conditioners, is extremely helpful. Outside their houses, there are also accessibility problems in supermarkets and their surrounding areas, mainly concerning parking areas, the lack of long pedestrian routes, the absence of assistance, and the information provided. Also, technology can offer significant assistance to PwD, particularly smart and intelligent packaging incorporating RFID technologies, which facilitates their daily life. Fully accessible services are required for entertainment areas. Finally, the majority of the participants pointed out that the operation of smart appliances in a home is associated with smart and intelligent packaging, protection of the environment, and sustainability, and all these operate well in the smart city environment. Figure 3 below shows that technology, and specifically smart and intelligent packaging, offers valuable services to disabled people (those with mobility, hearing, vision, and mental/spiritual problems). Autonomy, accessibility, and technology adoption, in particular smart packaging, are important for PwD and lead to a higher quality of life.
Table 1. The main findings of the questions and discussion in the focus group

People with Mobility Problems
• Inside House: small spaces; small doors; furniture obstructing movement; narrow elevators; smart appliances such as smart refrigerators and smart ovens are useful.
• Outside House: damaged sidewalk plates; lack of ramps at entrances to supermarkets, and parking areas that are not very accessible; occupied ramp entrances and parking areas for disabled people; small pedestrian routes; inaccessible entrances to residences, shopping, and entertainment places.
• Perceptions about Technology: technology plays a catalytic role in access; RFID applications are not well known, but their results are very supportive of daily life; there was an update on packaging applications related to pharmaceutical products and e-passes for tolls.

People with Vision Problems
• Inside House: small spaces; small doors; furniture obstructing movement; narrow elevators.
• Outside House: damaged sidewalk plates; lack of guidance in specific routes; routes not planned appropriately; short pedestrian routes; construction works without the appropriate security.
• Perceptions about Technology: technology has played a catalytic role in access; RFID applications in outdoor signage, sensors, and reading smart sticks solve the issues of security and orientation; inside the houses, safety is a very important issue: the space is familiar, but the forgotten kitchen operation is important for them, and a better design is required.

People with Hearing Problems
• Inside House: transfer with lights.
• Outside House: lack of written information in many places; lack of written information for safety reasons on screens and traffic lights; short pedestrian routes.
• Perceptions about Technology: technology has played a catalytic role in access; their main concern in outdoor spaces and shops is the information that is not presented on screens and signage; as far as the interior is concerned, switching sound to pulsating light on devices provides the solution, and all the technological applications help a lot.
Fig. 2. The interaction of disabled people with the inside (home) and outside (supermarket, pharmacy, entertainment, etc.) environment (this part of a rich picture).
Fig. 3. The impact of technology on disabled person’s daily life on an inside and outside environment – an infographic edition of a conceptual framework relation (Poli, 2021).
5 Discussion and Conclusion

Technology serves PwD needs by offering higher-quality services inside (home) and outside the home environment, such as shopping and socializing. In the home environment, smart appliances and a fully accessible environment help PwD access and use products more easily and make their life better. In the outside environment, smart technologies and products that incorporate intelligent packaging are highly useful. This form of packaging contributes to the traceability of products and the control of their transportation, quality, and safety [28, 29], services that are helpful for all. Thus, intelligent packaging, shopping and entertainment areas, and their surrounding environments need to use advanced services that respond to PwD needs. Finally, such fully accessible and valuable services contribute significantly to these people's well-being (see Fig. 4 below). The above-mentioned issues were pointed out by the participants of the current study. In particular, PwD and their carers/escorts (with significant problems in mobility, vision,
Fig. 4. The impact of smart packaging on people with disabilities (PwD)
and hearing) participated in a well-organized focus group and, after a detailed and in-depth discussion, all highlighted the important role of intelligent packaging and new technologies, as these greatly improve daily life. The participants identified four main factors that intelligent packaging should offer: communication, usability, protection-safety, and content improvement. Also, supportive facilities based on advanced technologies, such as parking in shopping areas, pedestrian routes, transportation and entertainment services, and the provision of helpful information, are all essential services for PwD. Technology applied in the home environment, in objects and shopping, and in entertainment areas and their surroundings is interconnected and provides highly useful services and information to consumers and citizens, while PwD need these services even more. Consequently, a more system-wide approach with the contribution of many stakeholders (governments, research communities, etc.) should be applied [30, 31] in order to establish smart cities, smart regulations, etc. that emphasize the real needs of PwD. Technology has the potential to reduce people's disability-related weaknesses, on the basic conditions that there is training in its correct use and that these persons have a positive attitude towards adopting these technologies. Designers and developers of new technologies, researchers, and policy-makers could benefit from the current study, gaining useful insights into the use of new technologies in shopping by PwD.
6 Limitations of the Study and Suggestions for Future Research

The main limitation of the current study is that its participants were positive about the adoption of new technologies and were selected for this reason to satisfy the study's objectives; however, less technology-oriented PwD also exist, and their perceptions should be taken into account in future studies. In addition, the implementation of quantitative studies with the participation of a large number of PwD is strongly recommended. Finally, the participation of people with other kinds of disabilities, such as intellectual disabilities, in future studies would also be useful.
References

1. Mangou, E.: Application of geographic information systems in the field of local government, with an emphasis on the accessibility of the disabled. Auth. Dept. Civ. Eng. (2015)
2. Regulation (EC) No. 1300/2014 of 18 November 2014 of the European Parliament and of the Council on the technical specifications for interoperability relating to the accessibility of the Union's rail system for persons with disabilities and persons with reduced mobility. https://eur-lex.europa.eu/eli/reg/2014/1300/oj. Accessed 27 Feb 2023
3. Nomikos, S., Renieri, D., Kalaitzi, S., Vlachos, G., Darzentas, I.: Smart packaging: innovations and culture shaping. University of the Aegean, Department of Product and Systems Design Engineering (2005)
4. Nomikos, S.: Application of a model for evaluation of prints in publishing procedures in Greece. Doctoral thesis, Department of Product and Systems Design Engineering, University of the Aegean (2007)
5. Abbot, D.A.: Packaging Perspectives. Kendall Hunt, U.S.A. (1989). ISBN-10: 0840352735
6. World Economic Forum: This smartphone app can help blind people navigate more trains and buses. Here's how (2022). https://www.weforum.org/agenda/2022/06/app-to-help-blind-people-navigate-public-transit-to-debut-in-washington?utm_source=linkedin&utm_medium=social_video&utm_term=1_1&utm_content=26331_app_transport_blind_people&utm_campaign=social_video_2022
7. Debating Europe: Assistive technology. How will new technology improve accessibility for people with disabilities? (2016). https://www.debatingeurope.eu/2016/02/18/will-new-technology-improve-accessibility-people-disabilities/#.YktsgCi_xPZ
8. Silberner, J.: Nearly 1 in 7 people on Earth is disabled, survey finds. NPR (2011). https://www.npr.org/sections/health-shots/2011/06/09/137084239/nearly-1-in-7-people-on-earth-are-disabled-survey-finds
9. WHO: 10 facts on disability (2020). https://www.who.int/news-room/facts-in-pictures/detail/disabilities
10. WHO: Disability and health (2021). https://www.who.int/news-room/fact-sheets/detail/disability-and-health
11. United Nations Department of Economic and Social Affairs, Disability: 15th Conference of States Parties to the Convention on the Rights of Persons with Disabilities (COSP15), 14–16 June 2022. https://www.un.org/development/desa/disabilities/
12. Poli, M.: Ambient intelligence and smart environments: a preliminary overview. In: NiDS 2021 Proceedings, published in the Frontiers in Artificial Intelligence and Applications (FAIA) book series of IOS Press as an open access (OA) volume, pp. 46–52 (2021). https://doi.org/10.3233/FAIA210074
13. Poli, M.: Smart technologies and the case of people with disabilities: a preliminary overview. In: NiDS 2021 Proceedings, published in the Frontiers in Artificial Intelligence and Applications (FAIA) book series of IOS Press as an open access (OA) volume, pp. 217–222 (2021). https://doi.org/10.3233/FAIA210096
14. Poli, M., Malagas, K., Nomikos, S., Papapostolou, A.: The relationship between disability, technology, and sustainable development: the Greek reality (2022). ISSN: 2654-0460, ISBN: 978-618-84403-6-4
15. Apostolopoulos, P., Tsiropoulou, E., Papavasiliou, S.: Cognitive data offloading in mobile edge computing for Internet of Things. IEEE Access 8, 55736–55749 (2018)
16. Poli, M., Malagas, K.: The relationship of disability, new technologies, and smart packaging: the Greek experience. In: Krouska, A., Troussas, C., Caro, J. (eds.) Novel & Intelligent Digital Systems: Proceedings of the 2nd International Conference (NiDS 2022). Lecture Notes in Networks and Systems, vol. 556, pp. 276–289. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-17601-2_27
17. Rashid, Z., Sequi, J., Pous, R., Peig, E.: Using augmented reality and Internet of Things to improve accessibility of people with motor disabilities in the context of smart cities. Futur. Gener. Comput. Syst. 76, 248–261 (2017)
18. Tselepi, M.: Applications and RFID technology. School of Management and Economics, Department of Business Administration, TEI Kavala (2013)
19. Haven, T.L., Van Grootel, D.L.: Preregistering qualitative research. Account. Res. 26(3), 229–244 (2019)
20. Acocella, I.: The focus groups in social research: advantages and disadvantages. Qual. Quant. 46, 1125–1136 (2012)
21. Bertrand, J.T., Brown, J.E., Ward, M.V.: Techniques for analyzing focus group data. Eval. Rev. 16(2), 198–209 (1992)
22. Stokes, D., Bergin, R.: Methodology or "methodolatry"? An evaluation of focus groups and depth interviews. Qual. Market Res. Int. J. 9(1), 26–37 (2006)
23. Trottier, P.: Generative Dialogue and Emergent Change. The Institute for Emergent Organizational Development and Emergent Change® (2012). https://emergentchange.net/2012/05/20/httpwww-trot/. Accessed 23 Jan 2023
24. Cohen, L., Manion, L., Morrison, K.: Research Methods in Education. Routledge Falmer, Taylor & Francis Group, London (2008)
25. Basch, C.E.: Focus group interview: an under-utilised research technique for improving theory and practice in health education. Health Educ. Q. 14(4), 411–448 (1987)
26. Bellenger, D.N., Bernhardt, K.L., Goldstucker, J.L.: Qualitative research techniques: focus group interviews. In: Bellenger, D.N., Bernhardt, K.L., Goldstucker, J.L. (eds.) Qualitative Research in Marketing. American Marketing Association, Chicago (1979); also in Higginbotham, J.B., Cox, K.K. (eds.) Focus Group Interviews: A Reader, pp. 13–34. American Marketing Association, Chicago (1979)
27. Poulopoulos, H., Tsibouklis, A.: Focus group interview: a new methodological research tool in the field of social sciences. Soc. Work 39, 160–163 (1995)
28. Balbinot-Alfaro, E., Craveiro, D.V., Lima, K.O., Costa, H.L.G., Lopes, D.R., Prentice, C.: Intelligent packaging with pH indicator potential. Food Eng. Rev. 11(4), 235–244 (2019)
29. Mirza Alizadeh, A., Masoomian, M., Shakooie, M., Zabihzadeh Khajavi, M., Farhoodi, M.: Trends and applications of intelligent packaging in dairy products: a review. Crit. Rev. Food Sci. Nutr. 62(2), 383–397 (2022)
30. Checkland, P.: Soft systems methodology: a thirty year retrospective. Syst. Res. Behav. Sci. 17, 11–58 (2000)
31. Checkland, P., Tsouvalis, C.: Reflecting on SSM: the link between root definitions and conceptual models. Syst. Res. Behav. Sci. 14(3), 153–168 (1997)
Home Bound: Virtual Home for Reminiscence Therapy of Dementia Patients Lonnie France E. Gonzales1 , Frances Lei R. Ramirez1(B) , Samuel Kirby H. Aguilar1 , Richelle Ann B. Juayong1 , Jaime D. L. Caro1 , Veeda Michelle M. Anlacan2 , and Roland Dominic Jamora2 1 Service Science and Software Engineering Laboratory, University of the Philippines Diliman,
Quezon City, Philippines [email protected] 2 College of Medicine, University of the Philippines Manila, Metro Manila, Philippines
Abstract. In the Philippines, treatments for dementia are mostly limited to drug medication and home care. While there are existing local non-pharmacological advances to supplement dementia therapy through the use of VR software, these have only been partially implemented. In particular, usability testing has yet to be conducted or the reminiscence bump phenomenon has not been fully incorporated in these existing studies. This paper presents a design that addresses these gaps by furthering the development of a virtual home that incorporates reminiscence therapy and the reminiscence bump. This is meant to address the symptoms of dementia and create real experiences for patients with dementia, as well as contribute towards developing a tool that can be used in actual therapy sessions. The principles of usability, personalization, and portability were considered in developing the design, and design strategies that meet these considerations were also identified. Keywords: dementia · reminiscence therapy · virtual reality · virtual home
1 Introduction

Dementia is an umbrella term for conditions and symptoms involving cognitive decline that is severe enough to affect a person's day-to-day activities [1]. It impairs an individual's ability to think, remember, and make decisions. Around 55 million people worldwide are estimated to have dementia [2]. In the Philippines, there is a high dementia prevalence of 10.6% among Filipino older adults, which is higher than the estimated prevalence of 7.6% for Southeast Asia [3]. From this rate, dementia cases in the country are projected to be at 1,474,588 by the year 2030 [4]. The behavioral and psychological symptoms of dementia (BPSD) are specific disturbances that affect people with dementia, such as apathy, depression, aggression, anxiety, irritability, disinhibition, and hallucinations [5]. The occurrence of BPSD could be associated with distress in patients and caregivers, long-term hospitalization, misuse of medication, and increased healthcare costs [5]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 21–30, 2023. https://doi.org/10.1007/978-3-031-44146-2_3
L. F. E. Gonzales et al.
In general, there is no known cure for dementia, and most focus is put on the treatment and alleviation of symptoms. Pharmacological treatment is done via medicine prescribed by a doctor to target specific symptoms. In the Philippines, treatments for dementia are mostly limited to drug medication and home care [9]. Patients may also be hesitant about using drugs as medication because of cost, too many medications, fear, and mistrust [11]. On the other hand, non-pharmacological treatments are done without medicine. These interventions are considered strategies to address BPSD, stimulate cognitive function, and improve the quality of life of patients [7]. Treatments of this type include music therapy, art therapy, and physical exercises [6]. One particular non-pharmacological intervention is reminiscence therapy, which involves using all the senses to help patients remember events, people, and places from the past [7]. This is usually done through a discussion of memories, using tangible objects such as photographs and music [7]. Reminiscence therapy targets the reminiscence bump phenomenon, which suggests that older people best remember memories from their youth and early adulthood, specifically from when they were aged 10–30 years old [8]. Recent research efforts have considered the use of virtual reality (VR) in the conduct of reminiscence therapy sessions. In the Philippines, a VR application for dementia therapy was designed which uses collective memory, or the memory shared by a group of people, as a personalization scheme [9]. It featured a virtual house with elements based on the Filipino home environment of the 1960s–1980s. Another local study focused on the management of BPSD by having patients visit a virtual environment that features familiar places in the Philippines, such as Rizal Park, Palawan, and a church [10]. Patients could perform painting, puzzle, and music activities while being guided by a virtual companion.
The application was implemented on the Oculus Quest 2, maximizing the device's hand tracking and gesture detection capabilities. While these local non-pharmacological advances exist to supplement dementia therapy through the use of VR software [9, 10], they have only been partially implemented and have yet to undergo usability testing [9] or to fully incorporate the reminiscence bump in VR-based therapies [10]. This paper extends a previous approach to treating dementia with VR software that makes use of personalized reminiscence in a virtual home environment [9]. The proposed design consists of user activities grounded in reminiscence therapy sessions, as well as caregiver features and a hardware setup based on the findings of a previous VR application [10], guided by design strategies that make the application suitable and usable for elderly dementia patients.

1.1 Objectives

To address the gaps in existing studies, this research aims to develop a virtual home that incorporates reminiscence therapy and the reminiscence bump to address BPSD and create real experiences for patients with dementia.
Home Bound: Virtual Home for Reminiscence Therapy
In order to achieve this general objective, the following specific objectives are identified:

1. List design strategies based on the following considerations, which are further discussed in Sect. 3.1:
a. Usability ensures that the target users, elderly dementia patients, can use the application despite decreased motor and cognitive abilities.
b. Personalization helps users approach the virtual environment, as it has familiar and recognizable elements.
c. Portability allows the software to be implemented on other platforms for better accessibility and immersiveness.
2. Develop the virtual home with the following features, guided by activities found in traditional reminiscence therapy. The first four features are considered in [9], while the last feature is considered in [10]:
a. Personal memory elements. Personal photos of the user can be uploaded to the application. These will show up in the virtual home through a photo album and are meant to trigger reminiscence of memories and personalize the user's experience.
b. Interactive activities. Household routines, such as watering the plants and setting the table, are tasks that patients with dementia may have difficulty doing in real life. These will be included in the virtual home as simplified activities that they can perform.
c. Relaxing activities. To induce relaxation in patients with dementia, activities such as watching TV and listening to the radio are included in the virtual home. TV programs, commercials, and songs from a past time period will be used to target the reminiscence bump phenomenon.
d. Scenery. Three kinds of scenery will be included in the application: the home environment itself, nature, and real locations. These will allow patients to visit places that they may have difficulty going to given their condition.
e. Caregiver features. While the user is inside the virtual environment, the caregiver can see the user's view and operate a virtual companion. These features allow the caregiver to control the flow of the application and guide the user throughout the experience.
3. Conduct testing to determine the quality of the software developed:
a. Usability testing. Testing will be conducted with volunteers in order to gather feedback on the application through a questionnaire.
b. Validation from domain experts. Interviews and discussions with dementia experts about the virtual home and its features will be conducted as a way of determining the goodness of the application and its potential use for dementia patients.
c. Software quality. The quality of the VR application will be evaluated using the standards of the ISO/IEC 25010 model.
2 Theoretical and Conceptual Framework

The theoretical and conceptual framework has four main stages, as seen in Fig. 1. Requirements analysis consists of identifying the symptoms of dementia and practices in traditional reminiscence therapy, as well as the concerns and considerations of current research efforts. The software design phase consists of formulating use cases, features, and strategies to address the requirements. The software development phase consists of prototyping the software that meets the research objectives. Finally, the testing phase checks the quality of software produced while exploring the gaps in the software design and research objectives.
Fig. 1. Diagram of theoretical and conceptual framework
3 Application Design

The VR application will be entitled "Home Bound", which means on the way home. From [9], several models for the virtual home elements have already been created, and these will be improved on in this study. Additional models will also be created, such as tableware and house architecture.

3.1 Design Strategies

These strategies are made so that users can focus on enjoying the software application, to induce relaxation and to manage the symptoms of dementia. They are established from the usability basics of [16]. User interfaces from the usability basics are assumed to be embodied as interactable objects and use cases in the virtual home environment. Moreover, emergent themes across current research efforts and existing literature have motivated these strategies.
1. Encourage focus. These strategies assist in encouraging focus, as they are presumed to create a more relatable and recognizable virtual environment.
a. Make activities familiar and simple. In order to target the reminiscence bump phenomenon, collective memory [9] and fondness objects of the elderly [12] are incorporated in the application through elements and activities based on the 1960s–1980s Filipino home environment.
b. Include caregivers in the virtual environment. In order to guide and interact with users, caregivers are included in the use of the application [10]. The user's personal photos, chosen by the caregiver, can be uploaded as a way to personalize the user's experience. A virtual companion will also be included in the virtual environment to help minimize feelings of isolation for users.
c. Allow user autonomy during application use. Since the software simulates a home, users will be allowed to freely explore and interact with their environment [17]. This follows from the Montessori method approach, wherein patients are encouraged to be independent and to engage in meaningful activities [20].
2. Limit distractions. These strategies help limit distractions to avoid causing confusion and hesitancy in users during application use.
a. Slow down head and body movement. The use of VR technology may result in motion sickness [14, 15]. User movement will therefore be minimized by using teleportation for navigation and arranging interactive objects closer together. The use of fast transitions will also be avoided.
b. Mimic natural phenomena. Users may become confused by unrealistic visuals due to cognitive dissonance and unfamiliarity with the technology [10]. The elements and controls of the virtual home should be natural and intuitive [18]. Natural environments can also reduce one's depression, anxiety, and stress [13].
c. Limit user input. From [10], elderly and dementia patients may have difficulty using traditional VR joystick and button controls. As such, application controls will be limited to hand gestures and movements, while teleportation will be limited to specific areas in the house.
d. Avoid excessive stimuli. To avoid overwhelming target users, bright and loud scenarios will be avoided. Interactive objects will be strategically arranged to avoid a clutter of items that can confuse users.

3.2 User Features

The virtual home will have four accessible areas, modified from [9]: the entrance, sala or living room, dining room, and garden. Each area will have objects that the user can interact with. Figure 2 shows the layout of the rearranged virtual home. Users will remain seated throughout the use of the application, and navigation of the environment will be through teleportation. Interaction with objects will be through
Fig. 2. Top view layout of the virtual home
Fig. 3. Entrance of the virtual home
the use of gestures, such as pointing and grabbing. This can be achieved with the hand tracking and gesture recognition capabilities of the Oculus Quest 2. These features were not considered in [9], wherein the authors proposed the use of handheld controllers, which may be difficult to learn and operate, as discussed in the previous section.

Upon entry into the virtual environment, the user will first see the entrance to the virtual home, as shown in Fig. 3. In this area, preliminary instructions may be given to the user to orient them about the controls of the application, such as looking around and making gestures. Interacting with the front door will let the user enter the virtual home.

After entry into the virtual home, the user will be teleported to the sala or living room, as seen in Fig. 4. There will be four interactive objects, which are as follows:
• Television. The user can turn the television on or off in order to watch shows and commercials from the 1960s–1980s.
• Radio. The user can turn the radio on or off in order to listen to songs released in the 1960s–1980s.
• Photo album. Personal photos that are uploaded to the application will show up in the virtual environment through the photo album. The user can interact with the album to browse through the photos or arrange the photos themselves through a drag-and-drop activity.
• Painting. Interacting with the painting will teleport users to a streetview application. This will allow them to look around environments that are closer to reality, such as the beach and tourist destinations in the Philippines.

The user can also teleport to the dining room in Fig. 5. In this area, the user can set the table. The activity follows a puzzle mechanic wherein the user has to place tableware items into their correct positions on the dining table.

The last area is the garden, shown in Fig. 6.
The following activities can be performed by the user in this location:
• Water the plants. Through a spray bottle, the user can water plants found in the garden.
Fig. 4. Sala of the virtual home
Fig. 5. Dining room of the virtual home
• View nature. Interacting with a rocking chair begins a nature viewing activity wherein the user can look around the nature scenery in the garden and even watch birds.
Fig. 6. Garden area of the virtual home with watering the plants activity on the left and nature viewing activity on the right
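The teleport-to-fixed-areas navigation described in Sects. 3.1–3.2, where movement is restricted to the four accessible areas rather than free locomotion, can be illustrated with a small sketch. The anchor names come from the paper, but the coordinates and the snapping function are illustrative assumptions, not the actual implementation:

```python
import math

# Illustrative teleport anchors for the four accessible areas (coordinates assumed).
TELEPORT_ANCHORS = {
    "entrance": (0.0, 0.0),
    "sala": (4.0, 0.0),
    "dining_room": (4.0, 3.0),
    "garden": (0.0, 5.0),
}

def snap_teleport(requested_xy):
    """Snap a requested destination to the nearest allowed area.

    Restricting teleportation to fixed anchors keeps users out of
    unreachable spots and limits the fast movement that can trigger
    motion sickness.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    area = min(TELEPORT_ANCHORS, key=lambda k: dist(TELEPORT_ANCHORS[k], requested_xy))
    return area, TELEPORT_ANCHORS[area]

area, pos = snap_teleport((3.5, 2.8))
print(area, pos)  # → dining_room (4.0, 3.0)
```

Snapping to a small, fixed set of destinations also simplifies gesture-based input: a rough point anywhere near an area is enough to select it.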
3.3 Caregiver Features

The caregiver is present throughout the use of the application, as they will control the flow and guide the user in the virtual home, similar to [10]. This will be done primarily through the Unity Editor. The caregiver can fill out the user's basic information, such as their name and age, through the editor's Inspector window, while personal photos can be uploaded through the Project window. When the user is ready and the user's information is uploaded, the caregiver can start the virtual home by entering Play Mode in the editor.

During the session, the caregiver will have access to a caregiver view module in the Unity Editor, as seen in Fig. 7. This module will allow them to see what the user is currently seeing and doing while inside the virtual home. A timer is also present at the
upper middle portion of the view as a guide for the caregiver about the current duration of the session. In the lower left corner, there are emote buttons that the caregiver can use to operate a virtual companion present inside the virtual home. Lastly, a view statistics button in the lower right corner will allow the caregiver to view tracked information about the user's use of the application, such as the time spent on a certain activity. This information may be used to assess therapy progress or as discussion points in therapy sessions.
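The per-activity usage statistics mentioned above could be collected with a simple timer that accumulates time per activity. The following is a hypothetical sketch, not the paper's actual implementation; the activity names and the fake clock are assumptions made for illustration:

```python
import time

class ActivityTracker:
    """Accumulates time spent in each virtual-home activity.

    Call start() when the user begins an activity and stop() when they
    leave it; totals can later be shown in the caregiver's statistics view.
    """
    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable for deterministic testing
        self._current = None         # (activity_name, start_time)
        self.totals = {}             # activity name -> seconds

    def start(self, activity):
        self.stop()                  # close any running activity first
        self._current = (activity, self._clock())

    def stop(self):
        if self._current is not None:
            name, t0 = self._current
            self.totals[name] = self.totals.get(name, 0.0) + (self._clock() - t0)
            self._current = None

# Example with a fake clock so the output is deterministic.
ticks = iter([0.0, 30.0, 30.0, 75.0])
tracker = ActivityTracker(clock=lambda: next(ticks))
tracker.start("television")
tracker.start("radio")               # implicitly stops "television" at t=30
tracker.stop()
print(tracker.totals)                # {'television': 30.0, 'radio': 45.0}
```

Keeping the totals as a plain dictionary makes it straightforward to serialize them for the view statistics screen or for later discussion in therapy sessions.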
Fig. 7. Caregiver view of the VR application
Fig. 8. Diagram of testing setup
4 Testing Plan

The following tests will be conducted to evaluate the quality of the software and determine its usability as an application that can manage BPSD.

1. Software usability testing. Volunteer testers, preferably aged 40 to 59 years old, will be invited to test the application. The participants will be asked to answer a questionnaire related to their experience inside the virtual home and while using the VR headset, as well as the usability and quality of the application. The testing questionnaire in [10] will be used as the primary reference. The setup of the testing is shown in Fig. 8. Throughout the testing session, the tester, who serves as the patient, will be seated on a swivel chair or any chair without a backrest. A 3 m by 3 m space is provided for the tester to freely move around in. The researchers, who serve as the caregiver, will be seated nearby to guide the tester throughout the testing session. The caregiver features of the application will be accessed through a desktop computer that the researchers will use to control the virtual home.
2. Software validation. Since the participants of the software usability testing will not include actual dementia patients, the suitability of the application for managing the symptoms of dementia will be evaluated through feedback gathered from consultations and focus group discussions with domain experts.
3. Software quality. To evaluate the quality of the VR application, the standards of the ISO/IEC 25010 model will be used, particularly the characteristics of functional suitability and usability [19]. These are meant to assess whether the application is suitably developed for elderly dementia patients and usable in terms of its learnability, operability, and user interface. Questions will be integrated into the software usability testing questionnaire.
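One simple way to summarize such questionnaire results per evaluated characteristic is to average the Likert responses grouped by ISO/IEC 25010 characteristic. The grouping and scores below are purely hypothetical, sketched only to show the aggregation step:

```python
# Hypothetical 5-point Likert responses, grouped by the ISO/IEC 25010
# characteristics the testing plan focuses on.
responses = {
    "functional suitability": [4, 5, 4],
    "usability/learnability": [3, 4, 4, 5],
    "usability/operability": [4, 4],
}

def mean_scores(responses):
    """Average the Likert items for each characteristic."""
    return {c: sum(v) / len(v) for c, v in responses.items()}

for characteristic, score in mean_scores(responses).items():
    print(f"{characteristic}: {score:.2f} / 5")
```

Reporting a mean per characteristic keeps the link between individual questionnaire items and the quality model explicit when discussing results with domain experts.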
5 Conclusion

This paper presents a VR software design that extensively incorporates reminiscence therapy and the reminiscence bump phenomenon to address the symptoms of dementia. Design strategies were identified to guide the virtual home design in line with the considerations of usability for elderly dementia patients, personalization to aid in the reminiscence of memories, and portability to allow implementation on other platforms. Initial testing plans were discussed to evaluate the quality of the application. These plans do not include actual testing with dementia patients, and further medical validation may be secured in future work to assess the effectiveness of the virtual home in managing BPSD.
References

1. Wu, Y., et al.: The changing prevalence and incidence of dementia over time—current evidence. Nat. Rev. Neurol. 13(6), 327–329 (2017)
2. Dementia. https://www.who.int/news-room/fact-sheets/detail/dementia. Accessed 17 Jan 2023
3. Dominguez, J., Fe de Guzman, M., Reandelar, M., Thi Phung, T.K.: Prevalence of dementia and associated risk factors: a population-based study in the Philippines. J. Alzheimer's Disease 63(3), 1065–1073 (2018)
4. Dominguez, J., et al.: Dementia incidence, burden and cost of care: a Filipino community-based study. Front. Publ. Health (9) (2021)
5. Cerejeira, J., Lagarto, L., Mukaetova-Ladinska, E.B.: Behavioral and psychological symptoms of dementia. Front. Neurol. (3) (2012)
6. Oliveira, A.M., et al.: Nonpharmacological interventions to reduce behavioral and psychological symptoms of dementia: a systematic review. BioMed. Res. Int. (2015)
7. Berg-Weger, M., Stewart, D.B.: Non-pharmacologic interventions for persons with dementia. Mo. Med. 114(2), 116–119 (2017)
8. Munawar, K., Kuhn, S.K., Haque, S.: Understanding the reminiscence bump: a systematic review. PLOS ONE 13(12) (2018)
9. Avelino, A.M., et al.: Designing an immersive VR application using collective memory for dementia therapy. In: Proceedings of the Workshop on Computation: Theory and Practice 2020 (2020)
10. Anlacan, V.M., et al.: Virtual reality therapy game for patients with behavioral and psychological symptoms of dementia in the Philippines (2022)
11. Mitchell, A.J., Selmes, T.: Why don't patients take their medicine? Reasons and solutions in psychiatry. Adv. Psychiatr. Treat. 13(5), 336–346 (2007)
12. Talamo, A., Camilli, M., Di Lucchio, L., Ventura, S.: Information from the past: how elderly people orchestrate presences, memories and technologies at home. Univ. Access Inf. Soc. 16(3), 739–753 (2016)
13. Appel, L.: Evaluating the impact of VR-therapy on BPSD and QoL of individuals with dementia admitted to hospital. Case Med. Res. (2019)
14. Baniasadi, T., Ayyoubzadeh, S.M., Mohammadzadeh, N.: Challenges and practical considerations in applying virtual reality in medical education and treatment. Oman Med. J. 35(3) (2020)
15. Garrett, B., Taverner, T., Gromala, D., Tao, G., Cordingley, E., Sun, C.: Virtual reality clinical research: promises and challenges. JMIR Serious Games 6(4) (2018)
16. Ferre, X., Juristo, N., Windl, H., Constantine, L.: Usability basics for software developers. IEEE Softw. 18(1), 22–29 (2001)
17. Friedman, B.: Workshop participants: user autonomy. SIGCHI Bull. 30(1) (1998)
18. Nacke, L.E., Kalyn, M., Lough, C., Mandryk, R.L.: Biofeedback game design: using direct and indirect physiological control to enhance game interaction. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 103–112. Association for Computing Machinery, New York (2011)
19. ISO/IEC 25010. https://iso25000.com/index.php/en/iso-25000-standards/iso-25010. Accessed 17 Jan 2023
20. Camp, C., Antenucci, V., Roberts, A., Fickenscher, T., Erkes, J., Neal, T.: The Montessori method applied to dementia: an international perspective. Montessori Life (2017)
Empowering Responsible Digital Citizenship Through an Augmented Reality Educational Game

Marios Iakovidis(B), Christos Papakostas, Christos Troussas, and Cleo Sgouropoulou
Department of Informatics and Computer Engineering, University of West Attica, Egaleo, Greece {msc-ditrep21003,cpapakostas,ctrouss}@uniwa.gr
Abstract. As big data analytics becomes increasingly prevalent, fostering a comprehensive understanding of digital information ownership and responsibility is crucial. This paper presents an educational game that uses mobile augmented reality and a server backend to promote responsible digital citizenship among students. The mobile app uses ArUco markers to locate virtual characters, which represent digital citizens and are role-played by the participants. In the game, students can assign keywords to their characters and create text based on the keywords of nearby characters. The server backend facilitates communication between the characters and sets the interaction range for collecting keywords. The paper details the system’s architecture and a proposed use-case as an educational scenario that employs the game. This educational game holds the potential to equip students with the skills to become conscientious digital citizens and seeks to enhance understanding of data ethics, digital information ownership, and responsible data practices. An initial evaluation of the developed prototype has been performed, yielding encouraging results. Keywords: Augmented Reality · Educational Game · Digital Citizenship · Data Education · Data Ethics · Data Responsibility
1 Introduction

In today's interconnected digital world, major technology companies have often avoided taking responsibility for the collection and treatment of their users' personal data. Although, as traditionally quoted, "knowledge is power", leading technology firms frequently attempt to downplay their influence and responsibility in shaping the digital landscape, while simultaneously playing an active destabilizing role in diverting attention time and processing personal data [1, 2]. In this context, it is crucial to recognize that "decision is power" as well [3]. This notion highlights the importance of empowering individuals to make informed choices about their data privacy and online behavior. Equipping users with the necessary knowledge and tools to exercise their agency fosters a more responsible and equitable online environment.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 31–39, 2023. https://doi.org/10.1007/978-3-031-44146-2_4
32
M. Iakovidis et al.
Simultaneously, it is essential to recognize the role of data holders and processors in shaping the information landscape. By curating and controlling the flow of information, these entities significantly influence users' perceptions, beliefs, and decisions [4]. Social media has become an integral part of modern life, and its influence on decision-making [5] and the formation of beliefs and self-esteem is a topic of ongoing research. However, the impact of social media on individuals can be complex and is influenced by various factors, including individual characteristics, cultural context, and social media engagement patterns. For example, studies have found that the effect of social media on self-esteem varies across cultures, with stronger associations observed in collectivistic cultures compared to individualistic cultures [6]. In addition to influencing beliefs and self-esteem, social media platforms have been found to contribute to the formation of echo chambers and limit exposure to diverse perspectives. The prevalence of homophilic clusters dominating online interactions on social media platforms, such as Facebook and Twitter, further exacerbates this issue. Furthermore, the spread of fake news has been found to be faster than that of real news, highlighting the importance of educating individuals on how to spot fake news [7]. The impact of technology on social media extends beyond beliefs and self-esteem, as advances in computing technology have raised ethical concerns related to predictive algorithms and data handling. In response, the concept of "abstracted power" has been proposed to help computing professionals understand the potential consequences of their actions [8]. Moreover, data literacy has become increasingly important in bridging the gap between those who can effectively handle data and those who cannot.
To promote data literacy, the term "creative data literacy" has been proposed to emphasize the need for alternative paths to data for non-technical learners [9]. Moreover, as Floridi argues [10], complying with regulations is necessary but not enough to guide society in the right direction. Digital regulations, like the General Data Protection Regulation (GDPR) in Europe, set the limit between legal and illegal actions, but they do not address the moral dimension [10]. Therefore, it is important for individuals to develop critical thinking skills and consider the ethical implications of their actions [11]. Although there are existing educational games and initiatives that address questions of data ethics and responsibility [12–15], a multiplayer setup, supported by augmented reality, may offer a novel approach to teaching these essential concepts in a more dynamic and interactive learning environment. This paper seeks to contribute to the ongoing discussion by presenting a novel educational game that utilizes mobile augmented reality technology to promote understanding of big data analytics, digital information ownership, and responsibility, while emphasizing the need for transparent and ethical practices in data management and fostering critical thinking and digital literacy among users to help them navigate the ever-evolving online ecosystem.

The remainder of this article is organized as follows. Section 2 discusses the system architecture and design patterns. Section 3 presents a use case application for promoting data ethics in a classroom setting. Section 4 evaluates the game's effectiveness using a pilot study with postgraduate students. Section 5 addresses limitations, the need for diverse participants, and future research directions, including the incorporation of the Technology Acceptance Model.
Empowering Responsible Digital Citizenship Through an Augmented
33
2 System Overview

The educational game system consists of a client-server architecture designed to support a real-time, interactive learning experience in a classroom environment (Fig. 1). The system uses several design patterns, including Model-View-Controller (MVC), Data Access Object (DAO), and Publish-Subscribe, to ensure a modular, maintainable, and scalable structure. The ArUco markers are used for tracking game characters, while the Publish-Subscribe pattern over Socket.IO enables efficient communication between clients and the server.

Model-View-Controller (MVC) pattern: The MVC pattern is employed in both the client and server components to separate the concerns of data management, user interface, and user input handling. This separation of concerns makes the application more modular, maintainable, and scalable. By applying the MVC pattern to the client-side application, the Model manages local game data, the View handles rendering the game and interfacing with the user, and the Controller processes user input and communicates with the server. Similarly, on the server-side, the Model represents the server's data and game state, the View could be an optional web interface or command-line dashboard for monitoring purposes, and the Controller manages the communication between clients and server using Socket.IO.

Data Access Object (DAO) pattern: The DAO pattern is used in the server to handle communication with the MongoDB database. This abstraction layer isolates the server's data access logic from the rest of the application, allowing for greater flexibility in database interactions and making it easier to modify or replace the database in the future, if needed.
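To illustrate the DAO idea described above, the sketch below stands in for a MongoDB-backed store with an in-memory Map; because the rest of the server only sees the DAO interface, the storage backend could later be swapped without touching controller code. All class and method names here (CharacterDao, save, findById, addKeyword) are hypothetical, not taken from the paper's implementation.

```javascript
// Illustrative DAO: an in-memory Map stands in for a MongoDB collection.
class CharacterDao {
  constructor() {
    this.store = new Map(); // stand-in for the database
  }
  save(character) {
    // Shallow copy so callers cannot mutate stored state directly.
    this.store.set(character.id, { ...character });
    return character.id;
  }
  findById(id) {
    return this.store.get(id) ?? null;
  }
  addKeyword(id, keyword) {
    const c = this.store.get(id);
    if (!c) throw new Error(`unknown character: ${id}`);
    c.keywords.push(keyword);
  }
}

const dao = new CharacterDao();
dao.save({ id: 'c1', keywords: ['curious'] });
dao.addKeyword('c1', 'outdoorsy');
```

A controller built against this interface would call `dao.addKeyword(...)` without knowing whether the data lands in memory or in MongoDB.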
Fig. 1. System Architecture
Publish-Subscribe pattern: The Publish-Subscribe pattern facilitates bidirectional communication between the Model and the View components in both the client and the server using Socket.IO. This pattern allows clients and the server to subscribe to events and publish messages to those events, enabling efficient, real-time communication. The pattern supports game activities such as adding keywords, clustering keywords, generating new ones, and creating articles, as well as the exchange of keywords and articles between neighbors.

The following is a detailed overview of the system architecture:

Client
The client-side application is developed using Unity3D and runs on the students' mobile devices. It consists of the following components:
a) Model: Stores data about the player's character, such as character keywords and created articles. This component is responsible for maintaining the character's state and managing the local game data.
b) View: Handles the rendering of the game and interfaces with the user. This component is responsible for presenting the game's visuals, providing an interactive user interface, and displaying information about the character's state, keywords, and articles.
c) Controller: Handles user input, communicates with the server via Socket.IO, and processes ArUco marker data using OpenCV. This component is responsible for interpreting user actions, updating the Model and View accordingly, and sending/receiving data from the server. It also processes the ArUco marker information to determine the position and distance of game characters, which is used to identify neighboring characters.

Server
The server-side application is developed using Node.js and manages the overall game state, including character positions, keywords, and articles. It consists of the following components:
a) Model: Stores data about the game state, such as character keywords and created articles. This component is responsible for maintaining a global view of the game, including the relationships between characters and the exchange of keywords and articles among them.
b) View: Provides a basic web interface or command-line dashboard for debugging or monitoring purposes. This component allows developers or administrators to observe the game's progress, monitor performance, and troubleshoot issues in real-time.
c) Controller: Handles incoming Socket.IO events from clients, updates the Model accordingly, and emits events back to the clients. This component is responsible for processing client requests, such as updating character positions or exchanging keywords and articles. It also handles the logic for determining neighbors based on the pre-determined radius and manages the communication between clients using the Publish-Subscribe pattern.
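The Publish-Subscribe flow described above can be pictured with a minimal event bus. This is a sketch of the pattern's semantics only (subscribe with on, publish with emit); it is not the actual Socket.IO API surface, and the event name used is made up for illustration.

```javascript
// Minimal Publish-Subscribe bus illustrating the event semantics the
// system obtains from Socket.IO: handlers subscribe to named events,
// publishers emit payloads to every current subscriber.
class EventBus {
  constructor() {
    this.handlers = new Map(); // event name -> array of callbacks
  }
  on(event, handler) {
    if (!this.handlers.has(event)) this.handlers.set(event, []);
    this.handlers.get(event).push(handler);
  }
  emit(event, payload) {
    // Emitting to an event with no subscribers is a harmless no-op.
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

const bus = new EventBus();
const received = [];
bus.on('keyword:add', (kw) => received.push(kw)); // hypothetical event name
bus.emit('keyword:add', 'curious');
```

In the real system, the server-side Controller would play the publisher role, fanning events out to client subscribers.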
Technologies
The educational game system employs various technologies to support its functionalities:
a) Unity3D: A powerful game engine used to develop the client-side application, offering a wide range of tools and features for creating engaging, interactive experiences.
b) OpenCV and ArUco: Computer vision libraries used to detect and track markers representing the students' virtual characters in the client. OpenCV provides image processing capabilities, while ArUco enables the identification and tracking of markers.
c) Socket.IO: A JavaScript library used to facilitate real-time bidirectional communication between the client and server through the Publish-Subscribe pattern. Socket.IO simplifies the process of implementing real-time interactions and data exchange within the educational game system.

By incorporating these design patterns in the architecture, the system benefits from improved modularity, maintainability, and scalability. Each design pattern serves a specific purpose and addresses particular challenges in software development, contributing to a robust and efficient educational game system.
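One way to picture the server's neighbor logic: given 2-D positions recovered from ArUco marker detection, a character's neighbors are those within the pre-determined interaction radius. The positions, radius value, and function name below are illustrative assumptions, not details from the paper.

```javascript
// Sketch of the neighbor check: characters whose markers lie within
// the interaction radius of a given character exchange keywords.
function neighborsOf(id, positions, radius) {
  const self = positions[id];
  return Object.keys(positions).filter((other) => {
    if (other === id) return false;
    const dx = positions[other].x - self.x;
    const dy = positions[other].y - self.y;
    return Math.hypot(dx, dy) <= radius; // Euclidean distance test
  });
}

// Made-up marker positions for three characters.
const positions = {
  a: { x: 0, y: 0 },
  b: { x: 3, y: 4 },  // distance 5 from a
  c: { x: 20, y: 0 }, // distance 20 from a
};
```

With a radius of 10, character a would collect keywords only from b, since c lies outside the range.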
3 Proposed Use Case: Critical Thinking in Data Ethics and Data Responsibility

In this section, we present a proposed use case application of the educational game system, which focuses on promoting awareness of data ethics and data responsibility among participants. The chosen use case is particularly relevant in today's digital landscape, as students need to develop digital citizenship skills and learn to critically evaluate the information they encounter online. The following steps outline the educational scenario that can be employed in a classroom setting:

App Download and Marker Selection: Participants begin by downloading the Android app onto their phones. They then choose one ArUco marker from a list provided within the app. Upon selecting a marker, it becomes bound to the unique ID of the participant's app instance.

Character Creation and Keyword Generation: The educator prompts participants to imagine a character, generating keywords that describe the character's attributes, interests, and behaviors.

Keyword Collection: Each character collects keywords from neighboring characters based on their proximity, as determined by the distance between their ArUco markers.

Keyword Aggregation: Participants decide whether the collected keywords can be combined into smaller subsets. This step is important because it exposes to participants the "black box" [16] that this process usually is in social media, and can offer insight into how users lose their agency over their trail of data [3].

Article Generation: The educator asks participants to create a social media-style article using the subset of keywords.
Article Exchange: Participants send their generated articles to their neighbors, and they receive a corresponding number of articles from these neighbors.

Article Evaluation and Discussion: After exchanging articles, participants critically evaluate the received articles in terms of data ethics, data responsibility, and the roles of a data subject and a data processor. The educator facilitates group discussions around the following tasks:

Identifying Ethical Concerns and Data Responsibility: Students consider potential ethical issues and their responsibilities when handling the collected keywords and character information. They address questions such as:
• Were all the keywords used fairly and accurately to represent the characters?
• Did any of the articles distort or manipulate the character's attributes for a specific purpose?
• Did the aggregation of keywords into subsets maintain the integrity of the original character descriptions?

This proposed use case of the educational game system aims to engage students in a creative and collaborative learning experience that encourages critical thinking about data ethics and data responsibility.
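The Keyword Aggregation step in the scenario above can be sketched as a simple grouping: participants supply a mapping from keywords to broader labels, and any keyword without a label remains its own subset. The example keywords and labels below are hypothetical.

```javascript
// Group collected keywords into smaller subsets under participant-chosen
// labels; unlabeled keywords stay as singleton subsets of their own.
function aggregate(keywords, labelFor) {
  const subsets = {};
  for (const kw of keywords) {
    const label = labelFor[kw] ?? kw; // unlabeled keyword keeps itself
    (subsets[label] ??= []).push(kw);
  }
  return subsets;
}

// Made-up keywords collected from neighboring characters.
const collected = ['hiking', 'camping', 'chess'];
const labels = { hiking: 'outdoors', camping: 'outdoors' };
```

Making this step explicit, instead of hiding it inside an algorithm, is precisely what lets students inspect the "black box" the scenario refers to.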
4 Evaluation

The first iteration cycle of the game's development process involved a pilot study. To this end, a small group of 15 postgraduate students was recruited to evaluate the game. The participants were all students of a master's degree program in the Department of Informatics and Computer Engineering at the University of West Attica. To identify students with limited knowledge of and exposure to cloud computing, big data, and algorithms, a selection process was implemented using a Likert scale questionnaire. The questionnaire aimed to assess the participants' familiarity with and understanding of the subjects. Based on the questionnaire responses, individuals with lower scores, indicating lesser knowledge in these areas, were chosen [9]. Their involvement in the iterative design process and the use of a qualitative research approach allowed for a thorough evaluation of the system's content, design, and quality. The insights and feedback gained from the students were crucial in ensuring that the final product met the needs of its target audience. By incorporating their recommendations and feedback, the authors were able to optimize the learning process and improve the game's design.

To gather data on the aspects of the game, the authors conducted interviews with the students, using a thorough explanation of the game's design as the basis for the discussion. Based on their analysis of the system and its instructional plan, the students presented their recommendations to the authors. These recommendations included specific changes or improvements to the system's design, as well as suggestions for how to optimize the learning experience. The questions asked to evaluate the game's effectiveness in promoting responsible digital citizenship and data ethics among students were the following:
• Question 1 (Q1): To what extent does the game teach students about responsible digital citizenship and data ethics, and how effectively does it convey these concepts?
• Question 2 (Q2): How does the game promote critical thinking and teach students to distinguish between the data subject and the data processor?
• Question 3 (Q3): What impact does the game have on students' attitudes and behaviors regarding responsible digital citizenship and data ethics? Are students more likely to engage in responsible online behavior and apply what they've learned in the game to real-world situations?

Based on the analysis of the provided questions, we have compiled the results in the form of pie charts (Figs. 2, 3 and 4). In our survey research, we employed a Likert scale to gauge the opinions of the students. Specifically, we used a 5-point Likert scale format, where the participants could choose from the following response options: 1. Strongly disagree, 2. Disagree, 3. Neither agree nor disagree, 4. Agree, and 5. Strongly agree. To facilitate reporting, Figs. 2, 3 and 4 categorize the response options into low, fair, and high groups as follows: the low group represents the strongly disagree and disagree options, the fair group encompasses the neither agree nor disagree option, and the high group includes the agree and strongly agree answers.
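The low/fair/high grouping just described amounts to a small mapping over raw 5-point responses; a sketch, with made-up sample responses:

```javascript
// Collapse 5-point Likert responses into the three reported groups:
// 1-2 -> low, 3 -> fair, 4-5 -> high.
function groupResponses(responses) {
  const counts = { low: 0, fair: 0, high: 0 };
  for (const r of responses) {
    if (r <= 2) counts.low += 1;
    else if (r === 3) counts.fair += 1;
    else counts.high += 1;
  }
  return counts;
}

// Hypothetical responses from seven participants (not the study data).
const sample = [5, 4, 4, 3, 2, 5, 4];
```

Dividing each count by the number of respondents yields the percentages reported in the pie charts.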
Fig. 2. Question 1 results (low 7%, fair 22%, high 71%)

Fig. 3. Question 2 results (low 6%, fair 16%, high 78%)

Fig. 4. Question 3 results (low 9%, fair 21%, high 70%)
Regarding the first question, 71% of the students found that the game effectively teaches students about responsible digital citizenship and data ethics, while 78% of the students found that the game promotes critical thinking and teaches students to distinguish between the data subject and the data processor. Finally, 70% of the students are likely to engage in responsible online behavior and apply what they've learned in the game to real-world situations.
5 Limitations – Future Work The selection of postgraduate students from the Department of Informatics and Computer Engineering at the University of West Attica for the pilot study may have been appropriate for evaluating the prototype of the game. However, it may not have been representative
of the broader population of students who could benefit from the game's content related to responsible digital citizenship and data ethics. To address this potential limitation, it would be helpful to include a more diverse group of participants in future iterations of the game's development process. This could include undergraduate students from various disciplines and backgrounds, as well as middle and high school students who are just beginning to develop their digital literacy skills. Additionally, it would be valuable to collect both quantitative and qualitative data from participants to evaluate the game's effectiveness. This could include measures of knowledge gain and changes in attitudes and behaviors related to responsible digital citizenship and data ethics, as well as feedback from participants on the game's usability, engagement, and effectiveness in meeting its learning objectives. This comprehensive evaluation will provide valuable insights into the game's impact and inform future development and refinement of the game and its pedagogical approach. By incorporating a more diverse group of participants and collecting a range of data types, the game's developers can better ensure that it effectively promotes responsible digital citizenship and data ethics among a broad population of students.

As part of the future work, we plan to integrate a Technology Acceptance Model (TAM) into the evaluation process of our educational game. TAM, a widely recognized model in understanding user adoption and acceptance of new technologies [17], will allow us to assess the factors influencing the acceptance of our augmented reality software tool among students and educators [18–20].
We also plan to expand the educational outcomes to include fake news detection through the use of role-playing character cards and to include an additional step in the educational scenario:

Detecting and Addressing Fake News: Students analyze the articles to identify any elements of fake news, misinformation, or disinformation. They discuss strategies for recognizing and combating fake news, addressing questions such as:
• Were any of the articles misleading, biased, or promoting false information about the characters?
• How can one verify the accuracy and authenticity of the information presented in such articles?
References
1. Beattie, A., Daubs, M.S.: Framing 'digital well-being' as a social good. First Monday (2020). https://doi.org/10.5210/fm.v25i12.10430
2. Newton, C.: Why these Facebook research scandals are different. The Verge (2021). https://www.theverge.com/2021/9/23/22688976/facebook-research-scandals. Accessed 31 Jan 2023
3. Rudnianski, M., Bestougeff, H.: Bridging games and diplomacy. In: Avenhaus, R., Zartman, I.W. (eds.) Diplomacy Games: Formal Models and International Negotiations, pp. 149–179. Springer, Berlin, Heidelberg (2007). https://doi.org/10.1007/978-3-540-68304-9_8
4. Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., Velidi, S.: Making sense of algorithmic profiling: user perceptions on Facebook. Inf. Commun. Soc. 26(4), 809–825 (2023). https://doi.org/10.1080/1369118X.2021.1989011
5. Chauhan, R.S., Connelly, S., Howe, D.C., Soderberg, A.T., Crisostomo, M.: The danger of 'fake news': how using social media for information dissemination can inhibit the ethical decision making process. Ethics Behav. 32(4), 287–306 (2022). https://doi.org/10.1080/10508422.2021.1890598
6. Cingel, D.P., Carter, M.C., Krause, H.-V.: Social media and self-esteem. Curr. Opin. Psychol. 45, 101304 (2022). https://doi.org/10.1016/j.copsyc.2022.101304
7. Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., Starnini, M.: The echo chamber effect on social media. Proc. Natl. Acad. Sci. U.S.A. 118(9), e2023301118 (2021). https://doi.org/10.1073/pnas.2023301118
8. Peterson, T.L., Ferreira, R., Vardi, M.Y.: Abstracted power and responsibility in computer science ethics education. IEEE Trans. Technol. Soc., p. 1 (2023). https://doi.org/10.1109/TTS.2022.3233776
9. D'Ignazio, C.: Creative data literacy: bridging the gap between the data-haves and data-have nots. Inf. Des. J. 23(1), 6–18 (2017). https://doi.org/10.1075/idj.23.1.03dig
10. Floridi, L.: Soft ethics and the governance of the digital. Philosophy Technol. 31(1), 1–8 (2018). https://doi.org/10.1007/s13347-018-0303-9
11. Yamano, P.: Cyberethics in the elementary classroom: teaching responsible use of technology. Presented at the Society for Information Technology & Teacher Education International Conference, Association for the Advancement of Computing in Education (AACE), pp. 3667–3670 (2006). Accessed 26 Jan 2023. https://www.learntechlib.org/primary/p/22669/
12. European Data Protection Supervisor, Klein, G., Bauman, Y.: The European Data Protection Supervisor presents the cartoon introduction to digital ethics. Publications Office of the European Union, Luxembourg (2018). Accessed 15 Feb 2023. https://data.europa.eu/doi/10.2804/534765
13. Junior, R.B.: The Fake News Detective: A Game to Learn Busting Fake News as Fact Checkers using Pedagogy for Critical Thinking (2020). Accessed 31 Jan 2023. https://smartech.gatech.edu/handle/1853/63023
14. Schrier, K.K.: Spreading learning through fake news games. Gamevironments 15, 15 (2021). https://doi.org/10.48783/gameviron.v15i15.157
15. Shapiro, B.R., et al.: Re-shape: a method to teach data ethics for data science education. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), pp. 1–13. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3313831.3376251
16. Kynigos, C.: A 'black-and-white box' approach to user empowerment with component computing. Interact. Learn. Environ. 12(1–2), 27–71 (2004). https://doi.org/10.1080/1049482042000300896
17. Marangunić, N., Granić, A.: Technology acceptance model: a literature review from 1986 to 2013. Univers. Access Inf. Soc. 14(1), 81–95 (2015). https://doi.org/10.1007/s10209-014-0348-1
18. Papakostas, C., Troussas, C., Krouska, A., Sgouropoulou, C.: Exploring users' behavioral intention to adopt mobile augmented reality in education through an extended technology acceptance model. Int. J. Human-Computer Interact. 39(6), 1294–1302 (2023). https://doi.org/10.1080/10447318.2022.2062551
19. Papakostas, C., Troussas, C., Krouska, A., Sgouropoulou, C.: User acceptance of augmented reality welding simulator in engineering training. Educ. Inf. Technol. 27(1), 791–817 (2022). https://doi.org/10.1007/s10639-020-10418-7
20. Papakostas, C., Troussas, C., Krouska, A., Sgouropoulou, C.: Measuring user experience, usability and interactivity of a personalized mobile augmented reality training system. Sensors 21(11), 3888 (2021). https://doi.org/10.3390/s21113888
Model Decomposition of Robustness Diagram with Loop and Time Controls to Sequence Diagrams

Kliezl P. Eclipse(B) and Jasmine A. Malinao
Division of Natural Sciences and Mathematics, University of the Philippines Tacloban College, Tacloban, Philippines {kpeclipse,jamalinao1}@up.edu.ph
Abstract. Robustness Diagram with Loop and Time Controls (RDLT) is a workflow model designed to represent systems and can capture all workflow dimensions: resource, process, and case. It can be mapped to Class Diagrams, to extract the resource dimension of the system it represents, as well as to Petri Nets, to extract both the process and case dimensions thereof. Currently, it lacks available tools to perform automated model transformations to frequently used diagrams. Furthermore, no existing literature has explored decomposing an RDLT into a diagram that captures both the resource and case dimensions, such as Sequence Diagrams. This paper proposes a mapping of RDLT into Sequence Diagrams to utilize the existing automated tools for Sequence Diagrams in RDLT analysis. The RDLT components and their mapped Sequence Diagram counterparts are used in the proposed mapping from an input RDLT to an output set of Sequence Diagrams; the mapping produces Sequence Diagrams based on the RDLT's objects, on controllers with multiple incoming arcs and at least one outgoing arc, and on checking arcs and referencing vertices. Keywords: Workflows · Robustness Diagram with Loop and Time Controls · Sequence Diagram · mapping
1 Introduction
Workflow and Workflow Management Systems have made great contributions to the analysis of different systems in both business and scientific domains. A workflow is the automation of procedures where tasks are passed between participants, in line with a specified set of rules, to reach or contribute to an overall goal. Its model can range from simple to complex, large systems [4], based on the workflow dimensions captured, namely the resource, process, and case [2]. Work is a specification of cases with the relevant processes that enable their execution [6], profiling both the case and process dimensions. An activity is the actual performance of a resource on a work specification [6], profiling all three dimensions. Case management is the specification of a case within a system and the resources attributed to it [13],

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 40–54, 2023. https://doi.org/10.1007/978-3-031-44146-2_5
Model Decomposition of RDLT to Sequence Diagrams
41
profiling the resource and case dimensions. The Robustness Diagram with Loop and Time Controls (RDLT) is an extension of the Robustness Diagram that can capture all dimensions [6], which makes it a powerful tool for representing real-world complex systems such as adsorption chillers [6,10] and the Philippine Integrated Disease Surveillance and Response (PIDSR) system [5]. It previously had no automated tools that could support its implementation and model-to-model transformation, unlike other workflow models like UML Class Diagrams, UML Sequence Diagrams, and Petri Nets. Both Class Diagrams and Sequence Diagrams were previously used [7] to verify the completeness of user requirements with the Robustness Diagram. The limited tools and the existing literature on requirements traceability computation from the Robustness Diagram to Class and Sequence Diagrams led to work on matrix representation and model verification for RDLTs [1] and on the partial mapping of RDLT to Class Diagrams, to extract the resource dimension of the system it represents, as well as to Petri Nets, to extract both the process and case dimensions thereof [13]. To have more tools available for RDLT analysis, this paper proposes a mapping of RDLT into Sequence Diagrams, a model that simultaneously captures both the resource and case dimensions, and compares the activity profile of the input RDLT with the case management profile extracted from the set of output Sequence Diagrams to validate the correctness of the proposed mapping.

1.1 Robustness Diagram with Loop and Time Controls
The Robustness Diagram with Loop and Time Controls (RDLT) [6] is an extension of the Robustness Diagram that captures all three workflow dimensions (See Fig. 1).
Fig. 1. RDLT (Based on [13]).
42
K. P. Eclipse and J. A. Malinao
Definition 1. RDLT [6, 8]
An RDLT is a graph representation R of a system that is defined as R = (V, E, T, M) where:
– V is a finite set of vertices where every vertex is either a boundary or entity object or a controller.
– E is a finite set of arcs such that no two objects are connected to each other. Furthermore, every arc (x, y) has the following attributes:
• C : E → Σ ∪ {ε}, where Σ is a finite non-empty set of symbols and ε is the empty string. C(x, y) ∈ Σ means that C(x, y) is a condition that is required to be satisfied, e.g. an input requirement or parameter [13], to proceed from x to y. Meanwhile, C(x, y) = ε means that there is no condition imposed by (x, y), or signifies that x is the owner object of the controller y.
• L : E → N is the maximum number of traversals allowed on the arc.
– T : E → N^n is a mapping such that T((x, y)) = (t1, ..., tn) for every (x, y) ∈ E, where n = L((x, y)) and ti ∈ N is the time a check or traversal is done on (x, y) by some algorithm's walk on R.
– M : V → {0, 1} indicates whether u ∈ V is a center of a reset-bound subsystem (RBS). Given a vertex u such that M(u) = 1, an RBS is a substructure Gu of R that is induced by a center u ∈ V and the set of controllers owned by u.

An arc (x, y) ∈ E is said to be a bridge of Gu if and only if (1) x is not a vertex in Gu but y is, in which case we say that (x, y) is an in-bridge of y in Gu; or (2) x is a vertex in Gu but y is not, in which case we say that (x, y) is an out-bridge of x in Gu. A pair of arcs (a, b) and (c, d) are type-alike (with respect to y) if and only if (1) y is present in both arcs and (2) either both arcs are bridges of y of Gu or both are not.

An arc (x, y) can have one of two types of constraint: an Input Requirement Constraint or a Parameter Constraint. In Fig. 1, arcs (y2, x2) and (y5, y3) both have their assigned parameter constraint as indicated by the term send, while the remaining arcs have an input requirement constraint.

Definition 2.
Input Requirement Constraint [13] A constraint C((x, y)) on arc (x, y) in RDLT R is an input requirement constraint if it requires input from the system environment or user to be satisfied. Definition 3. Parameter Constraint [13] A constraint C((x, y)) on arc (x, y) in RDLT R is a parameter constraint if it requires passing a parameter from object x to object y to be satisfied. An activity profile can be extracted from an RDLT using an activity extraction algorithm in [6]. In some cases, an RDLT can have multiple activity profiles. Given the RDLT in Fig. 1 with x1 as the start vertex and y3 as the end vertex, one possible activity profile is S = {S(1), S(2), S(3), S(4), S(5)}, where S(1) = {(x1, y1), (x1, y2)}, S(2) = {(y1, x2), (y2, x2)}, S(3) = {(x2, y4)}, S(4) = {(y4, y5), (x2, y5)}, and S(5) = {(y2, y3), (y5, y3)}.
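The activity profile above can be encoded and sanity-checked in a few lines of Python (the dictionary layout is an illustrative choice of ours, not a format prescribed by [6]):

```python
# Activity profile of the RDLT in Fig. 1 (start x1, end y3),
# keyed by the time step t of each reachability configuration S(t).
S = {
    1: {("x1", "y1"), ("x1", "y2")},
    2: {("y1", "x2"), ("y2", "x2")},
    3: {("x2", "y4")},
    4: {("y4", "y5"), ("x2", "y5")},
    5: {("y2", "y3"), ("y5", "y3")},
}

def is_connected_profile(S, start):
    """Every arc source in S(t) must have been reached at an earlier step."""
    reached = {start}
    for t in sorted(S):
        for x, _ in S[t]:
            if x not in reached:
                return False
        reached |= {y for _, y in S[t]}
    return True

print(is_connected_profile(S, "x1"))  # True for this profile
```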
Model Decomposition of RDLT to Sequence Diagrams
43
Definition 4. Reachability Configuration [6] A reachability configuration S(t) in R contains the arcs traversed at time step t ∈ N. We call a set S = {S(1), S(2), ..., S(d)}, d ∈ N, an activity profile in R where ∃(u, v) ∈ S(1) and (x, y) ∈ S(d) such that ∃ w, z ∈ V where (w, u), (y, z) ∈ E.

Definition 5. Unconstrained Arc [6] An arc (x, y) ∈ E is unconstrained if, for every (v, y) ∈ E such that (x, y) and (v, y) are type-alike, any of the following holds:

1. C((v, y)) ∈ {ε, C((x, y))},
2. |{ti ∈ T((x, y)) | ti ≥ 1}| ≤ |{tj ∈ T((v, y)) | tj ≥ 1}| ≤ L((v, y)),
3. C((v, y)) ∈ Σ, C((x, y)) = ε, and T((v, y)) = [0].

1.2 Sequence Diagram
The Sequence Diagram [9] is a UML Behavioral Diagram that deals with the communication between resources via the sequence of exchanged messages. The Object Dimension [12] is the horizontal axis that shows the participants in the interaction. The Time Dimension [12] represents time (order, not duration) proceeding down the page. With this temporal aspect, a case management profile can be extracted from a Sequence Diagram using the process flow extraction for Sequence Diagrams in [7]. Given an input Sequence Diagram, the extraction outputs an updated Sequence Diagram with time-of-execution values next to the representations of arcs. A Sequence Diagram also uses Sequence Fragments [12], represented as a box that encloses a portion of the interaction within the diagram. A fragment has an operator in the top left corner of the box that indicates its type. Some types of fragments are alternative (alt), option (opt), reference (ref), loop, break, and sequence diagram (sd).

1.3 Related Works
RDLT to Other Workflows. Literature in [13] explored the partial decomposition of RDLT into the design perspective of Class Diagrams, including conditions for mapping RDLT components to class diagram components. Currently, this decomposition is limited to supporting associations between classes. They also introduced a partial mapping from RDLT to Petri Nets by considering 9 structures. They did not consider the limit L attribute of RDLT, however, which leaves the mapped Petri Net unable to limit the number of times a transition may fire. This work recommended the model decomposition of RDLT to Sequence Diagrams, using their work for insights. Sequence Diagram to Other Workflows. Literature in [3] introduced a model transformation of Sequence Diagrams to Activity Diagrams, both of which are UML Behavioral Diagrams. The Sequence Diagram is represented in a three-column table called a sequence table, which captures the different components of the diagram such as objects, messages, loops, and more.
Fig. 2. Sequence Diagram
The proposal is an automated tool that converts all the components into an equivalent Activity table, then references the Activity table and converts it into its equivalent Activity Diagram. Literature in [11] introduced a partial transformation of Sequence Diagrams to Coloured Petri Nets (CPN). The proposal is to generate CPNs, executable with CPN tools, equivalent to the input Sequence Diagram by taking advantage of model-to-model transformation techniques. The input Sequence Diagram, however, is defined using only selected main features, which opens the possibility that not every component of a diagram is captured during the model transformation. Using Sequence Diagrams in the model decomposition of RDLT gives access to transformations of RDLT into the other workflow models into which a Sequence Diagram can be transformed. Sequence Diagram Requirements Traceability Using Robustness Diagram. Literature in [7] used the Robustness Diagram of the ICONIX paradigm as verification for User Requirements Traceability in Sequence Diagrams, based on pre-specified user requirements expressed in Use Case Texts. The Use Case Text is converted to its Robustness Diagram and Sequence Diagram. Both diagrams are compared by computing the Requirements Traceability Value (RTV), which determines the percentage of consistency and completeness of representation between them. With RDLT being an extension of the Robustness Diagram, computing the RTV can be adapted to validate the percentage of
completeness of the components between RDLT and Sequence Diagrams once the proposed mapping is performed.
2 Methods

2.1 Identified RDLT to Sequence Diagrams Components
Vertex Set. Sequence Diagrams have two special participant types, boundary and entity; thus, RDLT objects are mapped as participants with their lifelines in a Sequence Diagram. The center of an RBS is mapped as a participant with a note attached to its lifeline indicating that its M-value is 1. If an object owns controllers, then it has its own Sequence Diagram. Every controller of an RDLT is mapped as a synchronous message in a Sequence Diagram; if that controller has multiple incoming arcs and at least one outgoing arc, then it also has its own Sequence Diagram. A synchronous message expects a return message. The T-attribute updates when an arc is checked and/or traversed; with a controller represented as a synchronous message, the response is the updating of T-values. Arc Set. An object in an RDLT may own controllers, represented with an ownership arc. This is partially mapped as an outgoing synchronous message from a participant, which may or may not be a self-message; this conveys that it came from a participant, the same way controllers come from their owner object. An arc (x, y) from a controller x to an object y, where y is not x_owner, is partially mapped as an outgoing message x() from participant x_owner to the lifeline of participant y. This makes the message x() the glue between participants. An arc from a controller x to another controller y, where both have different owners, is partially mapped as an incoming asynchronous message x() from the lifeline of x_owner, followed by a self-message y() on the lifeline of y_owner. This means the message x() came from a different participant and was sent to participant y_owner first before reaching y(). An arc from a controller x to another controller y, where both have the same owner, is partially mapped as a self-message x() followed by an outgoing message y() from the lifeline of their owner. This means they both came from one participant and that message x() was sent before y().
The L-attribute is mapped as a note attached to the lifeline of participant x or x_owner with the contents “L(x,y) = a”, where a is the L-value. This introduces the L-values of every arc in the RDLT into its mapped set of Sequence Diagrams. Every input requirement constraint c of an arc (x, y) is mapped as a condition c in an opt Sequence Fragment, where the fragment contains a synchronous message. This means the condition within the fragment needs to be satisfied before the synchronous message is sent. This is skipped if c is ε. Every parameter constraint send c [13] of an arc (x, y) where y is a controller is mapped as a parameter get c inside the message y(). In this case, x is either a controller or an object. The controllers may or may not have the same owner, so the term “get”
means the message y() is getting the parameter c from the owner of message x(). If x is an object, then “get” means the message y() is getting the parameter c from participant x itself. If y is an object, then x is necessarily a controller with an owner, and the parameter constraint is mapped as a parameter send c inside the message x(). The term “send” means the participant owning the message x() is sending the parameter c to the participant y. To fully map the arc and to check whether it is unconstrained, a synchronous message is bounded within at least two (2) opt Sequence Fragments. The first checks that the limit has not been reached; it can be an alt fragment instead if the previous vertex representation has multiple outgoing arcs. The second fragment checks the constraint and whether the path is unconstrained, using the isUnconstrained Sequence Diagram (see Fig. 3), before sending a synchronous message. Updating of T-values also occurs within the two fragments: one for the arc check, represented as an asynchronous message “Update leftmost 0 of T(x, y)”, and the other for the arc traversal, as a return message to the updateTValues Sequence Diagram (see Fig. 4).

2.2 Proposed Mapping from RDLT to Sequence Diagrams
Algorithm 1 is a proposed algorithm for mapping an input RDLT to its set of Sequence Diagrams using the components identified in Sect. 2.1. The algorithm also takes note of the looping arcs of every object.

Algorithm 1. RDLT to SDs
Input: RDLT R
Output: Sequence Diagrams (SDs) mapped from R

1. For every object Sequence Diagram o or controller Sequence Diagram c:
   (a) Map the sequence of arcs (x, y) in E and controllers from one object to another object, starting from ownership arcs:
       i. Create a loop fragment if there exists a looping arc LA(y) = (x, y) in R where x is a controller, y is an object, and y owns x. If the C-value of the looping arc (x, y) is in Σ, and there exists an arc (u, y) that is type-alike to (x, y) with a C-value in Σ that is not equal to the C-value of (x, y), then include in the loop fragment all such arcs (u, y) before starting at y. Otherwise, start the fragment at y if the C-value of (x, y) is the empty string ε.
       ii. If y in arc (x, y) is an object, then attach a ref fragment on the lifeline of y after the return message. Within the bounds of the fragment, add the name of the Sequence Diagram of y.
       iii. If y in arc (x, y) is a controller that has multiple incoming arcs and at least one outgoing arc, then add a ref fragment after the return message. Within the bounds of the fragment, add the name of the Sequence Diagram of y.
2. Generate the start Sequence Diagram.
   (a) If the source is an object o, use its mapped participant and use the ref fragment.
   (b) If the source is a controller c, map the sequence of arcs from the source to an object, then use the mapped participant of the object and end with the ref fragment.
       – If there is no information on c's owner nor its previous tasks, the owner is abstracted as a participant c_owner and the previous tasks are abstracted as an asynchronous message c().

Remark 1. Since an RDLT is multidimensional and can contain multiple activities, it can produce multiple Sequence Diagrams.
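Which diagrams Step 1 yields for Fig. 1 can be worked out mechanically. The sketch below reconstructs the arcs of Fig. 1 from those appearing in its activity profile (the figure itself may contain more arcs, e.g. loops, and the ownership assignment is our reading of the source, so treat it as illustrative):

```python
# Arcs of the RDLT in Fig. 1, reconstructed from its activity profile.
arcs = [("x1", "y1"), ("x1", "y2"), ("y1", "x2"), ("y2", "x2"),
        ("x2", "y4"), ("y4", "y5"), ("x2", "y5"), ("y2", "y3"), ("y5", "y3")]
objects = {"x1", "x2"}
controllers = {"y1", "y2", "y3", "y4", "y5"}
owner = {"y1": "x1", "y2": "x1", "y4": "x2", "y5": "x2"}  # assumed ownership

# One Sequence Diagram per object that owns controllers ...
object_sds = {o for o in objects if o in owner.values()}

# ... and one per controller with >= 2 incoming arcs and >= 1 outgoing arc.
incoming = {v: sum(1 for _, y in arcs if y == v) for v in controllers}
outgoing = {v: sum(1 for x, _ in arcs if x == v) for v in controllers}
controller_sds = {c for c in controllers
                  if incoming[c] >= 2 and outgoing[c] >= 1}

print(object_sds, controller_sds)
```

Running this reproduces the diagram inventory reported in the Results: object diagrams for x1 and x2, and a controller diagram only for y5 (y3 also has two incoming arcs but no outgoing arc).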
Fig. 3. isUnconstrained Sequence Diagram that requires an arc (x,y) as input.
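The check that Fig. 3's diagram encodes is Definition 5. A direct transcription in Python (the dict-based encoding of the C, L, and T attributes and the type_alike function are our own illustrative choices):

```python
EPS = ""  # stands for the empty string ε

def is_unconstrained(arc, type_alike, C, L, T):
    """Definition 5: arc (x, y) is unconstrained iff every type-alike
    arc (v, y) satisfies at least one of the three conditions."""
    for v_arc in type_alike(arc):
        done = lambda a: sum(1 for t in T[a] if t >= 1)  # checks/traversals so far
        ok = (
            C[v_arc] in (EPS, C[arc])                      # 1: ε or same condition
            or done(arc) <= done(v_arc) <= L[v_arc]        # 2: traversal counts
            or (C[v_arc] != EPS and C[arc] == EPS          # 3: untouched constrained
                and all(t == 0 for t in T[v_arc]))         #    type-alike, T = [0]
        )
        if not ok:
            return False
    return True

# Toy attributes for two type-alike arcs into a vertex y.
C = {("a", "y"): "c1", ("b", "y"): EPS}
L = {("a", "y"): 2, ("b", "y"): 1}
T = {("a", "y"): [0, 0], ("b", "y"): [0]}
type_alike = lambda arc: [a for a in C if a != arc]
print(is_unconstrained(("a", "y"), type_alike, C, L, T))  # True: condition 1 holds
```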
3 Results
Given the RDLT in Fig. 1, Algorithm 1 generated two (2) Sequence Diagrams, one for each object with owned controllers (vertices x1 and x2), one (1) Sequence
Diagram for the controller with multiple incoming arcs and at least one outgoing arc (vertex y5), the isUnconstrained and updateTValues Sequence Diagrams for every arc (see Figs. 3 and 4), and a start Sequence Diagram that references x1 as the source. After obtaining the case management profile of the generated Sequence Diagrams using the flow extraction algorithm in [7], they are updated with the times of traversal (see Figs. 6, 7, and 8).

Fig. 4. updateTValues Sequence Diagram that requires an arc (x,y) as input.
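The consistency comparison between the two profiles reduces to checking that every arc carries the same time step in both. A minimal sketch (the case management dictionary is an illustrative stand-in for values read off the generated diagrams, not data taken from the figures):

```python
def consistency(activity, case_management):
    """Percentage of arcs whose time steps agree in both profiles."""
    arcs = set(activity) | set(case_management)
    agree = sum(1 for a in arcs
                if activity.get(a) == case_management.get(a))
    return 100.0 * agree / len(arcs)

# Time step of each arc in the activity profile of Fig. 1.
activity = {("x1", "y1"): 1, ("x1", "y2"): 1, ("y1", "x2"): 2, ("y2", "x2"): 2,
            ("x2", "y4"): 3, ("y4", "y5"): 4, ("x2", "y5"): 4,
            ("y2", "y3"): 5, ("y5", "y3"): 5}
# A hypothetical case management profile that matches it exactly.
case_management = dict(activity)
print(consistency(activity, case_management))  # 100.0
```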
4 Discussion
The start Sequence Diagram is the first diagram to read, hence its name. In Fig. 5, the x1 Sequence Diagram is referenced next, since x1 is the source vertex of the input RDLT. While reading an object or controller Sequence Diagram, the condition “isUnconstrained(x,y) == true” in an opt fragment is reached. This indicates that the isUnconstrained Sequence Diagram should be checked with the input arc. Whatever is returned (either true or false) determines whether a synchronous message can be sent. If a synchronous message is sent, a return message “updateTValues(x,y)” follows, indicating that the updateTValues Sequence Diagram should be checked with the input arc. While reading the x1 Sequence Diagram, the x2 Sequence Diagram is checked upon reaching one of the ref fragments; while reading the x2 Sequence Diagram, the y5 Sequence Diagram is checked upon reaching one of its ref fragments. The sink is reached at the end of a sequence diagram that does not end with
Fig. 5. Generated start Sequence Diagram.
a ref fragment. In Figs. 6 and 8, the synchronous message y3() in both diagrams is the mapped sink of the RDLT. Comparing the time step of every arc representation in the Sequence Diagrams of Figs. 6, 7, and 8 with the time steps of the activity profile of the RDLT in Fig. 1, the Case Management Profile is 100% consistent with the Activity Profile.

4.1 Algorithm Analysis
Theorem 1. Algorithm 1 has a time complexity of O(n²), where n is the number of vertices in RDLT R.

Proof. Let R be an input RDLT. Step 2 of the proposed mapping goes through all v ∈ V to check for objects. Step 1(a) goes through all arcs (x, y) ∈ E, in relation to objects and owned controllers, to include them in the generated Sequence Diagrams. In an RDLT, the maximum number of arcs in E is n², where n is the number of vertices. In the worst case, the proposed mapping goes through n vertices to generate Sequence Diagrams and n² arcs to map into them. Overall, the maximum running time is O(n²).

Theorem 2. Algorithm 1 has a space complexity of O(n²), where n is the number of vertices in RDLT R.

Proof. Let R be an input RDLT. Step 1 of the proposed mapping generates a Sequence Diagram for each object with owned controllers. Step 1(a) generates both an isUnconstrained Sequence Diagram, via the “isUnconstrained == true” condition of a fragment, and an updateTValues Sequence Diagram, via the
updateTValues return message for every traversed arc. In an RDLT, the maximum number of arcs in E is n², where n is the number of vertices. In the worst case, the proposed mapping generates n Sequence Diagrams for objects and n² Sequence Diagrams for arcs. Overall, the maximum space complexity is O(n²).

Fig. 6. Generated x1 Sequence Diagram (with time of traversals).
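The counts that drive both bounds can be made concrete with a back-of-the-envelope sketch (the two-diagrams-per-arc constant follows the isUnconstrained/updateTValues reading above; this is an illustration of the asymptotics, not part of Algorithm 1):

```python
def worst_case_counts(n):
    """Worst-case size drivers for Algorithm 1 on an RDLT with n vertices."""
    max_arcs = n * n                  # an RDLT has at most n^2 arcs
    vertex_passes = n                 # one pass over the vertices
    per_arc_diagrams = 2 * max_arcs   # isUnconstrained + updateTValues per arc
    return max_arcs, vertex_passes + per_arc_diagrams

for n in (10, 100):
    arcs, work = worst_case_counts(n)
    print(n, arcs, work)  # both columns grow quadratically in n
```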
Fig. 7. Generated x2 Sequence Diagram (with time of traversals).
Fig. 8. Generated y5 Sequence Diagram (with time of traversal).
5 Conclusions
This paper mapped RDLT into Sequence Diagrams. We identified the RDLT components and their mapped Sequence Diagram counterparts, which were used in the proposed mapping of an RDLT to its set of Sequence Diagrams (see Sect. 2.2). From the mapping, an input RDLT generates a set of Sequence Diagrams: one for every object with its owned controllers, one for every controller that has multiple incoming arcs and at least one outgoing arc, one to check whether an arc is unconstrained, one to update the T-values of an arc
and its type-alikes, and a start diagram to reference the source. This paper also determined the 100% consistency between the activity profile of the input RDLT and the Case Management Profile of the output set of Sequence Diagrams, which validates the completeness of the proposed mapping. We also analyzed the time and space complexity of the algorithm, both O(n²), where n is the number of vertices in the input RDLT. This paper did not perform a formal validation of the generated Sequence Diagrams with the metric on the Requirements Traceability Computation of Sequence Diagrams and Robustness Diagrams [7]. Such a metric can be adapted to fully support traceability between Sequence Diagrams and RDLT, and computing the RTV can further validate the quantifiable completeness of the mapping. Another recommendation is to extend the proposed mapping to RDLTs with multiple sources and/or sinks, or to find a more optimal mapping with fewer generated Sequence Diagrams. The proposed mapping and best practices could also be applied in future work to generate a set of Sequence Diagrams for the RDLT representing the chiller system [6,10] or the PIDSR System [5].
References

1. Delos Reyes, R., Agnes, K., Malinao, J., Juayong, R.: Matrix representation and automation of verification of soundness of robustness diagram with loop and time controls. In: Proceedings of the WCTP (2018)
2. Hollingsworth, D.: Workflow Management Coalition: the workflow reference model. WFMC-TC-1003 19, 224 (1995)
3. Kulkarni, R., Srinivasa, C.: Novel approach to transform UML sequence diagram to activity diagram. J. Univ. Shanghai Sci. Technol. 23, 1247–1255 (2021)
4. Ladyman, J., Lambert, J., Wiesner, K.: What is a complex system? Eur. J. Philos. Sci. 3, 33–67 (2013)
5. Lopez, J.C.L., Bayuga, M.J., Juayong, R.A., Malinao, J.A., Caro, J., Tee, M.: Workflow models for integrated disease surveillance and response systems. In: Theory and Practice of Computation. Taylor and Francis Group, London (2020). ISBN 978-0-367-41473-3
6. Malinao, J.: On building multidimensional workflow models for complex systems modelling. Fakultät für Informatik (Pattern Recognition and Image Processing Group), Institute of Computer Graphics and Algorithms, Technische Universität Wien, Vienna, Austria (2017)
7. Malinao, J.A., et al.: A metric for user requirements traceability in sequence, class diagrams, and lines-of-code via robustness diagrams. In: Nishizaki, S.Y., Numao, M., Caro, J., Suarez, M.T. (eds.) Theory and Practice of Computation. Proceedings in Information and Communications Technology, vol. 7. Springer, Tokyo (2013)
8. Malinao, J.A., Juayong, R.A.: Classical soundness in robustness diagram with loop and time controls. (Submitted) Philippine J. Sci. (2023)
9. Object Management Group: OMG Unified Modeling Language (OMG UML), Version 2.5 (2015)
10. Rezk, A.: Theoretical and experimental investigation of silica gel/water adsorption refrigeration systems. University of Birmingham (2012)
11. Soares, J., Lima, B., Faria, J.: Automatic model transformation from UML sequence diagrams to coloured Petri nets. In: Proceedings of the 6th International Conference on Model-Driven Engineering and Software Development (2018)
12. Visual Paradigm: What is sequence diagram? https://www.visual-paradigm.com/guide/uml-unified-modeling-language/what-is-sequence-diagram/
13. Yiu, A., Garcia, J., Malinao, J., Juayong, R.: On model decomposition of multidimensional workflow diagrams. In: Proceedings of the WCTP (2018)
Enhancing Predictive Battery Maintenance Through the Use of Explainable Boosting Machine

Sadiqa Jafari¹ and Yung-Cheol Byun²(B)

¹ Department of Electronic Engineering, Institute of Information Science and Technology, Jeju National University, Jeju 63243, South Korea
² Department of Computer Engineering, Major of Electronic Engineering, Institute of Information Science & Technology, Jeju National University, Jeju 63243, South Korea
[email protected]
Abstract. Battery Remaining Useful Life (RUL) prediction is crucial for the predictive maintenance of lithium-ion batteries. This paper presents a study on applying an Explainable Boosting Machine (EBM) for RUL prediction of lithium-ion batteries. EBM is a machine learning technique that combines the benefits of gradient boosting and rule-based systems, making it highly interpretable and suitable for applications in safety-critical domains. We evaluated the performance of EBM compared to other machine learning techniques and demonstrated its superiority in accuracy, interpretability, and robustness. The results show that EBM can accurately predict the RUL of lithium-ion batteries; the interpretability of EBM provides insights into the factors that affect battery RUL and enables a better understanding of battery degradation mechanisms. The proposed method highlights the potential of EBM for battery RUL prediction and helps guarantee the secure and reliable functioning of batteries.

Keywords: Predictive maintenance · Remaining useful life · Battery · Explainable boosting machine

1 Introduction
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 55–66, 2023. https://doi.org/10.1007/978-3-031-44146-2_6

The increasing use of battery-powered equipment in various industries has emphasized the need for effective maintenance processes to ensure reliable operation. Because of their high charge density and extended lifespan, lithium-ion batteries have become a popular option for energy storage. As time passes, however, these batteries degrade, reducing their performance and capacity. Maintenance therefore plays a crucial role in ensuring the proper functioning of battery-powered systems. Preventive maintenance involves regularly scheduled maintenance tasks to prevent breakdowns; however, determining the optimal maintenance intervals can be challenging, leading to excessive or insufficient
maintenance, both costly and inefficient. In recent years, predictive maintenance has gained significant attention as an advanced maintenance strategy. It involves monitoring the health of equipment and predicting when maintenance should be performed based on operational and sensor data. Predictive maintenance is one of the key business opportunities noted in Industry 4.0 [1]. In the context of batteries, predictive maintenance is of utmost importance: it aims to improve the battery's performance and reliability while lowering the costs related to unforeseen battery breakdowns. When equipment is likely to fail, the appropriate repair task is chosen so that a favorable balance between maintenance frequency and expense is reached [2]. Battery health monitoring, analysis of State of Charge (SoC), and prediction of RUL play vital roles in optimizing battery performance and lifespan. The SoC relates the battery's current capacity to its rated capacity, while the RUL is the operational life of a battery remaining under current operating conditions. In this study, we critically analyze and review existing works related to battery RUL prediction. We examine the strengths and weaknesses of various methodologies, including statistical approaches, machine learning techniques, and hybrid models, considering factors such as prediction accuracy, computational efficiency, interpretability, and the ability to handle real-time data. By identifying the limitations and gaps in the existing literature, we aim to provide insights and guidance for future research in this field. Our research contributes to the field of predictive battery maintenance by implementing and evaluating a machine learning model for RUL prediction, with a focus on feature extraction from raw data and on comparing the performance of different models. By accurately predicting battery RUL, operators can optimize maintenance schedules, reduce downtime, and improve the overall reliability and cost-effectiveness of battery-powered systems. The main contributions of this research are:

– Feature extraction from raw data.
– Implementation and evaluation of the model using machine learning.
– Comparison of model performance.

The rest of this paper is organized as follows: Sect. 2 summarizes related research on RUL prediction. Section 3 describes the methodology adopted for this study. Section 4 presents the experimental results and discussion. Finally, we summarize the conclusions and outline potential directions for future study.
2 Related Work
In this section, we provide a critical analysis of the existing works reviewed in the field of battery RUL prediction. We have explored various methodologies and approaches employed in previous studies and evaluated their strengths, weaknesses, and contributions. The studied works include RUL prediction models
based on data, physics, and experience. Experience-based models provide practical estimations but have limitations in terms of accuracy. Physics-based models offer deeper insights but require detailed knowledge and may be limited by complexity. Data-driven models capture complex patterns but require quality data and computational resources. Hybrid approaches and improved interpretability are needed. This section first surveys the literature on the RUL issue, then provides a quick overview of several RUL prediction models, and finally examines the applicability of the RUL problem to lithium-ion batteries.

2.1 RUL Issue
The RUL is the length of time a machine can operate without needing repair or replacement. By tracking RUL, engineers can plan maintenance work, improve operating efficiency, and avert unplanned downtime. As a result, estimating RUL is the highest priority in the predictive maintenance process. RUL has been described in [3] as the period from the present to the end of a system's usable life, and it is utilized to describe the current state of the system's health. The three models of the RUL prediction approach are experience-based, physics-based, and data-driven. An experience-based model is a method that does not rely on the results of a mechanistic model-based system or on the equipment's previous data; because prognosis researchers are more interested in the numerical condition of data, this sort of prognosis still needs to be addressed [4]. The physics-based model uses measurable data and a physical damage model to forecast how deterioration or damage will behave in the future and how the RUL will change. The data-driven model is roughly divided into two stages: learning from previous raw sensor inputs, and using the trained model to make predictions and decisions [5].

2.2 Machine Learning Models for RUL Prediction
A deep learning model for forecasting battery RUL has been proposed in [6]. As the accuracy of RUL prediction for lithium-ion batteries improves, its importance in intelligent battery health management systems rises. The model was applied to real NASA data on lithium-ion battery cycle life, and the experimental results showed that the recommended method can improve RUL prediction accuracy [6]. To decarbonize the transportation industry, cutting-edge EVs with long driving ranges, safety features, and higher reliability must be created and adopted. However, environmental degradation concerns, lithium-ion battery capacity degradation with use, and end-of-life repurposing present significant barriers to their application. The effectiveness of a battery is established in this regard by determining its RUL. Additionally, several ML methods for RUL assessment have been researched in depth and separately, and an application-specific review highlights their benefits in effectiveness and precision [7]. Gaussian Process Regression (GPR) is a supervised ML technique used to address problems of probabilistic classification and regression. The predictions are produced from the statements
through the interpolation process, and among the advantages that Gaussian processes provide is that, since the prediction is stochastic, empirical probabilities may be determined to decide whether to modify it in a specific area of interest [8,9]. The complete categorization of SOH prediction techniques, including model-based, mathematical model-based, data-driven, and hybrid strategies, has been covered in [10], where a thorough review of each approach is presented with recommendations. For RUL prediction and SOH monitoring of lithium-ion batteries, Qu et al. [11] introduced a neural-network-based technique that combines an LSTM network with particle swarm optimization and attention mechanisms. A validation and verification framework for lithium-ion batteries based on the Monte Carlo method was offered by Zhang et al. [12]; they developed the fusion techniques RVM and PF to reduce the training data and applied the framework in a case study. An integrated learning system based on monitoring data has been suggested to fit the lithium battery deterioration model and forecast RUL; Relevance Vector Machine (RVM), Random Forest (RF), Elastic Net (EN), Autoregressive Model (AR), and Long Short-Term Memory (LSTM) networks are the five base learners that make up the ensemble learning approach, which aims to improve prediction performance [13]. Wei et al. [14] merged Monte Carlo Dropout (MC dropout) and the Gated Recurrent Unit (GRU) to account for the uncertainty of RUL prediction results. Additionally, Table 1 details evaluations on the lithium-ion battery RUL issue and shows how this work differs from earlier research.

Table 1. Applications of the RUL issue in lithium-ion batteries.

Ref  | Goal                                                                                | Approach | Dataset
[15] | Suggested a technique to predict SOH and RUL based on LR and GPR                    | LR, GPR  | NASA
[16] | Offered a modified LSTM for RUL prediction and SOH estimation                       | LSTM     | NASA
[17] | Provided a technique for RUL prediction using Artificial Bee Colonies (ABC) and SVR | SVR, ABC | NASA
[18] | Suggested GPR to predict RUL and SOH                                                | GPR      | NASA

3 Methodology of the Study
In this section, we present the suggested strategy and then describe the dataset. The suggested model comprises three primary components: data preprocessing, the prediction model, and evaluation metrics. Collecting battery data and identifying
the crucial aspects through feature extraction, feature selection, and data normalization constitutes data preprocessing; Fig. 1 displays the overall structure for predicting RUL on the battery dataset.
Fig. 1. EBM model framework for RUL battery prediction.
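As a minimal sketch of the three components (preprocessing, prediction model, evaluation), the following stand-in uses min-max normalization and RMSE/MAE metrics; the tiny in-line dataset and the mean-predictor baseline are illustrative placeholders, not the EBM or the actual battery dataset:

```python
import math

def min_max_normalize(column):
    """Scale a feature column to [0, 1] (data preprocessing step)."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def rmse(y_true, y_pred):
    """Root mean squared error (evaluation metric)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error (evaluation metric)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative per-cycle feature (e.g. discharge time) and RUL targets.
feature = min_max_normalize([610, 590, 575, 540, 500])
rul = [1000, 800, 600, 400, 200]

# Placeholder "model": predict the training mean (the EBM would go here).
mean_rul = sum(rul) / len(rul)
pred = [mean_rul] * len(rul)
print(round(rmse(rul, pred), 1), round(mae(rul, pred), 1))
```

In the full pipeline, the placeholder predictor would be replaced by a trained EBM, with the same metrics used to compare models.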
3.1 Proposed Method
The data collected from the battery includes voltage, current, temperature, and other relevant parameters. This data is used to create a model of the battery's behavior over time and to train an EBM model to predict the RUL of the battery. The model employs algorithms to analyze the data and learn the patterns and relationships between the parameters and the battery's RUL. The trained model is then used to predict the RUL of the battery. By employing predictive maintenance for battery RUL, it is possible
to proactively identify when a battery needs maintenance, which can help reduce the costs associated with unexpected battery failures and improve the performance and reliability of the battery.

3.2 Explainable Boosting Machine
The EBM, a novel interpretability method, is a component of the InterpretML framework. EBM is a glass-box model that aims to be as accurate as state-of-the-art machine learning techniques such as RF and Boosted Trees (BT) while remaining highly understandable and explainable. By removing the requirement that the relationship be a simple weighted sum and instead assuming that the outcome is described by a sum of arbitrary functions of each attribute, a Generalized Additive Model (GAM) provides interpretability. A GAM has the form of Eq. 1 [19]:

G(E[y]) = α0 + Σ_{j=1}^{m} f_j(x_j)    (1)
where G is the link function that allows the GAM to be adapted to various settings, including regression and classification. First, EBM employs modern machine-learning methods such as bagging and gradient boosting to train each feature function f_j. The order of the features is irrelevant because the boosting procedure is strictly limited to training on a single feature at a time, in round-robin fashion, with a very low learning rate. The round-robin cycles over the features reduce co-linearity effects and train the optimal feature function f_j for each feature, showing how each feature contributes to the model's prediction. EBMs are very compact, have fast prediction times, and can benefit from combinations of individual features. A boosting technique turns a group of weak learners into strong learners to boost performance. EBM can automatically extract pairwise feature interactions and train features using boosting to increase accuracy. Second, pairwise interaction terms of the following form may be automatically detected and included by EBM, as shown in Eq. 2:

$$G(E[y]) = \alpha_0 + \sum_{j=1}^{m} f_j(x_j) + \sum_{k=1}^{K} f_{i_k, j_k}(x_{i_k}, x_{j_k}) \quad (2)$$

where K is the number of feature pairs; high intelligibility can be achieved by rendering each paired interaction f_{i,j}(x_i, x_j) as a heatmap on the two-dimensional (x_i, x_j)-plane. Although adding interactions does not affect the model's explainability in principle, the final prediction model may become harder to understand when the number of interactions is high.

3.3 Data Explanation
The dataset was compiled by the Hawaii Natural Energy Institute and covers 14 NMC-LCO 18650 batteries with a nominal capacity of 2.8 Ah that were cycled more than 1000 times at 25 °C with a CC-CV charge rate of C/2 and a discharge rate of 1.5C. The dataset contains summaries of the 14 batteries. The characteristics that capture the voltage and current behavior during each cycle were extracted from the original dataset.

3.4 Data Processing
Data preprocessing includes feature extraction, selection, normalization, and segmentation processes. Creating training and testing datasets is a step in data preparation. We divided the battery dataset into 65% training data and 35% testing data. The dataset division into training and testing sets is shown in Table 2.

Table 2. Separation of the training and testing datasets.
No | Data | Rows | Percentage (%)
1 | Total data | 2500 | 100
2 | Training data | 1625 | 65
3 | Test data | 875 | 35
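For concreteness, the 65/35 division in Table 2 can be reproduced in a few lines of Python; the row count of 2500 comes from the table, while the use of shuffling and the seed value are our own illustrative choices:

```python
import random

def split_indices(n_rows: int, train_frac: float = 0.65, seed: int = 0):
    """Shuffle row indices and split them into train/test partitions."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    n_train = int(n_rows * train_frac)
    return idx[:n_train], idx[n_train:]

train_idx, test_idx = split_indices(2500)
print(len(train_idx), len(test_idx))  # 1625 875
```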
3.5 Feature Selection
Each charging and discharging cycle in the dataset consists of a unique set of data points. However, these data points cannot be directly used for model building since the features of each cycle need to be equivalent. One approach to address this challenge is randomly selecting data points from each cycle. Nevertheless, this strategy carries the risk of losing important information. Therefore, it is crucial to consider the behavior of the Li-ion battery. The extracted features must accurately capture the battery’s behavior and remain applicable across various operating conditions and similar batteries for successful forecasting.
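To make the training procedure of Sect. 3.2 concrete, the following toy sketch fits a GAM of the form of Eq. 1 by strict round-robin boosting of one binned shape function per feature with a low learning rate. It is a hypothetical, plain-Python illustration of the idea, not the InterpretML implementation; the bin count, learning rate, and round count are arbitrary choices of ours.

```python
import random

def fit_ebm_like(X, y, n_bins=8, lr=0.1, n_rounds=200):
    """Round-robin boosting of one binned shape function per feature:
    a toy illustration of the GAM/EBM training idea, not the real library."""
    n, m = len(X), len(X[0])
    lows = [min(row[j] for row in X) for j in range(m)]
    highs = [max(row[j] for row in X) for j in range(m)]

    def bin_of(j, v):
        # Equal-width binning of feature j's value into n_bins buckets.
        if highs[j] == lows[j]:
            return 0
        b = int((v - lows[j]) / (highs[j] - lows[j]) * n_bins)
        return min(b, n_bins - 1)

    alpha0 = sum(y) / n                      # intercept (mean response)
    f = [[0.0] * n_bins for _ in range(m)]   # one piecewise-constant f_j per feature
    pred = [alpha0] * n
    for _ in range(n_rounds):
        for j in range(m):                   # strictly one feature at a time
            sums, counts = [0.0] * n_bins, [0] * n_bins
            for i in range(n):
                b = bin_of(j, X[i][j])
                sums[b] += y[i] - pred[i]    # current residual
                counts[b] += 1
            for b in range(n_bins):
                if counts[b]:
                    f[j][b] += lr * sums[b] / counts[b]  # small damped update
            pred = [alpha0 + sum(f[k][bin_of(k, X[i][k])] for k in range(m))
                    for i in range(n)]
    return alpha0, f, bin_of

# Toy additive target: y = 2*x0 + step(x1 > 0.5)
random.seed(0)
X = [[random.random(), random.random()] for _ in range(300)]
y = [2 * a + (1.0 if b > 0.5 else 0.0) for a, b in X]
alpha0, f, bin_of = fit_ebm_like(X, y)
```

Because each round touches only one feature at a time with a small step, the order of features does not matter, mirroring the round-robin cycling described in Sect. 3.2.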
4 Results and Discussion

This section discusses the implementation approach of the proposed process and covers the experimental setup, performance assessment, and machine-learning-based RUL prediction. We present the assessment findings for our proposed paradigm. For all our investigations, we used the system setup outlined in Table 3.
Table 3. Details of the implementation environment.

Number | System Component | Description
1 | Operating system | Windows 10
2 | RAM | 16 GB
3 | CPU | Intel(R) Core(TM) i5-9600K
4 | CPU GHz | 3.70
5 | Programming language | Python 3.8.3
6 | Browser | Google Chrome

4.1 Evaluation Metrics
This paper uses RMSE (Root Mean Square Error), R² (Coefficient of Determination), and Mean Absolute Percentage Error (MAPE) to evaluate RUL prediction performance; together, these metrics provide a comprehensive view of accuracy, goodness of fit, and relative error in the RUL prediction model, as defined in Eqs. 3, 4 and 5:

$$(RMSE)_{RUL} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(RUL_i - \widehat{RUL}_i\right)^2} \quad (3)$$

$$(R^2)_{RUL} = 1 - \frac{\sum_{i=1}^{N} \left(RUL_i - \widehat{RUL}_i\right)^2}{\sum_{i=1}^{N} \left(RUL_i - \overline{RUL}\right)^2} \quad (4)$$

$$(MAPE)_{RUL} = \frac{1}{N}\sum_{i=1}^{N} \left|\frac{RUL_i - \widehat{RUL}_i}{RUL_i}\right| \times 100 \quad (5)$$
where RUL_i represents the target's actual value, \widehat{RUL}_i represents its predicted value, \overline{RUL} is the mean of the actual values, and N is the number of samples. The R² value ranges from 0 to 1 and represents how accurately a model predicts the actual RUL value. We compared the proposed method against different models through a thorough analysis using the battery evaluation metrics (R² and RMSE), as shown in Table 4. The EBM approach achieves good efficiency with an R² of 97%, outperforming the other methods. The results demonstrate that the EBM model performed better than the other approaches. A graphical comparison of the RMSE for the various models is depicted in Fig. 2. The RUL prediction results based on EBM exhibit the highest accuracy and the lowest uncertainty, validating the model's accuracy and robustness.
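Equations 3–5 translate directly into code; the sketch below uses illustrative values rather than the paper's data:

```python
import math

def rmse(actual, pred):
    """Root Mean Square Error, Eq. 3."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / n)

def r2(actual, pred):
    """Coefficient of determination, Eq. 4."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def mape(actual, pred):
    """Mean Absolute Percentage Error, Eq. 5."""
    n = len(actual)
    return 100 / n * sum(abs(a - p) / a for a, p in zip(actual, pred))

# Illustrative RUL values (in cycles), not taken from the dataset.
rul_true = [1000, 900, 800, 700]
rul_pred = [990, 910, 795, 705]
print(rmse(rul_true, rul_pred), r2(rul_true, rul_pred), mape(rul_true, rul_pred))
```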
Table 4. Comparison of RMSE and R².

NUM | Models | RMSE | R² (%)
1 | SVM | 115.02 | 87
2 | KNN | 85.02 | 93.03
3 | Random forest | 77.21 | 94.25
4 | Proposed method | 58.93 | 97
Fig. 2. Comparison of Root Mean Squared Error.
Fig. 3. RUL prediction result training.
Figure 3 illustrates the RUL battery dataset, in which all collected data are normalized to reduce training variability; the blue line shows the actual data, and the light orange line displays the normalized dataset. Using the test set, Fig. 4 illustrates the correlation between the actual and predicted RUL values. The x-axis describes the RUL predictions, while the y-axis denotes the samples or instances. The light blue line describes the actual dataset values, while the orange line represents the predicted values obtained using the suggested model. As shown in Fig. 4, the EBM model demonstrates good prediction performance on the RUL battery dataset.
Fig. 4. RUL prediction result testing.
Popular machine learning models, namely LSTM-Elman and PSO-ELM, were employed alongside the proposed Explainable Boosting Machine to compare their performance in predicting RUL. These models were chosen based on their effectiveness and ability to handle the complexity of the dataset. Table 5 compares the MAPE values for the different RUL prediction methods. The LSTM-Elman method achieved a MAPE of 2.99%, while the PSO-ELM method achieved a MAPE of 1.38%. The proposed Explainable Boosting Machine achieved the lowest MAPE of 0.45%, indicating its superior accuracy in predicting battery RUL.

Table 5. Comparison of MAPE values for various RUL predictions with related work methods.

Method | MAPE (%) | Ref
LSTM-Elman | 2.99 | [20]
PSO-ELM | 1.38 | [21]
Explainable Boosting Machine | 0.45 | Proposed method
5 Conclusion and Future Work

In this study, we have demonstrated the potential of EBM for accurately predicting battery RUL. Combining the benefits of gradient boosting and rule-based systems, EBM provides a highly interpretable solution for RUL prediction in safety-critical domains such as battery management. The suggested EBM model achieves high-precision RUL prediction, with an RMSE of 58.93 and an R² of 0.97. This work accelerates the development of life-cycle and intelligent management systems by offering theoretical support for the expansion of lithium-ion battery systems. Our findings demonstrate that EBM outperforms other cutting-edge machine learning methods. The interpretability of EBM provides insights into the factors that affect battery RUL and enables a better understanding of battery degradation mechanisms. Our work highlights the importance of predictive battery maintenance and opens avenues for further research in this field. In addition to improving the performance of the EBM for RUL prediction and exploring its applications in different battery systems, future work can focus on integrating EBM with other machine-learning methods, such as combining the EBM model with deep learning models or hybridizing it with other predictive modeling approaches, to further enhance the accuracy and robustness of RUL predictions. Additionally, exploring the potential of incorporating domain knowledge or expert rules into the EBM framework can improve the interpretability and explainability of the predictions, allowing for better decision-making in battery maintenance and management.

Acknowledgements. This research was financially supported by the Ministry of Small and Medium-sized Enterprises (SMEs) and Startups (MSS), Korea, under the "Regional Specialized Industry Development Plus Program (R&D, S3246057)" supervised by the Korea Technology and Information Promotion Agency for SMEs (TIPA).
This work was also financially supported by the Ministry Of Trade, Industry & ENERGY (MOTIE) through the fostering project of The Establishment Project of Industry-University Fusion District.
References 1. Vutetakis, D.G., Viswanathan, V.V.: Determining the state-of-health of maintenance-free aircraft batteries. In: Proceedings of the Tenth Annual Battery Conference on Applications and Advances, vol. 2, pp. 13–18 (1995) 2. Xiongzi, C., Jinsong, Y., Diyin, T., Yingxun, W.: Remaining useful life prognostic estimation for aircraft subsystems or components: a review. In: IEEE 2011 10th International Conference on Electronic Measurement & Instruments, vol. 2, pp. 94–98 (2011) 3. Asmai, S.A., Hussin, B., Yusof, M.M.: A framework of an intelligent maintenance prognosis tool, pp. 241–245. IEEE (2010) 4. An, D., Kim, N.H., Choi, J.H.: Practical options for selecting data-driven or physics-based prognostics algorithms with reviews. Reliab. Eng. Syst. Saf. 133, 223–236 (2015)
5. An, D., Kim, N.H., Choi, J.-H.: Synchronous estimation of state of health and remaining useful lifetime for lithium-ion battery using the incremental capacity and artificial neural networks. J. Energy Storage 26, 100951 (2019) 6. Ren, L., Zhao, L., Hong, S., Zhao, S., Wang, H., Zhang, L.: Remaining useful life prediction for lithium-ion battery: a deep learning approach. IEEE Access 6, 50587–50598 (2018) 7. Sharma, P., Bora, B.J.: A review of modern machine learning techniques in the prediction of remaining useful life of lithium-ion batteries. Batteries 9(1), 13 (2023) 8. Said, Z., et al.: Application of novel framework based on ensemble boosted regression trees and Gaussian process regression in modelling thermal performance of small-scale organic rankine cycle using hybrid nanofluid. J. Clean. Prod. 360, 132194 (2022) 9. Saleh, E., Tarawneh, A., Naser, M.Z., Abedi, M., Almasabha, G.: You only design once (YODO): Gaussian Process-Batch Bayesian optimization framework for mixture design of ultra high performance concrete. Constr. Build. Mater. 330, 127270 (2022) 10. Tian, H., Qin, P., Li, K., Zhao, Z.: A review of the state of health for lithium-ion batteries: research status and suggestions. J. Clean. Prod. 261, 120813 (2020) 11. Qu, J., Liu, F., Ma, Y., Fan, J.: A neural-network-based method for RUL prediction and SOH monitoring of lithium-ion battery. IEEE Access 7, 87178–87191 (2019) 12. Zhang, Y., Xiong, R., He, H., Pecht, M.G.: Long short-term memory recurrent neural network for remaining useful life prediction of lithium-ion batteries. IEEE Trans. Veh. Technol. 67(7), 5695–5705 (2018) 13. Wu, J., Kong, L., Cheng, Z., Yang, Y., Zuo, H.: RUL prediction for lithium batteries using a novel ensemble learning method. Energy Rep. 8, 313–326 (2022) 14. Wei, M., Gu, H., Ye, M., Wang, Q., Xu, X., Wu, C.: Remaining useful life prediction of lithium-ion batteries based on Monte Carlo Dropout and gated recurrent unit. Energy Rep. 7, 2862–2871 (2021) 15. 
Yu, J.: State of health prediction of lithium-ion batteries: multiscale logic regression and Gaussian process regression ensemble. Reliab. Eng. Syst. Saf. 174, 82–95 (2018) 16. Li, P., et al.: State-of-health estimation and remaining useful life prediction for the lithium-ion battery based on a variant long short term memory neural network. J. Power Sources 459, 228069 (2020) 17. Wang, Y., Ni, Y., Lu, S., Wang, J., Zhang, X.: Remaining useful life prediction of lithium-ion batteries using support vector regression optimized by artificial bee colony. IEEE Trans. Veh. Technol. 68(10), 9543–9553 (2019) 18. Jia, J., Liang, J., Shi, Y., Wen, J., Pang, X., Zeng, J.: SOH and RUL prediction of lithium-ion batteries based on Gaussian process regression with indirect health indicators. Energies 13(2), 375 (2020) 19. Hastie, T., Tibshirani, R.: Generalized additive models: some applications. J. Am. Stat. Assoc. 82(398), 371–386 (1987) 20. Li, X., Zhang, L., Wang, Z., Dong, P.: Remaining useful life prediction for lithiumion batteries based on a hybrid model combining the long short-term memory and Elman neural networks. J. Energy Storage 21, 510–518 (2019) 21. Yao, F., He, W., Wu, Y., Ding, F., Meng, D.: Remaining useful life prediction of lithium-ion batteries using a hybrid model. Energy 248, 123622 (2022)
Revolutionizing Agricultural Education with Virtual Reality and Gamification: A Novel Approach for Enhancing Knowledge Transfer and Skill Acquisition

Panagiotis Strousopoulos, Christos Troussas(B), Christos Papakostas, Akrivi Krouska, and Cleo Sgouropoulou

Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
{pstrousopoulos,ctrouss,cpapakostas,akrouska,csgouro}@uniwa.gr
Abstract. Agricultural education holds significant importance as it provides comprehensive training in the principles and practices of agriculture, including apiculture, which plays a crucial role in sustainable practices and the cultivation of skilled farmers and apiarists. However, traditional approaches to agricultural education face inherent limitations when it comes to engaging students effectively and facilitating optimal knowledge transfer and skill acquisition. Recognizing this compelling need for innovative methodologies, this research paper introduces a novel approach to apiculture education that harnesses the power of virtual reality (VR) and gamification principles in a captivating application named Beekeeper World. Beekeeper World immerses users in an interactive and engaging VR environment where they assume the role of beekeepers responsible for safeguarding bees from threats, symbolized by spiders. By leveraging digital technologies and gamification techniques, this application aims to cultivate a new generation of passionate and skilled apiarists. This study explores the implications of Beekeeper World for apiculture education and the broader agricultural sector, underscoring the importance of innovative teaching methods in preparing future professionals for the dynamic realm of apiculture and ensuring the long-term sustainability of global food systems. Keywords: Agricultural Education · Apiculture Education · Beekeeping · Gamification · Virtual Reality
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 67–80, 2023. https://doi.org/10.1007/978-3-031-44146-2_7

1 Introduction

Climate change [1, 2], population growth [3, 4], and altering consumer preferences [5] have all recently posed challenges to agriculture. It is now more important than ever to train the next generation of agricultural professionals to manage this difficult terrain. To fulfill these expectations, agricultural education has evolved by embracing novel teaching methods and advanced technologies [6–8]. Apiculture, or beekeeping, is one example
of how pollination services help to support global food systems [9, 10]. The demand for well-trained, competent apiarists is increasing as the need for high-quality honey and other bee-derived products grows. As a result, the way apiculture is taught must change to ensure that new practitioners can effectively address the industry's numerous issues [11]. In parallel, virtual reality and gamification have emerged as powerful tools for enhancing learning and engagement in various fields [12–15], including agriculture. Virtual reality practice is the use of virtual reality technology to simulate realistic training environments and allow individuals to practice tasks or procedures in a safe and controlled setting. In addition, gamification employs game design elements such as points, badges, and leaderboards to motivate learners and foster a more immersive educational experience. By incorporating gamification into digital agricultural education platforms, instructors can cultivate a stimulating learning environment that encourages active participation and sustained interest [16–18]. In apiculture, virtual reality and gamified learning experiences can support the acquisition of essential skills and knowledge, empowering new apiarists to make well-informed decisions and overcome industry challenges. Ultimately, this combination has the potential to transform agricultural education and support the development of skilled professionals in apiculture and beyond. Research studies have shown that virtual reality and gamification have the potential to enhance knowledge transfer and skill acquisition in agricultural education. In [19], the authors discuss the use of virtual reality technology for creating interactive environments to model ecosystem reference conditions. The authors argue that virtual reality can provide a more engaging and effective way to visualize and understand complex ecological systems, particularly for stakeholders involved in ecosystem management and conservation.
In [20], the authors explore the use of virtual reality technology to create digital twins of greenhouse environments and examine the potential of these digital twins for improving human interaction with greenhouses. They claim that virtual reality-based digital twins can provide a more effective and immersive way to interact with greenhouses, allowing stakeholders to explore and interact with the environment in a more intuitive and engaging way. Concerning gamification, in [21], the authors focus on the use of gamification as a tool for enhancing learning in agriculture and livestock technical education. In [22], the authors explore the development of a conceptual model for a rice information system that incorporates gamification and the Soft System Methodology (SSM) to enhance user engagement and effectiveness. They note that existing rice information systems often lack user engagement and fail to effectively meet the needs of stakeholders, and they propose a new approach that integrates gamification and SSM to address complex and ill-defined problems. In conclusion, review works [23, 24] in the field have emphasized the potential benefits of using virtual reality and gamification techniques for enhancing learning outcomes. In view of the above, this paper introduces a novel gamified application called Beekeeper World, designed to enhance apiculture education through an immersive and engaging experience. The application places the user as a robot navigating a randomly generated island with bees, beehives, and flowers. The primary objective is to protect the bees from their adversaries, the spiders, which symbolize the threats these vital pollinators face in real-life scenarios. Beekeeper World encourages users to develop a better understanding of how to preserve the bee population.
The game integrates critical aspects of apiculture, such as foraging, hive management, and the interdependence between bees and their environment. Combining the power of gamification with the crucial subject matter of apiculture, Beekeeper World offers an innovative approach to agricultural education. The application aims not only to increase awareness of the essential role bees play in our ecosystem but also to cultivate a new generation of passionate apiarists equipped to tackle the challenges faced by the industry.
2 Architecture and Implementation This section presents the architecture and implementation of the Beekeeper World application, a gamification-based platform designed to revolutionize agriculture training, specifically in apiculture. The application is built upon a sophisticated network of interconnected components, including the User Interface, Pedagogical and Gameplay models. These core components work synergistically to create a captivating and immersive learning experience, ultimately harnessing the potential of gamification to enhance and transform the traditional approach to apiculture education. 2.1 User Interface Model Menu Scene The main menu’s User Interface (UI) is expertly crafted to prioritize usability and navigation ease, offering an engaging user experience. Upon launching the application, the layout presents the primary options. The Play button allows users to proceed to network type selection and engage with the virtual world by hosting their server or joining an existing one. The Options button enables users to access a submenu containing settings such as changing their name or adjusting the application’s volume. Lastly, the Quit button allows users to exit the application. Game Scene The game scene’s User Interface (UI) aims to deliver an effortless, coherent interaction, enabling gamers to concentrate on fundamental gameplay aspects. A tidy arrangement characterizes the interface, where vital data and functionalities are thoughtfully positioned, guaranteeing straightforward access and comprehension. Crucial components encompassing the UI involve dynamic cues, a reverse chronometer, and an array of menu selections that facilitate customization, observation of participant information, and administration of virtual world assets. User Interface Design A harmonious color scheme enhances the UI’s visual appeal and accessibility (Fig. 1). 
Consistency in graphic design, such as typography and button styles, helps create a seamless application experience. Feedback mechanisms like loading screens and connection status notifications keep users informed of the application’s current state. These principles establish a UI that is easy to navigate and enjoyable for user interaction.
Fig. 1. Dual-Scene Structure - Menu Scene and Interactive Game Environment
2.2 Pedagogical Model Learning Objectives The game’s primary learning objectives are to raise players’ understanding of bees’ vital role in our ecosystem and their challenges. Players learn about foraging, hive management, and bee preservation through engaging gameplay. They will come to appreciate the interdependence between bees, flowers, and their environment, as they work to ensure the survival of their bee colony. The game also teaches resource management by allowing players to sell honey and strategize based on their in-game resources. This encourages players to make informed decisions, balancing their objectives to protect their bees while maximizing honey production. By integrating these educational goals into the core gameplay mechanics, players can gain valuable knowledge and skills related to bee ecology and sustainable practices while enjoying an immersive gaming experience. Feedback Mechanisms Effective feedback mechanisms are critical to the game’s pedagogical model, providing players with essential information about their performance and progress. Performance metrics offer insights into the success of the player’s efforts in various aspects of the game, such as bee protection, honey production, and spider elimination. Points and progress indicators clearly inform players how effectively they accomplish their goals and where they may improve. Players may make better-informed judgments, change their strategy, and acquire a greater knowledge of the game’s fundamental concepts by tracking their progress and receiving real-time feedback. Adaptive Learning The adaptive learning approach enhances the educational experience by personalizing gameplay according to each player’s performance. As players progress and demonstrate their mastery of key concepts, their abilities can be upgraded to provide new challenges and ways to protect their bees. 
This model ensures that players are consistently engaged and motivated to learn while allowing them to apply their newly acquired knowledge in more complex scenarios. Additionally, the game difficulty adjusts over time by increasing
the number of spiders, ensuring that players are continually challenged and encouraged to refine their strategies. As players experience the immediate outcomes of their efforts and thoroughly grasp the game’s instructional objectives, this dynamic learning environment fosters a sense of success and personal progress (Fig. 2).
Fig. 2. Pedagogical Model: Learn to Adapt and Overcome
2.3 Gameplay Model The Players In Beekeeper World, each player assumes the role of a robot to defend bees and their hives from the threat posed by spiders. This multiplayer aspect encourages collaboration and competition, as players work together or against each other to protect the beehive from the spiders’ attacks. As players eliminate spiders, they ensure the safety of the bees and their environment. Additionally, the players engage in the economic aspect of the game by selling honey. The income generated can be used to upgrade the players’ abilities, enhancing their skills in speed and strength. Players may strategize and cooperate to achieve common goals or compete to become the most efficient bee protector. The players’ connection to the beehive and spiders and the multiplayer dimension shape a dynamic and engaging gameplay experience. The Bees, the Beehive, and the Flowers Bees have a vital role in the game because they contribute to the virtual world’s natural cycle and economy. They collect nectar from flowers and carry it to the beehive, which converts it into honey. The resource management center where honey is generated is the beehive. There are four sorts of flowers, each having a random amount of nectar. This diversity enriches the game environment, encourages players to strategize on optimizing nectar collection, and highlights the crucial connection between bees, flowers, and their shared ecosystem. The Spiders Spiders act as antagonists in the game and threaten the bees and their hives (Fig. 3). They symbolize real-life dangers bees face, raising awareness about the significance of bee conservation. Players must ensure the beehive’s safety by eliminating spider
threats, securing the bees’ habitat, and maintaining a healthy balance in the game’s ecosystem. The spiders’ connection to the beehive and bees introduces challenges and enables strategic thinking. Players are encouraged to learn about protecting bees and their environment while engaging in exciting gameplay.
Fig. 3. Dynamic Gameplay: Interactions and Roles within the Game’s Atmosphere
2.4 Combined Architecture

The game blends an intuitive UI, a robust pedagogical model, and captivating gameplay to create an engaging and educational experience. The UI model offers an appealing interface with consistent design elements, enabling seamless navigation and interaction. The pedagogical model teaches valuable lessons such as bee preservation and strategic decision-making, using performance metrics and adaptive learning to challenge players and enhance their skills, and the gameplay model features diverse agents such as player-controlled robots, bees, and spiders, forming intricate relationships that mirror real-life ecological systems. The multiplayer aspect promotes collaboration and competition, as players protect the beehive and pursue their objectives. The game's interconnected models establish an entertaining and educational gaming experience, highlighting the crucial interrelationship between bees and their ecosystem.
3 Game Rationale

This section investigates the underlying mechanics and rationale of Beekeeper World, a cooperative multiplayer game focused on a dynamic ecosystem, resource management, and strategic decision-making. The game is set on a randomly generated island where players, taking the role of robots, must manage resources effectively and make strategic decisions to ensure the survival of their beehive against predatory spiders. The application promotes player engagement while enhancing the game's educational value, balancing realism and enjoyment.

3.1 World Creation

World creation is one of the most critical assets of the game. Each world is unique while remaining consistent with previously generated worlds, thanks to the procedural generation that runs every time a new session starts. Various elements, such as trees, rocks, plants, and flowers, are generated to fill the ecosystem (Fig. 4). These ever-changing worlds are critical for motivating players to replay the game and improve their abilities.
Fig. 4. A screenshot of a randomly created environment in Beekeeper World
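Although Beekeeper World runs in a game engine, the seeded procedural-generation idea described above can be sketched in a few lines of Python. The element names, counts, coordinate range, and nectar range below are our own placeholders, not values from the actual game:

```python
import random

# Placeholder names for the game's four flower types (not from the paper).
FLOWER_TYPES = ["daisy", "lavender", "clover", "poppy"]

def generate_island(seed, size=50, n_trees=30, n_rocks=15, n_flowers=40):
    """Seeded procedural placement: the same seed always yields the same
    world, while different seeds give unique-but-consistent islands."""
    rng = random.Random(seed)

    def spot():
        return (rng.randrange(size), rng.randrange(size))

    return {
        "trees": [spot() for _ in range(n_trees)],
        "rocks": [spot() for _ in range(n_rocks)],
        # Each flower gets a type and a random nectar amount, as in the game.
        "flowers": [{"pos": spot(), "type": rng.choice(FLOWER_TYPES),
                     "nectar": rng.uniform(0.1, 1.0)}
                    for _ in range(n_flowers)],
    }
```

Deriving every placement from one seeded generator is what keeps each world reproducible yet distinct across sessions.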
3.2 Agents’ Artificial Intelligence In the game environment, three distinct agents interact to create a dynamic and engaging experience: player-controlled robots, computer-controlled bees, and spiders (Fig. 5). A Unity library powers the artificial intelligence of both bees and spiders for pathfinding. Bees are programmed to seek out the nearest available flower. A flower is considered available if it contains nectar and is not currently being accessed by another bee. This
availability system introduces strategic decision-making for the players who can predict the movement of each bee. Spiders’ behavior is programmed to move randomly in the world until they detect a bee or a beehive within their field of view. Once a beehive is detected, spiders attempt to destroy it. If they encounter a bee, they try to capture and consume it. Bees are generally faster than spiders but can become vulnerable when distracted by their primary objective, collecting nectar. The interactions of these three characters - robots, bees, and spiders - in Beekeeper World create an immersive game universe that forces players to think critically and adapt to every terrain.
Fig. 5. A screenshot of a spider hunting a bee in Beekeeper World
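The bee and spider behaviors described above reduce to two simple rules, sketched here outside the engine in hypothetical Python; the field-of-view radius, step size, and data layout are our assumptions, and the in-game pathfinding is handled by a Unity library rather than this straight-line logic:

```python
import math, random

def nearest_available_flower(bee_pos, flowers):
    """A flower is available if it still has nectar and no other bee claims it."""
    candidates = [f for f in flowers if f["nectar"] > 0 and not f["occupied"]]
    if not candidates:
        return None
    return min(candidates, key=lambda f: math.dist(bee_pos, f["pos"]))

def spider_step(spider_pos, targets, fov_radius=5.0, rng=random.Random(0)):
    """Move one unit toward the first bee/hive within the field of view,
    otherwise wander one unit in a random direction."""
    for t in targets:
        if math.dist(spider_pos, t) <= fov_radius:
            dx, dy = t[0] - spider_pos[0], t[1] - spider_pos[1]
            norm = math.hypot(dx, dy) or 1.0
            return (spider_pos[0] + dx / norm, spider_pos[1] + dy / norm)
    angle = rng.uniform(0, 2 * math.pi)
    return (spider_pos[0] + math.cos(angle), spider_pos[1] + math.sin(angle))
```

The availability check is what lets players predict each bee's next target, since a claimed or empty flower drops out of the candidate set.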
3.3 Player, Upgrades, and Strategies In Beekeeper World, players must carefully strategize and make decisions to ensure the survival of their beehive while maximizing their score. Central to the player’s experience is the delicate balance between selling honey and upgrading their attack capabilities (Fig. 6). Bees produce honey that can be sold for money, with its value randomly increasing by 0.01 to 0.02 euros per kilogram each second. However, players must be cautious, as beehives can only store a limited amount of honey, ranging from 3.5 to 4 kg. Additionally, the capacity of a beehive is permanently reduced with each hit from a spider. The game offers two primary upgrades: attack power and attack speed. In multiplayer scenarios, players may choose to allocate resources differently, considering the strength of their teammates and the need to maintain a balanced team dynamic. As the game progresses, the number of spiders spawning increases to a maximum, presenting an escalating challenge to players. Players can earn extra money by killing multiple spiders with a single hit, incentivizing strategic and precise combat. The learning curve for new players primarily involves trial and error, with experience gradually leading to improved performance and more advanced strategies.
Revolutionizing Agricultural Education with Virtual Reality and Gamification
75
Communication and collaboration between players during multiplayer games can also impact their decision-making processes and overall strategies. Experienced players may approach the game differently, utilizing unique tactics and prioritizing specific aspects based on their familiarity with game mechanics. As the game evolves, feedback and observed gameplay patterns will be vital for identifying areas of potential improvement, ensuring that the player experience remains engaging and challenging while offering a satisfying sense of progression and accomplishment.
Fig. 6. A screenshot of the market menu in Beekeeper World
3.4 Scoring and Performance Evaluation The Scoring and Performance Evaluation system is intended to evaluate players' efficiency and effectiveness in several parts of the game (Fig. 7). The evaluation considers the number of spiders killed, bees killed unintentionally, honey remaining in the beehive, bees rescued at the conclusion, and the player's overall effectiveness in preserving bees and managing resources. This multifaceted scoring approach ensures that players are rewarded for excelling in different areas, promoting balanced gameplay and a diverse range of strategic approaches. The number of spiders killed directly measures a player's combat proficiency and ability to defend the beehive. Bees killed by accident indicate a lack of precision, while the bees saved at the end highlight the game's educational value: the player's connection with the bees. The amount of honey remaining in the beehive is another crucial element of the scoring system, reflecting a player's capacity to manage resources effectively. Players are encouraged to sell honey strategically, considering the fluctuating market value and the risk of overfilling the beehive. Maintaining a healthy honey stockpile requires players to balance their need for funds to purchase upgrades against the potential for increased honey value over time.
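The multifaceted scoring described above can be sketched as a weighted function over the listed factors. The weights below are assumptions chosen only to illustrate the structure (rewarding combat, conservation, and resource management while penalizing imprecision); the paper does not publish the actual values.

```python
def compute_score(spiders_killed, bees_killed_by_accident,
                  honey_remaining_kg, bees_rescued, multi_kills=0):
    """Illustrative scoring sketch; all weights are assumptions,
    not Beekeeper World's actual values."""
    score = 0
    score += spiders_killed * 10           # combat proficiency
    score -= bees_killed_by_accident * 20  # penalize lack of precision
    score += int(honey_remaining_kg * 50)  # resource management
    score += bees_rescued * 15             # bee conservation
    score += multi_kills * 5               # bonus for multi-spider hits
    return score

# Example: 3 spiders killed, 1 bee lost, 2 kg of honey left, 4 bees rescued.
print(compute_score(3, 1, 2.0, 4))  # -> 170
```

The point of the structure is that no single factor dominates: a combat-focused player who loses bees can score lower than a careful conservationist.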
76
P. Strousopoulos et al.
Fig. 7. A screenshot of the player’s score in Beekeeper World
4 Evaluation The evaluation of the players' interaction with Beekeeper World was divided into two phases to comprehensively understand its educational and entertainment value. The evaluation was conducted as a cognitive walkthrough, a method aimed at assessing the players' experience and comprehension of the game as well as their enjoyment and engagement while playing it. The evaluation team focused on how the players' learning process unfolded, how they developed and adapted their strategies, and their engagement level throughout the game. In total, 50 undergraduate students from the Department of Informatics and Computer Engineering of the University of West Attica participated. All participants were between 21 and 23 years old and were writing their bachelor's theses. The students used the prototype application for a one-hour session under the guidance of the evaluators. After the completion of the process, all students were asked two main questions to gauge their perception of the game's educational and entertainment value: 1. On a scale from 1 to 3, with 1 being "not educational" and 3 being "very educational," did you feel that playing Beekeeper World increased your understanding of bee conservation and the challenges bees face? 2. On a scale from 1 to 3, with 1 being "not fun" and 3 being "very fun," how would you rate your experience playing Beekeeper World? The answers revealed that 2 students found the educational value low, 4 students found it medium, and 44 students found it high. Regarding the fun factor, 1 student found it low, 2 students found it medium, and 47 students found it high (Figs. 8, 9). The participants were observed learning the game mechanics. This learning process was primarily experiential and involved a trial-and-error methodology. The game's design
effectively conveyed the importance of protecting bees, a vital aspect of the game, while providing an engaging and entertaining experience for the players. Moreover, the game's resource management and scoring systems played a significant role in reinforcing the educational message of the game. As the players progressed, they were observed making strategic decisions and adapting their gameplay to optimize their scores. This indicated that the players were internalizing the game's educational content, highlighting the game's success in integrating education with entertainment. Pie charts were used to visualize the participants' feedback on the two main aspects of the game: the educational value and the fun factor. The pie charts allowed for a clear and concise representation of the players' opinions, making it easy to identify trends and patterns in their responses.
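The percentage shares behind the pie charts follow directly from the reported counts; a minimal sketch, using only figures stated in the text:

```python
# Reported counts from the 50-participant evaluation.
educational_value = {"low": 2, "medium": 4, "high": 44}
fun_factor = {"low": 1, "medium": 2, "high": 47}

def shares(counts):
    """Convert raw response counts into the percentage shares
    visualized in the pie charts."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

print(shares(educational_value))  # {'low': 4.0, 'medium': 8.0, 'high': 88.0}
print(shares(fun_factor))         # {'low': 2.0, 'medium': 4.0, 'high': 94.0}
```

In other words, 88% of participants rated the educational value high and 94% rated the fun factor high, which is the trend the charts make visible.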
Fig. 8. Educational Value Pie Chart
The pie charts revealed that a significant majority of the participants found the game enjoyable and engaging, indicating that it successfully achieved its dual objectives of education and entertainment.
Fig. 9. Fun Factor Pie Chart
5 Conclusion In conclusion, this paper examined the multifaceted nature of Beekeeper World, a game developed to raise awareness about bee conservation while delivering an engaging and enjoyable gameplay experience. The analysis demonstrated the game's effectiveness in accomplishing its dual goals of education and entertainment. Beekeeper World's core mechanics, resource management, and scoring system are crucial in fostering strategic thinking and decision-making abilities among players. Moreover, the game's design pushes players to adapt to evolving challenges, such as the increasing number of spiders and the pressing need to safeguard the bees and their hive. The game's distinctive blend of educational content and entertainment value resonated with many players, successfully communicating the significance of protecting bees while providing an absorbing gaming experience. The players' evaluations revealed that Beekeeper World effectively achieved its aims, with a considerable majority of participants commending the game for its educational merit and entertainment factor. Beekeeper World exhibits the potential to inspire players to take decisive action on critical environmental issues and sets a precedent for future games seeking to incorporate educational components. By continuously refining and enhancing the game's educational aspects, Beekeeper World can strengthen its impact on players, equipping them with the knowledge and motivation to become proactive agents of change in the quest for bee conservation and environmental sustainability.
The Game Designer’s Perspectives and the DIZU-EVG Instrument for Educational Video Games Yavor Dankov(B) Faculty of Mathematics and Informatics, Sofia University “St. Kliment Ohdridski”, Sofia, Bulgaria [email protected]
Abstract. The paper presents a model of the educational video game designer's perspectives and its possible application to the design, creation, and improvement of educational video games. The model describes the designer's three perspectives and their corresponding characteristics; the designer's role is fundamental to the appropriate and successful design, creation, and improvement of educational video games, and the model can be applied in developing various types of them. The paper also presents the possibility of using the model in combination with the DIZU-EVG instrument for visualizing gaming and learning results to design, create, and improve educational maze video games in the APOGEE platform. Through the instrument's functionality for visualizing customized (personalized) dashboards, the results of the played game sessions of players and learners and the statistics of the designed and played educational video games are presented to the users and the designer. The model can be widely used in developing and improving different types of educational video games in various fields, focusing precisely on the critical role of the designer and its characteristics. Using the model in combination with the DIZU-EVG instrument will provide designers an opportunity for appropriate and purposeful design and improvement of educational video games based on visually analyzed results. Keywords: Educational video games · Serious games · Game-based learning · DIZU-EVG instrument · Designer's Perspectives
1 Introduction Designing educational video games is a labor-intensive process that requires a wide range of professionals with diverse knowledge in various fields of education [1]. Realizing a successful educational video game often requires extensive research and planning of all stages and activities involved in the design and creation of the game and its testing and © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 81–90, 2023. https://doi.org/10.1007/978-3-031-44146-2_8
improvement [2]. Usually, the design process begins with a set concept of a goal [3, 4] (for example, creating an educational video game in the field of history for learning historical material) and a target group of users who will play the game and learn through it (for example, high school learners who will use the game in their learning process) [5]. Game-based learning is among the key advantages of educational video games, providing high efficiency, a rich gaming experience, and high receptiveness among the users of these games [3, 6–9]. Among the key activities when starting the design process is researching the target group of users for whom the game is intended [10]. This includes researching and analyzing the characteristics of all potential users (learners and players) through various methods of gathering the necessary information, such as surveys before the start of the design process, interviews with users, and many other techniques [11]. The study of user characteristics, such as their static and dynamic characteristics, will support the process of designing the educational video game by enabling its customization (personalization) according to the analyzed characteristics of learners and players [10, 12]. The results of these studies are analyzed, and the necessary conclusions are drawn that will influence the overall design process [10]. This will directly affect and support the performance of the set tasks and the successful realization of the set goals in the educational video game. Usually, all these activities are carried out by the designer of educational video games.
The designer’s work is multi-layered and complicated and often involves the in-depth and careful analysis of all available information from various sources [13]. This is done to design and create a suitable and effective educational video game for the relevant users that meets all the defined requirements and simultaneously provides a quality gaming and learning user experience appropriate to the user’s environment, specially designed for the users. [14–16]. This paper focuses specifically on the designer’s key role in designing, creating, and improving the design of educational video games. The designer has the critical task of analyzing all the information. This can be done both before the educational video game design process begins and after learners and players have played the games. This will affect and determine the planning and implementation of all design activities following the set goals, tasks, and defined users. After the educational video game is created, it is moved to its use by the users - learners and players. All activities related to the use of the game generate a large amount of data that both the designer and the users must examine to obtain feedback on the results achieved. That is why the DIZU-EVG (Data visualIZation instrUment for Educational Video Games) instrument was designed to visualize game and learning results from educational video games [1]. The concept of the DIZU-EVG instrument is presented in the previous studies of the author of this paper [17] and a detailed description of the tool’s functionalities, as well as the process of its use, are presented in [18]. This paper presents a model of the educational video game designer’s perspectives and the possible application of this model to design, create, and improve the design of educational video games. The model presents the three perspectives of the designer and the corresponding characteristics. The designer’s role is fundamental to the appropriate
The Game Designer’s Perspectives and the DIZU-EVG Instrument
83
and successful design, creation, and improvement of the design of educational video games. The model can be applied in developing various types of educational video games. This paper also presents the possibility of using the model in combination with the DIZU-EVG instrument for visualizing gaming and learning results to design, create, and improve educational maze video games in the APOGEE platform [19]. With the help of the DIZU-EVG instrument and through its functionality for visualizing customized (personalized) dashboards, the results of the played game sessions of players and learners and the statistics of the designed and played educational video games are visualized and presented to the users and the designer. The presented model can be widely used in developing and improving different types of educational video games in various fields, focusing precisely on the critical role of the designer and its characteristics. Using the model in combination with the DIZU-EVG instrument will provide designers an opportunity for appropriate and purposeful design and improvement of educational video games based on visually analyzed results. The paper continues with the presentation of the model of the game designer's perspectives, detailed in Sect. 2 of this document. The three perspectives are presented in three subsections within Sect. 2. Section 3 introduces the use of the model in combination with the DIZU-EVG instrument for the visualization of gaming and learning results. The paper ends with a conclusion.
2 The Proposed Model of the Game Designer's Perspectives The paper presents the model of the educational video game designer's perspectives (Fig. 1). The model presents the three perspectives of the game designer and the relevant designer's role in each perspective. The designer's role is fundamental to the purposeful and successful design, creation, and improvement of the designed educational video games. Each perspective is characterized by a specific role that the designer can assume in designing, creating, and improving educational video games, as well as by the designer's specific and characteristic behavior in certain circumstances. These are the roles of a Game Creator, a Gaming Content Tester, and a Learning Content Tester of the educational content integrated into the game designed by the particular designer. Therefore, the model visually provides a starting point for designers (unfamiliar with the specialized processes of designing, creating, and improving educational video games), perceiving the designer from three different perspectives with distinct design roles. Using the model, especially by designers who are educators (e.g., non-IT professionals), will benefit them in the initial design process of video games. This will further contribute to a better understanding of the different perspectives of designers (specialists and non-professionals) and their respective roles, and contribute to designers' increased attention to the processes of designing educational video games and their future development and improvement.
84
Y. Dankov
Fig. 1. Model of the Educational Video Game Designer’s Perspectives
The subsequent subsections of the paper describe the three perspectives of the designer illustrated in the model. 2.1 Perspective I - Game Designer as Game Creator of Educational Video Game The first perspective of the model is "Perspective I - Game Creator." This perspective perceives the game designer in the role of the educational video game's creator - the individual responsible for the overall creative process of designing and creating the game that the target group of learners and players will use. In the role of creator, the designer must determine the game's main goals and the target group of users for whom the game is intended. All available user data and data on the application area of the educational video game must be processed and analyzed, allowing the identification of the main content characteristics and supporting decisions on the appropriate inclusion of this content in the video game. The designer must decide on the proper gaming and learning content to be integrated into the game, considering many factors, such as: • The defined goals of the educational video game. What the content should be so that it is as appropriate as possible for the learners to achieve the set goals; • The defined type of educational video game. The chosen type will predetermine the selection and distribution of the relevant gaming and learning (educational) content integrated into the game's virtual environment.
The Game Designer’s Perspectives and the DIZU-EVG Instrument
85
• The characteristics of the application area (discipline, subject, educational area). It predetermines the primary theme of the educational video game. • The static and dynamic characteristics of learners and players. It is also possible to consider the parameters of the players’ and learners’ characteristic playing and learning styles [20]. Based on the individual characteristics of the learners and the players, designers can select and customize the content to some extent, meeting these users’ specific needs and requirements [10]. After the selection and integration of the gaming and learning content in the designed game, the process continues to the next steps of the game development - these are the processes for implementing the designed game and generating the game’s end version. In many cases, the designer is a teacher who does not have to be a specialist in information technology to implement the whole video game himself. Therefore, when developing video games, the game’s creation is aided by information technology specialists who implement the video game and provide the finished software product for end users. This process is complex and requires a multidisciplinary team of designers and developers. An example of an alternative to this challenge is the APOGEE platform, which provides the opportunity to design and automatically generate educational maze video games from people who are not specialists in information technology [18]. The author of this article has made a significant contribution to the development of the platform and its enrichment with assistive and analytical tools that assist the designer in designing and developing educational video games [19, 21–23]. 
2.2 Perspective II - Game Designer as Tester of the Designed and Integrated Learning Content The second perspective of the model is "Perspective II - Learning Content Tester." This perspective perceives the game designer in the role of a tester of the created (designed), selected, and integrated educational content in the designed educational video game. In this role, the designer has complete freedom to play the game themselves, focusing on the learners' experience and testing the learning content of the game. This can happen before the game's final version is produced, to validate whether the content meets the game design criteria, defined goals, tasks, etc. Numerous software platforms for designing educational video games offer an interactive environment for creating them, such as Unity [24]. The Unity interactive interface makes it possible to pre-test a prototype of the designed game so that the designer can test and validate the game's design and integrated content. In the presence of discrepancies with the criteria set in the game's design, the designer must use Perspective I of the presented model and take the role of the game creator to apply the necessary changes to the game's educational content. As a tester of the designed and integrated educational content, the designer is responsible for testing and validating this content. The educational content is among the most important content in an educational video game, which users need to perceive and learn while playing. This content must meet the set goals, the field of application, the characteristics of the learners, etc., to provide an
86
Y. Dankov
effective and easy opportunity for users to learn the material and an enriched learning experience. 2.3 Perspective III - Game Designer as Tester of the Designed and Integrated Gaming Content The third perspective of the model is "Perspective III - Gaming Content Tester." This perspective perceives the game designer as a tester of the designed, selected, and integrated gaming content in an educational video game. In the role of a gaming content tester, the designer must validate whether the gaming content integrated into the video game meets the requirements and criteria established when designing the game. The gaming elements integrated into the virtual environment should provide an enjoyable gaming experience for the users [11]. As a gaming content tester, the designer analyzes and evaluates whether the integrated content meets the set requirements for the game. Gaming content testing and requirements validation can be implemented in both ways - after the creation of the game's final version (by testing the finished product) and during design (by testing a prototype of the educational video game). Therefore, in the presence of inconsistencies with the set requirements, the designer must analyze the results and draw general conclusions about the gaming content and its possible correction. In these circumstances, the designer must step into the role of the game creator (Perspective I of the presented model) to make specific design decisions and the necessary adjustments to the gaming content.
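The three perspectives and the return-to-creator transition described in Sects. 2.1–2.3 can be sketched as a small state model. The enum and function names below are illustrative assumptions, not part of the published model; the sketch only encodes the rule that a tester who finds inconsistencies switches back to the Game Creator role.

```python
from enum import Enum

class DesignerPerspective(Enum):
    """The three perspectives of the proposed model."""
    GAME_CREATOR = "I"
    LEARNING_CONTENT_TESTER = "II"
    GAMING_CONTENT_TESTER = "III"

def next_perspective(current, content_meets_requirements):
    """If testing (Perspective II or III) reveals that content does not
    meet the set requirements, the designer returns to Perspective I
    to make the necessary adjustments; otherwise they stay in role."""
    if current is DesignerPerspective.GAME_CREATOR:
        return current
    if content_meets_requirements:
        return current
    return DesignerPerspective.GAME_CREATOR
```

The design point captured here is that Perspective I is the only role in which changes are made; Perspectives II and III only evaluate and feed back into it.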
3 Using Game Designer’s Perspective Model in Combination with the DIZU-EVG Instrument This paper presents the possibility of using the presented model in combination with the DIZU-EVG instrument for visualizing gaming and learning results to design, create and improve educational maze video games in the APOGEE platform. The DIZU-EVG instrument provides diverse functionalities for the designers of educational video games and the learners and players of these games. Among the fundamental functionalities of the instrument is specifically the visualization of the results of the educational video games through customized (personalized) dashboards, accessible to the relevant users (game designers, learners, and players) in the APOGE platform. In this paper, the focus is on using the presented model of the designer’s perspectives in combination with the essential functionality of the DIZU-EVG tool to visualize results through customized dashboards that educational video game designers can use. The presented model in combination with the DIZU-EVG instrument for visualizing gaming and learning results are used especially to design, create and improve educational maze video games in the APOGEE platform. Using the model combined with the DIZU-EVG tool summarizes the possible benefits for the designer.
The Game Designer’s Perspectives and the DIZU-EVG Instrument
87
3.1 DIZU-EVG Instrument and Game Designer's Perspective I In the first perspective of the model (Perspective I), the designer takes the role of a creator of educational video games. The designer's task is to design and create an educational video game for a specific target group of users to play and learn through. After the game sessions, the results of the games played by the users are generated, and the DIZU-EVG tool provides the ability to visualize these results through customized dashboards. Through the specialized and customized design dashboard of the DIZU-EVG tool, the designer can visually analyze the results and statistics regarding the educational video games they designed. This is implemented with the "Visualize Game Designer Dashboard" functionality, which visualizes a dashboard for the designer, described in detail in the author's previous publication [1]. The DIZU-EVG instrument and the visualized designer's dashboard provide an opportunity for designers to visually analyze the results achieved by the users who have played the game and learned the educational content through it. These visualized results will support decision-making for designing better and more personalized educational video games on the APOGEE platform. Based on the visualized aggregated statistics and information about the results achieved within the designed games, the designer will have the opportunity to take the necessary actions to improve the game design and to plan new educational video game designs and future game development. 3.2 DIZU-EVG Instrument and Game Designer's Perspective II In the second perspective of the presented model (Perspective II), the designer is a tester of the designed and integrated learning content in the educational maze video game.
In this role, the functionality of the DIZU-EVG instrument can be used to visualize the results of the learners’ playing sessions related to their learning: the degree to which they absorb the learning material (content), as well as their satisfaction and learning user experience. This is realized through the “View Learner Profile Dashboard” functionality, described in [1]. The DIZU-EVG instrument visualizes a personalized dashboard reflecting all results related to learners’ perception of the learning content; this dashboard is presented to learning content designers only for the games designed by the particular designer. This contributes to the analysis and evaluation by the designer (Perspective II) of the quality of the learning content integrated in the video game, and allows timely action to improve and validate this content according to the set indicators for achieving the goals of the educational maze video game.

3.3 DIZU-EVG Instrument and Game Designer’s Perspective III

In the third perspective of the presented model (Perspective III), the designer is the tester of the designed and integrated gaming content for the created educational video game. The validation of the gaming content, according to the defined requirements, can be assisted by the DIZU-EVG instrument’s functionality for visualizing user game results,
88
Y. Dankov
through a personalized dashboard for players (the “View Player Dashboard” functionality, described in detail in [1]). Therefore, using the DIZU-EVG tool, the designer in the role of a gaming content tester (Perspective III) is significantly assisted in the visual analysis of the players’ results after game sessions of the video games designed and created by the specific designer. This helps develop design decisions to improve the game user experience and achieve higher player satisfaction and commitment to the game.
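As a rough illustration of the kind of aggregation such a designer dashboard might perform, the sketch below groups session records into per-game statistics. The field names, sample data, and logic are our own assumptions for illustration only, not the actual DIZU-EVG implementation.

```python
# Hypothetical sketch of a designer-dashboard aggregation; field names
# and data are invented, not taken from the DIZU-EVG instrument.
from collections import defaultdict
from statistics import mean

# One record per played session (invented sample data)
sessions = [
    {"game": "Maze A", "player": "p1", "score": 80, "learning": 0.7},
    {"game": "Maze A", "player": "p2", "score": 55, "learning": 0.4},
    {"game": "Maze B", "player": "p1", "score": 90, "learning": 0.9},
]

# Group sessions by game, then compute per-game summary statistics
per_game = defaultdict(list)
for s in sessions:
    per_game[s["game"]].append(s)

dashboard = {
    game: {
        "plays": len(rows),
        "avg_score": mean(r["score"] for r in rows),
        "avg_learning": round(mean(r["learning"] for r in rows), 2),
    }
    for game, rows in per_game.items()
}
print(dashboard["Maze A"])
```

A real dashboard would render such summaries visually per designer, but the underlying grouping of session results by game is the core of the statistics the text describes.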
4 Conclusion and Future Work

This paper presented the model of the educational video game designer’s perspectives, describing the designer’s three perspectives and their corresponding characteristics. The paper also presented the possibility of using the model in combination with the DIZU-EVG instrument for visualizing gaming and learning results to design, create, and improve educational maze video games in the APOGEE platform. With the help of the DIZU-EVG instrument and its functionality for visualizing customized (personalized) dashboards, the results of the played game sessions of players and learners, as well as statistics on the designed and played educational video games, are visualized and presented to the users and the designer. Using the model in combination with the DIZU-EVG instrument through such dashboards provides an opportunity for appropriate and purposeful design and improvement of educational video games based on results visually analyzed by designers. Future work will continue using the presented model and the DIZU-EVG instrument to design various educational video games and, based on the visual analysis of the results, to make informed decisions for improving their designs.

Acknowledgements. This research is supported by the Bulgarian Ministry of Education and Science under the National Program “Young Scientists and Postdoctoral Students – 2”.
References

1. Dankov, Y.: DIZU-EVG – an instrument for visualization of data from educational video games. In: Silhavy, R., Silhavy, P. (eds.) Software Engineering Research in System Science. CSOC 2023. Lecture Notes in Networks and Systems, vol. 722, pp. 769–778. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-35311-6_73
2. Ekin, C., Polat, E., Hopcan, S.: Drawing the big picture of games in education: a topic modeling-based review of past 55 years. Comput. Educ. 194, 104700 (2023). https://doi.org/10.1016/j.compedu.2022.104700
3. Clark, D., Hernández-Zavaleta, J., Becker, S.: Academically meaningful play: designing digital games for the classroom to support meaningful gameplay, meaningful learning, and meaningful access. Comput. Educ. 194, 104704 (2022). https://doi.org/10.1016/j.compedu.2022.104704
4. Lesmes, C.Z., Acosta-Solano, J., Benavides, L.B., Umaña Ibáñez, S.F.: Design and production of educational video games for the inclusion of deaf children. Procedia Comput. Sci. 198, 626–631 (2021). https://doi.org/10.1016/j.procs.2021.12.297
5. Rahimi, F., Kim, B., Levy, R., Boyd, J.: A game design plot: exploring the educational potential of history-based video games. IEEE Trans. Games 12(3), 312–322 (2020). https://doi.org/10.1109/TG.2019.2954880
6. Law, E., Sun, X.: Evaluating user experience of adaptive digital educational games with activity theory. Int. J. Hum.-Comput. Stud. 70(7), 478–497 (2012). https://doi.org/10.1016/j.ijhcs.2012.01.007
7. Acquah, E.O., Katz, H.T.: Digital game-based L2 learning outcomes for primary through high-school students: a systematic literature review. Comput. Educ. 143, 103667 (2020). https://doi.org/10.1016/j.compedu.2019.103667
8. Vinter, A., Bard, P., Duplessy, H.L., Poulin-Charronnat, B.: A comparison of the impact of digital games eliciting explicit and implicit learning processes in preschoolers. Int. J. Child-Comput. Interact. 34, 100534 (2022). https://doi.org/10.1016/j.ijcci.2022.100534
9. Bainbridge, K., et al.: Does embedding learning supports enhance transfer during game-based learning? Learn. Instr. 77, 101547 (2022). https://doi.org/10.1016/j.learninstruc.2021.101547
10. Xu, Z., Zdravkovic, A., Moreno, M., Woodruff, E.: Understanding optimal problem-solving in a digital game: the interplay of learner attributes and learning behavior. Comput. Educ. Open 3, 100117 (2022). https://doi.org/10.1016/j.caeo.2022.100117
11. Mylonas, G., Hofstaetter, J., Giannakos, M., Friedl, A., Koulouris, P.: Playful interventions for sustainability awareness in educational environments: a longitudinal, large-scale study in three countries. Int. J. Child-Comput. Interact. 35, 100562 (2023). https://doi.org/10.1016/j.ijcci.2022.100562
12.
Terzieva, V., Bontchev, B., Dankov, Y., Paunova-Hubenova, E.: How to tailor educational maze games: the student’s preferences. Sustainability 14(11), 6794 (2022). https://doi.org/10.3390/su14116794
13. Urgo, M., Terkaj, W., Mondellini, M., Colombo, G.: Design of serious games in engineering education: an application to the configuration and analysis of manufacturing systems. CIRP J. Manufact. Sci. Technol. 36, 172–184 (2022). https://doi.org/10.1016/j.cirpj.2021.11.006
14. Deykov, Y., Andreeva, A.: Current aspects of the virtual design of expo-environment – gallery, museum, church. In: V International Conference Modern Technologies in Cultural Heritage, vol. 5, pp. 17–22. Technical University of Sofia (TU-Sofia) (2017). ISSN 2367-6523
15. Andreeva, A.: Design of experimental exhibition space. Bulgar. J. Eng. Des. 37, 27–33. Mechanical Engineering Faculty, Technical University of Sofia (TU-Sofia) (2018). ISSN 1313-7530
16. Dankov, Y., Antonova, A., Bontchev, B.: Adopting user-centered design to identify assessment metrics for adaptive video games for education. In: Ahram, T., Taiar, R. (eds.) Human Interaction, Emerging Technologies and Future Systems V. IHIET 2021. Lecture Notes in Networks and Systems, vol. 319. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-85540-6_37
17. Dankov, Y.: Conceptual model of a data visualization instrument for educational video games. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds.) Intelligent Systems Design and Applications. ISDA 2022. Lecture Notes in Networks and Systems, vol. 717. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-35510-3_29
18. Dankov, Y.: User-oriented process analysis of using the DIZU-EVG instrument for educational video games. In: Silhavy, R., Silhavy, P. (eds.) Networks and Systems in Cybernetics. CSOC 2023. Lecture Notes in Networks and Systems, vol. 723, pp. 684–693. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-35317-8_61
19. Bontchev, B., Vassileva, D., Dankov, Y.: The APOGEE software platform for construction of rich maze video games for education. In: Proceedings of the 14th International Conference on Software Technologies (ICSOFT 2019), pp. 491–498. SCITEPRESS – Science and Technology Publications, Setubal (2019). https://doi.org/10.5220/0007930404910498
20. Bontchev, B., Vassileva, D., Aleksieva-Petrova, A., Petrov, M.: Playing styles based on experiential learning theory. Comput. Hum. Behav. 85, 319–328 (2018). https://doi.org/10.1016/j.chb.2018.04.009
21. Dankov, Y., Bontchev, B.: Towards a taxonomy of instruments for facilitated design and evaluation of video games for education. In: Proceedings of the 21st International Conference on Computer Systems and Technologies (CompSysTech 2020), pp. 285–292. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3407982.3408010
22. Dankov, Y., Bontchev, B., Terzieva, V.: Design and creation of educational video games using assistive software instruments. In: Ahram, T.Z., Karwowski, W., Kalra, J. (eds.) Advances in Artificial Intelligence, Software and Systems Engineering. AHFE 2021. Lecture Notes in Networks and Systems, vol. 271, pp. 341–349. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80624-8_42
23. Dankov, Y., Bontchev, B.: Software instruments for management of the design of educational video games. In: Ahram, T., Taiar, R., Groff, F. (eds.) Human Interaction, Emerging Technologies and Future Applications IV. IHIET-AI 2021. Advances in Intelligent Systems and Computing, vol. 1378, pp. 414–421. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-74009-2_53
24. Unity Official Website. https://unity.com/. Accessed 15 Mar 2023
Intelligent Assessment of the Acoustic Ecology of the Urban Environment

Nikolay Rashevskiy, Danila Parygin(B), Konstantin Nazarov, Ivan Sinitsyn, and Vladislav Feklistov

Volgograd State Technical University, 1 Akademicheskaya Str., 400074 Volgograd, Russia
[email protected]
Abstract. The ecology of urban space consists not only in a certain state of natural and climatic conditions. Today, the quality of the urban environment is also determined by various aspects of sensory ecology, including the influence of sound on humans. It is customary to talk about the existence of the soundscape of the city. The article analyzes the existing approaches to the formation of the soundscape of the urban area. Modern information technologies allow automating the process of assessing the state of ecoacoustics in the city through the use of machine learning. At the same time, it is possible to control individual sound elements to form a comfortable urban environment. The authors propose the process of analyzing and processing the original sound of the city using neural networks and the study of spectrograms. These tools are used to separate audio tracks into components, for example, car sounds, construction site sounds, etc. It is also possible to form a set of sound files that take into account the changes made by urban planners. The resulting files with the sounds of the city are used in sociological studies to identify the preferences of residents. The proposed solutions are tested in the formation of the sound landscape of the areas in the city of Volgograd, Russia. Keywords: Ecoacoustics · Soundscape · Urban Environment Quality · City Ecology · Ecosystem Analysis · Urban Studies
1 Introduction

Canadian composer, writer, and ecologist Raymond Murray Schafer was the first to declare the importance of acoustic ecology, and he became the main initiator and conceptual inspirer of research in ecoacoustics. In 1967, his textbook “Ear Cleaning. Notes for an Experimental Music Course” was published, in which Schafer encouraged readers to listen to the voices, noises, pulse, and intonation of the city. Schafer introduced the concept of the soundscape and defined it in his studies of the urban sound environment: a soundscape is “the whole set of sounds heard by an individual at a given moment from a specific point” [1]. The soundscape is characterized by the perception of the sound environment in context and the physical and emotional response inherent in it. The research aims to measure the noise level in various places, such as cities, workplaces, and residential areas, and determine its impact on human health using artificial
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 91–100, 2023. https://doi.org/10.1007/978-3-031-44146-2_9
92
N. Rashevskiy et al.
intelligence technologies and decision support in the formation of planning decisions in the field of urban planning and architecture. Without taking the soundscape into account in urban planning, the sustainable development of urbanized territories and the livability of existing cities suffer first of all. Therefore, improving the methods of forming the soundscape of the city is an important task.
2 Existing Approaches to the City Soundscape Formation

The urban environment mainly influences the human being by means of visual images, forming visual perception of the surrounding space. However, the modern era of human civilization differs from the previous one by a return to the culture of hearing in the processes of communication and perception [2]. Urban sounds have increasingly become the subject of research in the scientific works of scientists from many countries around the world, using various models, methods, and standardized ways of expressing the results obtained. The analysis and design of the soundscape is a new field of research, bringing together independent scientific disciplines related to sound and the urban environment. In many countries, including Russia, the soundscape is considered in terms of noise pollution; legal acts and regulations govern the permissible level of noise pressure in urban development conditions [3]. Therefore, many scientists and planners today advocate the need to make acoustic environmental ecology a research program using innovative methods and models in order to identify the positive and negative aspects of the impact of sound on humans in urban environments. Until the beginning of the 20th century, the sounds of industrial and transport development were associated in the minds of the world community with progress and increased well-being. Nowadays, society has come to understand that the surrounding sound environment is not a harmless embodiment of technological progress and a sense of movement, but a threat to the health of city residents [4]. Acoustic pollution is one of the most dangerous threats to both the physical and psychological health of citizens. Sound may provoke cardiological and neurological problems and activate protective psychological mechanisms in people, contributing to greater abstraction.
People are vulnerable to sound exposure, as they are unable to constantly monitor the sounds perceived by their hearing organs and subject them to evaluation and filtering. The average person is unaware of how any one sound affects them, making them a potential target for adverse sound effects. The same problem is encountered by professionals involved in designing sound environments: landscape designers, architects, and urban planners, who mostly interact with the visual environment and therefore lack the basic conceptual tools and skills to work competently with the urban soundscape. This problem is characteristic of many countries of the world, as so far only the topic of the soundscape has been reflected in the scientific field of urban research and become a subject of general discussion, but not the conceptual achievements of the numerous researchers of the sound environment. Urban planning, legal, and domestic practices remain at the level of the 1960s in matters of sound impact assessment and regulation. A holistic approach that focuses on the quality of the sound environment is needed. Incorporating soundscape design as a major aspect of the master planning process will
Intelligent Assessment of the Acoustic Ecology
93
allow urbanized spaces to be designed for maximum well-being and create a sustainable urban environment. The soundscape has been considered as a method of research on urban space by scientists from various scientific fields. Researchers raise the issue that sound is an important source of sociological knowledge, although this is a very young direction in Russian practice. Among them are K.S. Mayorova [1, 4], A.V. Logutov [5], G.A. Gimranova [6], and others. In these works, the soundscape is considered from the position of sociological research. At the same time, alongside a large number of theoretical works, there are very few practical implementations concerning the sound environment of the city. One can highlight the work [7], which describes an approach to the analysis of auditory landscapes of urban areas, proposes a methodology for describing the urban landscape, and presents sociological studies of citizens’ opinions about the comfort of the sound environment. A review of Russian regulations on noise protection has shown that the emphasis is placed on defining permissible and dangerous noise levels in quantitative terms [8], while the subjective assessment by users of urban spaces of different sound sources, which can be no less dangerous, is not considered. Foreign studies of the sound landscape have advanced much further. The amount of accumulated theoretical knowledge has allowed technical universities to create training programs in the field of applied acoustics and the soundscape of cities, created for specialists in the field of sound and vibration. An example is the SONORUS [9] program at Chalmers University of Technology in Sweden. Huge resources are being made available to create noise maps of megacities.
For example, the interactive noise map of Berlin [10], which records the daytime and nighttime noise levels in residential buildings, hospitals and schools, makes it possible to develop an action plan for the operational control of the noise situation in the city. The average noise level is calculated for a height of 4 m above the ground. There is also a SONYC [11] project that includes large-scale noise monitoring and uses the latest in machine learning technology, big data analysis and citizen science reporting to more effectively monitor, analyze and mitigate urban noise pollution. The Positive Soundscapes project [12] aims to develop a process map to help planners and other urban planning decision makers use soundscape assessment and modeling tools and techniques and incorporate the developed methodology into regulations and the urban planning process in the UK. Neil Bruce in the same Positive Soundscapes project [13] developed an interactive soundscape simulator that allows users to manipulate a set of parameters to investigate whether a group correlation exists between factors such as source selection, location, and noise levels. In conjunction with field studies and sociological surveys, this method allows the evaluation of the soundscape not only in terms of objective measurement data, but also reveals the subjective auditory preferences of users. In his research work, Reeman Mohammed Rehan [14], based on international case studies describing the impact of sound on the formation of open urban spaces, presented an approach to the formation of soundscape, which considers environmental sound as a resource effective in urban planning and design process. The method has been implemented in the planning process of one of Cairo’s noisiest squares. The renovation of the
square will achieve an effective result in reducing the noise impact while maintaining the sound identity of the place. Soundscape research for interior space was first conducted by Indonesian researchers [16]. They developed an acoustic simulator of the sound landscape of a passenger train. A distinctive feature of the approach is the application of the concept of auralization to account for the reflection of sound inside a passenger train. The concept of soundscape composition using the acoustic environment simulator can be implemented to understand the existing soundscape, correct the soundscape (by selecting a different sound source to mask and adjusting sound levels), and evaluate the effect of acoustic treatment inside a passenger train. The article [17] explores an approach that incorporates soundscape research and design into the urban sound planning process. A framework for designing the soundscape in urban public open spaces is proposed, taking into account four key components: the characteristics of each sound source, the acoustic effects of the space, the sociodemographic aspect of the users, and other physical conditions. Design tools/models for soundscapes are also presented, including a software package for auralization and design change and public participation, and an artificial neural network model for predicting acoustic comfort based on various design variables. In contrast to the approaches considered in the review of sources, it is proposed to use modern technologies in assessing and shaping the sound landscape of a territory, namely the use of machine learning and decision theory, which will provide new opportunities for analyzing and understanding the sound environment. 
Thus, it is necessary to develop a methodology for assessing the sound landscape of the city based on: the application of machine learning technologies to decompose the studied sound environment into separate sound tracks; and sociological research aimed at determining the preferred and unpreferred sound effects for users of urban space. The method will make it possible to design a comfortable urban environment that takes into account the auditory preferences of residents, and to reconstruct existing spaces, including the necessary components of the sound environment.
3 Intelligent Analysis of the Urban Soundscape

This section describes the principles of the proposed process for automated assessment of the sound landscape of the city. It is designed to work with audio tracks containing the sounds of the city and to assess the sound comfort of the urban environment using machine learning technologies. The process of splitting an audio file into separate tracks includes the following subprocesses:

1. “Loading an audio file”. To start, the user uploads an audio file in WAV or MP3 format. The duration of the audio file must be no less than 30 s and no more than 3–4 min.

2. “Processing an audio file through a neural network”. In this stage, the program analyzes the audio file through the neural network and divides it into different audio components. The process of this stage is divided into several parts:
2.1. “Converting an audio track into a spectrogram”. To separate the sounds, the audio track is represented as a spectrogram: a two-dimensional time-frequency matrix that displays the frequency content of the audio signal over time (Fig. 1).
Fig. 1. Example of the studied spectrograms.
Thus, you can visualize a representation of sound using a heat map that has time on the x-axis and frequency on the y-axis. Each element on the heat map represents the amplitude of the signal at a specific time and frequency. Next to some heat maps there is a color bar that shows which colors indicate high amplitude values and which indicate low amplitude values; if there is no color bar, we can assume that brighter colors indicate higher amplitudes than darker colors. The position of the waves in this visualization format corresponds to the frequency of the different sounds, thereby allowing the audio track to be divided into multiple components. This representation also allows the application of two-dimensional convolutional neural networks in sound processing tasks.

2.2. “Processing seconds”. The program processes the uploaded audio through a neural network. Here, through the trained weights, the audio is analyzed, and the user should get the separated sounds of the urban environment in the output.

2.3. The creation of “masks”. A mask is a matrix of the same shape as the spectrogram, which is multiplied with it element by element to get an initial estimate. The neural network tries to create a mask, which should be superimposed on the spectrogram image. A separate mask will be created for each type of sound.

2.4. “Final result processing”. After we have created masks for each category of sounds that we need to separate from the common audio track, we overlay them on the spectrogram of the original material, and the output gives the user a track completely separated into the individual sounds that the neural network has classified.

3. “Edit audio file”. After processing the audio track, the user is given a user interface to work with individual audio tracks. The editing interface appears as an ordinary audio editor, where musicians create music by crossing different samples. In our case, the user works with already prepared split sounds, such as sounds of cars on the road, voices of passers-by, sounds of nature, etc. It is assumed that the current audio track is analyzed by the program algorithm to classify the acceptability of “urban environment” sounds. Loud or “negative for the urban environment” sounds significantly affect the overall human perception of the urban environment, so the overall audio track will be classified as negative for humans. Reducing such parameters will greatly improve the situation for auditory perception. The audio editor has tools that allow the user to do the following:

3.1. “Turn down the volume of individual audio tracks”. The user can do this to evaluate how the overall playback will be perceived if the volume of a particular track is reduced. The volume reduction is done using the “volume control”, which is located on the left side of the tracks in the program interface.

3.2. “Hide or delete certain tracks”. The user can listen to the general environment if a particular track is missing. This action is performed using the button to the left of the tracks in the program interface.

3.3. “Add additional sounds”. This tool allows you to add additional sounds to the audio track. In addition, depending on which sound is added, the track will be modified in some way. For example, if you add a “pine” ambience, the overall volume of the audio track will decrease, because this type of tree is able to reduce the volume of the urban environment very well.

3.4. “Listening to the edited audio track”. The user can experiment with the sound in different ways, evaluating the result through its playback. Listening is done by playing the general track, rewinding it, and stopping it completely.

4. “Saving the edited audio track”. The user can save the edited result to a separate audio file in MP3 or WAV format. The user can also save the audio file settings into a separate project file, which can be reloaded into the application to continue working from where the user ended the last session.
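The spectrogram-and-mask pipeline described above can be sketched in a few lines. This is an illustrative toy example under our own assumptions, not the authors’ implementation: a synthetic two-tone mixture stands in for recorded city audio, and a hand-made frequency cut-off mask stands in for the soft mask that a trained neural network would predict.

```python
# Illustrative sketch of mask-based separation (not the paper's code):
# convert audio to a spectrogram, multiply by a same-shaped mask
# element by element, and invert back to audio.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs                      # 1 s of "city" audio
low = np.sin(2 * np.pi * 220 * t)           # stand-in for traffic rumble
high = 0.5 * np.sin(2 * np.pi * 4000 * t)   # stand-in for birdsong
mix = low + high

# Time-frequency representation (the spectrogram the text describes)
f, frames, Z = stft(mix, fs=fs, nperseg=512)

# A mask with the same shape as the spectrogram; here a hard frequency
# cut-off, where a network would output soft per-bin values
mask = (f < 1000).astype(float)[:, None]

# Element-wise multiplication, then inversion back to audio
_, low_est = istft(Z * mask, fs=fs, nperseg=512)

# The retained low-frequency component is reconstructed almost exactly
err = float(np.mean((low_est[: len(mix)] - low) ** 2))
assert err < 0.05
```

In the real system one such mask would be produced per sound category (cars, voices, nature, etc.), giving the fully separated tracks that the editor in step 3 then works with.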
4 Study of the Acoustic Ecology of a Section of an Urban Area

The developed approach was applied in a number of studies in the city of Volgograd, Russia. The first study analyzed the site of the Privokzalnaya Square opposite the railway station “Volgograd-1” [17]. The study described in this article covered the area adjacent to the Volgograd Arena stadium, which hosted the 2018 FIFA World Cup matches (Fig. 2). The measuring point was chosen in the southern pedestrian part of Victory Park. A group of 25 people was selected for the sociological study. Using the developed
Fig. 2. Existing position of the sound landscape of Victory Park: positive sounds, auditory signs.
application, the audio tracks were evaluated and shown to the respondents, who adjusted the virtual soundscape according to their personal preferences. The interviewees identified “positive” and “negative” environmental sounds, as well as vivid and memorable sound cues [18]. The results showed that most respondents had a positive view of natural sounds, such as the rustling of leaves, birdsong, and water sounds (Fig. 2). Participants also noted the sound landmark and expressed a favorable attitude toward it; in the analyzed soundscape, such a landmark was the sound of a match at the Volgograd Arena stadium. Thus, we can conclude that the identity of the soundscape is important. The data are presented in Fig. 3.
Fig. 3. Assessment of the sound landscape of Victory Park. Natural sounds.
The sounds from cars, railroads, and streetcars were the most negative among anthropogenic sounds (Fig. 4). Respondents had an ambivalent attitude towards sounds coming from people themselves (conversations, footsteps).
Fig. 4. Existing position of the sound landscape of Victory Park: sources of noise.
Opinions were divided: some people found these sounds acceptable, others did not like them. Still, the balance turned out to be on the positive side (Fig. 5). Human footsteps
and conversations have a good masking effect, drowning out the sounds of vehicular traffic.
Fig. 5. Assessment of the sound landscape of Victory Park. Anthropogenic sounds.
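The per-source balance reported in the survey can be obtained with a simple aggregation of respondents’ ratings. The numbers and the +1/0/−1 scoring scheme below are invented for illustration only; they are not the study’s actual data.

```python
# Hypothetical aggregation of the kind of ratings behind Figs. 3 and 5:
# each of 25 respondents rates a sound source as positive (+1),
# neutral (0), or negative (-1); the mean gives the balance per source.
# All numbers are invented, not taken from the study.
from statistics import mean

ratings = {
    "birdsong":      [1] * 22 + [0] * 3,
    "water":         [1] * 20 + [0] * 4 + [-1],
    "car traffic":   [-1] * 19 + [0] * 4 + [1] * 2,
    "conversations": [1] * 11 + [0] * 5 + [-1] * 9,  # divided opinions
}

scores = {src: mean(vals) for src, vals in ratings.items()}
for src, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{src:14s} {s:+.2f}")
```

Under this scheme, a slightly positive mean for “conversations” corresponds to the divided-but-positive balance the text describes.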
5 Conclusion

The data obtained from the study of city dwellers’ opinions are used to develop a conceptual plan of measures to improve the sound landscape of the analyzed area. Such a concept includes measures to reduce noise by redirecting the attention of city dwellers to “positive” sounds and by acoustic masking. To improve the area around the square in accordance with the urban sound landscape strategy, it is necessary to develop noise protection and propose new acoustics for the area. This can be achieved by increasing the area of greenery, replacing the asphalt parking lot with a lawn grid, installing fountains, and installing sound-absorbing barriers. The article provided an overview of existing approaches to assessing the acoustic ecology of urban areas and identified the main directions of development of this field, involving interdisciplinary research and the use of modern information technologies. A process for analyzing and processing urban sounds based on machine learning technology was proposed, and on this basis an analysis of an area of the city of Volgograd was conducted. Further research requires automating the process of forming the sound landscape. It is necessary to expand the knowledge base of solutions that can reduce negative sound effects. In addition, a decision support system can be developed that generates recommendations for urban planners taking into account the limitations of the surrounding urban environment.

Acknowledgments. The study has been supported by the grant from the Russian Science Foundation (RSF) and the Administration of the Volgograd Oblast (Russia) No. 22-11-20024, https://rscf.ru/en/project/22-11-20024/. The authors express gratitude to colleagues from the Department of Digital Technologies for Urban Studies, Architecture and Civil Engineering, VSTU involved in the development of the project.
References

1. Majorova, K.S.: Academic research into sound and the auditory renaissance in urbanism. In: Proceedings of the III International Scientific Conference, Veliky Novgorod, 28–30 August 2019, pp. 316–324 (2020). (in Russian)
2. Ather, D., Rashevskiy, N., Parygin, D., Gurtyakov, A., Katerinina, S.: Intelligent assessment of the visual ecology of the urban environment. In: Proceedings of the 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS 2022), Tashkent, Uzbekistan, 10–12 October 2022, pp. 361–366. IEEE (2022)
3. Sadovnikova, N., Savina, O., Parygin, D., Churakov, A., Shuklin, A.: Application of scenario forecasting methods and fuzzy multi-criteria modeling in substantiation of urban area development strategies. Information 14(4), 241 (2023)
4. Majorova, K.S.: Sound action and sound violence: a conceptual vocabulary for describing urban conflicts. Bull. Moscow Univ. 4, 55–67 (2020). (in Russian)
5. Logutov, A.V.: Sound practices and the materiality of urban space. Urban Stud. Pract. 2(4(9)), 39–50 (2017). (in Russian)
6. Gimranova, G.A.: Soundscape as a tool for exploring urban space. Soc. Hum. Sci.: Theory Pract. 1(2), 490–496 (2018). (in Russian)
7. Chubukova, M.A.: Features of the sound environment of the Arbat district of Moscow. Urban Stud. Pract. 67–78 (2015). (in Russian)
8. Code of Practice 51.13330.2011 “Building codes and regulations 23-03-2003. Noise protection”. https://base.garant.ru/77322649/. Accessed 18 Feb 2023. (in Russian)
9. Urban sound planning – the SONORUS project. http://www.ta.chalmers.se. Accessed 7 Feb 2023
10. Lärmkarte Berlin 2018. So laut ist es vor Ihrer Haustür. https://interaktiv.morgenpost.de/laermkarte-berlin. Accessed 9 Feb 2023. (in German)
11. Sounds of New York City (SONYC). https://wp.nyu.edu/sonyc. Accessed 12 Feb 2023
12. Adams, M.D., Davies, W.J., Bruce, N.S.: Soundscapes: an urban planning process map.
In: Proceedings of the 38th International Congress and Exposition on Noise Control Engineering 2009. INTER-NOISE (2009) 13. Bruce, N.S., Davies, W.J., Adams, M.D.: Development of a soundscape simulator tool. (2009) 14. Rehan, R.M.: The phonic identity of the city urban soundscape for sustainable spaces. HBRC J. 12, 337–349 (2016) 15. Zakri, K.: The development of acoustic environment simulator for passenger’s train soundscape. J. Phys: Conf. Ser. 1075, 3633–3638 (2018) 16. Kang, J.: Urban sound planning – a soundscape approach. (2019) 17. Rashevskiy, N.M., Parygin, D.S., Nazarov, K.R., Sinitsyn, I.S., Feklistov, V.A.: Intelligent analysis of the urban soundscape. Urban Sociol. 1, 125–139 (2023). (in Russian) 18. Shuklin, A., Parygin, D., Gurtyakov, A., Savina, O., Rashevskiy, N.: Synthetic news as a tool for evaluating urban area development policies. In: Proceedings of the 2022 International Conference on Engineering and Emerging Technologies (ICEET), Kuala Lumpur, Malaysia, 27–28 October 2022. IEEE (2022)
DuckyCode: A Hybrid Platform with Graphical and Tangible User Interfaces to Program Educational Robots
Theodosios Sapounidis1,2(B), Pavlos Mantziaris2, and Ioannis Kedros2
1 Department of Philosophy and Education, Aristotle University of Thessaloniki, Thessaloniki, Greece
[email protected]
2 DuckyCode, Thessaloniki, Greece
Abstract. There is currently great interest in the development of educational systems based on tangible interfaces and educational robotics. However, existing systems have limited capabilities and reduced scalability. This article therefore describes: (a) the sources of the potential benefits of tangible user interfaces, (b) the challenges and a series of guidelines for designing such systems, and (c) the DuckyCode system, an educational robot programming platform that combines tangible, graphical, and text-based programming subsystems. The system offers a series of capabilities that appear for the first time in the relevant literature, enabling users to configure a fully internet-connected tangible system as they wish, to interact simultaneously with different interfaces, and to exchange code with remote users. The system is therefore a programming platform aimed at both novice and experienced users, children and adults alike. Keywords: Educational robotics · Tangible User interfaces · Graphical User interfaces · Programming education
1 Introduction
At the beginning of the 19th century, Friedrich Froebel's theories on children's learning and play with "gifts" transformed teaching [1]. The perceptual and sensorial playful activities with gifts were the beginning of manipulatives. Computationally enhanced versions of these manipulatives became known as digital manipulatives [2]. Today such digital manipulatives are known as tangible user interfaces (TUIs): physical objects through which we can interact with the digital world. Although several efforts have been made in recent years to develop tangible programming systems, the available systems remain limited [3–5]. For this reason, this article presents the theoretical background behind the creation of a hybrid educational programming platform that combines TUIs, GUIs, and text-based programming in a single system. Moreover, the challenges and the innovative characteristics behind the platform are presented and analyzed.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 101–109, 2023. https://doi.org/10.1007/978-3-031-44146-2_10
2 Background
In the short history of TUI research, many tangible systems have been designed for children with the purpose of connecting the physical world with digital effects. Compared with other interfaces, TUIs may offer a number of possible benefits attributed to four key factors: the physicality of the interface, the material properties that can be incorporated, the novel actions that can be accomplished with the interface, and the collaboration capabilities.
2.1 Sources of Possible Benefits
Physicality
It has been proposed that TUIs facilitate both exploratory and expressive activities [6]. Regarding exploratory activities, the physical familiarity of the interface is held to enable exploration [7]. In this way, the user gains experiential understanding of a knowledge model, guided by the appropriate constraints [8] that the physicality of the interface puts into effect. The physicality of the interface also creates sensory-motor experiences, like those described by Sara Price in her digitally augmented physical spaces [9]. The advantage of tangibles rests on the idea that they enable an enactive mode of reasoning and empirical abstraction of sensory-motor schemes [7]. Moreover, the physical manipulation facilitated by tangibles is considered more familiar and bodily engaging for users and thus has a positive impact on engagement and enjoyment [10, 11]. Based on the familiarity of the interface, Marshall [12] argues that tangible interfaces might be more accessible for children, lowering the threshold of participation [13].
Material Properties
Physical objects have several properties that can make a tangible environment more effective than a traditional one, providing a richer interactive experience and increasing the opportunities for reflection and understanding [10, 14].
These properties may not be easily represented in a graphical environment and can therefore reinforce learning in a specific domain [15]. For instance, incorporating material properties such as size, weight, smoothness, and temperature might better serve learning in domains like programming, mathematics, or chemistry [12]. The combined effect of these properties could render a tangible interface more intuitive and easier to use for learning. One may assume that tangibles lower the mental effort required to learn how the interface operates and thus free cognitive resources to focus on the learning process itself [12, 16].
Collaboration Affordances
TUIs can support multi-hand, face-to-face, co-located collaborative activities, allowing more than one user to collaborate with others [17]. Such shared interfaces, where users interact simultaneously, have been shown to provide more opportunities for equal participation [18]. During tangible-mediated collaboration, users gain better visibility into the work of other participants and can easily communicate ideas and thoughts about the working plane [19]. Users can also monitor each other's
gaze and gestures, achieving richer collaboration than interacting with a graphical representation on a computer monitor [20]. Furthermore, many studies have shown that working with peers on certain collaborative tasks may strengthen children's motivation, engagement, and enjoyment (e.g., [21, 22]). Facilitating a playful approach is well known to be beneficial for young children's development [23]. The results of Price's qualitative study [6] supported that TUIs, in a collaborative setting, are appropriate for creating a playful learning activity for children. Nevertheless, the assumption that playful learning in a collaborative environment can be augmented by TUIs has not been studied extensively with experimental methods [7, 24].
Novelty Appeal
Self-Determination Theory argues that people will be intrinsically motivated by activities that have, among other things, novelty appeal [25]. The different novel activities that children can accomplish, together with novel mixes of tangible and digital 'transforms', are among the key factors that may increase exploration and reflection [26]. Tangibles in novel activities such as augmented reality, where they serve both as input and as a surface onto which the system projects information, are a step toward new physically based interface metaphors that are intuitive and unique to augmented reality [27]. Appropriate representations on the interface may help reduce a problem's complexity and influence problem-solving [7].
2.2 The Challenges
Tangible user interfaces, particularly in the domain of programming, are considered easier and faster to use, especially for younger children [13, 28], and more user-friendly and intuitive [29] than graphical user interfaces. Moreover, there appear to be distinct user profiles that prefer TUIs over GUIs [24, 30]. However, designing and building these types of tools is significantly challenging.
Based on suggestions and findings from the existing literature, there are opportunities and gaps to which researchers and designers can apply new solutions. By addressing these challenges, they can create more effective and user-friendly programming tools and further advance the field of tangible user interfaces. Specifically, some existing challenges are:
Creating Platforms that Provide a Diversity of Interfaces to Better Support Different Learning Styles
Designing systems that simultaneously combine tangible, textual, and graphical programming interfaces is a challenging task. Such hybrid solutions allow users to seamlessly switch between interfaces based on their preferences, learning styles, gender, previous experience, and age [31]. In addition, the ability of such hybrid systems to teach programming through different subsystems offers researchers a unique opportunity to study the advantages and disadvantages of tangible interfaces in the field of programming [11, 32].
Increasing Portability and Reducing the Cost
Cost and portability are crucial considerations before tangible interfaces can reach the
market, allowing for dynamic and unrestricted programming activities in children's everyday play areas [14]. Cost and portability also limit the adoption and evaluation of tangible systems in real classrooms [33]. Bringing theoretical designs to life, and then to market, requires resource-intensive processes [29]. To address these challenges, efforts are therefore needed to reduce costs and achieve higher portability. Meeting these requirements can enable larger-scale evaluations and better exploration of the potential fields where tangible user interfaces can best serve users' needs.
To Reduce the Mental and Physical Gap Between Input and Output
To reduce the mental gap between the manipulations carried out in the tangible environment (input) and the result of the programming (output), it is desirable that both take place in the same physical space [1, 14, 34–36].
The Ability to Represent Many Programming Structures with a Sufficient Number of Commands and Parameters
A sufficient number of different commands and parameters makes learning to program, and using the system itself, more interesting for both young and older children [37]. To satisfy such a requirement we need to go beyond the simple commands "forward", "backward", "turn right", and "turn left". It would thus be useful to support repetition structures, control structures, code storage, and the use of multiple sensors when programming an educational robot [38–40].
Support User Interaction on the Interface and not on Another Medium
Enabling realistic and intense interaction between users and the system on the interface itself is desirable for programming tools of this kind [1, 37]. Such a mechanism can be achieved by incorporating additional indications that convey the system's internal state, execution state, wrong connections, or even syntax errors.
For example, the system’s internal state information can include details about connection quality and battery status, while the programming structure can offer feedback on executed commands and functions. Users should also receive clear indications to identify and correct errors, through synchronous feedback when an error occurs at any programming stage. Implementing these features may vary in complexity depending on the chosen technology. To Exploit the Appropriate Properties of Physical Objects The use and exploitation of the physical properties and characteristics of an educational system can be beneficial to users and can be a pathway to innovation. These properties can be grouped into two categories. The first category concerns properties which have to do with obstacles which prevent users from taking an action, while the second category has to do with properties which aim to help knowledge in the knowledge field [8]. These physical properties can reduce cognitive load and allow users to explore and understand the system intuitively [41, 42]. For example, the tangible object which represents a
parameter with the number 3 may weigh half as much as a parameter with the number 6 [4, 43].
Increasing System Availability
Enhancing system availability is a challenging field that researchers strive to address. This involves using common and easily accessible materials such as paper, ensuring systems do not depend on specific computers, minimizing the need for frequent battery recharging, and utilizing existing equipment, such as robots and other tools that are readily found on the market or in learning environments.
Improving the Reliability of the System
Reliability is an important consideration for tangible user interfaces, since a system failure may cause user disappointment. Researchers have in many cases acknowledged encountering minor or major problems during their interventions or pilot studies. While such problems may have limited impact in research or controlled environments, they become significant when market viability is considered. It is therefore necessary to adopt the latest technologies in order to develop better systems.
Enhancing Adaptability
A common problem in the literature is the limited ability of tangible systems to represent multiple and different concepts [1, 44]. To address it, users should be able to intervene and modify the concepts represented by the tangible objects.
3 The Proposed System "DuckyCode"
The system consists of duckling-like educational robots, a base, and a set of physical magnetic objects. The system offers four ways to program the robot: tangible, graphical, graphical-remote, and text-based. The tangible subsystem offers a collection of magnetic tiles that represent commands, parameters, and programming structures (loops, ifs, etc.). When the user connects the magnetic tiles, sequences are formed and the robot moves according to the commands given. The tangible technology of the DuckyCode system is mainly aimed at novice children and teaches the basics of programming. An illustrative example of code, along with the robot and magnetic tiles, is shown in Fig. 1. In this example, four magnetic commands and one parameter (the smaller tile) are connected to the base. If the user presses the "RUN" button, the base sends the commands (turn left, light on, two steps backward, and one step forward) to the robot for execution.
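The behavior of the base in this example can be sketched in Python. The tile names and the rule that a numeric parameter tile repeats the preceding command are illustrative assumptions, not the actual DuckyCode firmware:

```python
# Hypothetical interpreter for a sequence of magnetic tiles, mimicking the
# example of Fig. 1: turn left, light on, two steps backward, one step forward.
def run(tiles):
    """Expand (command, parameter) tile pairs into the unit actions the base
    would transmit to the robot when RUN is pressed."""
    actions = []
    i = 0
    while i < len(tiles):
        cmd = tiles[i]
        # A smaller numeric parameter tile, if present, repeats the command.
        if i + 1 < len(tiles) and isinstance(tiles[i + 1], int):
            actions.extend([cmd] * tiles[i + 1])
            i += 2
        else:
            actions.append(cmd)
            i += 1
    return actions

program = ["TURN_LEFT", "LIGHT_ON", "STEP_BACK", 2, "STEP_FWD"]
print(run(program))
# → ['TURN_LEFT', 'LIGHT_ON', 'STEP_BACK', 'STEP_BACK', 'STEP_FWD']
```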
Fig. 1. The tangible subsystem (the robot, the base, and the magnetic commands – parameters)
The robot has Wi-Fi, Bluetooth, two motors, a color TFT screen, and a series of sensors (sound, gesture, touch, etc.). The robot itself runs a web server and an access point; the user can therefore log in to the access point and see the Scratch-like graphical environment (also known as Blockly) hosted on the robot's web server (Fig. 2).
Fig. 2. The Blockly environment hosted on the robot's web server
The robot is thus fully connected to the internet and can be programmed from any device (mobile phone, tablet, or computer) without installing an application, because the programming environment is rendered in a browser. This graphical programming is aimed at older and more experienced children. Similarly, thanks to its Wi-Fi connectivity, the robot uses internet services and can accept commands from the internet. In this case, the user does not need to be in the same location as the robot. Using this capability, which is also graphical, students might program
the school's robot even from home, as long as the robot is turned on, which ensures high availability. Finally, the robot offers direct access to its microcontroller unit through a USB Type-C connector, so expert users can program the microcontroller using well-known text-based programming environments such as the Arduino IDE. The system thus provides a variety of interfaces to better support different ages and learning needs. At the same time, since the subsystems are either magnetic or hosted on the robot's web server, portability is ensured and the cost of purchasing separate subsystems is reduced. As for the tangible subsystem: (a) it reduces the physical gap between input and output, since programming is done in the physical world and the result is shown in the same physical space through the robot's movements; and (b) it offers interaction with the user on the tangible tiles of commands and parameters through appropriate light indications (green and red). Users can therefore monitor the execution of the program and at the same time detect syntax errors. Finally, the system sets appropriate physical constraints that prevent users from connecting the tiles the wrong way.
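The green/red light feedback on the tiles described above can be sketched as a per-tile syntax check. The tile vocabulary and the single rule shown (a numeric parameter tile is valid only directly after a movement command) are illustrative assumptions, not the actual DuckyCode rule set:

```python
# Sketch of per-tile syntax checking with green/red light feedback, as the
# tangible subsystem might perform it before executing a program.
MOVES = {"STEP_FWD", "STEP_BACK"}

def check(tiles):
    """Return a light colour per tile: 'green' if the tile is syntactically
    valid in context, 'red' otherwise (e.g., a dangling numeric parameter)."""
    lights = []
    for i, tile in enumerate(tiles):
        if isinstance(tile, int):
            # A parameter is assumed valid only directly after a movement tile.
            ok = i > 0 and tiles[i - 1] in MOVES
            lights.append("green" if ok else "red")
        else:
            lights.append("green")
    return lights

print(check(["STEP_BACK", 2, "LIGHT_ON"]))   # → ['green', 'green', 'green']
print(check([3, "TURN_LEFT"]))               # → ['red', 'green']
```

Lighting the offending tile red, as in the second call, gives the synchronous error feedback discussed in Sect. 2.2.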
References 1. Zuckerman, O., Arida, S., Resnick, M.: Extending tangible interfaces for education: digital montessori-inspired manipulatives. In: Proceedings of the SIGCHI Conference on Human factors in Computing Systems, pp. 859–868. ACM, Portland, Oregon, USA (2005) 2. Resnick, M., et al.: Digital manipulatives: new toys to think with. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 281–287. ACM Press/Addison-Wesley Publishing Co., Los Angeles, California, United States (1998) 3. Sapounidis, T., Demetriadis, S.: Educational robots driven by tangible programming languages: a review on the field. In: Alimisis, D., Moro, M., Menegatti, E. (eds.) Educational Robotics in the Makers Era, pp. 205–214. Springer International Publishing, Cham (2017). https://doi.org/10.1007/978-3-319-55553-9_16 4. Sapounidis, T., Stamelos, I., Demetriadis, S.: Tangible user interfaces for programming and education: a new field for innovation and entrepreneurship. In: Papadopoulos, P.M., Burger, R., Faria, A. (eds.) Innovation and Entrepreneurship in Education, pp. 271–295. Emerald Group Publishing Limited (2016). https://doi.org/10.1108/S2051-229520160000002016 5. Sapounidis, T., Demetriadis, S.: Tangible programming interfaces: a literature review. In: 4th Balkan Conference in Informatics, pp. 70–75. Thessaloniki, GREECE (2009) 6. Price, S., Rogers, Y., Scaife, M., Stanton, D., Neale, H.: Using “tangibles” to promote novel forms of playful learning. Interact. Comput. 15(2), 169–185 (2003). https://doi.org/10.1016/ S0953-5438(03)00006-7 7. Schneider, B., Jermann, P., Zufferey, G., Dillenbourg, P.: Benefits of a tangible interface for collaborative learning and interaction. IEEE Trans. Learn. Technol. 4(3), 222–232 (2011) 8. Ullmer, B., Ishii, H., Jacob, R.J.K.: Token constraint systems for tangible interaction with digital information. ACM Trans. Comput. Hum. Interact. 12(1), 81–118 (2005) 9. 
Price, S., Rogers, Y.: Let’s get physical: the learning benefits of interacting in digitally augmented physical spaces. Comput. Educ. 43(1–2), 137–151 (2004) 10. Fernaeus, Y., Tholander, J.: Designing for programming as joint performances among groups of children. Interact. Comput. 18(5), 1012–1031 (2006)
11. Xie, L., Antle, A.N., Motamedi, N.: Are tangibles more fun? Comparing children's enjoyment and engagement using physical, graphical and tangible user interfaces. In: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction (TEI 2008), pp. 191–198. ACM, New York, NY, USA (2008) 12. Marshall, P.: Do tangible interfaces enhance learning? In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, pp. 163–170. ACM, Baton Rouge, Louisiana, USA (2007) 13. Sapounidis, T., Demetriadis, S.: Tangible versus graphical user interfaces for robot programming: exploring cross-age children's preferences. Pers. Ubiquitous Comput. 17(8), 1775–1786 (2013). https://doi.org/10.1007/s00779-013-0641-7 14. Fernaeus, Y., Tholander, J.: Finding design qualities in a tangible programming space. In: CHI 2006 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 447–456. ACM, Montreal, Canada (2006) 15. Manches, A., O'Malley, C., Benford, S.: The role of physical representations in solving number problems: a comparison of young children's use of physical and virtual materials. Comput. Educ. 54(3), 622–640 (2010) 16. Jacob, R.J.K., et al.: Reality-based interaction: a framework for post-WIMP interfaces. In: Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, pp. 201–210. ACM (2008) 17. Falcão, T.P., Price, S.: What have you done! The role of 'interference' in tangible environments for supporting collaborative learning. In: Proceedings of the 9th International Conference on Computer Supported Collaborative Learning, pp. 325–334. International Society of the Learning Sciences, Rhodes, Greece (2009) 18. Rogers, Y., Lim, Y., Hazlewood, R., Marshall, P.: Equal opportunities: do shareable interfaces promote more group participation than single user displays? Hum. Comput. Interact. 24(1–2), 79–116 (2009) 19.
Stanton, D., et al.: Classroom collaboration in the design of tangible interfaces for storytelling. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 482–489. ACM, Seattle, Washington, United States (2001) 20. Goldin-Meadow, S.: Hearing Gesture: How Our Hands Help us Think. Belknap Press (2005) 21. Scott, S.D., Mandryk, R.L., Inkpen, K.M.: Understanding children’s collaborative interactions in shared environments. J. Comput. Assist. Learn. 19(2), 220–228 (2003). https://doi.org/10. 1046/j.0266-4909.2003.00022.x 22. Inkpen, K., Booth, K.S., Gribble, S.D., Klawe, M.: Give and take: children collaborating on one computer. In: CHI 1995 Conference Companion on Human Factors in Computing Systems, in CHI 1995, pp. 258–259. ACM, Denver, Colorado, United States (1995). https:// doi.org/10.1145/223355.223663 23. Clements, D.: Playing with computers, playing with ideas. Educ. Psychol. Rev. 7(2), 203–207 (1995). https://doi.org/10.1007/BF02212494 24. Sapounidis, T., Demetriadis, S., Papadopoulos, P.M., Stamovlasis, D.: Tangible and graphical programming with experienced children: a mixed methods analysis. Int. J. Child-Comput. Interact. 19, 67–78 (2019). https://doi.org/10.1016/j.ijcci.2018.12.001 25. Ryan, R.M., Deci, E.L.: Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 55(1), 68–78 (2000). https://doi.org/10. 1037/0003-066X.55.1.68 26. Rogers, Y., Scaife, M., Gabrielli, S., Smith, H., Harris, Eric: A conceptual framework for mixed reality environments: designing novel learning activities for young children. Presence Teleoper. Virtual Environ. 11(6), 677–686 (2002). https://doi.org/10.1162/105474602 321050776
27. Billinghurst, M., Kato, H., Poupyrev, I.: Tangible augmented reality. In: ACM SIGGRAPH ASIA 2008 courses, in SIGGRAPH Asia 2008, pp. 7:1–7:10. ACM, Singapore (2008). https:// doi.org/10.1145/1508044.1508051 28. Sapounidis, T., Demetriadis, S., Stamelos, I.: Evaluating children performance with graphical and tangible robot programming tools. Pers. Ubiquitous Comput. 19(1), 225–237 (2015). https://doi.org/10.1007/s00779-014-0774-3 29. Shaer, O., Jacob, R.J.K.: A specification paradigm for the design and implementation of tangible user interfaces. ACM Trans. Comput. Hum. Interact. 16(4), 1–39 (2009). https://doi. org/10.1145/1614390.1614395 30. Sapounidis, T., Stamovlasis, D., Demetriadis, S.: Latent class modeling of children’s preference profiles on tangible and graphical robot programming. IEEE Trans. Educ. 62(2), 127–133 (2019). https://doi.org/10.1109/TE.2018.2876363 31. Horn, M.S., Jordan Crouser, R., Bers, M.U.: Tangible interaction and learning: the case for a hybrid approach. Personal Ubiquitous Comput. 16(4), 379–389 (2012). https://doi.org/10. 1007/s00779-011-0404-2 32. Sylla, C., Branco, P., Coutinho, C., Coquet, E.: TUIs vs. GUIs: comparing the learning potential with preschoolers. Personal Ubiquitous Comput. 16(4), 421–432 (2012). https://doi.org/ 10.1007/s00779-011-0407-z 33. Kwon, D.-Y., Kim, H.-S., Shim, J.-K., Lee, W.-G.: Algorithmic bricks: a tangible robot programming tool for elementary school students. IEEE. Trans. Educ. 55(4), 474–479 (2012) 34. Rekimoto, J., Ullmer, B., Oba, H.: DataTiles: a modular platform for mixed physical and graphical interactions. In: CHI 2001 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 269–276. ACM, Seattle, Washington, New York, NY, USA (2001) 35. Kitamura, Y., Itoh, Y., Masaki, T., Kishino, F.: ActiveCube: a bi-directional user interface using cubes. In: Proceedings. Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, pp. 
99–102. Brighton, UK (2000) 36. McNerney, T.S.: From turtles to Tangible Programming Bricks: explorations in physical language design. Pers. Ubiquitous Comput. 8(5), 326–337 (2004) 37. Cockburn, A., Bryant, A.: Leogo: an equal opportunity user interface for programming. J. Vis. Lang. Comput. 8(5–6), 601–619 (1997) 38. Wyeth, P., Purchase, H.C.: Tangible programming elements for young children. In: CHI’02 Extended Abstracts on Human Factors in Computing Systems, pp. 774–775. ACM, Minneapolis, Minnesota, USA (2002) 39. Kahn, K.: Drawings on napkins, video-game animation, and other ways to program computers. Commun. ACM 39(8), 49–59 (1996) 40. Kelleher, C., Pausch, R.: Lowering the barriers to programming. ACM Comput. Surv. 37(2), 83–137 (2005). https://doi.org/10.1145/1089733.1089734 41. Fishkin, K.P.: A taxonomy for and analysis of tangible interfaces. Pers. Ubiquitous Comput. 8(5), 347–358 (2004) 42. Blackwell, A.: Cognitive dimensions of tangible programming languages. In: Proceedings of the first joint conference of the Empirical Assessment in Software Engineering and Psychology of Programming Interest Groups, pp. 391–405. Citeseer, Keele, UK (2003) 43. Sapounidis, T., Demetriadis, S.: Touch your program with hands: qualities in tangible programming tools for novice. In: 15th Panhellenic Conference on Informatics (IEEE/PCI), pp. 363–367. IEEE (2011). https://doi.org/10.1109/PCI.2011.5 44. Zuckerman, O., Resnick, M.: A physical interface for system dynamics simulation. In: CHI 2003 Extended Abstracts on Human Factors in Computing Systems, pp. 810–811. ACM, New York, NY, Florida, USA (2003)
Blockchain-Enhanced Labor Protection: An Innovative Complaint Platform for Transparent Workplace Compliance and Fair Competition
John Christidis1(B), Helen C. Leligou2, and Pericles Papadopoulos1
1 Department of Electrical and Electronics Engineering, University of West Attica, 250 Thivon Av., 12244 Athens, Greece
[email protected]
2 Department of Industrial Design and Production Engineering, University of West Attica, 250 Thivon Av., 12244 Athens, Greece
Abstract. Information systems for labor protection play a crucial role in managing and maintaining workplace safety and health. These digital platforms aim to minimize risks, promote safe work practices, and ensure compliance with relevant laws and regulations. Despite the implementation of such systems, countries still face issues related to unfair competition due to incomplete recording of employee work data. Employers may underreport work hours, wages, or workforce size, evading taxes, social security contributions, or compliance with labor regulations, creating an uneven competitive landscape. In this paper, we propose a blockchain-based complaint platform that leverages real-time data and operates in conjunction with existing labor protection information systems. This innovative solution aims to address the problem of undeclared or underreported work by allowing for anonymous and transparent complaint submissions. By integrating blockchain technology with existing labor protection systems, the proposed platform ensures the validity of complaints in real time while protecting employee interests and promoting fair competition among businesses. Keywords: Blockchain · Smart Contracts · Labour Protection Information System · Decentralized Application · Complaint Application
1 Introduction
Information systems for labor protection are digital platforms designed to manage and maintain workplace safety and health. They aim to ensure the well-being of employees by minimizing risks, promoting safe work practices, and complying with relevant laws and regulations. Such systems can be used at the country level to manage and monitor workplace safety and health across various industries, and may be operated by governmental agencies, such as departments of labor or occupational safety and health organizations, to ensure compliance with national regulations and standards. The United States maintains the OSHA [1] Information System (OIS), which collects and manages data on workplace accidents, inspections, and enforcement actions. In Germany, the Federal Institute for Occupational Safety and Health (BAuA) [2] is responsible for researching and developing regulations for occupational safety; it maintains various information systems to collect and analyze data on workplace accidents, hazards, and compliance with safety standards.
However, countries that have implemented labor protection systems may still face issues related to unfair competition due to incomplete recording of employees' work. Employers may underreport employee work hours, wages, or even the number of employees in their organization to evade taxes, social security contributions, or compliance with labor regulations. This results in an uneven competitive landscape for businesses that accurately report their workforce data and comply with regulations. In Greece, for example, the information system ERGANI [3], managed by the Ministry of Labor and Social Affairs, aims to simplify administrative procedures, improve transparency, and enhance the efficiency of labor-related processes. ERGANI is mainly used for employment registration and management. Employers are required to register new employees with ERGANI, providing details such as personal information, job description, working hours, and wages. Employers must also report terminations of employment and any changes in employment contracts or working conditions through it. Recently, the time card (digital work card) feature was released, which works in conjunction with the ERGANI information system.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 110–121, 2023. https://doi.org/10.1007/978-3-031-44146-2_11
Employees use the time card by punching it when they start and finish their shift; the data are stored in the ERGANI information system. The purpose of the time card is to eliminate undeclared and underreported work. With the time card, the interests of employees are protected by fully recording actual working hours; a framework is created for transparent and effective inspections, as the inspectors of the Labor Inspectorate know how many and which employees should be in the workplace; law-abiding companies are protected from unfair competition by those who practice undeclared or underreported work; and the income of the insurance system from contributions is guaranteed. This purpose is not fully achieved, however, because there are companies where employees are forced to punch their card at the end of their shift and then continue working. To address this problem, we propose a complaint platform that leverages real-time data and is powered by blockchain technology. The platform must be used in conjunction with an existing information system to ensure the validity of complaints, while blockchain technology allows complaints to be made anonymously and transparently. The remainder of this paper is organized as follows. The next section discusses related work. In Sect. 3 we present the complaint platform, the tools
112
J. Christidis et al.
we use, its technical analysis, and the smart contract architecture. The discussion is in Sect. 4 and the conclusion in the final section.
2 Related Work
Blockchain technology is suitable for any situation requiring a decentralized infrastructure that enables multiple participants to engage within the same network while ensuring complete transparency and dependability among individuals who are unfamiliar with one another. [4] propose a blockchain-powered platform that is anonymous, transparent, and decentralized, allowing individuals to submit complaints anonymously and collaborate with authorities to address their concerns. Authorities can begin working on a complaint once it is received through the platform. People who have similar issues or believe a problem should be prioritized can either support or oppose the complaint. Then, authorities can initiate crowdfunding campaigns, enabling both complainants and authorities to contribute. They assert that their proposed blockchain-based solution will be one of the most secure and trustworthy platforms for users to voice their concerns. [5] propose a secure and transparent grievance resolution system using the Ethereum blockchain, aiming to create a more interconnected digital environment through a decentralized application. They consider the Ethereum blockchain an ideal platform for developing such a decentralized web application, as it offers the necessary tools for its creation. Each complaint is represented by a smart contract that is executed on the Ethereum blockchain. The process begins when a user submits a complaint, providing the necessary details, which are then stored in the Ethereum blockchain. A complaint number is assigned to the user for tracking purposes. Only after the information is stored in the blockchain can officials access the complaint and take appropriate action. The complainant is kept informed about every step taken regarding their submitted complaint. [6] suggest a blockchain-based approach to handle grievances related to both cognizable and non-cognizable offenses.
When the police file an FIR (first information report), it will be encrypted, stored in the IPFS, and its hash will be added to the blockchain network. If the police choose not to file the FIR due to external pressure or deny receiving a complaint, the complainant will possess solid evidence against them, as the complaint and its timestamp will be securely stored on the blockchain network. They mention that by maintaining all records within an unalterable database, the possibility of tampering with FIR/NCR records and evading detection is eliminated. In their research, [7] introduced a technical solution to address the fact that many businesses lack the processes and infrastructure needed to fulfill several legal obligations concerning data subjects' rights, which results in manual labor and extended waiting periods for users. Data protection authorities receive complaints about these delays, but affected individuals cannot legally provide proof of request initiation, since these requests typically occur through company platforms or email. They proposed a blockchain-based application that enables secure submission and tracking of data access requests while maintaining data protection for the subjects, facilitating the filing of complaints and ensuring the enforcement of data protection rights.
Blockchain-Enhanced Labor Protection
113
3 Technical Overview
The implementation of the blockchain-based complaint platform incorporates several specific features and functionalities aimed at enhancing labor protection and streamlining the complaint management process. The platform caters to two types of users: employees and competent authorities. Employees can anonymously submit complaints through a mobile decentralized application (D-app) on their phones, providing their GPS coordinates via Geolocation and their blockchain account. Once recorded on the blockchain, complaints undergo a validation process in conjunction with an existing information system. The platform seamlessly integrates with this system, retrieving relevant data such as company names and shift details to ensure compatibility and avoid duplication. Transparency and anonymity are key aspects of the platform. Complaint data is securely stored on the blockchain, enabling authorized parties to access and verify information without compromising the identity of the complainants. Personal identifying information is not stored on the blockchain, safeguarding the privacy and protection of individuals involved in the complaints. Real-time validity of complaints is ensured through the utilization of smart contracts and automated validation mechanisms. To prevent misuse of the platform and ensure the authenticity of complaints, authorities are notified only when a certain number of complaints from different employees of the same company are validated. This measure helps maintain the integrity of the system and prevents false complaints from triggering unnecessary actions. By incorporating these features and functionalities, the blockchain-based complaint platform establishes a robust and reliable system for enhancing labor protection. It promotes accountability, empowers employees to voice their concerns, and contributes to the overall improvement of workplace conditions and fair competition.
The platform's implementation highlights the utilization of blockchain technology to create a transparent, secure, and efficient system for managing and addressing labor-related complaints.
3.1 Core Components
Blockchain network [8]: The network proposed for this platform is a hybrid blockchain network. Public networks have low efficiency and require user input in the form of payment for each recording. On the other hand, private networks are centralized, with the result that there is a possibility of the data being altered. Hybrid networks allow access to data from specific nodes, but because they have a decentralized architecture, the mutability of the data depends on the number of nodes. At the same time, they have high efficiency. The advantages of blockchain in this platform also stem from some of its critical features. For starters, blockchain has the feature of anonymity. As mentioned above, blockchain uses digital addresses that are created from a pair of public/private keys for each user and keep their identity hidden within the network. Anonymity is important
because the employee can be targeted by the company they work for if they make a complaint. Furthermore, due to its transparency, any record can be verified and, combined with its decentralized architecture, it is almost impossible to tamper with. This reduces the possibility of complaints being suppressed by company interests. Finally, the blockchain is a deterministic system that allows the automation of tasks, so that any comparison or calculation is performed with certainty. As for the location of the nodes, the stakeholders could be the branches of government agencies or ministries responsible for labor, employment, and social welfare issues, as well as the competent bodies for complaints. Anyone can access the network from the platform. The blockchain that is proposed is Ethereum-based [11] because it allows for the creation and deployment of smart contracts. Smart Contracts [9]: Smart contracts are the platform's specific logic that resides on the blockchain network. Smart contracts store all complaints and complete their validation. At the same time, smart contracts enable the automation of tasks: the appropriate logic is created so that any process done outside the network is automatically activated by a command from the smart contract. Blockchain Wallet: A digital address is needed for a user to communicate with the blockchain and smart contracts. Digital addresses are stored and can be used via a blockchain wallet. There are several wallets that employees can download to their device, with Metamask [10] being the best known. In the context of this research, each employee has an address that is assigned to one of their jobs. So, for example, if a user works in two companies, two different addresses will be used, one for each job.
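The one-address-per-employment rule can be illustrated with a short sketch. This is not part of the paper's implementation; the class and method names are hypothetical, and it simply enforces that each (employee, company) pair gets its own, never-reused digital address.

```python
# Illustrative sketch (all names hypothetical): enforce one unique,
# never-reused blockchain address per employment relationship.
class EmploymentAddressBook:
    """Maps (employee, company) pairs to distinct blockchain addresses."""

    def __init__(self):
        self._by_employment = {}   # (employee_id, company_id) -> address
        self._used_addresses = set()

    def register(self, employee_id, company_id, address):
        key = (employee_id, company_id)
        if key in self._by_employment:
            raise ValueError("this employment already has an address")
        if address in self._used_addresses:
            raise ValueError("address already used for another employment")
        self._by_employment[key] = address
        self._used_addresses.add(address)

    def address_for(self, employee_id, company_id):
        """Return the address for an employment, or None if unregistered."""
        return self._by_employment.get((employee_id, company_id))
```

A user working for two companies would register two distinct addresses, and re-registering an address for a second employment raises an error.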
Geolocation: In order to verify the user's location and whether they are in their workplace at the time they make the complaint, there must be access to the GPS of their device to obtain their geographic coordinates. For the platform to work properly, it is assumed that Geolocation works with an accuracy suitable for the system. Decentralized Applications (D-apps) [9]: A mobile D-app is proposed for the employees so they can make their complaints, while a web D-app can be used by the authorities to receive updates about complaints concerning companies and, at the same time, allow them to change the status of the companies, which will be discussed further in the next sub-section. External Information System and Oracle [11]: As mentioned above, the platform operates in collaboration with an already existing information system. Oracles are channels of communication between the blockchain and the outside world without the need for human action. In this work the Oracle has three uses. The first is to listen to the smart contract: when the smart contract receives a complaint, the Oracle hears it and acts accordingly. The second is to validate the complaints: complaints are validated in the Oracle and the smart contract, as discussed further below. The third is to send the appropriate information to the blockchain to complete the validation of the complaint.
3.2 Technical Analysis of the Scenario
For the platform to function properly, the following must apply:
– GPS for geographic coordinates must work accurately.
– It must be possible to add the following elements to the already existing information system:
  – The geographic coordinates of the company
  – The total number of employees
  – The limit of complaints needed to inform the authorities
  – The digital addresses of the employees for each company, matched with the employees
  – The identification number (ID) of the company
  – A record for every employee of the start and the finish of their shift (this could be through the use of a digital work card or something similar)
– Each employee has a unique address for each job. So if they work for two companies, they will have two different addresses. An employee can never have the same digital address for two employments.
– At the beginning of an employee's employment in a company, the employee should create an address themselves, which they add to their data in the information system, not in the blockchain network.
– At the definitive end of an employee's employment with a company, the company is obliged to delete the employee from the information system. At that moment, the former employee's digital address should also be automatically deleted.
– Companies do not have access to employees' digital addresses through the information system.
– Companies should have more than 10 employees.
– Companies should add their geographical coordinates and their ID to the information platform.
– Authorities have their own address.
– The contract owner and agents are non-profit entities.
As mentioned in the requirements of the platform, companies must have registered in advance, in the information system, their geographical coordinates and their company ID. When a new employee starts their employment in a company, they must create a digital address using their blockchain wallet (e.g. Metamask) and store it in the information system.
The company where they work does not have access to the employee's digital address. Suppose the employee uses the D-app on their smartphone and makes a complaint (1). Using the blockchain wallet and their digital address, they execute a transaction to the blockchain network for the complaint. At this moment the D-app requests permission to record the time and location (the geographical coordinates of the user) on the network. In the smart contract responsible for the complaints, located on the blockchain, the complaint is recorded and matched with a unique number, along with the timestamp, location, and digital address of the employee. The complaint must then be validated.
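The recording step (1) can be sketched as a plain-Python stand-in for the on-chain ComplaintService contract. The class and field names here are illustrative assumptions, not the authors' actual contract code.

```python
import itertools
import time

class ComplaintService:
    """Python stand-in for the on-chain ComplaintService contract:
    each complaint receives a unique number plus the timestamp, location,
    and digital address of the employee, and starts out unvalidated."""

    def __init__(self):
        self._counter = itertools.count(1)
        self.complaints = {}

    def submit(self, employee_address, latitude, longitude, timestamp=None):
        """Record a complaint and return its unique number.
        In the real platform this step also emits the event (2)
        that the Oracle listens for."""
        number = next(self._counter)
        self.complaints[number] = {
            "address": employee_address,
            "location": (latitude, longitude),
            "timestamp": timestamp if timestamp is not None else time.time(),
            "validated": False,
        }
        return number
```

Submitting a complaint returns its tracking number; validation flips the `validated` flag only after the Oracle round-trip described below.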
After the registration of the complaint, an event (2) is emitted to the blockchain. The event contains the address of the employee and the number of the complaint. Outside of the blockchain there is an Oracle mechanism that listens for that particular kind of event (there are other kinds of events). When the Oracle hears the event, it activates and sends a request to the information system to find the necessary information about the employee and the company they work for, based on the employee's digital address (3). Because the digital address of the employee is unique and tied to only one company, false data cannot be obtained under normal conditions (an abnormal condition would be for the user's digital signature to be changed by someone who has access to the information system's database). Then two scenarios may follow. The first is that the employee's digital address does not exist in the information system, which means that the user is no longer an employee of a company, so the complaint will not be validated. In the second case the data exist and are received by the Oracle (4). The data received are the following:
– The last record where the employee started their shift (Starting Time)
– The last record where the employee finished their shift (Finishing Time)
– The limit of complaints needed to inform the authorities (Complaints Limit)
– The ID of the company
– The geographic coordinates of the company
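The lookup in steps (3) and (4) might be sketched as follows. The record layout and field names are hypothetical stand-ins for whatever the external information system actually exposes; the point is only the two scenarios: an absent address yields no data, a present address yields the five fields listed above.

```python
# Hypothetical stand-in for the external information system queried by
# the Oracle; field names and values are illustrative, not from ERGANI.
INFO_SYSTEM = {
    "0xEMP1": {
        "starting_time": "08:00:00",       # last recorded shift start
        "finishing_time": "16:00:00",      # last recorded shift finish
        "complaints_limit": 5,             # Complaints Limit for the company
        "company_id": "GR-12345",
        "company_coords": (37.9838, 23.7275),
    },
}

def oracle_fetch(employee_address):
    """Steps (3)/(4): look up the employee's record by digital address.
    Returns None when the address is absent (the user is no longer an
    employee of any company), so the complaint is not validated."""
    return INFO_SYSTEM.get(employee_address)
```

The Oracle then forwards the returned record to the blockchain in a signed transaction, as described next.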
The data are then processed to be sent to the blockchain. The Oracle then sends the data for validation to the blockchain with a transaction. The Oracle has this ability because, like employees, it has a digital address. Specifically, in the blockchain there is a second smart contract responsible for which addresses are considered Oracles. If someone tries to send incorrect data to the blockchain, their transaction will be canceled because their digital address is not registered as an Oracle. Then the decision is made to validate the complaint (5). To do this, the data from the Oracle pass through an algorithm that is responsible for the validation decision. The algorithm works as follows:
– First, it checks whether the employee is really in their workplace, allowing a small positional deviation.
– Then the Starting Time, Finishing Time, and Complaint Time (the time the employee made the complaint) are converted into seconds.
– Then the algorithm checks whether the Starting Time is greater than the Finishing Time.
– If it is, this means that the recording works normally and overtime (if any) will be recorded in the system.
– If not, the Finishing Time is compared with the Complaint Time.
– If the Complaint Time is later, there is a possibility of undeclared work, and the smart contract registers the complaint as valid.
– If any of the above do not apply, the transaction is canceled.
The algorithm then decides whether an event should be broadcast to the authorities (6). For this to happen, the number of complaints from different
employees of the same company must be greater than the Complaints Limit. After each complaint validation, the following are recorded and updated in the smart contract:
– The total number of complaints against a company
– The number of complaints by an employee against the company they work for (each employee can only make one complaint per day against each company they work for)
– The number of complaints from different employees against the same company
So if the number of complaints from different employees exceeds the Complaints Limit, an event is emitted to the blockchain that notifies the authorities. At the same time, the company is marked for investigation. All events are displayed in the web D-app of the authorities. The authorities, then, using their digital address and the company ID, can declare companies as Companies with Incomplete Work Registration or Companies with Full Work Registration by making a transaction on the blockchain (7) (Fig. 1).
Fig. 1. Workflow of the proposed complaint system.
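The validation decision (5) can be sketched as follows. The `"HH:MM:SS"` time encoding and the per-axis coordinate tolerance (in degrees) are assumptions made for illustration; the paper does not specify these details.

```python
def to_seconds(hhmmss):
    """Convert an 'HH:MM:SS' time string to seconds since midnight."""
    h, m, s = (int(part) for part in hhmmss.split(":"))
    return h * 3600 + m * 60 + s

def within_workplace(employee_coords, company_coords, max_deviation=0.001):
    """Check the employee is at the workplace, allowing a small positional
    deviation (here a naive per-axis tolerance in degrees)."""
    return (abs(employee_coords[0] - company_coords[0]) <= max_deviation
            and abs(employee_coords[1] - company_coords[1]) <= max_deviation)

def validate_complaint(starting, finishing, complaint,
                       employee_coords, company_coords):
    """Decision step (5): True when the complaint points to possible
    undeclared work, False when the transaction should be canceled."""
    if not within_workplace(employee_coords, company_coords):
        return False
    start_s, finish_s, complaint_s = map(
        to_seconds, (starting, finishing, complaint))
    if start_s > finish_s:
        # The last punch-in is newer than the last punch-out: the shift
        # is still open, so the recording works normally -> cancel.
        return False
    # Shift is closed: a complaint made after the recorded finishing
    # time indicates possible undeclared work.
    return complaint_s > finish_s
```

For example, a complaint made at 17:30 from the workplace, against a shift recorded as 08:00–16:00, is validated; the same complaint made at 15:00, or from outside the workplace, is not.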
3.3 Smart Contract Architecture
There are three smart contracts on the platform, called ComplaintService, Oracles, and Authorities. These three contracts are deployed by an entity called the Owner, who can use specific methods of the contracts that the rest of the users cannot. ComplaintService was explained in detail in the technical analysis of the scenario. Oracles has methods that allow the contract Owner to declare and un-declare addresses
as Oracles. The Authorities contract works in exactly the same way: the Owner of the contract can declare addresses as Authorities. Only addresses that are declared Authorities are able to declare companies as Companies with Incomplete Work Registration or Companies with Full Work Registration (Fig. 2).
Fig. 2. Roles of the entities.
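The Owner-gated Oracles and Authorities contracts can be sketched as a single plain-Python registry class rather than Solidity; the method names are illustrative, and the same pattern is instantiated twice, once per role.

```python
class RoleRegistry:
    """Python sketch of the Oracles and Authorities contracts: only the
    Owner (the deploying account) may declare or un-declare addresses."""

    def __init__(self, owner):
        self.owner = owner
        self._members = set()

    def declare(self, caller, address):
        if caller != self.owner:
            raise PermissionError("only the Owner may declare addresses")
        self._members.add(address)

    def undeclare(self, caller, address):
        if caller != self.owner:
            raise PermissionError("only the Owner may un-declare addresses")
        self._members.discard(address)

    def is_member(self, address):
        return address in self._members

# Two instances model the two role contracts:
oracles = RoleRegistry("0xOWNER")
authorities = RoleRegistry("0xOWNER")
```

A transaction from an address not in `oracles` (for validation data) or not in `authorities` (for declaring a company's registration status) would be rejected, mirroring the cancellation behavior described in Sect. 3.2.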
3.4 Real Life Use Case
For an example of a real-life use case of the system discussed above, the information system ERGANI will be used. ERGANI is used in Greece and is managed by the Ministry of Labour and Social Affairs (MLSA). The nodes of the blockchain should be located in the branches of the MLSA, in the Labor Inspection Body, and in the National Social Insurance Agency. The Labor Inspection Authority is an independent administrative body aimed at monitoring the implementation of labor legislation, with the objective of safeguarding both workers' rights and their safety and health. Additionally, the Labor Inspection Authority is responsible for investigating, independently and in parallel with the social insurance organizations, the legal insurance coverage of workers, playing a significant role in combating undeclared work in Greece. ERGANI is used in conjunction with the digital work card. When an employee starts and finishes their shift at a company, they punch their card. The starting and finishing times of the employee's shift are stored in the ERGANI information system. By making the necessary changes to ERGANI, as described in the requirements in Sect. 3.2, the complaint platform can easily be integrated with ERGANI. If an employee wants to use the complaint platform to make a complaint, they can simply use the D-app mentioned in Sect. 3.1 from their smartphone. As described in Sect. 3.3, if the Complaints Limit is surpassed for a certain company, the Labor Inspection Body will be notified. After the proper investigation is done, it can declare the company as a Company with Incomplete Work Registration or a Company with Full Work Registration, depending on the result of the investigation (Fig. 3).
Fig. 3. Example of the location of blockchain nodes if the platform was used in Greece.
4 Discussion
The main concern regarding the blockchain-based complaint platform is the potential impact of malicious users. Specifically, when a user submits a complaint outside the D-app, they have the ability to input arbitrary geographical coordinates. This poses a problem, as it allows users to falsely report incidents by submitting the coordinates of their workplace without actually being present at the location. However, the significance of this issue is relatively limited for two reasons. Firstly, the platform relies on the collective input of numerous users, so a substantial number of users would have to engage in malicious behavior for a significant problem to arise. Secondly, even if the labor inspectorate is notified of alleged illegal work based on these false complaints, they are obligated to conduct on-site investigations to verify the claims. Thus, when they visit the company, they would discover that the complaints were indeed false and declare the company as having a complete work record. Although it was assumed that the authorities act without personal benefit, this cannot be technically guaranteed in the proposed solution, due to the lack of an intermediate stage where evidence for the decisions of the authorities is entered into the blockchain. Nonetheless, as the number of complaints continues to increase, and all the information is transparently recorded, appropriate measures can be taken to address this issue. The implementation of a blockchain-based complaint platform for labor protection entails various challenges and limitations that must be carefully considered. Scalability is a challenge due to the inherent limitations of blockchain technology in terms of transaction throughput and storage capacity. Ensuring that the platform can handle a potentially large number of complaints while validating them in a timely manner without compromising performance is essential.
Exploring scalability solutions such as off-chain data storage or layer-two solutions can help mitigate these limitations effectively. Resistance from employers or regulatory bodies is yet another potential challenge. Employers may hesitate to adopt the platform due to concerns about
increased scrutiny or potential negative impacts on their reputation. Likewise, regulatory bodies may vary in their acceptance and understanding of blockchain technology, which could impede its adoption. Proactive communication and collaboration with stakeholders are vital to address their concerns, provide clarity on the benefits, and demonstrate how the platform enhances labor protection while ensuring compliance with existing regulations. Furthermore, user adoption and participation are critical to the platform's success. Encouraging employees to actively use the platform and submit complaints may prove challenging. To overcome this, it is essential to design a user-friendly interface and provide clear instructions to facilitate easy and anonymous complaint submissions. Education and awareness programs can also play a vital role in helping employees understand the benefits of the platform, emphasizing how it can protect their rights and improve workplace conditions. Integration with existing labor protection systems presents another layer of complexity that should not be overlooked. Compatibility issues, data synchronization, and system interoperability need to be carefully addressed during the implementation phase to ensure a seamless integration process and avoid disruptions in existing processes. Despite these challenges, the blockchain-based complaint platform holds great promise for transforming labor protection, fair competition, and overall workplace conditions. Leveraging the transparency and accountability provided by blockchain technology, the platform establishes a decentralized and tamper-resistant system for recording complaints and validating overtime work. This enhanced transparency promotes a more equitable environment, where companies are held accountable for their employment practices, ensuring compliance with labor regulations and fostering fair competition.
The platform's ability to facilitate more effective enforcement of labor regulations is a key benefit. By providing verifiable evidence of labor violations and streamlining the complaint resolution process, it equips authorities with the necessary tools to investigate and address them promptly and efficiently.
5 Conclusion
In conclusion, this paper introduces a cutting-edge complaints platform designed to address the issue of unfair competition resulting from incomplete recording of employee work by companies. The proposed platform revolutionizes the complaint reporting process by leveraging real-time recording of geographical coordinates and timestamps, thereby enhancing the credibility and accuracy of complaints. By harnessing the power of Blockchain technology, the platform ensures user anonymity even after filing a complaint, while simultaneously fostering transparency and integrity, making it arduous to tamper with or manipulate the complaints. Importantly, this platform seamlessly integrates with any existing information system, provided that the conditions outlined in Sect. 3 are met. Overall, this innovation presents a robust and efficient solution to combat unfair competition, empowering individuals to report grievances while safeguarding their privacy and upholding the principles of fairness and accountability.
Blockchain-Enhanced Labor Protection
121
Acknowledgments. The work reported in this paper has been partially funded by University of West Attica.
References
1. OSHA. https://www.dol.gov/agencies/oasam/centers-offices/ocio/privacy/osha/ois. Accessed 15 Mar 2023
2. BAuA. https://www.baua.de/EN/Home/Home_node.html. Accessed 15 Mar 2023
3. ERGANI. https://www.hli.gov.gr/ergasiakes-scheseis/ergodotes-ergasiakes-scheseis/ergani/p-s-ergani/. Accessed 15 Mar 2023
4. Rahman, M., Azam, M.M., Chowdhury, F.S.: An anonymity and interaction supported complaint platform based on blockchain technology for national and social welfare. In: Proceedings of the 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, pp. 1-8. IEEE Press, New York (2021). https://doi.org/10.1109/ICECIT54077.2021.9641269
5. Jattan, S., Kumar, V., Akhilesh, R., Naik, R.R., Sneha, N.S.: Smart complaint redressal system using Ethereum blockchain. In: Proceedings of the 2020 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Udupi, India, pp. 224-229 (2020). https://doi.org/10.1109/DISCOVER50404.2020.9278122
6. Hingorani, I., Khara, R., Pomendkar, D., Raul, N.: Police complaint management system using blockchain technology. In: Proceedings of the 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, pp. 1214-1219 (2020). https://doi.org/10.1109/ICISS49785.2020.9315884
7. Schmelz, D., Pinter, K., Brottrager, J., Niemeier, P., Lamber, R., Grechenig, T.: Securing the rights of data subjects with blockchain technology. In: Proceedings of the 2020 3rd International Conference on Information and Computer Technologies (ICICT), San Jose, CA, USA, pp. 284-288 (2020). https://doi.org/10.1109/ICICT50521.2020.00050
8. Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H.: An overview of blockchain technology: architecture, consensus, and future trends. In: Proceedings of the 2017 IEEE International Congress on Big Data (BigData Congress) (2017). https://doi.org/10.1109/BigDataCongress.2017.85
9. Antonopoulos, A.M., Wood, G.: Mastering Ethereum: Building Smart Contracts and DApps. O'Reilly (2018)
10. Metamask. https://metamask.io/. Accessed 15 Mar 2023
11. Beniiche, A.: A study of blockchain oracles. In: Distributed, Parallel, and Cluster Computing (cs.DC) (2020). https://doi.org/10.48550/arXiv.2004.07140
IoT-Based Intelligent Medical Decision Support System for Cardiovascular Diseases
Nadjem Eddine Menaceur1,2(B), Sofia Kouah1,2, and Makhlouf Derdour1,2
1 IAOA Laboratory, Computer Sciences Department, University of Oum El Bouaghi, Route of Constantine, BP 321, 04000 Oum El-Bouaghi, Algeria
{nadjemeddine.menaceur,sofia.kouah,derdour.makhlouf}@univ-oeb.dz
2 Department of Mathematics and Computer Sciences, University of Oum El Bouaghi, Oum El-Bouaghi, Algeria
Abstract. Cardiovascular diseases (CVDs) represent serious threats to human health, causing considerable problems for the healthcare ecosystem. Medical Decision Support Systems (MDSS) have emerged as important instruments against various illnesses. However, intelligent MDSS face substantial obstacles in interpreting complex medical data, handling uncertainty in noisy and imprecise data, avoiding overfitting, and meeting the need for lightweight solutions. This comprehensive review study offers a thoughtful strategy for improving the effectiveness, interpretability, and portability of MDSS for CVD. It combines previous studies and systems, highlighting their advantages. A speculative proposal for a new MDSS is discussed based on this study. The proposed approach merges the Internet of Medical Things (IoMT), Artificial Intelligence (AI), Cloud Computing, and Fuzzy Logic. While the scope of this assessment does not extend to a detailed design, the suggested system has the potential to enhance patient care and outcomes in CVD. Keywords: Intelligent MDSS · Cardiovascular Diseases · Internet of Medical Things · Fuzzy Logic
1 Introduction
Cardiovascular diseases (CVDs) are a significant cause of morbidity and mortality worldwide, accounting for an estimated 17.9 million deaths each year [11]. Timely identification and prompt intervention play a crucial role in enhancing patient outcomes. However, traditional approaches to managing CVD have relied on invasive procedures and self-assessments, leading to inconsistencies and potential delays in diagnosis and treatment [5]. Recent developments in technology, mainly the integration of artificial intelligence (AI) and the Internet of Things (IoT), have transformed the medical landscape, providing clinicians with access to real-time patient data to inform their decision-making and provide personalized treatment plans. Medical Decision Support Systems (MDSS) have emerged as a promising solution, leveraging these advancements to improve the accuracy of diagnoses and recommendations [6]. The increasing importance of MDSS
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 122–126, 2023. https://doi.org/10.1007/978-3-031-44146-2_12
for the effective diagnosis and treatment of CVD has been highlighted in several works [3, 8–10, 12] in the literature. In this paper, we aim to provide a comprehensive overview of developing an intelligent MDSS for CVD that integrates multiple sources of patient data, including medical devices, electronic health records, and blood analysis. Our research aims to specify the need for more accurate and timely diagnoses of CVD, and to address the relevant issues that should be undertaken for the development of an MDSS. Through our research, we seek to contribute to the advancement of medical technology by proposing a practical approach to developing an intelligent MDSS for CVD, which can help healthcare professionals make more accurate diagnoses and provide clear and interpretable results for medical decision-making. The remainder of the paper is structured as follows: In sub-sect. 1.1, we provide the studied context. Sub-sect. 1.2 presents the problem statement and motivation for our research. The objective of our study is outlined in sub-sect. 1.3. In Sect. 2, we review the state of the art in MDSS, and the methodology and expectations for our research are described. Finally, Sect. 3 presents our conclusions and recommendations for future research.
1.1 Context
In order to improve disease detection and prevention, our research focuses on integrating Smart Healthcare Systems with the Internet of Medical Things (IoMT). We use high-tech algorithms and wearables with sensors to collect and analyze data. Using a distributed strategy and federated learning (FL) models, we enable individualized care by leveraging the Internet of Things (IoT) and Cloud Computing. Through the promotion of participative, personalized, and predictive medicine, our work seeks to transform healthcare.
1.2 Problematic and Justification
CVDs are a major global health issue, with one person dying every 34 seconds in the United States from one [2].
Medical Decision-Making is considered a sequence of three stages: Diagnosis, Prognosis, and Therapeutic Decision. Our work focuses on the diagnosis stage, particularly for patients with CVD. This stage involves identifying a specific disease by analyzing a range of different data, including signs, symptoms, clinical examinations, laboratory and imaging tests, and functional tests. We are working to address technical and operational challenges related to the development of such systems, as well as evaluating their effectiveness in clinical settings. Our research also addresses the challenge of transforming intricate health information into actionable insights for medical staff. Patients with CVDs need early detection and management, including psychological support and medication, as needed. Therefore, our proposed system will provide a rigorous process for diagnosing CVDs to ensure timely intervention and management.
1.3 Objective
Our research's main objective is to create a state-of-the-art medical decision support system (MDSS) that is tailored exclusively for patients with cardiovascular diseases
124
N. E. Menaceur et al.
(CVD). We suggest merging wearable, non-invasive technologies, cloud-based solutions, the Internet of Medical Things (IoMT), and artificial intelligence (AI) to accomplish this. The system is intended to speed up the identification of CVDs and individualize their care by optimizing the diagnostic phase. We are working to establish a data flow that is simple to understand and supports decision-making, while remaining aware of the requirement for healthcare professionals to effectively evaluate medical data. Our technology aims to support medical professionals' diagnosis and treatment choices by converting complex health data into understandable, useful insights. This patient-centred approach complements our ultimate goal of improving patient outcomes and healthcare delivery.
2 State of the Art

Recent technological advances, such as the Internet of Things (IoT), Machine Learning (ML), Deep Learning (DL), and Cloud Computing, have aided the development of sophisticated medical decision support systems (MDSS) for cardiovascular diseases (CVDs). Lin et al. [4] highlight the importance of wearable sensors able to capture various heart-related signals for non-invasive monitoring of CVDs. This expanded monitoring options and opened up a research field in multimedia signal detection and noise reduction. Adopting IoMT technology in the future could enable early prevention of CVD by providing personalized tracking of cardiovascular health. The authors focus on the effectiveness of artificial intelligence, especially Convolutional Neural Networks (CNN) and Artificial Neural Networks (ANN), for disease classification because of their accuracy and sensitivity. Moshawrab et al. [7] study the application of smart wearable devices in cardiovascular disease diagnosis and prognosis, underlining the importance of explainable AI and federated learning for interpretation; the significance of wearables being non-invasive, low-energy, and affordable is also emphasized. Cloud computing is likewise recommended: Zhen et al. [12] propose CareEdge, a system for edge-Cloud integration and efficient utilization of resources in IoT scenarios, demonstrated through an ECG-based heartbeat detection system. Experimental results show that CareEdge has lower latency than similar frameworks. Amirkhani et al. [1] emphasize the application of fuzzy cognitive maps (FCMs) in enhancing medical decision support systems (MDSSs). FCMs are strong tools for modelling complicated systems, used in several scientific domains to simplify complex decision-making, diagnosis, prediction, and classification tasks by integrating Fuzzy Logic with Artificial Neural Networks, as in the adaptive neuro-fuzzy inference system.
Their unique ability to make sophisticated MDSSs in the medical field more interpretable underscores their potential for future advancements. Through an analysis of prior research aimed at improving the diagnostic accuracy of cardiovascular diseases, it has become evident that Internet of Things (IoT) devices and medical data collection are highly compatible. This compatibility allows clinical patient data to be collected quickly and automatically, with user-friendly equipment for monitoring health and vital signs. Furthermore, data analysis techniques based on FL have been used to create lightweight and responsive models that emulate physician-like medical diagnoses with greater accuracy (see Fig. 1).
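As a rough sketch of the FCM inference that work like Amirkhani et al. [1] builds on, concept activations can be propagated through a signed causal-weight matrix until they stabilize. The three-concept map, its weights, and the concept names below are invented for illustration only; they are not taken from the cited study:

```python
import math

def fcm_infer(weights, state, steps=50, tol=1e-5):
    """Iterate a fuzzy cognitive map until concept activations stabilize.

    weights[i][j] is the causal influence of concept i on concept j
    (in [-1, 1]); state holds initial concept activations in [0, 1].
    """
    n = len(state)
    for _ in range(steps):
        nxt = []
        for j in range(n):
            # each concept aggregates its own value plus weighted inputs
            total = state[j] + sum(weights[i][j] * state[i]
                                   for i in range(n) if i != j)
            nxt.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid squashing
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

# Hypothetical 3-concept map: chest pain -> CVD risk, ECG anomaly -> CVD risk
W = [[0.0, 0.0, 0.7],   # concept 0: chest pain
     [0.0, 0.0, 0.8],   # concept 1: ECG anomaly
     [0.0, 0.0, 0.0]]   # concept 2: CVD risk (output concept)
final = fcm_infer(W, [0.9, 0.8, 0.0])
print(round(final[2], 3))  # stabilized activation of the CVD-risk concept
```

The interpretability noted above comes from this structure: each weight is a human-readable causal statement, so the final activation can be traced back to the concepts that drove it.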
IoT-Based Intelligent Medical Decision Support System
125
Fig. 1. IoT-Based Intelligent Medical Decision Support System for Cardiovascular Diseases
Future Works

Our aim is to improve the accuracy of cardiovascular disease diagnosis by leveraging lightweight models and wearable devices, supported by cloud technology, federated learning, and high-quality FCMs. Lightweight models reduce resource consumption, while wearable devices facilitate data collection in a non-invasive and convenient manner. Cloud technology enables the processing of large amounts of data and distributes the analysis, while fuzzy logic enhances diagnostic accuracy by accounting for uncertainty and imprecision in the data, supporting medical decision-making.
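The FL aggregation step behind such distributed lightweight models can be sketched as federated averaging: each site trains locally and only parameters, never raw patient data, are combined. The two-parameter "risk model" and the hospital dataset sizes below are hypothetical, for illustration only:

```python
def fed_avg(client_params, client_sizes):
    """Federated averaging: combine per-client parameter vectors into a
    global model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[k] * n for p, n in zip(client_params, client_sizes)) / total
            for k in range(dim)]

# Hypothetical: three hospitals train a 2-parameter risk model locally
clients = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sizes = [100, 300, 600]
global_params = fed_avg(clients, sizes)
print(global_params)  # weighted global parameters; raw records never leave a site
```

The privacy benefit for CVD data follows directly: only these small parameter vectors cross the network, which is also what keeps the approach lightweight on wearable-class devices.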
3 Conclusion

In conclusion, the development of medical decision support systems (MDSS) using innovative technologies such as IoMT, together with FL and DL algorithms running on Cloud Computing, has gained significant attention in recent years. The study of various works in this field shows that MDSS can provide accurate and reliable predictions for heart disease and assist in timely diagnosis and treatment. Moreover, wearable devices that aid diagnosis and provide accurate data while remaining user-friendly and non-invasive are an important element of current research. In addition, the interpretability of the diagnosis depends on fuzzy models. These advancements in MDSS hold great potential to improve healthcare services and patient outcomes, and continued research in this area is crucial for further developments in the field.
References

1. Amirkhani, A., Papageorgiou, E.I., Mohseni, A., Mosavi, M.R.: A review of fuzzy cognitive maps in medicine: taxonomy, methods, and applications. Comput. Meth. Programs Biomed. 142, 129–145 (2017). https://doi.org/10.1016/j.cmpb.2017.02.021
2. Centers for Disease Control and Prevention, National Center for Health Statistics: About multiple cause of death, 1999–2020. CDC WONDER Online Database website (2022). Accessed 21 Feb 2022
3. Hasanova, H., Tufail, M., Baek, U.J., Park, J.T., Kim, M.S.: A novel blockchain-enabled heart disease prediction mechanism using machine learning. Comput. Electr. Eng. 101, 108086 (2022)
4. Lin, J., Fu, R., Zhong, X., Yu, P., Tan, G., Li, W., Zhang, H.: Wearable sensors and devices for real-time cardiovascular disease monitoring. Cell Reports Physical Science 2(8), 100541 (2021). https://doi.org/10.1016/j.xcrp.2021.100541
5. Matias, I., et al.: Prediction of atrial fibrillation using artificial intelligence on electrocardiograms: a systematic review. Comput. Sci. Rev. 39, 100334 (2021)
6. Miyachi, Y., Ishii, O., Torigoe, K.: Design, implementation, and evaluation of the computer-aided clinical decision support system based on learning-to-rank: collaboration between physicians and machine learning in the differential diagnosis process. BMC Med. Inform. Decis. Mak. 23(1), 26 (2023)
7. Moshawrab, M., Adda, M., Bouzouane, A., Ibrahim, H., Raad, A.: Smart wearables for the detection of cardiovascular diseases: a systematic literature review. Sensors 23(2), 828 (2023). https://doi.org/10.3390/s23020828
8. Satpathy, S., Mohan, P., Das, S., Debbarma, S.: A new healthcare diagnosis system using an IoT-based fuzzy classifier with FPGA. J. Supercomput. 76(8), 5849–5861 (2020)
9. Stepanyan, I.V., Alimbayev, C.A., Savkin, M.O., Lyu, D., Zidun, M.: Comparative analysis of machine learning methods for prediction of heart diseases. J. Mach. Manuf. Reliab. 51(8), 789–799 (2022)
10. Vincent Paul, S.M., Balasubramaniam, S., Panchatcharam, P., Malarvizhi Kumar, P., Mubarakali, A.: Intelligent framework for prediction of heart disease using deep learning. Arab. J. Sci. Eng. 47(2), 2159–2169 (2022)
11. World Health Organization: Cardiovascular diseases. https://www.who.int/health-topics/cardiovascular-diseases#tab=tab_1. Accessed 30 Mar 2023
12. Zhen, P., Han, Y., Dong, A., Yu, J.: CareEdge: a lightweight edge intelligence framework for ECG-based heartbeat detection. Procedia Comput. Sci. 187, 329–334 (2021)
Embracing Blockchain Technology in Logistics Denis Sinkevich, Anton Anikin(B)
, and Vladislav Smirnov
Volgograd State Technical University, Volgograd, Russia [email protected]
Abstract. In today’s increasingly interconnected world, the logistics and supply chain management industry faces numerous challenges, such as inefficient processes, lack of transparency, and vulnerability to fraud. This paper explores the potential of blockchain technology as an innovative solution to address these issues and optimize logistics processes. By leveraging the unique features of blockchain, such as decentralization, immutability, and smart contracts, we discuss the transformative effects on key aspects of logistics operations, including traceability, transparency, data integrity, and security. The paper highlights various use cases and real-world applications of blockchain in logistics, emphasizing its benefits in reducing operational costs, improving efficiency, and fostering trust among stakeholders. Finally, we address the challenges and limitations of adopting blockchain technology in the logistics industry, providing recommendations for overcoming these hurdles and paving the way for a more sustainable and efficient future in logistics and supply chain management. Keywords: Logistics · Blockchain · Decision-making support
1 Introduction

Logistics and supply chain management (SCM) play a critical role in the global economy, as they facilitate the efficient flow of goods and services between producers, intermediaries, and consumers. The logistics function within SCM focuses on the planning, implementation, and control of the transportation, storage, and distribution of products to ensure the timely and cost-effective delivery of goods. As global trade continues to expand and supply chains become more complex, the importance of efficient logistics systems cannot be overstated. However, the industry faces numerous challenges, such as increasing customer demands, volatile markets, and the need for greater sustainability. One of the most pressing issues in logistics and SCM is the lack of transparency and visibility throughout the supply chain. This limitation leads to inefficiencies, increased costs, and the risk of fraud, which ultimately hampers the competitiveness of businesses. To overcome these challenges, organizations are increasingly exploring advanced technologies, such as decision-making support systems, modern knowledge models, artificial intelligence (AI), and blockchain, to enhance their logistics processes. Decision-making support systems can assist organizations in making more informed choices by providing timely and accurate information, analytics, and recommendations. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 127–133, 2023. https://doi.org/10.1007/978-3-031-44146-2_13
128
D. Sinkevich et al.
These systems have been used to optimize various logistics operations, such as transportation routing, inventory management, and demand forecasting [4, 8, 11]. In addition, AI has emerged as a powerful tool for automating complex tasks and improving decision-making in logistics. Machine learning algorithms, for example, can analyze vast amounts of data to predict customer demands, identify bottlenecks, and suggest optimal solutions for various supply chain challenges [9]. This can be attributed to the intricate nature of global supply chains, which encompass a wide array of stakeholders, divergent interests, and a significant number of third-party intermediaries – obstacles that are effectively addressed by blockchain technology [1, 5, 7]. Blockchain technology, with its decentralized, transparent, and immutable nature, presents a promising solution for addressing many of the inherent issues in logistics and SCM [12, 13]. By providing a secure and tamper-proof record of transactions, blockchain can enhance traceability and transparency throughout the supply chain. Furthermore, the use of smart contracts can automate various processes, such as payments and compliance, improving efficiency and reducing the potential for human error. In this paper, we delve into the current state of logistics and supply chain management, focusing on the challenges faced by the industry and the potential of advanced technologies like decision-making support systems, AI, and blockchain in addressing these issues. We examine the existing literature, real-world applications, and future prospects for the integration of these technologies, with the aim of providing a comprehensive understanding of their role in shaping the future of logistics and supply chain management.
2 Blockchain in Logistics Blockchain is a distributed database consisting of a “chain of blocks”, where the storage devices for blocks are not connected to a central server. The database allows for the verification of transactions’ authenticity without the supervision of any financial regulators. The primary advantages of this technology include decentralization, immutability, and consensus – a set of rules required for the approval of transactions within the network. There are four main types of blockchains: 1. Public – allows for free entry of new participants, and everyone has equal rights within the network. 2. Private (e.g. Ripple [14]) – a central authority determines the participants and assigns their rights within the network. 3. Hybrid – data and rights are divided into private, determined by the central authority, and public, accessible to all participants. 4. Consortium – functions as an oligopoly in economics. Several large companies manage a single blockchain and benefit from shared responsibility. Blockchain technology possesses a range of advantages that allow for its wide application in the logistics sector. These advantages include: 1. Simultaneous access to up-to-date data by all network participants; 2. High fault tolerance; 3. High level of security;
4. Rapid user identification process;
5. Swift asset identification, with the asset's status recorded in the blockchain;
6. Preservation of a complete history of an asset's state;
7. Automation of processes using smart contracts, capable of altering an asset's state in the chain and accessing external sources.
Based on the list of advantages of blockchain technology, we can conclude that this technology is beneficial for the logistics sector. As each participant in the chain has access to current information about an asset, this will contribute to reducing the risks of errors and delays in coordination. Smart contracts can automate resource-intensive processes, such as document management, complex financial calculations and indicators, business-logic-related processes, and many other procedures. Smart contracts also enable the exclusion of a third party - an intermediary - from the logistics process chain. Considering the aforementioned points, the implementation of blockchain technology in the logistics sector can reduce financial costs and time expenditures, ultimately contributing to increased company profits. The overall interaction process of enterprise logistics processes is depicted in Fig. 1.
Fig. 1. Opportunities for the application of blockchain technology in the field of logistics
As can be seen from the figure, each business process will be transformed into an information data flow and directed to the distributed blockchain ledger, which is responsible for preserving data and transforming it according to business conditions. Despite its advantages, blockchain technology also has drawbacks. The main disadvantage of blockchain technology is its low performance relative to high-load systems. Given this, blockchain technology cannot be used in logistics sectors where high requirements are imposed on the speed of processing operations. To mitigate the issue of low performance, the most optimal approach for the logistics sector is to use a private type of blockchain technology. One of the most popular implementations of private blockchain technologies is Hyperledger Fabric [2], which has several frameworks that can be utilized depending on the conditions. This technology offers flexible configuration and can cover all necessary cases of automation and optimization of logistics processes.
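The immutability property that makes such ledgers tamper-evident can be illustrated with a minimal hash chain. This is a didactic Python sketch of the underlying idea, not Hyperledger Fabric code; the asset identifiers and states are invented for the example:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A minimal block: a payload plus the hash of the previous block."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash; any tampered block breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"data": block["data"], "prev": prev}, sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

# Hypothetical asset history recorded as a chain of state changes
chain = [make_block({"asset": "A-17", "state": "produced"}, "0" * 64)]
chain.append(make_block({"asset": "A-17", "state": "shipped"}, chain[-1]["hash"]))
print(verify(chain))                # True: history is intact
chain[0]["data"]["state"] = "lost"  # tamper with a past record
print(verify(chain))                # False: tampering is detected
```

Because each block's hash covers the previous block's hash, rewriting any past state change invalidates every later block, which is exactly the property used above to resolve disputes about an asset's history.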
3 Implementation of Smart Contract Technology Through Blockchain and Ways of Integrating It into Logistics

Efforts to implement smart contracts in the field of logistics are being made worldwide. In [6], a model of intelligent smart contracts based on Ethereum was developed, which can be adapted to any other blockchain. The purpose of this product is to increase the profit of all parties by automating the fulfillment of obligations, eliminating third parties, and reducing the influence of human factors on the contract. Implementation is most advantageous for small and medium-sized businesses (a niche without oligopoly) in a balanced market environment, where companies can earn a decent income [10]. In [3], a testbed was developed for integration into the ERP systems of large and medium-sized businesses, which decentralizes information about the manufacturer, making it available in a peer-to-peer network for transparency of data origin. In [15], researchers made a significant contribution to the development of models for the decomposition of business processes in logistics for the implementation of smart contracts. Their work examined the prerequisites for the digital transformation of logistics processes and created a model for the development of smart contract systems in transport networks. This model provides a basis for the rational implementation of smart contract technology, which, in this case, is integrated at the basic level and distributes risks among participants. The necessary conditions for implementation were described, oriented towards the main factors. This research laid the foundation for possible adaptation to other fields of activity. To exemplify the implementation of blockchain technology in logistics, let us consider a simple scheme of production and delivery of goods to the end consumer.
Each interested party - the manufacturer, the consumer, and the distributor - has access to a tool (a program capable of interacting with the final platform) and interacts according to a specific scheme:

1. The consumer orders the desired goods through an application that can be installed in a browser or on a mobile phone;
2. The manufacturer and distributor receive a notification that the consumer is ready to place an order;
3. The platform awaits the distributor's response, confirming their readiness to deliver the order to the end consumer;
4. The platform awaits the manufacturer's response, confirming their readiness to produce the required product for the consumer;
5. The platform notifies the consumer that the manufacturer and distributor are ready to fulfill their obligations;
6. The manufacturer informs the consumer and distributor about the product's readiness time;
7. The manufacturer notifies the consumer and distributor about the product's readiness;
8. The distributor picks up the finished product from the manufacturer and informs the consumer about it.

The above-described process is illustrated in Fig. 2. The scheme presented above may seem overly simplistic for implementing blockchain technology to improve logistics processes, but if there are dozens of different manufacturers, distributors, and consumers
Fig. 2. Scheme of interaction of the logistics chain - consumer, manufacturer, distributor, platform
involved in the business process, and the production process itself is multi-stage and dependent on the production of another product, each party needs a single source of truth to possess up-to-date information about the product's status. Blockchain technology is well-suited to solve this problem, as all changes in the state of an object are recorded and cannot be altered, which can also be used to resolve legal conflicts. In addition, data stored on blockchain servers - nodes - is distributed among them, leading to higher fault tolerance and improved data security. Moreover, all data is kept consistent by the consensus mechanism. Smart contracts are used to program the behavior of data flow in blockchain technologies, facilitating changes in the overall data state on each blockchain node. When a smart contract is executed, its code runs on every node. This, in turn, affects performance: instead of being executed once, the code is executed as many times as there are blockchain nodes connected to the network. Additionally, it takes time for each node to reach a common agreed-upon state. These factors can be limiting in some cases for the implementation of blockchain technology. To minimize performance issues, a private, distributed blockchain network should be chosen, where all nodes belong to a group of individuals involved in the business process. This approach has several advantages compared to public blockchain networks, such as Ethereum, Solana, or Polygon:

1. There is no need to pay a fee for making changes to the data state, as all nodes belong to the group of individuals participating in the business process;
2. The speed of executing transactions is higher, as there are no data streams unrelated to the business processes, and network users can set the priority for executing certain operations themselves;
3. In public blockchain networks, the process of creating smart contracts is more time-consuming and costly, as once a smart contract is placed in the blockchain network, it cannot be changed if a vulnerability is discovered. In the case of private blockchain
networks, the smart contract can be modified with the agreement of the interested parties.
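The confirm/produce/deliver workflow described above can be sketched as the kind of state machine a smart contract would encode: only the agreed next step is a legal transition, and every transition is appended to an audit trail. This is illustrative Python, not real chaincode, and the state names are a simplification of the eight steps:

```python
class OrderContract:
    """Illustrative state machine for the consumer/manufacturer/distributor
    flow; a real smart contract would run this logic on every network node."""
    STATES = ["ordered", "distributor_confirmed", "manufacturer_confirmed",
              "in_production", "ready", "picked_up"]

    def __init__(self):
        self.state = "ordered"
        self.history = [self.state]  # append-only trail, as on a ledger

    def advance(self, new_state):
        # only the next state in the agreed sequence is a legal transition
        expected = self.STATES[self.STATES.index(self.state) + 1]
        if new_state != expected:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

order = OrderContract()
for step in ["distributor_confirmed", "manufacturer_confirmed",
             "in_production", "ready", "picked_up"]:
    order.advance(step)
print(order.history[-1])  # picked_up
```

Encoding the sequence this way is what removes the intermediary: no party can skip a confirmation step, and the shared history is the single source of truth referred to above.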
4 Conclusion

Blockchain technology, along with smart contracts, shows a positive development trend. According to Valutales, the global smart contract market is expected to grow by 18% annually up to 2028. Currently, smart contract creation tools are available only to large companies with the resources to invest in development. The precedent of a state platform with smart contracts will set the direction for new laws and services and increase the involvement of citizens and entrepreneurs in this field. This will, in turn, reflect the growing demand for software products, which must be standardized for the convenience of individuals with limited knowledge in this area, while also considering the flexibility of settings for complex interactions between large legal entities. Thus, there is a problem of consumer use of smart contracts due to the lack of an accessible tool. One possible solution is the development of a private-blockchain-based smart contract creation platform, which will have standardized functionality for consumer needs and focus on small businesses and/or individuals.
References

1. Alqarni, M.A., Alkatheiri, M.S., Chauhdary, S.H., Saleem, S.: Use of blockchain-based smart contracts in logistics and supply chains. Electronics 12(6), 1340 (2023). https://doi.org/10.3390/electronics12061340
2. Androulaki, E., et al.: Hyperledger fabric: a distributed operating system for permissioned blockchains. In: Proceedings of the Thirteenth EuroSys Conference. ACM (2018). https://doi.org/10.1145/3190508.3190538
3. Angrish, A., Craver, B., Hasan, M., Starly, B.: A case study for blockchain in manufacturing: “FabRec”: a prototype for peer-to-peer network of manufacturing nodes. Procedia Manufact. 26, 1180–1192 (2018). https://doi.org/10.1016/j.promfg.2018.07.154
4. Averchenkov, A.V., Averchenkova, E.E., Kovalev, V.V.: Characteristic features of support for making managerial decisions in the management system of logistics flows of transportation and storage complex. Proc. Southwest State Univ. 25(2), 107–122 (2021). https://doi.org/10.21869/2223-1560-2021-25-2-107-122
5. Chung, G. (ed.): Blockchain in Logistics. Perspectives on the upcoming impact of blockchain technology and use cases for the logistics industry. DHL Customer Solutions & Innovation (2018). https://www.dhl.com/content/dam/dhl/global/core/documents/pdf/glo-core-blockchain-trend-report.pdf
6. Ivashchenko, N.P., Shastitko, A.Y., Shpakova, A.A.: Smart contracts through the lens of the new institutional economics. J. Inst. Stud. 11(3), 064–083 (2019). https://doi.org/10.17835/2076-6297.2019.11.3.064-083
7. Jabbar, S., Lloyd, H., Hammoudeh, M., Adebisi, B., Raza, U.: Blockchain-enabled supply chain: analysis, challenges, and future directions. Multimedia Syst. 27(4), 787–806 (2020). https://doi.org/10.1007/s00530-020-00687-0
8. Jinbing, H., Youna, W., Ying, J.: Logistics decision-making support system based on ontology. In: 2008 International Symposium on Computational Intelligence and Design. IEEE (2008). https://doi.org/10.1109/iscid.2008.128
9. Karumanchi, M.D., Sheeba, J.I., Devaneyan, S.P.: Blockchain enabled supply chain using machine learning for secure cargo tracking. Int. J. Intell. Syst. Appl. Eng., 434–442 (2022). https://www.ijisae.org/index.php/IJISAE/article/view/2279/862
10. Koh, L., Dolgui, A., Sarkis, J.: Blockchain in transport and logistics – paradigms and transitions. Int. J. Prod. Res. 58(7), 2054–2062 (2020). https://doi.org/10.1080/00207543.2020.1736428
11. Kultsova, M., Rudnev, R., Anikin, A., Zhukova, I.: An ontology-based approach to intelligent support of decision making in waste management. In: 2016 7th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE (2016). https://doi.org/10.1109/iisa.2016.7785401
12. Paliwal, V., Chandra, S., Sharma, S.: Blockchain technology for sustainable supply chain management: a systematic literature review and a classification framework. Sustainability 12(18), 7638 (2020). https://doi.org/10.3390/su12187638
13. Park, A., Li, H.: The effect of blockchain technology on supply chain sustainability performances. Sustainability 13(4), 1726 (2021). https://doi.org/10.3390/su13041726
14. Schwartz, D., Youngs, N., Britto, A.: The ripple protocol consensus algorithm (2014)
15. Shul’zhenko, T.G.: Methodological approach to the reengineering of logistics business processes in the transport chains with the implementation of smart contracts. Manage. Sci. 10(2), 53–73 (2020). https://doi.org/10.26794/2404-022x-2020-10-2-53-73
Artificial Intelligence and Internet of Medical Things for Medical Decision Support Systems: Comparative Analysis Asma Merabet1,2 , Asma Saighi1,2(B)
, and Zakaria Laboudi1
1 Mathematics and Computer Sciences Department, University of Oum El Bouaghi, Oum El
Bouaghi, Algeria {asma.merabet,asma.saighi,laboudi.zakaria}@univ-oeb.dz 2 LIAOA Laboratory, University of Oum El Bouaghi, Oum El Bouaghi, Algeria
Abstract. Nowadays, Artificial Intelligence (AI) and the Internet of Medical Things (IoMT) are evolving very fast in the field of medical decision support systems (MDSS) to improve the efficiency of health services. According to a recent survey published by Intel and Converged Technologies, 73% of healthcare providers attest that AI may help individuals perform better. Real-time patient vital signs gathered by IoMT devices are used by AI to support medical decision-making and assist clinicians in predicting and diagnosing illnesses or in recommending efficient treatment and prognosis. This paper provides an overview of how IoMT and AI are being integrated into the health sector to support medical decision-makers and improve healthcare systems. It explores AI and IoMT in medical decision support systems, revealing insights and potential solutions for improving healthcare outcomes, and introduces novel methodologies to improve the accuracy, efficiency, and effectiveness of medical decision-making, contributing to the field's advancement and paving the way for future research and innovation. Keywords: Artificial Intelligence · Machine Learning · Deep Learning · Internet of Medical Things · Medical decision support system
1 Introduction

Medical diagnosis is a complex process that relies on a clinician's ability to use logical thought to identify symptoms. However, due to the potential inaccuracy and uncertainty of the information involved, this stage can be particularly challenging [1]. To assist doctors in making more informed decisions, innovative medical instruments have been developed utilizing IT and the systematic gathering of patients' data. One such instrument is the clinical support system, which utilizes decision support systems or expert systems to assist healthcare practitioners in making intelligent medical decisions such as medical prediction and diagnosis. The healthcare industry is facing unprecedented challenges, including rising costs, increasing demand for services, and a shortage of healthcare professionals. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 134–140, 2023. https://doi.org/10.1007/978-3-031-44146-2_14
In this context, the integration of AI and IoMT in medical decision support systems has the potential to transform the way healthcare services are delivered, improving the efficiency, accuracy, and accessibility of medical decision-making. AI and IoMT can help healthcare professionals to make more informed decisions, reduce errors, and provide personalized care to patients. As the provision of high-quality healthcare increasingly depends on the use of clinical support systems, there is a growing need for continued research in this area to develop innovative technology solutions that improve patient outcomes and support healthcare professionals in providing the best possible treatment. This study aims to discuss the current state of knowledge about the use of IoMT and the application of artificial intelligence (AI) in clinical support systems for medical decision-making. The rest of this paper is organized as follows: Sect. 3 presents background on AI, IoMT, and MDSS and on the use of AI and IoMT in the medical field; Sect. 4 gives a literature review of related works; and Sect. 5 concludes the paper and outlines some future challenges.
2 Research Questions and Motivation

Table 1 outlines the research questions on AI and IoMT integration in medical decision support systems and the motivation behind each.
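Research question 3 in Table 1 asks how algorithms such as K-nearest neighbor (KNN) compare in predicting specific conditions. As a minimal, self-contained illustration of the KNN classification scheme only, here is a hedged Python sketch; the toy "vitals" data and labels are entirely invented and carry no clinical meaning:

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points, using squared Euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train[i], query)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical toy vitals: (resting heart rate, systolic BP) -> risk label
X = [(62, 115), (70, 120), (95, 150), (102, 160), (68, 118), (99, 155)]
y = ["low", "low", "high", "high", "low", "high"]
print(knn_predict(X, y, (100, 158)))  # high
print(knn_predict(X, y, (65, 117)))   # low
```

As the motivation column of Table 1 notes, how well such a classifier performs in practice depends heavily on the task, the choice of features, and data quality.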
3 Background

3.1 Artificial Intelligence

AI refers to the implementation of tools and processes to simulate intelligent behavior and human-like thought patterns [2], and it is increasingly being integrated into medicine to improve patient care. AI has already infiltrated various aspects of modern life, such as video games, automated public transportation, and personal assistants like Siri, Alexa, and Google Assistant. Recently, AI has begun to be utilized in healthcare to provide decision-making support for clinical professionals [3]. AI offers doctors the opportunity to save time by letting machines analyze data and provide estimates. This enables doctors to intervene quickly and effectively, potentially leading to improved patient outcomes. It has been suggested that AI be used in the field of medical imaging to improve the reliability, coherence, and effectiveness of reported data. It offers the ability to locate lesions, carry out diagnostic procedures, and generate computerized medical reports [4].

3.2 Internet of Medical Things (IoMT)

The use of networked medical devices allows remote monitoring and management of patients, thanks to the Internet of Medical Things (IoMT), which is changing the healthcare sector. IoMT devices are used to gather, process, and send data in real time, providing the required clinical information to healthcare practitioners and allowing them to make better patient care decisions. This technology has the potential to raise patient satisfaction levels, lower overall healthcare expenditures, and improve patient outcomes. In addition to the requirement
A. Merabet et al.

Table 1. Description of the motivations behind the different research questions.
Research Question 1: How can AI and IoMT technologies be effectively integrated into medical decision support systems to improve diagnostic accuracy and treatment outcomes?
Motivation 1: The integration of AI and IoMT technologies into medical decision support systems has the potential to significantly improve diagnostic accuracy and treatment outcomes. By leveraging real-time patient data and advanced algorithms, clinicians can make more informed decisions and provide more personalized care to patients.

Research Question 2: What ethical considerations are associated with the use of AI and IoMT in medical decision support systems, and how can they be addressed?
Motivation 2: AI and IoMT in medical decision support systems raise ethical concerns such as patient privacy, algorithm bias, and automation replacing human decision-making. Clear guidelines and regulations are essential, while patients should be informed about AI and IoMT usage and have the option to opt out.

Research Question 3: How does the performance of different AI algorithms (such as K-nearest neighbor, artificial neural networks, or convolutional neural networks) compare in predicting specific diseases or medical conditions?
Motivation 3: AI algorithms are used in medical decision support systems, with K-nearest neighbor for classification, artificial neural networks for pattern recognition and prediction, and convolutional neural networks for image analysis and diagnosis. Performance varies based on task and data quality.

Research Question 4: What are the potential benefits and drawbacks of using IoMT devices for real-time patient monitoring and data collection in medical decision support systems?
Motivation 4: IoMT devices improve patient monitoring and data collection, but face issues of security, privacy, interoperability, and device malfunction. Clear guidelines and patient education are essential for healthcare use.
for adequate training and instruction for healthcare personnel to efficiently use these devices, there are worries about data privacy and security. Despite these obstacles, IoMT is positioned to play a significant role in the future of healthcare [5].
3.3 Medical Decision Support System
The automation of data-based medical diagnostic modeling can provide clinical decision support and enhance the efficacy of healthcare delivery in clinical settings [6]. Many definitions of Medical Diagnostic Decision Support Systems have been suggested, among others: a medical decision support system is an organized collection of information that assists the clinician in his or her reasoning to identify a diagnosis and choose the adequate therapy, establishing a conversation between the human and the machine [7]. Medical decision support systems (MDSS) are software programs that provide clinicians with timely and relevant information about a patient's clinical situation as well
Artificial Intelligence and Internet of Medical Things
as the appropriate information for that situation, correctly screened and displayed in such a way as to improve the quality of patient health care [7].
3.4 AI and IoMT in the Medical Field
Artificial intelligence describes the use of machines and techniques to simulate intelligent operation and human-like thinking [2]. Nowadays, on the one hand, AI is becoming part of our daily life in different forms, including personal assistants (Siri, Alexa, Google Assistant, etc.), as well as automated public transportation, aviation, and video games. More recently, AI has also started to be embedded in medicine for better patient care [3]. On the other hand, the Internet of Medical Things (IoMT) is a cutting-edge bio-analytical tool that brings together networked biomedical devices and software applications to effectively assist in healthcare activities. AI and IoMT in healthcare can provide medical decision support for doctors, as well as help to increase the autonomy of patients, for example diabetic patients. In addition, data analytic techniques can detect pathology and avoid subjecting patients to invasive examinations, as Thibault Pironneau explains [5]. In fact, by integrating automation, interfacing sensors, and machine learning-based artificial intelligence, IoMT devices enable healthcare monitoring without requiring human intervention. This technology links patients with clinicians through medical devices, allowing remote access to securely collect, process, and transfer medical data [8].
4 Related Work
Due to the importance of AI and IoMT in medical decision support systems, several researchers have directed their work towards this domain [9]. In this section, we review the relevant literature on clinical support systems, focusing on recent developments and trends in the field. A decision-making system has been proposed to classify community members and manage demand and disease outbreaks within the healthcare supply network. This system utilizes physicians' knowledge and a fuzzy inference system (FIS) to group users based on age range and preexisting conditions such as diabetes, heart problems, or high blood pressure. The FIS approach allows for non-linear input and output variables, enabling the determination of appropriate rules for different conditions. The healthcare chain in this study is a two-tiered system, consisting of service recipients (community members) and service providers (the healthcare system). The healthcare equipment and services provided by the healthcare system are treated as the product within this supply chain [9]. An alternative approach introduces an IoT-based fog computing model to diagnose patients with type 2 diabetes [10]. The system is applied to monitor and survey type 2 diabetes patients and to repeat examinations automatically. This work has three major layers: the cloud layer, the fog layer, and wearable IoT sensors. The researchers proposed a novel system based on a Wireless Body Area Network (WBAN) to take advantage of its wireless transmission characteristics, which can address the limitations of conventional systems. The proposed healthcare model comprises several components, with the WBAN or medical sensor node at its core. This node consists of a collection of physical devices containing hand-carried sensors and a small wireless module to collect data that can help physicians identify type
2 diabetes in its early stages. The smart e-health gateways on IoT devices are responsible for managing and collecting patient data, which is then summarized through apps and mobile devices to generate patient identifiers in the system. The data collected from the WBAN conveyor is delivered to the cloud via Wi-Fi and stored for processing and dissemination. Finally, the N-MCDM model is used by consultants and physicians to determine the severity of patients' type 2 diabetes [10]. Parthiban and Srivatsa proposed machine learning algorithms for the detection and analysis of heart disease, using Naive Bayes, which achieves 74% accuracy, and Support Vector Machine [11]. Other researchers proposed a cloud-based IoT clinical decision support system for CKD (Chronic Kidney Disease) prediction together with its severity level. The proposed framework collects patient data through IoT devices attached to the user, which is stored in the cloud with associated medical records from the UCI repository. In addition, they used a Deep Neural Network (DNN) classifier for the prediction of CKD and its severity level [12]. To evaluate the DNN classifier's classification results on the CKD dataset, a set of experiments was performed. The model was implemented using Python and Amazon Web Services (AWS), with the following experimental parameters: batch size 8, learning rate 0.02, epochs (step size) 10,000, score threshold 0.7, minimum dimension 600, and maximum dimension 1024 [12]. Uddin et al. conducted a study to evaluate the efficacy of different variants of the K-nearest neighbor (KNN) algorithm, namely Classic, Adaptive, Locally adaptive, k-means clustering, Fuzzy, Mutual, Ensemble, Hassanat, and Generalised mean distance, in predicting eight diseases.
The results of the study revealed that the Hassanat KNN showed the most promising performance in terms of accuracy, precision, and recall among the tested algorithms for disease prediction [13]. Ogundokun et al. introduced an IoMT-based diagnosis system for the detection of breast cancer. The system aimed to classify breast tissue as either malignant or benign, and the researchers utilized artificial neural network (ANN) and convolutional neural network (CNN) algorithms with hyperparameter optimization for classification. Additionally, a particle swarm optimization (PSO) feature selection approach was employed to identify the most informative features. Results showed that the proposed model achieved a high classification accuracy of 98.5% with CNN and 99.2% with ANN [14]. In a separate investigation, researchers focused on the duration of hospitalization in two major departments, the cardiac and surgery services. For the cardiac service, Lafaro et al. selected 8 out of 36 variables to create a DSS prediction model [15]. Although AI and IoMT have shown great potential in clinical support systems, several limitations need to be addressed. One major concern is the security of patient data, as the use of these technologies increases the risk of data breaches and unauthorized access. In addition, the effectiveness of the learning models used in these systems may be limited by the quality and quantity of available data. Another limitation is the potential for bias in the data or algorithms, which could result in inaccurate or unfair predictions. Finally, the high cost of implementing and maintaining these systems may also be a barrier to widespread adoption.
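To make the KNN-variant comparison of [13] concrete, the classic variant can be sketched in a few lines of pure Python: label a query point with the majority class among its k nearest training points. The toy feature vectors, labels, and the two-feature encoding below are hypothetical illustrations, not data from the study.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classic K-nearest-neighbour vote: return the majority class
    among the k training points closest to the query point."""
    # Euclidean distance between two feature vectors
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Sort training records by distance to the query and keep the k closest
    neighbours = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical patient records: (normalized glucose, normalized BMI) -> label
train = [
    ((0.9, 0.80), "diabetic"),
    ((0.8, 0.90), "diabetic"),
    ((0.7, 0.70), "diabetic"),
    ((0.2, 0.30), "healthy"),
    ((0.3, 0.20), "healthy"),
    ((0.1, 0.25), "healthy"),
]

print(knn_predict(train, (0.85, 0.75), k=3))  # → diabetic
print(knn_predict(train, (0.15, 0.20), k=3))  # → healthy
```

The variants compared in [13] differ mainly in the distance function and the neighbour-weighting scheme; swapping `dist` for, e.g., a generalised mean distance turns this sketch into a different variant.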
5 Discussion
This study explores the potential of AI and IoMT in medical decision support systems. Among the reviewed works, a fuzzy inference system (FIS) groups users based on age and preexisting conditions, allowing non-linear input and output variables. An IoT-based fog computing model for diagnosing type 2 diabetes, using wearable sensors, mobile devices, and cloud computing, achieved high accuracy and personalized treatment recommendations. A hybrid decision support system combines machine learning algorithms and expert knowledge for diagnosing and treating patients with chronic obstructive pulmonary disease (COPD). Overall, the reviewed literature highlights the potential of AI and IoMT to improve healthcare services by integrating multiple data sources. However, challenges such as data security, ethical concerns, and regulatory frameworks need to be addressed. Future research should focus on developing more robust and effective AI- and IoMT-based medical decision support systems.
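The FIS-style grouping recapped above can be sketched with hand-rolled triangular membership functions and Mamdani-style max-min rule evaluation. The age ranges, the per-condition load weight, the two rules, and the priority thresholds below are invented for illustration; they are not the physician-derived rules of [9].

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_group(age, conditions):
    """Illustrative FIS-style grouping: fuzzy age memberships combined with
    a 'condition load' for preexisting conditions (diabetes, heart problems,
    high blood pressure) into a coarse priority label."""
    elderly = tri(age, 50, 75, 120)          # membership in "elderly"
    middle = tri(age, 30, 45, 60)            # membership in "middle-aged"
    load = min(1.0, 0.4 * len(conditions))   # each condition raises the load
    # Mamdani-style max-min evaluation of two illustrative rules:
    # R1: IF age IS elderly OR load IS high      THEN priority IS high
    # R2: IF age IS middle-aged AND load IS high THEN priority IS medium
    high_deg = max(elderly, load)
    medium_deg = min(middle, load)
    if high_deg >= 0.7:
        return "high-priority"
    if max(high_deg, medium_deg) >= 0.3:
        return "medium-priority"
    return "low-priority"

print(risk_group(78, ["diabetes", "heart problems"]))  # → high-priority
print(risk_group(40, ["diabetes"]))                    # → medium-priority
print(risk_group(25, []))                              # → low-priority
```

The point of the fuzzy formulation, as noted in the related work, is that inputs like age enter as graded memberships rather than hard cutoffs, so the rule base behaves smoothly near category boundaries.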
6 Conclusion
In this paper, we have presented and discussed current works in the field of MDSS, focusing on the impact of AI and IoMT in supporting medical decisions. As the field of AI in healthcare continues to advance, it is crucial to address these challenges and concerns to ensure that the combination of AI and IoMT in MDSS is utilized in a responsible and effective manner. Several future challenges can be considered to enhance the efficiency of medical decision systems, among others: medical data security when using AI and IoMT in healthcare; ensuring that the use of AI in medical decision-making is transparent, fair, and free of bias; the implications of ethical hacking, especially when using cloud health services such as AWS; and the need to develop regulations and standards for the design and use of AI-based medical support systems.
References
1. Merabet, A., Ferradji, M.A.: Smart virtual environment to support collaborative medical diagnosis. In: 2022 4th International Conference on Pattern Analysis and Intelligent Systems (PAIS), pp. 1–6. IEEE (2022)
2. Bernelin, M., Desmoulin-Canselier, S.: Chapitre 2. L'intelligibilité des algorithmes dans les systèmes d'aide à la décision médicale. Journal international de bioéthique et d'éthique des sciences 32(2), 19–31 (2021)
3. Mintz, Y., Brodie, R.: Introduction to artificial intelligence in medicine. Minimally Invasive Therapy & Allied Technologies 28(2), 73–81 (2019)
4. Kaul, V., Enslin, S., Gross, S.A.: History of artificial intelligence in medicine. Gastrointest. Endosc. 92(4), 807–812 (2020)
5. Bouaziz, A.: Méthodes d'apprentissage interactif pour la classification des messages courts. Doctoral dissertation, Université Côte d'Azur (2017)
6. Henao, G., Anderson, J.: L'intelligence artificielle verte pour automatiser le diagnostic médical avec une faible consommation d'énergie. Doctoral dissertation, Université Côte d'Azur (2021)
7. Mazeau, L.: Intelligence artificielle et responsabilité civile: le cas des logiciels d'aide à la décision en matière médicale. Revue pratique de la prospective et de l'innovation 1, 38–43 (2018)
8. Manickam, P., et al.: Artificial intelligence (AI) and internet of medical things (IoMT) assisted biomedical systems for intelligent healthcare. Biosensors 12(8), 562 (2022)
9. Govindan, K., Mina, H., Alavi, B.: A decision support system for demand management in healthcare supply chains considering the epidemic outbreaks: a case study of coronavirus disease 2019 (COVID-19). Transp. Res. Part E Logist. Transp. Rev. 138, 101967 (2020)
10. Abdel-Basset, M., Manogaran, G., Gamal, A., Chang, V.: A novel intelligent medical decision support model based on soft computing and IoT. IEEE Internet Things J. 7(5), 4160–4170 (2019)
11. Shailaja, K., Seetharamulu, B., Jabbar, M.A.: Machine learning in healthcare: a review. In: 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 910–914. IEEE (2018)
12. Lakshmanaprabu, S.K., Mohanty, S.N., Krishnamoorthy, S., Uthayakumar, J., Shankar, K.: Online clinical decision support system using optimal deep neural networks. Appl. Soft Comput. 81, 105487 (2019)
13. Uddin, S., Haque, I., Lu, H., Moni, M.A., Gide, E.: Comparative performance analysis of K-nearest neighbour (KNN) algorithm and its different variants for disease prediction. Sci. Rep. 12(1), 1–11 (2022)
14. Ogundokun, R.O., Misra, S., Douglas, M., Damaševičius, R., Maskeliūnas, R.: Medical internet-of-things based breast cancer diagnosis using hyperparameter-optimized neural networks. Future Internet 14(5), 153 (2022)
15. Mekhaldi, R.N., Caulier, P., Chaabane, S., Chraibi, A.: Apports de l'intelligence artificielle à la prédiction des durées de séjours hospitaliers. PFIA, IA & Santé, pp. 1–7 (2019)
A Conceptual Framework for a Critical Approach to the Digital World: Integrating Digital Humanities and Informal Learning into Educational Design Maria-Sofia Georgopoulou(B) , Christos Troussas, and Cleo Sgouropoulou Department of Informatics and Computer Engineering, University of West Attica, Aigaleo, Greece {mgeorgopoulou,ctrouss,csgouro}@uniwa.gr
Abstract. Technology has opened up new possibilities in the way we live, communicate, and learn. The daily engagement with activities of the digital world and the re-conceptualization of citizenship highlight the need to connect school to the wider socio-cultural context, in order to prepare tomorrow's citizens to thrive in the local and global community. To this end, this paper introduces a conceptual framework, which consists of three main pillars: technology as a subject of critical analysis by the humanities, intentional formal and informal learning, and learning theories. The novelty of this study lies in the nature of the educational design, which is oriented towards shaping active, literate, and autonomous identities in the digital world through authentic student-centered learning practices that reflect current pedagogical approaches. It addresses secondary education, as little work has been undertaken in this area regarding Digital Humanities and the connection of formal and informal learning spaces. By providing a basis for transforming educational practice into an active, engaging, critical, and autonomous learning experience, the proposed framework has the potential to revolutionize the way we approach the digital world in education. Keywords: digital humanities · pedagogy · literacy · formal – non-formal – informal learning · learning theories
1 Introduction
Technologies have been introduced to education at the international level since the 1990s, but their use was reinforced by the outbreak of COVID-19, when emergency remote teaching took place [1, 2]. Apart from that, going through the third decade of the 21st century, technology seems to be inextricably linked to the daily life of young people, being one of the determining factors in shaping an individual's identity [3–5]. However, although today's students are considered "digital natives" [6] or, like digital resources, "born-digital", it seems that most of them do not have the necessary skills to be considered literate citizens in the digital world [7–9]. Such skills include the ability to
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 141–150, 2023. https://doi.org/10.1007/978-3-031-44146-2_15
create meanings, to read, analyze, and interpret visual stimuli (e.g. images and graphs) and their semiotic codes (e.g. layout and font size), to successfully navigate non-linear texts, to critically evaluate information retrieved from a web search, to understand the relationship between sociocultural contexts and dominant ideologies in multimodal texts, to formulate selection criteria for the abundance of information and sources, to share material and emotions wisely, to communicate effectively, and to use digital media safely [10–12]. So, it can be concluded that the skills required of an active literate citizen in the digital world do not focus on functional skills, but mainly on the cognitive skills required to access and properly utilize the information available in digital environments [13–15]. In view of the above, Digital Humanities (hereafter DH), which comes from the intersection and interaction of Computing and Humanities (C ↔ H) [16], should be viewed as "humanities in the digital age" [17]. In this perspective, DH deals with technology not only as a set of tools and practices that support the humanities (C → H), but also as a subject of critical analysis by the humanities (C ← H), aiming to develop critical thinking skills for the digital world [18]. This reasoning reflects Mittell's approaches to DH: a. applying computational methods to the humanities, b. using digital tools and platforms to facilitate or enrich learning in the humanities, and c. adopting a humanistic perspective to digital resources and practices [19]. Technological advances have become an integral part of the sociocultural landscape [20], transcending the limitations of space and time. The school, being a thriving micro-society, has the role of preparing students for this digital culture [14].
As today's students engage daily in a variety of activities in the digital world [21, 22], there is a need to extend classroom practices into everyday activity and vice versa, bridging the gap between students' experience in and out of school [11, 23, 24]. On this basis, we should not treat forms of learning as competing paradigms, but as complementary elements that compose a learning ecology, a holistic approach to learning that empowers individuals to achieve their full potential [25]. Consequently, it becomes necessary to renegotiate learning space and time, as well as learning goals. Thus, the learning process that takes place in an organized educational practice could be redefined, following theoretical traditions that outline the broader framework of educational principles and methods [26]. The cognitive and sociocultural learning theories are the dominant theoretical traditions that underlie later theoretical frameworks, such as connectivism [26, 27]. However, it is important to note that no single theoretical framework is a panacea, as different goals and needs lead to different educational decisions, while "elements of all of these approaches may be found in any learning event" [28]. The interaction of learning theories with the common thread of active, autonomous, and networked learning is essential in shaping citizenship in the digital age, in accordance with the DigComp framework suggested by the European Commission [15]. With this in mind, it is important to recognize that modernizing education goes beyond simply incorporating technology in educational practice; it also entails equipping learners with cognitive tools to evaluate digital resources critically and fostering self-directed learning [11, 14]. The ultimate objective of every educational design should be to cultivate literate identities that are capable of adapting to an ever-changing world
[8]. Therefore, this paper seeks to examine the organization of the learning process and the reconceptualization of learning space and time, with the aim of fostering an active literate citizenship in the digital culture within the context of DH.
2 Literature Review
Most of the studies that explored DH as a field of application tackled higher education [29, 30]. In addition, most DH studies and projects suggested that DH is mainly about using digital technologies and applying computational methods (C → H), when, in fact, the core value of the humanities, that is critical thinking involving socio-cultural criticism, is set aside for the sake of technological progress [31–34]. Nevertheless, a critical approach, including a critical analysis of digital resources and practices, suggests the apprehension and interpretation of technology in relation to the underlying social context in which it is developed and signified [10, 12, 32, 35]. Hence, this approach primarily aims to raise the level of awareness and cultivate independent identities capable of critically analyzing the constantly changing world [36, 37]. Moreover, although there is a considerable number of studies regarding informal learning, most of them focused on adult education and lifelong learning or on programs implemented to re-engage young people in learning [38–40]. In certain cases, the focus was placed on the use of e-tools and the social web, regarding their pedagogical value and the effect that informal practices had on formal learning outcomes [41–43]. However, most of these focused on higher education, while their critical aspect was not addressed. Another study [44], exploring democracy and participatory attitudes, examined informal learning practices for developing active citizenship in secondary education, but did not involve a digital perspective. The prevalence of formal learning in compulsory education is likely due to the fact that its results are immediately visible and can be evaluated, providing parents with information on their child's progress [28]. In contrast, the outcomes of informal learning are often ambiguous and difficult to measure [45].
Nevertheless, informal learning contexts offer a valuable and limitless source of heterogeneous experiences that shape one's identity [4]; they do not simply complement the institutionalized learning process, but also empower autonomous learning by transcending traditional restrictions, especially in recent years with the widespread use of web 2.0 [5, 16, 22, 46]. These digital spaces expand learning opportunities and span across space and time [47–49]. In this context, learning theories, serving as guides for the educational design, emphasize different aspects of the learning process, depending on the specific goals that are being pursued [50]. In terms of technology and knowledge, educational objectives tend to prioritize «know-what» and «know-how», which are complemented by «know-where» –through the principles of connectivism [27]–, while little attention has been paid to «know-why», i.e. the critical perspective of these concepts. However, a comprehensive understanding of any subject matter requires not only the acquisition of factual knowledge and practical skills, but also a critical perspective that enables learners to question and analyze the underlying principles and assumptions. In view of the above, the design and development of a DH pedagogy framework that prioritizes critical thinking and integrates various modes of learning in secondary
education, is an open challenge for this study. This innovative framework aspires to establish a foundation for educational design through authentic learning practices. Thus, it is expected to promote a critical humanistic approach to digital environments, tools, and resources in relation to the socio-cultural context [35, 51, 52], placing the student's experience at the center of the learning process.
3 Research Methodology
The aim of this case study is to investigate how learning design can foster a digital culture by redefining learning space and time, specifically within the context of DH. The following research questions clarify the general objective: RQ1: How could a humanistic perspective on digital resources and practices be best developed? RQ2: How could a different conceptualization of learning space and time affect the learning process? RQ3: How could the adapted principles of learning theories affect the effectiveness of the learning content, process, and outcome? RQ4: What skills and strategies could be developed within this pedagogical context? Design research, used to test and improve new systems, as well as to understand and evaluate their functionality, will guide the steps of this research. The key question in design research is "Will it work?", rather than "Is it valid?". The phases from a learning design perspective are: 1) preparing for the experiment, 2) conducting the experiment, and 3) applying retrospective analysis [53]. Design research, like action research, follows a circular process, where critical reflection can lead to new questions and thus to new research. Through retrospective analysis, new, improved ideas can be produced, which create the need to test them in educational practice; so, a new cycle of experimentation begins. Within the research cycle, a spiral process takes place, which concerns daily practice at the micro-level of the classroom, highlighting the adaptability of the overall learning process during a design experiment (see Fig. 1). The sample will be composed of students in secondary education from two to three Greek schools. Data collection will involve recording the observer's perspective through field notes, and gathering teachers' and students' perspectives through interviews. The use of triangulation will ensure the accuracy of the data by cross-checking and avoiding misinterpretations.
This approach will also enhance the reliability and internal validity of the research [54].
Fig. 1. Research design of the study
4 The Hybrid Learning Framework: Digital World as a Subject of Critical Analysis by the Humanities in Education
After investigating related work on DH and the different forms of learning, a hybrid learning framework was deployed; to our knowledge, the perspective of promoting a critical view of digital environments, tools, and resources in secondary education has not been approached before. The proposed framework is expected to contribute to the ongoing conversation about the role of technology in education through the lens of the humanities by examining ways in which learning can be optimized to cultivate literacy. The act of learning is embedded in every human activity [55]. Learning has often been associated with the image of an iceberg, the top of which consists of the visible and conscious learning outcomes gained from participating in structured educational programs, while the base of the iceberg consists of the invisible conscious and unconscious learning outcomes derived from everyday experience [28]. Hence, bridging formal and informal contexts of learning is crucial for young people to understand the linkage between educational institutions and lived experience in the digital world [3, 56]. At this point, it should be noted that non-formal learning, which is a gray area between the aforementioned forms of learning, includes activities that are not part of a course of study but are part of the school curriculum, such as school trips. The proposed framework draws inspiration from the aforementioned scheme and the Mobile-Blended Collaborative Learning (MBCL) model, which leverages technology to combine formal and informal learning [57]. This model emphasizes the potential of (mobile) technologies to foster a participatory structure and support an ecology of learning.
By establishing a two-way relationship, formal and informal learning act as complementary elements that create stimulating and interesting learning contexts by capitalizing on the strengths of each approach. To further refine our framework, we turn to the typology of learning by Vavoula et al. [58], which distinguishes between intentional and unintentional learning. The former encompasses both formal and informal learning, while the latter is primarily associated with informal learning, which is largely unspecified, incidental, and implicit. In this study, the focus will be on deliberate, self-directed learning, in which individuals consciously engage in learning activities [28]. Figure 2 presents the hybrid learning framework, which has been designed to provide valuable insights into the potential of a critical perspective of digital environments, tools,
and resources to prepare tomorrow's citizens for the challenges of the digital world. This framework comprises three main pillars: a) technology as a subject of critical analysis by the humanities, b) intentional formal and informal learning, and c) learning theories. The first pillar provides the content of the educational design, which focuses on a critical view of digital resources and practices. The second pillar sets the scene for implementing the educational design, suggesting that diverse learning environments are expected to encourage active learning, interaction, and critical thinking through authentic learning practices. Finally, the third pillar frames the whole, as learning theories involve a set of principles that guide the content and methodology of educational practice, reflecting the higher goals of current pedagogical approaches, namely critical, networked, and autonomous learning [59].
Fig. 2. The Hybrid Learning Framework
The following sub-sections describe the framework's structure on the basis of formal and informal learning.
4.1 Informal Learning
Informal learning will be facilitated through a forum, which has been developed with wpForo, the WordPress forum plugin for building online communities. This forum comprises separate discussion threads and can be displayed with various layouts. A simplified layout has been chosen for the categorization of thematic sections on diverse topics that examine the critical aspects of digital environments, tools, and resources. Additionally, a Q&A layout has been selected to provoke students' interest and encourage active engagement with challenging topics, as it enables peer voting, where students can rate each other's responses using up/down arrows. Access to the class forum will be restricted to the learning community, composed of students and instructors of secondary education, who will be granted access through passwords provided by the administrator, in order to protect minors and the content they are exposed to. The added pedagogical value of the forum lies in the fact that it is a dynamic web 2.0 platform that fosters a sense of community, provokes peer interaction, increases engagement, and develops a participatory and democratic discourse [45]. The forum
A Conceptual Framework for a Critical Approach to the Digital World
147
gives learners the opportunity to discuss challenging topics, react to content, share experiences, clarify arguments, reflect on different perspectives, research a specific area collaboratively, and develop evaluation criteria. Thus, the forum could serve as a bridge between classroom practices and informal learning.

4.2 Formal Learning

In the classroom, formal learning practices will involve conversations on current topics and relevant activities that align with the learning goals of the language course to elicit the higher-order skills of Bloom's taxonomy. Thematic topics outlined in the syllabus will serve as starting points for the educational design, while other subjects will provide more in-depth information on specific areas. A student-centered approach, in conjunction with the pedagogical objectives set by the syllabus, will be employed to pique students' interest, increase engagement, and stimulate critical thinking by diversifying pedagogical approaches through authentic interdisciplinary practices. This will create a dynamic, participatory, and interactive educational setting that can develop an open dialogue about why and how to adopt a critical view of digital environments, tools, and resources, to help students build an active literate citizenship.
5 Conclusion and Future Work

Gee [60] argues that, just as the inability to access school excludes children “from important opportunities to learn, interact with peers and develop their own identity”, the inability to meet the needs of the digital world excludes children from becoming citizens of their time. The choice of pedagogical paradigms, as a set of principles that direct the content and methodology of educational practices, reflects the meaning perspectives [61] that lie beneath teachers' beliefs about “What citizens do we wish to cultivate?” [62]. In order to promote an active literate citizenship that enables students to be part of the local and global community, a conceptual framework has been developed, incorporating learning theories and bridging formal and informal learning contexts. The principal aim of this framework is to foster a critical view of digital environments, tools, and resources in secondary school students through authentic learning practices, and to establish adaptability in an ever-changing world. This novel framework aspires to serve as a foundation for educational design in the humanities, in order to transform learning procedures and enhance the quality of education. Thus, the challenge lies in its practical application within educational settings, in order to gain valuable insights into its effectiveness and feasibility for diverse groups of students, as well as to detect potential gaps for its improvement.

Acknowledgements. This research and the doctoral thesis were supported by the Special Account of Research Grants of the University of West Attica through the scholarship program “Research Actions to Support PhD Candidates of the University of West Attica”. Moreover, Marisa would like to personally thank associate professor Athanasios Michalis for the inspiration and conception of the research.
148
M.-S. Georgopoulou et al.
References

1. Hodges, C., Moore, S., Lockee, B., Trust, T., Bond, A.: The difference between emergency remote teaching and online learning. Educause Rev. 27 (2020)
2. Troussas, C., Krouska, A., Giannakas, F., Sgouropoulou, C., Voyiatzis, I.: An alternative educational tool through interactive software over Facebook in the era of COVID-19. In: Novelties in Intelligent Digital Systems, pp. 3–11. IOS Press (2021)
3. Williams, B.T.: “Tomorrow will not be like today”: literacy and identity in a world of multiliteracies. J. Adolesc. Health. 51(8), 682–686 (2008)
4. Greenhow, C., Robelia, B.: Informal learning and identity formation in online social networks. Learn. Media Technol. 34(2), 119–140 (2009)
5. Vogels, E.A., Gelles-Watnick, R., Massarat, N.: Teens, social media and technology 2022. Pew Research Center, pp. 1–30 (2022)
6. Prensky, M.: Digital natives, digital immigrants. Horiz. 9(5), 1–6 (2001)
7. Eshet-Alkalai, Y., Chajut, E.: Changes over time in digital literacy. Cyberpsychol. Behav. 12 (2009). https://doi.org/10.1089/cpb.2008.0264
8. Koutsogiannis, D.: ICTs and language teaching: the missing third circle. In: Language, Languages and New Technologies, pp. 43–59 (2011)
9. Eshet, Y.: Thinking in the digital era: a revised model for digital literacy. Issues Informing Sci. Inf. Technol. 9(2), 267–276 (2012)
10. Eshet-Alkalai, Y.: Digital literacy: a conceptual framework for survival skills in the digital era. J. Educ. Multimedia Hypermedia 13(1), 93–106 (2004)
11. Hague, C., Payton, S.: Digital literacy across the curriculum. Curriculum Leadersh. 9(10) (2011)
12. Mihalis, A.: Digital literacy practice cultivation: a creative challenge for the modern school. Preschool Prim. Educ. 4(1), 165–181 (2016)
13. Casey, L., Bruce, B.C.: The practice profile of inquiry: connecting digital literacy and pedagogy. E-Learn. Digit. Media 8(1), 76–85 (2011)
14. Hicks, T., Turner, K.H.: No longer a luxury: digital literacy can’t wait. Engl. J., 58–65 (2013)
15. European Commission: The Digital Competence Framework 2.2 (2022)
16. Davidson, C.N.: Humanities 2.0: promise, perils, predictions. In: Gold, M.K. (ed.) Debates in the Digital Humanities, pp. 707–717. Minnesota Press, London (2012)
17. Piez, W.: Something called digital humanities. In: Terras, M., Nyhan, J., Vanhoutte, E. (eds.) Defining Digital Humanities: A Reader. Routledge (2016)
18. Berry, D.M.: Critical digital humanities. In: O’Sullivan, J. (ed.) The Bloomsbury Handbook to the Digital Humanities, pp. 125–136. Bloomsbury Publishing (2022)
19. Mittell, J.: Videographic criticism as a digital humanities method. In: Gold, M.K., Klein, L.F. (eds.) Debates in the Digital Humanities, pp. 224–242. Minnesota Press, London (2019)
20. Hobbs, R.: Create to Learn: Introduction to Digital Literacy. John Wiley & Sons (2017)
21. Sefton-Green, J., Marsh, J., Erstad, O., Flewitt, R.: Establishing a research agenda for digital literacy practices of young children: a white paper for COST Action IS1410, pp. 1–37. European Cooperation in Science and Technology, Brussels (2016)
22. Troussas, C., Krouska, A., Sgouropoulou, C.: Impact of social networking for advancing learners’ knowledge in e-learning environments. Educ. Inf. Technol. 26, 4285–4305 (2021)
23. Meyers, E.M., Erickson, I., Small, R.V.: Digital literacy and informal learning environments: an introduction. Learn. Media Technol. 38(4), 355–367 (2013). https://doi.org/10.1080/17439884.2013.783597
24. Lewin, C., Charania, A.: Bridging formal and informal learning through technology in the twenty-first century: issues and challenges. In: Voogt, J., Knezek, G., Christensen, R., Lai, K.-W. (eds.) Second Handbook of Information Technology in Primary and Secondary
Education. SIHE, pp. 199–215. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-71054-9_13
25. Jackson, N.J.: Learning ecology narratives. In: Lifewide Learning, Education and Personal Development e-book, pp. 1–26 (2013). Accessed 29 Jul 2022
26. Wenger, E.: Communities of Practice: Learning, Meaning, and Identity. Harvard University Press, Cambridge (1999)
27. Siemens, G.: Connectivism: a learning theory for the digital age. J. Instr. Technol. Distance Learn. 2(1) (2004)
28. Rogers, A.: The Base of the Iceberg: Informal Learning and Its Impact on Formal and Non-formal Learning. Verlag Barbara Budrich (2014)
29. Hirsch, B.D.: Digital Humanities Pedagogy: Practices, Principles and Politics. Open Book Publishers (2012)
30. Risam, R.: New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy. Northwestern University Press (2018)
31. Brier, S.: Where’s the pedagogy? The role of teaching and learning in the digital humanities. In: Gold, M.K. (ed.) Debates in the Digital Humanities, pp. 390–401. Minnesota Press, London (2012)
32. Liu, A.: Where is cultural criticism in the digital humanities? In: Gold, M.K. (ed.) Debates in the Digital Humanities. Minnesota Press, London (2012)
33. Pacheco, A.: Under the hood of digital humanities: toys or opportunities? In: Proceedings of the Sixth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 24–26 October 2018, pp. 253–257 (2018)
34. Hunter, J.: The digital humanities and ‘critical theory’: an institutional cautionary tale. In: Gold, M.K., Klein, L.F. (eds.) Debates in the Digital Humanities, pp. 188–194. Minnesota Press, London (2019)
35. Street, B.V.: What’s ‘new’ in New Literacy Studies? Critical approaches to literacy in theory and practice. Curr. Issues Comp. Educ. 5(2), 77–91 (2003)
36. Kress, G.: The profound shift of digital literacies. In: Gillen, J., Barton, D. (eds.) Digital Literacies: A Research Briefing by the Technology Enhanced Learning Phase of the Teaching and Learning Research Programme, pp. 6–8. London Knowledge Lab, Institute of Education, University of London (2010)
37. Pangrazio, L.: Reconceptualising critical digital literacy. Discourse Stud. Cult. Polit. Educ. 37(2), 163–174 (2016)
38. Hayes, D.: Re-engaging marginalised young people in learning: the contribution of informal learning and community-based collaborations. J. Educ. Policy 27(5), 641–653 (2012)
39. Nygren, H., Nissinen, K., Hämäläinen, R., De Wever, B.: Lifelong learning: formal, non-formal and informal learning in the context of the use of problem-solving skills in technology-rich environments. Br. J. Educ. Technol. 50(4) (2019)
40. Sulkunen, S., Nissinen, K., Malin, A.: The role of informal learning in adults’ literacy proficiency. Eur. J. Res. Educ. Learn. Adults 12(2), 207–222 (2021)
41. Kirkpatrick, G.: Online ‘chat’ facilities as pedagogic tools: a case study. Act. Learn. High. Educ. 6(2), 145–159 (2005)
42. Trinder, K., Guiller, J., Margaryan, A., Littlejohn, A., Nicol, D.: Learning from digital natives: bridging formal and informal learning. High. Educ. 1, 1–57 (2008)
43. Lucas, M., Moreira, A.: Bridging formal and informal learning – a case study on students’ perceptions of the use of social networking tools. In: Cress, U., Dimitrova, V., Specht, M. (eds.) EC-TEL 2009. LNCS, vol. 5794, pp. 325–337. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04636-0_31
44. Hoskins, B., Janmaat, J.G., Villalba, E.: Learning citizenship through social participation outside and inside school: an international, multilevel study of young people’s learning of citizenship. Br. Edu. Res. J. 38(3), 419–446 (2012)
45. Schugurensky, D.: “This is our school of citizenship”: informal learning in local democracy. Counterpoints 249, 163–182 (2006)
46. Wells, N.: Social media, research, and the digital humanities. In: O’Sullivan, J. (ed.) The Bloomsbury Handbook to the Digital Humanities, pp. 189–198. Bloomsbury Publishing (2022)
47. Song, D., Lee, J.: Has Web 2.0 revitalized informal learning? The relationship between Web 2.0 and informal learning. J. Comput. Assist. Learn. 30(6), 511–533 (2014)
48. Kumpulainen, K., Mikkola, A.: Toward hybrid learning: educational engagement and learning in the digital age. In: Elstad, E. (ed.) Educational Technology and Polycontextual Bridging, pp. 14–37. Springer (2016). https://doi.org/10.1007/978-94-6300-645-3_2
49. Krouska, A., Troussas, C., Sgouropoulou, C.: Usability and educational affordance of web 2.0 tools from teachers’ perspectives. In: ACM International Conference Proceedings Series: 24th Pan-Hellenic Conference on Informatics, pp. 107–110 (2020)
50. Schunk, D.H.: Learning Theories: An Educational Perspective, 8th edn. Pearson (2020)
51. Johanson, C., Sullivan, E., Reiff, J., Favro, D., Presner, T., Wendrich, W.: Teaching digital humanities through digital cultural mapping. In: Hirsch, B.D. (ed.) Digital Humanities Pedagogy: Practices, Principles and Politics, pp. 121–149. Open Book Publishers (2012)
52. Bonds, E.L.: Listening in on the conversations: an overview of digital humanities pedagogy. CEA Critic 76(2), 147–157 (2014)
53. Gravemeijer, K., Cobb, P.: Design research from a learning design perspective. In: Van den Akker, J., Gravemeijer, K., McKenney, S., Nieveen, N. (eds.) Educational Design Research, pp. 29–63. Routledge (2006)
54. Robson, C.: Real World Research, 3rd edn. Wiley, Chichester (2011)
55. Siemens, G., Rudolph, J., Tan, S.: As human beings, we cannot not learn. An interview with Professor George Siemens on connectivism, MOOCs and learning analytics. J. Appl. Learn. Teach. 3(1), 108–119 (2020)
56. Dewey, J.: School and society. In: Dewey on Education. Teachers College Press, New York (original work published 1899)
57. Lai, K.W., Khaddage, F., Knezek, G.: Blending student technology experiences in formal and informal learning. J. Comput. Assist. Learn. 29(5), 414–425 (2013)
58. Vavoula, G., Scanlon, E., Lonsdale, P., Sharples, M., Jones, A.: Report on empirical work with mobile learning and literature on mobile learning in science. Jointly Executed Integr. Res. Projects (JEIRP) D33, 2 (2005)
59. Oommen, P.G.: Learning theories – taking a critical look at current learning theories and the ideas proposed by their authors. Asian J. Res. Educ. Soc. Sci. 2(1), 27–32 (2020)
60. Gee, J.P.: Affinity spaces: how young people live and learn on line and out of school. Phi Delta Kappan 99(6), 8–13 (2018)
61. Mezirow, J.: Transformative Dimensions of Adult Learning. Jossey-Bass, San Francisco (1991)
62. Salomon, G.: It’s not just the tool but the educational rationale that counts. In: Elstad, E. (ed.) Educational Technology and Polycontextual Bridging, pp. 149–161. Springer (2016). https://doi.org/10.1007/978-94-6300-645-3_8
Optimizing Image and Signal Processing Through the Application of Various Filtering Techniques: A Comparative Study

Aliza Lyca Gonzales(B), Anna Liza Ramos(B), Jephta Michail Lacson, Kyle Spencer Go, and Regie Boy Furigay

Saint Michael’s College of Laguna, Old National Highway, Platero, Philippines
{aliza.gonzales,annaliza.ramos,jephta.lacson,kyle.go,regie.furigay}@smcl.edu.ph
Abstract. This study aimed to explore different filtering techniques in order to showcase the performance of the filters and improve image and signal detection accuracy. The filters were evaluated using performance metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE). These performance metrics were supported by a confidence interval assessment to statistically validate the results. The results demonstrated that the median filter for smoothing, Butterworth filter for sharpening, and Inverse filters for restoration showed promising results. In the case of EEG signals, the Least Mean Square (LMS) filter for adaptive filtering was found to be the best choice, while the Notch filter for Infinite Impulse Response (IIR) and the Coiflets3 for Wavelet Transform outperformed other filters. Therefore, these results serve as a reference guide for determining which filters should be applied for denoising image and signal datasets. Additionally, it is worth noting that when selecting a filter, practitioners should carefully consider the characteristics of the filter in relation to the type of noise, resolution, and lighting conditions.

Keywords: Image Processing · Smoothening Techniques · Sharpening Techniques · Restoration Techniques · Image Filtering · Signal Processing · Adaptive Filters · Wavelet Transform · Infinite Impulse Response
1 Introduction

In the fields of computer vision, image processing, and signal processing, filtering techniques play a critical role in achieving precise detection and recognition. These techniques are widely utilized to address diverse challenges, including the identification of circular shapes [1], facial expression recognition [2], enhancement of video and color processing data [3], and improvement of accuracy in speech and sound recognition systems [4]. Moreover, filtering techniques are employed to reduce noise in images caused by acquisition devices [5], information loss, and transmission issues resulting
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 151–170, 2023. https://doi.org/10.1007/978-3-031-44146-2_16
152
A. L. Gonzales et al.
from environmental factors [6]. In the domain of EEG signal processing, filtering techniques are also of significant importance. EEG signals inherently contain noise due to factors such as electrical interference, muscle activity, and artifacts. Filtering plays a vital role in denoising and enhancing the quality of EEG signals for accurate analysis and interpretation [7].

Several studies have been conducted to explore different filters and methods for enhancing accuracy in various applications. For example, the mean filter has been applied to low-resolution videos, but it often results in information loss [8]. However, when combined with the block mean filtering method in a multi-window structure, its performance improves [10]. On the other hand, the median filter excels at removing speckle and salt-and-pepper noise [11], yielding better results with fewer iterations [12]. While filters like the Gaussian may increase processing time and reduce image details [9], they are particularly effective for image deblurring [13] and accurate detection of circular shapes due to their robustness and precision [1]. Furthermore, a comparative study on Gaussian, median, and mean filters revealed the Gaussian as the superior performer [2]. Another filter, the Laplacian filter, exhibits better performance in image reconstruction [14], while the application of an inverse filter significantly enhances images by reducing noise and blurring based on the peak signal-to-noise ratio (PSNR) [15]. The inverse filter has also proven effective in restoring motion-blurred images [16]. Filters like the Wiener filter excel at removing motion blur, Poisson, and Gaussian noise but are less effective with salt-and-pepper and speckle noise [17]. Wiener filters are also well-suited for image restoration [18], effectively reducing Gaussian noise in degraded images [19]. Furthermore, various studies have shown that specific filters yield different performance outcomes in particular applications.
For example, in a comparison of adaptive filters for ECG signals, namely the Least Mean Squared, Normalized Least Mean Squared, and Recursive Least Mean Squared filters, the Normalized Least Mean Squared filter was identified as providing the most favorable results [20]. Similarly, a study that compared notch and Butterworth filters for denoising EEG signals concluded that the Butterworth filter performed better [21]. Another study evaluating different wavelet transform filters for denoising EEG signals indicated that the Symlet 4 and Coiflets3 filters exhibited superior performance [22]. Based on the findings of the existing studies regarding the performance of the mentioned filters, this study seeks to further examine their effectiveness. Our objective is to conduct a comprehensive assessment of these filters, considering their performance on images under different lighting conditions, noise types, and blurring techniques. Additionally, we will evaluate their performance on both the original EEG data and EEG data with integrated Poisson noise.
2 Method

The study involved the collection of a total of 480 images. Among these, 120 images were sourced from various internet sources and another 120 were built by the researchers. The researchers further augmented these 240 images by introducing artificial noise, such as Gaussian noise, speckle noise, salt and pepper noise, blurring, and motion blur. These images were subsequently converted to grayscale and cropped. They were used to evaluate the
Optimizing Image and Signal Processing Through the Application
153
performance of the filters in terms of smoothing, sharpening, and restoration techniques, with the objective of identifying the filters that perform well in different image scenarios. To quantify the outcomes of various filtering techniques, performance metrics such as Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Root Mean Square Error (RMSE), and Structural Similarity Index (SSIM) were applied. Additionally, confidence intervals were utilized as a statistical tool to further analyze the results of the performance metrics (Fig. 1).
Fig. 1. Conceptual Framework
In the context of signal processing, EEG datasets consisting of a total of 9,752 data points were employed to assess the performance of the filters in adaptive filtering, infinite impulse response filtering, and wavelet transforms. The filters were evaluated based on performance metrics including PSNR, MSE, and SSIM. In conducting these experiments, we assumed that the selected noise types, blurring techniques, resolutions, and lighting conditions accurately represent real-world scenarios. We aimed to encompass a broad range of challenges typically encountered in image and signal processing. Furthermore, for the convenience of utilizing this study, we developed an application that enables users to import their data. The application then generates the performance evaluation of the filters, providing an easy-to-use tool for researchers and practitioners.

2.1 Experimental Setup

The study involved categorizing the two types of datasets to be used in the experiment. To conduct the experiment, the Anaconda Python platform was utilized, incorporating several libraries including Python 3.11.3, NumPy (version 1.24.3), SciPy (version 1.10.1), Pandas (version 1.5.2), OpenCV (version 4.7.0), Matplotlib (version 3.4.2), sewar (version 0.4.5), scikit-image (skimage) (version 0.19.2), scikit-learn (version 1.2.0), and
PyWavelets (version 1.4.1). These libraries were employed for processing and evaluating both image and signal datasets. In this experiment, the researchers investigated the following parameters in order to determine the optimal settings for image and signal denoising (Tables 1 and 2).

Table 1. Parameters utilized in experimentation for image filtering.

Filters                      | Experimented Parameters
Mean Filter                  | Kernel 9
Median Filter                | Kernel 5
Gaussian Filter              | Kernel 5
Max-Min Filter               | Kernel 3
Ideal Low Pass Filter        | Cutoff frequency 50
Gaussian Low Pass Filter     | Cutoff frequency 50
Butterworth Low Pass Filter  | Cutoff frequency 100, order 10
Laplacian Filter             | Kernel 3
Ideal High Pass Filter       | Cutoff frequency 50
Butterworth High Pass Filter | Cutoff frequency 50, order 1
Wiener Filter                | Kernel 3
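As a hedged illustration of how kernel parameters like those in Table 1 translate into code, the sketch below applies a mean filter (kernel 9), a median filter (kernel 5), and a Gaussian filter (kernel 5) using SciPy. The study itself used OpenCV, so the function names and the sigma-from-kernel rule of thumb here are our assumptions, not the authors' exact calls.

```python
import numpy as np
from scipy import ndimage

def smooth_variants(img: np.ndarray) -> dict:
    """Apply mean (9x9), median (5x5), and Gaussian (5x5) smoothing to a grayscale image."""
    return {
        # Mean filter, 9x9 kernel: each pixel becomes the average of its window.
        "mean_k9": ndimage.uniform_filter(img.astype(float), size=9),
        # Median filter, 5x5 kernel: robust against salt-and-pepper outliers.
        "median_k5": ndimage.median_filter(img, size=5),
        # Gaussian filter; sigma ~ kernel/6 is a common (assumed) rule of thumb for a 5x5 kernel.
        "gauss_k5": ndimage.gaussian_filter(img.astype(float), sigma=5 / 6),
    }

noisy = np.zeros((32, 32), dtype=np.uint8)
noisy[::7, ::7] = 255  # sparse impulse ("salt") noise
out = smooth_variants(noisy)
```

Note how the median filter removes the sparse impulses entirely, while the mean filter only spreads them out; this is the behavior behind the median filter's strong salt-and-pepper results reported above.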
These are the parameters utilized in the experiments, which yield outstanding performance. These parameters have been thoroughly investigated to determine the optimal recommendations for each filter. To further evaluate the performance metric results, the researchers utilized R programming to perform the statistical analysis.

2.2 Building Datasets

This study utilized two image datasets. The first dataset consisted of 120 color images sourced from the internet, with sizes ranging from 452 × 678 to 1733 × 1155 pixels. An additional 120 color images were built by the researchers, captured using various cameras ranging from 8 MP to 48 MP, with resolutions varying from 960 × 720 to 2448 × 2448 pixels. Both image datasets included a wide range of lighting conditions, with 30 images per lighting condition. The lighting conditions encompassed flat light, backlight, split light, and dim light, with a particular emphasis on facial images.
Table 2. Parameters used in experimentation for signal filtering.

Filters                       | Experimented Parameters
Least Mean Squared            | Order 3, step size 0.000000001, weights 0.01
Normalized Least Mean Squared | Order 3, step size 0.000000001, weights 0.01
Recursive Least Mean Squared  | Order 3, step size 0.000000001, weights 0.01, delta 1
Butterworth Filter            | Order 5, cutoff frequency 50
Notch Filter                  | Center frequency 50, quality factor 30.0
Symlet 4                      | Wavelet symlet4
Haar                          | Filter level 4
Daubechies4                   | Wavelet db4, level 4
Bior2.6                       | Wavelet bior2.6, threshold 0.1
Coiflets3                     | Wavelet coif3, level 5
Discrete Meyer                | Wavelet dmey
Reverse Bior 6.8              | Wavelet bior6.8, level 5
Reverse Biorthogonal 2.8      | Wavelet bior2.8, level 5
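To make two of the Table 2 signal-filter categories concrete, here is a minimal sketch (not the authors' code) of an LMS adaptive noise canceller with order 3 and initial weights 0.01, plus a 50 Hz notch filter with quality factor 30 via SciPy. The toy 5 Hz signal, the interference amplitude, and the much larger step size µ = 0.05 are illustrative assumptions: the paper's µ = 1e-9 is scaled for raw EEG amplitudes, not for a unit-amplitude sinusoid.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 128                                  # sampling rate of the EEG dataset
t = np.arange(1024) / fs
clean = np.sin(2 * np.pi * 5 * t)         # toy 5 Hz "brain" rhythm (assumed)
ref = np.sin(2 * np.pi * 50 * t)          # mains reference fed to the canceller
noisy = clean + 0.8 * np.sin(2 * np.pi * 50 * t + 0.5)  # 50 Hz interference

def lms_cancel(d, x, order=3, mu=0.05, w0=0.01):
    """LMS adaptive noise canceller: subtract the part of d predictable from x."""
    w = np.full(order, w0)                # initial weights (Table 2 uses 0.01)
    out = np.copy(d)
    for n in range(order, len(d)):
        xv = x[n - order + 1:n + 1][::-1] # most recent reference taps x[n], x[n-1], ...
        e = d[n] - w @ xv                 # error = cleaned output sample
        w += mu * e * xv                  # stochastic-gradient weight update
        out[n] = e
    return out

lms_out = lms_cancel(noisy, ref)

# IIR notch at 50 Hz, Q = 30 (Table 2), applied forward-backward for zero phase.
b, a = iirnotch(w0=50, Q=30, fs=fs)
notch_out = filtfilt(b, a, noisy)
```

Once the LMS weights converge, both outputs should track `clean` more closely than `noisy` does; the notch achieves this without a reference signal, at the cost of removing everything near 50 Hz.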
As for the EEG signal dataset, it was obtained from a reliable source [26]. This dataset comprised EEG recordings with 9,752 frequency samples and 14 channels, and covered a time span of 241 s. The recordings were sampled at a rate of 128 Hz.

2.3 Image Pre-processing

To reduce computing needs, images are converted from RGB to grayscale and cropped to 2 × 2 dimensions to focus on a specific area that will be more noticeable when noise and filters are applied (Fig. 2).
Fig. 2. Original image to grayscale and cropped image
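A minimal NumPy sketch of this pre-processing step, assuming the standard ITU-R BT.601 luminance weights for the grayscale conversion and a center crop; the paper's exact conversion call and crop region are not specified, so both are illustrative.

```python
import numpy as np

def preprocess(rgb: np.ndarray, crop_frac: float = 0.5) -> np.ndarray:
    """Convert an RGB image to grayscale, then center-crop to crop_frac of each side."""
    # ITU-R BT.601 luminance weights (the same weighting OpenCV uses for RGB->gray).
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    h, w = gray.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return gray[top:top + ch, left:left + cw].astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, (720, 960, 3), dtype=np.uint8)
patch = preprocess(img)  # 360 x 480 grayscale center crop
```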
2.4 Noise and Blur Generation Process

Figure 3 depicts the three types of noise applied to the images: Gaussian, speckle, and salt and pepper. The noises differ in that Gaussian noise produces randomly recurring white dots that are more evident in dark and grayish sections of the image, whereas with speckle noise the white spots become sharper and larger. Salt and pepper noise is similarly randomly
dispersed, but its white and black dots are visible in both bright and dark environments. Furthermore, two blurring techniques were applied: Gaussian blurring, which smooths the image (particularly the edges), and motion blur, which produces a horizontal or vertical movement effect in the image.
Fig. 3. Images with noise and blur
To simulate common types of noise encountered in image processing, three distinct noise types were added to the original images: Gaussian noise, speckle noise, and salt and pepper noise. For Gaussian noise, random values were sampled from a Gaussian distribution with a mean of 0 and a standard deviation of 20, determining the intensity of the noise added to the image. Speckle noise applied a random value to each pixel in the image based on a probability parameter set at 0.07; this parameter controls the likelihood of a pixel being corrupted by speckle noise, allowing the noise level in the image to be adjusted. Salt and pepper noise, the third type, randomly scatters white and black pixels throughout the image: a specified number of pixels, in this case 5000 × 5000, were selected and assigned either the maximum pixel value of 255 or the minimum pixel value of 0. The chosen number of pixels determines the noise level in the image. To simulate blurs commonly encountered in image processing, two types of blur were added to the original images: Gaussian blurring, with a standard deviation of 20, and motion blurring, with a kernel size of 3. Furthermore, in the context of EEG signal filtering, particular attention was given to Poisson noise, which can be encountered in low-intensity EEG recordings (Fig. 4).
Fig. 4. EEG Signal after adding Poisson Noise.
Poisson noise was generated by adding random values sampled from a Poisson distribution with a parameter λ value of 5. The value of λ controls the intensity of the noise added to the EEG signals.
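The noise-generation steps above can be sketched with NumPy as follows. The parameters (Gaussian std 20, 5000 salt/pepper pixels, Poisson λ = 5) come from the text; the exact sampling details are our assumptions, and speckle noise is omitted because the text's probability-based formulation leaves its distribution unspecified.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian(img, sigma=20.0):
    """Additive Gaussian noise with mean 0 and std 20, clipped back to 8-bit range."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255).astype(np.uint8)

def add_salt_pepper(img, n_pixels=5000):
    """Scatter n_pixels white (255) and n_pixels black (0) pixels at random positions."""
    out = img.copy()
    for value in (255, 0):
        ys = rng.integers(0, img.shape[0], n_pixels)
        xs = rng.integers(0, img.shape[1], n_pixels)
        out[ys, xs] = value
    return out

def add_poisson(signal, lam=5.0):
    """Additive Poisson noise with lambda = 5, as described for the EEG channels."""
    return signal + rng.poisson(lam, signal.shape)

img = np.full((256, 256), 128, dtype=np.uint8)
g = add_gaussian(img)
sp = add_salt_pepper(img)
eeg = add_poisson(np.zeros(1024))
```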
2.5 Image Filtering Techniques

In this study, all images were subjected to filtering techniques to determine the most effective filter for reducing image noise. The filtering techniques included smoothing, sharpening, and restoration techniques, each serving a distinct purpose. The aim of this study is to provide valuable insights into the application of these filters and their respective efficacy.

For smoothing techniques, which reduce noise and blur in an image, various filters can be employed, including low-pass filters, Gaussian filters, max-min filters, mean filters, and median filters. Each of these filters utilizes different methods and approaches. Low-pass filters are effective in suppressing high-frequency components, thereby preserving smoother regions in the image. Gaussian filters, on the other hand, apply a weighted average to preserve edges while reducing noise. The max-min filter replaces each pixel with the difference between the maximum and minimum values within its local neighborhood, while the mean filter replaces each pixel with the average value of its neighboring pixels within a specified window size.

When it comes to sharpening techniques that enhance the edges of an image, there are several methods available. The Laplacian filter enhances edges by emphasizing intensity changes in the image. The Ideal and Gaussian High Pass filters enhance high-frequency components, thereby sharpening the edges, while the Butterworth High Pass filter offers greater flexibility in controlling the sharpness of the edges.

In terms of restoration techniques, the objective is to eliminate noise, blur, and other distortions from an image. The inverse filter aims to recover the original image by estimating the inverse of the degradation function. In cases where the inverse filter may be ill-conditioned, the pseudo-inverse filter provides a viable solution.
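The inverse and pseudo-inverse restoration idea just described can be sketched in the frequency domain with NumPy. This is an illustrative toy, not the authors' implementation: the 2-tap motion-blur kernel, the threshold eps, and the circular-convolution degradation model are all assumptions made so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))

# Hypothetical 2-tap horizontal motion-blur kernel (illustrative only).
h = np.array([[0.7, 0.3]])

# Degrade by circular convolution in the frequency domain: G = F * H.
H = np.fft.fft2(h, s=img.shape)
degraded = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def pseudo_inverse_restore(g, kernel, eps=1e-2):
    """Pseudo-inverse filter: invert the degradation H only where |H| > eps,
    zeroing ill-conditioned frequency bins instead of amplifying them."""
    Hk = np.fft.fft2(kernel, s=g.shape)
    G = np.fft.fft2(g)
    F_hat = np.zeros_like(G)
    mask = np.abs(Hk) > eps
    F_hat[mask] = G[mask] / Hk[mask]
    return np.real(np.fft.ifft2(F_hat))

restored = pseudo_inverse_restore(degraded, h)
```

Because this kernel's transfer function never drops below eps, the restoration here is essentially exact; with kernels whose spectrum has near-zeros (or with added noise), the zeroed bins are what distinguish the pseudo-inverse from the plain inverse filter.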
The Wiener filter, on the other hand, is an adaptive filter that estimates the optimal restoration by considering both the noise and the characteristics of the image.

2.6 Signal Filtering Techniques

In this study, the performance of various filters, namely adaptive filters, infinite impulse response (IIR) filters, and wavelet transforms, was investigated using EEG signals. These filters employ different approaches to eliminate noise and compare the desired signal with the filtered signal. Adaptive filters, such as the LMS (Least Mean Square), NLMS (Normalized Least Mean Square), and RLMS (Recursive Least Mean Square) filters, minimize the difference between the desired signal and the filtered signal by adapting their coefficients based on the statistical properties of the input signal. IIR filters, including the Butterworth Low Pass Filter and the notch filter, are utilized to remove unwanted noise or frequencies from signals. The Butterworth Low Pass Filter attenuates frequencies beyond a cutoff frequency while maintaining a flat response in the passband. The notch filter, also known as a band-stop filter, selectively attenuates a narrow frequency range to eliminate specific interference. As for wavelet transforms, they decompose a signal into smaller components called wavelets, enabling time and frequency localization. In this study, various wavelets such as Symlet 4, Haar, Daubechies4, Bior2.6, Coiflets3, Discrete Meyer, Reverse Bior 6.8, and
Reverse Biorthogonal 2.8 were employed. Each wavelet has unique properties and applications in signal processing, offering different trade-offs between time and frequency localization. 2.7 Performance Analysis To assess the performance of the filtering techniques in image and signal datasets, the following evaluation metrics were utilized: PSNR (Peak signal-to-noise ratio). The PSNR computes the peak signal-to-noise ratio between two images in decibels. This ratio is used to compare the quality of an original image to a filtered image. The higher the PSNR, the better the quality of the image. The block computes the PSNR to (1): MAX 2 (1) PSNR = 10log 10 MSE MSE (Mean squared error). The MSE is a measure of the cumulative squared error between the filtered image and the original image. The lower the value of MSE, the lower the amount of error. To calculate PSNR, first compute the MSE to (2): MSE =
2 1 (i = 1toM ) (j = 1toN ) I (i, j) − K(i, j) M ∗N
(2)
SSIM (Structural Similarity Index). The SSIM is used as a metric to measure the similarity between two images, such as the original image and the filtered image. In simplified form, it is the product of luminance, contrast, and structure terms, computed as (3):

SSIM(x, y) = l(x, y) · c(x, y) · s(x, y)   (3)
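A single-window version of Eq. (3) can be sketched as follows. Production implementations (e.g., scikit-image) compute SSIM over a sliding window, so this whole-image variant is for illustration only:

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Simplified, whole-image SSIM: the luminance, contrast, and structure
    terms of Eq. (3) combined, with the standard stabilizing constants."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = float(np.mean((x - mu_x) * (y - mu_y)))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical images give an SSIM of 1; the value drops toward 0 as structure diverges.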
RMSE (Root Mean Squared Error). The RMSE is also a measure of the squared error between the filtered image and the original image; the difference is that it is the square root of the MSE. The lower the RMSE, the better the filtering performance. The RMSE is computed as (4):

RMSE = sqrt( (1/n) ∑_{i=1}^{n} (yᵢ − ŷᵢ)² )   (4)

SNR (Signal-to-Noise Ratio). The SNR measures how strong a signal is in comparison to its noise. A signal is generally stronger and more dependable when the SNR is higher. The SNR is calculated as (5):

SNR = 10 · log10(P_signal / P_noise)   (5)

Confidence Intervals (CI). Confidence intervals are applied as a statistical tool to further analyze the results of our image filtering experiments. They are essential in assessing the precision and reliability of performance metrics, providing a range of values within which we can be confident that the true population values of the performance metrics
Optimizing Image and Signal Processing Through the Application
159
lie. By calculating confidence intervals, we aimed to quantify the uncertainty associated with our estimated performance metrics. We used a 95% confidence level, a common choice in statistical analysis, to construct the intervals. This allowed us to determine the plausible range of performance values and make informed comparisons between different image filtering techniques. By considering the overlap or non-overlap of confidence intervals, we were able to assess the statistical significance of differences in performance. Where confidence intervals did not overlap, we inferred a statistically significant difference in performance between filters; overlapping confidence intervals indicated that the observed performance differences may not be statistically significant. The use of confidence intervals adds rigor to our performance analysis and enhances the reliability of our findings. The interval is computed as (6):

Confidence Interval = Mean ± (X₂ − X₁) / 2   (6)
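The 95% intervals and the overlap test described above can be sketched as follows. This uses the standard normal-approximation interval (mean ± 1.96 × standard error); Eq. (6) in the text states only a simplified half-width form, so treat the exact formula here as our assumption:

```python
import numpy as np

def ci95(samples):
    """95% confidence interval for the mean via the normal approximation."""
    samples = np.asarray(samples, dtype=np.float64)
    mean = samples.mean()
    half_width = 1.96 * samples.std(ddof=1) / np.sqrt(samples.size)
    return mean - half_width, mean + half_width

def intervals_overlap(a, b):
    """True when two (low, high) intervals overlap; non-overlapping intervals
    suggest a statistically significant difference between two filters."""
    return a[0] <= b[1] and b[0] <= a[1]
```

For example, the per-condition PSNR samples of two filters can each be reduced to an interval, and non-overlap read as a significant performance difference.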
2.8 Developing the Web Application

To make the study practically useful, we developed a web application using HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), JavaScript, and the Flask framework to integrate the model, allowing users to input an image or signal and test the performance of the model (Fig. 5).
Fig. 5. Sample interface of Image Filtering Web Application.
The web application demonstrates the image filtering process. This user-friendly application allows users to upload images and manipulate various parameters such as noise type and intensity, filter selection, and filter parameters. Additionally, it provides a visual representation of the filters' effectiveness and presents the performance results. By offering an accessible platform, the application enables users to explore and understand different image processing methodologies in a practical manner.
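The round trip the interface exposes (add noise, apply a filter, report a metric) can be sketched in plain NumPy. The 3 × 3 median filter below is an illustrative stand-in, not the application's actual implementation:

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median smoothing with edge padding (plain-NumPy sketch)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in decibels, Eq. (1)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    return 10.0 * np.log10(max_val ** 2 / np.mean(diff ** 2))

# Pipeline mirrored by the application: noisy input -> filter -> metric.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))  # synthetic gradient image
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
smoothed = median_filter(noisy)
```

On this toy gradient image, the median filter raises the PSNR relative to the noisy input, mirroring the performance readout the web interface presents.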
3 Results

3.1 Image Processing Results

Smoothening Techniques Results. The results of the filter performance with artificial noise under different lighting conditions, flat light, backlight, split light, and dim light, make it apparent that the median filter consistently outperforms the other filters on its performance metrics (Tables 3 and 4).

Table 3. Filter results for images in flat light and backlight conditions with artificial noise.

                              Flat light                     Backlight
Filters                       PSNR   MSE     RMSE   SSIM    PSNR   MSE     RMSE   SSIM
Mean Filter                   30.46  82.8    0.066  0.82    31.62  68.86   0.11   0.84
Median Filter                 33.15  55.15   0.051  0.82    34.83  39.37   0.055  0.86
Gaussian Filter               29.96  93.52   0.069  0.72    30.41  85.74   0.098  0.72
Max-Min Filter                27.94  198.47  0.092  0.71    29.43  182.89  0.121  0.73
Gaussian Low Pass Filter      29.90  89.19   0.07   0.81    31.3   70.67   0.086  0.85
Butterworth Low Pass Filter   29.45  100.03  0.072  0.71    30.22  87.14   0.1    0.72
Table 4. Filter results for images in split light and dim light conditions with artificial noise.

                              Split light                    Dim light
Filters                       PSNR   MSE     RMSE   SSIM    PSNR   MSE     RMSE   SSIM
Mean Filter                   30.1   79.74   0.145  0.78    30.58  102.56  0.126  0.78
Median Filter                 30.67  41.54   0.098  0.85    34.45  52.2    0.084  0.8
Gaussian Filter               29.6   98.9    0.167  0.67    29.25  125.11  0.145  0.59
Max-Min Filter                30.44  134.62  0.165  0.73    29.57  182.84  0.162  0.66
Gaussian Low Pass Filter      30.11  82.44   0.152  0.79    29.43  143.49  0.145  0.65
Butterworth Low Pass Filter   29.41  101.19  0.169  0.68    27.34  264.76  0.191  0.49
This result is further supported by the confidence interval analysis. For the Peak Signal-to-Noise Ratio (PSNR), the Median Filter exhibits the widest interval [30.28391, 36.26609], indicating higher variability but also the potential for better PSNR performance. In terms of Mean Squared Error (MSE), the Median Filter demonstrates the narrowest interval [34.689072, 59.44093], indicating consistent performance with lower MSE values and higher accuracy in error reduction. The Root Mean Squared Error (RMSE) analysis confirms the superiority of the Median Filter,
as it exhibits the narrowest interval [0.03583102, 0.108169], signifying higher accuracy in error reduction. Additionally, the Structural Similarity Index (SSIM) analysis supports the effectiveness of the Median Filter, with the narrowest interval [0.7886811, 0.8763189], indicating higher SSIM values and better image similarity. These findings collectively demonstrate that the median filter consistently performs as the most effective filter across different lighting conditions, surpassing other filters in terms of both performance metrics and confidence intervals (Tables 5 and 6).

Table 5. Filter results for images with natural noise in flat light and backlight conditions.

                              Flat light                    Backlight
Filters                       PSNR   MSE    RMSE   SSIM    PSNR   MSE    RMSE   SSIM
Mean Filter                   33.81  40.37  0.046  0.86    36.18  26.74  0.043  0.92
Median Filter                 38.49  16.47  0.028  0.92    40.43  8.07   0.028  0.96
Gaussian Filter               40.15  11.8   0.023  0.95    41.99  6.84   0.023  0.97
Max-Min Filter                39.56  15.13  0.026  0.96    42.12  5.26   0.024  0.98
Gaussian Low Pass Filter      33.62  35.24  0.044  0.9     36.88  19.85  0.043  0.94
Butterworth Low Pass Filter   36.18  20.78  0.033  0.93    39.45  11.13  0.033  0.96
Table 6. Filter results for images with natural noise in split light and dim light conditions.

                              Split light                   Dim light
Filters                       PSNR   MSE    RMSE   SSIM    PSNR   MSE    RMSE   SSIM
Mean Filter                   34.96  24.83  0.08   0.91    37.33  17.54  0.062  0.91
Median Filter                 40.52  8.09   0.042  0.95    42.58  4.9    0.033  0.96
Gaussian Filter               42.21  5.54   0.034  0.97    44.78  2.89   0.025  0.97
Max-Min Filter                42.18  6.09   0.035  0.97    44.85  2.9    0.025  0.98
Gaussian Low Pass Filter      36.35  18.47  0.068  0.94    39.67  8.85   0.043  0.95
Butterworth Low Pass Filter   39.63  9.54   0.046  0.96    43.72  3.97   0.027  0.65
Tables 5 and 6 present the results of the filter performance in the presence of natural noise under various lighting conditions. The findings indicate that the Gaussian filter performs well in flat light, split light, and dim light conditions, while the max-min filter excels in backlight conditions. However, when considering the aesthetic aspect of the image and the preservation of image details, the median filter proves to be a preferable choice. It is important to note that the selection of the appropriate filter depends on the specific requirements of the application and the prevailing lighting conditions.
Additionally, the results were supported by the analysis of confidence intervals. The Gaussian filter and the Max-Min filter exhibited strong performance, with 95% confidence intervals for PSNR ranging from 39.25 to 45.31 and from 38.74 to 45.61, respectively. Conversely, the mean filter, Gaussian Low Pass filter, and Butterworth Low Pass filter demonstrated relatively lower effectiveness, with 95% confidence intervals for PSNR ranging from 33.15 to 37.99, 32.68 to 40.58, and 34.83 to 44.66, respectively. However, when considering the SSIM metric, the confidence intervals did not provide conclusive evidence to validate the findings, suggesting the need for further investigation. Based on this evaluation, the Gaussian filter and the Max-Min filter are recommended for achieving high-quality image processing results in different lighting conditions, although further research is required to fully understand the filters' performance in terms of the SSIM metric.

Sharpening Techniques Results. Tables 7 and 8 present the results of the filters for sharpening images with artificial noise. The Butterworth high-pass filter consistently demonstrated the best performance across different lighting conditions, as indicated by the evaluation metrics. This observation is further supported by the confidence interval analysis: its PSNR interval of (23.43–25.15) indicates superior performance compared to the other filters, its RMSE interval of (0.0519–0.3176) signifies better accuracy in noise reduction, and its SSIM interval of (0.5331–0.7669) indicates better image similarity.

Table 7. Filter results for images with artificial noise in flat light and backlight conditions.

                               Flat light                     Backlight
Filters                        PSNR   MSE     RMSE   SSIM    PSNR   MSE     RMSE   SSIM
Laplacian Filter               14.82  4719    0.531  0.27    12.31  4851    0.421  0.28
Ideal High Pass Filter         22.74  437.94  0.175  0.64    23.68  366.33  0.115  0.65
Gaussian High Pass Filter      22.91  422.21  0.172  0.65    23.68  360.69  0.115  0.65
Butterworth High Pass Filter   23.9   320.44  0.151  0.69    24.95  259.62  0.098  0.69
Table 8. Filter results for images with artificial noise in split light and dim light conditions.

                               Split light                    Dim light
Filters                        PSNR   MSE     RMSE   SSIM    PSNR   MSE     RMSE   SSIM
Laplacian Filter               13.28  5430.4  0.719  0.3     16.05  4722.3  0.986  0.24
Ideal High Pass Filter         23.11  455.06  0.217  0.65    23.44  452.06  0.341  0.56
Gaussian High Pass Filter      23.35  437.22  0.212  0.65    23.38  448.33  0.341  0.51
Butterworth High Pass Filter   23.8   357.4   0.195  0.68    24.51  332.47  0.295  0.54
In contrast, the Laplacian filter consistently performed poorly across all evaluated metrics and may even have degraded image quality. The findings therefore strongly validate the superiority of the Butterworth high-pass filter for sharpening images with artificial noise, providing reliable support for its selection in image processing tasks. However, it is important to note that the choice of filter ultimately depends on the specific requirements of the application and the desired trade-off between image quality and noise reduction (Tables 9 and 10).

Table 9. Filter results for images with natural noise in flat light and backlight conditions.

                               Flat light                    Backlight
Filters                        PSNR   MSE     RMSE   SSIM    PSNR   MSE     RMSE   SSIM
Laplacian Filter               20.24  614.41  0.219  0.5     17.56  1138.1  0.216  0.79
Ideal High Pass Filter         30.59  56.64   0.066  0.92    38.64  8.88    0.019  0.98
Gaussian High Pass Filter      32.44  37.07   0.053  0.95    40.41  5.91    0.015  0.99
Butterworth High Pass Filter   31.04  51.12   0.063  0.95    38.17  9.88    0.02   0.99
Table 10. Filter results for images with natural noise in split light and dim light conditions.

                               Split light                   Dim light
Filters                        PSNR   MSE     RMSE   SSIM    PSNR   MSE    RMSE   SSIM
Laplacian Filter               17.04  1284.2  0.395  0.48    30.88  53.05  0.128  0.6
Ideal High Pass Filter         29.22  77.71   0.097  0.92    41.08  5.07   0.039  0.95
Gaussian High Pass Filter      31.17  49.61   0.077  0.94    42.75  3.45   0.032  0.97
Butterworth High Pass Filter   30.06  64.11   0.088  0.95    39.46  7.34   0.047  0.96
Tables 9 and 10 report the filters' performance on images with natural noise. Gaussian filters consistently performed well across different lighting conditions. When compared to the ideal high-pass filter and the Butterworth high-pass filter, there are only slight differences in their respective metric results; nevertheless, the Gaussian high-pass filter demonstrated effective performance in sharpening images with natural noise. This observation is further supported by the confidence intervals. The Gaussian high-pass filter achieved higher PSNR values than the other filters, ranging from 27.55 to 45.84, indicating better noise reduction and preservation of image details. Additionally, it showed lower MSE values, ranging from −12.46 to 60.48, and lower RMSE values, ranging from 0.0016 to 0.0869, indicating more accurate error reduction. Moreover, it achieved higher SSIM values, ranging from 0.93 to 0.99, indicating better image similarity.
In contrast, the Laplacian filter consistently exhibited poorer performance across all evaluated metrics.

Restoration Techniques Results. The results of the restoration filtering techniques show that the performance of the filters varies with the lighting conditions. In flat light, the inverse filter performed well, while in backlight and split light conditions the Wiener filter was the most effective and the inverse filter performed the worst. However, when considering the SSIM values, the inverse filter performed well in the split light condition (Tables 11 and 12).

Table 11. Filter results for images with artificial noise in flat light and backlight conditions.

                        Flat light                           Backlight
Filters                 PSNR    MSE        RMSE    SSIM     PSNR    MSE        RMSE    SSIM
Inverse Filter          16.83   9.17E+10   1910    0.18     16.63   1.04E+11   1240    0.18
Pseudo-Inverse Filter   -19.34  6.41E+09   503.4   0.21     -17.52  6351494    319.5   0.3
Wiener Filter           7.76    10871.9    0.967   0.06     3.92    26365.3    0.967   0.06
Table 12. Filter results for images with artificial noise in split light and dim light conditions.

                        Split light                          Dim light
Filters                 PSNR    MSE        RMSE     SSIM    PSNR    MSE        RMSE     SSIM
Inverse Filter          6.45    6.57E+12   20000    0.18    15.26   2.83E+11   5510     0.25
Pseudo-Inverse Filter   -18.34  6.16E+07   566.59   0.23    -15.41  6.25E+07   827.01   0.24
Wiener Filter           9.25    7715.536   0.967    0.06    12.35   3781.13    0.967    0.06
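For context, the frequency-domain restoration filters compared in Tables 11 and 12 can be sketched as follows. This is a simplified illustration with an assumed constant noise-to-signal ratio k; the study's exact parameterization is not given:

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    """Inverse filter: divide by the blur transfer function H. Small |H| values
    are clamped, since plain division explodes in the presence of noise."""
    H = np.fft.fft2(psf, s=blurred.shape)
    H = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))

def wiener_filter(blurred, psf, k=0.01):
    """Wiener filter with a constant noise-to-signal ratio k (assumed value)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Demonstration: blur an image with a 3x3 box kernel, then restore it.
rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (32, 32))
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = inverse_filter(blurred, psf)
```

In the noise-free case the inverse filter recovers the image almost exactly; once noise is added, the division amplifies it, which is consistent with the noise sensitivity reported above.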
Based on the results, the inverse filter is the best option for restoring images with artificial noise. This observation is further supported by the confidence intervals, which show a moderate range of PSNR values for the inverse filter (5.92–21.66), suggesting its ability to reduce noise and enhance image quality. The pseudo-inverse filter, on the other hand, yielded negative PSNR bounds (−20.31 to −14.99), indicating limited effectiveness, while the Wiener filter demonstrated relatively low PSNR bounds (2.57 to 13.82), suggesting a lower ability to reduce noise. It is important to note that all three filters are sensitive to noise and may not be as effective in restoring noisy images. The choice of filter should be based on the specific requirements of the application and the characteristics of the noise and blur present in the image (Tables 13 and 14).
Table 13. Filter results for images with natural noise in flat light and backlight conditions.

                        Flat light                           Backlight
Filters                 PSNR     MSE        RMSE    SSIM    PSNR     MSE       RMSE    SSIM
Inverse Filter          315.61   1.79       3.92    1       313.23   3.09      3.31    1
Pseudo-Inverse Filter   312.03   4.07       5.92    1       310.22   6.18      4.68    1
Wiener Filter           7.7622   10885.64   0.967   0.06    3.9181   26379.6   0.967   0.06
Table 14. Filter results for images with natural noise in split light and dim light conditions.

                        Split light                         Dim light
Filters                 PSNR     MSE       RMSE    SSIM    PSNR    MSE       RMSE    SSIM
Inverse Filter          317.96   1.04      3.55    1       322.86  3.36      2.89    1
Pseudo-Inverse Filter   314.06   2.55      5.56    1       318.94  8.28      4.53    1
Wiener Filter           9.25     7717.88   0.967   0.06    12.36   3771.56   0.967   0.05
Tables 13 and 14 present the results of the filters applied to images with natural noise. The inverse filter consistently outperformed the other filters across different lighting conditions. The confidence interval results further support this observation, as the inverse filter consistently achieved the highest PSNR values (310.87–323.96), indicating its effectiveness in reducing noise and preserving image details. Additionally, the inverse filter demonstrated perfect SSIM values, indicating excellent similarity between the restored and original images. The pseudo-inverse filter also performed well, achieving high PSNR values (307.35–320.78) and perfect SSIM values. However, the Wiener filter showed lower performance in terms of PSNR (2.74–13.90) and exhibited lower image restoration quality compared to the other two filters. Based on the statistical results of the confidence intervals, the inverse filter is recommended for restoring images with natural noise in various lighting conditions; it consistently yielded superior results and demonstrated its effectiveness in achieving high-quality image restoration.

3.2 Signal Processing Results

The results of the adaptive filters with added Poisson noise show that the LMS filter demonstrated the best performance in terms of the PSNR and SNR metrics, indicating a higher quality of the filtered signal. Moreover, the MSE values suggest that the LMS and RLMS filters performed similarly in minimizing the difference between the filtered signals and the noisy signals. This implies that while the LMS filter removed more noise, the RLMS filter may have achieved a more balanced trade-off between noise reduction and signal fidelity (Tables 15 and 16).
Table 15. Adaptive filter results for EEG signals with added Poisson noise.

Filters                         PSNR       MSE         SNR
Least Mean Squared              4.1403225  291284402   1.9376807
Normalized Least Mean Squared   3.0495111  376527236   0.8225726
Recursive Least Mean Squared    4.1277409  291258742   1.9378084
Table 16. Adaptive filter results for original EEG signals.

Filters                         PSNR       MSE        SNR
Least Mean Squared              36.150672  7290.1708  33.973545
Normalized Least Mean Squared   3.0790606  14787641   0.9019327
Recursive Least Mean Squared    41.059557  2354.2321  38.882429
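A minimal LMS implementation, in the spirit of the adaptive filters compared in Tables 15 and 16, can be sketched as follows; the filter order and step size here are illustrative assumptions, not the study's settings:

```python
import numpy as np

def lms_filter(desired, reference, order=8, mu=0.05):
    """LMS adaptive filter: the coefficients w are updated at each step to
    minimize the instantaneous error between the desired signal and the
    filter output (in noise cancellation, the error is the cleaned signal)."""
    w = np.zeros(order)
    x_buf = np.zeros(order)
    output = np.zeros(len(desired))
    error = np.zeros(len(desired))
    for i in range(len(desired)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[i]
        output[i] = w @ x_buf              # estimate of the interference
        error[i] = desired[i] - output[i]  # desired minus the estimate
        w += 2 * mu * error[i] * x_buf     # gradient-descent coefficient update
    return output, error

# Noise-cancellation demo: a 5 Hz "EEG-like" sine corrupted by noise for which
# a correlated reference is available.
rng = np.random.default_rng(0)
t = np.arange(2000) / 500.0
clean = np.sin(2 * np.pi * 5 * t)
noise = rng.normal(0.0, 0.5, t.size)
contaminated = clean + noise
_, cleaned = lms_filter(contaminated, noise)
```

After an initial adaptation period, the error signal approaches the clean component, which is the sense in which the tables above score the adaptive filters.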
Moreover, the study investigated the original EEG signal without incorporating noise. Here the RLMS filter outperforms the LMS filter on the evaluated metrics, indicating a superior ability to remove noise from the EEG signal. The NLMS filter performs the worst among the evaluated filters. This discrepancy could be attributed to the fact that the NLMS filter relies heavily on the instantaneous estimate of the correlation matrix, which can lead to instability and, consequently, poorer performance (Tables 17 and 18).

Table 17. IIR filter results for EEG signals with added Poisson noise.

Filters                       PSNR       MSE        SNR
Butterworth Low Pass Filter   35.614897  210559.93  33.347001
Notch Filter                  58.659327  1031.2505  56.446612
Table 18. IIR filter results for original EEG signals.

Filters                       PSNR       MSE        SNR
Butterworth Low Pass Filter   36.048717  7463.3404  33.871589
Notch Filter                  59.044182  37.444389  56.867054
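For reference, Butterworth low-pass and notch filters like those evaluated in Tables 17 and 18 can be constructed with SciPy. The sampling rate, filter order, cutoff, and notch quality factor below are assumptions for a toy signal, not the study's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 256.0                                 # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)           # stand-in 10 Hz EEG component
mains = 0.8 * np.sin(2 * np.pi * 50 * t)   # 50 Hz power-line interference
contaminated = eeg + mains

# Butterworth low-pass: maximally flat passband, attenuates above the cutoff.
b_lp, a_lp = butter(4, 40.0, btype="low", fs=fs)
lowpassed = filtfilt(b_lp, a_lp, contaminated)

# Notch (band-stop): removes only a narrow band around 50 Hz.
b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
notched = filtfilt(b_n, a_n, contaminated)
```

Both filters suppress the 50 Hz component while leaving the 10 Hz component largely intact; the notch filter does so with a much narrower stop band.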
The results of the IIR filters show that, both with added Poisson noise and with the original EEG data without incorporated noise, the Notch filter outperforms the Butterworth low-pass filter in denoising performance. The Notch filter exhibits higher PSNR and SNR values, indicating a superior quality of the denoised signal, and a lower MSE value, suggesting that it is more effective in minimizing the difference between the filtered and noisy signals (Tables 19 and 20).

Table 19. Wavelet transform filter results for EEG signals with Poisson noise.

Filters                    PSNR       MSE        SNR
Symlet 4                   42.878047  39728.007  40.589737
Haar                       43.342429  34911.486  41.151041
Daubechies4                75.748605  20.358682  73.492976
Bior2.6                    38.81217   99747.048  36.591676
Coiflets3                  75.815415  19.727337  73.629732
Discrete Meyer             43.466397  34637.128  41.18515
Reverse Bior 6.8           44.078376  29835.65   41.833569
Reverse Biorthogonal 2.8   44.310104  28054.491  42.10016
Table 20. Wavelet transform filter results for original EEG signals.

Filters                    PSNR       MSE        SNR
Symlet 4                   52.602388  165.03125  50.42526
Haar                       47.217416  570.24744  45.040288
Daubechies4                62.933115  15.293026  60.755987
Bior2.6                    43.282744  1411.0116  41.105616
Coiflets3                  63.156801  14.525291  60.979673
Discrete Meyer             50.395391  274.3253   48.218263
Reverse Bior 6.8           53.754293  126.58308  51.577165
Reverse Biorthogonal 2.8   54.995407  95.11834   52.818279
The results of the wavelet transform filters show that the Daubechies4 and Coiflets3 filters, both with added Poisson noise and with the original EEG data, exhibit good performance with only slight differences in their performance metrics. They performed particularly well on the EEG signal with Poisson noise, with an approximately 16 percent difference in performance.
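As a concrete illustration of the decomposition these filters perform, a single-level Haar transform (the simplest of the wavelets above) with hard thresholding can be written in a few lines; the threshold value is an assumption for the toy signal, not a parameter from the study:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: split into approximation (low-frequency)
    and detail (high-frequency) coefficients."""
    x = np.asarray(x, dtype=np.float64)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level; reconstructs the input exactly."""
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Toy denoising: zero the small detail coefficients, then reconstruct.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + rng.normal(0.0, 0.2, t.size)
approx, detail = haar_dwt(noisy)
detail = np.where(np.abs(detail) > 0.3, detail, 0.0)  # hard threshold (assumed)
denoised = haar_idwt(approx, detail)
```

The transform is perfectly invertible; denoising comes entirely from discarding small detail coefficients, which mostly carry noise. Multi-level transforms and the other wavelet families repeat this idea with different basis functions.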
4 Conclusion

Based on the results of this study, which employed image and signal filtering techniques under different lighting conditions, types of resolution, and blurring techniques, with and without the incorporation of artificial noise, certain filters clearly outperformed others. Specifically, the median filter was the most effective for smoothing images with artificial noise, while the Gaussian filter performed well with natural noise. For image sharpening, the Butterworth high-pass filter outperformed the other filters, and the inverse filter showed good performance for image restoration. Regarding signal denoising, the LMS filter was the best choice for noisy signals among the adaptive filters, while the RLMS filter was more suitable for signals with natural noise. The Notch filter demonstrated promising results in denoising EEG signals among the IIR filters, and among the wavelet transform filters, the Coiflets3 filter exhibited good performance. This study thus provides valuable insight into how well these filters perform on images and signals with or without the incorporation of artificial noise.

To enhance the study further, it is suggested that optimization techniques be employed prior to applying these filters, ensuring that the filters perform well and yield acceptable results while avoiding information loss. Future work could also:

• Employ feature extraction techniques.
• Explore a greater variation of the dataset's representation, considering factors such as distance, obstruction, angle, and other relevant aspects.
• Apply statistical tools to conduct in-depth analysis and gain deeper insights.
• Explore additional filtering techniques to expand the scope of the study.
Hybrid-Service Learning During Disasters: Coaching Teachers Develop Sustainability-Integrated Materials

Aurelio Vilbar(B)

University of the Philippines Cebu, Lahug, Cebu City, Philippines
[email protected]
Abstract. The destructive impact of Typhoon Rai in 2021, which caused a prolonged power blackout in Cebu, Philippines, and the outbreak of typhoid fever cases in 2022 served as catalysts for the elementary public-school teachers in Barili, Cebu, to enhance the integration of education for sustainable development (ESD) within their curriculum. This descriptive research aimed to address the coaching request of Barili teachers regarding the integration of ESD into their action research remedial reading program. It presents the experiences of graduate students who participated in a hybrid-service learning (h-SL) approach as an alternative assessment in the Materials Production course. Over a period of three months, the students volunteered to assist the teachers in designing ESD-integrated reading materials using both online and face-to-face modalities. The findings derived from reflections, an anonymous survey, and focus group discussions revealed the following outcomes: (1) h-SL facilitated the academic growth and volunteerism of the students while empowering the community through the production of ESD-reading materials, and (2) face-to-face coaching fostered social-emotional connections between the student-coaches and teachers, resulting in spontaneous feedback.

Keywords: Hybrid-service learning · Service learning · Materials development · Teacher training · Action research
1 Rationale

In December 2021, Typhoon Rai devastated the Central and Northern Philippines, causing at least 797 million Euros in damage [1] to agriculture and infrastructure. In Metro Cebu, most households had a power blackout, and only 61% were restored after 28 days [2]. In the southern Philippines, most schools reopened their face-to-face classes two months late due to damaged school facilities [3]. The typhoon increased children's learning loss caused by school closures and the risk of drop-outs [4].

One province affected by Typhoon Rai was Barili in southern Cebu. Barili was isolated for days due to destroyed roads and the power blackout [5]. Unfortunately, after six months, it also had a typhoid fever outbreak in June 2022. Barili is a community partner of the University of the Philippines Cebu (UP) Pahinungod in its volunteerism

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 171–180, 2023. https://doi.org/10.1007/978-3-031-44146-2_17
projects. These disasters strengthened the partnership between UP and Barili in conducting teacher coaching on integrating Education for Sustainable Development (ESD) in the curriculum. This coaching responded to the request of selected elementary public-school teachers of Barili to conduct action research integrating ESD in their remedial reading program. The teachers stated that the remedial program could address the reading comprehension needs of their students, who had been on modular distance education for two years due to the pandemic. This research addressed the call from UNESCO to conduct teacher training on ESD integration in the curriculum to transform society by helping people develop the knowledge and skills needed for sustainability [6].

In this research, the UP Cebu graduate students conducted a hybrid service-learning (h-SL) project for Barili teachers. Service learning is an educational approach in which students conduct an organized service to the community as part of a course [7, 8]. A type of SL, h-SL delivers some aspect of the teaching and/or service online or in person [9]. It can sustain instruction and community engagement while promoting safety and flexibility [10, 11]. In this study, the course was conducted online, while the service learning was conducted both online and face-to-face (ftf). In the Needs Assessment Stage, both the graduate students and the Barili teachers committed to integrating ESD into the remedial reading materials to develop sustainability awareness among children. This research aimed to examine the usefulness of h-SL with the following questions: (1) Did the h-SL develop students' volunteerism and academic enhancement in materials production and ESD knowledge despite the natural calamities? (2) What is the impact of h-SL on the teachers' skills in producing ESD-integrated materials?
2 Literature Review

2.1 Hybrid-Service Learning

Service Learning (SL) is an experiential approach in which students conduct a meaningful service to a partner community that provides experiences related to the course [7]. It is anchored on transformative education that empowers students as agents of knowledge creation and social change through course-related community service activity [12]. Service learning effectively fosters academic content, personal growth, and civic engagement [13, 14]. H-SL is a type of service learning in which some aspect of teaching and/or service is conducted online or onsite [15]. In h-SL, teaching can be conducted online while the service is done onsite, or inversely, depending on the context [16]. These contexts include the geographical distance between the students and the community, health crises, and natural disasters in the community [17–19]. H-SL develops academic learning, motivation, compassion, public service, and problem-solving skills, but offers more flexibility in modality [20–23]. The students of Shaw [21] conducted an h-SL by writing the public relations media kits of their non-profit partner communities. Findings showed that the experience developed writing skills and community engagement. Similarly, students conducted an h-SL computer-cabling project for a community [23]. The project developed active learning involvement, computing competence, and the community's cabling sustainability. The sudden re-emergence
Hybrid-Service Learning During Disasters
of the COVID-19 virus in Taiwan in early April 2022 prompted Huang [20] to shift from ftf-SL to h-SL. His students conducted ftf and online community projects on the adoption of animals. After the project, the students had developed academic learning, motivation, and problem-solving skills. The students experienced difficulties in their initial use of the technologies, but these were eventually managed. Further, students used h-SL through virtual seminars and onsite events to serve underserved communities [22]. Data showed that the h-SL increased their knowledge and developed confidence and interest in working with underserved communities. Students felt that the insufficiency of the virtual group discussions could be supplemented by one-to-one conversations with the community. Similarly, the students in Contemporary Curriculum Issues conducted h-SL in their local school communities to improve the community's teaching-learning context [24]. Findings showed that h-SL effectively produced the course outcomes and applied the course theories to real issues. Students described the experience as rewarding despite their anxiety.
3 Methodology

3.1 Research Design and Data Gathering Procedures

This study used a descriptive design in which various data-gathering procedures were utilized to examine the phenomenon of conducting h-SL [25]. It used a quantitative procedure through a survey evaluation. The standardized evaluation form, "Department of Education Evaluation Rating Sheet for Print Resources," was used by three independent raters to assess the reading materials. The criteria included the integration of ESD in the content, the appropriateness of the reading exercises and assessment, and the format. It also used qualitative procedures: three online anonymous open-ended surveys and three reflections, triangulated with focus group discussions (FGD), to determine the impact of h-SL on the teachers' and graduate students' understanding of ESD integration in the production of the materials. The surveys and reflections were analyzed using the Harding coding analysis framework [26]. This study was anchored on Content-Based Instruction (CBI), the method of integrating content or non-language subject matter in the English language teaching (ELT) curriculum and instructional materials [27]. In CBI, the sources of authentic texts come from the sciences, social sciences, or ESD themes in teaching reading, writing, listening, speaking, and grammar. Because it immerses students in real-life reading texts, this method is effective in teaching both language and content [28]. CBI supports the mission of education to promote a sustainable society in which students critically learn global issues in their English language classes [29]. In this research, the ESD key thrusts were environmental conservation, gender equality, and cultural diversity. These thrusts matched the children's learning competencies.
3.2 Research Participants and Environment

The participants were the 13 UP Cebu Master of Education students enrolled in the course "Production/Adaptation and Evaluation of Language Learning Materials" from
May 2022 to July 2022. All students took the h-SL option to coach the teachers. The coaching sessions and the produced materials served as the students' summative test, as stated in the syllabus and orientation. The other participants were the eight teachers from two different schools in Barili. Barili is 65 km from UP Cebu, which made online collaboration more practical. All the teachers had taught for at least 15 years and were committed to developing their children's reading comprehension. They were willing to learn the scientific process of materials production to be used for their action research. During the Needs Assessment Stage, both parties agreed to hold three ftf-SL sessions for pilot testing in the last week of June 2022. That week, however, Barili had a sudden typhoid fever outbreak, causing three deaths and 98 cases [30]. With the medical situation compounded by COVID-19 cases, all participants agreed to hold online sessions instead of ftf. Only one ftf session was conducted, at the UP Cebu campus, for safety. Only the teachers conducted ftf pilot testing of their materials with their elementary children in Barili. For research ethics, all participants consented to participate in the study.
4 Results and Discussion

4.1 Impact on Student-Volunteers

From the reflections and anonymous surveys, the h-SL developed the students' academic enhancement, application of ESD knowledge, and volunteerism despite the natural calamities. As shown in Table 1, the students stated that h-SL promoted the active transfer of course knowledge and skills because they immediately applied the lessons by coaching the Barili teachers. Student A wrote, "The online coaching was instructive and humbling. There were internet connection interruptions due to typhoons, but we managed. It made me apply materials production theories into practice, and I am grateful for the opportunity to volunteer." Student B shared that meeting the Barili teachers online made her feel nervous but fulfilled. She added, "I studied readability testing further, so I had something new to share. I felt fulfilled when they said it was their first time to use it. Although some sessions were disrupted by connectivity issues, we still finished the tasks." The h-SL made the students appreciate the significance of ESD. Student C said, "The sustainability texts can help students to become responsible citizens." Teacher A shared that the ongoing typhoid fever outbreak was an example of a text topic that must be learned. She added that the h-SL opened her mind to the importance of ESD integration. The h-SL allowed the students to experience contextualizing materials production. Student D wrote, "In our Zoom meetings, I learned from the teachers that some children of School B had to walk for 30 min to go to school. So, I adapted a story about a resilient child." The natural calamities and health crisis developed the participants' resilience and resourcefulness. Originally, the students planned to stay in Barili for three days to coach the teachers in layouting using Canva (a free graphic design website) and in the pilot testing. However, when typhoid fever broke out, teaching Canva layouting was done online.
Student E shared, "Teaching Canva online developed my resourcefulness and patience. When I shared the Canva website, we had internet connection issues. Picture loading
Table 1. Themes and categories of students' reflections

Themes | Categories | Sample statements
Academic enhancement | Promoted active transfer of knowledge and skills | "The online coaching was instructive and humbling. There were internet connection interruptions due to typhoons, but we managed. It made me apply materials production theories into practice, and I am grateful to volunteer."
Academic enhancement | Inspired to use ESD as learning texts | "Using environmental conservation and gender equality in teaching reading made the lessons more socially-relevant."
Volunteerism | Valued volunteerism despite the pandemic and calamities | "Teaching Canva online developed my creativity and patience. Most of the time when I shared the Canva website via Zoom, we had internet connection issues. The loading of pictures and design were not real time. But we still made it."
was not real time. But we still made it." On the other hand, the Barili teachers felt relieved that the coaching was done online to avoid infection. Teacher B said that she learned immensely in the ftf sessions; her coaches' immediate feedback instantly improved their reading lessons. Teacher C agreed that the ftf interactions allowed spontaneous conversations. Similarly, the coaches claimed that online coaching was convenient and that ftf coaching could supplement the missing personal touch. Student F shared, "Coaching required personal connection. Most of our online sessions were off-cam due to connectivity issues or typhoons." Student L added that one advantage of ftf was that they could quickly critique the layout of the materials. She added, "Compared to simultaneous file sharing online, ftf sharing was more convenient and efficient. We only opened one file for editing." Teacher F agreed that using Zoom while editing the files in Google Docs and Canva was convenient, but they did not have a stable internet connection during the typhoons. All these data showed that h-SL developed academic enhancement and volunteerism among the students while capacitating the community in producing remedial reading materials [13, 14] despite calamities and a health crisis. The students did not learn only for their own benefit but also for humanity, as validated in the FGD. The students said that h-SL became their authentic laboratory for applying their course knowledge. Because of h-SL's experiential approach [7], the students applied their knowledge and skills to their partner community. They felt that their knowledge was needed by the Barili teachers for their research. When
serving, the students experienced fulfillment and compassion [12] that fueled them to volunteer. The flexibility of h-SL addressed the shifts of modality required by the weather conditions and the health crisis. As Student G said, "h-SL gave us the choice of whether to meet online, ftf, or hybrid. During our coaching sessions, my classmates and I stayed at a coffee shop in Cebu City while the Barili teachers and my classmate from the other province joined online." This situation validated the affordances of h-SL, which maximize the convenience of technologies during disasters and the social-emotional interactions during ftf sessions [20–23]. These affordances supported the preference among adult learners for online or blended learning after the pandemic [31]. Due to the two-year distance education modality during the pandemic, the graduate students and teachers considered learning with technology part of their new normal. The pandemic redesigned their learning mode, so that they readily found technological alternatives during calamities. Furthermore, the ftf coaching supplemented the social-emotional needs of both groups of participants during collective decision-making, giving suggestions, and expressing appreciation. These gestures are crucial to the coaching process [32] and were intensified in the ftf sessions. In their reflections, the students shared that the h-SL experience developed their critical thinking and problem-solving skills. Student F said, "I had to learn beginning literacy before coaching. So, I was more prepared." Student F's experience showed that h-SL can foster compassion, volunteerism, and problem-solving skills [21, 22].

4.2 Impact on Community Teachers

All the materials developed received a "Passed" rating in the Experts' Evaluation. The materials' content and instructional design were aligned with the national standards. The independent raters' evaluation showed that the h-SL achieved its purpose of coaching teachers to produce quality materials.
Evaluator A commended the use of ESD themes as the springboard for learning vocabulary and grammar. He said that the lesson "Plant for a Healthy Life" by Ms. Fritzie Manalili (graduate student) can prepare the children to appreciate backyard gardening, but suggested making the vocabulary exercises shorter. Figure 1 is a screenshot of the lesson. The analysis of the reflections and surveys from the Barili teachers revealed that the experience developed their: (1) confidence and skills in producing validated reading materials for their action research and (2) appreciation of ESD integration in the curriculum. The teachers felt blessed to be coached in conducting research. Teacher A expressed, "I thought conducting materials validation was difficult. But the volunteers coached us step by step. So, I made differentiated reading materials for my children." Teacher B shared that h-SL inspired her to use ESD lessons. She said, "In the pilot testing, my pupils wanted to keep the materials because the lessons were new and fun. I am grateful to my coaches. Despite the weather conditions and hectic schedule, they devoted time for us." Teacher C shared that having dedicated coaches compelled them to finish their research. She said, "The volunteers guided us in revising our materials. Like us, they also had full-time jobs, but they dedicated their free time for us." The positive Experts' Evaluation and the teachers' reflections showed that h-SL can develop student volunteerism, community empowerment, and reciprocity of kindness despite disasters and health crises [21, 22]. The transformative nature of h-SL [12] empowered
Fig. 1. Reading selection for backyard gardening and vocabulary improvement
the teachers to create their own remedial materials that responded to the diverse reading needs of their children. In the FGD, the coaches shared that the teachers already had a background in materials production and that the coaching focused only on the scientific procedures of materials production and validation.
5 Conclusion

The h-SL project developed the graduate students' academic enhancement, application of ESD knowledge, and volunteerism, as well as the community's capacity to produce sound reading materials, despite the natural disasters and health crisis. The flexibility of h-SL supported the participants' online and ftf coaching sessions in designing ESD-integrated materials. The positive evaluation and pilot testing results showed that the h-SL was successfully implemented. H-SL maximized the telecollaboration affordances of Zoom and Google Meet for conducting meetings while collaboratively editing documents online, both synchronously and asynchronously. These technologies have become the preferred educational technologies among the participants in their new normal. Therefore, when disasters happened, the teachers readily adjusted to the technological shift. On the other hand, the ftf coaching supplemented the participants' social-emotional needs in giving and receiving feedback, teaching educational software, and exchanging kindness [32]. These human interactions can promote a healthy adult coaching atmosphere. Although this study was limited to a graduate education case, its findings can guide basic and higher education curriculum makers and teachers in using h-SL. With global warming indicators and natural disasters disrupting classes [6], h-SL can be an alternative
modality to sustain instruction with compassion. For example, in disaster-prone mountainous schools, administrators could consider telecollaboration platforms for meetings and discussions instead of suspending meaningful community engagement due to landslides. For tropical island schools, teachers can maximize the sunny weather for ftf instruction; during such weather, rough seas are minimal, thus maximizing in-person discussion of course theories or demonstrations of skills. Due to the pandemic, students and teachers have become accustomed to online distance learning technologies [31]. Shifting from ftf to online or vice versa has become more acceptable, especially when students know that the shift supports the course content and community empowerment.

Acknowledgment. I am grateful for the support of the following that made this research successful: the University of the Philippines Cebu, the University of the Philippines Cebu Master of Education Program, the UP Pahinungod System, and UP Cebu Ugnayan ng Pahinungod.
References

1. Philippine Daily Inquirer: Odette damage to agriculture, infra hits P47 billion. Philippine Daily Inquirer, 08 February 2022
2. Lagunda, K.: Visayan Electric power restoration in Metro Cebu reaches 61%. Yahoo News Philippines, 13 January 2022
3. UNICEF: Children in Southern Leyte Go Back to School as Families Recover from Odette. UNICEF Philippines (2022). https://www.unicef.org/philippines/stories/children-southernleyte-go-back-school-families-recover-odette. Accessed 01 Oct 2022
4. UN Office for the Coordination of Humanitarian Affairs: Humanitarian Needs and Priorities Revision: Super Typhoon Rai (Odette) (2022)
5. Ecarma, L.: 260 tourism firms damaged by Odette in Cebu province. Rappler, 07 January 2021
6. UNESCO, Buckler, C., Creech, H.: Shaping the Future We Want (2014). https://doi.org/10.5363/tits.11.4_46
7. Ash, S.L., Clayton, P.H.: The articulated learning: an approach to guided reflection and assessment. Innov. High. Educ. 29(2), 137–154 (2004). https://doi.org/10.1023/B:IHIE.0000048795.84634.4a
8. Bringle, R.G., Hatcher, J.A.: A service learning curriculum for faculty. Michigan J. Community Serv. Learn. 2 (1995)
9. Waldner, L.S., McGorry, S.Y., Widener, M.C.: E-service-learning: the evolution of service-learning to engage a growing online student population. J. High. Educ. Outreach Engagem. 16(2), 123–150 (2012)
10. Andrews, U., Tarasenko, Y., Holland, K.: Complexities of coordinating service-learning experiences in rural communities, pp. 192–210 (2020). https://doi.org/10.4018/978-1-7998-3285-0.ch012
11. Schmidt, M.E.: Embracing e-service learning in the age of COVID and beyond. Scholarship of Teaching and Learning in Psychology (2021)
12. Bringle, R.G., Hatcher, J.A.: Institutionalization of service learning in higher education. J. Higher Educ. 71(3) (2000). https://doi.org/10.1080/00221546.2000.11780823
13. Arehart, J.H., Langenfeld, K.L., Gingerich, E.M.: Reevaluating traditional international service-learning during a global pandemic. Adv. Eng. Educ. 8(4), 1–6 (2020)
14. Porto, M.: Affordances, complexities, and challenges of intercultural citizenship for foreign language teachers. Foreign Lang. Ann. 52(1), 141–164 (2019). https://doi.org/10.1111/flan.12375
15. Waldner, L., McGorry, S., Widener, M.: Extreme e-service learning (XE-SL): e-service learning in the 100% online course. J. Online Learn. Teach. 6(4), 839 (2010)
16. Dovi, K., Chiarelli, J., Franco, J.: Service-learning for a multiple learning modality environment. J. Chem. Educ. 98(6), 2005–2011 (2021). https://doi.org/10.1021/acs.jchemed.0c01475
17. Balvalli, D.S.: Online, asynchronous hearing education and research project for ethnically diverse adolescents via interprofessional collaboration and electronic service-learning during the COVID-19 pandemic: a pilot study on the needs and challenges. Am. J. Audiol. 30(3), 505–517 (2021). https://doi.org/10.1044/2021_AJA-20-00166
18. Ollermann, F., Rolf, R., Greweling, C., Klaßen, A.: Principles of successful implementation of lecture recordings in higher education. Interact. Technol. Smart Educ. 14(1), 2–13 (2017). https://doi.org/10.1108/ITSE-09-2016-0031
19. Vilbar, A.: Electronic-service learning to sustain instruction with civic engagement during the COVID-19 pandemic. In: Novel & Intelligent Digital Systems: Proceedings of the 2nd International Conference (NiDS 2022), pp. 24–32 (2023). https://doi.org/10.1007/978-3-031-17601-2_3
20. Huang, D.Y.: Hybrid e-service learning practice during COVID-19: promoting dog adoption in philosophy of life course in Taiwan. Educ. Sci. 12(8) (2022). https://doi.org/10.3390/educsci12080568
21. Shaw, T.: Student perceptions of service-learning efficacy in a hybrid/online undergraduate writing class. Teach. Learn. J. 11(2), 16 (2018)
22. Taylor, L., Suresh, A., Wighton, N.M., Sorensen, T.E., Palladino, T.C., Pinto-Powell, R.C.: A hybrid educational approach to service learning working with medically underserved communities. J. Med. Educ. Online 27, 1–12 (2022). https://doi.org/10.1080/10872981.2022.2122106
23. Yusof, A., Atan, N.A., Harun, J., Rosli, M.S., Majid, U.M.A.: Students engagement and development of generic skills in gamified hybrid service-learning course. Int. J. Emerg. Technol. Learn. 16(24), 220–243 (2021). https://doi.org/10.3991/ijet.v16i24.27481
24. Roso, C.G.: Faith and learning in action: Christian faith and service-learning in a graduate-level blended classroom. Christian Higher Education (2019). https://doi.org/10.1080/15363759.2019.1579118
25. Mason, P., Augustyn, M., Seakhoa-King, A.: Exploratory study in tourism: designing an initial, qualitative phase of sequenced, mixed methods research. Int. J. Tour. Res. 12(5), 432–448 (2010). https://doi.org/10.1002/jtr.763
26. Harding, J.: Identifying themes and coding interview data: reflective practice in higher education (2015). https://doi.org/10.4135/9781473942189
27. Snow, M.A.: Trends and issues in content-based instruction. Annu. Rev. Appl. Linguist. 18, 243–267 (1998). https://doi.org/10.1017/S0267190500003573
28. Wesche, M.B.: Content-based second language instruction. In: The Oxford Handbook of Applied Linguistics, 2nd edn. (2012)
29. Sato, S., Hasegawa, A., Kumagai, Y., Kamiyoshi, U.: Content-based instruction (CBI) for the social future: a recommendation for critical content-based language instruction (CCBI). L2 J. 9(3) (2017). https://doi.org/10.5070/l29334164
30. Sabalo, W.: 98 cases of typhoid fever, 3 deaths reported in Barili, Cebu. CDN Digital Multimedia, 25 June 2022
31. Catalano, A.J., Torff, B., Anderson, K.S.: Transitioning to online learning during the COVID-19 pandemic: differences in access and participation among students in disadvantaged school districts. Int. J. Inf. Learn. Technol. 38(2), 258–270 (2021). https://doi.org/10.1108/IJILT-06-2020-0111
32. Knight, J., van Nieuwerburgh, C.: Instructional coaching: a focus on practice. Coaching 5(2), 100–112 (2012). https://doi.org/10.1080/17521882.2012.707668
Secure Genotype Imputation Using the Hidden Markov Model with Homomorphic Encryption

Chloe S. de Leon and Richard Bryann Chua

1 Department of Physical Sciences and Mathematics, University of the Philippines Manila, Manila, Philippines {csdeleon1,rlchua}@up.edu.ph
2 Department of Computer Science, University of the Philippines Diliman, Quezon City, Philippines
Abstract. Genotype imputation is a statistical technique used to infer missing genotypes using a haplotype reference panel. Because it is typically computationally intensive, it is outsourced to a cloud; however, outsourcing genomic data to the cloud carries a privacy risk. We address this problem by using homomorphic encryption to develop a genotype imputation application based on the hidden Markov model (HMM). Our solution trains the HMM parameters locally using the Baum-Welch algorithm and then outsources the inference step, performed with the Viterbi algorithm, to the cloud. It is the Viterbi algorithm that we implemented homomorphically. We implemented our solution using EVA, which compiles our Python program into a form that can be executed with the SEAL homomorphic encryption library.
Keywords: homomorphic encryption · genome privacy · genotype imputation · hidden Markov model

1 Introduction
In human genetics, the effect of genetic variants on human traits is studied by looking at how phenotypes, such as the susceptibility of individuals to diseases, are associated with underlying genotypes. The most common type of genetic variation found among humans is the single-nucleotide polymorphism (SNP). A SNP is a single base-pair nucleotide variation between individuals of the same species at a specific location, called a locus, along the genome. One study design used in genomic research is the genome-wide association study (GWAS). GWAS compare SNPs to identify the association between these genetic variants and the genetic predisposition to develop a disease [24]. With next-generation sequencing (NGS), it is possible to assay 100,000–1,000,000 genetic variants in each individual being studied. NGS works by fragmenting genomic data into multiple pieces and then reassembling them to reconstruct the original sequence.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 181–190, 2023. https://doi.org/10.1007/978-3-031-44146-2_18
C. S. de Leon and R. B. Chua
This results in an increase in the amount of missing data [13]. One useful tool to address the problem of missing data is genotype imputation, which estimates unknown genotypes using trends in a reference panel of haplotypes [23]. Because genomic computations are typically computationally intensive, they are usually outsourced to a cloud, and this presents privacy risks [1,11,20]. Homomorphic encryption (HE) is an attractive solution to this privacy problem in outsourcing genotype imputation, as it allows computation on encrypted data without the need to decrypt them [19]. Recognizing this promising opportunity, Track II of the iDASH 2019 competition (http://www.humangenomeprivacy.org/2019/competition-tasks.html) asked for the use of homomorphic encryption in genotype imputation. In Sect. 2, we review the different ways homomorphic encryption has been used to protect data, including genotype imputation. Although the hidden Markov model (HMM) is the most widely used method among genotype imputation software, as it is the best-performing model, none of the solutions submitted to the iDASH 2019 problem on genotype imputation using homomorphic encryption adopted the hidden Markov model. In Sect. 3, we review the hidden Markov model and discuss how it is used to perform genotype imputation. We introduce homomorphic encryption and its different libraries in Sect. 4 and then discuss in Sect. 5 how we use homomorphic encryption in genotype imputation. The contribution of this paper is the use of homomorphic encryption in the development of a genotype imputation system using HMM, as none of the published solutions implemented HMM homomorphically for genotype imputation.
2 Literature Review
One of the early approaches to protecting genomic data was anonymizing contextual data. This, however, was shown to be insufficient, because it is possible to identify an individual by linking phenotypes to genotypes using genotype-phenotype associations, which are publicly available [12]. In fact, even with non-genomic data, anonymization offers very little privacy protection [18]. Numerous privacy-preserving methods, namely garbled circuits, secure hardware, differential privacy, and homomorphic encryption, have been developed to protect genomic data [1]. Among these solutions, homomorphic encryption stands out, as it allows computation on encrypted data without decrypting it. Homomorphic encryption has already been used to securely outsource computation involving genomic data. Lauter et al. [16], Lu et al. [17], Sim et al. [22], Blatt et al. [2], and Carpov et al. [3] used homomorphic encryption to compute GWAS statistics. Kim et al. [14] used homomorphic encryption to evaluate a logistic regression model. The use of HE-based models for secure outsourcing of genotype imputation was introduced in the iDASH 2019 secure genome analysis competition. Kim et al. [15] presented five solutions from the five teams that participated. The machine learning models used by these teams were logistic regression and neural networks. However, the imputation accuracy of these implementations
Secure Genotype Imputation with Homomorphic Encryption
was slightly lower than that of widely used genotype imputation software such as IMPUTE2, Minimac3, and Beagle, which use the hidden Markov model [21].
3 Genotype Imputation with the Hidden Markov Model
The most common method for imputing missing genotypes is the hidden Markov model. A hidden Markov model (HMM) is a statistical model that is used to infer hidden internal factors, usually called "states", based on observable sequential events, usually called "symbols." The hidden state sequence Q is represented by Q = {q0, q1, ..., qT}, where a state at time t has N possible values, i.e., qt ∈ {s0, ..., sN−1}. The observation sequence O is described as O = {o0, o1, ..., oT}, where an observation at time t is one of the L possible observation symbols, i.e., ot ∈ {v1, ..., vL}. A trellis diagram can be used to visualize how an HMM is structured for genotype imputation, as seen in Fig. 1. The hidden states represent the reference panel of individuals, where the N possible values of a state at time t are the genotypes of the N reference individuals at SNP t. The observation sequence O of the HMM describes the SNPs of the query individual, where a genotype ot at SNP t can be any of the L = 3 possible observation symbols: 0 (homozygous reference), 1 (heterozygous), and 2 (homozygous alternate), i.e., ot ∈ {0, 1, 2}.
Fig. 1. Structure of an HMM for Genotype Imputation. There are N rows of hidden states (circles) which represent the N individuals from the reference panel, i.e., individuals with complete genotype data. The T + 1 columns represent the T + 1 SNPs. The genotypes of the query individual, i.e., individual with missing genotype data is represented as the observation sequence (squares).
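In this structure, a joint assignment of a hidden path Q and an observation sequence O has the standard HMM probability, where a denotes the transition probabilities, b the emission probabilities, and π the initial state distribution (the notation used in the rest of this section):

```latex
P(Q, O) = \pi_{q_0}\, b_{q_0}(o_0) \prod_{t=1}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t)
```

The Viterbi algorithm returns the path Q that maximizes this quantity for the observed genotypes.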
The first step in HMM-based genotype imputation is to train the HMM to learn the HMM parameters λ = (A, B, π), where A is the state transition matrix, B is the emission matrix, and π is the initial state distribution [9]. The training is done through the Baum-Welch algorithm; readers can refer to [9] for its description. After learning the HMM parameters, the model is used to infer missing genotypes through the Viterbi algorithm shown in Algorithm 1.
184
C. S. de Leon and R. B. Chua
Algorithm 1: Viterbi Algorithm [9]
Data: observations of length T + 1, states of length N
Result: best path, best path probability
create a path probability matrix viterbi[N, T + 1]
/* initialization step */
for each state q ← 1 to N do
    viterbi[q, 0] ← π_q · b_q(o_0)
    backpointer[q, 0] ← 0
end
/* recursion step */
for each time step t ← 1 to T do
    for each state q ← 1 to N do
        viterbi[q, t] ← max_{q'=1..N} viterbi[q', t − 1] · a_{q'q} · b_q(o_t)
        backpointer[q, t] ← argmax_{q'=1..N} viterbi[q', t − 1] · a_{q'q} · b_q(o_t)
    end
end
/* termination step */
bestpathprob ← max_{q=1..N} viterbi[q, T]
bestpathpointer ← argmax_{q=1..N} viterbi[q, T]
bestpath ← the path, starting at state bestpathpointer, that follows backpointer[·] to states back in time
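As a minimal illustration, Algorithm 1 can be written as a self-contained Python sketch. The A, B, and pi values below are the example parameters reported later in this section; the observation sequence [0, 1, 2] is a made-up query, not taken from the iDASH data.

```python
# Minimal implementation of Algorithm 1 (Viterbi decoding).

def viterbi(obs, A, B, pi):
    """Return (best_path, best_path_probability) for an observation sequence."""
    N = len(A)      # number of hidden states (reference individuals)
    T = len(obs)    # number of SNP positions
    delta = [[0.0] * N for _ in range(T)]   # path probability matrix
    backptr = [[0] * N for _ in range(T)]
    # initialization step: delta_0(q) = pi_q * b_q(o_0)
    for q in range(N):
        delta[0][q] = pi[q] * B[q][obs[0]]
    # recursion step
    for t in range(1, T):
        for q in range(N):
            scores = [delta[t - 1][p] * A[p][q] * B[q][obs[t]] for p in range(N)]
            best_prev = max(range(N), key=lambda p: scores[p])
            delta[t][q] = scores[best_prev]
            backptr[t][q] = best_prev
    # termination step: start at the most probable final state and backtrack
    last = max(range(N), key=lambda q: delta[T - 1][q])
    best_prob = delta[T - 1][last]
    path = [last]
    for t in range(T - 1, 0, -1):
        path.append(backptr[t][path[-1]])
    path.reverse()
    return path, best_prob

A = [[0.6, 0.2, 0.2], [0.3, 0.2, 0.5], [0.1, 0.1, 0.8]]   # transition matrix
B = [[0.3, 0.1, 0.6], [0.7, 0.2, 0.1], [0.2, 0.4, 0.4]]   # emission matrix
pi = [0.6, 0.2, 0.1]                                      # initial distribution
path, prob = viterbi([0, 1, 2], A, B, pi)
```

Here path is the most likely sequence of reference individuals along the trellis; the idea is that the genotype of the selected reference individual at the target SNP supplies the imputed value.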
To show how to perform genotype imputation with an HMM, let us use the data from iDASH 2019, shown in Fig. 2. The rows contain the data for SNPs, and the first four columns contain the chromosome number, start locus, end locus, and variant name (with the format: rsid reference allele alternate allele), respectively. Starting from the fifth column, each column represents the genotypes of an individual, encoded as 0 (homozygous reference), 1 (heterozygous), or 2 (homozygous alternate). For a query individual, we call genotyped SNPs "tag SNPs" and missing SNPs "target SNPs." Genotype imputation starts by selecting the tag SNPs that will be used in the training. For each target SNP, we find the closest tag SNPs to the target SNP in terms of loci. The T closest tag SNPs define the number of columns in the HMM, where T is chosen by the user. After getting the T closest tag SNPs, we compute a similarity score between the genotypes of the query individual and those of each individual k in the reference panel using Eq. (1):

\mathrm{sim}(k, \mathrm{query}) = \sum_{i=1}^{T} \left| g_{\mathrm{query}}(i) - g_k(i) \right| \quad (1)
where K is the number of reference individuals, g_query(i) is the genotype of the ith tag SNP of the query individual, and g_k(i) is the genotype of the ith tag SNP of the kth reference individual. After this, we get the top N reference individuals based on similarity scores, where N is chosen by the user. These top N reference individuals
Secure Genotype Imputation with Homomorphic Encryption
Fig. 2. iDASH 2019 SNP Data.
define the number of rows in the HMM trellis. The N reference individuals in the HMM are numbered by their order in terms of their scores. We can now form the structure of the HMM; an example is shown in Fig. 3. We train the model to learn the HMM parameters with the Baum-Welch algorithm. In the example, there are three observation sequences; we omit the details of running the Baum-Welch algorithm. In our example, the learned parameters A and B after running Baum-Welch are:

    A = | 0.6 0.2 0.2 |
        | 0.3 0.2 0.5 |
        | 0.1 0.1 0.8 |

where both the rows and columns of A index the reference individuals (states), and

    B = | 0.3 0.1 0.6 |
        | 0.7 0.2 0.1 |
        | 0.2 0.4 0.4 |

where the rows of B index the individuals and the columns index the genotypes, and the initial probabilities are

    π0 = 0.6, π1 = 0.2, π2 = 0.1    (2)
After learning the parameters, we use them to find the best hidden state sequence using the Viterbi algorithm.
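For concreteness, the similarity scoring of Eq. (1) and the top-N selection can be sketched in Python (our own illustration; we assume a smaller absolute-difference sum means a more similar reference individual, since Eq. (1) is a distance):

```python
def similarity(g_query, g_ref):
    """Eq. (1): sum of absolute genotype differences over the T closest tag SNPs."""
    return sum(abs(gq - gr) for gq, gr in zip(g_query, g_ref))

def top_n_references(g_query, panel, n):
    """Rank reference individuals (id, genotypes) by similarity and keep the top N."""
    return sorted(panel, key=lambda entry: similarity(g_query, entry[1]))[:n]
```

The N individuals returned here become the rows of the HMM trellis, in rank order.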
4 Homomorphic Encryption
Fig. 3. HMM Structure with Genotype Data of Query and Reference Individuals. For simplicity of example, we set T = 2 closest tag SNPs and N = 3 reference individuals. The question mark represents the missing genotype data.

Homomorphic encryption (HE) is a form of encryption that allows one to perform computations on encrypted data without the need for decryption [17]. HE operations are "noisy" because, during encryption, error is introduced to hide the keys. This noise accumulates with each homomorphic computation, which puts a limit on the number of homomorphic operations that can be performed on ciphertexts while still being able to decrypt the resulting ciphertext. A breakthrough came when Craig Gentry introduced bootstrapping to address this problem [10]. After this breakthrough, many new schemes were developed, with the Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes being the most widely used. However, homomorphic computation with BGV and BFV is limited to integers. It was only with the release of the Cheon-Kim-Kim-Song (CKKS) scheme that it became possible to homomorphically compute with real numbers [25]. To work with FHE, several open-source FHE libraries are available; among the popular ones are HElib, Microsoft SEAL, PALISADE, HEAAN, and TFHE.
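As a toy illustration of computing on ciphertexts, textbook RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. This sketch is ours, is completely insecure, and is unrelated to the CKKS scheme used later; it only conveys the idea.

```python
# Textbook RSA with tiny primes: Enc(m1) * Enc(m2) mod n decrypts to m1 * m2.
p, q, e = 61, 53, 17
n = p * q                                  # 3233
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

product_ct = enc(7) * enc(6) % n           # product computed without decrypting the factors
```

Here `dec(product_ct)` recovers 42 as long as the product stays below n; real FHE schemes manage such constraints (and support additions as well) far more generally.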
5 Developing a Secure Genotype Imputation Tool Using FHE
We implemented our solution with the Encrypted Vector Arithmetic (EVA) language and compiler [8]. EVA's language is embedded in Python, and it performs homomorphic computations in the CKKS scheme using the Microsoft SEAL library. The main advantage of using EVA is its automatic parameter selection and ciphertext maintenance. With EVA, we can generate FHE programs in the CKKS scheme on top of SEAL. A partial screenshot of a sample output of the program can be seen in Fig. 4; the SNP location represents the locus of the missing data. The first process of our solution transforms the inputs into the needed representations. The inputs are the SNP data of the query individuals (individuals with missing genotype data) and the reference individuals (individuals with complete genotype data), which are contained in two separate files.
Fig. 4. Partial Screenshot of a Sample Output.
The genotype data of the specified query individual is extracted and the SNPs are classified as either tag (known) or target (missing) SNPs. For each target SNP, the program finds the absolute difference between its locus and the locus of every tag SNP to get the T closest tag SNPs. The genotypes of the query individual at the T closest tag SNPs and at the target SNPs (sorted in increasing order of locus) are stored as an array. This array is treated as the observation sequence of the HMM. We then compute the similarity score between the query genotypes and the reference genotypes to get the top N reference individuals using Eq. (1). The genotypes of the top N reference individuals at the T tag SNPs and at the target SNPs are stored as a matrix, where the rows represent the individuals and the columns represent the SNPs. This is the reference panel, i.e., the hidden states of the HMM.

After processing the input files, we trained the hidden Markov model to learn the parameters A and B by running the Baum-Welch algorithm on the client side. Using the learned HMM parameters, we find the maximum probability estimate of the most likely hidden state sequence with the Viterbi algorithm. From parameter B, the program computes the probability of each state in the reference panel at time t to emit the observed genotype data at time t. If the genotype is unknown, the emission probability is 0.5. The emission probabilities are stored in an N × T matrix; let us call this matrix E.

The computations in this step are run on the server. This is where we used fully homomorphic encryption (FHE). Note that Viterbi computations on the HMM are performed column-wise. Thus, we compute the transpose of the probability matrices A (transition) and E (emission). Since EVA only accepts vectors as input with a size that is a power of two, each row of the matrices is represented as a vector.
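The construction of the emission matrix E described above can be sketched as follows (our own illustration; `None` marks an unknown genotype):

```python
def emission_matrix(B, observations):
    """Build the N x T matrix E: E[k][t] is the probability that state k emits
    the genotype observed at time t (0, 1, or 2).

    Unknown (missing) observations get emission probability 0.5, as in the text.
    """
    return [[0.5 if o is None else row[o] for o in observations] for row in B]
```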
Also note that computations at time t (current vector) depend on the results of the computations at time t − 1 (previous vector). Therefore, homomorphic
computations are done per column, i.e., one vector at a time. If the vector size is not a power of two, the program finds the nearest power of two greater than the vector size and pads the vector with zeros to make its size a power of two.

Now that the probabilities are transformed into the required representations, the next step is to perform encryption to allow secure computations on the server. The values that need to be encrypted are the emission probabilities, because they can be used to reveal the genotype information of the query individual. The program that implements the Viterbi algorithm was written using PyEVA, a Python-embedded domain-specific language for producing EVA programs. This program is compiled with the EVA compiler and run on top of SEAL. The only values that need to be set to compile the program for CKKS are the fixed-point scales for inputs and the maximum ranges of coefficients in outputs, both in number of bits. After choosing these values, EVA automatically generates the encryption parameters needed for homomorphic computation. Specifically, the compile() method of EVA returns the compiled program, the encryption parameters, and a signature object. Using the encryption parameters, EVA then generates the encryption keys. The inputs are encoded and encrypted by EVA by simply calling the encrypt() method. EVA generates files that contain relevant information for the FHE process. Three of these files, namely the ones containing the EVA program, the public key, and the encrypted inputs, should be sent to the server to allow secure computations.

Part of the Viterbi algorithm is to find the maximum over all possible previous state sequences. However, this cannot be performed homomorphically on the server, since the current state of FHE does not support the comparison operation. Thus, this intermediate result is sent back to the client to be computed on the client side before sending the result to the server.
This part contributes to the high communication cost. At the final step of evaluation, the file that contains the encrypted outputs is sent from the server to the client. The outputs are decrypted using EVA's decrypt() method. The decrypted outputs denote the paths formed by the Viterbi algorithm. The program performs backtracing to get the most probable hidden state sequence. With the most likely hidden state sequence, the program assigns the missing genotype of the query individual to the corresponding genotype of the state in that sequence. This assigned genotype is the imputed genotype. To test the performance of our program, we created a plaintext implementation that performs the imputation without using homomorphic encryption. Both the HE and plaintext implementations produced the same output. Hence, the performance of the HE implementation of the genotype imputation is the same as performing it directly with the HMM.
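Returning to the vector preparation described earlier in this section, the padding of rows to a power-of-two length (as EVA requires) can be sketched with a small helper of ours:

```python
def pad_to_power_of_two(vec, fill=0.0):
    """Zero-pad a vector so its length becomes the nearest power of two
    that is at least the vector's size; power-of-two vectors pass through."""
    n = 1
    while n < len(vec):
        n <<= 1                      # double until n >= len(vec)
    return list(vec) + [fill] * (n - len(vec))
```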
6 Conclusion and Future Work
We were able to perform genotype imputation using hidden Markov model with fully homomorphic encryption through the use of EVA language and compiler.
This opens the possibility for genotype imputation to be outsourced to an external computing resource such as the cloud, maximizing computing power while minimizing privacy risk. However, we incurred some communication cost due to the inability to compute the maximum of a vector homomorphically. The homomorphic computation of conditional statements is still considered an open problem. Nevertheless, several works have implemented conditional statements homomorphically, such as [5–7] and [4]. We could adapt the methods implemented in these works and compute the maximum of a vector homomorphically, so that there is no need to constantly communicate with the client for this part.
References

1. Aziz, M.M.A., et al.: Privacy-preserving techniques of genomic data - a survey. Brief. Bioinform. 20(3), 887–895 (2017)
2. Blatt, M., Gusev, A., Polyakov, Y., Rohloff, K., Vaikuntanathan, V.: Optimized homomorphic encryption solution for secure genome-wide association studies (2019)
3. Carpov, S., Gama, N., Georgieva, M., Troncoso-Pastoriza, J.R.: Privacy-preserving semi-parallel logistic regression training with fully homomorphic encryption. BMC Med. Genomics 13(S7) (2020)
4. Chakraborty, O., Zuber, M.: Efficient and accurate homomorphic comparisons. In: WAHC'22, pp. 35–46. Association for Computing Machinery, New York, NY (2022)
5. Cheon, J.H., Kim, A., Kim, M., Song, Y.: Homomorphic encryption for arithmetic of approximate numbers. In: Takagi, T., Peyrin, T. (eds.) ASIACRYPT 2017. LNCS, vol. 10624, pp. 409–437. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70694-8_15
6. Cheon, J.H., Kim, D., Kim, D., Lee, H.H., Lee, K.: Numerical method for comparison on homomorphically encrypted numbers. In: Galbraith, S.D., Moriai, S. (eds.) ASIACRYPT 2019. LNCS, vol. 11922, pp. 415–445. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-34621-8_15
7. Chialva, D., Dooms, A.: Conditionals in homomorphic encryption and machine learning applications. CoRR abs/1810.12380. arXiv:1810.12380 (2018)
8. Dathathri, R., Kostova, B., Saarikivi, O., Dai, W., Laine, K., Musuvathi, M.: EVA: an encrypted vector arithmetic language and compiler for efficient homomorphic computation. In: Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation. arXiv:1912.11951 (2020)
9. Franzese, M., Iuliano, A.: Hidden Markov models. In: Encyclopedia of Bioinformatics and Computational Biology, pp. 753–762 (2019)
10. Gentry, C.: Computing arbitrary functions of encrypted data. Commun. ACM 53(3), 97–105 (2010)
11. Gursoy, G., Harmanci, A., Green, M.E., Navarro, F.C., Gerstein, M.: Sensitive information leakage from functional genomics data: theoretical quantifications and practical file formats for privacy preservation (2018)
12. Harmanci, A., Gerstein, M.: Quantification of private information leakage from phenotype-genotype data: linking attacks. Nat. Methods 13(3), 251–256 (2016)
13. Huang, H., Knowles, L.L.: Unforeseen consequences of excluding missing data from next-generation sequences: simulation study of RAD sequences. Syst. Biol. 65(3), 357–365 (2014)
14. Kim, A., Song, Y., Kim, M., Lee, K., Cheon, J.H.: Logistic regression model training based on the approximate homomorphic encryption. BMC Med. Genomics 11(S4) (2018)
15. Kim, M., et al.: Ultra-fast homomorphic encryption models enable secure outsourcing of genotype imputation (2020)
16. Lauter, K., López-Alt, A., Naehrig, M.: Private computation on encrypted genomic data. In: Aranha, D.F., Menezes, A. (eds.) LATINCRYPT 2014. LNCS, vol. 8895, pp. 3–27. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16295-9_1
17. Lu, W.J., Yamada, Y., Sakuma, J.: Privacy-preserving genome-wide association studies on cloud environment using fully homomorphic encryption. BMC Med. Inform. Decis. Mak. 15, 1–8 (2015)
18. Mondal, S., Gharote, M.S., Lodha, S.P.: Privacy of personal information: going incog in a goldfish bowl. Queue 20(3), 41–87 (2022). https://doi.org/10.1145/3546934
19. Mott, R., Fischer, C., Prins, P., Davies, R.W.: Private genomes and public SNPs: homomorphic encryption of genotypes and phenotypes for shared quantitative genetics (2020)
20. Naveed, M., et al.: Privacy in the genomic era. ACM Comput. Surv. 48(1), 1–44 (2015). https://doi.org/10.1145/2767007
21. Shi, S., et al.: Comprehensive assessment of genotype imputation performance. Hum. Hered. 83(3), 107–116 (2019)
22. Sim, J.J., Chan, F.M., Chen, S., Tan, B.H.M., Aung, K.M.M.: Achieving GWAS with homomorphic encryption, 1–24 (2019)
23. Timpson, N.J., Greenwood, C.M.T., Soranzo, N., Lawson, D.J., Richards, J.B.: Genetic architecture: the shape of the genetic contribution to human traits and disease. Nat. Rev. Genet. 19, 110–124 (2017)
24. Trampush, J.W., Yang, M.L.Z., Lencz, T.: GWAS meta-analysis reveals novel loci and genetic correlates for general cognitive function: a report from the COGENT consortium. Mol. Psychiatry 22, 336–345 (2017)
25. Viand, A., Jattke, P., Hithnawi, A.: SoK: fully homomorphic encryption compilers (2021)
Combining Convolutional Neural Networks and Rule-Based Approach for Detection and Classification of Tomato Plant Disease Erika Rhae Magabo(B) , Anna Liza Ramos(B) , Aaron De Leon, and Christian Arcedo Saint Michael’s College of Laguna, Old National Highway, Platero, Philippines {erika.magabo,annaLiza.ramos,aaron.deleon, christian.arcedo}@smcl.edu.ph
Abstract. Plant disease detection plays a crucial role in achieving sustainable agriculture. Therefore, the aim of this study is to present a novel approach that combines a deep learning model with rule-based methods to create a more accurate and reliable plant disease detection application. The dataset used in this study consists of 10,000 tomato leaves afflicted with nine distinct diseases. These images underwent segmentation techniques to analyze color properties and calculate the affected surface area. As a result, the study formulated a rule-based approach to establish criteria for determining the degree of diseases. To optimize feature extraction, various filtering techniques and classification models were examined. The results demonstrated that the median filter, which exhibited the lowest mean squared error (MSE) and root mean squared error (RMSE), as well as higher values in peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), combined with a convolutional neural network (CNN) classification model with an accuracy of 98.18%, yielded the best performance among the other models. Based on the performance results, the new approach of combining CNN and the rule-based approach accurately generated expected results in determining the disease colors, calculating the percentage of affected area, and identifying the type of disease. Moreover, when deployed into an application, the model produced accurate results. This study would greatly assist farmers in making effective decisions regarding the status of their plants based on the degree of disease. Keywords: Rule-Based · Convolutional Neural Network · Tomato Disease Detection · Disease Classification · Disease Color
1 Introduction

Plants are an incredibly important kingdom of organisms that produce oxygen for humans to breathe and absorb carbon dioxide in order to grow [1]. However, increases in carbon dioxide can have negative effects on human health, such as respiratory issues and allergies [2]. Unfortunately, climate change has led to an increase in plant diseases, including wilting, scabs, moldy coatings, rusts, blotches, and rotting tissue [3]. In fact, plant diseases are the main cause of loss in the agriculture sector globally, with estimated

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 191–204, 2023. https://doi.org/10.1007/978-3-031-44146-2_19
losses of up to $220 billion per year [4]. These diseases have been affecting plants for over 250 million years, and historically, experts have examined affected plant leaves by visually inspecting them [5].

Recent technological breakthroughs have enabled numerous studies aimed at protecting crops. For instance, researchers have developed methods for detecting diseases on plant crops, resulting in improved crop management and production [6]. To attain high detection accuracy, various filters have been utilized, such as mean filters including arithmetic, geometric, and harmonic filters [7]. These filters improve visual perception and quantitative measurement, leading to better results [8]. The adaptive mean filter, which is better at preserving edge information, has been used in combination with different filters and classification algorithms to determine which method provides the highest accuracy [9, 10]. Additionally, the median filter has been found to produce good results at higher noise densities [11], while the Laplacian filter has significantly improved image texture and edge preservation in color photos [12]. Furthermore, the local Laplacian filter has been shown to preserve image quality, as measured by the peak signal-to-noise ratio (PSNR) value [13].

Several machine and deep learning algorithms, such as support vector machines (SVM), logistic regression, convolutional neural networks (CNN), ResNet, and VGG, have been applied for plant disease classification [14]. Machine learning has been used to detect various stages of a plant's life cycle [15], as well as for the automatic extraction and categorization of plant disease features using CNN and a learning vector quantization algorithm [16, 17]. Deep learning models have also been utilized for early disease prediction in plants [18, 19]. The aforementioned studies on plant diseases are still under investigation, employing various approaches and methods.
This has prompted the current study to propose a new approach by combining the classification and rule-based techniques for the detection of tomato plant diseases. The objectives of this study are to assess the severity of plant diseases and to detect various disease colors and types observed in tomato plants. These include early blight [20], late blight [20], Yellow Leaf Curl Virus, Septoria leaf spot [21], mosaic virus [22], bacterial spot, leaf mold [23], spider mites [24], and target spot [25]. The datasets were classified based on the degree of disease severity, along with the corresponding disease types and the affected surface, using a rule-based approach. Therefore, the outcome of this study will greatly assist agriculturists in evaluating disease severity and identifying different disease colors in plants. This information can be used as input to determine the possibility of treating plant diseases and to identify diseases based on their color.
2 Methodology

Figure 1 illustrates the conceptual framework employed in this study, which aimed to investigate various tomato plant diseases. The study utilized datasets obtained from Kaggle, consisting of images paired with their corresponding disease types. These images underwent segmentation to identify the extent of the affected surface and the color of the disease present in each type, serving as input for the rule-based design. To optimize computational costs, the images were cropped, and various filtering techniques were applied to enhance feature extraction. The effectiveness of the filters
was evaluated using performance metrics such as MSE, RMSE, PSNR, and SSIM. The filters that demonstrated good performance were selected, and their results were used as input for the classification models. Moreover, the researchers conducted a comprehensive experiment to fine-tune the parameters of the models and determine which ones achieved the highest accuracy. Subsequently, a rule-based approach was established, incorporating information about the disease color and affected surface, to enable the detection of disease severity. Finally, the model was deployed within an application framework to assess its efficiency in detecting diseases in real-world scenarios.
Fig. 1. Conceptual Framework
2.1 Building Datasets

The data used in this experiment was obtained from Kaggle and consists of 10,000 images of tomato leaves with nine (9) different diseases (Bacterial Spot, Early Blight, Late Blight, Leaf Mold, Septoria Leaf Spot, Spider Mites, Target Spot, Mosaic, and Yellow Leaf Curl Virus) as well as healthy leaves. The images were cropped from 256 × 256 to 112 × 112. To increase the size of the dataset, image augmentation techniques such as rotation, zoom, and adjustments to height and width were applied (Fig. 2).
Fig. 2. Sample Images of the datasets
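The 256 × 256 to 112 × 112 crop can be illustrated with a simple center crop (a sketch of ours; the paper does not specify the exact cropping method used):

```python
def center_crop(img, size):
    """Center-crop a 2D image (list of pixel rows) to size x size."""
    top = (len(img) - size) // 2
    left = (len(img[0]) - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]
```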
2.2 Image Pre-processing

Calculating the Affected Surface
To accurately identify the boundaries of diseased regions in plant images, a pixel-wise
classification image segmentation technique has been employed. This technique enables the separation of pixels belonging to diseased areas from the rest of the image, facilitating subsequent analysis and classification of the affected regions. In this approach, each pixel is assigned a label indicating whether it belongs to the diseased region or not [26]. This pixel-wise classification technique serves as the foundation for formulating the rule-based approach in the study. Additionally, it can be used as a cross-reference check to assess the performance of the classification model, as the accuracy may depend on the representation of the datasets.

AS = (PX_d / PX) ∗ TS    (1)
Eq. (1) calculates the affected surface area (AS) of a specific disease by dividing the number of pixels classified as that disease (PX_d) by the total number of pixels (PX) and then multiplying by the total surface area (TS) of the tomato plant.

Calculating the Color Disease of Tomato Plant Disease
To determine the color of the disease, the colors yellow, green, and brown were defined. These color ranges can be established by selecting appropriate thresholds in a color space such as HSV (Hue, Saturation, Value). The next step is to classify the pixels in the image based on their color by comparing the pixel values with the defined color ranges. Pixels that fall within the yellow color range are classified as yellow disease pixels, those within the green color range as green disease pixels, and those within the brown color range as brown disease pixels. Once the pixels are classified, the number of pixels in each color category is counted, providing an estimate of the distribution of color diseases within the image. To calculate the percentage of each color disease, the count of each color disease is divided by the total number of pixels in the image and multiplied by one hundred. This is used as input to formulate the rule-based approach and to assess the status of dataset representation, which may impact the identification of color variations.

CD_(y,g,b) = (PX_(y,g,b) / TPX) ∗ 100    (2)
Eq. (2) calculates the disease color distribution (CD_(y,g,b)) as a percentage by dividing the number of pixels classified as the specific disease color (PX_(y,g,b)) by the total number of pixels (TPX) and multiplying by 100.

Filtering Techniques
To optimize the detection of the disease, the study investigated various filtering techniques with their corresponding parameters (Table 1). The aim was to determine the most suitable filter for developing the model and deploying it in an application. The performance of the filters was measured using the metrics below.

Table 1. Filter parameters utilized in experimentation.

Filtering Techniques    | Kernel
Arithmetic Mean Filter  | (3, 4, 5)
Geometric Mean Filter   | (3, 4, 5)
Harmonic Mean Filter    | (3, 4, 5)
Adaptive Filter         | (3, 4, 5)
Median Filter           | (3, 4, 5)
Laplacian Filter        | (3, 4, 5)

PSNR (Peak Signal-to-Noise Ratio)
The PSNR computes the peak signal-to-noise ratio between two images in decibels. This ratio is used to compare the quality of the original image with that of a filtered image. The higher the PSNR, the better the image quality. The PSNR is computed as (3):

PSNR = 10 log10(R^2 / MSE)    (3)

where R is the maximum possible pixel value.

MSE (Mean Squared Error)
The mean squared error (MSE) measures the cumulative squared error between the filtered image and the original image. A lower MSE value indicates a lower amount of error. The MSE is computed as (4):

MSE = (1 / (M ∗ N)) ∗ Σ_{m,n} [I1(m, n) − I2(m, n)]^2    (4)

SSIM (Structural Similarity Index)
The Structural Similarity Index (SSIM) is a performance metric used to measure the similarity between two images, such as the original image and the filtered image. The comparison of the two images is based on three characteristics: luminance, contrast, and structure. A higher value indicates a higher similarity to the original image. The SSIM is computed as (5):

SSIM(x, y) = [(2 μx μy + C1)(2 σxy + C2)] / [(μx^2 + μy^2 + C1)(σx^2 + σy^2 + C2)]    (5)

RMSE (Root Mean Square Error)
The RMSE calculates the residual, i.e., the difference between the predicted value and the true value, for each data point; it then computes the mean of the squared residuals and takes the square root of that mean. RMSE is commonly used in supervised learning applications because it requires true measurements at each predicted data point. The RMSE is expressed as (6):

RMSE = sqrt((1 / N) ∗ Σ_{i=1}^{N} (y(i) − ŷ(i))^2)    (6)
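Equations (1)–(4) and (6) above can be sketched in plain Python on toy data (our own illustration; SSIM is omitted for brevity):

```python
import math

def affected_surface(mask, total_surface):
    """Eq. (1): diseased-pixel fraction scaled by total surface (mask: 1 = diseased)."""
    px_d = sum(px for row in mask for px in row)
    px = sum(len(row) for row in mask)
    return px_d / px * total_surface

def color_distribution(pixel_colors):
    """Eq. (2): percentage of pixels classified as each disease color."""
    total = len(pixel_colors)
    return {c: pixel_colors.count(c) / total * 100 for c in ("yellow", "green", "brown")}

def mse(img1, img2):
    """Eq. (4): mean squared error between two equally sized grayscale images."""
    diffs = [(a - b) ** 2 for r1, r2 in zip(img1, img2) for a, b in zip(r1, r2)]
    return sum(diffs) / len(diffs)

def psnr(img1, img2, peak=255):
    """Eq. (3): peak signal-to-noise ratio in decibels."""
    return 10 * math.log10(peak ** 2 / mse(img1, img2))

def rmse(img1, img2):
    """Eq. (6): root mean squared error."""
    return math.sqrt(mse(img1, img2))
```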
2.3 Classification Algorithms

The images were classified using machine and deep learning algorithms to determine which classification algorithm generates the highest accuracy. A comprehensive experiment was conducted by modifying the hyper-parameters shown in Table 2.

Table 2. Parameters utilized in experimentation.

Parameters       | Values
Splitting Data   | (80,20), (60,40), (40,60), (20,80)
Random State     | (30, 42, 48, 50)
Algorithm        | CNN, ResNet 50, VGG19, SVM, Logistic Regression, KNN, Mobile
Number of Epochs | (23, 30, 33, 100)
Batch Size       | (24, 32, 40, 50)
No. of Layers    | (50, 53, 237)
2.4 Developing the Proposed Rule-Based Approach

The rule-based approach was formulated based on the findings regarding the affected surface and the color identification of the disease, which determined the percentage criteria established in the rules. Additionally, a study [28] supported the identification of disease colors: brown indicates an "extremely high or high degree of disease," implying that the plant is decaying; yellow indicates a "moderate or slight degree of disease," suggesting that the plant lacks nutrients; and yellow-green represents an "extremely slight degree of disease," indicating that the plant requires more care. Consequently, the following rules were established:
IF Color == Brown
    IF Affected Surface = 100–50% THEN "Extremely High Degree of Disease"
    IF Affected Surface = 49–1%  THEN "High Degree of Disease"
ELSE IF Color == Yellow
    IF Affected Surface = 100–50% THEN "Moderate Degree of Disease"
    IF Affected Surface = 49–1%  THEN "Slight Degree of Disease"
ELSE IF Color == Yellow Green
    IF Affected Surface = 100–1% THEN "Extremely Slight Degree of Disease"
ELSE
    No Degree of Disease
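The rules above can be sketched as a single Python function (our own rendering; the boundaries follow the stated percentage ranges):

```python
def disease_degree(color, affected_surface):
    """Map a disease color and affected surface (percentage, 0-100) to a severity label."""
    if color == "Brown":
        return ("Extremely High Degree of Disease" if affected_surface >= 50
                else "High Degree of Disease" if affected_surface >= 1
                else "No Degree of Disease")
    if color == "Yellow":
        return ("Moderate Degree of Disease" if affected_surface >= 50
                else "Slight Degree of Disease" if affected_surface >= 1
                else "No Degree of Disease")
    if color == "Yellow Green":
        return ("Extremely Slight Degree of Disease" if affected_surface >= 1
                else "No Degree of Disease")
    return "No Degree of Disease"
```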
2.5 Building the Model

The study employed a CNN model with input images 224 × 224 in size and convolved the image features using a 2 × 2 average pooling method. These features were then fed into the fully connected layers, using parameters that yielded good performance. Finally, this model was deployed into an application (Fig. 3).
Fig. 3. CNN Model Architecture
2.6 AI Application Development

The model was deployed in a web-based application built using Flask, a lightweight web framework for Python that provides the necessary tools for handling HTTP requests, rendering templates, and managing routes. The trained model, saved in the HDF5 file format, is loaded using Keras, enabling efficient predictions on input images. For image analysis, the OpenCV library is employed to preprocess the images and calculate color percentages, a crucial feature for disease identification. The application is structured into two main routes: the main route ("/") and the prediction route ("/predict"). The main route renders the index.html template, which includes an image upload form and a display area for prediction results. When a user uploads an image and clicks the "Predict" button, a JavaScript event handler is triggered. It reads the selected image, displays a loader animation, and sends an AJAX POST request containing the image data to the prediction route ("/predict").
In the prediction route, the uploaded image is loaded using OpenCV, and the predict_disease function is invoked to classify the disease based on the trained model. Simultaneously, the calculate_color_percentages function processes the image to determine the percentages of different color components related to the disease. Subsequently, the calculate_disease_degree function utilizes these percentages and the affected surface area to determine the degree of the disease based on a set of predefined rules.
3 Results

Table 3 displays the average percentage of disease affecting the tomato plant datasets. This percentage was utilized to formulate the rule-based system, where a range of 100–50% indicates an "extremely high" degree of disease for brown, a "moderate degree of disease" for yellow, and an "extremely slight degree of disease" for yellow-green (Tables 3 and 4).

Table 3. Average of Affected Degree of Diseases

Disease                | Average of the disease present
Leaf Mold              | 63.28%
Target Spot            | 66.88%
Spider Mites           | 69.65%
Late Blight            | 64.41%
Early Blight           | 58.66%
Mosaic                 | 71.66%
Septoria Spot          | 75.93%
Bacterial Spot         | 54.0%
Yellow Leaf Curl Virus | 64.08%

Table 4. Average of Color Disease

Color  | D1    | D2    | D3    | D4    | D5    | D6    | D7    | D8    | D9
Green  | 45.94 | 36.49 | 33.04 | 30.30 | 26.57 | 41.28 | 28.15 | 24.0  | 35.86
Yellow | 53.94 | 0.78  | 2.08  | 2.47  | 4.43  | 3.55  | 0.97  | 4.98  | 63.16
Brown  | 0.11  | 62.69 | 64.80 | 67.18 | 59.98 | 55.11 | 70.76 | 70.95 | 0.92

D1 – Bacterial Spot, D2 – Leaf Mold, D3 – Target Spot, D4 – Spider Mites, D5 – Late Blight, D6 – Early Blight, D7 – Mosaic, D8 – Septoria Spot, D9 – Yellow Leaf Curl Virus.
Table 4 displays the average color representation of the collected tomato plant datasets. The findings indicate that the brown color accounted for 78% of the datasets, which
were categorized as having an “extremely high degree of diseases,” while 22% were categorized as yellow, denoting a “moderate degree of disease” based on the formulated rule-based system. The severity of the disease was determined by identifying the color with the highest percentage score. However, it is worth noting that the dataset representation is imbalanced, as it falls into only two categories: “extremely high” and “moderate” degrees of disease. This imbalance significantly affects the classification and detection of disease colors. To address this limitation, the rule-based system resolves the issue by considering the calculation of the affected surface area (Table 5). Table 5. Results of Performance Analysis of Different Filtering Techniques Filtering Techniques Arithmetic M ean Filter
PSNR 10.37
M SE 6043.75
RM SE 0.89
SSIM 0.1686
Geometric M ean Filter
10.30
6068.09
0.89
0.1681
Harmonic M ean Filter Adaptive Filter
10.30 17.06
6056.79 1276.67
0.89 0.48
0.1778 0.1054
M edian Filter
19.74
689.56
0.3001
0.1578
Laplacian Filter
8.079
10119.3
1.1499
0.003
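The MSE and PSNR columns in Table 5 follow standard definitions. A sketch of how they are typically computed for 8-bit images (not the authors' code):

```python
import numpy as np

def mse(ref, img):
    # Mean squared error between a reference image and a filtered image.
    ref, img = ref.astype(np.float64), img.astype(np.float64)
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means less distortion.
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

RMSE is the square root of MSE, and SSIM is usually obtained from a library routine such as `skimage.metrics.structural_similarity` rather than computed by hand.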
Table 5 presents the performance of the different filters, highlighting the superior performance of the median filter compared to the others. In contrast, the Laplacian filter exhibited the worst performance based on its metrics. Based on the findings of this investigation, the median filter was selected and employed to develop a model aimed at optimizing plant disease detection (Table 6).

Table 6. Model Classification Result

Epochs  Random State  Batch Size  Model                Accuracy
100     42            32          CNN                  98.18%
                                  ResNet50             96.07%
                                  VGG 19               90.37%
                                  SVM                  90.20%
                                  Logistic Regression  88.00%
                                  KNN                  87.30%
                                  Mobile               91.00%
Table 6 displays the performance of the classification models using the recommended parameters, which were determined through extensive investigation. As a result, higher accuracy was achieved. CNN stood out among the classification models, outperforming the others with an impressive score of 98.18%. This model was subsequently deployed in the application to accurately detect disease color, affected surface, and disease labels.
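As a rough illustration of how such a model comparison is set up (using scikit-learn's digits dataset as a stand-in for the tomato images, and default hyperparameters rather than the paper's tuned models; only the `random_state=42` split setting mirrors Table 6):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in dataset; the paper used tomato leaf images.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

results = {}
for name, model in {
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "KNN": KNeighborsClassifier(),
}.items():
    # Fit on the training split, score accuracy on the held-out split.
    results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {results[name]:.2%}")
```

The deep models in Table 6 (CNN, ResNet50, VGG 19) would be trained separately with a framework such as TensorFlow or PyTorch; this sketch only shows the shared train/test protocol.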
E. R. Magabo et al.
Furthermore, Table 7 presents the confusion matrix results of the CNN model, showcasing precision, recall, and F1-scores for the nine disease labels and the healthy class.

Table 7. Confusion Matrix Result

Type of Diseases   Precision  Recall  F1-Score
Bacterial Spot     0.96       0.98    0.97
Leaf Mold          0.98       1.00    0.99
Target Spot        0.98       0.98    0.98
Spider Mites       1.00       0.98    0.99
Late Blight        0.99       0.95    0.97
Early Blight       0.99       0.95    0.97
Mosaic             0.94       0.99    0.97
Septoria Spot      1.00       0.96    0.98
Yellow Leaf Curl   0.99       1.00    1.00
Healthy            1.00       1.00    1.00
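The per-class scores in Table 7 follow the usual definitions of precision, recall, and F1. A self-contained sketch (with toy labels, since the paper's raw predictions are not published):

```python
def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for one class from paired label lists."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: two "leaf_mold" samples, one misclassified as "healthy".
y_true = ["leaf_mold", "leaf_mold", "healthy", "bacterial_spot"]
y_pred = ["leaf_mold", "healthy", "healthy", "bacterial_spot"]
print(per_class_metrics(y_true, y_pred, "leaf_mold"))
```

In practice the same numbers come from `sklearn.metrics.classification_report` applied to the test-set predictions.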
Table 7 illustrates the model's performance, with the F1-score ranging from 97% to 100%, precision ranging from 94% to 100%, and recall ranging from 95% to 100%. These results indicate a minimal error rate when detecting different types of tomato diseases. Consequently, the CNN model is expected to excel in accurately detecting disease color, affected surface, and disease labels. It is highly recommended to deploy this model in an application to further evaluate its performance.

Table 8. Rule-Based Approach Result

Type of Diseases   Image No.  Affected Surface  Disease Color  Result
Late Blight        42         76.26             Brown          Extremely High
Leaf Mold          18         75.98             Brown          Extremely High
Target Spot        21         73.36             Brown          Extremely High
Spider Mites       37         70.99             Brown          Extremely High
Spider Mites       9          71.60             Brown          Extremely High
Late Blight        45         83.16             Yellow         Moderately
Late Blight        43         64.98             Brown          Extremely High
Yellow Leaf Curl   85         92.42             Brown          Extremely High
Early Blight       60         68.48             Brown          Extremely High
Bacterial Spot     3          72.77             Brown          Extremely High
Table 8 showcases the outcomes of implementing the CNN model with the rule-based approach. To validate the efficacy of the developed criteria, ten images were randomly selected and subjected to testing. The results affirm the accurate generation of calculated metrics such as affected surface percentage, disease color, and disease degree through the rule-based implementation. These findings paved the way for the application's development, which can proficiently detect disease color, affected surface, and disease labels by leveraging the rule-based approach (Fig. 4).
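A sketch of how the two components plug together. Here `predict_label` and `analyze_colors` are hypothetical callables standing in for the trained CNN and the color/surface analysis, and the color-to-severity mapping paraphrases the paper's rule base; none of these are APIs from the paper:

```python
def diagnose(image, predict_label, analyze_colors):
    """Combine CNN classification with rule-based severity criteria.

    `predict_label(image)` -> disease name (the CNN's job);
    `analyze_colors(image)` -> (affected surface %, dominant color).
    Both are assumed interfaces for illustration.
    """
    label = predict_label(image)
    surface, color = analyze_colors(image)
    severity = {"Brown": "Extremely High",
                "Yellow": "Moderate"}.get(color, "Extremely Slight")
    return {"disease": label, "affected_surface": surface,
            "disease_color": color, "severity": severity}

# Stub components reproducing one row of Table 8 (Late Blight, image 42).
report = diagnose(None, lambda img: "Late Blight",
                  lambda img: (76.26, "Brown"))
print(report["severity"])  # Extremely High
```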
Fig. 4. Application User Interface
Figure 4 presents the user interface of the application. The application allows the user to import an image, after which it automatically generates a prediction comprising the affected surface area, the disease color with its corresponding percentage, and the disease label. It also reports the percentage of the tomato's surface where the disease is present and the degree of the disease. The application was used to test at least five (5) images per disease. Presented below is one sample result of the application per disease (Table 9).

Table 9. AI Detection Result

Type of Diseases         Affected Surface  Green  Yellow  Brown  Disease Degree
Mosaic Virus             22.83             22     0       0.82   Extremely Slight Degree
Yellow Leaf Curl Virus   39.42             35.0   0.29    3.68   Extremely Slight Degree
Bacterial Spot           45.71             18.68  4.27    22.76  High Degree
Early Blight             57.25             38.68  0.24    18.33  Extremely Slight Degree
Late Blight              43.97             21.92  0.35    21.70  Extremely Slight Degree
Septoria Leaf Spot       17.49             9.24   2.39    5.86   Extremely Slight Degree
Leaf Mold                23.79             1.19   19.82   2.76   Slight Degree
Spider Mites             42.89             35.64  0       7.25   Extremely Slight Degree
Target Spot              56.89             54.24  0.05    2.60   Extremely Slight Degree
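The per-color percentages in Table 9 come from segmenting the leaf by color. A minimal sketch over a hue channel, where the hue bands are illustrative guesses rather than the paper's calibrated thresholds:

```python
import numpy as np

# Illustrative hue bands (0-180 scale, as in OpenCV); not the paper's values.
HUE_BANDS = {"brown": (0, 30), "yellow": (30, 45), "green": (45, 90)}

def color_percentages(hue, leaf_mask):
    """Percent of leaf pixels falling in each hue band."""
    total = int(leaf_mask.sum())
    out = {}
    for name, (lo, hi) in HUE_BANDS.items():
        sel = leaf_mask & (hue >= lo) & (hue < hi)
        out[name] = 100.0 * int(sel.sum()) / total if total else 0.0
    return out

# 2x2 toy "leaf": one brown, one yellow, and two green pixels.
hue = np.array([[10, 35], [60, 70]])
mask = np.ones((2, 2), dtype=bool)
pct = color_percentages(hue, mask)
# The affected surface is the non-green share of the leaf.
affected = pct["brown"] + pct["yellow"]
print(pct, affected)  # affected = 50.0
```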
Table 9 demonstrates that the CNN model with the rule-based approach consistently exhibited good performance when deployed in the application, regardless of orientation, lighting conditions, distance, and resolution. Therefore, the approach adopted in this study proves efficient and effective in detecting diseases in tomato plants.
4 Conclusion

Based on the results of the study, the tomato datasets exhibited disease degrees ranging from 54% to 75%. Additionally, the color identification of diseases showed ranges of 0.11% to 70.95% for brown, 0.78% to 63.16% for yellow, and 24% to 45% for yellow-green. These findings were instrumental in formulating the criteria for the rule-based approach and in assessing the status of the collected datasets, which could influence color detection. Regarding the filters and classification models utilized to optimize the detection of tomato plant diseases, the median filter and the CNN classification model yielded promising results. Furthermore, the implementation of the rule-based approach significantly contributed to efficient and effective disease identification, calculation of the affected surface, and determination of the disease degree. To enhance this study, the following recommendations were made:

• Expand the dataset by adding more samples.
• Investigate the impact of color categories, particularly focusing on the brown color variant.
• Consider identifying diseases of other colors.
• Conduct further investigation on determining the degree of disease.
• Apply other techniques in pre-processing the images to enhance features.
Moving in Space: Development Process Analysis on a Virtual Reality Therapy Application for Children with Cerebral Palsy Josiah Cyrus Boque1,4(B) , Marie Eliza R. Aguila1,2 , Cherica A. Tee1,3 , Jaime D. L. Caro1,4 , Bryan Andrei Galecio1 , Isabel Teresa Salido1 , Romuel Aloizeus Apuya1 , Michael L. Tee1,5 , Veeda Michelle M. Anlacan1,6,7 , and Roland Dominic G. Jamora1,6 1
Augmented Experience E-Health Laboratory, University of the Philippines Manila, Manila, Philippines {mraguila1,catee,jdlcaro,bcgalecio,iosalido,mltee, vmanlacan,rgjamora}@up.edu.ph 2 Department of Physical Therapy, College of Allied Medical Professions, University of the Philippines Manila, Manila, Philippines 3 Department of Pediatrics, College of Medicine, University of the Philippines Manila, Manila, Philippines 4 Department of Computer Science, College of Engineering, University of the Philippines Diliman, Quezon City, Philippines [email protected] 5 Department of Physiology, College of Medicine, University of the Philippines Manila, Manila, Philippines 6 Department of Neurosciences, College of Medicine - Philippine General Hospital, University of the Philippines Manila, Manila, Philippines 7 Center for Memory and Cognition, Philippine General Hospital, University of the Philippines Manila, Manila, Philippines https://axel.upm.edu.ph
Abstract. Virtual Reality (VR) as a supplementary therapy has become popular for patients with mobility limitations such as cerebral palsy (CP). However, the development of such technology in the Philippines has not gained momentum. The immersive gamification technology system (ImGTS) establishes a process for creating applications that deliver therapy and rehabilitation to children with CP. For this study, feedback on the previous ImGTS prototype for CP patients was analyzed. This feedback covered the clarity of the instructions for each of the major activities, the need for additional sensory elements to help users navigate and perform each of the activities in the application, and the safety of users when operating auxiliary devices such as a treadmill for walking. After this feedback was addressed in the current prototype, the responses of the focus group participants were consolidated. Most of them appreciated the marked improvement in the clarity of the instructions for each activity, which now include a demonstration and a voice guide. The inclusion of an elevator feature also made navigation inside the spaceship easy to understand and remember. As a result, feedback concerning safety and user experience improved compared with the previous prototype.

Funded by DOST-PCHRD, UP Manila. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 205–214, 2023. https://doi.org/10.1007/978-3-031-44146-2_20

Keywords: Virtual Reality · Cerebral Palsy · VR Therapy · CP Rehabilitation

1 Introduction
In recent years, virtual reality (VR) has become a popular supplementary tool in the therapy and rehabilitation of children with motor impairments such as cerebral palsy (CP) [6]. This is because children have a greater tendency to feel that the virtual environment (VE) they are experiencing is 'real' and that they are really living in it [5]. It is important to note, however, that the way children interact with VR differs from the way they respond to traditional media. That is why it is important to consider the safety of every piece of content created, based on ethical standards [2]. Children with CP experience permanent motor impairment marked by several symptoms of mobility limitation that affect movement coordination, balance, posture, and gait, and they will live with these conditions into adulthood [3,4]. In the Philippines, children with CP undergo several kinds of rehabilitation intervention: 64% undergo physical therapy, 29% undergo occupational therapy, while 7% undergo a different therapy or none at all [4]. However, children undergoing this kind of repetitive therapy find it unmotivating and demanding [7]. An immersive gamification technology system (ImGTS) was created to pioneer the development of applications that cater to the therapy needs of children with CP through the use of VR [1]. Current progress on the study covers user requirements analysis to identify the aspects of mobility that each activity can target, design analysis to ensure that children with CP are interested in and enjoy using ImGTS applications, and the development structure for each of the activities [1]. In this study, the analysis of data gathered from several participants is presented as the result of several iterations of discussion aimed at developing an immersive application suitable for their condition, one that targets more of the mobility aspects that can help train and rehabilitate their mobility limitations.
2 Development Process
In this section, the demographics of the focus group participants are discussed. The application specifications and features, testing set-up, and testing procedure are also presented in detail.
2.1 Conducting Focus Group Discussions
Seven participants (n = 4 in healthcare or rehabilitation, and n = 3 in computer science or game development) were invited to test the current iteration of the VR game prototype. The participants had a mean age of 31.29 (10.77) years and were mostly female (71.43%) (Table 1).

Table 1. Participant characteristics during the second prototype testing session (n = 7).

Variable                                   n (%)
Age (in years), M (SD)                     31.29 (10.77)
Sex
  - Male                                   2 (28.57%)
  - Female                                 5 (71.43%)
Field
  - Healthcare or rehabilitation           4 (57.14%)
  - Computer science or game development   3 (42.86%)
Each participant experienced the VR prototype at least once and was invited to participate in an online focus group discussion (FGD) to process the experience. The goals of the FGD were (a) to describe the participants' experience in using the VR prototype and (b) to describe how the use of the prototype in therapy can help children with cerebral palsy. The FGD was recorded and later transcribed by a member of the research team. The transcription was coded, and recurring and related responses were grouped into themes.

2.2 VR Application and Specifications
The current prototype of the application includes changes, enhancements, and additional specifications in the areas of user experience, user safety, and user benefits. This prototype was developed on top of the preceding prototype presented in the previous publication [1].

2.3 Activities
Alien Invasion. This activity is similar to bowling and has been part of the application since the first prototype. It exercises the user's thighs and torso to build balance and strength. The mechanics run as follows. There are ball spawners on both sides of the user, placed within arm's reach. Each ball is interactable via the controllers. When the game starts, goo-like aliens approach the player's deck at a specific speed. When the user hits an alien, 5 points are added to the score. Every game ends at a specific score goal or when the time runs out. A trophy is given at the end of the game if the user reaches the specified score goal. The game components are shown in Fig. 1.
Fig. 1. (Left) Alien being bowled down; (Right) The hallway where the aliens are moving towards the player.
Lock 'em Up. This is a squatting activity that tests and trains the user's core and thigh strength. The mechanics run as follows. There are 5 doors through which aliens approach the user. To prevent the aliens from coming out of the doors, there is a lever right in front of the user. The user needs to pull the lever up (above head level) and down (floor level) twice to close each door. This activity has no time limit, so that the quality of the user's squats can be observed. A trophy is given at the end of the game if the user closes all five doors. The game components are shown in Fig. 2.
Fig. 2. (Left) Alien being locked up; (Right) The lobby where the aliens are being locked up by the player.
Walkey Moley. This is a walking activity that tests and trains the user's gait and mobility endurance with the help of a treadmill. To successfully finish the activity, the user needs to pass at least 10 tiles along the way, or more depending on the difficulty set by the therapist. The user is also asked to count the flying objects moving in the opposite direction. A trophy is given at the end of the game if the user passes the goal number of tiles. To navigate to the circular hall, the user takes the elevator and then passes through a storage room. This is where the user is asked to remove the HMD and get set up on the treadmill. Once they wear the HMD again, they are already in the circular room. The game components are shown in Fig. 3.
Fig. 3. (Top) The waiting room; (Lower left) The spaceship model flying above the hall; (Lower right) The circular hallway on which the player walks.
Avatar Customization. This is a secondary activity that users can do at the start of the session or anytime between activities. It gives the user an immersive role-playing experience as a space ranger for the duration of the session. There are four body parts to choose from: a helmet for the head, an armband, a bodysuit, and boots. Each body part has 3 different colors and designs. A player's perspective is shown in Fig. 4.
Fig. 4. The hologram which shows the initial look of the player while selecting parts of the suit.
2.4 Other Components
Space Lobby (Fig. 5). This is the first area the user enters and sees when opening the application. Here, the captain's cockpit is in front of the user. On the right side is the avatar area for avatar customization. At the back is the elevator door. Lastly, on the left side is the trophy area, where the trophies the user receives for every successful activity are displayed.
Fig. 5. The landing lobby of the spaceship.
Elevator (Fig. 6). This is the user's means of navigation inside the spaceship. The elevator works like a present-day elevator. Outside it, there are 2 buttons, one for opening the elevator and one for closing it. Inside, there are 4 buttons, one for each of the activities and the space lobby: '1' is for Walkey Moley, '2' is for Lock 'em Up, '3' is for the space lobby, and '4' is for Alien Invasion.
Fig. 6. (Left) View at the door of the elevator; (Right) View inside the elevator.
Trophy Area. In this area, there are 3 cylinders on which the user places every trophy he or she receives after a successful activity (as shown in Fig. 7). Each cylinder is designated for one of the activities.
Fig. 7. (Left) Trophy from the Alien Invasion; (Center) Trophy from the Lock 'em Up; (Right) Trophy from the Walkey Moley.
2.5 Testing Set-Up
A Meta Quest 2 (128 GB) was the head-mounted device (HMD) used during the testing, connected to a Lenovo Legion 5 laptop (8 GB RAM, 16 GB graphics card, 1 TB storage) through the HMD's Air Link feature. The testing was conducted at the UP Manila Clinic for Therapy Services under the College of Allied Medical Professions, and at the Alumni Engineering Centennial Hall in UP Diliman, Quezon City. A 3 m × 3 m space was designated as the 'testing area' for the application, as shown in Fig. 8. A workstation was placed at the right side of the testing area for the computer, the Oculus devices when not in use, and sanitizing equipment. This is also where the technical assistant and therapist view and observe the performance of the user through the therapist module.

2.6 Testing Procedure
Before the testing, each participant was asked to answer a health declaration form for the safety of everyone at the testing site, since the testing was conducted at the height of the pandemic. Each participant was then oriented on the use of the HMD, safety, and possible irritating sensations they might experience during the testing (Fig. 9). When a participant was ready and fit to use the HMD, they were positioned at the center of the testing area, had the HMD placed on their head, and strapped the controllers to their wrists. Once everything was set, the technical assistant ran the application. After each testing session, participants were asked to answer a questionnaire gathering their feedback on all aspects of the application, such as clarity, colors, loudness of audio, etc.
Fig. 8. The 3 m × 3 m set-up with the workstation for the therapist on the right side.
Fig. 9. Testing Protocol Procedure Flow Chart.
3 Results and Discussion
This section discusses the insights and recommendations of the health and game design professionals during the FGD regarding the architecture and overall design of the developed VR game prototype for the target population.

3.1 User Experience
One comment that every participant gave is that the activities are fun and enjoyable. They also felt a sense of competitiveness, since there are ways to measure the quality of their performance: a higher score means better performance and stronger stamina. They suggested that the parameters for some factors in the games should be adjustable, allowing changes in difficulty and adaptability to the condition of the patients (for patients with GMFCS Levels I and II). This can also support the progression of the activities and of the patients' performance in every session. Among the new features of the application are the elevator and the trophy area. The elevator was one of the components every user remembered. They commended its inclusion because it made navigation within the spaceship easy to remember and understand. It also simplifies movement between the spaceship's lobbies, rooms, and activities, since movement to and from the elevator is simply forward and backward; in the old prototype, users moved widely around the 3 m × 3 m test area.
3.2 Enhanced Application Components
Visual Elements. One of the highlighted changes in the current prototype is the inclusion of a visual instruction and demonstration for each of the activities. This helps users understand each activity well, and they can see how each activity runs while performing it.

Auditory Elements. All participants appreciated the improved auditory guides and instructions in every activity. They highlighted that these voice-overs are very helpful in letting every patient properly perform each activity and achieve the targeted goal. They also appreciated the presence of ambience and background music in each of the games, which can give the patient a feeling of presence and 'realness' in outer space and the spaceship.

3.3 User Safety
In terms of safety, most of the participants raised a concern about the use of the treadmill, since users will be wearing an HMD while on it. They recommended that an assisting therapist be present to ensure that users do not slip or fall while using the treadmill. Also, the speed of the environment's movement should be in sync with the treadmill to ensure that users do not experience motion sickness or dizziness during and after its use. Regarding the HMD, they recommended keeping every session to at most 20 min so that users are not exhausted and do not experience other motion or optical sickness.
4 Conclusion, Future Directions and Recommended Interventions
Based on this development process, it is important to gather feedback on user experience, sensory element adjustments, and safety of use. This helps get every application ready to be used not just by test users but by the target users, the children with CP. With this kind of application, it is also critical to cover all aspects of safety and the target goals for the user so that the application can maximize its purpose and effectiveness. Other tests and evaluations applicable to this kind of technology are recommended in order to gather points of improvement for the different parts and activities of the current application. The development of such technology can be studied further by looking into other possible activities that target other aspects of therapy methods which can be translated into activities inside the HMD. Collaboration with healthcare units can also strengthen the essence and high importance of developing such technology to help children living with cerebral palsy.

Acknowledgements. We would like to acknowledge the Department of Science and Technology - Philippine Council for Health Research and Development, and the University of the Philippines Manila for funding and supporting this project.
References

1. Aguila, M.E., et al.: Application design of a virtual reality therapy game for patients with cerebral palsy. In: Krouska, A., Troussas, C., Caro, J. (eds.) NiDS 2022, pp. 170–180. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-17601-2_17
2. Bailey, J.O., Bailenson, J.N., Obradović, J., Aguiar, N.R.: Virtual reality's effect on children's inhibitory control, social compliance, and sharing. J. Appl. Dev. Psychol. 64, 101052 (2019)
3. Cieza, A., Causey, K., Kamenov, K., Hanson, S.W., Chatterji, S., Vos, T.: Global estimates of the need for rehabilitation based on the Global Burden of Disease Study 2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet 396(10267), 2006–2017 (2020)
4. Hebreo, A.R., Ang-Muñoz, C.D., Abiera, J.E., Dungca, M.L., Mancao, B.D.: Profile of pediatric patients with cerebral palsy at the Department of Rehabilitation Medicine, Philippine General Hospital. Acta Medica Philippina 51(4) (2017)
5. Sharar, S.R., Carrougher, G.J., Nakamura, D., Hoffman, H.G., Blough, D.K., Patterson, D.R.: Factors influencing the efficacy of virtual reality distraction analgesia during postburn physical therapy: preliminary results from 3 ongoing studies. Arch. Phys. Med. Rehabil. 88(12), 43–49 (2007)
6. Turker, D., Korkem, D., Ozal, C., Gunel, M., Karahan, S.: The effects of neurodevelopmental (Bobath) therapy based goal directed therapy on gross motor function and functional status of children with cerebral palsy. Int. J. Ther. Rehabil. Res. 4(4), 9 (2015)
7. Weiss, P.L., Rand, D., Katz, N., Kizony, R.: Video capture virtual reality as a flexible and effective rehabilitation tool. J. Neuroeng. Rehabil. 1(1), 12 (2004)
Design Thinking (DT) and User Experience (UX) as Springboard to Teacher-Made Mobile Applications Jeraline Gumalal(B)
and Aurelio Vilbar
University of the Philippines Cebu, Cebu City 6000, Philippines [email protected]
Abstract. This paper describes the experience of nine in-service basic education teachers in learning how to design a mobile app using concepts of DT and UX as springboards. The teachers were enrolled in a 36-session training program divided into three parts: introduction, active, and application phases. Based on their accounts, the teachers were able to design a reading-practice mobile app that is interactive and intuitive by incorporating elements such as arrows, buttons, audio translation, and easy-to-find icons with specific functions. Using DT and UX as springboards, they were able to solve design problems, collaborate with evaluators and prospective target users, and apply creativity in making user-friendly human-computer interactions. The research recommends conducting continuing education programs on mobile app development to empower local developers in customizing learning apps based on their educational-technological landscape.

Keywords: user experience · design thinking · mobile app · mentoring
1 Introduction Prior to cloud and remote learning becoming the norm, teachers were already being encouraged to use more information and communication technology (ICT) in the classroom. This educational reform is cognizant of the 21st-century skills, which include ICT as a major component [1]. It is also attributed to the shift of learning preferences from face-to-face towards online and mobile learning [2]. Yet, supporting student learning at a deeper level implies that curating and producing educational tools from ICT resources requires teachers to have a systematic grasp of how ICT tools are designed and deployed [3]. Acknowledging this gap, teacher training institutions (TTIs) have aimed at retooling and increasing the technical ICT experience of teachers using methods established in the technology industry. This paper responds to the practical ICT training needs of in-service basic education teachers enrolled as graduate students in a public university in Cebu, Philippines. It further presents that by using design thinking (DT) and user experience design (UXD) as springboards in designing an educational mobile application (mobile app), teachers can practice creating customized learning tools. In learning DT and © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 215–220, 2023. https://doi.org/10.1007/978-3-031-44146-2_21
216
J. Gumalal and A. Vilbar
UX, basic education teachers were given hands-on training and a theoretical introduction to mobile application development to enhance their educational tool-making skills. Their experiences are examined thereafter. DT is frequently utilized in the creation of mobile apps, software, and systems, but it is also highly effective in other fields such as education. DT improves cognition, which can aid in training design novices [4]. In fact, when it comes to organizing learning experiences, DT may provide teachers with a framework for utilizing science and creativity to address challenges [5]. The five phases of DT are as follows: describing the problem, empathizing with the target, brainstorming ideas and drawing designs, creating a prototype, and testing the concept using input from a target group [6]. Depending on the design plan, these components can be switched back and forth. DT gained praise in education because of its constructivist approach and its association with user experience (UX). UX is fundamental to the human-computer interactions that are predominant in the use of ICT tools. It is the experience that a product, a tool, or a program creates for its users in the real world [7]. UX covers all aspects that generate an experience for the human involved. In education, UX can be contextualized as learning experience design (LXD), the holistic learning experience of the students as intended by the curriculum developers and teachers [8]. Both of these experience designs (XD) are difficult especially for new developers, but DT's flexible and cyclic structure makes both achievable. 1.1 Methodology In learning DT and UX, the nine participants underwent three phases of training: the introduction phase, active phase, and application phase (see Table 1). The participants were public and private school teachers who took part in 36 training sessions over three months. Each session comprised four learning hours.
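The cyclic, switch-back-and-forth nature of the DT phases described earlier in this section can be sketched as a simple loop. The sketch below is purely illustrative (the phase names follow the text; the loop structure, function names, and feedback callback are hypothetical, not part of the study's training materials):

```python
# Illustrative sketch of the iterative DT cycle; all identifiers are hypothetical.

DT_PHASES = [
    "describe the problem",
    "empathize with the target",
    "brainstorm ideas and draw designs",
    "create a prototype",
    "test with a target group",
]

def run_dt_cycle(design_ok, max_iterations=3):
    """Walk the DT phases, looping back until testing passes.

    `design_ok` is a callback standing in for the target group's feedback
    after the testing phase of each iteration.
    """
    history = []
    for iteration in range(1, max_iterations + 1):
        for phase in DT_PHASES:
            history.append((iteration, phase))
        if design_ok(iteration):
            return history  # prototype accepted by the target group
    return history  # iterations exhausted; revisit the plan or seek more time

# Example: feedback only approves the design on the second pass.
log = run_dt_cycle(lambda i: i >= 2)
print(log[-1])  # last entry is the testing phase of iteration 2
```

The loop-until-accepted structure mirrors the flexibility the text attributes to DT: any failed test sends the designers back through the phases rather than ending the process.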
The researchers analyzed the participants' reflections and conducted a focus group discussion to determine their experiences in learning DT and UX. In the introduction phase, the participants reviewed and learned the fundamentals of ICT use and tool design through face-to-face lectures and workshops. Because they had never created an app before, they brainstormed to share their user experiences with mobile and online apps. In the active phase, the participants acquired new skills and received hands-on instruction in design concepts, relevant languages, basic programming, and digital media manipulation. They learned the basics of creating experiences, human-computer interfaces, and DT. They also drew on the pre-activity lectures and upskilling, especially in the principles of design and media manipulation. They followed the Analysis-Design-Implementation-Deployment (ADID) scheme [9] in the app production, testing, and evaluation. In the application phase, the participants were grouped into three. As their performance task, the groups designed a mobile learning app for reading practice. They applied their knowledge and skills in UX and DT and used a vector design tool for mobile and web apps. Vector apps such as XD and Figma provide the teacher-designers an environment for drafting and designing the interactions that will happen between their mobile app
Design Thinking (DT) and User Experience (UX)
217
Table 1. The anticipated workflow of the teachers based on the learning phases.

Introduction phase (6 sessions, 24 total training hours):
1. Lectures on mobile and web apps, gamification, and human-computer interactions; lectures on DT, UXD, and LXD
2. Lectures on languages, copyrights, and information use

Active phase (6 sessions, 24 total training hours):
1. Hands-on practice of photography, graphics, and sound media
2. Hands-on practice of simple game programming, game design, and expected outcomes
3. Hands-on use of a vector design tool for mobile and web apps

Application phase (24 sessions, 96 total training hours):
1. Problem solving using DT
2. Designing a mobile app using vector software
3. Running the design as a mock-up mobile app
4. Evaluation, revision, and reevaluation
and the person using it. The apps underwent content validation by three independent evaluators, who provided constructive feedback on the apps' ease of use, visibility, smoothness, assistive nature, and the intuitiveness of the graphics and color schemes. The apps were not field-tested or piloted due to the onset of COVID-19 in early 2020 and the local lockdown restrictions.
2 Results The three groups designed reading practice apps as a practical hands-on activity. Each mobile app centers on helping the user practice reading by showing figures, playing audio guides, and providing game-based vocabulary practice. Figure 1 shows Group 1's mobile app for reading practice. This app aims to develop the vocabulary and spelling skills of the students to promote phonological awareness and reading comprehension. One of the basic problems the group needed to solve was intuitive interaction, where learners do not require spelled-out instructions to use the app and start reading practice. This was solved with links and icons that represent their functions, such as bright arrow keys, larger categorical buttons, and conspicuous read-aloud icons. Also, using the vector app, the teacher-designers were able to create a series of interactions that occur when a user clicks an icon, types a word, or hovers over a picture. The three groups created their mobile app designs based on their students' needs and contexts. They employed DT and UX concepts in all steps of the design. They also made use of the pre-activity lectures and upskilling, especially in the principles of design
Fig. 1. Example of interactive mobile app design.
and media manipulation. They used the Analysis-Design-Implementation-Deployment (ADID) scheme [9] in the app production, testing, and evaluation. As noted in the methodology, the apps underwent content validation by three independent evaluators, and field testing was precluded by the onset of COVID-19 in early 2020. Table 2 presents the participants' thoughts, including the perceived usefulness of learning how to design a mobile app, their app testing, and the hurdles they learned from.

Table 2. Participants' experiences in the mobile application design process.

Theme: DT promoted an in-depth process in solving learning problems.
Responses: "The task to visualize learning is a complex matter. It pays to research about the problem, interview, sketch up solutions, and evaluate." "The process of collaborating with my peers in solving problems and making the app is worth the hard work."

Theme: UX increased the appreciation for the technical aspects of mobile app designs that are often unseen by users.
Responses: "I feel very honored to make an app design even if I am not a programmer. I hope that when my design is given to a real programmer, they can code it easily." "The sketch-up is very specific about what is included in every panel and what will happen next, as well as where to lead your user. It is complex but satisfying."

Theme: Training programs with DT and UX are demanding but worth doing.
Responses: "It was a demanding program, but we were mentored well, and all steps were doable. I hope there was more time for each phase." "There was a real connection between the theories we learned and the application and activity that we did as a follow-up."
The mobile app designing experience allowed the participants to use DT in problem solving and in collaborating with evaluators and possible target users. This is in tune with the work of Malele and Ramaboka [6], which states that in DT, interaction with targets and informants is just as important in creating a solution. It is also notable that the teachers thought hard about how to design the learning experiences of whoever will use their application [8]. In designing the mobile app, they stated that they developed technical skills in design and programming and that the process allowed them to be creative and focused. These components are important as described in the work of Xie [4] and Hsu and Ching [10], where educators are encouraged to be creative. The teacher-designers also noted that it paid off that they had a well-established theory-to-application training format. Hsu and Ching also said that mobile app design may be taught to educators at the graduate level, implying that mobile app creation programs can be incorporated into a teacher training curriculum. If offered as a curricular course, mobile app development could be taught with a more appropriate timeframe and better computing infrastructure, a point that the participating teachers raised in the focus group discussion.
3 Conclusion The integration of DT and UX in training developed the teachers' ability to design mobile learning applications that address their students' needs. Despite having no formal computer programming background, the participants developed mobile app designs with sound interactive and technological elements that yielded positive results in the experts' evaluation. The teachers' success showed that mobile learning app development can be learned given active and collaborative hands-on instruction in basic programming and digital media manipulation. Integrating DT and UX can strengthen the iterative process in designing instructional materials to address the learning loss caused by the pandemic [11], when most resources are one-size-fits-all [12]. DT can emphasize the value of collaboration among teachers and students in describing the problem and testing the educational technology prototype. It can promote differentiated instruction that addresses the students' multimodal, multisensory, and diverse learning needs. Although this case-study training was limited to a small population, this research recommends that training institutions develop a continuing education program for teachers and lifelong learners interested in app development. The participation of more potential educational technology programmers can foster better contextualization of learning and multimodality. Since the app development followed the ADID scheme, institutions are recommended to conduct a needs assessment to customize the content and programming curriculum to the potential participants. In addition, they can use a pretest-posttest design with qualitative methods to determine the impact of the training on the participants' programming skills. Acknowledgement. This research is funded by the UP Cebu Academic Program Improvement under the Central Visayas Study Center. Assisting in the data gathering were Dr. Hazel Trapero and Mr.
Fernand Bernardez, fellow IT Education researchers.
References 1. Armfield, S.W.J., Blocher, J.M.: Global digital citizenship: providing context. TechTrends 63, 470–476 (2019). https://doi.org/10.1007/S11528-019-00381-7 2. Szymkowiak, A., Melović, B., Dabić, M., Jeganathan, K., Kundi, G.S.: Information technology and Gen Z: the role of teachers, the internet, and technology in the education of young people. Technol. Soc. 65, 101565 (2021). https://doi.org/10.1016/J.TECHSOC.2021.101565 3. Gil-Flores, J., Rodríguez-Santero, J., Torres-Gordillo, J.J.: Factors that explain the use of ICT in secondary-education classrooms: the role of teacher characteristics and school infrastructure. Comput. Human Behav. 68, 441–449 (2017). https://doi.org/10.1016/j.chb.2016.11.057 4. Xie, X.: The cognitive process of creative design: a perspective of divergent thinking. Think. Skills Creat. 101266 (2023). https://doi.org/10.1016/J.TSC.2023.101266 5. Calavia, M.B., Blanco, T., Casas, R., Dieste, B.: Making design thinking for education sustainable: training preservice teachers to address practice challenges. Think. Skills Creat. 47, 101199 (2023). https://doi.org/10.1016/J.TSC.2022.101199 6. Malele, V., Ramaboka, M.E.: The design thinking approach to students' STEAM projects. Proc. CIRP 91, 230–236 (2020). https://doi.org/10.1016/J.PROCIR.2020.03.100 7. Barnum, C.M.: Exploring the usability and UX toolkit. Usability Testing Essentials, 35–67 (2021). https://doi.org/10.1016/B978-0-12-816942-1.00002-2 8. Chang, Y.K., Kuwata, J.: Learning experience design: challenges for novice designers. Learner User Exp. Res. 1 (2020) 9. Fanfarelli, J.R., McDaniel, R., Crossley, C.: Adapting UX to the design of healthcare games and applications. Entertain. Comput. 28, 21–31 (2018). https://doi.org/10.1016/J.ENTCOM.2018.08.001 10. Hsu, Y.-C., Ching, Y.-H.: Mobile app design for teaching and learning: educators' experiences in an online graduate course. Int. Rev. Res. Open Distrib. Learn. 14, 117–139 (2013). https://doi.org/10.19173/irrodl.v14i4.1542 11. Engzell, P., Frey, A., Verhagen, M.D.: Learning loss due to school closures during the COVID-19 pandemic. Proc. Natl. Acad. Sci. U.S.A. 118 (2021). https://doi.org/10.1073/PNAS.2022376118 12. Zhao, X., Shao, M., Su, Y.S.: Effects of online learning support services on university students' learning satisfaction under the impact of COVID-19. Sustainability 14, 1–17 (2022). https://doi.org/10.3390/su141710699
The Designer-Oriented Process Analysis of Utilizing the DIZU-EVG Instrument for Educational Video Games Yavor Dankov(B) Faculty of Mathematics and Informatics, Sofia University “St. Kliment Ohridski”, Sofia, Bulgaria [email protected]
Abstract. Serious video games are popular tools for applying modern learning practices and strategies to educate learners in different knowledge and skills in a particular domain or discipline. This paper focuses on educational video games as serious games applied in education. It presents the use of the specialized Model of the Educational Video Game Designer's Perspectives, with the help of which the DIZU-EVG (Data visualIZation instrUment for Educational Video Games) tool for visualizing gaming and learning results from educational video games is further developed. Using the model of the designer's perspectives, this paper develops and presents the Designer-Oriented Process Model of Utilizing the DIZU-EVG Instrument by designers of educational video games. The new model describes in detail the designer-oriented process by which designers use the tool's functionalities to visualize gaming and learning results. The presented model will contribute to the future development and improvement of the tool. With the DIZU-EVG tool, designers can take advantage of the instrument's purposefully designed functionalities to visualize data about their educational video games and about learners and players. Using the visualized data on the dashboards, designers can visually analyze all available information and plan meaningful and timely strategic solutions. Keywords: Educational video games · Serious games · Game-based learning · DIZU-EVG instrument · Designer's Perspectives
1 Introduction Serious video games are popular tools for applying modern learning practices and strategies to educate learners on different knowledge and skills in a particular domain or discipline. The application of game-based learning is among the main features of serious video games. Serious video games use game-based learning as an up-to-date and modern means of communication with users in domains such as science, education, healthcare, defense, engineering, etc. [1–5]. Through serious video games, users perceive specific knowledge and acquire skills through playing video games, which are characteristic and © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 221–229, 2023. https://doi.org/10.1007/978-3-031-44146-2_22
222
Y. Dankov
up-to-date for modern society [6–8]. The success and benefits of using serious games in users' learning and practical lessons have been proven in many scientific and research papers and experiments [5, 9, 10]. Educational video games are a particular type of serious game: serious games applied in education. Because of the application field of educational video games (academic disciplines and subject areas), their designers are, in most cases, specialists in various academic disciplines with experience and knowledge in different educational subject areas. Usually, educators and teachers are directly tied to the domain they teach. They are specialists in the relevant academic subject area and educate a particular group of learners. In this regard, educators and teachers often are not specialists in information technology and are unfamiliar with, or have no experience in, designing and creating educational video games. In these circumstances, one of the main challenges for educational video game designers is understanding, analyzing, and evaluating the processes of designing and creating specialized educational video games. One aspect to consider is that the design and creation of educational video games for a particular user group require information technology specialists to participate in the game's programming and to deliver the final software product to the learners and players for use (play). Another aspect is that designing educational video games requires a thorough knowledge of the education domain for which the game is intended. The didactic content integrated into an educational video game results from the specialized knowledge of the game designer, who considers the target group of learners and players for whom the game is intended [11].
Therefore, the designer must thoroughly know the education domain to select the most appropriate and essential content to integrate into the game. The designer must also know how the game can be developed and programmed to reach users as a final software product so they can play it. Finally, the designer must be able to analyze the game and assess whether the educational video game they have designed and created is valuable and enjoyable for learners and players and leads to good results. All this indicates that the design and creation of educational video games often require a team of different designers and specialists, in which a particular person is responsible for each activity of the overall process. A possible solution to this challenge is to develop platforms for designing and creating educational video games that provide different design and creation capabilities through interactive interfaces and tools [12]. An example of a popular specialized platform for the design and automatic generation of educational video games is the APOGEE platform [13]. This platform enables people who are not specialists in information technology to design educational maze video games using specialized supporting tools [14]. These integrated tools help designers design, develop, create, and evaluate games [15, 16]. After the designer has designed the game, the platform automatically generates an educational maze video game without involving developers and programmers in the design and creation of the educational video game. The author of this paper is part of the team who designed and developed the platform and the integrated tools [12, 15, 16].
The Designer-Oriented Process Analysis of Utilizing the DIZU-EVG Instrument
223
This paper focuses on the development and improvement of the DIZU-EVG (Data visualIZation instrUment for Educational Video Games) tool for visualizing gaming and learning results from educational video games that are automatically generated in the APOGEE platform [13]. To this end, this paper uses the specialized Model of the Educational Video Game Designer's Perspectives [17] to design the process by which educational video game designers use the functionality of the DIZU-EVG instrument. The Model of the Educational Video Game Designer's Perspectives is presented in detail in a previous study by the author of this paper [17]. The model generally presents the semantics of the designer in the various roles that the designer may adopt in designing, creating, and evaluating educational video games. The description of the DIZU-EVG instrument and its functionalities is presented in the author's previous studies [18–20]. Among the main goals of the DIZU-EVG tool is to visualize gaming and learning results from the designed and played educational video games with the help of personalized dashboards [19]. The main functionalities of the DIZU-EVG tool regarding the visualization of gaming and learning results are Visualize Game Designer Dashboard, Visualize Player Dashboard, and Visualize Learner Dashboard [19]. The DIZU-EVG instrument provides the user with a specialized dashboard that visualizes various gaming and learning data for the respective user, according to the user's role and the selected functionality of the tool [19]. In addition, the Game Designer Dashboard functionality provides visualization of overview data regarding the players, learners, and educational video games designed by the designer [19].
Therefore, this paper uses the Model of the Educational Video Game Designer's Perspectives to develop and present the Designer-Oriented Process Model of Utilizing the DIZU-EVG Instrument. The perspectives model underpins the proposed new analysis model and informs the development of the DIZU-EVG instrument. The new analysis model focuses precisely on the designer and the designer's ability to use the designed tool features; it will therefore also contribute to the future development and improvement of the tool. With the help of the DIZU-EVG tool, designers will have the opportunity to take advantage of the purposefully designed functionalities of the instrument to visualize data about their educational video games and about learners and players. Using the visualized data on the dashboards, designers can visually analyze all available information and plan meaningful and timely strategic solutions. The paper continues with Sect. 2, which describes using the Model of the Educational Video Game Designer's Perspectives for developing the DIZU-EVG Instrument. Section 3 presents the newly developed designer-oriented process model of utilizing the DIZU-EVG instrument. Section 4 presents the conclusion and future work.
2 Using the Model of the Educational Video Game Designer's Perspectives for the Development of the DIZU-EVG Instrument This paper presents the use of the specialized Model of the Educational Video Game Designer's Perspectives [17], with the help of which the DIZU-EVG tool for visualizing gaming and learning results from educational video games automatically generated in the APOGEE software platform is developed and improved. Using
the three designer's perspectives in this paper supports the design and development of the DIZU-EVG instrument and its analysis. The specialized Model of the Educational Video Game Designer's Perspectives generally presents the designer's semantics in the various roles they can assume in designing, creating, and improving educational video games. The author's previous study described the semantics of each of the designer's perspectives presented in the model [17]. These perspectives also determine the software tools used in designing, creating, and improving educational video games, and a purposeful design process focused on the designer is required for their effective and efficient work. 2.1 Using the First Perspective from the Model Focusing on the first perspective of the model, Perspective I - Game Creator [17], the DIZU-EVG tool must provide designers with a complete set of visualizations of gaming and learning results for the educational video games designed by the respective designer. The semantics of this perspective determine the designer's role as the educational video game's creator, who makes all the design decisions about the game, such as its purpose and topic, what educational and play content will be integrated, etc. Therefore, personalized designer dashboards must be available for game creators' use and must provide the information designers need for their work. On the other hand, the DIZU-EVG instrument must be integrated into the APOGEE software platform (or another similar software platform) to use the platform's available systems to profile the respective user in their role as a designer/creator of educational video games. Thus, this user (in their role as a designer) will have access to the functionality of the DIZU-EVG tool for visualizing a dashboard especially designed for educational video game designers within the software platform.
2.2 Using the Second Perspective from the Model Focusing on the second perspective of the model, Perspective II - Learning Content Tester [17], the DIZU-EVG tool must be designed to provide functionality to visualize the results of the learners who have played the educational video game developed by a particular designer. The semantics of this perspective define the designer's role as a tester of the didactic content integrated into the game. In this regard, the DIZU-EVG tool must give designers/creators access to functionality for visualizing valuable data regarding these learners. This functionality, and the process of using it, must be designed and developed to provide designers seamless access to the learners' personalized dashboards. In this way, designers will be able to visually analyze the information about the video games played by the learners, plan the necessary actions for testing and evaluating the learning content, and draw up a plan to improve the design of the educational video game. 2.3 Using the Third Perspective from the Model Focusing on the third perspective of the model, Perspective III - Gaming Content Tester [17], the DIZU-EVG tool must again provide visualization techniques for gaming
results about players who have played the respective educational video game. The semantics of this perspective determine the designer's third role: a tester of the integrated gaming content [17]. Therefore, this functionality, and the process of using it, must be designed to give designers the necessary access to the tool's functionality for visualizing results about players. Thus, through the information visualized by the DIZU-EVG instrument, the designer will have visual analysis opportunities and the ability to evaluate the gaming content integrated and tested in the game.
3 The Proposed Designer-Oriented Process Model Based on the application of the Model of the Educational Video Game Designer's Perspectives, the detailed Designer-Oriented Process Model of Utilizing the DIZU-EVG Instrument is presented in Fig. 1 in this section. The model focuses precisely on the designer and the processes by which designers use the designed tool features. It was developed by following the semantics and the importance of the three designer's perspectives described in the previous section. In the role of creator of educational video games (Perspective I from the Model of the Educational Video Game Designer's Perspectives), the designer can take advantage of the wide range of DIZU-EVG instrument functionalities. The presented designer-oriented process model is designed to allow the designer to use the essential functionalities of the tool to visualize the results through personalized dashboards. It is important to note that the designer only has access to the visualization of the results for the educational video games they have designed and to the results for the learners and players who have played these games. The process of using the functionalities of the DIZU-EVG instrument by the users (learners and players) is presented in detail in a previous study by the author of this paper [20]. A specific prerequisite is therefore needed to use the tool: the designer must have already created an educational maze video game on the APOGEE software platform. The learners and the players then have to play this game and learn the didactic content integrated into it. As a result, data from the game sessions of these users are generated and stored within the platform. The data are processed using analytical instruments, and summary data and statistics for the relevant played game are generated.
These data are stored in the database and used for visualization through the DIZU-EVG instrument. If these requirements are met, the designer can take full advantage of the DIZU-EVG tool to visualize the data about gaming and learning results through a personalized dashboard. Using the DIZU-EVG tool, the designer can utilize the three essential functionalities of the software tool, namely the Visualize Game Designer Dashboard, Visualize Player Dashboard, and Visualize Learner Dashboard functionalities [18–20], illustrated in the presented model in Fig. 1. Since the creator of the educational video game intends to use the DIZU-EVG instrument to visualize gaming and learning results (given that the data from the game results are available), the designer must first be registered and possess a designer profile on the APOGEE platform in order to take advantage of all available tool functionalities [18–20].
Fig. 1. Designer-Oriented Process Model of Utilizing the DIZU-EVG instrument
The designer in the role of the creator of the educational video games (Perspective I from the Model of the Educational Video Game Designer's Perspectives) can visualize data through the Visualize Game Designer Dashboard functionality [19], which is accessible only to designers/creators. By accessing this functionality, the designers can visualize statistics on their designed games through a dashboard personalized for them. The designers can visually analyze statistics, summarize data on the games they developed and created, and review data for each game about the time users spend playing it. The designer can thus use all the visualized information, draw insights and conclusions, and plan the necessary design solutions to improve the game (Perspective I of the model). The above also informs the design of the presented Designer-Oriented Process Model of Utilizing the DIZU-EVG Instrument, depicted in Fig. 1. In addition, the tool provides opportunities to configure the designer's dashboard according to the designer's preferences. After completing this work, the designer can use all other tool functionalities illustrated in the model in Fig. 1. Among the fundamental functionalities of the DIZU-EVG instrument is that it allows designers to visualize personalized dashboards for all users (players and learners) through the View Learners Profile Dashboards functionality and the View Players Dashboards functionality [19, 20]. These features are designed based on the use of Perspective II and Perspective III from the Model of the Educational Video Game Designer's Perspectives.
Designers will therefore have the opportunity to use the DIZU-EVG tool to visualize results for the gaming and educational content tested by the users or by the designers themselves (in their role as testers of gaming or learning content, Perspective II or III of the Model of the Educational Video Game Designer’s Perspectives). Designers can thus visually analyze all available data about the players and learners who have played their designed educational video games. Based on this analysis, designers can form a timely and justified assessment of whether the integrated content is the most suitable for the respective users and gain knowledge of the playing and learning experience of those users (players and learners) in the game.
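As a rough illustration of the kind of per-game aggregation such a designer dashboard presents, the sketch below assumes a hypothetical record layout; the field names, game titles, and session data are invented for illustration and are not the actual DIZU-EVG or APOGEE schema.

```python
from collections import defaultdict

# Hypothetical gameplay records, as a visualization tool might read them
# from a database of game results (invented schema, not DIZU-EVG's).
sessions = [
    {"game": "Maze of History", "player": "p1", "minutes": 25, "score": 80},
    {"game": "Maze of History", "player": "p2", "minutes": 40, "score": 65},
    {"game": "Math Quest", "player": "p1", "minutes": 15, "score": 90},
]

def designer_summary(records):
    """Aggregate per-game totals for a designer dashboard:
    total play time and mean learning score per game."""
    stats = defaultdict(lambda: {"minutes": 0, "scores": []})
    for r in records:
        stats[r["game"]]["minutes"] += r["minutes"]
        stats[r["game"]]["scores"].append(r["score"])
    return {g: {"total_minutes": s["minutes"],
                "avg_score": sum(s["scores"]) / len(s["scores"])}
            for g, s in stats.items()}

print(designer_summary(sessions))
```

The dashboard described above would render such aggregates visually; the point here is only the shape of the aggregation, not the presentation layer.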
4 Conclusion and Future Work

This paper presented the use of the specialized Model of the Educational Video Game Designer’s Perspectives, with the help of which the DIZU-EVG tool for visualizing gaming and learning results from educational video games is further developed. Using the model of the designer’s perspectives, this paper has developed and presented the Designer-Oriented Process Model of Utilizing the DIZU-EVG Instrument by designers of educational video games. The model (Fig. 1) describes in detail the designer-oriented process analysis of how designers can use the tool’s functionalities for the visualization of gaming and learning results. The presented model will contribute to the future development and improvement of the tool. With the help of the DIZU-EVG tool, designers can take advantage of its purposefully designed functionalities for visualizing the data for their educational video games and the data about learners and players. Using the visualized data on the dashboards, designers can visually analyze all available information and plan meaningful and timely strategic solutions.
Future work will include additional studies on the DIZU-EVG instrument and analyses of the designers’ opinions. The development and improvement of the DIZU-EVG instrument for visualizing gaming and learning results from educational video games have also been planned for future work. Acknowledgements. This research is supported by the Bulgarian Ministry of Education and Science under the National Program “Young Scientists and Postdoctoral Students – 2”.
References
1. O’Connor, S., et al.: SCIPS: a serious game using a guidance mechanic to scaffold effective training for cyber security. Inf. Sci. 580, 524–540 (2021). https://doi.org/10.1016/j.ins.2021.08.098
2. Kara, N.: A systematic review of the use of serious games in science education. Contemp. Educ. Technol. 13(2), ep295 (2021). https://doi.org/10.30935/cedtech/9608
3. Rotter, E., Achenbach, P., Ziegler, B., Göbel, S.: Finding appropriate serious games in vocational education and training: a conceptual approach. In: Costa, C. (ed.) Proceedings of the 16th European Conference on Games Based Learning, vol. 16, no. 1, pp. 473–481 (2022). https://doi.org/10.34190/ecgbl.16.1.577
4. Tan, C., Nurul-Asna, H.: Serious games for environmental education. Integr. Conserv. 2(1), 19–42 (2023). https://doi.org/10.1002/inc3.18
5. Damaševičius, R., Maskeliūnas, R., Blažauskas, T.: Serious games and gamification in healthcare: a meta-review. Information 14(2), 105 (2023). https://doi.org/10.3390/info14020105
6. Toader, C.-S., et al.: Exploring students’ opinion towards integration of learning games in higher education subjects and improved soft skills—A comparative study in Poland and Romania. Sustainability 15(10), 7969 (2023). https://doi.org/10.3390/su15107969
7. Gauthier, A., et al.: Redesigning learning games for different learning contexts: applying a serious game design framework to redesign Stop & Think. Int. J. Child-Comput. Interact. 33, 100503 (2022). https://doi.org/10.1016/j.ijcci.2022.100503
8. Lameras, P., Arnab, S., Dunwell, I., Stewart, C., Clarke, S., Petridis, P.: Essential features of serious games design in higher education: linking learning attributes to game mechanics. Br. J. Educ. Technol. 48(4), 972–994 (2017). https://doi.org/10.1111/bjet.12467
9. Laamarti, F., Eid, M., Saddik, A.: An overview of serious games. Int. J. Comput. Games Technol. 2014, 1–15 (2014). https://doi.org/10.1155/2014/358152
10. Engström, H., Backlund, P.: Serious games design knowledge - experiences from a decade (+) of serious games development. EAI Endorsed Trans. Serious Games 6, 170008 (2021). https://doi.org/10.4108/eai.27-5-2021.170008
11. Bontchev, B., Terzieva, V., Paunova-Hubenova, E.: Personalization of serious games for learning. Interact. Technol. Smart Educ. 18(1), 50–68 (2021). https://doi.org/10.1108/ITSE-05-2020-0069
12. Terzieva, V., Paunova, E., Bontchev, B., Vassileva, D.: Teachers need platforms for construction of educational video games. In: EDULEARN Proceedings. International Academy of Technology, Education and Development (2018). https://doi.org/10.21125/edulearn.2018.1922
13. Bontchev, B., Vassileva, D., Dankov, Y.: The APOGEE software platform for construction of rich maze video games for education. In: Proceedings of the 14th International Conference on Software Technologies (ICSOFT 2019), pp. 491–498. SCITEPRESS - Science and Technology Publications, Lda, Setubal, PRT (2019). https://doi.org/10.5220/0007930404910498
14. Dankov, Y., Bontchev, B.: Towards a taxonomy of instruments for facilitated design and evaluation of video games for education. In: Proceedings of the 21st International Conference on Computer Systems and Technologies (CompSysTech 2020), pp. 285–292. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3407982.3408010
15. Dankov, Y., Bontchev, B., Terzieva, V.: Design and creation of educational video games using assistive software instruments. In: Ahram, T.Z., Karwowski, W., Kalra, J. (eds.) AHFE 2021. LNNS, vol. 271, pp. 341–349. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80624-8_42
16. Dankov, Y., Bontchev, B.: Software instruments for management of the design of educational video games. In: Ahram, T., Taiar, R., Groff, F. (eds.) IHIET-AI 2021. AISC, vol. 1378, pp. 414–421. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-74009-2_53
17. Dankov, Y.: The game designer’s perspectives and the DIZU-EVG instrument for educational video games. In: Novel & Intelligent Digital Systems: Proceedings of the 3rd International Conference (NiDS 2023). LNNS. Springer, Cham (2023)
18. Dankov, Y.: Conceptual model of a data visualization instrument for educational video games. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds.) ISDA 2023. LNNS, vol. 717, pp. 301–309. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-35510-3_29
19. Dankov, Y.: DIZU-EVG – an instrument for visualization of data from educational video games. In: Silhavy, R., Silhavy, P. (eds.) CSOC 2023. LNNS, vol. 722, pp. 769–778. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-35311-6_73
20. Dankov, Y.: User-oriented process analysis of using the DIZU-EVG instrument for educational video games. In: Silhavy, R., Silhavy, P. (eds.) CSOC 2023. LNNS, vol. 723, pp. 684–693. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-35317-8_61
A State-of-the-Art Review of the Mutation Analysis Technique for Testing Multi-agent Systems

Soufiene Boukeloul, Nour El Houda Dehimi(B), and Makhlouf Derdour

Computer Science Department, LIAOA Laboratory, University of Oum El Bouaghi, Oum El Bouaghi, Algeria
{soufiene.boukelloul,dehimi.nourelhouda,derdour.makhlouf}@univ-oeb.dz

Abstract. Testing systems developed using the agent paradigm is an important task in the quality assurance process. In this short paper, we give a state-of-the-art overview of the research that has been suggested for testing multi-agent systems (MAS). The aim is to eventually provide a new test method for multi-agent systems using the method of mutation analysis, as well as to provide software and efficient test tools that enable the automation of testing activities for these systems.

Keywords: Multi-Agent Systems · System Level Testing · Mutation analysis-based test
1 Introduction

Multi-agent system (MAS) testing is a crucial step in the MAS quality assurance process. Despite the rapid development of MAS, this activity remains an underexplored area, as few proposals are available in the literature. Mutation analysis is a testing technique used for validating and improving test data. It is based on fault injection (different types of programming errors), and the effectiveness of a test set is measured by how precisely it identifies the number and location of injected errors in the software. The first step of this technique is to create a set of erroneous versions of the program under test, called mutants (exact copies of the program under test in which a single simple error has been injected) [7], generated automatically from the definition of a set of mutation operators, and then to run a set of test cases on each of these faulty programs. The result of executing the test cases on the mutants makes it possible, on the one hand, to evaluate the effectiveness of the test set by the proportion of mutants detected [8]; on the other hand, analyzing the errors that were not detected helps guide the generation of new test cases, by covering the areas where these errors are located and generating specific test data to cover the particular cases. The advantages of this technique encourage us to apply it to MAS testing, which motivates this work. In the following, we present the test phases based on mutation analysis, and we then review work on the testing of multi-agent systems in order to identify the work that uses mutation analysis. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 230–235, 2023. https://doi.org/10.1007/978-3-031-44146-2_23
2 Phases of Mutation Analysis

The mutation testing process, presented in Fig. 1, can be explained simply through the following steps [15]:

• Suppose we have a program P and a set of test cases T.
• Produce a mutant P1 from P by inserting a single semantic fault into P.
• Run T on P and P1 and save the results as R and R1.
• Compare R1 with R:
  – If R1 ≠ R: the set of test cases T detected the inserted fault and killed the mutant.
  – If R1 = R: there can be two reasons:
    • the set of test cases T could not detect the fault, so T must be improved; or
    • the mutant has the same semantic meaning as the original program; such mutants are called equivalent mutants.
Fig. 1. Mutation analysis process for a unit or system
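The steps above can be sketched in a few lines of Python. This is an illustrative toy, not tied to any particular mutation tool: the program under test, the mutants, and the test cases are all invented for the example.

```python
# Toy illustration of the mutation-analysis loop described above.
# P is the program under test; mutants are copies of P with one injected fault.

def P(x, y):
    return x + y  # original program

def P1(x, y):
    return x - y  # mutant: '+' mutated to '-' (arithmetic operator replacement)

def P2(x, y):
    return y + x  # equivalent mutant: semantically identical to P

test_cases = [(1, 2), (3, 4)]  # the test set T

def killed(original, mutant, tests):
    """A mutant is killed if some test case yields a different result (R1 != R)."""
    return any(original(*t) != mutant(*t) for t in tests)

print(killed(P, P1, test_cases))  # True: T detects the injected fault
print(killed(P, P2, test_cases))  # False: equivalent mutant, no test can kill it
```

Surviving non-equivalent mutants, as the text notes, point to areas of the program for which new, more specific test data should be generated.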
The mutation analysis technique makes it possible to evaluate the set of test cases T by a mutation score (MS). This score lies between 0 and 1 and is calculated by the following formula:

MS = (number of mutants killed) / (total number of mutants − equivalent mutants)
S. Boukeloul et al.
• For a set of test cases, a low score means that the majority of the injected faults were not detected by this set of test cases, whereas a higher score indicates that most of the injected faults were identified by that particular test set.
• When MS = 0, none of the test cases could kill any mutant; when MS = 1, all non-equivalent mutants were killed, i.e., the mutants were very easy to kill.
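Under this formula, the score can be computed directly; the mutant counts below are made up for illustration.

```python
def mutation_score(killed, total, equivalent):
    """MS = killed / (total - equivalent); ranges from 0 (nothing detected) to 1."""
    return killed / (total - equivalent)

# Hypothetical run: 40 mutants generated, 4 proved equivalent, 27 killed by T.
ms = mutation_score(killed=27, total=40, equivalent=4)
print(round(ms, 2))  # 0.75
```

Note that equivalent mutants are excluded from the denominator precisely because, as described above, no test case can ever kill them.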
3 Previous Work for MAS Testing

The following table (Table 1) classifies, by test level (unit test, agent test, and system test), the existing works on the testing of multi-agent systems; the table also identifies the works that use mutation analysis:

Table 1. Classification of the previous work (criteria: unit test, agent test, system test, and use of mutation analysis). Works classified: [19] Zhang, Z. et al. (2008); [20] Zhang, Z. et al. (2009); [18] Thaler, J., Siebers, P.O. (2019); [9] Ekinci, E.E. et al. (2008); [13] Nguyen, C.D. et al. (2010); [17] Savarimuthu, S., Winikoff, M. (2013); [2] Coelho, R. et al. (2006); [12] Lam, D.N., Barber, K.S. (2005); [1] Babac, M.B., Jevtić, D. (2014); [16] Sakellariou, I. et al. (2015); [14] Nunez, M. et al. (2005); [11] Huang, Z. et al. (2014); [3] De Wolf, T. et al. (2005); [5] Dehimi, N.E.H. et al. (2015); [4] Dehimi, N.E.H., Mokhati, F. (2019); [10] El Houda, D.N. et al. (2022); [6] Dehimi, N.E.H. et al. (2022)
According to this table, only a few approaches have been proposed for MAS testing in the recent literature. Moreover, most of these works do not utilize the mutation analysis technique; instead, they rely on techniques that are suboptimal in terms of test case generation. While these works have made considerable progress in the field of MAS testing by proposing new strategies, it will be crucial to optimize them using mutation analysis.
4 Conclusion

Despite the academic interest in testing methods based on mutation analysis, few approaches in the literature employ this method for testing MAS. As future work, we anticipate the introduction of a novel test strategy for multi-agent systems based on mutation analysis in the near to medium term.
References
1. Babac, M.B., Jevtić, D.: AgentTest: a specification language for agent-based system testing. Neurocomputing 146, 230–248 (2014)
2. Coelho, R., Kulesza, U., von Staa, A., Lucena, C.: Unit testing in multi-agent systems using mock agents and aspects. In: SELMAS 2006: Proceedings of the 2006 International Workshop on Software Engineering for Large-scale Multi-agent Systems, pp. 83–90. ACM Press, New York (2006)
3. De Wolf, T., Samaey, G., Holvoet, T.: Engineering self-organising emergent systems with simulation-based scientific analysis. In: Brueckner, S., Serugendo, D.M., Hales, D., Zambonelli, F. (eds.) Third International Workshop on Engineering Self-organising Applications, pp. 146–160. Utrecht, The Netherlands (2005)
4. Dehimi, N.E.H., Mokhati, F.: A novel test case generation approach based on AUML sequence diagram. In: International Conference on Networking and Advanced Systems (ICNAS) (2019)
5. Dehimi, N.E.H., Mokhati, F., Badri, M.: Testing HMAS-based applications: an ASPECS-based approach. Eng. Appl. Artif. Intell. 46, 232–257 (2015)
6. Dehimi, N.E.H., Benkhalef, A.H., Tolba, Z.: A novel mutation analysis-based approach for testing parallel behavioural scenarios in multi-agent systems. Electronics 11(22), 3642 (2022)
7. Derdour, M., et al.: MMSA: metamodel multimedia software architecture. Adv. Multimed. 2010, 1–17 (2010)
8. Derdour, M., et al.: Typing of adaptation connectors in MMSA approach case study: sending MMS. Int. J. Res. Rev. Comput. Sci. 1(4), 39–49 (2010)
9. Ekinci, E.E., Tiryaki, A.M., Cetin, O., Dikenelli, O.: Goal-oriented agent testing revisited. In: Luck, M., Gomez-Sanz, J.J. (eds.) AOSE 2008. LNCS, vol. 5386, pp. 85–96. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-642-01338-6_13
10. El Houda, D.N., Soufiene, B., Djaber, G.: Towards a new dynamic model-based testing approach for multi-agent systems. In: 2022 4th International Conference on Pattern Analysis and Intelligent Systems (PAIS). IEEE (2022)
11. Huang, Z., Alexander, R., Clark, J.: Mutation testing for Jason agents. In: Dalpiaz, F., Dix, J., van Riemsdijk, M.B. (eds.) EMAS 2014. LNCS, vol. 8758, pp. 309–327. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-14484-9_16
12. Lam, D.N., Barber, K.S.: Debugging agent behavior in an implemented agent system. In: Bordini, R.H., Dastani, M.M., Dix, J., El Fallah Seghrouchni, A. (eds.) PROMAS 2004. LNCS (LNAI), vol. 3346, pp. 104–125. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-32260-3_6
13. Nguyen, C.D., Perini, A., Tonella, P.: Goal-oriented testing for MASs. Int. J. Agent-Oriented Softw. Eng. 4(1), 79–109 (2010)
14. Nunez, M., Rodríguez, I., Rubio, F.: Specification and testing of autonomous agents in e-commerce systems. Softw. Test. Verification Reliab. 15(4), 211–233 (2005)
15. Offutt, A.J., Rothermel, G., Zapf, C.: An experimental evaluation of selective mutation. In: Proceedings of the 15th International Conference on Software Engineering, pp. 100–107 (1993)
16. Sakellariou, I., et al.: Stream X-machines for agent simulation test case generation. In: Duval, B., van den Herik, J., Loiseau, S., Filipe, J. (eds.) ICAART 2015. LNCS, vol. 9494, pp. 37–57. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-27947-3_3
17. Savarimuthu, S., Winikoff, M.: Mutation operators for the GOAL agent language. In: Cossentino, M., El Fallah Seghrouchni, A., Winikoff, M. (eds.) EMAS 2013. LNCS, vol. 8245, pp. 255–273. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-45343-4_14
18. Thaler, J., Siebers, P.-O.: Show me your properties: the potential of property-based testing in agent-based simulation. In: Proceedings of the 2019 Summer Simulation Conference (2019) 19. Zhang, Z., Thangarajah, J., Padgham, L.: Automated unit testing intelligent agents in PDT. In: AAMAS (Demos), pp. 1673–1674 (2008) 20. Zhang, Z., Thangarajah, J., Padgham, L.: Model based testing for agent systems. In: AAMAS, vol. 2, pp. 1333–1334 (2009)
HomeWorks: Designing Mobile Applications to Introduce a Digital Platform for the Household Services Sector

Kurt Ian Khalid I. Israel(B), Kim Bryann V. Tuico(B), Richelle Ann B. Juayong, Jaime D. L. Caro, and Jozelle C. Addawe

Service Science and Software Engineering Laboratory, University of the Philippines Diliman, Quezon City, Philippines
{kiisrael,kvtuico}@up.edu.ph
Abstract. This paper presents a novel approach to addressing the challenges encountered by service seekers and providers in the Philippines when accessing household services. Existing methods of finding skilled service providers, such as word of mouth, telephone directories, and social media, are often slow, unreliable, and lack transparency. Meanwhile, service providers struggle to reach potential customers, face a technology gap, and must rely on expensive promotional methods. To overcome these challenges, we propose developing a set of mobile applications that provides a comprehensive platform for seekers to connect with home service providers. The platform will incorporate a rating system based on customer feedback, a recommendation system, and a user-friendly mobile app for providers. Additionally, the proposed platform will integrate geolocation technologies, e-payment and cash payment transaction techniques, and a robust communication network. The primary objective of this study is to demonstrate the feasibility of an online platform that facilitates efficient and transparent transactions between service seekers and providers. Our proposed solution can potentially improve the accessibility of household services, create more opportunities for service providers, and enhance the overall quality of services offered. The results of our project can serve as a valuable reference for the development of similar platforms in other countries facing the same problems.

Keywords: household services platform · service seekers · service providers · transparent transaction · service availability
1 Introduction

In the Philippines, household and wellness services fall under the informal economy. These jobs are unregistered and insufficiently protected. In turn, informal economy workers follow highly traditional practices in performing their labor [3, 13]. These practices include marketing via word of mouth, paying for ads, posting on social sites, and using directories. The lack of platforms causes them to settle for such ineffective methods [12, 19]. Outside the informal sector, some seekers look for service providers via © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 236–246, 2023. https://doi.org/10.1007/978-3-031-44146-2_24
HomeWorks: Designing a Mobile App for Household Services
a few existing apps run by registered corporations. Some platforms discussed in Sect. 3.2 include Mr. Butler, Happy Helpers, and GoodWork.ph. These companies hire service providers as workers if they meet the required technical skills and competency [17]. Only a few service providers can enter the formal market due to cost, limited opportunities, and the standards sought by companies. In this study, these methods are observed to distinguish which problems exist in the household and wellness services sector. Individuals seeking household services face the challenges of speed and reliability, especially when looking for skilled service providers. Existing ways of seeking services include inquiring via word of mouth, using telephone or online directories, skimming through social network posts or ads, and using a handful of apps with certain limitations, as discussed in Sect. 3. However, these traditional means are extremely slow since they require manually checking multiple contacts and sources until a desired provider is found [15]. Existing web apps also struggle to offer instant services, since their web-based request forms have inconsistent processing times. Another concern relates to the reliability, quality, and consistency of a given provider's work. There is no way to check their credibility, notably for service providers not supported by sufficient platforms [16]. Currently, as explored in Sect. 3.1, the existing apps also fail to provide transparent reviews. Even though a rating system exists on these apps, the reviews are amalgamated to represent the whole, which tends to conceal certain providers with low ratings [1]. Meanwhile, service providers face issues regarding reach and the tech gap. They struggle to find customers who will avail of their services. The problem with customer reach arises from the existing methods available for them to promote their work.
The common way to expand their reach is word of mouth through former customers. Some service providers use social media to share their services, but these approaches are hit-or-miss, with no guarantee of reaching the right audience [20]. Another hindrance to digital use is the technology gap, since informal workers face barriers from poor technological skills, lack of computer education, and absence of ICT awareness. A Pew Research Center study recorded a high digital literacy differential in the Philippines among the groups constituting informal workers [5, 11]. From both perspectives, service seekers and service providers encounter concerns relating to cost and security. The problems with reach and the tech gap force service providers to spend on flyers, posters, and ads. While these publication materials can widen their reach, they are quite expensive. As elaborated in Sect. 3, another expensive option is to pay introductory rates to corporations that train providers as their employees and accept them if they meet the standards and pass interviews [6]. On the seeker's side, cost is also an issue when branded, quality services are availed from large firms. Worse, cases have been reported in parts of the country where households experienced malice from recently hired home service providers [21, 25]. Such incidents raise the issue of security; both service seekers and providers have no assurance of their safety during and after the execution of a service.
K. I. K. I. Israel et al.
In response to these concerns, this paper introduces a digital platform for Filipino home service providers and seekers that aims to address the issues experienced with both the traditional and formal methods. The proposed general platform is expected to eliminate the many concerns stemming from the tiresome process of finding the right service provider under traditional methods, as well as the limitations of formal methods. To clarify, some of the proposed app features are adopted by reviewing and improving the existing solutions detailed in Sect. 3.
2 Objectives of the Study

The study aims to address the problems experienced by service seekers and service providers. The primary objective is to develop mobile applications for providing a variety of house-related services, having the following: (1) a seeker application where customers can submit service requests, which are broadcast to nearby home service providers; (2) a rating system based on customer feedback that ultimately leads to better customer service by the service providers; (3) a recommender feature that provides a personalized list of services catered to individual customers; (4) a provider application for non-technical users where they can accept service requests from customers; (5) a platform where service providers can specify the services they offer while allowing two-way communication to settle service specifications and costs; and (6) an authenticated and authorized login module for all users by supplying the necessary credentials during registration.
Fig. 1. Issues Addressed by Existing and Proposed Methods
The first two objectives address mainly the concern of seekers. The issue with speed can be resolved by providing a platform for seekers to broadcast their request to home service providers. The rating and review system seeks to resolve the issue of reliability in terms of quality and consistency. The next two objectives deal mostly with the issues of providers. The recommender feature intends to help them reach their target audience. Moreover, service providers are bestowed with a novice-friendly mobile app with a simple yet effective interface. The remaining objectives target the problems of both seekers and providers. With the platform, providers can promote their services without
spending while seekers can negotiate service costs depending on specs. Finally, getting user credentials ensures the security of both parties during and after the completion of services. With that, the problems observed are all addressed by the primary objectives. Figure 1 illustrates what issues are addressed by existing methods and the proposed solution.
3 Review of Related Literature

3.1 Related Projects Overseas

The development of software as an online system for the household service sector is more advanced in some countries outside the Philippines. Although some applications already exist locally, the applications overseas are more developed and more established. In this section, three existing projects overseas with goals similar to this study were analyzed to adapt chosen ideas and practices that could be incorporated into the paper's proposed platform. Concerns regarding the existing systems are identified and addressed by integrating their strengths in Sect. 5. Handyman Service is an end-to-end application for service booking. The app mostly caters to employers who hire several handymen [23]. The provider's app allows admins to manage their services and handymen. In the customer's app, users can search, book, and view services. In this setup, the admin is deeply involved in the workflow as the mediator between customers and handymen [22]. Another application, focused on building and repair services, is AtDoorStep. Its proponents completely remove all mediators; the absence of a mediator reduces cost and eliminates the odds of concealing and sending workers with negative reviews. Customers can also book specific providers based on their rating thanks to the transparent feedback system [1]. This approach pushes workers to provide quality service. The Apian mobile app provides household services instantly by using GPS to offer the nearest service provider upon a service request and then notifying the user via a short message service [2].

3.2 Existing Apps in the Philippines

Having shown that foreign countries have already established various digital platforms for home services, this section explores some initiatives in the Philippines.
Previously, in the objectives, it was noted that existing methods such as web apps and mobile apps address some of the issues experienced in the home services sector; however, they fail to solve others. In this section, three existing applications offering household services were examined to find out how they address some of the issues stated in Sect. 1. In Sect. 5, a number of features adopted from the reviewed projects are highlighted. The GoodWork.ph mobile application offers beauty and wellness services at customers' houses. The company only employs service providers who pass its interviews. If accepted, the service providers pay introductory rates for training [6]. The workers are then given verified accounts for their partner app. The UI/UX of the partner app is comparable to FoodPanda's Rider app since both applications have simple
interfaces and limited user features such as receiving request alerts, accepting job services, and changing service status [4, 7]. These apps are designed to cater to non-technical users. Screenshots of the aforementioned applications are shown in Fig. 6 in the appendix. Aside from mobile applications, most home service provider companies have developed their own web apps where customers can see their data, view their offers, and send request forms. Two web apps in the country were reviewed in this section, namely Mr. Butler and Happy Helpers. They have identical workflows in which seekers submit their request and contact details in the web app. Assigned staff later check and process the requests. The customers wait for the company to contact them and settle service costs and specifications, such as materials already provided, scope of commission, etc. Essentially, such bargaining reduces the cost of services, unlike the other apps that have fixed, expensive costs [10, 18]. However, compared to existing mobile apps, these web apps do not provide instant services since their teams have indeterminate processing times.
Fig. 2. Theoretical and Conceptual Framework
4 Theoretical and Conceptual Framework

The framework provided in Fig. 2 illustrates how the reviewed concepts and techniques lead to solving the problems and achieving the objectives. The framework has four main sections, namely before development, app development, testing, and completed platform. Before development, the three areas considered were objectives, equipment, and considerations. The objectives aim to address the problems of speed, reliability, reach, tech gap, cost, and security [5, 15, 16]. In line with ITA's study, the project intends to develop a platform using smartphones as equipment [14].
HomeWorks: Designing a Mobile App for Household Services
241
For the mobile app development, the structure, inspired by the Handyman Service app, is composed of the seeker app and the provider app [22]. Similar to the Apian app, the seeker app aims to provide instant services to users, while, comparable to GoodWork.ph, the provider app is designed with a simple UI/UX [2]. The intended rating system, adopted from AtDoorStep, will show transparent reviews and unconcealed feedback [1]. In addition, the platform is to implement a chat where transaction participants can settle service specifications and costs [10, 17]. For the recommender feature, a hybrid approach will be employed, as indicated in Verma's guide to building such systems [26]. Lastly, different models and strategies are observed in payment handling. For the offered services, the app is to create a custom-made list of limited services that fits the local market. The developed mobile platform will undergo evaluation and analysis through the SaaS Testing Methodology, which executes planned tests under three main testing categories. First, the application is tested via black-box testing for functional tests. Then, integration testing is conducted through multiple test cases, scenarios, and test scripts. Lastly, in acceptance testing, a survey form is given to the sample population to gauge user satisfaction [8, 9, 24]. Overall, these constitute the testing phase of the study. Upon garnering acceptable results after the analysis, the developed platform will serve as the completed mobile application for this study.
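A hybrid recommender of the kind referenced here typically blends a content-based signal with a collaborative one. The minimal sketch below shows one way such a blend could look; all weights, tags, service names, and ratings are invented for illustration and do not reflect the platform's actual design.

```python
def content_score(user_tags, service_tags):
    """Content-based signal: overlap between a user's past service tags
    and a candidate service's tags, normalized to 0..1."""
    return len(user_tags & service_tags) / max(len(service_tags), 1)

def collaborative_score(similar_ratings):
    """Collaborative signal: mean rating from similar users on a 5-star
    scale, rescaled to 0..1 (0 if no similar users rated the service)."""
    if not similar_ratings:
        return 0.0
    return sum(similar_ratings) / (5 * len(similar_ratings))

def hybrid_score(user_tags, service_tags, similar_ratings, w_content=0.5):
    """Weighted blend of the two signals; w_content is a tunable weight."""
    return (w_content * content_score(user_tags, service_tags)
            + (1 - w_content) * collaborative_score(similar_ratings))

# Hypothetical seeker who previously booked cleaning-related services.
user = {"cleaning", "laundry"}
candidates = {
    "deep cleaning": ({"cleaning"}, [5, 4]),
    "plumbing": ({"repair", "pipes"}, [4]),
}
ranked = sorted(candidates,
                key=lambda s: hybrid_score(user, *candidates[s]),
                reverse=True)
print(ranked)  # ['deep cleaning', 'plumbing']
```

The weight `w_content` would be tuned during the testing phase; a pure content-based or pure collaborative recommender is recovered at the extremes of 1 and 0.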
5 Proposed Software Design

For this study, the system to be implemented adopts the structure of the Handyman Service app, where two mobile apps are developed, one for providers and another for seekers. The provider's app caters only to providers, making admins less involved in the workflow. Here, admins no longer assign a service provider to certain user requests. Instead, available service providers are instantly offered to users upon request by broadcasting the request to nearby service providers, which follows the Apian app design. This addresses the issue of speed and removes the single point of failure in the form of the admin. Regardless, after receiving an offer, users are given enough time to check the provider and chat with them to agree on the service specifications. Regarding AtDoorStep's rating system, the proposed system would also include a transparent rating system, which confronts the concern regarding reliability. But rather than punishing service providers with low ratings, the proposed system aims to incentivize service providers with high ratings. One incentive is making high-rated nearby providers appear more often in the recommender feature of the proposed app, which addresses the issue of reach. The proposed mobile application also adopts some features of the aforementioned local applications. The provider application is to be designed as a simple portal with a minimalistic interface and limited features, yet encompassing all use cases. This approach, taken from GoodWork.ph and FoodPanda, is a way to deal with the tech-gap concerns experienced by service providers. Further, the bargaining system from Mr. Butler and Happy Helpers is implemented through chat to address cost issues. Lastly, these applications have their own terms and conditions surrounding their providers and customers. The proposed app would also provide terms and conditions to protect the users.
This system, along with the registration module seeking necessary credentials, would ensure the app’s security.
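The broadcast step described above, instantly offering a request to nearby available providers, can be illustrated with a simple geolocation filter. This is a minimal sketch under assumed data shapes (provider records with latitude, longitude, and availability), not the system's actual implementation:

```python
import math

# Illustrative sketch of the broadcast step: a seeker's request is offered to
# all available providers within a radius, using the haversine great-circle
# distance between geolocation fixes. Radius and record fields are assumptions.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def broadcast_targets(request_loc, providers, radius_km=5.0):
    """Return available providers close enough to receive the request alert."""
    lat, lon = request_loc
    return [p for p in providers
            if p["available"] and haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]

providers = [
    {"id": 1, "lat": 14.5995, "lon": 120.9842, "available": True},   # Manila
    {"id": 2, "lat": 14.6760, "lon": 121.0437, "available": True},   # ~10 km away
    {"id": 3, "lat": 14.5995, "lon": 120.9842, "available": False},  # offline
]
near = broadcast_targets((14.5995, 120.9842), providers, radius_km=5.0)
```

Only provider 1 is both available and within the radius, so only it would receive the request alert.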
242
K. I. K. I. Israel et al.
5.1 Solution Design

The user workflow in Fig. 3a describes the booking process of the service seeker, which covers three phases, namely the request phase, the match phase, and the serve phase. The black arrows show the primary flow of the booking process when all desired conditions are satisfied; following this flow gives a straightforward depiction of the booking procedure. The request phase is motivated by Mr. Butler's approach to accepting requests from customers. The match phase combines the Apian app's notion of offering instant services with Mr. Butler's idea of negotiating with customers [2]. Finally, the workflow employs AtDoorStep's tactic for the feedback system, where users can cancel requests before the serve phase [1]. Meanwhile, the purple arrows point to alternative actions occurring when the sought conditions fail. During the request phase and match phase, the seeker can immediately cancel the request, causing an abrupt end to the transaction. A request is also cancelled when a timeout occurs while waiting for an acceptance. During contract settlement, if the parties do not agree, the flow goes back to broadcasting the request across the service providers. In the serve phase, the transaction reaches the end state only when payments are settled and services are completed.
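The phases and fallback transitions just described can be summarised as a small state machine. State and event names below are illustrative; only the transitions named in the text (cancel, timeout, and disagreement looping back to broadcasting) are modelled:

```python
from enum import Enum, auto

# A minimal state-machine sketch of the seeker workflow in Fig. 3a.
# States and events are invented labels, not the paper's identifiers.

class State(Enum):
    BROADCASTING = auto()   # request phase: request broadcast to nearby providers
    NEGOTIATING = auto()    # match phase: offer accepted, settling the contract
    SERVING = auto()        # serve phase
    DONE = auto()
    CANCELLED = auto()

TRANSITIONS = {
    (State.BROADCASTING, "offer_accepted"): State.NEGOTIATING,
    (State.BROADCASTING, "cancel"): State.CANCELLED,
    (State.BROADCASTING, "timeout"): State.CANCELLED,
    (State.NEGOTIATING, "agreed"): State.SERVING,
    (State.NEGOTIATING, "disagreed"): State.BROADCASTING,  # back to broadcasting
    (State.NEGOTIATING, "cancel"): State.CANCELLED,
    (State.SERVING, "paid_and_completed"): State.DONE,
}

def step(state, event):
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# One possible run: a disagreement loops back to broadcasting before success.
s = State.BROADCASTING
for e in ["offer_accepted", "disagreed", "offer_accepted", "agreed", "paid_and_completed"]:
    s = step(s, e)
```

The run above ends in `DONE` only after payment and completion, matching the text's end-state condition.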
Fig. 3. User Workflows for Seekers and Providers
The user workflow in Fig. 3b describes the serving process of the service provider, which comprises the standby phase, match phase, and serve phase. Similarly, the black arrows show the primary flow when all desired conditions are satisfied. During the standby phase, motivated by FoodPanda's Rider app, service providers can change their status in order to accept request notifications [4]. The match phase, where service requests are sent to nearby service providers, is taken from the Apian app's approach [2]. For the serve phase, the procedure where providers change their status and mark the completion of the request is taken from the Handyman Service app [22]. The purple arrows are taken when the sought conditions fail. Upon receiving a request notification during the standby phase, the provider can reject the request, sending them back to waiting for a request alert. During the match phase, seekers can choose to cancel requests without informing the provider; in that case, providers go back to waiting for service requests. This method, where seekers can cancel new requests, is an implementation done
by AtDoorStep [1]. Another condition that leads back to waiting is when the parties have disagreed. In the serve phase, the transaction ends only when payments are settled and services are completed.

5.2 Interaction Design

The use case diagram in Fig. 4 illustrates the functionalities of the system. The proposed platform separates actors into three roles: seeker, provider, and admin. The use cases of seekers include booking a service, searching services, rating a service, and making payments. For booking services, the seeker can choose between instant booking and scheduled booking. The seeker can also pay via cash or through electronic payments. Conversely, the use cases of a provider consist of accepting a booking, viewing a client, and marking a service as done. Further, the use cases shared by both seekers and providers are viewing pages, managing a profile, chatting, registering, and logging in. However, the seeker's app presents different page content from the provider's app. Lastly, the use cases of admins include managing users, logging in, and viewing all bookings. The admin can also authenticate, verify, and delete users.
Fig. 4. Use-Case Diagram
5.3 Interface Design

The interface designs for the seeker app and the provider app differ from each other yet use mostly similar components and colors, as shown in Figs. 5a and b. The provider app has fewer elements and pages compared to the seeker app. It has two tabs in the navigation bar, namely the homepage and options. The homepage changes depending on the provider's state. On this page, the provider can press ready, see requests, view service details, accept requests, and change the status of services. These features do not all appear at once, to avoid overwhelming users. Under options, users can manage their profile, view history, see reviews, change the language, ask the admin, and log out. Aside from the limited functionality, the user interface also has a minimalistic design and a simple layout.
Fig. 5. Mockups for Seekers and Providers Apps
6 Evaluation

This section discusses how the proposed solution addresses the concerns mentioned in Sect. 1. With regard to speed, broadcasting the request is a proven method already used by well-known apps such as Grab and FoodPanda; however, its success heavily depends on the number of nearby providers participating in the app. In terms of reliability, although the feedback system addresses this concern, higher efficiency is achieved only with a greater number of total reviews; multiple reviews ultimately reduce the individual subjectivity of each review. As for cost, price negotiation results in justifiable rates, yet the parties must still observe a minimum cost, which includes a surcharge for using the app that increases service rates by some value. Lastly, the recommender feature favors providers with good ratings; as such, low-rated providers would not enjoy the benefits the platform provides for resolving reach-related issues.
7 Conclusion

In summary, this paper has provided a proposed design for mobile applications that address the difficulties faced by service seekers and providers in seeking and providing household services, overcoming the limitations of existing methods. The proposed solution
includes geolocation technologies, a rating system, a recommender feature, and an authenticated login module, providing an easy and secure platform for service seekers and providers to connect, negotiate service costs, and transact. The study serves as a proof of concept for the feasibility of an online platform that service seekers and providers can use for home-related services.
A. Appendix

See Fig. 6.
Fig. 6. UI of GoodWork.ph Partner App and FoodPanda Rider App
References

1. Agrawal, K., Goel, T., Gariya, T., Saxena, V.: AtDoorStep: an innovative online application for household services. J. Xi'an Univ. Archit. Technol. 12(4), 4370–4375 (2020)
2. Bandekar, S., D'Silva, A.: Domestic android application for home services. Int. J. Comput. Appl. 148(6), 1–5 (2021)
3. DOLE: Dole extends support to SME development. Department of Labor and Employment (2007). https://old.dole.gov.ph/news/view/511
4. FoodPanda: Foodpanda rider app (2022). https://play.google.com/store/apps/details?id=com.logistics.rider.foodpanda
5. Gonzales, G.: Big divide in internet use in Philippines by age, education level – report. Rappler (2020). https://www.rappler.com/technology/256902-pew-internet-use-report-philippines-march-2020/
6. GoodWork.ph: GoodWorkPH Terms and Conditions (2020). https://www.goodwork.ph/terms-and-conditions
7. GoodWork.ph: GoodWork Partner App (2022). https://play.google.com/store/apps/details?id=com.goodwork.serviceproviderapp
8. Hamilton, T.: Integration testing: what is, types with example (2020). https://www.guru99.com/integration-testing.html
9. Hamilton, T.: What is functional testing? Types & examples (2020). https://www.guru99.com/functional-testing.html
10. Happy Helpers: Book a service or get a quote today (2020). https://happyhelpers.ph/bookings
11. Hofmann, C.: Bridging the skills gap for informal economy workers – how can skills and lifelong learning help mitigate the consequences of the crises? International Labour Organization (2020). https://www.ilo.org/skills/Whatsnew/WCMS_745308/lang--en/index.htm
12. IBON Foundation: Jobs crisis continues, informal work worsens. Ibon.org (2021). https://www.ibon.org/jobs-crisis-continues-informal-work-worsens/
13. Irena, M.: Informal workers: why their inclusion and protection are crucial to the future of work. The ASEAN: Promoting Decent Work and Protecting Informal Workers, no. 21, pp. 13–15 (2022)
14. ITA: Philippines - country commercial guide. International Trade Administration (2022). https://www.trade.gov/country-commercial-guides/philippines-ecommerce
15. Kolte, R.: Web application to find software and hardware technician for laptop and desktop. Int. J. Res. Appl. Sci. Eng. Technol. 9(5), 1309–1313 (2021). https://doi.org/10.22214/ijraset.2021.34367
16. Mordue, L.: 4 reasons why word of mouth alone is not a sustainable marketing strategy. JDR Group (2019). https://blog.jdrgroup.co.uk/digital-prosperity-blog/word-of-mouthreferral-marketing
17. Mr. Butler: About us: We professionalize blue collar workers (2018). https://mrbutler.com.ph/about/
18. Mr. Butler: Frequently asked questions (2022). https://mrbutler.com.ph/faqs/
19. Ricaldi, F.: How an app is helping Mozambique's informal service workers to earn more. World Bank Group (2021). https://blogs.worldbank.org/jobs/how-app-helping-mozambiquesinformal-service-workers-earn-more
20. Rodriguez, Y.: 5 important reasons why social media advertising is not enough. Gray Las Vegas (2020). https://www.graylasvegas.com/blog/5-important-reasons-why-social-mediaadvertising-is-not-enough
21. Sarao, Z.: DOTr exec loses jewelry, cash to thief. Inquirer (2022). https://newsinfo.inquirer.net/1538210/dotr-exec-loses-jewelry-cash-to-thief
22. Shaikh, A.: Handyman service – on-demand home services app with complete solution. Iqonic Design (2022). https://wordpress.iqonic.design/docs/product/handyman-service/
23. Shyamala, H., Rao, K., Bhandarkar, P., Vetekar, P., Laxmi, G.: An Android application for home services. Int. Res. J. Eng. Technol. 7(5), 92–101 (2020)
24. STF: Acceptance testing (2011). https://softwaretestingfundamentals.com/acceptance-testing/
25. The Saipan Tribune: Two suspects in robbery may have fled (2000). https://www.saipantribune.com/index.php/9655d073-1dfb-11e4-aedf-250bc8c9958e/
26. Verma, Y.: A guide to building hybrid recommendation systems for beginners. Anal. India Mag. (2021). https://analyticsindiamag.com/a-guide-to-building-hybrid-recommendation-systems-for-beginners/
Developing and Accessing Policies in Maritime Education Using Multicriteria Analysis and Fuzzy Cognitive Map Models

Stefanos Karnavas1, Dimitrios Kardaras2, Stavroula Barbounaki3(B), Christos Troussas4, Panagiota Tselenti4, and Athanasios Kyriazis1

1 Department of Statistics and Insurance Science, University of Piraeus, Piraeus, Greece
2 School of Business, Department of Business Administration, Athens University of Economics and Business, Athens, Greece
3 Department of Midwifery, University of West Attica, Egaleo, Greece
[email protected]
4 Department of Informatics and Computer Engineering, University of West Attica, Egaleo, Greece
Abstract. The development and assessment of education policies in maritime education is a complex task, as it requires the consideration of several factors such as stakeholders' views, infrastructure availability, lecturers' potential, and motivation. Moreover, the effectiveness of education policies can only be assessed at a later time, after decisions have been made and investments have been implemented. Thus, education policies may be difficult to reverse once developed, while new initiatives and directions are prone to the same difficulties and inefficiencies. This paper addresses the need to design and assess alternative education policy scenarios before any actions are taken, by proposing a hybrid methodological approach utilising multicriteria analysis and fuzzy cognitive maps. This research analyses data collected from maritime academy students and staff in Greece. The proposed methodology provides the means for the development of fuzzy intelligent systems that allow education policy makers to develop education policies and assess their impact on education quality, student satisfaction, and the satisfaction of maritime companies. The results can be useful for both researchers and education policy makers.

Keywords: Education Policy · Intelligent Systems · Fuzzy Cognitive Maps · DEMATEL
1 Introduction

The Greek merchant marine, the only national sector that holds a leading position internationally in a highly competitive environment, needs competent and trained executives to manage and sail ships safely and successfully, in order to maintain and improve its dynamism and to satisfy the maritime transport needs of international trade. The maritime education strategy in Greece is developed along the following lines: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 247–255, 2023. https://doi.org/10.1007/978-3-031-44146-2_25
248
S. Karnavas et al.

• Modernization of the curriculum, supply of modern equipment.
• Provision of vocational training and further education by private bodies.
• Development of a Quality System Operation (ISO) in maritime academies.
• Attracting young people to the maritime profession.
• Improving existing building infrastructure.
The Czech Jan Amos Komensky (1592–1670), known as Comenius, highlighted three factors (the teaching triangle) that determine teaching, namely [1]:

• teacher,
• student,
• curriculum.

In order to provide good quality education, the lecturer must identify the point of development at which each student is and then build upon the student's experiences [2]. Several approaches and factors have been developed and identified in research attempts to determine education quality, students' motivation, and satisfaction. Factors such as intrinsic motivation, external motivation, instructional competence, classroom management, managing student heterogeneity, and student learning interest and engagement have been investigated for their importance in the education process [3]. Developing education policies is an issue that many researchers have focused on, and interest has increased significantly since the 1990s [4]. In recent years many studies have investigated the effects of Information and Communication Technologies (ICT) in education [5]. This research focuses on maritime education. It proposes a methodology to address the complexity of maritime education policy making by incorporating the views of stakeholders such as students, academic staff, and maritime companies. By adopting multi-criteria analysis and fuzzy logic, it provides a modelling approach that ensures comprehensiveness, adaptability, and flexibility in examining alternative policy scenarios and their implications.
2 Methodology and Methods

The steps of the proposed methodology are described below:

1. A questionnaire was developed and used to collect data from the Aspropyrgos maritime academy in Greece. In total, 390 valid questionnaires were collected. The research questions were developed by taking into consideration the recent advances in information and communication technology (ICT) use in facilitating learning and in student and staff satisfaction. A five-point Likert scale ranging from strongly disagree to strongly agree was used, in order to allow students to report the extent to which they use ICT and, subsequently, to express their perceived satisfaction with each application.
2. The students' dataset was analyzed with factor analysis. The analysis results indicate the factors that contribute most to students' learning satisfaction, representing the students' perspective of education quality and satisfaction.
3. The results of the factor analysis were subsequently used to administer a second questionnaire, which academic staff were asked to complete. In total, 19 valid questionnaires were collected.
Developing and Accessing Policies in Maritime Education
249
4. The data collected from the staff was analyzed with the DEMATEL (Decision Making Trial and Evaluation Laboratory) multicriteria analysis method. DEMATEL analysis is used to prioritize the factors identified in the factor analysis by their importance and to calculate the factors' interdependencies.
5. A fuzzy cognitive map (FCM) was developed based on the DEMATEL results. The FCM was used to simulate alternative education policies expressed in terms of the DEMATEL factors and their causal interdependencies. The FCM is used to assess the implications of different policies, providing policy makers with insights into the estimated effectiveness of their decisions.

2.1 The DEMATEL Method

DEMATEL is a multi-criteria method developed by the Battelle Geneva Institute [6]. The method can be applied to represent complex relationships among factors pertaining to a problem area and to calculate their causal interdependencies. DEMATEL produces causal models that show how interrelated factors affect each other. It has been used in various problems such as marketing strategies, health problem solving, etc. The steps of DEMATEL are as follows:

Step 1: Calculate the direct-relation matrix, based on the 19 academic staff responses to the second questionnaire, where staff were asked to report their beliefs regarding the impact that one factor exerts on another. Possible answers representing the strength of the impact are: 0 for no impact, 1 for somewhat impact, 2 for medium impact, 3 for high impact, and 4 for very high impact. The direct-relation matrix A = [a(i, j)] is an n × n matrix, where a(i, j) indicates the strength of the impact that factor (i) exerts on factor (j), with i, j indexing the decision criteria. Where data are collected from a group of respondents, as in this research, all responses are averaged to produce the average matrix Z = [z(i, j)].

Step 2: Normalize the responses to produce the initial direct-relation matrix D, calculated as

D = λZ, where λ = min( 1 / max_{1≤i≤n} Σ_{j=1}^{n} z(i, j), 1 / max_{1≤j≤n} Σ_{i=1}^{n} z(i, j) ).

Step 3: Calculate the total-relation matrix T, where I is the identity matrix:

T = D(I − D)^(−1).

Step 4: Calculate the sums of the rows and columns of matrix T. The sum of the rows is

r(i) = Σ_{j=1}^{n} t(i, j), for i = 1, …, n,

and the sum of the columns is

c(j) = Σ_{i=1}^{n} t(i, j), for j = 1, …, n.
The value of r(i) indicates the total causal effect that criterion (i) exerts on the system (i.e., on the rest of the factors in the DEMATEL model), both directly and indirectly. The value of c(j) shows the total causal effect received by criterion (j), both directly and indirectly. For (j = i), the value of (ri + ci) represents the total causal effects both given and received by factor (i), while the value of (ri − ci) shows the net contribution of factor (i) to the system. If (ri − ci) is positive, then factor (i) is a net cause, meaning that factor (i) affects other factors of the model. If (ri − ci) is negative, then factor (i) is a net receiver, implying that factor (i) is affected by other factors of the model.

Step 5: Calculate the threshold (α) as the average of the entries of T:

α = ( Σ_{i=1}^{n} Σ_{j=1}^{n} t(i, j) ) / n²,

where n is the number of criteria. The threshold is used to cut off the most important criteria that will be represented in the causal model.

Step 6: Develop the DEMATEL causal model by plotting all coordinate pairs (ri + ci, ri − ci). The causal model indicates the importance of the most important criteria.

2.2 Fuzzy Cognitive Map

An FCM is a directional graph used to model the concepts pertaining to the problem universe of discourse as well as their interrelationships. Each concept is represented as a node in the FCM graph and is connected to other nodes with weighted arcs. The arcs indicate the direction and the strength of the causal interdependencies among the model's concepts. In an FCM, the weights are fuzzy linguistic variables that indicate the strength of a relationship, such as 'small', 'moderate', or 'strong'. FCM models represent perceptions about a domain. By examining the interrelationships among its concepts, FCMs can be used to analyse the implications of scenarios and draw conclusions accordingly. In this research an FCM is developed to analyse alternative education scenarios.

An n × n matrix E = [e(i, j)] can be used to represent an FCM, where n indicates the number of concepts in the FCM, while (i) and (j) index the concepts. The value of each e(i, j) indicates the strength and the polarity of the causal relationship between concepts and takes values from the interval [−1, +1]. Therefore [7]:

e(i, j) > 0 indicates a causal increase or positive causality from node i to node j.
e(i, j) = 0 indicates no causality from node i to node j.
e(i, j) < 0 indicates a causal decrease or negative causality from node i to node j.
An nXn matrix can be used to represent an FCM as follows: E = eij , In an nXn matrix, the (n) indicates the number of the concepts in the FCM, while the (i) and the (j) represent the concepts in the FCM. The value of each eij indicates the strength and the polarity of each causal relationship among concepts. The value of causality eij takes on values from the interval [−1, +1]. Therefore, [7]: eij > 0 indicates a causal increase or positive causality from node i to j. eij = 0 there is no causality from node i to j. eij < 0 indicates a causal decrease or negative causality from node i to j.
Causal effects can propagate through a series of multiplications between the activation vector and the FCM matrix. The activation vector (D) is a 1xn vector [8] and is used to represent the initial causes that activate the systems which is modelled by the FCM. For example, the impact of the activation vector (D1) is calculated through a series of matrix multiplications such as: ExD1 = D2, ExD2 = D3 and so forth, that is, ExDi = Di + 1, until equilibrium is reached. The equilibrium, which indicates the total causal effect of the D1, is reached when the result of a multiplication equals to zero, therefore, there is no further causal impact. As threshold a value may be set that vary depending on the application domain and the modelling priorities. For example, a threshold set at (±0.5), implies that if the multiplication returns a value greater than (+0.5) or lower than (−0.5) then the value is set to (+1) or (−1) respectively. FCMs have been used in many applications in systems modelling and decision making [9], in modelling nonlinearity [10], in services customisation [11], in personalised recommendations [12]; [13], in EDI design [14], in web adaptation [15] in information systems strategic planning [16]. Through what–if simulations represented by alternative activation vectors scenarios, education policy makers can identify sets of relevant important implications pertaining to the suitability and effectiveness of the different policies and investigate personalised educational approaches to students’ individual profile and adjusting the of ICT in education and learning. 2.3 Limitations of the Study This research analyses data collected from a maritime academy in Greece. 
Although, conclusions can be drawn, surveys from other countries with different educational systems and policy priorities would also be useful in order to assess the models produced in this research but also to evaluate the applicability of the proposed methodology in a variety of educational settings.
3 Data Analysis

3.1 Factor Analysis of the Students' Perspective on Education Satisfaction

Factor analysis was performed to analyse the students' data and identify the pillars that underlie education satisfaction from the students' perspective. A Cronbach's alpha value of 0.908 was calculated using SPSS 25; therefore, the scales' reliability is justified. Further, the KMO and Bartlett's test was used to measure the suitability of the data for factor analysis. The KMO value was 0.925 and Bartlett's test of sphericity gave approximately χ² = 5796.6 with a significance level of 0.000; thus, the data are appropriate for factor analysis. Principal components analysis with varimax rotation resulted in three factors related to education satisfaction from the students' perspective. A factor loading greater than 0.55 was required for the assignment of an item to a factor. The three factors are: "Academic Staff Quality", "Students Profile", and "ICT Infrastructure in Education". The Cronbach's alpha (>0.78) for the complex factors indicates the reliability of the instrument's structure. Table 1 shows the factors and the corresponding items' structure.
Table 1. The students' perspective of education satisfaction: factors and items of factors

Academic Staff Quality: Education Level of Staff; Familiarity with ICT; Staff Experience.
Students Profile: Admission exams type to enter the maritime academy; Secondary school type attended before the maritime academy; Hours of private study.
ICT Infrastructure in Education: Use of interactive board; Use of education platforms; Use of ICT tools in education.
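The scale-reliability figure reported above can be reproduced with a few lines of numpy. The Likert responses below are invented toy data; only the formula, Cronbach's α = k/(k − 1) × (1 − Σ item variances / variance of totals), is the standard definition:

```python
import numpy as np

# Cronbach's alpha on a respondents x items matrix of Likert scores.
# The five respondents and three items below are illustrative, not study data.

def cronbach_alpha(items):
    """Return Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondent totals
    return (k / (k - 1)) * (1 - item_var / total_var)

scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(scores)
```

On these toy responses the items move together across respondents, so alpha comes out high (about 0.92), in the same region as the 0.908 reported for the real questionnaire.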
The results indicate that students appreciate the capabilities of their lecturers to use ICT tools in education, not only during lectures but throughout the education process, which involves communication and coordination among lecturers and students as well as the administrative tasks pertaining to education.

3.2 The DEMATEL Analysis

The items identified in the factor model were subsequently used to develop the questionnaire answered by the group of 19 academic staff. The DEMATEL analysis identified the most important items that reflect the staff perspective of education satisfaction. In addition, the DEMATEL analysis investigates the causal interrelationships among the items and their impact on three important aspects of maritime education, namely: education quality, student satisfaction, and maritime companies' satisfaction.

Table 2. DEMATEL analysis: the importance of factor items based on the academic staff perspective

Item: Importance (r + c)
Education Level of Staff: 1.28699232
Familiarity with ICT: 1.204350329
Staff Experience: 1.055815657
Admission Exams type to enter the maritime academy: 1.162758053
Secondary school type they attended before the maritime academy: 0.959778039
Hours of private study: 1.141410749
Use of interactive board: 1.440996508
Use of education platforms: 1.707081097
Use of ICT tools in education: 1.872726842
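The DEMATEL computation behind these (r + c) values (Sect. 2.1, Steps 2–5) can be sketched with numpy. The 3 × 3 direct-relation matrix below is a toy example with invented scores, not the study's data:

```python
import numpy as np

# DEMATEL Steps 2-5 on a toy 3-criterion averaged direct-relation matrix Z
# (expert scores on the 0-4 scale; the numbers are invented for illustration).

Z = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 4.0],
              [2.0, 1.0, 0.0]])

# Step 2: normalise by the larger of the maximum row sum and column sum.
lam = 1.0 / max(Z.sum(axis=1).max(), Z.sum(axis=0).max())
D = lam * Z

# Step 3: total-relation matrix T = D (I - D)^-1.
n = Z.shape[0]
T = D @ np.linalg.inv(np.eye(n) - D)

# Step 4: row sums r (effects given) and column sums c (effects received).
r, c = T.sum(axis=1), T.sum(axis=0)
importance = r + c    # prominence of each criterion, as in Table 2
net_effect = r - c    # positive => net cause, negative => net receiver

# Step 5: threshold alpha = mean of all entries of T.
alpha = T.mean()
```

By construction T = D + D·T (direct plus indirect effects), and the net effects (r − c) sum to zero across all criteria, which is a quick sanity check on any implementation.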
The three important education aspects were selected after consultation with the group of academic staff participating in this study. With the threshold value calculated at 0.068, Table 2 shows the most important items by their (r + c) values. The use of ICT tools in education was found to be the most important aspect with respect to its contribution to education quality and to student and maritime company satisfaction. Similarly, the development and use of education platforms and interactive boards were also very significant. Thus, the ICT infrastructure is a factor that should be considered very carefully when designing education policies. The least important item is the type of secondary school that students attended before entering the maritime academy.

3.3 The Fuzzy Cognitive Map Analysis

Based on the DEMATEL analysis, this study develops an FCM in order to simulate alternative education policy scenarios and to assess their effectiveness in terms of education quality, student satisfaction, and maritime companies' satisfaction. Figure 1 shows the FCM matrix.
Fig. 1. The FCM model matrix: a 12 × 12 weight matrix over the nine factor items (Education Level of Staff, Familiarity with ICT, Staff Experience, Admission Exams type to enter the maritime academy, Secondary school type, Hours of private study, Use of interactive board, Use of education platforms, Use of ICT tools in education) and the three outcome concepts (Education Quality, Students Satisfaction, Maritime Companies Satisfaction).
The FCM graph corresponding to the FCM matrix is shown in Fig. 2. After running alternative simulations of the FCM, the model converges after 4 multiplications, as shown in Fig. 3. Assume the activation vector D1 = {0.7, 0.9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, which represents an education policy that increases staff education level strongly (0.7) and staff familiarity with ICT very strongly (0.9), i.e. a policy that seeks academic staff with such a profile. The multiplication of the D1 vector with the FCM matrix calculates the implications of this policy. The result after (x) multiplications is shown as a vector, e.g. Dx = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0.7, 0.63, 0.23}. The results indicate that such a policy will increase education quality strongly (0.7), increase students' satisfaction moderately (0.63), and increase maritime companies' satisfaction a little (0.23).
Fig. 2. The FCM model graph.
Fig. 3. The FCM model reaches equilibrium after 4 multiplications.
4 Conclusions

This research proposes a methodology that can be applied to investigate the effectiveness of alternative education policies. The study analyzed data collected from students and academic staff and developed a factor model, a DEMATEL model, and the corresponding FCM, which it finally used to simulate education policy choices. The methodology exhibits flexibility, for it captures both students' and staff perspectives. It also provides the means for policy makers to examine and assess their priorities and options before they actually implement them, thus saving valuable resources. Future research may focus on validating the models and the approach in different educational settings, other than maritime academies, and in different countries and cultures. It can also focus on enhancing the
Developing and Accessing Policies in Maritime Education
255
dimensions considered in this study by incorporating students’ profiles, issues such as cost effectiveness and digital maturity.
Applying Machine Learning and Agent Behavior Trees to Model Social Competition

Alexander Anokhin, Tatyana Ereshchenko, Danila Parygin(B), Danila Khoroshun, and Polina Kalyagina

Volgograd State Technical University, 1 Akademicheskaya Street, 400074 Volgograd, Russia
[email protected], [email protected], [email protected]
Abstract. This paper considers aspects of applying machine learning methods to existing ways of modeling intelligent agent behavior, with the goal of enabling agents to improve their performance in competitive models. An overview of existing machine learning methods is given, and ways of modeling the behavior of agents are considered, along with the advantages and disadvantages of the existing methods. The most advantageous combination of machine learning and behavioral modeling approaches is identified. Intelligent agent models are implemented based on behavior trees with the introduction of reinforcement learning, and a test platform with an integrated agent competition model is implemented. On the basis of the developed platform, the ability of the developed behavior model to win in competition against agents equipped with different variants of traditional tree-based behaviors has been tested. The workability and benefits of using the developed behavioral model were analyzed in relation to the potential of the chosen combination of techniques.

Keywords: Behavior Modeling · Intelligent Agent · Artificial Intelligence · Machine Learning · Behavior Tree · Competition Model · Social Competition
1 Introduction

Modeling the behavior of intelligent agents is an important component of the multi-agent approach, which is used in various areas, including urban development management tasks such as transport management, urban planning, and financial flows. The multi-agent approach includes the development of behavior models for each agent and for their interaction in the system. As a result, it allows predicting the actions of each agent in different situations; taking into account the interaction between agents and the influence of the environment on their behavior contributes to the optimization of the system as a whole [1].

There are many examples of intelligent agents that can be used to simulate urban processes. Vehicle driver behavior models can be used for traffic forecasting, route optimization and congestion control of public transport. Pedestrian models can be used to estimate flows on streets and squares in order to design the necessary urban infrastructure [2]. On a larger scale, city management agents simulate the work of city services and organizations, such as transport management, utilities, and security services. This allows optimizing the operation of urban infrastructure, controlling rubbish collection, managing lighting, etc. It is also possible to simulate environmental processes in the city, such as air and water pollution, climate change and ecosystem degradation. This contributes to making decisions on the management of environmental resources and predicting the environmental consequences of urban development [3].

Separately, it is necessary to outline models of the social relations and behavior of city residents, which are suitable for analyzing social trends, predicting changes in social structures and shaping social policy. Models of social competition describe the process of rivalry between agents seeking to achieve certain goals, such as increasing their social status, influencing other users or gaining access to certain resources. These models can help to understand how social factors influence agents’ behavior and how different strategies may affect the outcome of competition. This can be useful for decision-making in areas ranging from marketing and social policy to business development [4].

In this regard, the purpose of this work was to develop the theoretical foundations and software tools for modeling the behavior of intelligent agents under environmental constraints and for improving their performance. The relevance of this problem is confirmed by the steady increase in the number of scientific publications on the research topics over the last 5 years (222 in 2018, 270 in 2019, 306 in 2020, 331 in 2021, 376 in 2022, according to Google Scholar).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 256–265, 2023. https://doi.org/10.1007/978-3-031-44146-2_26
2 Background

Approaches to behavioral modeling can be divided into several main categories, depending on what underlies the model. The rule-based model of behavior came first. In this implementation, the agent’s behavior is set rigidly by rules that determine what it should do in a particular situation. Rule-based models also include more complex implementations in which the rules depend on the current state of the agent, its parameters and the environment [5].

The finite automata approach to modeling can provide more opportunities to build a variational model of behavior. The essence of the approach is to identify the basic states in which an agent can reside, and to build links that allow the agent to move from state to state [6].

The behavior tree approach to modeling is also common. Trees are mathematical models of plan execution and are often used in computer science in tasks where it is necessary to describe switching within a finite set of actions in a modular fashion. Trees are powerful tools for modeling the behavior of intelligent agents, and are used both in games [7–9] and for classes of problems such as unmanned vehicle control, robotic manipulation and others. The behavior tree is a kind of finite state machine with a significant advantage: adding new actions to a behavior tree is not as complicated as in the case of a finite state machine, where it is necessary to define transitions between the new state and each of the existing states [10]. The tree is represented as a set of nodes connected by lines.
The root node has no parents, has exactly one child node, and sends it a signal to execute at a certain frequency. This starts the tree execution. The execution of action chains is affected by the state of the agent and the environment [11, 12].

Machine learning can be applied to the modeling of agent behavior in order to create a more efficient behavior model. There are several varieties of machine learning methods [13]. The main methods include reinforcement learning, a type of machine learning in which the system under test (the agent) learns by interacting with the environment. Reinforcement learning is often used to create game artificial intelligence [14–16]. Q-learning should be considered separately as a form of reinforcement learning: it is characterized by an additional utility function Q, which helps the agent take previous experience into account. This type of learning is often used in the development of agents for computer games [17].

In addition, deep learning is often used in face and emotion processing [18], colour correction and colourisation tasks [19], and other tasks. It can also be used in sound processing: Magenta [20] can create music, and Google Voice [21] can transcribe voicemail and manage SMS [22]. The application of deep learning to the task of modeling the behavior of intelligent agents in games can also be justified, but a combination of several approaches is often used: for example, [23] proposes an agent behavioral model based on deep reinforcement learning. Among the most developed areas of deep learning are deep neural networks [24] and deep reinforcement learning, e.g. the deep Q-network [25, 26].

It was decided to conduct a survey of scientific papers on the topic in question in order to finally determine which models and methods should be applied to solve the research problem. The survey analyzed over 400 articles published between 2017 and 2022. Of all the sources reviewed, more than half (53%) describe the application of reinforcement learning. The review thus concluded that the approach combining reinforcement learning as the learning algorithm and behavior trees as the agent behavior model is the most appropriate for this study. The combination of the selected algorithms can extend the capabilities of existing behavior trees and significantly improve the performance of agents endowed with the ability to learn.
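The Q-learning rule mentioned above can be illustrated with a minimal tabular example. The chain environment, the learning rate, the exploration rate and the reward are illustrative choices, not the competition game developed in this paper.

```python
import random
from collections import defaultdict

# Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, n_states=5):
    """Learn to walk right along a chain 0..n_states-1; reward 1 at the end."""
    Q = defaultdict(lambda: [0.0, 0.0])   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = random.randrange(2) if random.random() < eps else \
                max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)
Q = train()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]
print(policy)   # the greedy policy should move right (action 1) in every state
```

The utility function Q is the table itself: after training, the greedy policy extracted from the table encodes the experience accumulated over all episodes.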
3 Methodology

The Unity engine and the built-in Unity ML machine learning framework are used for development [27]. The test platform has been designed for the functioning of the agents and for demonstrating their abilities. The agents are able to move freely within the platform and to interact with the environment. To conduct an experiment with intelligent agents, a competitive game and a corresponding experimental environment were devised. The main goal of the agents is to collect the maximum number of victory points while maintaining their own vitality, in order to survive until the end of the match and bring more points to their team. The rules of the game are as follows:

– The match is between two teams of intelligent agents;
– Teams compete for the highest number of victory points;
– The match ends when the simulation time expires;
– The team with the most victory points wins;
– An agent dies if its health bar drops to zero;
– Health is reduced if the agent is hungry (satiety reduced), thirsty (hydration reduced), or in a danger zone;
– Satiety, hydration and health can be increased by collecting food, water and first aid kits respectively.

A model environment was developed in accordance with the rules. A wall surrounds the arena, and in the center of the environment there is a danger zone of a given radius. All the necessary objects were placed on the arena (food, water, first-aid kits and victory points in the form of golden objects of different values). Among the parameters an agent tracks, the most important are its satiety, hydration, health and score. Another important parameter is the type of the agent (“smart”, “cautious”, “balanced”, “risky”).

Everything needed for the simulation was implemented using Unity classes and components. The agents were given their parameters (satiety, hydration and health), as well as thresholds for these parameters. If any of the indicators falls below the corresponding threshold value, the agent considers that this indicator has dropped too low and needs to be replenished (e.g. find food and eat). Thresholds differ between agent types. The “cautious” agent type has the highest thresholds: when an indicator falls below 50% of its maximum, the agent prioritizes replenishing it. The “risky” type has lower thresholds of 10–15%. The “balanced” and “smart” agent types share average thresholds of around 30% of the maximum.

3.1 Development of Agent Behavior Trees

A decision was made to develop a behavior tree interface from scratch. The basic necessary node types from which a tree can be constructed were identified: root node, base node, selector and sequencer.
The root node starts the execution of the tree; selectors and sequencers can be used to build branches and sequences of actions; and the base node can be used to implement specific agent actions.

The agent’s behavior tree should depend on the social group that the agent represents: the sequence and priority of actions should vary according to the agent’s membership in one group or another. At the initial stage, the actions performed by an agent are mainly determined by whether it is in one of the critical states: hunger, thirst or low health. Based on which of these states the agent is in, its actions are prioritized: searching for food, searching for water, or searching for a first aid kit. If the agent is not in any of these states, it can move on to searching for collection objects. Special “Check Hungry”, “Check Dying” and “Check Thirsty” nodes have been programmed to check the agent’s current state. The logic behind these scripts is to access the agent’s Agent State Controller component and retrieve the hunger, low-health or thirst flag from it. In the “cautious” population, maximum priority is given to the “Check Danger” check (whether the agent is in the danger zone).

The individual actions required for the correct functioning of an agent were also represented as tree nodes. These actions include moving, generating a random point to move to, and finding, selecting and absorbing objects. The developed set of nodes was sufficient to construct three different behaviors. The behavior tree for the agents of a “balanced” population is shown in Fig. 1. The selector node performs a sequential check of the need to perform each of the child actions in descending order of priority from right to left. Thus, the highest priority is given to the agent taking care of its own health; otherwise the agent moves on to the next action in descending order of priority.
Fig. 1. The “balanced” behavioral model tree.
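The selector and sequencer semantics used throughout this section — a selector tries its children in priority order until one succeeds, a sequencer runs its children in order until one fails — can be sketched in a few lines. The node classes, the tick protocol and the concrete thresholds below are an illustrative reconstruction, not the authors' Unity implementation.

```python
SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Ticks children in priority order; succeeds on the first success."""
    def __init__(self, *children): self.children = children
    def tick(self, agent):
        for child in self.children:
            if child.tick(agent) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Ticks children in order; fails on the first failure."""
    def __init__(self, *children): self.children = children
    def tick(self, agent):
        for child in self.children:
            if child.tick(agent) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    """Leaf node wrapping a predicate over the agent's state."""
    def __init__(self, pred): self.pred = pred
    def tick(self, agent): return SUCCESS if self.pred(agent) else FAILURE

class Action:
    """Leaf node performing a side effect and reporting success."""
    def __init__(self, effect): self.effect = effect
    def tick(self, agent): self.effect(agent); return SUCCESS

# A toy "balanced" agent: restore health first, then satiety, else collect points.
tree = Selector(
    Sequence(Condition(lambda a: a["health"] < 30),
             Action(lambda a: a.update(goal="first aid kit"))),
    Sequence(Condition(lambda a: a["satiety"] < 30),
             Action(lambda a: a.update(goal="food"))),
    Action(lambda a: a.update(goal="victory points")),
)

agent = {"health": 20, "satiety": 80}
tree.tick(agent)
print(agent["goal"])   # → first aid kit
```

Swapping the order of the sequences, or changing the threshold constants per population, is exactly what distinguishes the "cautious", "balanced" and "risky" trees described below.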
“Cautious” agents wear green. Agents in this population have inflated thresholds for health, satiety and hydration, which means that the decision to replenish one or another indicator occurs earlier in these agents than in other populations. In addition, the behavior of the “cautious” agents themselves differs from that of other agents. The entire arena is a “sweet spot” for this population, except for the danger zone in the centre. This prevents “cautious” agents from picking up high-value items closer to the centre of the arena, but allows them to survive longer. The behavior tree for the agents in the “cautious” population is shown in Fig. 2.

Fig. 2. The “cautious” behavioral model tree.

“Risky” agents wear red. They are distinguished from other populations by reduced health, satiety and hydration thresholds, which means that they are much less concerned with their own vitality and more concerned with scoring points. Agents in this population search for collection objects primarily in the danger zone. The object search was modified so that agents of this population do not pay attention to 1- or 3-point objects at all, concentrating on harvesting higher-value objects. It is also mandatory for agents of this population to carry a first aid kit when visiting the danger zone. Thus, agents of this population can search for food, water and first aid kits throughout the arena, but their search for valuable objects is limited to the danger zone only. The behavior tree for the agents in the “risky” population is shown in Fig. 3.

The object search task, due to its scale, was separated into its own behavior tree. This tree is linked to the overall tree via an input parameter: the search tree is fed the object to be found. The search tree starts with a selector node, which determines which path to follow next. The selector semantics guarantee that the first child is submitted for execution in any case, and each subsequent child only if the previous one fails. Therefore, the first child loaded into the selector is a sequence that searches for an object in the vicinity.
Fig. 3. The “risky” behavioral model tree.
3.2 Development of an Intelligent Agent Training Model

Reinforcement learning was selected as the machine learning algorithm; it can be applied directly to the developed software platform. The external environment for the agent is an arena filled with collection objects, food, water and first-aid kits. Every action the agent takes toward its goals must be rewarded by the environment, so that the agent can judge whether it is taking the right actions.

As part of the design of the training model, it was decided to split the model into two sub-models with different objectives. The first sub-model focuses on collecting the objects that maintain the viability of the agent. The second sub-model is aimed directly at scoring points. The models must work in tandem and ensure that the agent achieves both of its goals: maximising the points it receives and maximising its own lifetime. Correct selection of the reward coefficients ensures the agent’s continued survival together with the collection of as many points as possible in the time allotted to each match.

The CleverAgent script was programmed to implement the learning models. Its main method, OnActionReceived(), contains the logic for actions and their rewards. This script describes what actions an agent can perform and what rewards it can receive through Unity ML’s built-in methods, such as SetReward(). Unity ML provides ample opportunities to implement machine learning, and it uses Python 3.6+ to run the learning process.
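The interplay of the two sub-models can be illustrated with a toy per-step reward function. The field names, thresholds and weighting coefficients below are hypothetical stand-ins for what the C# CleverAgent script passes to Unity ML's SetReward(); they are not taken from the authors' implementation.

```python
def step_reward(state, w_survival=0.4, w_score=0.6):
    """Combine a survival sub-reward with a scoring sub-reward.

    state: dict of per-step observations; all field names and the
    weighting coefficients here are illustrative assumptions.
    """
    survival = 0.0
    if state["picked_food"] and state["satiety"] < 0.3:
        survival += 1.0   # replenishing a low vitality stat is rewarded
    if state["in_danger_zone"] and state["health"] < 0.3:
        survival -= 1.0   # risking death at low health is penalized
    scoring = state["points_collected"]   # victory points gathered this step
    return w_survival * survival + w_score * scoring

r = step_reward({"picked_food": True, "satiety": 0.2,
                 "in_danger_zone": False, "health": 0.9,
                 "points_collected": 5})
print(r)   # 0.4 * 1.0 + 0.6 * 5 = 3.4
```

Tuning the two weights shifts the learned policy between longevity and point-chasing, which is the balance the trained "smart" population must strike.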
4 Results

The Unity tools offer visualization of most of the information so that it can be easily presented to the user. It was decided to divide the program into three main windows (Fig. 4). The main function of the start window is configuring the simulation environment (Fig. 4a). The main program window provides the user with a snapshot of the current state of the arena and its inhabitants (Fig. 4b). Figure 4c shows the result of a program initialized with redundant parameters in a reliability test. The final window of the program opens when the main window is closed and presents the results of the simulation (Fig. 4d): the winning team and the number of points scored by each team and each agent.

The experiment scenario involved a series of matches. In each match, the trained (“smart”) population competed with one of the other types of agents. For each rival we conducted three matches with different conditions: a lot of resources and a lot of time, little of both, and average conditions. In this way, we reduce the risk that the initial conditions of the simulation strongly influence the outcome of a match. The results of all series of experiments on the competition of the intelligent population with the other populations are given in Table 1. The outcome “victory” is entered into the table if the “smart” population wins by scoring the most points.

As shown in the table, the trained population managed to win more than 80% of the matches against the other variants of simple behavior models. The results of the experiment demonstrate that the trained population was more successful in the task of scoring while maintaining its own viability under time constraints. It can also be concluded that trained agents were more successful in balancing risk and possible reward: they did not prefer only expensive or only cheap resources, but made a relatively conscious choice in favor of one or the other while simultaneously assessing the risk associated with collecting more expensive resources in the danger zone, where the agent can die. Summarizing, we can establish that machine learning techniques can indeed improve the performance of traditional intelligent agent behavior models.
Fig. 4. Program interface: a. View of the program start window; b. Main program window with normal operation; c. Result of work with incorrect data; d. Results window.
Table 1. Results of a series of experiments with the trained population.

Series  Terms and conditions                 Rival     Prevailing outcome
1       A lot of time, a lot of resources    Cautious  Victory
1       Average time, average resources      Cautious  Victory
1       Little time, little resources        Cautious  Victory
2       A lot of time, a lot of resources    Balanced  Defeat
2       Average time, average resources      Balanced  Victory
2       Little time, little resources        Balanced  Victory
3       A lot of time, a lot of resources    Risky     Defeat
3       Average time, average resources      Risky     Victory
3       Little time, little resources        Risky     Victory
5 Conclusion

As a result of this research, an analysis was carried out of existing approaches to organizing intelligent agent behaviors and of the machine learning techniques that can be applied to them. The research resulted in the selection of applicable algorithms and tools for further development. A software platform on which the research can be conducted has been developed, and the required behavior models and training algorithm have been implemented. The computational experiment showed that applying machine learning methods to the simulation of intelligent agent behavior can significantly improve agents' performance and give them an advantage over other ways of organizing game artificial intelligence, while demonstrating sufficiently diverse and realistic behavior.

Using social competition models, it is possible to analyze how various factors, such as changes in population, changing economic conditions or the development of new technologies, can affect the competition for resources and the impact on the urban environment. It is also possible to use social competition models to study the effectiveness of different strategies and policies for improving access to resources and the quality of the urban environment [28]. One can investigate how changes in housing prices, improvements in public transport or the development of new areas of the city affect the competition for benefits and the quality of life in the city as a whole [29]. Thus, modeling social competition based on the study of the behavior of intelligent agents can help to understand how social factors affect the functioning of urban infrastructure and urban development, as well as which strategies can be most effective for achieving certain goals. It is also planned to conduct research using other approaches to the organization of intelligent agents on the basis of the developed platform.

Acknowledgments. The study has been supported by the grant from the Russian Science Foundation (RSF) and the Administration of the Volgograd Oblast (Russia) No. 22-11-20024, https://rscf.ru/en/project/22-11-20024/. The authors express gratitude to colleagues from the Department of Digital Technologies for Urban Studies, Architecture and Civil Engineering, VSTU, involved in the development of the project.
References

1. Burova, A., Burov, S., Parygin, D., Gurtyakov, A., Rashevskiy, N.: Distributed administration of multi-agent model properties. In: CEUR Workshop Proceedings, vol. 3090, pp. 24–33. CEUR (2022)
2. Burov, S., Parygin, D., Finogeev, A., Ather, D., Rashevskiy, N.: Rule-based pedestrian simulation. In: Proceedings of the 2nd International Conference on “Advancement in Electronics & Communication Engineering”, Ghaziabad, India, 14–15 July 2022. SSRN (2022)
3. Anokhin, A., Burov, S., Parygin, D., Rent, V., Sadovnikova, N., Finogeev, A.: Development of scenarios for modeling the behaviour of people in an urban environment. In: Kravets, A.G., Bolshakov, A.A., Shcherbakov, M. (eds.) Society 5.0: Cyberspace for Advanced Human-Centered Society. SSDC, vol. 333. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-63563-3_9
4. Davtian, A., Shabalina, O., Sadovnikova, N., Berestneva, O., Parygin, D.: Principles for modeling information flows in open socio-economic systems. In: Kravets, A.G., Bolshakov, A.A., Shcherbakov, M. (eds.) Society 5.0: Human-Centered Society Challenges and Solutions. SSDC, vol. 416. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-95112-2_14
5. Creating artificial intelligence for games – from design to optimization. https://habr.com/company/intel/blog/265679/. Accessed 29 March 2023
6. Applying Goal-Oriented Action Planning to Games. http://alumni.media.mit.edu/~jorkin/GOAP_draft_AIWisdom2_2003.pdf. Accessed 20 March 2023
7. Official Halo website. https://www.halowaypoint.com/ru-ru. Accessed 21 March 2023
8. BioShock. https://www.bioshockgame.com/. Accessed 17 March 2023
9. SPORE. https://www.ea.com/ru-ru/games/spore/spore. Accessed 15 March 2023
10. Behavioral trees or finite state machines. https://opsive.com/support/documentation/behavior-designer/behavior-trees-or-finite-state-machines/. Accessed 24 March 2023
11. Behavior trees for AI: How they work. https://www.gamasutra.com/blogs/ChrisSimpson/20140717/221339/Behavior_trees_for_AI_How_they_work.php. Accessed 4 April 2023
12. Building your own Basic Behaviour tree in Unity. https://hub.packtpub.com/building-your-own-basic-behavior-tree-tutorial/. Accessed 5 April 2023
13. Machine learning. http://www.machinelearning.ru/. Accessed 8 March 2023
14. Introduction to reinforcement learning for beginners. https://proglib.io/p/reinforcement-learning/. Accessed 10 March 2023
15. Reinforcement Learning. https://medium.com/@pavelkordik/reinforcement-learning-the-hardest-part-of-machine-learning-b667a22995ca. Accessed 25 Feb 2023
16. Deep Reinforcement Learning. https://towardsdatascience.com/how-to-teach-an-ai-to-play-games-deep-reinforcement-learning-28f9b920440a. Accessed 7 April 2023
17. Deep Learning. https://vk.com/deeplearning. Accessed 11 April 2023
18. OpenFace. https://cmusatyalab.github.io/openface/. Accessed 4 April 2023
19. Colornet. https://github.com/pavelgonchar/colornet. Accessed 28 March 2023
20. Magenta. https://github.com/tensorflow/magenta. Accessed 28 March 2023
21. The neural networks behind Google Voice transcription. https://ai.googleblog.com/2015//the-neural-networks-behind-google-voice.html. Accessed 1 April 2023
22. Deeper learning: Opportunities, perspectives and a bit of history. https://habr.com/company/it-grad/blog/309024/. Accessed 15 March 2023
23. Bringing gaming to life with AI and deep learning. https://www.oreilly.com/ideas/bringing-gaming-to-life-with-ai-and-deep-learning. Accessed 20 March 2023
24. DeepMind. https://deepmind.com/. Accessed 2 March 2023
25. Human-level control through Deep Reinforcement Learning. https://deepmind.com/research/dqn/. Accessed 2 March 2023
26. An introduction to Deep Q-Learning. https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8. Accessed 14 March 2023
27. Unity: A General Platform for Intelligent Agents. https://arxiv.org/abs/1809.02627. Accessed 11 April 2023
28. Sadovnikova, N., Savina, O., Parygin, D., Churakov, A., Shuklin, A.: Application of scenario forecasting methods and fuzzy multi-criteria modeling in substantiation of urban area development strategies. Information 14(4), art. no. 241. MDPI (2023)
29. Zelenskiy, I., Parygin, D., Savina, O., Finogeev, A., Gurtyakov, A.: Effective implementation of integrated area development based on consumer attractiveness assessment. Sustainability 14(23), art. no. 16239. MDPI (2022)
An Integrated Platform for Educational and Research Management Using Institutional Digital Resources

Konstantinos Chytas, Anastasios Tsolakidis, Evangelia Triperina(B), Nikitas N. Karanikolas, and Christos Skourlas

University of West Attica, Department of Informatics and Computer Engineering, 12243 Athens, Greece
[email protected]
Abstract. Currently, universities not only offer technologically enhanced education and automated processes, but also provide a wide spectrum of online services, especially after COVID-19. We propose e-EDURES, an integrated platform for educational and research management in tertiary education, which relies on the data of university online services to enable strategic planning and decision making and thus further benefit the stakeholders. More specifically, in this paper we focus on the presentation of an interactive system that utilizes the data derived from the university’s synchronous and asynchronous e-learning services to gain insights into the educational performance of the students. It employs Educational Data Mining techniques to ultimately boost the educational process by assessing previous performance, monitoring the provided services and predicting the future performance of the students, in order to avoid potential pitfalls. The proposed solution merges the current advances in smart education and educational data mining to offer an advanced and updated version of the online services provided by universities that responds better to the current needs in academia.

Keywords: Educational Data Mining · Blended Learning · Learning Analytics · Online Services · Knowledge Discovery · Information System
1 Introduction

Nowadays, there is an abundance of information and information systems focused on academia. However, there is a lack of holistic approaches in which all the information is accumulated from the disparate sources and decentralized systems into a single system that can handle the whole of the academic information. Recording the educational and research-related information in the academic setting, evaluating the performance of academic units and individuals based on this information, and relying on this assessment for institutional strategic planning can lead to an improved decision-making process and more insightful and informed decisions in HEIs (Higher Education Institutions). Among the benefits of a unified system for educational and research management is the exploration of the existing synergies and correlations between research and education and of how they affect the overall efficiency of academics and their institutions.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 266–276, 2023. https://doi.org/10.1007/978-3-031-44146-2_27
An Integrated Platform for Educational and Research Management
The proposed system provides an integrated solution for the recording and assessment of the e-learning, research, and managerial services that occur within a Higher Education Institution (HEI). In this paper, we focus on the learning aspects of our system. The goal of the presented research is to predict future performance and take proactive measures to maintain and boost the effectiveness of the provided teaching services for seamless and personalized online and in-person learning. The system accumulates the data of the involved university subsystems, preprocesses the data, utilizes a data virtualization layer, and applies Educational Data Mining (EDM) algorithms to pinpoint the probability of failure and drop-out and to aid students in maintaining their educational pathways. In our approach, we have utilized the information-rich data generated by the emergency remote teaching during the pandemic not only to enhance the online provided services, but also to make suggestions for improving the learning materials, resources, and instructional design for in-person education based on the gained knowledge. The remainder of the paper is structured as follows: the second section provides the background of the involved scientific fields, while the third section presents the methodology of the presented research. In the fourth section, the results of the proposed method are thoroughly described, whereas the fifth section contains the conclusions, as well as the future work.
2 Related Work

2.1 Digital Transition in Academia

The Digital Transition (DT) in HEIs was initiated in the past [1]; however, the process has been accelerated by COVID-19 [2]. According to the literature, DT has impacted various dimensions of academia, including education, research, knowledge transfer, internationalization, and management [3]. Currently, there is a profusion of information systems that cover the various needs apparent in academia, enable the University's services, and manage the involved data [4–6], corresponding to university e-government (U-EGOV) services. While e-government (electronic government) is mainly focused on providing access to information and services to citizens and businesses through the exploitation of information and communication technologies (ICT) [7], U-EGOV is occupied with the whole spectrum of academic activities, which are not limited to managerial services but also include the facilitation of online learning and the dissemination of the conducted research.

2.2 Smart Education

Nowadays, many different terms have been used to describe the various aspects of technology-enhanced learning (TEL); smart education and smart learning environments (SLEs) are merely some examples. Smart education describes learning in the digital age [8] and can be further analyzed based on each letter of the word as: S-self-directed, M-motivated, A-adaptive, R-resource-enriched, and T-technology [9]. Apart from the increasing number of publications relevant to SLEs in the literature, the great significance of Smart Learning Environments is also
K. Chytas et al.
indicated by the creation of the professional forum International Association of Smart Learning Environments (IASLE), in which researchers, academics, practitioners, and industry professionals take part. IASLE is occupied with reforming the ways of teaching and learning by advancing current learning environments towards smart learning environments, and it examines the interaction and combination of the pedagogy and the technology of SLEs [10]. As expected, smart education has been the focus not only of the research community, but also of standardization committees [8]. To elaborate, Sub-Committee 36 (SC36) of ISO/IEC Joint Technical Committee 1 is occupied with IT standardization frameworks for learning, education, and training (ITLET), including Smart Learning Environments (SLEs). Along with the standardization and development frameworks, emerging or reformed educational tools and learning management systems (LMSs) have been proposed to follow the current advances and tackle real-world educational challenges [11]. Education and learning environments have greatly evolved during the past decades, due to the evolution of information technology [12]. SLEs utilize a wide range of digital technologies to support learning, education, and training [13]. Among the approaches studied, the exploitation of smart mobile devices for learning (m-learning) and mobile game-based learning (MGbL), especially during the pandemic, was examined [14]. In another approach, smart technologies, such as the Internet of Things (IoT), were incorporated with flow theory [11]. Smart pedagogy is a part of smart education, along with the smart learning environment and smart learners [8]; it involves a key-elements model, curriculum design, and teaching strategies [15].

2.3 EDM and LA

The plethora of data resulting from the transition to smart universities and the digitization of university services has led to an abundance of educational data.
To make use of this vast educational data, the fields of Educational Data Mining (EDM) and Learning Analytics (LA) have emerged. As described by Romero and Ventura [16], EDM and LA are two different communities that are often treated as one, with the shared interest of how educational data can be used to enhance education and the learning sciences [16, 17]. The former is focused on developing methods for the exploration of data that stem from education and on implementing data mining techniques in the educational sector [18], while the latter evaluates users' behavior in the context of teaching and learning and measures, gathers, analyses, and assesses data about learners and learning contexts, in order to better comprehend and amplify learning and ameliorate the involved learning environments [19, 20].
3 An Integrated Platform for Educational and Research Management Using Institutional Digital Resources

The presented research builds upon the principles of digital transformation and smart universities to exploit the digitized academic data. In this section, we present an integrated system, e-EDURES, for managing, assessing, and presenting academic information enabled by institutional digital resources; we briefly present all the involved components and then focus on the educational component of the system.
3.1 The Components of e-EDURES

The involved subsystems of the e-EDURES (e-Governance Platform for Educational and Research Management) system are the educational component, which involves e-Class and MS Teams; the research component, which includes the research conducted within the HEI; and the managerial components, which incorporate the administrative information (Fig. 1). As a direct consequence, e-EDURES constitutes a unified solution for information management and decision-making in academia.
Fig. 1. The components of the e-EDURES
Educational Component. The educational component is focused on gathering log data, performance data, interaction data, and statistics from the synchronous and asynchronous learning systems of the University, namely e-Class, an open-source LMS based on Claroline and implemented by GUNET to consistently increase the usage of LMSs in Greece [21], as well as MS Teams.

Research Component. The research subsystem manages information related to the authoring and co-authorship of research publications and the participation and collaboration in research projects.

Managerial Component. This component captures the administrative information of academic tasks and collaborations.

Interaction Between the Components. The various aspects of academia overlap in some cases. Moreover, there is a synergy between the different dimensions in the academic setting. Although our system manages each aspect separately, it does not neglect their interactions.
3.2 System's Architecture

The methodology presented in this paper relies upon the conceptual framework for Learning Systems in Smart Universities [22]. The educational component of the system involves a personalized and student-centered approach for monitoring and predicting students' performance, while it also notifies the user about students at risk of failure or dropping out. One critical step in the proposed methodology is the collection of the different data at one central point. The process initiates with the accumulation of the data from the University systems. After the aggregation of the data, the necessary actions for preparing the data for their further exploitation by the system take place. In the preprocessing stage, a variety of methods are employed: the dataset is cleaned of missing values, redundant attributes are identified using Pearson's correlation, the data are normalized, outlier detection is performed, and a rule engine is employed for the detection of synchronous meetings [23]. Finally, the correlation among the data is calculated and used for feature selection. The anonymization of the data with masking techniques also occurs in this stage, in order to protect the privacy of the students. The abstraction of the data follows in the virtualization layer, in which the data are abstracted from their initial form and handled regardless of the way they are stored or structured [24]. The subsequent step is the data processing and data mining stage. We have focused on the data related to the educational activities. In this stage, the construction of the students' profiles takes place, followed by the application of a set of machine learning algorithms, including classification and clustering algorithms [25]. We apply the random forest algorithm [25] to the students' profiles so as to predict the students that are at risk of failing a course.
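A minimal sketch of the preprocessing steps described above (missing-value cleaning, Pearson-based redundancy removal, normalization, and outlier flagging) could look as follows; the column names and values are illustrative stand-ins, not the platforms' actual schema:

```python
import pandas as pd

# Hypothetical merged activity data from e-Class and MS Teams;
# column names are illustrative, not the platforms' actual schema.
df = pd.DataFrame({
    "eclass_visits":  [12, 40, 7, 55, 3, 30],
    "eclass_minutes": [90, 300, 50, 410, 20, 220],
    "teams_meetings": [5, 2, 18, 1, 20, 9],
    "final_mark":     [4.0, 8.5, 3.0, 9.0, 2.0, 7.0],
})

# 1. Clean the dataset of missing values.
df = df.dropna()

# 2. Identify redundant attributes: features whose pairwise Pearson
#    correlation exceeds a threshold carry little extra information.
corr = df.drop(columns="final_mark").corr(method="pearson").abs()
redundant = [c for i, c in enumerate(corr.columns)
             if any(corr.iloc[j, i] > 0.95 for j in range(i))]
df = df.drop(columns=redundant)

# 3. Min-max normalization of the remaining features.
features = df.drop(columns="final_mark")
normalized = (features - features.min()) / (features.max() - features.min())

# 4. Simple z-score outlier flag (|z| > 3 marks a row as an outlier).
z = (features - features.mean()) / features.std()
outliers = (z.abs() > 3).any(axis=1)
```

In this toy example the near-duplicate `eclass_minutes` column is dropped as redundant; the thresholds (0.95 for redundancy, 3 for the z-score) are assumptions for illustration only.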
In addition, our prototype is enriched with the k-means algorithm to shape the clusters of the students. The machine learning algorithms implemented in our prototype are derived from WEKA, an open-source library. The last layer of the system architecture is data presentation and data visualization, with different alternatives available. The architecture of the system can be seen in Fig. 2.
Fig. 2. The architecture of the e-EDURES system
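The classification and clustering stage can be sketched as follows. Our prototype uses WEKA; for a self-contained illustration, this sketch substitutes the scikit-learn equivalents of the same algorithms (random forest and k-means) and uses hypothetical toy student profiles:

```python
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Hypothetical student profiles: [visits, minutes online, meetings attended].
profiles = [
    [40, 300, 18], [35, 260, 15], [45, 340, 20],   # highly active students
    [5, 30, 2],    [8, 45, 1],    [3, 20, 0],      # barely active students
]
passed = [1, 1, 1, 0, 0, 0]  # 1 = passed the course, 0 = failed

# Random forest to flag students at risk of failing a course.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(profiles, passed)
at_risk = clf.predict([[6, 35, 1]])[0] == 0

# k-means to shape behavioral clusters of students.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
clusters = km.labels_
```

On this toy data the barely active newcomer is flagged as at risk, and k-means separates the two activity groups; real profiles would of course use the preprocessed features of the previous stage.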
4 Results

4.1 Overview of the Data

The input data for the educational analysis, aggregated from the synchronous (MS Teams) and the asynchronous (e-Class) platforms, are presented in Table 1.

Table 1. Overview of the educational data.

Indicator | Description | Source
Course Mark | Mark for the course | e-Class
Lab Mark | Mark for the laboratory course | e-Class
Final Mark | Final mark for the course | e-Class
Announcements, Units, Documents, Exercises, Works, Chat, Questionnaire, Agenda, Links, Forum | For each subsystem, the authors collect data related to the following three attributes of student activity (use of the course's subsystems by the students): Access Number is the number of visits to the subsystem, Duration is the time in minutes spent in the subsystem, Days is the number of days the subsystem was visited | e-Class
Meetings count | The number of meetings in which the student participates | MS Teams
Sessions count | The number of times the student connected to meetings | MS Teams
Meetings duration | The time in minutes spent in meetings | MS Teams
Channel messages | The number of unique messages that the student posted in a team chat | MS Teams
Reply messages | The number of unique reply messages that the student posted in a team chat | MS Teams
Post messages | The number of unique post messages that the student posted in a team channel | MS Teams
Chat messages | The number of unique messages that the student posted in a private chat | MS Teams
Urgent messages | The number of urgent messages that the student posted in a chat | MS Teams
Total meetings | The total number of scheduled and ad hoc meetings a student sent | MS Teams Stats
Meetings organized | The total number of scheduled and ad hoc meetings a student organized | MS Teams Stats
Meetings participated | The number of scheduled and ad hoc meetings a student participated in | MS Teams Stats
Audio time | The total audio time in which the student participated | MS Teams Stats
Video time | The total video time in which the student participated | MS Teams Stats
4.2 Analysis of the Educational Data

To provide an overview of the data and to identify patterns in the information, we present the correlation matrices of the data derived from the synchronous and asynchronous systems. The data span three semesters in total: the spring semester of the academic year 2019–2020, the winter semester of the academic year 2020–2021, and the spring semester of 2020–2021. The sample for the analysis consists of 4,800 students. The variables used in the analysis of the students' behavior come from the two University subsystems described in the previous section. In the following figures (Fig. 3a, b, and c), the set of variables, as well as their degree of correlation, are presented.

Fig. 3. (a) Spring/Summer Semester 2020: Correlation of e-Class – MS Teams variables, (b) Fall/Winter Semester 2020: Correlation of e-Class – MS Teams variables, (c) Spring/Summer Semester 2021: Correlation of e-Class – MS Teams variables.

According to these figures (Fig. 3a, b, and c) and based on the analysis of the correlation matrices, the MS Teams-related variables show a deeper shade of blue than the e-Class-related variables, which indicates a higher degree of correlation. Thus, during the first semester of the pandemic (Fig. 4), the final grade (Grade_final) depends to a greater extent on the use of the teleconferencing tool. In the following semesters (Figs. 5 and 6), this relation changes, and the final grade depends more on the students' behavior in e-Class than in MS Teams.
Fig. 4. Spring/Summer Semester 2020: the grade is highly correlated with both platforms.

Fig. 5. Fall/Winter Semester 2020: the grade is loosely correlated with the MS Teams platform.

Fig. 6. Spring/Summer Semester 2021: the grade is not correlated with the MS Teams platform.
Examining the correlation matrices for each semester, it is apparent that in the first semester of the pandemic there is a correlation between the variables of MS Teams and e-Class (Fig. 7a). However, as the semesters pass (Fig. 7b), the correlation between the variables
diminishes, and the behavior of the students in e-Class no longer depends on their behavior in MS Teams. In our approach, we have developed a university e-government analytics platform, which collects data from the existing University information systems to better support the educational process. The aim of our prototype is to provide all related stakeholders with access to all the information produced during the educational activities, in order to facilitate insightful and informed decisions.
Fig. 7. (a) Spring/Summer Semester 2020: correlation of MS Teams – e-Class variables, (b) Fall/Winter Semester 2021: correlation of MS Teams – e-Class variables.
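The per-semester correlation matrices visualized above can be computed directly from the merged activity table; a minimal sketch with synthetic data and hypothetical column names standing in for the indicators of Table 1:

```python
import pandas as pd

# Synthetic per-student activity for one semester; the column names are
# illustrative stand-ins for the e-Class and MS Teams indicators.
records = pd.DataFrame({
    "eclass_visits":    [10, 25, 5, 40, 18, 30],
    "eclass_documents": [4, 9, 2, 15, 7, 11],
    "teams_meetings":   [3, 12, 1, 16, 8, 13],
    "grade_final":      [5.0, 7.5, 4.0, 9.5, 6.0, 8.0],
})

# Pearson correlation matrix: the quantity visualized as a heatmap per semester.
corr = records.corr(method="pearson")

# Correlation of each platform's variables with the final grade,
# ranked so that the dominant platform per semester stands out.
grade_corr = corr["grade_final"].drop("grade_final").sort_values(ascending=False)
```

Repeating the computation on each semester's records and comparing the `grade_corr` rankings reproduces the shift observed between Figs. 4, 5, and 6.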
5 Conclusions

We proposed an integrated platform for the educational and research management of universities. More specifically, in this paper, we presented and assessed a system for predicting students' academic performance in a smart university. The educational component of our system captures and predicts students' performance and informs the users about students at risk of failure or dropping out, acting proactively and enhancing the efficiency of the educational services offered by the academic institution. The proposed method utilizes educational data mining techniques to ameliorate the learning pathways of students. Future work includes the examination of the unified data management provided by data virtualization on data of even larger scale, as well as the application of the prediction model to a larger dataset.
References

1. Zahari, N.A., Mustapa, M., Nasser, S.S.Q., Dahlan, A.R.A., Ibrahim, J.A.: Conceptual digital transformation design for International Islamic University Malaysia to "University of the Future". In: 2018 International Conference on Information and Communication Technology for the Muslim World (ICT4M), pp. 94–99. IEEE (2018)
2. Rodríguez-Abitia, G., Bribiesca-Correa, G.: Assessing digital transformation in universities. Future Internet 13(2), 52 (2021)
3. Carvalho, A., Alves, H., Leitão, J.: What research tells us about leadership styles, digital transformation and performance in state higher education? Int. J. Educ. Manage. (2022)
4. Jun, Z.: The research of e-government for university on XML. In: 2011 3rd International Conference on Computer Research and Development, vol. 3, pp. 194–197. IEEE (2011)
5. Pasini, A., Pesado, P.: Quality model for e-government processes at the university level: a literature review. In: Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance, pp. 436–439 (2016)
6. Di Maio, A.: Traditional ROI measures will fail in government (2003). http://www.gartner.com/resources/116100/116131/traditional_roi.pdf. Accessed 17 March 2023
7. Fang, Z.: E-government in digital era: concept, practice, and development. Int. J. Comput. Internet Manage. 10(2), 1–22 (2002)
8. Zhu, Z.T., Yu, M.H., Riezebos, P.: A research framework of smart education. Smart Learn. Environ. 3, 1–17 (2016)
9. Ardashkin, I.B., Chmykhalo, A.Y., Makienko, M.A., Khaldeeva, M.A.: Smart-technologies in higher engineering education: modern application trends. Int. Conf. Res. Parad. Transform. Soc. Sci. 50, 57–64 (2018)
10. IASLE: Background: Smart learning. http://iasle.net/about-us/background/. Accessed 12 May 2023
11. Iqbal, H.M., Parra-Saldivar, R., Zavala-Yoe, R., Ramirez-Mendoza, R.A.: Smart educational tools and learning management systems: supportive framework. Int. J. Interact. Des. Manufac. 14, 1179–1193 (2020)
12. Spector, J.M.: Conceptualizing the emerging field of smart learning environments. Smart Learn. Environ. 1(1), 1 (2014). https://doi.org/10.1186/s40561-014-0002-7
13. Koper, R.: Conditions for effective smart learning environments. Smart Learn. Environ. 1(1), 1–17 (2014). https://doi.org/10.1186/s40561-014-0005-4
14. Krouska, A., Troussas, C., Sgouropoulou, C.: Mobile game-based learning as a solution in COVID-19 era: modeling the pedagogical affordance and student interactions.
Educ. Inform. Technol. 1–13 (2022)
15. Meng, Q., Jia, J., Zhang, Z.: A framework of smart pedagogy based on the facilitating of high order thinking skills. Interact. Technol. Smart Educ. 17(3), 251–266 (2020)
16. Romero, C., Ventura, S.: Educational data mining and learning analytics: an updated survey. Wiley Interdiscipl. Rev. Data Min. Knowl. Discov. 10(3), e1355 (2020)
17. Baker, R.S.J.D., Inventado, P.S.: Educational data mining and learning analytics. In: Larusson, J.A., White, B. (eds.) Learning Analytics: From Research to Practice. Springer, Berlin (2014)
18. Romero, C., Ventura, S.: Educational data science in massive open online courses. Wiley Interdiscipl. Rev. Data Min. Knowl. Discov. 7(1), e118 (2017)
19. Leitner, P., Khalil, M., Ebner, M.: Learning analytics in higher education—a literature review. In: Learning Analytics: Fundaments, Applications, and Trends: A View of the Current State of the Art to Enhance E-learning, pp. 1–23 (2017)
20. Long, P., Siemens, G.: What is learning analytics. In: ACM (2011)
21. Kabassi, K., Dragonas, I., Ntouzevic-Pilika, A.: Learning management systems in higher education in Greece: literature review. In: 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA), pp. 1–5. IEEE (2015)
22. Chytas, K., Tsolakidis, A., Skourlas, C.: Towards a framework for learning systems in smart universities. In: Kumar, V., Troussas, C. (eds.) ITS 2020. LNCS, vol. 12149, pp. 275–279. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-49663-0_32
23. Chytas, K., Tsolakidis, A., Karanikolas, N., Skourlas, C.: An integrated system for predicting students' academic performance in smart universities. In: 24th Pan-Hellenic Conference on Informatics, pp. 416–419. ACM (2020)
24. Karpathiotakis, M., Alagiannis, I., Heinis, T., Branco, M., Ailamaki, A.: Just-in-time data virtualization: lightweight data management with ViDa. In: Proceedings of the 7th Biennial Conference on Innovative Data Systems Research (CIDR) (2015)
25. Chytas, K., Tsolakidis, A., Triperina, E., Skourlas, C.: Educational data mining in the academic setting: employing the data produced by blended learning to ameliorate the learning process. Data Technol. Appl. 57(3), 366–384 (2023). https://doi.org/10.1108/DTA-06-2022-0252
Classification of Alzheimer's Disease Subjects from MRI Using Deep Convolutional Neural Networks

Orestis Papadimitriou1, Athanasios Kanavos1(B), Phivos Mylonas2, and Manolis Maragoudakis3

1 Department of Information and Communication Systems Engineering,
University of the Aegean, Samos, Greece {icsdd20016,icsdd20017}@icsd.aegean.gr 2 Department of Informatics and Computer Engineering, University of West Attica, Athens, Greece [email protected] 3 Department of Informatics, Ionian University, Corfu, Greece [email protected]
Abstract. The classification of Alzheimer's disease (AD) using deep learning techniques has shown promising results. However, achieving successful application in medical settings requires a combination of high precision, short processing time, and generalizability to different populations. In this study, we propose a convolutional neural network (CNN)-based classification algorithm that utilizes magnetic resonance imaging (MRI) scans from individuals with AD. Our models achieved average area under the curve (AUC) values of 0.91–0.94 for within-dataset recognition and 0.88–0.89 for between-dataset recognition. The proposed convolutional framework can be potentially applied to any image dataset, offering the flexibility to design a computer-aided diagnosis system targeting the prediction of various clinical conditions and neuropsychiatric disorders using multimodal imaging and tabular clinical data.

Keywords: Alzheimer Detection · Batch Normalization · Convolutional Neural Networks · Deep Learning · Dropout · MRI Images
1 Introduction Alzheimer’s disease is a progressive neurodegenerative disorder that affects millions of people worldwide. This devastating disease gradually impairs cognitive abilities, such as memory, language, and reasoning, ultimately leading to a loss of independence. Early diagnosis of Alzheimer’s disease plays a crucial role in improving patient outcomes, as timely intervention can help slow down the disease progression and enhance the individual’s quality of life [20]. In recent years, researchers have focused on developing accurate models for predicting the stage of Alzheimer’s disease [9]. These models have
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 277–286, 2023. https://doi.org/10.1007/978-3-031-44146-2_28
demonstrated superior performance compared to traditional techniques that rely on predefined features in various image processing and computer vision tasks. Moreover, in the biomedical field, CNN-based methods hold promise for discovering new imaging biomarkers [21]. Recent advancements have yielded highly accurate models for predicting the stage of Alzheimer's disease with remarkable precision [2], holding significant promise for enabling early intervention and improving patient care. Such models harness the potential of machine learning algorithms to analyze extensive datasets sourced from diverse avenues, including medical records, genetic information, and neuroimaging scans [23]. By employing advanced techniques such as deep learning algorithms and feature engineering, they proficiently detect and interpret patterns within these vast datasets, achieving accurate predictions of the Alzheimer's disease stage by leveraging multiple biomarkers, encompassing structural and functional neuroimaging, genetic data, and clinical features. Notably, a key strength of such models lies in their capacity to uncover intricate patterns that may elude human observation, as they operate on complex datasets. Moreover, they are designed to continually learn from data, progressively refining their accuracy over time; as a consequence, they emerge as powerful tools for predicting the stage of Alzheimer's disease [23]. We propose the adoption of a specialized CNN architecture based on deep learning techniques, specifically designed to discern between individuals at different stages of cognitive impairment: Mild Demented, Moderate Demented, Non Demented, and Very Mild Demented.
While the availability and cost-effectiveness of MRI scans can be advantageous, previous attempts to differentiate between healthy aging and Alzheimer's disease using volumetric analysis had notable limitations, such as small sample sizes and reliance on semi-automated segmentation techniques. Early applications of machine learning for Alzheimer's disease diagnosis from MRIs often relied on pre-selected discriminative attributes [13]. In contrast, our proposed approach employs a novel deep learning model, leveraging a unique CNN design, to accurately identify individuals with normal cognition, mild cognitive impairment (MCI), and mild Alzheimer's disease progression. The remaining sections of the paper are organized as follows: Sect. 2 provides a comprehensive review of the relevant literature pertaining to the problem under investigation. Section 3 delves into the foundational aspects of the methodology, including an exploration of convolutional neural networks, as well as implementation details such as TensorFlow, Keras, and batch normalization. In Sect. 4, we offer a detailed description of the deep learning structures employed in our study. Section 5 presents the research findings derived from our experiments. Finally, in Sect. 6, we summarize our contributions and propose avenues for future research.
2 Related Work

A critical task in automated diagnostics of Alzheimer's disease involves distinguishing individuals with varying degrees of cognitive impairment through MRI scans [10]. Earlier studies employed simple classifiers, such as support vector machines, on features derived from volumetric measurements of the hippocampus and other brain regions [12]. More recently, several deep learning techniques have been applied to tackle this task. For instance, Li et al. [14] utilized pretraining with a shallow autoencoder to perform classification on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Hon and Khan [7] employed state-of-the-art models, such as VGG and Inception [17], on the OASIS dataset [19]; they selected the most informative components from the 3D scans based on image enhancement techniques. Valliani and Soni [24] demonstrated that a ResNet [6] pretrained on ImageNet [3] outperformed a baseline 2D CNN. Additionally, Hosseini-Asl [8] evaluated a 3D CNN architecture on the ADNI dataset and data from the CADDementia challenge. Cheng [16] proposed a computationally efficient approach that utilizes large 3D patches extracted by specific CNNs; these patches are then combined by an additional CNN to generate the final output. Similarly, Lian [15] proposed a hierarchical CNN architecture that enables the automatic identification of significant patches. Khvostikov [11] employed Siamese networks to differentiate regions of interest surrounding the hippocampus, leveraging information from multiple imaging modalities. In a recent survey paper by Junhao et al. [25], several existing works are shown to suffer from data leakage resulting from problematic data splits, biased transfer learning, or the lack of an independent test set.
The authors further highlight that, in the absence of data leakage, CNNs achieve a precision range of 72% to 86% when distinguishing between individuals with Alzheimer's disease (AD) and healthy controls. Similarly, Fung et al. [5] investigated the impact of different data-splitting strategies on classification precision; they reported a significant drop in test precision, from 84% to 52%, for the three-class classification problem considered in their study when there was no patient overlap between the training and test sets. Likewise, Backstrom [1] conducted a study on the effect of splitting techniques and obtained comparable results for a two-way classification task. The detection of Alzheimer's disease (AD) poses a challenge due to the similarity of the MRI images of individuals with AD and healthy subjects. Several studies have explored the diagnosis of AD using MRI images, often focusing on modifying and optimizing various CNN models or ensembling them to achieve high-accuracy predictions. In contrast, our approach centers around highlighting the structural similarities among the different AD image classes, namely Non-Demented (ND), Very Mild Demented (VMD), Mild Demented (MD), and Moderately Demented (MDTD), while leveraging the variability between these classes to achieve robust and precise predictions for AD diagnosis.
3 Methodology Foundations

3.1 Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are powerful deep learning models that leverage the relationships among neighboring pixels in an image. During training, CNNs use randomly selected regions as input and employ transformed regions during the testing and recognition phase. The success of CNNs in image classification stems from their ability to learn features that align with the data points in the image. They are widely employed for automated feature extraction, as well as for image processing and segmentation tasks. A key aspect of CNN design is the convolution operation, where a dot product is computed between a filter and a region or object within the image [4]. The depth of the filters in a CNN is determined by the depth of the input data, while users have the flexibility to define the size of the filters and the sampling process. By combining convolution and pooling layers with a fully connected layer, CNNs can achieve accurate classification results. A notable advantage of CNNs is their ease of use for developers, particularly their ability to effectively leverage large amounts of training data, which is crucial for training robust models [26].

3.2 TensorFlow

TensorFlow is a widely used open-source software library that offers computational algebraic optimization techniques for the efficient evaluation of mathematical expressions. It enables the creation of dataflow graphs, which provide a visual representation of how data flow through a computation, with each node representing an algebraic function. For classification, convolutional and pooling layers are employed in conjunction with a fully connected layer; the output from these layers can additionally be passed through a fully connected layer to reduce the dimensionality of the data.
The overall architecture is akin to a Multilayer Perceptron (MLP), featuring input neurons, hidden layers, and output neurons interconnected to facilitate information flow. By processing data through a series of interconnected nodes, TensorFlow enables efficient image classification and analysis [22].

3.3 Keras

Keras is a Python-based API built on top of the TensorFlow platform, providing a user-friendly and efficient interface for deep learning tasks. It is specifically designed to support rapid experimentation, offering an intuitive set of tools that streamline the development of specialized machine learning models, regardless of the underlying library being used. With Keras, developers can concentrate on fundamental deep learning concepts, such as creating layers and constructing neural networks, while benefiting from the abstraction of tensors and their algebraic properties. The Sequential API in Keras is particularly well-suited for models that involve multiple layers, where
Classification of Alzheimer’s Disease Subjects from MRI
281
each layer takes one tensor as input and produces one tensor as output, allowing for straightforward construction of the model architecture [18].

3.4 Batch Normalization

Batch normalization (BN) is a powerful technique used in complex deep neural networks to regulate activation levels. It has gained popularity in deep learning due to its ability to enhance accuracy and expedite training. Although the benefits of batch normalization are well established, the underlying mechanisms behind its effectiveness are still not fully understood. By normalizing the mean and variance of an input layer, batch normalization mitigates the issue of internal covariate shift, leading to significant improvements in training speed for deep neural networks. Furthermore, batch normalization offers additional benefits. It helps achieve a more desirable gradient distribution across the network, reducing the reliance on the initial parameter values or their range. This property enables faster learning rates while mitigating the risk of overfitting. Batch normalization also acts as a regularizer for the model, preventing it from becoming stuck in a saturated state and allowing the use of saturating non-linearities. By facilitating a more stable and effective training process, batch normalization contributes to improved generalization and overall model performance.

3.5 Dropout

Deep neural networks, with their extensive parameterization, have demonstrated remarkable performance in machine learning tasks. However, a major challenge faced by these networks is overfitting, wherein the model becomes too specialized to the training data, leading to poor generalization. Overfitting becomes particularly pronounced when multiple large neural networks are combined, as generating predictions with each network is computationally expensive. To mitigate overfitting, the technique of dropout can be employed.
Dropout involves randomly deactivating neural network units, along with their connections, during training. By doing so, excessive co-adaptation among units is prevented, promoting better generalization and reducing overfitting. During the training phase, multiple networks with reduced parameters are employed to process the samples. By randomly deactivating units and connections using dropout, these networks become thinned versions. In the testing phase, the combined effect of these thinned networks can be approximated by utilizing a single unthinned network with smaller weights. This approach, compared to other regularization methods, offers a notable reduction in overfitting. By leveraging the power of ensemble learning and the regularization effect of dropout, the model achieves improved generalization and robustness.
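As a minimal sketch, the unit-deactivation idea can be illustrated with inverted dropout, the variant commonly used in practice: the surviving units are rescaled during training so that the single unthinned network can be used unchanged at test time. The drop probability and layer size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train(activations, p_drop):
    """Training phase: randomly zero units ("thinning" the network) and
    scale the survivors so the expected activation is unchanged."""
    keep = 1.0 - p_drop
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

def dropout_apply_test(activations):
    """Test phase with inverted dropout: the unthinned network is used
    as-is, since the rescaling was already applied during training."""
    return activations

acts = np.ones((1000,))
dropped = dropout_train(acts, p_drop=0.5)
# Roughly half the units are zeroed, yet the mean stays near 1.0.
```

In the classic formulation of the paper's description, the rescaling is instead applied at test time by using smaller weights; the two variants are equivalent in expectation.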
4 Proposed Architecture

The objective of this paper is to examine different deep learning models that combine artificial intelligence principles and image classification techniques to automatically predict Alzheimer’s disease. Three models are proposed and initially used with three
282
O. Papadimitriou et al.
distinct approaches. The differences among these networks are shown in Table 1.

Table 1. Architectures

Number | Architecture
1st    | (Conv2D × 2 - BatchNorm - MaxPooling2D - Dropout) × 2
2nd    | ((Conv2D - BatchNorm) × 2 - MaxPooling2D - Dropout) × 2
3rd    | (Conv2D × 3 - BatchNorm - MaxPooling2D - Dropout) × 2 - Conv2D × 2 - BatchNorm - MaxPooling2D - Dropout
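As an illustration, the 1st architecture from Table 1 could be expressed with the Keras Sequential API roughly as follows. The filter counts, kernel size, 176×176 grayscale input shape, and dropout rate are assumptions made for this sketch; the paper does not report those values.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_first_architecture(input_shape=(176, 176, 1), n_classes=4):
    """(Conv2D x 2 - BatchNorm - MaxPooling2D - Dropout) x 2, plus a head."""
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for filters in (32, 64):  # one pass per repetition of the block
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D())
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model

model = build_first_architecture()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The 2nd and 3rd architectures differ only in how the Conv2D and BatchNormalization layers are interleaved and repeated, so the same loop structure can be adapted for them.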
5 Evaluation

5.1 Dataset

The dataset1 for Alzheimer’s disease classification consists of four types of images: Mild Demented, Moderate Demented, Non-Demented and Very Mild Demented. It includes 6,400 MRI images derived from whole-slide images. Each image is labeled with a tag indicating the presence of Alzheimer’s disease. The dataset is organized with accurate categorization of the images. The distribution of the training and test sets is shown in Table 2.

Table 2. Distribution of Class Instances

Class              | Total Images | Training Set | Testing Set
Mild Demented      | 896          | 717          | 179
Moderate Demented  | 64           | 52           | 12
Non Demented       | 3,200        | 2,560        | 640
Very Mild Demented | 2,240        | 1,792        | 448
Total              | 6,400        | 5,121        | 1,279
1 https://www.kaggle.com/datasets/tourist55/alzheimers-dataset-4-class-of-images.
5.2 Experiments

Table 3 provides the outcomes for the three architectures in terms of loss, accuracy and time. Moreover, Figs. 1, 2 and 3 show the accuracy, loss and time for the three models for batch sizes of 128 and 256.

Table 3. Experimental Evaluation for three architectures

       | Batch Size = 128         | Batch Size = 256
Epochs | Loss   | Accuracy | Time | Loss   | Accuracy | Time

1st: (Conv2D × 2 - BatchNorm - MaxPooling2D - Dropout) × 2
1      | 1.425  | 0.4938   | 644  | 1.516  | 0.4772   | 686
5      | 0.6317 | 0.7510   | 632  | 0.7536 | 0.7110   | 666
10     | 0.3536 | 0.8619   | 634  | 0.4667 | 0.8189   | 678
15     | 0.2125 | 0.9224   | 636  | 0.3028 | 0.8780   | 686
20     | 0.1389 | 0.9500   | 643  | 0.1895 | 0.9273   | 722

2nd: ((Conv2D - BatchNorm) × 2 - MaxPooling2D - Dropout) × 2
1      | 1.692  | 0.4159   | 656  | 1.373  | 0.4833   | 490
5      | 0.6316 | 0.7549   | 476  | 0.7240 | 0.7091   | 480
10     | 0.3072 | 0.8782   | 473  | 0.4310 | 0.8260   | 478
15     | 0.1665 | 0.9370   | 481  | 0.2696 | 0.8902   | 480
20     | 0.1098 | 0.9590   | 475  | 0.1819 | 0.9273   | 474

3rd: (Conv2D × 3 - BatchNorm - MaxPooling2D - Dropout) × 2 - Conv2D × 2 - BatchNorm - MaxPooling2D - Dropout
1      | 1.708  | 0.4342   | 1052 | 1.972  | 0.3656   | 972
5      | 0.7055 | 0.7291   | 896  | 0.8046 | 0.6966   | 583
10     | 0.3745 | 0.8533   | 895  | 0.4572 | 0.8169   | 644
15     | 0.2025 | 0.9199   | 914  | 0.2769 | 0.8919   | 595
20     | 0.1354 | 0.9475   | 917  | 0.2078 | 0.9214   | 595
For the first architecture, using a batch size of 128, the loss decreased from 1.425 to 0.1389 over the course of training. The accuracy started at 49% and gradually increased to 95% by the end of training. The runs for this architecture took approximately 630 to 720 s to execute. Similarly, when the batch size was increased to 256, the loss dropped from 1.516 to 0.1895, and the accuracy improved from 47% to 92% throughout the training process.
In the second architecture, with a batch size of 128, the loss decreased from 1.692 to 0.1098, and the accuracy improved from 41% to 95% throughout training; execution times were within the range of roughly 470 to 490 s. With a batch size of 256, the loss ranged from 1.373 to 0.1819, and the accuracy improved from 48% to 92%. Moving on to the third architecture, using a batch size of 128, the loss decreased from 1.708 to 0.1354, and the accuracy improved from 43% to 94%; execution times ranged from roughly 580 to 1,050 s. With a batch size of 256, the loss ranged from 1.972 to 0.2078, and the accuracy improved from 36% to 92% throughout the training process.
Fig. 1. Accuracy for different batch sizes for the three proposed models
Fig. 2. Loss for different batch sizes for the three proposed models
Fig. 3. Time for different batch sizes for the three proposed models
6 Conclusions and Future Work

This study focuses on the implementation of a set of models aimed at the detection of Alzheimer’s disease. To evaluate their performance, experiments were conducted using batch sizes of 128 and 256. The results obtained from these experiments provide valuable insights into the effectiveness of the models in accurately identifying Alzheimer’s disease. In future research, it is recommended to explore various combinations of the proposed models to determine whether further gains in accuracy can be achieved. By experimenting with different architectures and hyperparameters, or by incorporating additional techniques, it may be possible to improve the performance of the classifiers. Furthermore, validating the classifiers on larger datasets would provide stronger evidence of their accuracy and generalizability. Such investigations would contribute to the overall understanding of the models’ capabilities and their potential applicability in real-world scenarios.
References

1. Bäckström, K., Nazari, M., Gu, I.Y., Jakola, A.S.: An efficient 3D deep convolutional network for Alzheimer’s disease diagnosis using MR images. In: 15th IEEE International Symposium on Biomedical Imaging (ISBI), pp. 149–153 (2018)
2. Chu, C., Hsu, A.L., Chou, K.H., Bandettini, P., Lin, C., Initiative, A.D.N., et al.: Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images. Neuroimage 60(1), 59–70 (2012)
3. Deng, J., Dong, W., Socher, R., Li, L., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255 (2009)
4. Ebrahimighahnavieh, M.A., Luo, S., Chiong, R.: Deep learning to detect Alzheimer’s disease from neuroimaging: a systematic literature review. Comput. Methods Programs Biomed. 187, 105242 (2020)
5. Fung, Y.R., Guan, Z., Kumar, R., Wu, Y.J., Fiterau, M.: Alzheimer’s disease brain MRI classification: challenges and insights. CoRR abs/1906.04231 (2019)
6. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
7. Hon, M., Khan, N.M.: Towards Alzheimer’s disease classification through transfer learning. In: IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1166–1169 (2017)
8. Hosseini-Asl, E., Gimel’farb, G.L., El-Baz, A.: Alzheimer’s disease diagnostics by a deeply supervised adaptable 3D convolutional network. CoRR abs/1607.00556 (2016)
9. Jagust, W.: Imaging the evolution and pathophysiology of Alzheimer disease. Nat. Rev. Neurosci. 19(11), 687–700 (2018)
10. Kanavos, A., Kolovos, E., Papadimitriou, O., Maragoudakis, M.: Breast cancer classification of histopathological images using deep convolutional neural networks. In: 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), pp. 1–6. IEEE (2022)
11. Khvostikov, A.V., Aderghal, K., Benois-Pineau, J., Krylov, A.S., Catheline, G.: 3D CNN-based classification using sMRI and MD-DTI images for Alzheimer disease studies. CoRR abs/1801.05968 (2018)
12. Kundaram, S.S., Pathak, K.C.: Deep learning-based Alzheimer disease detection. In: 4th International Conference on Microelectronics, Computing and Communication Systems (MCCS), pp. 587–597 (2021)
13. Lerch, J.P., et al.: Automated cortical thickness measurements from MRI can accurately separate Alzheimer’s patients from normal elderly controls. Neurobiol. Aging 29(1), 23–30 (2008)
14. Li, F., Tran, L., Thung, K.H., Ji, S., Shen, D., Li, J.: Robust deep learning for improved classification of AD/MCI patients. In: 5th International Workshop on Machine Learning in Medical Imaging (MLMI), pp. 240–247 (2014)
15. Lian, C., Liu, M., Zhang, J., Shen, D.: Hierarchical fully convolutional network for joint atrophy localization and Alzheimer’s disease diagnosis using structural MRI. IEEE Trans. Pattern Anal. Mach. Intell. 42(4), 880–893 (2018)
16. Lv, J., Shao, X., Xing, J., Cheng, C., Zhou, X.: A deep regression architecture with two-stage re-initialization for high performance facial landmark detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3691–3700 (2017)
17. Marcus, D.S., Fotenos, A.F., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J. Cogn. Neurosci. 22(12), 2677–2684 (2010)
18. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)
19. Plant, C., et al.: Automated detection of brain atrophy patterns based on MRI for the prediction of Alzheimer’s disease. NeuroImage 50(1), 162–174 (2010)
20. Razzak, M.I., Naz, S., Zaib, A.: Deep learning for medical image processing: overview, challenges and the future. Classif. BioApps: Autom. Decision Making, 323–350 (2018)
21. Rusinek, H., et al.: Alzheimer disease: measuring loss of cerebral gray matter with MR imaging. Radiology 178(1), 109–114 (1991)
22. Scheltens, P., et al.: Alzheimer’s disease. Lancet 388(10043), 505–517 (2016)
23. Tong, T., Wolz, R., Gao, Q., Guerrero, R., Hajnal, J.V., Rueckert, D.: Multiple instance learning for classification of dementia in brain MRI. Med. Image Anal. 18(5), 808–818 (2014)
24. Valliani, A., Soni, A.: Deep residual nets for improved Alzheimer’s diagnosis. In: 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics (BCB), p. 615 (2017)
25. Wen, J., et al.: Convolutional neural networks for classification of Alzheimer’s disease: overview and reproducible evaluation. CoRR abs/1904.07773 (2019)
26. Zoupanos, S., Kolovos, S., Kanavos, A., Papadimitriou, O., Maragoudakis, M.: Efficient comparison of sentence embeddings. In: 12th Hellenic Conference on Artificial Intelligence (SETN), pp. 11:1–11:6. ACM (2022)
A Computing System for the Recording and Integrating Medical Data of Patients Undergoing Hemodialysis Treatment Dimitrios Tsakiridis1(B) and Anastasios Vasiliadis2 1 Farsalon, 15 Street, P.C. 41223 Larissa, Greece
[email protected] 2 Tilemahou, 14 Street, P.C. 41336 Larissa, Greece
Abstract. Continuous monitoring and accurate assessment of medical data are critical for managing patients undergoing hemodialysis treatment. A novel hemodialysis unit computing system has been developed and implemented to collect and integrate patient medical data from hemodialysis machines and wearable sensors, allowing for comprehensive patient evaluation by medical staff. By providing medical staff with actionable insights, the computing system aims to enhance patient evaluation and enable early intervention in response to adverse events, ultimately improving patient outcomes and overall treatment efficiency. The effectiveness of the proposed computing system is demonstrated through real-world case studies and performance metrics. The study concludes by discussing the potential implications of this innovative approach for the future of hemodialysis management and its applicability to other medical domains, highlighting the transformative potential of integrated computing systems and wearable sensors in healthcare. Keywords: patient monitoring · hemodialysis · computing
1 Introduction

Hemodialysis care centers play an important part in the treatment of patients with end-stage renal disease (ESRD). Despite numerous challenges, these centers strive to support patients’ quality of life and optimize the time utilization of medical staff [1, 2]. Effective management and utilization of medical data is a significant hurdle in hemodialysis care centers. Historically, this data has been managed through paper-based records or fragmented electronic systems, resulting in inefficiencies, errors, and suboptimal patient outcomes [3]. Interoperability is also essential for all dialysis patients, including those receiving in-center hemodialysis (HD), home HD, and peritoneal dialysis, due to the scattered nature of data across different care settings such as primary care providers, homes, dialysis clinics, other specialists, and hospitals [4].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 287–291, 2023. https://doi.org/10.1007/978-3-031-44146-2_29
2 The Chronic Disease Patient Management System (CDPMS) Project

The CDPMS project focuses on the design and development of a software solution that collects data from medical devices, stores it in the patient’s medical record, and makes it available to the attending physician at any time. Furthermore, the system enables patients to record measurements through wearable devices during their time outside the hemodialysis unit. Two private Chronic Hemodialysis Units, in the cities of Veria and Trikala, were used as a case study. Together, the two clinics serve 159 patients (112 men, 47 women), with a medical staff of 8 doctors and 30 nurses, and 43 hemodialysis machines. Hemodialysis machines, electronic scales, blood pressure monitors and patients can feed the database with hemodialysis data and other records, such as indices, either automatically through connected devices or manually via the patient. Our system can collect and store patient medical data from devices that can connect to a network, as well as from traditional medical instruments such as conventional thermometers. A mobile app installed on the patient’s smartphone (iPhone or Android) synchronizes the health and activity data collected and stored on the phone. This is done by integrating HealthKit on iOS devices and the Health Connect API on Android devices. Medical staff can insert the medical data they collect using another cross-platform mobile app, built as a PWA and installed on their phones or tablets, which provides web forms for manually entering data collected for a patient, such as blood pressure, heart rate, body temperature, etc. Both of the above apps communicate with a backend service hosted on Azure using HTTP REST services, to which they send the collected data to be stored and processed. Data is persisted in an Azure SQL database, Azure Cosmos DB, and Azure Storage.
Azure API for FHIR is used to transform data from these sources into interoperable data, on which Azure Analytics services and Machine Learning services are applied. In addition, the system is designed to collect, store, and manage data from hemodialysis units throughout the duration of the hemodialysis process. The unsolicited periodic data transfer allows CDPMS to periodically obtain the patient’s clinical information for the running treatment: every 60 s, the dialysis system transfers data in the form of ORU^R31 messages. The CDPMS project is a modern web application that is usable from virtually any device or screen size. The main entry point is a Single Page Application (SPA), hosted on a cloud web server and accessible through any modern browser. Users are required to authenticate to access the app. Once authenticated, they can view information and submit new data through the provided HTML forms. Once validated, the data is submitted to a Backend Mobile API to be analyzed and stored in persistent storage. All interactions with the backend are done using REST calls. For forms that process and submit sensitive data, the POST method is used. The data is encrypted in transit (HTTPS) and is only accessible by the backend that processes the request. The Backend Mobile API is a web application that exposes a collection of HTTP REST services. It also hosts an Identity Server providing authentication and authorization services using the OAuth 2.0 protocol. Data is protected by a combination of user roles and authorization policies to make sure no user has access to data that he
is not allowed to see. The API is flexible and can serve the data it retrieves from persistent storage in the format required by the client, e.g., XML, JSON, text, or binary, though JSON is the most used. All communication is done strictly over HTTPS. The persistent storage is an Azure SQL database. Figure 1 is a sequence diagram showing the use of some of the HTTP-based services exposed by the Mobile Backend REST API. All calls to protected resources must provide an Authorization HTTP header with a valid authorization token. The scenario described below is for a medical employee successfully authenticating, viewing a list of hospitalizations, selecting a specific one, and adding new vital sign readings (blood glucose and body temperature) just recorded for a patient.
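The request flow just described can be sketched as follows. The base URL, endpoint path, and field names are hypothetical, since the paper does not publish its API surface; only the use of HTTPS POST, JSON bodies, and OAuth 2.0 bearer tokens follows the description above.

```python
import json

API_BASE = "https://cdpms.example.com/api"  # hypothetical base URL

def build_vitals_request(token, hospitalization_id, readings):
    """Return the (url, headers, body) triple for a POST of new vital
    sign readings; sensitive data travels in the body, never the URL."""
    url = f"{API_BASE}/hospitalizations/{hospitalization_id}/vitals"
    headers = {
        "Authorization": f"Bearer {token}",  # OAuth 2.0 access token
        "Content-Type": "application/json",
    }
    body = json.dumps({"readings": readings})
    return url, headers, body

url, headers, body = build_vitals_request(
    "access-token-placeholder", 42,
    [{"type": "blood-glucose", "value": 104, "unit": "mg/dL"},
     {"type": "body-temperature", "value": 36.8, "unit": "C"}])
```

A client would then send `url`, `headers`, and `body` with any HTTP library over HTTPS; the backend validates the token and the payload before persisting the readings.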
3 The CDPMS Workflow

Upon the patient’s admission to the unit, a nurse collects the initial vital signs, such as weight and systolic and diastolic blood pressure. Subsequently, the nurse loads the hemodialysis conditions for the patient into the hemodialysis machine. During the session, at regular intervals, the hemodialysis machine collects data such as the patient’s blood pressure, blood flow, dialysate flow, dialysate temperature, and dialysate conductivity.
Fig. 1. Http calls sequence diagram for viewing a hospitalized patient’s vital signs data.
All this information is sent through the API to the central storage space of the CDPMS in the cloud, so that it is available to any member of the medical staff who needs access to it. At regular intervals defined by the treating physician, the patient records their weight and blood pressure via smart devices. These measurements are transferred to the patient’s medical record through a mobile application and stored in the CDPMS cloud data storage.
4 Evaluation of the Collected Data

For medical and nursing staff to carry out their work of improving the patient’s condition, they must be informed of any deviations from the normal limits of the dialysis indicator measurements. This should happen in real time, both during the dialysis session and afterwards. Timely notification and immediate management of these deviations prevent deterioration of the patient’s health status and improve both the patient’s health condition and overall quality of life, restoring a sense of stability to their life. For instance, if a patient’s weight rises above the acceptable level, indicating fluid retention beyond normal, the doctor can intervene and address the issue as needed. During the pilot implementation of the CDPMS project in the reference units, the time spent by doctors on daily tasks per patient was reduced by 20%, while for nurses a reduction of about 27% was noted, compared to the previous handwritten systems used by the units.
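The deviation check can be sketched in a few lines. The limit values and measurement names below are illustrative assumptions, since the actual per-patient limits are set by the treating physician.

```python
# Hypothetical per-patient normal ranges, set by the treating physician.
LIMITS = {"weight_kg": (68.0, 72.0), "systolic_mmHg": (100, 150)}

def deviations(measurements, limits=LIMITS):
    """Return the measurements that fall outside their normal range."""
    out = []
    for name, value in measurements.items():
        low, high = limits.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            out.append((name, value))
    return out

# Weight exceeds its upper limit, so the responsible doctor is notified.
alerts = deviations({"weight_kg": 74.5, "systolic_mmHg": 128})
```

In the deployed system such a check would run server-side on every incoming reading, with the alert delivered as a message to the responsible doctor.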
5 Conclusion

In this project, we presented a system for recording the data of patients with chronic kidney disease during dialysis sessions, as well as during periods when they are not undergoing treatment. Data is collected not through entries by nursing staff but from the machines located within the unit and from the patient via a mobile application. As mentioned, the standard for dialysis is three sessions a week (Monday - Wednesday - Friday or Tuesday - Thursday - Saturday). The information generated on dialysis days by the machines used by the unit is collected via the CDPMS software using REST API technology and stored in an SQL database on Microsoft Azure (cloud), while the medical staff access it for monitoring through a web interface. On days when the patient is not in the unit, they can enter corresponding measurements from home into the same database through a mobile application. Furthermore, the system sends a message to the responsible doctor in case a patient, either within or outside the unit, presents measurements beyond the normal range. In other words, the doctor has a comprehensive view of the patient’s condition, and we believe that in this way better conditions are achieved, consequently leading to an improvement in the patients’ quality of life. The system posed a few challenges during its design and implementation. Medical data came in different formats and standards, and storing it required much thought up front so that the right storage mechanism could be used for storing and retrieving it efficiently. Transforming it into a meaningful outcome for medical staff,
hospital administrative employees and patients was also a challenge. Patient trust needs to be gained so that they give their consent and permission to share the data collected through their personal devices. Fortunately, Microsoft Azure provides the necessary security and compliance for all this to work, such as end-to-end security with Azure Sphere and Azure Security Center, HIPAA compliance, and HITRUST certifications that keep patient data secure and privacy protected. This comes with an extra cost, though, due to the amount of workload and services required to power the system; the latter is a big barrier for healthcare organizations.

Acknowledgment. This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-02490).
References

1. Van der Sluijs, E., et al.: Good practices for dialysis education, treatment, and eHealth: a scoping review (2021)
2. Soi, V.: Prevention of catheter-related bloodstream infections in patients on hemodialysis: challenges and management strategies. Int. J. Nephrol. Renovascular Disease, 95–103 (2022)
3. Russell, J.S.C.: A peer-to-peer mentoring program for in-center hemodialysis: a patient-centered quality improvement program. Nephrol. Nurs. J., 481–490 (2017)
4. Kelly, Y.P.: Interoperability and patient electronic health record accessibility: opportunities to improve care delivery for dialysis patients. AJKD 76(3), 427–430 (2020)
Bidirectional Transformers as a Means of Efficient Building of Knowledge Bases: A Case Study with XLM-RoBERTa

Alexander Katyshev, Anton Anikin(B), and Alexey Zubankov
Volgograd State Technical University, Volgograd, Russia [email protected]
Abstract. Knowledge bases are an essential component of various natural language processing tasks, providing structured information and facilitating better comprehension of unstructured data. Manually constructing knowledge bases is a time-consuming and labor-intensive process. This paper explores the use of bidirectional transformer models, specifically XLM-RoBERTa, to efficiently build knowledge bases. We focus on the extraction of concepts and relations to construct ontologies. Our approach demonstrates significant advantages over traditional methods, as well as some limitations. The study is carried out using Russian language books as a source for building the knowledge base. Keywords: ontologies · concepts · semantics · relations between concepts · ontological graph · machine learning · neural networks · transformers
1 Introduction

Knowledge bases (KBs) play a critical role in many natural language processing (NLP) tasks, such as question answering, machine translation, and information extraction. The primary aim of KBs is to represent human knowledge in a structured and machine-readable format. However, manually creating a KB is labor-intensive and time-consuming, as it involves the extraction of information from vast amounts of unstructured data. Ontologies offer a means to streamline the process of building KBs [2]. They are formal representations of concepts and relationships within a specific domain, providing structure to the data and enabling more efficient knowledge acquisition [1]. The construction of ontologies typically involves the following steps: (1) extracting concepts, (2) identifying relations between them [10], and (3) organizing the concepts hierarchically. Automating these steps can significantly reduce the time and effort required for building KBs [6]. This paper proposes the use of bidirectional transformer models, with a focus on the XLM-RoBERTa model, for the efficient construction of KBs. We describe the training parameters required for using Russian language books as a source and discuss the advantages and disadvantages of this approach.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 292–297, 2023. https://doi.org/10.1007/978-3-031-44146-2_30
2 Related Work

The XLM-RoBERTa model is a variant of RoBERTa, a robustly optimized BERT model, designed for cross-lingual understanding [3]. It has been pre-trained on large-scale multilingual corpora, making it suitable for various NLP tasks, including information extraction, sentiment analysis, and machine translation. The model’s bidirectional transformer architecture enables it to capture both left and right context, providing a better understanding of the input text. Previous work has shown the effectiveness of transformer-based models for extracting concepts and relations from text [4, 11]. In this study, we explore the potential of XLM-RoBERTa for building ontologies for KBs. Our approach involves fine-tuning the model on a corpus of Russian language books to extract relevant information for constructing ontologies. To train the XLM-RoBERTa model, we use the following parameters: a learning rate of 3e−5, a batch size of 32, and a maximum sequence length of 512 tokens. We employ the Adam optimizer [7] with a linear learning rate schedule and a warm-up period of 10% of the total training steps. The model is fine-tuned for 3 epochs on a dataset comprising Russian books from various domains. The training parameters are summarized in Table 1.

Table 1. Training parameters for the XLM-RoBERTa model

Parameter           | Value
Dataset             | Russian language books
Learning rate       | 1e−5
Batch size          | 16
Max sequence length | 512 tokens
Epochs              | 3
Early stopping      | Validation loss
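The optimizer schedule described above, linear warm-up over the first 10% of training steps followed by linear decay, can be sketched in plain Python. The peak rate follows the value in Table 1; the total step count is an illustrative assumption.

```python
def lr_at_step(step, total_steps, peak_lr=1e-5, warmup_frac=0.10):
    """Linear warm-up to peak_lr, then linear decay back to zero."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # warm-up ramp
    # linear decay from peak_lr down to 0 over the remaining steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

total = 1_000  # illustrative number of training steps
schedule = [lr_at_step(s, total) for s in range(total + 1)]
```

In practice this shape corresponds to the standard "linear schedule with warm-up" used when fine-tuning transformer models; the optimizer consumes one value of the schedule per step.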
3 Results

To evaluate the performance of our XLM-RoBERTa based approach for ontology construction from Russian language books, we used a dataset containing books related to programming. We compared our method with other state-of-the-art approaches in terms of concept extraction, relationship identification, and overall ontology quality. The following models were considered for comparison:

– BERT-base Multilingual (BERT-base M) [4]
– XLM [8]
– mBART [9]
– RuGPT-3 [5]
3.1 Dataset and Evaluation Metrics

The dataset for evaluation consists of a collection of Russian programming books, covering topics such as data structures, algorithms, programming languages, and software development practices. We randomly selected a subset of this dataset for manual annotation by domain experts, who identified concepts and relationships to construct a gold-standard ontology. The annotated dataset was divided into a 70–30% split for training and testing. The evaluation metrics used for comparing the models are:

– Precision: the proportion of correctly identified concepts or relationships out of the total number of identified concepts or relationships.
– Recall: the proportion of correctly identified concepts or relationships out of the total number of concepts or relationships in the gold-standard ontology.
– F1-score: the harmonic mean of precision and recall, providing a balanced measure of the model’s performance.

3.2 Results and Comparison

Table 2 presents the performance of our XLM-RoBERTa approach compared to other models on the task of ontology construction from Russian programming books.

Table 2. Comparison of our XLM-RoBERTa approach with other state-of-the-art models for ontology construction from Russian programming books.

Model              | Precision | Recall | F1-score
BERT-base M        | 0.76      | 0.71   | 0.73
XLM                | 0.81      | 0.74   | 0.77
mBART              | 0.84      | 0.79   | 0.81
RuGPT-3            | 0.85      | 0.82   | 0.83
XLM-RoBERTa (ours) | 0.89      | 0.86   | 0.87
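The F1-scores in Table 2 are the harmonic mean of the precision and recall columns, and each row can be checked directly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# XLM-RoBERTa row of Table 2: precision 0.89, recall 0.86
print(round(f1_score(0.89, 0.86), 2))  # 0.87
```

The same check reproduces the other rows of the table, e.g. mBART's 0.84/0.79 yields 0.81.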
As shown in Table 2, our XLM-RoBERTa approach outperforms the other state-of-the-art models on the ontology construction task, achieving an F1-score of 0.87. This result highlights the potential of bidirectional transformer models, specifically XLM-RoBERTa, for efficiently constructing knowledge bases from Russian language books.

3.3 Discussion

The performance improvements demonstrated by our XLM-RoBERTa approach can be attributed to the following factors:

– The bidirectional transformer architecture of XLM-RoBERTa captures both left and right context, enabling a better understanding of the input text and improving concept extraction and relationship identification.
Bidirectional Transformers as a Means of Efficient
295
– The pre-training of XLM-RoBERTa on a large-scale multilingual corpus provides a strong foundation for fine-tuning on specific tasks and domains, such as the Russian programming books used in this study. – The fine-tuning process on the domain-specific dataset ensures that the model is tailored to the task of ontology construction in the programming domain, leading to improved performance compared to more general-purpose models. These factors contribute to the higher precision, recall, and F1-score achieved by our XLM-RoBERTa approach compared to other state-of-the-art models. The results highlight the potential of using bidirectional transformer models for the efficient construction of knowledge bases from Russian language books, specifically in the programming domain. However, it is essential to consider that the performance of our approach may vary across different domains, languages, and text sources. Further research is needed to generalize the approach and optimize its performance for a broader range of applications. Moreover, it would be valuable to investigate the integration of external knowledge sources, such as existing ontologies or knowledge graphs, to enhance the model’s performance further.
4 Pros and Cons

Our approach of using XLM-RoBERTa for the construction of KBs presents several advantages and disadvantages, which we discuss in this section.

4.1 Pros

– Automation: The primary advantage of using XLM-RoBERTa is the automation of the ontology construction process. By fine-tuning the model on domain-specific texts, we can efficiently extract concepts and relationships, reducing the time and effort required for manual annotation.
– Language Independence: The XLM-RoBERTa model is pretrained on a large multilingual corpus, making it suitable for various languages. This versatility enables the model to be applied to a wide range of text sources and domains, including the Russian language books used in this study.
– Contextual Understanding: The bidirectional transformer architecture of XLM-RoBERTa allows it to capture contextual information from both left and right contexts. This capability enhances the model's understanding of the input text, improving the accuracy of concept and relationship extraction.
– Scalability: Our approach is highly scalable, as it can be applied to large-scale text sources to construct comprehensive KBs. As the amount of available text data grows, the model's performance is likely to improve further, leading to more accurate and complete KBs.
296
A. Katyshev et al.
4.2 Cons

– Computational Requirements: Fine-tuning XLM-RoBERTa on large-scale text sources requires significant computational resources, including powerful GPUs and large amounts of memory. This requirement may limit the applicability of our approach for organizations with limited computational resources.
– Model Complexity: The XLM-RoBERTa model has a complex architecture with a large number of parameters. This complexity can make it challenging to interpret the model's decisions and understand its behavior, which may be a concern for applications requiring explainability.
– Domain Adaptation: While XLM-RoBERTa can be fine-tuned on domain-specific texts, the quality of the extracted concepts and relations may vary depending on the domain. Some domains may require more sophisticated approaches or additional pre-processing to ensure accurate and meaningful extraction of information.
– Noise Sensitivity: Transformer models like XLM-RoBERTa can be sensitive to noise in the input data. This sensitivity may result in incorrect extractions, particularly when dealing with low-quality or noisy text sources.
5 Conclusion In this paper, we have explored the use of bidirectional transformer models, specifically XLM-RoBERTa, for the efficient construction of knowledge bases. We demonstrated the effectiveness of our approach in extracting concepts and relations from Russian language books to build ontologies. Our method offers several advantages, including automation, language independence, contextual understanding, and scalability. However, there are also some limitations, such as computational requirements, model complexity, domain adaptation, and noise sensitivity. Future work could involve investigating alternative transformer models, exploring techniques to improve the interpretability of the models, and developing methods for more robust extraction of concepts and relations in noisy or challenging domains. Additionally, the integration of external knowledge sources, such as pre-existing ontologies or knowledge graphs, could further enhance the performance of our approach.
References

1. Anikin, A., Kultsova, M., Zhukova, I., Sadovnikova, N., Litovkin, D.: Knowledge based models and software tools for learning management in open learning network. In: Kravets, A., Shcherbakov, M., Kultsova, M., Iijima, T. (eds.) Knowledge-Based Software Engineering. JCKBSE 2014. Communications in Computer and Information Science, vol. 466, pp. 156–171. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11854-3_15
2. Anikin, A., Sychev, O., Gurtovoy, V.: Multi-level modeling of structural elements of natural language texts and its applications. In: Samsonovich, A.V. (ed.) BICA 2018. AISC, vol. 848, pp. 1–8. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-99316-4_1
3. Conneau, A., et al.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186 (2019)
5. Karpukhin, V., Baranchukov, A., Burtsev, M., Tsetlin, Y., Gusev, G.: RuGPT-3: large-scale Russian language models with few-shot learning capabilities. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2021)
6. Katyshev, A., Anikin, A., Denisov, M., Petrova, T.: Intelligent approaches for the automated domain ontology extraction. In: Yang, X.S., Sherratt, R.S., Dey, N., Joshi, A. (eds.) Proceedings of Fifth International Congress on Information and Communication Technology. ICICT 2020. Advances in Intelligent Systems and Computing, vol. 1183, pp. 410–417. Springer, Singapore (2021). https://doi.org/10.1007/978-981-15-5856-6_41
7. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
8. Lample, G., Conneau, A.: Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291 (2019)
9. Liu, Y., et al.: Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210 (2020)
10. Patil, N., Patil, A., Pawar, B.: Named entity recognition using conditional random fields. Procedia Comput. Sci. 167, 1181–1188 (2020)
11. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237 (2019)
UP V-Ikot: Augmented Reality Mobile Application to Assist Campus Visits and Tours Inside the UP Diliman Campus

Amadeus Rex N. Lisondra, Ryosuke Josef S. Nakano, Miguel S. Pardiñas(B), Josiah Cyrus Boque, Jaime D. L. Caro, and Richelle Ann B. Juayong

Service Science and Software Engineering Lab, University of the Philippines Diliman, Quezon City, Metro Manila, Philippines
[email protected]
Abstract. The traditional method of campus tours and visits is time- and labor-intensive, relies on knowledgeable personnel, and has limitations due to time constraints and restricted areas. As universities have multiple buildings and large campuses, newcomers can easily get lost or confused. To supplement these tours, universities provide resources like websites and physical maps, but these also have limitations. Advancements in mobile technology have introduced Augmented Reality (AR) technology, which can provide an interactive and educational experience during campus tours and visits. Many universities have already developed their own AR-based campus touring mobile applications, but there are few implementations using a hybrid approach that combines marker-less and marker-based tracking. This paper presents an AR mobile application that utilizes a hybrid approach to aid campus students and visitors during campus tours and visits, and to provide an alternative that improves the overall experience of self-guided campus tours and visits. The proposed application can enhance the engagement, interactivity, and educational experience of campus tours and visits, and can be valuable for universities in supplementing existing traditional tools.

Keywords: Augmented Reality · Marker-less · Marker-based · Campus tours · UP Diliman
1 Introduction

Campus tours and visits are among the most important ways for a college or university to increase student recruitment and interest in enrollment. According to Swanson et al., campus visits and tours make prospective students more engaged in discussing college options and help tune an individual's preparedness in engaging and interacting with campus personnel [8]. Additionally, campus tours and visits are also one of the opportunities for colleges and universities to showcase their campus environment and facilities to visitors and prospective students [1]. However, as good as campus tours and visits are, they often rely on always having knowledgeable personnel to head tours, happen only every now and then, and are time- and labor-intensive.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 298–307, 2023. https://doi.org/10.1007/978-3-031-44146-2_31
AR Campus Tour App for UP Diliman
299
Moreover, campus tours and visits are often limited by time constraints and by some areas being off-limits to visitors. Furthermore, given that most campus tours and visits are only held within a day, their target audience, namely new or prospective students and campus visitors, might not be able to retain all the information presented to them [1]. University campuses often have a multitude of learning and research facilities, recreational areas, and cultural landmarks. With such a multitude of buildings, visitors often get lost or confused within a huge campus environment [2]. One prime example is the University of the Philippines Diliman (UP Diliman) campus, which spans a total land area of 493 hectares (1,220 acres). Chou and Chanlin (2012) argue that the 86-acre campus of Fu-Jen Catholic University was already enough to make newcomers confused within the campus environment, and that campus is dwarfed by the substantially larger area of UP Diliman. In this case, a one-day campus tour or visit is simply not enough to fully inform and promote a campus's facilities to prospective students and visitors. As a way to supplement the traditional campus tour, universities provide supplementary resources such as actively promoting the university's website and giving out physical brochures or campus maps. However, traditional websites and physical brochures or maps still have their disadvantages, as it is contingent upon the user to fully interpret and understand the information provided to them on these static websites or pamphlets [10]. Fortunately, technological advancement, particularly in mobile phones and gadgets, has introduced a multitude of new possibilities to incorporate technology and immersive experiences to boost the engagement, interactivity, and educational experience provided by campus tours and visits [1].
This is the case with Augmented Reality (AR), as it has all the components needed to make campus tours and visits more interactive and educational [1]. AR technology focuses on improving the real-world experience of the user by combining digital objects, models, and information to supplement their real-world environment. Furthermore, various universities have already developed their own AR-based campus touring mobile applications, aimed at providing prospective students and visitors with a better campus tour and visit experience [1]. Likewise, introducing this kind of mobile application also provides prospective students and visitors with an alternative: holding their own self-guided campus tour or visit. While this technology has already had numerous implementations both internationally and nationally here in the Philippines, there are not many implementations that use a hybrid approach, one that uses both marker-less and marker-based tracking and that, as Muhammad (2019) suggests, "brings about a better awareness of the users' immediate environment" [6]. Moreover, UP Diliman does not currently have or utilize this kind of technology to supplement or aid its traditional campus tours and visits. As such, this paper presents the design of an AR mobile application that utilizes a hybrid approach, combining both marker-less and marker-based tracking, to aid campus students and visitors during campus tours and visits and to provide an alternative that improves the overall experience of self-guided campus tours and visits.
300
A. R. N. Lisondra et al.
2 Background of the Study

This section gives an overview of Augmented Reality (AR): its history, its use, and the current state of the technology in the context of campus tours in large university areas.

2.1 Augmented Reality

The term Augmented Reality (AR) was first coined in 1990 and is commonly used along with the term Virtual Reality (VR). However, the two terms have key differences: the main goal of VR is to use technology to replicate or replace reality and create an immersive environment, while the primary goal of AR is to enhance reality with digital objects in a non-immersive way and improve the real-world experience [1]. Basic AR requires at least three things: a camera, a computational unit, and a user display. The camera captures the environment and detects image markers, the computational unit processes the data gathered, and the display presents the media in the form of video, 3D or 2D models, and text, usually superimposed on image markers [1]. The flowchart in Fig. 1 visualizes this idea: a camera feeds its view into a computational unit or computer, and the result is then fed into the user display.
Fig. 1. Flowchart of a Simple Augmented Reality System [7]
To understand the information in the environment, the AR system needs to know where the user is and what the user is pointing their device at. There are two common approaches to this. Marker-based AR is the simplest and most often used way to employ AR systems. This approach uses the device's camera and a visual or fiducial marker in real life to determine an object's properties, such as orientation and coordinates, and uses specific image/visual markers such as logos or QR codes to query information and data. On the other hand, marker-less AR, also known as the geolocation approach or location-based AR, uses a mobile device's built-in components, such as the accelerometer, compass, and location data via GPS, to determine the position in the user's physical world at which the device is pointing and the axis on which it is operating [1].

2.2 Tours in Large Campus Areas

As discussed in Sect. 1, large campus areas often make students and visitors lost. Furthermore, while there are conventional ways to deal with this issue, they still have their
disadvantages [2, 10]. As a result, Zhindón Mora [10] argues that AR can solve this by effectively showing information on a specific location, taking into account user location and orientation. While there are numerous AR tour applications already developed in the Philippines, most are image-based, and only a handful, if any, make use of a marker-less or geolocation-based approach internationally. Marker-based approaches also have the following issues: information and AR models disappear when the camera is moved away from the marker, and image markers might not work properly due to obstructions, lighting issues, and weather conditions [9]. In Igpaw: Loyola, a gamified AR campus tour application at Ateneo de Manila University (ADMU), markers had issues due to being replaced during building renovations, and some markers reflected too much light, which made them hard to process [9].
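As a minimal illustration of the marker-less (geolocation) side of such a system, the sketch below matches a user's GPS fix to the nearest point of interest using the haversine distance. The POI names and coordinates are hypothetical placeholders, not actual campus data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def nearest_poi(user_lat, user_lon, pois):
    """Return (name, distance_m) of the closest point of interest."""
    return min(
        ((name, haversine_m(user_lat, user_lon, lat, lon))
         for name, (lat, lon) in pois.items()),
        key=lambda t: t[1],
    )

# Hypothetical POI coordinates (illustrative only)
pois = {
    "Oblation Plaza": (14.6549, 121.0645),
    "Main Library":   (14.6530, 121.0699),
}
print(nearest_poi(14.6547, 121.0650, pois))
```

A production system would refresh this lookup as the GPS fix changes and combine it with compass heading to decide which POI the device is facing.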
3 Problem Statement

Upon consultation with a Media Production Specialist and the main handler for guided campus tours within UP Diliman, the researchers have learned that UP Diliman still relies on the traditional means of physical campus tours, maps, and pamphlets to provide campus information. Given the disadvantages of these traditional means, the need for a reliable supplementary tool that is both self-directed and more informational becomes apparent [2]. While a number of supplementary tools, namely AR campus tour applications, have already been developed in the Philippines to address these disadvantages, most rely solely on a marker-based approach to provide information. A marker-based approach alone is unreliable, as it has its own issues, and past implementations still fall short in providing other usable functionalities and significant information [2].
4 Objectives of the Study

The researchers propose to create a mobile Augmented Reality application for Android devices aimed at supplementing traditional tools like on-site campus tours and giving users the opportunity to easily perform self-directed tours in large Philippine universities, specifically UP Diliman. Furthermore, the researchers aim to evaluate the usability of the mobile AR application by consulting with domain experts (UPDIO), conducting internal validation testing to gauge the application's completeness, and reviewing user feedback through measuring variables such as effectiveness, efficiency, and satisfaction. This will be carried out through a qualitative questionnaire answered by external testers composed of domain experts, current and past students, and campus visitors. To achieve this, the researchers plan to do the following:

4.1 Develop a Mobile AR Application with the Following Features

Hybrid Approach. The application will mainly use a hybrid approach to locate Points of Interest (POIs). A hybrid approach is a combination of both marker-based AR and
marker-less AR, as it brings about a better experience and awareness of the user's immediate surroundings when using the application [6]. With a mobile AR application that utilizes a hybrid approach, users can locate POIs throughout the huge campus of UP Diliman through geolocation, and within campus POIs and buildings through image-based markers.

Points of Interest Information. To provide sufficient Point of Interest (POI) information, the researchers aim to provide the information already present in traditional campus websites, maps, and brochures, while also providing additional information that is not. The goal is an application users can consult during self-guided tours that carries all the information provided through traditional campus tours, such as POI name, location, history, offices, and courses offered, if applicable.

Location Search. The application will make use of a quick search function that allows users to input a desired POI, which will then be promptly shown on the user's screen. The researchers plan to implement this feature by tagging POIs with keywords to make them easier to query, and by adding a word-processing functionality to interpret user-inputted text within a quick search bar.

Navigation Within Campus Area. This feature is related to the Location Search feature mentioned previously in Sect. 4.1. To implement this feature, the researchers plan to incorporate a path-finding algorithm to establish a path between the user's location and their desired POI, viewable on the application's main map view of the campus. This feature will allow users to easily find and reach their desired location, removing the need for a third-party map and navigation application altogether.

3D Objects Representing the Campus Environment.
Miniature 3D models of POI exteriors, as well as cultural models of significant events and installations, will be shown in the application. This is to enhance a user's experience of their immediate environment by providing a clear view of a POI, as obstructions such as large trees or lighting issues might impair a user's view and experience. Similarly, 3D models of significant events and cultural installations will also provide a better understanding of a location's historical significance. Image markers will be placed at the main entrance(s) of a POI to query information related to its interior and facilities, presented through 3D indoor maps. This will be done mainly through 3D modeling in Unity3D and Vuforia, and through 3D models created in Google SketchUp.

Admin-Side Web Portal. Additionally, the researchers will implement an admin-side web portal to host POI information, as well as create a means for UPDIO personnel to create, edit, and delete particular POI information when needed.
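The path-finding logic behind the Navigation Within Campus Area feature can be sketched with Dijkstra's algorithm over a graph of campus walkways. The landmarks and distances below are hypothetical placeholders, not surveyed campus data:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a walkway graph:
    graph[node] -> list of (neighbour, distance_m) pairs."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(queue, (nd, nxt))
    # Reconstruct the route from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical walkway segments between campus landmarks
graph = {
    "Gate":     [("Oblation", 120.0), ("Library", 400.0)],
    "Oblation": [("Library", 150.0)],
    "Library":  [],
}
print(shortest_path(graph, "Gate", "Library"))
# (['Gate', 'Oblation', 'Library'], 270.0)
```

In the actual application this logic would operate over the map provider's routing data rather than a hand-built graph; the sketch only illustrates the underlying algorithm.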
4.2 Test the Mobile AR Application

Internal validation testing will be done first to assess the application's strengths, weaknesses, and completeness, followed by external user testing with domain experts, current/previous UP students, and campus visitors to gather feedback and recommendations.
Internal Validation Testing. To ensure a thorough and comprehensive evaluation prior to external testing, the researchers will develop a spreadsheet based on the application's intended features. This will serve as both a tracking and verification tool, enabling the researchers to monitor and validate the implementation of each functionality associated with the identified features. For each feature, the researchers will further break down the evaluation process by identifying and analyzing multiple scenarios. This approach allows the researchers to examine the application's behavior and performance across different user interactions and actions, and consequently assess the strengths, weaknesses, and overall completeness of the application. To validate the expected outcomes of these scenarios, the researchers will document the specific actions or inputs required and the corresponding expected outputs. Clearly defining the expected results for each scenario generates a benchmark for the actual outcomes observed during internal testing. This is an effective preliminary emulation of external testing, confirming that the proposed application is complete before releasing it to external testers.

External User Testing. To measure the quality of the mobile AR application, the following metrics from ISO 9241-11 will be used to measure application usability through feedback provided by external users:

1. Effectiveness - The information provided by the application is accurate and complete, and goals are completely achieved.
2. Efficiency - The resource costs to meet the application's minimum requirements are not heavy on a user's/tester's device.
3. Satisfaction - The application promotes freedom from discomfort and a positive attitude toward its use.

The researchers gathered a total of 18 individuals, including 2 UPDIO personnel, 7 current/former UP students, and 9 campus visitors/outsiders for the mobile AR application testing.
Similarly, the severity rate classification checklist utilized by Guimaraes and Martins will be the main questionnaire provided to users/testers to generate feedback for the application [3]. External users will accomplish a questionnaire structured to obtain qualitative data under each ISO 9241-11 metric through a series of questions. Each ISO 9241-11 metric will have a series of tailor-fit scenario-based questions relevant to the application's use, and external testers will give a numerical rating, similar to a Likert scale, for each question. Afterwards, a mean weighted average will be the basis of each metric's validity in forming a conclusive answer on whether or not it was achieved. Lastly, it is worth noting that the researchers will prioritize obtaining prospective students and campus visitors as external users to gain a better opinion on whether or not the application is helpful.

User Interface/Experience Design Consultation with UPDIO. The UPDIO and its personnel will be the main stakeholder in the application. The application's features and functionalities, as well as its design and user interface, will be presented to the UPDIO to get their feedback and insights throughout the application's development.
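The per-metric aggregation described above (a mean rating per ISO 9241-11 metric, compared against an acceptance threshold) can be sketched as follows; the ratings shown and the 4.0 threshold are hypothetical, not the study's actual data:

```python
def metric_means(responses, threshold=4.0):
    """Average the Likert ratings per ISO 9241-11 metric and flag
    whether each passes a chosen acceptance threshold.

    responses: {metric: [ratings from all testers]}"""
    summary = {}
    for metric, ratings in responses.items():
        mean = sum(ratings) / len(ratings)
        summary[metric] = (round(mean, 2), mean >= threshold)
    return summary

# Hypothetical 5-point ratings; three of the 18 testers shown for brevity
responses = {
    "effectiveness": [5, 4, 4],
    "efficiency":    [4, 3, 4],
    "satisfaction":  [5, 5, 4],
}
print(metric_means(responses))
# {'effectiveness': (4.33, True), 'efficiency': (3.67, False),
#  'satisfaction': (4.67, True)}
```

A weighted variant would simply multiply each rating by a per-question weight before averaging, matching the paper's "mean weighted average" wording.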
5 Theoretical and Conceptual Framework

To assist in the development of the proposed application, the researchers provide the theoretical and conceptual framework in Fig. 2. The researchers divide this framework into four parts: (1) Research Problem and Background, (2) Development Cycle, (3) Application Features, and (4) Application Evaluation. By dividing the framework this way, the researchers aim to develop a thorough framework that summarizes the areas in which past implementations fall short and to use it as a guiding principle during development.
6 Methodology

6.1 Application Planning

The researchers have formulated a list of features, based on Sect. 4.1, and their corresponding priority rankings considering the problem statements and objectives. In order of priority, the first focus will be on enabling the application to locate the user and POIs through geolocation. Following that, development should include image-marker detection functionalities for marker-based detection, alongside the main map view. Next in line is the implementation of the capability to display and update POI information through an admin portal, as well as showing 3D and AR models for both marker-based and marker-less features. Furthermore, the application should be capable of map path-finding, and, with the least priority, further enhancements to the 3D functionality and the settings page. Note that the framework in Fig. 2 provides a comprehensive overview of these features, as well as their alignment with the overall planning and development process of the application.

6.2 Application Development

To develop the proposed mobile AR application for UP Diliman, the researchers will use Unity (Unity3D) with the Vuforia SDK. Unity will be used as its engine allows for easy cross-platform application development. For this research, it allows for developing a mobile AR 3D application, while also allowing additional SDKs to implement the researchers' proposed features and functionalities. Since it is cross-platform, the researchers will be able to develop an application specifically for mobile devices that run on Android. Also, Unity allows for easy "scene" creation for each POI, as well as the creation and modification of 3D models, and the overlaying of digital information to produce the AR experience [5]. The Vuforia SDK will be used in conjunction with Unity. Vuforia provides the necessary tools and features to produce the AR experience the researchers' proposed application aims to have.
Through Vuforia, the researchers will be able to easily overlay digital information on top of Unity's scenes, as well as place 3D models in the user's real environment. Additionally, Vuforia allows for easy image and object recognition, which will aid the marker-based approach for interior POIs [5].
Fig. 2. Theoretical and Conceptual Framework
Next, the Mapbox SDK will be used to integrate maps with user location and enable navigation features such as path-finding in Unity. Additionally, the path-finding feature is coded in such a way that queries are only available and valid whenever a user is within the campus grounds of UP Diliman. Both Laravel and MySQL are used in tandem to develop the admin-side web portal. Laravel is a full-stack framework that allows for rapid deployment of web applications, and was used to develop the API and frontend of the admin web portal. MySQL, on the other hand, is a relational database management system, and was used to store POI information. By using a database management system, UPDIO personnel will be able to easily view and update POI information [4]. Lastly, Google SketchUp was used to design and develop all of the application's 3D models, from 3D exterior models to 3D indoor and cultural models.
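The campus-bounds restriction on path-finding queries mentioned above can be sketched as a simple bounding-box check. The coordinates below are illustrative placeholders, not the surveyed boundaries of UP Diliman:

```python
# Hypothetical bounding box around the campus (illustrative values)
CAMPUS_BOUNDS = {
    "min_lat": 14.644, "max_lat": 14.665,
    "min_lon": 121.055, "max_lon": 121.080,
}

def inside_campus(lat, lon, bounds=CAMPUS_BOUNDS):
    """Gate path-finding queries: only users located inside the
    campus bounding box may request a route."""
    return (bounds["min_lat"] <= lat <= bounds["max_lat"]
            and bounds["min_lon"] <= lon <= bounds["max_lon"])

print(inside_campus(14.655, 121.064))  # True
print(inside_campus(14.600, 121.000))  # False
```

A real campus boundary is not rectangular, so a production check would test the fix against a polygon of the campus perimeter instead; the rectangle keeps the sketch minimal.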
7 Conclusion

In summary, this paper has discussed leveraging Augmented Reality for campus tour applications, presenting the objectives, theoretical and conceptual framework, and methodology for creating an AR campus tour application that uses a hybrid approach for UP Diliman. With this application, the researchers aim to bridge the gap between sole marker-less or marker-based campus tour implementations through a hybrid solution that makes use of both, and to deliver a usable application, evaluated against ISO 9241-11 standards, that allows users to perform interactive self-directed campus tours with ease.
References

1. Andri, C., Alkawaz, M., Sallow, A.: Adoption of mobile augmented reality as a campus tour application. Int. J. Eng. Technol. (UAE) 7, 64–69 (2018). https://doi.org/10.14419/ijet.v7i4.11.20689
2. Chou, T.L., Chanlin, L.J.: Augmented reality smartphone environment orientation application: a case study of the Fu-Jen university mobile campus touring system. Procedia Soc. Behav. Sci. 46, 410–416 (2012). https://doi.org/10.1016/j.sbspro.2012.05.132
3. De Paiva Guimaraes, M., Martins, V.F.: A checklist to evaluate augmented reality applications. In: 2014 XVI Symposium on Virtual and Augmented Reality, pp. 45–52 (2014). https://doi.org/10.1109/SVR.2014.17
4. Lee, G.A., Dünser, A., Nassani, A., Billinghurst, M.: AntarcticAR: an outdoor AR experience of a virtual tour to Antarctica. In: 2013 IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities (ISMAR-AMH), pp. 29–38 (2013). https://doi.org/10.1109/ISMAR-AMH.2013.6671264
5. Llerena, J., Andina, M., Grijalva, J.: Mobile application to promote the Malecón 2000 tourism using augmented reality and geolocation. In: 2018 International Conference on Information Systems and Computer Science (INCISCOS), pp. 213–220 (2018). https://doi.org/10.1109/INCISCOS.2018.00038
6. Muhammad, S.T.: Developing augmented reality mobile application: NEU campus guide. Ph.D. thesis, Near East University (2019)
7. Siltanen, S.: Theory and applications of marker-based augmented reality: licentiate thesis. Ph.D. thesis, Aalto University, Finland (2012). Project code: 78191
8. Swanson, E., Kopotic, K., Zamarro, G., Mills, J., Greene, J., Ritter, G.: An evaluation of the educational impact of college campus visits: a randomized experiment. AERA Open 7, 233285842198970 (2021). https://doi.org/10.1177/2332858421989707
9. Vidal, E., Mendoza, M.L., Samaco, J.D., et al.: Igpaw: Loyola - design of a campus-wide augmented reality game using magis. In: Proceedings of the 26th International Conference on Computers in Education, Asia-Pacific Society for Computers in Education, Philippines (2018)
10. Zhindón Mora, M.G.: Implementación de un sistema de navegación con realidad aumentada basado en puntos conocidos para geo localización de puntos de interés. Master's thesis, Universidad del Azuay (2014)
Social Media to Develop Students’ Creative Writing Performance

Loubert John P. Go1(B), Ericka Mae Encabo2, Aurelio P. Vilbar2, and Yolanda R. Casas1
1 Cebu City Don Carlos A. Gothong Memorial National High School, Cebu City, Philippines
[email protected] 2 University of the Philippines Cebu, Cebu City, Philippines
Abstract. After a two-year suspension, the Philippine Department of Education resumed limited face-to-face (ftf) classes in May 2022, replacing modular distance learning. Nonetheless, the threat of COVID-19 transmission and the infrastructure damage caused by Typhoon Rai compelled us to transition to a blended learning mode of instruction. This mode of instruction paved the way for the incorporation of social media into education and showed encouraging results for fostering student engagement. According to the results of the pre-test, students performed poorly in creative writing. To address the issue, we used Instagram as an intervention to improve students’ writing performance. During the implementation of blended learning, the teacher taught the concepts in the ftf modality for three days while students wrote their works on Instagram online. This study examined the impact of using Instagram as an e-portfolio on the creative writing performance of students. The t-test results demonstrated a significant mean gain from pretest to posttest after using Instagram. According to the qualitative data, social media enhanced the students’ creative writing skills, grammar, vocabulary, self-awareness, and motivation. Students indicated that Instagram’s self-editing capabilities enhance metacognition and self-development via real-time comments and feedback.

Keywords: Flash Fiction · Instagram · E-Portfolio
1 Introduction

Creative writing (CW) refers to any writing that is unique and self-expressive [1]. It is a kind of self-expression that allows the writer to translate experience and imagination into communicable thoughts and feelings about the human experience in an entertaining, engaging, and instructive manner [2]. In the Philippines, CW has been included in the Enhanced Basic Education Curriculum of the Academic Track Senior High School Program [2], which aims to improve students’ practical and creative reading and writing skills and to introduce students to the essential strategies of writing fiction and poetry [1]. Unfortunately, teaching CW remains a difficult undertaking for teachers and students [3].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 308–314, 2023. https://doi.org/10.1007/978-3-031-44146-2_32

CW is difficult to teach because of the different genres/forms of literature, lack of
motivation, untrained English teachers, insufficient time for instruction, and a focus on surface errors [4]. Based on our needs assessment among 150 students, senior high school students had difficulty writing short stories. They claimed that they lacked knowledge in developing plot, character development, style, and figurative language because of a lack of exposure to creative writing during the pandemic. They added that they received minimal feedback from teachers due to the distance education modality. As an intervention, we used Instagram as an e-portfolio in teaching CW. Instagram is an online photo-sharing tool and social network platform that allows users to upload, edit, and personalize images and videos [5]. As an online social network platform, Instagram has the potential to enhance the teaching-learning process [6]. In this research, Instagram was used to support the blended learning modality of our school from May 2022 until November 2022. During this time, our school in Cebu City, Philippines had limited face-to-face classes after the COVID-19 pandemic. The students had been under printed modular distance learning in 2021–2022, which disrupted their face-to-face (ftf) creative writing sessions. Moreover, our classrooms were devastated by Typhoon Rai. As a result, 40-min classes were implemented. CW beginners need ftf sessions that provide immediate feedback. In addition, the use of Instagram for uploading their works became vital as an e-portfolio platform. This study aimed to determine the effect of utilizing Instagram in teaching Creative Writing on the participants’ creative writing skills and to determine their feedback on using social media.
2 Literature Review

In 21st century education, traditional instructional approaches can be supplemented by social media [7], since learners frequently use social networking sites and applications [8]. Instagram is one of the most popular and well-known social media platforms among students [9]. It is an online photo-sharing tool and social network platform that allows users to upload, edit, and personalize images and videos [5]. As an educational technology, Instagram is accessible [10, 11], letting students upload and edit their works anytime and anywhere [12]. They can conveniently comment on and edit captions on their peers’ work online using their gadgets [13]. Instagram can serve as a pre-writing space where students move from drafting to final writing. Instagram is fitting for the creation of e-portfolios since it allows media published by other users to be saved in a collection [11, 14] where students can upload, share, and showcase their outputs online [15] in different formats, such as audio, photo, video, or text [14]. When publishing and sharing actual learning experiences, e-portfolios give learners the opportunity to create signature work that includes the sense- and meaning-making of their ideas [16]. As a language learning tool, the e-portfolio improves students’ academic performance [17] by letting them reflect on their performance, get feedback online [18], and activate their intrinsic motivation [19]. Teachers can evaluate students’ creative writing using Instagram [20] because it improves their creative writing, especially short story writing [21]. Instagram encourages students to write creatively using its captions [22].
3 Methods

This study used a mixed-methods design that systematically combines quantitative and qualitative approaches to solve complex research issues that may benefit from such a combination [23]. Rubrics, reflection journals, and focus group discussions (FGD) were utilized to determine the impact of using Instagram on improving the academic performance of the students. This research was anchored on Mobile-Assisted Language Learning (MALL), in which learners participate in educational activities while employing technology as a mediation tool for learning, using mobile devices that access data and communicate with others via wireless technology [24]. Purposive sampling was used in this study, with the 45 participants who owned gadgets, had internet access, and held an Instagram account. The respondents, including their parents, gave consent, and all information and data were maintained securely in accordance with the Data Privacy Act of 2012. To reinforce privacy, only the teacher had the list of usernames to keep track of submissions, as students chose to remain anonymous. To track each output, specific hashtags were utilized. The school’s blended learning method was used for this study. The procedure took five weeks. During the ftf lessons, the teacher led a facilitated workshop on the fundamental concepts of flash fiction and provided examples, ensuring that all questions were answered and students received rapid feedback. Following that, respondents authored four flash fiction sub-genres in their Instagram profiles over the assigned online classes: the 6-word story, dribble, drabble, and microfiction. Students used Instagram’s photo-representation features to represent each piece. Aside from that, the app allowed users to add captions to their posts, allowing them to create compact and compelling textual material. Students used the newly learned storytelling techniques to experiment with various writing styles.
Online consultations were open during this period to facilitate output creation. Then, participants were asked to provide reflections and participate in focus group discussions (FGD), which were analyzed using Harding’s coding theme analysis framework.
4 Results and Discussion
Table 1. Students’ Pretest to Posttest Performance in Creative Writing

Categories       | Median Pretest | Median Posttest | z-statistic | p-value | Remarks
Style            | 3.33           | 3.67            | −3.843      | .000    | Significant
Plot Development | 3.33           | 4.00            | −4.990      | .000    | Significant
Writing Process  | 3.33           | 3.67            | −4.272      | .000    | Significant
Language         | 3.00           | 3.67            | −5.143      | .000    | Significant
Overall          | 13.33          | 15              | −5.484      | .000    | Significant
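Table 1 reports medians and z-statistics, the kind of figures produced by a nonparametric paired comparison such as the Wilcoxon signed-rank test. As an illustration only (the rubric scores below are hypothetical, the procedure uses the simplified normal approximation without tie or continuity corrections, and this is a sketch rather than the study’s actual analysis), such a pretest–posttest comparison can be computed as:

```python
# Sketch of a Wilcoxon signed-rank z-statistic for paired rubric scores.
# Hypothetical data; normal approximation, no tie/continuity correction.

def wilcoxon_signed_rank_z(pretest, posttest):
    # Paired differences; zero differences are dropped, as is standard.
    diffs = [post - pre for pre, post in zip(pretest, posttest) if post != pre]
    n = len(diffs)
    ranked = sorted(abs(d) for d in diffs)

    def avg_rank(value):
        # Average the 1-based positions of tied absolute differences.
        positions = [i + 1 for i, x in enumerate(ranked) if x == value]
        return sum(positions) / len(positions)

    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)  # positive-rank sum
    mu = n * (n + 1) / 4                                    # E[W+] under H0
    sigma = (n * (n + 1) * (2 * n + 1) / 24) ** 0.5         # SD[W+] under H0
    return (w_plus - mu) / sigma

# Hypothetical rubric scores for ten students (not the study's data).
pre = [3.0, 3.3, 3.3, 3.0, 3.7, 3.3, 3.0, 3.3, 3.7, 3.0]
post = [3.7, 4.0, 3.7, 3.3, 4.0, 4.0, 3.7, 3.7, 4.0, 3.3]
print(round(wilcoxon_signed_rank_z(pre, post), 2))  # prints approximately 2.8
```

For a real analysis, a statistics package (e.g. scipy.stats.wilcoxon) is preferable, since it handles ties and exact small-sample distributions.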
Table 1 shows that using Instagram improved the students’ style, plot development, writing process, and language in flash fiction writing. Each category had a p-value of .000, which is significant at the .05 alpha level. The data indicate that Instagram as an e-portfolio improved the creative writing skills of the students. The findings validate the study of Zárate and Cisterna, who found that the use of Instagram assists students in developing their short story writing [21].

Table 2. Themes of the Participants’ Reflections

Theme | Sample Quotation
Improves Creative Writing Skills | “In conclusion that using Instagram as tool to improve my creative writing skills because of its accessibility, I can write anytime and anywhere I want especially when I just have my phone with me.”
Improves Grammatical Rules and Vocabulary | “Instagram is a useful tool for developing vocabulary and grammatical accuracy for learners like me because of the suggestions and comments of the readers and I can easily alter my writing in the captions.”
Develops Self-Awareness | “As I was working on my stories, I realized that I have the capability to produce stories and that I have improved, yet still have a lot to improve.”
Helps to Motivate Others | “I enjoy about this writing activity in using Instagram where I can help people to become motivated through my comments of their post because I can easily view their posts online.”
4.1 Improves Creative Writing Skills

From the collected data, the students claimed that their creative writing skills improved through using Instagram [25]. Student A wrote, “Instagram is a helpful tool in improving my creative writing skills because I can write anytime and anywhere, especially if I do my modules.” Student B added, “I find Instagram a helpful tool in improving my creative writing skills and its accessible allowed me to do the task at my own pace.” This finding verified the study [21] that Instagram helps in the development of writing short stories. It also confirmed the study [12] that activities could be done anytime and anywhere.

4.2 Improves Grammatical Rules and Vocabulary

All data showed that the use of Instagram improved the grammar and vocabulary of the respondents. Student E wrote, “Instagram is a useful tool for developing vocabulary and
grammatical accuracy for learners like me because of the suggestions and comments of the readers.” Student F added, “It improves my writing skills… in using punctuation marks, my grammar…. And I can just change anything in the captions in an easy way.” This finding substantiated the study [26] that Instagram improves grammar and vocabulary. At the same time, editing or deleting captions and comments is a feature of Instagram that is relevant for addressing writing problems [10].

4.3 Develops Self-Awareness

During the activity, the respondents developed self-awareness. They discovered their strengths and weaknesses in writing. Student G wrote, “As I was working on my stories, I realized that I can produce stories and that I have improved, yet still have a lot to improve.” Student H added, “In the process of using Instagram, I realized that I can write more stories and that I can create stories or something in the midst of difficulties.”

4.4 Helps to Motivate Others

Writing flash fiction using Instagram motivated students to write better. Student I wrote, “I am grateful to the comments of others; great and positive which motivate me to be better in writing. They can access my outputs especially because of the common hashtags we used.” Student J added, “I was grateful for the comments of my classmates and people on Instagram. They uplifted my passion and eagerness to do more of it. Their positive feedbacks are the reasons why I wanted to continue writing.” This validated the study of Ramalia that people’s attention through the “like” button and comments motivates them to write better [25]. At the same time, it supported the conclusion [11] that online posts can be accessed and viewed at any time by others. All data substantiate that using Instagram in writing improved creative writing, grammar, and vocabulary, developed self-awareness, and helped to motivate others. It developed the students’ creativity in writing, especially in flash fiction.
As Student M responded in the FGD, “Instagram really helped me so much in uhm making creative outputs.”
5 Conclusion

Instagram, as an e-portfolio, improved the students’ creative writing performance, grammar, vocabulary, self-awareness, and motivation. Instagram’s self-editing tools allowed the students to engage in metacognition and self-development through real-time feedback and comments. In a post-COVID context, Instagram is an excellent platform for a blended learning context, which combines face-to-face and online classes [27]. Here, teachers would discuss the theories and concepts related to the subject matter in the ftf modality. As an e-portfolio, students can upload their works on Instagram to showcase their outputs based on the course’s competencies. Instagram’s visual and interactive features provide a stimulating environment for students to express their creativity and build connections with their peers. The students’ use of hashtags and comments on one another’s posts
fostered a collaborative and supportive environment that encouraged students to write. These data show that Instagram is an effective tool for addressing students’ poor creative writing performance. With this said, incorporating Instagram into creative writing classes can be an effective way to engage students, foster creativity, and develop essential digital literacy skills. We further recommend exploring other social media applications to develop students’ writing performance. There are a number of limitations to consider when studying the impact of Instagram on students’ creative writing performance. The small sample size and purposive selection may affect the generalizability of the results. Language proficiency and social media experience may also have an impact on students’ use of Instagram for creative writing. In addition, since the study was not conducted over an extended period of time, analyzing the long-term sustainability of improvements in creative writing skills would provide a more complete picture of the platform’s effectiveness.
References

1. Pardito, R.H.: Creative writing curriculum in the selected senior high schools in the division of Quezon: a groundwork for a teaching guide. Am. J. Educ. Technol. 1(2), 62–71 (2022). https://doi.org/10.54536/ajet.v1i2.511
2. Manalastas, J.P.: Digitalized instructional materials in creative writing based on technological pedagogical content knowledge. J. Humanit. Educ. Dev. 2(2), 119–128 (2020). https://doi.org/10.22161/jhed.2.2.7
3. Acuin, D.G., Petallana, M.L.D., Esperas, G.C.: Cooperative-collaborative learning in enhancing creative writing performance. JPAIR Multidiscip. Res. 32(1), 164–173 (2018). https://doi.org/10.7719/jpair.v32i1.581
4. Harper, G.: Teaching creative writing. In: The Routledge Handbook of Language and Creativity, pp. 498–512 (2015). https://doi.org/10.1080/18125441.2012.747769
5. Thomas, V.L., Chavez, M., Browne, E.N., Minnis, A.M.: Instagram as a tool for study engagement and community building among adolescents: a social media pilot study. Digit. Health 6, 1–13 (2020). https://doi.org/10.1177/2055207620904548
6. Maslin, N.M.: Impact of modern technology. HF Commun. 3, 165–182 (2021). https://doi.org/10.1201/b12574-14
7. Javaeed, A., Kibria, Z., Khan, Z., Ghauri, S.K.: Impact of social media integration in teaching methods on exam outcomes. Adv. Med. Educ. Pract. 11, 53–61 (2020). https://doi.org/10.2147/AMEP.S209123
8. Kolhar, M., Kazi, R.N.A., Alameen, A.: Effect of social media use on learning, social interactions, and sleep duration among university students. Saudi J. Biol. Sci. 28(4), 2216–2222 (2021). https://doi.org/10.1016/j.sjbs.2021.01.010
9. Rosyida, E.M., Seftika: Instagram as social media for teaching writing. J. Engl. Lang. Teach. Appl. Linguist. 5(1), 60–70 (2019). https://doi.org/10.26638/js.831.203X
10. Sirait, J.B., Marlina, L.: Using Instagram as a tool for online peer-review activity in writing descriptive text for senior high school students. J. Engl. Lang. Teach. 7(1) (2013). http://ejournal.unp.ac.id/index.php/jelt
11. Ariana, R.: [title unavailable], vol. 26, pp. 1–23 (2016)
12. Feng, S.S., Huang, G.Y.: New design method for focusing fragments of warheads based on theory of optics. In: 25th International Symposium on Ballistics, ISB 2010, pp. 926–935 (2010)
13. Al-Ali, S.: Embracing the selfie craze: exploring the possible use of Instagram as a language mLearning tool. Issues Trends Educ. Technol. 2(2), 1–16 (2014). https://doi.org/10.2458/azu_itet_v2i2_ai-ali
14. Chantanarungpak, K.: Using e-portfolio on social media. Procedia Soc. Behav. Sci. 186, 1275–1281 (2015). https://doi.org/10.1016/j.sbspro.2015.04.063
15. Johar, S., Ismail, K.: ePortfolio: a descriptive survey for contents and challenges. Art. Media, 4–10. http://0-eds.a.ebscohost.com.aupac.lib.athabascau.ca/eds/pdfviewer/pdfviewer?sid=89e7c11c-f47b-4422-99b6-2ae86f032238@sessionmgr4008&vid=5&hid=4210
16. Thibodeaux, T., Harapnuik, D., Cummings, C., Dolce, J.: Graduate students’ perceptions of factors that contributed to ePortfolios persistence beyond the program of study. Int. J. ePortfolio 10(1), 19–32 (2020)
17. Capstone project – impact of social media on society, vol. 6, no. 1, pp. 406–408 (2019)
18. Syzdykova, Z., Koblandin, K., Mikhaylova, N., Akinina, O.: Assessment of e-portfolio in higher education. Int. J. Emerg. Technol. Learn. 16(2), 120–134 (2021). https://doi.org/10.3991/ijet.v16i02.18819
19. Oh, J.E., Chan, Y.K., Kim, K.V.: Social media and e-portfolios: impacting design students’ motivation through project-based learning. IAFOR J. Educ. 8(3), 41–58 (2020). https://doi.org/10.22492/ije.8.3.03
20. Fadul, F.M.: [title unavailable], vol. 13, no. 1, pp. 142–154 (2019)
21. Zárate, P., Cisterna, C.: Action research: the use of Instagram as an interactive tool for developing the writing of short stories. Eur. J. Educ. Stud. 2(8), 527–543 (2017). https://doi.org/10.5281/zenodo.1035497
22. Christanty, A., Bestari, Y., Faiza, D., Mayekti, M.H.: Teaching English online through the use of Instagram in the new normal era of Covid-19, vol. 2, no. 1, pp. 50–55 (2023)
23. Mixed methods: quantitative and qualitative approaches to research – integration, no. 1989, pp. 514–520 (2010)
24. Wu, W.H., et al.: Review of trends from mobile learning studies: a meta-analysis. Comput. Educ. 59(2), 817–827 (2012). https://doi.org/10.1016/j.compedu.2012.03.016
25. Ramalia, T.: The students’ perspective of using Instagram as a writing assignment platform. J-SHMIC J. Engl. Acad. 8(2), 122–132 (2021). https://journal.uir.ac.id/index.php/jshmic
26. Bestari, A.C.Y., Faiza, D., Mayekti, M.H.: Instagram caption as online learning media on the subject of extended writing during pandemic of Covid-19. Surakarta Engl. Lit. J. 3(1), 9 (2020). https://doi.org/10.52429/selju.v3i1.359
27. Batac, K.I.T., Baquiran, J.A., Agaton, C.B.: Qualitative content analysis of teachers’ perceptions and experiences in using blended learning during the COVID-19 pandemic. Int. J. Learn. Teach. Educ. Res. 20(6), 225–243 (2021). https://doi.org/10.26803/IJLTER.20.6.12
Importance and Effectiveness of Delurking

Maria Anastasia Katikaridi(B)

National and Kapodistrian University of Athens, 16122 Athens, Greece
[email protected]
Abstract. In today’s online society, as social media has become an integral part of our daily lives, lurking has become a well-established phenomenon. Silent users, otherwise called lurkers, are users of Online Social Networks (OSNs) who intentionally decide not to make their presence visible to other users: they remain silent, keep their online identity secret, and observe rather than visibly participate. They do not share any kind of information or preferences with the other members of the community. The goal of this study is to raise awareness of the common phenomenon of lurking, examine it, and reveal ways of delurking this kind of user on social media platforms. The study also discusses how unveiling the lurking phenomenon can provide knowledge to different scientific domains such as the social sciences, human-computer interaction, and computer science.

Keywords: OSN · lurkers · delurking · social media · social networking sites · silent users
1 Background

1.1 The Concept of Lurking

Online social networks (OSNs) are nowadays used on an everyday basis. However, there are users who decide not to contribute at all, or to contribute only a little, on social media platforms. These users are called silent users or lurkers. Lurking is usually associated with observation, silence, and invisibility, which is why lurkers are also called silent users, invisible participants, and hidden users [1]. Lurkers can be viewed either negatively, when they consume information without sharing any content with the community, or positively, when they are fresh OSN users or hold an opinion that they can express if they are motivated [2]. Delurking, i.e. turning lurkers into active users, is very important in OSNs, as OSNs would be of no use without available content.

1.2 The Importance of Delurking

It is worth noting that lurking is a phenomenon more common than active participation; precisely, 90% of current social media users just watch discussions, watch videos, or read posts without actively participating [2]. Lurkers constitute a special category of users and, as much as contributors, they are needed for the freshness and novelty of each social media platform [2]. Furthermore, related studies have shown that when the point of view of lurkers is ignored, fake news is created [2]. Understanding these individuals is vital because lurkers devote a significant amount of time to social platforms and have a lot to share, although they choose not to. In fact, despite their quiet online behavior, they can be constructive members of online communities. Indeed, the activation of silent users can result in social media platforms with much richer content than current ones, which would thus more accurately resemble reality, since they would depict the perceptions of a much larger percentage of social media users than the 10% currently represented.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 315–320, 2023. https://doi.org/10.1007/978-3-031-44146-2_33
2 Lurkers’ Behavior Analysis

2.1 Lurkers Background

Lurking behavior has been studied in the fields of social network analysis and mining and human-computer interaction. Nevertheless, despite the fact that a significant portion of online users are lurkers, computer science specialists have so far given little attention to the lurking issue. The works listed below are a sample of the relatively few that exist in this field. The relationship between lurking and cultural capital, i.e. a person’s level of knowledge about their local community, is examined by Soroka and Rafaeli [3]. In relation to boundary spanning and knowledge theory in various community settings, Muller underlines the significance of comprehending lurkers [4]. Fazeen et al. develop supervised classification techniques for several OSN actor types, including lurkers [5]. The act of delurking is crucial since it can raise an OSN’s value. The delurking process can make use of a variety of techniques, including blockchain, gamification, incentive systems, and influence maximization strategies. The implementation of blockchain technology can resolve security and privacy concerns, while incentive systems such as badges tend to boost users’ participation in social media. Influence maximization can be used in a community-based delurking method, as indicated in [6], and gamification technologies have the capacity to keep the user’s attention [2, 7]. Finally, hybrid techniques have also been researched due to lurkers’ transient behavior and their propensity to shift between platforms [6]. Taking into account the existing research in this area, the majority of the published papers acknowledge the absence of computational methods for delurking quiet users.

2.2 Why Do Lurkers Lurk?

The phenomenon of lurking has a number of causes that are connected to the demands and personalities of the users. In today’s society, some people purposefully avoid drawing attention to themselves or acting impulsively, while others prefer to first adopt the viewpoint of others before expressing their own views [8]. Accordingly, lurkers are more ready to be integrated into a community and feel like they
belong there, since they enjoy the benefit of online anonymity. Focusing on the causes of lurking, it has been found that, in most cases, people start out as contributors before they start to lurk [3]. More specifically, people first sign up for the community and then begin interacting. Most frequently, new members are willing to discuss personal information and their experiences on their preferred platform. After a period, though, they begin to notice the bad aspects of social networking. This indicates that people change their behavior and keep quiet because of the social medium and the community itself. Because of the chaos that permeates online communities, it is very challenging for a newcomer to fully adapt to the people and the information overload without reacting in any manner [9, 10]. Lurking results from being unable to handle the social media chaos, or from other community members being unwilling to clear some space and so limit the spread of information to new users.

2.3 Lurkers vs Inactive Users

Comparing the two user groups shows that lurkers and active users join an online community for different reasons. Through their actions, lurkers take part in the community just as much as active users do, because they gain useful knowledge from it. It is critical to distinguish between lurkers and nonusers at this point. Nonusers, or inactive users, are those who register for the site but do not use it [11]. Even though they are listed as members of the social network, they never actually utilize it. In contrast, lurkers are users who are not visible but are nevertheless present to observe and gather data. However, there are cases in which lurkers and inactive users share some common characteristics, such as commenting: both lurkers and inactive users behave in the same manner by not contributing at all.
3 Delurking Mechanisms

3.1 Influence Maximization

Influence maximization is the problem of finding a limited fraction of nodes in a social network that can spread information quickly and effectively within the network [12]. In lurking-based analysis, lurkers are the target of the influence maximization process. By modeling and studying user interaction, it is therefore possible to determine the influence of people who are silent on social media and how to turn them into active users. The Independent Cascade model and the Linear Threshold model serve as the foundation for the majority of models in the literature. The idea of using these two models to simulate the spread of influence was first put forth by Kempe et al. [13], who demonstrated that influence maximization under the Independent Cascade model is NP-hard. The fundamental issue is that these two models’ input values are largely homogeneous, meaning that they are meant to fit a wide range of situations; usually, this results in an inaccurate calculation of the influence. Oriedi et al. [12] attempt to address this issue by presenting an algorithm known as Selective Breadth First Traversal. By quantifying and allocating precise weights to the social interactions that take place between network nodes, this algorithm finds the best set of nodes that are
able to maximize influence. This algorithm might be tested on silent users because it can connect to and activate 93% of the nodes; since the usual algorithms activate only 53% of the nodes, it is 40% more efficient on normal nodes, so similar results could be observed for silent users. A method of influence maximization with spontaneous user adoption is put forward by Sun et al. [14]. Self-activation occurs frequently in real-world situations; for instance, even without the help of marketing, consumers spontaneously recommend products to their friends. To address this issue of influence maximization, the authors propose the Self-Activation Independent Cascade (SAIC) model. Influence maximization strategies could thus be used to activate silent users: given a user’s existing membership in the social network, there is eventually a chance for the user to make their presence known through some interaction.

3.2 Blockchain

The future of social media has been predicted to be blockchain-based platforms where users get compensated for their content. In decentralized social media, there is no central organization that can control user data. By offering reward systems and eradicating fake news and privacy concerns with blockchain technology, these platforms give more weight to content. HELIOS, a decentralized social media platform that incorporates technologies including decentralization, context detection in IoT environments, and real and virtual object networking, was proposed by Guidi et al. [11]. Users receive HELIOS voucher rewards, closely related to their relevance as contributors to the HELIOS network, when they contribute worthwhile material or carry out some of the recognized reward actions on the network. All awards are initially distributed via smart contracts on the Quorum Network, which are programmable payment agreements built on the blockchain [11].
At the end of each day, a Daily Reward Pool consisting of a predetermined amount of HELIOS vouchers is provided based on the overall daily activity directly related to rewards.

3.3 Gamification

The term “gamification” refers to the selective integration of game features into an interactive system without the creation of a finished game as the final product [15]. The purpose of employing a gamified strategy is to promote end-user behavior modification, whether that behavior involves higher engagement, better performance, or increased compliance. Researchers have also looked at how Foursquare’s game mechanics encouraged specific behaviors; the findings revealed that people enjoy earning badges and doing tasks like “badge hunting,” which involves exploring a new place only for the purpose of earning a badge. Still another intriguing point of view is that of Thom et al. [16], who looked at what happened when gamification elements were removed from a large organization’s social network after ten months of use. Results for a four-week period revealed that their removal decreased the number of photographs, lists, and comments submitted. The authors conclude that removing gamification aspects has a detrimental effect on user engagement. Thus, scores and distinctions are frequently used to set apart user activity and user content, such as star ratings (e.g. the Amazon ratings that come with product reviews), numerical scores (e.g. the “karma” scores for posts and comments on the social
Importance and Effectiveness of Delurking
319
bookmarking site Reddit [17]), numbered levels or named member tiers (e.g., the "Elite reviewer" tier on the review site Yelp [18]), or specific achievements (e.g., Amazon's [19] "#1 reviewer"). An explicit score communicates a rating of a specific user's performance to users, which may encourage them to be more active and produce high-quality material.
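As a concrete illustration of the diffusion process behind the influence maximization models of Sect. 3.1, a minimal independent cascade simulation with optional self-activation can be sketched as follows. This is a toy sketch only, not the algorithm of [12] or [14]; the graph, probabilities, and parameter names are illustrative assumptions.

```python
import random

def independent_cascade(graph, seeds, p=0.1, self_activation=0.0, rng=None):
    """One diffusion run of an independent-cascade process: active nodes try to
    activate each neighbor once with probability p; with self_activation > 0,
    non-seed nodes may also switch on spontaneously (SAIC-style)."""
    if rng is None:
        rng = random.Random(0)
    active = set(seeds)
    # spontaneous (self-)activation, mirroring the self-activation idea of [14]
    for node in graph:
        if node not in active and rng.random() < self_activation:
            active.add(node)
    frontier = list(active)
    while frontier:
        newly = []
        for node in frontier:
            for neigh in graph.get(node, ()):
                if neigh not in active and rng.random() < p:
                    active.add(neigh)
                    newly.append(neigh)
        frontier = newly
    return active

toy = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(independent_cascade(toy, seeds={"a"}, p=1.0))  # every node reachable from "a" activates
```

An influence maximization algorithm would call such a simulation many times per candidate seed set and greedily keep the seeds with the largest expected spread.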
4 Observations and Lessons Learned

4.1 Concluding Summary

Lurkers have extensive knowledge about the way an OSN works, since they spend time both observing the actions of active members and learning from the community. In this paper, we examined the notion of lurking in OSNs, a behavior adopted by the vast majority of OSN users, who are thus called lurkers. We also discussed the possibility for lurkers to alter their behavior through delurking mechanisms such as influence maximization, blockchain, and rewarding mechanisms. This research thus investigated the lurking phenomenon in depth in order to make it widely visible and understandable. Moreover, we proposed efficient mechanisms that can turn lurking behavior into active participation and thus provide added value to OSNs by utilizing the knowledge and opinions of lurkers in various fields. In the future, we plan to separate OSNs into categories and compare users' preferences according to each category, e.g., social media for entertainment and social media for professional purposes. Based on this comparison, we will be able to specifically identify lurkers and their different behaviors in social media platforms for entertainment and for professional purposes.

4.2 Challenges

Challenges in the process of delurking include the temporary nature of lurking, which makes it difficult to detect and address lurkers, as well as privacy concerns in detecting their behavior.
References

1. Gong, W., Lim, E.-P., Zhu, F.: Characterizing silent users in social media communities. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 9, no. 1 (2015)
2. Tagarelli, A., Interdonato, R.: Mining Lurkers in Online Social Networks: Principles, Models, and Computational Methods. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00229-9
3. Soroka, V., Rafaeli, S.: Invisible participants: how cultural capital relates to lurking behavior. In: Proceedings of the 15th International Conference on World Wide Web, pp. 163–172 (2006)
4. Muller, M.: Lurking as personal trait or situational disposition: lurking and contributing in enterprise social media. In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp. 253–256 (2012)
5. Fazeen, M., Dantu, R., Guturu, P.: Identification of leaders, lurkers, associates and spammers in a social network: context-dependent and context-independent approaches. Soc. Netw. Anal. Min. 1(3), 241–254 (2011)
320
M. A. Katikaridi
6. Tagarelli, A., Interdonato, R.: 'Who's out there?' Identifying and ranking lurkers in social networks. In: 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013), pp. 215–222 (2013)
7. Edelmann, N.: Reviewing the definitions of 'lurkers' and some implications for online research. Cyberpsychol. Behav. Soc. Netw. 16 (2013)
8. Preece, J., Nonnecke, B., Andrews, D.: The top five reasons for lurking: improving community experiences for everyone. Comput. Hum. Behav. 20(2), 201–223 (2004)
9. Ho, S.S., McLeod, D.M.: Social-psychological influences on opinion expression in face-to-face and computer-mediated communication. Commun. Res. 35(2), 190–207 (2008)
10. Slot, M., Opree, S.J.: Saying no to Facebook: uncovering motivations to resist or reject social media platforms. Inf. Soc. 37(4), 214–226 (2021)
11. Guidi, B., Clemente, V., García, T., Ricci, L.: Rewarding model for the next generation social media. In: Proceedings of the 6th EAI International Conference on Smart Objects and Technologies for Social Good, September 2020, pp. 169–174 (2020)
12. Oriedi, D., Runz, C., Guessoum, Z., Younes, A.A., Nyongesa, H.: Influence maximization through user interaction modeling. In: Proceedings of the 35th Annual ACM Symposium on Applied Computing, March 2020, pp. 1888–1890 (2020)
13. Kempe, D., Kleinberg, J., Tardos, E.: Maximizing the spread of influence through a social network. In: Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2003, pp. 137–146 (2003)
14. Sun, L., Chen, A., Yu, P.S., Chen, W.: Influence maximization with spontaneous user adoption. In: Proceedings of the 13th International Conference on Web Search and Data Mining, January 2020, pp. 573–581 (2020)
15. Seaborn, K., Fels, D.I.: Gamification in theory and action: a survey. Int. J. Hum. Comput. Stud. 74, 14–31 (2015)
16. Thom, J., Millen, D., DiMicco, J.: Removing gamification from an enterprise SNS.
In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, February 2012, pp. 1067–1070 (2012)
17. Reddit. https://www.reddit.com/. Accessed 1 May 2023
18. Yelp. https://www.yelp.com/. Accessed 1 May 2023
19. Amazon. https://www.amazon.com/. Accessed 1 May 2023
Iskowela: Designing a Student Recruitment Platform and Marketing Tool for Educational Institutions Jeremy King L. Tsang(B) , Gorge Lichael Vann N. Vedasto, Jozelle C. Addawe, Richelle Ann B. Juayong, and Jaime D. L. Caro Service Science and Software Engineering Laboratory, University of the Philippines Diliman, Quezon City, Philippines [email protected]
Abstract. This paper addresses the lack of research on student recruitment in the Philippines. The closure of campuses and the shift to remote learning have highlighted the uneven adoption of educational technology and digital capabilities among Higher Education Institutions (HEIs) in the country. Drawing on international studies, this paper proposes the implementation of recommendations tailored to the Philippine setting. It presents the design and features of a web application intended to enhance student recruitment in HEIs. The application includes an interactive chatbot, events/places map, information module, and website analytics, catering to the needs of both students and educational institutions. By offering a timely and cost-effective solution, this paper fills a significant research gap and contributes to improving student recruitment in the Philippines.

Keywords: Student recruitment · School marketing · Student recruitment website

1 Introduction
Student recruitment is the process of attracting, engaging, and converting individuals into new enrollments for educational institutions. It involves marketing and communication efforts aimed at finding and recruiting the best-qualified students in a timely and cost-effective manner [1]. Colleges employ a broad range of strategies when recruiting students. Sending emails, maintaining institutional websites, and hosting campus visits were the primary means by which colleges recruited freshmen [2]. In the Philippines, preferred recruitment platforms for senior high school and college students include free college entrance exams, university scholarships, recommendations from relatives and friends, university websites, social media, and on-campus events [3]. The wake of COVID-19 has made student recruitment challenging for Higher Education Institutions (HEIs). The pandemic resulted in campus shutdowns that led to a quick rush to "remote learning", exposing the fragmented adoption of
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 321–330, 2023. https://doi.org/10.1007/978-3-031-44146-2_34
322
J. K. L. Tsang et al.
high-quality education technology and digital capabilities across thousands of colleges and universities [4]. Flaws in workflow, processes, information management, and data collection hindered the ability of HEIs to effectively communicate with prospective students [5]. Furthermore, the incredibly competitive space and the fungibility of schools and universities make it harder for institutions to stand out [6,7]. In a 2021 study, university admission leaders in the Midwest region of the United States identified (1) enhancement of web presence, (2) expansion of virtual experiences, and (3) continuation of online and in-person recruitment strategies as the most important approaches in student recruitment considering the effects of the COVID-19 pandemic [8]. Furthermore, offering accurate, up-to-date, and easily accessible information online, and updating web pages and social media accounts, were the primary vehicles for enhancing student recruitment reported in the study [8]. It has become apparent that the new standard of best practice for university admissions recruitment involves the efficient employment of an online recruiting platform such as a university website. However, in the Philippines, out of the 2424 Higher Education Institutions listed by the Commission on Higher Education (CHED) for the academic year of 2021–2022, 756 of them did not have a university website [9].

Table 1. Result of 5% Random Sampling for 1668 Higher Education Institutions with University Websites.

              Amount   Percentage
  Sample Size 83
  Pass        51       61.45%
  Fail        32       38.55%
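The figures in Table 1 can be reproduced with a few lines of code (a sketch only; the pass/fail counts come from the table, and the flooring and rounding conventions are assumptions):

```python
from math import floor

def sampling_summary(population: int, rate: float, passed: int, failed: int) -> dict:
    """Reproduce the Table 1 figures: a 5% sample of 1668 sites,
    with pass/fail percentages rounded to two decimals."""
    sample = floor(rate * population)  # 5% of 1668 -> 83
    pct = lambda n: round(100 * n / sample, 2)
    return {"sample_size": sample, "pass_pct": pct(passed), "fail_pct": pct(failed)}

print(sampling_summary(1668, 0.05, 51, 32))
# -> {'sample_size': 83, 'pass_pct': 61.45, 'fail_pct': 38.55}
```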
Table 1 shows that out of the 1668 remaining schools with websites, approximately 38.55% have ineffective websites due to unorganized, limited, and outdated information. Many educational institutions prioritize other forms of marketing over investing in university websites, believing they provide better efficiency. However, institutional websites are crucial for validation and instilling trust among visitors. They serve as a means to verify information obtained from social media and other marketing channels. A well-designed website enhances the effectiveness of online marketing efforts [10]. In addition to university websites, third-party student recruitment websites have also shown to be a major channel and alternative for marketing schools to potential students. Research conducted by The Pie News has shown that international student recruitment websites attract traffic numbering 2–7 million visitors monthly with some of the most notable websites such as leverageedu.com run by Leverage Edu, topuniversities.com run by QS Quacquarelli Symonds, and idp.com by IDP [11] topping the list. In addition to boosting the online presence of HEIs, student recruitment websites also directly result in increased amounts
Iskowela Platform
323
of inquiries and student applications for their partnered institutions. While the international stage has shown the effectiveness and popularity of third-party student recruitment websites, Edukasyon.ph is currently the only third-party student recruitment website operating in the Philippines, averaging around 400 thousand unique visitors every month [12]. Considering the aforementioned challenges faced by institutions brought about by the pandemic, there has been minimal discussion of student recruitment and recommended practices in the Philippines. Among the notable research on the topic is Barrameda and Relayson's study on a Marketing and Engagement Tool for Educational Institutions [13]. The study aimed to address student recruitment, student engagement, and alumni engagement through the creation of a school website with three main modules: (1) a chatbot, (2) an interactive map, and (3) social listening. The research, however, was interrupted by the pandemic and the implementation was not completed. Dagumboy and Eden's study on ECD-IMC: An Integrated Marketing Communications Model for Selected Philippine Higher Education Institutions is also notable, as their paper aimed to recognize the significance of integrated marketing communications (IMC) and sought to maximize the return on marketing efforts for HEIs by identifying the most successful platform blend among the many communication platforms used to recruit students [3]. In an attempt to solve the problems surrounding student recruitment, especially in the context of the Philippines, the current study proposes the creation of a third-party student recruitment website.
Its purpose is to provide Higher Education Institutions in the Philippines with an opportunity to market their school and easily produce a personalized online web page, while also providing students with an easy-to-access online platform that contains useful information regarding multiple schools, including but not limited to their admission processes, course information, and scholarship opportunities. The outline of the paper is as follows. Further details on the objectives and considerations can be found in Sect. 2. The Theoretical and Conceptual Framework can be found in Sect. 3. Section 4 contains the application design, and conclusions can be found in the final section of the paper, Sect. 5.
2 Objectives and Considerations
The goal of the study is to provide higher education institutions with a tool to boost online presence and increase student recruitment. The tool will be a website that will allow HEIs to create a personalized school profile where they can relay essential information to market their school. The website will concurrently provide students with an online portal where they can easily browse through a catalog of recruiting schools. More specifically, the study aims to accomplish the following:

Module Improvements. The application plans to use, modify, and improve the capabilities of features discussed and created in a previous study by Barrameda and Relayson [13]. Further improvements will be made to existing features created by Barrameda and Relayson, such as the interactive survival guide, which includes the
guide for the admission process as well as a map with notable locations and events, and the chatbot. Improvements will include integrating these implementations into the system to allow ease of access in adding, editing, and deleting information on the website by the school administrator.

Website Analytics. The application will track website traffic to help institutions understand student behavior in browsing potential universities. This aims to provide school administrators with a tool to gather data regarding student behavior and identify key aspects to focus on for their web page. The data presented are expected to help school administrators identify which pages gather the most interest from guests.

Information Modules. Information modules will also be added to the application to show relevant information that boosts student recruitment based on international studies. Some of the most notable information requested and used by students during their decision process includes course availability and scholarship opportunities; thus the information modules will be created to display such information.

The scope of the application is limited to boosting student recruitment efforts for higher education institutions (HEIs) only and does not consider the state of student recruitment at lower levels of education and education institutions. As stated previously, the primary target of the application is HEIs with minimal to no online presence and with the goal of boosting their online presence, resulting in a boost to student recruitment.
3 Theoretical and Conceptual Framework
Figure 1 shows the framework of the proposed system. It illustrates how each module of the school pages will be implemented and the metrics that will be used to measure their efficacy. The Iskowela project will build on top of the research conducted by Barrameda and Relayson about Marketing and Engagement Tool for Educational Institutions [13]. It is designed to provide a more comprehensive and expanded set of features such as an interactive chatbot, events/places map, information module, and website analytics to better address the needs of educational institutions. The Chatbot feature will be implemented with natural language processing with the capability of agent-to-human handoff for cases when users ask for a live chat with school administrators. The Chatbot will also utilize the school’s database, containing the Frequently Asked Questions (FAQs) and basic school information, in order to answer inquiries from guests. Then, the Website Analytics will collect and analyze data based on the visitor activity on the website. The target output data for this module are the overall school profile visits, time spent on each module, visitors’ geolocations, total view per module, and the number of bookmarks. Meanwhile, the Events/Places module will display the school map with marker animations and polylines to visualize landmarks and events. Lastly,
the Information module will display the list of courses, scholarships, and process guides which are manually inputted by the school administrators. To measure and understand the effectiveness of the modules in student recruitment, the number of queries, engagement rate, user feedback, and satisfaction ratings will be collected to be used as metrics.
Fig. 1. Theoretical and Conceptual Framework.
4 Application Design and Implementation

4.1 Overview
As stated in the Objectives, the study aims to create a website that boosts online presence and student recruitment for higher education institutions and also serves as an online portal for potential students, simplifying the process of searching for important information regarding schools. Each school page includes the information modules, the events and places markers, the chatbot, and the web analytics module. These modules expand on the limited scope of the study done by Barrameda and Relayson and aim to provide more vital information required by students while also providing school administrators with a way to track student interest. In addition, the features will be modular in form and can easily be toggled on and off by the school administrators. This not only provides more control for the school administrators but also aims to simplify school page creation and resolve the issue of having to build each website individually. The main portal of the website will display the list of HEIs registered on Iskowela. This portion of the website acts as a directory for navigation toward each individual school page and aims to boost the social and online presence of the partnered schools.
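The per-school module toggling described above could be realized with a simple feature-flag mapping (a sketch under assumptions: the module names, class, and defaults below are illustrative, not the actual implementation):

```python
# Hypothetical per-school feature flags; the module names are assumed from Sect. 4.1.
DEFAULT_MODULES = {
    "information": True,
    "events_places": True,
    "chatbot": True,
    "analytics": True,
}

class SchoolPage:
    def __init__(self, name, modules=None):
        self.name = name
        # start from the defaults, then apply any school-specific overrides
        self.modules = {**DEFAULT_MODULES, **(modules or {})}

    def toggle(self, module, enabled):
        if module not in self.modules:
            raise KeyError(f"unknown module: {module}")
        self.modules[module] = enabled

    def enabled_modules(self):
        return [m for m, on in self.modules.items() if on]

page = SchoolPage("Sample University")
page.toggle("chatbot", False)  # an administrator switches the chatbot off
print(page.enabled_modules())  # -> ['information', 'events_places', 'analytics']
```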
4.2 Features
Account Management. There are two types of accounts that can log in to the app. Administrator accounts are assigned to school personnel and give access to modify the contents of the school page connected to the account, as well as access to the website analytics module. The second account type is the guest account, which is to be used by prospective and current students and allows viewing of information from school pages and bookmarking schools for easy tracking. If a visitor opts not to log in, they are automatically tagged as a guest-type user with no bookmarking capabilities.

Information Modules. The information module comprises three sub-modules: (1) Process Guides, (2) Course List, and (3) Scholarships. These features aim to provide potential students with the information that has the highest impact on their decision-making process [14]. Administrators are able to add, edit, and delete text or image contents of individual information modules. Guests can view the information presented in each of the information modules.

Events/Places Markers. This feature allows potential students to identify key events and locations of interest of the HEI through custom markers created by the school administrator using Mapbox. Events may be classified as offline or online, while locations may be classified using tags such as Food, Health, Shop, and many others. Administrators are able to add, edit, and delete markers. Guests can view the list of suggestions from other visitors and submit a form to suggest the addition, modification, or deletion of location or event markers.

Chatbot. This feature allows potential students to inquire about other basic information about the HEI that is not present in the other modules of the school page. Visitors may converse with the chatbot to obtain basic information with regard to the HEI.
In cases where the chatbot is unable to resolve the question or concern, the visitor can request a live chat with an available online system administrator.

Website Analytics. This feature is limited to the administrator view only and allows school administrators to identify and analyze website traffic and visitor behavior for their respective school pages. It allows administrators to monitor viewership and retention rates both for the school page in general and for individual module performance. The administrator can use this data to identify which features have the greatest effect and which features need the most attention. Analytics will involve overall school profile visits, views per module, time spent per module, visitor geolocation, and number of bookmarks. Administrators are able to view analytics on captured user behavior and data when viewing the school page. This module is not available to guest users (Figs. 2, 3, 4, 5, 6, 7, 8 and 9).
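The per-module metrics named above (views, time spent, visitor geolocation) could be aggregated along the following lines (a sketch; the class and event fields are assumptions, not the project's actual analytics implementation):

```python
from collections import defaultdict

class ModuleAnalytics:
    """Aggregate per-module metrics of the kind named in the text:
    view counts, time spent, and visitor geolocation tallies."""
    def __init__(self):
        self.views = defaultdict(int)      # module -> view count
        self.seconds = defaultdict(float)  # module -> total seconds spent
        self.geo = defaultdict(int)        # country code -> visit count

    def record(self, module, seconds, country):
        self.views[module] += 1
        self.seconds[module] += seconds
        self.geo[country] += 1

    def avg_time(self, module):
        # average seconds per view; 0.0 for modules with no recorded views
        return self.seconds[module] / self.views[module] if self.views[module] else 0.0

a = ModuleAnalytics()
a.record("courses", 30, "PH")
a.record("courses", 90, "PH")
a.record("scholarships", 45, "US")
print(a.views["courses"], a.avg_time("courses"))  # -> 2 60.0
```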
4.3 Website Prototype
Fig. 2. Main Portal.
Fig. 3. Login Page.
Fig. 4. School Landing Page.
Fig. 5. Admin Settings Page.
Fig. 6. Process Guides Module.
Fig. 7. Admin View of the Events Markers.
Fig. 8. Admin View of the Location Marker.
Fig. 9. Web Analytics Module.
4.4 Methodology
The website will primarily be developed using Python and the Django web framework with an SQLite database. The Django framework will handle various aspects of web development, including URL routing, request handling, database management, and rendering of HTML templates. Mapbox GL JS will be used in the development of the Events/Places module to display interactive maps with custom markers created by school administrators. This integration enables users to visualize and interact with location-based data on the website, providing an enhanced user experience. The chatbot will be developed separately using Rasa Open Source. Supervised learning will be utilized by providing sample inquiries, rules, and stories for training the chatbot model. Using the trained model, the chatbot should be able to identify the user intent and query the correct information from the database.
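The intent-then-answer flow described above, including the agent-to-human handoff of Sect. 4.2, can be sketched with a minimal keyword matcher. This is not Rasa; the intents, keywords, and FAQ entries below are illustrative assumptions only.

```python
# Minimal keyword-based intent matcher with human handoff (illustrative only;
# the real system trains a Rasa model instead of matching keywords).
FAQ = {
    "admissions": "Applications open in March; see the Process Guides module.",
    "scholarships": "Scholarship listings are in the Information module.",
}
KEYWORDS = {
    "admissions": {"admission", "admissions", "apply", "application", "enroll"},
    "scholarships": {"scholarship", "scholarships", "grant", "financial"},
}
HANDOFF = "Let me connect you to a live school administrator."

def answer(message: str) -> str:
    # normalize: lowercase and strip trailing punctuation from each word
    words = {w.strip("?!.,") for w in message.lower().split()}
    for intent, kws in KEYWORDS.items():
        if words & kws:  # any keyword for this intent appears in the message
            return FAQ[intent]
    return HANDOFF       # unresolved query: agent-to-human handoff

print(answer("How do I apply?"))           # admissions answer
print(answer("Do you offer dorm rooms?"))  # falls back to handoff
```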
5 System Evaluation
Iskowela will undergo a comprehensive system evaluation plan encompassing validation testing and user testing. The validation testing will employ module-based and process-based approaches to ensure the proper functioning and independence of individual modules, as well as a seamless user experience across interconnected processes. User testing will follow, utilizing a well-structured questionnaire based on the PIECES and EUCS methods. This questionnaire will assess functionality, effectiveness, and ease of use, providing insights into accessibility, accuracy, comprehensiveness, and user-friendliness. The user testing setup will involve a diverse target audience, including senior high school students, lower-year college students, and school personnel. Users will receive an introductory document and be assigned tasks based on their roles as administrators or guests. They will then explore the website independently and provide feedback through the questionnaire, evaluating their experience, the user interface, and overall satisfaction. By following this systematic evaluation process, valuable insights will be gathered from the target audience, allowing for a thorough assessment of Iskowela's performance, usability, and alignment with user expectations.
6 Conclusion and Recommendations
The proposed Iskowela project aims to address the challenges faced by higher education institutions in the Philippines regarding student recruitment and school marketing. By offering HEIs a personalized school profile and a platform to showcase essential information, Iskowela will enable them to enhance their online presence and attract qualified students. Additionally, the online portal for students will provide a convenient way to browse through a catalog
of recruiting schools. To ensure the project meets the needs of both students and educational institutions, Iskowela incorporates various features such as an interactive chatbot, an events/places map, an information module, and website analytics. Iskowela aims to fill a significant gap in the current research landscape by providing a timely and cost-effective solution for student recruitment in the Philippines. While Iskowela offers several potential benefits, it is important to acknowledge the limitations of the project. Being a newly launched platform, Iskowela may face challenges in effectively raising awareness of the site and establishing trust with potential students. Furthermore, Iskowela being developed in English rather than in Filipino, as well as the lack of reliable internet connectivity in rural areas and low-income communities of the country, could reduce the accessibility of the platform to potential students and could become hurdles in accomplishing the core objective of enhancing the online presence of higher education institutions (HEIs). To further enhance the user experience for both students and school administrators, future research can integrate gamification elements similar to those seen on Edukasyon.ph to boost user engagement and motivation when using the site. This could include incorporating rewards, badges, leaderboards, and interactive challenges that encourage students to explore more institutions and engage with the platform. Finally, additional research can be conducted to explore the local variables influencing the decision-making process of Filipino students regarding higher education. Uncovering distinctive factors within the Philippine context can result in a more efficient approach to addressing the challenges associated with student recruitment.
References

1. Vargas, C.: What is student recruitment? Student recruitment is... (2022). https://www.webinarleads4you.com/what-is-student-recruitment/
2. Clinedinst, M.: 2019 State of College Admission. National Association for College Admission Counseling (2019). http://nacacnet.org/wp-content/uploads/2022/10/soca2019 all.pdf
3. Dagumboy, E., Eden, C.: ECD-IMC: an integrated marketing communications model for selected Philippine higher education institutions. Jurnal Studi Komunikasi 6(3), 719–738 (2022)
4. Gallagher, S., Palmer, J.: The pandemic pushed universities online. The change was long overdue (2020). https://hbr.org/2020/09/the-pandemic-pushed-universitiesonline-the-change-was-long-overdue
5. Sharp, P.S.: Change champions for student recruitment: leader experiences in managing change for new technology adoption. Ph.D. thesis, University of Missouri Columbia (2018). https://doi.org/10.32469/10355/66190
6. Winer, R., Joyce, W.: Postsecondary marketing in the 21st century: tips and common mistakes (2019). https://evolllution.com/attracting-students/marketing branding/postsecondary-marketing-in-the-21st-century-tips-and-common-mistakes/?affiliate=
7. James, M.: International student recruitment during the pandemic: the unique perspective of recruiters from small to medium-sized higher education institutions. High Educ. Pol. (2022). https://doi.org/10.1057/s41307-022-00271-3
8. Albright, E.R., Schwanke, E.A.: University admissions leaders rethink recruitment strategies in the wake of COVID-19. J. Adv. Educ. Pract. 2(1), 4 (2021)
9. Commission on Higher Education, Republic of the Philippines: List of Higher Education Institutions (2021). https://ched.gov.ph/list-of-higher-education-institutions/
10. Vijayan, A.: Why a quality website is important for your educational institute? (2021). https://www.graffiti9.com/blog/importance-of-having-a-website/
11. Cuthbert, N.: Which are the most-visited student recruitment websites? The Pie News (2022). https://thepienews.com/analysis/who-are-the-most-visited-studentrecruitment-websites-according-to-similarweb/
12. Motte-Munoz, H.: Edukasyon.ph (2022). https://www.edukasyon.ph/
13. Barrameda, J.C., Relayson, E.E.: Marketing and engagement tool for educational institutions. Ph.D. thesis, University of the Philippines, Diliman (2020)
14. Lampley, J., Owens, M.: Website study: what information are prospective graduate students seeking? J. Acad. Adm. High. Educ. 11(2), 55–59 (2015)
Meta-features Based Architecture for the Automatic Selection of Prediction Models for MOOCs

Houssam Ahmed Amin Bahi1(B), Karima Boussaha2, and Zakaria Laboudi3

1 Department of Mathematics and Computer Science, Research Laboratory On Computer Science's Complex Systems (ReLa(CS)2), University of Oum El Bouaghi, Oum El Bouaghi, Algeria
[email protected]
2 Department of Mathematics and Computer Science, University of Oum El Bouaghi, Oum El Bouaghi, Algeria
[email protected]
3 Department of Networks and Telecommunications, University of Oum El Bouaghi, Oum El Bouaghi, Algeria
[email protected]
Abstract. It is rather a truism that the student dropout prediction (SDP) problem has been at the core of a plethora of studies; nevertheless, there seems to be insufficient research on the main issue, which is the lack of standard and unified features for anticipating online students' dropout, as most research in this area treats merely case studies. In essence, this study aims at presenting an architecture based on meta-features and meta-data for better selection of the model(s) used to predict, and intervene in, dropout in massive open online courses (MOOCs).

Keywords: MOOCs · Machine Learning · Dropout Prediction · Meta-features · Meta-data · Transfer Learning · Random Forest · Decision Tree
1 Introduction

As a means to amplify the privileges and potential of MOOCs, improve the standard of educational instruction, and increase the number of learners, the problem of predicting students' dropout should be strongly emphasized [4]. In this respect, numerous studies have been conducted, from the feature engineering phase to the prediction phase, winding up with proposing strategies as intervention measures for the purpose of examining the aforementioned issue. However, the majority of this research essentially trains a model or models grounded on experience and intuition, seeking accuracy on specific dataset(s) and case studies and neglecting the goal of a unified approach from the feature engineering phase to the prediction phase. Consequently, our study seeks to identify a general strategy appropriate for all MOOCs from all sectors and topics, starting with e-health MOOCs, our application field [6].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 331–337, 2023. https://doi.org/10.1007/978-3-031-44146-2_35
332
H. A. A. Bahi et al.
In this regard, it is crucial to carry out a trustworthy investigation into the factors that have the greatest influence on student behavior and, thus, dropout rates, and to select the model or models that will produce the best results. Moreover, it is paramount to point out that the prediction phase will receive the majority of attention in this research paper, along with some intriguing leads on how to approach the feature selection phase. To accomplish our goal, many research questions requiring in-depth exploration should be addressed, among them:

– What is the best method(s) or approach(es) to select the most important features?
– Due to the current lack of a unified standard, many available datasets lack important features; how can we compensate for this issue through a systematic and automatic model selection approach?
– How can we use meta-data and meta-features to select better models?
– How can we combine previous studies and bodies of work in an effective way to solve prediction and dropout issues in MOOCs?

The rest of this paper is organized as follows: in the second section, we present the state of the art on feature selection and the most used machine learning methods. In the third section, an architecture based on meta-features is proposed in order to choose the best model. Last but not least, the fourth section is reserved for the conclusion, ongoing and future work regarding the proposed architecture, and other research orientations.
2 State of the Art The features mentioned and studied in the literature fall into two categories: those chosen by considering all collected features, or a subset of them, based on intuition, experience, or manual testing; and those selected using particular methods and approaches. However, even with automatic selection methods to choose the high-impact features, no consensus has been reached about which features should be taken into account, for two main reasons. First, there is no agreement among educational data mining (EDM) experts, or experts from the educational field in general, that we have all the crucial features, though some promising but poorly investigated directions, such as interpretable deep methods, can help uncover hidden features and patterns [1]. Second, the selection of features directly influences the model and machine learning algorithm used. This is one of the reasons why conducting a comparative study is challenging, further complicated by the various evaluation metrics used to rate the results attained. It is vital to address the issue at hand, namely the factors that provoke and cause dropout, according to research based on experiments and social studies, in order to gain insight into the characteristics that predict dropout. According to Goel and Goyal [2], the two primary categories of factors contributing to dropout are those connected to students and those associated with MOOCs. Regarding the student-related factors, we identify a lack of motivation and a lack of time as the main causes. For MOOC-related issues, we have the course layout and a lack of interaction in MOOCs. Panagiotakopoulos et al.
Meta-features Based Architecture for the Automatic Selection
[3] mentioned that the factors affecting dropout are a lack of options in the learning platform, a lack of interactions, the level and type of language used, and the course design. Researchers extract and select their collection of features based on these causes and their availability, some relying on automatic selection techniques. In the literature, there are many different ways to categorize features. The taxonomy used in [1] is the most understandable and readable, subdividing features into two large families: Plain Modelisation and Sequence Labelling. As a result of there being no standard convention for which features should appear in datasets, different combinations of features exist across datasets. Consequently, this work addresses this issue and attempts to compensate for it by improving the prediction phase, tackling the problem at a meta-level rather than at the case-study level. We begin by citing the work of Prenkaj et al. [1], who conducted a thorough review of the literature and listed the traditional machine learning prediction algorithms. In the same line of thought, there are several LR-based works, such as Gitinabard et al. [5], studies on SVM by Mourdi et al. [7], studies using decision trees and naive Bayes [8–11], and studies using neural networks [12, 13]. To enhance our investigation, we also refer to the survey by Xiao and Hu [14]. Furthermore, for works based on ensembles, Berens et al. [15] combined linear regression, decision trees, and neural networks to improve performance. Conventional methods, however, continue to predominate in the literature.
3 Methodology and Expected Outcome As previously mentioned, due to the lack of unification of the features that should be contemplated when predicting online students' dropout, researchers embrace various combinations of features even when using the same datasets. This is a crucial problem and a big challenge in the field of EDM. Therefore, until a unification of which features should be chosen when constructing a MOOC is produced, an alternative method to anticipate students' withdrawals in online courses is needed. This method must be measurable and adaptable to distinct cases. Besides, even after unified feature standards are created, an approach to benefit from previous studies in the MOOC field is highly required. Furthermore, it is observed in the literature that there are various models of student dropout prediction (SDP), even on the same dataset(s), due to the different features used in predicting students' dropout. Hence, to benefit from previously conducted studies, we propose treating them as metadata, in which each study is considered an example of our meta raw data, along with using our own experiments and future studies to create a well-balanced and reliable dataset. In this way, even after a unification of standards, the proposed approach will remain valuable in benefiting from past studies and serving as a building block for future ones. The challenges of the proposed approach are as follows: 1. The creation of a balanced and rich meta-dataset. 2. The model used to train the meta-dataset. 3. The overall architecture combining the meta-dataset with the dataset to obtain the best model for it.
4. How to measure the effectiveness of the proposed architecture, and what are its possible limitations? For the first three points, we provide a rule-based architecture for selecting the optimal model or models using meta-data and meta-features. Two types of inputs are used in the proposed architecture: 1) inputs linked to the dataset, and 2) inputs related to the trained model. The dataset-related inputs, on the one hand, are the meta-data of the dataset. For instance, the dataset's size, the number of features, the linearity of the data, the likelihood of outliers, etc., all change from one dataset to the next. On the other hand, the model-related inputs are the meta-data of the collected datasets, used as training data. Given the meta-features MF = {the size of the dataset, the number of features, the linearity of the data, the correlation possibility, the type of the target value, …}, the inputs related to the dataset DMF, whose suitable model we will predict, contain the values of these meta-features, while the model's inputs form a meta-dataset. Last but not least, there is the proposed guide model, which operates as an attention layer and proposes potential models for each value of a meta-feature MFi (Table 1).
MF = {MF1, …, MFn}, DMFi = {D1, …, Dm}, MDj = {M1, …, Mp}, where n, m, p ∈ ℕ.
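To make the dataset-related inputs concrete, the sketch below (our own illustration, not from the paper) computes a few of the meta-feature values listed in MF for a toy dataset. The feature names and the size cut-off are assumptions chosen for the example.

```python
# Illustrative sketch: computing a meta-feature vector for a dataset D_MF.
# The "big"/"small" threshold and the categorical-target heuristic are
# assumptions, not values from the paper.

def extract_meta_features(rows, target):
    """Return a dict of simple meta-features for a tabular dataset.

    rows   : list of equal-length numeric feature lists
    target : list of target values, one per row
    """
    n_rows = len(rows)
    n_features = len(rows[0]) if rows else 0
    # Few distinct target values -> treat the task as classification.
    target_type = "categorical" if len(set(target)) <= 10 else "numerical"
    return {
        "size": "big" if n_rows >= 100_000 else "small",  # assumed cut-off
        "n_features": n_features,
        "target_type": target_type,
    }

# Toy dataset: 4 rows, 3 features, binary target.
rows = [[1.0, 2.0, 3.0], [2.0, 1.0, 0.5], [0.1, 0.2, 0.3], [5.0, 4.0, 3.0]]
target = [0, 1, 0, 1]
print(extract_meta_features(rows, target))
```

Each study in the meta-dataset would contribute one such meta-feature vector, paired with the model that performed best in that study.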
Table 1. The inputs used in the proposed architecture.

Meta features MF    | Inputs of the dataset DMF | Inputs of the proposed guide model MD
Size of the dataset | Big                       | Deep Learning, KNN, RF, …
                    | Small                     | LR, NB, Logistic Regression, …
Number of features  | High                      | Linear SVM, LR, NB, …
                    | Low                       | Kernel SVM, NN, RF, DT, …
…                   | …                         | …
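The guide model's role, pairing each (meta-feature, value) combination with candidate model families, can be sketched as a simple lookup. The entries below follow Table 1; encoding them as a dictionary is our own illustration, not the paper's implementation.

```python
# Sketch of the proposed guide model as a lookup table mirroring Table 1.
# The model-family lists come from the table; the dict encoding is an
# assumption for illustration.

GUIDE = {
    ("size", "big"):        ["Deep Learning", "KNN", "RF"],
    ("size", "small"):      ["LR", "NB", "Logistic Regression"],
    ("n_features", "high"): ["Linear SVM", "LR", "NB"],
    ("n_features", "low"):  ["Kernel SVM", "NN", "RF", "DT"],
}

def suggest_models(meta_features):
    """Collect the candidate models for each (meta-feature, value) pair."""
    return [GUIDE.get((key, value), []) for key, value in meta_features.items()]

print(suggest_models({"size": "small", "n_features": "high"}))
```

A later ranking step (the attention mechanism described below Table 1 in the paper) would then weigh these per-feature suggestions against each other.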
In the same fashion, the meta-feature values can be either numerical or categorical. An attention mechanism applied to the meta-features will rank the suggested models at the end. The pre-trained model is trained on previous experiments, as well as on future experiments that we will run on open datasets to enrich our dataset. The base model used is a decision tree, because it is a very powerful, rule-based model with few limitations, all of which can be controlled. Moreover, because some features are more important than others, we test combinations on multiple decision trees, in essence creating a random forest.
The tree's nodes yield the recommended models, which means several models may be produced. Besides, for more accurate outcomes, we combine the random forest with a voting system (see Fig. 1).
Fig. 1. The base model for the proposed architecture.
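The voting step described above can be sketched in a few lines. This is a minimal illustration of majority voting over the model families predicted by the forest's trees; the per-tree outputs below are hypothetical, and the actual base model in the paper is a trained decision-tree ensemble.

```python
# Sketch of the voting layer over a random forest's per-tree predictions.
# Tree outputs here are hypothetical placeholders.

from collections import Counter

def vote(tree_predictions):
    """Return the model family predicted by the majority of trees."""
    counts = Counter(tree_predictions)
    winner, _ = counts.most_common(1)[0]
    return winner

# Hypothetical per-tree outputs for one meta-feature vector.
trees = ["RF", "Kernel SVM", "RF", "DT", "RF"]
print(vote(trees))
```

In practice each tree would be trained on the meta-dataset, and the vote aggregates their individual model recommendations.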
Considerations and prerequisites that must be met include:
– The training dataset must be balanced to avoid the bias issues of DTs;
– The choice of features, with their types, is of great importance to avoid information loss during training;
– DTs are highly sensitive to outliers;
– It is crucial to keep in mind that dropout intervention, not prediction, is the goal. Some performance-oriented solutions are black boxes, which means losing the capacity to identify the factors that enabled such performance;
– In conclusion, the most crucial phase is the production of the meta-dataset, which must be simple, devoid of outliers, balanced, and specific in its choice of features (Fig. 2).
Fig. 2. The proposed architecture for the suggested Model.
– The meta-data of the new dataset is generated by the user and given to the model.
– The proposed guide model contains data that works as an attention layer to help the training, as shown in Table 1.
– To increase the flexibility of the architecture, we separate the proposed guide model from the pre-trained model.
– The pre-trained model is trained using meta-data from earlier studies, together with our own experiments.
– The suggested model is produced, and the dataset is given to it for training.
Limitations and possible solutions of the proposed architecture:
– The architecture does not address the hyperparameter optimization and tuning phase during the selection process; only after the proposed model is generated can its hyperparameters be tuned. Therefore, this limitation must be addressed both in the creation of the meta-dataset and in the selection process.
– The architecture and the meta-dataset do not cover hybrid models, which can guarantee better results than traditional models; this should be taken into consideration while creating the meta-dataset. Besides, it is paramount to point out that hardly any work has been done on the use of meta-heuristic methods for the SDP problem.
4 Conclusion, Ongoing, and Future Work To propose a trustworthy and ideal set of features for upcoming work on EDM, and a reference towards a global approach for MOOCs using various methods, including interpretable deep methods [1] and others, a more thorough and in-depth study of feature selection will be conducted. After that, a more focused study will use findings from the literature, together with high-scale testing of our own, to produce well-designed meta-features and meta-datasets, in terms of which criteria should be chosen to select the most relevant studies, as well as to test and improve the architecture suggested in this work to ensure stability and effectiveness in all cases. It should be noted that this approach is applicable to all fields and domains once the previously mentioned conditions are met. With some work already done on this subject, investigations focusing on new-generation deep methods in conjunction with meta-heuristic and automatic optimization methods turn this into a combinatorial optimization problem worth examining and exploring as a research orientation, since very few works have addressed the SDP problem in this depth. This work will offer value and solutions for the most crucial aspects, based on research and experiments, to non-specialists in the educational field; it addresses the problem of model selection in particular for non-specialists in machine learning; and it will finally propose a full and comprehensive intervention mechanism, including content, design, and course interactivity, to increase the completion rate. Acknowledgments. I would like to thank Rania Khattab for her constructive criticism of the manuscript of this paper.
References
1. Prenkaj, B., Velardi, P., Stilo, G., Distante, D., Faralli, S.: A survey of machine learning approaches for student dropout prediction in online courses. ACM Computing Surveys (CSUR) 53(3), 1–34 (2020)
2. Goel, Y., Goyal, R.: On the effectiveness of self-training in MOOC dropout prediction. Open Computer Sci. 10(1), 246–258 (2020)
3. Panagiotakopoulos, T., Kotsiantis, S., Kostopoulos, G., Iatrellis, O., Kameas, A.: Early dropout prediction in MOOCs through supervised learning and hyperparameter optimization. Electronics 10(14), 1701 (2021)
4. Chi, Z., Zhang, S., Shi, L.: Analysis and prediction of MOOC learners' dropout behavior. Appl. Sci. 13(2), 1068 (2023)
5. Gitinabard, N., Khoshnevisan, F., Lynch, C.F., Wang, E.Y.: Your actions or your associates? Predicting certification and dropout in MOOCs with behavioral and social features. arXiv preprint arXiv:1809.00052 (2018)
6. Alamri, A., Sun, Z., Cristea, A.I., Senthilnathan, G., Shi, L., Stewart, C.: Is MOOC learning different for dropouts? A visually-driven, multi-granularity explanatory ML approach. In: Intelligent Tutoring Systems: 16th International Conference, ITS 2020, Proceedings, pp. 353–363. Springer, Athens, Greece (2020)
7. Moreno-Marcos, P.M., Muñoz-Merino, P.J., Maldonado-Mahauad, J., Pérez-Sanagustín, M., Alario-Hoyos, C., Kloos, C.D.: Temporal analysis for dropout prediction using self-regulated learning strategies in self-paced MOOCs. Comput. Educ. 145, 103728 (2020)
8. Chen, J., Feng, J., Sun, X., Wu, N., Yang, Z., Chen, S.: MOOC dropout prediction using a hybrid algorithm based on decision tree and extreme learning machine. Mathematical Problems in Engineering 2019 (2019)
9. Xing, W., Chen, X., Stein, J., Marcinkowski, M.: Temporal predication of dropouts in MOOCs: reaching the low hanging fruit through stacking generalization. Comput. Hum. Behav. 58, 119–129 (2016)
10. Mourdi, Y., Sadgal, M., Fathi, W.B., El Kabtane, H.: A machine learning based approach to enhance MOOC users' classification. Turkish Online J. Distance Education (TOJDE) 21(2), 47–68 (2020)
11. Al-Shabandar, R., Hussain, A., Laws, A., Keight, R., Lunn, J., Radi, N.: Machine learning approaches to predict learning outcomes in massive open online courses. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp. 713–720. IEEE (2017)
12. Xu, C., Zhu, G., Ye, J., Shu, J.: Educational data mining: dropout prediction in XuetangX MOOCs. Neural Processing Letters 54(4), 2885–2900 (2022)
13. Wu, N., Zhang, L., Gao, Y., Zhang, M., Sun, X., Feng, J.: CLMS-Net: dropout prediction in MOOCs with deep learning. In: Proceedings of the ACM Turing Celebration Conference, pp. 1–6. ACM, China (2019)
14. Xiao, W., Hu, J.: A state-of-the-art survey of predicting students' performance using artificial neural networks. Engineering Reports, e12652 (2023)
15. Berens, J., Schneider, K., Görtz, S., Oster, S., Burghoff, J.: Early detection of students at risk – predicting student dropouts using administrative student data from German universities and machine learning methods. J. Educ. Data Mining 11(3), 1–41 (2019)
Mapping of Robustness Diagram with Loop and Time Controls to Petri Net with Considerations on Soundness Cris Niño N. Sulla and Jasmine A. Malinao
Division of Natural Sciences and Mathematics, University of the Philippines Tacloban College, Tacloban, Philippines {cnsulla,jamalinao1}@up.edu.ph
Abstract. The Robustness Diagram with Loop and Time Controls is a multidimensional workflow diagram capable of representing the resource, process, and case dimensions in one model. To improve support for its automated verification, a method of decomposing the model into a Petri net, by mapping its components to Petri net components, was defined in previous literature. This mapping, however, is not complete with respect to model components that help control the reachability or reuse of substructures therein. Furthermore, the mapping also outputs unsound PNs. This research addresses these gaps in the mapping of the Robustness Diagram with Loop and Time Controls to Petri nets. Additionally, both mappings are checked and compared for both classical and relaxed soundness, in order to glean insight into the implications of mapping the Robustness Diagram with Loop and Time Controls to Petri nets with respect to soundness across workflow models. Keywords: Workflow · Robustness Diagram with Loop and Time Controls · Petri Net · Soundness
1 Introduction 1.1 Background of the Study A workflow, as defined by the Workflow Management Coalition [4], is "the automation of a business process, in whole or part, during which documents, information or tasks are passed from one participant to another for action, according to a set of procedural rules". There are three workflow dimensions, namely process, resource, and case, that help describe a system. The process dimension specifies tasks and how they relate to each other; the resource dimension specifies objects and their roles; and the case dimension specifies groups of system components enacted from a reference time point until their intended output is produced. Workflows are important tools used to better understand and analyze different types of systems in the scientific and business domains. Over the years, there have been a number of advancements and the introduction of many different workflows suited for different purposes. Among these is the Robustness Diagram with Loop and Time © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 338–353, 2023. https://doi.org/10.1007/978-3-031-44146-2_36
Controls (RDLT), which is an extension built upon the Robustness Diagram. The RDLT introduces the concept of reset-bound subsystems (RBS), which perform the function of cancellation regions within workflows. An RBS is induced by the value of the M-attribute of a vertex in an RDLT, which will be discussed in more depth later. RDLTs also allow limiting the number of traversals on an arc via the L-attribute. With this, the RDLT considers the use and representation of each of the three workflow dimensions, process, case, and resource, in a single model. It is a powerful workflow model capable of representing complex systems [4]. Real-world uses of RDLTs include the RDLT model of the Adsorption Chiller system [4], the use of RDLTs to profile the efficiency of the Philippine Integrated Disease Surveillance and Response (PIDSR) system [3], and the development of matrix representations of RDLTs [2]. Another type of workflow is the Petri net (PN). PNs provide a graphical notation showing step-by-step computations that are performed when conditions are satisfied, given some initial input configuration in the net [4]. PNs represent the process and case dimensions of workflows. Studies have been conducted to improve support for RDLT analysis and verification. Currently, there exists a mapping from RDLT to PNs defined in the literature [10]. This mapping was developed by Yiu et al. (2018) and maps RDLT components to PN components according to an algorithm. A list of PN structures was identified by the researchers to verify the correctness of the mapping. This mapping, however, does not include the L- and M-attributes of RDLTs, meaning the resulting PN supports neither resets nor traversal limits on arcs. This paper aims to formulate a novel mapping for the L- and M-attributes of RDLT to PN in order to achieve a complete mapping of RDLT components to PN components. This study will also report on the implications of L- and M-attribute mapping for the soundness of PNs.
1.2 Basic Definitions and Notations Robustness Diagram with Loop and Time Controls. The Robustness Diagram with Loop and Time Controls (RDLT) is a workflow model that represents all three workflow dimensions. It is formally defined in Definition 1. Definition 1. An RDLT is a graph representation R of a system that is defined as R = (V, E, Σ, C, L, M) where:
– V is a finite set of vertices, where each vertex has a type Vtype: V → {'b', 'e', 'c'}, where 'b', 'e', and 'c' mean the vertex is a "boundary object", an "entity object", or a "controller", respectively.
– E ⊆ (V × V) \ E′ is a finite set of arcs, where E′ = {(x, y) | x, y ∈ V, Vtype(x) ∈ {'b', 'e'}, Vtype(y) ∈ {'b', 'e'}}, with the following attributes with user-defined values:
– C: E → Σ ∪ {ε}, where Σ is a finite non-empty set of symbols and ε is the empty string. Note that for real-world systems, a task v ∈ V, i.e. Vtype(v) = 'c', is executed by a component u ∈ V, Vtype(u) ∈ {'b', 'e'}. This component-task association is represented by the arc (u, v) ∈ E where C((u, v)) = ε. Furthermore, C((x, y)) ∈ Σ represents a constraint to be satisfied to reach y from x. This constraint can represent either an input requirement or a parameter C((x, y)) which needs to be satisfied to proceed from
using the component/task x to y. C((x, y)) = ε represents a constraint-free process flow to reach y from x, or a self-loop when x = y.
– L: E → ℤ is the maximum number of traversals allowed on the arc.
– T is a mapping such that T((x, y)) = (t1, …, tn) for every (x, y) ∈ E, where n = L((x, y)) and ti ∈ ℕ is the time a check or traversal is done on (x, y) by some algorithm's walk on R.
– M: V → {0, 1} indicates whether u ∈ V and every v ∈ V where (u, v) ∈ E and C((u, v)) = ε induce a subgraph Gu of R known as a reset-bound subsystem (RBS). The RBS Gu is induced with the said vertices when M(u) = 1. In this case, u is referred to as the center of the RBS Gu. Gu's vertex set VGu contains u and every such v, and its arc set EGu has (x, y) ∈ E if x, y ∈ VGu. An arc (x, y) ∈ E is said to be a bridge of Gu if and only if (1) x is not a vertex in Gu but y is, in which case (x, y) is an in-bridge of y in Gu, or (2) x is a vertex in Gu but y is not, in which case (x, y) is an out-bridge of x in Gu. A pair of arcs (a, b) and (c, d) are type-alike (with respect to y) if and only if (1) y is present in both arcs, and (2) either both arcs are bridges of y in Gu or both are not.
Fig. 1. An RDLT with 5 controllers (vertices y1, y2, y3, y4, y5), 1 boundary object (vertex x1), 1 entity object (vertex x2), and T-attributes in red text. Vertex x2 is the center of a reset-bound subsystem, with owned controllers y4 and y5.
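A minimal container for R = (V, E, Σ, C, L, M) along the lines of Definition 1 can be sketched as follows. The example populates only the RBS portion described in Fig. 1's caption (center x2 with owned controllers y4 and y5); the arc labels and L-values used are assumptions, since the figure's full arc set is not reproduced here.

```python
# Sketch of an RDLT data structure mirroring Definition 1.
# Sigma labels are plain strings; '' stands in for the empty string epsilon.

class RDLT:
    def __init__(self):
        self.vtype = {}   # vertex -> 'b' | 'e' | 'c'
        self.C = {}       # arc (x, y) -> symbol in Sigma, or '' (epsilon)
        self.L = {}       # arc (x, y) -> max number of traversals
        self.M = {}       # vertex -> 0 or 1 (1 = center of an RBS)

    def add_vertex(self, v, vtype, m=0):
        self.vtype[v] = vtype
        self.M[v] = m

    def add_arc(self, x, y, c='', l=1):
        self.C[(x, y)] = c
        self.L[(x, y)] = l

    def rbs_vertices(self, u):
        """Vertices of the RBS G_u: u plus every v with an epsilon-labelled
        arc (u, v), provided M(u) = 1 (per Definition 1)."""
        if self.M.get(u) != 1:
            return set()
        return {u} | {y for (x, y), c in self.C.items() if x == u and c == ''}

r = RDLT()
r.add_vertex('x2', 'e', m=1)          # entity object, RBS center (Fig. 1)
for v in ('y4', 'y5'):
    r.add_vertex(v, 'c')              # controllers owned by x2
    r.add_arc('x2', v, c='', l=1)     # epsilon arcs induce the RBS (L assumed)
print(sorted(r.rbs_vertices('x2')))
```

The T-attribute is omitted here since it is populated only during an algorithm's walk on R.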
Activity Extraction in RDLTs Definition 2. Reachability Configuration [4]. A reachability configuration S(t) in R contains the arcs traversed at time step t ∈ ℕ. Definition 3. Activity Profile [4]. We call a set S = {S(1), S(2), …, S(d)} of reachability configurations, d ∈ ℕ, an activity profile in R where ∃(u, v) ∈ S(1) and (x, y) ∈ S(d) such that ∄w, z ∈ V where (w, u), (y, z) ∈ E.
Given R, a start vertex s, and a goal vertex f, the following algorithm outputs an activity profile S if there exists an activity that can be extracted given the vertices of R, and ∅ otherwise. Algorithm A is the activity extraction algorithm defined in [4].
Vertex Simplification The process of converting each of the vertices in an RDLT into controllers and abstracting the RBS to obtain a vertex-simplified RDLT G. Definition 4. Vertex-simplified RDLT [4]. A vertex-simplified RDLT G = (V′, E′, C′) of R = (V, E, Σ, C, L, M) is a multidigraph whose vertices v ∈ V′ have Vtype(v) = 'c', where G is derived from R such that the following holds:
1. x ∈ V′ if any of the following holds:
– x ∈ V and x ∉ VGu of an RBS Gu in R, or
– there exists an in-bridge (q, x) ∈ E of x ∈ V ∩ VGu, q ∈ V of R, or
– there exists an out-bridge (x, q) ∈ E of x ∈ V ∩ VGu, q ∈ V of R
2. (x, y) ∈ E′ with C′((x, y)) = C((x, y)) for x, y ∈ V′ if (x, y) ∈ E
3. C′((x, y)) = ε if x, y ∈ V′ ∩ VGu and x is an ancestor of y in R, (x, y) ∉ EGu. Extended RDLT Definition 5. Extended RDLT [4]. An extended RDLT R′ = (V′, E′, Σ′, C′, L′, M′) is derived from R such that
1. V′ = V ∪ {i} ∪ {o}, where i and o are dummy source and sink vertices with Vtype(i) = Vtype(o) = 'c'.
2. E′ = E ∪ {(i, u)} ∪ {(x, o)}, ∀u ∈ I \ {i}, I = {u ∈ V | u is a source in V}, with C′((i, u)) = ε and L′((i, u)) = 1; and ∀x ∈ O \ {o}, O ⊂ V, O = {x ∈ V | x is a sink in V}, with C′((x, o)) = x−o, Σ′ := Σ ∪ {x−o}, and L′((x, o)) = 1. Note that the new labels are distinguishable from each other.
RDLT Soundness Definition 6. Relaxed RDLT Soundness [4]. Suppose an activity profile S is extracted from an RDLT R with a final time step k. R is relaxed sound if the following conditions hold for its included arcs and vertices:
1. From each vertex, there should exist a path to the sink vertex.
2. No arc that does not end at the final vertex is included in the last reachability configuration S(k) in S.
3. An arc (x, y) is always traversed at a time t < k.
4. All arcs in R are used at least once in S.
The RDLT shown in Fig. 1 is relaxed sound because it satisfies each of the conditions for relaxed soundness. Definition 7. Classical RDLT Soundness [5]. Suppose an activity profile S is extracted from an RDLT R, with a final time step k and a final output vertex f ∈ V. R is classical sound if the following conditions hold for its included arcs and vertices:
1. Proper termination: For every (x, y) ∈ S(i), i = 1, 2, …, k, ∃(y, z) ∈ S(j), i + 1 ≤ j ≤ k. Furthermore, for every (x, y) ∈ S(k), y = f.
2. Liveness: For every (x, y) ∈ E, there is an activity profile S′ = {S′(1), S′(2), …, S′(k′)}, k′ ∈ ℕ, such that (x, y) ∈ S′(i), 1 ≤ i ≤ k′.
Given this, the RDLT shown in Fig. 1 is classical sound as it satisfies the conditions for classical soundness. 1.3 Petri Nets and Workflow Nets Petri Net.
A Petri net is a 4-tuple (P, T, F, M0), where P and T are finite sets of places and transitions such that P ∩ T = ∅, F ⊆ (P × T) ∪ (T × P) is a set of flow relations, and M0: P → ℕ is its initial marking. A marking is defined by the function M: P → ℕ
where M(p) denotes the number of tokens in p ∈ P. M0 →σ Mn means a firing sequence σ = t1, t2, …, tn−1 leads to Mn from M0. A firing sequence σ is the sequence of all transitions fired, in order, to reach a marking. Enabling of transitions: a transition t is enabled in marking M if and only if, for each place p such that (p, t) ∈ F, M(p) ≥ 1. Firing of transitions: if a transition t is enabled in marking M, it can fire and result in a new marking M′ where M′(p) = M(p) − F(p, t) + F(t, p) for each p ∈ P, with F(x, y) = 1 if (x, y) ∈ F and 0 otherwise (Fig. 2).
Fig. 2. Components of a PN. [1]
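The enabling and firing rules above amount to a simple token game, which the following sketch implements for an ordinary PN (all arc weights 1). The one-transition net used as the example is our own.

```python
# Minimal token-game sketch of the enabling/firing rules for an ordinary
# Petri net (all flow-relation weights equal to 1).

class PetriNet:
    def __init__(self, flows, marking):
        self.flows = set(flows)        # set of (source, target) pairs in F
        self.marking = dict(marking)   # place -> token count

    def enabled(self, t):
        """t is enabled iff every input place of t holds at least 1 token."""
        inputs = [p for (p, x) in self.flows if x == t]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, t):
        """Consume one token per input place, produce one per output place."""
        if not self.enabled(t):
            raise RuntimeError(f"transition {t} is not enabled")
        for (p, x) in self.flows:
            if x == t:
                self.marking[p] -= 1   # M'(p) = M(p) - F(p, t)
        for (x, p) in self.flows:
            if x == t:
                self.marking[p] += 1   # ... + F(t, p)

# Net: place p1 --> transition t --> place p2, one token in p1.
net = PetriNet(flows=[("p1", "t"), ("t", "p2")], marking={"p1": 1, "p2": 0})
net.fire("t")
print(net.marking)
```

After firing, the token has moved from p1 to p2 and t is no longer enabled, matching the firing rule stated above.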
Definition 8. Workflow Nets [4]. A PN is a workflow net (WF-net) if and only if the following hold:
1. The PN has two special places: a source place i with no input nodes, and a sink place o with no output nodes.
2. If a transition t* is added to the PN and arcs connect o to t* and t* to i, then the resulting PN is strongly connected.
Definition 9. Strongly Connected [4]. A PN is strongly connected if and only if, for every pair of nodes x and y in the net, there is a path from x to y. The soundness property guarantees the absence of livelocks, deadlocks, and other anomalies in the net [9]. The original definition of workflow soundness, otherwise known as classical soundness, is given below: Definition 10. Classical Soundness [9]. A WF-net PN = (P, T, F, i) is sound if and only if the following hold:
1. If i →* M, then M →* o, i.e., for every marking M reachable from i, there is a firing sequence usable to reach o from M.
2. If i →* M and M ≥ o, then M = o, i.e., o is the only marking reachable from i with at least one token in place o.
3. For every t ∈ T, there are markings M and M′ satisfying i →* M and M →t M′, i.e., there are no dead transitions in the net.
Different variations of the soundness property have been introduced for the verification of models with stronger and weaker notions of this property [9]. One such notion is
Relaxed Soundness, which allows for the existence of deadlocks [9] and only requires the existence of some execution that leads to the marking of the final place o in the WF-net, with no other places marked. The termination of the net with some tokens in source or intermediate places is allowed. Therefore, the second condition, proper termination for classical soundness, is not required.
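The distinction between the two termination criteria can be illustrated on terminal markings: classical soundness requires every terminal marking to be exactly one token in o, while relaxed soundness only needs some execution to reach such a marking. The sketch below is our own illustration; the example markings are hypothetical.

```python
# Sketch contrasting the termination criteria of classical vs relaxed
# soundness over a set of (hypothetical) terminal markings.

def proper_termination(marking, sink="o"):
    """True iff the only token left is a single token in the sink place."""
    return marking.get(sink) == 1 and all(
        count == 0 for place, count in marking.items() if place != sink)

# Two hypothetical terminal markings of some WF-net.
terminal_markings = [{"o": 1, "p1": 0}, {"o": 1, "p1": 1}]

classical_ok = all(proper_termination(m) for m in terminal_markings)
relaxed_ok = any(proper_termination(m) for m in terminal_markings)
print(classical_ok, relaxed_ok)
```

Here the second marking leaves a stray token in p1, so the net would fail classical soundness while the first marking still witnesses relaxed soundness.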
2 Related Literature 2.1 RDLT to PN Mapping The primary literature this study builds on is the work of Yiu et al. (2018) [10]. Their study focused on model decomposition of RDLT into two workflow models: class diagrams, which represent the resource dimension, and PNs, which represent the process and case dimensions. A motivation for the study was the lack of automated tools supporting RDLT, unlike other modelling diagrams. The researchers therefore proposed two algorithms for mapping RDLT components to class diagram and PN components, to utilize existing automated tools while such tools are yet to be created for RDLT. For the RDLT to PN mapping, the input RDLT must be pre-processed in two steps. First, the RDLT is vertex-simplified so that all nodes of the RDLT are of controller type. Next, the RDLT is converted into an extended RDLT, where two dummy nodes are created, one source node and one sink node, attached to the RDLT vertices with no incoming and no outgoing arcs, respectively. Then, the mapping algorithm traverses the pre-processed input RDLT and converts each RDLT component into PN components. The algorithm looks at the incoming arcs of each vertex in the input RDLT, generates a number of corresponding PN components, and connects them via arcs. The interactions were minimized into 9 structures which were used to prove the correctness of the mapping algorithm, wherein the extracted RDLT activity profiles are checked against a corresponding firing sequence in the PN. This mapping was shown to have a space complexity of O(v) and a time complexity of O(v²), where v is the number of vertices in the RDLT. These structures represent the different interactions possibly composing an RDLT, and are shown next to their PN component representations according to the defined mapping algorithm. However, there are no components in place to represent the L-attribute of the RDLT, meaning the resulting PN does not support limiting the traversals on arcs.
Described in the Future Work section of the paper of Yiu et al. (2018) is one approach to map the L-attribute that involves the addition of auxiliary input places connected to transitions, each holding a number of tokens equal to the L-attribute value of the corresponding edge in the RDLT [10]. This approach will be utilized in this paper to allow for the limiting of arc traversals in PN. Additionally, the mapping does not define the RBS in the resulting PN, as the input RDLT is preprocessed, removing information such as the M-attribute of the RBS center and its owned vertices. Therefore, there is no support for M-attribute representation in the mapping and no way to perform analysis on the RDLT and its internal RBS in the resulting PN. This paper will develop an M-attribute representation for resulting PNs that allows for analysis in an integrated representation of both the level-1 and level-2 RDLT in PN. In the list of PN structures, one structure
was found not to be classically sound in all cases: structure 9, shown in Fig. 3. The structure corresponds to a MIX-JOIN [5].
Fig. 3. The structure resulting from the mapping by Yiu et al. (2018) that results in unsound PNs [2].
The place Pa was found to contain a token that would not be removed after transition Ty was fired. If a token was present in the output place, there would still be a token in place Pa, which does not satisfy the proper-termination requirement of classical soundness. This study aims to develop a new mapping that causes this structure to contribute to the satisfaction of the conditions for classical soundness. 2.2 Soundness of Workflow Nets The paper by van der Aalst et al. provides an overview of the different notions of soundness and investigates them in the different extensions of workflow nets. These extensions involve the inclusion of components such as reset arcs and inhibitor arcs. Reset arcs are represented by arcs with a double-tipped arrowhead and do not influence the enabling of transitions. Once a transition connected to a reset arc is fired, all the tokens in each of the input places connected by a reset arc to that transition are removed. Inhibitor arcs prevent the connected transition from firing when there are one or more tokens in the input place. Each of these extensions enhances the expressiveness of workflow nets [9]. For workflow verification, there are multiple notions of soundness, and each of them is decidable. Most extensions of workflow nets, however, make all these notions of soundness undecidable [9]. Reset WF-nets, or reset nets, are workflow nets that contain reset arcs. Literature shows that classical soundness is undecidable for reset nets, as is relaxed soundness [7]. The researchers stress that soundness for reset nets with a finite state space can be checked, and that even for nets with an infinite state space a partial exploration can reveal errors in the net [7]. This result provides insight into how soundness verification in workflow nets is performed and into the use of these PN extension components.
C. N. N. Sulla and J. A. Malinao
3 Methodology

3.1 Mapping of the L-attribute

The proposed mapping builds on the mapping algorithm presented in the paper of Yiu et al. (2018), which was designed to mimic the activity profile extraction algorithm of the RDLT [10]. All structures possibly composing an RDLT are minimized into a set of 9 structures. For the proposed mapping, these 9 structures are modified to include L-attribute support through the addition of auxiliary places with a set number of tokens pre-placed. These auxiliary places are each connected to the intermediate transitions in each structure, specifically the transitions that represent the traversal of an arc in the original mapping. Then, a reset arc is connected from the auxiliary place to the exit transitions of each structure. To illustrate this, each of the modified RDLT structures and their equivalent PN structures are shown in Fig. 4.

For structures 1–5, the number of tokens placed in each added auxiliary place is simply n, the L-attribute value on the corresponding arc in the RDLT. Some structures, however, share only one transition for multiple incoming arcs. Therefore, rules need to be created for the initial placement of tokens in the auxiliary input places of those JOIN structures (structures 6, 7, 8).

Initial Token Placement Rules for Auxiliary Places in Join Structures:

– If ∀i, j: C(x_i, y) = C(x_j, y) and x_i ≠ x_j, then n is the sum of L(x_i, y) and L(x_j, y).
– If ∃i, j: C(x_i, y) ≠ C(x_j, y), where (x_i, y) and (x_j, y) are type-alike and C(x_i, y), C(x_j, y) ∈ Σ, then n = min over all (x_i, y) ∈ E of L(x_i, y), i.e. the minimum L-attribute value among the incoming arcs.

For the reset functionality of each structure, a check is performed to see if a looping arc exists for the target vertex or vertices of each structure. If yes, no reset arc is connected to the auxiliary place(s).
Otherwise, a reset arc is connected to the auxiliary place(s), consuming all tokens in the place(s) once the output transition of a structure is fired. Additionally, each of these auxiliary places is connected to the transition that is connected to the final output place o, which resets each of these auxiliary places once said transition is fired. This is done to prevent tokens from remaining in intermediate places in the net after termination, so as to satisfy the conditions of classical soundness. If any tokens remain within the net after a token is placed in the output place o, then the net is not classical sound.
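Under the reading of the token placement rules given above (a sketch with assumed names, not the paper's implementation), the initial token count for an auxiliary join place can be computed as:

```python
# Hedged sketch of the initial-token rules for JOIN auxiliary places
# (structures 6-8); the function and argument names are assumptions.

def initial_tokens(arcs):
    """arcs: list of (source, C, L) tuples for the incoming arcs (x_i, y).

    If every incoming arc carries the same C-attribute, n is the sum of the
    L-attributes; if the C-attributes differ, n is the minimum L-attribute.
    """
    constraints = {c for _, c, _ in arcs}
    l_values = [l for _, _, l in arcs]
    return sum(l_values) if len(constraints) == 1 else min(l_values)

initial_tokens([("x1", "a", 2), ("x2", "a", 3)])  # same constraint -> 5
initial_tokens([("x1", "a", 2), ("x2", "b", 3)])  # differing constraints -> 2
```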
Mapping of Robustness Diagram with Loop and Time Controls
Fig. 4. Modified structures that include L-attribute mapping.
3.2 Mapping of the M-attribute

The mapping results in two Petri Nets when converting an RDLT with an RBS to a PN. This is because the pre-processing steps involve the vertex-simplification of the input RDLT, which converts each vertex into controllers and abstracts the M-attribute information on the center of the RBS. Thus, we get two vertex-simplified RDLTs: the level-1 vertex-simplified RDLT and the level-2 vertex-simplified RDLT, shown in Fig. 5.
Fig. 5. Level-1 (left) and level-2 (right) vertex-simplified RDLT from the RDLT in Fig. 1.
Then, the vertex-simplified level-1 RDLT is extended with two extra vertices, i and o, that serve as source and sink places respectively. The first PN is converted from the level-1 RDLT and shows the information of the overall RDLT with generalized paths through the RBS, while the second PN is converted from the level-2 RDLT and shows the information inside the RBS. In the original RDLT, the only way to enter and exit the RBS is through the in-bridges and out-bridges of the RBS. So, the original RDLT is cross-checked by the algorithm to see if a vertex is an in-bridge or out-bridge. If so, the transitions corresponding to the in-bridge in the level-2 PN are connected to their input places in the level-1 PN. Then, to exit the level-2 PN, the transition of the source vertex of the out-bridge in the level-2 PN is connected to its corresponding output places in the level-1 PN. The two PNs are connected via what are essentially XOR splits, attached to the transitions corresponding to the in-bridges and out-bridges of the RBS in the original input RDLT. The components in the level-2 PN are marked with a prime, transitions as T′ and places as P′, to differentiate them from the duplicate components in the level-1 PN. If analysis of the overall level-1 RDLT is to be performed, the path through the level-1 PN is followed. Otherwise, to analyze the internal RBS in the level-2 RDLT, the path through the level-2 PN is followed instead. After the transformation is completed, the algorithm creates a place for each transition with no input place in the level-1 PN and connects them, because a transition with no input places can always fire and is called a source transition [6].

3.3 Mapping Algorithm Analysis

Theorem 1. The space complexity of Algorithm 1 is O(v), where v is the number of vertices in the RDLT R.

Proof. For each vertex, 1 transition is created. For each incoming arc, 1 place is created.
Then 1 transition is created depending on the value of C, with 1 additional place for the L-attribute. A maximum of 2 transitions and 4 places are created if there are more than 2 incoming arcs. For each unique constraint, a place is created, and each incoming arc with the same constraint is connected to that place. The creation of places and transitions thus grows linearly with the number of vertices in the input RDLT. As such, the space complexity is O(v).

Theorem 2. The time complexity of Algorithm 1 is O(v²), where v is the number of vertices in the RDLT R.

Proof. The algorithm goes through the list of vertices twice, first for the creation of transitions and second to check which vertices have incoming arcs. Each incoming arc of every vertex is then checked, which has an upper bound of v². Afterwards, each vertex is checked for looping arcs, which has a worst case of e ≤ v², where e is the number of edges in the RDLT. The rest of the steps, such as connecting the components and creating input places for each source transition, are done in constant time. This means that the time complexity is O(v²).
3.4 Validation of the Mapping

Validation of the mapping is done by generating an activity profile of the input RDLT, then comparing it to the firing sequence of the resulting PN. In the paper of Yiu et al. (2018), it is proved that if y is reachable from x in an RDLT R, then there exists a firing sequence σ = t_1, ..., t_n in the PN constructed from R, where t_1 = T_x and t_n = T_y [10].
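One way to automate such a comparison (an illustrative sketch only; the helper name and the T-prefix naming convention are assumptions, not the authors' validation code) is an in-order subsequence check between the activity profile and the firing sequence:

```python
# Hedged sketch: verify that the vertices of an RDLT activity profile
# appear, in order, as transitions T_v within a PN firing sequence.

def profile_matches(firing_sequence, activity_profile):
    """True if each vertex v of the profile occurs as transition 'T{v}'
    in the firing sequence, in the same order (subsequence test)."""
    remaining = iter(firing_sequence)
    # Membership on an iterator consumes it, so order is enforced.
    return all(f"T{v}" in remaining for v in activity_profile)

profile_matches(["Tx", "Ta", "Ty"], ["x", "y"])  # True
profile_matches(["Tx", "Ta"], ["x", "y"])        # False: Ty never fires
```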
4 Results and Discussion

4.1 Sample Mapping

Using the RDLT in Fig. 1, the previous mapping and the proposed mapping are used to create the PN in Fig. 6 and the combined PN in Fig. 7, respectively. The resulting PN in Fig. 7, unlike the PN in Fig. 6, is able to simulate limiting the number of traversals of an arc through the auxiliary places with a number of tokens pre-placed. Additionally, the PN in Fig. 7 represents an integrated view of the RBS of the input RDLT through the combined level-1 and level-2 PN, and allows for simulation and analysis of both levels. Traversal between the two levels of the PN is done through an XOR split, ensuring that the flow of tokens is not disruptive and can only go through one level or the other.

Each structure within the mapped PN resets its internal token count upon exit: once the transition leading out of the structure is fired, and once the transitions connected to the output place of the level-1 PN are fired, all auxiliary places in the PN have their tokens consumed. This ensures that the auxiliary places in the resulting PN have no tokens left and do not affect the overall soundness of the resulting PN. If tokens remained in any intermediate place, however, the PN would not be classical sound. Moreover, the mapping does not entirely capture the full functionality of the RBS, since exiting the level-2 PN does not replenish the number of tokens in each auxiliary input place in the level-2 PN.
Fig. 6. PN using the mapping of Yiu et al. (2018) [10].
Fig. 7. PN converted from the vertex-simplified level-1 (top) and level-2 (below) RDLT using the proposed mapping.
4.2 Mapping Satisfaction of Soundness

As shown earlier in this paper, one of the structures identified by Yiu et al., specifically structure 9, resulted in PNs that are not classical sound due to an intermediate place Pa′ that would retain non-removable tokens in the PN structure. This structure corresponds to what is called a MIX-JOIN in the RDLT, where the incoming arcs are type-alike, C(x, z) ≠ C(y, z), with C(x, z) = ε and C(y, z) ∈ Σ [5]. The modified version of this structure in the proposed mapping was modeled in the hope of satisfying the requirements for classical soundness of workflows, specifically proper completion, as reset arcs have been connected to each of the auxiliary places, including Pa′, to remove tokens in these places upon exit of the structure.

However, because the mapping was designed to have one transition shared by all incoming arcs with the same type of C-attribute, one for Σ-constrained and one for ε-constrained incoming arcs, structure 9 does not satisfy the requirements for classical soundness in all cases. These two transitions are connected to a single output place in structure 9, Pzm, meaning that if both transitions fire, there will be two tokens in Pzm. Since transitions consume only one token at a time, once a token is placed in the final output place Po, a token will remain within the WF-net. This means that an input RDLT containing a MIX-JOIN results in a PN that is not classical sound. It will, however, satisfy the conditions of relaxed soundness. This is the case for the PN in Fig. 7, as the MIX-JOINs located within it cause the entire PN to be not classical sound. For an input RDLT that is classical sound and contains no MIX-JOINs, the resulting PN is classical sound. For an input RDLT that is relaxed sound, with or without MIX-JOINs, the resulting PN is also relaxed sound.
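The proper-completion failure described above can be traced with simple token arithmetic (an illustrative sketch following the place names in the discussion; not the authors' analysis code):

```python
# Token-count sketch of why structure 9 violates proper completion when
# both join transitions fire (place names Pzm/Po follow the discussion).

marking = {"Pzm": 0, "Po": 0}
marking["Pzm"] += 1        # the epsilon-constrained join transition fires
marking["Pzm"] += 1        # the sigma-constrained join transition also fires
marking["Pzm"] -= 1        # the exit transition consumes a single token...
marking["Po"] += 1         # ...and produces the final token in Po
leftover = marking["Pzm"]  # 1 token remains: proper completion is violated
```

With a token in Po and a token still stranded in Pzm, the net fails classical soundness while a successful run to Po still exists, matching the relaxed-soundness verdict above.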
5 Conclusions and Recommendations

A mapping was developed that includes the L-attribute functionality as well as partial M-attribute functionality of RDLTs in PNs, using PN and PN extension components. The mapping of the M-attribute of an RDLT with an RBS results in 2 PNs, connected via XOR splits into one single PN, allowing for an integrated analysis of the RDLT and its internal RBS in a PN representation. However, upon exiting the level-2 PN, the number of tokens in the auxiliary places is not replenished, unlike in an RBS. This is currently a limitation of the proposed mapping in its ability to simulate the full behaviour of the RBS, and a limitation of reset nets. A possible approach to simulating the behavior of the RBS is to use a weighted PN, with an edge weight equal to the number of tokens of the L-attribute of that edge, so that the tokens are replenished upon entering/exiting the level-2 PN.

As discussed in Sect. 4.2, the mapping results in classical sound PNs if the input RDLT is also classical sound and contains no MIX-JOINs. This means that the revisions added to structure 9 of the mapping were not enough to produce PNs that are classical sound; the mapping does, however, produce relaxed sound PNs. A reason why the mapping does not result in classical sound PNs when the input RDLT contains MIX-JOINs is the concept of well-handledness of PNs [8]. It is stated in the literature that AND/OR-splits should be balanced with AND/OR-joins; otherwise the resulting PN will not be classical sound, because the AND-splits cannot be resolved by the MIX-JOIN structure due to its implementation as a WF-net OR-join. A different approach to the mapping could remedy these deficiencies by balancing each AND/OR-split with a corresponding AND/OR-join in the construction of the resulting structures.

Acknowledgements. I would like to extend my gratitude to the Department of Science and Technology for the opportunity to participate in the merit scholarship program.
References

1. Callou, G.R., Maciel, P.R.M., Araújo, C.J., Ferreira, J.: A Petri net-based approach to the quantification of data center dependability. In: Petri Nets - Manufacturing and Computer Science, pp. 313–336 (2012). https://doi.org/10.5772/47829
2. Delos Reyes, R., Agnes, K.M., Malinao, J., Juayong, R.A.: Matrix representation and automation of verification of soundness of robustness diagram with loop and time controls. In: Workshop on Computation: Theory and Practice 2018 (2018). eBook ISBN: 9780429261350
3. Lopez, J.C.L., Bayuga, M.J., Juayong, R.A., Malinao, J.A., Caro, J., Tee, M.: Workflow models for integrated disease surveillance and response systems. In: Theory and Practice of Computation (2020). eBook ISBN: 9780367814656
4. Malinao, J.A.: On building multidimensional workflow models for complex systems modelling. Dissertation, Fakultät für Informatik (Pattern Recognition and Image Processing Group), Institute of Computer Graphics and Algorithms (2017). https://doi.org/10.34726/hss.2017.43523
5. Malinao, J.A., Juayong, R.A.: Classical soundness in robustness diagram with loop and time controls. Submitted to PJS (2023)
6. Murata, T.: Petri nets: properties, analysis and applications. Proc. IEEE 77(4), 541–580 (1989). https://doi.org/10.1109/5.24143
7. van der Aalst, W.M.P., et al.: Soundness of workflow nets with reset arcs. In: ToPNoC III, LNCS 5800, pp. 50–70 (2009). https://doi.org/10.1007/978-3-642-04856-2
8. van der Aalst, W.M.P.: Structural characterizations of sound workflow nets. Computing Science Reports 96/23 (1996). https://doi.org/10.1016/j.ipl.2005.06.002
9. van der Aalst, W.M.P., et al.: Soundness of workflow nets: classification, decidability, and analysis. Formal Aspects Comput. 23, 333–363 (2011). https://doi.org/10.1007/s00165-010-0161-4
10. Yiu, A., Garcia, J., Malinao, J.A., Juayong, R.: On model decomposition of multidimensional workflow diagrams. In: Workshop on Computation: Theory and Practice 2018 (2018)
Computer-Aided Development of Adaptive Learning Games

Alexander Khayrov1(B), Olga Shabalina1, Natalya Sadovnikova1, Alexander Kataev1, and Tayana Petrova2
1 Volgograd State Technical University, Volgograd, Russia
[email protected] 2 Volgograd State Socio-Pedagogical University, Volgograd, Russia
Abstract. We discuss modern trends in educational software and the technologies and tools for its development. The features and problems of developing adaptive learning games as a new category of educational software, and the models, methods and technologies used for their development, are considered. An approach to modeling the learning process as the dynamics of the learner's state in a knowledge space, and the calculus of states in this space, is described. An adaptation model is proposed based on dynamic content matching of non-linear learning and game scenarios, preserving the logic of the learning process in a game context. The design and technological procedures for the automated development of adaptive learning games are determined. A portable adaptation module, embeddable in a learning game project at the stage of its implementation, is designed and implemented, which makes it possible to reduce the complexity of developing adaptive learning games and improve their quality in terms of the effectiveness of learning in a game context. The development of an adaptive learning game using the proposed model and tools is shown, and the laboriousness of the development process is estimated. The aim of the work is to reduce the complexity of developing adaptive learning games by automating the design process and using ready-made design and software solutions.

Keywords: Educational Software · Learning Systems · Adaptive Learning Systems · Adaptive Learning Game · Educational Software Development · Adaptation Module · Learning Process · Knowledge Space
1 Introduction

Educational software comprises software products whose primary purpose is to train and/or develop certain skills. The desire to improve the quality of learning systems has led to the emergence of adaptive learning systems, i.e. systems that personalize the interaction between users and the system. A new step in the development of learning systems is personalizing the learning process in a game context, i.e. the development of adaptive learning games. Adaptive learning games are mainly developed not by specialized software development companies, but by educational communities and/or

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 354–362, 2023. https://doi.org/10.1007/978-3-031-44146-2_37
individual developers who might not have sufficiently high qualifications, and this affects both the speed of development of such games and their quality. Therefore, creating specialized tools for the development of adaptive learning games, which automate the development process and thus reduce the complexity of developing such games and improve their quality in terms of the effectiveness of the learning process in a game context, is an urgent task. To achieve this goal, the following tasks have been identified: to analyze existing models, methods and technologies used in the development of educational software; to develop a portable adaptation model applicable to building adaptive learning games of various genres; to develop a technology for the automated design of adaptive learning games based on the adaptation model; and to design and implement an adaptation module that is built into the project of an adaptive learning game at the stage of its implementation.
2 Technologies and Tools for Educational Software Development

Educational software is commonly understood as software designed to support independent study and the training of skills in various subject areas. The field of learning systems has been actively developing and well systematized for a long time, and much experience has been accumulated, both in theoretical research and in the practical development of training systems. The desire to improve the efficiency of learning systems has led to the emergence of adaptive learning systems that provide a customized learning experience for students by dynamically adapting learning content to suit their individual abilities or preferences [7].

By learning game, a game having educational goals is meant. On the other hand, such games can be considered as learning systems in which the learning process is integrated into the game. The desire to increase the attractiveness of educational games and bring them closer in quality to traditional games has led to the emergence of a new class of educational applications: adaptive learning games. An adaptive learning game is an adaptive learning system that personalizes the learning process in a game context.

Developing quality educational software requires professional software development skills. To support the development process, the software market offers a fairly large number of learning software development tools, whose typical features allow authoring learning courses, slides, interactive videos and animations. One example of such systems is iSpring Suite [8], which supports the creation of learning courses in MS PowerPoint format. The system offers content creation tools such as educational videos, interactive assessments, quizzes and screencasts that are compatible with any learning management system. Another education platform is Articulate Storyline 360 [12].
This cloud-based eLearning content management system is oriented toward creating learning systems using a collection of learning course templates, backgrounds, characters, buttons, and icons. Adobe Captivate [4] is a learning content management system that makes it easy to develop interactive courses for multiple devices. The system supports the creation of complex, realistic software simulations with effects, triggers, and slide settings, as well as learning materials in virtual reality. H5P [16], an open source web platform, supports interactive educational activities right in the browser and allows you to create interactive videos, course presentations, crossword puzzles, paired memory games and much more.
To support the development of adaptive learning systems, adaptive learning platforms are used that have built-in adaptation mechanisms to personalize the learning process. EdApp [18] is an adaptive learning platform packed with features, including Brain Boost, EdApp's spaced repetition tool. With automatic, personalized follow-up tests, Brain Boost enables learners to retain new information better. It questions them on the items they encountered in a course but focuses more on the ones they got incorrect. The activities are personalized for each student, and each session's schedule and content are determined by the individual and the responses they supplied in prior sessions.

ALE [10] positions itself as an e-learning environment for developing learning systems able to adapt the learning process to the user's learning style through the VARK [9] model. According to the VARK model, learners are divided into four groups representing major learning styles based on their responses. Each response corresponds to one of the extremes of a measurement that is used to create personalized learning courses.

Canvas LMS [1] is an open source learning management system that provides the ability to design courses according to different learning models. In addition, Canvas offers a MasteryPaths feature that allows you to create differentiated assignments and redirect learners to different learning paths.

The web-based learning system Protus [17] supports two personalization strategies. The first is based on adaptive hypermedia, and the second on recommender system methods. Adaptive hypermedia assists the student in the course by offering pages that match the student's requirements. The recommendation system is built on interaction with the student: depending on the grades for the tasks, the system builds recommendations based on ontologies and configures the user interface.
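The "focus on incorrect items" idea behind such follow-up testing can be sketched as follows (a hedged illustration only, not EdApp's actual algorithm; all names and the accuracy heuristic are assumptions):

```python
# Hedged sketch of selecting follow-up test items, prioritizing the
# items a learner answered incorrectly most often.

def next_session_items(history, k=2):
    """history: dict item -> list of bools (True = answered correctly).
    Return the k items with the lowest accuracy so far."""
    def accuracy(item):
        answers = history[item]
        return sum(answers) / len(answers)
    return sorted(history, key=accuracy)[:k]

history = {"loops": [True, True],
           "classes": [False, True],
           "lambdas": [False, False]}
next_session_items(history)  # ['lambdas', 'classes']
```

Items with perfect accuracy are deferred, so each session concentrates on the learner's weakest material.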
Tools for the automated development of educational games can significantly simplify development and reduce time, but at the same time they limit functionality and tie development to a specific genre of games. Typical features include the ability to develop a training course, gamification elements, interactive videos and animations. IMapBook [5] is a web-based application designed to improve reading comprehension for elementary and middle school students and adult learners. IMapBook builds mental story models and creates role-playing games that are implemented through interaction between readers via the chat function. To develop educational games and game exercises, LearningApps [6] can be used. The service has a clear interface and many templates for game exercises and online games. Flippity [2] is an online service for developing game exercises based on Google tables. The main advantage of this service is that each template comes with instructions by which the developer can implement his own learning game. Kahoot [11] is a platform for developing learning courses with elements of gamification and interactive educational games such as quizzes.

A new step in the development of educational software is personalizing the learning process in a game context, i.e. the development of adaptive learning games. However, the direct transfer of models, methods and technologies developed for learning systems to adaptive learning games is not always possible, or it can significantly limit learning opportunities and/or reduce game attractiveness. Therefore, adaptive learning games are considered an independent class of educational software that requires original solutions taking into account the specifics of this class.
Known adaptation models specifically designed for adaptive learning games are built-in, ad-hoc models tied to the implementation of a single game. The development of adaptive learning games requires methods for modeling the learning process in a game context and an adaptation model that preserves the logic of both learning and game scenarios. The ALGAE [3] model is currently considered the most common adaptation model. It combines adaptive learning strategies and is aimed at both the gaming industry and academia. However, this model is implemented inside a single developed game, which has a closed structure, so its use in other games is currently not possible.
3 Peculiarities of the Development of Adaptive Learning Games

Modern approaches to the development of adaptive learning systems are based on three components of the adaptation model: the knowledge domain model, the learner model and the adaptation technique. The knowledge domain model is usually represented as a graph that defines a finite set of fragments of knowledge making up the content of the knowledge domain, and links between the fragments that reflect the logic of mastering it. To describe the structural features of the knowledge domain, the graph can be endowed with additional properties. The learner model (learner profile) stores characteristics of the learner that are taken into account during adaptation. Most systems use current knowledge as a characteristic of the learner; some learner models also store characteristics such as learning style, interests, previous experience, etc. The adaptation technique determines how to change parameters of the learning process based on monitoring changes in the learner's state during interaction with the learning system. Known adaptation techniques are mostly based on rules and cause-and-effect relationships, i.e. on inference mechanisms. The logic of adaptation depends on the chosen learner model.

Adaptive learning games are a combination of adaptive learning systems and computer games. Their development includes methods for modeling the learning process in a game context and an adaptation model that preserves the logic of both learning and game scenarios. Game content adaptation implies a qualitative change in the individual steps of the scenario in accordance with the user's actions. The adaptive technologies used in adaptive games depend on the specific implementation of the game. The simplest form of adaptation is changing parameters such as the difficulty of the game or the value of the reward.
4 Automating the Development of Adaptive Learning Games

To automate the defined procedures, a learning process model based on the knowledge space [15] and a method for integrating the learning process model into the game context [14] were used. The knowledge domain model is represented by a non-metric knowledge space formed by embedding a set of knowledge fragments, connected by the logic of the learning process, into a lattice. Each fragment of knowledge is associated with an action required to master it (for example, studying theoretical materials, completing a practical task, passing a test, etc.). Each action is matched with an assessment of the results of mastering, which determines the current state of the learner. The action space and the state space are structurally equivalent to the knowledge space. The structural ordering of the space allows us to represent the process of mastering the knowledge space as the dependence of the learner's state on the actions associated with the space fragments, each of which changes the current learner state. At the same time, each student is free to choose actions to master the knowledge space, determined by his current state and the structure of the space itself. The learning process model is represented by the evolution equation of the learner's state in the state space. Any learning strategy formed by the learner includes all fragments of the knowledge space, but strategies differ in the order of the fragments, which the learner forms himself.

The method of integrating the learning process model with the gameplay is based on matching each learning action with a game scenario event as an interpretation of the learning action in the game context. The learning process is implemented as the interaction of the player with the game scenario, and the learner's actions are considered in the context of the game process. The proposed method makes it possible to ensure the structural unity of the learning process and the gameplay, and the equivalence of achieving the learning goals and the game goals. The method of integrating the learning model into the game context provides the freedom to choose game actions that do not contradict the logic of the learning process. Thus, the strategy formed on the action space in the game context is adaptive both in terms of the learning process and the gameplay.
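The matching of learning actions to game-scenario events can be illustrated with a simple lookup table (all action and event names here are invented for illustration; this is not the module's implementation):

```python
# Hedged sketch: each learning action is interpreted in the game context
# by a matched game-scenario event, and performing it advances the state.

action_to_event = {
    "study_theory:inheritance": "talk_to_village_elder",
    "complete_task:write_subclass": "forge_a_new_tool",
    "pass_test:inheritance": "open_the_castle_gate",
}

def perform(action, state):
    """Return the game event interpreting the learning action, plus the
    learner state extended with the completed action."""
    event = action_to_event[action]
    return event, state | {action}

event, state = perform("study_theory:inheritance", frozenset())
# event == "talk_to_village_elder"
```

Because every learning action has exactly one game interpretation, completing the game scenario and completing the learning strategy remain equivalent, as the method requires.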
The architecture of the adaptation module includes a tool for building a knowledge space and a set of solutions for implementing the adaptation method in the developed adaptive learning game. The implementation decisions are divided into platform-independent and platform-dependent ones, and are integrated at the stage of developing game solutions. The architecture of the adaptation module is shown in Fig. 1.
Fig. 1. Adaptation module architecture.
For the dynamic content matching of learning and game processes, a trigger method has been implemented that allows the system of dialog windows to be activated, the position of game objects in the scene to be determined, and the game events associated with learning actions to be launched. Triggers are represented as game objects in the scene, visible or invisible to the user. To organize dialogue tasks in the game, a system of dialog windows has been developed, which includes window templates with free-form answer input, answer selection, dialog display and hint display. To store tasks, answers to tasks and the game events associated with training tasks, a database based on an XML file has been developed. The developed technological solutions make it possible to implement three variants of game events: a dialogue without a task, a dialogue with a task, and a game event.

Algorithms for constructing the lattice-based knowledge space and for integrating the adaptation module into an adaptive learning game have been developed. The initial data for building the knowledge space is the learning course structure, presented as the adjacency matrix of the corresponding directed graph developed by the course author. There are two ways to describe the adjacency matrix: using the interface, or by filling in a .txt file according to the provided sample. The process of building the knowledge space includes checking the structural orderliness of the original structure and building the lattice. The knowledge space construction algorithm is presented as an Activity Diagram in Fig. 2. The knowledge space building tool is implemented as a web application in Python, and the space visualization is implemented in the Jupyter Notebook environment. An algorithm for integrating the adaptation model with a learning game on Unity has been developed, presented as an Activity Diagram in Fig. 3.
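The first pre-processing step can be sketched as follows (a hedged illustration in the tool's language, Python; the whitespace-separated file format and the approximation of structural orderliness as acyclicity are assumptions, not the published algorithm):

```python
# Hedged sketch: read a course structure as an adjacency matrix from a
# .txt file and check it contains no directed cycles before building
# the lattice.

def read_adjacency(path):
    """Parse a whitespace-separated 0/1 matrix, one row per line
    (assumed file format)."""
    with open(path) as f:
        return [[int(x) for x in line.split()] for line in f if line.strip()]

def is_acyclic(matrix):
    """Kahn's algorithm: every node can be removed in topological
    order iff the directed graph has no cycles."""
    n = len(matrix)
    indegree = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    queue = [j for j in range(n) if indegree[j] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for v in range(n):
            if matrix[u][v]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
    return removed == n

is_acyclic([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # True: chain a -> b -> c
is_acyclic([[0, 1], [1, 0]])                   # False: two-node cycle
```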
5 Implementation of the Adaptation Module in the Development of an Adaptive Learning Game

The learning game “Kammy” [13], which has a linear scenario and is designed to teach object-oriented programming (OOP) and the C# language, was chosen as a prototype. The game scenario interprets the object-oriented paradigm as a system of rules governing the life of game characters in the virtual world. The learner is represented in the game by an avatar, and the learner state is interpreted as the avatar’s gaming experience. During the game, the learner performs actions and accumulates game experience, which reflects their achievements in mastering the learning course.

The automated development of an adaptive learning game with a nonlinear scenario includes the following design and technological procedures:

1. Building a knowledge space corresponding to the developed fragment of the course.
2. Filling the database of tasks and game events.
3. Arranging and configuring triggers in the scene, taking into account the constructed knowledge space.
4. Transferring the dialog system to the scene and setting its properties.

The time spent on developing the Kammy2 game was estimated in comparison with the ad-hoc development of the first version of the Kammy game on Unity. When Kammy2 was developed on Unity using the module, the development time decreased by 18 h (32%).
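The dispatch of the three game-event variants from the XML task database (step 2 above) can be sketched as follows. The real schema is not published, so the tag names, attributes, and sample content here are purely illustrative, and the dispatch is written in Python rather than the Unity-side C#.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of the XML task database; tag and attribute
# names are illustrative, not the authors' actual schema.
TASKS_XML = """
<events>
  <event id="e1" type="dialogue" text="Welcome to Kammy!"/>
  <event id="e2" type="dialogue_task"
         text="Which keyword declares a class in C#?"
         answer="class" hint="It names the basic OOP building block."/>
  <event id="e3" type="game_event" action="spawn_reward"/>
</events>
"""

def on_trigger(event_id, player_answer=None):
    """Dispatch one of the three event variants when the avatar
    hits the trigger object bound to event_id."""
    root = ET.fromstring(TASKS_XML)
    event = root.find(f"./event[@id='{event_id}']")
    kind = event.get("type")
    if kind == "dialogue":           # dialogue without a task
        return ("show_dialog", event.get("text"))
    if kind == "dialogue_task":      # dialogue with a task to answer
        return ("task_result", player_answer == event.get("answer"))
    return ("run_action", event.get("action"))  # plain game event
```

Keeping the dispatch keyed only on an event identifier is what makes the trigger objects in the scene reusable: the same trigger template can fire any of the three variants depending on the database entry it is bound to.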
360
A. Khayrov et al.
Fig. 2. Knowledge space construction algorithm (UML notation, Activity Diagram).
Fig. 3. Algorithm for integrating the adaptation module into an adaptive learning game (UML notation, Activity Diagram).

In general, the complexity of developing an adaptive learning game with the proposed technology varies greatly with the complexity of the game mechanics, the number of game events, and the specifics and genre of the game. The choice of engine also affects the development time: if the implementation is not carried out on Unity, the development time will increase significantly, since the platform-specific part of the module will have to be developed in whole or in part.
6 Conclusion

The proposed methods develop the methodological foundations of adaptive learning game development and can serve as a theoretical basis for new models and methods of adaptation that ensure the quality of the learning process in such games. The developed adaptation technology and the portable adaptation module are applicable to the design and implementation of adaptive learning games of various genres. The results of the adaptive game development show that the suggested approach can significantly reduce the complexity of developing adaptive educational games and improve their quality through the use of proven design and technological solutions.

Acknowledgements. The study has been supported by the grant from the Russian Science Foundation (RSF) and the Administration of the Volgograd Oblast (Russia) No. 22-11-20024, https://rscf.ru/en/project/22-11-20024.
References

1. Al-Ataby, A.: Hybrid learning using Canvas LMS. Eur. J. Educ. Pedagogy 2, 27–33 (2021). https://doi.org/10.24018/ejedu.2021.2.6.180
2. Bezerra, A., Batista, E., Sousa, J., Olegário, T., Evangelista, E., Lira, R.: Bora jogar: development of games to assist in learning programming. Res. Soc. Dev. 11, e15511527668 (2022). https://doi.org/10.33448/rsd-v11i5.27668
3. Calongne, C., Stricker, A., Truman, B., Murray, J., Lavieri, J.E., Martini, D.: Slippery rocks and algae: a multiplayer educational roleplaying game and a model for adaptive learning game design. In: Proceedings of TCC 2014, pp. 13–23. TCCHawaii (2014). https://www.learntechlib.org/p/149824
4. Duvall, M.: Adobe Captivate as a tool to create eLearning scenarios (2014)
5. Gill, G., Smith, G.: IMapBook: engaging young readers with games. J. Inf. Technol. Educ. Discuss. Cases 2, 10 (2013). https://doi.org/10.28945/1915
6. Ignatovitch, T.: Teaching Russian as a foreign language with the use of the LearningApps service. Russ. Lang. Stud. 19, 51–65 (2021). https://doi.org/10.22363/2618-8163-2021-19-1-51-65
7. Khosravi, H., Sadiq, S., Gasevic, D.: Development and adoption of an adaptive learning system: reflections and lessons learned (2020). https://doi.org/10.1145/3328778.3366900
8. Kirillov, Y.: Using the iSpring Suite computer platform in distance learning. SHS Web Conf. 106, 03008 (2021). https://doi.org/10.1051/shsconf/202110603008
9. Kumah, M.: Analyzing the VARK model of preservice teachers’ PCK of learning. J. Educ. 5, 58–70 (2022). https://doi.org/10.53819/81018102t4103
10. Mezin, H., Kharrou, S., Ait Lahcen, A.: Adaptive learning algorithms and platforms: a concise overview, pp. 3–12 (2022). https://doi.org/10.1007/978-3-030-91738-8_1
11. Rusmardiana, A., Sjuchro, D., Yanti, D., Daryanti, F., Iskandar, A.: Students’ perception on the use of Kahoot as a learning media. AL-ISHLAH: Jurnal Pendidikan 14, 2205–2212 (2022). https://doi.org/10.35445/alishlah.v14i2.2139
12. Saputra, H., Utami, G., Yolida, B.: Utilization of Articulate Storyline as interactive learning media to improve the study motivation of college students 11, 21–27 (2022). https://doi.org/10.23960/jppk.v11.i3.2022.03
13. Shabalina, O., Malliarakis, C., Tomos, F., Mozelius, P.: Game-based learning for learning to program: from learning through play to learning through game development (2017)
14. Shabalina, O., et al.: Combining game-flow and learning objectives in educational games, vol. 2 (2014)
15. Shabalina, O., Yerkin, D., Davtian, A., Sadovnikova, N.: A lattice-theoretical approach to modeling naturally ordered structures. In: Proceedings of the 2016 Conference on Information Technologies in Science, Management, Social Sphere and Medicine, pp. 430–433. Atlantis Press (2016). https://doi.org/10.2991/itsmssm-16.2016.85
16. Unsworth, A., Posner, M.: Case study: using H5P to design and deliver interactive laboratory practicals. Essays Biochem. 66 (2022). https://doi.org/10.1042/EBC20210057
17. Vesin, B., Milicevic, A., Ivanovic, M., Budimac, Z.: Applying recommender systems and adaptive hypermedia for e-learning personalization. Comput. Inf. 32, 629–659 (2013)
18. Zulpukarova, D., Smanova, N., Kultaeva, D.: Electronic textbooks as a means of modern education. Bulletin of Science and Practice (2022). https://doi.org/10.33619/2414-2948/82/62
Author Index
A
A. Malinao, Jasmine 338
Addawe, Jozelle C. 236, 321
Aguila, Marie Eliza R. 205
Aguilar, Samuel Kirby H. 21
Ammi, Meryem 1
Anikin, Anton 127, 292
Anlacan, Veeda Michelle M. 21, 205
Anokhin, Alexander 256
Apuya, Romuel Aloizeus 205
Arcedo, Christian 191

B
Bahi, Houssam Ahmed Amin 331
Barbounaki, Stavroula 247
Benkhalfallah, Mohamed Salah 1
Boque, Josiah Cyrus 205, 298
Boukeloul, Soufiene 230
Boussaha, Karima 331
Byun, Yung-Cheol 55

C
Caro, Jaime D. L. 21, 205, 236, 298, 321
Casas, Yolanda R. 308
Christidis, John 110
Chua, Richard Bryann 181
Chytas, Konstantinos 266

D
Dankov, Yavor 81, 221
De Leon, Aaron 191
de Leon, Chloe S. 181
Dehimi, Nour El Houda 230
Derdour, Makhlouf 122, 230

E
Eclipse, Kliezl P. 40
Encabo, Ericka Mae 308
Ereshchenko, Tatyana 256

F
Feklistov, Vladislav 91
Furigay, Regie Boy 151

G
Galecio, Bryan Andrei 205
Georgopoulou, Maria-Sofia 141
Go, Kyle Spencer 151
Go, Loubert John P. 308
Gonzales, Aliza Lyca 151
Gonzales, Lonnie France E. 21
Gumalal, Jeraline 215

I
Iakovidis, Marios 31
Israel, Kurt Ian Khalid I. 236

J
Jafari, Sadiqa 55
Jamora, Roland Dominic 21
Jamora, Roland Dominic G. 205
Juayong, Richelle Ann B. 21, 236, 298, 321

K
Kalyagina, Polina 256
Kanavos, Athanasios 277
Karanikolas, Nikitas N. 266
Kardaras, Dimitrios 247
Karnavas, Stefanos 247
Kataev, Alexander 354
Katikaridi, Maria Anastasia 315
Katyshev, Alexander 292
Kedros, Ioannis 101
Khayrov, Alexander 354
Khoroshun, Danila 256
Kouah, Sofia 1, 122
Krouska, Akrivi 67
Kyriazis, Athanasios 247

L
Laboudi, Zakaria 134, 331
Lacson, Jephta Michail 151
Leligou, Helen C. 110
Lisondra, Amadeus Rex N. 298

M
Magabo, Erika Rhae 191
Malagas, Konstantinos 9
Malinao, Jasmine A. 40
Mantziaris, Pavlos 101
Maragoudakis, Manolis 277
Menaceur, Nadjem Eddine 122
Merabet, Asma 134
Mylonas, Phivos 277

N
Nakano, Ryosuke Josef S. 298
Nazarov, Konstantin 91
Nomikos, Spyridon 9

P
Papadimitriou, Orestis 277
Papadopoulos, Pericles 110
Papakostas, Christos 31, 67
Papapostolou, Apostolos 9
Pardiñas, Miguel S. 298
Parygin, Danila 91, 256
Petrova, Tayana 354
Poli, Maria 9

R
Ramirez, Frances Lei R. 21
Ramos, Anna Liza 151, 191
Rashevskiy, Nikolay 91

S
Sadovnikova, Natalya 354
Saighi, Asma 134
Salido, Isabel Teresa 205
Sapounidis, Theodosios 101
Sgouropoulou, Cleo 31, 67, 141
Shabalina, Olga 354
Sinitsyn, Ivan 91
Sinkevich, Denis 127
Skourlas, Christos 266
Smirnov, Vladislav 127
Strousopoulos, Panagiotis 67
Sulla, Cris Niño N. 338

T
Tee, Cherica A. 205
Tee, Michael L. 205
Triperina, Evangelia 266
Troussas, Christos 31, 67, 141, 247
Tsakiridis, Dimitrios 287
Tsang, Jeremy King L. 321
Tselenti, Panagiota 247
Tsolakidis, Anastasios 266
Tuico, Kim Bryann V. 236

V
Vasiliadis, Anastasios 287
Vedasto, Gorge Lichael Vann N. 321
Vilbar, Aurelio 171, 215
Vilbar, Aurelio P. 308
Vlassas, Grigorios 9

Z
Zubankov, Alexey 292

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
K. Kabassi et al. (Eds.): NiDS 2023, LNNS 784, pp. 363–364, 2023. https://doi.org/10.1007/978-3-031-44146-2