Lecture Notes in Electrical Engineering Volume 1029
Series Editors

Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Napoli, Italy
Marco Arteaga, Departamento de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, München, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, University of Karlsruhe (TH) IAIM, Karlsruhe, Baden-Württemberg, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Dipartimento di Ingegneria dell’Informazione, Sede Scientifica Università degli Studi di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Intelligent Systems Laboratory, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, Department of Mechatronics Engineering, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Intrinsic Innovation, Mountain View, CA, USA
Yong Li, College of Electrical and Information Engineering, Hunan University, Changsha, Hunan, China
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Subhas Mukhopadhyay, School of Engineering, Macquarie University, NSW, Australia
Cun-Zheng Ning, Department of Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Department of Intelligence Science and Technology, Kyoto University, Kyoto, Japan
Luca Oneto, Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genova, Genova, Italy
Bijaya Ketan Panigrahi, Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi Roma Tre, Roma, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, University of Stuttgart, Stuttgart, Germany
Germano Veiga, FEUP Campus, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Haidian District, Beijing, China
Walter Zamboni, Department of Computer Engineering, Electrical Engineering and Applied Mathematics, DIEM, Università degli studi di Salerno, Fisciano, Salerno, Italy
Junjie James Zhang, Charlotte, NC, USA
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering, quickly, informally, and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS
For general information about this book series, comments or suggestions, please contact [email protected].

To submit a proposal or request further information, please contact the Publishing Editor in your country:

China: Jasmine Dou, Editor ([email protected])
India, Japan, Rest of Asia: Swati Meherishi, Editorial Director ([email protected])
Southeast Asia, Australia, New Zealand: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other countries: Leontina Di Cecco, Senior Editor ([email protected])

** This series is indexed by EI Compendex and Scopus databases. **
Subhas Chandra Mukhopadhyay · S. M. Namal Arosha Senanayake · P. W. Chandana Withana Editors
Innovative Technologies in Intelligent Systems and Industrial Applications CITISIA 2022
Editors Subhas Chandra Mukhopadhyay School of Engineering Macquarie University Sydney, NSW, Australia
S. M. Namal Arosha Senanayake Faculty of Science Taylor’s University Selangor, Malaysia
P. W. Chandana Withana Charles Sturt University Bathurst, NSW, Australia
ISSN 1876-1100  ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-3-031-29077-0  ISBN 978-3-031-29078-7 (eBook)
https://doi.org/10.1007/978-3-031-29078-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Systems mimicking the human brain have a long history. The recent advancements brought about by Industry Revolution 4.0 and Smart Society 5.0, however, have led to unprecedented volumes of innovative technologies in intelligent systems and industrial applications. These mostly depend on the design of new algorithms applied to existing historical patterns and decision-making solutions, such as building critical (time vs. space) systems using artificial intelligence (AI) and machine learning (ML). Over the past decade, this has produced innovative technologies for smart manufacturing to better satisfy global needs and demands. However, despite these recent advancements, there is still an urgent need to find optimal solutions for real-world applications, in particular mission-critical systems, in terms of reliability, precision, and repeatability. Furthermore, cross-disciplinary research engagement is vital to solving cross-domain problems encountered during daily life, for which embedded hybrid intelligence (AI and human intelligence) is essential if we are to develop hybrid intelligent systems and build a truly smart society. Thus, complex processes and data-driven models are paramount for innovative technologies that minimize human intervention and thereby minimize or even eliminate human error during manufacturing and general daily processes. Production of mission-critical systems still faces challenges due to the latency inherent in communication technologies (e.g., the latency in 5G mobile communication is around 1 ms). Minimizing such latency, and thus optimizing processes, is generally addressed using mixed reality built from the active engagement of humans in critical processes and from data signals from the environment. However, challenges remain in achieving real-time processes due to the nature of occurrences and events in highly complex situations such as those taking place in nuclear plants, aircraft cockpits, space vehicles, and drones.
Not only are automatic re-configuration and re-installation of processes required; it is also mandatory to embed programmable systems on board in order to adjust and re-configure critical parameters that change the nature of process execution in real time. Thus, innovation under catastrophic conditions in real time is still a serious challenge in smart manufacturing. Furthermore, business processes involved in the production of smart products and services are seriously impacted by unpredictable environmental conditions in
the form of sudden atmospheric changes, such as changes in weather patterns, and pandemics. Yet, state-of-the-art AI and ML are capable of positively impacting mission-critical processes and thus resolving a significant range of cross-discipline processes in different domains. Designing and building such multidimensional processes equates to the development of novel hybrid models embedding information fusion, smart decision-making models, and information visualization processes using smart devices. Innovative technologies in intelligent systems and industrial applications provide this vital research through adaptation and development for the benefit of humanity through improvements in lifestyle. Thus, future industrial revolutions and societal evolutions heavily depend on the innovation and invention of smart processes and information fusion for the development of mission-critical systems. This book focuses on cross-disciplinary research impacting industry and smart society, ultimately leading to smart manufacturing, management of critical processes, and information fusion. It contains chapters from authors who presented during the International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA) 2022, which was held in Sydney, Australia, on November 14–16, 2022. The book familiarizes the reader with new dimensions of innovation designed for future generations to be inspired by, to learn from, and to retain as a valuable reference. In the current context, the book addresses Industry Revolution 4.0 and Smart Society 5.0 impacting technological evolution and transformations in the years to come. Chapters are categorized by main topics so that readers are guided to their areas of interest:

Topic 1 addresses contributions that impact society through novel neural network architectures and machine learning algorithms.
Topic 2 explores novel mixed reality approaches for real-world applications.
Topic 3 introduces hybrid data security models for a smart society.
Topic 4 analyzes the synergy of signal processing, AI, and ML for systems.
Topic 5 discusses advanced system modeling for the industry revolution.
Topic 6 explores IoT and cybersecurity for smart devices.
Topic 7 describes recent advances in AI for digital innovations.
Topic 8 introduces AI and IoT for industry applications.
Topic 9 explores knowledge-based IoT and intelligent systems.

Subhas Chandra Mukhopadhyay, Sydney, Australia
S. M. Namal Arosha Senanayake, Selangor, Malaysia
P. W. Chandana Withana, Bathurst, Australia
Contents
Societal Impact Using Novel Neural Network Architectures and Machine Learning Algorithms

Intelligent Small Scale Autonomous Vehicle Development Based on Convolutional Neural Network (CNN) for Steering Angle Prediction . . . . 3
Muhammad Zacky Asy’ari, Maxson Phang, Nicholas Suganda, and Yosica Mariana

Histogram of Oriented Gradients (HOG) and Haar Cascade with Convolutional Neural Network (CNN) Performance Comparison in the Application of Edge Home Security System . . . . 13
Muhammad Zacky Asy’ari, Sebastian Filbert, and Zener Lie Sukra

Comparison of K-Nearest Neighbor and Support Vector Regression for Predicting Oil Palm Yield . . . . 23
Bens Pardamean, Teddy Suparyanto, Gokma Sahat Tua Sinaga, Gregorius Natanael Elwirehardja, Erick Firmansyah, Candra Ginting, Hangger Gahara Mawandha, and Dian Pratama Putra

Application of Convolution Neural Network for Plastics Waste Management Using TensorFlow . . . . 35
Immanuela Puspasari Saputro, Junaidy Budi Sanger, Angelia Melani Adrian, and Gilberth Mokorisa

Study on Optimal Machine Learning Approaches for Kidney Failure Detection System Based on Ammonia Level in the Mount . . . . 47
Nicholas Phandinata, Muhammad Nurul Puji, Winda Astuti, and Yuli Astuti Andriatin

Understanding the Influential Factors on Multi-device Usage in Higher Education During Covid-19 Outbreak . . . . 59
Robertus Tang Herman, Yoseph Benny Kusuma, Yudhistya Ayu Kusumawati, Darjat Sudrajat, and Satria Fadil Persada

Society with Trust: A Scientometrics Review of Zero-Knowledge Proof Advanced Applications in Preserving Digital Privacy for Society 5.0 . . . . 69
Nicholas Dominic, Naufal Rizqi Pratama, Kenny Cornelius, Shavellin Herliman Senewe, and Bens Pardamean

Mixed Reality Approaches for Real-World Applications

Validation of Augmented Reality Prototype for Aspects of Cultural Learning for BIPA Students . . . . 81
Pandu Meidian Pratama, Agik Nur Efendi, Zainatul Mufarrikoh, and Muhammad David Iryadus Sholihin

The Impact of Different Modes of Augmented Reality Information in Assisted Aircraft Cable Assembly . . . . 91
Dedy Ariansyah, Khaizuo Xi, John Ahmet Erkoyuncu, and Bens Pardamean
Toward Learning Factory for Industry 4.0: Virtual Reality (VR) for Learning Human–Robot Collaboration . . . . 101
Dedy Ariansyah, Giorgio Colombo, and Bens Pardamean

Analysing the Impact of Support Plans on Telehealth Services Users with Complex Needs . . . . 113
Yufeng Mao and Mahsa Mohaghegh

Trend and Behaviour Changes in Young People Using the New Zealand Mental Health Services . . . . 127
Yingyue Kang and Mahsa Mohaghegh

Hybrid Data Security Models for Smart Society

Securing Cloud Storage Data Using Audit-Based Blockchain Technology—A Review . . . . 141
Mohammad Belayet Hossain and P. W. C. Prasad

Data Security in Hybrid Cloud Computing Using AES Encryption for Health Sector Organization . . . . 155
Pratish Shrestha, Rajesh Ampani, Mahmoud Bekhit, Danish Faraz Abbasi, Abeer Alsadoon, and P. W. C. Prasad

Cyber Warfare: Challenges Posed in a Digitally Connected World: A Review . . . . 169
Ravi Chandra and P. W. C. Prasad

Surveilling Systems Used to Detect Lingering Threats on Dark Web . . . . 183
Y. K. P. Vithanage and U. A. A. Niroshika
Early Attack Detection and Resolution in Sensor Nodes to Improve IoT Security . . . . 195
Alvin Nyathi and P. W. C. Prasad

Exploring Cyber Security Challenges of Multi-cloud Environments in the Public Sector . . . . 209
Katherine Spencer and Chandana Withana

Data Security Risk Mitigation in the Cloud Through Virtual Machine Monitoring . . . . 227
Ashritha Jonnala, Rajesh Ampani, Danish Faraz Abbasi, Abeer Alsadoon, and P. W. C. Prasad

Synergy of Signal Processing, AI, and ML for Systems

Analysis of Knocking Potential Based on Vibration on a Gasoline Engine for Pertalite and Pertamax Turbo Using Signal Processing Methods . . . . 241
Munzir Qadri and Winda Astuti

AI-Based Video Analysis for Driver Fatigue Detection: A Literature Review on Underlying Datasets, Labelling, and Alertness Level Classification . . . . 251
Dedy Ariansyah, Reza Rahutomo, Gregorius Natanael Elwirehardja, Faisal Asadi, and Bens Pardamean

A Study of Information and Communications Technology Students e-Platform Choices as Technopreneur . . . . 263
Lukas Tanutama and Albert Hardy

Occupational Safety and Health Training in Virtual Reality Considering Human Factors . . . . 273
Amir Tjolleng

Block Chain Technology and Internet of Thing Model on Land Transportation to Reduce Traffic Jam in Big Cities . . . . 281
Inayatulloh, Nico D. Djajasinga, Deny Jollyta, Rozali Toyib, and Eka Sahputra

A Marketing Strategy for Architects Using a Virtual Tour Portfolio to Enhance Client Understanding . . . . 291
A. Pramono and C. Yuninda

Bee AR Teacher Framework: Build Augmented Reality Independently in Education . . . . 301
Maria Seraphina Astriani, Raymond Bahana, and Arif Priyono Susilo Ahmad
Performance Evaluation of Coffee Bean Binary Classification Through Deep Learning Techniques . . . . 311
Fajrul Islamy, Kahlil Muchtar, Fitri Arnia, Rahmad Dawood, Alifya Febriana, Gregorius Natanael Elwirehardja, and Bens Pardamean

Sustainable Material-Based Bedside Table Equipped with a Smart Lighting System . . . . 323
A. Pramono, T. I. W. Primadani, B. K. Kurniawan, F. C. Pratama, and C. Yuninda

Malang Historical Monument in HIMO Application with Augmented Reality Technology . . . . 335
Christoper Luis Alexander, Christiano Ekasakti Sangalang, Jonathan Evan Sampurna, Fairuz Iqbal Maulana, and Mirza Ramadhani

A Gaze-Based Intelligent Textbook Manager . . . . 345
Aleksandra Klasnja-Milicevic, Mirjana Ivanovic, and Marco Porta

Advanced System Modeling for Industry Revolution

Aligning DevOps Concepts with Agile Models of the Software Development Life Cycle (SDLC) in Pursuit of Continuous Regulatory Compliance . . . . 359
Kieran Byrne and Antoinette Cevenini

Decentralized Communications for Emergency Services: A Review . . . . 375
Dean Farmer and Antoinette Cevenini

Assessing Organisational Incident Response Readiness in Cloud Environments . . . . 387
Andrew Malec and P. W. C. Prasad

Industrial Internet of Things Cyber Security Risk: Understanding and Managing Industrial Control System Risk in the Wake of Industry 4.0 . . . . 397
J. Schurmann, Amr Elchouemi, and P. W. C. Prasad

Color Image Encryption Employing Cellular Automata and Three-Dimensional Chaotic Transforms . . . . 411
Renjith V. Ravi, S. B. Goyal, Sardar M. N. Islam, and Vikram Kumar

Privacy and Security Issues of IoT Wearables in Smart Healthcare . . . . 423
Syed Hassan Mehdi, Javad Rezazadeh, Rajesh Ampani, and Benoy Varghese

A Novel Framework Incorporating Socioeconomic Variables into the Optimisation of South East Queensland Fire Stations Coverages . . . . 435
Albertus Untadi, Lily D. Li, Roland Dodd, and Michael Li
IoT and Cybersecurity for Smart Devices

The Illicit Use of Cryptocurrency on the Darknet by Cyber Criminals to Evade Authorities . . . . 449
Mariagrazia Sartori, Indra Seher, and P. W. C. Prasad

The Integration and Complications of Emerging Technologies in Modern Warfare . . . . 461
Matthew Walsh, Indra Seher, P. W. C. Prasad, and Amr Elchouemi

Development of “RURUH” Mobile Based Application to Increase Mental Health Awareness . . . . 475
Deby Ramadhana, Enquity Ekayekti, and Dina Fitria Murad

User Experience Analysis of Web-Based Application System OTRS (Open-Source Ticket Request System) by Using Heuristic Evaluation Method . . . . 487
Abimanyu Yoga Prastama, Primus William Oliver, M. Irsan Saputra, and Titan

Review for Person Recognition Using Siamese Neural Network . . . . 499
Jimmy Linggarjati

Thermal Condition Evaluation of Farmhouse Using Ecotect Analysis as an Effort to Optimize Cultural Activity of Enclave Villagers (Case Study: Ngadas, Bromo Tengger Semeru National Park) . . . . 505
Ida Bagus Ananta Wijaya, Dian Kartika Santoso, and Irawan Setyabudi

Intelligent Home Monitoring System Using IoT Device . . . . 515
Santoso Budijono and Daniel Patricko Hutabarat

Contactless Student Attendance System Using BLE Technology, QR-Code, and Android . . . . 527
Rico Wijaya, Steven Kristianto, Yudha Batara Hasibuan, and Ivan Alexander

Factors Influencing the Intention to Use Electronic Payment Among Generation Z in Jabodetabek . . . . 539
Adriyani Fernanda Kurniawan, Jessica Nathalie Wenas, Michael, and Robertus Nugroho Perwiro Atmojo

Recent Advances in AI for Digital Innovations

Shirt Folding Appliance for Small Home Laundry Service Business . . . . 553
Lukas Tanutama, Hendy Wijaya, and Vincent Cunardy

Artificial Intelligence as a New Competitive Advantage in Digital Marketing in the Banking Industry . . . . 561
Wahyu Sardjono and Widhilaga Gia Perdana
Digital Supply Chain Management Transformation in Industry: A Bibliometric Study . . . . 575
Azure Kamul, Nico Hananda, and Rienna Oktarina

Development of Wireless Integrated Early Warning System (EWS) for Hospital Patient . . . . 587
Steady, Endra Oey, Winda Astuti, and Yuli Astuti Andriatin

Hexapod Robot with Indoor Path Planning Using ROS Navigation Stack on a Static Map . . . . 597
Denzel Polantika, Yusuf Averroes Sungkar, and Johannes

Detect the Use of Real-Masks with Machine Acquiring Using the Concept of Artificial Intelligence . . . . 609
Bambang Dwi Wijanarko, Dina Fitria Murad, Dania Amelia, and Fitri Ayu Cahyaningrum

Development of Indoor Positioning Engine Application at Workshop PT Garuda Maintenance Facilities Aero Asia Tbk . . . . 621
Bani Fahlevy, Dery Oktora Pradana, Maulana Haikal, G. G. Faniru Pakuning Desak, and Meta Amalya Dewi

Analysis and Design of Information System E-Check Sheet . . . . 633
GG Faniru Pakuning Desak, Sunardi, Imanuel Revelino Murmanto, and Johari Firman Julianto Sirait

Analysis of Request for Quotation (RFQ) with Rejected Status Use K-Modes and Ward’s Clustering Methods. A Case Study of B2B E-Commerce Indotrading.Com . . . . 643
Fransisca Dini Ariyanti and Farrel Gunawan

Innovation Design of Lobster Fishing Gear Based on Smart IoT with the TRIZ (Teoriya Resheniya Izobreatatelskikh Zadatch) Approach . . . . 655
Roikhanatun Nafi’ah, Era Febriana Aqidawati, and Kumara Pinasthika Dharaka

Controllable Smart Locker Using Firebase Services . . . . 669
Ivan Alexander and Rico Wijaya

Controllable Nutrient Feeder and Water Change System Based on IoT Application for Maintaining Aquascape Environment . . . . 679
Daniel Patricko Hutabarat, Ivan Alexander, Felix Ferdinan, Stefanus Karviditalen, and Robby Saleh

Website Personalization Using Association Rules Mining . . . . 689
Benfano Soewito and Jeffrey Johan
Adoption of Artificial Intelligence in Response to Industry 4.0 in the Mining Industry . . . . 699
Wahyu Sardjono and Widhilaga Gia Perdana

AI and IoT for Industry Applications

IoT in the Aquaponic Ecosystem for Water Quality Monitoring . . . . 711
N. W. Prasetya, Y. Yulianto, S. Sidharta, and M. A. Febriantono

IoT Based Methods for Pandemic Control . . . . 719
Artem Filatov and Mahsa Razavi

Identifying Renewable Energy Sources for Investment on Sumba Island, Indonesia Using the Analytic Hierarchy Process (AHP) . . . . 739
Michael Aaron Tuori, Andriana Nurmalasari, and Pearla Natalia

Light Control and Monitoring System Based on Internet of Things . . . . 749
Syahroni, Gede Putra Kusuma, and Galih Dea Pratama

Transfer Learning Approach Based on MobileNet Architecture for Human Smile Detection . . . . 759
Gusti Pangestu, Daniel Anando Wangean, Sinjiru Setyawan, Choirul Huda, Fairuz Iqbal Maulana, Albert Verasius Dian Sano, and Slamet Kuswantoro

Breakdown Time Prediction Model Using CART Regression Trees . . . . 769
Ni Nyoman Putri Santi Rahayu and Dyah Lestari Widaningrum

Video Mapping Application in Sea Life Experience Interior Design as Education and Recreation Facilities . . . . 779
Florencia Irena Wijaya, Savitri Putri Ramadina, and Andriano Simarmata

The Approach of Big Data Analytics and Innovative Work Behavior to Improve Employee Performance in Mining Contractor Companies . . . . 791
Widhi Setya Wahyudhi, Mohammad Hamsal, Rano Kartono, and Asnan Furinto

Emotion Recognition Based on Voice Using Combination of Long Short Term Memory (LSTM) and Recurrent Neural Network (RNN) for Automation Music Healing Application . . . . 807
Daryl Elangi Trisyanto, Michael Reynard, Endra Oey, and Winda Astuti

A Systematic Review of Marketing in Smart City . . . . 819
Angelie Natalia Sanjaya, Agung Purnomo, Fairuz Iqbal Maulana, Etsa Astridya Setiyati, and Priska Arindya Purnama
Design and Development Applications HD’R Comic Cafe Using Augmented Reality . . . . 829
Charin Tricilia Hinsauli Simatupang, Dewi Aliyatul Shoviah, Fairuz Iqbal Maulana, Ida Bagus Ananta Wijaya, and Ira Audia Agustina

Portable Waste Bank for Plastic Bottles with Electronic-Money Payment . . . . 841
Safarudin Gazali Herawan, Kristien Margi Suryaningrum, Desi Maya Kristin, Ardito Gavrila, Afa Ahmad Yunus, and Welldelin Tanawi

Evaluation of Branding Strategy in Automotive Industry Using DEMATEL Approach . . . . 853
Ezra Peranginangin and Yosica Mariana

IoT Based Beverage Dispenser Machine . . . . 861
Wiedjaja Atmadja, Hansel Pringgiady, and Kevin Lie

The Implementation of the “My Parking” Application as a Tool and Solution for Changing the Old Parking System to a Smart Parking System in Indonesia . . . . 873
Anisah El Hafizha Harahap and Wahyu Sardjono

Knowledge-Based IoT and Intelligent Systems

Study of Environmental Graphic Design Signage MRT Station Blok M . . . . 891
Arsa Widitiarsa Utoyo and Santo Thin

Church Online Queuing System Based-On Android . . . . 901
Hendry Hendratno, Louis Bonatua, Teguh Raharjo, and Emny Harna Yossy

Tourism Recommendation System Using Fuzzy Logic Method . . . . 913
Arinda Restu Nandatiko, Wahyu Fadli Satrya, and Emny Harna Yossy

Set of Experience Knowledge Structure (SOEKS) and Decisional DNA (DDNA)—A Review . . . . 925
A. B. M. Mehedi Hasan, Md. Nafiz Ishtiaque Mahee, and Cesar Sanin

Multiple Regression Model in Testing the Effectiveness of LMS After COVID-19 . . . . 937
Meta Amalya Dewi, Dina Fitria Murad, Arba’iah Binti Inn, Taufik Darwis, and Noor Udin

Skin Disease Detection as Unsupervised-Classification with Autoencoder and Experience-Based Augmented Intelligence (AI) . . . . 949
Kushal Pokhrel, Suman Giri, Sudip Karki, and Cesar Sanin
Intelligent System of Productivity Monitoring and Organic Garden Marketing Based on Digital Trust with Multi-criteria Decision-Making Method . . . . 959
Sularso Budilaksono, Febrianty, Woro Harkandi Kencana, and Nizirwan Anwar

Projection Matrix Optimization for Compressive Sensing with Consideration of Cosparse Representation Error . . . . 969
Endra Oey

Detection of Type 2 Diabetes Mellitus with Deep Learning . . . . 979
Mukul Saklani, Mahsa Razavi, and Amr Elchouemi

Irrigation Control System for Seedlings Based on the Internet of Things . . . . 999
André de Carvalho, Gede Putra Kusuma, and Alif Tri Handoyo
About the Editors
Dr. Subhas Chandra Mukhopadhyay (M’97, SM’02, F’11) graduated from the Department of Electrical Engineering, Jadavpur University, Calcutta, India, with a gold medal and received the Master of Electrical Engineering degree from the Indian Institute of Science, Bengaluru, India. He holds a Ph.D. (Eng.) degree from Jadavpur University, India, and a Doctor of Engineering degree from Kanazawa University, Japan. Currently, he is working as Professor of Mechanical/Electronics Engineering and Discipline Leader of the Mechatronics Degree Program of the School of Engineering, Macquarie University, Sydney, Australia. He has over 30 years of teaching and research experience. His fields of interest include smart sensors and sensing technology, wireless sensor networks, Internet of Things, electromagnetics, control engineering, magnetic bearings, fault current limiters, electrical machines, numerical field calculation, etc. He has authored/co-authored over 500 papers in international journals, conferences, and book chapters. He has edited eighteen conference proceedings. He has also edited twenty-five special issues of international journals as Lead Guest Editor and thirty-five books with Springer-Verlag. He has received numerous awards throughout his career and has attracted over AUD 6.2 M for different research projects. He has delivered 359 seminars including keynote, tutorial, invited, and special seminars. He is Fellow of IEEE (USA), Fellow of IET (UK), and Fellow of IETE (India). He is Topical Editor of IEEE
Sensors Journal and Associate Editor of IEEE Transactions on Instrumentation. He has organized many international conferences either as General Chair or Technical Program Chair. He is Founding Chair of the IEEE Sensors Council New South Wales Chapter. Dr. S. M. Namal Arosha Senanayake, Senior Member of IEEE, is Founder and Leader of IntelliHealth Solutions (Technology Licensing), and he has gained a well-balanced portfolio across research, education (teaching), and service (administration). Currently, he, jointly with University of Malaya Connected Health (UMCH) Technology Pvt. Ltd., is establishing a joint venture within the Asia-Pacific region (ASEAN, Japan, South Korea, Singapore, and Vietnam). He is also working as Adjunct Professor, School of Engineering, at Taylor’s University. He started his career as Pioneer Assistant Lecturer in computer science at the University of Peradeniya, Sri Lanka. After he obtained his Ph.D. in artificial intelligence, he was promoted to Senior Lecturer at the University of Peradeniya, Sri Lanka. He joined Monash University Sunway Campus as Senior Lecturer in 2002, where he was considered an Active Researcher. He succeeded in getting the largest eScience fund from the Ministry of Science, Technology and Innovation under the title Bio-Inspired Robotic Devices for Sportsman Screening Services (BIRDSSS). Based on research outcomes, he was awarded the Pro-Vice Chancellor’s award for excellence in research in three consecutive years: 2008, 2009, and 2010. In 2011, he joined as Associate Professor in artificial intelligence at the University of Brunei Darussalam. He was Recipient of the UK-South East Asia Knowledge Partnership—Collaborative Development Award, 2013. In 2021, he, jointly with team members from Japan, Malaysia, and Vietnam, received a research excellence award from the National Institute of Information and Communications Technology (NICT), Japan, for the best project among six leading projects sponsored by the NICT during 2017–2020.
He has been appointed as Liaison Officer and Visiting Professor under the Advanced Global Program (AGP) at Gifu University, Japan, since 2018. As Senior Member of IEEE, he actively engaged with IEEE during the last two decades. He served as
Chairman of the IEEE Robotics and Automation Society Chapter, Director of the IEEE Asia-Pacific Robotics and Automation Council, and Student Branch Counselor. He also serves as Associate Editor of four international journals and as Reviewer for 13 different IEEE Transactions as well as for Elsevier, Springer, Taylor & Francis, Acta Press, etc. He authored a book with the title Bio-Interfacing Devices. He was Editor for ten proceedings. He was External Examiner for Ph.D. and Master’s by research candidates at well-known universities in the Asia-Pacific region. Dr. P. W. Chandana Withana is Associate Professor with the School of Computing and Mathematics at Charles Sturt University, Australia. Before this, he was Lecturer at the United Arab Emirates University in UAE, Multimedia University in Malaysia, and the Informatics Institute of Technology (IIT), Sri Lanka. He gained his undergraduate and postgraduate degrees from St. Petersburg State Electrotechnical University in the early 90s and completed his Ph.D. studies at the Multimedia University in Malaysia. He is an Active Researcher in computer architecture, digital systems, and modeling and simulation. He has published more than 230 research articles in computing and engineering journals and conference proceedings. He has co-authored two books entitled Digital Systems Fundamentals and Computer Systems Organization and Architecture, published by Prentice Hall. He is Senior Member of the IEEE Computer Society.
Societal Impact Using Novel Neural Network Architectures and Machine Learning Algorithms
Intelligent Small Scale Autonomous Vehicle Development Based on Convolutional Neural Network (CNN) for Steering Angle Prediction Muhammad Zacky Asy’ari, Maxson Phang, Nicholas Suganda, and Yosica Mariana Abstract Human error is the most significant factor in traffic accidents; 61% of accidents in Indonesia are caused by driver negligence. Autonomous vehicles are unmanned vehicles that are believed to be able to reduce the number of accidents caused by human error. This study aims to create a prototype of a small-scale unmanned vehicle using computer vision that can move around a track. The prototype captures images using a USB webcam, which are processed by a Raspberry Pi 4 to predict the correct steering angle based on the camera input. A convolutional neural network (CNN) algorithm is used to recognize the path. The material used to make the prototype is PLA (polylactic acid). The prototype steering system uses a servo motor, and the drive system uses a brushless DC motor controlled by the PCA9685 module. More than 1600 images were collected on a simplified track. The results of this study indicate that the prototype can move along the given track with a steering angle range of 20°–130° for right turns and 60°–170° for left turns. The throttle value capable of moving the prototype at a steady speed is 0.075. The TensorFlow Keras model, dataset, and autonomous car design can achieve the desired level of autonomy with an RMSE accuracy of 0.1145. Keywords Autonomous vehicle · Computer vision · Raspberry Pi · Machine learning · Steering angle prediction
M. Z. Asy’ari (B) · M. Phang · N. Suganda Automotive and Robotics Program, Computer Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] M. Phang e-mail: [email protected] N. Suganda e-mail: [email protected] Y. Mariana Architecture Department, Faculty of Engineering, Binus University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_1
1 Introduction The concept of Autonomous Vehicles (AVs) has attracted great interest and considerable discussion around the world today. Much AV research and development has started and will increase over time [1]. Many car companies, such as Tesla and General Motors, are trying to develop their own AVs [2]. This development is also carried out in Indonesia. One example is the collaboration between BPPT (Agency for the Assessment and Application of Technology) and the Ministry of Transportation in developing an Autonomous Vehicle (AV). They stated that the transportation sector needs to implement electric-based autonomous vehicle technology and that future mobility with autonomous electric vehicle technology needs to be implemented as part of the mass transportation system [3]. Autonomous Vehicles (AVs) are electric vehicles programmed to move on their own without human intervention [4]. Autonomous technology used in transportation today has the opportunity to reduce or solve economic and environmental problems in driving, namely lower fuel consumption and emissions with a smoother driving experience [5]. The development of AVs focuses on increasing the car’s controllability to a level where no human driver intervention is needed, focusing on lane recognition, detecting nearby vehicles, and traveling from one point to another. According to GAIKINDO data, the trend of electric-based cars has increased over the years, from 812 electric cars in 2019 to 3205 electric cars in 2021 [6]. Human error occurs due to the negligence of human resources, inadvertently resulting in unwanted and detrimental consequences for oneself and others [7]. Such errors can occur while driving and can result in accidents.
Based on data from the Ministry of Transportation (Kemenhub) and the Indonesian National Police (Polri) [8], the number of land traffic accidents in Indonesia reached 103,645 cases in 2021. This number increased by 3.62% compared to the previous year, which recorded 100,028 cases. With the presence of AVs in everyday life, it is expected that driver behavior will become safer, road safety will improve, and driving accidents will be significantly reduced due to the elimination of human errors [9, 10]. In this study, the authors develop and test a small-scale AV maneuvering on artificial pathways using a Raspberry Pi 4-based Convolutional Neural Network (CNN) method.
2 Related Work Research on “1/10th Scale Autonomous Vehicle Based On Convolutional Neural Network” was conducted by Seth et al. [11]. This research shows the demonstration and implementation of autonomous vehicles based on a Convolutional Neural Network (CNN). The vehicle uses a 1/10 scale RC car as the main base, a Raspberry Pi camera as its main input, a Raspberry Pi 4 as a computing platform, and ultrasonic
sensors. The dataset for training contains 30,960 images and a ‘.json’ file containing the steering data. A batch size of 128 was used; the TensorFlow Keras model has a validation loss of 0.098543, and with 55 epochs, training stops when losses start to increase. Research on “Self Driving Car Using Machine Learning” was conducted by Kurani et al. [12]. Their setup uses a Raspberry Pi camera, an Arduino UNO, and ultrasonic sensors; data is collected from the Raspberry Pi and processed using a Convolutional Neural Network (CNN), where the image is processed to detect road markings. In the study, 14 videos were taken, and about 647 images were extracted and classified into various folders, which were then trained on various combinations of tracks. One study by the University of Oxford also developed a system for unmanned vehicle racing. This system is simulated with Gazebo, open-source software for three-dimensional simulation. In the simulation, the vehicle goes through the stages of perception, localization, and mapping using GPS and object detection algorithms based on the YOLOv3 Tiny framework. The path boundary is then estimated using the nearest-neighbour algorithm, and the vehicle control system is modeled using ROS 2, Python, and a PID controller [13]. Research conducted by a technical university produced a small-scale AV prototype capable of detecting the fastest lanes and traffic signs. A Kalman Filter is used for lane tracking, and the Hough Transform is used to detect the path. Traffic signs are detected using contour analysis programmed with the OpenCV library and the C++ programming language [14]. In terms of steering capabilities, Lu Xiong et al. deal with steering angle control using an actuator in an autonomous vehicle with an electrical power system and simulate the steering angle using a numerical method to track the desired steering shaft angle [15].
A self-driving car prototype using PID control was developed to control the servo angle and the PWM of a DC motor, using a camera to follow a trajectory with image processing techniques but without deep learning algorithms; it occasionally goes off the track and does not head in the right direction [16]. Zakaria et al. tested the steering control performance of autonomous vehicles on dynamic curvature and estimated the path with small error [17]. This paper aims to improve the steering capabilities of autonomous vehicles by combining a PID controller and a deep learning model in an integrated system using a minicomputer.
3 Proposed Method The first step is to design and create a prototype and then produce the design using a 3D printer [18, 19]. Next, the collected dataset is used to create a deep-learning model that will be used to predict the steering angle. With this model, the prototype can predict the degree of rotation through input from the camera. When the program is started, the Raspberry Pi will send a signal to
Table 1 Right turn steering angle test

Maximum turning angle    Minimum turning angle
140°                     20°
140°                     30°
140°                     40°
140°                     50°
140°                     60°
run the servo motor and brushless DC motor through the PCA9685 module. Then, it will set the servo angle based on the prediction from the camera’s input. The parameter tested for this prototype is the servo rotation angle, which serves as the steering angle. The experiment to test the steering angle was carried out by reviewing the prototype’s ability to circle the track with five variations of the servo angle and different rotation directions. For the right-hand rotation test, the maximum value of the turning angle is constant at 140°, and the minimum value varies among 20°, 30°, 40°, 50°, and 60°, as seen in Table 1. Setting constant maximum and minimum limits makes the prototype movement more stable. In this experiment, the DC motor rotation speed parameter is kept constant to ensure that the vehicle velocity is constant.
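The constant maximum and minimum limits described above amount to a clamping step between the model's raw prediction and the servo command. The sketch below is an illustration of that idea only, not the authors' code; the default limit values follow the right-turn test in Table 1, and the actual PCA9685 servo call is omitted.

```python
def clamp_steering(predicted_angle: float,
                   min_angle: float = 20.0,
                   max_angle: float = 140.0) -> float:
    """Clamp a predicted steering angle to fixed servo limits.

    Keeping the commanded angle inside a constant [min, max] window,
    as in the right-turn tests of Table 1, stabilizes the prototype's
    movement. The clamped value would then be sent to the servo via
    the PCA9685 module (hardware call omitted in this sketch).
    """
    return max(min_angle, min(max_angle, predicted_angle))

print(clamp_steering(150.0))  # 140.0 (capped at the maximum)
print(clamp_steering(10.0))   # 20.0  (raised to the minimum)
print(clamp_steering(90.0))   # 90.0  (inside the window, unchanged)
```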
4 Experimental Setup The experimental setup for this study is divided into three components: the system circuit, the homemade track design, and the autonomous car design. Figure 1 shows the electrical circuit used in this research. The system circuit for this research prototype uses the Raspberry Pi 4 as a microprocessor that runs the program, processes the image input obtained from the webcam, and sends the steering angle prediction results to the PCA9685 module, which regulates the movement of the servo motor [20]. The Raspberry Pi 4 is powered by a 15-W power bank with a 10,000 mAh capacity. The PCA9685 module also plays a role in regulating the speed of the brushless DC motor through the ESC (Electronic Speed Controller) powered by a 7.4 V battery. Hence, the vehicle was designed to move at a constant speed during the test. Commonly used steering systems are parallel steering and Ackerman steering. Ackerman steering is a type of steering system in which the turning angle of the inner wheel is greater than that of the outer wheel; it is also used for autonomous vehicles [21–23]. The following section shows a path with a circular design that has four 90° turns and four straight segments. This path is used for data collection and for testing the TensorFlow Keras autonomous car model. The path is made of black duct tape attached to ceramic tiles. The design of the artificial path is shown in Fig. 2a. The result determines that the servo motor, which should have a rotating angle range of 0° ≤ θ ≤ 180°, is limited to 20° ≤ θ ≤ 170°. This happens because the ability
Fig. 1 Electrical circuit diagram
Fig. 2 Simplified homemade track (a) and autonomous car prototype (b)
of the servo to rotate fully is inhibited by the servo arm attached to the connector that connects the left-wheel drive and right-wheel drive. The proposed small-scale autonomous vehicle is shown in Fig. 2b. The prototype steering mechanism has different turning-angle ranges. The steering angle range for right turns is 20° ≤ θ < 80°, for left turns it is 100° < θ ≤ 170°, and for straight-ahead movement it is 80° ≤ θ < 100°. Ideally, for the prototype to move in a straight line, the servo motor angle should be around 90°. However, because the torque generated by the servo motor is not strong enough to turn the wheel, angles within the 80°–100° range produce no significant turning.
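The turn-direction ranges above can be expressed as a small labeling helper. This is an illustrative sketch using the thresholds quoted in the text, not the authors' implementation; note that the quoted inequalities leave θ = 100° unlabeled, and the sketch preserves that gap.

```python
def turn_direction(angle: float) -> str:
    """Label a servo angle using the prototype's steering ranges:
    right turn 20 <= angle < 80, straight 80 <= angle < 100,
    left turn 100 < angle <= 170 (exactly 100 falls in no range)."""
    if 20 <= angle < 80:
        return "right"
    if 80 <= angle < 100:
        return "straight"
    if 100 < angle <= 170:
        return "left"
    return "out of range"

print(turn_direction(45))   # right
print(turn_direction(90))   # straight
print(turn_direction(130))  # left
```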
5 Result and Discussion 5.1 Parameter Selection The test was carried out on a dataset of 1607 images; after the data balancing process, 723 images remained. The dataset is then divided into 80% training
data and 20% validation data. The total training data is thus 578 images, and the validation data is 145 images. The training process uses the Adam optimizer and the MSE loss function [24]. Figure 3 presents the training and validation RMSE results. The other parameters, epoch and steps per epoch, were fixed at 30 and 500, respectively. The learning rate is tuned to find the optimal value for the smoothness of the gradient descent function. If the chosen value is just right, it will give a smoother gradient; on the contrary, if its value is too high, the results will oscillate or even diverge [25]. In this paper, according to the results in Fig. 4, the lower learning rate performs better than the higher one. Based on the parameters that have been determined, the best-performing results are selected for developing the training model for autonomous driving. Thus, the unmanned vehicle will perform optimal steering to follow the track.
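The balancing and 80:20 split described above (723 balanced samples giving 578 training and 145 validation images) can be sketched as follows. This is an illustrative reconstruction, not the authors' preprocessing code; capping the number of samples per steering-angle bin is one common way to balance a steering dataset, and the 10° bin width is an assumption.

```python
import random

def balance_by_bin(angles, max_per_bin):
    """Keep at most max_per_bin samples per steering-angle bin, a common
    way to flatten the 'mostly straight' peak of a driving dataset.
    Returns the indices of the samples that are kept."""
    kept, counts = [], {}
    for i, a in enumerate(angles):
        b = int(a // 10)          # 10-degree bins (illustrative choice)
        if counts.get(b, 0) < max_per_bin:
            counts[b] = counts.get(b, 0) + 1
            kept.append(i)
    return kept

def train_val_split(indices, train_frac=0.8, seed=0):
    """Shuffle and split sample indices with an 80:20 ratio."""
    idx = indices[:]
    random.Random(seed).shuffle(idx)
    n_train = int(len(idx) * train_frac)
    return idx[:n_train], idx[n_train:]

# With 723 balanced samples, an 80:20 split yields 578 / 145 images,
# matching the counts reported in the paper.
train, val = train_val_split(list(range(723)))
print(len(train), len(val))  # 578 145
```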
Fig. 3 RMSE graph with a 0.0001 learning rate

Fig. 4 Performance comparison plot of train and validation loss and RMSE values for learning rates of 0.0001 and 0.001
Fig. 5 Steering angle prediction result (straight)
5.2 Steering Performance The parameter used to create the deep learning model is a learning rate of 0.0001 because it performs better than a learning rate of 0.001. Steering results from the experiment are shown in Fig. 5. The image on the left shows the position of the prototype captured by the camera, and the predicted result is displayed on the right. Prediction results show numbers ranging from 90° to 97°, and the camera view shows a straight path. It can be seen that the model created and developed can correctly predict from the image captured by the camera. To see the steering performance of the autonomous vehicle, Fig. 6 shows the data distribution of the model’s response on the path traversed. It can be seen that the distribution of data at the 20° minimum steering angle is spread evenly, which means the model can respond according to the steering conditions. In contrast, the other steering angles have difficulty responding to the track. According to Fig. 7, the blue line with a minimum angle of 20° shows that the prototype has the longest steering distance. The other graph lines show that the prototype is unable to circle the track even once. However, the graph with a minimum angle of 30° shows more movement than the graphs with 40°, 50°, and 60° values, meaning the AV struggles to move back onto the right track, although in the end it is still incapable of completing one lap of the track.
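The RMSE figure quoted for the model (0.1145) measures the deviation between predicted and recorded steering values. As a hedged illustration, with made-up sample values rather than the paper's data, it can be computed as:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and ground-truth values."""
    assert len(predicted) == len(actual)
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical normalized steering samples, for illustration only.
pred = [0.50, 0.52, 0.47, 0.55]
true = [0.50, 0.50, 0.50, 0.50]
print(round(rmse(pred, true), 4))  # 0.0308
```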
6 Conclusion It can be concluded that the prototype of the autonomous car that was made is not yet feasible for use on public roads due to the lack of accuracy and consistency. The experimental results obtained are as follows
Fig. 6 Steering angle distribution
Fig. 7 Right turn steering angle log
• The best steering degree range for right turns is 140°–20°, and the best steering degree range for left turns is 170°–60°. • The best throttle value for the prototype to circle the track is 0.075. • The TensorFlow Keras model is suitable for predicting the steering angle in real time, with an RMSE accuracy rate of 0.1145 points. The research can still be developed to get better results and performance. This study’s problems are inconsistent data collection, insufficient torque to drive the wheels, and no prototype protection to avoid accidents. Thus, the following are suggestions that can be made for future development • Captured datasets can be reproduced and varied • Test other training parameter values to see the level of accuracy of a model • Try methods other than CNN to test the effectiveness of steering prediction performance
• Use a camera with a wider field of view • Use a more powerful Single Board Computer (SBC) with better computing abilities.
References
1. Goldfain B et al (2019) AutoRally: an open platform for aggressive autonomous driving. IEEE Control Syst Mag 39(1):26–55
2. Raza M (2018) Autonomous vehicles: levels, technologies, impacts and concerns. Int J Appl Eng Res 13(16):12710–12714
3. Indonesian Government Research Institute. https://www.bppt.go.id/berita-bppt/autonomousvehicle-av-akan-diadopsi-untuk-ibu-kota-baru-indonesia. Last accessed 13 July 2022
4. Hu J, Bhowmick P, Arvin F, Lanzon A, Lennox B (2020) Cooperative control of heterogeneous connected vehicle platoons: an adaptive leader-following approach. IEEE Robot Autom Lett 5(2):977–984
5. Stern RE et al (2019) Quantifying air quality benefits resulting from few autonomous vehicles stabilizing traffic. Transp Res D Transp Environ 67:351–365
6. Katadata. https://databoks.katadata.co.id/datapublish/2022/04/21/berapa-penjualan-mobil-listrik-di-indonesia. Last accessed 10 July 2022
7. Kanki BG (2018) Cognitive functions and human error. In: Sgobba T, Kanki B, Clervoy J-F, Sandal GM (eds) Space safety and human performance. Butterworth-Heinemann, pp 17–52
8. Bisnis Indonesia Resources Center. https://dataindonesia.id/sektor-riil/detail/jumlah-kecelakaan-lalu-lintas-meningkat-jadi-103645-pada-2021. Last accessed 09 June 2022
9. Giuffrè T, Canale A, Severino A, Trubia S (2017) Automated vehicles: a review of road safety implications as a driver of change, vol 16. In: Proceedings of the 27th CARSP conference
10. Wang J, Zhang L, Huang Y, Zhao J (2020) Safety of autonomous vehicles. J Adv Transp
11. Seth A, James A, Mukhopadhyay SC (2020) 1/10th scale autonomous vehicle based on convolutional neural network. Int J Smart Sens Intell Syst 13(1):1
12. Kurani T, Kathiriya N, Mistry U, Kadu L, Motekar H (2020) Self driving car using machine learning
13. Ahemad MM (2020) Advance self driving car using machine learning. Asian J Appl Sci Technol 4(3):45–50
14. Blaga B-C-Z, Deac M-A, Al-doori RWY, Negru M, Dănescu R (2018) Miniature autonomous vehicle development on Raspberry Pi. In: IEEE 14th international conference on intelligent computer communication and processing (ICCP), pp 229–236
15. Xiong L, Jiang Y, Fu Z (2018) Steering angle control of autonomous vehicles based on active disturbance rejection control. IFAC-PapersOnLine 51(31):796–800
16. Rafsanjani, Wibawa IPD, Ekaputri C (2019) Speed and steering control system for self-driving car prototype. J Phys Conf Ser 1367(1):12068
17. Zakaria MA, Zamzuri H, Mazlan SA (2016) Dynamic curvature steering control for autonomous vehicle: performance analysis. IOP Conf Ser Mater Sci Eng 114:12149
18. Culley J et al (2020) System design for a driverless autonomous racing vehicle. In: 12th international symposium on communication systems, networks and digital signal processing (CSNDSP)
19. Open Source 3D Design. https://grabcad.com/. Last accessed 21 May 2021
20. Sreekar C, Sindhu VS, Bhuvaneshwaran S, Rubin Bose S, Sathiesh Kumar V (2021) Positioning the 5-DOF robotic arm using single stage deep CNN model. In: Seventh international conference on bio signals, images, and instrumentation (ICBSII)
21. Sotelo MA (2003) Lateral control strategy for autonomous steering of Ackerman-like vehicles. Robot Auton Syst 45(3):223–233
22. Khristamto M, Praptijanto A, Kaleg S (2015) Measuring geometric and kinematic properties to design steering axis to angle turn of the electric golf car. Energy Procedia 68:463–470
23. Tian F, Li Z, Wang F, Li L (2022) Parallel learning-based steering control for autonomous driving. IEEE Trans Intell Veh 1
24. Bock S, Weiß M (2019) A proof of local convergence for the Adam optimizer. In: International joint conference on neural networks (IJCNN)
25. Wu X, Ward R, Bottou L (2018) WNGrad: learn the learning rate in gradient descent. arXiv
Histogram of Oriented Gradients (HOG) and Haar Cascade with Convolutional Neural Network (CNN) Performance Comparison in the Application of Edge Home Security System Muhammad Zacky Asy’ari, Sebastian Filbert, and Zener Lie Sukra Abstract In recent years, security has played a significant role in our daily lives. Home security has become more popular with the enhancement of the Internet of Things (IoT) in intelligent home automation. This paper aims to build a system that can detect faces and the presence of unknown people using computer vision methods combined with the Internet of Things. Two algorithms, Histogram of Oriented Gradients (HOG) and Haar Cascade classifiers with the enhancement of a Convolutional Neural Network (CNN), were used to compare their performance. The system utilizes an Nvidia Jetson Nano as an edge device, an HD camera, a Light Emitting Diode (LED), and a sound buzzer as peripheral hardware. The Telegram messenger application was used to notify the homeowner when uninvited guests passed through the pre-defined security area. In the results, the HOG method has 100% accuracy compared to 75% for Haar Cascade with 720p video quality in the proposed system. Keywords Computer vision · Face recognition · Home security · Internet of Things · Object tracking
1 Introduction The Internet of Things (IoT) has played an essential role in daily human activity. It is considered the future of the Internet and will contain billions of intelligent devices communicating with each other [1]. The technology will enhance the quality of life and improve productivity, safety, and wellbeing. The concept of IoT, combined with M. Z. Asy’ari (B) · S. Filbert · Z. L. Sukra Automotive and Robotics Program, Computer Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] S. Filbert e-mail: [email protected] Z. L. Sukra e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_2
smart homes and security, has the potential to transform houses, homes, and offices into safety-aware environments [2]. Over the last 15 years, people have been interested in smart homes and IoT, according to data taken from Google Trends analytics. Smart homes cannot be separated from surveillance camera systems, which improve house safety and prevent criminal acts. Zhang et al. examined the possibility of a wireless video network over the Internet, which uploads real-time, HD-quality video [3]. Such a system uses heavy bandwidth to transfer the data to the cloud, meaning higher operational costs. Edge computing technology can avoid this issue by processing the vast amount of data locally. The combination of video streaming and computer vision makes it possible to develop more efficient systems. In this study, several computer vision techniques are used: the Histogram of Oriented Gradients (HOG) and Haar Cascade algorithms for face recognition, and Multiple Object Tracking (MOT), also known as Multiple Target Tracking (MTT), for object tracking. This paper uses a holistic approach to integrate IoT, computer vision, machine learning, and edge computing solutions into a smart home security system. The research includes a security notification system to develop a surveillance camera with better features and functionality. It also covers the weaknesses of similar systems that others have made. The system can detect faces during the day and human objects at night. In addition, it warns the owner via the Telegram messenger and triggers the local sound buzzer in real time.
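Notifications like the one described above are typically sent through the Telegram Bot API's sendMessage method. The sketch below only builds the request URL and payload, without performing the HTTP call, so it runs without network access; the bot token and chat ID shown are placeholders, not values from the paper.

```python
def build_telegram_alert(bot_token: str, chat_id: str, text: str):
    """Build the URL and payload for a Telegram Bot API sendMessage call.

    The actual HTTP POST (e.g. with the requests library) is left out of
    this sketch so it can run without a real bot token or network access.
    """
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload

url, payload = build_telegram_alert(
    "BOT_TOKEN", "CHAT_ID",
    "Unknown person detected in the security area")
print(url)
print(payload)
```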
2 Background and Related Work 2.1 Background HOG Face Recognition. There are many methodologies for face recognition systems. The HOG technique is one of the methods used in this study. This technique outperforms existing feature sets for human detection. The method describes the shape of a human face through the distribution of edge directions, or light intensity gradients. The technique divides the face image into small areas called cells. A histogram is then made for each cell, and the histograms are combined to extract features from each face [4]. Tri-linear interpolation distributes each pixel’s gradient magnitude and direction within a cell into nine bins. To extract face image features, the histograms from all cells, each generated pixel by pixel from the gradient vectors, are combined. A Support Vector Machine (SVM) with different kernel functions is used to investigate the face recognition problem [4]. The face recognition system in this paper uses technology developed from DLIB, a machine learning toolkit of algorithms and tools written in the C++ programming
language [5]. Its model precision on the Labeled Faces in the Wild benchmark is 99.38% [6]. Haar Cascade. Haar Cascade is a classification system utilizing a cascade classifier and Haar-like features to detect a human face. The cascade classifier, from Viola and Jones, is the most successful method for detecting faces [7, 8]. A Haar-like feature is a digital image feature used to detect an object. It has a scalar value representing the average intensity difference between two rectangular regions. An intensity gradient is captured at different placements, spatial frequencies, and directions, which vary in shape, size, position, and rectangular-region arrangement relative to the base resolution of the detector [9]. The algorithm classifies features such as the forehead, eyebrows, eyes, nose, and mouth. The nose, for example, shows a black-white-black pattern when light is reflected from above; the highlight forms a line and is thus classified as a line feature. The Haar-like feature is a two-dimensional function transforming an object into a code. The value of a Haar-like feature is the difference between the average dark-region pixel intensity and the average light-region pixel intensity, and the feature is considered present if this difference exceeds a threshold [10]. Convolutional Neural Network (CNN). A Convolutional Neural Network is a neural network model consisting of neurons organized in layers. The neurons in each layer are connected through weights and biases. Remote sensing data is an example of the first layer’s input, while an example of the last layer’s output is the classification of data into classes. In between, the hidden layers transform the feature space of the input so that it matches the output. At least one convolution layer is used in a CNN as a hidden layer to detect patterns [11]. A CNN consists of several layers of artificial neurons. An artificial neuron is a mathematical function that calculates a weighted sum of its inputs and produces an output activation value.
When an image is fed into a ConvNet, each layer generates activations that are passed to the next layer [12]. Based on the activation maps of the convolution layers, the classification layer outputs a confidence score (a value between 0 and 1) indicating how likely it is that the image belongs to each "class." For example, if a ConvNet is trained to detect cats, dogs, and horses, its last layer indicates whether the input image contains one of these animals [12]. This project uses a CNN for object detection before entering the object tracking process. Object tracking, usually known as Multiple Object Tracking (MOT) or Multiple Target Tracking (MTT), has many functions and is essential in Computer Vision; MOT serves to track and identify objects and their movement [13]. The centroid tracking method is used in this paper: it follows the bounding boxes produced by an object detector, and the centroid of each bounding box is assigned a unique ID [14].
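The centroid tracking idea described above — assigning a persistent ID to each detected bounding box by matching its centroid to the nearest previously tracked centroid — can be sketched in plain Python. This is a simplified, greedy nearest-neighbour version with a hypothetical class name; the tracker in [14] additionally handles objects that disappear for several frames.

```python
import math
from itertools import count

class CentroidTracker:
    """Minimal centroid tracker: keeps a unique ID per tracked object."""

    def __init__(self, max_distance=50.0):
        self.next_id = count()
        self.objects = {}          # object_id -> (cx, cy)
        self.max_distance = max_distance

    def update(self, boxes):
        """boxes: list of (x1, y1, x2, y2) from the object detector."""
        centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
        updated = {}
        unmatched = dict(self.objects)
        for c in centroids:
            # Greedily match each detection to the nearest existing ID.
            best = min(
                unmatched,
                key=lambda oid: math.dist(unmatched[oid], c),
                default=None,
            )
            if best is not None and math.dist(unmatched[best], c) <= self.max_distance:
                updated[best] = c
                del unmatched[best]
            else:
                updated[next(self.next_id)] = c   # new object, new ID
        self.objects = updated
        return updated

tracker = CentroidTracker()
frame1 = tracker.update([(0, 0, 10, 10), (100, 100, 110, 110)])   # IDs 0 and 1 created
frame2 = tracker.update([(4, 2, 14, 12), (102, 100, 112, 110)])   # same IDs persist
```

Because the match is by nearest centroid between consecutive frames, an object keeps its ID as long as it moves less than `max_distance` pixels per frame.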
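The Haar-like feature value described in Sect. 2.1 — the difference between the average intensities of a dark and a light rectangular region, compared against a threshold — can be computed in constant time per rectangle using an integral image (summed-area table). The sketch below is illustrative NumPy, not the OpenCV cascade implementation; the region coordinates and the threshold of 100 are invented for the example.

```python
import numpy as np

def integral_image(img):
    # Summed-area table: entry (y, x) holds the sum of img[:y+1, :x+1]
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) using the integral image."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def two_rect_feature(img, dark, light):
    """Haar-like two-rectangle feature: mean(dark region) - mean(light region)."""
    ii = integral_image(img.astype(float))
    (y1, x1, h1, w1), (y2, x2, h2, w2) = dark, light
    return (rect_sum(ii, y1, x1, h1, w1) / (h1 * w1)
            - rect_sum(ii, y2, x2, h2, w2) / (h2 * w2))

# A toy "edge": bright top half (200), dark bottom half (50).
img = np.vstack([np.full((2, 4), 200.0), np.full((2, 4), 50.0)])
value = two_rect_feature(img, dark=(0, 0, 2, 4), light=(2, 0, 2, 4))
fires = value > 100   # feature "fires" when the difference exceeds the limit
```

In a real cascade, thousands of such rectangle differences are evaluated and the thresholds are learned during training rather than chosen by hand.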
M. Z. Asy’ari et al.
2.2 Related Work

Face detection technology has been widely used to build security systems. One example is the system developed by Aydin and Othman in their research titled "A New IoT Combined Face Detection of People by Using Computer Vision for Security Application" [15]. They built a system that detects faces when there is movement. The controller is a Raspberry Pi, and the system uses the Haar Cascade algorithm from OpenCV. When it detects movement, the system takes a picture and counts how many faces are found. The system was tested indoors and only operates when it detects movement. The last part of this paper compares the performance of the Haar Cascade algorithm and face_recognition, as well as a security notification system using object tracking at night, taking into account the results of Grm et al. in "Strengths and weaknesses of deep learning models for face recognition against image degradations" [16]. That journal discusses the performance of various facial recognition models under degradations such as blur, reduced image quality, and noise. Boyko et al. compared popular face recognition algorithms and noted that the OpenCV method performed better in detection and is better suited for IoT applications, while the HOG algorithm captures more detail and is preferable when the dataset contains many photos [17]. In this paper, face recognition algorithms are examined in the implementation of a home security system during the day and night using an edge computing device.
3 Research Method and Implementation

The main hardware of the system is an Nvidia Jetson Nano acting as an edge device that processes the data locally before sending the extracted data to the cloud API of the Telegram messenger service. Peripheral devices such as a buzzer, camera, and LED lamp are integrated into the system. Finally, because the computation is done locally on the edge device, the user receives a notification on a smartphone via a Telegram message (see Fig. 2a). Several preparations are needed before designing the system. The first is preparing the microSD card used for the Jetson Nano 2 GB (see Fig. 2b): the microSD card is formatted, and JetPack version 4.5 is installed as the software image for the Jetson Nano [18], using a headless setup [19]. The face recognition system has several dependencies that must be installed for it to work as needed, namely libraries and modules such as DLIB and the face_recognition module. After all the libraries were successfully installed, the program code was written using the face_recognition module, combined with the code for sending messages to Telegram. The program is written to send a message to Telegram when a face is detected as 'Unknown' (faces of people
who are not in the dataset). The dataset used in the face recognition system consists of 15 people, mainly famous people and celebrities collected from Wikipedia, as seen in Fig. 1; one example is Cristiano Ronaldo's picture obtained from Wikipedia [22]. Like the face recognition system, the object tracking algorithm has several dependencies that must be installed, such as a model pre-trained on the COCO dataset [20], OpenCV, and TensorFlow. The system was developed to sound a buzzer and send messages to Telegram when someone crosses the reference line in a particular direction. The message is sent to Telegram via the Telegram bot.
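The 'Unknown'-face notification flow described above can be sketched as follows. The per-frame recognition result is represented here by a plain list of names rather than an actual face_recognition call, and the Telegram Bot API request is only constructed, not sent; the bot token and chat ID are placeholders.

```python
from urllib.parse import urlencode
from urllib.request import Request

BOT_TOKEN = "<telegram-bot-token>"   # placeholder, issued by @BotFather
CHAT_ID = "<chat-id>"                # placeholder

def should_notify(recognized_names):
    """Notify only when at least one face is not in the dataset."""
    return "Unknown" in recognized_names

def build_alert(recognized_names):
    """Build (but do not send) a Telegram Bot API sendMessage request."""
    text = "Alert: unknown person detected ({} face(s) in frame)".format(
        len(recognized_names)
    )
    params = urlencode({"chat_id": CHAT_ID, "text": text})
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage?{params}"
    return Request(url)   # urllib.request.urlopen(request) would send it

# Example frame: one known face and one face outside the dataset.
names = ["Cristiano Ronaldo", "Unknown"]
if should_notify(names):
    req = build_alert(names)
```

In the actual system the `names` list would come from comparing each detected face encoding against the dataset encodings, and the request would be sent over the network.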
Fig. 1 Dataset face recognition
Fig. 2 Home security architecture (a) and device prototype (b)
Fig. 3 Real-time face recognition
4 Result and Analysis

4.1 Result

The device can work and detect in real time. However, the performance and parameter comparison results use pre-recorded videos due to device placement issues in the monitored area. The face recognition system runs well. The system was tested using 15 different faces: the faces of the author, the author's relatives, and famous people obtained from Wikipedia. The system can detect the faces of people in the dataset, and people who are not in the dataset are detected as 'Unknown'. An illustration of the device implementation is shown in Fig. 3: a person walks from the direction of the gate toward the camera, and when they cross the boundary line in the video frame, the counter increases to 1.
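The boundary-line counter described above can be sketched as follows: an object is counted once when its tracked centroid crosses the reference line in the monitored direction. The line position, crossing direction, and function name are illustrative, not taken from the project code.

```python
def update_counter(prev_y, curr_y, line_y, counted, object_id, counter):
    """Count an object once when its centroid crosses line_y downward."""
    if object_id not in counted and prev_y < line_y <= curr_y:
        counted.add(object_id)   # remember the ID so it is never double-counted
        counter += 1
    return counter

# Centroid y-positions of tracked object 0 over consecutive frames;
# the reference line sits at y = 100 in the video frame.
positions = [80, 95, 105, 120]
counted, counter = set(), 0
for prev_y, curr_y in zip(positions, positions[1:]):
    counter = update_counter(prev_y, curr_y, 100, counted, 0, counter)
```

The object crosses the line exactly once (between y = 95 and y = 105), so the counter ends at 1 even though the object remains past the line in later frames.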
4.2 Analysis

Face recognition result. The faces in the dataset belong to known people who have permission to enter the area, while 'Unknown' faces belong to people who are not in the dataset and do not have permission to enter. The system correctly detects the 15 faces in the dataset and also recognizes faces not in the dataset as 'Unknown' (see Fig. 4). However, for people who are not in the dataset, the system sometimes matches faces that should be 'Unknown' to the person with the closest features in the dataset. In general, such features can include race, skin color, eye shape, mouth shape, etc.
Fig. 4 Face recognition algorithm result
Comparison of HOG Face Recognition and Haar Cascade Algorithms. Face recognition and Haar Cascade work similarly; the difference is that face_recognition mostly relies on the DLIB package, whereas Haar Cascade uses OpenCV. The experiment conducted in this study used the same 15 faces as in Chap. 3. The result is that the face recognition algorithm has higher accuracy than Haar Cascade. The performance of the two algorithms is:

• Face recognition: 15 images detected correctly and 'Unknown' faces detected perfectly
• Haar Cascade: 12 images detected correctly and unable to properly detect 'Unknown' faces.

The results in the red boxes in Fig. 5 are incorrect detections. Two faces were not detected because of glasses and hats, one image was detected as another person, and a face sample that should have been classified as 'Unknown' was detected as one of the people in the dataset. On the tested images, face recognition therefore has an accuracy of 100% and Haar Cascade of 75%. Haar Cascade handles 'Unknown' faces differently, by adding objects unrelated to faces to the dataset and labeling those images as 'Unknown'. As previously discussed, face recognition tends to detect faces that should be 'Unknown' as the person with the closest features in the dataset. However, this weakness does not affect the system's operation: even if the label changes, the algorithm still detects that the face is not in the dataset, so the system sends a message via Telegram. The Haar Cascade algorithm was tested with the same approach, but the results did not change: it still detects an 'Unknown' face as the person with the closest features in the dataset, so the system does not send a message to Telegram. Therefore, the system design is better served by the face recognition algorithm than by the Haar Cascade algorithm.

Fig. 5 Haar Cascade algorithm performance

Comparison of Resolutions Used to Resize the Video. The video input must be resized before the detection process in order to optimize detection. The greater the resolution used, the higher the detection accuracy, because of the increase in image quality. However, the greater the video resolution, the heavier the load on the system; this can decrease the frame rate or even cause the program to crash. The resolution also affects how quickly the system can detect an object: the greater the resolution, the more pixels of the video must be processed, so detection takes longer and the frame rate per second (fps) drops. The 480p resolution has the highest frame rate, 15 fps, and a lower fps makes the system load lighter; however, at 480p the system cannot detect objects with either face recognition or object tracking, so 480p cannot be used at all. Considering these results, the video is resized to 720p, the most optimal of the three tested resolutions: 720p has image quality, and therefore accuracy, not much different from 1080p, while its fps is about twice as high and much more stable than at 1080p.

Detectable Objects by the System. The face recognition and object tracking systems are trained on datasets containing only faces and the human body, so other objects are not detected. To verify this, the author placed a human and a cat close together in the same scene.
Figure 6 shows that the face recognition system only detects human faces and the object tracking system only detects humans; the cat is entirely undetected by the system. However, the system may still make detection errors: for example, very windy conditions may blow dust toward the camera and cause it to lose focus or blur.

Fig. 6 Face recognition only detects humans
5 Conclusion

The placement of the device needs to be considered to optimize detection, such as placing the device in a sterile area where no people are expected at predetermined hours, so that the object tracking system at night does not mistakenly flag people who are not intruders. The best algorithm is face_recognition, which reached 100% accuracy compared to 75% for Haar Cascade. In terms of streaming configuration, 720p gave outstanding results. The source code of this project can be accessed on GitHub [21]. Another consideration is the capability of the hardware used. The Jetson Nano 2 GB is relatively affordable; however, based on the test results, it has insufficient capability to run the system, with frequent lag, decreased fps, and system crashes. System performance can be further improved by using better modules, for example the NVIDIA Jetson TX1 or TX2, and with greater hardware capability the video could be resized to an even higher resolution.
References

1. Li S, da Xu L, Zhao S (2015) The internet of things: a survey. Inf Syst Front 17(2):243–259
2. Risteska Stojkoska BL, Trivodaliev KV (2017) A review of Internet of Things for smart home: challenges and solutions. J Clean Prod 140:1454–1464
3. Zhang T, Chowdhery A, (Victor) Bahl P, Jamieson K, Banerjee S (2015) The design and implementation of a wireless video surveillance system. In: Proceedings of the 21st annual international conference on mobile computing and networking, pp 426–438
4. Kortli Y, Jridi M, al Falou A, Atri M (2020) Face recognition systems: a survey. Sensors 20(2)
5. DLIB framework. http://dlib.net/. Last accessed 25 Apr 2022
6. Open source community repository. https://github.com/ageitgey/face_recognition. Last accessed 18 July 2022
7. Cuimei L, Zhiliang Q, Nan J, Jianhua W (2017) Human face detection algorithm via Haar cascade classifier combined with three additional classifiers. In: 13th IEEE international conference on electronic measurement and instruments (ICEMI), pp 483–487
8. Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition (CVPR), vol 1, pp I–I
9. Mita T, Kaneko T, Hori O (2005) Joint Haar-like features for face detection. In: Tenth IEEE international conference on computer vision (ICCV'05), vol 2, pp 1619–1626
10. Kadir K, Kamaruddin MK, Nasir H, Safie SI, Bakti ZAK (2014) A comparative study between LBP and Haar-like features for face detection using OpenCV. In: 4th international conference on engineering technology and technopreneuship (ICE2T)
11. Kattenborn T, Leitloff J, Schiefer F, Hinz S (2021) Review on convolutional neural networks (CNN) in vegetation remote sensing. ISPRS J Photogram Remote Sens 173:24–49
12. Data science community and knowledge portal. https://www.analyticsvidhya.com/blog/2021/05/convolutional-neural-networks-cnn. Last accessed 11 Aug 2022
13. Luo W, Xing J, Milan A, Zhang X, Liu W, Kim T-K (2021) Multiple object tracking: a literature review. Artif Intell 293:103448
14. Computer vision and deep learning web portal. https://pyimagesearch.com/2018/07/23/simple-object-tracking-with-opencv/. Last accessed 29 July 2022
15. Aydin I, Othman NA (2017) A new IoT combined face detection of people by using computer vision for security application. In: International artificial intelligence and data processing symposium (IDAP), pp 1–6
16. Grm K, Štruc V, Artiges A, Caron M, Ekenel H (2017) Strengths and weaknesses of deep learning models for face recognition against image degradations. IET Biom 7
17. Boyko N, Basystiuk O, Shakhovska N (2018) Performance evaluation and comparison of software for face recognition, based on Dlib and Opencv library. In: IEEE second international conference on data stream mining and processing (DSMP), pp 478–482
18. Nvidia. https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-2gb-devkit. Last accessed 15 June 2022
19. Putty headless setup. https://www.putty.org/. Last accessed 2 May 2022
20. Lin T-Y et al (2014) Microsoft COCO: common objects in context. CoRR abs/1405.0
21. Open source community repository. https://github.com/John777-loss/SecuritySys. Last accessed 5 Aug 2022
22. Wikipedia, Cristiano Ronaldo. https://en.wikipedia.org/wiki/Cristiano_Ronaldo. Last accessed 28 Jan 2023
Comparison of K-Nearest Neighbor and Support Vector Regression for Predicting Oil Palm Yield

Bens Pardamean, Teddy Suparyanto, Gokma Sahat Tua Sinaga, Gregorius Natanael Elwirehardja, Erick Firmansyah, Candra Ginting, Hangger Gahara Mawandha, and Dian Pratama Putra

Abstract With the high demand for oil palm production, implementing Machine Learning (ML) technologies to provide accurate predictions and recommendations for oil palm plantation management tasks has become beneficial, such as in predicting annual oil palm yield. However, different geographical and meteorological conditions may result in different scales of influence for each variable. In this research, K-Nearest Neighbors (KNN) and Support Vector Regression (SVR) were used to predict oil palm yield based on data collected in Riau, Indonesia. Pearson's correlation coefficient was calculated to select the input features for the models, whereas normalization and standardization were used to scale the data. By setting the minimum absolute correlation threshold to 0.1 and using standardization, both models obtained more than 0.81 R², with SVR achieving the overall best performance of 0.8709 R², 1.372 MAE, and 1.8025 RMSE without hyperparameter fine-tuning. It was also discovered that the oil palm yield in the previous year is the variable with the most influence in estimating the current year's yield, followed by the number of plants and soil types.

Keywords Oil palm · Yield prediction · Machine learning · KNN · SVR
B. Pardamean · T. Suparyanto · G. S. T. Sinaga · G. N. Elwirehardja (B) Bioinformatics and Data Science Research Center, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] B. Pardamean Computer Science Department, BINUS Graduate Program—Master of Computer Science Program, Bina Nusantara University, Jakarta 11480, Indonesia E. Firmansyah · C. Ginting · H. G. Mawandha · D. P. Putra Faculty of Agriculture, Institute of Agriculture STIPER, Yogyakarta 55282, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_3
1 Introduction

The high demand for and limited supply of oil palm fruit have urged many oil palm plantation companies to intensify technology implementation to boost productivity whilst preserving efficiency. The Roundtable on Sustainable Palm Oil (RSPO), one of the biggest oil palm stakeholder groups, indicated that optimizing productivity and efficiency in oil palm fruit production has been its priority since 2018 [1]. As the global leaders in Crude Palm Oil (CPO) production, Indonesia and Malaysia control approximately 86% of the global CPO supply [2]. This elevates CPO to a vital component of the food security of Indonesia and its consumer countries [3]. Furthermore, rapid demand growth has been fueled by the need for biofuel feedstock as well as other less harmful alternative raw materials [4]. The challenge for Indonesia and Malaysia is to boost national oil palm fruit production while avoiding deforestation. A thorough examination of several production metrics is required to identify the potential for improving the productivity of existing plantings and ongoing replanting programs [5]. The significant gap between an oil palm plantation's potential and actual production indicates that there is still scope for further improvement. However, relying simply on conventional human intuition to make modifications to a large plantation area is regarded as risky [6]. To tackle this problem, Artificial Intelligence (AI) technology offers an objective and rational perspective on complex production systems through data-driven analysis. Several AI implementations have been employed to generate valuable information for increasing oil palm productivity. One of them is classifying the ripeness level of oil palm fresh fruit bunches (FFB) [7, 8]. AI can also analyze the availability and loss of nutritional components using Android-based applications [9].
Other advantageous implementations include the identification of oil palm leaf deficits [10], a fertilization recommendation system [11], computer vision-based object counting [12], irrigation mapping systems [13–15], and weed detection [16, 17]. One of the most pressing problems is projecting oil palm plantation productivity using machine learning to provide comprehensive insight into the critically necessary materials and minimize future resource loss [18–20]. Numerous studies on oil palm yield prediction using machine learning have also been attempted using combinations of associated factors. These studies concluded that climatic data, such as rainfall indices, wind speed, and relative humidity, had a significant impact on oil palm yield projection [21–23]. However, many other factors determine the oil palm yield forecast, such as fertilizer content, planting density, crop age, soil type, and so on [24–26]. A comparison of two machine learning models that regress annual oil palm yield is reported in this research. Correlation analysis was first conducted to select variables that are highly correlated with the output and use them to train the models; the data include meteorological, soil, and plantation variables. Min–max normalization and z-score standardization were also performed on the data to further assist the models in learning. The findings of this comparative research will subsequently be linked to a feature in an oil palm plantation management recommendation system.
2 Related Works

Among the various ML algorithms, Bayesian Networks (BN) and Artificial Neural Networks (ANN) have been widely adopted for predicting oil palm yields, despite the limited number of studies in this field. Chapman et al. utilized such models to predict oil palm yield using meteorological data in the form of total rainfall up to two years prior to the year of harvest, fertilizer usage, soil conditions, tree age, and chemical contents in frond 17 of the oil palm trees. Both models obtained mean R² values of 0.6, indicating that they performed quite well, and the results implied that rainfall conditions have a significant influence on the yield [21]. Similar findings were reported in research where rainfall, temperature, relative humidity, light intensity, and wind speed data from the past 12 months were used to train ANNs, which achieved an MSE of 0.4707 in forecasting oil palm yield [23]. In one of the most recent studies, other ML models such as XGBoost and Random Forest (RF) obtained outstanding results, with R² values above 0.7, in predicting oil palm yield from satellite multispectral images [22]. ANN had also been used in another similar study together with the Non-linear Autoregressive Exogenous (NARX) model to predict oil palm yield using air pollution data as inputs in addition to the meteorological and plantation area data. Trained and tested on data collected from nine states in Malaysia, both the ANN and NARX models obtained R² values larger than 0.87 [27]. Such results imply that plantation area and air pollution data may also be highly influential, although the importance of each feature may differ according to the location of the oil palm plantation, as climate change plays a major part in both regional and global oil palm yields [26].
This means that the ML models have to be fine-tuned on the data of each plantation region according to its geographical and climatic characteristics.
3 Theoretical Statistics

3.1 Min–Max Normalization

Normalization is the process of converting a set of numerical values into a standard range, such as [0, 1] in the case of min–max normalization. It is mainly aimed at transforming features to a similar scale and minimizing the risk of some features dominating the contributions to the output. It is formulated as follows:

$X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}$  (1)
where $X_{norm}$ denotes the normalized value, $X$ is the original value, $X_{min}$ is the minimum value of the variable, and $X_{max}$ is its maximum value.
3.2 Z-score Standardization

The standardization of data is the transformation of features into their standard scores, also known as z-scores, by subtracting the mean of the data and dividing by the standard deviation:

$X_{stand} = \frac{X - X_{mean}}{\sigma}$  (2)
where $X_{mean}$ is the mean and $\sigma$ is the standard deviation of the variable. Similar to normalization, its main purpose is to ensure that all features contribute with equal probability when the ML model is trained, as features with larger ranges of values may be significantly more impactful on the model's predictions. This method is regarded as suitable under the assumption that the data follow a Gaussian distribution.
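Equations (1) and (2) correspond directly to the following NumPy operations (scikit-learn's MinMaxScaler and StandardScaler implement the same per-column transforms); this is a plain-NumPy sketch:

```python
import numpy as np

def min_max_normalize(x):
    # Eq. (1): map each feature column to the [0, 1] range
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def z_score_standardize(x):
    # Eq. (2): zero mean, unit standard deviation per feature column
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Two features on very different scales
x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
x_norm = min_max_normalize(x)    # columns now span [0, 1]
x_std = z_score_standardize(x)   # columns now have mean 0 and std 1
```

After either transform, no feature dominates purely because of its numeric range, which is the motivation given above.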
3.3 Model Evaluation Method

The regression models were evaluated by calculating the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and coefficient of determination (R²). The equations behind these metrics are presented in Eqs. (3)–(5), respectively:

$MAE = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|$  (3)

$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$  (4)

$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$  (5)

where $y_i$ and $\hat{y}_i$ denote the ground-truth and predicted values, $\bar{y}$ is the mean of the ground-truth values, and $n$ is the number of samples. In addition, the prediction latency of the models was also calculated to evaluate their computational complexity.
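Equations (3)–(5) translate directly to code. A plain-NumPy sketch is shown below; scikit-learn provides the equivalent mean_absolute_error, mean_squared_error, and r2_score functions:

```python
import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))            # Eq. (3)

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))    # Eq. (4)

def r2(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)            # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)       # total sum of squares
    return 1.0 - ss_res / ss_tot                 # Eq. (5)

# Toy ground-truth and predicted yields
y = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.5, 5.0, 7.5, 9.0])
```

MAE weights all errors linearly, RMSE penalizes large errors more strongly, and R² expresses the fraction of the output variance explained by the model.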
Table 1 Description of each feature in the dataset; the asterisk (*) denotes the output variable

| Attribute | Data type | Description |
|---|---|---|
| crop_age | Integer | Mean age of each oil palm tree in a planting area (year) |
| mineral_area | Integer | Planting area covered without peat land (hectare) |
| peat_area | Integer | Planting area covered with peat land (hectare) |
| crop_count | Integer | Total number of oil palm trees in a planting area (crop) |
| rainfall_n | Integer | Rainfall rate in the year m − n (mm) |
| rainyday_n | Integer | Total rainy days in the year m − n (day) |
| yield_n | Float | Crop yield in the year m − n (ton) |
| *yield | Float | Crop yield in the year m (ton) |
4 Dataset and Experiment Setup

4.1 Dataset Description

The dataset used was a compilation of oil palm yield records in 2021 from plantation areas with two different soil types in Riau province, Indonesia. The data was then integrated with the regional meteorological data from 2017 to 2021. The rainfall indices and annual total rainy days in the region were included as the independent variables. The dataset contains 2472 samples with 15 input variables and the total number of yields in the current year as the dependent output variable, with 2 samples containing null "yield_1" values, 1976 samples used as the training data, and 494 samples as the testing data. Table 1 presents the full descriptions of each variable in the dataset.
4.2 Experiment Setup

The prediction models, SVR and KNN, were constructed using the Scikit-Learn library of the Python programming language. The two models were selected because SVR is known to be an accurate algorithm that is less likely to overfit, whereas KNN is one of the lightest ML models with high accuracy [28, 29]. To minimize error, feature selection and data scaling were conducted beforehand. Feature selection was intended to filter out input variables that are less relevant to the output [30]. In this research, the selection was based on Pearson's correlation coefficient and p-values; the coefficient examines the relationship between two continuous variables, where higher absolute values indicate stronger correlations. It was selected because the independent and dependent variables are numerical and indicated to have linear correlation [31]. Only variables with absolute Pearson correlation scores > 0.1 and p-values < 0.05 were used as input variables for the ML models.
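The threshold-based feature selection can be sketched as follows. The correlation here is computed with np.corrcoef; the paper's accompanying p-values would come from a statistics routine such as scipy.stats.pearsonr, omitted here, and the toy data and column names are invented for illustration.

```python
import numpy as np

def select_features(X, y, names, threshold=0.1):
    """Keep columns whose absolute Pearson correlation with y exceeds threshold."""
    selected = []
    for j, name in enumerate(names):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) > threshold:
            selected.append((name, round(float(r), 3)))
    return selected

# Toy data: one feature drives the output, one is an uncorrelated pattern.
x1 = np.arange(100, dtype=float)       # stands in for something like "yield_1"
noise = np.tile([1.0, -1.0], 50)       # alternating feature, near-zero correlation
y = 2.0 * x1                           # output linearly tied to x1
X = np.column_stack([x1, noise])
picked = select_features(X, y, ["yield_1", "noise_feature"])
```

With the 0.1 threshold, only the correlated column survives; in the paper the same filter reduced the 15 candidate variables to eight.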
Feature scaling using normalization and standardization was performed on the selected variables. The SVR and KNN models were then trained on the scaled variables and compared with models trained without scaling, producing three models for each ML algorithm: trained on (1) unscaled data, (2) normalized data, and (3) standardized data. MAE, RMSE, and R² were used to evaluate the models' performance. It should be noted that all models used the default configurations provided by the Scikit-Learn library, without fine-tuning.
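For reference, Scikit-Learn's default KNeighborsRegressor uses k = 5 neighbours, uniform weights, and Euclidean distance, so its prediction rule reduces to averaging the targets of the k nearest training points. A plain-Python sketch of that rule (not the library implementation; the demo uses k = 3 on one-dimensional toy data):

```python
import math

def knn_predict(X_train, y_train, x, k=5):
    """Predict by averaging the targets of the k nearest training samples
    (Euclidean distance, uniform weights -- Scikit-Learn's defaults)."""
    distances = sorted(
        (math.dist(row, x), target) for row, target in zip(X_train, y_train)
    )
    nearest = distances[:k]
    return sum(target for _, target in nearest) / len(nearest)

# Toy one-feature regression set; the outlier at 10.0 is ignored for k = 3.
X_train = [[0.0], [1.0], [2.0], [3.0], [4.0], [10.0]]
y_train = [0.0, 1.0, 2.0, 3.0, 4.0, 10.0]
pred = knn_predict(X_train, y_train, [2.0], k=3)
```

This also makes the latency figures reported later plausible: KNN prediction is a distance scan over the training set, with no kernel evaluation as in SVR.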
5 Results and Discussions

5.1 Correlation Analysis for Feature Selection

Using "yield" as the dependent variable, correlation analysis was performed on the 15 independent variables. Table 2 presents Pearson's correlation coefficients and the p-values of these variables. It can be seen that "yield_1" and "crop_age" have the highest positive correlation to "yield" with p-values < 0.05, meaning that the positive correlations are statistically significant and the variables may greatly affect the output. Therefore, the eight variables with absolute Pearson correlation scores > 0.1 and p-values < 0.05 were selected as inputs for the ML models; they are marked by the asterisk (*) symbol in Table 2. These variables were then used in the modeling process with the KNN and SVR algorithms. The correlation coefficient threshold of 0.1 in this study is less rigorous than in other previous studies [32, 33]; as such, half of the available independent variables were used. It can still be inferred that meteorological factors have a significant impact on the overall oil palm yield, which aligns with the findings of previous studies [18, 21]. Additionally, the soil types also affect the yields: the area of mineral soils is positively correlated, while the area of peat soils is negatively correlated. Overall, the yield in the previous year had the most impact.
5.2 Experiment Results and Discussions

Earlier studies have shown that both standardization and normalization of data are highly beneficial in improving the accuracy of ML models [34, 35], especially when the variables have different ranges. Standardization may be more suitable when the data follow the Gaussian distribution. From the visualization in Fig. 1, the data closely follow the Gaussian distribution, albeit slightly left-skewed. However, deeper analysis is required to observe whether standardization provides the best results. Table 3 displays the evaluation results of the KNN and SVR models, which show that the standardized and normalized training data produce better results than the one
Table 2 Pearson's correlation coefficient and p-value of each input variable to the output

| Attribute | Pearson's correlation coefficient | P-value |
|---|---|---|
| yield_1* | 0.886 | 7.14 × 10⁻²⁹³ |
| crop_age* | 0.587 | 6.09 × 10⁻²²⁹ |
| mineral_area* | 0.402 | 1.90 × 10⁻⁹⁶ |
| crop_count* | 0.120 | 2.05 × 10⁻⁹ |
| rainyday_5* | 0.118 | 3.62 × 10⁻⁹ |
| rainyday_4 | 0.093 | 3.52 × 10⁻⁶ |
| rainfall_4 | 0.086 | 2.03 × 10⁻⁵ |
| rainfall_3 | 0.079 | 7.86 × 10⁻⁵ |
| rainfall_5 | 0.067 | 9.18 × 10⁻⁴ |
| rainfall_2 | −0.026 | 1.88 × 10⁻¹ |
| rainyday_2 | −0.034 | 9.51 × 10⁻² |
| rainyday_3 | −0.076 | 1.44 × 10⁻⁴ |
| rainfall_1* | −0.109 | 6.11 × 10⁻⁸ |
| rainyday_1* | −0.117 | 5.16 × 10⁻⁹ |
| peat_area* | −0.321 | 3.33 × 10⁻⁶⁰ |

The asterisk (*) symbol denotes variables selected as inputs for the ML models
Fig. 1 Visualization of the distribution of the yield data
without the feature scaling approach. On the KNN model, standardization allows the model to obtain an R² value of 0.8171, whereas normalization resulted in an R² of 0.7975. The SVR model obtained R² values of 0.8709 and 0.8617 on the standardized and normalized data, respectively. It can therefore be concluded that standardization is more suitable for this case, with an average R² of 0.844 over both models, which may be explained by the near-normal distribution of the labels. The prediction latency of KNN, however, is about seven times lower than that of SVR (±5.99 × 10⁻⁵ s versus ±4.46 × 10⁻⁴ s). When comparing the two models, SVR is clearly superior to KNN on both the normalized and standardized data in terms of errors. On the standardized data, SVR
Table 3 Evaluation results of the KNN and SVR models. KNN latency = ±5.99 × 10⁻⁵ s; SVR latency = ±4.46 × 10⁻⁴ s

| Data | KNN MAE | KNN RMSE | KNN R² | SVR MAE | SVR RMSE | SVR R² |
|---|---|---|---|---|---|---|
| Non-scaled | 3.5085 | 4.6681 | 0.1342 | 3.6668 | 4.8744 | 0.056 |
| Normalized | 1.7587 | 2.2576 | 0.7975 | 1.4563 | 1.8656 | 0.8617 |
| Standardized | 1.6891 | 2.1455 | 0.8171 | 1.3728 | 1.8025 | 0.8709 |
Fig. 2 Scatter plot of predicted yield against actual yield for (A) KNN and (B) SVR models trained on the standardized data
obtained 18.73% lower MAE and 15.99% lower RMSE than KNN. The lower RMSE also indicates that the differences between the ground-truth yields and the SVR predictions are generally smaller, which can also be seen in the scatterplots in Fig. 2. The errors of the SVR model are less dispersed and generally closer to the red regression line, which illustrates the perfect projection where the predicted value exactly matches the actual value. In addition to the differences in MAE, RMSE, and R² values, the scatterplots further show that SVR is superior to KNN. However, it should be noted that hyperparameter fine-tuning was not conducted in this study; further fine-tuning may yield better prediction results for both models in future work.
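The relative improvements quoted above follow directly from the standardized-data row of Table 3; a quick arithmetic check:

```python
# Standardized-data errors from Table 3
knn_mae, svr_mae = 1.6891, 1.3728
knn_rmse, svr_rmse = 2.1455, 1.8025

# Relative improvement of SVR over KNN, in percent
mae_improvement = (knn_mae - svr_mae) / knn_mae * 100     # about 18.73 %
rmse_improvement = (knn_rmse - svr_rmse) / knn_rmse * 100  # about 15.99 %
```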
6 Conclusion
In this research, KNN and SVR were used to predict oil palm yields from various inputs, including meteorological, soil, and plantation data. Through correlation analysis, variables with a high correlation to the output were selected; the total yield in the previous year was found to be the most influential variable, followed by the crop count and the area of each soil type. Using z-score standardization, both ML models achieved strong results, with SVR being the superior
Comparison of K-Nearest Neighbor and Support Vector Regression …
model, achieving an R2 value of 0.8709 with an MAE of 1.3728 and an RMSE of 1.8025. The low dispersion of the prediction points in the SVR regression plot indicates that it can provide generally accurate predictions. In future studies, hyperparameter fine-tuning can be performed to further improve the models' performance. In addition, deeper analyses of the independent variables' influence on the yield data could help oil palm planters extract insights for increasing annual oil palm yield.
Application of Convolution Neural Network for Plastics Waste Management Using TensorFlow Immanuela Puspasari Saputro, Junaidy Budi Sanger, Angelia Melani Adrian, and Gilberth Mokorisa
Abstract Nowadays, plastic waste has become a global problem. Plastic waste can be found on land and in oceans, rivers, and even soil sediments. This problem has motivated various countries to combat environmental pollution, especially plastic waste, primarily through recycling programs. Plastic waste that is still in good condition can be recycled into handicrafts or sold at a waste bank at the current selling price. However, waste management officers encounter difficulties classifying plastic waste, since it comes in many forms. Recently, a deep learning-based method, namely the convolutional neural network (CNN), has been widely used for image processing tasks. In this study, we developed a mobile application that employs a CNN to identify and classify plastic waste. The application recognizes and classifies recyclable plastic waste such as plastic bags, plastic bottles, and shampoo bottles. A total of 156 images were collected manually with a smartphone camera and from Google Images. All images were converted to JPG format at a size of 300 × 300 pixels. The dataset was divided into two parts: 106 images for training and 50 images for testing. In the experiments, the model obtained an accuracy of 86%. The application also provides features to view price lists and tutorials for making handicrafts from plastic waste.
Keywords Plastic waste · CNN · TensorFlow · Image recognition · Image classification
I. P. Saputro (B) Computer Science Department, BINUS Online Learning, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] J. B. Sanger · A. M. Adrian · G. Mokorisa Informatics Engineering, Faculty of Engineering, Universitas Katolik De La Salle Manado, Manado 95000, Indonesia e-mail: [email protected] A. M. Adrian e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_4
1 Introduction
Environmental pollution caused by plastic waste has become a global problem, and the use of single-use plastics worsens it further. Plastic waste can be found on land and in oceans, rivers, and even soil sediments. It is estimated that by 2040 plastic pollution will reach 710 million tons [1]. Several countries have conducted campaigns to reduce plastic usage through government agencies and non-governmental organizations, providing education on knowledge, attitudes, and behaviors towards plastic waste management. Communities in Japan are highly involved in encouraging people to care about plastic waste recycling through the "3R" program. Another country concerned with recycling programs is Taiwan: in 1997, the government created the "4-in-1 Recycling Program" involving communities, collectors, local governments, and local funders. There are 13 categories of recyclable waste with a total of 33 items, including metal, glass, plastic, and electronics. The United Kingdom also strongly supports waste management; WasteCare, a company that processes and recycles plastic waste, has helped more than 20 thousand organizations find solutions to the plastic waste problem [2]. In recent years, the Indonesian government and private sector have become increasingly concerned about the problem of plastic waste. The non-governmental organization Greenpeace Indonesia conducted the #PantangPlastik campaign, holding actions such as beach cleanups and using Instagram to persuade people to be environmentally friendly, for example by reducing the use of plastic straws and using tumblers instead of plastic bottles [3]. In Badung Regency, Bali, there is a facility that processes plastic waste into fuel oil, and the Surabaya City Government launched the Suroboyo Bus program, in which passengers exchange plastic waste for travel tickets [4].
In 2018, the Indonesian Ministry of Environment and Forestry, Danone-Aqua, Alfamart, Telkomsel, and Smash collaborated to build a model to recognize plastic bottle barcodes. Collected plastic bottles are recorded through the Smart Drop Box application and exchanged for points that can be used for online payments [5]. In coastal communities such as Manado, on the celebration of World Ocean Day, dozens of divers and environmental activists held a beach cleanup; on that day, hundreds of kilograms of plastic waste were cleaned up and sorted by type [6]. To reduce the human effort involved in separating plastic waste that can or cannot be recycled, this study aims to develop a mobile application that can recognize and classify plastic bottle waste using a convolutional neural network (CNN). Plastic bottle waste still in proper condition is categorized as recyclable waste. The application also provides tutorials on how to recycle plastic waste in ways that add selling value. This paper consists of five sections. The first section discusses the background. The second section reviews related work on the use of the CNN method in various studies. The research method is discussed in section three, the test results and discussion in section four, and conclusions and future work in section five.
2 Related Work
The convolutional neural network (CNN) and support vector machine (SVM) methods have been used successfully to detect plastic waste based on the material's texture. That research used PLAWO-40 (from Kaggle.com), which consists of 40 datasets, each containing 500 images of various types of plastic waste. The combination of CNN and SVM achieved 97% testing accuracy with pre-processing and 94% without it [7]. The research entitled "Application of Deep Learning Object Classifier to Improve e-Waste Collection Planning" successfully implemented a combination of CNN and R-CNN methods. The training process used 180 images, classified into plastic waste from washing machines, refrigerators, and computers, with 60 training images per category. To measure the model's performance, 30 images (10 per category) were used for testing. Images in both the training and testing data were 128 × 128 pixels. The CNN and R-CNN models use three layers, C1, C2, and C3. The C1 filter matrices are 7 × 7, 9 × 9, and 11 × 11; C2 uses filters of 7 × 7, 5 × 5, and 3 × 3; and C3 uses a 3 × 3 filter. After training and testing on the three layers, the average accuracy of the CNN method is 93.3% and that of R-CNN is 96.7%. Recycling the growing volume of metropolitan waste is one way to keep the environment clean and well maintained, but manually sorting waste for recycling usually takes a long time. This motivates research on identifying and classifying waste that can still be used. Real-time object identification commonly uses the YOLO algorithm, while the CNN method classifies objects into paper, plastic, aluminum, and other waste. That research used the freely accessible Trash Annotations in Context (TACO) dataset.
The accuracy of CNN in real-time detection and classification was 92.43% [8]. Traditional waste management systems are inefficient and costly, and people are often insufficiently aware of recycling; this has motivated research on IoT-based waste management using the LoRa communication protocol and a TensorFlow-based deep learning model. The waste management system is trained to recognize metal, plastic, and paper objects using SsdMobilenetV2, which works well on the Raspberry Pi 3 Model B+, while TensorFlow is used for classifying objects. That research built a prototype for separating waste and monitoring landfills, which is expected to reduce operational costs and improve waste management efficiency [9]. The CNN fine-tuning deep learning technique provided good training and testing performance in identifying cat and dog images taken from the CIFAR-10 dataset; the CNN image classification model was developed using TensorFlow, and after 10,000 iterations of testing, the model showed an accuracy above 79% [10]. Another study showed the success of a distributed parallel TensorFlow algorithm in identifying images from real-time video. The system was trained on 331 videos with an average duration of 30 min, with each video broken down into one frame per second. Across 12 test datasets, the system provided an average performance of 97% [11]. Research in 2021 successfully proposed a system that performs waste classification using the convolutional neural network method so that domestic waste processing is more efficient [12]. Based on this previous research using the CNN and TensorFlow methods, a plastic bottle waste management system is proposed in this study. The system is expected to identify and classify plastic waste and estimate the rupiah value of each type of plastic waste. In addition, the system has tutorials on recycling plastic bottle waste into decorations or other forms. TensorFlow is used for identification, and CNN for classification.
3 Research Methodology
3.1 TensorFlow
TensorFlow is an open-source library built around the Python programming language. It was developed by the Google Brain team and can be used to perform numerical computations using graphs [13]. TensorFlow's multidimensional data array is referred to as a tensor; tensors flow through a graph in which each node represents a mathematical operation [13, 14]. In this research, the model is deployed on Android, so TensorFlow version 1.13 and TensorFlow Lite are used to maximize the image detection process on Android smartphones.
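The dataflow idea described above — tensors flowing through a graph whose nodes are operations — can be illustrated with a small pure-Python sketch. This is a conceptual toy, not TensorFlow's actual API:

```python
class Node:
    """A graph node: an operation applied to the outputs of its input nodes."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Recursively evaluate the graph: data "flows" from leaves to root.
        return self.op(*(n.eval() for n in self.inputs))

def const(v):
    # A leaf node that simply emits a constant value.
    return Node(lambda: v)

# Build the graph for (a * b) + c, then run it.
a, b, c = const(3.0), const(4.0), const(5.0)
mul = Node(lambda x, y: x * y, a, b)
add = Node(lambda x, y: x + y, mul, c)
print(add.eval())  # 17.0
```

TensorFlow generalizes this picture: the values flowing along the edges are multidimensional arrays rather than scalars, and the graph can be optimized and executed on different devices.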
3.2 Convolution Neural Network
The convolutional neural network (CNN) is a type of artificial neural network usually used to process image data. The structure of neurons in a CNN resembles the structure of the visual cortex of mammals [15]. The CNN model consists of several convolution layers, pooling layers, and a fully connected layer before the classification step. The first step in a CNN is to break the image down into smaller regions and then recognize local patterns in the image [16, 17]. CNN is a good solution for the problem of plastic waste classification. This research categorized the images into five classes: plastic bottles, plastic cups, caps, compact discs, and plastic bags.
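The layer sequence used later in this paper (a 3 × 3 convolution, 2 × 2 pooling, then flattening for the fully connected layer) can be sketched in NumPy. The input and filter values below are made up for illustration and do not come from the paper's model:

```python
import numpy as np

def conv2d(img, kern):
    # "Valid" convolution (strictly, cross-correlation, as in most DL libraries).
    kh, kw = kern.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kern)
    return out

def maxpool2x2(x):
    # 2x2 max pooling with stride 2 halves each spatial dimension.
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:h*2, :w*2].reshape(h, 2, w, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
kern = np.ones((3, 3)) / 9.0                     # 3x3 averaging filter

feat = conv2d(img, kern)      # 4x4 feature map
pooled = maxpool2x2(feat)     # 2x2 after pooling
flat = pooled.ravel()         # fully connected layers take a 1-D vector
print(feat.shape, pooled.shape, flat.shape)  # (4, 4) (2, 2) (4,)
```

In a real CNN the filter weights are learned during training rather than fixed, and many filters are applied per layer to extract different features.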
Fig. 1 The proposed method of mobile application waste plastic management
3.3 Dataset
In several studies, plastic waste datasets were collected using a smartphone camera, as was done by Sarosa et al. [19] and, similarly, by Nugroho et al. [18]. This research collected images from Google Images or took them manually with a smartphone camera. The 156 images are divided into 106 images for training and 50 for testing.
3.4 Proposed Method
The main feature of the application is to group plastic waste into five categories, namely plastic bottles, plastic drink cups, plastic bottle caps, compact discs, and plastic bags. The proposed method is shown in Fig. 1. The steps include labeling the dataset and dividing the data into training and testing sets. The TensorFlow and CNN methods are used for identification and classification of plastic waste images.
4 Result and Discussion
4.1 Pre-processing Dataset
An example of a dataset image taken from Google Images can be seen in Fig. 2, and a sample taken with the smartphone camera can be seen in Fig. 3. All images are saved in ".jpg" format and resized to 300 × 300 pixels. The next step is data labeling, which uses the LabelImg module from Python to provide a label for each image in the dataset. Figure 4 shows the labeling process using the LabelImg module. After the labeling process is complete and has produced XML output files, the next step is to convert the XML files into a CSV file. This file is divided into two
Fig. 2 Sample of plastic cups from Google Image
Fig. 3 Sample of a plastic bottle taken from the smartphone camera
parts: a CSV file containing the training data information and a CSV file containing the testing data information.
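The XML-to-CSV step can be sketched with Python's standard library. The tag names below follow the Pascal VOC format that LabelImg writes, and the tiny inline annotation is invented for illustration:

```python
import csv
import io
import xml.etree.ElementTree as ET

# A minimal LabelImg-style (Pascal VOC) annotation, invented for this sketch.
XML = """<annotation>
  <filename>bottle_001.jpg</filename>
  <size><width>300</width><height>300</height></size>
  <object>
    <name>plastic_bottle</name>
    <bndbox><xmin>40</xmin><ymin>25</ymin><xmax>210</xmax><ymax>280</ymax></bndbox>
  </object>
</annotation>"""

def xml_to_rows(xml_text):
    """Yield one CSV row per labeled object in the annotation."""
    root = ET.fromstring(xml_text)
    fname = root.findtext("filename")
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        yield [fname, obj.findtext("name"),
               box.findtext("xmin"), box.findtext("ymin"),
               box.findtext("xmax"), box.findtext("ymax")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["filename", "class", "xmin", "ymin", "xmax", "ymax"])
for row in xml_to_rows(XML):
    writer.writerow(row)
print(buf.getvalue())
```

In practice one such script would walk the training and testing annotation folders separately, producing the two CSV files mentioned above.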
4.2 Training CNN
Before the training process is conducted, the CNN model must first be created. The first layer is the convolutional layer, which extracts features from an image; the filter matrix used is 3 × 3.
Fig. 4 Image labeling process
The output of the convolutional layer is then reduced in size using a 2 × 2 pooling filter. Finally, in the fully connected layer, the multidimensional array is converted into a one-dimensional array. After the model is formed, training is carried out to obtain a pattern that can identify and classify objects with a high level of accuracy. The training process on the CNN model can be seen in Fig. 5. Like the labeling process, the training process generates two files, in the ".pb" and ".pbtxt" formats. These files can only run on devices that use the Windows operating system, so they need to be converted to run on an Android smartphone. The conversion is done using Bazel, which TensorFlow already provides, and the result is a file in the ".tflite" format.
Fig. 5 The training process of CNN model
4.3 Experiment
To measure the performance of the model and application, tests were conducted on an Android smartphone running Android 9.0 (Pie). Several testing schemes were carried out. The first test used two different lighting conditions: under poor lighting, the application was not able to detect the mineral water cup object, while under good lighting, objects were detected correctly. Figure 6 shows the test results affected by lighting. The second test was detection based on distance: at less than one meter, the object was detected properly, while at more than one meter, it was not. Detection based on distance is shown in Fig. 7.
Fig. 6 Object detection based on lighting
Fig. 7 Object detection based on distance
The final test examined the application's performance in classifying several objects. The results show that the model can identify and classify objects accurately; these detections can be seen in Fig. 8. To determine the accuracy of the mobile application in identifying and classifying plastic waste, the evaluation is calculated based on Eq. 1 [20]:

Accuracy = (true / number of data) × 100%   (1)

where "true" is the number of correctly detected objects.
Based on the experimental results with seven datasets, we found that, on average, 6.6 objects per dataset were poorly detected and 28.6 objects were well detected. The first dataset, with a total of 20 objects, has the lowest accuracy of 70%, while the largest dataset, with 50 objects, reaches 86%. The results show consistent accuracy above 80% when the dataset contains ≥ 35 objects. The test results can be seen in Fig. 9 and Table 1.
Fig. 8 Object classification
Fig. 9 Experiment of first dataset
Table 1 Object detection and classification accuracy

Number of objects detected   Well detected objects   Poorly detected objects   Object detection accuracy (%)
20                           14                      7                         70
25                           19                      6                         76
30                           22                      8                         73.3
35                           30                      5                         85.7
40                           33                      7                         82.5
45                           39                      6                         86.7
50                           43                      7                         86
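As a quick check, applying Eq. 1 to the counts in Table 1 reproduces the reported accuracies. This is a short illustrative script, not part of the original study:

```python
# (total objects, well-detected objects) pairs from Table 1.
results = [(20, 14), (25, 19), (30, 22), (35, 30),
           (40, 33), (45, 39), (50, 43)]

def accuracy(true_count, n):
    # Eq. 1: Accuracy = (true / number of data) * 100%
    return round(true_count / n * 100, 1)

for n, ok in results:
    print(n, accuracy(ok, n))
```

Note that the well- and poorly-detected counts in Table 1 do not always sum to the dataset size; the accuracy figures follow the well-detected counts.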
In addition to identifying and classifying plastic waste, there is a feature for obtaining the selling price of plastic waste. The price list used is based on information available at the time the research was conducted. The price list feature and an example of its calculation can be seen in Fig. 10. Another feature is a tutorial for recycling plastic waste; Fig. 11 shows the recycling of various types of plastic waste along with the tutorials.
Fig. 10 Price list and total price
Fig. 11 Handicraft and its tutorial
5 Conclusion and Future Work
Based on the test results of the Android-based mobile waste management application, the following conclusions can be drawn:
1. The model successfully identified and classified plastic waste into five classes.
2. Objects are detected well in a non-inverted position, with good lighting, and at less than one meter.
3. The model has an accuracy of more than 80%.
4. Price list information and tutorials can help people recycle plastic waste that has selling value.
For further development, the application can be extended to identify objects in poor lighting and at distances beyond one meter.
References
1. Lau WW, Shiran Y, Bailey MR, Velis CA (2020) Evaluating scenarios toward zero plastic pollution. Science 369:1455–1461
2. Chow C-F, So W-MW, Cheung T-Y, Yeung DS-K (2017) Plastic waste problem and education for plastic waste management. In: Emerging practice in scholarship of learning and technology in a digital era. Springer Nature Singapore Pvt Ltd, Singapore, pp 124–140
3. Krisyantia VOS I, Priliantini A (2020) Influence of #PantangPlastik campaign on environmental friendly attitudes (survey on Instagram followers @GreenpeaceID). Komunika 9(1):40–51
4. Fauzi A (2018) Indonesia Darurat Sampah Plastik. [Online]. Available: https://indonesiabaik.id/. Accessed 25 July 2022
5. Cipto W (2018) Merah Putih. [Online]. Available: https://merahputih.com/. Accessed 25 July 2022
6. Karouw D (2021) iNewsSulut. [Online]. Available: https://sulut.inews.id/. Accessed 13 Aug 2022
7. Zaar AE, Aoulalay A, Benaya N, Mhouti AE, Massar M, Allati AE (2022) A deep learning approach to manage and reduce plastic waste in the oceans. In: International conference on energy and green computing, Morocco
8. Nowakowski P, Pamula T (2020) Application of deep learning object classifier to improve e-waste collection planning. Waste Manage 10(9):1–9
9. Sheng TJ, Islam MS, Misran N, Baharuddin MH, Arshad H, Islam MR, Chowdhury ME, Rmili H, Islam MT (2020) An Internet of Things based smart waste management system using LoRa and Tensorflow deep learning model. IEEE Access 8:793–811
10. Ziouzios D, Baras N, Balafas V, Dasygenis M, Stimoniaris A (2022) Intelligent and real-time detection and classification algorithm for recycled materials using convolutional neural networks. Recycling 7(9):1–14
11. Gunjan VK, Pathak R, Singh O (2019) Understanding image classification using TensorFlow deep learning convolution neural network. Int J Hyperconnectivity Internet of Things 3(2):19–37
12. Zhang Q, Yang Q, Zhang X, Bao Q, Su J, Liu X (2021) Waste image classification based on transfer learning and convolutional neural network. Waste Manage 135:150–157
13. Xu W (2021) Efficient distributed image recognition algorithm of deep learning framework TensorFlow. In: 2nd international conference on innovative technologies in mechanical engineering, India
14. Zafar I, Tzanidou G, Burton R, Patel N, Araujo L (2018) Hands-on convolution neural networks with TensorFlow. Packt Publishing, Mumbai
15. Pang BP, Nijkamp E, Wu YN (2019) Deep learning with TensorFlow: a review. J Educ Behav Stat XX(X):1–12
16. Yu L, Li B, Jiao B (2019) Research and implementation of CNN based on TensorFlow. In: Application of materials science and energy materials. International symposium, Shanghai
17. Bobulski J, Kubanek M (2021) Deep learning for plastic waste classification system. Appl Comput Intell Soft Comput 2021:1–7
18. Pratama IN, Rohana T, Al Mudzakir T (2020) Introduction to plastic waste with model convolutional neural network. In: Conference on innovation and application of science and technology, Malang
19. Sarosa M, Muna N, Rohadi E (2020) Performance of faster R-CNN to detect plastic waste. Int J Adv Trends Comput Sci Eng 9(5):7756–7762
20. Faisal M, Chaudhury S, Sankaran KS, Raghavendra S, Chitra RJ, Eswaran M, Boddu R (2022) Faster R-CNN algorithm for detection of plastic garbage in the ocean: a case for turtle preservation. Math Probl Eng 2022:1–11
Study on Optimal Machine Learning Approaches for Kidney Failure Detection System Based on Ammonia Level in the Mouth Nicholas Phandinata, Muhammad Nurul Puji, Winda Astuti, and Yuli Astuti Andriatin
Abstract Patients with kidney failure can emit bad breath with a certain level of ammonia content. In this work, the interpretation of ammonia levels in patients with renal failure was studied. This research used the Chronic Kidney Disease dataset from the UCI Machine Learning Repository. The dataset was first processed to obtain the eGFR (estimated glomerular filtration rate) value and the ammonia level in ppb (parts per billion). Based on the eGFR value, the severity of kidney failure was divided into five categories, namely normal (stage 1), mild (stage 2), moderate (stage 3), severe (stage 4), and failure (stage 5). The eGFR features are used as input to the machine learning techniques in order to predict the level of kidney failure. Four machine learning techniques, namely support vector machine (SVM), Naïve Bayes (NB), artificial neural network (ANN), and k-nearest neighbors (KNN), are applied and compared. The last process was to identify kidney failure using the k-nearest neighbors (KNN) method based on the eGFR and ammonia (ppb) dataset. The AI-based kidney failure severity identification system with the KNN algorithm achieved average accuracies of 89.9% and 95.65% for training and testing, respectively.
Keywords Kidney failure · eGFR (estimated glomerular filtration rate) · Support vector machine (SVM) · Naïve Bayes (NB)
1 Introduction In the world, there are at least 850 million people (double the number of people with diabetes) who have chronic kidney disease (CKD), acute kidney injury (AKI), and N. Phandinata · M. N. Puji · W. Astuti (B) Automotive and Robotics Program, Computer Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] Y. A. Andriatin Nursery Department, Cilacap Regional General Hospital, Gatot Subroto No.28, Cilacap, Central Jawa 53223, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_5
N. Phandinata et al.
renal replacement therapy (RRT) [1]. Blood tests using the blood urea nitrogen (BUN) and creatinine (CR) indexes are usually performed on a patient with kidney failure to determine how severe the disease is [2]. In addition to blood tests, methods have also been developed to determine the severity of kidney failure using artificial-intelligence-based approaches, among them the HMANN (Heterogeneous Modified Artificial Neural Network) method and the DLA (Deep Learning Algorithm) method. In the HMANN method, the test material is ultrasound images of the kidneys, with an accuracy of 97.50%; in the DLA method, the test material is retinal images, with 90% accuracy [2]. In this paper, the ammonia content (ppm) from the mouth of the patient is converted to Blood Urea Nitrogen (BUN) and Creatinine (CR). From the BUN and CR data, the Estimated Glomerular Filtration Rate (eGFR) is calculated. These data are then used as input to machine learning methods for classification and identification. Support Vector Machine (SVM), Naïve Bayes (NB), Artificial Neural Network (ANN) and K-Nearest Neighbors (KNN) are applied and compared, and the K-Nearest Neighbors (KNN) method is introduced to detect the severity of kidney failure. The test material used is the ammonia content of the patient's bad breath. The effectiveness of the proposed system was evaluated experimentally based on the KNN method, and the results show that the proposed technique produces a good level of accuracy.
2 Proposed Kidney Failure Detection System

The proposed kidney failure detection system was first discussed in our previous paper. The proposed method estimates the level of severity of kidney failure, and the proposed algorithm is depicted in Fig. 1. The overall system consists of two important stages: data processing, and pattern classification or identification. The pre-processing stage involves reading the data from the ammonia sensor (ppm), converting the data to Blood Urea Nitrogen (BUN) and Creatinine (CR) in order to obtain the Estimated Glomerular Filtration Rate (eGFR), and detecting the level of kidney failure. Pre-processing is conducted to select the data that contain important information about the kidney to be processed by the rest of the system. At this stage, the raw data is modified to obtain the eGFR value, which improves the performance of the prediction system. The next stage is feature extraction. In this process, the ammonia data is transformed into a feature vector that can be used as input to the classifier. This feature vector is fed into the pattern classification stage in order to predict the patient's level of kidney failure. There are two important phases, namely the training phase and the testing phase. The training phase is the process of developing the model for the data that will be used
Fig. 1 Flowchart of the proposed kidney failure prediction system
as the reference for the prediction system. Meanwhile, the testing phase is a process of evaluating the performance of the proposed system.
2.1 Data Processing

The processing at this stage involves reading the ammonia (ppm) data and converting it into the components of the Estimated Glomerular Filtration Rate (eGFR), namely BUN and CR, which are used as input to calculate the failure level of the patient's kidney. This stage determines the value of eGFR that will be used as input to the feature extraction stage, as shown in Table 1. In the pre-processing stage, the ammonia data is converted to BUN using the graph shown in Fig. 2; the correlation of breath ammonia with creatinine (CR) is shown in Fig. 3. The resulting values are then used to calculate the eGFR with the formula in Eq. (1).

Table 1 Kidney failure severity categories based on eGFR values

Severity category      eGFR value
Normal (stage 1)       ≥ 90
Mild (stage 2)         60–89
Moderate (stage 3)     30–59
Severe (stage 4)       15–29
Failure (stage 5)      < 15
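The severity categories above amount to simple thresholding of the eGFR value. A minimal Python sketch of this mapping (the function name and the exact return strings are illustrative, not from the paper):

```python
def kidney_stage(egfr: float) -> str:
    """Map an eGFR value (mL/min/1.73 m^2) to a kidney-failure severity
    category, following the thresholds of Table 1."""
    if egfr >= 90:
        return "normal (stage 1)"
    if egfr >= 60:
        return "mild (stage 2)"
    if egfr >= 30:
        return "moderate (stage 3)"
    if egfr >= 15:
        return "severe (stage 4)"
    return "failure (stage 5)"
```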
eGFR (mL/min/1.73 m²) = 186 × [Serum Creatinine (mg/dL)]^(−1.154) × Age^(−0.203) × 0.742 (if female) × 1.212 (if black)   (1)

Fig. 2 Graph of the correlation of ammonia breath with blood urea belonging to patients with kidney failure [3]

Fig. 3 Graph of the correlation of ammonia breath with creatinine belonging to patients with kidney failure [3]
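Equation (1) can be computed directly. A short Python sketch of the calculation (function and parameter names are my own, not from the paper):

```python
def egfr_mdrd(serum_creatinine_mg_dl: float, age: float,
              female: bool = False, black: bool = False) -> float:
    """Estimated glomerular filtration rate (mL/min/1.73 m^2) per Eq. (1)."""
    egfr = 186.0 * serum_creatinine_mg_dl ** (-1.154) * age ** (-0.203)
    if female:
        egfr *= 0.742  # female correction factor from Eq. (1)
    if black:
        egfr *= 1.212  # race correction factor from Eq. (1)
    return egfr
```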
2.2 Pattern Classification and Model Development

The pattern classification stage is used to develop the model that serves as the reference data in kidney failure identification. In this work, four different pattern classification techniques, namely SVM, ANN, Naïve Bayes and KNN, are applied in order to develop the initial model for kidney failure severity.
2.2.1 Artificial Neural Network

The ANN consists of small functional units called neurons, which are interconnected to produce the global transfer function of the ANN. There are many forms of neural networks, such as the multilayer feedforward network (MFN), radial basis function (RBF) network, and learning vector quantizer (LVQ) [4].
2.2.2 Naïve Bayesian

Naïve Bayes is a simple probabilistic classifier based on applying Bayes' theorem (or Bayes' rule) with strong (naive) independence assumptions [5].
2.2.3 K-Nearest Neighbors

KNN assumes that the data lies in a feature space; that is, the data points are in a metric space and therefore have a notion of distance [6]. Each item of training data consists of a feature vector and a class label associated with that vector. In the simplest case, the label is either + for the positive class or − for the negative class [7].
2.2.4 Support Vector Machine (SVM) Classification

The Support Vector Machine (SVM) separates data into two classes, as shown in Fig. 4. Margin lines separate the data into the two classes, with the hyperplane lying between the margins; the data points that lie on the margins are called support vectors (SVs).
Fig. 4 SVM with linearly separable data [8]
3 Experimental Result and Discussion 3.1 Data Processing In this work, the data used are based on the Chronic Kidney Disease dataset, obtained from kidney failure patient data collected over a two-month period in one hospital in India [9]. The dataset was processed using the Python programming language with the Pandas library and has 26 columns of data, as shown in Fig. 5. Since the goal is to compute the eGFR, an estimate of the glomerular filtration rate, for each row of patient data so that the severity of kidney failure can be determined, and also the ppb of ammonia excreted from the mouth based on blood urea levels, the only columns needed are age, blood urea "bu", creatinine in blood "sc", and kidney
Fig. 5 Chronic kidney disease dataset
Fig. 6 Dataset with the important data to develop the eGFT
failure classification or not ("classification"). A new dataset was therefore created from the initial dataset containing only these columns, as shown in Fig. 6. The data selected from the dataset are those that support the eGFR computation in Eq. (1). As can be seen from that equation, finding the eGFR requires the amount of creatinine in the blood, the age, the gender, and the race of the person; however, the dataset does not contain information on gender and race. Since the data were taken from a hospital in India, all patients are Indian and none are black, so the eGFR formula can be simplified to Eq. (2); the resulting eGFR values can be seen in Fig. 7:

eGFR (mL/min/1.73 m²) = 186 × [Serum Creatinine (mg/dL)]^(−1.154) × Age^(−0.203)   (2)
The next step is finding the ppb (parts per billion) of ammonia for each row in the dataset. Reference [3] reports the correlation of ammonia levels excreted from the mouth (breath ammonia) with urea and creatinine levels in the blood of patients with renal failure. The results are presented as graphs, shown in Figs. 8 and 9, on which the points of intersection have already been determined. Based on these graphs, the higher the breath ammonia, the higher the BUN (urea) and creatinine levels in the blood. Therefore, a linear equation can be derived from each graph and used as a formula for converting urea and creatinine into breath ammonia:

y(urea/creatinine) = m · x(ppb) + b
Fig. 7 Selected dataset applied for eGFR formula
Fig. 8 Graph of the correlation of ammonia breath with blood urea based on research [3]
m = (y2 − y1)/(x2 − x1)

For blood urea (Fig. 8), two points on the line are (x1, y1) = (400 ppb, 39) and (x2, y2) = (600 ppb, 59):

m = (59 − 39)/(600 − 400) = 0.1
y = 0.1 · x + b
59 = 0.1 × 600 + b, so b = −1
y(urea) = 0.1 · x(ppb) − 1, hence x(ppb) = (y(urea) + 1)/0.1

Fig. 9 Graph of the correlation of ammonia breath with blood creatinine based on research [3]

For creatinine (Fig. 9), the process of finding the linear equation is summarized as follows, with (x1, y1) = (500 ppb, 8) and (x2, y2) = (700 ppb, 11):

m = (11 − 8)/(700 − 500) = 0.015
y = 0.015 · x + b
11 = 0.015 × 700 + b, so b = 0.5
y(creatinine) = 0.015 · x(ppb) + 0.5, hence x(ppb) = (y(creatinine) − 0.5)/0.015

From the two linear equations obtained, the ppb of ammonia in the dataset can be found by first creating a new column whose data is filled in using the BUN column and the creatinine column, each entered
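The two linear fits and their inverses can be written out directly in code. A small sketch using the slopes and intercepts derived above (all helper names are mine):

```python
def bun_from_ppb(ppb: float) -> float:
    # y(urea) = 0.1 * x(ppb) - 1
    return 0.1 * ppb - 1.0

def ppb_from_bun(bun: float) -> float:
    # inverse: x(ppb) = (y(urea) + 1) / 0.1
    return (bun + 1.0) / 0.1

def creatinine_from_ppb(ppb: float) -> float:
    # y(creatinine) = 0.015 * x(ppb) + 0.5
    return 0.015 * ppb + 0.5

def ppb_from_creatinine(cr: float) -> float:
    # inverse: x(ppb) = (y(creatinine) - 0.5) / 0.015
    return (cr - 0.5) / 0.015

def ppb_from_bun_and_creatinine(bun: float, cr: float) -> float:
    """Average the two estimates, as done for the dataset's ammonia column."""
    return (ppb_from_bun(bun) + ppb_from_creatinine(cr)) / 2.0
```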
Fig. 10 The process of using the BUN and creatinine linear equations to find the ppb of ammonia
into each equation, after which the average value is taken. Figure 10 shows the process and its results. As shown in Fig. 11, although there are already two new columns, namely the eGFR and ammonia ppb columns, there is no column yet for the severity category of kidney failure. As discussed, there are five categories of kidney failure severity, namely normal (stage 1) with an eGFR of 90 and above, mild (stage 2) with an eGFR of 60–89, moderate (stage 3) with an eGFR of 30–59, severe (stage 4) with an eGFR between 15 and 29, and failure (stage 5) with an eGFR below 15. Based on these categories, a new column was created using data from the eGFR column, as shown in Fig. 11.
3.2 Data Classification and Identification Method As can be seen from Table 2, the algorithm with the highest accuracy on the testing dataset is KNN, with an accuracy of 95.65% (roughly 96%), followed by the logistic regression algorithm and Naïve Bayes, both at about 91%. On the training dataset, the ANN algorithm has the highest accuracy at around 92%, followed by the SVM algorithm at around 91%. When the entire dataset is considered, the KNN algorithm again leads, with the highest average accuracy of about 92%. The ammonia data (ppb) obtained from the sensor are converted into Blood Urea Nitrogen (BUN) and Creatinine (CR) in order to apply the eGFR formula and
Fig. 11 Dataset with column for the category of severity of kidney failure
Table 2 The accuracy of the testing and training phases for kidney failure detection

Type of machine learning           Training (%)   Testing (%)
Support vector machine (SVM)       91.19          89.86
Naïve Bayesian (NB)                88.68          91.30
Artificial neural network (ANN)    91.82          89.86
K-nearest neighbors (KNN)          89.94          95.65
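The four classifiers compared in Table 2 can be evaluated with scikit-learn. The sketch below runs on synthetic eGFR/ppb data, since the paper's processed dataset and hyperparameters are not given; the data generation and model settings are therefore assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the eGFR/ppb features (assumption, not the paper's data)
rng = np.random.default_rng(42)
egfr = rng.uniform(5.0, 120.0, size=400)
ppb = 600.0 - 4.0 * egfr + rng.normal(0.0, 30.0, size=400)  # loosely anti-correlated
y = np.digitize(egfr, [15, 30, 60, 90])   # severity stages 0..4 (Table 1 cut-offs)
X = StandardScaler().fit_transform(np.column_stack([egfr, ppb]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "SVM": SVC(),
    "NB": GaussianNB(),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = (model.score(X_tr, y_tr), model.score(X_te, y_te))
for name, (tr, te) in scores.items():
    print(f"{name}: train={tr:.3f}, test={te:.3f}")
```

On the paper's real data the ranking in Table 2 would of course depend on the actual feature distribution and tuning.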
get Estimated Glomerular Filtration (eCFR) value with will determine the level of severity kidney failure. This can be show from the result, than KNN can be apply as machine learning identification of the kidney failure detection, since the training and testing result are both good, with 89.9% and 95.65%, respectfully. The results of our proposed method are relatively better than those from the other classification and identification method applied in this work.
4 Conclusion An automatic kidney failure identification system has been proposed and developed in this paper. The technique takes the ammonia data obtained from the sensor and converts them to BUN and CR, which become the components used to measure the level of kidney failure. The calculated eGFR became the
input data for the machine learning classification and identification of kidney failure. The KNN-based kidney failure classification and identification system achieved good accuracy, with training and testing accuracies of 89.94% and 95.65%, respectively. It was also clear from our computer simulation results that the proposed method gives good kidney failure identification. To further increase the accuracy, the amount of real ammonia data from kidney patients used to train the system can be increased.
References

1. Jager KJ, Kovesdy C, Langham R, Rosenberg M, Jha V, Zoccali C (2019) A single number for advocacy and communication—worldwide more than 850 million individuals have kidney diseases. Kidney Int 96:1048–1050
2. Ma F, Sun T, Liu L, Jing H (2020) Detection and diagnosis of chronic kidney disease using deep learning-based heterogeneous modified artificial neural network. Futur Gener Comput Syst 111:17–26
3. Narasimhan ALR, Goodman W, Patel CKN (2016) Correlation of breath ammonia with blood urea nitrogen and creatinine during hemodialysis. 98:4617–4621
4. Palaniappan P, Raveendran P, Nishida S, Saiwaki N (2000) Autoregressive spectral analysis and model order selection criteria for EEG signals. In: TENCON proceedings. Intelligent systems and technologies for the new millennium (Cat. No. 00CH37119), vol 2, pp 126–129
5. Nandar A (2009) Bayesian network probability model for weather prediction. In: International conference on the current trends in information technology (CTIT), pp 1–5
6. Duda RO, Hart PE, Stork DG (2001) Pattern classification. Wiley
7. Kuncheva LI, Hoare ZSJ (2008) Error-dependency relationships for the naïve Bayes classifier with binary features. IEEE Trans Pattern Anal Mach Intell 30:735–740
8. Thiruvengatanadhan R (2018) Speech recognition using SVM, pp 918–921
9. Kaggle. Chronic Kidney Disease dataset | Kaggle. https://www.kaggle.com/mansoordaku/ckdisease. Last accessed 29 Apr 2021
Understanding the Influential Factors on Multi-device Usage in Higher Education During Covid-19 Outbreak Robertus Tang Herman, Yoseph Benny Kusuma, Yudhistya Ayu Kusumawati, Darjat Sudrajat, and Satria Fadil Persada
Abstract Multi-device usage is activity conducted involving many types of gadgets. The active generations, such as generation X, the millennial generation, and generation Z, are quite adapted to using multi-devices, and in early 2020 the Covid-19 pandemic outbreak accelerated this situation. However, it is still important to understand what components/dimensions affect the usage, especially for learning in higher education. The present research explores the dimensions that affect multi-device usage. Principal component analysis is used as the analysis tool, and 150 higher education students participated as respondents; the data was collected by self-administered questionnaires. The result shows the reduction of the variables entertainment, separate work and personal life (SWPL), productivity, interaction, emergency, and virtual mobility into two dimensions, identified as work-life balance and fear of missing out. Further discussion and recommendations are presented. The insight can be used by stakeholders such as academicians, researchers, and policymakers to better understand the influential dimensions of multi-device usage. Keywords Multi-device · Usage · Principal component analysis · Covid-19 · Students
R. T. Herman · Y. B. Kusuma · Y. A. Kusumawati · D. Sudrajat · S. F. Persada (B) Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] R. T. Herman e-mail: [email protected] Y. B. Kusuma e-mail: [email protected] Y. A. Kusumawati e-mail: [email protected] D. Sudrajat e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_6
R. T. Herman et al.
1 Introduction In late 2019, the novel coronavirus (Covid-19) pandemic brought about a huge change in the social life of the global community. Among the most widely adopted policies are physical distancing and social distancing, whose main goal is to break the chain of human-to-human Covid-19 transmission. Almost all industries are affected by these social restrictions, including the education industry. As an impact of the Covid-19 pandemic, the learning-from-home (LFH) policy emerged. With the LFH policy, the physical activities originally carried out at school moved to students' respective homes or places of residence. The face-to-face learning process was replaced with online learning utilizing various media and online-based learning applications. Previous research reported that the impact of LFH made a school or campus lose its essence as a place for learning and self-development [1]. In this condition, collaboration between parents and students is needed: the role of parents is to provide comfort and guidance to their kids during online learning, and both parents and students in the LFH setting can build a better bond. During LFH, interactions between students and teachers are carried out virtually using communication technology devices and relying on the internet network. Research emphasizes that the main key in online learning is full support for students who study online, driven by the advance of mobile devices and internet technology [2]. Among the most widely used technology applications are those from global providers (e.g., Zoom Meetings, Google Hangouts Meet, Microsoft Teams, FaceTime, Cisco WebEx, Skype, YouTube, WhatsApp). Thus, the existence of multi-devices is necessary. The use of multi-devices becomes a transformation tool triggered by several factors. First, it has become common to change the pattern of face-to-face interactions into virtual interactions.
In the context of learning, there is a shift in which traditional learning cannot be applied. Second, virtual mobility is increasing due to limited physical mobility. Third, entertainment, as part of personal needs, is commonly obtained through the internet and mobile applications. Fourth, the task of remaining productive in completing personal responsibilities is boosted online. Fifth, cloud technology eases the professional organization of work and personal interests. Sixth, it is easier to anticipate emergency conditions caused by technical problems, such as disruption of internet network access or of cellular communication devices. From observations of 25 students at several universities in East Java, it can be concluded that most students decide to use more than one device to support the learning process. They believe that using more than one device can increase productivity and provide much convenience and comfort in interacting and collaborating. On the other hand, students have personal reasons and considerations in using multi-devices; each student has a different orientation and perception of multi-device use. The focus of this research is to map the factors that form the orientation of higher education students in using multi-devices at
the same time. The results of the mapping will provide an overview of the orientation and characteristics of students to increase the added value of using multi-devices.
2 Literature Review The trend of using multi-devices has become part of people's lives in the digital era, and users are accustomed to using multiple devices at the same time. The use of multi-devices in everyday life is a portal to enter shared online spaces [3], and the use of multiple devices to work towards the same goal is called cross-device interaction. The use of digital media has increased in the online learning process during the pandemic: the use of e-learning and of digital media for teaching and learning has grown rapidly in just a few years, exceeding the traditional learning model. The development and use of new technologies that are increasingly popular among the young generation, such as the internet and social network sites, have changed our interaction patterns with others. Many social interactions occur on social media networks, and the use of Social Networks (SNs) can even improve the learning process: social media is used not only for interaction and communication but also for learning [4]. Social media networks can also be used for managing knowledge and learning through connections with external parties who are experts in their fields, to share information, ideas, concepts, and activities. There are four main reasons for the use of Social Networks (SNs): (1) developing community through the exchange of information; (2) energizing people who have the passion to take advantage of every device and turn it into useful information; (3) coming up with good ideas to share with others; (4) meeting the need to establish connections for need fulfillment or satisfaction. Being locked inside their homes during the lockdown has been changing people's lifestyles ever since, leading to a drastic switch to digital entertainment platforms. Most people are spending their time inside the house and are relying heavily on digital media platforms [5].
Streaming services have improved audiences' access to media content through various platforms such as television, laptops, and mobile phones. The drastic switch to digital platforms has increased internet traffic; social networking apps like Instagram consume huge traffic, comparable to that of video-on-demand services such as Netflix or YouTube [6]. Online learning is the modern form of distance learning, the latest edition of education, so the urgency of adopting technology to support learning cannot be avoided. For the learning process to take place well, the main requirement is certainty of being connected through communication media and internet networks [7]. Virtual mobility is a driving tool in online learning or e-learning [8]. Communication with other parties can occur if they are connected through communication media, without being limited by space and time, allowing students and teachers to collaborate from different locations. Virtual mobility is a series of activities supported by the presence of information and communication technology to collaborate in the context of teaching and learning and to obtain the same benefits as when carrying out physical
mobility. Virtual mobility is growing with the existence of Social Networks that allow students to connect, communicate, interact, and collaborate. At first, parents, teachers and students experienced various obstacles caused by not being prepared to face online learning and not knowing what its benefits were. A positive benefit of online learning is the emergence of a sense of togetherness between students, teachers, and parents [9]. Learning technology really helps address various educational needs so that learning objectives can be achieved. Online learning has high appeal, so many students are interested because it provides convenience, increases participation, and offers high accessibility. Students need several devices to do assignments, conduct discussions, communicate with teachers and friends, send assignments, and participate in video conferences at the same time. Generally, they use laptops and personal computers to send emails and to browse or download materials and assignments; they use smartphones to discuss through the WhatsApp and Line applications, and use smartphones and laptops to participate in video conferences through the Zoom platform and Microsoft Teams. In the digital era, people are used to finding the information or content they want when they need it, whether they are working, studying, relaxing or in a meeting. Most students are required to multi-task because some work must be completed in the same relative timeframe. They must be able to organize their work, duties, and responsibilities well, even at the same time. However, previous studies [10] of workers who use a combination of desktops, laptops, and mobile devices to perform tasks throughout the day show that managing information across devices can be difficult. Therefore, each device that is used needs to be separated based on the context or situation at hand.
3 Methods and Data Collection The present research is exploratory. The factor analysis method is used to process and analyze the data; factor analysis has two main objectives, namely to reduce variables and to detect structures in the relationships between variables. From the questionnaires, new variables are expected to be developed. The present research uses primary data collection: the respondents' data were obtained from higher education students. A total of 150 higher education students, from private and public universities, participated. Data collection was mediated by a self-administered questionnaire, and the data was taken during the Covid-19 pandemic period, between 2020 and 2022.
4 Result Prior to the main analysis, the Kaiser–Meyer–Olkin (KMO) and Bartlett's tests are performed. The details of the values are presented in Table 1, which gives the KMO value indicating the relationship between the variables. Since the KMO MSA value is 0.731 (> 0.5), the factor analysis process can be carried out. The value of Bartlett's test of sphericity is 317.950 with a significance of 0.000 (below 0.05), which means that the variables are correlated. The partial correlation value of each variable can be seen in the anti-image correlation section, shown in Table 2.
Based on the information from the table above, there are two components that have eigenvalues > 1.0, namely component 1 (score = 2.759) and component 2 (score = 1.215). Thus, the number of new variables formed is 2 variables. The factor rotation process aims to ensure whether each variable is in accordance with the factor. The following Table 5 shows that these variables are in accordance with the factors. Based on the information from table 5, it shows that each variable is spread over each component. Most of the variables are in component 1, except for the virtual mobility variable which is in component 2. To get a clear picture regarding the position of each variable, it can be seen in the following Table 6 of factor rotation results. Table 1 Keywords by cluster
Test
Indicators
KMO Barlett
Value 0.731
Approx. chi-square
317.950
df
15
Sig
0.000
Table 2 Partial correlation values: the anti-image covariance and anti-image correlation matrices for Entertainment, SWPL, Productivity, Interaction, Emergency, and Virtual Mobility. The diagonal entries of the anti-image correlation matrix are the per-variable MSA values quoted in the text.
Table 3 Extraction analysis

Variables          Initial   Extraction
Entertainment      1.000     0.842
SWPL               1.000     0.795
Productivity       1.000     0.645
Interaction        1.000     0.551
Emergency          1.000     0.559
Virtual mobility   1.000     0.583
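The communalities in Table 3 and the eigenvalue-based component count in Table 4 follow from a PCA of the correlation matrix: retain components with eigenvalue > 1 (the Kaiser criterion), form loadings as eigenvectors scaled by the square root of their eigenvalues, and sum the squared loadings per variable. A numpy sketch on synthetic data, since the survey responses themselves are not available:

```python
import numpy as np

# Synthetic stand-in for 150 respondents x 6 survey variables (assumption)
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 6))
X[:, 1] += 0.9 * X[:, 0]   # correlated pair -> a dominant component
X[:, 2] += 0.7 * X[:, 0]

R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]      # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.sum(eigvals > 1.0))                    # Kaiser criterion
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # component loadings
communalities = np.sum(loadings**2, axis=1)       # per-variable shared variance
print("retained components:", k)
print("communalities:", np.round(communalities, 3))
```

The eigenvalues of a correlation matrix always sum to the number of variables, which is why the cumulative variance in Table 4 reaches 100% over six components.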
The rotated component matrix shows that each variable is spread over the two components. The Interaction and Emergency variables, which were previously in component 1, have moved to component 2, so that the two components formed each have three variables. Table 7 shows that the six factors that influence students' decisions to use multi-devices can be grouped into two factors, namely: (1) the Work Life Balance factor, consisting of Entertainment, SWPL and Productivity; (2) the Fear of Missing Out (FOMO) factor, consisting of Interaction, Emergency and Virtual Mobility. The first dimension, work life balance, is so defined because of the similarity in how students handle their duties as well as their hobbies through multi-device usage. This is plausible, since most students carry out their activities blended across many channels. The second dimension, fear of missing out, is so described because the characteristics of the online aspects are clearly listed: the Covid-19 situation restrains students from meeting physically, so they prefer to meet virtually. Thus, multi-devices serve as suitable media for accommodating the situation.
5 Discussion From the generated results, two dimensions emerge: work-life balance and fear of missing out. Work-life balance covers each student's personal situation, such as entertainment and the separation of work and personal life; it can be integrated into well-established behavioral models such as the theory of planned behavior and self-determination theory. Fear of missing out (FOMO) covers the student's interactions and academic activities; it can be integrated into academic-related behavior models such as expectation-confirmation theory, the theory of reasoned action, and other relevant models.
Table 4 Principal component analysis

           Initial eigenvalues              Extraction of sum squared loadings   Rotation SS loadings
Component  Total   Variance  Cumulative    Total   Variance  Cumulative         Total  Variance  Cumulative
1          2.759   45.98     45.98         2.759   45.98     45.98              2.32   38.71     38.71
2          1.215   20.25     66.23         1.215   20.25     66.23              1.65   27.52     66.23
3          0.745   12.41     78.66
4          0.612   10.20     88.86
5          0.438   7.30      96.16
6          0.230   3.84      100.00
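In Table 4, each component's percentage of variance is its eigenvalue divided by the number of variables (six, since each standardized variable contributes unit variance), and components are retained by the eigenvalue-greater-than-one (Kaiser) criterion. The reported figures can be checked directly:

```python
eigenvalues = [2.759, 1.215, 0.745, 0.612, 0.438, 0.230]   # Table 4, initial
reported_pct = [45.98, 20.25, 12.41, 10.20, 7.30, 3.84]

# Eigenvalues of a 6-variable correlation matrix sum to 6
assert abs(sum(eigenvalues) - 6.0) < 0.01

# % variance = 100 * eigenvalue / 6, matching Table 4 to rounding
for lam, pct in zip(eigenvalues, reported_pct):
    assert abs(100 * lam / 6 - pct) < 0.02

# Kaiser criterion: retain components with eigenvalue > 1 -> two components
assert sum(lam > 1 for lam in eigenvalues) == 2
```

This confirms why exactly two components survive extraction: only the first two eigenvalues (2.759 and 1.215) exceed 1, together accounting for 66.23% of the total variance.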
Table 5 Extracted component

Variables          1       2
Entertainment      0.808   −0.436
SWPL               0.803   −0.388
Productivity       0.794   −0.120
Interaction        0.632   0.389
Emergency          0.559   0.497
Virtual mobility   0.346   0.680

Table 6 Rotated component

Variables          1       2
Entertainment      0.916   0.060
SWPL               0.886   0.097
Productivity       0.736   0.320
Interaction        0.329   0.665
Emergency          0.210   0.718
Virtual mobility   −0.068  0.760
Table 7 Reduction components

Component    Correlation (based on component        Variable                Score   New factor
             transformation matrix)
Component 1  0.847                                  Entertainment (X1)      0.916   1. Work life balance
                                                    SWPL (X2)               0.886
                                                    Productivity (X3)       0.736
Component 2  0.847                                  Interaction (X4)        0.665   2. Fear of missing out (FOMO)
                                                    Emergency (X5)          0.718
                                                    Virtual mobility (X6)   0.760
6 Conclusion The existence of multiple devices brings a new paradigm to how students conduct their private and campus activities. Multi-devices were used heavily to accommodate connections, communications, and interactions during the covid-19 period. The big picture of multi-device usage is approached here through principal component analysis: the original dimensions of entertainment, SWPL, productivity, interaction, emergency, and virtual mobility are further investigated with factor reduction. The result shows two broad themes among the investigated factors: work life balance and fear of missing out. Work life balance accommodates the duties and tasks that exist in students' daily life, and IT providers should see this as an opportunity to supply tools that support students' work. The second dimension, FOMO, is understandable since the
covid-19 outbreak transmits the virus through human-to-human contact. University management should attend to students' FOMO and use online meetings to minimize virus transmission; meetings can be accommodated not only by laptop but also by smartphone, tablet, and asynchronous tools. The study has a few limitations. First, the number of respondents: enlarging the sample would increase the accuracy of the analysis. Second, segmentation: lower levels of education, such as senior high school, junior high school, or even elementary school, could be analyzed further.
Society with Trust: A Scientometrics Review of Zero-Knowledge Proof Advanced Applications in Preserving Digital Privacy for Society 5.0 Nicholas Dominic, Naufal Rizqi Pratama, Kenny Cornelius, Shavellin Herliman Senewe, and Bens Pardamean Abstract Society 5.0 focuses on human productivity in the midst of advanced technological services. While the concept has human trust at its core, technology development is now leading to zero-trust architecture. In this scientometrics review, 107 selected papers were analyzed to characterize the nature of Zero-Knowledge Proof (ZKP) theory and its advanced applications in favor of digital privacy in greater society. With a common literature review strategy, it was found that the most citations in this field appeared in 2018, with IEEE, Springer, and Elsevier as the top three publishers. By affiliation, China and the United States are the leading countries of origin for researchers in the field. As the positive trend continues, the ZKP mechanism will keep being leveraged as one of the cryptographic toolsets in cloud services, the Internet of Things (IoT), smart contracts, healthcare, electronic systems, and many other industrial settings. Keywords Zero-knowledge proof · Digital privacy · Society 5.0 N. Dominic (B) · B. Pardamean Bioinformatics and Data Science Research Center, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] B. Pardamean e-mail: [email protected] N. R. Pratama Information System Department, School of Information Systems, Bina Nusantara University, Jakarta 11480, Indonesia K. Cornelius Department of Computer Science, Universitas Multimedia Nusantara, Tangerang 15111, Indonesia S. H. Senewe Psychology Department, Faculty of Humanities, Bina Nusantara University, Jakarta 11480, Indonesia B. Pardamean BINUS Graduate Program, Bina Nusantara University, Jakarta 11480, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al.
(eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_7
N. Dominic et al.
1 Introduction Zero-knowledge proof (ZKP) is a cryptographic protocol in which a "prover" can convince a "verifier" that a given statement is true without revealing any other information. Yao's Millionaires' problem [1] is a classic example of the setting, where two parties are eager to know who is richer without disclosing their financial budgets. The recent resurgence of ZKP was driven by the rapid development of distributed ledger technology (DLT), or blockchain [2]. In recent times, the ZKP concept has been embedded in many advanced applications to preserve users' digital privacy, one of the stepping stones toward a human-centric society, or Society 5.0. In the creation of a super-intelligent society, i.e., an integrated space between humans and innovative technology, Society 5.0 must attain the Sustainable Development Goals (SDGs) by 2030 [3]. Of the 17 agendas in the United Nations' SDGs, ZKP can contribute to the embodiment of the "peace, justice, and strong institutions" goal through the equitable distribution of secure digital services that strengthen trust at all layers of society. This study seeks to (1) understand the nature of the scholarly literature on ZKP theory and (2) elucidate its applications in favor of the Society 5.0 objective.
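As a concrete toy illustration of the prover/verifier idea, the sketch below shows a Schnorr-style interactive proof of knowledge of a discrete logarithm: the prover convinces the verifier it knows x with y = g^x mod p without revealing x. The protocol choice and the deliberately tiny parameters are illustrative, not taken from the papers reviewed here.

```python
import random

# Toy group: subgroup of prime order q = 11 in Z_23*, generated by g = 2
p, q, g = 23, 11, 2
x = 7                      # prover's secret
y = pow(g, x, p)           # public value: 2^7 mod 23 = 13

# Commit: prover picks random k and sends t = g^k mod p
k = random.randrange(q)
t = pow(g, k, p)

# Challenge: verifier sends random c
c = random.randrange(q)

# Response: prover sends s = k + c*x mod q; s alone reveals nothing about x
s = (k + c * x) % q

# Verify: g^s == t * y^c (mod p) holds iff the prover knows x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

Completeness follows from g^s = g^k · (g^x)^c = t · y^c; the random blinding value k is what keeps x hidden across transcripts.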
2 Materials and Methods The complete scheme of this review is depicted in Fig. 1. First, two queries were set: (1) "zero-knowledge proof" AND "privacy", (2) "zero-knowledge proof" AND "society". The search was done on the Google Scholar database only, with the years 2016 to 2021 and the first five result pages as constraints. The exclusion criteria covered systematic literature review (SLR) papers, e-prints or pre-prints, and other papers from arXiv and ResearchGate; only articles written in English were accepted. In the filtering process, duplicate results found across the queries and papers outside the publication-year range were dropped. Toward the final decision, papers were filtered by title [Filter-1] and then by abstract [Filter-2]. The desired outcomes comprise (1) total papers with total citations per year, (2) top publishers, (3) source region of researchers, and (4) a co-occurring keywords cloud. Tableau Desktop v2021.4 and the MonkeyLearn web application were utilized for visualization.
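The screening steps above (merge the two query result sets, drop duplicates and out-of-range years, then filter by title and by abstract) can be sketched as follows; the records and keyword tests are hypothetical stand-ins for the manual screening actually performed in the review.

```python
records = [
    {"title": "ZKP for IoT privacy", "abstract": "zero-knowledge proof ...", "year": 2019},
    {"title": "ZKP for IoT privacy", "abstract": "zero-knowledge proof ...", "year": 2019},  # duplicate
    {"title": "A survey of SLRs", "abstract": "systematic literature review", "year": 2020},
    {"title": "Blockchain voting with ZKP", "abstract": "e-voting ...", "year": 2014},  # out of range
]

# Drop duplicates (here keyed by title) and papers outside 2016-2021
seen, pool = set(), []
for r in records:
    if r["title"] not in seen and 2016 <= r["year"] <= 2021:
        seen.add(r["title"])
        pool.append(r)

# Filter-1 (by title) then Filter-2 (by abstract): exclude review papers
filter1 = [r for r in pool if "survey" not in r["title"].lower()]
filter2 = [r for r in filter1 if "literature review" not in r["abstract"]]
assert [r["title"] for r in filter2] == ["ZKP for IoT privacy"]
```

The two-stage [Filter-1]/[Filter-2] counting in Table 1 corresponds to the sizes of `filter1` and `filter2` at each year/query combination.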
3 Scientometric Results The ZKP research status is depicted in Fig. 2 as a year-wise distribution. The figure indicates that 2017 and 2018 are the years in which the research obtained the most citations, and 2019 is the most productive year with 50 published papers. Although a future polynomial trendline was not plotted, the total number of papers is projected to increase in the coming years. Details for these data can be found in Table 1, grouped by year,
Fig. 1 Scientometric review methodology
Fig. 2 Total papers with citations per year, from 2016 to 2021
query, and filter. Note that when citation data is unavailable on the publisher webpage, the data from Google Scholar is recorded instead. Normalized total citation data is provided for easier comparison among years. Figure 3 shows that IEEE, Springer, and Elsevier are the top three publishers entrusted with researchers' papers in this field; note that most papers, especially from IEEE, came from annual proceedings. In Fig. 4, the world map is characterized by the total distribution of publications in every country: China dominates ZKP publication in Asia, while European researchers also appear to be actively working in this field. The region data extracted from each paper was based on the author's country or the first author's affiliation. Due to some limitations, co-author and country collaboration analyses are not provided. Lastly, as a word cloud is commonly used to surface the most frequent and prominent terms, the abstract keywords cloud in Fig. 5 forms a collective cluster of the highlights in ZKP research, some of which are discussed in the next section. When in-paper keywords were unavailable, the journal keywords were used instead; if those were also unavailable, prominent words from the title were extracted to build the keywords cloud.
Fig. 3 Total papers published by each publisher
Fig. 4 Source region of researchers
Fig. 5 Abstract keywords cloud
4 Discussions 4.1 Advanced Applications ZKP in the Internet of Things (IoT), smart systems, and cloud services. Internet of Vehicles (IoV) [4] systems allow data gathering from many connected vehicles to form distributed vehicular ad hoc networks (VANETs) [5]. This progress can be made through secure multiparty computation [6] and IoT [7] deployment. In daily life, ZKP is also embedded in the smart home [8], the smart grid [9], and even the Wireless Body Area Network (WBAN) [10] to protect sensitive data. Another ZKP application can be found in public cloud services [11]. Since the cloud offers high availability and ease of scalability, most users rely on it for storage [12]. Google Cloud Storage, for instance, has a Google-managed Encryption Key (GMEK) mechanism with double-layer encryption that may apply ZKP to authenticate bucket access. More ZKP applications in the cloud include mobile cloud computing (MCC) [13], mobile (multi-access) edge computing (MEC) [14], and vehicular cloud computing [15]. ZKP in the blockchain space, smart contracts, and digital signatures. In implementations of blockchain such as cryptocurrency [16], every transaction is kept secure by maintaining accessibility (it can be accessed transparently by everyone), immutability (it cannot be intentionally modified or deleted), and resiliency (it cannot be tampered with through a single point of fault) [17]. Because of this technology, each party can discern the transaction history or an account balance. Hence, ZKP is used since it can provide proof of a transaction [18] without disclosing sensitive information, e.g., trading relationships and transaction amounts. With ZKP, the transaction process can transpire in a decentralized manner while maintaining privacy. Furthermore, ZKP can be applied in conjunction with smart contracts [19], i.e., protocols that ensure each party carries out its obligations in a transaction.
ZKP can also be found in cryptographic technology such as digital signatures [20]. Digital signatures are the operations of signing a message with a signing key to maintain the validity and authenticity of the message. Digital signatures use public and private keys to sign a message. With ZKP, the signature can be acquired without knowing the contents of the encrypted message. This concept is formally known as the Blind Signature Scheme [21]. ZKP in the authentication and credential systems. As many studies have shown, ZKP is used in authentication protocols, including biometric system [22], login scheme [23], and single sign-on (SSO) [24] to prevent distributed denial of service (DDoS) attacks [25]. ZKP is also implemented in the credential management system [26] to validate user identity [27] or Personalized Identifiable Information (PII). To reinforce users’ information security, know that most web services [28] and distributed Virtual Private Network (VPN) [29] are also secured with the ZKP mechanism.
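The blind signature idea mentioned above can be made concrete with a textbook RSA blinding sketch (toy key sizes, for illustration only): the signer signs a blinded message and never sees the message itself, yet the unblinded signature verifies normally.

```python
# Textbook RSA toy key: n = 61 * 53 = 3233, e = 17, d = 2753 (e*d = 1 mod phi(n))
n, e, d = 3233, 17, 2753
m = 42                      # message (in practice, a hash of the message)

# Requester blinds m with a random r coprime to n
r = 7
blinded = (m * pow(r, e, n)) % n

# Signer signs the blinded value without learning m
s_blind = pow(blinded, d, n)

# Requester unblinds: s = s_blind * r^-1 mod n is a valid signature on m
s = (s_blind * pow(r, -1, n)) % n   # modular inverse needs Python 3.8+
assert pow(s, e, n) == m
```

The unblinding works because s_blind = m^d · r^(ed) = m^d · r (mod n), so multiplying by r^−1 leaves the ordinary RSA signature m^d.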
ZKP in environments and healthcare systems. Commodities like wind power, crude oil, electricity, and natural gas should be intelligently managed with a decentralized energy trading system to meet prosumer demand [30]. Such a system leverages the ZKP protocol to keep prosumers' energy data secure while stored in a public blockchain, as well as to detect inconsistencies. In healthcare, a ZKP-based scheme for sharing medical data has begun to be installed in many hospitals [31]. In particular, a novel technique was coined to serve genetic data queries between genomic testing facilities, where ZKP is used to maintain patients' data completeness and authenticity [32]. ZKP in other electronic systems and industrial settings. With ZKP protocols, an e-auction can be run without a Trusted Third Party (TTP) auctioneer [33]. The same applies to e-payment, where a fair exchange is possible in a trustless network [34]. E-voting systems also employ ZKP to respect voters' privacy while keeping full transparency for fair auditing [35]; by this mechanism, the e-voting system ensures each vote is cast as intended. One study proposed the first secure ZKP protocol for e-coupons, whose mechanism achieves better verification proofs of a customer's certified attributes (e.g., citizenship or academic title) [36]. To guarantee ownership, online marketplaces utilize ZKP protocols for authentication [37], which also allows many deals to occur without PII divulgence. One type of ZKP, non-interactive ZKP (NIZKP), is used in ticketing services to authenticate purchased tickets stored in a DLT network [38]. NIZKP also works for multi-dimensional user data verification in the credit score calculation process [39]. With ZKP, it is possible to avert fraud and document forgery while an online real estate contract is being concluded [40]. In enterprise modeling, ZKP is used to monitor knowledge management and preserve its patterns [41].
A card-based access control, such as RFID (Radiofrequency Identification) card, is now designed with a ZKP scheme as one set of cryptographic features [42]. Besides end consumer’s privacy concerns, the efficiency and traceability in supply chain management (SCM) can be addressed using ZKP mechanisms [43]. To preserve the user’s location and prevent double-reservation attacks, the ZKP scheme is attached as one of the building blocks in the Autonomous Valet Parking (AVP) system [44]. The location-aware architecture for a real-time traffic management system also integrates NIZKP to fulfill data privacy requirements [45].
4.2 Society 5.0 Through the Lens of Social Psychology While the Industrial Revolution 4.0 takes sides on technological advancements, e.g., in IoT or deep learning [46–50], Society 5.0 takes sides on human wellbeing, particularly productivity, by focusing on the environment, society, and economy. This integral concept will be applied to all human beings, regardless of their background. While Society 5.0 has human trust in its crux, favoritism among groups that happened
nowadays can form an unneglectable prejudice which causes a decrease in citizen trust. In addition, even though technologies allow organizations and governments to store big data about their citizens’ address, email, password, or behavior, they need to use it wisely [51]. Otherwise, the impact of data leakage will scare the citizens in many forms. By this, citizens’ trust in technology may decrease. To achieve human wellbeing, all layers of society must hold the trust entrusted since basic humans’ instincts are to survive and live happily without getting interrupted. Humans also need physical security, protection, and freedom from forces that threaten them and yearn for an orderly law [52]. To generate in-depth trust in greater society, governments or related organizations can provide valid information and enforce privacy laws [53].
5 Conclusion and Future Works With the developed scientometrics review methodology, it was found that the most citations in this field appeared in 2018, with IEEE, Springer, and Elsevier as the top three publishers. By affiliation, China and the United States are the leading countries of origin for researchers in the field. Reflecting on the total papers from 2016 to 2021, applications using the ZKP scheme will keep showing a positive trend. As discussed in the previous section, ZKP has been leveraged as one of the cryptographic toolsets in IoT, cloud services, smart contracts, healthcare, electronic systems, and many other industrial settings. A detailed explanation of deploying the ZKP scheme together with other cryptographic algorithms and technologies is left as forthcoming work.

Table 1 Total papers found with their citation index, grouped by year, query, and filter

Year  Query  Retrieval date  Total paper  Total citation  Total citation (norm)
2016  (1)    2022-03-02      12/7         106/50          0.228/0.106
      (2)    2022-03-02      8/3          102/38          0.219/0.080
2017  (1)    2022-03-02      19/12        462/211         1.000/0.456
      (2)    2022-03-02      9/8          171/157         0.369/0.338
2018  (1)    2022-03-03      11/8         130/103         0.280/0.221
      (2)    2022-03-03      11/9         448/364         0.970/0.787
2019  (1)    2022-03-03      18/6         246/235         0.531/0.508
      (2)    2022-03-03      10/6         32/22           0.067/0.046
2020  (1)    2022-03-08      16/15        98/87           0.210/0.187
      (2)    2022-03-08      4/2          19/11           0.039/0.022
2021  (1)    2022-03-09      21/16        30/24           0.063/0.050
      (2)    2022-03-09      5/5          1/1             0.000/0.000

Total paper and total citation are written in the order [Filter-1]/[Filter-2]
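The normalized citation column in Table 1 appears to follow min-max normalization over the Filter-1 citation counts, (x − min)/(max − min); the reported values can be reproduced exactly:

```python
# Filter-1 total citations per year/query row, in Table 1 order
citations = [106, 102, 462, 171, 130, 448, 246, 32, 98, 19, 30, 1]
reported = [0.228, 0.219, 1.000, 0.369, 0.280, 0.970, 0.531, 0.067,
            0.210, 0.039, 0.063, 0.000]

lo, hi = min(citations), max(citations)          # 1 and 462
norm = [round((x - lo) / (hi - lo), 3) for x in citations]
assert norm == reported
```

The match (462 maps to 1.000 and the single-citation row to 0.000) supports reading the "norm" column as a min-max rescaling for cross-year comparison.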
References 1. Yao AC (1982) Protocols for secure computations. In: 23rd annual symposium on foundations of computer science (SFCS 1982). pp 160–164 2. Zhang P, Schmidt DC, White J, Dubey A (2019) Consensus mechanisms and information security technologies. In: Kim S, Deka GC, Zhang P (eds) Role of blockchain technology in IoT applications. Elsevier, pp 181–209 3. Narvaez Rojas C, Alomia Peñafiel GA, Loaiza Buitrago DF, Tavera Romero CA (2021) Society 5.0: a Japanese concept for a superintelligent society. Sustainability 13(12) 4. Chen C, van Groenigen KJ, Yang H, Hungate BA, Yang B, Tian Y, Chen J, Dong W, Huang S, Deng A, Jiang Y, Zhang W (2020) Global warming and shifts in cropping systems together reduce China’s rice production. Glob Food Sec 24(January):100359 5. Rasheed AA, Mahapatra RN, Hamza-Lup FG (2020) Adaptive group-based zero knowledge proof-authentication protocol in vehicular ad hoc networks. IEEE Trans Intell Transp Syst 21(2):867–881 6. Yang X, Huang M (2018) Zero knowledge proof for secure two-party computation with malicious adversaries in distributed networks. Int J Comput Sci Eng 16(4):441–450 7. Ma Z, Wang L, Zhao W (2021) Blockchain-driven trusted data sharing with privacy protection in IoT sensor network. IEEE Sens J 21(22):25472–25479 8. Park G, Kim B, Jun MS (2017) A design of secure authentication method using zero knowledge proof in smart-home environment. Lecture notes in electrical engineering, vol 421. pp 215–220 9. Badra M, Borghol R (2021) Privacy-preserving and efficient aggregation for smart grid based on blockchain. In: 11th IFIP international conference on new technologies, mobility and security, NTMS 2021. pp 82–88 10. Umar M, Wu Z, Liao X (2021) Channel characteristics aware zero knowledge proof based authentication scheme in body area networks. Ad Hoc Netw 112:102374 11. Yu Y, Li Y, Au MH, Susilo W, Choo KKR, Zhang X (2016) Public cloud data auditing with practical key update and zero knowledge privacy. 
Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 9722. pp 389–405 12. Tian H, Nan F, Chang CC, Huang Y, Lu J, Du Y (2019) Privacy-preserving public auditing for secure data storage in fog-to-cloud computing. J Netw Comput Appl 127:59–69 13. Song YX, Liao ZX, Liang YH (2018) A trusted authentication model for remote users under cloud architecture. Int J Internet Protoc Technol 11(2):110–117 14. Lin W, Zhang X, Cui Q, Zhang Z (2021) Blockchain based unified authentication with zeroknowledge proof in heterogeneous MEC. In: IEEE international conference on communications workshops (ICC workshops). pp 1–6 15. Hegde N, Manvi SS (2019) MFZKAP: multi factor zero knowledge proof authentication for secure service in vehicular cloud computing. In: 2nd international conference on advanced computational and communication paradigms, ICACCP 2019. pp 1–6 16. Dai W, Lv Y, Choo K-KR, Liu Z, Zou D, Jin H (2021) CRSA: a cryptocurrency recovery scheme based on hidden assistance relationships. IEEE Trans Inf Forensics Secur 16:4291–4305 17. Xu L, Shah N, Chen L, Diallo N, Gao Z, Lu Y, Shi W (2017) Enabling the sharing economy: privacy respecting contract based on public blockchain. In: BCC 2017—proceedings of the ACM workshop on blockchain, cryptocurrencies and contracts, co-located with ASIA CCS 2017. pp 15–21 18. Vakilinia I, Tosh DK, Sengupta S (2017) Privacy-preserving cybersecurity information exchange mechanism. Simul Ser 49(10):15–21 19. Xu L, Chen L, Gao Z, Kasichainula K, Fernandez M, Carbunar B, Shi W (2020) PrivateEx: privacy preserving exchange of crypto-assets on blockchain. In: Proceedings of the ACM symposium on applied computing. pp 316–323 20. Ishida A, Emura K, Hanaoka G, Sakai Y, Tanaka K (2017) Group signature with deniability: how to disavow a signature. IEICE Trans Fundam Electron Commun Comput Sci E100A(9):1825– 1837
21. Petzoldt A, Szepieniec A, Mohamed MSE (2017) A practical multivariate blind signature scheme. Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 10322 LNCS. pp 437–454 22. Tran QN, Turnbull BP, Wang M, Hu J (2022) A privacy-preserving biometric authentication system with binary classification in a zero knowledge proof protocol. IEEE Open J Comput Soc 3:1–10 23. Qingshui X, Yue S, Haifeng M, Zongyang H, Tianhao Z (2021) Registration and login scheme of charity blood donation system based on blockchain zero-knowledge proof. In: IEEE 9th international conference on information, communication and networks (ICICN). pp 464–469 24. Kim HJ, Lee IY (2017) A study on a secure single sign-on for user authentication information privacy in distributed computing environment. Int J Commun Netw Distrib Syst 19(1):28–45 25. Ramezan G, Abdelnasser A, Liu B, Jiang W, Yang F (2021) EAP-ZKP: a zero-knowledge proof based authentication protocol to prevent DDoS attacks at the edge in beyond 5G. In: IEEE 4th 5G world forum (5GWF). pp 259–264 26. Song T, Lin J, Wang W, Cai Q (2020) Traceable revocable anonymous registration scheme with zero-knowledge proof on blockchain. In: IEEE international conference on communications (ICC) 27. Yang X, Li W (2020) A zero-knowledge-proof-based digital identity management scheme in blockchain. Comput Secur 99 28. Al-Bajjari AL, Yuan L (2016) Research of web security model based on zero knowledge protocol. In: Proceedings of the IEEE international conference on software engineering and service sciences, ICSESS. pp 68–71 29. Varvello M, Azurmendi IQ, Nappa A, Papadopoulos P, Pestana G, Livshits B (2021) VPN-zero: a privacy-preserving decentralized virtual private network. In: IFIP networking conference (IFIP networking). pp 1–6 30.
Pop CD, Antal M, Cioara T, Anghel I, Salomie I (2020) Blockchain and demand response: zero-knowledge proofs for energy transactions privacy. Sensors (Switzerland) 20(19):1–21 31. Chaudhry JA, Saleem K, Alazab M, Zeeshan HMA, Al-Muhtadi J, Rodrigues JJPC (2021) Data security through zero-knowledge proof and statistical fingerprinting in vehicle-to-healthcare everything (V2HX) communications. IEEE Trans Intell Transp Syst 22(6):3869–3879 32. Ding X, Ozturk E, Tsudik G (2019) Balancing security and privacy in genomic range queries. In: Proceedings of the ACM conference on computer and communications security, pp 106–110 33. Li H, Xue W (2021) A blockchain-based sealed-bid e-auction scheme with smart contract and zero-knowledge proof. Secur Commun Netw 2021 34. Harikrishnan M, Lakshmy KV (2019) Secure digital service payments using zero knowledge proof in distributed network. In: 5th international conference on advanced computing and communication systems, ICACCS. pp 307–312 35. Panja S, Roy B (2021) A secure end-to-end verifiable e-voting system using zero-knowledge proof and blockchain. Indian Statistical Institute Series, pp 45–48 36. Hancke GP, Damiani E (2018) A selective privacy-preserving identity attributes protocol for electronic coupons. Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 10741 LNCS V 37. Jiang Y, Carrijo D, Huang S, Chen J, Balaine N, Zhang W, van Groenigen KJ, Linquist B (2019) Water management to mitigate the global warming potential of rice systems: a global meta-analysis. Field Crop Res 234(February):47–54 38. Cha SC, Peng WC, Hsu TY, Chang CL, Li SW (2018) A blockchain-based privacy preserving ticketing service. In: IEEE 7th global conference on consumer electronics, GCCE. pp 585–587 39. Lin C, Luo M, Huang X, Choo K-KR, He D (2021) An efficient privacy-preserving credit score system based on noninteractive zero-knowledge proof. IEEE Syst J 1–10 40. 
Jeong SH, Ahn B (2021) Implementation of real estate contract system using zero knowledge proof algorithm based blockchain. J Supercomputing 77(10):11881–11893 41. Fill HG, Härer F (2018) Knowledge blockchains: applying blockchain technologies to enterprise modeling. In: Proceedings of the annual Hawaii international conference on system sciences. pp 4045–4054
42. Hajny J, Dzurenda P, Malina L (2018) Multidevice authentication with strong privacy protection. Wireless Commun Mobile Comput 2018 43. Sahai S, Singh N, Dayama P (2020) Enabling privacy and traceability in supply chains using blockchain and zero knowledge proofs. In: Proceedings—2020 IEEE international conference on blockchain, blockchain. pp 134–143 44. Huang C, Lu R, Lin X, Shen X (2018) Secure automated valet parking: a privacy-preserving reservation scheme for autonomous vehicles. IEEE Trans Veh Technol 67(11):11169–11180 45. Li W, Guo H, Nejad M, Shen CC (2020) Privacy-preserving traffic management: a blockchain and zero-knowledge proof inspired approach. IEEE Access 8:181733–181743 46. Cenggoro TW, Wirastari RA, Rudianto E, Mohadi MI, Ratj D, Pardamean B (2021) Deep learning as a vector embedding model for customer churn. Procedia Comput Sci 179(2020):624–631 47. Budiarto A, Rahutomo R, Putra HN, Cenggoro TW, Kacamarga MF, Pardamean B (2021) Unsupervised news topic modelling with Doc2Vec and spherical clustering. Procedia Comput Sci 179(2020):40–46 48. Rahutomo R, Budiarto A, Purwandari K, Perbangsa AS, Cenggoro TW, Pardamean B (2020) Ten-year compilation of #savekpk twitter dataset. In: Proceedings of 2020 International Conference on Information Management and Technology, ICIMTech 2020 9211246 pp 185–190 49. Purwandari K, Sigalingging JWC, Cenggoro TW, Pardamean B (2021) Multi-class weather forecasting from twitter using machine learning aprroaches. Procedia Comput Sci 179(2020):47–54 50. Rahutomo R, Perbangsa, AS, Soeparno H, Pardamean B (2019) Embedding model design for producing book recommendation. In: International conference on information management and technology, ICIMTech 2019 8843769 pp 537–541 51. Al-musawi A, Yang E, Bley K, Thapa D, Pappas IO (2021) The role of citizens’ familiarity, privacy concerns, and trust on adoption of smart services. NOKOBIT Norsk konferanse organisasjoners bruk av IT 2021(2):1–14 52. 
Altymurat A (2021) Human behavior in organizations related to Abraham Maslow’s hierarchy of needs theory. Interdisc J Papier Hum Rev 2(1):12–16 53. Martin K (2018) The penalty for privacy violations: how privacy violations impact trust online. J Bus Res 82(June 2016):103–116
Mixed Reality Approaches for Real-World Applications
Validation of Augmented Reality Prototype for Aspects of Cultural Learning for BIPA Students Pandu Meidian Pratama, Agik Nur Efendi, Zainatul Mufarrikoh, and Muhammad David Iryadus Sholihin
Abstract The purpose of this study is to summarise the findings from the validation of an augmented reality prototype for the cultural components of BIPA learning. An R&D design was used, with product validation, an important step in the research and development process, as the main focus here. Validation is important to determine the calibre of the prototype or product under development; in this case it examines BIPA level 1's cultural components and the AR content. Validation of the initial product is done in an integrated way, from the devices created during design until the augmented reality software product is attained. The prototype validation was carried out by two experts, a technology expert and a BIPA learning expert, using content validity evaluation (technology content and BIPA materials), and the validation data were processed using SPSS. The expert validation shows that the Augmented Reality prototype is valid, practical, and usable in learning BIPA Cultural Aspects at Level 1. Keywords Validation · Augmented reality · BIPA
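The chapter processes the two experts' content-validity ratings with SPSS. As an illustration of one widely used content-validity statistic, here is Aiken's V; this choice of index and the ratings shown are assumptions for illustration, since the abstract does not name the index or report raw scores.

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V = sum(r - lo) / (n * (hi - lo)) for n expert ratings r
    on a scale from lo to hi; values near 1 indicate high content validity."""
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Hypothetical: two experts rate one item 4 and 5 on a 1-5 scale
v = aikens_v([4, 5])
assert abs(v - 0.875) < 1e-9   # (3 + 4) / (2 * 4)
```

With only two raters, each item's V takes a small set of discrete values, which is why such studies typically compare V against a tabled critical value rather than a significance test.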
P. M. Pratama (B) Digital Language Learning Center, Computer Science Department, Faculty of Humanities, Bina Nusantara University, Jakarta 11480, Indonesia. e-mail: [email protected]
A. N. Efendi · Z. Mufarrikoh Institut Agama Islam Negeri Madura, Pamekasan 69371, Indonesia
M. D. I. Sholihin The Informatics Engineering Education Department, State University of Surabaya, Surabaya 60231, Indonesia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_8

1 Introduction

The Indonesian Language Program for Foreign Learners (BIPA) is a means for foreign students to learn Indonesian. The purpose of implementing the BIPA program,
of which is as a means of expanding the number of Indonesian speakers, especially abroad, as well as a tool of diplomacy, such as language and cultural diplomacy. Learning a language is essentially also studying its culture, because language activities are considered a means of conveying cultural messages to the community [1]. The implementation of the BIPA program is regulated in the Regulation of the Minister of Education and Culture of the Republic of Indonesia Number 27 of 2017, which also regulates the levels of BIPA students from level 1 to level 7, with tiered and systematic learning outcomes. BIPA learning competencies are required to cover aspects of language skills, linguistic aspects, and cultural aspects. Cultural aspects in BIPA teaching are also part of cultural diplomacy, which is carried out so that the exchange of ideas, values, and information is more rapid and precise [2]. Using technology for BIPA learning can have a more direct influence than traditional BIPA methods such as textbook-centered instruction, with which teachers find it harder to evaluate the effects of learning [3]. The post-pandemic period and important technological advancements also affect how BIPA is studied: BIPA learning currently makes use of technological advancements such as gamification, virtual reality, and augmented reality. Augmented reality technology can be used as a learning medium and is becoming a trend in education and learning research [4]. AR is considered cheaper than virtual reality because AR does not require additional tools during use, unlike VR, which requires additional tools in the form of glasses. In addition, AR is a new breakthrough in learning that uses virtual and real technology to improve the quality of learning [5].
Besides being able to improve the quality of learning, AR technology can also present, more clearly, the many Indonesian cultural products spread throughout Indonesia. Thus, BIPA students can recognize and learn about Indonesia's very diverse culture just by using an AR device [6]. In addition to the learning material, another factor that must be taken into account is creating a pleasant learning environment; through learning evaluation, it is possible to determine what type of learning is enjoyable for students [7]. Naturally, a number of steps must be taken before the learning evaluation to ensure that the technology components of BIPA learning are ready, such as expert validation testing. A validation test aims to evaluate the quality of the prototype or product, and the validation procedure is repeated until the created product is deemed appropriate for use in the instructional process [8] (Figs. 1, 2 and 3). In this research, an Augmented Reality prototype has been designed and developed, as shown in Fig. 4. The prototype contains aspects of Javanese culture (traditional clothes, traditional houses, and traditional weapons). The Augmented Reality development applies virtual buttons on each marker, and the markers are designed using the Vuforia software development kit. Unity 3D and the Vuforia Engine SDK are used as the platforms to develop the Augmented Reality software, and the 3D objects are designed using the Blender 3D application to model the traditional clothing, houses, and weapons.
Fig. 1 Menu display
Fig. 2 Choice of province to be displayed
Fig. 3 One of the menu displays topic of traditional clothes
Fig. 4 Example of augmented reality prototype on traditional house drawing
This study's aim was to explain the outcomes of the augmented reality prototype's validation in the context of the cultural component of BIPA learning. BIPA level 1 students will utilise this augmented reality prototype to learn about the many cultures in Indonesia, such as traditional homes, traditional weaponry, and traditional clothing. Validation testing has attracted other researchers as well: for example, Ansari et al. [8] tested HOTS-based learning, Fast-Berglund et al. [9] tested and validated augmented reality and virtual reality in manufacturing, and Manuri et al. [10] and Del Amo et al. [11] developed augmented reality validation tests for the industrial sector.
2 Method

This study uses a Research and Development design. However, in this case, the focus is on product validation, which is an important part of the Research and Development process. The design of this research is shown in Fig. 5. The validation process examines aspects of the AR content and the material aspects of BIPA level 1 culture. Purposive sampling was used to determine the validators, who are divided into two experts: a technology expert and a BIPA learning expert. The experts were selected based on the criteria in Table 1. Data were collected with a content validity evaluation form (technology content and BIPA material) in the form of a questionnaire, as shown in Table 2. This study uses a Likert rating scale of 1–5: 1 very unworthy (VU), 2 not feasible (NF), 3 quite decent (QD), 4 worthy (W), and 5 so worth it (SWI). The data were analysed descriptively from the output of SPSS Statistics Version 25.
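The descriptive analysis of the Likert ratings reduces to category shares per indicator, the form later reported in Table 5. A minimal sketch of this step (the function name and the ratings below are illustrative, not from the study, which used SPSS):

```python
from collections import Counter

# Scale labels from the paper: 1 very unworthy ... 5 so worth it
SCALE = {1: "VU", 2: "NF", 3: "QD", 4: "W", 5: "SWI"}

def rating_percentages(ratings):
    """Share (in %) of each Likert category among the collected ratings."""
    counts = Counter(ratings)
    n = len(ratings)
    return {label: round(100 * counts.get(score, 0) / n, 2)
            for score, label in SCALE.items()}

# Hypothetical ratings for one indicator
print(rating_percentages([5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 5]))
# → {'VU': 0.0, 'NF': 0.0, 'QD': 0.0, 'W': 28.57, 'SWI': 71.43}
```

Each indicator's row in the feasibility table is simply this dictionary restated as percentages.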
Fig. 5 Research design
Table 1 Expert criteria

Augmented reality content expert criteria:
- Lecturer or researcher
- Expert in the field of technology or information systems (especially in the field of augmented reality)
- At least 3 years of experience in AR research or development

BIPA content expert criteria:
- Lecturer or researcher
- Expert in the field of teaching BIPA
- At least 3 years of experience as a BIPA teacher
3 Results and Discussion

Before discussing the validation assessment of the AR and BIPA content, the researchers first tested the validity and reliability of the indicators that had been formed. The results of the validity test for each indicator are shown in Table 3. The null hypothesis (H0) in the validity test is that an indicator is invalid, while the alternative hypothesis (H1) is that it is valid. The test statistic is the Spearman rank correlation with a significance level (α) of 5%; an indicator is said to be valid if its p-value is smaller than the significance level. Based on Table 3, all the indicators formed are valid, so they can be carried forward to reliability testing. A reliability test was conducted to check the consistency of the indicators with respect to the variables. The null hypothesis is that a variable is unreliable and the alternative hypothesis is that it is reliable; the test statistic is Cronbach's alpha. Referring to Table 4, the reliability of both variables is above 70%, which indicates that the indicators established for both the AR and BIPA content measure these variables consistently. Because the existing indicators are valid and reliable,
Table 2 Instrument validation indicators (each scored on a scale of 1–5)

AR content validation indicator instrument:
- Compatibility of target image with prototype
- Sensitivity level and image reality
- Asset display
- 3D graphics display
- Completeness of features or application menu
- Suitability of the text description with the image in the application

BIPA validation indicator instrument:
- Systematic arrangement of materials
- Presentation of cultural aspects
- Use of words or diction
- Use of grammar
- Readability or adequacy of text
- Use of examples in the form of objects presented
- Elements of novelty and up-to-date information
- Interactive use of language
these indicators are appropriate to be used as a reference for the assessment of both the AR and BIPA content. Next, the researchers obtained the validation assessment results from the experts, shown in Table 5. The assessment of each indicator, for both the AR and BIPA content, is very feasible: as Table 5 shows, the "so worth it" column has a percentage above 50% throughout. In the assessment of the AR content, the prototype is rated very feasible, especially for the sensitivity and reality of the image. This means the prototype can detect an object accurately and represent it in two-dimensional form as an image. In the assessment of the BIPA content, the highest rating is for the use of examples in the form of the objects presented, which means the prototype has a diverse collection and can detect various objects. While in the AR assessment all indicators are rated very feasible, in the BIPA content one indicator, the use of interactive language, received a quite decent rating, albeit only 6.7%. This means that some experts judge that the use of language needs to be developed to make it more interactive.
Table 3 Instrument validity test results on AR and BIPA content (Spearman rank correlation, p-value, conclusion)

AR content:
- Compatibility of target image with prototype: 0.725, p = 0.003, Valid
- Sensitivity level and image reality: 0.646, p = 0.013, Valid
- Asset display: 0.791, p = 0.001, Valid
- 3D graphics display: 0.817, p = 0.000, Valid
- Completeness of features or application menu: 0.608, p = 0.021, Valid
- Suitability of the text description with the image in the application: 0.725, p = 0.003, Valid

BIPA:
- Systematic arrangement of materials: 0.756, p = 0.001, Valid
- Presentation of cultural aspects: 0.675, p = 0.006, Valid
- Use of words or diction: 0.850, p = 0.000, Valid
- Use of grammar: 0.834, p = 0.000, Valid
- Readability or adequacy of text: 0.833, p = 0.000, Valid
- Use of examples in the form of objects presented: 0.707, p = 0.003, Valid
- Elements of novelty and up-to-date information: 0.834, p = 0.000, Valid
- Interactive use of language: 0.703, p = 0.003, Valid
Table 4 Reliability test results on AR and BIPA content (Cronbach's alpha)
- AR content: 0.839 (high reliability)
- BIPA: 0.928 (very high reliability)
4 Conclusion

The assessment carried out on the AR content can be said to be very feasible because the AR is sensitive and renders the image realistically. This shows that the AR prototype can display an object correctly and represent it in two-dimensional form as images. In addition, the assessment of the BIPA content is also considered feasible, because the use of examples in the form of the objects presented is in accordance with the material being studied. The BIPA content does not reach the very feasible criteria because one indicator, the use of interactive language, has a quite decent rating of 6.7%. This means that there are still experts who judge
Table 5 Description of the feasibility assessment results on AR and BIPA content (percentage of ratings per category; unlisted categories received no ratings)

AR content validation indicator instrument:
- Compatibility of target image with prototype: W 35.71%, SWI 64.29%
- Sensitivity level and image reality: W 28.57%, SWI 71.43%
- Asset display: W 35.71%, SWI 64.29%
- 3D graphics display: W 42.86%, SWI 57.14%
- Completeness of features or application menu: W 35.71%, SWI 64.29%
- Suitability of the text description with the image in the application: W 35.71%, SWI 64.29%

BIPA validation indicator instrument:
- Systematic arrangement of materials: W 46.70%, SWI 53.30%
- Presentation of cultural aspects: W 26.70%, SWI 73.30%
- Use of words or diction: W 46.70%, SWI 53.30%
- Use of grammar: W 40.00%, SWI 60.00%
- Readability or adequacy of text: W 33.30%, SWI 66.70%
- Use of examples in the form of objects presented: W 20.00%, SWI 80.00%
- Elements of novelty and up-to-date information: W 40.00%, SWI 60.00%
- Interactive use of language: QD 6.70%, W 40.00%, SWI 53.30%

Notes: ratings are shortened to very unworthy (VU), not feasible (NF), quite decent (QD), worthy (W), and so worth it (SWI)
that the use of language needs to be developed to make it more interactive. This can serve as a guideline for further studies to develop more interactive language for this product.
References
1. Arisnawati N, Yulianti AI (2022) BIPA learning design based on Buginese culture 1:459–470
2. Azizah SN, Sukmawan S (2022) Tradisi Sodoran Tengger sebagai Alat Diplomasi Budaya Indonesia melalui Pembelajaran BIPA 5:619–630
3. Bing W (2017) The college English teaching reform based on MOOC. English Lang Teach 10:19. https://doi.org/10.5539/elt.v10n2p19
4. Bacca J, Baldiris S, Fabregat R, Graf S, Kinshuk (2014) Augmented reality trends in education: a systematic review of research and applications. Educ Technol Soc 17:133–149
5. Mardasari OR, Susilowati NE, Luciandika A, Nagari PM, Yanhua Z (2022) Applying augmented reality in foreign language learning materials: research and development. In: International seminar on language, education, and culture (ISoLEC 2021), vol 612. pp 253–258. https://doi.org/10.2991/assehr.k.211212.047
6. Rahma R, Nurhadi J (2017) Virtual reality: Sebuah Terobosan Pemanfaatan Media dalam Pembelajaran BIPA. Pros PITABIPA 2017:1–6
7. Defina (2021) Evaluasi Pembelajaran BIPA: Penilaian Pemelajar 18:203–221
8. Ansari BI, Saminan, Sulastri R (2018) Validation of prototype instruments for implementing higher order thinking learning using the IMPROVE method. J Phys Conf Ser 1088. https://doi.org/10.1088/1742-6596/1088/1/012113
9. Fast-Berglund Å, Gong L, Li D (2018) Testing and validating extended reality (xR) technologies in manufacturing. Procedia Manuf 25:31–38. https://doi.org/10.1016/j.promfg.2018.06.054
10. Manuri F, Pizzigalli A, Sanna A (2019) A state validation system for augmented reality based maintenance procedures. Appl Sci 9. https://doi.org/10.3390/app9102115
11. Fernández del Amo I, Erkoyuncu JA, Roy R, Palmarini R, Onoufriou D (2018) A systematic review of augmented reality content-related techniques for knowledge transfer in maintenance applications. Comput Ind 103:47–71. https://doi.org/10.1016/j.compind.2018.08.007
The Impact of Different Modes of Augmented Reality Information in Assisted Aircraft Cable Assembly Dedy Ariansyah, Khaizuo Xi, John Ahmet Erkoyuncu, and Bens Pardamean
Abstract Aircraft avionics systems are complicated systems that involve a high number of components and complex cable assembly procedures. To deal with this challenge, Augmented Reality (AR) has been proposed as an effective method to improve assembly efficiency and reduce error rates. In this paper, the authors developed AR-based cable assembly guidance and compared the impact of different modes of AR information. The results show that although AR outperformed the paper-document assembly guidance, the design of the AR information has a distinct influence on the successful completion of the cable assembly process. Keywords Augmented reality · Cable assembly · Industry 4.0 · Aircraft maintenance
D. Ariansyah (B) · B. Pardamean Bioinformatics and Data Science Research Centre, Bina Nusantara University, Jakarta 11480, Indonesia. e-mail: [email protected]
K. Xi School of Aerospace, Transport, and Manufacturing, Cranfield University, Bedford MK43 0AL, UK
J. A. Erkoyuncu Centre of Digital Engineering and Manufacturing, School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK
B. Pardamean Computer Science Department, BINUS Graduate Program Master of Computer Science Program, Bina Nusantara University, Jakarta 11480, Indonesia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_9

1 Introduction

Aircraft avionics systems are complicated systems that include communications, navigation, displays, and many control systems fitted to the aircraft. To enable the proper function of each system, cables are the key medium to transmit
signals and power between electronic devices in the aircraft. The cable network in the aircraft is a huge system in which each cable and connector is given a unique number to ensure that all systems are correctly connected and the cable network is laid out without errors. This is usually done with cable brackets: every cable has to go through the right bracket and be routed in the correct direction with the right excess length to ensure that the cable is securely plugged and not easily damaged. The main procedure for cable assembly therefore consists of recognising the right match between cable number and bracket and assembling the cable layout according to the design requirements. In general, the cable layout design considers not only ease of assembly and disassembly but also maintenance and other engineering requirements such as electromagnetic compatibility [1]. Given the high number of components, which differ in size, shape, and colour, cable assembly is a complex, time-consuming task that requires a highly skilled operator and sustained attention to complete the installation successfully. Although paper documents of the assembly outline are still widely used in aircraft assembly, many scholars have criticised this approach as ineffective, incurring a high cognitive load, and prone to errors [2–4]. To deal with this problem, a few scholars have shown that Augmented Reality is a promising technology to improve the effectiveness and efficiency of aircraft cable assembly. Industry has also applied AR in the cable assembly process. For example, Boeing has seen 30 and 90% improvements in production time and quality, respectively, in comparison to the traditional approach [5]. In a similar fashion, Airbus applied AR technology to enable downstream use of 3D information for supporting the interpretation of assembly instructions and minimising the likelihood of errors during execution [6].
Despite these promising results, there are still many gaps, both technical and non-technical, that need to be addressed before AR can be fully adopted to support the cable assembly process. In the literature, research in AR-assisted cable assembly is still in its infancy. Known challenges for effective AR implementations in industrial applications consist of hardware issues (e.g., information display resolution, field of view), software issues (e.g., tracking accuracy, authoring solutions, ease of calibration), human-factors issues (e.g., design of the interface, types of information, interaction) [7, 8], and knowledge management [8]. From the tracking aspect, the aircraft cable assembly environment lacks features for AR recognition. To improve the precision of AR tracking, one study investigated the integration of marker-based detection and Visual Simultaneous Localisation and Mapping (VSLAM) [4]. Recently, applied deep learning research was undertaken to improve the quality of cable bracket inspection and cable text reading using a Convolutional Neural Network (CNN) [2]. Besides tracking, creating and managing AR instructions, known as AR authoring, constitutes an important barrier for users. To address this issue, a cable assembly authoring tool was developed to help users without programming knowledge generate cable assembly procedures in AR [2]. Although some progress in AR tracking and authoring tools has been made in previous studies, the design of the information display to effectively guide users in completing the cable assembly has not been studied in the literature. In a previous study, different
types of AR information mode were found to affect the performance of a manual assembly task [6]. Therefore, this paper examines the impact of different AR information modes in cable assembly. Section 2 presents the development of the AR-assisted cable assembly, followed by a test case in Sect. 3. Section 4 discusses the findings and further research.
2 AR-Assisted Cable Assembly Development

An AR-assisted cable assembly system typically consists of modules to read the text printed on the cable and to display the bracket through which the cable must be inserted in the correct layout to meet the design requirements. The common approach to visualising assembly information involves textual information, such as the location of the bracket where the cable must be inserted, and the path of the cable routed through the brackets in a 3D representation. To replicate AR-assisted assembly, this paper used the Vuforia library for AR tracking and registration and Unity3D to manage the cable assembly data, AR visualisation, and user interface.
2.1 Software and Hardware Implementation

The key challenge in developing AR-assisted cable assembly is the object tracking and information registration process. To solve this, this paper combined two approaches to AR tracking: (1) marker-based tracking, in which a QR code attached to the test platform provides the initial identification of the camera's pose to achieve tracking registration, and (2) object-based tracking to compensate for the deviation of the camera's pose estimate caused by the camera moving away from the marker. Figure 1 shows the application of both approaches.
Fig. 1 Object and marker tracking for the cable bracket
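The combination of the two tracking approaches amounts to a simple pose-source selection: prefer the marker-based pose while the QR code is reliably visible, and fall back to the object-based estimate otherwise. A sketch of this logic (all class and function names here are hypothetical; the actual system implements this inside Unity3D with Vuforia):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    position: tuple      # estimated camera position
    rotation: tuple      # estimated camera orientation (quaternion)
    confidence: float    # tracker-reported quality in [0, 1]

def select_camera_pose(marker_pose: Optional[Pose],
                       object_pose: Optional[Pose],
                       min_confidence: float = 0.5) -> Optional[Pose]:
    """Prefer the marker-based pose (used for initial registration);
    fall back to object-based tracking when the camera moves away
    from the QR code and the marker pose is lost or unreliable."""
    if marker_pose is not None and marker_pose.confidence >= min_confidence:
        return marker_pose
    return object_pose

# The marker is lost here, so the object-based estimate is used instead
pose = select_camera_pose(None, Pose((0.1, 0.0, 1.2), (0, 0, 0, 1), 0.8))
print(pose.confidence)   # → 0.8
```

The confidence threshold is an assumption for illustration; in practice the tracker's own tracking-status flags would drive the switch.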
Fig. 2 Hardware setup and the test platform
2.2 Test Platform

The test platform consists of cable brackets installed on a flat panel (a whiteboard) to replicate the environment of the cable assembly process in the aircraft. The brackets were identical in size and colour but differed in orientation. The platform was designed to simulate the insertion of cables into the brackets according to a predefined cable layout. A monocular RGB camera and a PC were used to develop the AR-assisted cable assembly application. Figure 2 shows the setup of the test platform.
3 Test Case

Upon successful development of the AR-assisted cable assembly, this section describes how AR was used to display different modes of information and investigates their impact on cable assembly performance.
3.1 Task Settings

In the context of aircraft cable assembly, the complexity of the task varies depending on the complexity of the cable (e.g., its length, the number of brackets it must be assembled into, and its orientation) and the complexity of the assembly environment (e.g., the number of other cables in the same bracket and in the environment). To simulate the real situation as closely as possible while keeping the test within the time constraint, three levels of task complexity were set (Class A, Class B, Class C), chosen to represent increasing complexity of the cable assembly environment.
Fig. 3 Example of cable assembly layout with different complexities
In Class A, the number of cable brackets was five, and the direction of the cables was basically unidirectional. Since Class A was always the first to be tested, the cable did not interfere with others in the brackets and hence had the lowest environmental complexity. In Class B, the number of cable brackets was also five; however, it was more complex than Class A because the assembly did not simply run from left to right but involved detours and turns. In addition, Class B was always installed after Class A and hence intersected with the installation done in Class A. In Class C, the number of brackets was increased to six, and the direction was more difficult to predict. Since this type of cable was always installed at the end of the task sequence, the environment was more complex than in the previous classes (i.e., some cables share the same brackets). Figure 3 shows the different cable assembly configurations for each class.
3.2 AR Information Modes

Cable assembly instructions in AR are commonly shown as a combination of textual instruction, giving the code of the bracket the cable is tied to, and the path of the cable through the brackets. To assess the individual impact of each information modality in AR, the way assembly instructions were presented to the users was divided into three modes: full step aided assembly, single step aided assembly, and textual information aided assembly. Full Step. This mode displays the complete assembly instructions at once: bracket codes (e.g., cable_bracket_1 or cable_1), cable paths, and cable orientation. Textual instruction and a 3D model inform the location of the brackets, while a 3D line (red dashed line) is overlaid to depict the layout of the cable assemblies, along with yellow arrows indicating the direction of cable insertion. Figure 4 shows the Full Step aided assembly guidance in AR. Single Step. This mode displays the same information as the Full Step mode, except that it presents the assembly instructions step by step rather than all at once. The users were guided to assemble the cable from one bracket to another until the end of the sequence. This mode was considered because it could reduce the
Fig. 4 Full step aided assembly in AR
information overload that might be encountered by the users. Figure 5 shows the Single Step aided assembly guidance in AR. Textual Information. This mode displays the basic information contained in the paper document Assembly Outline (AO). Unlike the two previous modes, this visualisation displays the cable bracket codes and an image of the cable layout. Figure 6 shows the Textual Information aided assembly guidance in AR.
Fig. 5 Single step aided assembly in AR
Fig. 6 Textual information aided assembly in AR
3.3 Data Collection

To measure the effectiveness of the different AR information modes, nine users who had never used AR technology were invited to participate in the test. Each participant was asked to complete three sequences of cable assembly, from Class A to Class C, using the three types of AR information mode and the paper document, resulting in twelve different tasks. Each task was set to be different from the others to prevent users from gaining experience that could affect the accuracy of the test. Task performance was recorded in terms of completion time and the number of errors. The assembly time was measured from the beginning of the cable assembly to its completion. The number of errors was measured as the number of times an individual cable assembly was inconsistent with the assembly layout.
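The two performance measures reduce to a per-condition aggregation of trial records, the form plotted in Figs. 7 and 8. A sketch under assumed data (the trial values below are invented for illustration, not the study's measurements):

```python
from statistics import mean

# Hypothetical trial records: (information mode, task class, seconds, errors)
trials = [
    ("Full Step", "A", 55, 0), ("Full Step", "B", 70, 0),
    ("Single Step", "A", 60, 0), ("Single Step", "B", 75, 1),
    ("Textual", "A", 80, 1), ("Textual", "B", 95, 2),
    ("Paper AO", "A", 100, 1), ("Paper AO", "B", 120, 3),
]

def summarize(trials):
    """Mean completion time and mean error count per (mode, class) cell,
    the two measures reported per condition and task complexity."""
    groups = {}
    for mode, cls, secs, errs in trials:
        groups.setdefault((mode, cls), []).append((secs, errs))
    return {key: (mean(t for t, _ in recs), mean(e for _, e in recs))
            for key, recs in groups.items()}

print(summarize(trials)[("Paper AO", "B")])   # → (120, 3)
```

With the study's nine participants, each cell would average that participant group's trials rather than a single record.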
3.4 Results

The task performance for the different tasks is presented in Figs. 7 and 8. Due to the small number of participants, it was not possible to perform statistical analysis; nevertheless, some insights can be gained from this preliminary data to inform further research. From the assembly time and error results, it appears that all AR information modes led to shorter assembly times and fewer errors in comparison to the paper document across the different task complexities. There also appears to be a positive correlation between task complexity and the number of errors when using the AR textual instruction and the paper document: as the task complexity increases, the number of errors also increases.
Fig. 7 Mean time for different information mode (mean assembly time in 10² s for Full Step, Single Step, Textual Instruction, and Paper AO across Class A, Class B, and Class C)
Fig. 8 Mean error for different information mode (number of occurrences for Full Step, Single Step, Textual Instruction, and Paper AO across Class A, Class B, and Class C)
However, this trend was not observed in the Full Step and Single Step information modes. This seems to imply that AR textual information may reduce the time required to perform the assembly task, since users did not have to switch their attention between the assembly environment and the assembly instruction. However, since the layout was not overlaid on top of the brackets, users still tended to make mistakes when matching what they saw in the 2D content to the assembly environment [6]. Moreover, the additional information regarding the layout and the orientation of the cables, in the form of a 3D line and arrows, could help users increase the accuracy of the cable assembly. This observation appears to corroborate the finding in an AR-based warehouse order-picking application that graphical content increases the efficiency of information comprehension [9].
4 Conclusion

Augmented Reality technology proves useful for improving the efficiency and quality of cable assembly. This paper presents how different AR information modes affect the successful completion of cable assembly. The additional information was found to be helpful in improving the quality of cable assembly. Nevertheless, how the amount of AR information (Full Step vs. Single Step) affects users' workload and performance in completing the task was not pronounced in this study. This is possibly because the complexity of the task conducted in this study was too simple compared to the real application, where the number of brackets and the assembly environment are far more complex. Further study should proceed with a more complex setup, incorporating more objective and subjective measures as well as more participants, to gain further insight into how AR information should be designed relative to the level of task and environmental complexity in aircraft cable assembly. Furthermore, the study could also proceed with an analysis of different learning styles, so that operators acquire the information necessary to accomplish the task efficiently and effectively [10, 11].
References
1. Yang X, Liu J, Lv N, Xia H (2019) A review of cable layout design and assembly simulation in virtual environments
2. Zheng L, Liu X, An Z, Li S, Zhang R (2020) A smart assistance system for cable assembly by combining wearable augmented reality with portable visual inspection. Virtual Reality Intell Hardware 2:12–27
3. Zhang W, Chen C, Chen H, Sun G, Zhao S (2019) Application of augmented reality in satellite cable network assembly. In: IOP conference series: materials science and engineering. Institute of Physics Publishing
4. Chen H, Chen C, Sun G, Wan B (2019) Augmented reality tracking registration and process visualization method for large spacecraft cable assembly. In: IOP conference series: materials science and engineering. Institute of Physics Publishing
5. Boeing Homepage. https://www.boeing.com/features/2018/01/augmented-reality-01-18.page. Last accessed 24 Nov 2022
6. Serván J, Mas F, Menéndez JL, Ríos J (2012) Using augmented reality in AIRBUS A400M shop floor assembly work instructions. In: AIP conference proceedings. pp 633–640
7. Ariansyah D, Erkoyuncu JA, Eimontaite I, Johnson T, Oostveen AM, Fletcher S, Sharples S (2022) A head mounted augmented reality design practice for maintenance assembly: toward meeting perceptual and cognitive needs of AR users. In: Applied ergonomics. Elsevier Ltd
8. Prabowo H, Cenggoro TW, Budiarto A, Perbangsa AS, Muljo HH, Pardamean B (2018) Utilizing mobile-based deep learning model for managing video in knowledge management system. Int J Interact Mob Technol 12:62–73
9. Kim S, Nussbaum MA, Gabbard JL (2019) Influences of augmented reality head-worn display type and user interface design on performance and usability in simulated warehouse order picking. Appl Ergon 74:186–193
D. Ariansyah et al.
10. Pardamean B, Suparyanto T, Cenggoro TW, Sudigyo D, Anugrahana A (2022) AI-based learning style prediction in online learning for primary education. IEEE Access 10:35725–35735
11. Pardamean B, Sudigyo D, Suparyanto T, Anugrahana A, Wawan Cenggoro T, Anugraheni I (2021) Model of learning management system based on artificial intelligence in team-based learning framework. In: Proceedings of 2021 international conference on information management and technology, ICIMTech 2021. pp 37–42
Toward Learning Factory for Industry 4.0: Virtual Reality (VR) for Learning Human–Robot Collaboration Dedy Ariansyah, Giorgio Colombo, and Bens Pardamean
Abstract Industry 4.0 will transform not only manufacturing technologies but also the profile of the workforce. Education systems should be revised to prepare future graduates to embrace the knowledge of the ongoing revolution. Initiatives on the modification of curricula and tools to deliver the concepts of Industry 4.0 must be taken. This paper examines the limitations of current engineering tools and proposes a virtual reality (VR)-based learning platform to support teaching and learning activities for Industry 4.0, focusing on the design of Human–robot Collaboration (HRC). The development process began with the identification of general Intended Learning Outcomes (ILOs), which were extracted from a review of HRC design issues, followed by an evaluation of the current teaching tool. From these results, the new requirements of the tools to achieve the ILOs are redefined. The implementation of the system is demonstrated to show the feasibility of the proposed learning platform. This workflow can serve as the initial development of further learning platforms for the other innovative concepts that form the building blocks of Industry 4.0.

Keywords Virtual reality · Human–robot collaboration · Industry 4.0 · Learning factory
D. Ariansyah (B) · B. Pardamean Bioinformatics and Data Science Research Centre, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] G. Colombo Department of Mechanical Engineering, Politecnico di Milano, 20156 Milan, Italy B. Pardamean Computer Science Department, BINUS Graduate Program Master of Computer Science Program, Bina Nusantara University, Jakarta 11480, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_10
1 Introduction

Recent studies analysing the challenges that companies face in adopting Industry 4.0 technologies show that adoption has been hindered by the high financial resources required and a lack of qualified personnel [1, 2]. In this context, universities can play an important role in preparing the future workforce to acquire the knowledge and competency needed to meet the education and qualification requirements of the newly created jobs of Industry 4.0. Although some attempts have been undertaken to improve online education [3, 4] and knowledge management [5, 6], initiatives on the modification of curricula and tools to deliver the concepts of Industry 4.0 must still be taken.

One of the sectors of industrial practice that is going through modernisation is the production line, where smart robots work side by side with humans. The evolution of technologies in Industry 4.0 permits a new working paradigm in which human and robot (known as a collaborative robot) work as a team and physically share the workspace. However, the design of HRC (Human–robot Collaboration) entails multifaceted problems, and how it can be successfully learnt and taught in educational institutions needs to be addressed.

This paper presents the design of a virtual learning platform for Engineering students to acquire the knowledge and skills related to the design of HRC. The design process began by reviewing the existing literature on the issues that need to be considered for a successful implementation of HRC. These issues are categorized and treated as the Intended Learning Outcomes (ILOs) for the students to understand and to learn how to address some of the existing issues in HRC design. After that, the study proceeds to identify the limitations of the tool commonly used in academia to teach students to assess the feasibility of design and manufacturing planning, projected onto its use in HRC applications. Based on this evaluation, the new requirements of the tool are redefined, and a virtual learning platform and the implementation of the system in relation to the achievement of the ILOs are presented.
2 Design Issues in Human–Robot Collaboration

An increased level of immersion in a VR environment has been shown to be an effective approach to improving learning outcomes [7]. Specifically, VR has also been increasingly used to evaluate the implementation of HRC. Existing studies have investigated and tested frameworks to deploy robots as human co-workers [8, 9]. The potential of VR to immerse the user for the evaluation of HRC safety protocols has been demonstrated [10, 11]. Several studies have also used VR tools to evaluate the benefits of HRC implementation in terms of increased productivity and ergonomics [8, 12]. Other studies have utilized VR to understand the reaction of the human operator toward predictable and unpredictable movements of the robot [13], and the stress level and human performance associated with the movement speed
and robot trajectory [14, 15]. Furthermore, some studies [16, 17] also examined several subjective factors that contribute to the acceptability of HRC, such as the distance between human and robot and different working configurations. Although some issues of HRC design have been identified [8–17], those implementations were targeted at specific measurements, and hence it is unclear how they can be translated into learning objectives for students to acquire knowledge and skills related to the design of HRC. Therefore, the essential HRC aspects are collected in Table 1 (Sect. 4) and serve as the general ILOs for the students to develop a comprehensive understanding of the HRC design issues.

Table 1 Overview of the issues in the design of HRC

HRC aspects | Intended learning outcomes (ILOs) | New requirement | References
Integrability | Students understand the method to design a shared task between human and robot (e.g., working paradigm, collaborative effectiveness analysis, task allocation, robot and human trajectories, etc.) | The provision of 3D spatial perception and user viewpoint during the design process of robot trajectories to verify any possible occlusions to the worker's view caused by the movement of the robot arm | [8, 9]
Safety | Students know the method to design and evaluate active and adaptive procedures to ensure safe collaboration between human and robot | Real-time detection of human movement in the shared workspace for collision detection | [10, 11]
Performance evaluation | Students know how to evaluate the impact of HRC in terms of reduced cycle time, ergonomics, improved product quality, etc. | | [8, 12]
Human behavior | Students know how to carry out the assessment of human reaction toward different parameters of robot movements (e.g., speed, erratic movement, etc.) | The record of human body movement, biological signals, and subjective perceptions is necessary | [13–15]
User experience | Students know the method to assess human comfort and acceptance while working side by side with the robot (e.g., the position of the robot, distance from the robot, perceivable robot's movement, etc.) | The data of biological signals and subjective perceptions | [16, 17]
3 Digital Human Modeling (DHM) Tools

The configuration of a manufacturing or assembly line is usually subject to change to accommodate changes in product variants; therefore, production lines need to be flexible while still fulfilling the safety and ergonomic aspects of the human worker. In Engineering courses that deal with the ergonomic assessment of manual work operations, students are usually taught to use DHM tools to assess the performance of a certain manufacturing configuration involving a human, and to examine the fitness of the task and workplace for human safety and health.
3.1 Test Case on Human Simulation

In this project, three Engineering students who had attended the Human Modelling course used digital tools learned during the course to conduct an ergonomic analysis of a manual assembly task. The task involved modelling the virtual components of the assembly task (e.g., workbench, assembled parts, container, conveyors, etc.) using 3D modelling software and importing the models into human simulation software for ergonomic analysis. The software used for the ergonomic analysis was Jack 8.01 student version from Siemens [18]. The configuration of the assembly system consisted of an adjustable workbench that permits the containers to be positioned close to and easily reachable by the worker. The assembled product was a butterfly valve, an industrial product subject to high product variance. To simulate the assembly process, the same types of assembled components were placed in different containers, and the ergonomic analysis was performed during component pick-up, where the worker remained seated at a predefined distance from the workbench.

For the ergonomic assessment, Rapid Upper Limb Assessment (RULA) [19] was used to assess whether the sequence of component pick-ups for assembly involved any awkward postures. The goal of this assessment was to allow students to evaluate the current workbench, identify the optimal configuration throughout the assembly task for the human worker, and determine in which way the presence of a robot could improve their performance and the ergonomics of the task. The ergonomic analysis investigated the effects of different anthropometric data and different workbench configurations. The human data from the ANSUR database, which is the default database in Jack, was used considering gender difference, with the 50th percentile representing the population average and the 5th and 95th percentiles the lower and upper limits.
Figure 1 shows that the ergonomic analysis for one of the picking tasks produces different RULA scores for males and females. All male profiles can pick the object with medium risk, while for the 50th and 5th percentile female profiles, the posture imposes a high risk of musculoskeletal disorders. In addition, different workbench configurations were assessed on the 50th percentile male profile to find the best configuration for the average male population, as shown in Fig. 1b. Through the RULA
Fig. 1 The impact of different anthropometric models (a) and different working conditions (b) on the RULA score
assessment, it is evident that the working condition can still be improved, as several postures still have a RULA score of 3 (medium risk; further investigation, change soon). The workbench can be improved, for example, by incorporating a robot arm to improve the worker's ergonomics in two ways: (1) the robot arm can hand over components that are not easily reachable, preventing non-ergonomic postures, and (2) it can perform assembly operations that are not ergonomic for the worker (e.g., screwing).
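To make concrete how a RULA component score follows from a posture angle, the sketch below maps an upper-arm flexion angle (here derived from 3D joint positions) to the RULA upper-arm score band with the standard posture adjustments. This is a hedged illustration in Python, not the scoring built into Jack; all function names are our own, the angle here does not distinguish extension from flexion, and the thresholds follow the published RULA upper-arm bands.

```python
import math

def angle_deg(u, v):
    """Angle between two 3D vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def upper_arm_flexion(shoulder, elbow, trunk_down):
    """Upper-arm angle relative to the trunk's downward axis
    (extension vs. flexion is not distinguished in this sketch)."""
    arm = tuple(e - s for s, e in zip(shoulder, elbow))
    return angle_deg(arm, trunk_down)

def rula_upper_arm_score(flexion_deg, shoulder_raised=False,
                         abducted=False, arm_supported=False):
    """Map the flexion angle to the RULA upper-arm score band (1-4),
    with the standard +1/-1 posture adjustments."""
    if flexion_deg <= 20:
        score = 1          # near-neutral arm position
    elif flexion_deg <= 45:
        score = 2
    elif flexion_deg <= 90:
        score = 3
    else:
        score = 4
    score += (1 if shoulder_raised else 0) + (1 if abducted else 0)
    score -= 1 if arm_supported else 0
    return max(score, 1)
```

An arm hanging straight down scores 1; an arm raised to shoulder height (about 90° of flexion) scores 3, matching the "change soon" band observed for several picking postures above.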
3.2 Case on Human–Robot Collaboration

To perform the ergonomic assessment of a human workspace that involves a robot in the manufacturing process, the common practice is to use a robot package that is integrated with the human simulation software. This module allows the user to import the 3D model of the robot, program the trajectory of the robot offline, and simulate the working procedure of the robot in partnership with the digital human. Nevertheless, previous studies investigating the HRC working paradigm using this type of simulation software showed discrepancies between the simulated and the actual evaluation results. This implies that the conventional tool has some limitations and might not be effective for teaching purposes, for at least the following reasons:

1. The visualization is limited to a 2D representation, and the simulation of the human task is done through a virtual human that does not correspond to actual human data. This restricts the user from perceiving the spatial relationships concerning the space, the distance, and possible occlusions of the worker's view in a complex working environment (i.e., an HRC setting).
2. The assignment of the digital human posture for simulating the task may not reflect the actual posture in the physical workspace, as the user may interact differently during an encounter with the robot in a collaborative setting.
3. The posture of the virtual human performing the task needs to be assigned manually; hence, the ergonomic simulation is done one step at a time for each predefined posture. This approach may demotivate students and hamper the learning process, as the user needs to spend more time and effort to find the optimal working configuration when a complex environment involving robot movement is considered.
4. Human simulation software may not support integration with virtual reality visualization devices such as the Oculus Rift, which also limits the involvement of multiple users for collaborative and co-creative problem solving during the learning activities.
4 Development of a Virtual Learning Platform The analysis of the HRC design issues resulted in the new requirements for the HRC design tool which can be addressed by deploying VR tools to establish a new learning platform (See Table 1).
4.1 Implementation

This section shows the implementation of a new learning platform based on VR tools. The new task for the students is to design a collaborative working paradigm that involves a robot to improve ergonomics (e.g., avoid musculoskeletal disorders) while taking into consideration the safety and user experience aspects of the working condition in a manual assembly task (see Sect. 3.1), and to perform assessments to verify the increased productivity of the new design solution. Figure 2a depicts the hierarchy of building blocks for the implementation of the VR platform. The first step is to prepare the 3D models used in the VR platform. Figure 2b shows the new learning platform for the assembly task in the virtual environment. Unity3D [20] was used as the VR engine to build the application. The 3D virtual objects such as the workbench, robot, valve components, warehouse, etc. were modelled in SolidWorks [21] and Rhinoceros [22]. These commercial packages permit CAD models to be exported into a Unity3D-supported file format (.fbx). The human avatar was modelled using MakeHuman [23], an open-source tool that provides a parameterized mesh for human representation and a master skeleton or 'rig' that is compatible with Unity3D.

The next step proceeds with the setup of VR interaction. The VR headset Oculus Rift DK2 [24] was employed as a visualization device to provide a first-person controller and
Fig. 2 Building blocks of the virtual learning platform (a); working environment in VR with hand and body tracking using Kinect V2 and Leap Motion (b)
3D stereo views to the user. Unlike the traditional approach, this new learning platform immerses the user in a VR environment and allows the user to experience the 3D spatial working condition. In the current implementation, the hand detection sensor Leap Motion [25], attached to the Oculus, was used to acquire hand data in the real environment, and the detected hand movements were used via the SDK to control the virtual hands in the virtual environment. This implementation allows the user to pick the assembly components from the blue containers; however, the absence of haptic feedback makes it difficult to grasp the virtual components and perform the assembly. Therefore, the acquisition of VR gloves and the testing of their implementation will be the object of future research. Furthermore, the Microsoft Kinect v2 [26] was used to track human skeletal data, and the acquired data were used to control the movement of the human avatar. The communication between the Kinect and Unity3D is established using the Microsoft Kinect SDK. In contrast to the traditional approach, the VR platform involves the user interacting directly with the manual assembly task, which leads to a more accurate reflection of the actual posture in the physical workspace. Figure 2b shows the devices used to immerse the user in the VR environment.
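The tracked skeletal data also lends itself to a coarse human–robot safety check, in the spirit of the adaptive safety procedures studied for HRC [10]. The sketch below is our own illustration (not part of the implementation described here, and in Python rather than the platform's C#): the minimum separation between any tracked human joint and any robot joint position scales the robot's speed, with a full stop below a configurable threshold.

```python
import math

def min_separation(human_joints, robot_joints):
    """Smallest Euclidean distance between any tracked human joint
    and any robot joint position (a coarse interference proxy)."""
    return min(math.dist(h, r) for h in human_joints for r in robot_joints)

def speed_factor(separation, stop_dist=0.3, slow_dist=1.0):
    """Adaptive safety sketch: full stop inside stop_dist,
    full speed beyond slow_dist, and a linear slowdown in between."""
    if separation <= stop_dist:
        return 0.0
    if separation >= slow_dist:
        return 1.0
    return (separation - stop_dist) / (slow_dist - stop_dist)
```

The distance thresholds here (0.3 m and 1.0 m) are placeholders; in a real cell they would follow the applicable safety standard and the robot's stopping distance.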
Fig. 3 The model of the 6-axis virtual robot arm and a 2D schematic of the robot joints
The next implementation concerns the methods to model and control the virtual robot arm. The virtual robot used in this study was an articulated robot represented by a series of rigid bodies linked to each other by joints. The type of joint considered in this work was the rotational joint, in which the movement of the joint is constrained to an angle around a specific axis. The 3D model of the robot, consisting of multiple links, was imported into Unity3D. These 3D graphical objects of the robot arm were modelled as a hierarchy in which the joints from base to end effector are structured in a parent–child relationship (P0–P1–P2–P3–P4–P5), as shown in Fig. 3. With this hierarchical system, the rotation of one joint affects all subsequent links. Controlling the robot can be achieved in two ways: (1) forward kinematics and (2) inverse kinematics. This work used inverse kinematics to control the robot in the virtual environment, and the mathematical model was programmed in Unity3D using C#. For simplification purposes, the schematic of the joints is presented in 2D form, as shown in Fig. 3. The approach used to solve the inverse kinematics was adopted from an implementation of the gradient descent method [27], a general optimization algorithm that is easier to program than the common analytical approach using Denavit-Hartenberg parameters [28]. Solving the inverse kinematics using gradient descent implies a computation that minimizes the distance between the target position T and the end effector, given by (1). The position of the end effector can be calculated by the forward kinematics formula in (2), which takes the angle of each joint of the robotic arm as a parameter. Equation (2) means that the position of joint P_i is equal to the position of joint P_(i−1) plus the rotation of the link of length l_i around the rotation axis at P_(i−1) by the accumulated angle.

D = T − Effector (1)

P_i = P_(i−1) + rotate(l_i, P_(i−1), Σ_{m=0}^{i−1} θ_m) (2)

The objective is to find the angles θ_i that minimize the distance between the target position and the end effector. To do this, the first step is to calculate the gradient ∇f at the current angle θ using the finite-difference estimate of the partial derivative indicated in (3).

∇f(θ) = (f(θ + u) − f(θ)) / u (3)

Since the virtual robot used in this study has six joints, the gradient ∇f must be computed for each of the parameters θ_1, θ_2, θ_3, θ_4, θ_5, θ_6. Once the estimated derivative is computed, each joint angle is updated following (4).

θ_(i+1) = θ_i − L ∇f(θ_i) (4)
where L is the learning rate that determines how fast the gradient descent approaches the solution. This method was tested successfully by assigning a target position controlled by a mouse cursor; each joint angle was updated automatically so that the end effector followed the target position defined by the user.

The final step is related to the development and integration of assessment modules/tools to quantify the quality of HRC. Firstly, a collision avoidance system is required to ensure the safety of the human while sharing the workspace with the robot. Two safety approaches, namely adaptive and active safety procedures, can be developed [10]. The former relates to the control algorithm of the robot movement when the human body enters the working area of the robot and interference is about to occur [29]. The latter relates to the provision of a warning or alert to the human through a unimodal or multimodal interface (e.g., visual, audio, or haptic feedback) to indicate the working area of the robot. Secondly, the awkward postures exhibited by the operator during the manual task, which may cause a risk of musculoskeletal disorders, must be assessed. An analytical tool to assess ergonomic posture based on RULA can be implemented using data from the Microsoft Kinect v2 and Leap Motion [30, 31]. This tool automatically calculates the RULA score in real time, allowing the user to assess whether the working posture is safe while maximizing task performance. Thirdly, the extent to which the collaborative framework has increased the performance of the human worker is quantified: human errors, variability, and time are calculated and recorded automatically. Fourthly, the cognitive processes that drive the behavior, experience, and attitude of the human operator while interacting with the robot are assessed.
Some of the high cognitive activities related to increased workload and risk perception while interacting with the robot have been shown to be measurable through skin conductance and heart rate [15, 17]. Using the setup of the VR system developed in a previous study [32], these physiological data can be obtained and analyzed in relation to the motion of the robot arm and the performance of the operator during collaborative sessions. Finally, the programming environment in Unity3D allows students to learn how to program the trajectory of the robot and adjust the motion parameters for safety implementation.
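Returning to the inverse-kinematics solver, the gradient-descent loop of Eqs. (1)–(4) can be sketched in a few lines. The following is an illustrative Python version for a planar arm (the implementation described above was written in C# inside Unity3D); all names and numeric parameters here are our own choices, not the paper's.

```python
import math

def forward_kinematics(lengths, angles):
    """Eq. (2): each joint position equals the previous joint position
    plus the link rotated by the accumulated joint angles (planar case)."""
    pts = [(0.0, 0.0)]
    total = 0.0
    for length, theta in zip(lengths, angles):
        total += theta
        x, y = pts[-1]
        pts.append((x + length * math.cos(total),
                    y + length * math.sin(total)))
    return pts

def distance(lengths, angles, target):
    """Eq. (1): distance between the target and the end effector."""
    ex, ey = forward_kinematics(lengths, angles)[-1]
    return math.hypot(target[0] - ex, target[1] - ey)

def solve_ik(lengths, angles, target, rate=0.01, u=1e-4, iters=5000):
    """Estimate the partial derivative of the distance for each joint
    (Eq. 3) and step each angle against it (Eq. 4)."""
    angles = list(angles)
    for _ in range(iters):
        if distance(lengths, angles, target) < 1e-3:
            break
        for i in range(len(angles)):
            f0 = distance(lengths, angles, target)
            angles[i] += u
            grad = (distance(lengths, angles, target) - f0) / u  # Eq. (3)
            angles[i] -= u
            angles[i] -= rate * grad                             # Eq. (4)
    return angles
```

For a two-link arm with unit links, the loop converges to a reachable target within a few thousand iterations; the learning rate L (here `rate`) trades convergence speed against oscillation around the target.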
5 Conclusion

Manufacturing companies will increasingly need a future workforce able to work alongside new technologies. A comprehensive understanding of how these new technologies work, and of how to help students acquire the required knowledge and skills, is needed to fill the gap between education and industry. This paper analyses the limitations of current engineering tools and proposes a virtual learning platform to support a new way of teaching and learning for Industry 4.0, focusing on HRC. The platform is developed based on the identification of the HRC design issues and the evaluation of the current tools against the learning objectives. Future work will focus on user testing to evaluate the effectiveness of the learning platform.
Acknowledgements The authors would like to acknowledge the participation of Peng Bo Cong, Shi Lei Lyu, and Camille Sebastien Ronan Feuntun for the evaluation of Jack software.
References

1. Benešová A, Tupa J (2017) Requirements for education and qualification of people in Industry 4.0. Procedia Manuf 11:2195–2202
2. Benešová A, Hirman M, Steiner F, Tupa J (2018) Analysis of education requirements for electronics manufacturing within concept Industry 4.0. In: 41st international spring seminar on electronics technology (ISSE)
3. Pardamean B, Suparyanto T, Cenggoro TW, Sudigyo D, Anugrahana A (2022) AI-based learning style prediction in online learning for primary education. IEEE Access 10:35725–35735
4. Gromang YB, Rudhito MA, Sudigyo D, Suparyanto T, Pardamean B (2023) The development of video analysis instrument to determine teacher's character. AIP Conference Proceedings 2594, 130003
5. Prabowo H, Cenggoro TW, Budiarto A, Perbangsa AS, Muljo HH, Pardamean B (2018) Utilizing mobile-based deep learning model for managing video in knowledge management system. Int J Interact Mob Technol 12:62–73
6. Pardamean B, Sudigyo D, Suparyanto T, Anugrahana A, Wawan Cenggoro T, Anugraheni I (2021) Model of learning management system based on artificial intelligence in team-based learning framework. In: Proceedings of 2021 international conference on information management and technology, ICIMTech 2021. pp 37–42
7. Bharathi AKBG, Tucker CS (2015) Investigating the impact of interactive immersive virtual reality environments in enhancing task performance in online engineering design activities
8. Heydaryan S, Bedolla JS, Belingardi G (2018) Safety design and development of a human-robot collaboration assembly process in the automotive industry. Appl Sci (Switzerland) 8
9. Malik AA, Bilberg A (2017) Framework to implement collaborative robots in manual assembly: a lean automation approach. In: Annals of DAAAM and proceedings of the international DAAAM symposium. Danube Adria Association for Automation and Manufacturing, DAAAM, pp 1151–1160
10. Matsas E, Vosniakos GC, Batras D (2018) Prototyping proactive and adaptive techniques for human-robot collaboration in manufacturing using virtual reality. Robot Comput Integr Manuf 50:168–180
11. Matsas E, Vosniakos GC (2017) Design of a virtual reality training system for human–robot collaboration in manufacturing tasks. Int J Interact Des Manuf 11:139–153
12. Michalos G, Makris S, Spiliotopoulos J, Misios I, Tsarouchi P, Chryssolouris G (2014) ROBOPARTNER: seamless human-robot cooperation for intelligent, flexible and safe operations in the assembly factories of the future. In: Procedia CIRP. Elsevier B.V., pp 71–76
13. Oyekan JO, Hutabarat W, Tiwari A, Grech R, Aung MH, Mariani MP, López-Dávalos L, Ricaud T, Singh S, Dupuis C (2019) The effectiveness of virtual environments in developing collaborative strategies between industrial robots and humans. Robot Comput Integr Manuf 55:41–54
14. Koppenborg M, Nickel P, Naber B, Lungfiel A, Huelke M (2017) Effects of movement speed and predictability in human–robot collaboration. Hum Factors Ergon Manuf 27:197–209
15. Arai T, Kato R, Fujita M (2010) Assessment of operator stress induced by robot collaboration in assembly. CIRP Ann Manuf Technol 59:5–8
16. Castro PR, Högberg D, Ramsen H, Bjursten J, Hanson L (2019) Virtual simulation of human-robot collaboration workstations. In: Advances in intelligent systems and computing. Springer, pp 250–261
17. Weistroffer V, Paljic A, Fuchs P, Hugues O, Chodacki JP, Ligot P, Morais A (2014) Assessing the acceptability of human-robot co-presence on assembly lines: a comparison between actual situations and their virtual reality counterparts. In: IEEE RO-MAN 2014—23rd IEEE international symposium on robot and human interactive communication. Institute of Electrical and Electronics Engineers Inc., pp 377–384
18. Jack Siemens. https://www.plm.automation.siemens.com/media/store/en_us/4917_tcm10234952_tcm29-1992.pdf. Last accessed 20 May 2022
19. Vaidya R, McAtamney L, Corlett EN (1993) RULA: a survey method for the investigation of work-related upper limb disorders
20. Unity Homepage. https://unity3d.com. Last accessed 20 May 2022
21. Solidworks Homepage. https://www.solidworks.com. Last accessed 20 May 2022
22. Rhinoceros Homepage. https://www.rhino3d.com. Last accessed 20 May 2022
23. MakeHuman Homepage. http://www.makehumancommunity.org. Last accessed 20 May 2022
24. Meta Homepage. https://www.meta.com/quest/. Last accessed 24 Nov 2022
25. UltraLeap Homepage. https://www.ultraleap.com/. Last accessed 24 Nov 2022
26. Microsoft Kinect Homepage. https://learn.microsoft.com/en-us/windows/apps/design/devices/kinect-for-windows. Last accessed 20 May 2022
27. Inverse Kinematic for Robotic. https://www.alanzucconi.com/2017/04/10/robotic-arms/. Last accessed 20 May 2022
28. Corke PI (2007) A simple and systematic approach to assigning Denavit-Hartenberg parameters. IEEE Trans Rob 23:590–594
29. Morato C, Kaipa KN, Zhao B, Gupta SK (2014) Toward safe human robot collaboration by using multiple kinects based real-time human tracking. J Comput Inf Sci Eng 14
30. Plantard P, Shum HPH, le Pierres AS, Multon F (2017) Validation of an ergonomic assessment method using Kinect data in real workplace conditions. Appl Ergon 65:562–569
31. Haggag H, Hossny M, Nahavandi S, Creighton D (2013) Real time ergonomic assessment for assembly operations using Kinect. In: Proceedings—UKSim 15th international conference on computer modelling and simulation, UKSim 2013. pp 495–500
32. Ariansyah D, Caruso G, Ruscio D, Bordegoni M (2018) Analysis of autonomic indexes on drivers' workload to assess the effect of visual ADAS on user experience and driving performance in different driving conditions. J Comput Inf Sci Eng 18
Analysing the Impact of Support Plans on Telehealth Services Users with Complex Needs Yufeng Mao and Mahsa Mohaghegh
Abstract The COVID-19 pandemic had a negative impact on people's mental health. This study analysed 430,969 contacts made to more than 100 telehealth services by 1064 callers with more complex needs who had tailored support plans in place between 31 December 2017 and 3 March 2022. The study aimed to investigate the characteristics of these callers with complex needs, identify caller types, and explore whether the support plans effectively reduced the callers' calling demands. A mixed-methods approach was used: a descriptive analysis explored callers' socio-demographics and calling demand patterns; a K-prototype clustering algorithm grouped callers into four clusters (The Former Users, The Loyal Users, The High Frequency Users and The One-Off Users); and a randomisation test compared changes in callers' calling behaviours across different periods. This study presents insights into calling demand patterns and identifies calling behaviours of callers seeking support via helplines. The results showed that callers' average daily contact frequency was significantly reduced due to the effect of the support plans. However, the support plans had only a small influence on reducing the calling frequency of callers in The Loyal Users and The High Frequency Users clusters, which suggests that callers from these clusters may need additional support.

Keywords Pandemic · Mental health · Telehealth services · Clustering · Caller behaviour
Y. Mao · M. Mohaghegh (B) Auckland University of Technology, Auckland, New Zealand e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_11

1 Introduction

The COVID-19 pandemic outbreak across the world has also impacted mental health globally. People's usual activities, routines or livelihoods are affected by preventive
measures for COVID-19, such as self-isolation and quarantine. Thus, levels of loneliness, depression, harmful alcohol and drug use, and self-harm or suicidal behaviour are expected to increase [32]. In addition to these mental health observations, the World Health Organization (WHO) also has concerns about the disruption of health services worldwide due to the pandemic. Telehealth is therefore believed to play a critical role in today's scenario. Telehealth allows people to receive psychological and mental health services through phone calls, video, email, or other telecommunications technologies, and has been found effective, feasible and acceptable in delivering mental health care [25]. People seeking social or emotional support can easily access various helplines at home. Understanding callers' calling behaviour and demands has been the subject of many studies; it is valuable for operational purposes and may help mental health services understand how crisis helplines can be used to support population-level well-being [30].

In March 2020, New Zealand took several strong control measures to prevent pandemic outbreaks, including an Alert Level 4 lockdown from 25 March 2020 to 27 April 2020. Due to the high restrictions, there were increases in self-reported mental health problems in the community. According to the COVID-19 psychosocial and mental well-being recovery plan published by the Ministry of Health [17], the expansion of mental health services in the community was supported, including increased capacity for mental health and addiction telehealth services and the funding and promotion of several telehealth support lines.

A call log dataset was gathered covering contacts made to various telehealth services by 1064 callers with complex needs between 31 December 2017 and 3 March 2022. While only 1064 people make up this group of callers with complex needs, they made more than 400,000 calls across the four-year study period. They are people who might need tailored support.
Callers in the broad group may be identified as having more complex needs, which may include some of the following indicators:
• Callers' underlying needs may not be met via traditional telehealth approaches or brief intervention models, and they often contact telehealth services more frequently and persistently.
• These callers may be socially isolated and negatively impacted by poor health, mental health challenges, and other psychosocial difficulties.
• Callers who do have more complex needs may benefit from more holistic support and tailored support plans.
When a caller is identified as requiring additional tailored support, the telehealth team will assess the caller's needs and then develop a plan that helps frontline call staff provide targeted support to improve the outcomes for the service user. A plan usually captures the agreed process for providing the best possible support to a caller with more complex needs. This plan supports the service user and may also minimise the impact of potential high-frequency calling on the service. It consists of simple instructions and directive information developed in consultation with the caller and their medical and mental health provider. This study aims to provide a better understanding of callers with complex needs and inform appropriate support for these callers. The two purposes of this study are (1) exploring the patterns,
Analysing the Impact of Support Plans on Telehealth Services Users …
behaviours and characteristics of complex callers and grouping them into several clusters, where callers in the same cluster have similar characteristics but differ from callers in other clusters based on their demographic information and calling patterns; and (2) identifying changes in callers' call patterns before and after the intervention of support plans.
2 Related Work
The socio-demographics of helpline callers have been studied for many years. Spittal et al. [27] applied multivariate logistic regression to identify the characteristics of frequent callers from an anonymous dataset of calls made between December 2011 and May 2013 to Lifeline (the largest crisis helpline in Australia). Their descriptive analysis of over 411,000 calls found that frequent callers account for 3% of all callers but make 60% of calls. They also found that males and transgender people have a slightly greater risk of being frequent callers than females, and that people with identified mental health problems are more likely to become frequent callers. Several studies found that people who frequently call helplines may have complex social, physical, and mental health needs and a heavy and unhelpful reliance on telehealth services [23]. These studies also showed that frequent callers have mental health problems such as depression, anxiety, and suicidality, or have physical illnesses [23, 27]. They may also use other healthcare services such as GPs, psychologists, psychiatrists, and emergency departments. However, some researchers believe that other areas of healthcare are not meeting frequent callers' needs, so they keep calling helplines looking for social support. In New Zealand, the proportion of females who experience psychological distress is 1.4 times higher than that of males [16]. This rate is almost double in the New Zealand youth group [14]. One study of three youth telephone helplines in the UK found a similar pattern, with two-thirds of helpline callers being female. A scoping review of gender differences in the usage of crisis helplines also found that in most studies women represented 51–66% of calls, and frequent callers are more likely to be female [11].
Another review of gender differences in gambling helplines found that in some earlier studies males represented the highest proportion of gambling helpline callers; however, female contact with gambling helplines was noted to increase in more recent research [26]. Alternatively, two helpline studies in Bangladesh reported that most callers are male. This may be explained by males in Bangladesh having greater resources to access the services, while females are expected to focus on family issues [6, 7]. Beyond the male/female distinction, very few studies have discussed transgender or gender-diverse callers. Transgender and gender-diverse people have been found to have higher rates of mental health problems than cisgender people [28, 29]. A recent New Zealand COVID-19-related study found that contact demand from gender-diverse populations increased sharply, by 51.3%, between January and March 2020 [21]. The demands of gender-diverse populations
Y. Mao and M. Mohaghegh
seeking mental health help should also be taken seriously. Based on these studies, gender differences among helpline callers may vary due to social and cultural factors, and gender inequality in access to telehealth services is evident in many studies. O'Neill et al. [20] looked for patterns and behaviours of all callers who contacted Samaritans Ireland seeking mental health support between April 2013 and December 2016. A total of 3.449 million calls over the four years were analysed. Their study used K-means clustering to group callers based on their calling patterns (number of calls, mean call duration and standard deviation of call duration). Callers in the same cluster have similar features compared with other clusters. Their study suggested that 5 or 6 clusters are reasonable for their data. Each cluster was interpreted to explain callers' behaviour. For example, one cluster, the 'typical callers', contained about 40–50% of callers; they usually called five or six times with a short 3–4 min conversation. Another cluster, the 'one-off chatty callers', grouped people who call only once or twice with a long 30-min to 1-h conversation and do not contact again. Building on this study and its clustering solution, Turkington et al. [30] explored how callers' patterns changed in each cluster before and during COVID-19 on a week-by-week basis. They found that callers' behaviour changed as a result of COVID-19: callers made more calls with longer conversation times during the COVID-19 period than before it.
3 Methodology
This study applied a mixed method to achieve the project objectives. Descriptive analysis helped identify the socio-demographics of callers with complex needs and understand how the patterns of contact demand change with respect to callers' gender, ethnicity, and age group. The K-Prototype clustering method was applied to identify caller types based on their demographic information and contact patterns. Finally, a non-parametric randomisation test was conducted to explore how callers' contact patterns change under the intervention of support plans and to determine what types of callers may need additional support.
3.1 Dataset
A call log dataset of contacts made by 1064 callers with complex needs between 31 December 2017 and 3 March 2022 across various New Zealand telehealth services was recorded. A total of 430,969 contacts were analysed. The call dataset includes a date-time stamp for each contact, the interaction type (whether made via phone call or SMS), the duration of each call, the user ID and the user's demographic information, and the telehealth service the contact was made to. Over 100 telehealth services were involved in the call log dataset. They were categorised into seven types: Mental Health Services, Mental Health Crisis Line, Family and Sexual Harm Services, Health Services incl
COVID, Smoking Cessation Helpline, Poisons, and Care Coordination. Due to the sizeable number of missing values in call duration, and to preserve data authenticity, we did not impute missing call durations. Further analysis of call duration is therefore based only on the 254,877 calls with duration recorded. A user dataset was also extracted from the call log dataset. The user dataset contains the demographic information of the 1064 callers and their contact behaviours. These features include the caller's ID; gender (female, male, gender diverse and unknown); ethnicity (NZ European, Māori, Asian, Pacific Peoples, MELAA and unknown); and age, grouped into seven categories: 17 and under, 18–29, 30–39, 40–49, 50–59, 60 and above, and unknown. We also defined several periods within the study period. A caller's active period is the period between their first and last contact during the study period. A caller's non-active period is the period between their last contact and 3 March 2022. Each caller's cumulative number of calls and daily contact frequency were calculated (e.g. a user who made a total of 1215 contacts during an active period of 1521 days has a daily contact frequency of 0.8 contacts per day).
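The active period, non-active period and daily contact frequency defined above can be derived directly from a caller's contact timestamps. A minimal sketch in Python (the function name and the list-of-dates input are illustrative assumptions, not the study's actual code or schema):

```python
from datetime import date

STUDY_END = date(2022, 3, 3)  # last day of the study period

def contact_features(contact_dates):
    """Derive a caller's active period (days between first and last
    contact), non-active period (days since last contact) and daily
    contact frequency from a list of contact dates."""
    first, last = min(contact_dates), max(contact_dates)
    active_days = (last - first).days or 1  # guard against a single-day history
    non_active_days = (STUDY_END - last).days
    daily_frequency = len(contact_dates) / active_days
    return active_days, non_active_days, daily_frequency
```

For instance, a caller with 1215 contacts over a 1521-day active period would have a daily contact frequency of about 0.8, matching the worked example above.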
3.2 Clustering Analysis
Clustering is a data mining technique used to group a set of data objects into clusters such that objects in the same cluster are more similar to each other than to objects in other clusters [8]. Some traditional clustering algorithms can deal only with numerical datasets, such as the K-means algorithm, or only with categorical datasets, such as the K-modes algorithm. This is a limitation when clustering a mixed-type dataset. The problem with clustering a mixed dataset is that dissimilarity is defined differently for numerical and categorical variables. Although label encoding can convert categorical variables into numerical ones, it loses the original meaning of the data [12]. We applied the K-Prototype algorithm, which combines K-means and K-modes to handle mixed-type data: the dissimilarity between numerical variables is calculated by Euclidean distance, and between categorical variables by simple matching distance. All numerical variables were standardized to a mean of 0 and a standard deviation of 1 before we ran the clustering algorithm. To determine which attributes are suitable for clustering, we measured feature importance by fitting a K-Prototype model using all features in the user dataset. The misclassification rate for each variable was calculated to determine whether the variable has a strong influence on the clustering result. We finally selected users' active period, non-active period, cumulative number of calls and age group for clustering. Users' active and non-active periods represent their stickiness in using the services, the number of contacts indicates their call volume, and the age group reflects that users in the same age group might share similar contact behaviours.
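The mixed dissimilarity at the heart of K-Prototype can be written as the squared Euclidean distance over the standardized numeric attributes plus a weight γ times the count of mismatching categorical attributes. A minimal sketch under stated assumptions (the γ value and field layout are illustrative; Huang's original algorithm estimates γ from the spread of the numeric variables):

```python
def kprototype_dissimilarity(record, prototype, n_numeric, gamma=0.7):
    """K-Prototype dissimilarity: squared Euclidean distance over the
    first n_numeric attributes plus gamma times the number of
    mismatching categorical attributes (simple matching distance)."""
    numeric = sum((a - b) ** 2
                  for a, b in zip(record[:n_numeric], prototype[:n_numeric]))
    categorical = sum(a != b
                      for a, b in zip(record[n_numeric:], prototype[n_numeric:]))
    return numeric + gamma * categorical

# A record here is [active_period, non_active_period, n_calls, age_group],
# with the three numeric attributes already standardized (mean 0, sd 1).
caller = [0.5, -1.0, 2.0, "18-29"]
proto = [0.0, 0.0, 1.5, "40-49"]
d = kprototype_dissimilarity(caller, proto, n_numeric=3)
```

The clustering loop then assigns each record to its nearest prototype under this measure and updates numeric prototype components with means and categorical components with modes.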
Fig. 1 Elbow method
The number of clusters K is the parameter we need to choose before running the K-Prototype clustering method. We chose K = 4, as suggested by the elbow method shown in Fig. 1.
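The elbow choice can be made systematic by locating the K at which the within-cluster sum of squares (WSS) curve bends most sharply, i.e. where the second difference of the curve is largest. A small sketch (the WSS values below are illustrative, not the study's actual Fig. 1 data):

```python
def elbow_k(wss):
    """Given WSS values for K = 1, 2, ..., return the K where the curve
    bends most sharply (largest second difference of successive drops)."""
    drops = [wss[i] - wss[i + 1] for i in range(len(wss) - 1)]
    curvature = [drops[i] - drops[i + 1] for i in range(len(drops) - 1)]
    return 2 + curvature.index(max(curvature))  # K is 1-indexed

# Illustrative WSS curve for K = 1..7 that flattens after K = 4
wss_curve = [1200, 900, 650, 450, 430, 420, 415]
```

In practice the curve is usually also inspected visually, since the largest-curvature rule can be sensitive to noise in the WSS values.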
3.3 Period Definition
Every caller with complex needs is supported with a tailored support plan to achieve the best outcomes, and each caller has a different plan start/end date. The period before a caller's support plan start date is referred to as the pre-plan period, and the period after the plan end date as the post-plan period. Among callers who no longer have active support plans in place, we extracted the 308 callers whose plan had been in place for more than two days and identified the number of contacts they made before and after their plan ended, converted into daily contact frequency. Callers' average call duration in the pre-plan and post-plan periods was also computed, but only 19 callers had average call duration recorded in both periods.
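One plausible way to operationalise the pre-plan and post-plan daily contact frequencies described above is sketched below (the function and its period bounds are assumptions for illustration; the study does not publish its exact definition):

```python
from datetime import date

def plan_period_frequencies(contact_dates, plan_start, plan_end,
                            study_start=date(2017, 12, 31),
                            study_end=date(2022, 3, 3)):
    """Daily contact frequency before the plan started and after it
    ended; contacts made during the plan itself are excluded."""
    pre_days = (plan_start - study_start).days
    post_days = (study_end - plan_end).days
    pre = sum(d < plan_start for d in contact_dates) / pre_days
    post = sum(d > plan_end for d in contact_dates) / post_days
    return pre, post
```

Normalising by the length of each period makes the two frequencies comparable even though every caller's plan starts and ends on different dates.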
4 Results
4.1 Sample Characteristics
A total of 430,969 contacts made by 1064 callers between 31 December 2017 and 3 March 2022 across all helpline services were analysed. In general, daily contact volume was stable in 2018, with an average of 200 contacts per day, followed by a rapid increase in 2019. Contact demand peaked in
2020, a 50% increase compared with 2019. In 2021, contact volume declined by 3%. Looking at contact volume trends across service types, contacts made to Mental Health Services were consistent with the overall trend, with an increase of 53% in 2020 and a decrease of 19% in 2021. Strikingly, contact volume to Family and Sexual Harm Services increased by 240% in 2020 and by a further 22% in 2021. The Mental Health Crisis Line also showed a continuous upward trend, with a 41% increase in 2020 and a 10% increase in 2021. In contrast, Health Services incl COVID showed a continuous downward trend, with a 14% decrease in 2020 and a 9% decrease in 2021. A slight increase was observed for contacts made to the Smoking Cessation Helpline, with a 5% increase in 2020 and a 7% increase in 2021. Contact demand peaked at night during our study period: 27.4% of contacts were made between 6 and 10 pm.
Gender
More than half of users are female (56.6%), followed by male (38.4%); less than one per cent (0.8%) are gender diverse, and 4.2% of callers did not specify their gender. The number of female callers exceeds the number of male callers in every known age group. In the age group 18–29 there are twice as many female callers as male callers, and the ratio increases to three to one in the age group 17 and under. Gender-diverse callers were observed in the age groups 17 and under, 18–29 and 30–39. Contact volume from female callers increased by 32% in 2020, followed by a slight decrease in 2021 (−0.1%). A high increase of 46% was observed from male callers in 2020, with a further slight increase of 5% in 2021. For gender-diverse callers, a 13% increase in contact demand was observed in 2020, and this volume became 2.7 times higher in 2021.
Ethnicity
The majority of users are NZ European (66.4%); 12.5% are Māori, 2.4% Asian, 1.4% Pacific Peoples, 0.4% MELAA, and 17% did not specify their ethnicity. Exploring contact volume trends by ethnicity, there was a 28% increase from NZ European callers in 2020 and a minimal 1% increase in 2021, and a 19% increase from Māori callers in 2020 followed by a 21% decrease in 2021. Similar patterns were observed for Pacific Peoples and Asians: increases of 50% and 98%, respectively, in 2020, followed by decreases of 21% and 12% in 2021.
Age
In terms of age, teenage users aged 17 and under account for 4.4%; contact volume from this group increased by 32% in 2020 and by 134% in 2021. Young adults aged 18–29 account for 19.3%; this group showed a 108% increase in 2020 and a 1% decrease in 2021. Users aged 30–39 account for 15.5%, with a 28% increase in 2020 and an 18% decrease in 2021. Middle-aged users in the age group 40–49 account for 19.1%. In 2020 and 2021, their contact
volume increased by 35% and 16%, respectively. Callers in the 50–59 age group account for 17.8%; their contact volume trend showed patterns similar to the 40–49 age group. Users aged 60 and above account for 18.6%; their contact volume increased by 42% in 2020 and decreased slightly, by 0.9%, in 2021. The remaining 5.4% of users are of unknown age.
Interaction Type
The phone call is the major channel for callers to access the services, followed by SMS and other interaction types (i.e. email and webchat). However, there was a very high 85% increase in contact volume made via SMS in 2020, followed by a further 9% increase in 2021. Contacts made via phone call increased by 46% in 2020 and decreased by 4% in 2021. For other interaction types, including email and webchat, a high 120% increase was observed in 2020, followed by a 14% decrease in 2021. In addition, younger callers are more likely to use SMS than older callers: the proportion of SMS contacts decreases as the age group increases. In the age group 17 and under, 57% of contacts were made via SMS and 43% via phone; 45% of contacts were made via SMS in the age group 18–29, 16% in the age group 30–39, 7% in the age group 40–49, 4% in the age group 50–59, and the proportion further decreases to 2% in the age group 60 and above.
4.2 Clustering Results We generated a four-cluster solution (see Table 1) based on callers’ active period, non-active period, calling volume and age group. We then interpreted this clustering result based on callers’ demographic and calling patterns (see Fig. 2) in each cluster as follows: 1. Cluster One (The Former Users) It contains the most significant number of female users aged 18–29. Their average active period is one and a half years, and the non-active period is 217 days. They did not have a high call volume, with an average of 97 contacts during their active Table 1 Clustering results
Cluster
Size
Proportions (%)
WSS
1 (The former users)
334
31.4
515.5
2 (The loyal users)
536
50.4
716.3
3 (The high frequency users)
33
3.1
4 (The one-off users)
161
15.1
259 263.2
It shows the cluster size, the percentage accounts and the within sum of squares of each cluster
period, typically making 0.29 contacts per day. Each call lasts 16 min on average.
2. Cluster Two (The Loyal Users)
This is the largest cluster. Most users are middle-aged to older females. They have a long active period and a short non-active period, have been using the services for 3.7 years, and made 392 contacts on average. Each call lasts 14 min on average.
3. Cluster Three (The High Frequency Users)
This cluster has only 33 users, but they contributed 40% of the contact volume. They have extremely high contact demands, making more than 5000 contacts, around four contacts per day, with each call lasting 11 min on average during their active period. Their active period and daily contact frequency are the highest, while their non-active period and average call duration are the shortest among all clusters.
4. Cluster Four (The One-Off Users)
The least active users, with the smallest contact demands, made fewer than one hundred contacts on average. They have not returned to the services for more than two years. However, their call duration is the longest, with an average of 17 min.

Fig. 2 Calling patterns by clusters
4.3 Calling Patterns Change
The mean and median daily contact frequency were both lower in callers' post-plan period. Callers were found to have a mean of 0.82 contacts per day (or 24.6 contacts per month) and a median of 0.34 contacts per day (or 10.2 contacts per month) in their pre-plan period, and a mean of 0.68 contacts per day (or 20.4 contacts per month) and a median of 0.19 contacts per day (or 5.7 contacts per month) in their post-plan period. To determine whether callers' daily contact frequency was significantly reduced after their plan ended, a paired randomisation test was performed, which showed a significant decrease in daily contact frequency in callers' post-plan period (p < 0.05). Callers' daily contact frequency was concentrated at lower values after their plan ended for all clusters. To determine whether the effectiveness of support plans in reducing callers' daily contact frequency differs by cluster, a randomisation test was performed for each cluster; a significant decrease in daily contact frequency in the post-plan period was observed in cluster 1 and cluster 4 (p < 0.05). Although there was a visible difference in daily contact frequency between periods in cluster 2 and cluster 3, there was no evidence that these decreases were significant (p > 0.05).
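A paired randomisation test of this kind can be sketched as a sign-flip permutation test on the paired pre/post differences: under the null hypothesis of no plan effect, each caller's pre/post labels are exchangeable, so the sign of each difference may be flipped at random. This is one common construction of such a test; the study does not specify its exact implementation:

```python
import random

def paired_randomisation_test(pre, post, n_perm=10000, seed=0):
    """One-sided p-value for mean(pre - post) > 0 via random sign
    flips of the paired differences."""
    diffs = [a - b for a, b in zip(pre, post)]
    observed = sum(diffs)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        # flip each pair's sign with probability 1/2 under the null
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if flipped >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one-smoothed p-value
```

The cluster-level comparison is then simply the same test run on the subset of callers belonging to each cluster.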
5 Discussion
Our study analysed 430,969 contacts made to more than 100 national telehealth services by 1064 callers with complex needs over four years. Callers were more likely to seek help at night, between around 9 pm and 12 am. Many studies have reported that calls to helplines peak at night or during the weekend [6, 20]. People may have more free time to contact helplines in the evening or at weekends without interruption; loneliness or sleep disturbance may also occur more frequently at night [2], which might explain the peak around midnight. Contact demand for telehealth services rose sharply, by 50%, in 2020 due to the COVID-19 pandemic, followed by a declining trend. It should also be noted that contact volume started to increase rapidly in 2019 (especially contacts made to mental health services), before cases of COVID-19 were confirmed in New Zealand and before the government announced control measures. In 2020, increased contact demand was observed for Mental Health Services, the Mental Health Crisis Line, Family and Sexual Harm Services and the Smoking Cessation Helpline, with a very high increase of 240% for Family and Sexual Harm Services. This can be explained by the increasing rate of psychological distress and the increased risk of family violence during COVID-19. Isolation can exacerbate pre-existing mental health problems [24] and cause economic stress, accompanied by potential increases in harmful coping mechanisms such as alcohol abuse [31]. Unemployment, reduced income, limited social support, and alcohol abuse are common risk factors that trigger family violence [4]. Many studies have reported an increasing
rate of family violence following lockdown measures. Children and pets are also victims of family violence, at greater risk of suffering physical and emotional harm [3]. In addition, the increasing contact demand for the Smoking Cessation Helpline in 2020 and 2021 is consistent with a youth smoking cessation helpline study from Hong Kong, which found that the number of incoming calls and the tobacco quit rate increased after the COVID-19 outbreak [5]. One reason for this increase is that most smokers are concerned that smoking increases the risk of COVID-19 infection [10]; the health risk related to COVID-19 is the main factor behind smokers' quit intentions [33]. As observed in the contact volume trend, a general downward trend appeared in 2021, when there were no lockdown restrictions. However, demand kept increasing for the Mental Health Crisis Line, Family and Sexual Harm Services and the Smoking Cessation Helpline; the impact of the pandemic on mental health and family issues might last longer. Another trend that needs discussion is the increasing number of contacts made by SMS. Increases of 85% and 9% in SMS contacts were found in 2020 and 2021, respectively, while the number of phone calls decreased in 2021. We also found that younger callers are more likely to use SMS. An Australian COVID-19-related youth helpline study found a similar pattern, in which the increased contact was entirely via webchat because young people have more concerns about privacy: they worried that family members might overhear their personal information if they used a phone call to contact helplines [1]. This suggests that all youth helplines could consider text-based communication. Our sample was predominantly female. Call volume from males, females and gender-diverse people all showed an increasing trend during the pandemic, and the percentage increase from male callers was higher than from female callers.
As mentioned before, the proportion of female callers is higher than male callers in many previous studies. Several studies reported that females had higher depression and anxiety scores on self-report scales during the pandemic [19, 22], which means females are more likely to experience poorer mental health under the impact of COVID-19. Another reason might explain the shortfall in male callers observed in the majority of helpline studies: an earlier sociological study found that 'traditional masculine scripts' prevent men from seeking help [13], especially for psychological problems [9]. Due to traditional gender roles, males are discouraged from showing their emotions and are less likely to seek support. In New Zealand, males die by suicide at a rate 3.3 times higher than females [15]. There is a need for mental health services to target males who might need support and to provide early intervention. In addition, help-seeking patterns may vary by gender, with females more likely to perceive emotional support while males are more likely to perceive instrumental support [18]. This suggests that when we develop support plans for complex callers, their socio-demographic characteristics have to be considered. Using the K-Prototype clustering method, our study identified four complex caller types based on their age group and contact patterns. Compared with the study by O'Neill et al. [20], similar contact patterns were found in a group of callers with the highest contact demands and a group with the lowest contact demands. We also analysed the difference in daily contact frequency and average call duration between
callers' pre-plan and post-plan periods. Our study found that callers' daily contact frequency was significantly reduced after their support plan ended, representing a decrease in callers' contact demands resulting from the intervention of the support plans. However, we did not find any decrease in callers' average call duration in the post-plan period. Our study further explored the difference in daily contact frequency on a cluster basis. The results showed that the support plans effectively decreased daily contact frequency among callers in cluster 1 (The Former Users) and cluster 4 (The One-Off Users). However, no significant decrease was observed for cluster 2 (The Loyal Users) and cluster 3 (The High Frequency Users), whose callers usually have moderate to high call volume and a high reliance on telehealth services. This result suggests that callers in clusters 2 (The Loyal Users) and 3 (The High Frequency Users) may need more health support.
6 Conclusion
This study explored the contact volume trends for telehealth services under the impact of COVID-19 and summarised the socio-demographics of 1064 callers with complex needs supported by tailored support plans. Four clusters were identified based on their age group and contact patterns, and each cluster was profiled. Contact demands from cluster 1 (The Former Users) and cluster 4 (The One-Off Users) callers were reduced, demonstrating the effectiveness of the support plans; however, the plans had a small influence on reducing contact demand from cluster 2 (The Loyal Users) and cluster 3 (The High Frequency Users) callers. This result also suggests that callers in cluster 2 (The Loyal Users) and cluster 3 (The High Frequency Users) may need greater support. This study explored some reasons that may underlie the patterns of demand among different types of callers and suggested that callers' demographic information be considered when developing support plans. For example, younger callers have more concerns about privacy and are less likely to open up, and text-based communication may help build a trusting relationship with them. Limitations also exist in this study. The results concerning callers' average call duration may not be accurate due to the large number of missing call duration values; in a comparable clustering study [20], average call duration was an important variable in identifying caller types. Exploring callers' satisfaction with the support they received at the end of the session and evaluating the support plans from the callers' perspective is recommended for future work.
References
1. Batchelor S, Stoyanov S, Pirkis J, Kõlves K (2021) Use of kids helpline by children and young people in Australia during the COVID-19 pandemic. J Adolesc Health 68(6):1067–1074
2. Bryant RA (1998) An analysis of calls to a Vietnam veteran's telephone counselling service. J Trauma Stress 11(3):589–596
3. Campbell AM, Hicks RA, Thompson SL, Wiehe SE (2020) Characteristics of intimate partner violence incidents and the environments in which they occur: victim reports to responding law enforcement officers. J Interpers Violence 35(13–14):2583–2606
4. Catalá-Miñana A, Lila M, Oliver A, Vivo JM, Galiana L, Gracia E (2017) Contextual factors related to alcohol abuse among intimate partner violence offenders. Subst Use Misuse 52(3):294–302
5. Ho LLK, Li WHC, Cheung AT, Xia W, Wang MP, Cheung DYT, Lam TH (2020) Impact of COVID-19 on the Hong Kong youth quitline service and quitting behaviours of its users. Int J Environ Res Public Health 17(22):8397
6. Iqbal Y, Jahan R, Matin MR (2019) Descriptive characteristics of callers to an emotional support and suicide prevention helpline in Bangladesh (first five years). Asian J Psychiatr 45:63–65
7. Iqbal Y, Jahan R, Yesmin S, Selim A, Siddique SN (2021) COVID-19-related issues on the tele-counseling helpline in Bangladesh. Asia Pac Psychiatry 13(2):e12407
8. Ji J, Pang W, Zhou C, Han X, Wang Z (2012) A fuzzy k-prototype clustering algorithm for mixed numeric and categorical data. Knowl-Based Syst 30:129–135
9. Johnson ME (1988) Influences of gender and sex role orientation on help-seeking attitudes. J Psychol 122(3):237–241
10. Koczkodaj P, Cedzyńska M, Przepiórka I, Przewoźniak K, Gliwska E, Ciuba A, Didkowska J, Mańczuk M (2022) The COVID-19 pandemic and smoking cessation—a real-time data analysis from the Polish national quitline. Int J Environ Res Public Health 19(4):2016
11. Krishnamurti LS, Monteith LL, McCoy I, Dichter ME (2022) Gender differences in use of suicide crisis hotlines: a scoping review of current literature. J Public Mental Health
12. Li Y, Chu X, Tian D, Feng J, Mu W (2021) Customer segmentation using k-means clustering and the adaptive particle swarm optimization algorithm. Appl Soft Comput 113:107924
13. Mahalik JR, Good GE, Englar-Carlson M (2003) Masculinity scripts, presenting concerns, and help seeking: implications for practice and training. Prof Psychol Res Pract 34(2):123
14. Menzies R, Gluckman P, Poulton R (2020) Youth mental health in Aotearoa New Zealand: greater urgency required
15. Ministry of Health (2014) Office of the director of mental health annual report 2013
16. Ministry of Health (2016) The social report 2016
17. Ministry of Health (2020) Kia kaha, kia māia, kia ora Aotearoa: COVID-19 psychosocial and mental wellbeing recovery plan
18. Olson DA, Shultz KS (1994) Gender differences in the dimensionality of social support. J Appl Soc Psychol 24(14):1221–1232
19. Özdin S, Bayrak Özdin Ş (2020) Levels and predictors of anxiety, depression and health anxiety during COVID-19 pandemic in Turkish society: the importance of gender. Int J Soc Psychiatry 66(5):504–511
20. O'Neill S, Bond RR, Grigorash A, Ramsey C, Armour C, Mulvenna MD (2019) Data analytics of call log data to identify caller behaviour patterns from a mental health and well-being helpline. Health Informatics J 25(4):1722–1738
21. Pavlova A, Witt K, Scarth B, Fleming T, Kingi-Uluave D, Sharma V, Hetrick S, Fortune S (2021) The use of helplines and telehealth support in Aotearoa/New Zealand during COVID-19 pandemic control measures: a mixed-methods study. Front Psychiatry 12
22. Pieh C, Budimir S, Probst T (2020) The effect of age, gender, income, work, and physical activity on mental health during coronavirus disease (COVID-19) lockdown in Austria. J Psychosom Res 136:110186
23. Pirkis J, Middleton A, Bassilios B, Harris M, Spittal M, Fedszyn I, Chondros P, Gunn J (2015) Frequent callers to lifeline
24. Pressman SD, Cohen S, Miller GE, Barkin A, Rabin BS, Treanor JJ (2005) Loneliness, social network size, and immune response to influenza vaccination in college freshmen. Health Psychol 24(3):297
25. Reay RE, Looi JC, Keightley P (2020) Telehealth mental health services during COVID-19: summary of evidence and clinical practice. Australas Psychiatry 28(5):514–516
26. Rodda SN, Hing N, Lubman DI (2014) Improved outcomes following contact with a gambling helpline: the impact of gender on barriers and facilitators. Int Gambl Stud 14(2):318–329
27. Spittal MJ, Fedyszyn I, Middleton A, Bassilios B, Gunn J, Woodward A, Pirkis J (2015) Frequent callers to crisis helplines: who are they and why do they call? Aust N Z J Psychiatry 49(1):54–64
28. Tan KK, Schmidt JM, Ellis SJ, Veale JF (2019) Mental health of trans and gender diverse people in Aotearoa/New Zealand: a review of the social determinants of inequities. N Z J Psychol 48(2)
29. Taylor J, Power J, Smith E, Rathbone M (2020) Bisexual mental health and gender diversity: findings from the 'who i am' study. Aust J Gen Pract 49(7):392–399
30. Turkington R, Mulvenna M, Bond R, Ennis E, Potts C, Moore C, Hamra L, Morrissey J, Isaksen M, Scowcroft E et al (2020) Behavior of callers to a crisis helpline before and during the COVID-19 pandemic: quantitative data analysis. JMIR Mental Health 7(11):e22984
31. Van Gelder N, Peterman A, Potts A, O'Donnell M, Thompson K, Shah N, Oertelt-Prigione S (2020) COVID-19: reducing the risk of infection might increase the risk of intimate partner violence. EClinicalMedicine 21
32. WHO (2020) World mental health day: an opportunity to kick-start a massive scale-up in investment in mental health. News release
33. Yang H, Ma J (2021) How the COVID-19 pandemic impacts tobacco addiction: changes in smoking behavior and associations with well-being. Addict Behav 119:106917
Trend and Behaviour Changes in Young People Using the New Zealand Mental Health Services Yingyue Kang and Mahsa Mohaghegh
Abstract Many countries offer crisis services to help people deal with mental health problems. Mental health services have been operating in New Zealand for many years, with multiple services supporting those who feel stressed, worried and down. This study was undertaken to examine the use and trends of mental health services among young people between 2018 and 2022. Keywords Mental health · Trends and behaviours · Clustering · Behaviour changes
1 Background and Related Work
Mental illness can be distressing and can cause problems in everyday life. However, in most cases, symptoms can be effectively managed through a combination of medication and talking therapies known as psychotherapy. In addition, many countries offer crisis services that help people deal with mental illness by talking on the phone or sending text messages. Crisis services have been available since the 1950s to support community members experiencing personal crises, including suicide risk and violence [2]. Highly trained operators provide prompt and professional assistance to callers, and a growing body of research has consistently shown that such services are effective tools in reducing distress and suicidality among help-seekers. Data show that young people are increasingly using mental health services to seek help for mental health problems and that their age profile, behavioural trends and reasons for seeking advice are changing yearly. According to a survey of youth support hotlines in Los Angeles, the number of people using the helpline has increased significantly each year [7]. The most frequent contacts among young people come from teenage females, particularly those aged 15 and 16, while contact from children aged 13 and under has also increased significantly year on year [5]. In many cases, there is evidence that the reason for contact is anxiety and stress, with young people
Y. Kang · M. Mohaghegh (B) Auckland University of Technology, Auckland, New Zealand e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_12
contacting the helpline describing their distress as stemming from suicide-related thoughts or behaviours [3, 9]. For example, the Danish helpline reported that 6.3% of the 9685 consultations via SMS communication in a year were on the topic of suicide. In addition, 5.5% of Danish young people aged 13–18 reported attempting suicide at least once, and 67.8% of these were young women [10]. Some evidence suggests that services can improve the short-term psychological state of callers, including reducing suicidal ideation and intent [4, 6]. In addition, early psychological interventions can reduce the mental health burden and health disparities in underserved communities [1] and have been shown to effectively address unmet mental health needs. This research focuses on mental health services in New Zealand, which provide free virtual health, mental health and social services to the public. The aim of these services is to provide consistent, clinically supported access for those in need, or for people unable to access other options due to time of day, location, or financial or cultural barriers. These services support anyone feeling stressed, worried, down, or needing support. Trained counsellors respond to the contacts and develop care plans to provide the best possible service user outcomes. Finding trends among young people using mental health services can help such services better understand the reasons, behavioural patterns and changes in the characteristics of young people contacting them. This will help professionals provide efficient, early and comprehensive interventions for people with mental health problems and other health-compromising behaviours. The aim of this paper is to identify trends among young people using mental health services.
In this article, young callers aged 13–24 years are tracked in six-month blocks to explore whether there is a changing trend in their use of mental health services.
2 Methodology
Data Collection
Mental health services provided anonymised datasets of calls made or text messages sent between January 2017 and February 2022. More than 200,000 contacts were recorded, made by either phone or text message. Data related to contacts contained basic demographic information such as age, gender identity, District Health Board (DHB), ethnicity, type of contact, and reason for contact. Advisors collect data and record each call and text message in a systematic and standardised way. Based on the mental health services' risk definition criteria, the advisor also assesses the risk type of each contact. The dataset contains demographic information about callers over the study period, such as age, gender, and DHB, as well as contact-related information such as contact method, call duration and reason for the call. If the contact is considered to have psychological risks, the number and type of risks are also recorded in
the contact information. Each individual is assigned a patient ID, providing a unique identifier at the caller level.
Linear Regression
Trends in the demographic characteristics, behavioural patterns (such as contact methods) and risk outcomes of mental health services users were analysed, with standard linear regression performed separately for each time trend. A linear regression model in RStudio was used to analyse whether the trends in user characteristics, behaviours and risks were statistically significant; a p-value of less than 0.05 was considered statistically significant. Missing data were assumed to be missing completely at random, and only fully documented records were used in this study. The regression model is given by Eq. (1):

y_i = β_0 + β_1 x_{i1} + ⋯ + β_P x_{iP} + ε_i,  i = 1, 2, …, n  (1)
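For each time series, the trend test above reduces to an ordinary least squares fit of counts against time blocks. A minimal sketch is shown below; the counts are invented for illustration (the study's data are not public), and in practice the slope's p-value would come from R's lm or a tool such as scipy.stats.linregress.

```python
from statistics import mean

def linear_trend(y):
    """Fit y = b0 + b1*t by ordinary least squares over t = 0..n-1
    and return the slope b1 (change per time block)."""
    n = len(y)
    t = list(range(n))
    t_bar, y_bar = mean(t), mean(y)
    sxy = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
    sxx = sum((ti - t_bar) ** 2 for ti in t)
    return sxy / sxx

# Hypothetical half-yearly contact counts, 2018-H1 .. 2021-H2
counts = [2500, 2876, 3400, 3900, 5200, 6100, 8400, 9838]
slope = linear_trend(counts)
print(f"estimated increase per half-year block: {slope:.1f} contacts")
```

A positive slope with a p-value below 0.05 would correspond to the "significant upward trend" reported for most series in the results.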
3 Results and Evaluation
3.1 Overall Trends Across Mental Health Services and Trends for Young People
Between 1 January 2018 and 28 February 2022, mental health services received a total of 214,384 contacts. For the purposes of this study, data was split into six-monthly blocks. However, as the data for 2022 contained only two months, only data from 2018 to 2021 were used when calculating the growth rate and linear model. The time trend analysis shows a general upward trend in the total number of service contacts; apart from three sudden increases between 2019 and the second half of 2021, the overall trend is a slow but steady increase. The number of service contacts increased from 28,098 in 2018 to 66,239 in 2021 (p-value < 0.001, growth rate = 17%). Similar to the overall trend, the number of contacts from young people is on an overall upward trend. A total of 51,866 contacts over the period of data recording were from young people aged 13–24 years, accounting for approximately 25% of the total contacts. Of these, 30,460 contacts were from the 13–19 age group and 21,406 from the 20–24 age group. The number of contacts from young people has increased from 5376 in 2018 to 18,238 in 2021. Although young people's contacts also show an overall upward trend, their yearly average growth rate is much greater than that of overall contacts (p-value < 0.001, growth rate = 23%) (Fig. 1).
Fig. 1 a Overall trends of all users who contacted the mental health services. b Overall trend of young users who contacted the mental health services
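The six-monthly blocking used in this analysis can be sketched as follows. The dates below are invented for illustration; the service's records are anonymised and not public.

```python
from collections import Counter
from datetime import date

def half_year_block(d: date) -> str:
    """Label a contact date with its six-month block, e.g. '2019-H2'."""
    return f"{d.year}-H{1 if d.month <= 6 else 2}"

# Hypothetical contact dates
contacts = [date(2018, 3, 14), date(2018, 9, 2), date(2018, 11, 30),
            date(2019, 1, 5), date(2019, 8, 21)]

# Count contacts per six-month block
blocks = Counter(half_year_block(d) for d in contacts)
print(blocks)
```

Each block's count then becomes one observation in the trend regression described in the Methodology section.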
3.2 Trends in Demographic Characteristics of Contacts to the Mental Health Services
Trends in gender at contact to the mental health services
The gender of young people contacting the services is divided into four groups: female, male, gender diverse and undisclosed gender. Between 2018 and 2021, the majority of young people contacting the services were female, accounting for approximately 67.4% of young people overall. The proportion of males was approximately 18.7%, gender diverse young people made up only 1.5%, and 12.4% did not disclose their gender identity. Across all contacts, female callers account for approximately 53.4%, male callers for approximately 25.3%, 20.8% did not disclose their gender identity, and only 0.58% of callers were gender diverse. Comparing the total numbers of contacts, the difference in the proportions of young people by gender is clear: there is a slight decrease in the proportion of males and non-disclosed gender, but an increase in the proportion of females and gender diverse callers. There is also a clear positive trend in the number of young females contacting the services (p-value < 0.001), and the change in the proportion of young females is significant and consistently upward (p-value = 0.047). In contrast, there was only a slight change in the number of males contacting the services (p-value = 0.001) and little change in the trend in the proportion of males (p-value = 0.6543); a p-value greater than 0.05 does not show a statistical association between the proportion of males and time. The trend in the total number of gender diverse young people is climbing (p-value = 0.0086), while the percentage of gender diverse callers is also increasing (p-value = 0.0056). The number of young people who did not disclose information
Fig. 2 Trends in gender of young users who contacted the mental health services
about their gender identity increased (p-value = 0.029) and the percentage increased significantly (p-value = 0.0045) (Fig. 2).
Trends in District Health Boards at contact to the mental health services
According to the New Zealand Ministry of Health's DHB classification policy, all mental health services users are recorded under 20 different DHBs, grouped into four main regions: Central, Midland, Southern and Northern. Contacts from the Central region were the highest, accounting for 30.7% of the total number of contacts. The total number of contacts from the Northern region was similar, at 30.2% of total contacts. The totals from the Southern and Midland regions were relatively low, at 21.8% and 17.3% respectively. Mirroring the trend in total contacts for young people, contacts from all four regions showed an upward trend over the four-year period. The highest growth rate was seen in the Southern region, at 37%. The increasing trend in the number of contacts was evident in the Southern region (p-value < 0.001), as was the proportional increase in this region (p-value = 0.0051). The Central and Northern regions show very similar growth rates of 29% and 30% respectively, with the number of contacts from the Central region showing a clear upward trend (p-value < 0.001) and the proportional change, although not as significant, still showing a gentle upward trend (p-value = 0.041). The trend in the total number of contacts in the Northern region was not as significant as in the Central region but still showed a consistent upward trend (p-value = 0.002), and the proportional change also showed a gentle upward trend (p-value = 0.027). Surprisingly, although the number of contacts from the Midland region increased slightly (p-value < 0.001), the proportion from this region showed essentially no change (p-value = 0.49) (Fig. 3).
Trends in ethnicity at contact to the mental health services
According to the New Zealand 2018 Census report on ethnic groups, New Zealand citizens and residents are divided into six different ethnic groups. Services users may choose to provide an ethnicity, which is then grouped into European, Māori, Pacific Peoples, Asian, MELAA (Middle Eastern/Latin American/African) and Other ethnicity. The distribution of young people's contacts across the six ethnic groups is similar to the census results. The largest number of contacts comes from the European ethnic group, accounting for 64.8% of the overall number of young people's contacts.
Fig. 3 Trends in DHBs of young users who contacted the mental health services
Between 2018 and 2021, the number of contacts from the European ethnic group continued to increase (p-value < 0.001), while the proportion of contacts from this group increased, though not as significantly as the number (p-value = 0.024). Contacts from the Māori ethnic group accounted for 15.7% of total youth contacts, making it the second largest ethnic group in total contacts. There was a positive change in the number of contacts from the Māori group (p-value = 0.036) and a consistent but less pronounced positive change in the proportion (p-value = 0.033). The Pacific group accounts for approximately 4.7% of the total number of youth contacts. There is a significant positive trend in the number of contacts for the Pacific group (p-value = 0.0038) and a weaker upward trend in the proportion (p-value = 0.043). The proportion of contacts from Asian ethnic groups was approximately 9.8%. There was a significant increase in the number of contacts from Asian groups over the four years (p-value < 0.001); however, there was no significant change in the proportion, which did not fit a linear trend (p-value = 0.78). Similar to the Asian group, the MELAA group, which accounted for 1.5% of total contacts, also showed an increase in number (p-value = 0.0083) but no significant change in proportion (p-value = 0.87). Contacts from other ethnic groups also showed a positive trend in number (p-value = 0.0019) but again no significant change in proportion (p-value = 0.055) (Fig. 4).
Fig. 4 Trends in ethnicity of young users who contacted the mental health services
Fig. 5 Trends in interaction type of young users contacted the mental health services
3.3 Trends in Behaviour at Contact to the Mental Health Services
In this study, we also analyse the behaviour of young users of the services to understand more deeply the changing behavioural patterns of young people, focusing on users' interaction type. Looking at the total number of contacts over four years, text messaging was the preferred method of contact for young people aged 13–24, with contacts made via SMS accounting for 81% of total contacts, in contrast to just 19% made by telephone. Similar to the increasing trend in the overall number of contacts, there is a clear upward trend in the number of both text messages and phone calls. The linear trend p-value for SMS contacts was less than 0.001, and for telephone contacts it was 0.0014. The number of telephone contacts rose from 1047 in 2018 to 3362 in 2021, an increase in volume of 2.84. The number of SMS contacts rose from 4329 in 2018 to 14,876 in 2021. However, the proportion of SMS contacts did not show a significant change over the four-year period (p-value = 0.75) and did not fit well with a linear trend. The proportion of telephone contacts also did not follow a linear trend (p-value = 0.75) and even showed a negative trend in the proportion of contacts (Fig. 5).
3.4 Trends in Risk at Contact to the Mental Health Services
An advisor supporting a services user will record a risk during the contact if a risk situation arises. Risk data are categorised into five risk groups: risk of suicide, self-harm, harm to others, abuse and breaking glass. Breaking glass is the internal term for necessary disclosure. Under the Health Information Privacy Code 1994 (HIPC) and the Privacy Act, information collected from a services user must be kept confidential and not disclosed to a third party without that person's consent unless that third party provides health services to them. However, if necessary, health authorities may disclose information to prevent or mitigate a serious threat to public health, public safety, or the health of the individual concerned or other individuals. In addition, disclosures may be made, where necessary, to persons who can take action against the threat.
Over the four-year period 2018–2021, a total of 31,657 contacts were identified as at risk (4%), with the highest number of contacts at risk of suicide. However, the number of at-risk contacts from young people aged 13–24 over this four-year period was 11,042 (7%), slightly higher than the proportion of total contacts identified as at risk. The number of contacts identified as at risk showed a statistically significant upward trend (p-value = 0.019). The highest number of calls related to suicide and suicidal ideation, accounting for approximately 57.7% of the total number of at-risk contacts. In 2018, there were 1148 suicide-related contacts, and in 2021 there were 2051. Similar to the trend in the total number of at-risk contacts, contacts at risk of suicide showed a statistically significant upward trend (p-value = 0.002). The trend in percentages was not significant and did not conform to a linear trend (p-value = 0.36). The increase in the number of contacts regarding self-harm was significant (p-value = 0.00352), representing approximately 25.2% of the total risk contacts; there were 148 such contacts in 2018 and 824 in 2021. There was no significant trend in proportional change (p-value = 0.37). There was no statistically significant linear trend in the number of contacts regarding harm to others (p-value = 0.56); the number of contacts defined as a risk of harm to others decreased from 53 in 2018 to 42 in 2021, representing approximately 1.9% of the total risk contacts, the lowest of the risk groups. There was also no significant trend in the proportion of harm to others (p-value = 0.057).
There was a significant linear upward trend in the number of contacts involving breaking glass (p-value = 0.00074), with the number of contacts defined as a breaking-glass risk increasing from 131 in 2018 to 357 in 2021, representing approximately 9.5% of the total risk contacts. However, there was no significant trend in percentage change (p-value = 0.40). There was a significant upward trend in the number of risk contacts related to abuse (p-value = 0.0020), with a total of 62 contacts defined as having a risk of abuse in 2018, increasing to 239 in 2021; the proportion also showed a significant upward trend (p-value = 0.0020) (Fig. 6). To further examine whether user demographics are related to risk type, we used Chi-square tests to associate risk type with age, gender, DHB and ethnic group. Results showed that gender (p-value = 0.025) and ethnic group (p-value = 0.0088) were associated with risk type, while DHB (p-value = 0.92) and age (p-value = 0.49) were
Fig. 6 Trends in risk contacts of young users who contacted the mental health services
independent of risk type. In terms of gender, by calculating the proportion of risk contacts across the four gender groups, we found that females had the highest proportion, with approximately 2% of contacts from females being considered risky. For males it was 1.7% and for undisclosed gender 1.9%. The gender diverse group had the lowest percentage of risky contacts at 0.9%. However, looking at the gender breakdown of each risk type, we can see trends in the different risk types displayed by different genders: where women show a strong tendency towards self-harm, men are more likely to be identified as being at risk of harming others. In terms of ethnic groups, we also calculated the proportion of risk contacts for the six ethnic groups. MELAA had the lowest percentage of risky contacts at approximately 1.1%. The different ethnic groups also show different trends in the types of risk: the European ethnic group had a higher tendency towards self-harm, while the Māori ethnic group showed higher risk trends for harming others, breaking glass and abuse.
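The Chi-square independence test used above compares the observed demographic-by-risk-type contingency table with the counts expected if the two variables were independent. A minimal sketch with invented counts follows (the study's real tables are not published; in practice a routine such as scipy.stats.chi2_contingency would also return the p-value):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a 2-D contingency table:
    sum over cells of (observed - expected)^2 / expected."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = female/male, cols = self-harm / harm-to-others
table = [[160, 20],
         [60, 40]]
print(f"chi-square = {chi_square_stat(table):.2f}")
```

With one degree of freedom, a statistic above the 3.84 critical value corresponds to p < 0.05, i.e. an association between gender and risk type in this illustrative table.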
4 Discussion
The main objective of this study was to understand the demographics and trends of young users (13–24 years old) of the mental health services. It has been suggested that many young people feel hesitant to disclose suicidal or self-harming thoughts to adults and are reluctant to seek help from parents, siblings and other relatives [8]. Similarly, surveys have shown that using anonymous methods such as social networks to seek help is the most widely accepted approach for young people with emotional problems [11]. The helpline reduces the barriers to seeking help for emotional problems, such as fear of parents and worry about friendships. The data show an overall increase in use by the 13–24-year-old group, highlighting that the number of young female contacts has increased significantly each year and remains at a high level. Contacts from the Southern DHB region are notable: young people in the South account for only 19% of young people aged 13–24 nationally according to the 2018 New Zealand Census report [12], yet between 2018 and 2021 the mental health services received far more contacts from this region than its share of the population, while over 30% of young people's contacts came from the Central region. Looking at the interaction type of contacts, young people are more likely to use text messaging to communicate. At the same time, the risk trend for young people increased significantly, with the proportion of contacts identified as risky in the 13–24 age group remaining the highest of all age groups. A Chi-square test of risk type and demographics revealed a significant association of gender and ethnicity with risk type.
5 Conclusion
Our data show that young people aged 13–24 are increasingly using the mental health services. Contacts from young people are increasing significantly each year, and the majority of these contacts are from services users who identify as female. Contacts from the Southern DHB are increasing rapidly, with contacts from this area going from the lowest proportion of contacts to the highest in four years. The mental health services show an upward usage trend across all ethnic groups. At the same time, the number of contacts defined as risk contacts is increasing yearly, with young people considered more at risk than other age groups. Young people contact the services most frequently for suicide-related issues, and proper guidance and resolution of suicide-related issues are imperative, particularly in relation to young women. Māori show strong trends in almost all risk types, and this ethnic group needs more guidance on what can be done to reduce risk. In future work, we could consider recording and analysing more data from the non-COVID-19 period to explore how young people's use of the services changes in the absence of a public health emergency. This will ensure that the mental health services can capture young people's behavioural patterns and trends in both regular situations and emergencies.
References 1. Barnett ML, Lau AS, Miranda J (2018) Lay health worker involvement in evidence-based treatment delivery: a conceptual model to address disparities in care. Annu Rev Clin Psychol 14:185 2. Burgess N, Christensen H, Leach LS, Farrer L, Griffiths KM (2008) Mental health profile of callers to a telephone counselling services. J Telemedicine Telecare 14(1):42–47 3. Coveney CM, Pollock K, Armstrong S, Moore J (2012) Callers’ experiences of contacting a national suicide prevention helpline. Crisis 4. Guo B, Harstall C (2002) Efficacy of suicide prevention programs for children and youth. In: Database of abstracts of reviews of effects (DARE): quality-assessed reviews [Internet]. Centre for Reviews and Dissemination (UK)
5. Kerner B, Carlson M, Eskin CK, Tseng C-H, Ho J-MG-Y, Zima B, Leader E (2021) Trends in the utilization of a peer-supported youth hotline. Child Adolesc Mental Health 26(1):65–72 6. Mann JJ, Apter A, Bertolote J, Beautrais A, Currier D, Haas A, Hegerl U, Lonnqvist J, Malone K, Marusic A et al (2005) Suicide prevention strategies: a systematic review. JAMA 294(16):2064–2074 7. Mathieu SL, Uddin R, Brady M, Batchelor S, Ross V, Spence SH, Watling D, Kõlves K (2021) Systematic review: the state of research into youth services. J Am Acad Child Adolesc Psychiatry 60(10):1190–1233 8. Michelmore L, Hindley P (2012) Help-seeking for suicidal thoughts and self-harm in young people: a systematic review. Suicide Life-Threat Behav 42(5):507–524 9. Nestor BA, Cheek SM, Liu RT (2016) Ethnic and racial differences in mental health services utilization for suicidal ideation and behavior in a nationally representative sample of adolescents. J Affect Disord 202:197–202 10. Sindahl TN, Côte L-P, Dargis L, Mishara BL, Bechmann Jensen T (2019) Texting for help: processes and impact of text counselling with children and youth with suicide ideation. Suicide Life-Threat Behav 49(5):1412–1430 11. Ssegonja R, Nystrand C, Feldman I, Sarkadi A, Langenskiöld S, Jonsson U (2021) Indicated preventive interventions for depression in children and adolescents: a meta-analysis 12. 2018 Census|Stats NZ (n.d.). https://www.stats.govt.nz/2018-census/. Retrieved 16 Oct 2022
Hybrid Data Security Models for Smart Society
Securing Cloud Storage Data Using Audit-Based Blockchain Technology—A Review Mohammad Belayet Hossain and P. W. C. Prasad
Abstract Cloud storage services in cloud computing have become very popular, and many organizations nowadays use cloud services to outsource their data storage. Because cloud service providers cannot fully guarantee cloud storage security, and to ensure data integrity, availability and privacy while preventing data tampering and leakage, newly emerging blockchain-based auditing has become a popular and promising solution. Most existing schemes are based on traditional or identity-based public auditing. Such audit schemes suffer from problems such as certificate management or key escrow; they also do not support dynamic data updates or user identity tracking for group users, and they require a trusted TPA. To address data privacy, privacy leakage in cloud storage, third-party auditor mistrust, and certificate management, this work adapts a certificateless multi-replica and multi-data public audit scheme based on blockchain technology and public key searchable encryption. All replicas are stored on different cloud servers, and their integrity can be audited at the same time. All data is also protected by an encryption algorithm. Smart contract technology is used to control data access and sharing, and data transactions are automatically recorded on the blockchain. The adapted project work discusses the methodologies used, their comparison, and their benefits. This paper reviews the application of blockchain-based audit technology for securing cloud storage data. Keywords Blockchain · Blockchain-based auditing · Cloud computing · Cloud storage
M. B. Hossain (B) · P. W. C. Prasad Charles Sturt University, Bathurst, Australia P. W. C. Prasad e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_14
1 Introduction
In recent years, cloud computing as a new computing model has attracted extensive attention from many businesses and organizations [1]. With the rapid development of cloud computing and big data technology, more and more businesses and individuals choose to outsource their data to cloud services [2]. Cloud services provide users with an efficient and flexible data management method and reduce the burden of local data storage and maintenance [1]. To ensure the security of data and the privacy of users, data is usually stored on the cloud server in the form of ciphertext. Encryption technology can be regarded as a security guarantee for access control of the data, but how to achieve access control for encrypted data is a big challenge [2]. As the data is stored in the cloud, the Data Owner (DO) loses direct control over the data. Cloud service providers (CSPs) may be compromised by external threats, such as hacking or natural disasters, and may even tamper with data for their own benefit. These external and internal attacks can damage the integrity of remote data; if data integrity cannot be audited in time, a serious disaster may follow [3]. Therefore, users must regularly verify the integrity of outsourced data. Public audit technology enables users to outsource data integrity verification to specialized third-party auditors (TPAs). The TPA regularly verifies data integrity and notifies users [1]. Most public auditing schemes are based on a public key infrastructure (PKI), in which the auditor verifies users' certificates and selects the correct public key for authentication. These schemes face various issues related to certificate management, including the revocation, storage, distribution, and validation of certificates. Furthermore, the credibility of the TPA is questionable: an irresponsible auditor can establish a good integrity record without performing the public auditing protocol, in order to reduce verification costs [4].
Therefore, it is necessary to formulate a public auditing scheme that restricts the behavior of auditors. Blockchain-based public auditing can effectively audit the behavior of the TPA: the TPA generates challenge information based on the unpredictable nonce of a block and writes each verification result into the blockchain. To address privacy and credibility issues in cloud storage at the same time, the Public Key Encryption with Conjunctive Field Keyword Search (PECK) scheme is combined with smart contracts on the blockchain. The literature review shows that most schemes discuss basic public auditing of cloud storage data based on trust among the Data Owner, the cloud service provider, and the TPA: the third-party auditor reports any changes to the data, and the Data Owner checks the audit logs to verify the operations.
Organization: The rest of the paper is organized as follows. Section 2 presents a literature review of related works; Sect. 3 describes existing techniques of blockchain-based auditing schemes; Sect. 4 provides a discussion; and Sect. 5 concludes the paper.
Securing Cloud Storage Data Using Audit-Based Blockchain …
Fig. 1 Network visualization
Methodology
Most of the research materials were collected using Charles Sturt University Primo, Google Scholar, IEEE Xplore, the ACM Digital Library, and Springer search. The work involved selecting journal articles on their merits and relevance to the project topic, mostly based on the SJR journal ranking (Q1 and Q2). I used keywords such as Security, Cloud storage, Audit, and Blockchain, and filtered the results to the years 2019–2022. From this search, I collected the 15 most relevant journal articles, of which 12 were selected for the literature review as the closest and best suited to the project. Figure 1 is a visual representation of the topic keywords and their relationship mapping.
2 Literature Review
The reviewed articles mostly discuss public auditing based on blockchain and focus on specific audit methods for cloud storage data. This section therefore discusses related works on auditing, public verification, and blockchain-based data verification.
2.1 Related Work
Li et al. [5] proposed blockchain-based public auditing for big data in cloud storage. This solution uses the blockchain itself instead of the typical TPA employed in many existing schemes. A DO can dynamically request other DOs to audit its data through the blockchain network; this is possible because the hashed tags of encrypted file blocks are stored in the blockchain and witnessed by the entire network, so that anyone in the network can serve as a public auditor. The scheme can therefore resist dishonest CSPs and DOs. It uses lightweight verification tags on the blockchain and generates a lightweight proof by constructing a Merkle Hash Tree (MHT) over the tags, reducing the computation and communication overhead of integrity verification. Each DO stores the MHT root locally so it can immediately check the integrity of its remote data after upload. However, the scheme involves only the blockchain network, the CSPs, and the DOs, with no proven technique to authenticate the CSPs and DOs; trusting unknown DOs to audit data can itself be a threat to data security.
Li et al. [6] proposed blockchain-based public auditing for cloud storage environments without trusted auditors. Their certificateless public auditing scheme resists malicious auditors and avoids the key escrow problem. The framework uses four roles: the cloud service provider (CSP), the client, a key generation center (KGC), and the auditors. The client performs update operations on the stored data; the CSP generates an operation log for each update, and both client and CSP compute multiple signatures on the log, indicating that all members agree with the result.
A consensus mechanism runs among the distributed auditing nodes: when a client sends an auditing request, the blockchain network triggers consensus, and the data stored at the CSP is audited and the result stored among the nodes. In this scheme, data auditing is performed without a trusted auditor, and the agreed auditing results stored in the blockchain are tamper-resistant. Anyone can therefore check the historical audit log, but it remains difficult to determine responsibility for data damage.
Xue et al. [4] proposed identity-based public auditing (IBPA) for ensuring data integrity in cloud storage systems. Here, auditing is based on nonces, an indispensable feature of a public blockchain used to solve the given hash puzzles. The nonce in a block is not predefined and is easily verifiable, which ensures that even if a malicious auditor forges an auditing result, it cannot be validated by the user. The scheme involves a private key generator (PKG), a user, a CSP, and a TPA, as in [6], and describes the relationships among these entities in its system model. However, the TPA may perform fewer audits than agreed with the user to reduce auditing costs, or, for financial reasons, the TPA and the CSP may collude to forge audit data to deceive the user. This paper differs from [5, 6] in that IBPA enables public auditing
of outsourced data in cloud storage systems while resisting malicious auditors, constructing challenge messages, and efficiently verifying auditing results. The system also has the property that, if the user's data are lost, the auditor may be able to provide sufficient evidence that it properly complied with the audit protocol.
Yang et al. [3] proposed a Public Mutual Audit Blockchain (PMAB) for outsourced data in cloud storage. The system model uses entities similar to [4–6], plus a regulator (R). An audit chain and a credit chain are two distributed ledgers maintained by the CSPs and R, which record, respectively, the audit information and the credit of each entity. After outsourcing data to CSPs, the DO generates an audit contract with the CSPs and R. In PMAB, corrupted CSPs may try to bribe other CSPs to conceal their data problems during audit verification; the regulator is assumed to be a trustworthy agency that supervises cloud storage services.
The decentralized data outsourcing auditing protocol based on blockchain (DBPA) [7, 8] allows customers to store data safely without relying on any specific TPA, using blockchain technology to achieve decentralization and security without a trusted central authority. To ensure the privacy and security of cloud data and users, Wang et al. [7] proposed a data integrity audit scheme based on blockchain. The protocol does not require a specific administrator or TPA to store data, and it can be made compatible with existing protocols [3–6] by modifying the smart contracts.
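Several of the schemes above (notably IBPA and DBPA) derive the audit challenge from unpredictable on-chain values such as a block nonce or block hash, so that an auditor cannot precompute or forge a challenge. The following is a minimal, hypothetical sketch of that idea, not the exact construction of any reviewed scheme: a public nonce deterministically selects which file blocks are challenged, and anyone can recompute the same selection to check the auditor.

```python
import hashlib

def derive_challenge(block_nonce: int, num_blocks: int, sample_size: int) -> list:
    """Derive a pseudo-random set of distinct file-block indices to challenge,
    seeded by a public blockchain nonce (toy sketch, not the IBPA spec)."""
    indices = []
    counter = 0
    while len(indices) < sample_size:
        digest = hashlib.sha256(f"{block_nonce}:{counter}".encode()).digest()
        idx = int.from_bytes(digest[:8], "big") % num_blocks
        if idx not in indices:  # keep the challenged indices distinct
            indices.append(idx)
        counter += 1
    return indices

# The same public nonce always yields the same challenge, so the TPA's
# challenge choice is verifiable by everyone and cannot be fixed in advance.
c1 = derive_challenge(block_nonce=496417, num_blocks=1000, sample_size=5)
c2 = derive_challenge(block_nonce=496417, num_blocks=1000, sample_size=5)
assert c1 == c2 and len(set(c1)) == 5
```

Because the nonce is only known once the block is mined, a colluding TPA and CSP cannot prepare forged proofs for the challenged blocks ahead of time.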
This blockchain-based public auditing model is composed of entities similar to [3, 4, 6]: the user, an auditing service provider, the cloud service provider, and the client. The architecture proposed by Sharma et al. [8] implements the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) algorithm to ensure confidentiality, integrity, and availability. It uses a Java-based blockchain network and deploys a honeybee optimization algorithm on the cloud storage system to optimize resources and minimize transaction and execution time. The system model consists of four entities: the blockchain, the user, the cloud storage, and the Data Owner. The scheme is discussed in detail in terms of its blockchain, encryption algorithm, and threat model.
Articles [2, 9, 10] discuss blockchain-based access control for cloud storage systems. Articles [2, 9] both propose CP-ABE blockchain-based frameworks, but they use different methods to solve the access-control problem in cloud storage. Wang et al. [2] proposed a decentralized framework without a trusted third party: a secure cloud storage framework with access control based on a combination of the Ethereum blockchain and the CP-ABE algorithm, aiming at fine-grained access control for cloud storage. The access period of the information is stored on the Ethereum blockchain; when an attribute set is assigned to a data user, the data owner can append an effective access period for that user. Sharma et al. [9] proposed future work to include
the integrity checking process in the proposed architecture, ensuring that uploaded documents are not tampered with by malicious users and enhancing the security of the architecture. In Ezhil Arasi and Kulothungan [10], data owners encrypt their files based on a set of attributes and define an access policy over those attributes to specify which types of users can access each file; the encrypted file is then outsourced to the cloud storage.
The articles above [2–10] propose different methods and architectures for blockchain-based public auditing. Yang et al. [1] and Xue et al. [4] proposed privacy-preserving public auditing. Miao et al. [13] proposed a decentralized and privacy-preserving public auditing scheme (DBPA) based on blockchain. In DBPA, the challenge message is generated from the latest successive block hash and a random seed chosen by the TPA. DBPA employs the PoW consensus mechanism and uses the blockchain to record audit results, which are public, decentralized, and unforgeable; the scheme is secure in the random oracle model under the intractability of the Computational Diffie-Hellman and Discrete Logarithm problems. Finally, the cloud privacy-preserving article [11] addresses BC-PECK, a data protection scheme based on blockchain and public key searchable encryption. BC-PECK is a cloud storage data privacy protection scheme based on blockchain and smart contracts; the data-sharing process in multi-user scenarios is realized with the help of PECK, allowing a more complex and accurate query process using multi-keyword retrieval.
2.2 An Overview of Blockchain Technology
Blockchain is one of the most advanced technologies in use today and is widely considered an innovative technology deployed in many areas. The blockchain is adopted mainly as an accounting book or digital distributed ledger database [12]. It is a distributed database maintained by multiple nodes that appends an ordered list of records in the form of blocks without requiring trust among the nodes [13]. Blockchain can be cost-effective, removing the need for a centralized authority to monitor and regulate transactions and interactions between members. Every transaction is cryptographically signed and confirmed by the other mining entities, each of which holds a copy of the entire ledger of all transactions [12]. As a decentralized system, blockchain adopts a decentralized consensus mechanism without a third-party trusted authority; the four major consensus mechanisms are Proof of Work (PoW), Proof of Stake (PoS), Practical Byzantine Fault Tolerance (PBFT), and Delegated Proof of Stake (DPoS) [4]. Blockchain systems can be classified into three types: public, consortium, and private [13]. A public blockchain has no threshold for users; anyone can join or leave without permission from any centralized or distributed authority. In general, blockchain is characterized by decentralization and anonymity, non-modifiability and unforgeability, traceability, and irreversibility [13].
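The non-modifiability property comes from hash-linking: each block stores the hash of its predecessor, and under PoW a valid block must also contain a nonce that makes its own hash meet a difficulty target. The toy sketch below (not any of the reviewed systems; names and the tiny difficulty are illustrative only) shows why tampering with an earlier block is evident from every later block.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form (sorted keys keep it stable).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash: str, records: list, difficulty: int = 2) -> dict:
    """Proof of Work: search for a nonce so the block hash starts with
    `difficulty` zero hex digits (a toy target for illustration)."""
    block = {"prev": prev_hash, "records": records, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Build a tiny two-block chain; each block commits to its predecessor.
genesis = mine_block("0" * 64, ["genesis"])
second = mine_block(block_hash(genesis), ["audit result #1"])
assert second["prev"] == block_hash(genesis)

# Tampering with the first block changes its hash, breaking the link
# stored in the second block: the forgery is immediately detectable.
tampered = dict(genesis, records=["forged"])
assert block_hash(tampered) != second["prev"]
```

This is exactly the property the auditing schemes rely on when they write verification results into the chain: rewriting a past audit result would require re-mining every subsequent block.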
3 Existing Techniques of Blockchain-Based Auditing Schemes for Cloud Storage Data
3.1 Blockchain-Based Cryptographic Technique for Cloud Storage
This technique transfers information from the users to the cloud systems in encrypted form and ensures the accessibility of information using cryptographic procedures [12]. The cryptographic storage structure contains three parts:
• Data Processor (DP): processes information before it is sent to the cloud [12].
• Data Verifier (DV): verifies whether information stored in the cloud has been damaged [12].
• Token Generator (TG): generates a token for each of the clients' documents saved on the cloud [12].
3.2 Blockchain-Based Data Integrity Checking Technique for Cloud Storage Services
Decentralized data integrity checking for cloud storage services is enabled by a blockchain-based structure. The integrity checking method mostly involves three entities: the Data Owner (DO), the Cloud Service Provider (CSP), and the blockchain. A Data Integrity Service (DIS) is built on top of the blockchain, which uses Merkle trees for data integrity verification. The process has two phases: a pre-processing phase and a verification phase [12].
Pre-processing Phase. The data are first processed by the data users into fragments, and these fragments are used to construct a Merkle hash tree. The user and the CSP then approve the hash tree, the user stores its root, and the user data and public Merkle trees are uploaded to the CSP [12].
Verification Phase. The client sends a challenge number to the CSP, and the CSP selects the corresponding shards to check. A hash digest is calculated from the challenge number and the shards, and the CSP submits the digest and the equivalent supporting data to the blockchain. A smart contract calculates a fresh hash root and compares it with the stored root; data integrity is assured when the roots are equal [12].
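The two phases above can be sketched in a few lines. This is a simplified illustration: it recomputes the root from all shards, whereas real schemes verify a challenged shard against a logarithmic path of sibling hashes; the shard names are invented for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes pairwise up to a single root
    (an unpaired last node is hashed with itself)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            right = level[i + 1] if i + 1 < len(level) else level[i]
            nxt.append(h(level[i] + right))
        level = nxt
    return level[0]

# Pre-processing phase: the owner fragments the file and keeps the root.
shards = [b"shard-0", b"shard-1", b"shard-2", b"shard-3"]
root = merkle_root(shards)

# Verification phase: the smart contract recomputes the root from the
# CSP's response and compares it with the stored root.
assert merkle_root(shards) == root  # intact data passes
assert merkle_root([b"shard-0", b"TAMPERED", b"shard-2", b"shard-3"]) != root
```

Because any change to any shard changes the root, the owner only needs to store 32 bytes locally to detect corruption of arbitrarily large outsourced data.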
3.3 Blockchain-Based Access Control for Cloud Storage Data
Because the cloud environment is untrusted, blockchain technology can be used to secure access control to cloud storage data. The attribute-based encryption scheme provides access control on top of blockchain technology, whose decentralized ledger keeps an immutable log of all relevant events, such as revocation, key generation, access policy assignment, and access requests [12]. To guarantee secure access to sensitive information, a smart contract-based access control mechanism is intended to be reliable, flexible, and practical.
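To make the idea concrete, the toy sketch below models the policy-check-plus-logging behavior such a smart contract would enforce. All names are hypothetical, the list stands in for the on-chain ledger, and the policy is AND-only, whereas real CP-ABE supports much richer access formulas.

```python
# Toy attribute-based access check with an append-only event log.
event_log = []  # in the real schemes, this log lives on the blockchain

def log_event(kind: str, detail: str) -> None:
    event_log.append({"kind": kind, "detail": detail})

def grant_access(user_attrs: set, policy: set) -> bool:
    """Grant access only if the user holds every attribute the policy
    requires; every request is logged, granted or not."""
    allowed = policy <= user_attrs
    log_event("access_request", f"attrs={sorted(user_attrs)} allowed={allowed}")
    return allowed

doctor = {"role:doctor", "dept:cardiology"}
policy = {"role:doctor"}
assert grant_access(doctor, policy) is True
assert grant_access({"role:visitor"}, policy) is False
assert len(event_log) == 2  # the denied request is logged too
```

Putting the log on an immutable ledger is what turns ordinary access control into *auditable* access control: neither the storage provider nor a revoked user can later erase the record of a request.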
3.4 Auditing Schemes for Blockchain-Based Cloud Storage Data
With the popularity and rapid improvement of cloud computing, many organizations and individuals share and store information on the cloud without appreciating the risk of untrustworthy cloud providers; auditing of shared information in cloud storage has therefore become a major issue. In group-based public auditing, all the TPA needs to check the data evidence is the group manager's public key; furthermore, the group manager cannot modify the records arbitrarily [12]. Decentralized blockchain-based public auditing [7, 13] for cloud storage was introduced to eliminate the TPA, enhancing the stability and reliability of the framework. A smart contract with an automatic auditing protocol periodically checks the integrity of the data in the cloud on behalf of the data owner, and audit results cannot be altered unless every copy of the smart contract stored by the nodes in the system is compromised [12].
3.5 Security and Privacy Issues in Blockchain-Based Cloud Storage Data
Blockchain technology is based on peer networks, a shared framework, and peer resource computation. To improve blockchain security, methods such as Proof of Work (PoW) and Proof of Storage have been implemented. Even so, the security and privacy of cloud data remain at risk even as blockchain security continuously improves [12]. Article [11] discusses further privacy preservation using BC-PECK.
4 Discussion
4.1 Problem Overview
The literature review helped in understanding different ways of auditing outsourced cloud storage data and of securing data integrity and authenticity. Owing to the popularity, availability, and scalability of cloud storage, many organizations and private users store their information there, but protecting that data from malicious cloud service providers and third-party auditors (TPAs) has become challenging for data owners and users. Because CSPs and TPAs cannot always be trusted, blockchain-based auditing methods have become increasingly popular and have been adopted by many users.
4.2 Blockchain-Based Auditing: Pros and Cons
Identity management is vital for cloud service providers and cloud computing users: users rely on their identities to access their data in the cloud. Many identity administration systems exist, each with its own limitations; blockchain technology avoids these issues by providing a safe method without any trusted party, and various public auditing schemes have been researched on this basis. The literature shows that blockchain-based public auditing schemes either rely on a trusted third-party auditor or dispense with one, and may additionally build on attribute-based or decentralized data integrity schemes based on blockchain. Each of the reviewed articles presents its own methods, frameworks, and system models, properly justified against existing models.
The auditing scheme of [5] involves only two predefined entities (CSPs and DOs), who need not trust each other, thereby removing third parties from data auditing. To reduce the probability of the auditor being compromised, the scheme relies on DOs dynamically selected from the blockchain network to perform the auditing operations; this is possible because the hashed tags of encrypted file blocks are stored in the blockchain and witnessed by the entire network, so that anyone in the network can serve as a public auditor. This auditing can defend against malicious entities and the 51% attack on the blockchain network. To solve the problem of malicious attackers in traditional TPA-based schemes, the distributed nodes in the blockchain network act as auditors to check integrity; the framework uses a private key issued by the KGC to compute a linear authenticator of the file before the client uploads the data to the CSP.
Smart contracts serve as the auditors on the blockchain nodes, processing client auditing requests and executing the proof verification algorithm [3]. In IBPA, the public blockchain mechanism of the Bitcoin system is used as the key technology for public auditing. In IBPA,
the TPA's auditing results are written into the public blockchain, serving as undeniable evidence that the TPA executed the auditing agreement in compliance with user requirements [4]. PMAB presents a customized blockchain architecture for public auditing that enables all CSPs to automatically audit each other through an audit contract, releasing DOs from the audit work; it further includes a credit-based incentive mechanism to resist collusion attacks while quantifying the behavior of entities. Although it replaces TPA auditing with mutual, credit-based auditing, some threats to data security remain [3]. In the framework of [7], a malicious ASP could spoof users by colluding with cloud service providers to forge audit results; data at the CSP may have been corrupted, which could impact its reputation, while a malicious user may deny the quality of the service provided by the CSP to avoid paying for a reliable service already received. To protect the rights and interests of all parties, this work lets the user authenticate the storage service provided by the CSP and pay a fee only if the data are intact, with the user bearing retroactive responsibility for data verified by the ASP for a limited period. However, efficiency analysis shows that contract confirmation alone takes up most of the time in the Ethereum implementation [5, 8]. By introducing blockchain technology, the potential single point of failure of the central authority in the original scheme is, to some extent, resolved. On the other hand, [8, 9] propose a decentralized, blockchain-based architecture for cloud storage systems, combining a Java-based blockchain network with a cloud storage system and supporting a revocation process without involving any trusted authority; the architecture addresses the key escrow problem by using two authorities in the key generation process.
It provides a distributed approach to generating key-related information, user access policies, and revocation process details without any single authority [6, 7, 9]. In DBPA, a malicious cloud server can no longer guess the challenge message ahead of time, and zero-knowledge proofs (ZKP) protect user privacy: instead of returning the aggregated tag, the cloud server returns a blinded version of the tag together with a ZKP showing its correctness [13]. Certain limitations remain; for example, the resistance to malicious auditors increases the computational overhead on the user side.
4.3 Adapted Framework
After reviewing the selected articles and discussing various blockchain-based public auditing, security, and privacy schemes, I have adopted a framework that combines multi-replica, multi-cloud public auditing with privacy-preserving blockchain-based technology. I anticipate that this framework can maintain a proper public auditing protocol while providing privacy at the same time. Comparing the various blockchain-based auditing technologies, smart contract-based privacy preservation appears more effective in securing cloud storage data. Multi-replica, multi-cloud storage protects data in the event of cloud server failure or a compromise of data security, and all replicas can be audited at the same time.
Fig. 2 Extracted combined public auditing, blockchain-based
Privacy preservation of the data is based on blockchain and public key searchable encryption (Fig. 2).
5 Future Works
Several directions remain for further research. The audit process needs further work on secure and accountable entities, such as a TPA mechanism built on blockchain techniques. Future work will focus on providing secure and efficient services relying on blockchain-based public auditing, and on exploring more efficient public auditing mechanisms, such as simpler ways to construct challenge messages and a simplified check-log algorithm.
6 Conclusion
Based on all the articles reviewed and a comparison of their blockchain-based public auditing and privacy-preserving methods, a multi-replica, multi-cloud data audit scheme can track the identities of malicious users and supports the modification, insertion, and deletion of cloud replica data. The scheme also restricts the behavior of third-party auditors using blockchain technology, and achieves good performance in computation and communication overhead. On
the other hand, the BC-PECK scheme uses blockchain technology to construct credible and reliable cloud storage data privacy management. Data privacy and security are maintained by keeping stored and transmitted data in ciphertext, and the accuracy and efficiency of multi-keyword data sharing are realized through the PECK technology. In addition, fair and credible access control of data in multi-user scenarios is achievable using blockchain and smart contract technology. The literature review has presented various schemes that demonstrate promising, secure, and credible approaches to public auditing of cloud storage data. Although the literature provides substantial evidence for securing cloud storage data, the proposed solutions involve very complex methods; most of the papers discuss intricate blockchain structures, algorithms, and intensive performance experiments. This project has therefore been completed as a pure review, given the limitations on performing experiments and analysis.
Appendix: Abbreviations
BC-PECK  Blockchain-based Public Key Encryption with Conjunctive Keyword search
CP-ABE   Ciphertext-Policy Attribute-Based Encryption
CDH      Computational Diffie-Hellman
DBPA     Decentralized Blockchain-based Public Auditing
DIS      Data Integrity Service
DL       Decision Linear
DO       Data Owner
DU       Data User
CSP      Cloud Service Provider
IBPA     Identity-Based Public Auditing
KGC      Key Generation Center
MHT      Merkle Hash Tree
PKG      Private Key Generator
PECK     Public Key Encryption with Conjunctive Keyword search
PoW      Proof of Work
PoS      Proof of Stake
TPA      Third-Party Auditor
References
1. Yang X, Pei X, Wang M, Li T, Wang C (2020) Multi-replica and multi-cloud data public audit scheme based on blockchain. IEEE Access 8:144809–144822. https://doi.org/10.1109/ACCESS.2020.3014510
2. Wang S, Wang X, Zhang Y (2019) A secure cloud storage framework with access control based on blockchain. IEEE Access 7:112713–112725. https://doi.org/10.1109/ACCESS.2019.2929205
3. Yang H et al (2021) PMAB: a public mutual audit blockchain for outsourced data in cloud storage. Secur Commun Netw 2021:1–11. https://doi.org/10.1155/2021/9993855
4. Xue J, Xu C, Zhao J, Ma J (2019) Identity-based public auditing for cloud storage systems against malicious auditors via blockchain. Sci China Inf Sci 62(3):1–16. https://doi.org/10.1007/s11432-018-9462-0
5. Li J, Wu J, Jiang G, Srikanthan T (2020) Blockchain-based public auditing for big data in cloud storage. Inf Process Manage 57(6):102382. https://doi.org/10.1016/j.ipm.2020.102382
6. Li S, Liu J, Yang G, Han J (2020) A blockchain-based public auditing scheme for cloud storage environment without trusted auditors. Wirel Commun Mob Comput 2020. https://doi.org/10.1155/2020/8841711
7. Wang H, Wang XA, Xiao S, Liu J (2020) Decentralized data outsourcing auditing protocol based on blockchain. J Ambient Intell Humaniz Comput 12(2):2703–2714. https://doi.org/10.1007/s12652-020-02432-x
8. Sharma P, Jindal R, Borah MD (2021) Blockchain-based decentralized architecture for cloud storage system. J Inf Secur Appl 62:102970. https://doi.org/10.1016/j.jisa.2021.102970
9. Sharma P, Jindal R, Borah MD (2022) Blockchain-based cloud storage system with CP-ABE-based access control and revocation process. J Supercomput. https://doi.org/10.1007/s11227-021-04179-4
10. Ezhil Arasi V, Indra Gandhi K, Kulothungan K (2022) Auditable attribute-based data access control using blockchain in cloud storage. J Supercomput. https://doi.org/10.1007/s11227-021-04293-3
11. Jia-Shun Z, Gang X, Xiu-Bo C, Haseeb A, Xin L, Wen L (2021) Towards privacy-preserving cloud storage: a blockchain approach. Comput Mater Continua 69(3):2903. https://doi.org/10.32604/cmc.2021.017227
12. Sharma P, Jindal R, Borah M (2020) Blockchain technology for cloud storage: a systematic literature review. ACM Comput Surv 53(4):1–32. https://doi.org/10.1145/3403954
13. Miao Y, Huang Q, Xiao M, Li H (2020) Decentralized and privacy-preserving public auditing for cloud storage based on blockchain. IEEE Access 8:139813–139826. https://doi.org/10.1109/ACCESS.2020.3013153
Data Security in Hybrid Cloud Computing Using AES Encryption for Health Sector Organization Pratish Shrestha, Rajesh Ampani, Mahmoud Bekhit, Danish Faraz Abbasi, Abeer Alsadoon, and P. W. C. Prasad
Abstract The healthcare sector handles a very large volume of patient information that needs to be recorded, and the cloud provides the necessary infrastructure at low cost and with better quality. Patient health records are implemented digitally as Electronic Health Records (EHRs), which can contain sensitive patient data such as scanned images and X-rays. Security of data in the cloud is an important issue due to different kinds of security threats, such as distributed denial-of-service (DDoS) and man-in-the-middle (MITM) attacks. This study emphasizes the importance of data security in the hybrid cloud for the health sector using encryption with the Advanced Encryption Standard (AES). We present Data, Speed Efficiency, and Electronic Health Record (DSE), a taxonomy that defines each of the major components required to implement data security in hybrid cloud computing. EHR security is provided by an encryption technique such as AES, applied before records are uploaded to the hybrid cloud system. We organize the DSE taxonomy according to speed and encryption. The study's main contribution is establishing a strong link between data security and healthcare organizations, as well as identifying the open issues and challenges for patient data security in the health field. As a recommended solution, EHRs are encrypted with a secret key before uploading
P. Shrestha Study Group Australia, Darlinghurst, Australia R. Ampani Peninsula Health, Melbourne, Australia R. Ampani · M. Bekhit · D. F. Abbasi · A. Alsadoon · P. W. C. Prasad (B) Kent Institute Australia, Melbourne, Australia e-mail: [email protected] M. Bekhit University of Technology Sydney, Ultimo, Australia A. Alsadoon · P. W. C. Prasad Charles Sturt University, Bathurst, Australia A. Alsadoon Western Sydney University, Penrith, Australia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_15
P. Shrestha et al.
to the cloud server. Thus, the data will be safe, as it cannot be changed without the involvement of the data owner and the healthcare organization.
Keywords Cloud computing · Electronic health record (EHR) · Advanced encryption standard (AES)
1 Introduction
AES is efficient in a hybrid cloud for the management of healthcare organizations, which handle elements such as EHRs, doctor and hospital records, staff information, and historical data. Every element can be handled in hybrid cloud computing with the AES encryption technique. Although cost savings are a long-term benefit, the initial deployment cost is very high. Such a system helps with scalability through unlimited storage, with mobility, since the organization does not need to worry about hardware and software, and with other upgraded features that benefit the healthcare sector. With AES applied on the client side, the data remains secure, ensuring the security of the data in the cloud. The health institution also obtains more flexibility and more options for data deployment with the introduction of hybrid clouds. Although many new data security methods for hybrid cloud computing with encryption have been proposed, few have implemented the AES technique for the encryption of hybrid clouds, and while hybrid computing is becoming common commercially for storing data and performing multipurpose functions, it is not common in health sector organizations with AES [1]. One plausible reason such systems are not widely used is that the data of patients, hospitals, and doctors is not taken seriously enough. The AES algorithm achieves rapid execution and higher security compared with other algorithms such as the Data Encryption Standard (DES) and Blowfish [2]. Another important feature of AES is that it is open and unpatented, so no licence is required to implement it across industries and organizations.
This research project emphasizes the importance of using AES encryption for data security in hybrid clouds in the health sector. The technique used in this paper is AES encryption applied on a hybrid cloud, and the domain of this project is data security in the health sector. As cloud computing technology advances significantly, the need for data security in cloud computing also increases significantly. Cloud computing technology has been implemented in the health sector to provide better service to patients and maintain a proper system for patient prioritization [3]. Some of the issues related to healthcare organizations are data security and the privacy of patients, doctors, staff, etc. A hybrid cloud is a vital cloud computing model used commercially and combines a private
Data Security in Hybrid Cloud Computing Using AES Encryption …
157
cloud and a public cloud. By organizing 16 state-of-the-art journal articles that discuss encryption techniques within healthcare systems, we assess the practicality of the DSE taxonomy. The articles were selected from 38 papers in the hybrid cloud domain to identify the most prominent and active authors and groups. We verify the DSE taxonomy according to its goodness of fit in the process. Lastly, we compare the taxonomy's components with other review articles to assess the completeness of the system. The remainder of the paper is organized as follows: the literature review is in Sect. 2, the proposed DSE taxonomy is in Sect. 3, the DSE classification is in Sect. 4, the validation and evaluation are in Sect. 5, a summary of the discussion is in Sect. 6, and the conclusion is at the end of this paper.
2 Literature Review Healthcare organizations obtain greater flexibility and several more options for data deployment with the introduction of the hybrid cloud. EHRs in the cloud should be both secure and scalable. Hybrid clouds provide additional flexibility and greater optimization for business workloads [4–8]. The authors of [9] investigated whether cloud storage enables a better storage center for patient data. The tedious task of managing infrastructure and the cost of development and maintenance are reduced with the help of a cloud storage facility in the health department. Encryption with protocols has a major impact on cost, but the AES encryption technique solves this problem in the organization's hybrid cloud. Patient data security must be considered, and an appropriate system should be designed to protect sensitive medical records and patient details [10]; such a system raises costs due to its big-data encryption base. Reference [2] found that the growing number of connected medical devices constantly collecting data has increased the use of cloud storage for clinical data, which is HIPAA-compliant and easily accessible to authorized users. The authors of [1] proposed different algorithmic techniques, such as AES and the Data Encryption Standard (DES), for public cloud computing in hospitals. Babitha and Babu [11] proposed several encryption techniques for encrypting the EHR, among which AES best secures privacy. Due to different vulnerabilities like MITM, DDoS, etc., data can be compromised; cloud computing is used for managing, controlling, and accessing data. Modi and Kapadia [12] proposed ElGamal cryptography for healthcare data stored on hybrid clouds, with proxy re-encryption and linear network encoding, increasing the security of the cloud to safeguard the EHR.
The authors of [13] proposed enhancing the security of health records by making use of the Health Insurance Portability and Accountability Act (HIPAA) in different applications of mobile data storage. The cost is significantly increased by the proposed system, but it enhances the security of mobile data and healthcare data stored on hybrid clouds [14]. Lounis et al. [15] proposed securing EHRs on hybrid cloud storage by implementing encryption with the Data Encryption Standard (DES). Cloud service providers need to
158
P. Shrestha et al.
deliver a standard level of privacy for the healthcare organization's data stored in the cloud [2]. Many ontologies have been designed for speed efficiency in encryption protocols and for encryption efficiency in electronic health record systems. Nevertheless, most were created considering only the encryption factor, without the critical key size of the encryption and the type of cloud, which are critical factors for EHRs. Moreover, many papers investigate encryption schemes that are not relevant here. We present a more sophisticated taxonomy based on the most relevant factors. The proposed DSE taxonomy depends entirely on three factors: data (which data types are used for encryption), speed efficiency (e.g., how long it takes to encrypt and decrypt a given number of files), and the electronic health record (EHR).
3 Proposed DSE Taxonomy The proposed system provides data security protection for secure electronic health records. It helps build practical experience by engaging with the encrypted records on the hybrid cloud. The proposed system is based on the DSE taxonomy (Data, Speed Efficiency, and Electronic Health Record), created by reviewing previous and current AES encryption in hybrid cloud systems. We build a taxonomy that considers the most relevant factors for validating, verifying, and assessing such a system. The first component in the DSE taxonomy is data. It includes the patient's raw and image data, with demographic information and acquisition sensor properties. These classes are used to identify whether all the data has been accumulated correctly. Second, speed efficiency is used to classify the encryption assessment methods and evaluate the result of the proposed work. Its subclasses include the encryption processes used to determine the speed of the algorithm. Lastly, we classify based on the patient's health record, which represents handwritten patient data converted to a computer-readable format using an optical character recognition algorithm. All three components, their relations, and their subclasses are described in Fig. 1. The component classes and components, with their best relative effectiveness and the categories they can occupy, are provided in Table 1. Data: The purpose of using data as one of the classifications is to provide necessary information about the patients and staff of a healthcare system. All data included in the system is either raw or image data. Articles use different data types, but they are mostly categorized as raw or image data. Raw data includes numbers, alphabets, words, issues of the patient, age, height, address, etc. Image data includes images such as X-Rays, Computed Tomography (CT) scans, video X-Rays, Magnetic Resonance Images (MRI), etc.
Raw data mostly includes demographic information about patients, doctors, and staff. These must be ciphered before being sent to the cloud.
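To make the raw/image split concrete, the routing of incoming records into the two Data subclasses before encryption could look as follows. This is our own illustrative sketch, not from the paper; the helper name and the extension lists are assumptions.

```python
# Hypothetical sketch: routing incoming EHR items into the two DSE "Data"
# subclasses (raw vs. image) by file extension before client-side encryption.
# The extension sets below are illustrative assumptions.

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".dcm", ".tiff"}  # X-ray/CT/MRI exports
RAW_EXTENSIONS = {".txt", ".csv", ".json", ".xml"}             # demographic records


def classify_record(filename: str) -> str:
    """Return 'image', 'raw', or 'unknown' for a record file name."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    if ext in IMAGE_EXTENSIONS:
        return "image"
    if ext in RAW_EXTENSIONS:
        return "raw"
    return "unknown"
```

A record classified as `unknown` would be flagged for manual review rather than silently encrypted under the wrong subclass.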
Fig. 1 The percentages (%) of system components, classes, and subclasses explained in the chosen publications. For example, 60% of the journals mention the AES algorithms used in encryption
Speed Efficiency: The final goal of this project is to secure the data of health-related sectors in hybrid clouds with encryption of data in minimal time. Thus, the evaluation of speed efficiency is one point of this review article. The speed depends entirely on the encryption process with the Advanced Encryption Standard and varies with the key size used to encrypt the data: 128, 192, or 256 bits. The larger the key size, the longer it takes to encrypt. Data of a small size can use AES 128-bit encryption, a medium size can use AES 192-bit encryption, and a large size uses 256-bit encryption. Some journals identify the speed of the encryption process for different key sizes, while others focus only on the encryption and disregard the speed. The classes, subclasses, essential attributes, and sample instances are displayed in Fig. 1. Electronic Health Record: The authors of [9] reviewed the basic components for evaluating encryption performance for different data sizes. In this article, we consider two major subclasses of the patient's history data, which were written on letterpads and notepads by doctors when there were fewer computers and people relied on files and copies. This written data is converted to digital format using an algorithm. These records are taken into consideration so that every patient and staff member is recorded in the hybrid cloud. All the ciphered text is then transferred to the cloud.
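The key-size dependence described above follows directly from the AES specification: each additional key size adds transformation rounds (each round applying SubBytes, ShiftRows, MixColumns, and AddRoundKey, the steps listed in Table 1). A minimal sketch, with round counts from the AES standard and an illustrative ShiftRows step (the helper names are ours):

```python
# Round counts per the AES specification (FIPS 197): more rounds per block
# is the main reason larger keys take longer to encrypt.
AES_ROUNDS = {128: 10, 192: 12, 256: 14}


def relative_cost(key_bits: int) -> float:
    """Rough per-block work relative to AES-128, counting rounds only."""
    return AES_ROUNDS[key_bits] / AES_ROUNDS[128]


def shift_rows(state):
    """ShiftRows on a 4x4 state matrix: row i is rotated left by i positions."""
    return [row[i:] + row[:i] for i, row in enumerate(state)]
```

By this rough measure, AES-256 does 1.4 times the per-block round work of AES-128, which matches the qualitative "larger key, longer time" observation in the reviewed works.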
4 DSE Classification In the following section, each of the components of the proposed system is described and the solutions to the problem are analyzed in Tables 1 and 2.
Table 1  DSE classification of data security in hybrid cloud computing

| Reference | Type of encryption | Data R (demographic) | Data I (acquisition) | Processing | Input size | Speed processing | EHR algorithm |
|---|---|---|---|---|---|---|---|
| [11] | AES | D: A, N, G | Aq: X-Ray, CT | STR, UNSTR, FN | KB | N/S | OCRad |
| [1] | ABE | D: N, A, C | Aq: X-Ray, CT | UNSTR, FN | KB | AddRoundKey, ShiftRow, SubByte | GR |
| [16] | AES | D: A, N, G | Aq: CT, X-Ray | STR, UNSTR, FN | KB | SubByte, ShiftRow, MixColumn | FineReader based on OCR algorithm |
| [9] | AES | Big data | Aq: Video X-Ray, MRI, CT | STR, FN | KB | SubByte, ShiftRow, MixColumn, AddRoundKey | OCR |
| [10] | AES | D: N, A, S, C | N/S | STR, FN | KB | SubByte | OCR |
| [12] | ElGamal cryptography | D: N, A, S, C | Aq: CT, X-Ray | STR, FN | KB | N/S | OCR |
| [13] | HIPPA | D: H, W, N | Aq: Video X-Ray | STR | KB | MixColumn, ShiftRow | N/S |
| [3] | AES | D: N, A, S | Aq: MRI, CT | STR, FN | KB | ShiftRow, MixColumn, AddRoundKey | OCR |

EHR record descriptors given in the source for these rows include: L; L, H; Ciphered text; Big data ciphered; Big data structured; Data are structured in a proper way; Can look at the info. at different views; File and Folder.
Table 1 (continued)

| Reference | Type of encryption | Data R (demographic) | Data I (acquisition) | Processing | Input size | Speed processing | EHR algorithm |
|---|---|---|---|---|---|---|---|
| [17] | SKE | D: G | Aq: MRI, X-Ray | FN, UNSTR | KB | Varying speed performance with the key size | OCR algorithm |
| [18] | Radio frequency | D: A, N, C | Aq: CT | STR, FN | KB | Variation method | OCR |
| [15] | HIPPA | N/S | N/S | FN | KB | Variation method | N/S |
| [19] | AES | N/S | N/S | FN | KB | Variation method | FineReader based on OCR algorithm |

EHR record descriptors given in the source for these rows include: L, H; GR; Ciphered text; Big data ciphered.
Table 2  Abbreviations

| Abbreviation | Definition | Abbreviation | Definition |
|---|---|---|---|
| D | Demographic information | L | Letterpad |
| Aq | Acquisition sensor | H | Handwritten text |
| N | Name | GR | Glyph recognition |
| A | Age | CHISTAR | Cloud health information system technology architecture |
| S | Sex | SKE | Symmetric key encryption |
| C | Complaint | N/S | Not specified |
| STR | Structural | OCR | Optical character reader |
| FN | Functional | KB | Kilobytes |
5 DSE Validation and Evaluation We identified the system components that must be validated and evaluated to determine whether there is surplus value in using the system. We contrast validation and evaluation as follows: validation shows that the data remains encrypted with minimal time and high accuracy, while evaluation is concerned with determining the value and convenience of the system, i.e., whether the system considers enough factors or classes to build a sufficient encrypted system. Table 3 summarizes this validation and evaluation. All papers described a method of evaluation or validation of their schemes. In most of the journals, the authors target the efficiency of the whole system by encrypting with AES using different key sizes for variable input sizes. Several works also rely on the system's accuracy when comparing the original text with the decrypted text downloaded from the server. In some of the selected journals, the speed and efficiency components of the system were validated. El Bouchti et al. [1] addressed the evaluation part of the system: the effective factors in building an effective encryption system in the cloud for use in the health sector. The major factors to be considered for speed efficiency are the key sizes of 128, 192, and 256 bits. The other factor is the data, where different raw and image data types are used. El Bouchti et al. [1] also concentrated on the validation part; they simulated data to calculate speed using different key sizes, and examined different scenarios, such as processor and CPU speed.
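The accuracy criterion several works use, comparing the decrypted file against the original, can be sketched as a byte-level match percentage. This is a hedged illustration; the function name and exact metric are ours, not taken from any of the reviewed papers.

```python
# Hedged sketch of the accuracy check used in several reviewed works:
# compare the decrypted output byte-for-byte against the original and
# report the percentage of matching byte positions.

def decryption_accuracy(original: bytes, decrypted: bytes) -> float:
    """Percentage (0-100) of byte positions that match between the files."""
    if not original and not decrypted:
        return 100.0
    length = max(len(original), len(decrypted))
    matches = sum(a == b for a, b in zip(original, decrypted))
    return 100.0 * matches / length
```

A lossless encrypt/decrypt round trip should score 100%; figures like the 99.7% reported in Table 3 suggest the corresponding pipeline was not byte-exact.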
Table 3  Evaluation and validation of the system

| Reference | Protocol | Component validated or evaluated | Study criteria | Validation/evaluation method and/or data set | Simulation tools | Results |
|---|---|---|---|---|---|---|
| [11] | AES-128, 192, 256 | Whole system | Accuracy, speed | Data in img, pdf, docx format | C# | As throughput increases, the speed of encryption decreases with different block sizes ranging from 0.5 to 20 MB |
| [1] | ABE-128 | Encryption method | Speed | 128 bits of data | MATLAB | Small data are encrypted in less than 0.001 ms; encryption error less than 12 bits of data |
| [16] | AES-128, 192, 256 | Data, encryption method | Accuracy, speed | Demographic and acquisition sensor's data | C# | Data to be encrypted depends on the block size, but running at higher processing speed gives less time to encrypt the data |
| [9] | AES-256 key | Data | Accuracy | Data of variable size | Programmable C | Increase in speed with a high-end processor |
| [10] | AES with all possible key sizes | Whole system | Speed | Png, jpg, docx, pdf | MATLAB | Accuracy after decrypting the encrypted file is met |
Table 3 (continued)

| Reference | Protocol | Component validated or evaluated | Study criteria | Validation/evaluation method and/or data set | Simulation tools | Results |
|---|---|---|---|---|---|---|
| [12] | ElGamal cryptography | Encryption method | Accuracy, speed | Data of variable size | Programmable C | As throughput increases, the speed of encryption decreases with different block sizes ranging from 1 to 20 MB |
| [13] | N/S | Whole system | Speed | Data of size thirteen, fifty-six kilobytes | Google App Engine | Data to be encrypted depends on the block size, but running at higher processing speed gives less time to encrypt the data |
| [3] | AES | Whole system | Speed | Data of variable size | MATLAB | Increase in speed with a high-end processor |
| [17] | SKE | Data | Accuracy | Data in img, pdf, docx format | C# | Data found to be 99.7% accurate when decrypting the original file |
| [18] | AES-128, 192, 256 | Whole system | Speed | 256 bits of data | MATLAB | Increase in speed with a high-end processor |
| [15] | HIPPA | Encryption method | Accuracy, speed | Data of different formats such as png, jpg, docx, pdf | Programmable C | Speed found to decrease when encrypted in the client section before uploading to the cloud |
| [19] | AES | Encryption method | Speed | Data of different formats such as png, jpg, docx, pdf | MATLAB | Increase in speed with a high-end processor |
6 Discussion In this section, we discuss the components of the DSE taxonomy that were not explicitly explained in the 16 selected journals. We provide instances from the articles to show how the data, algorithms, and EHR are presented. Moreover, we highlight the importance of carefully considering the selected answers for the components. While data without a classification is well developed and defined in each of the 16 selected publications, only a few publications showed a data classification. We use data as a classification because it provides necessary information about the patients and staff of a healthcare system. All data included in the system is either raw or image data. Articles use different data types, but they are mostly categorized as raw or image data. Raw data includes numbers, alphabets, words, issues of the patient, age, height, address, etc. Image data includes X-Rays, Computed Tomography (CT) scans, video X-Rays, Magnetic Resonance Images (MRI), etc. Raw data mostly includes demographic information about patients, doctors, and staff; these must be ciphered before being sent to the cloud. The main goal is to secure data for health-related sectors in hybrid clouds with encryption in minimal time. Speed depends entirely on the encryption process. Some journals describe performing the encryption in the cloud, which tremendously increases the encryption time due to high latency with the cloud server; encrypting on the client side therefore helps in decreasing the time. The speed also depends on the key size of 128, 192, or 256 bits. For evaluating the DSE taxonomy, we looked at the overlap of terms between the literature and the taxonomy. To be exact, we determined the overlap between the instances that a class and component of the DSE taxonomy can acquire and the terms discovered for the given components in Table 1.
It is impossible to test the above method for all the terms in our taxonomy, since some parameters are related to each other. For example, ciphered text is correlated with the AES algorithm, which means that even if a publication does not mention ciphered text, it is still implied by the AES algorithm.
7 Conclusion The purpose of this study is to focus on the impact of encrypting data with the Advanced Encryption Standard (AES) to ensure its security in hybrid cloud environments used by the healthcare industry. This study offers several contributions, such as the link between data security and healthcare organizations, and the open issues and challenges for patient data security in the health fields. The EHR may include data, scanned images, and X-Rays, which contain sensitive patient information. The study is structured around Data, Speed Efficiency, and Electronic Health Record (DSE). Within the context of healthcare, the recommended solution proposes that EHRs be encrypted with a secret key before uploading to the cloud server. In a hybrid
cloud environment, the data cannot be compromised without the participation of the data owner as well as the healthcare organizations. This ensures that the data is kept in a secure environment. This work analyses the constraints and suggests potential directions for future work.
References
1. El Bouchti A, Bahsani S, Nahhal T (2016) Encryption as a service for data healthcare cloud security. IEEE
2. Kumarage H et al (2016) Secure data analytics for cloud-integrated internet of things applications. IEEE Cloud Comput 3(2):46–56
3. Daman R, Tripathi MM, Mishra SK (2016) Security issues in cloud computing for healthcare. In: 2016 3rd international conference on computing for sustainable global development (INDIACom). IEEE
4. Altowaijri SM (2020) An architecture to improve the security of cloud computing in the healthcare sector. In: Smart infrastructure and applications. Springer, pp 249–266
5. Mittal A (2020) Digital health: data privacy and security with cloud computing. Issues Inf Syst 21(1):227–238
6. Dadhich P (2020) Security of healthcare systems with smart health records using cloud technology. In: Machine learning with health care perspective. Springer, pp 183–198
7. Alenizi BA, Humayun M, Jhanjhi N (2021) Security and privacy issues in cloud computing. In: Journal of physics: conference series. IOP Publishing
8. Adel B et al (2022) A survey on deep learning architectures in human activities recognition application in sports science, healthcare, and security. In: The international conference on innovations in computing research. Springer
9. Yang J-J, Li J-Q, Niu Y (2015) A hybrid solution for privacy preserving medical data sharing in the cloud environment. Futur Gener Comput Syst 43:74–86
10. Fabian B, Ermakova T, Junghanns P (2015) Collaborative and secure sharing of healthcare data in multi-clouds. Inf Syst 48:132–150
11. Babitha MP, Babu KRRR, Secure cloud storage using AES encryption. IEEE
12. Modi KJ, Kapadia N (2019) Securing healthcare information over cloud using hybrid approach. In: Progress in advanced computing and intelligent engineering. Springer, pp 63–74
13. Paredes I et al (2015) Cranioplasty after decompressive craniectomy. A prospective series analyzing complications and clinical improvement. Neurocirugia 26(3):115–125
14. Gurav Y, Deshmukh M (2016) Scalable and secure sharing of personal health records in cloud computing using attribute based encryption. Int J Adv Comput Tech 5(6)
15. Lounis A et al (2016) Healing on the cloud: secure cloud architecture for medical wireless sensor networks. Futur Gener Comput Syst 55:266–277
16. Arunkumar RJ, Anbuselvi R, An enhanced methodology to protect the patient healthcare records using multi-cloud approach
17. Hosseini A et al (2016) HIPAA compliant wireless sensing smartwatch application for the self-management of pediatric asthma. In: 2016 IEEE 13th international conference on wearable and implantable body sensor networks (BSN). IEEE
18. He D, Zeadally S (2014) An analysis of RFID authentication schemes for internet of things in healthcare environment using elliptic curve cryptography. IEEE Internet Things J 2(1):72–83
19. Abbas A, Khan SU (2015) E-health cloud: privacy concerns and mitigation strategies. In: Medical data privacy handbook. Springer, pp 389–421
Cyber Warfare: Challenges Posed in a Digitally Connected World: A Review Ravi Chandra and P. W. C. Prasad
Abstract Today's world is highly dependent on electronic technology, and protecting this data from cyber-attacks is a challenging issue. Cyberspace is considered a new frontier representing a digital ecosystem, the next generation of Internet and network applications, promising a whole new world of distributed and open systems that can interact, self-organize, evolve, and adapt. At present, most of the economic, commercial, cultural, social, and governmental activities and exchanges of countries, at all levels, including individuals, non-governmental organizations, and government and governmental institutions, are carried out in cyberspace. Lately, countless private companies and government organizations around the world have been facing the problem of cyber-attacks and the dangers of wireless communication technologies. Globally, no sector is considered immune from the impacts of cyber warfare. Global multinationals, government agencies at all levels, large organisations, and critical infrastructure providers, no matter where they are headquartered or what their country of origin, are all targeted, primarily by criminals or state actors. Inadvertent or lateral collateral damage of nation-state-driven cyber warfare impacts organizations of all sorts. Most of the challenges and conflicts facing countries and groups today involve cyberspace. To conduct cyberspace operations, cyber target information must be collected in cyberspace, and cyber targets must be selected to achieve effective operational objectives. This article highlights the advancements presented in the field of cyber security and investigates the challenges, weaknesses, and strengths of the proposed methods. Keywords Cyber warfare · Cyber attack · Cyber incident · Internet of Things (IoT) · Intrusion detection · Cyberspace
R. Chandra · P. W. C. Prasad (B) Charles Sturt University, Bathurst, Australia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_16
169
1 Introduction For more than two decades, the Internet has played a significant role in global communication and has become increasingly integrated into the lives of people around the world. Innovation and low cost in this area have significantly increased the availability, use, and performance of the Internet, so that today the Internet has about 3 billion users worldwide [1]. The Internet has created a vast global network that generates billions of dollars annually for the global economy [2]. At present, most of the economic, commercial, cultural, social, and governmental activities and interactions of countries, at all levels, including individuals, non-governmental organizations, and government and governmental institutions, are carried out in cyberspace [3]. Vital and sensitive infrastructures and systems either form a part of cyberspace themselves or are controlled, managed, and exploited through this space, and most vital and sensitive information is transferred to this space. The consequences of cyber warfare can include the following [4–6]:
• The overthrow of the system of government or a catastrophic threat to national security.
• Simultaneous initiation of physical warfare, or laying the groundwork to facilitate the start of physical warfare in the near future.
• Catastrophic destruction of or damage to the country's image at the international level.
• Catastrophic destruction of or damage to the political and economic relations of the country.
• Extensive human casualties or danger to public health and safety.
• Internal chaos.
• Widespread disruption in the administration of the country.
• Destruction of public confidence or religious, national, and ethnic beliefs.
• Severe damage to the national economy.
• Extensive destruction or disruption of the performance of national cyber assets.
In addition, five scenarios can be considered for cyber warfare: (1) government-sponsored cyber espionage to gather information to plan future cyber-attacks, (2) a cyber-attack aimed at laying the groundwork for unrest and popular uprising, (3) a cyber-attack aimed at disabling equipment and facilitating physical aggression, (4) a cyber-attack as a complement to physical aggression, and (5) a cyber-attack with widespread destruction or disruption as the ultimate goal (cyber warfare) [7].
2 Research Methodology Keyword search was the primary method of identifying and gathering literature for analysis within this paper. The initial keywords were "Cyberspace", "Cyber Attack", and "Cyber Warfare". As subtopics such as information security, intrusion detection and
Fig. 1 Information gathering process
cyber deterrence were revealed, these were, as a result, also converted into keywords for supplementary searches. Materials were collected using Charles Sturt University Primo, Google Scholar, IEEE Xplore, and Springer search. Keeping in mind that cyber warfare is an interdisciplinary subject, journals from other disciplines such as law, international relations, and defence were also searched for relevant sources. The approach undertaken for collecting the review literature, which included peer-reviewed articles, reference materials, and other associated data, followed a systematic manner as illustrated in Fig. 1. Figures 2 and 3 help visualize significant relationships and theories within the research topic and assist in realizing the importance of key concepts and the associations among related concepts such as cyber warfare, cyber security, and information security, and how these relationships can help in the education and design of prevention and counter measures.
2.1 Definitions The topic of cyber warfare is a substantial one, with various subtopics. It is important to examine the most basic question of what cyber warfare is, comparing existing definitions to find common ground or differences. To understand the context of this review paper, it is important to identify the various terms used in this literature. While there is no universal definition of cyber warfare, Parks and Duggan [8] define it as "a combination of computer network attack and defense and special technical operations." The character of conflict in cyberspace is as diverse as the actors who exploit it, the actions they take, and the targets they attack. Table 4 provides further key terms around the basic definitions and concepts of cyberspace, information warfare, cyber threats, cyber vulnerabilities, and cyber warfare.
Fig. 2 Visual representation of topic keywords and relationship mapping
Fig. 3 14 common steps undertaken by a hacker
3 Literature Review As organizations increasingly use digital technology and data to perform business activities, they also face potentially greater cyber risk [9]. Security vulnerabilities in systems can expose organisations, government states, and even countries. These cyber risks can lead to state-sponsored cyber warfare, which can take place in many forms, with a cyber-attack being the primary focus. Table 2 discusses the details of the literature review. As technologies continue to evolve and progress, so does the attack surface, with cyber criminals looking to exploit vulnerabilities from other fields of technological advancement. An actor (hacker) typically undertakes about 14 steps to take control of a system, as depicted in Fig. 3, adapted from the MITRE ATT&CK framework.
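For reference, the 14 adversary tactics of the current MITRE ATT&CK Enterprise matrix can be enumerated as follows; the ordering is the matrix's published progression and may not match Fig. 3's depiction exactly.

```python
# The 14 adversary tactics of the MITRE ATT&CK Enterprise matrix, in the
# published order of a typical intrusion's progression.
ATTACK_TACTICS = [
    "Reconnaissance", "Resource Development", "Initial Access", "Execution",
    "Persistence", "Privilege Escalation", "Defense Evasion",
    "Credential Access", "Discovery", "Lateral Movement", "Collection",
    "Command and Control", "Exfiltration", "Impact",
]
```

Mapping observed attacker behaviour onto these tactics is a common way defenders structure detections and post-incident analysis.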
3.1 Vulnerability Within IoT Infrastructure During the last 5 years, there has been a great focus on the ubiquitous spread of Internet of Things (IoT) devices (such as fitness trackers, Amazon Echo, Google Nest, heart rate monitors, etc.) and smartphones across private and public places, home, and enterprise environments, with pervasiveness and mobility characteristics that make them well embedded in daily activities. IoT technologies, with their pervasive sensing capability and harmonious connectivity, have already changed our lives by creating a smart way of thinking and acting. The huge uptake of IoT in recent years has also left many of these devices unmanaged or poorly maintained by end users. During the COVID-19 pandemic, the usage of and reliance on these devices, in particular smartphones, became ever more prominent: the introduction of QR-code check-in and Bluetooth-based exposure notification systems had a massive impact on personal freedom. While these technologies may have helped contain the spread of the virus, they also led to a secondary significant threat to a technology-driven society, namely a series of indiscriminate and targeted cyber-attacks and cybercrime campaigns [10]. The authors of [11] use a qualitative technique based on "Influence Net" modeling to find chains of consequences and derive a model of attacks and impacts that could lead to system failures. The authors present a preliminary analysis of the risks and leave room for further quantitative evaluation of the global effects of the presented chain of consequences.
3.2 Influence of Artificial Intelligence Within Cyberspace Over the last decade, primary in the last 2 or 3 years, organisations have increased the efforts to leverage artificial intelligence (AI) methods in a broad range of cyber
security applications. The authors of [12] survey the existing literature on the applications of Artificial Intelligence (AI) in user access authentication, network situation awareness, dangerous behavior monitoring, and abnormal traffic identification. They also identify several limitations and challenges and, based on the findings, present a conceptual human-in-the-loop cyber security model. The research proposes a new human-in-the-loop model named the Human-in-the-Loop Cyber Security Model (HLCSM) and splits it into two modules: the Machine Detection Module (MDM) and the Manual Intervention Module (MIM). As part of the research, the authors introduce a Confidence Level Module (CLM) to determine whether the MIM needs to be called to complete the event processing. The emergence of AI should assist human beings rather than completely replace them. In this model, there is no need for a large number of security specialists, because the main work is done by the MDM. On the cutting edge of cybersecurity is Artificial Intelligence (AI), which is used to develop complex algorithms to protect networks and systems. However, cyber-attackers continue to exploit AI and are starting to use adversarial AI to carry out cyber-attacks.
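The routing idea behind the HLCSM can be sketched in a few lines. Note that the function name, signature, and the confidence threshold below are our own illustrative assumptions, not an implementation from [12].

```python
# Illustrative sketch (names and threshold are ours, not from [12]) of the
# HLCSM idea: a confidence-level check decides whether the machine detection
# module's verdict stands, or the event is escalated for manual intervention.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; the model leaves this tunable


def route_event(event_id: str, mdm_confidence: float) -> str:
    """Return which module handles the event: 'MDM' or 'MIM'."""
    if mdm_confidence >= CONFIDENCE_THRESHOLD:
        return "MDM"   # machine verdict accepted automatically
    return "MIM"       # low confidence: call for manual intervention
```

The design choice this captures is that analysts only see the small fraction of events the machine is unsure about, which is why a large security team is not required.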
3.3 Vulnerabilities and Challenges in Connected Vehicles

Another area of technological advancement, and a cause for concern, is the modern-day car. Cars today are built with automotive serial protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay, used to provide connectivity within the vehicle. The author of [13] discusses how, in recent years, vehicle connectivity with other vehicles, referred to as Vehicle-to-Vehicle (V2V), and with external infrastructure, referred to as Vehicle-to-Infrastructure (V2I), has become more widespread, with vehicle manufacturers needing to support external connectivity such as Global Positioning System (GPS) receivers and On-Board Diagnostics (OBD-II) based cellular dongles to support digital progress. As a result, the cyberattack surface of these vehicles continues to grow, leaving them vulnerable to attacks not only from inside but also from outside the vehicle. The authors focus on In-Vehicle Network (IVN) security by analysing existing cryptographic methods, limitations of the current CAN Bus countermeasures, existing IDS methods used to secure the IVN, and the datasets, software and hardware used to build connected vehicle systems. Their survey provides four key contributions: (i) a description of in-vehicle serial bus protocols (particularly the CAN Bus), (ii) an evaluation of current cryptographic and Intrusion Detection System (IDS) approaches used to protect vehicular data, (iii) a comparison and assessment of current mitigation strategies to protect vehicles against cyberattacks, and (iv) challenges and potential future research directions for in-vehicle cybersecurity.
The study concludes that cryptographic mechanisms have been implemented to secure the CAN Bus in existing vehicles against internal and external attacks in which an attacker gains access to the CAN Bus; however, encryption is complicated to implement due to the lack of computational resources in
Cyber Warfare: Challenges Posed in a Digitally Connected World: …
the current Electronic Control Units (ECUs), leaving scope for further research into and development of CAN Bus cybersecurity.
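One research direction in this space, CAN frame authentication, can be illustrated with a truncated HMAC, a common proposal in the literature because classic CAN payloads are only 8 bytes. This sketch is not the scheme from the surveyed work: the shared key, tag length, counter width and frame layout are all assumptions, and key distribution on resource-limited ECUs remains the open problem noted above.

```python
# Hedged sketch: authenticating a CAN frame with a truncated HMAC and a
# monotonic counter for replay protection. All parameters are illustrative.
import hmac
import hashlib
import struct

KEY = b"shared-ecu-key"  # assumed pre-shared key between sender/receiver ECUs

def tag_frame(can_id: int, data: bytes, counter: int) -> bytes:
    """4-byte truncated HMAC-SHA256 over id | counter | payload."""
    msg = struct.pack(">IQ", can_id, counter) + data  # uint32 id, uint64 counter
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:4]

def verify_frame(can_id: int, data: bytes, counter: int, tag: bytes) -> bool:
    """Constant-time comparison of the received tag against a recomputed one."""
    return hmac.compare_digest(tag_frame(can_id, data, counter), tag)

tag = tag_frame(0x123, b"\x01\x02\x03\x04", counter=7)
print(verify_frame(0x123, b"\x01\x02\x03\x04", 7, tag))   # genuine frame accepted
print(verify_frame(0x123, b"\x01\x02\x03\x04", 8, tag))   # wrong counter rejected
```

Truncating the tag to 4 bytes trades cryptographic strength for bus bandwidth, which is exactly the kind of resource compromise the study attributes to constrained ECUs.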
3.4 Vulnerabilities and Challenges in Unmanned Aerial Vehicles (UAV)

Another technological advancement is the introduction of drones. While such unmanned aerial vehicles bring numerous benefits, they also expose another avenue for cyber-attacks. The author of [14] surveys the existing literature on different cyberattacks and classifies them based on their attack entry points, which can be radio channels, messages or on-board systems. The article identifies six classes of UAV cyberattacks: channel jamming, message interception, message deletion, message injection, message spoofing and on-board system attack. The survey then compiles existing countermeasures for these six attack classes. Countermeasures are classified into three groups: prevention, detection, and mitigation. Prevention countermeasures stop a cyberattack from starting. When prevention countermeasures fail, detection countermeasures alert the UAV operator to an attack. After an attack is detected, mitigation countermeasures limit the damage. The study concludes that the navigation message (GPS) spoofing attack has the greatest number of proposed countermeasures. Cryptographic encryption is also effective in preventing almost all types of attack launched above the physical layer, except some forms of denial-of-service attack. The author of [15] discusses how the recent digital revolution has integrated robots more than ever into domains such as agriculture, medicine, industry, the military, law enforcement, and logistics. Robots are intended to serve, facilitate, and enhance human life. However, many incidents have occurred, leading to serious injuries and devastating impacts such as the unnecessary loss of human lives. Unintended accidents will always take place, but those caused by malicious attacks represent a particularly challenging issue.
This includes maliciously hijacking and controlling robots, causing serious economic and financial losses. The article reviews the main security vulnerabilities, threats, risks, and their impacts within the robotics domain. In this context, the authors provide different approaches and recommendations to enhance and improve the security level of robotic systems, such as multi-factor device/user authentication schemes in addition to multi-factor cryptographic algorithms. In a study conducted by the authors of [16], the complexity and adoption of cloud and mobile services are reviewed, along with how these have greatly increased the attack surface. To proactively address these security issues in enterprise systems, the authors propose a threat modeling language for enterprise security based on the MITRE Enterprise ATT&CK Matrix (MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations). The modeling language is designed using the Meta Attack Language framework
Table 1 Known cyber security practices

Privileged account management: To protect privileged accounts, e.g., AdminAccounts, from abuse and misuse, enterprises should limit their use, modification, and permissions
User account management: This defence is associated mainly with the AdminAccount asset to ensure proper user permissions
Execution prevention: Enterprises may use application whitelisting tools to block scripts and unapproved software that can be misused by adversaries
Restrict file and directory permissions: Enterprises can apply the least privilege principle to limit access to files and directories
Network intrusion prevention: Enterprises can apply the least privilege principle to limit access to files and directories
Disable or remove feature or program: To prevent misuse of vulnerable software, some features should be disabled
Audit: The security of enterprise systems should be systematically evaluated; this evaluation should include checking file system permissions for opportunities for abuse
Network segmentation: An architectural approach that divides a network into subnetworks to improve network performance and security
and focuses on describing system assets, attack steps, defences, and asset associations. The threat modeling process is used to analyse potential attacks, while attack simulations show all the paths through a system that end in a state where an adversary has successfully achieved his or her goal. As part of this study, the authors emphasise well-known cyber defence strategies that can be implemented as cyber security practices, as shown in Table 1.
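The asset/attack-step/defence associations such a modeling language describes can be sketched as a simple lookup. This is an illustrative reconstruction, not the actual Meta Attack Language or the authors' tooling; the asset names, attack steps and mappings are assumptions drawn loosely from the practices in Table 1.

```python
# Toy threat model: each asset carries candidate attack steps and the
# modelled defences that cover them. All entries are illustrative.
MODEL = {
    "AdminAccount": {
        "attack_steps": ["credential-abuse", "permission-escalation"],
        "defences": ["privileged-account-management", "user-account-management"],
    },
    "FileSystem": {
        "attack_steps": ["unauthorised-read"],
        "defences": ["restrict-file-and-directory-permissions", "audit"],
    },
}

def uncovered_steps(asset: str, deployed: set) -> list:
    """Attack steps on an asset for which no modelled defence is deployed."""
    entry = MODEL[asset]
    if deployed & set(entry["defences"]):
        return []  # at least one covering defence is in place
    return entry["attack_steps"]

# An enterprise that only audits leaves AdminAccount attack paths open.
print(uncovered_steps("AdminAccount", {"audit"}))
print(uncovered_steps("FileSystem", {"audit"}))
```

An attack simulation in the paper's sense would then enumerate chains of uncovered steps across associated assets; this sketch shows only the per-asset coverage check at the core of that idea.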
4 Discussion

From the sword battles of the past to the unmanned drone strikes of today, this game of power is constantly driven to shift and evolve by technology. The development of armoured vehicles, aircraft and ships, and the use of electronics and telecommunications, have all expanded the battle space and introduced new and innovative ways to gain an advantage over opponents [17]. The subject of cyber weapons encompasses a variety of challenges, from identifying what a cyber weapon is, to how cyber weapons differ from conventional weapons, to whether it is feasible to regulate their creation and use. While traditional weapons are designed to kill, injure, and damage infrastructure such as properties and buildings, a cyber weapon can be considered a piece of malware resulting in a cyber incident in which an actor can collect
data that can be used for financial gain, disrupt a country's economy or steal military secrets, to name a few. Cyberwar is dangerous, covert and typically waged out of sight. It takes place in cyberspace, a location that cannot be seen, touched, or felt. A lack of precise terminology to help define cyber warfare further clouds the issue. Data infringements and breaches occur more and more as a result of malevolent and unlawful attacks. However, human error continues to play a substantial part in these incidents and attacks, with malicious actors often obtaining entry to systems by exploiting human mistakes and vulnerabilities. According to the Allianz Risk Barometer, cyber incidents are now the most important business risk globally [18]. In the report produced by Allianz [19], it is reported that in 2020-21, within Australia, there was a 15% increase in ransomware-related cybercrime compared to the previous financial year (as reported in the Australian Cyber Security Centre's Annual Report) and that globally there was an increase of more than 105% over the last 12 months [20]. Table 2 shows recent examples of the significant scale of the threat organisations face from ransomware-driven cyber incidents. Cyber security has been under a constant spotlight and a key area of focus for the Australian Government throughout the last 12 months. As a push towards combatting cybercrime, the Security Legislation Amendment (Critical Infrastructure) Act 2021 (First Amending Act) came into force in December 2021. The First Amending

Table 2 Significant cyber breaches 2021 [20]
Occurrence | Organisation impacted | Description
June 2021 | Volkswagen (global car manufacturer) | Data breach in which customer data including full names, licence numbers, email addresses, mailing addresses and phone numbers was exposed online for over 18 months
Sep 2021 | Neiman Marcus (US retailer) | Data breach that occurred in May 2020, whereby an 'unauthorised party' accessed names, addresses, credit card information and gift card numbers. The intrusion was only detected in September 2021
Nov 2021 | Frontier Software (payroll software provider) | Personal information of over 80,000 South Australian public servants was stolen. The attack was orchestrated by the Russia-based hacking group Conti, which employs ransomware to encrypt a victim's data before attempting to sell them the decryption key. To date, Conti's haul of ransomware payments is thought to exceed US$32 million
Nov 2021 | GoDaddy (web hosting) | Data breach in which hackers stole information relating to more than 1.2 million of its users. The hackers used a compromised password to access GoDaddy's core systems
Act amends the scope of the Security of Critical Infrastructure Act 2018 (Cth) (SOCI Act), which underpins a framework for managing risks relating to critical infrastructure [20]. For organisations to combat cybercrime and cyber incidents that may lead into cyber warfare, there is a need to continually review the risk management frameworks and policies currently implemented. The authors of [21] review current risk management and the vital role it plays in tackling current cyber threats within the cyber-physical system (CPS). They aim to present an effective cybersecurity risk management (CSRM) practice using asset criticality, prediction of risk types and evaluation of the effectiveness of existing controls. They propose a novel unified CSRM approach that systematically determines critical assets, predicts risk types for an effective risk management practice and evaluates the effectiveness of existing controls. Cyber warfare can have an enormous impact on global multinational organisations through the unintentional or tangential collateral damage of nation-state-motivated attacks. The author of [22] reviews reports which, while anecdotal, show that state-sponsored cyberwarfare attacks are legion and are impacting large corporations across the globe. The key challenge the author identifies is the difficulty of attributing these cyberwarfare events to their perpetrators. The article also concludes that no company can completely protect itself, so controls must be deployed in balance with the risks. For any organisation, it is vital to have a robust cyber security response plan. The incident response plan should dictate detailed, sequential procedures to follow in the event of an incident. The incident coordinator (or a similar role) should ensure that each step of the process is completed and that progress is tracked and communicated on a rolling basis.
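The asset-criticality idea behind the CSRM approach can be illustrated with a simple risk-scoring sketch. The multiplicative formula, the 0-1 scales and the example risks below are assumptions for illustration only, not the method actually proposed in [21].

```python
# Toy risk prioritisation: rank risks by likelihood x impact x asset
# criticality, so scarce remediation effort goes to the top of the list.
def risk_score(likelihood: float, impact: float, criticality: float) -> float:
    """Each input on a 0-1 scale; a higher score means treat first."""
    return round(likelihood * impact * criticality, 3)

risks = [
    ("ransomware on payroll server", risk_score(0.6, 0.9, 1.0)),
    ("phishing of general staff",    risk_score(0.8, 0.4, 0.5)),
]

# Highest-scoring risk first.
for name, score in sorted(risks, key=lambda r: -r[1]):
    print(f"{score:5.3f}  {name}")
```

Weighting by criticality is what distinguishes this from plain likelihood-times-impact scoring: a moderately likely attack on a critical asset can outrank a highly likely attack on a minor one.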
5 Limitations and Future Work

While the reviewed articles have provided some great insights into types of cyber incidents and their escalation into cyber warfare, some key challenges and limitations have also been identified that open avenues for further research. Table 3 provides a high-level summary of research topics that can be explored to develop countermeasures against cybercrime and, potentially, cyber warfare.
6 Conclusion

Anecdotal reports show that state-sponsored cyberwarfare attacks are legion, and they are impacting large corporations across the globe. Cyber breaches can bring about significant monetary and reputational losses and can even compromise a firm's business continuity entirely, with consequences resonating throughout the economy. Adding to the challenge of a response is the difficulty in identifying the perpetrators of these cyberwarfare events. The losses to companies from cyber-attacks launched
Table 3 Further research areas

Unmanned aerial vehicles: Development of cryptographic encryption to avoid GPS spoofing
Unintended threats from nation-state cyberwarfare: Global companies could develop a framework that can be implemented in a manner that allows them to respond to threats by demonstrating their significant economic power, forcing nation-states to agree to rein in cyberwarfare attacks in the future
Robotics: Develop secure inbuilt design, develop logic to allow for self-isolation in the event of a system compromise, embed IDS and IPS, and use multi-factor authentication
Ethics: Under what principles can cyber warfare justifiably be conducted by a state or nation
Table 4 Abbreviations

AIDS: Anomaly-based IDS
ANN: Artificial neural network
CAM: Comprehensive assessment model
CAN: Controller area network
CLM: Confidence level module
COVID-19: Infectious disease caused by the SARS-CoV-2 virus
CPS: Cyber-physical system
CSRM: Cybersecurity risk management
ECU: Electronic control unit
GPS: Global positioning system
HIF: History-based IP filtering
HLCSM: Human-in-the-loop cyber security model
IDS: Intrusion detection system
IPS: Intrusion prevention system
IVN: In-vehicle network security
LIN: Local interconnect network
LIP: Linear-in-the-parameters
MDM: Machine detection module
MIM: Manual intervention module
PCCN: Parallel cross convolutional neural network
UAV: Unmanned aerial vehicle
V2I: Vehicle-to-infrastructure
V2V: Vehicle-to-vehicle
by countries are possibly higher than the losses from rogue actors, assuming that state-sponsored cyber-attacks are more potent, being driven from a position of better resources. This article has provided some in-depth analysis of how cyber warfare can impact different domains: end-user mobile and IoT devices, unmanned aerial vehicles, robotics, and global multinational companies. The eagerness to adopt and live in a digital era where information is wanted at our fingertips can create many challenges if systems are not designed properly. As systems become more autonomous, so does the risk of technology being used for malicious activity and unfair gain. Organisations should consider cyber security an important issue in the infrastructure of every company and organisation. Cyber security includes practical measures to protect information, networks, and data against internal or external threats.
References

1. Tan S, Xie P, Guerrero J, Vasquez J, Li Y, Guo X (2021) Attack detection design for dc microgrid using eigenvalue assignment approach. Energy Rep 7:469–476
2. Judge MA, Manzoor A, Maple C, Rodrigues JJ, ul Islam S (2021) Price-based demand response for household load management with interval uncertainty. Energy Rep 7:8493–8504
3. Aghajani G, Ghadimi N (2018) Multi-objective energy management in a micro-grid. Energy Rep 4:218–225
4. Khan S, Shiwakoti N, Stasinopoulos P, Chen Y (2020) Cyber-attacks in the next-generation cars, mitigation techniques, anticipated readiness and future directions. Accid Anal Prev 148:105837
5. Furnell S, Shah J (2020) Home working and cyber security—an outbreak of unpreparedness? Comput Fraud Secur 2020(8):6–12
6. Mehrpooya M, Ghadimi N, Marefati M, Ghorbanian S (2021) Numerical investigation of a new combined energy system includes parabolic dish solar collector, stirling engine and thermoelectric device. Int J Energy Res 45(11):16436–16455
7. Alibasic A, Al Junaibi R, Aung Z, Woon WL, Omar MA (2016) Cybersecurity for smart cities: a brief review. In: International workshop on data analytics for renewable energy integration. Springer, Cham, pp 22–30
8. Parks R, Duggan D (2011) Principles of cyberwarfare. IEEE Secur Priv Mag 9(5):30–35. https://doi.org/10.1109/msp.2011.138
9. Jamal AA, Majid AAM, Konev A, Kosachenko T, Shelupanov A (2021) A review on security analysis of cyber physical systems using machine learning. Materials Today: Proceedings
10. Ashraf J, Keshk M, Moustafa N, Abdel-Basset M, Khurshid H, Bakhshi A, Mostafa R (2021) IoTBoT-IDS: a novel statistical learning-enabled botnet detection framework for protecting networks of smart cities. Sustain Cities Soc 72:103041
11. Aon (2019) 2019 Cyber security risk report: what's now and what's next. [Online]. Available: https://www.aon.com/cyber-solutions/thinking/2019-cyber-security-riskreport-whats-now-and-whats-next/. Accessed 30 May 2022
12. Bobbio A, Campanile L, Gribaudo M, Iacono M, Marulli F, Mastroianni M (2022) A cyber warfare perspective on risks related to health IoT devices and contact tracing. Neural Comput Appl 1–15. https://doi.org/10.1007/s00521-021-06720-1
13. Zhang Z, Ning H, Shi F, Farha F, Xu Y, Xu J, Zhang F, Choo K-KR (2021) Artificial intelligence in cyber security: research advances, challenges, and opportunities. Artif Intell Rev 55(2):1029–1053. https://doi.org/10.1007/s10462-021-09976-0
14. Aliwa E, Rana O, Perera C, Burnap P (2021) Cyberattacks and countermeasures for in-vehicle networks. ACM Comput Surv 54(1):1–37. https://doi.org/10.1145/3431233
15. Kong P-Y (2021) A survey of cyberattack countermeasures for unmanned aerial vehicles. IEEE Access 9:148244–148263. https://doi.org/10.1109/access.2021.3124996
16. Yaacoub J, Noura H, Salman O, Chehab A (2021) Robotics cyber security: vulnerabilities, attacks, countermeasures, and recommendations. Int J Inf Secur 21(1):115–158. https://doi.org/10.1007/s10207-021-00545-8
17. Xiong W, Legrand E, Åberg O, Lagerström R (2021) Cyber security threat modeling based on the MITRE enterprise ATT&CK Matrix. Softw Syst Model 21(1):157–177. https://doi.org/10.1007/s10270-021-00898-7
18. Robinson M, Jones K, Janicke H (2015) Cyber warfare: issues and challenges. Comput Secur 49:70–94
19. Allianz AL (2020) Allianz risk barometer: identifying the major business risks for 2020. Available: https://www.agcs.allianz.com/news-and-insights/reports/allianz-risk-barometer.html
20. Kallenbach P (2022) Perspectives on cyber risk: new threats and challenges in 2022. MinterEllison. Available: https://www.minterellison.com/articles/perspectives-on-cyber-risk-new-threats-and-challenges-in-2022
21. The Australian Cyber Security Centre. ACSC annual cyber threat report—1 July 2020 to 30 June 2021. Australian Signals Directorate. https://www.cyber.gov.au/acsc/view-all-content/reports-and-statistics/acsc-annual-cyber-threat-report-2020-21. Accessed 11 March 2022
22. Kure HI, Islam S, Ghazanfar M, Raza A, Pasha M (2021) Asset criticality and risk prediction for an effective cybersecurity risk management of cyber-physical system. Neural Comput Appl 34(1):493–514. https://doi.org/10.1007/s00521-021-06400-0
Surveilling Systems Used to Detect Lingering Threats on Dark Web Y. K. P. Vithanage and U. A. A. Niroshika
Abstract Cybersecurity investigators, counter-terrorism agents and law enforcement pursue continuous scrutiny of perpetrators, extremists, and others by studying their actions on the dark web platform enabled by The Onion Router (TOR). Content on dark web forums and TOR hidden services offers a rich source of buried knowledge for many professionals. Often these hidden services are exploited to execute illicit operations including, but not limited to, disinformation-as-a-service, child pornography, circulating propaganda, drug and weapon smuggling and hiring assassins. Determining whether the dark web platform is criminogenic in nature and designing approaches to surveil trends and fluctuations of TOR are crucial to regulating the varied activities thriving within the dark web. A qualitative research approach was used to examine the reality of the dark web and TOR hidden services by identifying popular trends on dark web forums, types of user cohorts and their motivations for performing such activities. Moreover, the literature review covers several technologies employed to monitor popular trends on the dark web and TOR traffic. Results show that illicit and immoral activities are abundant on the dark web. Additionally, employing real-time automated monitoring systems such as BlackWidow can help to mitigate or eradicate known and unknown illicit activities on the dark web and bring security to the cyberworld.

Keywords Dark web · Anonymity · The onion router · Crawlers · Censorship · Criminals
Y. K. P. Vithanage (B) Charles Sturt University, Study Centre, Melbourne, Australia e-mail: [email protected] U. A. A. Niroshika RMIT University, Melbourne, Australia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_17
1 Introduction

Mining the dark web is a rapidly emerging research terrain and one of the most prioritized research areas for many professionals, academics, researchers, and policymakers. The dark web and the anonymity-securing technologies reinforcing the platform fundamentally change how criminal activities are executed. The platform acts as an enabler of cross-border and international crime, where every significant individual, item of evidence and set of earnings obtained by executing these unlawful activities can all sit within diverse jurisdictions. The technologies employed can mask the identity of criminals and the nature of the illicit activities they have committed. To some users, the dark web is a place to acquire absolute freedom, whereas to others it is a playground or hub in which to perform illegal actions and pursue their criminogenic desires. From the moment the dark web was incepted in the US naval laboratory, the platform and the technologies underpinning it became a vital tool for those who desire to be anonymous. The child pornography website known as 'Playpen', with more than 60,000 participants, was launched in 2014, and before it was shut down by the FBI in 2015 it had accumulated more than 117,000 posts and an estimated 11,000 visitors each week [1, 2]. For policy makers of law enforcement agencies in particular, restricting technologies that prevent lawful access to users' online activities has been a major dilemma. Though these anonymity-securing technologies can provide complete freedom, privacy, and anonymity to users of the dark web, the rise of terrorist threats, online abuse, and the proliferation of cyberattacks require immediate attention and robust monitoring tools and techniques [3].
1.1 Research Justification and Flow of Content

The extraordinary growth of the dark web along with TOR hidden services has made it necessary to identify appropriate and robust techniques for assembling data on the dark web. Owing to the many challenges posed by the dark web and hidden services, including not knowing the exact size of the dark web, whether TOR hidden services are short- or long-lived, and how to classify traffic data directed towards anonymity tools, finding a suitable crawler technique is crucial to monitoring dark web trends and fluctuations. This article provides a roadmap to a complete understanding of the infamous dark web platform, its users, whether it is criminogenic, and TOR hidden services. The first section provides a thorough discussion of the dark web, an explanation of the differences between the terms deep web and dark web, and TOR hidden services. The second section contains an extensive literature review. The third section comprises an analysis and findings of different technologies and tools employed to monitor trends and fluctuations of dark web forums, given in tabular form followed by a data matrix. This is followed by a results section that discusses the
BlackWidow crawler system's attributes. Finally, a conclusion summarises the entire literature review.
1.2 The Infamous Dark Web and TOR

Often the terms deep web, dark web and darknet are used interchangeably, and owing to the misuse of these terminologies, the prospect of falsified data and analysis can be higher than anticipated. Thus, explaining the terms deep web and dark web will give readers clarity and support accurate results for the research.

Deep/Hidden web: The deep web can be considered content that is unindexed by traditional search engines such as Google. This content may be password-protected, highly encrypted and/or simply not hyperlinked. Nevertheless, some of it can be accessed without using any anonymity-securing technologies [4].

Dark web: The dark web is considered a part of the deep web and comprises an assembly of sites that reside on an encrypted network and are unindexed. Instead of being accessible through normal web browsers, users require specific software such as TOR or I2P to access the dark web. TOR hidden services have been the most employed software, as they provide robust anonymity. To evade censorship, users avoid online filters using tools and techniques associated with online anonymity, which can be a secure virtual private network, a proxy or TOR. It is essential to know that TOR hidden services do not offer additional anonymity. The dark web has been considered a 'playground' or hub for many illicit actions including, but not limited to, money laundering, child pornography, drug and weapon dealing/smuggling, credential theft, hacking and hiring hitmen, with all purchases paid for using cryptocurrencies. At the same time, the potential to initiate cyberwarfare by purchasing leaked confidential data, for example the military tactics of armed forces including military drone documents, is ever-present on a platform armoured by anonymity-securing tools.
With the identification and seizure in 2013 of Silk Road, a recognized illegal market on the dark web, the public became aware of the dark web's existence. Disinformation-as-a-service has been the latest inexpensive offering on dark web forums, where customers purchase highly customized false information to generate both positive and negative propaganda and deceive public perception through influence campaigns initiated by many parties, varying from companies to politicians, to conduct information warfare [2, 4, 5].

TOR hidden services: TOR is the acronym for The Onion Router, a network that offers anonymous communication via an encrypted and highly configured relay network. It is a virtual private network that offers anonymity for both the server and the user. There are two fundamental modes of use provided by TOR: accessing the internet anonymously and hidden services. In the former, the
traffic is directed via the TOR network and returns to the internet through exit relays where IP addresses are untraceable; in the latter, traffic stays inside the network, and users can offer services to others, who gain access to them through 'rendezvous' points. The TOR network is composed of the onion proxy, onion routers, directory servers and application servers. TOR hidden services permit users to post content or publish their sites without exposing the real location of the web server, concealing the identity of the person [3, 6].
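The layered "onion" principle described above can be illustrated with a toy sketch. This is emphatically not Tor's real cryptography (Tor uses TLS and its own circuit-level cipher constructions); the hash-derived XOR keystream and per-hop keys below exist only to show how each relay peels exactly one layer and learns nothing beyond its neighbours.

```python
# Toy onion layering: the client wraps the payload once per relay;
# each relay removes one layer. Keys and cipher are illustrative only.
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    """Deterministic keystream derived from a key (toy cipher, not secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def layer(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice removes the layer."""
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

relay_keys = [b"entry", b"middle", b"exit"]  # assumed per-hop session keys

# Client side: wrap innermost layer first, so the entry relay peels the outer one.
cell = b"GET /hidden-service"
for key in reversed(relay_keys):
    cell = layer(key, cell)

# Relay side: each hop, in path order, removes exactly one layer.
for key in relay_keys:
    cell = layer(key, cell)

print(cell)  # the exit relay recovers the original request
```

The essential property shown here is structural, not cryptographic: no single relay holds all the keys, so no single relay can link the sender to the final destination.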
2 Literature Review

Zhang et al. [7] presented a web-based knowledge portal, referred to as the Dark Web Forum Portal (DWFP), used to integrate and assemble data from dark web forums containing Jihadist activity worldwide. The DWFP could capture every conversation users initiate on the forums, and vigilant analysis of these conversation threads can aid in uncovering topic and discussion trends, associations among posters, and the sequencing of ideas. The portal encompasses data acquisition, data preparation, a system functionality module comprised of submodules for both single and multiple dark web forum browsing and searching, statistical interpretation, multilingual translation functionality, visualization of social networks, and attender-based system evaluation. Multilingual translation is required as the majority of users use their native languages to interact and create content on the dark web; users employ non-English languages more than English [8]. Results showed that the DWFP facilitated faster and more effective information acquisition. According to Hurlburt [9], the dark web is identified as a hub for botnet activity, and cybersecurity and law enforcement agencies can trap botnets by separating bots into different classes. In addition, rapidly emerging and robust tools and techniques, namely data mining, analytical tools and machine learning, can be contemplated as prospective countermeasures, used to uncover huge numbers of interconnected nodes and patterns that indicate illegal activities, including botnets that thrive within the dark web platform, and eventually mitigate them. Furthermore, Owenson et al. [1] employed a crawler simulator to determine whether the hidden services manifested by TOR are short- or long-lived, and concluded that many of the hidden services are short-lived.
This study has helped to determine the size of the dark web, and the proposed crawler has the potential to serve law enforcement and cybersecurity agencies in estimating the real size of the dark web. The results showed that the majority of TOR hidden services vanish within a single day, and that the majority of their sites exhibit botnet activity, validating Hurlburt's [9] observation about the dark web being a platform for botnets. Furthermore, the Docker-based hidden service crawler approach of Park et al. [10] also confirmed that Tor hidden services are short-lived, stating that only 52% of these invisible services reach 100 days and that 32% live up to 151 days, which is considered long-term.
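The lifespan bookkeeping behind such crawler studies can be sketched as follows. The probing itself (for example, repeated requests through a local Tor SOCKS proxy) is omitted; the onion addresses and timestamps are simulated, and the record layout is an assumption, not the format used in [1] or [10].

```python
# Hedged sketch of lifespan measurement: given timestamped liveness
# observations per onion address, compute each service's observed
# lifespan in days and flag the short-lived ones.
from collections import defaultdict
from datetime import datetime

observations = [  # (onion address, date the service answered a probe) - simulated
    ("abc...onion", "2021-03-01"), ("abc...onion", "2021-03-01"),
    ("def...onion", "2021-03-01"), ("def...onion", "2021-06-10"),
]

seen = defaultdict(list)
for addr, ts in observations:
    seen[addr].append(datetime.fromisoformat(ts))

# Observed lifespan: days between first and last successful probe.
lifespans = {addr: (max(times) - min(times)).days for addr, times in seen.items()}
short_lived = [addr for addr, days in lifespans.items() if days < 1]

print(lifespans)     # {'abc...onion': 0, 'def...onion': 101}
print(short_lived)   # ['abc...onion']
```

Note that this measures an observed lower bound only: a service may have existed before the first probe or after the last one, which is one reason the cited studies emphasise repeated, frequent crawls.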
Dalins et al. [11] proposed a Tor-use Motivation Model (TMM) that incorporates ethics, to be used by law enforcement. The TMM showed adaptable and robust results across various examples and was able to distinguish between immoral and illegal actions. Wang et al. [12] proposed a new attribute extraction methodology that contains block filtration, attribute user generation and attribute user verification stages, and identified that dark web users are not active users of the surface web. Many studies show that a high percentage of traffic to TOR hidden services relates to child pornography. However, the underlying data reinforcing that statement is vastly uncertain, because requests and frequent crawler runs executed by law enforcement and child protection agencies can inflate the measured percentage of traffic to the hidden services. Simultaneously, owing to the nature and life cycle of the dark web and Tor hidden services respectively, crawling and surveilling Tor hidden services is believed to be difficult [4]. Additionally, Montieri et al. [13] proposed a novel hierarchical classification framework that classifies traffic directed towards anonymity tools (ATs) such as Tor, I2P and JonDonym. The authors used the Anon17 public dataset containing traffic data of the three ATs, and the main goal of the study was to determine to what degree the ATs can be identified by external parties such as law enforcement and cybersecurity agencies. The results were promising and showed that a higher degree of anonymity is provided by I2P than by Tor and JonDonym. Posts on forums can be considered an ideal way to extract information about forum activity, passive members of the forums, the precise time and date a post was made, and metadata on the participants such as their registration date and username.
Schafer et al. [14] introduced BlackWidow, a highly automated system that surveils dark web services on a real-time, continual basis. It accumulates data from both the deep and dark web, incorporates external translation sources to enable scalability, and proved highly competent at identifying associations among forums while functioning across different languages. The system can extract information including conversation threads, authors and content, and analyse it for various cybersecurity purposes. Recently, law enforcement was able to seize two further dark web marketplaces, Wall Street and Valhalla, and to take down DeepDotWeb, a dark web news source whose administrators profited by marketing criminal sites [15].
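The continual-collection cycle attributed to BlackWidow above can be caricatured in a few lines: each iteration re-parses a set of forums, extracts (thread, author) posts, and accumulates co-participation links between authors. The forum names, hard-coded 'fetch' results, and association counting are hypothetical stand-ins for the real fetch/parse/analysis stages, not the actual system.

```python
# Toy sketch of an iterative forum-monitoring cycle that builds an
# author-association map from thread co-participation.
from collections import defaultdict
from itertools import combinations

def fetch_forum(name):
    # Placeholder for a per-platform parser returning (thread_id, author) posts.
    SAMPLE = {
        "forum_a": [("t1", "alice"), ("t1", "bob"), ("t2", "carol")],
        "forum_b": [("t3", "bob"), ("t3", "dave")],
    }
    return SAMPLE.get(name, [])

def crawl_iteration(forums, associations):
    """One collection cycle: count author co-participation per thread."""
    by_thread = defaultdict(set)
    for forum in forums:
        for thread, author in fetch_forum(forum):
            by_thread[(forum, thread)].add(author)
    for authors in by_thread.values():
        for a, b in combinations(sorted(authors), 2):
            associations[frozenset((a, b))] += 1
    return associations

links = crawl_iteration(["forum_a", "forum_b"], defaultdict(int))
print(sorted(tuple(sorted(k)) for k in links))  # [('alice', 'bob'), ('bob', 'dave')]
```

Repeating the cycle and diffing successive association maps is what makes the collection 'continual': each iteration delivers new links as forums change.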
3 Existing Tools to Monitor Dark Web Activity and Findings

To determine whether the dark web is criminogenic, and to identify who its real users are, several crawlers and other strategies have been identified and implemented
by different researchers to assemble and monitor trends and fluctuations in dark web content.

Table 1 Analysis of different technologies and tools employed for monitoring

Technique: Dark web forum portal (DWFP) [7]
Data set: Six Arabic and one English Islamic Network forum (hidden web forums)
Description: Analyses worldwide Jihadist forums; comprises data acquisition, data preparation, system functionality and analysis stages; integrates multilingual translation via Google Translate; used a 7-point Likert scale to gauge the user-friendliness of searching on DWFP
Findings: Searching with DWFP finished in half the time required on the original 'Alokab' forum site; users greatly valued the multilingual translator, which let them search non-English forums; provides significant evaluation functionality, including browsing and searching across single and multiple forums

Technique: Tor-use motivation model (TMM) [11]
Data set: Both unique .onion URLs (hidden web) and WWW domains (surface web)
Description: Incorporates ethics into the proposed system; maps motivation to behaviour to simplify the process; assembled 232,792 pages from 7651 Tor virtual domains, giving the study diverse data; 4000 unique Tor pages were also labelled manually, demonstrating the model's flexibility
Findings: TMM can be employed as a focused crawler by law enforcement agencies for investigations and monitoring, as it incorporates ethics; provides the location and analysis of materials; provides high granularity, adaptability and robustness; found that illegal transactions and child pornography are abundant within the Tor network

Technique: User attribute extraction methodology [12]
Data set: WePS2 data set and data from Tor (surface and hidden web, respectively)
Description: Comprises three steps (block filtration, attribute user generation and attribute user verification); uses a binary classifier over participant sets for verification; improved performance over the traditional extraction methodology; analysis was performed from the perspective of number of attributes, emails, geographic distributions, top-k names, etc.
Findings: Achieved high recall, accuracy and F1 scores; identifies location information using mobile numbers and the Exif headers of images; although anonymity makes revealing user identities difficult, users occasionally leave strong attributes such as email addresses and mobile numbers

Technique: Hierarchical classification (HC) framework [13]
Data set: Anon17 dataset
Description: Three classification levels: anonymity network (L1: Tor, I2P and JonDonym), traffic type (L2) and application (L3); a progressive-censoring policy provides a 'reject option' that discards 'unsure' classification results at intermediate layers; uses four ML-based classifiers (C4.5, BN_TAN, RF and NB_SD)
Findings: Suits encrypted traffic; fine-grained optimisation of the classifier implementation enhanced performance; identified I2P as the hardest anonymity tool to classify, indicating it provides the highest privacy levels; HC permits live traffic evaluation of anonymity tools

Technique: BlackWidow [14]
Data set: Seven forums in total (three from the dark web and four from the deep web)
Description: Uses multilingual translators; processing is a repeated cycle that delivers new insights in each iteration; uses a separate forum parser for every forum platform; applies unsupervised text clustering to categorise messages such as exploits and botnets
Findings: Can detect demand for different concepts on forums at any time; provides real-time, continual data accumulation; can identify inter-user associations among users of the same thread; requires less time and comparatively fewer resources
3.1 Metrics for Comparison Purposes

Several metrics are compared across the technologies analysed above to identify robust, efficient and flexible techniques for monitoring and investigating illicit activity on the dark web and Tor hidden services. For some of the techniques, user feedback was also collected to measure the performance of the functions the system provides.
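For reference, the sketch below shows how the accuracy, recall and F-measure figures used in this comparison are conventionally computed from a confusion matrix; the counts are invented purely for illustration.

```python
# Standard detection metrics from confusion-matrix counts:
# tp/fp/fn/tn = true/false positives and negatives of a malicious-vs-benign labelling.
def prf(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # flagged items that were truly malicious
    recall = tp / (tp + fn)             # malicious items that were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean (F-measure)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = prf(tp=80, fp=10, fn=20, tn=90)
print(round(p, 3), round(r, 3), round(f1, 3), round(acc, 3))  # 0.889 0.8 0.842 0.85
```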
4 Results

Designing a real-time surveilling system that can operate across different languages and other potential interferences is imperative. According to Table 1, which compares the surveilling technologies, the BlackWidow automated modular crawler has the capacity to capture, accumulate and merge real-time data into a single analytical framework, which is integral when surveilling traffic and interactions on the dark web. Although the hierarchical classification framework has similarly desirable attributes (Table 2), it lacks real-time data accumulation, one of the integral requirements of an ideal surveilling solution for the dark web and Tor hidden services. BlackWidow gathers intelligence and insights in real time, which not only allows it to accumulate the greatest amount of intelligence from different forums but, given its high performance, also lowers the chance of failing to gather intelligence before a forum disappears, since the focused forums often have very limited lifespans. Real-time capability is therefore a key essential for the long-term
Table 2 Metrics used to compare existing dark web and Tor monitoring technologies

Technique | Accuracy | Performance | F-measure | Recall | Feedback from users | Granularity | Accessibility | Real-time data accumulation
Dark web forum portal (DWFP) [7] | – | ✓ | – | – | ✓ | ✓ | ✓ | –
Tor-use motivation model (TMM) [11] | – | ✓ | – | – | – | ✓ | ✓ | –
User attribute extraction methodology [12] | ✓ | ✓ | ✓ | ✓ | – | – | – | –
Hierarchical classification (HC) framework [13] | ✓ | ✓ | ✓ | – | – | ✓ | ✓ | –
BlackWidow [14] | ✓ | ✓ | – | – | – | ✓ | ✓ | ✓
efficacy of the crawler system. Additionally, the crawler was capable of inferring associations among authors across different languages and web forums. This extensive literature review indicates that the BlackWidow crawler system can be adopted and further developed as an ideal collection and investigation tool for professionals across different sectors, since surveillance can be executed with fewer resources and in less time.
5 Conclusion

Studies have revealed that immoral content on Tor hidden services accounts for up to 45% of all events. Further, users of Tor hidden services, ranging from criminals and terrorists to whistle-blowers, political protestors and journalists, exploit the many benefits Tor offers, including anonymity, freedom of expression and a lack of censorship, some of them to perform immoral and illicit activities. An analytical tool that extracts real-time data from dark web forums, with components including continuous surveillance of the deep and dark web and the generation of hashes for confidential and sensitive files, is needed to mitigate or eradicate the negativity that flourishes within the dark web. Of all the techniques and methodologies reviewed in this article, BlackWidow shows the most promising performance, with real-time data gathering and analysis as its key attributes. Moreover, as technology advances, cybersecurity and other agencies have the potential to exploit emerging, competent and accurate tools, including but not restricted to advanced algorithms, machine learning and big data analytics, to detect cybercrime and traffic occurring within the dark web and directed to Tor hidden services.
References

1. Owenson G, Cortes S, Lewman A (2018) The darknet's smaller than we thought: the life cycle of Tor hidden services. Digit Investig 27:17–22
2. Shillito MR (2019) Untangling the 'dark web': an emerging technological challenge for the criminal law. Inf Commun Technol Law 28(2)
3. Jardine E (2018) Privacy, censorship, data breaches and internet freedom: the drivers of support and opposition to dark web technologies. New Media Soc 20(8):2824–2843
4. Koch R (2019) Hidden in the shadow: the dark web—a growing risk for military operations. In: 11th international conference on cyber security (CyCon), pp 1–24
5. Alnabulsi H, Islam R (2018) Identification of illegal forum activities inside the dark net. In: 2018 international conference on machine learning and data engineering, pp 22–29
6. Yang Y, Yang L, Yang M, Yu H, Zhu G, Chen Z, Chen L (2019) Dark web forum correlation analysis research. In: 2019 IEEE 8th joint international information technology and artificial intelligence conference, pp 1216–1220
7. Zhang Y, Zeng S, Li F, Dang Y, Larson CA, Chen H (2009) Dark web forums portal: searching and analysing Jihadist forums. In: 2009 IEEE conference on intelligence and security informatics, pp 71–76
8. Faizan M, Khan RA (2019) Exploring and analyzing the dark web: a new alchemy. First Monday 24(5)
9. Hurlburt G (2017) Shining light on the dark web. Computer 50(4):100–105
10. Park J, Mun H, Lee Y (2018) Improving Tor hidden service crawler performance. In: 2018 IEEE conference on dependable and secure computing (DSC), pp 1–8
11. Dalins J, Wilson C, Carman M (2018) Criminal motivation on the dark web: a categorization model for law enforcement. Digit Investig 24:62–71
12. Wang M, Wang X, Shi J, Tan Q, Gao Y, Chen M, Jiang X (2018) Who are in the darknet? Measurement and analysis of darknet person attributes. In: 2018 IEEE third international conference on data science in cyberspace, pp 948–955
13. Montieri A, Ciuonzo D, Bovenzi G, Persico V, Pescape A (2019) A dive into the dark web: hierarchical traffic classification of anonymity tools. IEEE Trans Netw Sci Eng, pp 1–12
14. Schafer M, Strohmeier M, Liechti M, Fuchs M, Engel M, Lenders V (2019) BlackWidow: monitoring the dark web for cyber security information. In: 2019 11th international conference on cyber conflict, pp 1–21
15. Photon Research Team (2019) Dark web monitoring: the good, the bad and the ugly. Digital Shadows
Early Attack Detection and Resolution in Sensor Nodes to Improve IoT Security

Alvin Nyathi and P. W. C. Prasad
Abstract The expanding Internet of Things (IoT) and the sensor network subsystems bound to it collaboratively contribute to a growing smart technology ecosystem whose wireless sensor networks present the fastest-growing attack surface for malware. The smart technology ecosystem integrates processes in the home consumer sector, industry, and economic environments through sensors and wireless sensor networks (WSN). The system is exposed to an expanded attack surface that introduces new attack vectors, exploited in novel ways, for which new mitigative measures are needed. This review aims to establish how the latest technologies, already known to be effective for attack detection in other realms, can be utilised for early attack detection on sensor nodes deployed in WSN and IoST. Recent, systematically compiled articles on the effective detection of attacks on sensors are reviewed, focusing on the usage of new technologies and their effectiveness. The work showed that early attack detection is feasible and effective: designs that use contractual models built by integrating AI, ML, or blockchain with sensor-node-sourced training data, together with established data sets, succeeded in their detection tasks. The research set out to find how detecting attacks early in IoT sensor nodes could be used advantageously to mitigate attacks on the IoT ecosystem; the literature shows there are processes with that capability which can still be enhanced to perform better.

Keywords Sensor node · Early detection · WSN and IoT · Identity theft · Smart attack detection · Cyber-attacks · Machine learning · IoT · Internet of Things
A. Nyathi (B) · P. W. C. Prasad
Charles Sturt University, Bathurst, NSW, Australia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_18

1 Introduction

The Internet of Things (IoT) and the related smart technology infrastructure encompass developments and integrations in communications technology that connect
millions of new devices daily [1]. Environmental conditions can be measured and sent to connected processes through wireless sensor networks (WSN) [2], in which distributed, autonomous, low-power, low-cost and small-sized sensor nodes cooperatively collect and transmit information. The IoT is a cyber-physical system (CPS) that integrates sensors, networks, and processing with objects in the real world [3]. The IoT ecosystem enables people, appliances, and services to exchange communications and data autonomously; this has led to an associated smart ecosystem of smart homes and smart appliances, together with the home area network (HAN), a topology in which sensor readings are uploaded to a central server. The IoT has made it possible to collect massive sensor data in real time from multiple sources [4]. Early detection and a quick, effective response to threats are vital, as current attacks are engineered to operate stealthily and maintain long-term compromise of victims' systems [5]. Vulnerabilities are introduced because the interconnected gadgets are potential threat vectors [3]. Mitigation models should consider both malicious and normal activity in the data generated by the smart IoT environment [6]. Sensor attributes are valuable to attackers: identity (ID) and location values give attackers an advantage when attacking cryptographic functions such as encryption and decryption [7]. The resulting risks range from the disclosure of personally identifiable information (PII) to vulnerabilities in critical city infrastructure. IoT systems rely on sensor nodes to source the data that is input into them. Section 2 reflects on the nature of attacks on sensor nodes, including the technologies used to detect them. Section 3 analyses the effectiveness and outputs of implementations of the technologies that identify attacks on sensor nodes.
Section 4 discusses challenges, successes, and analytical perspectives, and summarises attack detection and mitigation issues in the subject. The conclusion section summarises current issues relating to the detection of attacks on sensor nodes.
1.1 Methodology

Mind maps were used to group thought processes and generate ideas, which led to refinement of the topic and compilation of the literature for review. The search terms included sensor attacks, early detection, WSN and IoT, IoT sensors, sensor nodes, attack, and attack detection. The search range for the articles was limited to the 2020–2022 period, for currency. Platforms used include Google Scholar, Springer, Wiley, CSU PRIMO article search, IEEE, and Elsevier. Of the 31 articles retrieved, a final 17 formed the annotated bibliography. EndNote was used to manage resources and compile the relevant research literature. Article credibility was verified, including checks of authorship and the status of the publishing journals; the articles referenced are from Q1 journals. Project flow management and visualisation were supported by ProjectLibre, from which a network visualisation diagram was created showing an overview of the project's scope and progress, with weekly updates published as blog
Fig. 1 Network visualisation [8]
posts. VOSviewer, a data analysis tool, was used with the Scopus database for concept mapping based on keyword occurrence (Fig. 1, network visualisation diagram). The general framework of the literature review is depicted in Fig. 2. Inclusion criteria: sensor nodes, generally in IoT environments, including those on smart devices and the related ecosystem. Attacks instigated through application or network software are excluded.
2 Literature Review

This section details current and potential attacks on sensor nodes: how they happen, and how the means to detect and defend against them are built and applied. The intention is to determine and highlight the fast and efficient solutions that a detailed review of the literature revealed, the main objective being that known and unknown attacks be mitigated as speedily as possible.
2.1 Attacks, Inputs, and Vulnerabilities

The research field focuses on sensor nodes, their roles in attacks, detection of attacks, and mitigation processes within the related ecosystem, which includes IoT, IoST, WSN,
Fig. 2 An attack, detection, and mitigation model
sensor nodes, smart homes, smart appliances, and gadgets. The IoT is a cyber-physical system in which sensors, networks, and processing are integrated with objects in the real world [9, 10], including actuators and smart devices. Sensors sit in the perception layer of an abstracted view of the IoT architecture, and attacks on sensors are classified as perception-stage sensor exploit attacks [9]. WSNs do not have the traditional gateways or switches that monitor information flow [2], which is a critical security flaw, and the interconnectedness of multiple devices gives attackers numerous potential vectors [10]. AI-based attack detection systems have been introduced over the past years [11], and ML-supported solutions have made their way into the IoT cybersecurity environment [12]. Sensors are limited in computing capacity owing to minimal memory, battery life, power, and processing capabilities [7]. Nodes are often deployed in harsh, isolated areas and are extremely vulnerable to attack [11]. Sajid et al. [13] note that IoST nodes are vulnerable to external attack from botnets and malware. Network throughput and packet delivery are degraded by the presence of malicious nodes: nodes are prone to grey-hole attacks that cause dropped packets in multi-hop routing, and some malicious nodes send corrupted data to base stations, causing localisation errors. A selective forwarding attack is quite difficult to detect [11]: because nodes often drop packets in harsh environments anyway, a smart malicious node may succeed in disguising its behaviour as ordinary loss. Classical attacks impact the sensing environment and influence the data
of targeted sensors [9]. Recent attack forms that have been effectively defended against include data attacks, sensor exploit attacks, algorithmic attacks, sensor commandeering attacks, and signal attacks.
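As a toy illustration of how the grey-hole and selective-forwarding behaviour described above can be surfaced, the sketch below flags nodes whose packet delivery ratio (PDR) falls below a threshold; a trained detector, such as the ANN-based-PDR method in [2], would replace the fixed threshold. The node IDs, packet counts, and threshold value are assumptions made for the example.

```python
# Flag suspicious sensor nodes from their packet delivery ratio (PDR):
# a node that forwards far fewer packets than it receives may be a grey hole
# or be selectively forwarding, though harsh environments also cause drops.
def pdr(forwarded, received):
    return forwarded / received if received else 1.0

def flag_suspects(stats, threshold=0.7):
    """stats: {node_id: (packets_forwarded, packets_received)} -> suspect node ids."""
    return sorted(n for n, (fwd, rcv) in stats.items() if pdr(fwd, rcv) < threshold)

stats = {
    "n1": (95, 100),   # healthy node
    "n2": (40, 100),   # possible grey-hole / selective-forwarding behaviour
    "n3": (60, 100),   # borderline drops (could also be a harsh environment)
}
print(flag_suspects(stats))  # ['n2', 'n3']
```

A learned model improves on this by conditioning on context (link quality, energy level), which is exactly why simple thresholds struggle against nodes that disguise drops as environmental loss.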
2.2 Technology

The literature covers attack detection technologies such as AI, ML, and blockchain used in designing solutions; the following segments cover some of these techniques.

AI integrating ML and blockchain: Advanced data analysis and cybersecurity applications have benefited from effective use of ML [2]. The use of AI and ML in the resource-constrained IoT ecosystem has not been fully investigated or utilised in processes that mitigate attacks targeting these systems. The IoT micro-service architecture embraces AI-enabled services, with ML solutions applicable in industrial settings [12]. An ML-based intelligent model that trains IoT end-node devices to detect network attacks has been used successfully [14], showing how even trivial IoT nodes can host an artificial neural network (ANN) trained to detect similar attacks, with computationally demanding tasks offloaded from the end-node chip to a more powerful gateway device [14]. Edge or fog computing integrates IoT localisation and utilises ML and AI in blockchain-based geofencing strategies [13] and, more deeply, in IoT localisation applications.

Federated techniques: Approaches include AI-enabled anomaly detection on IoT networks based on federated learning. Gated recurrent units (GRU) are used in federated training rounds in a recurrent process whereby the learning is continually enhanced [12]. Regan et al. [15] used a federated deep autoencoder for anomaly detection: their federated approach detects botnet attacks by training a deep autoencoder on decentralised, on-device traffic data, addressing the fact that, owing to the heterogeneity of IoT devices, no single mitigation solution suits them all. Sajid et al. [7] define an FL attack detection model that reduces the movement of sensitive data; it works with an anomaly detection engine that differentiates benign behaviour patterns from malicious ones. The PoW consensus mechanism was considered a limitation for validating transactions because of cost, so a PoA mechanism was used instead. Mappings are made between identified limitations, proposed solutions, and validations, culminating in a model that uses authenticated nodes, drops malicious ones, and saves cost by selecting a shortest-path secure route [7]. A detailed examination is made of integrating blockchain techniques with WSNs (BWSN) to detect malicious sensor nodes, and of the 'smart contract' concept of malicious node detection [16]. Olmo et al. [8] incorporated data from sensor nodes utilising blockchain in smart contracts, allowing reliable processing of data among nodes belonging to a membership network, securing the transmission of the data and preserving its integrity. The device smart contract and the bridge smart contract collaborate to identify and disable compromised nodes; when nodes were segregated to attend to different sets of data, there was a proven performance gain. Smart contracts enhancing blockchain and BWSN nodes are created for malicious node (MN) detection and data security, and the effectiveness of a smart contract against attacks can be checked through a performance security analysis [7]. In the method in [2], an ANN based on the packet delivery ratio proved faster at indicating compromise than an ANN based on energy consumption values, although both detect value changes. In an alternative approach, based on current research on sensors' susceptibility to malicious attacks, [9] devised a classification of recent attack types, differentiating between types, methods of attack, and defence strategies: reception-category attacks target in-band signals, directly changing the environment or medium; perception-category attacks target the information provided by a sensor indirectly; and communication-stage attacks target the system a sensor belongs to. Reference [1] managed threats in an IoT environment by developing a holistic approach in which routing verification, anomaly detection, and mitigative interventions were all integrated into a visual analytical model. The module featured state-of-the-art techniques that can tackle anomaly detection and mitigation, within a novel architecture featuring AI functionality; its graph neural network model employed a multi-agent structure, distributed to reflect the network of IoT nodes, enabling the extraction of network statistics for anomaly detection.

Centralised: ML solutions are tailored to work mostly with legacy architectures and datasets, where the entire data set is located on a central server [12]. A centralised-server approach to WSN IoT security requires ever-growing WSN integration, in which errors that are challenging to reproduce and trace are difficult to avoid.
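The federated techniques discussed above [12, 15] share a common step: each device trains on its own traffic and only the model parameters travel to an aggregator, which averages them (FedAvg). The sketch below illustrates that step with a rank-1 linear 'autoencoder' fitted in closed form; the data, dimensions, and closed-form fit are illustrative assumptions, far simpler than the papers' deep models.

```python
# Conceptual FedAvg sketch: local training on device-private traffic,
# central averaging of parameters only, then anomaly scoring by
# reconstruction error under the shared model.
import numpy as np

rng = np.random.default_rng(1)

def local_train(X):
    # Rank-1 linear "autoencoder": the principal direction of the device's traffic,
    # sign-fixed so that independently trained directions can be averaged.
    _, _, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    v = vt[0]
    return v if v[0] >= 0 else -v

def fed_avg(weights):
    w = np.mean(weights, axis=0)   # raw data never leaves the devices
    return w / np.linalg.norm(w)

# Four devices observe traffic sharing the same dominant feature direction.
devices = [rng.normal(size=(50, 3)) * np.array([3.0, 1.0, 0.2]) for _ in range(4)]
global_w = fed_avg([local_train(X) for X in devices])

def anomaly_score(x, w):
    # Reconstruction error under the shared direction; high for unfamiliar traffic.
    return float(np.linalg.norm(x - (x @ w) * w))

familiar = np.array([3.0, 0.0, 0.0])    # lies along the learned direction
unfamiliar = np.array([0.0, 0.0, 3.0])  # off-pattern traffic
print(anomaly_score(familiar, global_w) < anomaly_score(unfamiliar, global_w))  # True
```

The privacy benefit claimed for these approaches comes precisely from the fact that only `local_train`'s output, never the traffic itself, reaches the aggregator.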
Mitigation technologies designed for conventional WSNs fall short [16]. The authors used the weighted trust method for more rapid results when examining centralised WSN blockchain for security problems; blockchain-based WSN offers solutions for security management such as access control, integrity, privacy, and WSN longevity.

Decentralised: In [7], a decentralised blockchain-based model is designed for node authentication, offering optimal network fault tolerance. ML, blockchain, and intelligence in sensor systems are used to detect malicious nodes [7]. The GA-SVM and GA-DT algorithms detect the malicious nodes, while the blockchain registers and authenticates nodes and keeps data packet transactions to store the routing information generated [7]. A defined set of sensors is attached to an appliance and registered; a time value for registering a node's identity is repeatedly generated to encrypt the node's identity and its information to neighbouring sensors [17].
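The time-based identity encryption in [17], deriving a fresh, time-dependent identity that trusted peers can still recognise, might be sketched as follows. The HMAC construction, window length, and key handling here are assumptions for illustration, not the protocol's actual design.

```python
# Rotating-pseudonym sketch: a sensor's visible identity changes every time window,
# but appliances sharing the secret can still recognise it.
import hashlib, hmac

WINDOW = 60  # seconds per identity epoch (assumed value)

def pseudonym(shared_key: bytes, sensor_id: str, now: int) -> str:
    epoch = now // WINDOW
    msg = f"{sensor_id}:{epoch}".encode()
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()[:16]

def recognise(shared_key: bytes, sensor_id: str, observed: str, now: int) -> bool:
    # Accept the current or previous epoch to tolerate small clock skew.
    return any(pseudonym(shared_key, sensor_id, now - d * WINDOW) == observed
               for d in (0, 1))

key = b"home-network-secret"
p = pseudonym(key, "thermostat-1", now=1_000_000)
print(recognise(key, "thermostat-1", p, now=1_000_000))   # True
print(recognise(key, "thermostat-1", p, now=1_000_200))   # False: identity has rotated
```

An external observer who lacks the key sees an ever-changing identifier, which is the property that frustrates tracking and man-in-the-middle attempts.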
3 Enhancements and Outputs

The technologies discussed had effective mitigative designs and outcomes. Protogerou et al.'s [1] visual analytics module includes a tool that lets a user view deployed analytical agents, start and stop their detection functionality, and verify and mitigate anomalies, including routing verification. Networked multi-agent systems exchange information to build awareness of threatening attacks within the connected neighbourhood and on nodes one step away. Location-enabled IoT helps where there is no human intervention or perception; location-identifying contexts have applications ranging from military to civilian [13]. Using a time-series method, Alshboul et al. [1] demonstrated that anomalous activity by an operator in a banking environment could be detected within the hour it was happening, a demonstration of early attack detection. In [6], the performance metrics in resource-constrained IoT devices achieved a 100% attack detection accuracy rate with a 19% false alarm rate (FAR), lower than other algorithms and therefore more accurate. The proposed architecture has the advantages of higher detection rates and increased security, showing it can provide more accurate security for IoT networks [6]. Alshboul et al. [17] devised an identity-preserving protocol comprising an initialisation phase, a concealment phase, and a communication phase. The three-phase process is implemented in embedded code in sensors, and models running the thread are verified using UPPAAL to ensure no violation of the temporal aspect. The model proved reliable, maintaining concurrency, resource sharing, and safety: the authors succeeded in frequently changing a sensor's identity over time while all home appliances could still identify each other. This achieved tight control and manipulation of internal communication to confuse external intruders and stop man-in-the-middle attacks [17].
The benefit of blockchain smart contracts for early detection is that, in the event of an attack, a node's original data is stored. Blockchain wireless sensor networks (BWSN) ensure traceability and transparency in the attack detection process, guaranteeing malicious node detection and data authentication [16]. GA-SVM detects malicious nodes more accurately than GA-DT [7]. As in [7], [17] used GA-SVM and GA-DT, analysing their performance and energy consumption metrics to detect the presence of malicious nodes and the possible type of ongoing attack. The DLR model is an effective early detection strategy that can complement current systems to reduce unplanned facility incidents [4]. DLR is a generic, low-computational-complexity model developed to detect generic drifts and identify potential outliers [4]: it uses a double linear regression to identify both aggressive and progressive drift, and outliers in symmetric and skewed distributions are picked out using an adjusted boxplot method [4]. Algorithm precision ranged from 92 to 100% and accuracy from 88 to 98% [7]. Performance metrics in resource-constrained IoT devices showed a 100% attack detection accuracy rate and a 19% FAR, lower than other algorithms and therefore more accurate [6].
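The double-linear-regression idea behind DLR [4] can be sketched with two regressions fitted over different windows plus a boxplot outlier rule. A plain Tukey boxplot stands in here for the paper's adjusted boxplot, and the window sizes, drift threshold, and data are all assumptions for illustration.

```python
# Drift detection by comparing a long-window and a recent-window regression slope,
# plus a simple boxplot rule for outliers in a sensor reading series.
import numpy as np

def slope(y):
    x = np.arange(len(y), dtype=float)
    return np.polyfit(x, y, 1)[0]

def drift_detected(series, recent=10, threshold=0.5):
    # A recent slope diverging from the long-run slope signals emerging drift.
    return abs(slope(series[-recent:]) - slope(series)) > threshold

def boxplot_outliers(series, k=1.5):
    q1, q3 = np.percentile(series, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [float(v) for v in series if v < lo or v > hi]

stable = np.full(50, 20.0)                                    # flat sensor readings
drifting = np.concatenate([np.full(40, 20.0), 20.0 + np.arange(10.0)])  # recent ramp
print(drift_detected(stable), drift_detected(drifting))  # False True
print(boxplot_outliers(np.array([20.0] * 20 + [35.0])))  # [35.0]
```

Fitting the two regressions lets slow (progressive) drift register through the long window while a sudden (aggressive) change dominates the recent one.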
PoA is preferred over PoW as it reduces transaction costs and improves accuracy, and a private blockchain is cheaper [7]. Federated learning was shown to perform close to, or beyond, centralised models while retaining its unique benefits [3]. Distributed deep-learning-based IDS and more computationally efficient models could use resource-constrained devices efficiently, allowing deep learning to work with real-time data streams. Deep neural networks are the basis from which developments in novel IDS could start; CNNs, RNNs, and CNN-RNN hybrids are effective on time-sensitive and sequential input data [10]. By analysing studies of attack types occurring on edge-device sensor nodes, [9] classify attack types in a way that helps users discern attacks and supports the development of cheaper, effective means of defence. The culmination is a new classification that describes attacks on end-node devices and gives advice on suitable means of defence.
4 Component Table and Classification

The component table groups significantly contributing attributes into input, processing, and output categories reflected throughout the literature review. The review process was developed through a systematic grouping of key components from the papers' architectures. An analysis from initial problems through to solutions resulted in groupings of attributes by common factors; these are reflected in Table 1 as data inputs, data analysis or processing, and evaluation or results. A notable performance classification is a mitigation-engine feature with time-series analysis and time-sensitive services; the output is a customisable user interface with KPIs and visualisation analytics [1]. A uniquely lightweight output [9] displays a classification based on a taxonomy lexicon. On inputs, real-time online data was used as the input dataset in [1], enhancing the reliability of the model.

Discussion

The impacts of innovations in the early detection of attacks on sensor nodes are discussed. Node vulnerabilities include mobility and challenging, varying installation environments. Hardening sensor nodes to conceal identity attributes is essential; nodes need to be registered, with node ID or sensor ID attributes tracked, ensuring that attacks using those IDs are prevented [7, 17]. Real time is the earliest detection possible; ahead of that is predictive capability, and it can be deduced from the literature that predictive capability is possible. The model Protogerou et al. [1] developed has the potential to detect anomalies in real time. Awareness of threats on nodes one step away within a connected neighbourhood [1] could be utilised for attack prediction when enhanced as in Alshboul et al.'s [17] model, where nodes' encrypted IDs and their
Early Attack Detection and Resolution in Sensor Nodes to Improve IoT …
203
Table 1 Component table for early attack detection in sensor nodes

Factor: Data inputs
Data source/sensing device: Fog MANO, IPFIX, RBAC, sensor data, device farm, simulation, hybrid (real/simulated), sensors, sensor nodes, cluster heads, device farms, Lidar
Data sets: State database, PyTorch, Tensor, IoT botnet, BASHLITE, Mirai, KDD99, NSL-KDD, ISCX 2021, UNSW-NB15, CIDDS-001, CICISA 2017
Signals/traffic type: Time-domain signals, real-time domain signals, time-series forecasting, cryptographic keys, device ID, timestamp, signature, pointer
Technology: Machine learning technique, automatic differentiation technique, wireless communication channel technique, L-PPM technique, DPSK modulation technique, blockchain
Hardware: Satellite IoT device node, data hub node, actuators, IoT bridge, edge device, virtual worker, building-to-building, sensor nodes, optical channel, symbol modulation, Raspberry Pi 3 Model B, EBS, NVMe, SSD, Lambda Labs workstation with 128 GB of RAM and 4 Nvidia RTX 2080 Tis on Ubuntu 18
Network: WSN, distributed (WSN), hierarchical (HWSN)
Attack types: DoS, UDP flood, port scanning, smart tools, TCP SYN, sinkhole, SSL, Gafgyt, TCP, UDP, source IP spoofing, illegal memory access (IMA)

Factor: Data analysis/processing and modulation
204
A. Nyathi and P. W. C. Prasad
Detection types: Centralised test of attack detection, multi-worker test of attack detection, IoT IDS, knowledge-based IDS, anomaly-based IDS
Techniques (networks, comms, modulation): Coding, SoA, Ethereum, matrix calculus, fault diagnosis, sequential processing, parallel processing, model aggregation, ML, FL, deep learning, PySyft open-source library, autoencoder
Frameworks/mechanism/model: Federated IoT attack detection, robust- - -, multicast communication, smart contract, blockchain, device farm, local autoencoder, autoencoder (AE), federated learning, DNN, CNN, RNN, CNN-RNN, Boltzmann machine DBN, RaNN, AE & FFNN, CNN, LSTM, & AE, NSNN, GANs, IBM's adversarial robustness toolbox (ART)
Implementation: AWS, EC2, EC2-EVM instance simulations with 2, 4, and 8 CPUs
Algorithm: Naïve Bayes classifier, support vector machine (SVM), load balancing, cost-aware and service balancing, multi-objective deep reinforcement learning, GNN, ECDSA, elliptic curve, multi-agent algorithm, IoT2Vec

Factor: Results/evaluation
Performance and accuracy metrics: Time (milliseconds), transactions, start time, duration, recall, precision, accuracy, false positives, false positive rate, threshold, loss, time-stamped data with signature
Output/graphic visualisation: Scatter plots, tables, high-low-close chart, histograms, a single control system, visual analytics module, location, location of virtual worker, MAC-IP aggregation, source IP, confusion matrix, baseline results, federated solution, evaluation of false positives, precision, accuracy
information are made known to neighbours. Node IDs were effectively protected, as shown by the reduced detection of sensor attributes in the model of [17]. Scalability per transaction would be beneficial for detection and for resilience to malicious activities. Considering transaction efficiency and processing at the node itself, to mitigate current energy and hardware limits, Ramasamy et al. [16] address this using BWSN, which benefits early detection in that, in the event of an attack, a node's original data is preserved. BWSN assures traceability and transparency in the attack-detection process, guaranteeing detection of malicious nodes and their data. ML technology could help achieve this while reducing the energy consumed by the node: an ANN architecture has shown that, through training, models can be incorporated within an IoT infrastructure as resource-constrained intelligent systems.

Awareness of neighbours can be broadened towards localisation capability. Improved algorithms and ML's use of state-of-the-art IoT localisation over LPWAN overcome challenges faced by many low-cost IoT nodes, such as the inability to incorporate GNSS receivers, node movement involving non-line-of-sight conditions, speed changes, and orientation [13]. Multi-sensor integration in geostrategic situations such as emergencies and geofencing, which demand accurate, continuous, reliable, and seamless multi-environment integration, further contributes to localisation efficiency.

IIoT has opportunities in industry, where processes are still monitored using SPC limits that lack precise detection when values sit just inside a triggering value. Through IIoT, processes could be developed that modify or transition away from SPC, or that mitigate the generic drift mentioned in [4], where trigger alerts arrive too late to prevent operational disasters.
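The details of the DLR (double linear regression) model in [4] are not reproduced in this review; assuming only its general spirit, the idea of catching a slow drift before a fixed SPC limit trips can be sketched as two linear fits, one over an old reference window and one over the most recent window, compared by slope. The window size and threshold below are illustrative assumptions, not values from the paper.

```python
def slope(ys):
    """Least-squares slope of a series against its index."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def generic_drift(readings, window=10, slope_delta=0.5):
    """Compare the trend of the oldest and newest windows; a large slope
    difference signals generic drift before a fixed SPC limit would trip."""
    if len(readings) < 2 * window:
        return False
    return abs(slope(readings[-window:]) - slope(readings[:window])) > slope_delta

stable = [20.0 + 0.01 * i for i in range(30)]
drifting = stable[:20] + [20.2 + 1.5 * i for i in range(10)]
print(generic_drift(stable))    # → False
print(generic_drift(drifting))  # → True
```

The appeal of such a scheme, as [4] emphasises, is its low computational cost: two linear fits per check are cheap enough for resource-constrained nodes.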
A low-computational-complexity generic model using DLR detects generic drift and identifies potential outliers, a successful early-attack-detection strategy [4]. In the classical attacks on sensors mentioned in [9], the generated information often either requires an immediate, incorrect response or produces a slow build-up of errors over time, termed a meaningful response. Sensor redundancy and random sampling are used to mitigate meaningful-response errors and classical attacks in general. These
happen when there are vulnerabilities at the perception layer, where an attack can cause errors to build up slowly over time [9]. Performance improvement in protecting sensor identities was achieved with technologies that enhance processing, such as concurrency during execution [17], which significantly preserved the linearity of the time required to protect a HAN network's identities. Detection is further enhanced by maintaining control of the nodes' spatial vicinity [13, 17], which also helps achieve localisation in IoT through the SVM algorithm and smart contracts. The role of blockchain is shown to be indispensable: Sajid et al. [7] demonstrate its relevance in selecting the shortest path to a destination, ensuring secure routing in the absence of malicious nodes. ML techniques using blockchain successfully detect malicious nodes, enabling only legal, registered nodes to be used in routing.
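The exact identity-protection construction of [17] is not reproduced in this review. A minimal sketch of the underlying idea, assuming a pre-shared network key and epoch-rotated pseudonyms (both assumptions of this example, not details from the paper), shows how neighbours holding the key can verify a node without the raw sensor ID ever appearing on the network:

```python
import hashlib
import hmac

def pseudonym(node_id: str, epoch: int, network_key: bytes) -> str:
    """Keyed, epoch-scoped pseudonym: rotates every epoch, so traffic
    cannot be linked to the raw node ID without the shared network key."""
    msg = f"{node_id}|{epoch}".encode()
    return hmac.new(network_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"shared-han-network-key"  # hypothetical pre-shared HAN key
p1 = pseudonym("sensor-42", 1, key)
p2 = pseudonym("sensor-42", 2, key)
print(p1 != p2)                       # → True: pseudonym rotates per epoch
print(p1 == pseudonym("sensor-42", 1, key))  # → True: a keyed neighbour can re-derive and match
```

An attacker observing p1 and p2 cannot link them to "sensor-42" or to each other without the key, which is the property the identity-hardening discussed above is after.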
5 Future Work

The relationship between the solutions to the two problems of generic drift and meaningful response is worth further investigation because of the similarity of the problems they solve. Predictive attack detection in the IoT also merits study: literature reviews could establish how technology can support such capability, enabling effective mitigative responses where proactive, tailored defences can be set up. In [8], blockchain is used in smart contracts that allow reliable processing of data among the nodes that make up a membership network.
6 Conclusion

The Internet of Things (IoT) constitutes a global connection of sensors and smart devices with services and distributed applications that affect every human sector. The heterogeneity of IoT devices makes designing solutions a major challenge. The significance of blockchain lies in its capacity to keep track of multiple sequences of events: nodes sense many events, the information they create is massive and spread across many packets, and a routing protocol that uses blockchain technology can provide a distributed memory throughout the network's nodes. Successful early detection is achieved with technical solutions designed to deliver detection, mitigation, or both. A wide range of approaches is used, including exploiting properties of sensors and time, where a threshold time variable is manipulated to preserve time linearity and so protect a home network's sensor identities. Algorithms are widely used; most solutions have architectures in which ML, AI, and blockchain interact with sensor nodes on devices and integrate innovative technology with datasets and cloud connections in the IoT ecosystem. Different detection model architectures use the same framework. Early detection of attacks is possible, and there are many possibilities for enhancement.
Appendix: Abbreviations

ANN: Artificial neural networks
BLE: Bluetooth low energy
CNN: Convolutional neural networks
CPS: Cyber-physical systems
DBN: Deep Boltzmann machines
DLR: Double linear regression
DML-AcD: Distributed ML-aided cyberattack detection
DNN: Deep neural networks
DT: Decision tree
ECDSA: Elliptic curve digital signature algorithm
ELL: Embedded learning library
FAR: False alarm rate
FWSN: Federated wireless sensor networks
GAN/GANs: Generative adversarial networks
GNN: Graph neural network
GRU: Gated recurrent units
HAN: Home area networks
IIoT: Industrial Internet of Things
IoST: Internet of Sensor Things
ITA: Identity theft attack
KNN: K-nearest neighbour
LPWAN: Low-power wide-area networks
LSTM: Long short-term memory
MC: Misclassification rate
MDTMO: Malicious node detection using a triangle module fusion operator
NSNN: Negative selection neural network
PDR: Packet drop/delivery ratio
PMiR: Packet misroute rate
PoA: Proof of authenticity
RaNN: Random neural networks
RBAC: Role-based access control
RBN: Restricted Boltzmann machines
RNN: Recurrent neural networks
SDN: Software-defined network
SoA: Service-oriented architecture
SVM: Support vector machines
TOD: Transaction-ordering dependence
UPPAAL: An integrated tool environment for modelling, validation, and verification of real-time systems
References

1. Alshboul Y, Bsoul AAR, Al Zamil M, Samarah S (2021) Cybersecurity of smart home systems: sensor identity protection. J Netw Syst Manag 29(3). https://doi.org/10.1007/s10922-021-09586-9
2. Ding J, Wang H, Wu Y (2022) The detection scheme against selective forwarding of smart malicious nodes with reinforcement learning in wireless sensor networks. IEEE Sensors J. https://doi.org/10.1109/JSEN.2022.3176462
3. Haro-Olmo FJ, Alvarez-Bermejo JA, Varela-Vaca AJ, López-Ramos JA (2021) Blockchain-based federation of wireless sensor nodes. J Supercomput 77(7):7879–7891. https://doi.org/10.1007/s11227-020-03605-3
4. Hasan B, Alani S, Saad MA (2021) Secured node detection technique based on artificial neural network for wireless sensor network. Int J Electr Comput Eng 11(1):536–544. https://doi.org/10.11591/ijece.v11i1.pp536-544
5. Kalnoor G, Gowrishankar S (2021) IoT-based smart environment using intelligent intrusion detection system. Soft Comput 25(17):11573–11588. https://doi.org/10.1007/s00500-021-06028-1
6. Li Y et al (2021) Toward location-enabled IoT (LE-IoT): IoT positioning techniques, error sources, and error mitigation. IEEE Internet Things J 8(6):4035–4062. https://doi.org/10.1109/JIOT.2020.3019199
7. Mothukuri V, Khare P, Parizi RM, Pouriyeh S, Dehghantanha A, Srivastava G (2022) Federated-learning-based anomaly detection for IoT security attacks. IEEE Internet Things J 9(4):2545–2554. https://doi.org/10.1109/JIOT.2021.3077803
8. Software survey (2010) VOSviewer, a computer program for bibliometric mapping. Scientometrics [Online]. Available: https://www.vosviewer.com
9. Munirathinam S (2021) Drift detection analytics for IoT sensors. Procedia Comput Sci 180:903–912. https://doi.org/10.1016/j.procs.2021.01.341
10. Panoff M, Dutta RG, Hu Y, Yang K, Jin Y (2021) On sensor security in the era of IoT and CPS. SN Comput Sci 2(1):2016. https://doi.org/10.1007/s42979-020-00423-5
11. Protogerou A et al (2022) Time series network data enabling distributed intelligence. A holistic IoT security platform solution. Electronics (Basel) 11(4):529. https://doi.org/10.3390/electronics11040529
12. Ramasamy LK, Khan FKP, Imoize AL, Ogbebor JO, Kadry S, Rho S (2021) Blockchain-based wireless sensor networks for malicious node detection: a survey. IEEE Access 9:128765–128785. https://doi.org/10.1109/ACCESS.2021.3111923
13. Sajid MBE, Ullah S, Javaid N, Ullah I, Qamar AM, Zaman F (2022) Exploiting machine learning to detect malicious nodes in intelligent sensor-based systems using blockchain. Wirel Commun Mob Comput 2022. https://doi.org/10.1155/2022/7386049
14. Safi R, Browne GJ (2022) Detecting cybersecurity threats: the role of the recency and risk compensating effects. Inf Syst Front. https://doi.org/10.1007/s10796-022-10274-5
15. Regan C, Nasajpour M, Parizi RM, Pouriyeh S, Dehghantanha A, Choo K-KR (2022) Federated IoT attack detection using decentralized edge data. Mach Learn Appl 8:100263. https://doi.org/10.1016/j.mlwa.2022.100263
16. Shalaginov A, Azad MA (2021) Securing resource-constrained IoT nodes: towards intelligent microcontroller-based attack detection in distributed smart applications. Future Internet 13(11):272. https://doi.org/10.3390/fi13110272
17. Tsimenidis S, Lagkas T, Rantos K (2021) Deep learning in IoT intrusion detection. J Netw Syst Manag 30(1). https://doi.org/10.1007/s10922-021-09621-9
Exploring Cyber Security Challenges of Multi-cloud Environments in the Public Sector

Katherine Spencer and Chandana Withana
Abstract As the public sector has begun to mature in its adoption of cloud technology, it faces the challenge of bringing together the disparate use of several different public cloud offerings across individual organisations and reconciling the security challenges of people and technology to meet compliance and effectiveness. The aim of this research is to provide an overview of some of the cyber security challenges across culture and technology in multi-cloud environments. Through a review of current research, industry papers, and government frameworks, this paper develops a comparative analysis resulting in key areas for future research. The review found that, in the people domain, the availability of subject-matter expertise to implement security education is a factor in the success of organisational security, as are security awareness training and informed situational awareness. For technology, data accountability and integrity emerged as a significant challenge in multi-cloud, with capability largely restricted to the tier-1 cloud service providers, alongside a need for consistent benchmarking or frameworks across private industry and the public sector alike. These results articulate emphasised areas for improvement and further research on cyber security culture frameworks and on benchmarking the confidentiality, integrity, and availability of public cloud. Keywords Cyber security · Multi-cloud · Public sector · Cyber security culture · Benchmark
1 Introduction As the Public sector has begun to mature in the adoption of Cloud technology, they face challenges in bringing together the disparate use of several different public cloud offerings across their individual organisations, as stated by the (Australian) K. Spencer · C. Withana (B) Charles Sturt University, Bathurst, Australia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_19
210
K. Spencer and C. Withana
Digital Transformation Agency in the Digital Economy Strategy [1]: 'Technology is transforming business models and reducing barriers to growth for small business through affordable access to safe and secure payments systems, forums for collaboration, and reduced technology investment costs through the adoption of multi cloud, hybrid cloud and distributed cloud services'. These multi-cloud environments have grown from a variety of needs, for example the desire to respond to business demand and the availability of services offered only via cloud [1]. Public sector organisations are now looking at how to assure security controls and countermeasures that are both compliant and effective, so as to remain resilient against the current wave of advanced persistent threats. This requirement is specifically part of mandatory reporting under the Protective Security Policy Framework (PSPF) [2]: 'Under the PSPF, all non-corporate Commonwealth entities must report to their portfolio minister and the Attorney-General's Department each financial year on security'. In parallel, while each public cloud provider is responsible for assuring the security 'of' the public cloud, public sector organisations must assume responsibility for security 'in' the cloud, as well documented in the cloud service providers' Shared Responsibility Model [3]. The challenge is multiplied when different cloud solutions and public cloud providers have varying, and sometimes competing, security requirements. In the public domain there is an interest in improving cyber security frameworks and in increasing understanding and education in capability [1] to implement compliant and effective cyber security controls and countermeasures in multi-cloud environments. This is driven in part by the need to meet the maturity requirements of the PSPF and the Essential 8 [4].
Of interest in this research is identifying key challenge areas of technology and people where additional focus and deeper research can yield recommendations or approaches. Out of scope for this review are challenges or concerns specific to private industry and the delivery of detailed approaches for meeting security compliance. The review found that, for people, the majority of sources conclude that formalised frameworks [5] for cyber security culture and for security awareness and training would be beneficial; as Alshaikh [6] concludes, shifting the focus from compliance to culture has a significant impact on the development of desirable security behaviors. In the technology domain, data confidentiality, integrity, and availability [7] and security threat detection were at the forefront of multi-cloud challenges [8]. The remainder of the paper is organised as follows: Sect. 2 details the literature review undertaken for this paper; Sect. 3 summarises the analysis of the component, classification, and evaluation tables; Sect. 4 provides a discussion centred on the consolidated research identified in the Sect. 3 tables; Sect. 5 identifies key areas of future work; and the conclusion in Sect. 6 brings the findings together and completes the research proposal by articulating the various niches found.
Exploring Cyber Security Challenges of Multi-cloud Environments …
211
1.1 Methodology

A keyword search was conducted across Elsevier, Springer Link, Science Direct, Wiley, IEEE Xplore, Google Scholar, and CSU Primo for journals, articles, research papers, cyber security professional white papers, and government publications meeting the selection criteria: peer reviewed, published between 2020 and 2022. Figure 1 is a visual representation of the bibliographic networks of this initial set of unrefined candidates (Fig. 2, step 2). Initial results were assessed by abstract, introduction, and conclusion against the topic brief, which included multi-cloud, security, challenges, limitations, and the public sector. Using this approach, thirty-two candidate articles were identified; to refine the results, each candidate was measured for Q1 or Q2 status, which reduced the set to fifteen papers. Candidates were then further critically analysed by author, references, and clarity of language allowing the reader to understand the subject
Fig. 1 VOSViewer bibliographic networks
Fig. 2 Research methodology flow chart
of research, argument, and utility. This resulted in twelve high-quality papers and articles that met all the criteria. In addition, material from industry security professionals and government was sought: ten white papers and government frameworks were reviewed against similar criteria, excluding the SCImago Journal Ranking. One industry paper and four government papers/frameworks were selected as the most relevant to the keyword search and criteria. Referencing was performed with EndNote, with articles, papers, and publications added along with a summary, the publication abstract, and keyword notes.
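The screening steps above (a peer-review and publication-window filter, followed by a Q1/Q2 quartile cut) can be sketched as a simple filter pipeline. The record fields below are assumptions of this illustration, not the reviewers' actual tooling.

```python
def select_candidates(papers, start=2020, end=2022, quartiles=("Q1", "Q2")):
    """Apply the review's screening criteria: peer-reviewed and inside the
    publication window, then restricted to Q1/Q2 venues."""
    screened = [p for p in papers
                if p["peer_reviewed"] and start <= p["year"] <= end]
    return [p for p in screened if p["quartile"] in quartiles]

# Hypothetical corpus records
corpus = [
    {"title": "A", "peer_reviewed": True,  "year": 2021, "quartile": "Q1"},
    {"title": "B", "peer_reviewed": True,  "year": 2019, "quartile": "Q1"},  # outside window
    {"title": "C", "peer_reviewed": False, "year": 2022, "quartile": "Q2"},  # not peer reviewed
    {"title": "D", "peer_reviewed": True,  "year": 2022, "quartile": "Q3"},  # below Q2
]
print([p["title"] for p in select_candidates(corpus)])  # → ['A']
```

The subsequent qualitative pass (author, references, clarity) is a human judgment step and is not modelled here.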
2 Literature Review This area of research seeks to explore the challenges of cyber security in multi-cloud over two domains—Culture and Technology. Previously, articles and papers have sought to investigate these separately, and for the purposes of this paper, they have been consolidated to identify gaps in a more holistic way that can then be used for further research or thought leadership.
2.1 Culture

Modern enterprises acknowledge the value of culture to the success of their corporate objectives, as does the Australian Public Service (APS), captured in the independent review "Our Public Service, Our Future" [9]: 'To rebuild trust, the APS needs to foster a culture in which people do not merely comply with rules and promote shared values, but ensure their combined actions result in a public service which is trustworthy'. This section provides an overview of how culture can be considered in the context of cyber security and of important factors in engendering it.
2.1.1
Frameworks for Measuring Cyber Security Culture
Uchendu et al. [5] discuss in their paper that the term "cyber security culture" is often used interchangeably with "information security culture" and "security culture". The authors investigate the factors and models needed for the creation and sustainment of a thriving security culture. The overall intent of their work is to produce a contemporary characterisation of cyber security culture and to define burning issues such as the role of change management processes and of culture within an enterprise. They do this through four questions: how cyber security culture is defined, what factors are essential to creating and sustaining such a culture, the frameworks required to cultivate it, and the metrics needed to assess these.
Similarly, Georgiadou et al. [10] aim to overlay research on cyber security attacks with the impact of organisational culture and human behaviors. The intent of the research is to find the associations between the two and present a refined set of culture factors mapped to adversary behaviors and patterns using the MITRE ATT&CK framework. The outcome of this mapping is to increase the efficiency of organisational security procedures and enhance security readiness through insight into advancing threats and security risks. They take a thematic approach to data collection by providing a hierarchical mapping of the framework to the identified individual behaviors, an alternative view mapping organisational information technology and communications domains, and a mapping of the individual criteria to the MITRE ATT&CK framework. For further work, Georgiadou et al. [10] propose "exploiting a generic cybersecurity culture framework" to baseline a security status; what is unclear from this statement is whether the intent is to physically exploit an organisation to find its cyber security gaps, specifically targeting culture aspects such as insider threat, or to use a more generic, less well-known adversary-map (MITRE ATT&CK) framework for gap analysis. Uchendu et al. [5] used Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to analyse 58 prior research papers and present the results in a comparative table and graphs. Through their analysis they found that there is no commonly agreed definition of the term 'cyber security culture'. Like Georgiadou et al. [10], Uchendu et al. [5] also found a need for a more generalised approach to security culture frameworks, and for in-place testing and evaluation of those frameworks they recommend that practitioners not rely solely on questionnaires but instead embed with teams and their day-to-day tasks to obtain real-time evaluations.
2.1.2
Security Education Training, Awareness and Behaviors
Security Education Training and Awareness (SETA) programs are a core undertaking in many organisations to reduce the perceived or real security concern of the "human element", or insider threat. As reported by Verizon in their 2021 Data Breach Investigations Report [11]: "The human element continues to drive breaches. This year 82% of breaches involved the human element. Whether it is the Use of stolen credentials, Phishing, Misuse, or simply an Error, people continue to play a very large role in incidents and breaches alike." Alshaikh [6] explores the effectiveness of five initiatives within organisations and how other organisations can incorporate these security practices into their own cyber security education programs. The research provides a theoretical explanation based on a qualitative data-analysis approach using interviews, surveys, and review of existing security policies. The author seeks to demonstrate the success of the initiatives in comparison to previous Security, Education, Training and Awareness programs; while the initiatives are intended to move the focus from security compliance to security
cultures, there is a limitation in the quantitative metrics available to confirm that this approach improves the overall effectiveness of organisational cyber security. The initiatives cover identifying desirable cyber security behaviors, founding a cyber security champion collective, establishing branding for the cyber team, building a cyber security hub, and aligning internal and external organisational campaigns with security education and awareness programs. In contrast, Phuthong [12] provides insight into behaviors that improve decision-making on cloud services in the public sector in Thailand. Through a qualitative analysis administered via questionnaires and interviews, responses were assessed against the factors of system failure, reliability, responsiveness, assurance, empathy, flexibility, and security. Data collection comprised two main areas of measurement, the demographic characteristics of the participants and the aforementioned factors, with data processing performed in the SPSS statistical software suite (developed by IBM). Phuthong's [12] investigation provides an interesting perspective on cloud adoption; however, it was limited to G-Cloud (Google Cloud), and extending the analysis to additional cloud service providers could reveal correlations in adoption across other public cloud service providers. The article highlights important factors that contribute to cloud adoption in the public sector and provides a link between these two authors: that of behaviors.
2.2 Technology

Just as culture is crucial to the success of the public sector, so is technology. The authors of the Digital Economy Strategy found that while Australia is developing its understanding of technologies in order to build capability, it is at the same time keeping pace with changes in those technologies in an effort to position Australia at the forefront of technology development and use [1]. Through this study of the related multi-cloud technology papers, commonalities emerged between them; these are the focus of the following sections.
2.2.1
Data Integrity
One of the tenets of information security is integrity, and in examining the identified literature it was established that data integrity was a common factor. Bucur and Miclea [13] explore the feasibility of using Application Programming Interfaces (APIs) to manage resources in multi-cloud environments, improving and facilitating the highlighted complex challenges of data transformation, ingestion, and scaling of compute resources required when utilising deep
learning Artificial Intelligence (AI) methods for cyber-physical systems. This use of APIs creates a platform layer of connectors to interface between the different cloud environments, ultimately assuring the integrity of data as it traverses the data pipelines. Looking at data allocation techniques in multi-cloud, di Vimercati et al. [14] set out a solution that answers challenges of both security protection and overall data allocation while also optimising cost. They do this through a proposed data model that allows specification of data protection requirements at the micro-level of a single resource while addressing the minimum security requirements that cloud service providers are bound to when offering storage of resources. The solution hinges on breaking the problem down into two domains (binary, in their words): whether the allocation output is expected to be in plaintext or encrypted. This is a sound approach, as organisations may err on the side of caution and over-allocate security requirements so that all data is encrypted out of concern over losing data integrity, irrespective of whether the actual security policy requires it. Further examining solutions for data integrity, Li et al. [15] study secure data uploading, sharing, and updating among multiple clouds through a cloud-blockchain fusion framework (CBFF). The authors report a significant improvement in the time cost of data querying and publication by expediting the security services of uploading, sharing, and tracing data. While the support for these claims is well documented, the solution was constrained to a single cloud provider for deployment. Similar to di Vimercati et al. [14] in inspecting encryption to assure data integrity, Raj et al. [16] explore the problem of securely sharing data between multiple owners while countering insider threats.
The authors propose a mechanism that revokes user access when a set of criteria is not met. These criteria are then used to generate an Access Control List (ACL), which in turn is used to verify the encrypted data via a three-way handshake.
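The handshake and encryption details of Raj et al.'s [16] mechanism are not reproduced in this review; the criteria-to-ACL step it describes can be abstracted into a minimal sketch. The policy checks and user fields below are hypothetical, chosen only to show the shape of criteria-driven revocation.

```python
def build_acl(users, criteria):
    """Grant access only to users meeting every criterion; anyone failing
    a check is effectively revoked by omission from the ACL."""
    return {u["name"] for u in users if all(rule(u) for rule in criteria)}

criteria = [
    lambda u: u["mfa_enrolled"],            # hypothetical policy checks
    lambda u: u["last_audit_days"] <= 90,
]
users = [
    {"name": "alice", "mfa_enrolled": True,  "last_audit_days": 30},
    {"name": "bob",   "mfa_enrolled": False, "last_audit_days": 10},
]
acl = build_acl(users, criteria)
print("alice" in acl, "bob" in acl)  # → True False
```

In the paper's scheme the resulting ACL then gates the verification of encrypted data during the handshake; here it is simply a set membership test.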
2.2.2
Multi-cloud Access
Having access to the cloud service of choice without compromising security is a challenge for organisations with multiple cloud deployments. Sampe et al. [7] examine the problem of selecting a serverless computing service without becoming locked into a vendor, thus benefiting from a multi-cloud deployment model of choosing fit-for-purpose services rather than those constrained to a single cloud service provider. The authors propose using Lithops, a developer toolkit enabling execution of unmodified Python code against separate components. Their work was carried out against serverless functions from the cloud service providers IBM Cloud, AWS Lambda, and Google Cloud. They were able to compare overall performance and overhead of data upload, serverless function invocation, function setup, and result
download, meaning that developers can more accurately choose serverless functions without dependency on a single cloud provider. In the same vein as access, Rampérez et al. [17] present a solution that automatically translates Service Level Agreements (SLAs) into objectives expressed as metrics that can be measured across cloud service providers. This proposal, an intelligent knowledge base that translates SLAs into metrics, can assist organisations in implementing and measuring security requirements through rule-sets and business facts. The authors note that future work is required, due in part to the lack of SLA standardisation across cloud service providers, which makes it difficult to compare all services rather than only the few most common. This solution could assist businesses in assuring that cloud service providers are meeting organisational security requirements.
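Rampérez et al.'s [17] knowledge base is far richer than can be shown here, but the core translation step, mapping SLA clauses to measurable metrics plus predicates, can be sketched in the spirit of their approach. The clause names, metric names, and targets below are hypothetical rule-set entries, not taken from the paper or any provider's SLA.

```python
# Hypothetical rule-set: each SLA clause maps to a metric name and a
# predicate builder that turns the contractual target into a check.
SLA_RULES = {
    "availability": lambda target: ("uptime_percent", lambda v: v >= target),
    "response_time": lambda target: ("p95_latency_ms", lambda v: v <= target),
}

def translate_sla(sla):
    """Turn 'clause: target' pairs into (metric, predicate) checks."""
    return {clause: SLA_RULES[clause](target) for clause, target in sla.items()}

checks = translate_sla({"availability": 99.9, "response_time": 200})
observed = {"uptime_percent": 99.95, "p95_latency_ms": 250}
for clause, (metric, ok) in checks.items():
    print(clause, "met" if ok(observed[metric]) else "breached")
# → availability met
# → response_time breached
```

Because the checks are expressed against metric names rather than provider-specific SLA wording, the same rule-set can score several cloud providers' telemetry, which is the cross-provider comparability the authors are after.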
2.2.3 Security Assurance
Cloud Service Providers commonly express the same position in their Shared Responsibility Model [3]: they (the Cloud Service Providers) are responsible for security of the cloud, while the customer or consumer is responsible for security in the cloud. In effect, this means there is a requirement to assure the security of the components that fall within the consumer's remit. To this end, Lahmar and Mezni [18] propose a mathematical approach to identifying the best-fit cloud service for a consumer against an organisation's security requirements. Their experiment combines two techniques, fuzzy formal concept analysis (fuzzy FCA) and rough set theory (RS), to analyse cloud service providers against organisational security requirements. Fuzzy FCA was used for precision, generating relationships between clouds/services and security policies, while rough set theory approximated the volume of user requests, which helped reduce overall complexity and time. A limitation remains in that there is still a need to verify these policies are actually in place. This experiment could help draw further insight into the benefits of security-aware functions and their use in multi-cloud environments. Correspondingly, Torkura et al. [8] propose a bespoke cloud security system, CSBAuditor, that performs continuous monitoring of cloud infrastructure to detect malicious activity and unauthorised changes, and then remediates misconfigured resources (self-healing). The authors report that CSBAuditor has been evaluated using several strategies, calling out specifically the use of security chaos engineering: the act of introducing an unpredictable variant into a technology system to test its resiliency and its capability to respond to the event (security-related or otherwise). CSBAuditor works by employing state transition analysis and the reconciler pattern.
The reconciler pattern enables tracking of cloud resources, and the state transition analysis allows for auditing, which, in this system, means finding the difference between two states to detect malicious or unexpected events. The finding is then compiled into a report and scored using a cloud security metrics system.
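The state-transition idea, finding the difference between an expected and an observed state, reduces to a structured diff. The sketch below is illustrative of that idea only; the resource names, fields and the flat dictionary representation are assumptions, not CSBAuditor's actual data model.

```python
# Illustrative sketch of state transition analysis: diff an expected
# cloud state against the currently observed state to flag unexpected
# (potentially malicious) changes. Resource names/fields are hypothetical.

def diff_states(expected, observed):
    """Return added, removed, and modified resources between two states."""
    added = {k: observed[k] for k in observed.keys() - expected.keys()}
    removed = {k: expected[k] for k in expected.keys() - observed.keys()}
    modified = {k: (expected[k], observed[k])
                for k in expected.keys() & observed.keys()
                if expected[k] != observed[k]}
    return added, removed, modified

expected = {"bucket:logs": {"public": False}, "vm:web-1": {"port": 443}}
observed = {"bucket:logs": {"public": True},  "vm:web-1": {"port": 443},
            "user:rogue": {"admin": True}}

added, removed, modified = diff_states(expected, observed)
assert "user:rogue" in added  # unauthorised new resource
assert modified == {"bucket:logs": ({"public": False}, {"public": True})}
```

Each entry in `added`, `removed` or `modified` would then feed the report and the security-metrics scoring step described above; a reconciler would additionally restore `expected` values for the drifted resources (the self-healing step).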
Exploring Cyber Security Challenges of Multi-cloud Environments …
217
This type of system could prove meaningful to the governance assurance of security in multi-cloud environments, particularly where there are intensive compliance requirements, such as areas of the public sector that manage Personally Identifiable Information (PII) and banking records.
3 Evaluation

During research on this topic, many related subtopics began to emerge. To the key criteria initially used to locate the research papers (peer review, date of publication and keyword search), the criteria of journal ranking (Q1, Q2) and problem-domain correlation were added. In addition, at the onset of this paper it quickly became apparent that technology was not the only domain contributing to cyber security challenges; culture contributes as well. The Culture authors all drew on people, factors and behaviours, with two of them standing out on this finding; the other two focused on the measurement of cyber security culture frameworks. For the Technology authors, the cloud platforms and methods differed, but their problem domains indicated similar challenges. These have been grouped into data integrity, multi-cloud access and security assurance (Table 1). Table 2 brings together the analysis from the component and classification tables and shows the criteria against which all papers were assessed, allowing for deeper inquiry and discovery of insight across all papers.
4 Discussion

This section discusses the information in the previous sections and in the tables derived from analysis across the twelve selected papers and articles found during the study. The aim of this research, as stated earlier, is to provide an overview of some of the challenges of cyber security across people and technology in multi-cloud environments and to articulate emphasised areas for improvement and further research.
4.1 Culture

4.1.1 Frameworks for Measuring Cyber Security Culture
Governance frameworks such as Control Objectives for Information and Related Technologies (COBIT) and the Information Technology Infrastructure Library (ITIL) play a pivotal part in the delivery of good IT management governance. The delivery of secure, robust multi-cloud
Table 1 Component table for cyber security challenges in multi-cloud—culture (Factors, Attributes, Instances)

Study selection
Study type: Qualitative, interpretive, grounded theory
Study sources: Journals, articles, research papers, cyber security industry experts, cloud industry experts, government publications
Settings: Cybersecurity culture, cyber security awareness, cloud computing, multi-cloud, cloud storage performance, multicloud, multi-Cloud, multi cloud, protection requirements, culture framework, security culture, cyber security culture, acceptance, public sector, behaviour, security behaviour, denial of service, multi-cloud systems, scheduling, reliability, budget constraints, security evaluation, multi-tenancy, public cloud
Applied area: Cloud technology, cloud services, people and culture, information security, technology governance, cyber security, multi-cloud

Systematic literature review
Digital libraries: Elsevier, Springer Link, Science Direct, Wiley, IEEE Xplore, Google Scholar, CSU Primo
Article selection: Key word search, peer reviewed, publish period, journal ranking (quality), data comparison, information synthesising
Research approaches: Descriptive, content analysis, mixed method, normative, case study, observation, interpretive case study, grounded theory
Challenge: Skillsets and capability, change impact, cost burden, technology compatibility, security compliance, security effectiveness, resource management, total cost of ownership, technical debt, cyber security culture, culture frameworks, theory only
Metrics: Random consistency index (RI), linguistic principle

Empirical study
Criteria: Peer reviewed, quality research, date of publication
Data extraction: Authors, title, year of publication, publisher, research methodology, data gathering and interpretation methods, outcome, further research, limitations, challenges
Problem domain: Measuring culture impact, changing behaviours, training, awareness, security culture terms, applied frameworks, compliance
Proposal: Security training and awareness programs in the development of behaviours with a focus on culture rather than compliance; cultivating cyber security culture through behavioural factors and formalised frameworks; a cyber security culture framework adapted from the MITRE ATT&CK framework; an evaluation framework of cloud services versus people factors
Limitations: Access to subject matter experts, access to senior executives, formalised security culture frameworks, policies, situational awareness, common understanding of linguistic terms, applied confirmation of recommendations and framework (research only), enrichment to align the models in fullness, adoption, perception of usefulness
Questionnaire development: Demographic characteristics, factors of influence, assessment of reliability, responsiveness, usefulness, security, flexibility, and empathy
Respondents: Email, survey tool, questionnaires, interviews, training sessions, observations
Quality verification: Likert scale, Cronbach's alpha, PRISMA guidelines

Data analysis
Barriers: Senior management support, misunderstood or poorly articulated risk, vendor opportunism, loss of technical capability, reduced access to security linked activities, human intervention, reduced ability to rapidly evolve
Linguistic terms: Cyber security culture, security culture, infrastructure security culture
Algorithms: Partial least squares (PLS), requirement engineering
Technique: Generalisation and specialisation techniques, research techniques, analytical technique, search techniques, ISCA questionnaire instrument, security related questionnaires
Method: Knowledge-based concept, grounded theory, decision-making trial and evaluation laboratory method (DEMATEL), cyber security culture
Model: SERVQUAL, analytical model, cause and effect
Tools: SPSS, online survey tool
Framework: PRISMA, Likert scale, information security culture assessment

Evaluation
Author biases: Participant selection, data analysis, factors, database selection, tool selection, frameworks for comparative analysis
Likert scale: 5 points (strongly agree, agree, neutral, disagree, strongly disagree); 7 points (extremely disagree, disagree, moderately disagree, slightly disagree, not sure, slightly agree, moderately agree, extremely agree)

Output
Primary output: Articulation of future recommendations, gaps, challenges, future challenges, results from survey findings, descriptive analysis, comparison of survey results, factors
Output visualisation: Tables, flow charts, infographics
Table 2 Classification table for cyber security challenges in multi-cloud—technology (Article; Challenge; Problem domain; Proposal; Limitations; Platform)

[10] Challenge: data integrity. Problem domain: data transformation, data ingestion, resource scaling, business continuity. Proposal: APIs. Limitations: requirement for benchmarks, testing for continuity. Platform: Amazon Web Services, Google Cloud Platform, AWS S3, Apache Kafka.

[11] Challenge: data integrity, security assurance. Problem domain: security protection, data allocation. Proposal: encryption schema. Limitations: reliant on existing, defined security model. Platform: N/A.

[12] Challenge: data integrity. Problem domain: data accountability, multi-cloud data sharing. Proposal: Hyperledger blockchain fabric, operation tracing mechanism. Limitations: applied to one Tier 1 cloud service provider. Platform: Alibaba Cloud Computing.

[13] Challenge: data integrity. Problem domain: secure data sharing, multi-cloud. Proposal: structured data sharing mechanism, access control. Limitations: compute resource usage. Platform: N/A.

[14] Challenge: multi-cloud access. Problem domain: vendor lock-in, multi-cloud. Proposal: open-source framework (Lithops), APIs. Limitations: supports one coding language (Python). Platform: IBM Cloud, Amazon Web Services, Alibaba Aliyun, Google Cloud Platform, Microsoft Azure.

[15] Challenge: multi-cloud access. Problem domain: multi-cloud comparative service analysis. Proposal: intelligent knowledge-based system. Limitations: comparison of Tier 1 cloud service providers only. Platform: Amazon Web Services, Google Cloud, Microsoft Azure, RabbitMQ, ActiveMQ.

[17] Challenge: security assurance. Problem domain: cloud service, security requirements. Proposal: applied mathematical theory. Limitations: existing adoption of security policies, unconfirmed if applied. Platform: Amazon Web Services, Microsoft Azure.

[18] Challenge: security assurance. Problem domain: threat detection, multi-cloud. Proposal: cloud security system (CSBAuditor), cloud security scoring system. Limitations: new tooling category means fewer options for comparison and evaluation. Platform: Amazon Web Services, Google Cloud Platform.
composition is no different. Uchendu et al. [5] identify that while many theoretical frameworks on cyber security culture were prominent in their research, with varied approaches and applications, these frameworks did not hold consistently across organisations of different sizes, with different resources and different issues to manage. The authors go on to say that these conceptual frameworks need to be applied in practice and undergo relevant, rigorous validation against organisational policies. This is also the case with Georgiadou et al. [10], who similarly identified that the conceptual proposal of their cyber security culture framework, mapped against the MITRE ATT&CK framework, would need further situational testing as well as the attention of security experts to validate and enrich the model. In addition, Uchendu et al. [5] noted throughout their paper that a common understanding of linguistic terms, such as cyber security culture, would benefit from official definition, as the term is often used interchangeably with information security culture and the shortened security culture.
4.1.2 Security Education Training, Awareness and Behaviors
While many organisations proactively apply SETA with a focus on behaviors, Alshaikh [6] finds there is a shortage of security experts in the field of security awareness and behavioral change who are trained in the development, implementation and evaluation of SETA frameworks and programs. This has left organisations unable to comprehensively adopt and sustain such efforts, reducing the effectiveness of delivery. In parallel to the adoption of SETA are the related behaviors of technology adoption, as explored by Phuthong [9]; while they sought to understand the impact of these behaviors, the authors' research was constrained to a single cloud provider, which introduced a bias when correlating real or perceived factors.
4.2 Technology

4.2.1 Data Integrity
In answer to the challenge of data integrity, several solutions have been investigated across four articles [10-13]. These include APIs, encryption schemas, blockchain and structured data sharing mechanisms. Some of these are emerging technologies; others are already well established in the market. There is an opportunity here for continuity of testing and benchmarking across public cloud providers to create a common, reliable security model that assists consumers in choosing the relevant solution based on business concern.

4.2.2 Multi-cloud Access

Sampe et al. [7] and Rampérez et al. [17] seek to enable developers and technologists alike by simplifying access to different cloud services, irrespective of the cloud service provider. However, in their proposals of an open-source framework and an intelligent knowledge-based system, results were limited to a single coding language and to Tier 1 cloud service providers. For these proposals to meet the challenge of truly ubiquitous cloud use, the technology solutions would need to broaden their scope to cloud providers of various sizes and to languages and formats common to cloud platforms, such as Java, JSON and Vue.js.

4.2.3 Security Assurance

As with data integrity, security assurance is vital to the sustainment of a thriving cloud ecosystem. These goals are complex and, as a result, may require a complex approach. Lahmar et al. [18] and Torkura et al. [8] propose simplifying this through, respectively, a mathematical theory and a security system similarly based on mathematical theory. Both proposals may benefit from broader evaluation, further confirming whether the security policies they are based on can be validated to produce metrics for effectiveness and compliance.
In exploring these twelve research papers, we have found familiar, perhaps already identified cyber security challenges that are not necessarily unique to cloud ecosystems, but that are certainly emphasised at large scale, which is both a characteristic and a benefit of cloud adoption. The limitations of this paper are that the study was constrained to a publication period (2020–2022) and focused only on multi-cloud.
5 Future Work

During research for this topic, we refined areas that could be considered for future work. These have been distilled from the twelve research articles and encompass the sub-topics of Frameworks for measuring Cyber Security Culture, Security Education
Training, Awareness and Behaviors, Data Integrity, Multi-cloud Access and Security Assurance. Derived from the study of these topics, future work could be considered in the following areas: conducting quantitative analysis on the availability of security education subject matter experts to assist with gaps in industry capability; benchmarking or baselines for factors of data integrity in public cloud service providers to assist fit-for-purpose, informed decision making; and an applied framework for implementing and measuring the effectiveness of initiatives for Cyber Security Culture.
6 Conclusion

Cloud technology will continue to be a focus of transformation and adoption in the Public Sector over the next seven years. It is both the trigger and the enabler for the Public Sector to increase its business agility and deliver outstanding public services that will underpin the success of the Digital Economy. This paper has sought out and emphasised areas for improvement and further research in the domain of cyber security and multi-cloud in the context of the public sector. Several limitations were outlined, specifically in defined, formalised frameworks for measuring cyber security culture; defined security models and benchmarking for data integrity in the cloud; and broadening the scope of solutions beyond Tier 1 public cloud service providers. The key findings from this research are the need for further research and investigation into benchmarking or baselines for factors of data integrity in public cloud service providers to assist fit-for-purpose, informed decision making, and into an applied framework for implementing and measuring the effectiveness of initiatives for Cyber Security Culture.
References

1. Digital Economy Strategy 2030 (2021) [Online]. Available: https://digitaleconomy.pmc.gov.au/sites/default/files/2021-07/digital-economy-strategy.pdf
2. Attorney-General's Department (2021) Protective security policy framework. Australian Government. https://www.protectivesecurity.gov.au/
3. Amazon Web Services (2020) Shared responsibility model. https://aws.amazon.com/compliance/shared-responsibility-model/
4. Attorney-General's Department (2020) PSPF 2019–20 consolidated maturity report [Online]. https://www.protectivesecurity.gov.au/system/files/2021-06/pspf_2019-20_consolidated_maturity_report.pdf
5. Uchendu B, Nurse JRC, Bada M, Furnell S (2021) Developing a cyber security culture: current practices and future needs. Comput Secur 109:102387. https://doi.org/10.1016/j.cose.2021.102387
6. Alshaikh M (2020) Developing cybersecurity culture to influence employee behavior: a practice perspective. Comput Secur 98:102003. https://doi.org/10.1016/j.cose.2020.102003
7. Sampé J, García-López P, Sánchez-Artigas M, Vernik G, Roca-Llaberia P, Arjona A (2021) Toward multicloud access transparency in serverless computing. IEEE Softw 38(1):68–74. https://doi.org/10.1109/MS.2020.3029994
8. Torkura KA, Sukmana MIH, Cheng F, Meinel C (2021) Continuous auditing and threat detection in multi-cloud infrastructure. Comput Secur 102. https://doi.org/10.1016/j.cose.2020.102124
9. Department of the Prime Minister and Cabinet (2019) Our public service, our future [Online]. https://www.pmc.gov.au/sites/default/files/publications/independent-review-aps.pdf
10. Georgiadou A, Mouzakitis S, Askounis D (2021) Assessing MITRE ATT&CK risk using a cyber-security culture framework. Sensors 21(9):3267. https://doi.org/10.3390/s21093267
11. Bassett G, Hylender CD, Langlois P, Pinto A, Widup S (2021) 2021 data breach investigations report [Online]. Available: https://www.verizon.com/business/resources/reports/dbir/
12. Phuthong T (2022) Factors that influence cloud adoption in the public sector: the case of an emerging economy-Thailand. Cogent Bus Manag 9(1). https://doi.org/10.1080/23311975.2021.2020202
13. Bucur V, Miclea L-C (2021) Multi-cloud resource management techniques for cyber-physical systems. Sensors 21(24):8364. https://doi.org/10.3390/s21248364
14. De Capitani di Vimercati S, Foresti S, Livraga G, Piuri V, Samarati P (2021) Security-aware data allocation in multicloud scenarios. IEEE Trans Dependable Secure Comput 18(5):2456–2468. https://doi.org/10.1109/TDSC.2019.2953068
15. Li Q, Yang Z, Qin X, Tao D, Pan H, Huang Y (2022) CBFF: a cloud–blockchain fusion framework ensuring data accountability for multi-cloud environments. J Syst Archit 124:102436. https://doi.org/10.1016/j.sysarc.2022.102436
16. Raj B, Kumar A, Venkatesan GKD (2021) A security-attribute-based access control along with user revocation for shared data in multi-owner cloud system. Inf Secur J 30(6):309–324. https://doi.org/10.1080/19393555.2020.1842568
17. Rampérez V, Soriano J, Lizcano D, Aljawarneh S, Lara JA (2021) From SLA to vendor-neutral metrics: an intelligent knowledge-based approach for multi-cloud SLA-based broker. Int J Intell Syst. https://doi.org/10.1002/int.22638
18. Lahmar F, Mezni H (2021) Security-aware multi-cloud service composition by exploiting rough sets and fuzzy FCA. Soft Comput 25(7):5173–5197. https://doi.org/10.1007/s00500-020-05519-x
Data Security Risk Mitigation in the Cloud Through Virtual Machine Monitoring Ashritha Jonnala, Rajesh Ampani, Danish Faraz Abbasi, Abeer Alsadoon, and P. W. C. Prasad
Abstract Cloud computing offers on-demand, pay-per-use, remote access to a shared pool of resources, and for this reason many organizations are moving to the cloud. With the increased adoption of cloud computing, data security and privacy concerns in the cloud have become increasingly prominent. It is important to maintain data security in the cloud to avoid data breaches. However, identifying the optimal solution for data security in a cloud environment is very challenging because of various security and performance constraints. The aim of this paper is to emphasize the importance of data security in cloud computing by analyzing various data security techniques. The paper proposes a taxonomy of authentication, verification, and mitigation in multi-agent system cloud monitoring for data security risk mitigation in the cloud computing environment. The authentication process includes node authentication, mutual authentication, data authentication and wireless sensor network (WSN) authentication. Verification is based on the use of digital certificates and access control implementation. Mitigation involves a single sign-on solution and a live migration approach. The contribution of this work to the wide research area of data security in cloud computing is to demonstrate various tools and techniques for data security risk mitigation. We verify the utility of the authentication, verification, and mitigation taxonomy by analyzing 15 research journals. This study provides a comprehensive insight into existing techniques for data security risk mitigation in cloud computing.
A. Jonnala · A. Alsadoon · P. W. C. Prasad Charles Sturt University, Bathurst, Australia R. Ampani · D. F. Abbasi (B) · A. Alsadoon · P. W. C. Prasad Kent Institute Australia, Melbourne, Australia e-mail: [email protected] R. Ampani Peninsula Health, Frankston, Australia A. Alsadoon · P. W. C. Prasad Western Sydney University, Penrith, Australia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_20
227
228
A. Jonnala et al.
Keywords Authentication · Mitigation · Verification · VM (virtual machine)
1 Introduction

Data security in cloud computing is a broad topic and can include any combination of policies, technologies, and controls to protect data from possible attacks and breaches. In cloud computing, users are unaware of the exact location of their sensitive data, because cloud service providers maintain their data centres in geographically distributed locations, resulting in several security challenges and threats. Traditional information security techniques such as firewalls, antivirus software and intrusion detection systems do not offer adequate security in virtualized systems, because threats spread rapidly via virtualized environments [1]. A cloud environment can provide constant support for controlling the resources required while storing and sharing data. A Multi-Agent System based Cloud Monitoring model is used for predicting alerts and blocking intrusions, addressing the limitations of traditional security methods in the cloud environment. It helps prevent unapproved task injection and alteration of the scheduling process [2]. The main purpose of this research is to propose a system that includes authentication, verification, and mitigation in the cloud environment using Multi-Agent System based cloud monitoring. The process steps of the model include node authentication, mutual authentication, data authentication, WSN authentication, a live migration approach, a single sign-on solution and verification based on digital certificates. Multi-Agent System based Cloud Monitoring is intended to augment multi-concept issue detection within the cloud data center. The model is based on a basic multi-agent system whose role is to monitor and support the cloud environment so that data and information can be stored securely [3]. This model ensures safety, quality of service and the ideal use of resources.
One of the essential processes in the system is task scheduling. In the cloud system, different types of tasks are loaded by numerous users from different locations. For adequate monitoring and support, three agents are essential for maintaining security and efficiency: the collector agent, the master agent and the worker agent. This project aims to develop and design a model that uses authentication, verification, and mitigation techniques. To attain this purpose, the paper investigates the latest journals on the stated topic. Several practices and recommendations from academia are also analyzed to examine the security perspectives of different authors. A comparative analysis of different aspects is done to identify and assess the mechanisms within the cloud environment. The research includes a brief explanation of the Authentication, Verification, and Mitigation (AVM) taxonomy, proposing its components with a comparison against previous solutions. The taxonomy focuses on mitigating data security risk and managing cloud back-end data storage.
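The collector/master/worker arrangement described above can be sketched as a small pipeline. This is an illustrative sketch only: the queues, task fields and the authorisation check standing in for task authentication are assumptions made for the example, not the model's actual implementation.

```python
# Minimal sketch of the three-agent arrangement: a collector gathers
# user tasks, a master schedules only authenticated tasks (blocking
# unapproved task injection), and a worker executes what was scheduled.

from queue import Queue

class CollectorAgent:
    """Gathers tasks loaded by users and forwards them to the master."""
    def __init__(self, inbox):
        self.inbox = inbox

    def collect(self, task):
        self.inbox.put(task)

class MasterAgent:
    """Schedules only tasks from authorised users to the work queue."""
    def __init__(self, inbox, work_queue, authorised_users):
        self.inbox = inbox
        self.work_queue = work_queue
        self.authorised_users = authorised_users

    def schedule(self):
        while not self.inbox.empty():
            task = self.inbox.get()
            if task["user"] in self.authorised_users:
                self.work_queue.put(task)  # approved: hand to a worker

class WorkerAgent:
    """Executes scheduled tasks (here, simply records their names)."""
    def __init__(self, work_queue):
        self.work_queue = work_queue

    def run(self):
        done = []
        while not self.work_queue.empty():
            done.append(self.work_queue.get()["name"])
        return done

inbox, work = Queue(), Queue()
collector = CollectorAgent(inbox)
master = MasterAgent(inbox, work, authorised_users={"alice"})
worker = WorkerAgent(work)

collector.collect({"user": "alice", "name": "report-job"})
collector.collect({"user": "mallory", "name": "injected-job"})  # unapproved
master.schedule()
assert worker.run() == ["report-job"]  # the injected task never runs
```

The point of the sketch is the separation of duties: only the master touches the authorisation decision, so an unapproved task injected at the collector never reaches a worker.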
Data Security Risk Mitigation in the Cloud Through Virtual Machine …
229
2 Literature Review

The main causes of security problems in the cloud computing environment relate to the structural characteristics of the cloud itself. The nodes involved in cloud computing are diverse, sparsely distributed and often unable to be controlled effectively. There is always a risk that privacy in the cloud environment can be disclosed by the cloud service provider during the transmission, processing, and storage of data. Because cloud computing builds on existing technology, the security vulnerabilities of existing technologies are directly transferred to a cloud computing platform, with even greater security threats [3]. In a cloud environment there are constant changes in memory, which is a major issue for securing data. Data security should not be an additional requirement in cloud computing; it should exist as a basic feature of the system at all levels. An Intrusion Detection System helps detect malicious activities in the cloud center [1]. The software-defined network OpCloudSec, with its ability to provide administrators and end users a single interface to operate a virtual infrastructure, can help protect distributed virtual cloud data centers [4]. A Gaussian stochastic mixture model is used for defining the performance set within a Virtual Machine [5]. A task service model combined with a security-aware task scheduler is used for mitigating data security risks; task waiting is the limitation identified in the system. Zhang et al. [6] investigate a novel agentless periodic file system monitor framework for managing the data format. It detects the real files of the user; native memory is lost when the VM shuts down, and the number of file scanners is reduced. A cross-view validation technique is used for accessing the classified files, and stored-file access redirection enhances the designation of work. Malicious activities cannot be tracked directly, but the framework provides a technique for resolving security issues.
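The periodic file-monitoring idea can be illustrated with a generic integrity baseline: snapshot file hashes on one pass and compare on the next. This is a hedged, generic sketch, not the agentless framework of Zhang et al.; the in-memory "files" are placeholders for real filesystem reads.

```python
# Generic sketch of periodic file-integrity monitoring: hash file
# contents into a baseline, then diff against a later snapshot to
# detect tampering. File paths/contents here are illustrative.

import hashlib

def snapshot(files):
    """Map each logical file name to the SHA-256 of its contents."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def detect_tampering(baseline, current):
    """Return the names whose digest changed (or is new) since baseline."""
    return [name for name, digest in current.items()
            if baseline.get(name) != digest]

before = snapshot({"/etc/passwd": b"root:x:0:0", "/app/conf": b"debug=0"})
after = snapshot({"/etc/passwd": b"root:x:0:0", "/app/conf": b"debug=1"})
assert detect_tampering(before, after) == ["/app/conf"]
```

An agentless variant, as in the surveyed framework, would compute the `current` snapshot by inspecting the VM's disk from outside the guest rather than by running a scanner inside it.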
DNA Based Data Security (DNABDS) is proposed to encrypt big data by utilizing DNA. A 1024-bit DNA-based Secret Key (DNASK) is used for data encryption. This key is randomly generated based on the attributes of the authorized users. It takes less time for secret key generation, key retrieval, data encryption and data decryption, and it can resist many security attacks [7]. According to [2], a Multi-Agent based Cloud Monitoring (MAS-CM) model is used for enhancing the security execution of the system; it includes communication for switching advantage. A Signature Apriori algorithm has been developed for intrusion detection in the cloud system. It identifies virtual machine attacks, although issues in the central log can affect detection [8]. A new resource provisioning algorithm is used for improving the resource limit, so that task runtime can be managed. Bandwidth and data locality are taken into account to continually and dynamically update scaling decisions based on changes in the average runtime of the tasks and in data transfer rates [9]. Data integrity checking addresses the security level and storage data management of the system. An intelligent indoor environment monitoring system using sensors in the cloud computing and big data environment has been used; this work introduces storage environment protection. Data manipulation affects the accuracy of the system, and the functional encryption of the task is to be improved [10].
An Intrusion Detection System is used for identifying malicious activities. A multi-objective privacy-aware workflow scheduling algorithm is used for data security [11]. The Sacbe approach is used for improving the flexibility of the system; it helps in sharing data within the cloud environment [12]. A decentralized affinity-aware migration technique is used for resolving the data migration issue; service downtime is the main issue in this solution, which provides a CPU-based pattern [13]. A specific flow connection entropy (FCE) series is used to reduce the dimensionality of the traffic sequence. A time-series decomposition method can divide the FCE series into a steady random component and a trend component, which can be analyzed to detect anomalies in both the short-term and long-term trends in message traffic caused by attacks [14]. An energy-save model is explored for managing energy consumption and security issues; it contributes by enhancing CPU power by 5.5%. The limitation is due to heavy traffic, and content similarity is exploited [15].
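The entropy-of-connections signal behind an FCE-style series can be illustrated with Shannon entropy over the destinations seen in a traffic window. This is a generic illustration of the principle, not the exact FCE construction or decomposition of [14]; the traffic windows below are fabricated for the example.

```python
# Illustrative sketch: Shannon entropy of the destination distribution
# in a traffic window. Entropy collapsing toward zero (traffic
# concentrating on one target) is a classic attack indicator.

import math
from collections import Counter

def shannon_entropy(flows):
    """Entropy (bits) of the empirical destination distribution."""
    counts = Counter(flows)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

normal = ["db", "web", "cache", "web", "db", "api"]  # mixed targets
attack = ["web"] * 50 + ["db"]                       # concentrated flood

assert shannon_entropy(attack) < shannon_entropy(normal)
```

Computing this value per window yields the one-dimensional series that a time-series decomposition can then split into steady and trend components for short- and long-term anomaly detection.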
3 Taxonomy for Data Security in the Cloud Through Virtual Machine Monitoring

3.1 Proposed System—AVM (Authentication, Verification, and Mitigation)

The authentication, verification and mitigation taxonomy is developed based on a review of both previous and current multi-agent systems. The terminology used in the taxonomy is based on a formal literature review conducted across the database of publications in the area of virtual machine monitoring. The base of this work is the inclusion of parameters such as authentication, mitigation, verification, accuracy, safety, tracking, connectivity and privacy in cloud computing. This project aims to develop and design a model that uses authentication, verification, and mitigation techniques. To attain this purpose, the paper investigates the latest journals on the indicated topic. It also analyses several practices and recommendations from academia to examine the security perspectives of different authors. A comparative analysis of various aspects is done to identify and assess the mechanisms within the cloud environment. The taxonomy modelled in this research classifies the complete technology into three parts, covering all the factors of the authentication, verification and mitigation model. The first part is authentication, the act of proving the user to be genuine. The second class is verification, which verifies the authentic user who accesses the system. The last fragment is mitigation, which aids in transferring the data from one format to another. The authentication, verification and mitigation model is divided into useful fragments so that its components can be analyzed effectively (Fig. 1).
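The three classes and the subclasses the paper names (node/mutual/data/WSN authentication; digital certificates and access control; single sign-on and live migration) can be made concrete as a small data structure. This is purely organisational, a sketch of the classification rather than any executable part of the proposed system.

```python
# A data-structure sketch of the AVM taxonomy's three classes and the
# subclasses named in the text.

AVM_TAXONOMY = {
    "Authentication": ["Node authentication", "Mutual authentication",
                       "Data authentication", "WSN authentication"],
    "Verification": ["Digital certificates",
                     "Access control implementation"],
    "Mitigation": ["Single sign-on solution", "Live migration approach"],
}

def classify(technique):
    """Return the AVM class a named technique belongs to, if any."""
    for cls, subclasses in AVM_TAXONOMY.items():
        if technique in subclasses:
            return cls
    return "Unclassified"

assert classify("Mutual authentication") == "Authentication"
assert classify("Digital certificates") == "Verification"
```

A reviewed paper's proposed mechanism can then be tagged with its AVM class, which is essentially how the classification and validation tables in the following sections are organised.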
Data Security Risk Mitigation in the Cloud Through Virtual Machine …
231
Fig. 1 The AVM taxonomy along with its classes and subclasses
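As a minimal illustration, the three top-level AVM classes and a few of the subclasses that appear in Fig. 1 and Table 1 can be expressed as a lookup structure. The subclass lists below are illustrative samples drawn from the table, not the full taxonomy, and the function name is an assumption:

```python
# Top-level classes of the AVM taxonomy with sample subclasses from Table 1.
AVM_TAXONOMY = {
    "authentication": ["digital certificate", "DNA based key", "signature"],
    "verification": ["OTP", "QR code scanner", "identity based encryption", "scanning"],
    "mitigation": ["database migration", "ETL tools", "host-based file-level migration"],
}

def classify(technique):
    """Return the AVM class a given technique belongs to, or None."""
    for avm_class, subclasses in AVM_TAXONOMY.items():
        if technique in subclasses:
            return avm_class
    return None
```

Such a structure lets a monitoring agent tag each observed security mechanism with the AVM class it contributes to.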
4 Classification, Validation & Evaluation Analysis Table

Table 1 depicts the classification, validation, and evaluation of the various components of the proposed system.
5 Validation and Evaluation

It is necessary to determine the relevant aspects of the AVM model, and its subclasses assist in improving the accuracy and reliability of the system; this allows the system to be validated and evaluated in a systematic way. Table 1 shows the validation and evaluation of the tools through which the AVM model is justified. Validation aims to prove the accuracy of the system and to show whether the right system has been built, while evaluation is concerned with showing the usefulness of the system; the validation therefore focuses on the overall accuracy of the system [9]. In the papers summarized in Table 1, the AVM components of each system are evaluated and validated. The AVM model will assist in providing data security in the cloud through virtual machine monitoring, and to determine its most relevant factors, its various components are evaluated and validated to demonstrate their added value. Many articles and journals are used to improve the security and validity of user data, and the reason for evaluating and validating the system is to justify the use of the AVM model.
Table 1 Classification, validation and evaluation analysis table

Yang et al. [16] — Domain: Cloud data. Component validated or evaluated: Locality sensitive hashing server. Validation and evaluation method: Firewall. Authentication: Digital certificate. Verification: Scanning. Mitigation: N/A. Access control: Rule-based access control. Mathematical formula: H(X) = −Σ_{i=1}^{n} p(x_i) log_b p(x_i), where p(x_i) is the probability mass function of outcome x_i.

Fernández-Cerero et al. [5] — Domain: Organization cloud. Component validated or evaluated: Monolithic scheduling. Validation and evaluation method: Cloud simulation model; HPL. Authentication: N/A. Verification: Enabling security agents. Mitigation: ETL tools. Access control: Mandatory access control. Mathematical formula: Σ_{i=1}^{m} (P_idle^i · t_idle^i + P_busy^i · t_busy^i + t_sec^i), where t_idle^i denotes the time the ith machine spends in an idle state, t_busy^i the time it spends in task-related operations, and P_idle^i the power required when idle.

Sun [3] — Domain: Cloud computing. Component validated or evaluated: Privacy protection framework. Validation and evaluation method: Cryptography. Authentication: Proxy re-encryption. Verification: Identity based encryption. Mitigation: The end-to-end encryption. Access control: Ciphertext-policy ABE. Mathematical formula: N/A.

Sharma et al. [4] — Domain: Wireless network security; virtual networking. Component validated or evaluated: Attack prevention model (software defined networking). Validation and evaluation method: Security attack algorithms. Authentication: Signature. Verification: N/A. Mitigation: N/A. Access control: N/A. Mathematical formula: F_m(k) = i_m(k) XOR i_m(k − 1), where i_m(k) represents the data vector.

Subramanian and Jeyaraj [1] — Domain: Software organisation. Component validated or evaluated: Intrusion detection system. Validation and evaluation method: Comparative analysis. Authentication: N/A. Verification: N/A. Mitigation: Data security risk mitigation. Access control: Traditional access controls (discretionary access control). Mathematical formula: N/A.

Namasudra et al. [7] — Domain: Big data. Component validated or evaluated: Data encryption security. Validation and evaluation method: Genetic algorithm. Authentication: DNA based key. Verification: DNA. Mitigation: Database migration. Access control: Attribute based access control. Mathematical formula: N_m: X(t) = f(X(t)) + Σ_{k=1}^{g} H_k(X(t − τ_k)), where f(·), f̂(·), H_k(·) and Ĥ_k(·) are the nonlinear vector-valued functions and τ_k (k = 1, 2, …, g) are the delays.

(continued)
232 A. Jonnala et al.
Table 1 (continued)

Grzonka et al. [2] — Domain: Management based organization. Component validated or evaluated: Multi-agent based cloud monitoring (MAS-CM) model. Validation and evaluation method: Genetic algorithm. Authentication: Master agent. Verification: Digital certificate. Mitigation: Not logging raw tables. Access control: Attribute-based access control. Mathematical formula: e_i = r/n_s for e_i(c) > e_i^min, where r is the residue and e is the essential quantities.

Modi and Patel [8] — Domain: Business firm. Component validated or evaluated: Multi-threading technique. Validation and evaluation method: Multi-threading technique. Authentication: Traditional network. Verification: Mail transfer protocol. Mitigation: Database migration. Access control: N/A. Mathematical formula: Weighted average = Σ_{i=1}^{n} (O_i · NP_i) / Σ_{i=1}^{n} NP_i, where n is the number of tests, O_i is the output of the ith test and NP_i is the number of profiles inspected in the ith test.

Toosi et al. [9] — Domain: Hybrid cloud. Component validated or evaluated: Deployment service model. Validation and evaluation method: Cloud bursting model. Authentication: Aneka Daemon. Verification: Code recognition. Mitigation: ETL tools. Access control: Rule-based access control. Mathematical formula: RMSR = 1 − [(δ_d + β_r) × 100%]/(δ_total + β_r), where δ_d is the number of delayed jobs, δ_total is the total number of submitted jobs, and β_r is the number of resubmitted jobs.

Li et al. [15] — Domain: Cloud service. Component validated or evaluated: Trust management framework. Validation and evaluation method: Cloud computing framework. Authentication: High bandwidth. Verification: Application level encryption. Mitigation: Host-based file-level migration. Access control: Rule-based access control. Mathematical formula: L_{m,n} = [(2π − θ_{m,n})/Δϕ]_int, where [·]_int represents rounding to the nearest integer.

(continued)
Table 1 (continued)

Yang et al. [10] — Domain: Indoor organizational environment. Component validated or evaluated: OpenStack dashboard. Validation and evaluation method: Cloud elasticity model. Authentication: Controlling intelligent socket algorithm. Verification: QR code scanner. Mitigation: N/A. Access control: Rule-based access control. Mathematical formula: ST = {t_i, s_i, v_i}, where ST is the sensor tuple.

Ficco [14] — Domain: Business security; multi-tenancy model. Component validated or evaluated: Data center. Validation and evaluation method: Self-similarity model. Authentication: Self-similarity model. Verification: Feature extraction. Mitigation: ETL tools. Access control: N/A. Mathematical formula: k = Σ_{i=1}^{n} Σ_{j=1}^{m} P_{ij}, where k is the number of running host nodes.

Gonzalez-Compean et al. [12] — Domain: Healthcare firm. Component validated or evaluated: Building blocks. Validation and evaluation method: Virtualizing layer. Authentication: N/A. Verification: Digital certificate. Mitigation: Database migration. Access control: Discretionary access control. Mathematical formula: MAD = Σ_{x,y} |I(x, y) − I′(x, y)|, where MAD is the fitness function of the cloud and I and I′ are the intensity values at the pixel (x, y) of the frame before and after embedding the watermark.

Wen et al. [11] — Domain: Cloud. Component validated or evaluated: Virtual machine monitoring; heuristic algorithm. Validation and evaluation method: Reachability algorithm. Authentication: N/A. Verification: OTP. Mitigation: Array-based block-level migration. Access control: Verification of the system operation. Mathematical formula: Tw_Total = max_{t_jk ∈ pred(t_k)} {ET(t_jk, r_1) + DT(t_jk, t_ik)}, where Tw_Total is the obtained makespan and t_ek is the kth instance of the exit task t_e.

Silva Filho et al. [13] — Domain: Cloud environment. Component validated or evaluated: Target host; profit-aware VM deployment framework. Validation and evaluation method: CloudSim framework. Authentication: N/A. Verification: N/A. Mitigation: Host-based file-level migration. Access control: N/A. Mathematical formula: P_j = (P_j^b − P_j^i) · Uc_j + P_j^i if PM j is active, and 0 otherwise, where P_j is the power consumption of PM j and P_j^b its power consumption when busy.
6 Discussion

In this section, the components of the AVM taxonomy that are not explicitly discussed in the selected publications are explored. Examples from the literature are used to determine whether the AVM components are present in data security risk mitigation on cloud platforms for storing data, and are therefore related to the taxonomy [11].
6.1 Authentication

Authentication is the process of identifying a user by credentials such as a username and password. In a cloud security system, authentication is distinct from authorization, which is the process of granting a user access to an object according to their identity. Authentication is largely used on a cloud server when the service needs to know the actual owner of the information and data [17]. Users can authenticate with cards, retina scans, fingerprints, and other factors. Mainly, authentication involves credentials such as a personal ID and password, so that the stored data can only be accessed by the authorized person [18]. The authors of [14] also describe business security, which supports the growth and development of a firm and likewise helps protect user data.
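A common way to implement the credential check described above is to store a salted hash of the password rather than the password itself. The sketch below is illustrative, not the mechanism of any paper surveyed here: it uses PBKDF2 from Python's standard library, and the iteration count, salt length, and function names are assumptions.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # assumed work factor

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; the plain password is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def authenticate(password, salt, stored_digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```

The constant-time comparison avoids leaking, via timing, how many digest bytes matched.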
6.2 Verification

The majority of the publications include a verification technique to verify the authenticity of the user. The system verifies the actual owner of the stored data by credentials, i.e., ID and password [19]. Continued research focuses on the advantages of these tools and techniques so that data security can be maintained. This component helps create assurance regarding the security of the virtual machines within the infrastructure of the cloud server.
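One concrete verification mechanism listed in the taxonomy is the one-time password (OTP). As an illustrative sketch, not taken from the surveyed papers, the HOTP construction of RFC 4226 derives a short code from a shared secret and a counter; the server verifies a submitted code by recomputing it:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 of the big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_otp(secret, counter, submitted):
    """Server-side check: recompute the expected code and compare in constant time."""
    return hmac.compare_digest(hotp(secret, counter), submitted)
```

Because the counter advances after each use, an intercepted code cannot be replayed.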
6.3 Mitigation

The selected publications help identify techniques that can be used to resolve the issue of data sharing with the cloud server. This component helps in detecting, diverting, filtering, and analyzing data in an ethical manner. The mitigation component is effective in protecting user data and information from attacks or third-party applications. It also describes the variation shown in the AVM taxonomy. It is necessary to include the mitigation component in the system during interaction because it protects the data directly from attacks; the accuracy and safety of user data are thereby enhanced and improved.
6.4 Data Security Scenario

A classification based on the type of data security is not explicitly described in the context of the security system. The notion of the data security scenario helps in considering authorized data directly from the cloud server, and it is based on the steps required to complete the process in a secure manner. This research focuses on handling authorized data and information on the cloud server [20]. It can be realised with a cloud model in which data authentication is required at every step of the work.
7 Conclusion

The above discussion concludes that virtual machine monitoring plays a substantial role in enhancing data security in the cloud. In this work, the AVM taxonomy, based on authentication, verification, and mitigation, has been described. Through it, system agents can easily monitor and identify the malicious activities that could affect the whole process. Cloud computing can be used in various sectors, such as organizations and banking, to store large amounts of data. In the literature review, various techniques and methods were identified that can be used to enhance the security measures of the system, and the remaining monitored objectives assist in checking the quality of scheduling and the efficiency of the cloud environment. This study highlights authentication, verification, and mitigation (AVM), which improve the security aspects of the cloud platform used to store data. The purpose of this work is to enhance the safety and security of users' data by minimizing the occurrence of data leakage or breaches in the system: it helps develop a system that identifies the authorized user and provides each user with credentials so that they can access their data without affecting other content. The purpose of the research is to use the AVM taxonomy for better storage of data and information; the credentials and authentication of the user aid in attaining this objective, and the taxonomy must be demonstrated to enhance the protection of users' data in a systematic manner. The proposed solution is effective in improving ideas in the domain of data security in the cloud. This research will
assist in providing a wide range of techniques and models for handling the data of a user. The characteristics of the authentication verification and mitigation model are justified by the autonomous multiple cloud machines for data security.
References

1. Subramanian N, Jeyaraj A (2018) Recent security challenges in cloud computing. Comput Electr Eng 71:28–42. https://doi.org/10.1016/J.COMPELECENG.2018.06.006
2. Grzonka D, Jakóbik A, Kołodziej J, Pllana S (2018) Using a multi-agent system and artificial intelligence for monitoring and improving the cloud performance and security. Future Gener Comput Syst 86:1106–1117. https://doi.org/10.1016/J.FUTURE.2017.05.046
3. Sun PJ (2020) Security and privacy protection in cloud computing: discussions and challenges. J Netw Comput Appl. https://doi.org/10.1016/j.jnca.2020.102642
4. Sharma PK, Singh S, Park JH (2018) OpCloudSec: open cloud software defined wireless network security for the Internet of Things. Comput Commun 122:1–8. https://doi.org/10.1016/J.COMCOM.2018.03.008
5. Fernández-Cerero D, Jakóbik A, Grzonka D, Kołodziej J, Fernández-Montes A (2018) Security supportive energy-aware scheduling and energy policies for cloud environments. J Parallel Distrib Comput 119:191–202. https://doi.org/10.1016/J.JPDC.2018.04.015
6. Zhang Y, Deng R, Liu X, Zheng D (2018) Outsourcing service fair payment based on blockchain and its applications in cloud computing. IEEE Trans Serv Comput 1–1. https://doi.org/10.1109/TSC.2018.2864191
7. Namasudra S, Devi D, Kadry S, Sundarasekar R, Shanthini A (2020) Towards DNA based data security in the cloud computing environment. Comput Commun. https://doi.org/10.1016/J.COMCOM.2019.12.041
8. Modi C, Patel D (2018) A feasible approach to intrusion detection in virtual network layer of Cloud computing. Sādhanā 43(7):114. https://doi.org/10.1007/s12046-018-0910-2
9. Toosi AN, Sinnott RO, Buyya R (2018) Resource provisioning for data-intensive applications with deadline constraints on hybrid clouds using Aneka. Future Gener Comput Syst 79(Part 2):765–775. ISSN 0167-739X. https://doi.org/10.1016/j.future.2017.05.042
10. Yang C-T, Chen S-T, Den W, Wang Y-T, Kristiani E (2018) Implementation of an intelligent indoor environmental monitoring and management system in the cloud. Futur Gener Comput Syst. https://doi.org/10.1016/J.FUTURE.2018.02.041
11. Wen Y, Liu J, Dou W, Xu X, Cao B, Chen J (2018) Scheduling workflows with privacy protection constraints for big data applications on the cloud. Futur Gener Comput Syst. https://doi.org/10.1016/J.FUTURE.2018.03.028
12. Gonzalez-Compean JL, Sosa-Sosa V, Diaz-Perez A, Carretero J, Yanez-Sierra J (2018) Sacbe: a building block approach for constructing efficient and flexible end-to-end cloud storage. J Syst Softw 135:143–156. https://doi.org/10.1016/J.JSS.2017.10.004
13. Silva Filho MC, Monteiro CC, Inácio PRM, Freire MM (2018) Approaches for optimizing virtual machine placement and migration in cloud environments: a survey. J Parallel Distrib Comput 111:222–250. https://doi.org/10.1016/J.JPDC.2017.08.010
14. Ficco M (2018) Could emerging fraudulent energy consumption attacks make the cloud infrastructure costs unsustainable? Inf Sci. https://doi.org/10.1016/J.INS.2018.05.029
15. Li X, Yuan J, Ma H, Yao W (2018) Fast and parallel trust computing scheme based on big data analysis for collaboration cloud service. IEEE Trans Inf Forensics Secur 13(8):1917–1931. https://doi.org/10.1109/TIFS.2018.2806925
16. Yang R, Xu Q, Au MH, Yu Z, Wang H, Zhou L (2018) Position based cryptography with location privacy: a step for fog computing. Futur Gener Comput Syst 78:799–806. https://doi.org/10.1016/J.FUTURE.2017.05.035
17. Ooi K-B, Lee V-H, Tan GW-H, Hew T-S, Hew J-J (2018) Cloud computing in manufacturing: the next industrial revolution in Malaysia? Expert Syst Appl 93:376–394. https://doi.org/10.1016/J.ESWA.2017.10.009
18. Xu L, Weng C-Y, Yuan L-P, Wu M-E, Tso R, Sun H-M (2018) A shareable keyword search over encrypted data in cloud computing. J Supercomput 74(3):1001–1023. https://doi.org/10.1007/s11227-015-1515-8
19. Xue K, Hong J, Ma Y, Wei DSL, Hong P, Yu N (2018) Fog-aided verifiable privacy preserving access control for latency-sensitive data sharing in vehicular cloud computing. IEEE Netw 32(3):7–13. https://doi.org/10.1109/MNET.2018.1700341
20. Chen Z, Fu A, Xiao K, Su M, Yu Y, Wang Y (2018) Secure and verifiable outsourcing of large-scale matrix inversion without precondition in cloud computing. In: 2018 IEEE international conference on communications (ICC). IEEE, pp 1–6. https://doi.org/10.1109/ICC.2018.8422326
Synergy of Signal Processing, AI, and ML for Systems
Analysis of Knocking Potential Based on Vibration on a Gasoline Engine for Pertalite and Pertamax Turbo Using Signal Processing Methods Munzir Qadri and Winda Astuti
Abstract Knocking is one of the problems in internal combustion engines with many influencing factors, one of which is the use of a fuel whose octane value does not match the recommended use or the engine specifications. Knocking can lead to poor engine performance and a shortened engine lifetime. To prevent this from happening, a study was conducted on how the octane rating can affect the knocking of an engine, by comparing two types of fuel with different octane ratings. The most appropriate signal processing method for this analysis was also determined. The engine speed was predetermined in each experiment. The achieved results show that the engine has the potential to knock when using Pertalite-type fuel, caused by misfires. Signal processing is carried out using the Fast Fourier Transform (FFT) and Low Pass Filter (LPF) methods to obtain clear data.

Keywords Knocking · Vibration · Octane rating · Gasoline · Signal processing
1 Introduction

Technological developments from year to year have had a very large impact on people's work and daily life, especially with the rapid development of Industry 4.0. The automotive industry is one of the industries with the fastest technological development. In this sophisticated era, in which everything is computerized, every aspect of a vehicle is controlled by the Engine Control Unit (ECU). One of the functions of the ECU is to identify problems or damage to the vehicle engine [1].

A gasoline engine is an internal combustion engine that works on a four-stroke cycle using gasoline and an ignition system that uses spark plugs. This type of engine is quite powerful and light, but its fuel efficiency needs to be continuously improved,

M. Qadri (B) · W. Astuti
Automotive and Robotics Program, Computer Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_21
namely by increasing the compression ratio and fuel quality. However, continuous efforts to increase the power and efficiency of a four-stroke combustion engine are always hampered by the occurrence of detonation, more commonly known as knocking [2]. Engine knocking is a problem that causes unnatural vibrations [3]. Selecting a fuel suited to the engine's compression makes engine performance much more optimal and reduces the risk of engine knocking; knocking can also occur due to the selection of a fuel that does not match the specifications of the engine used [4]. The fuel used by a combustion engine must meet criteria covering both physical and chemical properties, including the heating value of the fuel itself, high energy density, non-toxicity, heat stability, low pollution, and ease of use and storage [5]. However, most users of gasoline-engine vehicles in Indonesia choose to economize by using subsidized fuel, so the octane value used is sometimes lower than the manufacturer's recommendation and does not meet the minimum requirements of the engine specifications.

The main requirements that must be met by gasoline used in combustion engines are that the combustion process in the cylinder must be as fast and as hot as possible, that it leaves no deposits after the combustion process (deposits can damage the cylinder walls), and that the combustion gases must be harmless when released into the atmosphere. The main properties of gasoline include: volatility at normal temperature, the ability to dissolve oil, resistance to knocking, a low burning point, and a small amount of residual combustion carbon [6]. The octane value is a number that indicates the resistance of a fuel to detonation, commonly known as knocking. The octane number of a fuel is measured with a Cooperative Fuel Research (CFR) engine, a test engine whose compression ratio can be varied.
This measurement requires setting standard operating conditions, which can include rotation speed, temperature, pressure, humidity of the intake air, and so on. A fuel with a high octane value is less likely to cause knocking in the engine [6]. Therefore, it is necessary to conduct research on how to detect knocking in gasoline-fueled car engines, based on the vibrations they cause, using signal processing methods in the MATLAB software application. The main focus of this study is the effect of the Pertalite 90 and Pertamax Turbo 98 fuel types, which have different octane values, on the potential for knocking in gasoline car engines, as well as the effect of different engine speeds on that potential when using these two fuels. The K3-VE engine used for this research is a teaching aid for practical needs, operated under no-load conditions, i.e., without a transmission or similar loads. A dynamometer or other important and accurate sensors cannot be connected without loading, so it is difficult to obtain data related to the performance of the K3-VE engine; further analysis of engine performance is therefore not possible. This study aims to determine an effective signal processing method for detecting knocking that occurs in gasoline-fueled car engines based on the vibrations they cause
in the MATLAB software application, to determine the type of gasoline fuel that has the most potential to cause knocking in the K3-VE gasoline engine, and to analyze the effect of different engine speeds on the potential for knocking in a gasoline-fueled car engine. It is hoped that this research can be useful for the development of knocking-detection analysis and become a theoretical basis that can be developed further.
2 Method

2.1 Measurements

The equipment and materials needed to collect and process the data before analysis are listed in Table 1.

K3-VE Gasoline Engine. This is a four-stroke engine with four inline cylinders and a capacity of 1,297 cc. The set-up of this engine does not include the loads that are normally coupled to an engine. The engine is installed in a closed room, as shown in Fig. 1, where the air temperature is regulated by an air conditioner [7].

Knock Sensor. The knock sensor shown in Fig. 2 is one of the sensors used in this study and serves as a detector of knock, i.e., the unnatural vibrations that occur due to improper combustion timing. The knock sensor instructs the vehicle's ECU to advance or retard the ignition timing by several degrees when knocking occurs. To detect knocking during engine operation, this sensor uses a piezoelectric element that sends an output signal proportional to the vibration occurring in the engine [8]. In this study, the output signal read by the knock sensor is acquired and interpreted by the oscilloscope through the measuring point located on the sensor management panel installed on the K3-VE engine set-up.

RIGOL DS1054Z (Oscilloscope). This oscilloscope, shown in Fig. 3, is used to capture and decode the output signal produced by the knock sensor into a .csv file. The result is then matched against the readings shown by the OBD-II scanner.

Table 1 Equipment and materials used in this research
| No. | Equipment and materials |
|---|---|
| 1 | K3-VE gasoline engine |
| 2 | Knock sensor |
| 3 | Autel Maxisys MS906 (OBD-II scanner) |
| 4 | RIGOL DS1054Z (oscilloscope) |
| 5 | Software (MATLAB) |
| 6 | Gasoline RON 90 & 92 |
Fig. 1 K3-VE gasoline engine
Fig. 2 Sensor management on K3-VE engine
Fig. 3 RIGOL DS1054Z (oscilloscope)
2.2 Signal Processing

As mentioned, the objective of this work is to analyze the characteristics of knocking based on engine vibration. A flowchart of the proposed system is shown in Fig. 4. First, the raw vibration data must be read properly. The raw data, which contain the vibration information of the engine, are loaded into MATLAB in the time domain, with time (s) on the x-axis and amplitude on the y-axis; the data were recorded for 6 s as 1,200 samples. The time-domain data are then segmented into 200-sample frames, resulting in six segments of data. Each segment is multiplied by a window function, whose purpose is to reduce the effect of spectral leakage. In this work, the Hamming window is applied, since it offers good frequency resolution and fair spectral-leakage suppression. The Fast Fourier Transform (FFT) is then applied to the segmented, windowed data according to the formula in Eq. (2). The FFT data are used to determine the cut-off frequency of the Low Pass Filter (LPF), which is applied to the segmented, windowed data. After filtering, the last step is converting the filtered signal from the frequency domain back into the time domain for further analysis.

Fig. 4 Flow chart of the knocking recognition system

Windowing of the signal. The data are divided into segments of equal length. The purpose of segmentation is to maintain the stationarity of the segmented signal, so that the Fourier Transform can be applied. Each segmented frame is multiplied by a window function to reduce the effect of spectral leakage. The Hamming window is applied in this work; its formula is shown in Eq. 1:

w(k + 1) = 0.54 − 0.46 · cos(2πk/(n − 1)), k = 0, 1, 2, …, n − 1    (1)
where k is the index of a sample within the segment and n is the total number of samples in the segment.

Fast Fourier Transformation Technique. The feature extraction process uses the Fast Fourier Transform (FFT) [9]. The FFT decomposes an input 1-D signal into real and imaginary components, which are a representation of the signal in the frequency domain [10]. Each point in the transform corresponds to a frequency in the Fourier (frequency) domain. For an input signal of length N, the FFT is given by [11], as shown in Eq. 2:

X(k) = (1/N) Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N}, k = 0, 1, …, N − 1    (2)
where k is the frequency-bin index and N is the number of samples in the segment. The FFT algorithm is applied to the windowed signal, and a logarithmic plot of the FFT, referred to as the spectrum, is obtained. If the log-amplitude spectrum contains many regularly spaced harmonics, the FFT spectral estimates will show peaks corresponding to the spacing between the harmonics.

Low Pass Filter Method. The pre-processing depends on the feature to be obtained. For frequency-based features, only the windowing process is needed; for LPC feature extraction, on the other hand, windowing, filtering, and framing are required. The objective of pre-emphasis filtering is to spectrally flatten the signal so that it is less susceptible to finite-precision effects at a later stage. The digital vibration signal s(n) is captured by an analog-to-digital converter (ADC) at sampling frequency fs. The signal is then filtered by a first-order FIR filter, whose transfer function is

H(z) = 1 − αz^(−1)    (3)
where α is the filter coefficient and reflects the degree of pre-emphasis. The pre-emphasized signal is split into equal frames of length N. In this system the sampling frequency is 200 Hz (1,200 samples recorded over 6 s), so the typical value of N is 200, corresponding to a frame length of 1 s. The third step is windowing; its objective is to avoid discontinuities at the beginning and the end of each frame.
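The processing steps described above (pre-emphasis, framing, Hamming windowing, FFT, and low-pass filtering) can be sketched as follows. This is an illustrative NumPy reimplementation, not the authors' MATLAB code: the pre-emphasis coefficient α = 0.95 and the ideal (brick-wall) low-pass filter are assumptions, while the 200 Hz sampling rate follows from the 1,200 samples recorded over 6 s, and the synthetic 50 Hz tone stands in for the knock-sensor trace.

```python
import numpy as np

FS = 200        # sampling frequency in Hz (1,200 samples over 6 s)
FRAME = 200     # samples per frame, i.e. six 1 s segments
ALPHA = 0.95    # pre-emphasis coefficient (assumed value)

def pre_emphasis(x, alpha=ALPHA):
    # First-order FIR filter H(z) = 1 - alpha * z^-1  (Eq. 3)
    y = x.astype(float).copy()
    y[1:] = x[1:] - alpha * x[:-1]
    return y

def segment(x, frame_len=FRAME):
    # Split the signal into equal, non-overlapping frames
    n_frames = len(x) // frame_len
    return x[:n_frames * frame_len].reshape(n_frames, frame_len)

def hamming_window(n):
    # w(k+1) = 0.54 - 0.46 * cos(2*pi*k/(n-1)), k = 0..n-1  (Eq. 1)
    k = np.arange(n)
    return 0.54 - 0.46 * np.cos(2 * np.pi * k / (n - 1))

def spectrum(frame):
    # Normalised DFT magnitude of one windowed frame  (Eq. 2)
    windowed = frame * hamming_window(len(frame))
    return np.abs(np.fft.fft(windowed)) / len(frame)

def ideal_lowpass(x, cutoff_hz, fs=FS):
    # Zero every spectral bin above the cut-off, then return to the time domain
    X = np.fft.fft(x)
    freqs = np.abs(np.fft.fftfreq(len(x), d=1.0 / fs))
    X[freqs > cutoff_hz] = 0.0
    return np.real(np.fft.ifft(X))

# Synthetic stand-in for the knock-sensor trace: a pure 50 Hz tone
t = np.arange(6 * FS) / FS
signal = np.sin(2 * np.pi * 50 * t)
frames = segment(pre_emphasis(signal))
mag = spectrum(frames[0])   # with FRAME = FS, bin k corresponds to k Hz
```

With a 1 s frame at 200 Hz the frequency resolution is 1 Hz, so a dominant 50 Hz component appears as a peak at bin 50, mirroring the peak reported in the results below.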
3 Result and Discussion

3.1 FFT-Based Analysis

Based on the data shown in Fig. 5, the peak of the FFT amplitude indicates strong vibration generated at a frequency of 50 Hz. This peak is caused by the magnitude of the wave present in the filtered raw vibration data. The Pertalite data show a high vibration value at a significant frequency of 50 Hz, based on a sampling frequency of 200 Hz. The FFT graph is tall and slender because of the high wave fluctuations and amplitude. The high amplitude for Pertalite can be caused by an effective compression value that is lower than it should be, considering that the ECU holds an unstable advanced-ignition value on Pertalite and continually adjusts it so that misfire does not occur. From these irregular data, it can be concluded that non-uniform combustion causes misfire during combustion.

Fig. 5 a FFT-based signal for Pertalite RON 90. b FFT-based signal for Pertamax Turbo RON 98

Pertamax Turbo has data that tend toward low amplitude at a frequency of 50 Hz with a sampling frequency of 200 Hz, very different from the amplitude values in the Pertalite data. A very low amplitude indicates that the vibrations occurring in the engine have much smaller waves. This minimal vibration reflects more uniform combustion than with Pertalite: uniform combustion makes the piston move smoothly, resulting in minimal vibration in the combustion-chamber area, and the knock sensor then captures these low vibrations. Therefore, there is no potential for misfire with Pertamax Turbo during combustion.
3.2 Time-Domain-Based Analysis

The time-domain graphs in Fig. 6 show waves whose amplitude indicates the magnitude of the vibration detected by the knock sensor, together with the time span over which each vibration wave occurs. Through the filtering process, clear vibration-wave graphs and the largest-magnitude waves are obtained. Because the knock sensor of the K3-VE engine is located at the top of the cylinder, the vibrations read most strongly are the largest, dominant vibrations produced by the combustion process in the combustion chamber. Based on the data in Fig. 6, the vibration wave has a very large amplitude range for Pertalite fuel, at 0.5–4 V, which indicates large vibration in the combustion chamber. The large vibration can be caused by incomplete combustion, in which the piston moves with vibration due to uneven, non-uniform combustion. In the Pertamax Turbo data, the amplitudes are much lower than for Pertalite: the amplitude range captured by the knock sensor is 1.5–3 V. This indicates that the vibration for Pertamax Turbo is minimal, a condition that also makes Pertamax Turbo less likely to misfire. The minimal vibration is produced by uniform combustion, which minimizes vibration in the combustion chamber. The waveform from the FFT result likewise shows a very small, slightly wavy value; this time-domain result confirms that the subtle vibrations in the FFT are related to the wave magnitudes in the time-domain graph.
4 Conclusion
The signal processing method is helpful in identifying knocking potential. However, the use of more accurate sensors and equipment would give better results.
Analysis of Knocking Potential Based on Vibration on a Gasoline …
Fig. 6 a Time-domain-based Signal for Pertalite RON 90. b Time-domain-based Signal for Pertamax Turbo RON 98
The potential for knocking appears on the K3-VE engine when using gasoline with an octane rating below the recommended engine specification. Pertalite, with an octane rating of 90, can lead to knocking on the K3-VE engine due to engine misfiring. Using fuel with an octane rating at or above the manufacturer's recommendation (matched to the engine's compression specification) is one way to prevent knocking and can reduce the risk of damage to the engine's internals.
M. Qadri and W. Astuti
AI-Based Video Analysis for Driver Fatigue Detection: A Literature Review on Underlying Datasets, Labelling, and Alertness Level Classification
Dedy Ariansyah, Reza Rahutomo, Gregorius Natanael Elwirehardja, Faisal Asadi, and Bens Pardamean
Abstract Reduced alertness due to fatigue or drowsiness is a major cause of road accidents globally. To minimize the likelihood of alertness-reduction-related crashes, video-based detection has emerged as a non-intrusive method that can offer high accuracy for countermeasure development. Many Artificial Intelligence (AI) techniques have been developed to aid video analysis for fatigue detection. However, the issues in AI-based video analysis for fatigue countermeasure development are rarely discussed. This paper reviews current video-based countermeasure development in the literature and highlights reliability issues of AI-based fatigue countermeasure development in terms of underlying datasets, labelling methods, and alertness level classification. The results serve as insights and considerations for academics and industry practitioners in developing highly reliable AI-based video analysis for driver fatigue countermeasures.
Keywords Driver fatigue detection · Drowsy · Artificial intelligence · Deep learning
1 Introduction
Fatigue and drowsiness are a major cause of road accidents globally. Many studies have shown that healthy drivers were aware of decreased alertness while driving, and that deciding to stop driving may reduce the likelihood of road crashes [1, 2]. Although driving-duration restrictions are in place in many countries and roadside billboards remind drivers to take a break, studies show the need for a more proactive approach rather than merely reminding
D. Ariansyah (B) · R. Rahutomo · G. N. Elwirehardja · F. Asadi · B. Pardamean
Bioinformatics and Data Science Research Centre, Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]
B. Pardamean
Computer Science Department, BINUS Graduate Program Master of Computer Science Program, Bina Nusantara University, Jakarta 11480, Indonesia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_22
D. Ariansyah et al.
the drivers [1, 2]. Fatigue countermeasures such as fatigue or drowsiness detection are widely developed as part of in-vehicle technology to alert drivers when their level of sleepiness increases. This approach is promising in shifting the current countermeasures from passive to proactive through monitoring and control. Among various fatigue prediction techniques, video analysis is known to be both effective and non-intrusive [3]. It has also been applied in various fields and has proven to be a recommendable approach, for example in building management, remote detection of objects [4], and knowledge management systems [5]. The method leverages Computer Vision (CV) algorithms and Artificial Intelligence (AI) to detect the decrement of a driver's alertness through facial features such as yawning, percentage of eye closure (PERCLOS), blinking, body posture, and so on [6, 7]. At present, the video-based approach is arguably the most viable single type of measure owing to its non-intrusiveness, low cost, and high accuracy [8, 9]. However, reliability issues of this approach are rarely investigated. This paper addresses the following research questions: (1) how reliable is current AI-based video analysis for fatigue detection in terms of underlying datasets, labelling methods, and alertness level classification, and (2) what can be done to improve the reliability of this approach? The next section describes the methodology of this study. The results of the review, along with the discussion and recommendations, are presented in Sect. 3. The last part of this paper concludes the review.
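To make one of these facial features concrete: PERCLOS is typically computed as the proportion of frames within a time window in which the eyes are (near-)closed. The sketch below is illustrative only; the eye-aspect-ratio input, the 0.2 closure threshold, and the window length are assumptions, not values taken from any reviewed system.

```python
from collections import deque

CLOSED_EAR_THRESHOLD = 0.2  # eye-aspect ratio below this counts as "closed" (assumed)

def perclos(ear_values, fps=30, window_s=60):
    """PERCLOS: fraction of 'closed' frames over the last `window_s` seconds
    of per-frame eye-aspect-ratio values."""
    window = deque(maxlen=fps * window_s)  # sliding window of closed/open flags
    for ear in ear_values:
        window.append(ear < CLOSED_EAR_THRESHOLD)
    if not window:
        return 0.0
    return sum(window) / len(window)

# 30 s of frames for an alert driver (brief blinks) vs a drowsy one (long closures).
alert = ([0.3] * 28 + [0.1] * 2) * 30    # eyes closed in ~7% of frames
drowsy = ([0.3] * 15 + [0.1] * 15) * 30  # eyes closed in 50% of frames
print(perclos(alert), perclos(drowsy))
```

A real system would obtain the eye-aspect ratio per frame from facial landmarks detected by a CV model; here the values are hard-coded to keep the sketch self-contained.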
2 Methodology
This paper reviews related journal and conference articles published from 2017 to 2022. Table 1 summarizes the methodology of this paper for acquiring research articles in terms of database, search strings, article types, search period, and screening procedure. The article search was carried out by three researchers using combinations of different search strings in the selected databases, which returned more than 100 articles. To select only relevant articles for review, these articles were further screened by abstract, methodology, and conclusion. The selected articles had to discuss the use of AI methods in computer vision for detecting driver fatigue/drowsiness. This screening narrowed the results down to 25 research articles to be included in the review.
3 Results and Discussion
This section presents the results of the reviewed articles (Table 2) along with analysis addressing the research questions raised in this paper:
Q1: how reliable is current video-based fatigue detection in terms of underlying datasets, alertness reduction classification, and ground-truth labelling?
AI-Based Video Analysis for Driver Fatigue Detection: A Literature …
Table 1 Methodology on acquiring and screening articles
Searching index | Specific content
Database | ScienceDirect, Scopus, IEEE Xplore, and Google Scholar
Article type | Scientific/technical articles published in journals and conferences
Search strings | “Driver Fatigue Level”, “Computer vision”, “Driver Fatigue”, “Artificial Intelligence”, “Drowsy”, “Drowsiness”
Language | English
Search period | 2017–2021
Screening procedure | Research in computer vision AI
Q2: what can be done to improve the reliability of AI-based video analysis for fatigue countermeasure development?
3.1 Current Reliability Issues of Video-Based Fatigue Detection
Underlying Dataset. From the reviewed articles (Fig. 1a), there are generally five categories of datasets used for the development of video-based driver fatigue/drowsiness detection:
1. Acted Dataset. This dataset consists of emulated facial features of fatigue and drowsiness such as yawning, nodding, and sleepiness. Data is typically recorded with subjects sitting in a simulated driving situation. Examples include NTHU-DDD [32] and YawDD [7]. Sixteen of the twenty-five reviewed studies used an acted dataset for their fatigue detection model, making it the most frequently used dataset type. Its advantage is the wide range of driving conditions it provides, such as a high number of subjects with different ethnicities, ages, and genders, driving with/without glasses, and different illumination conditions. Nevertheless, it is unclear whether it can reliably represent naturally fatigued/drowsy drivers and be used in real-life settings, as the emulated expressions may not reveal the physiological and psychological concomitants that are important for early fatigue detection [33, 34].
2. Non-driving Conditioned Dataset. This dataset is typically collected without performing driving tasks. Unlike an acted dataset, a non-driving conditioned dataset may show observable characteristics closer to those of genuinely fatigued and drowsy drivers, as it involves subjects who are fatigued or drowsy due to a prolonged waking state. One example of a publicly available conditioned dataset is [35]; only two of the reviewed studies used this type of dataset for a driver fatigue detection model, making it the least frequently used type. Although it is unknown why it is less used for fatigue detection modelling, one reason could be that it is more suitable for the development of
Table 2 The result of the reviewed studies
Reference | Dataset (N = subject) | Dataset type | Ground-truth fatigue labeling method | Assessed fatigue levels
[10] | NTHU-DDD (N = 36) | Acted | Provided in dataset | Normal, drowsy
[11] | NTHU-DDD (N = 36); CEW (N = 2423) | Acted; Non-fatigue related | Provided in dataset; Arbitrary threshold | Normal, early fatigue, falling asleep
[7] | YawDD (N = 107) | Acted | Arbitrary threshold | Alert, early fatigue, fatigue
[6] | Self-collect (N = 15) | Acted | PERCLOS threshold | Normal, fatigue
[12] | NTHU-DDD (N = 36) | Acted | Provided in dataset | Non-drowsy, drowsy
[9] | Self-collect (N = 21) | Driving simulator | Hybrid (subjective assessment, observer ratings, and driving performance) | Fatigue level (0–5)
[13] | Self-collect (N = 13) | Real-driving (acted fatigue) | Arbitrary threshold | Normal, fatigue
[14] | WFLW; 300-W challenge; Helen; NTHU (N = 36) | Non-fatigue related; Non-fatigue related; Non-fatigue related; Acted | Arbitrary threshold; Provided in dataset | Normal, fatigue
[15] | Self-collect (N = 6) | Real-driving (acted) | Arbitrary threshold | Normal, fatigue
[16] | UTA-RLD (N = 60); YawDD (N = 30); ZJU (N = 20) | Non-driving conditioned; Acted; Non-fatigue related | Provided in dataset; N/A (ZJU) | Approach 1: awake, drowsy; Approach 2: low, medium, high drowsiness
[17] | NTHU-DDD (N = 36) | Acted | Provided in dataset | Normal, talking, sleepy-nodding, yawning
[18] | YawDD (N = 107); FDF (N = 16); LFPW | Acted; Acted; Non-fatigue related | Provided in dataset; N/A (LFPW) | Normal, fatigue
[19] | Self-collect; 300-W; AFW; HELEN; LFPW; IBUG | Acted; Non-fatigue related (others) | Provided in dataset; N/A | Normal, fatigue
[20] | Self-collect | Driving simulator | Subjective assessment (KSS) | Awake, mild fatigue, deep fatigue
[21] | MRL eye (N = 37); OuluVS2 (N = 53); YawDD (N = 107) | Non-fatigue related; Non-fatigue related; Acted | Arbitrary threshold | Low drowsy, medium drowsy, high drowsy
[22] | NTHU-DD (N = 36) | Acted | Provided in dataset | Awake, distracted, fatigue
[23] | NTHU-DD (N = 36); Cohn-Kanade | Acted; Non-fatigue related | Arbitrary threshold | Normal, fatigue, drunken, reckless
[24] | Self-collect | Non-driving conditioned | Hybrid (subjective assessment, physiological data) | Normal, drowsy
[25] | Self-collect (N = 20); ZJU (N = 20) | Acted; Non-fatigue related | PERCLOS threshold | Normal, fatigue
[26] | WIDER FACE | Acted | PERCLOS & POM threshold | Normal, fatigue
[27] | Self-collect (N = 10) | Real-driving | Subjective assessment | Sober, slight fatigue, severe fatigue, extreme fatigue
[28] | Self-collect | Driving simulator | N/A | Normal, fatigue, visual inattention, cognitive inattention
[29] | Self-collect (N = 10) | Real driving (acted) | Provided in dataset | Normal, drowsy
[30] | NTHU-DDD (N = 36) | Acted | Provided in dataset | Drowsy, non-drowsy
[31] | Self-collect (N = 4) | Driving simulator | Provided in the dataset | Normal, yawning, eyes closed, eyes closed while yawning
Fig. 1 The number of times each type of dataset (a), labeling method (b) and alertness level classification (c) were used in the reviewed studies (N = 25)
fatigue prediction model under automated driving rather than manual driving [34]. This is mainly because some physiological measures of fatigue, such as blinking, can differ between non-driving and driving tasks. As indicated in previous studies, blink frequency and eye closures can vary depending on the demand of the visual stimulus [36, 37].
3. Driving Simulator Dataset. This dataset is collected through a driving simulator. Fatigue and drowsiness are usually induced through monotonous, prolonged simulated driving, either conditioned (e.g. sleep deprived) or unconditioned (e.g. alert condition). Four of the reviewed studies used a driving simulator dataset as the basis for fatigue modelling. Compared to acted and non-driving conditioned datasets, a driving simulator may present a better proxy for studying and developing driver fatigue prediction models. Collecting fatigue datasets in a driving simulator is usually preferred over real driving because of safety risks, cost savings, and ease of data collection. Nevertheless, driving simulators can induce simulator sickness in some people, and a lack of simulator fidelity may invoke unrealistic behaviour related to fatigue, which results in invalid datasets for fatigue detection modelling [38, 39].
4. Real Driving Dataset. This dataset is collected in real-life driving, either on a predefined track (e.g. with control of the traffic) or in naturalistic driving (e.g. without control of the traffic), with or without acted expressions. The use of a real driving dataset makes a fatigue detection model more reliable than simulated
driving. Nevertheless, ensuring usable, high-quality data from real driving is more difficult and expensive; measures are needed to control unexpected events that may compromise the usability of the data. Besides, unlike in simulated driving, inducing fatigue in the driver takes more time [38] and is dangerous. Among the reviewed studies, three collected drivers' video with acted fatigue, and one collected data under long hours of driving.
5. Non-fatigue Related Dataset. This dataset consists of various facial expressions, blinking behaviours, mouth movements, and so on that are not related to fatigue. Seven studies used this type of dataset for AI training. Although high accuracy can be achieved, this dataset suffers from alertness-level labelling that may not be accurate when used to detect genuinely fatigued drivers.
Labelling Method. From Fig. 1b, most of the reviewed studies (44%) relied on the ground truth provided in the dataset for developing the fatigue detection model. Other studies (28%) used arbitrary thresholds, and a few (12%) adopted a PERCLOS threshold to classify different levels of alertness. Only a small number of studies (8%) used subjective assessment (i.e., the Karolinska Sleepiness Scale) or a hybrid approach (i.e., observer ratings combined with other measures) to establish the ground truth for the fatigue detection system. The labelling method 'provided in dataset' involves data collection in which video is recorded with predefined instructions to emulate various traits of different alertness levels. This 'acted condition' as the ground truth might not capture the actual transition of alertness reduction, as some micro-expressions related to alertness decrement are difficult to emulate [40].
The use of threshold values could discriminate different levels of alertness more reliably based on multiple fatigue indicators (e.g., PERCLOS, head posture, mouth movement, and so on). For obtaining representative thresholds, observer ratings are usually more reliable than subjective assessment, whose accuracy is questionable because of its subjective nature and its potential to induce an alerting effect in the driver [41]. A hybrid approach may be the most reliable way to define the ground truth; however, it requires a more complex setup and a multi-disciplinary approach to establish a valid ground truth.
Alertness Reduction Classification. From Fig. 1c, 56% of the studies on fatigue or drowsiness prediction using video-based analysis modelled the transition from wakefulness to drowsiness as binary states, from "alert/awake/non-fatigue/non-drowsy" to "fatigue/drowsy", without distinguishing fatigue from drowsiness. This has at least two important drawbacks in real-life settings. Firstly, if the fatigue/drowsiness state is identified too early by the system, drivers may lose confidence in its capability. Conversely, if it is predicted too late, the driver's alertness may already have been drastically reduced, and an accident may be unavoidable if the driver trusts the system and continues to drive [11, 42]. Secondly, the alert delivered by the system is ambiguous between fatigue and drowsiness and can therefore lead the driver to counteract the symptom incorrectly, as treating fatigue differs from treating drowsiness [43]. As everyday experience attests, rest and inactivity relieve fatigue but make drowsiness worse [44]. Moreover, there were 28% of studies
that included the detection of the transition between wakefulness and drowsiness/fatigue, such as "early fatigue/drowsy" or "mild fatigue/drowsy". Although the transition stage was considered in some studies to allow early identification of decreased alertness, many studies used the ground truth provided with the dataset and arbitrary values for classifying decreasing alertness. As a result, although some approaches achieve high classification accuracy, it is uncertain whether the distinction between fatigue and drowsiness is properly captured.
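The gap between binary and graded classification can be illustrated with a toy threshold rule (the PERCLOS cut-offs of 0.15 and 0.40 below are arbitrary illustrations, not values from the reviewed studies): a graded scheme retains an "early fatigue" band that a binary alert/drowsy split collapses away, which is what enables earlier warnings.

```python
def binary_state(perclos: float) -> str:
    # Single cut-off: no early-warning band at all.
    return "drowsy" if perclos >= 0.40 else "alert"

def graded_state(perclos: float) -> str:
    # Illustrative multi-level cut-offs; a real system would calibrate these
    # against a validated ground truth rather than arbitrary thresholds.
    if perclos >= 0.40:
        return "drowsy"
    if perclos >= 0.15:
        return "early fatigue"
    return "alert"

for p in (0.05, 0.25, 0.55):
    print(f"PERCLOS={p:.2f}: binary={binary_state(p)}, graded={graded_state(p)}")
```

At PERCLOS = 0.25 the binary rule still reports "alert", while the graded rule already flags "early fatigue"; this intermediate band is exactly what the binary schemes criticized above cannot express.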
3.2 Recommendations to Improve the Reliability of AI-Based Video Analysis
From the reviewed studies, the NTHU-DDD dataset was the most utilized public dataset for fatigue detection research. It has been used in combination with self-collected data to help classification algorithms obtain more accurate detection. In addition, the YawDD dataset has been highly popular for yawn-based detection. In studies where these two datasets were used without being combined, the classification models generally achieved around 70–90% classification accuracy, albeit without system validation in real-life settings [7, 10, 12, 22]. However, combining such datasets with self-collected data generally allowed the AI models to perform better [11, 14], even when evaluated in a driving simulator [18]. To build highly reliable AI-based video analysis for fatigue detection, the use of NTHU-DDD and YawDD may not be sufficient. From the perspective of Technology Readiness Level (TRL), 'acted datasets' are more suitable for basic research (TRL 1, 2, and 3) because of their acted conditions, which may be one reason why most studies that utilized either the NTHU-DDD or YawDD datasets were typically oriented toward binary classification tasks and rarely reported system validation. Therefore, developing reliable AI-based fatigue detection systems may require customized, realistic datasets collected in more naturalistic settings, such as real driving or driving-simulator scenarios, while 'acted datasets' may be used to pre-train the AI models. In defining the ground truth, observer ratings seem to be a viable approach. Nevertheless, some issues could affect the reliability of this approach, such as: (1) the valid time interval for determining the ground truth [41] and (2) the granularity of fatigue levels captured by observer ratings [33].
To deal with these issues, this paper encourages the adoption of a hybrid approach that complements observer ratings with other measures. Physiological and behavioural data have been shown to be sensitive and discriminative to changes in driver alertness [45, 46]. Variations in these data can be associated with observer ratings to correct the classification of the driver's state and hence also improve its granularity. Furthermore, alertness level classification should discriminate between fatigue and drowsiness while also considering the onset of each state to give early warning to the driver. This has an important implication, as the crash rate could be further
reduced when early symptoms of reduced alertness are detected and communicated to the driver [11, 47].
4 Conclusion and Further Research
The lack of standardized datasets makes fatigue model comparison and improvement difficult. Currently, publicly available datasets are suitable only for initial model development due to their lack of representativeness of real-life driving. Future research should aim at establishing reliable datasets that consist of natural fatigue data and properly labelled ground truth with sufficient granularity.
References
1. Williamson A, Friswell R, Olivier J, Grzebieta R (2014) Are drivers aware of sleepiness and increasing crash risk while driving? Accid Anal Prev 70:225–234. https://doi.org/10.1016/j.aap.2014.04.007
2. Cai AWT, Manousakis JE, Lo TYT, Horne JA, Howard ME, Anderson C (2021) I think I’m sleepy, therefore I am—awareness of sleepiness while driving: a systematic review. https://doi.org/10.1016/j.smrv.2021.101533
3. Pardamean B, Muljo HH, Cenggoro TW, Chandra BJ, Rahutomo R (2019) Using transfer learning for smart building management system. J Big Data 6. https://doi.org/10.1186/s40537-019-0272-6
4. Pardamean B, Abid F, Cenggoro TW, Elwirehardja GN, Muljo HH (2022) Counting people inside a region-of-interest in CCTV footage with deep learning. PeerJ Comput Sci 8:e1067. https://doi.org/10.7717/peerj-cs.1067
5. Prabowo H, Cenggoro TW, Budiarto A, Perbangsa AS, Muljo HH, Pardamean B (2018) Utilizing mobile-based deep learning model for managing video in knowledge management system. Int J Interact Mob Technol 12:62–73. https://doi.org/10.3991/ijim.v12i6.8563
6. Mandal B, Li L, Wang GS, Lin J (2017) Towards detection of bus driver fatigue based on robust visual analysis of eye state. IEEE Trans Intell Transp Syst 18:545–557. https://doi.org/10.1109/TITS.2016.2582900
7. Kassem HA, Chowdhury MU, Abawajy J, Al-Sudani AR, Yawn based driver fatigue level prediction
8. Sikander G, Anwar S (2019) Driver fatigue detection systems: a review. IEEE Trans Intell Transp Syst 20:2339–2352. https://doi.org/10.1109/TITS.2018.2868499
9. Cheng Q, Wang W, Jiang X, Hou S, Qin Y (2019) Assessment of driver mental fatigue using facial landmarks. IEEE Access 7:150423–150434. https://doi.org/10.1109/ACCESS.2019.2947692
10. Ed-Doughmi Y, Idrissi N, Hbali Y (2020) Real-time system for driver fatigue detection based on a recurrent neuronal network. J Imaging 6. https://doi.org/10.3390/jimaging6030008
11. Kassem HA, Chowdhury M, Abawajy JH (2021) Drivers fatigue level prediction using facial, and head behavior information. IEEE Access 9:121686–121697. https://doi.org/10.1109/ACCESS.2021.3108561
12. Razzaq S, Ahmad MN, Hamayun MM, Ur Rahman A, Fraz MM (2018) A hybrid approach for fatigue detection and quantification. In: Proceedings of 2017 international multi-topic conference, INMIC 2017. Institute of Electrical and Electronics Engineers Inc., pp 1–7. https://doi.org/10.1109/INMIC.2017.8289472
13. Ma X, Chau L-P, Yap K-H, Ping G (2019) Convolutional three-stream network fusion for driver fatigue detection from infrared videos. In: IEEE international symposium on circuits and systems (ISCAS), pp 1–5
14. Zhuang Q, Kehua Z, Wang J, Chen Q (2020) Driver fatigue detection method based on eye states with pupil and iris segmentation. IEEE Access 8:173440–173449. https://doi.org/10.1109/ACCESS.2020.3025818
15. Wu J, da Chang CH (2022) Driver drowsiness detection and alert system development using object detection. Traitement du Signal 39:493–499. https://doi.org/10.18280/ts.390211
16. Magán E, Sesmero MP, Alonso-Weber JM, Sanchis A (2022) Driver drowsiness detection by applying deep learning techniques to sequences of images. Appl Sci (Switzerland) 12. https://doi.org/10.3390/app12031145
17. Xiang W, Wu X, Li C, Zhang W, Li F (2022) Driving fatigue detection based on the combination of multi-branch 3D-CNN and attention mechanism. Appl Sci (Switzerland) 12. https://doi.org/10.3390/app12094689
18. Chen J, Yan M, Zhu F, Xu J, Li H, Sun X (2022) Fatigue driving detection method based on combination of BP neural network and time cumulative effect. Sensors 22. https://doi.org/10.3390/s22134717
19. Wang Y, Liu B, Wang H (2022) Fatigue detection based on facial feature correction and fusion. In: Journal of physics: conference series. IOP Publishing Ltd. https://doi.org/10.1088/1742-6596/2183/1/012022
20. Li T, Zhang T, Zhang Y, Yang L (2022) Driver fatigue detection method based on human pose information entropy. J Adv Transp 2022. https://doi.org/10.1155/2022/7213841
21. Alkishri W, Abualkishik A, Al-Bahri M (2022) Enhanced image processing and fuzzy logic approach for optimizing driver drowsiness detection. Appl Comput Intell Soft Comput 2022. https://doi.org/10.1155/2022/9551203
22. Hueso E, Gutiérrez Reina D, Anber S, Alsaggaf W, Shalash W (2022) A hybrid driver fatigue and distraction detection model using AlexNet based on facial features. https://doi.org/10.3390/electronics
23. Varun Chand H, Karthikeyan J (2022) CNN based driver drowsiness detection system using emotion analysis. Intell Autom Soft Comput 31:717–728. https://doi.org/10.32604/iasc.2022.020008
24. Husain SS, Mir J, Anwar SM, Rafique W, Ullah MO (2022) Development and validation of a deep learning-based algorithm for drowsiness detection in facial photographs. Multimed Tools Appl 81:20425–20441. https://doi.org/10.1007/s11042-022-12433-x
25. Zhang F, Su J, Geng L, Xiao Z (2017) Driver fatigue detection based on eye state recognition. In: Proceedings—2017 international conference on machine vision and information technology, CMVIT 2017. Institute of Electrical and Electronics Engineers Inc., pp 105–110. https://doi.org/10.1109/CMVIT.2017.25
26. Mohana RS, Vidhya MS (2021) A real-time fatigue detection system using multi-task cascaded CNN model. In: 2021 10th IEEE international conference on communication systems and network technologies (CSNT). https://doi.org/10.1109/CSNT.2021.118
27. Liu MZ, Xu X, Hu J, Jiang QN (2022) Real time detection of driver fatigue based on CNN-LSTM. IET Image Process 16:576–595. https://doi.org/10.1049/ipr2.12373
28. Abbas Q, Ibrahim MEA, Khan S, Baig AR (2022) Hypo-driver: a multiview driver fatigue and distraction level detection system. Comput Mater Continua 71:1999–2017. https://doi.org/10.32604/cmc.2022.022553
29. Deng W, Wu R (2019) Real-time driver-drowsiness detection system using facial features. IEEE Access 7:118727–118738. https://doi.org/10.1109/ACCESS.2019.2936663
30. Chen S, Wang Z, Chen W (2021) Driver drowsiness estimation based on factorized bilinear feature fusion and a long-short-term recurrent convolutional network. Information (Switzerland) 12:1–15. https://doi.org/10.3390/info12010003
31. Karuppusamy NS, Kang BY (2020) Multimodal system to detect driver fatigue using EEG, gyroscope, and image processing. IEEE Access 8:129645–129667. https://doi.org/10.1109/ACCESS.2020.3009226
32. Chen CS, Lu J, Ma KK (2017) Driver drowsiness detection via a hierarchical temporal deep belief network. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Springer. https://doi.org/10.1007/978-3-319-54526-4
33. Kundinger T, Sofra N, Riener A (2020) Assessment of the potential of wrist-worn wearable sensors for driver drowsiness detection. Sensors (Switzerland) 20. https://doi.org/10.3390/s20041029
34. Vogelpohl T, Kühn M, Hummel T, Vollrath M (2019) Asleep at the automated wheel—sleepiness and fatigue during highly automated driving. Accid Anal Prev 126:70–84. https://doi.org/10.1016/j.aap.2018.03.013
35. Ghoddoosian R, Galib M, Athitsos V (2019) A realistic dataset and baseline temporal model for early drowsiness detection
36. Skotte JH, Nøjgaard JK, Jørgensen LV, Christensen KB, Sjøgaard G (2007) Eye blink frequency during different computer tasks quantified by electrooculography. Eur J Appl Physiol 99:113–119. https://doi.org/10.1007/s00421-006-0322-6
37. Crnovrsanin T, Wang Y, Ma KL (2014) Stimulating a blink: reduction of eye fatigue with visual stimulus. In: Conference on human factors in computing systems—proceedings. Association for Computing Machinery, pp 2055–2064. https://doi.org/10.1145/2556288.2557129
38. Fatigue sleepiness performance in simulated versus real driving condition
39. Meng F, Li S, Cao L, Peng Q, Li M, Wang C, Zhang W (2016) Designing fatigue warning systems: the perspective of professional drivers. Appl Ergon 53:122–130. https://doi.org/10.1016/j.apergo.2015.08.003
40. Tao K, Xie K, Wen C, He JB (2022) Multi-feature fusion prediction of fatigue driving based on improved optical flow algorithm. Signal Image Video Process. https://doi.org/10.1007/s11760-022-02242-y
41. Kundinger T, Mayr C, Riener A (2020) Towards a reliable ground truth for drowsiness: a complexity analysis on the example of driver fatigue. Proc ACM Hum Comput Interact 4. https://doi.org/10.1145/3394980
42. Zhang C, Wang H, Fu R (2014) Automated detection of driver fatigue based on entropy and complexity measures. IEEE Trans Intell Transp Syst 15:168–177. https://doi.org/10.1109/TITS.2013.2275192
43. May JF, Baldwin CL (2009) Driver fatigue: the importance of identifying causal factors of fatigue when considering detection and countermeasure technologies. Transp Res Part F Traffic Psychol Behav 12:218–224. https://doi.org/10.1016/j.trf.2008.11.005
44. Johns MW, Chapman R, Crowley K, Tucker A (2008) A new method for assessing the risks of drowsiness while driving. Somnologie-Schlafforschung und Schlafmedizin 12:66–74
45. Borghini G, Astolfi L, Vecchiato G, Mattia D, Babiloni F (2014) Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness. https://doi.org/10.1016/j.neubiorev.2012.10.003
46. Jung SJ, Shin HS, Chung WY (2014) Driver fatigue and drowsiness monitoring system with embedded electrocardiogram sensor on steering wheel. IET Intell Transport Syst 8:43–50. https://doi.org/10.1049/iet-its.2012.0032
47. Li Z, Chen L, Peng J, Wu Y (2017) Automatic detection of driver fatigue using driving operation information for transportation safety. Sensors (Switzerland) 17. https://doi.org/10.3390/s17061212
A Study of Information and Communications Technology Students e-Platform Choices as Technopreneur Lukas Tanutama and Albert Hardy
Abstract Information and Communication Technology (ICT) students are typically well positioned to become technopreneurs, as they generally offer technology-related solutions. The current state of technology is ICT driven and provides ample opportunities for aspiring entrepreneurs. This study examines ICT students' perception of the platforms best suited for launching their innovative products under limited resources. Although their resources are limited, they are trained in the STEM (Science, Technology, Engineering and Mathematics) body of knowledge, so it is assumed that their responses are based on their technical knowledge. The study obtained their choices through a Modified Focus Group Discussion (MFGD). The focus group consisted of Computer Science and Information Systems students, and the discussion took place in an open online forum where all of them could post their opinions. Two major themes framed their choice of platforms for introducing their solutions as entrepreneurs: the contemporary digital options of the marketplace and the Cloud, both touted as solutions for young entrepreneurs with limited resources. The study shows that young, ICT-knowledgeable technopreneurs choose the marketplace and the Cloud as their e-platforms. Providers of both platforms can tailor their services accordingly to improve service and performance.

Keywords Technopreneur · Market place · Cloud · Opportunity · Platform
1 Introduction

The number of internet users in Indonesia has exceeded 50% of the total population [1]. Average online shopping revenue per annum is in the range of millions of US dollars, and entrepreneurs number more than 3% of the total population.

L. Tanutama (B) · A. Hardy
Universitas Bina Nusantara, Jakarta 11480, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_23

The growth in the number of entrepreneurs is driven by the increase in the entrepreneurial
spirit of the younger generation [2]. ICT (Information and Communications Technology) students can be considered among the best candidates for exposure to entrepreneurship. Entrepreneurship in the ICT realm is often known as technopreneurship; in Indonesia it has become a trend in recent years and is dominated by the younger generation [3]. A technopreneur merges technology, innovation, and entrepreneurship, and technopreneurship centers on applying technology to transform good ideas into commercial initiatives with high added value. This study is concerned with how ICT students at a well-established private university in Indonesia perceive the marketplace and the cloud as platforms for launching their products and solutions. The university itself encourages and trains these ICT students to become technopreneurs. They are technically well equipped to analyze the technical environment that enables the digital economy to function seamlessly, and as members of the millennial generation they are digital natives. Their responses could provide insight into how young aspiring technopreneurs will enter the real world [4]. This study offers service providers, educators, and the market in general some insight into the support needs and improvements demanded for technology-related products and solutions. Players in the digital economy should take into account that young entrepreneurs are more willing to face risks, especially in the technical field, as they are well trained and ready to face the challenges [5]. It cannot be denied that they still face the traditional obstacles for beginners, related to space, time, and limited capital; given their ICT training, however, these obstacles do not impede their technopreneurial spirit. Advances in digital technology decrease technopreneurial risk and lower the hurdles of starting a digital-based business, famously known as a start-up [6].
2 Literature Study

2.1 Technopreneurship

In Indonesia, the number of students becoming entrepreneurs is still small and should be increased. Universities ought to introduce entrepreneurship training for students, and private universities are well positioned to provide entrepreneurship education that encourages students to become entrepreneurs. Free trade, increasing competition for work, and limited job vacancies all point to the importance of fostering an entrepreneurial spirit among university students. ICT students, owing to the nature of their STEM (Science, Technology, Engineering and Mathematics) training, are relatively in demand, although their numbers are smaller than those of other disciplines; they are potentially more motivated technopreneurs. Technopreneurs are expected to help reduce unemployment [2]. A technopreneur provides innovative high-technology products or services as solutions to those who need them. ICT (Information and Communication Technology) technopreneurs have been successful in most parts of the world, and many countries depend on ICT as a means to develop their public and private sectors.
Technological innovation has provided new opportunities for a nation's development. ICT is an important factor in improving business processes for the business community and in the development of a country [7]. Technopreneurs thrive on what they perceive as opportunities. Opportunities arise from various sources, such as asymmetries in available and new information, shifts in supply and demand, the balance between productivity enhancement and rent-seeking, and the identification of changes that generate opportunities. Opportunities lead entrepreneurs to start forming business plans. The opportunity-based approach to entrepreneurship holds that the emergence of entrepreneurial opportunities is of fundamental importance; it is also observed that social, political, and technological change can eliminate entrepreneurial opportunities [8].
2.2 Business Platform

In the last two years, efforts to slow and ultimately stop the pandemic have accelerated e-commerce growth. Some countries resorted to strict lockdowns, others to soft lockdowns. Lockdowns and social distancing affected the economy in general and traditional retail business in particular. E-business picked up as the public, now computer and Internet literate, gradually adjusted from traditional transactions to e-commerce. E-commerce activities are enabled because all business players are accessible via the Internet: buyers and sellers are connected with the help of an Internet platform as the medium. Apart from a reliable and transparent legal foundation, one important component for the running of a commercial system is the existence of a reliable and accepted financial transaction system. The financial sector builds infrastructure that is likewise Internet based and supports commercial platforms such as the marketplace. The Internet-based online e-commerce marketplace gives aspiring technopreneurs the ability to compete countrywide and ultimately globally, while customers gain convenience in terms of time, geographical location, and financial assurance for their activities. In Indonesia, e-commerce revenue could grow at an annual rate of 12.95% between 2022 and 2025, and in 2022 the number of users is projected to exceed 220 million [9, 10]. The number of e-commerce transactions and the associated revenue have been increasing since the onset of the pandemic; at the same time, there are challenges that the industry must overcome [11]. Technopreneurs have several choices for launching and offering their products and solutions: traditional brick and mortar, an Internet web presence on their own hardware and software, or social media. With the advent of cloud computing, the web-based solution that formerly required a large capital outlay is logically migrated to the cloud.
The need for platforms for the products and solutions created by technopreneurs has inspired several companies to create marketplaces that cater to their need for an Internet-based market. Even before the pandemic, online commercial activity was supported by the existence of e-marketplaces, which cover not only local markets but also have global coverage. Local marketplaces in
general facilitate local e-commerce: they provide a legitimate virtual meeting place and enable transactions between e-commerce players. A marketplace can function anywhere because its place is on the Internet. The marketplace has become the most trusted platform for product offerings, followed by websites and social media. The marketplace can interest technopreneurs because it spares them from setting up physical premises with all the accompanying infrastructure; this greatly reduces the financial burden of starting a business and becomes an incentive for prospective entrepreneurs. The critical success factors in implementing an e-marketplace include trust, coverage, and payment channels [12]. In Indonesia, competitive threats, financial resources, and risk apparently affect the adoption of e-marketplaces by aspiring entrepreneurs in general; the trend is to adopt social media as the starting business platform, even though the marketplace provides online financial transaction support that is convenient for both parties [13]. A study of business school students in Indonesia on the major factors that lead prospective entrepreneurs to choose an e-marketplace as their business platform found that technical factors are the most important: students pay close attention to technical parameters such as latency, user friendliness, and help desk support. Additional parameters that come into consideration include service quality, marketing, price competitiveness, and ease of financial transactions [14].
2.3 Information and Communication Technology Platform

Startup companies and small and medium enterprises (SMEs) can follow either the well-tested traditional on-premise Information and Communication Technology (ICT) path or the maturing contemporary path known as cloud computing. To reach a large market, both depend on the Internet: whichever ICT path a company chooses, an Internet connection is a must; otherwise it operates in isolation, without connectivity to the outside world and, in this case, to the social and commercial market. Start-ups and micro, small, and medium enterprises are encouraged to adopt cloud computing because, financially, it can eliminate the capital outlay needed to meet ICT resource requirements. Choosing cloud computing as the ICT platform enables them to compete with established businesses on a level ICT playing field. Adoption of cloud computing brings benefits in cost saving, scalability, performance, and computing capacity, and it enables faster and improved business processes that traditional ICT would require sizable financial capability to deliver [15, 16]. Operationally, start-ups and SMEs can minimize costs related to service, maintenance, and overheads [17]. A caveat is the cost structure of cloud computing: a technopreneur skilled in ICT should be able to calculate the operational cost outlay precisely and avoid being trapped by marketing gimmicks and offers that are valid only under certain terms and conditions. The flexibility of cloud computing provides multi-platform interoperability for e-commerce entities [18]. Cloud computing for e-commerce is essential for
technopreneur SMEs because, with proper configuration and ICT skill, it can overcome resource-limitation barriers. Furthermore, in time, start-ups and technopreneurs can easily develop the integrations that lead to market expansion and overcome the barriers that usually hamper SMEs from expanding their share of e-commerce [19]. Technopreneurs usually have to satisfy substantial IT resource requirements to develop, introduce, produce, and maintain their solutions. Traditionally, they would build their own IT platform, now called an on-premise system, and carefully dimension its capacity based on predicted internal and external traffic; on-premise systems are inflexible to adjust because of the financial outlay involved. Cloud computing therefore appears to be the solution for technopreneurs with limited financial resources. Even though cloud computing lessens the financial risks of the start-up and growth process, there are concerns about security, privacy, and data ownership. Issues concerning customer privacy [16, 20] and customer trust [16, 21, 22] need to be addressed from the beginning, and security and privacy become major concerns in e-commerce, especially when cloud computing is adopted [23, 24]. Due attention should be paid to data confidentiality, cyber-security attacks, and transaction activities; risks in Internet transactions stem from fraudulent and deceptive behavior that can harm e-commerce entities and their customers. More and more countries are issuing regulations to protect data, in particular personal data; most notably, the GDPR (General Data Protection Regulation) demands attention, since adopting cloud computing exposes technopreneurs to global interconnections. In this study, it is interesting to learn how young aspiring entrepreneurs with an ICT background perceive cloud computing as the IT platform of choice.
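The cost caveat discussed above (CAPEX for on-premise versus pay-as-you-go OPEX for cloud) can be made concrete with a rough break-even sketch. All figures and function names below are hypothetical assumptions for illustration, not data or methods from this study.

```python
def cumulative_cost_on_premise(months, capex, monthly_opex):
    """On-premise: large upfront CAPEX plus ongoing maintenance OPEX."""
    return capex + monthly_opex * months

def cumulative_cost_cloud(months, monthly_fee):
    """Cloud: no upfront outlay, only pay-as-you-go subscription OPEX."""
    return monthly_fee * months

def break_even_month(capex, onprem_opex, cloud_fee, horizon=120):
    """First month at which on-premise becomes cheaper than cloud, if ever."""
    for m in range(1, horizon + 1):
        if cumulative_cost_on_premise(m, capex, onprem_opex) < cumulative_cost_cloud(m, cloud_fee):
            return m
    return None  # cloud stays cheaper over the whole horizon

# Hypothetical numbers: $12,000 server CAPEX and $100/month upkeep,
# versus a $400/month cloud subscription.
print(break_even_month(12_000, 100, 400))
```

A sketch like this also illustrates why an ICT-skilled technopreneur must read the terms carefully: a promotional cloud fee valid only for the first year changes the break-even point considerably.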
3 Methodology

To gain insight into what ICT students would aspire to as technopreneurs, concerning the platform on which their products would be offered and how the supporting IT would be structured, a Modified Focus Group Discussion was performed. It is a modified FGD in that it was conducted not through the usual face-to-face discussion but through a closed academic discussion forum. The participants are students of selected classes. In each selected class, the students received a normal class assignment in which they had to respond to a discussion theme related to their choice of product-offering platform and IT support platform. They could openly and freely express their opinions, and there was no grading for this session. Every student was expected to post an opinion in this online academic forum within one week. The forum is facilitated by the university's LMS (Learning Management System); each subject of a class can have its own forum, accessible only to the students and the lecturer of that subject and class. The subject used for this research is Computer Networks. Three classes from the School of Computer Science and one class from the School of Information Systems were selected, as these students have better knowledge of ICT (Information and Communication Technology).
The posted discussion theme solicited the ICT students' opinions on starting a business, alone or with peers, to introduce their products or services. There were two subjects of interest. The first was whether they would choose a marketplace or do their own marketing in the traditional brick-and-mortar fashion; they had to state the reasoning behind their choice. The second was their choice of IT platform: cloud based, on which they had recently taken classes, or on-premise, with which they are familiar or at least have some knowledge. Again, the reasoning behind the choice had to be stated. Their opinions were then sorted and classified accordingly. In the discussions there were no instructions on what the students must consider when writing their opinions; the study strove to obtain the honest perceptions of ICT students on the important considerations they would weigh in heading toward entrepreneurship. The results are most likely influenced by the students' environment: culture, education, and peers would all somehow shape their opinions. The method seeks to find out whether there are qualitative ICT characteristics that most likely influence their opinions. The classification of results is based on the posted discussions submitted by the due date, and the results are tabulated to provide a qualitative picture of the perceptions of the ICT students at this particular university.
4 Results and Discussions

4.1 Choice of Business Platform

The first subject of interest was the choice between a marketplace and traditional brick and mortar. As the students are digital natives, no further explanation was needed: a marketplace is understood as the platform where they can offer their products and services via the Internet. In this study, social media is not counted as a marketplace, as it is assumed to be an informal set-up. The posted discussions can be classified according to the considerations behind each choice. Five considerations formed the basis of the choices: coverage, brand name, promotion platform, ease of doing business, and others (including traditional brick-and-mortar premises, or brick and mortar as a stepping stone before joining e-commerce). Coverage reflects the students' concern with whether their offerings, or at least their existence, would be well disseminated, accessible for purchase, and competitively priced. Brand name means how well the marketplace is known to the public and its reputation. The students generally assumed that their offerings were for the retail market, which showed in their concern with the promotion platform: a place to promote their offerings to the largest possible market, with no consideration of a potential or targeted market. The students were also concerned with ease of doing business: in their perception, a marketplace involves little red tape, and business operations are handled by the marketplace itself. They only have to ensure that their offering can be purchased easily by the parties that are interested
or need it. The last classification, others, counts the students who chose a traditional solution, a combination, or an evolution from traditional to marketplace or the reverse. The result is tabulated in Table 1.

Table 1 Choice of business platform

Item                     Respondent   Percent (%)
Coverage                 50           44
Brand name               13           11
Promotion                20           18
Ease of doing business   23           20
Others                   8            7
Total                    114          100

ICT students do not hesitate to choose the marketplace as their main business platform, as shown by the combined share of 93% across the four marketplace-oriented considerations. The major reasons for choosing the marketplace are its coverage (44%) and ease of doing business (20%).
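The tabulation in Table 1 amounts to counting categorized responses and expressing each count as a share of the total. A short script can reproduce it; the category labels and respondent counts are taken from Table 1, while the function name is illustrative.

```python
from collections import Counter

def tabulate(choices):
    """Count categorized responses and give each as (count, rounded percent of total)."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {item: (n, round(100 * n / total)) for item, n in counts.items()}

# Respondent counts from Table 1, expanded into one label per classified response.
responses = (["Coverage"] * 50 + ["Brand name"] * 13 + ["Promotion"] * 20
             + ["Ease of doing business"] * 23 + ["Others"] * 8)

for item, (n, pct) in tabulate(responses).items():
    print(f"{item}: {n} ({pct}%)")
```

The same routine applies unchanged to the IT-platform responses tabulated in Table 2.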
4.2 Choice of IT Platform

Similar to the business platform, the discussion subject here was the students' preferred IT platform for offering their products or services, the choices being on-premise or cloud based. As ICT students, they are well exposed to on-premise systems, which are strongly emphasized in the curriculum as the foundation of their ICT knowledge; cloud computing concepts are introduced but not covered as deeply as the traditional on-premise system. The students enhance their knowledge by accessing the abundant information on the Internet and proactively following webinars. The posted discussions can be classified into five major lines of reasoning. For choosing a cloud-based platform, the considerations of importance are accessibility, cost, technical issues, and a holistic view; an on-premise platform was also considered by some students. Table 2 shows the tabulated result.

Table 2 Choice of information technology

Item              Respondent   Percent (%)
Accessibility     38           33
Cost              28           24
Technical issues  37           32
All factors       11           9
On premise        2            2
Total             116          100

ICT students pay particular attention to technically related issues, in this case accessibility (33%) and technical issues (32%). Accessibility concerns Internet
access, since they chose the Internet marketplace as their business platform and the cloud as their IT platform; both are Internet dependent. ICT students do not consider cost a major factor in starting as an entrepreneur. Cost considerations include CAPEX (Capital Expenditure) and OPEX (Operations Expenditure). A number of the ICT students (9%) took a holistic view covering technical, financial, marketing, and business issues. The traditional on-premise system is out of favor on financial grounds, namely capital and operational expenditure; capital expenditure includes the risk of over- or under-dimensioning the system, which could also affect operations. The ICT students reasoned from their technical-understanding perspective, and financial aspects were described qualitatively, with no numbers.
5 Conclusion

The study of ICT students' considerations in launching innovative products and services as technopreneurs reveals that their decisions were based on what they know best, which is information technology; most considerations were technical. It cannot be denied that the two aspects they were asked to consider were themselves IT related: they had to determine their choice of business platform and of IT platform. Even though no requirements were specified, the responses were overwhelmingly technical. The business platform of choice was the marketplace, with at least 64% of participants citing coverage and ease of doing business; coverage is essentially an accessibility issue, and ease of doing business relates to red tape (regulation). The IT platform of choice for aspiring technopreneurs was definitely cloud based: the cloud was chosen by at least 65% of participants, and by 74% if the holistic choice is included, while only 24% cited cost (CAPEX, OPEX) as their consideration. This study can help business players in general, educators, and policy makers to tailor their offerings accordingly. One caveat deserving attention is that the participants were ICT students of a single reputable university in Indonesia; further study would be needed to obtain a generalized conclusion.
References

1. APJII Indonesian Internet Profile 2022. https://apjii.or.id/content/read/39/559/Laporan-SurveiProfil-Internet-Indonesia-2022. Last accessed 12 Sep 2022
2. Indriyani R, Darmawan RC, Gougui A (2020) Entrepreneurial spirit among university students in Indonesia. In: SHS web of conferences, vol 76. EDP Sciences
3. Hutasuhut S, Aditia R (2022) Overview of student entrepreneurship in Indonesia. In: 2nd international conference of strategic issues on economics, business and education, vol 204. Atlantis Press
4. Abbas AA (2018) The bright future of technopreneurship. Int J Sci Eng Res 9(12):563–566
5. Pratiwi CP, Sasangko AH, Aguzman G (2022) Characteristics and challenge faced by socio-technopreneur in Indonesia. Bus Rev Case Stud 3(1):13–22
6. Cavallo A, Ghezzi A, Balocco R (2019) Entrepreneurial ecosystem research: present debates and future directions. Int Entrepreneurship Manag J 15(4):1291–1321
7. Fowosire RA, Idris OY (2017) Technopreneurship: a view of technology, innovations and entrepreneurship. Glob J Res Eng 17(7):41–46
8. Eckhardt JT, Shane AS (2003) Opportunities and entrepreneurship. J Manag 29(3):333–349
9. Statista. https://www.statista.com/outlook/dmo/ecommerce/indonesia. Last accessed 12 Sep 2022
10. Baijal A, Hoppe ACF (2020) Report e-Conomy SEA 2020, resilient and racing ahead: Southeast Asia at full velocity. Google, Temasek and Bain & Company, Singapore
11. Negara SD, Endang SS (2021) E-commerce in Indonesia: impressive growth but facing serious challenges. ISEAS-Yusof Ishak Institute
12. Prihastomo Y, Hidayanto AN, Prabowo H (2018) The key success factors in e-marketplace implementation: a systematic literature review. In: IEEE 2018 international conference on information management and technology, pp 443–448
13. Purwandari B, Otmen B, Kumaralalita L (2019) Adoption factors of e-marketplace and Instagram for micro, small, and medium enterprises (MSMEs) in Indonesia. In: Proceedings of the 2019 2nd international conference on data science and information technology
14. Hatammimi J, Purnama SD (2022) Factors affecting prospective entrepreneurs to utilize e-marketplace: a study of business school students in Indonesia. Int J Res Bus Soc Sci 11(01):2147–4478
15. Shaikh F, Patil D (2014) Multi-tenant e-commerce based on SaaS model to minimize IT cost. In: IEEE 2014 international conference on advances in engineering & technology research
16. Wang B, Tang J (2016) The analysis of application of cloud computing in e-commerce. In: IEEE 2016 international conference on information system and artificial intelligence (ISAI), pp 148–151
17. Kiruthika J, Horgan G, Khaddaj S (2012) Quality measurement for cloud based e-commerce applications.
In: IEEE 2012 11th international symposium on distributed computing and applications to business, engineering & science, pp 209–213
18. Cai HY, Zhen L, Tian JF (2011) A new trust evaluation model based on cloud theory in e-commerce environment. In: IEEE 2011 2nd international symposium on intelligence information processing and trusted computing, pp 139–142
19. Yu J, Jun N (2013) Development strategies for SME e-commerce based on cloud computing. In: IEEE 2013 seventh international conference on internet computing for engineering and science
20. Sawesi KGA, Madihah MS, Jali MZ (2013) Designing a new e-commerce authentication framework for a cloud-based environment. In: IEEE 2013 4th control and system graduate research colloquium, pp 53–58
21. Nafi KW (2013) A new trusted and secured e-commerce architecture for cloud computing. In: IEEE 2013 international conference on informatics, electronics and vision
22. Boritz JE, Won GN (2011) E-commerce and privacy: exploring what we know and opportunities for future discovery. J Inf Syst 25(2):11–45
23. Lackermair G (2011) Hybrid cloud architectures for the online commerce. Procedia Comput Sci 3:550–555
24. Treesinthuros W (2012) E-commerce transaction security model based on cloud computing. In: IEEE 2012 2nd international conference on cloud computing and intelligence systems, vol 1, pp 344–347
Occupational Safety and Health Training in Virtual Reality Considering Human Factors

Amir Tjolleng
Abstract Occupational safety and health (OSH) is a critical workplace concern dealing with the safety, welfare, and social well-being of workers. OSH training aims to provide a safe and healthy workplace, reduce the likelihood of accidents or injuries, and increase productivity. The rapid pace of technological development and the increasing use of immersive visual technologies offer an opportunity to enhance the performance of safety and health training programs. Virtual reality (VR) is gaining popularity in the area of safety and health training, and it has the potential to outperform traditional training programs in a variety of ways. However, it is also essential to take the human factors (HF) aspect into account when adopting virtual reality technology for safety and health training programs. Hence, this paper briefly reviews the concepts of OSH and VR and contributes to the design and utilization of VR in OSH training with respect to HF aspects.

Keywords Hazard · Human factors · Industry 4.0
1 Introduction

The development of technology in the current era is rapidly transforming industry and gaining considerable attention. This dynamic phenomenon is well known as the fourth industrial revolution (Industry 4.0), characterized by automation, digitalization, and the integration of information technologies such as artificial intelligence (AI), the internet of things (IoT), augmented and virtual reality, cloud computing, big data, data analytics, additive manufacturing, and cyber-physical systems. These transformations in the digital era have disrupted not only the structure and business models of industry but may also have an impact on the health and safety of employees at work.

A. Tjolleng (B)
Industrial Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_24
Every organization's performance depends heavily on its commitment to occupational safety and health (OSH), the practice of protecting the health, safety, and well-being of workers in the workplace from accidents, injuries, and exposure to hazardous chemicals [1]. The International Labour Organization estimates work-related mortality at around 2.3 million fatalities worldwide per year [2]. Meanwhile, according to the statistics of the Social Security Administration Board for Employment in Indonesia, there were 221,740 occurrences of work accidents in 2020, a 21.3% increase from 2019, which led to major personal, societal, and economic costs [3]. Thus, it is important to enhance OSH training among novice workers to reduce the occurrence of accidents and work-related injuries or illnesses, improve workers' health, and enhance productivity. The aforementioned technological development and the rising use of immersive visual technologies provide a fascinating potential to enhance the performance of safety and health training programs. Virtual reality (VR), as one of the elements of Industry 4.0 (I4.0), is gaining significant popularity in the area of safety training, and it has the potential to offer a variety of benefits over conventional training programs [4]. However, it is also essential to consider the human factors (HF, synonymous with the term ergonomics) aspect when adopting virtual reality for safety and health training programs. Failure to give due attention to HF in system design and implementation may bring negative consequences and problems for workers, system performance, and society in general. To date, prior research on I4.0 element systems and implementation has mostly ignored humans as operators in the system [5]. The design and implementation of I4.0 elements in industry with HF aspects in mind has been examined in existing studies. Recently, a study by Neumann et al.
[5] proposed a conceptual framework to investigate the role of HF in the design of I4.0 elements in manufacturing. They determined which HF components have received the most attention in the I4.0 literature and presented a framework to systematically help companies include HF in their I4.0 system development initiatives (e.g., collaborative picking robots and equipment maintenance using augmented reality glasses). Unfortunately, they did not identify or discuss how to apply their proposed framework to OSH training using VR as an element of I4.0. The present study adopts the framework proposed in [5] and attempts to provide a further contribution to the design and implementation of virtual reality in OSH training by taking HF aspects into account.
2 Literature Review

2.1 Occupational Safety and Health (OSH)

Occupational accidents, injuries, and illnesses, as well as catastrophic industrial disasters, have long been a critical issue at all levels, from the individual workplace to the global level. Occupational safety and health (OSH) is a multidisciplinary area
Occupational Safety and Health Training in Virtual Reality Considering …
275
concerned with the health, safety, and well-being of workers (i.e., people in an occupation). OSH is generally defined as the science of identifying, assessing, controlling, and mitigating hazards that could appear in the workplace and harm workers’ health and well-being, while also considering the potential effects on surrounding communities and the broader environment [1]. It can also be referred to as occupational health or occupational safety. The goal of an occupational safety and health program is to promote a safe and healthy workplace that enhances workers’ physical, mental, and social well-being.
2.2 Virtual Reality (VR)

Virtual reality (VR) can be defined as a scientific and technological field that simulates the behavior of 3D entities in a virtual world using behavioral interfaces and computer science, allowing users to experience in real time a pseudo-natural immersion through sensorimotor channels [6]. According to Sacks et al. [7], virtual reality is a technology that creates a simulated environment for its user with computers, software, and additional hardware. Virtual reality allows a user, sometimes called a cybernaut, to move through the computer screen into a three-dimensional (3D) world. The user may look at, walk around, and interact with these digital environments as if they were real. The virtual representation of the user in the imaginary world is called an avatar [8, 9].

Based on the degree of immersion, virtual reality systems can be divided into three major categories: (1) non-immersive (desktop), (2) semi-immersive, and (3) fully immersive systems [9, 10]. Non-immersive VR allows users to interact with the virtual environment through a computer screen or desktop (e.g., a video game or driving simulator). Semi-immersive VR enables partial immersion by superimposing digital elements on real-world items; such systems resemble a standard flight simulator, in which users are seated in a chair or room and shown monitors relaying 3D pictures. Lastly, fully immersive VR places users in a 360° sensory simulation: a computer-generated environment that gives a person the sensation of being in another world by activating many senses, thereby removing perception of the real surroundings. Fully immersive VR also gives the user the capacity to control objects within the virtual world.
Virtual reality systems present more immersive and interactive experiences; however, the complexity of their implementation and component requirements remains a significant concern [8]. The subsystems of virtual reality are categorized based on the senses affected: the visualization subsystem, acoustic subsystem, kinetic and statokinetic subsystem, tactile and contact subsystem, and subsystems for other senses.
276
A. Tjolleng
2.3 VR for Safety and Health Training

A rising number of major corporations have recently started to investigate the numerous options virtual reality provides for corporate training [11]. VR allows companies to train employees on safety procedures and dangerous tasks in engaging and attractive ways. By stimulating multiple senses, VR enables employees not only to see but also to feel the immediate repercussions of dangerous behaviors. Thus, it promises to enhance engagement, reduce costs, and improve training quality. An existing study found that engaging training was nearly three times more effective than non-engaging training at encouraging safety training knowledge and skill development [12]. Moreover, VR allows companies to train employees on safety procedures and hazardous tasks in a safe and controlled environment. The goal of a VR-based safety training program is to provide a safe working environment in which employees may successfully practice activities, eventually promoting their capacities for hazard identification and intervention [13]. VR can make it possible for potentially risky activities that would otherwise require the physical presence of employees to be carried out remotely without affecting performance; in other words, it enables training through failure without endangering workers’ safety and health [10]. Rapid advances in VR head-mounted displays that are better, more affordable, and more adaptable are creating new opportunities for the technology’s use across a range of sectors, including manufacturing, construction, and mining [4].
3 Methods

A conceptual framework proposed in a recent study [5] was employed to design occupational safety and health training in virtual reality considering the human factors aspects. The framework follows a five-step procedure: (1) define the adopted technology, (2) determine the affected roles, (3) identify task situations, (4) analyze the tasks and their implications, and (5) analyze the outcome.
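As a rough illustration only, the five-step procedure can be represented as a simple checklist structure. The `HFAnalysis` class and all field names below are hypothetical conveniences for organizing such an analysis, not part of the framework in [5]:

```python
from dataclasses import dataclass, field

@dataclass
class HFAnalysis:
    # Step 1: the adopted I4.0 technology and its objectives.
    technology: str
    objectives: list
    # Step 2: the human roles affected by the technology.
    roles: list = field(default_factory=list)
    # Step 3: task situations the technology adds or removes.
    tasks_added: list = field(default_factory=list)
    tasks_removed: list = field(default_factory=list)
    # Step 4: impacts, keyed by the five HF aspects (perceptual,
    # cognitive, knowledge, physical, psychosocial).
    impacts: dict = field(default_factory=dict)
    # Step 5: expected outcomes for workers and system performance.
    outcomes: list = field(default_factory=list)

analysis = HFAnalysis(
    technology="VR-based OSH training with head-mounted displays",
    objectives=["increase engagement and motivation",
                "train in a safe environment"],
)
analysis.roles += ["employees", "IT/engineering team", "trainer"]
analysis.impacts["physical"] = "changed head/neck postures, prolonged standing"
```

Populating such a structure step by step mirrors the walkthrough in the next section.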
4 Results and Discussion

This section presents and discusses how to apply occupational safety and health training in virtual reality considering the human factors aspects, based on the conceptual framework proposed in [5]. Each step is briefly explained and followed by an application example of safety and health training in virtual reality as an element of I4.0.
Step 1. Define the characteristics and objectives of the adopted technology

This study considered the example of occupational safety and health training in VR as an I4.0 element. Employees use smart glasses or head-mounted displays (HMDs) and perform safety- and health-related tasks virtually, supported by instruction and perception. The manufacturer’s objectives in adopting VR-based safety and health training are to: (1) increase employees’ perceived engagement, learning, and motivation, and (2) perform the training in a safe environment.

Step 2. Determine the human roles

The human role most affected by occupational safety and health training is the employee (whether new or current) in a company or manufacturer. The training also involves information technology (IT) or engineering staff to support training operations, and the trainer who delivers the training session is also impacted. Initially, however, the roles that need to be considered are the employees and the IT team, as they are involved in designing and integrating the system.

Step 3. Identify task situations

The adoption of VR technology for occupational safety and health training will remove some work and add other work. It removes in-depth initial training and self-directed study of safety training materials. On the other hand, it adds several tasks: conducting the safety training in VR and handling the VR devices. In addition, the IT and engineering teams take on new roles supporting and supervising the VR training system, and must handle additional planning efforts for the new system and new task scenarios.

Step 4. Analyze the tasks and their impacts

In this step, the tasks and the impacts of technology use on the humans in the system are described in terms of five aspects: (1) perceptual, (2) cognitive, (3) knowledge, (4) physical, and (5) psychosocial.
For example, by using virtual reality, the training session for employees changes from manual to virtual, which increases learning engagement and motivation. However, despite the advantages of VR, employees need greater effort to adapt to the new training system, with added physical, perceptual, and cognitive task loads. VR may also induce changes in head or neck postures and increase prolonged standing or sitting. Meanwhile, technical knowledge of the VR system is required of the employees and the engineering team, which could raise workload levels at first and introduce new challenges. Lastly, in the psychosocial aspect, the VR-based training system provides better safety training and allows employees to train together with others virtually.

Step 5. Analyze the outcome

Applying VR for safety and health training can increase engagement, perceived learning, and motivation, and enables training in a safe environment. However, it requires investment in the development of the virtual environment as
well as for the installation of the devices. Moreover, the manufacturer may no longer need to provide trainers but will need larger IT and engineering teams to develop and manage the VR system. In addition, regarding the outcomes of the tasks and impacts on employees, initial adaptation to the VR system may induce headaches, eye strain, nausea, and mental exhaustion [8]. Work organization and system performance might also be affected by increased cognitive and knowledge demands; for example, employees or the engineering team may feel stressed and be prone to error, absenteeism, and turnover. The example of safety and health training using VR is summarized in Table 1.

One possible chain of effects for this case can be explained as follows. Using VR in OSH training adds several tasks for employees in conducting the safety training in VR compared with traditional training (1). As a result of this human impact, employees must be trained with sufficient knowledge of how to use the VR system and perform the training well (2). These training requirements relate to initial learning effects while using the VR system; in addition, implementing the system may at first reduce system performance (3). Lastly, the reduced system performance and increased training requirements will have budgetary implications for training and provision (4).

This chain of effects supports the need for attention to HF throughout the design. The adoption of I4.0 elements needs to incorporate the HF aspects to make worker and system performance more efficient and productive. To achieve the best performance, HF needs to be addressed before the implementation of I4.0 elements; poor HF design in the initial stages will reduce the system’s performance [5]. The framework used for OSH training in VR can help managers avoid HF-related pitfalls in ongoing innovation phases.
It could also serve as a basis for designing the implementation of I4.0 system elements, helping a company or manufacturer capture the overall system.
5 Conclusion

This study aimed to identify the aspects that need to be considered in the design of OSH training in VR with HF to support corporate I4.0 system development. The HF aspects need to be systematically addressed in the design of technology adoption to achieve better system performance. In the case of OSH training using VR, the affected human roles are the employees and the IT team. Using the VR system adds and removes some work, and impacts the humans, task scenarios, and outcomes of system performance. This chain of effects supports the need for appropriate attention to HF early in the design and implementation of OSH training using VR.
Table 1 Example of method application for safety and health training in virtual reality
References

1. Alli B (2008) Fundamental principles of occupational health and safety. International Labour Organization (ILO)
2. ILO: The enormous burden of poor working conditions. https://www.ilo.org/moscow/areas-of-work/occupational-safety-and-health/WCMS_249278/lang--en/index.htm. Last accessed 26 Sep 2022
3. Ketenagakerjaan BPJS (2020) Menghadapi Tantangan, Memperkuat Inovasi Berkelanjutan: Laporan Tahunan Terintegrasi 2020
4. Grassini S, Laumann K (2020) Evaluating the use of virtual reality in work safety: a literature review. In: Proceedings of the 30th European safety and reliability conference and the 15th probabilistic safety assessment and management conference, pp 1–6
5. Neumann WP, Winkelhaus S, Grosse EH, Glock CH (2021) Industry 4.0 and the human factor—a systems framework and analysis methodology for successful development. Int J Prod Econ 233:107992
6. Fuchs P, Moreau G, Guitton P (2011) Virtual reality: concepts and technologies. CRC Press
7. Sacks R, Perlman A, Barak R (2013) Construction safety training using immersive virtual reality. Constr Manag Econ 31(9):1005–1017
8. Grega M, Nečas P, Lancik B (2021) Virtual reality safety limitations. INCAS Bulletin 13(4):75–86
9. Mujber TS, Szecsi T, Hashmi MS (2004) Virtual reality applications in manufacturing process simulation. J Mater Process Technol 155:1834–1838
10. Norris MW, Spicer K, Byrd T (2019) Virtual reality: the new pathway for effective safety training. Prof Saf 64(06):36–39
11. Makransky G, Klingenberg S (2022) Virtual reality enhances safety training in the maritime industry: an organizational training experiment with a non-WEIRD sample. J Comput Assist Learn 38:1127–1140
12. Burke MJ, Sockbeson CES (2015) Safety training. In: Clarke S, Probst TM, Guldenmund F, Passmore J (eds) The Wiley Blackwell handbook of the psychology of occupational safety and workplace health. Wiley
13. Zhao D, Lucas J (2015) Virtual reality simulation for construction safety promotion. Int J Inj Contr Saf Promot 22(1):57–67
Block Chain Technology and Internet of Thing Model on Land Transportation to Reduce Traffic Jam in Big Cities

Inayatulloh, Nico D. Djajasinga, Deny Jollyta, Rozali Toyib, and Eka Sahputra

Abstract Traffic congestion in big cities is a difficult problem to solve. The large volume of vehicles passing the same road at the same time is the major problem, in addition to other problems that add to the complexity of congestion in big cities. Block chain technology offers transactions with high transparency, where every node connected in the block chain network can monitor all transactions that occur. In the context of this research, every vehicle that passes a certain road becomes a node in the block chain network, so each node knows the path of other nodes and can look for alternative roads to avoid traffic jams. The aim of this research is to build a block chain adoption model for land transportation to reduce congestion. This study uses observation and literature review to identify problems and find alternative solutions. The result of this research is the adoption of Internet of Things and block chain technology to reduce traffic congestion in big cities.

Keywords Internet of Thing · Block chain technology · Land transportation · Traffic jam

Inayatulloh (B)
Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]

N. D. Djajasinga
Politeknik Transportasi Darat Indonesia-STTD, West Java, Indonesia
e-mail: [email protected]

D. Jollyta
Institut Bisnis Dan Teknologi Pelita Indonesia, Pekanbaru, Indonesia
e-mail: [email protected]

R. Toyib · E. Sahputra
Universitas Muhammadiyah Bengkulu, Bengkulu, Indonesia
e-mail: [email protected]
E. Sahputra
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_25
Abstract Traffic congestion in big cities is a difficult problem to solve. The large volume of vehicles that pass the same road at the same time is a major problem in addition to other problems that add to the complexity of congestion in big cities. Block chain technology is a technology that offers transactions with high transparency where every node connected in the block chain network can monitor each other for all transactions that occur. In the context of the research being built, every vehicle that passes a certain road will become a node in the block chain network so that each node will know the path of other nodes and can look for alternative roads to avoid traffic jams. The aim of this research is to build a block chain adoption model for land transportation to reduce congestion. This study use uses observation and literature review to identify problems and find alternative solutions. The result of this research is the adoption of Internet of Thing and block chain technology to reduce traffic congestion in big cities. Keywords Internet of Thing · Block chain technology · Land transportation · Traffic jam Inayatulloh (B) Information Systems Department, School of Information System Bina, Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] N. D. Djajasinga Politeknik Transportasi Darat Indonesia-STTD, West Java, Indonesia e-mail: [email protected] D. Jollyta Institut Bisnis Dan Teknologi Pelita Indonesia, Pekanbaru, Indonesia e-mail: [email protected] R. Toyib · E. Sahputra Universitas Muhammadiyah Bengkulu, Bengkulu, Indonesia e-mail: [email protected] E. Sahputra e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_25
1 Introduction

Congestion problems often occur in capital cities or areas with a high intensity of activity, land use, and population [1–3]. Traffic jams often occur due to high traffic volume caused by the continuous mixing of traffic. Traffic jams are a routine occurrence that wastes resources and can interfere with activities in the surrounding environment. The broad impact of congestion affects the smooth running of socio-economic and cultural activities in the area around it. Traffic congestion is caused by an imbalance between the growing population and number of vehicles on the one hand and the number of roads available in an area on the other. Congestion also has a social impact: it causes stress, irritation, and fatigue for drivers and, more broadly, affects the psychology of the population around the area. From an economic point of view, traffic jams cause lost time for drivers and increased costs [5, 6].

Block chain technology offers transparency of every transaction, where every node is connected in the block chain network [7]. In the land transportation model to be built, every vehicle that uses a road becomes a node with an IoT device in the block chain network, so each vehicle knows the movement of every other node. A monitoring system built by the government and connected to the block chain network provides feasible alternative roads to avoid traffic congestion. A traffic monitoring system based on a geographic information system provides all information related to roads in an area, information that is much needed by motorists who will use those roads.
The purpose of this research is to help the government reduce congestion by integrating block chain technology, the Internet of Things (IoT), and a Geographical Information System (GIS). The research method uses observation and literature review to identify problems and alternative solutions based on information technology. The result of this research is an integrated block chain, IoT, and Geographical Information System model to reduce traffic congestion in land transportation.
2 Literature Review

2.1 Block Chain Technology

A blockchain consists of blocks containing transaction information that are linked and sequenced to form a chain; the literal definition of blockchain is a series of blocks. Blockchain therefore serves as a hash-based digital data storage system. A hash is an alphanumeric code encoding a word, message, or piece of data that is
Fig. 1 A block chain network architecture (Salman, 2018). https://www.academia.edu/download/82356372/pbc_uem.pdf
generated using a particular algorithm employing a cryptographic code. Each block has a separate and unique hash [8, 9]. The most recently connected block must contain the prior block’s hash value; each block thus contains a reference to the previous block, forming a chain. The data in blocks already linked in the chain cannot be modified, since doing so would require modifying all the blocks that follow them [10]. Figure 1 shows the block chain mechanism. This is why blockchain-based systems are so safe.

The initial block in a distributed ledger is known as the genesis block. Every new block is appended to the end of the chain. Each subsequent block contains information about the sequence of all prior blocks, maintaining the integrity of the chain. Each block is validated by an algorithm before being added to the chain; the verification technique may differ depending on the consensus mechanism each blockchain uses. This consensus system ensures that all data is correct, precise, and secure. To contribute blocks to the Bitcoin network, for example, every Bitcoin miner must solve a difficult cryptographic challenge. Once a transaction is validated, its data is stored with hundreds of other transactions in a block; this data includes the transaction amount, the digital signature, and the parties involved. The oldest transaction is always stored first. After all transactions in a block have been validated, the blockchain algorithm generates a hash based on those transactions. The new block is also given the preceding block’s hash, which links the new block to the chain. When a new block is added to the blockchain, it becomes accessible to all parties. Lastly, information about each block and chain of this network is not saved on a single computer but rather is distributed
to all the miners functioning as network nodes. This type of system is sometimes referred to as a distributed ledger [11].
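The hash-linking and validation mechanism described above can be sketched in a few lines of Python. This is a deliberately simplified illustration (no consensus, mining, or networking); the block layout and the use of SHA-256 over a JSON payload are assumptions for demonstration, not the Bitcoin format:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents (including the previous block's hash)
    # with SHA-256, so any modification changes every later hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis case
    block = {"index": len(chain), "prev_hash": prev_hash,
             "transactions": transactions}
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    # Recompute each hash and check the back-links; tampering with any
    # earlier block breaks a recomputed hash or a back-link.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, ["genesis"])
append_block(chain, ["tx: vehicle A registered"])
assert chain_is_valid(chain)
chain[0]["transactions"] = ["tampered"]   # altering an old block...
assert not chain_is_valid(chain)          # ...invalidates the chain
```

The final two lines show the property the text emphasizes: altering data in an already-linked block invalidates the chain that follows it.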
2.2 Internet of Things

The Internet of Things (IoT) is a concept in which objects are embedded with technologies such as sensors and software to communicate, control, connect, and exchange data with other devices while connected to the internet [12]. The Internet of Things is closely related to machine-to-machine (M2M) communication; devices with M2M communication capabilities are commonly known as smart devices, which are expected to facilitate the completion of various existing tasks [13].

Establishing an IoT ecosystem requires not only intelligent devices but also several other components. The first is artificial intelligence (AI), a human-created intelligence system implemented or programmed into machines so they can think and act like humans; machine learning is one of its many branches [14]. Through IoT, almost any machine or device can become a smart machine, so IoT has a significant impact on every aspect of our lives. AI is responsible for data collection, algorithm design and development, and network installation. The second component is sensors, which differentiate IoT machines from other sophisticated machines: sensors can transform a passive IoT machine into an active, integrated machine or tool. The final component is connectivity, also referred to as the interconnection of networks; a new network can be created for IoT-specific devices within the IoT ecosystem itself [15].
2.3 GIS (Geographical Information System)

A Geographic Information System (GIS) is a computer-based tool for mapping and analyzing things that exist and events that occur on Earth. As Fig. 2 shows, GIS technology integrates common databases, operations such as queries and statistical analysis, and unique visualizations. GIS can display geographical aspects along with the analytical benefits offered by maps. Another definition of GIS is a set of computer systems for capturing, storing, checking, and displaying data related to positions on the Earth’s surface. This capability distinguishes GIS from other information system tools and makes it valuable to a wide range of public and private organizations, with the advantages of being able to explain events, predict outcomes, and plan strategies [16].
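As a minimal example of the positional computations a GIS performs on stored data, the following Python sketch computes great-circle distances and finds the road segment nearest a vehicle position. The coordinates and segment names are invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on the Earth's surface,
    # the kind of positional query a GIS answers over stored data.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative data: a vehicle position and two road-segment reference points.
vehicle = (-6.2088, 106.8456)             # approx. Jakarta
segments = {"segment-A": (-6.2000, 106.8500),
            "segment-B": (-6.3000, 106.9000)}
nearest = min(segments, key=lambda s: haversine_km(*vehicle, *segments[s]))
```

A real GIS would use spatial indexes and geometry libraries rather than a linear scan, but the query shape (distance over coordinates) is the same.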
Fig. 2 Geographical information system architecture (Pollino, 2012). https://www.mdpi.com/1999-5903/4/2/451/pdf
3 Previous Research

3.1 Internet of Things and Transportation

As Fig. 3 shows, precise and timely traffic-related data allows travellers to choose suitable travel modes, paths, and departure times, which is crucial for the success of an Intelligent Transportation System (ITS). With the growth in vehicles, pollution and fuel consumption have increased, along with traffic congestion. Recent years have seen rapid growth in technology that can be explored to solve traffic issues; however, each country’s ITS research area may differ depending on the available technologies [17].
3.2 Block Chain and Transportation

Figure 4 explains how to apply blockchain technology to smart transportation. Fuzzy set theory is used for decision making. The results show that solving social problems is the main link, and that blockchain application systems in intelligent transportation need to be built at three levels: the government layer, the enterprise layer, and the user layer [18].
Fig. 3 Internet of thing for transportation architecture (Chand, 2018). https://www.academia.edu/download/66309539/9132.pdf
Fig. 4 A block chain architecture for smart transportation (Du, 2020). https://www.hindawi.com/journals/jat/2020/5036792/
4 Research Methods

The initial stage of the research identifies congestion problems in big cities based on a literature review and observations. An in-depth literature review was conducted to reach the root of the congestion problem in big cities. The next stage identifies alternative solutions with an information technology approach by analyzing previous research using Internet of Things, block chain, or Geographical Information System technology and identifying the weaknesses of that research. The final
stage of the research is building a model that complements previous research by integrating Internet of Things technology, block chain, and a Geographical Information System.
5 Result and Discussion

Figure 5 explains the proposed model, which builds on previous research that used only IoT or only block chain. The proposed model not only combines IoT and blockchain but also uses spatial data from the Geographical Information System, so that each vehicle, as a node on the block chain network, knows the spatial location of every other node. The following is a detailed explanation of Fig. 5.

1. Every vehicle connected to the block chain network must have an Internet of Things device attached to the car or motorcycle. The IoT device provides the vehicle information used for communication on the blockchain network [19].
Fig. 5 A proposed model integrating block chain, IoT, and GIS to reduce traffic jams (created by the author)
2. All data originating from the IoT device on each vehicle is stored at the local gateway. In addition to storing all vehicle data connected to the IoT local gateway, it also stores road data derived from the spatial data of the GIS [20].
3. Spatial data representing all roads in an area is the reference for the movement of every node representing a vehicle on the blockchain network. Spatial data displays information accurately, in detail, and in real time, so it can be used in the proposed model [21].
4. Each node representing a vehicle is registered as a block. Each block stores the important information needed for connection and communication on the block chain network [22].
5. Likewise, key information from the spatial data is registered as a block, so that if road components change, all nodes or blocks representing vehicles on the block chain network will know about and respond to the road changes [23].
6. The registered block chain combines each block representing a vehicle with the road information [24].
7. After each part is registered as a block, the system on the block chain network validates each block with peer-to-peer validation; if the information differs at one of the nodes during the validation process, that node cannot enter the block chain network [25].
8. After each block has been validated across the block chain network and all information is valid, all vehicle and road-section information becomes information on the block chain network [26].
9. Users of road and vehicle information can access traffic information from several devices connected to the block chain network. A user can find out the condition of the roads and vehicles on a route to be traversed; if there is congestion on a certain road, the user can take an alternative road [27].
10. The government, as regulator, can access information on the block chain network. The vehicle and road information obtained from the block chain network can be used by the government to create new regulations and improve services for road users [28].
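Steps 4–8 above (registering vehicle or road information as blocks and admitting a node only after peer-to-peer validation) can be sketched as follows. This is an illustrative simplification of the proposed model, not its implementation; the function names, the block layout, and the rule that every peer must agree are assumptions:

```python
import hashlib
import json

def fingerprint(info: dict) -> str:
    # Deterministic digest of a node's registration data (vehicle or
    # road-segment information); peers can recompute it independently.
    payload = json.dumps(info, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(info: dict) -> dict:
    # Steps 4-6: wrap the vehicle or road information as a candidate block.
    return {"info": info, "digest": fingerprint(info)}

def peer_validate(candidate: dict, peer_copies: list) -> bool:
    # Step 7: every peer recomputes the digest from the copy of the
    # information it received; any mismatch keeps the node off the network.
    return all(fingerprint(copy) == candidate["digest"] for copy in peer_copies)

vehicle = {"plate": "B1234XYZ", "type": "car", "road": "segment-17"}
candidate = make_block(vehicle)
assert peer_validate(candidate, [dict(vehicle)] * 3)            # peers agree
tampered = dict(vehicle, plate="B9999ABC")
assert not peer_validate(candidate, [dict(vehicle), tampered])  # rejected
```

Only after this validation succeeds would the candidate block become part of the shared network information (step 8).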
6 Conclusion

Traffic congestion in several locations, especially in big cities in developing countries, is generally caused by the lack of accurate information that road users can access in real time. The availability of accurate, real-time information for road users and the government can reduce congestion, because road users can look for alternative roads and the government can respond to congestion more quickly by creating new policies. Integration of IoT, block chain, and GIS can be an alternative solution, because combining these three technologies
will produce accurate, real-time information for road users and thereby reduce traffic congestion.
References

1. Qingsong H et al (2018) The impact of urban growth patterns on urban vitality in newly built-up areas based on an association rules analysis using geographical ‘big data’. Land Use Policy 78:726–738
2. Xia (2020) Analyzing spatial relationships between urban land use intensity and urban vitality at street block level: a case study of five Chinese megacities. Landsc Urban Plan 193:103669
3. Surya (2020) Land use change, spatial interaction, and sustainable development in the metropolitan urban areas, South Sulawesi Province, Indonesia. Land 9(3):95
4. Koźlak (2018) Causes of traffic congestion in urban areas. Case of Poland. In: SHS web of conferences. EDP Sciences Press, p 01019
5. Calatayud (2021) Using big data to estimate the impact of cruise activity on congestion in port cities. Marit Econ Logist 1:18
6. Centobelli (2021) Blockchain technology for bridging trust, traceability and transparency in circular supply chain. Inf Manag 103:508
7. Iqbal (2020) Safe farming as a service of blockchain-based supply chain management for improved transparency. Clust Comput 23(3):2139–2150
8. Kaihua Q, Zhou L, Gervais A (2022) Quantifying blockchain extractable value: how dark is the forest. In: 2022 IEEE symposium on security and privacy (SP). IEEE Press, pp 198–214
9. Thammavong (2021) Interoperability electronic transactions and block-chain technology security. Int J Sociol Anthropol Sci Rev (IJSASR) 1(6):7–16
10. Chan Kyu P, Jun Baek S (2019) Blockchain of finite-lifetime blocks with applications to edge-based IoT. IEEE Internet Things J 7(3):2102–2116
11. Salman T, Jain R, Gupta L (2018) Probabilistic blockchains: a blockchain paradigm for collaborative decision-making. In: 2018 9th IEEE annual ubiquitous computing, electronics & mobile communication conference. IEEE Press, pp 457–465
12. Perwej Y et al (2019) The internet of things (IoT) and its application domains. Int J Comput Appl 182(49):36–49
13. Mazhar (2022) Forensic analysis on internet of things (IoT) device using machine-to-machine (M2M) framework. Electronics 11(7):1126
14. Sanclemente GL (2022) Understanding cognitive human bias in artificial intelligence for national security and intelligence analysis. Secur J 35(4):1328–1348
15. Munirathinam S (2020) Industry 4.0: industrial internet of things (IIOT). Adv Comput 117(1)
16. Pollino (2012) Collaborative open-source geospatial tools and maps supporting the response planning to disastrous earthquake events. Future Internet 4(2):451–468
17. Chand (2018) Survey on the role of IoT in intelligent transportation system. Indones J Electr Eng Comput Sci 11(3):936–941
18. Du X (2020) Blockchain-based intelligent transportation: a sustainable GCU application system. J Adv Transp 2020:5036792
19. Fallgren (2018) Fifth-generation technologies for the connected car: capable systems for vehicle-to-anything communications. IEEE Veh Technol Mag 13(3):28–38
20. Bacco (2020) Monitoring ancient buildings: real deployment of an IoT system enhanced by UAVs and virtual reality. IEEE Access 8:50131–50148
21. Chen BY et al (2022) A spatiotemporal data model and an index structure for computational time geography. Int J Geogr Inf Sci 1–34
22. Gupta (2022) Game theory-based authentication framework to secure internet of vehicles with blockchain. Sensors 22(14):5119
290
Inayatulloh et al.
23. Kamali (2021) A blockchain-based spatial crowdsourcing system for spatial information Collection using a reward distribution. Sensors (21.15):5146 24. Gabay (2020) Privacy-preserving authentication scheme for connected electric vehicles using blockchain and zero knowledge proofs. IEEE Trans Veh Technol (69.6):5760–5772 25. Joshi (2021) Unified authentication and access control for future mobile communication based lightweight IoT systems using blockchain. Wirel Commun Mob Comput (2021):8621230 26. Wang, Y (2020) BSV-PAGS: blockchain-based special vehicles priority access guarantee scheme. Comput Commun (161):28–40 27. Distefano (2021) Trustworthiness for transportation ecosystems: the blockchain vehicle information system. IEEE Trans Intell Transp Syst (22.4):2013–2022 28. Sharma (2018) Blockchain-based distributed framework for automotive industry in a smart city. IEEE Trans Industr Inform (15.7):4197–4205
A Marketing Strategy for Architects Using a Virtual Tour Portfolio to Enhance Client Understanding
A. Pramono and C. Yuninda
Abstract A portfolio is a prerequisite for anyone working in the creative industry, especially architects. Portfolios can take the form of social media content, PDF files, websites, and mobile apps. Architects frequently employ portfolio presentation methods such as images or 3D renderings. Several universities use 360-degree presentation techniques, in photographs or 3D visualization graphics, to introduce their campus environments, and museums commonly use such techniques to show off their collections, but architects rarely use them for portfolios. This study aims to determine how effective the virtual tour method is in helping clients understand an architect's work. The research methodology was a single case study. Data were collected through surveys of a variety of clients. In addition, interviews with a few carefully selected clients were conducted to determine the significance of the presentation strategy used in the virtual tour. In the first stage, the architect makes a design layout plan in 2D and 3D. The layout design was then developed into a 3D perspective design made using computer software. The final design is a website-based 360-degree virtual tour. Of the three presentation methods, most respondents preferred the virtual tour's 3D panoramic visualization, and 88.1% of participants understood the space described by the architect. A proper presentation technique enhances the client's understanding and reduces misinterpretation of the architect's intent. It aligns with the sustainable design concept, which benefits people, profit, and the planet. Keywords Architect · Client understanding · Marketing · Portfolio · Virtual tour
A. Pramono (B) Interior Design, School of Design, Bina Nusantara University, West Jakarta, Indonesia e-mail: [email protected] C. Yuninda Department of Architecture, Engineering Faculty, University of Merdeka, Malang, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_26
1 Introduction Portfolios are compilations of drawings or photographs that document an architect's most significant accomplishments [1, 2]. Anyone working in the creative industries, particularly architects, must have a portfolio [3]. A portfolio is a way for architects to display their previous work and allows clients to assess the skill and personality behind it [4]. Before starting to create a portfolio, the architect should define its purpose, format, and concept. Architects generally present rendered visualization images in their design portfolios [5], while completed projects are documented in photographs. Over the last decade, portfolios have evolved from paper to digital, as digital portfolios are more time- and cost-efficient to maintain and keep up to date [6]. Digital portfolio options range from social media and Portable Document Format (PDF) files to websites and mobile apps. Most architects' portfolios consist largely of two-dimensional (2D) Joint Photographic Experts Group (JPEG) images, which can be enhanced with text. Architects can also improve their presentations by adding digital material such as video and music, producing multimedia presentations [7]. Some clients do not understand 2D drawings, so an architect needs 3D visualization images to help clients understand the design [8]. This is aided by the growth of new technologies, such as 3D visualization and virtual reality, used to show off architectural design work [9]. Computer modeling software like SketchUp can produce 3D representations [10] and is suitable for various drawing applications, including architecture and interior design [11]. Rendering software such as Lumion is required to achieve photorealistic results, making 3D images resemble the original [12]. Such software can also produce 360-degree images that enable a user to see the entire room.
As a result, a user can comprehend the area as a whole, minimizing the misleading impressions an architect might otherwise convey. A virtual tour needs 360-degree images or photos. Many previous researchers, including Vidanapathirana [13], have studied the use of 360-degree photos, which offer a more immersive experience than traditional photography. 360-degree photos are frequently employed to introduce campuses; for example, Widiyaningtyas et al. [14] introduced the State University of Malang in Indonesia, and Sharma [15] developed a virtual environment of a university campus using 3D rendering images. The use of 360-degree photos for museum virtual tours has been demonstrated by Bonacini [16], who used a 360 camera for a virtual tour of the Syracuse Regional Archaeological Museum in Italy in collaboration with Google, and by Bandarsyah and Sulaeman [17], who studied student interest in European history through a 360-degree museum virtual tour. Furthermore, Cai et al. [18] demonstrated how a cultural heritage site could feel like a real-world experience when viewed in a 360-degree video. The use of 360-degree photos or 3D rendering images for portfolio purposes has never been studied; therefore, the novelty of this study is the use of virtual tours as an architect's marketing strategy to enhance client understanding.
This research aims to create a digital portfolio that presents architectural and interior design work in the form of a 360-degree virtual tour, displayed on a website that everyone can access. This 360-degree virtual tour is compared with two other presentation methods: 2D working drawings and 3D rendering images. By comparing the three methods, we evaluate whether the 360-degree virtual tour aids an architect's marketing strategy by improving clients' understanding of the architectural and interior design of a room.
2 Methodology This research presents illustrations using various digital presentation techniques as a case study. The author surveyed 20 respondents, 95% of whom were of productive age (15–64 years): 20% were of college age (19–24 years) and 65% were adults (25–60 years). The respondents, the majority of whom are our clients, were asked to complete a questionnaire about their understanding of the images the author presented. The author presented three types of images: floor plan or layout plan images, perspective images, and 360-degree panoramic images in the form of virtual tours. The author assessed the advantages and disadvantages of each of the three images as input for improving presentations in the future. The author also interviewed various clients to gather information about how well they understood the design drawings. A 2D working drawing is the first design presented to the client. The image shows a two-dimensional form with length and width dimensions. The drawing lines merge to form a plane with notations in the form of color descriptions and numerical dimensions, which is the usual way to present room plans and 2D working drawings. The architect then displays the work as a JPEG image of an interior rendered as a room or as a 3D building with length, width, and height dimensions. After the two methods are compared, the designer makes a final portfolio in the form of 3D images using 360-degree virtual reality software. The client is given a new way to experience and understand a building or space design still in the planning stage. In addition, clients can see what the design looks like, feel it realistically with the help of virtual reality software, and explore a room or building realistically, moving from one corner of a room to a different room.
This approach integrates VR tools with 3D modeling software, allowing architects and clients to truly understand the spatial quality of the project. The basic idea of making a 3D portfolio in VR is to make it easier for clients to understand design drawings, especially during the planning stage before the design is realized in the field. Architects and clients share the same goal in designing the room, and clear visualization boundaries prevent misunderstandings about the design intent. Architects will compare client satisfaction between a plain 2D image and a more attractive 3D image. The architect chooses the proper presentation technique by obtaining information from the questionnaire and
Fig. 1 Flowchart of methodology research
interview and uses this portfolio for marketing strategies. The details of the research method flow are shown in Fig. 1.
3 Result and Discussion 3.1 Designing Layout Plan The first stage carried out by an architect is making a design concept. In this phase, the architect collects information from the user about what space will be designed and analyzes the activity of space requirements, the relationship between spaces, and the style desired by a user. After getting adequate information from a client, the architect constructs a pre-design drawing which includes a block plan, circulation system, and layout plan, as represented in Fig. 2. The layout plan is a drawing illustrating the entire building seen from above. At the stage of making the layout plan, it is still a 2D drawing that informs the notation of the walls, columns, furniture, and dimensions seen from the top side of the building. The left picture in Fig. 2 depicts the 2D layout plan, while the right picture demonstrates the 3D floor plan. Figure 2 above represents the layout of a two-bedroom apartment. The entrance to the apartment unit is drawn at the bottom. Next up is the kitchen and dining area
Fig. 2 Apartment layout plan in 2D monochrome drawing and 3D rendering image
on the front side, with the bathroom on the left side of the kitchen area. The kitchen and dining room share one area, but the kitchen table and dining table face each other. The kitchen table consists of a bottom shelf, a closed top shelf, and an open shelf. The bottom shelf holds the stove and sink, with a gas storage drawer and a drawer for a dish rack. On the top shelf, the closed section stores kitchen utensils, the open shelves usually hold spices, and a kitchen cooker hood hangs at the bottom. In the dining area, the furniture is designed to follow the shape of a sloping wall. The dining table is smart furniture whose primary function is dining, and there are also open shelves for the refrigerator and other eating utensils. On the right and left of the apartment unit are bedrooms, each with one single bed measuring 120 × 200 cm, a ceiling-high backdrop, and a nightstand in the form of a hanging table rack 50 cm above the floor. Above the nightstand is a chandelier that provides decorative lighting. A TV backdrop mounted to the wall is positioned on the front side of the bed. A study table and storage rack are next to a small window on the left side of the television backdrop. The back of the wardrobe is curved, following the semi-circular shape of the curved wall. The left side of the image is a 2D layout plan view, while the right side is a layout plan view that looks closer to 3D. From our survey of 20 respondents, 57.9% stated that this image is essential for seeing the relationship between spaces in a building. At this stage, the architect needs to provide a long narrative overview so that the client can quickly understand the drawing. After the client understands and agrees with the layout, the architect develops the drawing into elevations (front, right side, left side, and rear views), the mechanical, electrical, and plumbing (MEP) system, and finishing materials.
The drawing at this stage is a parallel projection, not a perspective view. It is equipped with general dimensions (floor level heights, room sizes, and the building as a whole) and contains an outline of the material specifications to be used, especially architectural finishing materials. The last stage
is creating a detailed engineering design (DED) which the contractor will then use to build a building.
3.2 Rendering 3D Modelling in Perspective It is necessary to make perspective visualization images to facilitate the client's understanding of the drawing. This image combines the layout plan and section or view plan to look 3D, as seen in Fig. 3. Presenting images at this stage further enhances the client's understanding of the visualization of space. In today's digital era, a visualizer often uses digital rendering software to produce photorealistic images. In this study, clients could understand the furniture in a room in more detail: 15.8% of respondents preferred this image, compared with 10.5% for the 2D layout. The client can now better visualize the shape concept created by the architect because they have access to a 3D image of the entire environment. Rendering 3D modeling is the process of turning a 3D model into a perceptible object with shape, texture, and size. A 3D design carries much more than a 2D image; apart from shape, it also has contrast, shadow, and depth. 3D modeling is needed in many fields, such as inspection, navigation, object identification, visualization, and animation. Creating a complete, detailed, accurate, and realistic 3D model from an image is still complicated, especially for large and complex models. In general, 3D modeling consists of several processes, including design, 3D measurement, framework and modeling, texturing, and visualization.
Fig. 3 3D perspective image rendered using computer software
Architects can display the shape, size, and specifications of a product using a 3D design. From this, a company can assess several crucial factors, including product specifics, production costs, and product design concepts. Multinational businesses frequently use 3D design for product development because of the benefits it provides, including accelerating the prototyping process, assisting in identifying the appropriate specifications, and reducing errors. Designers frequently use 3D applications such as SketchUp, 3ds Max, Autodesk Revit, Blender, and more.
3.3 Developing 360 Imagery in Virtual Tour 360-degree images are the most popular option among clients. This presentation method offers a virtual tour of the building with 360-degree images of each room, as seen in Fig. 4, and can be accessed through https://begawan.griyapram.com. A client can explore each room individually by clicking the navigation buttons. This presentation method must be delivered on a website rather than as a printout. Clients can comprehend the entire environment when they view their surroundings in 360-degree horizontal and vertical panoramic views. Because it let them properly understand the details of each room, 84.2% of respondents chose this presentation technique. Most respondents (88.1%) indicated that they would use this presentation technique, which will also be helpful to clients when discussing room design. However, clients still perceive this presentation method as lacking, since it does not show how the spaces in a structure relate to one another. This is due to the absence of a layout key plan, a small-scale view that shows where a segment is located in a drawing. The client fully understands
Fig. 4 Rendering image generated with the 360-degree panoramic technique
the details of each room but not the room's position. By adding a layout key plan, the client can both see a detailed picture of each room and know its position within the building, further increasing understanding.
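The room-by-room navigation described above can be thought of as a graph of panoramic images connected by clickable hotspots. The Python sketch below is purely illustrative — the room names, file paths, and class design are our assumptions, not part of the published tour — but it shows one way such navigation data might be structured.

```python
from dataclasses import dataclass, field

# Hypothetical model of a web-based 360 virtual tour: each room holds one
# panoramic image and a set of clickable hotspots leading to other rooms.
@dataclass
class Room:
    name: str
    panorama_url: str  # 360-degree rendering shown for this room
    hotspots: dict = field(default_factory=dict)  # button label -> target room

class VirtualTour:
    def __init__(self):
        self.rooms = {}

    def add_room(self, room):
        self.rooms[room.name] = room

    def link(self, a, b):
        # Add a bidirectional navigation button between two rooms.
        self.rooms[a].hotspots[f"go to {b}"] = b
        self.rooms[b].hotspots[f"go to {a}"] = a

    def navigate(self, current, label):
        # Simulate a client clicking a navigation button in room `current`.
        return self.rooms[self.rooms[current].hotspots[label]]

tour = VirtualTour()
for name in ("kitchen", "dining area", "bedroom"):
    tour.add_room(Room(name, f"/panoramas/{name}.jpg"))
tour.link("kitchen", "dining area")
tour.link("dining area", "bedroom")

here = tour.navigate("kitchen", "go to dining area")
print(here.name)              # dining area
print(sorted(here.hotspots))  # ['go to bedroom', 'go to kitchen']
```

Attaching a key-plan coordinate to each `Room` would address the limitation noted above, letting the client see each room's position in the overall layout.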
3.4 Appropriate Presentation Gains Sustainable Design Misinterpretation in a planning design results in increased implementation costs, because the architect's perception differs from the client's. Revisions may delay work that should be completed on time, and costs will increase accordingly. Such misunderstandings and the revisions they cause run against the fundamental tenets of sustainable design. Therefore, a communication strategy is required to ensure that the client and the architect share the same comprehension. Time and cost overruns can be minimized when the client firmly grasps the architect's perspective, so both the architect and the client profit from a suitable presentation method. Appropriate presentation techniques align with the concept of sustainable design, which brings benefits to people, profit, and the planet.
4 Conclusion The planning design can be displayed in depth in each space, together with furniture details and the concept communicated by the design. This is why most respondents preferred the presentation style employing 3D panoramic 360-degree visualizations. Clients can also experience the room's environment, such as how big, small, or dark it is, and communicate any problems or desired modifications during the design planning stage. Errors and ambiguities in design intent and purpose are minimized. This presentation approach is more engaging when experienced as a virtual tour and facilitates a deeper understanding, although it can only be viewed online on a website. Architects can use it in marketing strategies because it helps users deeply understand the architect's work. The second most popular method is 3D visualization design, which renders images as JPEG files. This technique provides an overview of space by displaying a room's length, width, and height so that the user can see the entire room from a specific point of view. It is usually chosen as the second alternative to the 3D panoramic 360-degree technique because it saves time in the work process, and professionals in the design industry understand presentations made with rendered visualization pictures well. The last technique chosen by the respondents is the delivery of designs with layout plans. This technique is usually an initial presentation from the designer to the client to provide an overview before proceeding to a more detailed stage, such as a layout plan, elevations, and sections equipped with MEP drawings. Although
rarely preferred for understanding space, this presentation technique can provide a complete picture of the relationship between spaces in a building. Combining several presentation techniques is necessary to facilitate the user's understanding of the architect's ideas and to avoid misinterpretation between the architect and the client. This aligns with the sustainable design concept, which benefits people, profit, and the planet, and can be achieved by choosing an appropriate presentation technique.
References 1. Alvarez AR, Moxley DP (2004) The student portfolio in social work education. J Teach Soc Work 24:87–103 2. Pichura A (2012) The architect's portfolio: planning, design, production. Eng Technol J 30:252–253 3. Abrami PC, Barrett H (2005) Directions for research and development on electronic portfolios. Can J Learn Technol 31(3) 4. Chailan C (2009) Brand architecture and brands portfolio: a clarification. EuroMed J Bus 4(2):173–184. https://doi.org/10.1108/14502190910976529 5. Whiteside-Dickson A, Rothgeb TD (1989) Portfolio reviews as a means of quality control in interior design programs. J Inter Des 15:21–28. https://doi.org/10.1111/j.1939-1668.1989.tb00140.x 6. Farrell O (2020) From portafoglio to eportfolio: the evolution of portfolio in higher education. J Interact Media Educ 2020:1–14. https://doi.org/10.5334/jime.574 7. Pramono A (2006) Presentasi multimedia dengan Macromedia Flash [Multimedia presentation with Macromedia Flash]. Andi Offset, Yogyakarta 8. Al-Kodmany K (2001) Visualization tools and methods for participatory planning and design. J Urban Technol 8:1–37. https://doi.org/10.1080/106307301316904772 9. Wang R (2019) Application and realization of VR technology in interior design. In: Proc 2019 12th Int Conf Intell Comput Technol Autom (ICICTA 2019), pp 182–186. https://doi.org/10.1109/ICICTA49267.2019.00046 10. Sakuma T, McCauley T (2014) Detector and event visualization with SketchUp at the CMS experiment. J Phys Conf Ser 513. https://doi.org/10.1088/1742-6596/513/2/022032 11. Wu Z (2016) The application of SketchUp in architectural design teaching in higher vocational education. Int Core J Eng 2:148–151 12. Ramadhanty DM, Handayani T (2020) The effect of computer-based 3D visualization. IOP Conf Ser Mater Sci Eng 879. https://doi.org/10.1088/1757-899X/879/1/012144 13.
Vidanapathirana M, Meegahapola L, Perera I (2017) Cognitive analysis of 360 degree surround photos. In: Future technologies conference (FTC) 2017, pp 1036–1044 14. Widiyaningtyas T, Prasetya DD, Wibawa AP (2018) Web-based campus virtual tour application using ORB image stitching. In: Int Conf Electr Eng Comput Sci Informatics (EECSI), pp 46–49. https://doi.org/10.1109/EECSI.2018.8752709 15. Sharma S, Rajeev SP, Devearux P (2015) An immersive collaborative virtual environment of a university campus for performing virtual campus evacuation drills and tours for campus safety. In: 2015 Int Conf Collab Technol Syst (CTS 2015), pp 84–89. https://doi.org/10.1109/CTS.2015.7210404 16. Bonacini E (2016) The "Paolo Orsi" Syracuse archaeological museum pilot project: a 360° tour with Google indoor street view. In: Proc 11th Int Conf Signal-Image Technol Internet-Based Syst (SITIS 2015), pp 833–840. https://doi.org/10.1109/SITIS.2015.17
17. Bandarsyah D, Sulaeman (2021) Student interest in understanding European history through the museum virtual tour 360. In: 2021 International conference on computer & information sciences (ICCOINS). IEEE, pp 286–288. https://doi.org/10.1109/ICCOINS49721.2021.9497175 18. Cai S, Ch'Ng E, Li Y (2018) A comparison of the capacities of VR and 360-degree video for coordinating memory in the experience of cultural heritage. In: Proc 2018 3rd Digit Herit Int Congr (Digital Heritage 2018), held jointly with 2018 24th Int Conf Virtual Syst Multimedia (VSMM 2018), pp 1–4. https://doi.org/10.1109/DigitalHeritage.2018.8810127
Bee AR Teacher Framework: Build Augmented Reality Independently in Education Maria Seraphina Astriani, Raymond Bahana, and Arif Priyono Susilo Ahmad
Abstract The COVID-19 pandemic has made home-based learning common all over the world. Learning through Augmented Reality (AR) technology can help students understand learning materials in a more creative way than the traditional one; AR can make learning fun, memorable, and interactive. However, developing AR software is a hard task: it is usually custom made (made by order), requires a technical person to develop it, and costs a lot of money. This tends to be a problem, and implementing AR, especially in the education field, becomes quite challenging. To solve this problem, the researchers propose the Bee AR Teacher framework to guide developers in creating an AR system. Bee AR Teacher contributes an AR framework consisting of three components: technologies, operations, and human resources. Once a developer has built the AR system successfully, teachers can use it to independently build AR solutions that support teaching and learning activities, without the need for technical knowledge. One special feature of the AR solution is that teachers can share or navigate material from their devices while students see the material (a virtual object) directly on their smartphones and can interactively manipulate it with their fingers. Keywords Augmented reality · Framework · Education · Development · Planning
M. S. Astriani (B) · R. Bahana Computer Science Department, School of Computing and Creative Arts, Bina Nusantara University, 11480 Jakarta, Indonesia e-mail: [email protected] A. P. S. Ahmad Creative Advertising Program, Visual Communication Design Department, School of Design, Bina Nusantara University, 11480 Jakarta, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_27
1 Background Augmented Reality (AR) technology allows users to get information and see content visually overlaid on the real world [1]. AR has gained recognition in various
fields, especially in education [2]. Because of the COVID-19 pandemic, home learning has become common and has been implemented all over the world. Learning through AR technology can help students understand learning material in a more creative way than the conventional one. According to Augment.com in 2020, the use of AR in education makes learning more fun, memorable, and interactive [3, 4]. AR software is usually developed custom made (according to needs and requirements) and requires specialized technical people who understand how to build it. Developing an AR project generally costs a lot of money because it needs to be tailored, which tends to be a problem on a limited budget. The technology is also not easy to develop from scratch if the developer has never done an AR project before, and if the project timeline is tight, developers will not have enough time to learn the technical skills needed to create the AR software. Besides developing the software, it is also necessary to pay attention to the hardware or devices used for AR. Because of these problems, the implementation of AR in education has been stunted, and AR is not often found at schools or universities. A framework needs to be chosen and all the requirements prepared before a project is developed, to make sure everything has been properly arranged. Not having a framework is like not having a plan, because the developer will have no idea of how the project can be developed. Without a plan, a project generally risks being delayed or not completed on time, needing more budget, or, fatally, failing entirely [5]. This would waste all the effort that has been dedicated to the project. To solve this problem, the researchers propose a framework solution named "Bee AR Teacher", which contributes a framework for AR development.
The benefit of implementing the framework is that the developer has a guideline for developing the AR software and setting up all the requirements. The resulting software can create other AR solutions without any technical skill and helps the users (teachers) independently build AR solutions that support their teaching activities; teachers do not need coding knowledge or technical skills to build these solutions. When an AR solution has been completed, teachers can share a link with students so they can see the results on their smartphones. There are two types of AR implementation: marker-based (QR code, barcode) and marker-less [3]. In this research, the main focus is on marker-based AR in which the solution recognizes objects in the real world without using barcodes or QR codes. One special feature of the AR solution is that teachers can share or navigate material from their devices while students see the material (a virtual object) directly on their smartphones and can interactively manipulate it with their fingers.
2 Literature Review 2.1 Augmented Reality Augmented Reality (AR) is a technology that allows a computer to generate virtual information layered onto a real-world environment captured by the camera [6–8]. Usually, the object appears directly or indirectly in real time. Through the use of portable devices (such as smartphones and tablets), (semi)transparent surfaces (such as tabletops), and wearable devices (for example, AR glasses), virtual information can be projected into actual, real-world environments [9, 10]. Mobile AR is rapidly being adopted, and the number of users is increasing day by day [11]. Over the past few decades, many researchers and professionals have developed pragmatic theories and applications for adopting AR in education and enterprises. Based on these studies, several AR innovations have been developed and used to improve the efficiency of education and the training of students and employees. In addition, many studies have been conducted to improve the compatibility and application of AR in real life [8].
2.2 Augmented Reality and Pedagogy in Special Education Framework The Augmented Reality and Pedagogy in special education framework (AR-PeRCoSE) combines AR technology with pedagogical theories and artificial intelligence in special education (see Fig. 1). This framework is uniquely designed for promoting reading comprehension. Students scan their schoolbook using an AR application on a tablet mobile device. Over the internet, the material (visual and audio) is retrieved from the server and displayed on the tablet's screen in real time. Instructors can design the learning material using the AR system and save everything in the system's database [12].
2.3 Learning Analytics Framework for AR The Learning Analytics (LA) framework consists of several components: design, development, analysis, and assessment (Fig. 2). The LA framework can be adjusted based on needs and can be implemented in AR for education (AR-supported interventions). Instructors create specific educational materials in traditional (for example, books) or digital formats. Students can access the content and interact with it using mobile devices (phones, tablets, AR glasses, and headsets). Every action
M. S. Astriani et al.
Fig. 1 AR-PeRCoSE framework [12]
Fig. 2 LA framework for AR-supported interventions [9]
taken by a student is recorded in an (external) database. The LA process can use this data as input that the instructor later analyzes to improve the quality of the teaching–learning process.
Bee AR Teacher Framework: Build Augmented Reality Independently …
3 Bee AR Teacher Solution and Discussion

The AR solution focuses on three things: the tools used (smartphones), especially the camera; the custom AR system used by teachers so that they can create AR content independently; and the AR client system (for students), which highlights the technique used to detect objects in the real world with the camera and project virtual information in the form of virtual objects onto the smartphone screen. This solution is the basis of the Bee AR Teacher framework.

Building the framework starts from defining the big picture of the idea to ensure that the framework is on the right track. An architectural design helps the researcher capture this big picture. The architecture of Bee AR Teacher (Fig. 3) consists of three main parts: a custom AR system accessed through a computer, an AR client system that uses the smartphone camera (Fig. 4), and a server that stores data. When a teacher creates and places virtual objects using the custom AR system, the data are stored on the server over the internet. For students to see the AR objects created by the teacher, the AR client system needs to be connected to the internet to retrieve the data from the server.

The Bee AR Teacher framework offers something special: an AR system built using this framework lets teachers build AR solutions independently. If they need other teaching materials, teachers can make their own AR content and adjust it to align with the teaching materials/topics.
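The flow above — the teacher publishes virtual objects to the server, and each student's AR client pulls them over the internet — can be sketched as follows. This is a minimal illustration only, not the actual Bee AR Teacher implementation; the class and method names are hypothetical.

```python
import json

class ARServer:
    """Minimal stand-in for the Bee AR Teacher data server (hypothetical API)."""
    def __init__(self):
        self._store = {}  # lesson_id -> JSON-encoded virtual-object payload

    def publish(self, lesson_id, virtual_objects):
        # The custom AR system (teacher side) uploads virtual objects here.
        self._store[lesson_id] = json.dumps(virtual_objects)

    def fetch(self, lesson_id):
        # The AR client (student side) retrieves the latest payload.
        return json.loads(self._store.get(lesson_id, "[]"))

# The teacher creates a lesson with one virtual object...
server = ARServer()
server.publish("biology-01", [{"model": "bee.obj", "scale": 1.0}])

# ...and a student's AR client retrieves it over the (simulated) network.
objects = server.fetch("biology-01")
print(objects[0]["model"])  # bee.obj
```

Because students always fetch the latest payload for a lesson, any change the teacher publishes is what the clients render next, which mirrors the live-update behaviour described for the framework.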
Fig. 3 Bee AR Teacher architecture design
Fig. 4 AR client system result
Based on past research (existing frameworks and implementations), the main components usually focus on technology, i.e., software and hardware. However, components such as operations and human resources also need to be involved while developing an AR solution, and they need to be included in the framework. The inspiration for the Bee AR Teacher framework comes from combining several things: the big picture (obtained from the architectural design), the existing frameworks from past research (the AR-PeRCoSE and LA frameworks), an operations component, and a human resources component. The Bee AR Teacher framework is illustrated in Fig. 5. The components in the AR-PeRCoSE and LA frameworks are broken down, grouped, and then merged where components are similar to one another. The results obtained from these two frameworks are then incorporated into the components of the Bee AR Teacher framework. Technologies, Operations, and Human Resources are the components of the Bee AR Teacher framework; they cannot be separated because each depends on the others. The framework is intended to help developers plan and prepare everything needed to develop an AR solution, with the expected result of minimizing failures that may occur in the future.
3.1 Technologies

Technologies usually consist of physical things (hardware) and software. In the Bee AR Teacher framework, a smartphone with a camera and objects in the real world are the physical things. A user interface (wireframe and front-end design), virtual objects (assets), interaction (the code/library/technology that implements the features), and computation models (algorithms to detect the object and project the virtual objects on the
Fig. 5 Bee AR Teacher framework
smartphone’s screen) are needed to create the AR system. A data warehouse is useful to store the data (for example, the virtual objects). For students to see and navigate the materials on screen, their smartphones need to be connected to the internet. The technologies must support the following actions. Students scan the object with the camera on their smartphones. There is a special feature in the Bee AR Teacher solution that other AR applications usually do not have: teachers can share the content of the materials, and students see the content directly. If the teacher changes the material, the virtual object on the students' smartphone screens changes immediately along with the teacher's changes. This method is adopted from, and similar to, teaching in class (a teacher usually uses presentation slides and navigates through them) but with extra interactive features. For example, students are able to rotate the virtual object by swiping the screen, zoom in or out by pinching, and pan the virtual object by swiping with two fingers.
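The touch interactions described above (swipe to rotate, pinch to zoom, two-finger swipe to pan) can be modeled as updates to a virtual object's transform. The sketch below is illustrative only; the gesture names and transform fields are assumptions, not code from the Bee AR Teacher system.

```python
def apply_gesture(transform, gesture, dx=0.0, dy=0.0, factor=1.0):
    """Update a virtual object's transform from a touch gesture.

    transform: dict with 'rotation_y' (degrees), 'scale', and 'pan' (x, y).
    gesture:   'swipe' rotates, 'pinch' zooms, 'two_finger_swipe' pans.
    """
    t = dict(transform)
    if gesture == "swipe":                   # one-finger swipe -> rotate
        t["rotation_y"] = (t["rotation_y"] + dx) % 360
    elif gesture == "pinch":                 # pinch -> zoom in/out
        t["scale"] = max(0.1, t["scale"] * factor)
    elif gesture == "two_finger_swipe":      # two-finger swipe -> pan
        px, py = t["pan"]
        t["pan"] = (px + dx, py + dy)
    return t

obj = {"rotation_y": 0.0, "scale": 1.0, "pan": (0.0, 0.0)}
obj = apply_gesture(obj, "swipe", dx=45)                 # rotate 45 degrees
obj = apply_gesture(obj, "pinch", factor=2.0)            # zoom in 2x
obj = apply_gesture(obj, "two_finger_swipe", dx=10, dy=-5)
print(obj)
```

In a real AR client, the rendering engine would re-project the virtual object each time the transform changes; here the transform is just a dictionary so the mapping from gesture to effect is easy to see.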
3.2 Operations

In the operations component, the actions executed while building the AR solution are similar to general software development. The flow starts from planning, designing, preparing the requirements, developing, staging, and finally deploying. The wireframe, database structure, use case diagram, sequence diagram, and other diagrams are usually created in the designing phase. Next, all requirements
(technologies, operations, and human resources) need to be listed and prepared. In the developing phase, the custom AR system and the AR client system are developed. The resulting system should be user friendly, and the layout design should be kept as simple as possible, with drag-and-drop tools into the working area to make it easier for teachers to design AR displays. Staging, or testing, is an important step to find out whether the AR solution functions properly. If the results show that something still needs to be improved, the process returns to the developing phase, or even the designing phase, until the results are in line with the expected target. Once done, the flow moves to deploying so that teachers and students can use the AR solution.
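The build flow above — planning through deploying, with staging looping back to development when tests fail — can be sketched as a small phase machine. This is an illustrative sketch of the described process, not code from the paper.

```python
# Ordered phases of the operations flow described in the text.
PHASES = ["planning", "designing", "requirements",
          "developing", "staging", "deploying"]

def next_phase(current, staging_passed=True):
    """Advance the build flow one step; a failed staging loops back to developing."""
    if current == "staging" and not staging_passed:
        return "developing"  # rework until results meet the expected target
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]

print(next_phase("planning"))                       # designing
print(next_phase("staging", staging_passed=False))  # developing (rework loop)
print(next_phase("staging", staging_passed=True))   # deploying
```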
3.3 Human Resources

Human resources can be divided into two parts: technical and non-technical. Technical refers to people with the skills to develop information technology (IT) projects, such as systems analysts, programmers, designers, and so on. Non-technical human resources are needed to monitor the progress of the work, document it, and handle other non-technical matters. Testers, who are needed by the operations component in the staging phase, may also be non-technical people.
4 Conclusion and Recommendation

Developing AR software is a hard task: it is usually custom made (built to order), requires technical people to develop, and costs a lot of money. This tends to be a barrier to the implementation of AR, especially in education. To solve this problem, the researchers propose the Bee AR Teacher framework to let developers create AR systems. The Bee AR Teacher framework consists of three components: technologies, operations, and human resources; these components cannot be separated because each depends on the others. This framework contributes to the family of AR frameworks and is intended to help developers plan and prepare everything needed to develop an AR solution. For future work, the Bee AR Teacher framework can be combined with gamification: badges or points are awarded when students finish their tasks. Gamification is expected to increase student engagement with the AR solution created by the teacher.

Acknowledgements This work is supported by the Research and Technology Transfer Office, Bina Nusantara University, as a part of Bina Nusantara University's International Research Grant entitled Bee AR Teacher: Create Your Own AR, with contract number 061/VR.RTT/IV/2022 and contract date 8 April 2022.
References

1. Google AR & VR, https://arvr.google.com/, last accessed 2022/02/07
2. Faqih KM, Jaradat MIRM (2021) Integrating TTF and UTAUT2 theories to investigate the adoption of augmented reality technology in education: perspective from a developing country. Technol Soc 67:101787
3. Kamphuis C, Barsom E, Schijven M, Christoph N (2014) Augmented reality in medical education? Perspect Med Educ 3(4):300–311
4. Roopa D, Prabha R, Senthil GA (2021) Revolutionizing education system with interactive augmented reality for quality education. Mater Today Proc 46:3860–3863
5. Astriani MS, Yi LH, Kurniawan A, Bahana R (2021) ITfResch: customer neuroscience data collection and analysis IT strategic planning for memorizing COVID-19 advertisements. In: 2021 IEEE 5th international conference on information technology, information systems and electrical engineering (ICITISEE), pp 376–380
6. Azuma RT (1997) A survey of augmented reality. Presence Teleoperators Virtual Environ 6(4):355–385
7. Zhou F, Duh HBL, Billinghurst M (2008) Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR. In: 2008 7th IEEE/ACM international symposium on mixed and augmented reality, pp 193–202. IEEE
8. Lee K (2012) Augmented reality in education and training. TechTrends 56(2):13–21
9. Kazanidis I, Pellas N, Christopoulos A (2021) A learning analytics conceptual framework for augmented reality-supported educational case studies. Multimodal Technol Interact 5(3):9
10. Wu HK, Lee SWY, Chang HY, Liang JC (2013) Current status, opportunities and challenges of augmented reality in education. Comput Educ 62:41–49
11. Yavuz M, Çorbacıoğlu E, Başoğlu AN, Daim TU, Shaygan A (2021) Augmented reality technology adoption: case of a mobile application in Turkey. Technol Soc 66:101598
12. Kapetanaki A, Krouska A, Troussas C, Sgouropoulou C (2021) A novel framework incorporating augmented reality and pedagogy for improving reading comprehension in special education. Novelties Intell Dig Syst 105–110
Performance Evaluation of Coffee Bean Binary Classification Through Deep Learning Techniques

Fajrul Islamy, Kahlil Muchtar, Fitri Arnia, Rahmad Dawood, Alifya Febriana, Gregorius Natanael Elwirehardja, and Bens Pardamean

Abstract Coffee beans are one of the high-value commodities in Indonesia, but sorting them by quality still relies on visual inspection and mechanical sieve machines. This study aims to provide an alternative for classifying coffee beans using deep learning algorithms, namely ResNet-18 and MobileNetV2. The dataset used is USK-COFFEE, which consists of two classes: normal coffee beans and defect coffee beans. The models were trained on both the balanced and imbalanced datasets, meaning that two scenarios were performed. From the results of this study, the accuracy of the ResNet-18 model was 91.50% when trained on the balanced dataset and 90.87% on the imbalanced dataset. It outperformed the MobileNetV2 architecture, which only achieved 81.25% and 81.12% accuracy when trained on the balanced and imbalanced datasets, respectively.

Keywords Coffee beans · Deep learning · Binary classification · Computer vision · Convolutional neural networks
1 Introduction As one of the few countries with the ideal climate and soil conditions for growing coffee plants, Indonesia is one of the largest coffee bean exporters globally [1]. When harvesting, farmers will refer to the Indonesian National Standard (SNI) regarding F. Islamy · K. Muchtar (B) · F. Arnia · R. Dawood · A. Febriana Department of Electrical and Computer Engineering, Universitas Syiah Kuala, 23111 Banda Aceh, Indonesia e-mail: [email protected] G. N. Elwirehardja · B. Pardamean Bioinformatics and Data Science Research Center, Bina Nusantara University, 11480 Jakarta, Indonesia B. Pardamean Computer Science Department, Master of Computer Science Program, BINUS Graduate Program, Bina Nusantara University, 11480 Jakarta, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_28
F. Islamy et al.
the level of coffee bean defects, which is assessed based on shape and color [2]. The perfect coffee bean is slightly oval in shape and light green in color. The selection of coffee beans generally relies on visual inspection and automatic sieve machines, which may increase error rates due to environmental factors, such as poor illumination or differing eyesight between inspectors [3]. In addition to the vision-based method, coffee beans are also passed through a sieve to separate the beans by size. The sieve method, with either human power or mechanical machines, can help reduce the workload; however, it does not include color as one of the parameters of bean quality.

The fourth industrial revolution introduced several implementations of Artificial Intelligence (AI) in agricultural technology. With the advent of deep learning [4], Artificial Neural Networks (ANN) have gained massive attention and success, especially Convolutional Neural Networks (CNN) [5]. CNNs have the capability to comprehend spatial features, making them suitable for various computer vision tasks in agricultural technology, ranging from ripeness classification [6–8] and plant disease detection [9] to plant counting [10].

Over the years, several studies on the classification of coffee bean defects using AI have been carried out. Methods such as the K-Nearest Neighbors (KNN) algorithm, K-Means, and ANNs have been able to classify coffee bean defects with accuracy scores higher than 60%. One of these previous studies performed image processing before feeding the data to ANN and KNN models to classify coffee bean data collected from the National Coffee Research Development and Extension Center (NCRDEC) of the Cavite State University, Indang, Cavite. The researchers used MATLAB software as the medium for digital image data processing.
At this stage, 195 samples were used to train the models and 60 samples were used to evaluate them. In that study, the ANN algorithm achieved an accuracy rate of 96.66%, while the KNN algorithm ranged from 70% to 85% at k = 4, depending on the type of coffee bean defect [11]. In another study, KNN was deployed with the help of MATLAB software and an Arduino microcontroller as the device that executes the program and displays the results. There, the KNN was trained on 100 images of single coffee beans and validated on 120 images of single coffee beans. The highest accuracy was 94.99% for quality inspection at k = 10, and 95.66% for defect type at the same k value [12]. Similarly, researchers have achieved great success in this task using CNNs. In 2019, a team of researchers used a CNN to classify coffee bean defects, obtaining an accuracy rate of 93.34% and an FPR of 0.1007 [13]. In another similar study, a CNN was used as the image processor, obtaining classification accuracies ranging from 72.4% to 98.7% [14]. Based on these past studies, CNNs are a recommendable method for classifying coffee beans, being both efficient and effective in detecting bean size and color according to standards, with accuracy rates ranging from 75 to 96%, although the accuracy depends on the type of defects classified [11, 14]. The use of classification methods based on digital image processing combined with the application
of ANN can increase accuracy by up to 90% compared to non-neural algorithms such as KNN. In this study, two CNN architectures, namely ResNet (Residual Network) [15] and MobileNetV2 [16], were compared for this task. The main contributions of this research are as follows:

1. The coffee bean dataset used was introduced under the name "USK-Coffee Dataset", a collection of 8000 coffee bean images with two classes, namely normal and defect. This dataset can be used in further related studies.
2. This study produces coffee bean classification models using two architectures, namely ResNet-18 and MobileNetV2, trained on both balanced and imbalanced data, and measures the performance of the two models with the accuracy metric collected during the training, validation, and testing phases of the research.
2 Research Methodology

2.1 Pre-processing the Dataset

This research evaluates the performance of CNN models in distinguishing normal coffee beans from those with defects. The USK-Coffee dataset [17, 18], which contains 8000 coffee bean images classified into four classes, was used. The four classes are "Premium", "Peaberry", "Longberry", and "defect". In this study, the images from the "Premium", "Peaberry", and "Longberry" classes were combined into a single class labeled "Normal", meaning that the models were trained to perform binary classification, distinguishing "Normal" coffee beans from "Defect" ones. Each class of the original dataset contains 2000 images with a resolution of 256 × 256 pixels. In total, 6000 images were considered normal and the other 2000 were defects. The dataset was then split into three subsets: training, validation, and testing. The validation subset is used to monitor the models' performance during training, while the test subset is used to evaluate the final models. The mean and standard deviation were calculated to standardize the images. Data standardization changes the image pixels such that the mean and standard deviation become 0.0 and 1.0, making the pixel distribution uniform across all images in the USK-Coffee dataset [17, 18], which helps the model during training. The images are then transformed into tensors to be used in training the models.
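Standardization, as described above, subtracts the dataset mean and divides by the standard deviation so that the resulting pixel distribution has mean 0.0 and standard deviation 1.0. A minimal numeric sketch (using the population standard deviation, as is common for image normalization; the toy pixel values are illustrative):

```python
import statistics

def standardize(pixels):
    """Standardize a flat list of pixel values to mean 0.0 and std 1.0."""
    mean = statistics.fmean(pixels)
    std = statistics.pstdev(pixels)  # population std, as used for image normalization
    return [(p - mean) / std for p in pixels]

pixels = [0, 64, 128, 192, 255]        # toy grayscale values
z = standardize(pixels)
print(round(statistics.fmean(z), 6))   # ~0.0
print(round(statistics.pstdev(z), 6))  # ~1.0
```

In the actual pipeline the same idea is applied per channel across the whole dataset (e.g., via a normalization transform) before the images are converted to tensors.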
2.2 Architectures of the CNN Models Used

The CNN models used in this research are ResNet-18 and MobileNetV2. The overviews of these two architectures are as follows.
2.2.1 ResNet
ResNet is known as a deep CNN architecture that implements residual learning. Residual learning is a mechanism proposed to handle the risk of vanishing gradients by utilizing skip connections. The skip connections allow gradients to flow to earlier layers of deep CNN models, mitigating the risk of the gradients becoming too small in those layers. There are several variations of ResNet, which vary in depth, i.e., the number of layers [15].
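The skip-connection idea can be reduced to one line: a residual block outputs y = x + F(x), so the block only has to learn the residual F. A toy numeric sketch (illustrative only, not the actual convolutional layers of ResNet-18):

```python
def residual_block(x, f):
    """Skip connection: the block outputs y = x + F(x)."""
    return [xi + fi for xi, fi in zip(x, f(x))]

# If the learned residual F is zero, the block reduces to the identity,
# so features (and gradients on the backward pass) flow through unchanged -
# the property that lets very deep ResNets avoid vanishing gradients.
x = [1.0, 2.0, 3.0]
print(residual_block(x, lambda v: [0.0] * len(v)))   # [1.0, 2.0, 3.0]

# A non-zero residual adds a learned correction on top of the input.
print(residual_block(x, lambda v: [1.0, 1.0, 1.0]))  # [2.0, 3.0, 4.0]
```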
2.2.2 MobileNetV2
MobileNetV2 is a neural network architecture known to work well on mobile devices. It is based on an inverted residual structure in which the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. It is known as a lightweight and accurate CNN, being one of the models with the fewest parameters [16].
2.3 Evaluation Metrics and Performance Analysis

In this study, we used the confusion matrix [19] as the main source of evaluation metric calculation. We used one of the most common measures, accuracy, as the benchmark. The accuracy (Acc) is calculated using the following equation:

    Acc = (TP + TN) / (TP + TN + FP + FN)    (1)

where TP is true positive, TN is true negative, FP is false positive, and FN is false negative. Accuracy expresses the overall rate of input images that are classified correctly.
3 Experimental Results

The details of the hyperparameter configurations used in the training stage are listed in Tables 1, 2 and 3. During the experiments, we used the Google Colaboratory platform [20], which provides an NVIDIA® Tesla K80 GPU with 16 GB of memory. The PyTorch deep learning framework [21] was employed, and the pre-trained models, namely ResNet-18 and MobileNetV2, were used for transfer learning purposes. Table 4 lists the average accuracy scores in decimals.

Table 1 Distribution of data in the balanced dataset

Type of data     | Normal class | Defect class
Train Data       | 1200         | 1200
Validation Data  | 400          | 400
Test Data        | 400          | 400
Total            | 2000         | 2000

Table 2 Distribution of data in the imbalanced dataset

Type of data     | Normal class | Defect class
Train Data       | 3600         | 1200
Validation Data  | 400          | 400
Test Data        | 400          | 400
Total            | 4400         | 2000
Table 3 Hyperparameter configurations

                       | ResNet-18       | MobileNetV2
Batch Size             | 2               | 2
Max. Epoch             | 25              | 25
Learning Rate          | 0.001           | 0.001
Learning Rate Schedule | StepLR (7 Step) | StepLR (7 Step)
Image Size             | 224             | 224
Table 4 The accuracy of each model in the training and testing stages

                         | Accuracy in training stage | Accuracy in testing stage
ResNet-18 (Balanced)     | 0.9946                     | 0.915
MobileNetV2 (Balanced)   | 0.8983                     | 0.8125
ResNet-18 (Imbalanced)   | 0.9963                     | 0.9087
MobileNetV2 (Imbalanced) | 0.9325                     | 0.8112
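One simple way to read these accuracies is through the train–test gap, a rough indicator of overfitting. The short sketch below recomputes the gaps from the Table 4 figures:

```python
# (training accuracy, testing accuracy) pairs taken from Table 4.
results = {
    "ResNet-18 (Balanced)":     (0.9946, 0.915),
    "MobileNetV2 (Balanced)":   (0.8983, 0.8125),
    "ResNet-18 (Imbalanced)":   (0.9963, 0.9087),
    "MobileNetV2 (Imbalanced)": (0.9325, 0.8112),
}

gaps = {name: round(train - test, 4) for name, (train, test) in results.items()}
for name, gap in gaps.items():
    print(f"{name}: train-test gap = {gap}")
# Both architectures show a larger gap on the imbalanced dataset,
# consistent with the overfitting discussed in the text.
```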
Table 5 Test results on good coffee bean and defect coffee bean images using the ResNet-18 architecture on imbalanced data. The input images are omitted here; only the softmax confidence scores (%) are listed.

Good coffee beans:   98.12, 98.32, 98.63, 87.04, 98.58
Defect coffee beans: 95.69, 99.01, 97.94, 99.87, 99.99

(continued)
From the results presented in Table 4, it can be inferred that the models performed better after being trained on the balanced dataset. Figures 1, 2 and 3 also show that both ResNet-18 and MobileNetV2 overfit more when trained on the imbalanced dataset, indicated by the higher validation loss and lower validation accuracy. However, the validation loss of the ResNet-18 model on the balanced dataset is marginally lower than its training loss despite the lower validation accuracy. Such results may imply that the model has yet to converge, and more training epochs may be required. This model also outperformed the others in the testing stage, achieving 0.915 accuracy. On the other hand, the ResNet-18 model trained on the imbalanced dataset obtained higher training accuracy but lower validation accuracy, further indicating overfitting. Tables 5 and 6 show some of the prediction results of both models trained on the imbalanced dataset. The models are still able to correctly distinguish the defect coffee beans with high confidence scores. However, some of the confidence scores are below 80%, which indicates that further tuning can potentially improve the models' capabilities in classifying the coffee beans. In the future, other methods
Table 5 (continued)

Good coffee beans:   64.33, 98.27, 99.84
Defect coffee beans: 55.55, 99.98, 81.15
Fig. 1 Sample images of the normal and defect coffee beans
Fig. 2 Comparison of training and validation results of ResNet-18 and MobileNetV2 on the balanced dataset
Fig. 3 Comparison of training and validation results of ResNet-18 and MobileNetV2 on the imbalanced dataset

Table 6 Test results on good coffee bean and defect coffee bean images using the MobileNetV2 architecture on imbalanced data. The input images are omitted here; only the softmax confidence scores (%) are listed.

Good coffee beans:   99.29, 99.96, 99.78, 99.73, 99.8
Defect coffee beans: 75.29, 99.11, 65.55, 51.08, 70.97

(continued)
Table 6 (continued)

Good coffee beans:   97.32, 99.92, 99.95
Defect coffee beans: 62.66, 99.69, 67.88
such as transfer learning using self-supervised models may be explored [22], as studies have shown that pre-training models with Self-Supervised Learning (SSL) can allow them to achieve better results on similar downstream tasks [23].
4 Conclusion

In this study, two CNN architectures were deployed for classifying coffee bean defects. The models demonstrated a high capability to distinguish normal coffee beans from defects, with the ResNet-18 model achieving 0.915 and 0.9087 test accuracy after being trained on the balanced and imbalanced datasets, respectively. Similarly, the MobileNetV2 model also yielded high accuracy, albeit clearly inferior to ResNet-18, obtaining accuracy scores of 0.8125 when trained on the balanced dataset and 0.8112 on the imbalanced dataset. Since this study utilized a large dataset, more complex methods may be tested in training the models, such as using deeper models or utilizing SSL to pre-train them.
References

1. Indonesia Eximbank Institute (2019) Proyeksi Ekspor Berdasarkan Industri: Komoditas Unggulan, Jakarta
2. Badan Standardisasi Nasional. Standar Nasional Indonesia Biji Kopi, Indonesia
3. Syahputra H, Arnia F, Munadi K (2019) Karakterisasi Kematangan Buah Kopi Berdasarkan Warna Kulit Kopi Menggunakan Histogram dan Momen Warna. Jurnal Nasional Teknik Elektro 8:42–50. https://doi.org/10.25077/jnte.v8n1.615.2019
4. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444. https://doi.org/10.1038/nature14539
5. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges CJ, Bottou L, Weinberger KQ (eds) Advances in neural information processing systems. Curran Associates, Inc
6. Harsawardana, Rahutomo R, Mahesworo B, Cenggoro TW, Budiarto A, Suparyanto T, Surya Atmaja DB, Samoedro B, Pardamean B (2020) AI-based ripeness grading for oil palm fresh fruit bunch in smart crane grabber. IOP Conf Ser Earth Environ Sci 426:12147. https://doi.org/10.1088/1755-1315/426/1/012147
7. Herman H, Cenggoro TW, Susanto A, Pardamean B (2021) Deep learning for oil palm fruit ripeness classification with DenseNet. In: 2021 international conference on information management and technology (ICIMTech), pp 116–119. https://doi.org/10.1109/ICIMTech53080.2021.9534988
8. Suharjito, Elwirehardja GN, Prayoga JS (2021) Oil palm fresh fruit bunch ripeness classification on mobile devices using deep learning approaches. Comput Electron Agric 188. https://doi.org/10.1016/j.compag.2021.106359
9. Suparyanto T, Firmansyah E, Wawan Cenggoro T, Sudigyo D, Pardamean B (2022) Detecting Hemileia vastatrix using vision AI as supporting to food security for smallholder coffee commodities. In: IOP conference series: earth and environmental science. https://doi.org/10.1088/1755-1315/998/1/012044
10. Rahutomo R, Perbangsa AS, Lie Y, Cenggoro TW, Pardamean B (2019) Artificial intelligence model implementation in web-based application for pineapple object counting. In: 2019 international conference on information management and technology (ICIMTech), pp 525–530. https://doi.org/10.1109/ICIMTech.2019.8843741
11. Arboleda ER, Fajardo AC, Medina RP (2018) Classification of coffee bean species using image processing, artificial neural network and K nearest neighbors. In: 2018 IEEE international conference on innovative research and development (ICIRD), pp 1–5. https://doi.org/10.1109/ICIRD.2018.8376326
12. García M, Candelo-Becerra JE, Hoyos FE (2019) Quality and defect inspection of green coffee beans using a computer vision system. https://doi.org/10.3390/app9194195
13. Huang N-F, Chou D-L, Lee C-A (2019) Real-time classification of green coffee beans by using a convolutional neural network. In: 2019 3rd international conference on imaging, signal processing and communication (ICISPC), pp 107–111. https://doi.org/10.1109/ICISPC.2019.8935644
14. Pinto C, Furukawa J, Fukai H, Tamura S (2017) Classification of green coffee bean images based on defect types using convolutional neural network (CNN). In: 2017 international conference on advanced informatics, concepts, theory, and applications (ICAICTA), pp 1–5. https://doi.org/10.1109/ICAICTA.2017.8090980
15. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)
16. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C (2018) MobileNetV2: inverted residuals and linear bottlenecks. In: 2018 IEEE/CVF conference on computer vision and pattern recognition, pp 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
17. USK-Coffee Dataset. A multi-class dataset composed of the various green bean arabica, https://comvis.unsyiah.ac.id/usk-coffee/
18. Febriana A, Muchtar K, Dawood R, Lin C-Y (2022) USK-COFFEE dataset: a multi-class green arabica coffee bean dataset for deep learning. In: 2022 IEEE international conference on cybernetics and computational intelligence (CyberneticsCom), pp 469–473. https://doi.org/10.1109/CyberneticsCom55287.2022.9865489
19. Luque A, Carrasco A, Martín A, de las Heras A (2019) The impact of class imbalance in classification performance metrics based on the binary confusion matrix. Pattern Recognit 91:216–231. https://doi.org/10.1016/j.patcog.2019.02.023
20. Carneiro T, da Nóbrega RVM, Nepomuceno T, Bian G-B, de Albuquerque VHC, Filho PPR (2018) Performance analysis of Google Colaboratory as a tool for accelerating deep learning applications. IEEE Access 6:61677–61685. https://doi.org/10.1109/ACCESS.2018.2874767
21. Stevens E, Antiga L, Viehmann T (2020) Deep learning with PyTorch. Manning Publications
22. Hossain MB, Iqbal SMHS, Islam MM, Akhtar MN, Sarker IH (2022) Transfer learning with fine-tuned deep CNN ResNet50 model for classifying COVID-19 from chest X-ray images. Inform Med Unlocked 30:100916. https://doi.org/10.1016/j.imu.2022.100916
23. Jaiswal A, Babu AR, Zadeh MZ, Banerjee D, Makedon F (2021) A survey on contrastive self-supervised learning. https://doi.org/10.3390/technologies9010002
Sustainable Material-Based Bedside Table Equipped with a Smart Lighting System A. Pramono , T. I. W. Primadani, B. K. Kurniawan, F. C. Pratama, and C. Yuninda
Abstract The availability of wood in nature cannot keep up with the growing needs of the population. Numerous wood processing methods, including plywood, blockboard, and veneer, have evolved to fulfill the demands of the solid wood furniture business. Various synthetic materials, such as high-pressure laminate (HPL), are also in circulation as covering materials used in furniture finishing. Even when processed, materials used in production, including furniture creation, generate waste. This study aims to produce a bedside table from waste materials, covering both the primary and finishing components. The furniture has a smart lighting system using an Arduino Uno microcontroller with PIR and LDR sensors. Design exploration is employed in this qualitative research project to create bedside tables with smart lighting features out of woodworking waste; the study defines, ideates, and creates a prototype. The first practical step is designing and creating a bedside table prototype with an automatic light function that turns on only in low-light conditions. The two sensors were function-tested before being incorporated into the furniture. Based on the findings of multiple experiments, the sensor setup was declared successful and installed on the bedside table. The product uses sustainable materials, reusing plywood and HPL waste to make the bedside table. Additionally, the table uses energy efficiently, with energy-saving lights that operate only at night or in low-light situations.

Keywords Arduino · Bedside table · Smart lighting · Sustainable material · Furniture
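The smart lighting behaviour described above — the lamp turns on only when the PIR sensor detects motion while the LDR reads low ambient light — can be sketched as a simple decision function. This is an illustrative Python sketch, not the Arduino firmware; the reading scale and the darkness threshold are assumptions.

```python
def light_should_be_on(motion_detected, ldr_reading, dark_threshold=300):
    """Decide the lamp state from the two sensors.

    motion_detected: bool from the PIR sensor.
    ldr_reading:     ambient-light value from the LDR (lower = darker);
                     the 0-1023 scale and the 300 threshold are assumptions.
    """
    is_dark = ldr_reading < dark_threshold
    return motion_detected and is_dark

# Motion in a dark room -> lamp on; motion in daylight -> lamp stays off.
print(light_should_be_on(True, 120))   # True
print(light_should_be_on(True, 900))   # False
print(light_should_be_on(False, 120))  # False
```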
A. Pramono (B) · T. I. W. Primadani · B. K. Kurniawan Interior Design, School of Design, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] F. C. Pratama Entrepreneurship Department, BINUS Business School Undergraduate Program, Bina Nusantara University, Jakarta, Indonesia C. Yuninda Department of Architecture, Engineering Faculty, University of Merdeka, Malang, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_29
1 Introduction

The manufacture of furniture in the earliest period used materials from nature, such as solid wood. Some solid woods are hard and some are soft; in furniture manufacturing, the softer types are the ones most often used by the community [1]. Hardwoods such as teak and mahogany are also commonly used for furniture [2, 3]. The availability of wood in nature cannot keep up with people's growing needs because wood takes a long time to grow [4]. In addition to processing, wood also requires drying to remove the water it contains, so preparing the primary material for furniture takes time as well.

Various wood processing technologies have emerged, such as laminated wood, blockboard, and veneer, to meet the needs of the solid wood furniture market [5]. In one of these technologies, quality logs are stripped using a rotary (peeling or spindle) machine into sheets 2 mm thick or less, called veneer [6]. Several layers of veneer are stacked to a thickness in multiples of 3 mm, such as 6, 9, 12, 15, up to 18 mm, to form a board called plywood [7], while the rest of the peeled log can be put together and coated with veneer to become a board called blockboard [8]. Making plywood and blockboard is one technique for saving natural materials.

Along with the development of materials, iron, aluminium, glass, and stone became complementary materials for furniture making, and synthetic materials such as PVC have also begun to complement them. Many synthetic covering materials are in circulation for furniture finishing. High-pressure laminate (HPL) is a finishing material often used in furniture [9]; it comes in a variety of patterns, from solid colours, wood grain, leather motifs, and stones to patterns resembling mirrors.
These HPL motifs also play a role in maintaining the availability of natural materials, so they contribute to material sustainability [10]. The remains of materials in production, including furniture manufacture, leave waste [11]. Eliminating industrial waste entirely is a tough job, but efforts are still being made to minimize it. There are three techniques for reducing production waste: reduce, reuse, and recycle [12, 13]. The reuse method employed to enhance the value of items is commonly referred to as upcycling [14]. Large furniture production leaves material waste that can be reused for smaller furniture products [15].

The bedside table is a piece of furniture with smaller dimensions than most others. It can be formed from the remnants of materials for making kitchen sets, backdrops, tables, or wardrobes. The bedside table in this study is equipped with a smart lighting system that illuminates the floor at night and gives a modern impression. Furniture with added technology can be referred to as smart furniture; the term also refers to a piece of furniture that serves a purpose other than its inherent one [16].

This research aims to make a bedside table using waste materials for both the primary and finishing components. Furthermore, the furniture is fitted with a smart lighting system that provides illumination at night to protect the user when
leaving the bed. The light illuminates the floor to help the user walk and, because it is directed at the floor, it does not disturb other users in the same room.
2 Methodology

This study describes a bedside table design that utilizes waste from an interior and furniture construction service company. The waste consists of pieces of HPL, plywood, and iron. The research method is qualitative with exploratory design; the design exploration stages used are define, ideate, and develop. Data collection is carried out through field studies to find the remaining waste materials, and through literature studies of the furniture market and of the latest technologies that support modern furniture, such as smart technology. According to the collected data, the residual waste is not very large, and the products manufactured from these resources must take this into account.

The define stage covers this procedure. According to the field study data analysis, waste material can still be used to create a bedside table; the average size of a bedside table on the market is 50 × 50 × 50 cm. The market analysis shows that many smart-technology-based products have emerged, so adding a smart lamp under the table can increase the value and function of this furniture. The added smart lamp serves as room lighting at night in a way that does not disturb comfort while resting: its light helps when the user wakes up in the middle of the night to go to the toilet or for other emergency needs.

The following process is design ideation. The exploration of bedside table designs is based on the availability of existing waste materials, so creativity is needed to design furniture with limited material sizes. In addition, the aesthetic aspect must be a significant concern so that the product still has a high selling value even though it comes from residual or waste materials.
Design exploration is outlined in several alternative design sketches that consider function, aesthetics, and the tastes of the target market. After analyzing the sketches, one design is selected for production and completed with technical drawings to simplify the production process for the carpenter. This develop stage turns the design plan into a finished product. After furniture production is complete, the smart lamp is installed. The research method flow is shown in Fig. 1.
Fig. 1 Research methodology process flowchart
3 Result and Discussion

3.1 Designing and Manufacturing the Bedside Table

Based on survey findings, in several furniture workshops waste plywood is gathered in a bag and thrown away; some workshops give it to people in need as cooking fuel. Several furniture industries stated that they have difficulty treating their waste, and the majority said they would burn it or give it to others as fuel, so that their workshop stays clean and has space to work or store other items. With a little time and some added material, this waste can become the base material for smaller furniture, such as nightstands.

The bedside table is ergonomically between 45 and 60 cm tall. It is placed beside the bed, and its height adjusts to the bed's height: some users choose a bedside table level with the bed and some choose one lower than the bed, depending on the customer's taste. Within this 45–60 cm total height, a first design uses a full body from the floor up, meaning the whole 45–60 cm is plywood frame. A second design combines a half-height body with supporting legs. A third design is floating, with the body attached to a wall or backdrop. Because this study uses production waste materials, the author chose design 2, a half-height body supported by legs. The dimensions of the nightstand body are 40 cm wide, 20 cm high, and 40 cm deep, as represented in Fig. 2.

The body parts are made from a self-made board. Remnants of plywood of the same thickness are collected and stacked on top of 3 mm plywood; the pieces are aligned with each other and glued together with wood glue. To obtain a uniform thickness, the top of the stacked plywood is levelled using a planer machine.
When the top surface is flat, it is covered with 3 mm plywood and becomes a board ready to be used for the furniture body. The plywood cover layer uses whole HPL or pieces of waste HPL, cut into squares and arranged horizontally and vertically to obtain optimal results. Based on interviews with several furniture industry entrepreneurs and workers, they always have enough waste material for the nightstand body because it is relatively small. For cover material made from waste, HPL pieces of 10 × 10 cm can be arranged vertically and horizontally, as shown in Fig. 3.
Fig. 2 Bedside table made of self-made blockboard
Fig. 3 Pieces of HPL waste arranged as the furniture finishing material
In Fig. 2 it can be seen that the legs use pine wood obtained from used pallets. The beams from the pallets are shaped on a lathe, removing a little at the top and more at the bottom, so each leg appears large at the top and tapers toward the foot. The table legs are then finished with a clear coating and connected to the underside of the table body with screws from the inside before the body is covered with HPL. The nightstand body is covered with waste HPL, as shown in Fig. 3. Using waste for both the primary and the finishing material is an effort to preserve nature by applying sustainable materials.
3.2 Embedding Smart Lighting System Using Arduino Uno

The author added a lighting system at the bottom of the table to add value to the bedside table. The light is created using a Light Emitting Diode (LED) strip whose beam is directed downwards; its purpose is to provide light at night. In general, people turn off the lights while sleeping, so when someone wakes up and is about to walk away from the bed, they need light to illuminate their path. If the primary light, or the sitting lamp usually placed on the bedside table, is turned on, it will interfere with the sleep of anyone sharing the room; a lamp that only illuminates the floor does not.

The floor lamp does not need an additional switch. Instead, it turns on automatically using a motion sensor, a Passive Infrared (PIR) sensor. On reading movement, the PIR sensor sends data to turn on the light; to turn it off, the user can set after how many seconds the light turns off by itself. Another sensor needed in this lighting system is the Light Dependent Resistor (LDR) sensor, which reads light intensity in lux (lx). In the late afternoon, for example, sunlight begins to dim until it falls below 80 lx; below this threshold the condition is declared evening or night. During the day with heavy rain, the light intensity can also fall below 80 lx and be considered an evening condition. If the LDR sensor reads a value above 80 lx, the condition is declared bright, or daytime. The 80 lx figure is a threshold exemplified by the author and can be changed according to the user's needs. Figure 4
describes the flowchart of this smart lighting system and the circuit on the Arduino. In the smart lighting system of this study, the LDR sensor functions as the main switch, while the PIR sensor functions as a secondary switch. When the LDR reads a light intensity of less than 80 lx (evening conditions), the device is declared in standby; if there is movement, the PIR sensor responds by sending data to turn on the lamp. On the other hand, when the LDR sensor reads a light intensity of more than 80 lx (bright conditions), it instructs the device to sleep. In that case, even if there is movement, the PIR sensor will not respond because
Fig. 4 The flowchart and schematic diagram of smart lighting system
the device is in a sleep state. Thus, the lighting system is more optimal because it can save energy.
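The standby/sleep switching described above can be sketched as a small simulation. On the real device this logic runs as an Arduino C++ sketch; the Python function names below are illustrative assumptions, and only the 80 lx threshold and the switching behaviour follow the paper.

```python
# Simulation of the smart lighting decision logic: the LDR acts as the
# main switch (80 lx threshold, per the paper) and the PIR acts as the
# secondary switch, honoured only while the device is in standby.

LUX_THRESHOLD = 80  # below 80 lx the condition is declared dark/evening


def device_mode(lux):
    """Main switch: dark -> 'standby', bright -> 'sleep'."""
    return "standby" if lux < LUX_THRESHOLD else "sleep"


def lamp_on(lux, motion_detected):
    """Secondary switch: the PIR only turns the lamp on in standby mode."""
    return device_mode(lux) == "standby" and motion_detected


print(lamp_on(60, True))   # evening + movement: lamp on
print(lamp_on(60, False))  # evening, no movement: lamp off
print(lamp_on(100, True))  # daytime: sleep mode ignores the PIR
```

Because the PIR is gated by the LDR, the device ignores movement entirely during the day, which is the energy saving the text describes.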
3.3 Usability Test

This study conducted experiments on the PIR sensor, the LDR sensor, and the combination of the two. The PIR sensor was tested several times to see up to what distance it responds. After the PIR sensor triggers the device, the number of seconds until it sends the data to turn the lamp off again is observed. The test also determines how sensitive the sensor is to human movement and whether, when someone stands still in front of it, it still reports a person there. The details of the test are shown in Table 1.
Table 1 PIR sensor test results

Distance | Light condition | Type of movement
130 cm   | On              | Waving hand, replacing foot movement
130 cm   | On              | Moving body
130 cm   | Off             | Silent
150 cm   | Off             | Waving hand, replacing foot movement
150 cm   | Off             | Moving body
150 cm   | Off             | Silent
From the experiment above, it can be seen that the PIR sensor responds to movement, commanding the "light on" condition, if the movement occurs at a distance of less than or equal to 130 cm. The light remains off when someone is still, even at a distance of less than 130 cm, and movement at a distance of more than 130 cm causes no response. So the response of the PIR sensor is determined by both movement and distance.

The following experiment concerns the LDR sensor. The author set a light intensity of 80 lx as the basis for standby or sleep mode. Based on the experimental results, if the sensor detects an intensity of less than 80 lx it sets the device to standby mode, and to sleep mode if it reads an intensity of more than 80 lx. To test the light intensity, the author measured direct sunlight using 5 LDR sensors, all mounted on an Arduino Uno. Table 2 shows the results of the LDR sensor experiment.

Table 2 The test result of 5 LDR sensors

Time        | LDR 1 (lx) | LDR 2 (lx) | LDR 3 (lx) | LDR 4 (lx) | LDR 5 (lx)
06.00       | 0     | 0     | 0     | 0     | 0
07.00       | 59    | 61    | 59    | 60    | 61
08.00       | 91    | 93    | 90    | 92    | 93
09.00       | 109   | 112   | 109   | 111   | 109
10.00–16.00 | > 100 | > 100 | > 100 | > 100 | > 100
17.00       | 268   | 270   | 268   | 270   | 270
18.00       | 0     | 0     | 0     | 0     | 0

The results of this experiment will differ in each region. Besides weather conditions, the readings are influenced by the reflection of surrounding colours, and any shadow also affects them. The LDR sensor reads light intensity wherever it is placed; if the device is placed in a room, that position becomes the reference, so the number used as a benchmark for dark and bright conditions depends on the reading sample used.

The third experiment tests the device as a whole circuit. From the results obtained, the data show that the device is in standby mode in evening conditions, with a light intensity of less than 80 lx. Movement at a distance of less than or equal to
Table 3 The test result of the entire system

Time        | Device mode | LDR (lx) | Distance | Movement | Light condition
06.00       | Standby     | < 80     | < 130 cm | Yes      | On
06.00       | Standby     | < 80     | < 130 cm | No       | Off
07.00–16.00 | Sleep       | >= 80    | < 130 cm | Yes      | Off
07.00–16.00 | Sleep       | >= 80    | < 130 cm | No       | Off
17.00       | Standby     | < 80     | > 130 cm | Yes      | Off
17.00       | Standby     | < 80     | > 130 cm | No       | Off
130 cm will turn on the light. A still person will not turn on the light, even at a distance of less than 130 cm; likewise, movement at a distance of more than 130 cm will not turn it on. Details of the experimental results can be seen in Table 3.

Table 3 shows that the device is in sleep mode from 07.00 to 16.00, with an intensity of more than 80 lx; in this condition, even though there is movement at a distance of less than 130 cm, the lamp remains off. The lamp at the bottom of the table functions when the light intensity is less than 80 lx and there is movement at a distance of less than 130 cm.
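The off-delay observed in the PIR test, where the lamp turns itself off a set number of seconds after the last detected motion, can be sketched as a timing simulation. This is a plain Python sketch of the logic; the 10 s hold time and the class name are illustrative assumptions, not values reported in the paper.

```python
# Hold-timer for the PIR-driven lamp: the lamp stays on for a
# configurable number of seconds after the last detected motion,
# mirroring the adjustable off-delay of the PIR sensor.

class LampTimer:
    def __init__(self, hold_seconds=10):  # 10 s is an assumed example
        self.hold = hold_seconds
        self.last_motion = None

    def update(self, now, motion):
        """Feed the current time (s) and PIR state; return lamp on/off."""
        if motion:
            self.last_motion = now
        return (self.last_motion is not None
                and now - self.last_motion < self.hold)


timer = LampTimer(hold_seconds=10)
print(timer.update(0, True))    # motion detected: lamp turns on
print(timer.update(5, False))   # 5 s later, still within hold time
print(timer.update(12, False))  # hold time elapsed: lamp off
```

Any new motion resets the countdown, so the lamp stays lit as long as someone keeps moving within range.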
3.4 Sustainable Product Design

A table placed next to the bed generally uses wood or plywood. In making this table, however, the author applies sustainability throughout. The primary material is a blockboard made from plywood waste, a material reuse technique. Likewise, the finishing uses HPL waste joined with the patchwork technique often applied to fabric, another effort at material reuse. Both reuse techniques, the blockboard and the HPL patchwork, increase the value of waste material and can therefore be categorized as upcycling, one application of sustainable materials.

Similarly, lights are usually turned on and off with a manual switch, and forgetting to turn them off increases the use of electrical energy. Some lights apply a motion sensor for energy efficiency, but such sensors work 24 hours a day. The author applies an energy-saving scheme to the lighting system: it operates only when conditions are dark, at night, which makes energy use more efficient. The second factor is the use of an LED, which draws less power. These two methods, an efficient light source and efficient use, are methods of sustainable energy.
4 Conclusion

Blockboard, which can be made from scraps of plywood or other materials, is a primary material suitable for a bedside table. The blockboard is covered with waste HPL of various designs, cut into small pieces and arranged horizontally and vertically. Pine wood from recycled pallets, turned on a lathe, is used for the table legs. The lighting system on the underside of the table uses an LED strip controlled by an Arduino Uno microcontroller equipped with a PIR sensor and an LDR sensor. The main electrical switch is the LDR sensor: when it detects a light intensity over 80 lx, it indicates a bright condition, and all electricity goes into sleep mode, effectively off. When the LDR detects a light intensity of less than 80 lx, it indicates dark conditions and puts the electricity on standby. In this standby state, the lamp's on and off is determined by the presence of movement: the light turns on automatically when there is movement. According to the experimental findings, the light turns on if it detects movement at a distance of less than 130 cm. If there is no movement, the light stays off even if someone is closer than 130 cm, and moving or still objects farther away than 130 cm will not activate it. According to the experiment with five LDR sensors connected to an Arduino Uno, the light intensity exceeded 80 lx by 8 am and dropped to 0 lx at 6 pm; thus the smart lighting system operates when the light intensity is below 80 lx, between 6 pm and before 8 am. This product applies sustainable materials, reusing plywood and HPL waste to manufacture the bedside table. In addition, the table applies sustainable energy in the form of an energy-saving lamp used efficiently, only in dark conditions or at night.
References

1. Pitti AR, Espinoza O, Smith R (2020) The case for urban and reclaimed wood in the circular economy. BioResources 15:5226–5245
2. Puspita AA, Sachari A, Sriwarno AB (2016) Indonesia wooden furniture: transition from the socio-cultural value leading to the ecological value. J Arts Humanit 5:1. https://doi.org/10.18533/journal.v5i7.965
3. Fadillah AM, Hadi YS, Massijaya MY, Ozarska B (2014) Resistance of preservative treated mahogany wood to subterranean termite attack. J Indian Acad Wood Sci 11:140–143. https://doi.org/10.1007/s13196-014-0130-2
4. Primadani TIW, Larasati D, Isdianto B (2019) Kajian Strategi Aplikasi Material Kayu Bekas Pada Elemen Desain Interior Restoran di Bandung. J Desain Inter 4:49–60
5. Jiang F, Li T, Li Y, Zhang Y, Gong A, Dai J, Hitz E, Luo W, Hu L (2020) Wood and wood based materials in urban furniture used in landscape design projects. Wood Ind Eng 2:35–44. https://doi.org/10.1002/ADMA.201703453
6. Khoo PS, H’ng PS, Chin KL, Bakar ES, Maminski M, Raja-Ahmad R-N, Lee CL, Ashikin SN, Saharudin M-H (2018) Peeling of small diameter rubber log using spindleless lathe technology: evaluation of veneer properties from outer to inner radial section of log at different veneer thicknesses. Eur J Wood Wood Prod 76:1335–1346. https://doi.org/10.1007/s00107-018-1300-5
7. Porteous J (2015) Composite section I-beams. Wood Compos:169–193. https://doi.org/10.1016/B978-1-78242-454-3.00009-3
8. Nazerian M, Moazami V, Farokhpayam S, Gargari RM (2018) Production of blockboard from small athel slats end-glued by different type of joint. Maderas Cienc y Tecnol. https://doi.org/10.4067/S0718-221X2018005021101
9. Primadani TIW, Kurniawan BK, Shidarta S, Putra WW (2021) Furniture making training as creative interior business development in Tirtomoyo Village Malang. ICCD 3:81–84. https://doi.org/10.33068/ICCD.VOL3.ISS1.306
10. Szökeová S, Fictum L, Šimek M, Sobotkova A, Hrabec M, Domljan D (2021) First and second phase of human centered design method in design of exterior seating furniture. https://doi.org/10.5552/drvind.2021.2101
11. Subagio RP, Vincensia M (2017) Pengolahan material limbah kayu produksi furnitur menjadi lampu tidur. In: Simposium Nasional RAPI XVI, pp 227–233, Surakarta
12. Gnatiuk L, Novik H, Melnyk M (2022) Recycling and upcycling in constraction. Theory Pract Des 130–139. https://doi.org/10.18372/2415-8151.25.16789
13. Nurdiani N, Taufik (2020) The study of application of the green architectural design concept at residential area in Jakarta. IOP Conf Ser Earth Environ Sci 426:012067. https://doi.org/10.1088/1755-1315/426/1/012067
14. Pramono A, Azis B, Primadani TIW, Putra WW (2022) Penerapan Upcycling Limbah Kain Perca Pada Kursi Flat-Pack. Mintakat J Arsit 23:14–27
15.
Bohemia E, Institution of Engineering Designers, Design Society, Dublin Institute of Technology (2013) Upcycling: re-use and recreate functional interior space using waste materials. In: DS 76, Proc E&PDE 2013, 15th Int Conf Eng Prod Des Educ, Dublin, Ireland, 05–06.09.2013, pp 798–803
16. Pramono A, Ananta Wijaya IB, Kartono Kurniawan B (2021) Maximizing small spaces using smart portable desk for online learning purpose. In: 2021 International conference on ICT for smart society (ICISS), pp 1–7. IEEE. https://doi.org/10.1109/ICISS53185.2021.9533209
Malang Historical Monument in HIMO Application with Augmented Reality Technology

Christoper Luis Alexander, Christiano Ekasakti Sangalang, Jonathan Evan Sampurna, Fairuz Iqbal Maulana, and Mirza Ramadhani

Abstract History is a legacy found in every country, including Indonesia. Indonesia's various kinds of historical heritage carry the values of the Indonesian people's struggle, but unfortunately, today's young generation does not know and understand them. If this is not addressed immediately, the Indonesian people will not appreciate the hard work and struggles of previous Indonesian heroes. One of the historical relics in Indonesia is the Hero Monument in Malang City, yet many young people are not familiar with it. For this reason, using current technological developments, a 3D application was made that allows the younger generation to see and experience the Hero Monument in Malang City virtually. In this way, young people, not limited to Malang City but from provinces across the whole of Indonesia, can recognize and understand the historical values involved. Unity, Vuforia, and Blender technologies are used to develop the application, which is also designed to make it easy for every user to scan and display the monument virtually. With this application, it is hoped that the younger generation will become increasingly interested in exploring and understanding the historical values of Indonesia. "A great nation is a nation that respects the services of its heroes" is our guideline in developing the HIMO application.
C. L. Alexander (B) · C. E. Sangalang · J. E. Sampurna · F. I. Maulana · M. Ramadhani Computer Science Department, School of Computer Science, Bina Nusantara University, 11480 Jakarta, Indonesia e-mail: [email protected] C. E. Sangalang e-mail: [email protected] J. E. Sampurna e-mail: [email protected] F. I. Maulana e-mail: [email protected] M. Ramadhani e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_30
Keywords Augmented reality · Monument · Local history · Heritage · Application
1 Introduction

Today's young generation is familiar with digital media and technology [1] and cannot stay away from the technology currently developing. Technology like this has both positive and negative impacts [2]. The younger generation can develop their creativity with it, but technological developments that are not addressed wisely can lead to massive globalization and erode love and pride for the nation, especially the beloved Indonesian nation. Love for the past struggles of the Indonesian nation has also begun to fade as technology advances.

The struggle of the Indonesian people has left many historical legacies that must be respected by current and future generations of youth. One of these historical heritages is the monument [3]. Monuments are objects that store the historical values of a country [4]; national struggle monuments in particular are an important historical legacy for future generations [5], who are expected to continue the struggle for Indonesian independence. Monuments of struggle are found in the various regions of Indonesia that fought to the death to liberate the country. One of them is the city of Malang, which has monuments that hold the historical values of the Indonesian struggle. Unfortunately, the younger generation does not know all the historical monuments in these areas, including the city of Malang.

Therefore, current technology can be developed to introduce historical monuments. One option is 3D technology, namely Augmented Reality (AR). AR is a technology that presents three-dimensional (3D) objects in an environment in real time [6, 7], usually an artificial one.
Technology like this can be used to display 3D objects of historical monuments in realistic form. Augmented Reality itself has many benefits [8], especially for the state of mind of the younger generation: it can increase learning motivation [9], attention to technology, concentration in the learning process [10], and satisfaction. These benefits are very useful for the younger generation, especially in studying the history of the monuments in Malang. The historical monuments in Malang that can be presented with AR technology are Tugu Malang and the Melati Monument. Such visual monuments can be seen directly by people without having to visit the place, and their historical information can also be studied by the younger generation using only technology [11]. That way, the younger generation, not only from Malang but also from other areas such as Kalimantan, Papua, and Sumatra, can see these monuments realistically and understand their historical values. This is certainly useful, for example for the Melati Monument, which keeps the spirit of previous warriors and must be guarded by the current generation.
With AR technology like this, the younger generation does not only use technology such as social media, where they interact socially or communicate indirectly in two directions; it can also serve as an alternative for strengthening a sense of pride in the past struggles of the Indonesian nation. The following sections study in more depth how to develop this AR technology so that it can actually present the historical monuments of Malang to the public and run on today's widely used devices such as smartphones.
2 Methods

The method used to develop this software is the ADDIE Model, a design process consisting of 5 steps: Analysis, Design, Development, Implementation, and Evaluation [12]. This model is usually used as a framework for developing courses or for development processes in multimedia projects such as this software [13]. It is iterative: the evaluation result can feed back into the analysis process. Each stage is explained as follows.

This SLR uses quantitative, perspective, and multi-level analysis. The quantitative analysis consists of annual publications and geographical contexts. Multi-level analysis includes individual, team, company, network, and institutional levels. Perspective analysis includes communication, services, manufacturing, information technology, and health.
2.1 Analysis

In this stage, the task is to define a problem and find its solution. The writers researched problems that exist in their local environment and relate to local wisdom there, namely Malang city, and finally chose the topic of the historical monuments that exist in Malang. The primary problem is that because of the COVID-19 pandemic, the tourism sector in Malang city declined: many tourists, both local and foreign, could not visit the city. Moreover, many local residents and foreigners do not recognize the historical monuments that exist in Malang. The writers therefore had the idea of making a documentary application that introduces the historical monuments of Malang city while also promoting them. This application will be useful in the tourism and education sectors.
C. L. Alexander et al.
2.2 Design Once the solution was identified in the analysis stage, the design stage began. It covers the strategic design of the chosen solution: the application design, a mock-up created with Figma, 3D model assets created with Blender, and the information data about the historical monuments of Malang. The authors also designed the picture book that serves as the marker base for the models.
2.3 Development In the development stage the design is turned into a product, for example documentation and software. The authors began building the application by registering the model assets, license, and database in Vuforia. Once the markers were created and linked to the models, the application itself was built in Unity.
2.4 Implementation In the implementation stage the developed product and prototype are presented to candidate users for trials. The authors distributed the prototype to testers.
2.5 Evaluation After implementation, the testing results are appraised to improve the product. The testers provide evaluations, critiques, and suggestions, which serve as a guide for improving the application. If the problem choice turns out to be incorrect, the analysis stage can be revisited; technical mistakes are likewise fixed based on the evaluation results. Once the target marker image is in the camera view of a smartphone or tablet, the app displays virtual educational content in the real world: users can view and interact with virtual 3D content and the additional environment associated with the target image. Figures 2, 3, 4, 5 and 6 depict the entire process flow of the app.
Malang Historical Monument in HIMO Application with Augmented …
Fig. 1 Development method steps of this application
Fig. 2 AR apps for smartphones and tablet computers [12]
Fig. 3 Flowchart of the HIMO application
Fig. 4 Flowchart of the HIMO application
Figure 2 illustrates the process: 1. The target image's features are stored in Vuforia's database. Newly detected features are compared with those already recorded in the database; when the database features match the image, virtual content is added to the target image surface. 2. The target marker, which has been registered in the Vuforia database, is scanned using an Android smartphone.
Fig. 5 Distance and time testing of the HIMO AR application
Fig. 6 Testing of the HIMO AR application on two Android devices
3. To display the HIMO content, the AR software relies on images captured with the phone's camera.
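Vuforia's matching pipeline is proprietary, but the database comparison in step 1 can be illustrated with a toy sketch: integer-encoded binary descriptors (standing in for real image features) are matched against each stored marker by Hamming distance. All names, descriptor values, and thresholds below are illustrative assumptions, not Vuforia internals.

```python
# Illustrative sketch (not Vuforia's actual algorithm): marker recognition
# compares feature descriptors from the camera frame against those stored
# in the target database, counting close matches by Hamming distance.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integer-encoded descriptors."""
    return bin(a ^ b).count("1")

def match_marker(frame_descriptors, database, max_dist=3, min_matches=2):
    """Return the name of the database marker with enough close matches."""
    best_name, best_count = None, 0
    for name, marker_descriptors in database.items():
        count = sum(
            1
            for f in frame_descriptors
            if any(hamming(f, m) <= max_dist for m in marker_descriptors)
        )
        if count > best_count:
            best_name, best_count = name, count
    return best_name if best_count >= min_matches else None

# Toy 8-bit descriptors standing in for real image features
database = {
    "Monumen Tugu":   [0b10110010, 0b01101100, 0b11100001],
    "Monumen Melati": [0b00011110, 0b11000011, 0b01010101],
}
frame = [0b10110011, 0b01101100, 0b11110000]  # noisy view of "Monumen Tugu"
print(match_marker(frame, database))  # → Monumen Tugu
```

A real pipeline would extract descriptors such as ORB or FREAK from the camera frame, but the database-lookup idea is the same.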
3 Results and Discussion 3.1 Design and Development of the HIMO AR Application Using Unity The application's flow is shown in the flowchart diagram. It begins with a menu choice, and the program then runs based on that choice: the 'start' choice opens the camera and performs the scanning process; the 'about' choice shows the application's information; and 'exit' ends the application. The following are the design views, or user interface (UI), of the application. The main cover page carries the logo and the application's name, with three buttons at the bottom: start, about, and exit. It is the first display shown after opening the application and serves as a base before entering the other features. After the start button is chosen, the scan page opens: the camera of the user's device starts automatically and tries to find a picture inside
the picture book of monuments. When the camera finds the picture, which is the marker in the book, the scan process continues and the view shifts to the next page. An arrow mark allows the user to return to the main page. Next is the model page, which opens automatically once the scan is finished. The 3D model of the scanned monument picture appears, and the user can interact with it, for example by zooming and rotating. While interacting with the model, the user also receives the monument's name and its history, which can be read below the model. Again, an arrow mark leads back to the main page. The remaining page is the about page, accessed from the about button on the main page. It contains information about the application, such as its version, description, objective, and usage instructions, so that users get to know the application and how to use it. An arrow mark leads back to the main page.
3.2 Testing of the HIMO AR Application After development, the application was tested. Table 1 shows the results of the scanning trials, which measured how long the model takes to appear after the marker is scanned. As the results show, 20 cm is the minimum distance needed by the camera to scan the marker, and the average appearance time is around 1.197 s. We also tested slope angles of 0°, 15°, 30°, 45°, and 60°; at an angle of 60° between camera and marker, the 3D model does not appear. The results are given in Table 2. We also tested distance against the time the marker takes to reveal the 3D object, with results in Table 3.

Table 1 Testing the HIMO application with distance

| Distance between camera and marker (cm) | Model appearance time (s) | Description  |
|-----------------------------------------|---------------------------|--------------|
| 20                                      | 0.27                      | Detected     |
| 40                                      | 0.35                      | Detected     |
| 60                                      | 0.47                      | Detected     |
| 80                                      | 1.12                      | Detected     |
| 100                                     | –                         | Not detected |
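The detected rows of Table 1 can be summarized programmatically as a quick sanity check; the `results` mapping below simply transcribes the table (note that the plain mean of these four rows differs from the average time quoted in the text, which presumably covers more trials):

```python
# Detection results from Table 1: distance (cm) -> appearance time (s);
# None means the marker was not detected at that distance.
results = {20: 0.27, 40: 0.35, 60: 0.47, 80: 1.12, 100: None}

detected = {d: t for d, t in results.items() if t is not None}
max_distance = max(detected)                       # farthest working distance
avg_time = sum(detected.values()) / len(detected)  # mean appearance time

print(f"max detectable distance: {max_distance} cm")
print(f"average appearance time: {avg_time:.3f} s")
```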
Table 2 Testing the HIMO application with angle

| Angle | Result       |
|-------|--------------|
| 0°    | Detected     |
| 15°   | Detected     |
| 30°   | Detected     |
| 45°   | Detected     |
| 60°   | Not detected |

Table 3 Testing the HIMO AR application on two Android devices

| Type of HIMO marker | Asus Zenfone scanning time (s) | Android POCO 8 GB RAM scanning time (s) |
|---------------------|--------------------------------|-----------------------------------------|
| Monumen Tugu        | 0.29                           | 0.17                                    |
| Monumen Melati      | 0.31                           | 0.19                                    |
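Table 3 can likewise be summarized per device; this sketch simply transcribes the table, with device labels taken from the column headings:

```python
# Scanning times from Table 3, per marker and device (seconds).
times = {
    "Monumen Tugu":   {"Asus Zenfone": 0.29, "Android POCO 8 GB RAM": 0.17},
    "Monumen Melati": {"Asus Zenfone": 0.31, "Android POCO 8 GB RAM": 0.19},
}

# Average scanning time per device across both markers.
devices = {"Asus Zenfone", "Android POCO 8 GB RAM"}
averages = {
    dev: sum(per_marker[dev] for per_marker in times.values()) / len(times)
    for dev in devices
}
for dev, avg in sorted(averages.items()):
    print(f"{dev}: {avg:.2f} s")
```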
4 Conclusions The HIMO @Malang application was created to preserve and provide knowledge about Indonesia's historical heritage, especially for the younger generation, using technological developments such as Unity, Vuforia, and Blender. With this application, young people not only in the city of Malang but in all parts of Indonesia can get to know the Tugu monument and the Melati (jasmine) monument. The application also helps address the COVID-19 pandemic currently affecting Malang, letting tourists learn about the Malang monuments virtually. Another test attempted to measure the brightness of the room's interior illumination. HIMO AR can read markers well at angles from 0° to 45°, and in the distance test the marker could be read and the 3D model displayed up to a distance of 80 cm. The device camera may fail to read the pattern on the marker, which results in a test accuracy below 100 percent. Development of this application must also ensure that it is easy to use and correctly displays the Malang monuments in 3D when the user scans a book image, and as developers we must ensure that it meets the needs and satisfaction of its users.
References 1. Hazirah Mohd Azhar N, Diah NM, Ahmad S, Ismail M (2019) Development of augmented reality to learn history. Bull Electr Eng Inform 8:1425–1432. https://doi.org/10.11591/eei.v8i4.1635 2. Nursita YM, Hadi S (2021) Development of mobile augmented reality based media for an electrical measurement instrument. J Phys Conf Ser 2111:012029. https://doi.org/10.1088/1742-6596/2111/1/012029 3. Nugrahani R, Wibawanto W, Nazam R, Syakir, Supatmo (2019) Augmented interactive wall as a technology-based art learning media. J Phys Conf Ser 1387:012114. https://doi.org/10.1088/1742-6596/1387/1/012114 4. Rizaldy I, Agustina I, Fauziah F (2018) Implementasi Virtual Reality Pada Tur Virtual Monumen Nasional Menggunakan Unity 3D Algoritma Greedy Berbasis Android. JOINTECS J Inf Technol Comput Sci 3. https://doi.org/10.31328/jointecs.v3i2.786 5. Opoku A (2019) Biodiversity and the built environment: implications for the sustainable development goals (SDGs). Resour Conserv Recycl 141:1–7. https://doi.org/10.1016/j.resconrec.2018.10.011 6. Awang K, Shamsuddin SNW, Ismail I, Rawi NA, Amin MM (2019) The usability analysis of using augmented reality for linus students. Indones J Electr Eng Comput Sci 13:58–64. https://doi.org/10.11591/ijeecs.v13.i1.pp58-64 7. Maulana FI, Atmaji LT, Azis B, Hidayati A, Ramdania DR (2021) Mapping the educational applications of augmented reality research using a bibliometric analysis. In: 2021 7th international conference on electrical, electronics and information engineering (ICEEIE), pp 1–6. IEEE. https://doi.org/10.1109/ICEEIE52663.2021.9616934 8. Uriel C, Sergio S, Carolina G, Mariano G, Paola D, Martín A (2020) Improving the understanding of basic sciences concepts by using virtual and augmented reality. Procedia Comput Sci 172:389–392. https://doi.org/10.1016/j.procs.2020.05.165 9. Ibáñez MB, Delgado-Kloos C (2018) Augmented reality for STEM learning: a systematic review. Comput Educ 123:109–123. https://doi.org/10.1016/j.compedu.2018.05.002 10. Khan T, Johnston K, Ophoff J (2019) The impact of an augmented reality application on learning motivation of students. Adv Hum Comput Interact 2019. https://doi.org/10.1155/2019/7208494 11. Darmawiguna IGM, Sunarya IMG, Pradnyana GA, Pradnyana IMA (2016) Pengembangan Sistem Informasi Museum Berbasis Web Dan Digital Display Dengan Teknologi Augmented Reality. Semin Nas Ris Inov Ke-4 Tahun 2016, pp 2–8 12. Maulana FI, Al-Abdillah BI, Pandango RRA, Djafar APM, Permana F (2023) Introduction to Sumba's traditional typical woven fabric motifs based on augmented reality technology. Indones J Electr Eng Comput Sci 29:509–517. https://doi.org/10.11591/ijeecs.v29.i1.pp509-517 13. Elmunsyah H, Kusumo GR, Pujianto U, Prasetya DD (2018) Development of mobile based educational game as a learning media for basic programming in VHS. In: International conference on electrical engineering, computer science and informatics (EECSI), pp 416–420. https://doi.org/10.1109/EECSI.2018.8752658
A Gaze-Based Intelligent Textbook Manager Aleksandra Klasnja-Milicevic, Mirjana Ivanovic, and Marco Porta
Abstract IT technologies and their rapid development can greatly support and significantly improve Human–Computer Interaction, defining new communication methods for a fast and direct user experience. One very promising technology nowadays is eye tracking. The main contribution of our research is to propose a gaze-based intelligent textbook manager that supports complete document analysis. Our solution includes tools providing translation support (single words or entire sentences), keyword annotations, and the creation of document summaries that contain all the areas the user has read, with their reading times and the words of major interest. Keywords Human-centered computing · Eye tracking · Gaze behavior · Intelligent textbook · Document analysis
1 Introduction Eye tracking technology is based on methods and techniques for discovering, identifying, and recording eye movement events. Thanks to significant advancements in the production of eye tracking devices over the last three decades, researchers have been progressively able to obtain more accurate eye-gaze measurements with less obtrusive technologies [8]. Different eye tracking systems have been developed to monitor eye movements during visual activity, to recognize behavioral responses, display cognitive load, provide an alternative mode of human–computer interaction, drive interface design, and adjust element appearance based on user data [7]. Compared to the earliest eye tracking tools, today's devices are less intrusive and are typically positioned at the base of monitors, so that users can be as free as possible during experiments.
A. Klasnja-Milicevic (B) · M. Ivanovic Faculty of Sciences, University of Novi Sad, 21000 Novi Sad, Serbia e-mail: [email protected]
M. Porta Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_31
The modern trend is towards the use of eye tracking devices in various fields, such as psychology (for instance to perform cognitive tests), virtual reality (to interact with objects and the environment), marketing (to determine what the customer is focusing on), education (to track learning processes, observing how the user follows the learning material), clinical trials, etc. [6]. Incorporating eye tracking into an adaptive intelligent textbook, using pupil and gaze data that indicate attentional focus and cognitive load levels, can support adaptation to the requirements and needs of the learner. Eye tracking, as a sensor-based technology that reveals where the user's focus lies, can determine presence, attention, drowsiness, awareness, and other mental states of learners [8]. With an eye tracker, the eye serves a double duty: by processing sensory stimuli from the computer display, it provides a better insight into learner behavior while also supplying motor responses to control the system. Personalizing an intelligent textbook based on a learner's cognitive load levels, inferred from eye tracking data, would bring the benefits of a personal tutoring system to many contexts, allowing more effective training by improving knowledge transmission and retention. In this paper, a system for intelligent document analysis is proposed. The idea is to provide tools for translation support (single words or entire sentences), keyword annotation (with Wikipedia definitions), and the creation of summaries containing all the areas that the user has read (with their reading times and words of major interest). Since every person has different eye characteristics, it is important to develop a system that is independent of the specific eyes and of the documents read. The paper is structured as follows.
Section 2 presents a short summary of related works. Section 3 briefly explains the structure of the human eye, human–computer interaction systems, and the possibility of using eye tracking devices to define new ways of interaction for a fast and direct user experience. The developed system, which exploits eye tracking to trace user activities while reading documents, is illustrated in detail in Sect. 4 and evaluated in Sect. 5. Lastly, Sect. 6 outlines conclusions and future work on the presented topic.
2 Related Work Separating fixations from saccades during text reading is not easy, because the characteristics of the human eye change from person to person. Correctly translating raw eye movements into fixations and saccades [14] is very important, since poor algorithms may produce wrong (sometimes too many and sometimes too few) fixations or cause biased interpretations. Two families of methods, namely spatial and temporal criteria, are proposed in [14], which include five techniques. Spatial criteria focus on velocity, dispersion, and area-of-interest information. Temporal criteria classify algorithms based on duration information and local adaptivity.
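The dispersion-based (spatial) criterion can be sketched as a simplified I-DT-style routine: a window of gaze samples counts as a fixation if it lasts long enough and its points stay within a small spatial dispersion. The thresholds and toy gaze data below are illustrative assumptions, not values from any cited implementation:

```python
# Simplified dispersion-threshold (I-DT-style) fixation detection, a sketch of
# the "spatial criteria" family of algorithms.

def dispersion(window):
    """Dispersion of a window of (x, y) points: x-range plus y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25.0, min_duration=3):
    """samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns (start_index, end_index_exclusive) pairs for each fixation."""
    fixations = []
    i = 0
    while i + min_duration <= len(samples):
        j = i + min_duration
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

gaze = [(100, 100), (102, 101), (101, 99), (300, 120), (302, 118), (301, 121)]
print(detect_fixations(gaze))  # → [(0, 3), (3, 6)]
```

The gap between consecutive fixation windows corresponds to a saccade; tuning `max_dispersion` and `min_duration` is exactly where poor choices produce too many or too few fixations.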
iDict [9] is a system designed to track the reader’s eye movements to help them deal with text written in a foreign language. An ordinary mouse is used to control iDict, while eye tracking technology is exploited to detect possible reading problems. When this occurs for a specific word, its translation is provided through a pop-up window. The purpose of the work presented in [4] is to generate implicit relevance feedback from eye movements to personalize information retrieval. To decrease noise in gaze data, reading behaviors can be initially identified based on specific spatial and temporal patterns, and gaze data are recorded only when there is an active read of the user. By analyzing the lengths and structure of saccades, it is possible to establish whether a flow of saccades over a line of text corresponds to a reading or skimming action. It is thus also possible to detect which parts of a document have been found interesting by a user, so that this knowledge can be exploited for the personalization of information retrieval. Other works focus on whole sentences and paragraphs instead of considering single words [11]. Typical gaze-based measures to understand user behavior (like reading) include [3]: (1) Average fixation duration in a paragraph; (2) Average forward saccade length (average length of left-to-right saccades, which is also known to be influenced by characteristics of the text [13]); (3) Regression ratio (number of regressions divided by the total number of saccades on a paragraph [6]); (4) Reading ratio (length of text that has been detected as read by the reading detection method divided by the length of read or skimmed text—practically, it is a measure for the reading intensity of a user [6]); and (5) Coherently read text length (length of text, in characters, that has been read coherently without skipping any text in between). 
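Two of the listed measures, average forward saccade length (2) and regression ratio (3), can be sketched from per-saccade horizontal displacements. The sign convention (positive meaning a forward, left-to-right movement) is an assumption for illustration:

```python
# Sketch of two gaze-based reading measures computed from per-saccade
# horizontal displacements (positive = forward / left-to-right).

def saccade_metrics(dx_per_saccade):
    """dx_per_saccade: horizontal displacement of each saccade in pixels."""
    forward = [dx for dx in dx_per_saccade if dx > 0]
    regressions = [dx for dx in dx_per_saccade if dx < 0]
    return {
        # (2) average forward saccade length
        "avg_forward_length": sum(forward) / len(forward) if forward else 0.0,
        # (3) regression ratio: regressions divided by total saccades
        "regression_ratio": len(regressions) / len(dx_per_saccade),
    }

# A line read mostly left-to-right with one backtrack:
m = saccade_metrics([40, 35, -20, 45, 38])
print(m)  # avg_forward_length = 39.5, regression_ratio = 0.2
```

A high regression ratio or short forward saccades would suggest difficult text or careful reading rather than skimming.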
The work described in [3] is aimed at recognizing when a user is reading rather than searching or otherwise exploring the visual scene. Reading can be classified into three typical user behaviors, namely: (1) Reading phase: high interest; (2) Skimming phase: medium interest; and (3) Scanning phase (inspecting, monitoring, searching): low interest. Distinguishing these behaviors is a difficult problem that depends on several factors, such as text difficulty, word length, word frequency, font size and color, distortion, user distance from the display, and individual differences (e.g., age, “intelligence”, and reading speed). For example, if a scientific text is analyzed by a user who does not have knowledge about a specific area, normally fixation duration increases [7] and the number of regressions (movements from right to left for languages that are written from left to right, the opposite otherwise) increase as well.
3 Eye Tracking Technologies in Human Computer Interaction The eye receives sensory stimuli from the environment. Stimuli are processed in the brain as “information” and formulate decisions about appropriate actions. The usual course of events also requires that human decisions provide motor responses
that affect changes in the environment. In a computer environment, sensory stimulations are provided by displays, and motor responses are triggered by controls. Two important key elements that must be considered in HCI are: Usability: the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specific context of use [2]. Accessibility: the ability of an IT system to deliver services and provide information without any type of discrimination, including for people who have some form of disability requiring assistive technologies [14]. In this scenario, eye tracking can help collect a large amount of data that can be exploited in the evaluation of many factors, such as [1] website usability, comprehension, management of text, and preferences in an application interface. With an eye tracker, the eye serves a dual purpose, processing sensory input from computer displays while also supplying motor responses for device control. Ponsoda, Scott and Findlay [12] define the ideal measurement technique as one whose eye tracking device does not interfere with the visual field of the test subject, does not require the device to be worn by the participant, and is able to stabilize the corneal image when needed. The precision of an eye tracking device is defined as the difference between the measured and the corresponding true eye gaze position [5]. The ideal eye tracking device is characterized by a short response time and a high temporal resolution. The gaze sampling frequency gives the number of gaze samples acquired per second [13]. Lower frequencies record fewer eye movements, making it impossible to detect quick glances. An eye tracker should also be able to compensate for head movements and to measure one or both eyes. The device should also have a wide dynamic range for recording the position of the eye, for translation and for all three rotational axes of the eyeball.
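The gaze-error measure described above (the offset between measured and reference gaze positions) can be sketched as a mean Euclidean distance over validation points; the coordinates below are illustrative pixel values:

```python
import math

# Sketch of a gaze-error measure: mean Euclidean offset between measured gaze
# points and the known target positions shown during a validation run.

def mean_gaze_error(measured, targets):
    """Both arguments are equal-length lists of (x, y) screen coordinates."""
    errors = [
        math.dist(m, t)  # Euclidean distance per sample
        for m, t in zip(measured, targets)
    ]
    return sum(errors) / len(errors)

targets  = [(100, 100), (500, 100), (300, 400)]
measured = [(103, 104), (498, 101), (305, 400)]
print(round(mean_gaze_error(measured, targets), 2))  # → 4.08
```

In practice the pixel offset would be converted to degrees of visual angle using the viewing distance and screen geometry.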
The device should also be flexible and easy to calibrate. The interface proposed in this paper provides tools for complete document analysis, such as translation, keyword annotation with Wikipedia definitions, and the creation of a summary that contains all the areas the user has read, with their reading times and words of major interest. The developed system is independent of the specific eyes and of the documents read. The performed tests have allowed us to evaluate the robustness of the system and analyze it from different points of view.
4 Gaze-Based Intelligent Textbook Manager Development This section illustrates the major functionalities of the developed Gaze-Based Intelligent Textbook system. The system, implemented in C# using the WPF (Windows Presentation Foundation) framework, provides two types of actions: semi-automatic actions, where keyboard and eye tracker are combined; and fully automatic actions, performed using the eye tracker only. The supported input file formats are Word, PowerPoint, and PDF. As an eye tracker, the low-cost Eye Tribe (https://theeyetribe.com/) device was used (now out of production, but our aim is to demonstrate the feasibility of the proposal rather than to present a "product"). Before running the application, it is
necessary to start the Eye Tribe server (a fundamental step to allow the eye tracker to communicate with the computer). Once the server is running, it is recommended to execute a calibration procedure in order to precisely map the different screen areas to the infrared reflections produced by the device on the user's eyes. This can be done through the EyeTribe UI application. The "keys" used to activate the semi-automatic actions were chosen so that the user can control all operations with the left hand only. One of the most important implemented functionalities is the translation support, which works not only from and to English but also from and to Italian, German, French, and Spanish; the system can also auto-detect the source language. The idea is a semi-automatic interaction with the user: when the translation of a word is needed, he or she looks at it and presses the "Z" key on the keyboard. A non-intrusive interaction modality has been preferred, so a translation is unambiguously triggered by the key press only when it is actually required. The translation appears both in a small window and at the bottom of the document viewer page. All the translated words are also shown in a list positioned at the bottom right of the page and are available in a PDF document, called "FileWithGaze", that contains a gaze pattern of fixations and saccades. When a sentence is observed, it is highlighted with a green rectangle; pressing the "X" key then translates it. Another important feature of the developed system is the possibility to "define" the document keywords: by pressing the "A" key, the user specifies that the particular word he or she is watching becomes a keyword. For all the words in the list, a definition is downloaded from Wikipedia and written to a summary PDF file. There is also a "Cancel KeyWord" button to delete a selected keyword (whose definition will then not be downloaded from Wikipedia).
Following the basic ideas explained in the previous section, a function that tracks the fixations and saccades of the user has been implemented, using circles for fixations and lines connecting consecutive fixations for saccades. The aim is to track the user's gaze while reading the document; the area of each circle is proportional to the fixation duration. Translated sentences have a cyan background. Rounded big rectangles denote the so-called "Macro-Sections", i.e., sections that were read with particular attention. This feature is important because it makes it possible to know which areas the user read and which ones were simply skimmed. To visually convey the time spent in each of these sections, the software draws the rectangles in three different colors: (a) red, for "poorly read" sections (e.g., with a reading time from three to five seconds); (b) orange, for "moderately read" sections (e.g., with a reading time from five to ten seconds); (c) green, for "highly read" sections (e.g., with a reading time higher than ten seconds). Above each section rectangle there is a number; the Gaze-Data file is in fact linked with another file called "SummaryFile.pdf", where the user can find information about the time, in seconds, spent watching a particular section and the number of words read inside it. This is very important for understanding which are the critical or most interesting areas of the document. A long time spent in an area may denote interest or comprehension difficulties, while a short time may denote a clear text portion. This may help the user to improve
his or her knowledge of specific concepts and provide a quick and useful summary of the reading session. In this file there is information organized in "Translations", "Your Keywords", and "Reading behavior". From this file, the user can extract statistics and analyze all the actions done during the reading session: (a) Translation: in the format "Original language" = "Destination language". (b) Your Keywords: in the format "Keyword"—definition. (c) Automatic section: Word of major interest and Number of macro-sections, with the time spent in each section and read words.
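The colour rule for macro-sections can be sketched directly from the example thresholds given above (3-5 s red, 5-10 s orange, over 10 s green); the function name and the under-3-s behaviour are our illustrative assumptions, not part of the described system:

```python
# Sketch of the macro-section colouring rule, using the example
# thresholds from the text.

def section_color(reading_time_s):
    """Map the time spent in a macro-section to its rectangle colour.
    Returns None for sections read too briefly to count (< 3 s)."""
    if reading_time_s < 3:
        return None          # assumed: not drawn as a macro-section
    if reading_time_s < 5:
        return "red"         # poorly read
    if reading_time_s < 10:
        return "orange"      # moderately read
    return "green"           # highly read

for t in (2, 4, 7, 12):
    print(t, section_color(t))
```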
5 Tests and Results All tests of the Gaze-Based Intelligent Textbook system were performed in the Computer Vision and Multimedia Laboratory of the University of Pavia. The distance between the user and the tracker was about 60 cm. System performance was tested with an Asus VE247 Full HD screen, 23 inches with a resolution of 1920 × 1080. A test population of 25 users, 15 males and 10 females, was involved in the three experiments; in particular, four males and three females wore glasses. Three different settings were considered in the experimental part of our work: (1) reading activity; (2) finding-keywords activity; (3) three-column document reading activity. The first test consisted of comprehensively reading a document. The generated PDF files (fileWithGaze and summaryFile) were checked to extract a "score". The second test did not require reading an entire page but only searching for three keywords within it; the idea was to test whether the software correctly matched the regions where the user had found the keywords and whether it classified them as "not read" areas. Finally, the third test considers a document written in three columns, and the goal is to understand whether the application is "robust" when a different document formatting is used. Results are grouped into three classes, namely "Bad", "Average", and "Good", and testers are subdivided into two groups, "with glasses" and "without glasses". These two groups are necessary because glasses can sometimes create problems in calibration and precision performance. This section presents the results obtained from the 25 testers grouped into the three score categories (bad, average, good). At the end, a summary graph illustrates the application performance. The idea is to test how correctly the software detects the sections read by the testers and the time spent on every word.
Scores were assigned in the following way: (a) two correct sections with all the sentences included: good; (b) two correct sections with at least one sentence missing: average; (c) zero, one, or more than two sections detected: bad. From Fig. 1 it is possible to notice that there are two main sections, namely "Introduzione" (Eng. Introduction) and "Definizioni" (Eng. Definitions). For this reason, the user was instructed to read these two sections, after which we checked whether the software had correctly recognized this pattern by drawing two rectangles around them. The second experiment tried to establish whether the software could precisely recognize the position of a keyword observed by the tester by drawing a rectangle around the
Fig. 1 Examples of the first test
keyword's sentence. If the drawn rectangle includes other sentences, the final score is lower. For this experiment, the user was required to search for three keywords, i.e., combustione, ricostituite, and razionale. The document employed was the same as in the first experiment, and the scores were assigned as follows: (a) one section around each of the three keywords' sentences: good; (b) one section around each of the three keywords' sentences plus up to two extra sentences: average; (c) zero, one, two, or more than three sections detected: bad.
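The scoring rules of the first two experiments can be expressed as small functions; this is a sketch, and the argument names are our interpretation of the prose rules:

```python
# Sketch of the scoring rules for the first two experiments,
# expressed over counts of detected sections and sentences.

def score_first_test(n_sections, all_sentences_included):
    """First experiment: two target sections must be detected."""
    if n_sections == 2 and all_sentences_included:
        return "good"
    if n_sections == 2:
        return "average"        # at least one sentence missing
    return "bad"                # zero, one, or more than two sections

def score_second_test(n_sections, extra_sentences):
    """Second experiment: one section around each of three keywords."""
    if n_sections == 3 and extra_sentences == 0:
        return "good"
    if n_sections == 3 and extra_sentences <= 2:
        return "average"
    return "bad"                # wrong number of sections

print(score_first_test(2, True))    # good
print(score_second_test(3, 2))      # average
print(score_second_test(1, 0))      # bad
```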
The third experiment used a document written in three columns. The aim was to detect whether the system recognized the column read by the tester. As can be noticed from Fig. 2, the document is subdivided into three distinct columns; the user was instructed to read the central column, and we then checked whether the software had correctly recognized the selected column instead of the others. The scores were assigned in this way: • one section around the central column: good score; • one section that includes parts of the left or right columns: average score; • zero or more than one section detected: bad score. In the first experiment, the summary score was "Bad" in six cases, "Average" in fourteen cases, and "Good" in five cases. In the "Average" case the system worked acceptably for testers both with and without glasses
Fig. 2 Examples of the third test
Fig. 3 Experiments’ summary score graph
(respectively 5 and 9); "Good" scores were obtained only for users who did not wear glasses. From the summary of the second experiment shown in Fig. 3, we can see that the "Bad" case is the most frequent (14 occurrences), followed by the "Average" case (11). No "Good" scores were obtained. This rather unsatisfactory result may depend on the ambient light conditions or on the parameters we used for the Campbell-Maglio algorithm. One possible improvement is to obtain the characteristics of the document in advance, such as font size or line spacing, in order to optimize these parameters; in this way, the application would more precisely distinguish a skimming action from a reading action. The third experiment score graph shows seven "Good" scores and four "Bad" scores. It seems that documents composed of more than one column mostly exhibit "Average" performance (14 cases), with a "variance" lower than, for example, in the first experiment, where the votes were spread more homogeneously.
6 Discussion and Conclusions In this paper we have presented an application that exploits eye tracking to trace user activities while reading documents. Significant advances in state-of-the-art eye-tracking technology have produced cheaper, more comfortable, and more precise devices. It is therefore now feasible to collect information about user behavior, such as areas of interest, points of fixation, unknown words, etc. In this context, we tried to provide a system able to extract all this information to improve the reading experience. In particular, the possibility to look up keywords is useful in all those cases in which the user does not know the meaning of a specific word or wants detailed information associated with it. For translation support, we found that most applications provide only single-word translations; for this reason, in the developed application the user can select portions of text of different sizes, from one to many lines. The system also tries to help the user in the case where the text is not large enough or
simple to analyze; a magnifying glass can improve this situation, and its parameters can be changed in real time directly through the graphical user interface (GUI) by setting the radius size and zoom factor. The fixation and macro-section detection features use, respectively, classes of dispersion-based algorithms [2, 5] and the Campbell-Maglio algorithm [4]. For the dispersion algorithm we developed an auto-calibration procedure that checks every seven seconds how the user is reading the document (the speed-reading feature) in order to correctly set the threshold determining whether a pool of points can be considered a fixation. For the Campbell-Maglio algorithm, the ranges used to determine whether an eye movement should be considered "short", "medium" or "long" were chosen empirically, taking advantage of the carried-out tests. One future improvement would be to dynamically change the "short", "medium", and "long" ranges of the Campbell-Maglio algorithm in relation to the font size, so that the system is correctly balanced for every type of document and lighting condition. Another possible future work is the extraction of the lines of each macro-section and the creation of a single PDF file linking all these sections; in this way, a long document could be turned into one that is shorter and easier to analyze and understand. Finally, the application could be made completely "automatic": translations and keyword definitions could be triggered not through the keyboard but through eye gestures or by looking at graphical elements directly in the graphical user interface.
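The dispersion-based fixation detection the system builds on can be sketched as the classic I-DT algorithm; the threshold below is the quantity the auto-calibration procedure would periodically re-estimate. A minimal sketch, with window size and units as assumptions:

```python
# Sketch of dispersion-threshold fixation identification (I-DT) in the
# style of Salvucci and Goldberg [14]. Units and min_window are
# assumptions; the application's auto-calibration would periodically
# re-estimate `threshold` from the observed reading speed.

def _dispersion(span):
    xs = [p[0] for p in span]
    ys = [p[1] for p in span]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, threshold, min_window=5):
    """points: list of (x, y) gaze samples.
    Returns (start_index, end_index) pairs, one per detected fixation."""
    fixations, i, n = [], 0, len(points)
    while i + min_window <= n:
        j = i + min_window
        if _dispersion(points[i:j]) <= threshold:
            # Grow the window while the samples stay tightly clustered.
            while j < n and _dispersion(points[i:j + 1]) <= threshold:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # slide past a saccade sample
    return fixations
```

A pool of samples counts as a fixation only when it both spans the minimum window and stays within the dispersion threshold, which is why tuning that threshold to the user's reading behavior matters.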
References
1. Al-Rahayfeh A, Faezipour M (2013) Eye tracking and head movement detection: a state-of-the-art survey. IEEE J Transl Eng Health Med 1, Art no. 2100212
2. Brousseau B, Rose J, Eizenman M (2020) Hybrid eye-tracking on a smartphone with CNN feature extraction and an infrared 3D model. Sensors 20(2):543
3. Buscher G, Dengel A, Biedert R, Elst LV (2012) Attentive documents: eye tracking as implicit feedback for information retrieval and beyond. ACM TiiS 1(2):1–30
4. Campbell CS, Maglio PP (2001) A robust algorithm for reading detection
5. Castner N, Geßler L, Hüttig F, Kasneci E (2020) Towards expert gaze modeling and recognition of a user's attention in realtime. Proc Comput Sci
6. Duarte RB, da Silveira DS, de Albuquerque Brito V, Lopes CS (2020) A systematic review on the usage of eye-tracking in understanding process models. Bus Process Manag J
7. Duchowski AT (2017) Eye tracking methodology: theory and practice. Springer
8. Geise S (2011) Eyetracking in communication and media studies: theory, method and critical reflection. Stud Commun Media 1(2):149–263
9. Hyrskykari A, Majaranta P, Aaltonen A, Räihä KJ (2000) Design issues of iDict: a gaze-assisted translation aid. In: Symposium on eye tracking research & applications, pp 9–14
10. Kirsh I (2020) Automatic complex word identification using implicit feedback from user copy operations. In: International Conference on Web Information Systems, pp 155–166
11. Ohno T (2004) EyePrint: support of document browsing with eye gaze trace. In: Proceedings of the 6th international conference on multimodal interfaces, pp 16–23
12. Ponsoda V, Scott D, Findlay JA (1995) A probability vector and transition matrix analysis of eye movements during visual search. Acta Psychol 88(2):167–185
13. Rayner K (1998) Eye movements in reading and information processing: 20 years of research. Psychol Bull 124(3):372
14. Salvucci DD, Goldberg JH (2000) Identifying fixations and saccades in eye-tracking protocols. In: Symposium on eye tracking research & applications, pp 71–78
Advanced System Modeling for Industry Revolution
Aligning DevOps Concepts with Agile Models of the Software Development Life Cycle (SDLC) in Pursuit of Continuous Regulatory Compliance
Kieran Byrne and Antoinette Cevenini
Abstract In the historical landscape of software development, regulatory compliance considerations may have been considered low-priority by teams outside of highly regulated industries such as medical devices, finance, avionics, and cyber-physical systems. As mission- and safety-critical domains have traditionally favored requirement- and verification-driven linear forms of development, utilization of DevOps concepts to support regulatory compliance goals is still a relatively novel concept. With software delivery and integration requirements increasing in highly regulated domains, and with now ubiquitous standards such as the General Data Protection Regulation (GDPR) requiring developers to show evidence of sound data handling, research and practice in this area become increasingly valuable. Further, examples of customizing DevOps approaches for specific challenges, such as DevSecOps for engineering security, present a plausible template that could be applied to engineering compliance in a rapidly iterative model. Through exploring learnings from domains comfortable with an Agile/DevOps approach to the SDLC as they implement emerging requirements, confidence in the compatibility of such practices with the delivery and integration of highly regulated software products is growing, laying the foundation for a movement towards continuous delivery and iteration of regulatory-compliant software across the industry. This paper will review and discuss available literature in five key areas: Agile/DevOps alignment, alignment of regulation with requirement engineering, compatibility of regulations with DevOps concepts, emerging forms of DevOps, and the movement towards continuous compliance.
Keywords DevOps · Agile · Regulatory compliance · Requirement engineering · DevSecOps
K. Byrne (B) Charles Sturt University, Sydney, NSW, Australia e-mail: [email protected]
A. Cevenini Australian Computer Society, Western Sydney University, Sydney, NSW, Australia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_32
K. Byrne and A. Cevenini
1 Introduction In recent years, software development organizations across a broad range of domains have sought alignment between development teams and end users by replacing traditional linear approaches to the software development life cycle (SDLC) with the more flexible Agile methodology [1]. Further improvements have been reported through an alignment of development and IT/operations teams known as DevOps. Adoption of Agile principles and practices, combined with DevOps-influenced automation tools, allows development teams to create vital feedback loops at every stage of the development life cycle, letting them use input from a broad range of stakeholders to achieve continuous delivery of, and integration of new features into, high-quality software [2]. Such organizational improvements yield benefits not only in the speed of high-quality deliverables, but also in risk reduction and stakeholder satisfaction [3]. Despite these reported benefits, a number of development organizations still struggle to get the cultural buy-in required to implement an aligned Agile/DevOps model of the software development life cycle [4]. In particular, certain domains producing safety- and mission-critical output have been more hesitant to adopt such strategies, with traditional approaches still favoring the linear models thought necessary to support immutable validation and verification requirements [5]. Given the increasing demand for software projects within such domains, and a landscape in which regulatory requirements are implemented in domains where developers have historically been able to transfer regulatory risk to end users [6], researchers and practitioners increasingly seek development workflows focused on balancing requirements engineering steps with methods that can reduce validation and verification lead times on hard requirements [6].
With new research showing evidence of compatibility between the benefits provided by Agile/DevOps models of the SDLC and the output of software meeting regulatory requirements [7], and with emerging forms of DevOps delivering benefits such as continuous security [8] and continuous documentation [9], it seems reasonable that continuous regulatory compliance may be a plausible utilization of an Agile/DevOps model [10]. This research avoids investigation of specific tools and proprietary implementations of SDLC improvements, and does not focus too closely on the nuances of any specific regulation; rather, it explores high-level Agile/DevOps concepts and benefits, and the general problems that requirement-heavy software development may present. This research seeks to contribute to a greater understanding of this problem by exploring the accepted definition and understood benefits of an aligned Agile/DevOps model of the software development life cycle, as well as the landscape of regulatory compliance considerations across a broad range of development domains, encompassing both mission/safety-critical areas and increasingly ubiquitous regulation in other domains. Here, the work will attempt to give some insight into
Aligning DevOps Concepts with Agile Models of the Software …
the real work required for regulation and explore how parallels with traditional approaches to requirement engineering may give clues to integrating the Agile/DevOps approach. This work will attempt to show examples of the compatibility of an aligned Agile/DevOps model with delivery of highly compliant software, and further explore how emerging extensions of DevOps approaches, such as DevSecOps (focused on supporting continuous security) and DevDocOps (focused on supporting continuous documentation), lay the foundation for a plausible development model focused on continuous regulatory compliance. To lay the foundation for this future work, this research will perform a qualitative analysis across five areas: Agile/DevOps alignment, alignment of regulation and requirement engineering, compatibility of Agile/DevOps with regulatory requirements, emerging forms of DevOps, and the movement towards continuous compliance.
1.1 Methodology The CSU library Primo search tool was used, limiting the search to open-access, publicly available articles published between 2019 and 2022. The search keywords used are shown and justified in Table 1. Articles linking multiple keywords were preferred, and journals were limited to the areas of computer science and software engineering. Article SCImago ratings were reviewed, and articles with a rating below Q2 were excluded. Conference papers were also excluded, and the titles and abstracts of the remaining papers were reviewed to ensure relevance and usefulness to the topic. The methodology is described in Fig. 1.
Table 1 Keywords
Keyword | Justification
Agile | To find papers on agile methodology linked to other keywords relevant to compliance
DevOps | To find useful definitions of concepts and links to compliance keywords
Compliance | Useful, limited to software development terms, to show challenges and potential for Agile/DevOps compatibility
Regulation, GDPR | Papers on relatively recent and ubiquitous regulatory requirements in software development
Requirement Engineering | Useful for compatibility with Agile/DevOps and as a parallel to regulatory requirements
DevSecOps, DevDocOps | Useful to show emerging forms of DevOps offering some "continuous" benefit
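The inclusion and exclusion criteria above can be expressed as a small filter. A sketch only: the record fields (`year`, `type`, `scimago`) are hypothetical and not part of the actual Primo export:

```python
# Sketch of the article-selection criteria from Sect. 1.1 as a filter.
# The record fields (year, type, scimago) are hypothetical, not part of
# the real Primo workflow.

def keep_article(article):
    """Inclusion criteria: published 2019-2022, journal article (no
    conference papers), SCImago rating of Q2 or better."""
    return (2019 <= article["year"] <= 2022
            and article["type"] != "conference"
            and article["scimago"] in ("Q1", "Q2"))

candidates = [
    {"year": 2021, "type": "journal", "scimago": "Q1"},
    {"year": 2018, "type": "journal", "scimago": "Q1"},    # outside date range
    {"year": 2020, "type": "conference", "scimago": "Q2"}, # conference paper
    {"year": 2022, "type": "journal", "scimago": "Q3"},    # below Q2
]
selected = [a for a in candidates if keep_article(a)]
```

Only the first candidate survives all three criteria; title and abstract review would then be applied manually to the survivors.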
Fig. 1 Research methodology
2 Literature Review Implementation of aligned Agile/DevOps models of software development intended to achieve continuous regulatory compliance is a novel and relatively underexplored concept. The lack of useful papers linking keywords related to regulatory compliance to the relevant development terms is apparent in Table 2, which shows the classification of the utilized articles against the keywords searched. Further, as the area of continuous compliance and the utilization of DevOps strategies to address regulatory requirements is still relatively novel, there were difficulties sourcing quantitative analysis papers; qualitative papers exploring DevOps in its current state were therefore sourced to provide a useful comparison. A rundown of study types, sources, settings, and applied areas is displayed in the component table (Table 3), informing the classification table (Table 4). This section will provide an overview of literature defining terms, challenges, and opportunities to lay the foundation for the movement towards continuous regulatory compliance in development. To demonstrate the availability of research in the investigated domain, a network map of relevant keyword searches is included in Fig. 2.
2.1 Aligning Agile and DevOps Since the 1990s, many domains of software development have adopted Agile methodology as a well-accepted and proven approach to aligning end users' emerging needs with a phase-based approach to software development [3]. While such adoptions have reported benefits to the speed and quality of software deliverables, many organizations still struggled to align development teams with operational teams, resulting in the siloing of technical and operational functions and in bottlenecks that cause delays for development teams [3]. The term "DevOps" was coined in 2009 to describe an approach focused on breaking down the siloing of roles and addressing the misalignment between rapidly iterative agile development teams and operational teams focused on secure deployment, who, given the traditional expectation of a careful and slower approach, would represent a bottleneck to rapid deployment and updates of software offerings [3]. DevOps practices can functionally be understood through four core values: Culture, Automation, Measurement and Sharing [3], with culture referring to an
Table 2 Keyword classification table
Factors | Attribute | Instances
Study Selection | Study Type | Quantitative, Qualitative/quantitative, Qualitative, Descriptive Literature Review
Study Selection | Study Sources | European Journal of Information Systems, Computers in Industry, ACM Computing Surveys, Artificial Intelligence and Law, IEEE Internet Computing, IEEE Access, Computers & Security, Software: Practice & Experience, The Journal of Systems and Software, Information and Software Technology
Study Selection | Settings | Agile, DevOps, Requirement Engineering, Regulation, Compliance, DevSecOps, DevDocOps, GDPR
Study Selection | Applied Area | Avionics, Space Exploration, Software Development, Software Engineering, Cybersecurity, mission-critical systems, safety-critical systems, cyber-physical systems, medical devices, data privacy, data security, ISO/IEC, ITSM, Cloud Engineering
Study Selection | Digital Libraries | Elsevier ScienceDirect Journals Complete, Taylor & Francis Journals, ACM Digital Library, EBSCOhost Computers and Applied Sciences Complete, Wiley Online Library, SciTech Premium Collection, ProQuest Central, SpringerLink Journals, IEEE Electronic Library (IEL) Journals, IEEE Xplore, DOAJ Directory of Open Access Journals
Article Selection | Keyword search, inclusion, and exclusion criteria | Systematic Literature Review
(continued)
organization-wide approach focused on collaboration and continuous improvement. This goal is achieved through automation of feedback loops intended to measure progress, and through iteratively improving approaches via further automation and knowledge sharing [2]. DevOps practices can be both a help and a hindrance to regulatory compliance goals. While highly regulated software domains may be seen as incompatible with rapid, frequent delivery, DevOps practices of automation and collaborative sharing can deliver benefits in reducing the lead times required to meet validation and verification requirements [4].
Table 2 (continued)
Factors | Attribute | Instances
Output | Research Approaches | Point to an accepted definition of the Agile/DevOps SDLC. Show compatible frameworks for implementation and explore benefits and challenges at a high level. Point to traditional frameworks compatible with development of highly regulated software; explore benefits and challenges. Explore emerging standards at a high level and point to examples of Agile/DevOps model compatibility. Show emerging models in highly regulated domains compatible with Agile/DevOps models and concepts
Output | Metrics | SDLC frameworks, workflow models, cloud architecture models, requirement engineering feedback models, survey, organisational documentation review, literature review
Output | Participants | Participating development organisations and referenced studies
Output | Primary Output | Qualitative analysis of Agile and DevOps concepts, requirement engineering practices pertaining to security and regulatory compliance, Agile-compatible workflows relevant to regulatory compliance, and emerging extensions of DevOps (e.g. DevSecOps). Classification and component tables, network visualisation map, research methodology description
Further, the signature feedback loops that form the foundation of DevOps culture may also be compatible with forms of requirement engineering intended to build and improve compliance-awareness throughout the software development life cycle [1]. Given the reported benefits of Agile/DevOps alignment in software delivery, it seems logical that such methods can also be extended to improve delivery of software with high regulatory compliance requirements.
2.2 Aligning Regulation and Requirement Engineering Historically, software developers were often able to transfer the risk of error and failure largely to the end user through end-user license agreements (EULAs); however,
Table 3 Component table
Article | Study type | Study source | Settings | Applied area
[1] | Qualitative, Descriptive Literature Review | European Journal of Information Systems | Agile, DevOps, ITSM, DevSecOps | Software Development
[2] | Qualitative | European Journal of Information Systems | Agile, Requirement Engineering, DevOps, DevSecOps | Software Development
[3] | Qualitative/quantitative | Computers in Industry | Agile, Requirement Engineering, DevOps, GDPR | Software Development
[4] | Qualitative | ACM Computing Surveys | Agile, Requirement Engineering, DevOps, DevSecOps | Software Development
[5] | Qualitative/quantitative | Artificial Intelligence and Law | Agile, Requirement Engineering, DevOps, DevSecOps, Regulation, Compliance | Software Engineering, Space Exploration
[6] | Qualitative/quantitative | IEEE Internet Computing | Agile, Requirement Engineering, DevOps, DevSecOps, Regulation, Compliance, GDPR | Cloud Engineering
[7] | Qualitative/quantitative | IEEE Access | Agile, Requirement Engineering, Regulation, Compliance, GDPR | Software Engineering, Space Exploration
[8] | Qualitative/quantitative | Computers & Security | Agile, Requirement Engineering, DevOps, DevSecOps | Cloud Engineering, Cybersecurity
[9] | Qualitative/quantitative | Software: Practice & Experience | Agile, Requirement Engineering, DevOps, DevSecOps, DevDocOps | Software Development
[10] | Qualitative, Descriptive Literature Review | Computers in Industry | Agile, Requirement Engineering, DevOps, Regulation, Compliance | Avionics, Software Engineering
[11] | Qualitative, Quantitative | The Journal of Systems and Software | Agile, Requirement Engineering, DevOps, DevSecOps | Software Engineering, Cybersecurity
[12] | Descriptive Literature Review | Information and Software Technology | Agile, Requirement Engineering, DevOps, DevSecOps, Compliance | Software Engineering, Cybersecurity
Table 4 Classification table
Keyword | Articles
Agile | [1]–[12]
Requirement Engineering (RE) | [2]–[12]
DevOps | [1]–[6], [8]–[12]
DevSecOps | [1], [2], [4]–[6], [8], [9], [11], [12]
DevDocOps | [9]
Regulatory Compliance | [5]–[7], [10], [12]
GDPR | [3], [6], [7]
the increasing effectiveness of software offerings across domains such as medical devices, mission- and safety-critical systems, finance, voting, and many other highly regulated areas has added further legal and regulatory compliance overheads to software development projects [6]. With increasing duty-of-care obligations on software developers, new bottlenecks arise in meeting regulatory and compliance requirements, upsetting the pace of even the most Agile/DevOps-abiding organizations.
Fig. 2 Network visualization map. Created with VOSViewer for Mac v1.6.18
As more lead times are added in reviewing and fulfilling requirements, distributing the effort of Requirement Engineering (RE) across the life of a project seems an attractive way to prevent such bottlenecks. While traditional linear (waterfall) approaches to development would treat RE as the first, immutable step in a development project, some researchers and practitioners of Agile/DevOps approaches have shown how an RE phase can continue throughout the life of a project, constantly feeding back into risk assessments, providing more accurate estimations of effort, and preventing initial bottlenecks, even in highly regulated software development domains [5]. The addition of DevOps practices targeting manual tasks for automation can further support the integration of RE through every step of the SDLC, increasing not just the speed of delivery and improvement but also security, record keeping, and the stability and compliance-awareness of the resulting systems [7]. While such RE approaches focused on meeting compliance requirements may still be relatively rare and under-researched, it is notable that areas of development focused on building security into products have dealt with similar delays and bottlenecks in the RE stages of projects, and there is good evidence of an automated approach to security requirement engineering providing some solutions [11]. This analogy will be further explored in Sect. 2.4, which discusses DevSecOps and other emerging forms of DevOps.
Considering the effectiveness of such RE approaches in organizations already comfortable with Agile/DevOps approaches to the SDLC, it seems logical that finding compatibility between Agile/DevOps models and the delivery of highly regulated software development projects may provide benefits in delivery speed and stability while decreasing the necessary effort.
2.3 Compatibility of Agile/DevOps with Regulatory Requirements Within domains beholden to stricter regulatory requirement overheads, there is a traditionally cautious attitude towards Agile/DevOps approaches [5]. Here, linear development approaches are thought of as necessary, owing to the vital validation and verification steps required to ensure all requirements of a given standard are fulfilled. While such caution in safety- and mission-critical domains, in which the cost of failure can be catastrophic, is understandable, it is not the case that the relevant standards necessitate a linear approach to development [5]. Recent initiatives in regulatory standards focused on protecting user data culminated in the 2018 adoption of the now ubiquitous General Data Protection Regulation (GDPR), adding new, non-negotiable regulatory requirements for data protection to software development projects that may in the past have been able to avoid such overheads through clever EULAs [7]. While such rigid requirements may initially have been a spanner in the works for many Agile/DevOps teams, there is good evidence demonstrating how a DevOps approach to process automation can help reduce initial effort, replace many manual steps, and ultimately spread the workload across the life of a software development project, while feeding requirement engineering lessons back to inform those steps in future projects [7]. The ready utilization of DevOps practices to support an approach of "continuous requirements engineering" [7] mirrors a number of emerging extensions of DevOps, to be explored further in the next section.
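A sketch of what such automation of a GDPR-style requirement might look like: a build-time gate that fails whenever a personal-data field lacks a declared processing purpose or retention period. The schema format below is invented purely for illustration:

```python
# Hypothetical schema format; a sketch of a "continuous requirements
# engineering" gate, not a real GDPR-compliance tool. A CI pipeline
# could run this check on every commit and fail the build on violations.

SCHEMA = {
    "email":      {"personal": True,  "purpose": "account login", "retention_days": 365},
    "last_login": {"personal": True,  "purpose": None,            "retention_days": None},
    "theme":      {"personal": False},
}

def compliance_violations(schema):
    """Collect missing-metadata problems for every personal-data field."""
    problems = []
    for field_name, meta in schema.items():
        if not meta.get("personal"):
            continue  # non-personal fields carry no extra obligations here
        if not meta.get("purpose"):
            problems.append(f"{field_name}: no declared processing purpose")
        if not meta.get("retention_days"):
            problems.append(f"{field_name}: no retention period")
    return problems
```

Running the check as part of every build spreads the compliance workload across the project's life rather than concentrating it in a final review.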
2.4 Emerging Forms of DevOps As mentioned in the previous section, development of software with critical security requirements may be seen as an analogy for development of highly regulated software offerings. Within this domain, developers may be focused on preserving the confidentiality, integrity, and availability of backend data to protect their users and avoid reputational damage to their organization. Within a growing landscape of ever-changing security threats, success in this area is increasingly important [11]. Within traditional approaches to delivery of highly secure software products, teams would have the option of front-loading requirements engineering, adding
substantial lead times before development and requiring some inflexibility in the SDLC, or of incurring the technical debt of addressing security concerns later in development. Further, as the security landscape is often altered by new threats and mitigations, final deliveries would regularly require expensive updates that may detract from stability or risk reputational damage to the developing organization [8]. To help deal with these challenges, DevOps practitioners focused on security-critical projects began to customize their practices around security assurance, developing a method focused on delivering "continuous security" through the pillars of DevOps. Here, the culture aims to build security awareness amongst the development team. This is achieved by un-siloing security experts within the team, automating manual security tasks, measuring security features and feeding back learnings so that teams can constantly improve security awareness, and implementing secure coding practices such as security-as-code [8]. Through implementation of such practices, evidence shows reductions in requirement engineering lead times, reduction in the technical debt incurred to fulfil security requirements, and ultimately a more secure product with lower post-release security-update costs, resulting in a more efficient and less expensive SDLC [12]. Certainly, there are some stark differences between engineering for security and for compliance; while the security landscape is constantly shifting, regulatory standards tend to be much more resistant to change. While DevSecOps (aligning Development, Security and Operations) is the most visible extension of DevOps, it is certainly not the only emerging branch, nor even the one most compatible with continuous regulatory compliance. When addressing large software development projects, it is understood that any output will likely require some degree of supporting documentation.
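One staple of the DevSecOps security automation described above can be sketched as a pre-commit scan for hard-coded credentials. The patterns are illustrative only, not a complete secret scanner:

```python
import re

# Minimal sketch of one automatable security task from the DevSecOps
# toolbox: scanning committed text for likely hard-coded credentials.
# The two patterns below are illustrative, not an exhaustive rule set.

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_for_secrets(text):
    """Return matched lines so a CI gate can reject the commit."""
    hits = []
    for line in text.splitlines():
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append(line.strip())
                break
    return hits
```

Wired into a pipeline, a non-empty result fails the build, turning a formerly manual review task into a continuous, automated check.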
In a traditional approach, this requirement would see teams devoting resources to the manual creation and maintenance of such documentation. Depending on the organization, this resourcing may be provided by the development or the operations team, and in smaller organizations, finding such resources may become an obstacle to continuous delivery. Alternatively, documentation may be postponed to a later stage in development, causing greater effort, greater potential for misunderstanding amongst teams, and still presenting an obstacle to delivery. Within Agile/DevOps models of development, some recent evidence shows DevOps practices (mainly continuous automation) being used to implement automated documentation creation, within a field now known as DevDocOps (aligning Development, Documentation, and Operations) [9]. This area is still emerging, and while minimal useful research exists, it does suggest that the DevOps practices of measurement, sharing, automation and culture may be applied to a broad range of development problems, creating new terminology but essentially abiding by the central tenets of DevOps.
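The DevDocOps idea of automating documentation creation can be sketched as a build step that regenerates an API reference from code docstrings. The functions below are hypothetical examples, not from any cited system:

```python
import inspect

# Sketch of automated documentation creation in the DevDocOps spirit:
# regenerate a reference page from docstrings on every build, instead
# of hand-maintaining it. `charge` and `refund` are invented examples.

def charge(amount_cents: int) -> str:
    """Charge the customer and return a transaction id."""
    return f"txn-{amount_cents}"

def refund(txn_id: str) -> bool:
    """Refund a previous transaction."""
    return txn_id.startswith("txn-")

def generate_docs(functions):
    """Render a plain-text API reference from signatures and docstrings."""
    lines = ["API Reference", ""]
    for fn in functions:
        signature = inspect.signature(fn)
        lines.append(f"{fn.__name__}{signature}")
        lines.append(inspect.getdoc(fn) or "(undocumented)")
        lines.append("")
    return "\n".join(lines)

doc_page = generate_docs([charge, refund])
```

Because the page is derived from the code on each run, it cannot drift out of date the way postponed manual documentation can.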
2.5 Towards Continuous Compliance Given the benefits observed in aligning development teams and end users through agile development, and the further benefits of including operational teams in such alignment, development teams tasked with delivering highly regulated software products are becoming more confident in finding compatibility with Agile/DevOps methods. While most examples researched represent some hybrid of linear development with Agile/DevOps methods [10], there is some exciting progress in the workflow compatibility of mission- and safety-critical development practices. Some practitioners show evidence of universal workflows that may be applied to all tasks on software projects, building validation and verification steps into each ticket and reducing lead times and bottlenecks throughout the SDLC [10]. While such research is still in its early days, these approaches challenge the traditionalist conclusion that highly regulated software is only compatible with linear development, empowering teams to further push the envelope of continuous delivery and integration of such products while still fulfilling the relevant regulatory requirements.
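The universal-workflow idea, with validation and verification built into every ticket, can be sketched as a data model in which a ticket cannot close without both kinds of evidence. The field names are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical ticket model sketching the "universal workflow" idea:
# every task carries its own validation and verification steps, so
# compliance evidence accumulates continuously rather than at the end.

@dataclass
class Ticket:
    title: str
    implemented: bool = False
    validation_evidence: list = field(default_factory=list)    # built the right thing
    verification_evidence: list = field(default_factory=list)  # built it right

    def can_close(self):
        """A ticket is done only with both V&V evidence attached."""
        return (self.implemented
                and len(self.validation_evidence) > 0
                and len(self.verification_evidence) > 0)
```

Because the gate lives in the ticket itself, lead times are spread across the SDLC instead of piling up in a final validation phase.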
3 Discussion Since its broad adoption across many software development domains, ample evidence exists of the effectiveness of DevOps practices supporting agile models of the SDLC to improve the speed of delivery and iteration of software products [2]. Despite such evidence, development domains in which software products are beholden to higher regulatory requirements have preferred linear approaches to the SDLC, fearing that the rapid iteration of agile models would be negated by the higher need for validation and verification of regulatory requirements [10]. As end-user calls for higher regulation across all domains of development see the implementation of ubiquitous standards such as GDPR [7], domains comfortable with Agile/DevOps models of software development are now challenged to apply their practices to reduce the validation and verification lead times resulting from the newly imposed regulatory requirements. Learnings from emerging branches of DevOps focused on applying continuous culture to narrow areas of requirement, such as documentation [9] or security [12], can likely be applied to provide templates for the necessary distribution of requirements engineering effort across the SDLC. Further, such learnings can likely feed back into existing DevOps models to provide greater evidence of reliable delivery of software fulfilling nuanced and complicated requirements [11], helping to reassure practitioners in more highly regulated domains. In the example of DevSecOps, an emergent form of DevOps focused on providing continuous security, the narrow and complicated requirement added to any development project is the delivery and iteration of secure software [8]. Considering the broad landscape of threats to the confidentiality, integrity, and availability of end-user data, as well as the stability and continuity of dependent systems
Aligning DevOps Concepts with Agile Models of the Software …
required to support the maintenance of software in production, it is unsurprising that DevOps practitioners quickly attempted to apply the model to engineering for security. Through application of DevOps staples, DevSecOps practitioners show evidence that automating security tasks, monitoring and measuring security threats and the threat landscape, and sharing security knowledge to build a security-aware culture all help reduce the risk and effort of producing secure software [8]. While the DevSecOps culture presents many benefits in delivering secure output, it could be argued that it represents a naive siloing of software development goals, treating security requirements as more noteworthy than other development requirements, such as end user and stakeholder satisfaction [12]. Considering that a purported goal of DevOps is to unsilo development stakeholders, it follows logically that DevOps should readily accept a broad range of requirements, including security and compliance as well as stakeholder considerations, each essentially indistinguishable from any other requirement. It is unsurprising that this approach of collective requirements engineering raises hackles in development teams tasked with delivering software in safety- and mission-critical domains such as finance, avionics, and medical devices, where failure to meet requirements can result in loss of life and/or serious legal reprisal [10]. In domains more comfortable with Agile/DevOps models, such failures have historically been less serious. However, with increasing demand from end users for regulation protecting data privacy and providing transparency on development practices, the landscape appears to be changing [7]. It should again be noted that domains outside of safety- and mission-critical fields escaped regulatory compliance restrictions for a long time by transferring risk to end users through increasingly dense EULAs designed to absolve developers of any regulatory responsibilities [7].
Considering the increasing reach of software development into a broad range of domains, broader regulatory compliance seems overdue by many measures of similar industries. As organizations comfortable with Agile/DevOps models of development begin to deal with the challenges of regulatory overheads, some divergence is seen between organizations returning to traditional linear approaches and others attempting to show how the validation and verification lead times thought necessary to ensure strict observance of regulatory considerations can be reduced by the application of DevOps practices [12]. As evidence grows for the plausibility of the latter approach, confidence among development stakeholders in highly regulated domains in agile- and DevOps-compatible workflows grows also [10]. Within domains such as avionics and space travel, workflows much more familiar to DevOps practitioners have begun to appear and be implemented in safety- and mission-critical software projects [10]. While further research is required to truly understand the effectiveness of such strategies, and safety must certainly be prioritized above considerations of rapid delivery, the DevOps movement appears to be heading, excitingly, towards the un-siloing of previously separate development domains divided by their treatment of regulatory requirements.
K. Byrne and A. Cevenini
4 Limitations and Future Work

As described, due to the relative novelty of this area, existing research remains rare, and what exists is largely theoretical. While some specific workflows have been proposed and even implemented, academic evidence of the benefits of such models is limited. Further work, largely centered in domains integrating GDPR requirements into existing agile SDLC models, points only to the challenges of aligning DevOps approaches with compliance considerations but offers no direct solutions. Future work in this area will collect testable data from implementations of agile-compatible workflows in avionics, as well as in other highly regulated domains. Further research could compare the integration of requirements such as GDPR in organizations with an existing DevOps culture. Emerging branches of DevOps culture such as DevDocOps, DevRegOps (aligning Development, Regulation, and Operations) and DevCompOps (aligning Development, Compliance, and Operations) may also present opportunities for future research, though these terms may currently serve largely as eye-catching but ultimately empty inclusions in product marketing material; the most useful and high-quality research will likely focus purely on DevOps alignment with compliance considerations.
5 Conclusion

The recent introduction of broad regulatory standards such as GDPR presents several delivery challenges for software development organizations that have traditionally been able to deflect regulatory compliance responsibilities onto end users through EULAs. As DevOps practitioners within such organizations begin to apply their model to adopt new regulatory requirements in RE steps throughout the SDLC, domains involving the delivery of highly regulated software, traditionally reliant on the linear development methods thought necessary to fulfil regulatory requirements, have also begun to experiment with implementing DevOps-compatible workflows. As these domains learn from each other's practices, and as end users' calls for further regulation seem likely to impose broader responsibilities across all software development organizations, it seems plausible that the traditional siloing of safety- and mission-critical domains, once thought to be extremely specialized, may increasingly break down.
Table 5 Abbreviations

Abbreviation  Definition
DevOps        Alignment between Development teams and Operation teams focused on continuous delivery and integration of software output
DevSecOps     Alignment between Development teams and Operation teams focused on providing continuous Security
DevDocOps     Alignment between Development teams and Operation teams focused on continuous Documentation
DevRegOps     Alignment between Development teams and Operation teams focused on continuous Regulation
EULA          End User License Agreement
ISO           International Organization for Standardization
IEC           The International Electrotechnical Commission
RE            Requirement Engineering
SDLC          Software Development Life Cycle
Appendix

References

1. Aberkane A-J, Poels G, vanden Broucke S (2021) Exploring automated GDPR-compliance in requirements engineering: a systematic mapping study. IEEE Access 9:66542–66559. https://doi.org/10.1109/ACCESS.2021.3076921
2. Baron C, Louis V (2021) Towards a continuous certification of safety-critical avionics software. Comput Ind 125. https://doi.org/10.1016/j.compind.2020.103382
3. Casola V, De Benedictis A, Rak M, Villano U (2020) A novel security-by-design methodology: modeling and assessing security by SLAs with a quantitative approach. J Syst Softw 163. https://doi.org/10.1016/j.jss.2020.110537
4. Castellanos-Ardila JP, Gallina B, Governatori G (2021) Compliance-aware engineering process plans: the case of space software engineering processes. Artif Intell Law 29(4). https://doi.org/10.1007/s10506-021-09285-5
5. Gall M, Pigni F (2021) Taking DevOps mainstream: a critical review and conceptual framework. Eur J Inf Syst. https://doi.org/10.1080/0960085X.2021.1997100
6. Hemon-Hildgen A, Rowe F, Laetitia M (2020) Orchestrating automation and sharing in DevOps teams: a revelatory case of job satisfaction factors, risk and work conditions. Eur J Inf Syst 29(5). https://doi.org/10.1080/0960085X.2020.1782276
7. Kumar R, Goyal R (2020) Modeling continuous security: a conceptual model for automated DevSecOps using open-source software over cloud (ADOC). Comput Secur 97. https://doi.org/10.1016/j.cose.2020.101967
8. Leite L, Kon F, Milojicic D, Meirelles P (2019) A survey of DevOps concepts and challenges. ACM Comput Surv 52(6). https://doi.org/10.1145/3359981
9. Rajapakse RN, Ali Babar M, Shen H (2022) Challenges and solutions when adopting DevSecOps: a systematic review. Inf Softw Technol 141(106700). https://doi.org/10.1016/j.infsof.2021.106700
10. Rong G, Jin Z, He Z, Youwen Z, Ye W, Dong S (2019) DevDocOps: enabling continuous documentation in alignment with DevOps. Softw Pract Exp 40(3). https://doi.org/10.1002/spe.2770
11. Singh Aujla G, Barati M, Rana O, Dustdar S, Noor A, Tomas Llanos J, Ranjan R (2020) COMPACE: compliance-aware cloud application engineering using blockchain. IEEE Internet Comput 24(5):45–53. https://doi.org/10.1109/mic.2020.3014484
12. Wiedemann A, Wiesche M, Gewald H, Krcmar H (2020) Understanding how DevOps aligns development and operations: a tripartite model of intra-IT alignment. Eur J Inf Syst 29(5). https://doi.org/10.1080/0960085X.2020.1782277
Decentralized Communications for Emergency Services: A Review

Dean Farmer and Antoinette Cevenini
Abstract Currently the world is going through turbulent times. The UN World Food Programme outlined in 2022 that major conflicts persist in Ukraine, Afghanistan, Ethiopia, South Sudan, Syria, and Yemen, in addition to the global COVID-19 pandemic. With such events dramatically affecting the lives of everyday citizens, virtual communications play an increasingly significant role in replacing traditional face-to-face meetings. The use of virtual communications has solved part of the problem, bringing millions of companies, governments, and individuals closer together around the world. However, concerns have arisen around the security and privacy of current centralized communication systems, and as conflicts have continued to grow, so too has the use of communications in emergency situations. The aim of this paper is to better understand how emergency service providers could use a secure, decentralized environment with encrypted communications, through the review of 12 recently published Q1 and Q2 articles in the areas of blockchain and decentralized systems across different communication mediums. This review investigates the methods that organizations use when undertaking activities to improve communication system performance, by describing how different systems can be adopted and utilized. Finally, this review details the different types of blockchain systems, the pros and cons of different models, and the capabilities and performance of different platforms. We review the core system components of blockchain systems and discuss the strengths and weaknesses of these different decentralized systems, along with how to deal with scalability issues to achieve more reliable and secure communication.

Keywords Blockchain · System security · Mobile communications · Secure communications
D. Farmer (B) Charles Sturt University, Sydney, NSW, Australia e-mail: [email protected] A. Cevenini Western Sydney University, Sydney, NSW, Australia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_33
1 Introduction

Due to the outbreak of war and other disasters, communication systems around the world have been thrown into disarray, with governments and companies forcing users to work remotely or not at all. The war in Ukraine has led to a reliance on satellite and mobile communications as the only options for providing communications around the country. Traditional communication systems rely on a centralized server-client relationship; the main problem with this approach is that if the physical equipment providing the communication link to the server is damaged or destroyed, the communication system is crippled. This raises concerns about the reliability and security of traditional systems in both normal and emergency situations. Just days before the Ukraine war started, Russian hackers deployed malware called "Wiper" to infect and erase data from as many Ukrainian government and commercial servers as possible, with the aim of disrupting all Ukrainian communications [1]. This paper aims to review how alternative communication structures could remove the ability to disrupt or crash communication systems by using decentralized systems based on blockchain technology, providing a secure structure that functions on a peer-to-peer basis without the need for server infrastructure, while still providing security, authentication, and integrity in the event of an emergency. The use of blockchain-based communications, with the help of smart contracts and IPFS for file storage, could provide a resilient option [2, 3]. When deciding on the right type of decentralized system, one that is appropriate for the needs of the emergency service provider, different core capabilities need to be considered, such as the speed and time to transfer packets, file sharing, end-to-end encryption, man-in-the-middle (MITM) resistance, and the ability to run over a range of wireless modes including 5G, 6G, and satellite-based public networks.
We also need to understand how authentication will be achieved, and how the system will resist DDoS attacks [2]. Authenticity plays an important role in protecting sensitive data, so it is important to consider maintaining and preserving secrecy on the network, such that a user's details are hidden from other users within the same environment [4].
1.1 Methodology

The review process of this paper was based on the following keyword searches: Blockchain, Mobile Communications, Vehicle Communications, and Decentralized Communications. The CSU Primo library search was conducted for Q1 and Q2 papers published between 2020 and 2022. From the results, conference papers were excluded, with Q1 journals selected as first preference and Q2 journals as backups. The top 20 journals were picked based on titles relevant to decentralized communications. For further review, this was reduced to 15 papers. The methodology of this process is described in Fig. 1.
Fig. 1 Search methodology of research papers
Figure 2 was created with VOSviewer based on data extracted from Scopus. The Scopus data file was created with keywords around blockchain and secure communications for articles published between 2020 and 2022. The figure shows the relationships between keywords, where stronger relationships have larger links. Figure 2 highlights the major research areas related to blockchain communications, with the core links being data communication systems, digital storage, information management, mobile communications, vehicular communications, privacy, and network security. Figure 3 represents a heat map of the core links between keywords and was also created with VOSviewer. The same keyword input data from Fig. 2 was used to create the heat map, which highlights the intensity of the keyword relationships. The rest of the paper is organized as follows: Sect. 2 presents a literature review of the core components of 12 papers; Sect. 3 the discussion, followed by future work; and finally the conclusion.

Fig. 2 Network visualization map (VOSviewer v1.6.18)
Fig. 3 Density view map (Created with VOSViewer v1.6.18)
2 Literature Review

The literature review of 12 papers highlights and reviews the principles around using blockchain in communication systems, with a focus on the core concepts required for blockchain and digital communications. Different articles focus on different areas of blockchain communications, such as social networks, messaging, and wireless systems. As blockchain is a decentralized system, it stores network data in different locations and on different devices without the need for any central administration. Below is a summary of the core concepts that need to be considered when building a blockchain communication system.
2.1 Blockchain Technology

Articles [1, 2, 5, 6] and [9] discuss the makeup of a blockchain. A blockchain is made up of a chain of blocks. Each block holds a string of records that is continuously appended to; the blocks are secured by cryptography, with each block containing a hash value of the previous block. Blockchain works on an open distributed ledger, so by design it cannot be manipulated or modified, with all communication taking place over a peer-to-peer network. If one block is changed, every subsequent block in the chain becomes invalid unless it too is changed. This is controlled by the consensus protocol, through which hosts on the network communicate and agree in a group decision. Figure 4 outlines the structure of a blockchain block. The formation consists of a body and a header containing the hash of the previous block, linking the individual blocks together. Any modification of a block would require the stored previous-block hashes of all following blocks to be recreated for the tampering to go undetected. The creation of a new block requires the devices in the network to validate the block's creation.
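This hash linking can be sketched in a few lines of Python; a toy illustration with made-up record contents, not any production blockchain implementation:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the whole block (header and body) deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash: str, records: list) -> dict:
    """A block carries its records plus the hash of the previous block."""
    return {"prev_hash": prev_hash, "records": records}

def chain_is_valid(chain: list) -> bool:
    """Each block must reference the hash of its predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("0" * 64, ["genesis"])
b1 = make_block(block_hash(genesis), ["alice->bob"])
b2 = make_block(block_hash(b1), ["bob->carol"])
chain = [genesis, b1, b2]

assert chain_is_valid(chain)
b1["records"] = ["alice->mallory"]   # tamper with one block...
assert not chain_is_valid(chain)     # ...and the link to the next block breaks
```

The last two lines demonstrate the point made above: modifying a single block invalidates the stored previous-block hash of its successor, so tampering cannot go undetected without recomputing the rest of the chain.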
Fig. 4 The structure of a block in the blockchain
2.2 Blockchain Operational Types

Articles [6, 8] and [10] discuss the operational types of blockchains. These papers highlight the different types of blockchain systems available. A private blockchain is controlled by a single entity that, by design, has control over the entire network. Entry is only allowed if the participant has been verified, while the central operator retains the right to delete, edit, or override content as required. Data on a private blockchain is not publicly available. A public blockchain is a permissionless network in which everyone can participate in its activities. It has no central authority and runs decentralized, with no one able to edit data once it has been added to the chain. The data is secure since it cannot be modified once hashed. A public permissioned blockchain is by default a private blockchain until a participant has been given permission to enter the system. Permissions are set on users and control what access they have to the specified system. Consortium blockchains are semi-decentralized networks; the difference is that control is granted not to a single entity but to a group of individuals or nodes, and they offer levels of network security that public blockchains do not. They allow for different levels of control and faster processing times. Table 1 below outlines the permissions associated with the different operational types.
Table 1 Permissions

Blockchain           Write    Read     Owner
Public               Public   Public   N/A
Public Permissioned  Limited  Public   Single/Multiple
Consortium           Limited  Limited  Multiple
Private              Limited  Limited  Single
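The permission matrix in Table 1 can be modelled directly as data, with a small access check on top; the names below are illustrative only:

```python
# Toy model of Table 1: who may write to, read from, and own each chain type.
PERMISSIONS = {
    "public":              {"write": "public",  "read": "public",  "owner": None},
    "public_permissioned": {"write": "limited", "read": "public",  "owner": "single/multiple"},
    "consortium":          {"write": "limited", "read": "limited", "owner": "multiple"},
    "private":             {"write": "limited", "read": "limited", "owner": "single"},
}

def can_write(chain_type: str, is_authorized: bool) -> bool:
    """Anyone may write to a public chain; 'limited' chains require authorization."""
    return PERMISSIONS[chain_type]["write"] == "public" or is_authorized

assert can_write("public", is_authorized=False)          # permissionless
assert not can_write("private", is_authorized=False)     # must be verified first
assert can_write("consortium", is_authorized=True)       # authorized group member
```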
2.3 Consensus Protocol

Articles [2] and [9] discuss the use of the consensus protocol. The distributed consensus protocol is what makes decentralization possible in a blockchain system, eliminating the system's requirement for a central authority. All messages passed and decisions made by each node use this protocol. The consensus protocol also determines other parameters of the blockchain system such as scalability, transaction capacity, and fault tolerance. The original implementation of blockchain was built on the Nakamoto consensus protocol, which protected against Bitcoin double-spending attacks in a decentralized network.
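The group-decision idea can be sketched with a simple majority vote; this is a toy stand-in for real protocols such as Nakamoto consensus, not an implementation of them:

```python
from collections import Counter
from typing import Optional

def majority_consensus(votes: dict) -> Optional[str]:
    """Accept a proposed block only if a strict majority of nodes agrees."""
    tally = Counter(votes.values())
    candidate, count = tally.most_common(1)[0]
    return candidate if count * 2 > len(votes) else None

# Three nodes vote on which candidate block should extend the chain.
votes = {"node1": "blockA", "node2": "blockA", "node3": "blockB"}
assert majority_consensus(votes) == "blockA"
assert majority_consensus({"n1": "A", "n2": "B"}) is None  # split vote: no agreement
```

Real consensus protocols also handle dishonest nodes and network delays, which is exactly where properties such as fault tolerance and transaction capacity come from.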
2.4 IPFS Technology

Articles [2] and [10] discuss how IPFS technology works and can be used. Due to the tremendous size of the blockchain (approximately 270 GB), it is not feasible to register all data within the blocks. The InterPlanetary File System (IPFS) is a peer-to-peer, decentralized file distribution system that provides a better way to store files on the internet. IPFS content storage can store files without duplication, unlike HTTP. With the HTTP protocol, users are required to have the URL of the file they wish to download; without the location, they are unable to download the file. IPFS works by creating a hash of the file, which is stored online, eliminating the possibility of duplicate files being stored. All IPFS downloads are requested against the unique file hash. Validation of the hash is then carried out to ensure the correct file is downloaded. IPFS uses a peer-to-peer design with no mutual reliability between any two nodes, in addition to blockchain protocols, to store immutable data in blocks, eliminate duplicates, and search for network content.
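The content-addressing idea, fetching by hash rather than by location, and deduplication for free, can be sketched as follows (a toy in-memory store in the spirit of IPFS, not the IPFS protocol itself):

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: files are added and fetched by the
    hash of their content, so identical files are stored only once."""

    def __init__(self):
        self._blobs = {}

    def add(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # content identifier
        self._blobs[cid] = data                 # re-adding identical data is a no-op
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blobs[cid]
        # Validate the hash so the caller knows the correct file was delivered.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = ContentStore()
cid1 = store.add(b"incident-report contents")
cid2 = store.add(b"incident-report contents")   # duplicate upload
assert cid1 == cid2                             # same content, same address
assert store.get(cid1) == b"incident-report contents"
```

The validation step in `get` mirrors the hash check described above: a download is only accepted if its content hashes to the requested identifier.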
2.5 Smart Contracts

Articles [1], [2] and [3] discuss smart contracts and how they can be used. A smart contract is based on two concepts. The operational concept describes software factors that are not available in the ledger; upon performing these defined obligations
certain rights could control the assets in the ledger. The second concept focuses on how the implementation could be carried out through a software structure, and explains how the operational aspects of writing and interpreting legal contracts are carried out. A smart contract is an enforceable agreement that is carried out by the system; however, this does get complicated under legal obligations. Figure 5 in Appendix A shows the overall design and architecture of an online book authenticity system using an Ethereum smart contract with IPFS storage. Once the smart contract event has been triggered, the following happens:

1. The smart contract is created.
2. A request for verification of the writings from the author is executed in the contract.
3. The author requests a grant from the publishing company.
4. The publishing company obtains permission to publish the desired literature.
5. The main file is electronically uploaded to IPFS by the publishing company.
6. The publishing company requests a grant from the reader.
7. The reader pays the grant and downloads the file from IPFS memory.
8. The smart contract continues to be executed as required.
9. Upon the publishing company receiving a grant, the next file will be published.
10. No other company may upload the file to IPFS without permission.
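The core of the flow above can be sketched as a small state machine. This is an illustrative Python sketch only; a real Ethereum contract would be written in Solidity, with payments and access control enforced on-chain:

```python
class PublishingContract:
    """Toy sketch of the publishing flow: the contract releases a file's
    IPFS address only after the author is verified and the grant is paid."""

    def __init__(self, author: str, price: int):
        self.author = author
        self.price = price
        self.verified = False
        self.file_cid = None   # set when the publisher uploads to IPFS

    def verify_author(self, claimed_author: str):
        # Step 2: verify the writings really come from the author.
        self.verified = (claimed_author == self.author)

    def publish(self, cid: str):
        # Step 5: publisher uploads the file to IPFS and records its address.
        assert self.verified, "author must be verified before publication"
        self.file_cid = cid

    def purchase(self, payment: int) -> str:
        # Steps 6-7: reader pays the grant and receives the IPFS address.
        assert self.file_cid is not None, "nothing published yet"
        assert payment >= self.price, "grant not paid"
        return self.file_cid

contract = PublishingContract(author="alice", price=10)
contract.verify_author("alice")
contract.publish("bafy-example-cid")          # hypothetical CID
assert contract.purchase(10) == "bafy-example-cid"
```

The guards (`assert` statements) play the role of the enforceable obligations: a step simply cannot execute until the prior obligations in the ledger have been met.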
2.6 Blockchain Security

Articles [1], [4] and [6] discuss blockchain security. Securing a blockchain network involves validating users' identities for maximum security while exchanging messages. Communication should only be carried out with valid users with a validated smart contract in place, with all other communications treated as a possible attack. In the first secure process, the user registers their public key and identity, which are then stored on the blockchain. The blockchain then uses a smart contract in the communication process. This protects both the sender and receiver from security threats. For added security, files stored with IPFS technology should be signed with the sender's private key, with the recipient using the corresponding public key to verify them before opening and reviewing.
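The register-then-validate workflow can be sketched as follows. A real system would register an asymmetric public key on the chain and verify signatures against it (e.g. Ed25519); this stdlib sketch substitutes HMAC with a registered key purely to illustrate the accept/reject logic:

```python
import hashlib
import hmac

registry = {}   # stand-in for the on-chain record: identity -> registered key

def register(identity: str, key: bytes):
    """First secure process: the user registers identity and key on-chain."""
    registry[identity] = key

def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def accept(identity: str, message: bytes, signature: str) -> bool:
    """Only communicate with registered users whose signature validates;
    anything else is treated as a possible attack."""
    key = registry.get(identity)
    if key is None:
        return False  # unknown identity: reject
    return hmac.compare_digest(sign(key, message), signature)

register("unit-7", b"unit-7-registered-key")
msg = b"status: all clear"
assert accept("unit-7", msg, sign(b"unit-7-registered-key", msg))
assert not accept("unit-7", msg, "forged-signature")
assert not accept("unknown", msg, sign(b"x", msg))
```

The three assertions cover the three cases described above: a valid registered user, a tampered or forged message, and an unregistered identity.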
2.7 Blockchain Oracles

Articles [1] and [2] discuss what blockchain oracles are used for. Smart contracts are designed to improve performance for business applications with a high level of transparency between stakeholders. However, this would not be possible without blockchain oracles: oracles act as middleware between the outside world and the blockchain. To minimize the chance of fake information, oracles collect data from multiple sources. This extra service comes at a cost; thus, an incentive system should be used to reward honestly behaving oracles [1].
Oracle states change when the external party shares new data. The data then passes from the oracles to the blockchain with encryption to ensure integrity. Oracles come in five categories: hardware, software, outbound, inbound, and consensus-based. Hardware-based oracles interface with devices directly through RFID chips and sensors to supply data points to the blockchain. Software-based oracles use scrapers, webhooks, and Web APIs to verify and feed data to the blockchain. An inbound oracle provides external data to a smart contract. Outbound oracles transfer information from a smart contract to an outside point [1].
3 Discussion

The literature review process helped us to understand the core blockchain components required to create and secure a blockchain communication system, and to detail how we could deal with data in terms of storage, security, and authentication. However, a blockchain system for emergency services will have other core challenges and requirements around access and system links. The aim of this discussion is to give feedback and ideas about how system enhancements could be undertaken for future research. The strength of a blockchain system is based on its cryptographic keys, which protect against data manipulation and ensure the network is always secure and stable. However, now that RSA and elliptic-curve cryptographic systems could be cracked by quantum computers, all future communication systems must be designed to be more secure, with post-quantum countermeasures to protect system security [6]. Remember that blockchain technology was originally built on a linear infrastructure and used linked data hashing techniques [6]. Blockchain can be used in a public (permissionless) environment with an open ledger, in which all nodes can be involved in the validation process, or in a private (permissioned) setting where a particular node sets the restrictions. Blockchain-based communication benefits from inter-block linking and consensus-based algorithms, with a decentralized peer-to-peer network structure and the ability to communicate directly between nodes [2]. The use of IPFS technology alongside blockchain technology is a critical component in being able to store large chunks of data that would otherwise not fit within the blockchain structure. The major point about using IPFS is that it can sit on a public database that everyone has access to, but over which no single organization has control.
It has built-in protection against duplicate files being uploaded: the system uses the unique hash value to determine whether the network should allow a file to be saved, and it can check the history and version of a file. As files are stored in a distributed database, uploading and downloading are much quicker [5]. Smart contracts may be used as a form of transparent authentication, as the contract itself cannot be cancelled or modified, with both senders and receivers confirming their identity to the system. The system checks message authenticity by comparing the hash of the sent message against its verified signature, with both parties
being able to set conditions for each other. A smart contract can be executed in two different ways. First, the blockchain needs to be equipped with cryptocurrencies, with the authentication node being assigned cryptocurrency as a guarantee. Second, with the use of a public address, a message containing a violation document is sent to a different node. This information is stored in the blockchain so that the violating authentication node is recorded. When considering possible attack situations, note that most protocols and structures can use the public key to exchange the private keys, and in most cases the private key will be used to encrypt and send data. Choosing a system that allows you to select your own encryption algorithms greatly reduces the possibility of system attacks, as the algorithm is unknown to outside parties. Latency on public platforms is one more concern: public blockchain platforms store and execute transactions on an open platform, making transaction processing slow and costly. As an example, it takes around 10 min to process one block on the Bitcoin network, resulting in a massive delay for non-text-based communications [1]. On the other hand, private blockchain platforms are much faster at processing a block, making them a better choice for voice communications; thus, if fast and reliable transfer is the desired goal, a private blockchain such as Hyperledger Fabric could be reviewed. However, the trade-off between speed and reliability needs to be considered for secure emergency systems. A private blockchain, as a secure ledger, can ensure the security and privacy desired for such systems. Moreover, a private blockchain channel can verify, validate, and authenticate all participating users on the same communication system. Users who have been reported as malicious devices more than five times will be permanently rejected from the service [5].
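The rejection rule described in [5] is simple enough to sketch directly; names and the storage mechanism here are illustrative only:

```python
from collections import defaultdict

# Toy sketch of the rule above: a device reported as malicious more than
# five times is permanently rejected from the service.
MAX_REPORTS = 5
reports = defaultdict(int)
banned = set()

def report_malicious(device_id: str):
    reports[device_id] += 1
    if reports[device_id] > MAX_REPORTS:
        banned.add(device_id)   # permanent rejection

def is_allowed(device_id: str) -> bool:
    return device_id not in banned

for _ in range(6):              # sixth report crosses the threshold
    report_malicious("dev-42")
assert not is_allowed("dev-42")
assert is_allowed("dev-7")      # unreported devices remain on the service
```

On a real private blockchain the report counts and the ban list would themselves live on the ledger, so no single operator could quietly un-ban a device.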
The communication link between the base station and the remote system uses wireless channels, which are susceptible to a range of attacks such as signal jamming and signal spoofing, in addition to other traditional aerial communication issues such as man-in-the-middle, denial-of-service, and overhearing. Security in aerial communication networks is therefore an extremely important topic. The tactical data link protocol is used to format a message in a particular way so that it can be transmitted over a wireless link securely and fully encrypted. More research is needed on what information controls can be securely transmitted and managed across multiple domains, and on what additional crucial transmission features can be offered for data consistency in an automated and adaptive way.
3.1 Framework

After reviewing the 12 selected articles and discussing the core components of a blockchain-based communication system, the designer needs to determine whether the system will be voice-only, text-only, or a mixture of the two. Any system design needs to be robust enough to resist quantum computer attacks and have built-in countermeasures. Analysing the security and efficacy of the blockchain plays an important role in any
multi-party system. Three points need to be considered. First, the ability to protect with post-quantum techniques. Second, improving the identity authentication process based on a blockchain-based PKI. Third, the ability to support multi-party communications. We also need to consider how to preserve data integrity in storage, and the use of auditing schemes to ensure that key exposure does not occur. A well-designed distributed architecture should be lightweight and efficient enough to run over 5G technology [6].
3.2 Future Work

Future research should explore the gaps in the use of blockchain technology and its solutions. The ability to provide reliable communications over a range of wireless, 5G, 6G, satellite, and fixed transmission systems through blockchain technology needs to be studied further to provide reliable blockchain communications over different transmission mediums. The correct choice of a public or private blockchain depends on the application: for video and audio applications, a public blockchain system will not be usable, due both to the slow processing of hashes and to the need to keep data hidden from the wrong eyes. For military and emergency systems, using a private blockchain is the only way system goals can be achieved without compromising the integrity and security of the system.
4 Conclusion

The aim of this paper was to review the different systems available in the field of decentralized communications that could be used in emergency situations, to understand how reliable and secure communications can overcome functionality and scalability issues when using a blockchain-based system. An overview of the core system requirements of a blockchain communication system was given, as were the differences between public and private networks. The literature review breaks down the main concepts from the 12 reviewed papers into the core areas of blockchain technology, operational types, consensus protocols, IPFS storage, smart contracts, and oracles. While the review was limited to 12 papers, it did show a range of gaps in the research where additional review needs to be carried out. This review is only a starting point for blockchain communications. Gaps remain in the areas of cryptographic keys and the future-proofing needed to provide secure future communications; the choice of a private or public blockchain comes down to the processing time available, as well as the data link protocol used, together with an appropriate wireless transmission system to allow for greater reach that is not susceptible to attacks such as spoofing, man-in-the-middle, and denial of service.
Decentralized Communications for Emergency Services: A Review
Fig. 5 Ethereum smart contract handling example [2]
Appendix A

References

1. Deebak B, Al-Turjman F (2022) A robust and distributed architecture for 5G-enabled networks in the smart blockchain era. Comput Commun 181:293–308. https://doi.org/10.1016/j.comcom.2021.10.015
2. El Azzaoui A, Choi M, Lee C, Park J (2022) Scalable lightweight blockchain-based authentication mechanism for secure VoIP communication. Hum-centric Comput Inf Sci 12. https://doi.org/10.22967/HCIS.2022.12.008
3. Feng W, Li Y, Yang X, Yan Z, Chen L (2021) Blockchain-based data transmission control for tactical data link. Dig Commun Netw 7(3):285–294. https://doi.org/10.1016/j.dcan.2020.05.007
4. Jiang M, Li Y, Zhang Q, Zhang G, Qin J (2021) Decentralized blockchain-based dynamic spectrum acquisition for wireless downlink communications. IEEE Trans Signal Process 69:986–997. https://doi.org/10.1109/TSP.2021.3052830
5. Kumar R, Pham Q, Khan F, Piran M, Dev K (2021) Blockchain for securing aerial communications: potentials, solutions, and research directions. Phys Commun 47. https://doi.org/10.1016/j.phycom.2021.101390
6. Lee S, Kim S (2022) Blockchain as a cyber defence: opportunities, applications, and challenges. IEEE Access 10:2602–2618. https://doi.org/10.1109/ACCESS.2021.3136328
7. Li P, Su J, Wang X, Xing Q (2021) DIIA: blockchain-based decentralized infrastructure for internet accountability. Secur Commun Netw 2021:1–17. https://doi.org/10.1155/2021/1974493
8. Mirzaei E, Hadian M (2022) Simorgh, a fully decentralized blockchain-based secure communication system. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-021-03660-5
9. Shi L, Guo Z, Xu M (2021) Bitmessage plus: a blockchain-based communication protocol with high practicality. IEEE Access 9:21618–21626
10. Wasim Ahmad R, Hasan H, Yaqoob I, Salah K, Jayaraman R, Omar M (2021) Blockchain for aerospace and defence: opportunities and open research challenges. Comput Ind Eng 151:106982. https://doi.org/10.1016/j.cie.2020.106982
D. Farmer and A. Cevenini
11. Yi H (2022) A secure blockchain system for the internet of vehicles based on a 6G-enabled network in box. Comput Commun 186:45–50. https://doi.org/10.1016/j.comcom.2022.01.007
12. Zhang L, Zhang Z, Jin Z, Su Y, Wang Z (2021) An approach of covert communication based on the Ethereum whisper protocol in blockchain. Int J Intell Syst 36(2):962–996. https://doi.org/10.1002/int.22327
Assessing Organisational Incident Response Readiness in Cloud Environments

Andrew Malec and P. W. C. Prasad
Abstract Organisations across the world are adopting cloud-based technologies for several benefits, including efficiency, agility, and cost effectiveness. These same organisations also face an increasing number of cyber threats from threat actors across the globe. The difficulty these organisations face is ensuring their cloud-based technology environments are sufficiently protected prior to an incident, and that incident responders can rapidly identify, preserve, acquire, and analyse data to support digital forensic investigations. As a result, organisations find themselves unable to respond effectively to cyber security incidents in cloud environments, leading to loss of data, ineffective processes, and an increased risk of evidence being ruled inadmissible in court proceedings. An organisation's incident response readiness is often limited by its inability to assess its response maturity, owing to a lack of incident response maturity frameworks. This review aims to provide an overview of commonly used frameworks within existing incident response and digital forensics processes by reviewing existing industry standard frameworks, assessing their efficacy, and identifying room for improvement through a suggested incident response readiness assessment and maturity model. This model could be considered by organisations aiming to identify deficiencies within current processes, procedures, or considerations, and to improve upon them to roadmap cloud cyber resilience.

Keywords Cloud · Digital forensics · Incident readiness · Incident response · Framework · Organisational readiness
A. Malec (B) · P. W. C. Prasad
Charles Sturt University, BLD 460, 2678 Wagga Wagga, NSW, Australia
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_34

1 Introduction

Cloud environments are constantly evolving to keep up with consumer and business demands, ensuring their respective platforms are robust and contemporary in the modern marketplace. The increased utilisation of various cloud platforms by large organisations, Government, and enterprise has produced significant competitive advantages based on the cloud's ability to provide agile, adaptive, and potentially cost-effective services. However, by adopting cloud services, these same customers now face an increased risk of attack from global threat actors because their attack surface has grown. An increased attack surface, coupled with poor visibility into cloud assets, can result in slow, cumbersome, and ineffective response to cyber security incidents within those cloud environments. To approach the issue holistically, a literature review was conducted focusing on research in the fields of digital forensics (DF), incident response (IR), and maturity assessment. This ensured the research would address a somewhat complex, multi-faceted issue, covering non-technical areas such as regulations, business processes, and legal considerations while still highlighting technical issues such as data acquisition, retention, and analysis. Existing principles and standards provided by the National Institute of Standards and Technology [15] were also explored, as were industry standards such as the MITRE ATT&CK and United States Computer Emergency Readiness Team (US-CERT) frameworks. This provides the advantage of supporting industry standards with academic research and integrating them into business processes and procedures to produce a more robust assessment model. The research [2] indicated there is no single approach to digital forensics or incident response within cloud environments owing to their dynamic and constantly changing nature. As will be discussed in the literature review, this is not necessarily a detrimental position, as the work undertaken by operators should still align with forensic principles. However, it does show there is considerable room for improvement in the future.
It was further highlighted that many organisations have difficulty assessing their incident response readiness, or their capacity to respond to incidents, due to a lack of visibility into their cloud environments. The focus of this research was to identify current literature to develop and support the proposition of a readiness assessment model for organisations utilising cloud services. The model is designed to be cloud service provider agnostic and does not explore cloud-specific services, ensuring any guidance inferred by readers can be adopted on any platform. It is designed to identify and explore considerations such as cloud infrastructure, legal requirements, analysis environments, staffing requirements, and incident response frameworks. This paper's research methodology is covered in Chapter 2, which explains the selection process and criteria for the current academic literature. The elements of each article are explored in terms of how they relate to the field of incident response readiness and the value they provide. Chapter 3 explores the most relevant findings and how they integrate into the suggested cloud incident response readiness model, which is also explained in the context of how it would be applied within an organisation. Chapter 4 discusses identified potential future work in this research area.
2 Research Methodology

Research was conducted across a range of sources, including academic literature, industry publications, and scientific standards. Academic literature was the primary source for identifying contemporary research in this field. The Charles Sturt University (CSU) literature library, Primo, was used to identify journal articles with the following keywords: cloud, digital forensics, incident readiness, incident response, framework, organisational readiness. These results were further filtered to ensure only peer-reviewed articles published between 2020 and 2022 were captured. The academic quality requirement was set to Q1 and Q2 articles, confirmed using the Scimago Journal and Country Rank (SJR). As a result, 12 articles were identified as suitable for literature review and analysis.
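The selection criteria above — peer-reviewed, published between 2020 and 2022, and ranked Q1 or Q2 in SJR — can be pictured as a simple filter. The sketch below is purely illustrative: the records and field names are invented, not the actual Primo search results.

```python
# Illustrative sketch of the article-selection criteria described above.
# The records and field names are hypothetical, not the actual Primo results.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    year: int
    peer_reviewed: bool
    sjr_quartile: str  # "Q1".."Q4" per Scimago Journal and Country Rank

def meets_criteria(a: Article) -> bool:
    """Peer-reviewed, published between 2020 and 2022, in a Q1 or Q2 journal."""
    return a.peer_reviewed and 2020 <= a.year <= 2022 and a.sjr_quartile in ("Q1", "Q2")

candidates = [
    Article("Cloud forensics taxonomy", 2021, True, "Q1"),
    Article("Legacy DF process survey", 2018, True, "Q1"),   # excluded: outside 2020-2022
    Article("Vendor whitepaper", 2021, False, "Q3"),         # excluded: not peer reviewed
]
selected = [a for a in candidates if meets_criteria(a)]  # only the first record passes
```

In practice the same filtering was applied through Primo's search facets rather than in code; the sketch simply makes the inclusion rules explicit.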
3 Literature Review

The literature review highlighted a common theme: the difficulty of adopting a technical process like digital forensics and incident response within a business-focused domain like organisational awareness or readiness. The greatest value was found in research material and literature which is platform agnostic, non-prescriptive, and applied at a higher level [1]. Literature exploring the suitability of metamodels built on explorative taxonomy models was identified as valuable because such models are vendor-agnostic and do not rely on the implementation of specific technologies for their concepts to be adopted [1, 2]. This contrasts with other research [3] which proposes specific processes to achieve similar results but carries underlying technical requirements tied to the platform on which the organisation has built its cloud environment. That research was found to be valuable for its conceptual considerations and proposals but was let down by its inability to be adopted by organisations across multiple cloud platforms. Other literature which provided benefit to an organisation has been categorised under procedural considerations, as the outcome of that research was applicable to a process, procedure, or lifecycle. As a result, the reviewed literature was summarised in the following three categories: high level models, platform-specific processes, and procedural considerations.
3.1 High Level/Meta Models

High level metamodels provide an overview of elements organisations can use as considerations when assessing their incident response readiness. These models allow distinct areas to be broken out for individual analysis by specialists. The proposition
by Al-Dhaqm et al. [2] of the "Digital Forensics Subdomains" model provides an example of treating legal implications and analysis as different areas. Legal implications would therefore be reviewed by qualified legal practitioners informed by technical specialists, while analysis would be conducted by qualified digital forensic analysts. The key advantage of this style of model is that it is not prescriptive and is cloud platform agnostic. Purnaye and Kulkarni [1] developed a cloud forensics taxonomy which explores the considerations of conducting digital forensics within cloud environments by identifying a range of legal, trust, analysis, and logistical considerations raised by cyber security incidents. The researchers argued their taxonomy model assists with formulating reliable forensic solution strategies, resulting in a holistic understanding of legal, regulatory, and practical considerations [3, 6]. Several key challenges, such as legal implications and logistical requirements within cloud environments, are expanded in the model built by Al-Dhaqm et al. [2]. While this taxonomy is valuable as a basis for a mature framework, its applicability within enterprise environments at a strategic business level is not readily apparent. The taxonomy identifies areas which need to be addressed, however it does not include a level of maturity to enable organisations to conduct an assessment or identify their relative maturity so they can strengthen their systems moving forward. Like the research conducted by He et al. [7], the research by Al-Dhaqm et al. [2] aimed to address the lack of standardisation within digital forensic and incident response processes by proposing a metamodel to encompass technical and non-technical processes.
The authors suggested a solution which addressed procedural and technical requirements, ensuring compliance with digital forensic principles to support admissibility in legal proceedings. Purnaye and Kulkarni’s abstract model addresses each step within the digital forensic process (M0-Level data model), as an instance of a greater domain-specific focus (M1-Level model). This offers the benefit of not being overly prescriptive or reliant on a specific technology [3] and is therefore deployable across a broader environment base.
3.2 Platform-specific Processes

Hemdan and Manjaiah [3] developed a framework for investigating cybercrimes within cloud environments, proposing a model to identify, acquire, and analyse resources within cloud environments to support criminal investigations and lawful prosecution. The Cloud Forensic Investigation Model (CFIM) proposed by the authors was implemented within VMware's ESXi environment which, although common at the infrastructure level, is not applicable to most cloud services utilised by organisations. The authors posited that the proposed framework and model would adhere to principles including digital forensics and the incident response lifecycle [7, 16], however it would be considered too restrictive in its applicability across a greater user base. The framework, beyond its basic applicability within
existing VMware/ESXi environments, provides little value to organisations who utilise vendor-specific services within cloud platforms. The National Institute of Standards and Technology (NIST) provides an incident response lifecycle framework [15] on which He et al. [7] have built to enable more effective incident response activities when responding to cyber security incidents involving malware. The researchers achieved this by reviewing the current framework and modifying it to reflect real-world malware investigations, where improvements were identified. They argued this made the initial preparation phase and the detection and analysis phase far more effective, as malicious behaviour exhibited within the environment could be recognised and identified more easily [7]. Time to response and time to containment are critical in incidents where lateral movement must be prevented and contained. However, as the researchers identified, their work was limited by the environment in which it was intended to be deployed [11]. Incident response processes, commonly referred to as playbooks, are customised by organisations for the environment in which they operate. There is value in building upon industry standards as it provides credibility within the industry, however the research itself does not provide insight into how exactly it would be integrated. Beyond the proposed integration, the research does not provide quantifiable benefits to a business or organisation at a management level, where it would need to be easily understood or approved.
3.3 Procedural Considerations

Englbrecht et al. [8] focused their research on a capability model for digital forensic readiness, developing a maturity model encompassing several core elements referred to as 'enablers', such as processes, skills and competencies, and infrastructure. These enablers are ranked from 1 to 5 depending on their level of maturity within the organisation. This high-level model is similar to other research [1, 2, 7] abstracted at a sufficiently high level that it can be understood, supported, and adopted by strategic decision makers within an organisation. Horsman [11] proposed an efficient 'Order of Data Acquisition' (ODA) which weighs the relative value of the data obtained, the method by which it was obtained, the volume of data, and the invasiveness of the acquisition, by conducting data acquisitions at various levels and comparing the potential implications of each acquisition method. Legal and privacy implications [1, 11] are becoming a primary consideration when organisations deploy assets within cloud environments which may be subject to governance regulations such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). There is no other contemporary academic material which lists or compares the ODA within the digital forensics domain. Horsman does not discuss the limitations of the research, however this is likely because the limitations only apply within the applied area. In the context of the current research, the ODA's applicability may be limited by the environment in which it is deployed. For example, a cloud
service provider may only support exporting full virtual disk images (Level 1 in Horsman's model), which is potentially invasive; however, due to GDPR or HIPAA legislation, existing procedural controls will have been implemented by the DF analyst to ensure legal compliance.
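The enabler-based ranking described for Englbrecht et al.'s maturity model can be pictured as a scored profile. This is only a sketch: the enabler names follow the examples quoted above, but the scores and the roadmap logic are invented, not part of the published model.

```python
# Hypothetical sketch of an enabler-based maturity profile. The enabler names
# follow the examples in the text; the scores and logic are illustrative only.
MATURITY_SCALE = range(1, 6)  # each enabler ranked 1 (lowest) to 5 (highest maturity)

profile = {
    "processes": 3,
    "skills_and_competencies": 2,
    "infrastructure": 4,
}

def weakest_enablers(profile: dict) -> list:
    """Enablers at the lowest maturity level - candidates for the improvement roadmap."""
    lowest = min(profile.values())
    return sorted(name for name, score in profile.items() if score == lowest)

# All scores must sit on the 1-5 scale before the profile is meaningful.
assert all(score in MATURITY_SCALE for score in profile.values())
```

The value of such a profile lies less in the numbers themselves than in directing strategic decision makers to the least mature areas first.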
4 Discussion

This research has highlighted several areas which organisations must consider when assessing their incident response readiness, spanning non-technical and technical specialist domains. For example, legal considerations such as cross-border data governance may dictate that virtual servers or other resources are deployed within a certain geolocation, such as Europe. If the organisation's primary resources are on-premises at another location, for example in Australia, while cloud-based resources are deployed primarily within that geolocation, then cross-geolocation architectural considerations must be made. If the organisation does not have staff, skills, or technical capability within the remote geolocated region, evidence acquisition logistics come into consideration [1, 11]. Explorative and taxonomy models [1] should primarily be used by an organisation to identify the assets within their environment which they wish to secure and those which may be subject to attack. For example, a company may identify that it has multiple VMs within a cloud environment and a hosted email filtering gateway with a SaaS provider. This allows an organisation to begin its journey to incident response readiness. Identified assets can then be subject to SDLC and FbD [4] considerations to ensure future tasks such as data identification, preservation, and acquisition are supported in the event of a cyber security incident. Proposed maturity models [2, 5] show how an organisation can assess different elements of its technology environment to identify business risk. This is critically important, as the main driver of cyber security is to protect data and information and to mitigate risk. Cyber security and technical controls are not 'set and forget', as the landscape is constantly changing. Businesses can utilise these maturity models to identify their current security posture, identify areas of improvement, and then build improvement roadmaps over time.
For large organisations with complex cloud assets and infrastructure across various geolocations, subject to regulatory considerations, implementing recommendations may take months or years. These models allow organisations to assess their maturity and measure improvement over time. This research also showed how existing industry standards and frameworks can be customised to fit specialist environments, such as malware response [5] or incident response involving industrial control systems [7]. This is especially critical moving forward, as critical infrastructure is under increased threat from advanced persistent threat actors during global conflict [8]. Coupled with technical advances, research indicates improved cyber security awareness and organisational culture adjustment [7, 9] can further improve an organisation's cyber resilience and incident response
capacity by ensuring staff are adequately trained and equipped to respond to incidents accordingly. Figure 1 is an overview of a suggested model which encompasses these principles and helps organisations identify their primary considerations regarding incident response readiness. Research suggests it is not possible to provide a prescriptive process for every incident response scenario without a sufficient understanding of how technology is used within an environment. Therefore, a model must be applicable at a high enough level that those within an organisation, including technical specialists and executives, can identify the relevant considerations and apply them to their particular use case. This model has been designed to break out each individual element as a consideration of how incident response applies to an organisation. It is not an exhaustive model, and it is not an assessment framework, as it is not possible to have a prescriptive model for each organisation. It is designed to be used as a prompt for broader, high-level consideration of the required elements when responding to cyber security incidents within cloud environments. Figure 1 shows cloud infrastructure and the PaaS, SaaS, and IaaS models of cloud services. This allows an organisation to identify each element as an area of focus. Technical specialists can then identify the level of access they have for each asset and ensure it is documented within their assessment framework. For example, a company may utilise cloud-based software (SaaS) as well as virtual servers within another environment (IaaS). This dictates the depth of data which can be acquired from those environments in response to an incident. It is not generally possible for SaaS environments to provide, for example, full forensic disk images to a customer in
Fig. 1 Proposed model for assessing organisational incident response readiness in cloud environments
the event of a cyber security incident as the underlying infrastructure is shared by multiple customers. In contrast, IaaS environments are typically managed by the customer who has administrative access and potentially access to the underlying hardware, resulting in the ability to gather more data. Acquisition considerations [11] identify that user privacy must be considered when conducting forensic investigations. If a cloud asset has been compromised and the required data is stored within system logs, there may not be much value in analysing an entire disk image which may impact end user privacy. These considerations also form part of legal considerations, especially when an organisation has assets deployed across multiple geolocations and may be subject to cross-border or jurisdictional regulations. A company with assets in Australia may not be able to acquire and examine data from a country which is bound by EU GDPR regulations. Figure 1 further highlights staff as a key element of an organisation’s incident response capability. Organisations may have adequate infrastructure, appropriate incident response frameworks, and sufficient analysis environments, but if staff are not available to assist in the response to a cyber security incident, the incident cannot be sufficiently investigated. Further, the model explores the technical competency and qualification of staff to respond to such incidents. There is little point filling positions within an organisation with insufficiently trained or incompetent technical operators as this provides a false sense of preparedness. To be effective, an organisation should take a baseline of their current assets, security controls, processes, and procedures. After using the suggested model (Fig. 1) to assess and identify areas of improvement, assessments should be made at regular intervals to measure improvements. 
Strategies such as conducting incident response tabletop exercises test these areas, leading to further refinement of, or confidence in, existing security countermeasures. The main limitation of this research was the lack of available time in which to conduct it. Additional time would likely result in greater depth of research across discrete disciplines, ensuring the findings could be applied appropriately. Applying the research findings to a real-world organisation would validate the findings and identify additional areas which could be developed or refined. Another beneficial area of potential research would be the integration of digital forensic readiness within an existing framework such as the SDLC process. This has the potential to deploy systems which are secure, logged, and support cyber security incident response by default. This may also see systems enrolled into security tools such as SIEMs and forensic environments to ensure volatile and non-volatile data is adequately preserved and acquired. Given the constant emergence of new technology within the DFIR environment, it is necessary to test these theoretical frameworks within an existing environment to identify their suitability. This could be done by identifying cloud service users willing to participate in a practical study with researchers to see whether the research withstands the scrutiny of real-world environments. Conducting practical research in this manner would result in a far more robust framework and establish industry credibility.
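The baseline-then-reassess cycle described above can be sketched as a comparison of two scored assessments. The element names loosely follow the areas of the suggested model; the scoring scheme and numbers are hypothetical.

```python
# Illustrative sketch of measuring improvement between assessment intervals.
# Element names loosely follow the suggested model; the scores are invented.
baseline = {"cloud_infrastructure": 2, "legal": 3, "staffing": 1, "ir_framework": 2}
followup = {"cloud_infrastructure": 3, "legal": 3, "staffing": 2, "ir_framework": 4}

def improvement(before: dict, after: dict) -> dict:
    """Per-element delta between two assessments of the same environment."""
    return {element: after[element] - before[element] for element in before}

delta = improvement(baseline, followup)
regressions = [e for e, d in delta.items() if d < 0]  # areas that have gone backwards
```

Repeating this comparison at each interval gives management a simple, quantifiable view of whether the roadmap is actually improving the organisation's posture.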
5 Conclusion

After collating contemporary academic articles and conducting a literature review, this paper has highlighted key components and themes relevant when an organisation is considering its incident response readiness within cloud environments. The results of this review highlighted a broad range of considerations when utilising cloud platforms to ensure an organisation is well prepared to respond to cyber security incidents. The limitations of the current research and academic material indicate there is room for improvement, especially considering the broad range of cloud service providers and the technology available within those environments. This led to the formulation of a cloud-agnostic incident response readiness model (Fig. 1) which can be used by organisations of any size, regardless of their technology footprint, to assess their ability to respond to cyber security incidents in cloud environments. The model addresses legal, infrastructure, staffing, and industry standards as areas of focus when assessing incident response readiness. Although this research was limited by time constraints, utilising the suggested model would make it possible for an organisation to assess its incident response readiness, and coupling it with the suggested future work would ensure the organisation maintains and improves its cyber security posture in the future.
References

1. Purnaye P, Kulkarni V (2021) A comprehensive study of cloud forensics. Arch Comput Methods Eng 29(1):33–46
2. Al-Dhaqm A et al (2021) Digital forensics subdomains: the state of the art and future directions. IEEE Access 9:152476–152502
3. Hemdan EE-D, Manjaiah DH (2021) An efficient digital forensic model for cybercrimes investigation in cloud computing. Multimed Tools Appl 80(9):14255–14282
4. Akilal A, Kechadi M-T (2022) An improved forensic-by-design framework for cloud computing with systems engineering standard compliance. Forensic Sci Int Digit Investig 40:301315
5. Al-Dhaqm A, Razak SA, Siddique K, Ikuesan RA, Kebande VR (2020) Towards the development of an integrated incident response model for database forensic investigation field. IEEE Access 8:145018–145032
6. Alenezi A, Atlam HF, Wills GB (2019) Experts reviews of a cloud forensic readiness framework for organizations. J Cloud Comput 8(1)
7. He Y, Inglut E, Luo C (2021) Malware incident response (IR) informed by cyber threat intelligence (CTI). Sci China Inf Sci 65(7)
8. Englbrecht L, Meier S, Pernul G (2019) Towards a capability maturity model for digital forensic readiness. Wirel Netw 26(7):4895–4907
9. Georgiadou A, Mouzakitis S, Askounis D (2021) Assessing MITRE ATT&CK risk using a cyber-security culture framework. Sensors 21(9):3267
10. Cisa.gov (2022) APT cyber tools targeting ICS/SCADA devices. https://www.cisa.gov/uscert/ncas/alerts/aa22-103a. Last accessed 2022/06
11. Horsman G (2022) An 'order of data acquisition' for digital forensic investigations. J Forensic Sci 67(3):1215–1220
12. Naseer H, Maynard SB, Desouza KC (2021) Demystifying analytical information processing capability: the case of cybersecurity incident response. Decis Support Syst 143:113476
13. Schlette D, Vielberth M, Pernul G (2021) CTI-SOC2M2—the quest for mature, intelligence-driven security operations and incident response capabilities. Comput Secur 111:102482
14. Australian Government (2022) Information security manual. https://www.cyber.gov.au/sites/default/files/2022-03/02.ISM-CyberSecurityPrinciplesMarch2022.pdf. Last accessed 2022/06
15. National Institute of Standards and Technology (2012) Computer security incident handling guide. NIST SP 800-61r2
Industrial Internet of Things Cyber Security Risk: Understanding and Managing Industrial Control System Risk in the Wake of Industry 4.0

J. Schurmann, Amr Elchouemi, and P. W. C. Prasad
Abstract Industry 4.0 has evolved traditional industrial processing plants into smarter, more optimised, and more efficient processes which provide valuable returns to industry. However, the deployment of IIoT connectivity as the enabler of Industry 4.0 has also introduced significant risk to the cyber physical systems (CPS) which operate industrial plants and processes. Prior to Industry 4.0, industrial control systems were mostly segregated from the outside world and therefore secure by nature. There is currently a lack of understanding of operations technology (OT) security risks, the impact IIoT can have on CPS, and the risk management strategies needed to protect CPS from the ever-increasing cyber threats on the internet. This research paper identifies the common misunderstandings and knowledge gaps often found in academic journals, particularly around the nature of the cyber security risks commonly present within CPS and the consequences that can occur as a direct result of weak or insecure IIoT designs and implementations.

Keywords IIoT · Cybersecurity · Industry 4.0
1 Introduction The new wave of Industry 4.0 and the deployment of IIoT devices that connect cyber physical systems to the internet has introduced a significant level of exposure to cyber security risks. The industrial control systems to which IIoT systems connect have been engineered to ensure continuous plant production and safe operations [1]. The risks associated with industrial control systems and similarly the IIoT solutions to which they are being connected are often not understood and ignored [2]. Hence, a significant amount of current literature on IIoT cybersecurity solution J. Schurmann · P. W. C. Prasad (B) Charles Sturt University, Bathurst, NSW, Australia e-mail: [email protected] A. Elchouemi American Public University System, Charles Town, WV, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_35
design concepts do not adequately align with or recognize engineering functional safety or industrial automation and control system (IACS) cyber security standards. IACS environments have historically been well-segregated from external environments, such as IT and the wider internet. Whilst the concept of air-gapped OT environments is a thing of the past, considerable effort has been put into the design and engineering of secure OT architectures, particularly around the secure DMZ interface between IT and OT [3]. The International Electrotechnical Commission (IEC) 62443 series is the most common set of cyber security standards adopted by industrial control system engineers and operators as the baseline standard for implementing cyber security controls within a CPS [4]. These standards include widely accepted guidance for designing and implementing secure network layers within the OT environment and between IT and OT. To ensure the ongoing safety and security of industrial processing plants and facilities, including critical infrastructure facilities, it is essential that IIoT solution designers prioritize the risks as they relate to the cyber physical systems instead of the risks associated with the IT systems and the data that is extracted, communicated and processed for Industry 4.0 applications. In doing so, IIoT solutions should consider the security technologies that are needed at the interface between IT and OT systems, which will reduce the exposure brought about by the new connectivity between CPS networks and the internet [5]. In addition to these technological challenges, IIoT also introduces challenges tied to the concept of IT/OT convergence, where the line of responsibility for managing security at the interface requires deeper consideration and clearer definition [6]. This paper is structured as follows: Sect. 2 provides an overview of the methodology used to conduct a review of the current literature.
Section 3 contains the detailed review and analysis of IIoT risk management approaches presented by current research, including definitions of IoT and Industry 4.0, IT and OT risk, cyber threats, artificial intelligence, and industry standards and frameworks. Finally, Sect. 4 provides some direction on future work and Sect. 5 concludes the research.
2 Methodology The methodology used for this research paper involved an in-depth review of academic literature found in several academic databases, including PrimoSearch, Scopus, Google Scholar and ProQuest. Keyword searches were used to locate peer-reviewed journal articles that focus on IIoT cybersecurity and the risks associated with CPS, that is, risks that include impacts to industrial control systems and the safe operation of plants or factories. To meet the subject outline “emerging technologies”, the searches also included a filter for papers that have been published in the last three years. The baseline keywords included (“Industrial Internet of Things” OR “Industrial IoT” OR “Industrial IoT”) AND (“cybersecurity” OR “Cyber Security”) and to these
Industrial Internet of Things Cyber Security Risk: Understanding …
Fig. 1 Keyword associations produced in VOSViewer, showing the lack of keyword associations linked to OT risk management topics
were added several industrial control system terms that would narrow down the research to papers which also considered risks to CPS. As illustrated in Table 1 below, the results highlight the number of research papers which focus on IIoT cybersecurity but do not consider the risks to CPS environments. Closer analysis found 12 papers that were then selected for review based on relevance to the overall subject. The quality of the literature was high, with the majority of papers published in Q1-ranked journals. Figure 1 below is a network visual produced in VOSViewer, which illustrates the results of the baseline keyword search and the frequency of, and linkage between, the common keywords that were found in the 65 journal articles.
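As an illustrative aside (not part of the cited methodology tooling), the boolean search strings described above can be assembled programmatically; the function and variable names below are hypothetical:

```python
# Build the boolean search strings used in the literature search.
BASELINE = ('("Industrial Internet of Things" OR "Industrial IoT") '
            'AND ("cybersecurity" OR "Cyber Security")')

# Additional OT/CPS terms, appended one at a time to narrow the results.
EXTRA_TERMS = [
    '("SCADA")',
    '("Safety")',
    '("OT" OR "Operations Technology")',
    '("CPS" OR "Cyber Physical System")',
    '("IACS" OR "Industrial Automations Cyber Security")',
]

def build_queries(baseline: str, extras: list[str]) -> list[str]:
    """Return the baseline query plus one narrowed query per extra term."""
    return [baseline] + [f"{baseline} AND {extra}" for extra in extras]

for query in build_queries(BASELINE, EXTRA_TERMS):
    print(query)
```

Each narrowed query corresponds to one row of Table 1, so the result counts can be collected per query.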
3 Literature Review The following subsections review and analyse the selected literature on IIoT cyber security risk, covering the definitions of IIoT and Industry 4.0, IT and OT risk, cyber threats, IIoT edge devices, security monitoring, IT/OT convergence, and the relevant standards and frameworks.
3.1 IIoT and Industry 4.0 Based on research and review of the academic papers that included the keywords associated with IIoT cyber security, it is evident that there is a significant misunderstanding of what constitutes IIoT. This can be attributed to a misinterpretation of the two key components that make up the term, that is, “Industrial” and “Internet of Things”. The first term, “industrial”, refers to a network of devices and applications which are generally located within a CPS and are used to operate, control or monitor industrial processes [7]. The second term, “Internet of Things”, can be defined as the large-scale deployment of technology that aims to connect disparate systems and devices to the internet so that information can be acquired for further processing and deeper analysis [1]. Therefore, based on these two definitions, IIoT in its most basic form is a composition of devices or applications that operate within or on the boundaries of an industrial processing plant or facility and which are connected to the internet for the purpose of acquiring and transferring data to internet-hosted systems for further analysis and processing. Industry 4.0 is a term that describes the concepts and technologies that enable physical processes and systems to monitor and connect with other systems and collectively and autonomously make decisions in real time to drive performance, improve industrial processes and deliver business outcomes [3]. IIoT can therefore be described as an enabler of Industry 4.0, as it provides the functionality required to connect industrial systems or devices to the internet so that the acquired data and information can be ingested and processed by Industry 4.0 applications. There is a significant amount of current research literature that does not align with the definition of IIoT described in this paper and the relevant standards or frameworks.
For example, [4, 7, 8] describe the concept of IIoT connectivity as time-sensitive machine-to-machine (M2M) communication that requires high-throughput and low-latency networks to ensure that the communication of IIoT data does not impact plant production or safety. This notion is misleading, as the concept of M2M communications is a fundamental part of any modern or even legacy (Industry 3.0) automation and control environment. Internet communications, such as public or private wide area networks, have historically been and are still currently being utilised in industrial control networks to connect devices and systems at remote locations, such as those often found on large pipelines, onshore or offshore oil and gas facilities and water transportation. Academic research in IIoT security should avoid any misleading or confusing information regarding the concepts of industrial systems and the Internet of Things. Both are well established and well understood; however, the integration of the two concepts has led to a distorted view of how and to what extent they are or can be integrated. IoT, which to date has amassed approximately 18 million devices
globally, is basically a large network of devices that are connected to the internet using a multitude of network media and protocols. This has resulted in a considerable increase in cybersecurity risk due to the massive exposure to online threats, which in the CPS domain can lead to serious consequences.
3.2 IT and OT Risk Due to the nature of IIoT, understanding and managing the risk of connecting CPS environments to the internet is a significant challenge and concern for CPS operators and owners. The fact that IIoT solutions include both an IT and an OT element introduces some apprehension around responsibilities and the differing approaches used to identify, assess and manage risk. As is well established, cyber security risk management in the IT space is very different to the risk management practices used for OT, as the potential impacts and consequences are very different. IT risk is focused on the standard cyber security triad of confidentiality, integrity and availability, in that order of precedence [1, 4–10]. The OT security focus is, however, reversed: availability of the processes directly responsible for plant operation and production is the most important element, followed by integrity of the systems and data, while confidentiality is the least significant element and sometimes not even considered [4, 7, 8, 11]. There is, however, another level to the OT security risk triad, which is the risk associated with safety. The risk of harming people, assets, the community, or the environment is of paramount concern within CPS environments. As shown in Fig. 2, management of OT cyber security risk should therefore always take precedence over any IT risks that are assessed during IIoT solution design, and the focus of the design should be on mitigating any impact to safety. Current literature on IIoT security tends to disregard OT security risks and focuses predominantly on IT risk. As shown in Table 2, only four out of the twelve papers reviewed highlighted the need for an OT approach to risk management, that is, using the availability, integrity and confidentiality (AIC) triad.
The remaining eight papers included five that describe risks associated with IIoT in the context of IT risk, that is, confidentiality, integrity and availability (CIA), and three that failed to mention either approach. Academic research on the subject of IIoT cyber security risk management should prescribe OT risk management approaches as the primary focus for any IIoT solution design or technology development. Connecting a CPS to the internet will potentially expose systems to a larger threat landscape, which could exploit weaknesses in the IIoT devices and gain access to the control and safety systems. Technology solutions such as blockchain, machine learning and artificial intelligence will provide little protection against an advanced persistent targeted attack on critical infrastructure and other industries.
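The difference in triad precedence can be illustrated with a toy risk-scoring sketch; the weights, categories and example incident below are purely hypothetical and not drawn from any cited standard:

```python
# Toy illustration of CIA vs. AIC precedence: the same impact ratings
# produce different risk scores depending on which triad ordering applies.
# Weights (3 = highest precedence) are hypothetical and purely illustrative.
IT_WEIGHTS = {"confidentiality": 3, "integrity": 2, "availability": 1}  # CIA
OT_WEIGHTS = {"availability": 3, "integrity": 2, "confidentiality": 1}  # AIC

def risk_score(impacts: dict[str, int], weights: dict[str, int]) -> int:
    """Weighted sum of impact ratings (1 = low .. 5 = high)."""
    return sum(weights[k] * impacts[k] for k in weights)

# An availability-heavy incident, e.g. loss of plant control:
incident = {"availability": 5, "integrity": 2, "confidentiality": 1}
print(risk_score(incident, IT_WEIGHTS))  # 12 — under-rated by a CIA view
print(risk_score(incident, OT_WEIGHTS))  # 20 — correctly elevated under AIC
```

The point of the sketch is only that an availability-driven OT incident scores low when assessed through an IT lens, mirroring the argument above.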
Fig. 2 OT Risk outweighs IT risk due to the added risk of safety and the potential to cause injury or loss of life
Table 1  Keyword search terms and number of results

Search term used                                                              Results
Baseline keywords ("Industrial Internet of Things" OR "Industrial IoT"
  OR "Industrial IoT") AND ("cybersecurity" OR "Cyber Security")                   65
Baseline keywords + ("SCADA")                                                       3
Baseline keywords + ("Safety")                                                      2
Baseline keywords + ("OT" OR "Operations Technology")                               1
Baseline keywords + ("CPS" OR "Cyber Physical System")                             16
Baseline keywords + ("IACS" OR "Industrial Automations Cyber Security")             0
3.3 IIoT Cyber Threats IoT devices have been the target of large-scale cyber-attacks, particularly in the last five years, due to the ease of exploitation associated with the simplicity of the hardware and software used in IoT devices. A large majority of the attacks on IoT are intended to install botnet malware that will enable threat actors to use the large number of IoT devices connected to the internet. The Mirai attack is a good example, which resulted in millions of compromised IoT devices being used in a distributed denial of service attack [2, 9, 10]. IIoT devices could be attacked in the same manner; however, these types of attacks do not pose a risk to the CPS environment that is connected to the compromised IIoT device. Nevertheless, many of the papers reviewed use examples of IT cyber security threats and incidents to justify proposed IIoT risk management solutions. The authors of [2,
Table 2  Literature references to the CIA and AIC risk triads

Reference   CIA   AIC
[1]          ✓     X
[2]          X     X
[3]          X     X
[4]          ✓     ✓
[5]          ✓     X
[6]          ✓     X
[7]          ✓     ✓
[8]          ✓     ✓
[9]          ✓     X
[10]         ✓     X
[11]         X     ✓
[12]         X     X
[13]         X     X
8–10, 12] defined denial of service, spoofing and phishing as examples of attack vectors that could be used to target IIoT and pose a risk to the CPS. Whilst IIoT may be vulnerable to IoT types of threats, these generally do not pose any threat to the CPS, and academic literature should clearly distinguish between the two. This will ensure that more focus is placed on securing the IIoT devices from targeted attacks that attempt to gain unauthorized access to the CPS by exploiting vulnerabilities in access controls, security patching, network flows and open services [6].
3.4 IIoT Edge Devices As discussed in Sect. 3.1, the IIoT device landscape is large and diverse due to the multiple industries and associated environments in which IIoT solutions are being deployed. However, one characteristic that these devices have in common, and which creates a challenge for cyber security professionals, is that the devices are generally built using low-power, low-spec and environmentally durable hardware resources [2, 7, 8]. This reduction in computing capacity, which meets requirements for lower cost and a lower footprint, unfortunately results in a lower capacity to run security controls and functions. Connecting these devices directly to the internet, as is the case with IoT, is a significant area of concern for IIoT, as there is the potential to expose CPS to a very broad array of threat actors that are actively scanning the internet for vulnerable systems. As a result of this increased risk to IIoT-connected CPS environments, and as defined by industry best practices, an edge device, such as a firewall, unidirectional gateway or edge compute device, should be implemented as a security barrier between
Fig. 3 IIoT edge security device used to protect CPS devices and consolidate data flows
the IIoT device and the internet [2, 6]. As shown in Fig. 3, the secure edge solution not only protects the IIoT device but also offers a central point to which multiple devices can connect and aggregate their data before communicating with the cloud-hosted applications [5, 6]. For these reasons, edge computing solutions are becoming a preferred option for both IoT and IIoT solutions, and in the case of IIoT the decision to use this approach should be based on a detailed risk assessment.
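The aggregation role of the secure edge device shown in Fig. 3 can be sketched as follows; the class, device tags and batching policy are hypothetical, and the flush step merely stands in for a protected upload to the cloud application:

```python
import json
from collections import deque

class EdgeGateway:
    """Toy edge gateway: buffers readings from multiple CPS devices and
    forwards them upstream in one aggregated batch, so that no field
    device ever talks to the internet directly."""

    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.buffer: deque = deque()
        self.sent_batches: list[str] = []

    def ingest(self, device_id: str, value: float) -> None:
        """Accept one reading from a field device; flush when the batch is full."""
        self.buffer.append({"device": device_id, "value": value})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # Stand-in for a TLS-protected upload to the cloud application.
        batch = list(self.buffer)
        self.buffer.clear()
        self.sent_batches.append(json.dumps(batch))

gw = EdgeGateway(batch_size=2)
gw.ingest("PT-101", 4.2)
gw.ingest("TT-202", 88.5)  # second reading triggers one aggregated upload
```

The design choice mirrored here is the one the text argues for: the gateway is the single, defensible point of internet exposure, while individual instruments remain on the CPS side.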
3.5 Security Monitoring Monitoring information systems is a critical security control, as it should provide early detection of any suspicious or abnormal activity within the network, which can then be investigated and handled at the early stages of an incident before any significant impact to the system [6]. This also applies to industrial control and safety systems within CPS environments, where OT systems run commercial off-the-shelf (COTS) hardware and software which is susceptible to cyber security threats. Monitoring IIoT introduces a challenge for cyber security, as the devices and networks that enable the
IIoT connectivity straddle both IT and OT, and therefore the security monitoring solution would most likely not have visibility into both environments. According to [9, 11, 13], artificial intelligence, machine learning and data mining technologies play a significant role in Industry 4.0; however, this technology adoption could lead to new vulnerabilities and attack vectors that can be leveraged by cyber threats. In response to these new threat vectors, Bécue et al. [13] propose that AI can be a decisive advantage in defending IIoT from malicious cyber-attacks. Elsisi et al. [14] support this view by defining machine learning as a solution to monitor and analyze data from smart meters with the intended purpose of detecting malicious cyber activities that modify or change data. As a result, research tends to define methods and approaches for managing IIoT risk that focus primarily on protecting the information and data that is communicated to and processed by Industry 4.0 applications. For example, security measures such as security monitoring solutions that utilize artificial intelligence (AI) and machine learning (ML) are often defined as a viable solution to protect IIoT [9, 11, 13]. This concept of using AI or ML to detect cyber security incidents within the infrastructure of IIoT does not mitigate the potential risks to a CPS. These proposed AI solutions can certainly ingest all the data that is being communicated by IIoT devices and processed by Industry 4.0 applications; however, this data will not provide any insight into malicious activities that may be directly targeted at and potentially impacting the CPS. As depicted in Fig. 4 below, a cyber threat looking to target a CPS would directly target the IIoT device, gateway or firewall to gain access to and penetrate the CPS network.
AI or ML security monitoring technologies hosted in the cloud would not detect a direct targeted attack on the IIoT device and would therefore fail to notify OT of any cyber security breach [5, 6]. To manage cyber security threats that pose a risk to CPS, security monitoring solutions are required that focus more on the IIoT devices and less on the data being transmitted or exchanged. Whether the device is a smart instrument, an edge device or a security gateway, the monitoring solution should continuously monitor these devices for any suspicious activity that could represent a malicious attack targeting the CPS environment [6].
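A device-focused monitoring approach of the kind argued for here might look like the following minimal sketch; the event types, device names and threshold are hypothetical:

```python
from collections import Counter

# Minimal sketch of device-level monitoring: count security-relevant
# events per IIoT device and flag devices that exceed a threshold,
# rather than inspecting only the Industry 4.0 process-data stream.
SUSPICIOUS_EVENTS = {"failed_login", "unexpected_port_scan", "firmware_change"}

def flag_devices(events: list[tuple[str, str]], threshold: int = 3) -> set[str]:
    """events: (device_id, event_type) pairs; returns devices to investigate."""
    counts = Counter(dev for dev, etype in events if etype in SUSPICIOUS_EVENTS)
    return {dev for dev, n in counts.items() if n >= threshold}

log = [
    ("gw-edge-01", "failed_login"),
    ("gw-edge-01", "failed_login"),
    ("gw-edge-01", "unexpected_port_scan"),
    ("smart-meter-7", "failed_login"),
]
print(flag_devices(log))  # {'gw-edge-01'}
```

The contrast with the cloud-side AI/ML approaches discussed above is that the input here is device security telemetry (login attempts, port scans, firmware changes), not the process data itself.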
3.6 IT/OT Convergence Finally, the last challenge associated with IIoT security centers around IT/OT convergence. This topic is well known and has been evolving since CPS environments started to adopt hardware and software used in IT environments. OT systems are generally used to operate, control and monitor industrial processes, while IT systems process information and data for business applications and systems. Industry 4.0 is driving businesses to improve performance by drawing on the large amount of valuable data that is available within industrial processes; however, this is only achievable with large data processing servers that are managed by IT. This is one example of IT/OT convergence: IT systems are used to analyze and process information acquired
Fig. 4 Malicious threat directly attacking the IIoT device and thereby circumventing the machine learning anomaly detection systems
from OT systems, which industries can then use to better understand plant health and machine performance metrics, drive improvements and preempt issues long before they occur. IIoT is a significant enabler of Industry 4.0 and similarly of IT/OT convergence; however, only [1, 4, 12] mention this as a key aspect of managing IIoT cyber security risk. The other nine papers reviewed provided no guidance or insight into the challenges of and issues related to IT/OT convergence. IIoT only works if there is a connection between the OT systems located in the plant and the IT systems hosted in the cloud. This interface should therefore be treated as the most significant area of risk, as it is the point at which a malicious threat will attempt to gain access to the CPS. As stated by [2, 7, 8], unlike conventional perimeter protection devices, IIoT devices are generally simple, low-power and limited-resource devices that provide little to no defense against IIoT cyber security threats, which have the potential to exploit and compromise a target CPS.
Current literature tends to focus predominantly on the IT risk management strategies mentioned in previous sections, and there is little to no research that covers IT/OT convergence as it pertains to IIoT and the responsibilities for managing cyber security risk at the interface between IT and OT.
3.7 Frameworks and Standards In response to the evolution and rapid growth of IIoT and Industry 4.0, the Industrial Internet Consortium (IIC) and Industrie 4.0 have each published industry-standard reference architecture models for IIoT, both of which are widely used and accepted globally across many industries [1, 13]. These reference architecture models provide guidance on how to implement Industry 4.0 solutions using a standardized and well-structured method which extends from the business requirement level all the way through to the physical instrument or smart device. Both models include detailed guidance regarding the best-practice approach for securing the devices, applications and communications throughout the IIoT chain; however, the Industrial Internet of Things Volume G4: Security Framework published by the IIC offers the most extensive coverage relating to cybersecurity requirements for IIoT. The framework includes, in some detail, IIoT risk mitigation controls such as edge gateways, firewalls, network access controls and unidirectional gateways, which are prescribed as acceptable controls for managing the interface between IT and OT to protect the CPS from external threats [6]. As discussed previously, the most critical element of risk to be managed within a CPS is safety, and to ensure that CPS environments are designed and built with safety risk at the forefront, design engineers follow industry-regulated process safety design standards such as IEC 61508 and IEC 61511. These standards ensure that the safety control systems are designed in such a way as to keep them independent and segregated from any system or device that may have a negative influence on the design and functionality of the system. IIoT cyber security design solutions and concepts should always account for the risk management strategies defined, and sometimes mandated, by industry standards and frameworks.
The IEC process safety and cybersecurity standards are mandatory design and engineering standards for the process industry, which includes most types of manufacturing industries. The IIoT frameworks, whilst not industry standards, provide sound guidance on how to implement IIoT cyber security solutions in line with the IEC requirements. As shown in the hierarchy of IIoT frameworks and standards (Fig. 5), safety standards should be considered the most critical standards applicable to IIoT design, followed by the OT security standards, then the IIoT frameworks and finally the IT security frameworks.
Fig. 5 Hierarchy of safety and cybersecurity standards to be applied to IIoT design
4 Future Work In order to fill the gap in the current knowledge and understanding of how industries should manage IIoT cyber security risks as they relate to CPS risk, more research is required that focuses on the safety consequences that may occur as a result of a successful cyber security exploit. In doing so, researchers should include industry process safety, OT security and IIoT standards and frameworks in their determination of what constitutes sound and acceptable IIoT cyber security risk management. There also needs to be more research that focuses on the interface between IT and OT. Whether a secure edge device solution, firewall or unidirectional gateway is deployed, the IT/OT convergence issue is still a developing concept, especially as it relates to IIoT. It will require a lot more than blockchain and AI technologies to convince OT engineers that it is safe to connect the control and safety systems to the cloud, so research needs to show that OT risks are understood, considered and addressed as part of any IIoT solution options. Finally, the concept of Industrial IoT needs to be clarified and standardized. The notion that IIoT can be used to operate industrial processes, and even ensure the safety of those processes, is a complete underestimation of established functional safety engineering practices.
5 Conclusion In conclusion, this literature review has shown the numerous misconceptions and misunderstandings that exist within academic research that aims to provide risk management solutions and strategies for IIoT. The concept of IIoT cyber security risk and its correlation with safety risks within CPS environments were introduced as a foundation for any IIoT solution engineering and design. Details of the academic misconceptions were then presented within the main body of the review. Firstly, the
definition of IIoT was presented and used to show how current research is often misaligned on the differences between the concepts of IoT and Industrial IoT. The paper also highlighted the issues that exist regarding the understanding of risk within the IIoT domain and the impact IIoT can have on industrial control systems and the facilities within which they operate. The subject of cyber threats was also discussed as another area of misrepresentation in academic research. The examples of cyber threats and some well-known incidents presented as arguments for better IIoT security were in fact more related to IT systems and not OT or CPS specific. The final section of the paper presented several industry process safety, OT cyber security and IIoT standards and frameworks that should be used in any research on the subject of IIoT security and in any IIoT solution design practices. Cyber security risks within industrial operating facilities can have a direct impact on safety and therefore should be assessed and mitigated before connecting any IIoT device to the internet.
References

1. Castillón DC, Martín JC, Suarez DP-M, Martínez ÁR, Álvarez VL (2020) Automation trends in industrial networks and IIoT. Springer, Cham, pp 161–187
2. Sha K, Yang TA, Wei W, Davari S (2020) A survey of edge computing-based designs for IoT security. Digit Commun Netw 6(2):195–202. https://doi.org/10.1016/j.dcan.2019.08.006
3. Boyes H, Hallaq B, Cunningham J, Watson T (2018) The industrial internet of things (IIoT): an analysis framework. Comput Ind 101:1–12. https://doi.org/10.1016/j.compind.2018.04.015
4. Dhirani LL, Armstrong E, Newe T (2021) Industrial IoT, cyber threats, and standards landscape: evaluation and roadmap. Sensors 21(11):3901. https://doi.org/10.3390/s21113901
5. Sari A, Lekidis A, Butun I (2020) Industrial networks and IIoT: now and future trends. Springer, Cham, pp 3–55
6. Industrial Internet of Things Volume G4: Security framework, Industrial Internet Consortium, 2016. [Online]. Available: https://www.iiconsortium.org/pdf/IIC_PUB_G4_V1.00_PB.pdf
7. Carreras Guzman NH, Wied M, Kozine I, Lundteigen MA (2020) Conceptualizing the key features of cyber-physical systems in a multi-layered representation for safety and security analysis. Syst Eng 23(2):189–210. https://doi.org/10.1002/sys.21509
8. Berger S, Bürger O, Röglinger M (2020) Attacks on the industrial internet of things—development of a multi-layer taxonomy. Comput Secur 93:101790. https://doi.org/10.1016/j.cose.2020.101790
9. Hussain Z, Akhunzada A, Iqbal J, Bibi I, Gani A (2021) Secure IIoT-enabled industry 4.0. Sustainability 13(22):12384. https://doi.org/10.3390/su132212384
10. Sengupta J, Ruj S, Das Bit S (2020) A comprehensive survey on attacks, security issues and blockchain solutions for IoT and IIoT. J Netw Comput Appl 149:102481. https://doi.org/10.1016/j.jnca.2019.102481
11. Adaros-Boye C, Kearney P, Josephs M (2020) Continuous risk management for industrial IoT: a methodological view. Springer, Cham, pp 34–49
12. The Industrial Internet of Things Volume G1: Reference Architecture, Industrial Internet Consortium, 2019. [Online]. Available: https://www.iiconsortium.org/pdf/IIRA-v1.9.pdf
13. Bécue A, Praça I, Gama J (2021) Artificial intelligence, cyber-threats and Industry 4.0: challenges and opportunities. Artif Intell Rev 54(5):3849–3886. https://doi.org/10.1007/s10462-020-09942-2
14. Elsisi M, Mahmoud K, Lehtonen M, Darwish MMF (2021) Reliable industry 4.0 based on machine learning and IoT for analyzing, monitoring, and securing smart meters. Sensors (Basel, Switzerland) 21(2):487. https://doi.org/10.3390/s21020487
Color Image Encryption Employing Cellular Automata and Three-Dimensional Chaotic Transforms Renjith V. Ravi, S. B. Goyal, Sardar M. N. Islam, and Vikram Kumar
Abstract Concerns about information security are relevant to almost every facet of human activity and daily life in today’s contemporary civilization. The problem of information being leaked to unauthorized parties is becoming a far more serious issue. The creation of procedures that efficiently preserve information security has emerged as a prominent focus of study in recent years. Classical algorithms for encrypting data, such as RSA and DES, have already been extensively used in encrypting textual information. However, due to developments in the electronics industry and the field of computer science, the computational abilities of computers have advanced at an accelerated rate. As a direct consequence, conventional techniques for encrypting data risk being broken. On the other hand, despite the vast improvements in computational power, users still need faster encryption rates. The amount of image data is much greater than that of text data, and there is a strong association between neighbouring pixels. As a result, there is a need to develop image encryption algorithms that are both more effective and safer. Special algorithms have been proposed to encrypt digital images so as to retain the effectiveness, security, and throughput of encryption, since color image data differs in many ways from conventional data types like text and binary data. The current work offers a method for encrypting digital images utilizing chaotic mapping and reversible cellular automata. The principles of Shannon’s confusion and diffusion technique are used as the foundation for the encryption process in the suggested approach, which has two primary steps. The input is a plain image, which is then R. V. Ravi (B) M.E.A Engineering College, Malappuram, India e-mail: [email protected] S. B. Goyal City University, Petaling Jaya, Malaysia e-mail: [email protected] S. M. N. Islam Victoria University, Melbourne, VIC, Australia e-mail: [email protected] V.
Kumar, University of Calgary, Calgary, AB, Canada e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_36
shuffled using a 3D chaotic transform and an appropriate key value in the first phase. The second stage involves decomposing the cipher image from the first step into 24 one-bit planes and XORing them with the appropriate 2D reversible cellular automata. The suggested technique exhibits excellent performance outcomes when compared to other cryptographic algorithms. Keywords Color image encryption · Cellular automata · 3D modular map · 3D Arnold cat map · 3D logistic map
1 Introduction Communication networks that serve many different purposes have substantially changed how we live and work [1]. By connecting to the online world, people and organizations make their own data accessible to others, which is why computer security is crucial. One of the most crucial components of online information security is preventing unauthorized access to data. Data encryption, internet use limitations, and the deployment of security technologies are just a few of the solutions for information security that have been put forward so far. Meanwhile, cryptography is vital and has a variety of applications; encrypting data is among the most commonly used methods of information security. Ensuring the security and authenticity of images is becoming more crucial as image transmission over communication networks has increased recently. Communication routes need to be safe enough to prevent cyberattacks, but compared to text communication, establishing encryption techniques for visual data is difficult. Due to characteristics such as large data volume and the strong correlation between neighbouring pixels, traditional text encryption methods like RSA and DES cannot reliably encrypt image data, particularly in real-time scenarios. In this study, we propose an image encryption algorithm using diffusion, confusion, and cellular automata. The concept of cellular automata is discussed in the second section of this paper. The ideas behind the 3D chaotic map are explained in the third section. The suggested approach is explained in the fourth section. Section six will discuss the findings of the method’s assessment. The conclusion is presented in the article’s concluding section.
2 Cellular Automata

The mathematical model of cellular automata (CA) [1, 2] is helpful for describing physical, biological, and computational processes. One distinct feature of CA is that encryption schemes can be built from composite CA operations that work extremely well while the underlying rules remain very simple. A cellular automaton is a mathematical model of a discrete system with discrete inputs and outputs. It depicts the
Color Image Encryption Employing Cellular Automata …
Table 1 Elementary rule for CA 90 [1, 2]

Neighborhood   111   110   101   100   011   010   001   000   Rule no.
Next state      0     1     0     1     1     0     1     0       90
Table 2 Results of entropy analysis

Method                  Image     Entropy of ciphertext image
                                  R        G        B
Proposed                Mandril   7.9986   7.9958   7.9947
                        Peppers   7.9965   7.9954   7.9986
                        Lena      7.9976   7.9965   7.9985
U S Choi et al. [10]              7.9972   7.9968   7.9970
A Y Niyat et al. [11]             7.9972   7.9973   7.9972
sequential behaviour of a number of linked, regularly ordered cells, each with a limited range of possible values. The value taken by a particular cell (its local state) at a discrete time step is determined by the values of its nearest neighbours at the preceding time step; this update scheme is known as the CA principle. The simplest example of an elementary CA is a linear array of cells with a three-cell neighbourhood, where each cell's state is either 0 or 1. The new state is computed as shown in Eq. 1, where S_i^t is the state of the i-th cell at time t and f is a Boolean function defining the local rule [3]:

S_i^{t+1} = f(S_{i−1}^t, S_i^t, S_{i+1}^t)    (1)
Wolfram [2] has coded the set of local rules for the temporal development of a 1D CA. Table 1 provides an illustration of CA rules based on Wolfram's notation. The neighbourhood consists of 3 cells, so there are 2^3 = 8 possible neighbourhood configurations, and hence 2^8 = 256 elementary CA rules in total [3].
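As a sketch, one step of an elementary CA under Wolfram's numbering can be implemented as follows (the function name and the periodic boundary condition are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def eca_step(state, rule=90):
    # Bit k of the 8-bit rule number gives the next state for a
    # neighbourhood whose 3 bits (left, self, right) encode the value k.
    table = [(rule >> k) & 1 for k in range(8)]
    left = np.roll(state, 1)     # periodic boundary (an assumption here)
    right = np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right
    return np.array([table[i] for i in idx], dtype=np.uint8)

# Rule 90 is the XOR of the two outer neighbours, matching Table 1:
state = np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.uint8)
print(eca_step(state))  # -> [0 0 1 0 1 0 0]
```

Reading Table 1 column by column reproduces exactly the lookup table built from the bits of the number 90.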
3 The 3D Chaotic Maps

3.1 3D Arnold's Cat Map

Several authors have recommended the enhanced 3D Arnold's cat map for image scrambling. Hongjuan Liu et al. [4] proposed one such map, enhanced by the addition of two extra control parameters, c and d. The improved ACM is given in Eq. 2.
R. V. Ravi et al.
[x′; y′; z′] = [1, a, 0; b, ab + 1, 0; c, d, 1] · [x; y; z] mod N = R · [x; y; z] mod N    (2)
This 3D ACM achieves dual encryption by performing substitution and shuffling simultaneously. The association between neighbouring pixels may be entirely disrupted via ACM [5], while the third dimension of the 3D cat map can additionally substitute the grey/color values. The 3D ACM was implemented in two stages: (x′, y′) and (x, y) are the pixel locations after and before the mapping operation, and the value of z′, the third output parameter, is found by calculating a mod M, where a = c × x + d × y + z. Here, z denotes the intensity or color of a pixel before mapping, z′ denotes the intensity or color after mapping, and M = 256 is the number of possible pixel intensities. Using ACM, we first determine x′ and y′ and then use the equation above to get z′. The 3D ACM is much more effective than the ACM due to two characteristics. The first is the existence of two additional mutable constants, c and d. Second, whereas ACM can only shuffle pixel locations, 3D ACM can also substitute new pixel values and create a uniform distribution of color and grayscale values.
3.2 3D Logistic Map

The simplest chaotic function is the logistic map, defined by x_{n+1} = λ x_n (1 − x_n), which displays chaotic behavior for 0 < x_n < 1 and λ = 4. A coupled two-dimensional extension is given in Eqs. 3 and 4:

x_{i+1} = μ1 x_i (1 − x_i) + γ1 y_i^2    (3)

y_{i+1} = μ2 y_i (1 − y_i) + γ2 x_i^2 + x_i y_i    (4)
The equations above strengthen the quadratic coupling of the y_i^2, x_i^2, x_i, y_i components and boost system security. When the parameters satisfy 2.75 < μ1 < 3.4, 2.7 < μ2 < 3.45, 0.15 < γ1 < 0.21 and 0.13 < γ2 < 0.15, the system enters a state of chaos and produces chaotic sequences in the range [0, 1]. In this study, we use the 3D version of the logistic chaotic system proposed by P. N. Khade et al. [5]. Equations 5, 6 and 7 give the equation set of the 3D logistic chaotic map. The cubic and quadratic coupling used by the 3D chaotic system can lead to highly unpredictable sequences.
x_{i+1} = λ x_i (1 − x_i) + β y_i^2 x_i + α z_i^3    (5)

y_{i+1} = λ y_i (1 − y_i) + β z_i^2 y_i + α x_i^3    (6)

z_{i+1} = λ z_i (1 − z_i) + β x_i^2 z_i + α y_i^2    (7)
Here, the equations above exhibit chaotic behaviour for 3.53 < λ < 3.81, 0 < β < 0.022 and 0 < α < 0.015, and the state variables take values in [0, 1].
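A sketch of iterating Eqs. 5-7 follows; the starting values and parameter choices are illustrative, picked from inside the stated chaotic ranges:

```python
def logistic3d(x, y, z, lam=3.77, beta=0.01, alpha=0.01, n=1000):
    # Simultaneous update of Eqs. 5-7; all three right-hand sides use
    # the values from the previous iteration.
    out = []
    for _ in range(n):
        x, y, z = (lam * x * (1 - x) + beta * y * y * x + alpha * z ** 3,
                   lam * y * (1 - y) + beta * z * z * y + alpha * x ** 3,
                   lam * z * (1 - z) + beta * x * x * z + alpha * y * y)
        out.append((x, y, z))
    return out

seq = logistic3d(0.2350, 0.3500, 0.7350)
# Within the stated parameter ranges the sequence stays inside [0, 1],
# since lam/4 + beta + alpha < 1.
```

The resulting triples can then be quantized (e.g. multiplied by the image size and taken modulo it) to drive the permutation step.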
3.3 3D Modular Chaotic Map (3DMCM)

To combat the security threats posed by the digital communication of images across networks and the Internet, image encryption techniques have been used extensively over the last several years [6]. Important characteristics of chaotic maps include sensitivity to initial conditions, ergodicity, non-periodicity, and pseudo-random behavior. In terms of throughput, security, and unpredictability, chaotic map-based information security approaches outperform traditional encryption algorithms [6]. Chaotic transforms for encryption fall into two types, irreversible and reversible; the Arnold cat map, for instance, is reversible, whereas the logistic map is not. Cryptography often employs reversible chaotic maps for permutation techniques, and for encryption certain chaotic transforms are converted from continuous time to discrete time. In this paper, the permutation step of the image cryptosystem is carried out using reversible discrete chaotic systems. Equation 8 defines such a reversible, discrete chaotic map, the 3D modular chaotic map (3DMCM) [7, 8]:

[x_{m+1}; y_{m+1}; z_{m+1}] = [a, b, c; d, e, f; g, h, i] · [x_m; y_m; z_m] mod n    (8)
A is a 3 × 3 residue matrix with components in Z_n in Eq. 8. If gcd(|A|, n) = 1, the 3DMCM is reversible, and its inverse map is given in Eq. 9:

[x_m; y_m; z_m] = [a, b, c; d, e, f; g, h, i]^{−1} · [x_{m+1}; y_{m+1}; z_{m+1}] mod n    (9)
The inverse of the residue matrix A is A^{−1} in this case. If the residue matrix A meets the condition gcd(|A|, n) = 1, then it is reversible. To figure out the
inverse matrix A^{−1}, we use the extended Euclidean method and the equation A^{−1} = |A|^{−1} × C^T mod n, where C^T is the transpose of the cofactor matrix of A.
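The reversibility condition and the inverse formula can be sketched as follows (function names are illustrative; the adjugate C^T is built from exact integer cofactors and |A|^{−1} mod n comes from the extended Euclidean algorithm):

```python
import numpy as np

def egcd(a, b):
    # Extended Euclidean algorithm: returns (g, s, t) with s*a + t*b = g.
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

def modinv(a, n):
    g, s, _ = egcd(a % n, n)
    if g != 1:
        raise ValueError("element not invertible mod n")
    return s % n

def mat_inv_mod3(A, n):
    # A^{-1} = |A|^{-1} * C^T mod n, with C the cofactor matrix of A.
    A = np.array(A, dtype=np.int64)
    adj = np.zeros((3, 3), dtype=np.int64)   # adjugate = C^T
    for i in range(3):
        for j in range(3):
            m = np.delete(np.delete(A, i, 0), j, 1)   # 2x2 minor
            cof = (-1) ** (i + j) * (m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0])
            adj[j, i] = cof                  # transpose while filling
    det = int(round(np.linalg.det(A)))
    return (modinv(det, n) * adj) % n

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]   # |A| = 1, so gcd(|A|, 256) = 1
Ainv = mat_inv_mod3(A, 256)
```

Multiplying A by `Ainv` modulo n yields the identity matrix, confirming that the permutation defined by Eq. 8 can be undone by Eq. 9.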
4 DNA Encryption

Image encryption may benefit greatly from DNA computing's parallelism, greater storage density, and reduced power requirements [9]. The four nucleotides Thymine (T), Guanine (G), Cytosine (C), and Adenine (A) make up the fundamental DNA alphabet. In the binary system, 0 is considered complementary to 1, while in DNA the base A is always complementary to T, and G to C. There are 24 possible DNA coding schemes, but only 8 of them adhere to the Watson–Crick complementation rule. In this study, we use the index values of the PRN sequence produced by the 3D chaotic system to conduct the DNA (i.e., 2-bit) permutation. The DNA sequence has length 4 × W × H for an image of size W × H. The key sequence k for the DNA permutation is formed by concatenating the individual sequences x, y and z; this sequence is then transformed to decimal form to provide the key sequence for DNA encoding.
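As an illustration, a Watson-Crick-compliant coding rule maps each 2-bit pair of a pixel to one base; the particular rule used below (00→A, 01→C, 10→G, 11→T) is an assumption for the sketch, not necessarily the rule selected by the key in the scheme above:

```python
RULE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}   # illustrative rule

def dna_encode_byte(b):
    # Each 8-bit pixel value becomes 4 bases (2 bits per base, MSB first),
    # so a W x H channel yields a DNA sequence of length 4 * W * H.
    return "".join(RULE[(b >> s) & 0b11] for s in (6, 4, 2, 0))

dna_encode_byte(0b11011000)   # 11 01 10 00 -> "TCGA"
```

Decoding simply inverts the dictionary, and switching to a different Watson-Crick-compliant rule only changes the `RULE` table.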
5 Proposed Encryption Technique

As shown in the block diagram in Fig. 1, the suggested approach performs image encryption in two phases: a primary permutation process followed by a diffusion process.
Fig. 1 Block diagram of the proposed approach
5.1 Permutation Process

The permutation procedure is the first stage of the suggested methodology. It aims to reduce the high correlation between adjacent pixels of the plain image while leaving the intensity histogram identical to that of the input image; by the end of this stage, the initial image has been transformed into an illegible, random-looking one. A color digital image of arbitrary size is chosen as the input of the permutation procedure. Before the 3D chaotic maps are applied, the image is separated into its blue, green and red components, and these components are then permuted separately using the chaotic maps.
5.2 Diffusion Process

The diffusion procedure, which modifies the gray values of the pixels in the image, is the second phase of the suggested methodology. At this stage, the input color image is first split into its red, green and blue components, and each component is then decomposed into eight single-bit binary images, so that each pixel of each RGB plane contributes one bit to each of eight 1-bit plate images. The bit-plane splitting makes this technique, although conceptually simple, quite involved. The binary key images are created using the principles of Wolfram cellular automata and serve as special keys for performing the diffusion on the 24 binary-valued image plates. Every binary image plate is XORed with a 2D Wolfram key, and the result yields the encrypted version of the color image. Following the XOR operation, the binary plates are first merged back into the red, green and blue components, which are then combined to create the cipher color image.
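The bit-plane XOR step for one channel can be sketched as below; the random key planes here merely stand in for the Wolfram-CA-generated keys described above:

```python
import numpy as np

def diffuse_channel(channel, key_planes):
    # Split the 8-bit channel into 8 one-bit planes, XOR each with its
    # binary key plane, and recombine the planes into an 8-bit channel.
    out = np.zeros_like(channel)
    for b in range(8):
        plane = (channel >> b) & 1
        mixed = plane ^ key_planes[b]
        out |= (mixed << b).astype(channel.dtype)
    return out

rng = np.random.default_rng(1)
chan = rng.integers(0, 256, (4, 4), dtype=np.uint8)
keys = rng.integers(0, 2, (8, 4, 4), dtype=np.uint8)   # stand-in CA keys
cipher = diffuse_channel(chan, keys)
# XOR diffusion is an involution: applying it again restores the channel.
restored = diffuse_channel(cipher, keys)
```

This involution property is what lets the decryptor undo the diffusion by regenerating the same CA key planes and XORing a second time.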
6 Results and Discussion

Various experiments are used in this section to assess the suggested method. For the experiments, several color images of size 256 × 256 × 3 were employed, including the Lena, Baboon and Peppers images. Figure 2 displays the results of encryption and decryption. This section describes in detail various common testing and assessment techniques for analyzing the effectiveness of an image cryptosystem.
Fig. 2 Results of encryption and decryption: (a) to (c) plaintext images; (d) to (f) corresponding ciphertext images
6.1 Histogram Analysis

The histogram displays the distribution of the pixels over the grey scale [10], showing the number of pixels in the image that fall into each of the 256 grayscale levels. The grey level distribution is one of the key properties of every image encryption scheme [11]. In general, the more uniform the histogram of the cipher image, the less likely statistical attacks on the suggested approach are to succeed; in other words, a good encryption scheme is recognized by a fairly uniform dispersion of the cipher image histogram. Figure 3 shows the histogram of the plain test image Lena together with the histogram of its encrypted image produced by the suggested approach. The analysis demonstrates that the histogram of the cipher image has a nearly even distribution, which suggests that the proposed encryption method is very safe.
6.2 Entropy

Information entropy is a term that relates to how chaotic and unpredictable a quantity is. The dispersion of the grey levels of an image may be determined by measuring that image's information entropy [10]. Assuming all grey levels are equally probable,
Fig. 3 Results of histogram analysis (a) Histogram of plaintext image (b) Histogram of ciphertext image
Table 3 Values of NPCR

Method                  Image     NPCR
                                  R         G         B
Proposed                Mandril   99.6584   99.6478   99.6325
                        Peppers   99.5872   99.5864   99.7854
                        Lena      99.3687   99.5689   99.4785
U S Choi et al. [10]              99.5986   99.6401   99.5770
A Y Niyat et al. [11]             99.6505   99.6444   99.6627
the entropy for a color image with 256 grey levels in each of the RGB planes ought to be 8, in which case entropy attacks have essentially no effect [11]. Equation 14 defines information entropy, a significant measure of unpredictability. The entropy values obtained for the ciphertext images and their comparison with the literature are shown in Table 2.
H(s) = Σ_{i=0}^{2^N − 1} p(s_i) · log2(1 / p(s_i))    (14)
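Equation 14 can be computed per channel as in this sketch; a uniformly distributed 8-bit channel attains the ideal value of 8 (the function name is an illustrative assumption):

```python
import numpy as np

def channel_entropy(channel):
    # H(s) = sum of p(s_i) * log2(1 / p(s_i)) over the 256 grey levels.
    counts = np.bincount(channel.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                     # absent levels contribute 0 to the sum
    return float(np.sum(p * np.log2(1.0 / p)))

uniform = np.arange(256, dtype=np.uint8)
channel_entropy(uniform)             # ideal uniform case -> 8.0
```

Running this on each R, G and B plane of a cipher image yields the kind of per-channel values reported in Table 2.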
6.3 Differential Attack Analysis

In a differential attack, the attacker typically makes a small modification to a plain image, has both the original and the modified image encrypted with the suggested approach, and then tries to break the cryptosystem by comparing the two encrypted images side by side. To assess the impact of this kind of attack, the number of pixels change rate (NPCR) [10] and the unified average changing intensity (UACI) [11] are generally used. These metrics are derived in Eq. 15 and Eq. 16.
Table 4 Values of UACI

Method                  Image     UACI
                                  R         G         B
Proposed                Mandril   33.8526   33.6584   33.5879
                        Peppers   33.5864   33.8654   33.5423
                        Lena      33.8569   33.8745   33.5478
U S Choi et al. [10]              33.5341   33.4302   33.4759
A Y Niyat et al. [11]             33.4462   33.4131   33.4399
NPCR_{R,G,B} = ( Σ_{i,j} D_{R,G,B}(i, j) / (W × H) ) × 100%    (15)

UACI_{R,G,B} = (1 / (W × H)) · Σ_{i,j} ( |C_{R,G,B}(i, j) − C′_{R,G,B}(i, j)| / 255 ) × 100%    (16)
where H and W represent the image's height and width, and C_{R,G,B} and C′_{R,G,B} are the cipher images of two plain images that differ by only one pixel. The difference array D_{R,G,B}(i, j) is defined by the following rule: if C_{R,G,B}(i, j) = C′_{R,G,B}(i, j), then D_{R,G,B}(i, j) = 0; otherwise D_{R,G,B}(i, j) = 1. The NPCR and UACI values obtained for the test images and their comparison with the literature are shown in Tables 3 and 4.
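Equations 15 and 16 translate directly into code; the sketch below assumes two equally sized 8-bit cipher images:

```python
import numpy as np

def npcr_uaci(c1, c2):
    # NPCR: percentage of positions whose values differ (Eq. 15).
    # UACI: mean absolute difference as a fraction of 255 (Eq. 16).
    a = c1.astype(np.int16)          # widen so differences do not wrap
    b = c2.astype(np.int16)
    npcr = (a != b).mean() * 100.0
    uaci = (np.abs(a - b) / 255.0).mean() * 100.0
    return npcr, uaci

c1 = np.zeros((8, 8), dtype=np.uint8)
c2 = np.full((8, 8), 255, dtype=np.uint8)
npcr_uaci(c1, c2)   # two maximally different images -> (100.0, 100.0)
```

In practice the two inputs are the cipher images of a plain image and its one-pixel-modified copy, and each RGB plane is evaluated separately to obtain the per-channel values of Tables 3 and 4.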
7 Conclusion

Data security is crucial in our everyday lives, and cryptographic procedures are often employed to ensure secrecy: in the face of a possible third-party eavesdropper, they are used to develop and execute secret communication strategies between two parties. This work presents a unique approach to encrypting color images based on chaotic mechanisms and cellular automata. Images communicated over the Internet may have military, commercial, medical, or other applications, making it crucial to keep them safe and verify their authenticity, and encryption protects images against illegal access. Image data, however, has a large volume and a strong correlation between neighboring pixels, which makes existing text-oriented encryption techniques ineffective and time-consuming. For image transmission, characteristics like colorlessness, image homogeneity, and data compression are sometimes also necessary, and providing them alongside the security requirements is particularly challenging. The current work explores digital color image encryption using 3D chaotic maps together with permutation and diffusion to improve the conventional metrics. The suggested method's overall
procedure comprises permutation and diffusion phases. Combined with additional tests, cellular automata prove to be a valuable tool for high-quality and effective digital image encryption.
References

1. Ghazanfaripour H, Broumandnia A (2019) Digital color image encryption using cellular automata and chaotic map. Int J Nonlinear Anal Appl 10:169–177
2. Wolfram S (1983) Statistical mechanics of cellular automata. Rev Mod Phys 55:601
3. Jin J (2012) An image encryption based on elementary cellular automata. Opt Lasers Eng 50:1836–1843
4. Liu H, Zhu Z, Jiang H, Wang B (2008) A novel image encryption algorithm based on improved 3D chaotic cat map. In: 2008 the 9th international conference for young computer scientists
5. Khade PN, Narnaware M (2012) 3D chaotic functions for image encryption. Int J Comput Sci Issues (IJCSI) 9:323
6. Patro KAK, Acharya B (2019) An efficient colour image encryption scheme based on 1-D chaotic maps. J Inf Secur Appl 46:23–41
7. Broumandnia A (2019) Designing digital image encryption using 2D and 3D reversible modular chaotic maps. J Inf Secur Appl 47:188–198
8. Broumandnia A (2019) The 3D modular chaotic map to digital color image encryption. Futur Gener Comput Syst 99:489–499
9. Sarosh P, Parah SA, Bhat GM (2022) An efficient image encryption scheme for healthcare applications. Multimed Tools Appl 81:7253–7270
10. Choi US, Cho SJ, Kim JG, Kang SW, Kim HD (2020) Color image encryption based on programmable complemented maximum length cellular automata and generalized 3-D chaotic cat map. Multimed Tools Appl 79:22825–22842
11. Niyat AY, Moattar MH, Torshiz MN (2017) Color image encryption based on hybrid hyper-chaotic system and cellular automata. Opt Lasers Eng 90:225–237
Privacy and Security Issues of IoT Wearables in Smart Healthcare

Syed Hassan Mehdi, Javad Rezazadeh, Rajesh Ampani, and Benoy Varghese
Abstract This paper presents, analyzes, and evaluates the security and privacy issues associated with IoT wearable devices. The research is conducted using a secondary qualitative methodology. Today, individuals' private health information is possessed not by the users themselves but by the companies that produce the healthcare wearable devices: an aggregated summary of a user's health information is made available to the user, while the raw information can be sold to the third-party partners of these business organizations. Additionally, the individual users of these wearable devices are exposed to multiple types of privacy threats and security issues. A qualitative analysis of secondary data is also conducted in this research paper to briefly investigate data integrity concerns.

Keywords IoT · Wearables · Security and privacy · Smart Healthcare
S. H. Mehdi (B), Victoria University, Sydney, Australia; e-mail: [email protected]
J. Rezazadeh, Crown Institute of Higher Education (CIHE), Sydney, Australia; e-mail: [email protected]
R. Ampani, Peninsula Health, Frankston, Australia
R. Ampani · B. Varghese, Kent Institute, Melbourne, Australia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_37

1 Introduction

The Internet of Things (IoT) [1] comprises four components: smart sensors [2], cloud computing [3], wireless networks [4], and analytic software. IoT wearable healthcare devices collect a vast amount of data from users, including
both health and personal data, by using various types of physiological and behavioral sensors to continuously monitor their activity levels and health status. This unprecedented gathering of personal information is generating security and privacy concerns among the users of wearable healthcare equipment [5]. Multiple wearable healthcare devices and fitness trackers are accessible for the purpose of healthcare and public wellbeing across the world. The information collected by wearable healthcare devices includes the sleep patterns of the individual as well as the movements and activities performed by the user of the device; these pieces of information are accumulated through pedometers, altimeters and accelerometers. On top of that, data relating to the muscle condition and function of the user are collected by means of skin conductance and pressure sensors [6], and so on. An individual's health information is considered the most private and confidential of all kinds of personal details. At the same time, data breaches, discriminatory profiling, and manipulative marketing techniques are all substantial negative outcomes of violating the data security and privacy rules relevant to wearable healthcare devices.
2 Literature Review

The literature review section covers the theoretical aspects and overview of the research topic and a conceptual framework of the privacy and security issues relating to the use of wearable equipment (Fig. 1). In addition, it examines the multiple types of privacy threats and security issues to which the individual users of these healthcare wearable devices are exposed while using them. The discussion presents the findings of the original research studies that examine the information privacy and security issues relating to smart wearable devices.

Theoretical views:

1) The implication of wearable healthcare devices in the overall wellbeing of people through tracking health conditions and growing healthy habits: When it comes to the security and privacy concerns of using wearable devices for health and wellbeing, the wearable devices interface with the software of both personal computers and smartphones in order to accumulate a large variety of information about an individual user, as mentioned by the authors in their discussion [5].
Fig. 1 The conceptual framework
Nevertheless, the significant benefits for which the popularity of these wearables has been skyrocketing include improvements in nutrition and fitness, stress reduction, healthy weight loss and weight tracking, and cutting down of degrading habits by means of haptic feedback, while the devices also provide users with general information relating to health and habits [7]. These devices therefore guide people in practicing healthy habits and lifestyles by making them aware of their present health condition [7].

2) Wearables and m-health: Wearable healthcare devices are regarded as one of the innovative and advanced support aids for health services because they provide individual care seekers and the general public with the capability of monitoring their vital signs, so that harmful symptoms can be detected at an early stage. This improves the possibility of recovery from different fatal diseases such as heart disease, liver disease, lung failure and kidney failure, besides different types of carcinogenic diseases [5]. Through this property, wearables thus help in diagnosing chronic illnesses (Fig. 2). Multiple wearable healthcare devices and fitness trackers are accessible for the purpose of healthcare and public wellbeing across the world; the figure shows the main categories of these devices. For instance, wrist-worn wearables include fitness tracker bands and smartwatches involving a touchscreen display
Fig. 2 The different categories of wearable healthcare devices
whereas the wrist bands are mainly utilized for tracking fitness and do not include a touchscreen display. Wearable healthcare devices also continuously collect useful health information about their users [5]. On one hand, information relating to both personality traits and psychological disorders is accumulated by collecting data about the users' social media use [8], their interactions with family members and friends, and their activities and behavioral patterns while using the smartphone [5]. On the other hand, brain functions and cognitive activities are monitored through brain wearables and cognitive sensors, respectively [9].

3) Data privacy and wearables: The health information of an individual is considered the most private and confidential of all kinds of personal details [5], and all the data are recorded in smart cloud technology [10]. The term privacy is described as the claim of institutions, individuals or groups to determine for themselves when, how, and to what extent information about them is communicated to others [11]. As the authors of [12] have mentioned, in the coming years information accumulated from various sources, including electronic health records and wearable healthcare devices, is going to be combined to provide a complete insight into the health status of the care-seeking individual, and another appropriate solution for data privacy is blockchain [13]. Recently, the Pentagon has acknowledged that fitness tracking apps keep revealing the location of United States soldiers in war-torn regions of Iraq and Syria.
On the other hand, the heatmap, one of the significant features of the fitness tracker Strava, was capable of revealing the location of United States military facilities in Syria and other conflict regions, along with revealing some confidential troop movements [14]. It was further reported that Strava allowed the
users to reveal de-anonymized information about themselves, which may include the user's name, heart rate and speed [15]. On the other hand, the availability of information refers to the property of being usable and accessible upon demand by an authorized entity [16]. From this perspective, a complication arises when a healthcare service professional requires information from a patient's wearable device in order to assist with patient care but is unable to access it because they are not authorized to do so [7].

4) Information in transit using wearable healthcare devices: Smartphones are nowadays used to make sensitive transactions like online shopping or online banking [7]. This means that the range of malware, ransomware and other computer viruses targeting network devices like smartphones and wearables has likewise been increasing exponentially [17]. When data are in transit, they may be susceptible to eavesdropping, for example sniffing, which involves capturing and monitoring all the data packets transmitted through a dedicated network or channel; tapping, in which hardware equipment is used to access the information flowing around a dedicated computer network; and attacks involving traffic analysis or message alteration [18]. In the latter kinds of attacks, the information is changed: the hacker may alter or modify the content of the transmitted packets or alter their timestamps [19].
In addition, hackers who breach privacy law and access the private information of the users of these devices in an unauthorized way sometimes attempt to use those data to steal a user's identity [7]. From the above literature review, it is summarized that data integrity is concerned with the quality and assessment of the accumulated information, and that availability of information to authorized entities remains a practical challenge: a healthcare service professional may require information from a patient's wearable device to assist with patient care yet be unable to access it for lack of authorization.
3 Research Method

3.1 Structure of the Research Method

This part of the research report outlines the methodologies selected to carry out this research by presenting the research onion [20]. The research approach is discussed and justified, and the research design is likewise presented with its justification [21]. The methods followed to conduct this research are then described, together with the tools and techniques of data collection, the sampling process, and a Gantt chart portraying the dedicated timeframe for executing this research study [22]. The ethical concerns of this research paper are taken into consideration, with attention paid to the research limitations. Finally, the approaches followed to analyze the collected data set are discussed, and the chapter ends by summarizing the entire discussion [23].
3.2 Research Methodology and Data Collection

The selected methodologies for this research involve both qualitative and quantitative analysis. Qualitative information is collected by conducting a literature review and thematic analysis [20], while quantitative information is collected through a user survey on user concerns relating to data security and privacy when using wearable healthcare devices [24]. The two different types of research approaches are the deductive and the inductive approach [20]. The deductive approach is used in this research since it involves an in-depth investigation, evaluation, analysis and assessment of the research topic, making the entire research process more specific and to the point [7]. The deductive approach is a top-down approach in which theoretical analysis, observation and confirmation depending on the research analysis are carried out [25]. Thematic analysis is performed on the collected secondary qualitative data, with relevant and contemporary themes selected from recent journal articles, newspapers and blogs published on the websites of different multinational companies [20].
4 Analysis and Results

A. Thematic analysis: The first theme is, "1: There is a concern relating to capacity and physical interference when it comes to using wearable healthcare devices." The human aspect is often considered the weakest link in privacy, and hence both risk awareness and circumstantial perception play a vital role in the implementation and adoption of privacy mechanisms [26]. The general threats in this specific area are that the individual user of a wearable healthcare device could lose, misplace or have the wearable stolen, which would enable an unauthorized person to make use of the confidential details preserved on the device [20]. On the other hand, while the owners are the ones held responsible for safeguarding their own confidentiality and privacy on their wearables, a number of case scenarios indicate that users lack the technical skills for implementing privacy measures on their wearable devices or smartphones [12]. The second theme is, "2: Once the information is analyzed and preserved in a software program, it might be vulnerable to malware." It is noticed that there are three vulnerable privacy and security aspects associated with the collection of health details from a wearable healthcare device [5]. These threats relate to the individual users of these devices, to the use of the devices for gaining information (including both capacity and physical interference), to information in transit between the software program and the wearable device, and to the storage of the aggregated information in a centralized database [19]; one smart machine learning method [27] can be applied to provide prediction. B.
User survey: A user survey comprising important questions is performed to collect primary quantitative information. The chosen sample size for the user survey is 25 users.

1. Are you interested in buying a wearable healthcare device?

Table 1 shows the responses of the individuals who participated in the survey to the question of whether or not they are interested in buying a wearable healthcare device. The table shows that around 28% of people are interested, whilst 20% are extremely interested, 24% are very interested and 16% are somewhat interested in buying a wearable for themselves. On the other hand, only 8% of the respondents are uninterested, whilst only 4% are not at all interested in buying a wearable healthcare device. Therefore, it can be said that a majority of people are highly aware of their health and wellbeing these days and want to go for a wearable device so that they can monitor their health. Thus, even though there are security issues, most individuals prioritize their health and wellbeing over these privacy and security concerns.
S. H. Mehdi et al.

Table 1 User response to survey question 1

  Options                 Responses   Percentage of total respondents   Total
  Interested              7           28                                25
  Extremely interested    5           20                                25
  Very interested         6           24                                25
  Somewhat interested     4           16                                25
  Uninterested            2           8                                 25
  Not at all interested   1           4                                 25
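The percentage column in Table 1 follows directly from the response counts over the 25-user sample; a quick sanity check (the dictionary below simply restates the table's counts):

```python
# Table 1 response counts restated; percentages follow from the 25-user sample.
SAMPLE_SIZE = 25
responses = {
    "Interested": 7,
    "Extremely interested": 5,
    "Very interested": 6,
    "Somewhat interested": 4,
    "Uninterested": 2,
    "Not at all interested": 1,
}
percentages = {opt: 100 * n / SAMPLE_SIZE for opt, n in responses.items()}
assert sum(responses.values()) == SAMPLE_SIZE  # every respondent accounted for
print(percentages["Interested"], percentages["Uninterested"])  # 28.0 8.0
```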
2. How concerned are you about the security of your private details stored in the databases of the wearable-producing companies?

Table 2 shows the responses of the survey participants when asked whether they are concerned about the security of the private health details they must provide to the wearable device manufacturing companies, which are stored in those companies' databases. Looking through the user responses, only 4% said that they are not at all concerned about the privacy issues, whilst 16% are unconcerned. Moreover, 20% are concerned, 24% are very concerned and 28% are extremely concerned about the security and privacy of the confidential health details they put in the databases of the wearable-producing companies. Therefore, even though the level of user concern varies, it is evident from these responses that a majority of people are concerned about the security and privacy of their personal and health information stored in the databases of the wearable device manufacturing companies.

3. If you came across news that a wearable manufacturer had a privacy breach in the previous year in which no personally identifiable information (i.e. location, address and name) was taken, how likely would you be to buy from that brand now?

Table 2 User response to survey question 2

  Options                Responses   Percentage of total respondents   Total
  Concerned              5           20                                25
  Extremely concerned    7           28                                25
  Very concerned         6           24                                25
  Somewhat concerned     2           8                                 25
  Unconcerned            4           16                                25
  Not at all concerned   1           4                                 25
Table 3 User response to survey question 3

  Options                                  Responses   Percentage of total respondents   Total
  Likely                                   3           12                                25
  Very likely                              2           8                                 25
  Less likely                              5           20                                25
  Doubtful                                 7           28                                25
  Like to go for a different company       7           28                                25
  Not going to purchase a product like a
  wearable that involves security threats  1           4                                 25
Table 3 shows how the respondents answered the question of how likely they would be, today, to purchase a product from a brand that had a privacy breach in the previous year in which no personally identifiable information (such as location, address and name) was taken. The responses show that 12% are likely to buy from the company that committed the breach, whilst 8% are very likely and 20% are less likely to make a purchase from that particular company. A further 28% of the respondents are doubtful, since they feel ambiguous about purchasing from a company that committed a privacy breach in the past year, even though the breach was not especially harmful to its clients. However, 28% of people prefer to go for a different brand with a clean history, whilst only 4% said that they cannot trust any wearable-producing company and hence will not buy this type of device given the privacy concerns. Therefore, it is clear that a significant number of people would prefer not to purchase from a company with a recent history of privacy breaches, whilst some would give the company the benefit of the doubt, since no private user details were disclosed, and are comfortable buying its products.

4. To what extent do you think information security and privacy impact the utility of wearable healthcare devices?

Table 4 shows how respondents answered the question of how much they think information security and privacy affect the usefulness of wearable healthcare devices. Among those surveyed, approximately 76% believe that privacy and data security significantly impact the utility of smart wearable devices, whilst only 24% said that these factors do not affect the utility of wearable devices for medical and health purposes.
Table 4 User response to survey question 4

  Options                   Responses   Percentage of total respondents   Total respondents
  Not at all impact         6           24                                25
  Do impact significantly   19          76                                25
5 Conclusion

From the review of the literature, it is concluded that the common threats or risk factors in this particular area concern the individual users of smart wearable healthcare devices: loss, theft or misplacement of a device can result in unauthorized use that compromises the confidential information stored on it. From the data analysis, it is concluded that privacy concerns exist, especially when it comes to purchasing from a company that has committed privacy breaches multiple times. Still, considering the life-saving advantages of these devices, most people look forward to buying a wearable device; however, a majority of them would rather purchase from a company other than the one with a history of multiple information privacy breaches, that is, from a company they can trust. Therefore, it is clear that a significant number of people would prefer not to purchase from a company with a recent history of privacy breaches, whilst some would give such a company the benefit of the doubt, since no private user details were disclosed, and are comfortable buying its products. Even so, despite the security issues, most individuals prioritize their health and wellbeing over these privacy and security concerns.
References

1. Rezazadeh J, Sandrasegaran K, Kong X (2018) A location-based smart shopping system with IoT technology. In: IEEE 4th world forum on internet of things (WF-IoT)
2. Fotros M, Rezazadeh J, Sianaki OA (2020) A survey on VANETs routing protocols for IoT intelligent transportation systems. In: Proceedings of the workshops of the international conference on advanced information networking and applications, pp 1097–1115
3. Farhadian F, Rezazadeh J, Farahbakhsh R, Sandrasegaran K (2019) An efficient IoT cloud energy consumption based on genetic algorithm. Digit Commun Netw 150:1–8
4. Rezazadeh J, Moradi M, Ismail AS (2012) Mobile wireless sensor networks overview. Int J Comput Commun Netw 2(1):17–22
5. Bugeja J, Jacobsson A, Davidsson P (2016) On privacy and security challenges in smart connected homes. In: 2016 European intelligence and security informatics conference (EISIC). IEEE, pp 172–175
6. Rezazadeh J, Moradi M, Ismail AS (2012) Message-efficient localization in mobile wireless sensor networks. J Commun Comput 9(3)
7. Ching KW, Singh MM (2016) Wearable technology devices security and privacy vulnerability analysis. J Netw Secur Appl 8(3):19–30
8. Fotros M, Rezazadeh J, Ayoade J (2019) A timely VANET multi-hop routing method in IoT. In: 20th parallel and distributed computing, applications and technologies. IEEE, pp 19–24
9. Liang X, Zhao J, Shetty S, Liu J, Li D (2017) Integrating blockchain for data sharing and collaboration in mobile healthcare applications. In: IEEE 28th PIMRC, pp 1–5
10. Sahraei SH, Kashani MM, Rezazadeh J, FarahBakhsh R (2018) Efficient job scheduling in cloud computing based on genetic algorithm. Int J Commun Netw Distrib Syst 22:447–467
11. Casselman J, Onopa N, Khansa L (2017) Wearable healthcare: lessons from the past and a peek into the future. Telematics Inform 34(7):1011–1023
12. Dinh-Le C, Chuang R, Mann D (2019) Wearable health technology and electronic health record integration: scoping review. JMIR mHealth uHealth 7(9):e12861
13. Sharifinejad M, Dorri A, Rezazadeh J (2020) BIS—a blockchain-based solution for the insurance industry in smart cities. arXiv:2001.05273
14. Kapoor V, Singh R, Reddy R, Churi P (2020) Privacy issues in wearable technology: an intrinsic review. Available at SSRN 3566918
15. Hathaliya JJ, Tanwar S (2020) An exhaustive survey on security and privacy issues in Healthcare 4.0. Comput Commun 153:311–335
16. Mamlin BW, Tierney WM (2016) The promise of information and communication technology in healthcare: extracting value from the chaos. Am J Med Sci 351(1):59–68
17. Frik A, Nurgalieva L, Egelman S (2019) Privacy and security threat models and mitigation strategies of older adults. In: Fifteenth symposium on usable privacy and security
18. Lee M, Lee K, Shim J, Cho S, Choi J (2016) Security threat on wearable services: empirical study using a commercial smart band. In: 2016 IEEE international conference on consumer electronics-Asia (ICCE-Asia). IEEE, pp 1–5
19. Dwivedi AD, Srivastava G, Dhar S, Singh R (2019) A decentralized privacy-preserving healthcare blockchain for IoT. Sensors 19(2):326
20. Cilliers L (2020) Wearable devices in healthcare: privacy and information security issues. Health Inf Manag J 49(2–3):150–156
21. Pirbhulal S, Samuel OW, Li G (2019) A joint resource-aware and medical data security framework for wearable healthcare systems. Futur Gener Comput Syst 95:382–391
22. Solangi ZA, Solangi YA, Shah A (2018) The future of data privacy and security concerns in Internet of Things. In: IEEE ICIRD, pp 1–4
23. Wang S, Bie R, Zhao F, Zhang N, Cheng X, Choi HA (2016) Security in wearable communications. IEEE Netw 30(5):61–67
24. Ren Z, Liu X, Ye R, Zhang T (2017) Security and privacy on internet of things. In: 7th IEEE conference on ICEIEC. IEEE, pp 140–144
25. Perez AJ, Zeadally S (2017) Privacy issues and solutions for consumer wearables. IT Prof 20(4):46–56
26. Banerjee S, Hemphill T, Longstreet P (2018) Wearable devices and healthcare: data sharing and privacy. Inf Soc 34(1):49–57
27. Mozaffari N, Rezazadeh J, Farahbakhsh R, Ayoade JA (2020) IoT-based activity recognition with machine learning from smartwatch. Int J Wirel Mob Netw 12:15
A Novel Framework Incorporating Socioeconomic Variables into the Optimisation of South East Queensland Fire Stations Coverages Albertus Untadi, Lily D. Li, Roland Dodd, and Michael Li
Abstract Socioeconomic factors can vary the demand for fire services. Consequently, shifts in the socioeconomic make-up of a population may deem the historical data of the services’ demand less relevant. Unfortunately, studies on facility coverages have not considered the relevant socioeconomic variables. There are also minimal studies on covering location models that cater to sparse patches of residents around urban centres. Hence, this paper proposes a framework incorporating socioeconomic variables while optimising fire station coverages in South East Queensland, Australia. A robust backward stepwise regression analysis is first adopted to form a predictive socioeconomic equation of building fires. Then, a mathematical model embedding the equation is presented, along with a modification to minimise the distances between uncovered demand areas and their closest facilities. The potential algorithm to solve the model is also proposed. The research is the first to embed socioeconomic variables into a covering location problem. In the long run, wide adoption of the framework is expected to provide more equitable emergency services coverage to communities worldwide. Keywords Facility location problem · Optimisation · Metaheuristic · Operations research · Fire station · Emergency services
1 Introduction In Australia, preventable residential fires have caused 900 deaths in the fourteen years leading up to June 2017 [1]. It means an average of one life lost every week to the avoidable tragedy. On top of that, the latest study has appraised the proportional cost of fires to Australia’s gross domestic product (GDP) to be 1.3 per cent [2]. More specifically, Australia’s state of Queensland recorded 1554 fire incidents A. Untadi (B) · L. D. Li · R. Dodd · M. Li School of Engineering and Technology, Central Queensland University, Norman Gardens, QLD 4701, Australia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_38
involving damages to building structures and contents in 2020 alone [3]. Every incident concerns a Queenslander who has lost a family home, a loved one or a livelihood that has sheltered, cared for and fed generations of Australians. Therefore, any improvement to fire services should be welcomed. One aspect of fire services that could be enhanced is the location of the fire stations. A study has correlated spatial accessibility to fire stations with the number of unintentional injuries and deaths in Dallas, U.S. [4]. It quantified the accessibility by scoring the available capacity of fire services within a specified distance of a residential area. One modelling approach to analyse the reach and coverage of the fire stations is covering location modelling. Studies on covering models are categorised by García and Marín [5] into two broad clusters: set covering and maximal covering models. The first published set covering model is the Location Set Covering Problem (LSCP) by Toregas et al. [6], which finds the minimum number of facilities to provide complete coverage. Established variations to LSCP include the Cooperative Location Set Covering Problem (CLSCP) and Net-LSCP [7, 8]. On the other hand, the first maximal covering model is the Maximal Covering Location Problem (MCLP), which intends to maximise the number of demands covered [9]. Extensions and modifications to MCLP have also been developed. Some typical examples include the Backup Double Covering Model (BDCM) and the MCLP-Implicit model [10, 11]. Generally, the purpose of extending or modifying the covering models is to tighten or remove the three major assumptions that are deemed unrealistic and invalid in some use cases. First, the model may assume demand coverage to be binary: either wholly covered or uncovered. Second, models may also assume that each request is only served by one facility. Lastly, the models have assumed that the facility always has circular coverage with fixed radii [12].
For a study of South East Queensland (SEQ) fire stations, maximal covering models are preferred due to their consideration of the limited number of fire stations to be built. In comparison, the set covering model is generally used to determine the minimum number of facilities for complete population coverage. However, building facilities to cover every residence is unsustainable. Our initial survey of maximal covering models uncovers a gap: the literature has disregarded socioeconomic variables' role in optimising fire station locations. Therefore, this paper proposes a framework to incorporate socioeconomic factors in the siting of fire stations. Including socioeconomic variables is crucial as they have been attributed to varying fire services demand [13]. The importance is especially pertinent to regions like SEQ, where a fast-changing socioeconomic make-up may cause historical data on fire services to be less relevant. This paper also proposes a modification to the MCLP-Implicit model. The changes intend to minimise the distance of uncovered demand areas to the closest stations. The modification is especially useful for SEQ's urban landscape, which consists of sparsely located houses on the city centres' fringes. The rest of the paper begins the proposal by surveying the extensions and modifications to the maximal covering model, MCLP, in Sect. 2. Then, based on the survey, Sect. 3 proposes a framework to address the identified gaps. Finally, Sect. 4 concludes by discussing the future studies needed to investigate the framework's suitability for real-life adoption.
2 Maximal Covering Location Problem (MCLP)

The paper follows the nomenclature in Fig. 1 for the review and proposal of covering location models, ensuring consistent variables. Church and ReVelle [9] first established MCLP with the primary objective of maximising population coverage under a limited number of facilities. The objective function (Eq. 1) maximises the number of people covered, while the constraints (Eqs. 2 and 3) accommodate the binary variable that indicates whether there is at
Fig. 1 The nomenclature of any covering models discussed
least one facility in set $O_i$.

$$\max \sum_{i \in I} \eta_i y_i \quad (1)$$

subject to

$$\sum_{j \in O_i} x_j \ge y_i, \quad i \in I \quad (2)$$

$$\sum_{j \in J} x_j = |P| \quad (3)$$
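As a concrete illustration of Eqs. (1)–(3) (not part of the original paper), a tiny MCLP instance can be solved by brute force, enumerating every placement of $|P|$ facilities; the demand data and site names below are hypothetical:

```python
from itertools import combinations

# Hypothetical toy instance: eta[i] is the population of demand point i and
# O[i] is the set of candidate sites that can cover point i (the set O_i).
eta = {0: 50, 1: 30, 2: 20, 3: 40}
O = {0: {"a"}, 1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
sites = {"a", "b", "c"}
P = 2  # number of facilities to site, |P| in Eq. (3)

def covered_population(chosen):
    # y_i = 1 iff at least one chosen site lies in O_i (the Eq. 2 constraint).
    return sum(eta[i] for i in eta if O[i] & chosen)

# Enumerate every placement of exactly P facilities and keep the one
# maximising the covered population (the Eq. 1 objective).
best = max(combinations(sorted(sites), P),
           key=lambda c: covered_population(set(c)))
print(set(best), covered_population(set(best)))  # {'a', 'c'} covers 140
```

The enumeration is exponential in the number of sites, which is exactly why the paper later turns to metaheuristics for realistically sized instances.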
However, as a consequence of maximising the number of people covered, MCLP increases the overall travel times due to the coverage of remote residents. MCLP also relies on a pre-set response time threshold. Karatas and Yakıcı [14] came up with a compromise by extending MCLP to include the p-median problem (pMP) and p-centre problem (pCP). The model minimises the distance of demand points to the closest facilities and the maximum distance of all demand points to a facility, creating balanced solutions. So far, the models have not provided demands with alternative covering facilities. Hence, Dessouky et al. [15] introduced multiple coverage thresholds $l$ for demand points to provide multiple alternatives to the primary facility in case of unprecedented events. The objective function (Eq. 4) minimises the total distance between demand points and the facilities, weighted by $h_l$ to account for the priority given to each layer $l$ and by $\eta_i$, the population at each demand point $i$. In this concept, the covering facility at layer $l$ is closer to the demand point than the covering facility at layer $l + 1$. The constraints in Eqs. (5) and (6) set the maximum number of facilities allowable and the threshold for each coverage level.

$$\min \sum_{l} \sum_{i \in I} \sum_{j \in J} h_l \eta_i \pi_{ij} x_{ijl} \quad (4)$$

subject to

$$\sum_{j} x_j \le A \quad (5)$$

$$\sum_{j \in O_i} x_{ijl} \ge \alpha_{il} \quad (6)$$
Another issue in MCLP is the non-coverage of some demand points due to distance thresholds and bounds on the number of facilities. To address this issue, Yin and Mu [16] modified the objective function to minimise the distance of uncovered demand to the nearest facility, as in Eq. (7). The weight $k$ is adjustable to the priority given to the uncovered demand points.

$$\max \sum_{i \in I} \sum_{j \in J} \eta_i \pi_{ij} - k \sum_{i \in I} \sum_{j \in J} \pi_{ij} \eta_i \mu_{ij} \quad (7)$$
Rather than assigning individual facilities to demand, cooperative models treat facilities and demand as signal transmitters and receivers. Like a signal, a facility's coverage quality dissipates over distance, which is reflected in its constraints (Eqs. 9 and 10) and derived from an aggregated signal function. Also, the 'receivers' can simultaneously accept 'signals' from any 'transmitters'. In a way, every facility cooperates to satisfy the demand while also maximising the total coverage function $\sum_{i \in N} h_i$, optimising the cover provided by facilities within the distance threshold (Eq. 8). A Cooperative Maximal Covering Location Problem (CMCLP) was proposed as follows [8].

$$\max \sum_{i \in N} h_i \quad (8)$$

subject to

$$\sum_{i \in N} h_i \ge \beta H \quad (9)$$

$$N = \{\, i \in N \mid h_i \ge \beta \,\} \quad (10)$$
So far, the models discussed have treated demand as a set of points. The approach may be complicated for problems that require demand to be two-dimensional, as the coverages of demand areas are not binary but, instead, fractional (e.g. 57 per cent covered). Murray et al. [11] offer a solution by proposing the MCLP-Implicit model (Eqs. 11–14). The term 'Implicit' refers to the model not tracking combinations of two-dimensional coverages. Rather, it introduces the index $l$ to represent the coverage layers, each with a different coverage proportion threshold $\beta_l$. The Eq. (12) constraint ensures a minimum number of facilities $\alpha_l$ are assigned to demand $i$ according to the threshold at every layer $l$. The binary variable $m_{il}$ indicates if demand $i$ is covered at layer $l$.

$$\max \sum_{i} \eta_i m_i \quad (11)$$

subject to

$$\sum_{j \in J_{il}} x_j \ge \alpha_l m_{il}, \quad \forall i, l \quad (12)$$

$$\sum_{j} x_j = A \quad (13)$$

$$\sum_{l} m_{il} = m_i \quad (14)$$
Overall, the literature’s commonalities lay in the lack of consideration for socioeconomic variables. Therefore, the following section proposes a framework to solve the oversight. The MCLP-Implicit has been chosen as the base of the proposed
covering location model. Apart from MCLP's account for the limited number of facilities, the MCLP-Implicit model's ability to accommodate two-dimensional demand representation suits the available SEQ socioeconomic data, which is collected at the Statistical Area 2 (SA2) unit by the Australian Bureau of Statistics (ABS).
3 Proposed Approach: The Socioeconomic Facility Location Optimisation Framework Based on the literature review on MCLP, the Socioeconomic Facility Location Optimisation framework is introduced. It consists of three consecutive components— Robust Backward Stepwise Regression Modelling, Covering Location Modelling and Metaheuristic Solution Algorithms. The demand for the facility is first statistically modelled using the robust backward stepwise regression method. This modelling will identify impacting socioeconomic variables and each of their weights. Considering those impacting variables, the projected demand will be utilised in the modified MCLP-Implicit model designed to suit regional characteristics similar to the SEQ. The proposed covering location model is then solved by a metaheuristic algorithm. The following subsections describe the three components of the proposed framework one by one.
3.1 Robust Backward Stepwise Regression Modelling The regression modelling aims to obtain a multiple regression equation that effectively ‘predicts’ the rate of building fires, bi . The equation consists of coefficient ad for every variable gid , where i index represents the demand areas and d index represents the socioeconomic variables, forming a multiple regression equation such as in Eq. (15).
$$b_i = a_0 + a_1 g_{i1} + \cdots + a_d g_{id} \quad (15)$$
The socioeconomic variables act as the explanatory variables $g_{id}$, while the rate of building fires acts as the response variable $b_i$. Due to the lack of a causation study, the predictive socioeconomic equation is formed using a widely adopted variable selection method, backward stepwise regression analysis. The approach is inspired by a preceding study on building fires in South East Queensland [17]. The model's accuracy is improved by applying the Robust Final Prediction Error (RFPE) function at every step to limit the effect of outliers and biases. Maronna et al. [18] developed a robust statistical technique to improve Akaike's FPE criterion, which is sensitive to non-typical data points or outliers [19]. RFPE was therefore created as a solution by developing a robust FPE criterion
using a robust M-estimator in Eq. (16) [18].

$$RFPE(C) = E\left[\rho\left(\frac{y_0 - x_{0C}' \hat{\beta}_C}{\sigma}\right)\right] \quad (16)$$

$$\text{where } \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^q} \sum_{i=1}^{n} \rho\left(\frac{y_i - x_{iC}' \beta}{\sigma}\right) \quad (17)$$

$$y_i = \sum_{j=1}^{p} x_{ij} \beta_j + u_i = x_i' \beta + u_i \quad (18)$$

$$\rho(r) = r^2 \quad (19)$$

$$x_{iC} = (x_{i1}, \ldots, x_{ip}) \quad (20)$$

$$C \subset \{1, 2, \ldots, p\} \quad (21)$$

$$i \in \{1, 2, \ldots, n\} \quad (22)$$
where $(x_{ij}, y_i)$ represents the dataset, with $x_i$ as the independent (predictor) variables and $y_i$ as the dependent variable. $(x_0, y_0)$ is an extra data point used to measure the robustness of the dataset. $\sigma$ is the robust error scale, with an estimator derived through the iteratively reweighted least squares algorithm [20]. $\hat{\beta}$ is the robust M-estimator of the unknown parameter $\beta$. An estimator of the RFPE criterion in Eq. (23) has been established as follows [18].
$$\widehat{RFPE} = \frac{1}{n} \sum_{i=1}^{n} \rho\left(\frac{r_{iC}}{\hat{\sigma}}\right) + \frac{q}{n} \frac{\hat{A}}{\hat{B}} \quad (23)$$

$$\text{where } \hat{A} = \frac{1}{n} \sum_{i=1}^{n} \psi\left(\frac{r_{iC}}{\hat{\sigma}}\right)^2, \quad \hat{B} = \frac{1}{n} \sum_{i=1}^{n} \psi'\left(\frac{r_{iC}}{\hat{\sigma}}\right) \quad (24)$$

$$r_{iC} = y_i - x_{iC}' \hat{\beta}_C \quad (25)$$

$$q = |C| \quad (26)$$

$$\psi(r) = \rho'(r) = 2r \quad (27)$$
Within the backward stepwise regression framework, RFPE is calculated for the current model and every possible sub-model that has eliminated one variable. A
Robust RFPE backward stepwise regression algorithm is presented in plain language as follows.

Step 1: Calculate RFPE for a model with all candidate explanatory variables (Eq. 23).
Step 2: Calculate RFPE for every possible sub-model in which a single variable is eliminated (Eq. 23).
Step 3: Delete the single variable that improves the RFPE the most.
Step 4: Set the remaining explanatory variables as the 'current model'.
Step 5: Repeat Steps 2–4 until the 'current model' RFPE is lower than that of every possible sub-model.
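The five steps above can be sketched in code. The following is a minimal illustration of the backward elimination loop, with ordinary least squares and Akaike's classical FPE standing in for the robust RFPE criterion of Eq. (23); the data are synthetic and the function names are hypothetical:

```python
import numpy as np

def fpe(X, y):
    # Classical Final Prediction Error, (RSS / n) * (n + q) / (n - q),
    # standing in for the robust RFPE of Eq. (23); q = number of columns kept.
    n, q = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return (rss / n) * (n + q) / (n - q)

def backward_stepwise(X, y):
    # Steps 1-5 of the algorithm: repeatedly delete the single variable whose
    # removal improves the criterion most; stop when the current model beats
    # every possible sub-model.
    cols = list(range(X.shape[1]))
    current = fpe(X, y)
    while len(cols) > 1:
        scores = {c: fpe(X[:, [k for k in cols if k != c]], y) for c in cols}
        best_col, best_score = min(scores.items(), key=lambda kv: kv[1])
        if best_score >= current:
            break  # Step 5: no single deletion improves the criterion
        cols.remove(best_col)  # Step 3: delete the most improving variable
        current = best_score  # Step 4: the remainder is the current model
    return cols

# Synthetic data: y depends on columns 0 and 1 only; 2 and 3 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)
print(backward_stepwise(X, y))  # columns 0 and 1 are always retained
```

The robust version would replace `fpe` with the M-estimator-based criterion of Eqs. (23)–(27); the surrounding elimination loop is unchanged.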
3.2 MCLP-Implicit Modelling

The main objective of the research is to develop an optimisation model that considers socioeconomic data of the population and is suitable for regions such as the SEQ. Based on the MCLP-Implicit model (Eqs. 11–14), the MCLP modification in Yin and Mu [16], the resulting socioeconomic equation and the nomenclature in Fig. 1, a new variation of the MCLP-Implicit model is proposed as follows.

$$\max \sum_{i} \hat{\eta}_i m_i - h \sum_{i} close_i \hat{\eta}_i \quad (28)$$

subject to

$$\sum_{j \in J_{il}} x_j \ge \alpha_l m_{il}, \quad \forall i, l \quad (29)$$

$$\sum_{j} x_j = A \quad (30)$$

$$\sum_{l} m_{il} = m_i \quad (31)$$

$$close_i = \begin{cases} \min_{j} dist_{ij}, & m_i = 0, \; \forall i \\ 0, & \text{otherwise} \end{cases} \quad (32)$$

$$x_j \in \{0, 1\}, \quad \forall j \quad (33)$$

$$m_{il} \in \{0, 1\}, \quad \forall i, l \quad (34)$$

$$m_i \in \{0, 1\}, \quad \forall i \quad (35)$$

$$\hat{\eta}_i = \hat{b}_i q_i, \quad \forall i \quad (36)$$
The objective function (Eq. 28) maximises the demand areas covered while also minimising the distances of uncovered demand areas to the closest facilities. The Eq. (29) constraint links the decision to site a facility at site $j$: if $x_j$ equals zero for every $j$ in $J_{il}$ (no facility can cover demand area $i$ at the $\beta_l$ threshold), $m_{il}$ will also equal zero. The Eq. (30) constraint limits the number of facilities sited. The Eq. (31) constraint tracks the level of coverage each demand area $i$ is receiving. The Eq. (32) constraint minimises distance for demand areas with no coverage; it assigns the variable $close_i$ a value if and only if demand area $i$ is uncovered. The constraints in Eqs. (33)–(35) restrict the variables to binary values. The demand volume, the number of building fires, is projected through the projected building fire rates in Eq. (36) and the socioeconomic data. One novelty in the proposed variation of the MCLP-Implicit model is the addition of the term $h \sum_i close_i \hat{\eta}_i$ to the objective function (Eq. 28) to minimise the distances of uncovered demand areas to their closest facilities. Another novelty is accommodated in the Eq. (36) constraint, where the rates of building fires are projected using socioeconomic variables. The concept of an implicit covering model is based on the notation $\beta_l$, the minimum percentage for an 'acceptable coverage' of a demand area by each facility at level $l$. If the minimum number of facilities at level $l$ is $\alpha_l$, complementary coverages by those facilities will provide at least $\beta_l \times \alpha_l$ per cent of coverage. For example, $\alpha_2$ is set at 2 for the coverage level $l = 2$, inferring that up to two facilities are considered to cover the demand area at that level. If $\beta_2$ is set at 47.5%, complementary coverages by the two facilities will cover at least 95% of the demand area at the $l = 2$ level [11].
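The layered-coverage arithmetic in the worked example above can be checked directly; `guaranteed_coverage` is a hypothetical helper, not part of the model itself:

```python
def guaranteed_coverage(beta_l, alpha_l):
    # Minimum per cent of a demand area covered when alpha_l facilities each
    # contribute complementary coverage of at least beta_l per cent.
    return beta_l * alpha_l

# The paper's example at layer l = 2: two facilities at 47.5% each.
print(guaranteed_coverage(47.5, 2))  # 95.0
```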
3.3 Proposed Metaheuristic Solution Algorithm

Numerous algorithms can be adopted to solve an optimisation problem. One of the most conventional metaheuristic approaches to solving an MCLP model is the Tabu Search (TS) algorithm [21]. Its flexibility in searching for optimality in large and multidimensional datasets is also beneficial in developing and assessing the proposed framework. TS uses deterministic memory structures to prevent the algorithm from being trapped in a local optimum [22]. The name, Tabu Search, originated from its tabu list feature, which prevents solutions from being reconsidered overly repetitively [23]. The list tracks the changes in node status in short-term memory. For flexibility, aspiration criteria can be included to provide exceptions to the tabu list to suit local fire services requirements [10, 22]. After the algorithm conducts a local search in its current space, sets of solutions with reasonable objective function values are assessed for their attributes. Subsequently, these attributes are used to 'intensify' the search by biasing the search function or revisiting the search space [22, 24]. For example, in siting the fire
stations, intensification can be conducted using a distance or aerial criterion to look for increasingly optimal solutions within the vicinity of the current solution. As a result of the intensification effect, a diversification function is needed to balance the algorithm and force it to search within unexplored spaces. The diversification will be done by utilising the tabu list that is incrementally expanded, following the intensification procedure [10]. Throughout the development stage, intensification parameters will be tuned. In addition to TS, other metaheuristic approaches, such as the Simulated Annealing algorithm [22], will also be implemented to solve the proposed covering location model. Finally, the performance of the algorithms will be evaluated, and the outperforming one will be recommended at the conclusion of the research.
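The mechanics described above can be illustrated with a minimal TS sketch, assuming a swap-move neighbourhood, a fixed-tenure tabu list on newly sited facilities and the new-global-best aspiration criterion; the candidate sites and objective below are hypothetical stand-ins for the SEQ data:

```python
import random
from collections import deque

def tabu_search(sites, A, objective, iters=200, tenure=7, seed=0):
    """Swap-move Tabu Search for siting A facilities out of `sites`."""
    rng = random.Random(seed)
    current = set(rng.sample(sorted(sites), A))
    best, best_val = set(current), objective(current)
    tabu = deque(maxlen=tenure)  # short-term memory of recently sited nodes
    for _ in range(iters):
        # Neighbourhood: swap one sited facility for one unsited candidate.
        moves = [(out, inn) for out in current for inn in sites - current]
        def score(move):
            out, inn = move
            return objective(current - {out} | {inn})
        # Aspiration criterion: a tabu move is allowed if it would beat the
        # best solution found so far.
        allowed = [m for m in moves if m[1] not in tabu or score(m) > best_val]
        if not allowed:
            continue
        out, inn = max(allowed, key=score)
        current = current - {out} | {inn}
        tabu.append(inn)  # the newly sited facility becomes tabu
        if objective(current) > best_val:
            best, best_val = set(current), objective(current)
    return best, best_val

# Hypothetical demand value covered by each candidate site; site A = 2.
value = {"a": 5, "b": 9, "c": 2, "d": 7}
best, val = tabu_search(set(value), 2, lambda s: sum(value[j] for j in s))
print(best, val)  # the optimum sites "b" and "d" for a value of 16
```

In the full framework, the objective would evaluate the modified MCLP-Implicit model (Eq. 28) instead of this separable toy function, and the tenure and iteration budget would be tuned during development.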
4 Discussion and Conclusion

The paper has proposed a novel framework incorporating socioeconomic variables into a fire station covering location problem. The framework consists of a robust backward stepwise regression model, a modified MCLP-Implicit optimisation model and a metaheuristic algorithm adoption. Components of the framework have been discussed, including the relevant mathematical formulations and the particular reasons for their adoption. Aside from the incorporation of socioeconomic variables, other contributions of this framework include the first application of the robust backward regression method to model the incidence of building fires. Novel modifications to the MCLP-Implicit model are also made to consider uncovered areas and to accommodate socioeconomically projected demand rates. The proposed framework needs to be further evaluated. Future works include implementing the robust backward stepwise regression method using datasets obtained from the ABS and the Queensland Government. Also, the application of the MCLP-Implicit model and the TS algorithm in the R programming environment is under ongoing development. Last and most importantly, the framework's performance will be evaluated through comparative studies against other covering location models and algorithms before the research reaches its final conclusion.
References
1. Coates L, Kaandorp G, Harris J, van Leeuwen J, Avci A, Evans J et al (2019) Preventable residential fire fatalities in Australia July 2003 to June 2017. Bushfire and Natural Hazards CRC
2. Ashe B, McAneney KJ, Pitman AJ (2009) Total cost of fire in Australia. J Risk Res 12:121–136. https://doi.org/10.1080/13669870802648528
3. Queensland Fire and Emergency Services (2020) QFES incident data
A Novel Framework Incorporating Socioeconomic Variables …
445
4. Min S, Kim D, Lee CK (2019) Association between spatial accessibility to fire protection services and unintentional residential fire injuries or deaths: a cross-sectional study in Dallas, Texas. BMJ Open 9(5):e023780. https://doi.org/10.1136/bmjopen-2018-023780
5. García S, Marín A (2015) Covering location problems. In: Laporte G, Nickel S, Saldanha da Gama F (eds) Location science. Springer International Publishing, Cham, pp 93–114
6. Toregas C, Swain R, ReVelle C, Bergman L (1971) The location of emergency service facilities. Oper Res 19(6):1363–1373. https://doi.org/10.1287/opre.19.6.1363
7. Ye H, Kim H (2016) Locating healthcare facilities using a network-based covering location problem. GeoJournal 81(6):875–890. https://doi.org/10.1007/s10708-016-9744-9
8. Berman O, Drezner Z, Krass D (2009) Cooperative cover location problems: the planar case. IIE Trans 42(3):232–246. https://doi.org/10.1080/07408170903394355
9. Church R, ReVelle C (1974) The maximal covering location problem. Pap Reg Sci Assoc 32(1):101–118. https://doi.org/10.1007/BF01942293
10. Başar A, Çatay B, Ünlüyurt T (2009) A new model and tabu search approach for planning the emergency service stations. Springer Berlin Heidelberg, Berlin, Heidelberg, pp 41–46
11. Murray A, Tong D, Kim K (2010) Enhancing classic coverage location models. Int Reg Sci Rev 33:115–133. https://doi.org/10.1177/0160017609340149
12. Berman O, Drezner Z, Krass D (2010) Generalized coverage: new developments in covering location models. Comput Oper Res 37(10):1675–1687. https://doi.org/10.1016/j.cor.2009.11.003
13. Chhetri P, Corcoran J, Stimson R (2009) Exploring the spatio-temporal dynamics of fire incidence and the influence of socio-economic status: a case study from south east Queensland, Australia. J Spat Sci 54(1):79–91. https://doi.org/10.1080/14498596.2009.9635168
14. Karatas M, Yakıcı E (2018) An iterative solution approach to a multi-objective facility location problem. Appl Soft Comput 62:272–287. https://doi.org/10.1016/j.asoc.2017.10.035
15. Dessouky M, Ordóñez F, Jia H, Shen Z (2013) Rapid distribution of medical supplies. Springer US, Boston, MA, pp 385–410
16. Yin P, Mu L (2012) Modular capacitated maximal covering location problem for the optimal siting of emergency vehicles. Appl Geogr 34:247–254. https://doi.org/10.1016/j.apgeog.2011.11.013
17. Chhetri P, Corcoran J, Stimson RJ, Inbakaran R (2010) Modelling potential socio-economic determinants of building fires in South East Queensland. Geogr Res 48(1):75–85. https://doi.org/10.1111/j.1745-5871.2009.00587.x
18. Maronna RA, Martin RD, Yohai VJ, Salibián-Barrera M (2019) Robust inference and variable selection for M-estimators. In: Robust statistics. Wiley series in probability and statistics. John Wiley & Sons, pp 133–138
19. Akaike H (1970) Statistical predictor identification. Ann Inst Stat Math 22(1):203–217. https://doi.org/10.1007/BF02506337
20. Maronna RA, Martin RD, Yohai VJ, Salibián-Barrera M (2019) M-estimators with smooth ψ-function. In: Robust statistics. Wiley series in probability and statistics. John Wiley & Sons, p 104
21. Gendreau M, Laporte G, Semet F (1997) Solving an ambulance location model by tabu search. Locat Sci 5(2):75–88. https://doi.org/10.1016/S0966-8349(97)00015-6
22. Talbi E-G (2009) Single-solution based metaheuristics. In: Metaheuristics: from design to implementation. John Wiley & Sons, pp 87–189
23. Glover F (1989) Tabu search—part I. ORSA J Comput 1(3):190–206. https://doi.org/10.1287/ijoc.1.3.190
24. Glover FW (2013) Tabu search. In: Gass SI, Fu MC (eds) Encyclopedia of operations research and management science. Springer US, Boston, MA, pp 1537–1544
IoT and Cybersecurity for Smart Devices
The Illicit Use of Cryptocurrency on the Darknet by Cyber Criminals to Evade Authorities Mariagrazia Sartori, Indra Seher, and P. W. C. Prasad
Abstract Due to the anonymity of cryptocurrencies combined with enhanced privacy, criminals have been able to migrate their markets to the darknet. Cryptocurrencies have thus altered the power balance between criminals, law enforcement, and regulators. Owing to a lack of technical knowledge and resources, it is challenging for law enforcement to find, arrest, and effectively prosecute a darknet market vendor who trades from the other side of the world through anonymizing hops in multiple jurisdictions. Regulators are struggling to govern cryptocurrency and to develop guidelines for regulating it. To avoid being tracked by law enforcement, criminals turn to more private cryptocurrencies like Monero and Zcash, which offer strong anonymity and untraceable transactions, making them ideal darknet currencies. The reviewed journal articles identify and highlight various limitations, such as investigators' inability to access any off-chain transaction data. Furthermore, criminals have identified many channels that can hide them from law enforcement and regulators. Cryptocurrencies are difficult to stop, and criminals will continue to improve their methods to keep pace with evolving privacy-enhanced cryptocurrencies, taking full advantage of their anonymity and obfuscation features and choosing and using exchanges shrewdly to clean up their dirty coins. This review adds to current knowledge by bringing together all stages of cryptocurrency crime to better understand the role of privacy-enhanced cryptocurrencies in illicit activities. Keywords Cryptocurrency · Cryptomarket · Darknet · Law enforcement · Money laundering
M. Sartori · P. W. C. Prasad Charles Sturt University, Bathurst, Australia I. Seher (B) Canterbury Institute of Management, Sydney, Australia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_39
1 Introduction In the past few years, changes to the privacy of cryptocurrencies have given criminals new ways to commit old crimes and to move their services to the darknet without regulators and law enforcement finding out. Criminals with technical ability can avoid detection and expand their illicit markets globally using cryptocurrencies with enhanced privacy. The subsequent review on cryptocurrencies in criminal activity focuses on four areas. The remainder of the paper is organized as follows: Sect. 2 is a primer on cryptocurrencies with enhanced privacy features; Sect. 3 covers cryptocurrencies on illicit darknet marketplaces, with an overview of how the darknet enables criminals to remain hidden from law enforcement and of their motivations for using cryptocurrencies on the darknet; Sect. 4 discusses regulation and law enforcement of cryptocurrency crime; Sect. 5 discusses decentralized exchanges and money laundering; Sect. 6 discusses future work; and Sect. 7 concludes.
1.1 Research Approach This review gives an overview of the most pertinent published information on privacy-enhanced cryptocurrencies and how darknet criminals use this technology to further illicit activity on the darknet. Also covered are how law enforcement can stop money laundering and respond when criminals use technology, especially payment systems on darknet markets, and how regulators enforce anti-money laundering reporting on cryptocurrency exchanges.
1.2 Methodology The review method was to find current peer-reviewed journal articles (2021–2022) ranked in the Q1 quartile of the SCImago Journal Rankings; a select few Q2 articles were also included. The titles, abstracts, introductions, and conclusions of fifty-one journal articles were visually scanned for mentions of the illegal use of cryptocurrencies. After filtering, twenty journal articles met the study's goal. Upon further filtering, fifteen articles were selected for this review due to the repetitive nature of some articles. Significant emphasis was placed on the challenges regulators and law enforcement face in detecting and prosecuting cryptocurrency financial crime, including a summary of this study's findings, limitations, and future work. This review can aid in the understanding of cryptocurrency crimes enabled by technology. The network graphic produced in VOSviewer [1] relates terms across Scopus source names. Circles on the map show document source keywords. The bigger the circle,
the more frequently the keyword appears. Links between circles indicate keyword distance: the shorter the link, the stronger the correlation between keywords. Keywords in the same cluster share a color.
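As a toy illustration of how such a co-occurrence map is assembled before a tool like VOSviewer lays it out, the sketch below counts keyword frequencies (circle size) and pairwise co-occurrences (link weight) over a set of document keyword lists. The keywords and documents shown are invented for the example, not the review's actual corpus.

```python
from itertools import combinations
from collections import Counter

def cooccurrence(doc_keywords):
    """Count keyword frequencies and pairwise co-occurrences.

    Node weight ~ how many documents mention a keyword; link weight ~
    how many documents mention both keywords of a pair. Mapping tools
    turn these weights into circle sizes, distances, and cluster colors.
    """
    nodes, links = Counter(), Counter()
    for kws in doc_keywords:
        uniq = sorted(set(kws))        # dedupe and fix pair ordering
        nodes.update(uniq)
        links.update(combinations(uniq, 2))
    return nodes, links

# Hypothetical per-document keyword lists
docs = [
    ["cryptocurrency", "darknet", "money laundering"],
    ["cryptocurrency", "monero"],
    ["darknet", "monero", "cryptocurrency"],
]
nodes, links = cooccurrence(docs)
```

Here `nodes["cryptocurrency"]` is 3 (largest circle) and the strongest links are the pairs appearing in two documents.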
2 A Primer on Cryptocurrencies with Enhanced Privacy Features Cryptocurrencies like Zcash and Monero are useful for funding illicit darknet markets, and the markets in which they trade are also changing rapidly [2]. As cryptocurrencies grow in popularity, criminals exploit the ostensible privacy and anonymity they provide, making them well suited to crime. The increased anonymity provided by some cryptocurrencies is a highly desirable feature for criminals who wish to protect their privacy [3]. The widely used cryptocurrencies Monero and Zcash are thought to be more secure and anonymous than Bitcoin [2], and there has been a shift away from Bitcoin [4] since the Bitcoin traceability announcement in 2015 [2]. Law enforcement's arrest of one hundred darknet vendors who used Bitcoin for transactions also influenced the preference for Monero. On the contrary, research revealed that the traceability announcement did not affect attitudes towards darknet markets using Bitcoin [5]. Even so, researchers have shown that criminals know how cryptocurrencies can be tracked and change their behaviour accordingly [6].
2.1 Monero and Zcash To understand why criminals, prefer Monero, it is necessary to understand that the goal of Monero is to create a digital currency with the anonymity of cash that is by default private. The person who gets the money from a Monero transaction does not need to know who sent the money or where it came from because the transaction history is kept secret [7]. In this review, it is not necessary to understand the intricacies of Monero and Zcash other than that they conceal transaction details on the public blockchain [8] and use shielded pools in which the sender, recipient, and amount are entirely encrypted. In comparison, Zcash has the option to make transactions transparent (t-address) or private or concealed (z-address), while Monero transactions are always private [7]. Monero employs a decoy algorithm for additional obfuscation, resulting in an opaque blockchain [8]. Moreover, no information is exchanged except that both parties agree to the zero-knowledge proof [7] that the other party is aware of the existence of the value x. The stealth addresses are used one time only and are not linked to previously used addresses, thus further anonymizing the trail [7].
Currently, not all cryptocurrency exchanges support z-addresses [7]. However, criminals can exchange their Monero for Bitcoin on some exchanges, which breaks the chain of anonymity Monero provides. Instead, criminals can first send their Monero to crypto mixers, which facilitate the obfuscation and conversion of Monero to Bitcoin, leaving no possibility of tracing the transactions. Second, the mixers add a time delay and split transactions [3] into smaller ones to make them more anonymous. It is worth noting that coin mixers are not illegal and are not in themselves considered money laundering. Due to its anonymity and ease of use, Monero has become the preferred cryptocurrency for illicit trade on the darknet [4, 5]. Bahamazava and Nanda [5] hypothesized that darknet drug users no longer prefer Bitcoin. Monero started to appear in forum discussions after 2017, when the Monero network introduced RingCT transactions. This upgrade made it impossible to deduce transaction amounts from the Monero blockchain, rendering Monero untraceable and making it a potential favourite for darknet illegal trade. Analysis of the Top 100 Cryptocurrencies Historical Dataset (from Kaggle, under a CC0: Public Domain licence) suggested that darknet market users recommended Monero over Bitcoin from 2017 [9], showing Monero's uptake from that year.
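The split-and-delay behaviour attributed to mixers above can be modelled with a short sketch. The function, the number of parts, and the delay bound below are invented for illustration; a real mixer's scheduling is proprietary and more elaborate.

```python
import random

def mixer_schedule(amount, n_parts=4, max_delay_s=3600, seed=1):
    """Toy model of mixer split-and-delay: break one payment into
    several randomly sized parts, each released after a random delay,
    so no single on-chain transfer matches the original amount or
    its timing."""
    rng = random.Random(seed)
    # n_parts - 1 random cut points partition the amount
    cuts = sorted(rng.uniform(0, amount) for _ in range(n_parts - 1))
    parts = [b - a for a, b in zip([0] + cuts, cuts + [amount])]
    delays = sorted(rng.uniform(0, max_delay_s) for _ in parts)
    return list(zip(parts, delays))  # (amount, release delay) pairs

schedule = mixer_schedule(10.0)
```

The parts still sum to the original amount, which is exactly why investigators look for "peeling" patterns; randomised sizes and timings are what make that pattern-matching harder.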
2.2 Anonymity The non-fungibility and traceability issues with other cryptocurrencies were the reason for the creation of Monero [7]. Bitcoin's usefulness to criminals decreases when coins become attached to blacklisted wallets associated with criminals or nation-states. Monero is fungible because it provides no way to link transactions together or trace their history: since all transactions are private, no Monero (XMR) can be blacklisted, while all the benefits of a secure, decentralized blockchain are retained [7]. Because Monero (XMR) is untraceable, recipients need not worry about receiving coins tainted by a previous owner's activity [7]. This quality has benefited darknet market criminals. Always-on privacy ensures that illicit transactions are kept private: wallet addresses, transaction balances, and transaction amounts are all concealed, preventing them from being tracked on a publicly distributed ledger [7]. An added benefit is that users can connect their wallets without identifying themselves, and Monero transactions create anonymous one-time accounts called stealth addresses [10]. The stealth address hides the public address and the Monero owner [10]. Privacy-enhanced cryptocurrencies are accepted widely on decentralized exchanges that allow funds to enter and exit anonymously [4]. Therefore, criminals can leverage this anonymity to evade detection.
3 Cryptocurrencies on Illicit Darknet Marketplaces Due to the anonymity of the darknet, criminals are increasingly turning to darknet markets and cryptocurrencies for financial gain [11]. The only way to ensure their long-term financial security is to continue to commit criminal acts online [11]. Darknet crime has become increasingly popular among criminals, despite the risks of detection and punishment [11]. Correspondingly, offline market experience can be translated into increased earnings in the digital realm [5]. According to research on conventional criminals, individuals with online drug selling experience outperform offline sellers [12] and are better at evading authorities. The Ouellet et al. [12] study compared drug sellers who started in physical drug markets with those who started on digital platforms. The data imply that vendors who bridge the online and physical drug markets maximize profits. The researchers examined the criminal capital theory that more experienced criminals profit more from crime and demonstrated that criminals with experience could escape detection, maximize profits, and navigate illegal markets [12]. Darknet markets predominantly accept only cryptocurrencies as payment to ensure anonymity [4] and attract sellers and buyers because of the perceived anonymity, protection, and superior quality of the goods and services [4]. Privacy-enhanced cryptocurrencies are the preferred payment methods on darknet markets [5]. Matsueda et al. [6], for instance, showed that previous experience with crime was associated with more illegal earnings from crime, and Morselli and Tremblay found that the more crime an offender commits, the higher his or her illegal earnings [6]. The knowledge and technical skills that aid successful criminal activity include setting up a wallet, making sales, and turning cryptocurrencies into functional currency, all of which require some technical knowledge [12].
Consequently, creating and operating online shops necessitates technological skill, unlike offline markets, which are historically characterized by speedy purchases and cash payments [12]. An overview of the broader crypto market is provided in this section, focusing on three key facets [13]: Tor, I.P. addresses, and the Tails operating system. Tor, which can be easily downloaded, is preconfigured to use an overlay network that masks users' internet traffic [4] by disconnecting the user's I.P. address when they enter the network. The Tor darknet ecosystem of hidden services includes cryptocurrency markets and online forums where goods and services are exchanged. Cybercriminals conceal their identities using digital encryption and offer goods for sale using enhanced-privacy cryptocurrencies [12]. Criminals also use Tails, an operating system that starts up from a USB stick and then deletes all traces of illegal activity by wiping session data and encrypting specific files that stay on the computer after the session is over [13]. A further novel finding is that darknet market vendors coached buyers on how to create alibis: since most customers buy Bitcoin as an investment, a buyer who is questioned can always claim the Bitcoin was purchased as an investment [4]. Another finding is that darknet market vendors offer a service to mix the
Bitcoins before they are transferred to wallets, so sellers and buyers do not have to worry about revealing their identities [4]. The researchers describe how vendors and buyers must set up a crypto wallet, and crypto markets prefer currencies like Monero that have gradually gained popularity. In addition, cryptocurrencies can be stored in one of three types of wallets: hosted, non-custodial, or hardware. Crypto market participants likely use non-custodial or hardware wallets [8].
4 Regulation and Law Enforcement of Cryptocurrency Crime The shifting cryptocurrency landscape threatens regulation and crime investigation; in some jurisdictions [13], cybercrime is not even prosecuted. Gonzálvez-Gallego et al. [13] emphasized that some jurisdictions remain unregulated or define cryptocurrencies differently. Notably, current research into cryptocurrency crime cases has primarily focused on prosecutions in the USA [14]. At the same time, unclear or strict local rules have pushed criminals to move to other countries and to trading platforms with less strict rules [3]. To investigate worldwide, dispersed cryptocurrency-related crimes on the darknet, law enforcement agencies face a wide range of issues with new technology as they attempt to deanonymize decentralized ledger networks and link wallets to the I.P. addresses of criminals [2]. For instance, in blockchain investigations of decentralized exchanges that generate transactions outside the blockchain, any off-chain transactions are inaccessible [7] and untraceable [13], limiting what regulators and law enforcement can access. Just as importantly, traditional tools like bank records or subpoenas do not work, or simply do not exist, off-chain. To make matters worse, cryptocurrency mixing methods make coins anonymous by obscuring the link between identities and transaction address flows [4]. Researchers point out that this benefits cybercriminals, who can hide their illicit transactions from blockchain analysis [7] by using cryptocurrency mixers. Researchers also found it hard to extract information from hundreds of blockchains because dirty money can become clean and then dirty again, over and over [15].
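A minimal sketch of the on-chain tracing idea, and of why off-chain hops defeat it, is below. It follows sender-to-receiver edges outward from flagged addresses with a breadth-first search; the addresses and edge list are hypothetical, and real chain-analysis tools add heuristics (address clustering, taint weighting) omitted here.

```python
from collections import deque

def reachable_tainted(edges, flagged):
    """Toy chain analysis: follow on-chain transfers (src -> dst edges)
    outward from flagged addresses. Movements inside an exchange's
    internal ledger never appear as on-chain edges, so funds passing
    through them drop out of the trace."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, queue = set(flagged), deque(flagged)
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical ledger: a market payout reaches an exchange deposit
# address; the payout side of the exchange is a separate address with
# no on-chain edge connecting the two (the off-chain hop).
onchain = [("darkmarket", "addr1"), ("addr1", "exchange_deposit"),
           ("exchange_payout", "addr2")]
tainted = reachable_tainted(onchain, {"darkmarket"})
```

From the tracer's viewpoint `addr2` looks clean even though the funds originated at the market, which is the off-chain blind spot the paragraph above describes.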
Even when chain analysis is used on the surface web, it is essential to note that at some point in the informational accumulation stage of a crypto market crime, users can choose to switch to the Tor browser and leverage the system's anonymity to mask their I.P. address information [8]. However, since the Tor browser can access many clear-web sites, information accumulation can continue via this channel [8], unless the criminals decide to use a virtual private network (VPN). Also, cross-border funds transfers can involve different jurisdictions, likely with different approaches to cryptocurrencies, making monitoring and law enforcement more difficult [13]. Even if these illegal activities are discovered, it will be hard to impose financial sanctions on other countries, as the European Banking Authority (EBA) points out [13].
Trozze et al. [14] examined cryptocurrency crimes decided by the U.S. District and Circuit Courts and how financial gain motivated defendants in various schemes. Interestingly, researchers observed that at least one federal district court has declared that Bitcoin is not money, despite its use as a medium of exchange, because it is not issued or protected by any sovereign state, unlike cash or currency. This contrasts with the treatment of virtual currencies used online to fund fraudulent and illegal behavior, because cryptocurrencies like Bitcoin may be purchased in exchange for fiat currency, function as a denominator of value, and are used in financial transactions [11]. It was found that the prosecution of cryptocurrency crimes remains underdeveloped relative to the value of the crimes [14]. The presence of defendants and the use of a cryptocurrency other than Bitcoin to conduct a crime decreased the likelihood of conviction [14], as the prosecution had no realistic chance of success when there were no other compelling reasons for the matter to be resolved at trial. It is unknown whether non-Bitcoin evidence was excluded from prosecutions because prosecutors feared it would be ruled inadmissible, being unsure of their own, the judge's, or the jury's grasp of the technological complexities [14]. Several defendants used Bitcoin to take part in the dark web or to assist unlawful activities like drug trafficking and the purchase and sale of illegal substances; for example, a Parisian defendant promoted Dream Market, a hidden service on the Tor network, under the alias "OxyMonster" and received millions of dollars through a Bitcoin tip jar [11]. Most of the cases examined involved Bitcoin (70%). Surprisingly, no cases involved other well-known cryptocurrencies such as Litecoin or Ripple. However, it is unclear whether criminals who use Bitcoin are simply more likely to be apprehended by authorities [14].
5 Decentralized Exchanges and Money Laundering The majority of centralized exchange (CEX) platforms are registered and compliant. Concerning smaller decentralized exchange (DEX) platforms, it has been pointed out that they do not have to be registered in any jurisdiction [3]. Criminals with adequate knowledge will not utilize a CEX that cooperates with and is regulated by the authorities [15]. Figures 1 and 2 illustrate the difference between CEX and DEX platforms [3]. A CEX controls cryptocurrency owners' funds, is not anonymous, and has higher transaction speeds and larger trade volumes. On a DEX, the cryptocurrency owner controls the funds and remains anonymous; this self-service model has smaller trade volumes, lower transaction speeds, and higher risk. In Figs. 1 and 2, A to G denote sellers and buyers of cryptocurrencies on the exchanges. However, criminals can still use an illiquid centralized cryptocurrency exchange: one mainly reliant on transaction fees from money laundering may not actively cooperate in the anti-money laundering (AML) activities conducted by authorities, and may instead seek to assist money launderers [15] by not knowing their customers (KYC).
Fig. 1 The organization of a centralized exchange platform (CEX)
Fig. 2 The organization of a decentralized exchange platform (DEX)
The data for Fig. 3 were downloaded as a free CSV from nomics.com, the largest crypto index website, for the Top Global Crypto Exchanges [16], and the anti-money laundering regulation data by country were extracted from The Law Library of Congress, Global Legal Research Directorate, Regulation of Cryptocurrency Around the World: November 2021 Update [17]. AML regulations are categorized for each country as regulated (Y), unknown (U), or no information (N). Figure 3 highlights that the locations of the majority of decentralized exchanges are unknown and that no regulatory information is available for them. It was also noted that when small, illiquid CEXs launder cryptocurrencies, their preoccupation is with stabilizing their profit systems and surviving in the highly volatile cryptocurrency market, which is advantageous to criminals [15]. The outcome of the study by Kim et al. [15] is that these cryptocurrency exchanges
Fig. 3 Centralized and decentralized exchanges by country and anti-money laundering regulations (counts by continent for centralized and decentralized exchanges; (Y) = regulated, (N) = not regulated, (U) = unknown)
would over-report money laundering due to their inferior detection abilities, and that such exchanges are highly involved in illegal activities. The study's limitation is that small cryptocurrency exchanges have a brief history due to their illiquidity, so the impact of regulations could not be analyzed [15]. Darknet market criminals are constantly looking for ways to improve their anonymity and are capable of utilizing new technology, as studies have shown [4]. In addition, criminals who want to avoid regulators' scrutiny are drawn to cryptocurrencies because of their speed, security, low exchange fees, global reach, and anonymity [3]. To avoid regulatory scrutiny, they flee to unregulated and non-compliant DEXs [3]. Darknet market sellers can scan and compare multiple decentralized exchanges and dark pools to find the best prices and split large orders across multiple centralized and decentralized exchange platforms [14]. To further complicate matters, cryptocurrency is purchased anonymously, and criminals use peer-to-peer (P2P) trading on DEXs [3]. No transaction history can be traced back to an individual identity, as these exchanges do not verify identity. The crypto is transferred directly between users via a peer-to-peer exchange like LocalBitcoins [2], so the exchange never holds the cryptocurrency [4]. The critical aspect of decentralized exchanges is that no intermediary is needed [18]. This was the vision of Satoshi Nakamoto, and CEXs are a betrayal of that vision; it is why decentralized exchanges exist and why their development, and the number of users migrating to them, is growing [18]. This paper does not cover the intricacies of how decentralized exchanges make money, as the topic is broad. Having established that DEXs allow anonymous, untraceable transactions, criminals can use their services to launder the privacy-enhanced cryptocurrency gained through unlawful acts [2].
Criminals use this method to convert cryptocurrency into fiat money on exchanges, which is then spent on goods and services and absorbed into the system [15]. Most cryptocurrency laundering occurs when cryptocurrency is sent to a high-risk or unregulated crypto-to-fiat DEX [3]. As a result,
the inability or reluctance of a cryptocurrency exchange to detect money laundering (ML) transactions makes it attractive to criminals [15]. The evidence showed that regulation attempts on cryptocurrency exchanges had triggered a segmentation, and sometimes an entanglement, between centralized and decentralized trading platforms that, along with the geographical separation of crypto-markets across jurisdictions, further split the crypto-industry into compliant and non-compliant segments [3]. As the market evolves, and if more exchanges engage in cross-country arbitrage, regulation and enforcement in one jurisdiction may lead criminals to migrate to other, more lax jurisdictions [3].
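A tally like the one behind Fig. 3 could be produced with a few lines over a merged extract of the exchange list and the per-country AML statuses. The column names and rows below are hypothetical stand-ins, not the actual nomics.com or Law Library of Congress data.

```python
import csv
import io
from collections import Counter

# Hypothetical merged extract: exchange list joined with per-country
# AML status (Y = regulated, N = not regulated, U = unknown).
raw = """name,type,continent,aml_status
ExA,centralized,Europe,U
ExB,decentralized,Unknown,U
ExC,centralized,Oceania,Y
ExD,decentralized,Country not disclosed,U
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Count decentralized exchanges per (continent, AML status) bucket,
# mirroring the categories plotted in Fig. 3.
tally = Counter((r["continent"], r["aml_status"]) for r in rows
                if r["type"] == "decentralized")
```

Even in this toy extract the pattern the figure highlights shows up: every decentralized row falls in an unknown-status bucket.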
6 Future Work The dataset of the Top 100 Cryptocurrencies by Market Cap and the anti-money laundering regulation data by country from The Law Library of Congress show that there is still a high number of unregulated exchanges whose jurisdiction is unknown and for which no anti-money laundering regulations are in place. Future work could monitor whether the number of cryptocurrency exchanges in unknown jurisdictions decreases or increases as regulations are implemented.
7 Conclusion New forms of anonymization, enabled by technological shifts, have allowed criminals to commit crimes on online platforms. This review highlights how well-versed criminals monetize cryptocurrencies, implying that criminals are learning to use technology to engage in crime and have the sophistication to maximize their security. As criminals continue to adopt technology, early adopters are the most successful in online cryptomarkets. Research shows that criminal offending patterns change as technology advances. This review also assessed the capabilities of regulators and law enforcement to police cryptocurrency crime, highlighting that these agencies need to improve their investigative skills for crimes to be successfully prosecuted. Currently, evidence is basic and only includes blockchain flows at a high level. Additionally, researchers showed that prosecutions focused only on Bitcoin and did not keep pace with the uptake of privacy-enhanced cryptocurrencies; prosecutors were only interested in unbeatable cases. Crime prevention strategies are bifurcating (branching), and this will affect the development of cryptocurrency crime-fighting sophistication as privacy-enhanced cryptocurrency adoption increases on the darknet.
References
1. VOSviewer—Visualizing scientific landscapes, VOSviewer, 2022. [Online]. Available: https://www.vosviewer.com/
2. Almaqableh L (2022) Is it possible to establish the link between drug busts and the cryptocurrency market? Yes, we can. Int J Inf Manag. https://doi.org/10.1016/j.ijinfomgt.2022.102488
3. Sauce L (2022) The unintended consequences of the regulation of cryptocurrencies. Camb J Econ 46(1):57–71. https://doi.org/10.1093/cje/beab053
4. Holt TJ, Lee JR (2022) A crime script model of dark web firearms purchasing. Am J Crim Justice. https://doi.org/10.1007/s12103-022-09675-8
5. Bahamazava K, Nanda R (2022) The shift of DarkNet illegal drug trade preferences in cryptocurrency: the question of traceability and deterrence. Forensic Sci Int Digital Invest 40:301377. https://doi.org/10.1016/j.fsidi.2022.301377
6. Buil-Gil D, Saldaña-Taboada P (2021) Offending concentration on the internet: an exploratory analysis of bitcoin-related cybercrime. Deviant Behav ahead-of-print:1–18. https://doi.org/10.1080/01639625.2021.1988760
7. Akcora CG, Gel YR, Kantarcioglu M (2022) Blockchain networks: data structures of Bitcoin, Monero, Zcash, Ethereum, Ripple, and Iota. Wiley Interdiscip Rev Data Mining Knowl Discov 12(1):e1436. https://doi.org/10.1002/widm.1436
8. Jardine E (2021) Policing the cybercrime script of darknet drug markets: methods of effective law enforcement intervention. Am J Crim Justice 46(6):980–1005. https://doi.org/10.1007/s12103-021-09656-3
9. nomics.com. Top crypto exchanges ranked by volume | Nomics [Online]. https://nomics.com/exchanges?interval=ytd
10. Guo Z, Shi L, Xu M, Yin H (2021) MRCC: a practical covert channel over Monero with provable security. IEEE Access 9:1–1. https://doi.org/10.1109/ACCESS.2021.3060285
11. Nolasco Braaten C, Vaughn MS (2021) Convenience theory of cryptocurrency crime: a content analysis of U.S. federal court decisions. Deviant Behav 42(8):958–978. https://doi.org/10.1080/01639625.2019.1706706
12. Ouellet M, Décary-Hétu D, Bergeron A (2022) Cryptomarkets and the returns to criminal experience. Global Crime, pp 1–16. https://doi.org/10.1080/17440572.2022.2028622
13. Gonzálvez-Gallego N, Pérez-Cárceles MC (2021) Cryptocurrencies and illicit practices: the role of governance. Econ Anal Policy 72:203–212. https://doi.org/10.1016/j.eap.2021.08.003
14. Trozze A, Davies T, Kleinberg B (2022) Explaining prosecution outcomes for cryptocurrency-based financial crimes. J Money Laundering Control ahead-of-print. https://doi.org/10.1108/JMLC-10-2021-0119
15. Kim D, Bilgin MH, Ryu D (2021) Are suspicious activity reporting requirements for cryptocurrency exchanges effective? Financ Innov 7(1):1–17. https://doi.org/10.1186/s40854-021-00294-6
16. nomics.com. Top global crypto exchanges [Online]. https://nomics.com/exchanges
17. The Law Library of Congress, Global Legal Research Directorate (2021) Regulation of cryptocurrency around the world: November 2021 update. Washington, DC
18. Aspris A, Foley S, Svec J, Wang L (2021) Decentralized exchanges: the "wild west" of cryptocurrency trading. Int Rev Financ Anal 77:101845. https://doi.org/10.1016/j.irfa.2021.101845
The Integration and Complications of Emerging Technologies in Modern Warfare Matthew Walsh, Indra Seher, P. W. C. Prasad, and Amr Elchouemi
Abstract This report presents a framework designed to assist Australian Defence Force strategic commands in constructing and establishing cyber warfare capabilities in preparation for the next major conflict. It discusses the critical failings of present command and control infrastructure and suggests models by which more efficient networks can be implemented as a solid foundation for building cyber warfare capability. It then analyses the warfare technologies already integrated into the Defence Force's cyber warfare capability, together with those that have been proposed or are being considered for integration into existing capabilities. With those technologies in mind, focus shifts to how potential adversaries are utilising their own cyber warfare capabilities, and to the critical points of focus that may influence future decisions and our defensive tactics and strategies. Finally, the consequences of these seemingly inevitable conflicts must be considered: whilst these weapons may not necessarily maim or injure, they still possess the ability to cause tangible harm and may leave lasting scars on those who have to employ them. Ultimately, Australia's cyber efforts are still in their infancy and will require significant investment from military and industry experts if we intend to be a formidable opponent in the next major conflict. Keywords Cyber warfare · Cyber Command and Control (C2) · Cyber defence · Cyber policy
M. Walsh · P. W. C. Prasad Charles Sturt University, Bathurst, Australia I. Seher (B) Canterbury Institute of Management, Sydney, Australia e-mail: [email protected] A. Elchouemi University of Arizona Global Campus, Chandler, AZ, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_40
1 Introduction As the internet has become a ubiquitous presence in modern society, it was inevitable that cyberspace would eventually cross into, and indeed envelop, the warfare domain. Cyberspace is already having a significant effect on how we communicate within the Defence sector, on what we consider a viable target, and on what constitutes a weapon of war. After land, sea, air and, more recently, space, cyberspace has established itself as the fifth battlefield and, as the recent invasion of Ukraine has shown, the lines on this new battlefield are not just evolving; they are still being drawn. With countries already launching offensive cyber operations in forms such as hacking military information systems, paralysing those systems, disrupting or destroying infrastructure, and running psychological operations through social media campaigns, it is imperative that government and military personnel are educated and situationally aware of this new warfare domain. The simultaneously clandestine and intangible nature of cyber warfare means there is a certain lack of solid international regulation or properly defined policy on agreed rules of engagement, which may explain the current absence of clear policy outlining Australia's intent and capability when it comes to engaging in the cyberwarfare domain. Whilst those conducting this research are in no immediate position to influence cyberwarfare doctrine, the research can assist those in tactical command positions by providing subject matter expertise. As is the intrinsic problem with developing new capabilities, there is very little internal skill or knowledge base that those making cyber policy within the Australian military can draw from, and those in cyber command positions are often not suitably knowledgeable in the sphere of warfare they are responsible for.
That being said, the following research has been consolidated into a framework designed to provide the fundamental steps for military commanders to prepare for and respond to potential cyber engagements. Building on a base of a versatile and reactive Command and Control (C2) infrastructure, we look at how existing models and military decision-making processes can be adapted to allow the integration of cyber warfare concepts and technologies. In the latter stages of this framework, the implementation of cyber warfare technologies, and the ethical ramifications thereof, forms a critical part of the decided final course of action. Whilst the framework is inherently designed to coordinate offensive efforts in cyberspace, the decision phase of the process is designed to ensure that the implemented strategy is both ethically reasonable and proportionate. Finally, when the decision to act has been made, the ultimate stage of the framework orchestrates the execution of cyber actions and encourages assessment of the current tactical situation in order to recommence the framework's process.
1.1 Research Methodology As this research is intended not only as a guide to establishing Australian cyber policy and as educational material for those within the Australian military looking to familiarise themselves with this new discipline of warfare, but also as a frame of reference for international cyber capabilities, a qualitative research approach is the most appropriate. It combines correlational and descriptive research derived from experts within the cyber security and cyber warfare communities, from experts in foreign military doctrine and rules of engagement, and from existing Australian military doctrine and standard operating procedures.
2 Literature Review This section contains the collected qualitative research that has been used to build the discussion points and recommendations within this report. Research from numerous sources and journalistic articles has been compiled around four key topics identified as critical deficiencies in Australian cyber defence planning and strategy. These are a collective and intrinsic lack of understanding and education in: Command, Control, Communications and Intelligence frameworks; cyber warfare technologies; state cyber actors and their motivations; and the inherent consequences of a cyber conflict.
2.1 Command, Control, Communications and Intelligence Frameworks To begin, in establishing a solid framework of cyber processes and operational concepts, Kim et al. [1] have developed a framework not only to detect and minimise the effect of cyber warfare engagements but also a model for offensive response. They accept that, in the same way that cyberspace and society have become intrinsically linked, cyber warfare and conventional warfare are fundamentally one and the same. Their research posited that the traditional "kill chain" strategy can be easily adapted to suit a more cyber-centric warfare capability. Around the six fundamental steps of detection, verification, tracking, aiming, engagement and evaluation, they proposed a number of models in which systems monitoring networks, systems analysing surveillance data to make tactical decisions, and systems executing strikes based on those decisions work organically to respond to cyber intrusions. This research into offensive capabilities is combined with a robust cyberwarfare framework; it is argued in their research that one is fundamental to the success of the other. The cyberwarfare framework proposed in their research is composed of Cyber Intelligence, Surveillance and Reconnaissance (ISR), Cyber Command and Control (C2), Cyber Defence,
and Cyber Battle Damage Assessment (BDA), organising these four stages in an organic relationship. The research into the evolution of C3 and intelligence infrastructure conducted by Russell and Abdelzaher [2] agrees that communications technologies have become more pervasive and will become critical resources in making tactical and strategic decisions, but their view of the integration of these technologies into cyberwarfare thus far is decidedly more pessimistic. Whilst they concede that Command and Control (C2) is still a relatively new concept in warfighting strategies, they posit that a certain amount of ambiguity and lack of firm definition exists within the cyberwarfare communications field. As the technology has progressed and priorities in battlefield technologies have shifted, numerous derivatives of the C2 model have been spawned, including Command, Control, Communications (C3), plus Computers (C4), Intelligence (C3I), cyber (C5), or cyber and intelligence (C5I). To alleviate this ambiguity, the research suggested that, of the components listed, efficient Command and Control utilising effective Communications and supported by adequate Intelligence (C3I) adequately represents the field without requiring further redefinition as technology advances. Whilst [1] worked with models already based in defence industry frameworks and technology models, this research created cyber frameworks based on more conventional, rudimentary models. Utilising Boyd's OODA (Observe, Orient, Decide and Act) Loop and Lawson's model, they detail how the two models are themselves intrinsically similar and how they can also be adapted to suit a C3I and cyberwarfare framework. From this is built a modified C3I process model that includes an intelligence process which, whilst separate from the C2, somewhat mirrors it; rather than acting, it disseminates to the C2 part of the model.
Of particular significance is the separated projections block, which is deliberately kept apart from the decision-making process. This is echoed in modern information-driven C3I systems that have integrated projections and forecasts as a critical part of the decision-making process, rather than relying solely on the intelligence provided. The research conducted by Hutchinson and Lehto [3] agrees on the importance of securing cyber infrastructure and of structuring strategies around the protection of networks, offensive strikes against enemy C2 infrastructure, and permanently disabling that infrastructure. However, the findings made within their paper were intended to educate on the various disciplines associated with broader information warfare, as opposed to cyberwarfare and its associated disciplines specifically. That being said, their research did support the notion that further development of cyber warfare capabilities can drastically change the potential risk profile of nations. This further supports the argument for Australia developing, or at the very least investing in, cyberwarfare capabilities and frameworks that can allow the Department of Defence to wage warfare around the globe rapidly, cheaply, anonymously and devastatingly. Aside from those points, the remainder of their findings delve into information warfare capabilities (electromagnetic spectrum, non-kinetic, and electronic warfare) that are already well developed and understood within the Australian Defence Force and so provide no additional strategic or training value.
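The modified C3I process model can be sketched as two loosely coupled loops: an intelligence process that mirrors C2 but only disseminates, and a C2 process that decides and acts, with projections held in a deliberately separate block. The class and method names below are our own illustrative sketch, not structures taken from Russell and Abdelzaher's paper:

```python
from dataclasses import dataclass, field

@dataclass
class Projections:
    """Forecasts kept deliberately separate from the raw intelligence picture."""
    forecasts: list = field(default_factory=list)

class IntelligenceProcess:
    """Mirrors the C2 loop but disseminates rather than acts."""
    def __init__(self):
        self.products = []

    def collect(self, observation):
        self.products.append(observation)

    def disseminate(self):
        # Hands assessed intelligence to C2; never executes actions itself.
        return list(self.products)

class C2Process:
    """Command-and-control loop: decides and acts on disseminated intelligence."""
    def __init__(self, intel: IntelligenceProcess, projections: Projections):
        self.intel = intel
        self.projections = projections

    def decide_and_act(self):
        picture = self.intel.disseminate()
        # Projections inform the decision but are kept apart from raw intel.
        return (f"act on {len(picture)} reports, "
                f"{len(self.projections.forecasts)} forecasts")
```

For example, one collected observation plus one forecast yields the decision string "act on 1 reports, 1 forecasts", with the intelligence loop never acting on its own.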
Finally, a broader discussion states that, as the world becomes more intrinsically digitised and heavily reliant on cyber technologies, militaries and governments must also work to digitise battlespaces and prepare for the next conflict to occur on a more intangible plane [4]. This report stressed the importance of developing cyber warfare capabilities from the perspective that the arms race is already happening and that those nations falling behind must work to re-establish themselves or risk being at a significant disadvantage during the next conflict. Unfortunately, the report gave little advice as to specific, practical steps that could be taken, or areas of research and investment that could be focused on, instead stressing the importance and urgency of the situation from a holistic perspective.
2.2 Cyberwarfare Technologies
2.2.1 Cloud Computing/Artificial Intelligence
The research detailed in Ref. [5] took a multimethod approach, combining military site visits, semi-structured interviews with currently serving military personnel, cyber industry experts and military stakeholders, and a review of documentation outlining United States military cyber and communication baselines. It was conducted with the aim of examining current United States military infrastructure and detailing recommendations to integrate cloud-based network processes and artificial intelligence (in this case machine learning processes) into military strategic operations. Their research discussed the potential for these technologies across what they considered the three major vignettes of military operations: typical strategic defence operations; humanitarian and disaster relief responses; and intelligence, surveillance and reconnaissance operations. Whilst the integration proposed in their research appeared promising in theory, they identified several key issues that need to be resolved or considered before integration can be contemplated on a military-wide scale: greater cooperation between discipline and force commands; synchronising battlefield requirements across multiple domains with their inherent differences in C2 requirements; and, as stated by several sources previously, a robust and resilient command and control infrastructure from which to develop frameworks. Government interest in integrating cloud computing beyond its private sector application is certainly growing [6]. However, this research into enhancing the efficiency of cloud computing using specific algorithmic models concedes that there is a distinct lack of practical research into how a cloud operating environment may fare within a complex operational environment, and that this has severely delayed the adoption of such technologies.
Whilst the algorithmic model presented within this report is designed to increase efficiency by making combat power a tangible variable, the authors also concede that this very variable creates a greater imbalance between tasks due
to the impact of priorities within a given battlefield being so drastically different depending on the situation.
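The imbalance described above can be seen in a toy allocation model: once combat power becomes an explicit weight, low-priority tasks are starved whenever capacity is tight. The greedy scheme and tuple layout below are purely illustrative assumptions, not the algorithm from Ref. [6]:

```python
def allocate(tasks, capacity):
    """Greedy allocation over tasks given as (name, combat_power, cost) tuples.

    Tasks are served strictly in descending combat-power order, so tasks
    whose battlefield priority is low may receive nothing at all when a
    shift in the situation raises the weight of competing tasks.
    """
    served, remaining = [], capacity
    for name, power, cost in sorted(tasks, key=lambda t: -t[1]):
        if cost <= remaining:
            served.append(name)
            remaining -= cost
    return served
```

With capacity 10, a strike task of weight 9 and cost 6 and a recon task of weight 5 and cost 3 are both served, while a logistics task of weight 1 and cost 4 is starved: the weighting itself produces the imbalance the authors describe.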
2.2.2 Autonomous Weaponry
Moving away from the more technical aspects of the cyber framework to those of a more ethical nature, the research conducted by Leppanen [7] was consulted in order to address the questions around the use of autonomous weaponry in the modern warfare environment. Their research was conducted with the aim of answering what they perceived to be the questions about which military and law enforcement personnel had the least knowledge. For the purposes of their research, the questions were: in order for the right to life under human rights law to be adequately protected, should the legal use of force be modified to impose proper regulation on automated weapon systems? And should automated weapon systems be considered weapons per se, or duty bearers of the state in the shape of law enforcement officials? These questions are of particular interest to our framework because, whilst Australia has been rapidly, albeit haphazardly, developing its autonomous weaponry capability, a clear understanding of the ethical implications of the use of autonomous weaponry, and the establishment of a clear set of rules of engagement for less conventional autonomous weaponry, are a critical part of any strategic military plan. By the author's own admission, one limiting factor is that the research compiled by Leppanen [7] is subject to the inherent bias of any qualitative research conducted by a single author.
2.2.3 Cyber Weaponry
As part of the effort to establish recommendations for an offensive cyber warfare capability within the Australian Defence Force, research was compiled regarding unintrusive precision cyber weapons. The research in question, conducted by Hare and Diehl [8], detailed the difference between intrusive and unintrusive cyber weaponry, the key difference being that intrusive weaponry requires privileged access to a given network or domain in order to conduct the attack, and explained why unintrusive weaponry may be a more attractive option for Australia's offensive efforts. Effectively, unintrusive precision cyber weaponry has an inherent lack of intensive intelligence collection requirements: it takes far less prior preparation and reconnaissance to simply cripple the domain or infrastructure entirely than to attempt to break into it. Such weapons also circumvent the inherently fluid aspects of cyber networks; changes of administrators, software and configuration updates are irrelevant if the entire system is effectively crippled. The authors posit that these styles of weaponry are ideal for countries and governments with small or fledgling signals intelligence capabilities (whilst Australia's signals intelligence capability may be efficient, it still relies heavily on allied information chains to fill gaps in knowledge) and, as they require less skill to operate, they are ideally suited to Australia's inexperienced cyber operators.
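The intrusive/unintrusive distinction can be captured as a simple selection heuristic: intrusive weapons demand privileged access, lengthy intelligence preparation and high operator skill, while unintrusive ones do not. The attributes and thresholds below are illustrative assumptions rather than criteria from Hare and Diehl [8]:

```python
from dataclasses import dataclass

@dataclass
class CyberWeapon:
    name: str
    requires_privileged_access: bool  # the defining trait of intrusive weapons
    intel_prep_days: int              # reconnaissance lead time required
    operator_skill: int               # 1 (novice) .. 5 (expert)

    @property
    def intrusive(self) -> bool:
        return self.requires_privileged_access

def suitable_for_fledgling_force(w: CyberWeapon) -> bool:
    """A fledgling capability favours unintrusive, low-preparation,
    low-skill options (thresholds here are purely illustrative)."""
    return (not w.intrusive
            and w.intel_prep_days <= 7
            and w.operator_skill <= 2)
```

Under this heuristic a denial-of-service style weapon (no privileged access, days of preparation, novice operators) qualifies, while a credential-exploiting implant (privileged access, months of reconnaissance, expert operators) does not.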
2.3 Existing State Policies and Strategies Within their research into applied artificial intelligence in modern warfare and national security, [9] offers critical insight into the respective policies of what would be considered the three major contenders in the cyber warfare arms race. Focusing principally on China, Russia and the United States, this research compared and contrasted each nation's respective strategy for the research and integration of developing cyber technology into state and military policy. Haney [9] justified the research as among the first not to simply dismiss the concept of militarising artificial intelligence and cyber functions as science fiction, and among the first to analyse the power and security dynamics between the United States, China, Russia and the private corporations involved in this cyber warfare arms race. It was noted during the review of this paper that the research was conducted from the point of view that the United States was perhaps not achieving the technological heights that its experts may have perceived. Nevertheless, for the purpose of this report, the information contained in [9] provides an excellent starting point for establishing how our adversaries and allies are approaching the arena that the Australian Defence Force is now looking to enter. This concern regarding the proliferation of Russian cyber activities in recent decades was echoed by a Congressional Research Service report compiled in February 2022 [10]. The report detailed offensive cyber operations dating back as far as 2007/08 and the number of Russian government or affiliated cyber units that have been stood up over the last 20 years.
Whilst the majority of Russian cyber operations, according to the Congressional research, are focused on psychological and espionage operations, indications are that several units are invested in developing offensive capabilities. More alarmingly, an acknowledged shortage of qualified personnel within the Russian services may mean that these operations are outsourced to unknown external groups or individuals [10]. Much like the work conducted by Haney [9], the research regarding the future of network-centric battlefields [11] examined China's investment in the command, control and communications space and the challenges that may pose to the United States (and its allies) in an electronic and cyber warfare context. The authors argue that this is rapidly emerging as one of the greatest threats to the stability of the Asia Pacific region. With the significant resource and financial investment being put into the endeavour, Chinese domination of communications and cyberspace is posed not only as a key challenge to overcome in the next global conflict but as potentially the tipping point that convinces either side to make the first strike.
2.4 Potential Consequences of Cyber Warfare Within their research regarding the potential effects of cyber warfare operations on serving military personnel, [12] echoes previous research stating that cyberspace has indeed become the fifth domain of warfare and that technology has become so ingrained into everything militaries do on a day-to-day basis that it affects all serving personnel, not just those perceived as "cyber warriors". It is posited that, just as with the domains of land, sea, air and space, serving personnel are inexorably linked to cyberspace by their inherent duty alone. The authors go on to state that cyberspace is already contested, with (United States military) personnel already experiencing targeted cyber operations from criminals, non-state actors and state actors. Their research does acknowledge, in concurrence with fellow researchers cited within this report, that the promulgation of any tangible international law to protect service personnel is hindered by rapidly advancing technologies, which fundamentally create a moving target for what constitutes proper rules of engagement in cyberspace.
3 Discussion Based on the information collected and analysed above, it is clear that Australia needs to increase focused investment in its cyberwarfare capability. Indeed, our allies, and more concerningly our enemies, have not only been building their cyberwarfare capabilities but have been locked in a new arms race, developing technologies and weapons to dominate this cyber battlefield. Russia continues to expand its sphere of influence utilising the numerous cyber and information warfare units within its government and military; indeed, Russia has spent the better part of the new century increasing its budgetary and technological investment in cyber warfare capabilities. With a robust framework of artificial intelligence manipulation, it has been able to sway public and political opinion in numerous countries across the globe, most notably during the 2016 United States presidential election, and has consistently demonstrated an effective ability to manipulate human behaviour [10]. China, on the other hand, has committed itself to being a leader in artificial intelligence technology. Looking to achieve that goal by 2030 and investing 150 billion dollars in the enterprise, China has quickly established itself as the major adversary in the next cyber conflict [11], working in relative secrecy to unlock the potential of the emerging cyber technologies that would grant it domination. The United States, usually an ally heavily relied upon by the Australian Defence Force in warfare, appears to be concentrating its investment on improving existing technologies, which may be putting it behind in this new arms race. This is why we seek to build a comprehensive framework from which cyber warfare capabilities can be developed, and to identify areas that potentially require improvement and greater investment in order to optimise the potential of emerging technologies [9].
Of the numerous frameworks and decision processes analysed as part of this research, all share a similar theme, especially when a military or tactical decision is involved. All fundamentally follow the principle of observing a given situation, orienting oneself or one's team within the situation, deciding the next step or action needed to achieve the desired outcome and, finally, committing to the action that will achieve that outcome [2]. Whilst this is an admittedly rudimentary decision cycle, it becomes the first fundamental block of our cyberwarfare framework. As part of the first block of this framework, the observe and orient component of the overarching decision cycle, the research concluded that one area of development is required to ensure the success of cyber operations: a secure, efficient and robust command and control (C2) infrastructure is essential in providing a sturdy base from which to build cyber capabilities and tactical operations [3]. Effectively, there is little use in establishing cyber warfare capabilities and putting cyber operations into practice on an active battlefield if the communications equipment and systems are unable to sufficiently support those operations. Of the research conducted, it was concluded that a Command, Control, Communications and Intelligence (C3&I) process has established itself as the superior method of command-and-control infrastructure, due to its ability to adapt to the majority of the technological and prioritisation shifts common in the information technology and communications fields [2]. The additional intelligence process is one that, whilst separate from the C2, somewhat mirrors it: a process that disseminates information to the C3 part of the model to inform and lend credibility to the decision-making process that forms the next block of the overall framework [2].
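As a minimal sketch, the observe-orient-decide-act cycle underpinning the framework can be written as a loop in which the outcome of acting becomes the next round's observation; the stage names follow Boyd's OODA model, while the handler signatures are our own assumption:

```python
def ooda_cycle(observe, orient, decide, act, rounds=1):
    """Run the cyclic decision process: each stage feeds the next,
    and the result of acting becomes the next round's observation."""
    situation = observe()
    for _ in range(rounds):
        picture = orient(situation)            # build situational awareness
        course_of_action = decide(picture)     # select the next action
        situation = act(course_of_action)      # acting updates the situation
    return situation
```

With toy numeric handlers (orient adds one, decide doubles, act passes through), two rounds starting from an observation of 0 yield 6, illustrating how each cycle's outcome re-seeds the next.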
The ideal scenario for creating a truly efficient and effective C3&I infrastructure would rely heavily on a combination of artificial intelligence, to integrate machine learning into intelligence and strategic decision making, and cloud systems, to ease the storing and sharing of centralised intelligence and communications data. Whilst this appears promising in theory, research and practical experience with the different computer systems across the Australian military have identified several key issues that need to be resolved or considered before integration can be contemplated on a military-wide scale: greater cooperation between discipline and force commands; synchronising battlefield requirements across multiple domains with their inherent differences in C2 requirements; and, as stated by several sources previously, a robust and resilient command and control infrastructure from which to develop frameworks [5]. Once the C3&I (Observe and Orient) section of the model either provides a sufficient amount of reliable intelligence and information, or simply disseminates that which is available, decisions regarding a course of action become the next major component of the process. This could concern the protection of critical infrastructure or networks in a defensive scenario or, should the occasion arise that offensive action is deemed necessary, this section of the framework is where the ethical ramifications of the next stage, Act, will be weighed. Whilst Australia's offensive cyber capability is relatively fledgling, there are already technologies in place and under development whose ethical ramifications need to be considered.
For instance, in the context of Australia's burgeoning autonomous warfare capability, we must ensure that, as with any kinetic military operation or strike, clearly defined rules of engagement are followed. Indeed, these offensive autonomous weapon platforms should never be considered as removed from their human counterparts but as a direct extension thereof, theoretically no different from a soldier with a firearm [7]. This ethical process is especially critical for offensive technologies whose consequences are yet to be fully realised, such as the use of unintrusive cyber weapons. To elaborate, all cyber weapons can be divided into the categories of intrusive and unintrusive, with intrusive weapons being those that require the exploitation of privileged credentials or of inherent "back door" vulnerabilities present in systems. These kinds of weapons not only require a significant investment of time, intelligence and reconnaissance gathering, they also require significant technical expertise compared to the unintrusive alternative [8]. By utilising attack methods such as Distributed Denial of Service attacks, we can combine a relatively easy-to-train cyberwarfare force with traditional kinetic military operations to ensure total domination of given battlespaces. Whilst this may seem an attractive prospect to military planners looking for a cheaper and less personnel-intensive alternative to standard kinetic operations, the ethical ramifications of their use, not only on enemy combatants but also on civilians and our own personnel, must be seriously considered. We must consider the lasting effects these little-researched forms of warfare may have on the soldiers, sailors, airmen and government officials who may have to perpetrate them, as they are now intrinsically linked to the cyber domain through the amount of technology integrated into the way militaries conduct their day-to-day business [12].
Following the example of our United States counterparts, work must be done to ensure current serving and veteran personnel are provided care if they are negatively, and possibly permanently, affected by operations conducted in cyberspace as part of the Act stage of this framework. Finally, once all available evidence has been gathered, the situation has been thoroughly assessed, the most ethical approach regarding technologies or weaponry has been put in place, and a decision to proceed has been made, the only thing left to do is act. For this final part of our framework, a modified form of the cyber kill chain principle could be utilised to coordinate the desired strategy [1], with the six fundamental steps of detection, verification, tracking, aiming, engagement and evaluation. Effectively, a traditional cyber kill chain will involve monitoring systems, infrastructure and personnel for inherent weaknesses or vulnerabilities; coordinating assets and ensuring capabilities are available to achieve the strike; hardening our own systems and infrastructure against possible cyber or kinetic retaliation; and finally, once the strike is carried out, assessing the damage to both allied and adversarial assets to determine the strike's effectiveness [1]. With the monitoring of systems, infrastructure and personnel effectively covered in the Observe and Orientate section, Act will involve the physical coordination of required assets, the establishment of defences, the physical strike if required, and extensive battle damage assessment. This final assessment of the strike's effectiveness allows the cyclic nature of the framework to function, whereby we observe and orientate ourselves within
The Integration and Complications of Emerging Technologies …
471
Fig. 1 Complete framework
the situation post-action, and the decision process shifts focus to where to proceed from here. With what has been discussed above, we now have a complete framework (see Fig. 1) that will not only allow our cyber operatives to effectively conduct offensive and defensive operations on the cyber battlefield but will also provide a clear process for military commanders to follow in the event that a coordinated effort is brought against our forces.
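The cyclic Observe-Orientate-Decide-Act process and the six kill-chain steps described above can be sketched as a simple loop; a minimal illustration (only the phase and step names come from the text, the code itself is a hypothetical aid, not any actual military system):

```python
from itertools import cycle

# Framework phases; after Act, battle damage assessment feeds back into
# Observe, which is what makes the framework cyclic.
PHASES = ["Observe", "Orientate", "Decide", "Act"]

# Six fundamental kill-chain steps carried out when acting, as listed above.
KILL_CHAIN = ["detection", "verification", "tracking",
              "aiming", "engagement", "destruction"]

def run_phases(n: int) -> list[str]:
    """Step through n phases of the framework, wrapping around after Act."""
    looper = cycle(PHASES)
    return [next(looper) for _ in range(n)]

# Five steps show the wrap-around from Act back to Observe.
print(run_phases(5))
```

The wrap-around after the fourth phase is the point made in the text: the post-strike assessment is not an end state but the input to the next Observe phase.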
4 Future Work Whilst this framework provides an outline by which cyber operations can be conducted, and heavily encourages the use of supporting technologies throughout, for this framework to be truly effective future work must be conducted to allow the full integration of emerging technologies into it. Work must be
472
M. Walsh et al.
done to rectify the issues already identified within this report. Greater cooperation between discipline and force commands, and the synchronisation of battlefield requirements across multiple domains with their inherent differences in C2 requirements, will allow emerging technologies such as artificial intelligence processes and cloud-based data-sharing capabilities to be utilised as part of this framework. Further research and input from military decision makers must be gathered to ensure that the technologies integrated into this framework are not only appropriate but will remain a suitable solution for an extended period, as the significant financial investment involved may mean that replacement systems are not forthcoming. With ever-increasing interest from potential adversaries in the cyber domain as a field of conflict, it is imperative that more research be devoted not only to ensuring the systems we have in place are hardened against possible intrusion but also to developing our ability to offensively engage opponents. It is a strong possibility that whoever controls the cyber battlespace will prove to be the ultimate power in the next major conflict; with nations and militaries relying so heavily on cyber and communications infrastructure, those who can manipulate it will have a significant advantage over their opponents. Finally, as with the development of any military technology, work must be done to understand the ethical and moral implications that unleashing a cyber conflict may have on serving men and women on both sides and on the civilians who may end up in the crossfire.
5 Conclusion Contained within this report are, effectively, the building blocks, or the guide by which the existing blocks can be restructured, to create a cohesive, logical cyber warfare capability for the Australian Defence Force. With current capabilities ranging from haphazard to potentially non-existent, the need for a concentrated cyber effort, especially in the realm of offensive capability, has been identified by numerous echelons of the military. Significant investment in constructing efficient and cohesive Command, Control, Communications and Intelligence infrastructure, combined with the training and deployment of (initially at least) rudimentary offensive capabilities, begins to build a platform from which Australia can establish itself. By taking lessons from our adversaries and maintaining ethical warfare practices and firm rules of engagement, we can become a world leader in cyber warfare expertise, much as we have garnered a reputation as professional kinetic war fighters.
References
1. Kim S, Kang J, Oh H, Shin D, Shin D (2020) Operation framework including cyber warfare execution process and operational concepts. IEEE Access 8:109168–109176
2. Russell S, Abdelzaher T (2018) The internet of battlefield things: the next generation of command, control, communications and intelligence (C3I) decision-making. In: MILCOM 2018—2018 IEEE military communications conference (MILCOM)
3. Hutchinson B, Lehto M (2020) Non-kinetic warfare: the new game changer in the battle space. In: ICCWS 2020 15th international conference on cyber warfare and security, 315–334
4. Siroli GP (2018) Considerations on the cyber domain as the new worldwide battlefield. Int Spectator 53(2):111–123
5. Lingel S, Hagen J, Hastings E, Lee M, Sargent M, Walsh M, Zhang LA, Blancett D (2020) An analytic framework for identifying and developing artificial intelligence applications. In: Joint all-domain command and control for modern warfare
6. Hyeon C, Aurelia S (2020) Enhancement of efficiency of military cloud computing using Lanchester model. In: 2020 Fourth international conference on I-SMAC (IoT in social, mobile, analytics and cloud) (I-SMAC)
7. Leppanen M (2021) Potential threat towards the most fundamental of human rights? Autonomous weapons systems in law enforcement. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1558931&dswid=-8618
8. Hare F, Diehl W (2020) Noisy operations on the silent battlefield: preparing for adversary use of unintrusive precision cyber weapons. Cyber Defense Rev 5(1):153–168. https://www.jstor.org/stable/26902668
9. Haney BS (2019) Applied artificial intelligence in modern warfare & national security policy. SSRN Electron J
10. Congressional Research Service (2022) Russian cyber units. In Focus, 4:1–3. https://crsreports.congress.gov
11. Johnson JS (2018) China’s vision of the future network-centric battlefield: cyber, space and electromagnetic asymmetric challenges to the United States. Comp Strateg 37(5):373–390
12. Tucker Jr JE (2018) Invisible wounds of modern warfare: a remedy for nascent, latent injuries servicemembers sustain in cyber battlespace. 11 J. Marshall L.J. 1 (2017–2018). https://heinonline.org/HOL/LandingPage?handle=hein.journals/jmlwj11&div=25&id=&p
Development of “RURUH” Mobile Based Application to Increase Mental Health Awareness Deby Ramadhana, Enquity Ekayekti, and Dina Fitria Murad
Abstract The purpose of this research is to develop an information system that can improve Ubah Stigma’s business processes in spreading awareness and changing the stigma about mental health in Indonesia through various mental health education and mental health management programs. The problem in this study is the lack of public interest in using the health management service programs offered by Ubah Stigma because of the difficulty of accessing these services on the Ubah Stigma website. The chosen solution was to develop a mobile application, “Ruruh”, that can be the main platform for people to access Ubah Stigma work programs, especially the Safe Space program, counseling session registration, and educational podcasts regarding mental health. The design and development of the mobile application “Ruruh” are carried out by implementing the Rapid Application Development (RAD) method, which involves several iterative processes. A quantitative linear regression method using questionnaire data was used to evaluate the system implementation. The results show that the solution successfully increased public interest in using the mobile application to access Ubah Stigma work programs and to learn more about mental health. Keywords Information system · Mobile application · Rapid application development · Mental health application
1 Introduction Background Mental and physical health are intertwined concerns that are important in human life. Mental health conditions can influence how people think, act, and communicate D. Ramadhana · E. Ekayekti · D. F. Murad (B) Information Systems Department, BINUS Online Learning, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_41
475
476
D. Ramadhana et al.
with others. However, there are still many people with low mental health literacy. People usually conceptualize good mental health as being happy and having good emotional well-being, while poor mental health is generally described in terms of mental illness with physical and behavioral disturbance [1]. In Indonesia, we can see prevalent stigmatizing views where people with mental illness are seen as ‘abnormal’. This stigma prompts people who experience mental health problems and illnesses to form a self-stigma about their own condition, so they refrain from seeking professional help. Fortunately, the younger generation (e.g., millennials and generation Z) in Indonesia acknowledges the importance of mental health, and most of them have an open attitude toward mental health conditions. This change in attitudes is driven by the advancement of information technology in the digital era. These days, people use technology such as smartphones, laptops, and computers in most of their daily life. They can find a lot of information easily on the Internet, and that information can change people’s views and understanding of things. The health sector can also take advantage of this advancement to improve its healthcare services and systems. With the high proportion of smartphone users and the growing popularity of mobile applications (apps), researchers have begun to investigate the efficacy and impact of mobile applications for mental healthcare [2]. A survey conducted by Newzoo in 2021 regarding internet usage found that Indonesia ranked 4th among countries with the most smartphone users, with 170.4 million users, equivalent to 61.7% of Indonesia’s total population [3]. This high number of smartphone users suggests that smartphone apps might be a key factor in the success of mobile mental health applications.
There have been numerous efforts to improve health services through mobile devices as this becomes a global concern [4]. Mobile applications have two important benefits that could help the success of mental healthcare applications: usefulness and convenience [5]. Ubah Stigma is a nonprofit social and mental health foundation founded in 2018 that is dedicated to increasing public awareness about mental health and changing the stigma that surrounds mental health issues in Indonesia. Its work programs provide educational mental health workshops, sharing sessions, offline mental health awareness events, educational podcasts, and professional counseling services. To promote these work programs and pique the interest of the public, the foundation mainly uses social media and the company website, and collaborates with influential figures and professionals. In 2021, Ubah Stigma launched a program named “Safe Space” on its website. This program’s objective is to assist people in mental health self-management by providing a space where people can anonymously share their stories about mental health issues without worrying about judgment or stigma from others. However, the “Safe Space” program has not been able to run optimally due to the lack of user interest in accessing website-based systems. Research Purpose Based on the problem above, the researchers will develop a mobile-based information system that can support Ubah Stigma work programs, especially the Safe Space programs
Development of “RURUH” Mobile Based Application to Increase …
477
that aim to help users perform mental health self-management, increase public awareness, and change the stigma about mental health in Indonesia. The developed mobile application will use Bahasa Indonesia as its main display language. In this study, the research question is: how effective is the mobile application “Ruruh”, developed as a platform for Ubah Stigma work programs, in increasing public awareness about mental health?
2 Literature Review Several researchers have previously developed mobile health applications and carried out field case studies on how mobile health (mHealth) applications could influence user health. A health system in this sense is a mobile-based technology that can be used to support medical practices and provide health information. Mobile health (mHealth) applications provide a unique opportunity to widen the availability and quality of mental health interventions and assessments. Many mHealth applications laud the benefit of “self-care”, which allows users to take responsibility for their own health needs [6]. The popularity of self-care mental mHealth applications has begun to expand as more people are interested in using them on a daily basis. Although this is a good thing, there is growing concern among psychiatrists about the lack of supervision from health professionals on these mental health applications [2]. One study emphasizes the importance of psychologists’ or psychiatrists’ involvement in monitoring the discussion forum in the mental health application they developed; the prototyping approach was used in that study to make sure that the system being developed follows the dynamics of the business’s current and future needs [4]. Using the System Development Life Cycle (SDLC) waterfall approach, another study designed an Android-based self-care mental health application that helps users share their worries or problems about mental health using a sharing form; online consultation features with certified psychologists are also provided [7]. From the user’s point of view, this application provides convenience in obtaining mental health services. In addition, there are also studies using the Rapid Application Development (RAD) method to build mental health self-management mobile applications.
During the development of those mobile health applications, user comfort and accessibility are the main priorities that researchers have to deliver, because they influence user interest in using the developed application [8]. Building on these related studies, the researchers in this study will design and build a mobile-based mental health application that can help users practice self-care by sharing their stories regarding mental health conditions, registering for counseling sessions with professionals, and learning further information about mental health through various informational media.
3 Research Method Rapid Application Development (RAD) This study uses Rapid Application Development (RAD) with an iterative methodology that adopts Alan Dennis’s theory. The RAD methodology is a collection of methods developed to overcome traditional SDLC weaknesses, such as those of the waterfall model. Additionally, this methodology provides better software quality compared to approaches using traditional SDLC methods [9]. In Fig. 1, we can see that the development process is divided into 3 iterations. The initial phase of this method consists of (1) planning and (2) analysis. After that, the application development phase continues in each iteration, which consists of (2) analysis, (3) design, and (4) implementation. Once system version 1 in the first iteration is implemented, work immediately begins on the next iteration. This process continues until the developed system is complete or no other issues arise from the user’s experience with the system. Research Method Based on Fig. 1, the research method is implemented through several steps outlined in Fig. 2. The main object of this study is the problem in Ubah Stigma, which is the lack of a system that can handle its business processes effectively. The initial steps taken in this study are literature reviews based on the existing system’s problem identification and assumptions, studying relevant literature, collecting data, and analyzing the system. Afterward, the researchers model the system using the RAD methodology to develop a mobile-based application named RURUH. User Acceptance Test (UAT) An assessment using User Acceptance Testing (UAT) was initiated to measure the impact of the system application implementation, centered on the research question above. UAT is a system test conducted by users in which the output document may be used as proof that the application system meets the users’ needs and requirements [10].
Fig. 1 RAD methodology: planning and analysis, followed by three system iterations each consisting of analysis, design, and implementation
Fig. 2 Research framework
4 Result and Discussion The mobile information system application “RURUH” for Ubah Stigma was developed after conducting a series of RAD methodology processes. In this section, the result of the system application built will be explained. The discussion includes an explanation of the system requirements, system appearance and features, along with the business processes that occur in the system. Pictures are included to help describe the processes that take place during system design and implementation. RAD Initial Planning During the initial stage of the RAD system development process, project planning and data collection are carried out. The data obtained is then analyzed to discover the system requirements needed. The breakdown of the functional requirements is shown in Table 1 and the non-functional requirements in Table 2. RAD Iteration In the first iteration, the design process develops UML diagrams based on the functional requirements of the application system “RURUH”; the resulting functional requirements are illustrated in the Use Case Diagram shown in Fig. 3. The development and implementation of the application system is performed in the second and third iterations. During this stage, the researchers closely collaborate with Ubah Stigma to determine whether the system developed is adequate or whether any changes are needed. The following are the results of the implementation of the application system development. The application system shown uses Bahasa Indonesia as its main display language. To access the system, all users, whether public users, admin, or coordinator, must log in to the system using their registered account. If they do not have an account, they must create a new account by filling out the registration form
Table 1 Functional requirements (User | Module name | Functional requirement)
Admin, User, Coordinator | Login | Fill in login form
User | Registration | Fill in account registration form
User | Safe space | Fill in safe space story submission form; view private safe space story; view public safe space story
User | Counselling | View counselling program information; fill in counselling registration form
User | Podcast | View podcast list released by Ubah Stigma; play podcast audio
User | Our program | View Ubah Stigma projects; contact Ubah Stigma to receive more detail about selected projects
Admin | Safe space | View all safe space stories submitted by users; view safe space story detail
Admin | Counselling | View all user counselling registration data; contact user to verify the registration
Coordinator | Counselling | Download counselling registration reports and data from the system
Table 2 Non-functional requirements (Parameter | Requirement)
Reliability | Users can access the application system anytime when connected to the internet; anyone can use the application system as a user
Performance | Application system response time is less than 2 s per user request
Security | Application system users must have a registered account; user and admin must log in to access the system application menu; the application system displays functions in accordance with the logged-in user role
Usability | Users with a basic level of technology knowledge can easily learn how to use the application system; users should be able to navigate through the system effectively
consisting of username, email, and password. The system displays a welcome page if the registration process is accepted and the user successfully logs in to the system. After that, the system displays the main page with a menu according to the user role (Fig. 3). Figure 4 shows the user main page and the admin or coordinator main page. On the user main page, the user can access the Safe Space, Podcast,
Fig. 3 Use Case Diagram
Counselling Registration and Ubah Stigma programs menus. Meanwhile, the admin and coordinator pages have two features: safe space and counselling registration. The admin and coordinator main pages are equipped with automatic counts of the Safe Space and counselling registration data saved in the system. The safe space story submission form can be accessed by the user through the main page by clicking the “Tulis di sini” text on the Safe Space menu. Users can write their story and choose their story options via checkboxes, such as private/public publication, labeling a story as containing trigger-warning words, and approval for Ubah Stigma to use their story in mental health educational content. Users can view their private safe space story or other users’ public safe space stories by clicking the “Lihat Detail” button on the Safe Space menu. The user main page also has two features for mental health educational purposes: Podcast and Program Kami (Our Program). In the Podcast menu, users can play educational mental health podcast audio released by Ubah Stigma, while
Fig. 4 Main page
the Program Kami menu contains a list of Ubah Stigma projects regarding mental health with a brief description of each project. Users can also register for Ubah Stigma professional counselling by clicking the Counselling Registration menu, which shows the counselling registration main page with details of Ubah Stigma counselling programs, counselling terms and conditions, and the counselling registration form. According to the counselling registration page in Fig. 5, users can choose the counselling package they want. If they agree to the terms and conditions shown in the pop-up messages, the system redirects them to the registration form page. On the safe space page for the admin and coordinator role, shown in Fig. 6, the admin can see all private and public safe space stories submitted by users. The admin can also view safe space details, including the writer’s username and all the story options.
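The role-dependent main pages described above amount to a menu lookup keyed by the logged-in role; a minimal sketch (the role and menu names are taken from the text, but the function and data structure are illustrative assumptions, not Ubah Stigma’s actual code):

```python
# Menus shown on the main page per role, as described for RURUH.
ROLE_MENUS = {
    "user": ["Safe Space", "Podcast", "Counselling Registration", "Program Kami"],
    "admin": ["Safe Space", "Counselling Registration"],
    "coordinator": ["Safe Space", "Counselling Registration"],
}

def main_page_menus(role: str, logged_in: bool) -> list[str]:
    """Return the menus to display, honouring the security requirement that
    displayed functions must match the logged-in user's role."""
    if not logged_in:
        return ["Login", "Registration"]  # unauthenticated options only
    return ROLE_MENUS.get(role, [])       # unknown roles see nothing

print(main_page_menus("admin", logged_in=True))
```

Keeping the role-to-menu mapping in one table makes the non-functional security requirement in Table 2 easy to audit: the display logic cannot show a menu that the table does not grant.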
4.1 UAT Evaluation To measure the impact of the system application usage on improving public awareness of mental health, a survey was carried out by distributing questionnaires to a random selection of respondents consisting of Ubah Stigma community members
Fig. 5 Counseling registration
and staff. The distributed questionnaire contained questions regarding the use of the features provided in the system application. To measure the level of user satisfaction, the questionnaire used Likert-scale questions with scores of 1–5, where a score of 1 represents strongly disagree, 2 disagree, 3 doubtful, 4 agree, and 5 strongly agree. The questionnaire has one dependent variable (Y) and one independent variable (X). The dependent variable measured is user awareness about mental health after using the application system, while the independent variable is the application system “RURUH” features. The t-test is used to determine whether there is a significant effect of the independent variable (X) on the dependent variable (Y): if the significance value (Sig.) < 0.05, the independent variable (X) partially affects the dependent variable (Y) (Table 3). According to the coefficients shown in Table 3, the Sig. value for the RURUH Application (X) is 0.000 < 0.05, which indicates that the application system “RURUH” has an influence on the user’s mental health awareness. The F test is used to determine whether there is an overall impact of the independent variable (X) on the dependent variable (Y): if the Sig. value < 0.05, the independent variable has an overall impact on the dependent variable. Based on the
Fig. 6 Admin counselling registration detail

Table 3 Coefficients^a (Model 1)
(Constant): B = 0.270, Std. error = 1.961, t = 0.136, Sig. = 0.892
RURUH Application: B = 1.214, Std. error = 0.062, Beta = 0.961, t = 14.810, Sig. = 0.000
^a Dependent variable: Mental Health Awareness
Table 4 ANOVA^a (Model 1)
Regression: Sum of squares = 934.328, df = 1, Mean square = 934.328, F = 219.348, Sig. = 0.000^b
Residual: Sum of squares = 76.672, df = 18, Mean square = 4.260
Total: Sum of squares = 1011.000, df = 19
^a Dependent variable: Mental Health Awareness
^b Predictors: (Constant), RURUH Application
Table 5 Model summary (Model 1)
R = 0.961^a, R2 = 0.924, Adjusted R2 = 0.920, Std. error of the estimate = 2.064
^a Predictors: (Constant), RURUH Application
ANOVA output in Table 4, the Sig. value is 0.000, signifying that the RURUH application (X) affects the user’s mental health awareness level (Y). Based on Table 5, the R2 value is 0.924, which means that the independent variable, the application system “RURUH” (X), explains 92.4% of the variation in the dependent variable, the user’s mental health awareness (Y). From this result, we may conclude that the usage of the mobile application RURUH has a positive influence on user mental health awareness.
5 Conclusion In this study, the researchers developed a mobile-based mental health application system, “RURUH”. The application system has features and functions that Ubah Stigma can use to promote and increase mental health awareness among the Indonesian public by providing educational podcast content, mental health self-care via Safe Space story sharing, professional counselling registration, and information about Ubah Stigma work programs related to mental health. Currently, the developed system’s self-care features are limited to the Safe Space menu. In further research, this application system can be re-analyzed and extended with more features that help users perform mental health self-care.
References
1. Willenberg L et al (2020) Understanding mental health and its determinants from the perspective of adolescents: a qualitative study across diverse social settings in Indonesia. Asian J Psychiatr 52:9–18. https://doi.org/10.1016/j.ajp.2020.102148
2. Henson P, Wisniewski H, Hollis C, Keshavan M, Torous J (2019) Digital mental health apps and the therapeutic alliance: initial review. BJPsych Open 5(1). https://doi.org/10.1192/bjo.2018.86
3. Newzoo (2021) Global mobile market report
4. Krisnanik E, Isnainiyah IN, Resdiansyah AZA (2020) The development of mobile-based application for mental health counseling during the COVID-19 pandemic. In: Proceedings of the 2nd international conference on informatics, multimedia, cyber, and information system (ICIMCIS), pp 324–328. https://doi.org/10.1109/ICIMCIS51567.2020.9354299
5. Ju Hwang W, Hee Jo H (2019) Evaluation of the effectiveness of mobile app-based stress-management program: a randomized controlled trial. Int J Environ Res Public Health 16(21). https://doi.org/10.3390/ijerph16214270
6. Ahmed A et al (2021) Mobile applications for mental health self-care: a scoping review. Comput Methods Programs Biomed Update 1:100041. https://doi.org/10.1016/J.CMPBUP.2021.100041
7. Oktaria N, Anjani N, Listi TP, Dewangga T, Faujiyah Y, Sevtiyuni PE (2019) Perancangan Sistem Informasi Mi-Cure Berbasis Aplikasi Mobile. Annual Research Seminar (ARS) 5(1):136–140
8. Sidiq MA, Wahyuningrum T, Wardhana AC (2021) Rancang bangun aplikasi suicide risk idea identification menggunakan rapid application development. JUPITER (Jurnal Penelitian Ilmu dan Teknologi Komputer) 13(2):76–86
9. Chrismanto AR, Delima R, Santoso HB, Wibowo A, Kristiawan RA (2019) Developing agriculture land mapping using Rapid Application Development (RAD): a case study from Indonesia. Int J Adv Comput Sci Appl 10(10):232–241. https://doi.org/10.14569/ijacsa.2019.0101033
10. Aini N, Wicaksono S (2019) Pembangunan sistem informasi perpustakaan berbasis web menggunakan metode Rapid Application Development (RAD) (studi pada: SMK Negeri 11 Malang). J Pengemb Teknol Inf dan Ilmu Komput 3(9):9
User Experience Analysis of Web-Based Application System OTRS (Open-Source Ticket Request System) by Using Heuristic Evaluation Method Abimanyu Yoga Prastama, Primus William Oliver, M. Irsan Saputra, and Titan
Abstract PT Datacomm Diangraha is a well-known IT service provider in Indonesia. Datacomm owns a web-based application, called the Open-Source Ticket Request System (OTRS), used to record and create tickets for monitoring application development problems. During monitoring, users feel that the menus on the dashboard are not yet effective and efficient in helping them monitor tickets. Based on this problem, the Service Desk Management Division, the user of this application, needs a deeper analysis of the effectiveness of the application’s interface design that can ultimately help improve the application. In this research, an analysis of the interface design of the Dashboard and Create Ticket menus was conducted using the heuristic evaluation methodology, based on its 10 evaluation points. The data were collected using a research instrument in the form of a Google Form questionnaire distributed to 60 respondents who are employees of the Service Help Desk and Service Management Division; 55 respondents provided feedback. The analysis found that 9 points can be improved, and for the implementation, the researchers provide recommended user interface (UI) improvement designs to the company to increase the effectiveness and efficiency of ticketing and monitoring using OTRS. Based on the evaluation conducted with the company, the recommendations given by the researchers received positive feedback, so they could bring beneficial change and innovation to OTRS at Datacomm Diangraha. Keywords Ticket request system · Web application · User experience analysis · Heuristic evaluation method
A. Y. Prastama · P. W. Oliver · M. I. Saputra · Titan (B) Information Systems Department, BINUS Online Learning, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_42
487
488
A. Y. Prastama et al.
1 Introduction During the industry 4.0 era, several new terms have developed and become trends: the Internet of Things (IoT), the Industrial Internet of Things (IIoT), Cyber Physical Systems (CPS), Artificial Intelligence (AI), and Cloud Computing, all of which are closely related to interfaces. An interface provides the interaction space between an application and its user [1, 2]. That is why the theory called Human Centered Design (HCD) was developed, in which digital product designers create an application design that is as user friendly as possible to meet certain metrics set before developing the app [3, 4]. Datacomm Diangraha is an IT service provider company that has its own internal control application called OTRS. However, there are still problems encountered by its users. During a pilot test, the researchers found that 60.5% of OTRS users often encounter problems when they operate OTRS [5], and 61.4% of users feel that the app’s interface is not informative enough to solve the problems they have during web page development. That is why, in this research, the researchers want to improve the app by analyzing the indicators related to its interface and usability that should be improved [6]. The researchers use heuristic evaluation, which measures 10 heuristic points that directly affect the usability of an application, through a survey of all OTRS users [7, 8]. The output of this research is to identify the indicators that contribute the most to the problems users encounter while using the app, for which solutions will be provided within the indicators of user control and freedom, consistency and standards, error prevention, flexibility and efficiency of use, and help and documentation.
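The survey analysis implied above reduces to averaging Likert scores per heuristic and flagging the weak ones; a minimal sketch using Nielsen’s ten heuristics (the usual basis of a 10-point heuristic evaluation) with hypothetical responses and an illustrative threshold, neither of which is taken from this study:

```python
from statistics import mean

# Nielsen's ten usability heuristics, the basis of 10-point heuristic evaluation.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def weak_points(responses: dict[str, list[int]], threshold: float = 3.5) -> list[str]:
    """Return heuristics whose mean Likert score (1-5) falls below the
    threshold, i.e. candidates for UI improvement recommendations."""
    return [h for h, scores in responses.items() if mean(scores) < threshold]

# Hypothetical answers from a few respondents for two of the heuristics:
sample = {
    "User control and freedom": [3, 2, 4, 3],
    "Help and documentation": [4, 5, 4, 4],
}
print(weak_points(sample))
```

In this sketch, a low-scoring heuristic such as “User control and freedom” would be flagged as a candidate for redesign, mirroring how the study singles out the points to improve.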
2 Literature Review User experience (UX) is the user's perception of, and response to, using an application [9, 10]. In this research, UX indicators are used to analyze the usability of Datacomm Diangraha's OTRS [11]. The user interface (UI) is the part of user experience that focuses on the design of the application to accommodate interaction between the user and the program [12]; UI is also used as an indicator to measure the usability of Datacomm Diangraha's OTRS [13]. Heuristic evaluation is a technique used by usability experts to measure an app's performance, focusing on the design and the user's experience while using the app against a number of heuristic principles [14]. In this research, app usability is measured by good UX and UI indicators [15]. A web application, or web-based application, is a program that can be accessed easily via a web browser. The program is stored on a web server and displayed in the form of a website. Web-based applications are software systems based on World Wide Web Consortium (W3C) technologies and standards and are usually deployed via a web browser. Every browser is slightly different and displays web pages in a slightly different way, so web-based application
User Experience Analysis of Web-Based Application System OTRS …
programming needs special handling to accommodate how the browsers interact with different web languages, such as HTML, XML, Flash, Perl, ASP, and PHP [16]. OTRS is a web app that can be accessed from any device using HTML. OTRS consists of several features, including a ticket requesting feature and an integration feature, so users can process app integrations and track their progress in one platform. OTRS is the web-based application from Datacomm Diangraha analyzed in this research [17].
3 Research Methodology Datacomm Diangraha's OTRS can be accessed at http://otrs.datacomm.co.id through the company VPN, to maintain user feedback and app errors. The OTRS is operated by service desk officers to record and resolve all received tickets; it is usually used to record incident tickets, service requests, and RFCs (requests for change). The research was conducted based on a 10-point Heuristic Evaluation, from which a questionnaire was built based on users' experience with the OTRS application [18, 19]. The questions/statements used in the questionnaire are:
1. Ticket status can be seen on the home page.
2. Users can sort tickets by ticket_number, ticket_status, issued_date, and ticket_type in the Summary List Ticket menu.
3. Users can find the ticket type for each Summary List Ticket in the Help menu.
4. The Dashboard interface meets the requirements of minimalist and aesthetic design.
5. Users can distinguish ticket types in the Dashboard.
6. An error alert message appears when the user makes a data-input mistake while submitting a ticket in the Create Ticket menu.
7. There is a data history that eases data filling.
8. There is a filling guideline for every field in the Create Ticket menu.
9. Manual filling of customer/user fields cannot cause errors in data input.
10. The date field format already meets the requirements of simple and elegant design.
The researchers used a questionnaire to cover the population of 60 users of Datacomm Diangraha's OTRS (the Service Management and Service Help Desk divisions), with Google Forms as the medium for collecting data [20]. Data collection was carried out over 3 months, from January 2022 to March 2022. The sample for this research is 53 users, obtained with Slovin's formula at a 5% margin of error. The researchers then analyzed the validity and reliability of the questionnaire using the statistics application SPSS: an item requires a significance < 0.05 to be labelled valid, and Cronbach's Alpha > 0.6 to be labelled reliable. After distributing the questionnaire, the researchers received 55 responses and analyzed the results using SPSS. For 55 respondents at a 5% error level, the critical r-table value is 0.266, and every question in the questionnaire achieved an r count higher
Fig. 1 Cronbach’s alpha result
Fig. 2 Questionnaire recapitulation
than 0.266, which indicates that the questionnaire is valid. The Cronbach's Alpha result for the questionnaire is shown in Fig. 1. The obtained Cronbach's Alpha is greater than 0.6, which means that all questions in the questionnaire are reliable, i.e., there is no ambiguity in the questions. Based on the results for each question point, the recapitulation of the questionnaire data is shown in Fig. 2.
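As an illustrative sketch (the paper used SPSS; the Likert answers below are hypothetical, not the actual questionnaire responses), Slovin's sample size and Cronbach's Alpha can be computed as follows:

```python
import numpy as np

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return int(np.ceil(population / (1 + population * margin_of_error ** 2)))

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's Alpha for a (respondents x items) matrix of Likert scores."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# 60 OTRS users with a 5% margin of error gives the sample of 53 used in the paper
print(slovin_sample_size(60, 0.05))  # 53

# Hypothetical 5-point Likert answers from 6 respondents to 3 items
answers = np.array([[4, 4, 5], [3, 3, 3], [5, 4, 5],
                    [2, 2, 1], [4, 5, 4], [3, 3, 2]])
print(round(cronbach_alpha(answers), 3))
```

The alpha formula matches what SPSS reports as Cronbach's Alpha for raw item scores; a value above 0.6 is then read as reliable, as in the paper.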
4 Result and Discussion Having gathered the respondents and confirmed that the questionnaire is valid and reliable, the researchers proceeded with improvement recommendations for the UI and UX of Datacomm Diangraha's OTRS.
A. Visibility of the System Status For this indicator, 49% of respondents strongly disagree that they can track their ticket status, so it would be better if Datacomm Diangraha provided real-time tracking status on the OTRS dashboard (Fig. 3). B. Flexibility and Efficiency of Use For this indicator, 45% of respondents strongly disagree that they can sort the ticket summaries they have submitted. Datacomm Diangraha should therefore create a sort feature based on ticket number, status, date created, and so on (Fig. 3). C. Help and Documentation For this indicator, 43% of respondents strongly disagree that they can easily get help from the Datacomm Diangraha team when they encounter an error while submitting a ticket. So, they should add a help
Fig. 3 Real time tracking status and ticket sort feature
Fig. 4 Help or “?” button
or "?" button so users can easily get help if they encounter a problem while submitting or tracking tickets (Fig. 4). D. Aesthetic and Minimalist Design For this indicator on the Dashboard menu, 44% of respondents disagree that they can easily identify their ticket status, so it would be better if Datacomm Diangraha differentiated ticket colors by status when users track their tickets. In addition, 56% of respondents disagree that the dashboard design is eye-catching, so the dashboard needs to be revamped with a more eye-catching and fresh design, and 51% of respondents strongly disagree that the app design is elegant and simple, so the Create Ticket menu interface needs to be redesigned with a simpler and more elegant approach. E. Error Prevention For this indicator on the Create Ticket menu, 58% disagree that they are notified when a submitted ticket encounters an error; an error alert therefore needs to be developed for when the user submits wrong information that could cause an error while registering a ticket (Fig. 5). F. Recognition Rather than Recall For this indicator on the Create Ticket menu, 53% of respondents strongly disagree that they can find the history of tickets they have submitted. A historical database therefore needs to be developed so users do not have to re-enter all data when registering a ticket; instead, the system can automatically show a pop-up recommendation when it finds similar keywords being entered (Fig. 6).
Fig. 5 Errors alert
Fig. 6 Pop-up recommendation
G. Help and Documentation For this indicator on the Create Ticket menu, 47% of respondents strongly disagree that they can find a manual when filling in a field/column. A help or "?" button, or a dedicated page or feature showing the manual for each field/column input, therefore needs to be developed so users can easily get help (Fig. 7). H. Flexibility and Efficiency of Use For this indicator, 51% disagree that the ticket submission form can be filled automatically; instead there are many manual steps when submitting a ticket. More automation therefore needs to be built into the OTRS system to make it easier for users to fill in the fields on the Create Ticket menu (Fig. 8).
Fig. 7 Help or “?” button create ticket field
Fig. 8 Automation when filling field
I. Aesthetic and Minimalist Design For this indicator, 51% disagree that the design of the date format field is simple and elegant. The date format field therefore needs to be redesigned to fulfill the aesthetic and minimalist elements, namely simple and elegant, without interfering with the user's view (Fig. 9). The internal team agreed with the needed improvements and the prototype created by the researchers, which will later be implemented periodically on Datacomm Diangraha's OTRS. The results of the evaluation were discussed with the
Fig. 9 Date format field redesign
company. The recommendations given by the researchers received positive feedback, so the changes can make the OTRS an innovation for Datacomm Diangraha.
5 Conclusion In this research on the usability of Datacomm Diangraha's OTRS, using UI and UX indicators with the heuristic method, 55 active users of the OTRS app indicated several indicators that should be improved to increase the usability of the application itself. The aspects of Datacomm Diangraha's OTRS that should be improved are: inaccessible visibility of the system status, lack of flexibility and efficiency of use, unclear help and documentation features, lack of help and documentation, aesthetic and minimalist design, error prevention, and recognition rather than recall. These will be improved after the research is done, based on the results on which factors matter most for developing good UI/UX according to professionals' opinion [21]. The Heuristic Evaluation methodology is used in this research, so the analysis is conducted using the 10 points of the heuristic evaluation method. The data were collected using a research instrument in a Google Form questionnaire distributed to 60 respondents who are employees of the Service Help Desk and Service Management divisions; 55 respondents provided feedback, and the analysis found that 9 points can be improved. The results of the evaluation were discussed with the company, and the recommendations given by the researchers received positive feedback, so the changes can make the OTRS an innovation for Datacomm Diangraha. In this research, 5 respondents did not provide feedback, due to the difficulty of coordinating with the company and the limited access to locations caused by the pandemic conditions. However, with
these conditions, the researchers were able to conduct evaluation presentations with the company anywhere and anytime using online platform applications, not limited by location and time, so that the research and writing could be carried out more effectively and efficiently. For future research, Datacomm Diangraha could build a better app based on the given designs and feedback; if needed, they could hold deeper discussions to create a new user journey [3]. Furthermore, they should also add a database for the "Recognition rather than recall" function in the Create Ticket menu, recording all tickets ever submitted by users, to trigger automated recommendations when a user creates a ticket.
References
1. Dzazuly RZA, Putra WHN, Wardani NH (2019) Evaluasi Usability dan Perbaikan Desain Antarmuka Pengguna Website Perpustakaan Kota Malang menggunakan Metode Evaluasi Heuristik. Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer
2. Krisnayani P, Arthana IKR, Darmawiguna IGM (2016) Analisa Usability Pada Website UNDIKSHA Dengan Menggunakan Metode Heuristic Evaluation. Kumpulan Artikel Mahasiswa Pendidikan Teknik Informatika (KARMAPATI) 5
3. Zaharias P, Poulymenakou A (2009) Developing a usability evaluation method for e-learning applications: beyond functional usability. Int J Hum Comput Interact 25:75–98
4. Tinar A, Wijoyo SH, Rokhmawati RI (2019) Evaluasi Usability Tampilan Antarmuka Website Perpustakaan Politeknik Kesehatan Kemenkes Kota Malang menggunakan Metode Usability Testing dan Heuristic Evaluation. Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer
5. Nurcahayati S, Dewi WC, Tanjung AW (2021) Pengaruh Konsep Diri, Kecerdasan, dan Perilaku Konsumtif. Jurnal Ilmiah Semarak 49
6. Alomari HW, Ramasamy V, Kiper JD, Potvin G (2020) A user interface (UI) and user experience (UX) evaluation framework for cyberlearning environments in computer science and software engineering education. Heliyon 6
7. Handiwidjojo W, Ernawati L (2016) Pengukuran Tingkat Ketergunaan (Usability) Sistem Informasi Keuangan Studi Kasus: Duta Wacana Internal Transaction (Duwit). Jurnal Informatika dan Sistem Informasi (JUISI)
8. NNGroup, https://www.nngroup.com/articles/usability-101-introduction-to-usability/. Accessed 3 Sept 2022
9. Pandusarani G, Brata AH, Jonemaro EMA (2018) Analisis User Experience pada Game CS:GO dengan Menggunakan Metode Cognitive Walkthrough dan Metode Heuristic Evaluation. Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer
10. Sekawan Media, https://www.sekawanmedia.co.id/pengertian-user-experience/. Accessed 8 Sept 2022
11. NNGroup, https://www.nngroup.com/articles/definition-user-experience/. Accessed 8 Sept 2022
12. Auliazmi R, Rudiyanto G, Utomo RD (2021) Kajian Estetika Visual Interface dan User Experience pada Aplikasi Ruangguru 24
13. Ahsyar TK (2019) Evaluasi Usability Website Berita Online Menggunakan Metode Heuristic Evaluation. Jurnal Ilmiah Rekayasa dan Manajemen Sistem Informasi
14. Ahsyar TK, Husna, Syaifullah (2019) Evaluasi Usability Sistem Informasi Akademik SIAM Menggunakan Metode Heuristic Evaluation. In: Seminar Nasional Teknologi Informasi, Komunikasi dan Industri (SNTIKI) 11, Fakultas Sains dan Teknologi, UIN Sultan Syarif Kasim Riau, Pekanbaru
15. Nielsen J, Molich R (1990) Heuristic evaluation of user interfaces. In: ACM CHI'90 Conference. ACM, Seattle
16. Sturm R, Pollard C, Craig J (2017) Application performance management (APM) in the digital enterprise. Elsevier Inc., Cambridge
17. 123dok, https://text-id.123dok.com/document/6qmwvknwz-otrs-vicidial-landasan-teori.html. Accessed 12 Aug 2022
18. Nielsen J (1994) Usability inspection methods in heuristic evaluation. John Wiley & Sons, New York
19. Allen J, Chudley J (2012) Smashing UX design: foundations for designing online user experiences. West Sussex
20. Sugiyono, Metode Penelitian Kuantitatif, Kualitatif, dan R&D. Bandung
21. Jeddi FR, Nabovati E, Bigham R, Farrahi R (2020) Usability evaluation of a comprehensive national health information. Inform Med Unlocked 19
Review for Person Recognition Using Siamese Neural Network Jimmy Linggarjati
Abstract Person recognition can be used for automatic attendance at a certain location, e.g., at a school, to replace a card or fingerprint sensor. In this paper, the Siamese Neural Network is reviewed and used as a person recognition system, using the reference code from https://github.com/nicknochnack/FaceRecognition. The result shows that it can detect the intended person with a training loss of around 1%. Keywords Siamese NN · Person recognition · CNN—Convolutional Neural Network
1 Introduction In this study, a Siamese Neural Network is used for person recognition in an entrance application [1]. The word Siamese originates from the Siamese twins, who were born physically attached to each other [2]. The Siamese NN is practical for a person recognition system because it does not need to be retrained when a new image is added, e.g., when a new employee is hired or when new students start their first year.
1.1 Objectives The aim of this paper is to show that a deep learning model for person recognition can be built to replace a traditional biometric sensor, such as a fingerprint sensor, a proximity sensor, or a keypad password. J. Linggarjati (B) Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_43
This system was experimented on a desktop computer equipped with a web camera. Other platforms, such as the Raspberry Pi or ESP32, can be considered as alternative targets for this software.
2 Prior Work 2.1 Face Recognition Algorithms Early face recognition algorithms used Haar detection [3] and Eigenfaces [4] for face detection, followed by PCA (Principal Component Analysis) as the face classifier. Other face recognition algorithms use HOG (Histogram of Oriented Gradients) for face detection [5] and a CNN for face recognition [6]. Zhao et al. [7] show in their paper, on page 11, which CNN method is best for the person category. For a person recognition system, a CNN classifier needs to be retrained whenever a new person is added; this is not practical, given that employee turnover can be high. Therefore, few-shot learning using a Siamese Neural Network is used. With this type of CNN, the output is not a class label but a layer that outputs a value in the range 0–1 through a sigmoid activation function [8]. One advantage of the Siamese NN is the ability to use the trained CNN twin model to detect a completely different person and/or class by simply adding that new person to the anchor dataset. There is no need to retrain the CNN twin model, because it is not a classifier but a difference detector between two images, obtained by subtracting the embeddings of the two images.
2.2 Few-Shot Learning Using Siamese NN The cosine similarity (1) measures the embedding difference between two images [9]. To create this Siamese NN, one needs two datasets: anchor-positive and anchor-negative pairs [10].

cos(θ) = xᵀw    (1)
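Equation (1) gives the cosine similarity for unit-normalized embedding vectors, where cos(θ) reduces to the dot product xᵀw; as a small sketch, the general form is:

```python
import numpy as np

def cosine_similarity(x: np.ndarray, w: np.ndarray) -> float:
    """cos(theta) = x.w / (|x||w|); equals x.w when both vectors are unit-normalized."""
    return float(np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w)))

a = np.array([1.0, 0.0, 1.0])   # embedding of image 1
b = np.array([1.0, 0.0, 1.0])   # identical embedding
c = np.array([0.0, 1.0, 0.0])   # orthogonal embedding
print(cosine_similarity(a, b))  # 1.0
print(cosine_similarity(a, c))  # 0.0
```

A similarity close to 1 indicates the two embeddings come from the same person; a value near 0 indicates different people.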
2.3 Loss Function There are three types of loss functions used in Siamese NNs: contrastive loss, triplet loss, and quadruplet loss [6, 11–13]. The reference code from [14] used the contrastive loss.
Fig. 1 Training loss for 50 epochs
The triplet loss function [15] is defined as follows:

L = max(0, d⁺ + α − d⁻)    (2)
While the quadruplet loss function [15] is defined as follows:

L = max(0, d⁺ + α₁ − d₁⁻) + max(0, d⁺ + α₂ − d₂⁻)    (3)
Here d⁺ denotes the distance between the anchor and the positive image and d⁻ (d₁⁻, d₂⁻) the distance between the anchor and a negative image, while α (α₁, α₂) is a margin hyperparameter in the loss function that enforces a gap between the anchor-positive and anchor-negative distances.
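The two loss functions can be sketched directly from Eqs. (2) and (3); the margin values below are illustrative hyperparameters, not the ones used in the referenced code:

```python
def triplet_loss(d_pos: float, d_neg: float, alpha: float = 0.2) -> float:
    """Eq. (2): penalize when d+ plus the margin exceeds d-."""
    return max(0.0, d_pos + alpha - d_neg)

def quadruplet_loss(d_pos: float, d_neg1: float, d_neg2: float,
                    alpha1: float = 0.2, alpha2: float = 0.1) -> float:
    """Eq. (3): the triplet term plus a second negative pair with its own margin."""
    return (max(0.0, d_pos + alpha1 - d_neg1)
            + max(0.0, d_pos + alpha2 - d_neg2))

# A well-separated triplet incurs zero loss; a violating one is penalized
print(triplet_loss(d_pos=0.1, d_neg=0.9))  # 0.0
print(triplet_loss(d_pos=0.6, d_neg=0.5))  # about 0.3
```

In training, the distances would come from the embedding network applied to anchor, positive, and negative images, and the loss would be averaged over a batch.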
3 Experiments Following the work of https://github.com/nicknochnack/FaceRecognition, the code is replicated with the author's pictures to reproduce the output. The training loss is shown in Fig. 1; after 50 epochs the loss is below 1%. Figures 2 and 3 show the recall and precision metrics for this Siamese model during training. These results show that the model's predictions can still produce false negative and false positive classifications [16].
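Recall and precision follow their standard definitions; as a minimal sketch (the counts below are made up for illustration, not taken from the training run):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# e.g. 95 correct matches, 1 stranger wrongly accepted, 5 true matches missed
p, r = precision_recall(tp=95, fp=1, fn=5)
print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.990 recall=0.950
```

False positives lower precision (strangers accepted), while false negatives lower recall (authorized users rejected), which matches the ranges seen in Figs. 2 and 3.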
4 Conclusions The Siamese architecture has been tested for person detection, achieving a recall and a precision of 100% with test negative images taken from the LFW (Labeled Faces
Fig. 2 Recall Metric in the range of 92–100%
Fig. 3 Precision Metric in the range of 99–100%
in the Wild) dataset [17] and with test positive images taken from the webcam and then augmented. However, this model has not yet been tested in real-time person recognition using a webcam, and it has no liveness detection algorithm, so it can be cheated simply by showing a picture of someone in the anchor-positive set to get detected as an authorized user. Furthermore, a simple experiment using a photo of someone taken from the internet has shown that the system can wrongly accept it, meaning that the stranger's photo is treated as an authorized user. Therefore, further pre-processing of pictures taken from a phone screen must be studied to identify the problem.
References
1. Face Recognition demo—Baidu's face-enabled entrance, https://www.youtube.com/watch?v=wr4rx0Spihs. Accessed 9 Dec 2022
2. Chang and Eng Bunker, https://en.wikipedia.org/wiki/Chang_and_Eng_Bunker. Accessed 9 Dec 2022
3. Gupta I, Patil V, Kadam C, Dumbre S (2016) Face detection and recognition using Raspberry Pi. In: IEEE international WIE conference on electrical and computer engineering (WIECON-ECE). IEEE, pp 83–86
4. Gunawan TS, Gani MHH, Rahman FDA, Kartiwi M (2017) Development of face recognition on Raspberry Pi for security enhancement of smart home system. Indonesian J Electr Eng Inform (IJEEI) 5(4):317–325
5. Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: IEEE computer society conference on computer vision and pattern recognition, vol 1. IEEE, pp 886–893
6. Schroff F, Kalenichenko D, Philbin J (2015) FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 815–823
7. Zhao ZQ, Zheng P, Xu ST, Wu X (2019) Object detection with deep learning: a review. IEEE Trans Neural Netw Learn Syst 30(11):3212–3232
8. Dubey SR, Singh SK, Chaudhuri BB (2021) A comprehensive survey and performance analysis of activation functions in deep learning. arXiv preprint arXiv:2109.14545
9. Chicco D (2021) Siamese neural networks: an overview. In: Cartwright H (ed) Artificial neural networks. Methods in molecular biology, vol 2190. Humana, New York, NY, pp 73–94. https://doi.org/10.1007/978-1-0716-0826-5_3
10. Hoffer E, Ailon N (2015) Deep metric learning using triplet network. In: Feragen A, Pelillo M, Loog M (eds) Similarity-based pattern recognition. SIMBAD 2015. Lecture Notes in Computer Science, vol 9370. Springer, Cham, pp 84–92. https://doi.org/10.1007/978-3-319-24261-3_7
11. Melekhov I, Kannala J, Rahtu E (2016) Siamese network features for image matching. In: 2016 23rd international conference on pattern recognition (ICPR). IEEE, pp 378–383
12. Ghojogh B, Sikaroudi M, Shafiei S, Tizhoosh HR, Karray F, Crowley M (2020) Fisher discriminant triplet and contrastive losses for training Siamese networks. In: 2020 international joint conference on neural networks (IJCNN). IEEE, pp 1–7
13. Chen W, Chen X, Zhang J, Huang K (2017) Beyond triplet loss: a deep quadruplet network for person re-identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE, pp 403–412
14. FaceRecognition, https://github.com/nicknochnack/FaceRecognition. Accessed 9 Dec 2022
15. How to choose your loss when designing a Siamese Neural Network? Contrastive, Triplet or Quadruplet? https://towardsdatascience.com/how-to-choose-your-loss-when-designing-a-siamese-neural-net-contrastive-triplet-or-quadruplet-ecba11944ec. Accessed 9 Dec 2022
16. Precision and recall in machine learning, https://www.javatpoint.com/precision-and-recall-in-machine-learning. Accessed 9 Dec 2022
17. Labeled Faces in the Wild, http://vis-www.cs.umass.edu/lfw/. Accessed 9 Dec 2022
Thermal Condition Evaluation of Farmhouse Using Ecotect Analysis as an Effort to Optimize Cultural Activity of Enclave Villagers (Case Study: Ngadas, Bromo Tengger Semeru National Park) Ida Bagus Ananta Wijaya, Dian Kartika Santoso, and Irawan Setyabudi Abstract One characteristic of Indonesian agrarian society can be seen in the Ngadas community: tough farmers who spend most of their time in the fields and therefore build a simple construction called a farmhouse, used as a potato nursery, a shed/stable, and a resting room. In Ngadas Village there are five types of farmhouses that need to be studied through thermal analysis, which aims to simulate the thermal conditions in each farmhouse typology and to find the ideal typology for optimal potato (Solanum) growth. This study uses a qualitative comparative descriptive method and simulation with the Ecotect software. The most optimal type of farmhouse for potato nurseries is one with a rectangular pattern in which every unit is separate: the main room (storage, potato nursery, and resting room) on the front lane and the shed on the side. With this type, animal husbandry, potato farming, and gegenen cultural activities become more optimal and do not interfere with each other. This building can also be categorized as a smart building because of its energy efficiency and use of space. Keywords Thermal analysis · Farmhouse · Smart building
I. B. A. Wijaya (B) Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] D. K. Santoso · I. Setyabudi Universitas Tribhuwana Tunggadewi, Malang 65144, Indonesia e-mail: [email protected] I. Setyabudi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_44
1 Introduction Indonesia is known as a country where the majority of the people are an agrarian society. An agrarian society is a society whose livelihood is agriculture, usually living
in fertile areas. One of the agricultural villages with unique landscape conditions in Indonesia is Ngadas Village, located in the enclave of Bromo Tengger Semeru National Park (TNBTS). Administratively it lies in Poncokusumo District, Malang Regency, East Java. Uniquely, Ngadas Village is inhabited by an indigenous tribe, the Tenggerese [1–5]. The Tengger people live in groups among their fellow tribespeople, and their number is only about 1/1000 of the population of the island of Java [6–8]. Ngadas Village sits in the highlands, at an altitude between 1200 and 2500 m above sea level (masl) on the slopes of Mount Semeru [1, 3, 4, 9–12]. Thanks to these natural conditions, Ngadas is known as a vegetable-producing village [13]; its inhabitants are hardy field farmers who live in groups in the hills [8, 14]. As a cultural entity of an agrarian society, Ngadas farmers are ordinary people: they live and thrive amid the life of the general public, work to earn a living for their families, socialize as usual, and are religious. As humans with potential, reason, and feelings, they create, and are shaped by, their culture [15–17]. This ultimately creates a dynamic in sociocultural life that has an impact on the physical changes of an area. Human society and culture everywhere, and at any time, are always changing. The changes can be slow or fast, and can be caused by the environment in which people live or by contacts with outside cultures [18]. Contacts with outside cultures that change the life of a society usually occur because of new experiences or because the people concerned believe that certain elements of the outside culture benefit them. These advantages are mainly seen in relation to the welfare of the local community.
That is, if something is felt to bring benefits to their lives, especially economic, social, and even political benefits, the community will respond quickly to anything that comes from outside; people who feel disadvantaged usually oppose it. These dynamics were born out of the process of adaptation to natural conditions. The most obvious example of this adaptation is the construction of farmhouses in each of the residents' fields. Because community activities take place mostly in the fields rather than in their dwellings [7, 8], residents build farmhouses, called huts, around the fields where they grow crops [6]. Farmhouses are a typical building type for agricultural activities in rural areas, deliberately built around fields and close to water sources [19–21]. Farmhouses are also designed to hold stables and sheds, and are classified as vernacular architecture with both exterior and interior [21]. The farmhouses built consist on average of three units of space, which serve as shelter for people, shelter for animals, and storage for agricultural equipment [20]. Materials, construction, facing direction, and placement on slopes often adapt to the existing contours and natural resources [20, 22–25]. Therefore, a building-science study of the farmhouse typologies found in Ngadas Village through thermal analysis is needed. It aims to simulate the thermal conditions in each farmhouse typology and to find the ideal typology for optimal potato growth.
Based on the above background, this research concentrates on thermal simulation of farmhouses in Ngadas Village in order to find the optimal farmhouse layout for the nursery and storage of the Ngadas people's main agricultural commodity, potatoes. It is hoped that the results will serve as a reference for the community in building farmhouses, so that their livelihoods and cultural activities can still be accommodated properly. In addition, this study seeks to prove whether the enclave community is naturally able to build a simple construction that can be categorized as a smart building; the criteria refer to resource efficiency as well as building performance, which is assessed through thermal analysis [26].
2 Method The method used in this study is a qualitative comparative descriptive method with simulation. Five farmhouses were selected as case studies, one representing each typology in Ngadas Village. Simulations are carried out in two dimensions on floor plans and on simple models of the building facades. The research focuses only on the farmhouse as a building, with external wind conditions assumed constant. The physical context of the environment around the building, such as neighboring buildings and vegetation, is also excluded. This is done so that the room layout and its influence on the thermal conditions in the building can be analyzed without interference from other variables. Likewise, the selected building material is the same for all cases, namely concrete walls with tiled roofs, and the farmhouses chosen as samples all face north–south. The research method begins by identifying the typology of farmhouses in Ngadas Village through a literature study. After that, simulations of the incoming sunlight on the facades were carried out using Autodesk Ecotect Analysis 2011 and SketchUp Pro 2018. The Ecotect energy simulation software allows geometric modeling, thermal analysis, and lighting analysis on a building model. Ecotect was chosen as the simulation tool because it is comprehensive in conducting the environmental simulations used to assess energy efficiency, such as solar radiation, natural lighting, and thermal comfort; it helps architects create eco-friendly designs through simple, accurate, and visually responsive efficiency analysis [27–29]. Furthermore, several alternative farmhouse typologies are simulated for the solar radiation that can be captured by the facade. The building performance indicator examined is thermal comfort.
Simulation using Ecotect requires input data in the form of climate data from the nearest weather station (BMKG Karangploso) and building data including the dimensions of each space, the building materials, and the building orientation. Thermal comfort results are displayed for each evaluated space. The temperature profile in the simulation is shown over a span of one day (24 h) for one year.
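The per-space temperature profile that Ecotect reports can be summarized with a toy comfort check: given 24 hourly temperatures for a room, count the hours that fall inside a comfort band. This is only an illustrative sketch, not Ecotect's actual calculation; the 22–28 °C band is an assumed comfort range, not a value from the paper.

```python
def comfort_hours(hourly_temps, lo=22.0, hi=28.0):
    """Count how many hourly temperatures fall inside the assumed
    comfort band [lo, hi] in degrees Celsius."""
    return sum(1 for t in hourly_temps if lo <= t <= hi)

# Example: a day that warms toward midday and cools at night
day = [18, 17, 17, 16, 16, 17, 19, 21, 23, 25, 26, 27,
       28, 28, 27, 26, 25, 23, 22, 21, 20, 19, 19, 18]
print(comfort_hours(day))  # 11 of 24 hours inside the band
```

Comparing this count across the five typologies would mirror, in a very crude way, how the thermal comfort indicator is used to rank them.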
I. B. A. Wijaya et al.
3 Result and Discussion 3.1 Typology of Farmhouses in Ngadas Village There are five types of farmhouses in Ngadas Village based on spatial aspects [1], as can be seen in Fig. 1. The first type has an L-shaped floor plan containing all space units (shed, potato nursery and seeding room, and farmer's rest area). The layout of this farmhouse has the character
Fig. 1 Ngadas farmhouses (a–e: Types 1–5)
Thermal Condition Evaluation of Farmhouse Using Ecotect Analysis …
of the entire space uniting under one roof and oriented toward the fields belonging to the residents of the house. The second type has a rectangular floor plan with three units of space (shed, farm-tool storage and potato seeding room, and farmer's rest area). The shed is located separately behind the main room, which holds the storage room and the rest room; the farmhouse is oriented toward the residents' fields. The third type also has a rectangular floor plan; its shed is a separate unit located far from the main room (a storage area and a resting place) but can still be observed from the farmhouse, which is again oriented toward the residents' fields. The fourth type has a rectangular floor plan with a separate shed at the side of the main room, which contains a storage and rest room; it too is oriented toward the residents' fields. The fifth type has a rectangular floor plan with a separate shed in front of the main room, which contains a storage area and rest room. Among the five typologies there are at least some real differences that act as differentiating factors when processed in the Ecotect software, including whether or not the livestock shed is a separate room unit. The number of doors and the separation of two space units are also differentiators that can affect the intensity of sunlight entering the building envelope. For this reason, the next subsection presents simulation results for the five farmhouse types found in Ngadas Village.
3.2 Thermal Condition Analysis Results The results of the thermal simulation using the Ecotect software can be seen in Fig. 2. Different colors on the building envelope indicate differences in the solar radiation received by the building. The intensity of solar radiation is expressed in watts per square meter (W/m²). The more sunlight a surface receives, the warmer the ambient air temperature becomes [30, 31]. In the Ecotect results, blue indicates low received sunlight, grading upward until yellow indicates the highest solar intensity and thus the highest temperature. From Fig. 2 it can be seen that the farmhouse type receiving the lowest solar intensity on almost every facade surface is the fourth type. Sufficient openings, with two doors in each room unit, are one of the factors that give the fourth type better thermal conditions [32]. This is beneficial for the growth of potato seedlings, which require good air circulation and a fairly low temperature [25]. The shed and the main room unit are also separated, so the sanitation of the rooms is better.
Fig. 2 Ecotect simulation results for 5 typologies of farmhouses (a–e: Types 1–5)
3.3 Recommended Farmhouse Layout and Interior Taking into account the results of the Ecotect simulation, recommendations are needed that the Ngadas people can apply in their daily lives. Farming and the fields are their life and culture, so there needs to be adequate space to accommodate every cultural activity of the community. At least three spaces are needed in a farmhouse: a farm shed; a resting place that can be used for gegenen, the activity of starting a fire to warm the body in the cold Tengger weather (fire is also interpreted as a source of life by the Ngadas people [1, 8]); and, as one of the most important space units in the farming culture, the potato storage room holding the seed potatoes for the next planting season. The recommended layout and interior of the Ngadas community's farmhouse can be seen in Fig. 3.
Fig. 3 Recommended farmhouse layouts and interiors (a: layout, b: perspective, c: farmhouse interior, d: shed/stable)
Considerations in the recommended farmhouse layout include keeping the horse stable and the nursery separated for health reasons; horse stables in particular require intensive maintenance, so it is not good to combine them with a nursery room. With this recommendation, it is hoped that the cultural activities of farming and animal husbandry and community habits such as gegenen can be carried out optimally in the farmhouse. The recommendations also show that several smart building criteria have been, and can continue to be, maintained by the Tenggerese community. The smart building criteria of energy and space efficiency [33–35] are already applied in this farmhouse. Efficiency can be seen in a room made without partitions that can accommodate more than one activity. The materials are also selected from cheaper local sources, making material distribution efficient. Thermal conditions appropriate to each room's use are another key to the smart building character of this farmhouse, since little intervention from fossil fuel or electricity sources is required.
4 Conclusion The farmhouse type that receives the most suitable solar intensity for potato nurseries in Ngadas Village is the farmhouse with a rectangular floor plan whose separate stable unit sits at the side of the main room, which serves as a storage and resting place. With this type, animal husbandry, potato farming, and the gegenen cultural activity become more optimal and do not interfere with each other. This building can also be categorized as a smart building because of its energy efficiency and use of space.
References 1. Santoso DK, Antariksa A, Utami S (2019) Tipologi Rumah-Ladang di Desa Enclave Taman Nasional Bromo Tengger Semeru, Ngadas, Kabupaten Malang. Arsitektura 17:271 2. Santoso DK, Wikantyoso R (2018) Faktor Penyebab Perubahan Morfologi Desa Ngadas, Poncokusumo, Kabupaten Malang. Local Wisdom Sci Online J 10:2 3. Agustapraja HR (2017) Penerapan Genius Loci Pada Pemukiman Masyarakat Ngadas Tengger Malang. Jurnal CIVILL 2:1 4. Batoro J, Setiadi D, Chikmawati T, Purwanto Y (2011) Pengetahuan Tentang Tumbuhan Masyarakat Tenggerdi Bromo Tengger Semeru Jawa Timur. Jurnal WACANA 14:1 5. BBTNBTS, Hikayat Wong Tengger (2013) Kisah Peminggiran dan Dominasi. Balai Besar Taman Nasional Tengger Semeru 6. Batoro J (2017) Keajaiban Bromo Tengger Semeru. UB Press, Malang 7. Batoro J, Setiadi D, Chikmawati T, Purwanto Y (2011) Pemanfaatan Tumbuhan Dan Hewan Dalam Ritual Adat Masyarakat Tengger di Bromo Tengger Semeru Jawa Timur. Jurnal Ilmu Ilmu Sosial 03:01 8. Sutarto A (2006) Sekilas Tentang Masyarakat Tengger. Presented at the Jelajah Budaya, Yogyakarta 9. Anggiana, Versa, Bergas (2014) Pembangunan Pariwisata dan Perampasan Ruang Hidup Rakyat: KSPN Menjawab Masalahnya Siapa? Laporan Penelitian Tim Bromo Tengger Semeru 10. Rahayu S, Ludigdo U, Irianto G, Nurkholis (2015) Budgeting of school operational assistance fund based on the value of Gotong Royong. In: Procedia—Social and Behavioral Sciences 211:364–369 11. Santoso DK, Wikantyoso R (2018) Faktor Penyebab Perubahan Morfologi Desa Ngadas, Poncokusumo, Kabupaten Malang. LOCAL WISDOM 10:53–62 12. Endarwati MC (2013) Pengaruh Mitos Pada Bentukan Ruang Bermukim Di Desa Ngadas Kecamatan Poncokusumo Kabupaten Malang. Jurnal Tesa Arsitektur 11:1 13. Listiyana A, Mutiah R (2017) Pemberdayaan Masyarakat Suku Tengger Ngadas Poncokusumo Kabupaten Malang Dalam Mengembangkan Potensi Tumbuhan Obat Dan Hasil Pertanian Berbasis “Etnofarmasi” Menuju Terciptanya Desa Mandiri. J Islamic Med 1:1 14. 
Supanto F (2016) Model Pembangunan Ekonomi Desa Berbasis Agro Ekowisata Sebagai Penyangga Ekonomi Kawasan Taman Nasional Bromo Tengger Semeru: Studi Pada Desa Ngadas Kecamatan Poncokusumo Kabupaten Malang. Presented at the Dinamika Global: Rebranding Keunggulan Kompetitif Berbasis Kearifan Lokal, Gedung Pascasarjana FEB UNEJ 15. Euriga E, Boehme MH, Aminah S (2021) Changing farmers’ perception towards sustainable horticulture: a case study of extension education in farming community in Yogyakarta, Indonesia. AGRARIS: J Agribus Rural Dev Res 7:225–240 16. Maswadi M, Oktoriana S, Suharyani A (2018) The effect of farmer characteristics on perceptions of the fermented cocoa beans technology in Bengkayang Regency, West Kalimantan. AGRITROPICA J Agric Sci 1:85–92
17. Rohani ST, Siregar AR, Rasyid TG, Aminawar M, Darwis M (2020) Differences in characteristics of farmers who do and do not conduct a beef cattle business partnership system (teseng). IOP Conf Ser Earth Environ Sci 486:12047 18. Pudianti A, Syahbana JA, Suprapti A (2016) Role of culture in rural transformation in Manding Village, Bantul Yogyakarta, Indonesia. In: Procedia—social and behavioral sciences 227:458– 464 19. Bocz GÄ (2012) Reutilisation of agricultural buildings: tourism and sustainability in the Swedish Periurban context. Doctoral, Faculty of Landscape Planning, Horticulture and Agricultural Sciences Department of Rural Buildings and Animal Husbandry, Swedish University of Agricultural Sciences, Alnarp 20. Picuno CA, Kapetanovi A, Lakovi I, Roubis D, Picuno P (2017) Analysis of the characteristics of traditional rural constructions for animal corrals in the Adriatic-Ionian area. Sustainability 9 21. Susperregi J, Telleria I, Urteaga M, Jansma E (2017) The Basque farmhouses of Zelaa and Maiz Goena: new dendrochronology-based findings about the evolution of the built heritage in the northern Iberian Peninsula. J Archaeol Sci Reports 11:695–708 22. Mace J (2013) Beautifying the countryside: rural and vernacular gothic in late nineteenthcentury Ontario. JSSAC 38:29–36 23. Sindhupalchok, Chautara (2016) Potato seed tuber production techniques manual. Government of Nepal Ministry of Agriculture Development Regional Agriculture Directorate, central development Region 24. Muthoni J, Kabira J, Shimelis H, Melis R (2014) Producing potato crop from true potato seed (TPS): a comparative study. AJCS 8:8 25. Cutti L, Kulckzynski SM (2016) Treatment of Solanum torvum seeds improves germination in a batch-dependent manner1. Pesquisa Agropecuária Tropical 46:464–469 26. Sinopoli J (2006) Smart building systems for architects, owners, and builders. Elsevier, USA 27. 
Zulfiana IS, Sampe IS, Bahagia C (2020) Analisis Kenyamanan Termal Ruang Kelas Di Universitas Sains Dan Teknologi Jayapura Dengan Menggunakan Ecotect. Jurnal Teknologi Terpadu 8:2 28. Kashira FM, Sudarmo BS, Santosa H (2016) Analisa ecotect analysis dan workbench ansys pada desain double skin facade sport hall. Student J Arsitektur Brawijaya 29. Trisnawan D (2018) Ecotect design simulation on existing building to enhance its energy efficiency. IOP Conf Ser Earth Environ Sci 105:12117 30. Shrestha AK, Thapa A, Gautam H (2019) Solar radiation, air temperature, relative humidity, and dew point study: Damak, Jhapa, Nepal. Int J Photoenergy 2019:1–7 31. Daut I, Yusoff MI, Ibrahim S, Irwanto M, Nsurface G (2012) Relationship between the solar radiation and surface temperature in Perlis. Adv Mater Res 512–515:143–147 32. Ma L, Shao N, Zhang J, Zhao T (2015) The influence of doors and windows on the indoor temperature in rural house. Procedia Eng 121:621–627 33. Kim D, Yoon Y, Lee J, Mago PJ, Lee K, Cho H (2022) Design and implementation of smart buildings: a review of current research trend. Energies 15 34. Nurrahman H, Permana AY, Susanti I (2021) Implementation of the smart building concept in Parahyangan office rental space and apartment design. J Architectural Res Educ 3:1 35. Froufe MM, Chinelli CK, Haddad AN, Guedes ALA, Hammad AA, Soares CAP (2020) Smart buildings: systems and drivers. Buildings 10:153
Intelligent Home Monitoring System Using IoT Device
Santoso Budijono and Daniel Patricko Hutabarat
Abstract During this pandemic, many industries were badly affected and many employees were laid off, causing the crime rate to increase, including in residential areas. Hence, there needs to be an effort to monitor and protect our homes from criminals. The system has tools to monitor and warn us around the clock, even during an electricity outage. The purpose of this study is to design a microcontroller-based smart home system capable of IoT-based monitoring. The system is equipped with an ESP32cam camera and a proximity sensor to detect movement in the monitored area. It sends an emergency notification to the smartphone if the sensor detects humans or objects moving around the home yard covered by the sensor. The camera is mounted on a servo so it can cover a wider view, and the system sends an email if the proximity sensor detects humans or objects moving. As a result, users can monitor their homes using their Android smartphones anytime, anywhere. Furthermore, the system has two redundant power sources that allow it to keep watching the house. If an object appears in the monitored area, the system sends an email notification to the user. Keywords Intelligent home · ESP32cam · Android smartphone · IoT · Redundancy power sources
1 Introduction 1.1 Population Growth and Suburbanization Increasing population growth and suburbanization affect the number of houses in big cities [1]. Currently, many homes are built with no security system. Lower-class and
S. Budijono (B) · D. P. Hutabarat, Computer Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023, S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_45
Table 1 Crime index in several countries from 2018 until mid-2022

Year   Indonesia   Singapore   Malaysia
2018   44.72       16.23       63.05
2019   46.01       21.47       60.79
2020   45.84       30.57       58.84
2021   46.23       32.98       57.89
2022   46.06       27.22       54.37
middle-class housing does not have a security system because there is little public awareness regarding the safety of the occupants [2]. In addition, the lack of supervision and protection of the house makes it very easy for criminals to commit crimes. Currently, society has a high level of individuality, resulting in a lack of concern in social life [3]. Many crimes occur in housing complexes, and criminals do not hesitate to break in, rob all valuable items, and even kill the occupants to eliminate traces [4].
1.2 Crimes Increase in the Covid Pandemic Era During the Covid pandemic, the crime rate increased significantly. Crime data from https://www.numbeo.com show an increase in crime in several countries during 2020 and 2021 [5], while the crime rate decreased in 2022 as economic growth resumed and the spread of Covid-19 subsided. The crime index of several countries taken from www.numbeo.com is shown in Table 1.
1.3 Secure Home with Smart Devices Research on intelligent home systems goes back more than two decades, to 2000, when Sixsmith AJ wrote a paper titled “An evaluation of an intelligent home monitoring system”, describing a system that monitors the house in order to protect the elderly when they are home alone [6]. There has been a lot of research on smart home security, some using solar panels [7] and others using grid electricity [8]. A previous system used solar panels, so when there was no sunlight the system received no energy; it also lacked a monitoring function, so when movement was reported in the home environment the user had to leave the house to see whether the movement came from a suspicious person, which could be perilous. The present work instead uses grid electricity as the energy source, with a rechargeable battery as backup: if there is a power failure, the power source switches from the grid to the battery.
Intelligent Home Monitoring System Using IoT Device
517
The system is also equipped with a camera to observe any movement in the home environment. Monitoring can be done through the application, so the user only needs to open the application to check the home environment. The camera can be panned to cover a wide range and has features to record and save. Coupled with the notification feature via email, if the system detects suspicious human movement in the monitored area, each user receives a notification. This system can monitor the house from a distance, show images in real time, turn the alarm on and off even when the user is far from the system, and pan the camera from 0 to 180°.
2 Methodology The intelligent home monitoring system lets the user observe the house's condition on a smartphone through the camera and receive information when there is movement in the proximity sensor's area.
2.1 Block Diagram Whole Systems The systems diagram shows in block diagram Fig. 1 Block Diagram Whole Systems. PHP dan MYSQL
Server Kamera OV2640
ESP32
Wifi
Buck Coverter
Android
Power Auto Backup Battery
Adaptor
Aki
Sensor Proximity
Wemos D1
Servo Fig. 1 Block diagram whole system
Buzzer
518
S. Budijono and D. P. Hutabarat
Redundancy Power Supply. As shown in the block diagram in Fig. 1, the system has redundancy with two power sources, namely grid electricity and a rechargeable battery that serves as backup when the power goes out. A buck converter is needed to step the supply voltage down to the 5 V required by the ESP32cam and the Wemos D1 Mini. Internet Connection. An internet connection must be active before the system can be controlled remotely from a smartphone; the ESP32cam and Wemos D1 Mini require it to send and receive control data smoothly. ESP32cam and Wemos D1 Mini. Using the ESP32cam without the Wemos D1 Mini would slow down both the real-time image streaming and the sensor movement. Therefore, the system is divided between two controllers with different tasks: the ESP32cam activates the camera and sends images to the server, while the Wemos D1 Mini controls the servo motor that moves the camera left and right. The ESP32cam sends the data received from the OV2640 camera [9] to a hosting server with a pre-prepared domain name, while the Wemos D1 Mini reads the proximity sensor, drives the servo, and activates the buzzer. The image data received from the OV2640 camera is converted to base64 format before being sent to the hosting server; the base64 text is converted back into an image when displayed on the smartphone.
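The base64 round trip described above can be sketched in Python (the actual firmware is C++ on the ESP32 and the decoder runs on Android; the function names here are illustrative):

```python
import base64

def frame_to_base64(jpeg_bytes: bytes) -> str:
    """Encode a raw JPEG frame as base64 text, the form in which
    the camera side uploads it to the hosting server."""
    return base64.b64encode(jpeg_bytes).decode("ascii")

def base64_to_frame(b64_text: str) -> bytes:
    """Reverse step performed on the viewing side before display."""
    return base64.b64decode(b64_text)

# Stand-in for JPEG data: a JPEG/JFIF header plus padding bytes
frame = b"\xff\xd8\xff\xe0" + b"\x00" * 16
encoded = frame_to_base64(frame)
assert base64_to_frame(encoded) == frame  # lossless round trip
```

Base64 inflates the payload by about one third, which is the price paid for sending binary image data as plain text through the PHP endpoint.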
Second Controller. The second controller is the Wemos D1 Mini [10], which is connected to the internet. This controller drives the servo, turns on the buzzer from the Android application on user request, sends data to the server, and reads the proximity sensor, which can detect movement whenever the Wemos D1 Mini is active. A proximity sensor is used instead of a PIR sensor because its sensitivity range is easier to set [11]. When used for outdoor object detection, a PIR sensor will also trigger when an animal enters the sensor area; in contrast, the proximity sensor can be positioned to match the height of the intended object, since it generally has a maximum detection distance of about 1 m. When the proximity sensor detects an object, the system sends an email alert to the registered user. After the notification, the user can monitor the situation by opening the application, which displays the live camera stream. The Wemos D1 Mini is also equipped with a servo to move the camera from 0° to 180°, widening the monitored coverage area, and it triggers the buzzer alarm when the user activates the alarm function through the smartphone application.
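One iteration of the Wemos D1 Mini's duties can be sketched as follows; this is pseudocode-style Python (the real firmware is Arduino C++), and the callback names are placeholders for the actual servo, email, and buzzer routines:

```python
def control_loop_step(servo_target, motion_detected, panic_button,
                      move_servo, send_email, set_buzzer):
    """One loop iteration: position the servo from server data,
    report motion by email, and mirror the panic-button state on
    the buzzer."""
    move_servo(servo_target)  # angle fetched from the server (0-180)
    if motion_detected:
        send_email("Movement detected in the monitored area")
    set_buzzer(panic_button)  # buzzer follows the panic button

# Exercise one iteration with recording stand-ins for the hardware
events = []
control_loop_step(
    90, True, False,
    move_servo=lambda a: events.append(("servo", a)),
    send_email=lambda m: events.append(("email", m)),
    set_buzzer=lambda on: events.append(("buzzer", on)),
)
print(events)
```

Splitting the hardware actions out as callbacks mirrors the paper's division of labor: the loop logic stays the same whichever controller or peripheral actually executes each step.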
2.2 Flow Diagrams Whole Systems Figures 2 and 3 describe the processes that occur in the system. Figure 2 is a flow diagram for the process in the Wemos D1 Mini controller: the system is initialized, the Wi-Fi connection is checked, the servo position data is fetched from the server, and the servo is moved to a position according
to the data taken from the server. Then motion detection is carried out by the proximity sensor. If the sensor detects motion, it triggers the system to send a notification via email; if no movement is detected, the system loops back to fetching data from the server. When the user activates the panic button, the system turns on the buzzer and sends data to the server; if the panic button is not active, the system turns the buzzer off. After that, the loop returns to the data retrieval step. Fig. 2 Flow diagram of the Wemos D1 Mini
Fig. 3 Flow diagram ESP32Cam
2.3 Flow Diagram ESP32Cam Figure 3 is a flow diagram for the process on the ESP32cam. When the system is on, the camera automatically connects to the configured internet network and sets the camera properties to the best resolution. The ESP32cam then takes pictures from the camera; each image is converted to text using a base64 encoder and uploaded to the server, which accepts it via an HTTP request. Then the Android
program requests the PHP script to get the base64 data; from the base64 data obtained, the Android program converts the text back into an image so that the camera pictures can be viewed on Android devices in real time.
2.4 Application on Android Smartphone The application is created using Android Studio. The application design consists of a back end and a front end: the back end, written in Java, handles data transmission and application logic, while the front end handles the appearance of the application using XML. There are three layouts: the splash screen, the login screen, and the main activity screen. The splash screen (Fig. 4) is the first screen the program shows before entering the menu; the login layout (Fig. 5) is where the username and password are entered to access the server; and the main activity layout (Fig. 6) contains the live camera stream for real-time monitoring, a servo position control to adjust the servo angle and move the camera over a wider range, and a panic button to turn the alarm on and off manually.
3 Prototype The prototype device shown in Fig. 7 is designed to be as small as possible so that it can be placed in small spaces. The device is installed at a height of 200 cm from the ground. Figure 8 shows the prototype Android application displaying the camera stream in real time.
4 Result Table 2 lists the measured time differences from movement trigger to movement report, as well as the lag from pressing the panic button to the alarm buzzing. Time was measured with a stopwatch. In the first experiment, movement at time 00:01:10 produced a movement notification at 00:01:13, a difference of 3 s between the movement and the system warning the user. The panic button was then activated at 00:01:14, and the alarm sounded at 00:01:16, a difference of 2 s. The whole process from movement until the alarm sounds takes about 5 s on average. From the averaged data, it takes 3 s for the system to send a notification after the sensor detects motion, and 2 s for the alarm to buzz after the user presses the panic button. Internet quality affects the speed of sending notifications to the
Fig. 4 Splash screen
user when there is movement: if the internet is slow, the warning from the camera to email will be received slowly. The second experiment measures the maximum distance at which the proximity sensor can detect movement, using a tape measure. The instrument (the camera unit) is placed in the corner of the room, and movement is made at the distance to be tested. The results confirm that beyond a distance of 110 cm the proximity sensor can no longer detect movement; the sensor only detects objects perpendicular to it. The third experiment verifies the coverage area the camera can record by measuring the movable servo angle. The servo position is measured directly in the application by moving the servo in 20° steps; the result is that the servo can be set from 0 to 180°. The fourth experiment measures power redundancy with a 3000 mAh battery: in the first measurement the battery powers the system in standby only, and in the second the system moves the camera servo and activates the alarm every half hour. The resulting system life on the 3000 mAh battery is shown in Table 3.
Fig. 5 Login screen
5 Conclusions Using the intelligent home monitoring system to monitor the home surroundings and alert the user's smartphone when any movement occurs in the monitored area makes people safer in this pandemic era. With this system, users can monitor their homes using their smartphones anytime, anywhere. Furthermore, the system has two redundant power sources. If an object appears in the monitored area, the system sends an email notification to the user. Users can view the surroundings of the home with the camera integrated in the system and turn the buzzer on or off. Also, with a sensor range of only 110 cm and placement at a height of 200 cm, the system will not detect the movement of small objects such as animals. To enhance this monitoring system, future steps include sending notifications through other messaging systems such as SMS or WhatsApp and adding a dedicated internet connection for connection redundancy. Cloud services can also be added in the future to record notifications and store captured pictures with a cloud provider, enhancing the system by recording all activity captured by the system [12].
Fig. 6 Main activity screen
Fig. 7 Smart home systems device
Fig. 8 Android application for streaming

Table 2 Measurement time movement

No  Movement trigger  Movement reports  Differences (s)  Panic button process  Alarm     Differences (s)
1   00:01:10          00:01:13          3                00:01:14              00:01:16  2
2   00:01:21          00:01:25          4                00:01:26              00:01:29  3
3   00:01:53          00:01:57          4                00:01:58              00:02:02  4
4   00:02:21          00:02:24          3                00:02:25              00:02:28  3
5   00:03:14          00:03:18          4                00:03:19              00:03:21  2
6   00:03:23          00:03:26          3                00:03:27              00:03:29  2
7   00:03:44          00:03:47          3                00:03:48              00:03:51  3
8   00:03:53          00:03:56          3                00:03:57              00:03:59  2
9   00:04:20          00:04:23          3                00:04:24              00:04:26  2
10  00:04:30          00:04:33          3                00:04:34              00:04:36  2
11  00:04:42          00:04:45          3                00:04:46              00:04:48  2
12  00:04:55          00:04:58          3                00:04:59              00:05:01  2
13  00:05:09          00:05:13          4                00:05:14              00:05:16  2
14  00:05:24          00:05:27          3                00:05:28              00:05:31  3
15  00:05:36          00:05:40          4                00:05:41              00:05:44  3
16  00:05:50          00:05:54          4                00:05:55              00:05:58  3
17  00:06:13          00:06:16          3                00:06:17              00:06:19  2
18  00:06:27          00:06:30          3                00:06:31              00:06:33  2
19  00:06:40          00:06:43          3                00:06:44              00:06:47  3
20  00:06:54          00:06:57          3                00:06:58              00:07:00  2
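The 3 s and 2 s averages quoted in the Result section can be reproduced from the two "Differences (s)" columns of Table 2:

```python
# "Differences (s)" columns transcribed from Table 2 (20 trials)
motion_to_report = [3, 4, 4, 3, 4, 3, 3, 3, 3, 3,
                    3, 3, 4, 3, 4, 4, 3, 3, 3, 3]
button_to_alarm = [2, 3, 4, 3, 2, 2, 3, 2, 2, 2,
                   2, 2, 2, 3, 3, 3, 2, 2, 3, 2]

avg_report = sum(motion_to_report) / len(motion_to_report)
avg_alarm = sum(button_to_alarm) / len(button_to_alarm)
print(round(avg_report, 2), round(avg_alarm, 2))  # 3.3 2.45
```

These round to the 3 s (notification) and 2 s (alarm) figures reported in the text.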
Table 3 Measurement of system life using a 3000 mAh battery

No  System condition                                                           System life
1   The system is only on standby condition                                    4 h 15 min
2   The system operates a camera servo and activates an alarm every half hour  3 h
References 1. Peace P, Stanback TM Jr (1991) The new suburbanization: challenge to the central city, 1st edn. Routledge, New York 2. Hasan M, Anik MH, Islam S (2018) Microcontroller based smart home system with enhanced appliance switching capacity. In 2018 Fifth HCT information technology trends (ITT). IEEE, pp 364–367 3. Kowal M, Coll-Martín T, Ikizer G, Rasmussen J, Eichel K, Studzi´nska A, Koszałkowska K, Karwowski M, Najmussaqib A, Pankowski D, Lieberoth A, Ahmed O (2020) Who is the most stressed during the COVID-19 pandemic? Data from 26 countries and areas. Appl Psychol Health Well Being 12:946–966 4. Tillyer MS, Walter RJ (2019) Low-income housing and crime: The influence of housing development and neighborhood characteristics. Crime Delinq 147:969–993 5. Crime Index by Country. https://www.numbeo.com/crime/rankings_by_country.jsp?title= 2021. Accessed on 20 Oct 2022 6. Sixsmith AJ (2000) An evaluation of an intelligent home monitoring system. J Telemed Telecare 6:63–72 7. Al-Kuwari M, Ramadan A, Ismael Y, Al-Sughair L, Gastli A, Benammar M (2018) Smarthome automation using IoT-based sensing and monitoring platform. In: 12th IEEE international conference on compatibility, power electronics and power engineering. IEEE, Doha Qatar, pp 1–6 8. Arabul FK, Arabul AY, Kumru CF, Boynuegri AR (2017) Providing energy management of a fuel cell–battery–wind turbine–solar panel hybrid off grid smart home system. Int J Hydrogen Energy 42:26906–26913 9. Ahmed T, Bin Nuruddin AT, Latif AB, Arnob SS, Rahman R (2020) A real-time controlled closed loop IoT based home surveillance system for android using firebase. In: 6th international conference on control, automation and robotics (ICCAR). IEEE, Singapore, pp 601–606 10. Cameron N (2021) Electronics projects with the ESP8266 and ESP32, 1st edn. Apress, Edinburgh, UK 11. Surantha N, Wicaksono WR (2018) Design of smart home security system using object recognition and PIR sensor. 
In: The 3rd international conference on computer science and computational intelligence (ICCSCI): empowering smart technology in digital era for a better life. Procedia Computer Science, Jakarta, pp 465–472 12. Creating Object Recognition with Espressif ESP32. https://aws.amazon.com/blogs/iot/cre ating-object-recognition-with-espressif-esp32. Accessed 30 Oct 2022
Contactless Student Attendance System Using BLE Technology, QR-Code, and Android Rico Wijaya , Steven Kristianto, Yudha Batara Hasibuan, and Ivan Alexander
Abstract In an academic institution, attendance is taken to verify that students actually attend class. Since COVID-19, a safer, more hygienic, and more precise attendance system has been needed to minimize the spread of the virus. The proposed system is a contactless student attendance system based on BLE devices, QR codes, and Android, designed for academic institutions housed in high-rise buildings. To take attendance, the instructor displays a QR code in front of the class, and each student connects to the BLE device, scans the QR code, and takes a selfie photo with their smartphone. The student's location is detected from the RSSI value between the BLE device and the smartphone; −93.25 dBm is used as the predefined RSSI value in the system. With this predefined value, the system can successfully determine whether a student is inside or outside the classroom, and the QR code can be scanned from every spot inside the classroom. Keywords Attendance system · QR-Code · Beacon · BLE · Android
1 Introduction In an academic institution, taking student attendance is a time-consuming process for the instructor, especially in large classes. The traditional method is pen and paper, which takes effort and costs both instructor and class time [1]. It can also be inaccurate: students can commit fraud by marking absent classmates as present. Moreover, if attendance is taken at the beginning of class, students can leave during the lecture session, so the instructor must verify student presence throughout it. Another traditional method is calling the roll and noting each student's presence. This is done by the instructor or the instructor's assistants and takes approximately 10 min per lecture hour. By R. Wijaya (B) · S. Kristianto · Y. B. Hasibuan · I. Alexander Bina Nusantara University, Jakarta Barat, DKI Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_46
doing this method, paper [2] states that 8 out of 45 h per semester are used up taking student attendance. Since the COVID-19 pandemic, the traditional pen-and-paper method has become obsolete: it risks transmitting disease particles, and passing paper and sharing stationery can be life-threatening during a pandemic while also increasing waste [3]. In preparing for face-to-face lecture activities after the pandemic, academic institutions must create a safer and more hygienic attendance-taking system, which can be achieved with technology such as access-point broadcast signals [1]. Papers [2, 4] proposed attendance systems based on a QR code, a smartphone, and a database server. In Ref. [2], the instructor displays the QR code in front of the class at the end of the session; students then take attendance by scanning it and taking a selfie photo in a mobile application. This system reduces wasted time by 90% compared with the traditional method. In contrast to Ref. [2], the system in Ref. [4] was implemented for exam attendance, where attendance fraud is relatively rare and instructors have more time to check: the instructor scans a printed QR code, or a QR code displayed on the student's smartphone, to record attendance. Using the QR-code concept of paper [2], paper [5] proposes a different system: a QR-code scanner with a simple database. Scanners read QR codes owned by students; each code contains student information (e.g., student ID and name) that is recorded in the database on successful detection. This system can cause crowds of students queuing to scan. The system in Ref. [6] uses the smartphone MAC address as the student identifier.
Students must register their smartphones in the database, and an internet connection is needed for data verification. Location fraud can still occur in this system because students can take attendance from outside the classroom. Unlike Ref. [6], Ref. [7] uses RFID technology (reader and tags) and a cloud database (a Google spreadsheet): to take attendance, the student taps the RFID tag on the reader, and the controller sends the data to the cloud for comparison with the database. This system can also create crowds. More advanced than [6–8], paper [9] uses IoT and RFID for taking student attendance. That system was developed to address post-COVID-19 problems with fingerprint-based attendance and to prevent attendance fraud. It uses a controller (Arduino UNO), RFID (reader and tags), GSM, GPS, IoT, and a WiFi module; attendance is taken when the controller reads the student's RFID tag via the RFID reader and sends the information to a web server over the internet. As in [8, 10, 11], the system in Ref. [9] uses a GPS module for Location Based Services (LBS). With the GPS modules in papers [8–14], the attendance systems can obtain a student's location coordinates, but those papers do not report how precisely GPS can determine whether the student is inside or outside the classroom. In Ref. [11], the attendance system was built for employees: the GPS is used to check
the employee's location, and if the employee is within a 500 m radius of the office, the attendance is counted as valid. The system in Ref. [11] cannot be implemented for student attendance, because student location must be detected at the meter scale. Theoretically, GPS works well outdoors but poorly indoors, especially in academic institutions with high-rise buildings. Bluetooth Low Energy (BLE) devices are one of the technologies that can be used for indoor positioning applications [15]. Experiments show that a BLE receiver (iBeacon) can measure distance accurately up to 8 m from the source beacon. The accuracy of distances measured by BLE devices is affected by intervening walls and other signal-propagating devices, so engineers must consider the environment around the devices; a clear environment around the BLE devices is recommended for better positioning accuracy. The system proposed in this paper is a contactless student attendance system based on BLE devices, QR codes, and Android for academic institutions in high-rise buildings. These technologies make the attendance system safer, more hygienic, and more precise after the COVID-19 pandemic. In addition, the system is database- and web-based, making it easier for users to interact. The scope of this paper is the creation of a mobile and web application; the discussion focuses on how the developed system works and on experiments concerning the QR code and RSSI values. Four experiments were conducted:
1. Experiment to define the RSSI value between the BLE device and the smartphone
2. Experiment to detect the student/smartphone at several spots outside the classroom
3. Experiment to detect the student/smartphone at several spots inside the classroom
4. Experiment to evaluate whether the smartphone can scan the QR code shown by the instructor in front of the classroom.
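The cited indoor-positioning work relates RSSI to distance. This paper uses a raw RSSI threshold instead, but for context, a common conversion is the log-distance path-loss model, sketched below; the 1 m reference power and path-loss exponent are assumed values, not figures from the paper:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (m) from RSSI via the log-distance path-loss
    model RSSI = tx_power - 10*n*log10(d), where tx_power is the RSSI
    expected at 1 m (assumed -59 dBm here) and n is the path-loss
    exponent (2.0 in free space; larger indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With these constants, an RSSI equal to the reference power maps to 1 m, and every 20 dBm drop multiplies the estimate by ten; indoor walls raise the effective exponent, which is one reason to calibrate a threshold empirically, as this paper does, rather than trust a model-derived distance.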
2 System Architecture The proposed system aims to provide a safer, more hygienic, and more precise attendance system. To achieve this goal, the system architecture is designed as follows: 1. An instructor computer in the classroom to display the QR code in front of the class 2. Student devices with WiFi and Bluetooth enabled, such as smartphones with the related application installed 3. A BLE device (Cubeacon) attached to the classroom ceiling, used to detect the smartphone's location during the attendance process based on the RSSI value 4. A wireless access point and other intermediary devices (router, switch, etc.) to exchange data between student/instructor devices and the server 5. A web and database server to record student attendance, validate smartphone locations, and generate the QR code.
2.1 System Block Diagram The system block diagram is shown in Fig. 1. Before the attendance process begins, the student's smartphone must connect to the LAN (e.g., a wireless router or hotspot) and to the BLE device. The BLE device acts as a beacon, and the RSSI between it and the smartphone is evaluated by the mobile application installed on the smartphone: if the RSSI is within the range defined in the system, the smartphone (and hence the student) is considered inside the classroom; otherwise, it is considered outside. This minimizes the possibility of students cheating by taking attendance from outside the classroom. The QR code is generated by the server and displayed by the instructor in front of the classroom, and the student must scan it to register attendance. Using a QR code also minimizes the potential for crowds during the attendance-taking process, which commonly occurs with fingerprint or RFID-card attendance systems. After scanning the QR code, the student must take a selfie photo; the data are then sent to the web and database server for the attendance record.
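The in-room check described above can be sketched as a small validation function. The −93 dBm cutoff is the one the paper calibrates in its first experiment; the function and parameter names are hypothetical:

```python
RSSI_CUTOFF_DBM = -93.0  # threshold calibrated in the paper's first experiment

def is_inside_classroom(rssi_dbm, beacon_id, expected_beacon_id):
    """Judge the smartphone to be inside the classroom only when the
    scanned beacon is the classroom's own BLE device and the signal
    is stronger than the RSSI cutoff."""
    return beacon_id == expected_beacon_id and rssi_dbm > RSSI_CUTOFF_DBM
```

Checking the beacon ID alongside the RSSI mirrors the paper's use of the BLE device's ID number to confirm that the student is in the right room, not merely in some room with a strong signal.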
3 Results and Discussion In this section, the system implementation and several experiments are discussed. The system implementation subsection shows snippets of the implemented web pages and mobile application and explains the system flow in two parts, the instructor side and the student side. The experiments subsection presents several scenarios conducted to evaluate the implementation.
Fig. 1 System block diagram
Fig. 2 Login page
3.1 System Implementation Instructor Side. The instructor uses a web browser to show the QR code to the students. First, the instructor opens the Login Page (Fig. 2) and enters a username and password. If they match the data stored in the database, the server sends the List of Courses Page (Fig. 3a), where the instructor can view the course schedule by clicking the View Schedule button. Clicking it opens the List of Schedule Page (Fig. 3b), from which the instructor can open the Student Attendance Page (Fig. 3c) by clicking View Attendance. On the Student Attendance Page, the instructor can see which students have taken attendance and can show the QR-Code Page (Fig. 3d) by clicking View QR-Code; this QR code is what students scan to take attendance. Figure 3e shows an example of the Student Attendance Page for a student who has successfully taken attendance. Student Side. On the student's side, attendance is taken through the mobile application. The student connects the smartphone to the network and turns on Bluetooth, then opens the application and logs in (Fig. 4a). If the username and password match the database, the smartphone automatically requests the RSSI value from the BLE device along with the BLE device's ID number; these are used to validate whether the student is in the room, and in the right room. The student then selects the room ID on the Find Room Page (Fig. 4b) and scans the QR code shown by the instructor (Fig. 4c). After a successful scan, the student takes a selfie photo on the Take Photo Page (Fig. 4d), completing the attendance process (Fig. 4e). If any step fails, the student must repeat the process.
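The paper does not specify what the server encodes in the QR code. One plausible sketch, assuming the payload is a course/room/timestamp string signed with a server-side HMAC secret so a code cannot be forged or replayed indefinitely (all names and the payload layout here are hypothetical, not the authors' design):

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # hypothetical shared secret

def make_qr_payload(course_id, room_id, secret=SECRET_KEY, now=None):
    """Build the string the server could encode into the QR code:
    course|room|timestamp|signature."""
    ts = int(now if now is not None else time.time())
    body = f"{course_id}|{room_id}|{ts}"
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{body}|{sig}"

def verify_qr_payload(payload, secret=SECRET_KEY, max_age_s=600, now=None):
    """Check the HMAC signature and reject codes older than max_age_s."""
    body, _, sig = payload.rpartition("|")
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(sig, expected):
        return False
    ts = int(body.rsplit("|", 1)[1])
    current = now if now is not None else time.time()
    return current - ts <= max_age_s
```

Rejecting codes older than `max_age_s` also limits a photographed QR code being shared with absent students after class.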
3.2 Experiments The experiments were conducted in a classroom at the Anggrek Campus of Bina Nusantara University. With 8 floor levels and 95 classrooms, the rooms in Anggrek
Fig. 3 List of Course Page (a), List of Schedule Page (b), Student Attendance Page (c), Show QR-Code Page (d) and Example of students successful doing the taking-attendance process (e)
Campus are arranged close to each other with a relatively homogeneous layout on each floor level; Fig. 5 illustrates this arrangement. The room used in these experiments measures 9 × 9 m. The RSSI value between the BLE device and the smartphone is used to validate the smartphone/student's location. In the first scenario, the BLE device was attached to the ceiling of the 9 × 9 m classroom, and a student with a smartphone tried to connect to it from spot B (Fig. 6a). The RSSI value was measured several times, giving an average of −93.25 dBm; from this, −93 dBm was defined as the RSSI cutoff for classifying a smartphone as inside or outside the classroom. With this cutoff defined, the second experiment had the student attempt to log in at spots A, B, and C; the averaged RSSI value at each spot is shown in Fig. 6b, and the system successfully detected that the smartphone was outside the classroom at all three spots. The third experiment measured the RSSI between the smartphone and the BLE device at several spots inside the classroom (Fig. 7a); the average RSSI values are shown in Fig. 7b, and the system successfully detected the smartphone as inside. The fourth experiment evaluated whether the smartphone could scan the QR code shown by the instructor in front of the classroom. The smartphone scanned the QR code at spots A through I (Fig. 8a); as shown in Fig. 8b, the QR code could be scanned from every spot in the classroom.
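The first experiment's threshold is simply the rounded mean of repeated RSSI samples at the reference spot; a minimal sketch (the sample values below are invented so that their mean matches the paper's −93.25 dBm):

```python
from statistics import mean

def rssi_cutoff(samples_dbm):
    """Average repeated RSSI measurements and round to the nearest
    whole dBm, mirroring the paper's step from the measured average
    of -93.25 dBm to the -93 dBm classification threshold."""
    return round(mean(samples_dbm))

samples = [-92.0, -93.0, -94.0, -94.0]  # invented; mean is -93.25 dBm
```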
4 Conclusions The proposed attendance system can be used during and after the pandemic to minimize contact between students and instructors. By using a BLE device for indoor positioning, the system can detect the location of the smartphone/student precisely: an RSSI value above −93.25 dBm between the smartphone and the BLE device classifies the smartphone as inside the classroom. If the smartphone is inside the classroom
Fig. 4 Mobile application Login Page (a), Find Room Page (b), QR-code Scan Page (c), Take Photo Page (d), and Attendance Success Page (e)
Fig. 5 Illustration of homogeneous arrangement in Campus Anggrek, Binus University (Floor level 3 and level 4)
Fig. 6 Classroom spot and BLE device placement for 1st and 2nd scenario experiment (a) and result (b)
(anywhere from spot A through spot I), it is detected as in-classroom and can scan the QR code easily.
Fig. 7 Classroom spot and BLE device placement for 3rd scenario experiment (a) and result (b)
Fig. 8 Classroom spot for 4th scenario experiment (a) and result (b)
References 1. Sawall E et al (2021) COVID-19 zero-interaction school attendance system. In: 2021 IEEE international IOT, electronics and mechatronics conference (IEMTRONICS). IEEE, Toronto, Canada, pp 0–4 2. Masalha F, Hirzallah N (2014) A students attendance system using QR code. Int J Adv Comput Sci Appl 5(3):75–79 3. Mangindaan D, Adib A, Febrianta H, Hutabarat DJC (2022) Systematic literature review and bibliometric study of waste management in Indonesia in the COVID-19 pandemic era. Sustainability 14(5):2556 4. Muharom LA, Sholeh ML (2016) Smart Presensi Menggunakan QR-Code Dengan Enkripsi Vigenere Cipher [Smart attendance using QR code with Vigenère cipher encryption]. Limits J Math Appl 13(2):31–44 5. Muhamad A et al (2021) Student's attendance system using QR code. Res Innov Tech Vocat Educ Train 1(1):133–139 6. Munthe B et al (2021) Online student attendance system using android. J Phys Conf Ser 012048 (IOP Publishing) 7. Kurunthachalam A et al (2021) Design of attendance monitoring system using RFID. In: 2021 7th international conference on advanced computing and communication systems (ICACCS). IEEE, Coimbatore, India, pp 1628–1631 8. Rahayu N et al (2019) Online attendance system design to reduce the potential of Covid-19 distribution. Jurnal Mantik 3(2):10–19
9. Bharathy GT et al (2021) Smart attendance monitoring system using IoT and RFID. Int J Adv Eng Manag 3(6):1307 10. Uddin MS et al (2014) A location based time and attendance system. Int J Comput Theory Eng March 2018:36–38 11. Fajrianto R, Tarigan M (2022) Building attendance application with location based service technology and waterfall method to overcome long attendance queues and reduce the risk of exposure to Covid-19. J Intell Comput Heal Inform 2(2):29 12. Baskaran G, Aznan AF (2016) Attendance system using a mobile device: face recognition, GPS or both? Int J Adv Res Electron Comput Sci 3(8):26–32 13. Naen MF et al (2021) Development of attendance monitoring system with artificial intelligence optimization in cloud. Int J Artif Intell 8(2):88–98 14. Fatkharrofiqi A et al (2020) Employee attendance application using location based service (LBS) method based on android. J Phys Conf Ser 1641(1) (IOP Publishing, Jawa Barat, Indonesia) 15. Dalkiliç F et al (2017) An analysis of the positioning accuracy of iBeacon technology in indoor environments. In: 2nd international conference on computer science and engineering (UBMK). IEEE, Antalya, Turkey, pp 549–553
Factors Influencing the Intention to Use Electronic Payment Among Generation Z in Jabodetabek Adriyani Fernanda Kurniawan, Jessica Nathalie Wenas, Michael, and Robertus Nugroho Perwiro Atmojo
Abstract All metropolitan cities, including Jakarta, Bogor, Depok, Tangerang, and Bekasi, abbreviated as Jabodetabek, have high population and demand levels, which naturally makes Jabodetabek a trading center. In today's technological era, Indonesia has begun to focus on online transaction facilities, and e-payment is one of the service options used by various companies, businesses, and merchants through applications. The primary objective of this research is to investigate the factors that influence users' intention to adopt electronic payment services through the TAM model. A total of 165 members of Generation Z in Jabodetabek participated in this research and were chosen using a purposive sampling method. The data were collected descriptive-quantitatively (questionnaires) and analyzed with a structural equation model. The findings show that Generation Z's intention to adopt e-payment has a positive relationship with perceived security (t = 3.761, p = 0.000), perceived ease of use (t = 5.872, p = 0.000), perceived usefulness (t = 11.367, p = 0.000), and perceived satisfaction (t = 5.034, p = 0.000), and a negative relationship with perceived risk (t = 2.173, p = 0.030). This research is unique in that it includes one main proponent of mobile payment systems (perceived risk) that was missing from the previous model. Hence, it would aid in the development of a better model and benefit e-payment companies and developers in offering applications that recognize customers' intentions and the requirements they truly desire. Keywords E-payment · TAM model · Consumer intention · Generation Z
A. F. Kurniawan · J. N. Wenas (B) · Michael · R. N. P. Atmojo School of Information Systems, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] A. F. Kurniawan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_47
1 Introduction Digital innovations in technology and information lead to many new initiatives and create an inevitable mass consumer change. One technology commonly used by the public is the internet; most Indonesians are familiar with it and its activities. This causes significant lifestyle changes in the community, encouraging new frameworks and patterns in several fields, one of which is payments. Due to various internet deregulations in Asia, particularly in Indonesia, mobile internet penetration has expanded substantially, and much data suggests that smartphone companies in Indonesia have grown at an exponential rate. Improvements in internet and payment technology will undoubtedly replace cash and other traditional payments as the primary payment method in the community. This is evident in data published by Bank Indonesia in 2022, where the nominal value of electronic transactions in Indonesia increased significantly over the past five years, 2017–2021 (Table 1). Consumers choose non-cash payments as a more practical and economical method. The locations chosen for this study are Jakarta, Bogor, Depok, Tangerang, and Bekasi. First, based on the latest report of BKSP (Badan Koordinasi Sertifikasi Profesi), Jabodetabek is a metropolitan area in Indonesia with a high population. Second, its urbanization rate and demographic growth continue to climb, positioning it as a city of commerce. Lastly, according to APJII (the Association of Indonesian Internet Service Providers), the highest level of internet usage is in Jakarta. These are among the reasons Jabodetabek influences people's behavior patterns in dealing with electronic payments, which have now become a habit and lifestyle of the Indonesian people.
The use of e-payments provides many additional benefits, such as increasing the time efficiency and effectiveness of the payment process, fostering customer loyalty, and reducing the circulation of counterfeit money. In addition, e-payments make spending more efficient through promotions, such as discounts, that customers can enjoy. In fact, multiple research projects have been undertaken on young consumers' intention to adopt e-payments in various developing countries such as Indonesia (Varsha and Thulasiram 2016). According to the Indonesian Central Statistics Agency (2020), the population in Indonesia is 74.93 million people, consisting of 27.94% Gen Z, 25.87% Gen Y, 21.88% Gen X, 11.56% Baby Boomers, 10.88% Post Gen Z, and 1.87% Pre-Boomers. Table 1 Total e-payment transactions in Indonesia
Year   Total transaction   Nominal
2017   943.319.933         Rp12.375.468.720.000
2018   2.922.698.905       Rp47.198.616.110.000
2019   5.226.699.919       Rp145.165.467.600.000
2020   4.625.703.561       Rp204.909.170.000.000
2021   5.068.694.329       Rp284.689.349.480.000
As the mobile payment marketplace evolves and is predicted to expand exponentially in Indonesia, it is essential to investigate the concerns of Indonesian consumers for the benefit of e-payment service providers in Indonesia. Mukherjee has highlighted several factors, such as quality of security and grievance redressal, that affect the intention to use mobile payment services. Hence, it is worth investigating the factors that influence the intention to adopt e-payment among Generation Z in Jabodetabek, foreseeing that e-payment adoption will increase significantly in the coming few years.
2 Literature Review 2.1 E-Payment Electronic payment, commonly referred to as e-payment, is a payment transaction service involving electronic communication media, such as the internet, a mobile phone, a tablet, a laptop, or a computer, between the payer and the recipient. A transaction can be defined as an e-payment transaction if at least its first phase is carried out through an electronic medium [1]. There are several types of electronic payment that customers or payers can use: (1) online credit card, (2) e-wallet, (3) e-cash, (4) stored-value online, (5) digital accumulating balance, and (6) e-check. E-payment can also be categorized by feature, such as transfer, bill payment, top-up, and others.
2.2 Technology Acceptance Model (TAM) The Technology Acceptance Model (TAM) was first developed by Davis in 1986. TAM is a model developed from the psychological side of computer-user behavior, focused on beliefs, attitudes, intentions, and user-behavior relationships, to describe the factors behind users' acceptance and use of technology. Two attitudinal factors of user behavior affect the use of technology: perceived ease of use and perceived usefulness [2]. Extended TAM The extended Technology Acceptance Model (TAM) developed by Venkatesh and Davis [3] outlines the perceived usefulness and intended use of information systems. The model adds "theoretical constructs involving social influence processes and cognitive instrumental processes" and helps determine whether a person will adopt or reject a new system. The results of this study clearly support the appropriateness of using extended TAM to understand why individuals adopt e-payment services.
2.3 Perceived Security People tend to put more trust in something secure than in something that is not yet secure. In the context of e-payment, perceived security can be interpreted as "the extent to which a consumer believes that making payments online is safe". Perceived security is the process of keeping perceived risk at an acceptable level: the higher the level of security, the more individuals will trust and use the technology. This variable has been identified as one of the main factors in the success of an e-payment transaction and as one of the fundamental variables for increasing satisfaction in e-payment adoption. The foundation for the execution of any IT project is protecting corporate information systems from security threats [4].
2.4 Perceived Risk Most people are risk-averse: they prefer a smaller profit with certainty over a larger profit without it, and they avoid risk whenever possible, even when the potential gain and loss are the same. Perceived risk is defined by Oglethorpe [5] as consumer perceptions of the uncertainty and negative consequences that may result from purchasing a product or service. It is also an important component of consumers' information processing: when the perception of risk becomes high, consumers are motivated either to avoid purchasing and using, or to minimize the risk. So perceived risk can be stated as the consumer's perception of the risk to be experienced, the uncertainty and negative consequences that may result from purchasing a product or service.
2.5 Perceived Ease of Use Perceived ease of use is a person's confidence that a technology can be easily understood. User interaction with the system and intensity of use are also evidence of ease of use [2]. In short, the use of technology and information is based on a person's confidence that the system can be easily understood, used, and operated. Several indicators describe ease of use of information technology: (1) easy to learn, (2) easy to use, (3) clear and understandable, (4) controllable, (5) flexible in time, (6) flexible in place, (7) flexible to choose, and (8) easy to become skillful.
2.6 Perceived Usefulness Perceived usefulness is a belief that underlies the decision to use a technology and information system: when users feel helped, and this affects actual use, they will be interested in using the system. Usefulness is divided into two classifications: (1) usefulness with one factor and (2) usefulness with two-factor estimation [6].
2.7 Perceived Satisfaction Perceived satisfaction is a person's emotional state after comparing performance or outcomes with expectations; the level of satisfaction is thus a function of the difference between perceived performance and expectations [7]. Satisfaction describes a consumer's post-purchase response to a product or service, a match between what the customer expects and the performance of the service received.
2.8 Behavioral Intention Consumers with behavioral intention have the intention or attitude of being loyal to a brand, product, or company and freely share its benefits with others [8]. If a product enjoys favorable behavioral intention, the business will be able to compete and succeed, and a positive attitude will emerge from users. Word of mouth is a low-cost promotion with a substantial impact on a company's survival. Positive behavioral intention also benefits companies in many ways, because loyalty toward a product or service is the company's goal.
3 Materials and Methods Research conducted by Reichheld [9] concluded that perceived security has a significant positive effect on perceived satisfaction, which in turn has a positive influence on behavioral intention. In addition, research conducted by Chang [10] stated that perceived security mediated by perceived satisfaction has a positive influence on the continued use of online technology. Thus, we proposed the following hypothesis: H1: Perceived Security (PS) mediated by Perceived Satisfaction (PSa) has a positive and significant influence on the formation of Behavioral Intention (BI).
Perceived risk does not directly affect behavioral intention toward e-payment, but indirectly reduces, i.e., has a negative effect on, perceived usefulness in user usage behavior [11]. Meanwhile, previous research [12] stated that in making online purchases, consumers weigh the risks against the benefits of the purchase, and perceived-risk theories have been applied to different contexts of consumer behavior through consumer satisfaction and the willingness to use a website. Thus, we proposed the following hypotheses: H2: Perceived Risk (PR) mediated by Perceived Satisfaction (PSa) has a negative and significant effect on the formation of Behavioral Intention (BI). H3: Perceived Risk (PR) mediated by Perceived Usefulness (PU) has a negative and significant influence on the formation of Behavioral Intention (BI). H4: Perceived Risk (PR) mediated by both Perceived Usefulness (PU) and Perceived Satisfaction (PSa) has a negative and significant influence on the formation of Behavioral Intention (BI). Perceived ease of use has a significant positive effect on behavioral intention [13]. In addition, previous research [14] concluded that perceived ease of use has both a direct and an indirect effect (mediated by perceived usefulness), and that perceived usefulness has a direct effect on the use of internet services, i.e., behavioral intention. These variables are extrinsic factors that refer to targets or rewards, so a person will have a behavioral intention to use a technology if it offers perceived ease of use and perceived usefulness [15]. Thus, we proposed the following hypothesis: H5: Perceived Ease of Use (PEoU) mediated by Perceived Usefulness (PU) has a positive and significant influence on the formation of Behavioral Intention (BI).
H6: Perceived Ease of Use (PEoU) mediated by both Perceived Usefulness (PU) and Perceived Satisfaction (PSa) has a positive and significant influence on the formation of the Behavioral Intention (BI) variable.

Based on the above hypotheses, a quantitative method was applied in this research to study the relationships between variables, test them, and generalize conclusions and findings [18]. Non-probabilistic sampling with a purposive sampling approach was the most suitable method for the scope of this research. The respondents selected for analysis were members of Generation Z in Jabodetabek with experience using e-payment, yielding a total of 165 usable surveys. A Likert scale with intervals of 1–6 (very low to very high) was used in the questionnaire instrument. Scale conversion was applied to obtain more stable and objective weighting scores. The last stage of analysis was to test the model and hypotheses, so we applied the SEM-PLS method with the latest SmartPLS version 4 software to carry out path analysis and confirm the statistical relationships among latent constructs. A flow diagram representing the methodology adopted for the research is shown in Fig. 1.
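The scale conversion step is not specified in detail in the text; one common choice is a min-max rescaling of the 1–6 Likert responses onto a common interval. The sketch below is purely illustrative: the function name and the target interval are assumptions, not taken from the study.

```python
def rescale_likert(responses, lo=1, hi=6, new_lo=0.0, new_hi=1.0):
    """Min-max rescale raw Likert scores (default 1-6) onto a
    common interval, one plausible form of 'scale conversion'."""
    span = hi - lo
    return [new_lo + (r - lo) * (new_hi - new_lo) / span for r in responses]
```

For example, `rescale_likert([1, 6])` maps the two scale endpoints to 0.0 and 1.0, so items measured on the same 1–6 scale receive comparable weights.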
Factors Influencing the Intention to Use Electronic Payment Among …
Fig. 1 Research model
4 Data Analysis and Result

From the distributed online questionnaires, 165 valid respondent answers were obtained. Most respondents were female (131), and the rest were male. A total of 20 respondents (12.1%) were in the 12–17 age range, followed by 128 respondents (77.6%) aged 18–22, and the remaining 17 respondents (10.3%) aged 23–27 (Fig. 2).

According to the coefficients of determination (R2) shown in Table 2, Perceived Satisfaction and Perceived Usefulness moderately explained 53.4% of the variance in Behavior Intention. Moreover, Perceived Security and Perceived Risk moderately explained 58.7% of the variance in Perceived Satisfaction. Lastly, Perceived Risk and Perceived Ease of Use weakly explained 37.0% of the variance in Perceived Usefulness.

Based on Table 3, all research variables have Cronbach's Alpha values > 0.7 and rho_A > 0.4, so all variables in this research meet the reliability standard [16]. Also, as shown in that table, the Composite Reliability values of all variables are above 0.6, indicating acceptable reliability. Referring to Hair et al. [16] regarding convergent validity, the minimum acceptable Average Variance Extracted (AVE) value should be 0.5 or higher; according to the results in Table 3, all variables exceed 0.5, which confirms their convergent validity.

Based on the results shown in Table 4, using the Fornell-Larcker criterion to measure discriminant validity, all variables meet the requirement: each square root of AVE (marked with an *) exceeds the corresponding inter-construct correlations with the other variables. For example, Behavioral Intention has a higher diagonal value (0.709) than its inter-construct correlations. Thus, based on this measurement, all variables meet the requirements for discriminant validity.
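The reliability and convergent validity statistics referred to above can be computed directly from raw data. The minimal sketch below (with illustrative helper names; this is not SmartPLS's implementation) shows Cronbach's alpha from per-item scores and AVE from standardized outer loadings.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of item-score lists, one per
    item, with respondents in the same order in each list."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))


def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized outer loadings of one construct."""
    return sum(l * l for l in loadings) / len(loadings)
```

Two perfectly correlated items give alpha = 1.0, and loadings of 0.7 and 0.8 give an AVE of 0.565, just above the 0.5 threshold of Hair et al. [16].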
Fig. 2 Path coefficient and R-Square

Table 2 Determination of coefficient values

Variable                  R2      R2 Adjusted
Behavior Intention        0.534   0.528
Perceived Satisfaction    0.587   0.579
Perceived Usefulness      0.370   0.362

Note: R2 = 0.75 substantial; R2 = 0.50 moderate; R2 = 0.25 weak (Hair et al. [17])
Table 3 The result of reflective outer model measurement

Latent variable                 Cronbach's alpha   rho_A   Composite reliability   Average Variance Extracted (AVE)
Behavioral Intention (BI)       0.753              0.770   0.833                   0.502
Perceived Ease of Use (PEoU)    0.793              0.811   0.859                   0.552
Perceived Risk (PR)             0.834              0.845   0.877                   0.588
Perceived Satisfaction (PSa)    0.834              0.835   0.883                   0.601
Perceived Security (PS)         0.855              0.858   0.896                   0.632
Perceived Usefulness (PU)       0.750              0.771   0.834                   0.506
Table 4 The Fornell-Larcker criterion analysis

Latent Var                      BI       PEoU     PR       PSa      PS       PU
Behavioral Intention (BI)       0.709*
Perceived Ease of Use (PEoU)    0.509    0.743*
Perceived Risk (PR)            −0.087   −0.145    0.767*
Perceived Satisfaction (PSa)    0.706    0.646   −0.190    0.775*
Perceived Security (PS)         0.434    0.500   −0.207    0.563    0.795*
Perceived Usefulness (PU)       0.646    0.592   −0.222    0.733    0.505    0.711*

Note: * marks the square root of the construct's AVE on the diagonal
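The Fornell-Larcker check in Table 4 can be expressed programmatically: the square root of a construct's AVE must exceed its absolute correlation with every other construct. The sketch below is illustrative, reusing the Behavioral Intention figures from Tables 3 and 4; the function name is our own.

```python
import math


def fornell_larcker_ok(ave, correlations):
    """Discriminant validity holds for a construct when sqrt(AVE)
    exceeds its absolute correlation with each other construct."""
    root = math.sqrt(ave)  # the starred diagonal value in Table 4
    return all(root > abs(r) for r in correlations)


# Behavioral Intention: AVE = 0.502; correlations with PEoU, PR, PSa, PS, PU
bi_valid = fornell_larcker_ok(0.502, [0.509, -0.087, 0.706, 0.434, 0.646])
```

Here sqrt(0.502) ≈ 0.709 exceeds the largest correlation (0.706 with Perceived Satisfaction), so Behavioral Intention passes the criterion, in line with the table.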
Hypotheses Testing

Hypothesis testing in this research was based on the total effects; the results can be seen in Table 5. From the total effect values in Table 5, it can be concluded that:

H1: Perceived Security (PS) mediated by Perceived Satisfaction (PSa) has a positive and significant effect on the formation of Behavioral Intention (BI) (H1 accepted; p-value [0.000] significant at both the 0.05 and 0.01 levels).

H2: Perceived Risk (PR) mediated by Perceived Satisfaction (PSa) has a negative and significant effect on the formation of Behavioral Intention (BI) (H2 rejected; p-value [0.099] > 0.05, not significant).

Table 5 The total effects

Path         Original Sample   Sample Mean   STDEV   T statistics   p values
PEoU → BI     0.332             0.333        0.057    5.872         0.000**
PEoU → PSa    0.344             0.341        0.065    5.319         0.000**
PEoU → PU     0.572             0.571        0.077    7.410         0.000**
PR → BI      −0.082            −0.092        0.038    2.173         0.030
PR → PSa     −0.086            −0.099        0.052    1.650         0.099
PR → PU      −0.139            −0.151        0.057    2.450         0.014
PSa → BI      0.502             0.504        0.100    5.034         0.000**
PS → BI       0.130             0.133        0.035    3.761         0.000**
PS → PSa      0.259             0.265        0.055    4.723         0.000**
PU → BI       0.580             0.582        0.051   11.367         0.000**
PU → PSa      0.601             0.595        0.058   10.401         0.000**

Note: **, * correlation is significant at the 5% and 1% levels
H3: Perceived Risk (PR) mediated by Perceived Usefulness (PU) has a negative and significant effect on the formation of Behavioral Intention (BI) (H3 accepted; p-value [0.014] < 0.05, significant).

H4: Perceived Risk (PR) mediated by both Perceived Usefulness (PU) and Perceived Satisfaction (PSa) has a negative and significant effect on the formation of Behavioral Intention (BI) (H4 accepted; p-value [0.030] < 0.05, significant).

H5: Perceived Ease of Use (PEoU) mediated by Perceived Usefulness (PU) has a positive and significant effect on the formation of Behavioral Intention (BI) (H5 accepted; p-value [0.000] significant at both the 0.05 and 0.01 levels).

H6: Perceived Ease of Use (PEoU) mediated by both Perceived Usefulness (PU) and Perceived Satisfaction (PSa) has a positive and significant effect on the formation of the Behavioral Intention (BI) variable (H6 accepted; p-value [0.000] significant at both the 0.05 and 0.01 levels).
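The p-values in Table 5 are consistent with a two-tailed test of the bootstrap t statistic under a standard-normal approximation. (SmartPLS's large-sample bootstrap behaves very close to this; the exact reference distribution the software uses may differ slightly, so treat this as a sketch rather than its implementation.)

```python
import math


def two_tailed_p(t_stat):
    """Two-tailed p-value for a t statistic under the standard-normal
    approximation: p = erfc(|t| / sqrt(2))."""
    return math.erfc(abs(t_stat) / math.sqrt(2.0))
```

For PR → PSa, t = 1.650 gives p ≈ 0.099, matching Table 5 and the rejection of H2; for PR → PU, t = 2.450 gives p ≈ 0.014.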
5 Conclusion

This research found that five of the proposed hypotheses were accepted and one was rejected. Perceived Ease of Use had a positive and significant direct effect on Perceived Usefulness with a T-statistic of 7.410 (p-value < 0.05), and positive and significant indirect effects on Perceived Satisfaction (T-statistic 5.319; p-value < 0.05) and Behavior Intention (T-statistic 5.872; p-value < 0.05). Furthermore, Perceived Satisfaction had a significant direct influence on the formation of Behavior Intention, with a T-statistic of 5.034 and p-value < 0.05. Perceived Security had a positive and significant direct effect on Perceived Satisfaction (T-statistic 4.723; p-value < 0.05) and a positive and significant indirect effect on Behavior Intention (T-statistic 3.761; p-value < 0.05). Perceived Usefulness had a positive and significant influence on the formation of Perceived Satisfaction (T-statistic 10.401; p-value < 0.05) and Behavior Intention (T-statistic 11.367; p-value < 0.05). Interestingly, we found that Perceived Risk had a significant direct effect on the formation of Perceived Usefulness (T-statistic 2.450; p-value < 0.05). However, Perceived Risk had no significant direct effect on the formation of Perceived Satisfaction (T-statistic 1.650; p-value > 0.05).

According to the sample used, e-payment providers, as third parties, should have proper procedures to safeguard transactions and reduce risk, so that consumers perceive the service as useful, easy, satisfying, and secure, which increases their intention to use it. Based on our experience in this research process, there are some limitations in both the literature and the research focus. In the future, this proposed model will be further developed and tested on a larger scale to enhance its applications and better capture what consumers want.
References

1. Kim C, Tao W, Shin N, Kim K-S (2010) An empirical study of customers' perceptions of security and trust in e-payment systems. Electron Commer Res Appl 9(1):84–95
2. Adams DA, Nelson RR, Todd PA (1992) Perceived usefulness, ease of use and usage of information technology: a replication. MIS Q 16:227–247
3. Venkatesh V, Davis FD (2000) A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci 46(2):186–204
4. Finogeev AG, Finogeev AA (2017) Information attacks and security in wireless sensor networks of industrial SCADA systems. J Ind Inf Integr 5:6–16
5. Oglethorpe JE, Monroe KB (1994) Determinants of perceived health and safety risks of selected hazardous products and activities. J Consum Affairs 28(2):326–346
6. Chin WC, Todd PA (1995) On the use, usefulness and ease of use of structural equation modelling in MIS research: a note of caution. MIS Q 19:237–246
7. Kotler P, Keller KL (2006) Marketing management, 12th edn. Pearson Prentice Hall, Upper Saddle River, New Jersey
8. Kotler P, Keller K (2014) Marketing management, 15th edn. Prentice Hall, Saddle River
9. Reichheld FF (1996) The loyalty effect. Harvard Business School Press, Boston, Massachusetts
10. Chang HH, Chen SW (2009) Consumer perception of interface quality, security, and loyalty in electronic commerce. Inf Manag 46(7):411–417
11. Chang E-C, Tseng Y-F (2013) Research note: E-store image, perceived value and perceived risk. J Bus Res 66(7):864–870
12. Mitchell V (1999) Consumer perceived risk: conceptualisations and models. Eur J Mark 33(1/2):163–195
13. Alharbi S, Drew S (2014) Using the technology acceptance model in understanding academics' behavioural intention to use learning management systems. Int J Adv Comput Sci Appl 5(1):143–155
14. Teo TSH, Lim VKG, Lai RYC (1999) Intrinsic and extrinsic motivation in internet usage. Internet J Manag Sci 27:25–37
15. Nysveen H, Pedersen PE, Thorbjørnsen H (2005) Explaining intention to use mobile chat services: moderating effects of gender. J Consum Mark 22(5):247–256
16. Hair JF, Sarstedt M, Ringle CM (2018) Advanced issues in partial least squares structural equation modelling (PLS-SEM). Sage, Thousand Oaks, CA
17. Hair J, Risher J, Sarstedt M, Ringle C (2018) When to use and how to report the results of PLS-SEM. Eur Bus Rev 31. https://doi.org/10.1108/EBR-11-2018-0203
18. Wright S, O'Brien BC, Nimmon L, Law M, Mylopoulos M (2016) Research design considerations. J Grad Med Educ 8(1):97–98
Recent Advances in AI for Digital Innovations
Shirt Folding Appliance for Small Home Laundry Service Business Lukas Tanutama, Hendy Wijaya, and Vincent Cunardy
Abstract The Shirt Folding Appliance for Small Home Laundry Service Business was developed to enhance the operational efficiency of laundry service providers. It automates one of the tedious and repetitive processes in the laundry business. It is designed to assist a laundry operator by reducing the time of the shirt folding process and, at the same time, increasing productivity in terms of the number of folded shirts. The device uses off-the-shelf, economical, and reliable components suitable for a home industry. The major components consist of bipolar hybrid stepper motors with Arduino-controllable motor drivers. An ultrasonic sensor is used to detect clothes placed on the movable tray. The tray has paddles, or flippers, that operate according to the steps programmed in the controller; the paddle movements are controlled by the motor drivers. Comparing the number of shirts an experienced operator folds manually within a certain time to the number folded when assisted by the device, efficiency increases by more than 20%.

Keywords Automatic folding · Stepper motor · Driver · Laundry
1 Introduction

Around university campuses, shop-houses, and middle-income residences, small home laundry service businesses are plentiful. They compete to provide fast services to their customers. In a tropical country such as Indonesia, daily clean clothes are a must for hygienic reasons. Yet students and the young workforce have limited time to do their own laundry, so a laundry service is a great help for them and an essential service. The washing and drying processes are already automated, while ironing is still performed manually; ironing is tedious and time-consuming work. Home laundry service businesses have limited human resources

L. Tanutama (B) · H. Wijaya · V. Cunardy Computer Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_48
L. Tanutama et al.
due to financial limitations. They have to structure their pricing carefully, as their customers also have limited financial resources. Adding helping hands would erode their net income and become a disincentive to a sustainable small enterprise. Among the various manual processes, folding shirts and the like is time consuming, but it is a good candidate for automation. Automating the manual folding process can increase the efficiency of the process while keeping costs relatively constant. This appliance will not only decrease the processing time but could also increase the establishment's revenue and profitability; it can become a win-win solution for both the business owner and the customers.
2 Previous Works

There have been numerous efforts to develop systems and devices that reduce the manual work of folding both men's and women's clothes for later use [1–3]. These efforts were not simple due to the nature of clothes' structure: the mechanical construction required to reproduce the actions of a human being is quite complicated. The intelligence to control the steps taken is usually provided by a microcontroller or microprocessor, and movements mimicking human manual work are performed by either stepper or servo motors. A photovoltaic-powered T-shirt folding machine that folds a T-shirt more efficiently than manual folding was developed by some researchers [1]. It was constructed using small DC motors and a microcontroller to run the controlling program. Another system was designed to fold shirts and pants; it was intended for household purposes and used four plates moved by servo motors to control the folding motion. A more sophisticated machine was also developed: it accepted a T-shirt out of the washing machine, placed it on a special hanger for drying, detected when the T-shirt was sufficiently dry, and then folded it automatically for storage [4].
3 System Design

3.1 System Diagram

See Fig. 1.
3.2 System Components

Controller. The brain of the Shirt Folding Appliance is an Arduino Uno R3 [5]. It is an open-source microcontroller module based on the ATmega328P microcontroller. The module
Fig. 1 System diagram: the controller (Arduino) commands three motor drivers, one each for the right, left, and upper paddle motors, and reads a sensor input
provides digital and analog input/output and can interface with various expansion modules, known as shields, and other circuits. Of its 14 digital I/O terminals, six can provide pulse-width-modulated output, and it also has six analog input terminals. The Arduino IDE (Integrated Development Environment) provides programmability. The ATmega328 on the module is preprogrammed with a boot loader that allows uploading new code without an external hardware programmer. The controller uses a 16 MHz crystal oscillator. If additional communication is needed, the Arduino provides a variety of interfaces such as UART, I2C, and SPI to connect to other chips. The Arduino commands the drivers according to the loaded control program.

Stepper Motor. A stepper motor is a DC motor driven by digital pulses; the pulses are translated into discrete mechanical movements. A driver that generates pulses is needed to move the motor. Hybrid motors combining permanent magnets and variable reluctance were selected because they offer high precision and strong torque: precision from the variable reluctance and torque from the permanent magnets. The selected motors can move either clockwise or counterclockwise [6].

Stepper Motor Driver. The stepper motor driver enables microstepping of the selected motors. The driver can easily move the motor clockwise or counterclockwise while preserving the torque. The TB6600 was selected to drive the chosen bipolar motors. It has a signal terminal that determines the number of steps to turn, a direction signal terminal, and a terminal to activate or deactivate the motor [7].

Ultrasonic Sensor. An ultrasonic sensor is used to measure distance. It has a transmitter and receiver pair: the transmitter converts an electrical signal into a 40 kHz ultrasonic signal, and the receiver detects the reflected ultrasonic wave. This sensor detects the shirt placed on the paddle.
If nothing is detected, the paddles will not move.

Paddle. See Fig. 2.
Fig. 2 Paddle structure
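The distance logic behind the shirt detection described above can be sketched as follows: the 40 kHz sensor reports the round-trip echo time, so the distance is half the echo time multiplied by the speed of sound. The threshold logic and parameter names are illustrative assumptions, not the appliance's actual firmware.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C


def distance_cm(echo_time_s):
    """Distance to the reflecting surface: the ultrasonic pulse
    travels out and back, so halve the round-trip time."""
    return echo_time_s * SPEED_OF_SOUND_M_S * 100.0 / 2.0


def shirt_present(echo_time_s, empty_tray_cm, margin_cm=2.0):
    """Assume a shirt is present when the reading is noticeably
    shorter than the calibrated empty-tray distance."""
    return distance_cm(echo_time_s) < empty_tray_cm - margin_cm
```

An echo of 1 ms corresponds to 17.15 cm; with an assumed empty-tray distance of 25 cm, such a reading would be treated as a shirt on the paddle.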
3.3 System Operations

To operate this device, a shirt (T-shirt, polo shirt, or plain shirt) is placed on the paddle equipped with the ultrasonic sensor. Upon detection, the device is ready for operation. Naturally, the shirt must be dry and ready for folding, and preferably it has been ironed. There are four paddles for folding the shirt: the left wing paddle, right wing paddle, and bottom paddle are each movable through a stepper motor and a horizontal bar holding the paddle, while the fourth paddle is the platform holding the folded shirt. The stepper motors turn the horizontal bars of the paddles in one direction and then return them to their initial positions; the amount of rotation is defined by the number of pulses sent from the controller to the driver. The controller first moves the left wing paddle 180° clockwise, then returns it to its home position by turning the motor 180° counterclockwise. The right paddle and bottom paddle are operated similarly. The shirt is now folded and is manually removed from the platform paddle. The process is repeated for the other shirts (Fig. 3).
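The sequencing above can be sketched as a simple command generator. The step angle, microstep setting, and paddle names below are illustrative assumptions (a 1.8° bipolar hybrid stepper with the TB6600 set to 8 microsteps), not values taken from the actual firmware.

```python
STEP_ANGLE_DEG = 1.8  # assumed full-step angle of the hybrid stepper
MICROSTEPS = 8        # assumed TB6600 microstep setting


def pulses_for(degrees):
    """Driver pulses needed to rotate a paddle bar by the given angle."""
    return round(degrees / STEP_ANGLE_DEG * MICROSTEPS)


def fold_sequence():
    """Yield (paddle, direction, pulses) commands for one fold cycle:
    each paddle flips 180 degrees and then returns home, in order."""
    for paddle in ("left", "right", "bottom"):
        yield (paddle, "cw", pulses_for(180))
        yield (paddle, "ccw", pulses_for(180))
```

With these assumptions, a 180° flip takes 800 pulses and a full cycle issues six commands, mirroring the left-right-bottom order described in the text.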
4 Results and Discussions

Two parameters of interest are the maximum weight the motors can tolerate while performing their duties and the time needed to complete a folding cycle. The time is measured from when the shirt is placed on the bottom paddle and detected by the sensor until it rests on the take-up platform. Each measurement is performed at least twenty times to obtain average figures.
Fig. 3 Shirt folding appliance structure
Table 1 Maximum shirt weight

Paddle          Average shirt weight (gram)
Left paddle     72
Right paddle    78
Bottom paddle   92
4.1 Maximum Shirt Weight Data

Based on the obtained data, the maximum shirt weight that can be handled is determined by the left paddle, even though the other paddles can handle more weight, because the left paddle performs the first step in the process (Table 1).
4.2 Time for Folding Shirts of Different Sizes

The second parameter of interest is whether shirt size influences the total time taken to perform a complete folding cycle (Table 2).
Table 2 Shirt size and average folding time

Size   Average folding time (seconds)
S      3.8
M      4.0
L      4.1
XL     4.4

Table 3 Comparison of manual and automatic process

Process     Number of shirts
Manual      89
Automatic   110
The time measurements showed that a larger shirt takes more time to fold due to its weight: the motors need more time to flip it, especially the bottom paddle. A motor with higher torque would be less affected by shirt size. The system under study used motors with small torque, and the results showed that shirt size affected its performance.
4.3 Comparison of Manual and Automatic Process

Table 3 shows how many shirts were folded within 30 min manually by a human operator and with the device. Within 30 min, manual operation finished 89 shirts, while using the appliance the same operator finished 110 shirts. The appliance therefore increased the operator's productivity by about 23%.
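The productivity figure follows directly from Table 3; a quick check (the ratio is about 23.6%, reported as 23% in the text):

```python
manual = 89      # shirts folded manually in 30 minutes (Table 3)
assisted = 110   # shirts folded with the appliance in 30 minutes

gain = (assisted - manual) / manual  # relative productivity gain
```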
5 Conclusion

It can be concluded that this appliance can help improve the productivity of a human operator in folding shirts and the like, such as T-shirts and polo shirts. Depending on the torque rating of the stepper motors, the time needed to fold depends on the size of the shirt; motors with higher torque are less sensitive to shirt size. Size here implies weight: a larger shirt is heavier and needs more torque to lift.
References 1. Gomesh N (2013) Photovoltaic powered T-shirt folding machine. Energy Procedia 36:313–322 2. Liu Y, Tran D, Wang K (2017) Cloth folding machine. Mechanical Engineering Design Project Class
3. Vinitha SM (2020) Cloth folding machine. Int J Recent Technol Eng (IJRTE) 4492–4495
4. Miyamoto R, Mu S, Kitazono Y (2014) Development of system to fold T-shirt in the state of hanging. In: Proceedings of the 2nd international conference on intelligent systems and image processing
5. Allied Electronics & Automation. Arduino. https://datasheet.octopart.com/A000066-Arduino-datasheet-38879526.pdf. Accessed 18 Aug 2022
6. Roy TS, Kabir H, Chowd MM (2014) Simple discussion on stepper motors for the development of electronic device. Int J Sci Eng Res 1089–1096
7. Makerguides.com. https://www.makerguides.com/wp-content/uploads/2019/10/TB6600-Manual.pdf. Accessed 15 Aug 2022
Artificial Intelligence as a New Competitive Advantage in Digital Marketing in the Banking Industry Wahyu Sardjono and Widhilaga Gia Perdana
Abstract Digital disruption has entered the banking industry, especially since the COVID-19 pandemic. To remain competitive, every bank must adjust to the digital age. Reputation has a direct impact on a bank's performance, its capacity to draw in new clients, and its capacity to retain current ones. These issues must be considered when making choices about how to address the difficulties of adopting artificial intelligence (AI). Thanks to AI, the financial sector as a whole now has a way to meet client expectations for more creative, practical, and secure solutions. By utilizing AI, a bank can swiftly decide which channel to use at what time and what material to target consumers with. This is very important considering the shift in customer behavior in the digital era; AI is expected to assist the bank in providing products and services that are right on target according to customer needs and to create a new competitive advantage.

Keywords Artificial intelligence · Digital transformation · Competitive advantage · Customer · Marketing
1 Introduction

In the long run, artificial intelligence (AI) will be a crucial component of every business organization on the planet. Industry 4.0 is widely acknowledged to be a driving force behind AI. The present excitement around the advancement and commercialization of AI systems is primarily related to deep learning, or

W. Sardjono (B) Information Systems Management Department, BINUS Graduate Program—Master of Information Systems Management, Bina Nusantara University, Jl. K. H. Syahdan No. 9, Kemanggisan, Palmerah, Jakarta 11480, Indonesia e-mail: [email protected] W. G. Perdana Post Graduate Program of Management Science, State University of Jakarta, Jl. R.Mangun Muka Raya No. 11, Jakarta Timur, Jakarta 13220, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_49
W. Sardjono and W. G. Perdana
the AI's capacity to carry out certain cognitive tasks more effectively than humans. Significant changes in the AI landscape are reflected in the latest AI-led automation trends [1]. In the domain of commercial implementation of AI, there has been a clear shift in concepts, expectations, and infrastructure; currently, businesses and society as a whole are paying more attention to AI [2]. The ability to take advantage of new opportunities provided by AI technology increases the drive of businesses and the global economy to transform themselves through AI.

Most people chose online payment to keep themselves healthy during the COVID-19 epidemic. The majority of retail malls closed, making online shopping the go-to option for a variety of needs, from necessities to recreational items. Because interest in spending persisted, this created opportunities for banking, finance, and other financial service companies. Additionally, a number of online marketplaces promote cashless transactions by allowing users to pay with credit cards, bank transfers, or electronic wallets. Therefore, now is the time for banks to increase their marketing. One way of utilizing artificial intelligence is to provide input for the bank in targeting customer market segments according to their needs, so that the products and services offered are right on target. This paper provides a general picture of how artificial intelligence is being employed in banking to advance market share in the era of the new normal.
2 Literature Review

The shift to online business has accelerated over the past ten years, and the banking industry is one area where these services are becoming more commonplace, because a significant number of banking institutions have begun to provide them. According to academic studies on banks and financial services, digital transformation and customer adoption of digital services have been underway for decades; online banking, phone banking, and personality machines are some examples [3]. Thus, as AI and digital technologies are used more frequently, the entire financial services sector is transforming, and old intermediary methods, particularly those of traditional banks, are being disrupted. Technology advancements have led to the emergence of new financial services and products, and new players have entered the retail banking sector as a result of this trend. These changes affect stakeholders in the financial sector from various backgrounds and with various objectives; each of these actors is interconnected, and each influences both its own conduct and that of others [3].

Artificial intelligence (AI) is the study of teaching computers to comprehend and emulate natural behavior and interactions. Using AI and the supplied data, intelligent machines have been created that think, respond, and complete tasks the way people do. AI is capable of performing highly technical and specialized tasks such as robotics, audio and speech recognition, and visual recognition. AI refers to a variety of technologies that can perform tasks requiring human intelligence. When deployed in routine commercial
operations, these technologies can behave, learn, and perform like humans. AI simulates human intellect in machines, which allows businesses to operate more quickly and at lower cost [4]. Likewise, according to Holloway and Hand (1988: 70), the concept is no longer an intellectual term but an actuality. Additionally, in some firms it already seems that AI systems are making ethical and business judgments in place of humans, and a lot of businesses are already using AI extensively.

Recognizing whether a technology such as digital banking improves services to meet customers' requirements calls for an understanding of how the system and clients interact and how clients perceive the service. Customers anticipate using digital banking in a manner similar to social media, with services available anytime, anywhere, and under all circumstances. The digitalization of the bank also has effects on users, including hazards to their privacy, security, time management, and performance. To develop a marketing plan, marketers must consider a number of critical factors, including customer experience. A bank's ability to outperform the competition and provide clients with positive or negative experiences depends on how it uses innovative technology to deliver its services. The term "customer experience" refers to the consumer's assessment of every direct or indirect interaction with the company in connection with their purchasing behavior [4]. Digital banking experiences can be characterized by customer satisfaction, service product, potential value, service customization, transactions, workforce linkage, customer loyalty, entrepreneurship in digitalization, performance expectancy, and the risk involved.

The term "digital marketing" refers to all forms of online, Internet, and digital advertising combined. Advertising implements marketing techniques using digital channels (infrastructure, programming, and communication devices).
Digital marketing techniques involve competitor analysis, questionnaires, several types of marketing, conversion rate optimization, publishing, and content marketing. Using any of these technologies effectively requires marketing analytics. To recover relevant information and increase the performance of their content marketing initiatives, the three major categories of actors (marketers, firms, and publishers) must understand and be able to work with vast volumes of information [5].

Businesses created digital marketing as a practical way to capitalize on and gain from the high consumer engagement on the Internet. Digital marketing is used by companies, including banks, as part of their marketing strategies and launch plans. The ability to quantify results is one of digital marketing's biggest advantages over conventional marketing tools and platforms. Every Internet user leaves behind a significant amount of data that can be used in marketing research. State-of-the-art analytical tools utilize machine learning (ML) algorithms to interpret past data and assist in operational planning. The marketing sector is consequently interested in artificial intelligence (AI); some attribute this to how far marketing has advanced. Numerous innovations have been made in the area of artificial intelligence, and many marketers enjoy praising the benefits of novel developments in the technology. AI is used to recognize both voice and images. Additionally, data breaches are stopped,
Fig. 1 Artificial intelligence (AI) in digital marketing [5]
and drones are more effectively directed at distant networks. Marketers must build ongoing, informed relationships with their clients at an individual level, grounded in experience and understanding, to gain a stronger socioeconomic position within this permanently connected reality. Brands that have adopted the appropriate scaling frameworks and recognized the benefits of AI may gain a competitive advantage that is very difficult to duplicate. Context and content are interwoven in AI [5] (Fig. 1).

AI aims to create automated systems that can think and act like people. This technology is considered the "next phase" of the industrial revolution. AI and ML are seen as holding solutions to most of today's problems, and AI may also help with predicting future problems. New industries, technologies, and environments can be produced through AI. This might include knowledge, logic, and, most significantly, the capacity for self-correction. AI has the capacity to analyze, understand, and decide. It draws on data about current users and is used to forecast market trends and user behavior. It is used by businesses of all sizes to fine-tune their marketing and sales plans in order to increase sales; it is often referred to as data forecasting. The impact of artificial intelligence is evolving in every area of our professional lives. As a result, everything will be affected, including the strategy and execution of marketers' initiatives. Artificial intelligence will drive innovation in the pursuit of sustainable development.
3 Methodology
This study was conducted by searching the literature and sources of information on the internet related to the problem discussed (AI in digital marketing) for later
Artificial Intelligence as a New Competitive Advantage in Digital …
analysis. The major goals of this literature search are to gather pertinent data and to determine how closely related the data are to one another and whether they are mutually supporting.
1. Compile references and existing literature on this issue from books, the internet, pre-existing papers, personal experience, and other sources.
2. Examine the consulted sources.
3. Determine whether the material is pertinent to the subjects being discussed.
4. Highlight the most important ideas from the relevant texts.
5. Write down and rearrange the important points obtained, in a structured manner, into a paper.
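As an illustration only (not part of the authors' actual procedure), the screening steps of such a literature search can be approximated in code by scoring each collected source against the topic keywords. All names and thresholds below are hypothetical assumptions.

```python
# Hypothetical illustration of literature screening: rank collected
# sources by how many topic keywords appear in their titles, a rough
# stand-in for the manual relevance check described above.

TOPIC_KEYWORDS = {"ai", "artificial", "intelligence", "digital", "marketing"}

def relevance(title: str) -> int:
    """Count how many topic keywords appear in a source title."""
    words = {w.strip(".,").lower() for w in title.split()}
    return len(words & TOPIC_KEYWORDS)

def screen(titles, min_score=2):
    """Keep only sources whose titles match enough topic keywords."""
    return [t for t in titles if relevance(t) >= min_score]

titles = [
    "Artificial intelligence in marketing",
    "Soil chemistry field notes",
    "Digital marketing analytics with AI",
]
kept = screen(titles)
```

A manual review would of course judge full texts, not titles; the sketch only shows how a relevance criterion can be made explicit.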
4 Results and Discussion
AI has significantly changed marketing and will continue to do so. Marketing activities such as internet search and ad campaigns, social media engagement, mobile tracking and interaction, payment, and online shopping are increasingly driven by adaptive and sophisticated algorithms, thanks largely both to technology giants such as Amazon and Google and to many small MarTech (Marketing Technology) companies. A positive feedback loop has been created between machine learning and several significant marketing trends, as shown in Fig. 2: these trends challenge existing machine learning techniques and promote their advancement. The analysis of marketing trends and practices [6] includes the following:
Fig. 2 AI-driven marketing landscape [6]
4.1 Marketing Trends
• Rich media and interactivity
The interactivity and rich media formats (text, image, and video) provided by the internet, social networks, and mobile platforms have significantly improved relationships between customers and businesses. Identifying customer preferences and impressions is essential for organizations in order to create brand positioning insights. Designing informative and engaging material is another essential area of competition for raising awareness, perception, and acceptance. Although natural for individuals, rich media content is challenging for machines to process. Because of their enhanced capacity to provide insights and recommend improvements, AI tools based on deep learning approaches are used in various engaging and media-rich environments.
• Customizing and targeting
Marketing is becoming more and more individualized. In markets where abundant data is the norm and digital methods make it easy to distribute customized items, optimization algorithms are pushing large-scale, situational personalization and retargeting to a new level. More precise segmentation is being used, with data on preferences and behaviors increasingly informing it. Continual refinement results in personalization, where each customer forms a separate segment and receives offers that are specifically catered to her unique profile. Successful targeting therefore calls for providing the right product to the right customer at the right time and location, not just matching the right item to the right consumer. The rapid growth of these technologies is also attributable to the powerful machine learning algorithms behind much of this personalization and situational targeting.
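A minimal, hypothetical sketch of the behavior-based segmentation described above: scoring customers on recency, frequency, and monetary value (RFM) and bucketing them into coarse segments. All thresholds, segment names, and data are illustrative assumptions, not taken from the source.

```python
# Hypothetical RFM segmentation sketch: bucket customers into coarse
# segments from recency (days since last purchase), frequency (number
# of orders), and monetary value (total spend). Thresholds are
# illustrative only.

def rfm_segment(recency_days: int, frequency: int, monetary: float) -> str:
    score = 0
    score += 2 if recency_days <= 30 else (1 if recency_days <= 90 else 0)
    score += 2 if frequency >= 10 else (1 if frequency >= 3 else 0)
    score += 2 if monetary >= 500 else (1 if monetary >= 100 else 0)
    if score >= 5:
        return "loyal"
    if score >= 3:
        return "promising"
    return "at-risk"

customers = {
    "A": (10, 12, 900.0),   # recent, frequent, high spend
    "B": (60, 4, 150.0),
    "C": (200, 1, 20.0),
}
segments = {name: rfm_segment(*rfm) for name, rfm in customers.items()}
```

Production systems would typically learn such segments from data (e.g. clustering) rather than hand-code thresholds, but the sketch shows how behavioral variables translate into differentiated treatment.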
• Automation and real-time optimization
The complexity of the marketing environment is long past the point where the manual and intuitive abilities of human analysts can handle it. Fragmentation and continuous market interactions further raise the need to remove humans from the critical path. Although a handful of segments can be targeted using traditional techniques, hundreds of precisely determined micro-segments require automation. For frequent interactions, real-time responses are also important. For instance, there is only a small window of time in which to deliver a promotional offer when mobile tracking detects an inbound customer. Automation is being used more and more in marketing, and the favored methods include real-time optimization and advanced analytics.
• Focus on the customer journey
Businesses are beginning to view the complete client journey holistically. As a result, from the position of the business, the vision of an aggregated purchase funnel
is changed into a view of continuous decision journeys with feedback loops from the standpoint of individual clients. Information about each client touchpoint can now be gathered with communications technology. When businesses are able to piece together a consumer's entire journey, they can monitor and guide customers while offering the right information, customer experience, and advertising at the right time and place. Businesses can devise a longer-term strategy that addresses the complete journey in addition to guiding customers through individual obstacles. When a lead materializes, a comprehensive understanding of the customer aids in developing a plan to guide her through the sales process.
4.2 Marketing Practices
• Marketing mix
More often than not, decisions about the marketing plan are informed by descriptive statistical analysis, and dynamic quantitative models are frequently used to make pricing, assortment, channel, and location decisions. Meanwhile, branding is becoming increasingly personalized and digital. Various predictive digital marketing platforms and services use artificial intelligence techniques to target customers based on expressed personal profiles and prior behavior, and to make real-time bidding decisions within milliseconds. Even the traditionally qualitative field of product and advertising creative design has embraced algorithms, and this shift is accelerating.
• Customer interaction
As businesses concentrate on client decision journeys, intelligent agents support customer engagement along the route to enhance the experience. Targeted adverts sent by bidding machines to online users, and uniquely designed webpages generated by website-morphing algorithms, help inform customers and pique their interest. Computer-generated virtual influencers spread brand awareness and advertise goods. Virtual assistants, enabled by powerful cloud computing, deliver information or enable purchases in response to client message inquiries. Chatbots driven by information retrieval and statistical algorithms are quickly taking over the handling of routine queries. Innovations fuelled by AI are rapidly altering customer engagement methods.
• Search
Many client journeys start with an internet search engine. Google formerly relied on the PageRank algorithm to process queries but has since transitioned to a machine-learned ranking program, which improves the relevance and robustness of search results. Marketers who use solutions based on machine learning also perform search marketing services.
Web search has long dominated the internet environment, but learning methods are now enabling search based on
a range of different types of content. Conversational engines such as Dialogflow, which include text-to-speech, natural language processing, and voice recognition features, handle voice search well. Visual search is made feasible by tools such as Syntheti's Style Intelligence Agent.
• Recommendation
Giving the right product recommendations to interested customers can boost marketing effectiveness dramatically. Early recommender systems employed machine learning algorithms such as item- or content-based collaborative filtering. As more recent techniques, such as matrix factorization, have been incorporated, their accuracy and resilience have gradually increased. Recently, deep learning and dimensionality reduction approaches have been used to further improve quality. Recommender systems, which analyze the data of millions of customers and items to determine relevance, are becoming a crucial part of marketing, successfully connecting products and customers through digital channels.
• Attribution
With the multiplicity of promotional tools and customer relationship management touchpoints, it becomes more important to connect actual outcomes to the efforts that produced them. In the past, outcomes have not always been accurately attributed across communication channels, since simplistic rules such as first- or last-touch, while easy to understand, have been used. Businesses are moving toward model-based attribution. In order to obtain accurate outcome feedback and improve budget allocation and management, a variety of machine learning techniques are being investigated and implemented.
Figure 3 shows the many key marketing segments of AI efforts [6]. Price, resource management, brand name, sponsorship, and current strategic planning have had to be carefully accounted for when aiming marketing efforts at AI technology.
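To make the recommendation practice above concrete, here is a minimal matrix-factorization sketch (illustrative only, not from the source): user and item latent factors are learned by stochastic gradient descent on a handful of observed ratings. All sizes, data, and hyperparameters are arbitrary assumptions.

```python
# Toy matrix factorization for recommendations: learn k-dimensional
# latent factors for users and items so that their dot product
# approximates the observed ratings. Data and settings are made up.

import random

def predict(P, Q, u, i):
    """Dot product of user u's and item i's latent factors."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

def factorize(ratings, n_users, n_items, k=2, steps=4000, lr=0.02, reg=0.02):
    """Learn latent factors by SGD with L2 regularization."""
    random.seed(0)
    P = [[random.uniform(0.1, 0.9) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(0.1, 0.9) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            err = r - predict(P, Q, u, i)
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# (user, item, rating) triples on a 1-5 scale; most entries unobserved.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0), (2, 2, 5.0)]
P, Q = factorize(ratings, n_users=3, n_items=3)
```

After training, `predict(P, Q, u, i)` can also be evaluated on unobserved user-item pairs, which is what turns the factorization into a recommender.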
Other considerations have been emphasized as essential to the communications mix for AI applications, such as differentiation, storylines, and even ideation processes, in addition to market design and end-user requirements. Marketers use AI to boost customer demand. Integrated applications that take advantage of machine intelligence provide customers with a great user experience. Such an application records every purchase, including the location and time it was made, can analyze the data, and can send customers personalized marketing messages. These notifications give tips and discounts to raise customers' average order values when they visit a nearby business. By using an integrated strategy for system automation, marketing gives the business a competitive edge. An AI marketing strategy has advantages in decision-making and client micro-management. Data is essential for improving the patterns of content that ML systems recommend to customers. Programmatic media bidding is the automated process for buying and selling online advertising.
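Programmatic ad exchanges have commonly used sealed-bid second-price auctions, in which the highest bidder wins the impression but pays the runner-up's bid. The sketch below illustrates that mechanism only as an assumption-laden example, not as a description of any specific exchange; bidder names and amounts are invented.

```python
# Illustrative second-price auction, a mechanism commonly associated
# with programmatic ad bidding: the highest bidder wins an impression
# but pays the second-highest bid (minimal increments omitted).

def second_price_auction(bids: dict) -> tuple:
    """Return (winner, price) for a sealed-bid second-price auction."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

winner, price = second_price_auction({"adv_a": 2.50, "adv_b": 1.75, "adv_c": 0.90})
```

A useful property of this design is that each bidder's dominant strategy is to bid their true valuation, which is one reason the format became widespread in ad markets.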
Fig. 3 Multiple segments for applications of AI in the marketing domain [7]
When used in conjunction with reliable market research data, AI is a powerful tool that enables businesses to complete a variety of jobs. An essential component of this frequently encountered use case is the segmentation of target audiences, a task at which AI is noticeably quicker and more productive than humans. If businesses take their efforts further, they may be able to provide their target markets with more tailored offers that customers will be more willing to accept. The rapid adoption of new technologies has inspired many business giants to advance into more sophisticated and effective fields, where AI has established itself as the most helpful. Integrating AI will increase a company's chances of staying up to date in a number of different ways. A bank can more precisely determine which customers to target and which not to include in a campaign. By better matching customers with products they are likely to buy, fewer superfluous or irrelevant offers will be encountered. Banks are thus able to provide each customer with great customer service. One technique that businesses employ with AI is predictive marketing analytics: by analyzing data from prior occurrences, AI can accurately and reliably forecast future performance based on a variety of characteristics. Making recommendations that are more meaningful to people requires an understanding of what they value most, but the majority of AI-based customization solutions work top-down and are built for a single person rather than an entire community. The marketing sector has seen a number of AI-based changes that have increased its effect and impressiveness. Figure 4 illustrates the various AI techniques used in today's competitive, high-level marketing to carry out the many functions intended for managing industry challenges [7]. Data collection through research reports and big data applications, as well as deeper
consumer knowledge and research, which still need full implementation in the market domain, are additional inputs for adopting AI to manage market-level tactics. Marketers can use AI technology to recognize patterns and project them into the future, and can then choose whom to target and how to allocate their budget based on these details. AI is essential to the success of any marketing effort, from the preliminary stages to the conversion and customer satisfaction phases. It has been demonstrated that it is possible to build computers that can simulate certain cognitive functions specific to human intelligence, especially learning and reasoning. AI helps marketers understand the constantly evolving world of content marketing by analyzing customer information and supporting advertisers in making sense of user intent. Marketers can use AI to create content for straightforward stories. In order to draw inferences and support a data-driven decision-making process, AI technology can aggregate and interpret data from numerous platforms. As power has passed from the industry to the customer, traditional marketing has undergone a drastic transformation. Businesses are investing more and more in marketing systems
Fig. 4 AI transformations for marketing sectors [7]
that can collect, interpret, and use vast volumes of consumer and corporate data. AI technologies let banks see what their consumers say, think, and feel about their businesses. Similarly, with the abundance of digital networks at their disposal, banks can genuinely understand how customers feel. With this precise information, banks with foresight can rapidly adjust messaging or branding for maximum impact [8]. Digital marketing has seen a number of changes due to AI, as follows [9]:
a. Chatbots
Artificially intelligent systems interact with customers using natural language. Marketers are becoming more and more interested in these projects as social media consumption shifts toward private messaging services such as Facebook Messenger and WhatsApp, and it would be a shame to miss out on this possibility for contact. While the majority of savvy marketers see chatbots as a method of providing individualized customer support at scale, they are not yet widely regarded as marketing tools in their own right. Chatbots, however, can also be employed to support individuals through the buying process.
b. Refinement of advertising
Artificial intelligence (AI) is also used to improve the delivery of advertisements. According to marketing experts, Google and Facebook lead PPC campaigns in the United States. Recent studies suggest that advertisers may gain from machine learning by discovering new marketing messages for their PPC ads. Artificial intelligence is advantageous for advertising because competitors might not yet utilize these platforms.
c. Email marketing
Artificial intelligence has increased the effectiveness of email marketing for both companies and their customers. The ability to personalize at scale is the dream of every marketer, and artificial intelligence makes it a reality.
Depending on how each subscriber has interacted with the brand in the past, artificial intelligence can create personalized messages for each recipient. The experience can be tailored by taking into consideration content consumption, wish lists, and websites visited. If one user frequently clicks on product and market links in the company's emails while another never does, for instance, artificial intelligence can send each user a unique message with the most pertinent content.
d. Marketing is becoming more consumer-centred
Data integration from various sources plays a significant role in the application of artificial intelligence. Customers leave small bits of sensitive information behind whenever they access the internet; whether they browse, publish, or shop, the data is collected. Artificial intelligence systems are now studying these enormous databases to learn about clients' online conduct and confirm their identity.
e. Creating leads
Thanks to the data it already has and the tools it employs, artificial intelligence can go through mountains of data to determine the optimal response for users, clients, and even staff members. It can also help determine or forecast which leads are most promising. More time then becomes available for marketing staff to devote to activities such as presentations and sales calls.
f. Automated content creation
Automation of content generation is a common practice among enterprises. This technological improvement has accelerated and simplified content creation.
g. Image recognition
Image recognition is perhaps one of the most intriguing and potentially most important developments in artificial intelligence, encouraging machines to perceive and act more like people do, as in self-driving cars. Computers are now capable of distinguishing and recognizing basic objects, actions, and contexts, and their visual capabilities, in some respects exceeding what the human eye can see, have enabled and continue to enable several important technical advances. Image recognition lets marketers find images on social networking sites even without a caption.
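Among the changes listed above, the chatbot idea (item a) can be illustrated with a toy keyword-based intent matcher. Real chatbots use trained language models; the intents, keywords, and replies below are invented for the sketch.

```python
# Toy intent-matching chatbot sketch (illustrative only): match a
# customer message to a hand-written intent by keyword overlap, with
# a fallback for anything unrecognized.

INTENTS = {
    "order_status": ({"order", "status", "tracking"}, "Let me check your order."),
    "returns": ({"return", "refund", "exchange"}, "You can return items within 30 days."),
}
FALLBACK = "Sorry, let me connect you to a human agent."

def reply(message: str) -> str:
    """Pick the reply of the intent whose keywords best match the message."""
    words = {w.strip("?.,!").lower() for w in message.split()}
    best, best_overlap = FALLBACK, 0
    for keywords, answer in INTENTS.values():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = answer, overlap
    return best
```

The fallback path matters in practice: as noted above, chatbots are positioned as scaled support, and handing unrecognized queries to a human keeps the experience acceptable.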
5 Conclusions
The banking sector in emerging nations has been impacted by the rise of digital banking. Opportunities in this sector are expected to increase annually, making it more alluring. Many customers are still unfamiliar with fully digital-only banking, despite the fact that they regularly use Internet and mobile banking. Digital-only banks offer distinctive customer experiences due to the lack of physical locations and the delivery of all services via online platforms. Experience-based aspects that affect how customers use this technology have thus become crucial. Artificial intelligence (AI) describes methods that let machines carry out mental tasks that call for human intelligence; these involve interaction with the environment, learning, and reasoning. Two of the most popular AI methods are machine learning and deep learning. In order to cultivate customer engagement and loyalty, AI can develop a more personalized brand experience. To enhance the user experience, marketers deploy language-based AI in customer relationship management systems, payment services, and engagement management. Customers may now rely on a chatbot to complete the purchasing process for them rather than being required to figure it out on their own. A bank may employ AI to assess consumer trends and behaviours, forecast outcomes, and modify advertising as necessary. To predict future trends, it makes use of data, algorithms, and cutting-edge AI technologies. As they examine more
data, AI systems continuously learn how to enhance their performance and offer the best solutions. AI-powered ML algorithms can analyze massive volumes of previous consumer data to determine which adverts suit which customers and at what point of the purchase decision. AI will give marketers the advantage of optimizing content release timing by systematically searching and analyzing data. Language-based artificial intelligence is advancing quickly, "learning" from past usage and automatically fine-tuning itself to produce an even better user experience the next time. It can help marketers by recognizing the key signals that consumers want to read. Thanks to AI, it is now conceivable to personalize programming via observation, sampling procedures, and assessment. Content marketing software helps marketers increase performance by supporting them with email marketing, one of the numerous digital strategies that help reach the target audience at the right time. The main benefit of AI in advertising is data analysis: with the help of this technology, enormous amounts of data can be analysed to give marketers practical and useful insights.
References
1. Haleem A, Javaid M, Asim Qadri M, Pratap Singh R, Suman R (2022) Artificial intelligence (AI) applications for marketing: a literature-based study. Int J Intell Netw 3:119–132. https://doi.org/10.1016/J.IJIN.2022.08.005
2. Rodrigues ARD, Ferreira FAF, Teixeira FJCSN, Zopounidis C (2022) Artificial intelligence, digital transformation and cybersecurity in the banking sector: a multi-stakeholder cognition-driven framework. Res Int Bus Financ 60:101616. https://doi.org/10.1016/J.RIBAF.2022.101616
3. Windasari NA, Kusumawati N, Larasati N, Amelia RP (2022) Digital-only banking experience: insights from gen Y and gen Z. J Innov Knowl 7(2):100170. https://doi.org/10.1016/j.jik.2022.100170
4. Sarath Kumar Boddu R, Santoki AA, Khurana S, Vitthal Koli P, Rai R, Agrawal A (2022) An analysis to understand the role of machine learning, robotics and artificial intelligence in digital marketing. Mater Today Proc 56:2288–2292. https://doi.org/10.1016/J.MATPR.2021.11.637
5. Mahalakshmi V, Kulkarni N, Pradeep Kumar KV, Suresh Kumar K, Nidhi Sree D, Durga S (2022) The role of implementing artificial intelligence and machine learning technologies in the financial services industry for creating competitive intelligence. Mater Today Proc 56:2252–2255. https://doi.org/10.1016/J.MATPR.2021.11.577
6. Ma L, Sun B (2020) Machine learning and AI in marketing—connecting computing power to human insights. Int J Res Mark 37(3):481–504. https://doi.org/10.1016/j.ijresmar.2020.04.005
7. Haleem A, Javaid M, Qadri M, Singh RP, Suman R (2022) Artificial intelligence (AI) applications for marketing: a literature-based study. Int J Intell Netw 3:119–132
8. Fosso Wamba S (2022) Impact of artificial intelligence assimilation on firm performance: the mediating effects of organizational agility and customer agility. Int J Inf Manage 67:102544. https://doi.org/10.1016/j.ijinfomgt.2022.102544
9. Verma S, Sharma R, Deb S, Maitra D (2021) Artificial intelligence in marketing: systematic review and future research direction. Int J Inf Manag Data Insights 1(1):100002. https://doi.org/10.1016/j.jjimei.2020.100002
Digital Supply Chain Management Transformation in Industry: A Bibliometric Study Azure Kamul, Nico Hananda, and Rienna Oktarina
Abstract Technology is becoming today's most promising approach to transforming a company, specifically its supply chain system. This research is conducted through bibliometric analysis based on published data regarding digital transformation in supply chain systems from the Scopus database. The publication search is based on the keyword "Digital Supply Chain Transformation" and several criteria to better filter the obtained publication data. Overall, the bibliometric analysis from VOSviewer uncovers two research gaps and beneficial information. The first research gap shows that the digital supply chain transformation of industrial sectors other than agriculture and food can be further explored. The second gap is that AI is identified as a promising technology that is still rarely used in digital supply chain transformation research. Furthermore, we find that blockchain is a phenomenal technology currently being used in research on digital supply chain transformation. We envision this study to encourage more research about digital supply chain transformation, which hopefully could accelerate the development of a more sustainable, efficient, and data-oriented supply chain system. This study is divided into seven sections: Introduction, Literature Review, Methods, Data Collection, Results and Discussion, Conclusion, and Recommendations.
Keywords Blockchain · Digital supply chain transformation · Big data · Artificial intelligence · Internet of things
A. Kamul · N. Hananda · R. Oktarina (B) Industrial Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] A. Kamul e-mail: [email protected] N. Hananda e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_50
1 Introduction
Various technologies have been adopted in today's industrial supply chains to transform them digitally, such as blockchain, metadata, Artificial Intelligence (AI), the Internet of Things (IoT), big data, and other technology approaches. The application of this technology aims to direct the supply chain system toward sustainability, development, and improvement through digital transformation. Technology adoption provides a more straightforward supply chain process, a more transparent transaction process, reduced operational costs, and increased accuracy in analysis and forecasting to design a more effective and efficient supply chain system [1]. Technology utilization also supports more credible, reliable, and data-oriented decisions and innovations for companies [2]. Thus, discussions related to the technological transformation of supply chain systems have become a main topic for industry players. The transformation trend in industry calls for more research covering digital transformation in the supply chain [3]. Companies require research and further study to find the most suitable and efficient digital transformation strategy for their supply chain. Therefore, this paper aims to show the research gaps in the relationship between supply chain management systems and digital transformation, along with various promising technological approaches. The study was conducted using bibliometric analysis through a literature search related to digital transformation in industrial supply chain management using the Scopus search index. The search results are processed using the VOSviewer software to display visualizations of the keyword network, which are then used as parameters for the bibliometric analysis. The discussion is expected to uncover further opportunities for digital transformation using specific promising technology approaches.
We envision our results to support the digital supply chain management systems and industrial sectors that are transforming to face the era of technological disruption, especially in Indonesia.
2 Literature Review
A supply chain system comprises a coordinated network including the company's business processes, facilities, business activities, procurement, manufacturing, and logistics processes. From the point of view of traditional supply chain systems, the flow of the supply chain is always influenced by previous entities. This results in considerable problems and losses if one of these entities cannot fulfill its duties. By transforming digitally, technology allows the company to quickly make decisions that anticipate risks that would cause losses. Digital transformation of the supply chain system increases accuracy in data analysis so that decision-making in the supply chain system is better targeted [3]. Technology systems can easily observe and analyze everything from the procurement process through manufacturing activities to market needs so that
companies can quickly and precisely take steps and decisions [4]. Altogether, the benefits provided by a digital supply chain can optimize the entire system, reduce additional operational costs, and manage everything from production scheduling to delivery so that all logistics processes run on time and efficiently [5].
2.1 Technology in the Digital Supply Chain
Blockchain
Blockchain is a method for storing data and organizing a system of blocks kept as data in a computing system through an integrated internet network [6]. Each block carries a hash code, so it cannot be forged in the computational system. Because its records cannot be erased and have high validity, blockchain is suitable for data collection in supply chain processes. Blockchain helps companies securely exchange data between departments in real time. The other benefits of adopting this technology are simpler business processes, increased transaction security, increased trust and accuracy of data analysis, reduced operational costs, and a transparent supply chain system [7]. Blockchain systems are very suitable for industries that handle many material components of a particular type: the identification of these items is easily captured in blocks that cannot be duplicated or eliminated, so errors in data information will slowly decrease [8]. In addition, a blockchain system can safeguard the transactions that take place in the supply chain process. Transparent, secure, and irreversible transactions make the supply chain more stable because of the reduced risk of fraud or data falsification.
Internet of Things
The Internet of Things (IoT) in a company's supply chain system aims to increase the effectiveness and efficiency of company performance. Concrete examples of IoT application are found in warehouse systems and in data analysis of the product procurement process [9]. Warehouse operations involve a lot of data, and goods need to be moved quickly. An IoT system helps process goods data, which is then passed on to machines such as Material Handling Equipment (MHE).
The IoT system is able to provide data accurately and quickly so that the entire warehouse process is faster and, of course, reliable [10]. In the supply chain, an IoT system can provide real-time notifications so suppliers know directly which material is needed for production, and can forecast and analyze data on consumer needs to suit the company's production targets [9]. However, the success of an IoT system certainly depends on human resources and infrastructure [11]. Therefore, IoT needs to be built as an integrated system consisting of skilled workers along with adequate
internet and computer infrastructure to create an effective, efficient, reliable, and sustainable supply chain system.
Artificial Intelligence
AI is a branch of science used to create intelligent machines that can work like humans. AI's intelligence is programmed through computer systems so that it appears to mimic human intelligence [12]. Therefore, implementing AI can make a machine do a job through learning, understanding, and training like a human. AI's ability to do a job also increases with the amount of data gained from its experience and learning while doing the job. These capabilities make AI one of the alternative technological approaches used to help companies make decisions or manage their supply chain systems [13]. AI was chosen because, with the help of computer systems, it can accurately and quickly analyze and process amounts of company data beyond human capability. As a result, AI systems are often linked to big data as analysis parameters and prediction algorithms [14]. Big data has a huge volume and variety of data, so it requires a complex and lengthy process before it can be turned into useful information or predictions [15]. AI's role is to read patterns and relationships between various data variables through computer algorithms and to predict results more accurately in less time. AI substitutes for human brain capabilities that are currently unable to process big data into useful information [16]. AI itself can be used to create a more effective, efficient, and lower-risk supply chain management system [17]. For instance, AI can be used to automatically create analytical data-based predictions, which reduces the number of workers needed and provides more reliable forecasting [18].
Big Data
Big data is a term used to describe vast volumes of structured and unstructured data that are very difficult for today's conventional computers and software to process [19].
Big data has become an exciting issue for industry players, given that engineering and data mining software have developed to the point where “big data” can be obtained from companies and industries. However, big data by itself is just an unusable pile of records if it cannot be processed into information that can guide decisions. The four main characteristics of big data are volume, velocity, variety, and veracity [20]. This is where Big Data Analytics (BDA) comes in, transforming large amounts of data that have no value on their own into valuable information that can drive the decisions of business and industry. The results of big data analysis are generally presented as graphs, a process referred to as data visualization [21]. Big data and BDA can transform supply chain processes across four types of operations: procurement, production, delivery, and sales [22].
Digital Supply Chain Management Transformation in Industry: …
3 Methods
This research was conducted using a bibliometric analysis method supported by the VOSViewer software. Bibliometric analysis is a quantitative method that lets researchers leverage large amounts of bibliometric data, such as publications and citations, to measure and study trends or novelties of a particular research topic. The bibliometric analysis in this study centered on studying and revealing topics and information about digital transformation in supply chain management, based on publication data extracted from the Scopus database. The VOSViewer software visualizes the keywords obtained from a set of publications as a network, displaying each keyword’s relationships, density, and recency related to digital transformation in supply chain management. In support of the bibliometric process, the literature search parameters are defined using the PRISMA method, which narrows the scope to the objectives of the bibliometric analysis. The systematic literature process flow using the PRISMA method is shown in Fig. 1.
4 Data Collection
Data collection is carried out by searching literature studies through the Scopus search index on Scopus metadata. The Scopus site was chosen as the search source because it has an excellent reputation and an extensive collection of literature studies. The search begins with the keyword “Supply Chain”, which returned 509,427 documents related to the supply chain. Filtering and identification then continued by adding keywords narrower to the topic of discussion, namely “Digital Supply Chain Transformation”, and displaying only articles published after 2016. This search returned 104 study documents. The final identification and screening of studies is carried out using the inclusion and exclusion criteria in Table 1. Literature studies meeting the inclusion and exclusion criteria are used as data for the bibliometric analysis. If a citation within a selected study is relevant to the topic of discussion, it is also used as analysis data.
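The screening described above (the Boolean query limits plus the inclusion and exclusion criteria of Table 1) can be sketched programmatically. The following is an illustrative Python sketch; the record fields are hypothetical names for this example, not the actual Scopus export schema.

```python
# Illustrative sketch of the screening step: applying the Boolean-query limits
# and the inclusion/exclusion criteria of Table 1 to publication records.
# The record fields ("title", "doctype", "language", "year") are hypothetical
# names for this example, not the actual Scopus export schema.

def passes_criteria(record: dict) -> bool:
    """True if a record meets all four inclusion criteria."""
    on_topic = "supply chain" in record["title"].lower()  # criterion 1 (simplified)
    is_article = record["doctype"] == "ar"                # criterion 2: articles only
    in_english = record["language"] == "English"          # criterion 3
    recent = record["year"] > 2016                        # criterion 4: after 2016
    return on_topic and is_article and in_english and recent

records = [
    {"title": "Digital supply chain transformation", "doctype": "ar",
     "language": "English", "year": 2021},
    {"title": "Digital twins in manufacturing", "doctype": "cp",
     "language": "English", "year": 2020},
]
selected = [r for r in records if passes_criteria(r)]
print(len(selected))  # 1
```

In practice this filtering is done by the Scopus interface itself; the sketch only makes the four criteria explicit.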
5 Result and Discussion
The network, overlay, and density visualizations of selected keywords, based on the filtered Scopus publication data related to the topic of digital supply chain transformation in the industry, are shown in Figs. 2, 3 and 4. According to the network visualization, three primary keywords become the focus of this analysis, namely “supply chain
Fig. 1 summarizes the bibliometric analysis data identification and filtering process flow:
- Scope of study identification: studies are identified through a Scopus database search with the keyword “Digital Supply Chain”, plus additional studies identified manually; the search focuses on literature related to transformation in the supply chain, and study results that do not meet the search criteria are not used as analysis data.
- Literature study screening: the search is specified using the Boolean query TITLE-ABS-KEY(transformation digital supply chain) AND (LIMIT-TO(DOCTYPE, "ar")) AND (LIMIT-TO(LANGUAGE, "English")) AND PUBYEAR > 2016; articles not covered by the query or outside the research discussion are not used as analysis data.
- Inclusion and exclusion: the selected literature study data is filtered manually using the inclusion and exclusion criteria; citations in the literature whose scope matches the research topic are included.
- Result: the selected literature studies become the data for conducting the bibliometric analysis.

Fig. 1 Bibliometric analysis data identification and filtering process flow

Table 1 Inclusion and exclusion criteria

| No | Inclusion | Exclusion |
|----|-----------|-----------|
| 1 | Publications related to the topic of digital transformation in supply chain management | Publications on digital transformation in other fields are not included |
| 2 | Publications must be articles | Other types of publications are not included |
| 3 | Publications must be in English | Publications in other languages are not included |
| 4 | Selected articles must be published after 2016 | Articles published in 2016 or earlier are not included |
Fig. 2 Overlay visualization VOSViewer
management”, “digital transformation”, and “supply chain”. The network visualization identifies four clusters in the bibliometric data: a green cluster with nine associated keywords, a red cluster with ten, a blue cluster with six, and a yellow cluster with four. The green cluster, with “supply chain management” as its main keyword, connects all keywords correlated to supply chain management, such as decision making, manufacturing, and performance. The blue cluster associates keywords that evoke the purpose of digital supply chain transformation, such as sustainability, business, innovation, and digitalization. The red and yellow clusters contain keywords for technologies that influence the digital supply chain, including IoT, industry 4.0, AI, blockchain, and metadata. The network visualization first indicates that the technology approach has attracted many researchers in transforming the supply chain system. Comprehensively, we find four promising technologies in the network that are highly adopted in digital supply chain transformation research: blockchain, internet of things, big data, and artificial intelligence. The capabilities of those technologies enable a better supply chain system through transformation. Moreover, we use the overlay visualization in Fig. 2 to understand those technologies’ trends in current research. The overlay visualization points out “blockchain” as the most utilized technology in digital supply chain transformation research today, followed by big data, IoT, and AI. Research utilizing blockchain started massively in early 2021, while the other technologies have been in use since 2020. Likewise, the density visualization in Fig. 3 shows that the density of blockchain research for digital supply chain transformation is higher than that of the other three technologies. This indicates the industrial interest in exploiting blockchain technology to transform and support the performance of supply chain systems.
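A VOSViewer-style keyword network of the kind discussed above is built from keyword co-occurrence across publications: nodes are keywords, and an edge’s weight is the number of publications in which two keywords appear together. A minimal sketch with illustrative keyword sets (not the actual dataset of this study):

```python
# Sketch of the keyword co-occurrence counting behind a VOSViewer-style network:
# nodes are keywords, and an edge's weight is the number of publications in which
# the two keywords appear together. The keyword sets below are illustrative.
from collections import Counter
from itertools import combinations

publications = [
    {"supply chain management", "blockchain", "industry 4.0"},
    {"supply chain management", "big data", "blockchain"},
    {"digital transformation", "supply chain management"},
]

edges = Counter()
for keywords in publications:
    # count each unordered keyword pair once per publication
    for pair in combinations(sorted(keywords), 2):
        edges[pair] += 1

print(edges[("blockchain", "supply chain management")])  # 2
```

VOSViewer additionally normalizes these weights and lays out the nodes; the counting step above is the underlying principle.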
Fig. 3 Density visualization VOSViewer
Fig. 4 Network visualization VOSViewer
A. Kamul et al.
The second finding concerns the specific industrial sectors currently transforming through research and development. Figure 4 shows two main keywords related to the transforming industrial sectors: “agriculture” and “food supply”. There is considerable attraction to the big data approach in transforming the food supply chain. Big data allows food supply companies to analyze customer behavior accurately from demand history, and the resulting analysis and predictions are used to optimize promotion and forecasting in order to minimize food waste. Big data, along with blockchain, is also crucial for integrating company collaboration with third parties, such as suppliers and distributors, by providing real-time, relevant, and transparent information. These capabilities yield a more efficient food supply chain system that reduces the probability of food waste. The same holds for the agriculture sector, where many items such as fruits and vegetables decay quickly. Both sectors push industry to develop a better and more sustainable supply chain system as soon as possible, so many researchers work to transform both sectors’ supply chains through a technology approach. However, we believe that digital transformation in other supply chain sectors is also critical for industrial advancement. The research possibility in different industrial sectors is the first gap we uncover through this bibliometric study.

This study also uncovers some promising technologies that are barely utilized in digital supply chain transformation research. We zoomed in on the keyword “supply chain management” to visualize a more specialized connection between supply chain management and its correlated keywords; the resulting visualization is shown in Fig. 5. The figure shows “artificial intelligence” as a promising technology approach that has no correlation link with supply chain management. Without that correlation, we assume current researchers still rarely use AI to support research on digital supply chain transformation, which means AI opens a huge opportunity to further support such research. That opportunity is a research gap we uncover through this bibliometric and literature study. Future researchers could use AI to support research on digital supply chain transformation, especially in not-yet-transformed industry fields. Overall, the bibliometric analysis uncovers two research gaps and useful information for future research. The first gap shows that digital supply chain transformation beyond the agriculture and food sectors can be further explored. The second gap is AI, identified as a promising technology that is still rarely used in digital supply chain transformation research. Furthermore, we find that blockchain is currently the most prominent technology in studies of digital supply chain transformation.
6 Conclusion
This paper uses bibliometric analysis based on Scopus publication data on digital supply chain transformation. The publication data search in the Scopus database is derived from the keyword “Digital Supply Chain Transformation” and several
Fig. 5 Supply chain keyword network visualization VOSViewer
criteria to better filter the results. The filtering process leaves 104 papers to be visualized bibliometrically in VOSViewer. From the analysis, we find that today’s researchers focus mainly on transforming the agriculture and food supply chain systems; the research possibility in other industrial sectors is the first gap we uncover through this bibliometric study. We also reveal four technologies that are massively adopted in digital supply chain transformation research: blockchain, IoT, big data, and artificial intelligence. Nevertheless, a more specific network visualization around the “supply chain management” keyword shows that AI still has a huge opportunity to further support research on digital supply chain transformation; that opportunity is the second research gap we uncover through this bibliometric and literature study. We envision this study encouraging more research on digital supply chain transformation, which hopefully will accelerate the development of a more sustainable, efficient, and data-oriented supply chain system.
References
1. Liu W, Wei S, Li KW, Long S (2022) Supplier participation in digital transformation of a two-echelon supply chain: monetary and symbolic incentives. Transp Res Part E Logist Transp Rev 161:102688
2. Junge AL, Straube F (2020) Sustainable supply chains—digital transformation technologies’ impact on the social and environmental dimension. Proc Manuf 43:736–742
3. Nasiri M, Ukko J, Saunila M, Rantala T (2020) Managing the digital supply chain: the role of smart technologies. Technovation 96–97:102121
4. Yevu SK, Yu ATW, Darko A (2021) Digitalization of construction supply chain and procurement in the built environment: emerging technologies and opportunities for sustainable processes. J Clean Prod 322:129093
5. Nabila AW, Mahendrawathi ER, Chen JC, Chen TL (2022) The impact analysis of information technology alignment for information sharing and supply chain integration on customer responsiveness. Proc Comput Sci 197:718–726
6. Huang L, Zhen L, Wang J, Zhang X (2022) Blockchain implementation for circular supply chain management: evaluating critical success factors. Ind Mark Manag 102:451–464
7. Burgess P, Sunmola F, Wertheim-Heck S (2022) Blockchain enabled quality management in short food supply chains. Proc Comput Sci 200:904–913
8. Wu Y, Zhang Y (2022) An integrated framework for blockchain-enabled supply chain trust management towards smart manufacturing. Adv Eng Inf 51:101522
9. Koot M, Mes MRK, Iacob ME (2021) A systematic literature review of supply chain decision making supported by the internet of things and big data analytics. Comput Ind Eng 154:107076
10. Sun W, Zhao Y, Liu W et al (2022) Internet of things enabled the control and optimization of supply chain cost for unmanned convenience stores. Alexandria Eng J 61:9149–9159
11. Rejeb A, Simske S, Rejeb K et al (2020) Internet of things research in supply chain management and logistics: a bibliometric analysis. Internet of Things 12:100318
12. Bloomfield BP (2021) The culture of artificial intelligence. Routledge Libr Ed Artif Intell 2–10:59–105
13. Singh S, Gupta A, Shukla AP (2021) Optimizing supply chain through internet of things (IoT) and artificial intelligence (AI). Proc Int Conf Technol Adv Innov ICTAI 2021:257–263
14. Ojokoh BA, Samuel OW, Omisore OM et al (2020) Big data, analytics and artificial intelligence for sustainability. Sci Afr 9:e00551
15. Riahi Y, Riahi S (2018) Big data and big data analytics: concepts, types and technologies. Int J Res Eng 5:524–528
16. O’Leary DE (2013) Artificial intelligence and big data. IEEE Intell Syst 28:96–99
17. Toorajipour R, Sohrabpour V, Nazarpour A et al (2021) Artificial intelligence in supply chain management: a systematic literature review. J Bus Res 122:502–517
18. Mediavilla MA, Dietrich F, Palm D (2022) Review and analysis of artificial intelligence methods for demand forecasting in supply chain management. Proc CIRP 107:1126–1131
19. Singh Jain AD, Mehta I, Mitra J, Agrawal S (2017) Application of big data in supply chain management. Mater Today Proc 4:1106–1115
20. Dhas JTM (2022) Introduction to big data analytics. The Palm, Pattabiram
21. Rawat R, Yadav R (2021) Big data: big data analysis, issues and challenges and technologies. IOP Conf Ser Mater Sci Eng 1022
22. Lee I, Mangalaraj G (2022) Big data analytics in supply chain management: a systematic literature review and research directions. Big Data Cogn Comput 6:17
Development of Wireless Integrated Early Warning System (EWS) for Hospital Patient

Steady, Endra Oey, Winda Astuti, and Yuli Astuti Andriatin

Abstract The implementation of the Early Warning System (EWS) for hospitalized patients aims to identify worsening of the patient’s condition based on the assessment of vital signs. Currently, the EWS is still applied manually: nurses fill out observation sheets to assess the risk of patient deterioration, so entry errors can occur due to human error. In addition, the shortage of nurses in Indonesia makes the implementation of the EWS suboptimal. Therefore, a system was created that measures vital signs and assesses the risk of patient deterioration automatically, which nurses can monitor through a website. The result of this study is a prototype system design consisting of five data retrieval devices and a Raspberry Pi as a web server. Each device sends data to the server wirelessly using the MQ Telemetry Transport (MQTT) protocol. The test results show that the system can retrieve vital signs data automatically every hour and displays a risk one level lower than the manual EWS implementation.

Keywords Early warning system (EWS) · Raspberry Pi · Web server · Prototype · MQ telemetry transport (MQTT)

Steady · E. Oey · W. Astuti (B) Automotive and Robotics Program, Computer Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia. e-mail: [email protected]
Y. A. Andriatin Nursery Department, Cilacap Regional General Hospital, Gatot Subroto No. 28, Cilacap, Central Java 53223, Indonesia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_51

1 Introduction
The Early Warning System (EWS) is used to identify worsening of a patient’s condition as early as possible in the inpatient room, so that treatment can be carried out immediately to prevent the condition from deteriorating further [1]. EWS identifies the patient’s condition by providing an assessment of the condition
or risk of worsening based on the results of examining vital signs such as blood pressure, body temperature, oxygen saturation, respiratory rate, and average heart rate [2]. At present, EWS data collection is generally still done manually, using an observation sheet for examining vital signs to assess the risk of the patient’s condition worsening, so nurses need a long time to enter data and assess risk [3]. In addition, it can cause fatigue for nurses because of the increased workload when data must be collected individually for each patient. Nurse fatigue has the potential to cause human error, which can negatively affect patient safety [4]. Research has shown that fatigue was the variable with the most significant effect on the error rate across all hospital work shifts [5]. Furthermore, patients at moderate risk of worsening under the EWS must be examined at least every hour, while some cases, such as ER patients [6], hemodynamically unstable patients, patients after major surgery, and patients transferred to the ICU, are unstable and require continuous monitoring, so more nurses are needed to monitor their condition. In fact, however, the number of nurses in Indonesia is still insufficient, especially during the COVID-19 pandemic [7–9]. Based on the 2017 Organization for Economic Cooperation and Development (OECD) report, Indonesia ranks lowest in number of nurses among the 45 countries surveyed [7]. To overcome this problem, it is necessary to automate EWS data retrieval and connect it to a computer network so that nurses can carry out continuous monitoring. This will increase the efficiency of the hospital’s services and reduce the occurrence of human error.
Therefore, in this research we build a system design that retrieves the patient’s vital sign data automatically using sensors and assesses the risk of the patient’s condition worsening, with the result displayed on an LCD.
2 Proposed System
In the proposed system shown in Fig. 1, the wireless EWS design uses five devices to retrieve data on the patient’s vital signs as well as the patient’s identity. The five devices are a touch screen Liquid Crystal Display (LCD), a digital sphygmomanometer, a temperature sensor, a respiratory rate sensor, and an oxygen saturation and heart rate sensor. The touch screen LCD is used to enter patient identity data and the consciousness and oxygen administration parameters, and to display the results of the risk assessment of the patient’s condition worsening, while the sensor devices and the sphygmomanometer measure each of the patient’s vital signs. Each device in this system is used in a different position, because each vital sign is measured differently according to the standard. Temperature is measured with a DS18B20 temperature sensor [10, 11] placed on the patient’s upper arm, while blood pressure is measured with a wrist blood pressure monitor placed on the hand. The respiratory (breathing) rate is measured with a piezoelectric sensor [12] looped around
Fig. 1 Proposed wireless EWS system
the abdomen or chest. Oxygen saturation and heart rate are measured with a MAX30100 sensor [13] placed on the patient’s fingertip [14]. The touch screen LCD [15] is placed near the patient to display the patient’s risk of worsening. Because each device is used in a different position, data is sent from the devices to the web server wirelessly. Besides the positional differences, the wireless method was also chosen to keep the patient comfortable by reducing the use of cables, wires, and cuff hoses, and for ease of use of the devices. The wireless transmission uses the MQTT (Message Queuing Telemetry Transport) protocol because the data sent is small and requires a fast response. Each vital sign measurement device is therefore equipped with an ESP8266 [16], and the touch screen LCD with an ESP32 [17], so that each can connect to the MQTT broker [18] via a WiFi network. In this wireless EWS, all data is sent to a Raspberry Pi 4 Model B, which functions as the MQTT broker, web server, and database. Each device publishes to the MQTT broker [19] on a predetermined topic, as shown in Fig. 1. The published data contains the client id, the sensor measurement data, and scores for the measured parameters assessed by the microcontroller based on the hospital’s EWS standards. Finally, the total parameter assessment score is received by the LCD device and categorized into a deterioration risk level, which is displayed in the area near the patient’s bed.
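The publish step described above can be sketched as follows. This is a minimal illustration, not the authors’ firmware: the topic name and JSON payload fields are our assumptions, and the paho-mqtt network call is kept in its own function so the payload logic can be inspected without a broker.

```python
# Minimal sketch of a device publishing one reading to the Raspberry Pi MQTT
# broker. The topic name and payload fields are illustrative assumptions, not
# the authors' actual scheme; the paho-mqtt network call is isolated in its
# own function so the payload logic can run without a broker.
import json

def make_payload(client_id: str, parameter: str, value: float, score: int) -> str:
    """Serialize one measurement and its EWS sub-score as JSON."""
    return json.dumps({"client_id": client_id, "parameter": parameter,
                       "value": value, "score": score})

def publish_reading(broker_host: str, topic: str, payload: str) -> None:
    """Publish one reading; requires a reachable MQTT broker (not invoked here)."""
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client()
    client.connect(broker_host, 1883)
    client.publish(topic, payload)
    client.disconnect()

payload = make_payload("temp-01", "temperature", 36.4, 0)
print(payload)  # JSON string that would be published on a topic such as "ews/temperature"
```

On the actual devices this logic runs on the ESP8266/ESP32 (typically in C/C++ or MicroPython); the Python form here only illustrates the message flow.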
Table 1 Clinical response based on the patient’s risk of worsening

| Total score | Treatment |
|---|---|
| Score (0)—healthy | Monitoring every 8–12 h |
| Score (1–4)—fair | Report to the head nurse on duty; monitoring every 4–6 h |
| Score (3)—poor (with a single parameter) | Assessment by the doctor on duty; consultation with the patient’s doctor; monitoring at least every 1 h |
| Score (5–6)—serious | Assessment by the doctor on duty; consultation with the patient’s doctor and a specialist; monitoring at least every 1 h |
| Score (7)—critical | Assistance and continued monitoring by the doctor on duty and the head nurse on duty; consultation with the patient’s doctor and a specialist; consideration of treatment in the ICU |

Table 1 shows the monitoring of the patient’s risk of worsening along with the corresponding action or clinical response. The action or clinical response displayed depends on the level of risk of the patient’s condition worsening, adjusted to the established standards.
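The score-to-risk mapping of Table 1 can be sketched as a small function. The category labels and the single-parameter escalation rule follow the table; the function itself, including its signature, is our own illustration rather than the authors’ firmware logic.

```python
# Sketch of the score-to-risk mapping in Table 1. The category labels and the
# single-parameter escalation follow the table; the function is an illustration,
# not the authors' actual implementation.

def risk_level(total_score: int, max_single_param: int = 0) -> str:
    """Map an EWS total score (and highest single-parameter score) to a risk."""
    if total_score >= 7:
        return "critical"
    if total_score >= 5:
        return "serious"
    if max_single_param >= 3:
        return "poor"  # a single parameter scoring 3 escalates the response
    if total_score >= 1:
        return "fair"
    return "healthy"

print(risk_level(0))     # healthy
print(risk_level(2))     # fair
print(risk_level(4, 3))  # poor
print(risk_level(6))     # serious
```

Each risk label would then select the clinical response and monitoring interval in the right-hand column of Table 1.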
3 Experiment Result and Discussion
In the system design, testing is carried out to see the performance of the system when run continuously. System testing is done by taking risk monitoring data every hour on a person who is not a patient. The vital sign measurement results automatically displayed on the LCD during this test can be seen in Table 2. As Table 2 shows, five parameters are measured automatically and two are entered manually. The automatically measured parameters are oxygen saturation, respiratory rate, systolic blood pressure, pulse, and body temperature, while the two parameters entered manually using the LCD display device are oxygen administration and consciousness.

Table 2 Measurement results based on the developed EWS system (hourly monitorings 1–7)

| Parameter | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Oxygen administration | Air | Air | Air | Air | Air | Air | Air |
| Oxygen saturation (%) | 99 | 97 | 98 | 98 | 100 | 99 | 98 |
| Respiratory rate (breaths/min) | 12 | 15 | 13 | 17 | 17 | 18 | 17 |
| Systolic blood pressure (mmHg) | 109 | 108 | 109 | 103 | 101 | 104 | 104 |
| Heart beat (beats/min) | 88.43 | 43.83 | 80.18 | 59.61 | 69.9 | 69.76 | 77.28 |
| Temperature (°C) | 36.34 | 36.71 | 36.58 | 35.68 | 36.89 | 36.71 | 36.83 |
| Consciousness | Alert | Alert | Alert | Alert | Alert | Alert | Alert |

Table 3 Risk assessment results from the embedded EWS system (parameter scores per hourly monitoring 1–7)

| Parameter | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Oxygen administration | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Oxygen saturation | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Respiratory rate | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Systolic blood pressure | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Heart beat | 0 | 2 | 0 | 0 | 0 | 0 | 0 |
| Temperature | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| Consciousness | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total score | 0 | 2 | 0 | 1 | 1 | 1 | 1 |
| Risk | No risk | Fair | No risk | Fair | No risk | No risk | No risk |

As can be seen in Table 3, the oxygen administration parameter during the test is air, because the subject breathed ambient air without the help of an oxygen cylinder. The “alert” consciousness parameter is selected, indicating that the person is fully awake, can respond to sound, and has motor function. Based on the measurement data in Table 2, a risk assessment is carried out in the database, and the resulting deterioration risk monitoring is displayed on the website as shown in Table 3. Table 3 shows the results of a deterioration risk monitoring trial carried out for seven hours using measurement devices that send data every hour. The test was run from the moment the measuring devices’ batteries were full until the devices ran out of power and could no longer transmit data to the server, after seven hours. Based on the results of the risk monitoring trial, the highest risk level displayed by the website was the no-risk condition, indicating no risk of deterioration. The mild risk in the second monitoring comes from the pulse measurement score, which had a measurement error of 43.83 beats per minute (Table 2). This error was caused by a damaged battery that discharged abnormally: the sensor voltage dropped drastically to 0 V, reducing measurement accuracy. A battery change was therefore carried out in the fourth monitoring to overcome the measurement problem. In addition to the second monitoring, a measurement error also occurred in the fourth monitoring, causing the website to display a mild risk. That error occurred in the body temperature measurement of 35.68 °C recorded in Table 2: the device’s probe was not properly attached to the inside of the armpit but rather to the outside, which differs from the core body temperature, so the body temperature was not measured correctly. If these two technical measurement errors do not occur, the measurement results
obtained will be within the normal human temperature and heart rate ranges of 36.1–38 °C and 51–90 beats per minute. This is evident from the temperature and heart rate measurements at the subsequent monitoring times, which did not change significantly and remained in the normal range. It can therefore be said that the assessment by the designed system indicates that the person had no risk of deterioration during the system test. Meanwhile, at this trial stage a manual risk assessment is also carried out based on vital sign measurements using standard measurement tools and manual measurements, to compare the assessment results of the designed system with the manual implementation of the EWS. The vital signs measured with standard instruments include temperature, blood pressure, and oxygen saturation with heart rate. The standard instrument for temperature is an Omron MC-341 digital thermometer placed in the armpit; blood pressure is measured with an Omron model JPN500 digital sphygmomanometer [20]; and oxygen saturation and heart rate are measured with a Shenzhen Jiangnan Medical Technology model P-04 pulse oximeter. In addition to the standard instruments, some measurements are carried out and entered manually: the respiratory rate is measured manually per minute, while the oxygen administration and consciousness parameters are entered manually based on the condition of the person being monitored. The vital sign measurements using these standard tools can be seen in Table 4. Based on the manual risk assessment in Table 5, the resulting deterioration risk is a mild risk. Compared to the automatic risk assessment by the designed system, this manual assessment on average has a higher risk level, with a total score greater by 1. This is due to differences between the systolic blood pressure measured with the standard instrument and with the designed blood pressure measurement device. The magnitude of the difference can be seen in Table 6, which displays the percentage difference between the standard-tool measurements in Table 4 and the system measurements in Table 2.

Table 4 Measurement results based on standard measurement (hourly monitorings 1–7)

| Parameter | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Oxygen administration | Air | Air | Air | Air | Air | Air | Air |
| Oxygen saturation (%) | 98 | 96 | 97 | 97 | 97 | 98 | 97 |
| Respiratory rate (breaths/min) | 14 | 17 | 17 | 17 | 18 | 17 | 18 |
| Systolic blood pressure (mmHg) | 120 | 128 | 121 | 129 | 127 | 131 | 126 |
| Heart beat (beats/min) | 80 | 84 | 80 | 78 | 81 | 80 | 79 |
| Temperature (°C) | 36.2 | 36.4 | 36.3 | 36.8 | 36.8 | 36.5 | 36.3 |
| Consciousness | Alert | Alert | Alert | Alert | Alert | Alert | Alert |
Table 5 Risk assessment results applying the EWS manually (parameter scores per hourly monitoring 1–7)

| Parameter | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Oxygen administration | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Oxygen saturation | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Respiratory rate | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Systolic blood pressure | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Heart beat | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Temperature | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Consciousness | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total score | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Risk | Moderate | Moderate | Moderate | Moderate | Moderate | Moderate | Moderate |
Table 6 Percentage difference in vital signs measurement

| Monitoring | Temperature (%) | Oxygen saturation (%) | Heart beat (%) | Respiratory rate (%) | Systolic blood pressure (%) |
|---|---|---|---|---|---|
| 1 | −0.4 | −1.0 | −10.5 | 14.3 | 9.2 |
| 2 | −0.9 | −1.0 | 47.8 | 11.80 | 15.60 |
| 3 | −0.8 | −1.0 | 0.2 | 23.5 | 9.90 |
| 4 | 3.0 | −1.0 | 23.6 | 0.0 | 20.2 |
| 5 | −0.2 | −3.1 | 13.7 | 5.6 | 20.5 |
| 6 | −0.07 | −1.0 | 12.8 | 5.9 | 20.6 |
| 7 | −1.5 | −1.0 | 2.2 | 5.6 | 16.2 |
| Average difference | 1 | 1.30 | 15.80 | 9.5 | 16.2 |
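The entries in Table 6 are consistent with a percentage difference taken relative to the standard-instrument reading: diff% = (standard − system) / standard × 100. A short sketch reproducing two of the monitoring-1 entries (the formula is inferred from the table values, not stated explicitly by the authors):

```python
# The entries in Table 6 match a percentage difference taken relative to the
# standard-instrument reading (formula inferred from the tabulated values).

def pct_diff(standard: float, system: float) -> float:
    """Percentage difference of the system reading from the standard reading."""
    return (standard - system) / standard * 100

# Monitoring 1: respiratory rate (standard 14, system 12) and systolic blood
# pressure (standard 120, system 109); cf. Tables 2 and 4.
print(round(pct_diff(14, 12), 1))    # 14.3
print(round(pct_diff(120, 109), 1))  # 9.2
```

Under this reading, the “Average difference” row appears to average the absolute per-monitoring values.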
Steady et al.
The response time of each device in Table 7 shows that the system can update vital-sign data within seconds. The blood pressure, temperature, heart rate, and oxygen saturation measurement devices have very fast average response times, taking less than 1 s to send data from the sensor and display it on the website. The LCD display device also responds quickly, updating data in an average of 1.21 s. The slowest device is the respiratory-rate measurement device, with response times ranging from 5.2 to 10.46 s; this is due to the long process of reconnecting the device to the MQTT broker. Even so, the respiratory-rate device can be considered fast enough, since the patient's vital signs are taken every 1 h.
Table 7 Time response from the development system

| Input parameter | Average time response (s) | Minimum (s) | Maximum (s) | Standard deviation (s) |
| Systolic blood pressure | 0.44 | 0.27 | 0.6 | 0.15 |
| Temperature | 0.37 | 0.18 | 0.51 | 0.12 |
| Respiratory rate | 7.86 | 5.2 | 10.46 | 2.67 |
| Heart beat & oxygen saturation | 0.90 | 0.73 | 1.08 | 0.12 |
| LCD display | 1.21 | 1.2 | 1.22 | 0.01 |
Based on the discussion and analysis above, it can be concluded that the wireless Early Warning System (EWS) has been successfully designed and tested. The system can collect vital-sign data automatically and carry out a deterioration risk assessment in accordance with the measurement results. The test results also show that the system can display the patient's deterioration risk on the website and on the designed LCD screen device. The risk assessment differs slightly from the manual EWS because of differences in the accuracy of the blood pressure measurement tools; in practice, the blood pressure device can be re-calibrated, so that overall vital-sign data collection and risk assessment run well.
4 Conclusion

The wireless Early Warning System (EWS) for monitoring the condition of hospital patients has been successfully created using five devices for retrieving patient vital signs and a Raspberry Pi as a server. The patients' vital signs are retrieved automatically using ESP8266 and ESP32 microcontrollers, which read the measurements from the MAX30100 sensor, a piezoelectric sensor, the DS18B20 sensor, and a wrist blood pressure monitor. The wireless EWS design applies the MQTT communication protocol to send data from the vital-sign measurement devices to the server via a WiFi network. The system can display patient vital-sign measurement data and the results of the patient's deterioration risk assessment on the website. Testing on healthy, non-patient subjects shows that the total score in the deterioration risk assessment is 1 point lower than with manual EWS application.
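The risk assessment in Tables 5 and the automatic assessment both reduce to summing per-parameter scores and mapping the total to a risk band. A minimal sketch of that step follows; the function names and the band boundaries in `risk_level` are illustrative assumptions, not the paper's exact thresholds:

```python
def ews_total(scores):
    """Sum the per-parameter EWS scores into a total score."""
    return sum(scores.values())

def risk_level(total):
    """Map a total score to a risk band. These boundaries are assumed
    for illustration only, not taken from the paper's scoring table."""
    if total == 0:
        return "Low"
    if total <= 4:
        return "Moderate"
    return "High"

# Per-parameter scores from the first monitoring point of Table 5:
# only systolic blood pressure contributes 1 point.
scores = {"oxygen_administration": 0, "oxygen_saturation": 0,
          "respiratory_rate": 0, "systolic_bp": 1,
          "heart_beat": 0, "temperature": 0, "consciousness": 0}
print(ews_total(scores), risk_level(ews_total(scores)))
```

With the Table 5 scores this yields a total of 1 and a Moderate band, matching the table's bottom rows.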
Hexapod Robot with Indoor Path Planning Using ROS Navigation Stack on a Static Map

Denzel Polantika, Yusuf Averroes Sungkar, and Johannes
Abstract This robotic system is designed to develop a six-legged robot that navigates an indoor environment relying on a LIDAR sensor and the ROS platform (ROS Navigation Stack), so that the results of this research can be extended to more complicated space-exploration missions. The design involves adjustments to the two main system parts: the robot mobility system, supported by the Inverse Kinematics algorithm from the Phoenix Code, and the navigation capabilities, supported by the wheeled-robot navigation programs integrated in ROS. The analysis focuses on the navigation accuracy of the robot, i.e., the accuracy of the actual end coordinates compared with the set target coordinates, and the accuracy of the robot's rotational movement. Overall, the robot completes the navigation mission in 2 min 27 s in the KRSRI 2021 arena, with 10 successes out of 10 trials. The navigation accuracy of the system is 76.08% on the x-axis and 75.54% on the y-axis with respect to the target coordinate input to the system. The robot uses a maximum linear velocity of 0.12 m/s and angular velocity of 0.44 rad/s, which work well with the kinematics algorithm and the frame for navigating the arena.

Keywords ROS navigation stack · Hexapod · Inverse kinematics · Navigation · KRSRI
1 Introduction

The ROS (Robot Operating System) platform can help greatly in developing a physical robot. The open-source platform works well with wheeled robots, particularly the specified robot, the "turtlebots". Using open-source components and software means

D. Polantika (B) · Y. A. Sungkar · Johannes
Computer Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_52
the community of talented and dedicated developers can push this design further as new techniques and approaches become available [1]. It allows engineers to use algorithms built far earlier and updated along the way and apply them to the project at hand, while providing an easy interface to tweak the parameters exposed by the packages. Naturally, an open-source platform invites many engineers to create ROS packages. However, a few adjustments must be applied to our system when the robot being built differs from the norm: a change of physical form, general functions, parameter variations, algorithms, etc. In this case, the robot uses a legged-robot concept with a hexapod frame (six-legged robot). The original reference was a paper that uses small humanoid robots that can play football [2]. The robot must work in a tight indoor environment and be capable of navigating to all the targeted coordinates. These features are aimed at making a robot that can help or substitute human personnel in a Search and Rescue team, specifically in the autonomous navigation part. This robot development originated from the annual robotics contest in Indonesia called KRSRI (Kontes Robot Search and Rescue Indonesia). The navigation mission is to explore all the rooms within the map, and to get there the robot must be able to avoid obstacles placed on the path. While ROS provides most of the features, the system still has its own custom Inverse Kinematics for the robot's locomotion and its own "local planner", which follows a plan generated by the ROS Navigation Stack feature named "global planner". The ROS Navigation Stack is targeted at navigating an autonomous robot; the concept also includes mapping (HectorSLAM), matrix transformations, robot localization (AMCL), and path planning. The robotic system design in this experiment uses all the ROS Navigation Stack features mentioned above.
A further reference also uses SLAM to build a 2D map: "This paper has presented the construction of the four-wheeled Omnidirectional mobile robot. It shows the good efficient of Slam using Gmapping stack for building the 2D map" [3]. That algorithm achieved a 100% navigation success rate in two different environments with obstacles, with a maximum travel time of 156 s (about two and a half minutes) to complete navigation [1]. In this case, the ROS Navigation Stack's own Local Planner does not suit this robot design. Path planning is performed by combining GPP and LPP algorithms, where the global path planning algorithm uses the A* algorithm based on a graph-search structure and the local path planning algorithm uses the DWA algorithm [4]. While performing navigational tasks, robotic systems such as AVs make use of capabilities that involve modeling the environment, localizing the system's position within the environment, controlling motion, detecting and avoiding obstacles, and moving within dynamic contexts ranging from simple to overly complicated environments. The four general problems of navigation are (1) perception, (2) localization, (3) motion control, and (4) path planning [5]. The goal of the research and design is to see how well the ROS Navigation Stack and the custom Local Planner algorithm work together to finish the mission within the KRSRI 2021 arena. In this paper, we confirm the success rate of the system in finishing the mission by measuring the travel time to finish the mission and
how accurately the robot reaches the target coordinate as opposed to the ideal value, considering the tolerance parameter of the system's targeting algorithm.
2 Methods

The robot is aimed at the navigation mission inside the KRSRI 2021 arena. Besides the advantage of having a known static map, the design must comply with the characteristics of the arena: the robot must fit the arena (be able to move around), turn, and stop whenever the target coordinate has been reached. For that matter, the design must fit all the hardware components and fit the arena. Figure 1 explains the design and research flow for making the robot. It begins with determining the research goal, which then identifies what the research focuses on.
Fig. 1 Research flow chart
2.1 ROS Navigation Stack on ROS Master (PC)

A separate ROS machine, the PC, is used because the ROS Navigation Stack needs a much more powerful processor: it requires at least 4 GB of memory and a four-core processor, which distinguishes it from the Raspberry Pi Zero 2W used in the design. The ROS Navigation Stack on the PC is also used for the simulation part of the method. The design should work when the simulation works, because the simulation establishes which hardware is mandatory for the design. On the PC, ROS runs on Ubuntu 18.04 OS (Operating System) inside a Windows 10 virtual machine (VMware Workstation 16 Player).
2.2 ROS Navigation Stack Simulation Design

The simulation uses a robot provided in ROS called the "turtlebot", specifically the "Burger" model. The simulation environment is visualized in Fig. 2 and the robot in Fig. 3. The simulated turtlebot includes the hardware needed for navigation, so the design can be simplified by using this simulation. The most essential component is the 2D mapping sensor, the LIDAR: it can map an environment, in which the robot can then navigate (localization and exploration). This process determines which parameters are used to produce the "global planner", the plan that the "local planner" subscribes to.
Fig. 2 ROS TurtleBot3 World (simulation)
Fig. 3 TurtleBot3 “Burger”
2.3 Robot’s Hardware Design

From Fig. 1, the target dimensions of the robot must take the form of a circle that can fit all the halls and doors in the arena. Overall, the maximum radius of the robot can be determined as half of the smallest hall width within the arena, which is 450 mm. In addition, the LIDAR sensor must be placed on top of the robot, and the total height must stay below 280 mm. All these criteria ensure that the robot can move freely and that the LIDAR readings remain relevant. Since it was confirmed that a single sensor, the LIDAR, is sufficient, a design using the "Phoenix Lynxmotion" frame and the RPLIDAR A1M8 fits the criteria. The output is the interpolation of the robot's walking pattern.
2.4 ROS on Robot’s Raspberry Pi

ROS running on the Raspberry Pi Zero 2W is used only to access the sensor and actuators physically through the ROS framework, without the navigation program. That way, the sensor readings and the command inputs to the actuators can be exchanged over the ROS server. The concept is remote computing: the main computation is done either straight on the mobile processor or, in this case, on a remote computer connected through an access point.
2.5 Robot’s Kinematics Driver and Raspberry Pi ROS Interface Code

The interface is coded as a Python script that subscribes to a ROS topic and outputs a UART message to the ATMEGA328p. The code's flow is described in Fig. 4. The communication is based on a custom protocol that uses a string datatype to
Fig. 4 ATMEGA328p-PiZero2W interface
exchange information. The Pi sends a string consisting of two characters describing a command code, followed by the value corresponding to the code if the code needs one. For example, to turn on, the Pi sends the string "ON", and the ATMEGA328p must return the string "ON" to the Pi; the concept of a ping is used in this code. If the command carries a value, for example a command to stand at a 50 mm height, the string would be "AA50": "AA" is the stand command and 50 is the value part of the command.
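The string protocol described above can be sketched as a pair of pure-Python helpers; the function names are illustrative assumptions, and the actual serial transfer between the Pi and the ATMEGA328p (e.g. via a serial library) is deliberately left out:

```python
def encode_command(code, value=None):
    """Build a protocol string: a two-character command code, optionally
    followed by its value (e.g. 'AA' with 50 -> 'AA50')."""
    return code if value is None else f"{code}{value}"

def decode_command(message):
    """Split an incoming protocol string back into (code, value)."""
    code, rest = message[:2], message[2:]
    return code, (int(rest) if rest else None)

# 'AA50' commands the robot to stand at 50 mm; 'ON' carries no value
# and doubles as the ping whose echo confirms the link is alive.
print(encode_command("AA", 50), encode_command("ON"))
print(decode_command("AA50"), decode_command("ON"))
```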
2.6 ROS Master and ROS Raspberry Pi Connection

The main computing node, separated from the mobile electronic components, is connected through an access point. As mentioned before, this is the final component that puts the entire system together before the hardware can serve as the "local planner" vessel. To connect them, the exchanged information is the IP address of the master and slave devices. This is done by finding the ROS IP for
each device by typing "ifconfig" on each Linux-based OS (Raspbian Buster for the Raspberry Pi and Ubuntu 18.04 for the PC master).
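The master/slave wiring described above is conventionally expressed through the standard ROS environment variables on each machine; a sketch, where the two addresses (192.168.0.10 for the PC master, 192.168.0.11 for the Pi) are hypothetical examples standing in for whatever `ifconfig` reports:

```shell
# On the PC (ROS master), after finding its address with `ifconfig`:
export ROS_MASTER_URI=http://192.168.0.10:11311
export ROS_IP=192.168.0.10
roscore &

# On the Raspberry Pi, point at the same master and advertise its own IP:
export ROS_MASTER_URI=http://192.168.0.10:11311
export ROS_IP=192.168.0.11
```

With both variables set, nodes launched on the Pi register with the master on the PC, and topics flow in both directions over the access point.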
2.7 ROS Navigation Stack "Local Planner" Code

The aforementioned "Local Planner" is how the robot follows the most efficient path created by the global plan while keeping a decent amount of speed. Path planning algorithms are used by mobile robots, unmanned aerial vehicles, and autonomous cars to identify safe, efficient, collision-free, and least-cost travel paths from an origin to a destination [5]. The algorithm prevents the robot from reaching full speed, because full speed amplifies the error. The approach is to apply a linear regression to the points of the path planned by the "global plan". The global plan was based on a paper whose goal was to develop a GIS (Geographic Information System) creating the most efficient path using the A* and Dijkstra's algorithms in Kota Bandar Lampung [6]. The number of points is pre-declared by a method parameter, which defaults to 100. This divides the overall points of the path into a more suitable set; for example, with a pre-declared value of 100 and a real path of 700 points, the points are divided by a factor of 7 (every 7th point is used). This eases the optimization algorithm in predicting when the robot should turn. The optimization algorithm uses an RMSE approach that compares the linear regression with the actual values: the bigger the RMSE value, the less ready the robot is to turn. The overall flow is shown in Fig. 5. This is an update of the paper that uses the DWA local planner, which states: "The robot replans the path to finally reach the target point. The global path planning algorithm uses the algorithm based on the graph search structure, and the local path planning algorithm uses the DWA algorithm" [4].
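The regression-plus-RMSE turn check described above can be sketched in pure Python. The function names, the sample path points, and the turn threshold are illustrative assumptions, not the paper's actual code: a straight stretch of the global plan fits a line almost perfectly (RMSE near zero, ready to turn is irrelevant), while a bend in the window produces a large RMSE (the robot is not yet ready to turn):

```python
def fit_line(points):
    """Least-squares line y = a*x + b through (x, y) path points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    var = sum((p[0] - mx) ** 2 for p in points)
    cov = sum((p[0] - mx) * (p[1] - my) for p in points)
    a = cov / var if var else 0.0
    return a, my - a * mx

def path_rmse(points):
    """RMSE between the path points and their linear-regression fit.
    Near zero for a straight segment; large when the window contains a
    bend, i.e. the robot should not turn yet."""
    a, b = fit_line(points)
    mse = sum((y - (a * x + b)) ** 2 for x, y in points) / len(points)
    return mse ** 0.5

straight = [(i, 2 * i) for i in range(7)]        # collinear segment
bend = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # segment with a corner
print(path_rmse(straight), path_rmse(bend))
```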
2.8 Performance Test

The code produced after programming the "local planner" is tested for its success rate in completely navigating, and for how accurate the final coordinate is compared with the original command. The success rate is measured by running the robot through the full course of the mission: explore "Ruang 1", then "Ruang 2", and finally go back to the start position (the area marked 'H'). The checkpoints are described in Fig. 1. This experiment is run 10 times and outputs the mean travel time and the success-rate percentage. The 2nd experiment, the accuracy test, commands the robot to navigate to a given coordinate on the 2D static map; in this case, the target coordinate is (2.31026, −2.68284). The 2D static map is shown in Fig. 6.
Fig. 5 Robot’s local planner flow chart

Fig. 6 KRSRI 2021 2D static map
3 Result and Discussion

3.1 ROS Navigation Stack Simulation

The ROS Navigation Stack works with Dijkstra's algorithm for the "global planner" and Adaptive Monte Carlo Localization to localize the robot inside a static map. The mapping process uses the "HectorSLAM" algorithm for its configurable particle parameters and its ease of use. Simultaneous Localization and Mapping (SLAM) is a computing problem of building or updating a map of an unknown environment while simultaneously tracking one's own location within it [7].
3.2 Hardware Design

The robot's final diameter is 200 mm (about 7.87 in) with an operating height of 250 mm (about 9.84 in). The design is specified in Fig. 8, while its 3D design is shown in Fig. 7.

Fig. 7 Robot’s 3D design
Fig. 8 Robot’s real-life form
3.3 Robot’s Kinematics Interpolation

The interpolation equations are specified in (1), for the linear velocity, and (2), for the angular velocity:

f(x) = −399937·x^6 − 2.313×10^7·x^5 + 1988.89·x^4 + 457542·x^3 − 0.868933·x^2 − 3125.12·x + 128    (1)

f(x) = 14306.4·x^5 − 14.7768·x^4 − 4275.81·x^3 + 0.50238·x^2 + 511.622·x + 128    (2)
Both equations are compared against the actual values, and the resulting RMSE is shown in Fig. 11. In those figures, three lines are provided: the blue line is the actual value, the orange line the interpolated value, and the red line the error. The RMSE states a total error of 23.64% for the linear velocity and 14.27% for the angular velocity. Both values are inflated by spikes at the first and last samples; with those cut off, the linear velocity interpolation (1) has an error of 9.27% and the angular velocity interpolation (2) an error of 5.6%.
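Polynomials such as (1) and (2) are most simply evaluated with Horner's rule. A minimal sketch follows, with the coefficients transcribed from the equations above; only the x = 0 case is checked, where both curves reduce to their constant term, 128:

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest degree."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Equation (1): linear-velocity interpolation, degree 6.
LINEAR = [-399937, -2.313e7, 1988.89, 457542, -0.868933, -3125.12, 128]
# Equation (2): angular-velocity interpolation, degree 5.
ANGULAR = [14306.4, -14.7768, -4275.81, 0.50238, 511.622, 128]

# At x = 0 both interpolations return the constant term 128.
print(horner(LINEAR, 0.0), horner(ANGULAR, 0.0))
```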
3.4 Performance Test

The 1st experiment tests the robot's travel time in finishing one full mission, that is, navigating from the start position to room 1, room 2, and back to the starting position. The map used within the ROS Navigation Stack is shown in Fig. 6. Table 1 shows the test drives of the robot over ten trials. The average travel time for this experiment is 2:27, with a success rate of 100%, or 10 successes out of 10 trials. The 2nd experiment compares the commanded coordinate with the actual coordinate the robot travelled to; the orange dots are the ideal coordinates while the blue dots are the real coordinates. The data are split into two graphs, one per axis of the 2D map, shown in Figs. 9 and 10. Those figures also show the range of tolerable error, the standard deviation (σ), drawn as black lines on each data point along the x-axis and valued at 0.0227 m in Fig. 11. From the data, most points lie within the standard deviation: 76.1% of the data span on the x-axis and 75.54% on the y-axis, to be exact. For the remaining 23.9% and 24.46%, the normal distribution frequency reaches a peak of 17.6 units of frequency for the x-axis coordinate and 18.7 units for the y-axis coordinate.
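The within-one-σ percentages above come from counting how many landing coordinates fall inside mean ± σ. A sketch of that computation follows; the sample x-coordinates are hypothetical values around the 2.31 m target, not the experiment's raw data:

```python
from statistics import mean, pstdev

def within_one_sigma(samples):
    """Fraction of samples inside [mean - sigma, mean + sigma]."""
    m, s = mean(samples), pstdev(samples)
    inside = [v for v in samples if m - s <= v <= m + s]
    return len(inside) / len(samples)

# Hypothetical x-axis landing coordinates around the 2.31026 m target.
xs = [2.31, 2.32, 2.30, 2.31, 2.29, 2.33, 2.31, 2.31, 2.36, 2.26]
print(within_one_sigma(xs))
```

For these ten samples, the two outliers (2.36 and 2.26) fall outside mean ± σ, giving a fraction of 0.8.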
Table 1 Navigation success rate

| No | Travel completion time |
| 1 | 2:29 |
| 2 | 2:26 |
| 3 | 2:25 |
| 4 | 2:26 |
| 5 | 2:30 |
| 6 | 2:28 |
| 7 | 2:27 |
| 8 | 2:30 |
| 9 | 2:29 |
| 10 | 2:29 |

Fig. 9 Coordinate navigation target experiment X-axis

Fig. 10 Coordinate navigation target experiment Y-axis

Fig. 11 Coordinate navigation target experiment error distribution
4 Conclusion

Overall, the robot manages to complete the navigation mission in 2 min 27 s in the KRSRI 2021 arena with 10 successes out of 10 trials. The navigation accuracy of the system is 76.08% on the x-axis and 75.54% on the y-axis with respect to the target coordinate input to the system. The robot uses a maximum linear velocity of 0.12 m/s and angular velocity of 0.44 rad/s, which work well with the kinematics algorithm and the frame for navigating the arena. The global planner used in the system is Dijkstra's algorithm, along with AMCL for localization, straight from the ROS framework. Meanwhile, the local planner is a ROS Python script that subscribes to the global planner, with a parameter for how many points to keep out of the total coordinate length provided by the global planner (steps_ahead = 100). In addition, there is a goal tolerance parameter, which lets the robot finish the navigation within a tolerance of the goal (goal_tolerance = 0.2 m).
References

1. Caracciolo M (2021) Autonomous navigation system from simultaneous localization and mapping, vol 14, pp 1–5
2. Burchardt H, Salomon R (2006) Implementation of path planning using genetic algorithms on mobile robots. In: Proceedings of the 2006 IEEE international conference on evolutionary computation, Vancouver
3. Manh TH, Kiem NT, Duc DN (2020) An approach to design navigation system for omnidirectional mobile robot based on ROS. Int J Mech Eng Robot Res 9:2–8
4. Cai H (2022) Autonomous navigation system for exhibition hall service robots via laser SLAM, vol I. Springer, Jiangxi, pp 2–21
5. Karur K (2021) A survey of path planning algorithms for mobile robots. In: 2019 IEEE international conference on autonomous robot systems and competitions (ICARSC), pp 448–468. IEEE, Porto
6. Heni Sulistiani DAW (2018) Perbandingan algoritma A* dan Dijkstra dalam Pencarian Kecamatan dan Kelurahan di Bandar Lampung. In: Konferensi Nasional Sistem Informasi. STMIK, Pangkalpinang
7. Saat S (2020) HectorSLAM 2D mapping for simultaneous localization. J Phys: Conf Ser. IOP, Indonesia
Detect the Use of Real-Masks with Machine Acquiring Using the Concept of Artificial Intelligence

Bambang Dwi Wijanarko, Dina Fitria Murad, Dania Amelia, and Fitri Ayu Cahyaningrum
Abstract The global COVID-19 pandemic emerged as an outbreak of a dangerous disease attacking and spreading badly all over the world. In this pandemic situation, state governments, social authorities, companies, and workplaces must follow the regulations determined by the government to improve security and prevent the increase in COVID-19 transmission, requiring community discipline in wearing masks. Currently developing artificial intelligence (AI) technology can be used to identify objects, convert objects descriptively, analyze the output, and train the machine. To replace mask raids that are carried out manually, object detection of masks (Mask-on) is modeled using YOLO (You Only Look Once) version four, a system that can detect objects in real time. The model uses 2000 datasets in image format, with a composition of 90% training data and 10% testing data. From the test results, detection performance across two tests gives good results for the YOLOv4 model, with an accuracy value of 93.495%. It can therefore be concluded that the model in this study runs optimally in detecting facial images with and without a face mask.

Keywords Object detection · YOLO · Deep learning · CNN · Mask
B. D. Wijanarko · D. Amelia · F. A. Cahyaningrum Computer Science Department, BINUS Online Learning, Bina Nusantara University, Jakarta, Indonesia D. F. Murad (B) Information Systems Department, BINUS Online Learning, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_53
1 Introduction

Background

The Coronavirus disease 2019 (COVID-19) became a world health problem in early 2020. The first case of COVID-19 was discovered in Wuhan, China on December 31, 2019, with an unknown etiology. COVID-19 cases increased quite rapidly; in less than one month the disease had spread to various countries [1]. The World Health Organization (WHO) designated COVID-19 as a Public Health Emergency of International Concern (PHEIC) on January 30, 2020. Transmission of COVID-19 can occur through droplets released from infected individuals when talking, coughing, or sneezing that reach the eyes, nose, and mouth. Transmission can also occur through objects or surfaces contaminated by droplets from infected people. The spread of COVID-19 can be prevented by early detection, isolation, and basic protection such as maintaining a minimum distance of 2 m, washing hands frequently, and using masks in crowded areas or places at risk of transmission. The use of masks serves as source control, preventing infected users from spreading the virus to others, and as a preventive measure, protecting uninfected users against virus exposure [2]. To prevent wider spread, the Indonesian government made many changes to the state order, which were followed by local governments. Law enforcement against people who do not use masks is carried out by the Civil Service Police Unit (Satpol PP), which holds mask raids and business raids at predetermined points with the assistance of the TNI, POLRI, and related agencies. In one day, the raids are carried out twice, for 3 h at a time. When residents are caught not wearing masks, or business owners such as cafe operators fail to reprimand people who do not use masks, civil investigators (PPNS) from the Satpol PP check the identity of the violator or the business owner, interrogate them, and impose sanctions if the violation is proven.
State of the Art

Artificial Intelligence (AI) offers a variety of transformative potential and other opportunities as a substitute for human tasks. AI helps minimize human involvement in activities across various industrial, intellectual, and social applications [3]. Currently developing AI technology, such as computer vision learning [4] and face detection systems with expression recognition [5, 6], is used to identify objects, convert objects descriptively, and analyze the output. For image identification [7] or image detectors, hate speech detection [8, 9], and the identification of human objects [5], numerical data is used. Vision learning is closely related to the YOLO method for each type of image detector. Most previous detection systems used a classifier or localizer to perform detection, applying a model to the image at multiple locations and scales and assigning a value to the image as material for detection [10, 11]. Meanwhile, the YOLO method uses a very different approach from the previous algorithms: it applies a single convolutional neural network to the entire image. This
Detect the Use of Real-Masks with Machine Acquiring Using …
network divides the image into regions and then predicts bounding boxes and probabilities; each bounding box is weighted by the predicted class probabilities. WHO has declared that masks should be worn in the community; with the YOLO method, mask use can be detected from whether the mouth region of the face is covered. Computer vision is part of deep learning, built especially around convolutional neural networks, whose key property is that they can exploit high-end Graphics Processing Unit (GPU) configurations to process images and video in real time.

Research Purpose

Although several face detection applications have been developed, this proposed experimental study differs from previous studies. In particular, the proposed mask detection application (Mask On) is a face mask detection application with a specific task: to accurately detect faces and classify each face as wearing or not wearing a mask. The proposed system can also detect faces with various facial movements in real time.
2 Literature Review

YOLO stands for "You Only Look Once". It is a type of algorithm model whose advantage is the ability to detect objects in an image [12], as shown in Fig. 1. YOLO can identify objects with high accuracy, for example detecting license plates in real time using Fast YOLO and YOLOv2 with a CNN approach [13]; similar work with different accuracy was reported in [14].
3 Research Method

In particular, the proposed mask detection application (Mask On) is a face mask detection application with a specific task: to accurately detect faces and classify each face as wearing or not wearing a mask. The proposed system can also detect faces with various facial movements in real time. This proposed research consists of several stages [15]:
1. Data Acquisition
In data-driven research such as machine learning and deep learning, data is very important: the more data there is, the higher the success rate. The YOLO method applied in the face mask detection application (Mask On) requires a lot of data with correct annotations. In developing this application, we collected facial image data with and without masks as the dataset needed for
B. D. Wijanarko et al.
Fig. 1 YOLOv4 backbone architecture
processing. The first dataset is sourced from the open-source Kaggle.com site, with face data with and without masks totalling 1500 images, balanced between the two classes at 750 images each. The second dataset was gathered by organic search through Google Images, collecting around 1000 images in the categories of faces without masks and faces with masks.
2. Data Pre-processing
The collected data is first checked through manual quality control so that the dataset is as clean as desired. Images removed from the with-mask class are those where the mask is worn incorrectly or only partially; these are moved to the without-mask class, so that incorrect mask use is expected to be detected as without mask.
3. Image Annotation (Image Labeling)
Labeling the images makes the data more organized and structured before it enters the training process. Labeling is done with a program called LabelImg, open-source software for tagging image data, written in the Python programming language. LabelImg supports the PASCAL VOC tagging format stored as .xml files and the YOLO format stored as .txt files. For labeling, the first class is class 0 (with mask) and the second class is class 1 (without mask).
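As an illustration, the YOLO .txt annotations described above can be read back into pixel coordinates as follows. This is a minimal sketch; the label line and image size are hypothetical examples, not taken from the actual dataset.

```python
# Sketch of reading the YOLO .txt annotation format produced by LabelImg
# (class 0 = with mask, class 1 = without mask, as described above).
# Each line is: <class_id> <x_center> <y_center> <width> <height>,
# with coordinates normalized to [0, 1] relative to the image size.

def parse_yolo_label(line, img_w, img_h):
    """Convert one YOLO label line to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    parts = line.split()
    cls = int(parts[0])
    xc, yc, w, h = (float(v) for v in parts[1:5])
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return cls, x_min, y_min, x_max, y_max

# Hypothetical example: a "with mask" face centred in a 416 x 416 image
cls, x0, y0, x1, y1 = parse_yolo_label("0 0.5 0.5 0.25 0.25", 416, 416)
print(cls, x0, y0, x1, y1)  # 0 156.0 156.0 260.0 260.0
```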
Table 1 Sample dataset

  Class         | Description
  With mask     | Face with the correct use of the mask
  Without mask  | Face without a mask or using a mask that is not correct

Table 2 Determination of training and testing data

  Class         | Training | Testing | Total
  With mask     | 900      | 100     | 1000
  Without mask  | 900      | 100     | 1000
  Total         | 1800     | 200     | 2000
4. Determination of Training Dataset
Training data is the collection of data used by the program in the learning and training stages to create a YOLO model from the annotated data. The dataset takes the form of coordinates generated from the image annotations, which are then trained so that YOLO can recognize the specified objects. Object detection is then performed on the image data of faces wearing and not wearing masks; part of the data comes from the open-source collection on the Kaggle.com site. The dataset is divided into two parts, train data and test data (see Table 2), with 90% for training and 10% for testing.
5. Training Data
The data training carried out in the Mask On application uses the YOLO (You Only Look Once) method, a real-time object detection system that can identify objects more quickly and precisely than other detection systems. YOLO works on the basis of a convolutional neural network: it learns during training and testing time to encode class information and object features. Training uses a backbone, the convolutional network that extracts features from the input image. The backbone used in the Mask On application, CSPDarknet53, is a unique backbone that increases the CNN's learning capacity; it was used to enlarge the receptive field and distinguish critically important context features. Before training starts, the previously labeled datasets are put into one folder. The next step is to set up a file with the extension .data, which is used for the training process and contains:
(a) Classes: the number of classes
(b) Train: path to the file train.txt
(c) Valid: path to the file test.txt
(d) Names: path to the obj.names file
(e) Backup: folder where model weights are saved
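The 90/10 split and the .data configuration described above can be sketched as follows. This is a minimal illustration under assumptions: the file names and directory layout are hypothetical, not the actual project paths.

```python
import random

def split_dataset(image_paths, train_ratio=0.9, seed=42):
    """Shuffle the annotated image list and split it 90/10 into train/test,
    as done for the 2000-image Mask On dataset (1800 train, 200 test)."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_ratio)
    return paths[:n_train], paths[n_train:]

# Hypothetical file names standing in for the real dataset
images = [f"data/obj/img_{i:04d}.jpg" for i in range(2000)]
train, test = split_dataset(images)
print(len(train), len(test))  # 1800 200

# Illustrative contents of the .data file described in (a)-(e);
# the paths are assumptions for the sketch:
obj_data = """classes = 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
"""
```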
6. Take a Picture
In this study, real-time image and video capture is carried out using a webcam connected to the YOLO algorithm on the Google Colab platform. The video source is read frame by frame. Each frame is processed by the model at its original height and width (for example, 1080 pixels wide and 1920 pixels high), and bounding boxes are drawn for the detected objects (faces with masks or faces without masks). The final result is a new video with this visualization, encoded with MPEG-4; the input video is not modified in any way. This section is divided into two parts: first the YOLOv4 setup, then the application of the YOLOv4 method in the mask detection model. The first YOLOv4 configuration created is the file "obj.names", which contains the names of the classes the model should detect. A file is then created containing the data objects to be detected. The backup folder stores several files, such as the training data directory, the validation data, and "obj.names". Applying YOLOv4: the input to this application is an image passed into the YOLOv4 model. The object detector divides the input image into several grids, finds the coordinates present in the image, and analyzes the detected target objects.
7. Evaluation
Evaluation measurement is the proposed test, which then allows a comparison between the methods used. The evaluation stage is the last stage of this research, where the results and conclusions are obtained. The evaluation methods used in this research are:
a. Confusion matrix, a table that states the number of test samples that are correctly classified and the number that are incorrectly classified. Recall and precision are ratios of predictions with respect to the positive class, serving as measures of how precise and complete the classification is.
The following reference confusion matrix shows the predictions and the actual conditions of the data generated by the algorithm used.
b. Mean Average Precision (mAP), a metric for measuring object detector accuracy by averaging the maximum precision at different recall values, often called AP (Average Precision); mAP is simply the mean of the AP over all classes.
c. Intersection over Union (IoU), a technique to measure how much two regions overlap, used to measure the quality of the object detector's predictions. The two regions are the ground truth and the predicted region.
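The IoU metric described in point (c) can be computed as follows; a minimal sketch with boxes given as corner coordinates (the example box values are hypothetical).

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x_min, y_min, x_max, y_max):
    IoU = area(ground truth AND prediction) / area(ground truth OR prediction)."""
    ix0 = max(box_a[0], box_b[0])
    iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2])
    iy1 = min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two hypothetical boxes overlapping over half their width
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # ≈ 0.3333
```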
4 Result and Discussion

System testing is used to assess the performance of the trained model; the performance assessment shows the level of accuracy achieved by the training. Training configures the YOLO architecture, which is trained at each specified input size. For the mask detection application with two data classes (with mask and without mask), the number of filters in each detection layer is:

    Filters = (2 + 5) × 3 = 21        (1)
The batch and subdivision values describe the number of mini-batches in each batch. A batch of 64 contains 64 images for one iteration; a subdivision of 8 splits the batch into 8 mini-batches of 64/8 = 8 images each, which are sent to the GPU for processing. This is repeated 8 times until the batch is complete, and a new iteration then starts with 64 new images. The following are the results of training experiments conducted with several different datasets, aimed at obtaining high accuracy and the expected validation loss. The first training uses a learning rate of 0.001 with a dataset of 1500 images, consisting of 750 faces wearing masks and 750 faces not wearing masks; the second training uses the same learning rate with a dataset of 2000 images, consisting of 1000 of each. Table 3 contains the network values used in these trainings. Of the two trainings in this study, the highest mean average precision, 98.89, was obtained in the second training, which ran for 5 h but resulted in an average loss of 1.32. From the network parameters in Table 3, the training performance in Table 4 was produced. For data prediction, tests were run via a live webcam. Two experiments were carried out: in the first, with a face actually wearing a mask, the detection system successfully classified the face as wearing a mask with an accuracy rate of 85.85% (Fig. 2); for a face not wearing a mask, the system correctly classified it as not wearing a mask with an accuracy rate of 72.79%.
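The filter count from Eq. (1) and the batch/subdivision arithmetic above can be checked with a short sketch (a minimal illustration, not part of the original implementation; function names are ours):

```python
def yolo_head_filters(num_classes, boxes_per_cell=3):
    """Filters in the layer before each YOLO detection head:
    (classes + 5 box attributes: x, y, w, h, objectness) * anchors per cell."""
    return (num_classes + 5) * boxes_per_cell

def images_per_minibatch(batch, subdivisions):
    """Each iteration processes `batch` images in `subdivisions` GPU passes."""
    return batch // subdivisions

print(yolo_head_filters(2))         # 21, matching Eq. (1) for the two mask classes
print(images_per_minibatch(64, 8))  # 8 images sent to the GPU at a time
```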
The following second experiment was carried out on two people in one webcam, with one face actually wearing a mask and the other not wearing a mask, and the detection

Table 3 YOLOv4 network parameter training

  Training number | Total dataset | Total class | Iteration number | Image size | Batch size | Subdivision | Learning rate
  1               | 1500          | 2           | 1100             | 416 × 416  | 64         | 32          | 0.001
  2               | 2000          | 2           | 1200             | 416 × 416  | 64         | 32          | 0.001
Table 4 Comparison of YOLOv4 performance results with different training

  Metric       | Training 1 | Training 2
  mAP          | 82.53      | 98.89
  P            | 0.76       | 0.93
  R            | 0.84       | 0.99
  F1           | 0.80       | 0.96
  TP           | 181        | 129
  FP           | 12         | 11
  TN           | 95         | 101
  FN           | 6          | 3
  IoU          | 57.15      | 71.68
  Average loss | 2.53       | 1.32
Fig. 2 Single mask
system managed to classify both correctly according to the actual image input, with an accuracy rate of 80.1% for the face wearing a mask and 81.92% for the face not wearing a mask (Fig. 3). In the tests via a live webcam, there is a slight difference in accuracy from the previous training results. This happens because there is a reduction in image accuracy due to image conversion with a live

Fig. 3 Multiple mask
Table 5 Confusion matrix from the results

                     | Actual values
  Predicted values   | With_mask (0) | Without_mask (1)
  With_mask (0)      | 129           | 11
  Without_mask (1)   | 5             | 101
webcam, which can cause differences in the level of accuracy, but the face detection system in this study can still classify the two different types of objects properly and correctly. In this study, a loss of 1.3279 was obtained at epoch 1800/64 = 28, with TP = 129, FP = 11, FN = 5, TN = 101, an F1-score of 0.96, and an average IoU of 71.68%, as shown in the test results. From these results, the confusion matrix can be constructed as in Table 5. Table 5 reflects the dataset split of 90% training data and 10% testing data, with the corresponding accuracy, precision, and recall values. The accuracy value is obtained from the confusion matrix table with the formula:

    Accuracy = (TP + TN) / (TP + FP + FN + TN)
             = (129 + 101) / (129 + 11 + 5 + 101)
             = 230 / 246 = 0.9349 ≈ 93%        (2)
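For reference, the metrics can be recomputed directly from the Table 5 counts. This is a sketch; note that the precision and recall obtained this way differ slightly from the rounded values reported in Table 4, which may come from a different evaluation pass.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Counts from Table 5: TP = 129, FP = 11, FN = 5, TN = 101
acc, p, r, f1 = classification_metrics(tp=129, fp=11, fn=5, tn=101)
print(round(acc, 4))  # 0.935, the accuracy in Eq. (2)
```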
The researcher also tested the effect of the amount of training data; to test this effect, the dataset was divided into two sizes, 1500 and 2000 images. The results of the trial can be seen in Table 6 and are plotted in Fig. 4.

Table 6 Training effect of total data

  Dataset | Loss validation | mAP (%)
  1500    | 2.53            | 82
  2000    | 1.32            | 99.6

Fig. 4 Effect of total training data
With the results obtained, it is proven that the You Only Look Once (YOLO) model successfully detects faces with and without masks. The more training data used, the higher the accuracy, because the model can better learn the patterns of the input images, and so the accuracy of the detection process improves.
5 Conclusion

The conclusions that can be drawn from this object detection research are as follows. The deep learning model built with YOLOv4, with the same composition for each dataset class and a split of 90% training data and 10% test data, achieves an accuracy of 0.9349 (about 93%) and an IoU of 71.68%; from these results it can be concluded that both the accuracy and the IoU of this method are good. In this study, the number of training images is 1800; however, the system takes longer to detect all the images being trained. The percentage of accuracy obtained proves that the YOLOv4 detection model successfully detects faces wearing masks properly. The more training data used, the higher the accuracy, and thus the better the detection process.
References

1. Ri KK (2020) Pedoman Pencegahan dan Pengendalian Corona Virus Diseases (Covid-19), vol 5, 178
2. World Health Organization (2020) Anjuran mengenai penggunaan masker dalam konteks COVID-19. https://www.who.int/docs/default-source/searo/indonesia/covid19/anjuran-mengenai-penggunaan-masker-dalam-konteks-covid-19-june-20.pdf?sfvrsn=d1327a85_2
3. Dwivedi YK et al (2021) Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int J Inf Manage 57. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
4. Parameswaran NS, Venkataraman D (2019) A computer vision based image processing system for depression detection among students for counseling. Indones J Electr Eng Comput Sci 14(1):503–512. https://doi.org/10.11591/ijeecs.v14.i1.pp503-512
5. Owayjan M, Achkar R, Iskandar M (2016) Face detection with expression recognition using artificial neural networks. In: Middle East conference on biomedical engineering (MECBME), pp 115–119. https://doi.org/10.1109/MECBME.2016.7745421
6. Fernando E, Andwiyan D, Fitria Murad D, Touriano D, Irsan M (2018) Face recognition system using deep neural network with convolutional neural networks. J Phys: Conf Ser 1235(1). https://doi.org/10.1088/1742-6596/1235/1/012004
7. Sethy PK, Negi B, Behera SK, Barpanda NK, Rath AK (2017) An image processing approach for detection, quantification, and identification of plant leaf diseases—a review. Int J Eng Technol 9(2):635–648. https://doi.org/10.21817/ijet/2017/v9i2/170902059
8. Isnain AR, Sihabuddin A, Suyanto Y (2020) Bidirectional long short term memory method and Word2Vec extraction approach for hate speech detection. IJCCS (Indonesian J Comput Cybern Syst). https://doi.org/10.22146/ijccs.51743
9. Gomez R, Gibert J, Gomez L, Karatzas D (2020) Exploring hate speech detection in multimodal publications. https://doi.org/10.1109/WACV45572.2020.9093414
10. Corazza M, Menini S, Cabrio E, Tonelli S, Villata S (2020) A multilingual evaluation for online hate speech detection. ACM Trans Internet Technol. https://doi.org/10.1145/3377323
11. Arango A (2020) Language agnostic hate speech detection. https://doi.org/10.1145/3397271.3401447
12. Plastiras G, Kyrkou C, Theocharides T (2018) Efficient convnet-based object detection for unmanned aerial vehicles by selective tile processing. ACM Int Conf Proc Ser. https://doi.org/10.1145/3243394.3243692
13. Laroca R, Severo E et al (2018) A robust real-time automatic license plate recognition based on the YOLO detector
14. Chen RC (2019) Automatic license plate recognition via sliding-window darknet-YOLO deep learning. Image Vis Comput 87:47–56
15. Setiyono B, Amini DA, Sulistyaningrum DR (2021) Number plate recognition on vehicle using YOLO—Darknet. J Phys Conf Ser 1821(1). https://doi.org/10.1088/1742-6596/1821/1/012049
Development of Indoor Positioning Engine Application at Workshop PT Garuda Maintenance Facilities Aero Asia Tbk

Bani Fahlevy, Dery Oktora Pradana, Maulana Haikal, G. G. Faniru Pakuning Desak, and Meta Amalya Dewi

Abstract Information systems are part of the development of information technology. The role of information systems in an organization cannot be doubted and can even drive rapid change in various areas of the organization. The purpose of this study is to develop an information system that makes it easier for the company to monitor and control the location of aircraft engines; at present this monitoring and control work is done manually using a spreadsheet, which hampers updating information on the whereabouts of the engines. The methodology used in this study is the waterfall approach, in which the stages are carried out in a structured manner from planning and analysis through design and implementation. The design produces a web-based information system that makes it easier for the company to monitor and control the location of aircraft engines.

Keywords Information system · Monitoring · Controlling · Waterfall
1 Introduction

The inventory system is used by companies to help determine the stock of goods [1] as part of their business processes. This system plays an important role in ensuring that operational activities continue to run well [2]. Monitoring and control also support the management of company assets by utilizing information technology in the decision-support process for developing information systems [3]. PT Garuda Maintenance Facility Aero Asia Tbk (GMF) is one of the largest aircraft maintenance companies [4] in Indonesia, providing integrated solutions for customers around the world. The company services aircraft of various types and is one of the largest aircraft maintenance facilities in Asia [5]. As a company engaged in the MRO (Maintenance, Repair, and Overhaul) sector, aircraft engines are

B. Fahlevy · D. O. Pradana · M. Haikal · G. G. F. P. Desak · M. A. Dewi (B) Information Systems Department, BINUS Online Learning, Bina Nusantara University, Jakarta, Indonesia. e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_54
B. Fahlevy et al.
Fig. 1 Spreadsheet monitoring and control
an asset that requires consistent handling. In carrying out its business processes, monitoring and controlling aircraft engines at PT Garuda Maintenance Facility Aero Asia Tbk (GMF) still relies on spreadsheets. The spreadsheet file consists of a workshop map showing the identity and location of each aircraft engine, together with a list of all aircraft engines in the workshop and the necessary details. The current system still uses a manual method to update the list of engines and their locations. With frequent movement of engine locations and repetitive handling that must be performed on time, the manual control system creates additional workload for the related units, and the slow data update process often results in inconsistent data and missed repetitive handling steps (Fig. 1). Based on these problems, it is necessary to develop an information system that allows engine data and locations to be edited more interactively and information to be displayed quickly, to support business processes at PT Garuda Maintenance Facility Aero Asia Tbk (GMF). For this reason, this research was carried out to produce a web-based Indoor Positioning Engine information system at PT Garuda Maintenance Facility Aero Asia Tbk that facilitates better monitoring and control of aircraft engines in the workshop.
2 Related Work

Several studies on Indoor Positioning Systems (IPS) have been conducted previously. Accurate and reliable IPS has long been studied and developed, since navigation in indoor and closed facilities is very much needed [6] to provide information about the location of an object, whether a device, item, or person, by using the available
network to connect various user devices through communication technology in various places [7]. One example is hospitals, where the navigation system faces potential interference because of the large amount of technical equipment in use [8]. Another is companies that provide meeting-room services: where the number of rooms is very large, checking room availability for transactions becomes difficult, so IPS is a solution, and room reservation services can be offered to customers remotely through their own devices [9].
3 Methodology

This study uses the waterfall approach, because the work must be done step by step in a structured manner: planning, analysis, design, and implementation. Each phase consists of a series of stages relying on techniques that produce outputs [10]. In the initial planning phase, the timeline, scope, and budget are determined as well as possible, and interviews and document collection are carried out. In the next phase, organizational analysis, business process analysis, and user requirements analysis are performed. In the design phase, the Unified Modeling Language, a general object-oriented vocabulary and diagramming technique for modeling system development projects [10], is used: use case diagrams express the requirements in a form that is easy to understand, and the database is designed with class diagrams, a static model describing the classes and the relationships between them that are constant in the system. A user interface is also created in this phase. In the implementation phase, the code is written using HTML to create web pages accessible from a browser, CSS to decorate and arrange attractive layouts [11], and JavaScript running in the browser [12]. Node.js manages the routing on the server [13], using the Express library with the help of the EJS template engine. Indoor maps are created with Adobe Illustrator as vector graphics [14], output as scalable .svg files [15]. After coding is complete, testing is carried out using black-box testing to verify that the functional behavior conforms to the requirements specification [16]. The database is built with NoSQL to address scalability and availability at the risk of weaker consistency [17].
4 Result and Discussion

The Indoor Positioning Engine system was developed on standard public web technology and open-source libraries, with the required hardware components commonly found on a PC or mobile computer. The application does not require complex data input and uses simple routing algorithms. The waterfall approach was used, with each activity carried out in a structured manner. This section explains how the system application was built, covering system requirements,
business process design, display design, and system features. Images are presented to facilitate understanding of the system design and implementation process.
4.1 Requirement Analysis

Based on the analysis of the problem, the Indoor Positioning Engine system is designed to be used by the Engine Production Control (EPC) work unit. The proposed system can be used to monitor and control aircraft engine location data using a barcode function, to avoid the possibility of human error, and provides a function that displays reports on the number of aircraft engines in the workshop. Based on the data that has been collected, the system requirements are analyzed and grouped into functional and non-functional requirements, as can be seen in Tables 1 and 2.

Table 1 Functional requirements

  Module name   | Functional requirement      | User
  Login         | Fill in login form          | Admin (EPC)
  Data engine   | Insert data engine          | Admin (EPC)
                | Update data engine          | Admin (EPC)
                | Delete data engine          | Admin (EPC)
                | Check due date preservation | Admin (EPC)
                | View dashboard              | Admin (EPC)
  Customer list | Insert data customer        | Admin (EPC)
                | Update data customer        | Admin (EPC)
                | Delete data customer        | Admin (EPC)
  Map           | Update engine location      | Admin (EPC), User (Public)
                | Search engine location      | Admin (EPC), User (Public)
  Report engine | Download report             | Admin (EPC), User (Public)
Table 2 Non-functional requirements

  Parameter   | Requirement
  Performance | Response time no more than 3 min
  Reliability | Information available when needed; reliable and proven information
  Security    | Users and admins access the system via login registration; functions and information displayed according to the role of the logged-in user
  Usability   | System easy to learn and use; navigation well made and effective
4.2 Design

In the database design, five tables were created: the Engine Type table with 3 fields, the Engine List table with 23 fields, the Customer List table with 5 fields, the Status List table with 7 fields, and the Storage Type table with 3 fields. The web-based application is then built using various technologies including HTML, CSS, JavaScript, and Node.js with the Express library and EJS templates, plus Adobe Illustrator with .svg output for the indoor maps. The resulting interface of the Indoor Engine System consists of three areas: top bar, sidebar, and content. The interface is divided in two according to the type of user: one for the public user and one for the EPC admin. The basic differences between the two interfaces are:
Public User Interface:
• The top bar only displays the application name and a login button.
• The sidebar only has the Dashboard, List Engine, and MAP menus.
• The displayed content has no Remove or Delete function.
Admin EPC Interface:
• The top bar displays the application name plus a notification button and a profile button.
• The sidebar displays the Dashboard menu, Engine Management with List Engine and New Engine submenus, MAP, and Data Management with a Customer List submenu.
Figure 2 shows the Public Dashboard page of the Indoor Engine System application, which only consists of the Dashboard, Engine List, and MAP menus. The EPC Dashboard page, in contrast, consists of a Dashboard menu, Engine Management with New Engine and Engine List submenus, a MAP menu, and a Data Management menu with a Customer List submenu, as shown in Fig. 3.
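Since the backing store is NoSQL, each engine can be held as one document. The sketch below shows what a single Engine List document might look like; all field names and values are hypothetical illustrations (the actual table has 23 fields, which are not listed in the text).

```python
import json

# Hypothetical example of one document in the NoSQL Engine List collection;
# only a few invented, illustrative fields are shown.
engine_doc = {
    "serial_number": "ESN-000123",  # hypothetical identifier
    "engine_type": "Type-A",        # hypothetical engine type value
    "customer": "Customer A",
    "status": "Preservation",
    "location": {"room": "EPC room", "facing": "North"},
    "due_date_preservation": "2023-01-31",
}

print(json.dumps(engine_doc, indent=2))
```

A document model like this keeps the engine's location and due-date data in one record, which suits the map and report views described above.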
Fig. 2 Dashboard display for user (public)
Fig. 3 Dashboard display for Admin (EPC)
On the top bar there are notifications and a user profile with profile and logout submenus. Entering a page with the sidebar and top bar above requires a login; the login page of the Indoor Engine System application is shown in Fig. 4, and the fields required to log in are the registered email and password. Figure 5 shows the New Engine page, which can only be accessed by EPC users and serves to add new engine data; it is reached from the Engine Management menu by selecting New Engine. The Map page serves to show where an engine is located, displaying the name of the room and the direction the engine is facing (Fig. 6). The Report Engine page is displayed when the Report button is selected; it shows report engine data that can be filtered by the Engine Type, Customer, and Status categories, as shown in Fig. 7.
Fig. 4 Login page
Fig. 5 New engine page
Fig. 6 Map page
Fig. 7 Report engine page
4.3 Black Box Testing

User Acceptance Testing was carried out by 25 people, who assessed the features and functionality of the system built, using the black-box testing method to ensure that the system matches its requirements, on a Likert scale of 1–5:
1. Very Incompatible (0–24.99%)
2. Not Appropriate (25–39.99%)
3. Fair Appropriate (40–64.99%)
4. Appropriate (65–84.99%)
5. Very Appropriate (85–100%)
The processed test results can be seen in Table 3. Based on the user acceptance testing, it can be concluded that all menus in the Indoor Positioning Engine application function very well. The accuracy test was carried out 5 times for each room to determine the accuracy of the engine's position; the results can be seen in Table 4. The lowest accuracy, 70%, was obtained in the Tool CRIB room, while the highest, 90%, was obtained in the NDT and EPC rooms. The average accuracy across all rooms is 82%.

Table 3 User acceptance testing scenarios

Menus           | Test case                                                                                                  | %  | Information
Dashboard       | Admin (EPC) and users can view the display with menus according to their role                              | 90 | Very appropriate
Login           | All users successfully logged in with the registered username and password                                 | 96 | Very appropriate
New engine form | Admin (EPC) can add new engine data and save it                                                            | 86 | Very appropriate
Map             | Users can see where the engine is located, with the name of the room and the position the engine is facing | 85 | Very appropriate
Report engine   | Users can see engine report data filtered by engine type, customer, and status                             | 87 | Very appropriate
Data management | Admin (EPC) can manage data (create, update, delete)                                                       | 89 | Very appropriate

Table 4 Accuracy testing

Room           | Accuracy (%)
EPC room       | 90
Tool CRIB room | 70
NDT room       | 90
EO room        | 80
HSG room       | 80
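The summary statistics quoted from the accuracy test can be reproduced directly, with the per-room values copied from Table 4:

```python
# Per-room position accuracy from Table 4.
accuracy = {
    "EPC room": 90,
    "Tool CRIB room": 70,
    "NDT room": 90,
    "EO room": 80,
    "HSG room": 80,
}
lowest = min(accuracy, key=accuracy.get)
average = sum(accuracy.values()) / len(accuracy)
print(lowest, accuracy[lowest])  # Tool CRIB room 70
print(average)                   # 82.0
```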
5 Conclusion

The web-based Indoor Positioning Engine application at the PT GMF workshop was successfully developed using a waterfall approach, carried out step by step in a structured manner based on functional and non-functional requirements. The application was tested with black box testing to determine user acceptance on a Likert scale; the results of all test scenarios were as expected and all features worked as intended. The application makes it easier to edit engine data and locations interactively and displays information quickly, thereby improving the process of monitoring and controlling aircraft engines in the workshop. As further development of this study, a mobile-based application could be built to make it easier for users to access information through their smartphones.
References

1. Wijoyo AC, Hermanto D (2020) Analisis dan Perancangan Sistem Informasi Inventory pada PT Insan Data Permata. JRAMI 1(2). https://doi.org/10.30998/jrami.v1i02.231
2. Hasibuan SS (2015) Analysis of planning and control of inventories of raw materials. Dimensi 4(3):46-55
3. Ardi KM, Al Amin IH (2020) Sistem Pendukung Keputusan Prioritas Persediaan Tools Menggunakan Metode Fuzzy AHP. Jurnal Ilmiah Ekonomi dan Bisnis 13(1):503-516
4. Tyas NA, Haryono, Aksioma DF (2016) Penentuan Kebijakan Waktu Optimum Perbaikan Komponen Heat Exchanger (HE) Pesawat Boeing 737-800 Menggunakan Metode Power Law Process di PT. Garuda Maintenance Facility (GMF) Aero Asia. Jurnal Sains dan Seni ITS 5(1)
5. Asmara B, Mursityo YT, Rachmadi A (2020) Evaluasi Proses Optimalisasi Sumber Daya dan Kegiatan Operasional pada PT. Garuda Maintenance Facility Aeroasia Tbk Menggunakan Kerangka Kerja COBIT 5 Domain EDM, APO 07, dan DSS 01. Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer 4(3):988-993
6. Wichmann J (2022) Indoor positioning system in hospital: a scoping review. Digital Health 8:1-20
7. Jamaluddin NAT, Maulina W (2019) Rancang Bangun Indoor Positioning System berbasis Wireless Smartphone menggunakan Teknik Global Positioning System dengan Metode Absolut. Berkala Saintek 7(1):13-18
8. Mendoza-Silva GM, Torres-Sospedra J, Huerta J (2019) A meta-review of indoor positioning systems. Sensors 19. https://doi.org/10.3390/s19204507
9. Abrianto HH, Arissanty R, Irmayani I (2021) Sistem Informasi Pengecekan Dan Pemesanan Ruangan Jarak Jauh Menggunakan Indoor Positioning System. JNKTI 4(4):272-283
10. Dennis A, Wixom BH, Tegarden D (2015) System analysis and design: an object oriented approach with UML. Wiley, United States of America
11. Setiawan D (2017) Buku Sakti Pemrograman Web: HTML, CSS, PHP, MySQL & Javascript. Start Up, Yogyakarta
12. Nugroho AT (2012) Pemrograman Web Berbasis Web menggunakan JavaScript + HTML 5. Penerbit Andi, Yogyakarta
13. Fajrin R (2017) Pengembangan Sistem Informasi Geografis Berbasis Node.JS untuk Pemetaan Mesin dan Tracking Engineer dengan Pemanfaatan Geolocation pada PT IBM Indonesia. Jurnal Komputer Terapan 3(1):33-40
14. Johnson S (2012) Adobe Photoshop CS6 on demand. Paul Boger, USA
15. Limbong T, Napitupulu E, Sriadhi S (2020) Multimedia: editing video dengan Corel VideoStudio X10. Yayasan Kita Menulis, Medan
16. Sukamto RA, Shalahuddin M (2020) Getting started with NoSQL. PACKT Publishing, Birmingham
17. Vaish G (2013) Getting started with NoSQL. PACKT Publishing, Birmingham
Analysis and Design of Information System E-Check Sheet

GG Faniru Pakuning Desak, Sunardi, Imanuel Revelino Murmanto, and Johari Firman Julianto Sirait
Abstract Quality control is a system designed to maintain the intended level of product or service quality. PT. XYZ uses check sheets to support its quality control management process, which is currently still manual; in the future, the company wants to replace the manual process with a digital check sheet. The design is carried out using the System Development Life Cycle (SDLC) method with a prototyping model, up to the design stage. The analysis is carried out on ongoing business processes, and its results are embodied in the design of an E-Check Sheet information system that can serve as a future solution. The E-Check Sheet is expected to facilitate the distribution of check sheets, standardize their format, eliminate inconsistencies in filling them out, and support checks and approvals.

Keywords SDLC · Prototyping model · E-check sheet · Design
1 Introduction

The development of increasingly advanced science and technology makes it possible to create fast and accurate data processing systems with a small risk of error. Computer software is increasingly needed in all aspects of life, not only
GG Faniru Pakuning (B) · Sunardi · I. R. Murmanto · J. F. J. Sirait Information Systems Department, BINUS Online Learning, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] Sunardi e-mail: [email protected] I. R. Murmanto e-mail: [email protected] J. F. J. Sirait e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_55
G. F. P. Desak et al.
in the business world but also in institutions, education, universities, and other industries, to assist routine activities. One of the most critical factors for sustainable development is not only the use of technology but also product quality, because each company is evaluated by customers based on its products (and services) and the quality of its management system. Sound quality control is therefore needed in industrial activities to monitor the overall business process.

Quality control can be defined as a system used to maintain the desired quality of a product or service through feedback on the characteristics of the product or service produced, accompanied by improvements whenever there is a deviation from the specified characteristics [1]. Throughout the production process, quality control is necessary to monitor the extent to which the products meet predetermined specifications or standards. Quality control is also carried out to provide evidence that the process produces a suitable product or service, and to identify problems so that corrective action can be taken and improvements made in the future [2].

PT XYZ is a manufacturing company established in 2011 with several branches throughout Indonesia. Its line of business is manufacturing heavy equipment components such as engines, transmissions, and main pumps. Its customers come from several mining and plantation industries in Indonesia. PT XYZ prioritizes quality so its products can be accepted in the market on the basis of the excellence and quality it produces and maintains. To maintain and improve product quality, the company inspects the production process in accordance with ISO 9001, an international standard containing quality management and product manufacturing requirements [3].
The quality control process at PT XYZ flows from inspection of newly arrived components (receiving), through inspections during production (in-process), to inspections during packaging (outgoing). The quality control process aims to determine the status of the inspections carried out and whether their results are released or rejected.

An obstacle often faced at PT XYZ is that the quality control process does not yet use a system that supports the Quality Management System. It still uses a simple, arguably outdated method: a handwritten check sheet on paper (a manual system). The check sheet is used to analyze the quality control of the products the company produces. This inspection management can be called manual because it consists of checking sheets of paper that are then stored in a cupboard, scanned, and also stored in separate computer folders. The results of these inspections cannot support Audit Management in monitoring, searching, reviewing, evaluating, and filling out check sheets correctly and consistently. Manual inspection can result in longer re-checking times, a higher rate of human error, and wasted expense, because a new check sheet must be made every time the company produces a component. The company now faces the problem that its check sheet filling procedure has not been implemented consistently.
It is hoped that the design of this E-Check Sheet Information System will help the company overcome inconsistencies in filling out check sheets, since the current manual data input process is prone to errors during checking and validation. The application is also expected to assist the Quality Control section in supervising and evaluating the results of product inspections. In addition, the E-Check Sheet Information System can be used for centralized processing of inspection result data, improving and increasing the value of business processes within the company.
2 Related Work

A check sheet is a simple document for collecting data in real time at the location where the data arises [4, 5]. Check sheets have two primary purposes: to make data collection easy, and to prepare and process the data so it can be used easily. The data collected in a check sheet is the starting point for the repair process and for troubleshooting, and can be used to create diagrams as needed. Filling out a check sheet is usually done by marking it with a certain sign or symbol. Check sheets have two general types of fields [6]: a check mark (✓) is used to ensure quality in terms of qualitative or attribute data, indicating that the checker only verifies that a data field matches the expected data; a dash (|) is used to ensure quality quantitatively, meaning the examiner counts the observed objects, so the conformity between expectation and reality is measured by direct counting.

Quality control is defined as a system used to maintain the desired quality of a product or service through feedback on its characteristics and the implementation of improvements [1]. Quality control is also carried out to provide evidence that the process produces the appropriate product or service, and to identify problems so that corrective action can be taken and improvements made in the future [2].

Previous studies found that many companies still carry out the quality control process manually. Activities and evaluations done manually slow down the flow of information the company needs.
Because of these delays in the flow of information, many companies have created and implemented quality control information systems, which improve information security, minimize errors, avoid data manipulation, reduce the potential for human error during data input [7], create more structured data storage, and minimize the risk of data loss [8]. In the case of PT. XYZ, the check sheet is still paper based, so documents can be misplaced or lost and their contents can be inconsistent, which also affects the maintenance of quality control in the related departments. It is therefore hoped that the proposed E-Check Sheet Information System design can eliminate inconsistencies in filling out check sheets and minimize misplaced documents.
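A quantitative check sheet of the kind described above, where each dash corresponds to one observed occurrence, amounts to counting observations per category. A minimal sketch, with purely illustrative defect categories:

```python
from collections import Counter

# Each entry is one tally mark (|) recorded on the check sheet;
# the defect categories are hypothetical examples.
observations = ["scratch", "dent", "scratch", "misalignment", "scratch", "dent"]
tally = Counter(observations)
for defect, count in tally.most_common():
    print(defect, "|" * count)  # e.g. scratch |||
```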
3 Research Methodology

The research methods used are literature study, observation, and interviews. The methodology used in analyzing and designing the E-Check Sheet Information System is the System Development Life Cycle (SDLC) [9]; for this, a model suitable for the current business process conditions must be selected [10]. The information system analysis is carried out up to the design stage of the SDLC process with the prototyping model. Prototyping is a model that can be used to build a system when the user's requirements are unclear [10]. It is also a software development model in which a system prototype is built, tested, and reworked until a prototype acceptable to end users is produced [11]. The system design uses Unified Modeling Language (UML) [12] diagrams such as activity diagrams, use case diagrams, use case descriptions, and domain class diagrams.

In collecting data and information, the researchers conducted interviews and in-depth observations of the company and its departments to investigate the use of manual check sheets. From these observations and interviews, the current business processes, the problems occurring in the company, and the preferred future improvement plans can be identified. Due to company regulations, this research was carried out only up to the stage of the proposed business process design and user interface.
4 Result and Discussion

The E-Check Sheet system is designed as a web-based application that can be accessed anywhere within the company area. The application does not require special hardware specifications and does not require complicated data input. This section explains how the application is built, covering the company's initial (existing) business processes, the proposed new business processes that will be implemented in the system, and the design of the system's display.
4.1 Communication (Data Collection)

To identify users' problems and needs, the first step was to collect data and information through interviews with users directly involved in the input process and in working on check sheets at the company. Some of the questions posed to users were:

a. How long does it take the staff to validate the check sheet?
b. How long does it take the staff to send a validated check sheet and distribute it to the department that needs the data?
c. What obstacles often occur when working on a manual check sheet?

In addition, the researchers conducted thorough observations of the running business processes, read supporting documents related to them, and reviewed literature from previous research on the use of manual and digital check sheets to get an idea of what kind of E-Check Sheet should be designed.
4.2 Analysis of Business Processes

After the first stage (Communication: Data Collection) was completed, the business processes running in the company were analyzed. In this stage, the researchers conducted a more in-depth analysis of the business processes from start to finish related to the use of check sheets: who the users involved in the check sheet filling process are (Fig. 1), and what problems often occur because the check sheet used is still manual, in the form of sheets of paper (Table 1). This was done so the researchers could determine what the users will need from the system in the future.
4.3 Design

The analysis of the quality control (check sheet) process at PT XYZ carried out in the previous stage found that there was no system capable of monitoring it. As a result, check sheets are often filled out incompletely, and the validation process is only carried out after all of the company's components have been produced, which means many process steps are skipped and do not follow the flow of the company's business processes. If a check sheet document is lost, there is no other documentation history, and it has to be printed and filled in again from the beginning. Therefore, at the design stage, the system analysts (researchers) provide solutions to these problems. The analysis for the quality control check sheet system design is carried out using the SDLC prototyping method. This design (quick design modeling) stage also aims to present all the requirements obtained, using use case diagrams, activity diagrams, class diagrams, and a user interface tailored to the users' needs. The following are some of the resulting proposed designs.

Figure 2 shows the proposed business process that will be implemented in the E-check sheet system in the form of a use case diagram; it contains 6 actors who will access the system according to their respective functions or job descriptions. Furthermore, Fig. 3 shows one of the proposed business processes in the form of an
Fig. 1 The running business process flow

Table 1 Problems in the company

Number | Problems
1 | Team C conveys information via email and sends check sheet data using GDrive
2 | Check sheets are handed from Team C to Team E directly, according to production needs
3 | There are check sheets that are not uniform and have not been validated, confusing the production team as to which check sheet should be used for the production process
4 | Inconsistency in filling out and validating check sheets
5 | If a check sheet is filled out incompletely, the production process continues anyway, resulting in many processes that do not follow the company's business process flow
6 | If a check sheet is lost, it is reprinted; data from the previous process is lost, and the newly filled data may be inaccurate
Fig. 2 Proposed use case diagram
activity diagram related to the process of making a new check sheet: the QC actor creates a new check sheet in the system. Figure 4 shows the proposed class diagram that will be used to design the database tables of the E-check sheet application. Figures 5 and 6 are examples of how the proposed application's user interface looks; these two menus will be used by each actor to process check sheets as needed.

In the proposed E-Check Sheet design, master data is also created and used as the data input center of the application. If master data is added, only the Helpdesk and QC can add it to the system, so that all data is confirmed and kept consistent. In the latest business process proposal, the Helpdesk's task is to manage the E-Check Sheet user master data and QC's task is to manage the check sheet master data. With this clear division of tasks, it is hoped that there will be no discrepancies in the input data and that the quality control process can be well maintained in the company.
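To illustrate how entities like those in the proposed class diagram could be expressed in code, the sketch below uses Python dataclasses; the class and attribute names are hypothetical and not taken from the paper's diagrams:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CheckSheetItem:
    description: str
    checked: bool = False  # set to True once the inspector validates the item

@dataclass
class CheckSheet:
    sheet_id: str
    component: str
    created_on: date
    items: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # A sheet should only be approved once every item has been checked,
        # which guards against the incomplete-filling problem in Table 1.
        return bool(self.items) and all(item.checked for item in self.items)

sheet = CheckSheet("CS-001", "main pump", date(2022, 1, 10),
                   [CheckSheetItem("visual inspection"), CheckSheetItem("torque check")])
print(sheet.is_complete())  # False
```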
Fig. 3 Proposed activity diagram—add new check sheet

5 Conclusion

It was found that there was no consistency in the use of check sheets across the company's sections: check sheets were filled out incompletely, approval was sometimes missed, and check sheet documents were scattered or lost. Based on these problems, the researchers propose a system that can accommodate all check sheet needs, with additional functionality that users within the company can use to enter the same data without overlap. The E-Check Sheet design also divides the tasks among the actors involved in using check sheets more explicitly. Additional functionality offered in the system includes a master data menu, check sheet mapping, and check sheet review, which make each data input process more uniform. It is hoped that, within company regulations, the results of this E-Check Sheet design can be further refined and implemented in the company's existing applications without having to create a separate application, and that further development of the E-Check Sheet's features and functions will better align with PT XYZ's business goals.
Fig. 4 Proposed class diagram
Fig. 5 User interface—edit data check sheet by QC
Fig. 6 User interface—menu filling check sheet by staff
References

1. Mitra A (2016) Fundamentals of quality control and improvement, 4th edn. Wiley, Canada
2. Marchewka JT (2015) Information technology project management: providing measurable organizational value, 5th edn. Wiley
3. DISNAKERTRANS Provinsi Banten, 28 Mar 2020 [Online]. Available: https://disnakertrans.bantenprov.go.id/Berita/topic/268. Accessed 15 June 2021
4. Syukron A, Kholil M (2013) Six sigma: quality for business improvement. Graha Ilmu, Yogyakarta
5. Patel PJ, Shah SC, Makwana S (2014) Application of quality control tools in taper shank drills manufacturing industry: a case study. J Eng Res Appl 4(2) (Version 1):129-134
6. Tannady H (2015) Pengendalian Kualitas. Graha Ilmu, Yogyakarta
7. Yuliandra B, Wulan RF (2018) Perancangan Sistem Informasi Pengendalian Kualitas pada Laboratorium Proses IV PT X. Jurnal Optimasi Sistem Industri 17(2):113-125
8. Nasution AB, Astuti E (2017) Implementasi Sistem Informasi Quality Control pada Produksi Granit Tile Berbasis Web (Studi Kasus PT. Jui Shin Indonesia). Jurnal Sistem Informasi Kaputama (JSIK) 1(2):38-45
9. Alshamrani A, Bahattab A (2015) A comparison between three SDLC models: waterfall model, spiral model, and incremental/iterative model. IJCSI Int J Comput Sci Issues 12(1):106
10. Pressman RS, Maxim B (2015) Software engineering: a practitioner's approach. McGraw-Hill Education, New York
11. Martin M (2022) Prototyping model in software engineering: methodology, process, approach, 07 May 2022 [Online]. Available: https://www.guru99.com/software-engineering-prototyping-model.html. Accessed 15 June 2022
12. Satzinger JW, Jackson RB, Burd SD (2016) System analysis and design in a changing world. Cengage Learning, Boston
Analysis of Request for Quotation (RFQ) with Rejected Status Use K-Modes and Ward's Clustering Methods. A Case Study of B2B E-Commerce Indotrading.Com

Fransisca Dini Ariyanti and Farrel Gunawan

Abstract In the current situation, PT. Innovation Success Sentosa, with its B2B e-commerce platform Indotrading.com, has a critical problem in the RFQ transaction process: nearly 25% of all RFQs are rejected, terminating the transaction and losing potential profit. The purpose of this study is to identify, analyze, and classify the characteristics of all rejected RFQs in order to find ways to overcome them. The analysis was carried out using data mining concepts with 3 algorithms (k-modes, average-linkage, and Ward's clustering), applied to 7029 data objects of rejected RFQs from the period October to December 2021. To validate the results, the silhouette coefficient index was used. The clustering method with the most optimal results, based on accuracy and consistency, is hierarchical Ward's clustering, with the value closest to 1, namely 0.664. Ward's algorithm is therefore used as the reference for resolving rejected RFQs. The clustering results show that rejected RFQs were dominantly caused by sellers, with the reason 'Stock not available now', and were dominant in the product categories 'Electrical and Electronic supplies' and 'Construction and Property supplies'.

Keywords Data mining · Hierarchical clustering · K-modes clustering · Average-linkage clustering · Ward's clustering
1 Introduction The research was conducted on a B2B marketplace in Indonesia, named Indotrading, which was founded by PT. Innovation Success Sentosa. Indotrading facilitates and helps transactions between companies and other companies in Indonesia, with the F. D. Ariyanti (B) · F. Gunawan Industrial Engineering Department, Faculty of Engineering, Bina Nusantara University, Jl. KH Syahdan 9, Jakarta 11480, Indonesia e-mail: [email protected] F. Gunawan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_56
F. D. Ariyanti and F. Gunawan
aim that buyers can procure goods directly from producers easily and quickly. Requests for quotation (RFQ) on the Indotrading platform in fact face several obstacles or problems that cause the RFQ to be rejected, and therefore the transaction fails. From quarter 4 of 2021 to February 2022, 24% of RFQs were rejected, so the marketplace faces a potential loss of revenue; it is thus important to take further action regarding rejected RFQs. A request for quotation (RFQ) is a business process in which a company or public entity requests a quote from a supplier for the purchase of specific products or services. Quotations that meet buyer expectations continue on to become transactions, and this process also occurs in marketplaces such as Indotrading.com. E-commerce is a system of transactions carried out with electronic media between sellers and buyers in a marketplace [1]. Cloud computing's data analysis capability makes it possible to process a large amount of data on very large clusters [2]. Clustering's ultimate objective is to provide users with useful insights from the original data so that they can effectively address issues [3].
1.1 Big Data

Big data refers to collections and processes involving very large volumes of data, both structured and unstructured. Several techniques are needed to extract information or insight from big data. How large the data is should not be the main concern; what matters is obtaining useful results and information from the large data set and turning it into an easy-to-understand structure for future needs [4]. According to Balachandran and Prasad (2017), analyzing and processing big data yields several types of benefits: first, faster and better decision making; second, analysis results that provide useful information to facilitate important and critical decisions for related parties; third, input for the creation and development of new products or services; and fourth, information covering all customer activities and perceptions, enabling good recommendations based on customer big data.
1.2 Knowledge Discovery in Databases (KDD)

Knowledge Discovery in Databases (KDD) is a process that includes the collection and use of past or historical data and of patterns or relationships in very large data sets; data mining is one part of KDD. KDD is the process of finding hidden knowledge in data stored in a database. It is a multi-step process for converting data into useful information [5], and an interdisciplinary area focused on methodologies and techniques for retrieving useful data from sources [6].
1.3 Data Mining

Data mining serves several purposes:

1. Classification: generalizing a known structure or label so it can be applied to new data.
2. Clustering: grouping data into certain groups according to their similarities; generally, the labels are unknown.
3. Regression: finding the relationship or influence between two or more variables within the data.
4. Forecasting: predicting the value to be reached in a certain period.
5. Association: finding relationships or similarities between item variables.
6. Sequencing: identifying a series of events in order.
1.4 Clustering

Clustering is good when the data within a group or region are very similar to each other and dissimilar to data in other groups or regions [7]. Clustering has several approaches, ranging from partition-based clustering to hierarchical clustering. Hierarchical clustering works on the basis of a dendrogram, where similar data are placed in adjacent levels of the hierarchy and dissimilar data are placed in distant levels [8].
1.5 K-Modes Clustering

K-modes is a partition-based clustering algorithm described by Huang in 1997. It is a development of k-means: whereas k-means is generally applied to numerical data, k-modes is applied specifically to categorical data. In general, k-modes works by looking for the mode, the value with the dominant frequency, in a data set [9].

Average-Linkage Clustering. Average-linkage is a hierarchical clustering algorithm and a variation of linkage. Its working principle is to calculate the average distance between the data objects in each group or cluster, with the aim of minimizing the distance between the merged clusters [10].

Ward's Clustering. Ward's is a hierarchical clustering algorithm that seeks clusters with the smallest possible internal variance, that is, clusters with a very minimal level of internal difference. It works by calculating the mean of each cluster, then the Euclidean distance between the data objects, and then the average of those distances. The clusters whose sum of squared errors (SSE) changes the least when merged are combined into one cluster [11].
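The k-modes idea described above (simple-matching dissimilarity, mode update per attribute) can be sketched in a few lines. This deterministic toy version is illustrative only, not the implementation used in the study:

```python
from collections import Counter

def k_modes(rows, k, iters=10):
    """Minimal k-modes sketch for categorical rows of equal length."""
    # Initialise modes with the first k distinct rows (deterministic for the demo).
    modes = []
    for row in rows:
        if row not in modes:
            modes.append(list(row))
        if len(modes) == k:
            break
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in rows:
            # Dissimilarity = number of mismatched attributes (simple matching).
            dists = [sum(a != b for a, b in zip(row, m)) for m in modes]
            clusters[dists.index(min(dists))].append(row)
        for i, cluster in enumerate(clusters):
            if cluster:
                # New mode: most frequent value in each attribute position.
                modes[i] = [Counter(col).most_common(1)[0][0] for col in zip(*cluster)]
    return modes, clusters

rows = [["seller", "stock"], ["seller", "stock"],
        ["buyer", "price"], ["buyer", "price"]]
modes, clusters = k_modes(rows, k=2)
print(modes)  # [['seller', 'stock'], ['buyer', 'price']]
```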
1.6 Python Programming

Python is often used for data science and data analysis. According to Dhruv et al. (2021), data analysis becomes very efficient with the help of a variety of capable libraries such as pandas, NumPy, matplotlib, and seaborn. From data preparation needs such as cleaning, transformation, and integration, through 2D and 3D data visualization, to statistical data analysis, the Python libraries make the work easier [12].
2 Research Method

2.1 Research Diagram

Figure 1 outlines the method. In the first step, big data collection of rejected RFQs during Q4 2021 (October-December 2021) generated more than eight thousand records. In the second step, data preparation, which includes data cleaning, data integration, data selection, and data transformation, left 7029 records. In the third step, data mining uses three algorithms: k-modes clustering, average-linkage clustering, and Ward's clustering. Fourth, in data analysis, the silhouette coefficient index (SCI) of the three algorithms is compared; the algorithm whose index is closest to 1 is chosen as the most optimal and used as the reference, and its cluster results are analyzed. Fifth, marketing strategies are developed: the clustering results describe the characteristics and causes of rejected RFQs, followed by suggestions for development and implementation stages for the e-commerce platform based on those results.
Fig. 1 Research method
Analysis of Request for Quotation (RFQ) with Rejected Status Use …
647
3 Result and Discussion 3.1 Data Preparation Data collection generated more than 8000 records, which were first processed through data preparation, comprising data cleaning, data integration, data selection, and data transformation. All activities use the Python programming language with supporting libraries. In data integration, tables are merged to obtain the necessary variables: the rejected-RFQ table is integrated with the RFQ category table to obtain the product category of each request for quotation, which also strengthens the characteristic variables of each rejected RFQ. After data integration, data selection is carried out by choosing the variables or attributes to be used in the subsequent data mining process. The selected attributes are RFQID, Reason, and ProductCategory; unselected attributes, i.e. RejectedDate, are removed from the dataset (Fig. 2). The last stage is data transformation, carried out to meet the requirements of the average-linkage and Ward's algorithms, which need numeric data. Since the current data is categorical, it must first be converted to numeric. Table 2 summarizes the values of the Reason variable, including the seller- or buyer-side reasons that caused an RFQ to be rejected. The data is then transformed into numeric form; Tables 1, 2, and 3 list the numeric codes assigned, in order, to the existing variable values. The final state of the data after preparation is shown in the Python screenshots in Figs. 3 and 4.
Fig. 2 Dataset after data integration in Python
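The integration, selection, and transformation steps described above can be sketched in pandas. The column names match the chapter's attributes, but the toy rows below are illustrative stand-ins, not the real RFQ dataset.

```python
import pandas as pd

# Toy stand-ins for the rejected-RFQ table and the RFQ category table
# (values are illustrative, not the real dataset)
rfq = pd.DataFrame({
    "RFQID": [1, 2, 3],
    "Reason": ["Stock not available now", "Product unclear",
               "Stock not available now"],
    "RejectedDate": ["2021-10-02", "2021-11-15", "2021-12-01"],
})
category = pd.DataFrame({
    "RFQID": [1, 2, 3],
    "ProductCategory": ["Energy supplies",
                        "Construction and property supplies",
                        "Energy supplies"],
})

df = rfq.merge(category, on="RFQID")      # data integration
df = df.drop(columns=["RejectedDate"])    # data selection
# data transformation: categorical values -> numeric codes
for col in ["Reason", "ProductCategory"]:
    df[col] = df[col].astype("category").cat.codes
print(df)
```

The `cat.codes` call assigns each distinct category an integer, analogous to the number tags of Tables 1, 2, and 3.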
Table 1 Categorical number tag

Values | Numeric
Agriculture supplies | 0
Food & beverage supplies | 1
Factories & industrial supplies | 2
Hospitality & hotels supplies | 3
Promotion & advertising supplies | 4
Education & office supplies | 5
Electrical and electronics supplies | 6
Chemical and health product supplies | 7
Entertainment and party supplies | 8
Energy supplies | 9
Rubber & plastic supplies | 10
Personal protection and security supplies | 11
Automotive supplies and accessories | 12
Mechanical equipment and spare parts supplies | 13
Construction and property supplies | 14

Table 2 Reason rejected number tag

Reason | CausesBy | Numeric
Stock not available now | Seller | 0
Similar product not available | Seller | 1
Purchase quantity too low | Buyer | 2
Token run out | Seller | 3
Buyer no response on discussion | Buyer | 4
Purchase quantity too high | Buyer | 5
Report as spam | Buyer | 6
Product unclear | Buyer | 7
Location too far | Seller | 8

Table 3 Numeric values 'CausesBy'

Values | Numeric
Seller | 0
Buyer | 1
Fig. 3 Categorical data

Fig. 4 Numeric data

3.2 K-Modes Clustering The first algorithm is k-modes clustering. K-modes is a development of k-means designed specifically for categorical data, so this algorithm is applied to the non-numeric dataset. Figure 5 shows the elbow method used to determine the number of clusters. Based on the elbow formed, the most optimal number of clusters is 2. A recap of the k-modes clustering results is shown in Table 4.
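As a sketch of how the k-modes step works, the algorithm can be written directly from Huang's definition: assign each row to the nearest mode under Hamming dissimilarity, then update each mode as the per-column most frequent category. This is a minimal illustration on toy integer-encoded rows; real analyses would typically use a dedicated package such as `kmodes`.

```python
import numpy as np

def k_modes(X, k, n_iter=10):
    """Minimal k-modes sketch (Huang 1997). Illustrative only."""
    X = np.asarray(X, dtype=int)          # integer-encoded categories
    # deterministic init: first k distinct rows
    _, first = np.unique(X, axis=0, return_index=True)
    modes = X[np.sort(first)[:k]].copy()
    for _ in range(n_iter):
        # dissimilarity = number of mismatching attributes per row/mode
        d = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # new mode: most frequent value in each column
                modes[j] = [np.bincount(members[:, c]).argmax()
                            for c in range(X.shape[1])]
    cost = d[np.arange(len(X)), labels].sum()
    return labels, modes, cost

# toy rows shaped like (ReasonNum, CausesByNum, CategoryNum)
X = [[0, 0, 6]] * 5 + [[1, 0, 2]] * 5
labels, modes, cost = k_modes(X, k=2)
print(labels, cost)
```

Running `k_modes` over a range of k and plotting `cost` against k yields the elbow curve used above to pick the number of clusters.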
3.3 Average-Linkage Clustering Average-linkage uses the numerical dataset, and the number of clusters is determined with the dendrogram depicted in Fig. 6. The longest vertical line not crossed by a horizontal line indicates 3 clusters; thus, the optimal number of clusters for the average-linkage method is 3 (k = 3). Cluster 1 contains 4309 records, cluster 2 contains 2664 records, and cluster 3 contains 56 records. The results of average-linkage clustering are summarized in Table 5.
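The average-linkage procedure can be reproduced with SciPy's hierarchical clustering. The random blobs below merely stand in for the encoded RFQ variables; only the method calls reflect the chapter's procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# three well-separated toy blobs standing in for the numeric dataset
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(5, 0.3, (20, 2)),
               rng.normal(10, 0.3, (20, 2))])

# average linkage: inter-cluster distance = mean pairwise distance
Z = linkage(X, method="average", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree at k = 3
print(np.bincount(labels)[1:])                   # cluster sizes
# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree used to pick k
```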
Fig. 5 Elbow graph

Table 4 Clustering (k-modes) result

Clusters | Result
Cluster 1 (5144 data) | Dominant characteristics: caused by the seller (89.52%), with reason "stock not available now" (72.86%). The dominant rejected RFQs occurred in the category electrical & electronic supplies (26.63%)
Cluster 2 (1885 data) | Dominant characteristics: caused by the seller (91.14%), with reason "similar product not available" (89.23%). The dominant rejected RFQs occurred in the category factory and industrial supplies (30.92%)

Fig. 6 Dendrogram
Table 5 The results of average-linkage clustering

Clusters | Result
Cluster 1 (4309 data) | Dominant characteristics: caused by the seller (91.01%), with reason "stock not available now" (52.23%), in the category "electrical & electronic supplies" (31.79%)
Cluster 2 (2664 data) | Dominant characteristics: caused by the seller (89.07%), with reason "stock not available now" (56.19%), in the category "construction and property supplies"
Cluster 3 (56 data) | Dominant characteristics: caused by the seller (50.09%), with reasons "product unclear" (50.00%) and "location too far" (50.00%), in the category construction & property supplies (75.00%)
3.4 Ward's Clustering Ward's clustering (Table 6) uses the numerical dataset, and the number of clusters is determined with the dendrogram illustrated in Fig. 7. As shown in Fig. 7, the longest vertical line not crossed by a horizontal line indicates 2 clusters. Thus, the optimal number of clusters for Ward's method is 2 (k = 2). Cluster 1 contains 4309 records and cluster 2 contains 2720 records.

Table 6 The result of Ward's clustering

Clusters | Result
Cluster 1 (4309 data) | Dominant characteristics: caused by the seller (91.01%), with reason "stock not available now" (52.23%), in the category "electrical & electronic supplies" (31.79%)
Cluster 2 (2720 data) | Dominant characteristics: caused by the seller (88.27%), with reason "stock not available now" (55.03%), in the category construction and property supplies (47.20%)
Fig. 7 Dendrogram resulting from Ward's method
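The Ward step differs from average linkage only in the merge criterion: at each step it merges the pair of clusters giving the smallest increase in total within-cluster variance (SSE). A SciPy sketch on stand-in data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
# two well-separated toy blobs standing in for the numeric dataset
X = np.vstack([rng.normal(0, 0.4, (30, 2)),
               rng.normal(6, 0.4, (30, 2))])

# Ward linkage minimizes the increase in within-cluster SSE at each merge
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree at k = 2
print(np.bincount(labels)[1:])                   # cluster sizes
```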
Table 7 Silhouette Coefficient Index of the three clustering methods

No | Method | Silhouette index
1 | K-modes | 0.003
2 | Average-linkage | 0.419
3 | Ward's | 0.461
3.5 Validity Result of Clustering with Silhouette Coefficient The clustering validity test is carried out using the silhouette coefficient index (SCI), which measures how closely related the objects within a group or cluster are. Table 7 shows the SCI of the three clustering methods. Ward's clustering therefore has the most optimal result in terms of accuracy and consistency, with the value closest to 1, namely 0.461.
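The SCI can be computed directly from its definition, as in the sketch below; for real work one would typically call `sklearn.metrics.silhouette_score` instead.

```python
import numpy as np

def silhouette_index(X, labels):
    """Mean silhouette coefficient: for each point, (b - a) / max(a, b),
    where a is the mean distance to its own cluster and b is the lowest
    mean distance to any other cluster."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(len(X)):
        own = labels == labels[i]
        own[i] = False
        if not own.any():               # singleton cluster scores 0
            scores.append(0.0)
            continue
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# two well-separated toy clusters give an index of 1
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
y = np.array([0] * 5 + [1] * 5)
print(silhouette_index(X, y))
```

Values near 1 indicate compact, well-separated clusters, which is why the method with the index closest to 1 is chosen as the reference.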
3.6 Discussion Result The findings should be drawn from the clustering produced by Ward's algorithm. First, rejected RFQs are dominantly caused by the seller side. Second, the most dominant reason is that the seller does not have enough of the product quantity desired by the buyer, so the buyer's RFQ fails or is rejected. Last, the product categories of RFQs with rejected status are dominated by Electrical and Electronic Supplies, as well as Construction and Property Supplies. System Development and Implementation for the Marketing Strategy. First, add a new "stock estimation" feature that sellers are obliged to update regularly; the buyer would then issue the correct order quantity on the RFQ, avoiding rejection with the reason "stock not available now". Second, because rejected RFQs occur dominantly in the categories "electronic & electric supplies" and "construction & properties supplies", the marketing division should pursue a more niche strategy for both market categories. Third, it is important to develop an automatic dashboard to monitor the movement of rejected RFQs. The dashboard was built with the Streamlit library in the Python programming language; an overview of the dashboard is shown in Fig. 8. The dashboard reports the rejected RFQs over a given period together with an indication of the dominant characteristics of those rejections, and a sidebar shows the numeric code of each variable for the pie-chart visualizations on the dashboard.
Fig. 8 Dashboard
4 Conclusion In conclusion, first, rejected RFQs are dominantly caused by the seller side, the dominant reason being that the seller has less product stock than the buyer's order quantity. A new "stock estimation" feature on each product catalog of the e-commerce platform should reduce rejected RFQs in the near future. Second, the results of Ward's cluster analysis on the ProductCategory variable show that rejected RFQs occur dominantly in the categories of electrical and electronic supplies, as well as construction and property supplies. The e-commerce platform should improve its marketing strategy and performance in these two buyer-dominant categories by opening more opportunities for sellers in them; the marketing division's work would thereby become more niche. Last, it is important to develop an automatic dashboard of rejected-RFQ reports, instead of a semi-manual one, so that reports can be refreshed simply by changing the data used. The suggestions for the Indotrading marketplace are, first, to describe and depict each type of product in more detail, focusing on the two categories "Electronic & electric supplies" and "Construction & Property supplies"; and second, to develop a proposed "pre-order" feature that informs the buyer of the lead time to fulfill the desired product units when the seller's stock is less than the buyer's order quantity.
References
1. Šoraitė M, Miniotienė N (2018) Electronic commerce: theory and practice. Integr J Bus Econ 73–79
2. Jiang D, Tung AK, Chen G (2010) Map-join-reduce: toward scalable and efficient data analysis on large clusters. IEEE Trans Knowl Data Eng 23(9):1299–1311
3. Xu R, Wunsch D (2005) Survey of clustering algorithms. IEEE Trans Neural Netw 16(3):645–678
4. Balachandran BM, Prasad S (2017) Challenges and benefits of deploying big data analytics in the cloud for business intelligence. Proc Comput Sci 1112–1122
5. Nwagu CK, Omankwu OC, Inyiama H (2017) Knowledge discovery in databases (KDD): an overview. Int J Comput Sci Inf Secur (IJCSIS)
6. Sabri IA, Man M, Bakar WA, Rose AN (2019) Web data extraction approach for deep web using WEIDJ. Proc Comput Sci
7. Tan P-N, Steinbach M, Karpatne A, Kumar V (2019) Introduction to data mining. Pearson, Michigan
8. Devi KR (2019) Evaluation of partitional and hierarchical clustering techniques. Int J Comput Sci Mob Comput 48–54
9. Nduru EK, Buulolo E, Pristiwanto (2018) Implementasi Algoritma K-Modes Untuk Strategi Marketing. KOMIK (Konferensi Nasional Teknologi Informasi dan Komputer), 12–19
10. Ramadhani L, Purnamasari I, Amijaya FD (2018) Application of complete linkage method and hierarchical clustering multiscale bootstrap method. Eksponensial 1–10
11. Paramadina M, Sudarmin, Aidid MK (2019) Perbandingan Analisis Cluster Metode Average Linkage dan Metode Ward. J Stat Appl Teach Res 22–31
12. Dhruv AJ, Patel R, Doshi N (2021) Python: the most advanced programming language for computer science applications. Scitepress, pp 292–299
Innovation Design of Lobster Fishing Gear Based on Smart IoT with the TRIZ (Teoriya Resheniya Izobretatelskikh Zadatch) Approach Roikhanatun Nafi'ah, Era Febriana Aqidawati, and Kumara Pinasthika Dharaka Abstract Indonesia is a maritime nation, with a sea area of 5.8 million km2 that makes up 70% of the country's total area and is home to 17,480 islands. Indonesia's immense potential marine resources include squid, shrimp, lobster, fish, seaweed, crabs, pearls, shellfish, and octopus. Utilizing the available human and technological resources, it is essential to process Indonesia's existing marine resources properly and responsibly. Gresik and Lamongan fishermen use small boats between 1.1 × 4 m and 1.5 × 9 m to catch fish. The most popular catches include rebon shrimp, white shrimp, lobster, crab, and fish such as snapper, dorang, and anchovy. Fishermen use diverse equipment, such as nets, fishing rods, traps, and three-layer nets. Lobster fetches about five to ten times as much money as crab, yet most of the available lobster-catching equipment is in poor condition, has only one entrance hole, and has a small carrying capacity. The innovative instrument proposed here incorporates a lobster presence detection system and a pulley-style lifting device that raises the lobster fishing gear when lobsters are detected. Additionally, the design involves contradictions that are essential to its manufacture, such as robust yet lightweight materials and straightforward yet efficient equipment. The Teoriya Resheniya Izobretatelskikh Zadatch (TRIZ) method was utilized to resolve these contradiction issues. Keywords Lobster fishing gear · Smart IoT · TRIZ
R. Nafi’ah (B) · E. F. Aqidawati · K. P. Dharaka Industrial Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] E. F. Aqidawati e-mail: [email protected] K. P. Dharaka e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_57
655
656
R. Nafi’ah et al.
1 Introduction Indonesia is a maritime nation whose sea area of 5.8 million km2 is larger than its land area, making up 70% of Indonesia's total area, and it is home to 17,480 islands. According to data from the Food and Agriculture Organization (FAO) in 2020, Indonesia is second only to China in global fisheries production. Considering this immense potential, it is vital to carefully and prudently explore and process Indonesia's existing marine resources by employing available human and technological resources. Lobster, one of the key fisheries products, is much sought after and is fished all over the world due to its great economic worth (with a price of about IDR 700,000.00 per kilogram). The amount of lobster caught globally has grown dramatically over a period of 21 years, by 25%, or an increase of 51,250 tons [1, 2]. Lobster is one of the extremely promising resources in East Java Province, Indonesia, that has not yet been properly utilized: only 7.7% of the Java Sea's lobster potential has been exploited [3]. To maximize the long-term exploration of the existing resources, it is necessary to manage fishing techniques and upgrade equipment, in addition to improving the benefits of fishing, in light of the current potential and constraints. This may be accomplished by creating technology that supports fishing operations effectively and efficiently while remaining ecologically benign. The small- and medium-scale fishing sector includes the fishing settlements of Paciran, Lamongan; Lumpur, Gresik; and Ujungpangkah, Gresik, which are located along the north coast of the Java Sea (the Pantura coast). Gresik and Lamongan fishermen use small boats between 1.1 × 4 m and 1.5 × 9 m in size to capture fish, although the catches are not ideal owing to the boats' limited capacity.
Fishermen from Lamongan and Gresik use a variety of tools, including nets, fishing rods, traps (crab fishing gear), and three-layer nets, to catch a variety of marine products. White shrimp, lobster, crab, large fish (such as snapper and dorang), small fish (such as anchovies), and rebon shrimp are the most popular products. The Lamongan and Gresik fishing communities depend heavily on these catch commodities since they are frequently found in the waters of those regions. Lobster is the catch product with the highest value, around five to ten times greater than crab. According to the Marine and Fisheries Research and Human Resources Agency of Indonesia, a kilogram of medium-sized lobster presently costs roughly 450 thousand rupiah, while a kilogram of crab varies between 50 and 60 thousand rupiah. The findings of fieldwork in Lamongan and Gresik showed that obtaining lobster depends largely on luck, namely when a lobster happens to be caught in a net or in a crab trap. The bulk of the lobster-capturing gear on the market is in poor condition, has only one entrance hole, and has a modest carrying capacity. Meanwhile, the lobster fishing equipment that has been created to address current issues [4] still has flaws. These flaws were identified through discussions with researchers and fishermen: there is no tool for detecting the presence of lobsters, there is no lifting apparatus or pulley for the lobster fishing gear, the equipment cannot
be easily stored or transported, there are only six holes in the fishing gear, and the dimensions of the apparatus, including the sizes of the inlet and outlet holes, are inappropriate. Given these flaws, a novel tool is needed to improve on the ones already in use. It includes a pulley-style lifting device to hoist the lobster fishing gear when lobsters are present and a lobster presence detection system. Additionally, it is simple to fold and carry and has perforations on each side. These innovations also involve contradictions that must be resolved in their production, such as tools made of strong but light materials, tools that are simple but can catch different kinds of lobsters, lifting tools that are light but strong, tools that are strong but foldable, and tools that are simple but have numerous uses. Based on these contradictions, the Teoriya Resheniya Izobretatelskikh Zadatch (TRIZ) approach was used to solve the contradiction problem.
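The detection-and-lift behavior described above can be sketched as a simple control loop. All names here (`Winch`, `lobster_detected`) are hypothetical stand-ins, since the chapter does not specify the sensor or actuator interfaces of the proposed gear.

```python
from dataclasses import dataclass

@dataclass
class Winch:
    """Hypothetical pulley-style lifter; `raised` mirrors the gear state."""
    raised: bool = False

    def lift(self):
        self.raised = True  # hoist the trap toward the boat

def control_step(winch: Winch, lobster_detected: bool) -> Winch:
    """One loop iteration: raise the gear only when the presence
    sensor reports a lobster and the gear is still submerged."""
    if lobster_detected and not winch.raised:
        winch.lift()
    return winch

w = Winch()
control_step(w, lobster_detected=False)  # no detection: gear stays down
control_step(w, lobster_detected=True)   # detection: gear is hoisted
print(w.raised)
```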
2 Methods 2.1 Early Identification Stage At the initial identification stage, a field study was carried out through direct observation of the research objects, namely in Ujungpangkah, Gresik and Paciran, Lamongan, East Java. The field study consisted of interviews and focus group discussions with nine fishermen from Gresik and Lamongan, chosen because they are experienced in the process of catching lobster. In addition, direct observations were made of the lobster catching process and the stages of lobster handling.
2.2 Data Collection and Processing Stage Determination of Research Variables. The research variables were obtained before the lobster fishing gear design process, to allow a comparison of current conditions with conditions after the fishing gear innovation. The variables were determined through in-depth interviews, used as a guide to collect data on the quantity of the catch and the shape of the fishing gear. In addition, we determined the number and identity of the stakeholders involved. After the in-depth interviews, we conducted a focus group discussion with the experienced fishermen, from which we generated a voice of customer containing the fishermen's needs. The voice of the customer was then translated into attributes and technical responses, which were established as the research variables.
Fig. 1 TRIZ framework
Teoriya Resheniya Izobretatelskikh Zadatch (TRIZ). The TRIZ process begins by determining contradictions among the technical responses, where each technical response is the result of translating an attribute, and the attributes are obtained from the voice of customer (VOC). Each contradiction is formulated as a specific problem and then converted into a general problem using the table of 39 engineering parameters; general solutions to the general problem are found in the table of 40 Inventive Principles [5, 6]. Finally, the last stage in TRIZ is to find the best solution (the specific solution) from the alternative solutions given [7]. Figure 1 presents the TRIZ framework employed in this study. Prototyping. In this stage, a product design was carried out based on the alternative concepts selected, following the references of the previous chapter. The basis for choosing the concept is a design that improves the working function of the previous system, so that once the product is designed, the lobster catching system will run more effectively and efficiently. The prototype was made based on the needs of the consumers (fishermen). We then designed the product with software, evaluated it, and physically built it. Product Testing. The purpose of this stage is to verify that the lobster fishing gear innovation can increase lobster catching capacity so that the gear can be utilized optimally. The product test was conducted in Ujungpangkah, Gresik, and was carried out six times to determine whether the catch from i-LOCA is more effective and optimal.
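The TRIZ flow above (specific problem, general problem via the 39 parameters, general solution via the 40 Inventive Principles) is essentially a table lookup, which can be sketched as follows. The two matrix cells below are placeholder values for illustration only, not the published 39 × 39 contradiction matrix.

```python
# Sketch of the TRIZ lookup: an (improving, worsening) parameter pair
# indexes the contradiction matrix, yielding candidate inventive
# principles. Cell contents here are PLACEHOLDERS, not the real matrix.
CONTRADICTION_MATRIX = {
    (14, 1): [1, 8, 15, 40],  # e.g. strength vs. weight of moving object
    (33, 36): [15, 26, 32],   # e.g. ease of operation vs. device complexity
}

def candidate_principles(improving: int, worsening: int) -> list[int]:
    """Return the inventive principles suggested for a contradiction,
    or an empty list when the cell is not filled in this sketch."""
    return CONTRADICTION_MATRIX.get((improving, worsening), [])

print(candidate_principles(14, 1))
```

The specific solution is then chosen by the designer from among these candidate principles, as described in the Results section.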
3 Results 3.1 Identification of Fishermen's Condition In catching marine products, the tools used by Gresik and Lamongan fishermen each have their own specifications for catching marine commodities as needed. Currently, there are seven main tools used: shrimp nets, squid nets, fishing nets, fine nets, crab traps, fishing rods, and rebon nets. The gear used is adjusted to the season in which each commodity is abundant. In carrying out the fishing process, Lamongan fishermen depart at around three o'clock in the morning using their traditional boats and bring the fishing gear considered necessary, while Gresik fishermen usually leave in the afternoon and return at night. The income earned is divided by the number of people who go to sea. As for catching lobsters, the gear that most often succeeds is fishing nets and crab traps; both are operated frequently, given that fish and crabs are commodities that often appear. Currently, the main obstacle faced is the uncertain season, which somewhat hampers fishing.
3.2 TRIZ Results Based on the interviews with fishermen, the voice of the customer was obtained and translated into attributes, which were in turn translated into technical responses. The technical responses answer the attributes, and the translation was carried out through focus group discussions with fishermen. Table 1 presents the attributes and technical responses of i-LOCA. After identifying the consumers' needs (voice of customer), the lobster fishing gear was designed with the TRIZ method, which focuses on solving the contradictions that occur. i-LOCA is expected to improve the lobster catching process and increase the lobster catching capacity. The contradictions to be solved using TRIZ were obtained from consumer demands and needs, as seen from interviews with several fishermen. These fishermen were chosen because they understand the manufacture of fishing gear and the conditions of the sea and marine resources; in addition, each has worked as a lobster catcher for many years (at least two). From the interviews, the fishermen's needs (voice of customer) for lobster fishing gear were found to be: tools made of strong but light materials at a reasonable price, simple tools that can nevertheless catch various types of lobsters and are easy to repair, lifting equipment that is light but strong, tools that are strong but foldable, a simple tool with many functions, and a tool that lasts longer. The consumer demands are used as specific problems, which are then generalized into general problems. The flow
Table 1 Attribute and technical response of i-LOCA

Attribute | Technical response
Catch capacity | Product dimensions are large; Pull strength; Catch mechanism
Use safety | Mechanism of use
Environmental friendliness | Catch mechanism; Tool material
Ease of carry | Tool weight
Size | Product dimensions are large; Mechanism of use
Selectivity | Catch mechanism
Durable | Tool material
Ease of making | Design complexity; Material price
Ease of repair | Design complexity
Live catch conditions | Catch mechanism
of TRIZ implementation starts from identifying the existing contradictory problems. These problems are then divided into two groups: useful features and harmful features. This classification is followed by use of the table of 39 technical parameters (The 39 Engineering Parameters), after which a solution is found in the table of 40 innovative principles (The 40 Inventive Principles). Specific Problem to General Problem. The specific problems created in this TRIZ are technical responses that have a negative relationship with other technical responses. The negatively related technical responses were obtained from interviews with several fishermen (based on consumer desires) and then converted into general problems using the table of 39 technical parameters. The first step is determining the contradictory technical responses: at this stage, the contradictions between technical responses are explained and classified into useful features and harmful features. A useful feature is a technical aspect one wants to improve but that causes other problems, while a harmful feature is one that worsens when the problem is solved. The contradictory technical responses are formulated as specific problems, which are then converted into general problems. Product Design Versus Materials. The contradictory technical responses are the product design and the materials used. The product design referred to here is the design of the existing lobster fishing gear, which is small and complicated or difficult to understand, while the material used is not sturdy, so that when used in the fishing process it is not
optimal. So, a tool is needed that has a simple design with lightweight but strong materials. Traction Versus Tool Weight. The contradictory technical responses are the traction and the weight of the tool. The pull strength meant here is the ability to pull fishing gear that is difficult and heavy, making it possible for the gear to fall into the sea or become dangerous; i-LOCA becomes quite heavy when the catch is large, so strong lifting equipment capable of lifting the full gear is needed. The expected improvement is a strong but light lifting device, so that the product lifter remains light even when heavy loads must be lifted. Product Design Versus Mechanism of Use. The contradictory technical responses are the product design and the mechanism of use. The product design referred to here is a design that is initially complicated, making it difficult to use and especially to store, and that also has a difficult mechanism of use. It is hoped that this lobster fishing gear will have an easy mechanism for both use and storage, so that the catch becomes optimal thanks to a fast process. The desired improvement is a product that is easy to use and store, with a simple design. The product will then be easier and safer to store, and easy to use for both catching and storage. Catch Mechanism Versus Design Complexity. The contradictory technical responses are the catch mechanism and the complexity of the design. The fishing mechanism referred to here is a long fishing process caused by a complicated design that does not attract lobsters into the gear; the existing fishing gear is also difficult to repair and manufacture. The goal is fishing gear with an effective process, so that the catch is optimal and environmentally friendly and consists of lobster, with a tool design that is easy to repair and manufacture.
The desired improvement is a product with an easy fishing mechanism and a simple design, so that fishermen can easily carry out the fishing process. The technical response of the catch mechanism is then generalized to the parameter level of automation. General Problem to General Solution. After identifying and generalizing the contradictions, the next step is to find solutions to the existing contradictions and generalize the solutions obtained, in accordance with the TRIZ principle of generating new and creative ideas. Based on the generalization of the existing problems, several solutions to the contradiction problems were found. From the available alternative solutions, the most feasible one is then selected to be used as a specific solution. To obtain these alternatives, a tool in the form of the TRIZ site is used, containing a data input section and results, called the interactive matrix; the input is the generalization of the technical responses in the previous section. Alternative solutions are then found for the technical responses in each contradiction.
General Solution to Specific Solution. The solution principles offered above, obtained from the 40 Inventive Principles, are specified as the most appropriate solutions to apply to the design of i-LOCA (Lobster Catcher). Product Design Versus Materials Used. As explained in the previous sub-chapter, the solution idea obtained is principle 35, parameter changes, at point C, which reads "Change the degree of flexibility". This principle provides the idea of making lobster fishing gear that is flexible and easy to use. Principle 35 was chosen based on consumer needs, namely ease of use and being easy to fold, carry, and repair, so the solution was formulated from this principle. Principle 10 was not chosen because it was not possible under the existing technical conditions (not in accordance with the voice of the consumers). Principle 19 was not chosen because it takes a long time to apply, making it irrelevant during the tool manufacturing process. Principle 14 was not chosen because a spherical shape does not match the conditions of the ocean waves, making it possible for the tool to be overturned. Traction Versus Tool Weight. As explained in the previous sub-chapter, the solution idea obtained is principle 1, segmentation, at point B, which reads "Make an object easy to disassemble". This principle provides the idea of making lobster fishing gear that is easy to assemble and disassemble, and therefore easy to use and store. Principle 1 was chosen based on consumer needs, namely ease of use and being easy to fold and store; this also relates to the solution of the product-design-versus-materials contradiction in the form of the degree of flexibility. Principle 40 was not chosen because it would raise production costs. Principle 26 was not chosen because it is not technically relevant: using infrared would leave the tool without a real function.
Principle 27 was also not chosen because it is technically impossible: the price of the substitute material is the same, so it is difficult to replace it with a cheaper one.

Product Design Versus Use Mechanism From the previous sub-chapter, the solution idea obtained is principle 15, dynamics, at point A, which reads "Allow (or design) the characteristics of an object, external environment, or process to change to be optimal or to find an optimal operating condition." This principle suggests making lobster fishing gear with an optimal design by providing the i-LOCA with lifting equipment so that the product can be moved and lifted easily. Principle 15 was chosen based on consumer needs, namely supporting tools that increase fishermen's safety during the fishing process, so a comfortable, safe, healthy and efficient lifting tool is needed. Principle 32 was not chosen because it is technically impossible to change the object's color. Principle 26 was also not chosen because it
is not technically relevant if infrared is used, so the tool would not function in real terms.

Catch Mechanism Versus Design Complexity From the previous sub-chapter, the solution idea obtained is principle 10, preliminary action, at point B, which reads "Pre-arrange objects such that they can come into action from the most convenient place and without losing time for their delivery". This principle suggests making lobster fishing gear of a different shape with a simple process and lobster detection so that the fishing process becomes effective. Principle 10 was chosen because it answers consumer needs for an easy and fast catching process by providing a lobster detection device or lobster puller. Principle 15 was not chosen because it does not respond to consumer needs for an easy fishing mechanism. Principle 24 was also not chosen because it is technically impossible to combine tool parts given the flexible tool concept.
3.3 Product Design

After obtaining the alternative solutions from the TRIZ methodology, alternative product designs were made, covering the physical design and the selection of the best design.

Physical Product Design During the design of this product, several alternative designs were obtained from the general solutions of TRIZ, as follows. Design 1, from principle 10: change the shape of the tool from the existing bubu so that it has a different number of inlet holes, or add a barrier for each square area; this design emerged from modifying existing tools to increase their fishing capacity. Design 2, from principle 14: use a spherical or curved shape so that the tool becomes cylindrical; this design also arose from modifying existing tools. Design 3, from the solution of principle 19: perform different actions or processes periodically during manufacture, resulting in a baby hood form; this design emerged from modifying the existing tool design. Design 4, from the solution of principle 35: change the flexibility of the tool so that it becomes an octagon with an optimal catch that is easy to fold; this design emerged from consumer needs. From the four alternative designs, Design 4 was chosen as the i-LOCA (Innovative Lobster Catcher) design because it accords with both consumer demand and the TRIZ solutions. After processing the data
using the TRIZ method, the next step is to design the product according to the selected solutions. The physical product design is illustrated with the help of 3ds Max and SolidWorks software. The octagonal shape of this product is a modification of an existing tool: i-LOCA is an improvement on the tool by Felayati [4], which still had some shortcomings. The form is also the result of a focus group discussion with several fishermen from the Mud area, Gresik, among them Mr. Samsul Ma'arif and Mr. Ikhwan, who stated that the octagonal design is the right one because so far there is no fishing gear specifically for lobster; lobster catches have not been optimal, and lobsters are caught only when trapped by accident. The octagonal shape is considered appropriate because it has an optimal fishing capacity and a good level of strength, making it stable during the fishing process at sea, and a practical design because it is easy to carry, fold and store. The i-LOCA design is also safe and easy to use and employs environmentally friendly, lightweight and strong materials. The design of i-LOCA differs from existing tools; the distinguishing criteria are adjusted to the attributes of the voice of the customers, the fishermen. Product specifications are then determined from consumer needs obtained through interviews, consisting of the fishing gear itself, lifting equipment, supports, and a detection device or lobster puller.

Design of i-LOCA (Innovative Lobster Catcher) Based on the TRIZ principle solutions, the i-LOCA (Innovative Lobster Catcher) design calls for fishing gear with an optimal fishing capacity that can be folded and stored (flexible), a strong but lightweight lifting device, and a lobster detection device. It has also been adapted to the needs of the consumers (fishermen) so that i-LOCA is easy to use.
This lobster detection tool, or lobster puller, uses a microcontroller containing a driver that controls the intensity of a red LED and the frequency of a buzzer. The controller used in this tool is an Arduino Nano control board. The LED and buzzer are activated by adjusting a variable resistor. In addition, the i-LOCA design is also derived from consumer needs (Fig. 2).
Fig. 2 i-LOCA design: a side view, b top view
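The detection driver described above — a variable resistor setting that controls the red LED intensity and the buzzer frequency — can be sketched as a pure mapping function. This is a host-testable illustration, not the authors' firmware: the 10-bit ADC range, the 8-bit duty-cycle scale, and the 100–2000 Hz band are assumptions, since the paper does not give the actual values.

```cpp
#include <cstdint>

// Hypothetical mapping from a 10-bit variable-resistor reading (0-1023),
// as an Arduino Nano ADC would produce, to an 8-bit LED PWM duty cycle
// and a buzzer frequency in Hz. Ranges are illustrative assumptions.
struct DriverOutput {
  uint8_t  ledDuty;   // 0 (off) .. 255 (full brightness)
  uint16_t buzzerHz;  // assumed 100 Hz .. 2000 Hz sweep
};

DriverOutput mapResistorToDriver(uint16_t adcReading) {
  if (adcReading > 1023) adcReading = 1023;  // clamp to ADC range
  DriverOutput out;
  out.ledDuty  = static_cast<uint8_t>((adcReading * 255UL) / 1023UL);
  out.buzzerHz = static_cast<uint16_t>(100 + (adcReading * 1900UL) / 1023UL);
  return out;
}
```

On the actual board, the returned values would feed `analogWrite` for the LED and `tone` for the buzzer; here the mapping is kept free of hardware calls so the scaling can be checked on a host.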
i-LOCA Lifting Equipment Based on the TRIZ solution, a strong but lightweight lifting device is recommended so that fishermen can easily lift the i-LOCA without falling into the sea; this lifting equipment prioritizes safety.

i-LOCA Material The material used in this i-LOCA design is iron, whereas the intended material is stainless steel. Iron was chosen for manufacture because of its low price, while stainless steel is intended because it is strong, lightweight and corrosion resistant, in accordance with consumer demand.

Prototyping of i-LOCA After the physical design of i-LOCA, the next step was to make an i-LOCA prototype. The 3-dimensional illustration of i-LOCA was produced with the help of 3ds Max and SolidWorks software, and the physical build was carried out in a tool workshop. Making a real prototype allows the tool to be tested directly so that its shortcomings can be identified. The i-LOCA prototype is shown in Fig. 3.

Prototype Testing The prototype was then tested to prove whether the planned concept meets expectations. There are four steps to testing the i-LOCA prototype. First, the prototype is tested at sea by fishermen. Second, the trial is carried out three times a day. Third, the trial compares the new tool with existing tools. Last, the catches of the two tools are compared to determine which is more optimal. In the trial process the tool was, overall, successful in catching lobsters, but there were several weaknesses, including timing and test points that could not be tested continuously. The trials carried out with the tool produced several catches; several lobsters were obtained, showing optimal results.
Fig. 3 i-LOCA prototype
Fig. 4 Prototype testing: a Tool trial in Ujungpangkah, b The i-LOCA device was put into the sea
From a physical design concept, the existing tools fulfill the two main aspects of the principle of passive fishing gear, namely ease of entry and difficulty of exit for the lobsters. Both aspects are well supported in the tool made, allowing the lobsters present to be trapped in large numbers. In the process of testing the tool, intense communication with the fishermen is key to whether the tool can be used or not. Figure 4 shows the prototype testing process.
4 Discussion

4.1 Tool Design Analysis

TRIZ Analysis Contradicting technical responses become the input that is resolved through the TRIZ stage, which produces solutions for the final product design. This study resolves four existing contradictions. For each contradiction, the most feasible of the generated solutions is chosen and applied to the design of the i-LOCA product; the proposed solutions were discussed among the researchers, fishermen and i-LOCA producers. The first contradiction is between the product design and the materials used. The solutions offered are principles 35, 10, 19 and 14. From these, the principle of parameter changes (35) is chosen at point C, which reads "change the degree of flexibility". The purpose of this principle is to change the flexibility of the tool, giving the idea of lobster fishing gear that is flexible and easy to use. This principle was chosen based on consumer demand: fishermen have been struggling with tools that take up a lot of space on the boat, so they want tools that are easy to fold, carry and store. This is solved by principle 35, making the tool easy to fold, carry and store. The second contradiction is between the traction and the weight of the tool. The solutions offered are principles 40, 26, 27 and 1. Of these, the chosen solution is principle 1, segmentation, at point B
which reads "Make an object easy to disassemble". The purpose of this principle is to divide the object into several parts or make it easy to disassemble, giving the idea of lobster fishing gear that is easy to take apart and therefore easy to use and store. This principle was chosen over the others because it best suits the conditions of the existing lobster fishing gear. It also supports the first contradiction, which requires a tool that is flexible and easy to disassemble; with that ease of disassembly, fishermen can use i-LOCA readily.

Tool's Trial Analysis The prototype, built according to the details of the solutions obtained, is then tested to determine whether i-LOCA is better than the lobster fishing gear on the market. At this stage, the fishing capacity (catch), safety and strength are tested. The trial was carried out in the Ujungpangkah waters, Gresik, by comparing the two tools: the gear on the market (bubu) and i-LOCA. This reveals the catch under the same conditions and treatment for two different fishing gears, so the results are valid because both are treated identically. After discussions with fishermen about the results achieved by the tool, several analyses of the trial's non-conformities were reached. The ocean wave factor is difficult to predict and is related to the wind blowing at sea. The placement of the tools was also not suitable because of the clarity of the water. Finally, the season factor carries the biggest possible performance discrepancy, since the right lobster fishing season is January–April.
4.2 Effect of Tool Presence

The effect of the resulting tool's presence is assessed once the tool has been shown to carry out its planned function, namely catching lobster. Referring to that function, together with several evaluations, the effects of the lobster fishing gear are analyzed. The use of i-LOCA increases the income of lobster fishermen, as evidenced by the increase in the number of lobsters caught. In addition, the level of safety has increased, so fishermen will be safe when catching lobsters. The easy fishing mechanism also makes fishermen comfortable because the design is easy to understand and use. Moreover, the usability of this tool has the biggest impact on the objectives of learnability, efficiency, memorability, errors, and satisfaction; these five elements of tool usability produce customer satisfaction.
5 Conclusions

The design of lobster fishing gear produced with the TRIZ approach has met the criteria of being productive, with a catching capacity that exceeds the previous fishing gear; user friendly, with minimal application of mechanical systems; environmentally friendly, with a passive fishing (trap) system; and commercializable, which can increase the income of fishermen. Furthermore, the resulting design resolves the existing contradictions, namely product design versus material used, pull strength versus tool weight, product design versus use mechanism, and catching mechanism versus design complexity, so that the lobster fishing gear meets the existing criteria: a tool made of strong but light materials, a simple tool that can catch various types and sizes of lobster, a lifting tool that is light but strong, a tool that is strong but foldable, and a simple tool with many functions. The contradiction solutions obtained from the TRIZ approach are: making lobster fishing gear in the shape of an octagon that forms a circle; making lobster fishing gear that is easy to disassemble so that it is easy to use and store; making lobster fishing gear with an optimal design by providing lifting equipment on i-LOCA so that the product can be moved and lifted easily; and making lobster fishing gear of a different shape with a simple process and lobster detection so that the fishing process becomes effective. Some suggestions for further research on the design of lobster fishing gear are as follows. The overall design of the gear should be adjusted to the needs of the consumers, in this case fishermen. Testing of the tool should be carried out during the lobster catching season, in the months when lobsters are actually caught.
Trial activities should be carried out optimally under the same conditions and treatment, comparing fishing gear that has the same function. Prototypes should use the actual materials so that the tool functions optimally. When assembling electronic components, attention should be paid to the design and shape of the packaging so that it is safe for use at sea.
References

1. Elliot M (2006) Seafood Watch—American Lobster. Monterey Bay Aquarium. https://www.seafoodwatch.org/. Last accessed 14 May 2022
2. Everett JT (1972) Inshore lobster fishing. Fishing Facts 4:26
3. Kanna I (2006) Lobster: Penangkapan, Pembesaran, Pembenihan. Kanisius, Jakarta
4. Felayati (2011) Perancangan Alat Tangkap Lobster dengan Pendekatan Quality Function Deployment (QFD) dan Function Analysis System Technique (FAST) serta Manfaatnya terhadap Klaster Industri Perikanan (Studi Kasus: Komunitas Nelayan Paciran)
5. Barry K, Domb E, Slocum MS (2010) TRIZ—what is TRIZ. TRIZ J 603–632
6. Domb E (1997) Contradictions: air bag application (The TRIZ Journal). The TRIZ Institute, USA
7. Orloff MA (2006) Inventive thinking through TRIZ: a practical guide. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-33223-7
Controllable Smart Locker Using Firebase Services

Ivan Alexander and Rico Wijaya
Abstract The smart locker system uses Firebase so it can be controlled remotely. The system is based on the ESP8266 microcontroller running the Blynk library, so it can be controlled from a smartphone. For the locking mechanics, the system uses a magnetic solenoid door lock, which can be controlled digitally with on/off logic; there is also a user database, so only certain users can operate the lock. In this research, several experiments were conducted to test the locking system. It works well, as indicated by very good results: 100% successful locking and 100% successful unlocking operations when using a mobile data signal, and a user-database addition system that runs smoothly. Finally, the smart locker has been accepted by the community: in the user experience survey, more than 95% of the responses were positive.

Keywords Smart locker · ESP8266 · Android application · Firebase
1 Introduction

Locks play an important role in a home security system or storage area [1]. Most existing locker door lock systems still use conventional keys, namely mechanical keys [2]. Along with the rapid development of digital technology, there are several lock-system solutions that offer better security [3]. On this occasion, we explain some of the problems that often occur with conventional locks: locks become loose or damaged; from time to time, homeowners have to deal with doorknobs that will not open even with the right key; and, most troublesome, a user with many keys has to choose which one fits, which is

I. Alexander (B) · R. Wijaya
Computer Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_58
quite time consuming; there is also the possibility of choosing the wrong key, and forcing it in can break the key inside the lock [4, 5]. The authors therefore propose not to use conventional keys and to replace them with a lock system that the user controls via a smartphone — a smart drawer system (smart locker) [5]. This saves the time spent choosing keys and reduces the risks of wrong, loose, or broken keys. This security system does not rely on mechanics as an interface but on electronics, which are quite difficult to break into, in part because doing so requires knowledge of electronics [6, 7]. A reliable security system for lockers is needed [8]. Moreover, many locker lock systems still use conventional locks, which remain very vulnerable to burglary. Therefore, many security systems for lockers have been developed using various methods, ranging from adding new systems or upgrading existing ones to automated systems using sensors and microcontrollers. However, automatic systems still have drawbacks, since they are considered not safe enough for locker owners: an automatic system may not even be able to report its current state [9]. To overcome this, a locker locking system is needed that can be monitored in real time to show whether it is locked or not [10, 11].

Research conducted by Akash A. Phalak, Piyush N. Jha, Prof. N. C. Thoutam and Prashant V. Rahane produced a smart locker system based on Bluetooth Low Energy (BLE) [12]. In that study the user opens the locker with a smartphone connected to the BLE locker; once connected, the user sends an open signal to the cloud, and an administrator checks whether the sender is the real user. The system works, but it requires an extra fee to pay an administrator who must be on standby 24 hours a day, which is very inefficient. Research by Niaz Mostakim, Ratna R. Sarkar and Md. Anowar Hossain produced a locker system that uses face detection and OTP codes [13]; this system is good, but requiring both an OTP and face detection is very time consuming for users opening the locker, whereas the main purpose of this research is to simplify and shorten the time for opening and closing drawers or lockers. Research by Arpit Sharma, Abhishek Jain, Abhishek Bagora, Kavita Namdeo and Anurag Punde produced a smart locker that can be opened with a fingerprint scanner [14]. There, the user opens the locker with a registered fingerprint: if it matches the database the locker opens, and if not a buzzer sounds. This system is good, but fingerprint reading errors occur when the user's hands are dirty; the system cannot distinguish real users from others, and there is no history tracing of the last user of the locker [15]. We therefore propose a quickly accessible locking system to lock and unlock lockers for users who have been granted locker access. The locker is accessible only to users verified as registered in the database.
2 Research Method

In this research, a smart locking system is built in which the locker is locked and unlocked through a mobile application. The system uses the ESP8266 microcontroller to drive a relay so that the magnetic door lock can lock and unlock the locker. The microcontroller is connected to a Wi-Fi network so that it can communicate with the application on the user's smartphone; the Wi-Fi connection also enables the user-database storage feature in the cloud. We use the Firebase database, which handles large data sets effectively and syncs and stores data between users in real time via the Internet.

Figure 1 is the block diagram of the smart lock system. The ESP8266 microcontroller is the main controller; its built-in Wi-Fi module allows the system to validate data against the database. The input of the system is the smartphone application. After the microcontroller receives data from the user, it turns the relay, which is connected to the magnetic door lock, on or off as the output. An LED is used as an indicator of whether the relay is off or on.

Figure 2 is a flow chart of the smart lock system, which uses the Blynk library. First, the ESP8266 performs its initialization; after completing it, the ESP8266 connects the system to the Blynk server so that the system can be controlled over the Internet from a smartphone. The system then waits for a lock or unlock signal from the user. On a lock signal, the magnet of the door lock is turned off and the LED is turned on; otherwise the magnet is turned on and the LED is turned off. In addition, the system provides a feature whereby only users registered in the locker's Firebase database can access the locker; the purpose of this feature is security, so that only authorized users gain access.

Fig. 1 System block diagram
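The lock/unlock behavior in the flow chart — a lock command de-energizes the magnet and lights the LED, an unlock command does the opposite — can be modeled as a small pure function. This is a host-testable sketch of the described logic, not the Blynk firmware itself; the type names are illustrative.

```cpp
// Minimal model of the relay/LED logic described for the smart locker:
// "lock" turns the magnetic door lock Off and the indicator LED On;
// "unlock" turns the magnet On and the LED Off.
struct LockerState {
  bool magnetOn;  // magnetic door lock energized
  bool ledOn;     // indicator LED lit
};

enum class Command { Lock, Unlock };

LockerState applyCommand(Command cmd) {
  if (cmd == Command::Lock) return {false, true};  // magnet Off, LED On
  return {true, false};                            // magnet On,  LED Off
}
```

On the real device, the two booleans would map to `digitalWrite` calls on the relay and LED pins inside the Blynk write handler.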
Fig. 2 System flow chart
Figure 3 shows the flow for a new user to gain access to the locker. The new user first registers in the smart locker application with a username and password; their identity is then pushed to the locker's Firebase database, after which the new user can access the locker by selecting the unlock or lock menu on the smartphone. Figure 4 shows the flow for an existing user. The user logs in to the smart locker application with their username and password, and the Firebase database checks whether the credentials are correct and registered. If so, the user gains access to the locker through the unlock or lock menu; otherwise the user cannot access the locker and must re-enter their username and password.
Fig. 3 Flow for new users
Fig. 4 Flow for old users
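The registration and login flows of Figs. 3 and 4 can be sketched with the cloud database stood in by an in-memory map. This is an illustration of the described flow, not the Firebase client code: `MockLockerDb` and its methods are hypothetical names, and real Firebase access would go through its authentication and Realtime Database APIs.

```cpp
#include <string>
#include <unordered_map>

// In-memory stand-in for the Firebase user database of Figs. 3 and 4:
// registration stores a username/password pair, and login grants locker
// access only when both match a stored record.
class MockLockerDb {
 public:
  // New-user flow (Fig. 3): add the credentials if the username is free.
  bool registerUser(const std::string& user, const std::string& pass) {
    if (users_.count(user)) return false;  // username already taken
    users_[user] = pass;
    return true;
  }
  // Existing-user flow (Fig. 4): check username and password together.
  bool login(const std::string& user, const std::string& pass) const {
    auto it = users_.find(user);
    return it != users_.end() && it->second == pass;
  }
 private:
  std::unordered_map<std::string, std::string> users_;
};
```

A failed `login` corresponds to the "If False" branch in Fig. 4, where the user must re-enter their credentials.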
3 Result and Analysis

In this research, an experiment was conducted to measure the success of locking and unlocking the locker. The parameter measured is the success rate of locking and unlocking through signals sent from an Android smartphone application by clicking the lock and unlock menus; the purpose of this experiment is to make sure that the smartphone application works correctly. In total, 500 technical experiments were carried out, divided into 250 locking and 250 unlocking operations. In this first set of 500 experiments, the smartphone was connected via Wi-Fi to the Internet. The percentage of successful locking operations is 98% (5 failures), as seen in Fig. 5, and the percentage of successful unlocking operations is 99% (2 failures), as seen in Fig. 6. Our strong suspicion is that the failures were caused by a Wi-Fi connection problem, because the Wi-Fi was shared among 5 people and the Internet connection speed was only 1 Mbps. For the next experiment we switched the smartphone's Internet connection to mobile data. A total of 500 technical experiments were again carried out, divided into 250 locking and 250 unlocking operations. The percentage of successful locking and unlocking operations is 100% (0 failures), as seen in Figs. 7 and 8, using the mobile data signal. From this experiment we can conclude that even if the Wi-Fi Internet connection is the problem, there is a backup plan for accessing the locker by
Fig. 5 Locking success rate using Wi-Fi connection
Fig. 6 Unlocking success rate using Wi-Fi connection
using the mobile data signal; another plan is to speed up the Wi-Fi Internet connection. The user can also check whether the locker is locked or not via the smartphone application. The next experiment was done by connecting the ESP8266 microcontroller to Wi-Fi that is connected to the Internet. When the microcontroller is connected properly to Wi-Fi and the Wi-Fi is connected to the Internet, a newly registered user's data is updated in the database. However, if the Wi-Fi is off, the microcontroller is not connected to Wi-Fi, or the Internet is down, the database addition does not work. The result is shown in Table 1; the Internet speed for this experiment was 100 Mbps. In experiments 1 to 5 we can see that when the microcontroller is connected to Wi-Fi and the Internet connection is up, the newly registered user is successfully updated to the database. In experiments 6 to 10 we
Fig. 7 Locking success rate using mobile data connection
Fig. 8 Unlocking success rate using mobile data connection
can see that although the microcontroller is connected to Wi-Fi, when the Internet is down the newly registered user is not updated. Experiments 11 to 15 are the same as experiments 1 to 5. In the last set, experiments 16 to 20, the microcontroller is not connected to Wi-Fi; even though the Internet connection is up, the newly registered user is not updated. We provide an alternative method for a power outage: the system can be connected to a power bank for power, and a smartphone mobile hotspot can be used to connect the ESP8266 microcontroller to the Internet. In addition, several qualitative parameters were taken through an experience survey of the surrounding community, with questions about the effectiveness of the smart locker, including: 1. Are you interested in this locker? 2. Would the locker be practical enough to use?
Table 1 Database add user test

Experiment | Microcontroller connected to Wi-Fi | Wi-Fi connected to the internet | User database
1          | Yes                                | Yes                             | Updated
2          | Yes                                | Yes                             | Updated
3          | Yes                                | Yes                             | Updated
4          | Yes                                | Yes                             | Updated
5          | Yes                                | Yes                             | Updated
6          | Yes                                | No                              | Not updated
7          | Yes                                | No                              | Not updated
8          | Yes                                | No                              | Not updated
9          | Yes                                | No                              | Not updated
10         | Yes                                | No                              | Not updated
11         | Yes                                | Yes                             | Updated
12         | Yes                                | Yes                             | Updated
13         | Yes                                | Yes                             | Updated
14         | Yes                                | Yes                             | Updated
15         | Yes                                | Yes                             | Updated
16         | No                                 | Yes                             | Not updated
17         | No                                 | Yes                             | Not updated
18         | No                                 | Yes                             | Not updated
19         | No                                 | Yes                             | Not updated
20         | No                                 | Yes                             | Not updated
3. Would you find it helpful to have this locker? Of the 150 people sampled, 98% of survey participants stated that they are interested in the locker, 96% said the locker is quite practical to use, and 98% felt it would be helpful to have this kind of locker.
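The database-update behavior observed in Table 1 reduces to a simple conjunction: a registration reaches the database only when the microcontroller is on Wi-Fi and that Wi-Fi has Internet access. A minimal model of this observation (the No/No case is not in the table but follows from the same logic):

```cpp
// Summarizes the pattern in Table 1: a new registration is pushed to the
// Firebase user database only when the microcontroller is connected to
// Wi-Fi AND that Wi-Fi network has a working internet connection.
bool databaseWillUpdate(bool mcuOnWifi, bool wifiHasInternet) {
  return mcuOnWifi && wifiHasInternet;
}
```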
4 Conclusion

The smart locker works well, as indicated by the very good locking and unlocking results: 100% successful locking and 100% successful unlocking operations using the mobile data signal. In addition, if the Wi-Fi network is well connected to the Internet, the user-database addition system runs smoothly. Finally, the locker has been accepted by the community, since more than 95% of the survey responses are positive. For future work we will try to provide a borrowing system, with two or more lockers available for borrowing; the user can borrow and choose
the locker through the smartphone application. We will also add a security system: if a thief forcefully damages the locker, a notification will be sent to the user via the smartphone application.
References

1. Fragano M, Crosetto P (1984) Solid state key/lock security system. IEEE Trans Consum Electron 30:604–607. https://doi.org/10.1109/TCE.1984.354112
2. Churi A, Bhat A, Mohite R, Churi P (2016) E-zip: an electronic lock for secured system. In: 2016 IEEE international conference on advances in electronics, communication and computer technology (ICAECCT), pp 45–49. https://doi.org/10.1109/ICAECCT.2016.7942553
3. Yuen KF, Wang X, Ma F, Wong YD (2019) The determinants of customers' intention to use smart lockers for last-mile deliveries. J Retail Consum Serv 49:316–326. https://doi.org/10.1016/j.jretconser.2019.03.022
4. Patil KA, Vittalkar N, Hiremath P, Murthy MA (2020) Smart door locking system using IoT. Int Res J Eng Technol (IRJET) 7:3090–3094
5. Wee BS (2021) Design and implementation of an Arduino based smart fingerprint authentication system for key security locker. Int ABEC 155–160
6. Prasad H, Sharma RK, Saini U (2020) Digital (electronic) locker. In: 2020 First IEEE international conference on measurement, instrumentation, control and automation (ICMICA), pp 1–4. https://doi.org/10.1109/ICMICA48462.2020.9242688
7. Verma S, Jahan N, Rawat P (2019) Internet of things: its application usage and the problem yet to face. In: International conference on intelligent data communication technologies and internet of things, vol 38, pp 238–244. https://doi.org/10.1007/978-3-030-34080-3_27
8. Nagarajan L, Arthi A (2017) IoT based low cost smart locker security system. Int J Res Ideas Innov Technol 3:510–515
9. Alqahtani HF, Albuainain JA, Almutiri BG, Alansari SK, AL-awwad GB, Alqahtani NN, Tabeidi RA (2020) Automated smart locker for college. In: 2020 3rd international conference on computer applications & information security (ICCAIS), pp 1–6. https://doi.org/10.1109/ICCAIS48893.2020.9096868
10. Jeong JI (2016) A study on the IoT based smart door lock system. Inf Sci Appl (ICISA) 376:1307–1318. https://doi.org/10.1007/978-981-10-0557-2_123
11. Chikara A, Choudekar P, Asija D (2020) Smart bank locker using fingerprint scanning and image processing. In: 2020 6th international conference on advanced computing and communication systems (ICACCS), pp 725–728. https://doi.org/10.1109/ICACCS48705.2020.9074482
12. Phalak AA (2019) An IoT based smart locker using BLE technology. Int J Eng Res Technol (IJERT) 8:274–276
13. Mostakim N, Sarkar RR, Hossain A (2019) Smart locker: IoT based intelligent locker with password protection and face detection approach. Int J Wirel Microwave Technol 3:1–10. https://doi.org/10.5815/ijwmt.2019.03.01
14. Sharma A, Jain A, Bagora A, Namdeo K, Punde A (2020) Smart locker system. Int Res J Modern Eng Technol Sci 2:911–916
15. Meenakshi N, Monish M, Dikshit KJ, Bharath S (2019) Arduino based smart fingerprint authentication system. In: 2019 1st international conference on innovations in information and communication technology (ICIICT), pp 1–7. https://doi.org/10.1109/ICIICT1.2019.8741459
Controllable Nutrient Feeder and Water Change System Based on IoT Application for Maintaining Aquascape Environment Daniel Patricko Hutabarat, Ivan Alexander, Felix Ferdinan, Stefanus Karviditalen, and Robby Saleh
Abstract The purpose of this research is to produce a system that makes it easier for users to maintain one of the key water parameters, TDS (Total Dissolved Solids), using IoT (Internet of Things) technology. The hardware consists of an ESP32 microcontroller, which connects to the internet via Wi-Fi, and a DFRobot TDS meter sensor as its input. A peristaltic pump performs the nutrient dosing and is driven through an 8-channel 5 V relay module; a water pump driven by an L298N motor driver performs water changes, and a continuous-rotation servo dispenses minerals. The Blynk application communicates directly with the device: users can configure scheduling, the TDS range, and the desired nutrient dosage. The system first measures TDS as its working reference. If the TDS in the water exceeds the maximum set by the user, the system changes the water and then adds minerals until the ideal TDS entered by the user is reached; once the TDS falls to the minimum, the water-change process stops. The nutrient dosing system also uses TDS as a condition in addition to its schedule: if the TDS exceeds the maximum just before a scheduled dose, the TDS is first lowered to the minimum by changing the water, and minerals are added if the TDS is below the ideal value. Results from this research show that the dosing pump has an accuracy of 99.5% and the scheduling system works 100% of the time. Keywords TDS · Nutrient feeder · Water change · IoT · Blynk · Aquascape
1 Introduction Gardening is a hobby that is quite popular. In general, this activity is carried out on land, but gardening can now also be done in water. D. P. Hutabarat (B) · I. Alexander · F. Ferdinan · S. Karviditalen · R. Saleh Computer Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_59
679
680
D. P. Hutabarat et al.
Aquascape art can be an alternative way to create a garden in the water [1]. An aquascape, or freshwater planted tank, is the art of naturally arranging aquatic plants, rocks, wood, and other media so that they look beautiful in an aquarium; it is often described as "gardening in the water" [2]. Aquascapes have become part of interior design, bringing nature into the house, and the number of aquascape enthusiasts has grown in recent years.

One of the important parameters in an aquascape is general hardness (GH). GH can be estimated with a TDS meter: the higher the TDS level, the harder the water, and vice versa [3]. Some types of plants are more comfortable and grow better in ecosystems with high TDS levels, that is, harder water, while other types do better in soft water [4]. The ideal GH level for an aquascape is 3, which corresponds to a TDS reading of about 120 ppm, so it is highly recommended to keep the TDS level at 120 ppm (GH = 3).

Nowadays, Internet of Things (IoT) applications cover almost all aspects of daily life. IoT applications in agriculture allow spraying, weeding, and humidity and temperature monitoring to be done automatically and accurately in real time [5]. In an IoT-based application, a chip called a microcontroller is used to control and monitor the system [6–8]. For communication between the microcontroller and the server, technologies such as Wi-Fi, the internet, 4G, and the global system for mobile communications (GSM) are used [9, 10], and cloud computing is used to upload data from the system or smartphone to the server and to download the required data [11, 12].

In this research, the system consists of two sub-systems, the water-change system and the nutrient feeder, which make it easier for users to care for the aquascape environment. The parameter that can be monitored and controlled is Total Dissolved Solids, measured with a TDS meter.
If the TDS exceeds the specified range, the water is replaced with new water and the system then adds minerals to raise the TDS back to the ideal value. The system ensures that the TDS does not exceed the user-defined maximum before nutrients are dosed according to the predetermined schedule.
2 Research Method Software and hardware are developed in this research and used to construct the system. The block diagram of the system is shown in Fig. 1. Based on the block diagram, the TDS meter sends its data to the microcontroller for processing; if the reading is outside the user's desired range, the microcontroller orders the water-change system to run until the TDS returns to that range, as verified by repeated TDS measurements. The dosing pump runs according to the specified schedule, and the water-change system first ensures that the TDS is not higher than the maximum TDS entered by the user. Settings previously entered by the user are retrieved from memory. The ESP32 connects to Blynk, the schedule is recorded, and the TDS
Fig. 1 Block diagram
is measured. Next, the system checks whether the current time matches the schedule. If not, it checks the water reservoir; if the reservoir has run out, the system sends the alert "The Shelter is Out of Water". If the time matches, the system checks whether the current TDS is greater than the maximum TDS entered by the user; if not, it runs the dosing pump and then proceeds to the reservoir check. If the TDS does exceed the maximum, the system runs the water-change process, removing water according to the percentage specified by the user and then refilling until the water-level sensor is triggered. The system then checks whether the current TDS is greater than the ideal TDS specified by the user. If yes, it can run the dosing pump; if not, it adds minerals, waits one minute for the minerals to dissolve, and re-checks the TDS; if the TDS has still not reached the ideal value, more minerals are added. If after three mineral additions the TDS has not reached the ideal value, the system issues the alert "Mineral Out"; once the TDS is ideal, the dosing pump runs and the system returns to monitoring the TDS value. When the software is operating, the system downloads and displays the TDS data so that users can monitor it, together with the last entered user input to help the user remember the previous settings. The user then decides whether to change the settings; if so, the stored user input is updated, and if not, the system simply re-downloads the water-parameter data for display. The schematic diagram of the developed system is shown in Fig. 2. The system is designed around an ESP32 microcontroller and uses three L298N motor drivers to run the water pumps, a 12 V 3 A power supply as the power source, and an 8-channel 5 V relay module.
The system uses eight peristaltic-pump slots and two water pumps to perform water changes, a float water-level sensor to prevent excessive water from entering, and a servo to put
Fig. 2 Schematic of dosing pump, water change system, and mineral feeder
minerals into the aquarium. Another float water level sensor is used to monitor water reservoirs.
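The decision flow described in this section can be summarized as a small control-loop sketch. This is an illustrative reconstruction, not the authors' firmware; all function and variable names (decide_action, reservoir_empty, and so on) are assumptions for the example:

```python
# Illustrative sketch of the control logic described in Sect. 2; the names
# used here are assumptions, not taken from the authors' implementation.

def decide_action(tds, max_tds, ideal_tds, scheduled, reservoir_empty):
    """Choose the next action, following the flowchart logic:
    check the reservoir, then compare TDS against the user's thresholds."""
    if reservoir_empty:
        return "alert: The Shelter is Out of Water"
    if tds > max_tds:
        return "water change"        # lower TDS before anything else
    if scheduled:
        return "run dosing pump"     # schedule reached and TDS acceptable
    if tds < ideal_tds:
        return "add minerals"        # then wait 1 min and re-check TDS
    return "idle"
```

For example, with a maximum TDS of 145 ppm and an ideal TDS of 120 ppm, a reading of 150 ppm at a scheduled dosing time triggers a water change first, while a reading of 130 ppm lets the dosing pump run.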
3 Results and Discussion Several experiments were conducted in this research. The first is the dosing-pump calibration experiment, in which we measured how much time the dosing pump takes to produce 1 ml of output. Based on the experiment, the pump takes 1300 ms to produce 1 ml, so producing a 3 ml output takes 3900 ms. To determine the accuracy of the dosing pump, a 3 ml dose was entered in the Blynk application and a measuring cup at the end of the dosing-pump hose was used to check whether the liquid delivered was in fact 3 ml. Table 1 shows the dosing-pump calibration experiments. Of the 20 experiments, 3 produced an output of 3.1 ml, which did not match the 3 ml input in the application. Based on these data, it can be concluded that the dosing pump has an accuracy of 99.5%. One factor that can cause the 0.1 ml deviation from the target is the liquid at the end of the dosing-pump hose: sometimes the liquid stops at the end of the hose and sometimes it drips, affecting the results of the experiment. The second experiment tests the scheduling: we adjust the schedule in the monitoring tab of the scheduling menu, as shown in Fig. 3. As seen in Fig. 3, we set the nutrient-dose schedule to 01:35 for the first dose and 01:36 for the second dose. Table 2 and Figs. 4 and 5 show the results of the scheduling experiment. All 20 experiments had the same result: successful.
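The calibration arithmetic above can be checked directly. The snippet below reproduces the reported 1300 ms/ml timing and the 99.5% accuracy figure over the 20 trials in Table 1; the function name pump_on_time_ms is illustrative, not from the paper:

```python
# Reproduces the calibration arithmetic: 1300 ms of pump-on time per ml,
# and the mean accuracy over the 20 dosing trials reported in Table 1.

MS_PER_ML = 1300

def pump_on_time_ms(dose_ml):
    """Pump-on duration needed to deliver dose_ml milliliters."""
    return dose_ml * MS_PER_ML

# Measured outputs for a 3 ml target: 3 of 20 trials delivered 3.1 ml.
outputs = [3, 3, 3, 3.1, 3.1, 3, 3, 3, 3, 3,
           3, 3, 3, 3, 3, 3.1, 3, 3, 3, 3]
target = 3.0
mean_error = sum(abs(o - target) / target for o in outputs) / len(outputs)
accuracy = (1 - mean_error) * 100  # mean relative accuracy, in percent
```

Three trials off by 0.1 ml on a 3 ml target give a mean relative error of 0.5%, which is the source of the 99.5% accuracy figure.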
Table 1 Dosing pump calibration experiments

Experiment   Application input (ml)   Dosing pump output (ml)
1            3                        3
2            3                        3
3            3                        3
4            3                        3.1
5            3                        3.1
6            3                        3
7            3                        3
8            3                        3
9            3                        3
10           3                        3
11           3                        3
12           3                        3
13           3                        3
14           3                        3
15           3                        3
16           3                        3.1
17           3                        3
18           3                        3
19           3                        3
20           3                        3

Fig. 3 Scheduling experiment
Table 2 Scheduling experiment result 1

Experiment   Time schedule   Status
1            01.35           Succeed
2            01.36           Succeed
3            01.38           Succeed
4            01.39           Succeed
5            01.41           Succeed
6            01.42           Succeed
7            01.44           Succeed
8            01.45           Succeed
9            01.47           Succeed
10           01.48           Succeed
11           01.50           Succeed
12           01.51           Succeed
13           01.53           Succeed
14           01.54           Succeed
15           01.56           Succeed
16           01.57           Succeed
17           01.59           Succeed
18           02.00           Succeed
19           02.02           Succeed
20           02.03           Succeed
From the data above, it can be concluded that the dosing-pump scheduling works with 100% accuracy.
Fig. 4 Scheduling experiment result 2
Fig. 5 Scheduling experiment result 3
The third experiment is the TDS experiment. In Fig. 6, nutrient dosing was scheduled at 22:21 and 22:22, with the maximum TDS set at 145 ppm. The time shown is 22:21, which means it is time for the system to provide nutrients, while the TDS menu shows that the currently measured water TDS is 139 ppm. Figure 7 shows a graph of the experimental activity: the dosing pump is not running, and the water change in progress is indicated by the blue graph. From this it can be concluded that when the time for a nutrient dose arrives and the TDS is within 20 ppm of the maximum TDS set by the user, the dosing pump does not run and the system changes the water first.
4 Conclusion and Future Works The system works well: the dosing pump delivers doses with 99.5% accuracy according to user input, and the scheduling functions with 100% accuracy according to user input. In addition, if the TDS comes within 20 ppm of the user-defined maximum, the water is changed first and minerals are then added. For future work we will try a dosing-pump hose with a smaller diameter at the end, so that the accuracy of the dosing-pump output can be increased, and develop web- or Android-based applications that can facilitate the addition of features and databases.
Fig. 6 TDS experiment
Fig. 7 TDS experiment result
References

1. Ramdani D (2020) Rancang Bangun Sistem Otomatisasi Suhu Dan Monitoring pH Air Aquascape Berbasis IoT (Internet of Things) Menggunakan NodeMCU ESP8266 Pada Aplikasi Telegram. INISTA: J Inf Inf Syst Softw Eng Appl 3:59–68. https://doi.org/10.20895/inista.v3i1.173
2. Martin M (2013) Aquascaping: landscaping like a pro. Aquarist's guide to planted tank aesthetics and design, 2nd edn. Ubiquitous Publishing
3. Fikri M, Musthafa A, Pradhana FR (2021) Design and build smart aquascape based on pH and TDS with IoT system using fuzzy logic. Proc Eng Life Sci 2. https://doi.org/10.21070/pels.v2i0.1166
4. Hutabarat DP, Susanto R, Prasetya B, Linando B, Senanayake SMN (2022) Smart system for maintaining aquascape environment using internet of things based light and temperature controller. Int J Electr Comput Eng 12:896–902. https://doi.org/10.11591/ijece.v12i1.pp896-902
5. Anushree MK, Krishna R (2018) A smart farming system using Arduino based technology. Int J Adv Res Ideas Innov Technol 4:850–856
6. Triantoro R, Chandra R, Hutabarat DP (2020) Multifunctional aromatherapy humidifier based on ESP8266 microcontroller and controlled using Android smartphone. IOP Conf Ser Earth Environ Sci 426. https://doi.org/10.1088/1755-1315/426/1/012152
7. Syahputra ME, Hutabarat DP (2020) Eco friendly emergency alert system (EFEAS) based on microcontroller and android application. IOP Conf Ser Earth Environ Sci 426. https://doi.org/10.1088/1755-1315/426/1/012160
8. Hutabarat DP, Budijono S, Saleh R (2018) Development of home security system using ESP8266 and android smartphone as the monitoring tool. IOP Conf Ser Earth Environ Sci 195. https://doi.org/10.1088/1755-1315/195/1/012065
9. Yuehong Y (2016) The internet of things in healthcare: an overview. J Ind Inf Integr 1:3–13. https://doi.org/10.1016/j.jii.2016.03.004
10. Parab AS, Joglekar A (2015) Implementation of home security system using GSM module and microcontroller. Int J Comput Sci Inf Technol 6:2950–2953
11. Hutabarat DP, Susanto R, Fauzi A, Wowor AJ (2017) Designing a monitoring system to locate a child in a certain closed area using RFID sensor and android smartphone. In: Proceedings of the 5th international conference on communications and broadband networking, pp 54–58. https://doi.org/10.1145/3057109.3057115
12. Verma P, Bhatia JS (2013) Design and development of GPS-GSM based tracking system with Google Map based monitoring. Int J Comput Sci Eng Appl (IJCSEA) 3:33–40. https://doi.org/10.5121/ijcsea.2013.3304
Website Personalization Using Association Rules Mining Benfano Soewito and Jeffrey Johan
Abstract Many companies are redefining their business strategies to improve business output. Business over the internet gives customers and partners a place where products and specific business offerings can be found. Web usage mining is the type of web mining that involves the automatic discovery of user access patterns from web servers. A challenging real-world task for an organization's web master is to match users' needs and keep their attention on the web site, so that web pages capture the user's intent and provide a recommendation list. In this work, an e-commerce website is personalized after learning customers' habits and behavior patterns through web usage mining with the apriori association-rules-mining algorithm. The method used comprises analysis and design: in the analysis phase, the research variables are determined, sales data are collected, and the accuracy of the apriori algorithm is measured; the apriori process, the program, and the screens are produced in the design phase. The result is an e-commerce website personalized using the apriori association-rules-mining algorithm, which recommends goods according to the preferences and needs of the user. The study concludes that obtaining association patterns requires customer transaction data, and that the recommendations given by the apriori algorithm become more accurate as more transaction data are processed, the categories of goods are fewer, and the minimum support and minimum confidence thresholds are higher. Keywords e-commerce · Web usage mining · Association rules mining · Apriori algorithm
B. Soewito (B) · J. Johan Computer Science Department, Master of Computer Science, BINUS Graduate Program, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_60
689
690
B. Soewito and J. Johan
1 Introduction Web mining is the application of data mining techniques to dig out and extract useful information automatically from documents stored on web pages [1]. The purpose of web mining is to find and acquire useful patterns from a large dataset. Web mining is divided into three types: web structure mining, web content mining, and web usage mining [2]. Web usage mining reconstructs user sessions using heuristic techniques and finds useful patterns with pattern-discovery techniques such as association rule mining [3]. Its benefits include customization of web pages based on user profiles, determining customer interest in a particular product, determining the appropriate target market, system improvement, site modification, business intelligence, and usage characterization [4, 5]. This study personalizes web pages on an e-commerce website created by the researchers, using web usage mining to capture and model the behavior patterns of web users with the apriori algorithm. Several web usage mining systems have been proposed to predict user navigation behavior and preferences [5]; in the following we review some of the most significant web usage mining algorithms. Fuzzy clustering determines degrees of membership and uses them to place data elements into one or more cluster groups [6], providing information on the similarity of each object. One of the many fuzzy clustering algorithms used is fuzzy c-means, which divides the available data and assigns each element to part of a cluster collection according to several criteria. In the initial condition, the cluster centers are not yet accurate, and each data point has a degree of membership in each cluster.
By repeatedly updating the cluster centers and the membership value of each data point, the cluster centers converge to their exact locations. The optimal number of clusters depends on the amount of transaction data, and each parameter of the fuzzy c-means algorithm affects its performance. Naive Bayesian classification is a simple probabilistic classifier that applies Bayes' theorem under the assumption that the explanatory variables are independent: the presence or absence of a particular event in a group is assumed to be unrelated to the presence or absence of other events. Naive Bayes can be used for various purposes, including document classification, spam detection and filtering, and other classification problems [7]. It requires only a small amount of training data to estimate the parameters needed for classification, but it cannot be applied when a conditional probability is zero, since the predicted probability then becomes zero as well. The apriori algorithm searches for frequent itemsets using the technique of association rules and is the best-known algorithm for finding high-frequency patterns [8, 9]. High-frequency patterns are patterns of items in a database whose frequency, or support, is above a certain threshold called
the minimum support. These high-frequency patterns are used to draw up the associative rules and are also used by some other data mining techniques [10]. The apriori algorithm uses previously known attribute-frequency knowledge to process further information; it determines which candidates may appear by considering the minimum support and minimum confidence [11]. Support is the percentage of transactions in the database containing a given combination of items:

Support(A, B) = P(A ∩ B) = (Transactions containing A and B / Total transactions) × 100%   (1)

Confidence is a measure of the strength of the relationship between items in an association rule; it can be computed once a frequent item pattern has been found:

Confidence = P(B | A) = (Transactions containing A and B / Transactions containing A) × 100%   (2)
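Formulas (1) and (2) can be implemented directly over a list of transactions. The snippet below is an illustrative sketch; the transaction data are invented for the example:

```python
# Direct implementation of the support (1) and confidence (2) formulas,
# where each transaction is modeled as a set of purchased items.

def support(transactions, items):
    """Percentage of transactions that contain every item in `items`."""
    items = set(items)
    return 100.0 * sum(items <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Percentage of transactions containing the antecedent that also
    contain the consequent, i.e. conf(antecedent => consequent)."""
    a, both = set(antecedent), set(antecedent) | set(consequent)
    n_a = sum(a <= t for t in transactions)
    n_both = sum(both <= t for t in transactions)
    return 100.0 * n_both / n_a

# Invented example data:
txns = [{"bread", "milk"}, {"bread"}, {"milk"}, {"bread", "milk", "eggs"}]
```

Here support({bread, milk}) is 2/4 = 50%, and confidence(bread => milk) is 2/3, about 66.7%.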
2 Theoretical Review 2.1 Web Mining Web mining is the extraction of important and useful patterns stored implicitly in relatively large datasets on world wide web services. Web mining consists of three parts: web content mining, web structure mining, and web usage mining [12]. Web content mining is an automated process for finding useful information from documents or data [13]; in principle, this technique extracts the keywords contained in the document. Web data content can include text, images, audio, video, metadata, and hyperlinks. Two strategies are commonly used: first, mining the data directly, and second, conducting searches and improving search results as a search engine does. Web structure mining, also known as web log mining, is a technique used to discover the link structure of hyperlinks and to build summaries of websites and web pages [14]; one of its benefits is determining the page rank of a web page. Web usage mining is a technique for identifying customer behavior and web structure through information obtained from logs, click streams, cookies, and queries [15]. Existing tools include WebLogMiner, which mines log data, and more sophisticated techniques are used to perform OLAP (Online Analytical Processing). The benefits of web usage mining are customizing web pages based on user profiles, determining customer interest in certain products, and determining the appropriate target market.
2.2 Association Rules Mining Association analysis, or association rule mining, is a data mining technique for finding associative rules among combinations of items. Association rules are used to find relationships between data, that is, how one group of data affects the existence of other data [15], and can help identify particular patterns in large data sets. In association rules, a group of items is called an itemset. The support of an itemset X is the percentage of transactions in D that contain X, usually written supp(X). The search for association rules is carried out in two stages: finding the frequent itemsets and composing the rules. If the support of an itemset is greater than or equal to the minimum support, the itemset is called a frequent itemset (or frequent pattern); one that does not meet the threshold is called infrequent. Confidence measures how valid an association rule is: the confidence of a rule R (X => Y) is the proportion of all transactions containing both X and Y among those containing X, usually written conf(R). An association rule with confidence equal to or greater than the minimum confidence is a valid association rule. An example of an associative rule from purchase analysis in a supermarket is knowing how likely it is that a customer buys bread together with milk. With this knowledge, supermarket owners can arrange the placement of their goods or design marketing campaigns, for example with discount coupons for certain combinations of goods. Because association analysis became famous for analyzing the contents of shopping carts in supermarkets, it is also often referred to as market basket analysis, which is used to find relationships or correlations between sets of items. Market basket analysis examines a customer's buying habits by looking for associations and correlations between the different items the customer places in the shopping cart.
This function is most widely used to analyze data for marketing strategy, catalog design, and business decision-making. Based on the definitions above, the search for association-rule patterns uses two parameters, support and confidence, each with values between 0 and 100%.
3 Research Method The steps for personalizing the e-commerce website using web usage mining with the apriori association-rules-mining algorithm are: 1. Collecting data from web logs and databases. 2. Performing data preprocessing.
3. Implementing the mining results in the form of personalized web pages. 4. Measuring and analyzing the accuracy of the algorithm. The initial phase of the research determines the background and purpose of the study and defines its scope. A literature study deepens the understanding of the apriori algorithm and of the stages needed to obtain customer behavior patterns and customer data using web usage mining; it was also conducted to compare the techniques and methods used by other researchers. The second phase collects data from web logs and the database. Analyzing the business process model requires website log data, which records user activity while interacting with the website; the process model and log data analyzed in this study come from the business process models and website logs created by the researchers. The log records each user's activity from the moment the user starts interacting with a web page until leaving it. Each row in the log contains the following information: timestamp, IP address, session, and visited pages. The third phase performs data preprocessing. Obtaining a process model of the pages accessed in sequence requires a preprocessing stage that identifies the sequences considered valid for inclusion in the later mining process. Preprocessing for association rules mining on the database goes through several stages: • Data cleaning. • Session identification. • Data conversion. Data cleaning removes noise from the log data, such as incomplete data and attributes that are not relevant to the mining needs, so that only the session ID, visited pages, and timestamp attributes remain in the log file.
Every time a user visits the website, the user gets one session, identified as a single session containing a set of events; the visited page is a page accessed by the user within that session, and the timestamp is the access time when the user accesses a web page. In the log file, one transaction is stated as one session, which contains the sequence of pages accessed by the user at specific time intervals. Sessions from users identified by the same IP address are separated by idle-time intervals of 30 min, because the log contains no explicit information about when a user session ends. Before the data is mined with the apriori algorithm, it is converted into table form: each record is converted into a table row, and the resulting sequences are referred to as the sequence database. The sequence database represents a set of orders in which each sequence is a list of items purchased, and an itemset is a series of different items.
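The 30-minute idle-time rule for session identification can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the hit format (timestamp in seconds, IP address, page) is assumed:

```python
# Sketch of the session-identification step: hits from the same IP are
# split into new sessions whenever the idle gap exceeds 30 minutes.

IDLE_LIMIT = 30 * 60  # 30-minute idle interval, as used in this study

def sessionize(hits):
    """hits: list of (timestamp_seconds, ip, page), sorted by timestamp.
    Returns a dict mapping (ip, session_index) -> list of visited pages."""
    sessions, last_seen, counter = {}, {}, {}
    for ts, ip, page in hits:
        # Start a new session on first sight of the IP or after a long gap.
        if ip not in last_seen or ts - last_seen[ip] > IDLE_LIMIT:
            counter[ip] = counter.get(ip, -1) + 1
        last_seen[ip] = ts
        sessions.setdefault((ip, counter[ip]), []).append(page)
    return sessions
```

A hit arriving 31 minutes after the previous one from the same IP thus opens a second session for that visitor.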
Fig. 1 Topology system
After data preprocessing, the next step is to implement the mining results in the form of personalized web pages. Any customer who has registered and logged in to the website is offered a number of products matching the customer's preferences. The apriori algorithm is applied to customers who have logged in or have purchase transactions, so that these customers receive recommendations for their next purchase. After personalizing the web page, the next step is to measure and analyze the accuracy of the algorithm against the observed customer behavior patterns. The accuracy of the algorithm is measured by comparing the categories of goods that are visited frequently with the recommendations derived by the apriori algorithm from the transaction data. The last step is to draw conclusions and provide suggestions. Web usage mining here is divided into two phases: data preprocessing and implementation of the apriori algorithm, presenting web pages according to user needs (Fig. 1). The data used for the implementation of the apriori algorithm and the personalized website come from the web logs of the e-commerce website created by the researchers and from sales transaction data in the website's database. The required data comprise the IP addresses of website visitors, sessions, access times, visited pages, and the transactions carried out over 3 months. To obtain clean and organized data, the data are first passed through the data preprocessing stage. The data used for the analysis of the accuracy of the algorithm are derived from a comparison of the recommendations given by the system with the history of pages frequently visited by users [9]. The results of the apriori implementation process can be used as a measuring tool to determine the association rules contained in the transaction data.
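The personalization step, recommending the consequents of mined rules whose antecedents a logged-in customer has already bought, can be sketched as follows. The rule format, the example rules, and the function name are assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch: apply mined association rules to a customer's
# purchase history to build a recommendation list.

def recommend(rules, purchased):
    """rules: list of (antecedent_set, consequent_set, confidence_pct).
    Returns not-yet-purchased items from matching rules, highest
    confidence first, without duplicates."""
    purchased = set(purchased)
    hits = [(conf, cons) for ant, cons, conf in rules if ant <= purchased]
    hits.sort(key=lambda rc: rc[0], reverse=True)
    out = []
    for _, cons in hits:
        for item in cons:
            if item not in purchased and item not in out:
                out.append(item)
    return out

# Hypothetical rules, loosely modeled on the itemsets reported in Sect. 4:
rules = [({"Fashion"}, {"Women's Pants"}, 75.0),
         ({"Fashion", "Women's Pants"}, {"Suit"}, 60.0)]
```

A customer who has bought only Fashion items would be recommended Women's Pants; one who has also bought Women's Pants would be recommended a Suit.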
If the minimum support and minimum confidence thresholds are set too high, no patterns or rules are obtained at all; if they are set too low, the rules that are formed are inefficient and the relationships between items are weak. The minimum support and minimum confidence values should therefore be tested several times so that the chosen thresholds approach the expected goals.
4 Results and Analysis

The transaction data to be processed were taken from August 20th, 2020, at 00:00 until November 20th, 2020, at 24:00. Within that time span, 52 transactions covering 15 categories of goods were accumulated. An itemset is a set of items contained in I, and a k-itemset is an itemset that contains k items. For example, {fashion, dress} is a 2-itemset and {batik, dresses, skirts} is a 3-itemset. A frequent itemset is an itemset whose frequency of occurrence is at least a predetermined minimum value (φ); for example, with φ = 4, all itemsets occurring 4 or more times are called frequent. The set of frequent k-itemsets is denoted by Fk. For the combinations of two itemsets, the researchers set the minimum value φ = 3 to reduce the number of candidate itemsets that must be examined. The Apriori principle states that if an itemset is infrequent, its supersets need not be explored further, so the number of candidates that must be checked is reduced. Apriori should continue to be developed to improve its efficiency and effectiveness and to reduce search time, since it scans the database repeatedly. In this case, if the minimum value were set to φ = 1, the set of frequent k-itemsets would grow to 22 members. The choice of the minimum value (φ) depends on the number of existing two-itemset combinations and the number of categories of goods. With φ = 3, the following 2-frequent itemsets were formed: F2 = {{Fashion, Women's Pants}, {Fashion, Suit}, {Women's Pants, Suit}, {Women's Shirts, Women's Jacket/Cardigan/Blazer}}. The itemsets in F2 can be combined into 3-itemset combinations; only those F2 itemsets that share items can be combined. Thus F3 = {{Fashion, Women's Pants, Suit}}, because only that combination occurs at least φ times.
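The frequent-itemset counting described above can be sketched in a few lines; the `frequent_itemsets` helper and the toy transactions are illustrative assumptions, not the paper's 52-transaction dataset:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, k, phi):
    """Count every k-item combination and keep those occurring >= phi times
    (the frequent k-itemsets F_k of the Apriori algorithm)."""
    counts = Counter()
    for t in transactions:
        for combo in combinations(sorted(t), k):
            counts[combo] += 1
    return {itemset: n for itemset, n in counts.items() if n >= phi}

# Toy data for illustration only
transactions = [
    {"fashion", "suit"},
    {"fashion", "suit", "women's pants"},
    {"fashion", "suit", "women's pants"},
    {"suit", "women's pants"},
    {"fashion", "dress"},
]

F2 = frequent_itemsets(transactions, 2, phi=3)
F3 = frequent_itemsets(transactions, 3, phi=2)
```

A full Apriori implementation would additionally generate k-itemset candidates only from the frequent (k−1)-itemsets (the pruning principle the text describes); here every combination is simply counted for brevity.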
From the F2 that has been formed, the support and confidence of the prospective F2 association rules can be seen in Table 1. Likewise, from the F3 that has been formed, the support and confidence of the prospective F3 association rules can be seen in Table 2. Based on Tables 1 and 2, the researchers set the minimum confidence to 60%, reflecting the strength of the believed relationship between one item and another; the minimum confidence was determined based on the number of items that are purchased together and are interconnected. With a minimum confidence of 60%, the rules that can be formed are as follows:
• If buying Fashion, then buying Suit.
• If buying Women's Pants, then buying Suit.
696
B. Soewito and J. Johan
Table 1 Prospective association rules F2

If antecedent, then consequent | Support (%) | Confidence (%)
If buying fashion, then buying women's pants | 4/52 = 7.7 | 4/13 = 30.77
If buying women's pants, then buying fashion | 4/52 = 7.7 | 4/10 = 40
If buying fashion, then buying suit | 8/52 = 15.4 | 8/13 = 61.54
If buying suit, then buying fashion | 8/52 = 15.4 | 8/22 = 36.36
If buying women's pants, then buying suit | 6/52 = 11.5 | 6/10 = 60
If buying suit, then buying women's pants | 6/52 = 11.5 | 6/22 = 27.27
If buying women's shirt, then buying women's jacket/cardigan/blazer | 3/52 = 5.8 | 3/9 = 33.33
If buying women's jacket/cardigan/blazer, then buying women's shirt | 3/52 = 5.8 | 3/8 = 37.5
Table 2 Prospective association rules F3

If antecedent, then consequent | Support (%) | Confidence (%)
If buying fashion and women's pants, then buying suit | 3/52 = 5.8 | 3/4 = 75
If buying fashion and suit, then buying women's pants | 3/52 = 5.8 | 3/8 = 37.5
If buying women's pants and suit, then buying fashion | 3/52 = 5.8 | 3/6 = 50
• If buying Fashion and Women's Pants, then buying Suit.

The association rules formed, ordered by the largest value of confidence multiplied by support, can be seen in Table 3. Accuracy validation of the Apriori algorithm was conducted by comparing the recommendations generated by the algorithm with visitor data on the associations between the 3 categories of goods. Table 4 shows the number of visits made by visitors during the 3 months (August 20th, 2020–November 20th, 2020). From Table 4, it appears that the total number of visits is 753, with 102 visits to the Fashion category, 60 visits to the Women's Pants category, and 127 visits to the Suits category. Visits to the Fashion, Women's Pants, and Suits categories therefore account for 38.38% of the total visits.

Table 3 Association rules

If antecedent, then consequent | Support (%) | Confidence (%) | Support × confidence (%)
If buying fashion, then buying suit | 8/52 = 15.4 | 8/13 = 61.54 | 9.48
If buying women's pants, then buying suit | 6/52 = 11.5 | 6/10 = 60 | 6.9
If buying fashion and women's pants, then buying suit | 3/52 = 5.8 | 3/4 = 75 | 4.35
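The ranking in Table 3 can be reproduced from the counts reported in the tables (52 transactions; the joint and antecedent counts below are taken from the paper). Note that the paper's last column multiplies already-rounded percentages, so the exact products differ slightly (e.g. 9.47 vs. 9.48):

```python
# Each rule: (name, joint count, antecedent count); joint/N is the support,
# joint/antecedent is the confidence.
rules = [
    ("fashion -> suit",                 8, 13),
    ("women's pants -> suit",           6, 10),
    ("fashion & women's pants -> suit", 3,  4),
]
N = 52  # total transactions

scored = []
for name, joint, antecedent in rules:
    support = joint / N * 100
    confidence = joint / antecedent * 100
    scored.append((name, support, confidence, support * confidence / 100))

scored.sort(key=lambda r: r[3], reverse=True)  # rank by support x confidence
```

The top-ranked rule is "fashion -> suit", matching the ordering of Table 3.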
Table 4 Total visits to each category

Category | Number of visits
Fashion | 102
Dress | 66
Skirt | 37
Women's pants | 60
Women's T-shirt | 59
Women's shirt | 119
Women's jacket/blazer | 57
Men's T-shirt | 17
Men's shirt | 10
Boy's clothes | 15
Girls' clothes | 18
Batik | 32
Night gown | 29
Men's jacket/blazer | 5
Suits | 127
The researchers created combinations of pairs of 3 categories of goods. The combination of the Fashion, Women's Pants, and Suits categories ranks 7th out of the 455 possible combinations of 3 categories. Although the combination of the Fashion, Women's Shirts, and Suits categories ranks first, the 3 items in that combination are more independent of one another. Calculating accuracy as a percentage of the ranking, where first place means 100% accurate (the recommended combination matches the most visited combination), the Fashion, Women's Pants, and Suits combination, ranked 7th, yields a percentage of 98.68%. It can therefore be concluded that the accuracy of the Apriori algorithm in this case reached 98.68%.
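The figures in this paragraph are internally consistent: 455 is the number of ways to choose 3 of the 15 categories, and the reported 98.68% matches a linear rank score in which rank 1 scores 100%. That scoring formula is our reconstruction from the reported numbers, not one stated explicitly in the paper:

```python
from math import comb

n_categories = 15
n_combinations = comb(n_categories, 3)  # 455 three-category combinations

rank = 7  # rank of the recommended combination among all combinations
accuracy = (n_combinations - rank + 1) / n_combinations * 100  # rank 1 -> 100%
```

With rank 7 this gives (455 − 7 + 1) / 455 = 98.68%, the value reported in the text.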
5 Conclusion

From the research carried out using Apriori association rule mining, the researchers draw the following conclusions:
1. To obtain association patterns in the sales transaction data, information on the transactions made by consumers is required, from which combinations of two categories and three categories of goods are formed so that association patterns emerge.
2. The recommendations given by the Apriori algorithm become more accurate when more transaction data are processed, the number of categories of goods is smaller, and the minimum support and minimum confidence limits are set high.
References
1. Oo Htun Z, Kham NSM (2018) Pattern discovery using association rule mining on clustered data. Int J New Technol Res 4(2)
2. Kumbhare T, Chobe S (2014) An overview of association rule mining algorithms. Int J Comput Sci Inf Technol 5:927–930
3. Abdurrahman BRT, Mandala R, Govindaraju R (2009) ANT-WUM: Algoritma berbasis ant colony optimization untuk web usage mining. Jurnal Teknologi Technoscientia 2:1–12
4. Siddiqui AT, Aljahdali S (2013) Web mining techniques in e-commerce applications. Int J Comput Appl 69:39–43
5. Rajagopal S (2011) Customer data clustering using data mining technique. Int J Database Manage Syst (IJDMS) 3:1–11
6. Dholakia UM, Rego LL (1998) What makes commercial web pages popular? An empirical investigation of webpage effectiveness. Eur J Market 32:724–732
7. Suresh K, Madana Mohana R, Rama Mohan Reddy A (2011) Improved FCM algorithm for clustering on web usage mining. Int J Comput Sci (IJCSI) 8:42–46
8. Bhardwaj BK, Pal S (2011) Data mining: a prediction for performance improvement using classification. Int J Comput Sci Inf Secur (IJCSIS) 99:1–5
9. Santhosh Kumar B, Rukmani KV (2010) Implementation of web usage mining using apriori and FP growth algorithms. Int J Adv Netw Appl 01:400–404
10. Geeta RB, Shashikumar GT, Prasad R (2012) Literature survey on web mining. IOSR J Comput Eng (IOSRJCE) 5:31–36
11. Gao J (2021) Research on application of improved association rules mining algorithm in personalized recommendation. J Phys: Conf Ser 1744:032111
12. Eason G, Noble B, Sneddon IN (1955) On certain integrals of Lipschitz-Hankel type involving products of Bessel functions. Phil Trans Roy Soc London A247:529–551
13. Wang F, Wen Y, Guo T, Liu J, Cao B (2019) Collaborative filtering and association rule mining-based market basket recommendation on Spark. Concurrency Comput Pract Exp 32:e5565
14. Mahesh Balan U, Mathew SK (2019) An experimental study on the swaying effect of web personalization. SIGMIS Database 50:71–91
15. Aiolfi S, Bellini S, Pellegrini D (2021) Data-driven digital advertising: benefits and risks of online behavioral advertising. Int J Retail Distrib Manage 49:1089–1110
Adoption of Artificial Intelligence in Response to Industry 4.0 in the Mining Industry

Wahyu Sardjono and Widhilaga Gia Perdana
Abstract Nowadays, the Industrial Revolution 4.0 is characterized by the growing implementation of technology in every business; one of these technologies is Artificial Intelligence (AI). AI offers companies a way to manage operations efficiently by finding solutions, relearning from data insights, and implementing actions to reduce operational inefficiencies. The confluence of information technology with the automation of industrial processes and routine jobs allows for better operations, lower costs, and higher process quality. With the ability to extract and control data in real time from connected home and commercial equipment, technologies such as big data, the Internet of Things, information technologies, and robotic operating systems have become important partners and pillars of both Industry and Society 4.0. PT Delta Dunia Makmur Tbk (BUMA) optimizes its production cost by using Artificial Intelligence to help the company manage its fuel consumption; however, AI has weaknesses due to its high dependency on data integrity, which the company should address. Keywords Industrial revolution 4.0 · Artificial intelligence · Fuel consumption · Mining industry
W. Sardjono (B) Information Systems Management Department, Master of Information Systems Management, BINUS Graduate Program, Bina Nusantara University, Jl. K. H. Syahdan No. 9, Kemanggisan, Palmerah, Jakarta 11480, Indonesia e-mail: [email protected] W. G. Perdana Post Graduate Program of Management Science, State University of Jakarta, Jl. R.Mangun Muka Raya No.11, Jakarta Timur, Jakarta 13220, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_61
1 Introduction

The Industrial Revolution 4.0 (4IR) is influencing many elements of society and daily life, and businesses are using technology to increase productivity through hyperautomation and connectivity [1]. The 4IR differs from the preceding three revolutions in its paradigm shift. The mechanization process was the focal point of the first industrial revolution, which started in Great Britain in the late eighteenth century, driven by steam and water power [2]. The second industrial revolution, which saw the widespread use of electricity for mass production, started in the United States at the beginning of the twentieth century [3]. The third industrial revolution, likewise of American origin and centered on the middle of the twentieth century, brought automation employing electronics and information technology [4]. The 4IR differs from the first three because it introduces cyber-physical systems, a concept that integrates a number of virtual and physical actors; through efficient real-time and Internet-based system control, it can transform our society, industry, and economy [5]. The fundamental technological forces behind the third industrial revolution were in the hardware sector, whereas those behind the fourth are concentrated in the software sector.

The coal mining industry in the modern industrial era faces a challenge in the fluctuation of the coal market price. Coal prices from the end of 2021 until mid-2022 rose significantly to $400/ton (breaking the historical record); this phenomenon happened because the Russia–Ukraine war triggered the energy crisis in Europe. This trend is not expected to be sustained, because it reflects the history of coal price swings: in 2013 the price was $84/ton and suddenly dropped in 2015 to $58/ton, and then in 2018 it increased to $107/ton and suddenly dropped in 2019 to $60.8/ton.
This condition poses a huge challenge for coal mining companies to keep their processes efficient. Among the major costs of the mining process, there are three cost areas in which companies maintain operational efficiency: repair and maintenance, fuel consumption, and people cost. This paper discusses a case study of implementing AI at PT BUMA to help keep fuel consumption efficient.
2 Literature Review

AI is a broad concept that suggests using a computer to simulate intelligent behavior with little assistance from humans. The development of robots is widely considered to be the start of AI; the inspiration came from the Czech word robot, which referred to tools used for forced labor. AI has been found to have tremendous advantages in business. Within the healthcare sector, one of the advantages is the improvement in physician performance at hospital facilities, a circumstance that works in the consumers' (patients') best interests. Using computer systems designed specifically for this task, medical staff can determine which patients pose the
greatest risk [6]. These technologies can sensibly assess the physiological issues that patients with healthcare needs may be experiencing and can provide exact, real-time figures about the patients who need care. Through this method, a facility's limited resources can be used most effectively to achieve the best results, improving patients' overall quality of life by addressing any difficulties they may be having. Computer technologies can also enhance decision-making in this manner, saving doctors the time it would have taken to conduct additional research on probable health conditions their patients may be experiencing. AI has been used, for instance, to assess the number of drug interactions in patients and determine whether they have any antagonistic or additive effects [7]. AI is also applied in the banking and finance sectors to ensure that a range of operations can be monitored. These systems can evaluate the numerous challenges that arise during a process to ensure that suspect behaviors, such as deception, do not materialize. Systems must be put in place to look for the many wrongdoings that people may commit and to raise alarms about the different losses that businesses might otherwise suffer [8]; in this way they can help lower the potential losses that organizations could incur. Even though artificial intelligence has many advantages, it may also have major negative consequences for the variety of people who interact with it, either directly or indirectly. A caveat of using such technology is that its usefulness is determined by the information stored in it: there is a significant likelihood that data with the potential to produce false results will cause serious issues [9]. One of the concerns surrounding the application of AI [10] is the potential that it will lead to the replacement of human intelligence itself. In this view, it is morally problematic that human knowledge is treated as insufficient and must be bolstered by the use of machines and computers, with ever more humans coming to rely on artificial intelligence; such a practice suggests that humans are incapable of doing a variety of jobs that would be considered relevant to them [11].
3 Methodology

This study was conducted to review and learn how machine learning helps organizations manage risk, using the following methodology:
1. Gather sources of literature and information from journals and Internet articles.
2. Read the sources that have been obtained.
3. Identify the information read and whether it is relevant to the topic.
4. Perform an implementation analysis to identify the relationship between the literature and the implementation results.
5. Rewrite the important points that were obtained in a structured manner.
4 Results and Discussion

AI has functional features such as machine learning and deep learning (Fig. 1). Machine learning is a large part of how AI can tackle problems: such systems store and calculate, but in more complicated ways, and by employing data and algorithms to mimic how people learn, they gradually improve [12]. Machine learning has two main approaches [13]:
1. the supervised approach, where labels are allocated to documents based on pre-defined categories and the likelihood predicted by a training set of labeled documents; and
2. the unsupervised approach, where no stage of the process requires labeled documents or human intervention.

Machine learning has a further subset, called deep learning, which applies statistics and predictive modeling to collecting, analyzing, and interpreting large amounts of data; deep learning makes this process faster and easier. The results of data analysis from both machine learning and deep learning are presented and visualized through business intelligence; in this case study Microsoft Power BI was used, a tool that helps users manipulate, analyze, and visualize data and create dashboard reports [14]. The writers focus on implementing supervised machine learning integrated with Power BI applications. There are several things for a company to consider when it starts to implement AI:

Fig. 1 Display of artificial intelligence subsets (https://datacatchup.com/artificial-intelligence-machine-learning-and-deep-learning/)
1. Establish a project team
2. Understand the performance baseline of operations
3. Source a vendor capable of helping the organization
4. Identify critical success factors for the project
5. Identify data drivers that cause fuel consumption to increase
6. Define a data collection strategy
7. Learn insight data patterns
8. Decide which pattern to monitor and translate it into action
9. Deliver potential solutions
4.1 Establish Project and Team Structure

The experimental project is called Athena, after the figure in Greek myth who symbolizes the woman warrior guarding the Greek nation against its opponents; the project's objective is to guard BUMA against every risk it faces. The Scrum method was used to run the project; the Internal Audit Division leads it in close collaboration with the Operations Team and with the Board of Directors in the Steering Committee.
4.2 Understanding the Performance Baseline of Operations

BUMA has suffered increasing fuel consumption that is difficult to control, since there are many variables to manage, complex data, and a large volume of data produced by applications fed by the Internet of Things. One example of the pain point is fuel consumption at one site running over $2.5 million above budget YTD June 2020. This increase in fuel cost compared to budget happens every year, especially at site operations with bigger production activity than other sites. It is difficult to analyze why this happens, because the transactions are very complex in volume (a daily data volume of 1.5 GB) and because gathering the data sources and analyzing the data in Excel is complex (needing 1–2 weeks per analysis).
4.3 Sourcing a Vendor with the Capability of Building Artificial Intelligence

To implement AI faster, the company decided to enter a close partnership with a vendor and began to search and compare. There were two scenarios for vendor sourcing:
• Build our own machine-learning platform, or
• Use an existing machine-learning platform from a vendor.

After discussion with the IT Council, it was concluded that it is best to use an existing vendor platform, because the organization needs cost savings and faster delivery to site operations. Finally, Spark Beyond was chosen as the partner because it already has a machine-analytics platform.
4.4 Identify Critical Success Factors for the Project

• Cost: focus on helping the organization reduce its fuel consumption by up to 7% from the baseline (this target was defined from the vendor's past experience).
• Time: a successful application is defined by having a real-time platform to monitor fuel consumption and give recommendations for improvement.
4.5 Identify Data Drivers that Cause Fuel Consumption to Increase

According to Sulman, Dorn, and Niemi (2017, in [15]), fuel consumption has characteristics/drivers that increase it, such as:
• Working environment: a bad working environment, such as bad roads, a bad mining sequence, or hard material, can increase fuel inefficiency.
• People behavior: bad driving behavior such as harsh acceleration and harsh braking wastes fuel; effective driving habits can help minimize fuel consumption and fuel costs.
• Engine condition: lack of maintenance or a bad maintenance process can make the engine consume more fuel than normal.
4.6 Data Collection Technique

An important first task for this project is to identify the sources and characteristics of the data, such as the volume of data on fuel consumption. After the data sources are mapped, machine learning captures the data and translates it into insights with the following mechanism: data from the sources are uploaded to the data factory and processed by SQL, then transferred to a Jupyter Notebook for further processing; the data are then moved to the machine-learning platform, which analyzes each pattern in the data, and the patterns are later discussed with the subject matter experts.
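A toy stand-in for the SQL stage of that pipeline is sketched below with an in-memory SQLite database; the table name, column names, and the litres-per-engine-hour feature are hypothetical, since the actual schema is not disclosed in the paper:

```python
import sqlite3

# Illustrative stand-in for the data-factory -> SQL -> notebook hand-off.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fuel (truck TEXT, litres REAL, engine_hours REAL)")
conn.executemany(
    "INSERT INTO fuel VALUES (?, ?, ?)",
    [("HD785-01", 620.0, 10.0), ("HD785-01", 660.0, 11.0),
     ("HD785-02", 900.0, 10.0)],
)

# Aggregate burn rate per truck: the kind of feature a notebook would derive
rows = conn.execute(
    "SELECT truck, SUM(litres) / SUM(engine_hours) AS l_per_h "
    "FROM fuel GROUP BY truck ORDER BY truck"
).fetchall()
```

Per-truck burn rates like these become the inputs the machine-learning platform mines for patterns.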
4.7 Research and Review How the Data Interrelate

Using the machine-learning platform, the correlations between the data fields that have an impact on fuel consumption were examined.
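One simple way to quantify how a candidate driver relates to fuel burn is a Pearson correlation coefficient; the sketch below uses made-up road-grade and burn-rate figures purely for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

road_grade_deg = [2, 4, 6, 8, 10, 12]      # hypothetical haul-road grades
fuel_l_per_h   = [55, 58, 63, 66, 72, 75]  # hypothetical burn rates

r = pearson(road_grade_deg, fuel_l_per_h)  # close to 1: a strong driver
```

A coefficient near +1 or −1 flags a field worth drilling into; values near 0 suggest the field is not a useful driver.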
4.8 Learning Insight Data Patterns

After the data are run through the machine-learning platform, the result is a set of data insights ranked by their contribution to fuel consumption. Of the 70 insights produced, the 10 key insights with significant contributions to fuel efficiency were drilled down for per-parameter analysis.
4.9 Deciding Which Parameters to Monitor and Translating Them into Action

After summarizing the key insights, each parameter was examined to decide which pattern to monitor first. From the summary it can be concluded that spotting-time activity, loaded stopping time, and operator behavior give the highest contribution to fuel efficiency. However, these activities were not prioritized, because they are already monitored by the site (Progress), and the target of this project is to deliver new insights not yet known to the site. Four initiatives were therefore rolled out: payload, refueling, road condition, and payload distribution, after first studying the characteristics of each pattern. The machine-analytics results give the insight that when a truck travels a road with a grade above 10°, it consumes more fuel, so the target was set that all roads in the mine should have a grade below 10°.
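Translating the grade insight into a monitored action can be as simple as flagging haul-road segments that exceed the 10° target; the segment names and grades below are hypothetical:

```python
MAX_GRADE_DEG = 10.0  # target derived from the machine-analytics insight

segments = {            # hypothetical survey of haul-road segments (degrees)
    "ramp-A": 8.5,
    "ramp-B": 11.2,
    "pit-exit": 12.4,
    "haul-1": 6.0,
}

# Segments that need regrading before the fuel target can be met
to_regrade = sorted(s for s, g in segments.items() if g > MAX_GRADE_DEG)
```

A check like this, run against fresh survey data, is what turns a mined pattern into a continuously monitored action.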
5 Conclusions

In conclusion, the basic theory regarding the benefits of AI correlates with our experience: AI helps in managing fuel consumption and reducing inefficiencies in operations. However, companies should note the following points before implementing AI:
1. The primary focus of digital transformation is to turn their people into technology people, so the company should have a clear strategy to make people aware of their roles and responsibilities in performing this initiative.
2. Data governance of the applications matters: AI depends on the quality of the data source. However, implementing AI can also help the company improve its data quality, because when the machine gives an erroneous prediction it describes which data source is in error and a potential solution to fix it (e.g., the case of Unit 777-E).
3. There should be a clear strategy for the benefit calculation and the targets of the project, so that the organization knows what milestones it wants to achieve.
4. Finally, the project management control from the team should be robust enough to handle all of the tasks.
References
1. Park SC (2017) The fourth industrial revolution and implications for innovative cluster policies. Artif Intell J 33:433–445
2. Bloem J, Doorn MV, Duivestein S, Excoffier D, Maas E, Ommeren R (2014) The fourth industrial revolution. Int J Commun Netw Syst Sci 10–11
3. Leonardo E, Mariano N, Jano MS (2022) Analyzing Industry 4.0 trends through the technology road mapping method. Proc Comput Sci 201:511–518
4. Ahmad RB, Vineet K, Edward S (2020) Industry 4.0 implementation challenges in manufacturing industries: an interpretive structural modelling approach. Proc Comput Sci 176:2384–2393
5. Ibrahim AJ, Hazarina H, Basheer MA (2020) The role of cognitive absorption in predicting mobile internet users continuance intention: an extension of the expectation-confirmation model. Technol Soc 63(101355)
6. Park I, Yoon B, Kim S, Seol H (2021) Technological opportunities discovery for safety through topic modeling and opinion mining in the fourth industrial revolution: the case of artificial intelligence. Eng Manage J 68(5):1504–1519
7. Hamet P, Tremblay J (2017) Artificial intelligence in medicine. Metabolism J 69:S36–S40
8. Michalski M, Ryszard S, Carbonell JG, Mitchell TM (2013) Machine learning: an artificial intelligence approach. Springer J 88–93
9. Nadimpalli M (2017) Artificial intelligence—consumers and industry impact. Int J Econ Manag Sci 6:429
10. Boutilier C (2015) Optimal social choice functions: a utilitarian view. Artif Intell J 227:190–213
11. Arner DW, Buckley RP, Zetzsche DA, Veidt R (2020) Sustainability, FinTech and financial. Bus J 21:7–35
12. Ivanov S (2017) Adoption of robots and service automation by tourism and hospitality companies. Tourism J 2–3
13. Hasan F (2017) Addressing legal and contractual matters using natural language processing. J Constr Eng Manage 147:17
14. Sardjono W, Firdaus F (2020) Readiness model of knowledge management systems implementation at higher education. ICIC Express Lett ICIC Int 14(5):477–487. ISSN 1881-803X
15. Sardjono W, Selviyanti E, Perdana WG (2019) The application of the factor analysis method to determine the performance of IT implementation in companies based on the IT balanced scorecard measurement method. J Phys Conf Ser 1538
AI and IoT for Industry Applications
IoT in the Aquaponic Ecosystem for Water Quality Monitoring

N. W. Prasetya, Y. Yulianto, S. Sidharta, and M. A. Febriantono
Abstract The community's food security was disrupted during the Covid-19 pandemic, when some people lost their jobs. Aquaponics is an alternative solution to the limited space for cultivation caused by land conversion from fields to housing and tourist areas; irrigation and maintenance of conventional cultivation are also limited. An aquaponic system needs attention because its subsystems affect each other. The purpose of this study is to make it easy for aquaponics users to monitor the condition of their aquaponics remotely. Several water quality parameters affect survival rates, such as temperature, pH, dissolved oxygen (DO), total dissolved solids (TDS), oxidation–reduction potential, and turbidity. The aquaponic system that was built is able to obtain readings from all the installed sensors and send them to the server. In the sensor accuracy test, the integrated IoT water sensors showed 96.4% accuracy for temperature, 97.91% for the turbidity sensor, 98.27% for the TDS sensor, and 98.86% for the DO sensor. Keywords Aquaponics · IoT · Water quality monitoring · System design
N. W. Prasetya (B) · Y. Yulianto · S. Sidharta · M. A. Febriantono Computer Science Department, School of Computer Science, Bina Nusantara University, West Jakarta, Indonesia e-mail: [email protected] Y. Yulianto e-mail: [email protected] S. Sidharta e-mail: [email protected] M. A. Febriantono e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_62
1 Introduction

Aquaponics is an innovative and widely studied urban farming technology that uses the symbiotic relationship between fish and plants to increase food production: aquaponic systems grow fish and plants simultaneously. Aquaponics became an alternative solution when people's food security was disrupted during the Covid-19 pandemic, in which many people lost their jobs. People had to stay at home and maintain cleanliness in order to stay healthy, so they tried to meet their own needs by farming. Since irrigation and maintenance of conventional cultivation are also limited, aquaponics is one solution to the limited space for cultivation caused by land conversion from fields to housing. Aquaponic systems offer considerable water savings, using 80–99% less water than conventional methods depending on the water source [1]. Using an aquaponic system also reduces the problem of plant diseases, so more of the food is healthy and free from chemical treatments and disinfectants [2]. In addition, very little maintenance and weeding is needed compared to conventional methods: no more than 5–10 min per day to inspect the plants and feed the fish. Aquaponics can also serve as an educational tool, letting students cultivate various types of fish and plants and thus integrating them into an environment where they collaborate with technology.

The utilization of the Internet of Things (IoT) has now reached the agricultural sector, especially hydroponic and aquaponic systems. Haryanto et al. [3] presented the design of a smart aquaponic system whose parameters can be controlled and monitored over the internet using a mobile-based application. Pasha et al. [4] designed a tracking and control system for an IoT-based aquaponic system with a web-site interface; their study measures and shows 3 parameters, namely temperature, water level, and pH value.
The authors of [5] designed an IoT-based aquaponics monitoring system that measures and controls various parameters in real time. The architecture uses a NodeMCU to collect data from sensors (pH, temperature, oxygen, and ammonia levels) and send it to a web server; the proposed work uses the WebSocket protocol to transfer information, providing a secure connection to the server and keeping the system running in real time. Barosa et al. [6] designed an intelligent aquaponic system named Plantabo Aevum, which uses the Internet of Things to continuously monitor and control the environment; its IoT section consists of a microcontroller and a microprocessor that process the data generated by the connected objects (sensors and actuators). Aquaculture [7, 8] and coastal area management [9] are two domains that have already adopted IoT sensors. Lee et al. [10] proposed a cloud-based Internet of Things monitoring system for aquaponics that measures parameters such as water temperature, water depth, dissolved oxygen, and pH value; data are uploaded via Wi-Fi to a cloud platform called ThingSpeak™, any data showing abnormalities fire an alarm system in real time, and periodic regression analysis is performed.
IoT in the Aquaponic Ecosystem for Water Quality Monitoring
MQTT (Message Queue Telemetry Transport) is a communication protocol that implements a publish/subscribe architecture, routing topics through a broker, and has low power consumption [11]. The aquaponic condition-monitoring prototype was built by combining IoT technology with the MQTT protocol, and the acquired data is sent to a cloud server. The purpose of this study is to show how IoT can support an effective monitoring system for aquaponics on the Binus Malang campus.
2 Research Method

2.1 System Design

The sensor nodes are placed on the side of the pool (Fig. 1), with an Arduino UNO R3 as the main microcontroller. All sensors are read by the Arduino UNO R3, and the readings are sent to a Raspberry Pi 3 via an ESP01 module that communicates over Wi-Fi with the access point. The Raspberry Pi 3 sends the processed data to the cloud server periodically, every 1 min. The enclosure is built to industrial-grade quality so that it is strong and water resistant and can be installed outdoors for a long time. The block diagram of the system is shown in Fig. 2. Water from the fish pond contains nitrate nutrients, derived from fish feed and fish digestive waste, which the plants use to grow. Sediment in aquaculture systems comes from food residues and fish manure containing ammonia, which can decrease dissolved oxygen levels and inhibit fish growth [12].

Fig. 1 Water quality sensor on the side of the pool
N. W. Prasetya et al.
Fig. 2 Block diagram of the IoT system
A biofilter, a medium where nitrifying bacteria can live, is added to the aquaponics system to convert ammonia into nitrate, and an aerator can be added to increase the dissolved oxygen in the water so that plants can breathe. Plants suited to aquaponics, such as lettuce, spinach, and several other vegetables, effectively absorb excess nutrients in the water; the more plants there are, the lower the concentration of ammonia in the water. The role of aquaculture here is central, so temperature, oxygen level, water pH, and several other water quality variables must be watched closely so that fish and plants can grow well. The overall prototype of the aquaponic system is shown in Fig. 3. Water from the fish pond is pumped into a 3-level filter, consisting of zeolite stone, a cotton filter, and a biofilter, to process ammonia. From the 3 filter levels, the water passes to the NFT hydroponic system; at the end of the NFT system, the water flows back into the fish pond.
2.2 Communication Design

In this study, MQTT is used as the communication protocol. MQTT is one of the IoT communication protocols that implement the publish and subscribe architecture [13]. Figure 4 shows that MQTT has a broker as the control center that distributes data to all devices connected to it [13]. The broker receives data from sensor nodes, differentiated by publish and subscribe topics.
Fig. 3 Aquaponic system prototype
Fig. 4 MQTT system initialization
MQTT works with a simple scheme in which subscribers only receive information from the broker for topics they have subscribed to. Every time a publisher has an update, it sends the updated information on that topic to the broker. Communication between publishers and subscribers continues as long as they remain subscribed to the same topic [14]. This works because the broker sits between publisher and subscriber and forwards all communication; MQTT is therefore not suitable for peer-to-peer communication [15]. When a subscriber is disconnected from the broker while the publisher remains connected, the subscriber can still obtain data from the broker upon reconnecting. MQTT works on top of the TCP/IP stack, needs little bandwidth, and has a low data-packet-size overhead [16], which gives it low power consumption. However, as the network expands, a centralized broker is needed to control the MQTT system, which affects network scalability due to overhead [17]; the broker can become a single point of failure.
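The publish/subscribe semantics described above can be illustrated with a minimal in-process sketch. This is a toy model of the broker's topic routing only, not the MQTT wire protocol, and the topic names are invented for illustration:

```python
class Broker:
    """Toy broker: routes published messages to matching subscribers."""

    def __init__(self):
        self.subscriptions = []  # list of (topic_filter, callback)

    @staticmethod
    def matches(topic_filter, topic):
        """MQTT-style topic matching with '+' (one level) and '#' (remainder)."""
        f, t = topic_filter.split("/"), topic.split("/")
        for i, seg in enumerate(f):
            if seg == "#":
                return True
            if i >= len(t) or (seg != "+" and seg != t[i]):
                return False
        return len(f) == len(t)

    def subscribe(self, topic_filter, callback):
        self.subscriptions.append((topic_filter, callback))

    def publish(self, topic, payload):
        # Only subscribers whose filter matches the topic receive the message.
        for topic_filter, callback in self.subscriptions:
            if self.matches(topic_filter, topic):
                callback(topic, payload)


broker = Broker()
received = []
# A subscriber registers interest in all per-pond temperature topics.
broker.subscribe("pond/+/temperature", lambda t, p: received.append((t, p)))
# Publishers send updates; only matching topics are delivered.
broker.publish("pond/1/temperature", 26.7)
broker.publish("pond/1/ph", 7.2)  # not delivered: no matching subscription
```

In the actual prototype this role is played by a real MQTT broker running on the Raspberry Pi (for example Mosquitto, named here only as a common choice), with the Arduino acting as publisher.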
Fig. 5 MQTT system
The Arduino controlling all the sensors acts as a publisher and sends data continuously to the Raspberry Pi using the MQTT protocol. The Raspberry Pi acts as the broker and a subscriber at the same time. This communication takes place through an access point set up on a particular network as the communication medium. The Raspberry Pi receives the data as an array, converts it to JSON format, and uses an API service to send it to the cloud server with a simple POST method. Figure 5 shows the data flow from the sensors to the web apps.
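The bridging step on the Raspberry Pi (array in, JSON out, POST to the cloud API) can be sketched as below. The field names, array order, and endpoint URL are assumptions for illustration, since the paper does not list them:

```python
import json
from urllib import request

# Assumed order of the values in the array received from the Arduino.
FIELDS = ["temperature", "turbidity", "tds", "do"]

def to_json(values):
    """Convert the raw sensor array into a JSON document."""
    return json.dumps(dict(zip(FIELDS, values)))

def post_to_cloud(payload, url="http://example.com/api/readings"):
    """POST the JSON payload to the cloud server's API (hypothetical URL)."""
    req = request.Request(url, data=payload.encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

if __name__ == "__main__":
    payload = to_json([26.7, 2521, 1752, 6.67])
    # post_to_cloud(payload)  # enable once a server endpoint is available
    print(payload)
```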
3 Result and Discussion

In this study, mathematical equations were used to measure the accuracy and precision of the sensors. Equation (1) gives the accuracy of every sensor used in the aquaponic system:

Accuracy = (1 − |Manual − Sensor| / Manual) × 100%   (1)
Sensor testing is intended to determine whether sensor readings are comparable with commonly used measuring instruments. In this test, 4 sensors were calibrated, namely the temperature, turbidity, TDS, and DO sensors. The test procedure is as follows:

1. Sensor data that has been sent to the server is monitored every day.
2. Periodic daily samples are taken at 10–11 am and 1–2 pm for 10 days.
3. Data is tabulated according to date and time.
4. Sensor accuracy is calculated against the manual measurement.
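The accuracy computation of Eq. (1), applied per sample and then averaged per sensor, can be sketched directly; the sample readings below are illustrative values, not rows copied from Table 1:

```python
def accuracy(manual, sensor):
    """Eq. (1): accuracy of a sensor reading against a manual measurement, in %."""
    return (1 - abs(manual - sensor) / manual) * 100

# One illustrative temperature sample: manual 26.68 vs sensor 27.06.
print(round(accuracy(26.68, 27.06), 2))  # 98.58

# Averaging over many (manual, sensor) samples, as done per sensor in Table 1.
samples = [(26.68, 27.06), (25.85, 25.41), (27.77, 27.37)]
avg = sum(accuracy(m, s) for m, s in samples) / len(samples)
```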
The test results show that all sensors remain accurate, with accuracies ranging from 96 to 98% (Table 1).
Table 1 Test result: sensor versus manual readings and the resulting accuracy for the temperature, turbidity, TDS, and DO sensors over 20 samples plus an average row (average accuracies: temperature 96.40%, turbidity 97.91%, TDS 98.27%, DO 98.86%)

4 Conclusion

Based on the results of the design and implementation, it is concluded that the system can be used so that routine manual measurements are no longer needed. The water quality monitoring system for aquaponics must be reliable and must be positioned to reach the water in the pool; this is achieved with industrial-grade packaging and a mount that places the system, together with the float sensor, at the edge of the pool. Wi-Fi with the MQTT protocol is used as the communication medium between the Arduino and the Raspberry Pi. The information system that has been designed can monitor the aquaponic system remotely over the internet, so that managers can always obtain periodic measurement results for the DO, TDS, temperature, and turbidity parameters. The tests showed an average accuracy of 96.4% for the temperature sensor, 97.91% for the turbidity sensor, 98.27% for the TDS sensor, and 98.86% for the DO sensor.
References

1. Carlos S, Lourdes H, Jimenez D (2018) A brief analysis of an integrated fish-plant system through phase planes. IFAC-PapersOnLine 13:131–136
2. Yanes R, Martinez P, Ahmad R (2020) Towards automated aquaponics: a review on monitoring, IoT, and smart systems. Journal of Cleaner Production 263
3. Haryanto, Ulum M, Ibadillah AF, Alfita R, Aji K, Rizkyandi R (2019) Smart aquaponic system based internet of things (IoT). Journal of Physics: Conference Series 1211
4. Pasha AK, Mulyana E, Hidayat C, Ramdhani MA, Kurahman OT, Adhipradana M (2018) System design of controlling and monitoring on aquaponic based on internet of things. In: 4th international conference on wireless and telematics (ICWT), pp 1–5
5. Al-Abassi S, Ali A, Al-Baghdadi M (2019) Design and construction of smart IoT-based aquaponics powered by PV cells. IJEE 10:127–134
6. Barosa R, Hassen SIS, Nagowah L (2019) Smart aquaponics with disease detection. In: 2019 conference on next generation computing applications (NextComp), pp 1–6
7. Encinas C, Ruiz E, Cortez J, Espinoza A (2017) Design and implementation of a distributed IoT system for the monitoring of water quality in aquaculture. In: 2017 wireless telecommunications symposium (WTS), pp 1–7
8. Bresnahan PJ, Wirth T, Martz T, Shipley K, Rowley V, Anderson C, Grimm T (2020) Equipping smart coasts with marine water quality IoT sensors. Results in Engineering 5
9. Huan J, Li H, Wu F, Cao W (2020) Design of water quality monitoring system for aquaculture ponds based on NB-IoT. Aquacultural Engineering 90
10. Lee C, Wang Y-J (2020) Development of a cloud-based IoT monitoring system for fish metabolism and activity in aquaponics. Aquacultural Engineering 90
11. Dauhan RES, Efendi E (2014) Efektivitas Sistem Akuaponik dalam Mereduksi Konsentrasi Amonia pada Sistem Budidaya Ikan [The effectiveness of aquaponic systems in reducing ammonia concentration in fish culture systems]. e-Jurnal Rekayasa dan Teknologi Budidaya Perairan 3
12. International Business Machines Corporation (IBM), Eurotech (2010) MQTT V3.1 protocol specification. IBM
13. Prada MA, Reguera P, Alonso S (2016) Communication with resource-constrained devices through MQTT for control education. IFAC-PapersOnLine 49:150–155
14. Tantitharanukul N, Osathanunkul K, Hantrakul K, Pramokchon P, Khoenkaw P (2016) MQTT-topic naming criteria of open data for smart cities. In: 2016 international computer science and engineering conference (ICSEC), pp 1–6
15. Amaran MH, Noh NAM, Rohmad MS, Hashim H (2015) A comparison of lightweight communication protocols in robotic applications. Procedia Computer Science 76:400–405
16. Banks A, Gupta R (2014) MQTT version 3.1.1. OASIS Standard
17. Niruntasukrat A, Issariyapat C, Pongpaibool P, Maesublak K, Aiumsupucgul P, Panya A (2016) Authorization mechanism for MQTT-based internet of things. In: 2016 IEEE international conference on communications workshops (ICC), pp 290–295
IoT Based Methods for Pandemic Control

Artem Filatov and Mahsa Razavi
Abstract The COVID-19 pandemic, which emerged in late 2019, had a significant impact on the world and created a new 'normal' of living. The vast and rapid spread of the pandemic has cost more than 6 million lives to date, and active cases continue to be reported worldwide. Science and technology combined to find ways to address this pandemic. The Internet of Things (IoT) was utilised to obtain information and to communicate via the internet, bringing as much processing power to bear as possible. This research identifies the major elements of existing tools used to control the COVID-19 pandemic and to address future pandemics, demonstrating how IoT-enabled technologies can be successfully utilized to control and limit a pandemic.

Keywords Covid-19 · Pandemic · IoT · IoT-enabled technologies
1 Introduction

The Internet of Things (IoT) consists of a collection of wireless, wired, and connected digital devices that collect, store, and send data over a network without human intervention. Due to technological advances, and driven by the urgencies created by the outbreak of the COVID-19 pandemic in 2020, IoT technologies underwent major development. Even after the 2020 outbreak, IoT applications remained a major facilitator of information. The primary purpose of this research is to analyse how IoT applications were utilized to control the pandemic, thus showcasing expertise transferable to similar pandemic situations in the future.

A. Filatov · M. Razavi (B) Western Sydney University, Penrith, Australia; e-mail: [email protected]; A. Filatov e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_63

A generic IoT-based architecture comprises three main layers: perception, network, and application [1]. The perception layer consists of sensors responsible for
collecting data and utilises cameras, Radio Frequency Identification (RFID), smart-device sensors, and even biosensors. The network layer is responsible for transmitting the data collected by the perception layer and storing it; notable IoT storage options are cloud computing, edge computing, fog nodes, and more decentralized blockchain storage. The final application layer, usually implemented on top of the storage utilities, is responsible for analysing the data through machine- and deep-learning techniques. With the help of these techniques, an IoT solution can be optimized and scaled according to consumer needs and satisfaction. After identifying the major components of a generic IoT-based architecture, it is essential to understand the facilitating factors and barriers for IoT-based services, here health care services. As mentioned above, with the pandemic outbreak a considerable number of policies were introduced globally to facilitate technologies such as IoT. Moreover, this pandemic-driven promotion of smart devices meant that more and more consumers were forced to engage with unfamiliar technologies, thus becoming more comfortable dealing with such devices; this, in turn, has enabled the generalisation of IoT. However, barriers remain, caused by consumers' lack of confidence in IoT-based architectures due to security concerns about data ownership, storage, and control [1]. Another concern centres on the security protocols practiced in IoT applications to ensure the protection of sensitive data, especially medical records. Since IoT-enabled applications can operate remotely, they are extensively used for remote recording and retrieval of data such as pulse rate, blood oxygen level, body temperature, and patient location [2].
This enables the diagnosis of COVID-19 symptoms, the monitoring of imposed quarantine protocols, and the counting of contacted people, all identified as non-pharmaceutical interventions (NPIs) [3]. Moreover, with the incorporation of appropriate tools, such a system can be scaled to make forecasts about the pandemic and track virus mutations. Although this research focuses mainly on IoT for pandemic control in the health care sector, IoT has much wider application in smart homes, smart cities, industry, transportation, and education [4]. In the following sections, existing research into algorithms, tools, models, types of devices/sensors used, and technologies is presented and analysed, followed by future work and conclusions.
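As a concrete illustration of the three-layer split described above, the following sketch wires a perception-layer reading through a storage stand-in to a simple application-layer rule. The class names, sensor values, and thresholds are invented for illustration; real systems replace the final rule with the ML/DL models discussed later:

```python
from dataclasses import dataclass

# Perception layer: a sensor produces raw readings.
@dataclass
class Reading:
    patient_id: str
    body_temp_c: float
    spo2_pct: float

# Network layer: readings are transported to a store (cloud or fog in practice).
class Store:
    def __init__(self):
        self.readings = []

    def ingest(self, reading):
        self.readings.append(reading)

# Application layer: analysis over the stored data (a trivial screening rule
# here, standing in for the learning-based models of the reviewed works).
def flag_possible_cases(store, fever_threshold=38.0, spo2_threshold=94.0):
    return [r.patient_id for r in store.readings
            if r.body_temp_c >= fever_threshold or r.spo2_pct < spo2_threshold]

store = Store()
store.ingest(Reading("p1", 38.4, 97.0))  # fever
store.ingest(Reading("p2", 36.8, 98.5))  # normal
store.ingest(Reading("p3", 36.9, 91.0))  # low oxygen saturation
print(flag_possible_cases(store))  # ['p1', 'p3']
```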
2 Methodology

This review was carried out by accessing publicly available secondary data sources to identify IoT-enabled technologies used to control the spread of the COVID-19 pandemic. The main data bank used was that of the Western Sydney University Library. The literature was extracted by filtering on the keywords 'IoT', 'pandemic control', 'COVID-19', and 'corona virus' from 2021 onward. A bibliometric analysis was done using VOSviewer, based on the keywords used in the reviewed literature, providing information on the occurrence of various technologies. The collected literature was then screened manually by journal quality, i.e.,
Fig. 1 Map of heatspots
Q1, Q2, Q3, and Q4, with only papers from Q1 and Q2 journals being selected: 30 documents were pre-selected and narrowed to 12 Q1 and Q2 papers chosen for systematic review. Figure 1 shows the interconnection between the keywords in the research and the density of occurrence of each keyword in the literature, where red depicts a high frequency of occurrence and the frequency decreases through orange, yellow, green, and blue, respectively.
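The keyword co-occurrence counting behind such bibliometric maps can be sketched as follows. The keyword lists are invented examples, and real tools such as VOSviewer add normalization, layout, and clustering on top of these raw pair counts:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(papers):
    """Count how often each pair of keywords appears in the same paper."""
    counts = Counter()
    for keywords in papers:
        # Sort so each unordered pair is counted under a single key.
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
    return counts

papers = [
    ["IoT", "COVID-19", "contact tracing"],
    ["IoT", "COVID-19", "machine learning"],
    ["COVID-19", "contact tracing"],
]
edges = cooccurrence(papers)
print(edges[("COVID-19", "IoT")])              # 2
print(edges[("COVID-19", "contact tracing")])  # 2
```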
3 Literature Review

This research conducts a systematic literature review to identify preventive methods and controlling mechanisms that use IoT sensors and technologies to curtail the pandemic's spread through a range of NPIs. The focus is on the IoT components deployed by researchers, summarized in Tables 1 and 2, which form the basis for the conceptualization of the overall architecture.
3.1 Components

Data Sources
The research was done by reviewing 30 papers to obtain data from existing research. These data sources come under three broad categories:
Table 1 Component table

Data Sources: VIRAT, CorDa on Kaggle, Masked Face-Net dataset, data from wearable devices, Bing search API, Kaggle datasets, real-world masked face dataset (RMFD), masked face analysis (MAFA) dataset, surveys, IoT sensors and sensor networks, questionnaires, user inputs into user applications, previous works, reports from Gartner, Yole, McKinsey and other consulting firms, interviews and questions raised by IT industry workers, Wi-Fi signals and RF signals, properly wearing masked face detection (PWMFD) dataset, BLE beacons, COVID-19 Open Research Dataset, OpenStreetMap, Khorshid COVID Cohort Study, a collected dataset of air quantities from December 2019–March 2020 in a lab, Google's audio dataset, UTA drowsiness dataset, MIT-BIH Arrhythmia ECG dataset, tufts-face-database-thermal-td-ir dataset, eeg-brainwave-dataset-feeling-emotions dataset, eeg-brainwave-dataset-mental-state dataset, an experiment with 300 participants, Coswara dataset, regional datasets acquired and assessed from Amritsar, India, UCI dataset, X-ray image dataset of Hospital Israelita Albert Einstein, covid-chestxray-dataset, publicly available datasets combined to obtain 4000 Covid and 7000 non-Covid image samples, medical images from devices and sensors, live video streams and surveillance, Alcoi Tourist Lab data, Prajna Bhandary mask dataset, Kaggle dataset, Chest XRay_AI dataset, Mayo Clinic dataset, BIMCV-COVID19+ dataset, BIMCV: Medical Imaging Databank of the Valencia Region, MIDRC: Medical Imaging and Data Resource Center hosted by RSNA, LIDC: Lung Image Database Consortium Image Collection

Data Attributes: Wearing a mask or not, name of the patient, age of the patient, blood group, gender, vitals of the patient such as heart rate, body temperature, oxygen saturation level (SpO2), cough details, respiratory rate, torso muscle motion, anxiety, and perspiring, location in terms of latitude and longitude, RSSI level, CSI level, ecological data such as water quality, diet, level of supplements, and air contamination, meteorological data such as pressure, moisture, environmental temperature and humidity, infection rate, contact duration, distance threshold, levels of gases (ammonia, CO2, CO, CH4, N2O, PM2.5), ECG and EEG signals, path length or distance, intensity of virus containment, SSID, MAC addresses, travelling details, history of smoking, comorbid factors, inertial sensor readings, magnetic field signals, acoustic signals, blood mass

IoT Sensors: Web cameras, CCTV cameras, inertial sensors, magnetometer sensors, proximity sensors, RFID readers, NFC readers, Smart Spot by HOPU with an ESP32 chipset (alternatively a Raspberry Pi 4 Model B can be used), spy camera (Raspberry Pi Camera), GPS sensor (LM80), temperature sensor (MLX90416), Sparkfun biosensors (MAX32664 and MAX30101), pulse oximeter (MAX30102), gyroscopes (MPU6050), accelerometer, respiratory sensor (using an FSR402 sensor), ECG sensor, vibration sensor, switches and routers emulated as Fog Nodes (FNs), photoplethysmogram sensors, Grove Multichannel Gas Sensor, MH-Z19, DHT11, HM3301 Laser PM2.5 Sensor, contact-free temperature sensor (Melexis MLX90614), motion and ultrasonic sensors, FLIR Lepton 3.5 PureThermal camera, AMG8833, FLIR One Pro, MAX30100, MUSE EEG sensor, ReSpeaker 4-Mic Array, microphone, infrared (IR) temperature sensor, GPS module NEO-6M, body area sensor network (BASN), meteorological sensors, ecological sensors, iBeacon network, medical devices and sensors to obtain X-ray images, MPU6050 accelerometer/gyroscope sensor
Data Processing Algorithms: Mask detection algorithm, distance detection algorithm, people counter algorithm, MobileNetV2, YOLOv3-tiny, improved pigeon optimization algorithm, backtracking search-based deep neural network, few-shot/zero-shot learning, firefly technique, consolidation algorithm, sensor fusion algorithm, face detection algorithm, algorithms to detect the entrance and exit of people from a given room, positioning algorithm to estimate the position of people, meta-heuristics optimized convolutional neural network (MHCNN), Mel-frequency cepstral coefficients (MFCC) feature extraction method, peak detection algorithm for breathing rate estimation, algorithm to extract SpO2 and heart rate, algorithm to measure the rate of coughs per minute, Q-learning algorithm, S-Nav training algorithm, S-Nav path determination algorithm, Google's Eddystone protocol, Channel-wise Average Fusion (CAF), Kernel-Based Discriminant Correlation Analysis (KDCA), DCA algorithm, minimal-Redundancy Maximal-Relevance (mRMR), Gramian Angular Summation Field (GASF), SVM, KNN, LSTM learning algorithm, Naive Bayes (NB), VGG-16, InceptionV3, ResNet-50, RNN for cough detection, YOLOv4, CNN, data-level fusion algorithm, CNN-based algorithm, algorithm on operating procedure, IoT-based contact tracing, spatio-temporal (ST)-based prediction, Temporal Recurrent Neural Network (TRNN), booking algorithm, matching algorithm, path planning algorithm with congestion control using modified sequential and parallel Dijkstra algorithms, RSSI-based indoor positioning algorithm, Augmented Data Recognition (ADR) security algorithm, device-verified handshake protocol for personal area networks, long-term diagnostic algorithm, Mean-Variance-SoftMax-Rescale (MVSR) algorithm, fixed-point division algorithm, fixed-point square root algorithm, Faster R-CNN with ResNet-101, custom algorithm to follow hand gestures and robotic functionality, re-training of FL neural network, Non-Dominated Sorting
Genetic Algorithm (NSGA-II), genetic algorithms, PAV optimization algorithm, proposed scheduling algorithm, Particle Swarm scheduling algorithm, Particle Swarm Optimization (PSO) algorithm, parameter sweep model, unsupervised learning algorithms, n-way k-shot training algorithm, DDnet (DenseNet and Deconvolution Network), anisotropic hybrid network (AH-Net), DenseNet-121 network, ResNet-50, InceptionResNet, Xception, AlexNet, SqueezeNet, SHA-1 encryption algorithm, Haar cascade classifier, GoogLeNet
Data Processing Techniques: Cloud computing, fog computing, machine learning (ML), deep learning (DL), reinforcement learning, deep neural networks, virtual machines, Advanced Encryption Standard (AES), big data analytics, meta learning, transfer learning, Adam optimizer, NFC and RFID techniques, FIWARE technology, face detection, image processing, image segmentation, edge computing, multithreading programming, BLE technology, Bluetooth, Wi-Fi, LoRa, LoRaWAN, PAN, WBAN, salp optimization technique, Computer-Aided Design and Manufacturing (CAD and CAM), Augmented Reality (AR), mobile and web applications, ray casting technique, IoT, IoMT, IoHT, non-greedy selection techniques, reward matrix, Q-matrix, GPS location, cellular network (5G/4G/3G/GSM/LTE), ZigBee, signal processing, k-fold cross-validation, Kernel Multiview Canonical Correlation Analysis (KMCCA), Air Quality Index (AQI), predictive data analysis, hit-and-trial method, bi-linear interpolation, min-max normalization, inverse scale transformation, Haar wavelet technique, pathological pattern mining with lesion detection methods, cross entropy, data augmentation, confusion matrix, blockchain, smart contracts, Transport Layer Security protocol, IEEE 802.15.4, sliding window protocol, pending data flag protocol, OAuth V2 protocol, affective computing, Ethereum- and Hyperledger-based blockchain, computer vision (CV), modified transfer learning, Technological Readiness Level (TRL), Early Warning System (EWS) sensor fusion approach, APIs, MQTT, HTTP, SSH, graph-theoretic approach, Markov process, D2D communication, SOAP web services, predictive sampling used for energy optimization, voting ensemble-based mechanism, fuzzy logic, long short-term memory (LSTM) network, Secure Sockets Layer (SSL) protocol, elliptic curve cryptography, space-time sequence patterns (STSP), spatio-temporal granulation technique, back-propagation through time (BPTT) mechanism, Euclidean distance, Highest Response Ratio Next (HRRN), positioning technology based on the fingerprint database, Kalman filtering, vectorization, probability theory, cache strategy, firewall login, live video streaming, VPN connection, regression techniques, plug and play, HDL programming, systolic arrays, batch normalization, region proposal network (RPN), correlation coefficient, statistical methods, p-value, filtered back projection (FBP), parallel computing, enhancement functions, gradient-based meta-learning optimization strategy, WIoTs
Models: Healthcare Monitoring-Data Science Technique (HM-DST) model, crowd monitoring model in compliance with GDPR, CNN and DNN models, distributed architecture, Adaptive-Network-Based Fuzzy Inference System (ANFIS), Sugeno architecture, fusion analysis model, Long Short-Term Memory (LSTM) model, Smart Screening and Disinfection Walkthrough Gate (SSDWG), IoT-based Health Monitoring Systems (IoT-HMS) in compliance with NEWS-2, Hyperledger Fabric architecture, publish-subscribe paradigm (PubSub), QoL-ensured in-home monitoring framework, intelligent management systems, "Dolphin" model, which does not contain Personally Identifiable Information (PII), random graph model, Self-Organizing Map (SOM) method, infection risk assessment model, Dolev-Yao (DY) threat model, "Ekursy", unsupervised meta-learning model, tomography-based framework
Data Processing Tools: IoT wearables and sensors, Python 3.7/2.7, Keras, TensorFlow, NumPy, Microsoft Azure, KVM hypervisor, Aneka cloud application, CloudSim, fog and cloud servers, NFC reader, smartphone, Ubuntu VM, ESP32 chipset, MongoDB, Node.js, HOPU Smart Spot (omnidirectional antenna with gain output 5 dBi; a dual-band USB Wi-Fi dongle with high-gain antenna, model TP-Link Archer T3U Plus, with a sensitivity of about −75 dBm for the 2.4 GHz band and −70 dBm for the 5 GHz band), SMTP server, ThingSpeak server, OpenCV, Pushbullet server, MATLAB, Autodesk Revit, Autodesk Forge, BLE module and beacons, REST API, Cloudflare, Python Flask, Google Maps API, serial communication (RS-232), Google Colab, JavaScript, tampering detection wire, power battery and USB cables, Contiki-Cooja simulator, ATmega325, Wi-Fi module, R programming, MySQL database, Bing Images, Arduino Uno/Nano, multi-protocol low-power radio transceiver, Google Coral TPU, Intel NCS2 edge GPU, Raspberry Pi, Nvidia Jetson Nano, TensorFlow Lite, PyTorch, Flask server deployed behind an Nginx reverse proxy, Docker, CUDA tools, IONOS cloud, Telegram messaging service, Oracle Cloud, Digi XBee-3 radio, ArcGIS 10.2 programming, LCS gadgets, WiSense hubs, actuators (DC motors, servo motors), iFogSim tool, Amazon EC2 distributed storage, STATA platform, Kali Linux, pcaps, CICFlowMeter tool, OBS streaming software, Xilinx ISE, twitch.tv, LCD screen, Zynq-7000 development board, VGA monitor, Anaconda, Vivado 2018.1, HT-12E encoder IC, 433 MHz radio wave transmitter and receiver, antenna, L293D motor driver IC, DC gear motors, Adobe Connect, SPSS, JupyterLab, RFID RC522, webhook API to connect to the Integromat platform, Nvidia's Clara Train pipeline, OpenCL, Gloo communication backend, routers and switches, drones, admin dashboard, MPU9250 module, commodity Wi-Fi, mmWave radar
Future Work: Deployment of tested studies, A SARS-CoV-2 Mutation prediction scheme is to be studied, Automation of collection, storage, and processing of data, linking with social media, booking services, provide encryption, increase accuracy, integration of ML, DL, Include real-time traffic, Device and battery optimization, voice control, obstacle avoidance
Data Presentation—Application: Virtual and e-learning, gesture-controlled robotics, remote medicine delivery, telemedicine, medical imaging, X-ray imaging, diagnosis and detection of COVID-19, smart health IoT (HIoT), smart health monitoring and scheduling systems, hospitals, care centers and adult houses, social distancing, distance learning, indoor positioning, congestion control in enclosed spaces, airports, factories, workshops, early detection of patients, multimodality illness diagnostics, contact tracing and infection tracing with social awareness, track-and-trace applications, Early Warning Systems (EWS), in-home health management in a pandemic, remote sensing, rapid screening and face mask detection and classification, indoor air quality detection and forecasting systems, smart wearables, geofencing of COVID patients, smart navigation, smart buildings, crowd management, queue management, smart cities, tourism, forecasting and mutation tracking, quarantine monitoring
Data Presentation—Primary Output: Warnings and alerts about violations, reports, graphs, charts, number of visitors, number of residents, total number of people in a given timestamp, mobile notifications and SMS, Google Maps, optimal paths, risk factor score, color-coded infographics, type of mask, vitals of the patient, chatbot, pill detection and reminder, heatmaps, contact tracing graph, infection tracing graph, congestion control, visual outputs, normalized X-ray images, movement of a robot depending on gestures, statistical results, COVID-19 symptom diagnosis, scatter plots
Table 2 IoT technologies used

COVID-19 solutions | IoT applications | IoT devices | Network layer | Fog layer | Cloud layer
COVID-19 symptom diagnosis | Breathing monitoring | Inertial sensor | Cellular | + | −
 | | Depth camera | − | − | +
 | | Microphone | − | + | +
 | | mmWave radar | − | − | −
 | | Wi-Fi | Wi-Fi | + | −
 | Blood oxygen saturation monitoring | Oximeter | − | − | −
 | | PPG sensor | − | − | −
 | | RGB camera | − | − | −
 | Body temperature monitoring | Infrared temperature sensor | Cellular, Wi-Fi | − | +
 | | IRT camera | Cellular | − | +
 | | RFID | − | − | −
 | Cough pattern monitoring | Smart phone | LoRa | + | −
 | | Wearable device | Cellular, Wi-Fi | − | +
 | | Microphone | Cellular, Wi-Fi | + | +
 | Multi-modal sensor monitoring | GPS, health, environmental, ecological sensors | ZigBee, Cellular, Bluetooth, Wi-Fi | + | +
 | | Temperature, heart rate and torso muscle motion sensors | ZigBee | − | −
Quarantine monitoring | Quarantine monitoring | RFID | − | − | +
 | | Smart devices | Wi-Fi, Cellular | + | +
 | | Ear sensor, motion sensor | Cellular | − | +
Contact tracing, social distancing | Human activity tracking | Smartphone | Wi-Fi, Cellular | + | +
 | | Drone, GPS | Radio | + | −
 | | — | Wi-Fi, Bluetooth; Wi-Fi, Cellular | − | +
 | | Cellular trace | Cellular | − | −
 | | BLE beacons | Wi-Fi | − | +
 | | OpenStreetMap | Wi-Fi | + | +
COVID-19 outbreak forecast | Disease outbreak prediction | Wearable device | − | + | +
 | | Mobile phone and body sensor | Wireless mode | + | +
 | | GPS | Cellular | + | +
SARS-CoV-2 mutation prediction | Virus mutation prediction | — | − | − | −
(I) user input via questionnaires, surveys, and application input; (II) data extracted from open datasets available on the internet; and (III) sensory data acquired by IoT sensors, sensor networks, and devices. Moreover, most of the papers also used data augmentation [5] to fill data gaps. The data presented in these publicly available datasets shows promise for managing pandemics like COVID-19 [6].

Data Attributes. Data utilized in the existing works were based on health (e.g., HL7 data types), meteorology, the environment, geography, and networks. Health data included body temperature, heart rate, and oxygen saturation rate, while physical awareness systems were based on meteorological and environmental data, such as humidity and temperature. Geographical data mainly comprised the latitude and longitude of a given location relating to the sensor node or the user.

IoT Sensors. The types of sensors deployed varied depending on the technology, the parameters being measured, and the nature of the measurement (i.e., active or passive). The most prevalent were temperature sensors; however, the sensors also varied from one work to another depending on the proposed system.

Algorithms. This component has the greatest variety in implementation; the techniques are based on a range of logical and mathematical procedures used in the reviewed papers to mitigate and control the pandemic. However, not all researchers relied on algorithms for their proposed systems; for example, Paganelli et al. [7] developed an Early Warning System (EWS) using a microcontroller-based integrated sensor model known as W-kit. Its basic contribution is in remote sensing and does not require high processing capabilities; in systems where a more robust solution is presented, however, complex and sophisticated algorithms needed to be incorporated.
Models. The solutions presented in the literature are structured and built upon a predefined framework. Models vary depending on what the researchers needed to achieve, as well as on scalability, integrability into prevailing systems, and reliability. Most models relied on the generic IoT architecture, i.e., the perception, processing, and application layers, the last of which the user interacts with. Cost was also a significant concern.

Primary Output. Deliverables from the reviewed literature are web or mobile applications, user dashboards, outputs from actuators, warnings, and alerts. These collectively contributed to addressing the domain issue the relevant research paper intended to address. However, some researchers did not deliver a tangible outcome but presented simulated results as proof-of-concept; for example, Sharma et al. [8] used the Contiki-Cooja simulation environment as a working proof of their solution. Moreover, most of the research identified sensing data from sensors as its primary output, viewed in a user application as graphs and reports and updated as database entries in cloud storage or databases.

Application Area. The main focus of this research is to identify domains in which IoT and related technologies were implemented to control COVID-19 and future epidemic situations. The application area was critical in scrutinizing the literature to identify the most relevant works for answering the research questions. The most prominent area of interest was the diagnosis and monitoring of COVID-19 patients, with some papers focusing on contact tracing [9, 10]. These approaches are treated further in the discussion section. Table 1 summarizes the information obtained from 30 existing works.
3.2 Components Table

Table 1 provides insight into the data sources, data attributes, IoT sensors, algorithms, technologies, and future work proposed in the reviewed literature. The component table groups the previously discussed components into three broad sections: Data Input, Data Processing, and Data Presentation. The first column identifies these three sections; the second column presents the attributes corresponding to each section; and the last column lists instances of each attribute. This information is important for understanding the key components of the reviewed literature. It also provides an easy way to visualize and summarize the research, as it lists all the instances occurring in the papers according to their attributes.
3.3 Classification Table

Table 2 categorizes the components of each work, selecting the 12 most impactful papers from the reviewed literature. With this, we can identify the unique inputs, techniques, algorithms, and methodologies incorporated by the authors and researchers. The data sources and attributes columns identify the sources from which data were extracted and attributes such as sensor readings, user inputs, and statistical data. The IoT sensors column lists the types of sensors and sensor technologies incorporated in each paper. The algorithms and techniques columns cover the technologies and procedures used to achieve the results presented in the output column. The tools and model columns depict the software, architectural, and hardware requirements of each proposed solution. The application area categorizes the papers according to their significant impact on a given approach to mitigating the spread and severity of the pandemic. Finally, future work identifies the research gap to be addressed in each paper.
3.4 IoT Solutions

3.5 Overall Architecture

This section constructs an overall architecture based on the information gathered from the existing literature. A prominent example is the concept of a smart city as the overall architecture for a system of IoT-enabled technologies to control pandemics and their implications. A smart hospital is equipped with IoT sensors to monitor and prevent pandemics and emergencies; ideally, it has camera sensors ranging from thermal to infrared, depending on the context and environmental conditions. In both cases, IoT sensors can be used to control the number of people within a given perimeter to prevent the rapid spread of disease due to high population density. Another incorporation is the real-time monitoring of body vitals using non-contact sensors, including body temperature and facial expressions, for example. Telemedicine and e-health are further technologies that can be incorporated into the context of smart hospitals: the services provided by a conventional hospital are delivered through internet and mobile services, and these can be further developed through technologies such as VR, AR, and MR.

During the pandemic, most patients were kept under home quarantine. The smart city concept can also facilitate IoT-enabled home quarantine, as discussed extensively in the literature. Non-contact temperature sensors can monitor patient temperature, mobile robots with various sensors can assist the patient, blood oxygen saturation sensors can monitor the patient's blood oxygen level, and various microphones can monitor the patient's breathing and cough patterns. Together these enable an IoT-based home quarantine procedure
in the context of a smart city. Moreover, the patient's vitals can be transferred to the relevant authorities using secure technologies such as Blockchain, which are safe and immutable by default. Medical research centres also play a significant role in mitigating the spread of the pandemic and thus carry a heavy workload. For this purpose, robots were used to collect, store, and analyse samples from patients and hospitals. This reduces processing time and human intervention, protecting the most valuable assets, such as researchers, specialized technicians, and scholars, from exposure. Drones equipped with depth and infrared camera sensors can be used for crowd management and population surveillance, enforcing the relevant pandemic protocols accordingly; this is most effective in monitoring public places such as shopping malls and public parks. Smart congestion control is another component employed in the context of smart cities for crowd management, enabled by multi-modal sensors, Big Data analytics, and pattern recognition.

In the proposed architecture, one medical database stores data retrieved from smart hospitals, clinics, and research centres through reliable 5G communication, or any other form of networking, with underlying Blockchain technology for security and privacy. The second database is a public database, which can reside on cloud or fog servers and can be incorporated into edge devices. It contains details for crowd management and gathers statistical data on the spread of the pandemic, and it is combined with the medical database for system completeness and accuracy. These components together form the smart city architecture for pandemic control shown in Fig. 2.
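To make the home-quarantine data flow described above concrete, a minimal sketch of a vitals telemetry record is given below. The field names, alert thresholds, and JSON schema are illustrative assumptions, not taken from any reviewed system.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VitalsReading:
    """Hypothetical record produced by an IoT home-quarantine node.

    Field names and thresholds are illustrative assumptions only.
    """
    patient_id: str
    body_temp_c: float          # non-contact temperature sensor
    spo2_percent: float         # blood oxygen saturation sensor
    cough_events_per_hour: int  # microphone-based cough detector

    def alerts(self) -> list:
        """Flag readings that should be escalated to the medical database."""
        flags = []
        if self.body_temp_c >= 38.0:
            flags.append("fever")
        if self.spo2_percent < 94.0:
            flags.append("low_spo2")
        if self.cough_events_per_hour > 10:
            flags.append("frequent_cough")
        return flags

    def to_message(self) -> str:
        """Serialize the reading and its alerts for the network layer."""
        payload = asdict(self)
        payload["alerts"] = self.alerts()
        return json.dumps(payload)

reading = VitalsReading("patient-001", 38.4, 92.5, 12)
print(reading.to_message())
```

In a deployment of the kind the literature describes, such a message would travel over the network layer (e.g., Wi-Fi or cellular) to the medical database, with Blockchain or other security mechanisms applied in transit.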
3.6 IoT Solutions Table

This section discusses IoT solutions based on the various technologies used in the reviewed literature. The solutions are broadly categorized into two categories, namely COVID-19 Symptom Diagnosis and COVID-19 Control, as shown in Table 2. The various IoT applications for these two solutions are shown in the second column. For COVID-19 Symptom Diagnosis, monitoring breathing patterns, blood oxygen levels, body temperature, and cough patterns, together with multi-modal sensor monitoring, are identified as the most impactful IoT applications. Similarly, for COVID-19 Control, human activity tracking, disease outbreak prediction, and virus mutation prediction are identified. Next, the major IoT devices are identified for each IoT application. Having identified the IoT devices, the technology used for the network layer is highlighted along with the use of the fog and cloud layers, with a '+' mark indicating utilization of a layer and a '−' indicating lack of utilization. The data presented in Table 2 helps build an overall architecture and identify research gaps.
Fig. 2 Overall architecture
4 Discussion

4.1 Input

The findings of the literature review suggest that IoT and related technologies are promising for detecting, monitoring, controlling, and forecasting COVID-19 and its associated implications. However, only Dong and Yao [3] had proposed forecasting COVID-19 by integrating Big Data analytics with the IoT solutions presented in the research. This is also discussed in Singh et al. [11], which suggests a wide variety of other IoT applications along with some of the issues IoT technologies would face in addressing them. Internet-connected hospitals with
automated treatment processes, telehealth consultation, and transparent and smart tracking of COVID-19 patients are the main applications presented in Singh et al. [11]. In maintaining the transparency of this treatment and tracing process, safety is a significant concern to be addressed with caution, as sensitive personal details are involved. A more robust and feasible solution for forecasting COVID-19 trends is discussed by Ahanger et al. [12], who proposed a four-level architecture addressing various aspects, including a COVID-19 prediction and decision model; they used mobile device data to predict and make decisions using fuzzy C-means classification. Most of the research has focused on detecting COVID-19; however, according to Poongodi et al. [9], the majority of patients with COVID-19 show mild symptoms or are asymptomatic yet are unintentionally carriers of the virus. The researchers therefore used a mechanism to continuously monitor a patient's or suspect's vitals via multiple sensors and sensor networks, connected together wirelessly or by wire. Emokpae et al. [13] used a wearable shirt (vest) to monitor patients with COVID-19 through a Body Area Sensor Network (BASN); its sensors measure the full range of motion of the torso, muscle activation, and body vitals in the form of photoplethysmography (PPG), electrocardiography (ECG), electromyography (EMG), acoustic cardiography (ACG), and acoustic myography (AMG). Another significant aspect of this work was its use of a long short-term memory (LSTM) network instead of typical cloud or fog computing technologies. For the task of detecting patients with COVID-19, the evaluation of chest radiograph images was used as a clinically accepted procedure, where a patient with COVID-19 shows abnormalities such as ground-glass opacities, linear opacities, vascular consolidations, reversed halo signs, and crazy-paving patterns.
Thus, many researchers have pursued viable and fast solutions in this domain. A tomography-based framework for detecting COVID-19 patients from chest CT and X-ray images using the DDNet (DenseNet and Deconvolutional Network) deep learning model achieved a testing accuracy of 91%, much higher than Reverse-Transcription Polymerase Chain Reaction (RT-PCR), which had only 67% accuracy and was a far more tedious procedure. However, the solution was highly resource-intensive. To overcome this, Yaman et al. [14] introduced a normalization algorithm to reduce the execution complexity of large deep neural networks. Both of the above-mentioned approaches had the drawback of requiring feature extraction before training the model; to overcome this limitation, Miao et al. [15] introduced an unsupervised meta-learning procedure in which the model did not require pre-training. Ahmed et al. [16] proposed a similar framework utilizing deep learning techniques to predict COVID-19 from chest X-ray images, evaluating numerous deep learning techniques to choose the best solution.
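As a generic illustration of the kind of preprocessing such normalization schemes perform, a common per-image min-max rescaling is sketched below. This is a baseline of the same family, not the specific algorithm of Yaman et al. [14].

```python
import numpy as np

def normalize_image(img: np.ndarray) -> np.ndarray:
    """Rescale pixel intensities to [0, 1] per image.

    A generic min-max normalization often applied to chest X-ray
    arrays before CNN training; illustrative only, not the algorithm
    proposed by Yaman et al.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # constant image: avoid divide-by-zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)

# Toy 2x2 "X-ray" with 8-bit intensities
xray = np.array([[0, 64], [128, 255]])
norm = normalize_image(xray)
```

Mapping every image into the same intensity range removes scanner-dependent brightness differences, which is one reason such steps improve CNN accuracy.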
4.2 Processing

In addition to the general public, health care workers are the most vulnerable group, exposed to direct contact with the virus and with infected subjects. Kumar et al. [17] proposed a framework for the safety of health care workers that combines Artificial Intelligence (AI) and IoT. It identified the various challenges they face across physical, operational, resource-based, organizational, technological, and external health care aspects. The major issues faced by health care workers were the scarcity of health tools under high demand, limited testing lab capacity, limited medical supplies, inadequate Personal Protective Equipment (PPE), lack of training, and lack of safety during sample collection. Kumar et al. [17] also highlighted the importance of policymakers' participation in ensuring the safety of health care workers through policy reform. Another cost-effective solution was a self-sanitizing PPE kit built with only an Arduino, an NE555 timer IC, and the relevant actuators; the suit is sanitized regularly to prevent the coronavirus from spreading through clothing, although the design was limited to a prototype. Another interesting solution addressed the risk involved in PPE units by providing an IoT-enabled smart PPE vending machine.

Critical patients admitted to Intensive Care Units (ICU) require a high degree of attention and impose additional stress on the workload of health workers. To address this, Filho et al. [18] proposed a patient monitoring framework that was successfully deployed in a hospital; the system included Remote Patient and Environment Monitoring, Patient Health Data Management, Patient Health Condition Management, and Emergency and Crisis Management units. Enclosed spaces were another vulnerable factor in the rapid spread of COVID-19. To limit the spread in such spaces, Yang et al.
[19] proposed a system that exploits the variation of the Received Signal Strength Indicator (RSSI) with the number of devices connected to a given network. Yang et al. used Bluetooth Low Energy (BLE) RSSI strength to prove their concept; however, RSSI alone did not provide sufficient information about the crowd, so they suggested an early warning system for crowd management in indoor and outdoor contexts that incorporates both RSSI and the Channel State Information (CSI) of a given network. This solution was effective for crowd management. Hussain et al. [20] proposed an IoT-based Smart Screening and Disinfection Walkthrough Gate (SSDWG) for the entrances of public places, which ensures that a person entering through the gate wears a mask and has an acceptable body temperature; it is also equipped with a sanitizer for the people entering, ensuring public safety. Due to the prolonged lockdowns, there was a possibility of toxic gases such as N2O accumulating in places where air-conditioning units were installed. Mumtaz et al. [21] addressed this issue by using numerous sensors to forecast and predict indoor air quality; the proposed solution can also be extended to identify fires and other emergencies in advance. Other solutions, suggested by Savaşçı Şen et al. [22], Poongodi et al. [9], Varshini et al. [23], and Roy et al. [10], aimed at ensuring social distancing in public places.
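The idea of inferring occupancy from received signal strength can be sketched as follows. The proximity threshold and alert capacity here are invented for illustration and are not values from Yang et al. [19].

```python
# Illustrative sketch of RSSI-based occupancy estimation: count distinct
# devices whose received signal strength suggests they are inside the room.
# The -75 dBm proximity threshold and the capacity value are assumptions,
# not parameters from the reviewed work.

def estimate_occupancy(scans, rssi_threshold_dbm=-75):
    """scans: iterable of (device_id, rssi_dbm) tuples from a BLE scanner."""
    nearby = {dev for dev, rssi in scans if rssi >= rssi_threshold_dbm}
    return len(nearby)

def crowd_alert(scans, capacity=10):
    """Raise an early warning when estimated occupancy exceeds capacity."""
    return estimate_occupancy(scans) > capacity

scans = [("aa", -60), ("bb", -82), ("aa", -58), ("cc", -70)]
print(estimate_occupancy(scans))  # prints 2: "bb" is filtered out as too far
```

A real system of the kind the paper describes would fuse this coarse count with CSI features, since RSSI alone, as noted above, is insufficient.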
4.3 Output

Monitoring COVID-19 patients was vital, as they required immediate attention. This included tracing and tracking their social interactions and monitoring patients directed to home quarantine due to a lack of space in hospitals. Rajasekar [24] presented a mobile tracking application that could read Near Field Communication (NFC) and RFID tags; however, this solution had a significant drawback in that it relied on the honesty of the patient, since reporting was a voluntary action. A gesture-controlled robotic solution and a DNN-equipped, drone-based solution [25] were presented for screening and detecting COVID-19 subjects in rural areas. The solutions proposed by Cacovean et al. [26] and Rahman and Shamim Hossain [27] incorporate deep learning techniques for detecting and monitoring COVID-19 patients remotely. We can therefore conclude that a substantial amount of research has addressed controlling the COVID-19 pandemic through detection, monitoring, and forecasting; however, mutation tracking remains a less addressed research area and a stated future goal in most of the reviewed papers.
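As an illustration of the contact-tracing outputs mentioned above, a contact graph can be built from co-location records and searched for everyone within a given number of hops of a confirmed case. The records and hop limit below are hypothetical.

```python
from collections import defaultdict, deque

# Illustrative contact-tracing sketch: build an undirected contact graph
# from co-location logs, then find everyone within k hops of an index case.
# Names and the hop limit are invented for illustration.

def build_contact_graph(contacts):
    graph = defaultdict(set)
    for a, b in contacts:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def exposed_within(graph, index_case, max_hops=2):
    """Breadth-first search out to max_hops from the confirmed case."""
    seen = {index_case: 0}
    queue = deque([index_case])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue  # do not expand beyond the hop limit
        for other in graph[person]:
            if other not in seen:
                seen[other] = seen[person] + 1
                queue.append(other)
    seen.pop(index_case)
    return set(seen)

contacts = [("ana", "ben"), ("ben", "cem"), ("cem", "dia"), ("eve", "fay")]
graph = build_contact_graph(contacts)
print(sorted(exposed_within(graph, "ana")))  # prints ['ben', 'cem']
```

The returned set is exactly the kind of "contact tracing graph" output listed in Table 1, here reduced to its simplest form.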
5 Recommendations

After a systematic review of the literature, the following recommendations were formulated to enable IoT-related technologies to limit prevailing and future pandemics:
1. Enable internet use, especially in developing and undeveloped countries and rural regions.
2. Provide basic training to educate health care workers about IoT-enabled applications, with user-friendly interfaces to achieve a fast learning curve.
3. Formulate relevant regulatory frameworks and policies to address security and privacy concerns in protecting user data.
4. Use more automation techniques and robotic process automation to limit human involvement in sample collection and sample analysis.
6 Future Work

Future work should include the development of a unified modular system as a base for integrating and connecting the IoT devices used for pandemic control and monitoring into the grid. This paper identified and gathered IoT devices and technologies into a method to control contagious diseases similar to COVID-19. Due to limited time and resources, the practical implementation of the defined methods and devices
was not tested in a small environment to identify integration issues. These issues must be addressed by future research to evaluate how the devices can be connected and how the data will be processed.
7 Conclusion

With the rapid advancement of internet-related services and networking capabilities, the usage of IoT devices is expected to increase. Based on the reviewed literature, we can conclude that IoT, integrated with other technologies, can be utilized successfully to control a pandemic. The main objective of this research was to identify the various methods for controlling pandemic situations such as COVID-19 and to gain insight into future advancements in the field. This research has successfully identified the current methods used for pandemic control through IoT-enabled technologies. Most of the studies utilized IoT sensors such as temperature and blood oxygen saturation sensors, the two most widely used sensor types in the proposed solutions; some solutions utilized a network of sensors. Artificial Intelligence (AI) techniques are widely used to enhance the proposed solutions, chiefly deep learning and machine learning, while a few works used reinforcement learning. These IoT solutions helped mitigate the spread of pandemics by diagnosing, detecting, and tracking their spread, and the use of IoT in monitoring has played a major role in curtailing the pandemic. This research identified the utilization of IoT sensor networks to facilitate smart monitoring systems, smart hospitals, and smart city concepts; the use of robotic systems to avoid direct contact with infection when collecting samples; and research and drone surveillance. 5G, Blockchain, Robotic Process Automation, and Cryptography are some of the enabling technologies for future IoT. As discussed, security and privacy are the main hindrances to developing IoT-enabled technologies and remain a research area to be addressed.
Also, to boost the deployment of IoT-based developments, policies need to be adapted to suit the requirements of IoT-based technologies; these policies can be categorized as economic, political, social, and technological. Moreover, increasing internet use and boosting research on nano- and macro-scale sensors is essential for extending the use of IoT technologies. We can therefore conclude that by utilizing IoT-based solutions, we can reduce the risk involved in situations such as the COVID-19 pandemic, where humans need not be directly involved in monitoring and detecting the spread of the pandemic. Accordingly, the research has successfully addressed the study's objectives and identified future research directions.
References

1. Misra S, Deb PK, Koppala N, Mukherjee A, Mao S (2021) S-Nav: safety-aware IoT navigation tool for avoiding COVID-19 hotspots. IEEE Internet Things J 8(8):6975–6982. https://doi.org/10.1109/JIOT.2020.3037641
2. Chaudhari SN, Mene SP, Bora RM, Somavanshi KN (2020) Role of internet of things (IoT) in pandemic COVID-19 condition. International Journal of Engineering Research and Applications 10. www.ijera.com
3. Dong Y, Yao YD (2021) IoT platform for COVID-19 prevention and control: a survey. IEEE Access 9:49929–49941. https://doi.org/10.1109/ACCESS.2021.3068276
4. Umair M, Cheema MA, Cheema O, Li H, Lu H (2021) Impact of COVID-19 on IoT adoption in healthcare, smart homes, smart buildings, smart cities, transportation and industrial IoT. Sensors 21(11). https://doi.org/10.3390/s21113838
5. Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A (2021) A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 65(5):545–563. https://doi.org/10.1111/1754-9485.13261
6. Popkova EG, Sergi BS (2022) Digital public health: automation based on new datasets and the internet of things. Socio-Economic Planning Sciences 80. https://doi.org/10.1016/J.SEPS.2021.101039
7. Paganelli AI, Velmovitsky PE, Miranda P, Branco A, Alencar P, Cowan D, Endler M, Morita PP (2022) A conceptual IoT-based early-warning architecture for remote monitoring of COVID-19 patients in wards and at home. Internet of Things (Netherlands). https://doi.org/10.1016/J.IOT.2021.100399
8. Sharma N, Mangla M, Mohanty SN, Gupta D, Tiwari P, Shorfuzzaman M, Rawashdeh M (2021) A smart ontology based IoT framework for remote patient monitoring. Biomedical Signal Processing and Control 68. https://doi.org/10.1016/J.BSPC.2021.102717
9. Poongodi M, Nguyen TN, Hamdi M, Cengiz K (2021) A measurement approach using smart-IoT based architecture for detecting the COVID-19. Neural Process Lett. https://doi.org/10.1007/S11063-021-10602-X
10. Roy A, Kumbhar FH, Dhillon HS, Saxena N, Shin SY, Singh S (2020) Efficient monitoring and contact tracing for COVID-19: a smart IoT-based framework. IEEE Internet of Things Magazine 3(3):17–23. https://doi.org/10.1109/IOTM.0001.2000145
11. Singh RP, Javaid M, Haleem A, Suman R (2020) Internet of things (IoT) applications to fight against COVID-19 pandemic. Diabetes Metab Syndr 14(4):521–524. https://doi.org/10.1016/J.DSX.2020.04.041
12. Ahanger TA, Tariq U, Nusir M, Aldaej A, Ullah I, Sulman A (2022) A novel IoT–fog–cloud-based healthcare system for monitoring and predicting COVID-19 outspread. Journal of Supercomputing 78(2):1783–1806. https://doi.org/10.1007/S11227-021-03935-W
13. Emokpae LE, Emokpae RN, Lalouani W, Younis M (2021) Smart multimodal telehealth-IoT system for COVID-19 patients. IEEE Pervasive Comput 20(2):73–80. https://doi.org/10.1109/MPRV.2021.3068183
14. Yaman S, Karakaya B, Erol Y (2022) A novel normalization algorithm to facilitate pre-assessment of Covid-19 disease by improving accuracy of CNN and its FPGA implementation. Evol Syst. https://doi.org/10.1007/S12530-022-09419-3
15. Miao R, Dong X, Xie SL, Liang Y, Lo SL (2021) UMLF-COVID: an unsupervised meta-learning model specifically designed to identify X-ray images of COVID-19 patients. BMC Medical Imaging 21(1). https://doi.org/10.1186/S12880-021-00704-2
16. Ahmed I, Ahmad A, Jeon G (2021) An IoT-based deep learning framework for early assessment of covid-19. IEEE Internet Things J 8(21):15855–15862. https://doi.org/10.1109/JIOT.2020.3034074
17. Kumar S, Raut RD, Narkhede BE (2020) A proposed collaborative framework by using artificial intelligence-internet of things (AI-IoT) in COVID-19 pandemic situation for healthcare workers. International Journal of Healthcare Management 13(4):337–345. https://doi.org/10.1080/20479700.2020.1810453
18. Filho IDMB, Aquino G, Malaquias RS, Girao G, Melo SRM (2021) An IoT-based healthcare platform for patients in ICU beds during the COVID-19 outbreak. IEEE Access 9:27262–27277. https://doi.org/10.1109/ACCESS.2021.3058448
19. Yang J, Liu Y, Liu Z, Wu Y, Li T, Yang Y (2021) A framework for human activity recognition based on WiFi CSI signal enhancement. Int J Antennas Propag. https://doi.org/10.1155/2021/6654752
20. Hussain S, Yu Y, Ayoub M, Khan A, Rehman R, Wahid JA, Hou W (2021) IoT and deep learning-based approach for rapid screening and face mask detection for infection spread control of COVID-19. Applied Sciences (Switzerland) 11(8). https://doi.org/10.3390/APP11083495
21. Mumtaz R, Zaidi SMH, Shakir MZ, Shafi U, Malik MM, Haque A, Mumtaz S, Zaidi SAR (2021) Internet of things (IoT) based indoor air quality sensing and predictive analytic: a COVID-19 perspective. Electronics (Switzerland) 10(2):1–26. https://doi.org/10.3390/ELECTRONICS10020184
22. Savaşçı Şen S, Cicioğlu M, Çalhan A (2021) IoT-based GPS assisted surveillance system with inter-WBAN geographic routing for pandemic situations. Journal of Biomedical Informatics 116. https://doi.org/10.1016/J.JBI.2021.103731
23. Varshini B, Yogesh H, Pasha SD, Suhail M, Madhumitha V, Sasi A (2021) IoT-enabled smart doors for monitoring body temperature and face mask detection. Global Transitions Proceedings 2(2):246–254. https://doi.org/10.1016/J.GLTP.2021.08.071
24. Rajasekar SJS (2021) An enhanced IoT based tracing and tracking model for COVID-19 cases. SN Computer Science 2(1). https://doi.org/10.1007/s42979-020-00400-y
25. Naren N, Chamola V, Baitragunta S, Chintanpalli A, Mishra P, Yenuganti S, Guizani M (2021) IoMT and DNN-enabled drone-assisted covid-19 screening and detection framework for rural areas. IEEE Internet of Things Magazine 4(2):4–9. https://doi.org/10.1109/IOTM.0011.2100053
26. Cacovean D, Ioana I, Nitulescu G (2020) IoT system in diagnosis of covid-19 patients. Informatica Economica 24(2):75–89. https://doi.org/10.24818/ISSN14531305/24.2.2020.07
27. Rahman MA, Shamim Hossain M (2021) An internet-of-medical-things-enabled edge computing framework for tackling COVID-19. IEEE Internet Things J 8(21):15847–15854. https://doi.org/10.1109/JIOT.2021.3051080
Identifying Renewable Energy Sources for Investment on Sumba Island, Indonesia Using the Analytic Hierarchy Process (AHP) Michael Aaron Tuori, Andriana Nurmalasari, and Pearla Natalia
Abstract Climate change and declining fossil fuel reserves have accelerated the push to find alternative renewable energy sources the world over. This has proven to be an opportunity in Indonesia, where access to electricity is a vital concern for many regions. Sumba Island in the Nusa Tenggara Timur (NTT) province has been of particular interest in this pursuit due to its lack of existing energy infrastructure and high potential for renewable energy. However, the selection of a renewable energy source for investment is a complex and political issue. Given its complexity, the decision is well suited to the Analytic Hierarchy Process (AHP). This research employs AHP to identify the most important criteria and subcriteria in the selection decision for a renewable energy investment in Sumba and ultimately to recommend the highest-potential source for investment. The research relies on responses from 10 expert respondents from the governmental, non-governmental, residential, and private sectors. The findings suggest that solar power has the highest investment potential for the island, followed by hydro, biogas, wind, and finally biomass.

Keywords Energy · Renewable Energy · Analytic Hierarchy Process
M. A. Tuori (B) · A. Nurmalasari · P. Natalia, Binus Business School, Jakarta 10271, Indonesia. e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_64

1 Introduction

Climate change and declining fossil fuel reserves have drawn the attention of governments around the world to sustainable alternative energy sources. Under the current administration, Indonesia has enacted policies focused on the development of renewable energy throughout the country, particularly in areas that still do not have adequate access to electricity [1]. Sumba is an island in the Nusa Tenggara Timur (NTT) province and one of the least developed areas of Indonesia. However, the island has many natural resources and high potential for renewable energy [2]. Nonetheless, the process of choosing the best renewable energy source is complex and
739
740
M. A. Tuori et al.
political [3]. The aim of this research is to evaluate the available renewable energy sources against various selection criteria and to recommend the best investment opportunity from the perspectives of the key stakeholders involved.
2 Literature Review While traditionally seen as a costly alternative to fossil fuels, in recent years renewable energy has shown the potential to reduce energy costs and limit energy waste during distribution [4]. Furthermore, the adoption of renewable energy displays positive externalities, such as reduced CO2 emissions and improved local air quality [5]. The success of a renewable energy project depends not only on the management of the project by providers, but also on the stakeholder satisfaction of beneficiaries [6] and government support [7]. Therefore, stakeholder buy-in is key. Due to the complexity of the decision-making criteria and the range of stakeholders involved, Multi-criteria Decision Making (MCDM) has become an accepted choice for evaluating renewable energy decisions around the world [8–10]. The Analytic Hierarchy Process (AHP) is one of the most preferred MCDM methods [11]. AHP is capable of arranging a multitude of decision criteria into a hierarchy and then prioritizing key criteria in order to achieve a main objective [12].
3 Methodology This research employs the Analytic Hierarchy Process (AHP) method for choosing the best renewable energy source for Sumba, Indonesia. AHP is a multi-criteria decision-making (MCDM) technique that has proven useful for making decisions on large, complex problems [13]. AHP requires the determination of the potential alternatives for the solution, as well as the criteria and subcriteria crucial in the decision-making process. The research process followed is illustrated in Fig. 1. This research considered the following renewable energy sources as viable alternatives: biogas, biomass, hydro, solar, and wind. These were chosen based on the findings of a previous study by the NGO Hivos, which identified these sources as having the highest potential based on the ecological conditions on the island [14]. A review of 21 journal articles was conducted to determine the most important criteria and subcriteria to consider when developing a new renewable energy project [15–35]. From this review, 5 criteria and 17 subcriteria were selected. The alternatives, criteria, and subcriteria were constructed into a hierarchy (see Fig. 2). The stakeholders in the decision-making process were chosen using Mitchell’s Typology of Stakeholder Identification, which states that stakeholders should be identified based on the presence of three attributes: power, legitimacy, and urgency [36]. In this case, “power” represents a stakeholder’s power to impact the project; “legitimacy” refers to the perception of the stakeholder’s power externally; and
Identifying Renewable Energy Sources for Investment on Sumba Island, …
741
Fig. 1 Research process
Fig. 2 Hierarchy structure
“urgency” indicates that there is an immediate need to address the stakeholder’s concerns [37]. Based on these attributes, stakeholders were identified as: dominant, discretionary, definitive, dangerous, dependent, dormant, and demanding. The most important stakeholders to include are the definitive stakeholders, in this case representatives from the local government, because they meet the criteria for all three attributes. Stakeholders identified as dependent and discretionary were also included and represent respondents from non-governmental, residential, and private sectors. No stakeholders were identified as dominant, dangerous, or demanding. In total, 10 decision makers were chosen to represent the 6 stakeholder categories in the decision-making process (see Fig. 3). After selecting the expert respondents, a questionnaire was developed using a 9-point pairwise comparison scale. The pairwise scale allows respondents to rank their preferences towards the criteria and subcriteria in a way that is easily understood [38].
742
M. A. Tuori et al.
Fig. 3 Stakeholder identification
Respondents compare the criteria pairwise, one pair at a time, and indicate their level of preference within each pair. Likewise, respondents then compare and indicate their preferences for the subcriteria within each criterion. Next, the pairwise comparisons from each respondent were arranged into a square matrix whose diagonal elements are 1. From this matrix, the normalized principal eigenvector provides the relative importance of the criteria compared, and the principal eigenvalue of the comparison matrix is used to assess consistency. The respondents’ judgments were then aggregated using the geometric mean. The consistency of the respondents’ judgments was assessed by calculating the Consistency Index (CI) and Consistency Ratio (CR). Finally, the rating for each alternative was multiplied by the weight of the subcriteria and aggregated to obtain local ratings with respect to each criterion and identify the priority of each alternative.
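As an illustrative sketch of this step (not the authors' code; the matrix below is a hypothetical 3 × 3 example on Saaty's 1–9 scale), the relative weights can be approximated as the principal eigenvector of the comparison matrix via power iteration:

```python
def ahp_weights(matrix, iterations=100):
    """Approximate the normalized principal eigenvector of a
    pairwise comparison matrix by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n  # start from uniform weights
    for _ in range(iterations):
        # multiply the matrix by the current weight vector
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # renormalize so the weights sum to 1
    return w

# Hypothetical comparison of three criteria: the first is judged 3x as
# important as the second and 5x as important as the third.
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(pairwise)  # roughly [0.65, 0.23, 0.12]
```

The reciprocal entries below the diagonal mirror the judgments above it, which is why only the upper triangle needs to be elicited from each respondent.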
4 Analysis and Discussion Once the pairwise comparisons were collected via electronic questionnaire from each of the 10 respondents, the weights of the criteria and subcriteria were calculated using the geometric mean (GM), shown in Fig. 4:

Geometric Mean = (C1,DM1 × C1,DM2 × ··· × C1,DMn)^(1/n)    (1)

where C1,DMi is decision maker i's judgment for a given comparison and n is the number of decision makers.
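Equation (1) aggregates a single pairwise judgment across the decision makers. A minimal sketch, with hypothetical judgment values:

```python
import math

def geometric_mean(judgments):
    """Eq. (1): the n-th root of the product of the n decision
    makers' judgments for one pairwise comparison."""
    n = len(judgments)
    return math.prod(judgments) ** (1.0 / n)

# Two respondents judging the same pair as 2 and 8 aggregate to 4,
# since sqrt(2 * 8) = 4; reciprocal judgments such as 1/3 enter the
# product directly.
aggregated = geometric_mean([2, 8])  # -> 4.0
```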
Fig. 4 Criteria geometric means
Next, the Consistency Ratio of the preferences was checked against the threshold of 0.1 [39]. An example of this calculation for the criteria is shown in Table 1. A summary of the Consistency Ratios for all subcriteria is shown in Table 2. All values are below 0.1 and are therefore consistent and acceptable. The Consistency Index (CI) is calculated as:

CI = (λmax − n) / (n − 1)    (2)

where CI is the Consistency Index, λmax is the maximum eigenvalue of the judgement matrix, and n is the size of the comparison matrix. The Consistency Ratio (CR) is calculated as:

CR = Consistency Index / Random Index    (3)

Table 1 Criteria consistency check

| λmax | CI | RI | CR |
|---|---|---|---|
| 5.104 | 0.026 | 1.120 | 0.023 |

Table 2 Summary of subcriteria consistency ratios

| Subcriteria | Consistency ratio |
|---|---|
| Economic | 0.022 |
| Environmental | 0.012 |
| Social | 0.009 |
| Technical | 0.022 |
| Political | 0.000 |
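Equations (2) and (3) can be checked directly against the Table 1 values (n = 5 criteria, RI = 1.120 for a 5 × 5 matrix):

```python
def consistency(lambda_max, n, random_index):
    """Eqs. (2) and (3): CI = (lambda_max - n) / (n - 1), CR = CI / RI."""
    ci = (lambda_max - n) / (n - 1)
    cr = ci / random_index
    return ci, cr

# Values from Table 1 for the criteria comparison matrix
ci, cr = consistency(lambda_max=5.104, n=5, random_index=1.120)
# ci ~ 0.026 and cr ~ 0.023, below the 0.1 acceptance threshold
```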
After calculating the geometric means of all criteria and subcriteria, as well as the global weights of each criterion, the ranks of the criteria and subcriteria were obtained (see Table 3).

Table 3 Criteria and subcriteria weights

| Criteria (weight) | Subcriteria | Subcriteria weight | Subcriteria global weight |
|---|---|---|---|
| Economic (0.186) | Investment cost | 0.204 | 0.038 |
| | Operation and maintenance cost | 0.384 | 0.072 |
| | Payback period | 0.226 | 0.042 |
| | Generation cost | 0.186 | 0.035 |
| Environmental (0.336) | CO2 emission | 0.162 | 0.054 |
| | Land use | 0.169 | 0.057 |
| | NOx emission | 0.144 | 0.048 |
| | Local environmental effects | 0.526 | 0.177 |
| Social (0.221) | Job creation | 0.304 | 0.067 |
| | Social acceptance | 0.212 | 0.047 |
| | Social benefits | 0.484 | 0.107 |
| Technical (0.154) | Efficiency | 0.145 | 0.022 |
| | Reliability | 0.277 | 0.043 |
| | Maturity | 0.257 | 0.040 |
| | Safety | 0.321 | 0.049 |
| Political (0.102) | Political acceptance | 0.581 | 0.060 |
| | Compatibility with national energy policy | 0.419 | 0.043 |

Finally, eigenvectors of the pairwise comparisons of the alternatives with respect to all subcriteria were constructed in a matrix and multiplied by the weights of the subcriteria to find the local ratings with respect to each criterion. The local ratings were then ranked, with solar identified as having the highest investment potential, followed by hydro, biogas, wind, and biomass, respectively. The weights and rankings for the alternatives are summarized in Table 4.

Table 4 Alternative weights and ranking

| Alternative | Weights | Ranking |
|---|---|---|
| Solar | 0.361 | 1 |
| Hydro | 0.185 | 2 |
| Biogas | 0.179 | 3 |
| Wind | 0.147 | 4 |
| Biomass | 0.129 | 5 |

Based on the criteria weights and subcriteria global weights, the most important criterion is Environmental (0.336), followed by Social (0.221). The most important subcriteria overall are: Local Environmental Effects (0.177), Social Benefits (0.107), and Operation and Maintenance Cost (0.072).
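Each subcriterion's global weight in Table 3 is the criterion weight multiplied by the local subcriterion weight. A spot check on a few rows (the printed weights are rounded, so some rows can differ by ±0.001):

```python
criteria_weights = {"Environmental": 0.336, "Social": 0.221}

# Local (within-criterion) weights taken from Table 3
local_weights = {
    ("Environmental", "Local environmental effects"): 0.526,
    ("Social", "Social benefits"): 0.484,
    ("Social", "Job creation"): 0.304,
}

# Global weight = criterion weight * local subcriterion weight
global_weights = {
    sub: round(criteria_weights[crit] * w, 3)
    for (crit, sub), w in local_weights.items()
}
# -> Local environmental effects 0.177, Social benefits 0.107, Job creation 0.067
```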
5 Conclusion and Limitations The findings of the research concluded that the best source for renewable energy investment in Sumba, Indonesia is solar energy, followed by hydropower, biogas, wind, and finally biomass. This was based on the judgment of 5 criteria (Economic, Environmental, Social, Technological, Political) and 17 subcriteria by 10 expert respondents from governmental, non-governmental, residential, and private sectors. The most important criterion was identified as Environmental, followed in order by Social, Economic, Technological, and Political. The top five most important subcriteria overall were identified as: Local Environmental Effects, Social Benefits, Operation and Maintenance Cost, Job Creation, and Political Acceptance. This suggests that any investment considered must prioritize its impact on the environment, and must also seek to benefit the local community, be affordable, create employment, and have political buy-in. This is an important finding because even if solar power is chosen as an investment opportunity, without prioritizing the important criteria and subcriteria, it is not likely to be successful. This study only focused on the island of Sumba in the NTT province of Indonesia. Due to the complexity of the decision, which depends not only on the local landscape’s potential to generate various renewable energies, but also on ever-changing social and political factors, similar studies are suggested to identify renewable energy investment opportunities in other regions within the country or elsewhere in the world. Nevertheless, this study supports the use of AHP as a decision-making tool in the selection of renewable energy sources and promotes its use in other areas.
References
1. Nugroho H, Rustandi D, Widyastuti N (2021) What position should Indonesia have in placing its renewable energy development and energy transition plan? Bappenas Working Papers 4(2):239–254
2. Wen C, Lovett J, Rianawati E, Arsanti T, Suryani S, Pandarangga A, Sagala S (2022) Household willingness to pay for improving electricity services in Sumba Island, Indonesia: a choice experiment under a multi-tier framework. Energy Research & Social Science 88
3. Rigo P, Rediske G, Rosa C, Gastaldo N, Michels L, Neuenfeldt A, Siluk J (2020) Renewable energy problems: exploring the methods to support the decision-making process. Sustainability 12(23)
4. Nigim K, Munier N, Green J (2004) Pre-feasibility MCDM tools to aid communities in prioritizing local viable renewable energy sources. Renewable Energy 29(11):1775–1791
5. Buonocore J, Hughes E, Michanowicz D, Jinhyok H, Allen J, Williams A (2019) Climate and health benefits of increasing renewable energy deployment in the United States. Environmental Research Letters 14(11)
6. Maqbool R, Deng X, Rashid Y (2020) Stakeholders’ satisfaction as a key determinant of critical success factors in renewable energy projects. Energy, Sustainability, and Society 10
7. Ata N (2015) The impact of government policies in the renewable energy investment: developing a conceptual framework and qualitative analysis. Global Advanced Research Journal of Management and Business Studies 4(2):67–81
8. Shatnawi N, Abu-Qdais H, Abu-Qdais F (2021) Selecting renewable energy options: an application of multi-criteria decision making for Jordan. Sustainability: Science, Practice and Policy 17(1):209–219
9. Ulewicz R, Siwiec D, Pacana A, Tutak M, Brodny J (2021) Multi-criteria method for the selection of renewable energy sources in the polish industrial sector. Energies 14(9)
10. Bhowmilk C, Bhowmilk S, Ray A (2019) Optimal green energy source selection: an eclectic decision. Energy & Environment 31(5):842–859
11. Oey E, Noviyanti, Sanny L (2018) Evaluating international market selection with multi-criteria decision making tools—a case study of a metal company in Indonesia. Int J Business Excellence 16(3):341–361
12. El Hadidi O, El-Dash K, Besiouny M, Meshref A (2022) Evaluation of building life cycle cost (LCC) criteria in Egypt using the analytic hierarchy process (AHP). International Journal of Analytic Hierarchy Process 14(2)
13. Liu J, Liu P, Liu S, Zhoud X, Zhang T (2015) A study of decision process in MCDM problems with large number of criteria. Intl Trans in Op Res 22:237–264
14. Hivos (2010) Sumba: an iconic island to demonstrate the potential of renewable energy. Hivos
15. Abdullah L, Najib L (2014) Sustainable energy planning decision using the intuitionistic fuzzy analytic hierarchy process: choosing energy technology in Malaysia. Int J Sustain Energ 35(4):360–377
16. Algarin C, Llanos A, Castro A (2017) An analytic hierarchy process based approach for evaluating renewable energy sources. International Journal of Energy Economics and Policy 7(4):38–47
17. Guerrero-Liquet G, Sánchez-Lozano J, García-Cascales M, Lamata M, Verdegay J (2016) Decision-making for risk management in sustainable renewable energy facilities: a case study in the Dominican Republic. Sustainability 8(5)
18. Tegou L, Polatidis H, Haralambopoulos D (2010) Environmental management framework for wind farm siting: methodology and case study. J Environ Manage 91:2134–2147
19. Tsoutsos T, Drandaki M, Frantzeskaki N, Iosifidis E, Kiosses I (2009) Sustainable energy planning by using multi-criteria analysis application in the island of Crete. Energy Policy 37(5):1587–1600
20. Mourmouris J, Potolias C, Fantidis J (2012) Evaluation of renewable energy sources exploitation at remote regions, using computing model and multi-criteria analysis: a case-study in Samothrace, Greece. International Journal of Renewable Energy Research 2(2):307–316
21. Wimmler C, Hejazi G, de Oliveira Fernandes E, Moreira C, Connors S (2015) Multi-criteria decision support methods for renewable energy systems on islands. Journal of Clean Energy Technologies 3(3):185–195
22. Mirjat N, Uqaili M, Harijan K, Mustafa M, Rahman M, Khan M (2018) Multi-criteria analysis of electricity generation scenarios for sustainable energy planning in Pakistan. Energies 11(4)
23. Kaya I, Colak M, Fulya T (2019) A comprehensive review of fuzzy multi criteria decision making methodologies for energy policy making. Energy Strategy Review 24:207–228
24. Katal F, Fazelpour F (2018) Multi-criteria evaluation and priority analysis of different types of existing power plants in Iran: an optimized energy planning system. Renewable Energy 120:163–177
25. Nsafon B, Butu H, Owolabi A, Roh J, Suh D, Huh J (2020) Integrating multi-criteria analysis with PDCA cycle for sustainable energy planning in Africa: application to hybrid mini-grid system in Cameroon. Sustainable Energy Technologies and Assessments 37
26. Zhang L, Zhou D, Zhou P, Chen Q (2014) Modelling policy decision of sustainable energy strategies for Nanjing City: a fuzzy integral approach. Renewable Energy 62:197–203
27. Shimray B (2017) A survey of multi-criteria decision making technique used in renewable energy planning. International Journal of Computer 25(1):124–140
28. Neves D, Baptista P, Simões M, Silva C (2017) Designing a municipal sustainable energy strategy using multi-criteria decision analysis. J Clean Prod 176(1):251–260
29. Wang C, Nguyen V, Thai H, Duong D (2018) Multi-criteria decision making (MCDM) approaches for solar power plant location selection in Vietnam. Energies 11(6)
30. Xu L, Shah S, Hashim Z, Solangi Y (2019) Evaluating renewable energy sources for implementing the hydrogen economy in Pakistan: a two-stage fuzzy MCDM approach. Environ Sci Pollut Res 26:33202–33215
31. Daniel J, Vishal N, Albert B, Selvarsan I (2010) Evaluation of the significant renewable energy sources in India using analytical hierarchy process. In: Ehrgot M, Naujoks B, Stewart T, Wallenius J (eds) Multiple criteria decision making for sustainable energy and transportation systems. Lecture notes in economics and mathematical systems, vol 634. Springer, Berlin, pp 13–26
32. Chanchawee R, Usapein P (2018) Ranking of renewable energy for the national electricity plan in Thailand using an analytical hierarchy process (AHP). International Journal of Renewable Energy Research 8(3):1553–1562
33. Ahmad S, Tahar R (2014) Selection of renewable energy sources for sustainable development of electricity generation system using analytic hierarchy process: a case of Malaysia. Renewable Energy 63:458–466
34. Wang Y, Xu L, Solangi Y (2020) Strategic renewable energy resources selection for Pakistan: based on SWOT-Fuzzy AHP approach. Sustainable Cities and Society 52
35. Das A, Uddin S (2016) Renewable energy source selection using analytical hierarchy process and quality function deployment: a case study. In: 2016 second international conference on science technology engineering and management (ICONSTEM), pp 298–302
36. Mitchell R, Agle B, Wood D (1997) Toward a theory of stakeholder identification and salience: defining the principle of who and what really counts. Acad Manag Rev 22(4):853–886
37. Flak L, Nordheim S, Munkvold B (2008) Analyzing stakeholder diversity in G2G efforts: combining descriptive stakeholder theory and dialectic process theory. E-Service Journal 6(2):3–23
38. Lootsma F (1999) The AHP, pairwise comparisons. Multi-criteria decision analysis via ratio and difference judgement. Applied Optimization 29
39. Saaty T (1996) The analytic hierarchy process: planning, priority setting, resource allocation, 2nd edn. RWS Publications, Pittsburgh
Light Control and Monitoring System Based on Internet of Things Syahroni, Gede Putra Kusuma, and Galih Dea Pratama
Abstract Electrical energy is one of the most important human needs. Almost all human work requires electrical energy, and a lack of electricity may hinder human activities. Therefore, it is important to maintain the availability of electrical energy. This paper explores a way to reduce wasted electricity through the implementation of the Internet of Things (IoT). We introduce an IoT system that extends the reach of the internet to control household appliances properly and efficiently, enabling us to monitor the cost of usage every month. The proposed system is implemented at the Campus Academy Computer Management and Informatics (AMKI) Ketapang to control and monitor lights in the computer laboratory and on the campus terrace. The system utilizes a Raspberry Pi 3, LDR sensors, relays, and software such as PHP on Raspbian Linux. The evaluation results show that the proposed IoT system has reduced the monthly cost of electricity. Keywords Internet of Things · Microcontroller · Light Sensors · Electricity Usage
1 Introduction Population growth strongly influences the community's use of electrical energy. The use of electrical energy continues to increase, while the capacity to supply it is limited. Persuading the public about the negative impact of wasting electricity and inviting people to save electricity is not easy, because it requires community awareness. Without awareness of the impact of wasted electrical energy, people do not consider the fate of future generations. Therefore, public awareness of energy conservation plays an important role in the well-being of generations to come. Increasing use of electrical energy can be an indicator of increasing public welfare [1], but excessive use of it can also Syahroni · G. P. Kusuma (B) · G. D. Pratama Computer Science Department, BINUS Graduate Program—Master of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_65
749
750
Syahroni et al.
have a negative impact on ourselves or society in general. We should therefore save electricity, as doing so also yields cost savings, improved environmental values, state security, personal security, and comfort in life. The Internet of Things (IoT) is a concept for extending the benefits of always-connected internet connectivity, including the ability to share data and control devices remotely [2], as well as hardware-connected sensors such as solar panels as an alternative source of electrical energy [3]. IoT involves a global infrastructure for the information society, enabling advanced services by linking physical and virtual objects based on current developments in information exchange and communications technology [4]. The habit of leaving the house with appliances still powered can cause hazards: forgetting to turn off the air conditioner, electric stove, dispenser, personal computer, or any other electrical device can cause a short circuit and fire, so a control system that can operate these home appliances remotely is required. Increases in electricity tariffs made by the government add to the public's burden, and in day-to-day life we cannot escape electrical energy; an IoT system has likewise been designed to monitor power savings in a grid-tie solar system [5].
2 Proposed Light Control System 2.1 Network Topology The IoT system uses a Raspberry Pi to control the other hardware and monitor the lamps used in the prototype. Other hardware includes a Wi-Fi router, relay, Light Dependent Resistor (LDR) sensors, an Arduino Nano, a switch, a 12 V power supply, and bulbs. In the proposed schematic, the Raspberry Pi acts as the server for all activities surrounding the connected hardware, while monitoring is done through a web server that can be accessed via smartphone or personal computer to see the historical data gathered from the sensors. The LDR sensor, in turn, reads light intensity as resistance: lower intensity produces higher resistance and vice versa. The data collected from the sensor is forwarded to the Raspberry Pi, which tasks the relay with turning the bulb on; the sensor also sends the record of light-on duration through the Raspberry Pi to the database. The whole topology of the IoT system can be seen in Fig. 1. To realize the design in Fig. 1, the prototype was built in accordance with the topology. There are two prototypes. The main prototype implements the lamp control system: a Raspberry Pi 3 is used as its core, and all the lamps can be controlled through the app installed on it, which is used specifically for lamp control and power usage monitoring. The real implementation of the main prototype can be seen in Fig. 2. The second prototype in the light control system counts the people inside certain rooms. It uses two LDR sensors and an Arduino Nano as the main parts. The first LDR sensor is used to
Fig. 1 Design of network topology
Fig. 2 Lamp control system prototype
Fig. 3 Human counter prototype
count people entering the room, and the second is used to count people leaving it. The light turns on when at least one person has entered the room and turns off when no one is inside. The real implementation of this prototype is shown in Fig. 3.
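The entry/exit counting logic described above can be sketched as follows; this is a simplified model of the behavior, not the deployed firmware, and sensor debouncing and the Arduino I/O are omitted:

```python
def update_occupancy(count, entry_beam_broken, exit_beam_broken):
    """Update the room occupancy from the two LDR sensors: the first
    sensor counts people entering, the second counts people leaving.
    The count is clamped so it never goes below zero."""
    if entry_beam_broken:
        count += 1
    if exit_beam_broken:
        count = max(0, count - 1)
    return count

def lamp_on(count):
    """The laboratory lamp stays on while at least one person is inside."""
    return count > 0
```

Keeping the count clamped at zero prevents a missed entry event from leaving the lamp permanently off.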
2.2 Control Hardware for Terrace Lamp The hardware in the proposed system is based on the Raspberry Pi 3 module, which determines the control actions based on the sensors' input data. Certain components are directly connected to the module, such as the Arduino Nano, relay, sensor, LCD, LAN (Local Area Network), and Wi-Fi; the module itself is powered by a 5 V adapter. The relay IC module receives input from a GPIO pin and drives its output so that it can move the relay contacts and activate the relay, acting as the lamp switch without a separate driver. The LDR conducts an electric current when it receives a certain amount of light intensity and inhibits the current in darker conditions. The porch lamp turns on after being connected to a digital input pin of the Raspberry Pi 3. A driverless relay serves as the lamp switch and receives a 12 V power supply. The hardware design itself can be seen in Fig. 4.
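The decision the LDR drives can be sketched as a pure function; the ADC threshold value here is hypothetical and the sensor polarity depends on how the voltage divider is wired:

```python
DARK_THRESHOLD = 500  # hypothetical ADC value; calibrate on site

def terrace_relay_on(ldr_reading):
    """Decide the terrace-lamp relay state from one LDR reading.
    Lower light intensity gives higher LDR resistance; assuming the
    divider is wired so darker conditions raise the ADC reading, a
    reading above the threshold means it is dark enough to switch
    the lamp on."""
    return ldr_reading > DARK_THRESHOLD
```

In the deployed system this decision would run on the Raspberry Pi and toggle the relay's GPIO pin; keeping it as a pure function makes it easy to test without hardware.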
2.3 Control Hardware for Lights in Computer Laboratory The hardware that controls the lights in the computer laboratory relies on an Arduino Nano microcontroller. The microcontroller tracks activity in the environment, such as counting personnel entering the laboratory. It also decides whether the light in the laboratory should be turned on or off based on the data
Fig. 4 Hardware design for terrace lights
obtained from 2 LDR sensors. Data obtained from the sensors will then be forwarded to Raspberry Pi 3 to be recorded in the database. The design of the control hardware can be seen in Fig. 5.
Fig. 5 Hardware design for computer laboratory lights
3 Results and Analysis 3.1 Conventional Experiments The experiments in this research are divided into two parts: one conducted in the conventional way and the other using the sensors, for comparison. The conventional experiment was done on the campus terrace and in the computer laboratory for one month. The results show that the campus terrace lamps have a light-on duration of 12.5–15 h per day, with a total duration of 420.13 h. The full result can be seen in Table 1. The other conventional experiment involves testing in the computer laboratory. Over the same timeline as the campus terrace experiment, the lamps turn on for around 2–7 h per day, with a total light-on duration of 153.43 h for both lamps. The full result can be seen in Table 2. Alongside the light-on duration of the lamps, this research also considers the electricity savings. To quantify them, a formula based on PLN standards defines the electricity usage in the campus environment:

KWH = (watt × duration) / 1000    (1)
The KWH represents the total energy used by the lamps when turned on: the wattage is multiplied by the duration and divided by one thousand. Equation (1) is then used to determine the total cost of electricity, also based on PLN standards:

cost = KWH × TDL    (2)
From (1) and (2) above, the cost of electricity usage from Tables 1 and 2 can be calculated further. The full result of the electricity cost calculations can be seen in Table 3, which covers the whole cost of electricity usage, both on the campus terrace and in the computer laboratory. From the results, it can be inferred that the campus terrace produces a higher cost than the computer laboratory at Rp 152,017, almost five times the computer laboratory cost. The total cost is also shown in Table 3: Rp 184,727 in one month for both places.
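The durations in Tables 1 and 2 appear to be recorded as hours.minutes (so 420.13 reads as 420 h 13 min); under that assumption, Eqs. (1) and (2) reproduce the Table 3 figures almost exactly:

```python
def hmm_to_hours(duration_hmm):
    """Convert an hours.minutes figure (e.g. 420.13 = 420 h 13 min)
    to decimal hours."""
    hours = int(duration_hmm)
    minutes = round((duration_hmm - hours) * 100)
    return hours + minutes / 60

def electricity_cost(watt, duration_hmm, tdl):
    """Eqs. (1) and (2): KWH = watt * hours / 1000, cost = KWH * TDL."""
    kwh = watt * hmm_to_hours(duration_hmm) / 1000
    return kwh, kwh * tdl

# Terrace row of Table 3: 24 lamps totalling 476 W, on for 420 h 13 min
kwh, cost = electricity_cost(476, 420.13, 760)
# kwh ~ 200.023 and cost ~ Rp 152,018 (Table 3 lists 200.023 and Rp 152,017)
```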
3.2 Experiments with the Proposed System After the conventional experiment was done, the proposed system was put to the test on the campus terrace and computer laboratory lights. The experiments were conducted for three months, using the website to monitor the light usage history in both places. The interface to monitor light usage history can be seen in Fig. 6.
Table 1 Result of conventional experiment on campus terrace

| Days | Time on | Time off | Duration (hour) |
|---|---|---|---|
| Day 1 | 18.00 | 8.00 | 14.00 |
| Day 2 | 17.30 | 8.20 | 15.30 |
| Day 3 | 18.10 | 8.10 | 14.00 |
| Day 4 | 18.00 | 8.00 | 14.00 |
| Day 5 | 17.30 | 7.50 | 14.20 |
| Day 6 | 18.10 | 7.15 | 13.05 |
| Day 7 | 18.05 | 8.15 | 14.10 |
| Day 8 | 18.00 | 8.20 | 14.20 |
| Day 9 | 18.45 | 8.13 | 14.08 |
| Day 10 | 18.10 | 7.16 | 13.06 |
| Day 11 | 18.20 | 8.00 | 14.20 |
| Day 12 | 17.45 | 8.15 | 15.10 |
| Day 13 | 17.35 | 8.14 | 15.19 |
| Day 14 | 17.45 | 8.15 | 15.10 |
| Day 15 | 17.55 | 7.45 | 14.30 |
| Day 16 | 17.50 | 8.10 | 14.20 |
| Day 17 | 19.20 | 8.20 | 13.00 |
| Day 18 | 19.00 | 8.00 | 13.00 |
| Day 19 | 18.12 | 8.17 | 14.05 |
| Day 20 | 18.00 | 8.30 | 14.30 |
| Day 21 | 18.45 | 8.20 | 13.35 |
| Day 22 | 18.20 | 8.25 | 14.05 |
| Day 23 | 18.10 | 8.10 | 14.00 |
| Day 24 | 18.00 | 8.00 | 14.00 |
| Day 25 | 18.20 | 8.30 | 14.10 |
| Day 26 | 19.10 | 8.00 | 12.50 |
| Day 27 | 19.15 | 8.13 | 12.58 |
| Day 28 | 19.20 | 8.15 | 12.55 |
| Day 29 | 18.20 | 8.10 | 13.50 |
| Day 30 | 18.25 | 8.12 | 13.47 |
| Total | | | 420.13 |
The website itself displays the history of lamp activity, which is stored in the database as recorded from the previously deployed sensors. It displays the light-on time, light-off time, light-on duration, and the cost for each date and lamp. The data is presented either over the full period since first deployment or monthly, depending on the preference of the user.
Table 2 Result of conventional experiment on computer laboratory

| Date | LP 1 on | LP 1 off | Time 1 | LP 2 on | LP 2 off | Time 2 |
|---|---|---|---|---|---|---|
| Day 1 | 15.55 | 17.35 | 1.40 | 18.00 | 21.00 | 3.00 |
| Day 2 | | | | 16.00 | 21.30 | 5.30 |
| Day 3 | | | | 16.10 | 21.50 | 5.40 |
| Day 4 | 16.05 | 17.55 | 1.50 | 19.00 | 21.10 | 2.10 |
| Day 5 | | | | 15.10 | 21.30 | 6.20 |
| Day 6 | 16.10 | 17.55 | 1.45 | 19.00 | 21.40 | 2.40 |
| Day 7 | 16.05 | 18.00 | 1.55 | 19.10 | 21.45 | 2.35 |
| Day 8 | 16.10 | 17.55 | 1.45 | 19.05 | 21.50 | 2.45 |
| Day 9 | 16.00 | 17.55 | 1.55 | 19.00 | 22.00 | 3.00 |
| Day 10 | 16.00 | 18.00 | 2.00 | 19.00 | 21.50 | 2.50 |
| Day 11 | 16.10 | 17.45 | 1.35 | 18.10 | 21.20 | 3.10 |
| Day 12 | 15.50 | 17.30 | 1.40 | 18.15 | 21.30 | 3.15 |
| Day 13 | 16.10 | 17.35 | 1.25 | 18.10 | 22.00 | 3.50 |
| Day 14 | 16.05 | 17.40 | 1.35 | 18.45 | 22.10 | 3.25 |
| Day 15 | 15.55 | 17.35 | 1.40 | 18.15 | 22.10 | 3.55 |
| Day 16 | 16.05 | 17.43 | 1.38 | 18.10 | 21.40 | 3.30 |
| Day 17 | 16.10 | 17.45 | 1.35 | 18.10 | 21.50 | 3.40 |
| Day 18 | 16.00 | 17.36 | 1.36 | 18.00 | 21.23 | 3.23 |
| Day 19 | | | | 15.50 | 22.10 | 7.00 |
| Day 20 | | | | 15.55 | 21.10 | 3.10 |
| Day 21 | | | | 15.55 | 22.20 | 6.25 |
| Day 22 | 15.55 | 17.45 | 1.50 | 18.00 | 21.30 | 5.35 |
| Day 23 | 15.10 | 17.35 | 2.25 | 19.00 | 23.00 | 4.00 |
| Day 24 | 16.00 | 18.40 | 2.40 | 19.00 | 21.10 | 2.10 |
| Day 25 | 15.00 | 17.35 | 2.35 | 19.00 | 21.10 | 2.10 |
| Day 26 | | | | 16.10 | 22.40 | 6.30 |
| Day 27 | 15.50 | 17.45 | 1.55 | 19.00 | 22.00 | 3.00 |
| Day 28 | 15.50 | 18.36 | 1.46 | 19.00 | 22.00 | 3.00 |
| Day 29 | 15.45 | | | 19.00 | 21.10 | 2.10 |
| Day 30 | 15.50 | 17.45 | 1.55 | 19.00 | 22.15 | 3.15 |
| Durations | | | 40.40 | | | 113.03 |
| Total | | | | | | 153.43 |
Table 3 Cost calculation on terrace and computer laboratory

| Place | Light number | Watt | Duration | Total power | KWH | TDL (Rp/kWh) | Cost (Rp) |
|---|---|---|---|---|---|---|---|
| Terrace | 24 | 476 | 420.13 | 200,023 | 200.023 | 760 | 152,017 |
| Lab | 8 | 280 | 153.43 | 43,040 | 43.040 | 760 | 32,710 |
| Amount | | | | | | | 184,727 |
Fig. 6 Interface of light control website
3.3 Comparison of Conventional and Proposed System Based on the results from the previous subsections, the data gathered can be compared to see the effectiveness of the proposed system. The comparison of electricity cost between the conventional and proposed system can be seen in Table 4. To further see the differences before and after the usage of the light control system on the campus terrace and in the computer laboratory, this research also shows the real-life electricity cost during the previous three months using the conventional method and the following three months using the light control system. The full result is presented in Table 5.

Table 4 Cost comparison of conventional and proposed system

| Place | Conventional, 1st month | Light control system, 1st month | 2nd month | 3rd month |
|---|---|---|---|---|
| Terrace | Rp 152,017 | Rp 79,399 | Rp 84,709 | Rp 82,290 |
| Laboratory | Rp 32,710 | Rp 26,972 | Rp 24,178 | Rp 20,852 |
| Total cost | Rp 184,727 | Rp 106,371 | Rp 108,887 | Rp 103,142 |
Table 5 Real-life cost comparison of electricity usage

| Treatment | 1st month | 2nd month | 3rd month | Total cost |
|---|---|---|---|---|
| Conventional | Rp 929,721 | Rp 919,871 | Rp 915,767 | Rp 2,765,359 |
| Light control system | Rp 754,070 | Rp 680,198 | Rp 670,129 | Rp 2,104,397 |
| Excess payment | Rp 175,651 | Rp 239,673 | Rp 245,638 | Rp 660,962 |
Table 5 shows that conventional electricity usage costs Rp 2,765,359 over three months for all lamps on the campus. Meanwhile, the proposed system costs Rp 2,104,397 for the same duration of usage, which is less than under the previous treatment. From the result, it can be inferred that there is a saving of Rp 660,962 over three months, or approximately Rp 220,321 of electricity cost each month.
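The Table 5 savings follow directly from the monthly differences:

```python
conventional = [929_721, 919_871, 915_767]   # Rp per month, from Table 5
light_control = [754_070, 680_198, 670_129]

# Month-by-month excess paid under the conventional treatment
monthly_excess = [c - s for c, s in zip(conventional, light_control)]
total_saving = sum(monthly_excess)           # Rp 660,962 over three months
average_saving = total_saving / 3            # roughly Rp 220,321 per month
```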
4 Conclusion

Based on the implementation and experiments in the previous sections, the IoT-based light control system on the AMKI Ketapang campus has been implemented and tested as a prototype using hardware such as a Raspberry Pi 3, an Arduino Nano, LDR sensors, and relay modules to control the lights. The experiments show that the deployed prototype ran as expected in controlling the lamps and electricity around the campus, specifically at the campus terrace and computer laboratory. The implementation also shows that the system can reduce the campus's electricity cost by approximately Rp 220,000 per month on average.
References

1. Suryaningsih S, Hidayat S, Abid F (2016) Rancang Bangun Alat Pemantau Penggunaan Energi Listrik Rumah Tangga Berbasis Internet. In: Prosiding Seminar Nasional Fisika (E-Journal) SNF2016 5:87–90. Prodi Pendidikan Fisika dan Fisika, Fakultas MIPA, Universitas Negeri Jakarta, Jakarta
2. Rohman F, Iqbal M (2016) Implementasi IoT Dalam Rancang Bangun Sistem Monitoring Panel Surya Berbasis Arduino. In: Prosiding SNATIF Ke-3. Fakultas Teknik—Universitas Muria Kudus, Central Java, pp 96–101
3. Budioko T (2016) Sistem Monitoring Suhu Jarak Jauh Berbasis Internet of Things Menggunakan Protokol MQTT. In: Seminar Riset Teknologi Informasi (SRITI) tahun 2016. STMIK AKAKOM, Yogyakarta, pp 353–358
4. Andrianto A, Susanto A (2015) Aplikasi Pengontrol Jarak Jauh Pada Lampu Rumah Berbasis Android. In: Prosiding SNATIF Ke-2. Fakultas Teknik—Universitas Muria Kudus, Central Java, pp 413–420
5. Rhapsody MR, Zuhri AA, Asa UD (2017) Penggunaan IoT untuk Telemetri Efisiensi Daya pada Hybrid Power System. In: Seminar MASTER 2017 PPNS, vol 1509. PPNS, Surabaya, pp 67–72
Transfer Learning Approach Based on MobileNet Architecture for Human Smile Detection Gusti Pangestu, Daniel Anando Wangean, Sinjiru Setyawan, Choirul Huda, Fairuz Iqbal Maulana, Albert Verasius Dian Sano, and Slamet Kuswantoro
Abstract The face is an important part of the human body. It can express many of the things a person feels: by looking at facial expressions, humans can determine whether someone is angry, happy, or sad, which is a basic part of communication. However, many people are blind or visually impaired and therefore cannot recognize someone's facial expressions, especially in conversation. For this reason, this research detects a basic human facial expression, the smile, which indicates happiness. A Deep Learning approach is used to determine human facial expressions, and several architectural scenarios, such as ResNet and MobileNet, are evaluated. MobileNet achieves a high accuracy of around 92%, indicating that it can be used to detect facial expressions, especially smiles. Keywords Face · Deep Learning · MobileNet · Expression G. Pangestu (B) · D. A. Wangean · S. Setyawan · C. Huda · F. I. Maulana · A. V. D. Sano School of Computer Science, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] D. A. Wangean e-mail: [email protected] S. Setyawan e-mail: [email protected] C. Huda e-mail: [email protected] F. I. Maulana e-mail: [email protected] A. V. D. Sano e-mail: [email protected] S. Kuswantoro Building Management Department, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_66
G. Pangestu et al.
1 Introduction

Vision impairment, the inability to see some or any objects, is a problem that continues to grow. Globally, more than 1 billion people live with vision impairment, and most of them, around 55%, are women. Vision impairment is caused by several diseases such as cataract, glaucoma, corneal opacities, diabetic retinopathy, and many more. People with vision impairment have a different experience from sighted people: in particular, they struggle to see and understand facial expressions, so it is difficult for them to understand the expression of their interlocutor, even though facial expression plays an important role in communication [1]. Given this importance of facial expression in communication, a solution is needed that recognizes human facial expressions to help people with vision impairment better understand their interlocutors. Several studies have addressed facial expression detection. Lee et al. [2] proposed automatic smile detection using an earable device and reported an average accuracy of around 70%; however, their approach still requires an earable device to be worn in the ear, which is impractical. Given this situation, this research focuses on detecting and recognizing a smile from an image, chosen for its simplicity and convenience. This research is the initial step toward developing advanced wearable eyeglasses to assist people with vision impairment, called "icsy (i can see you)". Several other studies have focused on detecting and recognizing smiles from images, such as the work of An et al. [3].
That work used a support vector machine (SVM) and reached an accuracy of around 76%, but its performance, especially its accuracy, needs improvement. Another study on smile detection was carried out by Tang et al. [4] using a convolutional neural network (CNN), producing an accuracy of around 86%. How they adapt the CNN architecture to obtain that accuracy is interesting: the preprocessing of the data is very influential. Their method divides the face image into two areas of similar size. The first area is the upper part of the face image, including the eyes and part of the nose; the second contains the mouth and the rest of the nose. Each area is processed by its own CNN architecture, and the two results are then combined for further processing. However, this is not a simple pipeline, whereas the goal of the present research is to find an approach to detecting and recognizing facial expressions, especially smiles, that is as simple as possible in order to save processing time and memory, given that the computation will run on a smartphone.
Considering the problem explained above, this research focuses on detecting a smile using a convolutional neural network (CNN) that is as simple and modest as possible. The paper is divided into several sections: method, result and analysis, and conclusion.
2 Method

This section explains the methodology of our research. The work is divided into several stages that follow the Convolutional Neural Network (CNN) process in general. The detailed process is shown in Fig. 1.
2.1 Face Image and Dataset

In this research, the image is the main object to be processed. Much research has focused on detecting the face, such as the work of Paggio et al. [5], Ganakwar [6], Krishna et al. [7], Chen [8], and many more. Face detection is important because it lets the system focus its processing on the face area only. In this research, however, face images were obtained from an open dataset.
2.2 Convolutional and Pooling

The Convolutional Neural Network (CNN) has become a popular topic in recent years. A CNN has its own sequence of steps, called an architecture, consisting of several layers arranged to reach the goal [9].

(1) Convolutional Process

The convolutional process plays a vital role in a CNN. This layer is centered on a learnable kernel. Generally, the kernel used in this process has a small size, such as
Fig. 1 The methodology of this research
3 × 3, 5 × 5, or 7 × 7. The convolutional process itself is defined in (1):

$$y[i, j] = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} h[m, n]\, x[i - m,\, j - n] \qquad (1)$$
Here x is the input image matrix to be convolved, h is the kernel matrix, y is the matrix produced by the convolution, m and n index the kernel h, and i and j index the output matrix. After the matrix y is obtained, the next step is the pooling mechanism explained below.

(2) Pooling

The pooling layer is commonly placed after the convolutional layer. Its purpose is to reduce the size of the image or matrix, and it is a key step in a CNN system. Several pooling methods exist, such as average pooling, max pooling, mix pooling, stochastic pooling, and more [10]. The max-pooling operation is given in (2):

$$p_{j,m} = \max_{1 \le r \le R}\left(h_{j,\,(m-1)N + r}\right) \qquad (2)$$
Here h is the input matrix, p is the pooled result, N is the pooling shift, which allows overlap between pooling regions when N < R, where R is the size of the group of activations being pooled. Pooling and convolutional stages can be combined in various orders; combinations of pooling and convolutional layers are stacked to obtain a robust architecture [11].
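As an illustration of Eqs. (1) and (2), the following NumPy sketch implements a "valid" 2-D convolution and non-overlapping max pooling on a toy input; the finite sum replaces the infinite limits in (1), and the input and kernel values are invented for the example:

```python
import numpy as np

def conv2d(x, h):
    """'Valid' 2-D convolution per Eq. (1): y[i,j] = sum_m sum_n h[m,n]*x[i-m,j-n]."""
    h = np.flip(h)  # flip the kernel so the sliding dot product is a true convolution
    kh, kw = h.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * h)
    return out

def max_pool(y, size=2):
    """Non-overlapping max pooling per Eq. (2), with shift N equal to region size R."""
    h_trim = (y.shape[0] // size) * size
    w_trim = (y.shape[1] // size) * size
    blocks = y[:h_trim, :w_trim].reshape(h_trim // size, size, w_trim // size, size)
    return blocks.max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
k = np.array([[1.0, 0.0], [0.0, -1.0]])        # toy 2x2 kernel
y = conv2d(x, k)    # 3x3 feature map (every entry is 5.0 for this input/kernel)
p = max_pool(y, 2)  # pooled down to a single 1x1 region here
print(y)
print(p)
```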
2.3 Fully Connected Layer

The final part of a CNN architecture is the fully connected layer, which plays an influential role in CNN systems [12]: it connects the convolutional and pooling layers to the neural network layers. In this research, we adopt a pre-trained CNN architecture, MobileNet [13]. A pre-trained architecture was chosen for its ability to detect objects, since it was built and trained with a huge amount of data. In the fully connected layer, the flatten process transforms the pooling or convolution matrix (depending on the structure and composition) into an array used as the neuron input. The flatten process is illustrated in Fig. 2.
Fig. 2 Flatten process illustration
2.4 Activation

Activation is one of the most important parts of a CNN. The aim of an activation function is to make the network (the fully connected layer) able to learn complex patterns in the data; in theory, activation is needed to avoid a purely linear classifier. Several activation functions exist, such as sigmoid, softmax, ReLU, tanh, and others [14]. This research focuses on classifying two labels, smile and not smile; therefore, two activation functions can be applied, sigmoid (3) and softmax (5), explained below.

$$S(x) = \frac{1}{1 + e^{-x}} \qquad (3)$$
Here S(x) is the sigmoid function, x is the input, and e is Euler's number, computed using (4):

$$e = \sum_{n=0}^{\infty} \frac{1}{n!} \qquad (4)$$
where e is Euler's number and n runs over the non-negative integers. Considering that this research focuses on a binary classification case, the softmax activation is also part of the experiment, shown in (5):

$$\sigma(\vec{z})_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \qquad (5)$$
Here σ(z⃗)_i is the softmax activation value for class i, z⃗ is the input vector, e^{z_i} is the standard exponential function applied to the i-th element of the input vector, K is the number of classes in the multi-class classifier, and the denominator sums the exponentials e^{z_j} over all K classes.
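The two activation functions in (3) and (5) can be written directly in Python; the max-shift added to softmax below is a standard numerical-stability trick not shown in (5):

```python
import math

def sigmoid(x):
    # Eq. (3): S(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + math.exp(-x))

def softmax(z):
    # Eq. (5): sigma(z)_i = e^{z_i} / sum_j e^{z_j}
    # Subtracting max(z) leaves the result unchanged but avoids overflow.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]

print(sigmoid(0.0))        # 0.5
print(softmax([2.0, 2.0])) # [0.5, 0.5] for the two-class (K = 2) case used here
```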
2.5 Dataset

This research uses the FER-2013 dataset, which contains 28,709 examples across seven categories: Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral facial expressions [15]. For this research the categories are divided into two groups. The first group contains Neutral, Angry, Sad, Fear, and Disgust; the second contains Happy and Surprise. In total, we obtain two classes: Smile (Happy and Surprise) and Not Smile (Neutral, Angry, Sad, Fear, and Disgust).
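The grouping described above amounts to a simple label mapping; a minimal sketch, with class names following FER-2013's label set:

```python
# Relabel the seven FER-2013 categories into the two classes used in this paper.
GROUPS = {
    "Happy": "Smile", "Surprise": "Smile",
    "Neutral": "Not Smile", "Angry": "Not Smile", "Sad": "Not Smile",
    "Fear": "Not Smile", "Disgust": "Not Smile",
}

labels = ["Happy", "Fear", "Surprise", "Neutral"]  # example label sequence
print([GROUPS[label] for label in labels])  # ['Smile', 'Not Smile', 'Smile', 'Not Smile']
```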
3 Result and Analysis

In this section, we run the training and validation processes on the total of 28,709 examples for the two classes: 80% of the data, around 22,967 examples, for training, and 20%, around 5,741 examples, for testing, following the 80/20 data-splitting convention [16]. A sample of the FER-2013 dataset is shown in Fig. 3. This section is divided into two parts: the training process and the analysis of the accuracy achieved.
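The 80/20 split can be sketched as a shuffled index split; the seed is illustrative, and the counts closely match the roughly 22,967 training and 5,741 testing examples reported:

```python
import random

total = 28_709            # number of FER-2013 examples used
indices = list(range(total))
random.seed(0)            # illustrative seed for reproducibility
random.shuffle(indices)

cut = int(total * 0.80)   # 80/20 split as in [16]
train_idx, test_idx = indices[:cut], indices[cut:]
print(len(train_idx), len(test_idx))  # 22967 5742
```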
3.1 Training Section

In this research, we trained on 80% of the total dataset using a customized MobileNet CNN architecture. We also applied data augmentation to deal with the dataset size and decrease the validation error [17], applying random flipping and random rotation to every image, as shown in Fig. 4. In addition, we converted the dataset to grayscale, considering that smile versus not smile is not influenced by skin color [18]. Training used the softmax activation and 50 epochs. The result of training with the customized MobileNet architecture is shown in Fig. 5: the customized architecture achieves an accuracy of 92.4%. Note that in this research we performed conventional learning rather than transfer learning.
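The flip-and-rotate augmentation can be illustrated with NumPy; this is a simplified stand-in that uses 90°-step rotations (the actual pipeline may use small random angles), and the 48 × 48 grayscale size matches FER-2013:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed

def augment(img):
    """Random horizontal flip plus a random 90-degree-step rotation.

    A simplified stand-in for the random flipping/rotation augmentation
    described in the text.
    """
    if rng.random() < 0.5:
        img = np.fliplr(img)          # random horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))  # rotate 0/90/180/270 degrees
    return img

gray = rng.random((48, 48))  # FER-2013 images are 48x48 grayscale
out = augment(gray)
print(out.shape)             # (48, 48)
```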
Fig. 3 Sample data of smile and not smile classes
3.2 Analysis and Discussion

Our customized MobileNet architecture achieves an accuracy of 92.4%, better than ResNet50 and a customized ResNet50, as shown in Table 1. ResNet50 has more parameters than MobileNet, which could be why it produces lower accuracy here: the task is to classify the expression of the face rather than the face itself. Hence, ResNet50 should perform better when applied to face classification rather than facial expression recognition. This research suggests that a standard architecture such as ResNet performs better only on the type of data it was trained on before; on a new dataset, the standard architecture is surpassed by a modified architecture obtained simply by changing the size of the Dense layer, the most important layer connecting to the fully connected layer.
Fig. 4 The augmentation result for random flipping and random rotation
Fig. 5 Training result using customized MobileNet architecture in 50 epochs

Table 1 Accuracy comparison

| Architecture        | Dataset  | Avg. Acc. (%) |
|---------------------|----------|---------------|
| ResNet50            | FER-2013 | 82.6          |
| Customized ResNet50 | FER-2013 | 87.3          |
| Custom MobileNet    | FER-2013 | 92.4          |
4 Conclusion

The MobileNet architecture produces better accuracy, around 92.4%, than the other architectures tested, ResNet50 and customized ResNet50. The data used in this research is the open FER-2013 dataset, though not all of it is used as-is: the original expression classes were merged into only two classes, smile and not smile. With these classes, the customized MobileNet, obtained by changing the size of the Dense layer, achieves an accuracy of about 92.4%.
References

1. Bruce V (1996) The role of the face in communication: implications for videophone design. Interact Comput 8:166–176. https://doi.org/10.1016/0953-5438(96)01026-0
2. Lee S, Min C, Montanari A, Mathur A, Chang Y, Song J, Kawsar F (2019) Automatic smile and frown recognition with kinetic earables. In: Proceedings of the 10th augmented human international conference 2019. ACM, New York, NY, USA, pp 1–4. https://doi.org/10.1145/3311823.3311869
3. An L, Yang S, Bhanu B (2015) Efficient smile detection by extreme learning machine. Neurocomputing 149:354–363. https://doi.org/10.1016/j.neucom.2014.04.072
4. Tang C, Cui Z, Zheng W, Qiu N, Ke X, Zong Y, Yan S (2018) Automatic smile detection of infants in mother-infant interaction via CNN-based feature learning. In: ASMMC-MMAC 2018—proceedings of the joint workshop of the 4th workshop on affective social multimedia computing and 1st multi-modal affective computing of large-scale multimedia data, co-located with MM 2018. Association for Computing Machinery, Inc., pp 35–40. https://doi.org/10.1145/3267935.3267951
5. Paggio P, Agirrezabal M, Jongejan B, Navarretta C (2020) Automatic detection and classification of head movements in face-to-face conversations. In: Proceedings of LREC2020 workshop "people in language, vision and the mind" (ONION2020), pp 15–21
6. Ganakwar DG (2019) A case study of various face detection methods. Int J Res Appl Sci Eng Technol 7:496–500. https://doi.org/10.22214/ijraset.2019.11080
7. Krishna M, Srinivasulu A (2012) Face detection system on AdaBoost algorithm using Haar classifiers. IJMER 2:3556–3560
8. Chen J (2019) A face detection method based on sliding window and support vector machine. J Comput (Taipei) 14:470–478. https://doi.org/10.17706/jcp.14.7.470-478
9. Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L (2021) Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 8. https://doi.org/10.1186/s40537-021-00444-8
10. Gholamalinezhad H, Khosravi H: Pooling methods in deep neural networks, a review
11. Khan A, Sohail A, Zahoora U, Qureshi AS (2020) A survey of the recent architectures of deep convolutional neural networks. Artif Intell Rev 53:5455–5516. https://doi.org/10.1007/s10462-020-09825-6
12. Basha SHS, Dubey SR, Pulabaigari V, Mukherjee S (2020) Impact of fully connected layers on performance of convolutional neural networks for image classification. Neurocomputing 378:112–119. https://doi.org/10.1016/j.neucom.2019.10.008
13. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) MobileNets: efficient convolutional neural networks for mobile vision applications
14. Hao W, Yizhou W, Yaqin L, Zhili S (2020) The role of activation function in CNN. In: Proceedings—2020 2nd international conference on information technology and computer application, ITCA 2020. Institute of Electrical and Electronics Engineers Inc., pp 429–432. https://doi.org/10.1109/ITCA52113.2020.00096
15. Sambare M: FER-2013 dataset
16. Gholamy A, Kreinovich V, Kosheleva O (2018) Why 70/30 or 80/20 relation between training and testing sets: a pedagogical explanation
17. Shorten C, Khoshgoftaar TM (2019) A survey on image data augmentation for deep learning. J Big Data 6. https://doi.org/10.1186/s40537-019-0197-0
18. Ramirez GA, Fuentes O, Crites SL, Jimenez M, Ordoñez J: Color analysis of facial skin: detection of emotional state
Breakdown Time Prediction Model Using CART Regression Trees Ni Nyoman Putri Santi Rahayu and Dyah Lestari Widaningrum
Abstract The process of mining copper and gold concentrates from ore requires a variety of tools for separating valuable minerals. Availability and equipment utilization are key measures of a mining company's performance. The production support equipment of the mining company in this study has an average utilization across all areas that does not reach the target, standing at only 46.7%. The low equipment utilization caused by down equipment affects the company's productivity. This study designed a monitoring technique for all equipment of a mining company in Indonesia at all mine sites. A predictive model was developed using a database currently maintained by the company: the existing data generates a model that predicts the duration of equipment downtime using the Classification and Regression Tree (CART) technique. Other information related to down equipment status is visualized in a monitoring dashboard. This monitoring technique can help the company evaluate equipment status to support productivity and make immediate decisions when needed. Keywords Data Visualization · Data Mining · Classification & Regression Tree (CART)
1 Introduction One of the leading industrial activities in Indonesia is the mining industry, such as breaking, smelting, refining, and all processing of mining/excavation products. This research was conducted at a company that mining and processing mineral ores to produce concentrates containing copper, gold, and silver through underground mining. The processing plant produces copper and gold concentrate from ore mined N. N. P. S. Rahayu · D. L. Widaningrum (B) Industrial Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] N. N. P. S. Rahayu e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_67
N. N. P. S. Rahayu and D. L. Widaningrum
by separating valuable minerals from the soil that covers them. The main steps are crushing the soil, grinding, floating, and drying. The mining process requires tools that support the company’s business processes. Based on data in March 2022, there is a relatively high amount of downtime, where there are 1143 units out of 3081 units that were damaged, so the average utilization of production equipment and production support equipment at each site in underground mining has not reached the target maximum, which is 46.7%. Supporting tools used to carry out production need to be observed related to damage factors, the amount and type of damage to see the performance of the equipment because the appearance of damage to the production equipment is a problem that hinders the optimization of the production process. Identifying the factors causing the damage and monitoring is necessary to see the tool’s performance because equipment damage can cause a lack of product effectiveness and increase product defects. The tracking and monitoring process can be done by designing a classification system to determine the duration of equipment damage based on specific categories. Classification of data based on the cause of the damage that occurred requires a model or data processing method using data mining to process large amounts of data [1]. The method considered the most suitable for predicting the length of repair is the Classification and Regression Tree (CART) method because the results of the prediction design are easy to interpret even though the variables used are of various types and in large numbers. This method can also describe the relationship between the response variable and one or more predictor variables based on pre-existing data sets so that users can find out which variables have an essential effect on prediction results. 
The monitoring process to support the process of monitoring a large number of production equipment can also be carried out using a dashboard that can be designed using Power BI. The equipment damage monitoring system was built with the aim of being able to monitor the handling of equipment damage carried out by technicians [2]. Dashboards can summarize data into visuals that can provide an overview of the information needed by the company to control the quality of production support tools. The concept of dashboard performance is an information system application model provided for managers to present performance quality information from a company or organizational institution, and dashboards have been widely adopted by companies or businesses [3]. This study is preliminary research in the development of a predictive maintenance system. Various studies have been carried out in developing predictive maintenance systems using historical data. The development of an algorithm to predict by combining exponential smoothing and artificial neural network and integrated with a monitoring dashboard increases the availability of assets in the pulp industry [4]. The monitoring dashboard can be integrated with unsupervised data-driven and knowledge-driven approaches to managing large datasets in the form of information/labels from human experts [5]. The development of predictive maintenance systems is increasing along with the development of Industry 4.0, which utilizes digital technology to focus on products and services in general, and reduces maintenance costs and operations in particular in predictive maintenance [6]. The efficiency
and effectiveness of the results of this initial research will illustrate the opportunities for developing a more advanced predictive maintenance system using digital technology, whose continued development creates ever greater opportunities for adoption.
2 Data Processing and Results

2.1 CART Regression Tree

The data used to build the CART regression tree is damage data for production support equipment in the Grasberg Block Cave (GBC) area. The data is processed with the Minitab application to generate a CART regression tree used as a reference for predicting the duration of tool failure. The initial data obtained was still raw company data, so it had to be selected according to the research needs. There are 779 records covering the names of tools and components, work center, progress status of the damage, damage category, duration of repair time, and the tool's age. Table 1 shows the variables collected for this research. The steps taken to process the data in Minitab are as follows:

1. Prepare the dataset to be classified and input the data in the Minitab worksheet. The response variable must be continuous for regression trees and categorical for classification trees. If a predictor variable is continuous, its data must be numeric; if it is categorical, the data can be input as text or numbers.
2. Select the classification type to be implemented in the Predictive Analytics Module. In this study, the dependent variable studied is the time the equipment

Table 1 CART regression tree data details

| Variable | Description |
|---|---|
| Equipment Id | Equipment specification code |
| Equipment name | The name of the damaged equipment: cranes, electric drills, excavators, fuel trucks, graders, light vehicles, loaders, mixer trucks, scissor trucks, shotcrete sprayers, and man lifts/scissor lifts |
| Main work centre | Consists of 8 work centers |
| Status | Status of the damage repair progress: on progress, restricted, waiting for labor, waiting for part/parts, and waiting for space |
| Down category | Equipment damage categories: accident, operation loss, scheduled maintenance work, unscheduled maintenance work, and warranty |
| Component name | Name of the damaged component |
| Duration | Downtime duration in hours |
| SMU engine hours | Amount of equipment usage time in hours |
fails, which is continuous (numeric), so the classification type chosen is regression trees. In the Minitab application, the classification type is selected in the Predictive Analytics Module.
3. Determine the response variable and the predictor variables, which are divided into continuous and categorical predictors. The response variable used in this study is the time of tool failure; the continuous predictor is the amount of time the tool has been used in hours (SMU Engine Hours), while the categorical predictors are Equipment Name, Main Work Centre, Status, Down Category, and Component Name.
4. Determine the validation method used to estimate the accuracy of the model: either k-fold cross-validation or validation with a test set.
5. Specify the node splitting method. This research uses least squares error, which grows the tree by minimizing the sum of squared errors.

Processing the data with k-fold cross-validation (k = 10) gives the results shown in Fig. 1. The R-squared for the training data is excellent at 82.23%, but only 69.97% for the testing data; the model should be improved by adding more data to produce a more accurate prediction model. The resulting CART regression tree has 26 nodes. Figure 2 shows the relative variable importance for each variable in the tree model: Component Name is the most important predictor, since each damaged component has a different repair complexity and therefore a different effect on the repair duration.
The category tree can be used for prediction in the Minitab application by selecting the "predict" option at the end of the data processing results

Fig. 1 Results of data processing with 10-fold cross-validation
Fig. 2 Relative Variable Importance
Fig. 3 Prediction Results with CART Regression Tree
page, then inputting the values of the variables to predict and clicking the OK button. The predicted time and its terminal node then appear, as in Fig. 3. Based on the prediction results, a light vehicle with SMU Engine Hours of 1000, located at the UGKSG1 work center, with status on progress, damage category unscheduled maintenance work, and component name tires/wheel/track is predicted to take 3.951 h to repair, per the prediction at terminal node 4.
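The paper uses Minitab, but the same CART idea can be sketched with scikit-learn's DecisionTreeRegressor; the data below is synthetic and the column names are invented stand-ins for the variables in Table 1, so this is only an illustrative analogue of the workflow, not the paper's model:

```python
# Illustrative CART regression sketch (synthetic data, invented column names).
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "smu_engine_hours": rng.integers(100, 5000, n),  # continuous predictor
    "component": rng.integers(0, 5, n),              # categorical predictor, integer-encoded
    "down_category": rng.integers(0, 3, n),          # categorical predictor, integer-encoded
})
# Synthetic response: repair duration driven mostly by the component type.
df["duration"] = df["component"] * 2.0 + rng.normal(0.0, 0.5, n)

X, y = df.drop(columns="duration"), df["duration"]
# Default criterion minimizes squared error (least squares), matching the
# node-splitting setting described in step 5; 26 leaves mirrors the tree size.
tree = DecisionTreeRegressor(max_leaf_nodes=26, random_state=0)
scores = cross_val_score(tree, X, y, cv=10, scoring="r2")  # 10-fold cross-validation
tree.fit(X, y)
print(scores.mean())                 # cross-validated R-squared
print(tree.predict(X.iloc[[0]])[0])  # predicted repair duration for one record
```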
2.2 Data Visualization

Other information related to down equipment status is visualized in a monitoring dashboard. There are 13,429 rows of data to be processed into visualizations according to the company's needs; the columns and their contents are described in Table 2. The information to display on the dashboard is the ranking of damage types from highest to lowest, the amount of damage at each site by date, and the ranking of sites from the most to the least damage. Data processing for visualization with Microsoft Power BI can be done with the following steps:

1. Connect the data sources to the Power BI app.
2. The data source used in this study is a SharePoint link, so the source is input via the SharePoint website: on the Get Data menu, select the website option and enter the website address.
3. Define the data types with the Power Query Editor.
4. The columns in the data table are detected automatically by the Power BI application, but if some data types do not match
Table 2 Dashboard data details

| Variable | Description |
|---|---|
| Site code | The name of the mining process site: GBC, BG, CIP, DMLZ, DOZ, NONM |
| Equipment Id | Equipment specification code |
| Equipment category | Category of equipment: production equipment or support equipment |
| Date | Breakdown start date |
| Status | Status of damage: delay, extended loss, and unplanned down |
| Down category | Equipment damage categories: accident, operation loss, scheduled maintenance work, unscheduled maintenance work, and warranty |
| Component name | Name of the damaged component |
| Duration | Downtime duration in hours |
| Progress status | Status of the damage repair progress: on progress, restricted, waiting labor, waiting part, and waiting space |
or there is data that is not required, the necessary changes can be made in the Power Query Editor via the Advanced Editor query menu.
5. Determine the type of graph display and the information to highlight, which can be used as a basis for decision-making.
6. Add slicers to the dashboard to make it easier for users to view visualizations under certain conditions. Slicers work the same way as a data filter.
7. Set the dashboard display according to the desired colors, placement, and sizes. The dashboard displays all the information needed at a size of 1200 × 1300 pixels, with 14-point text so that it is legible, and with the graphs ordered and sized so that their contents can be seen clearly.
8. Publish the dashboard on the Power BI website. The designed dashboard can be uploaded to the website so that workers who need it can access it and check progress regularly.
9. Set up a schedule for updating the data. A dashboard uploaded to the website does not yet have a mechanism to update when data is added to the dataset, so a data update schedule must be arranged in the dataset settings under the Power BI website's scheduled refresh menu. The update frequency selected for this dashboard is every day at 00.00, so workers who want to monitor can see the latest data each day.
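Before building these visuals in Power BI, the underlying aggregations (damage types ordered from highest to lowest, damage per site and per date, and sites ranked by damage) can be prototyped in a few lines of Python. The records below are hypothetical, not the 13,429-row dataset:

```python
# Prototype of the dashboard aggregations with hypothetical records:
# damage categories ranked, sites ranked, and damage per (site, date).
from collections import Counter

records = [
    {"site": "GBC", "date": "2022-04-01", "down_category": "unscheduled maintenance work"},
    {"site": "GBC", "date": "2022-04-01", "down_category": "accident"},
    {"site": "DOZ", "date": "2022-04-02", "down_category": "unscheduled maintenance work"},
    {"site": "BG",  "date": "2022-04-02", "down_category": "operation loss"},
    {"site": "GBC", "date": "2022-04-02", "down_category": "unscheduled maintenance work"},
]

# Damage types ordered from the highest to the lowest count
damage_by_type = Counter(r["down_category"] for r in records).most_common()
# Sites ranked by total damage
damage_by_site = Counter(r["site"] for r in records).most_common()
# Damage per site per date (feeds the daily line chart)
damage_by_site_date = Counter((r["site"], r["date"]) for r in records)

print(damage_by_type[0])  # → ('unscheduled maintenance work', 3)
print(damage_by_site[0])  # → ('GBC', 3)
```

These are exactly the group-and-count operations Power BI performs when a category field is dropped onto a bar chart, tree map, or line chart.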
Figure 4 shows the Equipment Breakdown Dashboard that will be used for monitoring. Data processing to visualize the number of pieces of equipment in each progress status is displayed in a clustered bar chart. There are five repair progress statuses, namely on progress, or repairs are underway; waiting labor, or waiting for workers; waiting part, or waiting for tool parts to continue work; waiting space, or queued to enter the repair area; and restricted, or tool access
776
N. N. P. S. Rahayu and D. L. Widaningrum
Fig. 4 Equipment breakdown dashboard
is being restricted. Based on the visualization in Fig. 4, 66 tools in April were waiting in line due to limitations in the repair area, and 460 tools were waiting for workers to continue repairs, so repair times would be delayed. The second visualization in the dashboard is the number of equipment damages by site, using a tree map. There are five underground mining process sites, namely Grasberg Block Cave (GBC), Deep Mill Level Zone (DMLZ), Common Infrastructure Project (CIP), Deep Ore Zone (DOZ), and Big Gossan (BG). Based on the visualization in Fig. 4, the area that experienced the most damage to production support equipment was the GBC site, with a total of 463 units, while the least damage was at the site with a total of 75 units. The line chart on the dashboard shows the number of damages per site per day over one month. The fluctuation in the amount of damage differs at each site. Based on the chart, the amount of damage at the GBC site increased from the beginning to the end of the month, with the highest peak on April 27. Meanwhile, the DOZ and BG sites did not experience a significant increase or decrease from the beginning to the end of the month. The fourth visualization in the
dashboard is the number of tool failures by date and damage status, using a bar graph. The company can find out the amount of damage across all sites on a daily basis, broken down by the status of the equipment failure. Damage classified as unplanned down has a higher count than planned down every day. Detailed data on damaged components is displayed as a table. The amount of data in the table adjusts to the filters used on the dashboard. The columns in the table can also be sorted alphabetically or by the smallest or largest duration value. Information on the dashboard can be used as a reference for supervisors in making decisions to reduce the duration of equipment damage and to mitigate damage. For example, the dashboard shows the site with the highest number of damages, so the supervisor can check on site. If the damage occurs due to an accident, the supervisor can check the validity period of the operator's license or propose mandatory, regular training on the use of tools to minimize accidents caused by human error. Other information that can be used as a reference is which components often experience damage: supervisors can check environmental factors that can cause equipment accidents, for example, uneven ground conditions that cause damage to tire components. Maintenance progress can also be a reference for supervisors; if many tools are still under maintenance due to a lack of workers or maintenance space, supervisors can consider adding workers, weighing the costs and benefits.
3 Conclusion

Based on the results of the data processing described above, it can be concluded that monitoring to predict the failure time of production support equipment is designed using the Minitab application to produce a CART Regression Tree. The variable used as a continuous predictor is engine hours, while the variables used as categorical predictors are equipment name, main work center, status, down category, and component name. The most optimal validation of the CART Regression Tree is tenfold cross-validation, which produces an R-squared value of 69.97% and a Mean Absolute Percentage Error (MAPE) of 1.1928, with 26 terminal nodes. This can be interpreted to mean that the predictor variables influence the prediction of the damage duration, with the variables ordered from greatest to least importance as follows: component name, main work center, equipment name, status, engine hours, and down category. Equipment malfunction information can be monitored using a dashboard designed in the Power BI application. Data visualization for April showed that the site with the largest amount of equipment damage, and an increase from the beginning to the end of the month, was Grasberg Block Cave (GBC), with a total of 463 units. This case study illustrates the use of available data to generate information for evaluating the current condition of the equipment used in operations. Further research
can be done by combining other variables and adding data to produce a more accurate prediction model.
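For reference, the two validation metrics reported in the conclusion, R-squared and Mean Absolute Percentage Error, can be computed from actual versus predicted repair durations as follows (illustrative values, not the study's data):

```python
# Definitions of the two validation metrics reported for the tree.
# The actual/predicted repair durations below are illustrative only.

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SSE / SST."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def mape(actual, predicted):
    """Mean absolute percentage error, expressed as a fraction."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual = [4.0, 8.0, 2.0, 10.0]      # observed repair durations, h
predicted = [3.9, 8.5, 2.2, 9.0]    # tree predictions, h
print(round(r_squared(actual, predicted), 4))  # → 0.9675
print(round(mape(actual, predicted), 4))       # → 0.0719
```

In k-fold cross-validation these metrics are averaged over k held-out folds, which is how the reported 69.97% R-squared was obtained.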
Video Mapping Application in Sea Life Experience Interior Design as Education and Recreation Facilities Florencia Irena Wijaya, Savitri Putri Ramadina, and Andriano Simarmata
Abstract Indonesia is a maritime country whose territory is dominated by sea. However, Indonesia's aquatic ecosystems and the biota that live in them have been damaged by illegal and destructive fishing, plastic waste pollution, and climate change. Educational and recreational facilities are needed for the public regarding the development of ecosystem life, Indonesian marine life, and the causes of damage to marine ecosystems. The Sea Life Experience serves as an education and recreation facility that presents marine ecosystems through interactive exhibitions, which involve interaction between visitors and installations as a new experience. It is hoped that with the presence of the Sea Life Experience, the involvement of visitors in interactive exhibitions using replicas and video mapping can increase awareness of and concern for Indonesian marine life and the aquatic biota that live in it. A qualitative design method was used, conducting literature studies and observations of precedent sites. Based on the design visualization results, the video mapping produces an underwater ambience so that visitors feel as if they are under the sea. In addition, visitors also gain new experiences through interactive and exploratory activities through video mapping.

Keywords Marine Ecosystem · Interactive Exhibition · Video Mapping
1 Introduction

The ocean has a big role to play in reducing global warming. The ocean produces at least 50% of the Earth's oxygen. As a maritime country whose territory is dominated by sea waters, Indonesia has great potential in helping restore the Earth from global warming. According to the Indonesia Climate Change Trust Fund (2021), Indonesia's waters hold a potential 17% of the world's carbon reservoirs [1].

F. I. Wijaya (B) · S. P. Ramadina · A. Simarmata
Interior Design Department, School of Design, Bina Nusantara University, West Jakarta, Indonesia
e-mail: fl[email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_68
As stated by the Ministry of Marine Affairs and Fisheries of the Republic of Indonesia (2018), Indonesia lies on the equator, in an area known as the Coral Triangle. The region is recognized as a global center of marine life diversity, covering all or part of six countries: Indonesia, Malaysia, Papua New Guinea, the Philippines, the Solomon Islands, and Timor-Leste [2]. It can be said that Indonesia is positioned at the center of an area of abundant marine life diversity. Based on 2019 data from the Fish Quarantine, Quality Control, and Safety of Fishery Products Agency (2020), Indonesia currently occupies second position among the largest producers of fishery products, with a total production of 24 million tons per year. The value of Indonesia's fishery exports occupies the fifteenth-highest position in the world, with an export value reaching 4.5 billion dollars [3]. However, Indonesia's aquatic ecosystems and the biota living in them have been damaged. This damage has many causes, among them illegal and irresponsible fishing, destructive fishing, plastic waste pollution, and climate change. Research shows that by the beginning of the twenty-first century, only one-third of Indonesia's coral reefs were in good to excellent condition, while the rest had experienced varying degrees of degradation. In recent decades, seagrass ecosystems that are habitats where marine life shelters from human threats have disappeared rapidly. As with coral reef and seagrass ecosystems, Indonesia's mangrove forests have also suffered damage, at the fastest rate in the world. Research shows that in the last three decades, 40% of mangroves in Indonesia have disappeared. It is estimated that 6% of the forest that has disappeared in Indonesia is mangrove forest [2].
The marine animals living in these ecosystems have also experienced population decreases; many of them are at risk, and some are even endangered. It is estimated that every year, as many as 7,700 turtles are killed by accident after being trapped in shrimp and tuna trawls. The population decrease of algae-eating herbivores has resulted in excessive algae growth, causing coral death. Marine life that is classified as rare or endangered, and should be protected, is instead traded illegally [2]. Plastic waste that pollutes the sea is also often ingested by marine animals, causing disease and even death. It can also be harmful to humans as consumers of seafood. Damage to ecosystems and marine life means the use of marine resources in Indonesia is less than optimal. Marine resources that are optimally utilized can improve the welfare and economy of the Indonesian people, especially those living on the coast. The utilization of marine resources can be made more optimal and sustainable by means of conservation. Conservation can increase fishermen's catches, stabilize long-term fish production, improve the economy of the Indonesian people, especially in marine coastal areas, and support the improvement of marine life habitats [4]. This is in line with the development of the Blue Economy being pursued by Indonesia. Fisheries; energy and mineral resources; marine transportation; marine infrastructure; marine tourism; coastal areas and small islands; unconventional resources; and industry and biotechnology are the eight sectors in the application of the blue economy [5].
Therefore, the Sea Life Experience is designed as one effort to build public awareness of and concern about marine biodiversity, as well as the importance of preserving Indonesia's marine life. The Sea Life Experience is designed as an education and recreation facility for the community regarding the development of the Indonesian marine ecosystem, along with what endangers it, through interactive exhibitions using replicas and video mapping as a new experience. The Sea Life Experience is a fictitious project designed as the Final Project of the Interior Design Study Program at Bina Nusantara University Bandung. It is located in Pakuwon City, Kalisari, Mulyorejo, Surabaya, East Java. The building was originally used as an art gallery. The location of the building is strategic, with educational, residential, health, and commercial functions around it, and access to the building is easy. One interactive exhibition medium that can be used is video mapping. Video mapping is a technique for creating optical illusions using lighting and projection on one or several media in the form of objects. This projection technique can display dynamic video on variously shaped surfaces. The projected object can be 3D animation or motion graphics consisting of flat planes or spaces [6]. According to Empler, video mapping projection is a type of augmented reality. It is generated through digital processing that interacts with areas, which need not be flat, as a medium for projecting an image or video. The word 'projection' means the act of projecting an object onto a surface, while the technical term 'mapping' indicates mapping onto the surface used as the projection medium. Augmented reality allows users to see and interact with physical reality using different media, for example smartphones, earphones, sensors, camcorders, or, as in the case of projection mapping, projectors [7].
Video mapping can be used as an interactive medium that provides new immersive digital experiences in educational tourism. According to the Big Indonesian Dictionary, interactive means that there are activities of mutual action. Interactive media is a medium that provides education using video, audio, three-dimensional shapes, graphics, and animations that create interaction. From these two definitions, it can be concluded that interactive refers to an activity of mutual action, where there is reciprocity between users and a medium. Based on the Cambridge Dictionary, immersive means a condition in which people seem to be surrounded by, and thus actually involved in, something. It can be concluded that immersive describes an experience in which a person can feel and engage with the surrounding circumstances. Interactive video mapping is when projections are modified in real time following certain events or activities. This can result from the use of effects from external devices such as mobile phones connected to servers where the projection mapping software runs, tablets, or even infrared cameras. Tracking is a technique that allows users to follow the movement of projection objects with the dynamic projection method. In this case, the machine interprets and deduces the object's position in the represented and real 3D space. The machine also anticipates the subsequent movement of the object, so that the content projected on the surface is automatically deformed and calibrated in real time during its movement.
In museums, projection mapping is used for interactive activities between the showpieces and visitors. This can create a more dynamic relationship with the displayed object or text: not only understanding a fact based on the display context, but also experimenting within the exhibition, which can enrich or enhance the experience. Projection mapping can also be used to create an atmosphere and experience inside the exhibition. In classical museum exhibitions, projection mapping can be used as a complement to the exhibits. Thus, projection mapping can serve as a didactic, sensory, and entertainment medium. Based on the project background, the problem can be formulated as follows.
1. Designing the Sea Life Experience interior as an education and recreation facility.
2. Designing interactive and immersive exhibitions in the Sea Life Experience as a new marine life edu-tourism experience.
The purposes of this project are as follows.
1. Producing a Sea Life Experience interior design using video mapping as a new educational experience.
2. Building public awareness of and concern for Indonesian marine life diversity.
2 Methodology

Qualitative research methods were used in the design of the Sea Life Experience. Qualitative research methods are designs that interpret occurring phenomena by involving various existing methods [8]. The result of a qualitative design is descriptive data about a situation, obtained through literature studies and observations.
1. Literature Studies. Literature studies are carried out by collecting data or information from journal articles, books, or the internet. The data collected concerns theories related to video mapping and the precedent study.
2. Observations. Observations are carried out by directly observing objects at the precedent location. Among the things observed and recorded are user activities, facilities, circulation, projector placement, sound system placement, and other things needed in designing a video mapping area.
A projector is a tool used to project objects onto a surface. To produce video mapping works, mastery and understanding of the projector and projection media are needed. The projector position is one of the important variables that must be considered to achieve the best video mapping result [9]. The projector's ideal position is a forward projection with an axis perpendicular to the surface of the model (see Fig. 1).
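When positioning a projector perpendicular to the model surface, the required distance follows the standard throw relation, distance = throw ratio × image width. A small sketch; the 0.8 throw ratio is an assumed short-throw lens value (the 15 m panel width comes from the precedent study later in this section):

```python
# Throw-distance check for projector placement:
#   distance = throw_ratio * image_width
# The 0.8 throw ratio is an assumed short-throw value, not from the text.

def throw_distance(throw_ratio, image_width_m):
    return throw_ratio * image_width_m

print(throw_distance(0.8, 15.0))  # → 12.0 metres from the surface
```

Checking this relation against the room depth early avoids discovering on site that a panel cannot be filled from the available mounting position.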
Fig. 1 Upright axis projection
Multiple shadows on the surface, resulting from a multi-projector system or poor projector placement, may result in a poor video mapping finish. It is necessary to determine from which point of view the projector or audience is located so that all the projected video content can be seen properly [9]. In general, there are three surface types for video mapping: planar surfaces, nonplanar surfaces, and discontinuous surfaces (see Fig. 2). Projection mapping, or video mapping, is not just a technique but also a new media format for spatial entertainment. This type of projection is used when projecting onto complex projection surfaces, such as objects, facades, 3D structures, or reliefs. The difference between projection mapping and standard projection is anamorphosis: the projection screen is not flat, and therefore the projection must
Fig. 2 Types of video mapping surface
be deformed; this process consists of correcting distortions in different parts of the figure through different deformations and geometric curves. As for nonplanar and discontinuous surfaces, they require the mediation of software that enables images to be precisely placed on specific surfaces by "masking" (selecting the portion of space on which the projection acts), "stacking" (arranging projectors to obtain a monumental image), and positioning and calibrating the projectors [10]. Data searches were carried out regarding area, ticket prices, show duration, electrical data, electrical placement, visitor capacity, visitor activities, and the facilities provided in a video mapping area (Table 1). Based on the results of the literature studies and observations above, it can be concluded that a video mapping exhibition room needs the following.
1. Projectors with an intensity of 9,000–12,000 lumens so that the projected light can be clearly seen on the projection surface.
2. Video mapping projection media should be white so that the light from the projector can be seen clearly. Dark colors absorb light, so the projection results are not optimal.
3. The lighting of the video mapping room is kept dark to maximize the video projection. However, general lighting is still needed for cleaning the space.
4. For a panel surface span of about 10 m, two projectors are needed.
5. Sensors are needed to make the video mapping interactive.
6. Video mapping also needs to be equipped with audio through loudspeakers and other sound systems.
7. A computer control room is required to run the video mapping show.
8. The video mapping exhibition area should have a cool or optimum room temperature, because the projector produces hot air.
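For a planar patch, the deformation (anamorphic correction) described above amounts to a perspective warp, i.e., a planar homography fitted from four projector-to-surface corner correspondences. A self-contained pure-Python sketch, with illustrative corner coordinates:

```python
# Planar-homography sketch for the anamorphic correction step: fit the
# 8-parameter perspective transform mapping the projector frame corners to
# the (illustrative) corners of an oblique surface patch, then warp points.

def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Fit H (with h33 = 1) from four corner correspondences (DLT equations)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    """Apply H to (x, y) with perspective division."""
    den = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / den,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / den)

src = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]       # projector frame
dst = [(100, 50), (1800, 120), (1750, 1000), (60, 950)]  # surface patch corners
H = homography(src, dst)
print(warp_point(H, 0, 0))  # close to (100.0, 50.0)
```

Mapping software computes one such transform per surface patch during calibration; nonplanar and discontinuous surfaces are handled by subdividing them into patches and applying masking as described above.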
3 Results and Discussions

In the design of the Sea Life Experience, a projector is used as the video mapping tool. The projection results can only be seen clearly in a dim or dark room, which also has an ambience closer to the underwater atmosphere. From a technical point of view, the projector was chosen as the video mapping tool because of its easy installation (Table 2). Besides the projector, an interactive system with real-time depth and motion sensors is needed to provide interactions that give visitors an immersive experience. Spatial Augmented Reality (SAR) equipment such as the Microsoft Kinect can be used for surface reconstruction and dynamic projection mapping in real time [11]. The use of projectors for video mapping can be found in the exhibition area. The following is the implementation of the exhibition area design, in the form of an electrical plan and perspectives of the video mapping areas in the exhibition (Table 3).
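The interactive behavior enabled by such a depth/motion sensor reduces to a per-frame loop: read the tracked visitor positions, keep those inside the mapped floor region, and spawn a content effect at each. A minimal sketch; the coordinates, region bounds, and effect name are illustrative assumptions:

```python
# Per-frame interaction loop sketch: keep sensor-reported visitor positions
# that fall inside the mapped floor region and spawn an effect at each.
# Region bounds, positions, and the effect name are illustrative.

FLOOR = (0.0, 0.0, 10.0, 6.0)  # x_min, y_min, x_max, y_max in metres

def effects_for(positions, floor=FLOOR):
    x0, y0, x1, y1 = floor
    return [
        {"effect": "blue_light", "at": (x, y)}
        for x, y in positions
        if x0 <= x <= x1 and y0 <= y <= y1
    ]

frame = [(2.5, 3.0), (11.2, 1.0), (7.8, 5.5)]  # one frame of tracked positions
print(len(effects_for(frame)))  # → 2 (one visitor is outside the floor region)
```

In a real installation this loop runs at the sensor's frame rate, and the effect positions are passed through the projector calibration before rendering.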
Table 1 Precedent study

Kala Kini Nanti
Location: Paris Van Java, Bandung
Dimensions: Area ± 225 m²; panel size ± 15 × 4 m; ceiling height ± 5.5–6 m
Ticket price: Regular show Rp 50.000; special show Rp 150.000
Show duration: 30 min
Visitor activities: Buy tickets on the spot; experience the digital art show; take pictures
Facilities: Ticketing spot; exhibition area; control room
Projector: EPSON, 9,000–12,000 lumens, with ultra-short throw lens; projectors are directed at the wall and floor

ImagiSpace
Location: Astha District 8, South Jakarta
Dimensions: Panel size ± 3.5 m
Ticket price: Weekday Rp 128.000; weekend Rp 138.000
Show duration: 60 min
Visitor activities: Tickets only available online; experience the digital art show; take pictures
Facilities: Exhibition area; control room
Projector: Projectors are directed at the wall and floor

(The table's Visual and Layout rows contain photographs and floor plans of each venue.)
Table 2 Video mapping tools comparison (Projector / LED / LCD)

Brightness: √ / √√√ / √√
Reflection: √√ / √√√ / √√
Contrast: √ / √√√ / √√
Size: √√√ / √√√ / √
Flexibility: √√√ / √√√ / √
Durability: √√ / √√√ / √√
Installation: √√√ / √ / √√
4 Conclusion

The Sea Life Experience is designed as an education and recreation facility for the community regarding the development of marine life, and the damage to marine ecosystems and the biota that live in them, through exhibitions of marine life replicas and interactive exhibitions. Video mapping is used in the interactive exhibitions as a new marine life education experience. Based on the results of the design visualization, the underwater ambience is achieved, so that visitors feel as if they are under the sea. In addition, visitors not only receive education but also gain new experiences through interactive and exploratory activities in the exhibitions. Based on the completed design, an LED screen can be suggested as an alternative for displaying video mapping because of its high durability, brightness, contrast, and flexibility. However, the installation of LED screens is quite difficult and complex, so it must be handled by professionals.
Table 3 Design implementation

Floor area 1: Prehistoric sea and open ocean exhibition
Perspectives: Prehistoric sea exhibition; Open Ocean exhibition
Implementation: In this area, video mapping is used as the replica background. The video mapping shows the habitats and living habits of the marine animals displayed. There is an interactive video mapping in the Open Ocean area, where visitors can feed the whale sharks by touching the video mapping wall.

Floor area 2: Interactive area
Perspectives: Interactive video mapping area; Interactive plastic sea; Interactive drawing area
Implementation: In the Interactive Video Mapping Area, visitors watch educational videos about the condition of marine ecosystems and the causes of their damage. When visitors walk, a video mapping effect appears in the form of a blue light that follows the visitor's footsteps. In the Interactive Plastic Sea, visitors can interact with plastic objects resembling marine animals that appear in the video mapping, and can turn them into real marine animals by touching them. In the Interactive Drawing Area, visitors can explore their creativity by creating works of imaginary or favorite marine animals. The works are included in the video mapping and can interact with visitors by touch.

Floor area 3: Bioluminescent sea exhibition
Perspective: Interactive bioluminescent sea
Implementation: In the Interactive Bioluminescent Sea, visitors can interact with deep-sea bioluminescent animals that appear in the video mapping by touching them.

(Each row of the table also includes the corresponding electrical plan drawing.)
References
1. Indonesia Climate Change Trust Fund, https://www.icctf.or.id/mulai-dari-karbon-biru-untukmenyelamatkan-bumi/. Last accessed 15 Sept 2022
2. Ministry of Marine Affairs and Fisheries of Republic Indonesia (2018) Kondisi Laut Indonesia, Jilid Satu: Gambaran Umum Pengelolaan Sumber Daya Laut untuk Perikanan Skala Kecil dan Habitat Laut Penting di Indonesia. PT. Bentuk Warna Citra, Jakarta
3. Fish Quarantine, Quality Control, and Safety of Fishery Products Agency. https://kkp.go.id/bkipm/artikel/25535-peringkat-kedua-produsen-hasil-perikanan-pemerintah-indonesia-upayakan-peningkatan-ekspor. Last accessed 15 Sept 2022
4. Directorate of Areas and Fish Species Conservation. http://kkji.kp3k.kkp.go.id/index.php/beritabaru/267-14th-konservasi-untuk-kesejahteraan. Last accessed 15 Sept 2022
5. Ministry of Marine Affairs and Fisheries of Republic Indonesia. https://kkp.go.id/artikel/35427-kkp-gaungkan-ekonomi-biru-di-kepri. Last accessed 15 Sept 2022
6. Rompas JH, Sompie SR, Paturusi SD (2019) Penerapan Video Mapping Multi Proyektor untuk Mempromosikan Kabupaten Minahasa Selatan. Jurnal Teknik Informatika 14(04):493–504
7. Empler T (2017) Dynamic urban projection mapping. Proceedings 1–15
8. Anggito A, Setiawan J (2018) Metodologi Penelitian Kualitatif. CV Jejak
9. Maniello D (2018) Improvements and implementations of the spatial augmented reality applied on scale models of cultural goods for visual and communicative purpose. Springer International Publishing AG, pp 303–319
10. Schmitt D, Thébault M, Burczykowski L (2020) Image beyond the screen: projection mapping
11. Guo Y, Chu S-C, Liu Z, Qiu C, Luo H, Tan J (2018) A real-time interactive system of surface reconstruction and dynamic projection mapping with RGB-depth sensor and projector. International Journal of Distributed Sensor Networks 14(7)
The Approach of Big Data Analytics and Innovative Work Behavior to Improve Employee Performance in Mining Contractor Companies Widhi Setya Wahyudhi, Mohammad Hamsal, Rano Kartono, and Asnan Furinto

Abstract Many papers discuss how big data analytics influences company performance, but few studies discuss how it impacts employee performance, so this study aims to identify the role of big data analytics and innovative work behavior in employee performance, especially among heavy equipment experts in coal mining contractor companies in Indonesia. Big data technology plays a very important role in asset management, in optimizing the predictive maintenance process. Besides that, the role and behavior of each expert in the company is an important aspect of improving the maintenance process. This research is important for seeing whether big data investments, which require large costs and efforts, increase employee performance. The research was carried out through a quantitative method approach with structural equation modeling (SEM). The sampling technique is based on the probability sampling method, using simple random sampling and Slovin's formula. The questionnaire was completed by 119 respondents, heavy equipment experts at Indonesian coal mining contractors. Testing used SmartPLS 3.2.9, and the outer model was tested by examining Average Variance Extracted (AVE), Composite Reliability (Pc), Cronbach's Alpha, and Discriminant Validity. The test results show that the relationships between the big data analytics and innovative work behavior variables and employee performance are significant from the perspective of heavy equipment experts. It was found that the greater the big data analytics capability and innovative work behavior possessed by the experts, the higher the employee performance.

W. S. Wahyudhi (B) · M. Hamsal · R. Kartono · A. Furinto
Management Department, BINUS Business School Doctor of Research in Management, Bina Nusantara University, West Jakarta, Indonesia
e-mail: [email protected]
M. Hamsal e-mail: [email protected]
R. Kartono e-mail: [email protected]
A. Furinto e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_69
Keywords Big Data Analytics · Innovative Work Behavior · Employee Performance
1 Introduction The application of big data is an implementation of the latest technology that requires a large investment, since it involves substantial resources in terms of human assets, infrastructure, and more. Big data technology plays a very important role in asset management, especially for heavy equipment in coal mining contractors, among other things in optimizing the predictive maintenance process. In addition to big data technology, the world of heavy equipment maintenance relies heavily on engineers to improve the performance of managed assets, not least through the role and behavior of each engineer in the company. This research is important for assessing whether big data investments, which require large costs and effort, affect cumulative employee performance. It also identifies how the innovative work behavior possessed by heavy equipment experts impacts employee performance. Big data is an implementation of technology that can create a sustainable competitive advantage [1]. In his follow-up research, Wamba adds that big data analytics comprises tools and techniques for obtaining actionable insights from the vast amount of available data in order to provide sustainable value, improve business performance, and deliver competitive advantage [1]. Managers can leverage and optimize big data technology to explore their business more deeply and transform that knowledge into efficient decisions, improving performance and the entire decision-making process [2]. From a business perspective, Marshall et al. state that companies that use big data in running their organizations may have better opportunities to improve operating efficiency and increase revenue than their rivals [3].
Predictive analytics is a methodology for identifying patterns using statistical approaches, artificial intelligence, machine learning, and data mining techniques [4]. Big data analytics is an analytical tool that can significantly increase business benefits and create organizational effectiveness [5]. In Tao's research, big data is categorized into 5 Vs (i.e., volume, veracity, variety, velocity, and value) [6]. The creation of sustainable competitive advantage is indicated by more efficient operational activities and faster, more precise strategic decision-making [7].
2 Literature Review 2.1 Big Data Analytics In today's uncertain conditions, many companies are investing heavily in big data to differentiate themselves from their rivals [8]. Several studies report that 87% of companies are confident that big data will create a new pattern of competition, while 89% believe they will lose a sizeable market share if they do not implement big data in the future [9]. Some research describes big data as an analytical tool and technique for obtaining actionable insights from the abundance of existing data in order to provide sustainable value, improve business performance, and deliver competitive advantage [1]. Big data is commonly divided into three main dimensions, volume, variety, and velocity [10], later extended with veracity and value [11]. Big data can identify the causes of failures, detect potential damage, and evaluate the performance of heavy equipment and workers [12]. Previous research has shown that big data analytics is significantly and positively related to firm performance [5]. Hypothesis 1. Big Data Analytics has a positive and significant effect on employee performance.
2.2 Innovative Work Behavior Problem recognition, idea generation, promotion of ideas, and idea realization can increase employees' ability to innovate [13]. Studies show that individuals who are willing and able to innovate extend their contribution beyond the scope of their job requirements and, at the same time, generate a continuous stream of innovations [14]. The studies above indicate a positive and significant impact of innovative work behavior on organizational performance. Previous research has shown that innovative work behavior is positively related to task performance. However, task performance has traditionally been limited to the employee's job description and does not consider the employee's various non-explicit contributions to the organization. Other studies have shown that employees who work in non-innovative positions may be less motivated to implement new ideas because they do not see how new ideas or processes aid their work. Hypothesis 2. Innovative work behavior has a positive and significant effect on employee performance.
2.3 Employee Performance Employee performance is a measurement parameter that uses task standards aligned with job criteria [15]. In several studies, it is stated that job criteria
are classified into three types, namely trait-based information, behavior-based information, and results-based information [16]. The same study identified three variables that have an impact on employee performance, namely individual capability in carrying out the task, organizational support, and effort [16]. Individual capability consists of the talents, welfare, and personality traits of employees. Theoretically, employee performance is influenced by several factors, namely individual, system, and contextual factors [17]. Employee performance is behavior or activities that are related to the objectives of the company and under the control of the individual; this variable comprises task performance, contextual performance, and adaptive performance [18].
3 Methods The research technique used in this study is a quantitative method that explores the influence of big data analytics and innovative work behavior on employee performance. Data were collected through a survey, with a questionnaire as the measuring instrument. The questionnaire targeted a sample of 119 respondents drawn from heavy equipment experts in Indonesian coal mining contractors. Sampling used the probability sampling method with a simple random sampling technique, in which members of the population are selected in a random pattern, ignoring the strata that exist in the population. The sample size was determined using the Slovin formula. The population in this study consists of heavy equipment experts in coal mining contracting companies with the largest market share that have implemented big data technology; the number of such experts is known to be 154. Using a margin of error of 5%, the Slovin formula gives a minimum sample size of 111 experts; from the questionnaires distributed to all experts, 119 responses were obtained. Table 1 explains the operationalization of variables. Based on the study of theories and models, the hypotheses of this research are: • H1: Big Data Analytics has a significant effect on employee performance • H2: Innovative Work Behavior has a significant effect on employee performance The data analysis method is structural equation modeling (SEM), used to determine the causal relationships between the latent variables contained in the structural equation. Data exploration from Fig. 1 uses SmartPLS 3.2.9.
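The Slovin calculation is easy to reproduce; a minimal sketch using the population and margin of error stated above:

```python
import math

def slovin(population: int, margin_of_error: float) -> float:
    """Slovin formula: minimum sample size n = N / (1 + N * e^2)."""
    return population / (1.0 + population * margin_of_error ** 2)

# 154 heavy equipment experts with a 5% margin of error, as reported above
n = slovin(154, 0.05)
print(math.floor(n))  # 111, matching the minimum sample size in the text
```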
After going through the calculation mechanism from the questionnaire results, the data are analyzed using multivariate structural equation modeling (SEM) techniques in SmartPLS 3.2.9. There
Table 1 Variable operationalization (all indicators are measured on an ordinal scale)

Big data analytics: defined as "high-volume, high-velocity, and high-variety information assets that require cost-effective, innovative forms of information processing" [10]. Indicator sources: [10, 19].
  Volume: the ability to examine big amounts of data; the ability to explore very huge amounts of data
  Velocity: the ability to explore data quickly; the ability to examine data as soon as it is received
  Variety: the ability to use numerous dissimilar databases to gain insights; the ability to examine data from multiple sources

Innovative work behavior: a complex process that combines creativity and the application of ideas, consisting of four dimensions [13]. Indicator source: [13].
  Exploration of ideas: the ability to look for opportunities to improve something; the ability to pay attention to problems outside the daily task
  Idea generation: the ability to find new methods, techniques, or ways of doing work; the ability to find solutions to work problems
  Promoting ideas: the ability to promote ideas to gain enthusiasm from influential senior members of the company; the ability to convince colleagues and related supervisors to support ideas

(continued)
are two stages of testing, namely the outer model and the inner model, where the instrument is tested against Outer Loading > 0.7, Average Variance Extracted (AVE) > 0.5, Composite Reliability (Pc) > 0.7, Cronbach's Alpha > 0.6, and discriminant validity.
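For reflective constructs, two of these criteria can be computed directly from standardized outer loadings: AVE is the mean squared loading, and composite reliability uses the squared sum of loadings over total variance. A minimal sketch with hypothetical loadings (not the paper's data):

```python
import numpy as np

def ave_and_cr(loadings):
    """AVE and composite reliability (rho_c) from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    err = 1.0 - lam ** 2                                  # indicator error variances
    ave = np.mean(lam ** 2)                               # Average Variance Extracted
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + err.sum())    # composite reliability
    return ave, cr

# Hypothetical standardized loadings for one construct
ave, cr = ave_and_cr([0.78, 0.81, 0.75, 0.84])
assert ave > 0.5 and cr > 0.7   # the thresholds used in the text
```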
Table 1 (continued)

Innovative work behavior (continued):
  Implementation of ideas: the ability to introduce innovative ideas into work in a systematic way; the ability to contribute to the implementation of new ideas

Employee performance: behavior or activities that are relevant to the objectives of the company and under the control of the individual, with task performance, contextual performance, and adaptive performance as dimensions [18]. Indicator source: [18].
  Task performance: the quality of work has gotten better in the last 3 months; able to do a good job with maximum time and effort
  Contextual performance: able to communicate with others and lead to the desired result; able to learn from feedback from others about work; able to keep job knowledge up to date
  Adaptive performance: able to handle unclear and unpredictable situations in the workplace
Fig. 1 Research model: Big Data Analytics (Volume, Velocity, Variety) and Innovative Work Behavior (Exploration of Ideas, Idea Generation, Promoting Ideas, Implementation of Ideas) as predictors of Employee Performance (Task Performance, Contextual Performance, Adaptive Performance)
Table 2 Respondent profile

Demographic        Attribute          Percentage (%)
Age                21–30 years old    16.50
                   31–40 years old    62.00
                   41–50 years old    16.50
                   > 51 years old      5.00
Education          High school        27.50
                   Diploma            42.50
                   Bachelor           29.20
                   Magister            0.80
Work experience    < 3 years           2.50
                   3–5 years           8.30
                   6–10 years         18.90
                   11–20 years        55.40
                   > 21 years         14.90
4 Result and Discussion 4.1 Sample Description There are two stages of statistical analysis, namely descriptive and inferential statistics. The descriptive statistics show that the respondent profile is dominated by experts aged 31–40 (62.00%), holding a diploma degree (42.50%), with 11–20 years of work experience (55.40%), as shown in Table 2.
4.2 Reliability and Validity Through the Outer Model Outer Loading The outer loading value must be greater than 0.7 to be considered valid. Testing was carried out using SmartPLS 3.2.9 software. The outer model also reports cross loadings, another measure of discriminant validity [20, 21]. The SmartPLS assessment shows that all big data analytics indicators have a loading factor > 0.7, so they meet the requirements for good convergent validity. One indicator of innovative work behavior has a score of 0.655; however, some references consider a loading factor > 0.6 adequate, so the indicator is retained. These tests show that Big Data Analytics has a weight of 0.245 on Employee Performance and Innovative Work Behavior a weight of 0.534 on Employee Performance. From Fig. 2
it can be seen that the role of Innovative Work Behavior is greater than that of Big Data Analytics in determining how mining contractor companies can improve employee performance. Path Coefficient Another test in the outer model in SmartPLS is the reliability test of all indicators in the model. Several indicators can be measured, including Cronbach's Alpha, which must be above 0.7; composite reliability, which must also be above 0.7; and the AVE value, which must be above 0.5. Cronbach's alpha identifies the lower bound of a construct's reliability, while composite reliability estimates the construct's actual reliability and is considered the better estimate of internal consistency [20, 21]. The results in Table 3 show Cronbach's Alpha > 0.7, so it can be concluded that the latent variables have good reliability. Reliability testing was conducted to check whether the data obtained from the examination show suitable internal consistency. The Average Variance Extracted (AVE) scores exceed 0.5, the requirement for convergent validity [20, 21].
Fig. 2 Outer loadings
Table 3 Path coefficient

                          Cronbach's alpha   Composite reliability   AVE
Big data analytics        0.881              0.910                   0.628
Innovative work behavior  0.887              0.914                   0.639
Employee performance      0.906              0.924                   0.606
Discriminant Validity Discriminant validity tests how far a latent construct differs from the other constructs; the testing technique used here is the Fornell-Larcker methodology. The results of the discriminant validity test are shown in Fig. 3 and Table 4.
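The Fornell-Larcker criterion requires each construct's square root of AVE (the diagonal of Table 4) to exceed its correlations with the other constructs (the off-diagonal entries). Using the values reported in Table 4, the check can be sketched mechanically:

```python
import numpy as np

# Lower triangle of Table 4: diagonal = sqrt(AVE), off-diagonal = correlations
m = np.array([[0.792, 0.000, 0.000],    # Big data analytics
              [0.483, 0.799, 0.000],    # Employee performance
              [0.446, 0.643, 0.778]])   # Innovative work behavior

def fornell_larcker_ok(m):
    """Each correlation must be smaller than both constructs' sqrt(AVE)."""
    for i in range(len(m)):
        for j in range(i):
            if m[i, j] >= min(m[i, i], m[j, j]):
                return False
    return True

assert fornell_larcker_ok(m)   # discriminant validity holds for Table 4
```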
Fig. 3 T-statistic result
Table 4 Discriminant validity

                          Big data analytics   Employee performance   Innovative work behavior
Big data analytics        0.792
Employee performance      0.483                0.799
Innovative work behavior  0.446                0.643                  0.778
4.3 Hypothesis Testing T Statistic Testing of the inner model is hypothesis testing, which is completed by examining the t-statistics, the values used to judge the level of significance. In SmartPLS, the method used to test the hypotheses is the bootstrap approach; significance requires a t-statistic greater than the t-table value of 1.96 [22]. In Fig. 3, the t-statistic of Big Data Analytics on Employee Performance of 2.113 and of Innovative Work Behavior on Employee Performance of 6.223 indicate that both variables have a significant effect on Employee Performance. Path Coefficient The probability value (p-value) is the probability of observing the test statistic under the null hypothesis; 0.05 is the significance level alpha, the probability of rejecting H0 when it is true. Commonly used significance levels are 1, 5, or 10%. If p is less than 0.05, the observation lies in the alpha region, so H0 is rejected. The p-value is strongly influenced by the sample size. If the results are significant, the study is considered to have found a significant effect, whereas a p-value > 0.05 means no significant relationship was found. In Fig. 4, the test outcomes show that the p-value of Big Data Analytics is 0.040 and the p-value of Innovative Work Behavior is 0.000, so there is a significant relationship among the latent variables. The path coefficients of Big Data Analytics and Innovative Work Behavior are positive, so it can be interpreted that the greater the Big Data Analytics capability and Innovative Work Behavior a mining contractor company has, the higher the employee performance.
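The bootstrap t-statistic idea can be illustrated outside SmartPLS with a toy standardized path (here simply a correlation) on synthetic data; the data and settings below are hypothetical, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t(x, y, n_boot=2000):
    """Bootstrap t-statistic: point estimate divided by the bootstrap
    standard error of resampled estimates."""
    n = len(x)
    est = np.corrcoef(x, y)[0, 1]
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondents with replacement
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return est / boots.std(ddof=1)           # significant at 5% if |t| > 1.96

# Synthetic sample of 119 "respondents" with a genuine positive relationship
x = rng.normal(size=119)
y = 0.5 * x + rng.normal(scale=0.8, size=119)
t = bootstrap_t(x, y)
assert abs(t) > 1.96   # the effect is detected as significant
```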
Fig. 4 Path Coefficient
Fig. 5 R-square
R-Square R-square (R2) identifies the influence of the independent (exogenous) variables on the dependent (endogenous) variable. Some literature indicates that an R-square of 0.75 is a strong category, 0.50 a medium category, and 0.25 a weak category. The result in Fig. 5 shows an R-square of 0.449, meaning that the independent variables jointly account for 44.9% of Employee Performance, while the remaining 55.1% is influenced by other variables not tested in this research. F-Square In addition to the significance of the relationships between variables, the study needs to identify the strength of the influence between variables using the effect size, f-square [23]. An f-square of 0.02 is considered small, 0.15 moderate, and 0.35 large; a value below 0.02 is considered to have no meaningful impact [24]. The results in Fig. 6 show that Innovative Work Behavior has a bigger effect than Big Data Analytics. Model Fit Model assessment can be completed by considering the Standardized Root Mean Square Residual (SRMR). If the value is in the range between 0.05 and 0.08, the model can be said to fit.
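The effect size behind these categories is f^2 = (R2_included - R2_excluded) / (1 - R2_included), comparing the model's R2 with and without the predictor of interest. A minimal sketch, where the full-model R2 of 0.449 is taken from the text and the reduced-model R2 is hypothetical:

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f^2 effect size for one predictor in a regression/PLS model."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Full model R^2 = 0.449 as reported; 0.30 is a hypothetical reduced-model R^2
f2 = f_squared(0.449, 0.30)
assert 0.15 <= f2 < 0.35   # falls in the "moderate" band described above
```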
Fig. 6 F-square
Fig. 7 Model fit
The SRMR value of 0.079 indicates that the research model fits. The model shows that coal mining contractor companies will increase employee performance when they successfully implement advanced technology such as big data analytics together with innovative behavior in uncertain situations [25].
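SRMR is the root mean square of the residuals between the observed and model-implied correlation matrices. A minimal sketch with hypothetical 3x3 matrices (not the study's data):

```python
import numpy as np

def srmr(observed_corr, implied_corr):
    """SRMR over the lower triangle (including the diagonal) of the
    observed vs. model-implied correlation matrices."""
    s, sigma = np.asarray(observed_corr), np.asarray(implied_corr)
    idx = np.tril_indices_from(s)
    resid = s[idx] - sigma[idx]
    return float(np.sqrt(np.mean(resid ** 2)))

# Hypothetical observed and model-implied correlation matrices
obs = np.array([[1.00, 0.48, 0.45],
                [0.48, 1.00, 0.64],
                [0.45, 0.64, 1.00]])
imp = np.array([[1.00, 0.41, 0.38],
                [0.41, 1.00, 0.55],
                [0.38, 0.55, 1.00]])
assert srmr(obs, imp) < 0.08   # within the fit range cited in the text
```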
4.4 Discussion In inferential statistics, two stages of testing are carried out, namely the outer model and the inner model. The outer loading tests show that the reliability and validity of the research instrument meet the requirements through statistical tests with SmartPLS: Outer Loading > 0.7, Average Variance Extracted (AVE) > 0.5, Composite Reliability (Pc) > 0.7, Cronbach's Alpha > 0.6, and discriminant validity, which tests the extent to which each latent construct differs from the others, is satisfied. In the inner model stage, hypothesis testing finds a t-statistic of 2.113 for Big Data Analytics on Employee Performance and 6.223 for Innovative Work Behavior on Employee Performance, indicating that both relationships have a significant effect on Employee Performance. The test scores show that the p-value of Big Data Analytics is 0.035 and that of Innovative Work Behavior 0.000, so there is a significant relationship between the latent variables. The path coefficient shows the direct effect of the variable identified as the cause on the variable identified as the outcome; a positive path coefficient indicates a positive relationship. The Original Sample values of Big Data Analytics and Innovative Work Behavior are positive, so it can be interpreted that the greater the Big Data Analytics capability and Innovative Work Behavior a mining contractor company has, the higher the employee performance.
5 Conclusion From the research results, supported by testing the survey data with the SEM method and SmartPLS 3.2.9 software, it is found that the big data analytics and innovative work behavior variables have a positive and significant effect on employee performance, especially for heavy equipment experts at coal mining contractors in Indonesia. Amid rapid technological development, the application of big data is one of the keys to creating a sustainable competitive advantage. One of the duties and responsibilities of heavy equipment experts is to analyze the available data for use in decision-making when managing company assets. With big data analytics, three dimensions of data management can be optimized, namely volume, variety, and velocity; managed optimally, these three dimensions increase employee performance. The next variable comes from the employees themselves: innovative work behavior has a positive impact on employee performance. Heavy equipment experts are required to keep providing creative ideas in the management of heavy equipment, because this optimizes the lifetime of the managed components and spare parts. The survey-based testing proves that innovative work behavior can improve employee performance, so the research model built here provides a very good picture for practitioners in coal mining contractors.
References
1. Wamba SF, Gunasekaran A, Akter S, Ren SJF, Dubey R, Childe SJ (2017) Big data analytics and firm performance: effects of dynamic capabilities. J Bus Res 70:356–365
2. Gupta M, George JF (2016) Toward the development of a big data analytics capability. Inf Manag 53(8):1049–1064
3. Marshall A, Mueck S, Shockley R (2015) How leading organizations use big data and analytics to innovate. Strategy Leadersh 43(5):32–39
4. Abbott D (2014) Applied predictive analytics: principles and techniques for the professional data analyst. Wiley, New York
5. Gunasekaran A, Papadopoulos T, Dubey R, Wamba SF, Childe SJ, Hazen B, Akter S (2017) Big data and predictive analytics for supply chain and organizational performance. J Bus Res 70:308–317
6. Tao F, Cheng J, Qi Q, Zhang M, Zhang H, Sui F (2018) Digital twin-driven product design, manufacturing, and service with big data. Int J Adv Manuf Technol 94(9–12):3563–3576
7. Hazen BT, Boone CA, Ezell JD, Jones-Farmer LA (2014) Data quality for data science, predictive analytics, and big data in supply chain management: an introduction to the problem and suggestions for research and applications. Int J Prod Econ 154:72–80
8. Corte-Real N, Oliveira T, Ruivo P (2017) Assessing business value of big data analytics in European firms. J Bus Res 70:379–390
9. Akter S, Wamba SF, Gunasekaran A, Dubey R, Childe SJ (2016) How to improve firm performance using big data analytics capability and business strategy alignment? Int J Prod Econ 182:113–131
10. Gartner (2018) Gartner IT glossary. www.gartner.com/it-glossary/big-data/. Accessed 18 April 2019
11. Richey RG Jr, Morgan TR, Lindsey-Hall K, Adams FG (2016) A global exploration of big data in the supply chain. Int J Phys Distrib Logist Manag 46(8):710–739
12. Bilal M, Oyedele LO, Munir K, Ajayi SO, Akinade OO, Owolabi HA, Alaka HA (2017) The application of web of data technologies in building materials information modeling for construction waste analytics. Sustain Mater Technol 11:28–37
13. De Jong J, Den Hartog D (2010) Measuring innovative work behavior. Creat Innov Manag 19(1):23–36
14. Yildiz B, Uzun S, Coskun S (2017) Drivers of innovative behaviors: the moderator roles of perceived organizational support and psychological empowerment. Int J Organ Leadersh 6(3):341–360
15. Flynn WJ (2016) Healthcare human resource management, 3rd edn. Cengage Learning, Boston
16. Mathis RL (2014) Human resource management, 14th edn. Cengage Learning, Stamford
17. Armstrong M (2018) Armstrong's handbook of performance management: an evidence-based guide to delivering high performance, 6th edn. Kogan Page, London
18. Koopmans L, Bernaards CM, Hildebrandt VH, Schaufeli WB, De Vet HCW, Van der Beek AJ (2011) Conceptual frameworks of individual work performance: a systematic review. J Occup Environ Med 53(8):856–866
19. Ghasemaghaei M, Calic G (2019) Can big data improve firm decision quality? The role of data quality and data diagnosticity. Decis Support Syst 120:38–49
20. Sanchez G (2009) PLS path modeling: an introduction with R. www.gastonsanchez.com
21. Hair JF, Sarstedt M, Hopkins L, Kuppelwieser VG (2014) Partial least squares structural equation modeling (PLS-SEM): an emerging tool in business research. Eur Bus Rev 26(2):106–121
22. Ghozali I (2016) Structural equation model concepts and applications with the AMOS 24 program, Bayesian SEM update. UNDIP Press, Semarang
23. Olugu EU, Wong KY, Shaharoun AM (2011) Development of key performance measures for the automobile green supply chain. Resour Conserv Recycl 55(6):567–579
24. Schroeck M, Shockley R, Smart J, Romero-Morales D, Tufano P (2012) Analytics: the real-world use of big data. IBM Institute for Business Value, Said Business School, New York
25. Cegielski CG, Jones-Farmer LA, Wu Y, Hazen BT (2012) Adoption of cloud computing technologies in supply chains: an organizational information processing theory approach. Int J Logist Manag 23(2):184–211
Emotion Recognition Based on Voice Using Combination of Long Short Term Memory (LSTM) and Recurrent Neural Network (RNN) for Automation Music Healing Application Daryl Elangi Trisyanto, Michael Reynard, Endra Oey, and Winda Astuti Abstract Stress has become a problem that many people around the world experience. It can decrease activity and productivity in daily life, so it is important to reduce or even heal this stress. One way to do so is by listening to appropriate music. An emotion recognition system based on voice intonation has been developed in this work. The system recognizes the emotion of the speaker and then automatically plays appropriate music based on that emotion. A combination of the Long Short Term Memory (LSTM) and Recurrent Neural Network (RNN) approaches is proposed for identifying emotion from the voice intonation of the speaker. The Mel Frequency Cepstral Coefficient (MFCC) is adopted as the voice feature, which is used as input to the combined LSTM and RNN identifier. The performance of the proposed technique has been investigated, especially for multiclass classification, and it is found to produce good accuracy within a short training time: 99.8% training accuracy and 62.5% testing accuracy. Furthermore, the system accurately plays music based on the emotion identified. Keywords Emotion recognition · Long Short Term Memory (LSTM) · Recurrent Neural Network (RNN) · Mel Frequency Cepstral Coefficient (MFCC) · Music Healing Application
1 Introduction Mental health is something that is neglected by most of the population in development country, so there are few psychiatrists and also workers dealing with mental health. According to WHO, in 2017 there were around 0.31 psychiatrists, 2.5 mental health D. E. Trisyanto · M. Reynard · E. Oey · W. Astuti (B) Automotive and Robotics Program, Computer Engineering Department, BINUS ASO School of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_70
nurses, and 0.17 social workers per 100,000 people in Indonesia [1]. Research by Riskesdas four years earlier examined how many people aged 15 years and over receive therapy: of the many people with mental problems in Indonesia, only 9% receive appropriate treatment. However, someone who has emotional or mental problems does not always need a psychiatrist or psychologist to heal. Many people perform healing in their own way, music being one of them. Music is an art in the form of sound that can, among other things, affect a person's emotional state, and one way to recognize emotion is through the intonation of a person's speech. Much research has been done in this area. Aouani et al. built an emotion recognition system based on speech signals by combining Mel Frequency Cepstral Coefficients (MFCC), Zero Crossing Rate (ZCR), Harmonic to Noise Rate (HNR), and the Teager Energy Operator (TEO) and applying Support Vector Machines (SVM) as the classifier, achieving 74.09% accuracy [2]. Zeng et al. proposed a speech emotion recognition (SER) method based on an acoustic segment model (ASM) modeled with Hidden Markov Models (HMMs), with an accuracy of 73.9% [3]. Grecov et al. improved SER accuracy by investigating emotion units defined on voiced segments; to find the optimal emotion unit, their SVM-based SER system reached an accuracy of 82–84.3%. Helmiyah et al. worked with human voices expressing five emotions: angry, bored, happy, neutral, and sad. They applied the Mel Frequency Cepstrum Coefficient (MFCC) for feature extraction and a Multi-Layer Perceptron (MLP), one of the artificial neural network methods, for classification.
The MLP classification proceeds in two stages, namely the training and the testing phase, and yields good emotion recognition: with 100 hidden layer nodes it gives an average accuracy of 72.80%, average precision of 68.64%, average recall of 69.40%, and average F1-score of 67.44% [4]. Wang et al. obtained bimodal (voice and facial) emotion recognition using a Gaussian Mixture Model; the experimental results showed that the bimodal recognition rate combined with facial expression was about 6% higher than the single-modal rate merely using prosodic features, with an accuracy of 84% [5]. In this paper, a combination of the Long Short Term Memory (LSTM) and Recurrent Neural Network (RNN) approaches is introduced and proposed for identifying emotion based on the voice intonation of the speaker. LSTM and RNN are simple and very powerful methods, owing to their optimization and generalization properties [6]. The proposed combined LSTM and RNN-based emotion identification system uses the Mel Frequency Cepstral Coefficient (MFCC) as input. The effectiveness of the proposed system is evaluated experimentally, and the results show that the proposed technique produces a good level of accuracy.
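As a sketch of what an LSTM layer computes over a sequence of MFCC frames, the forward pass of a single cell can be written in plain NumPy; the dimensions and random weights below are illustrative, not the authors' trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b, H):
    """One LSTM time step: input, forget, and output gates plus a
    candidate update, computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b                      # stacked pre-activations for 4 gates
    i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
    g = np.tanh(z[3 * H:])                     # candidate cell update
    c = f * c + i * g                          # cell state carries long-term memory
    h = o * np.tanh(c)                         # hidden state is the recurrent output
    return h, c

# Run a hypothetical sequence of 13-dim MFCC frames through one LSTM cell
rng = np.random.default_rng(1)
D, H, T = 13, 8, 20                            # feature dim, hidden units, time steps
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(T, D)):
    h, c = lstm_step(x, h, c, W, U, b, H)
assert h.shape == (H,)                         # final hidden state summarizes the sequence
```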
2 Proposed System This paper uses a combination of LSTM and RNN as the recognition engine for the speaker emotion identification system. A block diagram of conventional emotion identification is shown in Fig. 1. The system is trained to identify the emotion of the speaker based on intonation, with each person speaking a specific utterance into the microphone. The speech signal is digitized, and part of it is used to create a template for the voice pattern, which is stored in memory. The overall model of the proposed speaker emotion identification system is depicted in Fig. 2. A digital speech signal passes through processing stages in which MFCC features are extracted and fed to the combined LSTM and RNN learning algorithm for both training and testing. The identified emotion of the speaker then decides which music is played. As shown in Fig. 2, the emotion classification system consists of two main functions: 1. Feature extraction. 2. Pattern matching. Signal processing converts a continuous signal to a discrete form. Feature extraction produces a sequence of feature vectors from the speech signal to be used as input to the system. There are many techniques for extracting features from the speech signal, such as linear prediction coefficients, cepstral coefficients, fundamental frequency, and formant frequencies; in this research, MFCC-based feature extraction is used. Pattern matching measures the similarity between an unknown feature vector and the feature vectors saved in the memory of the system. The model of the speech signal is constructed from the extracted features of the voice signal. The pattern matching algorithm compares the incoming voice signal to the reference model and scores their difference, called the distance, which is used to classify the unknown pattern.
There are two types of models: generative models and discriminative models. In generative models, the pattern matching is a probability
Fig. 1 Block diagram of conventional speaker emotion identification system
D. E. Trisyanto et al.
Fig. 2 Flow chart of the proposed speaker emotion identification system
density estimation that attempts to capture the underlying fluctuation and variation of the data; examples include DWT, HMM, and GMM. In discriminative models, pattern matching is an optimization technique that minimizes the error on a set of training samples; examples include ANN and SVM. In this research, the combination of LSTM and RNN is applied to identify the voice signal from the output of feature extraction. The identified emotion is saved in a database server, and this data is used as input to select the song to be played. The entire Android music player application works from this system, as shown in Fig. 3.
2.1 Mel Frequency Cepstral Coefficient (MFCC)
The Mel Frequency Cepstral Coefficient (MFCC) is a representation of the power spectrum based on a linear cosine transform of the log power spectrum on the non-linear mel scale of frequency [4]. Basically,
Fig. 3 Music player application system for emotion healing
MFCC is a feature used in many ASR (Automatic Speech Recognition) systems because of its effectiveness in representing the human voice in an audio signal [7]. MFCC takes advantage of several well-known properties of the auditory system, provides relatively good performance, and is straightforward to implement; the MFCC feature extraction pipeline is shown in Fig. 4. Mel-scale frequencies are distributed linearly in the low-frequency range and logarithmically in the high range. Cepstral analysis computes the inverse Fourier transform of the logarithm of the power spectrum of the speech signal. To extract MFCCs from an audio signal, the following steps are performed, as shown in Fig. 4 [7]: (1) divide the signal into frames of 30 ms with a 10 ms overlap, apply a window, and take the Fourier transform of each windowed frame; (2) map the power spectrum obtained onto the mel scale using overlapping triangular windows; (3) take the log of the power at each mel frequency; (4) perform a Discrete Cosine Transform (DCT) of the list of mel log powers, as if it were a signal; (5) the amplitudes of the resulting spectrum are the MFCCs of the sound.
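The steps above can be sketched in Python. This is an illustrative implementation, not the authors' code: the 30 ms frame with a 10 ms hop, the 26 mel filters, the 512-point FFT, and the 13 retained coefficients are all assumptions chosen for the example.

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced linearly on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_points = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * hz_points / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):               # rising edge
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):              # falling edge
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr, frame_ms=30, hop_ms=10, n_filters=26, n_coeffs=13):
    frame_len = int(sr * frame_ms / 1000)
    hop_len = int(sr * hop_ms / 1000)
    n_fft = 512
    window = np.hamming(frame_len)
    # Step 1: framing, windowing, Fourier transform -> power spectrum
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop_len)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Step 2: map onto the mel scale with triangular filters
    fbank = mel_filterbank(n_filters, n_fft, sr)
    # Step 3: log of the mel-filtered energies
    mel_log = np.log(power @ fbank.T + 1e-10)
    # Steps 4-5: DCT of the log energies, keep the first coefficients
    return dct(mel_log, type=2, axis=1, norm="ortho")[:, :n_coeffs]

sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s, 440 Hz test tone
feats = mfcc(tone, sr)
print(feats.shape)
```

For a 1-second signal at 16 kHz this yields one 13-coefficient vector per 10 ms hop.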
Fig. 4 MFCC feature extraction steps [7]
2.2 Recurrent Neural Network (RNN)
The Recurrent Neural Network (RNN) is one of the best algorithms for processing sequential data [8]. Like a CNN, an RNN has an architecture that mimics the way human nerve cells receive information [9]. The difference between an RNN and a CNN is that an RNN contains an iterative process: the output depends not only on the current input but also on previous inputs. This makes the order of the inputs a very important variable when processing the data [9]. In the system created by the researchers, the RNN algorithm is used after the LSTM layer in the model.
2.3 Long Short-Term Memory (LSTM)
Long Short-Term Memory (LSTM) is an RNN variant that is widely used in training voice-based models. An LSTM unit basically contains three types of gates, namely input, forget, and output gates, and consists of neurons that function as cells. The gates regulate the flow of incoming information, so repeated data is removed from the cell state. Since LSTM belongs to the RNN family, the output of the network loops back into the hidden layer, creating an internal memory that can remember data from previous inputs [10].
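The gate behaviour described above corresponds to the standard LSTM update equations [10], reproduced here for reference (these are the textbook formulation, not notation taken from this paper; \(\sigma\) denotes the logistic sigmoid and \(\odot\) the element-wise product):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```

The forget gate \(f_t\) is what discards repeated or stale information from the cell state \(c_t\), and the recurrence through \(h_{t-1}\) provides the internal memory.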
2.4 Music Healing Method
Music consists of sounds and notes in a specific order, often combined to create a unified composition. Music is made of sounds, vibrations, and moments of silence, and it does not always have to be fun or beautiful; it can convey a wide variety of experiences, environments, and emotions. Listening to music has a positive effect on one's emotions [11]. The following types of music can lift emotions. Happy music: a happy song has a positive tone or a positive message; its lyrics usually evoke passion, inspiration, or love, or simply put happy words together in such a way that listeners feel happy when they hear the song. Calm music: a calm song has a calm tone and is good for keeping one's stress at bay; such songs are usually classical, soft pop, or other calm genres, and can calm one's emotions when listened to. New Age music: music sung with a very basic melodic line, typically by a solo female vocalist, accompanied by synthesized and sustained notes, including muted organ instruments with an airy chorus. New Age music has a slow,
free-flowing tempo. It usually uses abundant echo with dynamic contrast, and the lack of percussion gives the music an overall mellow and dreamy feeling. New Age music is highly effective at reducing feelings of anger and tension [8]. Designer music: a term introduced by the music industry to describe a genre of songs designed to influence listeners in a certain way. Designer music has a cheerful, light feel and is very easy to listen to because it requires little focus or concentration; it is effective at reducing feelings of sadness and fatigue.
2.5 Music Healing Application
Android Studio is the official Integrated Development Environment (IDE) for developing Android applications. It offers many features that are considered to increase productivity during development, and the programming languages it supports are Kotlin and Java. Firebase is used as the development platform; it functions as a key-value, multi-node realtime database optimized for synchronizing data between machines and smartphones, with central storage in the cloud. Firebase is often used in development because it can continuously deploy and sync changes between data stored on user devices and data in cloud storage. This removes many of the challenges of combining authentication, synchronization, and reconciling multiple changed versions, ensuring that exactly the same bits are present across systems [12].
3 Experimental Result
The experiments were conducted on an ROG Strix G531GT computer with an Intel i7 processor, 8 GB of RAM, and an NVIDIA GeForce GTX 1650 graphics card, using multiple voice intonations from a database. Both the voice processing and the classifier were run in Python. The training phase used the TESS dataset. The voice data display the four emotions to be detected: neutral, angry, sad, and happy. Each emotion in the dataset contains 400 voices from two voice actors, aged 26 and 64 years, who say the same carrier phrase, "Say the word", followed by different words while displaying different emotions. For testing, the RAVDESS dataset was used, in which 24 actors say the same 16 phrases while displaying different emotions. The voice data were then extracted with the MFCC method, each voice into 40 coefficients. MFCCs are taken from each voice to be extracted
and trained into a model. To predict the emotion of a sound file, its extracted MFCC data is applied as input to the classifier, and a prediction is made with the trained model. The raw signals of the emotions to be trained are shown in Figs. 5, 6, 7, and 8 for happy, angry, sad, and neutral, respectively. The neural network structure was developed by combining the LSTM algorithm with an RNN to produce better accuracy. It consists of 4 layers: a 123-node LSTM layer, followed by 64 ReLU nodes, then 32 ReLU nodes, and finally 4 softmax nodes, as shown in Fig. 9.
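A sketch of this 4-layer structure in Keras follows. This is not the authors' exact code: the input shape, which treats the 40 MFCC values of an utterance as a length-40 sequence of scalars, and the optimizer and loss settings are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# 123-unit LSTM -> Dense 64 (ReLU) -> Dense 32 (ReLU) -> Dense 4 (softmax)
model = models.Sequential([
    layers.Input(shape=(40, 1)),            # 40 MFCC values per utterance (assumed shape)
    layers.LSTM(123),                       # recurrent layer described above
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(4, activation="softmax"),  # angry / happy / sad / neutral
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Forward pass on a dummy batch to check the output shape.
probs = model.predict(np.zeros((1, 40, 1), dtype="float32"), verbose=0)
print(probs.shape)
```

The softmax output gives one probability per emotion class, and the predicted emotion is the class with the highest probability.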
Fig. 5 Raw signal of happy emotion
Fig. 6 Raw signal of angry emotion
Fig. 7 Raw signal of sad emotion
Fig. 8 Raw signal of neutral emotion
Fig. 9 Structure of the LSTM network
Forty epochs were applied for training the data; this number yields good accuracy, while a larger or smaller number of epochs causes an overfit or underfit model. The voice data serve as input to the MFCC feature extraction technique, a step that helps the device separate and recognize the human voice by minimizing the variability in it. For each voice in the dataset, 40 MFCCs are extracted and entered into an array. The following is an example of the Python code used to perform feature extraction on the dataset. After feature extraction, system identification is carried out: a model is created to distinguish the emotions in the voices to be trained, using a mixture of RNN and LSTM to identify the data to be studied.
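A hedged sketch of that extraction step (illustrative only, since the original code is not reproduced in this text): each recording's 40 × n_frames MFCC matrix is reduced to a fixed 40-value vector by averaging over time, and the vectors are stacked into the training array. Random matrices stand in for real MFCCs here, which in practice would come from a library such as librosa (librosa.feature.mfcc with n_mfcc=40); the averaging-over-time step is an assumption.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "neutral"]
rng = np.random.default_rng(0)

features, labels = [], []
for label, emotion in enumerate(EMOTIONS):
    for _ in range(400):                          # 400 utterances per emotion (TESS)
        # Stand-in for librosa.feature.mfcc(...): 40 coefficients x 120 frames.
        mfcc_matrix = rng.normal(size=(40, 120))
        features.append(mfcc_matrix.mean(axis=1))  # average over time -> 40 values
        labels.append(label)

X = np.stack(features)   # classifier input, one row per utterance
y = np.array(labels)     # integer emotion labels
print(X.shape, y.shape)
```

This produces the 1600 × 40 feature array and the 1600 labels that the classifier described in the text is trained on.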
The ANN training uses 1600 inputs: 400 voice recordings each for the angry, happy, sad, and neutral emotions, all extracted with the MFCC feature extraction technique. Before training, the split between validation and training data was determined: 6.25% of the 1600 recordings, i.e. 100 recordings, were held out for validation. The training accuracy, validation accuracy, and testing accuracy are shown in Tables 1, 2, and 3, respectively. The testing data used one of the actors from the RAVDESS dataset, comprising 16 voices with the same number per emotion, namely four voices each. The result for the happy emotion from the voice emotion identification system can be seen in Fig. 10. This output is obtained when the voice identification application sends happy-emotion data to the database; the Android phone retrieves this data and processes it to determine the song to be played. Because the detected emotion is happy, a song with a happy tone is played.
Table 1 Training accuracy result

No.   Emotion   Number of data   Training accuracy
1     Angry     375              99.8% (all emotions)
2     Sad       375
3     Happy     375
4     Neutral   375

Table 2 Validation accuracy result

No.   Emotion   Number of data   Validation accuracy
1     Angry     25               100% (all emotions)
2     Sad       25
3     Happy     25
4     Neutral   25

Table 3 Testing accuracy result

No.   Emotion   Number of data   Testing accuracy
1     Angry     4                62.5% (overall, 16 voices)
2     Sad       4
3     Happy     4
4     Neutral   4
Fig. 10 Output data from happy emotion
4 Conclusion
An emotion identification system based on MFCC and a combination of LSTM and RNN was developed in this work. The system works well, with training and testing accuracies of 100% and 62.5%, respectively. Although
this is quite good, it can be concluded that the system needs further development so that it can be used and applied in other applications. Expanding the training and testing datasets could increase the accuracy of the system and help it achieve high test accuracy. The voice-based emotion healing application with songs was also implemented in this work and works well.
References
1. Rahvy A, Habsy A, Ridlo I (2020) Actual challenges of mental health in Indonesia: urgency, UHS, humanity, and government commitment. Eur J Public Health 30(Supplement_5)
2. Aouani H, Ben Ayed Y (2020) Speech emotion recognition with deep learning. Procedia Comput Sci 176:251–260
3. Zheng S, Du J, Zhou H, Bai X, Lee C, Li S (2021) Speech emotion recognition based on acoustic segment model. In: International symposium on Chinese spoken language processing, pp 1–5
4. Helmiyah S, Riadi I, Umar R, Hanif A (2021) Speech classification to recognize emotion using artificial neural network. Khazanah Inform: J Ilmu Komput dan Inform 7:12–17
5. Wang Y, Yang X, Zou J (2013) Research of emotion recognition based on speech and facial expression. TELKOMNIKA Indones J Electr Eng 11:83–90
6. Goldberg DE, Holland JH (1988) Genetic algorithms and machine learning. Mach Learn 3:95–99
7. Prabakaran D, Sriuppili S (2021) Speech processing: MFCC based feature extraction techniques—an investigation. J Phys Conf Ser 1717:1–15
8. Gaia JWP, Ferreira RW, Pires DA (2021) Effects of physical activity on the mood states of young students. J Phys Educ 32:1–25
9. Yanuar RA (2018) Recurrent neural network (RNN). Universitas Gadjah Mada, Menara Ilmu Machine Learning, Yogyakarta
10. Hochreiter S, Schmidhuber J (1997) LSTM can solve hard long time lag problems. Adv Neural Inf Process Syst 473–479
11. AlAmmar WA, Albeesh FH, Khattab RY (2020) Food and mood: the corresponsive effect. Curr Nutr Rep 9:296–308
12. Pan D (2016) Firebase tutorial, pp 1–5
A Systematic Review of Marketing in Smart City Angelie Natalia Sanjaya , Agung Purnomo , Fairuz Iqbal Maulana , Etsa Astridya Setiyati , and Priska Arindya Purnama
Abstract Good marketing combined with entrepreneurship will help build smart cities across the country, creating efficiency in the welfare of their citizens through technology-based development. This study aims to examine the existing literature and research on marketing in the smart city. PRISMA guidelines were adopted to conduct and report a systematic literature review. Based on a systematic search of the Scopus database, a total of 21 reviewed articles were included, carefully selected for an in-depth analysis of the interrelated concepts of smart city and marketing. The results show that there have been several publications on marketing in the smart city across multilevel, industry, and perspective analyses. The top research country was Italy, and the most widely used level of multilevel analysis was the individual level. Further research is possible, such as smart city marketing research using team, network, and institutional levels, or with perspectives in information technology or health. Keywords Entrepreneurship · Marketing · Multilevel Analysis · Smart City · Systematic Literature Review
A. N. Sanjaya (B) · A. Purnomo · F. I. Maulana · E. A. Setiyati · P. A. Purnama
Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_71

1 Introduction
The smart city is a city concept with entrepreneurship that focuses on developing technology, managing resources efficiently, and supporting the community by providing the right information. Every urban area in developed countries today needs a smart city, not only to compete with other countries but also to promote smart city development and increase the productivity of its citizens [1]. Smart cities help them increase flexibility and adapt quickly to market changes by providing innovative solutions [2]. Developing a smart city requires a great deal of information because a smart city has a broad scope, covering the daily life of its citizens [3]. To support the smart city idea, the relationship between creativity, innovation, and
technology is discussed as a determinant of the concept of intelligence, which also applies to marketing [4]. Marketing has an important role in smart cities. Smart city development will be prioritized as an important dimension, and smart branding can serve as one development strategy [5]. Good marketing and communication is a strategy to spread accurate information quickly through technology. Research on both traditional and digital marketing will continue to develop nationally, regionally, and internationally [6]. A smart branding strategy based on creativity and innovation, relying on advanced information and communication technology, is an important driving force for smart city development [4]. However, not many studies have examined marketing in smart cities using the systematic literature review approach. A Systematic Literature Review (SLR) is a research method that collects data from previous research on a topic, including journals and conference papers, for observation [7]. This study uses qualitative data collection from previous research papers as references for observation [8]. A description of the protocol and notes on the steps for document retrieval, exclusion and inclusion of documents, and analysis are also attached [9]. This study poses a research question: what is the state of the existing literature and research on marketing in the smart city? Through a systematic literature review, this study aims to review the existing literature and research on marketing in the smart city.
2 Methods
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to conduct the systematic literature review in this research, based on an exhaustive search of a large literature database. We combined appropriate keywords related to smart cities and marketing research to find relevant articles in the worldwide Scopus database. The Scopus database was used as the main source of information because scholars consider it a reliable source of scientific research. This research uses the keywords "smart city" and "marketing" in the title, abstract, and author keywords to retrieve relevant data from the Scopus database, as shown in Fig. 1. Data mining was limited to complete years, collecting only years whose publications were fully available. To avoid bias, all retrieved articles were fully read, screened, and mapped by several authors. The search query used in data mining was [TITLE-ABS-KEY ("smart cit*") AND TITLE-ABS-KEY (marketing)] AND [LIMIT-TO (SUBJAREA, "BUSI")] AND [LIMIT-TO (SRCTYPE, "j")] AND [EXCLUDE (PUBYEAR, 2022)] as of October 2022. We found 21 articles in this phase. This study used journal data published from 2015 to 2021. This SLR uses quantitative, industry, perspective, and multi-level analyses. The quantitative analysis consists of annual publications and geographical contexts [6]. The multi-level analysis includes levels of individual, team, company, network,
Fig. 1 PRISMA protocol
and institutional. The industry analysis covers communication, services, manufacturing, information technology, and health. The smart city system perspective analysis covers smart people, smart economy, smart mobility, smart environment, smart living, and smart governance.
Fig. 2 The marketing in the smart city sector’s annual publications
3 Results and Discussion
In the area of marketing in smart cities, we describe the current status of the existing literature and research based on quantitative, industry, perspective, and multi-level analyses.
3.1 Annual Publications
Figure 2 presents the 21 documents by year of publication. The number of publications related to marketing in the smart city has trended upward since 2015, with a high growth rate between 2019 and 2020 and a peak in 2021. Marketing in smart city systems became a trend in that year because it offered effective solutions to urban problems [10].
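The annual and geographic tallies behind this kind of analysis are simple to reproduce; the sketch below (using hypothetical records, not the study's actual data) counts screened publications per year and per country in the way Figs. 2 and 3 summarize them.

```python
from collections import Counter

# Hypothetical (year, country) pairs for screened articles.
records = [
    (2015, "Italy"), (2019, "Spain"), (2020, "Italy"), (2020, "UK"),
    (2021, "Italy"), (2021, "US"), (2021, "India"),
]

per_year = Counter(year for year, _ in records)        # annual publications
per_country = Counter(country for _, country in records)  # geographical contexts

print(per_year.most_common(1))     # peak publication year
print(per_country.most_common(1))  # top research country
```

With real bibliographic exports (e.g. a Scopus CSV), the same two counters give the annual-publication curve and the per-country ranking directly.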
3.2 Geographical Contexts 22 countries studied marketing in the smart city (can be seen in Fig. 3). Italy was the top research country for publications on marketing in the smart city (n = 5). Next followed by Spain (n = 3), United Kingdom (n = 3), United States (n = 3), India (n = 2), Australia (n = 1), Bahrain (n = 1), China (n = 1), and Cyprus (n = 1) were
Fig. 3 Number of annual publications on marketing in the smart city by nation
the next countries to join. Italy, Spain, the United Kingdom, and the United States were the most active countries in terms of marketing in smart city research.
3.3 Multilevel and Industry Analysis
Research on marketing in the smart city can be analyzed using multilevel analysis and industry analysis, as shown in Table 1. The reviewed studies primarily emphasize five levels of analysis: individual, team, firm, network, and institutional. The majority of the publications included in this research examine smart city systems in the context of marketing at the individual level of analysis. In total, seven publications concentrate on the individual level, that is, on people who work primarily as entrepreneurs or members of business teams. The first and second authors discuss the importance of a smart destination branding strategy with community participation, innovative creativity [4], and unique elements as driving forces for smart city development [5]. The third author examines some of the challenges facing traditional video technologies as they pursue consumer demand for high-definition and virtual immersive interactive environments [11]. The fourth author examines the development of smartphone applications to meet the demands of tourists and local visitors [12]. The following authors examine the management and marketing of tourism cities, which require a rethink of tourism and urban policies [13], and smart strategies to deal with tourism challenges [14]. In the development of information technology, there was a debate over whether artificial intelligence could rule the planet [15], followed by discussion and improvement of recommendation systems for smart cities [16]. Three publications use the team as the primary level of analysis. The first article builds on the antecedent-phenomenon-consequence framework of the innovative marketing strategy adopted by smart cities for internationalization [17]. The next
Table 1 Multilevel analysis and industry analysis of marketing in smart city

Analysis of industry and multilevel   Individual        Team       Firm            Network
Communication                         [4, 5, 11, 12]    [17]       [2, 20, 21]     [22, 23]
Services                              [13, 14]          [18, 19]
Manufacturing                         [24]
Information technology                [15, 16]
Health                                                             [25, 26], [27]  [28]
article contains efforts to develop and implement indicators for measuring the progress of smart destinations [18], as well as a contribution to a rethink based on human–technology interactions that are redefining the knowledge gap in tourism [19]. Then, there were six publications on smart city systems that analyze marketing at the firm level. The first group presents an innovative alternative distribution scheme to optimize the delivery of soft drinks to end customers in the "At Work" sales channel [2]. The second and third groups report that the socio-economic situation of Turin has changed its branding strategy, mainly due to the ongoing economic crisis [20], followed by a report on the development of Barcelona as a smart city with conceptualized destinations and strategies of its own [21]. The next group raised the issue of the social role of smart city planners in providing electric vehicle services and assisting their production by developing skills [26]; the group then focused on the overseas expansion of Korean Mobike, because little research has examined the role of partnership between the private and public sectors in bicycle-sharing companies [25]. The last group describes the "Trikala Quit Smoking" initiative, setting an example of achieving the good long-term goal of protecting adults and children [27]. Three publications in this research take network-level analysis. The first group observed a combination of tourist cities on the Mediterranean coast to get the point of view of a group of territorial interests [22]. The second group observed urban development encouraging a collaborative approach between sports and smart cities [28], as well as entrepreneurship and smart cities [23]. There were several research gaps in the study of smart city marketing.
Few studies examine the information technology perspective using multilevel analysis at the team, firm, network, and institutional levels. Research from the health perspective has also not been linked to individual and team analyses. Meanwhile, the service perspective has not been linked to firm and network analyses, and manufacturing perspective research has not been linked to team and network analyses. The least researched levels for smart city marketing were the team, network, and institutional levels; there has never been any research at the institutional level. The least studied perspectives for marketing in the smart city were health and information technology.
Table 2 Smart city system perspective of marketing in smart city

Smart city system    Articles
Smart people         [11, 16]
Smart economy        [17, 20, 23]
Smart mobility       [2, 15, 24–26]
Smart environment    [27]
Smart living         [13, 18, 28]
Smart governance     [4, 5, 12, 14, 21, 22]
3.4 Perspective Analysis
The smart city system has six components, as shown in Table 2. The first is smart people, the main and most important component, because it involves democracy, creativity, and open-mindedness. The smart economy discusses entrepreneurship, productivity, and trade markets. Smart mobility takes international, national, and local accessibility into account. Smart living demonstrates the importance of carbon footprint, pollution, natural landscapes, and their equivalents. The last is smart governance, which includes access to various cultural facilities for minority and majority communities and to education [29]. Two publications concentrate on the smart people perspective: they developed a smart scheme to solve a problem [11] and a recommender system model for a smart city [16]. Then, there were three publications on the smart economy component. These articles discuss various innovative marketing strategies adopted by smart cities [17], the socio-economic situation of a city [20], and the influence of smart city characteristics on entrepreneurship [23]. Furthermore, five publications link the smart mobility component with marketing. These articles offer innovative solutions in the distribution of multinational sales channels [2] and rapid advances in worldwide communications [15]. The next article looks at the overseas expansion of Korean Mobike [25]. The last two articles discuss innovative transportation, namely the adoption of electric vehicles (EVs) [24] and their vehicle service areas [26]. Only one publication analyzes the relationship between marketing and the smart environment: it develops a program to reduce air pollution from cigarette smoke [27]. Next, three publications analyze the components of smart living.
These articles discuss tourism attractions, including the tourism gap within smart cities [13] and the application of a three-level indicator system for smart tourism destinations [18]. The next article discusses health conditions and exercise in smart city development [28]. The last component is smart governance, analyzed in six related articles. The first discusses public and social services, including how community participation can be a driving force for smart city development [4]. The second article analyzes the meaning of a smart city from a visitor's perspective [5] and provides an overview of the smart city paradigm [22]. The next article develops dedicated smartphone applications to meet consumer needs
[12]. The last articles analyze the concept of a smart city by considering the strategic role of technology [14], which is very useful in implementing brand image [21]. There are some gaps in smart city marketing research from the smart city system perspective: the least analyzed components are the smart environment and smart people. From this gap, further analysis is possible, because both components are very important for smart city development and marketing.
4 Conclusions
Marketing plays a key role in promoting the development of smart cities, which facilitate the seamless dissemination of information by citizens. This study focuses on disseminating research findings related to marketing in the smart city by providing quantitative analysis of the related literature, such as annual publications and geographic location. In addition, this study presents the results of industry analysis (covering communication, services, manufacturing, information technology, and health), multi-level analysis (individual, team, firm, network, and institutional), and perspective analysis of the smart city system (smart people, smart economy, smart mobility, smart environment, smart living, and smart governance). The results indicate that marketing in the smart city has been studied in various fields of science and industry sectors. Annual publications show an increasing trend of research in the field. Italy was the leading research country, having published five articles on marketing in the smart city. The most common level of analysis in smart city marketing was the individual level, with nine papers; in contrast, no analysis was conducted at the institutional level. Further research opportunities, especially in smart environment research, include using the team, network, and institutional levels, and asking how marketing in smart city research links the information technology and health perspectives. The study of digital marketing in smart cities using the metaverse and Web 3.0 is also interesting. It is hoped that this overview will pave the way for new research on sub-knowledge perspectives and advanced analytics. Acknowledgements We would like to express our appreciation to Bina Nusantara University for its support in providing the funds that made this publication possible.
We would also like to thank the Entrepreneurship Department at Bina Nusantara University for providing the support and facilities that allowed this research to be completed.
A Systematic Review of Marketing in Smart City
References
1. Madyatmadja ED, Nindito H, Dian Sano AV, Purnomo A, Haikal DR, Sianipar CPM (2021) Application of internet of things in smart city: a systematic literature review. In: Proceedings of 2021 1st international conference on computer science and artificial intelligence, ICCSAI 2021, pp 324–328. https://doi.org/10.1109/ICCSAI53272.2021.9609771
2. Buestán G, Cañizares K, Camacho C, Suárez-Núñez C (2020) Distribution trends in industry 4.0: case study of a major soft drink multinational enterprise in Latin America. Logistics Journal. https://doi.org/10.2195/lj_NotRev_buestan_de_202009_01
3. Madyatmadja ED, Munassar AH, Sumarlin, Purnomo A (2021) Big data for smart city: an advance analytical review. In: Proceedings of 2021 1st international conference on computer science and artificial intelligence, ICCSAI 2021, pp 307–312. https://doi.org/10.1109/ICCSAI53272.2021.9609728
4. Trinchini L, Kolodii NA, Goncharova NA, Baggio R (2019) Creativity, innovation and smartness in destination branding. International Journal of Tourism Cities 5:529–543. https://doi.org/10.1108/IJTC-08-2019-0116
5. Chan CS, Peters M, Pikkemaat B (2019) Investigating visitors’ perception of smart city dimensions for city branding in Hong Kong. International Journal of Tourism Cities 5:620–638. https://doi.org/10.1108/IJTC-07-2019-0101
6. Purnomo A, Asitah N, Firdausi N, Putra SW, Raya MKF (2021) A study of digital marketing research using bibliometric analysis. In: Proceedings of 2021 international conference on information management and technology, ICIMTech 2021, pp 807–812. https://doi.org/10.1109/ICIMTECH53080.2021.9535086
7. Madyatmadja ED, Noverya NAR, Surbakti AB (2021) Feature and application in smart campus: a systematic literature review. In: Proceedings of 2021 international conference on information management and technology, ICIMTech 2021, pp 358–363. https://doi.org/10.1109/ICIMTECH53080.2021.9535021
8. Tranfield D, Denyer D, Smart P (2003) Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br J Manag 14:207–222. https://doi.org/10.1111/1467-8551.00375
9. PRISMA, https://prisma-statement.org/. Last accessed 13 Oct 2022
10. Kim JH (2022) Smart city trends: a focus on 5 countries and 15 companies. Cities 123. https://doi.org/10.1016/j.cities.2021.103551
11. Wang K, Shawl RQ, Neware R, Dylik J (2021) Research on immersive interactive experience of content e-commerce live users in the era of computer digital marketing. Int J Syst Assur Eng Manag. https://doi.org/10.1007/s13198-021-01310-9
12. Iványi T, Bíró-Szigeti S (2019) Smart city: studying smartphone application functions with city marketing goals based on consumer behavior of generation Z in Hungary. Period Polytech Soc Manag Sci 27:48–58. https://doi.org/10.3311/PPso.12451
13. Coca-Stefaniak JA (2020) Beyond smart tourism cities—towards a new generation of “wise” tourism destinations. Journal of Tourism Futures 7:251–258. https://doi.org/10.1108/JTF-11-2019-0130
14. della Corte V, D’Andrea C, Savastano I, Zamparelli P (2017) Smart cities and destination management: impacts and opportunities for tourism competitiveness. European Journal of Tourism Research 17:7–27. https://doi.org/10.54055/ejtr.v17i.291
15. Portnoff AY, Soupizet JF (2018) Artificial intelligence: opportunities and risks. Futuribles: Analyse et Prospective 5–26
16. Tayal S, Sharma K (2019) The recommender systems model for smart cities. International Journal of Recent Technology and Engineering 8:451–456. https://doi.org/10.35940/ijrte.B1083.0782S719
17. Christofi M, Iaia L, Marchesani F, Masciarelli F (2021) Marketing innovation and internationalization in smart city development: a systematic review, framework and research agenda. Int Mark Rev 38:948–984. https://doi.org/10.1108/IMR-01-2021-0027
A. N. Sanjaya et al.
18. Ivars-Baidal JA, Celdrán-Bernabeu MA, Femenia-Serra F, Perles-Ribes JF, Giner-Sánchez D (2021) Measuring the progress of smart destinations: the use of indicators as a management tool. Journal of Destination Marketing and Management 19. https://doi.org/10.1016/j.jdmm.2020.100531
19. Pasquinelli C, Trunfio M (2020) Reframing urban overtourism through the smart-city lens. Cities 102. https://doi.org/10.1016/j.cities.2020.102729
20. Vanolo A (2015) The image of the creative city, eight years later: Turin, urban branding and the economic crisis taboo. Cities 46:1–7. https://doi.org/10.1016/j.cities.2015.04.004
21. Marine-Roig E, Anton Clavé S (2015) Tourism analytics with massive user-generated content: a case study of Barcelona. J Destin Mark Manag 4:162–172. https://doi.org/10.1016/j.jdmm.2015.06.004
22. Sigalat-Signes E, Calvo-Palomares R, Roig-Merino B, García-Adán I (2020) Transition towards a tourist innovation model: the smart tourism destination: reality or territorial marketing? J Innov Knowl 5:96–104. https://doi.org/10.1016/j.jik.2019.06.002
23. Richter C, Kraus S, Syrjä P (2015) The smart city as an opportunity for entrepreneurship. Int J Entrep Ventur 7:211–226. https://doi.org/10.1504/IJEV.2015.071481
24. Shareeda AR, Al-Hashimi M, Hamdan A (2021) Smart cities and electric vehicles adoption in Bahrain. J Decis Syst 30:321–343. https://doi.org/10.1080/12460125.2021.1911024
25. Li L, Park P, Yang SB (2021) The role of public-private partnership in constructing the smart transportation city: a case of the bike sharing platform. Asia Pacific Journal of Tourism Research 26:428–439. https://doi.org/10.1080/10941665.2018.1544913
26. Graham G, Burns L, Hennelly P, Meriton R (2019) Electric sports cars and their impact on the component sourcing process. Bus Process Manag J 25:438–455. https://doi.org/10.1108/BPMJ-11-2017-0335
27. Skerletopoulos L, Makris A, Khaliq M (2020) “Trikala quits smoking”: a citizen co-creation program design to enforce the ban on smoking in enclosed public spaces in Greece. Soc Mar Q 26:189–203. https://doi.org/10.1177/1524500420942437
28. Dickson G, Zhang JJ (2021) Sports and urban development: an introduction. Int J Sports Mark Spons 22:1–9. https://doi.org/10.1108/IJSMS-11-2020-0194
29. Singh S (2015) E-governance for smart cities. Springer, Singapore. https://doi.org/10.1007/978-981-287-287-6
Design and Development Applications HD’R Comic Cafe Using Augmented Reality

Charin Tricilia Hinsauli Simatupang, Dewi Aliyatul Shoviah, Fairuz Iqbal Maulana, Ida Bagus Ananta Wijaya, and Ira Audia Agustina

Abstract Augmented Reality (AR) is a technology that can bring virtual objects into the real environment. An augmented reality environment allows users to interact with virtual objects, giving them a hands-on experience. For this reason, we designed an AR application to be used as a presentation medium for redesigned interactive elements in the interior of a cafe in the city of Malang, HD’R Comic Cafe, which provides comics and board games and is commonly known as a cafe library. The application was built with the Unity 3D engine and can display objects in real time using the marker-based tracking method. The results of this design are interactive products for the first and second floors, which aim to test the use of an AR application in the scope of interior design, especially for users of the interactive products at HD’R Comic Cafe.

Keywords Catalog · Cafe · Augmented Reality · 3D Object
C. T. H. Simatupang · D. A. Shoviah · I. B. A. Wijaya · I. A. Agustina Interior Design Department, School of Design, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] D. A. Shoviah e-mail: [email protected] I. B. A. Wijaya e-mail: [email protected] I. A. Agustina e-mail: [email protected] F. I. Maulana (B) Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_72
C. T. H. Simatupang et al.
1 Introduction

As times change, more and more new technological innovations emerge with the aim of making it easier for humans to do various things. Technology plays a very important role in the development of this era, covering many human work activities in everyday life. One technology that is currently popular is Augmented Reality (AR), which combines two-dimensional or three-dimensional projections with the real world so that virtual objects look real [1]. AR effectively joins two worlds, the real and the virtual, and provides updated information in real time. Because of its many uses and benefits, AR has been adopted in various fields, including health [2], IT [3], games [4], and interior design [5]. In interior design, AR can be used to arrange the placement of furniture or as a medium to inform a wide audience that an interior contains particular furniture or interactive products. Such designs can be implemented in Unity 3D, an engine used to develop 2D or 3D applications such as games [6], as well as AR and VR applications. Therefore, we designed an AR application to be used as a presentation medium for redesigned interactive elements in the interior of a cafe in Malang, HD’R Comic Cafe, which provides comics and board games and is commonly known as a cafe library. A cafe library is a cafe that stocks reading books which visitors can borrow and read on site, so that they can eat and relax while taking time to read, thereby increasing their interest in reading.
The existing interior of this cafe uses the color tones of its logo, namely yellow and red. Yellow is widely applied throughout the interior and has become its characteristic feature. This research builds on a previous study, titled “Interactive Design on the Interior of HD’R Comic Cafe Malang”. The difference between the two studies is that the previous study focused on redesigning the interior with the application of interactive media, while this study discusses the role of AR as a presentation medium for the results of the interior redesign of HD’R Comic Cafe.
2 Methods

2.1 Research and Development Design

In this study, the researchers chose the ADDIE development model because it is considered effective and dynamic and supports the performance of the program being carried out [7]. The ADDIE model consists of five
Design and Development Applications HD’R Comic Cafe Using …
Fig. 1 Development method steps of this application
interrelated components and is structured systematically. Its five stages must be applied in order, from the first to the fifth, and cannot be carried out randomly. These five stages are very simple compared to other design models [8]. The steps of ADDIE development research in this study are presented in the form of a chart [9], as follows (Fig. 1):

a. Analyze. In the analysis stage, the researcher analyzes the problems and needs of the community, especially regarding presentation media in the interior design world. The analysis carried out by the researchers covers three things: problem analysis, needs analysis, and analysis of the world of interior design [10].

b. Design. In the second stage of the ADDIE model, the researcher begins to design the 3D objects used as presentation materials and then creates the layout and appearance of the 3D catalog that will be used as a presentation [11]. The researchers make several alternative designs according to the theme of the object and create the content to be delivered in the 3D catalog, one element of which is Augmented Reality [12].

c. Develop
The development stage realizes the 3D catalog using the Unity application. At this stage, the 3D catalog is developed according to the design, and then validated using the instruments compiled in the previous stage. Validation was carried out to assess the validity of the content and functionality of the created 3D catalog [13].

d. Implement. At this stage, the researcher applies the 3D catalog to Android and tests it by distributing survey questionnaires to several respondents [14]. Based on the survey results, the researchers analyze the indicators of achieved functionality to see the effectiveness of using the developed 3D catalog as a presentation medium [15].

e. Evaluate. At this stage, the researcher makes a final revision of the 3D catalog based on the input obtained from the survey questionnaire, so that the 3D catalog is genuinely appropriate and can be used by designers and the wider community.
2.2 System Design

This block diagram is a basic illustration of the process of making the AR application as a presentation medium for the results of the interior redesign of HD’R Comic Cafe. The block diagram can be seen in Fig. 2. The full explanation of the block diagram is as follows:

1. The system design process for the Augmented Reality application for HD’R Comic Cafe has several stages. The first stage is to create the interactive HD’R Comic Cafe product objects using SketchUp and apply material or texture to the products. This initial stage is the main reference, so that when the AR Android application is run, the products we want to present appear to the user.
2. The second stage is to create the interface using Unity 3D, building the application by creating several page or scene menus.
3. The third stage is to make markers with the Vuforia Engine. Vuforia is used because it is a tool for building AR applications that can display the desired product in 3D, which helps smooth the creation of AR applications in Unity.
4. The last stage is to build the work made in Unity into an APK, paying attention to the settings according to the desired needs.
Fig. 2 System diagram for application creation
3 Results and Discussion
3.1 Analysis of the Existing Condition

The existing first floor of HD’R Comic Cafe, Malang is a reading and dining area with several tables and chairs as well as shelves containing comic books and board games placed against the right and left walls, so that the middle becomes the center of traffic for employees and visitors. It can be seen that the application of interactive design in this room is still lacking (Fig. 3). The existing second floor of HD’R Comic Cafe is a relaxing area with chairs and tables. Unlike the first floor, which has many bookshelves, the second floor is devoted to relaxing while doing tasks or eating snacks. The terrace area on the second floor has an interactive concept, namely the use of green plants, but it is still underdeveloped, and the placement and selection of the plants feel incompatible with the interior of the second floor.
Fig. 3 Interior of HD’R Comic Cafe 1st floor and 2nd floor
3.2 3D Design Result

The 3D interior of the first floor was produced along with interactive designs in the form of a touchscreen, an origami wall, and kinetic doors. The touchscreen looks like an ordinary TV; what distinguishes it is that it is operated by touch and serves as a scoring aid for playing board games. To make it look more beautiful and attractive, the touchscreen is given additional accessories on its edges so that it resembles a picture frame. The origamic wall was created as a wall for the cafe’s logo, with the name “HD’R Comic Cafe” in the middle surrounded by origami shapes along the edges. The origami shapes are simple triangles so that the design remains harmonious with the interior of the 1st floor. The last design for the first floor is the kinetic room divider, a room accessory that attracts visitors when they enter the room. Its motion system is operated manually, so visitors can touch it (Figs. 4 and 5). The 3D interior of the 2nd floor applies interactive designs in the form of nano possibilities, kinetic frontiers, and integrating nature. The nano possibilities take the form of a touchscreen on the table that helps visitors with information on where the book they want to read is placed. This touchscreen is made smaller so that the edges can be
Fig. 4 3D design result of HD’R Cafe
Fig. 5 3D design asset of HD’R Cafe
used to place food or books. Furthermore, integrating nature uses hanging plants as an interactive design on the interior of the 2nd floor; in the existing interior, nature was not utilized well and appropriately, so it was redesigned so that green plants can be used properly. Lastly, kinetic frontiers are used as a partition placed in the middle of the bookshelf as a room accessory. Just like the kinetic doors on the first floor, the system is moved manually and by the wind.
3.3 Application Design Preparation

The application will be tested using a personal cellphone, and the results will be shown as screenshots (Fig. 6).
3.4 Implementation Results

In Unity, a program is called a script, which functions only within the application that hosts it; in other words, a Unity script cannot be used in other programs. The project contains scenes that store the application creation process. The scenes created in this catalog application are as follows (Fig. 7). As on the first floor, there is a camera view for AR on the 2nd floor; markers from the 1st floor or other markers will not be detected by the 2nd-floor application camera.
Fig. 6 Menu view of HD’R Cafe app
Fig. 7 Kinetic partition view, interactive table, integrating nature (from left to right)
Fig. 8 Stages of designing an application system
3.5 Functionality Testing

The age data show that the respondents who filled out this questionnaire ranged from adolescents to adults, 20–40 years old, an age range that is the productive period in which a person is close to technology. By gender, 17.6% of the respondents were men and 82.4% were women. The data show that respondents found the application menus and features easy to use, with a percentage of 88.2% (15 people), while 2 people were still confused. Respondents were satisfied with the application we made, with a percentage of 76.5% (13 people), while 4 people felt unsatisfied (Fig. 8). The highest share of respondents, 41.2% (7 people), took 1–3 s for the responsive AR camera feature to run on the marker, while 2 respondents still took more than 10 s. The respondents also gave hopes, criticisms, and suggestions for this application in the future: the response to the provided markers should be improved; features should be added to make the application more complete; and an exit feature should be added to the camera, to make it easier when encountering problems such as objects that are stuck or do not appear, so that users can exit the camera screen and return without having to leave the application and start from scratch (Table 1). From the results of testing the functionality of the HD’R Cafe 3D Catalog Application, it can be concluded that the functionality testing went as expected and was successful.
4 Conclusions

Augmented Reality can be used in various ways, one of which is in interior design. The existing problem is that the use of facilities or products at the
Table 1 Test result of functional HD’R Cafe application

Test | Test form | Expected results | Test result
Installing the APK installer app on an Android smartphone | Entering and installing the GloVic application.apk | The installation process completes properly on the Android smartphone | Succeed
Run the installed application | Open the app | Runs and the application opens properly | Succeed
Back button touched | Touch the back button | Back to the previous menu | Succeed
Home button touched | Touch the home button | Back to the main menu | Succeed
Detector on a created and predefined catalog (marker) | Pointing the smartphone camera at the created and defined catalog (marker) | Outputting 3D objects according to their markers | Succeed
selected cafe is not interactive, which gives rise to a solution: implementing interactive products at HD’R Comic Cafe using the Unity engine. This can attract users through adequate products and facilities and lets them find out which facilities will appear in the interior of the cafe. In designing this AR application, various aspects were considered, one of which is the AR display, the focal point of the design. From the results of the questionnaire distributed to several respondents, it can be concluded that the application feels easy to use: its features are simple in form, but the design still looks good despite that simplicity. Behind the ease of use, some features still need to be improved: one marker is less responsive when the camera points at it, and the AR camera section lacks an “exit” feature, which makes some users less comfortable because they first have to leave the application or exit to the phone’s main menu. Therefore, this application needs further improvement in terms of features, to make it easier for users and to fix the Vuforia markers so that the camera can respond to 3D AR objects quickly.
References
1. Magfiroh L (2019) Prospek bisnis transportasi online dalam masyarakat industrial: pendekatan islamic innovation disruptif. Available: http://digilib.iain-palangkaraya.ac.id/1843/
2. Vaughan T, Rogers S (1998) Multimedia making it work, 4th edn. McGraw-Hill Inc., USA
3. Mega FK (2018) Aplikasi Augmented Reality Berbasis Vuforia dan Unity Pada Pemasaran Mobil. JISA (Jurnal Inform. dan Sains) 1(2):52–56. https://doi.org/10.31326/jisa.v1i2.502
4. Ardiansyah L (2017) Perancangan corporate identity berupa stationery set CV. Hensindo Media melalui teknik vektor. Institut Bisnis dan Informatika Stikom Surabaya
5. Putri Nourma Budiarti R, Annas Susanto F, Kristianto B, Nerisafitra P (2019) Pengembangan Desain Interaktif 3D VR-Room Patient Menggunakan Unity 3D Engine dan Evaluasi Usability Testing. J Ilm Inform 4(2):79–87. https://doi.org/10.35316/jimi.v4i2.584
6. Nursita YM, Hadi S (2021) Development of mobile augmented reality based media for an electrical measurement instrument. J Phys Conf Ser 2111(1):012029. https://doi.org/10.1088/1742-6596/2111/1/012029
7. Sriwahyuni T, Kamaluddin, Saehana S (2021) Developing android-based teaching material on temperature and heat using ADDIE model. J Phys Conf Ser 2126(1):012021. https://doi.org/10.1088/1742-6596/2126/1/012021
8. Maulana FI, Pangestu G, Sano AVD, Purnomo A, Rahmadika S, Widartha VP (2021) Contribution of virtual reality technology to increase student interest in vocational high schools. In: 2021 international seminar on intelligent technology and its applications (ISITIA), pp 283–286. https://doi.org/10.1109/ISITIA52817.2021.9502195
9. Maulana FI, Hidayati A, Agustina IA, Purnomo A, Widartha VP (2021) Augmented reality technology ReAR contribution to the student interest in high schools Pontianak Indonesia. In: 2021 3rd international conference on cybernetics and intelligent system (ICORIS), pp 1–4. https://doi.org/10.1109/ICORIS52787.2021.9649492
10. Rizal R, Rusdiana D, Setiawan W, Siahaan P (2021) Development of a problem-based learning management system-supported smartphone (PBLMS3) application using the ADDIE model to improve digital literacy. Int J Learn Teach Educ Res 20(11):115–131. https://doi.org/10.26803/ijlter.20.11.7
11. Shari AA, Ibrahim S, Sofi IM, Noordin MRM, Shari AS, Fadzil MFBM (2021) The usability of mobile application for interior design via augmented reality. In: 2021 6th IEEE international conference on recent advances and innovations in engineering (ICRAIE), pp 1–5. https://doi.org/10.1109/ICRAIE52900.2021.9703984
12. Badu MR, Widarto, Uno HB, Dako RDR, Uloli H (2021) Development and validation of learning media on combustion engine. J Phys Conf Ser 1833(1):012024. https://doi.org/10.1088/1742-6596/1833/1/012024
13. Choi Y, Ahn HY (2021) Developing and evaluating a mobile-based parental education program for preventing unintentional injuries in early childhood: a randomized controlled trial. Asian Nurs Res (Korean Soc Nurs Sci) 15(5):329–336. https://doi.org/10.1016/j.anr.2021.12.001
14. Widiartini NK, Hadeli, Darmini NPN (2021) Development of e-learning content in educational program evaluation courses. J Phys Conf Ser 1810(1):012052. https://doi.org/10.1088/1742-6596/1810/1/012052
15. Muhammad AK, Khan N, Lee MY, Imran AS, Sajjad M (2021) School of the future: a comprehensive study on the effectiveness of augmented reality as a tool for primary school children’s education. Appl Sci 11(11). https://doi.org/10.3390/app11115277
Portable Waste Bank for Plastic Bottles with Electronic-Money Payment Safarudin Gazali Herawan , Kristien Margi Suryaningrum, Desi Maya Kristin, Ardito Gavrila, Afa Ahmad Yunus, and Welldelin Tanawi
Abstract Plastic is the second largest component of waste in Indonesia, reaching 17.3% of the total 28.6 million tons of waste in 2021. The type of plastic most often recycled is plastic bottle packaging. Plastic waste that is not managed properly can affect health and the safety of ecosystems. The waste bank program is one of the government’s efforts to reduce the amount of accumulated waste. The portable waste bank for plastic bottles automatically exchanges plastic bottle waste for points that can be converted into electronic-money. Therefore, this research aims to design a mobile-friendly website-based application integrated with the portable waste bank, using a QR code scan feature to exchange points for electronic-money. The design results show that the portable waste bank with the simulated application functions well and can be applied in real conditions.

Keywords Plastic Waste · Waste Bank · Mobile Friendly Website · QR Code
1 Introduction

Plastic is one of the materials most often used in everyday life, in household furniture, vehicles, electrical devices, food containers, and more. Plastic is easy to obtain and very flexible in use. In addition, plastic has a relatively low cost,

S. G. Herawan (B) · W. Tanawi Industrial Engineering Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] K. M. Suryaningrum · A. A. Yunus Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia D. M. Kristin · A. Gavrila Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta 11480, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_73
S. G. Herawan et al.
light weight, ease of manufacture, flexibility, and water resistance, so it is widely produced for various purposes. Many types of plastic are produced, each with its own use; one of them is the plastic bottle. Plastic bottles mostly function as drink containers that are practical, easy to carry, and lightweight, and the beverage industry commonly uses them as packaging for its products, which has driven an increase in plastic bottle production. The Ministry of Environment and Forestry (2021) recorded that the total waste heap in Indonesia reached 28.6 million tons in 2021, of which plastic made up 17.3% [1]. This fact shows that plastic is one of the largest contributors to waste in Indonesia. Media Indonesia (2021) states that, based on research by Sustainable Waste Indonesia (SWI), the type of plastic most often recycled is soft drink packaging, with around 23% PET (polyethylene terephthalate) bottles and around 15% PP (polypropylene) cups [2]. They also mention that industry demand for PET material is very high, giving it good economic potential: PET contributes 30–48% of the total income of waste collectors in recycling. Despite the many uses of plastic in everyday life, plastic that is not managed properly can have an adverse impact on the surrounding environment. Plastics are generally difficult to break down into basic materials and are toxic when consumed, both for humans and for other living things [3]. Plastic waste dumped into the ocean is consumed by small organisms, which are then consumed by fish, forming a food chain through which it is ultimately consumed by humans indirectly. Plastic waste consumed by animals cannot be digested and can cause their death.
In addition, plastic piled up in waterways can cause blockages that ultimately result in flooding. One widely known technique for dealing with plastic waste is the 3R approach (Reuse, Reduce, Recycle). Reuse is the reuse of plastic that can still be used, which increases the economic value of a product. Reduce is a reduction in the amount of plastic goods used. Recycle is the process of turning used goods into new goods with different functions. Beyond the 3R concept, research has also found a way to process plastic into fuel oil, which is then used as an ingredient for making activated carbon to reduce certain parameters in liquid waste [4]. One method of overcoming the plastic waste problem that has been implemented by the government is the waste bank program. A waste bank is a facility for managing waste with 3R principles; it serves as a means of education, behavior change in waste management, and implementation of a circular economy, and is formed and managed by the community, business entities, and/or local governments [1]. The waste bank program certainly cannot work without the participation of the community: waste banks essentially give something in return for the waste produced by the community. Waste bank systems with automatic sorting are becoming popular because they are easy and do not require human labor. This can be
Portable Waste Bank for Plastic Bottles with Electronic-Money Payment
applied because of sensors that work with a computer program to sort the waste, one such platform being Arduino. Arduino is an open-source electronics platform that is easy to use and is gaining attention among hobbyists and in the market. It provides an Integrated Development Environment (IDE) in which one can write and upload programs, known as sketches, to Arduino microcontroller boards. Arduino boards can read inputs, such as light on a sensor or a finger on a button, and convert them into outputs such as activating a motor, turning on an LED, or publishing something online [5]. Example Arduino projects to date include smart homes, where the home system is equipped with motion sensors, temperature sensors, garage door controls, and more [6]. Arduino can also be applied in broad fields such as the military, medicine, and traffic [7]. Fathonah and Hastuti designed a waste bank machine, built with Arduino, that dispenses stationery in exchange for plastic bottles [8]. However, rewards in the form of stationery appeal mainly to school children. Handoko, Hermawan, and Jaya likewise designed a plastic bottle waste bank machine, awarding tickets as the exchange currency [9]. However, along with the development of internet technology, electronic money has become people's choice for transactions because it is practical and efficient. A plastic bottle waste bank machine implemented with an electronic-money system is believed to facilitate and increase people's motivation to sort plastic bottle waste. Waste bank machines with electronic-money systems certainly require internet technology to carry out their duties.
2 Research Methodology

2.1 Research Flowchart

The research flowchart serves as a guide in carrying out the analysis and design, so that the research stages can be well structured. The research flowchart can be seen in Fig. 1. The research continues by analyzing the components needed to design a portable waste bank system. The required components include infrared sensors, load cell sensors, an Arduino Mega, jumper wires, and servo motors. Some components, such as the ESP32, are added to enable the waste bank to use the Internet of Things (IoT). The programming language used for the Arduino and ESP32 is C++. Mastery of the programming language is very important because it affects the functioning of the waste bank. The SmartWaste application is a mobile-friendly website-based application built on JavaScript. SmartWaste has features to save and redeem points earned by collecting plastic bottle waste. Collected points can be exchanged for electronic money such as OVO or Gopay. SmartWaste users are required to register an account first to collect points.
S. G. Herawan et al.
Fig. 1 Research flowchart
The SmartWaste application uses a database to store user data along with the points earned.
2.2 Preliminary Research

This research continues the work of Surya Maulana, titled "Portable Waste Bank for Plastic Bottles Based on Bottle Weight and Category", in 2021 [10]. The previous research covered the design of the waste bank machine through the process of calculating the number of points obtained based on bottle weight and category. Therefore, this research aims to improve the system by designing the SmartWaste application to allow plastic bottle waste to be exchanged for money in the form of electronic money.
3 Results and Discussion

3.1 Portable Waste Bank System Flowchart

The system flow diagram of the waste bank, from the beginning to the end of the process, can be seen in Fig. 2.
Fig. 2 Flowchart of the portable waste bank system
The waste bank system starts when the user presses the start button on the waste bank LCD. The waste bank then instructs the user to visit the website application by scanning the displayed QR code. Users who do not yet have an account on the website application are asked to register, while users who already have one can log in. Users can then place plastic bottle waste in the waste bank. When the infrared sensor detects the presence of a plastic bottle, the waste bank activates the load cell sensor to measure the bottle's weight. The waste bank rejects plastic bottles of inappropriate weight or size and accepts those of appropriate weight and size; rejected bottles can be retrieved by the user. The user can keep inserting bottles to exchange for points until pressing the finish button on the LCD. The waste bank then provides a QR code that the user scans through the waste bank application to redeem points. The points earned can be accumulated and exchanged for prizes. After the user has earned points from the exchange, the waste bank sends the number of bottles exchanged and the number of points earned to ThingSpeak.
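The accept/reject decision in this flow can be sketched as a small function. The weight window below is a hypothetical threshold chosen for illustration only; the paper does not state the actual acceptance limits.

```python
# Sketch of the waste bank's bottle-handling decision described above.
# MIN/MAX weight limits are assumed values, not the paper's real thresholds.

MIN_WEIGHT_G = 5.0    # assumed lightest acceptable PET bottle
MAX_WEIGHT_G = 50.0   # assumed heaviest acceptable PET bottle

def process_bottle(ir_detected: bool, weight_g: float) -> str:
    """The IR sensor must first detect a bottle; the load cell
    reading then decides whether the servo accepts or rejects it."""
    if not ir_detected:
        return "idle"       # no bottle placed; load cell stays inactive
    if MIN_WEIGHT_G <= weight_g <= MAX_WEIGHT_G:
        return "accepted"   # servo tips the bottle into the bin; points accrue
    return "rejected"       # out-of-range bottle can be retrieved by the user
```

In the real machine this decision runs on the Arduino in C++; the Python sketch only mirrors the control flow of the diagram in Fig. 2.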
3.2 Component Analysis and Design

The system flow diagram implies that certain components need to be installed, each with a function matching its need: sensors to detect bottles, load-measuring sensors, actuators to accept or reject plastic bottles, a screen to present information to users, a module to send data to the website, and others. These can be broken down into the eight components shown in Table 1. The components are then arranged into an interconnected, functioning Arduino circuit. An overview of the Arduino and ESP32 circuits along with their components can be seen in Fig. 3. The ESP32 provides the Wi-Fi module of the waste bank machine, making it easier to apply the Internet of Things (IoT). The number of points earned by the user is sent to the MathWorks cloud service ThingSpeak, which functions as the database of the waste bank machine.
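A ThingSpeak channel update is a plain HTTP request against its public REST API. The sketch below only composes such a request; the write API key is a placeholder, and the mapping of bottle count and points to `field1`/`field2` is an assumption, since the paper does not specify the channel layout.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(write_api_key: str, bottles: int, points: int) -> str:
    """Compose a ThingSpeak channel-update URL carrying the number of
    bottles exchanged and the points earned; issuing it as an HTTP GET
    appends one row of data to the channel."""
    query = urlencode({"api_key": write_api_key,
                       "field1": bottles,   # assumed field for bottle count
                       "field2": points})   # assumed field for points earned
    return f"{THINGSPEAK_UPDATE}?{query}"
```

On the device itself, the ESP32 would send this request over the local Wi-Fi network after each completed exchange.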
3.3 Waste Bank Prototype

The waste bank prototype has a body structure made of wood and plywood. The prototype, designed with a length of 50 cm, a width of 50 cm, and a height of 100 cm, can be seen in Fig. 4.
Table 1 Required components

No.  Component            Function
1    Arduino Mega 2560    Provides instructions for displaying the website's QR code link and the number of points received by users
2    ESP32                Accepts or rejects plastic bottles based on the information sent by the load cell and infrared sensors, and sends waste bank data to ThingSpeak
3    Infrared sensor      Detects the presence of a plastic bottle placed by the user
4    Load cell and HX711  Measures the weight of plastic bottles
5    Servo motor          Tips the bottle placement container placed by the consumer so that the plastic bottle falls into the appropriate place
6    2.4″ TFT LCD         Serves as a screen displaying information to consumers: the QR code for the website provided and the number of points earned
7    Breadboard           Acts as a component connecting board
8    Jumper wires         Act as the link between all components
3.4 Use Case Diagram

The Use Case Diagram in Fig. 5 illustrates user activities within the SmartWaste mobile-friendly website-based application system.
3.5 Entity Relationship Diagram

The Entity Relationship Diagram used to design the SmartWaste application database is shown in Fig. 6. It clearly describes the database structure, containing a description of the classes, attributes, methods, and relationships of each table.
3.6 SmartWaste Website-Based Application

The following shows the SmartWaste website-based application, designed using the JavaScript programming language and the Vue.js framework. The SmartWaste application has several features: Swap-Bottle, to scan the QR code on the LCD and earn points; Swap-Point, to exchange points for electronic money such as OVO or Gopay; and History, to display the user's transaction history after Swap-Bottles and Swap-Points. Figure 7 shows several views of the SmartWaste application. A demonstration of this product can be seen at https://youtu.be/KW3sxY4AcpI for the waste bank and https://youtu.be/ZZRgIAxSnSo for the SmartWaste application.

Fig. 3 Arduino and ESP32 circuits for waste banks
4 Conclusion

The conclusions that can be drawn from the design of the waste bank machine and the SmartWaste application are:
Fig. 4 The prototype of waste bank
Fig. 5 Use case diagram of SmartWaste
1. Users can exchange plastic bottle waste for points through a mobile-friendly website-based application called SmartWaste by scanning the QR code displayed on the TFT LCD screen. Points earned by users can be accumulated and exchanged for the electronic money provided in the application.
2. Points for users who exchange plastic bottle waste on the waste bank machine are automatically stored in a database in the SmartWaste mobile-friendly website-based application.
Fig. 6 Entity relationship diagram of SmartWaste
3. The ESP32 can be implemented on waste bank machines as the provider of the Wi-Fi module, making it easier for waste bank machines to apply the Internet of Things (IoT).
Fig. 7 SmartWaste application
References

1. PET Plastic Waste Has High Economic Value (2021). Media Indonesia. https://mediaindonesia.com/economy/431238/sampah-plastik-pet-miliki-level-daur-ulang-tinggi
2. National Waste Management Information System (2021). Ministry of Environment and Forestry. https://sipsn.menlhk.go.id/sipsn/
3. Gunadi RA, Parlindungan DP, Santi AU, Aswir, Aburahman A (2021) Plastic hazards for health and the environment. National Seminar on Community Service LPPM UMJ, pp 1–7
4. Purwaningrum P (2016) Efforts to reduce the generation of plastic waste in the environment. Journal of Environmental Technology 8(2):141–147
5. Kaswan KS, Singh SP, Sagar S (2020) Role of Arduino in real world applications. International Journal of Scientific & Technology Research 9(1):1113–1116
6. Kundu D, Khallil ME, Das TK, Mamun AA, Musha A (2020) Smart home automation system using IoT. Int J Sci Eng Res 11(6):697–701
7. Mallick B, Patro AK (2016) Heart rate monitoring system using finger tip through Arduino and processing software. IJSETR 5(1):82–89
8. Fathonah PD, Hastuti (2020) Design and build reverse vending machine plastic bottle garbage with stationery. JTEIN: Indonesian Journal of Electrical Engineering 1(2):201–206
9. Handoko P, Hermawan H, Jaya S (2018) Reverse vending machine plastic packaging bottle waste exchange with tickets as currency exchange. National Seminar on Science and Technology, pp 1–12
10. Maulana S, Herawan SG (2022) Portable waste bank for plastic bottles by weight and bottle categories. In: IOP Conference Series: Earth and Environmental Science 998(1):012026. IOP Publishing
Evaluation of Branding Strategy in Automotive Industry Using DEMATEL Approach Ezra Peranginangin and Yosica Mariana
Abstract The shift from fossil fuel vehicles to the era of electric vehicles has led to a revolution in the branding strategies of automobile manufacturers pursuing market value. As the quality of autocars becomes more similar from one brand to another, decision making in autocar branding needs to intensify customer engagement and customer retention with the brand. This article aims to determine the key criteria in autocar branding in Indonesia, including how those factors are interrelated, in order to provide effective autocar branding. To achieve this, the criteria for an autocar branding strategy are composed based on interviews with eighteen autocar marketers from various brands in Indonesia. As a result, this study extracted seven criteria for autocar branding in Indonesia: applied drive train technology, product reliability, available features, complaint solution responsiveness, service center coverage area, manufacturing plant existence, and promotion. This research then applies the Decision-Making Trial and Evaluation Laboratory (DEMATEL) approach, in which those experts are invited as respondents to fill in a pairwise comparison questionnaire as the DEMATEL data. The result shows that although product reliability is the key criterion, it is an effect of three causal criteria. This denotes that explicit product reliability in branding needs to be strengthened by at least the service center coverage area and manufacturing plant existence. This research concludes that product reliability is an important factor in branding strategies and should be supported by a domestic manufacturing plant. Keywords Branding Strategy · Decision-Making Trial and Evaluation Laboratory (DEMATEL) · Automotive Industry
E. Peranginangin (B) Industrial Engineering Department, Binus University, Jakarta 11480, Indonesia e-mail: [email protected] Y. Mariana Architecture Department, Binus University, Jakarta 11480, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_74
E. Peranginangin and Y. Mariana
1 Introduction

The shift from fossil fuel vehicles to the era of electric vehicles has led to a revolution in the branding strategies of automobile manufacturers pursuing market value. The application of electric drive systems, which simplify automobile design, means that branding aspects rooted in the context of fossil fuel engines must shift to a more relevant branding strategy. For internal combustion engine vehicles, Indonesian consumers strongly consider engine durability and the availability of engine components, supported by the distribution of vehicle service centers. The era of electric vehicles will reduce the complexity of mechanical components to simple electric ones. The simplicity of the electric drivetrain shifts the competition in the automotive business, since the quality of electric vehicles is quite similar from one brand to another. Therefore, decision makers in branding should prioritize criteria that are more open to customer perceptions of brand quality. Branding is an imaging process to maintain engagement with consumers. Perceived enjoyment is the most significant stimulus in increasing positive behavior in consumer engagement with brands [1]. This means that perceived engagement in the automotive business is influenced by consumers' enjoyment during interactions with the brand's touchpoints, such as the availability of spare parts, repair services, customer service, and other services. Decision makers in the automotive industry require a branding strategy that synergizes with market trends according to relevant criteria and identifies causal relationships between branding strategy criteria.
2 Literature Review

2.1 Brand Loyalty and Brand Trust Through Hedonic Value

The consumer experience of automotive products is inseparable from after-sales service support. Historical experience [2, 3] determines the hedonic value perspective and behavioral intention. Post-purchase services can take the form of repair services, responses to customer complaints, and the availability of spare parts, which guarantee the vehicle's reliability. Customer satisfaction resulting from well-managed post-purchase services can increase positive perceptions of hedonic value, which fosters consumer pride in the products they own and prompts consumers to share their satisfaction with other consumers through word of mouth. Furthermore, the hedonic value perception, together with brand personality appeal and attitude, influences behavioral intention in increasing brand loyalty [4]. Thus, the management of servicescapes for motor vehicle consumers can lead to brand loyalty [5] and even brand trust [6] through hedonic value, causing consumers to make repeat purchases and even recommend the brand to other consumers.
Hedonic value is a feeling of comfort, safety, and pleasure arising from the use or consumption of a product or service [7]. Hedonic value plays a role in shaping brand personality arising from the fit between service management and consumer expectations. Ailawadi et al. describe two factors that strengthen hedonic value: the entertainment and exploration aspects [8]. The entertainment aspect covers the entertainment that is relevant when consumers use products or services. The exploration aspect is the benefit of the unique experience consumers have when using the product, feeling satisfied because of advantages that differ from other products. In autocars, consumers can feel these aspects of hedonic value in the vehicle's features, engine technology, and aesthetic design [9].
2.2 Decision-Making Trial and Evaluation Laboratory (DEMATEL)

The Decision-Making Trial and Evaluation Laboratory (DEMATEL), developed in 1976 by the Science and Human Affairs Program at the Battelle Memorial Institute of Geneva, aims to identify solutions to complex problems with a hierarchical approach. The output of DEMATEL is a causal diagram [10] between elements that describes contextual relationships, including the strength of the relationships between aspects [11], identifying interdependent relationships hierarchically so as to inform decision makers about causal relationships between aspects. The procedure for operating the DEMATEL approach is described in steps 1 through 4 as follows [12]:

Step 1: Average matrix calculation. Each expert is asked to evaluate the direct relationship between factors on a scale of 0 (no influence), 1 (low influence), 2 (medium influence), and 3 (high influence), producing one matrix per expert evaluator. Each respondent k produces a non-negative matrix X^k = [x_ij^k], where 1 ≤ k ≤ H, H is the number of respondents, and n is the number of criteria; x_ij^k denotes the relationship of aspect i to aspect j in the n × n matrix based on that respondent's opinion. All the direct-relation matrices are then combined into one average matrix A = [a_ij] by Eq. (1):

a_ij = (1/H) Σ_{k=1}^{H} x_ij^k    (1)

Step 2: Calculation of the normalized initial direct-relation matrix D, with D = A × S, where S = 1 / (max_{1≤i≤n} Σ_{j=1}^{n} a_ij), so that each element of matrix D lies in the range of 0 and 1.

Step 3: Construction of the total relation matrix, defined as matrix T, where T = D(I − D)^{−1} and I is the identity matrix. The total relation matrix is then analyzed
using the vectors r and c, where r is the n × 1 vector of row sums of T and c is the 1 × n vector of column sums. Here, r_i, the sum of the ith row of T, quantifies the direct and indirect effects of factor i on the other factors. If c_j is the sum of the jth column of T, then c_j indicates the direct and indirect effects received by factor j. For j = i, the sum (r_i + c_i) indicates the total effect given and received by factor i, i.e., the importance of factor i within the overall structure. The difference (r_i − c_i) indicates the net contribution of factor i to the overall structure: if (r_i − c_i) is positive, factor i is a cause; if (r_i − c_i) is negative, factor i is an effect.

Step 4: Specify the threshold value to illustrate the chart. Matrix T describes the relationships between factors in matrix form, and decision makers need to filter out negligible effects. In this study, the threshold value was determined as the mean of the elements of matrix T. Relationship diagrams were depicted based on the data (r + c, r − c).
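Steps 1–4 can be reproduced numerically; the sketch below uses NumPy with this paper's Table 1 average matrix. One caveat: reproducing the paper's normalized matrix (Table 2) requires scaling by the maximum over both row and column sums, a common DEMATEL variant, rather than row sums alone.

```python
import numpy as np

# Table 1 average matrix A (criteria A-G) from this paper.
A = np.array([
    [0.0000, 2.7222, 2.7222, 0.6667, 0.1111, 0.6667, 0.7778],
    [3.3889, 0.0000, 2.1667, 1.1667, 1.7778, 1.5556, 1.2778],
    [1.2778, 2.7222, 0.0000, 0.0000, 0.1111, 0.0000, 2.6667],
    [0.7222, 1.8889, 0.0000, 0.0000, 0.0556, 0.1111, 0.0000],
    [0.5556, 2.2222, 0.8889, 2.8889, 0.0000, 2.3889, 2.1111],
    [0.9444, 1.7222, 1.0000, 2.5556, 2.8333, 0.0000, 2.5556],
    [2.4444, 2.1111, 1.8889, 0.0000, 2.0556, 0.4444, 0.0000],
])

# Step 2: normalize by the largest row/column sum (variant noted above).
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s

# Step 3: total relation matrix T = D (I - D)^-1.
T = D @ np.linalg.inv(np.eye(len(A)) - D)

# Step 4 inputs: row sums r, column sums c, prominence (r+c), relation (r-c).
r, c = T.sum(axis=1), T.sum(axis=0)
prominence, relation = r + c, r - c
```

Criteria with positive `relation` values are causes; with this data, criterion B (product reliability) has the largest prominence, consistent with Table 4.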
2.3 The Case of the Automotive Industry in Indonesia

This article studies the automotive industry in Indonesia, one of the backbones of the Indonesian economy. The Ministry of Industry noted that the automotive industry contributed an investment of Rp 99.16 trillion, with a total production capacity of 2.35 million units per year, and absorbed 38.39 thousand workers. The motor vehicle industry, with 26 companies, is expected to pour in an investment of Rp 10.05 trillion with a production capacity of 9.53 million units per year, contributing a workforce of up to 32 people. In addition, the supply chain sector of the automotive industry provides jobs for 1.5 million people [12]. Based on this contribution, the automotive industry is included in the Making Indonesia 4.0 roadmap, meaning it is a development priority in the implementation of Industry 4.0. Given this significant potential, a branding strategy that matches the characteristics of the Indonesian consumer market needs to be identified [13]. This study therefore examines branding strategies in the Indonesian automotive industry by considering aspects of branding [14–19].
3 Research Method

This research applies the Decision-Making Trial and Evaluation Laboratory (DEMATEL) approach, in which experts are invited as respondents to the method's instrument. The experts are autocar marketers from a variety of brands, each with at least three years of experience in autocar marketing. The instruments, in the form of pairwise comparisons, are filled in by eighteen experts with expertise in sales in the
automotive marketing field. The researchers compiled criteria through literature exploration and validated them with experts through one-on-one interviews. The branding strategy criteria are applied drive train technology (A), product reliability (B), available feature (C), complaint solution responsiveness (D), service center coverage area (E), manufacturing plant existence (F), and promotion (G). These criteria are compiled into a pairwise comparison questionnaire in which the experts compare the criteria according to the DEMATEL procedure above. The average matrix of the 18 respondents is described in Table 1. According to Step 2 of the DEMATEL procedure, the average matrix is normalized, yielding the normalized direct-relation matrix (Table 2). The total relation matrix resulting from Step 3 is described in Table 3. The relationships among the branding strategy criteria for the automotive industry in Indonesia can then be determined from matrix T (Table 4), which establishes whether each criterion plays the role of an effect or a cause.

Table 1 Average matrix

       A       B       C       D       E       F       G
A   0.0000  2.7222  2.7222  0.6667  0.1111  0.6667  0.7778
B   3.3889  0.0000  2.1667  1.1667  1.7778  1.5556  1.2778
C   1.2778  2.7222  0.0000  0.0000  0.1111  0.0000  2.6667
D   0.7222  1.8889  0.0000  0.0000  0.0556  0.1111  0.0000
E   0.5556  2.2222  0.8889  2.8889  0.0000  2.3889  2.1111
F   0.9444  1.7222  1.0000  2.5556  2.8333  0.0000  2.5556
G   2.4444  2.1111  1.8889  0.0000  2.0556  0.4444  0.0000
Table 2 Normalized direct relation matrix

       A       B       C       D       E       F       G
A   0.0000  0.2033  0.2033  0.0498  0.0083  0.0498  0.0581
B   0.2531  0.0000  0.1618  0.0871  0.1328  0.1162  0.0954
C   0.0954  0.2033  0.0000  0.0000  0.0083  0.0000  0.1992
D   0.0539  0.1411  0.0000  0.0000  0.0041  0.0083  0.0000
E   0.0415  0.1660  0.0664  0.2158  0.0000  0.1784  0.1577
F   0.0705  0.1286  0.0747  0.1909  0.2116  0.0000  0.1909
G   0.1826  0.1577  0.1411  0.0000  0.1535  0.0332  0.0000
Table 3 Total relation matrix

       A       B       C       D       E       F       G
A   0.2066  0.4181  0.3651  0.1526  0.1346  0.1416  0.2310
B   0.4815  0.3457  0.3995  0.2504  0.2893  0.2449  0.3283
C   0.2942  0.4033  0.1885  0.0983  0.1381  0.0980  0.3328
D   0.1370  0.2181  0.0798  0.0480  0.0561  0.0530  0.0636
E   0.3010  0.4638  0.2882  0.3678  0.1862  0.2956  0.3626
F   0.3343  0.4549  0.3093  0.3567  0.3730  0.1524  0.4032
G   0.3950  0.4317  0.3518  0.1495  0.2842  0.1619  0.2099
Table 4 Relationship among criteria

      r_i     c_i     r_i + c_i  r_i − c_i  Causal/Effect
A   1.6495  2.1497    3.7992    −0.5002    Effect
B   2.3396  2.7356    5.0752    −0.3960    Effect
C   1.5533  1.9823    3.5356    −0.4290    Effect
D   0.6557  1.4233    2.0790    −0.7676    Effect
E   2.2653  1.4615    3.7267     0.8038    Cause
F   2.3838  1.1474    3.5312     1.2363    Cause
G   1.9842  1.9315    3.9156     0.0527    Cause
4 Results

Based on the quantitative DEMATEL analysis, the seven criteria can be prioritized overall as B > G > A > E > C > F > D according to the value of r_i + c_i in Table 4. Product reliability (B) plays the most important role, with a value of 5.0752; promotion (G) and applied drive train technology (A) rank second and third, with values of 3.9156 and 3.7992, respectively. On the other hand, service center coverage area (E), manufacturing plant existence (F), and promotion (G) act as net causes. In summary, decision makers in autocar branding should focus on the three causes: service center coverage area (E), manufacturing plant existence (F), and promotion (G). Product reliability (B) is the priority criterion, but it is an effect of the other criteria (Fig. 1). This means product reliability can be strengthened through a well-managed service center coverage area, manufacturing plant existence, and promotion. The policy of utilizing more than 80% made-in-Indonesia autocar spare parts significantly affects product reliability, since the ease of spare-part availability allows authorized dealers to provide spare parts quickly. The existence of a manufacturing plant is proof that the autocar brand guarantees business sustainability, which increases customer trust in the brand. Additionally, promotion ranks third as a cause of the other criteria; therefore, it may not be the focus
Fig. 1 The digraph showing the causal relationship among seven criteria
for autocar branding. Although product reliability is the key criterion, it is an effect of three causal criteria. This denotes that explicit product reliability in branding needs to be strengthened by at least the service center coverage area and manufacturing plant existence. A sufficient number of available service centers ensures customers' peace of mind during use of the autocar.
5 Conclusion

This study initially collected insights through interviews with eighteen autocar marketers to identify criteria for a branding strategy in the autocar industry in Indonesia. From the interviews, seven criteria for the branding strategy were found: applied drive train technology, product reliability, available feature, complaint solution responsiveness, service center coverage area, manufacturing plant existence, and promotion. A survey based on the DEMATEL method was then distributed to those eighteen autocar marketers, who filled in the pairwise comparison questionnaire used to identify the causal relationships among the selected criteria. Note that, unlike most multi-criteria decision-making methods, DEMATEL does not require the criteria to be mutually independent; consequently, it supports the decision maker in identifying relationships among criteria without that assumption. Thus, the value of this research lies in identifying the causal relationships among the criteria, in addition to prioritizing them. The results describe service center availability as the most important causal criterion, followed by manufacturing plant existence. Against the common belief that product reliability alone is the key to a successful branding strategy, this research finds that product reliability needs to be supported by service quality, where the manufacturing plant potentially strengthens product quality through fast response in providing customers with autocar repair and parts. Therefore, branding policy from decision makers in the automotive industry should attend to service centers and spare-part availability guarantees to increase the value of the brand and gain customer trust.
References

1. Arghashi V, Arsun Yuksel C (2022) Customer brand engagement behaviors: the role of cognitive values, intrinsic and extrinsic motivations and self-brand connection. Journal of Marketing Theory and Practice 1–27
2. Dedeoglu BB, Bilgihan A, Ye BH, Buonincontri P, Okumus F (2018) The impact of servicescape on hedonic value and behavioral intentions: the importance of previous experience. Int J Hosp Manag 72:10–20
3. Park JY, Back RM, Bufquin D, Shapoval V (2018) Servicescape, positive affect, satisfaction and behavioral intentions: the moderating role of familiarity. Int J Hosp Manag 78:102–111
4. Ekawati N, Yasa N, Kusumadewi N, Setini M (2021) The effect of hedonic value, brand personality appeal, and attitude towards behavioral intention. Management Science Letters 11(1):253–260
5. Kuikka A, Laukkanen T (2012) Brand loyalty and the role of hedonic value. Journal of Product & Brand Management 21(7):529–537
6. Albayrak T, Karasakal S, Kocabulut Ö, Dursun A (2020) Customer loyalty towards travel agency websites: the role of trust and hedonic value. J Qual Assur Hosp Tour 21(1):50–77
7. Desmet P, Hekkert P (2007) Framework of product experience. International Journal of Design 1(1)
8. Ailawadi KL, Neslin SA, Gedenk K (2001) Pursuing the value-conscious consumer: store brands versus national brand promotions. J Mark 65(1):71–89
9. Opata CN, Xiao W, Nusenu AA, Tetteh S, John Narh TW (2020) Customer value co-creation in the automobile industry: antecedents, satisfaction, and moderation. SAGE Open 10(3)
10. Kim Y (2006) Study on impact mechanism for beef cattle farming and importance of evaluating agricultural information in Korea using DEMATEL, PCA and AHP. Agriculture Information Research 15(3):267–279
11. Wu WW, Lee YT (2007) Developing global managers' competencies using the fuzzy DEMATEL method. Expert Syst Appl 32(2):499–507
12. Tzeng GH, Chiang CH, Li CW (2007) Evaluating intertwined effects in e-learning programs: a novel hybrid MCDM model based on factor analysis and DEMATEL. Expert Syst Appl 32(4):1028–1044
13. Automotive sector's contribution to national industry remains positive. https://en.antaranews.com/news/179246/automotive-sectors-contribution-to-national-industry-remains-positive. Last accessed 15 Nov 2022
14. Bajde D (2019) Branding an industry? J Brand Manag 26(5):497–504
15. Keller KL (2021) The future of brands and branding: an essay on multiplicity, heterogeneity, and integration. Journal of Consumer Research 48(4):527–540
16. Kumar V, Kaushik AK (2022) Engaging customers through brand authenticity perceptions: the moderating role of self-congruence. J Bus Res 138:26–37
17. Kumar V (2020) Building customer-brand relationships through customer brand engagement. J Promot Manag 26(7):986–1012
18. Niefer W (1994) Power branding in the automotive industry. In: Brand power. Palgrave Macmillan, London, pp 103–119
19. Ziegler D, Abdelkafi N (2022) Business models for electric vehicles: literature review and key insights. J Clean Prod 330:129803
IoT Based Beverage Dispenser Machine Wiedjaja Atmadja, Hansel Pringgiady, and Kevin Lie
Abstract Most water-filling systems in beverage dispensers, such as those found in fast-food restaurants and malls serving soft drinks, use a lever or button to dispense, and payment must still be made at the cashier, which may be crowded at peak times. This paper discusses the design of an IoT-based beverage dispenser that fills a glass automatically: a laser sensor detects the cup, and filling is controlled by opening a solenoid valve and activating a pump according to the reading of a water flow sensor that measures the volume of water dispensed. The system uses an ESP32 as the main controller and Google Firebase as the cloud database that stores user accounts and money balances. The dispenser uses a simple payment system based on the unique ID of an RFID card, which serves as the key in the database; each customer holds an RFID card as a payment card for transactions. The system connects to the Internet over a local Wi-Fi network. The appliance also has a cooling component, a Peltier element that cools the soft drink in a separate aluminum container apart from the main container, to speed up the cooling process. Test results show that the system can cool the water in the secondary container to as low as 15 °C and keep its temperature stable around 19 °C, so that drinks are served below the 21 °C threshold. Keywords Automatic dispenser · ESP32 · Google firebase · Cooling system · Self-service payment
1 Introduction Along with the rapid economic growth and human development, household appliances have been developed toward digitalization and functional diversification with

W. Atmadja (B) · H. Pringgiady · K. Lie
Computer Engineering Department, Faculty of Engineering, Bina Nusantara University, Palmerah, West Jakarta 11480, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_75
intelligent features [1]. One of the most widely known smart features in this modern era is automation. The use of automation technology in a company, especially a manufacturing company, can increase productivity [2]: repetitive daily tasks previously done by human workers can now be done by machines, and the workers can focus on more challenging tasks. In practice, not all processes can be automated, because the technology is still relatively new, and applying automation to replace manual systems in the manufacturing industry requires careful planning and design due to the large investment costs. Thorough planning and design, aligned with user needs, must be carried out to avoid heavy changes to the system design in the future [3]. In everyday life, many factors affect how people enjoy their free time and which places they visit to spend it, such as restaurants, canteens, libraries, or other places where visitors can sit back, wait, or relax. During peak hours in places like restaurants or malls, it can be troublesome and time-consuming to order beverages and pay at the cashier. To solve this problem, we designed an automatic beverage dispenser with a self-service payment system. The objective of the Automatic Beverage Dispenser project is to help visitors enjoy their free time without spending a long time getting their drinks, thereby increasing customer satisfaction, which is very important for the success of a business [4–6]. The Automatic Beverage Dispenser works as a normal liquid dispenser and makes purchasing easier. Purchases come in two size options based on a 355 ml cup, which is neither too large nor too small: a half-full 220 ml (medium) size and an almost-full 300 ml (large) size.
These two size options are enabled by a water flow sensor that measures the amount of water dispensed, so the device can stop the flow once the measured volume reaches the target. A highly responsive laser sensor detects the presence of the glass; it also lets the dispenser know if the cup is lifted during filling, so that filling stops quickly. The dispenser is also equipped with a cooler targeted to bring the water below approximately 21 °C, since liquids at 21 °C and below are classified as cold, so customers do not have to worry about the temperature of the drinks served. IoT technology enables the machine to communicate with the cloud database for monitoring and for updating the device's data every time an order occurs [7]. Unlike other water dispensers, the machine has a self-service feature for making payments with an RFID card. Customers do not need to pay at the cashier; they pay directly through the system, and the balance deducted is shown on the LCD screen during payment. The LCD module guides the customer when placing orders, with descriptions that help in every step of the process. These features are intended to encourage wider adoption of automation and IoT technology in the future, where the most challenging issue remains understanding customers' attitudes towards new technologies [8].
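The stop-fill behavior described above can be sketched as a small control loop. This is an illustrative simulation, not the authors' ESP32 firmware: the pulses-per-litre calibration constant and the sensor/actuator callbacks (`cup_present`, `read_flow_pulses`, `set_valve`, `set_pump`) are assumed names.

```python
# Sketch of the fill-control logic: dispense until the target volume is
# reached, aborting immediately if the laser sensor reports the cup lifted.
PULSES_PER_LITRE = 450  # assumed flow-sensor calibration; must be measured

def fill_cup(target_ml, cup_present, read_flow_pulses, set_valve, set_pump):
    """Return the volume (ml) actually dispensed."""
    dispensed_ml = 0.0
    set_valve(True)   # open the solenoid valve...
    set_pump(True)    # ...and start the pump
    while dispensed_ml < target_ml:
        if not cup_present():           # cup lifted mid-fill: stop at once
            break
        pulses = read_flow_pulses()     # new pulses since the last call
        dispensed_ml += pulses * 1000.0 / PULSES_PER_LITRE
    set_pump(False)
    set_valve(False)
    return dispensed_ml
```

On real hardware the pulse count would come from a GPIO interrupt counter, and the loop would also enforce a timeout in case the reservoir runs dry; both are omitted to keep the sketch short.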
2 Research Method In this research, the development of the system is divided into three parts: the block diagram of the hardware, the flow chart of the software, and the design of the system's database structure. Figure 1 depicts the system block diagram. An ESP32 is used as the microcontroller, providing wireless connectivity to the cloud database, which is Google Firebase RTDB (Realtime Database). Four main sensors are used: a distance sensor, a temperature sensor, a water level sensor, and a water flow sensor. The distance sensor detects the presence of the drinking cup. The temperature sensor measures the water temperature in the cooler, which drives the on-off switching of components like the Peltier element and the exhaust fan. The water level sensor determines whether the reservoir is almost empty. The water flow sensor measures the volume of water that has flowed into the cup, which lets the system stop filling automatically once the measured volume reaches the target. Both the water level sensor and the water flow sensor operate on a 5 V supply, while the ESP32 uses 3.3 V logic; hence, a logic level converter module converts the sensors' 5 V output signals to 3.3 V. For water pumping, a water pump and a solenoid valve are used, where the solenoid valve opens or closes the flow of water. In this design, the solenoid valve alone cannot move water from the reservoir to the cup, so the water pump is connected to the valve's output to ensure the water is pumped strongly enough from the reservoir into the cup.
For the cooling system, the Peltier element is attached with thermal paste to the cooler reservoir, which is an aluminum bottle. An exhaust fan removes the excess
Fig. 1 Block diagram of automatic soft drink dispenser
heat produced from the hot side of the Peltier. An LED indicates whether the Peltier is turned on or off. These four components, namely the exhaust fan, Peltier, water pump, and solenoid valve, are controlled through MOSFETs. For the user interface, there are two push buttons, an RFID Module RC522, a buzzer, and an LCD TFT display module. The two push buttons represent the medium and large sizes. The RFID Module RC522 handles drink-order transactions using RFID cards, while the buzzer sounds when an RFID card is detected by the module. The LCD TFT display shows the order instructions for the customer. For the power line of the system, a 12 V DC power supply in the form of an adaptor is used. To provide a 5 V supply to components such as the ESP32, water level sensor, and water flow sensor, a DC step-down (buck) converter converts the 12 V supply to 5 V. The power line can be connected or disconnected with a power switch. Here is the PCB design of this system. The dispenser is implemented with sensor and IoT technology, allowing it to process customer orders autonomously with very little intervention, as shown in Fig. 2. The system connects to the Internet through the local Wi-Fi network provided in the area, and produces the data shown in the database in Fig. 3. A sensor on the main container indicates whether there is still much water in the container, and the sensor in the cooling reservoir turns the cooler off when the water temperature reaches 16 °C and back on when it reaches 17 °C as a power-saving feature; this switching also toggles an LED on the PCB as a cooler-debugging indicator.
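The 16/17 °C switching just described is a classic hysteresis (bang-bang) controller. A minimal sketch of that rule, with the thresholds taken from the text:

```python
OFF_BELOW_C = 16.0  # Peltier turns off once the reservoir reaches 16 °C
ON_ABOVE_C = 17.0   # Peltier turns back on once it warms to 17 °C

def update_cooler(temp_c, cooler_on):
    """Return the new cooler state (True = Peltier, fan and LED on)."""
    if cooler_on and temp_c <= OFF_BELOW_C:
        return False
    if not cooler_on and temp_c >= ON_ABOVE_C:
        return True
    return cooler_on  # inside the 16-17 °C band: keep the previous state
```

The one-degree gap between the two thresholds prevents the Peltier from rapidly toggling around a single set point, which is what makes this a power-saving feature.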
The ordering process proceeds in stages triggered by the dispenser's sensors, starting with the cup-presence sensor, and each stage triggers the LCD to display the information the customer needs to complete the order, including the balance deducted at the time of payment. After the order is completed, the dispenser sends an invoice to the customer's email registered to the RFID card. The RFID card acts as the payment card, but the money balance is stored in the Firebase database. Users can top up their balance at the cashier, and each time a user taps a card on the dispenser, the machine first checks the user's balance in the Firebase database; if it is sufficient, the transaction proceeds.
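The tap-to-pay check described above can be sketched with a plain dictionary standing in for the Firebase RTDB node keyed by Card ID. This is illustrative: the field names and the flat price follow the paper's example (10,000 deducted per order), but the real firmware talks to Firebase over Wi-Fi rather than a local dict.

```python
# Simulated RTDB: Card ID -> balance, registered email, per-size order counts.
db = {"37625BB5": {"balance": 25_000, "email": "[email protected]",
                   "orders": {}}}

PRICES = {"medium": 10_000, "large": 10_000}  # assumed; large price not stated

def process_tap(db, card_id, size):
    """Check the balance first; if sufficient, deduct it and count the order."""
    user = db.get(card_id)
    if user is None:
        return "unknown card"
    if user["balance"] < PRICES[size]:
        return "insufficient balance"
    user["balance"] -= PRICES[size]
    user["orders"][size] = user["orders"].get(size, 0) + 1
    return "ok"

print(process_tap(db, "37625BB5", "medium"))  # ok (balance is now 15,000)
```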
3 Result and Discussion This section is divided into two parts: the final product of the system and the results of the testing. The final product, the Automatic Soft Drink Dispenser, is shown in Fig. 3. It consists of three main parts: the enclosure for the main reservoir, the cup holder, and the user interface components. The customer places the cup in the designated hole on the cup holder; once the cup is placed, the LCD display shows the next instruction. The right side of Fig. 3 shows the user interface
Fig. 2 Flow chart of automatic soft drink dispenser
components, which are the LCD display, the RFID scanner, and the two push buttons representing the medium and large sizes. Figure 4 shows the system database structure in Google Firebase RTDB. The structure has two main parts: the RFID part and the SystemID part. The RFID part monitors the balance and email data correlated to each Card ID (for example, 37625BB5 and E72797B4). The balance data is used when a medium or large order transaction is
Fig. 3 Final product of automatic soft drink dispenser
done, while the email data is used to send the order information to the email correlated with the Card ID (for example, [email protected] for Card ID 37625BB5 and [email protected] for Card ID E72797B4). The SystemID part monitors the order counts of large and medium drink transactions. For example, when a medium-size transaction is done for Card ID 37625BB5, its order count for the medium size increases from 0 to 1 and its balance is cut by 10,000. The next section presents the testing results, separated into several parts, each with its own table; two graphs provide additional visual information. For the Table 1 data (temperature difference between cooler and cup), assume 3 liters of water are poured into the main reservoir. In every iteration, 300 milliliters of water are taken out of the cooler reservoir, for a total of 10 iterations. To approximate the best temperature for a cold drink, a waiting time is allowed between every two iterations; the waiting times are given in Table 2. The waiting time for the first two iterations, about 110 minutes, was much longer than for the later iterations, where

Fig. 4 Order simulation for ID card 37625BB5 on medium size cup
Table 1 Temperature difference between cooler and cup

Iteration | Cooler water temperature (°C) | Cup water temperature (°C) | Temperature difference (°C)
1  | 16.19 | 18.1  | 1.91
2  | 20.31 | 21.2  | 0.89
3  | 16.51 | 18.70 | 2.19
4  | 20.87 | 21.15 | 0.28
5  | 16.82 | 18.42 | 1.60
6  | 20.35 | 20.55 | 0.20
7  | 16.00 | 17.70 | 1.70
8  | 20.37 | 20.80 | 0.43
9  | 15.56 | 17.9  | 2.34
10 | 18.31 | 19.2  | 0.89
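The averages discussed in the text (a mean cooler-to-cup difference of 1.243 °C and a mean cup temperature of about 19.37 °C) can be reproduced directly from the Table 1 readings:

```python
# Table 1 readings, iterations 1-10.
cooler = [16.19, 20.31, 16.51, 20.87, 16.82, 20.35, 16.00, 20.37, 15.56, 18.31]
cup    = [18.10, 21.20, 18.70, 21.15, 18.42, 20.55, 17.70, 20.80, 17.90, 19.20]

diffs = [round(b - a, 2) for a, b in zip(cooler, cup)]
avg_diff = sum(diffs) / len(diffs)
avg_cup = sum(cup) / len(cup)
print(round(avg_diff, 3), round(avg_cup, 3))  # 1.243 19.372
```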
the waiting time varied around the 20-minute mark. This may be because the Peltier does not yet operate optimally just after the system is turned on. On the other hand, the system maintained the temperature difference between the water in the cooler and the water in the cup, averaging 1.243 °C. The average cup water temperature is about 19.37 °C, which already fulfills the system's target for a cold drink, namely under 21 °C. As Table 1 shows, the temperature fluctuates between 15 and 21 °C because of the thermal behavior of the Peltier itself, which cannot hold the temperature difference closely enough. In Table 2, the cooler waiting time is also long, again due to the working nature of the Peltier; it could be improved by replacing the Peltier with a compressor, but for this cooling simulation a Peltier is used.

Table 2 Cooler waiting time for every two iterations

Iteration | Waiting time (min)
1–2  | 110
3–4  | 21
5–6  | 25
7–8  | 18
9–10 | 21

From the Table 3 data (comparison between the water volume poured into the cup and the target volume), based on the water flow sensor readings, the output volume varies per iteration. For the medium size, the target volume in the simulation is 220 ml. The average water volume is about 220.75 ml, an acceptable result because it approximates the target volume (220 ml). Also, the average error percentage is only
around 1.014%, which is low and does not greatly affect the desired target volume of the drink. For Table 4, the error calculation was repeated as in Table 3, but with a target volume of 300 ml for the large-size simulation. Across the 20 iterations, the average water volume is about 299.5 ml, which is also acceptable because it approximates the target volume (300 ml). In addition, the average error percentage is only about 0.833%, which is low and does not significantly affect the desired target volume of the drink. The results of Tables 3 and 4 are represented in the graph of Fig. 5: the error between the actual water volume in the cup and the target is very small. Over 20 iterations, the graph lines for both medium and large sizes are almost flat, with errors of only about 0% to 2.5%, which means the system consistently keeps the error very small for both sizes. The last part of the testing focuses on the temperature stability of the cooler over 3 h.

Table 3 Comparison between water volume and target volume (medium size = 220 mL)
Iteration | Water volume (mL) | Volume difference (mL) | Error percentage (%)
1  | 225 | 5 | 2.22
2  | 220 | 0 | 0
3  | 220 | 0 | 0
4  | 225 | 5 | 2.22
5  | 220 | 0 | 0
6  | 225 | 5 | 2.22
7  | 220 | 0 | 0
8  | 215 | 5 | 2.32
9  | 215 | 5 | 2.32
10 | 225 | 5 | 2.22
11 | 220 | 0 | 0
12 | 220 | 0 | 0
13 | 225 | 5 | 2.22
14 | 220 | 0 | 0
15 | 215 | 5 | 2.32
16 | 220 | 0 | 0
17 | 220 | 0 | 0
18 | 225 | 5 | 2.22
19 | 220 | 0 | 0
20 | 220 | 0 | 0
Table 4 Comparison between water volume and target volume (large size = 300 mL)

Iteration | Water volume (mL) | Volume difference (mL) | Error percentage (%)
1  | 295 | 5 | 1.69
2  | 305 | 5 | 1.63
3  | 300 | 0 | 0
4  | 305 | 5 | 1.63
5  | 295 | 5 | 1.69
6  | 300 | 0 | 0
7  | 305 | 5 | 1.63
8  | 300 | 0 | 0
9  | 300 | 0 | 0
10 | 295 | 5 | 1.69
11 | 300 | 0 | 0
12 | 300 | 0 | 0
13 | 295 | 5 | 1.69
14 | 295 | 5 | 1.69
15 | 300 | 0 | 0
16 | 300 | 0 | 0
17 | 295 | 5 | 1.69
18 | 300 | 0 | 0
19 | 305 | 5 | 1.63
20 | 300 | 0 | 0
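The averages quoted for Tables 3 and 4 can be reproduced from the listed volumes. Note that the per-row error percentages in the tables are truncated to two decimals (e.g. 5/305 = 1.639…% appears as 1.63), so the check below truncates the same way:

```python
medium = [225, 220, 220, 225, 220, 225, 220, 215, 215, 225,
          220, 220, 225, 220, 215, 220, 220, 225, 220, 220]  # target 220 mL
large = [295, 305, 300, 305, 295, 300, 305, 300, 300, 295,
         300, 300, 295, 295, 300, 300, 295, 300, 305, 300]   # target 300 mL

def stats(volumes, target):
    avg_volume = sum(volumes) / len(volumes)
    # Error is |actual - target| relative to the actual volume, truncated to
    # two decimals per row as in the tables (e.g. 5/225 = 2.22%).
    errors = [int(abs(v - target) / v * 10_000) / 100 for v in volumes]
    return avg_volume, sum(errors) / len(errors)

mv, me = stats(medium, 220)
lv, le = stats(large, 300)
print(round(mv, 2), round(me, 3))  # 220.75 1.014
print(round(lv, 2), round(le, 3))  # 299.5 0.833
```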
Based on the data in Table 5 (stability testing of the cooler reservoir temperature for 3 h), the cooler reservoir temperature was logged every 10 min while the reservoir was not being used to pour water into a cup. The system maintains its temperature well, between roughly 16 and 17 °C, once it reaches the 16 °C target for the cooler reservoir. The measurements from minute 0 to minute 180 (3 h in total) are represented in Fig. 6. This is an acceptable result, as the system keeps the water in the cooler reservoir cold.
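The stability claim can be checked numerically against the Table 5 log. The small excursions past the 16 and 17 °C switching points (down to 15.8, up to 17.3) are expected, since the water keeps cooling or warming briefly after the Peltier toggles:

```python
# Cooler reservoir readings from Table 5, minute 0 to 180 in 10-min steps.
temps = [16.1, 16.5, 16.8, 16.4, 16.4, 16.2, 16.3, 17.1, 16.8, 17.3,
         16.5, 16.6, 17.0, 15.8, 16.4, 16.7, 16.6, 16.3, 16.8]

print(min(temps), max(temps))  # 15.8 17.3
# Assumed tolerance: readings stay within 0.5 °C of the 16-17 °C band.
assert all(15.5 <= t <= 17.5 for t in temps)
```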
4 Conclusion This project grew out of the development of a dispenser concept connected to IoT technology, which is very popular in today's modern times. The dispenser was developed using sensors that enable automatic service functions, with an average water-pouring error of 1.014% (for the medium-size drink) and
Fig. 5 Graph of error calculation between actual water volume in cup and target water volume
Table 5 Temperature stability testing of cooler reservoir for 3 h (180 min)

Minute | Cooler reservoir temperature (°C)
0   | 16.1
10  | 16.5
20  | 16.8
30  | 16.4
40  | 16.4
50  | 16.2
60  | 16.3
70  | 17.1
80  | 16.8
90  | 17.3
100 | 16.5
110 | 16.6
120 | 17.0
130 | 15.8
140 | 16.4
150 | 16.7
160 | 16.6
170 | 16.3
180 | 16.8
0.833% (for the large-size drink), as well as an average temperature difference between the cup water and the cooling container of 1.243 °C, a satisfactory result. The use of IoT and automated services makes the device easier to use. From a business perspective, this opens up more opportunities for deploying the device, thereby increasing business opportunities that previously could not be pursued because of limited functions that were difficult to perform.
Fig. 6 Graph of Cooler Reservoir Temperature versus Time
References
1. Huang J, Xie J (2010) Intelligent water dispenser system based on embedded systems. In: Proceedings of 2010 IEEE/ASME international conference on mechatronic and embedded systems and applications. IEEE Press, QingDao, pp 279–282
2. Fauzan MI, Rachmat H, Anugraha RA (2016) Perancangan Sistem Otomasi Proses Chamfer Part Stopper Valve Pada Mesin Bench Lathe SD-32A Di PT Dharma Percision Parts. J Rekayasa Sistem dan Industri 3:59–66
3. Geantari EU, Rachmat H, Astuti MD (2014) Perancangan User Requirement Specification (URS) Sistem Otomatisasi Pelayuan Teh Hitam. J Rekayasa Sistem dan Industri 1:43–48
4. Gupta S, McLaughlin E, Gomez M (2007) Guest satisfaction and restaurant performance. Cornell Hotel Restaur Adm Q 48(3):284–298
5. Hallowell R (1996) The relationships of customer satisfaction, customer loyalty, and profitability: an empirical study. Int J Service Industry Manage 7:27–42
6. Rust RT, Zahorik AJ (1993) Customer satisfaction, customer retention, and market share. J Retailing 69:193–215
7. Poongothai M, Subramanian PM, Rajeswari A (2018) Design and implementation of IoT based smart laboratory. In: 2018 5th International conference on industrial engineering and applications (ICIEA). IEEE Press, Singapore, pp 169–173
8. Kumar KN, Balaramachandran PR (2018) Robotic process automation—a study of the impact on customer experience in retail banking industry. J Internet Bank Commer 23:1–27
The Implementation of the “My Parking” Application as a Tool and Solution for Changing the Old Parking System to a Smart Parking System in Indonesia Anisah El Hafizha Harahap and Wahyu Sardjono
Abstract With globalization and digitalization, many companies are changing their business processes from conventional to digital. This aims to shorten and simplify business processes and to avoid fraud. The same applies to the parking system in Indonesia: by taking advantage of advanced technology, a smart parking system can be implemented, specifically by using the Internet of Things (IoT), a sensor system, and an application that drivers, as users, can access to reserve a parking space. "My Parking" is proposed as the solution to this problem: an application for implementing a smart parking system in Indonesia that, it is hoped, can help both the community and the government build a smart city in Indonesia, leading Indonesia toward becoming a developed country. The purpose of this paper is to visualize the My Parking app, from the flowchart diagram to the user interface of the app. Keywords Smart parking · IoT · User interface · Reservation parking
1 Introduction The current parking system in Indonesia still has many shortcomings, among them the risk of motorbikes being moved or parked irregularly, and parking tickets that are easily lost. The number of vehicles keeps increasing, and the existing parking areas sometimes cannot accommodate them; parking lots effectively get narrower as vehicle volume grows, while the existing parking system has not been able to overcome these problems. In general, parking areas in Indonesia still use the conventional system: drivers search manually for which

A. E. H. Harahap · W. Sardjono (B)
Information Systems, Management Department, BINUS Graduate Program, Bina Nusantara University, Jakarta 11480, Indonesia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_76
parking spaces are still empty. In a small parking location this may not be a problem, but large parking areas such as shopping centers and other recreational areas cannot be controlled by parking attendants alone: with the large number of vehicles that want to park and unpredictable parking durations, a new parking system is needed to find parking spaces easily.
2 Literature Review Parking is the temporary condition of a vehicle that is not moving or has been left by its driver. Automatic parking systems are used in many commercial places to manage vehicle parking as efficiently as possible. Within such a system, there are several ways to arrange parked vehicles, one of which is parallel parking: vehicles parked in a line, the front of one vehicle facing the back of another, usually along the side of a road. In this global, technological era, many companies apply technology to their business, changing from conventional to digital business; with the help of technology, business processes become easier, more transparent, and beneficial to both parties, companies and consumers. The same holds for the parking system in Indonesia: if the government takes advantage of this technology, an excellent and measurable parking system can without question be created [1]. Currently, the common way of finding a parking space is manual: the driver usually finds a space on the street through luck and experience. However, by changing the conventional parking system into a smart parking system with the help of the Internet of Things (IoT) and a parking-booking app, drivers can be expected to save considerable time looking for empty parking spaces. The Internet of Things, or IoT, refers to devices with sensors, processing ability, software, and other technologies that connect and exchange data with other devices. To address the problems of conventional parking systems, an innovative idea is proposed as an alternative solution to these problems.
Specifically, a parking-booking application called "My Parking" is used to reserve a parking space, and a sensor is placed in each parking space that sounds an alarm if the plate number of the parked vehicle does not match the plate number recorded in the application. In metropolitan areas, parking management forces drivers to spend extra time searching for available parking spaces and contributes to traffic congestion [2]. With "My Parking" as a smart-parking tool, the guesswork is removed from the parking experience; its effectiveness in finding parking slots will reduce the traffic and parallel parking caused by the unavailability of parking slots in shopping and recreation areas. A previous research paper, "Smart Parking System based on Reservation" [3], stated that the need for facilities and convenience in finding parking
spaces on a daily basis has grown rapidly with the ratio of people owning vehicles, increasing busy city traffic. Therefore, a new parking system that offers comfort in finding a parking space is indispensable in this fast-paced, sophisticated era. With digitalization, a smart parking system can replace the old one, using in-ground sensors or cameras to spot whether a parking slot is free or occupied [4]. This happens through real-time data collection, and the data is transmitted to a smart-parking mobile application or website, which communicates availability to its users. "My Parking" is a reservation-based parking application that a driver, as a user, can use to reserve a parking space before arriving at the destination. It is expected to minimize fraudulent grabbing of parking spaces and to reduce the time and stress drivers spend finding empty parking spaces [5]. Each parking slot has a sensor that detects availability; every time a car enters the slot, the sensor scans the plate number, and if it does not match the one registered in the system, the sensor alarm sounds. If a driver still parks in a booked slot without making a reservation first, an officer will come and the driver will be penalized. By using the "My Parking" app [6], people looking for a parking space will find one in the most efficient way; city streets will be less crowded, because car queues while looking for empty spaces are reduced; and, most importantly, the city environment becomes neater and more sophisticated, in accordance with the definition and benefits of implementing a smart city. The benefits and advantages of using the smart parking app My Parking are:
1. It makes it easier for vehicle owners to park their vehicles
2. It makes the parking lot tidier and more organized
3. It reduces paper usage
4. With slot numbering, drivers do not forget where they parked
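The per-slot plate check described above can be sketched as follows. The reservation store and the decision function are simulated with plain data structures; the names are illustrative, not from the paper's implementation:

```python
# Slot id -> plate number recorded by the "My Parking" reservation.
reservations = {"B12": "B 1234 ABC"}

def on_car_parked(slot_id, scanned_plate):
    """Decide what the slot sensor should do when a car parks in it."""
    reserved_plate = reservations.get(slot_id)
    if reserved_plate is None:
        return "alarm: slot not reserved"   # parked without booking first
    if scanned_plate != reserved_plate:
        return "alarm: plate mismatch"      # officer is called, penalty applies
    return "ok"

print(on_car_parked("B12", "B 1234 ABC"))  # ok
```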
3 Methodology This paper was conducted by searching the Internet for information and literature related to the issue, namely the implementation of smart parking in Indonesia. The main purpose of this paper is to gather data and establish the relevance between the obtained information and the solutions that this paper provides.
3.1 Data Collection The first step was gathering sources related to this topic: journals, literature, papers, the authors' own experience, and articles on the Internet. The next step was to identify and summarize the gathered information relevant to the topic. From this research, it was found that
many people had difficulty finding a parking space due to a lack of information about parking-space availability. It can be concluded that quite a few of them are forced to park their vehicles in parallel, which can annoy other visitors; this happens because of the density of parking spaces and the lack of information from the mall about its parking capacity.
3.2 Designing the Flowchart A flowchart is a diagram that explains how a business process runs; in other words, it represents the workflow of the business process. Developing a flowchart for this paper is important for discovering the system workflow and determining whether information is correctly formatted and delivered. Microsoft Visio was used as the tool for designing the My Parking app's flowchart.
3.3 Designing the User Interface To visualize the My Parking app, Axure RP 9 was used to design its user interface. A user interface is the design through which an application or website is presented; in other words, it is the point of interaction between the user and the application's or website's computer system.
4 Result and Discussion 4.1 Flowchart In this paper, the flowchart of the "My Parking" app starts with the driver, as a user, submitting personal data for account registration; the driver can then log in to the application with the registered email and password. Next, users search for the parking area they want to visit. For example, to go to Mall Town Square, a user types the mall's name in the search box and enters the arrival time and duration of parking. After the user searches the location and enters the parking duration, the system scans whether that area still has an available parking space. If there is, the system displays the location of the parking space, such as the floor and section, and the user enters the vehicle's plate number [7], so that the sensor can check the slot: if another vehicle parks there, the sensor's alarm sounds and an officer comes. But if there is no available parking space,
unfortunately, the user cannot visit that mall and has to search for another destination; this aims to reduce overloaded parking lots and parallel parking. After the user makes a reservation, the system updates the available quota of parking slots in the chosen area. When the driver arrives at the parking area and parks the vehicle, the system calculates the parking charges and counts down the remaining parking duration; "My Parking" sends the user a notification as a reminder that the parking duration is almost over. Users can request to extend the parking duration if they still have things to do in the building (mall or other recreation area), and the system re-calculates the parking charges. Before users leave the parking area, the system displays the bill, or parking fee, to be paid, and right after they leave, the system re-updates the availability of parking slots in that area (Fig. 1).
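The reservation, quota-update, and fee re-calculation steps described above can be sketched as follows. The hourly rate, the lot data, and the function names are illustrative assumptions; the paper does not specify a pricing model:

```python
RATE_PER_HOUR = 5_000  # assumed flat hourly rate (rupiah), for illustration

lots = {"Mall Town Square": {"free_slots": 2}}

def reserve(destination, hours):
    """Book a slot if one is free; return the quoted fee, else None."""
    lot = lots.get(destination)
    if lot is None or lot["free_slots"] == 0:
        return None          # full: the user must choose another destination
    lot["free_slots"] -= 1   # update the available quota after the booking
    return hours * RATE_PER_HOUR

def extend(fee_so_far, extra_hours):
    """Re-calculate the charge when the user extends the parking duration."""
    return fee_so_far + extra_hours * RATE_PER_HOUR

fee = reserve("Mall Town Square", 2)
print(fee, extend(fee, 1))  # 10000 15000
```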
4.2 Data Processing Diagram

Figure 2 shows a diagram of the hardware part of the "My Parking" business process. It helps the developer and UI designer understand the input and output data, as well as the data that will be kept in storage as information for later use.

• Input: In the "My Parking" business process, several pieces of data are entered by users. Account registration requires users to input their personal data (full name, date of birth, phone number, email, and a new password). Beyond registration, users also need to input their vehicle plate number and the place they want to visit as requirements for booking a parking space.
• Processing: After users input all the required data, the system processes it. For example, after a user enters the desired destination, the system matches the place and reads the capacity of its parking area. Processed data is not only returned to the customer as output; it can also be kept in storage.
• Output: As soon as the system has finished processing the input data, the user sees the availability result on their screen as output.
• Storage: Input data is not used for a single process only; it can also be kept in the system by saving it to storage. This simplifies and accelerates the business process. For example, on first use of "My Parking", users need to enter their vehicle plate number to make a reservation. On subsequent uses, they do not need to re-enter it, because the system has already saved the information (unless they want to reserve with a different vehicle), which makes the reservation process
Fig. 1 Flowchart of the My Parking app
A. E. H. Harahap and W. Sardjono
The Implementation of the “My Parking” Application as a Tool …
879
Fig. 2 Hardware parts diagram “My Parking”
Table 1 Differences between first-time and second-time booking reservations (booking reservation menu and explanation)

First-time reservation: On the user's first booking reservation, it is mandatory to enter the plate number.
Second-time reservation: After the first parking reservation is finished, the user does not need to re-enter the plate number for another reservation as long as the same vehicle is used, because the last-entered plate number is shown as a suggestion.
faster and easier. Table 1 summarizes the differences between a first-time and a second-time reservation.
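The storage behaviour in Table 1 (the plate number is remembered after the first reservation) can be sketched, under assumed names, as:

```python
# Hypothetical sketch of the "storage" step: after the first reservation,
# the last-used plate number is saved and offered as a suggestion so the
# user does not have to re-enter it. All names are illustrative.

saved_plates = {}  # user id -> last plate number used

def plate_for_booking(user_id, entered_plate=None):
    """Return the plate to use: an explicit entry wins (and is saved),
    otherwise the stored suggestion from the previous reservation."""
    if entered_plate:                    # first use, or a different vehicle
        saved_plates[user_id] = entered_plate
        return entered_plate
    return saved_plates.get(user_id)     # suggestion, or None on first use

assert plate_for_booking("u1") is None                # first use: must enter
assert plate_for_booking("u1", "B 1 ABC") == "B 1 ABC"
assert plate_for_booking("u1") == "B 1 ABC"           # second booking: suggested
```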
4.3 User Interface of My Parking

A. Landing Page: The first page of "My Parking" is the landing page, where the driver, as a user, can choose to sign in or sign up (Fig. 3).
B. Sign-Up Page: A new user creates an account by pressing the "Sign Up" button on the landing page, entering the required personal data, and pressing the "Register" button (Fig. 4).
C. Sign-In Page: Users who already have an account press the "Sign In" button on the landing page; the system then takes them to the Sign-In Page, where they enter their registered email or telephone number and password (Fig. 5).
Fig. 3 My Parking UI (landing page)
D. Home Page: After users successfully log in to the "My Parking" app, they are taken to the home page. There, two vehicle types (car and motorcycle) are offered, and users choose a parking area based on their vehicle (Fig. 6).
E. Car Parking Reservation Page: After users select a vehicle, the system takes them to the Reservation Page. On this page, users can search for a location and enter the visit time and parking duration. If the parking area is still available, the system informs the user with a pop-up message stating the availability of the selected parking area. The user then enters the plate number; this books the parking slot and prevents another vehicle from taking the space (Fig. 7).
F. Reservation Summary Page: After users submit their parking information, the system displays the details of the reservation. This page has two buttons: "Extend", for users who want to extend their parking duration, and "Out", for users who want to end the parking session and leave the building (Fig. 8).
G. Car Parking Bill Page: The system opens the Parking Bill Page when users press the "Out" button and shows the price they need to pay. After users
Fig. 4 My Parking UI (sign up page)
leave the building, the system updates the availability of the parking slot again (Fig. 9).
H. Parking History Page: On the History Page, users can see their completed parking reservation history (Fig. 10).
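The parking-charge calculation and the "Extend" re-calculation described above can be sketched as follows. The hourly rate and the charge-per-started-hour rule are assumptions, since the paper does not specify a tariff:

```python
import math

RATE_PER_HOUR = 5000  # hypothetical rate in rupiah; not from the paper

def parking_fee(minutes_parked):
    # Charge per started hour (a common convention; an assumption here).
    return math.ceil(minutes_parked / 60) * RATE_PER_HOUR

def extend(booked_minutes, extra_minutes):
    # Re-calculate the charge after the user presses "Extend".
    return parking_fee(booked_minutes + extra_minutes)

print(parking_fee(90))   # 90 min -> 2 started hours -> 10000
print(extend(90, 60))    # 150 min -> 3 started hours -> 15000
```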
5 Conclusion

With the increasing volume of vehicles in Indonesia, motorists often find it difficult to locate a parking space; this can cause stress, because drivers may spend a long time, even hours, finding an empty space, which in turn can cause congestion. Meanwhile, amid today's rapid development of technology, many originally conventional businesses have been digitalized by implementing technology in their business processes. From these problems emerged an idea that could be a solution: using existing technology to change the conventional parking system into a smart parking system through the "My Parking" application. It is hoped that "My Parking" can reduce paper waste and help the community
Fig. 5 My Parking UI (sign in page)
to find parking areas more easily through the application. People who want to go to a building (mall, office, etc.) would no longer need to struggle to find a parking space; the app can also minimize lost parking tickets, forgetting where the vehicle is parked, and unauthorized vehicle transfers by irresponsible persons. Using technology to manage the parking system also advances Indonesia technologically, including by minimizing the tree felling caused by increasing paper use. Even though "My Parking" could solve the parking problems Indonesia faces, many obstacles may arise in realizing it, such as incompatible devices and outdated technology, unsuccessful architecture patterns or code, a lack of resources and platforms, and insufficient funds. Given that there is so far no application for reserving a parking space in Indonesia, obstacles, whether minor or major, will certainly exist. Therefore, this paper focuses only on the design and planning of the "My Parking" app as an illustration of how the application would work. For future work, collaboration with the government and parking companies will be helpful for the real implementation and development of "My Parking" as a problem solver toward a smart parking system in Indonesia.
Fig. 6 My Parking UI (home page)
Fig. 7 My Parking UI (car parking reservation page)
Fig. 8 My Parking UI (reservation summary page)
Fig. 9 My Parking UI (car parking bill page)
Fig. 10 My Parking UI (parking reservation history page)
References 1. Pham TN, Tsai M-F, Nguyen DB, Dow C-R, Deng D-J (2015) A cloud-based smart-parking system based on Internet-of-Things technologies. 3:1–2 2. Shaikh FI, Jadhav PN, Bandarkar SP, Kulkarni OP, Shardoor NB (2016) Smart parking system based on embedded system and sensor network. 140:1–2 3. Chandran M et al (2019) An IoT-based smart parking system. 1339:2–4 4. Cynthia J, Priya CB, Gopinath PA (2018) IOT based smart parking management system. 7:1–2 5. Bonde DJ, Shende RS, Kedari AS, Gaikwad KS, Bhokre AU (2014) Automated car parking system commanded by Android application. In: 2014 International conference on computer communication and informatics. IEEE, India, pp 1–4 6. Kiliç T, Tuncer T (2017) Smart city application: android based smart parking system. In: 2017 International artificial intelligence and data processing symposium. IEEE, Turkey, pp 1–4 7. Mudaliar S, Agali S, Mudhol S, Jambotkar CK (2019) IoT based smart car parking system. 5:270–271
Knowledge-Based IoT and Intelligent Systems
Study of Environmental Graphic Design Signage MRT Station Blok M

Arsa Widitiarsa Utoyo and Santo Thin
Abstract Environmental graphic design (EGD) is a class of graphic design object that has not been widely discussed because it is often considered inferior: EGD is frequently judged to merely 'complement' interior or architectural designs, which are larger in volume, but that does not make it a trivial design object. This paper discusses the EGD of the signage at MRT station Blok M. The research was conducted as a literature study of the texts of Calori and Vanden-Eynden (Signage and wayfinding design: a complete guide to creating environmental graphic design systems. Wiley & Sons, Inc., New Jersey [1]). The design is studied by deconstructing it with the signage pyramid method and then viewing it as a graphic design object with image and type elements. The authors conclude that the EGD is designed to be not only functional but also conceptual, with aesthetic considerations that are no less important. The benefit of this research is to serve as an example of a review of EGD design from the graphic design discipline.

Keywords EGD · Design · Signage · Jakarta · Indonesia · MRT
1 Introduction PT Mass Rapid Transit Jakarta (PT MRT Jakarta) was established on June 17, 2008, in the form of a Limited Liability Company with majority shares owned by the Provincial Government of DKI Jakarta (ownership structure: DKI Jakarta Provincial Government 99.98%, PD Pasar Jaya 0.02%). PT MRT Jakarta has a scope of activities including the exploitation and construction of MRT infrastructure and facilities, operation, and maintenance (O&M) of MRT infrastructure and facilities, as well as A. W. Utoyo (B) New Media Department, School of Design, Universitas Bina Nusantara, Jakarta, Indonesia e-mail: [email protected] S. Thin Desain Komunikasi Visual, Universitas Sampoerna, Jakarta, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_77
891
892
A. W. Utoyo and S. Thin
property/business development and management at stations and the surrounding areas, as well as depots and their surroundings.
Graphic design is a form of visual communication that conveys messages and information to an audience; it is also a visual representation using visual elements [2]. This understanding emphasizes that graphic design is not limited by medium: what matters are the visual and graphic elements used in designing. One object of graphic design is environmental graphic design, the design of visual aspects related to wayfinding, identity, and information communication, which shapes the audience's experience of a place [3]. This article discusses the EGD of the signage at MRT station Blok M; this signage has also been recognized by Paul Hessel of TDC London. Overall, the design of the Jakarta MRT signage meets its objectives, and Paul said the team received a good response from the community engaged in the graphic industry. "It's quite fun; MRT Jakarta uses a color scheme that we use as the color for the livery (the outside of the train carriage), thus adding to the uniformity of the overall design," said Paul. As one of the 'friendly' stations in Jakarta, Blok M Station has a 'warm and comfortable' station concept, which is also reflected in its EGD. Considering the important role of context in a design work, this paper is divided into six parts: an explanation of the basic context related to the creation of the work; a description of the work from macro to micro with respect to the EGD of the signage at MRT station Blok M; a discussion of the relationship between type and image in the work; an elaboration of the meaning, message, and impression produced by that type-image relationship; and a discussion of the form of the EGD design in light of design theory relevant to the work, in this case visual figures of speech.
Finally, the paper elaborates the meaning, impressions, and messages that arise in light of that theory. The authors hope that this paper can show another side of EGD design, in which the resulting designs are more varied and exploratory rather than merely sticking to existing conventions. In addition, the authors hope to demonstrate the use of Skolos and Wedell's type and image relations as a formalist lens for discussing graphic design objects [4].
2 Research Methodology

The research examining the design of the signage at MRT station Blok M was carried out with a qualitative approach and a literature study method. The literature study method is a method used to collect and synthesize material for research [5]. The process of this research can be seen in the chart in Fig. 1.
There are two types of library medium used as sources. The first is books and journals, used as references for the theoretical framework and the study. The main text on which this research is based is the book 'Signage and Wayfinding Design' by Chris Calori and David Vanden-Eynden, which is used to understand, enumerate, and analyze EGD; and 'Type,
Fig. 1 Research stage chart. Source: Author's Documentation, 2022
[Chart: study of literature (books and journals) → theoretical foundations → research object (MRT Blok M Station) → work analysis → conclusions]
Image, Message: A Graphic Design Layout Workshop' by Nancy Skolos and Tom Wedell. The second medium is the internet, which the authors use to gather further information about MRT Blok M Station itself. The analysis of the work is then carried out based on the theories obtained from the literature study. Two specific theories underlie the analysis of the station's EGD: the signage pyramid method by Calori and Vanden-Eynden and the type and image relation theory of Skolos and Wedell [4]. After the work analysis stage, the results are mapped through concept mapping, a mapping method in the form of associations and implications used to clarify a concept. Based on this mapping, the authors draw conclusions to open further discussions about EGD and graphic design [5].
3 Environmental Graphic Design

Environmental graphic design, commonly abbreviated EGD, is the graphic communication of information found in an environment [4]. Calori's definition confirms that EGD is a discipline of graphic design, not just of architecture and interiors. Another significant point of Calori's definition is that graphic design does not necessarily cover only two dimensions but can be three-dimensional [6]. There are three main components to an EGD: signage (and wayfinding), where the EGD guides and carries the audience to a place by showing the way; placemaking, where the EGD gives identity to a location, making the place recognizable and easier to remember; and interpretation, where the EGD explains
information about a place or location so that the information can be interpreted and understood. Looking at these three components, we can conclude that the main purpose of EGD is communication, not mere decoration; this makes EGD a function-oriented design. EGD also adopts principles from human-centered design and from user interface and user experience design, many of which are applied as design principles in EGD. One example is Hick's law, which states that the time required to understand and decide is directly proportional to the number of alternatives offered [7]. The practical implication for an EGD is that when a sign conveys a lot of information, the audience can take a long time to digest and understand the design before deciding; this is, of course, not desirable. The hope is that, by limiting the amount of information displayed, someone can make decisions quickly and accurately [6].
A survey was conducted of 250 respondents with experience of using the MRT from Blok M Station. A total of 68.3% of the audience expressed great pleasure and support for the steps taken by MRT Jakarta in implementing the rules with the EGD approach through in-station media placed at the centre of the station, especially at the entrances and exits; the public responded positively to this step. A total of 75.4% of respondents supported and were pleased that MRT Jakarta used EGD as a form of social responsibility and empathy directed toward the public. This means the public realizes the importance of, and need for, the government rules followed by the operator in addressing the issues and problems surrounding this signage. In addition, as many as 71% of respondents felt helped by the signage and described it as very useful.
The logos and visual brand approaches contained in the EGD are used to build public awareness and recognition of MRT Jakarta. As many as 72% of respondents were very happy with, and agreed on, the brand's presence as a logo or other visual forms, such as colour or typography. This means that using brand logos or other visual elements amplifies consumer awareness of products and brands, even when they appear in EGD on in-station media as entrance and exit signage. Measuring awareness is also related to consumer brand loyalty; based on the survey, as many as 68% of respondents agreed that the signage influenced and increased their loyalty to the product. The strategy can thus be a way to increase consumer awareness of a product or brand. Regarding the use of EGD as a formal aspect of complying with government rules, as many as 70% of respondents agreed that the operator took this approach because of obligations imposed by the government. This was also supported by the 82% of respondents who stated that the EGD placement is appropriate and in accordance with the health protocols set by the government.
From the discussion above, a direction of strategy can be identified in the use of EGD as an adaptive step taken by brands in the current pandemic era. Three functions lead into one overall strategy. In particular, the critical function for operators in approaching EGD is the formal requirement of government regulations that must be met in complying with health protocols during the pandemic. The next function is part of the brand's social responsibility
strategy in dealing with the current pandemic problem. The function at the intersection of both is the strengthening of brand awareness, which is unconsciously one of the essential parts awaited by consumers in balancing their needs and their trust when choosing a brand. This step strengthens consumer loyalty to the brand or brands of their choice.
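Hick's law, cited earlier in this section [7], is commonly written T = b · log2(n + 1), where n is the number of choices and b is an empirical constant. A small illustrative sketch (the value of b below is an assumption, not from the paper):

```python
import math

def decision_time(n_choices, b=0.2):
    """Hick's law: T = b * log2(n + 1) for n equally likely choices.
    b is an empirical constant; 0.2 s is an illustrative value only."""
    return b * math.log2(n_choices + 1)

# Fewer items on a sign -> faster decisions: a sign listing 3 destinations
# is, on this model, digested roughly twice as fast as one listing 15.
print(round(decision_time(3), 2))   # 0.4
print(round(decision_time(15), 2))  # 0.8
```

Whether decision time is strictly logarithmic in practice varies, but the design implication stated in the text (limit the information per sign) follows either way.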
4 Type and Image Relations

As part of graphic design, EGD cannot be separated from the two main components of graphic design, namely type (writing) and image (picture). The understanding of, and differences between, these two elements are explained by Skolos and Wedell in terms of how they are 'read' in different ways [4]. A picture is read and can be interpreted because the audience understands and has empirically experienced what the image represents. This differs from writing, whose meaning is constructed through knowledge and agreement or convention. Because the two are interpreted in different ways, the audience also unconsciously distinguishes them; for a designer, this knowledge can be used to push new boundaries when designing with both elements.
Skolos and Wedell introduce four types of relationship between type and image that exist in graphic design work: separation, where pictures and writing are designed as if in their own dimensions, with no regard for each other, so that images and text can overlap or collide with no relation at all; fusion, where images and text are designed with each other in mind, seeming to share the same realm, with the form of the writing appearing to interact with or influence the image, and vice versa; fragmentation, where images and writing interact so intensely or anarchically that they finally interfere with and disrupt each other; and inversion, where images and text seem to swap roles, so that images can be 'read' like writing, or writing is 'formed' until it can no longer be 'read' but is seen as if it were a picture.
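Purely as a reading aid, the four relations can be summarized as a small enumeration; the one-line glosses paraphrase Skolos and Wedell's descriptions as given above:

```python
from enum import Enum

class TypeImageRelation(Enum):
    SEPARATION = "type and image designed independently, with no interaction"
    FUSION = "type and image designed with each other in mind, in the same realm"
    FRAGMENTATION = "type and image interfere with and disrupt each other"
    INVERSION = "type and image swap roles: image is 'read', type is 'seen'"

# Each graphic design work can be classified under one of the four relations.
for relation in TypeImageRelation:
    print(f"{relation.name}: {relation.value}")
```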
5 Signage Pyramid Method

Calori and Vanden-Eynden explain that the main function of a sign system, or EGD, is to provide information about the related environment to visitors in that environment, and that this information is conveyed through graphics displayed on physical objects [1]. Based on this understanding, Calori and Vanden-Eynden see three separate but related aspects that make up a sign system: the information content system, the information that needs to be displayed in the sign system; the graphic system, the graphical display of that information; and the hardware system, the material or medium on which the information is displayed (Fig. 2).
Fig. 2 MRT Blok M Station. Source: MNC Portal Indonesia/Ari Sandita Murti, 2021
Fig. 3 Three components signage pyramid method (information, graphic, and hardware systems). Source: Signage and Wayfinding Design, Calori and Vanden-Eynden [1]
In the design process, these three components influence each other. For example, the amount of information that needs to be displayed affects the treatment of the graphics, such as the type and number of typefaces and whether the sign needs images as well or only text. The amount of information also affects material selection, because designers need to consider which medium can display large amounts of information while remaining coherent with the visual identity of the EGD, offering good legibility, and providing high durability. Besides being used as a design method, the signage pyramid method can also be used to enumerate an existing EGD. In the context of the signage at MRT Blok M Station, a description of the three components of the signage pyramid method can be seen in Fig. 3. The information content system consists of descriptive information about the location of the place and place name information.
Graphic system: MRT Jakarta in general already has a logo with dark lettering, usually on a white background. The logo, which was born from a competition, may not be replaced, but there are no strict guidelines regarding its use. "Seeing the need for signage, we finally decided to use the logo in an 'inverted' form, with white as the logo color and a dark color as the background," said Paul. This was done because signage with a light background tends to be uncomfortable for users to read. In the end, the MRT Jakarta signage design uses a dark blue background in each of its designs, a color that later became a distinctive feature of this mode of transportation.
Hardware system: signage is a general term for a graphic display, in the form of text and symbols, intended to convey information to the audience. Generally, the information displayed consists of place names, directions for road users, company addresses, and other important information. Acrylic is one of the materials used to make signage; it is a plastic material similar to glass but with superior properties, including better impact resistance. The description above separates the components contained in the signage at MRT Blok M Station; by separating them, we can review the station's EGD design in a more structured way.
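The pyramid decomposition above can be sketched as a small data structure. This is only an illustrative summary; the field values paraphrase the paper's description of the Blok M signage and are not an official specification:

```python
from dataclasses import dataclass

@dataclass
class Sign:
    """One sign, decomposed per the signage pyramid method."""
    information_content: str  # what the sign must say
    graphic_system: str       # how it is displayed (type, color, symbols)
    hardware_system: str      # the physical medium carrying it

blok_m_sign = Sign(
    information_content="Place names and directional information, in Indonesian and English",
    graphic_system="White 'inverted' MRT logo and type on a dark blue background",
    hardware_system="Acrylic panel (glass-like, but more impact resistant)",
)
print(blok_m_sign.graphic_system)
```

Separating the three fields mirrors how the paper enumerates an existing EGD: each component can be reviewed, and changed, independently.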
6 Environmental Graphic Design Signage MRT Blok M Station

The design objectives of signage cannot be separated from the need to be easy for users to understand. According to Paul, this is also why signage has a down-to-earth design. Paul also said that there are many important aspects in designing signage, from choosing the right font and good color contrast to the right size. Equally important, however, is ensuring consistency, so that the design standards that have been created can still be implemented in the future.
Signage is especially needed in railway stations to help passengers spend their time more efficiently and to encourage the growth of mobility. The different types of information displayed inside railway stations help passengers rapidly access information points, platforms, and trains to reach their destinations, as well as directions to connections with other means of transport. To benefit from a safe and time-saving journey, passengers need the railway station transit to be correlated with the signage system.
The most important aspects for passengers will always be safety and punctuality, as well as the quality of the information system inside a railway station; this is also a consideration for improving standards. If railway transport is to become the first choice over road and air transport, it will have to provide superior comfort, convenience, and ease of access to stations. This is the area where passengers want to see improvement inside stations: signage, facilities,
and staff. Consequently, what passengers need is real-time information, directions to emergency points, display panels for transport services, maps indicating local public transport, and other useful information. Priorities must be directed toward optimizing access, the information system, and facilities. Current modernization projects of railway stations prioritize the implementation of signage components, both temporary (for the reconstruction period) and permanent, consisting of railway signs, safety signs, and parking-space signs for operational interfaces and customers. Finally, we must mention the design elements used (arrows, symbols, colors) that organize and deliver the information content; the size of letters and numbers must also ensure optimal visibility when indicating platforms, directions, etc.
Signage is the most visible element in a station and is essential to its proper functioning. Signage provides essential information for use and navigation inside the system, helping passengers feel secure and directing them to the entrance and exit points. The information provided guides passengers to the different areas of the railway station and accelerates the process of identifying important points. Proper signage and clear warnings should be displayed at the right distance, while advance warnings should be placed at the station entrance/exit.
The design of the MRT Blok M station sign system consists, in simple terms, of three basic shapes. As a public place, the signage at MRT Blok M Station uses two languages, namely Indonesian and English. For a tourist who is assumed not to understand Indonesian, this pairing can be seen as a type and image relation, in which the unfamiliar text is perceived as a form rather than something that can be read. The arrangement gives a clear hierarchy: if the audience can read Indonesian, they will read that text directly.
On the other hand, if the audience cannot read the Indonesian text, their gaze is directly 'directed' to the English text; this is achieved by placing the two in different configurations. Although the image elements in the MRT Blok M Station wayfinding design are minimal, the symbols or pictograms are drawn to resemble the anatomical characteristics of the letters, with strokes and proportions similar to the type and in the same color. The relationship between type and image also supports the modern theme of the MRT Blok M Station wayfinding design: with a clear and orderly layout in a single visual system, information can be conveyed quickly and precisely. Other type and image relations, such as fusion (where type and image affect each other) or, in the extreme, fragmentation (where both interfere with each other), would not give this impression. Fundamentally, a sign system is expected to be purely functional work, 'clean' of concepts that might interfere with the functional aspects of the design. The essence of a sign system is to give information: directing the audience to a place and guiding them there (wayfinding), clearly informing them of the position of a location (placemaking), and providing information about the area (interpretation). The
Fig. 4 Signage MRT Blok M Station EGD study concept mapping. Source: Author's Documentation, 2022
[Concept map nodes: EGD; type; image; information content system; graphic system; hardware system; signage; metropolitan, modern, professional]
relationship between dark blue, light blue, red, and yellow is inspired by colors often found in the capital city, Jakarta. Allegory is a figure of speech, visual as well as linguistic, that is a symbolic representation [8]: a symbolic representation of an object or character is used as a symbol of an idea or principle. These forms are also an elaboration using the allegory figure of speech, although not as strong as the allegory of blue and the modern. The shape of the signage can be associated with rest, because signage is an essential part of most of modern society; this answers the need for a friendly and safe sign system for the people who use the MRT as their main transportation. Based on the analysis that has been done, the authors made a concept map that explains the relationships among the theories and describes the overall analysis carried out in this study; it can be seen in Fig. 4.
7 Conclusion

The signage studied here is an example of sign system design that provides a new perspective on environmental design: an expressive design that does not forget functionality. The researchers make three points.
1. First, the use of letters and images (icons, symbols, and foreign characters) offers an informative and functional design while emphasizing the main message of the entity represented.
2. Second, the material that is the 'destiny' of a sign system is used as well as possible. Each element reinforces the message of the entity being represented. Moreover, the use of various forms of the sign
system also adds layers of messages and concepts offered by a sign system and the entities it represents. 3. Third, the practical approach used by MRT Jakarta provides a new concept of 'reuse' and a modular sign system. At a time when the need for sign systems is increasingly essential in the life of a growing urban society, a sign system approach that is merely systematic is common. The sign system is required to provide something more and not merely become an 'obligation' of public space. With its various signage and wayfinding elements, this work provides not only practical examples of sign system design but also of identity design, packaging design, and design thinking for designers globally. Therefore, apart from providing a new perspective, this work has also succeeded in providing a new dimension in sign system design, which will continue to influence future sign system designers.
References 1. Calori C, Vanden-Eynden D (2015) Signage and wayfinding design: a complete guide to creating environmental graphic design systems. John Wiley & Sons, Hoboken. https://doi.org/10.1002/9781119174615 2. Landa R (2011) Graphic design solutions, 4th edn. Wadsworth Cengage Learning, Boston 3. What is environmental graphic design (EGD)? (n.d.) Segd.org. https://segd.org/article/what-environmental-graphic-design-egd. Last accessed 24 March 2022 4. Skolos N, Wedell T (2011) Type, image, message: a graphic design layout workshop. Rockport, Massachusetts 5. Martin B, Hanington B (2012) Universal methods of design: 100 ways to research complex problems, develop innovative ideas, and design effective solutions. Rockport Publishers, Massachusetts 6. Hananto BA (2017) Tahapan Desain Sistem Tanda Interior Mini Mart (Studi Kasus: Wayfinding & Placemaking Signage FMX Mart). Jurnal Dimensi DKV 2(2):135–150 7. Lidwell W, Holden K, Butler J (2010) Universal principles of design. Rockport Publishers, Massachusetts. https://doi.org/10.1007/s11423-007-9036-7 8. Meggs PB (1992) Type and image: the language of graphic design. John Wiley & Sons, New York 9. Hara K (2017) Designing design. Lars Müller Publishers, Baden 10. Sejarah | MRT Jakarta (n.d.). https://jakartamrt.co.id/id/sejarah. Last accessed 13 June 2022 11. Tangelo Creative: signage pyramid. https://tangelocreative.com.au/tag/signage-pyramid/. Last accessed 5 March 2022 12. Tanica Billboard: jenis material signage paling populer. https://tanicabillboard.com/2019/08/02/jenis-material-signage-paling-populer-dan-berkualitas/. Last accessed 21 May 2022 13. Railway Pro: signage in stations encourage mobility. https://www.railwaypro.com/wp/signage-in-stations-encourage-mobility/. Last accessed 12 April 2022 14. iNews Megapolitan: Stasiun MRT Blok M sepi hari ini imbas penerapan STRP. https://www.inews.id/news/megapolitan/stasiun-mrt-blok-m-sepi-hari-ini-imbas-penerapan-strp. Last accessed 22 May 2022
Church Online Queuing System Based-On Android Hendry Hendratno, Louis Bonatua, Teguh Raharjo, and Emny Harna Yossy
Abstract The COVID-19 pandemic has changed the way of life for most people in the world. Indonesia's government declared a disaster emergency status related to this outbreak. One of the policies limits the capacity of houses of worship to a maximum of 50%. Managers of religious events therefore face the challenge of limiting the participants who want to join the events. A system is needed to support the managers of houses of worship in managing a queue. This study aims to design and develop a queuing system to resolve these challenges. The waterfall method and object-oriented analysis and design using the Unified Modeling Language (UML) were used to develop the system. The application can be used to manage the queue of the congregation and to reduce crowding of worshipers who attend places of worship without knowing whether the worship quota is still available. The evaluation found that all application functions run as intended. No previous research has explored this specific case; hence this study contributes to academia by exploring the functions and technologies, and to the community, since the system can be extended to other environments, such as hospital, education, or pharmacy queuing systems. Keywords Queuing system · Software development · Object-oriented analysis and design · Unified Modeling Language (UML) · Android
H. Hendratno · L. Bonatua · T. Raharjo · E. H. Yossy (B) Computer Science Department, BINUS Online Learning, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] H. Hendratno e-mail: [email protected] L. Bonatua e-mail: [email protected] T. Raharjo e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_78
1 Introduction In early 2020 the world was shocked by the coronavirus outbreak (COVID-19); the virus was first discovered in Wuhan, China. In Indonesia, the government declared a disaster emergency status related to the outbreak and has taken various anticipatory actions to prevent the pandemic from spreading widely and having further impacts on other sectors [1]. Many countries adopted lockdown policies to overcome the pandemic. Indonesia adapted and modified the policy as the PSBB (Large-Scale Social Restrictions), which includes closing schools and workplaces, restricting religious activities, restricting activities in public places, limiting socio-cultural activities, restricting transportation modes, and restricting other activities specifically related to aspects of defense and security [2]. At the beginning of July 2020, the governor of Jakarta extended the transitional PSBB, regulated in Governor Regulation No. 51 of 2020 concerning the implementation of Large-Scale Social Restrictions during the transition period toward a healthy, safe, and productive society [3]. The regulation impacts religious venues in Jakarta, such as churches and mosques: the maximum capacity of the congregation is 50%. This condition risks causing problems for congregants who want to worship but do not know whether the place of worship has reached its maximum capacity. This research aims to resolve the situation by implementing a queuing system. There is some research related to this study. The first addressed a queuing system that was still operated manually despite many visiting patients; through that research, an application based on an Android SMS gateway was developed using the prototype development method.
The features of that application are managing the registration queue process and providing information related to practices and doctors; it also provides facilities for the admin to manage patient data [4]. In the next research, a multiplatform-based system was developed, and users could easily monitor the running queue number because the system was real-time [5]. The next study addressed a process that was very difficult because everything, from registration, queuing, and patient data management to payments, was still done manually [6]. The researchers compared the previous studies and developed some new features in the queue system. There is also research on this queuing problem concerning the design of a mobile-based queuing application. In that study, an application was developed whose advantage lies in its development method, namely a hybrid app framework, where development is done only once for multiple platforms, for example Android, iPhone, Windows Phone, and web apps. The design has a chat feature for doctors and employees, as well as a news feature for sharing news. The application also provides doctor searches and polyclinic schedules with varying times [7]. Besides that, there is also a study that aims to develop an online system for managing queues at service time. Through that research, the authors developed a web-based online appointment
booking system. The authors claim that the system developed is easier and safer than conventional systems [8]. As a solution to the above problems, and based on the authors' literature review, a system that can manage a queue is needed to facilitate both parties: the owner of the house of worship and the congregation. This study aims to design and develop a queuing system to resolve these challenges. The application can be used to manage the queue of the congregation and reduce crowding of worshipers who attend places of worship without knowing whether the worship quota is still available. By using an Android-based queuing system with online capabilities, the congregation can queue more practically, effectively, and efficiently. The admin, as the queue manager, can organize the queuing system more easily because it can be done in real time.
2 Literature Review 2.1 Queueing System There are three parts or components in a queuing system, namely system entry, the queue, and queue services. The system entry has characteristics such as population size, behaviour, and statistical distribution. The characteristics of a queue include whether the queue is limited in length or not, as well as the discipline of the people or goods in the queue. The characteristics of service facilities include the design and the statistical distribution of service times [9]. Four characteristics define the queuing system: input source, service mechanism, service discipline, and capacity of the system [10].
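The three components above (system entry, a FIFO queue of limited length, and a service facility) can be illustrated with a minimal sketch. The class name, capacity, and customer labels below are hypothetical, for illustration only, and are not part of the paper's implementation:

```python
from collections import deque

class SimpleQueueSystem:
    """Minimal sketch of the three queuing-system components:
    system entry (arrivals), the queue itself (FIFO discipline,
    limited length), and the service facility."""

    def __init__(self, max_length):
        self.max_length = max_length   # limited queue length
        self.waiting = deque()         # FIFO queue discipline

    def arrive(self, customer):
        """System entry: accept an arrival only if the queue is not full."""
        if len(self.waiting) >= self.max_length:
            return False               # arrival rejected: capacity reached
        self.waiting.append(customer)
        return True

    def serve(self):
        """Service facility: take the next customer in FIFO order."""
        return self.waiting.popleft() if self.waiting else None

q = SimpleQueueSystem(max_length=2)
print(q.arrive("A"), q.arrive("B"), q.arrive("C"))  # True True False
print(q.serve())  # A
```

The limited queue length corresponds to the "capacity of the system" characteristic [10]; an unlimited queue would simply drop the length check.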
2.2 Application Program Interface (API) Many applications use APIs to interact with other applications. A good API must meet the following criteria [11]: 1. Platform independence: all clients must be able to call the API, regardless of how the API is internally implemented and executed. An API requires a standard protocol and a mechanism by which the client and the web service can agree on a data format to exchange. 2. Service evolution: the API should be able to evolve without breaking existing clients. A REST API is an architectural approach designed for web services that focuses on system resources, with data transferred and requested using HTTP [12]. Representational State Transfer (REST) was proposed by Roy Fielding in 2000 as an architectural approach to designing web services; REST is an architectural style for building distributed systems based on hypermedia. The method used in the API is dependency injection. The basis of the dependency
injection method is to have a separate object, the assembler, which fills the field in the lister class with an implementation suitable for the search interface, thereby changing the dependency diagram.
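The assembler/lister/finder vocabulary above follows Fowler's dependency-injection article [11], which uses a movie lister and a movie-finder search interface as its running example. A hedged Python sketch of that pattern (all class and function names are illustrative, not taken from the paper's codebase) might look like:

```python
class MovieFinder:
    """The 'search interface' the lister depends on."""
    def find_all(self):
        raise NotImplementedError

class InMemoryMovieFinder(MovieFinder):
    """One concrete implementation of the search interface."""
    def __init__(self, movies):
        self._movies = movies
    def find_all(self):
        return list(self._movies)

class MovieLister:
    """The 'lister class': it never constructs its own finder."""
    def __init__(self, finder):
        self.finder = finder  # dependency injected via the constructor
    def movies_directed_by(self, director):
        return [m for m in self.finder.find_all()
                if m["director"] == director]

def assemble():
    """The 'assembler': wires a concrete implementation into the lister."""
    finder = InMemoryMovieFinder([
        {"title": "Movie A", "director": "Jane"},
        {"title": "Movie B", "director": "John"},
    ])
    return MovieLister(finder)

lister = assemble()
print([m["title"] for m in lister.movies_directed_by("Jane")])  # ['Movie A']
```

Because only the assembler knows the concrete finder, swapping in a database-backed or REST-backed finder requires no change to the lister, which is the point of inverting the dependency.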
2.3 Flutter Flutter is an open-source SDK developed by Google for building high-performance, high-fidelity applications for multiple platforms from a single codebase [13]. Flutter is a portable UI toolkit introduced by Google to develop cross-platform applications; it provides its own set of interfaces and its own rendering mechanism, and is claimed to be able to compete with other cross-platform solutions [14]. Briefly, the differences between Flutter, browser-based solutions, and React Native are as follows: browser-based solutions use the platform's web view to render HTML and CSS; React Native uses native components; while Flutter renders its own UI and sends it to the engine running on that platform, in this case the Dart virtual machine.
2.4 Software Development Life Cycle (SDLC) SDLC is a method for designing a system that moves continuously like a wheel through several steps, including investigation, analysis, design, implementation, and maintenance. SDLC has several development methods; the one used for this research is the waterfall. The waterfall is a plan-based development model, so the developer must make a work plan and define the work process before software development is carried out. The waterfall process is as follows: requirements analysis and definition, system and software design, implementation and unit testing, integration and system testing, operation and maintenance [15].
2.5 Unified Modeling Language (UML) UML is a visualization of iterative and incremental processes [16]. According to [16], UML is not a software development tool, a programming language, or a specific platform used in system development; it separates the modeling method from the modeling language. UML is a system design tool based on the object-oriented method: it analogizes a system to a process in real life, consisting of objects and processes denoted by specific symbols. UML is depicted in several diagrams, namely use case diagrams, activity diagrams, class diagrams, and sequence diagrams.
2.6 Evaluation Evaluation is carried out on the system and the user interface. The system is evaluated using the black box testing method. Black box testing is a software testing stage in which the focus is on the functional specifications of the software; it allows software developers to create a set of input conditions that are tested against the functional requirements of a program [17]. The user interface is evaluated using the eight golden rules: strive for consistency, cater to universal usability, offer informative feedback, design dialogs to yield closure, prevent errors, permit easy reversal of actions, support internal locus of control, and reduce short-term memory load [18].
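As an illustration of black box testing, the sketch below exercises a hypothetical 50%-capacity rule purely through inputs and expected outputs, with cases chosen around the boundary in the spirit of boundary value analysis [17]. The function, its name, and the thresholds are assumptions for illustration, not the application's actual code:

```python
def quota_available(registered, capacity, limit_ratio=0.5):
    """Functional spec under test (assumed): a queue may accept
    registrations only while registered < capacity * limit_ratio
    (the 50% capacity rule)."""
    return registered < capacity * limit_ratio

# Black-box test cases: only inputs and expected outputs, chosen
# around the boundary (capacity 100 -> effective limit 50).
cases = [
    (0, 100, True),    # empty: must accept
    (49, 100, True),   # just below the limit
    (50, 100, False),  # exactly at the limit: must reject
    (51, 100, False),  # above the limit
]
for registered, capacity, expected in cases:
    assert quota_available(registered, capacity) is expected
print("all black-box cases passed")
```

Note that the test knows nothing about how the function is implemented; it only checks the functional requirement, which is what distinguishes black box from white box testing.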
2.7 Previous Research In the first related study, the authors developed an online registration information system to reduce density in outpatient queues and to make it easier for administrators to input patient data without having to write in books or rely on Microsoft Office. The authors developed a web-based system using the waterfall development method and emphasized the importance of system design based on analysis of existing problems. Through the developed system, the authors hope to provide a medium for related patient health services [19]. The next study addressed manual queue-taking and the lack of valid sources of information related to pregnancy. Although the application is limited to taking numbers online, it provides various information related to pregnancy, pregnancy health, and pregnancy visits [20]. Further previous research was conducted by Alfin Siddik Amrullah and Ema Utami under the title Implementation of "Pop Up Notification" in the Online Queuing System at Website & Android Based Health Clinics. The researchers developed a real-time online queuing application. The concept of the online queuing system at health clinics consists of three main components, namely a website-based application for the clinic admin panel, an Android-based application for patients, and a web server [21].
3 Research Methods The research methodology is shown in Fig. 1. It begins with the formulation and identification of problems. The identified problem definition is the need for the queuing system for the managers of the house of worship and their congregation to apply the maximum 50% of capacity. A hypothesis was drawn which will be tested through
Fig. 1 Research methodology
the research conducted. The hypothesis was adjusted to the background and the existing problems. Data collection was carried out in several ways, namely observation, interviews, and literature studies. Observations and interviews were conducted to obtain primary data, while literature studies were conducted to obtain secondary data. Observations and interviews determine the limitations of the research so that it can be measured and directed; literature studies provide references, the state of the art, and the development method that best suits the existing problems. From the resulting boundaries and scope, the system requirements were defined so that the system created can be effective and able to answer the existing problems. The next stage was to design the user interface and database, which are the basis for the system development process. After development, the application moved to the testing phase, where black box testing was carried out, as well as the eight golden rules evaluation [22]. After passing the testing phase, the application
can be implemented and put into use, which is then followed by compiling research conclusions and suggestions for the future.
4 System Design The system design started with collecting data through church interviews. The results of the interviews are that the system should help congregations book quotas flexibly, let congregations and churches monitor the number of queues or quotas in real time, increase time and cost efficiency, and reduce crowding of congregants at any one time. After that, the researchers compared two similar applications. The comparison yielded two actors and two use cases, namely logging in and looking for a queue. For further system development based on the interviews and the application comparison, the system can be accessed by two users (congregation and admin) with seven use cases (login, create a queue, search a queue, close a queue, send a message (chat), share a link, scan the QR code). Application development uses the SDLC with the waterfall method. The system design utilizes the object-oriented approach using UML. We constructed the use case model, use case diagram, activity diagram, sequence diagram, state diagram, and class diagram. An entity-relationship diagram was created to model the database. The use case model was designed to describe the user requirements; there are two actors and ten use cases in the diagram. Each use case was described in the use case specification documentation, and the use cases represent the features of the application. The actors are users and system administrators. The use cases for users consist of login, create a queue, search a queue, close a queue, send a message (chat), share a link, and scan the QR code. The use cases for the system administrator consist of login, search a queue, join a queue, send a message (chat), send a link, show the QR code, and quit a queue. We utilized activity diagrams to demonstrate the detailed flow for each use case. The first activity diagram is for the login process.
Users must log in first: the user opens the application, then chooses whether to log in with a Google account or just a name; after a successful login, the dashboard page opens. The queue creation process begins on the dashboard page, where the user can select the option to create a queue. The system requires the user to input the queue details containing all information related to the queue, then select the type of queue. When finished, the user can immediately create the queue; once it is successfully created, the queue is displayed in the queue list. There is also a feature for joining a queue. It begins with the user on the dashboard page selecting the search queue menu, after which the user enters a queue keyword or immediately selects a queue that is already displayed. A queue can admit several people who want to be included. The user then registers: if the quota is available, the user immediately enters the queue, but if the quota is not available, the application displays a message that the quota is insufficient. The QR code scan process begins when
the user opens the queue and the system shows the QR code. The queue maker or admin scans the QR code and performs manual verification of whether the participant complies with the number and conditions that have been set. If appropriate, the queue maker or admin accepts, and the status in the participant table changes; if not, the operator/queue maker can reject the user. The system evaluation uses black box testing with the user acceptance test method, which contains questions about whether each application function runs as expected. The user interface is evaluated against the eight golden rules by designing the interface according to the definition of each rule.
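The join and verification flow described above can be sketched roughly as follows. The class, status labels, and quota numbers are illustrative assumptions, not the application's actual code:

```python
class ChurchQueue:
    """Sketch of the join-with-quota-check and QR-verification flow
    (names and statuses are hypothetical)."""

    def __init__(self, quota):
        self.remaining = quota
        self.participants = {}  # name -> "registered" | "verified" | "rejected"

    def join(self, name, group_size=1):
        """Quota check before entering the queue: admit the whole
        group or show the 'quota insufficient' message."""
        if group_size > self.remaining:
            return "quota insufficient"
        self.remaining -= group_size
        self.participants[name] = "registered"
        return "registered"

    def verify(self, name, ok):
        """Admin scans the QR code, checks the group manually, and
        accepts or rejects; the participant's status changes."""
        if name in self.participants:
            self.participants[name] = "verified" if ok else "rejected"
        return self.participants.get(name)

q = ChurchQueue(quota=3)
print(q.join("Andi", group_size=2))   # registered
print(q.join("Budi", group_size=2))   # quota insufficient
print(q.verify("Andi", ok=True))      # verified
```

In the real system the quota and participant statuses live in the database and the check runs server-side, so two users registering at the same moment cannot both consume the last slot.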
5 Result and Discussion 5.1 System Specification The system specification consists of the database server, web server, browser, and the Android operating system on the mobile device. For the browser and Android, a minimum specification is required; the application allows higher specifications. Database server specification: 127.0.0.1 via TCP/IP, server type Microsoft SQL Server 2016, server version 13.0.4259.0, user sa. Web server specification: IIS (Internet Information Services) version 10.0.17763.1, UrlScan 3.1, .NET Core hosting 3.1, .NET Core SDK 3.1, .NET Core runtime 3, .NET Core aspnetcore 3.1. Browser minimum specification: IE (Internet Explorer) 10.0, Mozilla Firefox, Opera, or Chrome.
5.2 Application Installation The application is divided into a front end and a back end. The back end is created first: on IIS, create a new site named api.ngantrionline.com. Once this is done, the back end is accessible; the front end is then deployed in the same way, by creating a new site on IIS named live.ngantrionline.com. The front-end installation is performed by downloading the application from the apk.ngantrionline.com link. When the download succeeds, the user can open the application on the device.
5.3 Application Flow In this section, we explain the flow of the application, which we named Ngantri Online. The application uses the Indonesian language. There are two
Fig. 2 Application flow for admin
Fig. 3 Application flow for user
user groups: the admin and the end user. The admin is responsible for creating a queue, while the end user uses the application to join a queue. Figure 2 shows the application flow for the administrator. After logging in to the system, the admin sees two main functions: creating a queue and searching for a queue. When the admin has successfully created a queue, the system shows a list of the queues that have been created. In the detailed queue transaction, the admin can see the list of users who registered for the event before joining and can perform verification for each user. Figure 3 shows the application flow from the user side.
5.4 Evaluation For the functionality test, we used black box testing covering the functionalities to be tested. Each test includes a scenario, description, steps, and the expected results. Test scenarios cover the login feature, queue list, create queue, chat, share link, close queue, scan QR code, and logout. In addition, a user acceptance test was conducted using the interview method. Interviews were conducted with representatives of the Daily Workers Council. The results of the interviews are that the appearance of the application is considered good by the Daily Workers Council, the features in the application are considered to have met the needs, the application is considered very helpful in terms of time, and the Church is satisfied with the application that has been made. All the tests were executed, and the stakeholders confirmed their agreement with the results. The next evaluation is of the user interface, based on the Eight Golden Rules of Human–Computer Interaction. Strive for consistency: the application interface maintains consistency in font style, font colour, icons, and overall design, as can be seen on the Search Queue and Create a Queue pages. Cater to universal usability: with the help of the search shortcut, users can easily search for existing queues. Offer informative feedback: one instance of informative feedback we can
see when the admin wants to delete a queue that has been created: there is feedback to the user asking for confirmation. Design dialogs to yield closure: if the admin wants to close a queue, the application displays a message to ensure that the action is intentional and not due to an error. Prevent errors: the application prevents an error when the admin wants to scan the QR code but no congregant has queued yet. Permit easy reversal of actions: for example, on the Search Queue page there is a left-arrow icon that returns to the previous page when pressed. Support internal locus of control: the application provides flexibility in navigation, as evidenced by the menu bar, so that users can easily move to the desired page. Reduce short-term memory load: the application uses a concise and clear UI so that users do not have to remember unnecessary things while accessing the application.
6 Conclusion Based on the analysis, design, and implementation of this application, it can be concluded that the application can help managers of places of worship manage queues during the transitional PSBB period in Indonesia, provide information regarding the number of congregants who can attend an event, and reduce crowding of worshipers who attend places of worship without knowing whether the worship quota is still available. The authors realize that the developed application has various shortcomings. Suggestions for further development are that the application should run on the iOS platform in the future, provide a better user experience, and be extended to other business processes, such as education, hospital, pharmacy, or other events that require a queuing system. In addition, this application was developed to cater to queuing in houses of worship in Indonesia; it can be extended to relevant queuing requirements for specific events in houses of worship, such as church events, Islamic activities in mosques, and other religious events. Queuing activities in hospitals, pharmacies, and education classes can also utilize the design and features. Acknowledgements The authors thank the Online Learning Computer Science Study Program at Bina Nusantara University for its guidance in completing this research, and Bina Nusantara University for the funds provided for publication.
References 1. Buana DR (2020) An analysis of Indonesian people's behavior in facing the coronavirus pandemic (Covid-19) and tips for maintaining mental welfare. SALAM J Sos Dan Budaya Syar-I 7(3). https://doi.org/10.15408/sjsbs.v7i3.15082 2. Muhyiddin (2020) Covid-19, new normal, and development planning in Indonesia. J Perenc Pembang Indones J Dev Plan 4(2):240–252. https://doi.org/10.36574/jpp.v4i2.118 3. Governor's order: implementation of large-scale social restrictions (2020). https://jdih.jakarta.go.id/himpunan/produkhukum_detail/10203 4. Lubis H, Nirmala ID, Nugroho SE (2019) Designing an online queue information system for patients at Seto Hasbadi hospital using an Android-based SMS gateway, pp 79–91 5. Junirianto E, Fadhliana NR (2019) Development of Samarinda realtime online queue application. Sebatik 23:513–516 6. Alhamidi IE, Asmara R (2019) E-registration and patient queuing systems at doctor's practices at pharmacies. J Click 6(1):130–144. http://ejurnal.jayanusa.ac.id/index.php/J-Click/article/view/114 7. Zulfikar AA, Supianto RA (2018) Design and build mobile-based polyclinic queuing applications. J Teknol Inf Dan Ilmu Komput 5(3):36 8. Mohamad E, Jiga IA, Rahmat R, Azlan MA, Rahman A, Saptari A (2019) Online booking systems for managing queues at the road transport department. JIE Sci J Res Appl Ind Syst 4(1):21. https://doi.org/10.33021/jie.v4i1.745 9. Heizer J, Render B, Munson C (2017) Operations management: sustainability and supply chain management. Pearson 10. Bhunia AK, Sahoo L, Shaikh AA (2019) Advanced optimization and operations research. Springer Optimization and Its Applications, vol 153. https://doi.org/10.1007/978-981-32-9967-2_1 11. Fowler M (2004) Inversion of control containers and the dependency injection pattern. https://martinfowler.com/articles/injection.html 12. Suzanti IO, Fitriani A, Jauhari N, Khozaimi A (2020) REST API implementation on Android based monitoring application. J Phys Conf Ser 13. Madhuram M, Kumar A, Pandyamanian M (2019) Cross platform development using Flutter. Int J Eng Sci Comput 9(4):21497–21500 14. Dagne L (2019) Flutter for cross-platform app and SDK development. https://www.theseus.fi/bitstream/handle/10024/172866/LukasDagneThesis.pdf?sequence=2&isAllowed=y 15. Sommerville I (2010) Software engineering, 9th edn. Pearson Education 16. Visual Paradigm (2020) What is Unified Modeling Language (UML)? https://www.visual-paradigm.com/guide/uml-unified-modeling-language/what-is-uml/. Accessed 18 May 2020 17. Jaya T (2018) Application testing with the black box testing boundary value analysis method (case study: Lampung State Polytechnic digital office). J Informatics J IT Dev 3(1):45–48 18. Shneiderman's eight golden rules of interface design (2020). https://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html 19. Christian F, Ariani (2019) Web-based online patient registration information system for outpatients. J Manaj Inform 6(2):71–80 20. Armelia SD, Agasia W (2018) Web-based application queue design for pregnancy visits. J Enter 1:81–91. https://www.stmikpontianak.ac.id/ojs/index.php/enter/article/view/797 21. Amrullah AS, Utami E (2018) Implementation of 'pop up notification' in online queuing systems at website & Android-based health clinics. Semin Nas Teknol Inf dan Multimed 22. Interaction Design Foundation: worksheet: how you can apply Shneiderman's 8 golden rules to your interface designs. https://www.interaction-design.org/literature/article/shneiderman-s-eight-golden-rules-will-help-you-design-better-interfaces. Accessed 18 May 2020
Tourism Recommendation System Using Fuzzy Logic Method Arinda Restu Nandatiko, Wahyu Fadli Satrya, and Emny Harna Yossy
Abstract After struggling with the COVID-19 pandemic for a year, traveling is something that Indonesians need. Tulungagung Regency is one of the tourist destinations in East Java Province; however, many tourists do not know the tourist attractions in Tulungagung Regency. A medium is needed that can help novice tourists prepare all their needs, especially in areas they have never visited. This thesis provides an application that helps tourists plan a vacation in Tulungagung Regency. The application provides recommendations for tourist attractions according to the budget determined by the tourist. The recommendation system is built using the Fuzzy Tahani method with tourism budget and disaster history as criteria. The application also provides information on tourist destinations, lodging, and restaurants in Tulungagung Regency; this information is kept up to date by an admin in charge of updating it. Through this application, all the needs of tourists during their vacation can be met without confusion over expenses. The result of this thesis is a list of recommended tourist attractions for tourists to visit. The accuracy of the recommendation system is 84.9%. Keywords Tulungagung regency · Tourism · Budget · Fuzzy logic · Disaster
A. R. Nandatiko · W. F. Satrya · E. H. Yossy (B) Computer Science Department, BINUS Online Learning, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] A. R. Nandatiko e-mail: [email protected] W. F. Satrya e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_79
1 Introduction Tourism is an important economic sector in Indonesia [1]. In 2019, tourism ranked fourth in foreign exchange earnings, after the oil and gas, coal, and palm oil commodities [2]. Java is one of Indonesia's largest and most populous islands; in 2019, 56.35% of Indonesia's total population lived on Java [3]. Tulungagung Regency is a regency located in East Java Province, Indonesia. Its nature tourism industry is well developed thanks to its geographical location on the edge of the Indian Ocean. Efforts to develop tourism can also make use of existing technology: in the past few years, artificial intelligence has been widely used to build systems able to "read" human desires [4]. Since before the pandemic, the Tulungagung Regency Government has been developing its tourism sector by improving facilities and infrastructure at tourist locations. In addition to local government applications whose main function is to provide general information, several applications with a similar purpose recommend tourist attractions in other cities. Based on this background, a tourism recommendation system for Tulungagung Regency was built using the Fuzzy Tahani method [5]. This system helps the public and tourists who are unfamiliar with Tulungagung Regency choose tourist locations and supporting holiday services that suit their wishes with just one application.
2 Literature Review 2.1 Tulungagung District Tulungagung Regency is located at 111°43'–112°07' east longitude and 7°51'–8°18' south latitude [6]. It is bordered in the north by Kediri Regency, precisely by Kras District; in the east by Blitar Regency; in the south by the Indian Ocean; and in the west by Trenggalek Regency. Its total area of 1055.65 km² is divided into 19 districts, 14 urban villages, and 257 villages [7]. Due to its geographical location, Tulungagung Regency has plenty of tourism potential, both natural and artificial.
2.2 Fuzzy Logic Method Fuzzy Tahani is a branch of fuzzy logic, which uses standard databases [5]. Tahani describes a fuzzy query processing method, based on the manipulation of a language
known as SQL (Structured Query Language), so the fuzzy Tahani model is well suited to searching for precise and accurate data [5]. Fuzzy logic is an appropriate way to map an input space into an output space [8]. • Fuzzy set: In a crisp set, the membership value has only two possibilities, 0 and 1. In a fuzzy set, the membership value lies in the range 0 to 1: a membership value of 0 means that x is not a member of set A, while a membership value of 1 means that x is a full member of set A. A fuzzy set has two attributes, linguistic and numerical. Several notions need to be known to understand fuzzy systems: fuzzy variables, fuzzy sets, the universe of discourse, and domains. • Membership function: The membership function is a curve that maps input data points to their degree of membership, which lies in the interval 0 to 1. Several function shapes can be used, including linear, triangular, trapezoidal, shoulder-shaped, S-curve, and bell-shaped representations. • Basic Zadeh operators: As with conventional sets, several operations are specifically defined for combining and modifying fuzzy sets. There are three basic operators created by Zadeh: the AND, OR, and NOT operators.
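The three basic Zadeh operators can be sketched in a few lines of Python (an illustrative sketch; the membership values in the example are hypothetical, not from the paper):

```python
# Zadeh's basic fuzzy operators: AND = min, OR = max, NOT = complement.

def fuzzy_and(mu_a: float, mu_b: float) -> float:
    """Intersection of two fuzzy membership degrees (Zadeh AND)."""
    return min(mu_a, mu_b)

def fuzzy_or(mu_a: float, mu_b: float) -> float:
    """Union of two fuzzy membership degrees (Zadeh OR)."""
    return max(mu_a, mu_b)

def fuzzy_not(mu_a: float) -> float:
    """Complement of a fuzzy membership degree (Zadeh NOT)."""
    return 1.0 - mu_a

# Example: an attraction that is 0.7 "cheap" and 0.4 "safe".
print(fuzzy_and(0.7, 0.4))  # 0.4
print(fuzzy_or(0.7, 0.4))   # 0.7
print(fuzzy_not(0.7))       # ≈ 0.3
```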
2.3 System Evaluation Methods When building a system or application, the application must go through a testing stage before it is released to users, with the aim of finding errors (bugs) [9]. Based on the test method, there are two categories of testing or evaluation. The first is the White-Box method, whose main purpose is to ensure that all statements and conditions have been executed at least once [10]. The second is the Black-Box method, which tests without looking at the contents of the application: testing is done by providing input, performing a set of events according to the instructions given, and then checking the resulting output [9]. Based on this explanation, this study uses the black-box method as its system evaluation method.
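Black-box testing with boundary value analysis can be illustrated with a short Python sketch; `validate_budget` is a hypothetical system-under-test with an assumed valid range, not a function from the actual application:

```python
# Black-box boundary value analysis sketch. The tester only supplies
# inputs and checks outputs, without inspecting the implementation.
def validate_budget(budget: int) -> bool:
    # Hypothetical rule: budgets between Rp10,000 and Rp10,000,000 are valid.
    return 10_000 <= budget <= 10_000_000

# Boundary-value cases: just below, on, and just above each boundary.
cases = {
    9_999: False, 10_000: True, 10_001: True,
    9_999_999: True, 10_000_000: True, 10_000_001: False,
}
for value, expected in cases.items():
    assert validate_budget(value) == expected
print("all boundary cases passed")
```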
2.4 Previous Research Several studies have developed tourism recommendation systems. The first study discusses a decision support system for determining tourist objects in Banyuwangi Regency according to criteria specified by the user. The method used is Fuzzy Tahani, because it is considered capable of evaluating several alternatives against a set of attributes or criteria. The criteria used in that study are the price of tourist tickets, the distance of the tourist attraction from the
city center, the price of lodging, and the number of visitors. The result of this research is the recommendation of the best tourist attraction that the user can visit [11]. The second study aims to determine the best route to tourist sites in Surabaya. The method used is Sugeno’s Fuzzy logic by obtaining membership degrees by fuzzification then implicated in the appropriate route, followed by a defuzzification process to produce optimal output. The criteria used are the distance of tourist attractions, travel time, and road density. This research data was obtained through Google Maps by comparing the three paths and the optimal path will be determined which will be passed to get to the selected tourist attractions. The results of this study are six optimal paths from each starting point [12]. The third study discusses the recommendation system and the distribution of tourism potential in the Greater Malang Region. The method used is a combination of the Fuzzy Mamdani method and sentiment analysis. The use of the fuzzy method to find the value or degree of each tourist attraction through four criteria, namely distance, budget, disaster history, and sentiment value. Meanwhile, the use of sentiment analysis to find the objective value of each tourist attraction is obtained from the comparison of the positive values of the review of the tourist attraction. The final results of this study are two recommendations based on sentiment values and fuzzy values that have gone through the calculation process [13]. Further research discusses the recommendation system for tourist attractions using Android-based Fuzzy Tahani. The research took a case study in the city of Bandung. The system works by providing a recommendation in the form of a tourist spot recommendation based on the tourism criteria parameters and ticket prices entered by the user. 
The recommendations given by the application have gone through a data processing stage using Fuzzy Tahani, making it easier for tourists to get information and directions to the tourist attractions they intend to visit [14]. The last study aims to make it easier for tourists to choose tourist objects in Pasuruan Regency using the Fuzzy Tahani method. The criteria used are price, visiting time, facilities, and number of visitors. The result of that study is a decision support system for the selection of tourist objects [15].
3 Research Methods 3.1 Fuzzy Tahani Methods There are two criteria for the recommendation process, namely budget and disaster history; the budget criterion is variable, because the budget is user input. The first stage of the fuzzy method is calculating the costs incurred by the user. The nominal cost is obtained from user input: the number of tourists (adults or children) and the number of vehicles (motorcycles, cars, or buses) used. The cost value is calculated at the fuzzification stage. In addition, the user is asked to input the travel budget. The system calculates how much the trip costs with the formula:
Table 1 Range of disaster history fuzzy set

Variable name       Universe of discourse
                    Lower   Middle   High
Disaster history    0–9     5–12     >9
Cost = (number of motorbikes × motorbike parking price) + (number of cars × car parking price) + (number of buses × bus parking price) + (number of adult tourists × adult entrance ticket price) + (number of child tourists × child entrance ticket price)   (1)

After calculating the cost, the system checks, for each tourist attraction, whether the cost exceeds the budget entered by the user. If the cost is greater than the budget, the tourist attraction does not enter the fuzzy calculation in the next stage. Next, the fuzzification value is calculated. The fuzzification value is the highest (MAX) of the degrees of membership. To obtain the degrees of membership, three conditions are defined over the universe of discourse (range set). The range of fuzzy sets for the disaster history criterion is shown in Table 1. The range of the disaster history fuzzy set is obtained by taking the average of the disaster history data, then dividing by three to determine the lower, middle, and upper limits. This method was adapted from research entitled Recommendations for Tourism Potential in the Greater Malang Region on Social Media Using the Fuzzy Method and Sentiment Analysis [10]. Illustrated, the diagram looks like Fig. 1. Membership function range: a = 5; b = 9; c = 12. The degree of membership in the disaster history variable can be formulated as follows:

µ cheap(x) = { 1, if x ≤ a; (b − x)/(b − a), if a ≤ x < b; 0, if x ≥ b }   (2)

Fig. 1 Disaster history membership degree
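The trip-cost formula (1) can be sketched in Python; the price table below is a placeholder (the bus parking price in particular is an assumption chosen to be consistent with the worked example in Sect. 4.2, not a value stated by the paper):

```python
# Sketch of trip-cost formula (1). Parking and ticket prices are
# per-attraction values from the application's database; the figures
# below are illustrative placeholders.
def trip_cost(n_motorbikes, n_cars, n_buses, n_adults, n_children, prices):
    return (n_motorbikes * prices["motorbike_parking"]
            + n_cars * prices["car_parking"]
            + n_buses * prices["bus_parking"]
            + n_adults * prices["adult_ticket"]
            + n_children * prices["child_ticket"])

prices = {"motorbike_parking": 2_000, "car_parking": 5_000,
          "bus_parking": 10_000, "adult_ticket": 5_000, "child_ticket": 2_000}
# 0 motorbikes, 1 car, 1 bus, 20 adults, 10 children:
print(trip_cost(0, 1, 1, 20, 10, prices))  # 135000
```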
Table 2 Range of fuzzy set of costs

Variable name   Universe of discourse
                Lower       Middle           High
Cost            0–200,000   50,000–500,000   200,000–1,000,000
µ average(x) = { 0, if x ≤ a or x ≥ c; (x − a)/(b − a), if a ≤ x < b; (c − x)/(c − b), if b ≤ x < c }   (3)

µ high(x) = { 0, if x ≤ b; (x − b)/(c − b), if b ≤ x < c; 1, if x ≥ c }   (4)
Then the travel cost criterion. Based on the cost calculations described previously, the fuzzy set range of costs is obtained as shown in Table 2. The cost membership function can be seen in Fig. 2. Membership function range: a = 50,000; b = 200,000; c = 500,000. Membership of the cost variable is formulated using formulas 2, 3, and 4. After finding the fuzzification values, the researcher uses the MAX method: the system takes the highest degree of membership of each data item in each criterion. The next stage is to determine the rules used as benchmarks for recommendations (rule evaluation). The fuzzy rules for this research are shown in Fig. 3. The last stage is defuzzification. The Fuzzy Tahani method calculates the recommendation (fuzzy) value with the formula:

Recommendation = (sum of criteria values) / (number of criteria)   (5)

Fig. 2 Fee membership function
Cost \ Disaster frequency   Little                Moderate           Many
Expensive                   Recommended           Not Recommended    Not Recommended
Moderate                    Highly Recommended    Recommended        Not Recommended
Cheap                       Highly Recommended    Recommended        Not Recommended

Fig. 3 Recommendation rules
The output for the user is a list of tourist attractions with the “Highly Recommended/Recommended” rule and the highest recommendation value.
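The rule matrix of Fig. 3 can be encoded as a simple lookup table, a minimal sketch using the class labels from the figure:

```python
# Rule matrix from Fig. 3: (cost class, disaster frequency) -> recommendation.
RULES = {
    ("Expensive", "Little"): "Recommended",
    ("Expensive", "Moderate"): "Not Recommended",
    ("Expensive", "Many"): "Not Recommended",
    ("Moderate", "Little"): "Highly Recommended",
    ("Moderate", "Moderate"): "Recommended",
    ("Moderate", "Many"): "Not Recommended",
    ("Cheap", "Little"): "Highly Recommended",
    ("Cheap", "Moderate"): "Recommended",
    ("Cheap", "Many"): "Not Recommended",
}
print(RULES[("Moderate", "Little")])  # Highly Recommended
```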
4 Result and Discussion 4.1 Features Based on the literature study described above, this study uses the black-box method for testing, which verifies that existing features run as intended. • User main page: This page displays tourist attractions, hotels/inns, and restaurants/culinary spots in Tulungagung Regency. If the user clicks on a place, its details appear. This feature runs according to its purpose. • Travel budget input: On this page the user enters the total travel budget, chooses the type and number of vehicles they want to use, and enters the number of people traveling, to find out the total cost of the tour. This page successfully saves the data entered by the user. • Travel recommendation results: After entering the budget, the user is directed to the recommendation results page, which displays a list of tourist attractions the user can visit with the travel budget entered on the previous page.
4.2 Fuzzy Trial In this trial, the user's trip consists of: 0 motorbikes, 1 car, 1 bus, 20 adult tourists, and 10 child tourists. The system then calculates the user input against the data in the database.
Table 3 Budget check (in Rupiah)

Tourist attraction      Cost      Budget    Cost < Budget?
Budheg Mountain         135,000   500,000   Yes
Banyu Mulok Beach       30,000    500,000   Yes
Coro Beach              130,000   500,000   Yes
Kedung Tumpang Beach    135,000   500,000   Yes
Klathak Beach           15,000    500,000   Yes
Splash Waterpark        580,000   500,000   No
Jambooland Waterpark    615,000   500,000   No
Suppose we take one example, the Mount Budheg tourism data; the required cost is: Cost = (0 × 2,000) + (1 × 5,000) + (1 × 10,000) + (20 × 5,000) + (10 × 2,000) = 135,000 (6). Before the data is processed, the system first checks whether the cost exceeds the input budget. If cost > budget, the tourist attraction is not included in the fuzzy calculation. Details of the example data are shown in Table 3. Because there are tourist attractions whose costs exceed the budget, those attractions (Jambooland Waterpark and Splash Waterpark) do not enter the next stage. The system then enters the fuzzification stage. Figure 4 shows the universe of discourse of the criteria. For the disaster history criterion, the range values are a = 5, b = 9, c = 12. Taking Mount Budheg, which has 3 recorded disasters, the membership value for each set is: µ cheap(x) = 1, µ average(x) = 0, and µ high(x) = 0. A table created for all the data is shown in Table 4. Then the cost is calculated. To calculate the cost fuzzification, the diagram shown in Fig. 5 is used. Fig. 4 Fuzzy diagram of disaster history
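The budget pre-filter described above can be sketched as a list comprehension over the Table 3 data (illustrative only, not the authors' implementation):

```python
# Budget pre-filter sketch: attractions whose computed cost exceeds the
# user's budget are excluded from the fuzzy stage (costs from Table 3).
costs = {
    "Budheg Mountain": 135_000,
    "Banyu Mulok Beach": 30_000,
    "Coro Beach": 130_000,
    "Kedung Tumpang Beach": 135_000,
    "Klathak Beach": 15_000,
    "Splash Waterpark": 580_000,
    "Jambooland Waterpark": 615_000,
}
budget = 500_000
eligible = [name for name, cost in costs.items() if cost <= budget]
print(eligible)  # the five attractions within the 500,000 budget
```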
Table 4 Fuzzification of disaster history

Tourist attraction      Disaster history   Degree of membership
                                           Lower   Average   High
Budheg Mountain         3                  1       0         0
Banyu Mulok Beach       5                  1       0         0
Coro Beach              5                  1       0         0
Kedung Tumpang Beach    1                  1       0         0
Klathak Beach           5                  1       0         0
Fig. 5 Fuzzy diagram of cost criteria
Then the range values become a = 50,000, b = 200,000, c = 500,000, so that for x = 135,000: µ cheap(x) = 0.43, µ average(x) = 0.57, and µ high(x) = 0. A table created for all the data is shown in Table 5.

Table 5 Fuzzification of cost

Tourist attraction      Cost      Degree of membership (µ)
                                  Cheap    Average   High
Budheg Mountain         135,000   0.4333   0.5667    0
Banyu Mulok Beach       30,000    1        0         0
Coro Beach              130,000   0.4667   0.5333    0
Kedung Tumpang Beach    135,000   0.4333   0.5667    0
Klathak Beach           15,000    1        0         0

From the two tables of membership degrees for disaster history and cost, the system looks for the greatest value that each tourist spot has (Table 6). The next stage is rule evaluation, where the rules of Fig. 3 are used. According to Fig. 3, the conclusions can be drawn by considering the cost and disaster classes (Table 7). After obtaining the recommendation values and rules, the next stage is defuzzification, which reaffirms the recommended tourist attractions. For the defuzzification stage, the researcher uses the Mean of Maximum (MOM) method,
Table 6 Max membership degree

Tourist attraction      Disaster history   Cost
Budheg Mountain         1                  0.5667
Banyu Mulok Beach       1                  1
Coro Beach              1                  0.5333
Kedung Tumpang Beach    1                  0.5667
Klathak Beach           1                  1

Table 7 Evaluation rule

Tourist attraction      Cost      Disaster   Conclusion
Budheg Mountain         Average   Low        Highly recommended
Banyu Mulok Beach       Cheap     Low        Highly recommended
Coro Beach              Average   Low        Highly recommended
Kedung Tumpang Beach    Average   Low        Highly recommended
Klathak Beach           Cheap     Low        Highly recommended

Table 8 Recommendation value

Tourist attraction      Disaster   Cost       Recommendation
Budheg Mountain         1          0.566667   0.783333
Banyu Mulok Beach       1          1          1
Coro Beach              1          0.533333   0.766667
Kedung Tumpang Beach    1          0.566667   0.783333
Klathak Beach           1          1          1
which means taking the average value of the domain that has the maximum membership value [12]. To make it easier for system users, the values are sorted from the largest (Table 8). The results of the system recommendation after sorting can be seen in Table 9.
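The averaging and sorting steps above can be sketched in Python using the max membership degrees of Table 6 (rounded values; an illustration, not the authors' implementation):

```python
# Defuzzification sketch: average the max membership degrees of the two
# criteria (disaster history, cost) per attraction, then sort descending.
degrees = {
    "Budheg Mountain": (1.0, 0.5667),
    "Banyu Mulok Beach": (1.0, 1.0),
    "Coro Beach": (1.0, 0.5333),
    "Kedung Tumpang Beach": (1.0, 0.5667),
    "Klathak Beach": (1.0, 1.0),
}
ranking = sorted(
    ((name, (disaster + cost) / 2) for name, (disaster, cost) in degrees.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, value in ranking:
    print(f"{name}: {value:.4f}")
```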
4.3 Recommendation Accuracy The level of accuracy is obtained from user satisfaction, due to the absence of comparison data. User satisfaction was taken from a questionnaire involving 33 respondents. The scale used is a Likert scale from 1 to 5, with details: Scale 1 = TS (Not
Table 9 Result

Tourist attraction      Conclusion          Recommendation
Banyu Mulok Beach       Very recommended    1
Klathak Beach           Very recommended    1
Budheg Mountain         Very recommended    0.783333
Kedung Tumpang Beach    Very recommended    0.783333
Coro Beach              Very recommended    0.766667
Fig. 6 User satisfaction questionnaire results
Appropriate), Scale 2 = KS (Less Appropriate), Scale 3 = S (Appropriate), Scale 4 = CS (Sufficiently Appropriate), Scale 5 = SS (Very Appropriate). The results of the questionnaire are shown in Fig. 6. From the figure, 28 respondents considered that the recommendations given by the system were appropriate or very much in line with their wishes. It can therefore be concluded that the accuracy of the recommendation system is 84.9%.
5 Conclusion From the results of the tourism recommendation system in Tulungagung Regency, several conclusions can be drawn. Based on accuracy testing, the recommendation system reaches 84.9%, so it can be considered quite good compared to the existing government website, which only displays general information such as photos, descriptions, and the location of each tourist spot. The questionnaire analysis shows that the application is classified as good: it helps tourists travel in Tulungagung Regency and can be a medium for promoting the regency's tourism potential. However, some parts of the application are less than perfect when implemented on certain devices. From several conclusions that have
been drawn, the researchers offer several suggestions for improvement. For further development, restaurants and lodging could be added to the recommendation system so that users receive complete recommendations, and the display buttons could be further clarified, so that the application works more optimally and the display is more responsive. Acknowledgements The researchers thank the Computer Science Department of BINUS Online Learning, Bina Nusantara University, for guidance and direction until this research was completed, as well as for publication funds.
References
1. Fudholi DH, Rani S, Arifin DM, Satyatama MR (2021) Deep learning-based mobile tourism recommender system. Sci J Informatics 8(1)
2. Kemenparekraf RI (2021) Tourism foreign exchange ranking of export commodities. https://www.kemenparekraf.go.id/asset_admin/assets/uploads/media/old_all/devisa20112015.pdf. Accessed 6 Dec 2021
3. Badan Pusat Statistik (2021) Indonesian statistics 2020. https://www.bps.go.id/publication/2020/04/29/e9011b3155d45d70823c141f/statistik-indonesia-2020.html. Accessed 6 Dec 2021
4. Thomas B, John AK (2021) Machine learning techniques for recommender systems—a comparative case analysis. IOP Conf Ser Mater Sci Eng
5. Lalitha TB, Sreeja PS (2021) Recommendation system based on machine learning and deep learning in varied perspectives: a systematic review. In: Information and communication technology for competitive strategies (ICTCS 2020), pp 419–432
6. Pemerintah Kabupaten Tulungagung (2021) Profile of Tulungagung regency (Lambang Daerah). https://tulungagung.go.id/?page_id=4613. Accessed 6 Dec 2021
7. Badan Pusat Statistik (2021) Tulungagung regency in figures 2020. BPS Kabupaten Tulungagung. https://tulungagungkab.bps.go.id/publication/2020/04/27/63d49562c6498f49a620073a/kabupaten-tulungagung-dalam-angka-2020.html. Accessed 6 Dec 2021
8. Hadikurniawati W, Winarno E, Prabowo AB, Abdullah D (2019) Implementation of Tahani fuzzy logic method for selection of optimal tourism site. J Phys Conf Ser 1361(1)
9. Pressman RS (2005) Software engineering: a practitioner's approach, 6th edn. McGraw-Hill
10. Tech Bloggers (2018) White box testing vs. black box testing. https://medium.com/@techbloggers/white-box-testing-vs-black-box-testing-19754e950398
11. Syakir AA, Nilogiri A, Al Faruq HA (2020) Decision support system for determining tourism objects in Banyuwangi regency based on the fuzzy Tahani model. J Smart Teknol 2(2). Available: http://jurnal.unmuhjember.ac.id/index.php/JST/article/view/4990
12. Mukaromah M (2019) Application of Sugeno's fuzzy method to determine the best path to tourist locations in Surabaya. J Mat Sains dan Teknol 2(2)
13. Sugiharto E, Istiadi, Wijaya ID (2021) Tourist attractions recommendation system in Malang City using web-based fuzzy method. J Apl Dan Inov Ipteks SOLIDITAS 1(1)
14. Firdaus F, Nurhayati S (2017) Tourist attractions recommendation application system using the android-based fuzzy Tahani method. Unikom
15. Akhmad Busthomy S, Hariyanto R (2016) Decision support system for selection of tourism objects in Pasuruan regency using the fuzzy method. JIMP (Jurnal Inform Merdeka Pasuruan) 1(2)
Set of Experience Knowledge Structure (SOEKS) and Decisional DNA (DDNA)—A Review A. B. M. Mehedi Hasan, Md. Nafiz Ishtiaque Mahee, and Cesar Sanin
Abstract The concept of SOEKS and DDNA is to capture, store, organise, and reuse formal decisions in a knowledge-explicit form to help in decision-making. In this study, we review past works and advancements in the concepts of Decisional DNA (DDNA) and Set of Experience Knowledge Structure (SOEKS) since the birth of the original concept. First, the original concept of SOEKS and DDNA, a comparison with human DNA, and the construction of DDNA are discussed. Second, advancements of DDNA are investigated in chronological order. Then, the applications and limitations of DDNA are discussed. Finally, possible future advancements of DDNA are suggested. Keywords Set of Experience Knowledge Structure (SOEKS) · Decisional DNA (DDNA) · Knowledge engineering · Evolutionary Algorithms (EA) · Genetic Algorithms (GA) · Virtual Engineering Object (VEO) · Virtual Engineering Process (VEP) · Industry 4.0 · Internet of Things (IoT)
1 Introduction As we head towards an automated future, we are generating more data than ever. Much of this data is generated in formal decision events, and knowledge in the form of experience is, in many cases, overlooked or wasted. Such a vast amount of data and experiential knowledge requires proper collection, administration, and reuse. This is where the concept of Decisional DNA (DDNA) comes in. With the help of the Set of Experience Knowledge Structure (SOEKS), it can both manage data in a systematic way and be used for prediction based on previous decisional experiences. A. B. M. Mehedi Hasan (B) · C. Sanin Australian Institute of Higher Education, Sydney, NSW 2000, Australia e-mail: [email protected] Md. N. I. Mahee BRAC University, Dhaka, Bangladesh © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_80
As it is still early days, our review tells us that there are some further opportunities for implementation and improvement of DDNA.
2 Literature Review 2.1 Concepts of SOEKS and DDNA The concept of "knowledge engineering" arose from the need for a more organized way of managing knowledge and experience. Knowledge engineering is a discipline focused on solving complex problems by merging knowledge into computer systems [1]. Human intelligence capabilities such as learning, reasoning, and predicting are key aspects of knowledge engineering [2]. "The only source of knowledge is experience" (Albert Einstein). According to Sanin and Szczerbicki [3], multiple real-life applications can make or support decisions in a structured way; these are called formal decision events. Such formal decision events are commonly ignored, unexploited, and not stored by any process. This is where the concept of DDNA comes in, together with SOEKS (or SOE for short). Figures 1 and 2 show Decisional DNA and SOEKS, respectively. These knowledge structures are capable of capturing previous formal decisions in a knowledge-explicit form [2]. Sanin also recommended storing past and present decisional data, which turn into decisional fingerprints that can be applied to improve the user's decisional experience. DDNA can be applied in several different technologies including common programming languages, markup languages, ontologies, software agents, and embedded systems [2]. The authors emphasized some basic concepts to understand the DDNA
Fig. 1 SOEKS and decisional DNA [2]
Fig. 2 SOEKS [2]
concept in a proper manner. Initially, Sanin et al. [2] concentrated on knowledge, more specifically the previous experience produced while the decision-making process is executed. They then presented SOEKS as a more efficient and precise way of storing and representing formal decision events. Four components defined by Sanin and Szczerbicki [3] comprise SOEKS: variables, functions, constraints, and rules. The initial part of the SOE is made up of variables that describe the decision-making process. Variables are connected through functions, the second part of SOEKS, which define the relations among variables. The third element of SOEKS is constraints, which are restrictions on what is possible in the given system. Lastly, rules specify the actions relevant to a condition or circumstance. According to Sanin et al. [2], to perform optimally, multiple SOEKS need to be captured from daily experiences so that the system grows in knowledge.
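The four SOEKS components can be sketched as a small Python data structure; the field contents in the example are hypothetical, not taken from the original papers:

```python
# Minimal sketch of the four SOEKS components.
from dataclasses import dataclass, field

@dataclass
class SOEKS:
    variables: dict = field(default_factory=dict)    # values describing the decision
    functions: list = field(default_factory=list)    # relations among variables
    constraints: list = field(default_factory=list)  # limits on the system
    rules: list = field(default_factory=list)        # condition -> action statements

# One captured formal decision event (illustrative contents).
experience = SOEKS(
    variables={"temperature": 72.5, "pressure": 1.2},
    functions=["cost = energy_price * consumption"],
    constraints=["pressure <= 2.0"],
    rules=["IF temperature > 80 THEN reduce_load"],
)
print(experience.variables)
```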
2.2 Biological DNA and Brain Functions The human body is compared to a robust information management system by Szczerbicki and Sanin [4]. The persistence of knowledge throughout subsequent generations in nature, demonstrated by the success of the DNA structure, has led experts to correlate DNA with a data structure. DNA allowed humans to store their experience and knowledge through multiple generations to survive and improve over
time. Furthermore, the brain is the most potent database and processor, as it can store and process experiences as knowledge. DNA and the brain combine to create a perfect mechanism for humans to process and assemble experience that can be used both for survival and improvement. DNA is a type of nucleic acid present in all organisms and is the basis for inheritance. It is formed of two strands wound around a helix axis, creating a helical spiral shape. Both strands consist of different combinations of four unique nucleotides: Adenine (A), Thymine (T), Guanine (G), and Cytosine (C). Each human is unique due to the different combinations of these four nucleotides. A gene is a section of the DNA molecule that directs the function of a specific part of an organism and gives direction under different circumstances. Chromosomes consist of pairs of genes, and several chromosomes combine to form the complete genetic code of a single person. The human brain, in contrast, receives signals from the human body and interprets them to create a proper reaction. Psychologist George Kelly suggested a theory of "psychological space", which does not exist beforehand but is instead created while going through decision-making scenarios. Kelly further explained it as a multivariate system of intersects and concepts. Based on these concepts, the authors proposed that a person's psychological space contains previous decision events [4]. In a new decision-making event, an experienced person makes their current decisions based on the previous interactions that suited them best.
2.3 Digital Architecture and Construction of DNA It is suggested that past decision-making experiences are often overlooked and not adequately maintained [2]. Differences in technologies between generations, or the lack of a proper knowledge management system, pave the way for the loss of decisional experience. Sanin and others identified two key reasons behind this circumstance: first, knowledge structures inadequate for storing formal decision events; second, the technological challenges of maintaining such experiences. The authors further introduced some solutions. Initially, they offered to implement a knowledge structure like DDNA and SOEKS to maintain the knowledge appropriately. After that, they suggested mechanisms applying various technologies to gather experiences. Finally, they advocated for an automated process that can give solutions to current scenarios based on previous knowledge. In modern times we need better ways to gain insight into data, as analysis is expensive and lackluster. There can be multiple ways to equate different data objects. Efficiently searching for patterns and finding similarities between two data objects can provide better results; this is a crucial concept not only in data mining but also in knowledge discovery. Moreover, this process can create a ranking system that provides the best matching objects, i.e., previous experiences, to help in solving a current decision query. Similarities can be measured in numerous ways, and depending on the process, the similarity may vary. However, various similarity metrics of the same type can be applied to the same set of data
objects. So, choosing which metric to use is vital, considering the desired output. A systematic approach to similarity is multidimensional scaling, where a geometric approach plots the objects in continuous dimensions and the closest plotted object is considered the most similar. The critical flaw of this approach becomes visible when we must plot objects with multiple qualitative attributes. To bypass this issue, researchers suggested pre-arranging qualitative attributes as binary variables. It is suggested that the Euclidean and Hamming metrics, part of the Minkowskian family of distance metrics, can take two attribute vectors s_i and s_j and calculate the distance d in the following manner [2]:

d_ij = ( Σ_k w_k |s_ik − s_jk|^r )^(1/r)   (1)
In Eq. (1), w_k is the weight given to the kth attribute, and r is a parameter that determines which member of the family of metrics is used (r = 1 on binary attributes yields the Hamming distance; r = 2 yields the Euclidean distance). Besides this, event-sequence techniques have been used in some cases, and other special techniques such as additive trees, additive clustering, information content, mutual information, the Dice coefficient, the cosine coefficient, and the feature contrast model could also be used.
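As a concrete illustration, Eq. (1) can be sketched in a few lines of Python. This is a minimal, illustrative implementation; the function name and example vectors are ours, not from [2]:

```python
def minkowski_distance(si, sj, weights, r):
    """Weighted Minkowski distance of Eq. (1):
    d_ij = (sum_k w_k * |s_ik - s_jk|**r) ** (1/r).
    r = 2 gives a weighted Euclidean distance; r = 1 on binary
    attribute vectors gives a weighted Hamming distance."""
    return sum(w * abs(a - b) ** r
               for w, a, b in zip(weights, si, sj)) ** (1.0 / r)

# Unweighted Euclidean check: the distance between (0, 0) and (3, 4) is 5
print(minkowski_distance([0, 0], [3, 4], [1, 1], 2))  # -> 5.0
```

With unit weights and r = 1 on binary vectors, the same function counts mismatching attributes, which is exactly the Hamming distance mentioned in the text.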
2.4 SOEKS and DDNA in Research

Knowledge management challenges us in how knowledge is organized. The major obstacle to achieving this goal is that isolated solutions are not stored when problems are experienced. The idea is simple: learn a new problem, solve the problem, store the solutions, organize the knowledge, and re-use the knowledge for decision-making purposes. The SOEKS and DDNA concepts open a door to seeing the problem-solving world in a new form. The knowledge structure was first proposed by Sanin and Szczerbicki [5] to store and re-use experiences to make decision-making easier; they called it 'The Set of Experience Knowledge Structure', or SOEKS. We noticed an appreciable amount of research on SOEKS and DDNA. The past, present, and future of DDNA and SOEKS are discussed and summarized by Shafiq et al. [6], who tabulate SOEKS and DDNA applications by different authors. In this literature review we explore the major developments of SOEKS and DDNA and their state of the art; the relevant information is recorded in Table 1 below.
930
A. B. M. Mehedi Hasan et al.
Table 1 Development of DDNA and SOEKS (references in brackets)

[5] The authors proposed the set of experience knowledge structure to support the recording of past experiences, leading to an easier decision-making process. According to the authors, decisions taken in the past can be recorded, and future decision-making can be made easier by capturing knowledge in the shape of SOEKS. According to our observation, this study has led to further investigations of DDNA, enabling knowledge management and smart decision-making ideas by many authors.

[7, 8] Besides proposing the idea of SOEKS and DDNA, Sanin and Szczerbicki [7, 8] realized that the portability of the set of experience knowledge structure is crucial to enable compatibility with diverse systems. They chose XML (Extensible Markup Language) and converted SOEKS into XML to transform DDNA into an XML-based knowledge structure.

[9, 12] The authors [9] showed how SOEKS is implemented in the Knowledge Supply Chain System (KSCS). Knowledge management can be a challenging task due to organizational complexity and the quality of knowledge. The quality of knowledge was characterized by Mancilla-Amaya et al. [10] based on attributes including relevance, completeness, accuracy, timeliness, and objectivity. A conceptual model to support knowledge management problems, the knowledge supply chain system (KSCS) supported by SOEKS, was proposed by Sanin and Szczerbicki [11]. Incoming SOEKS can be classified and compared before proceeding further in a target system; comparison and classification of SOEKS help minimize errors when feeding external systems. Heterogeneous similarity metrics were developed by Sanin and Szczerbicki [12] to make formal decision making more acceptable in knowledge management and in multi-platform/technology use.

[13, 14, 16] According to Sanin and Szczerbicki [13], there are sometimes heterogeneous or diversified sets of experience as the output of a unique formal decision event. They realized the necessity of a homogeneous form of the set of experience, collecting the formal decision event and adding compatibility among the various platforms offering intelligent systems. A Java class ontology implementation was presented by Sanin and Szczerbicki [14] to derive benefits from experiential knowledge that can be used in different industries or domains. The Java class with an ontology implementation system is supported by a combined set of SOEKS, DDNA, and SOUPA. According to Chen et al. [15], SOUPA is a shared standard ontology to support ubiquitous and pervasive computing systems. Szczerbicki and Sanin extended SOUPA with SOEKS to enhance DDNA, which is aligned with the ontology of universal applications or systems.

[17] A meta-heuristic technique was chosen by Sanin and Szczerbicki [17] to find an optimal set of experience from the homogenization and mixture of various sets of experience knowledge structure. Based on the recommendations of Zitzler et al. [18] and Fonseca and Fleming [19], Sanin and Szczerbicki were convinced that Evolutionary Algorithms (EAs) are a good fit for multi-objective optimization (MOO). A Genetic Algorithm (GA) was selected because biological chromosomes can be interpreted by a GA. The authors also found a disadvantage of Genetic Algorithms: because of their heuristic nature, an optimal solution is not guaranteed. Sanin and Szczerbicki also explored the 'Strength Pareto Evolutionary Algorithm' (SPEA) because SPEA outperformed the four other multi-objective evolutionary algorithms tested by Zitzler and Thiele [20].

[21, 22] Previously, Szczerbicki and Sanin extended the SOUPA ontology with SOEKS. The concept of 'reflexive ontologies' as a framework was proposed by Toro et al. [21]; with this approach, any existing ontology can be extended. The authors added a case study testing SOEKS with this framework. Creating trust in decision making with DDNA is crucial: Decisional DNA, reflexive ontologies, and security technologies were combined by Sanín et al. [22] into a proposed 'decisional trust', intended to offer trustable decisions by extending the use of DDNA and reflexive ontologies.

[2] The idea of SOEKS and DDNA was refreshed by Sanín et al. [2] from previous studies, showing four types of application potential of DDNA: XML-based DDNA knowledge structure, ontology-based DDNA knowledge structure, DDNA for software agents technology, and DDNA-based embedded systems.

[24] Cyber-physical systems are a combination of interactive networks of physical and other computer components [23]. The Internet of Things (IoT) can be defined as an interconnected network of objects. Case studies shared by Sanin et al. [24] show that SOEKS and DDNA can be a knowledge representation component for the Internet of Things (IoT) and cyber-physical systems. The authors presented the following architectures across the case studies: DDNA-based IoT architecture, computer-integrated manufacturing, hazard control system, and smart innovation engineering system (SIE).

[25] DDNA-based machine monitoring was presented by Shafiq et al. [25] to perform overall maintenance in an Industry 4.0 framework. According to the authors, manufacturing footprints can be captured with the virtual engineering object (VEO) [26], virtual engineering process (VEP) [26], and virtual engineering factory (VEF) [27]. They stored formal decisions of VEO-based wear monitoring, VEO-based condition monitoring, and VEP-based quality prediction and monitoring in CSV (comma-separated value) format. The data were read with Java code and converted to SOEKS, followed by the creation of a productive-maintenance DNA.
Fig. 3 Architecture of decisional DNA-based embedded systems [2]
3 Applications of SOEKS and DDNA

3.1 DDNA-Based Embedded System

Knowledge-based embedded systems are useful when they offer an efficient way of gathering, using, distributing, and re-using knowledge. According to Zhang et al. [28], sharing knowledge among various knowledge-based systems can be hectic, time-consuming, and expensive when there are no standardized solutions. DDNA is cross-platform compatible, compact, and configurable, which makes it a good fit for integration with embedded systems [2] (Fig. 3).
3.2 Decisional DNA in Web Data Mining

Web data mining lets website owners not only track the performance of their website but also investigate user insights with visualizations, helping them improve their business strategy. Business decision making can be made more efficient by combining Decisional DNA with data mining [29]. The authors proposed a new way of performing web data mining (web usage or content mining) with Decisional DNA.
3.3 Decisional DNA in Robotics

Storing knowledge or experiences in a structured way and using the stored experiences to improve decision making is the core benefit of Decisional DNA. Robots are being deployed in many different conditions; an automated smart manufacturing plant is a good example for relating DDNA. If robots can learn from sets of experience and re-use that knowledge, errors can be minimized. An approach showing how robots can use Decisional DNA to capture, store, and re-use knowledge was demonstrated by Zhang et al. [30].
4 Limitations of DDNA-Supported Information Systems

Technological advancement never stops: humans learn from experience and advance technologies to make their lives easier. Nowadays we can see advances in computing, hardware, software, cloud services, blockchain, and more. Ongoing research on DDNA/SOEKS and maintaining compatibility with different systems are therefore crucial. Further research on the limitations of DDNA/SOEKS-based information systems is still pending, but it can be stated that integrating machine learning (ML) with the DDNA knowledge structure may bring more application ideas.
5 Future Advancements of SOEKS and DDNA

In this modern day and age we constantly face decision-making events, and a large share of the resulting decisions are taken not by humans but by machines. These events generate numerous decisional experiences; if implemented, DDNA could exploit them with great success. As we move gradually towards automation, machine learning algorithms are being executed on many occasions. Image processing, in particular, is used in our day-to-day life more than we are aware of, and in these cases we process a considerable amount of data that could be better handled with DDNA. Also, with the help of natural language processing, we are creating algorithms with which computers can hold conversations like humans; these algorithms learn from past conversations and take decisions as the current conversation progresses. Here too we see the need to manage a massive amount of data and the importance of decisional experience. DDNA could be added to popular language libraries to support artificial intelligence. Furthermore, companies with massive data use query languages and other data management services to maintain their substantial data repositories, and these enterprises regularly take major decisions based on their data and past circumstances; DDNA can be implemented in query languages as well. While constructing DDNA in such settings, Sanin et al. [2] suggested maintaining search capabilities by keywords, permitting extraction of relevant elements, preserving easy maintenance of knowledge, and keeping the structure of DDNA intact. DDNA and SOEKS can also be integrated with search engines: because DDNA can provide results from decisional experiences, it can produce results matching the user's criteria or specific needs in an efficient way.
6 Conclusion

The necessity to properly oversee massive amounts of data is increasing constantly. DDNA can prove vital, as it can manage data and give predictions that take past decisional experience into account, all while maintaining its proper structure. As technology progresses, DDNA can be implemented along with SOEKS in multiple scenarios, helping us administer large amounts of data while retaining the benefits of searching and prediction.
References
1. Noble D (1998) Distributed situation assessment. Proc FUSION 98:478–485
2. Sanin C, Mancilla-Amaya L, Haoxi Z, Szczerbicki E (2012) Decisional DNA: the concept and its implementation platforms. Cybern Syst 43(2):67–80
3. Sanin C, Szczerbicki E (2009) Experience-based knowledge representation: SOEKS. Cybern Syst 40(2):99–122
4. Szczerbicki E, Sanin C (2020) Knowledge management and engineering with decisional DNA. Springer International Publishing
5. Sanin C, Szczerbicki E (2005) Set of experience: a knowledge structure for formal decision events. Found Control Manage Sci 3:95–113
6. Shafiq SI, Sanín C, Szczerbicki E (2014) Set of experience knowledge structure (SOEKS) and decisional DNA (DDNA): past, present and future. Cybern Syst 45(2):200–215
7. Sanin C, Szczerbicki E (2005) Using XML for implementing set of experience knowledge structure. In: Khosla R, Howlett RJ, Jain LC (eds) Knowledge-based intelligent information and engineering systems. Springer, Berlin Heidelberg, pp 946–952
8. Sanin C, Szczerbicki E (2007) Extending set of experience knowledge structure into a transportable language extensible markup language. Cybern Syst 37(2–3):97–117
9. Sanin C, Szczerbicki E (2006) Using set of experience in the process of transforming information into knowledge. Int J Enterprise Inf Syst (IJEIS) 2(2):45–62
10. Mancilla-Amaya L, Szczerbicki E, Sanín C (2013) A proposal for a knowledge market based on quantity and quality of knowledge. Cybern Syst 44(2–3):118–132
11. Sanin C, Szczerbicki E (2004) Knowledge supply chain system: a conceptual model
12. Sanin C, Szczerbicki E (2006) Developing heterogeneous similarity metrics for knowledge administration. Cybern Syst 37(6):553–565
13. Sanin C, Szczerbicki E (2007) Dissimilar sets of experience knowledge structure: a negotiation process for decisional DNA. Cybern Syst 38(5–6):455–473
14. Sanin C, Szczerbicki E (2007) Towards the construction of decisional DNA: a set of experience knowledge structure Java class within an ontology system. Cybern Syst 38(8):859–878
15. Chen H, Perich F, Finin T, Joshi A (2004) SOUPA: standard ontology for ubiquitous and pervasive applications. In: The first annual international conference on mobile and ubiquitous systems: networking and services, Mobiquitous, pp 26–26
16. Szczerbicki E (2007) Editorial: knowledge management and ontologies—part II. Cybern Syst 38(8)
17. Sanin C, Szczerbicki E (2007) Genetic algorithms for decisional DNA: solving sets of experience knowledge structure. Cybern Syst 38(5–6):475–494
18. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8(2):173–195
19. Fonseca CM, Fleming PJ (1995) An overview of evolutionary algorithms in multiobjective optimization. Evol Comput 3(1):1–16
20. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271
21. Toro C, Sanín C, Szczerbicki E, Posada J (2008) Reflexive ontologies: enhancing ontologies with self-contained queries. Cybern Syst 39(2):171–189
22. Sanín C, Szczerbicki E, Toro C (2008) Combining technologies to achieve decisional trust. Cybern Syst 39(7):743–752
23. Griffor ER, Greer C, Wollman DA, Burns MJ (2017) Framework for cyber-physical systems: volume 1, overview
24. Sanin C, Haoxi Z, Shafiq I, Waris MM, Silva de Oliveira C, Szczerbicki E (2019) Experience based knowledge representation for Internet of Things and cyber physical systems with case studies. Future Gener Comput Syst 92:604–616
25. Shafiq SI, Sanin C, Szczerbicki E (2022) Decisional DNA (DDNA) based machine monitoring and total productive maintenance in industry 4.0 framework. Cybern Syst 53(5):510–519
26. Shafiq SI, Sanin C, Szczerbicki E, Toro C (2015) Virtual engineering object/virtual engineering process: a specialized form of cyber physical system for Industrie 4.0. Procedia Comput Sci 60:1146–1155
27. Shafiq SI, Sanin C, Szczerbicki E, Toro C (2016) Virtual engineering factory: creating experience base for industry 4.0. Cybern Syst 47(1–2):32–47
28. Zhang H, Sanin C, Szczerbicki E (2010) Decisional DNA-based embedded systems: a new perspective. Syst Sci 36(1):21–26
29. Wang P, Sanin C, Szczerbicki E (2011) Application of decisional DNA in web data mining. In: König A, Dengel A, Hinkelmann K, Kise K, Howlett RJ, Jain LC (eds) Knowledge-based and intelligent information and engineering systems. Springer, Berlin Heidelberg, pp 631–639
30. Zhang H, Sanin C, Szczerbicki E (2010) Decisional DNA applied to robotics. In: Setchi R, Jordanov I, Howlett RJ, Jain LC (eds) Knowledge-based and intelligent information and engineering systems. Springer, Berlin Heidelberg, pp 563–570
Multiple Regression Model in Testing the Effectiveness of LMS After COVID-19 Meta Amalya Dewi, Dina Fitria Murad, Arba’iah Binti Inn, Taufik Darwis, and Noor Udin
Abstract The role of the Learning Management System (LMS) in universities today is no longer supporting but primary. This was proven when the COVID-19 pandemic hit the world and the LMS became the medium bridging the needs of students and lecturers in learning activities. The goal of this study is to measure the effect of Access to LMS, Material of LMS, and Discussion Forum on the Effectivity of LMS after the COVID-19 pandemic, at one of the universities providing distance learning, which allows changes in student learning patterns during and after the pandemic. The output shows that access to the LMS has a positive and significant effect, material has a positive but not significant effect, and discussion forums have a positive and significant effect.

Keywords Learning management system · Material · Discussion forum · Online learning
M. A. Dewi (B) · D. F. Murad · T. Darwis
Information Systems Department, BINUS Online Learning, Bina Nusantara University, West Jakarta, Indonesia
e-mail: [email protected]
A. B. Inn
Department of Electric, Centre for Diploma Studies SPACE, Universiti Teknologi Malaysia, Kuala Lumpur, Malaysia
N. Udin
Visual Communication Design, Bina Nusantara University, West Jakarta, Indonesia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_81

1 Introduction

1.1 Background

The world of education has been rocked by the COVID-19 virus, which turned into a pandemic [1]. Direct learning in the classroom had to be stopped to avoid increasing the spread of the virus, especially among students [2]; but because education should continue throughout life [3], distance learning was carried out [4] by developing a Learning Management System (LMS). LMS is a very broad term describing a system that provides online access to educational services for teachers and students [5]. It is a communication medium through which students can communicate effectively with friends and lecturers [6] and carry out their activities without fear of disturbing lecture time. The LMS was developed web-based to facilitate teaching instructors and student learning media [7], giving access to all services without time and place restrictions [8], including providing learning materials, viewing schedules, doing personal and group assignments and quizzes, conducting discussions, holding final exams, and viewing grades, so that the process involves students continuously online and monitors their learning progress [9]. Learning continues to run well with the LMS. However, since the COVID-19 pandemic took place, there has been a change in the pattern of activity among students: not a few of them have been affected in terms of health, whether themselves or their families, or by other impacts such as working from home, employee reductions, or losing their jobs, all of which make changes in their learning patterns after the pandemic possible.
1.2 Related Work

Before the COVID-19 pandemic, several efforts were made to assess the effectiveness of LMS, such as that conducted by Faniru and Gayatri [10], who found that the LMS had been developed to the maximum by looking at five factors: information that is always updated, a KMS that is easy to use, a culture of sharing, learning development, and technology that is easy to use. R. B. Ikhan et al. also evaluated learning perceptions and student satisfaction with the LMS; Duta et al. [11] analyzed the functionality of the LMS and tested its effectiveness; and Kaburuan et al. [12] evaluated student experiences with the LMS, with hedonic and pragmatic factors having an impact on attractiveness and practicality.
1.3 Research Purpose

This study was carried out with the aim of evaluating the effectiveness of the LMS after COVID-19, and whether anything changed relative to during COVID-19 given the changes in student activities (working at home, not working due to staff reductions, illness, and other reasons), by testing the effect of Access to LMS, Material of LMS, and Discussion Forum on the Effectivity of LMS, either partially or simultaneously, as shown in Fig. 1.
Fig. 1 Conceptual model: Access to LMS (X1), Material of LMS (X2), and Discussion Forum (X3) are hypothesized to affect Effectivity of LMS (Y) individually (H1, H2, H3) and simultaneously (H4)
Based on the model above, the hypotheses in this study can be described as follows:
H1: Access to LMS partially has a positive and significant effect on the Effectivity of LMS.
H2: Material of LMS partially has a positive and significant effect on the Effectivity of LMS.
H3: The Discussion Forum partially has a positive and significant effect on the Effectivity of LMS.
H4: Access to LMS, Material of LMS, and Discussion Forum simultaneously have a positive and significant effect on the Effectivity of LMS.
2 Research Method

This study was carried out at one of the universities providing distance learning in Indonesia. The questionnaire survey was distributed randomly to 100 students during February 2022, when the pandemic had begun to subside. The questionnaire was prepared to measure the effect of Access to LMS, Material of LMS, and Discussion Forum on the Effectivity of LMS. Descriptive analysis and a normality distribution test were calculated using SPSS 25. Multiple Regression Analysis is used to determine the effect of one or more variables on one variable [13]. The model produced by Ordinary Least Squares (OLS) is then checked with the classical assumption tests and the Goodness of Fit test. The classical assumption tests include a normality test on the data: if the ratio of the skewness and kurtosis values lies within ± 3, the distribution of the data is normal [14]. The effect of the independent variables on the dependent variable is established once the classical assumption tests are passed, using the Goodness of Fit test, which is calculated from the F test, t test, and R2 test [15].
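The OLS estimation step described above can be sketched in plain Python via the normal equations X'X b = X'y. This is an illustrative implementation on synthetic data; the study itself used SPSS 25, and all names and values below are ours:

```python
def ols(X, y):
    """Ordinary least squares: solve X'X b = X'y by Gaussian elimination.
    X includes a leading column of ones for the intercept."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    # Forward elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, p))) / xtx[r][r]
    return b

# Synthetic noiseless data: y = 1.0 + 0.5*x1 + 0.2*x2, recovered exactly
X = [[1.0, float(i % 7), float(i % 5)] for i in range(30)]
y = [1.0 + 0.5 * row[1] + 0.2 * row[2] for row in X]
b0, b1, b2 = ols(X, y)
```

On noiseless data the coefficients are recovered exactly; on survey data they would carry the standard errors reported later in Tables 2 and 6.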
3 Result and Discussion

Primary data used in this study were obtained from 100 students; by gender, 53 respondents (53%) were men and 47 (47%) were women. The mean, median, mode, variance, standard deviation, skewness, and kurtosis are reported as descriptive statistics. The skewness values obtained lie between − 3 and + 3, so the data distribution can be said to be normal; the kurtosis values are likewise negative and within the range of minus three to plus three.
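The skewness/kurtosis screening rule used above can be illustrated with a small Python sketch. These are the simple moment-based formulas on made-up data; SPSS reports bias-corrected versions, so its values differ slightly:

```python
def skew_kurtosis(data):
    """Sample skewness and excess kurtosis from central moments."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0  # excess kurtosis: 0 for a normal distribution
    return skew, kurt

# A roughly symmetric sample: skewness is 0, so it passes the +/-3 rule
s, k = skew_kurtosis([1, 2, 2, 3, 3, 3, 4, 4, 5])
```

Both statistics falling within ± 3 is the normality screen applied to each questionnaire variable in the study.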
3.1 Classical Assumption Test

To find out whether there is a classical assumption problem in the OLS linear regression calculation, classic assumption tests are performed for normality, linearity, autocorrelation, multicollinearity, and heteroscedasticity.

(a) Assumption of Linearity. In general, a linearity test is conducted to find out whether or not there is a significant linear relationship between two variables. The Sig. Deviation from Linearity value is 0.384 for the Access to LMS variable, 0.280 for the Material of LMS variable, and 0.117 for the Discussion Forum variable; each is greater than 0.05. From these calculations it can be stated that there is a significant linear relationship between Access to LMS, Material of LMS, and Discussion Forum and the Effectivity of LMS.

(b) Autocorrelation Assumption. A test of autocorrelation is performed to find any correlation between one residual and another. Autocorrelation is a correlation of a variable with itself across different observations, at different times or for different individuals; a good model exhibits no autocorrelation. Autocorrelation is detected using the Durbin-Watson (D-W) statistic, whose value lies between 0 and 4; a Durbin-Watson statistic close to 2 means there is no autocorrelation.
Table 1 Autocorrelation test results (Durbin-Watson)
Model 1: R = 0.660a | R square = 0.435 | Adjusted R square = 0.418 | Std. error of the estimate = 0.689 | R square change = 0.435 | F change = 24.653 | Sig. F change = 0.000 | Durbin-Watson = 1.712
a Predictors: (constant), discussion forum, access to LMS, material of LMS
b Dependent variable: Effectivity of LMS
Based on the Durbin-Watson test, the statistic is 1.712 (N = 100), which lies in the region where no autocorrelation occurs. From the Durbin-Watson table of critical values, for N = 100 the bounds are dl = 1.604 and du = 1.733. Since the statistic falls in the no-autocorrelation region, it can be concluded that the Access to LMS, Material of LMS, and Discussion Forum variables are free from autocorrelation.

(c) Multicollinearity Assumption. This test is conducted to find any correlation between the independent variables of the regression model; a good model should show no correlation among the independent variables, since such correlation makes the estimated coefficients unstable (they have large variation). The assumption that there is no perfect or near-perfect relationship between the independent variables in the model is fulfilled. This is indicated by the results of the VIF (Variance Inflation Factor) test of each independent variable, where a multicollinearity problem is present if a variable has a VIF of more than ten [16]. The complete test results in Table 2 show that there is no multicollinearity problem, so the test results can be considered reliable.
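The Durbin-Watson statistic itself is simple to compute from the residuals; a short sketch with illustrative residuals (not the study's data):

```python
def durbin_watson(residuals):
    """D-W statistic: sum of squared successive residual differences
    over the sum of squared residuals. Values near 2 mean no first-order
    autocorrelation; 0 and 4 are the positive/negative extremes."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

print(durbin_watson([1.0, 2.0, 3.0, 4.0]))    # trending residuals -> 0.1
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # alternating residuals -> 3.0
```

The study's value of 1.712 sits between du = 1.733 and 4 − du, close enough to 2 that no autocorrelation is indicated.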
Table 2 VIF (variance inflation factor) test results
(Constant): B = 1.101, Std. error = 0.879, t = 1.252, Sig. = 0.213
Access to LMS: B = 0.072, Std. error = 0.023, Beta = 0.302, t = 3.183, Sig. = 0.002, Tolerance = 0.655, VIF = 1.527
Material of LMS: B = 0.080, Std. error = 0.043, Beta = 0.196, t = 1.847, Sig. = 0.068, Tolerance = 0.521, VIF = 1.921
Discussion forum: B = 0.088, Std. error = 0.033, Beta = 0.281, t = 2.654, Sig. = 0.009, Tolerance = 0.524, VIF = 1.910
a Dependent variable: Effectivity of LMS
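The last two columns of the table are linked by a simple identity: VIF_j = 1 / Tolerance_j, where Tolerance_j = 1 − R_j² from regressing predictor j on the other predictors. A quick arithmetic check against the reported values (small gaps come from the tolerances being rounded to three decimals):

```python
# Tolerance values as reported in Table 2; VIF = 1 / tolerance
tolerances = {"Access to LMS": 0.655,
              "Material of LMS": 0.521,
              "Discussion forum": 0.524}
vif = {name: 1.0 / tol for name, tol in tolerances.items()}
# All values are well under the usual cut-off of 10,
# so no multicollinearity problem is indicated
```

This reproduces the reported VIFs of 1.527, 1.921, and 1.910 to within rounding.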
(d) Heteroscedasticity Assumption. Heteroscedasticity should not be found in a good regression model. It can be detected from the scatter plot of the standardized predicted value of the dependent variable (ZPRED) against the studentized residual (SRESID). The basis of the analysis is: (a) if the dots form a special pattern, such as a wave that widens and then narrows, heteroscedasticity is present; (b) heteroscedasticity does not occur if there is no clear pattern and the points spread around the Y axis [17], as in Fig. 2. The Glejser test is also carried out to detect heteroscedasticity, by regressing the independent variables against the absolute value of the residuals. Heteroscedasticity is absent from the regression model if the significance value (Sig.) is more than 0.05; conversely, it is present when the significance value (Sig.) is less than 0.05. Table 3 shows the output of the test. Based on this output, from the coefficients for the Abs_RES_1 variable the significance value (Sig.) is 1.000 for the Access to LMS variable, 1.000 for the Material of LMS variable, and 1.000 for the Discussion Forum variable, each greater than 0.05. Because the significance values of the Access to LMS, Material of LMS, and Discussion Forum variables are all more than 0.05, it can be stated that the regression model shows no symptoms of heteroscedasticity. From the output of the classical assumption tests for the multiple regression analysis, the assumptions of normality, autocorrelation, multicollinearity, and heteroscedasticity have all passed, so that the model can be
Fig. 2 Heteroscedasticity test results
Table 3 Output of heteroscedasticity test (Glejser test)
(Constant): B = −9.148E−16, Std. error = 0.879, t = 0.000, Sig. = 1.000
Access to LMS: B = 0.000, Std. error = 0.023, Beta = 0.000, t = 0.000, Sig. = 1.000, Tolerance = 0.655, VIF = 1.527
Material of LMS: B = 0.000, Std. error = 0.043, Beta = 0.000, t = 0.000, Sig. = 1.000, Tolerance = 0.521, VIF = 1.921
Discussion forum: B = 0.000, Std. error = 0.033, Beta = 0.000, t = 0.000, Sig. = 1.000, Tolerance = 0.524, VIF = 1.910
a Dependent variable: Abs_Res_1
generalized, and conclusions can be drawn, as explained in the discussion below.
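The Glejser idea, regressing the absolute residuals on the predictors and checking that the slope is insignificant, can be sketched for a single predictor on made-up, constant-variance data (illustrative only; the study ran the test in SPSS over three predictors):

```python
def slope_intercept(x, y):
    """Simple-regression slope and intercept (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# y = 2 + 0.5*x with alternating +/-1 errors: constant error variance
x = [float(i) for i in range(1, 21)]
y = [2.0 + 0.5 * xi + (1.0 if i % 2 == 0 else -1.0) for i, xi in enumerate(x)]

b, a = slope_intercept(x, y)
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Glejser step: regress |residuals| on x; a near-zero slope means
# the residual spread does not grow with x (no heteroscedasticity)
g_slope, _ = slope_intercept(x, [abs(e) for e in resid])
```

Because the error spread here is constant, the Glejser slope comes out near zero, mirroring the Sig. = 1.000 values in Table 3.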
3.2 Multiple Regression Analysis Results

To determine whether the independent variables influence the dependent variable in the analysis model, multiple regression analysis is performed. Hypothesis tests were performed to determine the effect of the Access to LMS, Material of LMS, and Discussion Forum variables on the Effectivity of LMS. The results of the multiple regression testing can be seen in Table 4. From the output of the multiple regression testing, several things can be explained:

(a) Correlation Coefficient. The R value expresses the relationship (correlation) between the dependent variable and the observed independent variables. The R value here is 0.660, or 66.0%, meaning that the Access to LMS, Material of LMS, and Discussion Forum variables together have a strong relationship with the Effectivity of LMS.

Table 4 Multiple regression test results
Model 1: R = 0.660a | R square = 0.435 | Adjusted R square = 0.418 | Std. error of the estimate = 0.689 | R square change = 0.435 | F change = 24.653 | Sig. F change = 0.000 | Durbin-Watson = 1.712
a Predictors: (constant), discussion forum, access to LMS, material of LMS
b Dependent variable: Effectivity of LMS
Table 5 F test results (simultaneous)
Regression: Sum of squares = 35.091, df = 3, Mean square = 11.697, F = 24.653, Sig. = 0.000b
Residual: Sum of squares = 45.549, df = 96, Mean square = 0.474
Total: Sum of squares = 80.640, df = 99
a Dependent variable: Effectivity of LMS
b Predictors: (constant), discussion forum, access to LMS, material of LMS
(b) Coefficient of Determination. R2 is used to measure the coefficient of determination. In the regression analysis output, the value of R2 is 0.435, or 43.5%, which means that the Access to LMS, Material of LMS, and Discussion Forum variables are predictors or determinants of the Effectivity of LMS with a coefficient of determination of 43.5%; the remaining 56.5% is explained by variables not included in the model. The higher the value of R2, the better the model's ability to explain the observed phenomenon.

(c) F-Test Results (Simultaneous). The purpose of the F test is to find out whether the independent variables used in the study model simultaneously have a significant effect on the dependent variable. The hypothesis can be accepted if the value of Fcount is greater than Ftable. The tolerable degree of error is 0.05 and the confidence interval is 0.95; the significance level used in this analysis is 0.05. If the Sig. value is less than 0.05, the influence of the independent variables on the dependent variable can be declared significant and the hypothesis can be accepted. The complete F test results can be seen in Table 5. The significant value of F, or Fcount, shows the significance of the determinant coefficient and proves the fit of the regression model, meaning that the resulting regression equation is able to explain the variation of Y. In the table, Fcount = 24.653 > Ftable and Sig. F = 0.000 < 0.05. So it can be concluded that the model used accords with the developed model, and the results of the analysis can be generalized to the population with an error degree of 5% and a degree of confidence of 95%. From this output, it is shown that the Access to LMS, Material of LMS, and Discussion Forum variables together (simultaneously) have a positive and significant effect on the Effectivity of LMS.
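The reported Fcount can be reproduced from R² alone, since for an OLS model F = (R²/k) / ((1 − R²)/(n − k − 1)), with k = 3 predictors and n = 100 respondents here. A quick arithmetic check:

```python
# F statistic from the coefficient of determination
r2, k, n = 0.435, 3, 100
f = (r2 / k) / ((1 - r2) / (n - k - 1))
# f is about 24.64, matching the reported F of 24.653 up to the
# rounding of R^2 to three decimals in Tables 4 and 5
```

The same value appears as "F change" in Table 4 and as the ANOVA F in Table 5, confirming the two outputs are internally consistent.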
(d) T-Test Results (Partial) The t-test is performed to show whether each independent variable individually (partially) affects the dependent variable [17]. The hypothesis is accepted if the value of tcount is greater than ttable, with a tolerance of 0.05 and a confidence interval of 0.95; the significance level used in this analysis is 0.05. Table 6 shows the complete t-test results. From the output of the t-test (partial) in that table, several things can be stated: 1. The Access to LMS variable has a positive influence on the Effectivity of LMS of 0.072 (standardized beta of 0.302) with a tcount of 3.183 > 1.96 (ttable ), with a
Multiple Regression Model in Testing the Effectiveness of LMS After …
Table 6 T-test results (partial), Model 1

Model              B       Std. error   Beta    t       Sig     Tolerance   VIF
(Constant)         1.101   0.879        –       1.252   0.213   –           –
Access to LMS      0.072   0.023        0.302   3.183   0.002   0.655       1.527
Material of LMS    0.080   0.043        0.196   1.847   0.068   0.521       1.921
Discussion forum   0.088   0.033        0.281   2.654   0.009   0.524       1.910

(B and Std. error are the unstandardized coefficients; Beta is the standardized coefficient; Tolerance and VIF are the collinearity statistics.)

a Dependent variable: effectivity of LMS
significance level of 0.002, at a 95% confidence interval (degree of error = 5%). Thus, Access to LMS partially has a positive and significant effect on the Effectiveness of LMS. 2. The Material of LMS variable has a positive effect on the Effectivity of LMS of 0.080 (standardized beta of 0.196), with a tcount of 1.847 < 1.96 (ttable) and a significance level of 0.068, at a 95% confidence interval (degree of error = 5%). Thus, the Material of LMS has a positive but not statistically significant influence on the Effectiveness of LMS at the 95% confidence level; the effect is significant only at the 90% confidence level (10% degree of error). 3. The Discussion Forum variable has a positive influence on the Effectivity of LMS of 0.088 (standardized beta of 0.281), with a tcount of 2.654 > 1.96 (ttable) and a significance level of 0.009, at a 95% confidence interval (degree of error = 5%). Thus, the Discussion Forum partially has a positive and significant influence on the Effectiveness of LMS. So, the overall hypothesis testing results are summarized in Table 7.

Table 7 Summary of hypothesis testing

Hypothesis   Path                                       B       Std. Error   t/F      Sig     Hypothesis testing
H1           Access to LMS –> Effectivity of LMS        0.072   0.023        3.183    0.002   Accepted
H2           Material of LMS –> Effectivity of LMS      0.080   0.043        1.847    0.068   Rejected
H3           Discussion Forum –> Effectivity of LMS     0.088   0.033        2.654    0.009   Accepted
H4           AL + ML + DF –> Effectivity of LMS         F value = 24.653     24.653   0.000   Accepted
M. A. Dewi et al.
1. The Access to LMS variable has a positive influence on the Effectivity of LMS of 0.072 (standardized beta of 0.302), with a tcount of 3.183 > 1.96 (ttable) and a significance level of 0.002, at a 95% confidence interval (degree of error = 5%). Thus, Hypothesis 1, which states that Access to LMS has a positive and significant effect on the Effectivity of LMS, is statistically accepted. 2. The Material of LMS variable has a positive influence on the Effectivity of LMS of 0.080 (standardized beta of 0.196), with a tcount of 1.847 < 1.96 (ttable) and a significance level of 0.068, at a 95% confidence interval (degree of error = 5%). Thus, Hypothesis 2, which states that Material of LMS has a positive and significant effect on the Effectivity of LMS, is statistically rejected; the effect is positive but significant only at the 90% confidence level (10% degree of error). 3. The Discussion Forum variable has a positive influence on the Effectivity of LMS of 0.088 (standardized beta of 0.281), with a tcount of 2.654 > 1.96 (ttable) and a significance level of 0.009, at a 95% confidence interval (degree of error = 5%). Thus, Hypothesis 3, which states that partially the Discussion Forum has a positive and significant influence on the Effectivity of LMS, is statistically accepted. 4. The variables Access to LMS, Material of LMS, and Discussion Forum together (simultaneously) have a positive and significant effect on the Effectivity of LMS, with an Fcount of 24.653 > Ftable and sig F of 0.000 < 0.05. Thus, Hypothesis 4, which states that Access to LMS, Material of LMS, and Discussion Forum together (simultaneously) have a positive and significant influence on the Effectivity of LMS, is statistically accepted.
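The accept/reject decisions above reduce to a simple rule that can be sketched in a few lines of Python. This is a toy check, not part of the paper's SPSS workflow; the B and standard-error values are the rounded figures from Table 6, so the computed t-statistics are close to, but not identical to, the reported ones.

```python
# Partial t-test decision rule: a coefficient is significant when
# t_count = B / std.error exceeds t_table (the paper uses the normal
# approximation 1.96 for a two-tailed test at alpha = 0.05).
T_TABLE = 1.96

coefs = {  # variable: (B, std. error), rounded values from Table 6
    "Access to LMS": (0.072, 0.023),
    "Material of LMS": (0.080, 0.043),
    "Discussion forum": (0.088, 0.033),
}
decisions = {name: ("Accepted" if b / se > T_TABLE else "Rejected")
             for name, (b, se) in coefs.items()}
```

Running this reproduces the verdicts of Table 7: Access to LMS and Discussion forum accepted, Material of LMS rejected.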
4 Conclusion Based on the multiple regression model, the effectiveness of the LMS is still quite high through LMS access, considering that all learning activities are carried out in the LMS; discussion forums also remain quite significant for students as a weekly activity to discuss with lecturers and other students. However, the material has no significant effect on the effectiveness of the LMS, from which it can be concluded that many students still seek and study materials and resources outside the LMS. This study needs to be continued to determine the effect of the effectiveness of the LMS on the scores obtained by students, especially when they prefer to look for resources outside the LMS. Acknowledgements Bina Nusantara University supports this work through the Office of Research and Technology Transfer, as part of the Bina Nusantara University (BINUS) International Research Grant with Universiti Teknologi Malaysia (UTM) entitled “Model Pembelajaran Online berdasarkan Prediksi Nilai Mahasiswa untuk Pengukuran Prestasi Akademik Menggunakan Machine Learning” (Online Learning Model based on Student Grade Prediction for Measuring Academic Achievement Using Machine Learning), contract number No: 061/VR.RTT/IV/2022, contract date 8 April 2022.
References
1. Mishra L, Gupta T, Shree A (2020) Online teaching-learning in higher education during lockdown period of COVID-19 pandemic. Int J Educ Res Open 1
2. Yasmin M (2022) Online chemical engineering education during COVID-19 pandemic: lessons learned from Pakistan. Educ Chem Eng 39:19–30
3. Alturki U, Aldraiweesh A (2021) Application of learning management system (LMS) during the COVID-19 pandemic: a sustainable acceptance model of the expansion technology approach. Sustainability 13
4. Syahruddin S et al (2021) Students’ acceptance to distance learning during Covid-19: the role of geographical areas among Indonesian sports science students. J Heliyon 7
5. Aldiab A et al (2019) Utilization of learning management systems (LMSs) in higher education system: a case review for Saudi Arabia. Energy Procedia 160:731–737
6. Ikhsan RB, Prabowo H, Yuniarty (2021) Validity of the factors students’ adoption of learning management system (LMS): a confirmatory factor analysis. ICIC Express Lett Part B: Appl 12:979–986
7. Bradley VM (2021) Learning management system (LMS) use with online instruction. Int J Technol Educ (IJTE) 4:68–92
8. Takahashi S et al (2014) The role of learning management systems in educational environments: an exploratory case study. J Inf Syst Res Innov 2:57–63
9. Al-Fraihat D, Joy M, Masa’deh R, Sinclair J (2020) Evaluating e-learning systems success: an empirical study. Comput Hum Behav 102:67–86
10. Desak GGFP, Gayatri NAG (2016) Evaluasi implementasi binus online pada proses pembelajaran studi kasus: program studi sistem informasi. Ultima Infosys 7
11. Duta IPGP, Rio Febriansyah MR, Anggreainy MS (2021) Effectiveness of LMS in online learning by analyzing its usability and features. In: 1st international conference on computer science and artificial intelligence (ICCSAI), pp 56–61
12. Kaburuan ER, Lindawati ASL, Rahmanto NPT (2020) User experience evaluation on university’s learning management system (LMS). In: 1st international conference on intermedia arts and creative technology (CREATIVEARTS2019), pp 176–184
13. Alita D, Putra AD, Darwis D (2021) Analysis of classic assumption test and multiple linear regression coefficient test for employee structural office recommendation. IJCCS 15:295–306
14. George D, Mallery P (2019) IBM SPSS statistics 25 step by step: a simple guide and reference, 15th edn. Routledge
15. Rawlings JO et al (1998) Applied regression analysis: a research tool, 2nd edn. Springer-Verlag, New York
16. Belsley DA (1991) Conditioning diagnostics: collinearity and weak data in regression. John Wiley & Sons Inc., New York
17. Ghozali I (2016) Aplikasi analisis multivariete dengan program IBM SPSS 23. Penerbit Universitas Diponegoro, Semarang
Skin Disease Detection as Unsupervised-Classification with Autoencoder and Experience-Based Augmented Intelligence (AI) Kushal Pokhrel, Suman Giri, Sudip Karki, and Cesar Sanin
Abstract In this paper, we propose an Artificial Neural Network using an autoencoder that is trained with fewer images but increases its accuracy through experience-based Augmented Intelligence. Most neural network systems use a large number of training sets to achieve a well-performing model and spend great effort on pre-processing and training times to create a static model. In our case, we propose a system that uses a training set of just 4% of images per class and learns with each iteration of use, interacting with the user and acquiring experience to increase accuracy. The average accuracy rate increases at a rate of 1.33% per every 20 user experiences. The proposed model offers advantages in creating dynamic experience-based augmented intelligence models. Keywords Autoencoder neural networks · Unsupervised learning · Image processing · Convolutional neural network · Deep neural network · Skin disease · Imaging analysis · Experience-based knowledge representation · Augmented intelligence
1 Introduction Skin diseases are the 4th most common cause of disease burden worldwide [1]. Robust and automated systems have been developed to lessen this burden and help patients K. Pokhrel (B) · S. Giri · S. Karki · C. Sanin Australian Institute of Higher Education AIH, Sydney, NSW 2000, Australia e-mail: [email protected] S. Giri e-mail: [email protected] S. Karki e-mail: [email protected] C. Sanin e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_82
K. Pokhrel et al.
to assess their skin disease early. Most of the review papers presenting systems for skin disease diagnosis only provide skin cancer classification [2] and lack identification of other skin diseases. For all skin diseases, treatments are more effective and less damaging when the disease is found early. Our project attempts to detect skin diseases from a wider pool. Our proposed approach is based on the pre-processing and use of a deep learning algorithm, involving training, validation, and classification phases as usual; however, it adds an Augmented Intelligence (AI) element based on experience. Our proposal starts with a very low training set that grows over time based on human interaction with the system, conforming with the AI concept. Experiments were performed on 490 images, obtaining a 35% accuracy with a growing average rate of 1.33% accuracy for every 20 experiences. This is achieved for seven-class classification using an Autoencoder Neural Network (AENN) and an unsupervised learning algorithm. We propose a system that requires considerably fewer images than other systems, saving processing time and costs, while over time it is trained through augmented intelligence to reach the optimum level of accuracy. The model can achieve the optimum learning curve it needs to scale to the accuracy of other systems.
2 Background Skin diseases are the 4th most common cause of disease burden worldwide. Robust and automated systems have been developed to lessen this burden and help patients to assess skin lesions at early stages [3]. Most available systems found in the literature only provide skin cancer classification, leaving aside other less deadly diseases. All skin treatments are more effective and less disfiguring when the disease is found early, yet early detection is challenging because many skin diseases have similar characteristics. In this project, we attempt to detect skin diseases. Our novel system attempts to diagnose the most common skin lesions: Melanocytic nevi, Melanoma, Benign keratosis-like lesions, Basal cell carcinoma, Actinic keratoses, Vascular lesion, and Dermatofibroma [3], but could easily be extended to other lesions given the dynamic learning model [4]. The proposed approach is based on an augmented intelligence model combining Decisional DNA, autoencoder, and neural network technologies in an unsupervised learning algorithm.
2.1 Existing Frameworks Several frameworks and technologies based on machine learning and artificial intelligence support image identification, particularly in skin diseases. Current frameworks found in literature provide accuracies between 75 and 95% in models that do not grow over time, improve, nor use augmented intelligence, but what is more critical,
Skin Disease Detection as Unsupervised-Classification …
they are static models; this is not optimum. Below are presented the most common technologies found in literature.
2.1.1 Artificial Neural Network (ANN)
Inspired by the biological pattern of our brain’s neurons, an Artificial Neural Network (ANN) is a statistical non-linear predictive modelling method used to learn complex relationships between input and output [5, 6]. It learns computations through feed-forward and back-propagation over three types of nodes. It employs supervised and unsupervised learning approaches [7]. Also, ANNs require processors with parallel processing power.
2.1.2 Back Propagation Neural Network (BNN)
Backpropagation is a strategy in ANNs to find out the error contributions of each neuron after a cluster of information is processed (in image recognition, multiple images). Backpropagation is quite sensitive to noise and outlier data. BNN classifiers can achieve up to 75–80% accuracy [7]. BNN benefits prediction and classification, but its processing speed is slower compared to other learning algorithms [4, 7].
2.1.3 Support Vector Machine (SVM)
SVM is a supervised non-linear classifier that constructs an optimal n-dimensional hyperplane to separate all the data points into two categories [7]. In SVM, choosing a fine kernel function isn’t easy, as it requires a long training time for large datasets. Since the final model is not easy to interpret, small calibrations cannot be made to it, and it becomes difficult to tune the parameters used in SVMs. SVMs, when compared with ANNs, always give improved results [5].
2.1.4 Convolutional Neural Network (CNN)
CNN is a category of deep neural networks, where the machine learns on its own and divides the data provided into levels of prediction in a very short period, giving accurate results [7]. A CNN is a deep learning algorithm which consists of a combination of convolutional and pooling layers in sequence, followed by fully connected layers at the end like a multilayer neural network [7]. CNN stands out among all alternative algorithms in classifying images thanks to crucial characteristics such as sparse connectivity, shared weights, and pooling features that extract the best features and gain accuracy. Giant databases of labelled data and pre-trained networks are now publicly available [8–10].
2.1.5 Set of Experience Knowledge Structure (SOEKS) and Decisional DNA (DDNA)
Knowledge and experience engineering techniques are becoming increasingly useful and popular components of hybrid integrated systems used to solve complex real-life problems in different disciplines [4]. These techniques offer features such as learning from experience, handling noisy and incomplete data, and helping with decision-making and predicting capabilities. Several different applications of a multidomain knowledge representation structure called Decisional DNA (DDNA) have been implemented and shared for the exploitation of embedded knowledge within different technologies [4, 11]. DDNA has been used in combination with CNNs to help the system learn images and the experiential knowledge of a set of practices, training itself in a structured form much as humans develop knowledge in a structure in order to remember [4, 12].
2.1.6 Augmented Intelligence (AI)
Augmented intelligence (AI) (differentiated from artificial intelligence, which is not referred to in this paper as AI) is a design pattern for a human-centred collaboration approach in which people and artificial intelligence work together to improve learning ability, such as teaching, the decision-making process, and unique experiences [3]. Although artificial intelligence has frequently been acknowledged as a significant strategy to help identify images, particularly to establish medical assessments, this very fact underscores the necessity and potential of applying AI. Human expertise cannot be easily substituted, and AI can help specialists make complicated decisions by quickly compiling changing data from disparate sources. AI can help solve current issues and advance machine-based intelligence.
3 Proposed Framework: Methodology and Features 3.1 Datasets The image dataset used is the HAM10000 from Harvard University [13]. The dataset contains 640 × 450 RGB images at 96 dpi, covering 276 skin disease images across 7 categories. Our model starts with an initial dataset of 280 images for training and 105 images for AI testing, adding up to 385. Then, an additional set of 147 images was used for increasing AI in the system and measuring improvement.
3.2 Preprocessing and Loading Data Images were classified and preprocessed as follows. Images were gathered and converted into 64 × 64-pixel images. Then, with the encoder function, each image is reduced in quality by converting it into a grayscale reconstruction in order to increase the chances for the encoder to focus on the segmented image [14]. Such a segmented image is used with the autoencoder for unsupervised classification. Afterwards, the decoder decodes the image back to its initial state and applies data augmentation for a better yield, producing results over the 7 disease clusters indicating which disease is most likely. This creates a prediction function used for skin disease assessment. Grayscale images with pixel values ranging from 0 to 255 and a dimension of 64 × 64 pixels are then loaded into the Keras framework. Train and test images are arranged into float32 matrix sets of size 64 × 64 × 1, rescaling pixel values into the range 0–1 inclusive. This can now be fed into the CNN.
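The preprocessing steps above can be sketched in a dependency-free way with NumPy: RGB to grayscale, downscaling to 64 × 64, then float32 rescaling to [0, 1] with a trailing channel axis. The block averaging and the 448 × 640 input size are illustrative assumptions chosen so the blocks divide evenly; the HAM10000 images are 640 × 450 and a real pipeline would resize them with an image library before this step.

```python
import numpy as np

def preprocess(rgb, size=64):
    """rgb: (H, W, 3) uint8 image with H and W multiples of `size`."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance grayscale, 0-255
    h, w = gray.shape
    # Block-average down to (size, size); a stand-in for a library resize
    blocks = gray.reshape(size, h // size, size, w // size)
    small = blocks.mean(axis=(1, 3))
    x = small.astype("float32") / 255.0            # rescale pixels to [0, 1]
    return x[..., None]                            # shape (64, 64, 1) for Conv2D

# Random stand-in image with dimensions divisible by 64 (assumption)
img = np.random.default_rng(1).integers(0, 256, size=(448, 640, 3), dtype=np.uint8)
x = preprocess(img)
```

The resulting 64 × 64 × 1 float32 array matches the input shape the text describes feeding into the network.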
3.3 Training Model The proposed model uses an Autoencoder Neural Network (AENN) in an unsupervised learning algorithm, which is highly regarded for detecting images at any scale and learning their patterns. For training purposes, images must be divided into training and testing sets; such a division can be in any ratio. The batch size and number of epochs must also be decided beforehand; in our case, the lowest value, 1, was chosen to start with. Increasing the number of epochs will increase accuracy. The autoencoder is a type of ANN used to learn data encodings in an unsupervised manner. It aims at learning a lower-dimensional representation (encoding) for higher-dimensional data, typically for dimensionality reduction, by training the network to capture the most important parts of the input image [2]. Autoencoders consist of three parts: encoder, bottleneck, and decoder.
3.3.1 Encoder
A module that compresses the train-validate-test set input data into an encoded representation that is typically several orders of magnitude smaller than the input data. The encoder is a series of convolutional blocks followed by pooling modules that compress the model’s input into a small segment known as the bottleneck [7].
3.3.2 Bottleneck
A module that contains the compressed knowledge representations and is therefore the most essential part of the network. The bottleneck exists to limit the flow of information from the encoder to the decoder, enabling only the most important data to pass through and assisting the model in forming a knowledge representation of the input since it is built in such a manner that the greatest information held by a picture is captured. It generates valuable correlations between various inputs inside the network [14–16]. A bottleneck in the form of a compressed representation of the input stops the neural network from remembering and overfitting the data.
3.3.3 Decoder
A module that helps the network “decompress” the knowledge representations and reconstructs the data back from its encoded form. The output is then compared with the set input data. Ultimately, the decoder is a collection of a sequence of upsampling and convolutional units that rebuilds the compression feature back into the image output of the bottleneck. Because the decoder’s input is a compressed knowledge representation, it acts as a “decompressor” and reconstructs the picture from its latent properties [14–16]. Figures 1 and 2 depict a convolutional encoder and its layers.
Fig. 1 Visual representation of autoencoder and its layers
Fig. 2 Image processing and classification function
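The encoder-bottleneck-decoder idea can be illustrated in miniature with a linear autoencoder built from a truncated SVD. This is not the paper's convolutional model, just a sketch of the same principle: the encoder projects inputs down to a small code (the bottleneck), the decoder reconstructs, and MSE measures the reconstruction loss. The data and sizes here are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))           # 200 samples, 64 features (stand-in data)
X -= X.mean(axis=0)                      # center, as a linear autoencoder assumes

code_size = 8                            # the bottleneck size hyperparameter
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:code_size]                       # principal directions (8 x 64)

codes = X @ W.T                          # encoder: compress 64 -> 8
X_hat = codes @ W                        # decoder: reconstruct 8 -> 64
mse = float(((X - X_hat) ** 2).mean())   # reconstruction loss
```

Shrinking `code_size` tightens the bottleneck and raises the reconstruction loss; the convolutional version in Figs. 1 and 2 follows the same trade-off with learned nonlinear layers in place of the SVD projection.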
Before training an autoencoder, four hyperparameters must be established [14, 15].
(a) Code size: The most essential hyperparameter used to modify the autoencoder is the code size, also known as the bottleneck size. The magnitude of the bottleneck determines how much data must be compressed; it also acts as a regularization term. For our model, the highlighted hyperparameters are: activation function: ReLU; loss: MSE / binary cross-entropy with the Adam optimizer; dropout: 0.1/0.2; batch size: 1/2; and epochs: 1/2.
(b) Number of layers: As with other neural networks, the depths of the encoder and decoder are crucial hyperparameters for optimizing autoencoders. A higher depth increases model complexity, while a lesser depth is quicker to process.
(c) Node count per layer: The number of nodes per layer determines the weights used per layer. The number of nodes typically decreases with each consecutive layer of the autoencoder as the input to each of these levels grows smaller throughout the layers.
(d) Reconstruction loss: The error function used to train the autoencoder depends largely on the type of input and output to which the autoencoder is adapted. When dealing with picture data, the most often used loss functions for reconstruction are MSE loss and L1 loss. If the inputs and outputs are in the [0, 1] range, as in MNIST, Binary Cross Entropy may be used as the reconstruction loss.

3.3.4 Web User Interface and Augmented Intelligence
The model has a front-end interface running on Flask as a development server in a production/testing environment, and gains AI experience through experienced skin disease experts (SDE). The platform allows an SDE to upload pictures of skin lesions to be assessed and diagnosed by the system. Upon diagnosis or more advanced skin disease testing, if the SDE disagrees with the system, a recommendation can be attached to the obtained result, and the system will add this experience to improve accuracy and grow in a dynamic mode. Currently, the model is closed to
the 7 skin diseases presented, but an open model will allow any type of skin disease AI to be added.
4 Result Below are the metrics gathered while testing and training the model with the 490 images from the HAM10000 dataset. Our proposed model starts with a very low number of 280 training images. After this, six iterations of AI experience were done using 21 images per iteration (i), 3 images per disease, bringing the total to 385 images. This allows us to measure how experience increases accuracy. Training experiments showed slight variations based on the images used and the operation of the model. Experiment 1 showed an average increasing accuracy of 1.15% (Delta) per 21 images, while experiment 2 showed an average Delta of 2.24% per 21 images. Results are shown in Tables 1 and 2. A third experiment was performed continuing from the set of 395 images used. This was entirely on the AI experience model: seven iterations of 21 images, 3 per disease, adding to a total of 537 images. An average increase Delta of 1.33% per 21 images was achieved (Table 3).

Table 1 First training experiment

i   No of images   Loss    Accuracy   Time (s)   Trainable parameters   Delta
1   280            75.86   24.14      41         1,138,951              –
2   301            74.2    25.8       44         1,138,951              1.66
3   322            72.74   27.26      46         1,138,951              1.46
4   343            72.02   27.98      47         1,138,951              0.72
5   364            71.46   28.54      53         1,138,951              0.56
6   385            70.16   29.84      59         1,138,951              1.30
Table 2 Second training experiment

i   No of images   Loss    Accuracy   Time (s)   Trainable parameters   Delta
1   280            85.23   14.77      55         1,138,951              –
2   301            84.17   15.83      69         1,138,951              1.06
3   322            81.1    18.90      76         1,138,951              3.07
4   343            80.11   19.89      73         1,138,951              0.99
5   364            78.18   21.82      73         1,138,951              1.93
6   385            74.02   25.98      85         1,138,951              4.16
Table 3 The third experiment results, based on AI experience

i   Images   Loss    Accuracy   Delta
1   21       73.0    27.0       –
2   21       71.3    28.7       1.70
3   21       70.98   29.02      0.32
4   21       70.13   29.87      0.85
5   21       68.35   31.65      1.78
6   21       66.23   33.77      2.12
7   21       65      35.0       1.23
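The average Delta quoted in the text follows directly from the accuracy column of Table 3; a quick sketch of the arithmetic:

```python
# Accuracies per AI-experience iteration, from Table 3 (percent)
acc = [27.0, 28.7, 29.02, 29.87, 31.65, 33.77, 35.0]

# Per-iteration gains (the Delta column), rounded to two decimals
deltas = [round(b - a, 2) for a, b in zip(acc, acc[1:])]

# Average gain per 21-image iteration: (35.0 - 27.0) / 6 iterations
avg_delta = (acc[-1] - acc[0]) / (len(acc) - 1)
```

This reproduces the reported figure of a 1.33% average accuracy increase per 21-image experience iteration.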
5 Conclusions and Future Work The experiments allow us to determine that the unsupervised classification of our model was successful in learning the patterns needed to detect images for skin diseases of 7 different types: Actinic Keratoses, Basal Cell Carcinoma, Melanocytic Nevi, Dermatofibroma, Melanoma, Benign Keratoses, and Vascular Lesions. Testing a model with a low number of training images offered a low accuracy of approximately 30%. However, most importantly, adding elements of experience-based AI provided by skin disease experts produces an increasing learning curve that grows at an average Delta accuracy of 1.33% per 21-image experience. AI by experience offers an alternative model of improvement with few training and preprocessing images in a dynamic form. Future work associated with this model includes assessing other alternatives for the model to grow in experience more quickly and with better capability, adding more layers to the classifier, and determining the Region of Interest (ROI) to obtain better accuracy.
References
1. American Academy of Dermatology Association, Skin conditions by the numbers, https://www.aad.org/media/stats-numbers. Last Accessed 1 Oct 2022
2. Dildar M, Akram S, Irfan M, Khan HU, Ramzan M, Mahmood AR, Alsaiari SA, Saeed AHM, Alraddadi MO, Mahnashi MH (2021) Skin cancer detection: a review using deep learning techniques. Int J Environ Res Public Health 18(10):5479. https://doi.org/10.3390/ijerph18105479
3. Cano E, Mendoza-Avilés J, Areiza M, Guerra N, Mendoza-Valdés JL, Rovetto CA (2021) Multi skin lesions classification using fine-tuning and data-augmentation applying NASNet. PeerJ Comput Sci 7:e371. https://doi.org/10.7717/peerj-cs.371
4. Shafiq SI, Sanin C, Szczerbicki E (2014) Set of experience knowledge structure (SOEKS) and decisional DNA (DDNA): past, present, future. Cybern Syst 45(2):200–215. https://doi.org/10.1080/01969722.2014.874830
5. Kucharski D, Kleczek P, Jaworek-Korjakowska J, Dyduch G, Gorgon M (2020) Semi-supervised nests of melanocytes segmentation method using convolutional autoencoders. Sensors 20(6):1546. https://doi.org/10.3390/s20061546
6. Abiodun OI, Jantan A, Omolara AE, Dada KV, Mohamed NA, Arshad H (2018) State-of-the-art in artificial neural network applications: a survey. Heliyon 4(11):e00938. https://doi.org/10.1016/j.heliyon.2018.e00938
7. Guo X, Liu X, Zhu E, Yin J (2017) Deep clustering with convolutional autoencoders. In: Liu D, Xie S, Li Y, Zhao D, El-Alfy ES (eds) Neural information processing. ICONIP 2017. LNCS, vol 10635, pp 373–382. Springer, Heidelberg. https://doi.org/10.1007/978-3-319-70096-0_39
8. P54: development of a national dataset of skin images to support evidence-based artificial intelligence in dermatology. British J Dermatol 187(S1):57–57 (2022). https://doi.org/10.1111/bjd.21176
9. Kinyanjui NM, Odonga T, Cintas C, Codella NCF, Panda R, Sattigeri P, Varshney KR (2020) Fairness of classifiers across skin tones in dermatology. In: Medical image computing and computer assisted intervention – MICCAI 2020 proceedings, pp 320–329. https://doi.org/10.1007/978-3-030-59725-2_31
10. Zhang H, Li F, Wang J, Wang Z, Shi L, Zhao J, Sanin C, Szczerbicki E (2017) Adding intelligence to cars using the neural knowledge DNA. Cybern Syst 48(3):267–273. https://doi.org/10.1080/01969722.2016.1276780
11. Sun Y, Mao H, Guo Q, Yi Z (2016) Learning a good representation with unsymmetrical autoencoder. Neural Comput Appl 27:1361–1367
12. de Oliveira CS, Sanin C, Szczerbicki E (2018) Video classification technology in a knowledge-vision-integration platform for personal protective equipment detection: an evaluation. Intell Inf Database Syst 10751:443–453. https://doi.org/10.1007/978-3-319-75417-8_42
13. Long JB, Ehrenfeld JM (2020) The role of augmented intelligence (AI) in detecting and preventing the spread of novel coronavirus. J Med Syst 44:59. https://doi.org/10.1007/s10916-020-1536-6
14. Harvard Dataverse, The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, https://dataverse.harvard.edu/dataset.xhtml?persistentId. https://doi.org/10.7910/DVN/DBW86T. Last Accessed 1 Oct 2022
15. Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 39(12):2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615
16. Wu B, Nair S, Martin-Martin R, Fei-Fei L, Finn C (2021) Greedy hierarchical variational autoencoders for large-scale video prediction. In: 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp 2318–2328. https://doi.org/10.1109/CVPR46437.2021.00235
Intelligent System of Productivity Monitoring and Organic Garden Marketing Based on Digital Trust with Multi-criteria Decision-Making Method Sularso Budilaksono , Febrianty , Woro Harkandi Kencana , and Nizirwan Anwar Abstract The problem of food security in Indonesia is a serious one for which alternative solutions need to be sought immediately. The rapid development of information technology should be one of the great opportunities for improving the management of agricultural production and marketing in Indonesia and supporting sustainable food security programs. Organic farming that relies on natural inputs has complexities, especially since “organic” is a process claim, not a product claim. Therefore, IT-based organic production and marketing management at the group or independent garden scale will encourage the realization of food self-sufficiency in Indonesia. This study aims to build an intelligent system for monitoring the productivity and marketing of organic gardens based on digital trust by applying the Multi-Criteria Decision Making (MCDM) method. The research partner is the Berdaya Agri Indonesia (BAI) SME, which is involved in the management and development of organic farmers, including marketing organic agricultural products, and has yet to integrate information technology that supports effective and efficient organic farming management. The application enables Farmer Partner and BAI administrators to monitor the development of the organic cultivation inventory, making it possible to follow up on information as soon as possible. Organic agriculture employs manure or organic liquid fertilizer, pure water for watering, and organic pesticides; the soil used in organic agriculture must be free of chemical residues from conventional agriculture. Customers can place orders, make payments, and request a shipping address based on their desired destination. 
Keywords Intelligent system · Monitoring · Organic garden · Digital trust · Multi-criteria S. Budilaksono (B) · W. H. Kencana YAI Persada Indonesia University, Jakarta 10430, Indonesia e-mail: [email protected] Febrianty Palcomtech Institute of Technology and Business, Palembang 30151, Indonesia N. Anwar Esa Unggul University, DKI Jakarta 11510, Indonesia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_83
1 Introduction The issue of food security in Indonesia is severe, and an alternative solution must be sought immediately. The rapid development of information technology ought to be one of the most significant opportunities for enhancing agricultural production and marketing management in Indonesia and bolstering sustainable food security programs. Because "organic" is a process claim and not a product claim, farming that relies on natural inputs presents challenges. Consequently, IT-based organic production and marketing management for group or individual garden scales will promote Indonesia's food self-sufficiency. In dynamic business networks, a crucial challenge for the organizations involved is to achieve and maintain the interoperability of their information systems amid continuous change [1]. Promoting increased public awareness of the consumption of organic agricultural products and support for organic farming also has a significant impact. Information technology in the farm sector will change how decisions are made, enabling stakeholders to arrive at better choices; data mining is pivotal in achieving realistic and efficient solutions to this and several other problems in agriculture [2]. Information technology (IT) is one of the government's focus areas for encouraging progress in the agricultural sector, and IT development in sustainable agricultural production management is part of the Strategic Plan of the Ministry of Agriculture for the 2020–2024 period. According to BPS data for 2018, internet potential at the farmer and rural level is 93.9%. Based on the 2018 Village Potential data, 4.5 million farmers use the internet, about 13% of the roughly 33.4 million Indonesian farmers [3]. According to Dymond, availability, ease of access, and costs affordable to rural communities can impact the agricultural sector's development in various countries [4].
Consequently, the potential for digitizing agriculture in Indonesia is substantial and likely to be realized. Access to Big Data and IoT services has given farmers real-time access to information and expertise to assist with important decisions in their daily activities [5]. The development of information technology that facilitates organic agriculture management is still in its infancy. This Scientific Research Program's SME partner is "Berdaya Agri Indonesia" (BAI), a pioneering Indonesian agricultural start-up whose mission is to promote sustainable agriculture in Indonesia and Southeast Asia. BAI, which is involved in the management and development of organic farmers, including marketing organic agricultural products, lacks integrated information technology that supports effective and efficient organic farming management. It also needs help finding solutions to increase public awareness of and support for organic farming. The application of information and communication technologies in agriculture can refer to each phase of the agricultural cycle: pre-cultivation, crop cultivation, and post-harvest are the three main phases of the farm cycle, according to Deloitte [6]. According to a study titled "Design of a Geographic Information System (GIS) as a Development of an Operational IoT-Based Plantation Area Monitoring System"
by Thereza et al., the plantation sector in Indonesia still relies on conventional systems (human labor) to control the field, making it difficult to enhance operational efficiency, effectiveness, and output. In addition, the Covid-19 pandemic has had indirect effects that may reduce productivity, and some changes have been made to the operating system and management of plantation land. Technology and innovation are necessary to maintain and improve product quality and quantity [7]. Online marketing can disseminate information about agricultural products to overcome limitations in agricultural product sales transactions and create a more effective and efficient sales system [8]. According to Mowen, consumer trust comprises all consumer knowledge and conclusions about objects, attributes, and benefits. This trust cannot merely be acknowledged by other parties or business partners; it must be built from the start and demonstrated [9]. Promotion can be a stimulant to attract consumers, but trust remains an essential aspect of the business: a transaction will occur only if both parties have confidence in one another [10]. Conventional trust and Industry 4.0 technology have the synergistic potential to increase cooperation and collaboration between businesses in an innovation network, accelerating open innovation while strengthening digital trust-based business relationships; both are essential building blocks of digital trust [11]. Multi-Criteria Decision Making (MCDM) is one of the most popular decision-making techniques. MADM (Multi-Attribute Decision Making) identifies the optimal alternative from a set of options (a choice problem) using alternative preferences as selection criteria, while MODM (Multi-Objective Decision Making) employs an optimization strategy that requires a mathematical model of the problem before it can be solved [12].
2 Research Method The system development method used in this research is the SDLC (Software Development Life Cycle), which proceeds through the following stages: Planning, Analysis, Design, Development, Testing, and finally Implementation and Maintenance. System analysis was carried out based on the management needs of organic garden cultivation at Berdaya Agri Indonesia. This small and medium industry requires an application that can be used to monitor organic garden cultivation from planting to sales. The application developed must be based on Android, considering that the
location of organic cultivation is in the highlands and is separated from non-organic cultivation. Based on the areas surveyed, organic locations are separate from non-organic plantings. This separation is necessary because the soil used for organic crops must first be freed of chemical fertilizer residues over a period of four years.
3 Result and Discussion 3.1 Result This activity's business process derives from Garden Management, which oversees organic garden cultivation of specific commodities from seedlings through fertilization, pest and disease eradication, watering, treatment, and harvesting. After all the farmers have completed the harvesting process, the combined farmer groups conduct an inventory of the successfully harvested commodities for sorting. The results are submitted to Berdaya Agri Indonesia (BAI) to be sold via the application. Once BAI has received the product, BAI makes payments to the Farmer Partner, which are distributed directly to the farmers. This process occurs regularly because farmers have multiple plots containing various commodities; therefore, the Farmer Partner must be able to harvest and manage produce that can be deposited with BAI daily. In this application, the Farmer Partner has capabilities for daily activities, including planting management, harvesting management, and BAI deposit management (Fig. 1). The application also provides Farmer Partner pages for managing product planting. The product's planting date and harvest date are displayed on this page, along with an arrow that, when clicked, reveals additional product information. When the planting button is clicked, the planting form that must be completed for the next planting is displayed. The Farmer Partner can manage the documentation of its organic certification in the form of photographs or videos of the lengthy certification process. In addition, the Farmer Partner can include video documentation of organic cultivation to increase the confidence of prospective customers, as the organic harvest business is based on public trust (Fig. 2). The sale or distribution of organic garden produce to customers is another requirement for BAI.
Typically, organic vegetables are not sold directly to final consumers or the public but rather through specific distributor channels. These distributors may be merchants, cafes, restaurants, hotels, hospitals, and other entities that distribute organic produce to final consumers. BAI must first have approved this customer's or distributor's account in the Organic Gardens application. The subsequent business procedure is identical to e-commerce software in general. The customer's homepage offers several menus, including profile, transaction history, shop, search, basket, popular products, top products, and orderable categories such as fresh vegetables and fresh fruit.
Fig. 1 Use case diagram for organic garden application
Fig. 2 Activity diagram for the organic garden application
In addition, if a customer is interested in one of the products in the product list and wishes to place an order, they must click on the product, and the product's detail page will be displayed. This page has several features: an "Add to Cart" button, a notes field for special order instructions, a quantity selector for specifying the number of products to be ordered, a description of the product's characteristics, the quantity of available product stock, and the name of the store from which the product will be shipped.
This organic garden application is fully managed by the BAI Admin so that Berdaya Agri Indonesia can distribute organic garden produce efficiently and effectively. The BAI administrator can run six processes through the application:
1. Approve customer accounts and farmer partner accounts
2. Conduct an inventory of products that will be sold through the application
3. Receive orders and payments from customers
4. Ship paid orders to the customer's address
5. Manage Farmer Partners
6. Receive product supply or deposits from farmer partners.
Every day there is a process of depositing organic garden products from the Farmer Partner because organic produce must be harvested daily. These deposits fill BAI's warehouse daily with various organic agricultural products. BAI must update its inventory via the Organic Garden application to make it simpler for customers to select and purchase products online. Farmer Partner Management enables BAI to manage farmer associations practicing organic farming in various locations. BAI currently has two productive Farmer Partner locations: one in Soreang, West Bandung Regency, with organic vegetable commodities, and one in Subang Regency, with organic rice commodities. The Farmer Partner management feature enables BAI to accept shipments of high-quality organic agricultural commodities. A list of farmer partners who have joined or registered is displayed on the Organic Gardens application's partner farmer page, showing both those who have joined and those who have yet to. The intelligent system is a component of the application that enables Farmer Partner and BAI administrators to monitor the development of organic cultivation inventory results, so information can be followed up as soon as possible. For the BAI administrator, one example is information on stock that has fallen to the minimum threshold, which triggers an immediate notification to the Farmer Partner to send the commodities whose stock has dropped below the minimum limit. Another intelligent system identifies the products customers purchase most; this information has a wide-ranging effect, since farmers should cultivate such crops widely because they are widely appreciated and purchased by consumers.
Another intelligent system is information on products ready for harvest: because crop management includes the planting date and estimated harvest date, accurate harvest predictions are possible. For Farmer Partners, the system enables the Farmer Group Association to monitor the managed product information. The Farmer Partner account's intelligent system includes the following: • Information on stock below the minimum threshold. • The number of products deposited with BAI. • The number of products harvested by farmers.
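The minimum-stock notifications listed above amount to a threshold check over the inventory. A minimal sketch (the field names `stock` and `min_stock` are assumptions for illustration, not the application's actual schema):

```python
# Hypothetical sketch of the minimum-stock alert: flag every product whose
# current stock has fallen to or below its configured threshold, so the BAI
# admin can notify the Farmer Partner to send more of that commodity.
def low_stock_alerts(inventory):
    """inventory: list of dicts with 'name', 'stock', 'min_stock' keys."""
    return [item["name"] for item in inventory
            if item["stock"] <= item["min_stock"]]

inventory = [
    {"name": "organic spinach", "stock": 4, "min_stock": 10},
    {"name": "organic rice", "stock": 120, "min_stock": 50},
]
print(low_stock_alerts(inventory))  # only the spinach is below threshold
```

In the application this check would run against the live warehouse database rather than an in-memory list.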
Fig. 3 Smart system for BAI administrators and farmer partners
This intelligent system data illustrates Farmer Partner’s inventory fluctuations so that BAI’s requests can be anticipated and sufficient stock is available (Fig. 3).
3.2 Discussion The number of farmers assisted by the small and medium industry Berdaya Agri Indonesia is 352, spread across Bogor Regency, Subang Regency, and Bandung Regency, and the number of consumers documented as repeat purchasers of organic products is 458 (Report of Berdaya Agri Indonesia, January 2020–July 2021). Fifty respondents comprised the pilot test sample for the intelligent monitoring system prototype. This information was utilized to develop the Organic Garden application. Based on initial observations and discussions, there are three user groups for this application: Berdaya Agri Indonesia (BAI), represented by administrators; customers who purchase products from BAI; and Farmers Group Association (Gapoktan) users. BAI's customers are parties who purchase organic vegetables for resale or for special preparation of their guests' meals; these clients may be restaurants, wholesalers, hotels, or organic produce stores. Gapoktan is a collection of farmer groups that offers guidance and problem-solving services to members who encounter issues with organic agriculture.
In the highlands, organic vegetable cultivation is practiced because it cannot be mixed with conventional agriculture. The soil used in organic agriculture must be free of chemical residues from conventional agriculture; therefore, the location for organic vegetable cultivation is distinct from all non-organic cultivation areas. Organic agriculture employs manure or organic liquid fertilizer, pure water for watering, and organic pesticides for eradicating pests and diseases. Organic farmers practice only organic agriculture because it requires specialized knowledge and abilities. Considering that the application's users are organic farmers, the developed application is based on the Android platform. The BAI administrator has the most privileges, including the ability to approve the creation of customer and farmer partner accounts (combinations of farmer groups). Customers can place orders, make payments, and request shipping to their desired destination. Due to farmers' limited knowledge, skills, and application-technology literacy, their access to the Farmer Partner application is restricted, so Farmer Partner accounts represent groups of farmers. The smart system in this application is used by BAI admins and farmer partners to monitor organic planting management. The application consists of several smart systems. A smart planting system provides recommendations for organic seeds and the best planting season to farmers or farmer partners; this recommendation is based on training data stored in the application covering organic seeds, weather information, soil fertility, fertilization, treatment, and past yields of these commodity seeds. Reports on success and failure in planting the same seeds under the same weather and soil fertility conditions become valuable training data or experience for farmers in the next planting season.
This recommendation guides farmers in continuing the success of previous farmers or correcting past failures with the same crop commodity (Fig. 4). A smart system for warehousing management is also implemented with the Minimal Stock algorithm: when the stock of an organic product reaches its limit, it can be reordered from farmers, or the plant's productivity can be increased. Because organic cultivation cannot be accelerated and the planting period is fixed, the planting system must be scheduled so that products in the warehouse are always available in good quality. Smart systems to monitor marketing and sales are also included in this application, including an algorithm for the organic products customers buy the most and an algorithm for the organic products most deposited with BAI. This smart marketing and sales system recommends to customers the organic products purchased most often and those most abundant in BAI's warehouse. Organic product information accompanied by organic certificates and other supporting information guides buyers to the products they want; this supporting information can take the form of videos of organic planting, fertilizing, and treatment. Organic customers, in general, are companies or distributors such as restaurants, hotels, organic shops, hospitals, and distributors of organic vegetables.
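The smart planting recommendation described above can be sketched as a lookup over historical planting records, picking the seed variety with the best average yield under matching weather and soil conditions (a hypothetical sketch; the record fields are assumptions, not the application's schema):

```python
# Hypothetical sketch of the smart planting recommendation: from past records
# of variety, weather, soil, and yield, recommend the variety with the highest
# average yield under conditions matching the upcoming season.
from collections import defaultdict

def recommend_seed(records, weather, soil):
    totals = defaultdict(lambda: [0.0, 0])  # variety -> [yield sum, count]
    for r in records:
        if r["weather"] == weather and r["soil"] == soil:
            totals[r["variety"]][0] += r["yield_kg"]
            totals[r["variety"]][1] += 1
    if not totals:
        return None  # no matching experience yet
    return max(totals, key=lambda v: totals[v][0] / totals[v][1])

records = [
    {"variety": "A", "weather": "wet", "soil": "fertile", "yield_kg": 80},
    {"variety": "B", "weather": "wet", "soil": "fertile", "yield_kg": 120},
    {"variety": "A", "weather": "dry", "soil": "fertile", "yield_kg": 150},
]
print(recommend_seed(records, "wet", "fertile"))  # variety B averaged higher
```

A production system would weight recent seasons more heavily and handle partial condition matches, but the averaging idea is the same.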
Algorithm: The products most widely harvested by farmers
Data: harvest:Array, product:Array
read(product)
for i = 1 to product.length
    read(harvest)
    for j = 1 to harvest.length
        if product[i][name] = harvest[j][name]
            product[i][count_trx] = product[i][count_trx] + 1
        endif
    endfor
endfor
// selection sort of product by count_trx, most harvested first
for k = 1 to product.length - 1
    max = k
    for l = k + 1 to product.length
        if product[l][count_trx] > product[max][count_trx]
            max = l
        endif
    endfor
    if k != max
        temp = product[k]
        product[k] = product[max]
        product[max] = temp
    endif
endfor
output(product)
Fig. 4 Smart system for planting system, inventory, marketing, and sales
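The counting and sorting logic of the Fig. 4 algorithm can be expressed more compactly in Python (a sketch under the assumption that harvest records reference products by name):

```python
# Sketch of the Fig. 4 algorithm: count how often each product appears in the
# harvest records, then return products sorted by that count in descending
# order, so the most widely harvested products come first.
def most_harvested(products, harvests):
    counts = {p: 0 for p in products}
    for h in harvests:
        if h in counts:
            counts[h] += 1
    return sorted(counts, key=counts.get, reverse=True)

products = ["spinach", "carrot", "rice"]
harvests = ["rice", "spinach", "rice", "carrot", "rice"]
print(most_harvested(products, harvests))  # rice (3) ranks first
```

Python's built-in `sorted` replaces the hand-written selection sort of the pseudocode; the result is the same ranking.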
4 Conclusion Based on the features and intelligent system of the Organic Garden application, it can be concluded that:
1. The application involves three groups of users: the BAI administrator as the manager of this application, with six business process features that can be run through this account; the Farmer Partner as a collection of farmer groups; and users as distributors who purchase organically grown commodities.
2. Farmer Partner can accommodate Berdaya Agri Indonesia's management process for planting, harvesting, and depositing organic cultivation products.
3. Farmer Partner can accommodate documentation of the certification process to obtain organic certificates and video documentation of planting to increase customer confidence.
4. The application's intelligent system enables BAI administrators and Farmer Partners to monitor business processes, including organic planting, harvesting, inventory, marketing, and sales.
References
1. Cretan A, Nica C, Coutinho C, Jardim-Goncalves R, Bratu B (2021) An intelligent system to ensure interoperability for the dairy farm business model. Future Internet 13:1–24
2. Nath S, Debnath D, Sarkar P, Biswas A (2018) Design of intelligent system in agriculture using data mining. In: International conference on computational intelligence & IoT (ICCIIoT), pp 631–637. Elsevier, India
3. Badan Pusat Statistik, http://www.BPS.go.id, last accessed 8 Oct 2022
4. Dymond A, Oestmann S (2018) ICT regulation. In: Information for Development (infoDev) and International Telecommunication Union, Geneva
5. Chatuphale P, Armstrong L (2018) Indian mobile agricultural services using big data and internet of things (IoT). In: Abraham A, Muhuri P, Muda A, Gandhi N (eds) Intelligent systems design and applications. ISDA 2017. Advances in Intelligent Systems and Computing, vol 736, pp 1028–1037. Springer, Cham
6. Delima R, Santoso HB, Purwadi J (2016) Study of agricultural applications developed in several Asian and African countries. In: National seminar on information technology applications SNATi 2016, pp 19–26. Yogyakarta
7. Thereza N, Saputra IPA, Husin Z (2021) Design and build a geographic information system (GIS) as the development of an IoT-based monitoring system for plantation areas. J Tekno Kompak 15:40–54
8. Anggraini N et al (2020) Digital marketing of agricultural products in Sukawaringin Village, Bangunrejo District, Central Lampung Regency. J Pengabdi Nas 1:36–45
9. Rusmardiana A (2015) Analysis of the formation of consumer confidence levels in digital printing businesses "CV.ABC". JABE J Appl Bus Econ 1:220–227
10. Cahyani HH (2021) Analysis of the influence of digital promotion, trust, and ease of shopping through marketplaces on consumptive behavior of generation Y moderated by religiosity variable. J Ilm Mhs 9:1–16
11. Mubarak MF, Petraite M (2020) Industry 4.0 technologies, digital trust and technological orientation: what matters in open innovation? Technol Forecast Soc Change 161:120–131
12. Qureshi MRN, Singh RK, Hasan MA (2017) Decision support model to select crop pattern for sustainable agricultural practices using fuzzy MCDM. Environ Dev Sustain 20:641–659
Projection Matrix Optimization for Compressive Sensing with Consideration of Cosparse Representation Error Endra Oey
Abstract The compressive sensing (CS) technique provides a dimensional reduction of signal acquisition by multiplying a projection matrix with the signal. Conventionally, projection matrix optimization was proposed for the sparse synthesis model-based (SSMB) setting, in which a signal can be generated from a linear combination of synthesis dictionary matrix columns with a few coefficients. The cosparse analysis model-based (CAMB) setting offers an alternative view, where the cosparse coefficients are generated from the multiplication between an analysis dictionary (operator) and the signal. While projection matrix optimization methods for SSMB-CS have been widely proposed, CAMB-CS has received little attention. This paper considers the amplified cosparse representation error (CSRE) and the relative CSRE, in addition to mutual coherence, in the projection matrix optimization problem for CAMB-CS. The optimized projection matrix was obtained by solving the optimization problem using the nonlinear conjugate gradient method and an alternating minimization algorithm. Experimental results with real images from a test image database show that the introduced method outperforms existing ones in terms of recovered signal quality. Keywords Compressive sensing · Projection matrix optimization · Cosparse
1 Introduction Compressive sensing (CS) is a promising technique for signal acquisition that offers sensing and compression simultaneously; the mathematical frameworks of CS were introduced in 2006 by the authors in [1–4]. Since then, there have been many works on CS theory development, and its applications in various fields have been carried out. The sparse synthesis model-based (SSMB) framework, in which a signal can be synthesized from a E. Oey (B) Department of Computer Engineering, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_84
synthesis dictionary matrix with a few coefficients [5], initially underlay the work of CS systems. The counterpart of SSMB is the cosparse analysis model-based (CAMB) framework, which provides a sparse representation as the product of an analysis dictionary (operator) matrix and the signal [6]. CAMB, as an alternative framework for CS, has begun to be widely applied because of its superior performance compared to SSMB [7, 8]. CS in both SSMB and CAMB is performed by multiplying a projection matrix with the signal. Incoherence of the projection matrix and the synthesis dictionary is required in SSMB-CS so that the signal can be recovered properly. Random matrices such as Gaussian, Bernoulli, or random partial Fourier matrices have feasible coherence properties, so they are usually used as projection matrices [4]; however, these random matrices can be further optimized. While plentiful works on projection matrix optimization for SSMB-CS have been provided, such as [9–12], CAMB-CS has not received much attention. Minimizing the average mutual coherence of the equivalent dictionary is the common approach to projection matrix optimization in SSMB-CS, where the equivalent dictionary is the multiplication between the projection matrix and the synthesis dictionary. The recovered signal accuracy of SSMB-CS has been improved by using an optimized projection matrix compared to a random matrix, as shown in [9, 10]. Natural signals such as real images are not exactly sparse, so they have a sparse representation error (SRE). The average mutual coherence of the equivalent dictionary was effectively reduced by projection matrix optimization algorithms as in [9, 10]; however, the SRE of the signal may be amplified by the optimized projection matrix, which reduces the overall performance of SSMB-CS.
The amplified SRE is the multiplication between the projection matrix and the SRE; in [11] its energy was taken into account in the projection matrix optimization, showing a performance improvement of SSMB-CS for real image applications. The SRE can be replaced by an identity matrix for a large set of training images, as shown in [12], so the computation is more efficient because the training data is no longer needed. A similar strategy was used in [13] by introducing the equivalent operator as the multiplication between the projection matrix and the transpose of the operator. The method in [11] was adopted by Oey et al. [13] to optimize the projection matrix of CAMB-CS but without consideration of the cosparse representation error (CSRE). The CSRE is similar to the SRE because natural signals are not exactly cosparse. The amplified CSRE is defined as the multiplication between the projection matrix and the CSRE. However, as will be shown later, considering only the amplified CSRE energy in the projection matrix optimization fails to improve the performance of CAMB-CS. The relative amplified CSRE energy is defined as the ratio of the amplified CSRE energy to the amplified cosparse signal energy. The proposed method in this paper takes into account the relative amplified CSRE energy along with the amplified CSRE energy in the projection matrix optimization problem of CAMB-CS. Through extensive numerical experiments, it is shown that the proposed method for optimizing the projection matrix outperforms previous ones.
2 Previous Projection Matrix Optimization Methods

The dimensional reduction of a signal $x \in \mathbb{R}^{N \times 1}$ in SSMB-CS is performed by multiplying it with a projection matrix $\phi \in \mathbb{R}^{M \times N}$, producing Eq. (1):

$y = \phi x$  (1)

where $y \in \mathbb{R}^{M \times 1}$ with $M < N$. The signal $x$ is formed from a synthesis dictionary $\psi \in \mathbb{R}^{N \times K}$ with $K \geq N$ and can be written as Eq. (2):

$x = \psi \theta + e_s$  (2)

where $\theta \in \mathbb{R}^{K \times 1}$ and $e_s \in \mathbb{R}^{N \times 1}$ are the sparse coefficients and the SRE of signal $x$. If $\|\theta\|_0 = S$, where $\|\theta\|_0$ denotes the number of non-zero values of $\theta$, and $e_s = 0$, then the signal $x$ is said to be exactly $S$-sparse in $\psi$. Inserting Eq. (2) into Eq. (1), Eq. (3) can replace Eq. (1), where $D = \phi\psi \in \mathbb{R}^{M \times K}$ and $\phi e_s = \sigma_s \in \mathbb{R}^{M \times 1}$ are the equivalent dictionary and the amplified SRE:

$y = \phi\psi\theta + \phi e_s = D\theta + \phi e_s$  (3)

The recovered signal $\hat{x}$ can be obtained from $y$ by solving the constrained problem in Eq. (4):

$\hat{\theta} = \arg\min_{\theta} \|\theta\|_0 \;\text{s.t.}\; \|y - D\theta\|_2 \leq \|\sigma_s\|_2, \quad \hat{x} = \psi\hat{\theta}$  (4)

The $\ell_0$-minimization problem in Eq. (4) can be replaced by the $\ell_1$-minimization problem in Eq. (5), which is easier to solve if $D$ meets certain conditions [1, 2, 14]:

$\hat{\theta} = \arg\min_{\theta} \|\theta\|_1 \;\text{s.t.}\; \|y - D\theta\|_2 \leq \|\sigma_s\|_2, \quad \hat{x} = \psi\hat{\theta}$  (5)

Greedy pursuit algorithms such as orthogonal matching pursuit (OMP) also provide an approximate solution to the problem in Eq. (4) [15]. The quality of the signal $\hat{x}$ recovered from $y$ is clearly affected by the amplified SRE $\phi e_s$, as can be seen in Eqs. (3), (4), and (5). In CAMB-CS, the signal $x \in \mathbb{R}^{N \times 1}$ can be modelled as Eq. (6), where $z \in \mathbb{R}^{N \times 1}$ and $e_{cs} \in \mathbb{R}^{N \times 1}$ are the cosparse signal and the CSRE of signal $x$:

$x = z + e_{cs}$  (6)

Inserting Eq. (6) into Eq. (1), Eq. (7) can replace Eq. (1), where $\phi e_{cs} = \sigma_{cs} \in \mathbb{R}^{M \times 1}$ is the amplified CSRE. The analysis coefficients $\alpha \in \mathbb{R}^{K \times 1}$ are the product of the operator $\Omega \in \mathbb{R}^{K \times N}$ and the cosparse signal $z \in \mathbb{R}^{N \times 1}$, so $\alpha = \Omega z$:

$y = \phi x = \phi z + \phi e_{cs}$  (7)
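Equations (1)–(5) can be illustrated with a toy NumPy experiment (a sketch, not the paper's code): $\psi$ is taken as the identity so the signal is exactly $S$-sparse with $e_s = 0$, $\phi$ is a random Gaussian projection, and a minimal OMP loop approximates the problem in Eq. (4).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 64, 32, 4

# Exactly S-sparse signal (taking psi = I, so theta = x and e_s = 0).
x = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
x[support] = rng.standard_normal(S)

phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random Gaussian projection
y = phi @ x                                     # Eq. (1): y = phi x

# Minimal orthogonal matching pursuit (OMP) loop, cf. [15]: greedily pick
# the column most correlated with the residual, then re-fit by least squares.
residual, idx = y.copy(), []
for _ in range(S):
    idx.append(int(np.argmax(np.abs(phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(phi[:, idx], y, rcond=None)
    residual = y - phi[:, idx] @ coef

x_hat = np.zeros(N)
x_hat[idx] = coef
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # relative recovery error
```

With $M = 32$ measurements and $S = 4$, OMP typically recovers the support exactly; how the choice of $\phi$ affects this recovery is what the optimization methods in this section address.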
If K > N , Ω is called overcomplete operator. The index rows of Ω that are orthogonal to the signal z is the co-support Ʌ ⊆ {1... K } of z where ΩɅ z = 0 and |Ʌ| = C is the co-sparsity of z. The signal xˆ can be recovered from y by solving the constrained problem in Eq. (8). xˆ = arg min∥ y − ϕx∥ 22 s.t. ΩɅ x = 0 & ∥α∥0 = K − C x,Ʌ
(8)
where ∥α∥0 denotes the number of non-zero values of α. CAMB-CS used the analysis version of SSMB-CS sparse recovery algorithms to solve problem in Eq. 8 [16]. The mutual coherence property is usually used in the projection matrix optimization method because it can be evaluated and manipulated easily [9]. Let D = [d1 d2 ... dK ] ∈ N ×K , gs−i j Δ diT dj as the (i, j ) element of the ) ( −1/2 −1/2 −1/2 Gram matrix of D and Ds Δ diag gs−11 ... gs−kk ... gs−K K is a diagonal matrix. The normalized equivalent dictionary is D = DDs and the normalized Gram matrix T of D is Gs = D D such that g s−kk = 1, ∀k. The mutual coherence of D is defined as Eq. (9). | | μ(D) = max|g s−i j |
(9)
i/= j
μt (D) in Eq. (10) is t—averaged mutual coherence as a parameter to optimize projection matrix [9]. ∑ μt (D) =
| | )| (| |g s−i j | ≥ t .|g s−i j | | ) (| ∑ |g s−i j | ≥ t i/= j,1≤i, j≤K
i/= j,1≤i, j≤K
(10)
| ) (| The indicator function |g s−i j | ≥ t = 1 if the condition is true and otherwise is zero and 0 ≤ t < 1. The projection matrix optimization problem in SSMB-CS is carried out by solving the optimization problem in Eq. (11). min
Gt ∈St ,ϕ
[ ] ∥2 ∥ I(ϕ, Gt ) = ζ1 ∥Gt − ψ T ϕT ϕψ ∥ F +ζ2 ∥ϕ(X − ψΘ)∥2F
(11)
St is a class of Gram matrices that has desired properties, X ∈ N ×P is training signals, Θ ∈ K ×P is sparse coefficients of training signals and P is number of training signals that is used to construct the synthesis dictionary ψ ∈ N ×K , SRE matrix is Es Δ X − ψΘ, and ∥∥ F denotes the Frobenius norm. The projection matrix optimization in [9] was based on iterative shrinkage algorithm to solve Eq. (11) with ζ1 = 1 and ζ2 = 0. The projection matrix optimization in [10–12] was based on alternating minimization with ζ1 + ζ2 = 1, and ζ2 = 0. The equivalent operator O ∈ M×K which is defined as O = ϕΩT was introduced of O is Gc = O)T O where gc−i j is the in [13] for CAMB-CS. The Gram matrix ( −1/2
−1/2
−1/2
(i, j) element of Gc and Oc Δ diag gc−11 ... gc−kk ... gc−K K is a diagonal matrix.
Projection Matrix Optimization for Compressive Sensing …
973
The normalized equivalent operator is Ō ≜ O O_c and the normalized Gram matrix is G_c = Ōᵀ Ō, such that ḡ_(c-kk) = 1, ∀k. Eq. (10) can be used to calculate μ_t(O) by replacing ḡ_(s-ij) with ḡ_(c-ij).
3 Proposed Method

A new projection matrix optimization method is proposed for CAMB-CS by taking into account the relative amplified CSRE energy and the amplified CSRE energy in the optimization problem of Eq. (12):

\[
\min_{G_t \in S_t,\, \Phi} I(\Phi, G_t) = \zeta_1 \left\| G_t - \Omega \Phi^T \Phi \Omega^T \right\|_F^2 + \zeta_2 \left\| \Phi E_{cs} \right\|_F^2 + \zeta_3 \frac{\left\| \Phi E_{cs} \right\|_F^2}{\left\| \Phi Z \right\|_F^2} \tag{12}
\]
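Evaluating the design objective of Eq. (12) for given matrices is straightforward. The sketch below uses random placeholder matrices rather than a learned Ω or optimized Φ, and the function name `camb_cs_objective` is ours, not the paper's.

```python
import numpy as np

def camb_cs_objective(phi, G_t, Omega, E_cs, Z, z1=0.5, z2=0.25, z3=0.25):
    """Evaluate the CAMB-CS design objective of Eq. (12).

    phi: M x N projection matrix, Omega: K x N analysis operator,
    E_cs: N x P cosparse representation error, Z: N x P cosparse signals.
    """
    gram_term = np.linalg.norm(G_t - Omega @ phi.T @ phi @ Omega.T, 'fro') ** 2
    csre_energy = np.linalg.norm(phi @ E_cs, 'fro') ** 2          # amplified CSRE energy
    rel_csre = csre_energy / np.linalg.norm(phi @ Z, 'fro') ** 2  # relative amplified CSRE energy
    return z1 * gram_term + z2 * csre_energy + z3 * rel_csre

# Random placeholders with the paper's dimensions (M = 20, N = 64, K = 96).
rng = np.random.default_rng(1)
M, N, K, P = 20, 64, 96, 100
phi = rng.standard_normal((M, N))
Omega = rng.standard_normal((K, N))
Z = rng.standard_normal((N, P))
E_cs = 0.01 * rng.standard_normal((N, P))      # small cosparse representation error
val = camb_cs_objective(phi, np.eye(K), Omega, E_cs, Z)
```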
The training signals X ∈ ℝ^(N×P) are used to build the operator Ω ∈ ℝ^(K×N); E_cs = X − Z is the CSRE matrix, where Z ∈ ℝ^(N×P) contains the cosparse signals obtained from X by cosparse coding [17]. The proposed method uses the alternating minimization algorithm to solve Eq. (12) with ζ1, ζ2, ζ3 ≠ 0 and ζ1 + ζ2 + ζ3 = 1. The method in [11] is adopted to update the target Gram matrix G_t, and the nonlinear conjugate gradient (NCG) method [18] is employed to find the optimized projection matrix Φ, with an extension to the matrix case.

Sets of non-overlapping 8 × 8 patches were obtained by randomly extracting 8 patches from each of the 40,000 training images in the LabelMe training data set [19, 20]. Each 8 × 8 patch was reshaped as a 64 × 1 vector to generate the training signals X ∈ ℝ^(64×320000), which were used to build the synthesis dictionary Ψ with the K-SVD algorithm [21] and the operator Ω with the algorithm in [22], with P = 320000, N = 64, K = 96, S = 4, and C = 64 − 4 = 60. The learned operator Ω was used to find the cosparse signals Z from X with the backward greedy algorithm [17], and the CSRE matrix was then computed as E_cs = X − Z. Define Z_r and E_(cs-r) as diagonal matrices with diagonal entries z̄_n and ē_(cs-n), the averaged values of the nth rows of Z and E_cs, respectively. The CAMB-CS-Proposed algorithm reduces the computation cost by replacing Z and E_cs with Z_r and E_(cs-r), respectively.

SSMB-CS-RG, SSMB-CS-Elad, and SSMB-CS-LZYB denote SSMB-CS using a random Gaussian projection matrix and the optimization algorithms in [9] and [10], respectively. SSMB-CS-BLH-HZ uses the algorithm in [11], replacing E_s with I_N to solve Eq. (11) efficiently. CAMB-CS-RG and CAMB-CS-1 denote CAMB-CS using a random Gaussian projection matrix and the optimization algorithm in [13], respectively. CAMB-CS-2 adapts the algorithm in [11], replacing E_cs with I_N to solve Eq. (12) with ζ3 = 0.
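The training-signal construction described above (random non-overlapping 8 × 8 patches reshaped into 64 × 1 columns of X) can be sketched as follows; the toy 32 × 32 images below stand in for the LabelMe training images, and the function name is ours.

```python
import numpy as np

def patches_to_signals(images, patch=8, per_image=8, seed=0):
    """Extract `per_image` random non-overlapping patch x patch patches from
    each image and stack them as column vectors, mirroring the construction of X."""
    rng = np.random.default_rng(seed)
    cols = []
    for img in images:
        rows = img.shape[0] // patch
        colsn = img.shape[1] // patch
        # Distinct positions on the patch grid, so patches cannot overlap.
        idx = rng.choice(rows * colsn, size=per_image, replace=False)
        for k in idx:
            r, c = divmod(k, colsn)
            p = img[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            cols.append(p.reshape(-1))          # 8 x 8 patch -> 64-vector
    return np.stack(cols, axis=1)               # N x P training-signal matrix

# Three toy 32 x 32 "images"; the real X is 64 x 320000 from 40,000 images.
imgs = [np.random.default_rng(i).random((32, 32)) for i in range(3)]
X = patches_to_signals(imgs)                     # here 64 x 24 (3 images x 8 patches)
```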
The NCG method parameters σ = 0.9, δ = 0.0001, λ = 0.65, η = 0.65 and the stopping criteria ε_G = 10⁻³ and ε_NCG = 10⁻⁵ were used in the CAMB-CS-Proposed algorithm.
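The projection matrix update itself is solved here with NCG [18]; a plain gradient-descent sketch on the Gram-matching term of Eq. (12) shows the structure of such an update. The step size and iteration count below are illustrative, not the σ, δ, λ, η settings above.

```python
import numpy as np

def update_projection(phi, G_t, Omega, steps=200, lr=1e-4):
    """Decrease ||G_t - Omega phi^T phi Omega^T||_F^2 by gradient descent.
    (The paper uses nonlinear conjugate gradient; this is a plain-GD sketch.)"""
    for _ in range(steps):
        O = phi @ Omega.T                 # equivalent operator, M x K
        E = G_t - O.T @ O                 # Gram mismatch (symmetric)
        grad = -4.0 * O @ E @ Omega       # gradient of the Frobenius term w.r.t. phi
        phi = phi - lr * grad
    return phi

rng = np.random.default_rng(2)
M, N, K = 20, 64, 96
Omega = rng.standard_normal((K, N)) / np.sqrt(N)
phi0 = rng.standard_normal((M, N)) / np.sqrt(N)   # random Gaussian initialization
G_t = np.eye(K)                                   # identity as a simple target Gram
f = lambda p: np.linalg.norm(G_t - Omega @ p.T @ p @ Omega.T, 'fro') ** 2
phi1 = update_projection(phi0, G_t, Omega)
```

With a sufficiently small step size the objective value f(phi1) ends below f(phi0); NCG reaches the same stationary points with far fewer iterations.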
974
E. Oey
4 Results and Discussion

SSMB-CS-RG and CAMB-CS-RG used the same random Gaussian matrix Φ⁽⁰⁾ ∈ ℝ^(M×64), which was also used as the initial projection matrix in all optimization algorithms except SSMB-CS-LZYB, which does not need Φ⁽⁰⁾. SSMB-CS-Elad used the relative threshold t_rel = 15% and γ = 0.75, while t = μ_B was used in the remaining optimization algorithms. SSMB-CS-BLH-HZ and CAMB-CS-2 used ζ1 = 0.5 and ζ2 = 0.5. The CAMB-CS-Proposed algorithm used the parameters mentioned previously with ζ1 = 0.5, ζ2 = 0.25, and ζ3 = 0.25. Figure 1 depicts the histograms of the absolute off-diagonal elements of G_s and G_c for each algorithm with M = 20. The average mutual coherence μavg, calculated using Eq. (10) with t = 0.2, is also shown for all CS systems. Two types of test signals were used to compare the performance of the proposed projection matrix optimization method against the previous ones. For the first type, p non-overlapping 8 × 8 patches were extracted randomly from each of the 10,000 test images in the LabelMe test data set [19, 20]; each patch was reshaped as a 64 × 1 vector and the vectors were arranged to form Xtest−1 ∈ ℝ^(64×L), where L = 10000p. For the second type, all of the non-overlapping 8 × 8 patches were extracted from a standard test image; each patch was reshaped as a 64 × 1 vector and the vectors were arranged to form Xtest−2 ∈ ℝ^(64×P), where P is the number of patches in the standard test image.
Fig. 1 μavg and histograms of the absolute off-diagonal elements of Gs and Gc
The CS operation was implemented as Ytest = ΦXtest for each of the projection matrices mentioned earlier. The recovered test signal X̂test was obtained by using OMP [15] with s = 4 for SSMB-CS and Greedy Analysis Pursuit (GAP) [6] with c = 64 − 4 for CAMB-CS. The performance of each projection matrix was measured by the Peak Signal-to-Noise Ratio in decibels (PSNR (dB)) defined in Eq. (13), with N = 64 and the maximum possible value of a pixel normalized from 2ᵇ − 1 to 1, where b is the number of bits per pixel.

\[
\mathrm{PSNR\ (dB)} = 10 \log_{10} \left( \frac{1}{\dfrac{1}{N \times L} \left\| \hat{X}_{test} - X_{test} \right\|_F^2} \right) \tag{13}
\]
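With pixel values normalized to [0, 1], Eq. (13) reduces to 10·log₁₀ of the inverse per-entry mean squared error. A minimal sketch, with a noisy copy of random data standing in for an actual OMP/GAP reconstruction:

```python
import numpy as np

def psnr_db(x_hat, x, n=64):
    """PSNR of Eq. (13): pixels normalized to [0, 1], N = 64, L = number of
    test vectors, so the denominator is the per-entry mean squared error."""
    L = x.shape[1]
    mse = np.linalg.norm(x_hat - x, 'fro') ** 2 / (n * L)
    return 10.0 * np.log10(1.0 / mse)

x = np.random.default_rng(3).random((64, 100))                     # "test signals"
x_hat = x + 0.01 * np.random.default_rng(4).standard_normal(x.shape)  # "recovery"
val = psnr_db(x_hat, x)   # additive noise with std 0.01 gives roughly 40 dB
```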
The amplified SRE energy, amplified CSRE energy, relative amplified SRE energy, and relative amplified CSRE energy are denoted by ∥ΦE_s∥²_F, ∥ΦE_cs∥²_F, ∥ΦE_s∥²_F / ∥ΦΨΘ∥²_F, and ∥ΦE_cs∥²_F / ∥ΦZ∥²_F, respectively. Figures 2 and 3 show the performance comparison of all projection matrices with M = 20 for Xtest−1, with the number of patches p = 1 to 8, 16, 32, and 64 per test image. Figure 1 shows that all optimization algorithms provide a smaller μavg than the random Gaussian matrix, so the PSNR of the recovered test signal is expected to increase. This is confirmed by Fig. 2, where all of them except CAMB-CS-2 provide a higher PSNR than the random Gaussian matrix. Figures 1 and 3 show that even though CAMB-CS-2 reduces μavg and has the smallest amplified CSRE energy, its relative amplified CSRE energy increases significantly, which causes the PSNR of CAMB-CS-2 to be much smaller than that of the random Gaussian matrix. Figure 2 also shows that even CAMB-CS-RG outperforms all of the SSMB-CS variants, because its relative amplified CSRE energy is smaller than the relative amplified SRE energy of all SSMB-CS variants, as shown in Fig. 3b.
Fig. 2 Comparison of PSNR (dB) for Xtest−1 with M = 20 as a function of p
Fig. 3 Comparison of a amplified SRE/CSRE energy, b relative amplified SRE/CSRE energy for Xtest−1 with M = 20 as a function of p
CAMB-CS-1 and CAMB-CS-Proposed have smaller μavg and relative amplified CSRE energy than CAMB-CS-RG, so they provide higher PSNR. CAMB-CS-Proposed outperforms CAMB-CS-1 because it provides a smaller amplified CSRE energy, while their μavg and relative amplified CSRE energy differ only slightly, as shown in Figs. 1, 2 and 3. The PSNR changes only slightly with p, as shown in Fig. 2, because the relative amplified SRE and CSRE energies tend to be constant with respect to p, as shown in Fig. 3b. Table 1 compares the reconstruction accuracy of the SSMB-CS and CAMB-CS projection matrices with M = 20, using 10 standard test images to obtain Xtest−2 from each image. The results are similar to those for Xtest−1: CAMB-CS-Proposed outperformed the others.
5 Conclusion A novel method for projection matrix optimization of CAMB-CS has been presented in this paper. The main contributions are introducing the amplified CSRE energy and the relative amplified CSRE energy parameter and taking them into account in the projection matrix optimization problem of CAMB-CS. The alternating minimization algorithm and nonlinear conjugate gradient method were used to solve the optimization problem. The experiment results have been presented and show the proposed method outperformed the existing ones in term of recovered signal accuracy.
Table 1 Reconstruction accuracy (PSNR, dB) for Xtest−2 and M = 20

| Test image | SSMB-CS-RG | SSMB-CS-Elad | SSMB-CS-LZYB | SSMB-CS-BLH-HZ | CAMB-CS-RG | CAMB-CS-1 | CAMB-CS-2 | CAMB-CS-Proposed |
|---|---|---|---|---|---|---|---|---|
| Cable car | 22.66 | 23.45 | 23.99 | 24.64 | 28.42 | 31.14 | 4.48 | 31.64 |
| Flower | 27.05 | 27.80 | 28.56 | 29.22 | 34.71 | 37.55 | 6.85 | 38.17 |
| Fruits | 27.74 | 27.85 | 28.27 | 29.10 | 33.15 | 35.71 | 7.78 | 36.19 |
| Cameraman | 25.25 | 26.44 | 26.48 | 27.28 | 32.54 | 35.28 | 5.95 | 36.02 |
| House | 23.35 | 23.81 | 23.90 | 25.11 | 28.13 | 30.58 | 2.90 | 31.03 |
| Jet plane | 24.16 | 24.82 | 24.95 | 26.12 | 30.18 | 32.72 | 2.78 | 33.36 |
| Lena | 26.22 | 26.62 | 26.90 | 28.08 | 30.88 | 33.43 | 4.34 | 33.83 |
| Peppers | 25.36 | 25.81 | 25.91 | 27.10 | 30.10 | 33.01 | 2.50 | 33.35 |
| Sailboats | 26.05 | 26.42 | 26.62 | 28.06 | 30.08 | 32.66 | 4.12 | 32.89 |
| Cockatoo | 27.57 | 28.16 | 28.43 | 29.48 | 31.51 | 33.93 | 5.96 | 34.25 |
| Average | 25.54 | 26.12 | 26.40 | 27.42 | 30.97 | 33.60 | 4.77 | 34.07 |
References

1. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52:1289–1306
2. Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52:489–509
3. Candès EJ, Romberg JK, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59:1207–1223
4. Candès EJ, Tao T (2006) Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans Inf Theory 52:5406–5425
5. Bruckstein AM, Donoho DL, Elad M (2009) From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev 51:34–81
6. Nam S, Davies ME, Elad M, Gribonval R (2013) The cosparse analysis model and algorithms. Appl Comput Harmon Anal 34:30–56
7. Endra O, Gunawan D (2015) Comparison of synthesis-based and analysis-based compressive sensing. In: 2015 International conference on quality in research (QiR), IEEE, pp 167–170
8. Ravishankar S, Bresler Y (2016) Data-driven learning of a union of sparsifying transforms model for blind compressed sensing. IEEE Trans Comput Imaging 2:294–309
9. Elad M (2007) Optimized projections for compressed sensing. IEEE Trans Signal Process 55:5695–5702
10. Li G, Zhu Z, Yang D, Chang L, Bai H (2013) On projection matrix optimization for compressive sensing systems. IEEE Trans Signal Process 61:2887–2898
11. Bai H, Li S, He X (2016) Sensing matrix optimization based on equiangular tight frames with consideration of sparse representation error. IEEE Trans Multimedia 18:2040–2053
12. Hong T, Zhu Z (2018) An efficient method for robust projection matrix design. Signal Process 143:200–210
13. Oey E, Gunawan D, Sudiana D (2018) Projection matrix design for co-sparse analysis model based compressive sensing. In: MATEC web of conferences, EDP Sciences, p 1061
14. Candès E, Tao T (2005) Decoding by linear programming. IEEE Trans Inf Theory 51:4203–4215
15. Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 53:4655–4666
16. Giryes R (2016) A greedy algorithm for the analysis transform domain. Neurocomputing 173:278–289
17. Rubinstein R, Peleg T, Elad M (2013) Analysis K-SVD: a dictionary-learning algorithm for the analysis sparse model. IEEE Trans Signal Process 61:661–677
18. Huang Y, Liu C (2017) Dai–Kou type conjugate gradient methods with a line search only using gradient. J Inequal Appl 2017:66
19. Russell BC, Torralba A, Murphy KP, Freeman WT (2008) LabelMe: a database and web-based tool for image annotation. Int J Comput Vis 77:157–173
20. Uetz R, Behnke S (2009) Large-scale object recognition with CUDA-accelerated hierarchical neural networks. In: 2009 IEEE international conference on intelligent computing and intelligent systems, IEEE, pp 536–541
21. Aharon M, Elad M, Bruckstein A (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 54:4311–4322
22. Hou B, Zhu Z, Li G, Yu A (2016) An efficient algorithm for overcomplete sparsifying transform learning with signal denoising. Math Probl Eng 2016
Detection of Type 2 Diabetes Mellitus with Deep Learning

Mukul Saklani, Mahsa Razavi, and Amr Elchouemi

Abstract Prediction of the likelihood of diabetes has received considerable attention in recent years. Research has focused on deep learning methods, but it remains unclear which is most appropriate for this purpose. The focus of this research is to provide a detailed summary of the datasets, methods, and algorithms used as a basis for identifying the most effective framework for Type 2 diabetes mellitus prognosis, validated through a k-fold cross-validation approach. This paper contributes to the body of knowledge on diabetes mellitus by providing clarity on the feasibility of methods proposed by researchers.

Keywords Diabetes · Mellitus · Deep learning · Machine learning · Dataset
1 Introduction

Diabetes Mellitus (DM) is one of the ten most common diseases in the world, according to a World Health Organization (WHO) study, and it has a high mortality rate. An increasing number of people are affected by diabetes, and it causes many deaths every year. Many people are discouraged from seeking appropriate treatment early because they are unaware of the seriousness of their health condition. Early diagnosis is critical, because late diagnosis frequently leads to serious health problems and many deaths each year. As a result, there is a need for a system that can predict diabetes with high accuracy at an early stage.

M. Saklani · M. Razavi (B), Western Sydney University, Penrith, Australia; e-mail: [email protected]; M. Saklani e-mail: [email protected]; A. Elchouemi, American Public University System, Charles Town, USA

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_85
Diabetes mellitus is a chronic disease characterized by an uncontrollable rise in blood sugar levels, caused by abnormal beta cell function in the pancreas. In addition to affecting the entire body, it can cause heart disease, hypertension, kidney problems, pancreatic issues, nerve damage, foot problems, ketoacidosis, headaches, and eye and other vision-related issues. The disease is promoted by several factors, including a lack of physical activity, smoking, high cholesterol, and high blood pressure, and it affects almost every age group, from children to the elderly [1]. Data analysis, machine intelligence, and other cognitive algorithms have been used in health care to predict a wide range of diseases. The use of artificial neural networks (ANNs) for data-driven applications, particularly in the healthcare sector, is causing a revolution in medical research. The scope of the research is broad, encompassing diagnosis, image processing, decision support systems (DSS), and disease prediction. The goal of this study is to identify the variables or features in the diabetes dataset that are responsible for predicting whether a patient is diabetic [2]. Although various machine learning algorithms and classifiers have predicted diabetes with up to 99.2% accuracy, the goal of this research is to analyze existing Deep Learning (DL) techniques and create a new framework by adding new features. This project fills the gap by adding a new feature to the best algorithm selected from the best 12 papers. The study is based on an analysis of existing DL frameworks to identify gaps in those frameworks and propose a new framework with potential changes to improve diabetes prediction.
2 Objective

The aim of the research is to analyze existing frameworks to find the gap and develop a new framework with new features that can predict the risk of diabetes at an early stage, which can help people and the healthcare sector. The analysis is based on research done in the last two years: 30 papers or articles closely related to the topic were extracted from the Western Sydney University library. After analyzing those 30 papers, the best 12 papers were selected for analysis to find the gaps and guide further research. The main aim of the research is to compare existing development and research and to extract important components to develop a new framework that guides future improvement. We asked: What datasets have been used? What factors may affect the accuracy of the system? Which methods and algorithms should be used for better prediction? What frameworks were used in the research?
3 Methodology

For the research, a total of 30 papers from the last two years with Q1 and Q2 rankings were extracted from the Western Sydney University library. After analyzing the 30 papers, 12 papers closely related to the topic were selected. The papers are both qualitative and quantitative and were selected based on topic and keywords. Based on those 12 papers, a component table was designed that includes factors such as input, process, and output; each factor has data attributes and their instances. A classification table was then derived from the component table, in which the data attributes are further classified and act as features. Finally, an evaluation table was designed based on the component and classification tables; it supports the gap analysis used to propose a new framework (Fig. 1).
Fig. 1 Methodology for research
4 Literature Review

According to the International Diabetes Federation, 382 million people across the world have diabetes. Diabetes has been termed a global threat and is estimated to increase by 48% by 2045, yet many people are unaware of the seriousness of the disease. Emerging technology and the advent of digitization make the health sector a rich source of data, and this data can be used to build automated systems for the prediction of diabetes using DL techniques. Nguyen et al. [3] proposed and implemented a DL model using the logistic loss function with stochastic gradient descent. Electronic health records from public hospitals for the United States population were used as the dataset; it consists of a total of 9948 patients, of whom 1904 were diagnosed with T2DM. The authors used the Synthetic Minority Oversampling Technique (SMOTE) to handle the class imbalance and a cross-validation technique to analyze performance. The research proposed a neural network model on the same dataset which improved the AUC (Area Under Curve) and specificity and increased the sensitivity for predicting diabetes mellitus compared with other machine learning algorithms. The research also highlighted automatic feature selection as a way to increase the performance of the existing model. Classification techniques have been used to detect diabetes at an early stage, and the paper clearly compared classification techniques such as SVM, Random Forest, KNN (K-Nearest Neighbor), Decision Tree, and Naïve Bayes with their accuracies. Naz and Ahuja [4] used the PIMA Indian dataset and designed a prediction model. Their research shows that the DL algorithm performs better than data mining algorithms on the same dataset.
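None of the reviewed models is reproduced here, but the evaluation loop they share can be made concrete. The sketch below runs k-fold cross-validation of a nearest-centroid classifier on synthetic two-class, 8-feature data of the PIMA style; the classifier, data, and seeds are illustrative stand-ins only.

```python
import numpy as np

def k_fold_accuracy(X, y, k=5, seed=0):
    """K-fold cross-validation of a nearest-centroid classifier -- a minimal
    stand-in for the deep models reviewed here."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        c0 = X[train][y[train] == 0].mean(axis=0)   # centroid of class 0
        c1 = X[train][y[train] == 1].mean(axis=0)   # centroid of class 1
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)                # assign nearer centroid
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Synthetic stand-in shaped like the 2000 x 8 diabetes matrix discussed below:
# 1316 "healthy" and 684 "diabetic" rows with separated feature means.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (1316, 8)), rng.normal(1.5, 1.0, (684, 8))])
y = np.concatenate([np.zeros(1316, dtype=int), np.ones(684, dtype=int)])
acc = k_fold_accuracy(X, y)
```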
The research clearly reported the results of all the classifiers, such as DL, DT, NB, and ANN, and found that DL performs best, with the highest prediction accuracy: the DL technique achieved an accuracy rate of 98.07%. The model can be used as a tool to predict the early onset of the disease, and the research states that including omics data could further improve the accuracy of the DL model. The research therefore covered all the classifiers needed to predict the disease and mentioned a way to improve accuracy; a real-time implementation could help millions of people. Different sets of algorithms and methods have been used across the articles, but it is still not clear which algorithm and method are effective for the prediction of diabetes. This is the gap deduced from the 12 articles, and the research aims to extract the main features from the current articles and propose a new framework to fill it. The diabetes dataset has 2000 instances and 9 columns. The binary outcome column has two classes taking values '0' or '1', where '0' (negative case) means the absence of diabetes and '1' (positive case) means its presence. The remaining 8 columns are real-valued attributes, so the dataset is a 2000 × 8 feature matrix. In the dataset, 1316 are healthy subjects and 684 are diabetic subjects. The dataset was generated from type 1 (DM1) diabetes
patients. DM1 generally occurs in children, but it can also appear in older people. In type 1 diabetes, subjects do not produce insulin, while type 2 subjects do not have enough insulin. The model then iterates on the features that have not been used so far and extracts the useful information. Haq et al. [5] used the same algorithm and proposed a flowchart for feature selection, which is very important when classifying testing and training data. AdaBoost and Random Forest techniques are also used for feature selection. Additionally, cross-validation methods such as holdout, LOSO, and K-fold are used to check the model's performance, and various metrics are evaluated, including accuracy, specificity, sensitivity, MCC, ROC-AUC, precision, recall, F1-score, and execution time. The metrics used in the research provide satisfactory experimental results. Compared with previously proposed methods, the proposed method performs better in terms of accuracy; additionally, implementing an embedded feature selection method would make the system more robust. Based on these findings, the proposed method appears to be the better one for detecting diabetes and could be improved with the stated method. A total of 50,000 records in the dataset were considered, with the information collected through a questionnaire. Exercise, sugar (glucose) level, drinking, nature of the job, gender, dietary habits, BMI, and other factors are among the information gathered. The features present in the dataset are very important for training the model, so they are extracted for pre-training, which is done by an RBM (Restricted Boltzmann Machine).
The final tuning is handled by a DBN and then forwarded to the validation process. Finally, diabetic complications such as nephropathy and retinopathy are predicted. Diabetes is one of the most hazardous diseases and, if not prevented, can lead to serious complications; the microvascular complications of diabetes are retinopathy, nephropathy, and neuropathy. The study in [6] proposed a DL prediction model to predict the risk of diabetes. To reduce the amount of work, an artificial neural network was developed: a recurrent neural network consisting of a sequence processing layer and a regression layer. Three models differ in how they remember the previous sequence. It was discovered that LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) outperform the regular neural network, with GRU performing best in terms of average root mean square error. The research of [7] used machine learning techniques and extracted 10 features from electronic medical records for the prediction of T2DM. The study used a support vector machine; among the leave-one-subject-out, holdout, and tenfold resampling strategies, tenfold showed the best performance. A machine learning algorithm using data from electronic medical records, along with the predictive model, helps support the diagnosis of T2DM.
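The evaluation metrics recurring across these studies (accuracy, specificity, sensitivity, precision, F1-score, MCC) all follow directly from confusion-matrix counts. A minimal sketch with made-up counts, not figures from any reviewed paper:

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "mcc": mcc}

m = classification_metrics(tp=90, tn=95, fp=5, fn=10)
# m["accuracy"] == 0.925, m["specificity"] == 0.95
```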
Omisore et al. [8] proposed a model that recommends diets tailored to specific needs based on a multimodal adaptive neuro-fuzzy inference model. 87.5% of the Pima Indians diabetes dataset and 13.5% of the Schorling diabetes dataset were used in training and revalidating the diagnosis model; both datasets are publicly available. The model was also applied retrospectively to a private dataset obtained from the Obafemi Awolowo University Teaching Hospital Complex, Ile-Ife, Nigeria, consisting of 14 female patients' records. Furthermore, the recommendation model uses users' diagnosis results and their eating formulae to determine daily food consumption and create a personalized food plan that matches a template designed by an expert. Both models were found to perform well: the multimodal model achieved accuracy rates of 83.8% and 79.2% for training and validation on the Pima (North American Indians) dataset, and 72.9% and 94.3% for prediction on the Schorling and private datasets, respectively. The research also presented 10 different models and compared their accuracy. Roy et al. [9] proposed a framework with two phases: the first phase identifies optimal imputation techniques, and the second uses a random sampling method. The study examined the impact of three data imputation methods on the performance of classification models; after balancing the imputed data with SMOTETomek and random oversampling in the second phase, ANN models were used to model the best-performing data. García-Ordás [10] used DL techniques and proposed a pipeline to predict diabetic patients' outcomes: data augmentation is done with variational autoencoders (VAE) and sparse autoencoders (SAE), and convolutional neural networks are used for classification.
The researchers reviewed the Pima Indians Diabetes Database, which contains information about pregnant women, glucose levels, blood pressure, and age. Using this popular dataset, the paper proposes methods based on DL and augmentation techniques for predicting diabetes. There are 768 examples in the dataset, with only 8 features per sample and 2 classes. The paper used a Variational Autoencoder (VAE) to enhance the dataset and a Sparse Autoencoder to increase the number of features. An IoT framework has also been developed which integrates the T2DM model. The authors chose a multi-objective optimization technique, which is more robust than a single objective, and an efficient classifier (weighted voting LRRFs) that optimizes more than one quality measure. The results of the investigation show this multi-objective, weighted voting ensemble method to be more effective and efficient than single classifiers and risk score systems, and to have superior learning capacity compared to its competitors using both inductive and transductive learning setups.
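Several of the pipelines above (e.g. Roy et al. [9]) impute missing values and rebalance the classes before training. The sketch below uses plain mean imputation and random oversampling as a simplified stand-in for SMOTETomek; the function name and toy data are ours, not from any reviewed study.

```python
import numpy as np

def impute_and_oversample(X, y, seed=0):
    """Mean-impute missing values (NaN), then randomly oversample the minority
    class until the two classes are balanced -- a simplified stand-in for the
    imputation + SMOTETomek / random-oversampling pipelines reviewed here."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = col_means[nan_c]              # per-column mean imputation
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    rng = np.random.default_rng(seed)
    extra = rng.choice(np.where(y == minority)[0], size=deficit, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

# Toy 4 x 2 data with two missing entries and a 3:1 class imbalance.
Xs = np.array([[1.0, np.nan], [2.0, 4.0], [3.0, 8.0], [np.nan, 6.0]])
ys = np.array([0, 0, 0, 1])
Xb, yb = impute_and_oversample(Xs, ys)
# classes are now balanced: 3 zeros and 3 ones, with no NaN left
```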
5 Relevant Studies

After reviewing 30 papers from the last two years, 12 papers were selected according to accuracy to fill the gap and decide the new framework. The algorithms used in those articles are listed in Table 1, together with the occurrence of each algorithm as a percentage. KNN (K-nearest neighbor) has been used in 5 of the 12 papers [1, 8–11]. Similarly, SVM was used in five papers, NB in four, and back propagation in three. An output table was created to identify and analyze the result of each paper; the results are given in terms of accuracy, specificity, and sensitivity. In Table 2, the main features extracted from the articles are the dataset, the features present in the specific dataset, and the result. Since the research aims to design a framework for the prediction of diabetes, data is the most important factor in designing the model, so this table mainly focuses on the dataset, its features, and the result. The datasets used in the papers include the Electronic Health Record, PIMA Indian, Omics, Diabetes, OhioT1DM, and ELSA longitudinal datasets, the Schorling (size) and OAUTHC datasets, public Pima datasets, medical datasets, etc. The main features generally present in the diabetes datasets are blood glucose level, pregnancy, glucose, skin thickness, serum insulin, BMI, diabetes pedigree function, age, and diastolic blood pressure; the features related to the disease are listed in the feature column. The output or result column is the performance of the model used in each paper, as shown in Table 1.
6 Components

Table 2, the component table, demonstrates the relationships between the inputs, processes, and outputs of the research involved in this project. Data, study selection, and systematic literature review are the input factors considered in the table. Dataset is the attribute whose instances are the various datasets that have been used to train models for the prediction of diabetes. The gathered journal articles are both qualitative and quantitative, which is captured under the study type, while the applied area explains where the research is mostly focused. Secondly, algorithms, techniques, methods, models, tools, and frameworks are considered part of the data analysis process. Thirdly, the outcome of the reviewed papers is divided into primary and secondary output: the primary output consists of findings, comparisons of algorithms, recommendations, etc., whereas the secondary output contains graphs and numerical data. The attributes from the component table are further categorized as features in the classification table. The input phase consists of all the attributes shown in Table 2.
Table 1 Output table from 12 papers

| Paper Id | Dataset | References | Features | Result |
|---|---|---|---|---|
| 1 | Electronic health record | Nguyen et al. [3] | Neoplasms, endocrine, blood, pregnancy, circulatory, digestive, respiratory, skin, musculoskeletal, injuries, suppl, infectious, sense, nervous, mental health, symptoms or ill-defined | Accuracy 84.28%, specificity 96.85%, sensitivity 31.17% |
| 2 | PIMA Indian dataset, Omics dataset | Naz and Ahuja [4] | Pregnancy, glucose, skin thickness, serum insulin, BMI, diabetes pedigree function, age, diastolic blood pressure | Accuracy 98.07%, specificity 99.29%, sensitivity 95.52% |
| 3 | PIMA Indian dataset, Diabetes dataset | Zhou et al. [12] | Number of times pregnant, plasma glucose concentration, triceps skin fold thickness, serum insulin, BMI, diabetes pedigree function, age | Accuracy: Diabetes dataset 94.02%, PIMA Indian 99.41% |
| 4 | UCI, Clinical, fMRI datasets | Haq et al. [5] | Pregnancies, glucose, blood pressure, skin thickness, insulin, BMI, diabetes pedigree function, age | Accuracy 98.2%, specificity 97%, sensitivity 100% |
| 5 | OhioT1DM dataset | Kim et al. [6] | Age group, gender | NA |
| 6 | Schorling (size), OAUTHC datasets, public Pima datasets | Omisore et al. [8] | Age, BMI, diastolic reading, systolic reading, glycated hemoglobin, triceps skin fold, pregnancy count, gender, diabetes pedigree function | Training/validation accuracy: PIMA dataset 83.8%/79.3%; Schorling and private dataset 72.9%/92.3% |
| 7 | Pima Indian dataset | Sneha and Gangil [1] | Pregnancy, glucose, skin thickness, serum insulin, BMI, diabetes pedigree function, age, diastolic blood pressure | Specificity: decision tree 98.2%, random forest 98.00%; accuracy: SVM 77%, NB 82.30% |
| 8 | PIMA Indian dataset | García-Ordás [10] | Pregnancy, glucose, skin thickness, serum insulin, BMI, diabetes pedigree function, age, diastolic blood pressure | 92.31% accuracy when the CNN classifier was trained jointly with an SAE for feature augmentation over a well-balanced dataset |
| 9 | Medical dataset | Roy et al. [9] | Pregnancy, glucose, blood pressure, skin thickness, insulin, BMI, diabetes pedigree function, age | Accuracy: 98.0% |
| 10 | ELSA longitudinal dataset | Fazakis et al. [13] | Age, obesity, impaired glucose tolerance, ethnicity, gender, gestational diabetes, polycystic ovary syndrome, family history, physical activities, high blood pressure, alcohol | Specificity: 84.6%, sensitivity: 82.5% |
| 11 | Diabetic repository, Diabetic dataset | Vidhya and Shanmugalakshmi [11] | Age, family history, BMI, poor food habit, exercise, use of alcohol, staying seated, glucose, job nature, gender | Accuracy 81.20% |
| 12 | NA | Kuo et al. [7] | Gender, age, smoking, BMI, cholesterol, glucose, high density lipoprotein cholesterol, low density lipoprotein cholesterol, triglyceride, hemoglobin | Accuracy: SVM 1, neural network 0.78, random forest |
Table 2 Component table

| Factors | Attributes | Instances |
|---|---|---|
| Data | Datasets | PIMA Indian dataset, medical records, Diabetes dataset, Schorling (size), OAUTHC datasets, public Pima datasets, UCI datasets, clinical datasets, fMRI datasets, ELSA longitudinal dataset |
| Study selection | Study type | Qualitative, quantitative |
| | Study sources | Journal articles, WSU library, digital library, Google Scholar |
| | Applied area | Public health sector, public healthcare |
| Systematic literature review | Digital library | WSU library, IEEE, Google Scholar |
| | Article selection | Q1 and Q2 rank quality, last two years, journal, article, peer reviewed |
| Data analysis | Keywords | Deep learning, T2DM, type 2 diabetes mellitus, machine learning, data mining, framework |
| | Challenges | Requirement dependency, time estimation, document management |
| | Metrics | Accuracy, specificity, sensitivity, precision, false positive rate, Matthews correlation coefficient (MCC), negative predictive value, false negative rate, false rejection rate |
| | Data extraction | Author, title, year, research methodology |
| | Algorithm | KNN, SVM, tenfold validation, multivariate logistic regression, naïve Bayes, Lasso regression, back propagation, forward propagation, logistic regression, Ridge regression, AdaBoost, K-folds, RNN, decision tree, first-order polynomial exploration, hybrid GA-ELM, Bayesian regulation, light gradient boosting machine, gradient boosting |
| | Technique | Deep learning, Synthetic Minority Over-sampling Technique (SMOTE), artificial neural network, decision tree algorithm, cross-validation techniques, rule-based expert system, neural and Bayesian networks, fuzzy logic, naïve Bayes, random forest, KNN, logistic regression, SVM, multi-objective optimization-based technique, ensemble technique |

(continued)
Table 2 (continued)

| Factors | Attributes | Instances |
|---|---|---|
| Data analysis | Method | Ensemble-based method, DBN-DNN method, WeightedVotingLRRFs ML model, logistic regression model, long short-term memory, CNN (convolutional neural network), SVM, decision tree, naïve Bayes, radial basis function network model, K-Means + logistic regression model, K-Means + C4.5 prediction model, multilayer perceptron, least squares support vector machine, extreme learning machine (ELM), cross-validation methods (hold-out, K-folds, leave-one-subject-out), computer vision method, natural language processing, reinforcement learning, hyperparameter tuning method, Ridge regression, Lasso regression, ANN |
| | Model | Synthetic Minority Over-sampling Technique model, DNN model, SVM, random forest, decision tree, naïve Bayes, KNN, multifactor dimensionality reduction, radial basis function artificial neural network, classification model, rule-based fuzzy classifiers, linear classifier, multi-class prediction model |
| | Tools | HTML, PHP, JavaScript, SQL; hardware: Intel Core i5 2400 CPU, 4 GB RAM; software: Visio, OriginPro, Python, TensorFlow, effective prognostic tool (web based), QDiabetes-2018 |
| | Framework | Wide and deep learning framework, deep dynamic neural network framework, five-fold nested cross-validation framework, hybrid intelligent system framework, MyDi framework, least squares support vector machine (LS-SVM), generalised discriminant analysis, gradient boost framework, IoT-enabled framework, stochastic frameworks |

(continued)
Table 2 (continued)

| Factors | Attributes | Instances |
|---|---|---|
| Evaluation | Foundation | Dataset, data extraction, accuracy, article selection, journal rank |
| | Validity | Accuracy, reliability, scalability, conclusion |
| Output | Primary | Findings, recommendations, statistical comparison of algorithms, comparison of algorithms in terms of accuracy, specificity, sensitivity |
| | Secondary | Bar graph, histogram, percentage values, numerical values |
features such as data, keywords, challenges, metrics, and data extraction. Attributes such as algorithms, techniques, methods, models, tools, frameworks, foundations (databases), and validity are included as features under the process phase. Lastly, the output phase includes the primary and secondary output. Detailed information about all the attributes was collected from 12 closely related papers; the main aim of this research is to collect this information and compare all the attributes across studies. Table 3 includes features such as datasets, keywords, metrics, challenges, and data extraction, which form the input phase; the values of each attribute are compared to determine the most important components for the new framework. The process phase is categorized into two parts, data analysis and evaluation: the features under data analysis are algorithm, technique, method, model, tool, and framework, while the foundation (database) and validity are the features included under evaluation. The classification table based on output, i.e., Table 4, has two parts, primary and secondary output. As shown in the table, the primary output reports results as percentages: accuracy, precision, specificity, and sensitivity findings are considered primary outputs. Since this research focuses on accuracy, Table 4 helps analyze the results of each algorithm based on performance. Bar graphs, line graphs, and numerical values are included in the secondary output.
7 Current Architecture

Figure 2 illustrates the process of the existing architecture drawn from the analyzed papers. It involves splitting the dataset into training and testing sets. The training set is fed to different algorithms such as KNN, SVM, NB, AdaBoost, and decision tree. The 12 papers then use different models, which are visualized using tools such as Python and its visualization libraries. After visualizing the data, the model is fit to the training data and prediction is carried out. The prediction is then evaluated with metrics such as accuracy, false negative rate, false
Table 3 Classification table based on input

| References | Dataset | Keywords | Challenges | Metrics |
|---|---|---|---|---|
| Nguyen et al. [3] | Electronic health record | T2DM, wide and deep learning, onset, incidence | Requirement dependency | Accuracy, specificity, sensitivity |
| Naz and Ahuja [4] | PIMA Indian dataset, Omics dataset | Neural network, deep learning, Pima Indian dataset, diabetes prediction | Requirement dependency | Accuracy, precision, recall, F measure |
| Zhou et al. [12] | Pima Indian dataset, Diabetes dataset | Deep learning, deep neural networks, diabetes type prediction, type 1 diabetes, type 2 diabetes | Requirement dependency | Accuracy |
| Haq et al. [5] | UCI dataset, clinical dataset, fMRI dataset | Diabetes disease, feature selection, decision tree, performance, machine learning, e-healthcare | Requirement dependency | Accuracy, specificity, sensitivity, MCC |
| Kim et al. [6] | OhioT1DM dataset | Continuous glucose monitoring, diabetic inpatient, glucose prediction model, deep learning | Requirement dependency | Root mean square propagation |
| Omisore et al. [8] | Schorling (size), OAUTHC datasets, public Pima datasets | Diabetes mellitus, MANFIS, medical diagnosis, recommender system, diet personalization, affective systems | Requirement dependency | Accuracy |
| Sneha and Gangil [1] | Pima Indian dataset | Data mining, big data, diabetes, naive Bayesian, SVM | Requirement dependency | Accuracy, specificity, sensitivity |
| García-Ordás [10] | Pima Indian dataset | Diabetes detection, deep learning, sparse autoencoder, variational autoencoder, oversampling | Requirement dependency | Accuracy |
| Roy et al. [9] | Medical dataset | NA | Requirement dependency | Precision, recall, specificity, F1 |

(continued)
Table 3 (continued)

| References | Dataset | Keywords | Challenges | Metrics |
|---|---|---|---|---|
| Fazakis et al. [13] | ELSA longitudinal dataset | T2DM, long-term health risk prediction, machine learning, ensemble learning | Requirement dependency | Sensitivity, specificity |
| Vidhya and Shanmugalakshmi [11] | Diabetic repository, Diabetic dataset | Big data, deep belief network, support vector machine, random forest, K nearest neighbor, long short-term memory, gated recurrent unit, convolutional neural network | Requirement dependency | Accuracy, precision, recall, F1 |
| Kuo et al. [7] | NA | Diagnosis, machine-learning techniques, predictive models, type 2 diabetes mellitus | Requirement dependency | Accuracy, precision, recall, Matthews correlation coefficient |
rejection rate, precision, specificity, and negative predictive value. Finally comes the output, divided into primary and secondary output: the primary output includes findings, recommendations, accuracy, etc., whereas the secondary output includes bar graphs, histograms, etc.
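The train/test pipeline described above can be sketched in code. This is a minimal scikit-learn illustration on synthetic stand-in data: the feature matrix, labels, 80:20 split, and default hyperparameters are assumptions for illustration, not the configuration of any reviewed paper.

```python
# Sketch of the current architecture: split the data, train several
# classifiers, and score each on the held-out test set. The synthetic data
# stands in for a PIMA-style 8-feature matrix (an assumption).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))  # 768 records, 8 features, as in PIMA
y = (X[:, 1] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=768) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    print(f"{name}: acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```

The confusion-matrix counts directly yield the accuracy, sensitivity, and specificity figures that the reviewed papers report.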
8 Discussion

According to the 12 papers, qualitative and quantitative are the main study types, and digital libraries, such as that of Western Sydney University, are the source type from which the information has been gathered. All the papers extracted from the study sources are closely related to the topic and are from the last 3 years. Journal quality of Q1 and Q2 has been considered for effective evaluation of the research. Compared with all the other papers, [4, 5, 9, 12] are the most suitable for the analysis and for proposing the new framework: these papers have the highest prediction accuracy, so considering them for further research is effective. The process includes information about data analysis and evaluation, containing attributes such as algorithms, techniques, methods, models, tools, frameworks, foundations, and validity. The 12 articles focused on algorithms such as KNN, SVM, tenfold validation, multivariate logistic regression, naïve Bayes, Lasso regression, back propagation, forward propagation, logistic regression, Ridge regression, AdaBoost, K-folds, deep learning, the Synthetic Minority Over-sampling Technique (SMOTE), artificial neural networks, the decision tree algorithm, cross-validation techniques, rule-based expert
Table 4 Classification based on output

| References | Primary output | Secondary output |
|---|---|---|
| Nguyen et al. [3] | Accuracy 84.28%, specificity 96.85%, sensitivity 31.17% | Numerical values |
| Naz and Ahuja [4] | Accuracy 98.07%, specificity 99.29%, sensitivity 95.52% | Bar graph, line graph, numerical values |
| Zhou et al. [12] | Diabetes dataset accuracy: 94.02174%; PIMA Indian dataset accuracy: 99.4112% | Bar graph, numerical values |
| Haq et al. [5] | Accuracy 98.2%, specificity 97%, sensitivity 100% | Numerical values |
| Kim et al. [6] | NA | NA |
| Omisore et al. [8] | Training and validation accuracy for PIMA dataset: 83.8%, 79.3%; prediction accuracies for Schorling and private dataset: 72.9%, 92.3% | Bar graph, numerical values |
| Sneha and Gangil [1] | Specificity: decision tree 98.2%, random forest 98.00%; accuracy: SVM 77%, NB 82.30% | Numerical values, bar graph |
| García-Ordás [10] | 92.31% accuracy obtained when the CNN classifier was trained jointly with the SAE for feature augmentation over a well-balanced dataset | Bar chart, numerical values |
| Roy et al. [9] | Accuracy: 98.0% | Numerical values, bar graph |
| Fazakis et al. [13] | Specificity: 84.6%, sensitivity: 82.5% | Line graph, numerical values |
| Vidhya and Shanmugalakshmi [11] | Accuracy 81.20% | Bar charts, numerical values |
| Kuo et al. [7] | Accuracy: SVM 1, neural network 0.78, random forest | Bar charts, numerical values |
Fig. 2 Process in the current framework
systems, neural and Bayesian networks, fuzzy logic, naïve Bayes, random forest, and KNN, and on methods and models such as the ensemble-based method, the DBN-DNN method, the WeightedVotingLRRFs ML model, the logistic regression model, long short-term memory, CNN (convolutional neural network), SVM, decision tree, naïve Bayes, the radial basis function network model, the K-Means + logistic regression model, the K-Means + C4.5 prediction model, the multilayer perceptron, and the least squares support vector machine. Papers [4, 5, 9, 11, 12] are considered the best based on the above factors. The output data of the component table consist of primary and secondary output: the primary output contains information such as findings, recommendations, and comparisons of model accuracies, and the secondary output contains information in the form of bar graphs, numerical values, etc. Of the 12 articles, 11 were found to report variable information such as accuracy, including numerical data, but such information was not present in article [6]. Therefore, based on factors such as methods, algorithms, and accuracy in the 12 papers, [4, 5, 9, 11, 12] are suitable for further research to analyze and propose the new framework.
9 Proposed Framework

Suitable papers [4, 9, 11, 12] have been selected as the basis for the new framework. Only the best-performing algorithms, chosen on the basis of accuracy, have been considered, and the
method used for each algorithm is K-fold cross-validation. The PIMA Indian and Diabetes datasets have been combined into one dataset. Utilization of data: small datasets can be problematic. For instance, if we have only 100 records and apply an 80:20 splitting rule, we get only 20 records as the test dataset, which is not enough data to evaluate a prediction model; the situation gets worse when the dataset is too small to split into training and testing at all. If instead we use the K-fold cross-validation method in our architecture, it trains K different models that together use all the data in the dataset, which increases the performance of the model. The study [12] proposed a deep neural network model for the prediction of diabetes. The network makes use of two datasets with more than 1000 records each, and a small number of epochs is used, which makes the model efficient. The result for the Diabetes dataset was 94.02%, whereas the result for the PIMA Indian dataset was 99.41%. To overcome the problem of limited data, a combination of the Pima Indian dataset and the Diabetes dataset has been used here. Across its folds, the K-fold cross-validation method uses every data point for both testing and training purposes. Therefore, using a value of K of 5 not only splits the dataset into 5 equal partitions, but also leaves enough data in each set to help during training. The 4 algorithms considered are first analyzed based on performance and extracted with the help of the classification and evaluation tables. So 5 * 4 = 20 fold instances are created, each algorithm having 5 folds. When the method is applied to the dataset, it splits the data into 5 partitions with 5 folds as shown. On that split dataset, we fit each algorithm for training and validate it with the testing fold. This process continues for 5 iterations.
Computational errors are calculated for each fold and then averaged over the 5 folds for each algorithm; this average is known as the "generalization error." The generalization errors are then compared, and the best model is selected for prediction. Out of the several algorithms used in the existing papers, four were selected based on performance, and the validation technique, i.e., K-fold, was applied to validate their performance. After comparing the results, we selected the best algorithm for prediction.
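The fold-and-compare procedure above can be sketched as follows. This is a minimal scikit-learn illustration on synthetic stand-in data: the data, the MLP as a stand-in for back propagation with feed-forward propagation, and the hyperparameters are assumptions, not the paper's actual configuration.

```python
# 5-fold cross-validation over four candidate algorithms; the model with the
# lowest generalization error (mean error over the 5 folds) is selected.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2700, 8))  # ~2700 records, as in the combined dataset
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=2700) > 0).astype(int)

candidates = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "BP-FFNN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                             random_state=1),
}
generalization_error = {}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 4 algorithms * 5 folds = 20 fits
    generalization_error[name] = 1.0 - scores.mean()

best = min(generalization_error, key=generalization_error.get)
print("generalization errors:", generalization_error)
print("selected model:", best)
```

With K = 5, every record serves in exactly one test fold and four training folds, which is what makes the averaged error a reasonable estimate of generalization performance on a modest dataset.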
10 Conclusion

Type 2 diabetes is one of the 10 most common diseases worldwide. It floods the bloodstream with sugar, leading to disorders of the circulatory, nervous, and immune systems. Unfortunately, most people are not aware when they are in the early stages, when detection is crucial. Effective predictive systems would fill that gap. Such systems exist, but evaluation of their effectiveness is urgently needed. The current research has filled this gap through an in-depth analysis of 12 papers extracted from scientific databases, based on which this research developed a new framework for the prediction of diabetes. The main devices, namely the component, classification, and evaluation tables, are designed to compare and understand those 12 papers closely. Gap
analysis is done with the help of the classification and evaluation tables (Tables 1 and 2) to propose a new framework. The dataset used in the proposed framework is a combination of the PIMA Indian and the Diabetes datasets; the total number of records in the dataset is therefore 2700, which improves the training of the model. After analyzing Tables 1 and 2 and the evaluation table, four algorithms, KNN, SVM, NB, and back propagation with feed-forward propagation, were selected based on their performance, and the K-fold validation method was used to validate them. The K-fold method with a value of K = 5 is applied to the dataset. This method splits the dataset into K folds, in our case 5; in each iteration, 1 fold is used for testing and the remaining 4 for training. Each algorithm is then fitted to the training folds, and the model is trained and tested in this way for 5 iterations, so that every data point is used. Therefore, using four algorithms and a value of K of 5, 20 fold instances are created, i.e., 5 * 4 = 20. At each fold, the computation error is calculated, and the average over the 5 folds for each algorithm, the generalization error, defines the accuracy of the model. Finally, the generalization errors are compared for each algorithm and the best model is selected for prediction. In conclusion, the paper illustrates how the proposed solution would help develop a real-time application for the prediction of Type 2 Diabetes Mellitus.
References

1. Sneha N, Gangil T (2019) Analysis of diabetes mellitus for early prediction using optimal features selection. J Big Data 6(1)
2. Bukhari MM, Alkhamees BF, Hussain S, Gumaei A, Assiri A, Ullah SS (2021) An improved artificial neural network model for effective diabetes prediction. Complexity 2021:1–10
3. Nguyen B, Pham H, Tran H, Nghiem N, Nguyen Q, Do T, Tran C, Simpson C (2019) Predicting the onset of type 2 diabetes using wide and deep learning with electronic health records. Comput Methods Programs Biomed 182:105055
4. Naz H, Ahuja S (2020) Deep learning approach for diabetes prediction using PIMA Indian dataset. J Diabetes Metab Disord 19(1):391–403
5. Haq AU, Li JP, Khan J, Memon MH, Nazir S, Ahmad S, Khan GA, Ali A (2020) Intelligent machine learning approach for effective recognition of diabetes in E-healthcare using clinical data. Sensors 20(9):2649
6. Kim D-Y, Choi D-S, Kim J, Chun SW, Gil H-W, Cho N-J, Kang AR, Woo J (2020) Developing an individual glucose prediction model using recurrent neural network. Sensors 20(22):6460. Available at: https://www.mdpi.com/1424-8220/20/22/6460. Accessed 24 Jul 2021
7. Kuo K-M, Talley P, Kao Y, Huang CH (2020) A multi-class classification model for supporting the diagnosis of type II diabetes mellitus. PeerJ 8:e9920
8. Omisore OM, Ojokoh BA, Babalola AE, Igbe T, Folajimi Y, Nie Z, Wang L (2021) An affective learning-based system for diagnosis and personalized management of diabetes mellitus. Futur Gener Comput Syst 117:273–290
9. Roy K, Ahmad M, Waqar K, Priyaah K, Nebhen J, Alshamrani SS, Raza MA, Ali I (2021) An enhanced machine learning framework for Type 2 diabetes classification using imbalanced data with missing values. Complexity 2021:1–21
10. García-Ordás MT (2021) Diabetes detection using deep learning techniques with oversampling and feature augmentation. Comput Methods Prog Biomed 202:105968. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0169260721000432?via%3Dihub. Accessed 3 Aug 2021
11. Vidhya K, Shanmugalakshmi R (2020) Deep learning based big medical data analytic model for diabetes complication prediction. J Ambient Intell Humaniz Comput 11(11):5691–5702
12. Zhou H, Myrzashova R, Zheng R (2020) Diabetes prediction model based on an enhanced deep neural network. EURASIP J Wirel Commun Netw 2020(1)
13. Fazakis N, Dritsas E, Kocsis O, Fakotakis N, Moustakas K (2021) Long-term cholesterol risk prediction using machine learning techniques in ELSA database. In: Proceedings of the 13th International Joint Conference on Computational Intelligence (IJCCI 2021), pp 445–450
Irrigation Control System for Seedlings Based on the Internet of Things André de Carvalho, Gede Putra Kusuma, and Alif Tri Handoyo
Abstract The agricultural activities in the Vocational School of Agricultural Engineering (ESTVA) in Timor-Leste require large quantities of water with high availability for the production and cultivation of plants. However, crop results sometimes decrease due to the challenges faced in managing the water resources. This study aims to design an Internet of Things (IoT) based automatic control system to control the water supply and maintain soil moisture conditions effectively. The evaluation of the manual and automatic irrigation systems during the 21-day trial shows that the automatic system provides stable utilization of water, with a watering frequency of 274 occurring in only 12 days and a total watering amount of 35.3 L. Meanwhile, the manual system uses up to 49.8 L of water, which is 14.5 L more than the water spent by the automatic system. The results of the evaluation also indicate that the automatic system obtains an average moisture value of 509 with a standard deviation of 63, which is smaller than the manual system's average moisture value of 559 and significant standard deviation of 126. From these results, the automatic system is more effective in maintaining the soil moisture level, yet requires less water than the manual watering system. Furthermore, the quality and productivity of the growth of mustard seedlings with the automatic system show that the seedlings are more stable and feasible to transplant, since the automatic system produces 12 more mustard trees than the manual system. Keywords Internet of Things · Control systems · Automatic irrigation · Plant seedling · Moisture control
A. de Carvalho · G. P. Kusuma (B) · A. T. Handoyo Department of Computer Science, BINUS Graduate Program—Master of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. C. Mukhopadhyay et al. (eds.), Innovative Technologies in Intelligent Systems and Industrial Applications, Lecture Notes in Electrical Engineering 1029, https://doi.org/10.1007/978-3-031-29078-7_86
999
1000
A. de Carvalho et al.
1 Introduction

Escola Secundaria Técnica Vocacional de Agrícola (ESTVA) is a Vocational School of Agricultural Engineering located in the mountainous sub-district of Maubisse, Ainaro District, Timor-Leste. To increase food crop production, the Agriculture and Forestry Department and the Ministry of Education participated in developing technical schools to prepare future human resources as technical professionals. Agricultural activities in ESTVA still focus on activities that require large quantities of water for production and cultivation, and crop results sometimes decrease due to the challenges of managing the water resources. The irrigation system at the ESTVA school is still traditional: it is equipped with storage vats and ponds, and all watering is done manually, either directly or through a hose, from the seedlings up to the wider agricultural activities. Improper irrigation water supply is a cause of low productivity of plant growth. The school's water needs are increasingly complex because water quantity and quality are degraded by population growth and domestic activities in the surrounding area. Water use for horticulture activities around the school must therefore be regulated, and a better irrigation system is needed to improve the efficiency of the limited water resources. In addition, the decreasing environmental carrying capacity of water bodies, due to damage to the watershed, is becoming a threat in the dry season. The threat of the dry season cannot be avoided, and it causes land productivity to decrease. To cope with the growing world population on limited agricultural land, many researchers have investigated applications and systems for precision agriculture [1]. Agricultural activities have various factors that determine the productivity of cultivation, and one factor that cannot be overlooked is the supply of water to the plants.
An automatic irrigation system improves efficiency and reduces the water delivered to the plants [2]. There are also many dry lands that could be highly developed for the production of certain crops through water-efficient irrigation interventions. Growing crops requires irrigation water management that is quite complex, and water scarcity on dry land persists where no irrigation system ensures the availability of water; an irrigation control system addresses this [3]. The availability of water determines dry-land productivity, along with the potential for utilizing groundwater on dry land. It is technically possible to apply an irrigation system informed by the potential groundwater conditions on dry land, taking into account the land condition, the types of plants, and an efficient water supply [4]. Groundwater content is the amount of water in the soil pores. The water content strongly influences soil consistency and the suitability of the soil for processing. Water plays many significant roles in the soil; soil water is essential for plant growth, so it is important to understand the relationship between groundwater and plant growth and to determine soil water content [5].
Water sources are the most valuable primary need in the world and play an essential role in all agricultural and non-agricultural activities. To conserve water resources for these primary needs, a reliable automatic water delivery system must be used. The primary function of building an automated system is to replace manual control of water use. Much research has studied the implementation of stable irrigation systems that maintain an ideal soil fertility level for plants, but few have created an automated system to control watering efficiency for seedlings, even though seedlings are determinants of agricultural production [6]. A new way to overcome the challenge of monitoring the soil moisture ecosystem is to connect various sensors whose readings are assimilated into a predictive model producing local potential-result curves [7]. To support and rehabilitate the still-traditional irrigation system, in this study the authors take advantage of Internet of Things (IoT) technology to distribute a stable irrigation water supply to the seedlings. Developing the irrigation system is an opportunity to increase the productivity of food crops and perennial crops that can directly support food security in dry land through the application of modern irrigation technology. More careful treatment of every part of the land increases productivity by increasing agricultural yields, using modern technology to efficiently and accurately monitor groundwater and manage irrigation water in agriculture [8]. IoT has become a trending topic as a concept that affects human life, and the question is how it can help and facilitate human life [9]. The Internet of Things (IoT) can be defined as the interconnection of many objects to provide information about the real world by taking actions or decisions at the time they are needed.
This is a new approach to incorporating the internet into private and professional life, connecting people at any time to services, networks, and sensors connected to the internet. Initially, IoT referred to uniquely identifiable connected objects using Radio Frequency Identification (RFID) technology; later, actuators, sensors, GPS devices, and cellular devices were connected [10]. Another IoT-based system was reported to be capable of providing irrigation to more than 2.5 million hectares and managing more than 90,000 km of water flow. To ensure the distribution of water to farmers, that irrigation system rotates according to a schedule maintained by government agencies. The system controls water distribution with indicator networks based on the Internet of Things (IoT) and reports real-time water flow rates through GPRS and backend server services [11]. In agricultural research, relevant agricultural information sensing and transmission, such as monitoring changes in weather conditions and the quantity and quality of water for plants, has been developed using an Arduino-compatible microcontroller system to transmit samples; laboratory calibration of the sensors and converter resolutions must also be conducted [12]. The proposed advantage of an automatic irrigation system is to avoid operational complexity for farmers. An early-stage fuzzy analysis process weighs five environmental inputs, not previously known, as assessment criteria; these criteria are used to estimate the achievement of an adequate soil moisture level [13]. Inspired
by the above related works proposed by various researchers, we therefore propose a suitable solution to monitor soil humidity and control water usage using an IoT-based irrigation control system. The proposed solution is designed specifically to meet the need for irrigation water control in ESTVA, the research location. The proposed system is described in detail in the following sections.
2 Proposed Irrigation Control System

2.1 Hardware Block Diagram

The IoT hardware diagram is designed to facilitate the installation and operation of the system and can be viewed in Fig. 1. The system is built with an Arduino Uno microcontroller, a YL-69 soil moisture sensor, and a 12 V actuator that communicates through an electronic relay switch, which activates the actuator when the soil moisture level approaches the dry condition. The system also includes an LCD, connected with a potentiometer and transistors used to configure the display brightness. The primary function of the LCD is to display the output value from the soil moisture sensor. A buzzer sounds an alarm to inform the user that the automatic system is watering the plants.
Fig. 1 Hardware block diagram
Irrigation Control System for Seedlings Based on the Internet of Things
2.2 System Operation

The system performs automatic watering by adjusting to the soil moisture read by the sensor, comparing the readings against predefined reference values. The workflow of the IoT-based operating system is shown in Fig. 2. The system waters the plants automatically based on changes in the soil moisture condition read by the sensor: the actuator is turned ON if the analog value is greater than or equal to 800 (>= 800), meaning that the soil moisture is in or approaching the semi-dry condition, and the actuator is turned OFF automatically once the sensor value falls below 600 (< 600). Table 1 summarizes the soil moisture categories used by the system.

Table 1 Soil moisture levels based on the sensor analog output

No.   Analog output value   Moisture level
1     >= 200 and < 300      Wet
2     >= 300 and < 550      Semi wet
3     >= 600 and < 800      Semi dry
4     >= 800 and < 1023     Dry
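The ON/OFF thresholds above form a simple hysteresis band: the actuator switches ON at readings of 800 or more and OFF only once readings fall below 600, so it does not chatter between states near a single threshold. The logic can be sketched as follows; the function names are illustrative, not from the paper, and on the Arduino the same logic would run in loop() against analogRead() values.

```python
# Hysteresis control sketch for the irrigation actuator.
# Thresholds follow the paper: pump ON when the YL-69 analog reading
# reaches 800 (dry) and OFF once it drops below 600 (semi dry).

ON_THRESHOLD = 800   # soil approaching the dry condition
OFF_THRESHOLD = 600  # soil back below the semi-dry boundary

def classify(reading):
    """Map a YL-69 analog reading (0-1023) to its Table 1 category."""
    if 200 <= reading < 300:
        return "Wet"
    if 300 <= reading < 550:
        return "Semi wet"
    if 600 <= reading < 800:
        return "Semi dry"
    if 800 <= reading < 1023:
        return "Dry"
    return "Out of range"

def update_actuator(reading, pump_on):
    """Return the new pump state for a sensor reading, with hysteresis."""
    if reading >= ON_THRESHOLD:
        return True
    if reading < OFF_THRESHOLD:
        return False
    return pump_on  # between the thresholds: keep the current state
```

Keeping the current state between the two thresholds is what prevents rapid ON/OFF cycling while the soil dries or wets gradually.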
Table 2 Summary of average value and standard deviation of moisture level during the eight days

Day       Automatic irrigation (analog output)   Manual irrigation (analog output)
          Average   Std Dev                      Average   Std Dev
1         534       56                           558       102
2         538       55                           591       102
3         545       58                           565       109
4         503       60                           520       119
5         437       67                           449       132
6         457       68                           508       135
7         457       73                           494       153
8         602       66                           785       160
Average   509       63                           559       126
the average value of both systems during each day. The table shows that, even from the first day, the average values of the automatic control system differ noticeably from those of the manual system.
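The per-day statistics in Table 2 are ordinary means and standard deviations of the logged analog readings. A minimal sketch of that computation (the reading list here is hypothetical, not the paper's data):

```python
import math

def daily_stats(readings):
    """Return (mean, population standard deviation) of one day's analog readings."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((r - mean) ** 2 for r in readings) / n
    return mean, math.sqrt(variance)

# Hypothetical analog readings logged over one day (not the paper's data).
day_readings = [530, 540, 520, 546]
mean, std = daily_stats(day_readings)
```

A lower standard deviation, as reported for the automatic system, indicates that the soil moisture stayed closer to its daily average.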
3.2 Experimental Results on Water Usage

The evaluation over the 21 days showed that the automatic system could control water usage more efficiently while maintaining the ideal humidity more effectively. The automatically recorded log dates and times prove that the automatic system worked well: a total watering frequency of 274 occurred on only 14 of the days, with a total water usage of 35.3 L. The recorded data also show that the manual system does not always deliver water appropriately: because watering activity is occasional, the amount of water provided cannot be controlled, and the manual system used 49.8 L, a difference of 14.5 L from the automatic system. Tables 3 and 4 show the water usage results of the two systems.
Table 3 Automatic system evaluation results

Log day   Log time      Watering frequency (times)   Amount of water (liters)
Day 1     08:27:23 PM   8                            2.8
Day 2     07:07:37 AM   1                            1.2
Day 4     06:06:57 PM   10                           2.9
Day 5     09:41:20 AM   5                            1.85
Day 6     05:49:20 AM   3                            0.95
Day 8     04:15:15 AM   4                            2.5
Day 9     11:08:54 AM   15                           2.6
Day 10    08:41:47 AM   32                           2.6
Day 11    09:23:52 AM   31                           2.5
Day 12    05:35:22 PM   32                           2.3
Day 17    05:20:36 PM   18                           2
Day 18    06:22:28 AM   47                           4.9
Day 19    10:49:21 AM   47                           3.4
Day 21    11:27:44 AM   21                           2.8
Total                   274                          35.3
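Totals such as those in Tables 3 and 4 can be aggregated directly from the watering log. A minimal sketch, using the first three entries of Table 3 as an example (the record format is illustrative, not the paper's actual log schema):

```python
# Each log record: (log_day, watering_frequency, liters). Hypothetical format.
def summarize(log):
    """Return (number of watering days, total frequency, total liters)."""
    days = len(log)
    total_freq = sum(freq for _, freq, _ in log)
    total_liters = round(sum(liters for _, _, liters in log), 2)
    return days, total_freq, total_liters

# First three rows of Table 3.
log = [("Day 1", 8, 2.8), ("Day 2", 1, 1.2), ("Day 4", 10, 2.9)]
summary = summarize(log)
```

Running the same aggregation over both logs yields the totals compared in the text: 274 waterings and 35.3 L for the automatic system versus 33 waterings and 49.8 L for the manual one.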
4 Conclusion and Future Work

An automatic system was successfully built that, over the 21 days, turned on the actuator when the soil moisture was about to dry out and turned it off while the humidity remained ideal. The system can also control the provision of water, which is very useful and efficient for saving water: it used 35.3 L, compared with more than 49.8 L wasted by the manual system, a difference of 14.5 L. The evaluation data from both systems show that the automatic system achieved an average value of 509 with a standard deviation of 63, smaller than the manual system's average value of 559 and standard deviation of 126. Judging from the growth quality of the mustard plants from both systems in the field, the automatic system produced 12 mustard seedlings that were more stable and feasible for advanced transplanting.
Table 4 Manual system evaluation results

Log date   Log time            Watering frequency (times)   Amount of water (liters)
Day 1      06:00 PM            1                            2.4
Day 2      7:00 AM–06:00 PM    2                            2.4
Day 3      9:37 AM–07:00 PM    2                            2.4
Day 4      09:00 AM–07:00 PM   2                            2.4
Day 5      08:00 AM–06:00 PM   2                            2.4
Day 6      06:50 PM            1                            1.2
Day 7      08:00 AM–06:00 PM   2                            3.6
Day 8      09:00 AM–07:00 PM   2                            3.6
Day 9      06:00 PM            1                            3.6
Day 10     07:00 PM            1                            1.2
Day 11     06:50 PM            1                            1.2
Day 12     05:17 PM            1                            1.2
Day 13     08:00 PM            1                            1.8
Day 14     07:00 PM            1                            1.8
Day 15     07:00 AM–07:00 PM   2                            3.6
Day 16     08:00 AM–06:00 PM   2                            2.4
Day 17     09:00 AM–06:00 PM   2                            3.6
Day 18     06:30 PM            1                            1.8
Day 19     10:00 AM–05:00 PM   2                            3.6
Day 20     09:30 AM–05:00 PM   2                            2.4
Day 21     09:00 AM–06:00 PM   2                            3.6
Total                          33                           49.8
References

1. Bourgeois D, Cui S, Obiomon PH, Wang Y (2015) Development of a decision support system for precision agriculture. Int J Eng Res Technol (IJERT) 04:226–231
2. Shekhar S, Kumar M, Kumari A, Jain SK (2017) Soil moisture profile analysis using tensiometer under different discharge rates of drip emitter. Int J Curr Microbiol Appl Sci 06:908–917
3. Mohapatra AG, Lenka SK (2016) Neural network pattern classification and weather dependent fuzzy logic model for irrigation control in WSN based precision agriculture. In: International conference on information security & privacy (ICISP2015), pp 499–506. Procedia Computer Science, Nagpur
4. Aguilera H, Moreno L, Wesseling JG, Hernández J, Castaño S (2016) Soil moisture prediction to support management in semiarid wetlands during drying episodes. CATENA 147:709–724
5. Susha LSU, Singh DN, Maryam SB, Shojaei BM (2014) A critical review of soil moisture measurement. Measurement 54:92–105
6. Prakash KVA, Sajeena S, Lakshminarayana SV (2017) Field level investigation of automated drip irrigation system. 06:1888–1898
7. Phillips AJ, Newlands NK, Liang SHL, Ellert BH (2014) Integrated sensing of soil moisture at the field-scale: measuring, modeling and sharing for improved agricultural decision support. Comput Electron Agric 107:73–88
8. Payero JO, Qiao X, Khalilian A, Mirzakhani NA, Davis R (2017) Evaluating the effect of soil texture on the response of three types of sensors used to monitor soil water status. J Water Resour Protect 09:566–577
9. Hussain F (2017) Internet of Things building blocks and business models, 1st edn. Springer, Toronto
10. Howell S, Rezgui Y, Beach T (2017) Integrating building and urban semantics to empower smart water solutions. Autom Constr 81:434–448
11. Muhammad A, Haider B, Ahmad Z (2016) IoT enabled analysis of irrigation rosters in the Indus Basin irrigation system. In: 12th International conference on hydroinformatics (HIC 2016), pp 229–235. Procedia Engineering, Lahore
12. Payero JO, Nafchi AM, Davis R, Khalilian A (2017) An Arduino-based wireless sensor network for soil moisture monitoring using decagon EC-5 sensors. Open J Soil Sci 07:288–300
13. Flores-Carrillo DA, Sánchez-Fernández LP, Sánchez-Pérez LA, Carbajal-Hernández JJ (2017) Soil moisture fuzzy estimation approach based on decision-making. Environ Model Softw 91:223–240