Lecture Notes in Networks and Systems 278
Leonard Barolli Kangbin Yim Tomoya Enokido Editors
Complex, Intelligent and Software Intensive Systems Proceedings of the 15th International Conference on Complex, Intelligent and Software Intensive Systems (CISIS-2021)
Lecture Notes in Networks and Systems Volume 278
Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.
More information about this series at http://www.springer.com/series/15179
Editors Leonard Barolli Department of Information and Communication Engineering Fukuoka Institute of Technology Fukuoka, Japan
Kangbin Yim Department of Information Security Engineering Soonchunhyang University Asan, Korea (Republic of)
Tomoya Enokido Faculty of Business Administration Rissho University Tokyo, Japan
ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-030-79724-9 ISBN 978-3-030-79725-6 (eBook) https://doi.org/10.1007/978-3-030-79725-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Welcome Message of CISIS-2021 International Conference Organizers
Welcome to the 15th International Conference on Complex, Intelligent and Software Intensive Systems (CISIS-2021), which will be held from July 1 to July 3, 2021, at Soon Chun Hyang (SCH) University, Asan, Korea, in conjunction with the 15th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS-2021). The aim of the conference is to deliver a platform of scientific interaction between the three interwoven challenging areas of research and development of future ICT-enabled applications: software intensive systems, complex systems and intelligent systems. Software intensive systems are systems that heavily interact with other systems, sensors, actuators, devices, other software systems and users. More and more domains involve software intensive systems, e.g., automotive and telecommunication systems, embedded systems in general, industrial automation systems and business applications. Moreover, the advent of web services provides a new platform for enabling software intensive systems. The conference therefore focuses on tools and on practically relevant and theoretical foundations for engineering software intensive systems. Complex systems research is focused on the overall understanding of systems rather than of their components. Complex systems are very much characterized by the changing environments in which they act and by their multiple internal and external interactions. They evolve and adapt through internal and external dynamic interactions. The development of intelligent systems and agents, which is increasingly characterized by the use of ontologies and their logical foundations, provides a fruitful impulse for both software intensive systems and complex systems. Recent research in the fields of intelligent systems, robotics, neuroscience, artificial intelligence and cognitive sciences is a very important factor for the future development and innovation of software intensive and complex systems.
CISIS-2021 aims to deliver a forum for in-depth scientific discussions among the three communities. The papers included in the proceedings cover all aspects of theory, design and application of complex systems, intelligent systems and software intensive systems. We are very proud and honored to have two distinguished keynote talks by Dr. Jayh (Hyunhee) Park, Myongji University, Korea, and Dr. Antonio Esposito, University of Campania “Luigi Vanvitelli”, Italy, who will present their recent work and will give new insights and ideas to the conference participants. The organization of an international conference requires the support and help of many people, and many people helped and worked hard to produce a successful CISIS-2021 technical program and conference proceedings. First, we would like to thank all the authors for submitting their papers, the program committee members and the reviewers who carried out the most difficult work by carefully evaluating the submitted papers. We are grateful to the Honorary Co-Chairs, Prof. Kyoil Suh, Soon Chun Hyang (SCH) University, Korea, and Prof. Makoto Takizawa, Hosei University, Japan, for their guidance and advice. Finally, we would like to thank the Web Administrator Co-Chairs for their excellent and timely work. We hope you will enjoy the conference proceedings.
Organization
CISIS-2021 Organizing Committee
Honorary Co-chairs
Kyoil Suh, Soonchunhyang University, Korea
Makoto Takizawa, Hosei University, Japan
General Co-chairs
Kangbin Yim, Soonchunhyang University, Korea
Tomoya Enokido, Rissho University, Japan
Marek Ogiela, AGH University of Technology, Poland
Program Committee Co-chairs
Jonghyouk Lee, Sejong University, Korea
Antonio Esposito, University of Campania “Luigi Vanvitelli”, Italy
Omar Hussain, University of New South Wales, Australia
International Advisory Board
David Taniar, Monash University, Australia
Minoru Uehara, Toyo University, Japan
Arjan Durresi, IUPUI, USA
Beniamino Di Martino, University of Campania “L. Vanvitelli”, Italy
Award Co-chairs
Akio Koyama, Yamagata University, Japan
Kin Fun Li, University of Victoria, Canada
Kiwoong Park, Sejong University, Korea
Olivier Terzo, LINKS Foundation, Italy
International Liaison Co-chairs
Wenny Rahayu, La Trobe University, Australia
Fumiaki Sato, Toho University, Japan
Flora Amato, University of Naples Federico II, Italy
Publicity Co-chairs
Nadeem Javaid, COMSATS University Islamabad, Pakistan
Takahiro Uchiya, Nagoya Institute of Technology, Japan
Markus Aleksy, ABB AG Corporate Research Center, Germany
Farookh Hussain, University of Technology Sydney, Australia
Finance Chair
Makoto Ikeda, Fukuoka Institute of Technology, Japan
Local Arrangement Co-chairs
Seongkeun Park, Soonchunhyang University, Korea
Kyuhaeng Lee, Soonchunhyang University, Korea
Taeyoon Kim, Soonchunhyang University, Korea
Web Administrator Chairs
Phudit Ampririt, Fukuoka Institute of Technology, Japan
Kevin Bylykbashi, Fukuoka Institute of Technology, Japan
Ermioni Qafzezi, Fukuoka Institute of Technology, Japan
Steering Committee Chair
Leonard Barolli, Fukuoka Institute of Technology, Japan
Track Areas and PC Members
1. Database and Data Mining Applications
Track Co-chairs
Kin Fun Li, University of Victoria, Canada
Pavel Krömer, Technical University of Ostrava, Czech Republic
PC Members Antonio Attanasio Tibebe Beshah
Links Foundation, Italy Addis Ababa University, Ethiopia
Jana Heckenbergerova Konrad Jackowski Petr Musílek Aleš Zamuda Genoveva Vargas-Solar Xiaolan Sha Kosuke Takano Masahiro Ito Watheq ElKharashi Mohamed Elhaddad Wei Lu
University of Pardubice, Czech Republic Wroclaw University of Technology, Poland University of Alberta, Canada University of Maribor, Slovenia French Council of Scientific Research, LIG-LAFMIA, France Sky, UK Kanagawa Institute of Technology, Japan Toshiba Lab, Japan Ain Shams University, Egypt University of Victoria, Canada Keene State College, USA
2. Artificial Intelligence and Bio-inspired Computing
Track Co-chairs
Hai Dong, Royal Melbourne Institute of Technology, Australia
Salvatore Vitabile, University of Palermo, Italy
Urszula Ogiela, Pedagogical University of Krakow, Poland
PC Members Kit Yan Chan Shang-Pin Ma Pengcheng Zhang Le Sun Sajib Mistry Klodiana Goga Vincenzo Conti Minoru Uehara Philip Moore Mauro Migliardi Dario Bonino Andrea Tettamanzi Cornelius Weber Tim Niesen Rocco Raso Fulvio Corno
Curtin University, Australia National Taiwan Ocean University, Taiwan Hohai University, China Nanjing University of Information Science and Technology, China Curtin University, Australia Istituto Superiore Mario Boella, Italy University of Enna Kore, Italy Toyo University, Japan Lanzhou University, China University of Padua, Italy CHILI, Italy University of Nice, France Hamburg University, Germany German Research Center for Artificial Intelligence (DFKI), Germany German Research Center for Artificial Intelligence (DFKI), Germany Politecnico di Torino, Italy
3. Multimedia Systems and Virtual Reality
Track Co-chairs
Yoshinari Nomura, Okayama University, Japan
Santi Caballé, Open University of Catalonia, Spain
Shinji Sugawara, Chiba Institute of Technology, Japan
PC Members Shunsuke Mihara Shunsuke Oshima Yuuichi Teranishi Kazunori Ueda Hideaki Yanagisawa Kaoru Sugita Keita Matsuo Santi Caballé Nobuo Funabiki Yoshihiro Okada Tomoyuki Ishida Nicola Capuano Jordi Conesa Farzin Asadi David Gañan Le Hoang Son Jorge Miguel David Newell
Lockon Inc., Japan Kumamoto National College of Technology, Japan NICT, Japan Kochi University of Technology, Japan National Institute of Technology, Tokuyama College, Japan Fukuoka Institute of Technology, Japan Fukuoka Institute of Technology, Japan Open University of Catalonia, Spain Okayama University, Japan Kyushu University, Japan Fukuoka Institute of Technology, Japan University of Basilicata, Italy Universitat Oberta de Catalunya, Spain Kocaeli University, Kocaeli, Turkey Universitat Oberta de Catalunya, Spain Vietnam National University, Vietnam Grupo San Valero, Spain Bournemouth University, UK
4. Next Generation Wireless Networks
Track Co-chairs
Marek Bolanowski, Rzeszow University of Technology, Poland
Andrzej Paszkowski, Rzeszow University of Technology, Poland
Sriram Chellappan, Missouri University of Science and Technology, USA
PC Members Yunfei Chen Elis Kulla Admir Barolli Makoto Ikeda Keita Matsuo Shinji Sakamoto
University of Warwick, UK Okayama University of Science, Japan Aleksander Moisiu University, Albania Fukuoka Institute of Technology, Japan Fukuoka Institute of Technology, Japan Seikei University, Japan
Omer Wagar Zhibin Xie Jun Wang Vamsi Paruchuri Arjan Durresi Bhed Bista Tadeusz Czachórski
University of Engineering & Technology, Poland Jiangsu University of Science and Technology, China Nanjing University of Post and Telecommunication, China University of Central Arkansas, USA IUPUI, USA Iwate Prefectural University, Japan Polish Academy of Sciences, Poland
5. Semantic Web and Web Services
Track Co-chairs
Antonio Messina, Istituto di Calcolo e Reti ad Alte Prestazione CNR, Italy
Ilona Bluemke, Warsaw University of Technology, Poland
Natalia Kryvinska, Comenius University in Bratislava, Slovakia
PC Members Alba Amato Nik Bessis Robert Bestak Ivan Demydov Marouane El Mabrouk Corinna Engelhardt-Nowitzki Michal Gregus Jozef Juhar Nikolay Kazantsev Manuele Kirsch Pinheiro Cristian Lai Michele Melchiori Giovanni Merlino Kamal Bashah Nor Shahniza Eric Pardede Aneta Poniszewska-Maranda Pethuru Raj Jose Luis Vazquez Avila Salvatore Venticinque Anna Derezinska
Italian National Research Center (CNR), Italy Edge Hill University, UK Czech Technical University in Prague, Czech Republic Lviv Polytechnic National University, Ukraine Abdelmalek Essaadi University, Morocco University of Applied Sciences, Austria Comenius University in Bratislava, Slovakia Technical University of Košice, Slovakia National Research University, Russia Université Paris 1 Panthéon Sorbonne, France CRS4 Center for Advanced Studies, Italy University of Brescia, Italy University of Messina, Italy Universiti Teknologi MARA, Malaysia La Trobe University, Australia Lodz University of Technology, Poland IBM Global Cloud Center of Excellence, India University of Quintana Roo, México University of Campania “Luigi Vanvitelli”, Italy Warsaw University of Technology, Poland
6. Security and Trusted Computing
Track Co-chairs
Hiroaki Kikuchi, Meiji University, Japan
Omar Khadeer Hussain, University of New South Wales (UNSW) Canberra, Australia
Lidia Fotia, University of Calabria, Italy
PC Members Saqib Ali Zia Rehman Morteza Saberi Sazia Parvin Farookh Hussain Walayat Hussain Sabu Thampi
Sun Jingtao Anitta Patience Namanya Smita Rai Abhishek Saxena Ilias K. Savvas Fabrizio Messina Domenico Rosaci Alessandra De Benedictis
Sultan Qaboos University, Oman COMSATS University Islamabad, Pakistan University of New South Wales (UNSW) Canberra, Australia University of New South Wales (UNSW) Canberra, Australia University of Technology Sydney, Australia University of Technology Sydney, Australia Indian Institute of Information Technology and Management - Kerala (IIITM-K) Technopark Campus, India National Institute of Informatics, Japan University of Bradford, UK Uttarakhand Board of Technical Education Roorkee, India American Tower Corporation Limited, India University of Thessaly, Greece University of Catania, Italy University Mediterranea of Reggio Calabria, Italy University of Naples “Federico II”, Italy
7. HPC and Cloud Computing Services and Orchestration Tools
Track Co-chairs
Olivier Terzo, Links Foundation, Italy
Jan Martinovič, IT4Innovations National Supercomputing Center, VSB Technical University of Ostrava, Czech Republic
Jose Luis Vazquez-Poletti, Universidad Complutense de Madrid, Spain
PC Members Alberto Scionti Antonio Attanasio Jan Platos
Links Foundation, Italy Links Foundation, Italy VŠB-Technical University of Ostrava, Czech Republic
Rustem Dautov Giovanni Merlino Francesco Longo Dario Bruneo Nik Bessis MingXue Wang Luciano Gaido Giacinto Donvito Andrea Tosatto Mario Cannataro Agustin C. Caminero Dana Petcu Marcin Paprzycki Rafael Tolosana
Kazan Federal University, Russia University of Messina, Italy University of Messina, Italy University of Messina, Italy Edge Hill University, UK Ericsson, Ireland Istituto Nazionale di Fisica Nucleare (INFN), Italy Istituto Nazionale di Fisica Nucleare (INFN), Italy Open-Xchange, Germany University “Magna Græcia” of Catanzaro, Italy Universidad Nacional de Educación a Distancia, Spain West University of Timisoara, Romania Systems Research Institute, Polish Academy of Sciences, Poland Universidad de Zaragoza, Spain
8. Parallel, Distributed and Multicore Computing
Track Co-chairs
Eduardo Alchieri, University of Brasilia, Brazil
Valentina Casola, University of Naples “Federico II”, Italy
Lidia Ogiela, Pedagogical University of Krakow, Poland
PC Members Aldelir Luiz Edson Tavares Fernando Dotti Hylson Neto Jacir Bordim Lasaro Camargos Luiz Rodrigues Marcos Caetano Flora Amato Urszula Ogiela
Catarinense Federal Institute, Brazil Federal University of Technology—Parana, Brazil Pontificia Universidade Catolica do Rio Grande do Sul, Brazil Catarinense Federal Institute, Brazil University of Brasilia, Brazil Federal University of Uberlandia, Brazil Western Parana State University, Brazil University of Brasilia, Brazil University of Naples “Federico II”, Italy Pedagogical University of Krakow, Poland
9. Energy Aware Computing and Systems
Track Co-chairs
Muzammil Behzad, University of Oulu, Finland
Zahoor Ali Khan, Higher Colleges of Technology, United Arab Emirates
PC Members Naveed Ilyas Muhammad Sharjeel Javaid Muhammad Talal Hassan Waseem Raza Ayesha Hussain Umar Qasim Nadeem Javaid Yasir Javed Kashif Saleem Hai Wang
Gwangju Institute of Science and Technology, South Korea University of Hafr Al Batin, Saudi Arabia COMSATS University Islamabad, Pakistan University of Lahore, Pakistan COMSATS University Islamabad, Pakistan University of Alberta, Canada COMSATS University Islamabad, Pakistan Higher Colleges of Technology, UAE King Saud University, Saudi Arabia Saint Mary’s University, Canada
10. Complex Systems, Software Modeling and Analytics
Track Co-chairs
Lech Madeyski, Wroclaw University of Science and Technology, Poland
Bigumiła Hnatkowska, Wroclaw University of Science and Technology, Poland
Yogesh Beeharry, University of Mauritius, Mauritius
PC Members Ilona Bluemke Anna Bobkowska Anna Derezińska Olek Jarzębowicz Miroslaw Ochodek Michał Śmiałek Anita Walkowiak-Gall Zbigniew Huzar Robert T. F. Ah King
Warsaw University of Technology, Poland Gdańsk University of Technology, Poland Warsaw University of Technology, Poland Gdańsk University of Technology, Poland Poznań University of Technology, Poland Warsaw University of Technology, Poland Wroclaw University of Science and Technology, Poland Wroclaw University of Science and Technology, Poland University of Mauritius, Mauritius
11. Multi-agent Systems, SLA Cloud and Social Computing
Track Co-chairs
Giuseppe Sarnè, Mediterranean University of Reggio Calabria, Italy
Douglas Macedo, Federal University of Santa Catarina, Brazil
Takahiro Uchiya, Nagoya Institute of Technology, Japan
PC Members Mario Dantas Luiz Bona Márcio Castro Fabrizio Messina Hideyuki Takahashi Kazuto Sasai Satoru Izumi Domenico Rosaci Lidia Fotia
Federal University of Juiz de Fora, Brazil Federal University of Parana, Brazil Federal University of Santa Catarina, Brazil University of Catania, Italy Tohoku University, Japan Ibaraki University, Japan Tohoku University, Japan Mediterranean University of Reggio Calabria, Italy Mediterranean University of Reggio Calabria, Italy
12. Internet of Everything and Machine Learning
Track Co-chairs
Omid Ameri Sianaki, Victoria University, Sydney, Australia
Khandakar Ahmed, Victoria University, Australia
Inmaculada Medina Bulo, Universidad de Cádiz, Spain
PC Members Farhad Daneshgar M. Reza Hoseiny F. Kamanashis Biswas (KB) Khaled Kourouche Huai Liu, Lecturer Mark A. Gregory Nazmus Nafi Mashud Rana Farshid Hajati Ashkan Yousefi Nedal Ababneh Lorena Gutiérrez-Madroñal Juan Boubeta-Puig
Victoria University, Sydney, Australia University of Sydney, Australia Australian Catholic University, Australia Victoria University, Sydney, Australia Victoria University, Australia RMIT University, Australia Victoria Institute of Technology, Australia CSIRO, Australia Victoria University, Sydney, Australia Victoria University, Sydney, Australia Abu Dhabi Polytechnic, Abu Dhabi, UAE University of Cádiz, Spain University of Cádiz, Spain
Guadalupe Ortiz Alfonso García del Prado Luis Llana
University of Cádiz, Spain University of Cádiz, Spain Complutense University of Madrid, Spain
CISIS-2021 Reviewers Adhiatma Ardian Ali Khan Zahoor Amato Alba Amato Flora Barolli Admir Barolli Leonard Bista Bhed Caballé Santi Chellappan Sriram Chen Hsing-Chung Cui Baojiang Dantas Mario De Benedictis Alessandra Di Martino Beniamino Dong Hai Durresi Arjan Enokido Tomoya Esposito Antonio Fachrunnisa Olivia Ficco Massimo Fotia Lidia Fun Li Kin Funabiki Nobuo Gotoh Yusuke Hussain Farookh Hussain Omar Javaid Nadeem Ikeda Makoto Ishida Tomoyuki Kikuchi Hiroaki Koyama Akio
Kryvinska Natalia Kulla Elis Lee Kyungroul Matsuo Keita Mostarda Leonardo Ogiela Lidia Ogiela Marek Okada Yoshihiro Palmieri Francesco Paruchuri Vamsi Krishna Poniszewska-Maranda Aneta Rahayu Wenny Rawat Danda Saito Takamichi Sakamoto Shinji Sato Fumiaki Scionti Alberto Sianaki Omid Ameri Sugawara Shinji Takizawa Makoto Taniar David Terzo Olivier Uehara Minoru Venticinque Salvatore Vitabile Salvatore Wang Xu An Woungang Isaac Xhafa Fatos Yim Kangbin Yoshihisa Tomoki
CISIS-2021 Keynote Talks
Asking AI Why: Explainable Artificial Intelligence Jayh (Hyunhee) Park Myongji University, Yongin, Korea
Abstract. In the early phases of AI adoption, it was acceptable not to understand why a model makes a certain prediction, as long as it gives the correct outputs. Explaining how models work was not the first priority. Now, the focus is turning to building human-interpretable models. In this invited talk, I will explain why explainable AI is important and how an AI model can be explained. I will discuss so-called black-box models, such as ensembles and neural networks, and deal with the following questions. • Why should we trust your model? • Why did the model take a certain decision? • What drives model predictions?
Coevolution of Semantic and Blockchain Technologies Antonio Esposito University of Campania “Luigi Vanvitelli”, Aversa, Italy
Abstract. Semantic technologies have demonstrated the capability to ease interoperability and portability issues in several application fields such as cloud computing and the Internet of things (IoT). Indeed, the increase in resource representation and the inference capabilities enabled by semantic technologies represent important components of current distributed software systems, which can rely on better information interoperability and decision autonomy. However, semantics alone cannot solve trust and reliability issues that, in many situations, can still arise within software systems. Blockchain solutions have proven to be effective in this area, creating data sharing infrastructures where information validation can be done without the need for third-party services. A coevolution and integration of semantic and blockchain technologies would at the same time enhance data interoperability and ensure data trust and provenance, creating undeniable benefits for distributed software systems. This talk will focus on the current state of the art regarding the integration of semantic and blockchain technologies, looking at the state of their coevolution and at the available and still needed solutions.
Contents
Four Grade Levels-Based Models with Random Forest for Student Performance Prediction at a Multidisciplinary University . . . . . . . . . . . Tran Thanh Dien, Le Duy-Anh, Nguyen Hong-Phat, Nguyen Van-Tuan, Trinh Thanh-Chanh, Le Minh-Bang, Nguyen Thanh-Hai, and Nguyen Thai-Nghe
1
The Role of Collective Engagement to Strengthen Organizational Identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Olivia Fachrunnisa, Ardian Adhiatma, and Ken Sudarti
13
A Novel Structural and Semantic Similarity in Social Recommender Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Imen Ben El Kouni, Wafa Karoui, and Lotfi Ben Romdhane
23
Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems . . . Davinder Kaur, Suleyman Uslu, Arjan Durresi, Sunil Badve, and Murat Dundar
35
Entity Relation Extraction Based on Multi-attention Mechanism and BiGRU Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lingyun Wang, Caiquan Xiong, Wenxiang Xu, and Song Lin
47
Time Series Prediction of Wind Speed Based on SARIMA and LSTM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Caiquan Xiong, Congcong Yu, Xiaohui Gu, and Shiqiang Xu
57
Dimensionality Reduction on Metagenomic Data with Recursive Feature Elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Huong Hoang Luong, Nghia Trong Le Phan, Tin Tri Duong, Thuan Minh Dang, Tong Duc Nguyen, and Hai Thanh Nguyen
68
The Application of Improved Grasshopper Optimization Algorithm to Flight Delay Prediction–Based on Spark . . . . . . . . . . . . . . . . . . . . . . Hongwei Chen, Shenghong Tu, and Hui Xu
80
Application of Distributed Seagull Optimization Improved Algorithm in Sentiment Tendency Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongwei Chen, Honglin Zhou, Meiying Li, Hui Xu, and Xun Zhou
90
Performance Evaluation of WMNs by WMN-PSOSA-DGA Hybrid Simulation System Considering Stadium Distribution of Mesh Clients and Different Number of Mesh Routers . . . 100
Admir Barolli, Shinji Sakamoto, Leonard Barolli, and Makoto Takizawa
A New Scheme for Slice Overloading Cost in 5G Wireless Networks Considering Fuzzy Logic . . . 110
Phudit Ampririt, Ermioni Qafzezi, Kevin Bylykbashi, Makoto Ikeda, Keita Matsuo, and Leonard Barolli
COVID-Prevention-Based Parking with Risk Factor Computation . . . 121
Walter Balzano and Silvia Stranieri
Coarse Traffic Classification for High-Bandwidth Connections in a Computer Network Using Deep Learning Techniques . . . 131
Marek Bolanowski, Andrzej Paszkiewicz, and Bartosz Rumak
A Privacy Preserving Hybrid Blockchain Based Announcement Scheme for Vehicular Energy Network . . . 142
Abid Jamal, Sana Amjad, Usman Aziz, Muhammad Usman Gurmani, Saba Awan, and Nadeem Javaid
Prediction of Wide Area Road State Using Measurement Sensor Data and Meteorological Mesh Data . . . 152
Yoshitaka Shibata and Akira Sakuraba
A Coverage Construction and Hill Climbing Approach for Mesh Router Placement Optimization: Simulation Results for Different Number of Mesh Routers and Instances Considering Normal Distribution of Mesh Clients . . . 161
Aoto Hirata, Tetsuya Oda, Nobuki Saito, Yuki Nagai, Masaharu Hirota, Kengo Katayama, and Leonard Barolli
Related Entity Expansion and Ranking Using Knowledge Graph . . . 172
Ryuya Akase, Hiroto Kawabata, Akiomi Nishida, Yuki Tanaka, and Tamaki Kaminaga
Zero Trust Security in the Mist Architecture . . . 185
Minoru Uehara
Blockchain Based Authentication for End-Nodes and Efficient Cluster Head Selection in Wireless Sensor Networks . . . 195
Sana Amjad, Usman Aziz, Muhammad Usman Gurmani, Saba Awan, Maimoona Bint E. Sajid, and Nadeem Javaid
The Redundant Active Time-Based Algorithm with Forcing Meaningless Replica to Terminate . . . 206
Tomoya Enokido, Dilawaer Duolikun, and Makoto Takizawa
A Novel Approach to Network’s Topology Evolution and Robustness Optimization of Scale Free Networks . . . 214
Muhammad Usman, Nadeem Javaid, Syed Minhal Abbas, Muhammad Mohsin Javed, Muhammad Aqib Waseem, and Muhammad Owais
Implementation of an Indoor Position Detecting System Using Mean BLE RSSI for Moving Omnidirectional Access Point Robot . . . 225
Atushi Toyama, Kenshiro Mitsugi, Keita Matsuo, Elis Kulla, and Leonard Barolli
A Survey on Internet of Things in Telehealth . . . 235
Komal Marwah and Farshid Hajati
Alexnet-Adaboost-ABC Based Hybrid Neural Network for Electricity Theft Detection in Smart Grids . . . 249
Muhammad Asif, Ashraf Ullah, Shoaib Munawar, Benish Kabir, Pamir, Adil Khan, and Nadeem Javaid
Blockchain and IPFS Based Service Model for the Internet of Things . . . 259
Hajra Zareen, Saba Awan, Maimoona Bint E Sajid, Shakira Musa Baig, Muhammad Faisal, and Nadeem Javaid
Building Social Relationship Skill in Digital Work Design . . . 271
Ardian Adhiatma and Umi Kuswatun Hasanah
How to Push Digital Ecosystem to Explore Digital Humanities and Collaboration of SMEs . . . 279
Marno Nugroho and Budhi Cahyono
IOTA-Based Mobile Application for Environmental Sensor Data Visualization . . . 288
Francesco Lubrano, Fabrizio Bertone, Giuseppe Caragnano, and Olivier Terzo
Electricity Theft Detection in Smart Meters Using a Hybrid Bi-directional GRU Bi-directional LSTM Model . . . 297
Shoaib Munawar, Muhammad Asif, Beenish Kabir, Pamir, Ashraf Ullah, and Nadeem Javaid
Developing Innovation Capability to Improve Marketing Performance in Batik SMEs During the Covid-19 Pandemic . . . 309
Alifah Ratnawati and Noor Kholis
Muthmai’nnah Adaptive Capability: A Conceptual Review . . . 324
Asih Niati, Olivia Fachrunnisa, and Mohamad Sodikin
Interaction Model of Knowledge Management, Green Innovation and Corporate Sustainable Development in Indonesia . . . 332
Siti Sumiati, Sri Wahyuni Ratnasari, and Erni Yuvitasari
The Impact of Covid-19 Pandemic on Continuance Adoption of Mobile Payments: A Conceptual Framework . . . 338
Dian Essa Nugrahini and Ahmad Hijri Alfian
An Analysis in the Application of the Unified Theory of Acceptance and Use of Technology (UTAUT) Model on Village Fund System (SISKEUDES) with Islamic Work Ethics as a Moderating Effect . . . 347
Khoirul Fuad, Winarsih, Luluk Muhimatul Ifada, Hendry Setyawan, and Retno Tri Handayani
MOC Approach and Its Integration with Social Network and ICT: The Role to Improve Knowledge Transfer . . . 357
Tri Wikaningrum
An Integrated System for Actor Node Selection in WSANs Considering Fuzzy Logic and NS-3 and Its Performance Evaluation . . . 365
Yi Liu, Shinji Sakamoto, and Leonard Barolli
Design of an Intelligent Driving Support System for Detecting Distracted Driving . . . 377
Masahiro Miwata, Mitsuki Tsuneyoshi, Yoshiki Tada, Makoto Ikeda, and Leonard Barolli
Detection of Non-Technical Losses Using MLP-GRU Based Neural Network to Secure Smart Grids . . . 383
Benish Kabir, Pamir, Ashraf Ullah, Shoaib Munawar, Muhammad Asif, and Nadeem Javaid
Synthetic Theft Attacks Implementation for Data Balancing and a Gated Recurrent Unit Based Electricity Theft Detection in Smart Grids . . . 395
Pamir, Ashraf Ullah, Shoaib Munawar, Muhammad Asif, Benish Kabir, and Nadeem Javaid
Blockchain Enabled Secure and Efficient Reputation Management for Vehicular Energy Network . . . 406
Abid Jamal, Muhammad Usman Gurmani, Saba Awan, Maimoona Bint E. Sajid, Sana Amjad, and Nadeem Javaid
Religious Value Co-Creation: A Strategy to Strengthen Customer Engagement . . . 417
Ken Sudarti, Olivia Fachrunnisa, Hendar, and Ardian Adhiatma
Environmental Performance Announcement and Shareholder Value: The Role of Environmental Disclosure . . . 426
Luluk Muhimatul Ifada, Munawaroh, Indri Kartika, and Khoirul Fuad
Integrating Corporate Social Responsibility Disclosure and Environmental Performance for Firm Value: An Indonesia Study . . . 435
Maya Indriastuti and Anis Chariri
Financial Technology and Islamic Mutual Funds Investment . . . 446
Mutamimah and Rima Yulia Sueztianingrum
Towards Spiritual Wellbeing in Organization: Linking Ihsan Achievement Oriented Leadership and Knowledge Sharing Behaviour . . . 455
Mohamad Sodikin, Olivia Fachrunnisa, and Asih Niati
Tax Avoidance and Performance: Initial Public Offering . . . 464
Kiryanto, Mutoharoh, and Zaenudin
Knowledge Sharing, Innovation Strategy and Innovation Capability: A Systematic Literature Review . . . 473
Mufti Agung Wibowo, Widodo, Olivia Fachrunnisa, Ardian Adhiatma, Marno Nugroho, and Yulianto Prabowo
The Determinant of Sustainable Performance in Indonesian Islamic Microfinance: Role of Accounting Information System and Maqashid Sharia . . . 484
Provita Wijayanti and Intan Salwani Mohamed
The Role of Digital Utilization in Accounting to Enhance MSMEs’ Performance During COVID-19 Pandemic: Case Study in Semarang, Central Java, Indonesia . . . 495
Hani Werdi Apriyanti and Erni Yuvitasari
The Role of Confidence in Knowledge and Psychological Safety on Knowledge Sharing Improvement of Human Resources in Organization . . . 505
Arizqi
A Model of Agency Theory-Based Firm Value Improvement Through Cash Holding with Firm Size and Profitability as Control Variable . . . 514
Ibnu Khajar and Ayu Rakhmawati Kusumaningtyas
The Model of Tax Compliance Assessment in MSMEs . . . 524
Devi Permatasari, Naila Najihah, and Mutoharoh
Survival and Sustainability Strategies of Small and Medium Enterprises (SMEs) During and After Covid-19 Pandemic: A Conceptual Framework . . . 534
Naila Najihah, Devi Permatasari, and Mutoharoh
Bridging the Semantic Gap in Continuous Auditing Knowledge Representation . . . 544
Sri Sulistyowati, Indri Kartika, Imam Setijawan, and Maya Indriastuti
Comparison of Financing Resources to Support Micro and Small Business Sustainability . . . 555
Mutoharoh, Devi Permatasari, and Naila Najihah
The Mediating of Green Product Innovation on the Effect of Accounting Capability and Performance Financial of MSMEs in the New Normal Era . . . 565
Winarsih, Khoirul Fuad, and Hendri Setyawan
Supply Chain Management Quality Improvement Model with Adaptive and Generative Relationship Learning . . . 573
Lutfi Nurcholis and Ardian Adhiatma
Company’s Characteristics and Intellectual Capital Disclosure: Empirical Study at Technology Companies of Singapore . . . 580
Dista Amalia Arifah, Anis Chariri, and Pujiharto
The Influence of Sustainability Report on Islamic Banking Performance in Indonesia . . . 590
Muhammad Jafar Shodiq
The Antecedent and Consequences of Commitment to the Environment in Environmentally Friendly Automotive Products . . . 598
Tanti Handriana, Praptini Yulianti, and Decman Praharsa
Towards a Trustworthy Semantic-Aware Marketplace for Interoperable Cloud Services . . . 606
Emanuele Bellini, Stelvio Cimato, Ernesto Damiani, Beniamino Di Martino, and Antonio Esposito
Toward ECListener: An Unsurpervised Intelligent System to Monitor Energy Communities . . . 616
Gregorio D’Agostino, Alberto Tofani, Beniamino Di Martino, and Fiammetta Marulli
Semantic Techniques for IoT Sensing and eHealth Training Recommendations . . . 627
Beniamino Di Martino and Serena Angela Gracco
PrettyTags: An Open-Source Tool for Easy and Customizable Textual MultiLevel Semantic Annotations . . . 636
Beniamino Di Martino, Fiammetta Marulli, Mariangela Graziano, and Pietro Lupi
Supporting the Optimization of Temporal Key Performance Indicators of Italian Courts of Justice with OLAP Techniques . . . 646
Beniamino Di Martino, Luigi Colucci Cante, Antonio Esposito, Pietro Lupi, and Massimo Orlando
Semantic Techniques for Automated Recognition of Building Types in Cultural Heritage Domain . . . 657
Beniamino Di Martino, Mariangela Graziano, and Nicla Cerullo
Semantic Representation and Rule Based Patterns Discovery and Verification in eProcurement Business Processes for eGovernment . . . 667
Beniamino Di Martino, Datiana Cascone, Luigi Colucci Cante, and Antonio Esposito
Research on the Development of Programming Support Systems Focused on the Cooperation Between Activity Diagrams and Scratch . . . 677
Kazuhiro Kobashi, Kazuaki Yoshihara, and Kenzi Watanabe
Research on the Development of Keyboard Applications for Reasonable Accommodation . . . 686
Reika Okuya, Kazuaki Yoshihara, and Kenzi Watanabe
Development of a Teaching Material for Information Security that Detects an Unsecure Wi-Fi Access Point . . . 694
Kazuaki Yoshihara, Taisei Iwasaki, and Kenzi Watanabe
A Study of Throughput Drop Estimation Model for Concurrently Communicating Links Under Coexistence of Channel Bonding and Non-bonding in IEEE 802.11n WLAN . . . 700
Kwenga Ismael Munene, Nobuo Funabiki, Hendy Briantoro, Md. Mahbubur Rahman, Sujan Chandra Roy, and Minoru Kuribayashi
Dynamic Fog Configuration for Content Sharing with Peer-to-Peer Network Using Mobile Terminals in a City . . . 715
Takuya Itokazu and Shinji Sugawara
Voice Quality Change Due to the Amount of Training Data for Multi- and Target-Speaker WaveNet Vocoders . . . 727
Satoshi Yoshida, Shingo Uenohara, Keisuke Nishijima, and Ken’ichi Furuya
Web-Based Collaborative VR Training System and Its Log Functionality for Radiation Therapy Device Operations . . . 734
Yuta Miyahara, Kosuke Kaneko, Toshioh Fujibuchi, and Yoshihiro Okada
Action Input Interface of IntelligentBox Using 360-Degree VR Camera and OpenPose for Multi-persons’ Collaborative VR Applications . . . 747
Bai Yu, Wei Shi, and Yoshihiro Okada
Author Index . . . 759
Four Grade Levels-Based Models with Random Forest for Student Performance Prediction at a Multidisciplinary University Tran Thanh Dien, Le Duy-Anh, Nguyen Hong-Phat, Nguyen Van-Tuan, Trinh Thanh-Chanh, Le Minh-Bang, Nguyen Thanh-Hai, and Nguyen Thai-Nghe(B) College of Information and Communication Technology, Can Tho University, Can Tho, Vietnam {thanhdien,nthai.cit,ntnghe}@ctu.edu.vn
Abstract. Student performance prediction is a critical task in universities. By predicting student performance at an early stage, we can identify students who need more attention to improve their learning performance. These forecasts also help students select appropriate courses and design good study plans to obtain higher performance. Previous studies usually use only one model for all kinds of students, regardless of each student's ability and characteristics. For multidisciplinary universities, this type of model can perform poorly. In this study, we propose to consider the four grade levels of degree classification in Vietnam and to base the prediction on the average performance in previous semesters. Four student groups are trained separately on their own mark records; the model used for a student depends on the student's average mark in previous semesters. The proposed method is validated on more than 4.5 million mark records of nearly 100,000 students at a multidisciplinary university in Vietnam. The experimental results show that the four Random Forest-based models yield an average mean absolute error of 0.452, compared with an error of 0.557 when using a single Random Forest regression model.
Keywords: Recommendation system · Student performance · Grade levels · Random forest · Multidisciplinary university

1 Introduction
Researchers are conducting educational data mining to address urgent issues in education and to improve the training quality of education centers. Universities and colleges also store tremendous amounts of student scores in their information systems. Such systems provide a foundation for researchers to build prediction systems. Recently, the number of university students being warned and forced to leave school has tended to increase.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. L. Barolli et al. (Eds.): CISIS 2021, LNNS 278, pp. 1–12, 2021. https://doi.org/10.1007/978-3-030-79725-6_1
Another important reason is that students cannot evaluate and correctly predict their own ability in order to choose the right courses. Meanwhile, student performance results are an essential concern of higher education institutions because they are a criterion for evaluating a university's quality, but currently there is a problem with student status. According to [1], student performance can be calculated by measuring the learning assessment and the curriculum. In recent years, the number of university students receiving academic warnings has tended to accelerate. Taking Can Tho University as an example, in the first semester of the school year 2018–2019,¹ the number of students academically warned for one semester was 886 and for two consecutive semesters was 125; in the first semester of the academic year 2019–2020, these numbers were 986 and 196, respectively. Data mining is one of the most popular approaches widely applied in the educational area, and one of the most popular techniques to predict student performance is classification. Several algorithms are used for classification tasks, such as Decision Tree, Artificial Neural Networks, Naive Bayes, K-Nearest Neighbor, and Support Vector Machines [2]. This study proposes a new training scheme for student performance prediction based on students' grade levels. The four considered levels are handled by four different models (for "Excellent", "Very good", "Good", and "Fairly" performance) that use Random Forest (RF) to predict a student's performance in the following semesters. The results show that the proposed model provides accurate predictions and can be applied in other practical cases.
2 Related Work
Predicting students' academic performance is critical because it allows institutions to support students in achieving better learning performance. The work of [3] surveyed different modern techniques that have been widely applied for predicting student performance. These techniques belong to several areas such as Artificial Intelligence, Machine Learning, Collaborative Filtering, Recommender Systems, and Artificial Neural Networks. The authors in [4] recast the student performance prediction problem as a sequential event prediction problem and propose a new deep learning-based algorithm, termed GritNet, which builds upon the bidirectional long short-term memory (BLSTM). From real Udacity students' graduation predictions, they show that GritNet not only consistently outperforms the standard logistic-regression-based method, but that the improvements are substantially pronounced in the first few weeks, when accurate predictions are most challenging. Predicting student academic performance is also an important research topic in Educational Data Mining, which uses machine learning and data mining techniques to explore data from educational settings; a classification model to predict student performance can be developed using Deep Learning, a representative multi-level learning automation model.
¹ www.ctu.edu.vn.
The authors in [5] deployed pre-trained hidden layers and then fine-tuned the parameters. They trained the model on a relatively sizeable real-world student dataset, and the experimental results show the proposed method's effectiveness when applied to an academic pre-warning mechanism. According to [6], transferring knowledge from one domain to another has gained much attention among scientists in recent years. They investigate the effectiveness of transfer learning from deep neural networks for student performance prediction in higher education. The experiments were conducted on data originating from five compulsory courses of two undergraduate programs. Another study [7] proposed a system for predicting students' course results using a free recommender system library, MyMediaLite. The idea is based on the grade data collected from the grade management system. The authors proposed using the Biased Matrix Factorization (BMF) technique to predict student performance, which acts as the basis for selecting appropriate subjects, and integrated MyMediaLite (an open-source recommendation library) into the proposed system. This system helps students choose courses suitable for their ability and facilitates in-depth scientific research according to the credit system. The authors in [8] used Collaborative Filtering (CF), Matrix Factorization (MF), and Restricted Boltzmann Machine (RBM) techniques to systematically analyze real-world data collected from an Information Technology university; the RBM technique predicted students' performance in a particular course better than the other techniques. The authors of [9] applied two machine learning approaches, logistic regressions and decision trees, to predict student dropout at the Karlsruhe Institute of Technology. The models are computed from examination data, i.e., data available at all universities without specific collection. They therefore propose a systematic approach that may be put into practice with relative ease at other institutions and find that decision trees produce slightly better results than logistic regressions; both methods yield high prediction accuracies of up to 95% after three semesters, and a classification with more than 83% accuracy is already possible after the first semester. The authors in [10] stated that it is possible to reach a reasonable accuracy rate on a small training set. The prediction of student performance has been addressed by numerous studies with positive results; however, there is still much room to improve prediction efficiency. In this research, we propose a new approach based on machine learning techniques. Random Forest gives high prediction performance compared with existing supervised algorithms. We suggest using four different models based on grade levels (Excellent, Very good, Good, Fairly) instead of one model as in traditional training.
After we have trained and obtained the four models, to predict a new mark entry of a student for a course in a specific semester, we compute the student's average score over the courses taken in the previous semesters. Based on this average, we apply the model corresponding to the student's grade level. In this study, Random Forest, a robust machine learning algorithm, is leveraged for the student performance prediction tasks. The obtained results are rather promising, with an average Mean Absolute Error of 0.45 on a 4.00 grade scale.
3 Data Description
To evaluate the proposed model, we have collected real datasets from the Student Management System of Can Tho University.²
Fig. 1. Distribution of mark levels on training set
The collected data relate to students, courses, marks, and other information from 2007 to 2020, with 4,530,865 records, 4,967 courses (subjects), and 97,543 students. The distributions of student marks are shown in Fig. 1 and Fig. 2. In this study, we use Algorithm 1 from previous work [11] to pre-process the dataset before evaluating the model.
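For orientation, the summary counts above can be reproduced with a few lines of pandas once the mark records are exported. The file name and column names below (mark_records.csv, student_id, course_id) are illustrative assumptions only, not the actual schema of the university system.

```python
import pandas as pd

# Load the collected mark records (file and column names are assumptions).
df = pd.read_csv("mark_records.csv")

print("mark records :", len(df))                    # expected: 4,530,865
print("courses      :", df["course_id"].nunique())  # expected: 4,967
print("students     :", df["student_id"].nunique()) # expected: 97,543
```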
4 Proposed Approach
As revealed in Fig. 3 and Algorithm 2, our approach includes four steps. In the first step, we divide the complete dataset into training and test sets based on time.
² Can Tho University, 2020. Management Information System, accessed on 12 May 2020. Available from https://htql.ctu.edu.vn/.
Fig. 2. Distribution of mark levels on testing set
The training set covers 2007 to 2017 and the test set covers 2018 to 2020. In the second step, we compute the average score of every student in the training period in order to train four different models based on grade levels (Excellent, Very Good, Good, Fairly). In the third step, we divide students according to their average scores and, for each group, collect the mark entries of all students in that group. In the final step, after the four grade-level models have been built, each record of the test set is routed to one of the four models according to the student's GPA, and that model predicts the student's score. The predicted output takes a value in [0, 1, 1.5, 2, 2.5, 3, 3.5, 4].
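The routing and rounding performed in this final step can be sketched as follows. The grade bands are the ones listed in Algorithm 2; everything else (the function names, the dictionary of trained models, and snapping the regressor's continuous output to the nearest valid grade) is our own illustrative assumption, since the paper does not spell out these details.

```python
import numpy as np

GRADE_SCALE = np.array([0.0, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])

def pick_model(prev_gpa, models):
    """Return the grade-level model matching a student's previous-semester GPA.

    `models` is assumed to map 'excellent', 'very_good', 'good' and 'fairly'
    to the four trained Random Forest regressors.
    """
    if prev_gpa >= 3.6:
        return models["excellent"]
    if prev_gpa >= 3.2:
        return models["very_good"]
    if prev_gpa >= 2.5:
        return models["good"]
    return models["fairly"]

def predict_mark(features, prev_gpa, models):
    """Predict one mark entry and snap it to the 4-point grade scale."""
    raw = pick_model(prev_gpa, models).predict([features])[0]
    return GRADE_SCALE[np.abs(GRADE_SCALE - raw).argmin()]
```

With this helper, a student whose previous GPA is exactly 3.2, for example, is routed to the "very good" model, matching the band boundaries of Algorithm 2.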
Fig. 3. Proposed model based on grade levels
Algorithm 1: Algorithm for pre-processing and filtering features from the student management system
Begin
Step 1: Redundant attributes such as Student Name, Course Name, Lecturer Name, etc., are eliminated from the original set of features collected from Can Tho University's student management system.
Step 2: For each student, mark entries that did not contain a specific value (for example, null or empty values) or that belonged to exemption courses, etc., were removed.
Step 3: Transform features of text type into numeric values.
End
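A minimal pandas rendering of Steps 1 and 2 of Algorithm 1 could look like the sketch below. The column names ('student_name', 'mark', 'is_exempt', and so on) are placeholders for the real fields of the student management system, and Step 3 (the text-to-numeric transformation) is illustrated separately after Fig. 4.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Step 1: drop attributes that are redundant for the prediction task.
    df = df.drop(columns=["student_name", "course_name", "lecturer_name"],
                 errors="ignore")
    # Step 2: remove mark entries without a concrete value and exemption
    # courses ('is_exempt' is a hypothetical flag for the real indicator).
    df = df.dropna(subset=["mark"])
    if "is_exempt" in df.columns:
        df = df[df["is_exempt"] == 0]
    return df
```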
As shown in Fig. 4, to pre-process data of string type, we use OrdinalEncoder to transform string values into numeric values for training the models. The two features that need to be transformed are Student ID and Course ID.³
Fig. 4. An illustration of transformation from strings of Student ID, Course ID to numbers
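The transformation of Fig. 4 corresponds to scikit-learn's OrdinalEncoder; the snippet below uses made-up identifiers purely for illustration.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Toy records with made-up identifiers, mirroring the transformation in Fig. 4.
df = pd.DataFrame({"student_id": ["B1706150", "B1706151", "B1706150"],
                   "course_id":  ["CT101",    "CT102",    "CT102"]})

encoder = OrdinalEncoder()
df[["student_id", "course_id"]] = encoder.fit_transform(
    df[["student_id", "course_id"]])
print(df)  # both columns are now numeric codes usable by the regressor
```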
A Random Forest (RF) regressor with 80 trees, a minimum of 30 samples required to split an internal node, and a maximum tree depth of 30 is used for learning and for the student performance prediction tasks. The training set was fed into the RF for the learning phase, and the trained model was then applied to predict new data entries.
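With these settings, the regressor can be instantiated in scikit-learn as shown below; note that n_jobs and random_state are our own additions for convenience and reproducibility and are not specified in the paper.

```python
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=80,       # 80 trees in the forest
                              min_samples_split=30,  # min samples to split a node
                              max_depth=30,          # maximum tree depth
                              n_jobs=-1,             # assumption: use all cores
                              random_state=0)        # assumption: fixed seed
# One such regressor is trained per grade-level group:
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```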
5 Experimental Results
From the original dataset, we create training and test sets based on time. The training set consists of 3,874,421 records covering the period from 2007 to 2017, while the test set includes 656,444 mark records from 2018 to 2020. The reported performance is the average result of five runs.
³ https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html, accessed on 31 March 2021.
Algorithm 2: Steps of dataset division to build four models based on GPA
Begin
Step 1: We divide the original full database into the training set and the test set based on time. The training set covers 2007 to 2017, while the test set ranges from 2018 to 2020.
Step 2: In the training dataset, we compute each student's average score.
Step 3: We group students who fall in the same range of scores as follows (the score of a course in the considered university ranges from 0 to 4):
- Excellent Grade Point Average (GPA) model: 4.0 ≥ GPA ≥ 3.6
- Very good GPA model: 3.6 > GPA ≥ 3.2
- Good GPA model: 3.2 > GPA ≥ 2.5
- Fairly GPA model: GPA < 2.5
Step 4: For each group, we select the mark entries of all students in that group.
Step 5: We train each group separately and obtain four models, one for each range of scores detailed in Step 3.
End
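A compact pandas sketch of Algorithm 2 is given below; 'year', 'student_id' and 'mark' are illustrative column names, and the helper is our own rendering rather than the authors' code.

```python
import pandas as pd

def split_by_grade_level(df: pd.DataFrame):
    # Step 1: time-based split (training 2007-2017, test 2018-2020).
    train, test = df[df["year"] <= 2017], df[df["year"] >= 2018]

    # Step 2: average score of each student over the training period.
    gpa = train.groupby("student_id")["mark"].mean()

    # Step 3: map the averages to the four grade levels of Algorithm 2.
    bins = [0.0, 2.5, 3.2, 3.6, 4.0 + 1e-9]   # upper edge slightly > 4.0 so GPA = 4.0 is included
    labels = ["fairly", "good", "very_good", "excellent"]
    levels = pd.cut(gpa, bins=bins, labels=labels, right=False)

    # Steps 4-5: collect each group's mark entries for separate training.
    groups = {name: train[train["student_id"]
                          .isin(levels[levels == name].index)]
              for name in labels}
    return groups, test
```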
In the experiments, we use two metrics, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), to compare the performance of the methods. RMSE and MAE are calculated by Eqs. (1) and (2), respectively.
Fig. 5. Comparisons MAE and RMSE between RF and Four Models with RF
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}    (1)

MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|    (2)
where $y_i$ is the actual value and $\hat{y}_i$ is the predicted value. To evaluate the efficiency of the proposed method, we also compare its results with those of a single model trained for all students. In both cases, the training data is fed into a Random Forest regression algorithm with 80 trees, a minimum of 30 samples to split an internal node, and a maximum tree depth of 30.
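Both metrics can be implemented directly from Eqs. (1) and (2), or taken from scikit-learn (mean_squared_error with squared=False, and mean_absolute_error); a plain NumPy version is shown here as a reference.

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))   # Eq. (1)

def mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))           # Eq. (2)
```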
Fig. 6. Distribution comparison between actual scores and predicted scores.
As revealed in Fig. 5, we compare the results of one model for all types of students with those of the proposed four models. The single model yields an MAE of 0.557, while the proposed approach reduces the error to 0.452. Also, in terms of RMSE, there is a significant improvement of 22% when we compare the proposed method with the single-model approach. With a single model, we do not consider the student's past performance and we ignore grade levels; predicting course scores for all students with one model is therefore less accurate than the proposed method, which uses grade levels to build four models. The four models help to better predict the scores of the following semester based on each student's GPA in the previous semesters. In other words, dividing the dataset to train four models based on grade levels improves student performance prediction. Figure 6 exhibits the distributions of predicted and actual scores. As observed, the actual values can be higher than the predictions in the score range from 0 to 2.5, while the opposite holds in the remaining range. In Fig. 7, we compare the performance of the four proposed models. We can see that the ‘Excellent’ and ‘Very Good’ models reveal lower errors, while the ‘Good’ and ‘Fairly’ models show higher errors in both MAE and RMSE. The students handled by the ‘Excellent’ and ‘Very Good’ models tend to achieve high marks in their courses, as they already showed good performance in the previous semesters.
Fig. 7. Performance comparison with mark levels in MAE and RMSE
Fig. 8. Comparison between actual and predicted scores with the model for the 'excellent' level.
high scores, while other courses can obtain worse marks. To predict a student's course score in a specific semester with the proposed method, we first compute the student's average score over the previous semesters. Based on this average, the approach chooses the model used for the prediction. For the test phase, we compute each student's average score in the previous semesters and distribute the students into four groups accordingly. The distributions of predicted and actual scores for the four groups are illustrated in Fig. 8, Fig. 9, Fig. 10, and Fig. 11. In these charts, the red lines represent the actual scores, while the blue lines exhibit the predicted values. Figure 8 exposes the predicted scores against the ground truth for the 'excellent' group. As seen from the figure, mark records predicted with the 'excellent' model are nearly identical to the actual scores; in the range 0–2.5, they share the same pattern. Figure 9 illustrates that the 'very good' group behaves similarly to the 'excellent' group. In Fig. 10 and Fig. 11, we found that the prediction results are worse than for the two models above. The 'good' and 'fairly' models made most predictions in the range from 2.5 to 3.5, while the actual
Fig. 9. Comparison between actual and predicted scores with the model for the 'very good' level.
Fig. 10. Comparison between actual and predicted scores with the model for the 'good' level.

Table 1. Details of training and test performance with the predicted numbers of records

| Score | Range | Model One Test | Model One Pred | Excellent Test | Excellent Pred | Very Good Test | Very Good Pred | Good Test | Good Pred | Fairly Test | Fairly Pred |
| 4 | 3.5–4 | 104,257 | 88,561 | 9,801 | 21,293 | 25,953 | 15,246 | 36,184 | 3,250 | 7,199 | 0 |
| 3.5 | 3–3.5 | 167,273 | 267,504 | 6,964 | 0 | 30,831 | 74,377 | 60,391 | 74,261 | 14,094 | 928 |
| 3 | 2.5–3 | 161,817 | 208,661 | 2,840 | 0 | 19,632 | 850 | 57,819 | 134,467 | 16,855 | 22,035 |
| 2.5 | 2–2.5 | 55,834 | 56,120 | 605 | 0 | 4,923 | 25 | 19,940 | 15,711 | 7,074 | 38,949 |
| 2 | 1.5–2 | 80,830 | 34,922 | 660 | 0 | 5,839 | 0 | 28,711 | 695 | 12,028 | 10,719 |
| 1.5 | 1–1.5 | 29,834 | 663 | 203 | 0 | 1,560 | 0 | 10,085 | 175 | 5,160 | 309 |
| 1 | 0.5–1 | 38,448 | 13 | 180 | 0 | 1,441 | 0 | 11,632 | 0 | 7,061 | 0 |
| 0 | 0–0.5 | 18,151 | 0 | 40 | 0 | 319 | 0 | 3,797 | 0 | 3,469 | 0 |
| Total | | 656,444 | 656,444 | 21,293 | 21,293 | 90,498 | 90,498 | 228,559 | 228,559 | 72,940 | 72,940 |
value distribution of the 'good' and 'fairly' groups seems to be more uniform than the others. This can lead to a significant error when we compare the actual and predicted scores. Details of the experiments are presented in Table 1.
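A small, hypothetical helper illustrating how the grade-level model could be selected from a student's previous-semester average; the dictionary of trained models and the thresholds follow Algorithm 2.

```python
# Hypothetical model selector: picks the grade-level model from a student's
# average score in previous semesters (model names/dict are illustrative).
def select_model(models: dict, previous_avg: float):
    if previous_avg >= 3.6:
        return models["excellent"]
    if previous_avg >= 3.2:
        return models["very_good"]
    if previous_avg >= 2.5:
        return models["good"]
    return models["fairly"]

# usage sketch: score = select_model(models, 3.4).predict(features)
```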
Fig. 11. Comparison between actual and predicted scores with the model for the 'fairly' level.
6 Conclusion
In this study, we propose a new training method that divides the dataset based on grade levels to predict student performance. Four models predict subject scores for each student at universities and colleges. We analyzed and applied several data pre-processing techniques before feeding the data into the Random Forest algorithm to make predictions. The proposed method has shown very positive results and is expected to be applied in the future. Using these results, we can help universities, academic advisors, and students make better plans for the future while improving students' grades and the training quality of the university. We will continue to work on other datasets and change the settings to improve the results.

Acknowledgment. Can Tho University funded this work under grant number TSV2021-40.
References
1. Mat, U., Buniyamin, N., Arsad, P., Kassim, R.: An overview of using academic analytics to predict and improve students' achievement: a proposed proactive intelligent intervention. In: 2013 IEEE 5th Conference on Engineering Education (ICEED) (2013). https://doi.org/10.1109/iceed.2013.6908316
2. Nguyen, T., Zucker, J.: Enhancing metagenome-based disease prediction by unsupervised binning approaches. In: 2019 11th International Conference on Knowledge and Systems Engineering (KSE) (2019). https://doi.org/10.1109/kse.2019.8919295
3. Rastrollo-Guerrero, J., Gómez-Pulido, J., Durán-Domínguez, A.: Analyzing and predicting students' performance by means of machine learning: a review. Appl. Sci. 10, 1042 (2020). https://doi.org/10.3390/app10031042
4. Kim, B.-H., Vizitei, E., Ganapathi, V.: GritNet: student performance prediction with deep learning (2018). arXiv abs/1804.07405
5. Guo, B., Zhang, R., Xu, G., Shi, C., Yang, L.: Predicting students performance in educational data mining. In: 2015 International Symposium on Educational Technology (ISET) (2015). https://doi.org/10.1109/iset.2015.33
6. Tsiakmaki, M., Kostopoulos, G., Kotsiantis, S., Ragos, O.: Transfer learning from deep neural networks for predicting student performance. Appl. Sci. 10, 2145 (2020). https://doi.org/10.3390/app10062145
7. Huynh-Ly, T.-N., Thai-Nghe, N.: A system for predicting students' course results using a free recommender system library of MyMediaLite (2013). (in Vietnamese). https://dspace.ctu.edu.vn/jspui/bitstream/123456789/997/1/HNHT/2018/002/192-201.pdf
8. Iqbal, Z., Qadir, J., Mian, A., Kamiran, F.: Machine Learning Based Student Grade Prediction: A Case Study. Computers and Society (2017). arXiv abs/1708.08744v1
9. Kemper, L., Vorhoff, G., Wigger, B.: Predicting student dropout: a machine learning approach. Eur. J. High. Educ. 10, 28–47 (2020). https://doi.org/10.1080/21568235.2020.1718520
10. Abu Zohair, L.: Prediction of student's performance by modelling small dataset size. Int. J. Educ. Technol. High. Educ. 16 (2019). https://doi.org/10.1186/s41239-019-0160-3
11. Dien, T., Hoai-Sang, L., Thanh-Hai, N., Thai-Nghe, N.: Course recommendation with deep learning approach. In: Future Data and Security Engineering. Big Data, Security and Privacy, Smart City and Industry 4.0 Applications, pp. 63–77 (2020). https://doi.org/10.1007/978-981-33-4370-2_5
The Role of Collective Engagement to Strengthen Organizational Identity

Olivia Fachrunnisa, Ardian Adhiatma, and Ken Sudarti

Department of Management, Faculty of Economics, Universitas Islam Sultan Agung (UNISSULA), Jl. Kaligawe Raya Km. 4, Semarang, Indonesia
{olivia.fachrunnisa,ardian,kensudarti}@unissula.ac.id
Abstract. This study aims to test a model of strengthening organizational identity through managerial motivation and collective engagement. Managerial motivation refers to the Top Management Team's spirit and determination to achieve higher excellence professionally, while collective engagement is manifested in the form of cognitive engagement and affective engagement. Collective engagement is based on mutual reinforcement and help among members of the organization. A total of 147 respondents, who serve as part of a Top Management Team (TMT) or as leaders in educational institutions, were involved in this research. Data were collected using a questionnaire and analyzed using SmartPLS. The results showed that managerial motivation, which comprises Need of Achievement and Need of Growth, forms collective engagement. This collective engagement strengthens organizational identity.

Keywords: Affective and cognitive engagement · Managerial motivation · Organizational identity
1 Introduction

An organizational association community is a group of organizations incorporated on the basis of achieving greater common goals, usually formed on the basis of a common vision, mission, and goals. The formation of an organizational association, such as professional associations, business associations, and company associations, is often due to the similarity of identities held by each of the organizations involved. Hence, the strength of an organizational association as a community depends on the extent to which members are attached to the organization. Recent studies have examined the factors that strengthen community identity, such as value congruence, engagement, leadership, and managerial style [1]. When community members are engaged and focused on achieving goals, the activities of sharing information, sharing values, and sharing vision will occur and strengthen each other. The concept of individual engagement has been discussed extensively by several previous researchers. Engagement is defined as a more comprehensive description of a person's investment of affective, behavioral, and cognitive energy at work. In a community or association, involvement is measured at the organizational level, involving all community members [2, 3]. Thus, collective involvement is an organizational and group-level construct which is an indicator of the existence of a
motivational environment within the organization, whereas individual engagement is based on one's own engagement with the organization, so at this level evaluative aspects dominate [4]. On the other side, competition is a situation in which actors fight for goals and seek competitive advantages that are superior to competitors so that they can win the competition [5]. From a management perspective, competition is the effort of two or more individuals or organizations struggling to achieve their respective goals through the most profitable alternative option. Hard and aggressive competition can still benefit an organization if the strongest winners are willing to involve themselves in altruism and create mutually supportive relationships, mutual trust, cooperation, and engagement. The competitive incentive embedded in competition is sometimes used to solve problems and is intended to encourage productive behavior. Meanwhile, when the target set in a competition is not achieved (the competition does not succeed in reaching the desired goal), it will reduce the level of motivation, productivity, and organizational performance [6]. The results of research by Chong and Rundus [7] state that there is a positive influence of market competition on organizational performance through the role of managerial practices. Research findings from Bunger et al. [8] show the need for a managerial or leadership role concerned with managing collaboration and competition between organizations. Meanwhile, a study on the duality of collaboration-competition by Grant and Dacin [9] found that collaborative and competitive activities lead members of the organization to achieve common goals and balance pressures (for example, collaborating in studying sportive competition strategies). Hence, it can be concluded that existing research mostly discusses balancing collaboration and competition at the individual level. In this research, we introduce a form of competition practice that is based on the organizational level. This managerial practice will moderate the relationship between collective engagement and community identity. Therefore, this research becomes important to compile a basic description and concept regarding collapetition management practices and their impact on strengthening community identity. The balance between collaboration and competition is an important factor to improve the welfare of community members, which ultimately strengthens community identity. The results of this study are expected to improve the identity of a community or organizational association through engagement theory that develops from the individual level to the organizational level and managerial policies oriented to collapetition.
2 Literature Review and Hypothesis Development

2.1 Collaboration and Competition
A study by Hu et al. [10] states the need to use information technology for modeling formal and positive collaboration (cultural change, tolerance, trust, altruism, and skill use). In addition, a plan-do-act collaboration strategy as a form of applying knowledge models can increase staff productivity and maximize the use of resources, so that employees are more motivated and continuous improvement is achieved. On the other side,
research about competition has also been widely discussed. Naidoo and Sutherland [11] show that internal competition within organizations can foster the motivation of members of the organization and the community to improve their competencies and become superior, which provides outcomes in continuous performance improvement. The formation of collaboration between organizations can be achieved with support and motivation for discussion and dialogue on the issues faced by members of the organizational community, in order to achieve goals, mutual understanding, mutual trust, win-win relationships, mutual help, and complementarity. Furthermore, intrinsic motivation is formed in every individual who is involved in collaboration, in terms of voluntary engagement to increase creativity and the innovation ability needed to gain competitive advantage. In a co-creation community there can be collaboration in competition and competition in collaboration. Collaboration in the community will give members the ability to engage in competition, and vice versa, competition that occurs in the community can lead to collaboration. Members of the community work in a collaborative and competitive environment to achieve mutual goals that are mutually beneficial. The interplay of collaboration and competition encourages the formation of an ecosystem culture that is conducive to learning effective competition strategies within collaboration, and vice versa. Collapetition is thus an effort to merge and balance the level of collaboration and competition. Collapetition practices have been discussed by Hutter et al. [12]. They argued that the integration of collaboration and competition can enhance co-creation and the innovative abilities of community members. The application of a combination of competition and collaboration practices in a balanced and simultaneous manner can produce strategies to gain competitive advantage [11]. Hence, we need a managerial role to balance collaboration and competition when setting contractual administrative relationship rules between partners of the community.
2.2 Managerial Motivation and Collective Engagement
In this study, motivation is measured from the top management team (TMT) behavioral discourse in developing a community, which the researchers call managerial motivation. Managerial N-Ach is defined as the extent to which the TMT has a strong desire to perform well on challenging tasks and to make its organization a superior organization, while Managerial N-Growth is the strong desire of the TMT to increase community growth. Research shows that N-Ach and N-Growth have been tested at the individual level [17], whereas tests at the organizational level, as a collective push of top management teams, have not been done much. Moreover, the need to grow at the leadership level will be a strong predictor of engagement among community members [13]. Cognitive collective engagement is a function of intrinsic motivation. When intrinsically involved, people are willing to pay attention to the problem at hand, and such attention directs them to engage in cognitive processes [14]. As a result, intrinsic motivation affects the degree to which someone will persist in carrying out the creative process [19]. Creative responses arise when a person is involved in creative activities such as identifying problems, scanning the environment, collecting data, subconscious mental activity, generating and evaluating solutions, and implementing solutions [20]. The second form of involvement is affective
involvement, which is defined as the willingness to fully invest oneself emotionally in one's work. A sense of mutual involvement arises partially through affective and social processes in a community. Engagement can appear at the organizational level, and organizations can be distinguished by their level of collective engagement [22]. Despite the strong conceptual relationship between organizational collective involvement and the pursuit of organizational goals, collective engagement is an organizational-level construct. In addition, the need for achievement is an unconscious drive to do better and aspire to be superior [23]. People with a strong need for achievement often assess themselves as a way to measure their progress towards various goals. They set goals by trying to take risks that are challenging but realistic, prefer individual activities, choose activities where they can obtain value (such as sales targets), and choose jobs where performance data is clearly available. The need for achievement is defined as a desire to achieve something difficult, attain a high level of success, complete complex tasks, and exceed others [24]. Individuals who have a need for achievement will look for realistic but challenging goals. Shoaib and Baruch [25] explain that the motivation to grow (Managerial N-Growth) is an opportunity for a community to experience actualization of growth and become large in its industry. Hence, Managerial N-Growth is defined as the strong desire of the organization to grow compared to others in its industry, with indicators of the need to create value, the need to adapt to change, the need to innovate, the need to compete, the need to have competencies, and the need to be qualified.

H1a: Managerial N-Ach significantly relates to Cognitive Collective Engagement
H1b: Managerial N-Ach significantly relates to Affective Collective Engagement
H2a: Managerial N-Growth significantly relates to Cognitive Collective Engagement
H2b: Managerial N-Growth significantly relates to Affective Collective Engagement.
2.3 Engagement and Community Identity
Community identity is defined as the perception of community members towards the formation of their community. According to Smith [26], community members who identify themselves with their community groups feel united with the group, so that the future of the community becomes their future, community goals become their goals, community success becomes their success, community failure becomes their failure, and finally the prestige of the community becomes a "prestige" of its members. Organizational identity is defined by Knippenberg and Sleebos [27] as harmonizing individual values with values that already exist in the organization. In other definitions, organizational identity is the perception of members as part of organizational values that are aligned with their perceptions so that they are integrated with organizational goals. Identity is specifically defined as a perception of loyalty, agreement, and collaboration [28].
It can be concluded that there are characteristics such as the existence of shared goals and formal and informal interactions. A community, as one form of an organization, has a shared desire to achieve a common goal. Hence, community identity is defined as the psychological relationship between community members and the community they join. More specifically, this refers to a kind of psychological relationship status regarding goals, values, caring, and family-like attachment, with indicators of the organization's emotional attachment to its affiliate community, feeling proud to be part of the affiliate community, giving satisfaction to the affiliate community, providing motivation to members of the affiliate community, and providing identity values in the affiliate community.

H3: Cognitive Collective Engagement significantly relates to Community Identity
H4: Affective Collective Engagement significantly relates to Community Identity.
2.4 Collapetition Management Practices
We define collapetition management practices as the HR practices and policies of community leaders whereby community members must maintain a balance between collaboration and competition with other community members. The top management team needs to identify a series of practices and policies that encourage community members to be enthusiastic in competition and collaboration activities. In this study, the collapetition policy is measured by the availability of policies to reward the best winner, reward the best information providers, expand exposure to knowledge, announce the best and the worst, share the main resources, and run mentoring relationships. The engagement that exists among community members makes the concern among members tight, which in turn creates the strength of the community's identity. Cognitive engagement, shaped by cognitive agreement, will be stronger if it is encouraged by policies that promote collapetition practices. A regulation made by the TMT that each member of the community helps each other and strives to be the best among the members of the community will strengthen the relationship between engagement and community identity. Therefore, collapetition management practices are expected to be a moderator in the relationship between cognitive and affective engagement and community identity.

H5a: Collapetition Management Practices moderates the relationship between Cognitive Collective Engagement and Community Identity
H5b: Collapetition Management Practices moderates the relationship between Affective Collective Engagement and Community Identity.

Briefly, the relationship among variables is presented in Fig. 1.
Fig. 1. Research model: Managerial N-Ach (H1a, H1b) and Managerial N-Growth (H2a, H2b) relate to Cognitive and Affective Collective Engagement, which in turn relate to Community Identity (H3, H4); Collapetition Management Practices moderates these latter relationships (H5a, H5b).
3 Method

The population of this study is educational institutions that have established community associations of which they are required to become members. With regard to the research background, the data were collected from leaders of several Islamic educational institutions in Central Java (Indonesia). We distributed our questionnaire to 230 respondents through snowballing effects in such a community. In the end, after two months, 147 respondents were involved in this study (a response rate of 46.87%). The questionnaire was built from a detailed literature review on measurement scales and contains questions about managerial motivation, collective engagement, community identity, and collapetition management practices. The results showed that the majority of respondents were men (58 lecturers). In terms of education level, the majority of respondents (88 lecturers) were postgraduates, while in terms of age, the majority were aged 40–44 years. All items range from 1 = strongly disagree to 5 = strongly agree. Managerial N-Ach (N-Ach) consists of four items, such as efforts to achieve higher performance, efforts to build the best organization, efforts to achieve the best performance, and efforts to achieve success in the future [24]. Managerial N-Growth items include the need to create value, the need to adapt to change, the need to innovate, the need to compete, the need to have competence, and the need to be qualified [25]. Cognitive Collective Engagement (CCE) includes four items, namely intellectual engagement, ability engagement, competence engagement, and physical engagement, developed from a combination based on [3]. Affective Collective Engagement (ACE) is measured with four items: closeness in the same spirit and values, the habit of mutual support, cooperation to improve organizational progress, and high emotional-psychological closeness. Collapetition Management Practices (CMP) consists of six items, namely reward of best winner, reward of information resources, broadening exposure to knowledge, announcing the best and the worst, sharing resources, and acting as a mentor [11]. Community Identity (CI) consists of five items, such as feeling proud to be part of the affiliate community,
giving satisfaction to the affiliate community, providing motivation to members of the affiliate community, and providing identity values in the affiliate community. This study used the SmartPLS software for partial least squares structural equation modeling, as applied in HRM research. The results of the measurement model test showed that all variables fulfilled the reliability and validity requirements.
4 Result and Discussion

Diamantopoulos et al. (2005) [23] categorized a path coefficient of 0.60 as very strong. The hypothesis test results show that all hypotheses are accepted (Table 1).

Table 1. Structural model test

| Hypothesis | Beta | f2 | T-value (sign) | Notes |
| H1a: N-Ach → CCE | 0.220 | | 1.648*** | Supported |
| H1b: N-Ach → ACE | 0.229 | | 1.677*** | Supported |
| H2a: N-Growth → CCE | 0.611 | | 4.361*** | Supported |
| H2b: N-Growth → ACE | 0.598 | | 3.819*** | Supported |
| H3: CCE → CI | 0.336 | | 2.348** | Supported |
| H4: ACE → CI | 0.351 | | 2.940** | Supported |
| H5a: CMP*CCE → CI | 0.488 | 0.012 | 8.342*** | Supported |
| H5b: CMP*ACE → CI | 0.565 | 0.026 | 7.641*** | Supported |

Note: N-Ach: Managerial N-Ach; N-Growth: Managerial N-Growth; CCE: Cognitive Collective Engagement; ACE: Affective Collective Engagement; CMP: Collapetition Management Practices; CI: Community Identity; CMP*CCE → CI: CMP moderates the interaction between CCE and CI; CMP*ACE → CI: CMP moderates the interaction between ACE and CI. **p < 0.05; ***p < 0.001
The direct effects of N-Ach on CCE and ACE, of N-Growth on CCE and ACE, and of CCE and ACE on CI are moderate, moderate, very strong, strong, strong, and strong, respectively. Hence, hypotheses 1, 2, 3, and 4 are supported. This is in line with the research by Han et al. [28], which showed that managerial motivation impacts the emotional labor performed by nurses to increase service quality and collective work engagement. The hypothesis test results show that the higher the level of Managerial N-Ach and N-Growth, the higher the cognitive collective engagement and affective collective engagement achieved. In this research, the ability of leaders of higher educational organizations to manage the need for achievement and the need for growth will strengthen their members' engagement collectively, both from a cognitive and an affective perspective. Moreover, a combination of CCE and ACE gives a positive relationship with community identity. Traditionally, research demonstrates that this collective engagement relates to organizational members' capabilities to increase their identity. The
results of this study are in line with [28], which explains that cognitive and affective engagement produce organizational prestige and in-group feeling so as to strengthen employee–organizational identification. Furthermore, CMP has a moderating role in the interaction between CCE and CI and between ACE and CI. To conclude, hypotheses 5a and 5b are supported. In the context of moderation effect size, the f2 classification is that 0.005, 0.010, and 0.025 constitute realistic standards for low, moderate, and high effect sizes, respectively. The moderating effect of CMP has a moderate effect (0.012) on the interaction between CCE and CI, and a moderate effect (0.026) on the interaction between ACE and CI. Although the results show that the moderating effect of CMP is still relatively small, it contributes to strengthening community identity. A managerial role is needed to foster a conducive community atmosphere or a policy to increase the awareness of community members, in order to be able to balance collaboration and competition. Increasing the attachment of members collectively, driven by collapetition management practice, will contribute to the improvement of community identity. Therefore, developing an environment that supports the balance of both collaboration and competition, through management practices, is an essential condition for leaders or managers to improve the collapetition ability of their members.
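For context, the f2 effect sizes reported above are, in PLS-SEM practice, typically computed from the explained variance of the endogenous construct with and without the moderating interaction term; a standard formulation (stated here as background, since the paper does not spell it out) is:

```latex
f^2 = \frac{R^2_{\text{with interaction}} - R^2_{\text{without interaction}}}{1 - R^2_{\text{with interaction}}}
```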
5 Conclusion, Implication and Future Research

In conclusion, this paper shows the effect of managerial motivation on organizational identity, with CMP playing a moderating role in the interaction between collective engagement and community identity. The empirical evidence has important implications for members and marks progress in the research on the moderating effects of managerial practices on the influence of collective engagement on community identity. An additional contribution of this paper is to investigate the theoretical relationships among collapetition management practices, managerial motivation, collective engagement, and organizational identity through an extensive literature review, and to anticipate some effects among these constructs. Indeed, the call for additional research on how collapetition management practices can influence collective engagement to strengthen community identity is addressed by this study. However, this research has the following limitations. First, the research design of this study is cross-sectional and is therefore incapable of ensuring the causal relationships set out in the hypotheses, even though the results are consistent with theoretical reasoning. Further researchers could address this issue by applying a longitudinal design. Second, Collapetition Management Practice is assessed from the perceptions of the members of the organization, namely the leadership of the community member organizations. Nevertheless, more specific approaches may be needed (e.g., managerial perception) to assess the level of Collapetition Management Practice of an organization. Hence, when an organization wants to strengthen its identity, managerial policies are needed to encourage the collective engagement of members. Future studies could try to analyze Collapetition Management Practices through managerial perception. Third, self-report data is used by this study, which may suffer from the effects of general method
variance. Future research could benefit from independently obtaining and using objective measures of organizational identity. Fourth, the respondents are leaders of several educational institutions in Central Java, Indonesia. Future research could focus on other organizational communities (e.g., banking, hospitals, etc.).
References 1. Kesari, L., Pradhan, S., Prasad, N.: Pursuit of organisational trust : role of employee engagement, psychological well-being and transformational leadership. Asia Pac. Manag. Rev. 23, 227–234 (2018). https://doi.org/10.1016/j.apmrv.2017.11.001 2. Rich, L.R., LePine, J.A., Crawford, E.R.: Job engagement: antecedents and effects on job performance. Acad. Manag. J. 53, 617–635 (2010) 3. Fachrunnisa, O., Adhiatma, A., Tjahjono, H.K.: Cognitive collective engagement: relating knowledge-based practices and innovation performance. J. Knowl. Econ. 11(2), 743–765 (2018). https://doi.org/10.1007/s13132-018-0572-7 4. Mansor, Z.D., Mun, C.P., Farhana, B.S.N., Aisyah, W., Wan, N., Tarmizi, M.: Influence of transformation leadership style on employee engagement among generation Y. Int. J. Soc. Behav. Educ. Econ. Bus. Ind. Eng. 11, 161–165 (2017) 5. Chambers, M.C.: Dynamic, inter-subsidiary relationships of competition and collaboration (2015). https://doi.org/10.1007/978-1-349-95810-8_377 6. Luft, J.: Cooperation and competition among employees: experimental evidence on the role of management control system. Manag. Account. Res. 31, 75–85 (2016). https://doi.org/10. 1016/j.mar.2016.02.006 7. Chong, V., Rundus, M.: Total quality management, market competition and organizational performance. Br. Account. Rev. 36, 155–172 (2004). https://doi.org/10.1016/j.bar.2003.10. 006 8. Bunger, A.C., McBeath, B., Chuang, E., Collins-Camargo, C.: Institutional and market pressures on interorganizational collaboration and competition among private human service organizations. Hum. Serv. Org. Manag. Leadersh. Gov. 41, 13–29 (2017). https://doi.org/10. 1080/23303131.2016.1184735 9. Grant, A., Dacin, P.A.: Co-creating value through balancing a collaboration-competition duality. Adv. Consum. Res. 10, 494–496 (2014) 10. Hu, Y., Hou, J., Chien, C.: A framework for knowledge management of university – industry collaboration and an illustration. Comput. Ind. Eng. 87, 1–19 (2018). https://doi.org/10. 1016/j.cie.2018.12.072 11. Naidoo, S., Sutherland, M.: A management dilemma: positioning employees for internal competition versus internal collaboration. Is coopetition possible? S. Afr. J. Bus. Manag. 47, 75–87 (2016) 12. Hutter, K., Hautz, J., Füller, J., Matzler, K.: Communitition: the tension between competition and collaboration in community-based design contests. Creat. Innov. Manag. 20, 3–21 (2011) 13. Shernoff, D.J., et al.: Student engagement as a function of environmental complexity in high school classrooms. Learn. Instr. 43, 52–60 (2016). https://doi.org/10.1016/j.learninstruc. 2015.12.003 14. Harvey, S., Kou, C.-Y.: Collective engagement in creative task the role of evaluation in the creative process in groups. Adm. Sci. Q. XX, 1–41 (2013). https://doi.org/10.1177/ 0001839213498591
15. Du, Y., Zhang, L., Chen, Y.: From creative process engagement to performance: bidirectional support. Leadersh. Org. Dev. J. 37, 966–982 (2016). https://doi.org/10.1108/ LODJ-03-2015-0046 16. Wu, C., Chen, T.: Collective psychological capital: linking shared leadership, organizational commitment, and creativity. Int. J. Hosp. Manag. 74, 75–84 (2018). https://doi.org/10.1016/ j.ijhm.2018.02.003 17. Klein, H.J., Brinsfield, C.T., Cooper, J.T., Molloy, J.C., Stuart, K., Hoppensteadt, K.: Quondam commitments: an examination of commitments employees no longer have. Acad. Manag. Discov. 3, 331–357 (2017). https://doi.org/10.5465/amd.2015.0073 18. Shoaib, S., Baruch, Y.: Deviant behavior in a moderated-mediation framework of incentives, organizational justice perception, and reward expectancy. J. Bus. Ethics 157(3), 617–633 (2017). https://doi.org/10.1007/s10551-017-3651-y 19. Smith, W.K.: Epidemiological and mycofloral relationships in cotton seedling disease in Mississippi. Acad. Manag. J. 57, 1592–1623 (2014). https://doi.org/10.5465/amj.2011.0932 20. Van Knippenberg, D., Sleebos, E.: Organizational identification versus organizational commitment: self-definition, social exchange, and job attitudes. J. Org. Behav. 27, 571–584 (2006) 21. Chi, M., Zhao, J., George, J.F., Li, Y., Zhai, S.: The influence of inter-firm IT governance strategies on relational performance: the moderation effect of information technology. Int. J. Inf. Manag. 37, 43–53 (2017). https://doi.org/10.1016/j.ijinfomgt.2016.11.007 22. Zhang, X., Bartol, K.M.: Linking empowering leadership and employee creativity: the influence of psychological empowerment, intrinsic motivation, and creative process engagement. Acad. Manag. J. 53, 107–128 (2010) 23. Slåtten, T., Lien, G.: Consequences of employees’ collective engagement in knowledgebased service firms. J. Serv. Sci. Res. 8(2), 95–129 (2016). https://doi.org/10.1007/s12927016-0006-7 24. Ahn, R.: Japan’ s communal approach to teacher induction: Shokuin shitsu as an indispensable nurturing ground for Japanese beginning teachers. Teach. Teach. Educ. 59, 420–430 (2016). https://doi.org/10.1016/j.tate.2016.07.023 25. Ringle, C.M., Sarstedt, M., Mitchell, R., Siegfried, P.: Partial least squares structural equation modeling in HRM research. Int. J. Hum. Resour. Manag. 5192, 1–27 (2018). https://doi.org/10.1080/09585192.2017.1416655 26. Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M.: A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd edn. SAGE Publicstion, Los Angeles (2017) 27. Diamantopoulos, A., Riefler, P., Roth, K.: The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. J. Appl. Psychol. 90, 710–730 (2005) 28. Han, S., Han, J., Kim, Y.: Effect of nurses ’emotional labor on customer orientation and service delivery: the mediating effects of work engagement and burnout. Saf. Health Work. 9, 441–446 (2018). https://doi.org/10.1016/j.shaw.2017.12.001
A Novel Structural and Semantic Similarity in Social Recommender Systems

Imen Ben El Kouni (1,2), Wafa Karoui (1,3), and Lotfi Ben Romdhane (1,2)

1 Laboratoire MARS LR17ES05, ISITCom, Université de Sousse, 4011 Sousse, Tunisia
2 ISITCom, Université de Sousse, 4011 Sousse, Tunisia
3 Institut Superieur d'Informatique, Université de Tunis El Manar, 2080 Tunis, Tunisia
Abstract. Recommender systems are used to suggest items to users based on their preferences. Recommender systems use a set of similarity measures as part of their mechanism that could help to identify interesting items. Even though several similarity measures have been presented in the literature, most of them consider only the ratings of similar users and suffer from a range of drawbacks. In order to fix these problems, we propose a novel similarity measure based on the semantic and structural information in the network. On one hand, the preference of the target user is calculated using the similarity between similar users based on several factors such as user profile, ratings, and tags. On the other hand, a user is an element who has relations with other elements in the network. Therefore, we can use the network topology in the similarity measurement. We apply this idea in the social recommender system to improve the quality of recommendations. The experimental results show that our method achieves better precision and accuracy and handles the cold-start problem.

Keywords: Feature · Homophily · Semantic · Structural

1 Introduction
The vast amount of information and items available makes users uncertain and indecisive; they need to spend more time and energy looking for the results they expect. Recommender systems (RSs) may be seen as the development of a data mining method in which data collection, pre-processing, user profile creation, and evaluation techniques are carried out in order to provide personalized recommendations. The objective of RSs is to provide a consumer with personalized recommendations based either on his or her needs and interests or on a group of people with similar needs and interests [1]. In terms of the context data, input data, and the algorithm used to produce recommendations, five classes of recommender systems are suggested: content-based, collaborative, demographic,
utility-based, and knowledge-based [6]. Many current RSs implement a user–user similarity measure. Formally, the system constructs a list of similar users for an active user based on a measure of similarity that must represent the most significant factors shared by the two users. As a consequence, for any collaborative RS, the similarity computation process plays a fundamental role in its performance. There are many similarity measurements for finding the nearest users. User–item ratings are used to provide recommendations. When a user purchases an item, the recommender system asks the customer to rate the item, typically from 1 to 5, with 1 if the user finds the item not at all interesting and 5 if the user finds the item interesting. Later, using current ratings and/or other supporting information, the recommender process estimates the consumer's ratings for non-rated products. Then, the recommender system recommends the products whose rating values are the highest for a user. However, rating-based user similarity fails to discover real preferences and interpret interest in some cases. As an example, suppose two users have a similar rating history: user 1 and user 2 both gave a rating of 4 to the movie "Braveheart". Consequently, if user 1 gives a rating of 4 to "Schindler's List", then this film would be suggested to user 2. However, while user 1 and user 2 have an identical rating history, they may have different behaviors and like the movie for different reasons. User 1 may like this movie because of its genre, and user 2 may like it because of its main actor. Another shortcoming is that we can find similar users who have an insufficient overlapping rating history. User–user similarity also fails when a new user does not have a sufficient rating history, which is the cold-start issue, one of the known problems in recommender systems. All of these reasons show the need to identify a new similarity measure that concentrates on user interest, using other supplementary factors to improve the results. In this paper, we propose a new similarity measure for social recommender systems that takes into account the rating history, social information, and user attributes. We extract explicit and implicit information to improve the precision of this similarity and, as a consequence, the quality of recommendations. Then, we apply the proposed measure in a social recommender system and compare it with other similarity measures. The rest of this article is structured as follows. Section 2 is a literature review of the works in this area. Section 3 explains our proposed similarity measure. Section 4 presents the experimental results, and the findings and future work are given in Sect. 5.
2 Related Work
Recommender systems guide the consumer to find the correct information or product. In order to solve the cold-start and sparsity problems, many approaches exploit the interests of similar users to predict the interest of the active user. Various measures of similarity have been discussed in the literature for recommender systems. In some approaches, the rating set of each user determines the user profile that will be compared, using a predefined similarity measure, to the profiles of other
users. The similarity between two users is an indicator of how closely they associate with each other. Resnick et al. [18] proposed the Pearson correlation similarity in 1994, which determines the correlation between user ratings. Based on the Pearson correlation coefficient (PCC), the similarity between two users x and y is determined only from the common scores, Sxy, that both users have declared [1,5]. In the cosine similarity measure, each user is represented as a vector in the item space, and the cosine of the angle between the two vectors is taken as the similarity between the two users [1,17]. Another variant of the Pearson correlation coefficient is the constrained Pearson correlation coefficient [19]. This measure helps to fix the weaknesses observed in PCC and takes into account whether the effect of a rating is positive or negative. Jaccard similarity was introduced by Jaccard in 1901 [11] and is used in many domains, including recommendation systems. This measure determines similar users by calculating the percentage of common ratings. However, if positive (liked items) and negative (disliked items) matches are distinguished, various advantages can be obtained; this can be established using asymmetric similarity measures such as the Jaccard coefficient. Unfortunately, the latest attempts to use the Jaccard coefficient [17] are based on finding matches between two users, regardless of whether they are positive or negative matches. Besides, the Dice coefficient [7] is similar to the Jaccard coefficient but gives positive matches twice the weight, placing more importance on positive matches. The mean square difference (MSD) between two users is another similarity measure, defined as the inverse of the mean square difference between the two users' scores on the same items [3]. This measure does not take the number of common ratings into account; instead, it includes absolute ratings. In contrast with PCC, a drawback of the MSD is that it does not capture negative correlations between the users' preferences. The authors in [4] proposed a new metric combining the Jaccard measure and the mean square difference; the idea was that these two measures should balance each other. The use of user ratings may be misleading: a correlation between two users' ratings does not necessarily mean that the two users are similar, since two users with the same ratings may have different behaviors. The authors in [16] use content similarity instead of ratings, based on user profiles and item descriptions. Liu et al. [13] suggest a novel similarity measure based on the local context of user ratings and global preferences; this similarity measure proves useful when user reviews are insufficient. Patra et al. [15] propose a similarity measure that takes into consideration the weight of each pair of rated items based on the Bhattacharyya similarity and uses all ratings given by a pair of users. Wang et al. [20] introduce a hybrid similarity model that modifies user similarity by using item similarity as a weight. Zhang et al. [21] propose a recommendation method based on social relations; they extract user-topic features and use the Jensen-Shannon divergence to estimate interest similarity. Gurini et al. [9] propose an RS that utilizes user attributes such as opinion and objectivity, which are extracted from the semantics of tweets. Hawashin et al.
[10] introduce a solution that takes the interest of users into account. The solution
identifies users with similar contexts and detects their interests. It is important to infer details based on user–system interaction to create user profiles. Using user profiles to find similar users can also fail, as user preferences may not be clearly identified in them. A typical user profile may include the user's profession, age, location, etc., but such profiles frequently lack user interests, which are the most important factor in identifying similar users. In real life, a user's preferences are influenced by the preferences of his or her friends. For that reason, adding social relationships into similarity measures can change the importance of some customers compared to others. Inspired by the research status and existing problems above, our work considers the interaction and social information among users in social recommender systems. Specifically, the behavioral information and interaction relationships of social users are exploited to analyze and mine the preferences of a target user. Then the social similarity tendency is calculated to obtain the rating prediction of items.
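To make the measures surveyed above concrete, the following sketch implements plain Pearson, cosine, and Jaccard user–user similarities over rating dictionaries; it is an illustration of the classical formulas under an assumed data layout, not the code of any cited system.

```python
# Illustrative user-user similarities; ratings are dicts {item_id: rating}.
import numpy as np

def pearson(ru: dict, rv: dict) -> float:
    common = sorted(set(ru) & set(rv))       # co-rated items only
    if len(common) < 2:
        return 0.0
    a = np.array([ru[i] for i in common], dtype=float)
    b = np.array([rv[i] for i in common], dtype=float)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def cosine(ru: dict, rv: dict) -> float:
    common = set(ru) & set(rv)
    num = sum(ru[i] * rv[i] for i in common)
    denom = (np.sqrt(sum(r * r for r in ru.values()))
             * np.sqrt(sum(r * r for r in rv.values())))
    return num / denom if denom else 0.0

def jaccard(ru: dict, rv: dict) -> float:
    union = set(ru) | set(rv)
    return len(set(ru) & set(rv)) / len(union) if union else 0.0
```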
3 Proposed Method
In this section, we introduce a novel similarity measure to fix the cold-start problem and improve the quality of social recommendation. This step aims to construct a network based on the combination of the user–item rating matrix and the features of users.
3.1 Dependency Graph of Users
A network is built at this stage by combining user attributes and social relationships. The network is modeled as a dependency graph. Thus, the users' network is represented as a dependency graph G = (U, E, W), where U is the set of users, E is the set of edges between pairs of users, and W denotes the weights of the edges. For each member of the network, we detect a set of attributes that characterize the user. The similarity values and the numerous social features that arise from the collective users are combined to calculate the edge weights W.
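A minimal sketch of building such a weighted dependency graph with networkx is shown below; the similarity function is a placeholder for the combined measure defined in the following subsections.

```python
# Illustrative construction of G = (U, E, W); `similarity` is assumed to be
# the combined semantic/structural measure defined later in this section.
import networkx as nx

def build_user_graph(users: list, similarity) -> nx.Graph:
    G = nx.Graph()
    G.add_nodes_from(users)
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            w = similarity(u, v)
            if w > 0:                       # keep only informative links
                G.add_edge(u, v, weight=w)  # W: edge weight = similarity
    return G
```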
3.2 Homophily Concept
According to several authors [8,12,14], one of the common phenomena in this context is homophily. It is one of the most dominant concepts governing the architecture of networks. According to McPherson et al. [8], "birds of a feather flock together". This concept describes the tendency of the users of a network to be connected with other users with similar properties: individuals with similar characteristics associate with each other more than with other users of the network. According to these studies, the characteristics of social actors often carry relevant information, and their integration into the similarity measurement between actors to detect similar groups can allow more precision and refinement. For that reason, we use the homophily phenomenon to find the list of similar users. Each user is modeled by a set of attributes detailed in Sect. 3.3.
3.3 User Modeling
The idea is to model a user based on a set of features, and then to calculate the similarities between these profiles to detect users with the same interests and preferences; in other words, to infer the topics of interest of passive users from the topics of interest of active users based on profile similarity. In addition, the topology of the graph provides rich information that can help to discover and understand the relationships between nodes (users). We identify two structural characteristics: friendship and the mutual friends of two users. To predict the similarity between two profiles based on both profile attributes and social features, a summary formula is used that integrates the following factors:

Overlap Between Profiles. This calculates the percentage of overlap between two users u and v among the following attributes: age, sex, and occupation. Profile overlap counts the number of profile characteristics for which two people exhibit an equal value. The degree of interest similarity increases with profile overlap, independent of the interest domains. This finding shows that if two users share more common features in their profiles, they are more similar in their choices.

Gender. In this context, a user pair's gender combination involves three possible values: female–female, male–male, and male–female. Users show higher interest similarity when they are of the same sex (i.e., male or female). Furthermore, as an example, males may be more similar in music and film interests, while females show a higher interest in TV programs. Hence, we assign 1 (male–male), 1 (female–female), and 0 (male–female).

Age or Generation. This calculates the difference in age between u and v (i.e., age distance). Users who are close in age share more interests, and interest similarity decreases as the age distance increases. In addition, we can select an age interval as a generation; generally, two users from the same generation are more similar in interest than other pairs.

Rating. We introduce the rating as a note given by a consumer to evaluate a product on social media. Users may assign ratings to products from 1 to 5 (5 indicates "like", while 1 indicates "dislike"). Rating is the most important feature in recommendation and is always present. The similarity is typically calculated from the available ratings given by users. For social recommendations, positive and negative matches have different meanings: positive matches are an indication of user satisfaction, while negative matches indicate user dissatisfaction. Therefore, starting with the user–item matrix, and given that ratings lie in the range {1, 2, 3, 4, 5}, we divide the rating information into three categories to get more meaningful results: C1 = {1, 2} "low ratings", C2 = {3, 4} "average ratings", C3 = {5} "high ratings". The main idea is to construct a new user–user matrix based on ratings. Therefore, we count the number of
similar ratings (i.e., ratings belonging to the same category) between two users. Figure 1 illustrates an example with four users and five items. For example, user 1 rates item i2 with score 1, item i4 with score 3, and item i5 with score 5; on the other hand, user 3 rates item i2 with score 2 and item i4 with score 3. The similarity between users 1 and 3 is equal to 2 because they rate two of the items rated in common with scores in the same category. Given the new user–user rating matrix, the rating similarity between a pair of users is calculated as:

Sim_{preferences}(u, v) = \frac{\sum_{i=1}^{3} \{R_u, R_v\}_i}{Nb_{item}} \quad (1)

where \{R_u, R_v\}_i represents the number of items rated by u and v with scores in the same category i, and Nb_{item} indicates the number of items rated by both u and v.
Fig. 1. Process of ratings similarity calculation
Tag. One of the features of most social recommender systems is the ability for the user to assign a tag to an item. Thus, users can allocate tags based on their preferences for each product. A tag is social information that represents the opinion of the user about an item, such as a movie that he or she has watched or a book that he or she has read. Each tag is a keyword assigned to an item to express a consumer's opinion or to describe the item, such as movies, images, or music. It is also one of the additional features that represent the interests of users. Using tags alone does not provide a perfect measure for making decisions; for that reason, we propose to add this social feature to the other attributes in order to improve recommendations. Tag-based similarity means rating the same item with a similar tag. The tag similarity between two users is calculated using the overlap coefficient [2] as follows in Eq. (2):

Sim_{tags}(u, v) = \frac{|T_u \cap T_v|}{\min(|T_u|, |T_v|)} \quad (2)

where T_u and T_v represent the sets of tags given by users u and v, respectively. Finally, the similarity of two users is calculated by combining the previous attributes as follows:

Sim(u, v) = \frac{\sum_{i=1}^{Nb} Sim_i(u, v)}{Nb} \quad (3)

where Nb indicates the number of partial similarities.
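The semantic part of the measure can be sketched as follows; the data structures (rating and tag dictionaries) are illustrative, and the code simply mirrors Eqs. (1)-(3).

```python
# Hedged sketch of the semantic similarities: Eq. (1) counts co-rated items
# whose ratings fall in the same category, Eq. (2) is the tag overlap
# coefficient, Eq. (3) averages the available partial similarities.
def rating_category(r: float) -> int:
    if r <= 2:
        return 0   # low ratings {1, 2}
    if r <= 4:
        return 1   # average ratings {3, 4}
    return 2       # high ratings {5}

def sim_preferences(ru: dict, rv: dict) -> float:
    common = set(ru) & set(rv)
    if not common:
        return 0.0
    same = sum(rating_category(ru[i]) == rating_category(rv[i]) for i in common)
    return same / len(common)                     # Eq. (1)

def sim_tags(tu: set, tv: set) -> float:
    if not tu or not tv:
        return 0.0
    return len(tu & tv) / min(len(tu), len(tv))   # Eq. (2), overlap coefficient

def combine(*partials: float) -> float:
    return sum(partials) / len(partials)          # Eq. (3)
```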
After the calculation of users' similarities, we construct the user–user network, where the nodes represent the users and the links indicate the relations between them. In addition, each link is weighted with a similarity value.
3.4 Structural Information Extraction
The main idea is to find recommendations based on structural social friendship attributes. The topology of the graph gives rich information that can help to discover and understand the relationships between nodes (users). We identify two features:

Friendship. In social relationships we can distinguish two conditions: direct and indirect friends. In general, users with a shorter distance between them share more preferences; therefore, direct-friend pairs have the highest similarity of interest. We assign the value 1 if two users are friends and 0 otherwise.

Mutual Friends. This represents the number of friends in common. This factor usually affects the degree of interest similarity between two users: the more neighbors they have in common, the more similar their preferences and choices are. The value of mutual friends measures two users' common friends by the Jaccard similarity:

Jacc(a, b) = \frac{|I_a \cap I_b|}{|I_a \cup I_b|} \quad (4)
where I_a is the set of neighbors of user a. The integration of social features in the similarity measure can improve the quality and efficiency of recommendations. To recapitulate, Fig. 2 illustrates the various types of features used in the similarity calculation. The weight of each link in the network, which represents the similarity value between two users, is recalculated as follows in Eq. (5):

Sim_{global}(u, v) = \frac{\sum_{i=1}^{Nb} Sim_i(u, v)}{Nb} \quad (5)
The proposed similarity measure works well in various cases. If the user has not made enough evaluations, we combine the user profile attributes and the social features to find the list of friends socially and semantically close to the target user in the next step. If the profile of user u is not sufficiently informed, the social features and the evaluations are applied. If user u has not yet added any friends, the profile attributes and the evaluations are used. Finally, we combine the three types of information if there are enough evaluations.
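A short sketch of the structural features and of the global weight of Eq. (5); friend lists are assumed to be available as sets, and missing partial similarities are simply dropped, as described above.

```python
# Illustrative structural features and global combination (Eqs. (4)-(5)).
def friendship(friends_u: set, v) -> float:
    return 1.0 if v in friends_u else 0.0

def mutual_friends(friends_u: set, friends_v: set) -> float:
    union = friends_u | friends_v
    return len(friends_u & friends_v) / len(union) if union else 0.0  # Eq. (4)

def sim_global(partials: list) -> float:
    # `partials` holds whichever profile, rating, tag, and social similarities
    # are available for the pair (u, v); unavailable sources are omitted.
    return sum(partials) / len(partials) if partials else 0.0         # Eq. (5)
```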
Fig. 2. Similarity calculation between two users
4 Experiment and Results

4.1 Datasets
One of the commonly used datasets in recommender systems is MovieLens,¹ which includes the ratings of online customers for a set of movies. This dataset consists of 100k ratings assigned by 943 users to 1682 movies. The movie ratings are on a scale of 1–5, with 5 being the highest rating. This dataset contains the necessary features used in the proposed similarity measurement, such as user profile attributes, rating history, and tags. Therefore, we adopted this dataset for our experiments.
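For illustration, the MovieLens 100K rating and user files can be loaded as follows; file names and separators follow the public distribution of the dataset, and paths should be adjusted as needed.

```python
# Hypothetical loader for the MovieLens 100K files (u.data / u.user).
import pandas as pd

ratings = pd.read_csv(
    "u.data", sep="\t",
    names=["user_id", "item_id", "rating", "timestamp"],
)
users = pd.read_csv(
    "u.user", sep="|",
    names=["user_id", "age", "gender", "occupation", "zip_code"],
)
```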
4.2 Evaluation Metrics
Generally, we use a social recommender system based on community detection to evaluate our proposed measure in comparison with other similarity measures. To examine the quality of recommendation, we use five evaluation metrics: precision, recall, F1, rate coverage (RC), and diversity. These metrics are widely used by various researchers to evaluate the quality of recommendation [5]. Recommendations generated by the algorithm that users like are defined as True Positives (TP), and those that users do not like are defined as False Positives (FP). Items that are not recommended and not liked by the users are defined as True Negatives (TN), and those not recommended by the algorithm but liked by users are defined as False Negatives (FN).

• Precision informs about the capacity of the recommender system to identify only relevant items from a collection of irrelevant and relevant items. It is calculated by Eq. (6).

Precision = \frac{1}{n}\sum_{i=1}^{n}\frac{TP}{TP + FP} \quad (6)

¹ www.movielens.umn.edu.
• Recall informs about the recommender's ability to recommend all related items. It is calculated by Eq. 7:

Recall = \frac{1}{n} \sum_{i=1}^{n} \frac{TP}{TP + FN}    (7)
• F1-measure is the combination of the two previous metrics, and it is defined by Eq. 8:

F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}    (8)

• Rate coverage (RC) refers to the fraction of ratings for which, after being hidden, the recommender system can produce a predicted rating. It is calculated by Eq. 9:

RC = \frac{\text{number of predicted ratings}}{\text{number of all ratings}}    (9)
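The metrics above can be computed per user and then averaged. The following sketch is one possible implementation of Eqs. 6–9 and is not taken from the paper; it assumes recommended and relevant items are available as plain Python collections.

```python
def precision_recall_f1(recommended, relevant):
    """Per-user precision and recall (Eqs. 6-7) and their F1 combination (Eq. 8)."""
    recommended, relevant = set(recommended), set(relevant)
    tp = len(recommended & relevant)
    precision = tp / len(recommended) if recommended else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def rate_coverage(n_predicted_ratings, n_hidden_ratings):
    """Eq. (9): fraction of hidden ratings for which a prediction could be produced."""
    return n_predicted_ratings / n_hidden_ratings if n_hidden_ratings else 0.0
```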
4.3 Results and Discussion
4.3.1 The Impact of the User Profile in the Recommendation
Table 1 shows the values of the different evaluation metrics obtained by our recommendation system on MovieLens. The results demonstrate the utility of the user-profile factor in the recommendation process. Indeed, the best scores are achieved when our recommendation system takes the users' profile as additional information. In particular, age is the most influential factor: the more users are of the same age or belong to the same generation, the more they tend to like the same items and share the same interests. The precision and recall values increase significantly when we combine age and occupation as additional information, given that individuals of the same age who exercise the same profession have fairly similar profiles. The second most influential factor is the user's occupation, followed by gender. The order of these attributes appears logical for a movie dataset. However, the precision value decreases when we combine all the user-profile features compared with using age alone. Even if one factor or a combination of two factors produces better results for certain metrics, combining all of them reduces the quality of the recommendations, because in real life it is rare to find individuals with the same occupation, the same age, the same gender, and the same preferred resources. Finally, when our recommendation system does not use any additional information about users, the quality of the recommendations decreases.
Table 1. The interest of the user profile in the recommendation

User profile                Precision  Recall  F1    Diversity  RC
Without                     0.90       0.51    0.66  47         0.14
Age                         0.92       0.52    0.66  47         0.14
Gender                      0.89       0.52    0.66  47         0.13
Profession                  0.89       0.51    0.65  44         0.14
Age + Gender                0.90       0.51    0.65  50         0.13
Age + Profession            0.91       0.52    0.66  46         0.14
Gender + Profession         0.89       0.51    0.65  48         0.13
Age + Gender + Profession   0.91       0.56    0.69  50         0.14
4.3.2 The Interest of Social Relations in the Recommendation
The experimental results in Table 2 show the impact of the social factors in the recommendation process. These values show that two users who are friends and have a sufficient number of mutual friends are likely to be more similar in their choices.

Table 2. The impact of social relations in the recommendation

Information        Precision  Recall  F1    Diversity  RC
Without            0.90       0.50    0.65  47         0.14
Friend             0.90       0.52    0.66  47         0.13
Mutual friend      0.91       0.53    0.66  45         0.13
Social relations   0.90       0.52    0.66  50         0.14
4.3.3 Comparison with Other Similarity Measures
We compare the proposed similarity measure with two well-known measures: the Pearson correlation coefficient (PCC) and the Jaccard coefficient. The performance of these measures is shown in Figs. 3 and 4 in terms of precision, recall, F1, and diversity. The results show that our method outperforms the traditional measures in terms of recall, F1-measure, and diversity. For precision, the Jaccard measure gives the best results, but our method is more stable, whereas the other measures decrease. Regarding diversity, Jaccard gives the best value in the top 5; beyond that, our method outperforms it and gives the best results, which means that our similarity measure has a greater impact on the variety and diversity of the recommendations.
Fig. 3. Precision and Recall values of similarity measures
Fig. 4. F1 and Diversity values of similarity measures
5 Conclusion
This paper proposes a novel structural and semantic similarity measure for social recommender systems based on the homophily concept, taking into account user attributes, explicit preferences, and social relationships. We empirically analyzed the contribution of each factor to the recommendation performance and conclude that our methodology yields a more distinguishable and practical user similarity. The social information also proves to be important additional knowledge to be considered in the recommendation process. The results obtained on the MovieLens dataset demonstrate the effectiveness of our approach in enhancing recommendation quality. Our method uses several numerical and textual factors to model user interest, and it could be extended with other effective techniques inspired by the information retrieval domain. We also plan to consider the time factor in order to capture the dynamics of user preferences.
References
1. Adomavicius, G., Tuzhilin, A.: Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 17(6), 734–749 (2005)
2. Allyson, F.B., Danilo, M.L., José, S.M., Giovanni, B.C.: Sherlock n-overlap: invasive normalization and overlap coefficient for the similarity analysis between source code. IEEE Trans. Comput. 68(5), 740–751 (2018)
3. Bagchi, S.: Performance and quality assessment of similarity measures in collaborative filtering using mahout. Procedia Comput. Sci. 50, 229–234 (2015)
4. Bobadilla, J., Ortega, F., Hernando, A., Bernal, J.: A collaborative filtering approach to mitigate the new user cold start problem. Knowl.-Based Syst. 26, 225–238 (2012)
5. Bobadilla, J., Ortega, F., Hernando, A., Gutiérrez, A.: Recommender systems survey. Knowl.-Based Syst. 46, 109–132 (2013)
6. Burke, R.: Hybrid recommender systems: survey and experiments. User Model. User-Adap. Inter. 12(4), 331–370 (2002)
7. Dice, L.R.: Measures of the amount of ecologic association between species. Ecology 26(3), 297–302 (1945)
8. Easley, D., Kleinberg, J., et al.: Networks, crowds, and markets: reasoning about a highly connected world. Significance 9, 43–44 (2012)
9. Gurini, D.F., Gasparetti, F., Micarelli, A., Sansonetti, G.: Temporal people-to-people recommendation on social networks with sentiment-based matrix factorization. Futur. Gener. Comput. Syst. 78, 430–439 (2018)
10. Hawashin, B., Mansour, A., Kanan, T., Fotouhi, F.: An efficient cold start solution based on group interests for recommender systems. In: Proceedings of the First International Conference on Data Science, E-learning and Information Systems, pp. 1–5 (2018)
11. Jaccard, P.: Étude comparative de la distribution florale dans une portion des alpes et des jura. Bull. Soc. Vaudoise Sci. Nat. 37, 547–579 (1901)
12. Khediri, N., Karoui, W.: Community detection in social network with node attributes based on formal concept analysis. In: 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), pp. 1346–1353. IEEE (2017)
13. Liu, H., Hu, Z., Mian, A., Tian, H., Zhu, X.: A new user similarity model to improve the accuracy of collaborative filtering. Knowl.-Based Syst. 56, 156–166 (2014)
14. McPherson, M., Smith-Lovin, L., Cook, J.M.: Birds of a feather: homophily in social networks. Ann. Rev. Sociol. 27(1), 415–444 (2001)
15. Patra, B.K., Launonen, R., Ollikainen, V., Nandi, S.: A new similarity measure using Bhattacharyya coefficient for collaborative filtering in sparse data. Knowl.-Based Syst. 82, 163–177 (2015)
16. Pazzani, M.J., Billsus, D.: Content-based recommendation systems. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) The Adaptive Web, vol. 4321, pp. 325–341. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72079-9_10
17. Rajaraman, A., Ullman, J.D.: Mining of Massive Datasets. Cambridge University Press, Cambridge (2011)
18. Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., Riedl, J.: GroupLens: an open architecture for collaborative filtering of netnews. In: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, pp. 175–186 (1994)
19. Sheugh, L., Alizadeh, S.H.: A note on Pearson correlation coefficient as a metric of similarity in recommender system. In: 2015 AI & Robotics (IRANOPEN), pp. 1–6. IEEE (2015)
20. Wang, Y., Deng, J., Gao, J., Zhang, P.: A hybrid user similarity model for collaborative filtering. Inf. Sci. 418, 102–118 (2017)
21. Zhang, Y., Tu, Z., Wang, Q.: TempoRec: temporal-topic based recommender for social network services. Mob. Netw. Appl. 22(6), 1182–1191 (2017)
Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems Davinder Kaur, Suleyman Uslu, Arjan Durresi(B) , Sunil Badve, and Murat Dundar Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA {davikaur,suslu}@iu.edu, {adurresi,sbadve,mdundar}@iupui.edu
Abstract. We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems using experts-in-the-loop. Our metric calculates acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts based on their expertise and experience. Our metric also evaluates the trust of the experts, so that different groups of experts can be included using our trust mechanism. Our metric can be easily adapted to any interpretable AI system and be used in the standardization process of trustworthy AI systems. We illustrate the proposed metric using the high-stake medical AI application of predicting Ductal Carcinoma in Situ (DCIS) recurrence. Our metric successfully captures the experts' acceptance of the explainability of AI systems for DCIS recurrence.
1 Introduction
The past decade has witnessed the massive deployment of algorithmic decision-making and artificial intelligence systems. These powerful, intelligent systems are used in many applications in business, healthcare, education, government, the judiciary, and more, and they have transformed our lives. With the availability of vast data and computation power, these systems have become very efficient but equally complex. Despite their many advantages, they have become opaque and can cause harm to users and society. They have become black boxes that are challenging to interpret, driving unfair and wrong decisions. Well-known examples of the harm caused by them are: a self-driving car killed a pedestrian [37], a recidivism algorithm used in the judicial system was found to be biased against black people [3], and a recruitment algorithm used by a major tech company was found to be biased against women [9]. These examples show how important it is to make these systems safe, reliable, and trustworthy.
To prevent the harm caused by them, various government and scientific agencies have proposed guidelines and frameworks to make these systems trustworthy. European Union (EU) has presented ethical guidelines and frameworks to govern the development and working of AI systems [12] and also passed a law called GDPR(General Data Protection Regulation) [6] which gives users the “right of explainability” about the decisions made by AI systems. Standardization organization ISO also presented different approaches to establishing trust in AI systems [18]. Various solutions have been proposed to implement different requirements of trustworthy AI like fairness, accountability, explainability, privacy, and controllability [18]. All these requirements play an important role in making AI systems safe, reliable, and trustworthy. [20,22] presented survey of all these requirements and their proposed methods. However, nowadays, great attention is given to make AI systems explainable and interpretable. Many methods have been proposed to make sure that people using the system and impacted by it should have a clear understanding of the system, its uses, and its limitations [16]. [1,11,17] presented an in-depth survey reviewing different explainability methods. Despite the availability of many explainability approaches, there is a lack of evaluation metrics to quantitatively compare and judge them. Different researchers have presented different ways to evaluate them without any concrete metrics. As the use of AI systems in high stake applications has increased, there is a growing need for standardization to govern the development and implementation of these systems [18]. The standardization process requires metrics [23] that will quantitatively compare and judge different explainability approaches and create a common language for the developers and the users of the system. Authors in [33] proposed a trustworthy AI metric for acceptance requirement. There is a need for such metrics for the explainability requirement of trustworthy AI. For this reason, in this paper, we have proposed a concrete, Trustworthy Explainability Acceptance metric and its measurement methodology. Our metric uses human-in-the-loop and is capable of quantifying the interpretable AI system’s explanations by the experts. We have illustrated our metric using a high stake medical application involving predicting the recurrence of Ductal Carcinoma In Situ (DCIS). Our contributions can be summarized as follows: • We propose in Sect. 3 our Trustworthy Explainability Acceptance metric for evaluating the Explainability of AI-based systems by field experts. • The measurement procedure for the proposed metric is described in Sect. 3 and is based on the concept of a distance acceptance approach that is adaptable to a wide range of systems. In addition to the acceptance value, our metric provides the confidence of the acceptance. • Our metric utilizes the trust of the experts in the given context, managed by our trust system, summarized in Sect. 2. • Our metric can be measured in many points of the system to reach an assessment of the whole system, as discussed in Sect. 4. • Finally, in Sect. 4, we illustrate the application of our trustworthy acceptance metric and its measurement methodology using an interpretable AI system for DCIS Recurrence Prediction.
2 Background and Related Work
This section explains the need for metrics to measure trustworthy explainability acceptance and describes the trust mechanism on which the metric is based.
2.1 Need for AI Explainability Metrics
Much research is done in designing the guidelines and the framework to make AI systems trustworthy. However, significantly less work is dedicated to creating measurement mechanisms to measure the trustworthiness of the AI system. This measurement mechanism is needed to quantify the system’s trustworthiness, which will lead to more acceptance of the systems. Standardization organizations such as ISO [18] also raised the need for metrics to measure the trustworthiness of the AI systems. Their standardization document has mentioned various challenges related to the implementation and use of AI systems. The primary concern they have raised is the over-reliance and under-reliance on the AI system. Overreliance can happen if the user becomes too reliant on the automation without considering its limitations. It can lead to adverse outcomes. And under-reliance can occur if the user keeps on overriding/disagreeing with the correct AI system decisions. To avoid these issues, a quantitative measurement analysis is needed to effectively compute the trustworthiness of the system based on its past predictions and its explanation for that. Different trustworthy requirements need different methods of evaluation. In this paper, we have focused on the explainability requirement of trustworthy AI systems. Over the past years, many explainability methods have been proposed to make AI systems transparent and understandable. However, there is still an implementation gap from research to practice. The main reason for such an implementation gap is the lack of methods to compare and evaluate these systems [4]. Very little research has been done in designing these evaluation methods. Different researchers have presented various measures, tools, and principles to develop these evaluation systems. Some researchers have presented measures that are important for explainability evaluation [18], some have presented fact sheet to evaluate explainability methods based on their functionality, usability, and safety [29], some have presented fidelity method of comparing the accuracy of an interpretable model with a black box for evaluation [14], some have proposed an evaluation approach based on comparing local explanations with ground truth [15]. Some researchers also suggested quantitative evaluation methods like faithfulness metric [2] and monotonicity [25] which evaluate the system by measuring how the change in feature importance weights affect the classifier probability. All these proposed methods do not capture the human aspect of explainability. It is essential to have human involvement to increase the confidence in AI systems [8]. There is a need for more quantitative evaluation metrics that can compare different explainability methods and quantify the human acceptance of those methods to increase the use and trust in them. Furthermore, such metrics can
be used for the standardization of explainable AI solutions and later for their certification by the appropriate agencies.
2.2 Trust Mechanism
The proposed metric is based on our trust framework [28], which is composed of two parameters: impression and confidence. The impression is defined as the level of trust one entity has towards another entity. It is the comprehensive summary of all the measurements between two entities (P and Q) taken over time, as shown in Eq. 1; the higher the impression, the higher the trustworthiness of the system. Here m^{P:Q} is the impression, r_i^{P:Q} is the i-th measurement from P to Q, and N is the total number of measurements.

m^{P:Q} = \frac{\sum_{i=1}^{N} r_i^{P:Q}}{N}    (1)
Confidence measures the certainty of the impression and is defined as how sure one entity is about its impression of another entity. It is inversely proportional to the standard error of the mean and is calculated using Eq. 2, where c^{P:Q} is the confidence that P has about its impression of Q.

c^{P:Q} = 1 - 2\sqrt{\frac{\sum_{i=1}^{N} (m^{P:Q} - r_i^{P:Q})^2}{N(N-1)}}    (2)

Trust is a tuple of impression and confidence. Trust can also be calculated when two entities are not directly related, using the trust propagation methods of transitivity and aggregation. Trust transitivity is needed when two entities communicate not directly but through a third entity between them. Trust aggregation is used to calculate a summarized trust when two or more different links are present between entities. The authors in [28] proposed various methods for calculating transitivity and aggregation. In this application, we use the averaging aggregation method presented in Eq. 3, whose error formula is given in Eq. 4.

m^{PQ}_{T_1} \oplus m^{PQ}_{T_2} = \frac{m^{PQ}_{T_1} + m^{PQ}_{T_2}}{2}    (3)

e^{PQ}_{T_1} \oplus e^{PQ}_{T_2} = \sqrt{\frac{1}{2^2}\left((e^{PQ}_{T_1})^2 + (e^{PQ}_{T_2})^2\right)}    (4)

Our trust framework has been validated and applied in various applications, such as stock market prediction using Twitter [27], fake user detection [19], crime prediction [21], and decision-making systems in the food-energy-water sectors [30–32, 34–36].
3 AI Trustworthy Explainability Acceptance Metric
This section introduces our Trustworthy Explainability Acceptance metric and its measurement methodology. We assume an explainable AI system that provides reasoning for its decisions, and a group of experts that evaluates the system based on their judgment. Each expert can agree or disagree with the explanation provided by the system. The explainability acceptance is based on the closeness of the explanations: the larger the distance between the explanations, the lower the acceptance. The explainability distance between two explanations is calculated using the Euclidean distance, where each explanation is treated as a vector and each attribute of the explanation becomes a dimension. The explainability distance lies between 0 and 1, 1 being the maximum distance. The explainability distance between two n-dimensional explanations X and Y, denoted d^Y_X, is shown in Eq. 5, where X_i and Y_i are the values of dimension i of each explanation.

d^Y_X = \sqrt{\frac{\sum_{i=1}^{n} (X_i - Y_i)^2}{n}}    (5)

Explainability acceptance is calculated using Eq. 6. The explainability acceptance by expert e for the system is based on his/her explanation X and the explanation Y provided by the system. Explainability acceptance also lies in [0, 1], 1 being the highest acceptance. The distance is bidirectional: if the explanation provided by the system is not close enough to the explanation of the expert, the expert will have a low explainability acceptance of the system.

A_e = 1 - d^Y_X    (6)

A certain number of experts evaluate and rate the system with their explainability acceptances based on their own reasoning, in order to reduce subjectivity. Each acceptance is considered a trust assessment, and we aggregate them using Eq. 3. We calculate the Trustworthy Explainability Acceptance metric by aggregating the experts' explainability acceptances weighted by their trust values, as shown in Eq. 7, where T_e is the trust of expert e and E is the total number of experts. The trust value is calculated using the impression and confidence described in Sect. 2.2.

T_{wA} = \frac{\sum_{e} A_e T_e}{E}    (7)

The confidence of the measured Trustworthy Explainability Acceptance is calculated based on Eqs. 2 and 4, as shown in Eq. 8 and Eq. 9.

SE_{T_{wA}} = \sqrt{\frac{\sum_{e} (T_{wA} - A_e)^2}{n}}    (8)

c_{T_{wA}} = 1 - 2 \cdot SE_{T_{wA}}    (9)
Therefore, our metric to measure trustworthy explainability is the tuple (T_{wA}, c_{T_{wA}}). When the system needs to be evaluated based on different sample measurements, we can use the aggregation method of our trust framework. The aggregated explainability acceptance and the standard error over n samples are calculated with the aggregation trust propagation method, as shown in Eq. 10 and Eq. 11.

System_{T_{wA}} = \frac{\sum_{n} T_{wA}}{n}    (10)

SE_{System_{T_{wA}}} = \sqrt{\frac{1}{n^2} \sum_{n} (SE_{T_{wA}})^2}    (11)
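The following sketch illustrates Eqs. 5–9 for one sample of expert evaluations; the normalisation of the distance (dividing the squared differences by n under the square root) is one reading of Eq. 5, and the explanation vectors and trust values are hypothetical.

```python
import math

def explainability_distance(x, y):
    """Eq. (5): normalised distance between two explanation weight vectors."""
    n = len(x)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / n)

def trustworthy_acceptance(expert_explanations, expert_trusts, system_explanation):
    """Eqs. (6)-(9): trust-weighted acceptance of the system and its confidence."""
    accepts = [1 - explainability_distance(e, system_explanation)
               for e in expert_explanations]                                 # Eq. (6)
    twa = sum(a * t for a, t in zip(accepts, expert_trusts)) / len(accepts)  # Eq. (7)
    se = math.sqrt(sum((twa - a) ** 2 for a in accepts) / len(accepts))      # Eq. (8)
    return twa, 1 - 2 * se                                                   # Eq. (9)

# Hypothetical feature-weight explanations from two experts and the system.
system = [0.4, 0.3, 0.3]
experts = [[0.5, 0.3, 0.2], [0.4, 0.4, 0.2]]
print(trustworthy_acceptance(experts, [0.9, 0.8], system))
```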
4 Evaluating AI System for DCIS Recurrence Prediction
To illustrate the proposed Trustworthy Explainability Acceptance metric, we have evaluated an AI system for DCIS recurrence prediction. This section provides an overview of the DCIS recurrence prediction problem, the data, the experimental setup, and the results of assessing the prediction systems with the proposed metric.
4.1 Background
Ductal carcinoma in situ (DCIS) is a non-obligate precursor lesion that is managed aggressively. Different DCIS trials have documented that the addition of radiation [7,10] and endocrine therapy [5] results in reductions of recurrence rates. DCIS, if left untreated, progresses to invasive carcinoma in only 20–40% of cases [13,24,26]. This has led to significant concerns regarding the over-treatment of patients. There is a need for an objective tool that helps identify women who are unlikely to recur and supports evidence-based de-escalation of additional therapies to avoid aggressive treatments. The development of such a tool requires a deep interpretable machine learning system for computer-aided recurrence prediction for DCIS, and the implementation and use of such a system require doctors' acceptance and trust.
4.2 Data
Original diagnostic slides from the patients diagnosed with DCIS in 2009–2012 within Indiana University Hospital System and at the Eskenazi City hospital were reviewed, and clinical data were obtained. Any case that was upgraded to invasive carcinoma was excluded. This review identified around 20 cases each of recurrent and non-recurrent DCIS with at least 8-year follow-up data. After excluding the missing cases (including referral/ second opinion cases) and the cases with a scant amount of DCIS, 30 cases (15 recurring and 15 non-recurring) were available for our studies. The machine learning system uses these patient cases with eight years of follow-up data to determine DCIS recurrence. For simplicity, the image analysis
part was performed manually by an internationally recognized breast pathologist. A thorough review of the histological slides was carried out, and the areas of DCIS were identified. The pathologist categorized each slide based on 25 attributes. The values of the 25 attributes for all 30 cases served as the input for the machine learning model. A supervised machine learning model, a support vector machine (SVM), is used for classification. Using this model, we were able to obtain 83% accuracy on a leave-one-out cross-validation basis. The model found 13 features useful for predicting DCIS recurrence, together with their corresponding optimized weights. Table 1 provides the list of these morphological features and their corresponding descriptors. As our approach is not to quantify a given system but to develop a proof of concept of the metric, we simulated another algorithm profile and four pathologist profiles by changing the weight of one of the morphological feature descriptors, to keep it simple. This simulates how different pathologists can have different opinions based on their experience with DCIS prediction, and how one AI system may find some features more important than others for prediction.

Table 1. Relevant histo-morphological criteria for predicting DCIS recurrence

Morphological features                            Descriptors
Architecture solid                                Yes, no
Architecture other                                Cribriform, micropapillary, papillary, other
Lumina                                            Regular, irregular
Nuclear shape                                     Round, oval, irregular
Nuclear size                                      Small, intermediate, large
Nuclear pleomorphism                              Mild, moderate, prominent
Nuclear membrane                                  Smooth, irregular
Nuclear spacing                                   Even, uneven
Nucleoli                                          Present, absent
Nucleolar shape                                   Round, oval, pleomorphic, n/a
Mitosis                                           Abnormal, normal
Necrosis                                          Absent, focal, comedo
Immune cells with circumferential distribution    Yes, no
4.3 Experiments and Results
In our study, we assumed that the AI system is better in predicting the recurrence of DCIS than pathologists, and there is a need to evaluate the system to create trust and acceptance of the system among pathologists before deploying it in the hospital setting. An appropriate organization responsible for testing and certifying AI systems has employed high trust expert pathologists to evaluate the system using their expertise. We used our explainability acceptance metric
to measure the system’s acceptance by comparing the explanations provided by the algorithm to the cognitive reasoning of expert pathologists. To demonstrate our proposed metric, we assumed two interpretable AI systems (System 1 and System 2) that need to be evaluated. Four simulated expert pathologists with high trust are deployed to do the evaluation. The interpretable AI systems generate morphological feature descriptors and corresponding weights along with their predictions. The pathologists evaluate the output provided by the system based on their expertise and provide their weights for the morphological feature descriptors.
Fig. 1. Explainability distance between explanations provided by the system and the reasoning provided by pathologists based on their expertise.
Fig. 2. Measured explainability acceptances of each pathologist D for System 1 and System 2.
An evaluation of a system starts by comparing the system explanation/weights of the feature descriptors with the weights of the pathologists,
which is given based on their expertise and experience. For each pathologist, a non-zero distance is measured between the pathologist’s explanation and the explanation provided by the system using Eq. 5. After having the individual distances, the acceptance rate for each pathologist is calculated by Eq. 6. Then we averaged all the individual acceptances weighted by their trust and calculated the confidence of the acceptance using Eqs. 7, 8, and 9. The trust-weighted average acceptance and its confidence constitutes our Trustworthy Explainability Acceptance metric. We have performed the same tasks for System 2.
Fig. 3. Trustworthy Explainability Acceptance of System 1 (confidence: 0.99) and System 2 (confidence: 0.98)
Figure 1 shows the explainability distances of all the pathologists for System 1 and System 2, calculated using Eq. 5. Figure 2 shows the explainability acceptance of the systems, calculated from the differences in opinion using Eq. 6. Figure 3 shows the Trustworthy Explainability Acceptance metric T_{wA}, calculated using Eq. 7, together with the corresponding confidence values calculated using Eq. 9. The variation in acceptance shows how one system with accuracy similar to another can be more accepted based on the feature importance and explanation it provides. Certifying agencies could use this type of evaluation metric to standardize and certify AI systems based on the evaluation of their explainability provided by top experts. For example, in our experiment, System 1 is more acceptable than System 2 regarding explainability. Therefore, our metric provides a framework to measure the acceptance of AI systems by experts based on system explainability.
5 Conclusions
We presented the Trustworthy Explainability Acceptance metric for evaluating explainable AI systems using experts-in-the-loop. This evaluation method provides a quantitative way to compare and judge different interpretable AI systems
and provides a common language to all the various stakeholders involved in the designing, development, testing, standardization, and implementation phase of AI systems. Our metric measures the distances between the explanation provided by the system and the reasoning provided by the experts. Based on these distances and the trust of the experts, we calculated the Trustworthy Explainability Acceptance and its confidence. Our metric trust mechanism will help differentiate between different experts based on their expertise and reputation. Our Trustworthy Explainability Acceptance metric can be applied to any interpretable AI system that can use the concept of distance measurements. Acknowledgements. This work was partially supported by the National Science Foundation under Grant No. 1547411, by the U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA) (Award Number 2017-67003-26057) via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems, and by an IUPUI iM2CS seed grant.
References 1. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018) 2. Alvarez-Melis, D., Jaakkola, T.S.: Towards robust interpretability with self-explaining neural networks. arXiv preprint arXiv:1806.07538 (2018) 3. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine Bias. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing 4. Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012 (2019) 5. Bartlett, J.M., et al.: Mammostrat as a tool to stratify breast cancer patients at risk of recurrence during endocrine therapy. Breast Cancer Res. 12(4), 1–11 (2010) 6. Calder, A.: EU GDPR: A Pocket Guide. IT Governance Ltd. (2018) 7. Correa, C., et al.: Overview of the randomized trials of radiotherapy in ductal carcinoma in situ of the breast. J. Natl. Cancer Inst. Monogr. 2010(41), 162–177 (2010) 8. Danny Tobey, M.: Explainability: where AI and liability meet. Actualités, DLA Piper Global Law Firm (2019). https://www.dlapiper.com/fr/france/insights/publications/2019/02/explainability-where-ai-and-liability-meet/ 9. Dastin, J.: Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women. Reuters, San Francisco (2018). Accessed 9 Oct 2018 10. DeSantis, C.E., Fedewa, S.A., Goding Sauer, A., Kramer, J.L., Smith, R.A., Jemal, A.: Breast cancer statistics, 2015: convergence of incidence rates between black and white women. CA: Cancer J. Clin. 66(1), 31–42 (2016) 11. Došilović, F.K., Brčić, M., Hlupić, N.: Explainable artificial intelligence: a survey. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 0210–0215. IEEE (2018) 12. EC: Ethics guidelines for trustworthy AI (2018). https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai 13. Esserman, L.J., et al.: Addressing overdiagnosis and overtreatment in cancer: a prescription for change. Lancet Oncol. 15(6), e234–e242 (2014)
14. Freitas, A.A.: Comprehensible classification models: a position paper. ACM SIGKDD Explor. Newsl. 15(1), 1–10 (2014) 15. Guidotti, R.: Evaluating local explanation methods on ground truth. Artif. Intell. 291, 103,428 (2021) 16. Gunning, D.: Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web 2(2) (2017) 17. Gunning, D., Aha, D.: Darpa’s explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019) 18. Information Technology - Artificial Intelligence - Overview of trustworthiness in artificial intelligence. Standard, International Organization for Standardization (2020) 19. Kaur, D., Uslu, S., Durresi, A.: Trust-based security mechanism for detecting clusters of fake users in social networks. In: Workshops of the International Conference on Advanced Information Networking and Applications, pp. 641–650. Springer (2019) 20. Kaur, D., Uslu, S., Durresi, A.: Requirements for trustworthy artificial intelligencea review. In: International Conference on Network-Based Information Systems, pp. 105–115. Springer (2020) 21. Kaur, D., Uslu, S., Durresi, A., Mohler, G., Carter, J.G.: Trust-based humanmachine collaboration mechanism for predicting crimes. In: International Conference on Advanced Information Networking and Applications, pp. 603–616. Springer (2020) 22. Kumar, A., Braud, T., Tarkoma, S., Hui, P.: Trustworthy AI in the age of pervasive computing and big data. In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pp. 1–6. IEEE (2020) 23. Lakkaraju, H., Adebayo, J., Singh, S.: Tutorial: (track2) explaining machine learning predictions: state-of-the-art, challenges, and opportunities. In: NeurIPS 2020. NeurIPS Foundation (2020) 24. Lester, S.C., Connolly, J.L., Amin, M.B.: College of American pathologists protocol for the reporting of ductal carcinoma in situ. Arch. Pathol. Lab. Med. 133(1), 13– 14 (2009) 25. Luss, R., et al.: Generating contrastive explanations with monotonic attribute functions. arXiv preprint arXiv:1905.12698 (2019) 26. Moran, M.S., et al.: Society of surgical Oncology-American society for radiation oncology consensus guideline on margins for breast-conserving surgery with wholebreast irradiation in stages I and II invasive breast cancer. Int. J. Radiation Oncol.* Biol.* Phys. 88(3), 553–564 (2014) 27. Ruan, Y., Durresi, A., Alfantoukh, L.: Using twitter trust network for stock market analysis. Knowl.-Based Syst. 145, 207–218 (2018) 28. Ruan, Y., Zhang, P., Alfantoukh, L., Durresi, A.: Measurement theory-based trust management framework for online social communities. ACM Trans. Internet Technol. (TOIT) 17(2), 1–24 (2017) 29. Sojda, R.S.: Empirical evaluation of decision support systems: needs, definitions, potential methods, and an example pertaining to waterfowl management. Environ. Model. Softw. 22(2), 269–277 (2007) 30. Uslu, S., Kaur, D., Rivera, S.J., Durresi, A., Babbar-Sebens, M.: Decision support system using trust planning among food-energy-water actors. In: International Conference on Advanced Information Networking and Applications, pp. 1169–1180. Springer (2019)
31. Uslu, S., Kaur, D., Rivera, S.J., Durresi, A., Babbar-Sebens, M.: Trust-based gametheoretical decision making for food-energy-water management. In: International Conference on Broadband and Wireless Computing, Communication and Applications, pp. 125–136. Springer (2019) 32. Uslu, S., Kaur, D., Rivera, S.J., Durresi, A., Babbar-Sebens, M.: Trust-based decision making for food-energy-water actors. In: International Conference on Advanced Information Networking and Applications, pp. 591–602. Springer (2020) 33. Uslu, S., Kaur, D., Rivera, S.J., Durresi, A., Babbar-Sebens, M.: Trustworthy acceptance: a new metric for trustworthy artificial intelligence used in decision making in food-water-energy sectors. In: International Conference on Advanced Information Networking and Applications. Springer (2021) 34. Uslu, S., Kaur, D., Rivera, S.J., Durresi, A., Babbar-Sebens, M., Tilt, J.H.: Control theoretical modeling of trust-based decision making in food-energy-water management. In: Conference on Complex, Intelligent, and Software Intensive Systems, pp. 97–107. Springer (2020) 35. Uslu, S., Kaur, D., Rivera, S.J., Durresi, A., Babbar-Sebens, M., Tilt, J.H.: A trustworthy human-machine framework for collective decision making in food-energywater management: the role of trust sensitivity. Knowl.-Based Syst. 213, 106,683 (2021) 36. Uslu, S., Ruan, Y., Durresi, A.: Trust-based decision support system for planning among food-energy-water actors. In: Conference on Complex, Intelligent, and Software Intensive Systems, pp. 440–451. Springer (2018) 37. Wakabayashi, D.: Self-driving Uber car kills pedestrian in Arizona, where robots roam. The New York Times 19 (2018)
Entity Relation Extraction Based on Multi-attention Mechanism and BiGRU Network
Lingyun Wang1, Caiquan Xiong1(✉), Wenxiang Xu1, and Song Lin2
1 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
2 Strategic Teaching and Research Section, Naval Command College, Nanjing 210016, China
Abstract. Entity relationship extraction is a main task in information extraction, and its purpose is to extract triples from unstructured text. Current relationship extraction models are mainly based on the BiLSTM neural network, and most of them introduce only sentence-level attention mechanisms. Such models have complex structural parameters, which easily leads to over-fitting, and they lack word-level information within the sentence. In response to these problems, we propose a model based on a multi-attention mechanism and a BiGRU network. The model uses BiGRU as the main encoding structure; by reducing the number of parameters, training efficiency can be effectively improved. At the same time, a multi-attention mechanism is introduced to learn the influence of different features on relationship classification from the two dimensions of word level and sentence level, and to improve the effect of relation extraction through different weight settings. The model is tested on the SemEval-2010 Task 8 dataset, and the experiments show that our model is significantly better than the baseline methods.
1 Introduction
With the rapid development of the NLP field, and especially the rise of deep learning technology, entity relationship extraction technology has made significant progress and has shown irreplaceable application value in many practical applications. For example, in the process of constructing and maintaining a knowledge graph, entity relationship extraction technology is required to identify the relationships between entity pairs in text and to convert unstructured text data into structured data that can be directly used by computers. At present, the commonly used methods of entity relationship extraction are mainly neural network models based on deep learning frameworks. On this basis, the introduction of the attention mechanism is of great significance for improving entity relation extraction. Attention not only yields better performance, but also provides insight into which words and sentences are helpful for the classification decision, which has value in application and analysis [1]. Our innovative work mainly includes the following points: (1) Introduce location features to strengthen the representation of word vectors, and introduce a self-attention mechanism to learn the weights of different words and features on relationship
classification, and obtain information within the sentence sequence. (2) Introduce the sentence-level attention mechanism to obtain the feature information of the sentence, and through learning to assign different weights to the sentence, reduce the impact of noisy data. (3) Use BiGRU neural network to reduce model parameters and improve training efficiency.
2 Related Work The method of deep learning is to treat the extraction of entity relationship as a classification problem and train the classification model. Liu et al. [2] used CNN for relation extraction earlier. They added a small amount of artificial features (such as part-of-speech labeling and entity category), mapped it to a low-dimensional space, and then combined word vectors were used as the input of CNN. Afterwards, Zeng et al. [3] proposed the use of segmented convolutional neural network for relation extraction on the basis of Liu et al. [2]. First, using word vectors and location feature vectors as the input of the convolutional neural network. The sentence representation is obtained through the convolutional layer, the segmented pooling layer and the linear layer, and then the relationship classification is performed through the classification layer, and good results are obtained. Santos et al. [4] proposed a new convolutional neural network and designed a new loss function to optimize parameters, which can effectively distinguish different relationship categories. Miwa et al. [5] proposed a relationship extraction model based on an end-to-end neural network. The model uses BiLSTM and Tree-LSTM to simultaneously model entities and sentences, and extract relationships between entities. Although the convolutional neural network has good learning and feature extraction capabilities, it is limited to extracting the dependent features between words within a short distance, and it cannot extract well the dependence between words at a long distance. Based on this consideration, Zhang et al. [6] applied recurrent neural network to relation extraction, which solved the problem of dependence between longdistance words. Zhang et al. [7] used the two-way LSTM model for relation extraction, and achieved good results, confirming the effectiveness of the two-way LSTM in relation extraction. The research work on the attention mechanism was first proposed in the field of visual images, mainly to give more important attention to the areas that need to be focused, and at the same time reduce the attention of the surrounding images. Bahdanau et al. [8] used a mechanism similar to Attention to perform translation and alignment on machine translation tasks at the same time. This work is currently recognized as the first to propose the application of the Attention mechanism to the NLP field. In 2017, Katiyar et al. [9] used the attention mechanism together with the recurrent neural network BiLSTM for the joint extraction of entities and relationship classification for the first time. Zhou et al. [10] connected the output of each moment in the two-way LSTM, introduced an attention mechanism, and focused on the degree of influence of words on relation classification.
3 Model Structure In this paper, we propose a neural network model based on the multi-attention mechanism and BiGRU (MultiAtt-BiGRU). In this section, we introduce each module of the model (MultiAtt-BiGRU) in detail. The model frame diagram is as follows (Fig. 1):
Fig. 1. The model structure
The model consists of 5 parts: (1) Input layer: for data preprocessing and feature extraction. (2) Embedding layer: Map features into low-dimensional dense vectors. The main work is to splice relative position features in word vectors to construct a hybrid embedding layer. (3) BiGRU layer: Use BiGRU neural network to construct entity relationship context representation that includes entity relationship semantics and hierarchical structure information. (4) Attention layer: Introduce multiple attention mechanisms. 1. First, the word vectors of the sentence are spliced, and the self-attention weight is obtained through the self-attention layer processing, and then the vector is multiplied by the self-attention weight matrix to obtain the vector representation of the sentence. 2. Through the sentence-level attention mechanism, the semantic features between
sentences are obtained, the obtained weights are spliced, and finally the output vectors of the BiGRU layer are weighted and summed to generate a sentence-level feature representation. (5) Output layer: combines the sentence feature representation and uses the SoftMax function for relation classification.
3.1 Embedding Layer
Word Embedding
The function of the embedding layer is mainly to learn the distributed representation of words and to reduce the dimensionality of the extremely sparse one-hot encoded words. Given a sentence S = [w_1, w_2, ..., w_n], the word vector corresponding to the i-th word w_i is e_i. For w_i, there is a word vector matrix:

W \in R^{d \times |V|}    (1)

where |V| is the size of the vocabulary and d is the dimension of the word vector. Each word is converted into its word-vector representation through Eq. 2:

e_i = W v_i    (2)

where v_i is a one-hot vector of size |V|. The sentence S is finally converted to:

S_w = [e_1, ..., e_n]    (3)

In this paper, we use the Word2vec model to train the word vectors; the specific parameters are described in the parameter settings of the experiment.

Position Embedding
In a sentence, the position information of a word is very important: words that are closer to an entity word usually carry more information about it. In order to fully capture the entity position information in the sentence, we add relative position features to strengthen the representation of the word vector, that is, the distance between each word in the sentence and the two entities. Position features essentially add a small piece of position information to the original embedding. The formula is as follows:

PE_{2i}(p) = \sin(p / 10000^{2i/d_{pos}}),  PE_{2i+1}(p) = \cos(p / 10000^{2i/d_{pos}})    (4)
Ashish [11] explained that the meaning here is to map the position with id p to a dpos-dimensional position vector, and the value of the i-th element of this vector is PEi(p). We add this Position Embedding to the word Embedding and pass it into the BiGRU neural network as a complete Embedding result.
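A small sketch of the sinusoidal position encoding of Eq. 4 follows. It concatenates two relative-position encodings (distance to each entity) with the word vector, which is one way to realize the "splicing" described above; the dimensions and example distances are illustrative, not the paper's exact configuration.

```python
import numpy as np

def position_embedding(position, d_pos):
    """Sinusoidal relative-position encoding of Eq. (4) for one position id."""
    pe = np.zeros(d_pos)
    for i in range(d_pos // 2):
        angle = position / (10000 ** (2 * i / d_pos))
        pe[2 * i] = np.sin(angle)
        pe[2 * i + 1] = np.cos(angle)
    return pe

# One token representation: a word vector concatenated with two relative-position
# encodings (distance to entity 1 and to entity 2).
word_vec = np.random.rand(50)                 # placeholder for a Word2vec vector
token = np.concatenate([word_vec,
                        position_embedding(+3, 50),
                        position_embedding(-2, 50)])
print(token.shape)                            # (150,)
```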
3.2 BiGRU Neural Network
Since the semantics of a word are related not only to the preceding context but also to the information that follows the current word, LSTM can capture long-term dependencies very well; however, the LSTM network suffers from complex computation and a large number of parameters. Cho [12] proposed the Gated Recurrent Unit (GRU) on the basis of LSTM. Compared with LSTM, GRU significantly reduces the number of parameters and the training time through a special design: it controls the flow of information through two gating mechanisms, a reset gate and an update gate, and can achieve a training effect similar to that of LSTM in less training time. In this paper, we use the BiGRU network structure in both word encoding and sentence encoding. For the input word vector sequence x_t, the two GRU units of BiGRU compute the hidden states at time t as follows:

\overrightarrow{h_t} = \overrightarrow{GRU}(x_t, \overrightarrow{h_{t-1}})    (5)

\overleftarrow{h_t} = \overleftarrow{GRU}(x_t, \overleftarrow{h_{t-1}})    (6)

The output of the bidirectional GRU neural network is the concatenation of the two hidden states:

h_t = [\overrightarrow{h_t} ; \overleftarrow{h_t}]    (7)
3.3 Multi-attention Strategy
Lin [13] introduced a sentence-level attention mechanism in a relation extraction model, which improves the performance of the classifier by giving different weights to different sentences and removing noise, but this method lacks the learning of features inside the sentence. That model mainly uses a CNN network, which cannot learn the contextual semantic information within the sentence well. Different from Lin's work, we use the BiGRU network in the sentence encoding layer to capture the contextual information between sequence elements, instead of the original pooling layer, and we introduce a self-attention mechanism to learn word information and obtain fine-grained feature information. These features are aggregated into a sentence vector, and finally a sentence-level attention mechanism is used to remove the influence of noisy sentences and improve the performance of the model.

In our model, the vector set output by the BiGRU layer is denoted H = [h_1, h_2, ..., h_n]. The weight matrix is obtained through the self-attention layer, and the representation s of the sentence is obtained by the following formulas:

M = \tanh(H)    (8)

\alpha = softmax(w^T M)    (9)

s = H \alpha^T    (10)

Here H \in R^{d_w \times T}, where d_w is the dimension of the word vector, w is a vector learned during training, w^T is its transpose, tanh is the activation function, and softmax is the normalized exponential function. Through the self-attention layer we compute the attention scores of words and features, obtain the attention weights through the softmax function, and finally express the sentence based on the self-attention mechanism as:

h_c = \tanh(s)    (11)

The sentence sequence s_i is encoded by BiGRU to obtain the sentence vector r_i, and the sentence-level attention mechanism is introduced as follows:

\beta_i = \frac{\exp(e_i)}{\sum_k \exp(e_k)}    (12)

h_s = \sum_i h_c \cdot \beta_i    (13)

where \beta_i is the weight vector based on the sentence-level attention mechanism.
4 Experiment and Analysis
4.1 Dataset
In order to evaluate the performance of the model, we select the relation extraction data set SemEval-2010 Task 8 from the semantic evaluation conference SemEval [14]. The data set contains 8000 training samples and 2717 test samples. The specific characteristics of the data set are shown in Table 1.

Table 1. Characteristics of the dataset

Characteristic  Numbers
Train_size      8000
Test_size       2717
Vocab_size      25804
Nine major types of relationships and one other relationship type are defined in the data set. The relationship type definition is shown in Table 2.
Table 2. Relationship types and proportions in the training set

Relation type              Example                                      Proportion
Cause-Effect (C-E)         Radiation[Cause] cancer[Effect]              12.54%
Component-Whole (C-W)      Kitchen[Component] apartment[Whole]          11.76%
Entity-Destination (E-D)   Boy[Entity] went to bed[Destination]         10.56%
Entity-Origin (E-O)        Letters[Entity] from the city[Origin]        8.95%
Product-Producer (P-P)     Tree[Member] forest[Collection]              8.96%
Member-Collection (M-C)    Tree[Member] forest[Collection]              8.63%
Message-Topic (M-T)        Lecture[Message] on semantics[Topic]         7.92%
Content-Container (C-C)    Wine[Content] is in the bottle[Container]    6.75%
Instrument-Agency (I-A)    Phone[Instrument] operator[Agency]           6.30%
Other (O)                  People filled with joy                       17.63%
According to the proportion of various relationships in the training set, we select 2000 samples from the training set as the development set to adjust the parameters of the model (Fig. 2).
Fig. 2. Relationship type distribution chart
We use standard Precision, Recall and F1 scores as evaluation indicators. When the relationship between the subject and the object and the head are correctly identified, it is considered that the extracted relationship triplet is correct.
4.2 Settings
In the embedding layer, the Word2vec model we use is Skip Gram, the window size is 5, and the vector dimension is 50. For the BiGRU encoder, the hidden vector length is set to 100, L2 regularization and a parameter setting of 0.001 are used to avoid
overfitting, and the bias variable in self-attention is set to 10. Some parameters are as follows (Table 3):
Table 3. Parameter settings

Parameter             Value
Word_emb_dimension    50
Pos_emb_dimension     100
Hidden vector length  100
Dropout rate          0.5
Batch_size            10
Epoch number          100
Learning_rate         0.001
Parameter update      Adam

4.3 Baseline
We select two models that are tested on the same data set for comparative experiments, and compare and analyze the effects of the models. CNN [3]: Zeng et al. first proposed using a CNN for relation extraction in 2014, using a convolutional neural network to extract lexical and sentence-level features. Hybrid-BiLSTM-Siamese [15]: two word-level BiLSTMs are combined through a Siamese model architecture to learn the similarity of two sentences, and the relationships of new sentences are predicted with a k-nearest-neighbor algorithm.
4.4 Results
It can be seen that our model achieves the highest F1 score compared to the baseline models. Compared with the Hybrid-BiLSTM-Siamese model, which uses the same feature sets, the F1 value of our model increases by 1.5%. Compared with the CNN model with WordNet added, we use fewer features, but the F1 value is still higher. The results in Table 4 show that our model is effective, and the feature settings in the table suggest that adding more effective features is beneficial to the classification effect.
Table 4. Experimental results

Method                       Feature sets      F1
Hybrid-BiLSTM-Siamese [15]   WV, PF            81.8%
CNN [3]                      WV, PF, WordNet   82.7%
MultiAtt-BiGRU               WV, PF            83.3%
5 Conclusion
In this paper, we propose an entity relationship extraction model based on a multi-attention mechanism and a BiGRU network. The model uses the BiGRU network to replace the commonly used BiLSTM network, which effectively improves training efficiency. In addition, we introduce attention mechanisms in the two dimensions of word level and sentence level to study the impact of different features on relation classification. Experiments on a public dataset demonstrate the effectiveness of our model and show that it has good performance. Through the experiments we also found that adding more effective features improves classification, and exploring such features is a direction of our future work.
Acknowledgments. This research is supported by the National Key Research and Development Program of China under grant number 2017YFC1405404, and the Green Industry Technology Leading Project (product development category) of Hubei University of Technology under grant number CPYF2017008.
References 1. Shen, Y., He, X., Gao, J., et al.: A latent semantic model with convolutional-pooling structure for information retrieval. In: Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pp. 101–110 (2014) 2. Liu, C.Y., Sun, W.B., Chao, W.H., Che, W.: Convolution neural network for relation extraction. In: Motoda, H., Wu, Z., Cao, L., Zaiane, O., Yao, M., Wang, W. (eds.) ADMA 2013. LNCS (LNAI), vol. 8347, pp. 231–242. Springer, Heidelberg (2013). https://doi.org/ 10.1007/978-3-642-53917-6_21 3. Zeng, D., Liu, K., Lai, S., Zhou, G., Zhao, J.: Relation classification via convolutional deep neural network. In: Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pp. 2335–2344 (2014) 4. Santos, D., Bing, X., Zhou, B.: Classifying relations by ranking with convolutional neural networks. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing: Long Papers, vol. 1 (2015) 5. Miwa, M., Bansal, M.: End-to-end relation extraction using LSTMs on sequences and tree structures. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Long Papers, vol. 1 (2016) 6. Zhang, D., Wang, D.J.: Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006 (2015) 7. Zhang, S., Zheng, D., Hu, X., Yang, M.: Bidirectional long short-term memory networks for relation classification. In: Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation, pp. 73–78 (2015) 8. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. Comput. Sci. (2014) 9. Katiyar, A., Cardie, C.: Going out on a limb: joint extraction of entity mentions and relations without dependency trees. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Long Papers, vol. 1, pp. 917–928 (2017)
10. Zhou, P., et al.: Attention-based bidirectional long short-term memory networks for relation classification. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Short Papers, vol. 2, pp. 207–212 (2016) 11. Vaswani, A., Shazeer, N., Parmar, N., et al.: Attention is all you need. arXiv (2017). GB/T 7714 12. Cho, K., Van Merrienboer, B., Gulcehre, C., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. Comput. Sci. (2014) 13. Lin, Y., Shen, S., Liu, Z., et al.: Neural relation extraction with selective attention over instances. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Long Papers, vol. 1 (2016) 14. Hendrickx, I., Kim, S.N., Kozareva, Z., et al.: SemEval-2010 task 8: multi-way classification of semantic relations between pairs of nominals. In: Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pp. 94–99 (2009) 15. Zeyuan, C., Pan, L., Liu, S.: Hybrid BiLSTM-siamese network for relation extraction. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1907–1909 (2019)
Time Series Prediction of Wind Speed Based on SARIMA and LSTM Caiquan Xiong(&), Congcong Yu, Xiaohui Gu, and Shiqiang Xu School of Computer, Hubei University of Technology, Wuhan, China Abstract. Wind speed has an important impact on the navigation of ships at sea. If the wind speed can be accurately predicted, the safety of ship navigation would be greatly improved. This paper proposes a wind speed series prediction model based on SARIMA and LSTM. Firstly, the SARIMA model is used to model the observed wind speed sequence and to obtain the predicted values together with the residuals between the predicted and observed values. Secondly, the long short-term memory neural network is trained with the residual sample set to obtain a network for residual prediction. Finally, the two predicted parts are summed to obtain the predicted value of the wind speed series. In order to test the prediction performance of this model, a deep learning environment based on Keras was built, and 5 days of real-time wind speed data from a certain sea area of the South China Sea were used as the input of the model. The prediction results are compared with those of the SARIMA model, the LSTM network model, the BP network model, and the combined LSTM and ARIMA model. The experimental results show that the model has high accuracy and low error in the prediction of wind speed series.
1 Introduction In the navigation of ships at sea, wind speed plays a quite important role. It not only determines the navigation route, but also has a great impact on the safety of the ship's navigation. If the wind speed could be accurately predicted before setting sail, it would contribute to route planning. However, wind speed is affected by many factors, and the factors considered in long-term wind speed forecasting are quite different from those in short-term wind speed prediction. Especially at sea, an important characteristic of short-term wind speed is that it is non-stationary but has a certain periodicity. If a mathematical model that extracts the periodic information from real-time observed wind speed data can be established, the accuracy of wind speed prediction will be improved.
2 Related Work At present, wind speed series prediction methods mainly include the time series prediction method [1, 2], the Kalman filter method [3], the wavelet transform analysis method [4], the empirical mode decomposition method [5], the support vector machine method [6, 7], the spatial correlation method [8], the neural network method [9] and combined prediction algorithms. The time series analysis method uses historical wind speed data to construct a linear model for wind speed prediction. This model requires little information, but it is limited to stationary time series. When the Kalman filter method is used to predict wind
speed, the statistical characteristics of the wind speed noise are difficult to estimate, which makes it difficult to establish the characteristic and state equations. Support Vector Machine [10] (SVM) training requires fewer samples and has strong generalization ability, but it is difficult to select the kernel function and kernel parameters. The neural network method has a strong nonlinear learning ability and fits nonlinear time series very well, but when training the model, the network parameter settings are complicated and the training time is long [11]. Based on this, this paper proposes a prediction model that combines the Seasonal Autoregressive Integrated Moving Average model (SARIMA) and the long short-term memory neural network model (LSTM). This model decomposes the wind speed time series into two parts, a linear autocorrelation part and a nonlinear residual part, and predicts them respectively. It gives full play to the advantages of SARIMA in handling the linear part and of LSTM in fitting the nonlinearity, and it extracts the periodicity in the series well. Experimental results show that the prediction model can effectively improve the accuracy of wind speed sequence prediction.
3 Establishment of Combined Prediction Model Based on SARIMA-LSTM
3.1 Seasonal Auto Regressive Integrated Moving Average
Auto-Regressive Moving Average (ARMA) models are frequently used for stationary time series forecasting. A non-stationary time series should be differenced before ARMA modeling, which gives the Autoregressive Integrated Moving Average (ARIMA) model. In the observed wind speed data, a periodicity of 24 h is found. However, the periodic component hidden in the wind speed data cannot be completely extracted when it is predicted with the trend-oriented ARIMA (p, d, q) model. Therefore, it is necessary to establish a SARIMA model that can fit the periodic information. The SARIMA model is built on the ARIMA model: it converts a non-stationary time series into a stationary one through differencing, so that the dependent variable is regressed only on its own lagged values and the random error term. The constructed model is suitable for univariate time series prediction. Combining ARIMA with the periodicity of the wind speed data gives the SARIMA (p, d, q)(P, D, Q)_s model shown in formula (1):

\Phi_P(B^s)\,\varphi_p(B)\,(1 - B^s)^D (1 - B)^d x_t = \Theta_Q(B^s)\,\theta_q(B)\,a_t \qquad (1)

where

\Phi_P(B^s) = 1 - \Phi_1 B^s - \cdots - \Phi_P B^{Ps} \qquad (2)

\varphi_p(B) = 1 - \varphi_1 B - \cdots - \varphi_p B^p \qquad (3)

\Theta_Q(B^s) = 1 - \Theta_1 B^s - \cdots - \Theta_Q B^{Qs} \qquad (4)

\theta_q(B) = 1 - \theta_1 B - \cdots - \theta_q B^q \qquad (5)

Formulas (2)–(5) respectively represent the P-order seasonal AR operator, the p-order regular AR operator, the Q-order seasonal MA operator, and the q-order regular MA operator, and a_t denotes the residual sequence with mean 0 and variance 1.
3.2 Long Short-Term Memory
3.2.1 Recurrent Neural Network A typical recurrent neural network consists of three layers including the Input Units, Hidden Units and the Output Units. As shown in Fig. 1:
Fig. 1. Structure of recurrent neural network
In Fig. 1, t is a point in time, x is the input layer, S is the hidden layer, and o is the output layer. The matrix W holds the weights from the previous hidden state to the current hidden layer, the matrix U holds the connection weights from the input layer to the hidden layer, and the matrix V holds the connection weights from the hidden layer to the output layer. The output layer o and the hidden layer S are computed as in (6)–(7):

O_t = g(V S_t) \qquad (6)

S_t = f(U x_t + W S_{t-1}) \qquad (7)

Substituting (7) into (6) gives (8):

O_t = g\big(V f(U x_t + W f(U x_{t-1} + W f(U x_{t-2} + \cdots)))\big) \qquad (8)

where f and g are activation functions. A recurrent neural network mainly uses historical data to support the current decision. However, for wind speed prediction over a long time span, the whole computation becomes lengthy because too much information has to be carried along, and efficiency drops. To improve efficiency, a long short-term memory network (LSTM) [12] can be used.
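Formulas (6)–(7) can be read directly as a forward step; the NumPy sketch below is illustrative only, with tanh and an identity output chosen as the activation functions by assumption.

```python
# Hedged sketch of one simple-RNN step, following formulas (6)-(7).
import numpy as np

def rnn_step(x_t, s_prev, U, W, V):
    """Return (o_t, s_t): S_t = f(U x_t + W S_{t-1}), O_t = g(V S_t)."""
    s_t = np.tanh(U @ x_t + W @ s_prev)   # f chosen as tanh (assumption)
    o_t = V @ s_t                         # g chosen as identity here (assumption)
    return o_t, s_t

# Toy usage: 1-D wind-speed input, 8 hidden units.
rng = np.random.default_rng(0)
U, W, V = rng.normal(size=(8, 1)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))
s = np.zeros(8)
for x in [3.1, 2.9, 3.4]:                 # placeholder wind-speed values
    o, s = rnn_step(np.array([x]), s, U, W, V)
```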
3.2.2 LSTM Compared with the traditional recurrent neural network, LSTM introduces several "gate" structures, which can selectively retain information in the network, effectively alleviate invalid data-dependence problems, and improve the efficiency of the neural network. The network structure of LSTM is shown in Fig. 2.

Fig. 2. Structure of LSTM network
In an LSTM, h_t is computed from x_t and h_{t-1}. The internal structure of the neuron is redesigned by adding an input gate i_t, a forget gate f_t, an output gate O_t and an internal memory cell C_t. The input gate controls how much of the current state is written into the memory cell, the forget gate controls how much of the previous memory cell is forgotten, and the output gate controls how much of the current memory cell is exposed as output. The update at time step t is given by formulas (9)–(14), which correspond respectively to the input gate, forget gate, output gate, candidate layer, memory-cell update and the layer output:

i_t = \sigma(W_i [h_{t-1}, x_t] + b_i) \qquad (9)

f_t = \sigma(W_f [h_{t-1}, x_t] + b_f) \qquad (10)

O_t = \sigma(W_o [h_{t-1}, x_t] + b_o) \qquad (11)

\tilde{C}_t = \tanh(W_C [h_{t-1}, x_t] + b_C) \qquad (12)

C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \qquad (13)

h_t = O_t \odot \tanh(C_t) \qquad (14)
where i_t, f_t, O_t and C_t denote the input gate, forget gate, output gate and memory cell respectively, \sigma is the sigmoid activation function, W_i, W_f and W_o are the weight matrices of the input information of the input, forget and output gates, and b_i, b_f and b_o are the corresponding biases of the input gate, forget gate and output gate.
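For concreteness, formulas (9)–(14) translate into the following NumPy sketch of a single LSTM step; the gate weight layout and the toy dimensions are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of one LSTM step, transcribing formulas (9)-(14).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W and b hold per-gate weights/biases keyed by 'i', 'f', 'o', 'c'."""
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    i_t = sigmoid(W["i"] @ z + b["i"])         # input gate, formula (9)
    f_t = sigmoid(W["f"] @ z + b["f"])         # forget gate, formula (10)
    o_t = sigmoid(W["o"] @ z + b["o"])         # output gate, formula (11)
    c_tilde = np.tanh(W["c"] @ z + b["c"])     # candidate layer, formula (12)
    c_t = f_t * c_prev + i_t * c_tilde         # memory-cell update, formula (13)
    h_t = o_t * np.tanh(c_t)                   # layer output, formula (14)
    return h_t, c_t

# Toy shapes: 1-D input, 4 hidden units.
rng = np.random.default_rng(0)
hid, inp = 4, 1
W = {k: rng.normal(size=(hid, hid + inp)) for k in ("i", "f", "o", "c")}
b = {k: np.zeros(hid) for k in ("i", "f", "o", "c")}
h, c = np.zeros(hid), np.zeros(hid)
for x in [3.1, 2.9, 3.4]:                      # placeholder wind-speed residuals
    h, c = lstm_step(np.array([x]), h, c, W, b)
```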
3.3 Modeling Process
Wind speed time series not only have linear features but also nonlinear, random and non-stationary features. The advantage of the SARIMA model is that it fits the linear part of the wind speed time series well, while the advantage of the LSTM network model is that it fits nonlinear and non-stationary data effectively. By combining the two prediction methods, a better prediction effect can be expected. Assume that the wind speed time series Y_t is composed of two parts, a linear autocorrelation part L_t and a nonlinear residual N_t, so that Y_t = L_t + N_t. Therefore, this paper combines the SARIMA and LSTM models to predict the wind speed. First, SARIMA is used to model the linear part of the wind speed sequence, yielding the prediction result \hat{L}_t and the corresponding residual N_t, which contains the nonlinear relationships in the series. The residual sequence N_t is then reconstructed into the training set of the LSTM model, the LSTM is used to predict the residual and obtain \hat{N}_t, and finally the linear prediction result and the nonlinear residual prediction result are combined to give the final wind speed prediction \hat{Y}_t = \hat{L}_t + \hat{N}_t. The flow chart of the combined prediction is shown in Fig. 3, and a code-level outline is given after the figure.
Fig. 3. Flow chart of SARIMA-LSTM combined prediction
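The decomposition \hat{Y}_t = \hat{L}_t + \hat{N}_t in Fig. 3 can be outlined in code as below; the sketch assumes a fitted statsmodels SARIMA result and a trained Keras-style residual LSTM, and `hybrid_forecast` is a name introduced here purely for illustration.

```python
# Hedged outline of the combination step in Fig. 3 (not the authors' code).
import numpy as np

def hybrid_forecast(sarima_fit, lstm_model, residuals, window, steps=24):
    """Return the combined forecast L_hat + N_hat for the next `steps` hours."""
    linear_part = np.asarray(sarima_fit.forecast(steps=steps))      # L_hat: SARIMA forecast
    history = list(np.asarray(residuals)[-window:])                 # last residual window
    residual_part = []
    for _ in range(steps):                                          # recursive residual forecast
        x = np.array(history[-window:], dtype=float).reshape(1, window, 1)
        n_hat = float(lstm_model.predict(x, verbose=0).ravel()[0])  # N_hat for the next hour
        residual_part.append(n_hat)
        history.append(n_hat)
    return linear_part + np.array(residual_part)                    # Y_hat = L_hat + N_hat
```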
4 Experiment and Analysis
4.1 Data Source and Preprocessing
The observed wind speed time series data come from the South China Sea Institute of Oceanology, Chinese Academy of Sciences. At a location in the South China Sea (110.5° east longitude, 17.6333° north latitude), starting from 12 o'clock on April 20, 2020, the wind speed was sampled once an hour over the following five days. The 121 data samples are of high quality, with no missing values. The first 4 days of observed wind speed are used as the training data of the model, and the wind speed data of the last day are regarded as the test data. The wind speed time series is shown in Fig. 4.
Fig. 4. Time series diagram of wind speed
It can be seen from Fig. 4 that this wind speed time series has obvious periodicity, with a cycle of one day. The ACF and PACF diagrams are drawn in Fig. 5. The autocorrelation coefficient does not decay to 0 quickly, which means the wind speed series is non-stationary. Therefore, the seasonal SARIMA (p, d, q)(P, D, Q)_s model can be used for modeling.
Fig. 5. ACF and PACF diagram of wind speed time series
4.2 SARIMA Model Construction and Testing
From the observation of Fig. 4 we can determine that the seasonal period of the SARIMA model is 24, and the other parameters of the model are determined by grid search combined with the Akaike Information Criterion (AIC). The range of the grid search parameters is set between 0 and 2, and the search is carried out in two stages. In the first stage, the parameters p, d, q corresponding to the minimum AIC value are obtained by constructing the optimal ARIMA (p, d, q) model. In the second stage, the SARIMA model is constructed on top of the optimal ARIMA model to obtain the seasonal parameter values, and the optimal combination is obtained through the two-stage search. The resulting parameter values are shown in Table 1. The optimal model is SARIMA (1, 1, 1)(1, 1, 0, 24), with a corresponding AIC value of 53.16. From the optimal parameter results it can be seen that the AIC value of SARIMA is much lower, so the improvement over ARIMA is obvious.

Table 1. Grid search optimal parameter results

Stage          p (P)  d (D)  q (Q)  AIC
First stage    1      1      1      121.28
Second stage   1      1      0      53.16
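One way to reproduce the two-stage grid search described above is sketched below; the 0–2 parameter range, the AIC criterion and the 24-hour season follow the text, while the code structure and the synthetic training series are assumptions.

```python
# Hedged sketch of the two-stage AIC grid search over SARIMA orders (range 0-2).
import itertools
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(97)
train_series = pd.Series(5 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size))

def best_by_aic(series, orders, seasonal_orders):
    best = (None, None, float("inf"))
    for order, seas in itertools.product(orders, seasonal_orders):
        try:
            fit = SARIMAX(series, order=order, seasonal_order=seas).fit(disp=False)
            if fit.aic < best[2]:
                best = (order, seas, fit.aic)
        except Exception:
            continue                     # skip combinations that fail to converge
    return best

grid = list(itertools.product(range(3), repeat=3))
# Stage 1: non-seasonal ARIMA(p, d, q); Stage 2: seasonal (P, D, Q) with s = 24 on top.
order, _, aic1 = best_by_aic(train_series, grid, [(0, 0, 0, 0)])
_, seasonal, aic2 = best_by_aic(train_series, [order], [(P, D, Q, 24) for P, D, Q in grid])
```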
The significance test of the optimized model and its parameters are performed. The significance test of the model is mainly aimed at the residual of the model. The standard residual result of the model is shown in Fig. 6.
Fig. 6. Standard residual diagram of SARIMA model
The histogram and Q-Q diagram of the model residuals are shown in Fig. 7.
Fig. 7. SARIMA model residual histogram and Q-Q diagram
It can be seen from Fig. 7 that the residuals of the model are randomly distributed and uncorrelated, indicating that the residuals are a white-noise signal, that the useful information has been extracted into the SARIMA model, and that the model passes the significance test. The significance test of the model parameters checks whether the parameters are significantly non-zero and is used to simplify the model. The model parameters and their significance test information are shown in Table 2. The test adopts the Z-value test, and the results show that the model parameters are all significantly non-zero and pass the test. Applying this model to predict the observed wind speed data one day in advance, the predicted and actually observed wind speed values are shown in Fig. 8.

Table 2. Model parameters and parameter significance test

Parameter  Coefficient  Std. error  Z value  P value  95% lower CL  95% upper CL
ar.L1      −0.583       0.322       −1.814   0.07     −1.214        0.047
ma.L1      0.8159       0.235       3.472    0.001    0.355         1.277
ar.S.L24   −0.526       0.133       −3.945   0        −0.787        −0.265
Sigma2     0.1558       0.037       4.199    0        0.083         0.299
Fig. 8. Comparison of SARIMA wind speed forecast and observation
According to the above experimental results, it is feasible to use the SARIMA model to predict wind speed, but there is still a certain error between the wind speed prediction of the model and the actual observation, and the prediction accuracy should be further improved.
4.3 SARIMA-LSTM Model Wind Speed Prediction
In the SARIMA-LSTM combined model, the observed wind speed values are used to build the SARIMA model, and the residual obtained from that model is used as the input of the LSTM model. When building the LSTM model, the input and output dimensions of the network are set to 1, the number of hidden-layer neurons is 120, the loss function is the mean square error (MSE), and the Adam optimizer is used to train the model. After training, the residual values are predicted, giving the last 24 predicted residual values of the SARIMA model. The final wind speed prediction of the combined model is obtained by adding the predicted residual values to the wind speed predictions of the SARIMA model. The comparison between the predicted and observed wind speed values is shown in Fig. 9; a Keras-style sketch of this configuration is given after the figure.
Fig. 9. Comparison of predicted wind speed value and wind speed observation value of combined model
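A Keras sketch matching the stated configuration (one LSTM layer with 120 units, MSE loss, Adam optimizer) might look as follows; the window length, training settings and the synthetic residual series are assumptions, since the paper does not report them.

```python
# Hedged Keras sketch of the residual LSTM described above (120 units, MSE, Adam).
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

def make_windows(residuals, window=24):
    X, y = [], []
    for i in range(len(residuals) - window):
        X.append(residuals[i:i + window])
        y.append(residuals[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

sarima_residuals = np.random.default_rng(0).normal(0.0, 0.3, 97)  # synthetic stand-in
X, y = make_windows(sarima_residuals)

model = Sequential([
    LSTM(120, input_shape=(X.shape[1], 1)),   # 120 hidden-layer neurons, 1-D input
    Dense(1),                                 # 1-D output (next residual value)
])
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=100, batch_size=16, verbose=0)   # epochs/batch size are assumptions
next_residual = model.predict(X[-1:], verbose=0)
```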
From the experimental results in Fig. 9, it can be seen that the predicted values of the combined SARIMA-LSTM model are closer to the original wind speed values and that the trend of the curve is basically the same, so the combined model predicts the wind speed time series with higher accuracy.
4.4 Error Analysis
In order to verify the effectiveness of this combined model in wind speed prediction, under the same experimental conditions, the SARIMA model, BP network model, LSTM network model, the combined model of LSTM and ARIMA and the combined model of SARIMA and LSTM in this paper are used to perform wind speed time
series prediction. The average absolute error (MAE), the root mean square error (RMSE) and the average absolute percentage error (MAPE) are selected as the criteria for evaluating the quality of the prediction results. Their definitions are shown in formulas (15)–(17):

\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N}\left|x_t - \hat{x}_t\right| \qquad (15)

\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(x_t - \hat{x}_t\right)^2} \qquad (16)

\mathrm{MAPE} = \frac{1}{N}\sum_{t=1}^{N}\left|\frac{x_t - \hat{x}_t}{x_t}\right| \times 100\% \qquad (17)
where x_t is the observed wind speed at time t and \hat{x}_t is the predicted wind speed at time t. The prediction results are shown in Table 3.

Table 3. Comparison of prediction results

Model           RMSE (m/s)  MAE (m/s)  MAPE (%)
SARIMA          0.8246      0.4303     7.72
BP              0.6035      0.4653     7.188
LSTM            0.5609      0.4747     7.203
ARIMA + LSTM    0.5527      0.3964     6.35
SARIMA + LSTM   0.5328      0.3283     5.34
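Formulas (15)–(17) translate directly into a small helper; the sketch below is illustrative only.

```python
# Hedged sketch computing MAE, RMSE and MAPE as defined in formulas (15)-(17).
import numpy as np

def error_metrics(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    mae = np.mean(np.abs(err))                       # formula (15)
    rmse = np.sqrt(np.mean(err ** 2))                # formula (16)
    mape = np.mean(np.abs(err / observed)) * 100.0   # formula (17); assumes no zero observations
    return mae, rmse, mape
```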
From the error results in Table 3, it is clear that because the short-term wind speed series has not only linear but also nonlinear and non-stationary characteristics, a single model is not very effective for wind speed series prediction. The SARIMA model is suitable for predicting linear time series, while the LSTM neural network model is more effective for nonlinear time series. The prediction accuracy of the BP neural network model is generally related to the size of the training sample set: the more training data, the better the predictive results, but the time complexity is also higher. Moreover, no periodicity is extracted when LSTM and ARIMA are combined. The combined model proposed in this paper makes full use of the advantages of the SARIMA model and the LSTM network model. From the comparison of the prediction results in Table 3, it can be seen that the accuracy of the combined model in wind speed time series prediction is significantly better than that of the other four models.
5 Conclusion For the prediction of wind speed time series data, this paper proposes a combined predicting model based on SARIMA and LSTM. First, the SARIMA model is used to process the original wind speed time series data to obtain the predicted value and the
residual of the wind speed series; the residual is then reconstructed and used as the training samples of the LSTM. After training, the LSTM predicts the residual, and the two prediction results are summed to obtain the wind speed prediction. Through a series of simulation experiments, the following conclusions are drawn: (1) The combined wind speed prediction model proposed in this paper fits the actual wind speed curve well when forecasting the wind speed sequence, which verifies the effectiveness of the method. (2) The method proposed in this paper is clearly superior to the other methods in the comparison experiments on wind speed sequence prediction; its wind speed predictions are more accurate, which makes it more valuable for guiding the safety of ship navigation at sea. Acknowledgments. This research is supported by National Key Research and Development Program of China under grant number 2017YFC1405404, and Green Industry Technology Leading Project (product development category) of Hubei University of Technology under grant number CPYF2017008.
References 1. Ding, M., Zhang, L., Wu, Y.: Wind speed forecast model for wind farms based on time series analysis. Electric Pow. Autom. Equip. 25(8), 32–34 (2005) 2. Erdem, E., Shi, J.: ARMA based approaches for forecasting the tuple of wind speed and direction. Appl. Energy 88(4), 1405–1414 (2011) 3. Zuluaga, C.D., Alvarez, M.A., Giraldo, E.: Short-term wind speed prediction based on robust Kalman filtering: an experimental comparison. Appl. Energy 156, 321–330 (2015) 4. Wang, L., Dong, L., Liao, X., Gao, Y.: Short-term power prediction of a wind farm based on wavelet analysis. Proc. CSEE 29(28), 30–33 (2009) 5. Abedinia, O., Lotfi, M., Bagheri, M., Sobhani, B., Shafie-Khah, M., Catalão, J.P.: Improved EMD-based complex prediction model for wind power forecasting. IEEE Trans. Sustain. Energy 11(4), 2790–2802 (2020) 6. Zendehboudi, A., Baseer, M., Saidur, R.: Application of support vector machine models for forecasting solar and wind energy resources: a review. J. Clean. Prod. 199, 272–285 (2018) 7. Santamaría-Bonfil, G., Reyes-Ballesteros, A., Gershenson, C.: Wind speed forecasting for wind farms: a method based on support vector regression. Renew. Energy 85, 790–809 (2016) 8. Alexiadis, M., Dokopoulos, P., Sahsamanoglou, H.: Wind speed and power forecasting based on spatial correlation models. IEEE Trans. Energy Convers. 14(3), 836–842 (1999) 9. Cao, Q., Ewing, B.T., Thompson, M.A.: Forecasting wind speed with recurrent neural networks. Eur. J. Oper. Res. 221(1), 148–154 (2012) 10. Suykens, J.A., Vandewalle, J.: Least squares support vector machine classifiers. Neural Process. Lett. 9(3), 293–300 (1999) 11. Mishra, S.P., Dash, P.K.: Short term wind speed prediction using multiple kernel pseudo inverse neural network. Int. J. Autom. Comput. 15(1), 66–83 (2017). https://doi.org/10.1007/ s11633-017-1086-7 12. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
Dimensionality Reduction on Metagenomic Data with Recursive Feature Elimination Huong Hoang Luong1 , Nghia Trong Le Phan1 , Tin Tri Duong1 , Thuan Minh Dang1 , Tong Duc Nguyen1 , and Hai Thanh Nguyen2(B) 2
1 FPT University, Can Tho 900000, Vietnam College of Information and Communication Technology, Can Tho University, Can Tho, Vietnam [email protected]
Abstract. The Fourth Industrial Revolution has a significant impact on many aspects, which help improve and develop significantly. These beneficial works give a better life for all society. When we mention the medical or healthcare field, there has been much creative and vital research that promotes everyone’s life. Inflammatory Bowel Disease (IBD) is one of the most dangerous diseases that can cause millions of deaths every year. In this research, we would like to raise a topic about IBD diagnosis using metagenomic data to advance prediction for initial detection. The problem is not well-studied adequately due to the lack of data and information in the past. However, with the rapid development of technology, we obtain massive data where a metagenomic sample can contain thousands of bacterial species. To evaluate which species are essential to the considered disease, this work investigates a dimension reduction approach based on Recursive Feature Elimination combining with Random Forest to provide practical prediction tasks on metagenomic data. The relationship between bacteria causing IBD is what we have to figure out. Our goal is to evaluate whether we can make a more reliable prediction using a precise quantity of features decided by Recursive Feature Elimination (RFE). The proposed method gives positively promising results, which can reach 0.927 in accuracy using thirty selected features and achieve a significant improvement compared to the random feature selection. Keywords: Inflammatory bowel disease · Metagenomic data Recursive Feature Elimination · Dimensionality reduction
1 Introduction
Recently, an increasing number of deaths have been related to the gastrointestinal tract (GI tract). Many diseases negatively affect people's health. IBD is not a catastrophic disease, but it is worth mentioning: it involves factors that can cause harmful infections, and it is particularly one of the critical risk factors for colorectal cancer (CRC) [1]. IBD [2]
consists of two fundamental divisions, namely Crohn’s disease (CD) and Ulcerative colitis (UC). Digestive tract illnesses, affected by prolonged inflammation, are among the danger factors for creating malignancies. Typically, cancer is the name given to a selection of relevant disorders. In every kind, some cells in the body start to divide continuously, then expanded into surrounding muscles. If any unusual increases were happening in the colon or rectum, we could call colorectal cancer. Many factors result in the disease’s negative impacts, such as lifestyle, genetics, and environmental surroundings. According to WHO’s recorded statistics in 2018 [3], CRC was in the top three common cancers and the second most deadly cancer globally. So far, this disease affects terribly to all citizens. The number of deaths is increasing rapidly day by day. If a person suffers from this disease, there would be many dangers and bad symptoms occurring. The cancer cells only develop and increase in the colon region in initial staging. Based on that development, we can see that treatment is not complicated if we can detect it as soon as possible. After forming in the first stage, the pathogenic cells invade another part of the colon in the second stage and will be a remarkable expansion of cancer cells. If they are not eliminated soon, they start to affect negatively, which leads to the spread to lymph nodes. Moreover, the condition may get severe if more and more lymph nodes are under attack. Subsequently, almost all of the tissues, the cells, and the organs are affected by cancer cells when the disease comes to the final stage. The cure for this period now brings a little hope and can lead to death. Because of harmful symptoms, we have to prevent and detect the patients suffering from the disease as soon as possible. The more people recognize the signs early, the more percentages of deaths related to CRC will decrease. With the advancement of medication, we can enhance patients’ experiences. Recently, they have to diagnose and classify the disease as soon as possible to find out correct treatments for themselves. Moreover, reducing expenses is also one of the main factors they will consider while curing the disease. Besides, medical companies provide chances to produce tools for patients who do not adapt to medications as expected. On the direct side, health sponsors, health care providers, and hospitals face one more challenge. That is how they can illustrate data for diagnosing effectively and efficiently. There are many types of data for processing and presenting, such as metagenomics data and imaging. This field consists of ecological genetics, environmental genetics, or simple genetics as well. Metagenomics plays a vital role in microbial variety. It can help us research and understand the living of tiny organisms better and gain more valuable knowledge about the existing system. From that study, we will aim to combine technology, significantly enhancing the machine learning field linking with medical information, to classify bacteria based on their characteristics that will cause IBD. Many popular applications of personalized medication and applying machine learning are efficient, not only within the idea of helping scientists in quick diagnosis but also for every stakeholder’s benefit. Moreover, technology, especially
machine learning, brings many applications and tools to healthcare. With the support of machine learning, fields such as disease classification and recognition from data, drug discovery, and image processing and diagnosis are being advanced. The purpose of this research is to propose a new method that uses Recursive Feature Elimination together with Random Forest to enhance the accuracy of IBD prediction, and to provide reliable prediction results on the given datasets. The rest of the paper contains four primary sections. In the following Sect. 2, we present related research. The methodology in Sect. 3 explicitly describes all of the methods used in the paper. Sect. 4 then details the experiments. Finally, Sect. 5 concludes the paper and summarizes the relevant fields related to the study.
2 Related Research
Many researchers have applied Machine Learning to analyze and visualize measurement datasets, with research related to IBD. Significantly, the author has researched the diagnosis of disease based on Network-Based Biomarker Discovery (NBBD). The purpose of this research is to integrate Network analyzes methods to prioritize potential biomarkers and machine learning techniques to assess the discriminating strength of biomarkers with preference [4]. Many types of research used RFE as the method for selecting critical features for the dataset. For example, an article applied RFE to excerpt SND-PSSM and CC-PSSM features, merging the two sets of derived features to create a new feature matrix, the RFE-SND-CC-PSSMM. Using this technique to forecast the groups of the six primary forms of enzymes effectively increases its accuracy of prediction to 94.54%, suggesting that this technique has general applicability to other protein issues [5]. Other authors apply RFE for gene selection. By integrating minimum-redundancy maximum-relevance, the author boosts the vector machine recursive feature elimination (SVM-RFE) elimination process for gene selection (MRMR). The detection of cancerous tissues from benign tissues on a variety of typical datasets has been enhanced by this approach, as it takes into account the redundancy of genes in their selection phase [6]. Besides RFE, applying K-Fold Cross Validation for dividing training and testing datasets in the research is a standard methodology for many researchers. An example of using K-Fold Cross Validation is using a technique that predicts when a protein with a specified atomic structure can fold according to two-state or multistate kinetics and tested on a training range of 63 proteins, correctly classifies 81% of the folding process. During folding, the instrument identifies a protein characterized by two or more states’ dynamics and simultaneously calculates the value of the process’s constant rate [7]. Moreover, the other authors researched how developmental brain differences could anticipate IQ changes in the teenage years. Data obtained from 33 stable adolescents are subjected to two well-known k-fold cross-validation tests - both evaluated for 3.5 years at Time 1
and Time 2. Fifty-three percent of verbal IQ change and 14% of performance IQ change are predicted by this method [8]. Additionally, choosing an algorithm is critical to achieving high accuracy of the trained model. Random Forest is typically used for training models in numerous kinds of research related to machine learning. The author in [9] demonstrated the combination of gradient boosting regression and Random Forest modeling. The results of both MaxEnt and Random Forest strategies demonstrate robust predictive ability, with the area under the curve (AUC) and true skill statistics (TSS) measures over 0.8 and 0.6, respectively [10].
3 Methodology
3.1 Implementing Process for Study
Our research can be divided into eight main processes, which are explained below and illustrated by the flowchart in Fig. 1. The processing steps of the study are:
1. Read the dataset from the provided CSV files: based on the dataset described in Subsect. 4.1, the metagenomic data are read from the CSV files and stored.
2. Set the number of features for the training dataset: to choose the critical relationships among all features used for training, the number of selected features is set (in this research, 5, 20, and 30, respectively).
3. Select features by Recursive Feature Elimination: based on the number of features set in the previous step and the features read from the CSV files, RFE is applied to figure out the features that are most related.
4. Set the number of folds for dividing the dataset: to split the dataset into different folds for training and testing, the number of folds is set (in this research, ten folds).
5. Split the data by K-Fold Cross-Validation: the dataset is split into several folds, one used for testing and the others for training.
6. Train the model with the Random Forest algorithm: after the training dataset has been formed, the Random Forest algorithm is applied to train the model, which is then tested on the remaining data and compared with randomly selected features.
7. Test the model and calculate scoring metrics: the test data are evaluated with the model trained in the previous step, and the metrics used to assess accuracy and efficiency are calculated. The three metrics are Accuracy (ACC), Area Under Curve (AUC), and Matthews correlation coefficient (MCC).
8. Show results: after calculating the scoring metrics, the results are shown and illustrated in graphs.
Fig. 1. The implementation process flowchart
3.2 Using Recursive Feature Elimination for Feature Selection
Recursive Feature Elimination, or RFE for short, is a popular feature selection algorithm. RFE is popular because it is simple to configure and use, and it effectively selects the features (columns) of a training dataset that are most relevant for predicting the target variable. Two important configuration choices when using RFE are the number of features to select and the estimator used to rank the features [11]. Technically, RFE is a wrapper-style feature selection algorithm that also uses filter-based feature selection internally. RFE works by searching for a subset of features: it starts with all features in the training dataset and successively removes features until the required number remains. This is achieved by fitting the given machine learning algorithm at the core of the model, ranking features by importance, discarding the least important features, and refitting the model. This procedure is repeated until the specified number of features remains [12].
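A scikit-learn sketch of RFE wrapped around a random forest is shown below; the synthetic abundance matrix and labels are placeholders for the real CSV data, and the 30-feature setting mirrors the scenario in Sect. 4.

```python
# Hedged sketch: Recursive Feature Elimination with a Random Forest estimator.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X_train = pd.DataFrame(rng.random((98, 260)),
                       columns=[f"species_{i}" for i in range(260)])  # placeholder abundances
y_train = rng.integers(0, 2, 98)                                      # placeholder IBD/healthy labels

rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFE(estimator=rf, n_features_to_select=30, step=1)         # keep the top 30 features
selector.fit(X_train, y_train)

selected_species = X_train.columns[selector.support_]                 # names of retained features
X_train_reduced = selector.transform(X_train)
```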
3.3 Using Random Forest Algorithm
Random Forest is a versatile, easy-to-use machine learning algorithm. It consists of multiple decision trees, each with the same nodes but built from different data, which leads to different leaves. One approach to dimensionality reduction is to build a large, carefully constructed set of trees against a target attribute and then use each attribute's usage statistics to determine the most informative set of features. This idea can reduce the number of features in the dataset without losing much information while keeping (or improving) the model's performance, and it is an effective way to handle large datasets [13]. The popularity of the Random Forest algorithm in research has been proved through its numerous advantages. It can handle binary, categorical, and numerical
features. Moreover, only minor pre-processing is needed, so the data do not have to be rescaled or transformed before processing.
3.4 Using Feature Selection Randomly for Comparison
For comparison, we also used randomly chosen features. Feature selection with RFE and Random Forest is described in the previous Subsect. 3.2, and the same IBD datasets are used here. In the random approach, the same number of features as selected by RFE is chosen at random, so that the random selection matches the RFE selection in quantity. The subsequent process of using the chosen features for training and testing is the same.
3.5 Using K-Fold Cross-Validation for Separating Training and Testing Data
K-fold Cross-Validation is a statistical approach that splits the data into two segments in order to test and compare learning algorithms: one segment is used to learn or train a model and the other is used to validate it. In k-fold cross-validation, the training and validation sets cross over in consecutive rounds so that each data point gets a chance to be validated against [14]. In the special case where k is set to n, the size of the dataset, each sample is used to evaluate the model once; this approach is known as leave-one-out cross-validation. In this work, we use k = 10, a commonly used value that has been shown empirically to give small error and low variance [15].
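The 10-fold split can be sketched with scikit-learn as follows; the data here are synthetic placeholders and only the k = 10 setting comes from the text.

```python
# Hedged sketch: 10-fold cross-validation over the RFE-reduced features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X_reduced = rng.random((98, 30))          # 30 features kept by RFE (placeholder data)
y = rng.integers(0, 2, 98)                # placeholder labels

kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X_reduced):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_reduced[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X_reduced[test_idx])))
print("mean ACC over 10 folds:", np.mean(scores))
```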
3.6 Using Scoring Metrics to Evaluate Predicting Results
We will use the six datasets discussed in the following Subsect. 4.1. These datasets are divided into two groups: one dataset for training and the others for testing. We select ibdfullHS CDr x for training, and the five remaining datasets are later used for testing. We used three scoring criteria for assessing the outcomes, as sketched in the code after this list:
– Accuracy (ACC): ACC values vary from 0.0 to 1.0. A value of 1.0 implies perfect prediction, and a value of 0.0 means all predictions are false. Based on the number of correct predictions and the total number of predictions, accuracy is defined as in Eq. (1):

\mathrm{Accuracy} = \frac{num_{correct}}{num_{total}} \qquad (1)

– Matthews correlation coefficient (MCC): a measure commonly used in biomedical research [16]. MCC values range from −1.0 to +1.0. A value of +1.0 implies that all predictions are correct, a value of 0.0 means the prediction is no better than random, and a value of −1.0 indicates total disagreement between prediction and observation [17].
– Area Under Curve (AUC): another widely used measure in biomedical science. AUC measures the entire area under the ROC curve, with values varying from 0.0 to 1.0 [18].
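As sketched here, all three metrics are available directly in scikit-learn; the toy labels and scores are placeholders.

```python
# Hedged sketch: computing ACC, MCC and AUC with scikit-learn (toy labels shown).
from sklearn.metrics import accuracy_score, matthews_corrcoef, roc_auc_score

y_true = [1, 0, 1, 1, 0]                  # ground truth (1 = IBD, 0 = healthy)
y_pred = [1, 0, 0, 1, 0]                  # hard predictions of the classifier
y_score = [0.9, 0.2, 0.4, 0.8, 0.1]       # predicted probability of the IBD class

acc = accuracy_score(y_true, y_pred)      # ACC, Eq. (1)
mcc = matthews_corrcoef(y_true, y_pred)   # MCC in [-1, 1]
auc = roc_auc_score(y_true, y_score)      # AUC in [0, 1]
```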
4 Experiments
4.1 Dataset
The data were originally released on 3 February 2016 with a well-known research paper on fungal microbiota dysbiosis in IBD. The authors collected data from 235 patients with IBD and 38 healthy subjects (HS) using clinical 16S and ITS2 sequencing [19]. Based on the six published IBD datasets, we extract some important statistics, and we carry out the processes of training, testing, and making the relevant comparisons to calculate the necessary metrics, as shown in Table 1, where UC denotes "Ulcerative Colitis", CD denotes "Crohn's disease", f denotes "flare", r denotes "remission" and i denotes "ileal".

Table 1. Detail information of the IBD dataset

Information                  CDf  CDr  iCDf  iCDr  UCf  UCr
Total number of features     260  258  248   258   251  238
Number of IBD patients       60   77   44    59    41   44
Number of healthy subjects   38   38   38    38    38   38
Total number of subjects     98   115  82    97    79   82

4.2 Using RFE with Random Forest Algorithm to Select Features and Train Model
After the training dataset has been read and all of its features extracted, RFE determines the relationships among the features in that dataset and ranks how essential each feature's relationship with the others is. After calculating and analyzing several feature sets, it chooses the top features for later processing. After that, Random Forest is applied for training based on the key features chosen by RFE, and testing then gives the final result, which is mainly based on the accuracy metric. In this study, we have six datasets related to IBD; each one is used in turn for training and the others for testing. After the training phase, the result is recorded with the scoring metrics ACC, AUC, and MCC. With K-Fold Cross-Validation equal to 10, the number of selected features using
RFE is 5, 20, and 30, respectively; the results have been figured out using ibdfullHS CDr x as the training dataset, and ibdfullHS CDf x as testing dataset giving the highest result. We describe the three scenarios for selecting features using RFE with Random Forest on the dataset ibdfullHS CDr x as the training dataset and the others for testing dataset in the 3 following graphs. In the first scenario illustrated in Fig. 2, we apply RFE to figure out five features having a critical relationship from the dataset. The accuracy has ranged from 73% to 91%.
Fig. 2. Bar chart with 5 selected features
In the second scenario illustrated in Fig. 3, we apply RFE to figure out 20 features having the critical relationship from the dataset. The accuracy is higher than the first scenario, having a range from 78% to 91%. In the third scenario, which is illustrated in Fig. 4, we apply RFE to figure out 30 features having the critical relationship from the dataset. The accuracy achieves the highest, having a range from 79% to 93%. Based on the three-bar charts before, we have illustrated a line chart shown in Fig. 5, showing an overall result when applying RFE with Random Forest algorithm for training and testing each dataset. The accuracy metric has been chosen as the critical value for making a comparison between dataset and features. We have found out some critical results after illustrating this line graph. Firstly, the accuracy score achieved the highest when choosing ibdfullHS CDf x as the training dataset. Secondly, selecting 30 features has the highest accuracy compared with the others. However, it is noticeable that there will be not many differences as the features chosen are getting too many. Finally, the average accuracy score was the highest when selecting ibdfullHS iCDf x as the training dataset, compared with the other five datasets.
Fig. 3. Bar chart with 20 selected features
Fig. 4. Bar chart with 30 selected features
Fig. 5. Accuracy metric comparison between 5, 20, and 30 features
4.3 Predicting Results and Comparison with Randomly Selected Features
Following Subsect. 4.2, we compare the results of applying RFE with Random Forest for feature selection (with K-Fold Cross-Validation equal to 10 and the number of selected features equal to 30) against randomly selected features. Using ibdfullHS CDr x as the training dataset, the comparison is shown in Table 2.

Table 2. Using ibdfullHS CDr x as the training dataset

Scenario  Features  Metrics  CDf      CDr      iCDf     iCDr     UCf      UCr
RFE       30        ACC      0.92778  0.81515  0.91250  0.82556  0.89821  0.79306
                    AUC      0.92500  0.79048  0.90417  0.82500  0.89917  0.79333
                    MCC      0.86084  0.60852  0.83427  0.67246  0.81208  0.59241
Random    30        ACC      0.82444  0.81591  0.82639  0.80778  0.89821  0.60000
                    AUC      0.81667  0.79226  0.81750  0.78750  0.89917  0.59250
                    MCC      0.66872  0.61098  0.66732  0.60355  0.81208  0.21080
The table illustrates the results of the comparison for each testing dataset. Choosing 30 features gives a detailed comparison between the two scenarios, Recursive Feature Elimination versus random selection. The three metrics shown in the table, ACC, AUC, and MCC, show that RFE gives higher results than random selection in most cases. We can see that applying RFE (with the Random Forest algorithm) yields higher and more reliable results than selecting features randomly.
5 Conclusion
The Fourth Industrial Revolution has created opportunities for meaningful and remarkable development in all fields worldwide, especially healthcare and medicine. After using Recursive Feature Elimination for selecting features and applying Random Forest as the training algorithm, we can state that our proposed method for predicting IBD patients has reached encouraging results on the prepared dataset. Compared with random feature selection, the accuracy is higher and more reliable when significant relationships between features are identified. This approach has improved the accuracy (approximately 93%, higher than randomly choosing features) and gives promising results when applying metagenomic data for detection. Further research can be built on this premise, which can lead people to a healthier life through early detection of many types of diseases or illnesses, not just IBD.
References 1. Stidham, R.W., Higgins, P.: Colorectal cancer in inflammatory bowel disease. Clin. Colon Rectal Surg. 31(3), 168–178 (2018). https://doi.org/10.1055/s-0037-1602237 2. Flynn, S., Eisenstein, S.: Inflammatory bowel disease presentation and diagnosis. Surg. Clin. North America 99(6), 1051–1062 (2019). https://doi.org/10.1016/j.suc. 2019.08.001 3. World Health Organization. Cancer. https://www.who.int/news-room/factsheets/detail/cancer. Accessed 25 Jan 2021 4. Abbas, M., et al.: Biomarker discovery in inflammatory bowel diseases using network-based feature selection. PLoS ONE 14(11), e0225382 (2019). https://doi. org/10.1371/journal.pone.0225382 5. Yuan, F., Liu, G., Yang, X., Wang, S., Wang, X.: Prediction of oxidoreductase subfamily classes based on RFE-SND-CC-PSSM and machine learning methods. J. Bioinform. Comput. Biol. 17(4), 1950029 (2019). https://doi.org/10.1142/ S021972001950029X 6. Mundra, P.A., Rajapakse, J.C.: SVM-RFE with MRMR filter for gene selection. IEEE Trans. Nanobiosci. 9(1), 31–37 (2010). https://doi.org/10.1109/TNB.2009. 2035284 7. Capriotti, E., Casadio, R.: K-Fold: a tool for the prediction of the protein folding kinetic order and rate. Bioinformatics (Oxford, England) 23(3), 385–386 (2007). https://doi.org/10.1093/bioinformatics/btl610 8. Price, C.J., Ramsden, S., Hope, T.M., Friston, K.J., Seghier, M.L.: Predicting IQ change from brain structure: a cross-validation study. Dev. Cogn. Neurosci. 5, 172–184 (2013). https://doi.org/10.1016/j.dcn.2013.03.001. Epub 15 March 2013. PMID: 23567505; PMCID: PMC3682176 9. Cai, J., Kai, X., Zhu, Y., Fang, H., Li, L.: Prediction and analysis of net ecosystem carbon exchange based on gradient boosting regression and random forest. Appl. Energy 262, 114566 (2020). https://doi.org/10.1016/j.apenergy.2020.114566. ISSN 0306-2619 10. Acharya, B.K., et al.: Mapping environmental suitability of scrub typhus in Nepal using MaxEnt and random forest models. Int. J. Environ. Res. Public Health 16(23), 4845 (2019). https://doi.org/10.3390/ijerph16234845 11. Brownlee, J.: Recursive Feature Elimination (RFE) for Feature Selection in Python (2020). https://machinelearningmastery.com/rfe-feature-selection-inpython/. Accessed 27 Jan 12. Darst, B.F., Malecki, K.C., Engelman, C.D.: Using recursive feature elimination in the random forest to account for correlated variables in high dimensional data. BMC Genet. 19(Suppl. 1), 65 (2018). https://doi.org/10.1186/s12863-018-0633-8 13. Dimitriadis, S.I., Liparas, D.A., Initiative, D.N.: How random is the random forest? Random forest algorithm on structural imaging biomarkers’ service for Alzheimer’s disease: from Alzheimer’s disease neuroimaging initiative (ADNI) database. Neural Regeneration Res. 13(6), 962–970 (2018). https://doi.org/10.4103/1673-5374. 233433 14. Brownlee, J.: A Gentle Introduction to k-fold Cross-Validation (2018). https:// machinelearningmastery.com/k-fold-cross-validation/. Accessed 28 Jan 2021 15. Wang, Y., Li, J.: Credible intervals for precision and recall based on a k-fold crossvalidated beta distribution. Neural Comput. 28(8), 1694–1722 (2016). https://doi. org/10.1162/NECO a 00857
16. Chicco, D., Jurman, G.: The Matthews correlation coefficient (MCC) advantages over F1 score and accuracy in binary classification evaluation. BMC Genomics 21(1), 6 (2020). https://doi.org/10.1186/s12864-019-6413-7 17. Wikipedia. Matthews Correlation Coefficient (2020). https://en.wikipedia.org/ wiki/Matthews correlation coefficient. Accessed 28 Jan 2021 18. Ma, H., Bandos, A.I., Gur, D.: On the use of partial area under the ROC curve for comparison of two diagnostic tests. Biometrical J. Biometrische Zeitschrift 57(2), 304–320 (2015). https://doi.org/10.1002/bimj.201400023 19. Sokol, H., et al.: Fungal microbiota dysbiosis in IBD. Gut 66(6), 1039–1048 (2017). https://doi.org/10.1136/gutjnl-2015-310746
The Application of Improved Grasshopper Optimization Algorithm to Flight Delay Prediction–Based on Spark Hongwei Chen, Shenghong Tu(&), and Hui Xu School of Computer Science, Hubei University of Technology, Wuhan 430068, China
Abstract. Flight delay prediction can improve the quality of airline services and help air traffic control agencies develop more accurate flight plans. This paper proposes a distributed, improved grasshopper optimization algorithm based on Spark to optimize the parameters of a random forest classification model (SPGOA-RF) for flight delay prediction. SPGOA-RF uses an adaptive chaotic descent method based on the Logistic mapping and the Sigmoid curve to enhance the randomness of the grasshopper optimization algorithm, thereby improving the early exploration and later exploitation capabilities of the algorithm and accelerating convergence. The improved grasshopper optimization algorithm is used to tune the random forest parameters to obtain a better-performing classification model. In addition, the Spark platform is used to implement the distributed grasshopper-optimization training of the model, which effectively improves its operating efficiency. The simulation results show that, compared with the unoptimized algorithm, the SPGOA-RF flight delay prediction accuracy can reach 89.17%.
1 Introduction In the modern transportation system, air transportation plays a very important role, and its transportation efficiency is irreplaceable by other means of transportation. World Air Cargo Corporation predicts that the aviation market will maintain a 4.7% growth rate [1]. With the increase of flights, flight delays and cancellations become more common. They may affect not only personal travel but also the airline's service quality, resulting in declined customer satisfaction and increased costs, thereby seriously damaging the interests of airlines [2]. Generally, the research methods used for flight delay prediction can be divided into three categories: prediction methods based on statistical inference [3], prediction methods based on simulation models [4, 5], and prediction methods based on machine learning and deep learning [6]. In recent years, several achievements have been made in flight delay prediction. Rahul Nigam and K. Govinda used machine learning to combine weather conditions such as temperature, humidity, and rainfall with airport data to establish a Logistic regression model [7]. BinYu and ZhenGuo combined a novel deep belief network with a support vector machine model to train the neural network and improve the prediction accuracy [8]. Ding Yi regarded flight delay prediction as a regression
problem, focusing on identifying variables that affect flight delays and using them to build multiple linear regression models [9]. This paper uses a classification method based on machine learning. It uses improved grasshopper optimization algorithm to find the most suitable random forest parameters and selects the best random forest model to predict flight delay data. Dealing with the problem of long time and low efficiency of model training during the algorithm operation, the Spark platform was introduced to implement the distributed SPGOA-RF algorithm to improve the operation efficiency of the model.
2 Grasshopper Optimization Algorithm The Grasshopper Optimization Algorithm (GOA) is a new intelligent optimization algorithm proposed in 2016, based on the predation behavior of grasshopper swarms in nature [10]. GOA has the advantages of a simple principle and easy implementation, but it also tends to fall into local optima and to converge slowly. The first feature of a grasshopper swarm is the slow, small-step movement of the larvae; the second is the long-distance movement and migration of the adults. These two behaviors correspond to the local search and global search of evolutionary algorithms. The mathematical model of the grasshopper optimization algorithm is:

X_i = S_i + G_i + A_i \qquad (1)

where X_i is the current position of the i-th grasshopper, S_i is the social interaction force, G_i is the gravity acting on the i-th grasshopper, and A_i is the wind advection force on the i-th grasshopper. The iterative model and implementation steps of the GOA algorithm are:
1) Initialize the grasshopper population size N, the search-space dimension dim, and the maximum number of iterations Max_iter; c_max and c_min are the maximum and minimum values of parameter c. The position of a grasshopper in each iteration is expressed as x_i = (x_i^1, x_i^2, \ldots, x_i^d, \ldots, x_i^{dim}), d \in \{1, 2, \ldots, dim\}, where x_i^d is the d-th dimension of the i-th grasshopper.
2) Calculate the fitness of each grasshopper and save the best position in T = (T^1, T^2, \ldots, T^d, \ldots, T^{dim}), d \in \{1, 2, \ldots, dim\}, where T^d is the d-th dimension of the best grasshopper position.
3) The parameter c is a reduction coefficient whose purpose is to linearly shrink the comfort, repulsion and attraction zones. The parameter c is updated by formula (2):

c = c_{max} - t \, \frac{c_{max} - c_{min}}{Max\_iter} \qquad (2)
4) In each iteration, the position update formula 3 of each individual grasshopper is as follows:
x_i^d(t+1) = c \left( \sum_{j=1,\, j \neq i}^{N} c \, \frac{ub_d - lb_d}{2} \, s\!\left(\left|x_j^d(t) - x_i^d(t)\right|\right) \frac{x_j(t) - x_i(t)}{d_{ij}(t)} \right) + T_d \qquad (3)
where t is the iteration number, N is the grasshopper population size, x_i^d(t) is the d-th dimension of the i-th grasshopper at iteration t, and d_{ij}(t) is the distance between the i-th and j-th grasshoppers at iteration t. ub_d and lb_d are the upper and lower bounds of the grasshopper positions in dimension d, and s is the function representing the social forces between grasshoppers:

s(r) = f e^{-r/l} - e^{-r} \qquad (4)

where f represents the strength of attraction and l is the attractive length scale; as in the literature, f = 0.5 and l = 1.5 are used.
5) Calculate the fitness of each grasshopper and update the target value.
6) Judge whether the maximum number of iterations has been reached; if not, return to step 3), otherwise the algorithm ends. A compact sketch of one iteration is given below.
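The update rules above can be condensed into a short NumPy sketch of the GOA loop; the bounds, population size and quadratic fitness are placeholders, and the sketch is an illustration rather than the SPGOA-RF implementation.

```python
# Hedged NumPy sketch of GOA iterations, transcribing formulas (2)-(4).
import numpy as np

def s_func(r, f=0.5, l=1.5):
    return f * np.exp(-r / l) - np.exp(-r)            # social force, formula (4)

def goa_step(X, T, c, lb, ub):
    """X: (N, dim) positions, T: best position found so far, c: descent coefficient."""
    N, dim = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        total = np.zeros(dim)
        for j in range(N):
            if j == i:
                continue
            d_ij = np.linalg.norm(X[j] - X[i]) + 1e-12
            total += c * (ub - lb) / 2.0 * s_func(np.abs(X[j] - X[i])) * (X[j] - X[i]) / d_ij
        X_new[i] = np.clip(c * total + T, lb, ub)      # position update, formula (3)
    return X_new

# Toy usage with a quadratic fitness (placeholder for the random-forest fitness).
rng = np.random.default_rng(0)
lb, ub = 0.0, 10.0
X = rng.uniform(lb, ub, size=(20, 2))
fitness = lambda x: -np.sum((x - 3.0) ** 2)
T = X[np.argmax([fitness(x) for x in X])]
for t in range(50):
    c = 1.0 - t * (1.0 - 1e-5) / 50                   # linear descent, formula (2)
    X = goa_step(X, T, c, lb, ub)
    T = max(list(X) + [T], key=fitness)
```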
2.1 The Adaptive Descent of S-shaped Curve
In order to balance the exploration and exploitation stages, the parameter c decreases linearly during the iterations and adjusts the attraction and repulsion distances between the grasshoppers. However, a linear decline may lead to an insufficient search range in the initial phase and insufficient search time in the later phase, so the algorithm falls into a local optimum. This paper proposes an S-shaped adaptive curve descent function based on the Sigmoid function to enhance both the global and the local search. The Sigmoid function is not sensitive to inputs outside a certain range; therefore, this paper limits the function input to (−10, 10) and offsets it by a certain distance. The modified Sigmoid curve descent function is:

c = 1 - \frac{1}{1 + e^{\,a - b\,t/T}} \qquad (5)
Among them, t is the current iteration number and T is the maximum number of iterations. Setting parameter a to 10 and parameter b to 20 yields a good S-shaped adaptive function. The Sigmoid curve decreases slowly in the early exploration stage of the algorithm, which enlarges the search range, and it also decreases slowly in the final search stage, which increases the time for local development of the grasshopper optimization algorithm.
2.2 Sigmoid Curve Descent Based on Logistic Mapping
Aiming at the problems of low optimization accuracy and slow convergence speed of GOA, the Logistic chaotic mapping [11] is introduced to improve the descent parameter c of the grasshopper optimization algorithm, so as to avoid falling into local optima. In the iterative process, the global exploration ability and the local development ability of the algorithm are better balanced, and the convergence speed of the algorithm is improved. The Logistic mapping uses a one-dimensional nonlinear iterative function to characterize chaotic behavior. Through the chaotic function, the values of the adjustment parameters can be changed to generate completely different pseudorandom sequences. Compared with a blind and disordered random search, using chaotic variables to traverse the solution space of the optimization is superior. The formula for generating a Logistic chaotic sequence is:

x(k + 1) = μ × x(k) × (1 − x(k)),  k = 1, 2, ..., n    (6)
The bifurcation parameter of the Logistic mapping is set to μ = 4. When μ = 4, the system is in a completely chaotic state, so its randomness is most suitable. Combining the randomness of the Logistic chaotic mapping with the adaptive search of the S-shaped curve, a method of S-shaped curve descent based on Logistic mapping is proposed. The formula is:

C(l) = L(l) × c(l)    (7)
Among them, L(l) is the chaotic sequence value generated according to formula (6), and c(l) is the S-shaped curve value generated according to formula (5). This mechanism helps the search agents escape from local minimum traps. The adaptive method based on the chaotic sequence achieves the transition from the global search stage to the local one, so that the parameter c obtains both adaptability and randomness (Fig. 1).
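As an illustration of formulas (5)–(7), the following sketch compares the original linear descent of c with the Logistic-modulated Sigmoid descent. The variable names and the initial chaos value are illustrative assumptions.

```python
import numpy as np

def c_linear(t, T, c_max=1.0, c_min=1e-5):
    # Original GOA linear descent, formula (2)
    return c_max - t * (c_max - c_min) / T

def c_sigmoid(t, T, a=10.0, b=20.0):
    # Modified Sigmoid descent, formula (5); the argument stays in (-10, 10)
    return 1.0 - 1.0 / (1.0 + np.exp(a - b * t / T))

def logistic_sequence(n, mu=4.0, x0=0.7):
    # Logistic chaotic map, formula (6); x0 is an assumed seed in (0, 1)
    seq, x = np.empty(n), x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

T = 500
L = logistic_sequence(T)
c_chaotic = [L[t] * c_sigmoid(t, T) for t in range(T)]   # formula (7): C(l) = L(l) * c(l)
c_plain = [c_linear(t, T) for t in range(T)]
```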
Fig. 1. The comparison of parameter c.
3 SPGOA-RF Algorithm
3.1 Distributed Grasshopper Optimization Algorithm
Spark is a memory-based big data processing platform [12]. In order to effectively improve the computational efficiency of the grasshopper optimization algorithm, the idea of distributed processing is introduced. The improved grasshopper optimization algorithm is applied to the Spark platform to improve the efficiency of the algorithm in processing massive data.
Fig. 2. The process of distributed grasshopper optimization algorithm.
The algorithm flow chart of the distributed grasshopper optimization algorithm is shown in Fig. 2. It first initializes the algorithm parameters to set the number of iterations, the size of the grasshopper population, and the upper and lower limits of the grasshopper individuals. Secondly, it calls the random forest module to calculate the fitness of the grasshopper population in a distributed manner, then finds the best grasshopper individuals and broadcasts them to all partitions. Thirdly, it uses formula 6 to update the parameter c and formula 3 to update the positions of the grasshoppers. Finally, the cluster aggregates the
new positions of the grasshoppers calculated in all partitions to the master node to obtain a new grasshopper swarm. When the termination condition is reached, the optimal individual is output.
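A minimal PySpark sketch of the distributed update loop described above is given below. The RDD layout, the broadcast usage and the toy fitness function standing in for the random forest module are illustrative assumptions based on Fig. 2, not the authors' actual code.

```python
import numpy as np
from pyspark import SparkContext

def s_func(r, f=0.5, l=1.5):
    # Social force between grasshoppers, formula (4)
    return f * np.exp(-r / l) - np.exp(-r)

def update_one(i, X, best, c, ub, lb):
    # GOA position update (formula (3)) for individual i against the whole swarm X
    social = np.zeros(X.shape[1])
    for j in range(X.shape[0]):
        if j == i:
            continue
        dist = np.linalg.norm(X[j] - X[i]) + 1e-14
        social += c * (ub - lb) / 2.0 * s_func(np.abs(X[j] - X[i])) * (X[j] - X[i]) / dist
    return np.clip(c * social + best, lb, ub)

def fitness(pos):
    return float(np.sum(pos ** 2))   # toy objective standing in for the RF module (assumption)

sc = SparkContext(appName="SPGOA-sketch")
rng = np.random.default_rng(0)
N, dim, max_iter, lb, ub = 64, 3, 200, 0.0, 1.0
X = rng.uniform(lb, ub, (N, dim))
best = min(X, key=fitness).copy()
for t in range(1, max_iter + 1):
    c = 1.0 - t * (1.0 - 1e-5) / max_iter                     # or the chaotic descent of formula (7)
    b_X, b_best = sc.broadcast(X), sc.broadcast(best)         # broadcast swarm and best individual
    rdd = sc.parallelize(range(N))                            # population -> RDD partitions
    rows = rdd.map(lambda i: (i, update_one(i, b_X.value, b_best.value, c, ub, lb))).collect()
    X = np.array([row for _, row in sorted(rows)])            # aggregate on the master node
    cand = min(X, key=fitness)
    if fitness(cand) < fitness(best):
        best = cand.copy()
```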
3.2 Optimization of Random Forest Parameter Based on SPGOA
Random Forest (RF) [13] is an ensemble learning method for classification and regression, and it is one of the ensemble classification models. A random forest can select important variables and automatically identify the relative importance of each independent variable to obtain a better calculation model. It has been increasingly applied to various classification and anomaly-detection tasks. The performance of a random forest is mainly affected by the number of decision trees, the number of randomly selected features, and the maximum depth of the subtrees. Therefore, this paper chooses these three parameters of the random forest as the GOA grasshopper individual code for optimization. The coding dimensions of a grasshopper are n_estimators, max_features and max_depths. Among them, n_estimators represents the number of decision trees in the forest. Increasing the number of trees reduces the variance of the prediction and improves the classification accuracy of the model; in theory, the more decision trees, the better, but the calculation time increases accordingly. Therefore, a reasonable number of decision trees is needed to obtain the best model. Max_features represents the number of randomly selected features of each decision tree. Max_depths represents the maximum depth of each tree in the forest. Increasing the depth makes the model more powerful, but it requires a longer training time and may cause overfitting. In the iterative process, the grasshopper swarm is passed to the random forest training module, and each grasshopper is decoded and input into the random forest training to calculate its fitness value. The training retains the best grasshopper individuals and outputs them to the GOA. The GOA updates the population positions according to the best individual position to form a new population and continues to iterate the random forest module until the best individual is output.
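The following sketch shows one way a grasshopper individual could be decoded into the three random forest parameters and scored. The search ranges, the normalization of the position to [0, 1], and the use of scikit-learn with cross-validated accuracy as the fitness are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed search ranges for (n_estimators, max_features, max_depth)
LOW = np.array([10, 1, 2])
HIGH = np.array([200, 4, 20])

def decode(position):
    # Map a 3-dimensional grasshopper position in [0, 1]^3 to RF hyperparameters
    p = LOW + position * (HIGH - LOW)
    return int(p[0]), int(p[1]), int(p[2])

def rf_fitness(position, X, y):
    n_estimators, max_features, max_depth = decode(position)
    clf = RandomForestClassifier(n_estimators=n_estimators,
                                 max_features=max_features,
                                 max_depth=max_depth,
                                 random_state=0)
    # Fitness = mean cross-validated accuracy of the decoded model
    return cross_val_score(clf, X, y, cv=3).mean()

X, y = load_iris(return_X_y=True)
print(rf_fitness(np.array([0.5, 0.5, 0.5]), X, y))
```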
4 Simulation Experiment and Analysis
4.1 Analysis of the Effectiveness of Improved GOA
This paper verifies the influence of different parameter-c strategies on the convergence of the algorithm through comparative experiments. All algorithm initialization parameters are the same: the maximum number of iterations is 500, the population size is 20, and the number of dimensions is 3. In the simulation experiments, GOA uses the linear descent parameter c of the original algorithm, GOA1 uses the improved Sigmoid-curve descent parameter c, and GOA2 adds the Logistic chaotic mapping on the basis of GOA1. Figure 3 shows the iterative curves of the optimization tests on the extreme values of the 8 functions.
Fig. 3. Different function convergence curves.
It can be seen from the experimental results that the Sigmoid curve with Logistic mapping converges faster than the ordinary GOA and also faster than the Sigmoid curve alone, and it can effectively improve the accuracy. This is because the chaotic mapping greatly improves the randomness of the search, while the Sigmoid curve effectively increases the early search time of the grasshoppers and at the same time increases their later development time.
4.2 Sample Data Set Construction
To evaluate the performance and effectiveness of the SPGOA-RF, the algorithm was verified through experiments. First, three data sets and the classic traffic data set KDDCUP99 were selected from the UCI machine learning library for simulation experiments [14]. The detailed description of the data set is shown in Table 1.
Table 1. The data sets of the experiment.
Dataset    Sample size  Dim  Training set  Testing set
Iris       150          4    100           50
Glass      214          10   150           64
Wine       178          13   125           53
KDDCUP99   4898431      41   3428901       1469530
The flight delay data set uses the US Department of Transportation's flight information from 2009 to 2018. First, the original data features are processed and the time fields are formatted, including month digitization, date digitization, flight code digitization, and null value removal. Finally, in order to reflect the processing speed of Spark on data files of different sizes, the data set is divided into four subsets with different data volumes: five million, ten million, 20 million and 50 million records.
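A pandas sketch of the kind of preprocessing described above (month/date/flight-code digitization and null removal) is shown below. The column names are illustrative assumptions, since the exact schema is not given in the paper.

```python
import pandas as pd

def preprocess(path):
    df = pd.read_csv(path)
    df = df.dropna()                                   # null value removal
    dates = pd.to_datetime(df["FL_DATE"])              # assumed date column name
    df["MONTH"] = dates.dt.month                       # month digitization
    df["DAY"] = dates.dt.day                           # date digitization
    # Flight code digitization: map each carrier code to an integer id (assumed column name)
    df["CARRIER_ID"] = df["OP_CARRIER"].astype("category").cat.codes
    return df
```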
4.3 Flight Delay Detection Analysis
The experiment platform is divided into a single-machine experiment and a Spark cluster experiment. The Spark cluster environment is composed of four virtual machines. First, the different machine learning data sets are each run 10 times independently, the GOA-RF, PSO-RF, SA-RF and GA-RF algorithms are compared, and the average prediction accuracy is calculated. From the average accuracy and standard deviation in Table 2, it can be concluded that GOA achieves better accuracy than the SA and PSO algorithms, and the results of the optimized random forest are better than those of the unoptimized algorithms.

Table 2. Comparison of the accuracy of different algorithms on each data set.
Dataset     GOA-RF         SA-RF          PSO-RF         GA-RF
Iris        96.36% ± 1.6   94.25% ± 2.3   95.22% ± 2.6   96.24% ± 1.9
Glass       94.48% ± 3.1   88.14% ± 4.7   94.36% ± 3.9   93.31% ± 4.6
Wine        97.50% ± 1.1   96.16% ± 2.2   95.29% ± 3.2   96.76% ± 1.5
KDDCUP99    93.51% ± 5.1   90.50% ± 3.1   92.16% ± 3.0   90.85% ± 2.9
The single-machine experimental results on the flight delay data set are shown in Fig. 4. The results show that the optimal accuracy of GOA-RF reaches 89.17%, and that the algorithm obtains the random forest model with the optimal parameters.
Fig. 4. The iterative curve of the flight delay dataset.
4.4 Analysis of Distributed Algorithm Effect
The analysis of the effect of distributed algorithms is mainly reflected in the running time of the algorithm. Figure 5 is a comparison of the running time of SPGOA-RF with different sizes of data sets based on the Spark platform and the standalone platform. The experimental results show that the SPGOA-RF can effectively shorten the running time.
Fig. 5. Stand-alone and Spark platform time comparisons.
Taking all the experimental results into consideration, the grasshopper optimization algorithm can effectively improve the random forest classification effect. Whether on the standard UCI data sets or the KDD CUP99 intrusion detection data set, the SPGOA-RF algorithm proposed in this article achieves higher classification accuracy. Besides, the distributed algorithm based on Spark can effectively reduce the running time when processing large data sets.
5 Conclusion
This paper proposes a Spark-based distributed grasshopper optimization algorithm combined with a random forest classification algorithm. The improvements to the algorithm are summarized as follows: (1) The attraction and repulsion parameters in the update process of the grasshopper optimization algorithm are improved; chaos and adaptive strategies are introduced to overcome the limitations of the traditional parameters in the update process and effectively improve the accuracy of the algorithm. (2) Spark operators are used to map grasshopper individuals and populations, and distributed calculations update the grasshopper positions to effectively improve the running speed of the algorithm. (3) The random forest classification algorithm is combined with the grasshopper optimization algorithm, which optimizes the random forest parameters to find the optimal prediction model. The experimental results show that the improved GOA-RF algorithm has an accuracy rate of 89.17%. As the grasshopper optimization algorithm is a new type of swarm intelligence optimization algorithm, little research has been conducted on its applications. Thus, future work will focus on engineering applications.
Acknowledgments. This research is supported by the National Natural Science Foundation of China under grant numbers 61602162 and 61772180.
References 1. Niu, B., Dai, Z., Zhuo, X.: Coopetition effect of promised delivery time sensitive demand on air cargo carriers’ big data investment and demand signal sharing decisions. Transp. Res. Part E: Logist. Transp. Rev. 123, 29–44 (2019) 2. Anderson, S.W., Baggett, L.S., Widener, S.K.: The impact of service operations failures on customer satisfaction: evidence on how failures and their source affect what matters to customers. Manuf. Serv. Oper. Manag. 11(1), 52–69 (2009) 3. Tu, Y., Ball, M.O., Jank, W.S.: Estimating flight departure delay distributions—a statistical approach with long-term trend and short-term pattern. J. Am. Stat. Assoc. 103(481), 112– 125 (2008) 4. Kafle, N., Zou, B.: Modeling flight delay propagation: a new analytical-econometric approach. Trans. Res. Part B: Methodol. 93, 520–542 (2016) 5. Sternberg, A., Soares, J., Carvalho, D., Ogasawara, E.: A review on flight delay prediction. arXiv preprint arXiv:1703.06118 (2017) 6. Kim, Y.J., Choi, S., Briceno, S., et al.: A deep learning approach to flight delay prediction. In: 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), Sacramento, CA, pp. 1–6. IEEE (2016) 7. Nigam, R., Govinda, K.: Cloud based flight delay prediction using logistic regression. In: 2017 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, pp. 662–667. IEEE (2017) 8. Yu, B., Guo, Z., Asian, S., Wang, H., Chen, G.: Flight delay prediction for commercial air transport: a deep learning approach. Transp. Res. Part E: Logist. Transp. Rev. 125, 203–221 (2019) 9. Ding, Y.: Predicting flight delay based on multiple linear regression. In: IOP Conference Series: Earth and Environmental Science, Zhuhai, China, pp. 1–7 (2017) 10. Saremi, S., Mirjalili, S., Lewis, A.: Grasshopper optimization algorithm: theory and application. Adv. Eng. Softw. 105, 30–47 (2017) 11. Arora, S., Anand, P.: Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 31(8), 4385–4405 (2018). https://doi.org/10.1007/s00521-018-33432 12. Armbrust, M., Xin, R.S., Lian, C., et al.: Spark SQL: relational data processing in spark. In: Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, New York, USA, pp. 1383–1394 (2015) 13. Liaw, A., Wiener, M.: Classification and regression by random Forest. R News 2(3), 18–22 (2002) 14. Dua, D., Graff, C.: UCI Machine Learning Repository. http://archive.ics.uci.edu/ml
Application of Distributed Seagull Optimization Improved Algorithm in Sentiment Tendency Prediction
Hongwei Chen1, Honglin Zhou1(&), Meiying Li2, Hui Xu1, and Xun Zhou1
1 School of Computer Science, Hubei University of Technology, Wuhan, China
2 School of Foreign Languages, Hubei University of Technology, Wuhan, China
Abstract. Emotion analysis is of great practical significance in the aspects of network control, public opinion monitoring and public sentiment guidance. In order to obtain better accuracy of emotion classification and analyze users' emotional tendency more accurately, a distributed emotion classification model with an improved Seagull Optimization Algorithm (SOA) is proposed. The improvement of SOA with Cauchy variation and uniform population distribution (CC-SOA) is designed to solve the problems of slow convergence speed, easily falling into local optima and poor accuracy of SOA. The uniform population distribution strategy increases the diversity of the population and enhances the algorithm's ability to search around the local optimal solution. Cauchy variation helps to jump out of the local optimal solution and finally reach the global optimal solution. Due to the large amount of data to be processed and the long training time, single-machine processing cannot meet the actual requirements. Combining logistic regression with CC-SOA, a new model, LG-CCSOA, is proposed. Finally, LG-CCSOA is processed in a distributed manner on the Spark platform: distributed computing is carried out on different nodes, and the final running time is greatly reduced. After testing with benchmark functions, the simulation results show that CC-SOA has higher convergence accuracy and faster convergence speed. It has higher prediction accuracy for both small and large data sets, and the Spark platform improves the running efficiency of the algorithm.
Keywords: Seagull Optimization Algorithm · Cauchy variation · Emotional analysis · Spark
1 Introduction
Emotional analysis extracts the words with emotional tendency from comments and analyzes the emotional tendency of the commentator. In recent years, sentiment analysis has received a lot of attention in the field of natural language processing. The combination of swarm intelligence algorithms and machine learning algorithms can be applied to emotion analysis. The Seagull Optimization Algorithm (SOA) [1] is a new type of swarm intelligence algorithm. SOA and traditional swarm intelligence algorithms easily fall into local optimal solutions and converge slowly. To address this problem, scholars at home and abroad have done a lot of research to improve the
algorithm. Literature [2] proposed a firefly algorithm based on Cauchy mutation. Literature [3] proposed a fruit fly algorithm based on chaotic mutation. Literature [4] proposed a fuzzy improved c-means crow search algorithm model, which was applied in the field of feature selection. Literature [5] proposed an application model of the multi-objective optimization Antlion algorithm for engineering problems. Combining traditional machine learning algorithms with intelligent optimization algorithms can improve classification accuracy. Literature [6] proposed a model that combines the bee colony algorithm with a machine learning clustering algorithm. Literature [7] proposed a model in which the ant colony algorithm optimizes machine learning algorithms. Literature [8] proposed an optimization model based on the firefly algorithm for a routing clustering algorithm. Literature [9] proposed an application model of the Drosophila algorithm and clustered negative selection computation for spam mail classification. Logistic regression is a traditional machine learning classification algorithm, which has many forms. Literature [10] presents a kernel logistic regression method based on the confusion matrix to classify unbalanced data. Literature [11] proposed a model based on logistic regression and decision trees to predict customer churn. Literature [12] proposed a prediction model combining the butterfly optimization algorithm with machine learning algorithms. Traditional machine learning classification algorithms are not as accurate as an emotion dictionary in emotion prediction, especially when it comes to massive data, and it is also not realistic to label data in single-machine mode. Distributed Spark can perform distributed operations on massive amounts of data. Mogha et al. [13] compared, based on Spark, the classification accuracy of naive Bayes, decision tree, random forest and other algorithms; the results show that the decision tree algorithm performs best. This paper proposes CC-SOA, which can avoid falling into local optimal solutions and converge quickly. It is tested on four benchmark functions and has higher convergence accuracy and convergence speed. CC-SOA is used to optimize the logistic regression algorithm to improve classification accuracy. The classification accuracy of the optimized model is improved when testing on the small Iris data set. To solve the problem of large data sets and long training time, this paper proposes the SPCC-SOA model and introduces Spark to carry out distributed computing of CC-SOA. In order to verify the effectiveness of SPCC-SOA, the film review data set of Kaggle is used to validate the model. Finally, experiments show that the computational efficiency is improved.
2 Relevant Knowledge
The seagull is a gregarious species with two important characteristics. During migration, seagulls travel in groups. Each seagull is positioned differently during migration to avoid collisions. In a group, seagulls can move in the direction of the best position and change their own position. The two most important formulas in SOA are the position update formula and the attack position formula, shown in formulas (1) and (2):
D_s(t) = |C_s(t) + M_s(t)|    (1)

P_s(t) = D_s(t) × x × y × z + P_bs(t)    (2)
Seagulls have two main behaviors: migration and attacking. During migration, the algorithm simulates the movement of a flock of seagulls from one location to another. At this stage, seagulls should avoid collisions: in order to avoid collisions with neighbors, the algorithm uses an additional variable A to calculate the new position of the seagulls:

C_s(t) = A × P_s(t)
A = f_c − (f_c × x / Max_iter)
M_s(t) = B × (P_bs(t) − P_s(t))
B = 2 × A^2 × r_d    (3)
C_s(t) is the new position that does not conflict with other seagulls, P_s(t) is the current position of the seagull, and A is an additional variable used to calculate the new position of the seagulls. f_c is used to control the frequency of variable A and decreases linearly, and x is the current iteration. To avoid position conflicts with other seagulls, the seagulls move in the direction of the best position. M_s(t) represents the direction in which the best position is located, B is a random number that balances the search, and r_d is a random number in the range [0, 1]. When a seagull has moved to a position where it will not collide with other seagulls, it moves in the direction of the best position and reaches a new position.
Attacking: Seagulls can change their attack angle and speed during migration, and use their wings and weight to maintain their height. When they attack their prey, they make a spiral motion in the air. The operation state formula is as follows:

x' = u × e^(kv) × cos(k)
y' = u × e^(kv) × sin(k)
z' = u × e^(kv) × k
P_s(t) = (D_s × x' × y' × z') + P_bs(t)    (4)

u × e^(kv) is the radius of the helix, k is a random number in 0 ≤ k ≤ 2π, u and v are constants which define the shape of the helix, and P_bs(t) represents the current optimal position, which is updated by the other seagulls.
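To make formulas (1)–(4) concrete, here is a minimal Python sketch of one SOA position update step. The constants f_c, u and v and the use of a single random spiral angle per seagull are illustrative assumptions.

```python
import numpy as np

def soa_step(P, P_best, t, max_iter, fc=2.0, u=1.0, v=1.0, rng=np.random):
    # P: (N, dim) seagull positions, P_best: (dim,) best position found so far
    A = fc - t * (fc / max_iter)                 # linearly decreasing control variable
    new_P = np.empty_like(P)
    for i in range(P.shape[0]):
        Cs = A * P[i]                            # collision avoidance
        B = 2.0 * (A ** 2) * rng.random()        # balancing random factor, r_d in [0, 1]
        Ms = B * (P_best - P[i])                 # move towards the best seagull
        Ds = np.abs(Cs + Ms)                     # formula (1)
        k = rng.uniform(0.0, 2.0 * np.pi)        # spiral angle
        r = u * np.exp(k * v)                    # helix radius
        x, y, z = r * np.cos(k), r * np.sin(k), r * k
        new_P[i] = Ds * x * y * z + P_best       # formulas (2)/(4): attacking
    return new_P
```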
3 Improving SOA
3.1 Population Uniform Distribution Strategy
According to formula (3) above, variable A is used to control the new position of the seagull, and A varies with the number of iterations in a linearly decreasing process. In the initial iterations, the descent speed is too fast,
and the position information of the seagulls cannot be traversed well, which eventually leads to insufficient global search and makes it difficult to achieve the global optimum. In the later stage, the decrease of A slows down as the number of iterations increases, resulting in insufficient local search ability and a slower convergence speed. In order to solve this problem, a new function is introduced by formula (5):

C_t = r × ( lg(C_max − k × t / T_max) + exp(−k × (C_max − C_min) × t / T_max) )    (5)
r is a uniformly distributed random number in the range [0, 1], and k is a fixed value. According to the above formula, C_t replaces the linear decline and decreases nonlinearly as the number of iterations increases over the whole iteration process. In the early stage, the rate of decline is relatively slow, which enables good global exploration and makes it easier to find the optimal solution. In the later stage, the rate of decline is faster, and convergence is achieved quickly.
3.2 Cauchy Mutation
The position of a seagull is updated based on the positions of other seagulls, and they all move in one direction. SOA has a strong local optimization ability, but its global optimization capability has shortcomings, and local optimal solutions are prone to appear. To address the problem that SOA easily falls into local optimal solutions, a Cauchy mutation operation is added in every iteration of the seagulls. After mutation, the population diversity of the seagulls is increased. The purpose of this is to jump out of the local optimal solution and find a new optimal solution. The curve of the Cauchy distribution function is relatively smooth and its peak value is relatively small. After the Cauchy mutation, the individual extremum of the seagulls takes less time to update the local optimal position and more time to search for new points in the global space. In the process of global location optimization, better convergence is achieved. The probability density function of the Cauchy distribution is as follows:

f(x) = (1/π) × a / ((x − x_0)^2 + a^2)    (6)

Cauchy variation is performed on P_bs(t) of formula (2). The variation formula is as follows:

P_bs(t) = P_bs(t) × Cauchy(0, 1)    (7)
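A short sketch of the Cauchy mutation of formula (7), using NumPy's standard Cauchy sampler, is shown below. The greedy acceptance check (keeping the mutated position only when it improves the fitness) is an illustrative assumption.

```python
import numpy as np

def cauchy_mutation(P_best, fitness, rng=np.random.default_rng()):
    # Perturb the best seagull with a standard Cauchy(0, 1) factor, formula (7)
    candidate = P_best * rng.standard_cauchy(size=P_best.shape)
    # Keep the mutated position only if it improves the fitness (greedy check, assumption)
    return candidate if fitness(candidate) < fitness(P_best) else P_best

# Usage with a toy sphere objective
f = lambda x: float(np.sum(x ** 2))
print(cauchy_mutation(np.array([0.3, -0.2, 0.5]), f))
```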
The flow chart of CC-SOA is shown in Fig. 1.
Fig. 1. The flow chart of CC-SOA.
3.3 CC-SOA Based on Spark
Spark is a big data framework based on in-memory computing. Its biggest advantages are memory-based computation, scalability, high availability and load balancing. In order to improve the computational efficiency of the algorithm, CC-SOA was used to optimize the logistic regression algorithm and the optimized model was distributed on the Spark platform. In the data-processing part, the data set is first collected and uploaded to HDFS; then a Spark object is created and the textFile() operator is used to read the data. The data is persisted in memory through a series of data-processing operations. Stop words are loaded to construct the bag-of-words vector, and the vector is then broadcast to the partitions. After that, the test set and training set are extracted from memory and the data is trained in each partition. Finally, the training results are plotted. The specific implementation flow chart is shown in Fig. 2.
Fig. 2. The flow chart of the distributed implementation.
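The following PySpark sketch mirrors the data flow of Fig. 2 (reading from HDFS, broadcasting the bag-of-words vocabulary and training per partition). The file paths, the data format, and the per-partition logistic-regression training with scikit-learn are illustrative assumptions, not the authors' code; the regularization parameter would be the value tuned by CC-SOA.

```python
from pyspark import SparkContext
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

sc = SparkContext(appName="LG-CCSOA-sketch")
lines = sc.textFile("hdfs:///data/movie_reviews.tsv")          # assumed HDFS path and format
lines.persist()

stop_words = sc.textFile("hdfs:///data/stop_words.txt").collect()   # assumed stop-word file
texts = lines.map(lambda l: l.split("\t")[0]).collect()
labels = lines.map(lambda l: int(l.split("\t")[1])).collect()

# Build the bag-of-words vocabulary on the driver, then broadcast it
vectorizer = CountVectorizer(stop_words=stop_words)
X = vectorizer.fit_transform(texts)
b_vocab = sc.broadcast(vectorizer.vocabulary_)

def train_partition(index, rows):
    # Train a logistic-regression model on the rows of one partition
    rows = list(rows)
    if not rows:
        return []
    Xp = [r[0] for r in rows]
    yp = [r[1] for r in rows]
    clf = LogisticRegression(max_iter=200, C=1.0)               # C would come from CC-SOA
    clf.fit(Xp, yp)
    return [(index, clf.score(Xp, yp))]                         # partition index and training accuracy

data = sc.parallelize(list(zip(X.toarray().tolist(), labels)), numSlices=6)
results = data.mapPartitionsWithIndex(train_partition).collect()
print(results)
```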
4 Analysis of Experimental Results
4.1 Experimental Environment
In order to verify the performance of the SOA algorithm, Python 3.7 is used as the programming language in this experiment, and the data sets are the Kaggle movie review data set and the Iris data set. The Spark cluster is composed of 6 virtual machines.
4.2 Experimental Analysis
In order to verify the effectiveness of the improvement, four benchmark functions are used for testing. The information of the benchmark functions is shown in the following Tables 1 and 2:
Table 1. Benchmark function test table.
Benchmark function   Dimension/population/iteration   Functional expression                                                 Theoretical optimal solution
F1                   30, 30, 1000                     f(x) = Σ_{i=1}^{n} x_i^2                                              0
F2                   30, 30, 1000                     f(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                          0
F3                   30, 30, 1000                     f(x) = x^2 + y^2 + 25 (sin^2 x + sin^2 y)                             0
F4                   30, 30, 1000                     f(x) = 1 − cos(2π √(Σ_{i=1}^{n} x_i^2)) + 0.1 √(Σ_{i=1}^{n} x_i^2)    0
Table 2. Benchmark function test results.
Algorithm  Benchmark function  Optimal solution
SOA        F1                  1.87e−41
SOA        F2                  1.66e−24
SOA        F3                  1.21e−35
SOA        F4                  1.01e−17
CC-SOA     F1                  1.66e−97
CC-SOA     F2                  9.28e−57
CC-SOA     F3                  2.23e−67
CC-SOA     F4                  1.42e−41
Fig. 3. The convergence curve of each algorithm on the test function
As can be seen from the above table, the convergence accuracy of CC-SOA is higher than that of SOA, but neither of them reaches the theoretical optimal solution within the given number of iterations. CC-SOA converges closer to the optimal solution, and the convergence gap is very obvious (Fig. 3). In order to verify the effectiveness of the improved algorithm, CC-SOA is compared with the Ant Lion Optimizer (ALO), the Dragonfly Algorithm (DA), the Sine Cosine Algorithm (SCA), the Grasshopper Optimization Algorithm (GOA) and the original Seagull Optimization Algorithm (SOA). As shown in the above figures, under the condition of 1000 iterations, CC-SOA shows better solving performance on F1, F2, F3 and F4. This is due to the addition of the population uniform distribution strategy and Cauchy mutation, which help to jump out of local optimal solutions and accelerate the convergence speed and accuracy of the algorithm. The second part of the experiment combines CC-SOA with logistic regression. The parameters of logistic regression were optimized by CC-SOA (LG-CCSOA), and the Iris data set was used. The classification accuracy after optimization is shown in Table 3.
Table 3. Simulation results of different classifiers.
Classifier name      50            100           200           500
KNN                  84.10 ± 1.1   83.97 ± 0.9   84.31 ± 1.4   84.19 ± 1.4
Random Forests       91.34 ± 0.9   91.54 ± 1.2   92.11 ± 1.2   91.29 ± 0.7
Bayes                90.21 ± 1.6   90.42 ± 1.2   90.52 ± 0.4   90.26 ± 0.9
Logistic Regression  92.11 ± 1.8   92.72 ± 1.4   92.74 ± 1.5   92.81 ± 1.2
LG-CCSOA             94.73 ± 0.5   94.83 ± 0.7   95.02 ± 0.8   95.17 ± 0.8
As shown in the above table, when the number of iterations is 50, 100, 200 and 500, respectively, KNN, Random Forests, Bayes, Logistic Regression and LG-CCSOA are used to classify the Iris data set, and the accuracy is shown in the table. It is found that LG-CCSOA has the best classification effect. This paper then uses the Kaggle movie review data set for sentiment analysis to predict 5 types of emotions, with the LG-CCSOA model used for sentiment tendency prediction. There are 60,000 training samples and 160,000 test samples. The accuracy of emotion prediction is shown in Table 4.
Table 4. Sentiment classification accuracy on the Kaggle movie review data set.
Classifier name      50            100           200           500
KNN                  62.01 ± 1.5   62.09 ± 0.9   62.07 ± 1.8   62.12 ± 1.2
Random Forests       60.98 ± 0.7   61.11 ± 1.1   61.28 ± 1.5   61.34 ± 1.0
Bayes                62.11 ± 1.2   62.43 ± 1.4   62.45 ± 1.5   62.52 ± 0.9
Logistic Regression  60.10 ± 1.1   61.02 ± 0.4   61.14 ± 0.9   61.21 ± 1.2
LG-CCSOA             65.81 ± 0.7   65.92 ± 0.5   65.98 ± 0.4   66.22 ± 0.9
As shown in the above table, when the number of iterations is 50, 100, 200 and 500, respectively, KNN, Random Forests, Bayes, Logistic Regression and LG-CCSOA are used to classify the sentiment tendency of the Kaggle film review data set, and the accuracy is shown in the table. It is found that LG-CCSOA has the best classification effect. The experiment selects a cluster of 6 nodes for distributed computing. The effect of distributed computing is mainly reflected in the running time of the algorithm, as shown in Fig. 4.
Fig. 4. The running time of different nodes
In the case of the same amount of data, the final classification accuracy does not fluctuate greatly, and the running time decreases with the increase of nodes.
5 Conclusion
This paper proposes CC-SOA, which solves the problems of easily falling into local optima and poor precision through Cauchy mutation and a uniform population distribution. In order to verify the effectiveness of the improvement, four benchmark functions are used for testing, and the convergence speed and accuracy of the algorithm are greatly improved. Parameter optimization is a classic continuous optimization problem. CC-SOA optimizes the parameters of logistic regression and is tested on the Iris data set; compared with other machine learning algorithms, the accuracy of LG-CCSOA is obviously improved. The test results on Kaggle's large movie review data set show that the final classification effect is improved, but it takes a lot of time. Finally, LG-CCSOA is processed in a distributed manner on the Spark platform: distributed computing is carried out on different nodes, and the final running time is greatly reduced.
Acknowledgments. This research is supported by the National Natural Science Foundation of China under grant numbers 61602162 and 61772180.
References 1. Dhiman, G., Kumar, V.: Seagull optimization algorithm: theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 165, 169–196 (2019) 2. Wang, W., et al.: Yin-Yang firefly algorithm based on dimensionally Cauchy mutation. Expert Syst. Appl. 150, 113216 (2020) 3. Zhang, X., et al.: Gaussian mutational chaotic fruit fly-built optimization and feature selection. Expert Syst. Appl. 141, 112976 (2020) 4. Anter, A,M., Ella Hassenian, A., Oliva, D.: An improved fast fuzzy c-means using crow search optimization algorithm for crop identification in agricultural. Expert Syst. Appl. 118, 340–354 (2019) 5. Mirjalili, S., Jangir, P., Saremi, S.: Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 46(1), 79–95 (2016). https://doi.org/10.1007/s10489-016-0825-8 6. llango, S.S., et al.: Optimization using artificial bee colony based clustering approach for big data. Cluster Comput. 22(5), 12169–12177 (2019) 7. Janani, R., Vijayarani, S.: Text document clustering using spectral clustering algorithm with particle swarm optimization. Expert Syst. Appl. 134, 192–200 (2019) 8. Anter, A.M., Ella Hassenian, A., Oliva, D.: An improved fast fuzzy c-means using crow search optimization algorithm for crop identification in agricultural. Expert Syst. Appl. 118, 340–354 (2019) 9. Chikh, R., Chikhi, S.: Clustered negative selection algorithm and fruit fly optimization for email spam detection. J. Ambient Intell. Humaniz. Comput. 10(1), 143–152 (2017). https:// doi.org/10.1007/s12652-017-0621-2 10. Saki, M., Wang, P., Matsuda, K., et al.: Confusion-matrix-based kernel logistic regression for imbalanced data classification. IEEE Trans. Knowl. Data Eng. 29(9), 1806–1819 (2017) 11. De Caigny, A., Coussement, K., De Bock, K.W.: A new hybrid classification algorithm for customer churn prediction based on logistic regression and decision trees. Eur. J. Oper. Res. 269(2), 760–772 (2018) 12. Wang, L., Cao, Y.: A hybrid intelligent predicting model for exploring household CO2 emissions mitigation strategies derived from butterfly optimization algorithm. Sci. Tot. Environ. (2020) 13. Mogha, G., Ahlawat, K., Singh, A.P.: Performance analysis of machine learning techniques on big data using apache spark. In: Panda, B., Sharma, S., Roy, N.R. (eds.) REDSET 2017. CCIS, vol. 799, pp. 17–26. Springer, Singapore (2018). https://doi.org/10.1007/978-981-108527-7_2
Performance Evaluation of WMNs by WMN-PSOSA-DGA Hybrid Simulation System Considering Stadium Distribution of Mesh Clients and Different Number of Mesh Routers
Admir Barolli1(B), Shinji Sakamoto2, Leonard Barolli3, and Makoto Takizawa4
1 Department of Information Technology, Aleksander Moisiu University of Durres, L.1, Rruga e Currilave, Durres, Albania
2 Department of Computer and Information Science, Seikei University, 3-3-1 Kichijoji-Kitamachi, Musashino-shi, Tokyo 180-8633, Japan
[email protected]
3 Department of Information and Communication Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-Higashi, Fukuoka, Higashi-Ku 811-0295, Japan
[email protected]
4 Research Center for Computing and Multimedia Studies, Hosei University, 3-7-2 Kajino-Cho, Koganei-Shi, Tokyo 184-8584, Japan
[email protected]
Abstract. Wireless Mesh Networks (WMNs) are gaining a lot of attention from researchers due to their advantages such as easy maintenance, low upfront cost, and high robustness. Connectivity and stability directly affect the performance of WMNs. However, WMNs have some problems such as node placement problem, hidden terminal problem and so on. In our previous work, we implemented a simulation system to solve the node placement problem in WMNs considering Particle Swarm Optimization (PSO), Simulated Annealing (SA) and Distributed Genetic Algorithm (DGA), called WMN-PSOSA-DGA. In this paper, we evaluate the performance of WMNs by using the WMN-PSOSA-DGA hybrid simulation system considering the Stadium distribution of mesh clients. Simulation results show that 32 mesh routers are enough for maximizing the network connectivity and user coverage.
1 Introduction
The wireless networks and devices are becoming increasingly popular and they provide users access to information and communication anytime and anywhere [2,10,12]. Wireless Mesh Networks (WMNs) are gaining a lot of attention because of their low cost nature that makes them attractive for providing wireless Internet connectivity. A WMN is dynamically self-organized and self-configured, with the nodes in the network automatically establishing and maintaining mesh connectivity among themselves (creating, in effect, an ad hoc network). This
feature brings many advantages to WMNs such as low up-front cost, easy network maintenance, robustness and reliable service coverage [1]. Mesh node placement in WMN can be seen as a family of problems, which are shown to be computationally hard to solve for most of the formulations [21]. We consider the version of the mesh router nodes placement problem in which we are given a grid area where to deploy a number of mesh router nodes and a number of mesh client nodes of fixed positions (of an arbitrary distribution) in the grid area. The objective is to find a location assignment for the mesh routers to the cells of the grid area that maximizes the network connectivity and client coverage. Network connectivity is measured by Size of Giant Component (SGC) of the resulting WMN graph, while the user coverage is simply the number of mesh client nodes that fall within the radio coverage of at least one mesh router node and is measured by Number of Covered Mesh Clients (NCMC). Node placement problems are known to be computationally hard to solve [8,22]. In previous works, some intelligent algorithms have been investigated for node placement problem [3,11]. In [15], we implemented a Particle Swarm Optimization (PSO) and Simulated Annealing (SA) based simulation system, called WMN-PSOSA. Also, we implemented another simulation system based on Genetic Algorithm (GA), called WMN-GA [3,9], for solving node placement problem in WMNs. Then, we designed a hybrid intelligent system based on PSO, SA and DGA, called WMN-PSOSA-DGA [14]. In this paper, we evaluate the performance of WMNs by using the WMN-PSOSA-DGA simulation system considering the Stadium distribution of mesh clients. The rest of the paper is organized as follows. We present our designed and implemented hybrid simulation system in Sect. 2. The simulation results are given in Sect. 3. Finally, we give conclusions and future work in Sect. 4.
2 Proposed and Implemented Simulation System
Distributed Genetic Algorithms (DGAs) are capable of producing solutions with higher efficiency (in terms of time) and efficacy (in terms of better quality solutions). They have shown their usefulness for the resolution of many computationally hard combinatorial optimization problems. Also, Particle Swarm Optimization (PSO) and Simulated Annealing (SA) are suitable for solving NP-hard problems.
2.1 Velocities and Positions of Particles
WMN-PSOSA-DGA decides the velocity of particles by a random process considering the area size. For instance, when the area size is W × H, the velocity is decided randomly from −√(W² + H²) to √(W² + H²). Each particle's velocities are updated by a simple rule.
For the SA mechanism, the next position of each particle is used as the neighbor solution s'. The fitness function f gives points to the current solution s. If f(s') is larger than f(s), then s' is better than s, so s is updated to s'. However, if f(s') is not larger than f(s), s may still be updated with probability exp((f(s') − f(s)) / T), where T is called the "Temperature value", which is decreased during the computation so that the probability to update also decreases. This mechanism of SA is called a cooling schedule, and the next Temperature value is calculated as T_{n+1} = α × T_n. In this paper, we set the starting temperature, ending temperature and number of iterations. We calculate α as
α = (SA ending temperature / SA starting temperature)^(1.0 / number of iterations).
It should be noted that the positions are not updated but the velocities are updated in the case when the solution s is not updated.
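A small sketch of the SA acceptance rule and cooling schedule described above is given below; since the system maximizes the fitness, a worse neighbor is accepted with probability exp((f(s') − f(s)) / T). The values used are the paper's parameter settings; everything else is an illustrative transcription.

```python
import math
import random

def sa_accept(f_current, f_neighbor, temperature):
    # Always accept a better neighbor; accept a worse one with probability exp((f(s') - f(s)) / T)
    if f_neighbor > f_current:
        return True
    return random.random() < math.exp((f_neighbor - f_current) / temperature)

# Cooling schedule: T_{n+1} = alpha * T_n with
# alpha = (T_end / T_start) ** (1 / number_of_iterations)
T_start, T_end, iterations = 10.0, 0.01, 64000
alpha = (T_end / T_start) ** (1.0 / iterations)
T = T_start
for n in range(iterations):
    T = alpha * T
```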
2.2 Routers Replacement Methods
A mesh router has x, y position and velocity. Mesh routers are moved based on their velocities. There are many router replacement methods. In this paper, we use RIWM and LDVM.
Constriction Method (CM): CM is a method in which the PSO parameters are set to a weak stable region (ω = 0.729, C1 = C2 = 1.4955) based on the analysis of PSO by M. Clerc et al. [4,7,17].
Random Inertia Weight Method (RIWM): In RIWM, the ω parameter changes randomly from 0.5 to 1.0. C1 and C2 are kept at 2.0. The ω can be estimated by the weak stable region. The average of ω is 0.75 [6,19].
Linearly Decreasing Inertia Weight Method (LDIWM): In LDIWM, C1 and C2 are set to 2.0, constantly. On the other hand, the ω parameter is changed linearly from the unstable region (ω = 0.9) to the stable region (ω = 0.4) with increasing iterations of computation [5,20].
Linearly Decreasing Vmax Method (LDVM): In LDVM, the PSO parameters are set to the unstable region (ω = 0.9, C1 = C2 = 2.0). A value of Vmax, which is the maximum velocity of particles, is considered. With increasing iterations of computation, Vmax keeps decreasing linearly [6,16,18].
Rational Decrement of Vmax Method (RDVM): In RDVM, the PSO parameters are set to the unstable region (ω = 0.9, C1 = C2 = 2.0). Vmax keeps decreasing with increasing iterations as
Vmax(x) = √(W² + H²) × (T − x) / x,
where W and H are the width and the height of the considered area, respectively. Also, T and x are the total number of iterations and the current iteration number, respectively.
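For reference, the following sketch expresses the parameter schedules of the replacement methods listed above as plain functions of the iteration number; the area size and total iteration count are example values taken from the parameter table, and the initial Vmax for LDVM is assumed to be the area diagonal.

```python
import math
import random

W, H, T = 32.0, 32.0, 64000          # area width/height and total iterations (example values)

def riwm_omega():
    # RIWM: omega changes randomly in [0.5, 1.0], C1 = C2 = 2.0
    return 0.5 + 0.5 * random.random()

def ldiwm_omega(x):
    # LDIWM: omega decreases linearly from 0.9 to 0.4
    return 0.9 - (0.9 - 0.4) * x / T

def ldvm_vmax(x, vmax0=math.sqrt(W * W + H * H)):
    # LDVM: Vmax decreases linearly from its initial value to 0 (initial value is an assumption)
    return vmax0 * (T - x) / T

def rdvm_vmax(x):
    # RDVM: Vmax(x) = sqrt(W^2 + H^2) * (T - x) / x
    return math.sqrt(W * W + H * H) * (T - x) / x
```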
Fig. 1. Model of WMN-PSOSA-DGA migration.
Fig. 2. Relationship among global solution, particle-patterns and mesh routers in PSOSA part.
2.3 DGA Operations
Population of individuals: Unlike local search techniques that construct a path in the solution space jumping from one solution to another one through local perturbations, DGA use a population of individuals giving thus the search a larger scope and chances to find better solutions. This feature is also known as "exploration" process in difference to "exploitation" process of local search methods.
Selection: The selection of individuals to be crossed is another important aspect in DGA as it impacts on the convergence of the algorithm. Several selection schemes have been proposed in the literature for selection operators trying to cope with premature convergence of DGA. There are many selection methods in GA. In our system, we implement 2 selection methods: Random method and Roulette wheel method.
Crossover operators: Use of crossover operators is one of the most important characteristics. Crossover operator is the means of DGA to transmit best genetic features of parents to offsprings during generations of the evolution process. Many methods for crossover operators have been proposed such as Blend Crossover (BLX-α), Unimodal Normal Distribution Crossover (UNDX), Simplex Crossover (SPX).
Mutation operators: These operators intend to improve the individuals of a population by small local perturbations. They aim to provide a component of randomness in the neighborhood of the individuals of the population. In our
system, we implemented two mutation methods: uniformly random mutation and boundary mutation.
Escaping from local optima: GA itself has the ability to avoid falling prematurely into local optima and can eventually escape from them during the search process. DGA has one more mechanism to escape from local optima by considering some islands. Each island computes GA for optimization, and the islands migrate their genes to provide the ability to escape from local optima.
Convergence: The convergence of the algorithm is the mechanism of DGA to reach good solutions. A premature convergence of the algorithm would cause all individuals of the population to be similar in their genetic features; thus the search would become ineffective and the algorithm would get stuck in local optima. Maintaining the diversity of the population is therefore very important to this family of evolutionary algorithms.
In the following, we present the fitness function, migration function, particle-pattern and gene coding.
2.4 Fitness and Migration Functions
The determination of an appropriate fitness function, together with the chromosome encoding, is crucial to the performance. Therefore, one of the most important things is the determination of an appropriate objective function and its encoding. In our case, each particle-pattern and gene has its own fitness value, which is compared with the other fitness values in order to share information about the global solution. The fitness function follows a hierarchical approach in which the main objective is to maximize the SGC in the WMN. Thus, the fitness function of this scenario is defined as
Fitness = 0.7 × SGC(x_ij, y_ij) + 0.3 × NCMC(x_ij, y_ij).
Our implemented simulation system uses the Migration function as shown in Fig. 1. The Migration function swaps solutions between the PSOSA part and the DGA part.
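A sketch of how the fitness of one particle-pattern could be computed from the router and client positions is given below, using networkx for the giant component. The radio-coverage model (a unit-disk model where two routers are assumed connected when their coverage disks overlap) is an illustrative assumption, not the simulator's exact model.

```python
import math
import networkx as nx

def fitness(routers, clients, radius):
    # routers, clients: lists of (x, y); radius: radio coverage of a mesh router
    g = nx.Graph()
    g.add_nodes_from(range(len(routers)))
    for i, (xi, yi) in enumerate(routers):
        for j, (xj, yj) in enumerate(routers):
            # Assume two routers are connected when their coverage disks overlap
            if i < j and math.hypot(xi - xj, yi - yj) <= 2 * radius:
                g.add_edge(i, j)
    sgc = len(max(nx.connected_components(g), key=len))     # Size of Giant Component
    ncmc = sum(
        any(math.hypot(cx - rx, cy - ry) <= radius for rx, ry in routers)
        for cx, cy in clients
    )                                                        # Number of Covered Mesh Clients
    return 0.7 * sgc + 0.3 * ncmc

print(fitness([(1, 1), (3, 1), (10, 10)], [(2, 2), (9, 9), (20, 20)], radius=2.0))
```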
Fig. 3. Stadium distribution.
Table 1. WMN-PSOSA-DGA parameters.
Parameters                    Values
Clients distribution          Stadium distribution
Area size                     32.0 × 32.0
Number of mesh routers        16, 24, 32, 40
Number of mesh clients        48
Number of GA islands          16
Number of Particle-patterns   32
Number of migrations          200
Evolution steps               320
Radius of a mesh router       2.0–3.5
Selection method              Roulette wheel method
Crossover method              SPX
Mutation method               Boundary mutation
Crossover rate                0.8
Mutation rate                 0.2
SA Starting value             10.0
SA Ending value               0.01
Total number of iterations    64000
Replacement method            CM
2.5 Particle-Pattern and Gene Coding
In order to swap solutions, we design the particle-patterns and gene coding carefully. A particle is a mesh router. Each particle has a position in the considered area and velocities. The fitness value of a particle-pattern is computed from the combination of mesh router and mesh client positions. In other words, each particle-pattern is a solution, as shown in Fig. 2. A gene describes a WMN. Each individual has its own combination of mesh nodes. In other words, each individual has a fitness value. Therefore, the combination of mesh nodes is a solution.
Fig. 4. Simulation results of WMN-PSOSA-DGA for SGC.
Fig. 5. Simulation results of WMN-PSOSA-DGA for NCMC.
3 Simulation Results
In this section, we show simulation results. In this work, we analyze the performance of WMNs by using the WMN-PSOSA-DGA hybrid intelligent simulation system considering Stadium distribution [13] as shown in Fig. 3. We carried out the simulations 10 times in order to avoid the effect of randomness and create a general view of results. We show the parameter setting for WMN-PSOSA-DGA in Table 1. We show simulation results in Fig. 4 and Fig. 5. We consider number of mesh routers 16, 24, 32 and 40. We see that for SGC, WMN-PSOSA-DGA can maximize the network connectivity even if the number of mesh routers is 16. However, for NCMC, when the number of mesh routers is 16 and 24, some clients are not covered. In the case when the number of mesh routers is 32 and 40, all clients in the considered area are covered. So, 32 mesh routers are enough to cover all mesh clients for this scenario.
4 Conclusions
In this work, we evaluated the performance of WMNs by using a hybrid simulation system based on PSO, SA and DGA (called WMN-PSOSA-DGA) considering Stadium distribution of mesh clients. Simulation results show that 32 mesh routers were enough for maximizing the network connectivity and user coverage. In our future work, we would like to evaluate the performance of the proposed system for different parameters and patterns.
References 1. Akyildiz, I.F., Wang, X., Wang, W.: Wireless mesh networks: a survey. Comput. Netw. 47(4), 445–487 (2005) 2. Barolli, A., Sakamoto, S., Barolli, L., Takizawa, M.: A hybrid simulation system based on particle swarm optimization and distributed genetic algorithm for WMNs: performance evaluation considering normal and uniform distribution of mesh clients. In: International Conference on Network-Based Information Systems, pp 42–55. Springer (2018) 3. Barolli, A., Sakamoto, S., Ozera, K., Barolli, L., Kulla, E., Takizawa, M.: Design and implementation of a hybrid intelligent system based on particle swarm optimization and distributed genetic algorithm. In: Barolli, L., Xhafa, F., Javaid, N., Spaho, E., Kolici, V. (eds.) EIDWT 2018. LNDECT, vol. 17, pp. 79–93. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-75928-9 7 4. Barolli, A., Sakamoto, S., Durresi, H., Ohara, S., Barolli, L., Takizawa, M.: A comparison study of constriction and linearly decreasing Vmax replacement methods for wireless mesh networks by WMN-PSOHC-DGA simulation system. In: Barolli, L., Hellinckx, P., Natwichai, J. (eds.) 3PGCIC 2019. LNNS, vol. 96, pp. 26–34. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-33509-0 3
5. Barolli, A., Sakamoto, S., Ohara, S., Barolli, L., Takizawa, M.: Performance analysis of WMNs by WMN-PSOHC-DGA simulation system considering linearly decreasing inertia weight and linearly decreasing Vmax replacement methods. In: Barolli, L., Nishino, H., Miwa, H. (eds.) INCoS 2019. AISC, vol. 1035, pp. 14–23. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-29035-1 2 6. Barolli, A., Sakamoto, S., Ohara, S., Barolli, L., Takizawa, M.: Performance analysis of WMNs by WMN-PSOHC-DGA simulation system considering random inertia weight and linearly decreasing Vmax router replacement methods. In: Barolli, L., Hussain, F.K., Ikeda, M. (eds.) CISIS 2019. AISC, vol. 993, pp. 13–21. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-22354-0 2 7. Clerc, M., Kennedy, J.: The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6(1), 58–73 (2002) 8. Maolin, T., et al.: Gateways placement in backbone wireless mesh networks. Int. J. Commun. Netw. Syst. Sci. 2(1), 44 (2009) 9. Matsuo, K., Sakamoto, S., Oda, T., Barolli, A., Ikeda, M., Barolli, L.: Performance analysis of WMNs by WMN-GA simulation system for two WMN architectures and different TCP congestion-avoidance algorithms and client distributions. Int. J. Commun. Netw. Distrib. Syst. 20(3), 335–351 (2018) 10. Ohara, S., Barolli, A., Sakamoto, S., Barolli, L.: Performance analysis of WMNs by WMN-PSODGA simulation system considering load balancing and client uniform distribution. In: Barolli, L., Xhafa, F., Hussain, O.K. (eds.) IMIS 2019. AISC, vol. 994, pp. 25–38. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-2226353 11. Ozera, K., Sakamoto, S., Elmazi, D., Bylykbashi, K., Ikeda, M., Barolli, L.: A fuzzy approach for clustering in MANETs: performance evaluation for different parameters. Int. J. Space-Based Situated Comput. 7(3), 166–176 (2017) 12. Ozera, K., Inaba, T., Bylykbashi, K., Sakamoto, S., Ikeda, M., Barolli, L.: A WLAN triage testbed based on fuzzy logic and its performance evaluation for different number of clients and throughput parameter. Int. J. Grid Utility Comput. 10(2), 168–178 (2019) 13. Sakamoto, S., Oda, T., Bravo, A., Barolli, L., Ikeda, M., Xhafa, F.: WMN-SA system for node placement in WMNs: evaluation for different realistic distributions of mesh clients. In: The IEEE 28th International Conference on Advanced Information Networking and Applications (AINA-2014), pp. 282–288. IEEE (2014) 14. Sakamoto, S., Barolli, A., Barolli, L., Takizawa, M.: Design and implementation of a hybrid intelligent system based on particle swarm optimization, hill climbing and distributed genetic algorithm for node placement problem in WMNs: a comparison study. In: The 32nd IEEE International Conference on Advanced Information Networking and Applications (AINA-2018), pp. 678–685. IEEE (2018) 15. Sakamoto, S., Ozera, K., Ikeda, M., Barolli, L.: Implementation of intelligent hybrid systems for node placement problem in WMNs considering particle swarm optimization, hill climbing and simulated annealing. Mob. Netw. Appl 23(1), 27–33 (2018) 16. Sakamoto, S., Ohara, S., Barolli, L., Okamoto, S.: Performance evaluation of WMNs by WMN-PSOHC system considering random inertia weight and linearly decreasing Vmax replacement methods. In: International Conference on NetworkBased Information Systems, pp. 27–36. Springer (2019)
17. Sakamoto, S., Ohara, S., Barolli, L., Okamoto, S.: Performance evaluation of WMNs WMN-PSOHC system considering constriction and linearly decreasing inertia weight replacement methods. In: Barolli, L., Hellinckx, P., Enokido, T. (eds.) BWCCA 2019. LNNS, vol. 97, pp. 22–31. Springer, Cham (2020). https:// doi.org/10.1007/978-3-030-33506-9 3 18. Schutte, J.F., Groenwold, A.A.: A study of global optimization using particle swarms. J. Global Optim. 31(1), 93–108 (2005) 19. Shi, Y.: Particle swarm optimization. IEEE Connections 2(1), 8–13 (2004) 20. Shi, Y., Eberhart, R.C.: Parameter Selection in Particle Swarm Optimization. Evolutionary programming VII, pp. 591–600 (1998) 21. Vanhatupa, T., Hannikainen, M., Hamalainen, T.: Genetic algorithm to optimize node placement and configuration for WLAN planning. In: The 4th IEEE International Symposium on Wireless Communication Systems, pp. 612–616 (2007) 22. Wang, J., Xie, B., Cai, K., Agrawal, D.P.: Efficient mesh router placement in wireless mesh networks. In: Proceedings of of IEEE Internatonal Conference on Mobile Adhoc and Sensor Systems (MASS-2007), pp. 1–9 (2007)
A New Scheme for Slice Overloading Cost in 5G Wireless Networks Considering Fuzzy Logic
Phudit Ampririt1(B), Ermioni Qafzezi1, Kevin Bylykbashi1, Makoto Ikeda2, Keita Matsuo2, and Leonard Barolli2
1 Graduate School of Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-Higashi, Higashi-Ku, Fukuoka 811-0295, Japan
2 Department of Information and Communication Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-Higashi, Higashi-Ku, Fukuoka 811-0295, Japan
[email protected], {kt-matsuo,barolli}@fit.ac.jp
Abstract. The Fifth Generation (5G) network is expected to be flexible to satisfy user requirements and the Software-Defined Network (SDN) with Network Slicing will be a good approach for admission control. The 5G network resources are limited and the number of devices is increasing much more than the system can support. So, the overloading problem will be a very critical problem for 5G wireless networks. In this paper, we propose a Fuzzy-based scheme to evaluate the Slice Overloading Cost (SOC) considering 3 parameters: Virtual Machine Overloading Cost (VMOC), Link Overloading Cost (LOC) and Switches Overloading Cost (SWOC). We carried out simulations for evaluating the performance of our proposed scheme. From simulation results, we conclude that the considered parameters have different effects on the SOC performance. When VMOC, LOC and SWOC are increasing, the SOC parameter is increased.
1 Introduction
Recently, wireless technologies and users' demand for services have been growing rapidly. Especially in 5G networks, there will be billions of new devices with unpredictable traffic patterns that require high data rates. With the appearance of the Internet of Things (IoT), these devices will generate Big Data on the Internet, which will cause congestion and deteriorate the QoS [1]. The 5G network will provide users with new experiences such as Ultra High Definition Television (UHDT) over the Internet and will support many IoT devices with long battery life and high data rates in hotspot areas with high user density. In 5G technology, traditional routing and switching play a smaller role, and the coverage area is shorter than in 4G because high frequencies are used to cope with the larger volume of devices in areas of high user density [2–4]. There are many research works that try to build systems suitable for the 5G era, and SDN is one of them [5]. For example, the mobile handover
mechanism with SDN is used for reducing the delay in handover processing and improving QoS. Also, by using SDN, the QoS can be improved by applying Fuzzy Logic (FL) in the SDN controller [6–8]. In our previous work [9,10], we proposed a fuzzy-based scheme for the evaluation of QoS in 5G wireless networks considering Slice Throughput (ST), Slice Delay (SD), Slice Loss (SL) and Slice Reliability (SR). In this paper, we propose a fuzzy-based scheme for the evaluation of Slice Overloading Cost (SOC) in 5G wireless networks considering three parameters: Virtual Machine Overloading Cost (VMOC), Link Overloading Cost (LOC) and Switches Overloading Cost (SWOC). The rest of the paper is organized as follows. In Sect. 2, we present an overview of SDN. In Sect. 3, we give an outline of Fuzzy Logic. In Sect. 4, we describe the proposed fuzzy-based system and its implementation. In Sect. 5, we discuss the simulation results. Finally, conclusions and future work are presented in Sect. 6.
2 Software-Defined Networks (SDNs)
The SDN is a new networking paradigm that decouples the data plane from the control plane in the network. In traditional networks, the whole network is controlled by each network device. However, traditional networks are hard to manage and control since they rely on the physical infrastructure. Network devices must stay connected all the time when a user wants to connect to other networks, and those processes depend on the settings of each device, making it difficult to control the operation of the network. Therefore, the devices have to be set up one by one. In contrast, the SDN is easy to manage and provides network software-based services from a centralised control plane. The SDN control plane is managed by an SDN controller or a cooperating group of SDN controllers. The SDN structure is shown in Fig. 1 [11,12].
• Application Layer builds an abstracted view of the network by collecting information from the controller for decision-making purposes. The types of applications are related to: network configuration and management, network monitoring, network troubleshooting, network policies and security.
• Control Layer receives instructions or requirements from the Application Layer and controls the Infrastructure Layer by using intelligent logic.
• Infrastructure Layer receives orders from the SDN controller and sends data among the network devices.
The SDN can manage network systems while enabling new services. In traffic congestion situations, the management system can be flexible, allowing users to easily control and adapt resources appropriately throughout the control plane. Mobility management is easier and quicker when forwarding across different wireless technologies (e.g. 5G, 4G, WiFi and WiMAX). Also, the handover procedure is simpler and the delay can be decreased.
Fig. 1. Structure of SDN.
3 Outline of Fuzzy Logic
A FL system is a nonlinear mapping of an input data vector into a scalar output, which is able to simultaneously handle numerical data and linguistic knowledge. The FL can deal with statements which may be true, false or of intermediate truth-value. Such statements are impossible to quantify using traditional mathematics. FL systems are used in many control applications such as aircraft control (Rockwell Corp.), Sendai subway operation (Hitachi), and TV picture adjustment (Sony) [13–15]. Figure 2 shows the Fuzzy Logic Controller (FLC) structure, which contains four components: fuzzifier, inference engine, fuzzy rule base and defuzzifier.
• The Fuzzifier is needed for combining the crisp values with the rules, which are expressed through linguistic variables and have fuzzy sets associated with them.
• The Rules may be provided by an expert or can be extracted from numerical data. In the engineering case, the rules are expressed as a collection of IF-THEN statements.
• The Inference Engine infers the fuzzy output by considering the fuzzified input values and the fuzzy rules.
• The Defuzzifier maps the output set into crisp numbers.
Fig. 2. FLC structure.
3.1 Linguistic Variables
A concept that plays a central role in the application of FL is that of a linguistic variable. The linguistic variables may be viewed as a form of data compression. One linguistic variable may represent many numerical variables. It is suggestive to refer to this form of data compression as granulation. The same effect can be achieved by conventional quantization, but in the case of quantization the values are intervals, whereas in the case of granulation the values are overlapping fuzzy sets. The advantages of granulation over quantization are as follows:
• it is more general;
• it mimics the way in which humans interpret linguistic values;
• the transition from one linguistic value to a contiguous linguistic value is gradual rather than abrupt, resulting in continuity and robustness.
For example, let Temperature (T) be interpreted as a linguistic variable. It can be decomposed into a set of terms: T (Temperature) = {Freezing, Cold, Warm, Hot, Blazing}. Each term is characterised by a fuzzy set which can be interpreted, for instance, "Freezing" as a temperature below 0 °C and "Cold" as a temperature close to 10 °C.
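To make the idea of overlapping granules concrete, the following minimal Python sketch defines triangular membership functions for two of the temperature terms. The breakpoints are illustrative assumptions, since the text only states that "Freezing" lies below 0 °C and "Cold" is close to 10 °C.

def triangular(x, a, b, c):
    # Membership rises linearly from a to the peak b and falls back to zero at c (a < b < c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed, overlapping term definitions for the linguistic variable Temperature.
def freezing(x):
    return triangular(x, -30.0, -15.0, 5.0)

def cold(x):
    return triangular(x, -5.0, 10.0, 20.0)

# At 2 °C the temperature is mostly "Cold" but still slightly "Freezing": the granules overlap,
# whereas crisp quantization would force the value into exactly one interval.
print(freezing(2.0), cold(2.0))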
3.2 Fuzzy Control Rules
Rules are usually written in the form "IF x is S THEN y is T", where x and y are linguistic variables that are expressed by S and T, which are fuzzy sets. Here x is the control (input) variable and y is the solution (output) variable. This rule is called a fuzzy control rule. The form "IF ... THEN" is called a conditional sentence: the "IF" part is called the antecedent and the "THEN" part is called the consequent.
3.3 Defuzzification Method
There are many defuzzification methods, including the following:
• The Centroid Method;
• Tsukamoto's Defuzzification Method;
• The Center of Area (COA) Method;
• The Mean of Maximum (MOM) Method;
• Defuzzification when the Output of Rules is a Function of Their Inputs.

4 Proposed Fuzzy-Based System
In this work, we use FL to implement the proposed system. In Fig. 3, we show the overview of our proposed system. Each evolved Base Station (eBS) receives control orders from the SDN controller and can communicate and exchange data with the User Equipment (UE). On the other hand, the SDN controller collects all the data about the network traffic status and controls the eBSs by using the proposed fuzzy-based system. The SDN controller acts as a communication bridge between the eBSs and the 5G core network. The proposed system is called Integrated Fuzzy-based Admission Control System (IFACS) in 5G wireless networks. The structure of IFACS is shown in Fig. 4. For the implementation of our system, we consider four input parameters: Quality of Service (QoS), User Request Delay Time (URDT), Slice Priority (SP) and Slice Overloading Cost (SOC), and the output parameter is Admission Decision (AD).
Fig. 3. Proposed system overview.
In this paper, we apply FL to evaluate the SOC. For the SOC evaluation, we consider three parameters: VMOC, LOC and SWOC. The output parameter is SOC.
Fig. 4. Proposed system structure.
Virtual Machine Overloading Cost (VMOC): The VMOC value is the overloading cost of the virtual machines in a slice. When the VMOC value is high, the SOC is high.
Link Overloading Cost (LOC): The LOC is the overloading cost of the virtual links in a slice. When the LOC value is high, the SOC is high.
Switches Overloading Cost (SWOC): The SWOC is the overloading cost of the switch occupation rate in a slice. When the SWOC value is high, the SOC is high.
Slice Overloading Cost (SOC): The SOC is the slice overloading cost. The slice with the lowest overloading cost will have a high acceptance possibility for a new user.

Table 1. Parameters and their term sets.
Virtual Machine Overloading Cost (VMOC): Small (Sm), Medium (Me), High (Hi)
Link Overloading Cost (LOC): Low (Lw), Medium (Md), High (Hg)
Switches Overloading Cost (SWOC): Low (Lo), Medium (Mi), High (Hh)
Slice Overloading Cost (SOC): SOC1, SOC2, SOC3, SOC4, SOC5, SOC6, SOC7
Table 2. FRB.
Rule  VMOC  LOC  SWOC  SOC
1     Sm    Lw   Lo    SOC1
2     Sm    Lw   Mi    SOC1
3     Sm    Lw   Hh    SOC2
4     Sm    Md   Lo    SOC1
5     Sm    Md   Mi    SOC2
6     Sm    Md   Hh    SOC4
7     Sm    Hg   Lo    SOC2
8     Sm    Hg   Mi    SOC4
9     Sm    Hg   Hh    SOC5
10    Me    Lw   Lo    SOC1
11    Me    Lw   Mi    SOC2
12    Me    Lw   Hh    SOC3
13    Me    Md   Lo    SOC2
14    Me    Md   Mi    SOC4
15    Me    Md   Hh    SOC5
16    Me    Hg   Lo    SOC4
17    Me    Hg   Mi    SOC5
18    Me    Hg   Hh    SOC6
19    Hi    Lw   Lo    SOC3
20    Hi    Lw   Mi    SOC5
21    Hi    Lw   Hh    SOC6
22    Hi    Md   Lo    SOC5
23    Hi    Md   Mi    SOC6
24    Hi    Md   Hh    SOC7
25    Hi    Hg   Lo    SOC6
26    Hi    Hg   Mi    SOC7
27    Hi    Hg   Hh    SOC7
The membership functions are shown in Fig. 5. We use triangular and trapezoidal membership functions because they are more suitable for real-time operations [16–19]. We show parameters and their term sets in Table 1. The Fuzzy Rule Base (FRB) is shown in Table 2 and has 27 rules. The control rules have the form: IF “condition” THEN “control action”. For example, for Rule 1:“IF VMOC is Sm, LOC is Lw and SWOC is Lo THEN SOC is SOC1”.
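The 27-rule FRB of Table 2 can be exercised with a small amount of code. The Python sketch below implements the fuzzify–infer–defuzzify chain for SOC; the exact membership-function breakpoints of Fig. 5 and the positions of the SOC output terms are not reproduced in the text, so the triangular shapes and the evenly spaced SOC centres used here are illustrative assumptions, and the weighted-average defuzzification is likewise only one possible choice.

def tri(x, a, b, c):
    # Triangular membership with feet a and c and peak b (a == b or b == c gives a shoulder).
    if x < a or x > c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x > b:
        return (c - x) / (c - b)
    return 1.0

# Assumed term definitions on a 0-100% universe (illustrative only).
VMOC_TERMS = {"Sm": (0, 0, 50), "Me": (10, 50, 90), "Hi": (50, 100, 100)}
LOC_TERMS  = {"Lw": (0, 0, 50), "Md": (10, 50, 90), "Hg": (50, 100, 100)}
SWOC_TERMS = {"Lo": (0, 0, 50), "Mi": (10, 50, 90), "Hh": (50, 100, 100)}

# Assumed singleton centres for SOC1..SOC7, evenly spaced on [0, 1].
SOC_CENTRES = {f"SOC{i}": (i - 1) / 6 for i in range(1, 8)}

# The 27 rules of Table 2: (VMOC term, LOC term, SWOC term) -> SOC term.
FRB = {
    ("Sm","Lw","Lo"):"SOC1", ("Sm","Lw","Mi"):"SOC1", ("Sm","Lw","Hh"):"SOC2",
    ("Sm","Md","Lo"):"SOC1", ("Sm","Md","Mi"):"SOC2", ("Sm","Md","Hh"):"SOC4",
    ("Sm","Hg","Lo"):"SOC2", ("Sm","Hg","Mi"):"SOC4", ("Sm","Hg","Hh"):"SOC5",
    ("Me","Lw","Lo"):"SOC1", ("Me","Lw","Mi"):"SOC2", ("Me","Lw","Hh"):"SOC3",
    ("Me","Md","Lo"):"SOC2", ("Me","Md","Mi"):"SOC4", ("Me","Md","Hh"):"SOC5",
    ("Me","Hg","Lo"):"SOC4", ("Me","Hg","Mi"):"SOC5", ("Me","Hg","Hh"):"SOC6",
    ("Hi","Lw","Lo"):"SOC3", ("Hi","Lw","Mi"):"SOC5", ("Hi","Lw","Hh"):"SOC6",
    ("Hi","Md","Lo"):"SOC5", ("Hi","Md","Mi"):"SOC6", ("Hi","Md","Hh"):"SOC7",
    ("Hi","Hg","Lo"):"SOC6", ("Hi","Hg","Mi"):"SOC7", ("Hi","Hg","Hh"):"SOC7",
}

def evaluate_soc(vmoc, loc, swoc):
    # Fuzzify the crisp inputs, fire the 27 rules (min as AND) and defuzzify.
    num, den = 0.0, 0.0
    for (tv, tl, ts), out in FRB.items():
        strength = min(tri(vmoc, *VMOC_TERMS[tv]),
                       tri(loc, *LOC_TERMS[tl]),
                       tri(swoc, *SWOC_TERMS[ts]))
        num += strength * SOC_CENTRES[out]
        den += strength
    return num / den if den > 0 else 0.0

print(evaluate_soc(10, 50, 70))  # e.g. VMOC = 10%, LOC = 50%, SWOC = 70%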
Fig. 5. Membership functions: (a) Virtual Machine Overloading Cost, (b) Link Overloading Cost, (c) Switches Overloading Cost, (d) Slice Overloading Cost.
Fig. 6. Simulation results for VMOC = 10% (SOC vs. SWOC for LOC = 10%, 50% and 90%).
5 Simulation Results
In this section, we present the simulation results of our proposed scheme. The simulation results are shown in Fig. 6, Fig. 7 and Fig. 8. They show the relation of SOC with VMOC, LOC and SWOC. We consider the VMOC as a constant parameter, change the LOC value from 10% to 90% and change the SWOC from 0% to 100%.
Fig. 7. Simulation results for VMOC = 50% (SOC vs. SWOC for LOC = 10%, 50% and 90%).
Fig. 8. Simulation results for VMOC = 90% (SOC vs. SWOC for LOC = 10%, 50% and 90%).
In Fig. 6, we consider the VMOC value as 10%. When SWOC is increased from 0% to 100%, we see that SOC increases. When SWOC is 70%, the SOC is increased by 19.33% and 22.5% when LOC is increased from 10% to 50% and from 50% to 90%, respectively. We compare Fig. 6 with Fig. 7 to see how VMOC affects SOC. When SWOC is 50% and LOC is 50%, SOC is increased by 30% when VMOC increases from 10% to 50%. This is because a higher VMOC value means the slice has a higher overloading cost on the virtual machine part and cannot give good performance to a new user. In Fig. 7, when LOC is 90%, all SOC values are higher than 0.5. This means that the system cannot accept new users. In Fig. 8, we increase the value of VMOC to 90%. We see that the SOC value is increased much more compared with the results of Fig. 6 and Fig. 7.
6 Conclusions and Future Work
In this paper, we proposed and implemented a Fuzzy-based scheme for the evaluation of SOC. The SOC parameter will be used as an input parameter for Admission Control in 5G wireless networks. We evaluated the proposed scheme by simulation. From the simulation results, we found that the three parameters have different effects on the SOC. When VMOC, LOC and SWOC increase, the SOC increases as well. In the future, we would like to evaluate the Admission Control system by considering other parameters.
References 1. Navarro-Ortiz, J., Romero-Diaz, P., Sendra, S., Ameigeiras, P., Ramos-Munoz, J.J., Lopez-Soler, J.M.: A survey on 5G usage scenarios and traffic models. IEEE Commun. Surv. Tutor. 22(2), 905–929 (2020) 2. Hossain, S.: 5g wireless communication systems. Am. J. Eng. Res. (AJER) 2(10), 344–353 (2013) 3. Giordani, M., Mezzavilla, M., Zorzi, M.: Initial access in 5G mmwave cellular networks. IEEE Commun. Mag. 54(11), 40–47 (2016) 4. Kamil, I.A., Ogundoyin, S.O.: Lightweight privacy-preserving power injection and communication over vehicular networks and 5G smart grid slice with provable security. Internet Things 8(100116), 100–116 (2019) 5. Hossain, E., Hasan, M.: 5G cellular: key enabling technologies and research challenges. IEEE Instrument. Meas. Mag. 18(3), 11–21 (2015) 6. Yao, D., Su, X., Liu, B., Zeng, J.: A mobile handover mechanism based on fuzzy logic and MPTCP protocol under SDN architecture*. In: 18th International Symposium on Communications and Information Technologies (ISCIT-2018), pp. 141– 146, September 2018 7. Lee, J., Yoo, Y.: Handover cell selection using user mobility information in a 5G SDN-based network. In: 2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN-2017), pp. 697–702, July 2017 8. Moravejosharieh, A., Ahmadi, K., Ahmad, S.: A fuzzy logic approach to increase quality of service in software defined networking. In: 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN2018), pp. 68–73, October 2018 9. Ampririt, P., Ohara, S., Qafzezi, E., Ikeda, M., Barolli, L., Takizawa,M.: Integration of software-defined network and fuzzy logic approaches for admission control in 5G wireless networks: A fuzzy-based scheme for QoS evaluation. In: Barolli, L., Takizawa, M., Enokido, T., Chen, H.-C., Matsuo, K. (eds.) Advances on Broad-Band Wireless Computing, Communication and Applications, pp. 386–396. Springer International Publishing, Cham (2021) 10. Ampririt, P., Ohara, S., Qafzezi, E., Ikeda, M., Barolli, L., Takizawa, M.: Effect of slice overloading cost on admission control for 5G wireless networks: a fuzzy-based system and its performance evaluation. In: Barolli, L., Natwichai, J., Enokido, T. (eds.) Advances in Internet, Data and Web Technologies, pp. 24–35. Springer International Publishing, Cham (2021) 11. Li, L.E., Mao, Z.M., Rexford, J.: Toward software-defined cellular networks. In: 2012 European Workshop on Software Defined Networking, pp. 7–12, October 2012
12. Mousa, M., Bahaa-Eldin, A.M., Sobh, M.: Software defined networking concepts and challenges. In: 2016 11th International Conference on Computer Engineering & Systems (ICCES-2016), pp. 79–90. IEEE (2016) 13. Jantzen, J.: Tutorial on fuzzy logic. Technical University of Denmark, Dept. of Automation, Technical Report (1998) 14. Mendel, J.M.: Fuzzy logic systems for engineering: a tutorial. Proc. IEEE 83(3), 345–377 (1995) 15. Zadeh, L.A.: Fuzzy logic. Computer 21, 83–93 (1988) 16. Norp, T.: 5G requirements and key performance indicators. J. ICT Stand. 6(1), 15–30 (2018) 17. Parvez, I., Rahmati, A., Guvenc, I., Sarwat, A.I., Dai, H.: A survey on low latency towards 5G: Ran, core network and caching solutions. IEEE Commun. Surv. Tutor. 20(4), 3098–3130 (2018) 18. Kim, Y., Park, J., Kwon, D., Lim, H.: Buffer management of virtualized network slices for quality-of-service satisfaction. In: 2018 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN-2018), pp. 1–4 (2018) 19. Barolli, L., Koyama, A., Yamada, T., Yokoyama, S.: An integrated CAC and routing strategy for high-speed large-scale networks using cooperative agents. IPSJ J. 42(2), 222–233 (2001)
COVID-Prevention-Based Parking with Risk Factor Computation
Walter Balzano and Silvia Stranieri
University of Naples, Federico II, Naples, Italy
{walter.balzano,silvia.stranieri}@unina.it
Abstract. Smart parking is one of the most interesting applications of vehicular ad hoc networks, since drivers looking for free parking slots seriously impact traffic conditions and road congestion. The pandemic experience affecting the world with the COVID-19 virus since 2020 has radically changed citizens' lives and their habits. Every sector, from healthcare to economics, has had to adapt itself to a new approach to everyday life. Mobility is included in such a change, and researchers in this field should contribute in this direction. In this work, we propose an innovative smart technique for parking detection which takes care of anti-COVID standards: such a parking schema maps vehicles to the available slots with the aim of reducing assembly after the vehicles have been parked. After analyzing the parking process, we conclude that the arrival at the parking area and the moment when one leaves that area can be sources of crowding. In order to reduce this phenomenon, we perform a user profiling with the aim of reducing the probability that drivers are in the same place at the same time. We achieve the goal by computing a risk factor for any pair of vehicles populating the car park.
Keywords: VANET · Parking · COVID · Risk factor

1 Introduction
Vehicular ad hoc networks (VANETs) are one of the emerging applications of Artificial Intelligence (AI), which is pervading every research field in these last years. VANETs allow smart communication between vehicles through broadcast information exchange, ensuring traffic congestion management, road safety and driver security [17]. AI applied to VANETs constitutes the basis for autonomous driving systems, but also for smart parking mechanisms, which can seriously improve the quality of the time spent across the streets of our cities. In this work, we want to put these hot research topics into an even hotter issue, which is the COVID-19 pandemic. Nowadays, every economic, sanitary, political or scientific sector has had to deal with the anti-COVID standards imposed all over the world. Even after this emergency is over, hopefully very soon, the pandemic experience has already marked our way of intending life, and many habits will probably stay for years. For this reason, we think it is interesting to propose a parking strategy which takes into account the rules aimed at assembly prevention.
The scenario we consider is the one of a huge car park, such as the one associated with the Dubai Mall [19], the biggest mall in the world. Such a car park has 14 thousand parking slots to deal with, and the resulting crowd can be impressive. First of all, we observe that the kind of assembly we need to avoid is the one resulting from people arriving at the car park and from people leaving the car park. In the first case, a crowd can be generated by people leaving their car to reach the entrance of the mall; in the second one, by people leaving the mall to recover their car. In order to avoid the situations we just described, we propose a system based on a user profiling to understand future driver behavior, a car park classification based on the several entries of the mall, and a risk factor computation for any pair of drivers. The final goal is to ensure that drivers with a high mutual risk factor are not assigned parking slots in the same area. Indeed, a high risk factor denotes a high probability that those drivers will be in the same place at the same time. At least we can make sure that, if they arrive at the same time, they do so in different places. The same holds for the moment when they leave the car park.
Outline. The rest of the paper is organized as follows: in Sect. 2, we present the state of the art in parking issues and anti-COVID solutions. In Sect. 3, we propose our innovative parking technique aimed at preventing assembly, by analysing each phase of the process and the corresponding computations. In Sect. 4, some examples and simulations are proposed. Finally, Sect. 5 provides conclusions and hints for future works based on our proposal.
2 Related Work
VANETs in general, and smart parking as a particular application, are largely studied in several research fields due to their potential to improve road conditions by reducing traffic congestion, raising driver safety and decreasing gas emissions. The versatility of VANETs is pretty clear if we just analyze the numerous ways in which they can be employed, from traffic management to nearest parking discovery [7]. The authors of [18] propose a machine learning approach, putting together the potential of VANETs and that of artificial intelligence. In [20], instead, the authors focus on improving security and privacy in the parking process by introducing blockchain mechanisms. Moreover, in [15], the authors exploit IoT (Internet of Things) potentialities to perform a visual-aided parking process. For a survey on smart parking, consult [14]. Many authors take into account the COVID-19 rules and how they have changed citizens' lives. For instance, in [11], the authors propose a way to enforce social distancing measures in Smart Cities. The authors of [9] also provide a parking technique that takes into account the need of keeping social distancing between people looking for a free parking slot. In our work, the innovative aspect is to perform a user profiling before the slot assignment and to compute a risk factor between pairs of vehicles, so as to avoid the situation in which two persons are in the same place at the same time. Iterating this process for any pair of vehicles, the
benefit of the approach is immediately clear: persons approaching the parking area at the same time will be placed in totally different areas of the mall and, hence, will enter the mall from different points. Through such an approach, the flow of people towards the mall is distributed along time and space, as well as the flow of people from the mall.
3 Anti-COVID-Standards-Based Parking
Our proposal is placed in the following scenario: let us move to the Dubai Mall, the biggest one in the world by number of shops [19]. The parking area of such a mall offers 14 thousand parking spaces distributed across three car parks [16]. It is easy to imagine that finding a free parking slot in such a scenario is far from an easy task. Moreover, considering the current world situation, dealing with anti-COVID rules to reduce assemblies, a criterion to distribute vehicles in a smart way seems to be needed. Indeed, in a huge parking area such as the one we are considering, we cannot ignore the flow of people deriving from the parking process. Our proposal aims to reduce the number of persons moving towards the mall at the same time. To achieve our goal, we assume that the mall provides to its clients an appropriate app, which is assumed to be used by them anytime they plan to reach the mall. Before starting their path to the mall, the users load some useful information through the app, so as to classify them and separate similar users.
3.1 First Phase: Information Loading
This phase is supposed to be performed before reaching the mall. Any user willing to go to the mall, who is supposed to be already registered to the app, uploads the following information:
• Starting Time: the user informs the parking owners of the time at which they intend to start their travel to the mall. This operation can also be performed at the very time the user is in his car ready to leave, by selecting the current hour.
• Starting Position: the user uploads the information about his current position, if it is the one from which he will begin his travel, or another position, if he is planning his travel for the next hours.
• Estimated Permanence Time: the user informs the parking owners about the time during which he supposes to occupy the assigned slot.
Any of this information is supposed to be uploaded the same day of the travel, so as to be precise enough for the subsequent evaluation. Notice that, among the information needed to choose a parking slot, we are not considering the destination of each driver because, dealing with a huge mall, we can easily assume that the user wants to reach several shopping centers placed in different areas of the mall, and hence the position of the selected parking slot with respect to the shops is not relevant. In Table 1, we show how any user gives his information about starting time, starting position, and estimated permanence time, respectively. We assume
that the position is computed through GPS localization, and that the estimated time of permanence is expressed in hours. In Table 2, we show a similar example, by picking a wider interval of time with respect to the previous one.

Table 1. Example 1 of information loaded by the users before traveling towards the mall.
User  ST         SP   ET (in hours)
U1    11:00 a.m  p1   2
U2    12:00 a.m  p2   1
U3    11:15 a.m  p3   2
U4    11:45 a.m  p4   5
U5    11:05 a.m  p5   3
U6    11:30 a.m  p6   1
U7    11:15 a.m  p7   3
U8    12:55 a.m  p8   1
U9    11:20 a.m  p9   3
U10   11:15 a.m  p10  1
3.2 Second Phase: Parking Classification
This phase is needed to overcome the assembly problem. Indeed, the crucial moment of the parking process is when drivers leave their vehicles and move toward the nearest entrance to the mall. This is the very situation in which the assembly risk is the highest. Since we are taking into account very big car parks, such as the one of the Dubai Mall, we also have to consider that it provides several entries to the mall. Our aim is to avoid crowds in proximity of any entrance to the mall, which is the very problem in a pandemic era.

Table 2. Example 2 of information loaded by the users before traveling towards the mall.
User  ST         SP   ET (in hours)
U1    8:30 a.m   p1   3
U2    9:00 a.m   p2   2
U3    8:45 a.m   p3   1
U4    10:00 a.m  p4   1
U5    10:30 a.m  p5   2
U6    11:00 a.m  p6   1
U7    11:30 a.m  p7   1
U8    10:45 a.m  p8   1
U9    9:15 a.m   p9   2
U10   9:30 a.m   p10  3
For this reason, we classify the whole car park into several parking areas, one for each entrance to the mall. If the car park provides n different entrances to the mall, we obtain as a result n different areas, computed through Algorithm 1. The result is shown in Fig. 1, where a simple example of car park classification is proposed: the parking slots are divided into 4 groups, as 4 are the different entries to the mall.
Fig. 1. Example of car park areas clusterization
3.3
Third Phase: Risk Factor Computation
In this phase, after user profiling and car park classification, we compute the risk factor between users. The idea is that users with high risk factor should be assigned a slot belonging to different areas. The risk factor is intended to be the probability that two users are in the same place at the same time, creating assembly.
Algorithm 1: Car Park classification
Input: n entrances
Output: n parking areas
create n empty parking areas;
for each slot s in the car park do
    pick the nearest entrance i;
    assign s to the area i;
end
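A minimal Python sketch of Algorithm 1 is given below; the Euclidean distance used to pick the nearest entrance is an assumption, since the paper does not specify how slot-to-entrance distances are measured.

import math

def classify_car_park(slots, entrances):
    # slots and entrances are lists of (x, y) coordinates.
    # Returns one list of slots per entrance, i.e. the n parking areas.
    areas = [[] for _ in entrances]          # create n empty parking areas
    for s in slots:
        nearest = min(range(len(entrances)),
                      key=lambda i: math.dist(s, entrances[i]))
        areas[nearest].append(s)             # assign s to the area of its nearest entrance
    return areas

# Example: four entrances at the corners of a 100 x 100 car park.
entrances = [(0, 0), (0, 100), (100, 0), (100, 100)]
slots = [(10, 20), (90, 15), (45, 80), (70, 60)]
print([len(a) for a in classify_car_park(slots, entrances)])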
Without loss of generality, we assume that the system analyzes the group of users reaching the mall in a time interval I (for instance, drivers approaching from 10:00 to 11:00). In such a time interval, we compute the maximum gap
between any pair of vehicles involved. This operation is needed in order to normalize the risk factor and obtain a value between 0 and 1. Under this assumption, the gap (expressed in minutes) between any two drivers' arrivals can be between 0 (if they arrive at the very same time) and max (if they arrive at the maximum time distance for that interval). We then compute several parameters for any user.

Definition 1 (Travel Time). The travel time for the user i, tti, is computed by estimating the distance between the GPS position of the mall car park and the one of the user i, by applying the maximum speed allowed for the road to be traveled:
tti = f(pmall, pi)    (1)

Definition 2 (Arrival Time). The arrival time for the user i, ati, is computed in terms of the starting time provided in the first phase, sti, and the travel time estimated as in Definition 1, and is expressed in minutes:
ati = sti + tti    (2)

Definition 3 (Release Time). The release time for the user i, rti, is computed in terms of the arrival time computed as in Definition 2, ati, and the estimated permanence time provided in the first phase, eti, and is expressed in minutes:
rti = ati + eti    (3)

Definition 4 (Risk Factor). The risk factor, intended as the risk that two users are in the same place at the same time, is the result of the combination of the risk that the users i and j meet when they arrive, rfa(i, j), and the risk of meeting when they leave, rfr(i, j). Such a factor is a probability measure, i.e. a value between 0 and 1, indicating how likely it is that the two users create assembly:
rf(i, j) = g(rfa(i, j), rfr(i, j)),   rf(i, j) ∈ [0, 1]    (4)

According to the values of the g function, we can classify the risk factor as:
• High: when both arrival and release constitute a source of risk;
• Medium: when one between arrival and release constitutes a source of risk;
• Low: when there is no risk that the two users overlap.
Clearly, vehicles which have a low risk factor can be assigned to the same parking area, while the ones with a high risk factor must necessarily be separated. For the medium risk factor, we can define a threshold to establish whether to treat the involved vehicles as high or low risk.

Definition 5 (Arrival Risk Factor). In order to compute the risk that users i and j meet at the arrival, we take into account the respective arrival times, whose difference is between 0 and max by construction, divide it by max, in order to have a probability measure, and pick the opposite:
rfa(i, j) = 1 − (|ati − atj| / max)    (5)

Definition 6 (Release Risk Factor). In order to compute the risk that users i and j meet when they leave the mall, we take into account the respective release times, whose difference is between 0 and max by construction, divide it by max, in order to have a probability measure, and pick the opposite:
rfr(i, j) = 1 − (|rti − rtj| / max)    (6)

Definition 7 (Weights). In order to define the function g from Definition 4, we define two weights, α and β ∈ [0, 1], chosen such that α + β = 1, to assign an importance to the two partial risk factors and obtain a single risk factor. In particular, assigning α = 0.5 and β = 0.5 means assigning equal weight to the risk factor at the moment of arrival and the one at the moment of release. Varying these parameter values, we obtain different final risk factors, as follows:
rf(i, j) = α · rfa(i, j) + β · rfr(i, j)    (7)
Hence, the α and β parameters allow allocating vehicles to the parking areas giving higher (or lower) priority to the avoidance of assembly at the arrival (or leaving) moment.
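The following Python sketch puts Definitions 1–7 together for a pair of users; the travel-time function f (a straight-line distance divided by a maximum speed) and the example values are illustrative assumptions, since the paper does not fix a concrete distance model.

# Risk factor between two users, following Definitions 1-7 (times in minutes).
def travel_time(p_mall, p_user, speed_kmh=50.0):
    # Assumed f: straight-line distance (km) divided by the maximum allowed speed.
    dist_km = ((p_mall[0] - p_user[0]) ** 2 + (p_mall[1] - p_user[1]) ** 2) ** 0.5
    return 60.0 * dist_km / speed_kmh

def risk_factor(st_i, et_i, p_i, st_j, et_j, p_j, p_mall, max_gap, alpha=0.5, beta=0.5):
    at_i = st_i + travel_time(p_mall, p_i)        # arrival times (Definition 2)
    at_j = st_j + travel_time(p_mall, p_j)
    rt_i, rt_j = at_i + et_i, at_j + et_j         # release times (Definition 3)
    rf_a = 1.0 - abs(at_i - at_j) / max_gap       # arrival risk (Definition 5)
    rf_r = 1.0 - abs(rt_i - rt_j) / max_gap       # release risk (Definition 6)
    return alpha * rf_a + beta * rf_r             # weighted combination (Definition 7)

# Two users starting 15 minutes apart, staying 120 and 60 minutes, with max = 60 minutes.
print(risk_factor(0, 120, (3, 4), 15, 60, (6, 8), (0, 0), max_gap=60))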
4 Example
The example of Table 1 takes a time interval of 1 h and 10 users as a sample. In Table 3, the resulting risk factors are computed as in Definition 7, by choosing equal weights for the α and β parameters, 0.5. The table has a triangular form because of the symmetric nature of the risk factor (rf(i, j) = rf(j, i)). Indeed, the first row indicates the risk factor of user 1 with respect to all the other ones (except himself), while the row for user 10 is not shown, since the last column gives us all the information needed. As we can notice, by choosing such a small time slot (only 1 h), we obtain pretty high risk factors.

Table 3. Risk factors corresponding to the example of Table 1 with α = 0.5 and β = 0.5 (row Ui lists rf(i, j) for the remaining users).
U1: 0.67 0.93 0.45 0.86 0.82 0.81 0.43 0.79 0.85
U2: 0.7 0.4 0.57 0.69 0.59 0.75 0.6
U3: 0.51 0.86 0.85 0.88 0.5
U4: 0.59 0.46 0.63 0.3 0.62 0.86 0.88 0.65 0.4
U5: 0.71 0.95 0.45 0.93 0.74
U6: 0.73 0.45 0.74 0.93
U7: 0.47 0.97 0.76
U8: 0.48 0.38
U9: 0.74
In Table 4, we propose a first example by picking the number of areas n = 3 and the threshold θ = 0.5. We can see that, because of the tiny time interval, the number of users and the few areas, the algorithm assigns the first area to the first user, putting in the same area all the users with whom the risk factor is low (users 4 and 8, with 0.45 and 0.43, respectively). Then, users 2 and 3 are put in different areas, but there are not enough areas to separate all the users that have a conflict.

Table 4. Users-to-area assignment according to risk factors of Table 3, with n = 3 and θ = 0.5.
User:  U1 U2 U3 U4 U5 U6 U7 U8 U9 U10
Area:  1  2  3  1  3  3  2  1  2  3
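The paper does not spell out the assignment procedure, so the following Python sketch is only a plausible greedy strategy consistent with the behaviour described above: users are processed in order and placed in the first area that contains no already-assigned user whose risk factor with them exceeds the threshold, falling back to the least conflicting area when no such area exists.

def assign_areas(users, rf, n, theta):
    # rf is a dict with rf[(i, j)] = rf[(j, i)] = risk factor between users i and j.
    areas = {u: None for u in users}
    for u in users:
        conflicts = []
        for a in range(1, n + 1):
            members = [v for v in users if areas[v] == a]
            worst = max((rf[(u, v)] for v in members), default=0.0)
            if worst <= theta:
                areas[u] = a                 # first area with no conflict above the threshold
                break
            conflicts.append((worst, a))
        if areas[u] is None:
            areas[u] = min(conflicts)[1]     # otherwise, the least conflicting area
    return areas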
By changing the number of available areas to 6 and the threshold to 0.3, the assignment reduces the conflicts, as shown in Table 5.

Table 5. Users-to-area assignment according to risk factors of Table 3, with n = 6 and θ = 0.3.
User:  U1 U2 U3 U4 U5 U6 U7 U8 U9 U10
Area:  1  2  3  1  4  5  6  1  6  6
Table 6 shows the risk factors computed between the users of the example shown in Table 2. The parameters are the same as before, but the time window taken into account is wider than the previous one. The result with n = 3 and θ = 0.5 is shown in Table 7. We still have some overlaps with a low number of areas, but fewer than in the previous case, thanks to the larger time interval. By changing, as before, n to 6 and θ to 0.3, the assignment is almost perfect, with only one overlap (Table 8).

Table 6. Risk factors corresponding to the example of Table 2 with α = 0.5 and β = 0.5 (row Ui lists rf(i, j) for j = i+1, ..., 10).
U1: 0.83 0.65 0.66 0.49 0.42 0.32 0.61 0.69 0.65
U2: 0.63 0.83 0.55 0.49 0.38 0.67 0.75 0.72
U3: 0.46 0.19 0.12 0.02 0.31 0.39 0.35
U4: 0.72 0.65 0.55 0.84 0.67 0.65
U5: 0.83 0.83 0.82 0.79 0.83
U6: 0.89 0.81 0.68 0.66
U7: 0.7 0.63 0.66
U8: 0.67 0.65
U9: 0.96
Table 7. Users-to-area assignment according to risk factors of Table 6, with n = 3 and θ = 0.5.
User:  U1 U2 U3 U4 U5 U6 U7 U8 U9 U10
Area:  1  2  3  3  1  1  1  3  3  2
Table 8. Users-to-area assignment according to risk factors of Table 6, with n = 6 and θ = 0.3.
User:  U1 U2 U3 U4 U5 U6 U7 U8 U9 U10
Area:  1  5  2  3  3  1  2  4  5  6
6
Conclusions
Smart parking is a hot research topic in the VANET research field, in particular when combined with the COVID-19 pandemic affecting the whole world. In this work, we provide a parking system aimed at avoiding assembly. We classify parking areas and users so as to separate those users who risk being in the same place at the same time. Simulations validate the strength of our proposal. As future work, we also aim at exploiting formal techniques based on strategic reasoning for multi-agent systems. In this setting, one can consider cars as intelligent agents, and the desired solution may come as an equilibrium or by solving an opportune game among the agents, in a way similar to what has been done in [4,5,12,13], considering, in this case, applications to parity games too, as in [8] and [10]. We can also combine the proposed approach with innovative wireless sensor networks, such as the ones proposed in [6], or even social networks, as in [1–3].
References 1. Amato, F., Moscato, F., Moscato, V., Pascale, F., Picariello, A.: An agent-based approach for recommending cultural tours. Pattern Recognit. Lett. 131, 341–347 (2020) 2. Amato, F., Moscato, V., Picariello, A., Sperli`ı, G.: Diffusion algorithms in multimedia social networks: a preliminary model. In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, pp. 844–851 (2017) 3. Amato, F., Moscato, V., Picariello, A., Sperli`ı, G.: Extreme events management using multimedia social networks. Fut. Gener. Comput. Syst. 94, 444–452 (2019) 4. Aminof, B., Malvone, V., Murano, A., Rubin, S.: Graded modalities in strategy logic. Inf. Comput. 261, 634–649 (2018) 5. Aminof, B., Malvone, V., Murano, A., Rubin, S.: Graded strategy logic: reasoning about uniqueness of nash equilibria. In: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 698–706 (2016) 6. Balzano, W., Murano, A., Vitale, F.: WiFACT–wireless fingerprinting automated continuous training. In: 2016 30th International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 75–80. IEEE (2016)
7. Balzano, W., Vitale, F.: PAM-SAD: ubiquitous car parking availability model based on v2v and smartphone activity detection. In: International Conference on Intelligent Interactive Multimedia Systems and Services, pp. 232–240. Springer (2018) 8. D’Amore, L., Murano, A., Sorrentino, L., Arcucci, R., Laccetti, G.: Toward a multilevel scalable parallel Zielonka’s algorithm for solving parity games. Concurr. Comput.: Pract. Experience 33(4), e6043 (2021) 9. Delot, T., Ilarri, S.: Let my car alone: parking strategies with social-distance preservation in the age of covid-19. Proc. Comput. Sci. 177, 143–150 (2020) 10. Di Stasio, A., Murano, A., Prignano, V., Sorrentino, L.: Improving parity games in practice. Ann. Math. Artif. Intell. 89(5), 1–24 (2021) 11. Gupta, M., Abdelsalam, M., Mittal, S.: Enabling and enforcing social distancing measures using smart city and its infrastructures: a covid-19 use case. arXiv preprint arXiv:2004.09246 (2020) 12. Jamroga, W., Malvone, V., Murano, A.: Reasoning about natural strategic ability. In: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pp. 714–722 (2017) 13. Jamroga, W., Murano, A.: Module checking of strategic ability. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 227–235 (2015) 14. Khalid, M., Wang, K., Aslam, N., Cao, Y., Ahmad, N., Khan, M.K.: From smart parking towards autonomous valet parking: a survey, challenges and future works. J. Netw. Comput. Appl. 102935 (2020) 15. Luque-Vega, L.F., Michel-Torres, D.A., Lopez-Neri, E., Carlos-Mancilla, M.A., Gonz´ alez-Jim´enez, L.E.: IoT smart parking system based on the visual-aided smart vehicle presence sensor: Spin-v. Sensors 20(5), 1476 (2020) 16. Parking in dubai mall. https://www.bayut.com/mybayut/parking-malls-dubai/ 17. Rashid, S.A., Hamdi, M.M., Alani, S., et al.: An overview on quality of service and data dissemination in VANETs. In: 2020 International Congress on HumanComputer Interaction, Optimization and Robotic Applications (HORA), pp. 1–5. IEEE (2020) 18. Saharan, S., Kumar, N., Bawa, S.: An efficient smart parking pricing system for smart city environment: a machine-learning based approach. Fut. Gener. Comput. Syst. 106, 622–640 (2020) 19. The dubai mall. https://thedubaimall.com/ 20. Zhang, C., et al.: BSFP: Blockchain-enabled smart parking with fairness, reliability and privacy protection. IEEE Trans. Veh. Technol. 69(6), 6578–6591 (2020)
Coarse Traffic Classification for High-Bandwidth Connections in a Computer Network Using Deep Learning Techniques
Marek Bolanowski, Andrzej Paszkiewicz, and Bartosz Rumak
Department of Complex Systems, The Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, al. Powstancow Warszawy 12, 35-959 Rzeszow, Poland
{mb,andrzejp}@prz.edu.pl
Abstract. The paper concentrates on issues related to the detection of anomalies in computer networks used in Small and Medium Enterprises (SME). The authors propose the architecture of such a system based on the use of an initial traffic classifier and an arbiter supervising the work of the network. In this solution, after a learning stage that uses samples of real traffic (normal and abnormal), the system detects abnormal traffic and uses Policy Based Routing (PBR) to redirect it for further analysis by Deep Packet Inspection (DPI) systems. This approach significantly reduces the amount of traffic analyzed by the IDS and hence reduces the cost of purchasing and implementing security systems. The proposed model has been developed using the OSEMN methodology for working with data. The system can be used as the first step in the detection of threats and anomalies in systems with a cascade security architecture.
1 Introduction
Cybersecurity is currently the main subject of research and activities related to the design and operation of computer networks. It should be noted that one of the basic problems associated with risk identification is the correct classification of incidents. A number of techniques, papers and ready-made applications are known that identify threats and eliminate their negative impact on the network [1–4]. They are perfect for corporate networks, that is, networks with consistent policies for managing incoming and outgoing traffic with a clearly defined demarcation line [5–9]. The cost of this class of solutions for systems that aggregate large amounts of traffic is, however, enormous and in many cases a very big obstacle to their implementation. Initial research and the authors' experience related to the administration of distributed systems show that one of the basic threats in a computer network are DDoS attacks. However, it should be noted that they can be divided into two main groups:
• DDoS attacks carried out by external entities or persons to prevent the proper functioning of websites, e-mail or mailing group systems. These attacks are carried out relatively often, and their nature and characteristics change very quickly. For this reason, classic filters and mechanisms often do not correctly identify the threat.
• The second type of attack, USER DDoS, is related to the activities of authorized users of the distributed system, including the computer system. The user performs permissible actions at an unpredicted time or on an inappropriate scale. Often the negative impact of these activities results from an incorrectly configured network, but this is not always the main reason. An example of such an action may be a backup of a website containing a lot of graphic and video material performed by a concerned administrator (or user) of the site, which saturates the whole bandwidth of the web server in the middle of the day and basically paralyzes the operation of the server.
The problem of analyzing network traffic to detect anomalies is additionally deepened by the increasingly common IoT and Industry 4.0 solutions [10]. For all the systems described above, it seems crucial to identify an abnormal situation in operation that will be a trigger for further protective measures. The proposed solution should be easy to implement in the network without the need to interfere with the network devices' code. Therefore, the aim of the paper is to develop a coarse mechanism for the detection of anomalies in the IT system and to identify flows (traffic classes) that will be further analyzed. The proposed solution should also be characterized by low implementation costs, so that it may be applied in the environment of small and medium-sized enterprises (SMEs).
2 The Proposed Approach
Significant, from the point of view of the purpose of the conducted research, is to define in the first step the architecture of the system in which the proposed approach will be implemented. Figure 1 presents a simplified system architecture. The traffic entering the network is sent through the switch P. All production traffic is copied using the port mirroring mechanism (Pm) to the classifier, which classifies flows into normal and suspicious. The classifier transfers the flow parameters to the arbiter A, which identifies the suspicious flow parameters (e.g. source MAC, destination MAC, source IP, destination IP). The arbiter is an application whose task is the ongoing supervision of the elements of a computer network. It can be built in various architectures, e.g. SDN, central scripts, distributed scripts or Python scripts [11, 12]. In the next step, the arbiter modifies the forwarding policy for the suspicious flow on the switch P using an Access Control List (ACL) in L2 or Policy Based Routing in L3. The suspicious flow is directed to the IDS system (or immediately rejected on the basis of previous IDS analysis results), which carries out an accurate traffic analysis in layers L1–L7 of the ISO/OSI model. The classifier was based on the neural network described in the following chapters. The neural network learning can be done in several ways: learning with a set of previously prepared data, or learning using data received in real time from the system. In the proposed approach, the data intended for neural network learning are collected (sampled) and prepared by the arbiter A. Network learning can be carried out continuously or in service windows. Research has shown that an overly complicated traffic-classification application leads to very large detection delays. It is also possible to
implement several classifiers on the basis of micro-services dedicated to specific types of traffic (e.g. video, www…).

Fig. 1. The architecture of the anomaly detection system based on a simple classifier using a neural network.
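As an illustration of the workflow around Fig. 1, the following Python sketch shows how an arbiter process could react to classifier verdicts; every name here (classifier_verdicts, redirect_to_ids, restore_normal_path) is hypothetical, since the paper does not define a concrete arbiter API.

# Hypothetical arbiter loop: flows flagged as suspicious are redirected to the IDS
# by changing the forwarding policy (ACL/PBR) on switch P; normal flows are left alone.
def arbiter_loop(classifier_verdicts, redirect_to_ids, restore_normal_path):
    # classifier_verdicts yields (flow_id, is_suspicious) pairs from the classifier K.
    redirected = set()
    for flow_id, is_suspicious in classifier_verdicts:
        if is_suspicious and flow_id not in redirected:
            redirect_to_ids(flow_id)       # e.g. install a PBR rule for this flow on P
            redirected.add(flow_id)
        elif not is_suspicious and flow_id in redirected:
            restore_normal_path(flow_id)   # remove the rule once the flow looks normal again
            redirected.discard(flow_id)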
The first step should therefore be to develop the architecture and learning methods of the neural network used by the classifier K. For this purpose, the OSEMN methodology described in [13] was used, which is a kind of taxonomy for working with data. We can identify the following steps in it:
1. O – Obtain: at this stage, the data used in the next steps is obtained.
2. S – Scrub: at this stage, the data must be prepared for use in the next steps.
3. E – Explore: at this stage, data should be visualized in order to find trends and patterns.
4. M – Model: at this stage, predictive algorithms should be developed.
5. N – Interpret: interpretation of the results obtained in the previous steps.
Using the relatively simple OSEMN methodology makes it possible to organize the process of creating the classifier architecture and its subsequent use. It is particularly important at the stage of creating a prototype by teams of programmers.
2.1 Research Stand
During the research, an attempt was made to implement individual elements of the presented architecture. The most problematic was the creation of a classifier architecture that would be characterized by sufficiently low computational complexity and sufficient classification accuracy. To speed up the process of searching for a prototype, a virtual research station was created, consisting of three virtual machines operating on the basis of the KVM virtualization environment [14]. The virtual switch "Open vSwitch" [15] was also used, thanks to which it was possible to capture packets in the L2 layer. The connection diagram of the research environment is shown in Fig. 2. The individual machines implemented the following functionalities:
1. M1: the machine that generates normal web traffic and is also the target of the attacker's host attack.
2. M2: the machine through which the attacks on M1 were simulated.
3. M3: the machine on which the traffic from the M1 host and the v-switch was copied.
2.2 Acquiring Data
The first step used in the OSEMN taxonomy is to obtain data for further analysis. Since the machine learning model used for recognizing attacks will be a classification model, data from at least two classes are needed. One of them will be described as the normal state, containing data (packets) captured during standard network activity. The second one is the abnormal state and will contain data (packets) acquired during an attack. The whole process of acquiring data was carried out in the following steps:
1. Construction of a network emulation environment with two hosts: M1 and M2.
2. Traffic collection on the M3 host.
3. Using a program for analyzing network traffic (Wireshark) to write the data in a manner convenient for subsequent modeling of the algorithm.
While emulating the normal state, we attempted to map natural network traffic, with particular emphasis on the TCP packets generated by a real person. Of course, it is possible to use the different traffic sources described in [16], or a holistic approach to changes in the structure of the IT network [17]. In the presented example, we focus on web, operating system and streaming traffic (e.g. www.youtube.com). The Open vSwitch device was configured so that the M3 host collects all the network traffic of the second host. Then, data collection was started with the Wireshark software [18]. The collected data was exported to a *.csv file and forwarded for further analysis. The attack was carried out using the Metasploit framework [19]. A number of attacks were implemented; in the presented example, a DoS SYN flood attack was performed. As a result of the simulation, packets were obtained from the normal and the abnormal state.
Fig. 2. The environment emulating network traffic: the KVM virtualization environment with hosts M1 (attacker host), M2 (the host that generates traffic) and M3 (the host that collects traffic), connected through Open vSwitch to the Internet.
The last element of obtaining the data is marking it in terms of its status, that is, assigning an additional column with the value 0 for traffic without an attack and 1 for traffic with an attack. Loading the files into variables and adding the additional column can be achieved with the following commands (Python):

import pandas as pd

# Load the captured packets exported from Wireshark.
data_frame_normal = pd.read_csv('normal.csv')
data_frame_attack = pd.read_csv('attack.csv')
# Label each packet: 0 = normal traffic, 1 = attack traffic.
data_frame_normal.insert(7, 'class', 0)
data_frame_attack.insert(7, 'class', 1)
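A natural next step, not shown in the paper, is to merge the two labelled sets into a single training frame; continuing from the snippet above, one possible way to do it is the following.

# Concatenate the normal and attack packets and shuffle them before training.
data_frame = pd.concat([data_frame_normal, data_frame_attack], ignore_index=True)
data_frame = data_frame.sample(frac=1.0, random_state=42).reset_index(drop=True)
print(data_frame['class'].value_counts())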
2.3 Adjusting the Data
In order to properly prepare the data used to build the model, the following technologies were used: language – Python 3.6; for data processing – the Pandas library; to create the predictive model – the Keras library. Each packet included in the *.csv file exported from the Wireshark software has the following variables:
1. No. – the number, i.e. the unique identifier of the packet.
2. Time – the time label, defining the arrival time of the packet relative to the start of the interface listening process.
3. Source – the source IP address of the given packet.
4. Destination – the destination IP address of the given packet.
5. Protocol – the protocol describing the packet.
6. Length – the packet length in bytes.
7. Info – additional descriptive data related to the packet, such as its type.
Figure 3 shows an example of a packet before the data normalization process.
Fig. 3. An example of an unprocessed packet captured with Wireshark.
The second step in the OSEMN taxonomy describes cleaning the data collected in the first step of unnecessary elements, and enriching the data with parameters that will facilitate its ordering, analysis and comparison. The only element that does not carry any data useful for modeling is the unique identifier of the packet, because each packet has its own number; the sequence dependencies are included in the variable "Time", so the variable "No." can be omitted. In addition, variables such as the source IP address, which have a complex form (e.g. 192.169.1.11), have been replaced with consecutive numbers for each occurrence of a unique address (e.g. 11). A similar procedure was applied to the other complex variables. Figure 4 presents the normalized parameters describing the packet. Data with this structure will be used to create the predictive model.
Fig. 4. Sample package after processing which prepares it for predictive modeling.
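One way to perform the described replacement of complex values with consecutive integers is pandas' factorize; continuing the previous snippets, the column names follow the Wireshark export listed above, while the exact encoding used by the authors is not specified, so this is only an assumption.

# Encode complex categorical columns as consecutive integers and drop the packet number.
def normalize_packets(df):
    df = df.drop(columns=['No.'])
    for column in ['Source', 'Destination', 'Protocol', 'Info']:
        codes, _ = pd.factorize(df[column])
        df[column] = codes          # each unique value gets a consecutive integer
    return df

data_frame = normalize_packets(data_frame)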
2.4 Adjusting the Data
In the next step, an attempt was made to visualize the data in order to identify trends or anomalies. The simplest graphs showing distributions or dependencies are histograms and correlation matrices. They are presented in Figs. 5 and 6. This presentation method can be useful in a detailed analysis of anomalies or their sources and should be perceived as an additional element of the detection system.
2.5 Predictive Model
One of the aims of the research is to create a predictive model, based on which it can be detected whether a given packet comes from the class described as normal network traffic or from the abnormal class (a specific attacker source). Creating the model can be divided into the following elementary steps:
Step 1: Creating the neural network topology.
Step 2: Compiling the created neural network.
Step 3: Determining the predictive correctness after completing the neural network learning process.
Fig. 5. Distributions of values of individual variables in the captured Wireshark packages.
Fig. 6. Correlation between individual data variables.
The entire model was created using the Keras library. The key commands are presented below:

from keras.models import Sequential
from keras.layers import Dense

# Four-layer feed-forward network for binary packet classification.
model = Sequential()
model.add(Dense(1200, input_dim=6, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
The presented code defines a four-layer network, with the output layer being one neuron which returns the class value = 1 for the activation value of a > 0.5. The input layer has 1200 neurons and its input is a six-dimensional variable, i.e. values describing the package after omitting the variable “No.”. The quantities concerning the number of neurons in a given layer and the number of layers were selected empirically and arbitrarily. Theoretically, increasing the number of neurons or number of layers should positively affect the predictive capabilities of the model, and negatively affect the speed of its training, but in the considered case the improvement was too small to justify the complexity of the model. The neural network has only one neuron at its output, due to the binary classification problem used - the only values that can appear on the output are 0 and 1. model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
The above-mentioned fragment of code compiles the previously defined neural network, for which the loss function is defined as the binary cross-entropy: L = −[y·log(p) + (1 − y)·log(1 − p)].
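The paper does not show the training call itself; a minimal continuation could look as follows, where the train/test split, the number of epochs and the batch size are assumptions, and data_frame is the labelled, normalized set built in the earlier snippets.

from sklearn.model_selection import train_test_split

# Split the labelled packets into features and target, then into train and test sets.
X = data_frame.drop(columns=['class']).values
y = data_frame['class'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train the compiled model and report its accuracy on unseen packets.
model.fit(X_train, y_train, epochs=10, batch_size=64, verbose=0)
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('test accuracy:', accuracy)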
2.6 Interpretation of the Results Obtained
During the tests of the proposed model, 70% of packets were classified correctly. This is a satisfying result, taking into account the fact that the proposed system can be implemented in a cascade model. It should be noted that in an environment of dynamically changing traffic, the classification results may be slightly worse. The tests used a data stream in which video traffic dominated. Attempts to use more complex traffic and a wider spectrum of attacks forced the introduction of more generalized data, and thus the correctness of data classification decreased to 52%. For efficiency reasons, a sequential neural network was used in this paper. It is possible to use a recurrent network that could take into account the order of incoming packets, which would lead to better results, but in that case it is necessary to strictly control the computational complexity of the solution.
3 Examples of Application
One of the areas in which the proposed solution has great application potential are wireless networks, in particular MESH networks. They are used to build a redundant and high-performance wireless infrastructure. Typically, they are designed to cover an area with no wired infrastructure with a computer network. They can be permanent or temporary. Therefore, they are suitable both for creating industrial networks and public networks, as well as highly specialized ones, e.g. a distributed sensor infrastructure. When designing and creating MESH networks, particular emphasis is placed on the reliability and efficiency of data transmission. Due to the use of energy-efficient devices in this type of network, and taking into account the transmission medium, it is not possible to perform a full, computationally complex analysis of network traffic at the level of access nodes or intermediate nodes. This analysis also applies to the detection of anomalies. Therefore, an interesting solution is to use coarse detection and then apply network mechanisms based on Access Control Lists or Policy Based Routing in order to treat suspicious traffic adequately. An example of the architecture of such an infrastructure is presented in Fig. 7.
Fig. 7. The environment MESH network with the anomaly detection system based on a simple classifier using a neural network.
The proposed architecture enables the integration of a large number of devices both in the external environment and inside buildings. Typically, traffic from individual wireless nodes takes the shortest path to the gateway node that connects the MESH network to the wired network. Therefore, a rough analysis of network traffic is done at the boundary of both networks, and control instructions are broadcast to the wireless node network. This solution also allows the use of the structure of wireless nodes based on simple microcontrollers. Taking into account the fact that within a wireless network connections are usually created on the basis of the shortest paths, it is possible to use
both common communication channels to transmit Control Messages, as well as dedicated channels used only for control traffic. Tests carried out in an environment based on the ESP32 microcontroller confirm the great potential of the developed solution for IoT applications.
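To make the coarse-detection-to-ACL path described above more concrete, a minimal Python sketch is shown below. The feature format, the 0.5 suspicion threshold and the IOS-like deny-rule strings are illustrative assumptions only; a real deployment would push such rules to the gateway node or an SDN controller through its own management interface.

# Illustrative sketch: turn coarse classifier decisions into ACL-style deny rules.
SUSPICION_THRESHOLD = 0.5  # score above which a flow is treated as suspicious

def coarse_filter(score_fn, flows):
    """flows: iterable of (src_ip, feature_vector); score_fn returns a value in [0, 1]."""
    rules = []
    for src_ip, features in flows:
        if score_fn(features) > SUSPICION_THRESHOLD:
            # Generic, IOS-like rule text; the exact syntax depends on the target device.
            rules.append("deny ip host {} any".format(src_ip))
    return rules

# Example with a dummy scorer; in practice score_fn would wrap the trained classifier.
dummy_scorer = lambda features: 0.9 if features[0] > 100 else 0.1
print(coarse_filter(dummy_scorer, [("10.0.0.5", [150, 1, 2, 3, 4, 5]),
                                   ("10.0.0.6", [10, 1, 2, 3, 4, 5])]))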
4 Conclusion

The approach presented in the paper enables coarse detection of anomalies in high-throughput communication channels and in MESH networks, while limiting implementation costs. In parallel, studies were carried out in which statistical signatures were used to detect anomalies [20–22]. The solution based on the presented classifier placed a smaller load on the computer system on which it was installed (compared to statistical detection), while being characterized by lower correctness of anomaly detection for strongly converged network traffic. The conducted research clearly showed that the proposed solution cannot function as an independent and sole system of network protection. It may, however, be used as the first cascade (stage) of network protection. The system should be seen as an IDS-type system, but apart from classical attacks (in particular DDoS) it also detects abnormal situations related to permitted but undesirable activity on the network. The developed system can be implemented in any environment built using layer 3 switches with routing, ACL and QoS implemented. The remaining elements of the system are based on Open Source solutions, so the cost of their implementation is limited and acceptable from the point of view of SME systems. The system architecture and classifier proposed by the authors have been tested in a laboratory environment. Subsequent studies will be aimed at testing in the network environment of an ISP operator as part of jointly carried out research.

Acknowledgments. This project is financed by the Minister of Education and Science of the Republic of Poland within the "Regional Initiative of Excellence" program for years 2019–2022. Project number 027/RID/2018/19, amount granted 11 999 900 PLN.
References

1. Bhuyan, M., Bhattacharyya, D., Kalita, J.: Network Traffic Anomaly Detection and Prevention. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-319-65188-0
2. Bialas, A., Michalak, M., Flisiuk, B.: Anomaly detection in network traffic security assurance. In: Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J. (eds.) DepCoS-RELCOMEX 2019. AISC, vol. 987, pp. 46–56. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-19501-4_5
3. Conti, M., Lal, C., Mohammadi, R., Rawat, U.: Lightweight solutions to counter DDoS attacks in software defined networking. Wirel. Netw. 25(5), 2751–2768 (2019). https://doi.org/10.1007/s11276-019-01991-y
4. Ahmed, M.: Intelligent big data summarization for rare anomaly detection. IEEE Access 7, 68669–68677 (2019)
5. Checkpoint Homepage. https://www.checkpoint.com/
6. Palo Alto Networks Homepage. https://www.paloaltonetworks.com/
7. Fortinet Homepage. https://www.fortinet.com/
8. Sourcefire. https://en.wikipedia.org/wiki/Sourcefire
9. IBM QRadar. https://www.ibm.com/pl-pl/security/security-intelligence/qradar
10. Hai, H.D., Duong, N.H.: Detecting anomalous network traffic in IoT networks. In: 2019 21st International Conference on Advanced Communication Technology (ICACT), pp. 1143–1152. IEEE (2019). https://doi.org/10.23919/ICACT.2019.8702032
11. Bolanowski, M., Twarog, B., Mlicki, R.: Anomalies detection in computer networks with the use of SDN. Meas. Autom. Monit. 9(61), 443–445 (2015)
12. Bolanowski, M., Paszkiewicz, A.: Methods and means of creating applications to control a complex network environment. In: Scientific Papers of the Polish Information Processing Society Scientific Council, pp. 151–160 (2017). ISBN 978-83-946253-5-1
13. Lao, R.: Life of Data | Data Science is OSEMN (2017). https://medium.com/@randylaosat/life-of-data-data-science-is-osemn-f453e1febc10
14. KVM Homepage. https://www.linux-kvm.org/page/Main_Page
15. Open vSwitch Homepage. https://www.openvswitch.org/
16. Bolanowski, M., Paszkiewicz, A.: Performance test of network devices. Annales Universitatis Mariae Curie-Sklodowska, Sectio AI Informatica 13(2), 29–36 (2013)
17. Paszkiewicz, A., Iwaniec, K.: Use of Ising model for analysis of changes in the structure of the IT network. In: Borzemski, L., Świątek, J., Wilimowska, Z. (eds.) ISAT 2018. AISC, vol. 852, pp. 65–77. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-99981-4_7
18. Wireshark Homepage. https://www.wireshark.org/
19. Metasploit Homepage. https://www.metasploit.com/
20. Bolanowski, M., Paszkiewicz, A.: Detekcja anomalii w systemach autonomicznych Internetu Rzeczy. Elektronika 10, 14–17 (2018)
21. Bolanowski, M., Cislo, P.: The possibility of using LACP protocol in anomaly detection systems. In: Computing in Science and Technology (CST 2018), ITM Web of Conferences, vol. 21 (2018). https://doi.org/10.1051/itmconf/20182100014
22. Bolanowski, M., Paszkiewicz, A.: The use of statistical signatures to detect anomalies in computer network. In: Gołębiowski, L., Mazur, D. (eds.) Analysis and Simulation of Electrical and Computer Systems. LNEE, vol. 324, pp. 251–260. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-11248-0_19
A Privacy Preserving Hybrid Blockchain Based Announcement Scheme for Vehicular Energy Network

Abid Jamal1, Sana Amjad1, Usman Aziz2, Muhammad Usman Gurmani1, Saba Awan1, and Nadeem Javaid1(B)

1 COMSATS University Islamabad, Islamabad, Pakistan
2 COMSATS University Islamabad, Attock Campus, Islamabad, Pakistan
Abstract. The vehicular announcement is an essential component of the Intelligent Transport System that enables vehicles to share important road information to reduce road congestion, traffic incidents, and environmental pollution. Due to the multiple security issues like single point of failure, data tampering, and false information dissemination, many researchers have proposed Blockchain (BC) based solutions to ensure data correctness and transparency in the vehicular networks. However, these schemes suffer from high computational cost and storage overhead due to the use of unsuitable BC on the vehicular layer, costly authentication schemes, and inefficient digital signature verification methods. Moreover, the privacy leakage can occur due to publicly available reputation values and lack of pseudonyms update mechanism. In this paper, we propose a privacy-preserving hybrid BC based vehicular announcement scheme to enable secure and efficient announcement dissemination. We use IOTA Tangle to enable the benefits of BC on vehicular layer while reducing the storage and computational cost. We employ Elliptic Curve Cryptography based pseudonym update mechanism for hiding the real identities of vehicles. To prevent false information dissemination in the network, we propose a reputation-based incentive mechanism for encouraging the users to provide honest ratings about the announcement messages. Furthermore, we use Cuckoo Filter to enable lightweight trustworthiness verification of the vehicles without revealing their reputation values. We also employ a batch verification mechanism to reduce the delays caused by digital signature verification. Moreover, we use InterPlanetary File System, and Ethereum BC for ensuring data availability and secure trust management. Finally, we evaluate the performance of our proposed scheme to prove its efficiency.

Keywords: Blockchain · Vehicular Energy Network · Privacy · Efficiency

1 Introduction
Intelligent Transport System (ITS) plays a prominent role in enhancing smart cities by reducing traffic congestion and roadblocks and providing efficient routes
to drivers. One of the ITS applications is Vehicular Energy Network (VEN). VENs are based on standards of Mobile Ad-hoc Network (MANET) and they enable vehicle to vehicle (V2V) and vehicle to infrastructure (V2I) communication for energy trading and dissemination of useful information about road and weather conditions, roadblocks, alternative routes, etc. Vehicles in VENs are equipped with an On-Board Unit (OBU), which uses Dedicated Short-Range Communication (DSRC) protocol for communication. Along with many benefits, VENs are also vulnerable to different attacks like Single Point of Failure (SPoF), false information dissemination, privacy leakage, etc., due to the open and trustless environment. To overcome these issues, several researchers have proposed Blockchain (BC) based solutions for VENs. In recent years, the Distributed Ledger Technology (DLT) has gained immense popularity in academia and industry due to its features like transparency, non-repudiation, tamper-resistance, etc. BC is a widely adopted DLT in which transaction data is stored in cryptographically linked blocks. The set of linked blocks is considered as a distributed ledger and it is shared with all the network participants. BC was initially introduced by Satoshi Nakamoto in 2008 as a backbone for the first ever digital currency named Bitcoin [1]. In BC, new blocks are added to the chain by the process of mining. BC provides captivating features like non-repudiation, data transparency, data availability, tamper-resistance, etc. Due to these features, BC is applied in many different sectors like healthcare, smart cities, supply chain, vehicular networks, etc. Recently many researchers have employed BC to overcome the trust and security issues of the traditional VENs. Besides the many benefits of BC based VENs, it is susceptible to many potential flaws, which can limit the efficiency of the vehicular networks. Many of the existing BC based vehicular networks use Ethereum BC on the vehicle layer [2,3], which increases the storage and computational cost on the vehicles. Some researchers have used Directed Acyclic Graph (DAG) based DLT named IOTA Tangle in vehicular networks [4,5]. As IOTA Tangle relies on data pruning method for saving storage space, it can cause data unavailability due to which malicious users can repudiate sharing false announcements in the network. Generally, in BC based vehicular networks, pseudonym mechanism are used to preserve the vehicles’ privacy [6,7]. However, due to use of static pseudonym identity, the real identities of the users can be inferred by background knowledge attacks. These attacks are overcome by pseudonym update mechanism [8]. However, due to lack of vehicle traceability, the internal attackers can disseminate false information in the network. To overcome this issue, BC based reputation schemes are introduced to identify the malicious users. As the reputation scores of the vehicles are publicly stored on BC, the adversaries can exploit the predictable patterns in the reputation values to perform vehicle tracing attacks. To address the aforementioned issues, we propose a privacy-preserving vehicular announcement scheme based on hybrid BC. In our proposed scheme, we use IOTA Tangle to enable zero-value transactions, high throughput and low storage cost on vehicle layer. Moreover, to preserve the privacy of the vehicles, we use
Elliptic Curve Cryptography (ECC) based dynamic pseudonyms mechanism to hide vehicles’ real identities. Also, we use Cuckoo Filter (CF) for hiding the predictable patterns in the reputation values. To further enhance the efficiency of the proposed scheme, we use batch verification scheme to enable simultaneous verification of multiple vehicle rating messages. In addition, data storage and availability issues are resolved by using InterPlanetary File System (IPFS).
2 Related Work

2.1 Authentication
The vehicular networks require a secure and efficient authentication scheme to prevent malicious users from entering the network. The authors in [9] address the privacy leakage caused by centralized Public Key Infrastructure. They propose a distributed pseudonym management system that allows the users to create their own pseudonyms. However, their proposed scheme fails to ensure conditional privacy, making malicious vehicles untraceable. In [2], authors address the issue of SPoF in the conventional centralized authentication scheme in vehicular networks. They use edge computing and local BC to efficiently store the registration and trust information of vehicles to ensure data transparency. However, their proposed scheme is prone to privacy leakage due to publicly available trust values. In [6,8], the authors propose certificate revocation mechanisms in vehicular networks to efficiently manage the Certificate Revocation List (CRL). The CRL is used for verifying whether a certificate of a certain user is revoked. The authors in [6] enable privacy preservation in a BC based certificate revocation mechanism by introducing a pseudonym shuffling mechanism. Moreover, authors in [8] reduce the cost of CRL management by using an efficient data structure called Merkle Patricia Tree. However, due to the use of an inefficient signature verification mechanism, both schemes add unnecessary verification delays.

2.2 Trust Management
In [10], authors propose a BC based incentive scheme to motivate the users to share important traffic information in the network. However, their proposed scheme is vulnerable to privacy leakage due to the use of static pseudonyms. The authors in [11] propose a privacy-preserving incentive scheme to encourage user participation while preserving the privacy. They develop an anonymous vehicular announcement aggregation protocol to prevent unique identification of a vehicle in network. However, in addition to the delays due to inefficient message verification, this scheme also suffers from an overwhelming storage cost due to inefficient key management. Moreover, in [12], authors address the security and trust issues in BC based Industrial Internet of Things. They use a monetary incentive mechanism to encourage the honest contribution. However, due to the lack of a batch verification mechanism, their proposed scheme suffers from unnecessary verification delays. In [13–15], authors address the trust and authentication issues
in Internet of Vehicles. They propose a decentralized trust management scheme based on Hyperledger Fabric to store the nodes' trust values. However, due to the open availability of trust values, this scheme is vulnerable to tracking attacks and privacy leakage. In [16], authors propose a consortium BC based data and energy trading scheme to enable decentralized trading. They use bloom filters to prevent data duplication and smart contracts to overcome trading disputes. In [17], authors propose a BC based food supply chain management scheme to enable users' trust and ensure product traceability. They use smart contracts to store the records in an immutable manner.

2.3 Privacy
The privacy preservation is of utmost importance in a vehicular network. The lack of privacy can allow malicious users to perform vehicle tracking attacks. In this regard, authors in [7] address the false information dissemination in vehicular networks due to the lack of conditional anonymity. They develop a BC based pseudonym mechanism to hide the real identity of the vehicles. However, due to the use of a centralized cloud server for storage, their proposed model is vulnerable to SPoF. Moreover, authors in [18] developed a pseudonym mechanism that allows vehicle users to generate and update pseudonyms for themselves without the CA's intervention. However, the self-generated pseudonym scheme can allow malicious vehicles to spam the network with false information.

2.4 Efficiency
In [3], authors propose Proof of Event consensus mechanism to reduce consensus delays and ensure the validity of the events shared by the vehicles. However, their proposed scheme is susceptible to privacy leakage. In [19], authors develop a joint Proof of Stake and modified Practical Byzantine Fault Tolerance consensus algorithm for reducing the resource requirement to perform consensus. However, in their proposed scheme, privacy leakage can occur due to unrestricted access to the reputation values. In [20], authors address location privacy leakage in the smart parking applications due to publicly available location data of the users. Moreover, they address the issue of the existing centralized smart parking schemes that are vulnerable to SPoF. The authors use group signatures, bloom filters, and vector-based encryption to enable anonymous authentication and malicious users’ traceability. However, their proposed scheme lacks a reputation mechanism which makes the system vulnerable to the internal attacks. Moreover, in the proposed scheme, an unsuitable BC is used on the vehicular layer. The conventional BC schemes are not suitable for the vehicular layer due to the resource constrained OBU of the vehicles. The authors in [10,21], use IPFS to efficiently store the transaction data. They also use a reputation management system to store the reputation values of the vehicles. However, their proposed scheme is susceptible to privacy leakage due to openly available reputation values and static pseudonyms. In [22,23], authors have utilized consortium BC to store and share the data efficiently. However, these schemes lack batch verification
mechanism and use unsuitable BC on the vehicle layer. In [4], authors address high cost of storing BC ledger on vehicles in vehicular social networks. They use a DAG based DLT on vehicle layer to efficiently reduce the storage cost of the ledger on vehicles. However, their proposed scheme does not preserve the privacy of the vehicles which can lead to reduced user trust.
3 Problem Statement
In [24,25], BC based announcement schemes are proposed to enable secure and trustworthy announcement dissemination in VANETs. However, due to the use of unsuitable BC on the vehicle layer, these schemes incur excessive storage and computational cost on resource constrained vehicles. In [4,5], a lightweight DAG based DLT is proposed for vehicular networks to overcome the excessive storage cost of conventional BC. In [4], DAG-chain is used, whereas in [5], IOTA is used for efficient and distributed storage of the transaction records on vehicles. However, due to the use of static pseudonyms in [4], the adversaries can identify a vehicle by performing background knowledge attacks. Moreover, due to lack of incentive mechanism in [5], the vehicles are not encouraged to give honest ratings about their peers. In [26], ABE-based authentication scheme is proposed to authenticate the vehicles while preserving their privacy. However, due to the high computational cost of ABE and use of inefficient message verification method, this scheme introduces excessive delays, which can reduce the overall efficiency of the network. In [12,25], BC based reputation mechanisms are proposed to prevent false information dissemination in vehicular networks. In these schemes, the vehicles verify the trustworthiness of messages by accessing the reputation scores of other vehicles available on the BC ledger. However, due to the public availability of the vehicles’ reputation scores on BC, the adversaries can trace a vehicle by utilizing the predictable patterns in the reputation values.
4 System Model
A hybrid blockchain based announcement framework is proposed to improve efficiency, preserve privacy, and enable secure communication in VENs. The proposed model consists of three layers as depicted in Fig. 1. The first layer is IOTA Tangle layer, wherein the vehicles communicate with each other and with Roadside Units (RSUs) via IOTA Tangle, which is a DAG based DLT. The second layer is the BC layer, which consists of RSUs and Certificate Authority (CA). RSUs are connected to each other via wired connection and they manage the overall network activities. The third layer is the storage layer which consists of IPFS. RSUs offload the excessive historical data to IPFS to reduce the storage cost and ensure the data availability. The proposed model in Fig. 1 contains a mapping table of the identified limitations and their proposed solutions. The limitations addressed in this proposed model range from L1 to L6 and the solutions proposed range from S1 to S7. The L1 refers to the computationally complex authentication scheme. It is mapped with the solution S1 using Elliptic Curve
Fig. 1. Proposed system model
Digital Signature Algorithm (ECDSA) based digital certification scheme. The L2 refers to the privacy leakage due to predictable patterns in publicly available reputation information of the vehicles. This limitation is mapped with S2 using CF for storing the reputation values. The L3 shows the use of sequential message verification method, which can cause delays in the message verification process. The proposed solution S3 overcomes this issue using batch verification method. L4 indicates the various shortcomings of the conventional blockchain schemes which makes them unsuitable for the vehicular layer of the VEN. These shortcomings include low transaction throughput, lack of microtransactions and high storage cost on vehicles. S4 and S7 overcome these issues by using IOTA Tangle and IPFS. The L5 refers to the lack of incentive mechanism, which can impede the vehicle cooperation in the network. This limitation is mapped with the solution S5 using of reputation based incentive scheme. The L6 refers to the privacy leakage due to the use of static pseudonym. This limitation is mapped with S6, which relates to updating the pseudonym on regular basis to minimize the risk of privacy leakage.
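As a rough illustration of the ECDSA-based certification and pseudonym update ideas behind S1 and S6, a small sketch using the third-party Python ecdsa package is given below. The pseudonym derivation (a truncated hash of the public key) and the announcement text are assumptions made for illustration only, not the exact construction of the proposed scheme.

# Illustrative sketch of ECDSA key generation, signing and pseudonym refresh.
import hashlib
from ecdsa import SigningKey, NIST256p

def new_pseudonym():
    sk = SigningKey.generate(curve=NIST256p)                 # private key kept in the OBU
    vk = sk.get_verifying_key()                              # public key bound to the certificate
    pid = hashlib.sha256(vk.to_string()).hexdigest()[:16]    # short pseudo-ID (assumed derivation)
    return sk, vk, pid

# Vehicle signs an announcement under its current pseudonym.
sk, vk, pid = new_pseudonym()
announcement = b"accident reported at junction 12"
signature = sk.sign(announcement)

# Any receiver holding the pseudonym certificate can verify the message.
assert vk.verify(signature, announcement)

# Updating the pseudonym periodically (S6) simply means generating a fresh key pair
# and obtaining a new certificate for it from the CA.
sk2, vk2, pid2 = new_pseudonym()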
4.1 Entities
The following is a brief description of our proposed model's entities.

Certification Authority: In VEN, the CA is an essential entity which allows only the authorized users to join the network. The CA is assumed to be fully trusted and secure against any kind of attacks. In the proposed model, RSUs and vehicular nodes provide their true identity information to the CA for registration. The CA generates pseudonym certificates for the vehicles. The CA keeps an encrypted copy of a mapping between the true identity and the pseudonym of the vehicle to enable conditional anonymity, so that in case of disputes, the digital certificates of the malicious vehicles can be revoked and their true identity can be revealed to prevent them from rejoining the network.

Vehicles: In VENs, each vehicle is equipped with an OBU which enables V2V and V2I communication via the DSRC protocol. The OBU of the vehicles is considered a tamper-proof device and is used for storing the private keys of the vehicles. The V2V communication includes announcements related to traffic conditions, road incidents, advertisements, etc. To reduce false information dissemination in the network, the vehicles provide ratings about the received announcements. In return, the vehicles receive incentives for giving honest ratings and punishment for dishonest ratings and false announcements.

Roadside Units: In VENs, RSUs manage the overall network by providing different services to the vehicles. RSUs are connected to each other via wired connection and they have high computational capabilities. In our proposed model, RSUs are part of both the IOTA Tangle layer and the BC layer. On the IOTA Tangle layer, the RSUs act as full nodes and store the vehicles' announcement records shared on Tangle to ensure data availability. In the BC layer, the RSUs act as authorized nodes and are responsible for:

– vehicle reputation calculation based on the Tangle record and user feedback,
– adding pseudo-IDs of malicious users to a CF,
– performing consensus,
– batch verification of message signatures, and
– uploading the historical data to IPFS.
IOTA Tangle: The conventional blockchain schemes are not suitable for vehicular networks due to multitude of reasons including, low transaction throughput, high storage requirement, lack of microtransaction etc. To overcome these limitations, IOTA Tangle is used in the proposed scheme. IOTA tangle is a DAG based distributed ledger technology, which supports microtransactions, provides high transaction throughput and reduces storage overhead. In the proposed model, IOTA is used for storing the announcement sharing records in a distributed
ledger. In addition, it enables non-repudiation so that vehicles cannot deny sending any announcement message. Hence, the vehicles only disseminate accurate announcements in the network.

Blockchain: In the proposed scheme, Ethereum BC is applied on RSUs. The use of BC ensures data integrity, transparency and immutability. The complete Tangle record is backed up on IPFS and its hash is stored on BC to avoid data loss, which may occur due to data pruning on the IOTA layer. Also, the CFs generated by RSUs are stored on the BC for trust verification. The data in BC is accessed via smart contracts.

Cuckoo Filter: The CF is a new data structure proposed in [27] that replaces the Bloom Filter as a method for testing whether an element belongs to a set or not. It uses Cuckoo Hashing and is designed to store items efficiently while targeting a low false positive rate and requiring significantly less storage space than the Bloom Filter.

InterPlanetary File System: IPFS is a distributed data storage system. In IPFS, a distinct hash value is generated for each file, which is then used for file retrieval. In the proposed model, the historical Tangle data is uploaded to IPFS to enable the system's scalability and efficiency, whereas only the hashes of the uploaded files are stored in the BC, which significantly reduces the storage cost.
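To illustrate the membership test that the CF provides, a compact pure-Python sketch of partial-key cuckoo hashing is given below. The bucket count, fingerprint length and hash choices are illustrative assumptions; deletion and the false-positive analysis of [27] are omitted.

# Minimal cuckoo filter sketch: each item is stored as a short fingerprint in one of
# two candidate buckets; lookup only needs to check those two buckets.
import hashlib, random

class CuckooFilter:
    # num_buckets must be a power of two so that the XOR-based partner index is an involution.
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item):
        return hashlib.sha256(item.encode()).hexdigest()[:4]

    def _index(self, item):
        return int(hashlib.md5(item.encode()).hexdigest(), 16) % self.num_buckets

    def _alt_index(self, index, fp):
        return (index ^ int(hashlib.md5(fp.encode()).hexdigest(), 16)) % self.num_buckets

    def insert(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))              # both buckets full: start evicting
        for _ in range(self.max_kicks):
            victim = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][victim] = self.buckets[i][victim], fp
            i = self._alt_index(i, fp)           # move the evicted fingerprint to its partner bucket
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                              # filter is considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._alt_index(i1, fp)]

# RSUs would insert the pseudo-IDs of misbehaving vehicles and publish the filter on the BC.
cf = CuckooFilter()
cf.insert("pseudo-id-42")
print(cf.contains("pseudo-id-42"), cf.contains("pseudo-id-7"))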
4.2 Conclusion
In this paper, a hybrid Blockchain based vehicular announcement scheme is proposed for VENs. IOTA Tangle is employed to reduce the resource utilization on the vehicle layer. The IPFS is used for ensuring the data availability while reducing the overall storage cost of the system. The Ethereum BC is used on the RSU layer to store IPFS hashes of the sensitive data and CF is utilized for ensuring the data transparency. The ECC-based pseudonym update mechanism is used to enable conditional anonymity. Moreover, a reputation-based incentive mechanism is utilized to encourage users to share honest ratings. CFs are implemented to prevent background knowledge attacks by hiding predictable patterns in the reputation values of the vehicles. The effectiveness of the proposed scheme is evaluated by performing simulations. The results show that the proposed scheme efficiently reduces time cost and storage overhead.
References 1. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system. Manubot (2019) 2. Shrestha, R., Bajracharya, R., Shrestha, A.P., Nam, S.Y.: A new type of blockchain for secure message exchange in VANET. Digit. Commun. Netw. 6(2), 177–186 (2020)
3. Yang, Y.-T., Chou, L.-D., Tseng, C.-W., Tseng, F.-H., Liu, C.-C.: Blockchainbased traffic event validation and trust verification for VANETs. IEEE Access 7, 30868–30877 (2019) 4. Yang, W., Dai, X., Xiao, J., Jin, H.: LDV: a lightweight DAG based blockchain for vehicular social networks. IEEE Trans. Veh. Technol. 69(6), 5749–5759 (2020). https://doi.org/10.1109/TVT.2020.2963906 5. Hassija, V., Chamola, V., Garg, S., Krishna, D.N.G., Kaddoum, G., Jayakody, D.N.K.: A blockchain based framework for lightweight data sharing and energy trading in V2G network. IEEE Trans. Veh. Technol. 69(6), 5799–5812 (2020). https://doi.org/10.1109/TVT.2020.2967052 6. Lei, A., et al.: A blockchain based certificate revocation scheme for vehicular communication systems. Future Gener. Comput. Syst. 110, 892–903 (2020) 7. Pu, Y., Xiang, T., Hu, C., Alrawais, A., Yan, H.: An efficient blockchain-based privacy preserving scheme for vehicular social networks. Inf. Sci. 540, 308–324 (2020) 8. Lu, Z., Wang, Q., Qu, G., Zhang, H., Liu, Z.: A blockchain-based privacy-preserving authentication scheme for VANETs. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 27(12), 2792–2801 (2019) 9. Benarous, L., Kadri, B., Bouridane, A.: Blockchain based privacy-aware pseudonym management framework for vehicular networks. Arabian J. Sci. Eng. 1–17 (2020) 10. Khalid, A., Iftikhar, M.S., Almogren, A., Khalid, R., Afzal, M.K., Javaid, N.: A blockchain based incentive provisioning scheme for traffic event validation and information storage in VANETs. Inf. Process. Manag. 58(2), 102464 (2021) 11. Li, L., et al.: CreditCoin: a privacy-preserving blockchain-based incentive announcement network for communications of smart vehicles. IEEE Trans. Intell. Transp. Syst. 19(7), 2204–2220 (2018) 12. Wang, E.K., Liang, Z., Chen, C.-M., Kumari, S., Khan, M.K.: PoRX: a reputation incentive scheme for blockchain consensus of IIoT. Future Gener. Comput. Syst. 102, 140–151 (2020) 13. Sun, L., Yang, Q., Chen, X., Chen, Z.: RC-chain: reputation-based crowdsourcing blockchain for vehicular networks. J. Netw. Comput. Appl. 176, 102956 (2021) 14. Zhang, X., Wang, D.: Adaptive traffic signal control mechanism for intelligent transportation based on a consortium blockchain. IEEE Access 7, 97281–97295 (2019) 15. Malik, N., Nanda, P., He, X., Liu, R.P.: Vehicular networks with security and trust management solutions: proposed secured message exchange via blockchain technology. Wirel. Netw. 26(6), 4207–4226 (2020). https://doi.org/10.1007/s11276-02002325-z 16. Sadiq, A., Javed, M.U., Khalid, R., Almogren, A., Shafiq, M., Javaid, N.: Blockchain based data and energy trading in internet of electric vehicles. IEEE Access (2020) 17. Shahid, A., Almogren, A., Javaid, N., Al-Zahrani, F.A., Zuair, M., Alam, M.: Blockchain-based agri-food supply chain: a complete solution. IEEE Access 8, 69230–69243 (2020) 18. Zhao, N., Wu, H., Zhao, X.: Consortium Blockchain based secure software defined vehicular network. Mob. Netw. Appl. 25(1), 314–327 (2020) 19. Sutrala, A.K., Bagga, P., Das, A.K., Kumar, N., Rodrigues, J.J.P.C., Lorenz, P.: On the design of conditional privacy preserving batch verification-based authentication scheme for Internet of vehicles deployment. IEEE Trans. Veh. Technol. 69(5), 5535– 5548 (2020)
20. Zhang, C., et al.: BSFP: blockchain-enabled smart parking with fairness, reliability and privacy protection. IEEE Trans. Veh. Technol. 69(6), 6578–6591 (2020) 21. Firdaus, M., Rhee, K.-H.: On blockchain-enhanced secure data storage and sharing in vehicular edge computing networks. Appl. Sci. 11(1), 414 (2021) 22. Rahman, Md.A., Rashid, Md.M., Shamim Hossain, M., Hassanain, E., Alhamid, M.F., Guizani, M.: Blockchain and IoT-based cognitive edge framework for sharing economy services in a smart city. IEEE Access 7, 18611–18621 (2019) 23. Li, K., Lau, W.F., Au, M.H., Ho, I.W.-H., Wang, Y.: Efficient message authentication with revocation transparency using blockchain for vehicular networks. Comput. Electr. Eng. 86, 106721 (2020) 24. Ma, J., Li, T., Cui, J., Ying, Z., Cheng, J.: Attribute-based secure announcement sharing among vehicles using blockchain. IEEE Internet of Things J. (2021) 25. Liu, X., Huang, H., Xiao, F., Ma, Z.: A blockchain based trust management with conditional privacy-preserving announcement scheme for VANETs. IEEE Internet of Things J. 7(5), 4101–4112 (2020). https://doi.org/10.1109/JIOT.2019.2957421 26. Feng, Q., He, D., Zeadally, S., Liang, K.: BPAS: blockchain-assisted privacypreserving authentication system for vehicular ad hoc networks. IEEE Trans. Ind. Inform. 16(6), 4146–4155 (2020). https://doi.org/10.1109/TII.2019.2948053 27. Fan, B., Andersen, D.G., Kaminsky, M., Mitzenmacher, M.D.: Cuckoo filter: practically better than bloom. In: Proceedings of the 10th ACM International on Conference on emerging Networking Experiments and Technologies, pp. 75–88 (2014)
Prediction of Wide Area Road State Using Measurement Sensor Data and Meteorological Mesh Data

Yoshitaka Shibata and Akira Sakuraba

Regional Coorporate Research Center, Iwate Prefectural University, Sugo, Takizawa 152-89, Iwate, Japan
{shibata,a_saku}@iwate-pu.ac.jp
Abstract. In this paper, prediction of wide area road state using actual measurement data from on-board environment sensors and meteorological mesh data is introduced. Actual sensor data are sampled from running vehicles and gathered at a roadside edge computer. On the other hand, the meteorological mesh data are collected into the cloud from the Meteorological Agency as open data. By integrating those actual data and the meteorological mesh data and applying Kalman filter technology, temporal and spatial prediction of wide area road state can be realized and visualized on a GIS viewer system. In this paper, the fundamental system configuration and both temporal and spatial prediction algorithms are discussed.
1 Introduction

Recently, autonomous driving technology has been developed in well-developed countries, such as the U.S., EU countries, China and Japan. Several car manufacturers in those countries produce autonomous cars that run on highways and exclusive roads where the driving lanes are clear, the driving direction is the same and opposite driving lanes are completely separated [1]. On the other hand, autonomous driving has not been introduced in snow countries because bad and dangerous road surface conditions in the winter season are not considered when safely controlling the speed and steering of a running vehicle. In fact, more than 90% of car accidents in snowy areas in winter in Japan are due to the car slipping on a snowy or icy road [2]. More advanced road state sensing, decision and prediction systems to identify the dangerous locations are indispensable for driving in winter. So far, in order to realize autonomous driving in snow countries, a wide area road state information platform has been proposed and developed as a prototype system to evaluate the effects and validity of our proposed platform in the winter season [*]. From the past field experiment, the proposed platform could detect the physical sensor data and decide the precise road states in realtime with more than 90% accuracy for dry, wet, damp and snowy/icy states. The decided road state information can be shared with many vehicles which are running on the same road in the opposite direction using V2X
communication. At the same time, the road state information is collected by a cloud system to organize a wide area GIS information system. However, it is furthermore expected that future road states can be predicted in time and space from the current time and locations, so that vehicles can drive through the wide area. In order to respond to this request, a prediction system of wide area road state using measurement sensor data and meteorological mesh data is proposed. In addition to the actual sensor data sampled on the vehicle, the meteorological mesh data are collected into the cloud from the Meteorological Agency as open data. By integrating those actual data and the meteorological mesh data and applying Kalman filter technology, temporal and spatial prediction of the wide area road state can be realized and visualized on a GIS viewer system. In the following, the related works on road state information by sensor and V2X technologies are explained in section two. Next, the on-board sensing system to collect various sensor data and to identify road states is introduced in section three. The prediction system and algorithm of wide area road state using the measurement sensor data and meteorological mesh data are precisely explained in section four. Finally, conclusions and future works are summarized in section five.
2 Related Works There are several related works with the road state sensing method using environmental sensors. Particularly road surface temperature is essentially important to know snowy or icy road state in winter season by correctly observing whether the road surface temperature is under minus 4 ºC or over. In the paper [3], the road surface temperature model by taking account of the effects of surrounding road environment to facilitate proper snow and ice control operations is introduced. In this research, the fixed sensor system along road is used to observe the precise temperature using the monitoring system with long-wave radiation. They build the road surface temperature model based on heat balance method. In the paper [4–6], cost effective and simple road surface temperature prediction method in wide area while maintaining the prediction accuracy is developed. Using the relation between the air temperature and the meshed road surface temperature, statistical thermal map data are calculated to improve the accuracy of the road surface temperature model. Although the predicted accuracy is high, the difference between the ice and snow states was not clearly resolved. In the paper [7], a road state data collection system of roughness of urban area roads is introduced. In this system, mobile profilometer using the conventional accelerometers to measure realtime roughness and road state GIS is introduced. This system provides general and wide area road state monitoring facility in urban area, but snow and icy states are note considered. In the paper [8], a measuring method of road surface longitudinal profile using build-in accelerometer and GPS of smartphone is introduced to easily calculate road
flatness and International Road Index (IRI) in offline mode. Although this method provides easy installation and quantitative calculation results of road flatness for dry or wet states, it does not consider the snow or icy road states. In the paper [9], a statistical model for estimating road surface state based on the values related to the slide friction coefficient is introduced. Based on the estimated the slide friction coefficient calculated from vehicle motion data and meteorological data is predicted for several hours in advance. However, this system does not consider the other factors such as road surface temperature and humidity. In the paper [10], road surface temperature forecasting model based on heat balance, so called SAFF model is introduced to forecast the surface temperature distribution on dry road. Using the SAFF model, the calculation time is very short and its accuracy is higher than the conventional forecasting method. However, the cases of snow and icy road in winter are not considered.
3 Road State Sensor System In order to detect the precise road surface states, such as dry, wet, dumpy, showy, frozen roads, various sensing devices including accelerator, gyro sensor, infrared temperature sensor, humidity sensor, quasi electrical static sensor, camera and GPS are integrated to precisely and quantitatively detect the various road surface states and determine the dangerous locations on GIS in sensor server as shown in Fig. 1. The 9 axis dynamic sensors including accelerator, gyro sensor and electromagnetic sensors can measure vertical amplitude of roughness along the road. The infrared temperature sensor observes the road surface temperature to know whether the road surface is snowy or icy without touching the road surface. The quasi electrical static sensor detects the snow and icy states by observing the quasi electrical static field intensity. Camera can detect the obstacles on the road. The far-infrared laser sensor precisely measures the friction rate of snow and icy states. The sensor server periodically samples those sensor signals and performs AD conversion and signal filtering in Prefiltering module, analyzes the sensor data in Analyzing module to quantitatively determine the road surface state and learning from the sensor data in AI module to classify the road surface state as shown in Fig. 2. As result, the correct road surface state can be quantitatively and qualitatively decided. The decision data with road surface state in SMB are temporally stored in Regional Road State Data module and mutually exchanged when the SMB on one vehicle approaches to other SMB. Thus the both SMBs can mutually obtain the most recent road surface state data with just forward road. By the same way, the SMB can also mutually exchange and obtain the forward road surface data from roadside SRS.
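As a purely illustrative sketch of the decision step performed by the AI module, a simple rule-based classifier over a few of the sensor quantities is shown below; the threshold values are assumptions chosen only to make the example concrete and do not correspond to the learned model used in the platform.

# Illustrative rule-based road state decision (assumed thresholds, not the trained AI model).
def decide_road_state(surface_temp_c, humidity_pct, friction):
    if surface_temp_c <= 0 and friction < 0.3:
        return "frozen"
    if surface_temp_c <= 0 and humidity_pct > 80:
        return "snowy"
    if humidity_pct > 90 and friction < 0.6:
        return "wet"
    if humidity_pct > 70:
        return "damp"
    return "dry"

print(decide_road_state(surface_temp_c=-5.0, humidity_pct=85.0, friction=0.2))  # frozen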
Fig. 1. Sensor server unit
Fig. 2. Road state decision system by AI
4 Temporal and Geological Prediction of Road Conditions

4.1 Temporal Prediction Method of Road State
In order to predict the road state in future time, the current observed sensor data and predicted meteorological data from meteorological agency must be integrated as bigdata and analyzed. To formulate the prediction process, the whole area R is divided into the region Rn for n = 1, N as shown in Fig. 3. In Rn, we define road position as h for 1 to d and time as ti. Here each temperature is defined as measured road surface temperature TMn(ti,h) for h = 1, d, the measured air temperature TSn(ti,h) for h = 1, d, the average air temperature TSn(ti,h)ave where ()ave is expressed as an average for whole road position h = 1, d. Then, TSn(ti,h) can be expressed as:
TSn(ti, h)ave = Σ_{h=1}^{d} TSn(ti, h) / N    (1)
where N is the number of observations along the road positions h. On the other hand, the meteorological mesh data equivalent to the region Rn can be obtained from the Meteorological Agency, such as the future averaged air temperature TLn(ti,h)ave, humidity HLn(ti,h)ave and pressure PLn(ti,h)ave (for i = 0, 1, 2, ...). In the following, the air temperature is mainly considered. Since the averaged observed air temperature TSn(ti,h)ave is the same as the meteorological air temperature TLn(ti,h)ave,

TSn(ti, h)ave = TLn(ti, h)ave    (2)
With Eq. (2), by substituting ti+1 instead of ti, the following equation also holds:

TSn(ti+1, h)ave = TLn(ti+1, h)ave    (3)
Then, the future observed air temperature TSn(ti+1,h) can be estimated from the current observed temperature TSn(ti,h) and from TSn(ti+1,h)ave and TSn(ti,h)ave as follows:

TSn(ti+1, h) = TSn(ti, h) + A(ti, h) {TSn(ti+1, h)ave − TSn(ti, h)ave},  for h = 1, d    (4)

Therefore, the future observed road surface temperature TMn(ti+1,h) can be estimated from the future and current observed air temperatures TSn(ti+1,h) and TSn(ti,h) as follows:

TMn(ti+1, h) = TMn(ti, h) + B(ti, h) [TSn(ti+1, h) − TSn(ti, h)],  for h = 1, d    (5)

By substituting Eqs. (2) and (3) into Eq. (5), the following equation is formulated:

TMn(ti+1, h) = TMn(ti, h) + B(ti, h) [TSn(ti, h) + A(ti, h) {TLn(ti+1, h)ave − TSn(ti, h)ave} − TSn(ti, h)]    (6)

Here, G(ti,h) = A(ti,h) B(ti,h) is equivalent to the Kalman gain in Kalman filtering and is expressed by the following recursion formulas:

G(ti, h) = P⁻(ti, h) C / (Cᵀ P⁻(ti, h) C + σ²)    (7)

P(ti, h) = (I − G(ti, h) Cᵀ) P⁻(ti, h)    (8)

P⁻(ti+1, h) = P(ti, h) + ω²    (9)
where P(ti,h) is the covariance matrix of the measurement error, C is a unit vector with coefficient 1/N, σ² is the error variance of the actually observed road surface temperature TMn(ti,h), and ω² is the error variance of the meteorological air temperature TLn(ti+1,h)ave. G(ti,h) can be calculated by Eqs. (7), (8) and (9) when the initial value of P(0,h) is given. Normally P(0,h) = cI, where I is a unit matrix and c is an adjustable parameter, normally varied between 0 and 1000. In summary, the future road surface temperature TMn(ti,h) can be estimated by recursively reading the meteorological air temperature TLn(ti+1,h)ave and giving the initial values TMn(0,h), TSn(0,h) and P(0,h). The other meteorological data, such as humidity HLn(ti,h)ave and pressure PLn(ti,h)ave (for i = 0, 1, 2, ...), are also estimated by the same formulas (6) through (9). The road condition for h = 1, d can finally be predicted by the Road State Decision System using the AI model shown in Sect. 3. For the other regions Rn, n = 1, N, the same process can be applied to predict the road condition. Eventually, the road condition in the whole region R can be predicted.
Fig. 3. Objective road state region
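A minimal scalar sketch of the temporal recursion in Eqs. (4)-(9) is given below, treating each road position h independently and collapsing the coefficients A(ti,h) and B(ti,h) into a single gain; the noise variances, initial covariance and temperature values are illustrative assumptions.

# Scalar sketch of the temporal prediction recursion (Eqs. (4)-(9)) for one road position.
def predict_surface_temperature(tm0, ts0, tl_ave_forecast, sigma2=0.5, omega2=0.1, p0=100.0):
    """tm0: observed road surface temperature, ts0: observed air temperature,
    tl_ave_forecast: forecast mesh air temperatures TLn(ti,h)ave for i = 0, 1, 2, ..."""
    tm, ts, p = tm0, ts0, p0
    tl_prev = tl_ave_forecast[0]
    predictions = [tm]
    for tl_next in tl_ave_forecast[1:]:
        p_minus = p + omega2                    # Eq. (9): a priori error covariance
        g = p_minus / (p_minus + sigma2)        # Eq. (7): Kalman gain with scalar C = 1
        ts_next = ts + g * (tl_next - tl_prev)  # Eq. (4): propagate the observed air temperature
        tm = tm + g * (ts_next - ts)            # Eq. (5): update the road surface temperature
        p = (1.0 - g) * p_minus                 # Eq. (8): a posteriori error covariance
        ts, tl_prev = ts_next, tl_next
        predictions.append(tm)
    return predictions

# Example with assumed temperatures (degrees Celsius).
print(predict_surface_temperature(-2.0, -1.0, [-1.0, -2.5, -4.0, -3.0]))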
4.2 Geological Prediction Method of Road State
In order to predict the road state in a spatially neighboring region from the observed region, the same formulation can be made as in the temporal prediction method. To formulate the prediction process, both the observed road state region Rn and the predicted region Rj are considered, as shown in Fig. 4.
In Rn, the same formulation can be used as shown in Eqs. (1) through (9). On the other hand, in the predicted neighbor region Rj, we define the road position as f for 1 to d and time as ti. Here, each temperature is defined as the predicted road surface temperature TMj(ti,f) and the meteorological air temperature TLj(ti,f)ave for f = 1, d. By considering the temperature change in time from ti to ti+1, the air temperature TSj(ti+1,f) in region Rj can be formulated using the measured air temperature TSn(ti,h) for h = 1, d, as follows:

TSj(ti+1, f) = TSj(ti, f) + B'(ti, f) {TSj(ti+1, f)ave − TSj(ti, f)ave},  for f = 1, d    (10)

The road surface temperature TMj(ti+1,f) can be predicted as follows:

TMj(ti+1, f) = TMj(ti, f) + A'(ti, f) [TSj(ti+1, f)ave − TSj(ti, f)ave]    (11)

By substituting Eq. (10) into Eq. (11), the following recurrent equation is obtained:

TMj(ti+1, f) = TMj(ti, f) + G'(ti, f) [TLj(ti+1, f)ave − TLj(ti, f)ave]    (12)
Here, G'(ti,f) = A'(ti,f) B'(ti,f) is equivalent to the Kalman gain in Kalman filtering and is expressed by the following recursion formulas:

G'(ti, f) = P'⁻(ti, f) C' / (C'ᵀ P'⁻(ti, f) C' + σ²)    (13)

P'(ti, f) = (I − G'(ti, f) C'ᵀ) P'⁻(ti, f)    (14)

P'⁻(ti+1, f) = P'(ti, f) + ω²    (15)
At ti = 0, the initial value TMj(0,f) can be obtained from the following equation:

TMj(0, f) = TMn(0, h) + G(0, f) [TLj(0, f)ave − TLn(0, h)ave]    (16)
As a result, the neighbor region road surface temperature TMj(ti,f) in region Rj can be estimated from Eqs. (12) through (15) by recursively reading the meteorological air temperature TLj(ti+1,f)ave and giving the initial values from Eq. (16) and P'(0,f). The other meteorological data, such as humidity HLj(ti,f)ave and pressure PLj(ti,f)ave (for i = 0, 1, 2, ...), are also estimated by the same formulas (12) through (15).
Fig. 4. Spatial neighbor region
5 Conclusions and Future Works

In previous research, we proposed a wide area road state information platform to identify the road state condition in real time, and implemented and evaluated it. Furthermore, in this paper, temporal and spatial predictions of wide area road state using actual observed data from on-board environmental sensors and equivalent meteorological mesh data are introduced. Actual sensor data are sampled from running vehicles and gathered at a roadside edge computer. On the other hand, the forecasted meteorological mesh data from the Meteorological Agency are collected into the cloud as open data. By integrating those actual data and the forecasted meteorological mesh data, both temporal and spatial predictions of wide area road state can be formulated by applying Kalman filtering. This prediction is useful for autonomous driving to control the speed and steering position of the vehicles, so that they can drive safely and reliably even on snowy and icy roads in snow countries. Currently we are developing a prototype of the predicted road state information platform in an actual snow area to evaluate the effects of the proposed method.

Acknowledgments. The research was supported by Japan Keiba Association Grant Numbers 2021M-198 and Communication and Strategic Research Project Grant by Iwate Prefectural University in 2021.
References 1. SAE International: Taxonomy and Definitions for Terms Related to Driving Automation Systems, for On-Road Motor Vehicles. J3016_201806, June 2018 2. Police department in Hokkaido: The Actual State of Winter Typed Traffic Accidents, https:// www.police.pref.hokkaido.lg.jp/info/koutuu/fuyumichi/blizzard.pdf. November 2018 3. Takahashi, N., Tokunaga, R.A., Sato, T., Ishikawa, N.: Road surface temperature model accounting for the effects of surrounding environment. J. Jpn Soc. Snow Ice 72(6), 377–390 (2010)
4. Fujimoto, A., Nakajima, T., Sato, K., Tokunaga, R., Takahashi, N., Ishida, T.: Route-based forecasting of road surface temperature by using meshed air temperature data. In: JSSI&JSSE Joint Conference, pp. P1–34, September 2018 5. Saida, A., Sato, K., Nakajima, T., Tokunaga, R., Sato, G.: A study of route based forecast of road surface condition by melting and feezing mass estamation method using weather mesh data. In: JSSI&JSSE Joint Conference, pp. P2–57, September 2018 6. Hoshi, T., Saida, A., Nakajima, T., Tokunaga, R., Sato, M., Sato, K.: Basic consideration of wide-scale road surface snowy and icy conditions using weather mesh data. Monthly Rep. Civil Eng. Res. Inst. Cold Region 800, 28–34 (2020) 7. Fujita, S., Tomiyama, K., Abliz, N., Kawamura, A.: Development of a roughness data collection system for urban roads by use of a mobile profilometer and GIS. J. Jpn Soc. Civil Eng. 69(2), I_90–I_97 (2013) 8. Yagi, K.: A measuring method of roadsurface longitudinal profile from sprung acceleraionand verification with road profiler. J. Jpn Soc. Civil Engineers 69(3), I_1–I_7 (2013) 9. Mizuno, H., Nakatsuji, T., Shirakawa, T., Kawamura, A.: A statistical model for estimating road surface conditions in winter. In: The Society of Civil Engineers, Proceedings of Infrastructure Planning (CD-ROM), December 2006 10. Saida, A., Fujimoto, A., Fukuhara, T.: Forecasting model of road surface temperature along a road network by heat balance method. J. Civil Eng. Japan 69(1), 1–11 (2013)
A Coverage Construction and Hill Climbing Approach for Mesh Router Placement Optimization: Simulation Results for Different Number of Mesh Routers and Instances Considering Normal Distribution of Mesh Clients

Aoto Hirata1, Tetsuya Oda2(B), Nobuki Saito1, Yuki Nagai2, Masaharu Hirota3, Kengo Katayama2, and Leonard Barolli4

1 Graduate School of Engineering, Okayama University of Science (OUS), Okayama, 1-1 Ridaicho, Kita-ku, Okayama 700-0005, Japan {t21jm02zr,t21jm01md}@ous.jp
2 Department of Information and Computer Engineering, Okayama University of Science (OUS), 1-1 Ridaicho, Kita-ku, Okayama 700-0005, Japan {oda,katayama}@ice.ous.ac.jp, [email protected]
3 Department of Information Science, Okayama University of Science (OUS), 1-1 Ridaicho, Kita-ku, Okayama 700-0005, Japan [email protected]
4 Department of Information and Communication Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-Higashi, Higashi-Ku, Fukuoka 811-0295, Japan [email protected]
Abstract. The Wireless Mesh Networks (WMNs) enable routers to communicate with each other wirelessly in order to create a stable network over a wide area at a low cost and they have attracted much attention in recent years. There are different methods for optimizing the placement of mesh routers. In our previous work, we proposed a Coverage Construction Method (CCM) and a CCM-based Hill Climbing (HC) system for the mesh router placement problem considering normal and uniform distributions of mesh clients. In this paper, we evaluate the performance of the CCM-based HC system for different numbers of mesh routers and instances. From the simulation results, we found that the CCM-based HC system was able to cover more mesh clients for different instances compared with CCM.
1 Introduction
The Wireless Mesh Networks (WMNs) [1–3] are one of the wireless network technologies that enables routers to communicate with each other wirelessly to create a stable network over a wide area at a low cost, and it has attracted much attention in recent years. The placement of the mesh router has a significant impact
Fig. 1. Flowchart of the CCM.
on cost, communication range, and operational complexity. Therefore, research is being done to optimize the placement of these mesh routers. In our previous work [4–7,7,8], we proposed and evaluated the different meta-heuristics such as Genetic Algorithms (GA) [9], Hill Climbing (HC) [10], Simulated Annealing (SA) [11], Tabu Search (TS) [12] and Particle Swarm Optimization (PSO) [13] for mesh router placement optimization. Also, we proposed a Coverage Construction Method (CCM) for mesh router placement problem [14] and CCM-based Hill Climbing (HC) system [15]. The CCM is able to rapidly create a group of mesh routers with the radio communication range of all mesh routers linked to each other. The CCM-based HC system was able to cover many mesh clients generated by normal and uniform distributions. We also showed that in the two islands model, the CCM-based HC system was able to find two islands and can cover many mesh clients [16]. In this paper, we evaluate performance of CCM-based HC system for different number of mesh routers and instances. As evaluation metrics, we consider the Size of Giant Component (SGC) and the Number of Covered Mesh Clients
(NCMC). The simulation results show that the CCM-based HC system was able to cover more mesh clients for different instances compared with CCM. The structure of the paper is as follows. In Sect. 2, we define the mesh router placement problem. In Sect. 3, we describe the proposed system of the CCM and CCM-based HC. In Sect. 4, we present the simulation results. Finally, conclusions and future work are given in Sect. 5.
2 Mesh Router Placement Problem
In this problem, we are given a two-dimensional continuous area where a number of mesh routers are to be deployed, and a number of mesh clients with fixed positions. The objective of the problem is to find a location assignment for the mesh routers in the two-dimensional continuous area that maximizes the network connectivity and the mesh client coverage. Network connectivity is measured by the SGC of the mesh router links, while the NCMC is the number of mesh clients that are within the radio communication range of at least one mesh router. An instance of the problem consists of the following.

• An area Width × Height, which is the considered area for mesh router placement. Positions of mesh routers are not pre-determined and are to be computed.
• The mesh routers, each having its radio communication range, defining thus a vector of routers.
• The mesh clients located in arbitrary points of the considered area, defining a matrix of clients.
3 Proposed System
In this section, we describe the proposed method. The flowcharts of CCM and the CCM-based HC system are shown in Fig. 1 and Fig. 2.

3.1 CCM for Mesh Router Placement Optimization
In our previous work, we proposed CCM [14] for the mesh router placement optimization problem. The flowchart of CCM is shown in Fig. 1. The CCM searches for the solution with maximized SGC. Among the solutions generated, the one with the highest NCMC is the final solution. We describe the operation of CCM in the following. First, generate mesh clients in the considered area. Next, randomly determine a single point coordinate to be mesh router 1. Once again, randomly determine a single point coordinate to be mesh router 2. Each mesh router has a radio communication range. If the radio communication ranges of the two routers do not overlap, delete router 2, randomly determine a single point coordinate again and make it mesh router 2. This
Fig. 2. Flowchart of HC method for mesh router placement optimization.
process is repeated until the radio communication ranges of the two mesh routers overlap. Once they overlap, generate the next mesh router. If its radio communication range does not overlap with any of the already placed mesh routers, the mesh router is removed and generated randomly again; if it overlaps with any of the other mesh routers, generate the next mesh router. This process continues until the set number of mesh routers is reached. This procedure creates a group of connected mesh routers without deriving the connected component using Depth First Search (DFS) [17,18]. However, this method only creates a group of mesh routers in the considered area and does not take into account the locations of the mesh clients. Therefore, the procedure is repeated for a set number of loops. For each generated placement, we determine how many mesh clients are included in the radio communication ranges of the mesh router group. The placement with the highest number of covered mesh clients over all repetitions is the solution. 3.2
CCM-based HC for Mesh Router Placement Optimization
In this subsection, we describe the CCM-based HC system [15]. The implementation of HC for the mesh router placement problem is shown in Fig. 2. The operation of the CCM-based HC system is as follows. First, we randomly select one of the mesh routers in the group of mesh routers of the initial solution obtained by the CCM and randomly change the coordinates of the chosen mesh router. Then, we compute the NCMC for all mesh routers. If the NCMC is greater than that of the previous solution, the changed mesh router placement becomes the current solution. If the NCMC is less than the previous NCMC, the changed mesh router coordinates are restored. This process is repeated for a set number of loops. However, this process alone is inadequate for the mesh router placement problem. This is because, depending on the router placement and the radio communication ranges, all mesh routers may not be connected. Therefore, it is necessary to create an adjacency list for the mesh routers each time the placement is changed and to use DFS to check whether all mesh routers are connected. The new placement is accepted only when all mesh routers are connected and the NCMC is greater than that of the current solution. In this algorithm, in order to increase the probability that all mesh routers are connected, the coordinates are randomly changed until the radio communication ranges of the mesh routers overlap. We also tightened the terminal conditions in order to cover as many clients as possible.
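The following Python sketch (not the authors' code) illustrates the acceptance loop described above, assuming the area size and radius of Table 1; for brevity it omits the refinement of re-drawing coordinates until the communication ranges overlap.

```python
import math
import random

RADIUS = 2  # radio communication range (assumed, matching Table 1)

def connected(routers, radius=RADIUS):
    """Depth-first search over the adjacency list: two routers are adjacent if
    their communication ranges overlap (distance <= 2 * radius)."""
    n = len(routers)
    adj = [[j for j in range(n) if j != i and
            math.dist(routers[i], routers[j]) <= 2 * radius] for i in range(n)]
    seen, stack = {0}, [0]
    while stack:
        for j in adj[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

def ncmc(routers, clients, radius=RADIUS):
    return sum(any(math.dist(c, r) <= radius for r in routers) for c in clients)

def ccm_hc(initial_routers, clients, loops=100000, width=32, height=32):
    """Hill climbing from a CCM initial solution: move one router at random and
    accept the move only if all routers stay connected and the NCMC improves."""
    routers = list(initial_routers)
    best = ncmc(routers, clients)
    for _ in range(loops):
        i = random.randrange(len(routers))
        old = routers[i]
        routers[i] = (random.uniform(0, width), random.uniform(0, height))
        candidate = ncmc(routers, clients)
        if connected(routers) and candidate > best:
            best = candidate            # accept the changed placement
        else:
            routers[i] = old            # restore the old coordinates
    return routers, best
```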
Table 1. Parameters and values for simulation.
Width of considered area: 32, 64, 128
Height of considered area: 32, 64, 128
Number of mesh routers: 16, 32, 64
Radius of radio communication range of mesh routers: 2
Number of mesh clients: 48, 96, 192
Distributions of mesh clients: Normal distribution
Standard deviation for normal distribution: Width of considered area/10
Number of loops for CCM: 3000
Number of loops for HC method: 100000
Table 2. Parameter settings of instances.
Instance    Grid size   Number of mesh routers  Number of mesh clients
Instance 1  32 × 32     16                      48
Instance 2  64 × 64     32                      96
Instance 3  128 × 128   64                      192
Table 3. Simulation results of CCM.
Instance    Best SGC  Average SGC  Best NCMC  Average NCMC [%]
Instance 1  16        16           48         92.778
Instance 2  32        32           67         63.889
Instance 3  64        64           77         36.528
Table 4. Simulation results of CCM-based HC.
Instance    Best SGC  Average SGC  Best NCMC  Average NCMC [%]
Instance 1  16        16           48         99.167
Instance 2  32        32           92         96.297
Instance 3  64        64           149        71.840
Fig. 3. Visualization results of instance 1.
4
Simulation Results
In this section, we evaluate the proposed method. The parameters used for the simulations are shown in Table 1. We considered three types of instances. The parameter settings of the instances are shown in Table 2. We performed the simulations 15 times for each instance. The simulation results are shown in Table 3 and Table 4 for the CCM and the CCM-based HC system, respectively. The average processing time was 2.241, 23.050 and 108.034 s for instance 1, instance 2 and instance 3, respectively. We also show the simulation results of the best SGC, average SGC, best NCMC and average NCMC. For each simulation result, the SGC is always maximized. For instance 1 and instance 2, the proposed system was able to cover most of the mesh clients on average. However, for instance 3, about 72% of the mesh clients were covered. Compared with the CCM, the proposed system covered about twice as many mesh clients for instance 3. The visualization results are shown in Fig. 3, Fig. 4 and Fig. 5 for instance 1, instance 2 and instance 3, respectively. We can see that the proposed CCM-based HC system covers more mesh clients than the CCM.
Fig. 4. Visualization results of instance 2.
Fig. 5. Visualization results of instance 3.
5
Conclusion
In this paper, we evaluated the performance of the CCM-based HC system for different numbers of mesh routers and instances. From the simulation results, we found that the CCM-based HC system was able to cover more mesh clients for the different instances compared with the CCM. In the future, we would like to consider Simulated Annealing and Genetic Algorithms. Acknowledgement. This work was supported by JSPS KAKENHI Grant Number 20K19793.
References 1. Akyildiz, I.F., et al.: Wireless mesh networks: a survey. Comput. Netw. 47(4), 445–487 (2005) 2. Jun, J., et al.: The nominal capacity of wireless mesh networks. IEEE Wirel. Commun. 10(5), 8–15 (2003) 3. Oyman, O., et al.: Multihop relaying for broadband wireless mesh networks: from theory to practice. IEEE Commun. Mag. 45(11), 116–122 (2007) 4. Oda, T., et al.: WMN-GA: a simulation system for WMNs and its evaluation considering selection operators. J. Ambient. Intell. Humaniz. Comput. 4(3), 323– 330 (2013) 5. Ikeda, M., et al.: Analysis of WMN-GA simulation results: WMN performance considering stationary and mobile scenarios. In: Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications (IEEE AINA-2014), pp. 337–342 (2014) 6. Oda, T., et al.: Analysis of mesh router placement in wireless mesh networks using Friedman test considering different meta-heuristics. Int. J. Commun. Netw. Distrib. Syst. 15(1), 84–106 (2015) 7. Oda, T., et al.: A genetic algorithm-based system for wireless mesh networks: analysis of system data considering different routing protocols and architectures. Soft. Comput. 20(7), 2627–2640 (2016) 8. Sakamoto, S., et al.: Performance evaluation of intelligent hybrid systems for node placement in wireless mesh networks: a comparison study of WMN-PSOHC and WMN-PSOSA. In: Proceedings of the 11th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS-2017), pp. 16–26 (2017) 9. Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–73 (1992) 10. Skalak, D.B.: Prototype and feature selection by sampling and random mutation hill climbing algorithms. In: Proceedings of the 11th International Conference on Machine Learning (ICML 1994), pp. 293–301 (1994) 11. Kirkpatrick, S., et al.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983) 12. Glover, F.: Tabu search: a tutorial. Interfaces 20(4), 74–94 (1990) 13. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks (ICNN 1995), pp. 1942–1948 (1995)
14. Hirata, A., et al.: Approach of a solution construction method for mesh router placement optimization problem. In: Proceedings of the IEEE 9th Global Conference on Consumer Electronics (IEEE GCCE-2020), pp. 1–2 (2020) 15. Hirata, A., Oda, T., Saito, N., Hirota, M., Katayama, K.: A coverage construction method based hill climbing approach for mesh router placement optimization. In: Barolli, L., Takizawa, M., Enokido, T., Chen, H.-C., Matsuo, K. (eds.) BWCCA 2020. LNNS, vol. 159, pp. 355–364. Springer, Cham (2021). https://doi.org/10. 1007/978-3-030-61108-8 35 16. Hirata, A., et al.: Simulation results of CCM based HC for mesh router placement optimization considering two islands model of mesh clients distributions. In: Proceedings of the 9th International Conference on Emerging Internet, Data & Web Technologies (EIDWT 2021), pp. 180–188 (2021) 17. Tarjan, R.: Depth-first search and linear graph algorithms. SIAM J. Comput. 1(2), 146–160 (1972) 18. Lu, K., et al.: The depth-first optimal strategy path generation algorithm for passengers in a metro network. Sustainability 12(13), 1–16 (2020)
Related Entity Expansion and Ranking Using Knowledge Graph Ryuya Akase(B) , Hiroto Kawabata, Akiomi Nishida, Yuki Tanaka, and Tamaki Kaminaga Yahoo Japan Corporation, Kioi Tower, 1-3 Kioicho, Chiyoda-ku, Tokyo 102-8282, Japan
Abstract. Nowadays, web search users can quickly receive relevant answers, such as a summary and various facts about an entity, from a knowledge panel. Search services recommend relevant entities based on search trends. If the search engine recommends sufficient related entities, users will acquire adequate information of interest. This enhances the user experience and, for web service providers, increases the opportunity to attract users. In this study, we increase the number of knowledge panels that recommend related entities and optimize their order using a knowledge graph. We also introduce a production-level system that generates related entities from a massive knowledge base and search log; it achieves low-latency serving. We deploy our system to the production environment and perform quantitative and qualitative estimation using A/B testing. Based on the results, we conclude that our method significantly enhances the impression-based coverage of knowledge panels while preventing a significant change in click-through rate.
1
Introduction
Search engines have multiple functions that provide various results based on user’s intent as well as links to indexed documents. For example, a user search for “Tokyo Skytree” can return a summary of information regarding the place, weather forecast, map, official site and congestion situation of the site. The knowledge panel is a component of a search engine that provides more information regarding an entity. The knowledge base is a linked data written in RDF (resource description framework). Using data and an appropriate graph to enable graph searching, search engines can understand the intent of a query and answer adequately. Although the knowledge panel provides a brief answer and meta-information to enhance user convenience, from a business perspective, search engine providers seek to increase the time spent by users on their service. In fact, many services such as e-commerce and motion picture distribution recommend related links so that users can follow links from one web page to another.
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 L. Barolli et al. (Eds.): CISIS 2021, LNNS 278, pp. 172–184, 2021. https://doi.org/10.1007/978-3-030-79725-6_17
2
Related Work
Blanco et al. developed the recommendation engine Spark [1]. They built the knowledge graph using open data sources such as Wikipedia and Freebase and closed data sources composed of rich information about films, TV shows, music and sports. The proposed system used entity importance in the knowledge graph, calculated using Page Rank, and the occurrence probability of the entity in web searches, Twitter and Flickr to obtain the feature used in entity ranking. The feature extraction is implemented with Map Reduce from Apache Hadoop and ranking computed by GBDT (gradient boosted decision trees). The labels Bad, Fair, Good, Perfect, and Excellent to generate related entities are created manually. Although there is no baseline, the system achieves acceptable results in both qualitative and quantitative analysis. By contrast, our study focuses on topologies of a knowledge graph and explores direct and indirect relationships to collect related entity candidates. Our time-series data are generated automatically using click log and then used in the neural network so that our system ranks related entities. In addition, we evaluate our methods using a baseline that employs simple tf-idf (term frequency-inverse document frequency). Jia et al. proposed a system that embeds both queries and entities using the neural network and recommends related entities by calculating cosine similarity between query vectors and entity vectors [2]. The entities are extracted from the document and title in web search results. They reported that users spent considerable time trying to read recommended documents when the system provided adequate entities satisfying user needs, causing a 5.1% increase in the click-through rate and a 5.5% increase in the page view in the A/B testing. Jia et al. insisted that the user interface clustering entities belonging to the same domain in the knowledge graph improved visual effects. In our system, while we prepare related entities having various domains in the generation phase, we filter the related entities with a domain different from the knowledge panel during the serving phase to isolate the effect of complexity. Onuki et al. used the multi-label neural network to predict relationships between related entities [3]. The network received the head and tail of a particular entity as an input and outputted a label. For example, their system predicted “is-capital-of ” when given “Japan” and “Tokyo”. TransE can obtain relation vectors satisfying d(h + l, t) = 0, where (h, l, t) is an RDF triple and d is a function to calculate Euclidean distance between vectors [4]. It enables vector operation because it embeds words and relations in the same vector space. The multi-label neural network proposed by Onuki et al. predicted a label from a head and tail with a higher accuracy than TransE. Moreover, Sun et al. proposed TransEdge to handle complex relations, which could not be performed by TransE [5]. These approaches are useful for describing related entities, and the additional explanation improves user comprehension. However, we need an accurate description of entities, especially in production environments. In our system, we deactivate the caption of a related entity in a current version to avoid misinterpretations of relationship while we generate the label using a knowledge graph.
Miao et al. summarized a relationship graph and visualized it by applying bisimulation so that the user can recognize a relationship between related entities [6]. Although explaining in words how an entity relates to others is difficult, visualization facilitates understanding of the relationship. Miao et al. conducted a stability evaluation and claimed their system reduced the time taken to find the relationship between related entities. This method is also helpful in terms of user experience, as mentioned in the studies described earlier. Related entities provide not only shortcuts to other entities but also serendipitous connections. Fatma et al. presented DTM (DBpedia trivia miner) to mine curious entities from a structured knowledge graph such as DBpedia [7]. Five specialized staff labeled facts to indicate interestingness, and the convolutional neural network classified facts into either interesting entities or basic ones using the labels and knowledge graph. A quantitative evaluation that applied DTM to the domains of Bollywood actors and musical artists showed a superior accuracy when compared with the conventional support vector machine. Our system does not treat personalization; instead, it targets general interests and concerns that many users have. There are a variety of other studies that we do not discuss owing to limited space. TEM (three-way entity model) analyzed the relationship between a user, main entity, and related entity [8]. As for the entity summarization used in generating a knowledge panel, techniques such as PageRank and LDA (latent Dirichlet allocation) have been introduced [9,10]. The knowledge graph is also applied to diagnosis of mental health, explainable recommendation, interpretation of search intention and knowledge graph completion [11–14]. In addition, efficient graph databases and visualization methods for knowledge graphs have been implemented [15–17]. Though we described a number of related works, the approaches we propose, such as the collection method of related entities and the model used for ranking, are unique. To the best of our knowledge, there are no similar systems or evaluation results in the production environment.
3
System Architecture
Figure 1 illustrates the knowledge panels our system generates. First, a query is mapped to the entity ids using an entity linking method. Then, a knowledge panel and related entities are generated based on our knowledge graph and search log. The click history will be stored in the search log database so that we can use it as training data. The knowledge panel is composed of facts that the rule-based system extracts from the knowledge base and related entities that are generated automatically using the structured knowledge graph and machine learning. For example, the knowledge panel of “Frozen” contains links to an official site, Wikipedia, and names of the director and composer. It also displays links to other knowledge panels such as “Frozen II ” that have a deep relationship with the source knowledge panel. Our system requires a knowledge base and entity linking to map a query to the entity ids identifying knowledge panels.
Fig. 1. Our knowledge panels. Related entities are displayed in “People also search for ” field. Copyrighted images are replaced with alternate texts in this paper.
We use YJKB (Yahoo Japan knowledge base) [18]. YJKB wraps LOD (linked open data) such as Wikidata, Wikipedia, and Freebase to maintain quality of facts continuously; it has its own ontology. This knowledge base contains facts extracted from semi-structured data such as web documents and private data from providers offering subscription content. YJKB is a non-public knowledge base because it includes private data, and its ontology is customized for internal use. Nonetheless, our proposed methods do not depend on a particular knowledge base; it can use other knowledge bases such as Wikidata instead. We need to match a query with an entity in order to display a knowledge panel including related entities. This task is called entity linking. Our entity linking method uses cw and cq to build the model for calculating the probability P (e|q) that a query q matches entity e. cw : A query will correspond to the entity of Wikipedia page if a query matches the anchor text in the Wikipedia page. cq : A query will correspond to the entity of Wikipedia page if a user clicks a link to the Wikipedia page in search results. The system binds a query to the entity having the highest probability. Currently, the LDA-based model takes into account the context information extracted from the knowledge base [19]. In this study, we use the results of entity linking, or resolved entity ids for generating related entities.
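As a purely illustrative sketch (the production system builds a probabilistic, LDA-based model [19]; the weighting scheme, identifiers and data below are assumptions), the two entity-linking signals cw and cq could be combined as follows to pick the best-matching entity for a query.

```python
# Illustrative combination of the anchor-text signal (cw) and the click signal (cq).
def link_entity(query, anchor_index, click_counts, alpha=0.7):
    """Return the candidate entity id with the highest combined score for the query."""
    candidates = set(anchor_index.get(query, [])) | set(click_counts.get(query, {}))
    def score(entity):
        cw = 1.0 if entity in anchor_index.get(query, []) else 0.0      # anchor-text match
        clicks = click_counts.get(query, {})
        cq = clicks.get(entity, 0) / max(sum(clicks.values()), 1)       # click share
        return alpha * cw + (1 - alpha) * cq
    return max(candidates, key=score) if candidates else None

anchor_index = {"tokyo skytree": ["Q57965"]}                           # hypothetical ids
click_counts = {"tokyo skytree": {"Q57965": 120, "Q7473516": 3}}
print(link_entity("tokyo skytree", anchor_index, click_counts))
```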
Fig. 2. System flow chart. This system is designed for continuous model learning and low latency serving.
Figure 2 illustrates the flow chart of our system. All knowledge panels including related entities are generated every day using the knowledge base and search log; its three days’ data are stored in a KVS (key-value store) so that we can roll back the changes. The API (application programming interface) for knowledge panels fetches facts and related entity lists with entity id as a key. It is important to always provide fresh related entities users are interested in to continually improve the model. In this study, we automate the pipeline, including offline generation and online serving, in the production environment. The pipeline is configured as follows: 1. Data collection: loading the search log and constructing the knowledge graph from the knowledge base 2. Data preparation: extracting co-occurrence relations and exploring knowledge graph to generate entity pairs 3. Feature engineering: calculating the number of impressions and clicks and obtaining some categorical data such as type of relation 4. Model training 5. Batch prediction 6. Serving To treat massive search log, knowledge base and graph, we employ the distributed processing framework Apache Spark. The model is implemented using TensorFlow. Daily generation and feed job is defined as a CronJob of Kubernetes. This system stores the result of batch prediction in Apache Cassandra. A serving API fetches the results to cope with high throughput in the production environment. Apache Solr executes the entity linking. Automated completely, the system can train the model using fresh data every day and monitor metrics such as the number of impressions or clicks and latency.
4
Search Log-Based Model
In this section, we prepare a baseline model using tf-idf (Fig. 4). This method is our primary entity recommendation system released in the production environment. We record search log data to improve user experience. Though it includes various features such as user attribute and device information, this study employs the following:
Fig. 3. Generation of virtual documents to calculate tf-idf. (i) Collections of search log per user. (ii) Collections of search log per session. (iii) Combinations of co-occurrence. (iv) Virtual documents, where src entity is a document id, dst entities are elements in a document.
Fig. 4. Collection of related entities (a1) and ranking method (a2) for search log-based model.
1. Time stamp
2. Anonymized user ID
3. Searched entity ID
Figure 3 shows the process to generate related entities. The searched entities are grouped by sessions. One session is valid for 30 min. It is allocated based on the time stamp and anonymized user ID. Meanwhile, noise sessions with a single element or too many elements, specifically more than 1800 elements, are eliminated. Next, to generate co-occurring pairs, we prepare all combinations of entities. Finally, the pairs in all sessions are grouped by the first element. The grouped elements indicated in the right column of Fig. 3(iv) are virtual documents used in tf-idf calculation. The entity in the left column of Fig. 3(iv) is the main entity identifying a knowledge panel, and unique entities in the right column of Fig. 3(iv) are related entities. The system calculates relations using the virtual documents described earlier. The following equation is used for computing the score.

score_{e,d} = (c_{e,d} / Σ_i c_{i,d}) · ln((|D| + 1) / (|d ∈ D : e ∈ d| + 1))    (1)
where e is a related entity, d is a virtual document, c_{e,d} is the co-occurrence frequency of the appearance of a related entity in the virtual document, |D| is the
Fig. 5. Left: Part of the knowledge graph. Right: Generated dataset used in machine learning.
aggregated number of virtual documents, |d ∈ D : e ∈ d| is the number of virtual documents including a related entity.
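The following Python sketch (illustrative only; the production version runs on Apache Spark and applies the session-noise filtering described above, which is omitted here) shows how the virtual documents and the score of Eq. (1) can be computed from a list of per-session entity sequences.

```python
import math
from collections import defaultdict

def score_related_entities(sessions):
    """tf-idf scoring over virtual documents built from search sessions.
    `sessions` is a list of lists of searched entity ids (one list per session)."""
    # Build virtual documents: for every ordered co-occurring pair (src, dst)
    # in a session, dst is appended to the virtual document of src.
    docs = defaultdict(list)
    for session in sessions:
        for src in session:
            for dst in session:
                if src != dst:
                    docs[src].append(dst)

    # Document frequency: in how many virtual documents each entity appears.
    df = defaultdict(int)
    for doc in docs.values():
        for e in set(doc):
            df[e] += 1

    n_docs = len(docs)
    scores = {}
    for src, doc in docs.items():
        total = len(doc)
        scores[src] = {
            e: (doc.count(e) / total) * math.log((n_docs + 1) / (df[e] + 1))
            for e in set(doc)
        }
    return scores

# Example with three hypothetical anonymized sessions of searched entity ids.
print(score_related_entities([["frozen", "frozen2"],
                              ["frozen", "moana"],
                              ["frozen", "frozen2", "coco"]]))
```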
5
Search Log and Knowledge Graph-Based Model
The search log-based model cannot acquire entities that users have not searched for, making it hard to provide serendipitous information, because less common co-occurrence entities will be set aside even if they contain helpful information. Moreover, a factually inaccurate relation might appear because there is no feature to check for authenticity between co-occurring entities. To complement rare entities and incorporate a flag indicating whether an actual relation exists in a knowledge graph, we build and explore the graph in this section. The proposed system employs time series data and infers the number of clicks so that we can rank related entities on demand.
Fig. 6. Collection of related entities (b1) and ranking method (b2) for search log and knowledge graph-based model.
As with Wikidata JSON dumps, YJKB is saved as a JSON Lines text file format. Our system parses this data and builds up the knowledge graph from vertices composed of head and tail entities and edges composed of predicates.
We insert additional predicates and literals and perform entity linking when feasible. For example, although the name of prefectural and city governments is important in a task to find nearby related facilities, necessary edges cannot be set up in case the ontology does not define the predicates we need. Therefore, the system adds the edge extracting the prefecture name from address information and links it with the corresponding entity. Ad hoc handling making the best use of the incomplete knowledge base cannot be avoided when we release the system to the production environment. This problem could be solved by knowledge base completion; we plan to solve this in the future. We need to define motifs in order to find related entities. Depth-first search with many hops increases calculation time owing to the combinatorial explosion from multiple search patterns; moreover, it might collect noise entities having little relation to each other. In this study, we use the following simple motifs and extract related entities based on the knowledge graph.
Direct relation: (h)-[p]->(t)
Indirect relation: (h)-[p]->(t)<-[p]-(h')
where h and h' are head entities, p is a predicate, and t is a tail entity. Our system explores the knowledge graph and extracts direct and indirect relations. The outcomes are merged with the relations and features obtained from the search log. The left side of Fig. 5 illustrates an example of the Disney animated movie, "Frozen". "Frozen" is directly connected with "Frozen II" via the predicate "followed by". It is also indirectly connected with "Moana" via the predicate "production company" and the entity "Walt Disney Animation Studios". Through this exploration, we collect related entities and provide a flag indicating the presence of a valid path to each related entity. In addition, we aggregate these results and search log data such as a search frequency (imps) of the entity, click frequency of related entities, and co-occurrence frequency between a source entity corresponding with a knowledge panel and a related entity (destination entity). The right side of Fig. 5 demonstrates a merged result. For example, "Frozen II" seems to have a strong relationship with "Frozen" because it has a direct path and was searched frequently between August 15 through 19. Conversely, "Coco" can appear in the related entity list despite having a few clicks because it has an indirect path. The number of clicks is obtained from previously released related entities as shown in Fig. 1. Related entities should be sorted by not only relevance but also user needs. This study assumes that the related entities many users click are valuable. Our model solves a regression problem that predicts the number of clicks on a related entity by using the features obtained from the search log and knowledge graph. Metrics such as knowledge panel impressions and related entity clicks reflect user tendency according to seasons and weeks. For instance, an entity with increasing retrieval frequencies over the last seven days tends to be clicked the next day. Therefore, it is important that the model can manage time series data provided by the daily log. At the same time, we need to add long-term and universal trends extracted from the knowledge graph. We employ the time series and static features described in Table 1. These features are carefully screened by
Table 1. Features used in the search log and knowledge graph-based model.
# Type         Feature
1 Time series  Transition of daily searching frequency (imps) of a source entity
2 Time series  Transition of daily searching frequency (imps) of a related entity
3 Time series  Transition of daily clicking frequency of a related entity
4 Static       Co-occurrence frequency between an entity pair during a certain period
5 Static       Flag indicating direct relation
6 Static       Flag indicating indirect relation
7 Static       Flag indicating search log-derived relation
8 Static       Flag indicating whether a related entity image is supported
means of correlation analysis and accuracy validation, taking into account the feature importance calculated by GBDT. Figure 6(b2) illustrates the model composed of LSTM (long short-term memory) and fully connected layers. We chose this model after trying other models such as GBDT and simple feedforward neural networks because it achieves relatively high precision. We found appropriate hyperparameters using grid search. The eight features listed in Table 1 and the number of clicks the model should predict are given in training; input data are standardized using the training data. Time series features are input into the LSTM, and static features are input into the first fully connected layer. These outputs are concatenated and fed into the second fully connected layer.
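As an illustration only (the authors' exact architecture and hyperparameters are not given in the paper), a two-branch model of this kind could be sketched in Keras as follows; the layer sizes and sequence length are assumptions.

```python
import tensorflow as tf

SEQ_LEN = 7          # assumed length of the daily time series window
N_TS_FEATURES = 3    # features 1-3 in Table 1
N_STATIC = 5         # features 4-8 in Table 1

# Time series branch: LSTM over the daily sequences.
ts_in = tf.keras.Input(shape=(SEQ_LEN, N_TS_FEATURES), name="time_series")
ts_out = tf.keras.layers.LSTM(32)(ts_in)

# Static branch: first fully connected layer over the static features.
static_in = tf.keras.Input(shape=(N_STATIC,), name="static")
static_out = tf.keras.layers.Dense(16, activation="relu")(static_in)

# Concatenate both branches and feed them into the second fully connected layer.
merged = tf.keras.layers.Concatenate()([ts_out, static_out])
hidden = tf.keras.layers.Dense(32, activation="relu")(merged)
clicks = tf.keras.layers.Dense(1)(hidden)   # regression target: number of clicks

model = tf.keras.Model([ts_in, static_in], clicks)
model.compile(optimizer="adam", loss="mse")
model.summary()
```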
6
Evaluation
This study performs an A/B testing composed of quantitative and qualitative analysis to compare the search log-based model with the search log and knowledge graph-based model. We introduce generated related entities to the knowledge panels belonging to the film class and the place class including educational institutions, architectural structures and natural landscapes, and release them to a part of the production environment so that we can collect statistical information. During A/B testing, we apply each model to 20% of actual web search users in Yahoo Japan. Figure 7 shows impression-based coverage and click-through rate of related entities for 26 days. Knowledge panels belonging to the film and place class are tested. The impression-based coverage of knowledge panels is calculated by (number of displayed knowledge panels including related entities/number of all displayed knowledge panels). The click-through rate of related entities in the knowledge panel is calculated by (number of clicked related entities/number of
Fig. 7. Quantitative evaluation results of the search log-based model (Log) and the search log and knowledge graph-based model (Log+KG).
displayed related entities). According to a Z-test result with a significance level of 5%, the coverage of the film class increases 22pp (percentage point) and the coverage of the place class increases by 8pp. The click-through rate of the film class decreases by 0.04pp, and the click-through rate of the place class increases by 0.03pp. The coverage of both achieves more than 95% on average with the aid of knowledge graph. It is possible to cover tail entities of hardly searched queries using the search log and knowledge graph-based model. The significant increase in related film entities causes a reduction in the click-through rate of the film class. However, there is scope for improvement. The search log and knowledge graph-based model could predict clicks because the reduction of the film class is relatively less impactful and the place class is well predicted. From the aforementioned results, we achieve the goal of improving coverage of related entities and maintaining click-through rate. In a qualitative evaluation, an experienced search quality team assesses the knowledge panels including related entities of the film and place class. Each evaluator subjectively checks 60 knowledge panels; each panel is checked three times by different evaluators to reduce bias. Evaluators focus on defects and test relevance, order and sufficiency of related entities. Relevance problems are reported when evaluators feel at odds with a related entity. Order problems are reported when they find a strongly related entity at low order. Sufficiency problems are reported when they find a lack of related entities. In this review, many evaluators reported relevance problems because the prediction model enhanced certain related entities receiving clicks even when such entities have a weak association. This problem can be solved by weighting the features derived from the knowledge graph though it may decrease the click-through rate. In addition, we need to introduce a user feature to cope with this problem because criterion for relevance depends on user needs. Conversely, few evaluators reported order and sufficiency problems. For example, they reported that the search log and knowledge graph-based model increases the number of strongly related places in the neighborhood, as shown in Fig. 8. The knowledge graph could complement the place entities in the same district and film entities of the same period that the search log could not cover.
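The two-proportion Z-test mentioned above could be computed, for example, as in the following sketch; the counts are hypothetical placeholders, not the values reported in the paper.

```python
# Illustrative two-proportion Z-test for an A/B comparison of impression-based coverage.
from statsmodels.stats.proportion import proportions_ztest

covered = [73_000, 95_000]    # knowledge panels shown with related entities (A, B) - hypothetical
shown = [100_000, 100_000]    # all knowledge panels shown (A, B) - hypothetical

z_stat, p_value = proportions_ztest(count=covered, nobs=shown)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:            # significance level of 5%
    print("The difference in coverage is statistically significant.")
```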
Fig. 8. Related entities of “Cubic Plaza Shin-Yokohama” (wikidata:Q11297526). Copyrighted images are replaced with alternate texts in this paper.
7
Conclusion
This paper proposes models to extend and rank related entities displayed in a knowledge panel. The first, search log-based model orders related entities using user sessions. It functions as a baseline, collects click data, and copes with the cold start. The second, search log and knowledge graph-based model extends and optimizes related entities based on graph exploration. Evaluation results indicate that 95% of the generated knowledge panels are covered by related entities, providing sufficient coverage and acceptable rankings. Rich related entities can improve user experience and provide serendipitous information. In addition, the system is implemented as a scalable and continuous deployment structure; hence, we can supply entities stably in the production environment. A/B testing is performed in a phased manner. The first phase observes user response to newly emerged related entities using the search log-based model. The second phase validates the search log and knowledge graph-based model as described in this paper. Though we evaluate related entities of the film and place classes, we can apply our methods to other classes. After evaluation, the production release is approved, allowing us to currently use the film, place and company classes (the person class is still under review). The third phase will allow various related entities with classes different from the ones in the knowledge panel. In addition, our system will display captions describing relationships; this information from the knowledge graph exploration is already available and ready to use. We will perform further studies on personalization and very long-term features to improve the relevance factor based on methods such as the transformer-based model that trains complex time-series characteristics [20].
References 1. Blanco, R., Cambazoglu, B.B., Mika, P., Torzec, N.: Entity recommendations in web search. In: Alani, H., et al. (eds.) ISWC 2013. LNCS, vol. 8219, pp. 33–48. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41338-4 3
2. Jia, Q., Zhang, N., Hua, N.: Context-aware deep model for entity recommendation in search engine at Alibaba (2019). arXiv preprint arXiv:1909.04493 3. Onuki, Y., et al.: Relation prediction in knowledge graph by multi-label deep neural network. Appl. Netw. Sci. 4(1), 20 (2019) 4. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in Neural Information Processing Systems, pp. 2787–2795 (2013) 5. Sun, Z., Huang, J., Hu, W., Chen, M., Guo, L., Qu, Y.: TransEdge: translating relation contextualized embeddings for knowledge graphs. In: Ghidini, C., et al. (eds.) The Semantic Web – ISWC 2019. LNCS, vol. 11778, pp. 612–629. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30793-6 35 6. Miao, Y., Qin, J., Wang, W.: Graph summarization for entity relatedness visualization. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1161–1164 (2017) 7. Fatma, N., Chinnakotla, M.K., Shrivastava, M.: The unusual suspects: deep learning based mining of interesting entity trivia from knowledge graphs. In: Thirty-First AAAI Conference on Artificial Intelligence, pp. 1107–1113 (2017) 8. Bi, B., Ma, H., Hsu, B.J., Chu, W., Wang, K., Cho, J.: Learning to recommend related entities to search users. In: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pp. 139–148 (2015) 9. Diefenbach, D., Thalhammer, A.: PageRank and generic entity summarization for RDF knowledge bases. In: Gangemi, A., et al. (eds.) The Semantic Web – ISWC 2018. LNCS, vol. 10843, pp. 145–160. Springer, Cham (2018). https://doi.org/10. 1007/978-3-319-93417-4 10 10. Wei, D., Gao, S., Liu, Y., Liu, Z., Hang, L.: MPSUM: entity summarization with predicate based matching (2020). arXiv preprint arXiv:2005.11992 11. Gaur, M., et al.: “Let me tell you about your mental health!” Contextualized classification of reddit posts to DSM-5 for web-based intervention. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 753–762 (2018) 12. Anelli, V.W., Di Noia, T., Di Sciascio, E., Ragone, A., Trotta, J.: How to make latent factors interpretable by feeding factorization machines with knowledge graphs. In: Ghidini, C., et al. (eds.) The Semantic Web – ISWC 2019. LNCS, vol. 11778, pp. 38–56. Springer, Cham (2019). https://doi.org/10.1007/978-3-03030793-6 3 13. Pan, J.Z., Zhang, M., Singh, K., Harmelen, F., Gu, J., Zhang, Z.: Entity enabled relation linking. In: Ghidini, C., et al. (eds.) ISWC 2019. LNCS, vol. 11778, pp. 523–538. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30793-6 30 14. Hamaguchi, T., Oiwa, H., Shimbo, M., Matsumoto, Y.: Knowledge base completion with out of-knowledge-base entities: a graph neural network approach. Trans. Jpn. Soc. Artif. Intell. 33(2), F-H721 (2018). (Japanese) 15. Tanase, G., Suzumura, T., Lee, J., Chen, C. F., Crawford, J., Kanezashi, H.: System G distributed graph database. arXiv preprint arXiv:1802.03057 (2018) 16. Nguyen, V., Yip, H.Y., Thakkar, H., Li, Q., Bolton, E., Bodenreider, O.: Singleton property graph: adding a semantic web abstraction layer to graph databases. In: BlockSW/CKG@ ISWC, pp. 1–13 (2019) 17. Vargas, H., Buil-Aranda, C., Hogan, A., L´ opez, C.: RDF explorer: a visual SPARQL query builder. In: Ghidini, C., et al. (eds.) ISWC 2019. LNCS, vol. 11778, pp. 647–663. Springer, Cham (2019). https://doi.org/10.1007/978-3-03030793-6 37
18. Yamazaki, T., et al.: A scalable and plug-in based system to construct a productionlevel knowledge base. In: DI2KG@ KDD, pp. 1–5 (2019) 19. Toyota, I., Tsuchizawa, Y., Tsukiji, T., Sugawara, K., Noguchi, M.: dishPAM: a distributable seeded hierarchical pachinko allocation model. In: Proceedings of the Association for Natural Language, pp. 217–220 (2020). (Japanese) 20. Wu, N., Green, B., Ben, X., O’Banion, S.: Deep transformer models for time series forecasting: the influenza prevalence case (2020). arXiv preprint arXiv:2001.08317
Zero Trust Security in the Mist Architecture Minoru Uehara(&) Faculty of Information and Arts, Toyo University, Tokyo, Japan [email protected]
Abstract. Recently, the Internet of Things (IoT) has become popular. Fog computing is suitable for the IoT because of computation offloading. We propose the mist architecture as a model of fog computing. A mist system consists of a cloud, mist, and droplets. A conventional mist architecture is premised on boundary defense by a firewall (FW). The mist and droplets are protected by the FW. The cloud accesses the mist using network address translation traversal. However, in recent years, increasing numbers of organizations have adopted the zero trust architecture (ZTA). Additionally, the ZTA does not trust the inside of the FW. In this paper, we describe the mist architecture that realizes ZTA.
1 Introduction In recent years, the Internet of Things (IoT) has become widespread. The IoT is about to make a big difference in our society. Future visions such as Its Industry 4.0 and Japan Society 5.0 have been advocated. It is essential that the IoT works with cloud services. The IoT connected in the cloud is sometimes called the Cloud of Things [1]. Cloud services and the IoT have a brain-body relationship. The cloud interacts with the real world in both directions via the IoT. This promotes the evolution of artificial intelligence. Because the IoT acts in the real world, it has applications that require low latency. Unfortunately, the cloud is not suitable for such applications. Computation offloading [2] in the cloud has been studied, but the cloud is inherently high latency. Therefore, fog computing [3] is attracting attention. We propose the mist architecture as a model of fog computing [4]. The mist architecture has three elements: a misty cloud, mist, and droplets. In Ref. [4], the misty cloud was simply called the cloud. However, because it is difficult to distinguish it from the public cloud, we call it the misty cloud in this paper. A droplet is an IoT device. The mist corresponds to fog or a cloudlet [12, 13]. The mist and droplets are in a LAN and the misty cloud is in a WAN. The LAN and WAN are separated by a firewall (FW). Under these conditions, the mist achieves network address translation (NAT) traversal. A problem with the IoT is security. To use the IoT with peace of mind, it is necessary to solve security problems. In recent years, the zero trust architecture (ZTA) [5] has been attracting attention. The ZTA is a new security model that solves the problem of perimeter defense. In this paper, we call it zero trust security (ZTS) or the zero trust network (ZTN), depending on the context. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 L. Barolli et al. (Eds.): CISIS 2021, LNNS 278, pp. 185–194, 2021. https://doi.org/10.1007/978-3-030-79725-6_18
In this paper, we discuss the method of realizing a ZTN in the mist architecture. The mist has the role of a connector in the ZTN. Therefore, the ZTN can be realized with minimum changes. This paper is organized as follows: In Sect. 2, we describe related research. In Sect. 3, we describe the improved mist architecture that realizes ZTN. In Sect. 4, we evaluate it from multiple perspectives. Finally, we conclude the paper.
2 Related Works 2.1
Mist Architecture
The mist architecture [4] was proposed to realize small-scale fog computing. The name mist comes from a small fog. The mist architecture-based system consists of three elements: a misty cloud, mist, and droplets. The misty cloud is a globally accessible Web service that manages multiple mists. The mist is equivalent to a cloudlet and manages multiple droplets. A droplet is a managed device that is managed by the mist. Figure 1 shows a configuration example of the mist architecture. The mist is registered in the misty cloud in association with the owner. A droplet is registered with the mist of the same owner. This registration process eliminates unauthorized devices.
Fig. 1. Configuration of the mist architecture
All three elements provide web services. These web services work together. In the mist architecture, it is assumed that there is a FW between the cloud and mist, and NAT is configured. Therefore, access from a WAN to a LAN is prohibited. By contrast, access from a LAN to a WAN is allowed. With the FW, the LAN is secure, and unauthorized access from outside can be prevented. Such a security model is called perimeterization. Such security models are vulnerable when compromised across boundaries. However, NAT prevents the cloud from requesting the mist. Therefore, the mist architecture provides the mechanism for NAT traversal. NAT traversal in the mist architecture is achieved as follows (see Fig. 2): When the misty cloud receives a request to mist, it queues the request and returns the ticket to the requester. The ticket has the role of a future object. The requester can use the ticket to
check whether the request has been completed. If the request is complete, the requester can obtain the result from the response queue. The mist checks the request queue of the misty cloud, processes any request immediately, and returns the result to the misty cloud. The misty cloud puts the result in the response queue that corresponds to the request and completes the request.
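As a minimal illustration of this ticket-and-queue mechanism (not the actual implementation; class and method names are assumptions), the misty cloud side could be sketched as follows.

```python
import queue
import uuid

class MistyCloud:
    """Minimal sketch of the request/response queues used for NAT traversal.
    The mist polls fetch_request() from inside the LAN, so no inbound port is opened."""

    def __init__(self):
        self.requests = queue.Queue()
        self.responses = {}            # ticket id -> result

    def submit(self, request):
        """Called by a requester in the WAN; returns a ticket (future-like handle)."""
        ticket = str(uuid.uuid4())
        self.requests.put((ticket, request))
        return ticket

    def fetch_request(self):
        """Called periodically by the mist from inside the LAN."""
        return self.requests.get_nowait() if not self.requests.empty() else None

    def complete(self, ticket, result):
        """Called by the mist after processing the request."""
        self.responses[ticket] = result

    def poll(self, ticket):
        """Called by the requester to check whether the request has been completed."""
        return self.responses.get(ticket)

cloud = MistyCloud()
t = cloud.submit({"droplet": "led-1", "action": "on"})
ticket, request = cloud.fetch_request()        # the mist fetches and processes the request
cloud.complete(ticket, {"status": "ok"})
print(cloud.poll(t))
```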
Fig. 2. NAT traversal in the mist architecture
An example of using the mist architecture is managed network blocks (MNBs) [6, 7]. In MNBs, the droplet becomes a network device and constitutes the data plane. The mist constitutes the control plane. 2.2
Zero Trust Network
The ZTN is a network that does not trust anything. De-perimeterization was proposed because of the limitations of a perimeter security model such as a FW. In 2014, Google published the BeyondCorp paper [8]. In 2020, the National Institute of Standards and Technology established the ZTA [5]. The elemental technologies that realize a ZTN are as follows:
identity access management (IAM)
identity-aware proxy (IAP)
mobile device management (MDM)
security information and event management (SIEM)
cloud access security broker (CASB)
secure web gateway (SWG)
data loss prevention (DLP)
endpoint detection and response (EDR)
secure access service edge (SASE)
Not all of the above elements are required. For example, BeyondCorp is a ZTN minimum set that can consist of IAM, IAP, IAP connector, SIEM, and MDM. Figure 3 shows an example of the ZTN configuration.
Fig. 3. Example of the ZTN configuration
In the ZTN, IAM controls authentication and authorization. IAM products include Google Cloud Identity, Microsoft Azure Active Directory, and Auth0. Using IAM, advanced authentication such as single sign-on becomes possible. The IAP is a reverse proxy that works with IAM. It authenticates and authorizes in cooperation with IAM. IAP products include Google IAP, Microsoft Active Directory Application Proxy, Akamai Enterprise Application Access (EAA), and Zscaler Private Access. Akamai EAA and Zscaler Private Access support all protocols other than HTTPS. IAP is more scalable than a VPN. The IAP connector is a reverse proxy that works with the IAP. Because the IAP connector connects to the IAP, it is not necessary to open the inbound port. Because the IAP cannot be accessed as freely as a VPN, unauthorized access can be suppressed. MDM/mobile application management manages bring your own device (BYOD) devices or applications and isolates incidents. MDM products include Google endpoint management and Microsoft Intune. SIEM collects and analyzes all logs to detect anomalies. SIEM products include Google Chronicle Security Analytics Platform, Microsoft Azure Sentinel, and Splunk Enterprise Security. Because SIEM handles a large number of logs, a cloud that can process big data is required. The CASB visualizes SIEM. CASB monitors and inspects access to software as a service (SaaS). CASB products include Microsoft Cloud App Security.
The SWG operates as a public proxy and always prevents unauthorized access. The user accesses SaaS via the SWG. The SWG may work with SIEM or also serve as a CASB and DLP. SWG products include Zscaler Internet Access. DLP prevents data leakage. DLP products include Microsoft 365 DLP and Zscaler Cloud DLP. It is necessary to consider whether DLP is necessary for IoT devices. Generally, it is difficult for IoT devices to perform high-load processing. EDR reduces damage through early detection and response rather than preventing attacks. EDR products include Microsoft Defender for Endpoints. SASE is a concept advocated by Gartner and includes a ZTN. In SASE, the agent of the client and the gateway of the server are connected to achieve secure communication between the client and server. At this time, access control of the agent and gateway is performed by the controller linked with the authentication infrastructure; that is, SASE is like OpenFlow in a ZTN. In SASE, the software-defined perimeter defines small boundaries.
3 Mist Architecture for ZTN 3.1
System Overview
The mist architecture does not use a VPN. VPN capacity is not large. In 2020, working from home became widespread during the Covid-19 pandemic, and the VPN became a bottleneck. Because there are more IoT devices than VPN users, there is a high possibility that a VPN cannot solve the problem. The mist architecture achieves safe NAT traversal because the mist initiates access to the misty cloud. The existing mist architecture does not support the ZTA. We describe the mist architecture that achieves ZTS. Figure 4 shows an overview of mist-based ZTS.
Fig. 4. System overview
In the ZTN, endpoint security is required, even for the mist and droplets in a private network. The droplet web application programming interface (API) does not accept communication except from the registered mist.
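An illustrative sketch of such an endpoint check on the droplet side is given below; the shared-secret scheme and all identifiers are assumptions for illustration, as the paper does not prescribe a concrete mechanism.

```python
import hmac
import hashlib

# Accept a request only if it comes from the registered mist.
REGISTERED_MIST_ID = "mist-01"                    # hypothetical id set at registration
SHARED_SECRET = b"provisioned-at-registration"    # hypothetical credential

def verify_request(mist_id: str, payload: bytes, signature: str) -> bool:
    if mist_id != REGISTERED_MIST_ID:
        return False                              # unregistered mist: reject
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

payload = b'{"action": "on"}'
sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
print(verify_request("mist-01", payload, sig))     # True: registered mist
print(verify_request("rogue-mist", payload, sig))  # False: unregistered mist
```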
In the ZTN, BYOD of the IoT becomes a problem. Connecting an unregistered IoT to a corporate network increases the risk of unauthorized access. 3.2
IAM
The mist architecture does not provide its own IAM, but uses existing cloud services. For example, Google Cloud Identity and Microsoft Azure Active Directory can be used. 3.3
IAP and IAP Connector
There are two approaches used to implement the IAP and IAP connector in the mist architecture: native IAP and hybrid IAP. First, we describe native IAP. In this method, the misty cloud becomes an IAP, and the mist becomes an IAP connector. In Fig. 4, the IAP and misty cloud, IAP connector, and mist can be integrated, and induction can be omitted. When a user signs in to the misty cloud, the IAP refers to IAM. As mentioned in Sect. 2.1, requests from the misty cloud to the mist are processed with a delay. The mist periodically visits the misty cloud to receive requests. The boundary defense does not allow direct access to the mist from the outside. Access from unregistered devices is prohibited, even if users attempt to gain unauthorized access by invading the inside via a VPN. However, misty cloud IAP implementations are often functionally inferior to existing IAPs. Hence, we describe the second method: hybrid IAP. This method is used together with the existing cloud IAP.
1. A client accesses the induction web service from the existing cloud IAP via the existing connector.
2. The induction web service grants access to the misty cloud and returns a redirect to the misty cloud to the client.
3. The user reconnects to the misty cloud and accesses the mist.
It is desirable to use this redirect together with a one-time password so that it does not become a security hole. For example, in Google Cloud IAP, the IAM role allows user access; that is, the user allows the misty cloud to access the mist by setting appropriate access rights to the misty cloud and mist. 3.4
SIEM
SIEM is important in the ZTN. In the mist architecture, the mist manages the log of droplets registered in the mist. If the log exceeds the capacity of the mist, it is sent to the cloud regularly. We proposed a method for safely managing logs in past studies [9– 11]. The traditional mist architecture allows droplets to access the cloud directly. In this case, the mist can be used as a SWG by allowing droplets to access the cloud via the mist. This logs all elements of the mist architecture and integrates them into SIEM.
3.5
MDM
MDM is important not only for the ZTN but also for the mist architecture. A motivation for developing the mist architecture was device management. In the mist architecture, all droplets are managed devices. The mist architecture manages droplets as follows:
1. A user registers the mist in the misty cloud. At this time, the user becomes the owner of the mist. In this case, registration is the building of a lasting relationship of trust.
2. The user registers a droplet in the misty cloud. At this time, the user becomes the owner of the droplet.
3. The user registers the droplet in the mist.
4. The user connects the droplet to the same network as the mist.
5. The droplet sends a subscription request to the mist. In this case, subscription is the building of a temporary relationship of trust. The mist accepts subscription requests from registered droplets and rejects requests from unregistered droplets.
6. If the user discards the droplet, the user unregisters it.
7. When the user discards the mist, the registration is deleted.
In mist-based ZTN, communication is not performed unless there is a relationship of trust. The misty cloud confirms the trust relationship. The user sets a relationship of trust. We do not trust the machine. The misty cloud has the IDs of all devices manufactured. It also has the IDs of unregistered devices, but it does not know where such devices are used. 3.6
Others
In this subsection, we describe elements not covered in this paper. EDR can be replaced by SIEM and MDM. Containers can be used to isolate devices that may have been infected with malware. The mist and droplets can be operated in each container. In fact, the prototype runs in a container. Dangerous devices can be eliminated by discarding the container.
4 Evaluations We described the NAT traversal of the mist architecture in a previous paper [6]. In the present paper, we verify that the mist works as a connector. In the prototype system of this study, we ran the misty cloud, mist, and droplet individually. The droplet was an RGB LED. The system turned on the RGB LED from the misty cloud. The specification of the evaluation machine is as follows:
CPU: AMD Ryzen 5 3500U
RAM: 16 GB
OS: Windows 10
We ran Ubuntu/focal64 with Vagrant on the above machine. Additionally, the LXC containers were nested. Because they were on the same machine, the delay could be ignored. The average response time of the prototype system was 1.018 s. In this prototype system, the mist used a scheduler to periodically fetch requests from the cloud. The interval was 1 s. Therefore, the response time depended on the scheduler interval. The mist architecture did not assume frequent communication between the mist and misty cloud. The delay from the misty cloud to the mist was large, but the delay from the mist to the misty cloud and droplet was small. Next, we consider security. Mist-based ZTN protects using two methods. The first method is building an explicit trust relationship by registering the device. The three elements of mist-based ZTN reject requests from unregistered devices. This eliminates unauthorized access. Three types of licensing procedures exist: registration, transfer, and disposal. The combinations are shown in Table 1. Unregistered devices cannot be used. Additionally, discarded devices cannot be used. Transfer can be used because it only transfers the right.
Table 1. Availability
Registration  Transfer  Disposal  Availability
Yet           *         *         Disabled
*             *         Done      Disabled
Done          Yet       Yet       Enabled
Done          Done      Yet       Enabled
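The availability rule of Table 1 reduces to a simple predicate; the following sketch (illustrative only) encodes it directly.

```python
def device_available(registered: bool, disposed: bool) -> bool:
    """Availability rule of Table 1: a device is usable only if it has been
    registered and has not been disposed of; transfer does not affect availability."""
    return registered and not disposed

# The cases of Table 1 (transfer omitted because it only moves the right).
print(device_available(registered=False, disposed=False))  # Disabled
print(device_available(registered=True,  disposed=True))   # Disabled
print(device_available(registered=True,  disposed=False))  # Enabled
```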
The second method is the detection of false trust relationships. Even a ZTN does not really trust anything. At a minimum, the misty cloud needs to be trusted. We consider the scenario in which a malicious mist (m-mist), non-malicious mist (n-mist), malicious droplet (m-droplet), and non-malicious droplet (n-droplet) are mixed in the organization. First, it is important to detect the m-mist at an early stage. If an n-droplet is connected to the m-mist, the m-mist may tamper with the n-droplet information. The m-mist is caused by unauthorized registration or transfer. Because the owner can confirm the deviation, early detection is possible if there is an intelligent warning. Next, the system detects the existence of an m-droplet in the range of the n-mist. The n-mist confirms the access right of the droplet to connect with the misty cloud. This detects the m-droplet.
5 Conclusions In this paper, we described how to realize a ZTN in the mist architecture. The mist architecture is a mechanism to make an IoT device (droplet) a managed device. The droplet is a secure managed device if it supports the ZTN. This makes it possible to use IoT devices safely. Future issues are as follows: First, we need to improve the response delay. Specifically, we introduced a method that waits until a mist obtains a request and retries until it succeeds. As a result, the waiting time becomes almost zero. However, because the number of constant connections to the misty cloud increases, too many mists cannot be connected. Therefore, it is necessary to consider layering the misty cloud. Next, we need to manage the ownership of the droplet. The ownership management proposed in this paper is not sufficient. If the purchaser and administrator are different, we also need the ability to formally transfer ownership. Thus, we will integrate the misty cloud into the IAP. Acknowledgments. We thank Maxine Garcia, PhD, from Edanz Group (https://en-authorservices.edanz.com/ac) for editing a draft of this manuscript.
References 1. Parwekar, P.: From Internet of Things towards cloud of things. In: 2011 2nd International Conference on Computer and Communication Technology (ICCCT), pp. 329–333, 15–17 Sept. 2011, https://doi.org/10.1109/ICCCT.2011.6075156 2. Lin, L., Liao, X., Jin, H., Li, P.: Computation offloading toward edge computing. Proc. IEEE 107, 1584–1607 (2019). https://doi.org/10.1109/JPROC.2019.2922285 3. Bonomi, F., et al.: Fog computing and its role in the internet of things. In: Proceedings of the FIRST edition of the MCC Workshop on MOBILE Cloud Computing, pp. 13–16 (2012) 4. Uehara, M.: Mist Computing: Linking Cloudlet to Fogs. In: Lee, R. (ed.) Computational Science/Intelligence and Applied Informatics, pp. 201–213. Springer, Cham (2018). https:// doi.org/10.1007/978-3-319-63618-4_15 5. Rose, S., et al.: Zero Trust Architecture. National Institute of Standards and Technology (2019) 6. Saka, R., Uehara, M.: Web API-based NAT traversal in managed network blocks. In: Proc. of the 12th International Workshop on Engineering Complex Distributed Systems (ECDS2018) in Conjunction with the 12th International Conference on Complex, Intelligent and Software Intensive Systems (CISIS-2018), pp. 660–669, July 4–6, 2018, Kunibiki Messe, Matsue, Japan 7. Saka, R., Uehara, M.: Implementations of droplets in managed network blocks. In: Proc. of the 1st Sustainable Computing Workshop (SUSCW-2018) in conjunction with The 6th International Symposium on Computing and Networking (CANDAR-2018), pp. 380–382, November 27–30, Hida Takayama, Japan, https://doi.org/10.1109/CANDARW.2018.00076 8. Ward, R.; Beyer, B.: Beyondcorp: A New Approach to Enterprise Security (2014) 9. Tomono, A., Uehara, M., Shimada, Y.: Trusted log management system. In: Book of Trustworthy Ubiquitous Computing, Trustworthy Ubiquitous Computing, vol. 6. Atlantis Press, pp 79–98 (2012)
10. Tomono, A., Uehara, M., Shimada, Y.: YAML based long-term log management. GESTS Int. Trans. Commun. Sig Process. 64(1), 73–90 (2011) 11. Tomono, A., Uehara, M., Shimada, Y.: Transferring trusted logs across vulnerable paths for digital forensics. In: Proc. of the 4th International Workshop on Broadband and Wireless Computing, Communication and Applications (BWCCA2009) in Conjunction with the 7th International Conference on Advances in Mobile Computing & Multimedia (MoMM2009), pp. 487–492 (2009) 12. Satyanarayanan, M., Bahl, P., Caceres, R., Davies, N.: The case for VM-based cloudlets in mobile computing. IEEE Pervasive Comput 8(4), 14–23 (2009). https://doi.org/10.1109/ MPRV.2009.82 13. Satyanarayanan, M.: Mobile computing: the next decade. ACM SIGMOBILE Mobile Comput. Commun. Rev. Arch. 15(2), 2–10 (2011)
Blockchain Based Authentication for End-Nodes and Efficient Cluster Head Selection in Wireless Sensor Networks

Sana Amjad1, Usman Aziz2, Muhammad Usman Gurmani1, Saba Awan1, Maimoona Bint E. Sajid1, and Nadeem Javaid1(B)

1 COMSATS University Islamabad, Islamabad 44000, Pakistan
2 COMSATS University Islamabad, Attock Campus, Islamabad, Pakistan
Abstract. In this paper, a secure blockchain based identity authentication scheme for end-nodes is proposed for wireless sensor networks (WSNs). Moreover, to resolve the issue of limited energy in WSNs, a mechanism of cluster head (CH) selection is also proposed. The nodes in a network are authenticated on the basis of credentials to prevent malicious activities. The malicious nodes harm the network by providing false data to other nodes. Therefore, a blockchain is integrated with the WSN to make the network more secure, as it allows only authenticated nodes to become a part of the network. Moreover, in a WSN, sensor nodes collect the information and send it to the CH for further processing. The CH aggregates and processes the information; however, its energy depletes rapidly due to the extra workload. Therefore, the CH is replaced with the node that has the highest residual energy among all nodes. The simulation results show that the network lifetime increases after CH replacement. Moreover, they show that the transaction cost is very low during the authentication phase.

Keywords: Blockchain · Wireless sensor networks · Identity authentication

1 Introduction
The wireless sensor networks (WSNs) are useful in sensing environmental or physical changes in healthcare, surveillance, transportation, etc. The sensors in a WSN are randomly deployed for environmental monitoring [1,2]. The sensor nodes have many constraints, such as limited battery, low computation capability, low storage, etc. [3,4]. Satoshi Nakamoto introduced blockchain in 2008; it has emerged as a promising technology to address the issues of data security and remove the dependency on a third party [5]. The blockchain is used in the fields of healthcare, energy trading, smart grids, etc. It provides a secure, decentralized and distributed mechanism for storage and addresses the single point of failure issue. The blockchain provides a tamper-proof ledger in which a new record is added after being validated by the miners. The miner nodes validate the transactions
by different consensus mechanisms: proof of work (PoW), proof of authority (PoA), proof of stake (PoS), etc. [6,7]. In the PoW, all the interested nodes participate and solve a mathematical puzzle. The node that solves the puzzle first is responsible for validating the transaction and adding a block to the blockchain. This process consumes a lot of computational power, which is not suitable for energy constrained WSNs. Moreover, the blockchain uses smart contracts in which all the business rules are stored, which removes the need for any external third party. In PoA, only pre-selected nodes are responsible for mining; these nodes are selected on the basis of their capabilities. Because the miners are pre-selected nodes, they do not have to solve the mathematical puzzle that requires high computation. Moreover, blockchain plays an important role in WSNs and provides security and privacy in them [8,9]. It provides security by detecting the malicious nodes in the network. There are many techniques for malicious node detection [10,11]. However, blockchain faces the issue of high storage cost, which is not suitable for WSNs that are not resource enriched. In the WSNs, sensor nodes gather data from the environment and forward it to the CH for further processing. The CH processes the data and forwards it to the base station (BS). The nodes in the network are registered as well as authenticated to prevent malicious acts. In [12], CHs send the sensed data to the BS; however, there is no registration and authentication mechanism for the network nodes. The malicious nodes can enter into the network and make it vulnerable. Whenever a node fails due to energy depletion in the network, it affects the whole network's performance. There is no mechanism in [13,14] for the selection of the CH. When any CH fails to serve the network due to its low energy, there is no criterion for selecting a new CH. The contributions of this paper are as follows:
– nodes' authentication is performed to prevent the network from malicious activities,
– a smart contract is used to resolve the trust issue by removing the third party, and
– the CH is selected on the basis of nodes' highest energy by using the low-energy adaptive clustering hierarchy (LEACH) protocol.
2 Related Work
In this section, relevant studies of blockchain in WSNs are discussed based on the limitations they address.

2.1 Registration and Authentication of Nodes
The nodes in the IoT environment cooperate to provide the services. However, in [6] and [14], nodes' identity authentication is compromised; any node can enter the network and behave maliciously, which affects the network performance. The node identity authentication relies on the central authentication server.
The sensors play an important part in IoT for different purposes, and one of them is nodes' identity authentication. However, sensors have very low computational power. In [13], the authentication issue occurs because non-authenticated nodes can enter the network and act maliciously. While in [15], users' privacy and authenticity need to be assured. In [16], a routing protocol is used to authenticate the devices; however, a trust issue is created due to the centralized authority. In [17], the wireless body area network consists of sensors that gather the data of human body parts and send it to the local node publicly. However, the nodes in the network are not authenticated.

2.2 Storage Issues in Network Nodes
In [18], the sensor nodes have some constraints, such as low storage and low computational power. Moreover, some nodes in the network behave selfishly and do not store the data. The PoW mechanism is used in previous work, which consumes much computational power. Whereas, in [19], a static routing protocol is not suitable for the internet of underwater things (IoUT) and there is a storage issue in the centralized system. Also, in [20], the PoW consensus is used. However, it consumes high computational power and storage. In addition to the problems discussed above, in [21], IoT is integrated with the blockchain in a centralized manner and PoW is used for the mining process. However, it creates a central point of failure due to the central authority. In the network, each node stores data that is generated by other nodes. However, a storage issue occurs. While in [22], the nodes' data records are stored in a centralized system, which creates a single point of failure. Whereas in [23], blockchain is integrated with IoT. However, IoT nodes are not suitable for keeping a copy of the ledger due to storage constraints.

2.3 Data Privacy of Nodes
In [12], no mechanism is proposed for data protection, due to which any malicious node can steal data and harm the network. Whereas in [24], the control and care of the manufacturing products in the industry are done by the workers; however, a data transparency issue occurs. The workers can steal the restricted product information. Moreover, the misuse of important records is another issue. Also, in [6], the data security and privacy of sensor nodes are compromised in the WSNs. In [25], crowdsensing is essentially used to collect information using different devices. However, no data privacy protection mechanism is used. As in [26], the dynamic WSNs play an important role in collecting data; however, untrusted behavior of nodes occurs. Whereas, in [15], users' data privacy and authenticity need to be assured. Whereas in [27], the information-centric network (ICN) is integrated with a WSN. The cached data is duplicated and shared in the network. However, data privacy and security concerns may occur. While in [28], the concept of a smart city is developed with the integration of IoT. However, due to data growth and lack of management, data security issues occur. In [29], the data
is transmitted from sensor nodes to IoT devices; however, there is no mechanism for data protection.

2.4 Excessive Energy Consumption
In [18], the sensing nodes in the network behave selfishly and do not store the data. The PoW mechanism is used in previous work, which consumes much computational power. Also, in [20], blockchain technology is used in different fields for trading and supply chain purposes. PoW is used as the consensus mechanism, which consumes high computational power. In [23], blockchain is integrated with IoT. However, IoT nodes are less capable and are not able to keep a copy of the ledger due to low energy. Whereas, in [30], IOTA is a distributed ledger that provides fast and tamper-proof information. However, the nodes' information gathering rate is very low. The IoT sensors may have very low computational power and their energy may decrease very fast. This makes it a problem for IoT to validate the transactions quickly. While in [31], sensor nodes do not have enough battery to survive in the network and are not able to communicate for a long time. In [32], the wireless body area networks evolve along with healthcare applications. However, they consume a lot of energy.

2.5 Malicious Nodes Detection and Removal from Networks
In a WSN, localization of sensor nodes is a major issue nowadays. There are some nodes in the WSN which give the wrong location and act maliciously. In this way, network security is compromised in [33]. Whereas, in [34], there is no proper mechanism for the detection of malicious nodes. Moreover, there is no traceability mechanism for the detection of malicious nodes. Also, in [35], the industrial IoT (IIoT) is being used in different fields such as manufacturing and healthcare. However, some service provisioning challenges occur. In service provisioning challenges, the untrusted service provider can act maliciously and provide the wrong services. On the other side, the client can act maliciously by repudiating the services. In [18], the nodes in the network behave selfishly and do not store the data. Whereas in [36], sensor nodes communicate by finding the routing path. However, no suitable method is used to find the malicious nodes and protect the data from being compromised. While in [37], the range based localization approach needs hardware for finding the precise location and it becomes very costly. Moreover, the range-free approach is affected by the malicious nodes in the network. In [38], cloud based computing is performed; however, data is retrieved through the internet and no data security mechanism is applied.

2.6 Single Point of Failure Issue Due to Centralized Authority
In [14], nodes' identity authentication is compromised, which affects the network performance. The node identity authentication relies on central authentication servers that are considered third parties and cause a single point of failure.
Whereas in [39], the authors compare the centralized and distributed network models. In a centralized system, the data is sent to the cloud directly; however, data bandwidth and data latency issues arise. In a distributed system, fog computing is used; however, a single point of failure issue arises. In [16], in the routing protocol, central devices are used to authenticate the devices; however, a trust issue is created due to the centralized authority. While in [22], shellfish products are among the most popular foods throughout the globe. Their freshness must be maintained for long-term storage, for which cold storage is needed. A WSN has a great impact on managing the cold-storage operation. However, the records are stored in a centralized system, which creates a single point of failure and exposes the system to malicious attacks. In [40], a central authority is used for data storage. However, a single point of failure issue is created.

Table 1. Mapping of problems to solutions and validations

Limitations                                       Proposed solutions            Validations
L1. Nodes' registration and authentication [12]   S1. Authentication technique  V1. Message size
L2. Malicious nodes detection [12]                                              V2. Transaction cost
L3. Node battery issue [13, 14]                   S2. LEACH protocol            V3. Network lifetime
Table 1 maps the problems to their solutions and validation parameters. For intrusion prevention against malicious nodes, an authentication scheme is used. By using the authentication scheme, only authenticated nodes enter the network. The node battery issue is resolved by selecting the highest energy node using the LEACH protocol [41].
3 Proposed System Model
A blockchain based model is proposed for establishing secure communication between authentic nodes and the CH.

3.1 System Components
The system model comprises three primary entities: end-nodes, the CH and the BS.
Sensor Node: In a WSN, the sensor nodes sense data from the environment. These nodes are resource constrained and not able to store large amounts of data. Therefore, they send data to the CH for further computation and storage (Fig. 1).
Fig. 1. System model for end-nodes authentication and CH selection
Cluster Head: The CH receives data from sensor nodes, processes it and stores it on the BS. Our proposed model provides a mechanism for the selection of the CH on the basis of its residual energy when the battery of the existing CH depletes. For this, the energy of each sensor node in the network is calculated and the node with the highest residual energy is selected as the cluster head.
Base Station: The blockchain is deployed on the BS, which has sufficient resources for validating transactions. The BS stores the sensed data and the credentials of registered nodes. The CH then sends data to the BS for storage.
End-Node: A nodes' registration and authentication scheme is used to prevent the network from malicious activities. The authentication scheme used in this paper is motivated by [14]. In the proposed system model, this scheme is used for a private blockchain. An unknown end-node sends a request to the blockchain for registration. Initially, the blockchain checks whether this node is already registered in the network or not. If the end-node is already registered, then it is not re-registered in the network and it proceeds further; otherwise, the blockchain registers it. When any end-node wants some data from the network, it must first be authenticated. The end-node is authenticated on the basis of its credentials stored on the blockchain at the time of registration. At request time, the end-node provides its credentials; if these credentials match the credentials stored on the blockchain, then it is considered an authenticated node. Otherwise, this specific end-node will
be announced as a malicious node and rapidly removed from the network. After authentication, the end-user is allowed to get the data from the network.
Smart Contract: A smart contract is a digital contract, which is deployed on the blockchain for handling the transactions without the involvement of any third party. In this paper, the PoA consensus mechanism is used for nodes' authentication, which consumes less computational power because pre-selected nodes perform the mining process.
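The registration and authentication flow described above can be summarized by the following Python sketch. It models the on-chain credential store as a simple dictionary; this is an illustration of the logic, not the actual smart contract, and all names and data structures are assumptions.

# Simplified model of the registration/authentication logic (illustrative).
# In the real system this state would live in the smart contract on the
# blockchain deployed at the BS; here a dictionary stands in for it.

class IdentityRegistry:
    def __init__(self):
        self.credentials = {}          # node_id -> credential stored at registration

    def register(self, node_id, credential) -> bool:
        if node_id in self.credentials:
            return False               # already registered: do not re-register
        self.credentials[node_id] = credential
        return True

    def authenticate(self, node_id, credential) -> bool:
        # A node is authenticated only if it presents the same credential
        # stored at registration; otherwise it is treated as malicious.
        return self.credentials.get(node_id) == credential

registry = IdentityRegistry()
registry.register("node-1", "secret-1")
assert registry.authenticate("node-1", "secret-1")       # legitimate node
assert not registry.authenticate("node-2", "whatever")   # unregistered -> rejected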
4 Simulation Results and Discussion
This section demonstrates the analysis of the proposed solution. Figure 2 depicts the message size during the registration and authentication phases. During registration, nodes send a request to enter the network and send their credentials for registration. Their credentials are stored in the BS; whenever an end-node enters the network, it is first authenticated. The authentication is performed by the BS. The message size for registration is high because many of the network resources, like bandwidth and throughput, are used in sending the data to the blockchain.
Fig. 2. Registration and authentication message size for end-nodes
On the other hand, during authentication, fewer network resources are used because the BS only has to match the credentials to verify the end-nodes. Only authenticated nodes take part in the network. The PoA consensus mechanism is used in our proposed model to validate transactions. In this way, malicious nodes are not allowed to perform malicious activities in the network. Figure 3 illustrates the network lifetime and shows the number of dead nodes based on the number of rounds. There are 200 sensor nodes and from these sensors, the CH is selected on
the basis of the highest residual energy. The energy of the first CH depletes at round 180, the energy of the tenth node depletes at round 200, and the energy of all the nodes depletes at round 1200. This shows that the network has a good lifetime. The LEACH protocol is used to prolong the network lifetime and reduce the energy consumption of nodes.
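A minimal sketch of the CH replacement rule used here (select the alive node with the highest residual energy) is given below; the data structure and threshold are assumptions for illustration only.

# LEACH-style CH replacement sketch: when the current CH runs low on energy,
# the alive node with the highest residual energy becomes the new CH.

def select_cluster_head(residual_energy: dict, threshold: float = 0.0):
    # residual_energy maps node_id -> remaining energy [J]; dead nodes are excluded.
    alive = {n: e for n, e in residual_energy.items() if e > threshold}
    if not alive:
        return None                      # all nodes are dead
    return max(alive, key=alive.get)     # node with the highest residual energy

energies = {"n1": 0.8, "n2": 1.5, "n3": 0.0}
assert select_cluster_head(energies) == "n2"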
Fig. 3. Network lifetime
Fig. 4. Transaction cost
Figure 4 illustrates the transaction cost of registration and authentication of end-nodes. The transaction cost in the registration phase is greater than in the authentication phase because, during the registration phase, the blockchain stores all
the information of the nodes one by one to register the end-nodes, which costs much more than the authentication phase. During the authentication phase, the nodes only have to be authenticated against the information that was already provided during the registration phase. That is why the authentication process does not take much time.
5 Conclusion and Future Work
The aim of this paper is to enhance the network lifetime as well as to protect the network from malicious nodes. Therefore, an identity authentication scheme is used to register the nodes and then authenticate them. The unknown nodes first register themselves and then authenticate themselves to enter the network. In this way, only legitimate nodes become a part of the network. The LEACH protocol is integrated with the blockchain to enhance the network lifetime because CHs with low energy are replaced by the node with the highest energy. On the other side, the identity authentication scheme is used to make the network more secure because only legitimate nodes can enter the network. In future work, services will be provided to legitimate nodes. The sensed data will be stored in a storage system. Also, the computational costs of authentication and storage will be compared.
References 1. Fu, M.H.: Integrated technologies of blockchain and biometrics based on wireless sensor network for library management. Inf. Technol. Libr. 39(3) (2020) 2. Kumari, S., Om, H.: Authentication protocol for wireless sensor networks applications like safety monitoring in coal mines. Comput. Netw. 104, 137–154 (2016) 3. Jiang, Q., Zeadally, S., Ma, J., He, D.: Lightweight three-factor authentication and key agreement protocol for internet-integrated wireless sensor networks. IEEE Access 5, 3376–3392 (2017) 4. Farooq, H., Arshad, M.U., Akhtar, M.F., Abbas, S., Zahid, B., Javaid, N.: BlockVN: a distributed blockchain-based efficient communication and storage system. In: Barolli, L., Hellinckx, P., Enokido, T. (eds.) Broadband and Wireless Computing, Communication and Applications, vol. 97, pp. 56–66. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33506-9 6 5. Padmavathi, U., Rajagopalan, N.: Concept of blockchain technology and its emergence. In: Blockchain Applications in IoT Security, pp. 1–20. IGI Global (2021) 6. Moinet, A., Darties, B., Baril, J.L.: Blockchain based trust and authentication for decentralized sensor networks. arXiv preprint arXiv:1706.01730 (2017) 7. Abubaker, Z., et al.: Decentralized mechanism for hiring the smart autonomous vehicles using blockchain. In: Barolli, L., Hellinckx, P., Enokido, T. (eds.) Broadband and Wireless Computing, Communication and Applications, vol. 97, pp. 733– 746. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33506-9 67 8. Goyat, R., et al.: Blockchain-based data storage with privacy and authentication in internet-of-things. IEEE Internet Things J. (2020) 9. Abbas, S., Javaid, N.: Blockchain based vehicular trust management and less dense area optimization. In 2019 International Conference on Frontiers of Information Technology (FIT), pp. 250–2505. IEEE, December 2019
10. Christidis, K., Devetsikiotis, M.: Blockchains and smart contracts for the internet of things. J. Fintech Blockchain Smart Contracts 1(1), 7–12 (2018) 11. Magazzeni, D., McBurney, P., Nash, W.: Validation and verification of smart contracts: a research agenda. Computer 50(9), 50–57 (2017) 12. Haseeb, K., Islam, N., Almogren, A., Din, I.U.: Intrusion prevention framework for secure routing in WSN-based mobile Internet of Things. IEEE Access 7, 185496– 185505 (2019) 13. Hong, S.: P2P networking based internet of things (IoT) sensor node authentication by blockchain. Peer-to-Peer Netw. Appl. 13(2), 579–589 (2019). https://doi.org/ 10.1007/s12083-019-00739-x 14. Cui, Z., et al.: A hybrid blockchain- based identity authentication scheme for multiWSN. IEEE Trans. Serv. Comput. 13(2), 241–251 (2020) 15. Kolumban-Antal, G., Lasak, V., Bogdan, R., Groza, B.: A secure and portable multi-sensor module for distributed air pollution monitoring. Sensors 20(2), 403 (2020) 16. Ramezan, G., Leung, C.: A blockchain-based contractual routing protocol for the internet of things using smart contracts. Wirel. Commun. Mob. Comput. (2018) 17. Xu, J., Meng, X., Liang, W., Zhou, H., Li, K.C.: A secure mutual authentication scheme of blockchain-based in WBANs. China Commun. 17(9), 34–49 (2020) 18. Ren, Y., Liu, Y., Ji, S., Sangaiah, A. K., Wang, J.: Incentive mechanism of data storage based on blockchain for wireless sensor networks. Mob. Inf. Syst. (2018) 19. Uddin, M.A., Stranieri, A., Gondal, I., Balasurbramanian, V.: A lightweight blockchain based framework for underwater IoT. Electronics 8(12), 1552 (2019) 20. Liu, M., Yu, F.R., Teng, Y., Leung, V.C., Song, M.: Computation offloading and content caching in wireless blockchain networks with mobile edge computing. IEEE Trans. Veh. Technol. 67(11), 11008–11021 (2018) 21. Liu, Y., Wang, K., Lin, Y., Xu, W.: LightChain: a lightweight blockchain system for industrial internet of things. IEEE Trans. Ind. Inf. 15(6), 3571–3581 (2019) 22. Feng, H., Wang, W., Chen, B., Zhang, X.: Evaluation on frozen shellfish quality by blockchain based multi-sensors monitoring and SVM algorithm during cold storage. IEEE Access 8, 54361–54370 (2020) ˇ Popovski, P.: Delay and communication 23. Danzi, P., Kalør, A.E., Stefanovi´c, C, tradeoffs for blockchain systems with lightweight IoT clients. IEEE Internet Things J. 6(2), 2354–2365 (2019) 24. Rathee, G., Balasaraswathi, M., Chandran, K.P., Gupta, S.D., Boopathi, C.S.: A secure IoT sensors communication in industry 4.0 using blockchain technology. J. Ambient Intell. Hum. Comput. 1–13 (2020) 25. Jia, B., Zhou, T., Li, W., Liu, Z., Zhang, J.: A blockchain-based location privacy protection incentive mechanism in crowd sensing networks. Sensors 18(11), 3894 (2018) 26. Tian, Y., Wang, Z., Xiong, J., Ma, J.: A blockchain-based secure key management scheme with trustworthiness in DWSNs. IEEE Trans. Industr. Inf. 16(9), 6193– 6202 (2020) 27. Mori, S.: Secure caching scheme by using blockchain for information-centric network-based wireless sensor networks. J. Signal Process. 22(3), 97–108 (2018) 28. Sharma, P.K., Park, J.H.: Blockchain based hybrid network architecture for the smart city. Futur. Gener. Comput. Syst. 86, 650–655 (2018) 29. Guerrero-Sanchez, A.E., Rivas-Araiza, E.A., Gonzalez-Cordoba, J.L., ToledanoAyala, M., Takacs, A.: Blockchain mechanism and symmetric encryption in a wireless sensor network. Sensors 20(10), 2798 (2020)
30. Rovira-Sugranes, A., Razi, A.: Optimizing the age of information for blockchain technology with applications to IoT sensors. IEEE Commun. Lett. 24(1), 183–187 (2019) 31. Sergii, K., Prieto-Castrillo, F.: A rolling blockchain for a dynamic WSNs in a smart city. arXiv preprint arXiv:1806.11399 (2018) 32. Shahbazi, Z., Byun, Y.C.: Towards a secure thermal-energy aware routing protocol in wireless body area network based on blockchain technology. Sensors 20(12), 3604 (2020) 33. Kim, T.H., et al.: A novel trust evaluation process for secure localization using a decentralized blockchain in wireless sensor networks. IEEE Access 7, 184133– 184144 (2019) 34. She, W., Liu, Q., Tian, Z., Chen, J.S., Wang, B., Liu, W.: Blockchain trust model for malicious node detection in wireless sensor networks. IEEE Access 7, 38947– 38956 (2019) 35. Xu, Y., Ren, J., Wang, G., Zhang, C., Yang, J., Zhang, Y.: A blockchain-based nonrepudiation network computing service scheme for industrial IoT. IEEE Trans. Industr. Inf. 15(6), 3632–3641 (2019) 36. Kumar, M.H., Mohanraj, V., Suresh, Y., Senthilkumar, J., Nagalalli, G.: Trust aware localized routing and class based dynamic block chain encryption scheme for improved security in WSN. J. Ambient Intell. Hum. Comput. 1–9 (2020) 37. Goyat, R., Kumar, G., Rai, M.K., Saha, R., Thomas, R., Kim, T.H.: Blockchain powered secure range-free localization in wireless sensor networks. Arab. J. Sci. Eng. 45(8), 6139–6155 (2020). https://doi.org/10.1007/s13369-020-04493-8 38. Rahman, A., Islam, M.J., Khan, M.S.I., Kabir, S., Pritom, A.I., Karim, M.R.: Block-SDoTCloud: enhacing security of cloud storage through blockchain-based SDN in IoT network (2020) 39. Rathore, S., Kwon, B.W., Park, J.H.: BlockSecIoTNet: blockchain-based decentralized security architecture for IoT network. J. Netw. Comput. Appl. 143, 167–177 (2019) 40. Lee, Y., Rathore, S., Park, J.H., Park, J.H.: A blockchain-based smart home gateway architecture for preventing data forgery. Hum.-Centric Comput. Inf. Sci. 10(1), 1–14 (2020). https://doi.org/10.1186/s13673-020-0214-5 41. Heinzelman, W.R., Chandrakasan, A., Balakrishnan, H.: Energy-efficient communication protocol for wireless microsensor networks. In: Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, p. 10. IEEE, January 2000
The Redundant Active Time-Based Algorithm with Forcing Meaningless Replica to Terminate

Tomoya Enokido1(B), Dilawaer Duolikun2, and Makoto Takizawa3

1 Faculty of Business Administration, Rissho University, 4-2-16, Osaki, Shinagawa-ku, Tokyo 141-8602, Japan [email protected]
2 Department of Advanced Sciences, Faculty of Science and Engineering, Hosei University, 3-7-2, Kajino-cho, Koganei-shi, Tokyo 184-8584, Japan
3 Research Center for Computing and Multimedia Studies, Hosei University, 3-7-2, Kajino-cho, Koganei-shi, Tokyo 184-8584, Japan [email protected]
Abstract. Reliable and available distributed application services can be provided by redundantly performing each application process. However, multiple replicas of each application process are performed on multiple virtual machines in a server cluster system. As a result, a server cluster consumes a large amount of electric energy to provide application services. In this paper, the RATB-FMRT (redundant active time-based with forcing meaningless replicas to terminate) algorithm is proposed to reduce the total electric energy consumption of a server cluster by forcing meaningless replicas to terminate. In the evaluation, we show that the total electric energy consumption of a server cluster and the average response time of each process can be reduced in the RATB-FMRT algorithm.

Keywords: The RATB-FMRT algorithm · Meaningless replicas · Virtual machines · Server cluster systems

1 Introduction
In information systems, various types of distributed application services are proposed and implemented by using virtual machines [1] supported by server cluster systems [2–5] like cloud computing systems [6]. In server cluster systems, multiple virtual machines are installed in each physical server and application processes are performed on virtual machines to more efficiently utilize the computation resources of each physical server. Physical servers in a server cluster might stop by fault. If a physical server stops by fault [7], every virtual machine performed on the physical server also stops and application processes performed on the virtual machines cannot correctly terminate. In order to provide reliable and available application services, each application process can be redundantly performed on multiple virtual machines in a server cluster [8–10]. On the other
hand, a large amount of electric energy is consumed in a server cluster system compared with non-redundant approaches since a larger number of application processes are performed on multiple virtual machines which are hosted on different physical servers. Hence, an energy-efficient process replication approach is required to provide reliable and available distributed application services by using a server cluster system, as discussed in Green computing. In our previous studies, the RATB (redundant active time-based) algorithm [11] was proposed to redundantly perform each computation type application process (computation process) on multiple virtual machines so that the total electric energy consumption of a server cluster can be reduced. In the RATB algorithm, each time a load balancer receives a request process, the load balancer selects multiple virtual machines and broadcasts the request process to the virtual machines. On receipt of the request process, a virtual machine creates and performs a replica of the request process. If a replica r1 of the request process successfully terminates on a virtual machine but another replica r2 is still performed on another virtual machine, the replica r2 is meaningless since the request process can commit without performing the meaningless replica r2. In this paper, the RATB-FMRT (redundant active time-based with forcing meaningless replicas to terminate) algorithm is newly proposed to provide an energy-efficient server cluster system for redundantly performing computation processes. Meaningless replicas are forced to terminate in the RATB-FMRT algorithm. As a result, the total electric energy consumption of a server cluster to redundantly perform computation processes can be more reduced in the RATB-FMRT algorithm than in the RATB algorithm. The evaluation results show that the total electric energy consumption of a server cluster and the response time of each computation process can be more reduced in the RATB-FMRT algorithm than in the RATB algorithm. In Sect. 2, the system model of this paper is discussed. In Sect. 3, the RATB-FMRT algorithm is proposed. In Sect. 4, we evaluate the RATB-FMRT algorithm compared with the RATB algorithm.
2 System Model

2.1 Server Cluster
A server cluster S is composed of multiple physical servers s_1, ..., s_n (n ≥ 1). A notation nc_t (nc_t ≥ 1) shows the total number of cores in a server s_t. Let C_t be a set of homogeneous cores c_1t, ..., c_{nc_t}t in a server s_t. Let ct_t (ct_t ≥ 1) be the total number of threads on each core c_ht in a server s_t and nt_t (nt_t ≥ 1) be the total number of threads in a server s_t, i.e. nt_t = nc_t · ct_t. A notation TH_t shows a set of threads th_1t, ..., th_{nt_t}t in a server s_t, and threads th_{(h−1)·ct_t+1}, ..., th_{h·ct_t} (1 ≤ h ≤ nc_t) are bound to a core c_ht. Each server s_t supports a set V_t of virtual machines VM_1t, ..., VM_{nt_t}t and each virtual machine VM_kt is exclusively performed on one thread th_kt in a server s_t. Computation type application processes (computation processes), which mainly consume CPU resources of a virtual machine VM_kt, are performed on each virtual machine
VM_kt. A term process stands for a computation process in this paper. Each time a load balancer K receives a request process p^i from a client cl^i, the load balancer K selects a subset VMS^i of virtual machines in the server cluster S and broadcasts the process p^i to every virtual machine VM_kt in the subset VMS^i to redundantly perform the request process p^i. On receipt of a request process p^i, a virtual machine VM_kt creates and performs a replica p^i_kt of the process p^i. On termination of a replica p^i_kt, the virtual machine VM_kt sends a reply r^i_kt to the load balancer K. The load balancer K takes only the first reply r^i_kt and ignores every other reply. Let NF be the maximum number of servers which concurrently stop by fault in the server cluster S and rd^i be the redundancy of a process p^i, i.e. rd^i = |VMS^i|. Here, NF + 1 ≤ rd^i ≤ n. A virtual machine VM_kt is active iff (if and only if) at least one replica is performed on the virtual machine VM_kt. A virtual machine VM_kt is idle iff the virtual machine VM_kt is not active. A core c_ht is active iff at least one virtual machine VM_kt is active on a thread th_kt in the core c_ht. A core c_ht is idle if the core c_ht is not active.

2.2 Computation Model to Perform Replicas
Each replica p^i_kt of a request process p^i is performed on a virtual machine VM_kt in a server cluster S. Replicas which are being performed at time τ are current. Let CP_kt(τ) be a set of current replicas being performed on a virtual machine VM_kt at time τ and NC_kt(τ) be |CP_kt(τ)|. A notation minT^i_kt shows the minimum computation time of a replica p^i_kt, where the replica p^i_kt is exclusively performed on a virtual machine VM_kt and the other virtual machines are idle in a server s_t. We assume minT^i_1t = ··· = minT^i_{nt_t}t in a server s_t and minT^i = minT^i_kt on the fastest server s_t. We assume one virtual computation step [vs] is performed for one time unit [tu] on a virtual machine VM_kt in the fastest server s_t. The maximum computation rate Maxf_kt of the fastest virtual machine VM_kt is 1 [vs/msec]. Maxf = max(Maxf_k1, ..., Maxf_kn). A replica p^i_kt is considered to be composed of VS^i_kt virtual computation steps: VS^i_kt = minT^i_kt · Maxf = minT^i_kt [vs]. The computation rate f^i_kt(τ) of a replica p^i_kt performed on a virtual machine VM_kt at time τ is given as follows [12]:

f^i_kt(τ) = α_kt(τ) · VS^i / (minT^i_kt · NC_kt(τ)) · β_kt(nv_kt(τ)).   (1)
Here, the computation degradation ratio α_kt(τ) of a virtual machine VM_kt at time τ (0 ≤ α_kt(τ) ≤ 1) is assumed to be ε_kt^(NC_kt(τ)−1), where 0 ≤ ε_kt ≤ 1. α_kt(τ1) ≤ α_kt(τ2) ≤ 1 if NC_kt(τ1) ≥ NC_kt(τ2). α_kt(τ) = 1 if NC_kt(τ) = 1. Let nv_kt(τ) be the total number of active virtual machines on a core which performs a virtual machine VM_kt at time τ. A notation β_kt(nv_kt(τ)) shows the performance degradation ratio of a virtual machine VM_kt at time τ (0 ≤ β_kt(nv_kt(τ)) ≤ 1) where multiple virtual machines are active on the same core. β_kt(nv_kt(τ)) = 1 if nv_kt(τ) = 1. β_kt(nv_kt(τ1)) ≤ β_kt(nv_kt(τ2)) if nv_kt(τ1) ≥ nv_kt(τ2). Suppose a replica p^i_kt starts and terminates on a virtual machine VM_kt at time st^i_kt and et^i_kt, respectively. Here, the total computation time T^i_kt [msec] of a replica p^i_kt is et^i_kt − st^i_kt and Σ_{τ=st^i_kt}^{et^i_kt} f^i_kt(τ) = VS^i [vs]. The computation laxity lc^i_kt(τ) [vs] of a replica p^i_kt at time τ is VS^i − Σ_{x=st^i_kt}^{τ} f^i_kt(x).
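As a quick illustration of Eq. (1), the following Python sketch (ours, with assumed parameter values) computes the computation rate of a replica from the quantities defined above.

# Computation rate f^i_kt(tau) of a replica, following Eq. (1).
# alpha_kt(tau) = eps_kt ** (NC_kt(tau) - 1) models contention among replicas
# on one virtual machine; beta_kt models contention among VMs on one core.

def computation_rate(vs_i, min_t_kt, nc_kt, eps_kt, beta_kt):
    alpha_kt = eps_kt ** (nc_kt - 1)              # computation degradation ratio
    return alpha_kt * vs_i / (min_t_kt * nc_kt) * beta_kt

# Example: one replica alone on the fastest VM runs at 1 [vs/msec].
assert computation_rate(vs_i=1.0, min_t_kt=1.0, nc_kt=1, eps_kt=1.0, beta_kt=1.0) == 1.0
# With two concurrent replicas (nc_kt=2, eps_kt=0.9) the rate drops below 0.5 [vs/msec].
print(computation_rate(vs_i=1.0, min_t_kt=1.0, nc_kt=2, eps_kt=0.9, beta_kt=1.0))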
2.3 Power Consumption Model of a Server
Notations maxE_t and minE_t show the maximum and minimum electric power [W] of a server s_t, respectively. Let E_t(τ) be the electric power [W] of a server s_t at time τ to perform replicas of computation processes on multiple virtual machines. Let ac_t(τ) be the total number of active cores in a server s_t at time τ and minC_t be the electric power [W] where at least one core c_ht is active on a server s_t. Let cE_t be the electric power [W] consumed by a server s_t to make one core active. In our previous studies, the PCSV (Power Consumption model of a Server with Virtual machines) model [13] to perform computation processes on virtual machines is proposed. The electric power E_t(τ) [W] of a server s_t to perform replicas of computation processes on virtual machines at time τ is given as follows [13]:

E_t(τ) = minE_t + σ_t(τ) · (minC_t + ac_t(τ) · cE_t).   (2)

Here, σ_t(τ) = 1 if at least one core c_ht is active on a server s_t at time τ; otherwise, σ_t(τ) = 0. The processing power PE_t(τ) [W] is E_t(τ) − minE_t at time τ in a server s_t. The total processing electric energy TPE_t(τ1, τ2) [J] of a server s_t from time τ1 to τ2 is Σ_{τ=τ1}^{τ2} PE_t(τ).
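The PCSV power model of Eq. (2) and the total processing electric energy can be sketched in Python as follows. The discrete one-time-unit accumulation is our simplification, and the example parameter values (14.8, 6.3, 3.9 W) are the server parameters used later in the evaluation.

# PCSV model sketch: electric power of a server at time tau (Eq. (2))
# and total processing electric energy accumulated over a time interval.

def server_power(min_e, min_c, c_e, active_cores):
    sigma = 1 if active_cores >= 1 else 0
    return min_e + sigma * (min_c + active_cores * c_e)

def total_processing_energy(min_e, min_c, c_e, active_cores_per_step):
    # active_cores_per_step[tau] gives ac_t(tau) for each discrete time unit.
    return sum(server_power(min_e, min_c, c_e, ac) - min_e
               for ac in active_cores_per_step)

power = server_power(14.8, 6.3, 3.9, 2)   # 14.8 + (6.3 + 2 * 3.9) = 28.9 W
assert abs(power - 28.9) < 1e-9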
3 Process Replication Algorithm

3.1 The Redundant Active Time-Based (RATB) Algorithm
The redundant active time-based (RATB) algorithm [11] is proposed in our previous studies to reduce the total electric energy consumption of a server cluster S and the average response time of each process for redundantly performing each computation process. The increased active time iACT_ht(τ) shows a period of time where a core c_ht is active to terminate every current process on every active virtual machine performed on threads allocated to the core c_ht at time τ. The RATB algorithm estimates the increased active time iACT_ht(τ) of each core c_ht in a server s_t at time τ based on the response time RT^i_kt of each replica p^i_kt performed on each virtual machine VM_kt in the server s_t. Let d_Kt be the delay time [msec] between a load balancer K and a server s_t. The minimum response time minRT^i_kt of a replica p^i_kt is calculated as minRT^i_kt = 2d_Kt + minT^i_kt · Maxf/Maxf_kt = 2d_Kt + 1 · 1/Maxf_kt = 2d_Kt + 1/Maxf_kt [msec], where the replica p^i_kt is exclusively performed on a virtual machine VM_kt and only VM_kt is active on a core c_ht in a server s_t. The total processing electric energy laxity tpel_t(τ) [J] shows how much electric energy a server s_t has to consume to perform every current replica on every active virtual machine in the server s_t at
time τ. In the PCSV model, the electric power E_t(τ) [W] of a server s_t at time τ depends on the number ac_t(τ) of active cores in the server s_t at time τ, as shown in Eq. (2). In the RATB algorithm, the total processing electric energy laxity tpel_t(τ) of a server s_t at time τ is estimated by the following Eq. (3):

tpel_t(τ) = minC_t + Σ_{h=1}^{nc_t} (iACT_ht(τ) · cE_t).   (3)
Let TPEL^i_kt(τ) be the total processing electric energy laxity of a server cluster S where a replica p^i_kt of a request process p^i is allocated to a virtual machine VM_kt performed on a server s_t at time τ. In the RATB algorithm, the virtual machine VM_kt whose total processing electric energy laxity TPEL^i_kt(τ) of the server cluster S is the minimum at time τ is selected for a replica p^i_kt of a request process p^i.
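The RATB selection rule can be written compactly as below; the estimator function is a hypothetical placeholder for the laxity computation of Eq. (3), and the example values are ours.

# RATB selection sketch: the load balancer picks, for each replica, the virtual
# machine that minimizes the estimated total processing electric energy laxity
# TPEL of the whole cluster. estimate_tpel() is an assumed estimator built on Eq. (3).

def select_vm(candidate_vms, estimate_tpel):
    return min(candidate_vms, key=estimate_tpel)

# Example with a toy estimator that assigns a fixed laxity to each VM.
laxity = {"vm_11": 42.0, "vm_21": 37.5, "vm_31": 40.2}
assert select_vm(laxity.keys(), laxity.get) == "vm_21"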
3.2 The RATB-FMRT Algorithm
Suppose a replica p^i_kt of a request process p^i successfully terminates on a virtual machine VM_kt but another replica p^i_ku is still being performed on another virtual machine VM_ku. The request process p^i can commit without performing the replica p^i_ku since the replica p^i_kt has already successfully terminated. A replica is modeled to be a sequence of operations.
[Definition]. A subsequence of a replica p^i_ku to be performed after another replica p^i_kt successfully terminates is a meaningless tail part.
We newly propose the RATB-FMRT (Redundant Active Time-Based with Forcing Meaningless Replicas to Terminate) algorithm to provide an energy-efficient server cluster system for redundantly performing computation processes. Each time a replica p^i_kt successfully terminates on a virtual machine VM_kt performed on a server s_t, the virtual machine VM_kt sends a termination notification TN(p^i_kt) message of the replica p^i_kt to both the load balancer K and every other virtual machine in the subset VMS^i. The termination notification TN(p^i_kt) includes a reply r^i_kt. On receipt of TN(p^i_kt) from a virtual machine VM_kt, the meaningless tail part of each replica is forced to terminate on each virtual machine VM_ku by the following procedure:

Force_Termination(TN(p^i_kt)) {
  if a replica p^i_ku is performing, p^i_ku is forced to terminate;
  else TN(p^i_kt) is neglected;
}

The total processing electric energy consumption of a server cluster can be more reduced in the RATB-FMRT algorithm than in the RATB algorithm by forcing the meaningless tail parts of replicas to terminate. In addition, the computation time of each replica can be more reduced in the RATB-FMRT algorithm than in the RATB algorithm since the computation resources of each virtual machine used to perform meaningless tail parts of replicas can be used to perform other replicas.
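One possible realization of the force-termination behavior on a virtual machine is sketched below with Python threading; the class and message names are assumptions, not part of the authors' implementation.

# Sketch of forcing a meaningless tail part to terminate on a virtual machine.
# The replica checks a cancellation flag between computation steps; when a
# termination notification TN(p^i_kt) for the same process arrives from another
# VM, the flag is set and the remaining (meaningless) steps are skipped.

import threading

class ReplicaRunner:
    def __init__(self):
        self.cancelled = threading.Event()

    def on_termination_notification(self, process_id):
        # Called when TN(p^i_kt) is received for a process this VM is running.
        self.cancelled.set()

    def run(self, steps):
        done = 0
        for _ in range(steps):
            if self.cancelled.is_set():
                return done          # the meaningless tail part is not performed
            done += 1                # perform one virtual computation step
        return done

runner = ReplicaRunner()
runner.on_termination_notification("p1")
assert runner.run(steps=1000) == 0   # forced to terminate before doing any work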
4 Evaluation
The RATB-FMRT algorithm is evaluated in terms of the total processing electric energy consumption [KJ] of a homogeneous server cluster S and the response time of each process p^i compared with the RATB algorithm [11]. A homogeneous server cluster S is composed of five servers s_1, ..., s_5 (n = 5). Parameters of every server s_t and virtual machine VM_kt are obtained from the experiment [13], as shown in Tables 1 and 2. Every server s_t is equipped with a dual-core CPU (nc_t = 2). Two threads are bound to each core in a server s_t (ct_t = 2) and the total number nt_t of threads in each server s_t is four. Hence, there are twenty virtual machines in the server cluster S. We assume the fault probability fr_t for every server s_t is the same, fr = 0.1.

Table 1. Homogeneous cluster S.

Server  nc_t  ct_t  nt_t  minE_t    minC_t   cE_t     maxE_t
s_t     2     2     4     14.8 [W]  6.3 [W]  3.9 [W]  33.8 [W]
Table 2. Parameters of virtual machine.

Virtual machine  Maxf_vt    ε_vt  β_vt(1)  β_vt(2)
VM_vt            1 [vs/ms]  1     1        0.6
The number m of processes p^1, ..., p^m (0 ≤ m ≤ 10,000) are issued. The starting time st^i of each process p^i is randomly selected in a unit of one millisecond [msec] between 1 and 3600 [ms]. The minimum computation time minT^i_vt of every replica p^i_vt is assumed to be 1 [ms]. The delay time d_Kt of every pair of a load balancer K and every server s_t is 1 [ms] in the server cluster S. The minimum response time minRT^i_vt of every replica p^i_vt is 2d_Kt + minT^i_vt = 2 · 1 + 1 = 3 [ms]. Figure 1 shows the average total processing electric energy consumption of the server cluster S to perform the number m of processes in the RATB-FMRT and RATB algorithms. In Fig. 1, RATB-FMRT(rd) and RATB(rd) stand for the average total processing electric energy consumption of the server cluster S in the RATB-FMRT and RATB algorithms with redundancy rd (= 1, 2, 3, 4, 5), respectively. If rd = 1, every request process p^i is not redundantly performed in the RATB-FMRT and RATB algorithms. Hence, the average total processing electric energy consumption to perform the number m of processes in the RATB-FMRT algorithm is the same as in the RATB algorithm. In the RATB-FMRT algorithm, the meaningless tail part of each replica is forced to terminate on every virtual machine VM_kt. As a result, the average total processing electric energy consumption to perform the number m of processes can be more reduced in the RATB-FMRT algorithm than in the RATB algorithm for rd ≥ 2.
Figure 2 shows the average response time of each process in the RATB-FMRT and RATB algorithms. If rd = 1, the average response time of each process in the RATB-FMRT algorithm is the same as in the RATB algorithm since every request process p^i is not redundantly performed in the RATB-FMRT and RATB algorithms. For rd ≥ 2, the average response time in the RATB-FMRT algorithm can be more reduced than in the RATB algorithm since the computation resources of each virtual machine used to perform meaningless tail parts of replicas can be used to perform other replicas. Following the evaluation, we conclude the RATB-FMRT algorithm is more useful in a homogeneous server cluster than the RATB algorithm.
Fig. 1. Total electric energy consumption.
Fig. 2. Average response time.

5 Concluding Remarks
In this paper, the RATB-FMRT algorithm was proposed to reduce the total electric energy consumption of a server cluster and the average response time of each process by forcing the meaningless tail parts of replicas to terminate when computation processes are redundantly performed. The evaluation results showed that the total electric energy consumption of a server cluster and the average response time of each process can be more reduced in the RATB-FMRT algorithm than in the RATB algorithm. Following the evaluation results, we showed the RATB-FMRT algorithm is more useful than the RATB algorithm in a homogeneous server cluster.
References 1. KVM: Main Page - KVM (Kernel Based Virtual Machine) (2015). http://www.linux-kvm.org/page/Main_Page 2. Enokido, T., Aikebaier, A., Takizawa, M.: Process allocation algorithms for saving power consumption in peer-to-peer systems. IEEE Trans. Ind. Electron. 58(6), 2097–2105 (2011) 3. Enokido, T., Aikebaier, A., Takizawa, M.: A model for reducing power consumption in peer-to-peer systems. IEEE Syst. J. 4(2), 221–229 (2010)
4. Enokido, T., Aikebaier, A., Takizawa, M.: An extended simple power consumption model for selecting a server to perform computation type processes in digital ecosystems. IEEE Trans. Ind. Inf. 10(2), 1627–1636 (2014) 5. Enokido, T., Takizawa, M.: Integrated power consumption model for distributed systems. IEEE Trans. Ind. Electron. 60(2), 824–836 (2013) 6. Natural Resources Defense Council (NRDS): Data center efficiency assessment - scaling up energy efficiency across the data center industry: evaluating key drivers and barriers (2014). http://www.nrdc.org/energy/files/data-centerefficiency-assessment-IP.pdf 7. Lamport, R., Shostak, R., Pease, M.: The Byzantine generals problems. ACM Trans. Program. Lang. Syst. 4(3), 382–401 (1982) 8. Schneider, F.B.: Replication management using the state-machine approach. In: Distributed systems, 2nd edn. ACM Press, pp. 169–197 (1993) 9. Enokido, T., Aikebaier, A., Takizawa, M.: An energy-efficient redundant execution algorithm by terminating meaningless redundant processes. In: Proceedings of the 27th IEEE International Conference on Advanced Information Networking and Applications (AINA-2013), pp. 1–8 (2013) 10. Enokido, T., Aikebaier, A., Takizawa, M.: Evaluation of the extended improved redundant power consumption laxity-based (EIRPCLB) algorithm. In: Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications (AINA-2014), pp. 940–947 (2014) 11. Enokido, T., Duolikun, D., Takizawa, M.: An energy-efficient process replication algorithm based on the active time of cores. In: Proceedings of the 32nd IEEE International Conference on Advanced Information Networking and Applications (AINA-2018), pp. 165–172 (2018) 12. Enokido, T., Takizawa, M.: An energy-efficient process replication algorithm in virtual machine environments. In: Barolli, L., Xhafa, F., Yim, K. (eds.) BroadBand Wireless Computing, Communication and Applications, vol. 2, pp. 105–114. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49106-6 10 13. Enokido, T., Takizawa, M.: Power consumption and computation models of virtual machines to perform computation type application processes. In: Proceedings of the 9th International Conference on Complex, Intelligent and Software Intensive Systems (CISIS-2015), pp. 126–133 (2015)
A Novel Approach to Network's Topology Evolution and Robustness Optimization of Scale Free Networks

Muhammad Usman1, Nadeem Javaid1(B), Syed Minhal Abbas1, Muhammad Mohsin Javed1, Muhammad Aqib Waseem2, and Muhammad Owais1

1 COMSATS University Islamabad, Islamabad 44000, Pakistan
2 Bahauddin Zakariya University, Multan 60000, Pakistan
Abstract. The Internet of Things (IoT) is growing rapidly due to its involvement in many applications, such as electric grids, biological networks, transport networks, etc. In complex network theory, the model based on Scale Free Networks (SFNs) is more suitable for IoT. The SFNs are robust against random attacks; however, they are vulnerable to malicious attacks. Furthermore, as the size of a network increases, its robustness decreases. Therefore, in this paper, we propose a novel topology evolution approach to enhance the robustness of SFNs. Initially, we divide the network area into upper and lower parts. The nodes are deployed equally in both parts and connected via one-to-many correspondence. The distribution is made because small sized networks are more robust against malicious attacks. Moreover, we use k-core decomposition to calculate the hierarchical changes in the nodes' degree. In addition, core-based and degree-based attacks are performed to analyze the robustness of SFNs. For the network optimization, we compare the Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) and Bacterial Foraging Algorithm (BFA). In the optimization process, a node-distance based edge swap is performed to create long links in the network because these links make the network more robust.
1 Introduction
The Wireless Sensor Networks (WSNs) have various applications in electric grids [1, 2], transportation [3], military [4, 5], healthcare [6, 7], underwater monitoring, etc. In the WSNs, sensor nodes collect environmental information, including temperature, moisture, radiation, air quality, etc. [8]. This information is then transmitted to the central control unit, i.e., a base station or a hub, to make important decisions about the network. When the sensor nodes join the network, the WSNs become an integrated part of the Internet of Things (IoT). The sensor nodes in the IoT network are properly arranged in the form of a topology [9]. The construction of network topology is based on graph theory, where nodes are considered as vertices and the connections between them as edges. In complex networks, Small World Networks (SWNs) [10] and Scale Free Networks (SFNs) [11] are two models mostly used in the IoT [12]. In SWNs the nodes have a small average shortest path length and a high clustering coefficient, whereas in SFNs the nodes' degree follows a power-law distribution. The power-law proves that the SFNs have a small number of high
degree nodes and a large number of low degree nodes. The nodes in the network can be classified into two categories: homogeneous and heterogeneous. The homogeneous nodes have the same communication range, bandwidth and energy, whereas these characteristics vary in heterogeneous nodes. Homogeneous nodes are generally considered in the study of SFNs. The SFNs are generated by following the famous Barabási Albert (BA) model [11]. The model consists of two processes: growth and preferential attachment. The network grows with the addition of nodes asynchronously and the preferential attachment defines that a new node connects with the high degree nodes in the network. Recently, there has been an exponential increase in the number of IoT devices; therefore, the IoT networks are becoming dense. Consequently, these networks are becoming vulnerable to failures or attacks [13]. Attacks on SFNs are mainly classified as random and malicious. In random attacks, a node is removed randomly from the network, whereas malicious attacks target the important nodes [14]. The importance of the nodes is usually measured based on their degree and the attacks on them are termed High Degree Adaptive (HDA) attacks. The SFNs are robust against random attacks; however, they are vulnerable to malicious attacks. Robustness is the ability of a network to resist attacks. Many methods are available to measure the robustness, such as conditional connectivity, the Laplacian matrix [15], Natural Connectivity (NC) [16], etc. However, these measures have high computational cost; therefore, Schneider et al. [17] proposed a robustness measure based on percolation theory. It states that when attacks happen on the high degree nodes, the network fragments into multiple subgraphs. The robustness R is calculated as:
R = (1/N) · Σ_{n=0}^{N−1} MCS_n / (N − 1)   (1)
where N is the total number of nodes in a network, 1/(N − 1) is a normalization factor, MCS_n is the maximum connected subgraph after the nth high degree node is removed, and the summation means that the node removal after each attack is considered. The robustness is greatly reduced after the removal of important nodes or edges [18] from the network. Their importance is usually based on degree and betweenness centrality. The edge degree is calculated as:

d_ij = k_i ∗ k_j   (2)

Here, d_ij is the degree of the edge, and k_i and k_j are the degrees of nodes i and j, respectively. To increase the robustness of a network, the easy approach is to add edges between nodes [19, 20]. However, it is practically not possible to add edges among all the nodes because of the cost constraint. The network should be optimized to achieve high robustness without adding cost. Therefore, independent edges are required in the process of optimization. Two edges are considered to be independent if they are in the communication range of each other and they are adjacent to each other. The robustness of the network is calculated after performing the edge swap. If the robustness increases, then the edge swap is accepted and the topology is updated. Otherwise, the next pair of independent edges is searched and the same process is repeated.
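As a rough illustration of how the attack-based robustness of Eq. (1) can be computed, the following Python sketch uses networkx. The handling of the final removals and the exact normalization follow our reading of Eq. (1) and are assumptions, not the authors' implementation.

# Robustness R under a high degree adaptive (HDA) attack, sketched with networkx.
# The largest connected component is recorded after each removal of the
# currently highest degree node, and the sizes are summed as in Eq. (1).

import networkx as nx

def robustness_hda(G):
    G = G.copy()
    n = G.number_of_nodes()
    total = 0
    for _ in range(n - 1):
        target = max(G.degree, key=lambda pair: pair[1])[0]   # highest degree node
        G.remove_node(target)
        total += len(max(nx.connected_components(G), key=len))
    return total / (n * (n - 1))

# Example on a small scale free network generated with the BA model.
ba = nx.barabasi_albert_graph(100, 2, seed=1)
print("R =", robustness_hda(ba))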
For SFNs, the onion-like structure has been proved to be robust against malicious attacks [21]. Its core consists of high degree nodes surrounded by rings of nodes whose degree decreases hierarchically. Each ring contains nodes of the same degree. A perfect onion-like structure indicates that the network robustness is enhanced. However, a long tail of low degree nodes is formed that affects the robustness of the network. Considering the importance of SFNs, the following contributions are presented in this paper:
• to increase the robustness, small sized networks are evolved, because they are more robust against malicious attacks,
• networks are connected by one-to-many correspondence, i.e., a node in network A is linked with more than one node of network B and vice versa, and the degree distribution of these links follows a power-law,
• the nodes' degree in an onion-like structure changes hierarchically; therefore, it is calculated by k-core decomposition,
• edges are swapped to make long links in the network, because the existence of these links makes the network robust against malicious attacks; a high degree node connects with a node that has a low degree and lies farther away than the other neighboring nodes,
• the Genetic Algorithm (GA), Artificial Bee Colony (ABC) and Bacterial Foraging Optimization (BFO) algorithms are used, and we select the one with the best performance.
The rest of the paper is organized as follows: related work is presented in Sect. 2. Section 3 describes the edge swap mechanism used to enhance the robustness. Scale free network topology evolution and robustness optimization are demonstrated in Sect. 4. In the last Sect. 5, the conclusion and future work are given.
2 Related Work
The SFNs are vulnerable to malicious attacks, and many real world networks have a scale free nature. Therefore, to resist attacks, these networks need to be optimized to have a proper structure. The global edge swap based Hill Climbing (HC) algorithm in [21] enhances the robustness; however, HC gets trapped in local optima. The local optima problem is solved by introducing a local edge swap [22]. For the construction of WSNs, the limited communication range of the nodes is addressed in [23]. For SFNs, the onion-like structure considers the importance of the node's degree in rings; therefore, nodes of the same degree need to be connected with each other. Furthermore, when a node is removed, its respective edges should be used to enhance the topological parameters of the network. By increasing the number of nodes in the Maximum Connected Subgraph (MCS), the robustness can be increased [24].
The optimization of SFNs is an NP-hard problem because of the large number of edges in a network. Heuristic algorithms, GA among them, are used to solve such problems. The classical GA often converges to a sub-optimal solution, which is called premature convergence. It happens due to low population diversity. In optimization, the global optimal solution is required; therefore, [25, 26] deal with the premature convergence problem of GA. Multiple populations are used to achieve high diversity; however, the computation cost increases due to the operations on multiple populations. Therefore, [27] solves the premature convergence with less computation cost by self-competition among individuals. The same problem is solved using a local search operation by the authors in [28]. On the other hand, to obtain the optimal solution quickly, an algorithm is proposed in [29]. During the evolution of the network against attacks, the fault probability is usually not considered; a fault probability and preferential attachment based network evolution is proposed in [30].
In optimizing SFNs, a single objective has been considered so far. However, a network that is optimized against node attacks collapses when a link attack happens and vice versa. The SFNs are vulnerable to malicious attacks; therefore, to optimize them against attacks on both high degree nodes and links, Multi-Objective Optimization (MOO) is required. The authors in [18] proposed a MOO algorithm that consists of two phases: sampling and optimization. The sampling phase is used to generate a diverse population and the optimization phase is used to enhance the robustness. Furthermore, only the robustness of undirected networks has been discussed so far; however, directedness is also an important network feature. By considering the directedness of the network, two important variables, the emergence of cooperation and controllability robustness, are used to define the resilience of a network against different attacks. Moreover, no practical approach was available to understand the correlation between a network's topology features and its robustness by considering controllability, and it is impossible to define this relation from theoretical analysis alone. Therefore, to control networks and better utilize them, a practical approach is proposed in [31].
Although all of the above algorithms improve the robustness of networks, they have high computation costs, and due to this, self-optimization is not possible. Therefore, an Artificial Intelligence (AI) based robustness optimization technique is proposed in [32]. Back propagation is used to find the optimal solution; however, it is not suitable for networks of different sizes and edge densities.
3 Edge Swap
The SFN topology is represented as a graph G = (V, E), where the set of N nodes is represented as vertices V = {1, 2, ..., N} and the set of M edges is given by E = {e_mp | m, p ∈ V and m ≠ p}. The edge swap is used because the nodes' degree remains the same. To perform the edge swap, the edges e_mp and e_no should be independent. To prove the independence of the edges, they should satisfy two conditions:
• nodes m, n, o and p should be in the communication range of each other,
• there is no extra edge between the nodes except e_mp and e_no.
The edge swap is illustrated in Fig. 1. In the original network topology, shown in Fig. 1(a), the nodes m, n, o, and p fulfill the independent edge conditions. We have two alternative connections, as shown in Fig. 1(b) and Fig. 1(c). The idea behind this edge swap is to enhance the robustness of the network. After performing the edge swap, the robustness is calculated. If the robustness is improved, the edge swap is accepted; otherwise, a new independent edge pair is found. When edges are considered in an edge
swap, they are marked so that they cannot be considered in the next edge swap process. By marking edges, the number of redundant operations is reduced. Considering the importance of the edge swap, two types of edge swaps are performed: 1. random edge swap and 2. degree based edge swap.
3.1 Random Edge Swap
In the random edge swap, edges are randomly selected in the network and the swap is made. Since the network structure consists of different topological features and some nodes are more densely connected than the rest of the network, the random edge swap may affect the network structure. Moreover, in SFNs there are more low degree nodes; therefore, the probability of selecting edges of low degree nodes is high.
Fig. 1. Edge swap mechanism
3.2 Degree Based Edge Swap
In the degree based edge swap, two high degree nodes are selected in the network. Afterward, their neighboring nodes that have a low degree are selected. These nodes must be different so that they follow the independent edge conditions. The degree based edge swap connects nodes of similar degree. Since the onion-like structure consists of rings that contain nodes of the same degree, the existence of edges between same degree nodes enhances the robustness.
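The two swap operations of Sects. 3.1 and 3.2 can be sketched as follows. This is a simplified illustration in which the independence check only verifies that the four nodes are distinct and that the new edges do not already exist, while the communication-range condition of Sect. 3 is assumed to hold.

```python
import random
import networkx as nx

def random_swap(G):
    """Random edge swap (Sect. 3.1): pick two disjoint edges and rewire them."""
    (m, p), (o, n) = random.sample(list(G.edges), 2)
    if len({m, n, o, p}) < 4 or G.has_edge(m, o) or G.has_edge(n, p):
        return False                      # not an independent edge pair
    G.remove_edges_from([(m, p), (o, n)])
    G.add_edges_from([(m, o), (n, p)])
    return True

def degree_based_swap(G, hubs=10):
    """Degree based edge swap (Sect. 3.2): connect two high degree nodes and,
    symmetrically, their low degree neighbours."""
    high = sorted(G.nodes, key=G.degree, reverse=True)[:hubs]
    m, o = random.sample(high, 2)
    p = min(G[m], key=G.degree)           # low degree neighbour of m
    n = min(G[o], key=G.degree)           # low degree neighbour of o
    if len({m, n, o, p}) < 4 or G.has_edge(m, o) or G.has_edge(n, p):
        return False
    G.remove_edges_from([(m, p), (o, n)])
    G.add_edges_from([(m, o), (n, p)])    # hub-hub and leaf-leaf edges
    return True
```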
4 Scale Free Networks Topology Evolution and Robustness Optimization
In this section, the complete process of network topology evolution is discussed. The proposed network topology provides solutions to the limitations listed in Table 1. Moreover, the degree of each node is determined based on k-core decomposition in order to capture the hierarchical changes of degree in the network. Furthermore, the optimization of the network against attacks is discussed.
4.1 Network Topology Evolution
A network having a small number of nodes is more robust against malicious attacks [23, 26, 28]. Therefore, to generate a robust network topology, the nodes are distributed equally into two parts. In each part, the networks are evolved by considering the power-law distribution. There are two ways to connect both parts of the network: one-to-one correspondence and one-to-many correspondence. In one-to-one correspondence, each node of network A is connected with one node of network B. In one-to-many correspondence, a node in network A is connected with more than one node of network B and vice versa. One-to-many correspondence is preferred because it makes the network more robust [33].
Fig. 2. Network topology evolution
In Fig. 2, the network topology evolution is shown. The dotted line shows the division of the network. In both parts, an equal number of nodes is randomly deployed. The blue nodes denote network A (NA), the black nodes denote network B (NB), and NM denotes the mutual nodes of both networks. The black solid lines represent the connectivity links (CL), while the dotted lines are used for the mutual links (ML). Both the upper and lower parts form a network by synchronously adding edges for each node.
4.2 Nodes Degree Distribution Based on K-Core
In the network, the nodes have different degrees. Due to the power-law distribution in SFNs, there are more low degree nodes than high degree nodes. Therefore, to find the hierarchical change of the nodes' degree in SFNs, k-core decomposition, as shown in Fig. 2, is performed. Different rings represent the existence of nodes of different degrees. In the k-core decomposition, node removal starts with the low degree nodes, and these nodes are placed in C4. After that, the degree of the remaining nodes is recalculated and the low degree nodes are placed in C3. The process continues until the highest degree nodes are removed from the network. In Fig. 2, C1 is the highest core, consisting of the most important nodes based on degree.
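A short sketch of this decomposition step is given below, assuming NetworkX's core_number routine as the k-core decomposition; the test graph and ring printout are only illustrative.

```python
import networkx as nx
from collections import defaultdict

G = nx.barabasi_albert_graph(100, 2, seed=1)

# core_number[v] is the largest k such that node v belongs to the k-core;
# the innermost core corresponds to C1 in Fig. 2
core = nx.core_number(G)

rings = defaultdict(list)
for node, k in core.items():
    rings[k].append(node)

# list the rings from the innermost (highest k) outwards
for k in sorted(rings, reverse=True):
    print(f"core {k}: {len(rings[k])} nodes")
```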
4.3 Attacks on the Designed Topology
The attackers have complete information about the network and can launch new attacks to paralyze it, so the defenders should take measures against these attacks to make the network robust. Therefore, malicious attacks based on the inner core nodes and on the nodes' degree in the rings are considered. At first, the inner core nodes are removed because they have more influence on the network. In Fig. 3, the attack on the inner core nodes is shown; the red nodes (NR) are removed from the network. Due to the better topology evolution, the network initially remains connected after the removal of the high degree nodes, so multiple attacks are required to fragment the network.
Fig. 3. Attack based on inner core nodes
After the attacks are made on the core nodes, the network is divided into multiple subgraphs. The high degree nodes present in the subgraphs, as shown in Fig. 4, are removed and the robustness is calculated. Red nodes represent the removed inner core nodes and the white shaded area represents the MCS. Yellow nodes are the highest degree nodes in their respective subgraphs. To increase the number of nodes in the MCS, the edge swap is performed at the outer core nodes, because after the removal of high degree nodes, long tails of low degree nodes are created. Swapping the edges of low degree nodes increases the number of nodes in the MCS. In the proposed model, edges are swapped without changing the nodes' degree, so the cost remains the same in that operation. The random edge swap increases the computational cost due to redundant operations; therefore, the number of edge swaps should be kept to a minimum. Thus, to increase the robustness, it is more important that the topology is evolved well in the first place than that it is improved afterwards through edge swaps. The degree based attack on the rings enables the attackers to remove a specific portion of the network. Due to the existence of long tails, node removal using this approach has a lower computational cost.
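The attack sequence described above can be simulated roughly as follows; the number of rounds, the test graph and the use of core_number are assumptions made for illustration only.

```python
import networkx as nx

def core_then_degree_attack(G, rounds=5):
    """First remove the innermost-core nodes, then repeatedly remove the
    highest degree node of each remaining subgraph, reporting the size of
    the maximum connected subgraph (MCS) after every round."""
    H = G.copy()
    core = nx.core_number(H)
    kmax = max(core.values())
    H.remove_nodes_from([v for v, k in core.items() if k == kmax])
    for i in range(rounds):
        targets = [max(H.subgraph(c).degree, key=lambda kv: kv[1])[0]
                   for c in nx.connected_components(H)]
        H.remove_nodes_from(targets)
        mcs = max((len(c) for c in nx.connected_components(H)), default=0)
        print(f"round {i + 1}: MCS size = {mcs}")

core_then_degree_attack(nx.barabasi_albert_graph(200, 2, seed=1))
```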
4.4 Robustness Optimization of the Network by Heuristic Algorithms
Due to the premature convergence of GA and the high computational cost required to solve the problem, two other heuristic algorithms, Artificial Bee Colony (ABC) and Bacterial Foraging Optimization (BFO), are used. In both of these algorithms, a random position change is required to find the global optimal solution in the search space. In SFNs, a random position change is not possible; therefore, a degree based edge swap and a random edge swap are made instead. When the operators in these algorithms improve the robustness, the degree based edge swap is performed. However, when they get trapped in local optima, the random edge swap is performed [21].
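A simplified, single-solution version of this switching strategy is sketched below; it reuses the robustness() and swap helpers from the earlier sketches and is only meant to illustrate the accept-if-improved rule, not the full ABC or BFO algorithms.

```python
def optimize(G, iterations=200, patience=10):
    """Apply degree based swaps while they improve R; after `patience`
    consecutive failures, apply a random swap to escape the local optimum.
    Assumes robustness(), degree_based_swap() and random_swap() from the
    previous sketches."""
    best = robustness(G)
    stalled = 0
    for _ in range(iterations):
        H = G.copy()
        moved = degree_based_swap(H) if stalled < patience else random_swap(H)
        if not moved:
            continue
        r = robustness(H)
        if r > best:              # accept only improving swaps
            G, best, stalled = H, r, 0
        else:
            stalled += 1
    return G, best
```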
Fig. 4. Attack based on high degree nodes that are part of MCS
Table 1. Mapping of the identified limitations with proposed solutions and their validations
L1: Large sized networks are more vulnerable to malicious attacks. S1: The proposed topology evolution technique makes small networks. V1: The network is divided into two parts; in each part, the topology is evolved [11].
L2: The interdependent links do not follow the power-law [33]. S2: Using the interdependent links concept, the networks are connected following the power-law. V2: Through the degree distribution of the mutual nodes, the power-law is validated.
L3: There are no predefined criteria to know how the nodes' degree changes in the onion-like structure. S3: k-core decomposition is used to find the same degree nodes in the network. V3: After the network deployment, nodes are removed based on the degree and same degree nodes are connected with each other.
L4: Edge swap using the Degree Difference Operation (DDO) increases redundant operations. S4: The edge swap is done on the basis of the distance between nodes, because the long links make the network more robust. V4: The robustness of the network is calculated after performing the distance based edge swap.
L5: Premature convergence of GA. S5: Optimization is done by ABC and BFO. V5: For different network structures, the robustness is calculated.
5 Conclusion
The SFNs have become attractive due to their ability to resist random attacks. However, they are vulnerable to malicious attacks. Therefore, this paper studies the importance of topology design for enhancing the robustness of SFNs. Small sized networks are more robust against malicious attacks than large scale networks; therefore, the network is evolved by dividing the total number of nodes equally into two parts. The power-law distribution is followed in both parts. After that, the networks are connected using one-to-many correspondence. Then, random and malicious attacks are performed on both parts. The network remains robust because, when nodes are removed in one part, the second part of the network is still connected. The effect of attacks on the proposed network is considered and the network is optimized by GA, ABC and BFO. The experimental results show that the network robustness against malicious attacks is increased. Furthermore, the onion-like structure has high degree nodes at the center of the network, and these are identified and removed using the k-core decomposition. The high degree attack is more damaging to the network than the core based attack. In the future, the application of the proposed scheme to synthetic and real-world networks will be considered, and the network will be optimized against both core based and degree based attacks.
References 1. Abdulwahid, A.H.: Power grid surveillance and control based on wireless sensor network technologies: review and future directions. In: Journal of Physics: Conference Series, vol. 1773, p. 012004. IOP Publishing (2021) 2. Al-Anbagi, I., Erol-Kantarci, M., Mouftah, H.T.: A delay mitigation scheme for WSN-based smart grid substation monitoring. In: 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 1470–1475. IEEE (2013) 3. Dubey, J.R., Bhavsar, A.R.: WSN-based driver cabinet monitoring system for the fleet of long-route vehicles. In: Kotecha, K., Piuri, V., Shah, H.N., Patel, R. (eds.) Data Science and Intelligent Applications. LNDECT, vol. 52, pp. 499–508. Springer, Singapore (2021). https:// doi.org/10.1007/978-981-15-4474-3 55 4. Djuriˇsi´c, M.P., Tafa, Z., Dimi´c, G., Milutinovi´c, V.: A survey of military applications of wireless sensor networks. In: 2012 Mediterranean Conference on Embedded Computing (MECO), pp. 196–199. IEEE (2012) 5. Ismail, M.N., Shukran, M.A., Isa, M.R.M., Adib, M., Zakaria, O.: Establishing a soldier wireless sensor network (WSN) communication for military operation monitoring. Int. J. Inform. Commun. Technol. 7(2), 89–95 (2018) 6. Zhang, Y., Sun, L., Song, H., Cao, X.: Ubiquitous WSN for healthcare: recent advances and future prospects. IEEE Internet Things J. 1(4), 311–318 (2014) 7. Fahmy, H.M.A.: WSN applications. Concepts, Applications, Experimentation and Analysis of Wireless Sensor Networks. SCT, pp. 67–232. Springer, Cham (2021). https://doi.org/10. 1007/978-3-030-58015-5 3 8. Ye, D., Gong, D., Wang, W.: Application of wireless sensor networks in environmental monitoring. In: 2009 2nd International Conference on Power Electronics And Intelligent Transportation System (PEITS), vol. 1, pp. 205–208. IEEE (2009)
9. De Masi, G.: The impact of topology on internet of things: a multidisciplinary review. In: 2018 Advances in Science and Engineering Technology International Conferences (ASET), pp. 1–6. IEEE (2018) 10. Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393(6684), 440–442 (1998) 11. Barab´asi, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999) 12. Sohn, I.: Small-world and scale-free network models for IoT systems. Mobile Inf. Syst. 2017 (2017) 13. Almogren, A., Khan, A.U., Almajed, H., Mohiuddin, I.: An adaptive enhanced differential evolution strategies for topology robustness in internet of things 14. Javaid, N.: Attack resistance based topology robustness of scale-free internet of things for smart cities talha naeem qureshi1, nadeem javaid, ahmad almogren, zain abubaker1, hisham almajed2, irfan mohiuddin2 15. Merris, R.: Laplacian matrices of graphs: a survey. Linear Algebra Appl. 197, 143–176 (1994) 16. Harary, F.: Conditional connectivity. Networks 13(3), 347–357 (1983) 17. Schneider, C.M., Moreira, A.A., Andrade, J.S., Havlin, S., Herrmann, H.J.: Mitigation of malicious attacks on networks. Proc. Natl. Acad. Sci. 108(10), 3838–3841 (2011) 18. Zhou, M., Liu, J.: A two-phase multiobjective evolutionary algorithm for enhancing the robustness of scale-free networks against multiple malicious attacks. IEEE Trans. Cybern. 47(2), 539–552 (2016) 19. Beygelzimer, A., Grinstein, G., Linsker, R., Rish, I.: Improving network robustness by edge modification. Phys. A 357(3–4), 593–612 (2005) 20. Jiang, Z.Y., Liang, M.G., Improving the network load balance by adding an edge. In: Advanced Materials Research, vol. 433, pp. 5147–5151. Trans Tech Publ (2012) 21. Herrmann, H.J., Schneider, C.M., Moreira, A.A., Andrade Jr., J.S., Havlin, S.: Onion-like network topology enhances robustness against malicious attacks. J. Stat. Mech.: Theory Exp. 2011(01), P01027 (2011) 22. Buesser, P., Daolio, F., Tomassini, M.: Optimizing the robustness of scale-free networks ˇ with simulated annealing. In: Dobnikar, A., Lotriˇc, U., Ster, B. (eds.) ICANNGA 2011. LNCS, vol. 6594, pp. 167–176. Springer, Heidelberg (2011). https://doi.org/10.1007/9783-642-20267-4 18 23. Qiu, T., Zhao, A., Xia, F., Si, W., Wu, D.O.: ROSE: robustness strategy for scale-free wireless sensor networks. IEEE/ACM Trans. Netw. 25(5), 2944–2959 (2017) 24. Rong, L., Liu, J.: A heuristic algorithm for enhancing the robustness of scale-free networks based on edge classification. Phys. A 503, 503–515 (2018) 25. Qiu, T., Liu, J., Si, W., Han, M., Ning, H., Atiquzzaman, M.: A data-driven robustness algorithm for the internet of things in smart cities. IEEE Commun. Mag. 55(12), 18–23 (2017) 26. Qiu, T., Liu, J., Si, W., Wu, D.O.: Robustness optimization scheme with multi-population co-evolution for scale-free wireless sensor networks. IEEE/ACM Trans. Netw. 27(3), 1028– 1042 (2019) 27. Qiu, T., Lu, Z., Li, K., Xue, G., Wu, D.O.: An adaptive robustness evolution algorithm with self-competition for scale-free internet of things. In: IEEE INFOCOM 2020-IEEE Conference on Computer Communications, pp. 2106–2115. IEEE (2020) 28. Zhou, M., Liu, J.: A memetic algorithm for enhancing the robustness of scale-free networks against malicious attacks. Phys. A 410, 131–143 (2014) 29. Deng, Z., Xu, J., Song, Q., Hu, B., Wu, T., Huang, P.: Robustness of multi-agent formation based on natural connectivity. Appl. Math. Comput. 366, 124636 (2020) 30. 
Shihong, H., Li, G.: TMSE: a topology modification strategy to enhance the robustness of scale-free wireless sensor networks. Comput. Commun. 157, 53–63 (2020)
31. Lou, Y., Wang, L., Tsang, K.-F., Chen, G.: Towards optimal robustness of network controllability: An empirical necessary condition. IEEE Trans. Circuits Syst. I Regul. Pap. 67(9), 3163–3174 (2020) 32. Chen, N., Qiu, T., Zhou, X., Li, K., Atiquzzaman, M.: An intelligent robust networking mechanism for the internet of things. IEEE Commun. Mag. 57(11), 91–95 (2019) 33. Cui, P., Zhu, P., Wang, K., Xun, P., Xia, Z.: Enhancing robustness of interdependent network by adding connectivity and dependence links. Phys. A 497, 185–197 (2018)
Implementation of an Indoor Position Detecting System Using Mean BLE RSSI for Moving Omnidirectional Access Point Robot
Atushi Toyama1, Kenshiro Mitsugi1, Keita Matsuo2(B), Elis Kulla3, and Leonard Barolli2
1 Graduate School of Engineering, Fukuoka Institute of Technology (FIT), 3-30-1 Wajiro-Higashi, Higashi-Ku 811-0295, Fukuoka, Japan {mgm20105,mgm20108}@bene.fit.ac.jp
2 Department of Information and Communication Engineering, Fukuoka Institute of Technology (FIT), 3-30-1 Wajiro-Higashi, Higashi-Ku 811-0295, Fukuoka, Japan {kt-matsuo,barolli}@fit.ac.jp
3 Okayama University of Science (OSU), 1-1 Ridaicho, Kita-ku, Okayama 700-0005, Japan [email protected]
Abstract. Recently, various communication technologies have been developed in order to satisfy the requirements of many users. In particular, mobile communication technology continues to develop rapidly, and Wireless Mesh Networks (WMNs) are attracting attention from many researchers as a means to provide cost efficient broadband wireless connectivity. The main issue of WMNs is to improve network connectivity and stability in terms of user coverage. In this paper, we introduce a moving omnidirectional access point robot (called MOAP robot) and propose an indoor position detecting system using mean BLE RSSI for the MOAP robot. In order to realize a moving Access Point (AP), the MOAP robot should move omnidirectionally in a two-dimensional space. It is important that the MOAP robot moves to an accurate position in order to have good connectivity. Thus, the MOAP robot can provide good communication and stability for WMNs.
1 Introduction
Recently, communication technologies have been developed in order to satisfy the requirements of many users. In particular, mobile communication technologies continue to develop rapidly and have facilitated the use of laptops, tablets and smart phones in public spaces [4]. In addition, Wireless Mesh Networks (WMNs) [1] are becoming an important network infrastructure. These networks are made up of wireless nodes organized in a mesh topology, where mesh routers are interconnected by wireless links and provide Internet connectivity to mesh clients. WMNs are attracting attention from many researchers in order to provide cost efficient broadband wireless connectivity. The main issue of WMNs is to improve network connectivity and stability in terms of user coverage. This problem is very closely related to the family of node placement problems in WMNs [5, 9, 11]. In these papers, it is assumed that routers move by themselves or by using the mobility models of a network simulator.
In this paper, we propose an indoor position detecting system using mean BLE RSSI for the MOAP robot. We consider a moving robot as a network device. In order to realize a moving Access Point (AP), we implemented a moving omnidirectional access point robot (called MOAP robot). It is important that the MOAP robot moves to an accurate position in order to have good connectivity. Thus, the MOAP robot can provide good communication and stability for WMNs.
The rest of this paper is structured as follows. In Sect. 2, we introduce the related work. In Sect. 3, we present our implemented moving omnidirectional access point robot. In Sect. 4, we propose the position detecting system using BLE RSSI and show the experimental environment. In Sect. 5, we show the experimental results. Finally, conclusions and future work are given in Sect. 6.
2 Related Work
Many different techniques have been developed to solve the problem of position detection. One important research area is indoor position detection, because the outdoor position can be detected easily by using GPS (Global Positioning System). However, in the case of an indoor environment, we cannot use GPS, so it is difficult to find the target position. Asahara et al. [2] proposed to improve the accuracy of the self position estimation of a mobile robot. The robot measures the distance to an object in the environment by using a range sensor. Then, the self position estimation unit estimates the self position of the mobile robot based on the selected map data and the range data obtained by the range sensor. Wang et al. [14] used the ROS (Robot Operating System) platform and designed a WiFi indoor initial positioning system based on a triangulation algorithm. Their test results show that the WiFi indoor initial positioning system combined with the AMCL (Adaptive Monte Carlo Localization) algorithm can position accurately and has high commercial value. Nguyen et al. [10] proposed low speed vehicle localization using WiFi fingerprinting. In general, such works rely on GPS in fusion with other sensors to track a vehicle in an outdoor environment. However, as indoor environments such as car parks are also important scenarios for vehicle navigation, the lack of GPS poses a serious problem. They used an ensemble classification method together with a motion model in order to deal with this issue. Experiments show that the proposed method is capable of imitating GPS behavior for vehicle tracking. Ban et al. [3] proposed an indoor positioning method integrating pedestrian dead reckoning with magnetic field and WiFi fingerprints. Their proposed method needs WiFi and magnetic field fingerprints, which are created by measuring in advance the WiFi radio waves and the magnetic field in the target map. The proposed method estimates positions by comparing the pedestrian sensor values with the fingerprint values using particle filters. Matsuo et al. [6, 7] implemented and evaluated a small size omnidirectional wheelchair. Mitsugi et al. [8, 12, 13] controlled a robot in various ways and proposed some methods to obtain a good position and a good communication environment, considering the robot as a moving AP.
3 Implementation of Moving Omnidirectional Access Point Robot
In this section, we describe the implementation of the MOAP (Moving Omnidirectional Access Point) robot. We show the implemented MOAP robot in Fig. 1. The MOAP robot can move omnidirectionally while keeping the same orientation and can provide an AP for network devices such as routers, repeaters and so on. In order to realize our proposed MOAP robot, we used omniwheels, which allow the robot to move omnidirectionally: forward, backward, left and right. The movement of the MOAP robot is shown in Fig. 2. We would like to control the MOAP robot to move accurately in order to offer a good environment for communication.
Fig. 1. Implemented MOAP robot.
Fig. 2. Movement of our implemented MOAP robot.
3.1 Overview of MOAP Robot
Our implemented MOAP robot has 3 omniwheels, 3 brushless motors, 3 motor drivers and a controller. The MOAP robot requires a 24 V battery for motion and a 5 V battery for the controller. We show the specification of the MOAP robot in Table 1.
3.2 Control System
We designed the control system for the operation of the MOAP robot, which is shown in Fig. 3. We use brushless motors as the main motors to move the robot, because they can be controlled by PWM (Pulse Width Modulation). We used a Raspberry Pi as the controller. However, the controller has only 2 hardware PWM generators, while we need 3 of them, so we decided to use a software generator to obtain the square waves for the PWM. As the software generator, we use Pigpio, which can generate a better signal than other software generators and can produce PWM signals on 32 lines.
Table 1. Specification of MOAP robot.
Length: 490.0 [mm]
Width: 530.0 [mm]
Height: 125.0 [mm]
Brushless motor: BLHM015K-50 (Orientalmotor corporation)
Motor driver: BLH2D15-KD (Orientalmotor corporation)
Controller: Raspberry Pi 3 Model B+
Power supply: DC24V Battery
PWM Driver: Pigpio (the driver can generate PWM signals on 32 lines)
Fig. 3. Control system for MOAP robot.
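As an illustration of the software PWM generation described above, the short sketch below uses the pigpio Python interface; the GPIO pin numbers, carrier frequency and duty-cycle values are assumptions made for illustration and are not taken from the paper.

```python
import pigpio

# GPIO pins assumed to drive the three motor drivers (illustrative only)
MOTOR_PINS = [12, 13, 18]

pi = pigpio.pi()                      # connect to the local pigpio daemon
if not pi.connected:
    raise RuntimeError("pigpio daemon is not running")

for pin in MOTOR_PINS:
    pi.set_PWM_frequency(pin, 8000)   # assumed carrier [Hz]; pigpio picks the nearest supported value
    pi.set_PWM_range(pin, 255)        # duty cycle expressed as 0..255

def set_speeds(duties):
    """Apply one PWM duty cycle (0..255) per omniwheel motor."""
    for pin, duty in zip(MOTOR_PINS, duties):
        pi.set_PWM_dutycycle(pin, duty)

# e.g. drive the three wheels at different duties to translate sideways
set_speeds([180, 90, 90])
```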
4 Implementation of an Indoor Position Detecting System Using BLE RSSI
For the control of the MOAP robot, we need to obtain the position of the robot correctly in the two-dimensional space. Thus, the robot can be used as a wireless AP and it can improve the communication environment. Figure 4 shows the implemented position detecting system using BLE RSSI (Bluetooth Low Energy Received Signal Strength Indication). The RSSI is predicted by Scikit-learn. The right side of the screen shows the measured area. Figure 5 shows the image of the BLE beacons (MM-BLEBC3). We show the specification of the beacon in Table 2. We used a machine learning tool (Scikit-learn) to predict the MOAP robot position. Scikit-learn is open source software for machine learning. It includes a number of algorithms such as support vector machines, random forests, k-means clustering and neural networks. We use a neural network for the prediction based on the RSSI. In Fig. 6, we show the experimental environment of the position detecting system using BLE RSSI for the MOAP robot. The RSSI of BLE is used to calculate the MOAP robot positions. We predicted 6 spaces in the area (length: 10.5 m, width: 8.2 m), which is divided into 6 areas (see Area 0 to 5 of Fig. 4). The proposed position detecting system in Fig. 7 has two stages: the training stage and the detection stage. In this paper, we used four RSSI receivers, as shown in Fig. 6. The detecting target (MOAP robot) has a BLE beacon device for emitting the BLE radio wave. The four receivers are able to get the RSSI from the target. Hence, the proposed system can predict the target position.
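The detection stage can be illustrated with the following Scikit-learn sketch, in which a neural network classifier maps the four receivers' RSSI values to an area label; the RSSI values, network size and number of iterations are toy assumptions and not the measured data of this paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy training data: each row holds the RSSI [dBm] seen by the four
# receivers for one measurement; y holds the area label (0..5).
X = np.array([[-61, -72, -80, -77],
              [-75, -62, -78, -81],
              [-82, -79, -63, -71],
              [-79, -83, -72, -60],
              [-68, -66, -74, -76],
              [-77, -74, -66, -68]])
y = np.array([0, 1, 2, 3, 4, 5])

# max_iter loosely plays the role of the "learning times" varied in Sect. 5
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=10000, random_state=0)
clf.fit(X, y)

# classify a new RSSI reading into one of the six areas
print(clf.predict([[-62, -71, -81, -78]]))
```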
Fig. 4. Implemented position detecting system using predicted RSSI.
Fig. 5. BLE Beacon.
Fig. 6. Experimental environment of position detecting system using BLE RSSI for MOAP robot.
Table 2. Specification of Beacon (MM-BLEBC3).
Communication type: BLE
Radio range: 1–100 [m] (8 step setting)
Signal interval: 100–5000 [ms]
Data format: iBeacon, Eddystone (UID/URL/TLM) Info
Size, Weight: Ø50 × H15 [mm]
Battery: CR2477
Fig. 7. Proposed detecting architecture using Scikit-learn.
5 Experimental Results
We show the experimental results in Fig. 8. The results show the classification of the 6 areas considering the BLE RSSI, which is predicted by machine learning. In Fig. 8(a), just one beacon is used, while in Fig. 8(b), 3 beacons are used to predict the target positions. The dots in the graph show the position data measured using the four RSSI receivers. If a dot has the same color as the background color of the area, the position is correct, while a different dot color means that the position data is wrong. In order to conduct the experiments, we made the decision rules by using machine learning. In this case, we considered 10000 learning times to make the decision rules. In the case of one BLE beacon used to predict the position (Fig. 8(a)), the system did not get correct position data. We can see this from the figure, because each area has dots of different colors. In order to get more correct data, we used more than one beacon, as shown in Fig. 8(b). In this case, we used the mean BLE RSSI value of 3 beacons. The results are better than with one beacon.
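The mean-RSSI step can be sketched as follows; the raw values are illustrative toy numbers, and the resulting vector is what would be fed to the classifier of the previous sketch.

```python
import numpy as np

# raw_rssi[r][b] = RSSI [dBm] at receiver r from beacon b
# (three beacons mounted on the MOAP robot; toy values)
raw_rssi = np.array([[-63, -61, -65],
                     [-72, -70, -74],
                     [-81, -79, -83],
                     [-77, -78, -76]])

# average over the three beacons to smooth out fluctuations;
# the resulting 4-value vector is the input of the area classifier
mean_rssi = raw_rssi.mean(axis=1)
print(mean_rssi)
```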
Fig. 8. Experiment results of classification for 6 spaces for 1 beacon and 3 beacons.
Fig. 9. Accuracy rate for each learning time by using BLE RSSI.
Fig. 10. Accuracy rate for each learning time by using WiFi RSSI.
Our proposed system gives a good prediction of the target (MOAP robot) position. However, there are some wrong data in Area 2 and Area 3. We show the accuracy rate for each area in Fig. 9. The graph also shows different accuracy rates for the different learning times of 1000, 3000, 5000 and 10000. We can see that Areas 0, 1, 4 and 5 are close to a receiver and have high accuracy. So, we need to consider Area 2 and Area 3 to improve the accuracy rate. Our proposed system shows good results compared with WiFi RSSI. Figure 10 shows the accuracy rate for each area number and each learning time when using WiFi RSSI. The experimental conditions are the same as in Fig. 9. Therefore, our proposed system gives a good prediction of the MOAP robot position.
6 Conclusions and Future Work
In this paper, we introduced our implemented MOAP robot. We presented some previous works and discussed the related problems and issues. Then, we presented the control system for our implemented MOAP robot. We implemented an indoor position detecting system using mean BLE RSSI for the moving omnidirectional AP robot. The experimental results show that the proposed system, using 4 BLE RSSI receivers, has better performance than WiFi RSSI. In future work, we would like to improve the proposed system in order to detect the robot position more accurately.
References 1. Akyildiz, I.F., Wang, X., Wang, W.: Wireless mesh networks: a survey. Comput. Netw. 47(4), 445–487 (2005) 2. Asahara, Y., Mima, K., Yabushita, H.: Autonomous mobile robot, self position estimation method, environmental map generation method, environmental map generation apparatus, and data structure for environmental map, uS Patent 9,239,580 (2016) 3. Ban, R., Kaji, K., Hiroi, K., Kawaguchi, N.: Indoor positioning method integrating pedestrian dead reckoning with magnetic field and WiFi fingerprints. In: 2015 Eighth International Conference on Mobile Computing and Ubiquitous Networking (ICMU), pp. 167–172 (2015) 4. Hamamoto, R., Takano, C., Obata, H., Ishida, K., Murase, T.: An access point selection mechanism based on cooperation of access points and users movement. In: 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), pp. 926–929 (2015) 5. Maolin, T.: Gateways placement in backbone wireless mesh networks. Int. J. Commun. Netw. Syst. Sci. 2(01), 44–50 (2009) 6. Matsuo, K., Barolli, L.: Design and implementation of an omnidirectional wheelchair: Control system and its applications. In: Proceedings of the 9th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA-2014), pp. 532–535 (2014) 7. Matsuo, K., Liu, Y., Elmazi, D., Barolli, L., Uchida, K.: Implementation and evaluation of a small size omnidirectional wheelchair. In: Proceedings of the IEEE 29th International Conference on Advanced Information Networking and Applications Workshops (WAINA-2015), pp. 49–53 (2015) 8. Mitsugi, K., Toyama, A., Matsuo, K., Barolli, L.: Optimal number of MOAP robots for WMNs using elbow theory. In: Barolli, L., Li, K.F., Enokido, T., Takizawa, M. (eds.) NBiS 2020. AISC, vol. 1264, pp. 116–126. Springer, Cham (2021). https://doi.org/10.1007/978-3030-57811-4 12 9. Muthaiah, S.N., Rosenberg, C.: Single gateway placement in wireless mesh networks. In: Proceedings of ISCN, vol. 8, pp. 4754–4759 (2008) 10. Nguyen, D., Recalde, M.E.V., Nashashibi, F.: Low speed vehicle localization using WiFi fingerprinting. In: 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 1–5 (2016) 11. Oda, T., Barolli, A., Spaho, E., Xhafa, F., Barolli, L., Takizawa, M.: Performance evaluation of WMN using WMN-GA system for different mutation operators. In: 2011 14th International Conference on Network-Based Information Systems, pp. 400–406 (2011) 12. Toyama, A., Mitsugi, K., Matsuo, K., Barolli, L.: Implementation of control interfaces for@moving omnidirectional access point robot. In: Proceedings of the 12th International Conference on Intelligent Networking and Collaborative Systems (INCoS-2020), pp. 281– 290 (2020)
13. Toyama, A., Mitsugi, K., Matsuo, K., Barolli, L.: Optimal number of MOAP robots for WMNs using silhouette theory. In: Proceedings of the 12th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA-2020), pp. 436–444 (2020) 14. Wang, T., Zhao, L., Jia, Y., Wang, J.: WiFi initial position estimate methods for autonomous robots. In: 2018 WRC Symposium on Advanced Robotics and Automation (WRC SARA), pp. 165–171 (2018)
A Survey on Internet of Things in Telehealth
Komal Marwah and Farshid Hajati(&)
College of Engineering and Science, Victoria University Sydney, Sydney, Australia [email protected], [email protected]
Abstract. With the recent developments in the field of Information and Communication Technology, there are many different ways to deliver health services remotely. Over the last few years there has been a vital change in providing medical consultation to patients, and video chatting has proved to be one of the most essential forms of telemedicine or telehealth. This remote way of providing consultation ensures the provision of medical facilities to people who reside in the countryside and used to travel long distances in order to get regular health check-ups. The main goal of integrating telehealth into day-to-day practice is to enhance the end user's experience and to increase the efficiency as well as the quality of medical services provided to common people. Nowadays, telehealth can be considered a strong alternative to the traditional methods. With the increase in outreach, these health services have become fast paced and completely digitized. Such services are even more useful for people with a disability or aged people suffering from chronic diseases.
1 Introduction
In recent years, the Internet of Things (IoT) has been able to revolutionize various industries by connecting the devices we use on a day-to-day basis to the Internet so that they form an integrated network [13–22]. These objects or devices can easily turn into IoT devices when embedded with a small computer chip; for instance, a small medicine pill or a windowpane can act as an IoT device and become part of the distal network. In order to add the next level of intelligence to this network, a tiny device, commonly known as a sensor, is embedded on each physical object. These sensors read and interpret the surrounding physical conditions, which are converted into binary numbers or machine language that can be easily processed by a computer. The communication between these devices is entirely wireless and automated. With the evolution of smart sensors and IoT devices, the industrial revolution has reached its fourth wave, commonly known as Industry 4.0. Smart cities, predictive industrial appliances, connected logistics, smart supply chains and integrated manufacturing are a few marvels of Industry 4.0. Many organizations are capitalizing on the advantages of bringing IoT into different sectors of commercial industry such as manufacturing, logistics, transport, and healthcare. RFID tags embedded onto products in the retail industry ensure that a lot of data about the product is captured onto a single small
sized computer chip. Inventory control and monitoring become a seamless process; for instance, when very little of a particular product is left, a notification is sent to the store manager to refill the stock. IoT has opened the doors wide to different sectors with many opportunities through applications such as smart wearables, smart homes, driverless cars and connected healthcare [1]. Telehealth is one of the high-end applications of IoT, wherein medical services such as regular check-ups or consultations are delivered through a platform over the web. The health care professionals provide the same quality of experience to their patients without the need for the patients to travel to hospitals or medical centers. The digital setup for a telehealth medical consultation includes a computer or an iPad at both ends, high speed internet and a camera for the video conference to take place. Depending upon the size of the medical facility, widely available video conference platforms such as Skype or Google Meet are used on a large scale, or private hospitals have their own applications developed specifically for their patients and their teams. The primary goal of telehealth is to provide the necessary medical treatment by using the services provided by the Information and Communication Technology (ICT) industry. This method allows an enhancement of the health care services provided online and helps overcome the barriers of geography, culture and time. Especially during global pandemics, such telehealth services allow patients to receive their medical services at their doorstep without any contact with another human being. This zero-contact treatment is extremely beneficial for people residing in the countryside or regional areas [2].
2 Research Background and Objectives
This report envisages an integrated network of sensors deployed within a smart processing unit in order to assess the patient's current medical condition and allow them to capture and record their health data. Telehealth is considered a smart fusion of advanced health care facilities with the marvels of the Internet of Things (IoT). The following research works are thoroughly studied in order to have a smart telehealth solution available at everybody's doorstep. This report covers the advances of remote monitoring as well as the challenges faced by this industry in terms of sensor design and applications, user acceptance, and the lack of standardization of sensor nodes in the field of wearable technology.
2.1 IoT Based Remote Heartbeat Monitoring
This research envisages the reduction of the occurrence of medical mishaps, which is possible by making our day-to-day objects smart. When equipped with a small sensor or a small computer chip, these objects can act as IoT devices. The healthcare industry is revolutionized by these tiny smart devices, which accurately read, monitor and capture the surrounding physical conditions. Decision making at the right time is one of the key features of the proposed work in this research. Additionally, the remote heartbeat monitoring system reduces the level of human intervention, which negates the chances of possible human errors. This heartbeat monitoring setup is investigated using an Arduino UNO microcontroller with an ESP8266 Wi-Fi module in order to process and communicate the captured readings securely over cloud platforms. The cloud platform used in this research is ThingSpeak, which allows the collected data to be visualized and analyzed in detail for future purposes.
In order to monitor the health of a patient, medical professionals check the patient's heart rate. The pulse of the patient is monitored in real time using a heart rate monitoring system. This method is used extensively in medical centers and hospitals, given the exponential increase in the number of heart attacks among young people. With this in mind, it would be highly beneficial if the entire process were made available at the patient's home for more frequent and independent check-ups. This process is coined as the remote heartbeat monitoring system. This setup is foreseen to be incredibly advantageous for people with severe health conditions or chronic diseases. The readings of this setup can be accessed by any medical professional on an anytime, anywhere basis. With the ongoing developments in the field of Information and Communication Technology, this remote monitoring system transforms the traditional paper-based heart rate monitoring into Electronic Health Records (EHR). With the help of these secure cloud platforms, a lot of collaborative work is now possible among teams located in different parts of the world.
The core objective is to automate the process of monitoring the heartbeat of patients, utilizing a couple of sensors responsible for detecting the accurate heartbeat at every instant. Two threshold heartbeats are recorded and stored in the database as the minimum and maximum heartbeat. These lower and upper values refer to the extreme low and high for a patient. Whenever the recorded heartbeat falls outside this range, an emergency situation is declared, leading to appropriate notifications being sent over the network. The author emphasizes the decrease of human intervention with the advent of technology; it also allows the system to make informed decisions according to the recorded heartbeat value at any instant. This framework utilizes a micro-controller (Arduino UNO), a PPG (Photoplethysmogram) sensor, a Wi-Fi module (ESP8266) and ThingSpeak cloud storage. A self-sufficient heartbeat monitoring system is developed using the above elements, and this setup also ensures that alerts are sent to the concerned people at the time of an emergency [3] (Fig. 1). Considering the boom in the telehealth care industry, this heartbeat monitoring device allows patients to seamlessly check their heartbeat independently. With regard to the process of recording and assessing the heartbeat of patients in hospitals, Stella Joseph found an innovative way of deploying the PPG sensors with an Arduino unit that monitors the heartbeat recorded by the sensors. Further, the recorded data is stored in the cloud, and a Wi-Fi module is integrated in this network to ensure seamless connectivity between devices. This research focuses on sampling the patient's heartbeats; with minimum human interference, the system can itself make informed decisions at the time of an emergency by notifying the concerned people and nearby hospitals.
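A minimal sketch of this threshold-and-alert logic is given below in Python for readability; the thresholds, the ThingSpeak field name and the notification routine are illustrative assumptions, and the actual system described above runs on an Arduino UNO with an ESP8266 module.

```python
import time
import requests

# Patient-specific thresholds [bpm]; in the described system these are
# stored in the database (values here are illustrative)
MIN_BPM, MAX_BPM = 50, 120

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_KEY"          # placeholder, not from the paper

def read_heartbeat():
    """Stand-in for the PPG sensor reading collected by the microcontroller."""
    return 72                              # replace with real sensor input

def notify_emergency(bpm):
    print(f"ALERT: heartbeat {bpm} bpm is outside [{MIN_BPM}, {MAX_BPM}]")

while True:
    bpm = read_heartbeat()
    # log the reading to the cloud (field1 is an assumed channel field)
    requests.get(THINGSPEAK_URL, params={"api_key": WRITE_API_KEY, "field1": bpm})
    if bpm < MIN_BPM or bpm > MAX_BPM:
        notify_emergency(bpm)
    time.sleep(10)
```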
Fig. 1. Remote heartbeat monitoring using internet of things [3]
2.2 A Portable Wearable Tele-ECG Monitoring System
Building a unified ECG system that captures the heart rate using Textile Electrodes (TE), ECG circuitry and a Bluetooth enabled setup is the primary focus of the author. The author reports that the ECG signals of the patient can be plotted accurately with a small absolute error. The recorded data is further processed and sent to the server, from which point onwards a General Practitioner (GP) has access to the data. The GP can then analyze and review the patient's condition. This innovative and portable system allows the user to send short messages with their longitude and latitude values to the emergency stations. With the provision of a "Help" button, the user can notify the concerned parties about his or her medical condition. Moreover, if the heart rate value goes beyond the extreme limit, a message is sent to the concerned parties automatically. This research proposed two different architectures: one installed at the patient's end and another web interface for the doctors or medical professionals. The following components are utilized at the patient's end:
• Textile electrodes embedded on t-shirt or belt
• Smartphone or tablet device
• Remote IoT server well connected with user's phone and doctors' platform
• System interface for medical staff with respect to administrative work and for doctors to view patient's data
This wearable Tele-ECG monitoring system is portable in nature, as it comes with easily detachable textile electrodes that can be mounted on the clothing using Velcro. One doctor can be associated with more than one patient at the same time, and this doctor is able to access the patients' records and previous medical history. Four important elements are used at the patient's side, namely an undergarment embedded with textile ECG electrodes, a smart phone, an IoT server and an interface that lets the medical practitioners assist the patients by viewing their reports online. This setup comes in handy because it can easily be detached from the daily wear and cleaned for hygiene purposes. This, in turn, results in higher acceptance by the users without much hassle (Fig. 2).
Fig. 2. Types of electrodes in portable tele ECG heartbeat monitoring system [4]
This novel system comprises a hardware and a software unit. The software part digitizes the ECG data and filters it further. The hardware part comprises a battery unit and an LED which indicates the drainage level of the battery. Under normal circumstances, the battery lasts for fourteen days, and the installed LED notifies the user to charge the setup in time. The software design, on the other hand, uses the latest web development tools and databases such as MongoDB; the Angular 4 framework is used for the website interface, and secure transmission over https is used every time. Plotting the heart rate on a smartphone or tablet device in the form of easily interpretable graphs has made the data capturing process quicker. Additionally, the location and the ECG device information are sent to the server along with the heart rate readings. The medical staff can access this data as soon as the user's readings are taken. Real time data analysis is possible due to a thorough interconnection of the patient's device with the doctor's user interface through the IoT servers. The recorded data and the patient's personal details are sent to the medical facility via wireless transmission. A user-friendly mobile application is developed for both doctors and patients. The patients are able to see their own data recorded on the previous timeline, whereas a doctor has access to multiple patients along with their previous medical history. Furthermore, this research was successfully tested on thirty individuals within the age range of 25–50. These tests were conducted while performing different physical activities such as running and cycling, and at rest or in a standing position. An enhanced battery life, the long lasting power backup of the sensor battery, the latest smart devices such as tablets or smartphones, and IoT servers are key components of this proposed methodology. The software part retains the medical history of the patients along with tracking
the location of the devices used to capture the heart rate readings. With the above-mentioned advantages, this proposed research work helps reduce the congestion in hospitals and, in turn, allows these remote medical facilities to be scalable in nature, as the heart rate can be captured and analyzed remotely [4].
2.3 The Rise of Consumer Health Wearables: Promises and Barriers
This research uncovers the potential as well as the roadblocks of the wearable devices utilized in our day-to-day lives. According to the statistics, in the 21st century approximately 19 million fitness measuring devices are expected to be sold on the market [5]. This includes ever smaller devices embedded in our daily widgets such as watches, jewelry, bands, etc. With so many objects being converted into IoT devices, a huge amount of data is recorded and set up for analysis, and there is a gradual increase in confidentiality and security concerns. The data privacy norms are supposed to be followed strictly in order to secure the private information of an individual. The author highlights the power of small sensors implanted in everyday jewelry, watches or clothing. The gadgets are embedded with tiny microchips which are responsible for recording surrounding data. As a result, these devices are interconnected and form a centralized ecosystem wherein the gadgets interact with each other and work based on commands from other devices. For instance, a ring can be embedded with a small device called an oximeter which can measure the heart rate, a wristband can be implanted with electrodermal sensors, or the mind's attention span can be analyzed using ECG electrodes (Fig. 3).
Fig. 3. Different health parameters captured by vast range of sensors [6]
It also addresses the concern of the acceptance of these user-friendly devices by medical professionals on a day-to-day basis. Despite the fact that glucose monitoring devices and thermometers are already used extensively in medical facilities, these digital technology devices have attracted a lot of attention as a means to recognize the health parameters of patients. There is a high chance of these health monitoring devices being considered the next "Dr. Google". With the active use of health wearables, individuals become extremely aware of their body and mental health, which empowers them to be self-reliant. This, in turn, reinforces the concept of the Quantified Self (QS): an international community has been built and is actively working on developing tools and widgets that track and record personal health data. It is defined as capturing the health parameters in the form of numbers; these self-tracking widgets are responsible for deriving personal analytics and empowering people to draw interventions based on their data, behavior and habits [7]. This concept is not only popular for the use of self-tracking smart devices but also controversial with respect to whether it fulfils the promise of stabilized human health. It raises the important question of whether such devices actually improve people's habits, increase efficiency, help cope with stress and increase endurance. This self-tracking psychology can prove illuminating for some people and a little traumatic for others. People need to take precautions in order to cope with this inherent ability to track every little behavioral trait or habit. A large body of empirical research does not support the effectiveness of using these wearable health monitoring devices. Surveys indicate that around 32% of consumers stop wearing these smart devices within just six months of purchase, and another 50% terminate their use within around a year.
Apart from the people who wear these smart devices in order to stay informed about their health, there is another set of individuals who suffer from a continued illness or a prolonged disease. The author discusses the scope for these devices to be utilized as a secondary means of treatment and diagnosis. These widgets provide a suitable alternative to expensive medical treatments and allow individuals to track their health conditions on an anytime, anywhere basis. For instance, by effectively tracking the movement of the body, early symptoms of Parkinson's disease can be detected and addressed in time. Moreover, some automated solutions can be provided to patients dealing with stress, anxiety, post-traumatic stress disorder (PTSD) and panic attacks. This can be achieved by sending regular feedback via emails and text messages or by sending detailed reports about the recorded parameters. The author also explains the grey area with respect to data confidentiality and the slight chance that people will become completely dependent on the self-automated diagnosis. The validity and reliability of these devices is also highlighted, keeping in mind the reported error margins of around 25%. Formal training for the health care staff to get accustomed to the new software and tools is the need of the hour; this includes a thorough understanding and knowledge of the programs and the reports generated by the system. For instance, a mobile application encounters failure rates of up to 30% in detecting melanoma disease [8].
This underlines the vulnerability of these smart devices to cyber-attacks, which may compromise patients' personal details and health records.
Wearable Sensors in Intelligent Clothing for Human Activity Monitoring

With the advanced range of miniature circuits built into devices, the IoT industry is bringing a game-changing revolution to the fields of health and leisure activities such as gaming and entertainment. This modern architecture embeds sets of small, closed circuits in everyday devices; these act as independent entities connected to other smart devices and, following a strict networking protocol, communicate with each other over wireless media such as Bluetooth or Wi-Fi modules. The integrated network of devices can be installed almost anywhere, including cars, clothing, jewelry, fans and windowpanes, at an affordable cost. This enormous set of devices tracks and detects any change in physical movement, which in turn produces huge data sets recorded in databases. Researchers thus have access to a large number of health parameters recorded as numbers, which makes analytics easier. Going beyond fit-bits and other activity trackers as the niche products of this booming industry, [9] highlights the progress of Information and Communication Technology in collaboration with electronics, where the design of sensors embedded in clothing is picking up pace. The research classifies the smart devices available in the market as shown in Table 1.
Table 1. Range of products available as wearable technology gadgets

Classification             | Wearable technology product suite
Leisure                    | Virtual reality glasses, Smart glasses, Smart watches, Smart headsets
Health & fitness tracking  | Fitness bands, Pedometers, Emotional monitors, Sleep sensors, Activity monitors
Commercial                 | Head up displays, Smart clothing, Smart homes
Medicine                   | Hearing aids, Looking glasses, Biometric monitors, Drug monitoring devices
It also brings together all the components of the IoT architecture responsible for making this innovation possible. The paper emphasizes the need to make motion detectors and sensors more affordable, which will lead to greater acceptance by a large user base. For any wearable product to reach a huge market, it has to be priced affordably, which can be controlled by monitoring the hardware components used to build the circuit, such as sensors, batteries, connecting switches and chips. Beyond the hardware, it is crucial that the device or material onto which the circuit is embedded is cost efficient too. In this scenario clothing, one of the basic necessities of human existence, falls well within the required cost range. The research work proposes a basic architecture in which a network of devices is implanted in ordinary pants. The role of the circuit is to track the movement of the knee and its motion during walking, sitting, cycling or running. This is made possible by a set of motion sensors embedded in the circuitry, which trigger an electrical signal whenever they record any physical movement. The amount of physical movement is captured according to the intensity of the induced electrical signal; these signals are recorded in a database and fed to the microcontroller (an Arduino, in this scenario), while the mobile interface for the software design is developed using MIT App Inventor. MIT App Inventor is one of the leading examples of developing mobile and web interfaces using computational thinking; it provides a seamless platform on which users build applications from a set of inbuilt drag-and-drop components, producing smartphone applications compatible with Android and iOS. The research work promises to incorporate heart rate signals derived from ECG data in the future. There is also great potential to integrate regularly monitored respiration patterns, blood glucose and sugar levels, sleep cycles and the pulse rate of patients.
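As a rough illustration of the idea described above, namely mapping the intensity of the induced signal to a level of physical activity, the following Python sketch classifies a stream of simulated intensity readings by simple thresholding. The thresholds and the read_intensity generator are illustrative assumptions, not part of the cited design.

```python
# Minimal sketch: classify knee-sensor signal intensity into activity levels.
# Thresholds and the simulated reading stream are illustrative assumptions.

import random

LOW, HIGH = 0.2, 0.6  # assumed intensity thresholds (normalized 0..1)

def read_intensity(n=10):
    """Stand-in for readings streamed from the microcontroller."""
    for _ in range(n):
        yield random.random()

def classify(intensity):
    if intensity < LOW:
        return "sitting"
    if intensity < HIGH:
        return "walking"
    return "running/cycling"

if __name__ == "__main__":
    for value in read_intensity():
        print(f"intensity={value:.2f} -> {classify(value)}")
```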
3 Research Problems and Questions

In the technology landscape, the field of wearable gadgets is still emerging, with significant promise to create a huge impact in the health and fitness industries. The research proceeds with the following concerns, identified by researchers in their respective papers: preserving the privacy and data confidentiality of users while collecting personal details, the lack of standardization in wearable technology, efficient sensor design and user acceptance. The major flaw appears after a candidate's data is recorded, when the data is transferred over a wireless network. There are multiple options, such as Bluetooth, NFC and Wi-Fi, and a data breach can happen during transmission over any of them. The recorded data includes name, age, gender, locality and nationality along with health parameters such as heart rate, pulse, sleep and steps. The data is owned not only by the person but by the company as well, and these companies often do not follow government security regulations. From a business perspective, these companies also carry a social media aspect, and people can choose to make their data public, usually offered as an option for sharing the day's progress with other users. Moreover, the most overlooked section, the terms and conditions, explicitly mentions that even though the company respects the user's privacy, it may share this data with third parties [10]. Smart wearable devices have surged just like smartphones over the last couple of years, with new gadgets on the market promising a great impact on personal lifestyles; however, there is a lack of regulatory terms for these gadgets. The two big players are the sensor design industry and the enterprises that bring these designs to real life, two domains that are currently highly disrupted and diverse. Even though wearable technology is evolving and trying to gain acceptance among people of different age groups, it still needs to adopt simple yet intelligent designs that can be accepted by all of its stakeholders. These considerations are crucially important when designing new interfaces, keeping in mind the end users' social and psychological preferences [11].

3.1 Multiple Factors Affecting the Heartbeat Measurement of the Patient
The process of monitoring a person's heartbeat can depend on several factors, such as the person's age, gender and previous medical conditions, and it varies from case to case. The Sinoatrial (SA) and Atrio-Ventricular (AV) nodes are responsible for changes in blood volume in the circulatory system. Rigorous physical activity such as exercising or cycling, or physical conditions such as breathlessness or dizziness, can raise a person's heartbeat; this condition is commonly known as tachycardia. On the contrary, a slower heartbeat, known as bradycardia, can be experienced by a person suffering from diaphoresis or syncope. There are multiple underlying clinical factors that change blood pressure levels, so extra precautions need to be taken when recording a person's heartbeat remotely. The application developed has to be smart enough to monitor the heart rate and compare it with threshold values over time before declaring an emergency.
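To make the threshold comparison concrete, the sketch below flags sustained abnormal readings before declaring an emergency. The 60/100 bpm resting-adult bounds are common rules of thumb and the consecutive-reading rule is an assumption; a real application would personalize these values per patient.

```python
# Minimal sketch of threshold-based remote heart-rate monitoring.
# The 60/100 bpm resting-adult bounds are common rules of thumb; in practice
# the thresholds would be personalized (age, medication, prior conditions).

def assess_heart_rate(bpm, low=60, high=100):
    if bpm < low:
        return "bradycardia"
    if bpm > high:
        return "tachycardia"
    return "normal"

def should_alert(readings, low=60, high=100, consecutive=3):
    """Raise an emergency only after several consecutive abnormal readings,
    to avoid false alarms from momentary spikes (e.g., climbing stairs)."""
    streak = 0
    for bpm in readings:
        if assess_heart_rate(bpm, low, high) != "normal":
            streak += 1
            if streak >= consecutive:
                return True
        else:
            streak = 0
    return False

print(should_alert([72, 75, 130, 128, 131]))  # True: sustained tachycardia
```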
3.2 Data Security and Data Confidentiality During Wireless Transmission
Maintaining data in its original form and preventing hackers from accessing it is one of the primary concerns of researchers these days. There are multiple wireless threats through which an intruder can gain access to the data in its original form. Denial of service, configuration flaws and rogue access points are the most common challenges in the field of wireless sensor networks. In such a network, an attacker with physical access would be able to understand the vulnerabilities of the system; once he gains control of it, he has the power to manipulate data or even transfer it to unethical sources. One of the most common ways to block a wireless network is through denial of service (DoS) attacks, in which the attacker tries to overwhelm the network with unnecessary requests from different nodes. All of these nodes try to access a single machine at the same time, leaving the target system or device unable to handle so many simultaneous requests.
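One common safeguard for the integrity side of this problem is to attach a message authentication code to every payload before it leaves the device. The sketch below is a minimal illustration using Python's standard hmac and hashlib modules; the pre-shared key and message format are assumptions, and confidentiality (encryption) would still have to be layered on top.

```python
# Minimal sketch: append an HMAC-SHA256 tag to each sensor payload so the
# receiver can detect tampering in transit. Assumes a pre-shared secret key;
# a real deployment would also encrypt the payload.

import hmac, hashlib, json

SECRET_KEY = b"replace-with-provisioned-device-key"  # assumption

def pack(payload: dict) -> bytes:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return json.dumps({"body": body.decode(), "tag": tag}).encode()

def unpack(message: bytes) -> dict:
    wrapper = json.loads(message)
    body = wrapper["body"].encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, wrapper["tag"]):
        raise ValueError("payload failed integrity check")
    return json.loads(body)

msg = pack({"device": "wrist-01", "heart_rate": 84})
print(unpack(msg))  # round-trips only if the tag still matches
```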
3.3 Limited Battery Power of Sensors
A wireless sensor network (WSN) consists of a large number of sensors deployed over a spatially distributed area. These sensors monitor parameters and send their values to the processing unit in real time. Sensors, often termed nodes, are considered the heart of a wireless sensor network. However, the battery life of a sensor is receiving a lot of attention these days; it depends on several factors, such as the amount of data transferred from the sensor to the processing unit and the number of times this data is transferred, often termed the frequency. Physical conditions such as temperature and pressure are also accounted for when evaluating the lifespan of a sensor (a rough lifetime estimate under such assumptions is sketched after the research questions below). Given the small size of a node, the batteries and power backups have to be designed in a very small form factor; there are numerous battery options, such as lithium polymer, nickel cadmium or alkaline batteries, which can offer a longer sensor lifespan. The following research questions were raised after careful study of the research papers and journal articles:
• How can we enhance data security in IoT used in health care?
• How can we improve sensor efficiency in Telehealth?
• How can we find energy-saving options so that batteries last longer?
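The lifetime estimate referred to above can be sketched with a simple duty-cycle calculation: the average current is the sleep current plus the transmit current weighted by how often and how long the node transmits. All capacities and currents below are illustrative assumptions.

```python
# Back-of-the-envelope sketch of sensor-node battery lifetime as a function of
# transmission frequency. All currents and capacities are illustrative values.

def lifetime_days(capacity_mah, sleep_ma, tx_ma, tx_seconds, tx_per_hour):
    """Average current = sleep current + duty-cycled transmit current."""
    duty = (tx_seconds * tx_per_hour) / 3600.0
    avg_ma = sleep_ma * (1 - duty) + tx_ma * duty
    return capacity_mah / avg_ma / 24.0

# e.g. 1000 mAh Li-Po cell, 0.05 mA sleep, 20 mA during a 2 s transmission
for tx_per_hour in (4, 12, 60):
    days = lifetime_days(1000, 0.05, 20, 2, tx_per_hour)
    print(f"{tx_per_hour:>3} transmissions/hour -> ~{days:.0f} days")
```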
4 Research Methodology

By utilizing the extended features of technology, the field of health care has improved through a huge number of gadgets and user-friendly widgets installed on individuals' smartphones. Researchers have always been curious about the research problems mentioned above and have extensively studied the advantages and disadvantages of using technology in healthcare. In the process of reviewing many journal articles and research papers published in reputed conferences and journals, we realized that Telehealth and IoT go hand in hand as long as the acceptance rate of smart devices in our day-to-day lives keeps increasing. This topic has become universal, and many studies have been published in favor of including technology for people suffering from serious diseases. In this report, our goal is to apply a quantitative research methodology by conducting surveys among medical staff and patients showing symptoms of prolonged illness. This approach includes sending a questionnaire of well-researched questions to the patients and the medical staff. Based on the data captured from both parties, we will analyze the data and identify improvements that can be made to current Telehealth systems. This study focuses on finding flaws with respect to the privacy and security of the web servers, mobile applications and databases. Furthermore, suggestions can be drawn to improve privacy and security with respect to patients' personal information, such as contact details, address and health parameter values. To develop an efficient Telehealth solution that serves as a one-stop answer to the medical challenges faced by patients, the project will be kick-started as a pilot project. The pilot project will include mock interviews structured with in-depth questions whose answers can later be analyzed as numbers. These interviews will be of two types: one-on-one interviews with the patients and group interviews with the medical staff. This quantitative method captures end-user experience with the remote health monitoring setup. The methodology also involves surveys and a brief questionnaire covering the challenges experienced by medical staff and patients while using the kits provided to them. The research questions are brief and tailor-made for different scenarios, which helps us carry out the critical analysis needed to improve a remote telehealth system. During this process, many behavioural aspects of the patient will be examined and converted into quantitative numbers; performing a detailed analysis is undoubtedly far easier on numbers than on unstructured statements shared by patients. Apart from reviewing the research work, journal articles and publications of the last five years, the core focus of this research is to analyse the ease with which patients use the tool kits in their homes. Challenges with respect to both hardware and software will be encountered; the main focus lies on capturing real-time feedback from the patients based on their user experience. Furthermore, a set of straightforward questions targets the patients as well as the medical staff, so that we can understand the usability of the medical kit provided and the day-to-day challenges with respect to the application dashboard and the hardware itself. A detailed analysis should follow the surveys and interviews in order to assess the impact of the Internet of Things on current health care systems.
5 Research Significance

The concept of telehealth has gained a lot of attention during the worldwide pandemic. The Internet of Things (IoT) has reached the health care industry and has made the diagnosis process zero-contact. With the health parameters recorded using sensors as end points, this data is transmitted securely over the network. Once the data is successfully uploaded to the server, it is made immediately available to doctors and medical staff for further analysis. The primary advantage of this method is that records and reports are immediately available at both the patient's and the doctor's end. In an emergency, the concerned authorities are informed immediately to ensure timely medical care. The Telehealth care system allows people from all geographies to have equal access to medical facilities, since the entire process of recording the health parameters is remote, followed by an online consultation with a General Practitioner. This model of providing health services is scalable and gives everyone an equal opportunity to avail the best health care services. Wearable sensor technology is certainly one of the fastest-growing industries of the 21st century, supported by an exponential increase in health awareness among people. The numerous options in smartphones, connected devices and configurable sensors have led to the rise of analyzing human data with respect to day-to-day chores; human activity is watched closely and precisely recorded to draw patterns and analyses. Sensors are devices that give feedback after recording and processing a specific quantity such as pressure, temperature or pH. They detect and record a physical quantity and provide a human-readable output; they are widely used in various industries such as medicine, aerospace, automotive and defense. A further extension of the sensor industry is the amalgamation of recording and monitoring data with respect to the human body. This domain is termed wearable technology, where smart jewelry bands or rings, smart watches, smart spectacles, fit-bits or wrist bands and body-mounted devices are the most common examples of wearable accessories used by people. The utilization of sensors and wearable devices is becoming extremely impactful for continuously monitoring and recording data for human as well as animal health management. This multi-functional domain enables us to build human-friendly devices that can analyze body temperature and observe movement and behavior [12].
6 Conclusion

The field of monitoring health parameters remotely is named Telehealth, and it can undoubtedly serve as one of the crucial applications of the Internet of Things (IoT). The increasing influence of technology on the health care sector has led to the development of remote monitoring setups that give everyone an equal opportunity to access resources. The healthcare industry is trying to minimize human intervention and let machines take the relevant decisions based on the recorded data, which also ensures accuracy in both data recording and data interpretation. There is an exponential increase in the consumer base of wearable technology; with an acceptance level of more than 80%, the health monitoring industry is being expedited by its wearable devices. This involves a person wearing an electronic device equipped with different sensors that track a particular physical activity, such as the number of steps or the heart rate. Electronic devices such as smart wristbands are further integrated with smartphone applications, where the data recorded by the sensor is stored and processed to allow better analysis over time. For instance, smart health watches monitor heart rhythms and send immediate alerts to those experiencing atrial fibrillation. Similarly, wearable biosensors can be attached to the human body; they allow us to perform our day-to-day tasks while these tiny adhesive devices record respiration rate, temperature and heart rate. The whole consultation process becomes fast and efficient with the use of such innovative kits, for example ECG monitoring or recording diabetes readings while sitting at home. The recorded data is then shared over a secured network with the health care professional, who writes a prescription that can be shared online. The user-friendly web interfaces allow both parties to view the individual's previous medical history. These doorstep services improve the overall quality of life and enable great user experiences.
References 1. Oracle. https://www.oracle.com/au/internet-of-things/what-is-iot.html (2020) 2. Tuckson, R.V., Edmunds, M., Hodgkins, M.L.: TeleHealth. N. Engl. J. Med. 377(16), 1585– 1592 (2017) 3. Joseph, S., Ferlin Shahila, D., Patnaik, S.: IoT based remote health care monitoring. In: 2019 International Conference on Advances in Computing, Communication and Control (ICAC3) (2020) 4. Ozkan, H., Ozhan, O., Karadana, Y.: A portable wearable Tele-ECG monitoring system. IEEE Trans. Instrum. Meas. 69, 173–182 (2020) 5. Piwek, L., Ellis, D.A., Andrews, S., Joinson, A.: The rise of consumer health wearables: promises and barriers. PLoS Med. 13(2), e1001953 (2016)
6. Ledger, D., McCaffrey, D.: Inside Wearables - How the Science of Human Behaviour Change Offers the Secret to Long-term Engagement. Endeavour Partners, Cambridge, MA, USA (2014) 7. QS: https://quantifiedself.com/about/what-is-quantified-self/ (2020) 8. Wolf, J.A., et al.: Diagnostic inaccuracy of smartphone applications for melanoma detection. JAMA Dermatol. 149(4), 422–426 (2013) 9. Chiuchisan, I., Geman, O., Hagan, M.: Wearable sensors in intelligent clothing for human activity monitoring. In: 2019 International Conference on Sensing and Instrumentation in IoT Era (ISSI), (2019) 10. Pirbhulal, S., Pombo, N.: Towards machine learning enabled security framework for IoTbased healthcare. In: 13th International Conference on Sensing Technology (ICST), (2020) 11. Anon.: The Department of Health. https://www1.health.gov.au/internet/main/publishing.nsf/ Content/ehealth-nbntelehealth-pilots (2020) 12. Direct, S.: https://www.sciencedirect.com/topics/neuroscience/wearable-sensor (2020) 13. Abdoli, S., Hajati, F.: Offline signature verification using geodesic derivative pattern. In: 22nd Iranian Conference on Electrical Engineering (ICEE), pp. 1018–1023. Tehran (2014) 14. Barzamini, R., Hajati, F., Gheisari, S., Motamadinejad, M.B.: Short term load forecasting using multi-layer perception and fuzzy inference systems for Islamic countries. J. Appl. Sci. 12(1), 40–47 (2012) 15. Shojaiee, F., Hajati, F.: Local composition derivative pattern for palmprint recognition. In: 22nd Iranian Conference on Electrical Engineering (ICEE), pp. 965–970. Tehran (2014) 16. Hajati, F., Raie, A., Gao, Y.: Pose-invariant 2.5 D face recognition using geodesic texture warping. In: 11th International Conference on Control Automation Robotics and Vision, pp. 1837–1841. Singapore (2010). 17. Ayatollahi, F., Raie, A., Hajati, F.: Expression-invariant face recognition using depth and intensity dual-tree complex wavelet transform features. J. Electron. Imaging 24(2), 23–31 (2015) 18. Pakazad, S.K., Faez, K., Hajati, F.: Face detection based on central geometrical moments of face components. In: IEEE International Conference on Systems, Man and Cybernetics (SMC 2006). Taiwan (2006) 19. Hajati, F., Cheraghian, A., Gheisari, S., Gao, Mian, A.S.: Surface geodesic pattern for 3D deformable texture matching. Pattern Recogn. 62, 21–32 (2017) 20. Abdoli, S., Hajati, F.: Offline signature verification using geodesic derivative pattern. In: 22nd Iranian Conference on Electrical Engineering (ICEE), pp. 1018–1023. Tehran (2014) 21. Hajati, F., Faez, K., Pakazad, S.K.: An efficient method for face localization and recognition in color images. In: IEEE International Conference on Systems, Man and Cybernetics (SMC 2006). Taiwan (2006) 22. Hajati, F., Raie, A., Gao, Y.: Pose-invariant multimodal (2d + 3d) face recognition using geodesic distance map. J. Am. Sci. 7(10), 583–590 (2011)
Alexnet-Adaboost-ABC Based Hybrid Neural Network for Electricity Theft Detection in Smart Grids

Muhammad Asif1, Ashraf Ullah1, Shoaib Munawar2, Benish Kabir1, Pamir1, Adil Khan1, and Nadeem Javaid1(B)

1 COMSATS University Islamabad, Islamabad 44000, Pakistan
2 International Islamic University Islamabad, Islamabad 44000, Pakistan
Abstract. In this paper, a hybrid deep learning model is presented to detect electricity theft in power grids, which happens due to the Non-Technical Losses (NTLs). The NTLs emerge due to meter malfunctioning, meter bypassing, meter tampering, etc. The main focus of this study is to detect the NTLs. However, the detection of NTLs faces three major challenges: severe class imbalance, overfitting due to the highly dynamic data, and poor generalization due to the usage of synthetic data. To overcome these problems, a hybrid deep neural network is designed, which is a combination of Alexnet, Adaptive Boosting (AdaBoost) and Ant Bee Colony (ABC), termed Alexnet-Adaboost-ABC. The Alexnet is exploited for feature extraction while Adaboost and ABC are used for classification and parameter tuning, respectively. Moreover, the class imbalance issue is resolved using the Near Miss (NM) undersampling technique. The NM effectively reduces the majority class samples and standardizes the proportion of the majority and minority classes. The model is evaluated on the real, inspected dataset released by the State Grid Corporation of China (SGCC). The performance of the proposed model is validated through the F1-score, precision, recall, Area Under the Curve (AUC) and Matthews Correlation Coefficient (MCC). The simulation results depict that the proposed model outperforms the existing techniques, obtaining 3%, 2% and 4% higher values of F1-score, AUC and MCC, respectively.
1 Introduction

In recent times, electricity plays a vital role in many daily life activities. Human life has become highly dependent on electricity due to the emergence of digital devices. However, different losses occur during the generation, transmission and distribution of electricity, which are daunting for the power utilities. These losses are categorized into Technical Losses (TLs) and Non-Technical Losses (NTLs). TLs arise because of energy dissipation and short circuits in electricity distribution lines, fatal shocks, etc. Similarly, NTLs occur due to the fraudulent use of electricity, meter bypassing, unpaid bills, etc. In Pakistan, approximately 35% to 40% of losses occur due to line losses while another 40% happen because of NTLs [1]. The NTLs not only affect the economy of developing countries such as Pakistan, India and Bangladesh, but also that of developed countries like
the United States (US), the United Kingdom (UK), Canada, etc. In the US, electricity theft losses have increased up to $7 billion, while in the UK these losses have increased up to £175 per annum. The recent advancement of the Advanced Metering Infrastructure (AMI) and the roll-out of smart meters assist the power utilities in detecting electricity thieves to a great extent. The emergence of smart grids enables a two-way communication flow and a balanced demand and supply of electricity between consumers and power utilities. However, more effort is still needed to detect energy thieves. The main focus of this study is to detect the NTLs efficiently and help the utilities recover the maximum revenue. In the literature, three types of techniques are presented for electricity theft detection (ETD).
1. State based: hardware components such as transformers and sensors are used. This technique performs well in ETD; however, maintenance is the main problem [2].
2. Game theory: a game is conducted between two players, where one player is the attacker and the other is the utility, and both players try to beat each other. However, this is not a suitable approach because designing complex real-world scenarios is a challenging task.
3. Machine learning: Machine Learning (ML) techniques are first trained on the data and then prediction is performed. Various ML techniques, such as ensemble and boosting methods, are used in ETD for classification and regression.
The researchers have proposed various techniques; however, more effort is still needed to improve classifier efficiency on an imbalanced dataset. From the existing literature, the following limitations are identified: the severe class imbalance issue degrades the model's performance and leads it towards overfitting, high dimensional data is handled improperly, and the number of misclassified fair consumers is high compared to the unfair consumers, which is also termed a high FPR. The objective of this study is to present a robust model for efficient ETD in the power grids. The major contributions of this work are as follows:
• the problem of the imbalanced dataset is resolved by employing an undersampling technique known as Near Miss (NM),
• the Alexnet, which is an advanced version of CNN, is exploited for feature extraction and
• the Adaboost classifier is used for classification and for lowering the FPR to a minimal level.
The rest of the paper is organized as follows: Sect. 2 presents the related work. Sect. 3 describes the proposed system model in detail. Sect. 4 describes the results and a discussion of the proposed and existing models. The conclusion is presented in Sect. 5.
2 Related Work

In [3], the authors propose a Wide and Deep Convolutional Neural Network (WDCNN) for efficient ETD. The objective of the proposed model is to solve the overfitting problem and make the model more robust in detecting electricity thieves. The authors in [4] propose a CNN and Random Forest (RF) based model to detect the NTLs. The aim of the proposed model is to improve detection accuracy and solve the overfitting problem. The authors of [5] introduce Gradient Boosted Trees (GBTs) to
detect the NTLs. The authors in [6] present a Hybrid Deep Neural Network (HDNN), which is based on LSTM and a Multi-Layer Perceptron (MLP). In the proposed model, auxiliary data is also considered for better detection of electricity thieves. In [7], the authors propose a CNN-GRU-PSO deep model for ETD. This model is designed to minimize the high FPR to a minimal level. The authors in [8] introduce a hybrid LSTM and RUSBoost based model for ETD. The authors of [9] use a VGG (Visual Geometry Group)-XGboost-FA based HDNN model for binary classification in ETD. The main aim of this model is to solve the overfitting issue and lower the high FPR. The authors of [10] propose a CNN-LSTM model termed a Deep Siamese Network (DSN) for electricity theft detection; the main contribution of this model is improved detection accuracy. In [11], the authors apply an optimization technique to tune the hyperparameters of the proposed model for better ETD. In [12], the authors propose a hybrid CNN-LSTM based deep learning model for ETD. In this paper, the authors opt for the SMOTE technique for data balancing; however, an overfitting issue arises due to the generation of redundant records. In [13], the authors use feature extraction to extract optimal features, while Random Under Sampling Boosting (RUSBoost) is leveraged for classification and data balancing by reducing the majority class instances. The objective of their proposed model is to detect fraudulent users. In [14], Kernel Principal Component Analysis (KPCA) is used for feature extraction and a bidirectional GRU is leveraged for the classification of normal and abnormal users. The proposed model focuses on improving the DR and lowering the high FPR. The authors of [15] present a DNN model, which combines a deep learning model and an ensemble technique for better detection of fraudulent users in power grids. In [16, 17], clustering based Deep Neural Networks (DNNs) are proposed for better ETD by creating relevant clusters of fair and unfair consumers. The objective of the proposed models is to maximize the Detection Rate (DR) while minimizing the False Positive Rate (FPR). In [18], a Gradient Boosting (GB) based technique is proposed for efficient feature engineering and classification. In [19], the authors propose a K-Means and LSTM based deep model to detect the difference between drift and fraudulent electricity consumption patterns. In [20], a stacked sparse de-noising auto-encoder is used for feature extraction and for detecting malicious users. In [21], the authors propose a novel technique known as the Smart Energy Theft System (SETS) against malicious attacks. In [22], an efficient technique is designed to detect two types of attacks: i) attackers who use an illegal electricity connection for some part of the day and ii) attackers who steal electricity for the whole day. The authors in [23] present a novel hybrid technique, which combines the Maximum Information Coefficient (MIC) and Clustering by Fast Search and Find of Density Peaks (CFSFDP) for detecting electricity thieves in the smart grid. However, no proper feature engineering mechanism is practiced in the proposed model for better ETD. The authors in [24] propose the Sample Efficient Power Anomaly Detection (SEPAD) scheme for anomaly detection. However, the proposed scheme does not perform well in achieving a high DR.
In [25], the Finite Mixture Model (FMM) and GA are used for parameter tuning and feature generation in order to make the model more robust against various attacks, whereas the FMM is used for soft clustering. In [26], the authors introduce the Gradient Boosting Theft Detector (GBTD) based on Gradient Boosting Classifiers (GBCs). The GBCs are based on XG (Extreme Gradient) boosting, categorical boosting and light boosting. Stochastic features are also created and passed to the GBTD for better NTL detection.
3 Proposed System Model

This section describes the proposed methodology. Fig. 1 shows the complete workflow of the proposed model. A detailed description of each component of the system model is given below.
Fig. 1. Proposed model (workflow: data preprocessing of missing values and outliers, normalization, Near Miss balancing of the imbalanced data, AlexNet convolutional and flatten layers for feature extraction, and AdaBoost classification with voting over the train/test split)
3.1 Data Preprocessing
The electricity consumption data often contains noisy and erroneous values, which should be handled before passing the data to any classification model. Therefore, in this study, we consider the following data preprocessing techniques to handle these issues. Firstly, we use a normalization technique because neural networks are very sensitive to diverse data; the main purpose of normalization is feature scaling. There are two types of normalization techniques, min-max and Z-score, and in this paper we utilize min-max normalization. Similarly, interpolation [9] is used to handle the missing values: it fills the current missing value by taking the average of the next and previous values.

3.2 Feature Extraction and Classification

The electricity dataset contains high dimensional sequential data. Therefore, the extraction of potential features is an essential task in order to achieve better generalization results. Hence, in this study, the Alexnet is used for extracting the most prominent features from the high dimensional feature space. The Alexnet is an advanced version of CNN that contains five convolution layers, four max pooling layers, two batch normalizations, two fully connected layers and one sigmoid layer. The regular CNN uses tanh as the activation function in each layer; in Alexnet, the ReLU activation function is used for the activation of neurons in each layer instead, to reduce the gradient vanishing problem. The Alexnet consists of 60 million parameters. Moreover, dropout layers are employed to reduce the occurrence of overfitting. Due to these functionalities, the Alexnet performs better than the regular CNN. The Alexnet retrieves optimal features from the users' consumption history and generates a feature map. Afterwards, the feature map is passed to the Adaboost model for the final NTL prediction. Adaboost is an ensemble learning technique in which several weak learners are combined to form a strong learner. It efficiently analyses the consumption patterns and classifies consumers as fair or unfair. Furthermore, a meta-heuristic technique, Ant Bee Colony (ABC), is used to tune the hyperparameters of Adaboost for better convergence and performance results (Table 1).

Table 1. Mapping of limitations and proposed solutions

Limitations identified    | Solutions proposed    | Validations
L1: Imbalanced dataset    | S1: Near Miss         | V1: N/A
L2: High dimensionality   | S2: Alexnet-Adaboost  | V2: Fig. 2 and Fig. 3
L3: High FPR              | S3: Adaboost-ABC      | V3: Fig. 4
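The preprocessing and classification steps described in this section can be sketched as follows with pandas, scikit-learn and imbalanced-learn. The synthetic data stands in for the SGCC consumption records, and a plain AdaBoostClassifier replaces the Alexnet feature extractor and the ABC hyperparameter tuning, so this is only a simplified illustration of the pipeline, not the authors' implementation.

```python
# Minimal sketch of the pipeline described above: interpolation of missing
# values, min-max normalization, Near Miss undersampling and AdaBoost
# classification on synthetic stand-in data.

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import NearMiss

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((1000, 30)))          # daily consumption values
X[X < 0.02] = np.nan                               # simulate missing readings
y = (rng.random(1000) < 0.1).astype(int)           # ~10% "theft" minority class

# 1) Fill gaps from neighbouring readings (linear interpolation per consumer).
X = X.interpolate(axis=1, limit_direction="both")

# 2) Min-max normalization: x' = (x - min) / (max - min).
X = pd.DataFrame(MinMaxScaler().fit_transform(X))

# 3) Near Miss undersampling to balance majority/minority classes.
X_bal, y_bal = NearMiss().fit_resample(X, y)

# 4) AdaBoost classification (ABC-tuned hyperparameters omitted here).
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.2,
                                          random_state=0, stratify=y_bal)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```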
4 Model Evaluation

In this section, the proposed model is trained on the SGCC data and compared with the benchmark classifiers. Their performance is evaluated through different measures.
4.1 Performance Metrics
The proposed model is evaluated through the Area Under the Curve (AUC), precision, recall, MCC and F1-score. A brief description of these metrics is given below. The AUC score is one of the best metrics for imbalanced data classification; its value ranges between 0 and 1 and it measures the separability between the two classes. Precision is the fraction of predicted positives that are actually positive, while recall is the fraction of actual positives that are correctly identified. The MCC is also one of the reliable performance metrics; it summarizes the correlation over the whole confusion matrix.
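For reference, the listed metrics can be computed directly with scikit-learn as sketched below; the labels and scores are made-up values for illustration.

```python
# Minimal sketch: computing the evaluation metrics listed above with
# scikit-learn, given true labels and the classifier's outputs.

from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, matthews_corrcoef)

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                  # ground truth (1 = theft)
y_pred = [0, 0, 1, 0, 0, 1, 1, 1]                  # hard predictions
y_prob = [0.1, 0.2, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7]  # predicted P(theft)

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
print("MCC:      ", matthews_corrcoef(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_prob))  # uses scores, not labels
```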
4.2 Benchmark Models and Simulation Results
In this paper, numerous benchmark techniques are used for a fair comparison.
SVM: The SVM is one of the state-of-the-art benchmark techniques and is widely used for ETD. The simulation results of SVM in terms of precision, recall, accuracy, MCC and F1-score are reported in Table 2.

Table 2. Performance results of SVM

Performance metrics | Results on balanced dataset
Precision           | 0.655
Recall              | 0.98
Accuracy            | 0.72
MCC                 | 0.52
F1-score            | 0.78
Logistic Regression: Logistic Regression (LR) is used for binary classification. It consists of a single neural network layer, and the sigmoid function is applied at the output for the final classification. It is taken as a benchmark model; its performance results are given in Table 3.

Table 3. Performance results of LR

Performance metrics | Results on balanced dataset
Precision           | 0.76
Recall              | 0.68
Accuracy            | 0.73
MCC                 | 0.465283
F1-score            | 0.72
CNN: The CNN is a deep learning technique that is widely used in ETD. Its performance in terms of the various metrics is given in Table 4.

Table 4. Performance results of CNN

Performance metrics | Results on balanced dataset
Precision           | 0.74
Recall              | 0.46
Accuracy            | 0.64
MCC                 | 0.322549
F1-score            | 0.73
4.3 Simulation Results

In this section, we compare the performance of the proposed model with the benchmark classifiers through different performance measures (Table 5).

Table 5. Performance results of Adaboost-ABC

Performance metrics | Results on balanced dataset
Precision           | 0.74
Recall              | 0.98
Accuracy            | 0.72
MCC                 | 0.52
F1-score            | 0.78
Figure 2a shows the ROC curve of the CNN-Adaboost model on the training and testing data, with AUC scores of 0.863 and 0.827, respectively. It attains a high curve because its convolutional and dense layers extract optimal features through local receptive fields and a weight sharing mechanism. Figure 2b presents the ROC curves of SVM on the training and testing data, which are lower than the CNN-Adaboost ROC curve; it achieves AUC values of 0.721 and 0.752 on the training and testing data, respectively. Figure 3a shows the ROC curve of LR on the training and testing data. LR is a simple form of neural network that gives good results on smaller datasets; however, its performance decreases drastically on larger datasets. Due to the high dimensionality of the SGCC data, LR does not give good results, attaining AUC values of 0.75 and 0.73 on the training and testing data, respectively. Figure 3b presents the ROC curve of the AdaBoost classifier, which attains AUC scores of 0.93 and 0.91 on the training and test data, respectively. AdaBoost belongs to the group of ensemble learning classifiers, which combine several weak learners into a strong one and increase model generalization; however, they incur high computational time and require large memory to train multiple classifiers. AdaBoost nevertheless gives good results because several weak learners are trained in a parallel way. Moreover, we extract optimal features through the CNN model and pass them to the Adaboost; these extracted features enhance model generalization and reduce computational time. We apply ABC to find the hyperparameters of the AdaBoost classifier.
Fig. 2. ROC curve of implemented classifiers
Fig. 3. ROC curve of implemented classifiers
Fig. 4. ROC curve of implemented classifiers
Figure 4a presents the PR curve of all implemented classifiers. The PR curve is a good performance indicator for an imbalanced dataset because it gives equal importance to both classes; it plots precision against recall at different thresholds between 0 and 1. Our proposed model achieves a higher PR curve than the benchmark classifiers, which indicates that Alexnet extracts optimal features from the electricity consumption dataset. Moreover, we use ABC to tune the optimal hyperparameters of Adaboost. Figure 4b shows the ROC curves of Alexnet-Adaboost-ABC and the other classifiers. Alexnet-Adaboost-ABC, SVM, LR and CNN-Adaboost attain 0.91, 0.72, 0.73 and 0.82 AUC values, respectively. All results indicate that our proposed model performs better than all benchmark classifiers because Alexnet extracts optimal features through its convolutional and pooling layers.
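A minimal way to reproduce this kind of ROC and PR comparison is sketched below on synthetic data with a single AdaBoost classifier; it is illustrative only and not the authors' plotting code or dataset.

```python
# Minimal sketch: ROC and precision-recall curves of the kind compared above,
# drawn for a toy classifier on synthetic imbalanced data.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, precision_recall_curve, auc

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]             # predicted P(positive class)

fpr, tpr, _ = roc_curve(y_te, scores)
prec, rec, _ = precision_recall_curve(y_te, scores)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.2f}")
ax1.plot([0, 1], [0, 1], "k--")                     # chance line
ax1.set(xlabel="False positive rate", ylabel="True positive rate", title="ROC curve")
ax1.legend()
ax2.plot(rec, prec)
ax2.set(xlabel="Recall", ylabel="Precision", title="PR curve")
plt.tight_layout()
plt.show()
```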
5 Conclusion

In this article, we propose a hybrid model for efficient ETD, which is a combination of the Alexnet and AdaBoost classifiers. Alexnet is used to extract optimal features from the electricity consumption data, while AdaBoost is exploited to classify normal and abnormal consumption. Moreover, the hyperparameters of AdaBoost are tuned through ABC, which is a meta-heuristic technique. All classifiers are trained and tested on an electricity dataset provided by SGCC. The dataset is of an imbalanced nature, which affects the classifiers' performance and increases the misclassification rate, so we use NM to balance the dataset. The proposed classifier is compared with the benchmark classifiers SVM, LR and CNN-Adaboost, and is evaluated through different performance measures: precision, recall, ROC curve, MCC, accuracy, F1-score, PR curve and ROC-AUC. The proposed model achieves an AUC score of 0.91, which is the highest among the compared models. Moreover, the simulation results depict that the proposed model obtains 3%, 2% and 4% higher values of F1-score, AUC and MCC, respectively.
References 1. Ding, N., Ma, H., Gao, H., Ma, Y., Tan, G.: Real-time anomaly detection based on long short-term memory and Gaussian mixture model. Comput. Electr. Eng. 79, 106458 (2019) 2. Yip, S.-C., Tan, W.-N., Tan, C.K., Gan, M.-T., Wong, K.S.: An anomaly detection framework for identifying energy theft and defective meters in smart grids. Int. J. Electr. Power Energ. Syst. 101, 189–203 (2018) 3. Zheng, Z., Yang, Y., Niu, X., Dai, H.-N., Zhou, Y.: Wide and deep convolutional neural networks for electricity-theft detection to secure smart grids. IEEE Trans. Ind. Inf. 14(4), 1606–1615 (2017) 4. Li, S., Han, Y., Yao, X., Yingchen, S., Wang, J., Zhao, Q.: Electricity theft detection in power grids with deep learning and random forests. J. Electr. Comput. Eng. (2019) 5. Buzau, M.M., Tejedor-Aguilera, J., Cruz-Romero, P., G´omez-Exp´osito, A.: Detection of nontechnical losses using smart meter data and supervised learning. IEEE Trans. Smart Grid 10(3), 2661–2670 (2018) 6. Buzau, M.M., Tejedor-Aguilera, J., Cruz-Romero, P., G´omez-Exp´osito, A.: Hybrid deep neural networks for detection of non-technical losses in electricity smart meters. IEEE Trans. Power Syst. 35(2), 1254–1263 (2019) 7. Ullah, A., Javaid, N., Samuel, O., Imran, M., Shoaib, M.: CNN and GRU based deep neural network for electricity theft detection to secure smart grid. In: 2020 International Wireless Communications and Mobile Computing (IWCMC), pp. 1598–1602. IEEE (2020)
8. Adil, M., Javaid, N., Qasim, U., Ullah, I., Shafiq, M., Choi, J.-G.: LSTM and bat-based RUSBoost approach for electricity theft detection. Appl. Sci. 10(12), 4378 (2020) 9. Khan, Z.A., Adil, M., Javaid, N., Saqib, M.N., Shafiq, M., Choi, J.G.: Electricity theft detection using supervised learning techniques on smart meter data. Sustainability 12(19), 8023 (2020) 10. Javaid, N., Jan, N., Javed, M.U.: An adaptive synthesis to handle imbalanced big data with deep Siamese network for electricity theft detection in smart grids. J. Parallel Distrib. Comput. 153, 44–52 (2021) 11. Ramos, C.C., Rodrigues, D., de Souza, A.N., Papa, J.P.: On the study of commercial losses in Brazil: a binary black hole algorithm for theft characterization. IEEE Trans. Smart Grid 9(2), 676–683 (2016) 12. Hasan, M., Toma, R.N., Nahid, A.A., Islam, M.M., Kim, J.M.: Electricity theft detection in smart grid systems: a CNN-LSTM based approach. Energies 12(17), 3310 (2019) 13. Avila, N.F., Figueroa, G., Chu, C.C.: NTL detection in electric distribution systems using the maximal overlap discrete wavelet-packet transform and random undersampling boosting. IEEE Trans. Power Syst. 33(6), 7171–7180 (2018) 14. Gul, H., Javaid, N., Ullah, I., Qamar, A.M., Afzal, M.K., Joshi, G.P.: Detection of nontechnical losses using SOSTLink and bidirectional gated recurrent unit to secure smart meters. Appl. Sci. 10(9), 3151 (2020) 15. Aslam, Z., Javaid, N., Ahmad, A., Ahmed, A., Gulfam, S.M.: A combined deep learning and ensemble learning methodology to avoid electricity theft in smart grids. Energies 13(21), 5599 (2020) 16. Maamar, A., Benahmed, K.: A hybrid model for anomalies detection in AMI system combining K-means clustering and deep neural network. Comput. Mater. Continua 60(1), 15–39 (2019) 17. Viegas, J.L., Esteves, P.R., Vieira, S.M.: Clustering-based novelty detection for identification of non-technical losses. Int. J. Electr. Power Energ. Syst. 101, 301–310 (2018) 18. Coma-Puig, B., Carmona, J.: Bridging the gap between energy consumption and distribution through non-technical loss detection. Energies 12(9), 1748 (2019) 19. Fenza, G., Gallo, M., Loia, V.: Drift-aware methodology for anomaly detection in smart grid. IEEE Access 7, 9645–9657 (2019) 20. Huang, Y., Qifeng, X.: Electricity theft detection based on stacked sparse denoising autoencoder. Int. J. Electr. Power Energ. Syst. 125, 106448 (2021) 21. Li, W., Logenthiran, T., Phan, V.T., Woo, W.L.: A novel smart energy theft system (SETS) for IoT-based smart home. IEEE Internet Things J. 6(3), 5531–5539 (2019) 22. Ghasemi, A.A., Gitizadeh, M.: Detection of illegal consumers using pattern classification approach combined with Levenberg-Marquardt method in smart grid. Int. J. Electr. Power Energ. Syst. 99, 363–375 (2018) 23. Zheng, K., Chen, Q., Wang, Y., Kang, C., Xia, Q.: A novel combined data-driven approach for electricity theft detection. IEEE Trans. Ind. Inf. 15(3), 1809–1819 (2018) 24. Wang, X., Yang, I., Ahn, S.-H.: Sample efficient home power anomaly detection in real time using semi-supervised learning. IEEE Access 7, 139712–139725 (2019) 25. Razavi, R., Gharipour, A., Fleury, M., Akpan, I.J.: A practical feature-engineering framework for electricity theft detection in smart grids. Appl. Energ. 238, 481–494 (2019) 26. Punmiya, R., Choe, S.: Energy theft detection using gradient boosting theft detector with feature engineering-based preprocessing. IEEE Trans. Smart Grid 10(2), 2326–2329 (2019) 27. 
Bitam, S., Batouche, M., Talbi, E.G.: A survey on bee colony algorithms. In: 2010 IEEE International Symposium on Parallel and Distributed Processing, Workshops and Ph.D. Forum (IPDPSW), pp. 1–8. IEEE (2010)
Blockchain and IPFS Based Service Model for the Internet of Things

Hajra Zareen1, Saba Awan1, Maimoona Bint E Sajid1, Shakira Musa Baig1, Muhammad Faisal2, and Nadeem Javaid1(B)

1 COMSATS University, Islamabad 44000, Pakistan
2 Iqra National University, Peshawar, Pakistan
Abstract. In this paper, a blockchain and InterPlanetary File System (IPFS) based service model is proposed for the Internet of Things (IoT). In the IoT, nodes' credentials and generated data are stored on the IPFS in a hashed format. In order to ensure the security of the data, the encrypted hash is stored on the blockchain, because blockchain itself is very expensive for storing large amounts of data, while in the case of a centralized database there is a possibility of data tampering and information leakage. Moreover, a service model is proposed for delivering services from the service providers to the consumers. In existing work, a product consensus mechanism is performed between admin and user, which is replaced here with a blockchain service model in which consumers send their requests for the required services to the service provider through the blockchain. The involvement of peer and miner nodes in consensus mechanisms, however, hinders finding the service provider, and the existing scheme is computationally expensive and incurs delay due to the lengthy procedure of verification and consensus. Here, the blockchain is used to record the evidence of the services. Also, a service verification scheme is designed using the Secure Hash Algorithm-256 (SHA-256). Furthermore, a smart contract is utilized to settle disputes between consumers and service providers. The simulation results show that Proof of Authority consumes less gas and has lower latency than Proof of Work, which represents the efficiency and effectiveness of the proposed solution.

Keywords: Internet of Things · Blockchain · InterPlanetary file system · Smart contract · Consensus mechanism · Service model
1 Introduction
The Internet of Things (IoT) plays a significant role in today's world by promoting social and economic development. A Wireless Sensor Network (WSN) is considered the key technology in the IoT architecture and plays a significant role in promoting IoT. The IoT is now used extensively in various fields like smart cities, healthcare, smart power grids, etc. As the IoT consists of many devices, the security of IoT devices is compromised by different kinds of cyber attacks. Therefore, several centralized and distributed architectures have been proposed
to tackle these attacks. However, the existing solutions are not suitable due to single point of failure, high latency, storage constraints, high computational cost, big data collection and lack of privacy, which lead to inaccurate decision making [1]. Edge, cloud and transparent computing have been developed to extend terminal devices' functionality with on-demand service provisioning schemes [2]. The existing service provisioning schemes face security challenges: a service provider may provide nonconformant or malicious services, and dishonest clients may deny having received the correct services. Traditional nonrepudiation mechanisms can be classified into two categories: Trusted Third Party (TTP) and non-TTP schemes. In the former, single point of failure and performance bottlenecks are the major issues, while in the latter, high performance cost, the absence of nonrepudiation evidence and weak evidence are notable issues [3]. Nakamoto introduced the blockchain in 2008. It consists of blocks that are arranged in sequence and chained together; each block contains the hash of the previous block, the hash of the current block and the data. The data of each block is stored in the form of a Merkle Tree (MT), so if any data is tampered with, it can easily be identified by comparing the data with the root hash [6] (a minimal sketch of this tamper check is given at the end of this section). Blockchain is utilized to overcome the limitations of a centralized system and the involvement of a third party. Blockchain consists of a Peer-to-Peer (P2P) network and a distributed ledger, which offer tamper-resistant, secure and immutable services. Blockchain's first implementation was Bitcoin; later, various domains adopted it, such as the Internet of Vehicles, the IoT, the energy sector, etc. [4]. However, blockchain is very expensive for storing a large amount of data. When two databases (blockchain and a local database) are used to store each node's data, extra maintenance cost and storage capacity are required. Moreover, de-registration and re-registration of the nodes require extra storage space, and there is a possibility of data tampering and information leakage. Furthermore, the product consensus between users and admin involves miner nodes and peer nodes. In addition, the computational cost and delay increase due to the lengthy procedure of verification and the consensus mechanism. To overcome the issues addressed above, a blockchain and InterPlanetary File System (IPFS) based service model for IoT is proposed, motivated by [6]. The contributions of the paper are as follows:
– the blockchain and IPFS based service model is proposed. The blockchain is utilized to store the evidence of the services while the IPFS is used to store and share data securely,
– a service verification scheme is designed which is based on the Secure Hashing Algorithm (SHA-256) and
– the smart contract is utilized to settle service disputes between consumers and service providers.
The remainder of the paper is organized as follows: related studies are presented in Sect. 2. The proposed system model is demonstrated in Sect. 3. Simulation results are discussed in Sect. 4 and Sect. 5 concludes this work.
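The Merkle-root tamper check and the SHA-256 hashing mentioned above can be illustrated with a few lines of Python. This is a simplified sketch, not the contract logic or evidence format used in the paper; the record strings are made up for illustration.

```python
# Minimal sketch of the tamper check described above: hash each record with
# SHA-256, combine the hashes pairwise into a Merkle root, and detect any
# modification by recomputing and comparing the root.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    level = [sha256(r.encode()) for r in records]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

records = ["node-7:temp=21.4", "node-9:hum=55", "node-3:temp=19.8"]
stored_root = merkle_root(records)          # kept on-chain as evidence

records[1] = "node-9:hum=99"                # tamper with one record off-chain
print("tampered:", merkle_root(records) != stored_root)  # True
```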
2 Related Work
Because of the large number of IoT devices, security is a major concern and it is urgent to protect them from cyber attacks. Several centralized and distributed architectures have been proposed to tackle these attacks. The existing solutions are not suitable due to single point of failure, high latency, storage constraints and high computational cost. Besides this, the traditional systems face issues like big data collection and lack of privacy: sufficient data collection is not possible, which leads to inaccurate decision-making [1]. The Industrial IoT (IIoT) uses smart sensors to enhance manufacturing and industrial processes, and network computing technologies increase the functionality of the IIoT. Almost all existing service-provisioning schemes face new security challenges, which causes anxiety among stakeholders and concern about using such provisioning technologies. A service provider may provide nonconformant or malicious services, while a dishonest client may deny having received the correct services. The traditional nonrepudiation mechanisms can be classified into two categories: TTP and non-TTP schemes. In the former, single point of failure and performance bottlenecks are the major issues of the centralized TTP; in the latter, high performance cost, the lack of nonrepudiation evidence and only weak evidence for any party to prove itself are notable issues [3]. WSNs have issues related to the privacy and security of data because of their constraints. Besides this, authentication of every node and trust management are necessary. However, previous work handles either security and privacy or authentication and trust management, but none of it handles trust management and authentication in WSN and IoT together [4]. Traditional IoT identity authentication protocols use centralized authentication methods and mostly rely on trusted third parties; as a result, they suffer from a single point of failure [7]. The WSN has a self-organizing structure where various sensor nodes are deployed at random positions to collect data from the physical environment. There are two types of sensor nodes, named beacon and unknown nodes. Beacon nodes know their location through the Global Positioning System (GPS) or are assigned it manually, while unknown nodes obtain their position through localization. Localization has a significant impact in many WSN applications, but accurate location estimation is still a big challenge, and errors in location estimation affect accuracy. Energy conservation is another issue that decreases the lifetime of a WSN, and a further concern is malicious attacks or activity, in which false location information is broadcast in the network [8]. In routing algorithms, malicious nodes attack and compromise the routing nodes and send wrong queue-length information to maximize their chances of receiving packets. When such a node receives the packets, it does not forward them to neighbor nodes but discards them, creating a black hole in the network; this phenomenon is called a black hole attack. For trust management among routing nodes a third party is utilized, but it does not tackle the multi-hop distributed WSN. Besides
this, there is the possibility of attack and compromise by malicious nodes, so security and fairness cannot be assured [9]. All IoT devices contain personal information and are connected to the internet, so for this reason they require certification. Lightweight IoT devices that perform simple tasks have low-performance chipsets or no operating system running. Most IoT devices do not have authentication methods because of their limited resources; they do not support encryption protocols or certificates and are therefore vulnerable [10]. The routing protocols require a CA to authenticate, identify or remove IoT devices. There is a trust issue between IoT vendors in central management due to disagreement in a centralized system. Besides this, a secret key sharing mechanism is needed, which is costly to implement [11]. In dynamic WSNs, key management security is compromised by the BS, which is untrusted and easily compromised; it also causes additional overhead on the sensors in the key distribution scheme, and it involves a complex protocol design [12]. The main issue of hierarchical sensor networks is securing data transmission and privacy without high computational cost [13]. Mobile phones act as sensor nodes with limited resources, so utilizing blockchain requires abundant resources to perform Proof of Work (PoW). Besides this, the WSN has limited battery power, which drains quickly and thus affects the whole network [14]. The authenticity of the data and user privacy in a vehicular sensor network are the major issues [15]. Mobile devices have limited resources, so applying blockchain in a wireless mobile network requires abundant computational and storage resources to solve the PoW puzzle for the mining process [16]. No secure caching scheme is available for Information-Centric Networks (ICNs) based on WSNs [17]. Nodes in a WSN have limited energy and storage capacity; for this reason, many network nodes prefer to preserve their resources, and if most nodes act selfishly and do not forward packets, the whole network will not work properly [18]. Crowdsensing networks utilize mobile phone sensors for data collection to reduce cost. However, there is a threat of privacy leakage; for this reason, users do not trust such networks, upload incorrect information and participate little [19]. Nowadays, the WSN plays a significant role; however, sensor nodes have constraints like limited energy and transmission range and therefore face several threats. These threats can be divided into two parts, data and routing. In a data attack, a malicious node changes the data in the payload to learn the transmission details or to introduce variation, while in routing attacks, malicious nodes force the selection of a wrong path [20]. The WSN has a self-organizing structure where various sensor nodes are deployed at random positions to collect data from the physical environment. There are two types of sensor nodes, named beacon and unknown nodes. Beacon nodes know their location through GPS or are assigned it manually, while unknown nodes obtain their position through localization. There are two types of localization algorithms, known as range based and range free. The former
approach requires special hardware, so its localization procedure is costly, while in the range-free approach the mobility of sensor nodes can change the framework's network topology; besides this, constant alarms caused by malicious nodes make the framework overprotective [21]. Different types of threats occur in the Industrial IoT: workers gain access to restricted areas of the industry and steal or misplace products, information or important records by compromising sensor nodes or by gaining open access to the information. Furthermore, IoT devices must be registered, and the registration must be collected and verified every time so that no one is able to modify or change it [22]. WSNs have a significant impact on IoT development. There are mainly two types of threats in WSNs, external and internal attacks: an external attacker attacks from outside the network, while in an internal attack a compromised node attacks from within the network, so the detection of malicious nodes within the network is very important. In addition, there is no way to record the original data and the detection process for later trace-back [23]. Various challenges arise when blockchain is deployed in the IIoT: blockchain requires massive computing power, while the IIoT demands maximum storage capacity and bandwidth to handle huge amounts of data. For these reasons, resource-constrained devices do not support certain operations because of the conflicts between heterogeneous network resources and IIoT devices [24]. The CPU usage rate of the mining process, hash operations, hash quality, the number of self-proposed blocks, system throughput, the remaining blocks filtered by UBOF at different scales of accounts, the remaining blocks at different scales of throughput, and storage cost are the key parameters considered for the validation of the scheme proposed in [25]. Network latency and data delivery issues occur with mobile sensors: mobile WSN sensor nodes cannot send information to their mobile cluster head (CH) until the mobile CH reaches their cluster boundary, and mobile CHs increase the possibility that malicious nodes join the network and compromise its data privacy [26]. The shift of the global population to urban areas increases the burden on smart city network architecture in terms of structural scalability, bandwidth constraints, low latency, high mobility, single points of failure, and the security and privacy of data [27]. Because IoT devices hold a huge amount of data, accessing the data takes a long time, since IoT sensors have limited computing power and storage capacity; for this reason, they quickly run out of battery when processing transactions faster. Limited bandwidth and slow update rates can cause major issues in time-critical applications, which require faster information updates [28].
3 System Model
In this paper, a blockchain and IPFS based service model is proposed, motivated by [3]. It consists of the following three entities. Service Provider (SP): any person or organization that owns the data and provides services to the consumers (C).
Consumer (C): requests a service and executes the service program. Arbitrator (A): drawn from the miner nodes, it uses the smart contract for arbitration. The arbitrator's responsibility is to settle disputes between C and the service provider (SP); A is also utilized to maintain the distributed ledger. The proposed system consists of two phases, namely registration and the service model.

3.1 Registration
The registration of every node is compulsory before it provides any services. In the registration phase, all nodes are registered in the blockchain and each obtains a unique account address. The node also records other relevant information, such as the service name, the service data and its hash value. After successful registration, nodes can provide the desired services. The service record of every node is stored on IPFS.
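As an illustration only (not the authors' implementation), the Python sketch below shows the kind of record a registering node might carry; the field names, the `ipfs_add` placeholder and the in-memory `ledger` dictionary standing in for the on-chain registry are all our assumptions.

```python
# Illustrative sketch of the registration phase; the data structures and the
# ipfs_add() placeholder are assumptions, not the authors' implementation.
import hashlib
from dataclasses import dataclass

@dataclass
class Registration:
    account: str        # unique blockchain account address of the node
    service_name: str
    data_hash: str      # SHA-256 hash of the service data
    ipfs_cid: str       # content address of the service record on IPFS

def ipfs_add(data: bytes) -> str:
    # Placeholder: a real deployment would pin `data` on an IPFS node and
    # return the content identifier it reports.
    return "Qm" + hashlib.sha256(data).hexdigest()[:44]

ledger = {}  # stands in for the on-chain registry: account address -> Registration

def register(account: str, service_name: str, service_data: bytes) -> Registration:
    record = Registration(
        account=account,
        service_name=service_name,
        data_hash=hashlib.sha256(service_data).hexdigest(),
        ipfs_cid=ipfs_add(service_data),   # service record kept on IPFS
    )
    ledger[account] = record
    return record

register("0xA1...", "temperature-feed", b"example service data")
```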
3.2 Service Model
This section describes the Elliptic Curve Integrated Encryption Scheme (ECIES) and IPFS, and then presents the blockchain and IPFS based service model.

Elliptic Curve Integrated Encryption Scheme. ECIES is an enhancement of Elliptic Curve Cryptography (ECC) and is a public key based encryption scheme. It provides semantic security against intruders [29], who may mount plaintext and ciphertext attacks. Our service model uses ECIES to encrypt the service data.

InterPlanetary File System. Our service model utilizes IPFS, a distributed file storage system. Because IPFS is content-addressable, any modification of the data also changes its content address. After the data is encrypted with ECIES, it is uploaded to IPFS; a hash is generated for each piece of data and later stored on the blockchain.

Description of the Service Model. In the traditional scheme, product consensus between admin and user is replaced by the service model. The user sends a request for the required services directly to the SP, so no central authority (CA) is needed to find a suitable SP. Here, blockchain is utilized to record evidence of the services. The whole service is split into two non-executable fragments: the major, off-chain part is sent through IPFS and the tiny part is delivered on-chain, which reduces the burden on the blockchain and forces C to submit evidence of the off-chain part (IPFS) on the blockchain. These steps ensure the fairness of the mechanism. In addition, a service verification scheme based on SHA-256 is designed that validates only the on-chain evidence instead
of the whole service program. A smart contract based technique is utilized to settle service disputes between C and SP; it provides fair and effective dispute resolution. The whole service scheme proceeds as follows (see also Fig. 1).
Step 1: C sends a transaction request to SP for the desired service. The request consists of the service name and a crypto token.
Step 2: SP calculates the hash of S2, the major part of service S, using the SHA-256 hash function. The hash is encapsulated with the token in a transaction and sent to the respective C as on-chain evidence.
[Figure 1 shows the two phases between Client and Service Provider: Phase 1, node registration in the blockchain; Phase 2, the service model, with interactions 2.1 Request Service S, 2.2 Publish Hash(S2), 2.3 Deliver S2 through IPFS, 2.4 Confirm Hash(S2), 2.5 Publish S1 and 2.6 Confirm Hash(S), together with data storage and encryption.]

Fig. 1. Blockchain and IPFS based service model for IoT
Step 3: Once the hash of S2 has been published on the blockchain, SP sends the S2 part to C through IPFS. Because the major part is sent through IPFS and only the tiny part is delivered on-chain, the burden on the blockchain is reduced and the client is forced to submit evidence of the off-chain part (IPFS) on the blockchain. These steps ensure the fairness of the nonrepudiation mechanism.
Step 4: C computes the hash value of the received S2 with SHA-256 and compares it with the value stored in the blockchain. If both values match, C confirms this by sending a transaction confirmation with a token to the SP account. If the values do not match, C invokes the smart contract to terminate the service program. In this step, SP can also terminate the service program if no response is received from C within a specified time. In Step 4, C is bound to confirm the off-chain part S2 in order to obtain the complete executable service program; otherwise neither party obtains any benefit if the service program is terminated, which demonstrates the true fairness of the scheme.
Step 5: Once C's confirmation of the previous transaction is accessible on the blockchain, SP sends a transaction as evidence that comprises the small part S1 and a token to the C account.
Step 6: After receiving S1, C restores the whole service. C then has two options: confirm the service or call for arbitration through the smart contract. As in Step 4, SP has the right to request arbitration if no response is received within the specified time. Note that in Step 5 the hash of S1 is already published as evidence on the blockchain, so the smart contract can easily resolve a dispute based on the previous evidence without a final confirmation from C.
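To make Steps 2-4 concrete, here is a minimal Python sketch of the hash-evidence check, written by us rather than taken from the paper; the in-memory `chain` list stands in for the blockchain, the direct hand-over of `delivered_s2` stands in for IPFS delivery, and the ECIES encryption of the data (Sect. 3.2) is omitted. Only the SHA-256 comparison mirrors the scheme directly.

```python
# Sketch of the Step 2-4 evidence flow; `chain` simulates on-chain records and
# IPFS/ECIES handling is intentionally left out of this toy example.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

chain = []  # simulated ledger of evidence transactions

# Step 2: SP publishes Hash(S2), the major part of service S, as on-chain evidence.
service = b"...whole executable service program..."
s1, s2 = service[:8], service[8:]            # tiny on-chain part, major off-chain part
chain.append({"type": "evidence", "hash_s2": sha256_hex(s2)})

# Step 3: SP delivers S2 to C off-chain (via IPFS in the real scheme).
delivered_s2 = s2

# Step 4: C recomputes the hash and confirms only if it matches the published evidence.
if sha256_hex(delivered_s2) == chain[-1]["hash_s2"]:
    chain.append({"type": "confirm_s2"})     # C sends the confirmation transaction
else:
    chain.append({"type": "arbitrate"})      # smart-contract arbitration would be invoked
```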
3.3 Mapping of Limitations to the Solutions
In the first limitation (L1), two databases (blockchain and a local database) are considered, which require extra maintenance cost and storage capacity; the local database also opens the possibility of data tampering and information leakage. To overcome L1, IPFS is utilized as the first solution (S1) of the proposed model. In the registration phase, all nodes are registered in the blockchain before providing any services; after successful registration, they can provide the desired services, and the service record of every node is stored on IPFS, which then generates the hash of the data. Storing plain data on IPFS alone does not secure it, because its hash is shared with every network node; data security is therefore ensured by encryption, after which the hash of the encrypted data is stored on the blockchain (Table 1).

Table 1. Mapping of identified limitations, proposed solutions and validations done

Identified limitations | Proposed solutions | Validations done
L1: Extra maintenance cost and storage capacity; data tampering and information leakage in the local database | S1: IPFS and encryption | V1: average gas consumption; V2: average transaction latency
L2: Computational cost increases and delay occurs | S2: Blockchain based service model | V1: average gas consumption; V2: average transaction latency
In the second limitation (L2), product consensus among user and admin involves miner nodes and peer nodes, which increases computational cost and delay because of the lengthy procedure of verification and consensus among nodes. To overcome L2, the service model is proposed as the second solution (S2): product consensus between admin and user is replaced by the blockchain service model, and users send requests for the required services directly to the service provider, so that no CA is needed to find a suitable service provider. Average gas consumption and average transaction latency are the key parameters considered for the validation (V1, V2) of the proposed service model.
4 Simulation Results and Discussion
This section provides the simulation results of the proposed system model. The average gas consumption of five steps is illustrated for both the PoW and PoA consensus mechanisms. In PoW, all nodes participate in consensus and maximum resources are utilized for mining, whereas in PoA only pre-selected nodes participate in consensus, so fewer resources are required.
[Figure 2 is a bar chart of average gas consumption (×10^4 gas) for PoW and PoA over Steps 1, 2, 4, 5 and 6.]

Fig. 2. Average gas consumption
[Figure 3 plots average transaction latency (s) against the number of transactions (25-225) for PoA and PoW.]

Fig. 3. Average transaction latency
According to Fig. 2, Step 1 requires more gas with PoA than with PoW due to the pre-selection of miner nodes. Steps 2 and 5 consume the most gas because the hashes of S2 and S1 are published in those steps.
In Fig. 3, the transaction latency illustrates how efficiently the blockchain system handles transactions. The two consensus mechanisms are used to measure the average transaction latency. PoA has lower latency than PoW because only the pre-selected nodes take part in mining.
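For readers who wish to reproduce such measurements, the following hedged sketch shows one way the two metrics could be collected with web3.py against a local Ethereum node; the node URL, the `service_contract` object and its function names are hypothetical and are not taken from the paper.

```python
# Illustrative measurement of average gas consumption and transaction latency;
# the contract object and its functions are placeholders, not the authors' code.
import time
from statistics import mean
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # assumed local PoA/PoW node

def run_step(contract_fn, sender):
    """Send one on-chain step and return (gas_used, latency_in_seconds)."""
    start = time.time()
    tx_hash = contract_fn.transact({"from": sender})
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt.gasUsed, time.time() - start

gas_log = {}      # step name -> list of gasUsed values over repeated runs
latency_log = []  # latencies of all submitted transactions

# ... populate the logs by calling run_step(service_contract.functions.<step>(...), sender)
#     for Steps 1, 2, 4, 5 and 6 of the service scheme ...

avg_gas = {step: mean(vals) for step, vals in gas_log.items() if vals}
avg_latency = mean(latency_log) if latency_log else None
```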
5 Conclusion
In this paper, we have proposed a blockchain and IPFS based service model for IoT. The proposed system consists of two phases, namely registration and the service model. In the registration phase, all nodes are registered in the blockchain before service provisioning; after successful registration, they can provide the desired services, and the record of every node is stored on IPFS. The second phase consists of the service model, in which consumers send requests for the required services to the service provider's blockchain account, so no middleman is needed to find a suitable service provider. Blockchain is utilized to record evidence of the services, and a service verification scheme based on SHA-256 is designed. Besides this, the smart contract is utilized to settle service disputes between clients and service providers; in case of conflict between the two parties, this mechanism provides fair and effective dispute resolution, so it can easily be trusted. The simulation results show that PoA consumes less gas than PoW and also has lower latency, which demonstrates the efficiency and effectiveness of the proposed solution.
References 1. Rathore, S., Kwon, B.W., Park, J.H.: BlockSecIoTNet: blockchain-based decentralized security architecture for IoT network. J. Netw. Comput. Appl. 143, 167–177 (2019) 2. Alghamdi, T.A., Ali, I., Javaid, N., Shafiq, M.: Secure service provisioning scheme for lightweight IoT devices with a fair payment system and an incentive mechanism based on blockchain. IEEE Access 8, 1048–1061 (2019) 3. Xu, Y., Ren, J., Wang, G., Zhang, C., Yang, J., Zhang, Y.: A blockchain-based nonrepudiation network computing service scheme for industrial IoT. IEEE Trans. Ind. Inf. 15(6), 3632–3641 (2019) 4. Moinet, A., Darties, B., Baril, J.L.: Blockchain based trust and authentication for decentralized sensor networks. arXiv preprint arXiv:1706.01730 (2017) 5. Sultana, T., Almogren, A., Akbar, M., Zuair, M., Ullah, I., Javaid, N.: Data sharing system integrating access control mechanism using blockchain-based smart contracts for IoT devices. Appl. Sci. 10(2), 488 (2020) 6. Javed, M.U., Javaid, N., Aldegheishem, A., Alrajeh, N., Tahir, M., Ramzan, M.: Scheduling charging of electric vehicles in a secured manner by emphasizing cost minimization using blockchain technology and IPFS. Sustainability 12(12), 5151 (2020) 7. Cui, Z., et al.: A hybrid BlockChain-based identity authentication scheme for multiWSN. IEEE Trans. Serv. Comput. 13(2), 241–251 (2020)
8. Kim, T.H., et al.: A novel trust evaluation process for secure localization using a decentralized blockchain in wireless sensor networks. IEEE Access 7, 184133– 184144 (2019) 9. Yang, J., He, S., Xu, Y., Chen, L., Ren, J.: A trusted routing scheme using blockchain and reinforcement learning for wireless sensor networks. Sensors 19(4), 970 (2019) 10. Hong, S.: P2P networking based internet of things (IoT) sensor node authentication by blockchain. Peer-to-Peer Netw. Appl. 13(2), 579–589 (2019). https://doi.org/ 10.1007/s12083-019-00739-x 11. Ramezan, G., Leung, C.: A blockchain-based contractual routing protocol for the internet of things using smart contracts. Wirel. Commun. Mobile Comput. (2018) 12. Tian, Y., Wang, Z., Xiong, J., Ma, J.: A blockchain-based secure key management scheme with trustworthiness in DWSNs. IEEE Trans. Ind. Inf. 16(9), 6193–6202 (2020) 13. Uddin, M.A., Stranieri, A., Gondal, I., Balasurbramanian, V.: A lightweight blockchain based framework for underwater IoT. Electronics 8(12), 1552 (2019) 14. Sergii, K., Prieto-Castrillo, F.: A rolling blockchain for a dynamic WSNs in a smart city. arXiv preprint arXiv:1806.11399 (2018) 15. Kolumban-Antal, G., Lasak, V., Bogdan, R., Groza, B.: A secure and portable multi-sensor module for distributed air pollution monitoring. Sensors 20(2), 403 (2020) 16. Liu, M., Yu, F.R., Teng, Y., Leung, V.C., Song, M.: Computation offloading and content caching in wireless blockchain networks with mobile edge computing. IEEE Trans. Veh. Technol. 67(11), 11008–11021 (2018) 17. Mori, S.: Secure caching scheme by using blockchain for information-centric network-based wireless sensor networks. J. Signal Process. 22(3), 97–108 (2018) 18. Ren, Y., Liu, Y., Ji, S., Sangaiah, A.K., Wang, J.: Incentive mechanism of data storage based on blockchain for wireless sensor networks. Mobile Inf. Syst. (2018) 19. Jia, B., Zhou, T., Li, W., Liu, Z., Zhang, J.: A blockchain-based location privacy protection incentive mechanism in crowd sensing networks. Sensors 18(11), 3894 (2018) 20. Kumar, M.H., Mohanraj, V., Suresh, Y., Senthilkumar, J., Nagalalli, G.: Trust aware localized routing and class based dynamic block chain encryption scheme for improved security in WSN. J. Ambient Intell. Human. Comput. 12, 5287–5295 (2020). https://doi.org/10.1007/s12652-020-02007-w 21. Goyat, R., Kumar, G., Rai, M.K., Saha, R., Thomas, R., Kim, T.H.: Blockchain powered secure range-free localization in wireless sensor networks. Arab. J. Sci. Eng. 45(8), 6139–6155 (2020). https://doi.org/10.1007/s13369-020-04493-8 22. Rathee, G., Balasaraswathi, M., Chandran, K.P., Gupta, S.D., Boopathi, C.S.: A secure IoT sensors communication in Industry 4.0 using blockchain technology. J. Ambient Intell. Human. Comput. 12, 533–545 (2020). https://doi.org/10.1007/ s12652-020-02017-8 23. She, W., Liu, Q., Tian, Z., Chen, J.S., Wang, B., Liu, W.: Blockchain trust model for malicious node detection in wireless sensor networks. IEEE Access 7, 38947– 38956 (2019) 24. Liu, Y., Wang, K., Lin, Y., Xu, W.: LightChain: a lightweight blockchain system for industrial internet of things. IEEE Trans. Ind. Inform. 15(6), 3571–3581 (2019) 25. Feng, H., Wang, W., Chen, B., Zhang, X.: Evaluation on frozen shellfish quality by blockchain based multi-sensors monitoring and SVM algorithm during cold storage. IEEE Access 8, 54361–54370 (2020)
26. Haseeb, K., Islam, N., Almogren, A., Din, I.U.: Intrusion prevention framework for secure routing in WSN-based mobile Internet of Things. IEEE Access 7, 185496– 185505 (2019) 27. Sharma, P.K., Park, J.H.: Blockchain based hybrid network architecture for the smart city. Future Gener. Comput. Syst. 86, 650–655 (2018) 28. Rovira-Sugranes, A., Razi, A.: Optimizing the age of information for blockchain technology with applications to IoT sensors. IEEE Commun. Lett. 24(1), 183–187 (2019) ´ 29. Gayoso Mart´ınez, V., Hern´ andez Encinas, L., S´ anchez Avila, C.: A survey of the elliptic curve integrated encryption scheme (2010)
Building Social Relationship Skill in Digital Work Design Ardian Adhiatma(&) and Umi Kuswatun Hasanah Dept. of Management, Faculty of Economics, Universitas Islam Sultan Agung (UNISSULA), Jl. Kaligawe Raya Km. 4, Semarang, Indonesia [email protected]
Abstract. The aim of this study is to analyze the impact of digital work design on social relationship skill through Digital-Mediated Communication (DMC) as a learning method in the digital era. This study presents how to improve the social relationship skill of university lecturers. Data were collected using questionnaires, with SmartPLS as the analysis tool. The results show that all hypotheses are confirmed. The implementation of DMC not only improves social relationship skill but also improves the virtual empathy that rarely appears during digital interaction. The model of social relationship skill improvement through DMC and digital work design can be used by researchers and practitioners to study and shape the future of various organizations. Keywords: Digital work design · Digital-mediated communication · Communication skills · Relationship quality · Empathy
1 Introduction

Digital technology is developing rapidly in all areas of life, including the world of work. In this era of revolution, digitalization can change the previous business environment [1]. Work design must always keep up with the social, cultural and political changes and challenges in the business environment. During the current Covid-19 pandemic, for instance, all organizations are required to keep improving their business strategies, including work designs adapted to various unstable conditions, and management continues to strive so that the business can survive the outbreak. This includes universities, which are currently deciding to use e-learning for their teaching and learning activities. Digital work design follows the need for blended learning, which is currently being used as a student learning method in the midst of the pandemic. As an education-based service, universities must be able to prepare future generations of better quality than the previous generation. Digital intelligence is defined as the sum of social, emotional and cognitive skills that enable people to face challenges and adapt to the requirements of digital life [2]. Therefore, lecturers not only need to understand technology but also need social relationship skill, which aims to ensure that the material, knowledge and values delivered by lecturers are well received, whether through direct communication or digital tools. This intelligence includes skills such as how to establish good
communication, maintain quality relationships with colleagues, control emotions, and give empathy or motivation in today's digital environment. DMC is defined as communication mediated by interconnected computers, between individuals or groups separated in space and/or time [3]. The use of digital tools in communication is crucial for collaboration between lecturers in the digital era and the current pandemic era. This skill is also very important for transferring knowledge to students through e-learning, online discussions and virtual assignments. This study examines the relationship of digital work design (DWD) with communication skills, relationship quality and empathy, mediated by digital-mediated communication (DMC), in higher education institutions. The samples of this study were lecturers who are digital immigrants (born before 1985), meaning they were born before the advent of the digital era. Changes in the digital era also change work design, so relational skills are needed. Relational skills are difficult to automate in completing collaborative work, and they can be applied in educational institutions [4]. One effort to improve relational skills in organizations is to adopt DMC, which can be implemented both in and outside the workplace.
2 Literature Review and Hypothesis Development

2.1 Digital Work Design and Digital Mediated Communication
Digital work design (DWD) is a work design that connects human and computer interactions in work practices [5]. Many organizations have taken advantage of the development of digital technology to communicate through digitally mediated channels, such as virtual meetings with colleagues. This is in line with the research of Yee [6], who found that the workforce develops new ways of working by utilizing the full capabilities of digital technology. The adoption of digital work design in the workforce is now driving digital mediated communication: its implementation in several organizations leads the workforce to communicate through digital mediated communication such as WhatsApp groups, Line, Facebook and other tools used to complete their work.
H1: Digital work design has a significant effect on the implementation of digital mediated communication.
2.2 Digital Mediated Communication on Communication Skill, Relationship Quality and Empathy
According to Dery and MacCormick [7], there were many changes in the digital workplace between 2006 and 2012; for example, the work done by the workforce is always connected to the value of technology in the workplace, which has led to the emergence of digital mediated communication. Meanwhile, according to Tarafdar et al. [8], the use of digital technology, including DMC, by employees must
have the support and control of managers. Moreover, Lau [9] suggested that DMC, including instant messaging, can improve active control and two-way communication when combined with good communication skills; as a result, it can improve communication skills and increase team satisfaction (Ou et al. [10]). The growing number of workplaces that adopt DMC allows effective communication between colleagues, both face to face and virtually. Meanwhile, Sharma [11] explained that the use of digital technology can form social networks in organizations and can improve organizational performance and relationship quality. Research by Buckley et al. [12] shows that well-connected communication between employees through digital mediated communication encourages employees to share information about problems and business processes, which results in quality relationships among workers. Kane [13] stated that social media applications in digital mediated communication can help internal collaboration and the quality of individual interactions with other workers. The role of technology in the digital era must be to help humans maintain good-quality attitudes; Rushkoff [14] asked people to rethink the relationship between technology and humans and stated that technology should support humanity and not the other way around. Batson [15] defined empathy as both an ability and a process; social information processing theory describes the difference between face-to-face and computer-mediated communication as well as digital and media literacy [16]. Empathy can be created from social, emotional and cognitive development in both offline and online spaces. Research by Caplan and Turner [17] states that DMC, which includes sharing experiences, themes or interests, for example in online support communities, enables empathic relationships that may be physically impossible. Likewise, the use of DMC in an organization fosters social development between people, which triggers the growth of empathy.
H2: Digital mediated communication implementation has a significant positive effect on communication skill.
H3: Digital mediated communication has a significant positive effect on relationship quality.
H4: Digital mediated communication has a significant positive effect on empathy.
2.3 Mediation Role of Digital Mediated Communication
The fact that society is becoming increasingly dependent on digital systems means there is a special need to investigate how mediation operates in the field of communication. Hancock [18] proposed the concept of Artificial Intelligence-Mediated Communication (AI-MC) and discussed its incorporation in interpersonal communication, describing AI-MC as communication that is not only transmitted via technology but also "modified, augmented, or even generated by computing agents to achieve communication goals". They added that AI-MC will play a role in linguistic patterns and relational dynamics, and ultimately in policy, culture and ethics. DMC is closely related to social and interpersonal relationships, including
communication, the quality of relationships that arise from interactions through technology, and the growth of empathy that results from sharing between workers (Fig. 1).
[Figure 1 depicts the research model: Digital Work Design → Digital Mediated Communication (H1); Digital Mediated Communication → Communication Skill (H2), Relationship Quality (H3) and Empathy (H4); mediated paths H5a-H5c run from Digital Work Design through Digital Mediated Communication to each outcome.]

Fig. 1. Research Model
H5a: Digital mediated communication mediates the relationship between digital work design and communication skills.
H5b: Digital mediated communication mediates the relationship between digital work design and relationship quality.
H5c: Digital mediated communication mediates the relationship between digital work design and empathy.
3 Method

The respondents of this study were 100 lecturers who are digital immigrants at a university, selected using the purposive sampling technique. The lecturer profession was chosen because digital-immigrant lecturers must continually upgrade their digital abilities in order to keep up with millennial-generation students, and they need good relationship skills to transfer knowledge and values. The data were collected through a questionnaire and processed using SmartPLS. Of the 150 questionnaires distributed, 100 were returned (66.67%). The majority of respondents (58 lecturers) were men, and most (88 lecturers) held postgraduate degrees; in terms of age, the majority of respondents were aged 40-40 years. The variables in this study are Digital Work Design (DWD), which includes 3 items, namely design for flexibility, agility and participation, and interdisciplinarity, adopted from [5]; Digital Mediated Communication (DMC), which includes 3 items, namely communication via email, communication through social media groups, and communication through virtual meetings, adopted from [19]; Communication Skill (CS), with 8 indicators obtained from Meerah et al. [20], namely communication skills to complete work orally, communication skills to complete work textually, communication skills for jobs completed with
body language (non-verbal), the ability to give feedback in communicating with others verbally, and the ability to give feedback in communicating with others textually; and Relationship Quality (RQ), with 5 indicators, namely emotional support (emotions expressed authentically and constructively), tension or resilience, openness, positive emotional experiences, and mutuality [21]. All variables are measured using a 1-5 Likert scale. The measurement model test based on Hair et al. [22] shows that all variables fulfil the reliability and validity requirements (Tables 1 and 2).
Table 1. The reliability analysis of the model's constructs

Variables | Composite reliability (CR): ρc > 0.6 | Cronbach's alpha: α > 0.7 | Average variance extracted (AVE): AVE > 0.5
DWD | 0.848 | 0.732 | 0.651
DMC | 0.835 | 0.707 | 0.631
CS | 0.844 | 0.771 | 0.520
RQ | 0.872 | 0.833 | 0.578
Em | 0.839 | 0.757 | 0.571
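As a cross-check of the figures above, the short script below (ours, not part of the study) applies the standard PLS-SEM formulas for composite reliability and AVE to the DWD loadings reported in Table 2; it reproduces the 0.848 and 0.651 values in Table 1.

```python
# Composite reliability (CR) and average variance extracted (AVE) from standardized
# outer loadings, using the usual PLS-SEM formulas; loadings are DWD1-DWD3 from Table 2.
def composite_reliability(loadings):
    s = sum(loadings)
    return s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))

def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

dwd = [0.776, 0.833, 0.811]
print(round(composite_reliability(dwd), 3))  # 0.848, as reported in Table 1
print(round(ave(dwd), 3))                    # 0.651, as reported in Table 1
```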
4 Result and Discussion

Diamantopoulos, Riefler and Roth [23] categorized path coefficients of < 0.30 as moderate effects, from 0.30 to 0.60 as strong, and > 0.60 as very strong. The hypothesis test results show that all hypotheses are accepted (Table 3). The direct effect of Digital Work Design (DWD) on Digital Mediated Communication (DMC), and of DMC on Communication Skill (CS), Relationship Quality (RQ) and Empathy (Em), is strong; in other words, hypotheses 1, 2, 3 and 4 are accepted. This is in line with the research by Yee [6], which stated that the adoption of digital work design in the workforce is now driving digital mediated communication. Moreover, the results of this study also support [9]: digital mediated communication, including IM, can improve control and communication skills. Social media usage in digital mediated communication can improve the quality of individual interaction with other workers, and digital mediated communication enables sharing experiences, themes or interests, such as in online support communities, creating empathic relationships that may be physically impossible. These hypothesis test results show that the higher the level of digital work design, the higher the digital mediated communication, communication skill and empathy of university lecturers. In this research, the ability of the higher education organization to manage digital work design improves the digital mediated communication, communication skill, relationship quality and empathy of its professional lecturers. Furthermore, DMC has a mediating role, at a moderate level, in the relationship between Digital Work Design (DWD) and Communication Skill (CS), Relationship Quality (RQ) and Empathy (Em). This finding supports Hancock et al. [18], who proposed Artificial Intelligence-Mediated Communication (AI-MC) and discussed its incorporation in interpersonal communication.
Table 2. The result of the model's validity

Construct (indicators with loadings; convergent validity: loadings > 0.70) | Discriminant validity: HTMT < 1
Digital work design: DWD1 0.776, DWD2 0.833, DWD3 0.811 | Yes
Digital mediated communication: DMC1 0.805, DMC2 0.888, DMC3 0.674 | Yes
Communication skill: CS1 0.694, CS2 0.750, CS3 0.749, CS4 0.652, CS5 0.755 | Yes
Relationship quality: RQ1 0.742, RQ2 0.878, RQ3 0.723, RQ4 0.694, RQ5 0.750 | Yes
Empathy: Em1 0.612, Em2 0.858, Em3 0.861, Em4 0.659 | Yes
Table 3. Hypothesis test results

Hypothesis | Beta | T-value (Sign)
H1: DWD → DMC | 0.421 | 5.473***
H2: DMC → CS | 0.364 | 4.353***
H3: DMC → RQ | 0.377 | 4.675***
H4: DMC → Em | 0.378 | 5.109***
H5a: DWD → DMC → CS | 0.153 | 2.904***
H5b: DWD → DMC → RQ | 0.159 | 3.424***
H5c: DWD → DMC → Em | 0.159 | 3.172***
***p < 0.05
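Purely as an illustration of how the Table 3 figures map onto the Diamantopoulos et al. [23] bands and a two-tailed 5% significance cut-off (|t| > 1.96), the following small script (our own, not the authors') reproduces the verbal interpretation given in the text.

```python
# Classify the reported path coefficients by effect size and check significance at 5%.
def effect_size(beta):
    b = abs(beta)
    if b > 0.60:
        return "very strong"
    if b >= 0.30:
        return "strong"
    return "moderate"

paths = {  # hypothesis: (beta, t-value) from Table 3
    "H1: DWD -> DMC": (0.421, 5.473),
    "H2: DMC -> CS": (0.364, 4.353),
    "H3: DMC -> RQ": (0.377, 4.675),
    "H4: DMC -> Em": (0.378, 5.109),
    "H5a: DWD -> DMC -> CS": (0.153, 2.904),
    "H5b: DWD -> DMC -> RQ": (0.159, 3.424),
    "H5c: DWD -> DMC -> Em": (0.159, 3.172),
}
for name, (beta, t) in paths.items():
    verdict = "supported" if t > 1.96 else "not supported"
    print(f"{name}: {effect_size(beta)} effect, {verdict}")
```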
DMC is closely related to social and interpersonal relationships, including communication, the quality of relationships that arise from interactions through technology, and the growth of empathy that results from sharing between professional lecturers.
5 Conclusion, Implication and Future Research

In conclusion, the implementation of DWD in universities can improve the quality of social relationship skills between lecturers through the mediating role of the DMC variable. The use of digital communication provides opportunities for effective communication and improves the quality of relationships in an interaction. DMC is also proven to bridge the emergence of empathy, so that people understand the conditions of others even though the interactions are virtual. Lecturers can form a network of relationships generated through the digital communication process, because the digital era demands continual adjustment. This research provides a theoretical contribution in the form of a discussion of social relationship skill in the context of digital work design in the organizational behavior and human resource management literature. The study of digital empathy is still very limited in the existing literature; this research shows that digital work design and digital mediated communication can create digital empathy. Research on digital empathy by Kano and Morita [24] focuses more on the empathy quotient of virtual agents, whereas in this study empathy is shown by the support and motivation people provide to each other through the DMC intermediary. This study has several weaknesses, including data collection using questionnaires, which can cause self-report bias; the use of questionnaires also results in a lack of in-depth information about the actual situation. For the future research agenda, it is expected that the number of respondents will be increased and the research sites widened, with the interview method used to obtain broader information. Future tests are suggested to include other variables, such as digital empathy, in line with the current digital era, as well as respondents who belong to the digital native category. Variables on relational relationships, such as collaboration, can also be added.
References 1. Prasad, S., Shankar, R., Gupta, R., Roy, S.: A TISM modeling of critical success factors of blockchain based cloud services. J. Adv. Manag. Res. 15, 434–456 (2018). https://doi.org/ 10.1108/JAMR-03-2018-0027 2. Wiśniewska-Paź, B.: Emotional intelligence vs. digital intelligence in the face of virtual reality. New challenges for education for safety: the need for “new” communication and adaptation competencies. Cult. e Stud. Del Soc. 3, 167–176 (2018) 3. Luppicini, R.: Review of computer mediated communication research for education. Instr. Sci. 35, 141–185 (2007) 4. Gibbs, M.: How is new technology changing job design? Institute for the Study of Labor (IZA), Bonn Germany (2017). https://doi.org/10.15185/izawol.344
5. Richter, A., Heinrich, P., Stocker, A., Schwabe, G.: Digital work design: the interplay of human and computer in future work practices as an interdisciplinary (grand) challenge. Bus. Inf. Syst. Eng. 60(3), 259–264 (2018). https://doi.org/10.1007/s12599-018-0534-4 6. Yee, N.: The Proteus paradox: How online games and virtual worlds change us - and how they don’t. Yale University Press, New Haven, CT (2014) 7. Dery, K., MacCormick, J.: Managing mobile technology: the shift from mobility to connectivity. MIS Q. Exec. 11, 159–173 (2012) 8. Tarafdar, M., D’Arcy, J., Turel, O., Gupta, A.: The dark side of information technology, MIT Sloan Manag. Rev. (2014) 61–70 9. Lau, W.W.F.: Effects of social media usage and social media multitasking on the academic performance of university students. Comput. Hum. Behav. 68, 286–291 (2017). https://doi. org/10.1016/j.chb.2016.11.043 10. Ou, C.X., Sia, C.L., Hui, C.K.: Computer-mediated communication and social networking tools at work. Inf. Technol. People 26, 172–190 (2013) 11. Sharma, D.: Resistance to human resouce information systems (HRIS) - problem recognition, diagnosis and positive intervention: a study on employee behavior and change management. Indian J. Appl. Res. 90, 99–104 (2013). https://doi.org/10.15373/2249555X/ JAN2013/39 12. Buckley, P., Minette, K., Joy, D., Michaels, J.: The use of an automated employment recruiting and screening system for temporary professional employees: a case study. Hum. Resour. Manage. 43, 233–241 (2006). https://doi.org/10.1002/hrm.20017 13. Kane, G.C.: Enterprise social media: current capabilities and future possibilities. MIS Q. Exec. 14, 1–16 (2015) 14. Rushkoff, D.: Present Shock: When Everything Happens Now. Penguin Group, New York, US (2013) 15. Baston, C.D.: The Social Neuroscience of Empathy. MIT Press., Cambridge, Cambridge (2009) 16. Kaloudis, A., et al.: How Universities Contribute to Innovation: A Literature Review-based Analysis (2019) 17. Caplan, S.E., Turner, J.S.: Bringing theory to research on computer-mediated comforting communication. Comput. Hum. Behav. 23, 985–998 (2007). https://doi.org/10.1016/j.chb. 2005.08.003 18. Hancock, J.T., Naaman, M., Levy, K.: AI-mediated communication: definition, research agenda, and ethical considerations. J. Comput. Commun. 25, 1–12 (2020). https://doi.org/10. 1093/jcmc/zmz022 19. Merdian, H.L., Reid, S.L.: E-professionalism: usage of social network sites by psychological professionals in training. Psychol. Aotearoa. 5, 28–33 (2013) 20. Iksan, Z.H., et al.: Communication skills among university students. Proc. Soc. Behav. Sci. 59, 71–76 (2012). https://doi.org/10.1016/j.sbspro.2012.09.247 21. Carmeli, A., Gittell, J.H.: High-quality relationships, psychological safety, and learning from failures in work organizations. J. Organ. Behav. 30, 709–729 (2009). https://doi.org/10. 1002/job 22. Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M.: A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd edn SAGE Publication, Los Angeles (2017) 23. Diamantopoulos, A., Riefler, P., Roth, K.: The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. J. Appl. Psychol. 90, 710–730 (2005) 24. Kano, Y., Morita, J.: Factors influencing empathic behaviors for virtual agents. In: Proc. 7th Int. Conf. Human-Agent Interact. - HAI ’19, pp. 236–238 (2019). https://doi.org/10.1145/ 3349537.3352777
How to Push Digital Ecosystem to Explore Digital Humanities and Collaboration of SMEs Marno Nugroho(&) and Budhi Cahyono Department of Management, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia {marnonugroho,budhicahyono}@unissula.ac.id
Abstract. The rapid development of digital technology today must be seen as an opportunity to build a business. It is therefore necessary to find ways to create digital entrepreneurship by enhancing the digital ecosystem for SMEs. Factors that influence the strength of the digital ecosystem include human resources and digital collaboration: the higher the Digital Humanities and Collaboration, the higher the Digital Ecosystem. Using behavioral and digitization theory, the authors derive Digital Humanity and Collaboration and the Technology Acceptance Model to propose a Digital Humanity and Collaboration proposition. This proposition is unique because it is motivated by the sub-optimal use of digital technology and introduces social intervention and proactive goal setting. Furthermore, the proposition is expected to become part of the management control process that will improve manager performance. This study used 250 SME players in Indonesia, especially in Central Java. The empirical model developed in this study consists of 4 research hypotheses; all are supported at the 95% level, which means that digital humanity and collaboration is an intermediary variable in creating the Digital Ecosystem. Keywords: Digital humanity and collaboration · Technology Acceptance Model · Digital ecosystem · Agility planning strategy · Green management
1 Introduction

In this digital era, all business sectors experience changes that require digitalization in their operations; SMEs are likewise required to adjust to these changes in order to survive and maintain sustainable competitive advantages. Through this digitalization effort, SMEs are expected to have a well-integrated digital business ecosystem so that they can continue to compete in business regionally, nationally, across borders (Ministry of Cooperatives and SMEs) and internationally. According to [23], digital, evolutionary and self-regulated ecosystems can contribute to the sustainability of local and regional development through a well-defined software platform. Reference [34] first introduced the concept of the business ecosystem as a strategic plan. Applying the right concept of the digital ecosystem will influence the ability of SMEs to produce the best performance, especially if supported by Digital Humanity and Collaboration.
Recently, entrepreneurs in the digital age have become increasingly involved with digital interactivity and very high communication costs [2, 27]. With an emphasis on crowdsourcing, it is hoped that the digital humanities will play a larger role so that they become the norm in the humanities [11, 19, 33]. The implementation of digital humanities and collaboration can be preceded by SMEs' acceptance of technology, in order to improve the digital ecosystem of SMEs so that SME actors are more responsive in dealing with change. Reference [7] states that TAM uses the Theory of Reasoned Action (TRA) as a theoretical foundation to identify the basic relationships between two main beliefs, perceived usefulness and perceived ease of use, and user attitude, intention and actual use of technology. The technology acceptance model (TAM) is a strong model that has been generalized and validated empirically to understand the factors responsible for technology adoption [22]. In addition to behavior towards technology acceptance, strategic agility planning can also affect digital humanity and collaboration. Changes in the environment make systematic strategic planning more difficult for companies [12, 31]; rapid change increases the volatility of the business environment and requires flexible and creative strategies [30]. Today, companies face environmental changes due to growing competition, technological change, fluctuating demand, disruptions in the supply chain caused by natural disasters or human activity, and so forth [31]. Some research results show a limited need for agility when operating in an environment characterized by low uncertainty [5]. This research describes the company's strategic agility plan as a way to manage unexpected changes and risks faced by organizations, which can affect digital humanity and collaboration. Based on this background description of the phenomena and factors that influence the digital ecosystem in SMEs, the problem of this study can be formulated as: "How to push digital ecosystems by exploring digital humanities and collaboration of SMEs?"
2 Literature Review

2.1 Digital Ecosystem
The idea of a technology (or industrial) ecosystem has been used to describe the relationship between technology and organization [23]. According to [18], the digital ecosystem has a kind of self-generative nature that works on a service-oriented logic in which users can act as providers at the same time. According to [18], the digital ecosystem can be measured by the dimensions of institutional entrepreneurship, transaction costs, digital technology and online social capital [25]. Meanwhile, according to [23], the dimensions of the digital ecosystem consist of governance, regulations and industrial policy; human capital, knowledge and practices; service and technical infrastructure; and business and financial conditions.
2.2 Technology Acceptance Model (TAM)
Most organizations adopt a behavioral model of technology acceptance to investigate the factors that encourage users to accept new technology [32]. TAM is an extension of the Theory of Reasoned Action (TRA) that explains the acceptance and use of IT [3]. Based on studies conducted by [32], TAM influences the digital humanities. TAM also influences social behavior and collaboration between resources in organizations [3, 22, 24, 28]. Thus, it can be hypothesized:
H1: The higher the technology acceptance model, the higher the digital humanities and collaboration.
2.3 Strategic Agility Planning
According to [4, 5], strategic agility plans can be detected by considering the extent to which customers are a source of new ideas, participate in the creation and shaping of innovative products and services, and act as users who test new services and products [30]. Strategic agility planning is a strategic plan that considers the extent to which customers are a source of new ideas, defines new market opportunities and competitive actions, enables business processes to be redesigned quickly, has accurate and cost-effective criteria that allow business operations to move with greater flexibility and speed, and builds networks by exploiting opportunities through efficient sourcing [1, 8, 15]. Based on the results of a study conducted by [31], strategic agility planning also has a large influence on the survival of organizations when deciding which business systems to implement according to the needs of the times, namely digital collaboration of business and humanity, which is an important value in the progress of an organization [4, 5, 10, 15, 30]. Thus, it can be hypothesized:
H2: The higher the strategic agility planning, the higher the digital humanities and collaboration.
2.4 Green Technology Capability
Developing a green technology innovation strategy is the initial stage that companies must carry out to pursue green innovation performance [30]. Green innovation strategies build environmental responsiveness to pollution, product stewardship, and technologies that do not pollute the environment [13, 27]. Green technology capability becomes a means to gain competitive advantage by developing various environmentally friendly programs [6, 13, 30]. A study conducted by [30] revealed that green technology strategies positively influence green innovation in digital humanities and collaboration. This shows that companies must develop green technology strategies that are reflected in their identity as green organizations in order to gain environmental organizational legitimacy; they will then achieve better green innovation performance [9, 13, 20, 24, 26]. Thus, it can be hypothesized:
H3: The higher the green technology capability, the higher the digital humanities and collaboration.
2.5 Digital Humanities and Collaboration (DHC)
According to [21], digital humanities and collaboration are powerful tools for overcoming challenges; they can improve digital preservation practices through the pooling of resources, the integration of expertise, and the development of shared knowledge. DHC focuses on promoting and supporting digital research and teaching in all arts and humanities disciplines, acting as a community-based advisory force, and supporting excellence in research, publication, collaboration and training [33]. DHC usually involves application specialists or relies on computational tools and techniques for research problems in the arts and letters, in a process focused on problems of the humanities, technology and computing [11]. Studies conducted by [21] state that all dimensions of digital humanities and collaboration affect the digital ecosystem; the digital ecosystem can be implemented well if supported by digital humanities and collaboration. Thus, it can be hypothesized:
H4: The higher the digital humanities and collaboration, the better the digital ecosystem.
3 Research Method

3.1 Population and Samples
This research is explanatory and descriptive. The sample was determined using a non-random, purposive sampling technique, that is, by selecting a group of subjects according to certain criteria based on the research objectives. The number of SMEs in Indonesia, especially in Central Java, is 3,000 SMEs; the sample used in the study amounted to 250 SMEs in the Central Java area of Indonesia.
3.2 Variable Measurement
The digital ecosystem is measured with 3 (three) dimensions in this research, namely digital technology, institutional entrepreneurship, and online social capital [11, 14, 21, 33]. The measures used for the Technology Acceptance Model (TAM) variable are perceived ease of use, perceived usefulness, attitude toward using, and behavioral intention [3, 7, 22, 28, 32]. The dimensions used to measure strategic agility planning are inventing new business models, creating imitation ability, enacting global complementarities, and making sense quickly [16, 29, 31]. The dimensions used to measure green technology capability in this study are green commitment, green innovation strategy, technology improvement process, and sustainability orientation. The dimensions used to measure how much influence DHC has on the digital ecosystem include: cultivate a foundation of knowledge and identify a shared vision, advocate for the value of digital
preservation activities, and implement shared digital preservation services [13, 24]. Based on these theoretical studies and previous research, the conceptual model of this research is presented in Fig. 1 below.
Fig. 1. Empirical Model (2020)
4 Results and Discussion

This research used Structural Equation Modeling (SEM) operated through the Partial Least Square (PLS) approach, with WarpPLS 5.0 as the data processing tool. All indicators used to measure the latent variables in this study are reflective, so the measurement model must pass tests of convergent and discriminant validity as well as a reliability test (composite reliability). The test results show AVE > 0.50 and composite reliability values > 0.7, which indicates that the research data are reliable and can be tested further. The test of Digital Humanities & Collaboration on Digital Ecosystem has a coefficient value of 0.586 and a p-value of < 0.001 (p-value < 0.05). The results of the analysis are in accordance with the predictions made, so it can be concluded that the first hypothesis (H1) is accepted. The test of the Technology Acceptance Model on Digital Humanities & Collaboration has a coefficient value of 0.110 and a p-value of 0.270 (p-value < 0.05). The results of the analysis are in accordance with the predictions made, so it can be concluded that the second hypothesis (H2) is accepted. The test of the Strategic Agility Planning variable on Digital Humanities & Collaboration has a coefficient value of 0.433 and a p-value of < 0.001 (p-value < 0.05). The results of the analysis are in accordance with the predictions made, so it can be concluded that the third hypothesis (H3) is accepted. The test of Green Technology Capability on Digital Humanities & Collaboration has a coefficient value of 0.204 and a p-value of 0.004 (p-value < 0.05). The results of the analysis are in accordance with the predictions made, so it can be concluded that the fourth hypothesis (H4) is accepted.
5 Discussion

The rapid development of the digital era has a significant impact on all sectors, and the business sector is one of the main sectors affected. SMEs are the main actors in making a movement for change in facing the digital economy era and are required to adjust to digitalization in their operational activities. This research is intended to provide several alternative ways to improve the digital ecosystem. First, the Technology Acceptance Model can be implemented by SMEs. Based on the results of the regression analysis, the test of hypothesis 1, on the relationship between the Technology Acceptance Model and Digital Humanities and Collaboration, shows a positive and significant result. The impact can be seen in SME actors increasingly growing their knowledge and the foundation for a shared vision, advocating for the value of digital preservation activities, and implementing joint digital preservation services. This finding is also reinforced by previous research [11, 14, 17, 19, 21, 33]. Second, Strategic Agility Planning can also be applied by SMEs. This is consistent with the results of the analysis of hypothesis 2, which indicates a positive and significant relationship between Strategic Agility Planning and Digital Humanities and Collaboration: the more strategic agility planning in SMEs increases, the higher the digital humanities and collaboration of SMEs. Applying this method also has a positive impact on the digital humanities and collaboration carried out by SME actors, such as growing the knowledge base and identifying shared visions, advocating the value of digital preservation activities, and implementing digital preservation services together. The results of this study are also supported by several previous studies [3, 7, 22, 28, 32]. Third, green technology capability can be applied by SMEs. This is based on the results of the hypothesis 3 analysis, which showed a positive and significant influence of Green Technology Capability on Digital Humanities and Collaboration: the more the green technology capability of SME actors increases, the higher their digital humanities and collaboration, which in turn has a positive impact on the application of digital humanities and collaboration in SMEs. The results of this study are similar to those of previous studies [4, 5, 10, 15, 29, 31]. Efforts to improve the digital ecosystem for SMEs are also influenced by the application of digital humanities and collaboration in SMEs. The analysis of hypothesis 4 found a positive and significant effect of Digital Humanities and Collaboration on the Digital Ecosystem, in line with previous studies [3, 9, 13, 24, 26, 30]. This means that if digital humanities and collaboration in SMEs continue to be improved, the digital ecosystem of SMEs will also improve. The digital humanities and collaboration efforts that SME actors can carry out are to foster a foundation of knowledge and identify shared visions, advocate the value of digital preservation activities, and implement joint digital preservation services.
This will improve the digital ecosystem of SME actors with respect to digital technology, institutional entrepreneurship, and online social capital. SMEs, especially in Indonesia, are then expected to be ready to compete in the digital era across all market segments. Based on the results of this study, all four hypotheses were accepted.
6 Conclusion

This research focuses on the role of the digital ecosystem in Indonesian SMEs and investigates how the digital ecosystem and the other research variables affect them. The results indicate positive and significant relationships among the research variables. All four hypotheses were accepted, so the tested factors can serve as alternative ways for SMEs to face the challenges of the global business sector. The application of green technology capability is also needed in the digital economy era, namely through a green commitment system and a green innovation strategy in the process of improving technology in SMEs. These systems must be grounded in a knowledge base with a vision that prioritizes serving consumers digitally. Together, these factors will prepare SMEs to face the challenges and changes in the trade sector, and an effectively implemented digital ecosystem will enable SME products to survive business competition.
7 Limitations and Future Research

This study has several limitations: the sample is restricted to urban areas of Central Java, only five SME business sectors are covered, and the empirical model is still simple. Future researchers are expected to expand the sampling area beyond Central Java and to add other business fields such as manufacturing, cosmetics, and Muslim fashion. The empirical model can also be broadened or narrowed with additional antecedents, so that future work can collect a larger sample and more detailed information.
References 1. Adams, P., Freitas, I., Fontana, R.: Strategic orientation, innovation performance and the moderating influence of marketing management. J. Bus. Res. 97, 129–140 (2019). https:// doi.org/10.1016/j.jbusres.2018.12.071 2. Ahmed, Y.A., Ahmad, M.N., Ahmad, N., Zakaria, N.H.: Social media for knowledgesharing: a systematic literature review. Telematics Inf. 37, 72–112 (2019). https://doi.org/10. 1016/j.tele.2018.01.015
3. AL-Nawafleh, E., ALSheikh, G.A.A., Abdullah, A.A., Tambi, A.M.B.A.: Review of the impact of service quality and subjective norms in TAM among telecommunication customers in Jordan. Int. J. Ethics Syst. 35(1), 148–158 (2019). https://doi.org/10.1108/ijoes07-2018-0101 4. Alon, I., Madanoglu, M., Shoham, A.: Strategic agility explanations for managing franchising expansion during economic cycles. Compet. Rev.: Int. Bus. J. 27(2), 113–131 (2017). https://doi.org/10.1108/CR-04-2016-0022 5. Arbussa, A., Bikfalvi, A., Marques, P.: Strategic agility-driven business model renewal: the case of an SME. Manage. Decis. 55(2) (2017). https://doi.org/10.1108/MRR-09-2015-0216 6. Soewarno, N., Tjahjadi, B., Fithrianti, F.: Green innovation strategy and green innovation: the roles of green organizational identity and environmental organizational legitimacy. Manage. Decis. 57(11) (2019). https://doi.org/10.1108/MD-05-2018-0563 7. Buabeng-Andoh, C.: Predicting students’ intention to adopt mobile learning. J. Res. Innov. Teach. Learn. 11(2), 178–191 (2018). https://doi.org/10.1108/jrit-03-2017-0004 8. Castañeda-Peña, H., Calderón, D., Borja, M., Quitián, S., Suárez, A.: Pre-service teachers’ appreciations of teacher-educators’ strategies when learning about narratives. Int. J. Educ. Res. 94, 90–99 (2019). https://doi.org/10.1016/j.ijer.2018.10.009 9. Cepeda, J., Arias-Pérez, J.: Information technology capabilities and organizational agility. Multinatl. Bus. Rev. (2018). https://doi.org/10.1108/mbr-11-2017-0088 10. Denning, S.: The role of the C-suite in agile transformation: the case of amazon. Strategy Leadersh. 46(6), 14–21 (2018). https://doi.org/10.1108/SL-10-2018-0094 11. Fay, E., Nyhan, J.: Webbs on the Web : libraries, digital humanities and collaboration. Libr. Rev. 64(1/2), 118–134 (2015). https://doi.org/10.1108/LR-08-2014-0089 12. Hanaysha, J.: Testing the effects of employee engagement, work environment, and organizational learning on organizational commitment. Procedia Soc. Behav. Sci. 229, 289– 297 (2016). https://doi.org/10.1016/j.sbspro.2016.07.139 13. Harrington, D., et al.: Capitalizing on SME Green Innovation Capabilities: Lessons from Irish-Welsh Collaborative Innovation Learning Network, pp. 93–121. University Partnerships for International Development (2017). https://doi.org/10.1108/s2055364120160000008010 14. Howell, E.: Scaffolding multimodality: writing process, collaboration and digital tools. English Teach. 17(2), 132–147 (2018). https://doi.org/10.1108/ETPC-05-2017-0053 15. Jaggars, D., Jones, D.E.: An agile planning and operations framework. Perform. Meas. Metrics (2018). https://doi.org/10.1108/PMM-11-2017-0057 16. Kim, M., Chai, S.: The impact of supplier innovativeness, information sharing and strategic sourcing on improving supply chain agility: global supply chain perspective. Int. J. Prod. Econ. 187, 42–52 (2017). https://doi.org/10.1016/j.ijpe.2017.02.007 17. Krahmer, A.: Digital newspaper preservation through collaboration. Digit. Libr. Perspect. 32 (2), 73–87 (2016). https://doi.org/10.1108/DLP-09-2015-0015 18. Kraus, S., Palmer, C., Kailer, N., Kallinger, F.L., Spitzer, J.: Digital entrepreneurship: a research agenda on new business models for the twenty-first century. Int. J. Entrep. Behav. Res. 25(2), 353–375 (2019). https://doi.org/10.1108/IJEBR-06-2018-0425 19. Lucky, S., Harkema, C.: Back to basics: supporting digital humanities and community collaboration using the core strength of the academic library. Digit. Libr. Perspect. 34(3), 188–199 (2018). 
https://doi.org/10.1108/DLP-03-2018-0009 20. Maleki Minbashrazgah, M., Shabani, A.: Eco-capability role in healthcare facility’s performance: natural-resource-based view and dynamic capabilities paradigm. Manage. Environ. Qual. 30(1), 137–156 (2019). https://doi.org/10.1108/MEQ-07-2017-0073
21. Mannheimer, S., Cote, C.: Cultivate, assess, advocate, implement, and sustain: A five-point plan for successful digital preservation collaborations. Digital Libr. Persp. 33(2), 100–116 (2017). https://doi.org/10.1108/DLP-07-2016-0023 22. Marakarkandy, B., Yajnik, N., Dasgupta, C.: Enabling internet banking adoption. J. Enterp. Inf. Manage. 30(2), 263–294 (2017). https://doi.org/10.1108/jeim-10-2015-0094 23. Matopoulos, A., Herdon, M., Várallyai, L., Péntek, Á.: Digital business ecosystem prototyping for SMEs. J. Syst. Inf. Technol. 14(4), 286–301 (2012). https://doi.org/10.1108/ 13287261211279026 24. Mellett, S., Kelliher, F., Harrington, D.: Network-facilitated green innovation capability development in micro-firms. J. Small Bus. Enterp. Dev. 25(6), 1004–1024 (2018). https:// doi.org/10.1108/JSBED-11-2017-0363 25. Organisasi, P.B., Dan, G.K., Sudirjo, F., Pawiyatan, J., Bendan, L.: INTERVERNING (Studi Pada Rumah Sakit PT VALE Soroako, Sulawesi Selatan) Serat Acitya – Jurnal Ilmiah Latar Belakang Masalah Telaah Pustaka, pp. 1–16 (2006) 26. Owens, D., Khazanchi, D., Owens, D.: Exploring the impact of technology capabilities on trust in virtual teams. Am. J. Bus. 33(4), 157–178 (2018). https://doi.org/10.1108/AJB-042017-0008 27. Rafique, H., Shamim, A., Anwar, F.: Investigating acceptance of mobile library application with extended technology acceptance model (TAM). Comput. Educ. 145, 103732 (2019). https://doi.org/10.1016/j.compedu.2019.103732 28. Rauniar, R., Rawski, G., Yang, J., Johnson, B.: Technology acceptance model (TAM) and social media usage: an empirical study on Facebook. J. Enterp. Inf. Manage. 27(1), 6–30 (2014). https://doi.org/10.1108/JEIM-04-2012-0011 29. Rojas, C.V., Reyes, E.R., Hernandez, F.A.Y., Robles, G.C.: Integration of a text mining approach in the strategic planning process of small and medium-sized enterprises. Ind. Manage. Data Syst. (2018) 30. Soewarno, N., Tjahjadi, B., Fithrianti, F.: Green innovation strategy and green innovation: the roles of green organizational identity and environmental organizational legitimacy. Manage. Decis. 57(11), 3061–3078 (2019). https://doi.org/10.1108/MD-05-2018-0563 31. Valaei, N., Rezaei, S.: Job satisfaction and organizational commitment: an empirical investigation among ICT-SMEs. Manage. Res. Rev. 39(12), 1663–1694 (2016). https://doi. org/10.1108/MRR-09-2015-0216 32. Wang, C.-S., Jeng, Y.-L., Huang, Y.-M.: What influences teachers to continue using cloud services?: the role of facilitating conditions and social influence. Electron. Libr. 35(3), 520– 533 (2017). https://doi.org/10.1108/EL-02-2016-0046 33. Zhang, Y., Liu, S., Mathews, E.: Convergence of digital humanities and digital libraries. Libr. Manage. 36(4–5), 362–377 (2015). https://doi.org/10.1108/LM-09-2014-0116 34. Moore, J.F.: Predators and prey: a new ecology of competition. Harv. Bus. Rev. 71, 75–86 (1993)
IOTA-Based Mobile Application for Environmental Sensor Data Visualization
Francesco Lubrano, Fabrizio Bertone, Giuseppe Caragnano, and Olivier Terzo
LINKS Foundation, via Boggio 61, Turin, Italy
{francesco.lubrano,fabrizio.bertone,giuseppe.caragnano,olivier.terzo}@linksfoundation.com
Abstract. Over the last few years, Distributed Ledger Technology (DLT) emerged as a trending opportunity for innovation in several sectors, and a wide range of implementations, sometimes very different from each other, appeared on the market. In particular, the intrinsic features of DLT can be enabling factors for the IoT sector. The huge number of IoT applications and the exponential growth of connected smart devices are shaping the present and the near future of our lives and the way we understand and see the world. The application of IoT to all sectors, spanning from the smart home to industry, is conceived as a way to improve and optimize processes, consumption and efficiency. The large amount of data produced, the need to certify exchanges of goods, the possibility of tracking assets, and so on, are challenges that can be addressed through DLT. This paper aims to tackle some of these challenges by presenting IoT devices and a mobile application that are able to exchange data through the IOTA Tangle. IOTA is an implementation of DLT based on a Directed Acyclic Graph (DAG) structure. This paper presents the current implementation of the mobile application, its features, challenges and future developments.
1 Introduction
The exponential growth of smart connected devices, fostered by the development and wide adoption of new paradigms such as smart cities, smart agriculture, and industry 4.0, is introducing several challenges including data transmission, authentication, data security, and device management [1]. In the IoT sector, the limited hardware capacities of sensors and devices constitute real constraints to the implementation of reliable and secure systems [2]. Recently, different implementations of Distributed Ledger Technology (DLT) have been developed to support the IoT sector [3]. DLT implementations such as Blockchain constitute an innovative way to exchange and securely store data through decentralized systems. Shafagh et al. developed a blockchain-based system for IoT data sharing to enable secure and resilient access control management. They managed to do so utilizing the blockchain as an auditable and distributed access control layer
to the storage layer [4]. Besides classic blockchain solutions such as Bitcoin or Ethereum, distributed ledger solutions based on a directed acyclic graph (DAG) have been developed. Classic blockchains have intrinsic scalability issues [5] that are partially overcome by these DAG-based implementations. IOTA [12] is one such implementation, and interest in it has grown considerably in recent times. IOTA is defined as an open, feeless and scalable distributed ledger, designed to support frictionless data and value transfer, and it can provide privacy, security, and immutability without relying on a central authority. In [6], the authors presented an IOTA-based solution for storing and visualizing data coming from an IoT device, describing the implemented model and the issues resulting from the use of IOTA. Similarly, in this paper we give an account of the implementation of an IoT data sharing solution based on IOTA. The solution is a mobile application that can retrieve transactions from the IOTA Tangle and visualize the data inside them. It has been developed, tailored and tested to support messages coming from ST STM32 MCU boards running the FP-SNS-IOTA1 function pack (see Footnote 1), which is based on the IOTA Light Node implementation described in [15]. The mobile application implements the first basic features needed to interact with the IOTA Tangle and is described in Sect. 3. Section 2 describes DLT solutions, focusing on IOTA. Section 4 reports final considerations and future works.
2 Background
Blockchain technologies, and DLT-based solutions in general, are considered one of the most important innovations of the last 30 years. DLTs can be divided into permissioned and permissionless systems, depending on the authorization required to access and participate in the network. In a permissionless DLT, members remain anonymous or use pseudonyms and each member can append new blocks of data to the ledger. In a permissioned DLT, the identity of each member and the authorisation to access and add data are controlled. Different algorithms can be used to add and verify transactions on the ledger. The first introduced, and the most commonly used in permissionless DLTs, is proof-of-work (PoW), based on the resolution of a complex and computationally demanding problem to gain the consensus of the network. A more recent and lightweight algorithm is proof-of-stake (PoS). PoS allows any holder of the DLT's tokens (typically a cryptocurrency) to become a validator, simply by announcing dedicated transactions with an amount that becomes locked up in a deposit. All the participating validators subsequently achieve consensus on the next block [7]. The IoT sector aims to leverage DLT for its decentralized structure and to solve security and privacy issues [8]. The decentralized feature comes with the mentioned consensus mechanisms based on PoW or PoS, which enable DLT participants to coordinate and trust transactions without the need of a central authority.
Footnote 1: “FP-SNS-IOTA1, The Easiest Way to Run and Write an IOTA Application”, https://blog.st.com/fp-sns-iota1/.
These mechanisms also have the consequence of creating a bottleneck that reduces the insertion rate of new blocks. Applied to IoT systems, this introduces several limitations in terms of performance. The throughput, i.e. transactions per second (TPS), is limited (e.g., 7 TPS in Bitcoin and 20 to 30 TPS in Ethereum). This leads to confirmation delays of new blocks that are not acceptable for most IoT use cases. Energy consumption and transaction fees are two other major issues: the consensus process is demanding for IoT devices with limited hardware capacities, and the transaction fees applied to the micro-payments usually made by IoT devices cannot be supported [9]. Most DLT networks therefore present intrinsic problems that make them unsuitable for generic IoT use cases, but other solutions based on a directed acyclic graph (DAG) have been developed to meet IoT requirements. One of these DAG-based solutions is IOTA.
2.1 IOTA
IOTA is a cryptocurrency developed for micro-payments and the IoT industry; it is open source and based on a DAG called the Tangle. The Tangle stores all the transactions issued by the nodes of the network. To issue a new transaction, a node has to approve two other transactions, selected according to an algorithm. If the two selected transactions are regular and do not present any conflict, the node can approve them and issue its own transaction. As a temporary consolidation measure, while waiting for a more stable network, IOTA introduced a central authority called the Coordinator. The Coordinator is a client that periodically sends signed transactions called milestones, which nodes trust and use to confirm transactions up to a certain point in the Tangle DAG. Indeed, IOTA's consensus mechanism requires a confirmed transaction to be referenced (either directly or indirectly) by a signed transaction issued by the Coordinator. The presence of the Coordinator, a single centralised node that issues signed and trusted transactions, currently limits the decentralisation of the network. To overcome this situation, IOTA plans to remove the Coordinator once the network reaches maturity [11]. The consensus mechanism used in IOTA improves scalability, as the number of transactions that can be confirmed grows linearly with the number of new transactions issued [12]. Unlike Bitcoin, the approval mechanism implemented in IOTA does not need miners, which makes it possible to issue transactions without fees. Moreover, IOTA tokens cannot be generated or mined: all IOTA tokens (2,779,530,283,277,761 tokens) were generated in the first transaction. The extremely low value of one IOTA token allows parties to exchange very small amounts, enabling micro-payments. The IOTA Tangle allows two kinds of transactions, value transactions and zero-value transactions. Transactions with a number of tokens to be exchanged among parties are validated to ensure their authenticity and avoid double spending. Zero-value transactions, i.e. transactions without an exchange of IOTA tokens, can be issued and attached to the Tangle, but these transactions
are not signed or checked by nodes to verify their authenticity, because no value is being transferred. Zero-value transactions have been specifically designed to exchange messages within the Tangle. This feature makes IOTA not only a cryptocurrency but also a public distributed data communication protocol [10]. However, the lack of authentication and verification of these transactions introduces security issues for systems and solutions that leverage IOTA to exchange messages. Specific solutions have been developed to overcome these issues.

2.1.1 Masked Authenticated Messaging
Masked Authenticated Messaging (MAM) is a second-layer data communication protocol built on top of the IOTA Tangle. With MAM it is possible to send signed and encrypted messages over the Tangle. The MAM protocol allows the creation of public, private, or restricted channels, on which messages can be sent and read. Users with access to a channel can validate the signature and decrypt the messages [13].

2.1.2 IOTA Streams
Recently, the IOTA community worked on an all-new version of MAM, called Streams. IOTA Streams is a DLT framework for decentralized data streaming and encryption on embedded systems, developed in particular for IoT use cases, and it introduces several improvements compared to MAM. In IOTA Streams, the protection of data exchanged through IOTA is one of the core concepts. Using cryptography, tamper-proof historical verification, and data ordering on the Tangle, Streams makes it possible to exchange authenticated and trusted data [14].
3 IOTA-Based Mobile Application
The work presented in this paper focuses on the implementation of a cross-platform application that builds on IOTA. The main objective is to bring the potential of a public distributed system, i.e. IOTA, to the common user through an easy-to-use interface. The application can be deployed on different platforms, from mobile to desktop, leveraging the Ionic framework (see Footnote 2) and its integration with Vue.js (see Footnote 3). The main feature of the application is the visualization of data extracted from the IOTA Tangle. The different views of the application have been customized to visualize data coming from ST STM32 MCU boards that run the FP-SNS-IOTA1 function pack. These IoT devices send environmental sensor data in the form of zero-value transactions. Zero-value transactions are always confirmed by the nodes and attached to the Tangle; once in the Tangle, they can be retrieved by the application. The current implementation leverages the basic version of IOTA, i.e. MAM and Streams have not been integrated yet. Thus, the user needs to use some kind of identifier (e.g. IOTA addresses or TAGs) in order to retrieve the transactions of a given device.
Footnote 2: Ionic Framework: https://ionicframework.com/. Footnote 3: Vue.js, Progressive JavaScript Framework: https://vuejs.org/.
Fig. 1. High level architecture
Figure 1 represents the high-level architecture of this solution. The ST devices issue zero-value transactions that are stored in the IOTA Tangle. In this scenario, the TAG field is used as the parameter that identifies the device. The user adds the device TAGs to the application; this one-time procedure stores the TAGs in the local storage, so the user can later visualize the data just by selecting a device from the main list. Once the user selects a device, the application automatically looks for all the related transactions in the Tangle and visualizes them through a customized dashboard. The dashboard and some other features of the application change according to the platform: the mobile version contains graphics and functions tailored for mobile devices, while the desktop version takes advantage of a wider screen, adapting the application views. This paper reports the views of the mobile version of the application.
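The retrieval flow described above can be sketched as follows. The application itself is written in TypeScript on Ionic/Vue; the sketch below is a minimal, language-agnostic illustration in Python. The TangleClient wrapper, its find_messages_by_tag method and the JSON payload field names (ts, temp, hum, press) are assumptions made for illustration only, not part of the paper or of any real IOTA client library.

```python
# Minimal sketch of the "register TAG, then retrieve and decode messages" flow.
# TangleClient is a hypothetical thin wrapper around an IOTA node API.
import json
from dataclasses import dataclass


@dataclass
class SensorSample:
    timestamp: int       # seconds since epoch, taken from the transaction payload
    temperature: float   # environmental values sent by the STM32 board
    humidity: float
    pressure: float


class DeviceRegistry:
    """One-time registration of device TAGs with a user-defined label (kept in local storage)."""

    def __init__(self):
        self._devices = {}  # tag -> label

    def add(self, tag: str, label: str):
        self._devices[tag] = label

    def tags(self):
        return list(self._devices.keys())


def fetch_device_data(client, tag: str) -> list:
    """Look up the zero-value transactions carrying the given TAG and decode their payloads."""
    samples = []
    for message in client.find_messages_by_tag(tag):  # assumed client method
        payload = json.loads(message)                 # assumed JSON payload format
        samples.append(SensorSample(
            timestamp=payload["ts"],
            temperature=payload["temp"],
            humidity=payload["hum"],
            pressure=payload["press"],
        ))
    # Ordered series, ready to be interpolated into the dashboard charts
    return sorted(samples, key=lambda s: s.timestamp)
```

In the real application the same two responsibilities (persisting the TAG/label pairs and decoding the retrieved payloads) are split between the local storage and the dashboard views.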
3.1 User Interface
The main view shows the list of registered devices (Fig. 2a). For each device, the TAG and the label added by the user during the registration phase are reported: the device TAG is needed to retrieve the transactions, while the label is a field the user can define to easily identify the device. The floating button in the lower part of the interface allows new devices to be added; the dedicated entry in the menu (Fig. 2b) permits the addition as well. The menu allows navigation through the different views of the app for adding new devices, editing labels, simulating transactions, editing the app configuration and quitting the application. To visualize the data, the user simply selects a device: the click automatically triggers the request to get the transactions related to the selected TAG and, if the request is successful, the dashboard is visualized (Fig. 2c). As mentioned above, the dashboard is customized to meet the need of visualizing environmental sensor data. The application can manage data sent by devices in a specific message format and visualize the information through charts.
Fig. 2. Application main views
Fig. 3. Temperature chart and raw data visualization
In Fig. 3a, the chart provides the temperature data reported by the device. The chart can be adjusted to the user requirements, and pan and zoom features are available and can be controlled through gestures. In the same way, and according to the payload sent by the IoT device, sensor data related to humidity and pressure are visualized through the charts. The chart represents the data as an area, obtained by interpolating single measures represented as points. Clicking on a single point shows specific information, such as the numeric value of the measure and its timestamp. It is also possible to visualize the data as a list of strings (Fig. 3b), so as to keep the application more general and not limit it to the defined payload. In this case, the payload of the transaction is simply displayed with the associated timestamp; this view allows the visualization of any message.
Finally, the application provides a dedicated view to send zero-value transactions. For development or testing purposes, it is possible to send transactions to the Tangle, defining the payload of the transaction. This feature is completely customizable, as it is possible to set the seed, address, tag, provider and other IOTA parameters.
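A minimal sketch of this testing view is given below. The publish_zero_value wrapper and the example seed, address and tag values are purely illustrative assumptions; they do not correspond to an actual IOTA client API or to real identifiers used by the application.

```python
# Sketch of the development/testing feature that issues a zero-value transaction.
# The parameter names mirror the ones listed above (seed, address, tag, provider).
import json
import time


def send_test_transaction(client, seed: str, address: str, tag: str, payload: dict):
    """Attach a zero-value message to the Tangle so the dashboard has data to display."""
    message = json.dumps({**payload, "ts": int(time.time())})
    return client.publish_zero_value(  # assumed wrapper around the configured node (provider)
        seed=seed,                      # no tokens are moved, so the seed only identifies the sender
        address=address,
        tag=tag,
        message=message,
    )


# Example: simulate one reading for a device registered under a hypothetical tag.
# send_test_transaction(client, seed="TESTSEED", address="TESTADDR", tag="DEVICETAG",
#                       payload={"temp": 21.5, "hum": 40.2, "press": 1013.0})
```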
3.2 Limitations
The application presented in this paper is the first step of a wider development that includes the function pack of the selected devices. For this reason it has several limitations, which are analysed in this section.

3.2.1 Security and Authentication
The transactions issued by each device are zero-value transactions. This makes it possible to share data through the IOTA Tangle without paying any fees and without exchanging any amount of IOTA tokens. Because these transactions do not transfer any IOTA tokens, nodes carry out only the following basic checks on them (a compact sketch of these checks is given at the end of this section):

• the proof of work is done according to the minimum weight magnitude of the network;
• the transaction's timestamp is not older than 10 min, according to the local time of the IOTA node to which it is sent.

In the case of value transactions, nodes carry out additional checks:

• all IOTA tokens that are withdrawn from an address are also deposited into one or more other addresses;
• the value of any transaction does not exceed the total global supply;
• the signatures are valid.

Hence, nodes verify the signature of a transaction only in the case of value transactions. The signature of a zero-value transaction is not checked and, as a consequence, it is not possible to verify the identity of the device that issued it. This behaviour is also exploited by the application itself to issue zero-value transactions impersonating the device; indeed, in our system the device is identified only by the IOTA tag. This issue can be solved by integrating the Streams framework into the system, which adds several security features, as described in Sect. 2.1.2.

3.2.2 Snapshot Procedure
Considering the growing popularity of IOTA, and given the possibility of issuing zero-value transactions, the Tangle would grow constantly, requiring a huge amount of memory and storage on the validating nodes. To mitigate this issue, the IOTA Foundation introduced the snapshot procedure. When a snapshot is run, the information stored in the Tangle is simplified and made stable by removing all history of transactions and keeping only information
about wallets (called addresses in IOTA terminology) containing an IOTA balance greater than zero, also removing all zero-value transactions used for messaging. Initially, the IOTA Foundation made global snapshots at irregular intervals. This has been replaced by a local version of the snapshot procedure, in which local snapshots of the Tangle are taken independently and asynchronously by full nodes. Local snapshots are more efficient and do not cause network unavailability. Since local snapshots delete all the zero-value transactions stored in the Tangle, one way to control this action is to deploy a private full node and manage the snapshots accordingly, so that the relevant transactions are retained.
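Returning to the checks listed in Sect. 3.2.1, the split between zero-value and value transactions can be summarised in a few lines. This is only an illustrative sketch: the value-transaction helpers and the boolean PoW field are placeholders for logic that a real node implements, not actual node code.

```python
# Illustrative summary of the node-side checks from Sect. 3.2.1 (not real node code).
import time

MAX_TIMESTAMP_AGE = 10 * 60  # zero-value rule quoted above: not older than 10 minutes


def inputs_equal_outputs(tx) -> bool:   # placeholder: withdrawn tokens are fully re-deposited
    return True


def within_global_supply(tx) -> bool:   # placeholder: value does not exceed the total supply
    return True


def signatures_valid(tx) -> bool:       # placeholder: signature verification
    return True


def accept_transaction(tx: dict, now: float = None) -> bool:
    now = now if now is not None else time.time()
    # Checks applied to every transaction, including zero-value ones:
    if not tx["pow_meets_min_weight_magnitude"]:   # stand-in for the trailing-zero-trit PoW check
        return False
    if now - tx["timestamp"] > MAX_TIMESTAMP_AGE:
        return False
    if tx["value"] == 0:
        # Zero-value: no signature check, so the sender's identity is NOT verified.
        return True
    # Value transactions get the additional checks:
    return inputs_equal_outputs(tx) and within_global_supply(tx) and signatures_valid(tx)
```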
4 Conclusion and Future Works
This paper presented preliminary results and the first implementation of an IoT use case and a cross-platform application based on IOTA. The fundamentals of the IOTA Tangle were reported and the main concept of the use case was described. This work will be extended and improved with the addition of more recent and complex functionalities of IOTA, such as MAM or IOTA Streams; these features will increase the security and reliability of the communication between the devices and the user application. The possibility of deploying and running a custom full node is also under consideration: a full node allows the snapshot timing to be optimized, reduces the latency and offers the possibility of partially controlling the data. We believe that DLT solutions represent an enabling factor for several IoT use cases and can provide a secure and distributed way to exchange and store data. According to its roadmap, IOTA is constantly evolving towards a more mature platform, and our development will follow the new changes and will be aligned with the new versions of IOTA. A wider adoption of IOTA will contribute to this maturity: more transactions and more nodes connected to the Tangle will make it easier to remove the Coordinator, improving the decentralization, the scalability and the overall security of this system.
References 1. Noor, M.B.M., Hassan, W.H.: Current research on internet of things (IoT) security: a survey. Comput. Netw. 148, 283–294 (2019). https://doi.org/10.1016/j.comnet. 2018.11.025. ISSN 1389–1286 2. Sha, K., Wei, W., Yang, T.A., Wang, Z., Shi, W.: On security challenges and open issues in internet of things. Future Gener. Comput. Syst. 83, 326–337 (2018) 3. Dorri, A., Kanhere, S.S., Jurdak, R.: Blockchain in internet of things: challenges and solutions (2016). https://arxiv.org/abs/1608.05187 4. Shafagh, H., Burkhalter, L., Hithnawi, A., Duquennoy, S.: Towards blockchainbased auditable storage and sharing of IoT data. In: Proceedings of the 2017 on Cloud Computing Security Workshop, pp. 45–50, November 2017 5. Bez, M., Fornari, G., Vardanega, T.: The scalability challenge of ethereum: an initial quantitative analysis. In: 2019 IEEE International Conference on ServiceOriented System Engineering (SOSE), pp. 167–176. IEEE, April 2019
6. Silvano, W.F., De Michele, D., Trauth, D., Marcelino, R.: IoT sensors integrated with the distributed protocol IOTA/Tangle: bosch XDK110 use case. In: 2020 X Brazilian Symposium on Computing Systems Engineering (SBESC), pp. 1–8. IEEE, November 2020 7. Koˇst’´ al, K., Krupa, T., Gembec, M., Vereˇs, I., Ries, M., Kotuliak, I.: On Transition between PoW and PoS. In: 2018 International Symposium ELMAR, Zadar, Croatia, pp. 207–210 (2018). https://doi.org/10.23919/ELMAR.2018.8534642 8. Singh, M., Singh, A., Kim, S.: Blockchain: a game changer for securing IoT data. In: 2018 IEEE 4th World Forum on Internet of Things (WF-IoT), Singapore pp. 51–55 (2018). https://doi.org/10.1109/WF-IoT.2018.8355182 9. Cao, B., et al.: When internet of things meets blockchain: challenges in distributed consensus. IEEE Netw. 33(6), 133–139 (2019). https://doi.org/10.1109/MNET. 2019.1900002 10. Silvano, W.F., Marcelino, R.: Iota tangle: a cryptocurrency to communicate Internet-of-Things data. Future Gener. Comput. Syst. 112, 307–319 (2020). https://doi.org/10.1016/j.future.2020.05.047. ISSN 0167-739X 11. Popov, S., et al.: The coordicide (2020). Accessed 1–30 January 12. Popov, S.: The tangle. White paper 1, 3 (2018) 13. Handy, P.: Introducing masked authenticated messaging, 4 November 2017. https: //blog.iota.org/introducing-masked-authenticated-messaging-e55c1822d50e/ 14. Cech, J.: IOTA Streams alpha, 2 April 2020. https://blog.iota.org/iota-streamsalpha-7e91ee326ac0/ 15. Stucchi, D., Susella, R., Fragneto, P., Rossi, B.: Secure and effective implementation of an IOTA light node using STM32. In: Proceedings of the 2nd Workshop on Blockchain-Enabled Networked Sensor (2019)
Electricity Theft Detection in Smart Meters Using a Hybrid Bi-directional GRU Bi-directional LSTM Model
Shoaib Munawar 1, Muhammad Asif 2, Beenish Kabir 2, Pamir 2, Ashraf Ullah 2, and Nadeem Javaid 2
1 International Islamic University Islamabad, Islamabad 44000, Pakistan
2 COMSATS University Islamabad, Islamabad 44000, Pakistan
Abstract. In this paper, the problem of misclassification caused by cross pairs lying across the decision boundary is investigated. A cross pair is a junction of two samples of opposite classes; such pairs are identified using the Tomek links technique, and the majority class sample of each cross pair is removed so that the two classes can be separated by an affine decision boundary. Because real theft data is not available, six theft cases are applied to the benign class data to synthesize malicious samples that mimic real-world behaviour. Furthermore, K-means SMOTE is used to tackle the class imbalance issue and provide balanced data. The technical route is to train the model on time-series data of both classes; training on imbalanced data biases the model towards the majority class, which leads to misclassification and a high FPR. The balanced data is fed to a hybrid bi-directional GRU and bi-directional LSTM model, which classifies the two classes with high accuracy, a high detection rate and a low FPR.

Keywords: Smart meters · BiGRU · BiLSTM · Tomek links · Theft cases · FPR

1 Introduction
A power system infrastructure consists of three phases: power generation, transmission and distribution [1]. Power is generated at high voltages in power stations and transmitted through transmission lines. These high voltages are then stepped down, and a low, compatible voltage level is supplied to electricity consumers via the distribution system. Generally, consumers are connected to a low-voltage station. A utility provider (UP) monitors consumers' consumed energy by deploying smart meters (SMs) on customer premises [2]. The supplied electric energy undergoing these three phases suffers from undesirable losses, namely technical losses (TLs) and non-technical losses (NTLs) [3]. TLs are naturally occurring losses in a power system due to
energy dissipated in conductors, transformers, etc., whilst NTLs are due to malicious activities adopted by fraudulent consumers connected to a low-voltage station. The main purpose of these malicious activities is to under-report the consumed energy in order to reduce the electricity bill. Fraudulent consumers attempt malicious activities through various approaches such as meter tampering with shunt devices, double tapping and electronic faults [4]. These malicious activities overburden the UP and increase the energy demand; the smooth flow of the supplied energy is disrupted, which causes a huge revenue loss. The authors in [5] estimated that distribution-side losses increased from 11% to 16% between 1980 and 2000, which indicates that increasing losses are a conspicuous issue. These revenue losses vary from country to country. For instance, nearly 20% of the total generated energy is reported lost in India due to malicious activities [6]. Similarly, developed countries such as the United States, the United Kingdom, Brazil, Russia and Canada are also affected. Statistics in [7,8] report revenue losses of 10%, 16% and 100 million dollars in Russia, Brazil and Canada, respectively. Worldwide, a recent report indicates a revenue loss of 96 billion dollars due to these malicious activities [9]. Concerning the electricity theft problem, the existing literature proposes various countermeasure approaches to identify and investigate theft occurrences. One of the major approaches is to target the advanced metering infrastructure (AMI), which provides sequential data that is analysed to detect malicious behaviour. Furthermore, a mutual strategy of considering sequential and non-sequential data is adopted to enhance theft detection: sequential data comprises the SMs' consumption data, whilst non-sequential data is auxiliary data containing geographical and demographical attributes of the consumers. Moreover, neighbourhood area network (NAN) and morphological patterning approaches aim to identify maliciousness on customers' premises. A NAN is a cluster of customers connected to the low-voltage side of a transformer. In the NAN topology, an observer meter is installed on the low-voltage side of the transformer, which monitors the total consumed energy of the corresponding consumers. TLs are adjusted numerically through a constant within a NAN, in order to reconcile the readings of the observer meter and the consumers' SMs. Morphological patterning, in contrast, focuses on forecasting consumers' behaviour: the forecast of the consumed energy is based on the historic consumption patterns of the consumers, and a discrepancy between the historic and forecasted patterns indicates maliciousness in electricity consumption.
1.1 List of Contributions
The contributions of this work are listed as follows.
• To overcome the class imbalance issue, six theft cases are used to synthesize theft data. Later on, K-means SMOTE, a data oversampling technique, is used for the provision of balanced data.
• Cross pairs are identified and removed using Tomek links.
• In order to solve the misclassification issue, a hybrid model of a bi-directional gated recurrent unit (Bi-GRU) and a bi-directional long short-term memory (Bi-LSTM) is used. Additionally, an efficient theft detection method is developed.
2 Literature Review
This section provides an overview of the existing literature related to electricity theft detection (ETD) in SMs.
2.1 Considering Sequential Data
A large portion of NTLs is due to energy theft. Detection algorithms analyse the severity and period of the energy theft. Generally, theft detection is based on the historic monitored data of a customer's SM, and detection approaches are categorized as data-oriented, network-oriented and hybrid. The solution proposed in [21] is a data-driven approach: an ensemble bagged tree (EBT) algorithm, which consists of many decision trees, is used to detect NTLs. However, a small change in the data disrupts the structure of the optimal decision trees (DTs). In [17], a solution based on boosting classifiers, a gradient boosting classifier (GBC) detector, is introduced for intentional theft, though non-fraud anomalies are ignored. The GBC detector is based on categorical boosting (CatBoost), extreme gradient boosting (XGBoost) and light gradient boosting (LGBoost), which is computationally expensive due to the large number of trees; besides that, it requires a long execution time and more memory. In [26], the authors address the imbalanced nature of the data and the consequences of dense time-series data. A maximal overlapped discrete wavelet-packet transform (MODWPT) method is used to reduce the high data dimensionality and extract abstract features, whilst a random under-sampling boosting (RUSBoost) algorithm is used for data balancing [33]. However, RUSBoost eliminates key features, which results in information loss to a large extent. Similarly, in [28], SMOTE, a data balancing technique, is used to balance the imbalanced data. While balancing the data, SMOTE considers neighbouring samples from the opposite classes and synthesizes samples, which injects additional noise into the data. In [4,18,19], CNN, CNN-RF and CNN-LSTM models are developed, respectively; however, CNN relies on a max-pooling operation, which is significantly slower. The literature in [22] presents a semi-supervised autoencoder (SSEA), which learns advanced features such as current, voltage and active power from the available SMs' data; however, the results are not accurate and are less stable. Moreover, the authors in [24] propose the XGBoost technique for the detection of anomalies, whereas computational and time complexity constraints are ignored. The solutions proposed in [10–12] focus on considering non-sequential data along with the SMs' data to detect maliciousness; however, non-malicious factors and data leakage during training of the model are beyond the scope of these studies.
2.2 Investigating Neighbourhood Area Network
A network-oriented approach requires a hardware-based infrastructure, which enhances precision. A neighbourhood area network (NAN) based mechanism developed in [13,15] identifies benign and theft customers; however, the case in which a consumer's reading is interpreted by a nearby neighbour is not pinpointed. Similarly, in [25], the authors develop an ensemble of the maximum information coefficient (MIC) and clustering by fast search and find of density peaks (CFSFDP). MIC is used to correlate the observer meter's data with NTLs, whilst CFSFDP is used to investigate the morphology of a massive amount of consumption patterns. However, changes in the observer meter due to non-malicious factors are not tackled.
2.3 Monitoring Morphological Patterning
The work in [14] develops a decision tree combined with K-nearest neighbour and support vector machine (DTKSVM) to investigate anomalies. However, non-malicious factors are not considered, which leads to misclassification in this scenario. An LSTM model is used in [23,32] to forecast future energy consumption curves and compare them with the most common historic consumption profiles of the customers; LSTM-based models, however, require a larger memory bandwidth and incur excessive computational complexity. To further enhance electricity theft detection accuracy, in [16] the morphological aspect of the consumer's pattern is observed by deploying a stacked sparse denoising autoencoder (SSDAE). An estimated noise is added to cope with non-malicious factors, which is not a suitable assumption. Moreover, in [27] the authors tackle NTLs using linear and categorical enhanced regression based schemes for the detection of energy theft in smart meters (LR-ETDM, CVLR-ETDM). However, LR is sensitive to outliers: non-malicious factors create an outlier scenario, which disrupts the model performance and results in poor accuracy. In [28], the authors propose an anomaly pattern detection hypothesis testing (APD-HT) scenario to investigate malicious users; however, this model fails to detect variations in the readings of SMs due to non-malicious factors (Table 1).
3 Proposed System Model
The proposed solution for NTL detection is shown in Fig. 1. The proposed solution is a hybrid electricity theft detector with three phases: data preprocessing, data augmentation and classification. These three phases consist of six main steps (the remaining steps are listed after Table 1).
• In step (1), the data is preprocessed: missing values are filled and outliers are removed. Missing values degrade the model's accuracy because of noisy and ambiguous data, so to clean the data for fair training of the model a simple imputer method is used, and the missing values are filled using a mean strategy.
Table 1. Mapping of limitations and proposed solutions
Limitations to be addressed | Solutions to be proposed | Validations to be done
L1: High FPR | S1: A hybrid of BiLSTM-BiGRU | V1: Figs. 3(a) and 3(b)
L2: Problem of imbalanced data | S2: Using K-means SMOTE | V2: Performance measurement with existing model
L3: Misclassification due to cross pairs | S3: Using Tomek links | V3: Table 2
3.1
Data Preprocessing
Data preprocessing technique is used to transform the raw data into usable format. Electricity consumption data is a high dimensional time-series data. Analysing such a huge data is impractical and takes very long execution time. The time-series data contains certain missing values and outliers. Training a model with such a data compromises the accuracy of the forecasting model. Therefore, the missing values and filled without compromising the original data’s integrity. Training a model with such a data compromises the accuracy of the forecasted results. In our case, a simple imputer technique is applied to fill the NaN values. An independent mean strategy is applied to fill the NaN values. It is a fast and an easy approach.
302
S. Munawar et al. Input Data
Cross Pairs
Data Balancing
Smart Meter Data
Tomek Link
attacks
Benign Class
S2
S1 Time Series Data
Balanced Data
S3
Bi-directional LSTM Yt-1
Yt
Cross Pairs
Bi-directional GRU
Yt+1 Output
LSTM LSTM LSTM LSTM LSTM LSTM
S0 BiGRU Sh
Forward Input
y2
y3
BiGRU
BiGRU
x2
x3
y1
Backward
x1 Xt-1
Xt
Xt+1
users
Limitations Addressed L.1 Imbalanced data set L.3 Data Leakage Problem
Normal Users
Fig. 1. The proposed system model
3.2 Data Augmentation and Balancing
In the real world, malicious samples rarely exist, while a balanced dataset is required to train ML and deep learning (DL) models: training on an imbalanced dataset causes a bias towards the majority class, which leads to misclassification. Many synthetic data generation techniques exist to tackle this issue, such as undersampling and oversampling. Undersampling techniques work on the majority class to discard redundant data, which leads to information loss; oversampling techniques, on the other hand, duplicate the minority class to generate synthetic samples, which is prone to overfitting. In our scenario, we use six theft cases to mimic real-world theft data. The following six theft cases are used in this paper to generate synthetic data samples:

T1(s_t) = s_t * random(0.1, 0.9),   (1)
T2(s_t) = s_t * x_t, where x_t = random(0.1, 0.9),   (2)
T3(s_t) = s_t * random[0, 1],   (3)
T4(s_t) = mean(s) * random(0.1, 1.0),   (4)
T5(s_t) = mean(s),   (5)
T6(s_t) = s_(T-t), where T is the sample time.   (6)
• In theft case 1, as shown in Fig. 2(a), the benign class samples are modified by multiplying them by a random number in the range (0.1–0.9).
• In theft case 2, as shown in Fig. 2(a), the benign class values are multiplied by a random number within the range (0.1, 1.0) to mimic a discontinuity in the consumption pattern.
• In theft case 3, the SMs' readings are manipulated by multiplying them by 1 or 0. Patterns multiplied by 0 show no consumption during that timestamp, while multiplication by 1 mimics a constant minimal historic usage pattern, which under-reports the usage in a less obvious way, as shown in Fig. 2(a).
• In theft case 4, the mean of the total consumption is multiplied by a random value between (0.1–0.9) to under-report the SMs' readings, as shown in Fig. 2(b).
• In theft case 5, a simple mean of the total consumed energy is reported, as shown in Fig. 2(b); the mean represents a consistent consumption throughout the time span.
• In theft case 6, as shown in Fig. 2(b), peak hours are swapped with off-peak hours; the swap is the reverse of the actual electricity utilization during that time span.
Synthetic theft data is then generated using these theft cases (a sketch implementing them is given after this list).
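The sketch below applies Eqs. (1)–(6) to one benign daily profile of 48 half-hourly readings. Where an equation is ambiguous (e.g. whether the random factor in case 2 is drawn once per day or once per reading), the comments state the interpretation chosen; this is an illustrative reading, not the paper's exact implementation.

```python
# Sketch of the six theft cases (Eqs. 1-6) applied to a benign profile s.
import numpy as np


def synthesize_theft_samples(s: np.ndarray, rng=None) -> dict:
    if rng is None:
        rng = np.random.default_rng(0)
    T = len(s)  # number of readings in the sample period (48 half-hourly values)
    return {
        "t1": s * rng.uniform(0.1, 0.9, size=T),             # per-reading random reduction
        "t2": s * rng.uniform(0.1, 0.9),                     # one random factor for the whole day
        "t3": s * rng.integers(0, 2, size=T),                # randomly zero out readings (multiply by 0 or 1)
        "t4": np.full(T, s.mean() * rng.uniform(0.1, 1.0)),  # scaled mean reported all day
        "t5": np.full(T, s.mean()),                          # flat mean consumption
        "t6": s[::-1],                                       # reverse the profile (swap peak and off-peak)
    }
```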
Fig. 2. Theft cases: (a) theft cases 1, 2 and 3; (b) theft cases 4, 5 and 6
3.3 Bi-directional LSTM
RNNs suffer from the vanishing gradient problem; to resolve this problem, a BiLSTM is used, which preserves long-distance information [22]. A BiLSTM model processes a temporal data sequence and consists of two LSTMs: past and future information is accessed through the forward and backward directions. One LSTM is fed the time-series data in the forward direction, while the other is fed a reversed copy of the input sequence. This mode of feeding the input data increases
the amount of information available to the network. The model has the capability of driving its gates according to its own needs, and a BiLSTM learns faster than a single LSTM. When developing a BiLSTM, two copies of the hidden layer are created and the outputs of these layers are then concatenated [29].
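A compact Keras sketch of the hybrid model is given below: a bidirectional LSTM block feeding a bidirectional GRU block with a sigmoid output for the binary decision. The layer sizes and training hyper-parameters are illustrative assumptions; the paper does not report its exact architecture at this point.

```python
# Sketch of the hybrid BiLSTM + BiGRU classifier with a sigmoid output.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_bigru_bilstm(timesteps: int = 48, features: int = 1) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # BiLSTM block
        layers.Bidirectional(layers.GRU(32)),                          # BiGRU block
        layers.Dense(1, activation="sigmoid"),                         # benign (0) vs theft (1)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model
```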
4 Performance Evaluation
The performance of the hybrid BiGRU-BiLSTM model is evaluated using the DR, FPR, AUC score and accuracy. The confusion matrix is the basis for all of these metrics: it divides the predictions into four basic parts, true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). TP and TN are the correctly predicted positive and negative samples, whilst FP and FN are negative samples falsely classified as positive and positive samples falsely classified as negative, respectively. The DR is the sensitivity of the model and is referred to as the TPR in the literature: it is the ratio of detected abnormal samples to the total number of abnormal samples. The mathematical representation of the DR is shown in Eq. (7):

DR = TP / (TP + FN).   (7)
The FPR is another important aspect of the evaluation; in the literature it is referred to as the false alarm rate. It is the incorrect classification of negative samples as positive. A high FPR is expensive because it results in on-site verification, which costs time and money. Mathematically, the FPR is given in Eq. (8):

FPR = FP / (FP + TN).   (8)
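Both metrics follow directly from the confusion matrix, as the short sketch below illustrates with scikit-learn; the 0/1 label convention is the one used in this paper (0 = benign, 1 = theft).

```python
# Computing DR (Eq. 7) and FPR (Eq. 8) from the confusion matrix.
from sklearn.metrics import confusion_matrix


def detection_rate_and_fpr(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    dr = tp / (tp + fn)    # Eq. (7): detection rate / TPR
    fpr = fp / (fp + tn)   # Eq. (8): false positive rate
    return dr, fpr
```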
5 Simulation Results
This section focuses on the simulation results of our proposed hybrid model. We have evaluated the proposed model using the electricity consumption data of residential consumers. First, consumers of the same class are identified and labelled to understand their behaviour: label 0 represents a benign sample, whilst label 1 stands for a theft sample. Consumption is monitored every thirty minutes, so a total of 48 features is extracted for a single day's energy consumption. Theft samples are synthesized by applying the six theft cases to the benign class, and the synthesized theft data is concatenated with the benign class data. The concatenated data is then balanced using the K-means SMOTE technique. Before training the model, the two classes are segregated by a decision boundary; this boundary is crowded with cross pairs, which degrade the model's efficiency, so the cross pairs are identified and removed using Tomek links. In Fig. 3(a), the performance of the proposed BiGRU-BiLSTM is compared with an existing CNN-LSTM model [30]. The plots in Fig. 3(a) indicate the
Table 2. Identification of cross pairs
Total samples (before) | Identification of cross pairs | Remaining samples
5194 | 51 | 5143

Fig. 3. Analysis of the proposed BiGRU-BiLSTM and CNN-LSTM model based on AUC and PRC curve: (a) AUC of the proposed BiLSTM-BiGRU and existing GRU-LSTM model; (b) PRC of the proposed BiGRU-BiLSTM and existing CNN-LSTM model
AUC of the existing and proposed models. Initially, both models perform quite well in terms of classification accuracy, showing a higher TPR with the lowest FPR at an AUC score of 0.60. It is observed that with a small amount of data the hybrid CNN-LSTM classifies the samples with a low FPR; however, at an AUC score of 0.63 it misclassifies samples, which increases the FPR. In the AUC range 0.63–0.87, the performance of the CNN-LSTM model fluctuates with an increasing FPR. In contrast, in the AUC range 0.60–0.87, the proposed BiGRU-BiLSTM model performs much better than the CNN-LSTM, with an increased TPR. As the amount of data increases, the proposed BiGRU-BiLSTM model achieves a maximum peak of 0.93. Our hybrid model thus beats the opponent model with a high TPR and achieves a high rate of accurate classification. Moreover, a PRC curve is used to evaluate the performance of the binary classification; the performance of the proposed model is shown in Fig. 3(b). Figure 3(b) shows that a low PRC rate is not a suitable choice: it causes a high rate of misclassification, which increases the FPR and burdens the UP with an excessive on-site verification effort, an expensive practice.
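The AUC and PRC evaluation behind Fig. 3 can be sketched as below, assuming the model outputs a per-sample theft probability (e.g. the sigmoid output of the hybrid model); the function and variable names are illustrative.

```python
# Sketch of the AUC / PRC evaluation used for Fig. 3.
from sklearn.metrics import roc_curve, auc, precision_recall_curve


def roc_and_pr(y_true, y_score):
    fpr, tpr, _ = roc_curve(y_true, y_score)                         # ROC curve points
    roc_auc = auc(fpr, tpr)                                          # area under the ROC curve
    precision, recall, _ = precision_recall_curve(y_true, y_score)   # PRC points
    return (fpr, tpr, roc_auc), (precision, recall)
```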
6 Conclusion
In this paper, a hybrid BiLSTM and BiGRU based model is proposed for the detection of NTLs. Initially, an affine classification boundary is defined between the benign and theft samples. The cross pairs are identified using Tomek links and the majority class sample is then removed from the identified cross pairs. Afterwards, six theft attacks are applied on the benign class data to synthesize
theft class data. For a single benign sample, six theft variants are generated. The K-means SMOTE technique is applied to provide balanced data: it clusters the samples and oversamples the minority data. The balanced data is split into training and testing sets. The proposed BiGRU-BiLSTM model is tested on unseen samples and achieves an accuracy of 93%. An existing CNN-LSTM model is trained and tested on the same data; however, it fails to match the performance of the proposed BiGRU-BiLSTM model. Our future work will include feature-engineering-based preprocessing to make the proposed model more efficient and accurate.
Developing Innovation Capability to Improve Marketing Performance in Batik SMEs During the Covid-19 Pandemic

Alifah Ratnawati and Noor Kholis

Department of Management, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia
[email protected]
Abstract. This paper examines the role of innovation capability in improving the marketing performance of Batik SMEs in Central Java, Indonesia during the Covid-19 pandemic. The role of innovation capability (IC) is examined by analyzing its ability to mediate the influence of market orientation (MO), entrepreneurial orientation (EO), marketing capability (MC), and operational capability (OC) on marketing performance (MP). The relationship between these six concepts has not yet received serious attention from academics in Indonesia. The study involved 200 batik entrepreneurs. The results show that IC is an intervening variable in improving MP. It is therefore expected that developing IC can improve the MP of batik SMEs in Central Java during this pandemic, so that SMEs can continue to operate and develop.

Keywords: Market Orientation · Entrepreneurial Orientation · Marketing Capability · Operational Capability · Innovation Capability · Marketing Performance
1 Introduction

SMEs in Indonesia grow as a support for the community's economy and play a very strategic role in the Indonesian economy. Data from the Indonesian Ministry of Cooperatives and SMEs for 2018 show that SMEs account for 99.9% of all business units, or 62.9 million units. SMEs absorb 96.99% of total employment, 89% of them operate in the micro sector, and they contribute 62.58% to gross domestic product. The existence of SMEs is therefore very important for the pace of the Indonesian economy, and SMEs need special attention as the largest contributor to GDP, a mainstay of employment absorption, and a source of consumer goods and semi-finished goods. The Covid-19 pandemic has caused chaos in the economic sector. Not only large industries but also SME players in Indonesia have been made anxious by the pandemic, and SMEs are among the sectors most severely affected by it in Indonesia. There are around 37,000 SME players whose businesses were affected by the Covid-19
pandemic, as reported by the Ministry of Cooperatives and SMEs. Of these, 56% of SMEs reported a decrease in sales, 22% acknowledged capital difficulties, 15% had delayed distribution, and 4% reported difficulties in obtaining raw materials [1]. Batik is one of the superior products of SMEs in Central Java, Indonesia. Business continuity and survival are important factors in the struggle through today's difficult times, so it is important to examine how creative industry players adapt and survive during the pandemic and prepare for the coming new normal period. The biggest challenge for batik SMEs during the pandemic is to maintain their existence so that they are not worn down by the intense competition in the batik industry and do not experience a decline in sales as a result of Covid-19. Marketing strategy is central to the survival of a business. To dominate a market, batik SMEs must have a special strategy that supports the success of MP. MP is determined by the extent to which SMEs implement a MO. MO is a business perspective that focuses on the company's overall activities; it is also a strategy in which companies or business actors must be oriented towards consumers and their market share [2]. [3] proved that MO contributes to improving MP. However, this contradicts the results of [4], who found that MO does not determine MP. [5] stated that although several studies on the relationship between MO and performance still give different results, research on the effect of MO on MP remains attractive for the benefit of corporate strategy. Companies that can apply MO have an advantage in customer knowledge, which can become a competitive advantage, and the application of MO can provide a competitive advantage based on innovation. Entrepreneurship-oriented businesses have the ability to seek out and take advantage of unexploited market opportunities, respond to challenges, and take risks when faced with uncertain conditions. The research of [6] shows that EO has a positive and significant effect on MP. However, [7] shows that EO does not have a significant effect on MP, and [8] likewise shows that EO does not have a direct effect on MP. The gap between these findings on the MO-MP and EO-MP relationships led the researchers to propose IC as an intervening variable. IC is the company's ability to create new ideas, products, or processes. This study seeks to broaden the understanding of the relationship between MO, EO, MC, and OC in increasing IC. It is thus hoped that increased IC can improve the MP of batik SMEs in Central Java during this pandemic, so that SMEs can continue to operate and develop.
2 Literature Review

2.1 Market Orientation (MO)
MO is considered as the basis of the marketing concept and is important to contribute to improving business performance [9]. According to [10], MO is the process of finding and understanding the desires of existing and potential customers, as well as observing and overcoming the activities of existing competitors through processes and activities that develop the organization and management system. Therefore, MO can make the
company better at responding to market demand and predicting market changes well to create sustainable competitive advantage. Meanwhile, according to [6], MO is related to the company’s emphasis on creating and maintaining superior customer value. The requirement is to pay attention to the interests of stakeholders, in addition to providing norms for organizational development behavior and having an attitude of being responsive to market information. Along with increasing competitive advantage and changing customer needs, MO plays an important role because all companies realize that customers are assets that can improve company performance [11]. According to [2], MO is a process of company activities related to the creation of customer needs and satisfaction. The concept of MO was developed by [5]; they explain MO as an important factor for the company’s success. [5] described a MO as the most effective and efficient organizational culture creating the behaviors needed to create superior value for buyers and sustainable superior performance for businesses. Simultaneously, [12] created a MO as the generation of market intelligence throughout the organization relating to current and future customer needs, the dissemination of intelligence across departments, and the responsiveness of the entire organization to it. With MO, companies try to understand and take advantage of the exogenous factors in a company. A company can identify and respond to the needs of their customers and provide products and services according to their needs, thus making MO the main instrument in developing competitive advantage. Several studies on MO have emerged based on resource and literature-based views [13]. In this study, MO is considered as an intangible resource that emphasizes the company's ability to process market intelligence from customers and competitors [14]. 2.2
Entrepreneurial Orientation (EO)
Entrepreneurial is a distinctive character that distinguishes entrepreneurs from managers or employees [15]. Entrepreneurs are known to seek out and take innovative, proactive, and risk-taking action. This differs from managers or workers who tend to avoid risk. Entrepreneurship is the hallmark of entrepreneurs who are constantly looking for and identifying business opportunities and creating new value for growth. According to [16], entrepreneurship is behavior that requires individuals to take risks and overcome challenges in creating new businesses that begin by identifying, evaluating, and seizing opportunities. The goal to be achieved is to earn income and make a profit. EO has emerged as an important factor in investigating the entrepreneurial spirit of companies and their influence on strategic processes [17]. The term EO refers to a series of dimensions consisting of decision-making processes, practices, and activities that lead to the re-creation of business ventures, including the tendency to act independently, the tendency to innovate and take risks, the tendency to competitive aggression in relation to competitors and the proactivity with new opportunities [18]. EO has a very vital role in obtaining and utilizing marketing information [19]. EO is the dissemination of entrepreneurial practices and associated shared values originating from top management companies so that it can be stated that EO starts at the highest organizational level and it aims to spread the practice of identifying and exploiting opportunities [20]. [21] argued that EO is related to the personal characteristics possessed by company owners in determining a clear vision, willingness to
face challenges and risks, and the ability to create a good corporate image. Organizations with a strong EO will be more efficient in achieving a goal [22]. Entrepreneurship-oriented businesses will have the ability to find and take advantage of market opportunities that have not been exploited and answer challenges and have the willingness to take risks when facing uncertain conditions [23]. According to [7], EO is the ability of processes, practices, and decision-making that is reflected in directing companies to engage in innovative, proactive, and risk-taking behavior to improve company performance. 2.3
Marketing Capability (MC)
Marketing capability refers to the company's ability to carry out marketing activities such as market research, advertising, promotion, customer relations, sales efforts, and so forth [24]. More specifically, marketing capability is a fundamental intangible asset that gives a company the ability to use available resources and perform marketing tasks according to the desired marketing performance [14]. Marketing capability also has the potential to make organizations aware of and act in response to changes in the market such as movements made by competitors, technological developments, and revolutions [25]. Marketing capability is also operated in a more precise organizational setting. The capacity of a company related to the collection, sharing, and dissemination of market information is part of its marketing capabilities [25]. Furthermore, as stipulated by [26], it includes launching new products successfully and maintaining good customer and supplier relationships. This all leads to the company’s success. To reach this stage, companies will respond or take action based on their knowledge of the market. 2.4
Operational Capability (OC)
OC is a source of unity, integration, and can provide direction to existing resources through other operational practices. OC is defined as the ability by which a company can seek income in the short term [27]. The two main keys of OC are in marketing, namely the capabilities needed to meet customer needs and technology, and the capabilities needed to produce products and services. OC is a key component of a company’s expertise to drive the achievement of production-related goals, for example, product quality, cost control and management, product flexibility, and delivery speed and reliability [28]. More specifically, OC refers to labor costs and operational costs. Labor costs include salaries, wages, and other benefits that the company pays to its employees and managers. The higher the employee and management benefits, the better the service the company can provide to its customers. 2.5
Innovation Capability (IC)
IC is the skills and knowledge needed to effectively absorb, master, and improve existing technology to create new products [29]. IC according to [39], consists of 4 aspects, namely the capacity to develop new products that meet market needs, the capacity to apply the right process technology to produce new products, the capacity to develop and adopt new products, and process technology to meet future needs, and
capacity responding to unintentional technological activities as well as unexpected opportunities created by competitors [30]. IC is one of the most important dynamics that enable SMEs to achieve high competitiveness, both in the national and international markets. IC is the ability to continuously transform knowledge and ideas into new products, processes, and systems for the benefit of the company and its stakeholders. At this time, companies began to follow globalization and adopt innovation continuously, which highlighted the role of innovation capabilities [31]. The main reason that IC is defined as the ability to generate, accept, and implement new ideas, processes, products, and services [32] is to help a company maintain a competitive advantage for short-term survival and build its competitiveness and profits for longterm survival [33]. According to [34], IC can be described as the potential to produce innovative outputs. MO is referred to as an organizational culture that focuses on how companies obtain and utilize market information [35]. Organizational capabilities have been identified to influence the transformation of market-oriented culture (such as market orientation) into specific activities [4], and IC has been presented as the main organizational capability [36]. Companies that implement a MO culture will be able to respect customer needs and the actions of competitors to focus more on IC to meet customer needs [36]. IC facilitates the creation of superior value for customers, and therefore companies that do not have that IC cannot fully utilize the market knowledge that has been generated through a MO [37]. In previous studies, [36]; [9] show that several variables can affect the relationship between MO and performance. The study conducted by [38] found a direct relationship between MO and IC. Thus, the hypothesis is proposed as follows: H1: The more the market orientation increases, the more the innovation capability increases. EO refers to entrepreneurial strategy-making practices, management philosophy, and company-level behavior [39]. Generally, companies that have a high EO will be more innovative, proactive, and dare to take risks [18]. Lack of EO often results in a low level of IC, especially at BUMN (State-owned enterprises) [18]. More specifically, innovation is a resource-intensive process, involving significant uncertainty and risk [40]. EO allows managers to take more risks while pursuing aggressive innovation [41]. Thus, the second hypothesis is proposed as follows: H2: The better the Entrepreneurial Orientation, the better the innovation capability. Companies will create a competitive advantage by thinking of new ways to do activities in the value chain, to provide superior customer value, which is an act of innovation [42]. This suggests that innovation will lead to competitive advantage and innovation can occur in any organizational value creation activity [43]. Marketing capability is very important at the product development stage, where the needs and competition of consumers must be assessed. Furthermore, information is shared to find comprehensive new product ideas to advance further to the development stage. Previous studies have shown that companies must have adequate resources and marketing capability to successfully develop new products with innovative capabilities [44]. The
findings of [43] show that marketing capability plays a dual role in competitive strategy by influencing innovation and sustainable competitive advantage. Therefore, we believe that marketing capability affects all types of innovation made by a company. Thus, the relationship of marketing capability and IC is hypothesized as follows: H3: The more marketing capability increases; the more innovation capability increases. OC is defined as the integration of a series of complex tasks performed by a company to increase its output through the most efficient use of production capabilities, technology, and material flow [45]. OC arises from determinants such as resources and practices, as well as considerations such as skills, knowledge, and leadership [46]. The literature reveals many studies that establish a positive relationship between OC and innovation [47]. As in a study conducted by [48] who recommended that successful SMEs increase innovation by increasing OC. Thus, the fourth hypothesis is proposed as follows: H4: The more the operational Capability increases, the more the innovation capability increase. 2.6
Marketing Performance (MP)
The company has succeeded in meeting market demand by providing unique and inimitable products and services, which will increase superior performance. MP indicators are customer satisfaction, product or service quality, customer memory, customer loyalty, sales level, profit, and market share [36]; [49]. Furthermore, MP is a benchmark in assessing the success of value creation, which is a combination of strengthening IC and an in-depth understanding of MO. Various experts use different dimensions in measuring MP [50]. According to [23], MP is a factor that is often used to measure the impact of the strategy adopted by the company. Recently, some marketing research studies have validated the positive effect of MO on MP [51]. One of the measures used to assess the success of the company's strategy is MP because every company has an interest in knowing the market achievement of the product sales. The success of MP is determined by how effective the company is in creating MO [52]. MO SMEs as part of the processes and activities related to the creation and fulfilment of customer needs and satisfaction will affect the improvement of MP [11]. [3] and [53] proved that MO contributes to improving MP. [65] found that MO and business performance are positively correlated. Furthermore, [54] found that there is a positive correlation between the business performance of SMEs and MO. Thus, the hypothesis is proposed as follows: H5: The more the market orientation increases, the more the marketing performance increase. Research broadly shows that an EO tends to have positive business results for companies. [55] argue that an EO encourages a learning orientation in organizations. EO will encourage experimentation in introducing new products or services, new technology in developing new processes, and willingness to take cost risks. Taking
risks and innovation as an effort to gain competitive advantage are reflected in efforts to improve MP [56]. The EO reflects the level of risk-taking, proactivity, and the company's aggressiveness towards innovation. Furthermore, EO will enhance organizational transformation and reform which can help build new competencies that drive improved MP [6]. Firms with high EO are more aggressive in entering new markets which are characterized by high risk and consequently require more intensive organizational learning [57]. A study conducted [58] revealed that EO has a positive effect on organizational learning which in turn has a positive effect on MP. The results of research conducted by [6] indicate that EO has a positive and significant effect on MP. Furthermore, [59] revealed that the type of transformational leadership and EO contributed to the achievement of high MP. Likewise, in analyzing the agribusiness industry in Malaysia, [71] stated that EO has a positive effect on MP. Thus, the hypothesis is proposed as follows: H6: The better the entrepreneurial orientation, the better the marketing performance. Market sensing and linkage with partners are some of the marketing capabilities that have been associated with positive organizational results. Others are customer, functional, and network capabilities [60] which can centrally be part of a marketing strategy that aims to improve superior MP. The company resource-based view proposes that intangible assets such as marketing capability have the potential to encourage competitive advantage, thereby increasing company performance in an industry. Marketing capability has been linked to business strategy, MO, and has become a complementary asset in driving MP. [61] investigated the effect of MC on firm performance. The results showed that MC had a positive effect on organizational performance. Research conducted by [25] found that MC has a positive and significant effect on MP. Therefore, the hypothesis is proposed as follows: H7: The more the marketing capability increases, the more the marketing performance increases. Previous literature in the management and IT domain clearly shows that a strong OC can contribute positively to achieving and maintaining a competitive firm performance [62]. The strategic literature in manufacturing highlights the role of OC in performance. The positive effect of OC on firm performance has been documented in some ways, such as by increasing revenue, reducing costs associated with product development and delivery, and improving the quality of the company's existing processes and products [63]. Superior OC is recognized as a driver to achieve competitive advantage and improve work results [64]. Thus, the hypothesis is proposed as follows: H8: The more the operational capability increases, the more the marketing performance increases. IC is very important to improve company performance. IC can improve performance by reducing production costs, thereby increasing profit margins [31]. SMEs can receive more benefits if they can develop, communicate, embrace, and exploit innovation. The innovation dimension consists of product innovation, process innovation,
and market innovation which is thoroughly studied by [65]. SMEs can gain more profits or increase profitability if SMEs can develop, communicate, buy, and develop IC [66]. There have been many studies that show a positive relationship between IC and firm performance [67]. Research conducted by [32] concluded that innovation ability can significantly increase firm performance. This is in line with research conducted by [31], that IC has a direct effect on firm performance. Thus, the hypothesis is proposed as follows: H9: The more the Innovation Capability increases, the more Marketing Performance increases.
3 Research Methodology

3.1 Variable Measurement
MO is a strategy in which companies or business actors must be oriented towards consumers and their market share [2]; it was measured with 6 items, following [5]. EO comprises the characteristics and values embraced by the entrepreneur, namely never giving up, taking risks, speed, and flexibility; it was measured with 5 items: business experience, proactiveness, courage to take risks, flexibility, and being anticipatory. MC refers to the company's ability to carry out marketing activities such as market research, advertising, promotion, customer relations, sales efforts, and so forth [13]; it was measured with 4 items. OC was measured with 4 items: service process management, service performance management, IT infrastructure, and utilization of the most recent technology available. IC was measured with 4 items: developing new products, expanding the product range, improving the quality of existing products, and increasing production flexibility [32]. MP measures the success of strategies in marketing products; it was measured with 3 items: sales growth, customer growth, and profitability (Fig. 1).
Fig. 1. Research framework
3.2 Respondent
Respondents in this study were entrepreneurs or owners of batik SMEs in Central Java, Indonesia. The data was obtained by distributing questionnaires via google form and by
meeting directly with batik entrepreneurs. The distributed questionnaires were then collected, and only 200 questionnaires could be analyzed. Questionnaires were distributed from December 2020 to January 2021, during which Central Java faced the COVID-19 pandemic.

3.3 Data Analysis Technique
This research used a two-stage regression analysis technique. The first stage examined the effect of MO, EO, MC, and OC on IC. The second stage tested the effect of MO, EO, MC, OC, and IC on MP. The regression analysis was performed using SPSS 23, and the Sobel test was used to examine whether IC acts as a mediating variable.
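The same two-stage estimation and Sobel test can be reproduced outside SPSS. The sketch below, in Python with statsmodels and SciPy, is a minimal illustration assuming a data frame df with one row per respondent and item-averaged scores in columns named MO, EO, MC, OC, IC, and MP; the column names and the helper functions are illustrative, not part of the original study.

```python
# Minimal sketch of the two-stage regression and Sobel mediation test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def fit_ols(df, outcome, predictors):
    """Ordinary least squares with an intercept, mirroring the two SPSS models."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[outcome], X).fit()

def sobel(a, se_a, b, se_b):
    """Sobel z statistic and two-tailed p-value for an indirect effect a*b."""
    z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 2 * (1 - stats.norm.cdf(abs(z)))

# df: survey data frame with one row per respondent (assumed column names).
# m1 = fit_ols(df, "IC", ["MO", "EO", "MC", "OC"])          # stage 1
# m2 = fit_ols(df, "MP", ["MO", "EO", "MC", "OC", "IC"])    # stage 2
# z, p = sobel(m1.params["MO"], m1.bse["MO"], m2.params["IC"], m2.bse["IC"])
```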
4 Results

Table 1 shows the two regression models. Both models show good goodness of fit, because the ANOVA test yields an F-significance of 0.000 for each. The coefficient of determination for the first model, Adj. R2 = 0.501, means that 50.1% of the variation in IC is explained by MO, EO, MC, and OC; the remaining 49.9% is explained by other variables outside the model. The coefficient of determination for the second model, Adj. R2 = 0.581, means that 58.1% of the variation in MP is explained by MO, EO, MC, OC, and IC; the remaining 41.9% is explained by other variables outside the model.
Table 1. Hierarchical regression analysis

Model  Hypothesis  Regression  Unstd b  std b  SE     p-value  VIF    Results
1      H1          MO → IC     0.115    0.168  0.043  0.008**  1.557  Accepted
       H2          EO → IC     0.165    0.233  0.046  0.000**  1.705  Accepted
       H3          MC → IC     0.268    0.310  0.044  0.000**  1.041  Accepted
       H4          OC → IC     0.316    0.371  0.048  0.000**  1.279  Accepted
2      H5          MO → MP     0.074    0.060  0.072  0.307    1.635  Rejected
       H6          EO → MP     0.084    0.066  0.078  0.287    1.816  Rejected
       H7          MC → MP     0.162    0.105  0.079  0.042*   1.238  Accepted
       H8          OC → MP     0.421    0.276  0.087  0.000**  1.562  Accepted
       H9          IC → MP     0.841    0.469  0.118  0.000**  2.046  Accepted

Model 1: Adj R2 = 0.501, F = 50.988, p-value = 0.000; Model 2: Adj R2 = 0.581, F = 56.120, p-value = 0.000
Notes: * p < 0.05; ** p < 0.01
MO = Market orientation, EO = Entrepreneurial orientation, MC = Marketing capability, OC = Operational capability, IC = Innovation capability, MP = Marketing performance
The results of the regression analysis in model 1 show that four variables are used to test the increase in IC. The SPSS regression results show that the variables that can increase IC are MO (std b = 0.168), EO (std b = 0.233), MC (std b = 0.310), and OC (std b = 0.371), all of which have a p-value of less than 0.01. It can be concluded that hypotheses H1, H2, H3, and H4 are accepted. Thus, MO, EO, MC, and OC have a positive and significant effect on IC. In model 2, five variables, namely MO, EO, MC, OC, and IC, are used to test how they affect MP. The regression results show that the effect of MO on MP is insignificant (std b = 0.060, p-value 0.307). Thus, Hypothesis 5 is rejected, which means that an increase in MO is proven unable to improve MP. A similar result holds for EO: the effect of EO on MP is shown by std b = 0.066 and p-value 0.287, so hypothesis 6 is rejected and an increase in EO is proven unable to improve MP. The regression results show that the variables that affect MP are MC (std b = 0.105, p-value 0.042), OC (std b = 0.276, p-value 0.000), and IC (std b = 0.469, p-value 0.000). Therefore, H7, H8, and H9 are accepted. This result implies that MP can be increased through increasing MC, OC, and IC. The mediation test procedure proposed by Sobel (1982) was adopted to test the mediating effect of Innovation Capability (Table 2).

Table 2. Parameter estimates for the path: indirect effects (Sobel test)

Path          Sobel test  p-value
MO → IC → MP  2.504       0.0061*
EO → IC → MP  3.204       0.0007*
MC → IC → MP  4.630       0.0000*
OC → IC → MP  4.836       0.0000*

Note: * p < 0.01
Table 2 shows that all the p-values in the Sobel test are less than 0.01. This implies that innovation capability mediates the relationships of market orientation, entrepreneurial orientation, marketing capability, and operational capability with marketing performance.
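As a consistency check, the Sobel statistic for the MO path can be recomputed from the unstandardized coefficients and standard errors in Table 1 (a = 0.115, SE_a = 0.043 for MO → IC; b = 0.841, SE_b = 0.118 for IC → MP), which reproduces the first row of Table 2:

```latex
z = \frac{a\,b}{\sqrt{b^{2}SE_a^{2} + a^{2}SE_b^{2}}}
  = \frac{0.115 \times 0.841}{\sqrt{0.841^{2}(0.043)^{2} + 0.115^{2}(0.118)^{2}}}
  \approx 2.50
```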
5 Discussion

The results show that IC can mediate the relationships of MO, EO, MC, and OC with MP. The increase in IC is marked by the company's ability to create products with the latest Batik designs and models, to expand the Batik business area, and to develop Batik production processes more effectively and efficiently.
Meanwhile, the increase in MP is indicated by the increasing Batik business growth and reaching the target, increasing customers, profits, and Batik sales. During a pandemic, the needs and desires of customers are important things that need to be understood by SME actors. SMEs must understand that many consumers limit their activities outside and prefer conducting all activities from home. Understanding the needs and desires of consumers is one manifestation of MO. MO is very important because it is proven to be able to increase the degree of IC. This is in line with research conducted by [48]. In this study, MO cannot directly affect MP but indirectly can affect MP through IC. EO can be seen from the actors of Batik SMEs who have experience in business, are proactive in improving performance, are willing to take risks, are flexible, and are anticipatory to sales problems that will arise. Batik SMEs that have a high EO cause a high IC. On the other hand, SMEs that have less EO will result in low IC levels. This supports the research conducted by [24]. Meanwhile, in this study, EO cannot directly affect MP, but indirectly can affect MP through IC. MC of SME actors play an important role because it can increase IC and MP. Increasing MC can be done by increasing the ability of SME actors in setting prices according to the target market, rapidly developing new Batik products that are not yet in the market, and increasing the ability to communicate and market the Batik products. Superior OC is proven to increase IC and MP. The increase of IC and MP is carried out by OC by improving service process management and service performance management of SMEs, increasing the ability to use IT infrastructure and the ability to utilize the latest technology.
6 Managerial Implications

During the Covid-19 pandemic, IC proved to be a variable that can improve the marketing performance of SMEs in Central Java, Indonesia. Therefore, SMEs need to increase IC to survive and develop during this pandemic. To increase IC, SMEs can create products with the latest Batik designs and models. In addition, because many customers limit their activities outside the home, it is easy for them to find information about batik through the media; companies therefore need to create Batik products with the latest designs and models that match customer desires and needs, so that consumers are interested and willing to buy. SMEs also need to find new ideas to expand the Batik business area, since during this pandemic it is very easy for consumers to find Batik products online, so the Batik market area can expand beyond one particular location. SMEs also need to improve their ability to develop the Batik production process more effectively and efficiently so that they can thrive in this pandemic. It is therefore expected that the marketing performance of SMEs will increase and recover from the business downturn caused by the pandemic.
References 1. Rosyada, M., Wigiawati, A.: Strategi Survival Umkm Batik Tulis Pekalongan Di Tengah Pandemi Covid-19. J. Bisnis dan Kaji. Strateg. Manaj. 4, 189–214 (2020) 2. Uncles, M.: Market orientation. Austr. J. Manage. 25(2), i–ix (2000). https://doi.org/10. 1177/031289620002500201 3. Protcko, E., Dornberger, U.: The impact of market orientation on business performance The case of Tatarstan knowledge-intensive companies (Russia). Probl. Perspect. Manag. 12 (4), 225–231 (2014) 4. Han, J.K., Kim, N., Srivastava, R.K.: Market orientation and organizational performance: is innovation a missing link? J. Mark. 62(4), 30–45 (1998). https://doi.org/10.1177/ 002224299806200403 5. Narver, J.C., Slater, S.F.: The effect of a market orientation on business profitability. J. Mark. 54(4), 20 (1990). https://doi.org/10.2307/1251757 6. Masa’deh, R., Al-Henzab, J., Tarhini, A., Obeidat, B.Y.: The associations among market orientation, technology orientation, entrepreneurial orientation and organizational performance. Benchmarking 25(8), 3117–3142 (2018). https://doi.org/10.1108/BIJ-02-2017-0024 7. Aulia, R., Astuti, M., Ridwan, H.: Meningkatkan Kinerja Pemasaran melalui Orientasi Pasar dan Orientasi Kewirausahaan. J. Ilm. Manaj. dan Bisnis 20(1), 27–38 (2019). https://doi.org/ 10.30596/jimb.v20i1.2397 8. Setyawati, H.A.: Pengaruh Orientasi Kewirausahaan Dan Orientasi Pasar Terhadap Kinerja Perusahaan Melalui Keunggulan Bersaing Dan Persepsi Ketidakpastian Lingkungan Sebagai Prediksi Variabel Moderasi (Survey pada UMKM Perdagangan di Kabupaten Kebumen). Fokus Bisnis Media Pengkaj Manaj. dan Akunt. 12(2), 20–32 (2013) 9. Kirca, A.H., Jayachandran, S., Bearden, W.O.: Market orientation: a meta-analytic review and assessment of its antecedents and impact on performance. J. Mark. 69(2), 24–41 (2005). https://doi.org/10.1509/jmkg.69.2.24.60761 10. Na, Y.K., Kang, S., Jeong, H.Y.: The effect of market orientation on performance of sharing economy business: focusing on marketing innovation and sustainable competitive advantage. Sustain 11(3) (2019), https://doi.org/10.3390/su11030729 11. Puspaningrum, A.: Market orientation, competitive advantage and marketing performance of small medium enterprises ( SMEs). J. Econ. Bus. Accountancy Ventura 23(1), 19–27 (2020) 12. Kohli, A.K., Jaworski, B.J.: Market orientation: the construct, research propositions, and managerial implications. J. Mark. 54(2), 1 (1990) 13. Morgan, N.A., Slotegraaf, R.J., Vorhies, D.W.: Linking marketing capabilities with profit growth. Int. J. Res. Mark. 26(4), 284–293 (2009) 14. Morgan, N.A.: Marketing and business performance. J. Acad. Mark. Sci. 40(1), 102–119 (2012). https://doi.org/10.1007/s11747-011-0279-9 15. Carland, J.W., Hoy, F., Boulton, W.R., Carland, J.A.C.: Differentiating entrepreneurs from small business owners: a conceptualization. Acad. Manag. Rev. 9(2), 354 (1984) 16. Ramos-González, M., Rubio-Andrés, M., Sastre-Castillo, M.: Building corporate reputation through sustainable entrepreneurship: the mediating effect of ethical behavior. Sustainability 9(9), 1663 (2017). https://doi.org/10.3390/su9091663 17. Martens, C.D.P., Carneiro, K.D.A., Martens, M.L., da Silva, D.: Relationship between entrepreneurial orientation and project management maturity in Brazilian software firms. Rev. Ibero-Americana Estratégia 14(02), 72–91 (2015) 18. Lumpkin, G.T., Dess, G.G.: Academy of management review. Acad. Manag. Rev. 21(1), 135–172 (1996)
19. Ismail, N.A.: The roles of international entrepreneur orientation and geographical scope level to determine international performance: A case in the Malaysian halal food industry. Int. J. Entrep. 20(1), 129–142 (2016) 20. Lumpkin, G.T., Dess, G.G.: Linking two dimensions of entrepreneurial orientation to firm performance: the moderating role of environment and industry life cycle. J. Econ. Econ. Educ. Res. 16(5), 429–251 (2001) 21. Wales, W.J.: Entrepreneurial orientation: a review and synthesis of promising research directions. Int. Small Bus. J. Res. Entrep. 34(1), 3–15 (2016) 22. Nasip, S., Fabeil, N.F., Buncha, M.R., Hui, J.N.L., Sondoh Jr, S.L., Halim, D.N.P.A.: The influence of entrepreneurial orientation and social capital on the business performance among women entrepreneurs along West Coast Sabah, Malaysia. In: Proc. Int. Conf. Econ., vol. 2017, pp. 377–395 (2017) 23. Mohammad, I.N., Massie, J.D.D., Tumewu, F.J., Program, M.: The effect of entrepreneurial orientation and innovation capability towards firm performance in small and medium enterprises (Case Study: Grilled Restaurants In Manado). J. EMBA J. Ris. Ekon. Manajemen, Bisnis dan Akunt. 7(1) (2018). https://doi.org/10.35794/emba.v7i1.22255 24. Vorhies, D.W., Morgan, N.A.: Capabilities for sustainable competitive advantage. J. Mark. 69(January), 80–94 (2005) 25. Kyengo, J.M., Muathe, S.M.A., Kinyua, G.M.: Marketing capability and firm performance: an empirical analysis of food processing firms in Nairobi City County, Kenya. Strateg. J. Bus. Chang. Manag. 6(1), 544–555 (2019) 26. Theodosiou, M., Kehagias, J., Katsikea, E.: Strategic orientations, marketing capabilities and firm performance: an empirical investigation in the context of frontline managers in service organizations. Ind. Mark. Manag. 41(7), 1058–1070 (2012) 27. Winter, S.G.: Understanding dynamic capabilities. Strateg. Manag. J. 24(10 SPEC ISS), 991–995 (2003). https://doi.org/10.1002/smj.318 28. Terjesen, S., Patel, P.C., Covin, J.G.: Alliance diversity, environmental context and the value of manufacturing capabilities among new high technology ventures. J. Oper. Manag. 29(1– 2), 105–115 (2011). https://doi.org/10.1016/j.jom.2010.07.004 29. Romijn, H., Albaladejo, M.: Determinants of innovation capability in small electronics and software firms in southeast England. Res. Policy 31(7), 1053–1067 (2002) 30. Adler, P., Shenbar, A.: Adapting your technological base: the organizational challenge. Sloan Manage. Rev. 32(1), 25–37 (1990) 31. Shafi, M.: Sustainable development of micro firms: examining the effects of cooperation on handicraft firm’s performance through innovation capability. Int. J. Emerg. Mark. (2020) 32. Calantone, R.J., Tamer, C.S., Yushan, Z.: Learning orientation, firm innovation capability, and firm performance. Ind. Mark. Manag. 31, 515 (2004) 33. Hughes, M., Martin, S.L., Morgan, R.E., Robson, M.J.: Realizing product-market advantage in high-technology international new ventures: the mediating role of ambidextrous innovation. J. Int. Mark. 18(4), 1–21 (2010). https://doi.org/10.1509/jimk.18.4.1 34. Neely, A., Filippini, R., Forza, C., Vinelli, A., Hii, J.: A framework for analysing business performance, firm innovation and related contextual factors: perceptions of managers and policy makers in two European regions. Integr. Manuf. Syst. 12(2), 114–124 (2001) 35. Dela Novixoxo, J., Pomegbe, W.W.K., Dogbe, C.S.K.: Market orientation, service quality and customer satisfaction in the public utility companies. Eur. J. Bus. Manag. 
10(30), 37–46 (2018) 36. Boso, N., Cadogan, J.W., Story, V.M.: Entrepreneurial orientation and market orientation as drivers of product innovation success: a study of exporters from a developing economy. Int. Small Bus. J. 31(1), 57–81 (2013). https://doi.org/10.1177/0266242611400469
37. Tutar, H., Nart, S., Bingöl, D.: The effects of strategic orientations on innovation capabilities and market performance: the case of ASEM. Proc. Soc. Behav. Sci. 207, 709–719 (2015). https://doi.org/10.1016/j.sbspro.2015.10.144 38. Akman, G., Yilmaz, C.: Innovative capability, innovation strategy and market orientation: An empirical analysis in Turkish software industry. Manag. Innov. 12(1), 139–181 (2019) 39. Anderson, B.S., Covin, J.G., Slevin, D.P.: Understanding the relationship between entrepreneurial orientation and strategic learning capability: an empirical investigation. Strateg. Entrep. J. 3, 218–240 (2009). https://doi.org/10.1002/sej 40. Hauser, J., Tellis, G.J., Griffin, A.: Research on innovation: a review and agenda for marketing science. Mark. Sci. 25(6), 687–717 (2006). https://doi.org/10.1287/mksc.1050. 0144 41. Naldi, L., Nordqvist, M., Sjöberg, K., Wiklund, J.: Entrepreneurial orientation, risk taking, and performance in family firms. Fam. Bus. Rev. 20(1), 33–47 (2007) 42. Porter, M.E.: The competitive advantage of nations. J. Multicult. Couns. Devel. (2001) 43. Weerawardena, J.: Innovation-based competitive strategy. The role of marketing capability in innovation-based competitive strategy, pp. 37–41 (2011) 44. Calantone, R.J., Di Benedetto, C.A., Divine, R.: Organisational, technical and marketing antecedents for successful new product development. R&D Manag. 23(4), 337–351 (1993). https://doi.org/10.1111/j.1467-9310.1993.tb00839.x 45. Dutta, S., Narasimhan, O., Rajiv, S.: Success in high-technology markets: Is marketing capability critical? Mark. Sci. 18(4), 547–568 (1999) 46. Wu, S.J., Melnyk, S.A., Flynn, B.B.: Operational capabilities: the secret ingredient. Decis. Sci. 41(4), 721–754 (2010) 47. Muzividzi, D.K., Mbizi, R., Mukwazhe, T.: An analysis of factors that influence internet banking adoption among intellectuals: case of Chinhoyi University of Technology. J. Contemp. Res. Bus. 4(11), 350–369 (2013) 48. Prabowo, H., Abdinagoro, S.B.: The role of effectual reasoning in shaping the relationship between managerial-operational capability and innovation performance. Manag. Sci. Later 11, 305–314 (2021). https://doi.org/10.5267/j.msl.2020.8.002 49. Narastika, A.A.R., Yasa, N.N.K.: Peran Inovasi Produk dan Keunggulan Bersaing Memediasi Pengaruh Orientasi Pasar Terhadap Kinerja Pemasaran. J. Ilmu Manaj. 7, 12 (2017) 50. Affandi, A., Erlangga, H., Sunarsi, D.: The Effect of Product Promotion and Innovation Activities on Marketing Performance in Middle Small Micro Enterprises in Cianjur (2019) 51. Liao, S.H., Chang, W.J., Wu, C.C., Katrichis, J.M.: A survey of market orientation research (1995–2008). Ind. Mark. Manag. 40(2), 301–310 (2011) 52. Cravens, D.W., Piercy, N.F., Baldauf, A.: Management framework guiding strategic thinking in rapidly changing markets. J. Mark. Manag. 25(1–2), 31–49 (2009) 53. Riswanto, A., Rasto, R., Hendrayati, H., Saparudin, M., Abidin, A.Z., Eka, A.P.B.: The role of innovativeness-based market orientation on marketing performance of small and mediumsized enterprises in a developing country. Manag. Sci. Lett. 10(9), 1947–1952 (2020) 54. Pelham, A.N.: Market orientation and other potential influences on performance in small and medium sized manufacturing firms. J. Chem. Inf. Model. (2000) 55. Rhee, J., Park, T., Lee, D.H.: Drivers of innovativeness and performance for innovative SMEs in South Korea: mediation of learning orientation. Technovation 30(1), 65–75 (2010). https://doi.org/10.1016/j.technovation.2009.04.008 56. 
Buli, B.M.: Entrepreneurial orientation, market orientation and performance of SMEs in the manufacturing industry: evidence from Ethiopian enterprises Bereket. Manag. Res. Rev. 40 (3) (2017)
57. Chiva, R., Alegre, J.: Organizational learning capability and job satisfaction: an empirical assessment in the ceramic tile industry. Br. J. Manag. 20(3), 323–340 (2009) 58. Wang, C.L.: What comes first: market or entrepreneurial orientation? Strateg. Dir. 32(10), 7– 9 (2016). https://doi.org/10.1108/sd-07-2016-0110 59. Yang, C.: The relationships among leadership styles, entrepreneurial orientation, and business performance. Manag. Glob. Transitions 6(3), 257–275 (2008) 60. Mitrega, M., Ramos, C., Forkman, S., Henneberg, S.C.: Networking capability, networking outcomes, and company performance: a nomological model including moderation effects. Ind. Mark. Manag. 41(5), 739–751 (2012) 61. Karanja, S.C., Muathe, S.M.A., Thuo, J.K.: The effect of marketing capabilities and distribution strategy on performance of msp intermediary organisations’ in Nairobi County, Kenya. Bus. Manage. Strateg. 5(1), 197 (2014). https://doi.org/10.5296/bms.v5i1.5723 62. Drnevich, P.L., Kriauciunas, A.P.: Clarrifying the conditions and limits of the contributions of ordinary and dynamic capabilities to realtive firm performance. Strateg. Manag. J. 279, 254–279 (2011). https://doi.org/10.1002/smj 63. Lai, F., Li, D., Wang, Q., Zhao, X.: The information technology capability of third-party logistics providers: A resource-based view and empirical evidence from China. J. Supply Chain Manag. 44(3), 22–38 (2008). https://doi.org/10.1111/j.1745-493X.2008.00064.x 64. Sabara, Z., Soemarno, S., Leksono, A.S., Tamsil, A.: The effects of an integrative supply chain strategy on customer service and firm performance: an analysis of direct versus indirect relationships. Uncertain Supply Chain Manag. 7(3), 517–528 (2019) 65. Rosli, M.M., Sidek, S.: The impact of innovation on the performance of small and medium manufacturing enterprises: evidence from Malaysia. J. Innov. Manag. Small Mediu. Enterp. 2013, 1–16 (2013). https://doi.org/10.5171/2013.885666 66. Aziz, N.N.A., Samad, S.: Innovation and competitive advantage: moderating effects of firm age in foods manufacturing SMEs in Malaysia. Proc. Econ. Finan. 35, 256–266 (2016). https://doi.org/10.1016/S2212-5671(16)00032-0 67. Naranjo-Valencia, J.C., Jiménez-Jiménez, D., Sanz-Valle, R.: Studying the links between organizational culture, innovation, and performance in Spanish companies. Rev. Latinoam. Psicol. 48(1), 30–41 (2016). https://doi.org/10.1016/j.rlp.2015.09.009
Muthmai’nnah Adaptive Capability: A Conceptual Review

Asih Niati (1,2), Olivia Fachrunnisa (1), and Mohamad Sodikin (1,3)

1 Department of Management, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia
{asihniati,msodikin}@std.unissula.ac.id, [email protected]
2 Department of Management, Faculty of Economics, Universitas Semarang, Semarang, Indonesia
3 Institute of Economics Science Cendekia Karya Utama, Semarang, Indonesia
Abstract. The aim of this study is to develop a new concept of personal dynamic ability as reflected in adaptive ability. When a person faces changes and unpredictable work situations, new challenges to improve performance are created. The design methodology uses Publish or Perish (PoP) software to map articles on personal adaptive ability, which are then grouped according to the researchers' criteria to derive a new concept. The findings from the literature grouping show that most studies discuss the optimization of physical capacities (skills, knowledge, and attitudes) and that personal qualities involving elements of individual spirituality have not been addressed. This article therefore recommends an intervention of the Islamic value of Muthma'innah in adaptive capability, yielding the new concept of Muthmai'nnah Adaptive Capability (MAC). Through the results of this research, the researchers can contribute to the development of Islamic human values in organizations.

Keywords: Adaptive Capability · Muthma'innah
1 Introduction

Human capital is the capital that human resources possess in the form of knowledge, skills, attitudes, and actions for developing better organizational activities [1]. The components of human resources are not limited to knowledge and skills; they also include the ability to take the best action within the individual, to always be adaptive and flexible, and to have resilience in interacting, so that maximum performance can be achieved. To give the organization the opportunity to grow and develop, the achievement of employee performance needs to be maximized [2]. One of the keys to the strength and sustainability of a business lies in its human resources. The adaptive ability of individuals is one of the human values that anyone should possess in an era of disruption and uncertainty. Organizations need employees who can bring about better change and have foresight by evaluating the past against existing potentials and weaknesses [3]. In addition to adaptive behavior, the
ability of employees to manage their own emotions and those of other people is needed in carrying out organizational activities. Balancing the conflicting elements within a person is very important for living a meaningful life and sustaining commitment in times of difficulty [4]; through the management of cognitive abilities, people are able to control their feelings and emotions [5]. There are still many weaknesses in the literature on the adaptive ability of employees, so an evaluation is necessary. More organizations direct the adaptation of employees towards behavior for external gain. The adaptability of employees is still oriented towards physical and material optimization and has not yet addressed elements of spirituality, such as:

1. How individuals can deal with change by using psychological assets so that they obtain a calm life in the face of unpredictable situations.
2. How individuals become aware that change is something that comes from Allah SWT, so that they need high spirituality to be adaptive.
3. How individuals can build mutual competence and a sense of empathy with the people around them in the face of uncertain changes, rather than being purely individual-oriented.

Given these weaknesses, it is necessary to develop humanist values with new colors and to intervene with values that give transcendental meaning, namely muthma'innah, so that a person will interpret life as "grace". Because of these conceptual weaknesses in the literature, it is very important to combine muthma'innah with the adaptive ability of employees.
2 Literature Review Al-Nafs Muthma’innah is the gift of the perfection of the heart from Allah SWT, to obtain purity and eliminate all bad things so that a peaceful mind will be planted even though it is experiencing failure in the world [6]. A soul filled with serenity will give you a state of high spiritual development. A person will be in a situation of harmony, happiness, comfort and peace even though he is actually in a state of failure. To give meaning to work, employees must be able to overcome their feelings of stress so that they can bring their body, mind and spirit to the organization [7]. Al-Nafs Muthma’innah will bring self-confidence to Allah SWT and bring a new spirit to peace of mind. Being able to think rationally and master the situation is a balance of mind to stay calm and peaceful (QS. Al-Fajr (89): 27–28). Allah SWT has bestowed his servants and used their potential to increase their faith. Calmness will indicate a high level of equanimity. Its implementation will involve a much more complex self-transcendence [4]. When someone faces difficulties, the elements of balance, inner contradiction and commitment will give meaning. Calmness in life is very important and must be sought by humans, especially Muslims, so that they can learn to understand how to get calm as it is said in QS. ArRa’d (13): 28 which means “Those who have believed and whose hearts are assured by the remembrance of Allah. Unquestionably, by the remembrance of Allah hearts are assured.”
To deal with change effectively, people must learn to adapt. Through flexibility, a person is able to face high-pressure situations and environments and can think quickly to respond to changes [7]. Muthma'innah gives a new dimension to a person's adaptive ability. With its positive power and personality, it provides new dimensions, namely istiqomah, qonaah, mardhatillah, and amanah. An individual with a calm soul is able to respond to all problems in life from experience and to broaden their point of view, focusing more on the positive than the negative aspects of the difficult situations they experience.
3 Method

The methodology used in this article maps bibliometric data using the Publish or Perish (PoP) and VOSviewer software. To map the literature with PoP, the researchers used the keyword Interpersonal Adaptability, drawing on Google Scholar and Scopus for literature published in 2000–2020; the mapping focuses only on articles ranked Q1 or Q2, while the VOSviewer software is used only to look for research opportunities on the topic. VOSviewer provides a variety of useful visualizations, analyses, and investigations [8]. The results are compiled in Research Information Systems (RIS) format so as to include all important article information, such as paper title, author name and affiliation, abstract, keywords, and references.
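A small sketch of this screening step is given below, assuming the PoP results have been exported to a CSV file; the file name, column names, and the quartile lookup table are hypothetical placeholders rather than details reported by the authors.

```python
# Minimal sketch: filter a Publish or Perish export to 2000-2020 articles in
# Q1/Q2 journals and write the keepers as RIS records for further mapping.
import csv

SCIMAGO_Q = {"example journal": "Q1"}   # hypothetical SJR quartile lookup

def screen(pop_csv="pop_export.csv", out_ris="selected.ris"):
    with open(pop_csv, newline="", encoding="utf-8") as f, \
         open(out_ris, "w", encoding="utf-8") as out:
        for row in csv.DictReader(f):
            year = int(row.get("Year") or 0)
            quartile = SCIMAGO_Q.get(row.get("Source", "").lower(), "")
            if 2000 <= year <= 2020 and quartile in ("Q1", "Q2"):
                out.write("TY  - JOUR\n")              # standard RIS record tags
                out.write(f"TI  - {row['Title']}\n")
                out.write(f"AU  - {row['Authors']}\n")
                out.write(f"PY  - {year}\n")
                out.write("ER  - \n\n")
```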
4 Results and Analysis
In this article, the authors discuss the data mapping process through which a new concept is formed by integrating personal adaptive ability with Muthma'innah. To obtain definitions from the literature, the researchers used the Publish or Perish (PoP) software to map the required literature by examining titles that match the discussion. Literature from 2000 to 2020 was retrieved from Google Scholar and Scopus. After selective screening of the existing literature, several issues had to be addressed: duplicated articles, mismatches with the intended theme, and a mixture of literature types (books, proceedings, articles), some of which were published in questionable journals. Appropriate data mapping was therefore carried out, focusing only on articles indexed in the Q1 and Q2 rankings. The results of the data mapping from Google Scholar and Scopus can be seen in Table 1. The metadata of the resulting 8 (eight) articles are detailed in Table 2. Using the VOSviewer software, it was found that adaptive ability was widely researched in 2015, but few studies have discussed employees' interpersonal adaptive ability in relation to the value of inner calm, which opens new opportunities for research. The results of the visualization mapping are shown in Fig. 1.
Table 1. Metadata results
No. | Source                         | Initial search results | Initial search results by topic
1   | Google Scholar                 | 999                    | 154
2   | Scopus                         | 26                     | 7
3   | Final search based on Scimago  | 8                      |
Fig. 1. Mapping visualization
Based on the results of the search through Publish or Perish (PoP), the concept of employees' interpersonal adaptive ability yields a critical review as listed in Table 3. From this background, individuals need the ability and the behavior to use adaptability based on al-Nafs Al-Muthma'innah, in the form of a soul that is calm, faithful, pious and clean from the impulses of lust. In accordance with Surah al-Fajr verses 27–30, a calm soul becomes the foundation of life in treating mental illness when experiencing failure, anxiety and restlessness [17]. To deal with change effectively, an individual must learn how to adapt. By using flexibility, a person is able to deal with stressful situations and environments. A person grounded in Al-Nafs Muthma'innah is able to deal with change effectively and can think quickly to respond to change [7]. By balancing psyche and spirit, a calm soul is produced as a basic foundation formed from faith, devotion, belief and purity, encouraging action toward successful work. As mentioned in QS. Al-Fath (48): 4: "It is He who sent down tranquillity into the hearts of the believers that they would increase in faith along with their [present] faith. And to Allah belong the soldiers of the heavens and the earth, and ever is Allah Knowing and Wise".
Table 2. Details of the metadata results
No. | Author | Title | Journal | Rank
1 | [9]  | Adaptability in the Workplace: Development of a Taxonomy of Adaptive Performance | Journal of Applied Psychology 2000, Vol. 85, No. 4, 612–624 | Q1
2 | [10] | The Relative Importance of Task and Contextual Performance Dimensions to Supervisor Judgments of Overall Performance | Journal of Applied Psychology 2001, Vol. 86, No. 5, 984–996 | Q1
3 | [11] | Employability during unemployment: Adaptability, career identity and human and social capital | Journal of Vocational Behaviour, Vol. 71, No. 2 (2007), 247–264 | Q1
4 | [12] | The Relative Importance of Task and Contextual Performance Dimensions to Supervisor Judgments of Overall Performance | Journal of Applied Psychology 2010, Vol. 95, No. 1, 174–182 | Q1
5 | [13] | A Multilevel Model of Transformational Leadership and Adaptive Performance and the Moderating Role of Climate for Innovation | Group & Organization Management 2010, Vol. 35(6), 699–726 | Q1
6 | [14] | When does adaptive performance lead to higher task performance? | Journal of Organizational Behavior (2011) | Q1
7 | [15] | Individual and career adaptability: Comparing models and measures | Journal of Vocational Behavior 83 (2013), 130–141 | Q1
8 | [16] | Employee Adaptive Performance and Its Antecedents: Review and Synthesis | Human Resource Development Review, Vol. 18, No. 3 (2019), 294–324 | Q1
So, conceptually, the researchers attempt to create a new model for responding to change with a calm soul in order to attain divine truth. Therefore, this study introduces the value of muthma'innah as a spiritual-psychological aspect that creates a harmonious personality through istiqomah, qonaah, mardhatillah and amanah, contributing to the achievement of performance. Muthma'innah Adaptive Capability will provide a new dimension to the adaptive ability of employees in responding to internal and external conditions that are always changing:
Table 3. State of the art review of adaptive capability theory
No. | Author | Key point of weakness
1 | [9]  | Emphasizes the individualistic ability of employees to complete physical tasks in order to obtain personal and material satisfaction, but does not consider the achievement of goals to obtain the pleasure of Allah SWT with quality and wisdom from the results obtained
2 | [10] | 1. Focuses only on interests that lead to worldly life and competition among individuals, encouraging people to work hard to fulfill desires that are never satisfied, which leads to greed and a materialistic attitude. 2. There are efforts to contribute more effectively to the organization, but without attention to the quality of the process for attaining inner peace
3 | [11] | 1. Adjustment is still limited to a response to changes in the external environment, with no change in aspects of personal quality. 2. There are attempts to contribute more effectively as a team or organizational member, but without considering the quality of the process for gaining inner peace
4 | [12] | 1. Does not pay attention to the quality of the process to reach the Hereafter. 2. Focuses on individual effort that is not based on the intention of worship. 3. There are attempts to contribute more effectively as a team or organizational member, but without considering the quality of the process for gaining inner peace
5 | [13] | 1. Pursues a worldly life oriented to material elements, so there is no inner tranquility. 2. The foundation is not aimed at worship because of Allah SWT. 3. There are efforts to change creative behavior, but they are not matched by spiritual replenishment
6 | [14] | 1. Focuses on behaviors reflecting the extent to which individuals are responsive to job changes in carrying out tasks. 2. Addresses only worldly problems, not balanced with the fulfillment of spiritual values. 3. Adjustment is still limited to a response to external changes in the environment, with no change in aspects of personal quality
7 | [15] | 1. The focus on competition is not based on worship. 2. Adjustment is still limited to a response to the material, with no response on the spiritual aspects of personal quality
8 | [16] | 1. Addresses only the problems of worldly life, not balanced with spiritual replenishment. 2. Has advantages that are not based on worship
1. Istiqomah, that is, the fortitude or persistence of an employee to stay on the right track in carrying out his or her job [7]. 2. Qonaah, the attitude of willingly accepting or feeling sufficient with what is obtained and distancing oneself from dissatisfaction. With qonaah, one understands that achieving a wish must be accompanied by effort.
3. Mardhatillah, working in order to attain the pleasure of Allah and to provide the true value of blessings in a life filled with the outpouring of grace and blessings from Allah SWT [18]. 4. Amanah, the attitude of working with full responsibility for one's duty [18]. Muthma'innah Adaptive Capability (MAC) is the ability of employees to respond to internal and external conditions, adapt to change and create a harmonious personality through istiqomah, qonaah, mardhatillah and amanah in order to contribute to the achievement of performance. Muthma'innah Adaptive Capability (MAC) will benefit organizations by helping individuals solve their problems, leading the organization to excellence. Good character will be seen in a calm soul. The process of achieving the perfection of a calm soul will affect one's behavior in coping with change. Grounded in faith, a person will control himself or herself in any situation, think rationally and be able to achieve self-balance.
5 Conclusion
This paper has provided an overview of a new concept for the literature on employees' interpersonal adaptability by integrating the Islamic value "Muthma'innah". Muthma'innah Adaptive Capability will provide a solid foundation and a significant contribution to the organization. It is important to develop the interpersonal ability of employees so that they are able to cope with change or stressful organizational conditions. Employees can be strong and behave well in accordance with Islamic values, leading to performance achievement. Future research will focus on developing the measurement of Muthma'innah Adaptive Capability and testing this concept in specific organizations. Possible antecedents and consequences will also be developed in the future.
Acknowledgements. This research is funded by the Ministry of Research and Technology and Higher Education/National Research and Innovation Board, Indonesia (Doctoral Research Grant).
References 1. Vidotto, J.D.F., Ferenhof, H.A., Selig, P.M., Bastos, R.C.: A human capital measurement scale. J. Intellect. Cap. 18(2), 316–329 (2017). https://doi.org/10.1108/JIC-08-2016-0085 2. Al-Matari, E.M., Al-Swidi, A.K., Fadzil, F.H.B.: The measurements of firm performance’s dimensions. Asian J. Financ. Account. 6(1), 24 (2014). https://doi.org/10.5296/ajfa.v6i1. 4761 3. Lumpkin, A., Achen, R.M.: Explicating the synergies of self-determination theory, ethical leadership, servant leadership, and emotional intelligence. J. Leadersh. Stud. 12(1), 1–15 (2018). https://doi.org/10.1002/jls.21554 4. Astin, A.W., Keen, J.P.: Equanimity and spirituality. Relig. Educ. 33(2), 39–46 (2006). https://doi.org/10.1080/15507394.2006.10012375 5. George, J.M.: Emotions and leadership: the role of emotional intelligence. Hum. Relations 53(8), 1027–1055 (2000). https://doi.org/10.1177/0018726700538001
6. Farmawati, C., Hidayati, N.: Penyusunan dan Pengembangan Alat Ukur Islamic Personality Scale (IPS). J. Psikol. Islam dan Budaya 1(2), 19–30 (2018). https://doi.org/10.15575/jpib. v2i1.4318 7. Adawiyah, B.P.: Scaling the notion of Islamic spirituality in the workplace. J. Manag. Dev. 36(7), 877–898 (2017). https://doi.org/10.1108/JMD-11-2014-0153 8. van Eck, N.J., Waltman, L.: Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84(2), 523–538 (2010). https://doi.org/10.1007/ s11192-009-0146-3 9. Pulakos, E.D., Arad, S., Donovan, M.A., et al.: Adaptability in the workplace: development of a taxonomy of adaptive performance. J. Appl. Psychol. 85(4), 612–624 (2000). https://doi. org/10.1037//0021-9010.85.4.612. 10. Johnson, J.W.: The relative importance of task and contextual performance dimensions to supervisor judgments of overall performance. J. Appl. Psychol. 86(5), 984–996 (2001). https://doi.org/10.1037/0021-9010.86.5.984 11. McArdle, S., Waters, L., Briscoe, J.P., Hall, D.T.T.: Employability during unemployment: Adaptability, career identity and human and social capital. J. Vocat. Behav. 71(2), 247–264 (2007). https://doi.org/10.1016/j.jvb.2007.06.003 12. Griffin, M.A., Neal, A., Parker, S.K.: A new model of work role performance: positive behavior in uncertain and interdependent contexts. Acad. Manag. J. (2007). https://doi.org/ 10.5465/AMJ.2007.24634438. 13. Griffin, M.A., Parker, S.K., Mason, C.M.: Leader vision and the development of adaptive and proactive performance: a longitudinal study. J. Appl. Psychol. 95(1), 174–182 (2010). https://doi.org/10.1037/a0017263 14. Charbonnier-Voirin, A., El Akremi, A., Vandenberghe, C.: A multilevel model of transformational leadership and adaptive performance and the moderating role of climate for innovation. Gr. Organ. Manag. 35(6), 699–726 (2010). https://doi.org/10.1177/ 1059601110390833 15. Hamtiaux, A., Houssemand, C., Vrignaud, P.: Individual and career adaptability: comparing models and measures. J. Vocat. Behav. 83, 130–141 (2013). https://doi.org/10.1016/j.jvb. 2013.03.006 16. Park, S., Park, S.: Employee adaptive performance and its antecedents: review and synthesis. Hum. Resour. Dev. Rev. 18(3), 294–324 (2019). https://doi.org/10.1177/ 1534484319836315 17. Widodo, A., Rohman, F.: Konsep Jiwa yang Tenang dalam Surat Al Fajr 27–30 (Perspektif Bimbingan Konseling Islam). AL-IRSYAD J. Bimbing. Konseling Islam 1(2), 219–234 (2019) 18. Walian, A.W.: Konsepsi Islam Tentang Kerja. Rekonstruksi Terhadap Pemahaman Kerja Seorang Muslim. An Nisa’a 7(1), 65–80, 2000
Interaction Model of Knowledge Management, Green Innovation and Corporate Sustainable Development in Indonesia
Siti Sumiati, Sri Wahyuni Ratnasari, and Erni Yuvitasari
Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia
[email protected]
Abstract. The context of this research is the global challenge that requires a competitive advantage for Micro, Small and Medium Enterprises (MSMEs). The purpose of this research is to formulate efficient strategies to improve Corporate Sustainable Development for MSME practitioners. The research population comprised all MSMEs in Central Java, Indonesia, with 100 MSME units as the research sample. The research used a non-random sampling approach with purposive sampling. The results indicate that the three hypotheses in this research show positive and significant relationships.
Keywords: Knowledge creation · Knowledge acquisition · Green innovation · Corporate sustainable development
1 Introduction
In the millennial era, consumers can access the various goods they need easily and quickly. This is a challenge for MSME actors to increase their competitive advantage amidst intense market competition. Therefore, MSME actors must be able to formulate efficient strategies for increasing corporate sustainable development. One such strategy is to strengthen knowledge management in MSMEs. Knowledge management is a strategic resource for MSMEs because it can help them surpass their competitors [1]. Types of knowledge that MSMEs must improve include knowledge creation and knowledge acquisition. MSME actors need knowledge creation to produce innovative and unique products for their market share. Knowledge acquisition also prepares MSME actors to face all forms of transformation, whether social, technological or environmental [11]. Corporate sustainable development places particular emphasis on the environmental aspect [4]. Implementing the green innovation concept provides MSME actors with facilities for increasing environmentally friendly products. The full implementation of green innovation minimizes the negative effects of MSME operational activities on the environment. Based on the explanations above, the problem formulations in this research are: How can Green Innovation be improved through Knowledge Creation and Knowledge Acquisition? How can Corporate Sustainable Development be enhanced through Green Innovation? And what is the Corporate Sustainable Development optimization model for MSMEs in Indonesia?
2 Literature Review
Knowledge management is the conversion of implicit knowledge into explicit knowledge that can be transferred, learned and understood by others. Knowledge management can be a driver of corporate sustainable development. The development of knowledge management can also provide a stable foundation for MSME actors in conducting business, helping their products survive and gain a competitive advantage in the market. Efficient knowledge management allows organizations to be more innovative and creative. As a result, some MSME actors treat knowledge management as a strategic resource that will enable them to defeat their competitors [1].
Knowledge creation results from the interaction between knowledge and identification through activities, practice and people [7]. Knowledge creation in operational activities is meaningful for MSMEs: it allows MSME actors to practice new knowledge and generate new inspiration or solutions for employees [1]. In MSMEs, the purpose of knowledge is to achieve efficiency in the effective use of energy sources and to explore environmentally friendly options. Knowledge creation in MSMEs thus both supports and facilitates the process of making environmentally friendly products. Therefore, the first hypothesis of this study is H1: Knowledge creation has a positive and significant impact on Green Innovation.
Knowledge acquisition refers to an organization's activities to obtain, extract and control knowledge from various sources [1]. Most MSME employees want to obtain knowledge from internal sources. MSME actors thus show that they can gain and absorb knowledge positively, which will affect their financial performance. To achieve corporate sustainable development, MSME actors must use knowledge acquisition in their operational activities. Therefore, the second hypothesis in this research is H2: Knowledge acquisition has a positive and significant effect on Green Innovation.
Green innovation is a facility used by industry to eliminate or minimize the negative effects of operational activities on the surrounding community and environment [8, 15]. The purpose of green innovation is to revise the manufacturing process from raw materials to finished products, aiming to minimize natural consequences and resource consumption, make better use of current energy sources, and reduce waste. Therefore, the third research hypothesis is H3: Green Innovation positively and significantly impacts Corporate Sustainable Development.
The theory of corporate sustainable development is linked to the "Brundtland Commission" report entitled "Our Common Future", presented at the UN General Assembly in 1987. The report highlighted the interconnected issues of economic development and environmental stability. Corporate sustainable development is development that aims to fulfill all the needs of people without risking natural conditions and the surrounding environment [1, 3]. The environmental approach of corporate sustainable development focuses on preserving nature and natural areas, ensuring clean water and air, and minimizing the use of natural energy sources. Moreover, the economic approach requires practicing creative skills to produce environmentally friendly products that do not use risky materials, are of sufficient quality to be accepted in the market, and optimize profits by increasing
sales and reducing operational costs. Meanwhile, the social approach to corporate sustainable development focuses on strengthening the relationships between organizations, individuals and communities. It also promotes human welfare by managing everything that is needed. In addition, corporate sustainable development aims to promote social justice for human rights and labor.
3 Research Methods
This research is both explanatory and descriptive. It aims to describe and analyze the research results according to the reality of the object under study, in order to describe the improvement of Corporate Sustainable Development among MSMEs in Indonesia. The population is the set of all research objects (people, events or other phenomena) that are relevant and needed by the researchers to support the research. In this research, the selected population comprised Micro, Small and Medium Enterprises (MSMEs) in Central Java. A sample is a part of the population that shares its characteristics. The sampling method used in this research is non-random sampling with purposive sampling: the researchers selected a group of subjects according to specific criteria based on the research objectives. The sample criteria were 100 MSMEs in Indonesia, especially in Central Java, each represented by one manager, namely the owner/leader of the MSME. Information was obtained from two sources, primary data and secondary data. The primary data extracted in this research were related to the research variables, specifically the improvement of corporate sustainable development in MSMEs. The secondary data were scientific journals, scientific books, the internet and other data associated with this research. The researchers collected data using a questionnaire. The inner model of this research is shown in Fig. 1 as follows:
Fig. 1. Research inner model: Knowledge Creation (H1) and Knowledge Acquisition (H2) are hypothesized to affect Green Innovation, which in turn (H3) affects Corporate Sustainable Development.
An operational definition specifies how a variable is measured in practice. The operational definitions and indicators for each variable are presented in Table 1:
Table 1. Operational definition of variables and indicators
No | Variable | Operational definition | Indicators | Source
1 | Knowledge Creation (X1) | A knowledge that MSME actors must possess, related to the creativity of MSME actors in creating new products and ideas | X1.1 = service attitude; X1.2 = innovation capabilities; X1.3 = employee development; X1.4 = employee motivation level | [1, 4]
2 | Knowledge Acquisition (X2) | A knowledge that MSME actors must possess, related to MSME actors' attitudes in facing the various changes that occur in the business sector | X2.1 = dynamic business environment; X2.2 = flexibility; X2.3 = responsiveness | [1, 6]
3 | Green Innovation (Y1) | A concept MSME actors use to eliminate or minimize the negative impact of MSME operational activities on the community and the surrounding environment | Y1.1 = green technology innovation; Y1.2 = green management innovation | [1, 15]
4 | Corporate Sustainable Development (Y2) | A concept in which MSME actors must continue to preserve nature while developing and expanding their businesses to meet the current generation's needs without sacrificing or destroying the resources related to MSMEs' operational activities | Y2.1 = environmental sustainability; Y2.2 = social sustainability; Y2.3 = economic sustainability | [1, 3]
4 Research Results and Discussion
This research used Structural Equation Modeling (SEM) with the Partial Least Squares (PLS) approach, processed using WarpPLS 5.0. All indicators used to measure the variables in this research are reflective. To test the measurement model, it must satisfy the convergent validity, discriminant validity and composite reliability tests. Based on the analysis of the field data, the information used in this research is valid and reliable, so that further testing could be carried out. The results of the analysis are summarized in Table 2; a brief sketch of how such measurement-model checks are computed is given first.
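For reflective measurement models such as this one, convergent validity and composite reliability are usually judged from the standardized indicator loadings (commonly AVE ≥ 0.5 and CR ≥ 0.7). The paper reports only that these checks were passed, not the individual loadings, so the sketch below uses hypothetical loadings purely to illustrate how composite reliability (CR) and average variance extracted (AVE) are computed; it is not a reproduction of the study's results.

```python
# Minimal sketch with hypothetical standardized loadings (the actual loadings are
# not reported in the paper). For a block of reflective indicators with loadings l:
# CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)),  AVE = mean(l^2).
def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

if __name__ == "__main__":
    blocks = {  # hypothetical loadings per construct, for illustration only
        "Knowledge Creation (X1)": [0.78, 0.81, 0.75, 0.80],
        "Knowledge Acquisition (X2)": [0.82, 0.77, 0.79],
        "Green Innovation (Y1)": [0.84, 0.80],
        "Corporate Sustainable Development (Y2)": [0.76, 0.81, 0.79],
    }
    for name, l in blocks.items():
        cr, a = composite_reliability(l), ave(l)
        verdict = "ok" if cr >= 0.7 and a >= 0.5 else "check"
        print(f"{name}: CR = {cr:.3f}, AVE = {a:.3f} ({verdict})")
```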
Table 2. Results of research data analysis
Dependent variable | Independent variable | Path coefficient | P-value | Information
Regression model I (X1, X2 → Y1; R-Squared = 0.547, Adj. R-Squared = 0.568)
Green Innovation (Y1) | Knowledge Creation (X1) | 0.295 | 0.036 | H1: X1 → Y1 accepted
Green Innovation (Y1) | Knowledge Acquisition (X2) | 0.478 | 0.006 | H2: X2 → Y1 accepted
Regression model II (Y1 → Y2; R-Squared = 0.426, Adj. R-Squared = 0.355)
Corporate Sustainable Development (Y2) | Green Innovation (Y1) | 0.577 | 0.018 | H3: Y1 → Y2 accepted
Source: Processed primary data, 2020
Based on the results of the regression analysis, the three research hypotheses show positive and significant results. Hypothesis 1, which tested the relationship between Knowledge Creation and Green Innovation, shows a positive and significant result: the greater the Knowledge Creation of MSME actors, the greater the Green Innovation that MSMEs will possess. Hypothesis 2 implies a positive and significant relationship between Knowledge Acquisition and Green Innovation: an increase in Knowledge Acquisition in MSMEs leads to more Green Innovation in MSMEs. Meanwhile, Hypothesis 3 shows a positive and significant relationship between Green Innovation and Corporate Sustainable Development: the greater the Green Innovation possessed by MSME actors, the higher their Corporate Sustainable Development. The effort that MSME actors in Indonesia can make is to increase the Corporate Sustainable Development of each MSME. It is expected that MSME actors, especially in Indonesia, will be ready and able to compete in the digital era in all market segments.
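As a simple illustration of the decision rule applied above, the path coefficients and p-values reported in Table 2 can be checked against a significance level; the 5% threshold used below is the conventional choice and is assumed here rather than stated in the paper.

```python
# Decision-rule illustration using the values reported in Table 2.
ALPHA = 0.05  # conventional significance level, assumed

results = [
    ("H1: Knowledge Creation -> Green Innovation", 0.295, 0.036),
    ("H2: Knowledge Acquisition -> Green Innovation", 0.478, 0.006),
    ("H3: Green Innovation -> Corporate Sustainable Development", 0.577, 0.018),
]

for label, path, p in results:
    verdict = "accepted" if p < ALPHA and path > 0 else "not supported"
    print(f"{label}: path = {path}, p = {p} -> {verdict}")
```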
5 Conclusion
Based on the research results, it can be concluded that the three hypotheses of this research are accepted. In other words, the three hypothesis tests show a positive and significant effect for the relationship between each pair of variables. MSME actors can increase Green Innovation through variables such as Knowledge Creation and Knowledge Acquisition, and they can increase Corporate Sustainable Development by improving their implementation of Green Innovation. This research's limitations are that the sample is still limited to the Central Java area, the research covers MSMEs in only five business fields, and the empirical research model is still simple. Future researchers can expand the sampling area within and beyond the Central Java region, add other aspects of MSME-related research in different business fields, and broaden or narrow the empirical model toward its antecedents, in order to obtain a larger sample and more detailed data.
References 1. Abbas, J.: Impact of knowledge management practices on green innovation and corporate sustainable development : a structural analysis, J. Clean. Produc. 229, 611–620 (2019) 2. Ahmed, Y.A., Ahmad, M.N., Ahmad, N., Zakaria, N.H.: Social media for knowledgesharing: a systematic literature review. Telemat. Inform. 37, 72–112 (2019) 3. Al, S., Choiruzzad, B., Eko, B.: The 3rd international conference on sustainable future for human security Islamic economy project and the Islamic scholars. Procedia Environ. Sci. 17, 957–966 (2013) 4. Andreou, P.C., Louca, C., Petrou, A.P.: Organizational learning and corporate diversification performance. J. Bus. Res. 69(9), 3270–3284 (2016) 5. Azaizah, N., Reychav, I., Raban, D.R., Simon, T., McHaney, R.: Impact of ESN implementation on communication and knowledge-sharing in a multi-national organization. Int. J. Inf. Manage. 43(February), 284–294 (2018) 6. Denning, S.: The role of the C-suite in agile transformation: the case of amazon. Strat. Leadership 46(6), 14–21 (2018) 7. Fay, E., Nyhan, J.: Webbs on the web: libraries, digital humanities and collaboration. Libr. Rev. 64, 118–134 (2015) 8. Harrington, D., et al.: Capitalizing on SME green innovation capabilities: lessons from Irishwelsh collaborative innovation learning network. University Partnerships for International Development, pp. 93–121(2017) 9. Lin, J., Lu, Y., Wang, B., Kee, K.: Electronic commerce research and applications the role of inter-channel trust transfer in establishing mobile commerce trust. Electron. Commer. Res. Appl. 10(6), 615–625 (2011) 10. Liu, J., Zhang, Z., Evans, R., Xie, Y.: Web services-based knowledge sharing, reuse and integration in the design evaluation of mechanical systems. Robot. Comput.-Integr. Manuf. 57(April 2018), 271–281 (2019) 11. Madanchian, M., Taherdoost, H.: Assessment of leadership effectiveness dimensions in Small Medium Enterprises (SMEs) costing models for capacity optimization in Industry 4.0: trade-off between used capacity operational. Procedia Manuf. 32, 1035–1042 (2019) 12. Oliveira, M., Curado, C., Henriques, P.L.: Knowledge sharing among scientists: a causal configuration analysis. J. Bus. Res. (2018) 13. Park, H.Y., et al.: Family firms’ innovation drivers and performance : a dynamic capabilities approach (2018) 14. Rafique, H., Shamim, A., Anwar, F.: Investigating acceptance of mobile library application with extended technology acceptance model (TAM) corresponding author. Comput. Educ. 103732 (2019) 15. Soewarno, N., Tjahjadi, B., Fithrianti, F.: Green innovation strategy and green innovation. Management Decision, MD-05–2018–0563 (2019)
The Impact of Covid-19 Pandemic on Continuance Adoption of Mobile Payments: A Conceptual Framework
Dian Essa Nugrahini and Ahmad Hijri Alfian
Department of Accounting, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia
{dianessan,hijrialfian}@unissula.ac.id
Abstract. Using mobile-based payments for transactions can be a method of maintaining social distancing, preventing the spread of the Covid-19 virus. Continuous usage can be ensured by replacing physical banking transactions, provided users are satisfied and convinced of the benefits. Moreover, most services are now offered through online platforms, and consumers are pushed to explore online payment options. Hence, this study proposes a conceptual framework of mobile payment adoption and its continuance intention. The subjects of the study are new adopters of mobile payments. Future research will include validating the proposed framework using empirical data.
Keywords: Continuance intention · Mobile payment · Adoption · COVID-19
1 Introduction
The Covid-19 pandemic has devastated the world's economy and financial markets. To contain the spread and effects of the Covid-19 outbreak, many countries, including Indonesia, have taken preventive measures to reduce the risk of Covid-19, such as social distancing policies. Social distancing policies can help to contain the spread of disease by reducing the possibility of face-to-face or close contact with infected people and contaminated surfaces [1–3]. These social distancing policies support the adoption of cashless alternative payment methods that avoid physical contact, because cash can accelerate the virus's spread. In addition to social distancing policies, Indonesians also face working from home and studying from home. With these policies, many people use e-commerce services as an alternative way to meet their needs. The rapid development of e-commerce, the emergence of new technologies and the high use of mobile devices have changed the way consumers complete their transactions. This has made mobile payments popular in the community, especially among the millennial generation. Compared to traditional payments, such as cash and debit/credit cards, the advantage of mobile payment is its convenience, because it is not limited by time and place [4]. As an effect of recent e-commerce developments, cashless payments through digital systems have become smart payment alternatives in several developing countries for achieving sustainable competitive advantage.
The growth of the Internet in Indonesia has driven e-commerce from 2013 to 2020. The number of e-commerce users in Indonesia has increased from 34% of the total population in 2015 to 53% in 2020 [5]. The growth of the fintech product market in Indonesia shows an upward trend, as seen from the increase in transaction value and the number of start-ups. Recent data show that mobile payment transactions over the past three years have shown an increasing trend with Rp 56 trillion in 2019, Rp 47 trillion in 2018, and Rp 12 trillion in 2017 (Rp - Rupiah, Indonesian currency. 1 USD = Rp 14,197) [5]. E-money or e-wallet payments are the most popular form of fintech service in Indonesia, followed by web-based investment and pay later services. The most significant digital transactions in Indonesia come from retail (28%), online transportation (27%), food orders (20%), e-commerce (15%), and bill payments (7%). Digitalization has become a significant factor of consumer behavior leading to a new way of life. The increasing use of online service makes mobile payments more reliable, along with the expansion of the supplier's reach and the delivery network's size. The emergence of digitalization via the internet has accelerated globalization and payment systems from manual to online transactions. It causes dependence on the use of electronic money (e-money) in making transactions. Several studies on mobile payment have been conducted in recent years, while technology adoption has long been studied. Dahlberg et al. [6] reviewed mobile payment research from 2007 to 2014 and concluded that the research has focused mainly on three themes: strategy and ecosystems, technology, and adoption. In marketing, most research focuses on the factors that influence the adoption of mobile payments. Various theories and models from various disciplines have been applied to provide further explanations. The most widely adopted are the Technology Acceptance Model (TAM), The Unified Theories of Acceptance and Use of Technology (UTAUT), the diffusion theory of belief and innovation, and mental accounting theory. These models have been widely used to investigate technology adoption. However, the contributions made in the field of continuous technology adoption and use are noteworthy. This contribution, in particular, reflects a more rational aspect of adoption, particularly the fact that sometimes consumers do not make decisions of this type based on realistic and reasoned beliefs [7]. Previously, mobile-based payments were a convenience medium; but now it seems necessary after the Covid-19 pandemic. Thus, Covid-19 is expected to increase the use of mobile payments due to two factors. First, mobile payments can act as an instrument to promote social distancing policies, enabling people to make transactions during lockdown and quarantine periods. Second, most of the services have been offered through online platforms, and consumers are forced to use online payments. The central bank has advised bankers and customers to use digital-based payments to avoid physical contact via currency/coin media. Data released by the Bank for International Settlements show a sharp increase in the use of contactless payments in major countries [8]. As the Covid-19 virus infection is expected to continue for some time (until everyone is vaccinated), the adoption of mobile payments can be offered as a method of social distancing. 
Regular use can be ensured by replacing physical banking transactions, as long as users are satisfied and confident of the benefits.
2 Literature Review
2.1 Technology Adoption Models
Technology adoption has been a critical research area for the last three decades [9]. Many theoretical models have been proposed to investigate the mechanisms of user adoption of technology, aiming to better explain and predict user behavior. The theories include the technology acceptance model (TAM) by [10] and its extended versions [12, 13], the theory of reasoned action (TRA) [11], the theory of planned behavior (TPB) by Ajzen [14], and the unified theory of acceptance and use of technology (UTAUT) by Venkatesh et al. [15], which have been applied widely to explain the adoption of different types of innovation. TAM, an adaptation of TRA, was proposed to investigate the determinants of Information Systems (IS) acceptance. TAM proposes two main factors that influence the acceptance of IS, namely perceived usefulness and perceived ease of use. The first concerns the extent to which users perceive that the IS helps their job performance; the latter relates to their perception of the difficulty of using the IS. Both factors can influence the user's attitude towards the technology, leading to actual use of the system. TAM has been extensively tested and modified to predict technology adoption behavior; its main limitation is the many additional factors that can influence technology acceptance in a particular context [10]. The Unified Theory of Acceptance and Use of Technology (UTAUT), on the other hand, has four main constructs that influence a person's behavioral intention to use technology: performance expectancy, effort expectancy, social influence, and facilitating conditions. Performance expectancy is defined as the degree to which using the technology will offer benefits to consumers in carrying out certain activities. Effort expectancy is defined as the degree of ease associated with consumers' use of the technology. Social influence is the extent to which consumers perceive that important people in their lives, such as family and friends, believe they should use a particular technology. Lastly, facilitating conditions reflect consumers' perceptions of the resources and support available to carry out the target behavior [16]. None of these theoretical models is flawless; therefore, many researchers investigate practical problems by combining two or more of them.
2.2 Overview of Mobile Payment Adoption
The study of mobile payment user behavior is a topic of current interest in the scientific marketing community (e.g., Smith; Calvo-Porral and Otero-Prada; Calvo-Porral and Nieto-Mengotti [17–19]). One of the most interesting related problems is the adoption of mobile payments [20]. Mobile payment has been defined as the use of a cell phone or other mobile device to purchase goods or services [21]. Mobile payment service also refers to any business activity that uses a mobile device to complete economic transactions [22]. There are two main types of mobile payment, remote mobile payment and proximity (short-distance) mobile payment [23], made remotely and in a physical store, respectively. As a new technology,
mobile payment is recognized as one of the most promising applications [24, 25]. Its uses are extensive, from buying cinema tickets to paying for transportation, among many others. Mobile payment research can be categorized into three lines: strategy and ecosystems [26, 27], technology and the technology environment [28, 29], and adoption [24, 25, 30]. The adoption of mobile payments has received the most attention among marketing experts for several reasons. The first is that mobile payments have great potential: with their simplicity, they can benefit millions of users and companies worldwide [20]. Second, technology adoption has been widely studied in the marketing field, for example by Sun et al. and Wallace et al. [7, 31]. It is important to understand consumer preferences and identify why consumers are willing or unwilling to use the technology; only in this way can mobile payment services generate value for consumers and stakeholders [6].
2.3 Perceived Usefulness
According to TAM, perceived usefulness is the extent to which users believe that adopting a particular technology will improve job effectiveness and performance [32, 33]. It is the user's perception of the added utility of adopting a new technology [34]. Perceived usefulness, together with attitude and perceived ease of use, is one of the antecedents of behavioral intention in the TAM model [10]. Pham and Ho [35] argued that perceived usefulness should be the first characteristic of a new technology to be taken into account. In mobile payments, perceived benefits can convince consumers that the mobile payment process may help them make certain purchases [36, 37]. Moreover, mobile payments have other functions, for example transferring money online. TAM [10] describes perceived usefulness as positively associated with consumer attitudes towards a technology. In the context of mobile payment, when users realize its usefulness, they will develop supportive attitudes towards it, and the unique functions of mobile payment will strengthen this supportive attitude. TAM also predicts that an individual's intention to use mobile payment depends on perceived usefulness [38]. Technologies such as mobile banking enable users to access information about their current balances and past transactions anywhere and at any time, thereby strengthening adoption intentions [39]. Mobile banking is a form of mobile payment, and mobile payment services are broadly similar to mobile banking. When users begin to realize the usefulness of mobile payments compared to other payment methods (such as cash and credit cards), for example by completing their transactions more conveniently, they will tend to adopt mobile payments. This can increase consumer confidence in using mobile payments as an alternative payment method and as an effort to reduce the spread of COVID-19.
2.4 Perceived Ease of Use
In the TAM model, perceived ease of use is defined as an individual's perception that operating a particular technology system is simple, easy and effortless [10]. It is an assessment of the effort involved in using the technology [12] and has been considered one of the most influential determinants of new technology adoption [33].
Ease of use and perceived usefulness have been suggested as the two main factors determining the acceptance of new technology [24, 25, 40]. Both are essential and reliable predictors of user attitudes and intentions towards new technology [36, 37]. Perceived ease of use is the most significant and most frequently proposed precursor in assessing mobile payment adoption [6]. In TAM, it is argued that perceived ease of use positively influences perceived usefulness and indirectly influences intention to use, and it has a positive influence on attitudes toward new technology. Perceived ease of use also reflects the ease of using technology to access websites for online purchases [41], making the technology more rewarding for online users. In other words, the easier a technology is to use, the more likely consumers are to adopt alternative payment methods as a means of transaction.
2.5 Social Influence
Social influence (SI) has been a significant construct for assessing consumers' willingness to use mobile payment [42]. Potential influencers of consumers' use of mobile payments are family members, friends, colleagues and neighbors [43]. SI therefore captures the environmental factors that encourage consumers to buy or sell new products [15]. Similarly, Martins et al. [44] found that social influence affects online users' intentions to adopt Internet services, while Chaouali et al. [45] reported that social influence shapes individuals' mindset regarding the use of innovative new products delivered through technology services. In UTAUT, social influence (SI) is derived from the effect of subjective norms and social factors on the behavioral intention to use mobile payments.
2.6 Lifestyle Compatibility
Lifestyle compatibility (LC) is defined as the natural alignment of lifestyle choices and values [46]. Lifestyle compatibility is essential for reducing the potential uncertainty of technology use with respect to users' values, experiences, lifestyles and preferences [47]. Thus, lifestyle compatibility influences a person's behavior and is very useful in predicting consumer behavioral intentions [48]. If consumers are accustomed to interacting with applications, they may assume that the technology offers a convenient way to buy a product.
3 Conceptual Framework
Based on the literature review described above, the conceptual framework is as follows (Fig. 1).
Fig. 1. Conceptual framework. The Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) provide the internal factors (Perceived Usefulness, Perceived Ease of Use) and external factors (Social Influence, Lifestyle Compatibility) that drive Intention to Use Mobile Payment, which leads to Adoption of Mobile Payment, Satisfaction Toward Service Performance, and Continuance Intention.
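To indicate how this framework could later be taken to data, as the authors propose, the hypothesized paths can be written as a lavaan-style structural model description. The variable names below (PU, PEOU, SI, LC, INT, ADOPT, SAT, CONT) are shorthand introduced here, not from the paper, and the specification is only an illustrative sketch that could be estimated with an SEM or PLS package once survey data are available.

```python
# Illustrative sketch: the conceptual framework as a lavaan-style model string.
# Variable names are hypothetical shorthand; the string could be passed to an
# SEM tool (e.g. semopy in Python or lavaan in R) together with indicator
# definitions once empirical data have been collected.
MODEL_DESC = """
# structural part: antecedents -> intention -> adoption -> satisfaction -> continuance
INT   ~ PU + PEOU + SI + LC
ADOPT ~ INT
SAT   ~ ADOPT
CONT  ~ SAT
"""

if __name__ == "__main__":
    print(MODEL_DESC.strip())
```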
4 Conclusion and Future Research
The conceptual model proposed in this study aims to investigate the general adoption of mobile payments. Mobile payment is a general concept; at the same time, specific implementations vary, including e-wallets, mobile banking applications, and third-party applications such as Alipay-based and NFC-based virtual transportation cards. Adoption can differ for each of these, which calls for specific research. The COVID-19 pandemic is causing dramatic changes in consumer behavior. In particular, e-commerce, digitization, and systems that allow consumers to avoid direct physical contact are being promoted, and mobile payment systems require no physical contact. In this situation, contactless payments, including mobile-based payment systems, help contain the pandemic. Hence, the adoption of mobile payments can be considered a preventive health behavior. In the post-COVID-19 era, when these behaviors have stabilized and become routine, it will be important to study the pandemic's fundamental and long-term impact on the adoption of this new technology. Finally, to develop a sustainable intention to use mobile payment services, the study suggests improving service performance by adding more features and services in one platform.
References 1. Chang, S.L., Harding, N., Zachreson, C., Cliff, O.M., Prokopenko, M.: Modelling transmission and control of the COVID-19 pandemic in Australia. Nat. Commun. 11(1), 1–13 (2020). https://doi.org/10.1038/s41467-020-19393-6 2. Eikenberry, S.E., et al.: To mask or not to mask: Modeling the potential for face mask use by the general public to curtail the COVID-19 pandemic. Infect. Dis. Model. 5, 293–308 (2020). https://doi.org/10.1016/j.idm.2020.04.001 3. Fong, L.S.: Workforce Transformation for, no. June (2018) 4. Shao, Z., Zhang, L., Li, X., Guo, Y.: Antecedents of trust and continuance intention in mobile payment platforms: the moderating effect of gender. Electron. Commer. Res. Appl. 33,(2019). https://doi.org/10.1016/j.elerap.2018.100823 5. Das, K., Gryseels, M., Sudhir, P., Tan, K.T.: Unlocking Indonesia’s digital opportunity. McKinsey Co., no. October, pp. 1–28 (2016). https://www.mckinsey.com/*/media/ McKinsey/Locations/Asia/Indonesia/OurInsights/UnlockingIndonesiasdigitalopportunity/ Unlocking_Indonesias_digital_opportunity.ashx. 6. Dahlberg, T., Guo, J., Ondrus, J.: A critical review of mobile payment research. Electron. Commer. Res. Appl. 14(5), 265–284 (2015) 7. Sun, H., Fang, Y., Zou, H.M.: Choosing a fit technology: understanding mindfulness in technology adoption and continuance. J. Assoc. Inf. Syst. 17(6), 2 (2016) 8. Auer, R., Cornelli, G., Frost, J.: Covid-19, cash, and the future of payments. BIS Bull. 3, 1–7 (2020) 9. Chuttur, M.Y.: Overview of the technology acceptance model: origins, developments and future directions. Work. Pap. Inf. Syst. 9(37), 9–37 (2009) 10. Davis, F.D.: A Technology Acceptance Model for Empirically Testing New End-User Information Systems. Massachusetts Institute of Technology, Cambridge, MA (1986) 11. Fishbein, M., Ajzen, I.: Belief, Attitude, and Behavior: An Introduction to Theory and Research. Addison Wessley, Reading, MA (1975) 12. Venkatesh, V., Davis, F.D., College, S.M.W.: Theoretical acceptance extension model: four longitudinal field studies. Manage. Sci. 46(2), 186–204 (2000) 13. Venkatesh, V., Bala, H.: Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. Inst. 39(2), 273–315 (2008) 14. Ajzen, I.: The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50(2), 179– 211 (1991) 15. Venkatesh, V., Thong, J.Y., Xu, X.: Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 36, 157–178 (2012) 16. Venkatesh, V., Morris, M.G., And, G.B.D., Davis, F.D.: User acceptance of information technology: toward a unified view. MIS Q. 27(3), 425–478 (2003) 17. Smith, T.A.: The role of customer personality in satisfaction, attitude-to-brand and loyalty in mobile services. Spanish J. Mark. - ESIC 24(2), 155–175 (2020). https://doi.org/10.1108/ SJME-06-2019-0036 18. Calvo-Porral, C., Otero-Prada, L.-M.: A profile of mobile service users in a mature market: from ‘uninvolved pragmatics’ to ‘potential switchers’ (2020) 19. Calvo-Porral, C., Nieto-Mengotti, M.: The moderating influence of involvement with ICTs in mobile services. Spanish J. Mark. - ESIC 23(1), 25–43 (2019). https://doi.org/10.1108/ SJME-08-2018-0036
20. Liébana-Cabanillas, F., Molinillo, S., Ruiz-Montañez, M.: To use or not to use, that is the question: analysis of the determining factors for using NFC mobile payment systems in public transportation. Technol. Forecast. Soc. Change 139(November), 266–276 (2019). https://doi.org/10.1016/j.techfore.2018.11.012 21. Kim, C., Mirusmonov, M., Lee, I.: An empirical examination of factors influencing the intention to use mobile payment. Comput. Human Behav. 26(3), 310–322 (2010). https:// doi.org/10.1016/j.chb.2009.10.013 22. Liébana-Cabanillas, F., Herrera, L.J., Guillén, A.: Variable selection for payment in social networks: introducing the Hy-index. Comput. Human Behav. 56, 45–55 (2016). https://doi. org/10.1016/j.chb.2015.10.022 23. Liu, Y.: Consumer protection in mobile payments in China: a critical analysis of Alipay’s service agreement. Comput. Law Secur. Rev. 31(5), 679–688 (2015). https://doi.org/10. 1016/j.clsr.2015.05.009 24. Liébana-Cabanillas, F., Sánchez-Fernández, J., Muñoz-Leiva, F.: Antecedents of the adoption of the new mobile payment systems: the moderating effect of age. Comput. Human Behav. 35, 464–478 (2014). https://doi.org/10.1016/j.chb.2014.03.022 25. Liébana-Cabanillas, F., Sánchez-Fernández, J., Muñoz-Leiva, F.: The moderating effect of experience in the adoption of mobile payment tools in Virtual Social Networks: the mPayment Acceptance Model in Virtual Social Networks (MPAM-VSN). Int. J. Inf. Manage. 34(2), 151–166 (2014). https://doi.org/10.1016/j.ijinfomgt.2013.12.006 26. Au, Y.A., Kauffman, R.J.: The economics of mobile payments: understanding stakeholder issues for an emerging financial technology application. Electron. Commer. Res. Appl. 7(2), 141–164 (2008). https://doi.org/10.1016/j.elerap.2006.12.004 27. De Reuver, M., Verschuur, E., Nikayin, F., Cerpa, N., Bouwman, H.: Collective action for mobile payment platforms: a case study on collaboration issues between banks and telecom operators. Electron. Commer. Res. Appl. 14(5), 331–344 (2015). https://doi.org/10.1016/j. elerap.2014.08.004 28. Ou, C.M., Ou, C.R.: Adaptation of proxy certificates to non-repudiation protocol of agent based mobile payment systems. Appl. Intell. 30(3), 233–243 (2009) 29. Ahamad, S.S., Sastry, V.N., Udgata, S.K.: Secure mobile payment framework based on UICC with formal verification. Int. J. Comput. Sci. Eng. 9(4), 355–370 (2014). https://doi. org/10.1504/IJCSE.2014.060718 30. Lu, Y., Yang, S., Chau, P.Y.K., Cao, Y.: Dynamics between the trust transfer process and intention to use mobile payment services: a cross-environment perspective. Inf. Manag. 48 (8), 393–403 (2011). https://doi.org/10.1016/j.im.2011.09.006 31. Wallace, L.G., Sheetz, S.D.: The adoption of software measures: a technology acceptance model (TAM) perspective. Inf. Manag. 5Q(2), 249–259 (2014) 32. Davis, F.D.: User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int. J. Man. Mach. Stud. 38(3), 475–487 (1993) 33. de Luna, I.R., Liébana-Cabanillas, F., Sánchez-Fernández, J., Muñoz-Leiva, F.: Mobile payment is not all the same: the adoption of mobile payment systems depending on the technology applied. Technol. Forecast. Soc. Change 146, 931–944 (2019). https://doi.org/ 10.1016/j.techfore.2018.09.018 34. Ooi, K.B., Tan, G.W.H.: Mobile technology acceptance model: an investigation using mobile users to explore smartphone credit card. Expert Syst. Appl. 59, 33–46 (2016). https:// doi.org/10.1016/j.eswa.2016.04.015 35. 
Pham, T.T.T., Ho, J.C.: The effects of product-related, personal-related factors and attractiveness of alternatives on consumer adoption of NFC-based mobile payments. Technol. Soc. 43, 159–172 (2015). https://doi.org/10.1016/j.techsoc.2015.05.004
36. Liébana-Cabanillas, F., Muñoz-Leiva, F., Sánchez-Fernández, J.: A global approach to the analysis of user behavior in mobile payment systems in the new electronic environment. Serv. Bus. 12(1), 25–64 (2017). https://doi.org/10.1007/s11628-017-0336-7 37. Liébana-Cabanillas, F., Marinkovic, V., Ramos de Luna, I., Kalinic, Z.: Predicting the determinants of mobile payment acceptance: a hybrid SEM-neural network approach. Technol. Forecast. Soc. Change 129, 117–130, 2018 https://doi.org/10.1016/j.techfore.2017. 12.015 38. Williams, M.D.: Social commerce and the mobile platform: Payment and security perceptions of potential users. Comput. Human Behav. 115 (2021). https://doi.org/10. 1016/j.chb.2018.06.005 39. Mehmet Haluk Koksal: The intentions of Lebanese consumers to adopt mobile banking. Int. J. Bank Mark. 34(3), 327–346 (2016) 40. Belanche, D., Casaló, L.V., Flavián, C.: Artificial Intelligence in FinTech: understanding robo-advisors adoption among customers. Ind. Manag. Data Syst. 119(7), 1411–1430 (2019). https://doi.org/10.1108/IMDS-08-2018-0368 41. Grover, P., Kar, A.K., Janssen, M., Ilavarasan, P.V.: Perceived usefulness, ease of use and user acceptance of blockchain technology for digital transactions–insights from usergenerated content on Twitter. Enterp. Inf. Syst. 13(6), 771–800 (2019). https://doi.org/10. 1080/17517575.2019.1599446 42. Peng, S., Yang, A., Cao, L., Yu, S., Xie, D.: Social influence modeling using information theory in mobile social networks. Inf. Sci. (Ny) 379, 146–159 (2017). https://doi.org/10. 1016/j.ins.2016.08.023 43. Sarika, P., Vasantha, S.: Impact of mobile wallets on cashless transaction, vol. 7 (2019) 44. Martins, C., Oliveira, T., Popovič, A.: Understanding the internet banking adoption: a unified theory of acceptance and use of technology and perceived risk application. Int. J. Inf. Manage. 34(1), 1–13 (2014). https://doi.org/10.1016/j.ijinfomgt.2013.06.002 45. Chaouali, W., Ben Yahia, I., Souiden, N.: The interplay of counter-conformity motivation, social influence, and trust in customers’ intention to adopt Internet banking services: the case of an emerging country. J. Retail. Consum. Serv. 28, 209–218 (2016). https://doi.org/10. 1016/j.jretconser.2015.10.007 46. Chawla, D., Joshi, H.: Role of Mediator in Examining the Influence of Antecedents of Mobile Wallet Adoption on Attitude and Intention (2020) 47. Lin, H.-F.: An empirical investigation of mobile banking adoption: the effect of innovation attributes and knowledge-based trust. Int. J. Inf. Manage. 31, 252–260 (2011) 48. Shaw, N, Sergueeva, K.: The non-monetary benefits of mobile commerce: Extending UTAUT2 with perceived value. Int. J. Inf. Manage. 45, 44–55 (2019). https://doi.org/10. 1016/j.ijinfomgt.2018.10.024.
An Analysis in the Application of the Unified Theory of Acceptance and Use of Technology (UTAUT) Model on Village Fund System (SISKEUDES) with Islamic Work Ethics as a Moderating Effect
Khoirul Fuad1, Winarsih1, Luluk Muhimatul Ifada1, Hendry Setyawan1, and Retno Tri Handayani2
1 Department of Accounting, Universitas Islam Sultan Agung, Jalan Raya Kaligawe KM 4, Semarang, Indonesia
{khoirulfuad,winarsih,luluk.ifada,hendri}@unissula.ac.id
2 Universitas Muria Kudus, Jalan Lingkar Utara UMK, Gondangmanis, Bae, Kudus, Indonesia
[email protected]
Abstract. The allocation of village funds disbursed by the government is large and tends to increase every year. The government requires villages to use an information system so that funds are distributed appropriately, transparently and accountably, and misuse is minimized. This study aims to examine village fund management through the village fund system using the Unified Theory of Acceptance and Use of Technology (UTAUT) approach, with Islamic work ethics as a moderating variable. The research sample comprised 163 respondents who directly managed village funds in Central Java. The data were analyzed with a structural equation model based on partial least squares. This study found that only one of the four variables in the UTAUT model did not affect user behavior, while the other three variables, namely performance expectancy, social influence, and facilitating conditions, affected users' behavior. Meanwhile, for the moderating variable, Islamic work ethics moderated only business expectancy, and not significantly; the other variables could not be moderated.
Keywords: UTAUT · Village fund system · Islamic work ethics
1 Introduction
Technology currently plays an important role in all activities in both the public and private sectors. In the public sector, the government uses technology as a way to provide better-quality public services. Financial management, such as village financial management, is one area in which the government applies technology. Law number 6/2014 on villages requires the use of a village information system as a form of transparent budget management. [19] stated that the human
factor plays an important role in the acceptance and adoption of technology. [25] believed that e-learning affects the sustainability of financial activity and provides convenience to its users. [5] found that the existence of technology can improve the performance of village government and facilitate the work of village officials.
The Unified Theory of Acceptance and Use of Technology (UTAUT) is the basic model used in this study to support the village financial management that the government requires. The factors measured are four constructs: performance expectancy, business expectancy, social influence, and facilitating conditions [28]. Performance expectancy refers to an individual's confidence in obtaining benefits from using a technology or information system. The studies of [5, 16, 29] indicate that performance expectancy can have a positive impact on the behavior of using the system; in contrast, [21, 29] suggested that performance expectancy does not influence system user behavior. The second construct is business expectancy, which can be defined as the level of ease associated with using the system [2]. The results of research conducted by [10, 12, 23, 27] showed that business expectancy has a positive impact on the behavior of system users. However, this contrasts with the results of [16, 19, 21, 28], which show that business expectancy has no influence on the behavior of system users. The next construct is social influence, the extent to which an individual perceives that important others believe he or she should use the new system [2]. Several studies on social influence on user behavior, conducted by [26, 31], imply that social influence has a positive impact on the behavior of users of an information system. The final construct in the UTAUT model is the facilitating condition, defined as the availability of technical assistance to support information systems. The research by [16, 23] shows that the facilitating condition affects the user behavior of an information system; on the contrary, [4, 21] suggested that the facilitating condition does not have a significant effect on user behavior.
The previous research described above shows inconsistent results for the UTAUT model. Moreover, applications of this model have rarely addressed the financial management systems of public or government organizations. This motivates the researchers to apply the UTAUT model in the local government sector, especially in villages. In addition, this research incorporates Islamic work ethics, which holds that an activity is not only aimed at being completed but should also encourage personal balance and social relations [8, 32].
2 Literature Review and Hypothesis Development
2.1 Literature Review
The UTAUT model is a concept developed by [28] regarding the acceptance and use of information technology. Researchers widely use the UTAUT model to examine user acceptance of information technology: for example, [3, 23] applied it to e-government adoption and [22] to e-health adoption. The model developed by [28] combines behavioral models with the acceptance of information technology. The UTAUT model is also based on eight underlying
theories: the theory of reasoned action (TRA), the technology acceptance model (TAM), the motivational model (MM), the theory of planned behavior (TPB), a combination of TAM and TPB, the model of PC utilization (MPCU), the innovation diffusion theory (IDT), and social cognitive theory (SCT) [17]. The UTAUT model consists of four main constructs: performance expectancy, effort expectancy (referred to in this study as business expectancy), social influence, and facilitating conditions. The first construct, performance expectancy, is the degree to which a person believes that using the system will provide benefits by maximizing or optimizing performance at work [28]. The second construct, business expectancy, relates to convenience: it is the degree of ease each individual experiences in using the technology system, reducing the energy and time the individual must spend [28]. The third construct, social influence, measures the extent to which an individual believes that people who have an important role think he or she should use the new information system [28]. The last construct, facilitating conditions, can be interpreted as a person's belief in the organization's infrastructure and in the availability of technical support to facilitate the use of the technology system [28]. In addition to the four UTAUT constructs, this study also used IWE (Islamic Work Ethic), a multidimensional concept that links an organization's prosperity and continuity to societal welfare [1]. Islamic work ethics views work as gaining more than personal interests economically, socially, and psychologically; it maintains social prestige, increases social prosperity, and strengthens faith [1].
2.2 Hypothesis Development
Performance Expectancy and the Behavior of Village Fund System Users
Performance expectancy is considered the most influential construct in the UTAUT model, because it reflects a person's confidence that using the system will help their performance. Research by [3, 5, 6, 18, 22] shows that performance expectancy has a positive effect on system user behavior.
H1: Performance expectancy has a positive effect on the behavior of village fund system users.
Business Expectancy and the Behavior of Village Fund System Users
Business expectancy is the level of ease in using a system. If the system is easy to use and flexible, a person will use it readily even if it is new. Research results from [24, 26, 29] indicate that business expectancy has a positive effect on system user behavior.
H2: Business expectancy has a positive effect on the behavior of the village fund system users.
Social Influence and the Behavior of Village Fund System Users
Social influence is expected to encourage people to utilize the system. In other words, the perceptions of the people around them affect users' decision to use the system. [12, 21] suggested that social influence has a positive effect on system user behavior.
H3: Social influence has a positive effect on the behavior of the village fund system users.
Facilitating Condition and the Behavior of Village Fund System Users
Facilitating conditions refer to the availability of infrastructure to support the system and of the knowledge needed to carry out the tasks performed. The results of research conducted by [3, 13, 23] show that the facilitating condition influences the behavior of system users.
H4: Facilitating condition has a positive effect on the behavior of village fund system users.
Islamic Work Ethics and the Behavior of Village Fund System Users
Islamic work ethics views the goal of work not merely as completing tasks but also as encouraging a balance of personal growth and social relationships [2]. This is crucial because Islamic work ethics matters not only for individuals but also for the surrounding work environment, and it can guide system users beyond mere proficiency in technology.
H5: Islamic work ethics moderates the relationships of performance expectancy, business expectancy, social influence, and facilitating conditions with the behavior of system users.
3 Research Method
The population in this study was village government offices in Central Java that use the village fund system. The sample included village officials who use technology-based financial information systems in each village office, selected by judgment sampling. The data source is primary data collected by distributing questionnaires to respondents, measured on a 1–7 Likert scale. This scale was used because it is more accurate, easier to use, and better reflects respondents' real values [7, 15]. Questionnaires were distributed directly and, for areas that were difficult to reach because of the restrictions on social activities during the Covid-19 pandemic, online via Google Forms. The data were analyzed using a Partial Least Squares Structural Equation Model (SEM-PLS) with SmartPLS, evaluating two models: the outer model and the inner model.
4 Results and Discussion
4.1 Results
Outer Model Test Result
The indicators in this study, based on the answers of 163 respondents, must pass the convergent validity, discriminant validity, and reliability tests [9]. The convergent validity test results show that the outer model values, that is, the correlations between each construct and its indicators, are greater than 0.5. In addition, convergent validity is assessed for each construct variable through the AVE (Average Variance Extracted) value, which must also be greater than 0.5; this study meets these requirements. Furthermore, the discriminant
validity test results show that the square root of the AVE of each variable is higher than its correlations with the other variables, indicating that the model fulfils discriminant validity. Meanwhile, the reliability test results show that the composite reliability of all constructs is greater than 0.7, so the model meets the reliability requirements. The outer model test results are presented in the following tables (Tables 1 and 2):
Table 1. The value of average variance extracted (AVE)
Variable | Average variance extracted
Performance expectancy (X1) | 0.860
Business expectancy (X2) | 0.793
Social influence (X3) | 0.722
Facilitating condition (X4) | 0.608
The behavior of village fund system users (Y) | 0.894
Islamic work ethics (X5) | 0.561
Table 2. Correlations among latent variables with square roots of AVEs
Variable | EK | EU | PS | KF | PP | EKI
EK | 0.927 | 0.577 | 0.587 | 0.547 | 0.525 | 0.191
EU | 0.577 | 0.891 | 0.575 | 0.642 | 0.491 | 0.046
PS | 0.587 | 0.575 | 0.850 | 0.604 | 0.510 | −0.040
KF | 0.547 | 0.642 | 0.604 | 0.780 | 0.534 | 0.002
PP | 0.525 | 0.491 | 0.510 | 0.534 | 0.946 | 0.039
EKI | 0.191 | 0.046 | −0.040 | 0.002 | 0.039 | 0.749
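As a rough illustration of the outer-model criteria used above (AVE greater than 0.5 and composite reliability greater than 0.7), the short Python sketch below computes both measures from a set of standardized indicator loadings. The loading values are invented for illustration and are not taken from this study.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return float(np.mean(l ** 2))

def composite_reliability(loadings):
    """Composite reliability: (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return float(num / (num + np.sum(1.0 - l ** 2)))

# Hypothetical standardized loadings for one construct (illustration only).
loadings = [0.91, 0.93, 0.94]
print(f"AVE = {ave(loadings):.3f}")                    # should exceed 0.5
print(f"CR  = {composite_reliability(loadings):.3f}")  # should exceed 0.7
```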
Inner Model Test Result
The inner model test examines the relationships between variables, their significance values, and the R-squared or adjusted R-squared of the research model. The results are shown in Table 3 below:
Table 3. The value of R-squared, adjusted R-squared, and Q-squared
Endogenous variable | R-squared | Adjusted R-squared | Q-squared
The behavior of village fund system users | 0.442 | 0.414 | 0.439
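For reference, the adjusted R-squared can be reproduced from the reported R-squared with the usual correction for sample size and number of predictors. The sketch below uses the reported R-squared of 0.442 and n = 163; the number of predictors (k = 8) is only an assumption made for illustration, not a figure stated in the paper.

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared for n observations and k predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Reported values: R^2 = 0.442, n = 163; k = 8 is an assumption (four main
# effects plus interaction terms), not a figure taken from the paper.
print(round(adjusted_r2(0.442, 163, 8), 3))  # close to the reported 0.414
```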
Hypothesis Result (t-Test)
The hypothesis tests in this study are assessed through the p-values of the path coefficients, which describe the relationships between the exogenous and endogenous latent variables at a significance level of 5% [9]. The first hypothesis test, between performance expectancy and village fund system users' behavior, shows that the path
coefficient value is 0.155 and the p-value is 0.041; in summary, the hypothesis is supported. The second hypothesis test indicates a path coefficient of 0.204 and a p-value of 0.072; thus, the hypothesis is not supported. The third hypothesis shows a path coefficient of 0.185 and a p-value of 0.013; hence, the hypothesis is supported. The fourth hypothesis shows a path coefficient of 0.181 and a p-value of 0.024; as a result, the hypothesis is supported. Further, the fifth hypothesis, regarding the moderation effect, is not supported. The complete results of each test are shown in Fig. 1 below:
Fig. 1. Partial least square test results
Meanwhile, the results of the path coefficients and the P-Value are presented in Table 4 below:
Table 4. Output path coefficients and p-values
Hypothesis | Path coefficient | p-value | Conclusion
Performance expectancy → The behavior of village fund system users | 0.155 | 0.041 | Supported
Business expectancy → The behavior of village fund system users | 0.204 | 0.072 | Not supported
Social influence → The behavior of village fund system users | 0.185 | 0.013 | Supported
Facilitating condition → The behavior of village fund system users | 0.181 | 0.024 | Supported
Islamic work ethics moderating performance expectancy, business expectancy, social influence, and facilitating condition → The behavior of village fund system users | −0.066 / 0.042 / −0.124 / −0.116 | 0.254 / 0.353 / 0.180 / 0.178 | Not supported
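In PLS-SEM, p-values such as those in Table 4 are typically obtained by bootstrapping the path coefficients. The sketch below illustrates that idea on synthetic data; the data, the single-predictor model, and the helper function are hypothetical and do not reproduce the study's actual estimation.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Synthetic stand-in data: one standardized predictor and outcome (illustration only).
n = 163
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)

def path_coefficient(x, y):
    """Standardized path coefficient for a single predictor (here, the correlation)."""
    return np.corrcoef(x, y)[0, 1]

estimate = path_coefficient(x, y)

# Resample cases with replacement and re-estimate the coefficient.
boot = np.array([
    path_coefficient(x[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(5000))
])
se = boot.std(ddof=1)
t_stat = estimate / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(t_stat) / sqrt(2))))  # normal approximation

print(f"estimate = {estimate:.3f}, SE = {se:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```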
4.2 Discussion
Performance Expectancy and the Behavior of Village Fund System Users
The results for the relationship between these variables show that the hypothesis is supported. In other words, higher individual performance expectancy encourages user behavior in using the village fund system. This is also supported by respondents' view that the village fund system helps them complete work more quickly and accurately. The results of this study are in line with research by [3, 5, 6, 18, 23], which found that performance expectancy has a significant positive effect on system user behavior.
Business Expectancy and the Behavior of Village Fund System Users
The test results show that the effect of business expectancy on the use of the village fund system is not supported: whether business expectancy increases or decreases, it does not affect the village fund system users' behavior. The system is designed to be easy to operate, but if users are still not skilled and proficient, the system will not run optimally. The results of this study are similar to those found by [6, 11, 29].
Social Influence and the Behavior of Village Fund System Users
Social influence is defined as a person's perception that people who are important to them believe they must use the system. This study also proves that social influence has a positive impact on village fund system users' behavior. The results further indicate that village officials feel their self-image will improve when using the village fund system correctly. The results of this study are supported by [23].
Facilitating Condition and the Behavior of Village Fund System Users
The test results confirm that facilitating conditions positively and significantly affect the use of the village fund system. If the facilitating conditions keep improving and remain up to date, they will encourage users to improve their knowledge of the system so that they can fulfil their responsibility in managing village finances. The results of this study support the research by [14, 16, 26].
Islamic Work Ethics and the Behavior of Village Fund System Users
The result for the moderating variable, Islamic work ethics, is not supported. This can be interpreted to mean that the Islamic work ethics of village fund system users have not been able to increase the desire of village officials to operate this system better. This may be because the use of the system is already governed by separate rules that contain elements of general work ethics.
5 Conclusion
Based on the research results explained above, only the business expectancy hypothesis is not supported: business expectancy does not affect the behavior of village fund system users. Meanwhile, performance expectancy, social influence, and facilitating conditions are proven to affect village fund system users'
behavior. This means that the constructs of the UTAUT model can be employed to measure the use of the village fund system from the users' point of view. Several findings follow. First, regarding performance expectancy, the financial system is considered capable of helping village officials manage village finances more quickly and accurately. Second, regarding business expectancy, users still do not perceive ease and flexibility in using the system, which could be because the system is complicated or because the human resources are not yet proficient and skilled. Third, the social influence result can be interpreted to mean that village officials regard the use of the village fund system as an obligation or instruction from their leaders. Meanwhile, the facilitating condition result shows that support for the facilities related to the use of the village fund system, such as a good and stable internet network, has been going well. The application of Islamic work ethics as a moderating variable is not supported in increasing village fund system users' behavior: the presence or absence of Islamic work ethics results in the same user behavior. In other words, the village apparatus considers that running the system is not directly influenced by ethics; it is more related to the tools and habits involved in operating the system.
References
1. Ali, A., Al, O.: Islamic work ethic in Kuwait. J. Manag. Dev. 14 (2008)
2. Ali, A.: Scaling an Islamic work ethics. J. Soc. Psychol. 128(5), 575–583 (2001)
3. Alshehri, M., Drew, S., Alhussain, T., Alghamdi, R.: The effects of website quality on adoption of e-government service: an empirical study applying UTAUT model using SEM (2012)
4. Anandari, D., Ekowati, W., Info, A.: Jurnal Kesehatan Masyarakat 15(1), 89–97 (2019)
5. Andriyanto, D., Baridwan, Z., Subekti, I.: Anteseden perilaku penggunaan e-budgeting: kasus sistem informasi keuangan desa di Banyuwangi, Indonesia 6(2), 151–170 (2019)
6. Enrique, B., et al.: Transformasi Pemerintah. Proses Dan Kebijakan, Orang (2017)
7. Finstad, K.: Response interpolation and scale sensitivity: evidence against 5-point scales. J. Usability Stud. 5(3), 104–110 (2010)
8. Fuad, K., Handayani, R.T.: Determinants of regional government performance: Islamic work ethics as moderating variable. In: Proceedings of the 1st International Conference on Islamic Civilization, ICIC 2020, Semarang, Indonesia (2020)
9. Ghozali, I., Latan, H.: Partial Least Squares: Konsep, Teknik dan Aplikasi Menggunakan SmartPLS 3.0 Untuk Penelitian Empiris. Badan Penerbit Universitas Diponegoro, Jepara (2015)
10. Gupta, K.P., Singh, S., Bhaskar, P.: Citizen adoption of e-government: a literature review and conceptual framework. Electron. Gov. 12(2), 160–185 (2016). https://doi.org/10.1504/EG.2016.076134
11. Handayani, T., Sudiana, S.: Analisis penerapan model UTAUT (unified theory of acceptance and use of technology) (studi kasus: sistem informasi akademik pada STTNAS Yogyakarta), pp. 165–180 (2015)
12. Hormati, A., Ternate, U.K., Ternate, B.B.: Pengujian model unified theory of acceptance and use of technology dalam pemanfaatan 3(April), 1–24 (2012)
13. Isaac, O., Abdullah, Z., Aldholay, A.H., Ameen, A.A.: Antecedents and outcomes of internet usage within organisations in Yemen: an extension of the unified theory of acceptance and use of technology (UTAUT) model. Asia Pac. Manag. Rev. 24(4), 335–354 (2019). https://doi.org/10.1016/j.apmrv.2018.12.003
14. Jennifer, J.: Patients' intention to use online postings of ED wait times: a modified UTAUT model. Int. J. Med. Inform. 112, 34–39 (2018)
15. Joshi, A., Kale, S., Chandel, S., Pal, D.: Likert scale: explored and explained. Br. J. Appl. Sci. Technol. 7(4), 396–403 (2015). https://doi.org/10.9734/bjast/2015/14975
16. Kurfalı, M., Arifoğlu, A., Tokdemir, G., Paçin, Y.: Adoption of e-government services in Turkey. Comput. Hum. Behav. 66, 168–178 (2017). https://doi.org/10.1016/j.chb.2016.09.041
17. Assegaf Setiawan, K.: Analisis perilaku EDMODO pada perkuliahan dengan model UTAUT. TEKNOSI 02(03) (2016)
18. Malau, Y.: Analisis penerimaan rail ticket system pada PT. KAI dengan menggunakan model UTAUT. XVIII(2), 102–112 (2016)
19. Mansoori, K.A.A., Sarabdeen, J., Tchantchane, A.L.: Investigating Emirati citizens' adoption of e-government services in Abu Dhabi using modified UTAUT model. Inf. Technol. People 31(2), 455–481 (2018). https://doi.org/10.1108/ITP-12-2016-0290
20. Naranjo-Zolotov, M., Oliveira, T., Casteleyn, S.: Citizens' intention to use and recommend e-participation: drawing upon UTAUT and citizen empowerment. Inf. Technol. People 32(2), 364–386 (2019). https://doi.org/10.1108/ITP-08-2017-0257
21. Novianti, N., Brawijaya, U., Baridwan, Z.: Faktor-faktor yang mempengaruhi minat pemanfaatan sistem informasi berbasis komputer dengan gender sebagai variabel moderating. Jurnal Akuntansi Multiparadigma 1(3) (2010). https://jamal.ub.ac.id/index.php/jamal/article/view/111
22. Sa'idah, N.: Analisis penggunaan sistem pendaftaran online (e-health) berdasarkan unified theory of acceptance and use of technology (UTAUT). 5, 72–81 (2017)
23. Rabaa'i, A.A.: The use of UTAUT to investigate the adoption of e-government in Jordan: a cultural perspective. Int. J. Bus. Inf. Syst. 24(3), 285 (2017). https://doi.org/10.1504/IJBIS.2017.082037
24. Ali, S.S., Danish, R.Q.: Effect of performance expectancy and effort expectancy on the mobile commerce adoption intention through personal innovativeness (2018)
25. Sakanko, M.A., David, J.: The effect of electronic payment systems on financial performance of microfinance banks in Niger State. Esensi: Jurnal Bisnis dan Manajemen 9(2), 143–154 (2019). https://doi.org/10.15408/ess.v9i2.12273
26. Sharma, R., Singh, G., Sharma, S.: Modelling internet banking adoption in Fiji: a developing country perspective. Int. J. Inf. Manage. 53, 102116 (2020). https://doi.org/10.1016/j.ijinfomgt.2020.102116
27. Onaolapo, S., Oyewole, O.: Performance expectancy, effort expectancy, and facilitating conditions as factors influencing smart phones use for mobile learning by postgraduate students of the University of Ibadan, Nigeria. Interdisc. J. E-Skills Lifelong Learn. 14, 095–115 (2018). https://doi.org/10.28945/4085
28. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
29. Warsito, T.: Are government employees adopting local e-government transformation? The need for having the right attitude, facilitating conditions and performance expectancy (2017)
30. Zati, W., Dini, S., Alamanda, T., Sidiq, F., Prabowo, A.: Adoption of technology on the assessment information system application of Bandung Juara (SIP Bdg Juara) using modified UTAUT 2 model (2017)
31. Yu, C.-S.: Factors affecting individuals to adopt mobile banking: empirical evidence from the UTAUT model (2012)
32. Yousef, D.A.: Islamic work ethic: a moderator between organizational commitment and job satisfaction in a cross-cultural context. Personnel Rev. 30(2), 152–169 (2001)
MOC Approach and Its Integration with Social Network and ICT: The Role to Improve Knowledge Transfer
Tri Wikaningrum
Department of Management, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia
[email protected]
Abstract. Initially, the study of knowledge management emphasized the role of Information and Communication Technology (ICT). However, policies in human resource management are crucial factors that determine the quality of organizational knowledge management. In the context of knowledge management, HRM practices should be measured with specific instruments, whereas most studies have examined configurations of HRM functions. This article therefore examines the motivation-opportunity-competency (MOC) approach as a dimension of HRM in knowledge management research. With this approach, policies in employee management aim to encourage the willingness of individuals to share knowledge, provide opportunities for sharing and interaction, and improve employee competence. Its integration with social networks and the application of ICT is expected to improve human capital, which in turn supports knowledge transfer capability. Therefore, this paper discusses the MOC approach, the underlying phenomena, and a conceptual model to improve knowledge transfer capability. Future research is suggested to test the validity of the instruments and the conceptual model proposed by this article.
Keywords: MOC approach · Social network · ICT · Knowledge transfer capability · Human capital
1 Introduction
Currently, the role of the knowledge economy is increasingly recognized as a source of sustainable competitive advantage for organizations. Organizations no longer depend only on tangible production factors, but also on knowledge relevant to the external conditions of the industry. The demand for creativity and innovation requires organizations to pay more attention to their information and knowledge resources. The knowledge possessed by the organization becomes a strategic asset because of its specific nature: it is not easily imitated or transferred. Ownership of knowledge also makes it easier for individuals to see problems, to be sensitive in seizing opportunities, and to read solutions from different perspectives. This has implications for the importance of organizational knowledge acquisition practices. Knowledge acquisition
is defined as the creation of new knowledge based on existing knowledge, which comes from inside or outside the organization. Knowledge resides with and is owned by individuals. To become organizational knowledge, it must be understood, interpreted, shared, transferred, entered into the system, and applied. Therefore, the willingness to share knowledge is one of the critical factors in transferring knowledge from individual property to group and organizational ownership. This perspective is compatible with the personalization knowledge strategy, which is based on the view that support for social processes is needed to facilitate knowledge sharing between individuals [1, 2]. The willingness to share knowledge cannot depend only on the knowledge owner but needs to be encouraged through organizational policies. Policies in human resource management are directed at encouraging organizational members to support the knowledge management process. Several studies have linked knowledge management with human resource management by using a combination of HRM practices as their dimensions, such as the research conducted by [3], which used the dimensions of training, selection, team performance appraisal, job rotation, and incentives. [4–6] included the practices of selection, participation, training, performance appraisal, and compensation as dimensions of HRM. However, not all HRM practices directly impact knowledge sharing and transfer behavior. Therefore, the motivation-opportunity-competency (MOC) approach is considered more appropriate as a dimension of HRM practices concerning knowledge management [7]. Policy should also be directed at providing opportunities to share and implement new knowledge. Individuals who are motivated and have the opportunity to engage in knowledge acquisition have the potential to increase the competencies they need and that have value for their organization. The motivation, opportunity, and competency approaches are thus expected to support organizational knowledge acquisition capability in an integrated manner. As we know, organizations today face the era of the knowledge economy and rapidly changing external dynamics. On the other hand, access to information and knowledge is widely available. Therefore, the critical factor is no longer the ability to access knowledge but the speed of acquiring knowledge from both inside and outside the organization. This is in line with collaborative agility capital, a novel concept interpreted as agile learning competency with learning speed as one of its indicators. This competency has the potential to increase knowledge transfer and knowledge acquisition [1]. Although the personalization strategy is important, the speed and effectiveness of acquiring knowledge need to be supported by a codification strategy to transfer knowledge into corporate legal documents. ICT support has an important role in implementing this codification strategy. This article seeks to discuss the Motivation-Opportunity-Competency (MOC) approach, the role of knowledge strategies, enabler factors, and a conceptual model for enhancing knowledge transfer capability.
2 Literature Review
2.1 Knowledge and Knowledge Transfer
The study of knowledge management can be approached from several perspectives, depending on how knowledge is viewed. This matters because different perspectives imply a different focus in knowledge management discussions [8]. For example, when knowledge is viewed as an object, knowledge management focuses on managing the stock of knowledge. When knowledge is considered a process, the focus of knowledge management is on how to create, share, and transfer knowledge. If knowledge is viewed from the perspective of capability, the focus is more on strengthening the core competencies and intellectual capital of employees. This article discusses knowledge management using the process and capability perspectives; the discussion therefore concerns knowledge transfer capability and efforts to improve it. The relationship between the knowledge perspective and the focus of knowledge management appears in several research results. For example, [9] states that knowledge management includes the processes of creating, storing, transferring, and applying knowledge. Other researchers classify only two processes, namely knowledge creation and knowledge transfer [10–12], and [13] classifies knowledge creation and knowledge transfer as a flow of knowledge. Among the various knowledge management processes, knowledge creation is an important strategic weapon [14]. If a company wants to improve organizational performance, what really matters is not knowledge as such, but the company's ability to apply knowledge effectively to create new knowledge. This means that knowledge, and the ability to create and utilize it, is the main source for companies to build and enhance a sustainable competitive advantage. It shows that researchers do not use the perspective of knowledge as an object, but as a capability. To create new knowledge, the organization must have knowledge creation capability [15], defined as the degree to which organizational members who have mutual access can combine information and knowledge into new knowledge, and perceive the value of the exchange and combination process [16]. [16] stated that at least three resources have an impact on knowledge creation capability: individual knowledge stocks, relational ties that facilitate the flow of knowledge among employees, and organizational processes that shape the organizational climate. Knowledge can be sourced from information connections; therefore, knowledge creation should focus on the exchange and sharing of information [17]. This implies that the process of knowledge creation actually includes knowledge transfer activities.
2.2 Knowledge Strategy and HRM
Concerning the relationship between knowledge management and HRM practices, the choice of a knowledge strategy implies a different approach to HR management. Hansen, Nohria, and Tierney in their article published in Harvard Business Review [2] argued that codified knowledge strategies have HRM implications that motivate
employees to codify their knowledge, while the personalization knowledge strategy is related to encouraging employees to share knowledge with others. Hence, the knowledge process differs: the codification strategy focuses on the process of transferring knowledge from people to documents, while the personalization strategy emphasizes improving social processes to facilitate the sharing of tacit knowledge. Studies that link HRM practices with knowledge management must therefore consider the source of the company's competitive advantage, whether it comes from codified knowledge, such as databases, or from knowledge creation and innovation. The implication lies in the selection of HRM practice instruments, which should be oriented towards motivating employees, increasing their competence, and providing opportunities for interaction among organizational members. All policies in HRM practice should be directed at increasing the motivation, opportunity, and competency of individuals in acquiring knowledge and transferring it to all members of the organization.
2.3 Social Network and ICT
Social networks make it easier for individuals to build relational ties that contain trust, reciprocal norms, and group identities. In the context of the learning process, employees' knowledge must be continuously updated so that it does not become obsolete and remains relevant to the dynamics of internal and external change in the organization. Furthermore, the creation of new knowledge is more likely to occur through the application of existing knowledge and collaborative interactions among individuals in the organization, which make the exchange of resources, including knowledge, easier. The role of social networks and the interaction patterns that occur in the work process is therefore essential, among colleagues, between subordinates and superiors, and between management and external stakeholders. Knowledge acquisition from external sources is most likely to occur in these interactions. The knowledge gained (exploration) is then assimilated, transferred, and applied within the organization. Mechanisms for knowledge transfer can use "soft" and "hard" approaches. The first relies on face-to-face interactions, learning by doing, brainstorming, and direct communication; social networks thus become a medium that facilitates this "soft" approach. Meanwhile, the "hard" approach is based more on information and communication technology (ICT), including information shared via email, electronic company bulletins, and knowledge management portals. Both approaches have their advantages and disadvantages. The knowledge transferred includes tacit and explicit knowledge, so an integrative approach using both "soft" and "hard" mechanisms is needed. Although the quality of social interaction is important in the practice of sharing and transferring knowledge, technology supports the speed and effectiveness of knowledge dissemination, especially of explicit knowledge.
2.4 Human Capital
Many definitions of human capital are expressed by experts. Each definition differs in scope and measurement. For example, [18] stated that human capital includes cumulative tacit knowledge, competencies, experience, and expertise, as well as individual
innovation and talents. Human capital is also defined as a set of key elements such as the knowledge, skills, experience, attitudes, and competencies of all employees [19]. Meanwhile, [20] stated that human capital refers to the competence of employees and the external party resources that can be accessed by the company. Human capital is the knowledge, expertise, and experience of individuals, as well as their desire to share it with the organization to create value [21]. Therefore, measuring human capital is not just measuring a person's knowledge, expertise, and experience, but also how successfully the individual's knowledge and contribution provide value to the organization. Thus, it can be concluded that human capital is a set of competencies that are personal, inherent in the owner, and valuable to the organization.
3 Research Method and Conceptual Model
The conceptual model proposed in this article is based on a review of secondary data from scientific books, journals, and magazines. The model illustrates the relationships between HRM policies based on the MOC approach, social networks, ICT, human capital, and knowledge transfer capability. The HRM policy with the MOC approach is an organizational input to encourage improvement in the quality of human capital. Ownership of valuable knowledge becomes an asset for individuals in transferring knowledge. Knowledge transfer occurs in the process of interaction among individuals (at the group level). It is not just knowledge sharing, because knowledge transfer involves not only the movement of knowledge but also changes in its types and levels. Therefore, the quality of human capital plays a role in increasing individual capabilities in identifying, absorbing, assimilating, and applying knowledge. However, the factors that support the application of knowledge management practices are not only the qualifications of human resources (human capital) but also social networks both inside and outside the organization. A social network is an enabling medium for knowledge transfer. As an element of the learning process, knowledge transfer is not an individual process; it is a collective process that involves interactions between individuals and groups. Social exchange and the norm of reciprocity determine the quality of the exchange of knowledge resources among organizational members. The formation of the network as a learning medium can be encouraged by the MOC approach: as stated by [22], human resource management policies can be designed to support the development of social networks. The transfer of knowledge from the individual level to the group and organization level has implications for changes in the type of knowledge and its format. For example, a standard operating procedure in the form of a physical document is converted into an operating manual in the company's information system; this explicit knowledge is converted into a format that is more accessible to all members of the organization. Changing the format of knowledge requires technical support. ICT also facilitates collaborative activities among individuals and between individuals and groups, especially those that are geographically dispersed [2]. Thus, ICT supports the practice of knowledge transfer, not only to codify knowledge but also to facilitate
interactive communication among individuals. Technologies commonly used are intranets, groupware, data warehouses, and decision support tools. The integration of the MOC approach, social networks, and ICT is a combination of organizational/structural aspects and social relationship aspects. Organizational policies that effectively motivate, provide opportunities to access knowledge, and improve employee competencies are not by themselves sufficient to increase the ability to transfer knowledge; the same holds for the competencies, skills, and attitudes inherent in individuals (human capital). Technical support and social networks are catalysts that increase the ability of organizational members to share their knowledge and turn it into organizational knowledge. As presented in Fig. 1 below, the HRM-MOC approach policy affects knowledge transfer capability through its effect on improving human capital. In addition, ICT and social networks support the role of human capital by strengthening the influence of human capital on knowledge transfer capability.
Fig. 1. Conceptual model: the MOC approach affects knowledge transfer capability through human capital, with social network and ICT as supporting factors.
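As one possible way to operationalize the conceptual model in Fig. 1 for the suggested empirical test, the sketch below treats human capital as a mediator of the MOC approach and social network and ICT as moderators of the human capital effect. All variable names and the simulated data are hypothetical placeholders, not measures proposed by this article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Hypothetical survey-style scores; placeholders for validated scales.
df = pd.DataFrame({
    "moc": rng.normal(size=n),             # HRM policies based on the MOC approach
    "social_network": rng.normal(size=n),  # quality of the social network
    "ict": rng.normal(size=n),             # ICT support
})
df["human_capital"] = 0.6 * df["moc"] + rng.normal(scale=0.8, size=n)
df["knowledge_transfer"] = (
    0.5 * df["human_capital"]
    + 0.2 * df["human_capital"] * df["social_network"]
    + 0.2 * df["human_capital"] * df["ict"]
    + rng.normal(scale=0.8, size=n)
)

# Stage 1: MOC approach -> human capital (the mediation path).
stage1 = smf.ols("human_capital ~ moc", data=df).fit()
# Stage 2: human capital -> knowledge transfer, moderated by social network and ICT.
stage2 = smf.ols(
    "knowledge_transfer ~ human_capital * social_network + human_capital * ict",
    data=df,
).fit()
print(stage1.params)
print(stage2.params)
```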
4 Conclusion and Future Research
This article has discussed the integrative role of HRM-MOC approach policies, social networks, and ICT in increasing the role of human capital in knowledge transfer capability. Future research needs to explore the dimensions and indicators of each concept in the proposed conceptual model, validate their measurements, and then conduct a pilot study and test the model empirically. For model testing, future research is recommended to use companies that meet several criteria: first, the majority of employees are knowledge workers; second, the company implements a configuration of human resource management practices; third, the company has ICT capital that can be directed to support knowledge management practices.
References
1. Wikaningrum, T., Sulistyo, H., Ghozali, I., Yuniawan, A.: Collaborative agility capital: a conceptual novelty to support knowledge management. In: Barolli, L., Hussain, F.K., Ikeda, M. (eds.) CISIS. AISC, vol. 993, pp. 972–980. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-22354-0_93
2. Hislop, D.: Knowledge Management in Organizations: A Critical Introduction, 3rd edn. Oxford University Press, United Kingdom (2013)
3. López-Cabrales, Á., Real, J.C., Valle, R.: Relationships between human resource management practices and organizational learning capability: the mediating role of human capital. Pers. Rev. 40(3), 344–363 (2011)
4. Chen, C.-J., Huang, J.-W.: Strategic human resource practices and innovation performance: the mediating role of knowledge management capacity. J. Bus. Res. 62(1), 104–114 (2009). https://doi.org/10.1016/j.jbusres.2007.11.016
5. Özbağ, G.K., Esen, M., Esen, D.: The impact of HRM capabilities on innovation mediated by knowledge management capability. Procedia Soc. Behav. Sci. 99, 784–793 (2013)
6. Fernandez, M.D., Reyes, S.P., Cabrera, R.V.: Human capital and human resource management to achieve ambidextrous learning: a structural perspective. BRQ Bus. Res. Q. 20(1), 63–77 (2016)
7. Chuang, C.H., Jackson, S.E., Jiang, Y.: Can knowledge-intensive teamwork be managed? Examining the roles of HRM systems, leadership, and tacit knowledge. J. Manag. 42(2), 524–554 (2013)
8. Carlsson, S.A.: Knowledge managing and knowledge management systems in interorganizational networks. Knowl. Process. Manag. 10(3), 194–206 (2003)
9. Alavi, M., Leidner, D.E.: Knowledge management and knowledge management systems: conceptual foundations and research issues. MIS Q. 25(1), 107–136 (2001)
10. Ofek, E., Sarvary, M.: Leveraging the customer base: creating competitive advantage through knowledge management. Manage. Sci. 47(11), 1441–1456 (2001)
11. Von Krogh, G., Nonaka, I., Aben, M.: Making the most of your company's knowledge: a strategic framework. Long Range Plan. 34(4), 421–439 (2001)
12. Kumar, J.A., Ganesh, L.S.: Research on knowledge transfer in organizations: a morphology. J. Knowl. Manage. (2009)
13. Cabrera, E.F., Cabrera, A.: Fostering knowledge sharing through people management practices. Int. J. Hum. Resour. Manage. 16(5), 720–735 (2005)
14. Lee, H., Choi, B.: Knowledge management enablers, processes, and organizational performance: an integrative view and empirical examination. J. Manag. Inf. Syst. 20(1), 179–228 (2003)
15. Nahapiet, J., Ghoshal, S.: Social capital, intellectual capital, and the organizational advantage. Acad. Manag. Rev. 23(2), 242–266 (1998)
16. Smith, K.G., Collins, C.J., Clark, K.D.: Existing knowledge, knowledge creation capability, and the rate of new product introduction in high-technology firms. Acad. Manag. J. 48(2), 346–357 (2005)
17. Wang, D., Zhongfeng, S., Yang, D.: Organizational culture and knowledge creation capability. J. Knowl. Manage. 15(3), 363–373 (2011). https://doi.org/10.1108/13673271111137385
18. Bontis, N., Dragonetti, N.C., Jacobsen, K., Roos, G.: The knowledge toolbox: a review of the tools available to measure and manage intangible resources. Eur. Manag. J. 17(4), 391–402 (1999)
19. Hendriks, P.H.J., Sousa, C.A.A.: Rethinking the liaisons between intellectual capital management and knowledge management. J. Inf. Sci. 39(2), 270–285 (2013). https://doi.org/10.1177/0165551512463995
20. Petty, R., Guthrie, J.: Intellectual capital literature review: measurement, reporting and management. J. Intellect. Cap. 1(2), 155–176 (2000)
21. Bozeman, B., Corley, E.: Scientists' collaboration strategies: implications for scientific and technical human capital. Res. Policy 33(4), 599–616 (2004)
22. Wikaningrum, A.T., Mas'ud, B.F.: Value-based social capital: an overview of social exchange theory. In: 17th International Symposium on Management (INSYMA 2020), pp. 69–74. Atlantis Press (2020)
An Integrated System for Actor Node Selection in WSANs Considering Fuzzy Logic and NS-3 and Its Performance Evaluation
Yi Liu1, Shinji Sakamoto2, and Leonard Barolli3
1 Department of Computer Science, National Institute of Technology, Oita College, 1666 Maki, Oita 870-0152, Japan
[email protected]
2 Department of Computer and Information Science, Faculty of Science and Technology, Seikei University, 3-3-1 Kichijoji-Kitamachi, Musashino-shi, Tokyo 180-8633, Japan
[email protected]
3 Department of Languages and Informatics Systems, Fukuoka Institute of Technology (FIT), 3-30-1 Wajiro-Higashi, Higashi-Ku, Fukuoka 811-0295, Japan
[email protected]
Abstract. In recent years, research on Wireless Sensor and Actor Networks (WSANs) has been conducted as a basic technology for the Internet of Things (IoT). WSANs are wireless networks consisting of a large number of sensor nodes and actor nodes. They are expected to be applied to a wide range of fields such as environmental monitoring of towns and agricultural lands and the integrated management of buildings and schools. In WSANs, it is necessary to construct an appropriate node operating environment for efficient sensing and actuation. In this research, we focus on the problem of selecting the actor node, which serves as the point for packet transmission, among other operations of WSANs. We propose a system to select the most suitable actor node for communication based on Fuzzy Logic (FL), considering different parameters. We also integrate the FL-based system with the network simulator 3 (ns-3) in order to carry out the performance evaluation. For the FL-based system, we consider three input parameters, Distance to Event from Actor (DEA), Number of Sensor per Actor (NSA) and Task Accomplishment Time (TAT), for making the Actor Selection Decision (ASD). From the simulation results of the FL-based system, we found that the ASD decreases when TAT, NSA and DEA increase. The performance evaluation by ns-3 shows that the packet loss increases when NSA increases. When TAT increases, the packet loss and the delay time increase. By comparing the packet loss and delay time for three actor nodes (Actor1, Actor2, Actor3), we found that Actor2 has the best performance.
1 Introduction
Recently, there have been many research works on the Internet of Things (IoT). By utilizing the IoT, it is possible to operate, monitor and control remote objects via the Internet. For different applications, it is necessary to use a wide variety of sensor devices, and it is vital to build and improve network performance when using sensors in the IoT. In this research, we focus on the problem of selecting actor nodes during data transfer in Wireless Sensor and Actor Networks (WSANs). The purpose is to implement and evaluate a system for selecting the most efficient actor node for data transmission, using the actor node as a transit point for data communication [1].
In this work, we use Fuzzy Logic (FL) for the selection of actor nodes considering different parameters. FL is a logic underlying modes of reasoning which are approximate rather than exact. The importance of FL derives from the fact that most modes of human reasoning, and especially common sense reasoning, are approximate in nature [2]. FL uses linguistic variables to describe the control parameters. By using relatively simple linguistic expressions, it is possible to describe and grasp very complex problems. A very important property of linguistic variables is the capability of describing imprecise parameters.
The concept of a fuzzy set deals with the representation of classes whose boundaries are not determined. It uses a characteristic function, taking values usually in the interval [0, 1]. Fuzzy sets are used for representing linguistic labels. This can be viewed as expressing an uncertainty about the clear-cut meaning of the label. An important point, however, is that the valuation set is supposed to be common for the various linguistic labels involved in the given problem. Fuzzy set theory uses the membership function to encode a preference among the possible interpretations of the corresponding label. A fuzzy set can be defined by exemplification, ranking elements according to their typicality with respect to the concept underlying the fuzzy set [10].
We perform simulations using ns-3, which is a discrete event-driven network simulator that can perform operations close to the actual environment. ns-3 simulates a real environment by executing an experimental script called a scenario. In ns-3, program modules corresponding to each function of the Internet protocol are implemented.
In this paper, we propose an integrated system combining an FL-based system and ns-3 for the selection of an actor node in WSANs. We consider three input parameters, Distance to Event from Actor (DEA), Number of Sensor per Actor (NSA) and Task Accomplishment Time (TAT), for making the Actor Selection Decision (ASD). We evaluate the proposed system by considering the packet transfer rate and communication delay under various environments based on multiple parameters.
The structure of this paper is as follows. In Sect. 2, we introduce WSANs. In Sect. 3, we give a short description of FL used for control. In Sect. 4, we present the proposed fuzzy-based system and ns-3. In Sect. 5, we discuss the simulation results. Finally, conclusions and future work are given in Sect. 6.
2 WSANs
In recent years, the energy consumption of individual sensors has decreased due to the miniaturization and higher performance of sensors and the development of wireless communication technology. A sensor node continuously observes the state of the observed object, controls it, and transfers data. Sensor nodes consume a particularly large amount of power during data communication and in the standby state. However, the sensors that make up WSNs cannot always be connected to a power source, and battery operation is a prerequisite. Therefore, in most cases, the network must enable long-term operation by keeping the energy required for communication and data processing low. In addition, as the scale of the network increases, the distance between the sensor node and the actor node increases. That is, the number of multi-hop communication steps from the sensor node to the actor node increases, and so does the power used for communication. In order to solve this problem, a new network form has been proposed that realizes energy saving and low-delay transfer across the entire network by adding a new type of node, called an actor node, while using existing sensor resources. Such a network is called a Wireless Sensor and Actor Network (WSAN). WSANs are constructed with the model shown in Fig. 1. The sensor nodes and actor nodes are placed in the observation target range of the network. As in WSNs, the sensors observe the target and acquire data, and the actor nodes process the data acquired by the sensors. Processing in this case refers to using the observed data to control the observation target.
Fig. 1. WSANs model.
3 Application of Fuzzy Logic for Control
The ability of fuzzy sets and possibility theory to model gradual properties or soft constraints whose satisfaction is a matter of degree, as well as information
pervaded with imprecision and uncertainty, makes them useful in a great variety of applications [3–9, 11, 12]. The most popular area of application is Fuzzy Control (FC), since the appearance, especially in Japan, of industrial applications in domestic appliances, process control, and automotive systems, among many other fields. In FC systems, expert knowledge is encoded in the form of fuzzy rules, which describe recommended actions for different classes of situations represented by fuzzy sets. In fact, any kind of control law can be modeled by the FC methodology, provided that this law is expressible in terms of "if ... then ..." rules, just like in the case of expert systems. However, FL diverges from the standard expert system approach by providing an interpolation mechanism from several rules. In the context of complex processes, it may turn out to be more practical to get knowledge from an expert operator than to calculate an optimal control, due to modeling costs or because a model is out of reach.
A concept that plays a central role in the application of FL is that of a linguistic variable. Linguistic variables may be viewed as a form of data compression: one linguistic variable may represent many numerical variables. This form of data compression is referred to as granulation. The same effect can be achieved by conventional quantization, but in the case of quantization the values are intervals, whereas in the case of granulation the values are overlapping fuzzy sets. The advantages of granulation over quantization are as follows:
• it is more general;
• it mimics the way in which humans interpret linguistic values;
• the transition from one linguistic value to a contiguous linguistic value is gradual rather than abrupt, resulting in continuity and robustness.
FC describes the algorithm for process control as a fuzzy relation between information about the conditions of the process to be controlled, x and y, and the output for the process, z. The control algorithm is given as "if ... then ..." expressions, such as:
If x is small and y is big, then z is medium;
If x is big and y is medium, then z is big.
These rules are called FC rules. The "if" clause of a rule is called the antecedent and the "then" clause is called the consequent. In general, variables x and y are called the inputs and z the output. The "small" and "big" are fuzzy values for x and y, and they are expressed by fuzzy sets. Fuzzy controllers are constructed of groups of these FC rules, and when an actual input is given, the output is calculated by means of fuzzy inference.
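To make the "if ... then ..." rules above concrete, the following minimal sketch implements a Mamdani-style inference for exactly these two example rules, with triangular membership functions and centroid defuzzification. The membership parameters are invented for illustration and are unrelated to the membership functions used later in this paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Output universe and output fuzzy sets (parameters invented for illustration).
z = np.linspace(0.0, 1.0, 201)
z_medium = tri(z, 0.3, 0.5, 0.7)
z_big = tri(z, 0.6, 1.0, 1.4)

def infer(x, y):
    """Mamdani min-max inference for the two example rules, centroid defuzzification."""
    x_small = tri(x, -0.4, 0.0, 0.4)
    x_big = tri(x, 0.6, 1.0, 1.4)
    y_med = tri(y, 0.3, 0.5, 0.7)
    y_big = tri(y, 0.6, 1.0, 1.4)

    w1 = min(x_small, y_big)  # rule 1: if x is small and y is big, then z is medium
    w2 = min(x_big, y_med)    # rule 2: if x is big and y is medium, then z is big

    aggregated = np.maximum(np.minimum(w1, z_medium), np.minimum(w2, z_big))
    if aggregated.sum() == 0.0:
        return 0.5  # no rule fired; return the middle of the output universe
    return float((z * aggregated).sum() / aggregated.sum())

print(infer(0.1, 0.9))  # rule 1 dominates, so the output lies near the 'medium' peak (0.5)
```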
4 Proposed Simulation System
In this paper, we present an integrated simulation system considering FL and ns-3. The conceptual diagram of the proposed system is shown in Fig. 2. First, the input parameter values for the FL-based system are sensed and measured by the sensors. Then, using the FL-based system, an appropriate actor node is selected. After that, the data received by the selected actor node are used in ns-3 for the performance evaluation.
Fig. 2. Conceptual diagram of proposed system.
4.1 FL-Based System
In this work, as shown in Fig. 3, we consider three parameters for making the Actor Selection Decision (ASD): Distance to Event from Actor (DEA), Number of Sensors per Actor (NSA) and Task Accomplishment Time (TAT). These three parameters are not correlated with each other, which is why we use FL. The membership functions of our system are shown in Fig. 4. In Table 1, we show the Fuzzy Rule Base (FRB) of our proposed system, which consists of 36 rules. The input parameters of the FLC are DEA, NSA and TAT, and the output linguistic parameter is ASD. The term sets of DEA, NSA and TAT are defined as follows.
Fig. 3. FLC structure.
Fig. 4. Membership functions.
DEA = {VeryNear, Near, Far, VeryFar} = {VN, N, F, VF}
NSA = {Few, Middle, Many} = {Fe, Mid, Ma}
TAT = {Short, Middle, Long} = {S, M, L}
Table 1. FRB.

No. DEA NSA TAT ASD   No. DEA NSA TAT ASD
1   VN  Fe  S   L7    19  F   Mid S   L4
2   VN  Fe  M   L5    20  F   Mid M   L2
3   VN  Fe  L   L4    21  F   Mid L   L1
4   N   Fe  S   L6    22  VF  Mid S   L3
5   N   Fe  M   L4    23  VF  Mid M   L2
6   N   Fe  L   L3    24  VF  Mid L   L1
7   F   Fe  S   L5    25  VN  Ma  S   L4
8   F   Fe  M   L4    26  VN  Ma  M   L3
9   F   Fe  L   L2    27  VN  Ma  L   L2
10  VF  Fe  S   L4    28  N   Ma  S   L4
11  VF  Fe  M   L3    29  N   Ma  M   L2
12  VF  Fe  L   L2    30  N   Ma  L   L1
13  VN  Mid S   L6    31  F   Ma  S   L3
14  VN  Mid M   L4    32  F   Ma  M   L1
15  VN  Mid L   L3    33  F   Ma  L   L1
16  N   Mid S   L5    34  VF  Ma  S   L2
17  N   Mid M   L3    35  VF  Ma  M   L1
18  N   Mid L   L2    36  VF  Ma  L   L1
The term set for the output ASD is defined as follows:

ASD = {Level1, Level2, Level3, Level4, Level5, Level6, Level7} = {L1, L2, L3, L4, L5, L6, L7}
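As an illustration of how such an FLC could be assembled, the sketch below uses the scikit-fuzzy control API with the term sets defined above and two of the 36 FRB rules from Table 1; the paper does not publish its implementation, so the universes, membership function shapes and crisp test inputs are assumptions.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Linguistic variables; the universe ranges are illustrative assumptions.
dea = ctrl.Antecedent(np.arange(0, 101, 1), 'DEA')   # Distance to Event from Actor
nsa = ctrl.Antecedent(np.arange(0, 11, 1), 'NSA')    # Number of Sensors per Actor
tat = ctrl.Antecedent(np.arange(0, 61, 1), 'TAT')    # Task Accomplishment Time
asd = ctrl.Consequent(np.arange(0, 8, 1), 'ASD')     # Actor Selection Decision

# Only the terms used by the two example rules are defined here;
# the remaining terms follow the same pattern.
dea['VN'] = fuzz.trapmf(dea.universe, [0, 0, 15, 35])
dea['VF'] = fuzz.trapmf(dea.universe, [55, 75, 100, 100])
nsa['Fe'] = fuzz.trapmf(nsa.universe, [0, 0, 2, 5])
nsa['Ma'] = fuzz.trapmf(nsa.universe, [5, 8, 10, 10])
tat['S'] = fuzz.trapmf(tat.universe, [0, 0, 10, 25])
tat['L'] = fuzz.trapmf(tat.universe, [30, 50, 60, 60])
asd['L1'] = fuzz.trimf(asd.universe, [0, 1, 2])
asd['L7'] = fuzz.trimf(asd.universe, [6, 7, 7])

# Rule 1 and rule 36 of the FRB in Table 1.
rules = [
    ctrl.Rule(dea['VN'] & nsa['Fe'] & tat['S'], asd['L7']),
    ctrl.Rule(dea['VF'] & nsa['Ma'] & tat['L'], asd['L1']),
]

flc = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
flc.input['DEA'] = 10
flc.input['NSA'] = 2
flc.input['TAT'] = 5
flc.compute()
print(flc.output['ASD'])   # defuzzified selection level, close to L7 for these inputs
```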
4.2 NS-3 Simulator
The goal of the ns-3 project is to develop a preferred, open simulation environment for networking research. It should be aligned with the simulation needs of modern networking research and should encourage community contribution, peer review, and validation of the software. In this research, we use ns-3 for the performance evaluation considering different parameters. In the FL-based system, we considered three input parameters for selecting actor nodes in WSANs. For the performance evaluation with ns-3, we consider NSA and TAT. Using ns-3, we constructed a network topology based on the Table 2
parameters. Fig. 5 shows the positions of the nodes in the WSAN. The red points show the sensor nodes, the green points the actor nodes and the blue points the management nodes.

Table 2. Parameters for ns-3.

Parameter                        Value
Number of sensor nodes           5, 10 for each actor node
Number of actor nodes            3
Number of management nodes       3
Number of gateways               1
Wireless communication standard  IEEE 802.16e
Radio propagation model          COST 231
Node mobility                    Random
Simulation time                  5, 25 s
Fig. 5. Node placement.
5 Simulation Results
5.1 Simulation Results of FL-Based System
In this section, we present the simulation results for our proposed FL-based system. We decided the number of term sets by carrying out many simulations.
We show the relation of ASD with NSA and TAT for different DEA values in Fig. 6, where DEA is kept constant in each subfigure. In Fig. 6(a), we consider a DEA value of 10 units and change the NSA value from 0.1 to 0.9 units. When NSA increases, the ASD decreases. Also, when TAT increases, the ASD decreases. In Fig. 6(b) and Fig. 6(c), we increase the DEA value to 40 and 70 units, respectively. We see that when DEA increases, the ASD decreases.
Fig. 6. Relation of ASD with NSA and TAT for different DEA values.
5.2 Simulation Results of NS-3
For the performance evaluation with ns-3, we consider the NSA and TAT parameters and measure the packet loss and delay time for each actor. The simulation results are shown in Figs. 7, 8, 9 and 10. Comparing Fig. 7 and Fig. 8,
Fig. 7. Total delay time and packet loss (Sensor nodes = 5, Simulation time = 5 s).
Fig. 8. Total delay time and packet loss (Sensor nodes = 10, Simulation time = 5 s).
the number of sensor nodes connected to one actor node is changed from 5 to 10. The packet loss increases when NSA increases. Comparing Fig. 7 and Fig. 9, we changed the task accomplishment time from 5 s to 25 s. When TAT increases, the packet loss and the delay time increase. By comparing the packet loss and delay time of the three actor nodes (Actor1, Actor2, Actor3), we found that Actor2 has the best performance.
Fig. 9. Total delay time and packet loss (Sensor nodes = 5, Simulation time = 25 s).
Fig. 10. Total delay time and packet loss (Sensor nodes = 10, Simulation time = 25 s).
6 Conclusions and Future Work
In this paper, we proposed an integrated system for actor node selection in WSANs considering FL and ns-3. We evaluated the performance of the proposed system by computer simulations.

• From the simulation results of the FL-based system, we conclude that when DEA, NSA and TAT increase, the ASD decreases.
• From the simulation results of ns-3, we conclude that when the task accomplishment time (5 s to 25 s) and the number of sensor nodes increase, the packet loss and delay time increase. However, when the number of sensor nodes was increased, some nodes were unable to communicate.
• Comparing the three actor nodes, Actor2 has the best performance.
In the future, we would like to carry out extensive simulations to evaluate the proposed system and compare its performance with other systems.
References 1. Elmazi, D., Cuka, M., Ikeda, M., Matsuo, K., Barolli, L.: Application of fuzzy logic for selection of actor nodes in WSANs implementation of two fuzzy-based systems and a testbed. Sensors 19(24), 5573 (2019). https://doi.org/10.3390/s19245573. 16 Pages 2. Inaba, T., Obukata, R., Sakamoto, S., Oda, T., Ikeda, M., Barolli, L.: Performance evaluation of a QoS-aware fuzzy-based CAC for LAN access. Int. J. Space-Based Situated Comput. 6(4), 228–238 (2016) 3. Kandel, A.: Fuzzy Expert Systems. CRC Press, Boca Raton (1991) 4. Klir, G., Folger, T.: Fuzzy Sets, Uncertainty, and Information. Prentice Hall, Englewood Cliffs (1988) 5. Liu, Y., Ozera, K., Matsuo, K., Ikeda, M., Barolli, L.: A fuzzy-based approach for improving peer coordination quality in MobilePeerDroid mobile system. In: International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS-2018), pp. 60–73. Springer (2018) 6. McNeill, F.M., Thro, E.: Fuzzy Logic: A Practical Approach. Academic Press, Cambridge (2014) 7. Munakata, T., Jani, Y.: Fuzzy systems: an overview. Commun. ACM 37(3), 69–77 (1994) 8. Procyk, T.J., Mamdani, E.H.: A linguistic self-organizing process controller. Automatica 15(1), 15–30 (1979) 9. Spaho, E., Kulla, E., Xhafa, F., Barolli, L.: P2P solutions to efficient mobile peer collaboration in MANETs. In: International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2012), pp. 379–383. IEEE (2012) 10. Terano, T., Asai, K., Sugeno, M.: Fuzzy Systems Theory and Its Applications. Academic Press Professional, Inc. (1992) 11. Zadeh, L.A., Kacprzyk, J.: Fuzzy Logic for the Management of Uncertainty. Wiley, Hoboken (1992) 12. Zimmermann, H.J.: Fuzzy Set Theory and Its Applications. Springer, Heidelberg (2011)
Design of an Intelligent Driving Support System for Detecting Distracted Driving

Masahiro Miwata1, Mitsuki Tsuneyoshi1, Yoshiki Tada1, Makoto Ikeda2(B), and Leonard Barolli2

1 Graduate School of Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-higashi, Higashi-ku, Fukuoka 811-0295, Japan {mgm21108,mgm21106,mgm20103}@bene.fit.ac.jp
2 Department of Information and Communication Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-higashi, Higashi-ku, Fukuoka 811-0295, Japan [email protected], [email protected]
Abstract. In this paper, we propose an Artificial Intelligence (AI)-based driving support system for detecting distracted driving and increasing driving safety. We classify the driver's hands and smartphones to detect the distracted status. We evaluate the proposed system by experiments. The experimental results show that the YOLOv5-based distracted driving detection method has a good performance.

Keywords: AI · Driving support · Distracted driving · YOLOv5

1 Introduction
Traditionally, traffic accidents caused by lack of attention, such as distracted driving, have been a problem around the world [2,4,16]. Some drivers use smartphones or In-Vehicle Infotainment (IVI) systems while driving to enjoy their time. It is important to embed an effective system into future cars to prevent distracted driving [7,8]. Recently, Artificial Intelligence (AI)-based advanced intelligent drive systems have attracted attention. For example, in the automobile industry, vehicle companies collaborating with NVIDIA are developing Advanced Driver Assistance Systems (ADAS) with edge computing. Further development of this technology to the next stage of ADAS requires low latency, three-dimensional high-resolution maps and cooperation with other vehicles. AI systems focus on the edge, where daily training is done in the cloud and real-time prediction is done at the edge. The application of real-time prediction is expected to solve problems that are complicated for humans in various fields [9,11,12,15]. There is also a competition community [1], which provides many datasets on the Internet. As a related work, a dataset of 10 different classes of distracted driving has been provided [14]. However, the dataset is outdated and is a problem for vehicles equipped with large
displays for electric vehicles and ADAS. On the other hand, using this dataset as a starting material can reduce the cost of preparation. In this paper, we present the design of an intelligent driving support system for detecting distracted driving. We classify hands holding a smartphone to predict distracted driving in an environment with an advanced infotainment display. The support system can prevent distracted driving in future automobiles. From this work, we provide insight into the classification performance for future problems of vehicles. The structure of the paper is as follows. In Sect. 2, we describe the related work. In Sect. 3, we describe the proposed detection system. In Sect. 4, we describe the evaluation results. Finally, conclusions and future work are given in Sect. 5.
2 Related Work
A Deep Neural Network (DNN) has a deep hierarchy that connects multiple internal layers for feature detection and representation learning. Representation learning is used to extract essential information from observation data in the real world. Conventional feature extraction relies on trial and error and manual engineering. In contrast, a DNN uses the pixel level of the image as an input value and acquires the most suitable features to identify it [5,6]. The Convolutional Neural Network (CNN) uses the backpropagation model like a conventional multi-layer perceptron. The authors of [13] investigated the effect of convolutional network depth on the accuracy of large-scale image recognition settings. The YOLO algorithm [10] was proposed by Joseph Redmon et al. Redmon stopped developing it due to concerns about military use and privacy issues. Then, Alexey Bochkovskiy took over the development and released YOLOv4 in April 2020 [3]. YOLOv5 was released by Glenn Jocher et al. in June 2020 and uses PyTorch as the machine learning library. The YOLO series uses an end-to-end, one-stage object detection network, so the detector is faster than two-stage detectors. The YOLO algorithm takes the whole image as input, divides the image into a grid and predicts the whole image directly. This effectively avoids background errors by making effective use of the environmental information to be detected.
3 Intelligent Driving Support System
3.1 Overview of the Proposed System
The structure of our proposed system is shown in Fig. 1. The proposed system consists of an intelligent driving support system used in the vehicle. The AI-based application of the proposed system predicts the objects and alerts the driver. The edge component runs on a Jetson Xavier NX. The classification features are computed by YOLOv5.
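For illustration, the snippet below shows the standard torch.hub entry point for running a YOLOv5 model on a single camera frame; it is not the authors' code, and the image path, confidence threshold, class names and alert rule are placeholders (a custom model trained on the in-vehicle dataset would normally be loaded from its own weights file).

```python
import torch

# Load a YOLOv5 medium model. For the proposed system this would be the custom
# model trained on the in-vehicle dataset, e.g.:
#   model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model = torch.hub.load('ultralytics/yolov5', 'yolov5m', pretrained=True)
model.conf = 0.4  # confidence threshold for reported detections (assumed value)

# Run detection on one dashboard-camera frame (hypothetical file name).
results = model('frame_0001.jpg')

# Detections as a DataFrame: xmin, ymin, xmax, ymax, confidence, class, name.
detections = results.pandas().xyxy[0]
print(detections)

# A simple (assumed) alert rule: warn when a hand and a smartphone appear together.
# The class names depend on the labels used in the custom training dataset.
names = set(detections['name'])
if {'hand', 'smartphone'} <= names:
    print('Possible distracted driving: hand operating a smartphone.')
```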
Fig. 1. Model of the proposed system.
3.2 Detecting Distracted Driving
The driver's position during driving varies, and both hands are not always on the steering wheel. Recent vehicles have enhanced In-Vehicle Infotainment (IVI), including navigation, location-based services, voice communication, Internet access, multimedia playback, and search functions for news, e-mail, and more. Therefore, drivers are able to drive with a lot of information available. However, drivers can sometimes be expected to take the following actions:

• Operate the IVI with one hand
• Operate the IVI with voice
• Look at the AR navigation
• Operate a cell phone
• Look aside
• Talk to a passenger
• Hold a bottle to drink a soft drink or water
• Rub their eyes or touch their glasses or sunglasses
• Fall asleep or lose consciousness
We show examples of mis-detection when the original dataset is not used in Fig. 2. In these examples, the multi-monitor, multi-function display, and window switches are mis-detected as a smartphone. The analog tachometer is mis-detected as a clock. This paper focuses on how to detect distracted driving as well as how to reduce these false detections.
Fig. 2. Mis-detected examples: (a) Case #1, (b) Case #2, (c) Case #3, (d) Case #4.
4 Evaluation Results
4.1 Evaluation Settings
In this work, we took photos inside a vehicle to collect normal and distracted driving images for preparing an original dataset. The dataset contains 501 images. Our network model is based on the YOLOv5m-P5 model. To create a training model, we used a maximum of 300 epochs. The image size is 960 × 540. For evaluation, we use the precision, recall and mAP parameters. Precision indicates the fraction of relevant instances among the retrieved instances. Recall indicates the fraction of relevant instances that were retrieved. mAP indicates the average of the Average Precision (AP) over all categories. AP is the average accuracy of a certain category.
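For reference, the sketch below shows how precision, recall and a per-class AP can be computed from detection counts; the counts and the toy precision-recall curve are made-up values, not results from this paper.

```python
import numpy as np

# Hypothetical detection counts at a single confidence threshold.
tp, fp, fn = 90, 2, 5
precision = tp / (tp + fp)   # fraction of predicted boxes that are correct
recall = tp / (tp + fn)      # fraction of ground-truth objects that were found

# AP for one class: area under a (toy) precision-recall curve, summed stepwise.
# YOLOv5 computes this per class and averages over classes to obtain mAP,
# both at IoU 0.5 and averaged over IoU 0.5 to 0.95.
rec = np.array([0.0, 0.2, 0.5, 0.8, 0.95])
prec = np.array([1.0, 0.99, 0.98, 0.97, 0.90])
ap = float(np.sum((rec[1:] - rec[:-1]) * prec[1:]))

print(f"precision={precision:.3f}  recall={recall:.3f}  AP={ap:.3f}")
```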
4.2 Results
The results of distracted driving detection for 300 epochs are shown in Fig. 3. The detection results for the following four cases are described below:
Fig. 3. Detection results of the test dataset: (a) Case #A, (b) Case #B, (c) Case #C, (d) Case #D.
• Case #A: Hold and operate the smartphone with the left hand
• Case #B: Hold and operate the smartphone with the right hand
• Case #C: Put the smartphone on the lap and operate it with the left hand
• Case #D: Put the smartphone on the lap and handle the steering wheel with both hands
We have confirmed good results in all cases. Our model determined distracted driving when the driver was operating a smartphone with their hands. Moreover, when the navigation monitor and multi-function display were visible in the image, there was no false recognition on this dataset. The experimental results for precision, recall and mAP are shown in Fig. 4. The evaluation results show that each metric increases with the increasing number of epochs. The precision and recall reach 0.99 after 125 epochs. We observed that the variance of mAP decreases with the increasing number of epochs.
Fig. 4. Evaluation results over 300 epochs: (a) Precision, (b) Recall, (c) mAP at IoU threshold 0.5, (d) mAP over IoU 0.5 to 0.95.
5 Conclusions
In this paper, we proposed the design of an intelligent driving support system for detecting distracted driving. We classified hands holding a smartphone to predict distracted driving in an environment with an advanced infotainment display. From the evaluation results, we found that the YOLOv5-based distracted driving detection method has a good performance. In future work, we will consider more distracted actions to improve the flexibility of our proposed system.
References 1. Kaggle: Data science community. https://www.kaggle.com/ 2. Bergasa, L.M., Almeria, D., Almazan, J., Yebes, J.J., Arroyo, R.: DriveSafe: an app for alerting inattentive drivers and scoring driving behaviors. In: 2014 Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 240–245 (2014) 3. Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: YOLOv4: optimal speed and accuracy of object detection. Computer Vision and Pattern Recognition (cs.CV), April 2020. https://arxiv.org/abs/2004.10934 4. Ersal, T., Fuller, H.J.A., Tsimhoni, O., Stein, J.L., Fathy, H.K.: Model-based analysis and classification of driver distraction under secondary tasks. IEEE Trans. Intell. Transp. Syst. 11(3), 692–701 (2010) 5. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006) 6. Le, Q.V.: Building high-level features using large scale unsupervised learning. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2013 (ICASSP-2013), pp. 8595–8598, May 2013 7. Liu, T., Yang, Y., Huang, G.B., Yeo, Y.K., Lin, Z.: Driver distraction detection using semi-supervised machine learning. IEEE Trans. Intell. Transp. Syst. 17(4), 1108–1120 (2016) 8. McCall, J.C., Trivedi, M.M.: Driver behavior and situation aware brake assistance for intelligent vehicles. Proc. IEEE 95(2), 374–387 (2007) 9. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015) 10. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2016), pp. 779–788, June 2016 11. Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016) 12. Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017) 13. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR-2015), May 2015 14. State Farm: Dataset of state farm distracted driver detection (2016). https://www. kaggle.com/c/state-farm-distracted-driver-detection/ 15. Vicente, F., Huang, Z., Xiong, X., la Torre, F.D., Zhang, W., Levi, D.: Driver gaze tracking and eyes off the road detection system. IEEE Trans. Intell. Transp. Syst. 16(4), 2014–2027 (2015) 16. Wang, Y.K., Jung, T.P., Lin, C.T.: EEG-based attention tracking during distracted driving. IEEE Trans. Neural Syst. Rehabil. Eng. 23(6), 1085–1094 (2015)
Detection of Non-Technical Losses Using MLP-GRU Based Neural Network to Secure Smart Grids

Benish Kabir1, Pamir1, Ashraf Ullah1, Shoaib Munawar2, Muhammad Asif1, and Nadeem Javaid1(B)

1 COMSATS University Islamabad, Islamabad 44000, Pakistan
2 International Islamic University Islamabad, Islamabad 44000, Pakistan
Abstract. In this paper, a data driven solution is proposed to detect Non-Technical Losses (NTLs) in smart grids. In the real world, the number of theft samples is small as compared to the benign samples, which leads to a data imbalance issue. To resolve the issue, diverse theft attacks are applied on the benign samples to generate synthetic theft samples for data balancing and to mimic real-world theft patterns. Furthermore, several non-malicious factors influence the users' energy usage patterns, such as consumers' behavior during weekends, seasonal change and family structure, etc. These factors adversely affect the model's performance, resulting in data misclassification. So, non-malicious factors along with smart meters' data need to be considered to enhance the theft detection accuracy. Keeping this in view, a hybrid Multi-Layer Perceptron and Gated Recurrent Unit (MLP-GRU) based Deep Neural Network (DNN) is proposed to detect electricity theft. The MLP model takes auxiliary data such as geographical information as input, while the dataset of smart meters is provided as an input to the GRU model. Due to the improved generalization capability of MLP with reduced overfitting and the effective gated configuration of the multi-layered GRU, the proposed model proves to be an ideal solution in terms of prediction accuracy and computational time. Furthermore, the proposed model is compared with the existing MLP-LSTM model and simulations are performed. The results show that MLP-GRU achieves scores of 0.87 and 0.89 for the Area under the Receiver Operating Characteristic Curve (ROC-AUC) and the Area under the Precision-Recall Curve (PR-AUC), respectively, as compared to 0.72 and 0.47 for MLP-LSTM.
1 Introduction
The emergence of Advanced Metering Infrastructure (AMI) is one of the core innovations of smart grids. It helps the power utilities to alleviate the possibility of energy theft through its tracking capability and fine-grained measurements [1]. However, using the smart metering system also brings a risk of electricity theft, which leads to the loss of electricity and is one of the most apparent problems that negatively affects the performance of power grids. Electricity
losses are broadly classified into two categories: Non-Technical Losses (NTLs) and Technical Losses (TLs) [2]. Transformer and transmission line faults caused by internal power system components are the most common causes of TLs. The NTLs, on the other hand, can be calculated as the difference between the total loss and the TLs. Due to these NTLs, Pakistan is losing 0.89 billion rupees per year, and an annual loss of 4.8 billion rupees is faced in India [3]. One of the major NTLs is the stealing of electricity, which normally involves manipulating the meter reading, bypassing the electrical meter, etc. Electricity theft has an adverse effect on the safety and quality of the power supply. Different users exhibit different Electricity Consumption (EC) behavior, so it is a difficult task to recognize NTL patterns among all regular patterns of EC. Various methods are used to identify and address the NTLs. These methods are divided into three fundamental categories: data-driven methods, network based methods and hybrid methods. In recent years, data-driven approaches have gained the attention of academia and researchers for performing Electricity Theft Detection (ETD). Data-driven approaches consist of different deep learning and machine learning based solutions [3]. These solutions are used to analyze and detect irregular patterns of consumers' electricity consumption. Deep learning based methods for the detection of electricity theft are used in [4,9]. The authors present a study of various deep learning models, including Long Short-Term Memory (LSTM), Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNN), Gated Recurrent Unit (GRU), etc. However, these models have poor generalization due to inappropriate tuning of hyperparameters. Despite recent advancements in deep learning and its growing success, relatively little work has been done in the literature on the class imbalance issue. The authors in [4] used an LSTM-MLP model for data classification. However, the imbalanced dataset issue is not addressed, which leads to a poor Area under the Precision-Recall Curve (PR-AUC) score of 54.4%. Furthermore, ensemble models fail to detect diverse theft attacks due to the imbalanced nature of the dataset [5,6], which leads to a high False Positive Rate (FPR) while detecting different cases of theft attacks. Based on the analysis of the schemes used in the literature, a hybrid MLP-GRU based Deep Neural Network (DNN) is proposed in this paper to detect electricity theft using real smart meters' data along with auxiliary information. The rest of the article is organized as follows: Sect. 2 provides the literature on the detection of energy theft in smart grids. The proposed technique is discussed in Sect. 3, while Sect. 4 presents the performance evaluation metrics. Simulation results are discussed in Sect. 5 and the paper is concluded in Sect. 6.
1.1 List of Contributions
The contributions of this study are as follows:

• Due to the availability of limited electricity theft samples, a data augmentation approach is exploited to generate fake (theft) samples.
• Six theft attacks are used to generate synthesized theft patterns. To balance the number of synthetic samples generated and to remove the data imbalance problem, the Adaptive Synthetic (ADASYN) sampling approach is used.
• A hybrid model, known as MLP-GRU, is used that integrates both auxiliary and smart meters’ data for NTL detection. The proposed model classifies and detects electricity theft efficiently as compared to existing models.
2 Literature Review
The current hybrid-oriented NTL detection solutions are based upon machine learning and deep learning approaches. A complex anomaly detection task is the detection of NTL. However, it is not accurate to rely on outlier detection methods (e.g., k-means clustering and local outlier factor) alone [1]. Traditional approaches cannot work with sequential (EC history) and non-sequential (auxiliary information) data [1]. Similarly, recent approaches such as CNN and MLP cannot work with sequential data. In [4], the authors use MLP combined with LSTM models for NTL detection. However, the imbalanced dataset issue is not resolved. In existing supervised learning algorithms, the usage of SVM and Logistic Regression (LR) has become an active area of research in ETD. However, they require manual feature extraction that relies on expert knowledge and does not perform data preprocessing [2,3,5,28,29]. In [3], the authors propose a wide and deep model to analyze electricity theft data. However, the model cannot predict a descent in the EC that happened before the period of analysis. Whereas, in [5], the authors propose a Machine Learning (ML) algorithm based on a boosting technique, termed as Gradient Boosting Theft Detector (GBTD), which performs efficient feature engineering based preprocessing to enhance theft detection performance. Traditionally, ML techniques are widely used to analyze the irregular electricity consumption behavior of users to detect electricity theft [6]. Though, most of these approaches have poor accuracy due to a lack of generalization. Furthermore, the high dimensional data degrades the performance of a neural network with shallow architecture [7]. So, the authors in [6] introduce a boosting technique, known as XGBoost, as a supervised learning method for the classification of malicious users. However, the proposed method has high FPR due to the extremely imbalanced dataset and on-site inspections, which are tedious and time-consuming. Traditional classification techniques have some issues such as imbalanced dataset problem, high FPR due to non-malicious factors, and how to detect Zero-day attacks that cannot be obtained from historic data [8]. A Consumption Pattern-based Electricity Theft Detection (CPBETD) algorithm is proposed for the detection of diverse electricity theft attacks against AMI. However, due to the SVM’s misclassification rate, high FPR is reported [8]. In the literature, hybrid deep learning techniques are mostly used for ETD, in which CNN, LSTM, and Random Forest (RF) models are of vital importance [9,10,26,26]. Moreover, raw datasets are used as inputs for training and testing, which degrades the models’ classification performance. Moreover, we have less number of theft samples as compared to benign samples in the real world,
which causes data imbalance problem. Balancing the dataset using Synthetic Minority Oversampling Technique (SMOTE) can lead to overfitting due to the generation of synthetic samples. Furthermore, during back-propagation in the CNN network, its generalization performance degrades due to overtraining in the softmax classifier layer. The existing supervised learning based data analytics methods require labeled data for training as well as additional information is needed for the detection of energy thefts [12,19]. Although, conventional classification based techniques are used for NTL detection in Power Distribution Companies (PDCs), they have poor detection rate as well as high FPR resulting in higher inspection cost, which is a time consuming and tedious task as well. In article [13], the authors propose an ensemble bagged tree classification algorithm that uses the dataset of Multan Electric Power Company (MEPCO) to detect electricity theft. However, the suggested method requires a significant amount of time for training the model. Conventional data driven techniques face issues when classifying daily and weekly energy consumption data. Thus, if these techniques are extended to hourly or more granular electricity usage data, their accuracy will be minimum. Since, they fail to express the trend of intraday electricity usage. In [15], Text Convolutional Neural Network (Text-CNN) is proposed to classify the twodimensional time-series data. However, the proposed models’ accuracy degrades. Numerous data driven approaches focus on boosting techniques and ignore bagging methods, i.e., RF and Extra Trees (ET) as ensemble learning. In [17], a thorough analysis is performed on an ensemble ML classifier based on bagging and boosting. Unsupervised learning approaches have gained a lot of coverage of detecting electricity theft. However, on large data, these methods have a lack of generalization. In [18], the authors use a Stacked Sparse Denoising Auto-Encoder (SSDAE) for extracting abstract features of large data in an unsupervised manner. However, auto-encoders consume more processing time due to excessive hyperparameters’ tuning. Furthermore, in [20], the authors propose an unsupervised learning based anomaly pattern detection approach that requires only normal users’ consumption data for the model’s training to detect electricity theft. However, patterns that are detected as outliers by a classifier may be patterns of more energy usage during holidays and weekdays. Likewise, in [22], the authors perform a detailed analysis of three ML classifiers such as SVM, RF, and k-Nearest Neighbors (KNN), using a dataset of the electric supply company of Pakistan to predict the existence of NTL. However, there is a lack of reliable performance metrics for model evaluation.
3 Proposed System Model
The proposed solution for NTL detection is shown in Fig. 1. The proposed hybrid deep learning based electricity theft detector has two phases: training and testing. These two phases typically consist of five main steps: (1) The data
preprocessing is performed at the first stage of the training phase. In the data preprocessing, a simple imputer method is used to replace missing values in the dataset. Afterwards, a min-max operation is performed for data normalization using the scaler method. After completing the data preprocessing, normal users' samples are obtained. (2) The normalized and cleaned data is then passed to the next step, in which data augmentation is performed. Fraudulent users' samples are generated by altering honest samples according to existing theft attacks [5]. (3) The ADASYN method is applied on the benign samples. (4) The balanced data received from the previous stage is then passed to the next phase for classification. The balanced smart meters' data and the auxiliary data are passed to the GRU module and the MLP module, respectively, as inputs for prediction. (5) At the last stage, the results are evaluated by using effective performance measures. Numerous performance measures are used for comparative analysis,

Table 1. Mapping between identified limitations and proposed solutions
Limitations Identified                                  Solutions Proposed                              Validations Done
L1: Imbalanced dataset problem                          S1: Apply six theft attacks on benign samples   V1: Comparison with oversampling techniques
L2: Misclassification due to non-malicious factors      S2: Incorporate auxiliary data to reduce high FPR   V2: Performance comparison with existing models
Fig. 1. MLP-GRU model architecture
such as Area under the Receiver Operating Characteristic Curve (ROC-AUC), F1-score, PR-AUC and accuracy, for validating the proposed model's efficiency. In the second phase, testing is performed on new samples to evaluate the trained model's performance and to determine whether a new sample belongs to the honest class or the malicious class.
3.1 Data Preprocessing
The actual electricity consumption data recorded by smart meters often contains missing values that may arise due to various reasons, including short circuits in transmission equipment, bad connection errors, etc., which degrade the performance of many ML models. Missing data in the dataset provokes the classifier to classify fraudulent customers incorrectly. Furthermore, when data is dispersed on a large scale, interpretation becomes complicated and execution time increases. In the proposed method, we exploit an interpolation method to recover the missing data [4]. The Simple Imputer is used as the interpolation method to impute the missing values using the mean, median or most frequent value. Afterwards, the data is normalized by scaling the inconsistent values to a common scale between 0 and 1 for better prediction.
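A minimal sketch of this preprocessing step with scikit-learn is given below; it assumes the consumption records are held in a pandas DataFrame, uses mean imputation as one of the mentioned strategies, and uses MinMaxScaler because scaling to the common 0-1 range corresponds to min-max normalization.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

def preprocess(consumption: pd.DataFrame) -> pd.DataFrame:
    """Fill missing readings and scale every feature to [0, 1]."""
    # Replace NaN readings with the column mean (median or most_frequent also work).
    imputed = SimpleImputer(strategy='mean').fit_transform(consumption)
    # Min-max normalization to a common 0-1 scale for better training behaviour.
    scaled = MinMaxScaler().fit_transform(imputed)
    return pd.DataFrame(scaled, columns=consumption.columns, index=consumption.index)

# Tiny toy table of daily readings (values are made up).
df = pd.DataFrame({'day1': [1.2, None, 0.8], 'day2': [2.0, 1.5, None]})
print(preprocess(df))
```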
3.2 Data Augmentation Using Six Theft Attacks
The number of malicious samples is substantially smaller than the number of normal samples in the real world. In this scenario, if we train ML and deep learning models on the imbalanced data, the models will be biased towards the majority class and, in some cases, will completely disregard the minority class, which leads to performance degradation. This data mismatch is a big issue in ETD that needs to be addressed. Numerous resampling techniques are used in the literature to tackle this problem [1,16,19]. However, the undersampling techniques lead to the loss of important information. On the other hand, the oversampling techniques duplicate the minority class samples, which is prone to overfitting. In consideration of the strong disparity of the massive energy consumption datasets and the drawbacks of existing approaches, in the proposed work we generate synthetic theft samples by modifying honest samples. Thus, we exploit the existing six theft attacks to generate different malignant patterns from normal ones in order to train ML models with diverse types of theft patterns [5]. Generating different malicious theft patterns is important because it introduces variability into the dataset. Data augmentation also helps to analyze the diversity in the consumption behavior of consumers as well as to reduce overfitting through synthetic data generation. After generating the malicious samples, the minority class (normal) is oversampled to balance the malicious and non-malicious data points using ADASYN, which is a variant of SMOTE.
3.2.1 Six Theft Attacks
To generate theft samples, we use the existing theft cases of diverse types of attacks to alter the smart meters' data [5]. If the consumer's actual usage at time t is denoted by xt, where x = [x1, x2, ..., x365], then the following theft cases are used for modifying the real usage patterns:

Theft attack (a1): xt = xt * r, r = random(0.1, 0.9);
Theft attack (a2): xt = xt * rt, rt = random(0.1, 1.0);
Theft attack (a3): xt = xt * rt, rt = random[0, 1];
Theft attack (a4): xt = avg(x) * rt, rt = random(0.1, 1.0);
Theft attack (a5): xt = avg(x);
Theft attack (a6): xt = x(T-t).
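These attack definitions translate directly into code. The sketch below applies them to benign consumption profiles and then balances the labelled set with ADASYN from imbalanced-learn; the random ranges and transformations follow the formulas above, while the toy data, array sizes and variable names are illustrative.

```python
import numpy as np
from imblearn.over_sampling import ADASYN

rng = np.random.default_rng(0)

def theft_attacks(x: np.ndarray) -> list:
    """Return the six synthetic theft variants of one benign yearly profile x."""
    t = len(x)
    a1 = x * rng.uniform(0.1, 0.9)                              # one factor for the whole series
    a2 = x * rng.uniform(0.1, 1.0, size=t)                      # a different factor per reading
    a3 = x * rng.integers(0, 2, size=t)                         # randomly report the reading or zero
    a4 = np.full(t, x.mean()) * rng.uniform(0.1, 1.0, size=t)   # under-report around the mean
    a5 = np.full(t, x.mean())                                   # flat profile at the mean
    a6 = x[::-1]                                                # reverse the order of the readings
    return [a1, a2, a3, a4, a5, a6]

# Labelled set: benign profiles (label 0) plus six synthetic theft variants per profile (label 1).
benign = rng.uniform(0.5, 3.0, size=(200, 365))
theft = np.vstack([variant for x in benign for variant in theft_attacks(x)])
X = np.vstack([benign, theft])
y = np.hstack([np.zeros(len(benign)), np.ones(len(theft))])

# ADASYN oversamples the minority class (here the normal users) until the classes are balanced.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, y)
print(X_bal.shape, np.bincount(y_bal.astype(int)))
```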
Fig. 2. Analysis of different theft attacks: (a) theft attacks 1, 2 and 5; (b) theft attacks 3, 4 and 6.
Theft attack 1 generates the malicious consumption patterns by multiplying the benign class of electricity consumption with the randomly generated values between 0.1 and 0.9. Theft attack 2 introduces a similar theft case scenario in which meter readings of normal consumers are multiplied by different random numbers lying between 0.1 and 1.0 that shows a discontinuity in manipulated values and tracking of theft. In theft attack 3, the normal samples are multiplied by 1 at a given time interval t, and at t+1, the samples are multiplied by 0. This implies that the consumers either send the actual readings at a given random timestamp or merely send zero energy usage at a subsequent time. Moreover, in the theft attack 4 scenario, an average value of the total consumption is multiplied by a random state between (0.1, 1.0) to under-report the consumed energy. Whereas, theft attack 5 takes the mean value of the total energy consumption by reporting a consistent consumption throughout a day. Theft attack 6 occurs when the malicious users reverse or shift the order of readings from on-peak hour to off-peak hour [6]. Figure 2 (a) and (b) depict a daily energy consumption trend as well as six different types of malicious attacks.
3.2.2 Classification and Prediction with MLP and GRU
In the proposed work, a hybrid neural network of GRU and MLP is introduced. The proposed GRU-MLP network uses electricity consumption data as input. The proposed methodology is inspired by the work done in [4] for detecting electricity theft, which developed a hybrid neural network classifier known as LSTM-MLP. The preprocessed energy consumption data of the smart meters is fed into the GRU module with 100 neurons. The GRU layer has considerably more neurons than the MLP module. With relatively few cells, the GRU layer generalizes the embedding at a lower computational cost. The auxiliary data is passed to the MLP module, which has 20 neurons, as this data has low dimensional features. Before the data is sent to the final dense layer, it is normalized using the batch normalization technique. The final layer has a single neuron with a sigmoid activation function.
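One possible way to wire the described two-branch network together in Keras is sketched below; the paper specifies the 100-neuron GRU, the 20-neuron MLP branch, batch normalization and the single sigmoid output, while the input shapes, the post-merge dense layer and the optimizer settings are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Sequential branch: daily smart-meter readings, e.g. 365 time steps with 1 feature (assumed shape).
seq_in = layers.Input(shape=(365, 1), name='smart_meter_sequence')
g = layers.GRU(100, name='gru_branch')(seq_in)            # 100 units as stated in the paper

# Non-sequential branch: auxiliary features (contracted power, residents, area, ...).
aux_in = layers.Input(shape=(10,), name='auxiliary_features')   # 10 features is an assumption
m = layers.Dense(20, activation='relu', name='mlp_branch')(aux_in)  # 20 neurons as stated

# Merge the two embeddings, normalize, and classify honest (0) vs theft (1).
x = layers.concatenate([g, m])
x = layers.BatchNormalization()(x)
x = layers.Dense(32, activation='relu')(x)                # assumed intermediate layer
out = layers.Dense(1, activation='sigmoid', name='theft_probability')(x)

model = Model(inputs=[seq_in, aux_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC(name='roc_auc')])
model.summary()

# Training would then look like:
# model.fit([X_seq_train, X_aux_train], y_train, epochs=50, batch_size=64,
#           validation_split=0.2)
```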
4 Performance Evaluation Metrics
In this section, a detailed analysis is performed to compare the performance of the proposed MLP-GRU network with the baseline MLP-LSTM model. The performance metrics used to validate the performance of the above schemes are accuracy, ROC-AUC, F1-score and PR-AUC [27]. These are derived from the confusion matrix parameters, which are True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN). They reflect the number of consumers that are correctly classified as fair consumers, incorrectly classified as normal, correctly classified as fraudulent and incorrectly classified as fraudulent users, respectively. Accuracy is one of the most commonly used metrics and indicates the percentage of correct predictions by the model, as given in Eq. (1). The F1-score is another metric that indicates the balance between precision and recall, as defined in Eq. (2).

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

F1-score = 2 * (Precision * Recall) / (Precision + Recall)    (2)
The primary objective of ETD is to increase the fraud Detection Rate (DR), or True Positive Rate (TPR), while keeping the FPR low [28]. The ROC-AUC is an appropriate measure for binary classification to detect NTLs. It is constructed by plotting the TPR, also known as Recall, against the FPR while changing the decision threshold. The score varies from 0 to 1, and it is a reliable measure in the case of a class imbalance problem. Though TPR and FPR are useful indicators for measuring a model's efficiency for NTL detection, they do not take into account the model's precision. Hence, to evaluate the precision of the model, PR-AUC is a useful metric that is also suitable for imbalanced datasets. Therefore, we use PR-AUC, which takes into account the classifier's precision and reflects the expense of on-site utility inspections.
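All of these metrics are available in scikit-learn; a short sketch follows, where the labels and predicted probabilities are toy values rather than results from the paper.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, roc_auc_score,
                             average_precision_score)

# Toy ground-truth labels (1 = theft) and predicted theft probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.8, 0.65, 0.3, 0.9, 0.2, 0.55])
y_pred = (y_prob >= 0.5).astype(int)   # hard labels at a 0.5 decision threshold

print('Accuracy :', accuracy_score(y_true, y_pred))
print('F1-score :', f1_score(y_true, y_pred))
print('ROC-AUC  :', roc_auc_score(y_true, y_prob))             # TPR vs FPR over thresholds
print('PR-AUC   :', average_precision_score(y_true, y_prob))   # precision vs recall
```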
5 Simulation and Results
In this section, the simulation results are discussed. The proposed model is evaluated on the Pakistan Residential Electricity Consumption (PRECON) dataset.
5.1 Evaluation Results
The classification accuracy of the proposed hybrid model is shown in Fig. 3(a). The proposed MLP-GRU model outperforms the baseline MLP-LSTM model in terms of ROC-AUC, obtaining an AUC score of 0.87 on the test data. By considering non-sequential or auxiliary features, like the contracted power, permanent residents and property area, the NTL detection performance is significantly increased. In contrast, MLP-LSTM has poor performance, with an AUC score of 0.72 on the test dataset, due to the limited generalization ability of the LSTM model.
Fig. 3. Evaluation metrics of the MLP-GRU and MLP-LSTM models: (a) ROC-AUC curve, (b) PR-AUC curve.
Figure 3(a) demonstrates a comparative analysis of the developed hybrid model. FPR and TPR are plotted on the X-axis and Y-axis, respectively. TPR indicates the correctly classified samples out of the total available samples, whilst FPR is an expensive parameter. Initially, the proposed hybrid model classifies the binary distribution with high accuracy and low FPR. After reaching a TPR of 0.7, a slight change is observed with increasing FPR. However, the observed FPR of our model is much lower than that of MLP-LSTM. Afterwards, a gradual regain of our proposed model's ROC curve is observed. The increase in TPR at low FPR enhances the model's stability and accuracy. This reduction in FPR by our designed model reduces on-site inspections, which are expensive for the utility providers due to the deployment of experts for on-site inspections to verify the cause. Similarly, Fig. 3(b) shows the PR-AUC curve. The PR-AUC of the proposed model is significantly higher than that of MLP-LSTM, with a PR-AUC value of 0.89 on the test dataset. In Table 2, it is observed that the proposed MLP-GRU model outperforms the MLP-LSTM
model in terms of AUC, accuracy and F1-score. The reason is that the computational complexity of the proposed MLP-GRU model is low, because fewer gates are used in GRU as compared to the LSTM model. It also performs better on the small dataset used here.

Table 2. Comparison Results

Model      AUC score   Accuracy   F1 score
MLP-GRU    0.87        0.78       0.82
MLP-LSTM   0.72        0.51       0.62

6 Conclusion
In this paper, we propose a hybrid model, known as MLP-GRU, for detecting NTLs using smart meters' data and auxiliary data. We pass the sequential smart meters' data to the GRU module, while the auxiliary data is passed to the MLP module. Furthermore, since the electricity consumption data contains a limited number of malicious users, the classification model becomes biased towards the majority class. To address this problem, we generate synthetic theft patterns by applying six theft attacks on benign samples, both for data balancing and to incorporate diversity in the theft patterns. Afterwards, we evaluate the performance of our hybrid model against non-malicious changes in the electricity consumption patterns of users and against diverse theft attacks. Simulations are conducted using the PRECON dataset along with the theft attacks. The results show that our proposed hybrid model outperforms the baseline MLP-LSTM model. It is observed that by integrating auxiliary information along with smart meters' data, the model's performance is significantly improved, with PR-AUC and ROC-AUC scores of 0.89 and 0.87, respectively. However, the training accuracy of both the baseline MLP-LSTM and the proposed hybrid MLP-GRU network is still relatively low, which highlights the importance of an optimization algorithm for tuning the models' hyperparameters to achieve optimal results.
References 1. Buzau, M.M., Tejedor-Aguilera, J., Cruz-Romero, P., G´ omez-Exp´ osito, A.: Detection of non-technical losses using smart meter data and supervised learning. IEEE Trans. Smart Grid 10(3), 2661–2670 (2018) 2. Kong, X., Zhao, X., Liu, C., Li, Q., Dong, D., Li, Y.: Electricity theft detection in low-voltage stations based on similarity measure and DT-KSVM. Int. J. Electr. Power Energy Syst. 125 (2021). https://doi.org/10.1016/j.ijepes.2020.106544 3. Zheng, Z., Yang, Y., Niu, X., Dai, H.N., Zhou, Y.: Wide and deep convolutional neural networks for electricity-theft detection to secure smart grids. IEEE Trans. Industr. Inf. 14(4), 1606–1615 (2017)
4. Buzau, M.M., Tejedor-Aguilera, J., Cruz-Romero, P., G´ omez-Exp´ osito, A.: Hybrid deep neural networks for detection of non-technical losses in electricity smart meters. IEEE Trans. Power Syst. 35(2), 1254–1263 (2019) 5. Punmiya, R., Choe, S.: Energy theft detection using gradient boosting theft detector with feature engineering-based preprocessing. IEEE Trans. Smart Grid 10(2), 2326–2329 (2019) 6. Yan, Z., Wen, H.: Electricity theft detection base on extreme gradient boosting in AMI. IEEE Trans. Instrum. Meas. 70, 1–9 (2021) 7. Avila, N.F., Figueroa, G., Chu, C.C.: NTL detection in electric distribution systems using the maximal overlap discrete wavelet-packet transform and random undersampling boosting. IEEE Trans. Power Syst. 33(6), 7171–7180 (2018) 8. Jokar, P., Arianpoo, N., Leung, V.C.: Electricity theft detection in AMI using customers’ consumption patterns. IEEE Trans. Smart Grid 7(1), 216–226 (2015) 9. Li, S., Han, Y., Yao, X., Yingchen, S., Wang, J., Zhao, Q.: Electricity theft detection in power grids with deep learning and random forests. J. Electr. Comput. Eng. 2019 (2019). https://doi.org/10.1155/2019/4136874 10. Hasan, M., Toma, R.N., Nahid, A.A., Islam, M.M., Kim, J.M.: Electricity theft detection in smart grid systems: a CNN-LSTM based approach. Energies 12(17), 3310 (2019). https://doi.org/10.3390/en12173310 11. Fenza, G., Gallo, M., Loia, V.: Drift-aware methodology for anomaly detection in smart grid. IEEE Access 7, 9645–9657 (2019) 12. Zheng, K., Chen, Q., Wang, Y., Kang, C., Xia, Q.: A novel combined data-driven approach for electricity theft detection. IEEE Trans. Industr. Inf. 15(3), 1809–1819 (2018) 13. Saeed, M.S., Mustafa, M.W., Sheikh, U.U., Jumani, T.A., Mirjat, N.H.: Ensemble bagged tree based classification for reducing non-technical losses in multan electric power company of Pakistan. Electronics 8(8), 860 (2019). https://doi.org/10.3390/ electronics8080860 14. Li, W., Logenthiran, T., Phan, V.T., Woo, W.L.: A novel smart energy theft system (SETS) for IoT-based smart home. IEEE Internet Things J. 6(3), 5531–5539 (2019) 15. Feng, X., et al.: A novel electricity theft detection scheme based on text convolutional neural networks. Energies 13(21), 5758 (2020). https://doi.org/10.3390/ en13215758 16. Qu, Z., Li, H., Wang, Y., Zhang, J., Abu-Siada, A., Yao, Y.: Detection of electricity theft behavior based on improved synthetic minority oversampling technique and random forest classifier. Energies 13(8), 2039 (2020). https://doi.org/10.3390/ en13082039 17. Gunturi, S.K., Sarkar, D.: Ensemble machine learning models for the detection of energy theft. Electric Power Syst. Res. 106904 (2020). https://doi.org/10.1016/j. epsr.2020.106904 18. Huang, Y., Xu, Q.: Electricity theft detection based on stacked sparse denoising autoencoder. Int. J. Electr. Power Energy Syst. 125 (2021). https://doi.org/10. 1016/j.ijepes.2020.106448 19. Gong, X., Tang, B., Zhu, R., Liao, W., Song, L.: Data augmentation for electricity theft detection using conditional variational auto-encoder. Energies 13(17), 4291 (2020). https://doi.org/10.3390/en13174291 20. Park, C.H., Kim, T.: Energy theft detection in advanced metering infrastructure based on anomaly pattern detection. Energies 13(15), 3832 (2020). https://doi. org/10.3390/en13153832
21. Adil, M., Javaid, N., Qasim, U., Ullah, I., Shafiq, M., Choi, J.G.: LSTM and batbased RUSBoost approach for electricity theft detection. Appl. Sci. 10(12), 4378 (2020). https://doi.org/10.3390/app10124378 22. Maamar, A., Benahmed, K.: A hybrid model for anomalies detection in AMI system combining K-means clustering and deep neural network. Comput. Mater. Continua 60(1), 15–39 (2019) 23. Ding, N., Ma, H., Gao, H., Ma, Y., Tan, G.: Real-time anomaly detection based on long short-Term memory and Gaussian Mixture Model. Comput. Electr. Eng. 79 (2019) 24. Jindal, A., Dua, A., Kaur, K., Singh, M., Kumar, N., Mishra, S.: Decision tree and SVM-based data analytics for theft detection in smart grid. IEEE Trans. Industr. Inf. 12(3), 1005–1016 (2016) 25. Lu, X., Zhou, Y., Wang, Z., Yi, Y., Feng, L., Wang, F.: Knowledge embedded semi-supervised deep learning for detecting non-technical losses in the smart grid. Energies 12(18), 3452 (2019). https://doi.org/10.3390/en12183452 26. Arif, A., Javaid, N., Aldegheishem, A., Alrajeh, N.: Big Data Analytics for Identifying Electricity Theft using Machine Learning Approaches in Micro Grids for Smart Communities 27. Ghori, K.M., Imran, M., Nawaz, A., Abbasi, R.A., Ullah, A., Szathmary, L.: Performance analysis of machine learning classifiers for non-technical loss detection. J. Ambient Intell. Hum. Comput. 1–16 (2020) 28. Aslam, Z., Ahmed, F., Almogren, A., Shafiq, M., Zuair, M., Javaid, N.: An attention guided semi-supervised learning mechanism to detect electricity frauds in the distribution systems. IEEE Access 8, 221767–221782 (2020) 29. Aldegheishem, A., Anwar, M., Javaid, N., Alrajeh, N., Shafiq, M., Ahmed, H.: Towards sustainable energy efficiency with intelligent electricity theft detection in smart grids emphasising enhanced neural networks. IEEE Access 9, 25036–25061 (2021)
Synthetic Theft Attacks Implementation for Data Balancing and a Gated Recurrent Unit Based Electricity Theft Detection in Smart Grids

Pamir1, Ashraf Ullah1, Shoaib Munawar2, Muhammad Asif1, Benish Kabir1, and Nadeem Javaid1(B)

1 COMSATS University Islamabad, Islamabad 44000, Pakistan
2 International Islamic University Islamabad, Islamabad 44000, Pakistan
Abstract. In this paper, we present a novel approach for electricity theft detection (ETD). It comprises two modules: (1) the implementation of six theft attacks to deal with the data imbalance issue and (2) a gated recurrent unit (GRU) to tackle the model's poor performance, in terms of high false positive rate (FPR), caused by some non-malicious reasons (i.e., drift). In order to balance the data, the synthetic theft attacks are applied on the State Grid Corporation of China (SGCC) dataset. Subsequently, once the data is balanced, we pass it to the GRU for ETD. As the GRU model stores and memorizes long sequences of the balanced data, it helps to detect real thieves instead of flagging anomalies due to drift. The proposed methodology uses electricity consumption (EC) data from the SGCC dataset for solving the ETD problem. The performance of the adopted GRU with respect to ETD accuracy is compared with the existing support vector machine (SVM) using various performance metrics. Simulation results show that SVM achieves 64.33% accuracy, whereas the adopted GRU achieves 82.65% accuracy for efficient ETD.
1 Introduction

The detection of electricity theft is a major issue and one of the hot research topics in the literature. Generally, losses in electricity are divided into two types: technical losses (TLs) and non-technical losses (NTLs) [1]. TLs happen due to the resistance in electric transmission lines and in the distribution transformers. NTLs occur due to energy theft in different forms, such as meter bypassing, meter tampering, etc. Electricity theft results in many issues, such as public safety hazards, huge revenue loss, grid operational inefficiency, etc. The revenue loss from electricity theft costs around 100 million Canadian dollars yearly, as reported by the Canadian electric power utility, i.e., British Columbia Hydro and Power Authority [2,33]. Moreover, the monetary loss due to NTLs is reported to be $96 billion per annum globally [3]. Hence, there is an urgent requirement for an efficient approach for electricity theft detection (ETD). Currently, the ETD based approaches are divided into three categories: hardware or state based ETD, game theory based ETD, and classification based ETD. The hardware based theft detection approaches [4,5] use some hardware devices, like radio frequency identification tags and wireless sensory devices, to achieve maximum
ETD accuracy. However, these methods need extra costs, such as hardware devices' installation and maintenance costs. In the game theory based ETD techniques, the ETD problem is formulated as a game between the utility and the theft user [6,7]. These approaches do not need extra cost. However, they are not the optimal remedy to minimize electricity theft. Some machine learning (ML) and deep learning (DL) techniques are proposed in [2,3] and [8,13] to use the electricity consumption (EC) data recorded by smart meters (SMs) for ETD in smart grids (SGs). However, there are some problems in the existing methods which negatively affect the models' true positive rate (TPR) and false positive rate (FPR). One of the main limitations due to which the existing ML techniques show low performance in ETD is the imbalanced data problem. It means that the number of observations of honest users is not equal to the number of observations of theft users in the dataset. The honest consumers' data is easily accessible from the users' history, whereas the theft users' data is difficult to attain because electricity theft data is rarely collected in the real world. This problem causes the model to take biased decisions towards the majority class, i.e., normal consumers, which leads to a high FPR. Moreover, another issue that ML based classification techniques face is that some abnormal usage of electricity can happen due to non-malicious factors, such as changes in season, family members, electric appliances, etc. In this paper, six synthetic theft attacks are implemented for data balancing and a DL based classification technique is adopted for ETD. The adopted technique addresses pinpointing the real theft instead of considering the anomalies due to non-theft factors as theft consumers. The main contributions of this paper are stated as follows.

• In this study, we tackle the class imbalance problem by generating theft samples using the six theft attacks implementation.
• We use an efficient ETD model based on the gated recurrent unit (GRU). The GRU model classifies the usage patterns in which an anomaly exists due to non-malicious activities (i.e., drift) as the normal class.
• The efficiency of the adopted model is compared and evaluated with the existing model considering the accuracy, AUC, precision, recall, and F1-score metrics using the SGCC dataset.

The rest of the paper is organized as follows. Section 2 discusses the literature review. The problem statement is given in Sect. 3. Section 4 describes the proposed system model. Section 5 provides the simulation results and discussion, while Sect. 6 concludes the paper.
2 Related Work The existing works related to ETD are classified into four categories: hyperparameters tuning, class imbalance problem, privacy preservation and curse of dimensionality. The first category discusses the research papers that focus on hyperparameter tuning. In [1, 3–11, 11], the authors focused on ML and DL models’ hyperparameters tuning. In [1], authors targeted the high dimensionality issue and extracted the most effective features,
so that the generalization issue can be solved and better performance can be achieved with respect to FPR. The optimal settings of the proposed model's parameter values are also determined. Furthermore, the authors in [2] focused on detecting electricity theft by employing both 1-D and 2-D data for training and testing. The hyperparameter tuning is performed manually. The work in [3] is presented in order to correctly identify and detect the EC patterns that are caused by NTLs. The model generalization issue is also addressed in this paper. Furthermore, hyperparameter tuning and class imbalance issues are considered by the authors in [8, 9] and [10]. Similarly, in [8], the data imbalance and theft attack problems are addressed. In addition, the maintenance of customers' privacy is also addressed in this article. Similarly, the authors in [10] addressed the data imbalance problem and the classification of data samples that lie very close to the hyperplane of the support vector machine (SVM). Furthermore, the unavailability of theft data is resolved. In [11], the authors focused on the anomaly detection problem in the SMs' data of commercial and industrial customers. Furthermore, the huge volume of data and consumers' privacy preservation problems are also considered. In addition, hyperparameter tuning is performed using the grid search technique. The second category consists of the research papers that consider the class imbalance problem in SM data. In [12], the authors considered the class imbalance issue, as the model is biased towards the majority class (i.e., normal consumers). To solve the data imbalance issue, an improved SMOTE is proposed. In [14–18], the authors focused on the class imbalance issue. Moreover, the authors in [14–16] tackled the problem of the class imbalance ratio in the dataset using SMOTE. Furthermore, the authors in [16–18] focused on helping the Multan Electric Power Company (MEPCO) in Pakistan to identify electricity thieves with a high value of TPR and a low value of FPR. Moreover, in the existing literature, many methodologies are proposed for the detection of NTLs. The third category consists of the research articles that consider consumers' privacy preservation using the SM data. In [13, 19, 22], the authors focused on solving the customers' privacy preservation issue and the model overfitting problem. Moreover, in [19–21], consumers' privacy is maintained using a functional encryption method. Furthermore, the class imbalance issue is also addressed in this paper. The authors in [22, 23, 25] considered feature generation, creating features such as the maximum, minimum, mean, and standard deviation values from the existing features. Furthermore, an extreme gradient boosting (XGBoost) model is proposed in [22]. This model uses regularization and out-of-core computing for solving the model overfitting and memory complexity problems, respectively. In addition, the authors tackled the unavailability of labeled datasets as well as non-sequential information using the combined maximum information coefficient (MIC) and an unsupervised technique, clustering by fast search and find of density peaks (CFSFDP).
3 Motivation and Problem Statement
In [1], the authors proposed SSDAE for NTL detection in SGs. The main challenges they focused on are the high FPR due to non-malicious factors and the low generalization ability of classifiers due to the high dimensionality of the data. The consumption pattern based ETD (CPBETD) model is proposed in [8]. The data imbalance problem is resolved using synthetic attack dataset generation. The authors proposed a multiclass SVM based CPBETD model that utilizes transformer meters' as well as consumers' meters' data for the ETD solution. Hence, motivated by [1] and [8], we also started working on detecting anomalies due to non-theft activities (i.e., the drift concept) and on tackling the class imbalance problem. In [2] and [3], the authors proposed a wide and deep CNN and a hybrid LSTM-MLP model for ETD, respectively. However, the class imbalance issue is not tackled, which causes the model's bias towards the majority class and results in a high FPR. Moreover, the high FPR due to non-malicious factors is also ignored by the above referred papers.
4 Proposed System Model
The proposed system model is divided into three main parts: (1) preprocessing of the data, (2) theft attack implementation for data balancing, and (3) smart meter data analysis using the adopted GRU and the benchmark SVM techniques. All of these parts are discussed in detail in the subsequent subsections. The graphical representation of our proposed system model is given in Fig. 2.
4.1 Preprocessing of the Smart Meter's Data
The EC data recorded by the SM sometimes contains outliers or missing values due to different reasons, such as faulty meters, unreliable and untrustworthy dispatching of the EC data, storage problems, etc. [2]. We employ the simple imputer method to replace missing or empty values with the average of the previous and next EC readings [27]. Outliers affect the performance of a classifier and increase the FPR; therefore, they need to be mitigated. So, we employ the three-sigma rule for outlier handling. After dealing with the outliers and NaN values, we normalize the dataset, because DL models are sensitive to data diversity. We employ the min-max technique to scale the data. Normalization is done using Eq. 1.

f(x_{i,s}) = (x_{i,s} − min(X)) / (max(X) − min(X)),   (1)
where max(X) and min(X) are the maximum and minimum values of the vector X, respectively.
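A minimal sketch of this preprocessing pipeline is shown below. It assumes the EC records are held in a pandas DataFrame with one row per consumer and one column per day; the column layout, the 1035-day shape and the fallback to zero for entirely empty rows are illustrative assumptions, not the authors' exact implementation.

import numpy as np
import pandas as pd

def preprocess_ec(df: pd.DataFrame) -> pd.DataFrame:
    """Impute missing readings, cap outliers (three-sigma rule) and min-max scale."""
    # 1) Simple imputation: replace a missing reading with the average of the
    #    previous and next readings along the time axis (per consumer).
    filled = (df.ffill(axis=1) + df.bfill(axis=1)) / 2.0
    filled = filled.fillna(0.0)  # rows that are entirely empty fall back to zero

    # 2) Three-sigma rule: cap values that deviate more than 3 std from the mean.
    mu, sigma = filled.values.mean(), filled.values.std()
    capped = filled.clip(lower=mu - 3 * sigma, upper=mu + 3 * sigma)

    # 3) Min-max normalization, as in Eq. 1.
    x = capped.values
    scaled = (x - x.min()) / (x.max() - x.min())
    return pd.DataFrame(scaled, index=df.index, columns=df.columns)

# Example with random data standing in for the 1035-day SGCC readings.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.gamma(2.0, 5.0, size=(10, 1035)))
demo.iloc[0, 5] = np.nan            # a missing reading
print(preprocess_ec(demo).shape)    # (10, 1035)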
4.2 Theft Attacks Generation for Data Balancing
Theft attacks are applied in order to balance the SGCC dataset. A total of six theft attacks are introduced in [8], and a modified version of these attacks is given in [23]. We have chosen the latest, updated theft attacks to generate more practical theft patterns to balance the EC data. Let the real EC of a consumer on day t be x_t, where t ∈ [0, 1034]; in our case, the SGCC dataset has a total of 1035 days of EC data. The six theft attacks' equations are given below.
t1(x_t) = x_t ∗ random(0.1, 0.9),   (2)
t2(x_t) = x_t ∗ r_t, where r_t = random(0.1, 1),   (3)
t3(x_t) = x_t ∗ random[0, 1],   (4)
t4(x_t) = mean(X) ∗ random(0.1, 1),   (5)
t5(x_t) = mean(X),   (6)
t6(x_t) = x_{1034−t},   (7)
where X = {x_1, x_2, ..., x_1034}. We have applied these theft attacks on the honest consumers' data to balance the number of honest and theft consumers in the SGCC dataset. The dataset has an imbalanced nature: it contains 3615 theft and 38757 honest records out of 42372 consumers' records. The 3615 real theft records that are already available in the dataset are kept the same; the remaining theft data is generated by applying the theft attacks on the honest data from record 3615 to record 21182. The six attacks are implemented in the order of attack 1, attack 2, attack 3, attack 4, attack 5, and attack 6 on the honest instances of the considered dataset from records 3615–6534, 6543–9470, 9471–12398, 12399–15326, 15327–18254, and 18255–21182, respectively. The theft attack patterns are shown in Fig. 1; the theft attacks are applied on all 1035 days of data; however, in this figure, we present theft attack patterns for only 30 days as a sample. The remaining data, from records 21183–42366, are the honest consumers' EC data. Hence, the dataset is balanced and used for training the GRU model.
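The six attacks can be expressed directly as vectorized functions of a consumer's daily EC vector; the sketch below follows Eqs. 2–7. It is an illustrative implementation only: the random seed, the use of NumPy, and the interpretation of random[0, 1] in Eq. 4 as a per-day binary factor (as in the original attack definitions in the literature) are assumptions on our part.

import numpy as np

rng = np.random.default_rng(42)

def attack1(x):  # Eq. 2: scale the whole series by one factor in (0.1, 0.9)
    return x * rng.uniform(0.1, 0.9)

def attack2(x):  # Eq. 3: scale every day by its own factor in (0.1, 1)
    return x * rng.uniform(0.1, 1.0, size=x.shape)

def attack3(x):  # Eq. 4: randomly drop readings to zero (per-day factor in {0, 1})
    return x * rng.integers(0, 2, size=x.shape)

def attack4(x):  # Eq. 5: report a scaled fraction of the mean consumption
    return np.full_like(x, x.mean()) * rng.uniform(0.1, 1.0)

def attack5(x):  # Eq. 6: report the flat mean consumption
    return np.full_like(x, x.mean())

def attack6(x):  # Eq. 7: reverse the order of the readings
    return x[::-1].copy()

# Apply one attack to each honest record in a round-robin fashion (illustrative).
attacks = [attack1, attack2, attack3, attack4, attack5, attack6]
honest = rng.gamma(2.0, 5.0, size=(12, 1035))        # stand-in honest EC records
synthetic_theft = np.stack([attacks[i % 6](row) for i, row in enumerate(honest)])
print(synthetic_theft.shape)                          # (12, 1035)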
Fig. 1. Synthetic theft attack patterns
4.3 Smart Meters' Data Analysis Techniques
We have used the GRU model for ETD in SGs. Secondly, a benchmark model is needed in order to evaluate the adopted model's performance, so SVM is selected as the benchmark model. Both of these models are discussed in detail in the subsequent subsections.
4.3.1 Gated Recurrent Unit
GRU was introduced for the first time in 2014 [29]. It trains faster than its predecessors, the LSTM and the recurrent neural network (RNN). GRU is very commonly used in other domains; however, it is rarely used and investigated in the SG domain for ETD. Hence, there is still a big room for research in investigating GRU for solving the ETD problem in SGs. The performance of GRU is compared with a very popular benchmark model, i.e., SVM, and it is clearly shown in the simulations section that GRU outperforms the traditional benchmark SVM model on various performance parameters for ETD in SGs. GRU was introduced to solve the vanishing gradient problem that exists in RNN. GRU is very close to the LSTM in terms of architecture. However, it merges the input and forget gates of the LSTM into a single gate called the update gate. Moreover, the GRU also merges the hidden and cell states. A GRU comprises a cell that contains numerous operations: a reset gate, an update gate, and the current memory content. Using these gates, the GRU is capable of storing values in its memory for a specific time and using these stored values to pass information forward. The update gate is responsible for addressing the vanishing gradient issue, because in this step the model learns how much information to carry towards the subsequent stage. The reset gate decides how much of the previous historical information to forget. The third component, the current memory content, is responsible for carrying only the well-suited and relevant information. As GRU stores previous information in its memory, it is very suitable for analyzing previous historical information and taking decisions for the future. Due to this property, the GRU is capable of differentiating an anomaly due to non-malicious reasons from an anomaly due to malicious reasons. That is why we adopt GRU for theft classification.
4.3.2 Support Vector Machine
SVM is a popular ML technique that is used as a basic and benchmark classifier by many articles in the literature. In [8] and [30], the SVM is used as a benchmark model for comparison with the proposed models that solve the ETD problem. We have also chosen SVM as a benchmark model for performance comparison with the GRU. The SVM model draws a hyperplane that maximizes the margin between the theft and honest classes in order to more clearly classify theft and normal consumers. The radial basis function (RBF) kernel is used for non-linearly separable data in SVM. Finally, the hyperparameters of SVM, i.e., the γ and C parameters, are left at their default values. γ is the parameter of the non-linear SVM that decides the curvature of the decision boundary, whereas C controls the penalty on misclassification errors.
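For concreteness, a minimal sketch of the two classifiers is given below, using Keras for the GRU and scikit-learn for the RBF-kernel SVM. The layer sizes, epoch count and batch size are illustrative assumptions, not the exact settings used in this paper.

import numpy as np
from tensorflow import keras
from sklearn.svm import SVC

DAYS = 1035  # one EC reading per day in the SGCC data

def build_gru() -> keras.Model:
    """GRU-based binary classifier; each sample is a (DAYS, 1) consumption series."""
    model = keras.Sequential([
        keras.layers.Input(shape=(DAYS, 1)),
        keras.layers.GRU(64, return_sequences=False),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # P(theft)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Benchmark: RBF-kernel SVM with default gamma and C, as described above.
svm = SVC(kernel="rbf", probability=True)

# Usage with stand-in data (X: consumers x days, y: 0 = honest, 1 = theft).
rng = np.random.default_rng(0)
X = rng.random((200, DAYS)).astype("float32")
y = rng.integers(0, 2, size=200)

gru = build_gru()
gru.fit(X[..., np.newaxis], y, epochs=2, batch_size=32, verbose=0)
svm.fit(X, y)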
Fig. 2. Proposed system model
5 Simulation Results and Discussion
The simulation results are provided in this section. The description of the dataset as well as the performance metrics adopted in this paper are also given. The EC data of the SGCC dataset is used for validation of our selected model with respect to the different performance parameters. The data available in the SGCC dataset is imbalanced. More details about the dataset are given in Table 1. Since the original SGCC data is imbalanced, the F1-score, precision, and recall measures are quite suitable metrics for evaluating the models' performance on imbalanced data [30]. In such cases, accuracy is not a suitable performance measure [31]. However, in our scenario, the SGCC data is balanced using the implementation of the six synthetic theft attacks; therefore, accuracy is also considered along with the other performance measures for comparison and evaluation of the selected and benchmark models. The common formulas for the computation of these performance metrics are taken from [30] and [32]. Figure 3 depicts the performance comparison of the adopted GRU with the existing SVM model with respect to the different performance metrics. The accuracy, AUC, precision, recall, and F1-score for the SVM are 0.6433, 0.6423, 0.4678, 0.7162, and 0.5659, respectively, whereas the accuracy, AUC, precision, recall, and F1-score values for the GRU are 0.8265, 0.7552, 0.8355, 0.7176, and 0.7720, respectively.
Table 1. Dataset description

Dataset description                                     Values
Data acquisition interval                               2014–2016
Number of theft consumers before data balancing         3615
Number of non-theft consumers before data balancing     38752
Number of theft consumers after data balancing          21183
Number of non-theft consumers after data balancing      21184
Total consumers before data preprocessing               42372
Total consumers after data preprocessing                42367
Hence, it is proved that GRU outperforms the SVM in all of the performance evaluation metrics considered in this paper. SVM performs poorly because it cannot deal with large time series data, which leads to the overfitting problem. Conversely, the GRU can easily handle large time series data and mitigate the overfitting problem. That is why GRU outperforms the SVM with respect to all of the performance metrics.
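The reported metrics can be reproduced from the predicted labels and scores with scikit-learn; the snippet below is a generic sketch of that evaluation step, and the variable names (y_true, y_score) and the 0.5 decision threshold are assumptions rather than the authors' exact setup.

import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5) -> dict:
    """Compute the metrics used in this paper from theft probabilities."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "false_positives": int(fp),   # honest consumers flagged as theft
    }

# Example with random scores standing in for the GRU and SVM outputs.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.random(1000) * 0.5, 0, 1)
print(evaluate(y_true, y_score))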
Fig. 3. Performance comparison between SVM and GRU
The FPR is another very important performance measure for ML models; it captures the cases where honest consumers are classified as dishonest. If the FPR is high, the cost of on-site inspections is also high, and vice versa. There are many ways to decrease the FPR; however, in this research article, we consider only two ways of minimizing it. The first is to balance the imbalanced data, while the second is to correctly classify the anomalies that occur in EC data because of non-malicious activities. This paper adopts GRU because it has the property of analyzing
the long, historical relationship between EC patterns. Therefore, GRU automatically learns and identifies anomalies that occur in the data due to non-theft factors and classifies these anomalies as honest instead of theft. The numbers of false positives for SVM and GRU are 2242 and 693, respectively. It is clear from these numeric results that our adopted GRU produces far fewer false positives than SVM. The AUC metric is also an important measure for evaluating our model's ability to differentiate between normal and abnormal patterns. The AUC for the SVM and GRU is shown in Fig. 4. From the figure, the AUC of the adopted GRU is better than the AUC of SVM. A receiver operating characteristic (ROC) curve that lies on the diagonal line (red dotted line) has no discriminatory ability, whereas a ROC curve above the diagonal line has discriminatory ability to classify electricity theft; the ROC curve of the GRU lies farther above the diagonal than that of the SVM.
Fig. 4. ROC curve based on SVM and GRU
6 Conclusion
In this paper, we implemented six theft attacks for synthetic theft data generation in order to address the data imbalance issue. Subsequently, a DL classifier, i.e., GRU, is utilized for ETD in SGs. GRU is compared with the existing SVM classifier. It is clearly visible in the simulations section that GRU outperforms the SVM in terms of accuracy, AUC, precision, recall, and F1-score. Simulations are conducted on the real SGCC dataset, which comprises three years of data of 42372 consumers, from 2014 to 2016.
References 1. Huang, Y., Qifeng, X.: Electricity theft detection based on stacked sparse denoising autoencoder. Int. J. Electr. Power Energy Syst. 125 (2021)
2. Zheng, Z., Yang, Y., Niu, X., Dai, H.-N., Zhou, Y.: Wide and deep convolutional neural networks for electricity-theft detection to secure smart grids. IEEE Trans. Industr. Inf. 14(4), 1606–1615 (2017) 3. Buzau, M.-M., Tejedor-Aguilera, J., Cruz-Romero, P., G´omez-Exp´osito, A.: Hybrid deep neural networks for detection of non-technical losses in electricity smart meters. IEEE Trans. Power Syst. 35(2), 1254–1263 (2019) 4. Khoo, B., Ye, C.: Using RFID for anti-theft in a Chinese electrical supply company: a costbenefit analysis. In: 2011 Wireless Telecommunications Symposium (WTS), pp. 1–6. IEEE (2011) 5. McLaughlin, S., Holbert, B., Fawaz, A., Berthier, R., Zonouz, S.: A multi-sensor energy theft detection framework for advanced metering infrastructures. IEEE J. Sel. Areas Commun. 31(7), 1319–1330 (2013) 6. C´ardenas, A.A., Amin, S., Schwartz, G., Dong, R., Sastry, S.: A game theory model for electricity theft detection and privacy-aware control in AMI systems. In: 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1830–1837. IEEE (2012) 7. Amin, S., Schwartz, G.A., Tembine, H.: Incentives and security in electricity distribution networks. In: Grossklags, J., Walrand, J. (eds.) International Conference on Decision and Game Theory for Security, pp. 264–280. Springer, Heidelberg (2012) 8. Jokar, P., Nasim, A., Leung, V.C.M.: Electricity theft detection in AMI using customers’ consumption patterns. IEEE Trans. Smart Grid 7(1), 216–226 (2015) 9. Gunturi, S.K., Sarkar, D.: Ensemble machine learning models for the detection of energy theft. Electric Power Syst. Res. 192, 106904 (2021) 10. Kong, X., Zhao, X., Liu, C., Li, Q., Dong, D.L., Li, Y.: Electricity theft detection in lowvoltage stations based on similarity measure and DT-KSVM. Int. J. Electr. Power Energy Syst. 125 (2021) 11. Buzau, M.M., Javier, T.-A., Pedro, C.-R., Antonio, G.-E.: Detection of non-technical losses using smart meter data and supervised learning. IEEE Trans. Smart Grid 10(3), 2661–2670 (2018) 12. Qu, Z., Li, H., Wang, Y., Zhang, J., Abu-Siada, A., Yao, Y.: Detection of electricity theft behavior based on improved synthetic minority oversampling technique and random forest classifier. Energies 13(8), 2039 (2020) 13. Lu, X., Zhou, Yu., Wang, Z., Yi, Y., Feng, L., Wang, F.: Knowledge embedded semisupervised deep learning for detecting non-technical losses in the smart grid. Energies 12(18), 3452 (2019) 14. Avila, N.F., Gerardo, F., Chu, C.-C.: NTL detection in electric distribution systems using the maximal overlap discrete wavelet-packet transform and random undersampling boosting. IEEE Trans. Power Syst. 33(6), 7171–7180 (2018) 15. Hasan, M., Toma, R.N., Nahid, A.-A., Islam, M.M., Kim, J.-M.: Electricity theft detection in smart grid systems: a CNN-LSTM based approach. Energies 12(17), 3310 (2019) 16. Saeed, M.S., Mustafa, M.W., Sheikh, U.U., Jumani, T.A., Mirjat, N.H.: Ensemble bagged tree based classification for reducing non-technical losses in multan electric power company of Pakistan. Electronics 8(8), 860 (2019) 17. Wang, X., Yang, I., Ahn, S.-H.: Sample efficient home power anomaly detection in real time using semi-supervised learning. IEEE Access 7, 139712–139725 (2019) 18. Liu, H., Li, Z., Li, Y.: Noise reduction power stealing detection model based on self-balanced data set. Energies 13(7), 1763 (2020) 19. 
Ibrahem, M.I., Nabil, M., Fouda, M.M., Mahmoud, M.M., Alasmary, W., Alsolami, F.: Efficient privacy-preserving electricity theft detection with dynamic billing and load monitoring for AMI networks. IEEE Internet of Things J. 8(2), 1243–1258 (2020)
20. Yao, D., Wen, M., Liang, X., Zipeng, F., Zhang, K., Yang, B.: Energy theft detection with energy privacy preservation in the smart grid. IEEE Internet Things J. 6(5), 7659–7669 (2019) 21. Nabil, M., Ismail, M., Mahmoud, M.M., Alasmary, W., Serpedin, E.: PPETD: privacypreserving electricity theft detection scheme with load monitoring and billing for AMI networks. IEEE Access 7, 96334–96348 (2019) 22. Micheli, G., Soda, E., Vespucci, M.T., Gobbi, M., Bertani, A.: Big data analytics: an aid to detection of non-technical losses in power utilities. Comput. Manag. Sci. 16(1), 329–343 (2019) 23. Punmiya, R., Choe, S.: Energy theft detection using gradient boosting theft detector with feature engineering-based preprocessing. IEEE Trans. Smart Grid 10(2), 2326–2329 (2019) 24. Ghasemi, A.A., Gitizadeh, M.: Detection of illegal consumers using pattern classification approach combined with Levenberg-Marquardt method in smart grid. Int. J. Electr. Power Energy Syst. 99, 363–375 (2018) 25. Fenza, G., Gallo, M., Loia, V.: Drift-aware methodology for anomaly detection in smart grid. IEEE Access 7, 9645–9657 (2019) 26. Yan, Z., Wen, H.: Electricity theft detection base on extreme gradient boosting in AMI. IEEE Trans. Instrum. Meas. 70, 1–9 (2021) 27. Li, S., Han, Y., Yao, X., Yingchen, S., Wang, J., Zhao, Q.: Electricity theft detection in power grids with deep learning and random forests. J. Electr. Comput. Eng. 2019 (2019) 28. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997) 29. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014). www.arxiv.org. Accessed 17 April 2021 30. Adil, M., Javaid, N., Qasim, U., Ullah, I., Shafiq, M., Choi, J.-G.: LSTM and bat-based RUSBoost approach for electricity theft detection. Appl. Sci. 10(12), 4378 (2020) 31. www.machinelearningmastery.com. Accessed 17 Apr 2021 32. Gul, H., Javaid, N., Ullah, I., Qamar, A.M., Afzal, M.K., Joshi, G.P.: Detection of nontechnical losses using SOSTLink and bidirectional gated recurrent unit to secure smart meters. Appl. Sci. 10(9), 3151 (2020) 33. Aldegheishem, A., Anwar, M., Javaid, N., Alrajeh, N., Shafiq, M., Ahmed, H.: Towards sustainable energy efficiency with intelligent electricity theft detection in smart grids emphasising enhanced neural networks. IEEE Access 9, 25036–25061 (2021)
Blockchain Enabled Secure and Efficient Reputation Management for Vehicular Energy Network Abid Jamal, Muhammad Usman Gurmani, Saba Awan, Maimoona Bint E. Sajid, Sana Amjad, and Nadeem Javaid(B) COMSATS University Islamabad, Islamabad, Pakistan
Abstract. Blockchain (BC) based Vehicular Energy Network (VEN) enables secure and distributed trading between the vehicles. Furthermore, reputation management is a critical requirement for building trust in BC based VENs. However, the existing BC based reputation schemes are vulnerable to replay attacks due to insecure reputation verification. Moreover, a BC based VEN also requires a privacy preserving traceability mechanism to prevent false information dissemination and fraudulent transactions. Furthermore, a VEN also necessitates an efficient storage mechanism to reduce the storage overhead incurred by the BC ledger. To address these issues, this paper presents a BC based secure and efficient reputation management scheme for VENs. The proposed scheme provides a secure vehicles' reputation verification mechanism to prevent the replay attacks. Moreover, the proposed scheme uses Elliptic Curve Digital Signature Algorithm based pseudonym mechanism to enable conditional anonymity and vehicles' traceability. Furthermore, the proposed scheme uses InterPlanetary File System to efficiently store the vehicles' reputation information and consequently, reduces the storage overhead. Finally, performance and security analysis is performed to show the effectiveness and practicality of the proposed scheme.
Keywords: Blockchain · Vehicular Energy Network · Reputation management · Efficient storage

1 Introduction
The number of vehicles in urban areas is increasing at a rapid pace. This increase introduces multiple challenges, including environmental pollution, traffic jams, traffic accidents, etc. [1–3]. To deal with these issues, the Intelligent Transport System (ITS) is introduced to effectively manage traffic conditions in highly populated urban areas. The Vehicular Energy Network (VEN) is one of the prominent applications of ITS, which has recently gained attention due to its promising features like energy trading, information sharing, load balancing, etc. [4–6]. Vehicles in a VEN use the Dedicated Short Range Communications (DSRC) protocol to communicate with
the other vehicles to share information, trade energy and broadcast announcement messages in the network. However, as the conventional VENs rely on a centralized server for network management, they are vulnerable to Denial of Service (DoS) attacks and the scalability issue. Hence, several researchers have proposed decentralized solutions for VENs. On the other hand, Blockchain (BC) is a distributed ledger technology that was introduced by Satoshi Nakamoto in 2008 [7]. With the popularity of Bitcoin, different variants of BC have been introduced. Ethereum is one of the renowned BC frameworks [8]; it introduces the concept of the smart contract, which enables users to trade assets and share information without the involvement of a third party. Due to its prominent features, like transparency, data integrity, availability, tamper-proof records, etc., the BC technology can improve the conventional VENs [9–13]. In BC based VENs, a different set of vulnerabilities related to privacy and security exists. Due to the open nature of BC, malicious vehicles can spread false information and perform fraudulent transactions in the VEN. Moreover, the existing BC based VENs lack an effective vehicle traceability mechanism to identify and revoke malicious vehicles. Furthermore, due to the limited storage of VEN nodes, the current BC based VENs are prone to data unavailability and incur a high storage cost. To overcome these issues, we propose a BC based secure and efficient reputation management scheme, which prevents replay attacks and enables conditional privacy and efficient data storage. The following is the list of our contributions.
– An effective vehicle reputation verification mechanism is proposed in which the vehicles' ratings are stored in the latest block to prevent replay attacks.
– An Elliptic Curve Digital Signature Algorithm (ECDSA) based vehicle authentication mechanism is used to enable conditional anonymity in VENs.
– The InterPlanetary File System (IPFS) is used to ensure persistent data availability and efficient data storage.
The rest of the paper is organized as follows. In Sect. 2, the related work is presented. A problem statement, formulated based upon the related work, is presented in Sect. 3. In Sect. 4, the proposed system model is shown. The working of the proposed scheme is discussed in Sect. 5. The security analysis is presented in Sect. 6, whereas the results are shown in Sect. 7. Finally, the conclusion is drawn in Sect. 8.
2 Related Work
Recently, the researchers have focused on the applications of ITS for reducing the environmental pollution caused by the traffic. The conventional centralized vehicular network architecture fails to aid in these applications due to several reasons, including scalability issue, increased security risks, trust management, etc. To overcome these issues, decentralized solutions are proposed.
2.1 Reputation
In [14], the authors address the issue of security vulnerabilities in smart vehicles. They exploit a permissioned BC based reputation scheme to prevent false information dissemination in the network. However, their proposed reputation scheme does not allow the less reputed vehicles to regain their reputation values. In [15], the authors propose a One-Time Password (OTP) and Artificial Intelligence (AI) based reputation mechanism in vehicular edge computing to enable secure data sharing. A secure BC based incentive scheme is proposed in [16] for traffic event validation. In this scheme, the reputation of vehicles is calculated based on their past events, and a consortium BC is utilized for storing the vehicles' reputation values. The authors in [17] address the issue of malicious service provision in the vehicular cloud network. They propose a BC based trust management scheme that utilizes three-valued subjective logic to identify the malicious service providers. The authors in [18] address the issue of computationally intensive reputation and consensus mechanisms in the vehicular energy network. They propose a Proof of Work based reputation scheme to reduce the mining cost. The authors in [19] propose a BC based energy and data trading scheme. Their proposed scheme uses smart contracts to handle trading disputes and data redundancy. The authors in [20] propose a BC based scheme to store and manage the authentication information of the vehicles. Moreover, they utilize vehicular edge computing to reduce the computational and storage cost. The authors in [21] address the issue of false information sharing in the network. They propose a BC based decentralized trust management system to record the vehicles' reputation based on their network participation rate.
2.2 Storage
The authors in [22] propose a BC based data storage system to overcome the overwhelming cost of uploading data on the BC. They use smart contracts to reduce the size of re-uploaded data and exploit a data partitioning mechanism to decrease the computational overhead. They also adjust the difficulty of the Proof of Work consensus algorithm to enhance the system efficiency in terms of data updates. In [23], the authors address the issue of high computational and storage cost in the BC based Internet of Vehicles. They propose a consortium BC enabled edge computing system to reduce the communication cost and storage requirement.
2.3 Security
Due to the open nature of BC based vehicular networks, it is necessary to detect and revoke the malicious vehicles from the network. In this regard, the authors in [24] propose a BC enabled efficient certificate revocation list management scheme. Their proposed pseudonym shuffling mechanism reduces the storage cost of a large number of pseudonyms. In [25], the authors utilize BC based edge computing for efficient calculation and storage of vehicles' trust data. However, their proposed scheme is vulnerable to private information leakage due to the transparency feature of BC. In [26], the authors propose a BC based distributed authentication scheme. However, their proposed scheme is susceptible to a single point of failure, as the users' authentication information is stored in a centralized cloud server. In [27], the authors propose a distributed pseudonym identity management mechanism to utilize self-generated vehicle certificates in BC based vehicular networks. However, their proposed scheme does not support vehicle traceability, which can lead to false information dissemination and fraudulent transactions.
3 Problem Statement
Reputation management is necessary for the vehicular network to verify the trustworthiness of a vehicle before trading or sharing information. The authors in [28] propose a reputation scheme in which the vehicles prove their trustworthiness by sharing the index of the transaction that contains their reputation value. However, this scheme is inherently vulnerable to replay attacks, as the malicious vehicles can share the index of an old transaction to appear as reliable entities. Furthermore, the authors in [29] use a One-Time Address (OTA) mechanism to prevent the unique identification of a vehicle. However, due to the lack of conditional anonymity, the malicious vehicles cannot be traced or removed from the network. Another common issue in BC based vehicular networks is the high storage cost due to data redundancy and the indefinite growth of the BC ledger. To overcome this issue, the authors in [27] store the BC ledger on pre-selected Roadside Units (RSUs) to save storage space. However, in addition to the increased communication cost, their proposed solution is prone to scalability and data unavailability issues.
4 System Model
In this paper, we propose a secure and efficient reputation management scheme for VENs. The proposed model consists of three phases, as shown in Fig. 1. The first phase is the registration phase, in which the vehicles are registered with the Certificate Authority (CA) by sending their identity information. The second phase is the vehicle trading phase, in which secure trading is ensured by verifying the trustworthiness of the vehicles on the BC. The third phase is the data storage phase, wherein the RSUs store the users' reputation data on IPFS to save storage space. The proposed model depicted in Fig. 1 contains a mapping table of the limitations and their solutions. The limitations ranging from L1 to L3 are mapped to the solutions ranging from S1 to S3, respectively.
4.1 Entities
The proposed system model contains the following entities.
Fig. 1. Proposed system model
Certificate Authority: The Certificate Authority (CA) is a central trusted entity, which handles the registration process of the vehicles and RSUs. In the proposed scheme, the CA stores an encrypted copy of the mapping between the real ID and the pseudo-ID of the vehicles to ensure vehicles’ traceability. RoadSide Unit: RSUs perform multiple operations in the vehicular networks. RSUs aid the vehicles in retrieving reputation data from BC and the IPFS to ensure secure transactions between the vehicles. The BC ledger is stored on the RSUs, which contains all the information about the reputation of the vehicles and the previous transactions. The BC ledger stored on all RSUs ensures the data availability. Vehicles: In the proposed system, the vehicles share announcements with each other and the RSUs about the road conditions. The vehicles also trade data and energy with each other to increase their reputation value and earn monetary gains. Blockchain: In the proposed model, the BC is used to store the reputation data of the vehicles in a distributed manner to overcome the single point of failure
issue. Moreover, it also provides transparency, integrity and availability of the data. Smart Contract: In the proposed model, vehicles use smart contracts for requesting the reputation data of their trading partner from the BC before initiating a trade. Moreover, RSUs use smart contracts for storing and updating the reputation data of the vehicles in the BC. InterPlanetary File System: IPFS is a distributed storage framework that ensures long-term data availability and easy accessibility. In the proposed framework, the IPFS is used to store the reputation data of the vehicles. The reputation data of 100 vehicles is combined to form a single batch. The IPFS returns a fixed-size SHA-256 hash for every batch. This hash is then stored on the BC to ensure transparency.
5 Proposed Scheme
In this section, the details of all phases of the proposed scheme are presented. Some of the notations used in the scheme are as follows. V1 and V2 represent two vehicles that perform trading, whereas RID and PID are the real ID and pseudo ID of the vehicles, respectively.
5.1 System Initialization
For the system's initialization, an elliptic curve y^2 = x^3 + ax + b mod p is selected. Here, a, b ∈ Z_p^*, and p is a large prime number. g is the generator of the elliptic group. After that, the CA generates its cryptographic material by selecting a master private key CA_MSK and generating a master public key CA_MPK = CA_MSK × g. The CA uses the Elliptic Curve Digital Signature Algorithm (ECDSA) for signing the digital certificates. The signing key and verifying key of the CA are CA_sigKey and CA_verKey, respectively.
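A minimal sketch of this key setup using the Python cryptography package is shown below; the choice of the SECP256R1 curve, the variable names and the certificate body are illustrative assumptions rather than parameters fixed by the scheme.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# CA master key pair (CA_MSK, CA_MPK) on an elliptic curve group.
ca_msk = ec.generate_private_key(ec.SECP256R1())
ca_mpk = ca_msk.public_key()

# ECDSA signing and verification, as used by the CA for digital certificates.
certificate_body = b"PID_V1 | validity | attributes"   # hypothetical certificate content
signature = ca_msk.sign(certificate_body, ec.ECDSA(hashes.SHA256()))

# Any entity holding CA_MPK can verify the certificate signature;
# verify() raises InvalidSignature if the certificate was tampered with.
ca_mpk.verify(signature, certificate_body, ec.ECDSA(hashes.SHA256()))
print("certificate signature verified")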
5.2 Registration Phase
In this phase, the vehicle V1 requests a pseudo ID from the CA by sending its private information RID_V1 = (Name, SSN, PlateNumber) over a secure channel. The CA first verifies RID_V1 by checking the list of existing users and the blacklist to see if the vehicle is malicious. After verification, the CA generates a pseudo ID PID_V1 for the vehicle V1. The CA also generates a mapping between RID_V1 and PID_V1 and stores it in encrypted form, described as Mapping(PID_V1) = Enc_CA_MPK(PID_V1 → RID_V1). This mapping ensures the vehicles' traceability while preserving their privacy.
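The registration step can be sketched as follows. For readability, the PID → RID mapping is encrypted here with a symmetric Fernet key held by the CA, which is a simplifying stand-in for the public-key encryption Enc_CA_MPK used in the scheme, and the UUID-based pseudo IDs are our own assumption.

import json
import uuid
from cryptography.fernet import Fernet

class CertificateAuthority:
    def __init__(self):
        self._storage_key = Fernet.generate_key()   # stand-in for the CA's key material
        self._fernet = Fernet(self._storage_key)
        self._blacklist: set[str] = set()
        self._mappings: dict[str, bytes] = {}       # PID -> encrypted RID record

    def register(self, rid: dict) -> str | None:
        """Verify the real identity and issue a pseudo ID; return None if blacklisted."""
        if rid["PlateNumber"] in self._blacklist:
            return None
        pid = "PID-" + uuid.uuid4().hex
        record = json.dumps({"PID": pid, "RID": rid}).encode()
        self._mappings[pid] = self._fernet.encrypt(record)  # Mapping(PID) = Enc(PID -> RID)
        return pid

    def trace(self, pid: str) -> dict:
        """Conditional traceability: only the CA can recover the real identity."""
        return json.loads(self._fernet.decrypt(self._mappings[pid]))

ca = CertificateAuthority()
pid_v1 = ca.register({"Name": "V1 owner", "SSN": "xxx", "PlateNumber": "ABC-123"})
print(pid_v1, ca.trace(pid_v1)["RID"]["PlateNumber"])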
5.3 Vehicle Trading Phase
After registration, the vehicles become a part of the network and can trade with other members of the network. In the trading process, the vehicle V1 first sends a trading request signed with its private key to V2 for trading the data:

V1 → V2: TxReq = (req, ts, PID_V1, Sig_V1sigKey).

Here, req is the requested data, ts is the timestamp, PID is the pseudo ID and Sig(.) shows that the request is signed. When the vehicle V2 receives the request, it extracts PID_V1 from the TxReq and sends it to the RSU for checking the reputation value:

V2 → RSU: RepCheckReq = (PID_V1, ts, Sig_V2sigKey)
RSU → V2: repValue(V1)

The RSU returns the reputation value by requesting the data from the BC via a smart contract. After receiving the reputation information of V1, V2 initiates the trade with V1 if the reputation value of V1 is above the pre-defined threshold.
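Sketching this exchange in code, the trading request below is signed with the sender's ECDSA key and the receiver checks both the signature and the reputation threshold; the message layout, the JSON encoding and the threshold value of 0.5 are illustrative assumptions, not values defined by the scheme.

import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

REPUTATION_THRESHOLD = 0.5   # assumed pre-defined threshold

def make_tx_req(sig_key, pid_v1: str, req: str) -> dict:
    """V1 -> V2: TxReq = (req, ts, PID_V1, Sig_V1sigKey)."""
    body = {"req": req, "ts": time.time(), "PID": pid_v1}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = sig_key.sign(payload, ec.ECDSA(hashes.SHA256())).hex()
    return body

def accept_trade(tx_req: dict, ver_key, rep_value: float) -> bool:
    """V2 side: verify the signature, then check V1's reputation from the RSU/BC."""
    payload = json.dumps({k: tx_req[k] for k in ("req", "ts", "PID")},
                         sort_keys=True).encode()
    try:
        ver_key.verify(bytes.fromhex(tx_req["sig"]), payload, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return rep_value >= REPUTATION_THRESHOLD

v1_key = ec.generate_private_key(ec.SECP256R1())
tx = make_tx_req(v1_key, "PID-v1", req="energy: 2 kWh")
print(accept_trade(tx, v1_key.public_key(), rep_value=0.8))   # True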
5.4 Data Storage Phase
In this phase, the RSU stores the reputation data of the vehicles on the IPFS to reduce the storage cost of the BC. The reputation data is divided into batches of 100 users before it is uploaded to the IPFS. Each batch is encrypted with the RSU's public key RSU_pk before it is stored on the IPFS to prevent malicious data access. The request IPFSReq = Enc_RSUpk(ReputationData || ts) is sent to the IPFS. In return, the IPFS sends a fixed-length SHA-256 hash to the RSU, which is then stored on the BC ledger.
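The storage phase can be sketched as below. To keep the example self-contained, the encrypted batch is hashed locally with SHA-256 as a stand-in for the content identifier an IPFS node would return; the batch size of 100 follows the description above, while the Fernet encryption is again a simplifying assumption for Enc_RSUpk.

import hashlib
import json
import time
from cryptography.fernet import Fernet

BATCH_SIZE = 100
rsu_key = Fernet(Fernet.generate_key())   # stand-in for the RSU's key material

def store_reputation_batches(reputation: dict[str, float]) -> list[str]:
    """Encrypt batches of 100 records and return the hashes to be put on the BC.

    In a real deployment the encrypted batch would be pushed to an IPFS node
    and the returned content hash stored on the ledger; here the SHA-256 digest
    stands in for that content identifier.
    """
    items = sorted(reputation.items())
    hashes_on_chain = []
    for i in range(0, len(items), BATCH_SIZE):
        batch = dict(items[i:i + BATCH_SIZE])
        payload = json.dumps({"data": batch, "ts": time.time()}).encode()
        encrypted = rsu_key.encrypt(payload)          # IPFSReq = Enc(ReputationData || ts)
        hashes_on_chain.append(hashlib.sha256(encrypted).hexdigest())
    return hashes_on_chain

reputation = {f"PID-{i:04d}": 0.5 + (i % 50) / 100 for i in range(250)}
print(store_reputation_batches(reputation))           # 3 batch hashes for 250 vehicles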
6 Security Analysis
In this section, we discuss two of the existing BC based VEN vulnerabilities and their countermeasures.
6.1 Replay Attacks Prevention
When a valid data transmission is maliciously repeated, it is termed a replay attack. As discussed in Sect. 3, the vehicles in BC based vehicular networks prove their trustworthiness by sharing the index of the transaction that contains their reputation value. This approach is vulnerable to replay attacks, as the vehicles can share older transaction indexes to appear more trustworthy. To overcome this issue, our proposed scheme stores all of the reputation information in the latest block using IPFS. The users are restricted to using only the latest block for verifying the trustworthiness of a vehicle.
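The verification rule can be expressed as a simple check against the chain tip, as sketched below; the block and proof structures are purely illustrative, the point being only that a reputation proof referencing anything other than the latest block is rejected.

from dataclasses import dataclass

@dataclass
class Block:
    height: int
    reputation_hash: str          # IPFS hash of the current reputation batch set

@dataclass
class ReputationProof:
    pid: str
    block_height: int             # block the prover claims holds its reputation
    reputation_hash: str

def verify_reputation_proof(chain: list[Block], proof: ReputationProof) -> bool:
    """Accept a proof only if it references the latest block, defeating replays."""
    tip = chain[-1]
    return proof.block_height == tip.height and proof.reputation_hash == tip.reputation_hash

chain = [Block(1, "hashA"), Block(2, "hashB"), Block(3, "hashC")]
fresh = ReputationProof("PID-v1", 3, "hashC")
replayed = ReputationProof("PID-v1", 2, "hashB")   # an older, more favourable record
print(verify_reputation_proof(chain, fresh), verify_reputation_proof(chain, replayed))  # True False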
6.2 Conditional Privacy Preservation
In the proposed scheme, pseudonym certificates are used for hiding the real identity of the vehicles. The real identity information of the vehicles is stored with the CA so that, in case of disputes or misbehaviour, the true identity of the malicious vehicles can be exposed. In [29], the authors use the OTA method to prevent privacy leakage due to data linkage. However, their proposed OTA scheme lacks the traceability feature, due to which the malicious vehicles cannot be identified or removed from the network.
7 Results and Discussion
The proposed scheme is compared with existing solutions provided in the literature. For transaction signing, we have compared the use of the OTA scheme with ECDSA. Moreover, we have compared the use of the BC and the IPFS for data storage. Also, we have related the response delays of vehicles with malicious behaviour.
Fig. 2. Performance comparison
Figure 2a compares the cryptographic operations of the OTA scheme, used in [29], with the ECDSA based pseudonym certificate generation scheme. It can be observed that the OTA takes a significantly longer time to generate keys due to the use of the Kerl hashing algorithm. In the OTA scheme, each address is used only once to prevent private key leakage. However, due to its immense computational cost, it cannot fulfil the requirement of quick authentication of fast-moving vehicles in vehicular networks. Hence, we utilize ECDSA, which enables quick authentication and privacy preservation. The results show the comparison of both schemes in terms of the computational time required for key generation, signature generation and verification. Figure 2b shows the comparison of the storage cost of storing reputation data directly on the BC and of storing its IPFS hash. It is evident from Fig. 2b that storing
the actual data on the BC is a resource intensive task, as the same copy of the data needs to be stored on every node. The authors in [27] have stored the BC ledger on selective RSUs to overcome the overwhelming storage cost; however, it introduces the issues of data unavailability and increased communication cost due to the increased number of data retrieval requests. Hence, to overcome this issue, we use IPFS to store the actual data and store only the IPFS hash on the BC. As the IPFS returns a fixed-size SHA-256 hash value for the data irrespective of its size, it is an efficient approach to store the IPFS hash of the data on the BC instead of storing the actual data.
Fig. 3. Malicious vehicle detection using delays in response time
Figure 3 shows the time delays in the vehicles' requests and responses. In a vehicular network, a vehicle sends requests to other vehicles for information sharing or energy trading. The other vehicle has to respond with the correct information and prove its trustworthiness. The results depict that when a vehicle shares authentic reputation information with its peer, the response time generally follows the same trend. However, when a vehicle shares fake reputation information, it takes longer than an authentic response, as in the 7th request in Fig. 3. The reason for the longer time is that the malicious vehicle needs to generate the fake reputation information before sending the response. Hence, we have used the delay in response time to identify the malicious vehicles.
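A simple way to operationalize this observation is to flag responses whose delay deviates strongly from a vehicle's recent baseline, as in the sketch below; the window size and the three-standard-deviation cut-off are illustrative choices, not values fixed by the paper.

import statistics

def flag_suspicious_responses(delays: list[float], window: int = 5, k: float = 3.0) -> list[int]:
    """Return indices of responses whose delay exceeds mean + k*std of the recent window."""
    flagged = []
    for i in range(window, len(delays)):
        recent = delays[i - window:i]
        mu = statistics.mean(recent)
        sigma = statistics.pstdev(recent) or 1e-6   # avoid a zero threshold
        if delays[i] > mu + k * sigma:
            flagged.append(i)
    return flagged

# Response delays in seconds; the 7th entry mimics a forged-reputation response.
delays = [0.21, 0.19, 0.22, 0.20, 0.21, 0.20, 0.95, 0.21, 0.22]
print(flag_suspicious_responses(delays))   # [6] -> the anomalous 7th request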
8 Conclusion
In this paper, we propose a BC based secure and efficient reputation management scheme to prevent replay attacks, enable conditional anonymity and reduce the
storage cost of the VEN. We have used IPFS to efficiently store the reputation data of the vehicles in the latest block to enable secure reputation verification. Moreover, ECDSA is used for enabling conditional anonymity. A security analysis is performed to show the robustness of the proposed scheme. Also, the performance analysis shows the practicality of the proposed scheme. In the future, this scheme will be further extended to include a distributed revocation mechanism.
References 1. Li, Y., Hu, B.: An iterative two-layer optimization charging and discharging trading scheme for electric vehicle using consortium blockchain. IEEE Trans. Smart Grid 11(3), 2627–2637 (2020). https://doi.org/10.1109/TSG.2019.2958971 2. Feng, Q., He, D., Zeadally, S., Liang, K.: BPAS: blockchain-assisted privacypreserving authentication system for vehicular ad hoc networks. IEEE Trans. Ind. Inf. 16(6), 4146–4155 (2020). https://doi.org/10.1109/TII.2019.2948053 3. Li, K., Lau, W.F., Au, M.H., Ho, I.W.-H., Wang, Y.: Efficient message authentication with revocation transparency using blockchain for vehicular networks. Comput. Electr. Eng. 86, 106721 (2020) 4. Posner, J., Tseng, L., Aloqaily, M., Jararweh, Y.: Federated learning in vehicular networks: opportunities and solutions. IEEE Netw. 35(2), 152–159 (2021) 5. Kudva, S., Badsha, S., Sengupta, S., Khalil, I., Zomaya, A.: Towards secure and practical consensus for blockchain based VANET. Inf. Sci. 545, 170–187 (2021) 6. Wang, E.K., Liang, Z., Chen, C.-M., Kumari, S., Khan, M.K.: PoRX: a reputation incentive scheme for blockchain consensus of IIoT. Future Gener. Comput. Syst. 102, 140–151 (2020) 7. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system. Manubot (2019) 8. Buterin, V.: A next-generation smart contract and decentralized application platform. White paper 3, no. 37 (2014) 9. Firdaus, M., Rhee, K.-H.: On blockchain-enhanced secure data storage and sharing in vehicular edge computing networks. Appl. Sci. 11(1), 414 (2021) 10. Javaid, U., Aman, M.N., Sikdar, B.: A scalable protocol for driving trust management in internet of vehicles with blockchain. IEEE Internet Things J. 7(12), 11815–11829 (2020) 11. Ma, Z., Zhang, J., Guo, Y., Liu, Y., Liu, X., He, W.: An efficient decentralized key management mechanism for VANET with blockchain. IEEE Trans. Veh. Technol. 69(6), 5836–5849 (2020) 12. Ren, Y., Li, X., Sun, S.-F., Yuan, X., Zhang, X.: Privacy-preserving batch verification signature scheme based on blockchain for Vehicular Ad-Hoc Networks. J. Inf. Secur. Appl. 58, 102698 (2021) 13. Zhang, Q., et al.: Blockchain-based asymmetric group key agreement protocol for internet of vehicles. Comput. Electr. Eng. 86, 106713 (2020) 14. Oham, C., Michelin, R.A., Jurdak, R., Kanhere, S.S., Jha, S.: B-FERL: blockchain based framework for securing smart vehicles. Inf. Process. Manag. 58(1), 102426 (2021) 15. Gawas, M., Patil, H., Govekar, S.S.: An integrative approach for secure data sharing in vehicular edge computing using blockchain. Peer-to-Peer Netw. Appl. 1–19 (2021). https://doi.org/10.1007/s12083-021-01107-4 16. Khalid, A., Iftikhar, M.S., Almogren, A., Khalid, R., Afzal, M.K., Javaid, N.: A blockchain based incentive provisioning scheme for traffic event validation and information storage in VANETs. Inf. Process. Manag. 58(2), 102464 (2021)
17. Xu, S., Guo, C., Hu, R.Q., Qian, Y.: BlockChain inspired secure computation offloading in a vehicular cloud network. IEEE Internet Things J. (2021) 18. Yahaya, A.S., Javaid, N., Javed, M.U., Shafiq, M., Khan, W.Z., Aalsalem, M.Y.: Blockchain-based energy trading and load balancing using contract theory and reputation in a smart community. IEEE Access 8, 222168–222186 (2020) 19. Sadiq, A., Javed, M.U., Khalid, R., Almogren, A., Shafiq, M., Javaid, N.: Blockchain based data and energy trading in internet of electric vehicles. IEEE Access 9, 7000–7020 (2020) 20. Akhter, A.F.M., Ahmed, M., Shah, A.F.M., Anwar, A., Kayes, A.S.M., Zengin, A.: A blockchain based authentication protocol for cooperative vehicular ad hoc network. Sensors 21(4), 1273 (2021) 21. Arora, S.K., Kumar, G., Kim, T.: Blockchain based trust model using tendermint in vehicular adhoc networks. Appl. Sci. 11(5), 1998 (2021) 22. Yin, Y., Li, Y., Ye, B., Liang, T., Li, Y.: A blockchain-based incremental update supported data storage system for intelligent vehicles. IEEE Trans. Veh. Technol. 70(5), 4880–4893 (2021) 23. Mei, Q., Xiong, H., Zhao, Y., Yeh, K.-H.: Toward blockchain-enabled IoV with edge computing: efficient and privacy-preserving vehicular communication and dynamic updating. In: 2021 IEEE Conference on Dependable and Secure Computing (DSC), pp. 1–8. IEEE (2021) 24. Lei, A., et al.: A blockchain based certificate revocation scheme for vehicular communication systems. Future Gener. Comput. Syst. 110, 892–903 (2020) 25. Shrestha, R., Bajracharya, R., Shrestha, A.P., Nam, S.Y.: A new type of blockchain for secure message exchange in VANET. Digit. Commun. Netw. 6(2), 177–186 (2020) 26. Zheng, D., Jing, C., Guo, R., Gao, S., Wang, L.: A traceable blockchain based access authentication system with privacy preservation in VANETs. IEEE Access 7, 117716–117726 (2019) 27. Benarous, L., Kadri, B., Bouridane, A.: Blockchain-based privacy-aware pseudonym management framework for vehicular networks. Arab. J. Sci. Eng. 45, 6033–6049 (2020). https://doi.org/10.1007/s13369-020-04448-z 28. Luo, B., Li, X., Weng, J., Guo, J., Ma, J.: Blockchain enabled trust-based location privacy protection scheme in VANET. IEEE Trans. Veh. Technol. 69(2), 2034–2048 (2019) 29. Pu, Y., Xiang, T., Chunqiang, H., Alrawais, A., Yan, H.: An efficient blockchain based privacy preserving scheme for vehicular social networks. Inf. Sci. 540, 308– 324 (2020)
Religious Value Co-Creation: A Strategy to Strengthen Customer Engagement Ken Sudarti(&), Olivia Fachrunnisa, Hendar, and Ardian Adhiatma Department of Management, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia {kensudarti,olivia.fachrunnisa,hendar,ardian} @unissula.ac.id
Abstract. This paper aims to develop a conceptual model that connects a new concept, namely Religious Value Co-Creation (RVCC), to customer engagement. RVCC is the result of a synthesis between the concept of value co-creation and Islamic values. This new concept emerged because the value dimensions proposed by previous researchers had not considered the religious factor and were still transactional. Besides, the phenomenon that develops around religion-based products shows little visible differentiation from non-religion-based products. Several propositions are put forward based on a literature study. By involving religious elements in value co-creation, the parties included in value co-creation will engage in 'sincere to give' and 'sincere to accept' to obtain holistic value. RVCC is indicated to increase customer engagement.
Keywords: Religious Value Co-Creation · Religious value congruence · Customer engagement
1 Introduction
This study focuses on value creation that occurs when frontline staff and customers meet in interactive marketing activities, as a consequence of the inseparability inherent in services. This concept was initially considered only a company activity but later developed into value co-creation, which is defined as a continuous interaction between two or more parties in building a personalized service experience [1]. The concept of value creation is the most significant factor for company success and has been believed to be an important source of competitive advantage [2]. The concept of value co-creation considers the customer an active resource who can contribute and generate value by taking the role of a co-provider. [3] stated that under increasingly saturated market conditions and limited resources, companies should no longer focus on optimizing internal resources, but must be able to explore external resources, including involving customers in the creation of shared values. Collaboration between internal and external resources results in optimal value creation. Value co-creation activities involve and benefit three parties at once, namely the company, employees, and customers. From a customer perspective, involvement during the value co-creation process causes their needs to be met during their
participation process. Individuals are willing to participate in a relationship because of their perceived value [4]. Customer involvement in value co-creation becomes the experience and basis for the next shared value creation. The value creation experience gained through interaction with several service providers creates accumulated knowledge and increases the value exchanged. From the employees' point of view, through value co-creation they will be better able to understand the aspirations, desires, motivations, and behavior of consumers and to create pleasant exchange relationships. From the company's perspective, value co-creation is believed to reduce uncertainty and eliminate sources of environmental risk. [5] stated that there are five kinds of values that can be created: (a) functional values, i.e. the utility obtained and felt in relation to functional performance; (b) emotional values, i.e. the utility obtained and felt in relation to feelings; (c) social values, i.e. the utility obtained and felt when individuals interact with one or more social groups; (d) conditional values, i.e. the utility obtained and felt from a series of conditions when faced with various choices; and (e) epistemic values, i.e. the utility obtained and felt because a sense of curiosity is fulfilled. [6] suggests the elements that must be present in the value co-creation process, namely meaningfulness, collaboration, contribution, recognition, and affective response. [7] concluded that value co-creation includes several core activities, namely information seeking, information sharing, responsible behavior, personal interaction, feedback, advocacy, and helping. Based on the literature review on value co-creation, the search for meaning in the value creation process as stated by [1, 6, 8, 9] has not included the religious aspect. Religious value is very important in consumer purchasing behavior [10], including for halal products. Individual understanding of halal products varies depending on the level of religiosity. Therefore, this study intends to improve the concept of value co-creation by offering a new concept of Religious Value Co-Creation (RVCC), which is defined as the intensity of mutually reinforcing beliefs and knowledge of Sharia products between customers and employees through "giving and accepting religious values". The religious dimension becomes very important when an organization offers products based on religious values. This dimension can strengthen engagement between employees and customers, which in turn has an impact on the desire to have an ongoing relationship. RVCC is the result of a synthesis between the Theory of Value (TOV), the concept of Service-Dominant Logic (SDL), and Islamic values. The concept of value co-creation is at the core of SDL, which emphasizes relationships to create shared value that is beneficial to the parties involved. RVCC complements it by adding religious values. RVCC in this study focuses on the interaction process by adapting two dimensions of value co-creation from [6], namely collaboration and contribution, which are then internalized with Islamic values, so that the RVCC dimensions are sincere to give, namely the intensity of increasing confidence through sincerely giving alms of knowledge about Sharia products, and sincere to accept, namely the intensity of increasing confidence through sincerely receiving knowledge about Sharia products.
This is because the dimensions used in [6] are considered too broad, covering both the process and the results of value co-creation, which makes them less focused.
This concept is unique because it contains spiritual and worldly dimensions, so that it can spread wider benefits for customers, employees, and the organization. This uniqueness is believed to be able to bring strong differentiation through value superiority that reaches the level of non-transactional motives. For organizations that offer products on a religious basis, it is urgent to create this differentiation. If not, the word 'Sharia' remains only at the level of 'labeling'.
2 Literature Review
2.1 Religious Value Co-Creation (RVCC) as a New Concept
Theory of value (TOV) is the foundation of Service-Dominant Logic (SDL) which then derives the concept of Value Co-Creation (VCC) as a value construction. TOV is a philosophical and moral theory that deals with the main question of what the value is. This theory is most widely used to conceptualize consumer value, whereas SDL emphasizes that services are a fundamental component of economic exchange, goods are only distribution mechanisms, not a unique expression of value. Value is customercentric and is co-created by the company and the customer. [11] states that value cocreation can be started by making meaning through interaction, collaboration, reciprocal exchange, job performance evaluation, and resource integration. Through this practice, customers and service providers will achieve mutual benefits, create service excellence and improve service system continuity. SDL postulates that when customers engage in shared value exchange, they actively create meaning from the process, thereby increasing value [8]. Making meaning occurs in interactions through communication with the help of terms and images [11]. Furthermore, customers are encouraged to cooperate with service providers when they expect the results to be more valuable, not only for them but also for others [12]. This statement is emphasized by [1], which states that value is created together through collaboration. This collaboration is the main focus of Service-Dominant Logic. Collaboration in the context of VCC is referred to as “marketing with”, not “marketing to”. This collaboration will remove barriers and open up access to new opportunities and resources, increase understanding of how to integrate resources effectively, improve service quality and reduce errors in service delivery [11]. However, if the parties fail to invest resources and integrate them in a collaborative process, the potential value is not realized and can even be assessed negatively [13]. Conversely, the positive amount of resource contributions generates benefits for all actors [11]. The enthusiasm and hope for more valuable and meaningful results for all parties are factors that contribute to the success value of VCC [12]. Religious value co-creation is a value creation related to religious values. The value obtained from religion is related to its religious commitment [10]. Religious commitment shows the extent to which a person believes in his religious values and practices them in his daily life, including the desire to do preaching (da’wah) through the purchase process. Da’wah has the potential to form harmony among humans to create group cohesiveness [14]. Da’wah contains elements of two-way communication. When
someone does da'wah, he is not only spreading religious values but will reciprocally get feedback on the content of his da'wah. Therefore, RVCC contains a contribution element (represented by the sincere to give dimension) and a collaboration element (represented by the sincere to accept dimension). The command for charity is contained in [15]: "And among the most important alms is knowledge alms" [HQ 4:114]. This follows the words of the Prophet Muhammad narrated by Ibn Majah: "The foremost almsgiving is when a Muslim learns a science, then teaches it to other Muslim brothers". Knowledge is very important in Islam. Many verses of the Holy Qur'an state that knowledgeable people are in a high position. Those who know and those who do not know are not the same; only a rational person can receive knowledge [15] [HQ 39:9]. We are also ordered to be tolerant in the assembly [15] [HQ 58:11]. Allah SWT said in [15]: "People who are blind are not the same as those who see" [HQ 35:19]. A Muslim has to give a warning [HQ 51:55]. Based on the literature review and the internalization of Islamic values, the indicators of the sincere to give dimension include: (a) I often give my views about Sharia products to salesmen, (b) I often tell of my experiences while consuming Sharia products, (c) I often give advice on how to convey the benefits of Sharia products convincingly, (d) I often strengthen salesmen's confidence in mastering Islamic products. The indicators of sincere to accept include: (a) I often get explanations about Sharia products from salesmen, (b) salesmen convince me about the benefits of Islamic products, (c) salesmen always remind me about the benefits of Sharia products, (d) salesmen's explanations increasingly strengthen my conviction about the benefits of Sharia products.
3 Proposition Development
3.1 Religious Value Congruence (RVC) and Religious Value Co-Creation (RVCC)
Individuals will show interest in exchanging information when they have a mutual understanding with other parties. If there is no common understanding, there will be affective conflicts that hinder the exchange of information. Value congruence allows a person to predict how other people will act or behave in different situations. Similarity of values also increases the effectiveness of communication, predictability, trust, and attractiveness. This statement follows belief congruence theory, which states that individuals will judge something based on the similarity of their beliefs to the object of assessment. This theory will not hold if there is no match in religious affiliation [16]. Thus, two important elements of belief congruence theory are sameness and importance. Individuals will prefer something that is more similar to their own belief values [16]. Trust is an expectation about the behavior of others and a willingness to behave according to these expectations without conditions. Trust reduces uncertainty, reduces transaction costs, and facilitates cooperation because of the compliance and commitment of both parties. Two parties are willing to work together because of mutual understanding and trust.
Value is phenomenologically determined by the customer [1], so personality traits play an important role in the assessment process. Therefore, an open personality will help determine the results and the process of sharing reciprocal experiences, especially with new objects [17]. An open-minded person is broad-minded, imaginative, curious, adaptable, and enjoys new things as valuable knowledge and experience. Openness is defined as the level of a person's willingness to consider, accept, and integrate new and creative ideas through the creation of shared values. This is especially important when collaboration occurs through face-to-face communication. Furthermore, when two parties share the same religious values, their willingness to give and receive religious knowledge, including about halal products, will increase. The give and accept process becomes smoother because of the conformity of religious values. Value congruence enhances communication. Communication refers to the open exchange of information through formal and informal interactions. This is due to the similarity of standards in interpreting events. The exchange of information reduces the possibility of misunderstanding. Based on this literature study, it can be concluded that individuals will be willing to be involved in the interaction of religious-based value creation when they have the same understanding of the religious value they create. This common value will increase the effectiveness of mutual communication by exchanging resources based on mutual trust. Willingness to give and accept knowledge of Sharia products is based on the religious order to advise one another in patience [15] [HQ 103:1–3].
P1a: When individuals have the same religious values as the products they consume, their interest in shared value creation will increase the sense of 'sincere to give'.
P1b: When individuals have the same religious values as the products they consume, their interest in shared value creation will increase the sense of 'sincere to accept'.
3.2 Religious Value Co-Creation (RVCC) and Customer Engagement (CE)
The concept of engagement is derived from the partnership theory developed by [18]. Engagement is defined as a state of being involved, fully focused, or captivated by something, so that it grabs one's attention and creates attractiveness [19]. Engagement will occur when the organization engages stakeholders in cooperative relationships for better results. Engagement is a unidimensional concept that involves emotional, cognitive, and behavioral aspects. Customer engagement involves involvement and participation created through customer interaction and creative experiences with the company [19]. [20] state that when a relationship is considered satisfying and has emotional ties, it will be strengthened to the level of engagement. Today, customers are no longer viewed as passive recipients of marketing cues, but as proactive parties creating shared value. Customer engagement will create shared experiences and value creation and contribute to the innovation process [21]. Non-transactional customer involvement
can contribute various resources (time, knowledge, and action) that affect company activities [22]. Consumers who are involved in brand communities tend to show higher levels of trust, commitment, satisfaction, emotional attachment, and brand loyalty [23]. Customer engagement focuses on the interactive experience of consumers and is a cognitive, emotional, and behavioral activity that is positively related to consumers while interacting with brands [24]. [12] define customer engagement as satisfied and loyal customers who recommend the company's products and services to others. This statement is reinforced by [25], which states that customer engagement carried out through customer-company interactions contributes to creating satisfied customers who will not only make repeat purchases but also commit to the company and recommend products and services to other customers. All of this shows relationship commitment, which is the individual's desire to maintain a stable relationship and a willingness to make sacrifices to maintain the value of the relationship [26]. Relationship commitment can increase cooperation [26] and increase engagement [27]. VCC ratings affect behavioral intentions, overall service ratings, satisfaction, and loyalty [28]. Therefore, it can be concluded that there is a reciprocal relationship between customer engagement and value co-creation. However, in this study, the relationship between the two is seen in only one direction, where customer engagement is seen as the impact of the interaction of religious-based value creation. Thus, the higher the intensity of customers' giving and taking activities involving religious values, the more satisfied they will be, because their expectations of increased understanding of and belief in religious-based products are fulfilled. The interaction between the customer and the company during the value creation process creates a bond between customer engagement and value co-creation. Group cohesiveness will create engagement, namely the psychological, cognitive, and emotional levels shown by the interacting parties. Customer involvement includes three dimensions, namely: enthusiasm, conscious participation, and social interaction [22]. Customers who have a strong engagement with an organization will prioritize giving advice, spread positive word of mouth, help other customers get products, write blogs, or post comments [25]. Engagement involves cognitive, emotional, and behavioral aspects of customers as well as changes with the environment [29]. The study by [30] concluded that perceived value is the strongest determinant of increasing customer engagement. Based on this explanation, it can be said that when customers engage in knowledge collaboration practices in the process of purchasing halal products, they will consider what value they will get. Their willingness to be involved in value creation is based on the intention of preaching and spreading the good values of halal products. This intention encourages them to strive to strengthen their belief in and knowledge of halal products through giving and accepting activities with service providers represented by frontline staff. This intensity reinforces confidence in Sharia products, further increasing the bond between the two parties. The openness of communication causes both parties to trust each other, which strengthens the desire for a sustainable relationship.
The give and accept process regarding religious values contained in Sharia products increases the sense of brotherhood (ukhuwah) (Fig. 1).
P2a: The more intensively the individual is involved in creating shared value through the sense of sincere to give to the salesman, the stronger the customer engagement.
P2b: The more intensively the individual is involved in creating shared value through the sense of sincere to accept from the salesman, the stronger the customer engagement.
Fig. 1. Conceptual model of religious values and customer engagement
For evaluation purposes, these propositions will be translated into an empirical model and examined on a sample with the following criteria: Sharia insurance salespeople who meet directly with customers and are actively involved with customers in value co-creation.
4 Conclusion and Future Research
Religious value co-creation is an extension of the concept of value co-creation that is holistic because it contains dimensions of both this world and the hereafter. RVCC has the potential to increase engagement among customers of religious-based services. Parties involved in the value creation process will obtain ultimate value as a result of strengthening religious values in the process of giving and taking knowledge and belief in Sharia products. The level of involvement in religious value co-creation will be stronger when both parties have religious value congruence.
Future research will be conducted by validating the RVCC measurement scale and testing the proposed conceptual model with empirical data on Islamic insurance companies, where the involvement of salesmen and customers is very intense. By involving religious aspects in value creation, customers will get holistic satisfaction, not only in the material dimension but also in the spiritual one. The religious aspect is also considered in the creation of shared values because religious commitment affects the orientation of consumers regarding their consumption patterns and social behavior. Religious commitment plays an important role in people's lives through the formation of beliefs, knowledge, and attitudes towards consumption.
References
1. Vargo, S.L., Lusch, R.F.: Institutions and axioms: an extension and update of service-dominant logic. J. Acad. Mark. Sci. 44(1), 5–23 (2016). https://doi.org/10.1007/s11747-015-0456-3
2. Woodruff, R.B.: Marketing in the 21st century customer value: the next source for competitive advantage. J. Acad. Mark. Sci. 25(3), 256 (1997). https://doi.org/10.1177/0092070397253006
3. Prahalad, C.K., Ramaswamy, V.: Co-creating unique value with customers. Strategy Leadersh. 32(3), 4–9 (2004). https://doi.org/10.1108/10878570410699249
4. Yu, Y., Hao, J.X., Dong, X.Y., Khalifa, M.: A multilevel model for effects of social capital and knowledge sharing in knowledge-intensive work teams. Int. J. Inf. Manage. 33(5), 780–790 (2013). https://doi.org/10.1016/j.ijinfomgt.2013.05.005
5. Sheth, J.N., Newman, B.I., Gross, B.L.: Why we buy what we buy: a theory of consumption values. J. Bus. Res. 22(2), 159–170 (1991). https://doi.org/10.1016/0148-2963(91)90050-8
6. Busser, J.A., Shulga, L.V.: Co-created value: multidimensional scale and nomological network. Tour. Manage. 65, 69–86 (2018). https://doi.org/10.1016/j.tourman.2017.09.014
7. Yi, Y., Gong, T.: Customer value co-creation behavior: scale development and validation. J. Bus. Res. 66(9), 1279–1284 (2013). https://doi.org/10.1016/j.jbusres.2012.02.026
8. Pareigis, J., Edvardsson, B., Enquist, B.: Exploring the role of the service environment in forming customer's service experience. Int. J. Qual. Serv. Sci. 3(1), 110–124 (2011). https://doi.org/10.1108/17566691111115117
9. Karpen, I.O., Bove, L.L., Lukas, B.A., Zyphur, M.J.: Service-dominant orientation: measurement and impact on performance outcomes. J. Retail. 91(1), 89–108 (2015). https://doi.org/10.1016/j.jretai.2014.10.002
10. Rahman, M.S.: Young consumer's perception on foreign made fast moving consumer goods: the role of religiosity, spirituality and animosity. Int. J. Bus. Manage. Sci. 5(2), 103–118 (2012)
11. Lusch, R.F., Vargo, S.L.: Service-Dominant Logic: Premises, Perspectives, Possibilities. Cambridge University Press, Cambridge (2014)
12. Roberts, D., Hughes, M., Kertbo, K.: Exploring consumers' motivations to engage in innovation through co-creation activities. Eur. J. Mark. 48(1), 147–169 (2014). https://doi.org/10.1108/EJM-12-2010-0637
13. Jaakkola, E., Hakanen, T.: Value co-creation in solution networks. Ind. Mark. Manage. 42(1), 47–58 (2013). https://doi.org/10.1016/j.indmarman.2012.11.005
14. Kashif, M., De Run, E.C., Abdul Rehman, M., Ting, H.: Bringing Islamic tradition back to management development: a new Islamic Dawah based framework to foster workplace ethics. J. Islamic Mark. 6(3), 429–446 (2015). https://doi.org/10.1108/JIMA-12-2013-0086
15. Holy Qur'an
16. Alhouti, S., Musgrove, C.F., Butler, T.D., D'Souza, G.: Consumer reactions to retailer's religious affiliation: roles of belief congruence, religiosity, and cue strength. J. Mark. Theory Pract. 23(1), 75–93 (2015). https://doi.org/10.1080/10696679.2015.980176
17. Gordon, W.V., Shonin, E., Zangeneh, M., Griffiths, M.D.: Work-related mental health and job performance: can mindfulness help? Int. J. Ment. Health Addict. 12, 129–137 (2014). https://doi.org/10.1007/s11469-014-9484-3
18. McQuaid, R.W.: The theory of partnerships - why have partnerships. In: Osborne, S.P. (ed.) Managing Public-Private Partnerships for Public Services: An International Perspective, pp. 9–35. Routledge, London (2000)
19. Hollebeek, L.: Demystifying customer brand engagement: exploring the loyalty nexus. J. Mark. Manage. 27(7–8), 785–807 (2011). https://doi.org/10.1080/0267257X.2010.500132
20. Pansari, A., Kumar, V.: Customer engagement: the construct, antecedents, and consequences. J. Acad. Mark. Sci. 45(3), 294–311 (2017)
21. Hoyer, W.D., Chandy, R., Dorotic, M., Krafft, M., Singh, S.S.: Consumer cocreation in new product development. J. Serv. Res. 13(3), 283–296 (2010). https://doi.org/10.1177/1094670510375604
22. Vivek, S.D., Beatty, S.E., Morgan, R.M.: Customer engagement: exploring customer relationships beyond purchase. J. Mark. Theory Pract. 20(2), 122–146 (2012). https://doi.org/10.2753/MTP1069-6679200201
23. Brodie, R.J., Juric, B., Ilic, A., Hollebeek, L.: Consumer engagement in a virtual brand community: an exploratory analysis. J. Bus. Res. 66(1) (2011). https://doi.org/10.1016/j.jbusres.2011.07.029
24. Hollebeek, L.D., Conduit, J., Brodie, R.J.: Strategic drivers, anticipated and unanticipated outcomes of customer engagement. J. Mark. Manage. 32(5–6), 393–398 (2016). https://doi.org/10.1080/0267257X.2016.1144360
25. Van Doorn, J., et al.: Customer engagement behavior: theoretical foundations and research directions. J. Serv. Res. 13(3), 253–266 (2010). https://doi.org/10.1177/1094670510375599
26. Morgan, R., Hunt, S.: The commitment-trust theory of relationship marketing. J. Mark. 58(3), 20–38 (1994). https://doi.org/10.2307/1252308
27. Chalofsky, N., Krishna, V.: Meaningfulness, commitment, and engagement: the intersection of a deeper level of intrinsic motivation. Adv. Dev. Hum. Resour. 11(2), 189–203 (2009). https://doi.org/10.1177/1523422309333147
28. Gallan, A.S., Jarvis, C.B., Brown, S.W., Bitner, M.J.: Customer positivity and participation in services: an empirical test in a health care context. J. Acad. Mark. Sci. 41(3), 338–356 (2013). https://doi.org/10.1007/s11747-012-0307-4
29. Hollebeek, L.D., Glynn, M.S., Brodie, R.J.: Consumer brand engagement in social media: conceptualization, scale development and validation. J. Interact. Mark. 28(2), 149–165 (2014). https://doi.org/10.1016/j.intmar.2013.12.002
30. Ngo, H.Q., Nguyen, T.H., Kang, G.D.: The effect of perceived value on customer engagement with the moderating role of brand image: a study case in Vietnamese restaurants. Int. J. Innov. Technol. Explor. Eng. 8(7C2), 451–461 (2019)
Environmental Performance Announcement and Shareholder Value: The Role of Environmental Disclosure
Luluk Muhimatul Ifada1, Munawaroh2, Indri Kartika1, and Khoirul Fuad1
1 Department of Accounting, Faculty of Economics, Universitas Islam Sultan Agung, Semarang, Indonesia
{luluk.ifada,indri,hoirulfuad}@unissula.ac.id
2 Faculty of Economics, Universitas Krisnadwipayana, Bekasi, Indonesia
[email protected]
Abstract. This study aims to analyze the effect of environmental performance announcements on shareholder value and how environmental disclosures mediate this relationship. The companies studied included 81 companies listed on the Indonesia Stock Exchange (IDX) in 2017–2019. The sample was determined using purposive sampling with the following criteria: 1) manufacturing companies that issue financial reports; 2) manufacturing companies that follow PROPER; 3) manufacturing companies that publish environmental disclosures in sustainability reporting. The results show that the first hypothesis, namely that environmental performance announcements have a positive effect on environmental disclosure, is accepted. Furthermore, for the second hypothesis, environmental performance announcements are found to have no effect on shareholder value, while for the third hypothesis, environmental disclosure has a positive effect on shareholder value.
1 Introduction
Environmental issues have become an important topic in today's global economy. Corporate goals and responsibilities have begun to shift from focusing on profit to caring for the environment and society in order to maximize corporate value. Company value can be achieved by increasing the share price, thereby increasing the prosperity of the owners [1]. One of the positive assessments from stakeholders is the announcement of a company that cares about the environmental performance around the company. The announcement takes the form of environmental performance announcements for companies participating in the Company Performance Assessment in Environmental Management (PROPER). PROPER results are announced to the public every year. This announcement can improve the company's reputation [2]. Announcement of environmental performance leads to a more thorough understanding of a company's environmental activities and their disclosures [3]. Companies announced by the Ministry of Environment to hold PROPER ratings, especially for good environmental performance, will provide extensive environmental disclosures. The market reaction will be
better for companies with good environmental performance announcements than for companies with poor environmental performance [4]. The research conducted by [5–8] indicates that environmental performance announcements affect the extent of corporate environmental disclosure. However, research by [9] found the opposite result. On the other hand, [10, 11] explain the existence of value creation for shareholders from social and environmental concerns through performance announcements and environmental disclosures. Shareholders will participate in increasing the value of the company's shares because the company maintains the environment well [12]. [13] explained that the implementation of environmental policies has a relationship with shareholder value, but only under certain conditions. If performance and environmental disclosures cannot maximize shareholder value, then the incentives spent amount to waste for the company. [14] showed that corporate social responsibility has an effect on firm value. The inconsistency of previous research makes the relationship between environmental performance announcements and shareholder value uncertain: whether environmental performance announcements are an aspect that can increase or even decrease company shareholder value. Legitimacy theory implies that the high intensity of stakeholder expectations of the company's environmental performance has an impact on the legitimacy that ensures the sustainability of the company's business [15]. The company's environmental disclosure becomes a tool to obtain and maintain the company's legitimacy status with stakeholders [16]. Signal theory suggests that companies with superior environmental performance have an incentive to disclose superior company performance and choose to mark company achievements by publishing sustainability ESG reports in addition to mandatory financial reports [17]. Signaling theory refers to the ability to communicate with all stakeholders, where companies that consistently make environmental disclosures and announcements of sustainability performance signal that they are good corporate citizens. Maintaining the financial and non-financial performance that affects increasing stakeholder value [17] is a reflection of how the company should behave. [11] explains environmental disclosure information as a form of transparency in the business sector regarding the company's environmental activities; it shows the company's concern and responsibility in front of shareholders, so that preserving the surrounding environment builds good legitimacy for companies and is relevant to increasing stakeholder value. Environmental disclosure is an important aspect for company management and investors [10]. In line with the above research, [7] and [18] state that disclosure of the company's environment provides additional information for external parties about the company's environmental performance and the development of company greenhouse gas emission information. In doing so, it allows analysts and investors to better assess and increase the visibility of shareholder value. This study tries to show that environmental disclosure can mediate the effect of environmental performance announcements on shareholder value. Variations in environmental disclosure transparency can explain the increase in stakeholder expectations and are relevant to the formation of an image of shareholder value [19].
The intention is that companies make environmental performance announcements to improve their reputation.
2 Theoretical and Hypothesis Development
This study proposes environmental disclosure as a mediator of the effect of environmental performance announcements on company shareholder value. The announcement of the company's environmental performance by the Ministry of Environment can show that the company is proactive in giving the impression of good environmental management. Companies can use this opportunity to publish the parts of their activities that meet stakeholders' wishes in terms of environmental programs, which will further increase the company's shareholder value [20].
2.1 Environmental Performance Announcement and Environmental Disclosures
Companies with high environmental performance announcements will be more credible and broader in delivering environmental disclosures. In this case, the company will retain legitimacy and a longer going concern because the community will be more accepting of the company's existence [21, 22]. This shows the positive effect of environmental performance announcements on the level of environmental disclosure. [5, 6, 21] report similar results, namely that a company's environmental performance announcement has a positive effect on environmental disclosure.
H1: Environmental performance announcements have a positive effect on the company's environmental disclosures.
2.2 Environmental Performance Announcement and Shareholder Value
Companies participate in PROPER with the aim of improving environmental performance and environmental disclosure. In this case, the better the company's performance announcements, the more its reputation among all stakeholders will increase [23]. This statement is supported by [24], which states that companies that receive good environmental performance announcements from third parties will get a response from investors through an increase in the company's stock price and long-term earnings. The announcement of environmental performance contains information that is relevant for communicating to many stakeholders that the company has a concern for environmental programs. [21] explained that environmental disclosure can increase customer and employee satisfaction, which then has an impact on shareholder value creation. This reflects that good corporate value will also be reflected in the creation of good shareholder value [25]. [26] and [27] showed that environmental performance announcements increase the shareholder value of the company.
H2: Environmental performance announcements have a positive effect on shareholder value.
2.3 Environmental Disclosure and Shareholder Value
In environmental reporting, companies can be consistent and proactive in meeting stakeholder expectations [11]. They can make a good impression by publishing parts
of their environmental operations such as energy and emissions, waste data, environmental initiatives, environmental policies, and environmental capital expenditures that will have a direct impact on the company's future cash flows and the risks associated with the company's holding value [28]. Disclosure of information about the company's concern for the environment is carried out because, indirectly, this becomes a consideration for investors and creditors regarding the company's credibility [15]. In empirical research, [29, 30] state that stocks whose issuers disclose more about the company's environment perform better in terms of stock returns. Companies make environmental disclosures as an environmental responsibility to create shareholder value; this is in line with the results of research by [3, 10, 11]. Environmental disclosures create relevant value through their direct impact on the cost of capital or firm value.
H3: Environmental disclosure has a positive effect on shareholder value.
3 Methodology
3.1 Sample
Population and Sample. The population in this study were all manufacturing companies in Indonesia. This research sample used purposive sampling. Based on the availability of annual reports listed on the IDX or the company website, the samples used were 81 companies from 2017 to 2019 (Table 1).

Table 1. Variable measurement
Variable | Measurement | Source
Environmental performance announcements | The average score of PROPER achieved by each company | [2]
Shareholder value | PBV = Stock price / Book value of shares | [18]
Environmental disclosures | IE's index = Σ(item disclosed × IER's index × score) / Total items |

The average of shareholder value (SV) is 2.96, the average value of the company's environmental performance announcement is 3, and the company's environmental disclosure score is 1.71 (Table 2).
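To make the Table 1 measures concrete, the sketch below computes both formulas for a single hypothetical firm-year. The function names, the sample figures, and the equal-weight treatment of disclosure items are illustrative assumptions only; they are not taken from the paper's data or from the official PROPER or IER instruments.

```python
# Hedged illustration of the Table 1 measures; all numbers are hypothetical.

def pbv(stock_price: float, book_value_per_share: float) -> float:
    """Shareholder value proxy: PBV = stock price / book value of shares."""
    return stock_price / book_value_per_share

def disclosure_index(items):
    """Disclosure score: sum(disclosed * index weight * score) / total items.

    Each tuple is (disclosed 0/1, IER index weight, score); the weights and
    scores here are made up for illustration.
    """
    return sum(d * w * s for d, w, s in items) / len(items)

if __name__ == "__main__":
    print(round(pbv(stock_price=1500.0, book_value_per_share=620.0), 3))  # 2.419
    sample_items = [(1, 0.5, 2), (1, 1.0, 3), (0, 0.5, 0), (1, 1.0, 1)]
    print(disclosure_index(sample_items))                                  # 1.25
```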
3.2 Data Analysis
For data analysis, this study used simple linear regression analysis in SPSS.
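The analysis itself was run in SPSS; purely as a sketch of the same pipeline, the two regression models and the diagnostics reported in Sect. 4.2 could be reproduced roughly as below. The file name panel.csv and the column names EP, ED, and SV are assumptions, a plain Kolmogorov-Smirnov test on standardized residuals stands in for SPSS's version, and Breusch-Pagan is used as one possible heteroscedasticity check rather than the authors' exact test.

```python
# Hedged sketch of the regression analysis; not the authors' SPSS syntax or data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

df = pd.read_csv("panel.csv")              # hypothetical file: one row per firm-year
print(df[["SV", "EP", "ED"]].describe())   # compare with Table 2

# Model 1: environmental disclosure on environmental performance announcement (H1)
m1 = sm.OLS(df["ED"], sm.add_constant(df["EP"])).fit()
# Model 2: shareholder value on performance announcement and disclosure (H2, H3)
m2 = sm.OLS(df["SV"], sm.add_constant(df[["EP", "ED"]])).fit()

for name, model in [("Model 1", m1), ("Model 2", m2)]:
    resid = model.resid
    z = (resid - resid.mean()) / resid.std()             # standardize residuals
    ks_p = stats.kstest(z, "norm").pvalue                # normality of residuals
    bp_p = het_breuschpagan(resid, model.model.exog)[1]  # heteroscedasticity
    dw = durbin_watson(resid)                            # residual autocorrelation
    print(name)
    print(model.params, model.pvalues, sep="\n")
    print(f"KS p = {ks_p:.3f}, Breusch-Pagan p = {bp_p:.3f}, Durbin-Watson = {dw:.2f}")
```

If the mediating role of disclosure itself were to be tested directly, a bootstrap of the indirect effect (EP → ED → SV) would be the usual next step on top of these two models.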
4 Results
4.1 Descriptive Statistic

Table 2. Descriptive statistic
Variable | N | Minimum | Maximum | Mean | St. deviation
SV | 81 | .207 | 9.465 | 2.96460 | 2.454547
EP | 81 | 2.00 | 4.000 | 3.00000 | .474342
ED | 81 | .003 | 5.110 | 1.71698 | 1.256533

4.2 The Results of Hypothesis Testing
The Kolmogorov-Smirnov test results show Asymp. Sig. values greater than 0.05, namely 0.200 for model 1 and 0.063 for model 2. This shows that model 1 and model 2 in this study have regression models with a normal distribution. The heteroscedasticity test shows significance values above 0.05, so model 1 and model 2 are free from heteroscedasticity. Furthermore, the Durbin-Watson value for model 1 lies between du < d