Smart Innovation, Systems and Technologies 186
G. Jezic · J. Chen-Burger · M. Kusek · R. Sperka · Robert J. Howlett · Lakhmi C. Jain Editors
Agents and Multi-Agent Systems: Technologies and Applications 2020 14th KES International Conference, KES-AMSTA 2020, June 2020 Proceedings
Smart Innovation, Systems and Technologies Volume 186
Series Editors Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-sea, UK Lakhmi C. Jain, Faculty of Engineering and Information Technology, Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia
The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, Google Scholar and Springerlink **
More information about this series at http://www.springer.com/series/8767
G. Jezic · J. Chen-Burger · M. Kusek · R. Sperka · Robert J. Howlett · Lakhmi C. Jain
Editors
Agents and Multi-Agent Systems: Technologies and Applications 2020 14th KES International Conference, KES-AMSTA 2020, June 2020 Proceedings
Editors G. Jezic Faculty of Electrical Engineering and Computing University of Zagreb Zagreb, Croatia
J. Chen-Burger School of Mathematical and Computer Sciences The Heriot-Watt University Scotland, UK
M. Kusek Faculty of Electrical Engineering and Computing University of Zagreb Zagreb, Croatia
R. Sperka Department of Business Economics and Management Silesian University in Opava Opava, Czech Republic
Robert J. Howlett Bournemouth University and KES International Research Shoreham-by-sea, UK
Lakhmi C. Jain University of Technology Sydney Sydney, NSW, Australia Liverpool Hope University Liverpool, UK KES International Research Shoreham-by-sea, UK
ISSN 2190-3018 ISSN 2190-3026 (electronic) Smart Innovation, Systems and Technologies ISBN 978-981-15-5763-7 ISBN 978-981-15-5764-4 (eBook) https://doi.org/10.1007/978-981-15-5764-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
KES-AMSTA 2020 Conference Organization
KES-AMSTA 2020 was organized by KES International—Innovation in Knowledge-Based and Intelligent Engineering Systems.
Honorary Chairs I. Lovrek, University of Zagreb, Croatia L. C. Jain, University of Technology Sydney, Australia; Liverpool Hope University, UK; and KES International, UK
Conference Co-Chairs G. Jezic, University of Zagreb, Croatia J. Chen-Burger, Heriot-Watt University, Scotland, UK
Executive Chair R. J. Howlett, KES International Research, UK
Program Co-Chairs M. Kusek, University of Zagreb, Croatia R. Sperka, Silesian University in Opava, Czechia
Publicity Chair P. Skocir, University of Zagreb, Croatia M. Halaska, Silesian University in Opava, Czechia
International Program Committee Dr. Arnulfo Alanis, Technological Institute of Tijuana Prof. Ahmad Taher Azar, Prince Sultan University, Saudi Arabia Dr. Messaouda Azzouzi, University of Djelfa, Algeria Prof. Costin Badica, University of Craiova, Romania Assist. Prof. Marina Bagic Babac, University of Zagreb, Croatia Dra. Maria del Rosario Baltazar Flores, Instituto Tecnologico de Leon, Mexico Prof. Dariusz Barbucha, Gdynia Maritime University, Poland Prof. Bruno Blaskovic, University of Zagreb, Croatia Dr. Iva Bojic, Singapore-MIT Alliance for Research and Technology, Singapore Dr. Gloria Bordogna, CNR IREA, Italy Dr. Grażyna Brzykcy, Poznan University of Technology, Poland Assoc. Prof. Frantisek Capkovic, Slovak Academy of Sciences, Slovak Republic Prof. Zeljka Car, University of Zagreb, Croatia Dr. Jessica Chen-Burger, Heriot-Watt University, UK Dr. Angela Consoli, DST Group, Australia Prof. Margarita Favorskaya, Siberian State Aerospace University, Russia Dr. Lilia Georgieva, Heriot-Watt University, UK Dr. Paulina Golinska–Dawson, Poznan University of Technology, Poland Mr. Michal Halaska, Silesian University in Opava, Czech Republic Prof. Chihab Hanachi, University of Toulouse 1 Capitole, France Prof. Huu-Hanh Hoang, Posts and Telecommunications Institute of Technology, Vietnam Prof. Tzung-Pei Hong, National University of Kaohsiung, Taiwan Prof. Mirjana Ivanovic, University of Novi Sad, Serbia Prof. Dragan Jevtic, University of Zagreb, Croatia Prof. Vicente Julian, Universitat Politecnica de Valencia, Spain Prof. Arkadiusz Kawa, The Institute of Logistics and Warehousing, Poland Prof. Petros Kefalas, University of Sheffield, UK Dr. Konrad Kulakowski, AGH University of Science and Technology, Poland Prof. Setsuya Kurahashi, University of Tsukuba, Japan Prof. Kazuhiro Kuwabara, Ritsumeikan University, Japan Prof. Joanna Jozefowska, Poznan University of Technology, Poland Dr. Adrianna Kozierkiewicz, Wroclaw University of Technology, Poland Prof. Mario Kusek, University of Zagreb, Croatia Prof. Marin Lujak, IMT Lille Douai, France
Prof. Manuel Mazzara, Innopolis University, Russia Prof. Jose Manuel Molina Lopez, Universidad Carlos III de Madrid, Spain Prof. Radu-Emil Precup, Politehnica University of Timisoara, Romania Dr. Ewa Ratajczak-Ropel, Gdynia Maritime University, Poland Dr. Katka Slaninova, Silesian University in Opava, Czech Republic Prof. Roman Šperka, Silesian University in Opava, Czech Republic Prof. Petr Suchanek, Silesian University in Opava, Czech Republic Prof. Ryszard Tadeusiewicz, AGH University of Science and Technology, Poland Prof. Hiroshi Takahashi, Keio University, Japan Prof. Takao Terano, Chiba University of Commerce, Japan Dr. Krunoslav Trzec, Ericsson Nikola Tesla, Croatia Dr. Jeffrey Tweedale, Defence Science and Technology Group, Australia Prof. Taketoshi Ushiama, Kyushu University, Japan Prof. Jordi Vallverdu, Universitat Autònoma de Barcelona, Spain Prof. Toyohide Watanabe, Nagoya University, Japan Dr. Mahdi Zargayouna, IFSTTAR, France
Invited Session Chairs Agent-Based Modelling and Simulation (ABMS) Assoc. Prof. Roman Šperka, Silesian University in Opava, Czech Republic Business Process Management Assoc. Prof. Roman Šperka, Silesian University in Opava, Czech Republic Agents and Multi-agents Systems applied to Well-being and Health Dr. Maria del Rosario Baltazar Flores, Instituto Tecnologico de Leon, Mexico Dr. Arnulfo Alanis Garza, Instituto Tecnologico de Tijuana, Mexico Business Informatics Prof. Hiroshi Takahashi, Keio University, Japan Prof. Setsuya Kurahashi, University of Tsukuba, Japan Prof. Takao Terano, Tokyo Institute of Technology, Japan Multi-Agent Systems in Transportation Systems Dr. Mahdi Zargayouna, IFSTTAR, France
Preface
This volume contains the proceedings of the 14th KES Conference on Agent and Multi-Agent Systems—Technologies and Applications (KES-AMSTA 2020), held as a virtual conference between June 17 and 19, 2020. The conference was organized by KES International, its focus group on agent and multi-agent systems, and the University of Zagreb, Faculty of Electrical Engineering and Computing. The KES-AMSTA conference is a subseries of the KES conference series. Following the success of previous KES Conferences on Agent and Multi-Agent Systems—Technologies and Applications, held in St. Julians, Gold Coast, Vilamoura, Puerto de la Cruz, Sorrento, Chania, Hue, Dubrovnik, Manchester, Gdynia, Uppsala, Incheon, and Wroclaw, the conference featured the usual keynote talks, presentations, and invited sessions closely aligned to its established themes.

KES-AMSTA is an international scientific conference for discussing and publishing innovative research in the field of agent and multi-agent systems and technologies applicable in the Digital and Knowledge Economy. The aim of the conference is to provide an internationally respected forum for both the research and industrial communities to present their latest work on innovative technologies and applications that are potentially disruptive to industries. Current topics of research in the field include technologies in the areas of decision making, big data analysis, cloud computing, Internet of Things (IoT), business informatics, artificial intelligence, social systems, health, transportation systems and smart environments. Special attention is paid to the featured topics: agent communication and architectures, modeling and simulation of agents, agent negotiation and optimization, business informatics, intelligent agents, and multi-agent systems.

The conference attracted a substantial number of researchers and practitioners from all over the world, who submitted their papers for the main track, covering the methodologies of agent and multi-agent systems applicable in smart environments and the knowledge economy, and for four invited sessions on specific topics within the field. Submissions came from 16 countries. Each paper was peer reviewed by at least two members of the International Program Committee and International Reviewer Board. In total, 33 papers were selected for presentation and publication in the volume of the KES-AMSTA 2020 proceedings.
The Program Committee defined the following main tracks: Software Agents in Smart Environment, and Intelligent Agents and Cloud Computing. In addition to the main tracks of the conference, there were the following invited sessions: Agent-based Modeling and Simulation, Business Process Management, Agents and MAS applied to Well-being and Health, Business Informatics, and MAS in Transportation Systems. Accepted and presented papers highlight new trends and challenges in agent and multi-agent research. We hope that these results will be of value to the research community working in the fields of artificial intelligence, collective computational intelligence, health, robotics, smart systems, and, in particular, agent and multi-agent systems, technologies, tools, and applications.

The Chairs' special thanks go to the following special session organizers: Dra. Maria del Rosario Baltazar Flores, Instituto Tecnologico de Leon, Mexico; Prof. Arnulfo Alanis Garza, Instituto Tecnológico de Tijuana, México; Prof. Hiroshi Takahashi, Keio University, Japan; Prof. Setsuya Kurahashi, University of Tsukuba, Tokyo, Japan; Prof. Takao Terano, Tokyo Institute of Technology, Japan; and Dr. Mahdi Zargayouna, IFSTTAR, France, for their excellent work. Thanks are due to the Program Co-chairs, all Program and Reviewer Committee members and all the additional reviewers for their valuable efforts in the review process, which helped us to guarantee the highest quality of selected papers for the conference. We cordially thank all authors for their valuable contributions and all of the other participants in this conference. The conference would not be possible without their support.

G. Jezic (Zagreb, Croatia)
J. Chen-Burger (Scotland, UK)
M. Kusek (Zagreb, Croatia)
R. Sperka (Opava, Czech Republic)
Robert J. Howlett (Shoreham-by-sea, UK)
Lakhmi C. Jain (Sydney, Australia / Liverpool, UK / Shoreham-by-sea, UK)

April 2020
Contents
Software Agents in Smart Environment

Revitalising and Validating the Novel Approach of xAOSF Framework Under Industry 4.0 in Comparison with Linear SC (Fareed Ud Din, David Paul, Joe Ryan, Frans Henskens, and Mark Wallis) 3
Natural Language Agents in a Smart Environment (Renato Soic and Marin Vukovic) 17
Potentials of Digital Business Models for the European Agriculture Sector (Ralf-Christian Härting, Raphael Kaim, and Frieder Horsch) 27
Agent-Based Approach for User-Centric Smart Environments (Katarina Mandaric, Pavle Skocir, and Gordan Jezic) 37
Providing Efficient Redundancy to an Evacuation Support System Using Remote Procedure Calls (Itsuki Tago, Kota Konishi, Munehiro Takimoto, and Yasushi Kambayashi) 47
Process Model for Accessible Website User Evaluation (Matea Zilak, Ivana Rasan, Ana Keselj, and Zeljka Car) 57

Intelligent Agents and Cloud Computing

A Comparative Study of Trust and Reputation Models in Mobile Agent Systems (Donies Samet, Farah Barika Ktata, and Khaled Ghedira) 71
Agent-Based Control of Service Scheduling Within the Fog Environment (Petar Krivic, Jakov Zivkovic, and Mario Kusek) 83
On the Conception of a Multi-agent Analysis and Optimization Tool for Mechanical Engineering Parts (Paul Christoph Gembarski) 93
Predicting Dependency of Approval Rating Change from Twitter Activity and Sentiment Analysis (Demijan Grgić, Mislav Karaula, Marina Bagić Babac, and Vedran Podobnik) 103
Protected Control System with RSA Encryption (Danenkov Ilya, Alexey Margun, Radda Iureva, and Artem Kremlev) 113
Artificial Intelligent Agent for Energy Savings in Cloud Computing Environment: Implementation and Performance Evaluation (Leila Ismail and Huned Materwala) 127

Agent-Based Modeling and Simulation and Business Process Management

Design of Technology for Prediction and Control System Based on Artificial Immune Systems and the Multi-agent Platform JADE (G. A. Samigulina and Z. I. Samigulina) 143
A Multi-agent Framework for Visitor Tracking in Open Cultural Places (Muhammed Safarini, Rasha Safarini, Thaer Thaher, Amjad Rattrout, and Muath Sabha) 155
Toward Modeling Based on Agents that Support in Increasing the Competitiveness of the Professional of the Degree in Computer Science (María del Consuelo Salgado Soto, Margarita Ramírez Ramírez, Hilda Beatriz Ramírez Moreno, and Esperanza Manrique Rojas) 167
Human Tracking in Cultural Places Using Multi-agent Systems and Face Recognition (Adel Hassan, Aktham Sawan, Amjad Rattrout, and Muath Sabha) 177
A Conceptual Framework for Agent-Based Modeling of Human Behavior in Spatial Design (Dario Esposito, Ilenia Abbattista, and Domenico Camarda) 187
Real-Time Autonomous Taxi Service: An Agent-Based Simulation (Negin Alisoltani, Mahdi Zargayouna, and Ludovic Leclercq) 199
Modelling Timings of the Company's Response to Specific Customer Requirements (Petr Suchánek and Robert Bucki) 209
Importance of Process Flow and Logic Criteria for RPA Implementation (Michal Halaška and Roman Šperka) 221

Agents and Multi-agents Systems Applied to Well-Being and Health

Multiagent System as Support for the Diagnosis of Language Impairments Using BCI-Neurofeedback: Preliminary Study (Eugenio Martínez, Rosario Baltazar, Carlos A. Reyes-García, Miguel Casillas, Martha-Alicia Rocha, Socorro Gutierrez, and M. Del Consuelo Martínez Wbaldo) 235
Multi-agent System for Therapy in Children with the Autistic Spectrum Disorder (ASD), Utilizing Smart Vision Techniques—SMA-TEAVI (Ruben Sepulveda, Arnulfo Alanis, Marina Alvelais Alarcón, Daniel Velazquez, and Karina Alvarado) 245
Multiagent Monitoring System for Oxygen Saturation and Heart Rate (Fabiola Hernandez-Leal, Arnulfo Alanis, and Efraín Patiño) 253
Multi-agent System for Obtaining Parameters in Concussions—MAS-OPC: An Integral Approach (Gustavo Ramírez Gonzalez, Arnulfo Alanis, Marina Alvelais Alarcón, Daniel Velazquez, and Bogart Y. Márquez) 261
Data Analysis of Sensors in Smart Homes for Applications Healthcare in Elderly People (Uriel Huerta, Rosario Baltazar, Anabel Pineda, Martha Rocha, and Miguel Casillas) 271
A Genetic Algorithm-Oriented Model of Agent Persuasion for Multi-agent System Negotiation (Samantha Jiménez, Víctor H. Castillo, Bogart Yail Márquez, Arnulfo Alanis, Leonel Soriano-Equigua, and José Luis Álvarez-Flores) 281

Business Informatics

Impacts of the Implementation of the General Data Protection Regulations (GDPR) in SME Business Models—An Empirical Study with a Quantitative Design (Ralf-Christian Härting, Raphael Kaim, and Dennis Ruch) 295
A Study on the Influence of Advances in Communication Technology on the Intentions of Urban Park Users (Noriyuki Sugahara and Masakazu Takahashi) 305
Construction of News Article Evaluation System Using Language Generation Model (Yoshihiro Nishi, Aiko Suge, and Hiroshi Takahashi) 313
Constructing a Valuation System Through Patent Document Analysis (Shohei Fujiwara, Yusuke Matsumoto, Aiko Suge, and Hiroshi Takahashi) 321
Modeling of Bicycle Sharing Operating System with Dynamic Pricing by Agent Reinforcement Learning (Kohei Yashima and Setsuya Kurahashi) 331
Omni-Channel Challenges Facing Small- and Medium-Sized Enterprises: Balancing Between B2B and B2C (Tomohiko Fujimura and Yoko Ishino) 343
A Formal, Descriptive Model for the Business Case of Managerial Decision-Making (Masaaki Kunigami, Takamasa Kikuchi, Hiroshi Takahashi, and Takao Terano) 355

Author Index 367
About the Editors
G. Jezic is a Professor at the University of Zagreb, Croatia. His research interests include telecommunication networks and services, focusing particularly on parallel and distributed systems, machine-to-machine (M2M) and Internet of Things (IoT) communication networks and protocols, mobile software agents, and multi-agent systems. He actively participates in numerous international conferences as a paper author, speaker, member of organizing and program committees, or reviewer. He co-authored over 100 scientific and professional papers, book chapters, and articles in journals and conference proceedings.

J. Chen-Burger is an Assistant Professor of Computer Science at Heriot-Watt University. She was a Research Fellow in Informatics at the University of Edinburgh. Her research interests include enterprise modeling, process modeling, execution and mining technologies and how they may interact with agent technologies to solve complex real-world problems. She is a committee member of several international conferences and journals, and a chair of conferences and conference sessions. She is PI on several research and commercial projects.

M. Kusek is a Professor at the University of Zagreb, Croatia. He holds a Ph.D. (2005) in electrical engineering from the University of Zagreb. He is currently a lecturer of 9 courses and has supervised over 130 students at the B.Sc., M.Sc., and Ph.D. levels. He participated in numerous local and international projects. He has co-authored over 80 papers in journals, conferences, and books in the areas of distributed systems, multi-agent systems, self-organized systems, and machine-to-machine (M2M) communications. Prof. Kušek is a member of IEEE, KES International, and the European Telecommunications Standards Institute (ETSI). He serves as a program co-chair for two international conferences.
R. Sperka is an Associate Professor and Head of the Department of Business Economics and Management at the Silesian University in Opava, School of Business Administration in Karvina, Czech Republic. He has held a Ph.D. title in business economics and management and a Dr. title in applied informatics since 2013. He has been participating as a head researcher or research team member in several projects funded by the Silesian University Grant System or EU funds. His fields of expertise are business process management, process mining, implementation and deployment of information systems and software frameworks; the use of agent-based technology in social sciences; and modeling and simulation in economic systems and financial markets.

Dr. Robert J. Howlett is the Executive Chair of KES International, a non-profit organization that facilitates knowledge transfer and the dissemination of research results in areas including intelligent systems, sustainability, and knowledge transfer. He is a Visiting Professor at Bournemouth University in the UK. His technical expertise is in the use of intelligent systems to solve industrial problems. He has been successful in applying artificial intelligence, machine learning, and related technologies to sustainability and renewable energy systems; condition monitoring, diagnostic tools and systems; and automotive electronics and engine management systems. His current research work is focused on the use of smart microgrids to achieve reduced energy costs and lower carbon emissions in areas such as housing and protected horticulture.

Dr. Lakhmi C. Jain, Ph.D., M.E., B.E. (Hons), Fellow (Engineers Australia), is with the University of Technology Sydney, Australia, and Liverpool Hope University, UK. Professor Jain serves KES International, providing the professional community with opportunities for publications, knowledge exchange, cooperation, and teaming. Involving around 5,000 researchers drawn from universities and companies worldwide, KES facilitates international cooperation and generates synergy in teaching and research. KES regularly provides networking opportunities for the professional community through one of the largest conferences of its kind in the area of KES.
Software Agents in Smart Environment
Revitalising and Validating the Novel Approach of xAOSF Framework Under Industry 4.0 in Comparison with Linear SC

Fareed Ud Din, David Paul, Joe Ryan, Frans Henskens, and Mark Wallis
Abstract Recent literature claims that Small to Medium Size Enterprises (SMEs), as compared to larger setups, may not be able to experience all the benefits of the fourth industrial revolution (Industry 4.0). In order to bridge this gap, the Agent Oriented Smart Factory (AOSF) framework provides a comprehensive supply chain architecture. The AOSF framework not only provides high-level enterprise integration guidelines but also recommends a thorough implementation in the area of warehousing by providing the Agent Oriented Storage and Retrieval (AOSR) WMS system. This paper focuses on a scenario-based comparison of the extended AOSF framework with a Linear SC model, to explain its substantially improved performance efficiency, especially in SME-oriented warehousing. These scenario-based experiments indicate that AOSR can yield 60–148% improvement in certain Key Performance Indicators (KPIs), i.e. the number of products stored in racks, the receiving area (RA) and expedition areas (EA), in comparison with standard WMS strategies.
F. U. Din (B) · J. Ryan · F. Henskens · M. Wallis, University of Newcastle, Callaghan, NSW, Australia, e-mail: [email protected]
D. Paul, University of New England, Armidale, NSW, Australia
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020. G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_1

1 Introduction

Supply Chain (SC) is a fundamental element that provides an organisation with process flow, regardless of the size of the organisation [1]. For SMEs, the importance of SC networks becomes more crucial as they rely solely on the tightly integrated subsystems and components to maintain business processes. The concept of Industry 4.0 [2] is no longer new. This initiative provides a flexible and advanced system which recommends a high-tech infrastructural shift, incorporating intelligent machines within the manufacturing Supply Chain (SC) and a high level of automated interaction in between the constituent components [3]. In order to build such a
structure, additional infrastructural and operational cost is required [4]. However, for SMEs, implementing a low-cost but effective solution has always been a preference [4]. Thus, a general SC framework for SMEs, based on Industry 4.0 standards, may help bridge this gap between Industry 4.0 and SMEs. In order to bring SMEs many of the benefits of Industry 4.0, the Agent Oriented Smart Factory (AOSF) framework [5] provides a moderate-level semi-autonomous system for SMEs to apply a comprehensive SC framework under the umbrella of Industry 4.0. The contribution of the AOSF framework has been presented in a series of previous contributions, including the high-level SC-based AOSF architecture [5], its extended visualisation as a Cyber Physical System (CPS) [6], the problem and domain definition to build the baseline model for its associated WMS strategy, Agent Oriented Storage and Retrieval (AOSR) [7], the 6-Feature strategy and general work-flow of AOSR [8], a thorough performance validation of AOSR in comparison with standard WMS strategies [9], and a time efficiency validation of the AOSR planner algorithm [10]. This paper focuses on a scenario-based comparison of the AOSF recommended CPS-based SC model with the Linear SC model to explain how AOSF provides a robust, proactive and systematic flow of operations. It also includes test case-based validation of AOSF's recommended AOSR-WMS strategy to affirm the validity of the overall system.

This article includes scenario-based test cases within the supply chain of a firm and relates them with two different possible cases of information exchange from the front-end customer side (CRM) and the back-end supplier side (SCM). It also highlights the importance of the Business Process Re-engineering (BPR) strategy recommended by the AOSF framework. Some implementation results taken from the prototype developed in JADE [11] are also included to discuss the use of the multiple warehousing and product placement/retrieval mechanisms provided by the AOSR strategy, e.g. Zoning Logic, FIFO Logic and Pick from/Put to the Fewest Logic [12]. It also provides a comparison of the recommended AOSR hybrid product placement and retrieval strategy with standard WMS strategies. The discussion includes how the hybrid logic-based AOSR algorithm combines not only all the aforementioned logic schemes but also the 'Pick from/Put to the Nearest' logic, in order to reduce the overall activity time within the shopfloor. It also includes validation of how the 6-Feature strategy recommended by the AOSR system helps in bringing improvement and proactiveness within a warehouse.
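As a rough illustration of how such a hybrid rule cascade could be expressed in the prototype's language (Java), consider the sketch below. The class and method names (HybridSlotting, choosePolicy) and the precedence of the rules are assumptions made purely for illustration; they are not taken from the actual AOSR implementation.

```java
// Illustrative only: one possible precedence among the placement logics
// named above (zoning, FIFO, put-to-fewest, put-to-nearest); the rule
// ordering here is an assumption, not the actual AOSR algorithm.
enum PlacementLogic { ZONING, FIFO, PUT_TO_FEWEST, PUT_TO_NEAREST }

class HybridSlotting {
    PlacementLogic choosePolicy(boolean hazardous, boolean fastMoving, boolean finished) {
        if (hazardous) return PlacementLogic.ZONING;           // keep hazardous stock in dedicated zones
        if (!finished) return PlacementLogic.FIFO;             // rotate unfinished/raw stock by age
        if (fastMoving) return PlacementLogic.PUT_TO_NEAREST;  // cut travel time for fast movers
        return PlacementLogic.PUT_TO_FEWEST;                   // consolidate part-filled racks
    }
}
```

The point of combining several logics in one cascade, as AOSR does, is that no single policy suits every product class: zoning addresses safety, FIFO addresses ageing stock, and nearest/fewest placement trades travel time against rack consolidation.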
2 Test Scenario and AOSF

Supply Chain (SC) is a philosophical, boundary-less network within a business setup that extends from the supplier side towards the customer side. Several different events can occur in any of the constituent parts of an SC network, e.g. supply chain management (SCM), the enterprise central unit (ECU), the logistical information system (LIS) or customer relationship management (CRM). This section addresses two
Fig. 1 A Linear view of supply chain network
possible approaches to create a comprehensive informational flow passing through the different existing SC components:

1. The create the bulk approach, where the events are triggered from the supplier side and require the involvement of different units of the SC network. This case can be further segregated into two sub-scenarios, Scenario 1A: creating the bulk from inside, and Scenario 1B: creating the bulk from outside.
2. The break the bulk approach, where the action is invoked from the customer side, creating a wave of initiation of different subcomponents of the supply chain up to the warehouse.

Both of the cases, with their sub-scenarios, are reflected by the linear representation of the SC network in Fig. 1 and detailed below.

Scenario 1A: Creating the bulk from within the enterprise is a scenario where the manufacturing unit informs the central information centre about the completion of a particular batch, which is further updated to the ECU. The ECU collects details about the execution of production planning, process and disposition [13] related to particular finished or semi-finished (raw) products to be stored in the warehouse. After processing the data, the ECU transforms it into decisive information and initiates a trigger, invoking a call for products to be stored, with the details about dispositioning from the manufacturing side and delivery towards the warehouse side.

Scenario 1B: Creating the bulk from outside is a scenario where the products are to be delivered from suppliers to the firm. The SCM component is the primary interface that corresponds to the requirements of suppliers and deals with delivery details via the LIS. SCM also performs operational planning, execution of procurement and completion with the purchase department, which are sub-operations of interdepartmental communication [14]. The ECU collects the data transmitted by SCM and LIS from the central information centre and then, after processing that data, invokes a call to deliver the right batch to the warehouse with all the delivery details.
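Purely as an illustration of how these triggers could be represented in software, the sketch below models the two approaches and their sub-scenarios as a simple Java type; all identifiers here are hypothetical and are not taken from the AOSF prototype.

```java
// Hypothetical representation of the SC triggers described above.
enum Scenario { CREATE_BULK_INSIDE, CREATE_BULK_OUTSIDE, BREAK_BULK }

class Trigger {
    final Scenario scenario;
    final String initiator;  // "ECU" for Scenarios 1A/1B, "CRM" for Scenario 2
    final String productId;  // e.g. "P-9001" in the dataset of Sect. 3
    final int quantity;

    Trigger(Scenario scenario, String initiator, String productId, int quantity) {
        this.scenario = scenario;
        this.initiator = initiator;
        this.productId = productId;
        this.quantity = quantity;
    }
}
```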
In the second approach, breaking the bulk, the trigger is initiated from the front-end customer side. The CRM component is the main interface for dealing with the requirements of upcoming orders from the customer side. CRM coordinates with the department of sales and marketing and posts the data to the central information centre [14]. Then the information related to a particular shipment, in liaison with the sales department, is transmitted to the warehouse side. Section 3 highlights details of the test cases, for all the possible triggers initiated, in a routine day on an hourly basis in a distribution warehouse.

From the perspective of mapping Industry 4.0 standards to SMEs, three particular aspects are usually recommended, through the use of RFID technologies, mobile user interfaces and auto/predictive control of inventory management [15]:

– Smart Logistics, providing connected units with predictive features;
– Smart Production, providing sensor-based environments within production plants; and
– an Organisational/Managerial model, providing comprehensive control to managerial staff.

The AOSF framework takes all these recommendations into account and provides a comprehensive layout not only for the organisation and modelling of an SC network but also for how it works in maintaining vertical, horizontal and end-to-end integration, which is an important factor in keeping the whole system updated. The AOSF framework is based on a Cloud-based CPS architecture that provides the flexibility and scalability of adding Big Data features as needed in the future. At the moment, most SMEs are not considering data as a source of added value [16]. Also, the extensive use of collaborative robots is not exploited by SMEs yet and does not seem possible in the near future because of the high infrastructural cost involved with such automation [17].

Recalling the concepts of the tier-based AOSF framework and its extended view in Fig. 2, this architecture better caters to the cases discussed in a linear SC structure as it provides a proper integration mechanism through an Intra-Enterprise Wide Network (IWN), which also provides three-dimensional enterprise integration (as discussed in our previous work [6]). The traditional SC elements, such as the SCM, CRM, plant side, business operation side and warehouse side, with all the smart devices, are part of the Smart Connection Layer, which further provides connectivity to the ECU. The ECU in the AOSF architecture is considered a focal point which serves as a middle layer sensing the data and transforming it into decisive information. All the backup and monitoring facilities are set up at the Cyber Cognition Layer, which provides overall cognitive abilities to the system. Such a three-dimensional structure of the AOSF framework also helps in maintaining a proper backup at the cloud layer, while keeping all the constituent elements updated concurrently. Agent orientation also gives the AOSF framework a further benefit of flexibility, where agents interact with each other over a particular resource constraint and are themselves helped by utilising their own local inference engines and belief sets. The AOSF framework recommends a standard classification of reflex agents, utility-based agents and
Fig. 2 Extended view of AOSF framework: the Smart Connection Level (business operations, plant maintenance, customer, warehouse and supplier sides), the Data-to-Information Level (Enterprise Central Unit with its device manager, client manager and mobile matchmaker) and the Cyber-Cognition Level (OLAP database servers, data backup servers and server teams), all connected through the Intra-Enterprise Wireless Network (IWN)
goal-based agents, i.e. Smart Device Agents (SDAs), User Side Agents (UAs) and Mediator Agents (MAs), in order to provide decentralised decision-making, thus making operations seamless and robust.
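Since the AOSF prototype is developed in JADE [11], the skeleton below shows how one of these agent roles could look on that platform. It is a minimal sketch rather than the actual prototype code; the MediatorAgent class name and the reply logic are assumptions for illustration.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Minimal JADE skeleton for a Mediator Agent (MA): it waits for ACL
// messages from Smart Device Agents (SDAs) or User Side Agents (UAs)
// and answers them locally, keeping decision-making decentralised.
public class MediatorAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) { block(); return; }  // sleep until a message arrives
                ACLMessage reply = msg.createReply();
                reply.setPerformative(ACLMessage.INFORM);
                reply.setContent("resolved: " + msg.getContent()); // placeholder decision
                myAgent.send(reply);
            }
        });
    }
}
```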
3 Dataset and Test Cases

In order to provide a solution to improve warehouse management in SMEs, the AOSF framework recommends its associated AOSR-WMS mechanism with its 6-Feature strategy [8], which is prototyped in JADE [11], as detailed in [6]. For a thorough validation of this system, the data used to evaluate the test cases, for different categories of products in different scenarios, is represented in Tables 1 and 2. All the data categorisation of the products applied to the AOSF framework and its associated AOSR algorithm is taken as a test/example case, which can be modified as per business need. The details of the different classes and categorisation of products in these test cases are extracted from the online sources provided by the DGI Global [18] and Eurosped [19] warehousing and logistics companies. In order to build a comprehensive dataset that includes maximum variation and can be considered representative for large-scale applicability, several different features are included, such as product classes, their characteristics, SKUs and different situations of product delivery and
shipment. The data used to validate this system is stored categorically within the highlighted constraints as detailed in Tables 1 and 2. This comprehensive dataset does not only include one type of product category, it consists of the information of several characteristics of products, e.g. SKUs, quantity and products classes, from several different industrial sectors, e.g. electronics industry, medical industry, textile firms, paint and glass industry. Table 1 takes 32 different triggers into account and categorises them into the aforementioned generic scenarios: Scenario 1A (creating
Table 1 The dataset used for hourly Creating/Breaking the Bulk Test Cases Tr. # Case Initiator Hours Product ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
1A 1A 1B 1A 1B 1B 1A 1A 1A 1B 1B 1B 1B 1A 1B 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
ECU
1st
2nd
3rd
4th
CRM 5th
6th
7th
8th
P-9001 P-9002 k-9804 K-2098 L-3092 K-9803 F-9210 L-2801 F-2830 C-3921 R-3392 R-1292 P-8372 K-3269 R-3390 P-9001 P-9002 k-9804 K-2098 L-3092 K-9803 F-9210 L-2801 F-2830 C-3921 R-3392 R-1292 P-8372 K-3269 R-3390 P-9001 P-9002
Quantity 17 13 23 27 17 33 47 33 23 27 67 43 53 27 67 5 25 33 67 53 23 37 73 47 33 27 43 17 23 17 27 13
Table 2 Categorisation/classification of products with respect to characteristics (SKU; binary Hazard, Fast and Finished flags)

Sr. no.   Product ID   Product name              SKU
1         P-9001       Small Electronics         E/B
2         M-1001       Medical Supplies          B/C
3         P-9002       Household/Hygiene         B/C
4         K-9804       Large Electrical App.     B/C
5         K-2098       Textile Items             B/C
6         L-3092       Crops Prot. Materials     B/C
7         K-9803       Glass Bars                B/C
8         F-9210       Paints/Chemicals          B/C
9         L-2801       Oils/Lubricants           B/C
10        F-2830       Chalking Material         C/P
11        C-3921       Spare Parts               C/P
12        R-3392       Stationary/Paper Logs     C/P
13        R-1292       Industrial Goods          C/P
14        P-8372       Dyes Pallets              C/P
15        K-3269       Large Mechanical Parts    C/P
16        R-3390       Pest Control Powder       C/P
17        P-9003       Household Equipment       C/P
18        R-3292       Alkaline Substances       Br
19        K-4940       Large Liquid Containers   Cyl
20        K-9805       Long Glass Screens        SP
the bulk from inside), Scenario 1B (creating the bulk from outside) and Scenario 2 (breaking the bulk) (in column 2), with their initiator SC unit in column 3. The first 15 cases are related to the case of creating the bulk, both from inside and outside, and the others reflect the scenario of breaking the bulk. This dataset is segregated into 8 divisions (as represented in column 4) with respect to working hours and details of shipment and delivery within the warehouse for each hour. For the sake of clarity and uniformity, only 4 triggers per hour are considered; there are usually 0–6 data transactions per hour depending upon the size of an enterprise [18, 19], so this is realistic. Every product is assigned a unique Product ID (as represented in column 5), which encapsulates all the details related to the characteristics of a particular product. In column 6, the quantity represented is a random number (in the range
Fig. 3 4-level storage of knowledge structures
1–100, as per available industrial data) of products being requested by any of the SC components from the network, e.g. ECU or CRM, which corresponds to a particular delivery or shipment, respectively (further examination of these random values is discussed later in this article). This dataset comes with combinations of multiple possibilities, such as delivery and shipment instances with varying product details (based on product category, characteristics, SKU, quantity and due date); hence, it is stored in four levels of data abstraction (4-level knowledge structures). Figure 3 explains the details of these knowledge
structures. These are contiguous and continuous logs of shipment and delivery details, where each log is related to a particular case. The AOSF framework stores these information logs as HashMaps, and refers to them as hash logs, as reflected in Fig. 3. In an instance of hash log storage, there could be n possible Advance Shipment and Delivery Notices (ASN/ADN). In all cases, the origin of the details is the product that needs to be dispositioned in a particular Stock Keeping Unit (SKU). For every product, the uniquely identifiable Product ID corresponds to a specific category, which could be hazardous products, unfinished products or brittle/fragile items. Products can also be distinguished based on their packing units (SKUs), such as Each per Box (E/B), Boxes per Case (B/C), Cases per Pallet (C/P), Barrels (Br), Cylinders (Cyl) or Single Pallets (SP). The categorisation of products applied to the AOSF framework and its associated AOSR algorithm is highlighted in the dataset represented in Table 2. As highlighted above, this data is taken as a test/example case, which can be modified as per business need. To apply the dataset to the AOSF and AOSR strategy, the products are classified as per four parameters: their SKU, Hazard category, Movement (slow or fast) and Finished or Unfinished. In order to provide comprehensive testing, six different types of SKUs (E/B, B/C, C/P, Br, Cyl, SP) and six different types of characteristics (binary values of the hazard, fast and finished classifications) are considered with 20 different classes/categories of products. The AOSF framework and the AOSR strategy both provide the flexibility to accommodate such variability of products. The details about how AOSR, with its 6-Feature strategy, provides the scalability for the same or different categorisation of products are included in our previous work [8].
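The paper only states that the hash logs are HashMaps spanning four levels of abstraction, so the concrete nesting shown below (hour, notice, product line) is an assumption made purely to illustrate one plausible shape of such a store; none of the names come from the AOSR code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical hash-log layout: working hour -> notice id (ASN/ADN)
// -> product lines, each line holding a product id, its SKU type and
// the requested quantity.
class HashLogStore {
    enum SkuType { E_B, B_C, C_P, BR, CYL, SP }

    static class ProductLine {
        final String productId;
        final SkuType sku;
        final int quantity;
        ProductLine(String productId, SkuType sku, int quantity) {
            this.productId = productId;
            this.sku = sku;
            this.quantity = quantity;
        }
    }

    // e.g. logs.get("1st").get("ASN-0001") -> the lines of that notice
    private final Map<String, Map<String, List<ProductLine>>> logs = new HashMap<>();

    void append(String hour, String noticeId, ProductLine line) {
        logs.computeIfAbsent(hour, h -> new HashMap<>())
            .computeIfAbsent(noticeId, n -> new ArrayList<>())
            .add(line);
    }
}
```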
4 Results and Discussion

All the aforementioned test cases are applied to the AOSF and AOSR strategy in comparison with a standard WMS strategy (explained in detail in our previous work [8]). In order to bring clarity to the results and to provide better recommendations, this research is constrained to three very important key performance indicators (KPIs):

1. the number of products stored in racks;
2. the number of products kept at the receiving area (RA); and
3. the number of products placed in expedition areas (EA).

Our work on visualising AOSF as a CPS [6] includes the validation of the AOSF/AOSR strategy with specific case-based test cases. In this article, we have included multiple random test cases applied to AOSR and to the Linear SC-based standard WMS strategy to reflect the inclination of both with respect to the aforementioned three performance metrics. Low performance in managing these three parameters results in basic WMS issues such as receiving area overloading, demarcation lines vanishing, manual re-slotting and wandering/lost items [20]. The literature has often mentioned persisting SC and
WMS issues, and the main reasons behind such problems are mostly the unmanaged receiving and expedition areas [21] and unmanaged storage capacity [22]. A higher number of products within the racks is usually considered as a performance metric for efficiency in warehousing [23]. The general applicability of a system can be validated by analysing its performance over a wide range of data. As explained in the previous section, a diverse case study with a broad scope helps indicate the performance efficiency of AOSF and AOSR strategy. In order to confirm its validity over a wide range of possible scenarios, this section describes several tests applied to the AOSF with the AOSR strategy in comparison with the standard approach. In contrast to the standard WMS strategy [22, 24], which provides centralised management of tasks such as tracking location and level of products in the racks using a single logic, AOSR utilises its hybrid logicbased strategy [8] as per the products’ characteristics to generate the placement plan. The dataset described in Sect. 3 has been modified with different random values for the quantity, to help ensure this generic case study is more widely applicable. Figure 4 represents the detail of the first 15 (out of 30) test cases applied to the subject strategies. For clearer visualisation, only the first 15 cases are displayed in the graph. A closer look at Fig. 4 demonstrates that there is a similar trend for a wider range of data as compared to specific case-based validation performed in our previous work [6]. Figure 4 demonstrates the number of products in all three areas (Racks, EA and RA). The products in racks are represented by blue bars, products in EA with orange and the products in RA are represented with red bars. The standard WMS strategy tends to balance between all the three aforementioned KPIs (number of products in racks, RA and EA), while for the AOSF recommended strategy, the main priority is to manage the maximum number of products in the racks. Figure 4a, on the left, shows that, out of 1080 products, at most around half of them are in racks and, from the remaining products, a major proportion are stored in EA. Furthermore, approximately one-quarter of the total products seem to be stuck in the receiving area.
Fig. 4 Performance results with random data: (a) Standard WMS Strategy; (b) AOSF with AOSR Strategy
This trend can be observed in all test cases. As previously stated, such a situation leads to mismanagement on the shopfloor and raises the concerns of unmatched stock counts, missing or wandering items and extended lead times in order processing. On the other hand, in Fig. 4b the major proportion of the total products, almost 80%, is maintained in the racks in almost all test cases. The AOSF/AOSR strategy is designed to prioritise the placement of as many products in the racks as possible by utilising its slotting and re-slotting strategy to make more space available within the racks for upcoming products (as detailed in [8]). This is why it succeeds in maintaining a very low number of products in RA: around 40–50 products as compared to 100–150 products when using the standard approach. There is also a marked difference in the number of products held in EA when using AOSF, with around 180–210 stored in EA as compared to 390–440 with the standard approach.

Figure 5 represents the average results of the 30 test cases used in this section. The standard WMS strategy, as already discussed for the detailed data value graph in Fig. 4, tends to maintain the balance between the number of products in all three sections of the shopfloor: racks, EA and RA. It is represented by the purple-shaded region in the graph. On average, 507 products are in racks, 444 are in EA and 127 are in RA. For AOSF, the focus is to maintain the products in racks, as can be seen by the deflection of the graph towards the 'Rack' corner. That means there are more data points towards the 'Rack' corner as compared to the others. On average, 814 products are stored in racks, 215 are placed in EA temporarily and only 52 are at RA. These numbers represent a considerable improvement in efficiency, as presented in Fig. 6.
Fig. 5 Performance inclination with random data
Fig. 6 Improved efficiency with the AOSF/AOSR strategy
very close to those obtained with the specific case-based test cases presented in [6], indicating that the case study presented and multiple random datasets are good representation of typical results. The consistency of performance while utilising AOSF recommended strategy speaks about its validity and broader applicability.
5 Conclusion and Future Work

In this article, a scenario-based comparison of xAOSF's three-tier architecture with the Linear SC model is discussed, to explain how the AOSF framework provides a seamless and robust flow of information. The test cases analysed the products stored in racks, in EA and in RA, each with respect to two different system states: State 0 (without conflicts) and State 1 (with possible conflicts). The time efficiency of the AOSR strategy in relation to the standard approaches is also discussed in our other work [10]. The results are extracted from different test cases applied to validate the AOSF and its recommended AOSR-WMS strategy in comparison with standard WMS strategies. The successful and positive results, from all the scenarios and test cases, highlight the overall performance efficiency of the AOSR algorithm in association with its parent AOSF framework.

The AOSR-WMS strategy is one part of the implementation of the AOSF framework; in future, there are still other open areas to work on, e.g. plant maintenance, transportation and other SC operational activities. The addition of other state-of-the-art features, i.e. IoT and Big Data, may also provide this system with more cognitive abilities in order to provide intelligence based on past data trends. Although most SMEs do not currently consider data as a source of added value [16], it could be a
valuable addition in the future. The AOSF framework presents a CPS-based provision for storing and maintaining historical data, for the purpose of predicting future trends, and provides the flexibility to incorporate data analytics in the future. The ideas contributed by Voss et al. in [25], related to incorporating Big Data analytics in logistics, can also become a part of this system to enhance it for future purposes. Similarly, handling tasks with the same priority can also be a valuable addition to the general contribution of the AOSF framework.
References

1. Xu, G., Dan, B., Zhang, X., Liu, C.: Coordinating a dual-channel supply chain with risk-averse under a two-way revenue sharing contract. Int. J. Product. Econ. 147, 171–179 (2014)
2. Industry in Germany: German Trade and Industry (GTAI). https://www.gtai.de/GTAI/Navigation/EN/Invest/industrie-4-0.html [Online; accessed 14 Nov 2017]
3. Sommer, L.: Industrial revolution industry 4.0: are German manufacturing SMEs the first victims of this revolution? J. Ind. Eng. Manag. 8(5), 1512 (2015)
4. Llonch, M., Bernardo, M., Presas, P.: A case study of a simultaneous integration in an SME: implementation process and cost analysis. Int. J. Qual. Reliab. Manag. 35(2), 319–334 (2018)
5. Din, F.U., Henskens, F., Paul, D., Wallis, M.: Agent-Oriented Smart Factory (AOSF): an MAS based framework for SMEs under Industry 4.0. In: KES International Symposium on Agent and Multi-Agent Systems: Technologies and Applications, pp. 44–54. Springer (2018)
6. Din, F.U., Henskens, F., Paul, D., Wallis, M.: Extended Agent-Oriented Smart Factory (xAOSF) framework as a conceptualised CPS with associated AOSR-WMS system, p. In Review (2019)
7. Din, F.U., Henskens, F., Paul, D., Wallis, M.: Formalisation of problem and domain definition for Agent Oriented Smart Factory (AOSF). In: 2018 IEEE Region Ten Symposium (Tensymp), pp. 265–270. IEEE (2019)
8. Din, F.U., Henskens, F., Paul, D., Wallis, M., Hashmi, M.A.: AOSR-WMS planner associated with AOSF framework for SMEs, under Industry 4.0, p. In Review (2019)
9. Din, F.U., Paul, D., Ryan, J., Henskens, F., Wallis, M.: AOSR 2.0: a novel approach and thorough validation of agent oriented storage and retrieval WMS planner for SMEs, under Industry 4.0, p. In Review (2019)
10. Din, F.U., Paul, D., Ryan, J., Henskens, F., Wallis, M.: Validating time efficiency of AOSR 2.0: a novel WMS planner algorithm for SMEs, under Industry 4.0. J. Softw. 1(3), p. Accepted (2020)
11. Java Agent Development Framework: JADE open source project: Java agent development environment framework. http://jade.tilab.com/ (2017). [Online; accessed 18-August-2017]
12. Preuveneers, D., Berbers, Y.: Modeling human actors in an intelligent automated warehouse. In: International Conference on Digital Human Modeling, pp. 285–294. Springer (2009)
13. Hofmann, E., Rüsch, M.: Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 89, 23–34 (2017)
14. Singh, P., Van Sinderen, M., Wieringa, R.: Smart logistics: an enterprise architecture perspective. In: CAiSE Forum, 29th CAiSE Conference, pp. 9–16 (2017)
15. Poklemba, R.: Mapping requirements and roadmap definition for introducing I 4.0 in SME environment. Adv. Manuf. Eng. Mater., p. 183 (2019)
16. Bi, Z., Cochran, D.: Big data analytics with applications. J. Manag. Analyt. 1(4), 249–265 (2014)
17. Becker, T., Wagner, D.: Identification of key machines in complex production networks. Procedia CIRP 41, 69–74 (2016)
18. DGI Global: Warehouse Products Classes. http://www.dgiglobal.com/classes (2018). [Online; available 25-Jul-2018]
19. EuroSped: Dataset information for warehouse and logistics. http://www.eurosped.bg/en/eurolog-warehouse-logistics-4pl/ (2018). [Online; available 25-Jul-2018]
20. Business2Community: Issues in warehouse management systems. https://www.business2community.com/product-management/top-5-warehouse-management-problems-solve-02027463 (2018). [Online; accessed 17-July-2018]
21. Richards, G.: Warehouse Management: a Complete Guide to Improving Efficiency and Minimizing Costs in the Modern Warehouse. Kogan Page Publishers (2017)
22. Lu, W., Giannikas, V., McFarlane, D., Hyde, J.: The Role of Distributed Intelligence in Warehouse Management Systems, pp. 63–77 (2014)
23. Golovatova, A., Jinshan, Z.: Optimization of Goods Incoming Process. Master's thesis, University of Boras, Sweden (2010)
24. Chen, J.C., Cheng, C.-H., Huang, P.B., Wang, K.-J., Huang, C.-J., Ting, T.-C.: Warehouse management with lean and RFID application: a case study. Int. J. Adv. Manuf. Technol. 69(1–4), 531–542 (2013)
25. Voss, S., Sebastian, H.-J., Pahl, J.: Intelligent decision support and big data for logistics and supply chain management: a biased view. In: 50th Hawaii International Conference on System Sciences, pp. 1338–1340 (2017)
Natural Language Agents in a Smart Environment

Renato Soic and Marin Vukovic
Abstract There have been significant achievements in the area of human–computer interaction using natural language (i.e., speech), applied in various domains. As a result, users can intuitively interact with various smart devices and systems. Unfortunately, the full extent of capabilities is almost exclusively related to the English language. This is expected, as most advanced solutions and achievements in the field come from tech giants originating from English-speaking countries. English is also considered the global lingua franca, so this is definitely the most viable language from the economic perspective. However, in the case of minority languages, such as Croatian, there are many challenges ahead in order to achieve comparable results. In this paper, we present a model which intends to enable spoken notifications in the Croatian language. The presented model defines cognitive and linguistic processes which enable generation and reproduction of understandable, human-friendly notifications in smart environments.
1 Introduction

Speech-enabled computer systems have become a common occurrence in the last several years. Their domain has extended from personal devices and specialized services to all types of smart human environments. As a result, there are many examples of employing speech interfaces in smart home systems, industrial facilities, smart vehicles, public services, etc. Regardless of the operational domain, spoken communication between human users and computer systems needs to address specific features related to the wider context of the target environment. This includes events
and conditions in the environment, as well as user activities, but also the related linguistic context [1]. The term spoken human–computer interaction usually refers to both directions in communication, meaning that a human user can interact with a computer by using spoken commands, while the computer can respond in a similar manner by using a synthesized voice. Here, a computer may be represented by a single device, but also by a distributed computer system (i.e., a smart environment). Bidirectional spoken interaction requires several essential components or building blocks. Speech recognition and natural language understanding components are required to enable the computer to register and understand the commands spoken by human users. On the other hand, speech synthesis and natural language generation components enable the computer to construct and reproduce messages adapted for human users. In both cases, it is essential to understand environmental and user context, which can be represented by an additional component in the system that relies on a real-time stream of information provided by external systems or devices.

In this paper, we describe a model of a distributed system using text-to-speech to notify human users about the state and events in the environment. The goal was to integrate research efforts from two research domains: speech technologies and the Internet of Things. The primary motivation was to provide support for the Croatian language in a real smart IoT environment. Unfortunately, the Croatian language is technologically seriously underdeveloped, with sparse resources available [2]. Therefore, this paper explains which steps and which resources are required to design a functional system. The described system is capable of operating in complex IoT environments consisting of numerous sensors and devices, while interacting with human users through a text-to-speech subsystem. The presented concept can be employed in different scenarios from various domains.

The next section describes the current state in research and development of speech-enabled systems and their applications in various domains. The third section provides an overview of the proposed agent-based system and the related natural language agents. Additionally, some important concepts related to context-awareness are explained. The fourth section explains how the natural language generation agent uses its employed services to generate textual notifications. In the fifth section, the speech synthesis agent and its corresponding services are described. The sixth section describes a case study of the proposed agent-based system operating in a smart home environment. Finally, a conclusion is given, with plans for future research and development.
2 Spoken Interaction in Smart Environments

From the human point of view, interaction with complex systems that constitute smart environments requires a multimodal approach which tends to be complicated. Regardless of the application domain, smart systems are mostly distributed, provide
different communication channels, and users need to conform to specific user interfaces. The most widely used, but also the simplest and most straightforward, method of interaction is a traditional graphical user interface, where a human user interacts with the system through a client application that provides insight into various system features. However, smart environments are still primarily human environments, and the ultimate goal should be to achieve the most intuitive way for the system to interact with human users. Speech has been recognized as the most natural and efficient method of communication for humans [3]. This requires that the system is capable of both receiving spoken requests and commands and providing spoken feedback to human users. These functionalities require speech recognition and speech synthesis capabilities, respectively. However, in the context of a smart system, speech recognition and speech synthesis components represent simple interaction interfaces, with no cognitive processing of users' requests and system feedback. Therefore, additional components are required which enable translation between natural language and the language used by the system (e.g., system events, commands, etc.). These are typically represented as the natural language understanding and natural language generation components.

Human–computer interfaces which enable spoken interaction between a human user and the computer system have been successfully applied in various domains, from smart home systems to industrial facilities. However, their capabilities are usually limited to the recognition of specific commands and the reproduction of predefined spoken notifications. The rise of Intelligent Personal Assistants (IPAs) improved the situation significantly. IPAs have evolved into systems capable of leading conversations with human users and understanding linguistic and semantic context. They have also provided options for integration with various devices and external systems, making the IPAs very flexible and extensible [4]. All prominent IPA platforms enable their users to develop new functionalities for the devices, thus creating a constantly growing ecosystem of services, but also introducing possible security and privacy threats [5]. Privacy is an ongoing issue in all notable IPA platforms, due to their dependency on cloud infrastructure, resulting in private conversations being recorded and analyzed [6]. Despite all the mentioned advantages, the IPA approach has not yet achieved full integration with smart IoT platforms, in the sense of being fully capable of processing all events in the given environment and understanding its context.

Recent advancements in speech recognition and natural language understanding have resulted in spoken interaction being introduced in smart home applications with customizable devices, thus expanding the system capabilities [7]. Furthermore, in addition to understanding spoken words, emotion recognition has been introduced with the purpose of detecting the user's mood [8]. These examples did not focus on speech synthesis, which would enable the IoT system to present basic information in spoken form back to the user. An example of a speech-enabled IoT system with a combination of cloud services for speech recognition and speech synthesis is described in [9]. To the best of our knowledge, a smart environment system enhanced with a speech interaction subsystem based on the Croatian language has not been developed yet.
3 Language Agents in a Smart Environment

When designing a system with the previously described goals, various design methodologies could be applied. The agent-based paradigm has proved suitable for the user-centric approach, providing flexibility when dealing with numerous sources of information and various output destinations in a distributed computer system. In our approach, software agents rely on a dedicated service ecosystem to perform complex cognitive tasks. Such an approach offers modeling on a higher level of abstraction, thus enabling a more intuitive decomposition of the underlying service ecosystem. An agent-based interaction model in a smart environment was presented in [10], with regard to different communication channels and interaction methods. This approach was further explored in [11], where a more detailed decomposition of software agents and generic services in a distributed smart industrial environment was described. In accordance with previous research, this decomposition has been applied to modeling natural language agents and the generic services on which they depend.

In the case of spoken interaction, the challenges are many but can be divided into two domains: environmental and linguistic. The environmental domain challenges are related to context-awareness and decision-making based on the current state in the environment. The linguistic domain encompasses operations related to the construction of human-friendly textual messages and their corresponding spoken notifications based on synthetic system events. In the scope of this paper, we focus on the linguistic domain, where an agent-based approach to natural language generation and speech synthesis is presented.

The natural language generation component or subsystem is tasked with generating textual, user-friendly notifications based on synthetic system events emerging in a smart environment. The speech synthesis component transforms the generated text into audio which can be reproduced on a physical device. These components or subsystems can be represented as software agents and their corresponding generic services, as shown in the more detailed decomposition in Fig. 1. Such an organization has several advantages. When dealing with potentially computationally intensive domains such as natural language processing, it is more effective to break a complex task into several isolated steps. This results in a better separation of concerns and positively impacts execution, as certain tasks can be executed in parallel. Another advantage of the presented approach is that any software agent or generic service can be deployed in the computer cloud, on a physical server, or on any node in a distributed environment. This is especially important in terms of privacy and security. Commercial IPAs rely exclusively on cloud services, thus constantly facing criticism because this approach jeopardizes users' private data. The proposed system could operate as a standalone instance, with all collected data remaining under users' control and ownership.
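To make this decomposition concrete, the sketch below shows how a language agent might delegate one linguistic step to a generic service over the network; because the agent and the service only share a network interface, either side can be deployed in the cloud or kept on a local node. The class name, the endpoint, and the JSON payload are illustrative assumptions, not the actual interfaces of the described system.

    import requests  # third-party HTTP client, assumed available

    class SpellcheckClient:
        """Hypothetical client-side stub for a spell-checker generic service."""

        def __init__(self, url):
            # The endpoint is configuration, not code: pointing it at a local
            # node instead of a cloud host keeps user data on-premise.
            self.url = url

        def correct(self, text):
            # Delegate the correction step to the remote generic service.
            response = requests.post(self.url, json={"text": text}, timeout=2.0)
            response.raise_for_status()
            return response.json()["text"]

    # The same agent code works against a cloud or a standalone deployment.
    spellcheck = SpellcheckClient("http://localhost:8000/spellcheck")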
Fig. 1 Natural language agents and their corresponding generic services
4 Generating Textual Notifications in the Croatian Language

Producing a spoken notification from an event which occurred in the environment requires a component capable of constructing textual content in a human language which human users can fully comprehend. Once the content is constructed, it can be reproduced as a spoken notification on an applicable physical device. There are several possible approaches to generating text notifications adapted for human users. They differ in terms of which input data is used as the knowledge source and the methods which are employed to construct the output text. Here we explored the template-based approach, which is relatively simple and the most convenient when dealing with the structured input data which typically occurs in a smart environment [12]. In this context, the template-based approach means that event data is extracted and organized into a raw textual representation which is then improved with the help of language processing services.

As displayed in Fig. 2, the natural language generation agent is invoked by the context resolution agent. It receives all system events (1) and maintains its knowledge base (2) so it can decide whether a user should be notified regarding a certain state or important events occurring in the environment. The NLG agent receives one or more system events which are then transformed into sequences of words by applying the template-based generation method (3). In this initial notification form, data extracted
Fig. 2 Natural language generation process
from system events is also in its original form, which means it will most probably need to be adjusted to fit properly into a user-friendly notification. These adaptations and corrections are performed by employing the N-gram service (4), the semantic (5) and morphological (6) lexicons, and the spell-checker service (7). When the user notification is completed, it is sent to the context resolution agent (8), which will finally pass it on to the speech synthesis agent.

The N-gram service represents an interface to the Croatian language model. N-grams are word sequences comprised of N words extracted from large text corpora. Therefore, an n-gram-based language model contains information about word transitions as they occur in real-life language usage. The n-gram model described here was built from an n-gram database derived from a large general-vocabulary text corpus collected by Hascheck, a Croatian spell-checker service. Currently, the N-gram service is based on a 3-gram system, which contains word sequences comprised of three words. However, Hascheck contains n-gram collections of different lengths, with 2

0.50 is not being fulfilled [31] (Table 1). The R-square measure is declared sufficient according to Chin (R-square = 0.203) [26]. The majority of the people who answered the survey work for companies located in Germany (98%). The other two percent are from Austria, the Netherlands, and the United Kingdom. During the implementation, it became apparent that the agricultural sector acts and exerts influence not so much nationally as Europe-wide. Therefore, the respondents who did not come from Germany could be retained in the survey. Figure 2 shows the number of employees working in the respondent companies.
Table 1 Results of the structural equation model

Hypothesis  SEM-path                   Path coefficient  t-value  p-value  CA     AVE    CR
H1          KPI → PDBM                 +0.263            2.403    0.008    0.658  0.356  0.773
H2          Individualization → PDBM   +0.034            0.295    0.384    0.571  0.327  0.681
H3          Efficiency → PDBM          +0.263            2.184    0.015    0.552  0.412  0.729
H4          Communication → PDBM       +0.013            0.133    0.447    0.668  0.426  0.784
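The reliability columns in Table 1 follow standard conventions: Cronbach's alpha (CA), average variance extracted (AVE), and composite reliability (CR), the latter two computed from standardized indicator loadings. The short sketch below shows the textbook formulas; the loading values are made-up placeholders, not the study's data.

    def ave(loadings):
        # Average variance extracted: mean of the squared standardized loadings.
        return sum(l * l for l in loadings) / len(loadings)

    def composite_reliability(loadings):
        # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
        # where the error variance of a standardized indicator is 1 - loading^2.
        s = sum(loadings)
        error = sum(1.0 - l * l for l in loadings)
        return (s * s) / (s * s + error)

    loadings = [0.71, 0.55, 0.48]  # placeholder loadings for one construct
    print(round(ave(loadings), 3), round(composite_reliability(loadings), 3))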
Fig. 2 Number of employees in percentage
Fig. 3 Digitization awareness
Most of the companies (75%) employ 50 people or fewer. About eight percent of the respondents work in a medium-sized company with 51–250 employees. Around 17% work in a larger company with more than 250 employees. The digitization awareness of the participants is depicted in Fig. 3. Less than 8% have been dealing with digitization for under one year. About a quarter of the experts have been working on the topic for between one and three years. Most of the answering experts (66%) have been addressing the topic for more than three years.
5 Discussion and Limitations

While previous research found that all four determinants have a significant influence on the potentials of digital business models, this study can only confirm the determinants KPI and Efficiency. The determinants Individualization and Communication show no significant influence on the potentials of digital business models and their processes. International literature confirms that potential lies in individualization and communication; it could be due to the state of the development process in agriculture that these potentials cannot be fully exploited yet. The fact that the results differ from the cross-industry study shows that it is important for decision-makers to allocate resources optimally. Engineering, producing, and advertising new products, or developing new business models in the fields of the rejected determinants, may result in a competitive disadvantage. On the other hand, managers or entrepreneurs who focus on and push digital technologies as well as business models centered on Efficiency and KPIs can generate value for their business.

This paper is subject to certain limitations arising from an empirical investigation in the form of quantitative research. The data sets used for the investigation are limited to n = 119 answers from experts in the farming industry. Also, the online implementation and the German-language questionnaire may exclude certain groups of people. Regional restriction in studies of the agricultural sector is a problem not only due to certain regional
mentalities. Agricultural crop growing is highly dependent on external factors, e.g., weather and soil, which can vary strongly between regions. Further studies in the field of digital business models in the agricultural sector are conceivable if the parameters are fitted to the different farm types, such as crop-growing or animal-rearing farms. Replicating this study for livestock specialists may produce other results than the focus on crop farming did. Farmers who run a combination of crop and livestock farming may have different focal points. To improve response rates for future surveys in the agricultural sector, the researchers suggest scheduling the survey during the winter season, since the agricultural workload is highly seasonal.
6 Conclusion

In this research project, the potentials of digital business models and their processes in the agricultural sector were investigated. The four constructs Key Performance Indicators, Efficiency, Individualization, and Communication were derived from intensive literature research, and their theoretical influences were examined in a statistical analysis using structural equation modeling. The conducted study confirms a significant influence of the determinants Key Performance Indicators and Efficiency. The constructs Individualization and Communication could not be identified as significant in this research study. The study shows that the two determinants, Efficiency and Key Performance Indicators, have a positive and significant influence on the potentials of digital business models for the European agricultural sector. On the one hand, these potentials can help to minimize crop failures and thus avoid deficiency payments. On the other hand, costs could be saved through the efficient and sustainable use of resources.
References

1. Keen, P., Williams, R.: Value architecture for digital business: beyond the business model. MIS Q. 37(2), 643–647 (2013)
2. ITU: International Telecommunication Union, https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx. Accessed 21 Nov 2019
3. ARD/ZDF (2019a): Entwicklung der Onlinenutzung in Deutschland 1997 bis 2018, http://www.ard-zdf-onlinestudie.de/onlinenutzung/entwicklung-der-onlinenutzung/. Accessed 23 Sept 2018
4. Härting, R.C., Reichstein, C., Lämmle, P., Sprengel, A.: Potentials of digital business models in the retail industry—empirical results from European experts. In: Proceedings of KES Conference 2019, vol. 159, pp. 1053–1062. Elsevier (2019)
5. Bitkom Research: Social Media: Für 9 von 10 Internetbenutzern längst Alltag, https://www.bitkom.org/sites/default/files/pdf/Presse/Anhaenge-an-PIs/2018/180227-Bitkom-PK-Charts-Social-Media-Trends-2.pdf. Accessed 23 Sept 2018
6. Forbes: The World's Largest Public Companies, https://www.forbes.com/global2000/list/#header:marketValue_sortreverse:true. Accessed 23 Sept 2018
7. Kaim, R., Härting, R.C., Reichstein, C.: Benefits of agile project management in an environment of increasing complexity—a transaction cost analysis. In: Intelligent Decision Technologies 2019, pp. 195–204. Springer (2019)
8. Martin-Retortillo, M., Pinilla, V.: Why did agricultural labor productivity not converge in Europe from 1950 to 2006? In: Proceedings of the 2012 Economic History Society Annual Conference, University of Oxford, UK (2012)
9. Duncan, M., Harshbarger, E.: Agricultural productivity: trends and implications for the future. Economic Review, pp. 1–12 (1979)
10. Kaloxylos, A., Eigenmann, R., Teye, F., Politopoulou, Z., Wolfert, S., Shrank, C., Dillinger, M., Lampropoulou, I., Antoniou, E., Pesonen, L., Huether, N., Floerchinger, T., Alonistioti, N., Kormentzas, G.: Farm management systems and the Future Internet era. Comput. Electron. Agric. 89, 120–144 (2012)
11. EFSA: European Food Safety Authority: Bee health, https://www.efsa.europa.eu/en/topics/topic/bee-health. Accessed 23 Sept 2018
12. BMEL: Pflanzenschutzmittel, https://www.bmel.de/DE/Landwirtschaft/Pflanzenbau/Pflanzenschutz/_Texte/DossierPflanzenschutzmittel.html;jsessionid=45AEFD1E109084BA445774C764075283.2_cid385?nn=1853720&notFirst=true&docId=5305986. Accessed 28 Sept 2018
13. ECHA: European Chemicals Agency: Glyphosate not classified as a carcinogen by ECHA, https://echa.europa.eu/de/-/glyphosate-not-classified-as-a-carcinogen-by-echa. Accessed 23 Sept 2018
14. European Commission (2019a): Nitrat im Grundwasser: Kommission mahnt Deutschland zur Umsetzung des EuGH-Urteils, https://ec.europa.eu/germany/news/20190725-nitrat_de. Accessed 23 Sept 2018
15. European Commission (2019b): Dürre in Europa: EU-Staaten geben grünes Licht für Hilfen für Landwirte, https://ec.europa.eu/germany/news/20190928-hilfen-f%C3%BCr-landwirte_de. Accessed 23 Sept 2018
16. Norousi, R., Bauer, J., Härting, R., Reichstein, C.: A comparison of predictive analytics solutions on Hadoop. In: Proceedings of KES-IDT 2017—Part II, vol. 73, pp. 157–168. Springer (2017)
17. Dobermann, A., Blackmore, B., Simon, C., Vlacheslav, A.: Precision farming: challenges and future directions. In: Proceedings of the 4th International Crop Science Congress, Brisbane, Australia (2014)
18. Balafoutis, A., Beck, B., Fountas, S., van der Vangeyte, J., Waal, T., Soto-Embodas, I., Gómez-Barbero, M., Barnes, A.P., Eory, V.: Precision agriculture technologies positively contributing to GHG emissions mitigation, farm productivity and economics. Sustainability 9, 1–28 (2017)
19. Härting, R.C., Reichstein, C., Schad, M.: Potentials of digital business models—empirical investigation of data driven impacts in industry. Procedia Comput. Sci. 126, 1495–1506 (2018)
20. LimeSurvey: Marktforschung—Das Online-Umfrage-Tool für die Marktforschung, https://www.limesurvey.org/de/beispiele/marktforschung. Accessed 17 Jan 2020
21. Hüther, M.: Digitalisation: an engine for structural change—a challenge for economic policy. IW policy paper 15/2016, Cologne (2016)
22. Daurer, S., Molitor, D., Spann, M.: Digitalisierung und Konvergenz von Online- und Offline-Welt: Einfluss der mobilen Internetsuche auf das Kaufverhalten. Zeitschrift für Betriebswirtschaft 82, 3–23. Springer Gabler (2012)
23. Kilimann, O.: Zahlungsverkehr in Zeiten des Wandels und der Digitalisierung. Zeitschrift für das gesamte Kreditwesen 19/2017, 955 (2017)
24. Mikalef, M., Pateli, A.: Information technology-enabled dynamic capabilities and their indirect effect on competitive performance: findings from PLS-SEM and fsQCA. J. Bus. Res. 70, 1–16 (2017)
25. Loebbecke, C., Picot, A.: Reflection on societal and business model transformation arising from digitization and big data analytics: a research agenda. J. Strategic Inf. Syst. 24, 149–157 (2015)
26. Homburg, C., Baumgartner, H.: Beurteilung von Kausalmodellen. Bestandsaufnahme und Anwendungsempfehlungen. MAR 17(3), 162–176 (1995)
27. Weiber, R., Mühlhaus, D.: Strukturgleichungsmodellierung: Eine anwendungsorientierte Einführung in die Kausalanalyse mit Hilfe von AMOS, SmartPLS und SPSS. Springer (2014)
28. Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M., Richter, N.F., Hauff, S.: Partial Least Squares Strukturgleichungsmodellierung: Eine anwendungsorientierte Einführung. Vahlen (2017)
29. Cho, E., Kim, S.: Cronbach's coefficient alpha: well known but poorly understood. Organ. Res. Methods 18(2), 207–230 (2015)
30. Cho, E.: Making reliability reliable: a systematic approach to reliability coefficients. Organ. Res. Methods 19(4), 651–682 (2016)
31. Bagozzi, P., Yi, Y.: On the evaluation of structural equation models. Acad. Market. Sci. 16(1), 74–94 (1988)
Agent-Based Approach for User-Centric Smart Environments

Katarina Mandaric, Pavle Skocir, and Gordan Jezic
Abstract Internet of Things (IoT) solutions are becoming irreplaceable in various application domains. IoT enables control over many systems in a smart environment, such as the heating, ventilation and air conditioning (HVAC) system, lighting, and appliances in a smart home or office. By enhancing IoT solutions with a cognitive capability it becomes possible to, for example, adjust ambient conditions according to user preferences without the need for direct user intervention. This functionality constitutes a forthcoming phase in IoT evolution—Cognitive IoT. In this paper, we propose an agent-based smart environment system and compare it to a centralized implementation. In both approaches, feed-forward artificial neural networks are trained under supervision and used to adjust the lighting conditions to the specific user. The agent-based approach offers better preference prediction precision, as each user is supported by one agent with a neural network specialized only for his preferences, as opposed to the centralized approach where all user preferences are predicted by one neural network. Additionally, the agent-based approach enables easier addition of new users.
1 Introduction

Internet of Things (IoT) has been around for a long period and, as pointed out by Atzori et al. [2], has significant impact on the way we live. As for the phrase, it is believed that Kevin Ashton first used it in one of his presentations in 1999 [1].
IoT offers many opportunities for improvements in many fields, for example, in the development of smart homes, smart environments, smart buildings, and smart cities. In smart homes, this includes connecting appliances and adjusting ambient conditions, including the lights and the HVAC system. Smart security systems are applied to smart homes, smart buildings, and other smart environments such as offices and laboratories. In smart cities, it includes smart energy and water distribution, smart surveillance, smart traffic control, effective garbage collection and management, smart street lighting, smart parking, environmental monitoring, etc.

IoT contributes to many advancements in the fields of industry, manufacturing, agriculture, energy savings, etc. Some use cases in agriculture are, as pointed out by Perwej et al. [11], monitoring of climate conditions, autonomous tractors, precision farming, greenhouse automation, agricultural drones, crop management, livestock monitoring, and end-to-end farm management systems. This will be very important as climate change progresses.

Wearable sensors also offer many possibilities. As the lifespan and the number of older people rise, older people should have the possibility to live alone and not be forced into a retirement home out of precaution. By combining wearable sensors and fixed sensors installed in their homes, older people are under constant supervision, and actions can be taken if the need arises, e.g., when wearable sensors read the body position as lying down but fixed sensors show that the user is in the shower or in the kitchen. Wearable sensors are also used in sports to monitor the performance of athletes, their statistics, and their progress.

IoT solutions are becoming irreplaceable and very important in the mentioned and various other application domains. A lot of data is acquired through these things and a lot of functionalities are made available. For example, it is possible to control appliances from the comfort of your sofa using just your smartphone. But users want more: they want the system to adjust to them automatically, without the need for their interference. The step up from IoT is the Cognitive Internet of Things (CIoT). In this new paradigm, IoT is enhanced by comprehensive cognitive capability. As Wu et al. [13] point out, by adding high-level intelligence to IoT, it is possible to fulfill its potential. They also assert the advantages of CIoT with these examples: saving people's time and effort, increasing resource efficiency, and enhancing service provisioning. More specific examples include various home automation scenarios, such as ambient adjustments in the smart home based on the user's emotion [4], or, if a user falls asleep while watching TV, turning off the TV and lights upon detecting that he is asleep. In agriculture, based on current sensor readings, the weather forecast, and the stage of plant growth, water supply and fertilizer can be dosed.

Many researchers, such as Wu et al. [13] and Jamnal and Liu [8], agree that many different techniques from different areas are included in the development of CIoT, such as machine learning, artificial intelligence, context-aware computing, cyber-physical systems, pattern recognition, database technology, big data analytics, etc.
The definition of CIoT by Haykin [7] states that interconnected things behave as agents in CIoT, so it is logical to investigate agent-based approaches when working in the CIoT domain: “Cognitive Internet of Things (CIoT) is a new network paradigm, where (physical/virtual) things or objects are interconnected and behave as agents,
with minimum human intervention, the things interact with each other following a context-aware perception-action cycle, use the methodology of understanding-by-building to learn from both the physical environment and social networks, store the learned semantic and/or knowledge in kinds of databases, and adapt themselves to changes or uncertainties via resource-efficient decision-making mechanisms, with two primary objectives in mind: (1) bridging the physical world (with objects, resources, etc.) and the social world (with human demand, social behavior, etc.), together with themselves to form an intelligent physical-cyber-social (iPCS) system; (2) enabling smart resource allocation, automatic network operation, and intelligent service provisioning.”

The rest of the paper is organized as follows. In Sect. 2, an overview of machine learning techniques in CIoT and agents is given. In Sect. 3, the existing approach with a centralized system and the improvement that the agent-based system offers are presented. In Sect. 4, our use case is described with the proposed idea for the agent-based solution. Section 5 concludes the paper and outlines the future work.
2 Related Work

Different machine learning techniques have been used in agent-based approaches and CIoT. Arguments about which learning technique is most appropriate have been discussed, especially by Busoniu et al. [3]. Many papers chose reinforcement learning over supervised and unsupervised learning, pointing it out as the best fit for agent-based systems, as described below.

Czarnowski and Jedrzejowicz [5] talk about the developments in the last decade in both fields. They state that researchers from each field realized opportunities arising from each other's solutions, while also pointing out that learning, whether reinforcement, supervised, or unsupervised, and its incorporation with agents produced many valuable applications. Therefore, learning is becoming a key ability of agents. In his own paper, where he reviewed several research results integrating agent technologies in machine learning, Jedrzejowicz [9] emphasizes that the integration of the two technologies, machine learning and agent technologies, provides significant advantages in both fields.

Busoniu et al. [3] discussed multi-agent reinforcement learning (MARL), its benefits and challenges, and also reviewed MARL algorithms for fully cooperative, fully competitive, and more general tasks. With reinforcement learning, the agent learns by interacting with its surroundings. They point out that, as opposed to the supervised learning that we used, where the agent is given the correct solutions to the posed problem, reinforcement learning feedback is less informative, but it is more informative than in unsupervised learning, where no explicit feedback is available. Some benefits of MARL include experience sharing, which results in faster learning of similar tasks and better performance, parallel computation, which realizes better speed,
its robustness, and scalability. One of the more prominent challenges in MARL is specifying the agent's goal.

Khalil et al. [10] further discuss the impracticality of using supervised learning methods for multi-agent systems, but point out that there are works that do use supervised learning successfully. They state that they found that reinforcement learning complements the agent paradigm the most and is the most used for learning agents.

Spychalski and Arendt [12] used another machine learning algorithm, based on associative arrays. They state, supported by performance measurements, that this algorithm is less complex and more efficient than artificial neural networks and Bayesian networks. While ANNs and Bayesian networks have to use floating-point numbers, agents communicate using string messages, which complicates their implementation; associative arrays, on the other hand, do not constrain the input values, i.e., they can operate on string values.

Hayashi et al. [6] used yet another machine learning technique in their aim to improve behavior prediction accuracy by combining agent-based simulation and machine learning in the case of customer churning. They point out that agents are used to obtain a better accuracy level, as not all users have the same behavior. They used a machine learning technique called C4.5, which predicts based on a classification tree.
3 Agent-Based Model

Different implementations feature various approaches. Discussed below are the characteristics of both the centralized and the agent-based approach. These approaches are compared with the intention of highlighting the advantages of one over the other.
3.1 Centralized Approach

In most of the existing commercial implementations, users can set up just one possible setting input based on time and/or sensor readings, or the users are presented with one entity that handles them all, which can be slow and have low accuracy. This affects the users' quality of experience negatively and leads to users' dissatisfaction and their rejection of such systems. In a smart environment, many actions can be automated, such as the HVAC system, the lighting system, various appliances, the multimedia system, and access control. This is displayed in Fig. 1 on the example of a smart home and office. For example, when a user wakes up, he may want his favorite music to play, the light at a low level in blue color, and the coffee machine to start; or he may want the blinds closed, certain lighting fixtures on, and the audio level of the TV set when watching a movie. In a smart office, when the temperature falls under a certain limit, the heating should turn on, or when there is not sufficient natural light from outside, the lights should turn on to a certain level. This turns out to be problematic because of the different preferences of multiple users.
Fig. 1 An example of potentially automated systems in a smart home or office
For all of the users together, one Artificial Neural Network (ANN) can be used to predict the best ambient adjustment based on the preferences the ANN was trained on. There are certain drawbacks to this approach. The ANN cannot predict for two or more users at the same time, so a certain delay is present as the ANN predicts for one user after another. Also, if most users have similar preferences and one user has different preferences, it will be harder for the ANN to learn such an “anomaly.” Adding new users can also be problematic. Furthermore, with this approach, another ANN or a different calculation must be used to compute the optimal ambient adjustment that pleases all users. Also, the question of whether to use one ANN for all the preferences, e.g., HVAC and lighting, or one ANN for each ambient component is challenging, considering the number of users and preferences. The further improvement made regarding this question is described below.
3.2 Agent-Based Approach

In this approach, with an agent-based foundation, each user would have their own agent with its own intelligence, i.e., its own neural network that learns only that user's preferences, as shown in Fig. 2. In it, examples of the temperature, lighting, washing machine start time, and coffee machine start time preferences of each user are shown. The figure also shows that the agents of present users negotiate about the optimal settings when several users with different preferences are present (visible between agents 1, 2, 3, and 4, but all present users' agents negotiate with each other).

An artificial neural network per user would ensure better predictions based on the user's past preferences, as the ANN does not have to learn all users' preferences. It would also ensure less learning time when a new preference is recorded in the system, because there is less data to be learned in fewer epochs. As to user separation, the input of the ANN is slimmed down, as there is no need for a user identifier to separate users from one another, which was needed in the previous approach. This also contributes to less learning time and fewer epochs needed. Moreover, this leads to simpler adding of new users than in the centralized approach with one ANN.
Fig. 2 Scheme showcasing an Agent-Based system with various users’ preferences
This way, there would be no need for a central calculation model for the optimal ambient adjustment when several users are present in the smart environment, as the agents take over the negotiation process. If two or more users are present with different preferences regarding, for example, heating, their agents first calculate their preferences and then start the negotiation process with the other agents. In the negotiation process, they have to take into account the extent of the differences between their users' preferences.
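One simple way such a negotiation could be realized is sketched below: each agent reports its predicted value and a compromise factor, and the agreed setting is a weighted average in which less flexible users weigh more. The function name and the averaging rule are illustrative assumptions; the paper does not prescribe a concrete protocol at this level of detail.

    def negotiate(preferences):
        # `preferences` is a list of (preferred_value, compromise_factor)
        # pairs, one per present user's agent; a higher compromise factor
        # means the user is more flexible. The agreed value weights each
        # preference by how inflexible its owner is (inverse factor).
        weights = [1.0 / max(factor, 1e-6) for _, factor in preferences]
        total = sum(weights)
        return sum(value * w for (value, _), w in zip(preferences, weights)) / total

    # Two agents: one fairly flexible (factor 2.0), one rigid (factor 0.5).
    print(negotiate([(21.0, 2.0), (25.0, 0.5)]))  # 24.2, closer to the rigid user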
4 Use Case: Smart Lighting

In our previous work we developed a system for smart lighting that adjusts the room lighting intensity and color to users' preferences. The system was implemented at the Internet of Things Laboratory at the Faculty of Electrical Engineering and Computing at the University of Zagreb. As the user unlocks the smart lock with his ID, the system adjusts the lighting based on which user entered, the current time of day, and the natural light brightness.
The time of day was divided into seven periods, in accordance with the varying times of daylight during the year and the different seasons. Special attention was given to the times of day when daylight varies most during the year. As for the luminosity sensor, it is connected to a Waspmote device which sends the luminosity data, among other sensor data, to a Raspberry Pi which serves as a gateway. The data is reachable through openHAB, an open-source automation system for smart environments. The sensor values are expressed in the lux unit of measurement.

A feed-forward Artificial Neural Network (ANN) is trained with the backpropagation algorithm on user preferences. The system also includes a model for multiple-user calculation that aims to achieve optimal lighting for all present users who, unavoidably, mostly do have different preferences. After many tests of different architectures and numbers of neurons in each layer, an ANN that consists of three hidden layers was chosen.

For this phase of the research, users could choose between seven colors (red, orange, yellow, white, green, blue, purple). Three Philips Hue LED white and color bulbs and one Philips Hue Go were used for the lighting of the room. Hue offers an astounding 16 million colors to choose from, and it is planned in our future work to let the user have unlimited access to all these colors. As for the intensity of light, it was divided into seven levels, one (1) marking turned-off lights and seven (7) marking the brightest setting. To summarize, the inputs of the ANN are the user ID, the time of day, and the luminosity sensor value, in the form of a three-hot vector. The outputs are the preferred lighting intensity and color of light. Because the ANN inputs are encoded as a three-hot vector, it is harder to add new users, as the number of inputs has to increase when adding new users. A partial solution was to have privileged users with their preferences and one additional user profile for all other, unprivileged users, which features generic lighting settings. In this profile, the lighting intensity setting complements the values read by the luminosity sensor, and the light color is white.

Twelve different preference settings were collected from twenty-two users; every user had different scenarios, i.e., no two users had the same twelve combinations of time period and natural light level. Out of the twelve collected preferences per user, ten were used for the training of the ANN and two for the testing. The collected data for training amounts to 220 inputs on which the neural network was trained. The training accuracy achieved after 19 epochs is 90.84%. For the testing, 44 preference inputs were used, two per user, and the testing accuracy achieved is 88.64%. The model for multiple-user calculation takes into consideration the number of users present and the quantitative difference in their individual preferences in order to please everyone present in the room. This way, all the users' preferences are trained on and predicted by one neural network, which in certain situations can be problematic.
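As an illustration, a centralized network of the kind described above could be set up as follows. The paper does not state the framework, the hidden-layer widths, or how the lux readings are discretized, so those details are assumptions; the sketch only mirrors the described shape: a three-hot input (one-hot user ID, one-hot time period, one-hot light level), three hidden layers, and two categorical outputs.

    import tensorflow as tf
    from tensorflow.keras import layers

    NUM_USERS, NUM_PERIODS = 22, 7     # from the paper
    NUM_LIGHT_LEVELS = 7               # assumed discretization of lux readings
    NUM_BRIGHTNESS, NUM_COLORS = 7, 7  # seven intensity levels, seven colors

    # Three-hot input: concatenated one-hot encodings of user, period, light level.
    inputs = layers.Input(shape=(NUM_USERS + NUM_PERIODS + NUM_LIGHT_LEVELS,))
    x = layers.Dense(64, activation="relu")(inputs)  # hidden widths are assumptions
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)       # three hidden layers in total
    brightness = layers.Dense(NUM_BRIGHTNESS, activation="softmax", name="brightness")(x)
    color = layers.Dense(NUM_COLORS, activation="softmax", name="color")(x)

    model = tf.keras.Model(inputs, [brightness, color])
    model.compile(optimizer="adam",
                  loss={"brightness": "categorical_crossentropy",
                        "color": "categorical_crossentropy"},
                  metrics=["accuracy"])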
4.1 Agent-Based

Following the agent-based improvements outlined above, we propose an agent-based version of our developed system. So far, the new, user-specialized ANNs have been tested; the results are discussed below, together with other proposals regarding the agent-based approach.

With the agent-based foundation, one big ANN that is used to predict the preferences for all users is replaced with one network per user/agent. This way, the ANN does not need the user ID input variable, which is shown in Fig. 3 in the comparison of the ANNs used in the centralized and agent-based approaches. This enables the easier introduction of new users compared to the centralized approach. Also, this way, one user's preferences will not affect another user's preferences. Furthermore, as there are now two inputs, the current natural light level and the period of day, and the same two outputs as before, the wanted brightness level and color of light, the developed network is very different from before. A shallow neural network is used, with one input, one hidden, and one output layer. The hidden layer has many more neurons than the input and output layers. This architecture performed best among the various architectures and activation functions tested. The activation functions used are the ReLU (rectified linear unit) function for the hidden layer and the sigmoid function for the output layer.

Furthermore, negotiation between agents is included. With a compromise factor included through negotiations, users can decide whether they are more flexible or not, in order to achieve the optimum lighting settings. For example, if two users with the same compromise factor are present but have different color preferences, as in one user wanting a cold color while the other wants a warm color, the color setting will be white. When two or more users have color preferences in the same spectrum (cold or warm), the color decision will not be white, but will be defined by the number of users who want which color. If three users are present with the color preferences red, orange, and yellow, the light color will be the middle option, orange.

It is very important to note that although all users' networks do have the same architecture, the accuracy levels achieved are not the same. Some users have more intuitive preferences, while others have many color and brightness changes which do not seem as intuitive and constant. For this reason, the learning of the neural networks does not last the same number of epochs. The learning is stopped when over-fitting starts, i.e., when the validation loss starts to rise. The achieved accuracies for users' preferences ranged from 82% to 100%. For the learning set, ten out of twelve user preferences were used, while two were used for the testing of the developed neural network, like in the previous approach with one ANN.

Focusing on the real usage of this system, where users will not necessarily input their preferences beforehand but input their preferences as they wish while staying in the room, the number of preferences per user will not be the same, and that is another reason not to have a set number of training epochs. Users, of course, can change their preferences: if they input another light brightness setting and/or light
Fig. 3 Comparison of ANN in centralized (left) and agent-based (right) approach
color for the same scenario, i.e., natural brightness level and period of day, the older preference is overwritten by the new preference.

Furthering the suggested solution, when more users are present, their agents will negotiate about the optimal lighting setting, unlike in the previous approach, where one central model calculated the optimal lighting setting from all the users' preferences. Of course, whenever the period of day or the sensor reading of natural light changes, the process must be repeated. Also, when another user enters the room, after his agent acquires its own preference prediction, the negotiation process starts again.
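A per-user network of the shape described here could look like the following sketch. The framework, the hidden-layer width, the loss, and the early-stopping patience are assumptions; the paper fixes only the overall shape (two inputs, one wide ReLU hidden layer, two sigmoid outputs) and the rule that training stops when the validation loss starts to rise.

    import tensorflow as tf
    from tensorflow.keras import layers, callbacks

    def build_user_network(hidden_units=64):
        # Shallow per-user network: 2 inputs (natural light level, period of
        # day), one wide ReLU hidden layer, and 2 sigmoid outputs (brightness
        # and color, both rescaled to [0, 1]). Width and loss are assumed.
        model = tf.keras.Sequential([
            layers.Dense(hidden_units, activation="relu", input_shape=(2,)),
            layers.Dense(2, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    # Stop training once the validation loss starts to rise, as described;
    # the patience value is an assumption.
    early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                         restore_best_weights=True)
    # model.fit(x, y, validation_split=0.2, epochs=200, callbacks=[early_stop])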
5 Conclusion and Future Work

The main focus of this paper was to propose and discuss an agent-based approach for a smart lighting system. It is an example of integrating agent technologies and machine learning algorithms in the field of Cognitive Internet of Things. By switching from a totally centralized system, where one artificial neural network was in charge
of learning all users' preferences, to an agent-based system, where each user is hosted by one personal neural network, faster prediction and learning of new preference settings and easier addition of new users into the system were achieved. Also, better accuracy rates are achieved, which leads to better user satisfaction.

In future work, besides the full realization and testing of this agent-based system, plans are made for the smart environment to include more actions, such as HVAC system control, which will also be implemented using agent technologies and machine learning. Future work will also be directed at improving the machine learning aspect of this system, and more approaches and techniques will be studied in order to further improve the system. At the same time, ways will be studied to implement the system within a smart home, where users do not want the commitment of registering themselves when entering different rooms within the house; this cannot be solved with simple motion detection sensors if several people with different preferences live in the house. One other major step will be to integrate users' activities into the system, so that the lighting will match their activities. For example, the user does not want and/or need the same lighting when watching TV, reading a book, or preparing dinner.

Acknowledgements This work has been supported in part by Croatian Science Foundation under the project IP-2019-04-1986 (IoT4us: Human-centric smart services in interoperable and decentralized IoT environments).
References

1. Ashton, K.: That ’Internet of Things’ Thing. RFID J. (2009)
2. Atzori, L., Iera, A., Morabito, G.: The Internet of Things: A Survey. Comput. Netw., 2787–2805 (2010)
3. Busoniu, L., Babuska, R., De Schutter, B.: Multi-agent Reinforcement Learning: An Overview. Studies Comput. Intell. 310, 183–221 (2010)
4. Cupkova, D., Kajti, E., Mocnej, J., Papcun, P., Koziorek, J., Zolotov, I.: Intelligent Human-Centric Lighting for Mental Wellbeing Improvement. Int. J. Distrib. Sensor Netw. 15 (2019)
5. Czarnowski, I., Jedrzejowicz, P.: Machine Learning and Multiagent Systems as Interrelated Technologies, pp. 1–28 (2013)
6. Hayashi, S., Martono, N., Kanamori, K., Ohwada, H.: Improving Behavior Prediction Accuracy by Using Machine Learning for Agent Based Simulation 9621 (2016)
7. Haykin, S.: Cognitive Dynamic Systems: Perception-Action Cycle, Radar and Radio (2012)
8. Jamnal, G., Liu, X.: A Cognitive-IoE Approach to Ambient-Intelligent Smart Home (2017)
9. Jedrzejowicz, P.: Machine Learning and Agents, pp. 2–15 (2011)
10. Khalil, K., Abdelaziz, M., Nazmy, T., M. Salem, A.B.: Machine Learning Algorithms for Multi-Agent Systems, pp. 1–5 (2015)
11. Perwej, D.Y., Haq, K., Parwej, D.F., M., M.: The Internet of Things (IoT) and its Application Domains. Int. J. Comput. Appl. 182, 36–49 (2019)
12. Spychalski, P., Arendt, R.: Machine Learning in Multi-Agent Systems using Associative Arrays. Parallel Comput. 75 (2018)
13. Wu, Q., Ding, G., Xu, Y., Feng, S., Du, Z., Wang, J., Long, K.: Cognitive Internet of Things: A New Paradigm Beyond Connection. IEEE Int. Things J. 1(2), 129–143 (2014)
Providing Efficient Redundancy to an Evacuation Support System Using Remote Procedure Calls

Itsuki Tago, Kota Konishi, Munehiro Takimoto, and Yasushi Kambayashi
Abstract This paper discusses the use of micro-server technology and mobile agent technology to migrate a database server when the server machine has a problem and is unable to host the database server. We have been developing an evacuation support system. The previous system had a single point of failure: if one of the server machines failed, the entire system failed. In order to mitigate this problem, the Zabbix system monitoring software triggered the server software to migrate to a spare server machine as a mobile agent. This achieves system redundancy without any cloud technology. Zabbix, however, takes five minutes to detect a system failure. Users of the evacuation support system, who are in a hurry to get to evacuation shelters, need real-time information about evacuation routes. They need a good recommender system to move toward safe places. For systems that need to provide real-time information, it is a serious problem that the system needs five minutes to detect a failure. Therefore, this paper proposes a new surveillance system which employs the Remote Procedure Call (RPC) instead of Zabbix. In the proposed system, a mobile agent is used to make the database server migrate to another server machine quickly and recover the evacuation support system. This paper describes the design and implementation of our server agents and reports the experiences and observations made during the experiments.
1 Introduction

In Japan, there are many natural disasters such as earthquakes, tsunamis, floods caused by heavy rains, and volcanic eruptions. The Great Eastern Japan Earthquake, which occurred in March 2011, and Typhoon Hagibis, which occurred in October 2019, are fresh memories. A Nankai Trough earthquake is anticipated. People need to be ready and prepared to minimize the damage. People need to choose an appropriate evacuation route depending on the disaster situation. In the case of a tsunami, people need to climb to places as high as possible. In the case of a conflagration, people need to move windward. In any case, collapsed houses or flooded rivers can block predefined evacuation routes. The previously proposed evacuation route guidance system uses an Ant Colony Optimization (ACO) algorithm to indicate passable routes in real time. In the system, priority is given to roads with a large traffic volume of pedestrians at certain points [1]. Consideration must also be given to failures of the machines running the system. The purpose of this study is to realize a system that can continuously provide the latest information by dealing with machine failures using mobile agents.

The structure of this paper is as follows: in the second section, we explain the background. The third section describes the proposed system. In the fourth section, we demonstrate the feasibility of our system by experiment, and in the fifth section, we give a summary and outline prospective future studies.
2 Background

In this section, we provide the background, including previously proposed evacuation support systems, mobile agents, which are the basis of our proposal, a fault-tolerance mechanism, and the remote procedure call that we use for our proposal.
2.1 Evacuation Support System

Evacuation support systems have been studied to date. The damages are classified into three patterns: direct casualties, the consequences of collapsing structures and fire during an evacuation, and the consequences of a tsunami. In general, these damages do not occur simultaneously. In most cases, people who survived the first strike have to escape from the third strike while avoiding the second strike. For example, people have to complete their evacuation before a tsunami strikes. Cracks and liquefaction in the ground, however, can invalidate predefined evacuation routes. Therefore, it is important to dynamically find safe evacuation routes appropriate for the situation when a wide-area disaster occurs.
Asakura et al. investigated evacuation routes following a large-scale disaster. They proposed a method that uses ACO and showed that it is useful in a simulator [2]. Goto et al. proposed an improved version of the ACO algorithm for constructing evacuation routes; they improved the pheromone control for disaster evacuation guidance [3]. Taga et al. verified the usefulness of utilizing human communication via a Mobile Ad hoc Network (MANET) [4]. A MANET is a network constructed with only portable devices; hence, it is possible to avoid problems due to the failure of communication infrastructure [5].

Based on simulation results, we proposed a system based on smartphones and a client/server architecture. The server side performs the basic processing of evaluating the dynamic situation based on the information collected from users' smartphones. The analysis is performed by employing the concept of the ACO algorithm on the server side. ACO is an optimization algorithm that mimics the foraging behavior of ants. Ants go back and forth between the feeding grounds and the nest to bring food from the feeding grounds to the nest [6]. While doing so, ants deposit a chemical substance called pheromone on the routes they use. Other ants, which traverse between the feeding grounds and the nest, follow the pheromone and replenish it. Through these actions, long paths to the feeding grounds lose their pheromone to evaporation before it is replenished, while on shorter paths the ants strengthen the pheromone before it evaporates. As a result, ants derive the optimum route to the feeding area [7]. Goodwin et al. took advantage of ACO to plan safe escape routes during a chaotic crisis [8], while Baharmand and Comes chose to optimize the location of shelters [9].

We built an evacuation support system that collects information about the pedestrian flow and simulates optimal escape routes by using ACO. The simulation is performed by a server machine. Figure 1 shows the configuration and the flow of data of our proposed system. We have installed servers in several different regions that are geographically separated, in order to survive contingencies and maintain continuous service. The configuration uses microservices across many server machines, where each has only one function. Microservices make the functionality easy to replicate.

Fig. 1 The system configuration and the flow of data
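The essence of the ACO mechanism described above fits in a few lines. The sketch below shows the textbook pheromone update (evaporation plus a deposit inversely proportional to route length); it illustrates the general algorithm, not necessarily the exact variant used in the cited systems.

    def update_pheromone(tau, routes, rho=0.1, q=1.0):
        # Evaporation: every known edge loses a fraction rho of its pheromone.
        for edge in tau:
            tau[edge] *= (1.0 - rho)
        # Deposit: each ant reinforces the edges of its route; shorter routes
        # deposit more pheromone per edge (q / route length).
        for route in routes:
            for edge in route:
                tau[edge] = tau.get(edge, 0.0) + q / len(route)
        return tau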
Although the system provided optimal evacuation routes, it had a single point of failure [1]. Therefore, we tried to make the system redundant by using Zabbix to remove this single point of failure [10]. Zabbix is system monitoring software that consists of a Zabbix server and many Zabbix agents. When Zabbix activates its agents, the Zabbix server can start monitoring the target systems where the Zabbix agents are installed. Zabbix, however, takes five minutes to detect a server failure. Therefore, Zabbix is not ideal for systems that need to continuously provide the latest information. Hamamura et al. developed AkariMap, an evacuation map that includes the locations of Automatic External Defibrillators (AEDs) and uses the Google Maps API and OpenStreetMap API. AkariMap not only provides evacuation support information for the area but also displays areas that may be flooded after a tsunami so that safer evacuation sites can be selected and updated [11]. Mori et al. developed the Emergency Rescue Evacuation Support System (ERESS), an advanced evacuation support system based on MANET. It lets people start evacuation within thirty seconds [12].
2.2 Redundant Arrays of Inexpensive Disks Redundant Arrays of Inexpensive Disks (RAID) were developed to combine multiple inexpensive hard disks and to provide relatively high reliability and availability. RAID Level 1 implements mirroring and writes the same data onto two hard disks to increase the degree of fault tolerance. Level 5 provides fault tolerance at a lower cost by distributing and storing parity across the hard disks [13]. The RAID technology deals only with failures of individual hard disks and cannot deal with the large-scale disasters discussed in this paper. In a large-scale disaster, the server machine or the entire building containing the server machine may be damaged. To deal with such disasters, this paper uses micro-server technology and mobile agent technology. By subdividing the server into smaller mobile agents, it is possible for servers to migrate to new server machines that reside in different places immediately after a disaster.
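As a reminder of how parity-based recovery works, here is a minimal sketch (illustrative only; the block values are made up):

    def parity(blocks):
        # RAID-5 style parity: the XOR of the data blocks on the other disks
        p = 0
        for block in blocks:
            p ^= block
        return p

    data = [0b1010, 0b0110, 0b1100]           # data blocks on three disks
    stored_parity = parity(data)              # parity block stored on a fourth disk
    # If the second disk fails, its block is the XOR of the surviving blocks:
    recovered = parity([data[0], data[2], stored_parity])
    assert recovered == data[1]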
2.3 Remote Procedure Call Remote Procedure Call (RPC) was introduced in 1976 in RFC 707 by White [14], and the implementation by Birrell and Nelson in 1984 enabled control and data communication between different computers [15]. In RPC, two functions, request and response, are interchanged on the same port number between different machines connected by a network. Figure 2 shows the interaction between the two machines as follows: (1) Machine-A and Machine-B are connected on the same port number. Machine-A sets its status to "Receive Ready" and waits for a request from Machine-B.
Fig. 2 Cycle of remote procedure calls
(2) When a request is sent from Machine-B, Machine-A receives it. Upon receiving the request, Machine-A starts processing to respond to it. (3) After the processing on the Machine-A side is completed, Machine-A sends a response. Machine-B receives this response, and Machine-A is ready for the next request.
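A minimal Python sketch of this cycle, using the same xmlrpc modules as our implementation (the method name ping and the addresses are illustrative placeholders, not our actual interface):

    # Machine-A (the "Receive Ready" side): register a handler and wait for requests.
    from xmlrpc.server import SimpleXMLRPCServer

    def ping(message):
        # called when Machine-B's request arrives (step 2); the return value
        # is the response sent back to Machine-B (step 3)
        return "ack:" + message

    server = SimpleXMLRPCServer(("0.0.0.0", 8080))
    server.register_function(ping)
    # server.serve_forever()  # blocks, serving one request/response cycle at a time

    # Machine-B (the requesting side):
    import xmlrpc.client
    proxy = xmlrpc.client.ServerProxy("http://a.b.c.d:8080")
    # reply = proxy.ping("0")  # step 1: the request; returns when Machine-A responds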
3 System Overview The proposed system takes advantage of RPC to monitor the liveness of the servers that constitute our evacuation support system. The previous version of our system used Zabbix to monitor the servers; RPC shortens the recovery time of the system. The system is written in Python 3. We call the proposed monitoring system the Remote Procedure Call Monitoring System (RPCMS). RPCMS uses four machines as the basic minimum building blocks, as demonstrated in [10]. Consider a machine on which a server of the evacuation system is running. An exception handler raises an alert when the RPC connection is disconnected. Figure 3 shows the code that makes the server migrate when the RPC connection is broken. This exception handling makes the server migrate to a safe location so that the flow of the program does not stop.

import xmlrpc.server

try:
    s = xmlrpc.server.SimpleXMLRPCServer(("a.b.c.d", 8080))
    s.serve_forever()
except Exception:
    migrate_server()  # move the server (placeholder for the migration routine)
Fig. 3 Code example for detecting trouble occurrence
Fig. 4 System overview of remote procedure call’s monitoring system
By effectively applying this exception processing, a server can evacuate to a safe machine when unexpected trouble occurs on the current machine. In Fig. 2, Machine-A is used as the server machine. When trouble occurs on Machine-A, the server on Machine-A moves to Machine-B, and Machine-B, which has been monitoring Machine-A, becomes the server machine. We must then make another machine the monitor machine so that it can discover when Machine-B breaks. Therefore, Machine-B restores the data of Machine-A on itself and becomes the next server machine, and Machine-C becomes the monitoring machine that monitors Machine-B. By repeating this, the processing of RPCMS continues seamlessly. Figure 4 shows the general flow of this server/monitor migration.
3.1 Monitoring This section describes how to move the host environment on Machine-A to Machine-B and migrate the agents to evacuate onto the new server. Figure 5 shows the flow of the agents' migration from Machine-A to Machine-B. A monitoring agent on Machine-B watches the status of Machine-A and makes the server agent move to Machine-B instantly when an unexpected situation occurs on Machine-A. The steps are explained below (a code sketch of this heartbeat loop follows the steps): 1. The monitoring agent on Machine-B periodically sends a message '0' to the database server on Machine-A to confirm that the server is functioning correctly. 2. When the agent on Machine-A receives the message '0' from the agent on Machine-B, it applies the journal file to the database agent. 3. It is dangerous to keep multiple agents on the same machine, so the agents on Machine-A send the journal file to the backup server. 4. If everything is OK, the server agents on Machine-A rewrite the message to '1' and send it to the monitoring agent on Machine-B.
Fig. 5 Details of the interaction between Machine-A and Machine-B before Database Rebuild on Machine-B
5. The monitoring agent on Machine-B checks the message from the server agent on Machine-A. If the message is '1', the agent on Machine-B rewrites the message on the server agent to '0' and goes back to Step 1. If the agent on Machine-B does not hear from Machine-A, it suspects that trouble has occurred on Machine-A, sends a "kill" message to the server agents on Machine-A, and transits to the recovery action (Step 6).
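The following minimal sketch shows how the monitoring side of Steps 1 and 5 can be written with xmlrpc.client; the method name heartbeat, the polling period, and start_recovery are illustrative placeholders rather than our exact implementation:

    import time
    import xmlrpc.client

    SERVER_URL = "http://a.b.c.d:8080"  # Machine-A; the address is a placeholder, as in Fig. 3
    PERIOD = 1.0                        # polling interval in seconds (illustrative value)

    def monitor_machine_a():
        proxy = xmlrpc.client.ServerProxy(SERVER_URL)
        while True:
            try:
                reply = proxy.heartbeat("0")   # Step 1: send message '0'
                if reply != "1":               # Step 5: a healthy server answers '1'
                    break                      # unexpected answer: treat as trouble
            except OSError:                    # no answer: Machine-A is suspected down
                break
            time.sleep(PERIOD)
        start_recovery()                       # placeholder for Step 6 onward (Sect. 3.2)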
3.2 Recovery This section describes the exchange between the new server and database agents on Machine-B and the new monitoring agent on Machine-C after Step 5. The flows are shown in Fig. 6. The figure shows the creation of the server agent and database agent on Machine-B and the creation of the new monitoring agent on Machine-C that watches the server agent and the database agent on Machine-B. The steps are explained below (a summary sketch follows them): 6. The monitoring agent on Machine-B stops the server agents, retrieves the database agent with the old data from the backup server, and updates the database agent using the transaction file. After recovering the database, the monitoring agent on Machine-B shuts down the database. When the monitoring agent on Machine-B sends a message to the database agent, it immediately instantiates a new server agent with the message '2'. 7. The monitoring agent on Machine-B migrates to Machine-C, which has been in hot standby.
Fig. 6 Details of the interaction between Machine-B and Machine-C after Database Rebuild on Machine-B
8. The monitoring agent on Machine-C checks the message from the monitoring agent on Machine-B to verify that it is the same agent that was on Machine-B. If the message is '2', the monitoring agent on Machine-C rewrites the message on the server agent to '3' and sends it back to Machine-B. This operation confirms that the message comes from the monitoring agent on Machine-B. 9. If the agent on Machine-B finds '3' in the message of the server agent, the monitoring agent renames Machine-B as Machine-A and Machine-C as Machine-B and starts monitoring again (go to Step 1 of the previous section).
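Putting Steps 6 to 9 together, the recovery path can be summarized as follows; every function name below is an illustrative placeholder for the corresponding agent operation, not our actual API:

    def recover_on_machine_b():
        stop_server_agents()                      # Step 6: cease the old server agents
        db = fetch_database_agent_from_backup()   # retrieve the database agent (old data)
        apply_transaction_file(db)                # roll the database forward
        shutdown_database(db)
        spawn_server_agent(message="2")           # new server agent announcing '2'
        migrate_monitoring_agent(to="Machine-C")  # Step 7: Machine-C was in hot standby
        # Steps 8-9: Machine-C echoes '3' to confirm the sender; the roles are then
        # renamed (B becomes A, C becomes B) and monitoring restarts at Step 1.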
4 Experiments In order to demonstrate the feasibility of the mobile server agent system, we conducted a numerical experiment. We used four Raspberry Pi Zero machines with 1 GHz ARM processors connected via wireless LAN to verify the feasibility of RPCMS. We connected them using RPC and performed the experiment assuming that trouble occurred on Machine-A: we made the server agent on Machine-A refuse to reply to the requests sent from Machine-B. The database used in this experiment contained the shelter data and Japanese zip code data. Its size was 16.44 MB. We repeatedly performed this experiment 100 times a day over 4 days, for a total of 400 trials, and measured the restoration time of every trial. Figure 7 shows the frequency of each class over the 400 trials. Of the 400 runs, the class from 74.50 to 75.00 s had a frequency of fifty-seven. The average restoration time over the 400 trials was 74.66 s. This is equivalent to 4.5 s per MB
Fig. 7 Experimental results over 400 trials
of the backup file. Compared to using Zabbix, the performance is observed to be improved; recovery is 2.6 times as fast as in the previous system [10].
5 Conclusion and Future Directions This paper discussed a new surveillance subsystem of our evacuation support system, improved from the previous version that used Zabbix. The new system uses RPC for surveillance. We then verified the new system's feasibility on a small prototype. The new system uses RPC to detect a system error immediately after it has occurred. The resulting recovery time was much less than the expected target time. The current implementation backs up only the machine that manages the database, but we need to make sure that the database server can be appropriately linked with the application server. The proposed surveillance subsystem is a toy networked database system that simulates a large-scale server; we need to conduct experiments on a larger, practical network database system. As a future direction, we are planning to reimplement our agent system on Windows/UNIX machines instead of Raspberry Pis. We also need to cope with abrupt power failures. We are now investigating ways to detect indications of sudden failure. If we cannot find any indication, we may need to construct a dual system. Then we will build a realistic evacuation support system. Acknowledgements This work is partially supported by the Japan Society for the Promotion of Science (JSPS), with the basic research program, Grant-in-Aid for Scientific Research (KAKENHI) (C), Grant Numbers JP17K01304 and JP17K01342.
References
1. Kambayashi, Y., Konishi, K., Sato, R., Azechi, K., Takimoto, M.: A prototype of evacuation support systems based on the ant colony optimization algorithm. In: Borzemski, L., Świątek, J., Wilimowska, Z. (eds.) Information Systems Architecture and Technology: Proceedings of the Thirty-Ninth International Conference on Information Systems Architecture and Technology - ISAT 2018, AISC, vol. 852, pp. 324–333. Springer (2018)
2. Asakura, K., Fukaya, K., Watanabe, T.: Construction of navigational maps for evacuees in disaster areas based on ant colony systems. Int. J. Knowl. Web Intell. 4(4), 300–313 (2013)
3. Goto, H., Ohta, A., Matsuzawa, T., Takimoto, M., Kambayashi, Y., Takeda, M.: A guidance system for wide-area complex disaster evacuation based on ant colony optimization. In: Proceedings of the Eighth International Conference on Agents and Artificial Intelligence, vol. 2, pp. 262–268 (2016)
4. Taga, S., Matsuzawa, T., Takimoto, M., Kambayashi, Y.: Multi-agent approach for evacuation support system. In: Proceedings of the Ninth International Conference on Agents and Artificial Intelligence, vol. 2, pp. 220–227 (2017)
5. Taga, S., Matsuzawa, T., Takimoto, M., Kambayashi, Y.: Multi-agent base evacuation support system using MANET. In: Nguyen, N., Pimenidis, E., Khan, Z., Trawiński, B. (eds.) Computational Collective Intelligence. ICCCI 2018. LNCS, vol. 11055, pp. 445–454. Springer (2018)
6. Beckers, R., Deneubourg, J.L., Goss, S., Pasteels, J.M.: Collective decision making through food recruitment. Insectes Soc. 37, 258–267 (1990)
7. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 26(1), 29–41 (1996)
8. Goodwin, M., Granmo, O., Radianti, J.: Escape planning in realistic fire scenarios with ant colony optimization. Appl. Intell. 42(1), 24–35 (2015)
9. Baharmand, H., Comes, T.: A framework for shelter location decisions by ant colony optimization. In: Proceedings of the 12th International Conference on Information Systems for Crisis Response and Management, Kristiansand (2015)
10. Tago, I., Suzuki, N., Matsuzawa, T., Takimoto, M., Kambayashi, Y.: A proposal of evacuation support system with redundancy using the mobile agents. In: Jezic, G., Chen-Burger, Y.H., Kusek, M., Šperka, R., Howlett, R., Jain, L. (eds.) Agents and Multi-agent Systems: Technologies and Applications 2019. SIST, vol. 148, pp. 47–56. Springer (2019)
11. Hamamura, A., Fukushima, T., Yoshino, T., Egusa, N.: Evacuation support system for everyday use in the aftermath of natural disaster. In: Duffy, V.G. (ed.) Digital Human Modeling. Applications in Health, Safety, Ergonomics and Risk Management. DHM 2014. LNCS, vol. 8529, pp. 600–611 (2014)
12. Mori, K., Yamane, A., Hayakawa, Y., Wada, T., Ohtsuki, K., Okada, H.: Development of emergency rescue evacuation support system (ERESS) in panic-type disasters: disaster recognition algorithm by support vector machine. IEICE Trans. Fund. Electron. Commun. Comput. Sci. E96.A(2), 649–657 (2013)
13. Patterson, D.A., Gibson, G., Katz, R.H.: A case for redundant arrays of inexpensive disks (RAID). In: Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, SIGMOD '88, pp. 109–116 (1988)
14. White, J.E.: A high-level framework for network-based resource sharing. In: Proceedings of the National Computer Conference, AFIPS '76, pp. 561–570 (1976)
15. Birrell, A.D., Nelson, B.J.: Implementing remote procedure calls. ACM Trans. Comput. Syst. 2(1), 39–59 (1984)
Process Model for Accessible Website User Evaluation Matea Zilak, Ivana Rasan, Ana Keselj, and Zeljka Car
Abstract Despite the number of accessibility guidelines, plugins, and tools, analyses of different websites indicate that many accessibility issues are still not adequately addressed, and that the evaluation of website content and functionality has not been adequately performed with heterogeneous groups of users for whom accessibility is a necessary, must-have option. User evaluation is a crucial step toward achieving final website accessibility. The paper describes the process used for the evaluation of an accessible website prototype with content related to telecom operators' offers for youth, seniors, and persons with disabilities. The website prototype is an example of accessibility implementation in terms of universal design and was developed in a project involving a regulatory agency, telecom operators, researchers from the technical field, and NGOs of persons with disabilities. The evaluation process encompasses functional testing and involves all user representatives. According to the obtained results, the developed prototype is an example of good practice, applicable for the web accessibility improvement of existing or new website designs, and the process described in this paper is a model of how to perform user evaluation.
M. Zilak (B) · I. Rasan · Z. Car Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia e-mail: [email protected] I. Rasan e-mail: [email protected] Z. Car e-mail: [email protected] A. Keselj University of Dubrovnik, Dubrovnik, Croatia e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_6
1 Introduction The Council of Europe stresses that the Internet today is not only an essential tool for accessing information and communicating with others, but also a tool for many other daily activities, in that it provides access to many other services. Therefore, it is very important for enabling participation in democracy and social inclusion [1]. In order to contribute to achieving accessibility and to promote the rights and full participation of people with disabilities in society, a team consisting of experts from different professional fields recognized the challenges related to the web, which is very often the main barrier for people with disabilities and the elderly in everyday life, and referred to the fact that most websites do not include accessibility options. The importance of web accessibility is confirmed by different initiatives and programs, such as the one from ITU [2], as well as laws and policies enacted all around the world which include implementing rules for making websites accessible to individuals with disabilities [3]. To align with the directive issued by the European Parliament and the Council [4], the Croatian Parliament adopted an act which obliges public sector bodies to ensure the accessibility of their websites and mobile applications for people with disabilities, with strict deadlines [5]. The process of monitoring and reporting on the accessibility of most public sector bodies' websites and mobile applications is also required. However, when it comes to the practical implementation of an accessible website, it is neither simple nor straightforward to decide which options should be implemented and how to meet the needs of users with different disabilities. When speaking about web accessibility, most people think about blind users, but there is a significant number of persons with other difficulties (for example, physical, hearing, reading, attention, etc.). For example, papers [6–10] describe the evaluation of websites of different universities, governmental sites, etc., where all authors came to the same conclusion: there is no universal methodology for developing or evaluating an accessible website. The author in [11] describes a website solution prototype that facilitates users' specific needs, i.e., people with different intellectual deficits, while simultaneously accounting for the needs of users with mobility and sensory deficits. Furthermore, the authors in [12] propose a methodology for developing an accessible website for elderly users, based on the fact that the elderly are a heterogeneous group of users with different difficulties not necessarily expressed in the same measure. The Accessible Website for Persons with Disabilities project1 is a product of years of research and multidisciplinary cooperation between academia and the state regulatory agency for telecommunications. The overall research goal is to analyze the specific needs of people with disabilities regarding web accessibility in the country and to implement open and innovative solutions for their digital inclusion. Although accessibility is now a very popular keyword, almost a buzzword, mostly due to law obligations, research in this field is fairly mature and there is a set of
1 Accessible window into the world of telecom operator information, http://usluge.ict-aac.hr/pristupacni-web-2/o-projektu/.
Fig. 1 Homepage of the web prototype with opened accessibility options on the right
well-known accessibility guidelines, such as WCAG,2 evaluation tools, and lots of professional and scientific papers [13]. So, the aim of the undertaken research was to analyze the needs of Croatian users with disabilities and, by applying accessibility guidelines, to develop an accessible website prototype (web prototype in further text) that will be a model for all stakeholders interested in improving the accessibility of their websites. One of the findings of the undertaken research is that people with disabilities have difficulty finding information of interest on telecom operator websites, which is why the developed web prototype provides information on the special offers of telecom operators to different groups of people, including people with disabilities (PWD), young people (youth), and the elderly (seniors) as the main target groups. Also, it contains accessibility options so that everyone, regardless of age and ability, can access the information they seek. The web prototype is designed in line with the WCAG guidelines and users' requirements based on the needs and preferences previously collected from PWD or their representatives [14]. The web prototype is developed using WordPress as the core technology in combination with the WP Accessibility Helper (WAH) PRO plugin, as described in [15]. The current appearance of the web prototype can be seen in Fig. 1. The design and implemented accessibility features of the website are derived from the analysis of end users' responses. To be in accordance with their requirements, the following had to be kept in mind: careful selection of colors, fonts, and images, the way of presenting content (e.g., grouping and highlighting), the layout of graphic elements, etc. The web prototype's user interface includes several tabs: Homepage, i.e., Početna (the currently selected tab in Fig. 1), then Youth, PWD, Seniors, Other, i.e., Mladi, OSI, Seniori, Dodatno, all related to the special offers of telecom operators, followed by the tabs Contacts and About
Content Accessibility Guidelines, https://www.w3.org/WAI/standards-guidelines/wcag/.
project, i.e., Kontakti and O projektu. In the upper right corner of the web prototype, there is a menu with the following accessibility options: Increase/Decrease Font Size; Recommended Font Size; Change Font; Color Contrast Selection; Highlight Links; Underline Links; Gray Tones Images; Turn Off the Lights; Reset Accessibility Options. A detailed description of every option can be found in [15].
2 Process of Accessible Website Prototype Evaluation The aim of this process model is not only to describe the process used for this accessible website prototype evaluation but also to provide a wider picture of all the challenges related to accessibility implementation and evaluation. The novelty of this process lies in the selection criterion for the evaluation participants: the process includes representatives of persons with most disability types and from different backgrounds. It should be stressed that communication with these evaluation participants is sometimes challenging in itself, and the prerequisite for its success is building trust relationships: the evaluation process is demanding and requires the active involvement of users, so they should believe that the overall effort will be beneficial not only for them but for most persons with disabilities, and they should really support these activities. The approach to representatives of the elderly and youth is also challenging: elderly persons from very different backgrounds were approached privately, and group testing was organized in several retirement homes. Youth users were included in the evaluation process in cooperation with schools in two cities.
2.1 Functional Testing of Accessible Website Prototype First, two rounds of functional testing were conducted to verify whether the website functionalities follow the requirements specified by users in the initial phase of development [16]. Each iteration included students of computing and electrical engineering from two Croatian universities who are enrolled in courses related to ergonomics, human–computer interaction, design for all, and web accessibility. The testing also covered the prototype design and its consistency, loading speed, and accessibility options run on desktop and mobile devices. The implemented functionalities and accessibility options were tested on different operating systems, browsers, and a wide range of mobile devices. Test participants completed a survey about the devices used for testing, encountered problems, and their opinions on accessibility options, content, and possible improvements. Automatic evaluation using the online accessibility evaluation tools AChecker and WAVE was also performed, as described in [15]. This testing process also produced significant added value in raising the awareness of the involved students about the challenges faced by persons with disabilities and in raising their interest in
future professional work in fields related to accessibility, universal design, and the wellbeing of PWD.
2.2 Evaluation of Accessible Website Prototype User Experience After functional testing, the evaluation with representatives of the web prototype end users was initiated. User experience evaluation (UXE) refers to a collection of methods, skills, and tools utilized to uncover how a person perceives a system before, during, and after interacting with it [17]. The web prototype evaluation process model is shown in Fig. 2. The evaluation process activities related to the survey are the creation (design), evaluation, analysis, and improvement of the survey, followed by the activity related to web prototype testing, with the sub-activities Getting acquainted with the content of the website, Testing all accessibility options, and Going through a specific scenario. The following activities are Collection of user feedback through the survey, Analysis of user feedback, and Requirements specification for improving the web prototype. The final activity in the iteration is Implementation of the improved version of the web prototype, after which a new iteration of activities related to the survey and the testing of the web prototype follows. There may be as many iterations as needed until the optimal solution is obtained in terms of accessibility. Survey design and evaluation. The question design process itself required good preparation, which involved studying various literature in order to cover all the key elements that a survey questionnaire should contain [16]. Given that the end users are quite different, the main challenge was to create survey questions applicable to youth, seniors, and PWD users simultaneously, and to collect feedback from their very different perspectives that would be useful for further website improvement.
Fig. 2 Model of evaluation process for accessible website prototype
Evaluation with end users (3 target groups). The website end users are the following: • PWD: visually impaired, hearing impaired, people with different motor problems, people with cognitive impairments, and people with multiple disabilities. Considering the specific needs of these users, and their heterogeneous needs even when they have the same disability, it is important to emphasize that without their active inclusion in the process of developing (collecting requirements, the iterative process of testing, etc.) and evaluating the prototype, an optimal accessibility solution would be impossible to find. • Seniors: it is important to keep in mind that sensory and motor abilities decrease with aging (vision, motor skills, reflexes, movements, etc.). • Youth: their participation in this evaluation aimed at raising their awareness of the first two groups, and they also tested the accessibility options' potential to help in the everyday life of persons without disability. In order to achieve a high level of quality of the project implementation process, considerable effort was put into direct communication between researchers (exchange of ideas regarding the approach to and design of testing tools, the time required for iterative testing, tool design, etc.) and end user representatives. Specification and implementation of requirements. Based on the information collected in the evaluation process, the final analysis was performed to define possible website improvements, i.e., to define new functional and non-functional requirements that need to be implemented. The technical feasibility of implementing each requirement is also considered. In this step, the requirements stated by PWD are defined as necessary for implementation.
3 Results of Conducted Surveys for UXE of Web Prototype Evaluation of user experience was carried out in two iterations. The first iteration was conducted when the beta version of the website prototype was developed, and the second when improvements were implemented.
3.1 Results from First Conducted Survey for UXE The first evaluation iteration involved a total of 111 representatives of end users from the 3 target groups (57.7% women, 42.3% men). Most participants (55%) were younger than 25 years, while persons older than 50 years made up approximately 20%. Before answering any question regarding the website, survey participants were asked to explore the web prototype, including its content and accessibility options. Afterward, they expressed their satisfaction with the accessibility options. The distribution of
answers shown in Fig. 3 shows that 95.5% of respondents are completely or partially satisfied with the accessibility options on the website, indicating that the website is well-suited to most of its end users. In the next question, participants were asked to express their opinion on whether the content on the website is useful. Figure 4 shows that 51.9% consider the content very useful, indicating that they could visit this website in the future to search for information of interest; 39.8% of respondents find the content useful but not necessary, which can be an indicator that these people usually do not have difficulty finding information related to telecom offers on other websites. About 8.4% of respondents stated that the website content is not useful and that the website does not contain all the necessary information. Regarding ensuring useful content, some research activities should be extended. On the question of whether the participants found the accessibility options easily, 94.6% of respondents answered that they did, indicating that the options are clearly and visibly displayed on the website. The icon for the accessibility options was rated as appropriate by 84.7% of respondents, while the others stressed that the current icon is not associated with accessibility and that the description stated by the screen reader is poor. The labels of the accessibility options were found understandable by 96.4% of respondents, which indicates that each label was carefully chosen and appropriate for each option.
Fig. 3 Distribution of answers related to level of satisfaction with accessibility options
Fig. 4 Distribution of answers with opinion on usefulness of the content on the website
The size of the buttons for the accessibility options was found appropriate by 88.3% of respondents, who also indicated that the accessibility options menu is designed appropriately for easy activation. The results (92.4%) confirmed that the accessibility options on the right side of the screen are placed properly and that there is no need to move them elsewhere. A significant percentage of respondents (97.3%) stated that they navigated the website without difficulty, indicating that the website is consistent and the layout of the different structures is appropriately arranged. The same percentage (97.3%) of respondents stated that the website content is understandable, i.e., the information related to telecom offers, as well as the other content on the website, is clear and transparent and presented in an accessible way. The distribution of answers for all the yes/no questions mentioned above is shown in Fig. 5. The last question (Fig. 5) is related to accessing the website link through browsers on the users' mobile phones. Participants tried to access the website via their mobile phones, and 32.4% of respondents stated that they could not access it. This information was of great value since it meant that more effort needed to be invested in customizing the display of the website content and design on different mobile devices, especially because 87.4% of respondents stated that they use their mobile phones (often or sometimes) for browsing the Web. Participants were also asked to select which accessibility options they consider useful and would use. Regarding the selection of accessibility options as useful, the option Increase/Decrease Font Size was in first place, chosen by 70.3% of participants. The next options that many users opted for are the options for highlighting links (49.5%) and selecting colors for better contrast of background and text (47.7%). Figure 6 shows a graph with the distribution of answers for all options. When it comes to rating the website's compatibility with a screen reader (in case of using one), 67 participants answered this question; 77.6% of them stated that it was easy to read the content, while 10.4% stated they could not read the website with their screen reader. The website design was liked or liked very much by 73.8% of participants (Fig. 7).
Fig. 5 Distribution of answers for yes/no questions
Fig. 6 Distribution of answers for usefulness of certain accessibility option (options rated: Increase/Decrease Font Size, Recommended Font Size, Change Font, Color Contrast Selection, Highlight Links, Underline Links, Gray Tones Images, Invert Colors, Turn Off the Lights, Remove Animations, None)
Fig. 7 Distribution of ratings for website compatibility with screen reader and design (ratings 1–5 for the categories Compatibility with screen reader and Design of the website prototype)
Requirements specification for improvement after the first survey. These are some of the newly specified requirements: scale the accessibility options labels along with the rest of the text on the website; bold relevant information; change the color of links for better contrast; add alt tags or better descriptions to some images; change the resolution of the header image; make the search input more visible; show the accessibility options icons when the font is changed; change and increase the size of the accessibility options menu icon; a more intuitive option for going back to the initial accessibility options settings; a better indicator of activated accessibility options; remove the option for removing animations; make the main titles visible when the Turn Off the Lights option is activated; center the magnifier on images; make the website responsive for smaller displays. Priority in implementation, according to technical capabilities, was given to the requirements defined based on the responses of the PWD target group. To make the website more responsive for smaller displays, it was necessary to change the measurement units used for all the HTML elements, i.e., instead of pixels, to use relative units such as rem or em. This change also affected the inconsistency of the Increase/Decrease Font Size option. Other inconsistencies that appeared on the website were due to the unstable version of the WAH PRO plugin used for the development. The plugin included contradictory lines of code, because of which the source code needed to be changed. Another challenge with the early version of the plugin was the lack of support: there were no updates or patches for the existing bugs.
It has become obvious that WordPress updates and compatibility with the installed plugins also play a great role. For example, selecting the Increase/Decrease Font Size option caused the positions of images on the website to change, which consequently decentered the image magnifier in relation to the images, because the magnifier remembered only the initial image positions after the website loaded. The problem was solved by writing a JavaScript function that was directly linked to both plugins. Another finding is that the table as an HTML element is not suitable for displaying multiple images and image-associated text, because it confuses screen readers, which interpret it as an actual table and read it cell by cell. Requirements such as changing the color of links, bolding relevant information, and a more visible search input were implemented simply by changing the current CSS code according to the users' specifications.
3.2 Results from Second Conducted Survey for UXE After the implementation of the newly defined requirements in the prototype, each participant completed the survey questionnaire after testing the web prototype. The second iteration of the website evaluation involved 107 representatives of end users (62.6% women, 37.4% men, 55% younger than 25 years, and 18.7% older than 50 years). Most of the questions were in the form of a Likert scale (rating options were set as 1 = strongly disagree and 5 = strongly agree). The distribution of ratings related to these questions is shown in Fig. 8. Most of the questions were
Fig. 8 Distribution of ratings related to second survey questions
rated with the highest mark, which means that most of the participants either strongly agree with the statements from the questionnaire or are generally satisfied with the improvements implemented on the website. There was one multiple-choice question related to the web prototype design, and most of the respondents (50.5%) stated that the website design is interesting and attractive. When it comes to the features and the functionalities of the web prototype, 73.8% of the participants responded that they agree or strongly agree, indicating that all the implemented accessibility options work as expected. The question of whether the accessibility options menu size is proportional to the mobile device display got the lowest mark. Consideration should be given to the fact that more than 55% of the participants are younger than 25 and are used to standard mobile navigation that toggles by touching the hamburger icon. In the 22nd question, participants had to write what an accessible website represents to them. Most of the responses described an accessible website as a website that is simple and accessible for everyone, regardless of age and disability. It was pointed out that this design makes it easier to find all interesting and relevant information. Considering the results of the survey, all the functionalities and design guidelines meet the needs of the users. Following the users' requirements and implementing the changes has resulted in an accessible website where all relevant and essential information is provided.
4 Conclusion The accessible website prototype, whose evaluation process is presented in this paper, was developed as a good-practice example and a model for all stakeholders who are interested in a practical design and technical solution with accessible functionalities implemented according to the researched user needs. It incorporates the WCAG guidelines, but also provides a practical way of actually implementing them in a responsive and accessible website. The evaluation process presented in this paper proves the fulfillment of the prototype's aim and presents a model of how to organize functional and user testing activities in order to enhance the accessibility and suitability of a website for all users. Implementing accessibility is a design and technical challenge and an incentive to express creativity. Accessibility needs to be taught and promoted, and public awareness raised (especially among web design professionals). A future challenge is definitely the automation and personalization of accessibility options and the utilization of multi-agent systems for this purpose.
References 1. Council of Europe. The Internet, a public service accessible by everyone. https://www.coe.int/en/web/portal/public-service-accessible-by-everyone. Accessed 26 Jan 2020
2. International Telecommunication Union (ITU). The ITU-D National Programme in Web Accessibility: "Internet for @ll". https://www.itu.int/en/ITU-D/Digital-Inclusion/Persons-with-Disabilities/Pages/[email protected]. Accessed 19 Feb 2020 3. W3C Web Accessibility Initiative (WAI) (2018) Web Accessibility Laws & Policies. https://www.w3.org/WAI/policies/. Accessed 17 Feb 2020 4. Directive (EU) 2016/2102 of the European Parliament and of the Council of 26 October 2016 on the accessibility of the websites and mobile applications of public sector bodies. Offic. J. Eur. Union L327, 1–15 (2016) 5. Act on the accessibility of websites and programming solutions for mobile devices of public sector bodies [Zakon o pristupačnosti mrežnih stranica i programskih rješenja za pokretne uređaje tijela javnog sektora]. Official Gazette [Narodne novine] 17/19 (2019) 6. Dongaonkar, S.U., Vadali, R.S., Dhutadmal, C.: Accessibility analyzer: tool for new adaptations in government web applications to improve accessibility. In: 2017 International Conference on Computing Communication Control and Automation (ICCUBEA), IEEE, Pune, pp. 1–5 (2017) 7. Fernández, J.M., Roig, J., Soler, V.: Web accessibility on Spanish universities. In: 2010 Second International Conference on Evolving Internet, pp. 215–219. IEEE, Valencia (2010) 8. Acosta-Vargas, P., Lujan-Mora, S., Salvador-Ullauri, L.: Web accessibility policies of higher education institutions. In: 2017 16th International Conference on Information Technology Based Higher Education and Training (ITHET), pp. 1–7. IEEE, Ohrid (2017) 9. Sandhya, S., Devi, K.A.S.: Accessibility evaluation of websites using screen reader. In: 7th International Conference on Next Generation Web Services Practices (NWeSP), IEEE, Salamanca, pp. 338–341 (2011) 10. Márquez, S., Moreno, F., Coret, J., Jiménez, E., Alcantud, F., Guarinos, I.: Web accessibility for visual disabled: an expert evaluation of the Inclusite® solution. In: 2012 15th International Conference on Interactive Collaborative Learning (ICL), IEEE, Villach, pp. 1–5 (2012) 11. Halbach, T.: Towards cognitively accessible web pages. In: 2010 Third International Conference on Advances in Computer-Human Interactions, pp. 19–24. IEEE, Saint Maarten (2010) 12. Gregor, P., Newell, A.F.: Designing for dynamic diversity: making accessible interfaces for older people. In: Proceedings of the 2001 EC/NSF Workshop on Universal Accessibility of Ubiquitous Computing: Providing for the Elderly, pp. 90–92. ACM, Alcácer do Sal (2001) 13. Abuaddous, H.Y., Zalisham, M., Basir, N.: Web accessibility challenges. Int. J. Adv. Comput. Sci. Appl. 7, 171–181 (2016). https://doi.org/10.14569/IJACSA.2016.071023 14. Car, Ž. et al.: Accessible website for persons with disabilities—Project report. Croatian Regulatory Authority for Network Industries and University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb (2018) 15. Žilak, M., Kešelj, A., Besjedica, T.: Accessible web prototype features from technological point of view. In: 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 457–462. IEEE, Opatija (2019) 16. Car, Ž. et al.: Accessible website for persons with disabilities—Results of the website accessibility and usability testing. Croatian Regulatory Authority for Network Industries and University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb (2019)
17. Law, E., Roto, V., Hassenzahl, M., Vermeeren, A., Kort, J.: Understanding, scoping and defining user experience: a survey approach. In: Proceedings of Human Factors in Computing Systems Conference, pp. 719–728. ACM, Boston (2009)
Intelligent Agents and Cloud Computing
A Comparative Study of Trust and Reputation Models in Mobile Agent Systems Donies Samet, Farah Barika Ktata, and Khaled Ghedira
Abstract In the literature, several studies have shown that the use of trust models in mobile agent-based systems has advantages for improving the level of security and performance. This paper presents a comparative study of trust models specific to mobile agents, showing their strengths and limitations. The comparison is made along several dimensions. The purpose of this study is to analyze the dimensions used by these models (parameters, concepts, etc.) and to identify the dimensions that are not used in the existing trust models. This study will allow researchers to address the voids by ameliorating the identified limits and proposing a new trust model which, besides incenting entities not to adopt selfish and/or strategic behavior, will better reflect the actual behavior of the different entities in the mobile agent system. As a result, the trust values will be more accurate and the decision-making will be more precise. To make the requirements of the trust model easier to understand, we add a B2B application based on mobile agents and integrate into each of its hosts a trust layer that allows the integration of the proposed dimensions. We also present the flowchart of the new related trust model.
D. Samet (B) National School of Computer Science, University Campus of Manouba, 2010 Manouba, Tunisia e-mail: [email protected] F. B. Ktata Higher Institute of Applied Sciences and Technology of Sousse, Rue Tahar Ben Achour, 4003 Sousse, Tunisia e-mail: [email protected] K. Ghedira Université Centrale de Tunis, Honoris United Universities, Tunis, Tunisia e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_7
1 Introduction The use of a trust model in mobile agent-based systems can significantly improve the level of security and the level of performance of these systems. This is ensured by using trust models to assess the trust level of an entity (agent or host) before deciding whether or not to interact with it, and under which conditions the interaction can take place. A number of trust and reputation models for mobile agent-based systems have been proposed in the literature. These models are defined based on different dimensions. This paper intends to support a better understanding of the existing trust and reputation models and to identify the limits and the dimensions which are not covered by the literature, in order to help define a new trust model that includes the different dimensions that allow obtaining accurate trust values for better decision-making. The sections of this paper are organized as follows: Sect. 2 explains the need for trust evaluation in mobile agent systems. In Sect. 3, we present a trust and reputation background. In Sect. 4, we select a set of trust models for mobile agent systems, investigate their features, and distinguish their limits. In Sect. 5, we discuss new dimensions that can be added to the trust model definition for a model that includes more details and yields more accurate decisions. In Sect. 6, we propose a Trust Layer which allows the integration of the new trust model that uses the proposed dimensions. A flowchart of the new trust model is presented in Sect. 7. Finally, Sect. 8 presents the conclusion and future work.
2 Need of Trust Evaluation in Mobile Agent Systems The dynamic nature of mobile agent systems and the necessity of collaboration, communication, and interaction between their different entities increase the degree of uncertainty. The uncertainty is, in fact, the result of the strong dependence of the mobile agent on the host and vice versa. For example, and based on [10], the main problems in mobile agent systems are as follows. The code or parts of an agent must be readable by the visited hosts. The encryption and decryption of messages when the agent is not on its home host: the issue here is how much an agent can trust a host to encrypt or decrypt its exchanged messages. The tampering of intermediate results, especially in the case where they will be used later in the itinerary to calculate further intermediate results: a mobile agent may have been safe initially, but imagine that it has visited several hosts that are not trusted; the probability that it transmits false data, or uses such data as input parameters to the agent code, which makes it malicious, becomes stronger. Also, it is not sufficient for an entity (agent or host) to be authenticated to be considered trustworthy, because being authenticated guarantees that the agent code was written by its home host, according to its signature, but it does not guarantee that the mobile agent does not include malicious code or that it has not changed its behavior by undergoing attacks while traveling. So using only cryptographic mechanisms is not enough to secure the mobile agent system. Thus,
as in any uncertain system, the use of trust management is a necessity to improve the level of security in systems based on mobile agents. The improvement of this level is the result of establishing a trust management system which is based on a trust model. In this system, decision-making is based on avoiding interactions with entities classified as non-trusted and choosing the most trusted one.
3 Trust and Reputation Background 3.1 Trust and Reputation Definition In mobile agent systems, there are two classes of trust: the trust for an agent and the trust for a host. The trust for an agent includes the trust for the agent code, the trust for its home host, and the trust for its itinerary. The trust for a host includes the trust for the agents it has already sent and the trust for its provided services (execution, security, and recommendation). Based on [5], we can define trust as a numeric value that indicates how strongly a trustor (a host or an agent) believes that the trustee (another host or agent) will perform a task as it claims to and without harming the trustor. Reputation is used to enable collaboration among trustworthy agents and hosts by distributing feedback that helps make trust decisions. Trust models are inspired by human society, where unknown individuals must interact with each other to get services. Individuals in human society get to know each other via direct interactions (DI) and reputation (Rep). This is also true in mobile agent systems, and especially in open mobile agent systems, where an entity cannot have direct interactions with all other entities.
3.2 Trust and Reputation Model Steps In mobile agent-based systems, depending on the trust and reputation model, an agent calculates trust values to choose the best hosts and agents to interact with. A host calculates trust values to assist decision-making in the authorization process. In fact, a trust and reputation model has three steps: Gathering information about the trustee entity, from one's own experience (direct interaction) and from other entities (reputation). Reputation information can be shared in a proactive way, meaning that information is shared at each specified interval, or in a reactive way, when anyone asks for it. The propagation of reputation information may be local, that is, reputation information published within neighborhood hosts and acquaintance agents (in the same security domain), or global, when information is propagated to hosts and agents outside the security domain of the host or agent publishing the reputation information. Analyzing and modeling information, which defines how entities calculate the trust value of the other entities that they may interact with.
Decision-making based on the calculated values of trust: the trust management system chooses whether or not to interact with the trustee entity.
3.3 Trust and Reputation Model Dimensions Trust and reputation models use a set of dimensions that introduce concepts to determine the type of information that should be collected about the entity which an agent or a host may interact with, and how to analyze this information and calculate a trust value to make a trust decision. We present the dimensions often used by trust models, based on our study of trust models for mobile agent-based systems. We are interested in two types of trust: trust in the competence of an entity (which includes its competence to provide a secure execution and its competence to provide the expected service) and trust in the credibility of an entity as a recommender. We discuss these two types of trust as a function of the treated dimensions. The trust and reputation dimensions used by the studied models are the following. Classes of Trust: the trust relationships that are considered by the trust model. Information Source: an entity needs to collect information from its own experiences, from other agents and hosts, and/or from the environment. The extracted information can be: Direct interaction (DI), the first-hand information, which presents the own experience of the trustor entity (the entity which trusts) with the trustee entity (the entity which is trusted); Direct observation and inspection, which is based on observing others' interactions and examining the environment in search of relevant past and current information to analyze the behavior of the trustee; Reputation (Rep), also known as indirect or witness information, which presents information provided by others about their experiences (interactions) with the trustee entity. Satisfaction: according to their preferences, two entities may have different satisfaction levels with regard to the performance of the same requested entity. Satisfaction is the rate given by a service requestor concerning its satisfaction with a service provided by another entity. The service can also be a recommendation or a safe execution. In the case of recommendation, the satisfaction is based on the similarity of the assigned trust values: the closer the values, the more the trustor can rely on the other's recommendations. Trust Semantics (or multidimensionality): unlike in most trust models, here trust is not represented by a single value. Multidimensionality aims to convert the single value that measures trust into an average over several aspects, transforming it into a composite measure [1, 8]. Trust Preferences: these are related to the assignment of a weight to every attribute determined by the trust semantics, according to the preferences of the trustor. Initial Trust: the definition of how to calculate the trust value to assign to a new entity entering the system [11].
Decay Factor: information decays with time, based on a decay factor. If no information exists, or there are only a few interactions with the corresponding entity, the granted confidence value decreases toward a neutral value. Knowledge Level: the knowledge level increases with the number of direct experiences. We must distinguish "do not trust" from "do not know": known does not mean trusted, and unknown does not mean distrusted either [12]. The knowledge level can be in the range [0,1], with 0 for unknown and 1 for perfect knowledge. These values are set by assigning non-overlapping interval weights to the number of interactions. Recency Level: the recency level can be in the range [0,1], with 0 for a too old experience and 1 for a very recent experience. These values are set by assigning non-overlapping interval weights over time. Paranoid versus Trusting Trustor: if the calculated trust value for a trustee belongs to an interval where Φ < Trust Value (Trustor/Trustee) < Θ, the trustee can be considered either trustworthy or untrustworthy depending on how paranoid or trusting the trustor is.
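To illustrate how several of these dimensions can interact, the following sketch combines direct interaction and reputation, weighted by the knowledge and recency levels, with a paranoid/trusting band; it is a generic illustration written for this survey, not a formula from any of the studied models, and the threshold values are arbitrary:

    def trust_value(direct, reputation, knowledge, recency, phi=0.4, theta=0.6):
        # knowledge and recency are in [0, 1]; a well-known, recently observed
        # trustee lets the trustor rely mostly on its own direct experience
        w = knowledge * recency
        t = w * direct + (1.0 - w) * reputation
        if t >= theta:
            return t, "trustworthy"
        if t <= phi:
            return t, "untrustworthy"
        # phi < t < theta: the verdict depends on how paranoid the trustor is
        return t, "undecided"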
4 Comparing Trust Models for Mobile Agent Systems

In this section, we develop a comparative study of trust models for mobile agent systems with an emphasis on the dimensions integrated in the definition of each model. We tried to include all the models that treat trust in mobile agent systems. Even though some models do not have recent references, they sometimes present important solutions and dimensions that are not used in more recent models. The studied models are those of Dadhich et al. [2], Gan et al. [4], Zaiter et al. [14], Hacini et al. [6], Derbas et al. [3], Tajeddine et al. [11], Lin et al. [7], Ye et al. [12] and Mittal et al. [9]. In the analysis table (Table 1), the first line for each model analyses the dimensions related to trust in the entity as a service provider (covering its security or its competence), and the second line analyses the dimensions related to trust in the entity as a recommender. Each box indicating that a dimension is not verified marks a possible improvement of the model, since the accuracy of a model depends on its consideration of every piece of information and every dimension that reflects reality and allows a correct estimation of the benevolence or maliciousness of an entity and of the trust level toward it.

Dadhich et al. [2] consider only trust information about hosts to protect an agent against a host. This work uses direct interaction (called direct trust) between a trustor and a trustee host as an information source. Reputation is provided by other hosts (called indirect trust or recommendation trust). The direct trust of a host is calculated by evaluating the direct experiences according to its recent positive and negative transactions. This work evaluates neither the reliability of the recommended value nor the trustworthiness of the recommender.
Table 1 A comparative study of the trust models for mobile agent systems

Model | Class of trust | Information source
Dadhich et al. [2] | Trust of an agent to an execution host | DI, Rep
 | Trust of a host to another as a recommender | –
Gan et al. [4] | Trust of an agent to another | DI, Rep
 | Trust of an agent to another recommender agent | DI
Zaiter et al. [14] | Trust of an agent to an execution host | DI
 | Trust of a host to another as a recommender | –
Hacini et al. [6] | Trust of an agent to an execution host | DI, Rep, observation and inspection
 | Trust of an agent to a recommender host | –
Derbas et al. [3] | Trust of a host to another | DI, Rep
 | Trust of a host to another as a recommender | –
Tajeddine et al. [11] | Trust of an agent/host to an origin/execution host | DI, Rep
 | Trust of a host to another as a recommender | DI
Lin et al. [7] | Trust of an agent/host to an execution host | DI, Rep
 | Trust of a host to another as a recommender | DI
Ye et al. [12] | Trust of an agent/host to an origin/execution host | DI, Rep
 | Trust of a host to another as a recommender | DI
Mittal et al. [9] | Trust of an agent to an execution host | DI, Rep (centralized)
 | Trust of a host to another as a recommender | DI

The further dimension columns of the original table (Satisfaction, Trust semantics, Trust preferences, Initial trust, Decay factor, Knowledge level, Recency level, Paranoid versus Trusting) mark, for each model row, whether the corresponding dimension is verified.
Gan et al. [4] consider only trust information about an agent to evaluate its expected behavior. This work uses a similarity table to evaluate the credibility of a recommender agent. The evaluation factors are the degree of satisfaction, the amount of the transaction and the time of the transaction.

Zaiter et al. [14] treat only the trust information of an agent concerning an execution host, and they do not include most of the cited dimensions, either for evaluating a trustee or a recommender.

Hacini et al. [6] consider information about the trustworthiness of an execution host to allow a mobile agent to select the appropriate quality of service to offer to it. This work uses direct interaction, observation, inspection and reputation as information to estimate the trustworthiness of a host, and it does not evaluate the credibility of a recommender.

Derbas et al. [3] (TRUMMAR) treat only trust information about hosts to protect agents against malicious hosts. This model uses previous information available at the trustor host and reputation reported from neighbor hosts (under the same administrative control), friend hosts (under different but trusted administrative control) and stranger hosts (volunteer providers of information). This work uses important dimensions like initial trust, decay factor and paranoid versus trusting, but it lacks the evaluation of recommendations and recommender entities, and it does not treat the other cited dimensions needed for an accurate trust evaluation.

Tajeddine et al. [11] (PATROL) present an improvement of TRUMMAR. It uses the two types of trust: trust in the competence of a host to perform a specific task and trust in the reliability of a host to give a reputation value. The evaluation of the reliability of a host as a recommender is based on parameters such as cooperation, frequency of interactivity, popularity and similarity. However, popularity, cooperation and interactivity may not reflect the competence, the good knowledge or the recent experience with the trustee itself; moreover, these parameters do not indicate the benevolence of an entity, especially in the case of competition between the trustor and the recommender.

In the work of Lin et al. [7] (MobileTrust), a host trusted for recommendation is trusted according to the belief that one entity holds in another based on its own experience of recommendations proposed by the recommender entity in the past. However, this work does not specify how a recommendation is considered reliable; it also does not consider several of the mentioned dimensions for the calculation of trust in the recommender and the trustee. Moreover, it does not use dimensions to incentivize the recommender to provide recommendations, nor to maintain a cautious attitude toward the strategic and selfish behavior of agents. The same shortcomings apply to the work of Ye et al. [12], although it considers the knowledge level when calculating the trust level for the trustee, but not for the recommender.

Mittal et al. [9] maintain the reputation of the platform on which the agent will be executed, as well as the trust of the mobile agent. However, the trust and reputation system is centralized: nodes share trust values stored at a trusted third party. It would be more accurate and more cautious if every node had its own procedure for calculating trust values depending on its own vision, experiences and relationships with the entities of its environment.
Another shortcoming is that the trust value for an agent in this model depends only on the trust value of its origin; in fact, the trust value for a mobile agent depends on its origin as well as on its itinerary, especially when the collected data can influence the determination of the agent's behavior. Table 1 summarizes the different dimensions incorporated in the models discussed above.
5 New Dimensions

By studying these models and examining the dimensions used in the related works, we conclude that the authors neglected several aspects. First, agents in the current multi-agent trust research landscape are normally selfish, meaning that an agent will take whatever action is expected to maximize its own utility [13], especially in large-scale systems and in the field of e-commerce. We therefore need to integrate additional dimensions that incentivize entities which may be in competition to provide true reputation values about the parties they have already interacted with. For this, we propose that the trust value for service (TVS) of an entity be influenced by the trust value of the recommendation experience (TVR) provided by the entity under evaluation. By doing so, we guarantee that its dishonesty as a recommender affects its TVS, because it is irrational to assign the same trust value to a first entity which provides a good service and an honest recommendation and to a second entity which provides a good service but a dishonest recommendation. This consideration also helps to identify entities with strategic behavior which provide false recommendations in order to mislead an adversary or to decrease the requests for services directed toward them. So the dimension Incentive Feedback should be added to encourage the recommender entity and give it an interest in cooperating and providing true information about its interactions with the trustee entity: the global trust value of an entity is influenced by the trust value of its recommendation experience.

Moreover, to be wary of other entities and more cautious, an entity must take into consideration that a benign entity cannot be a good recommender when it does not provide the promised service. We therefore propose to add a new dimension, Cautious Attitude, where the trust value for the recommendation experience (TVR) of an entity is influenced by the trust value for service (TVS) provided by the recommender under evaluation.

In addition to the knowledge level, we propose to consider another dimension which reflects the number of times an entity (trustee or recommender) provides a good service compared to the number of times it is asked to do so. For example, between two entities with which the trustor has the same number of interactions, using the familiarity dimension, the model will favor the entity with more good experiences. We define Familiarity as the ratio of the number of good experiences to the total number of experiences. For recommendations: if the difference between the recommended value and the trust value assigned to the trustee after the interaction is ≥ λ, the experience with the recommender is considered a bad experience; otherwise, it is considered a good experience.

Finally, a good trust model should enable the trustor to differentiate a trustee which gains a satisfaction value through oscillatory behavior (in this case the trustee should be penalized) from another trustee which gains the same satisfaction value while improving or keeping almost the same level of satisfaction; in this case, the value of the evolution is equal to 1. So Evolution depends on the consistency of the interaction results with a given entity and on how far it keeps a non-oscillatory behavior, i.e. improving or keeping the same level.
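As a minimal illustration of the proposed Familiarity and Evolution dimensions, the following Python sketch encodes their definitions; the parameter name lam (for λ) and the encoding of the satisfaction history are assumptions made for this example.

```python
def familiarity(good_experiences: int, total_experiences: int) -> float:
    """Familiarity = ratio of good experiences to all experiences."""
    if total_experiences == 0:
        return 0.0
    return good_experiences / total_experiences

def recommendation_is_good(recommended: float, observed: float, lam: float) -> bool:
    """A recommendation counts as a good experience when the recommended
    value and the trust value observed after the interaction differ by
    less than the threshold lam."""
    return abs(recommended - observed) < lam

def evolution(satisfaction_history: list[float]) -> float:
    """Returns 1.0 for a non-oscillatory trustee (improving or stable
    satisfaction) and a value below 1 that shrinks with the share of
    downward swings; the exact penalty is an assumption."""
    if len(satisfaction_history) < 2:
        return 1.0
    deltas = [b - a for a, b in zip(satisfaction_history, satisfaction_history[1:])]
    oscillation = sum(1 for d in deltas if d < 0) / len(deltas)
    return 1.0 - oscillation
```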
6 Application Scenario

In order to ensure the adequate integration of a new trust model which better reflects the reality of entity behavior using the above dimensions, we propose a Trust Layer which must be implemented in each host of the mobile agent system. An important advantage of our trust layer is that it is independent of the functional layer; it is thus standardized and can be integrated into any host of the mobile agent system. To illustrate the use of the trust layer, we propose a B2B system based on mobile agents in which the trust layer is integrated into every host (Fig. 1). The B2B system based on mobile agents is composed of:

– Seller/Buyer host: represents the host of a user that has products/services to sell and/or wants to purchase a product/service.
– Responsible Seller Agent (RSA): a stationary agent in the Seller/Buyer host that consults the product database, creates the Seller Agent and supervises its activity.
– Seller Agent (SA): a mobile agent created by the RSA. The SA migrates from one Seller/Buyer host to the other Seller/Buyer hosts, then communicates with the Responsible Buyer Agent to provide information about the products/services, to negotiate and to sell on demand.
– Responsible Buyer Agent (RBA): a stationary agent in the Seller/Buyer host that communicates with the SA and saves, according to the needs of the company, a targeted choice of product/service information in the Catalog Database. The RBA can buy a product instantly when the SA communicates with it, or later when the company needs to buy a product; in this case the RBA consults the Catalog Database, creates the BA and supervises its activity.
– Buyer Agent (BA): a mobile agent created by the RBA. The BA migrates from one Seller/Buyer host to the other Seller/Buyer hosts to negotiate and buy the requested products/services.

The trust layer is composed of:

– Decision Agent: a stationary agent responsible for the validation or rejection of the proposed trustee entities.
– Trust Manager: a stationary agent responsible for the computation of the trust values.
– Recommendation Collector: a stationary agent responsible for collecting recommendations from other hosts.
– Recommendation Provider: a stationary agent that provides recommendations when asked by the Recommendation Collectors of other hosts.
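The following schematic Python sketch shows how the four trust layer agents could be wired together on a host; all class and method names, as well as the neutral default value, are illustrative assumptions rather than a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TrustManager:
    """Computes and stores trust values for known entities."""
    trust_values: dict = field(default_factory=dict)

    def trust_of(self, entity_id: str) -> float:
        return self.trust_values.get(entity_id, 0.5)  # assumed neutral default

@dataclass
class RecommendationProvider:
    """Serves (decayed) trust values to collectors on other hosts."""
    manager: TrustManager

    def provide(self, trustee_id: str) -> float:
        return self.manager.trust_of(trustee_id)

@dataclass
class RecommendationCollector:
    """Gathers recommendations about a trustee from other hosts."""
    def collect(self, trustee_id: str, remote_providers: list) -> list[float]:
        return [p.provide(trustee_id) for p in remote_providers]

@dataclass
class DecisionAgent:
    """Validates or rejects proposed trustee entities based on the
    values computed by the Trust Manager."""
    manager: TrustManager

    def validate(self, trustee_id: str, trust_threshold: float) -> bool:
        return self.manager.trust_of(trustee_id) >= trust_threshold
```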
7 New Trust Model Flowchart

Consider the situation in the B2B application (Fig. 1) where, after consulting the product database, the RSA in host X wants to compose the itinerary to be followed by the SA (1) in order to make sales transactions. The RSA of host X provides the decision agent with a list of potential buyer hosts (2). The decision agent of host X will not validate a potential host Y in the itinerary of the SA unless it is confident that host Y is trustworthy. In order to find out whether host Y is trustworthy or not, host X follows the flowchart that we propose (Fig. 2). In host X, the Decision Agent contacts the Trust Manager (3) to calculate the knowledge level value and the recency level value
Fig. 1 Trust layer included in B2B application based on mobile agent
Fig. 2 New trust model flowchart
of the interaction with Y (4). If these values are not equal to 1, that is, if host X does not have enough experience with host Y or its experience with host Y is not recent enough, the Trust Manager sends a recommendation request to the Recommendation Collector (5). The Recommendation Collector decides on the number N of hosts Zi to query for reputation values. This number is upper-bounded so as not to flood the network by sending queries to a large
number of hosts. Afterward, the Recommendation Collector queries the first N other hosts Zi (6) whose trust value for recommendation experience at host X (TVR X/Zi) is greater than or equal to the recommendation threshold, in decreasing order of TVR X/Zi, about their trust values toward host Y (7). The recommendation threshold is the value above which a recommender is considered apt to provide an honest recommendation. The Recommendation Collector then obtains the (decayed) trust values from the Recommendation Providers (8) and calculates the average recommendation value ARV Y after weighting the values assigned to host Y by every recommender Zi with TVR X/Zi. The Recommendation Collector sends ARV Y to the Trust Manager (9). The Trust Manager decays the old trust value of host Y held at host X, and then calculates the new reputation value for host Y using the average recommendation value ARV Y and the decayed old trust value (10). Note that if the knowledge level and the recency level are equal to 1, X directly decays its old trust value toward host Y, skipping all the steps in between.

After the reputation value for host Y is calculated, a choice has to be made by the Decision Agent (11): if the reputation value of Y is less than the absolute mistrust level Φ, Y will not be trusted and no interaction will take place; if the reputation value of Y is greater than the absolute trust level Θ, host X will trust and interact with host Y. However, if the reputation value of host Y lies between Φ and Θ, host X uses a probabilistic approach where the decision to trust or not to trust is based on how close the reputation value is to Θ or Φ, respectively. The Decision Agent then sends the response to the RSA (12), which creates the Seller Agent and includes the trusted hosts in its itinerary (13).

After the interaction with host Y, X calculates, for Y and for each Zi, the satisfaction, the familiarity, the knowledge level and the evolution. These values are used to calculate the results of interaction RI (HEval and REval). HEval indicates how well or badly host Y performed in its interaction; REval indicates the reliability of host Zi as a recommender. Satisfaction toward host Zi is the assessment of the difference between the reliability of host Y as seen by requester X and by recommender Zi. Satisfaction toward host Y is composed of two types of assessments: the assessment of the security level on host Y and the assessment of its competence as a service provider. Host X then calculates the new decay factor (τ) for host Y based on the difference in host Y's interaction results (HEval) between successive interactions, and the new decay factor (τ) for host Zi based on the difference in host Zi's interaction results (REval) between successive recommendation experiences.

Finally, X recalculates the trust toward the service of host Y, TVS X/Y, using the old TVS X/Y, the decayed TVR X/Y and HEval X/Y, and then recalculates the trust toward the recommendation experiences of host Y, TVR X/Y, using the new TVS X/Y and the decayed TVR X/Y. Likewise, X recalculates the trust toward the recommendation experiences with every host Zi, TVR X/Zi, using the old TVR value of Zi at host X, the decayed TVS X/Zi and REval X/Zi, and then recalculates the trust toward the service of every host Zi, TVS X/Zi, using the new TVR X/Zi and the decayed TVS X/Zi.
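The decision step (11) and the final coupled update of TVS and TVR can be sketched as follows; the thresholds Φ and Θ appear as phi and theta, and the mixing weights in the update are assumptions for illustration only, since the exact formulas are not given by the flowchart.

```python
import random

def decide(reputation: float, phi: float, theta: float) -> bool:
    """Trust decision for host Y given its computed reputation value."""
    if reputation < phi:      # below the absolute mistrust level: reject
        return False
    if reputation > theta:    # above the absolute trust level: accept
        return True
    # In between: trust with a probability that grows as the reputation
    # approaches theta rather than phi.
    p_trust = (reputation - phi) / (theta - phi)
    return random.random() < p_trust

def update_after_interaction(tvs_old: float, tvr_old: float,
                             h_eval: float, r_eval: float,
                             decay: float) -> tuple[float, float]:
    """Coupled update of service trust (TVS) and recommendation trust (TVR):
    each new value also depends on the (decayed) other one, reflecting the
    Incentive Feedback and Cautious Attitude dimensions. The weights below
    are invented for this sketch."""
    tvs_new = 0.5 * tvs_old + 0.3 * (decay * tvr_old) + 0.2 * h_eval
    tvr_new = 0.5 * (decay * tvr_old) + 0.3 * tvs_new + 0.2 * r_eval
    return tvs_new, tvr_new
```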
8 Conclusion

In general, all the studied models use considerable trust dimensions but provide different levels of accuracy. The accuracy of trust models can be improved by proposing a new trust model that avoids the gaps identified in this paper and includes all the mentioned dimensions according to the provided flowchart. By doing so, the new model will be able to encourage good behavior and discourage selfish and dishonest behavior that may strategically attack the trust management system. We are currently completing a comprehensive trust model that combines the two kinds of schemes that counter such attacks: recommender-based trust schemes (recommendation filtering) and incentive- and punishment-based schemes.
References

1. Aljazzaf, Z.M., Perry, M., Capretz, M.A.: Online trust: definition and principles. In: 2010 Fifth International Multi-conference on Computing in the Global Information Technology, pp. 163–168. IEEE (2010)
2. Dadhich, P., Dutta, K., Govil, M.: On the approach of combining trust and security for securing mobile agents: trust enhanced security. In: 2011 2nd International Conference on Computer and Communication Technology (ICCCT-2011), pp. 379–384. IEEE (2011)
3. Derbas, G., Kayssi, A., Artail, H., Chehab, A.: TRUMMAR: a trust model for mobile agent systems based on reputation. In: The IEEE/ACS International Conference on Pervasive Services, 2004 (ICPS 2004), Proceedings, pp. 113–120. IEEE (2004)
4. Gan, Z., Li, Y., Xiao, G., Wei, D.: A novel reputation computing model for mobile agent-based e-commerce systems. In: 2008 International Conference on Information Security and Assurance (ISA 2008), pp. 253–260. IEEE (2008)
5. Granatyr, J., Botelho, V., Lessing, O.R., Scalabrin, E.E., Barthès, J.P., Enembreck, F.: Trust and reputation models for multiagent systems. ACM Comput. Surv. (CSUR) 48(2), 27 (2015)
6. Hacini, S., Guessoum, Z., Boufaida, Z.: TAMAP: a new trust-based approach for mobile agent protection. J. Comput. Virol. 3(4), 267–283 (2007)
7. Lin, C., Varadharajan, V.: MobileTrust: a trust enhanced security architecture for mobile agent systems. Int. J. Inf. Secur. 9(3), 153–178 (2010)
8. Lu, G., Lu, J., Yao, S., Yip, Y.J., et al.: A review on computational trust models for multi-agent systems. Open Inf. Sci. J. 2, 18–25 (2009)
9. Mittal, P., Mishra, M.K.: Trust and reputation-based model to prevent denial-of-service attacks in mobile agent system. In: Towards Extensible and Adaptable Methods in Computing, pp. 297–307. Springer (2018)
10. Samet, D., Ktata, F.B., Ghedira, K.: Security and trust on mobile agent platforms: a survey. In: Jezic, G., Kusek, M., Chen-Burger, Y.H.J., Howlett, R.J., Jain, L.C. (eds.) Agent and Multi-Agent Systems: Technology and Applications, pp. 42–52. Springer International Publishing, Cham (2017)
11. Tajeddine, A., Kayssi, A., Chehab, A., Artail, H.: PATROL: a comprehensive reputation-based trust model. Int. J. Internet Technol. Secur. Trans. 1(1–2), 108–131 (2007)
12. Ye, Z., Song, W., Ye, D., Yue, L.: Dynamic trust model applied in mobile agent. In: 2008 6th IEEE International Conference on Industrial Informatics, pp. 536–540. IEEE (2008)
13. Yu, H., Shen, Z., Leung, C., Miao, C., Lesser, V.R.: A survey of multi-agent trust management systems. IEEE Access 1, 35–50 (2013)
14. Zaïter, M., Hacini, S., Boufaïda, Z.: Trust metrics identification for mobile agents protection (2008)
Agent-Based Control of Service Scheduling Within the Fog Environment

Petar Krivic, Jakov Zivkovic, and Mario Kusek
Abstract Modern IoT application scenarios require lower processing latency than what the cloud architecture alone can offer, and the addition of a fog computing layer could upgrade the existing architecture to meet this requirement. Existing platforms that could be employed to enable distributed processing in a fog-to-cloud environment, including resources on the network's edge, in most cases utilize agent-based approaches. However, although they do offer a certain level of service management, they still do not provide a procedure to configure latency-dependent service rescheduling. Within this paper, we therefore present our basic agent-based model that enables latency-sensitive service management in a distributed IoT environment.
1 Introduction

From the beginning of their development, all Internet of Things (IoT) platforms had to fulfil two basic requirements: the ability to efficiently process high data volumes and the ability to offer eased resource scalability. Thus, the cloud computing architecture has gradually become a common approach to implementing such platforms, since it provides elasticity of the available computing resources, and nowadays the complete IoT data traffic is routed towards these remote data centres. However, modern IoT-based concepts present a new set of requirements that these platforms will have to conform to, and these primarily imply near real-time processing of sensor data to
make the necessary decisions and to perform the appropriate actuations [1]. Since the bottleneck of the existing approach is the communication between end devices and the distant cloud infrastructure, a basic solution to this issue is to bring processing resources closer to the relevant environments where the end devices are located.

The fog computing concept emerged as a solution that addresses these requirements by providing elastic resources and services at the edge of the network [2]. It is a highly virtualized platform that provides computing, storage, and networking services between end devices and traditional cloud computing data centres [3], and it thus enables the extension of cloud processing into local environments. The main prerequisite that enabled the idea of fog computing was the extensive growth of the processing and memory resources that constrained devices possess today, and their ubiquitous presence in most existing local networks. These devices offer computing resources in local environments that can be utilized as a virtual extension of cloud processing, handling requests locally and thereby resolving the latency issues caused by communication with distant cloud servers.

The aforementioned processing distribution that includes devices in dynamic local environments also requires a distributed management plane. Since fog computing involves devices that are often mobile, and thus not always available for processing, it is essential to ensure that the processing is properly orchestrated and controlled. To accomplish this goal, each processing fog node has to be able to communicate with the control plane, which should always be available and should have an overview of the entire processing environment. We therefore believe that the inclusion of fog computing in the existing cloud processing architecture should be realized as an agent-based system, where the orchestrator is placed in the cloud and communicates with agents installed on fog nodes across distributed local environments.

In this paper, we first give a brief overview of the available tools that are used to extend cloud computing towards local environments, with an emphasis on their usage of agent-based principles. Afterwards, we give an example of our basic agent-based system that controls fog processing with the ultimate goal of reducing latency in comparison to common processing in a remote cloud data centre.
2 Related Work

Ever since it emerged, the fog computing concept has been a popular research topic, and its name already suggests that it is an extension of cloud computing services to the edge of the network [4]. Because of its position in the network stack, close to the relevant environment where the end devices are also located, it is often presented as a technology that could enable modern latency-sensitive IoT services [5]. There are numerous definitions of this concept in the scientific literature, but the most comprehensive one has been published in [6], where fog computing is defined
as a horizontal, system-level architecture that distributes computing, storage, control, and networking functions closer to the users along a cloud-to-thing continuum.

The reduction of response latency in comparison to centralized cloud processing has been pointed out as one of the main benefits of fog computing. Thus, the majority of research targeting fog applications within the IoT area has focused on scheduling incoming requests across a distributed fog-to-cloud environment to maximize the latency reduction [7, 8]. Yousefpour et al. describe an algorithm in [7] that is utilized on fog nodes to decide whether to process a received request or to offload it to another fog node, with the ultimate goal of processing it in the most efficient manner. The decision is based on the response time of a fog node, which depends on several factors: the amount of computation a task requires, the queuing status of the fog node (in terms of current load), and its processing capability.

It is important to emphasize that the majority of research papers targeting fog computing within the IoT area focus on request scheduling and assume that the processing services are running on all available fog nodes. A scenario where not all services are installed on each available fog device is mentioned in [9], where the lookup time for the node running a specific service is included as a parameter in the described optimization algorithm. An example of dynamic service scheduling across fog nodes is described in [10], where the criterion for deploying or releasing a particular service on a specific fog node depends on the traffic demand for that service.

Although agents are not explicitly emphasized as a paradigm for implementing fog systems in the previously mentioned papers, this can be implied, since all fog nodes should be capable of mutual communication and autonomous decision-making. In [11], however, Bonomi et al. explicitly point out software agents (foglets) as a part of the fog orchestration layer that runs on every node of a fog platform, where they monitor and manage all activities of the host device. In [12], the authors also utilize an agent-based solution to implement a framework that enables the provision and management of virtual networks to interconnect different fog and cloud infrastructures and thus facilitate their mutual communication. Many more papers point out agent technology as a convenient approach to implementing the management plane on top of distributed autonomous and cooperative entities running at the edge of the network [13, 14]. It is thus evident that this paradigm will play an important role in enhancing the cloud architecture with the fog layer. However, dynamic service scheduling throughout the fog environment that reduces processing latency, which is what we intend to realize within this paper, has not been significantly covered in the inspected literature. Still, some platforms and technologies enable orchestrated service scheduling, and hence we first describe them in the following section with an emphasis on their usage of the agent-based approach, after which we give an example that demonstrates our idea of establishing service migration control within the fog environment to ultimately decrease processing latency.
3 Enabling Service Scheduling Technologies

The technical concepts that primarily eased the development of platforms enabling the implementation of fog-based systems were microservices and containerization. The microservice paradigm is intended for the development of modular systems constituted of independent, interconnected building blocks that realize their function in mutual collaboration. Because of its similarities with the agent-based paradigm and with IoT systems, which always imply distributed environments, this modular architecture offers a convenient approach to implementing an agent-based IoT system [15]. However, fog computing implies an environment of heterogeneous devices, and thus it was first necessary to enable the installation of microservice modules regardless of the hardware platform in order to utilize this architecture in the implementation of fog-based systems. Container-based virtualization resolved this requirement by offering abstractions directly for user processes; it also enabled eased application portability and component management, which resulted in the development of different container-based orchestration frameworks.

A basic example of a tool that could utilize fog resources as an extension of the cloud is Kubernetes [16]. It is a system that offers automated orchestration of application deployment in a distributed cluster environment. It also utilizes an agent-like approach, where the cluster is constituted of master nodes and worker nodes: the master receives the application deployment description, and it ensures that the worker nodes carry out the assigned application state. However, in the context of fog exploitation, this system utilizes only the computing resources of fog devices, and it cannot be manually forced to orchestrate services based on latency-reduction criteria.

Another system that enables the utilization of distributed heterogeneous devices on the edge of the network is ioFog [17]. It is an open-source edge computing platform for deploying, running, and networking distributed microservices at the edge. Its architecture is also agent-based: each device that is a part of its distributed edge computing network runs a daemon service called an Agent that is responsible for the microservices running on that particular node. Besides the Agent service, this edge computing network offers two more daemon services, the Controller and the Connector. The Controller provides the orchestration of different Agents, and the Connector is used to establish communication between different microservices in the established network. Within this system, the administrator can control which available Agent will run a specific microservice, so a certain level of service scheduling management can be realized. Still, the schedule is defined in a YAML file, and the latency criteria that trigger rescheduling again cannot be configured.

There is a large number of platforms similar to the previously described solutions that could be utilized to include the fog layer in a system implementation. Still, none of them offers the ability to configure performance criteria that would trigger service rescheduling. Thus, in the continuation of this paper, we first define the
agent roles that would have to be implemented to establish a control plane in a distributed IoT environment, and afterwards we describe and verify a procedure that demonstrates a basic strategy to accomplish latency-triggered service rescheduling.
4 Agent-Based Approach for Service Migration Control in the Fog Environment

To explore and verify the benefits of fog computing in IoT, we first had to set up an appropriate environment that includes a distributed processing plane along the fog-to-cloud continuum. Our main target was to confirm that, with the implementation of simple fog computing mechanisms, the latency between request and response would be reduced. Since the trigger for the utilization of fog processing is in most cases network-based, we had to enable the adjustment of network conditions, and we therefore decided to perform our experiment in a network-based simulator.

The microservice architecture has enabled a simplified implementation of systems that utilize distributed processing, as well as more dynamic service migration across a distributed computing infrastructure consisting of cloud computing enhanced with fog nodes in local environments. This property is highly exploitable in the context of IoT platforms, since data generated in local environments can then also be processed on fog nodes that run the necessary services within the same environment. To confirm this assumption, we designed an experiment whose idea was to enable service migration from the cloud towards the environments where the data-producing end devices are placed, and then to apply a strategy that triggers the service migration process based on latency criteria.
4.1 Role Distribution Among the Involved Entities

The high-level architecture of the simulated environment is depicted in Fig. 1. The entities involved in our simulation are described below:

– Sensor: a constrained end device that, in this scenario, forwards the measured data towards its gateway. Although modern sensor devices are increasingly capable of simple data processing, in our use case we left out this option to focus on fog computing principles.
– Gateway: a device that can communicate with constrained sensor devices and is also connected to the Internet. Its main role, in this case, is to pass the received data as a request towards the service that is able to process it.
– Fog node: a computing node that has enough processing and memory resources to run at least one microservice. Microservices are today usually exported as Docker images to achieve easier portability, so this device should have enough resources to run a Docker environment. A fog node and a gateway could be situated on the
Fig. 1 Simulated environment architecture
same device, but in our simulation we separated them onto different nodes to gain a better overview of the ongoing processes.
– Server: a point that has an "unlimited" amount of computing power and processing resources. It receives requests from the gateway, processes them, and returns a resulting response. It is the ultimate processing point, but it is also distant from the relevant environment, so each request has to cross a multi-hop distance in the public network to reach it. The overall latency thus increases, and together with the network congestion caused by the high amount of IoT data, this presents an issue for the further development of modern IoT solutions.

To perform the intended scenario on such entities, we had to implement a management plane that triggers and controls the necessary service migrations. In the distributed environment depicted in Fig. 1, at least three different modules in mutual interaction had to be implemented to accomplish the intended behaviour. Therefore, we designed an agent-based system that monitors the state of the environment and performs actions that ultimately result in reduced request-processing latency. The three interconnected agents in our system have the following roles:

– Gateway agent: a module deployed on the gateway node in Fig. 1 that, in addition to forwarding messages between the sensor and a processing node, also performs latency measurements on each forwarded request. Depending on the applied strategy and the available fog nodes, it then determines whether to trigger the migration of the involved processing microservice.
– Fog node agent: the first task of this agent module, deployed on the fog node in Fig. 1, is to send a registration message with its IP address to the network manager agent. After the registration, it can receive and answer ping requests, and the node hosting this agent becomes available for running the required microservices.
– Network manager agent: a module deployed on the cloud server in Fig. 1; it is a central network supervisor that holds the information about each fog node available for hosting the entangled microservices. In our experiment, it also hosts the software of each processing microservice, which is then delivered to the preferred fog node on request.
4.2 Simulation Scenario

As previously mentioned, the goal of our simulation was to verify that, with our agent-based approach for controlling service migration in the fog environment, the latency of request processing can be reduced. Figure 2 presents the baseline scenario of mutual agent interaction that is necessary to migrate a specific processing microservice from the cloud to the preferred fog node in the local environment.

At the beginning of the scenario, we assume that the microservice required for processing the sensor data is running in the cloud environment, and the gateway therefore forwards each sensor data point towards it. We also defined a latency threshold that is acceptable for the gateway to receive the response, so the gateway agent always tracks the time between sending a request and receiving a response. If the defined threshold is exceeded, the gateway agent initiates the lookup process to check whether there is an available fog node that could deliver the response more rapidly. When the gateway agent receives the list with the addresses of all available fog nodes from the network manager agent, it sends a ping request towards each listed fog node agent to measure its response time. After the gateway agent determines which of them took the least time to respond, it notifies the network manager agent about the preferred fog node and awaits the notification that the migration of the necessary microservice to the chosen node has succeeded. The network manager agent transfers the required software to the preferred fog node, upon which the transferred microservice is launched and is ready to process further requests from the gateway.

The described activities can be executed again if the performance of the chosen node deteriorates; the processing microservice can then again be launched in the cloud or transferred to another available fog node. We believe that this agent-based approach is thus the right basic strategy to design and implement an agile and easily
Fig. 2 Simulated communication sequence
manageable basic system that can then be adapted and shaped to fit a specific application scenario.
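A condensed sketch of the gateway agent strategy described above is given below; the networking operations are abstracted into injected callables, and all names are assumptions made for this illustration rather than the actual implementation.

```python
import time

def measure_latency(send_request) -> float:
    """Time a single request/response round trip; send_request is a
    zero-argument callable that blocks until the response arrives."""
    start = time.monotonic()
    send_request()
    return time.monotonic() - start

def maybe_migrate(send_to_current, threshold, list_fog_nodes, ping, request_migration):
    """Trigger the lookup-and-migrate sequence of Fig. 2 when the latency
    threshold is exceeded (lookup -> ping -> select -> notify)."""
    latency = measure_latency(send_to_current)
    if latency <= threshold:
        return None                       # keep the current processing node
    candidates = list_fog_nodes()         # ask the network manager agent
    if not candidates:
        return None
    best = min(candidates, key=ping)      # fastest-responding fog node agent
    request_migration(best)               # manager transfers the microservice
    return best
```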
4.3 Simulation Result

The scenario described in the previous subsection was imported into the network-based simulator IMUNES [18], which enables the emulation of IP-based networks. It offers the setup of different virtual nodes connected by adjustable network links, which was important in our simulation because we could adapt the link latency and verify the anticipated behaviour. As shown in Fig. 3, we connected the previously described nodes to establish an architecture similar to the one depicted in Fig. 1, and although the end device (sensor) was in this simulation also connected to the gateway through an IP connection, we simulated their behaviour as if they were mutually connected via some other constrained communication protocol. We then initialized the routing tables of each router to enable mutual communication among the described nodes and to establish the envisioned network behaviour, after which our network simulation was executed.

To test the intended scenario, we implemented the three described agent modules and two additional modules that simulate the behaviour of the remaining two entities of our system as Spring Boot applications [19], and deployed them on the corresponding virtual nodes in our network (gateway, network manager, fog nodes, cloud server, and end device in Fig. 3). As previously stated, the trigger that starts the migration process is increased latency in communication with the cloud server, so we had to simulate it to confirm the accuracy of our agent-based control system. IMUNES offers the option to adjust the link delay in the link configuration settings menu,
Fig. 3 IMUNES network topology setup
so we could easily raise the latency of the link towards the cloud server above the defined threshold that triggers the migration. We could then verify that the communication between the implemented agents shown in Fig. 2 was launched, and monitor the process of service migration towards fog_node1 in our simulation. After the migration, all sensor requests forwarded from the gateway were directed towards fog_node1, where they were successfully processed; we can thus confirm that the agents successfully carried out the actions necessary to overcome the simulated latency increase. With this basic example, we verified that collaborating agents at three different points of a characteristic fog architecture can be utilized to efficiently perform the service migrations necessary to improve system performance. It is also important to emphasize that this was a basic demonstrative research scenario, and before its application to a specific use case, it should be adapted and modified to accomplish the desired benefits.
5 Conclusion

In this paper, we gave a brief overview of the currently available platforms that enable the utilization of fog resources, with an emphasis on their agent-based attributes and their abilities to accomplish service scheduling. Since one of the main contributions of fog computing is latency reduction, and since none of the existing platforms can condition service scheduling on latency criteria, we described an agent-based system that provides this functionality. We presented a procedure defining the agent behaviour that has to be applied to accomplish service scheduling on latency criteria. In our experiment, we showcased a basic example that demonstrates and verifies the described behaviour; in future work, we plan to adapt this approach to some common IoT application scenarios and examine whether performance efficiency improves. In our experiment we utilized a simple strategy based on a predefined latency threshold, and in future work we also plan to test advanced and adaptive strategies to achieve performance improvement. We believe that with this paper we have demonstrated that the usage of the agent paradigm within the fog environment can reduce overall processing latency, and we have also set a strong base for further examining the benefits of fog computing and the agent-based approach within the IoT environment.
References

1. Dolui, K., Datta, S.K.: Comparison of edge computing implementations: fog computing, cloudlet and mobile edge computing. In: 2017 Global Internet of Things Summit (GIoTS), pp. 1–6. IEEE, Geneva, Switzerland (June 2017)
2. Yi, S., Li, C., Li, Q.: A survey of fog computing: concepts, applications and issues. In: Proceedings of the 2015 Workshop on Mobile Big Data, pp. 37–42. Association for Computing Machinery, Hangzhou, China (2015)
3. Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog computing and its role in the internet of things. In: Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, pp. 13–16. Association for Computing Machinery, Helsinki, Finland (2012)
4. Osanaiye, O., Chen, S., Yan, Z., Lu, R., Choo, K.R., Dlodlo, M.: From cloud to fog computing: a review and a conceptual live VM migration framework. IEEE Access 5, 8284–8300 (2017)
5. Li, J., Zhang, T., Jin, J., Yang, Y., Yuan, D., Gao, L.: Latency estimation for fog-based internet of things. In: 2017 27th International Telecommunication Networks and Applications Conference (ITNAC), pp. 1–6. IEEE, Melbourne, Australia (November 2017)
6. OpenFog Consortium: IEEE standard for adoption of OpenFog reference architecture for fog computing. IEEE Stand. 1934–2018, 1–176 (2018)
7. Yousefpour, A., Ishigaki, G., Jue, J.P.: Fog computing: towards minimizing delay in the internet of things. In: 2017 IEEE International Conference on Edge Computing (EDGE), pp. 17–24. IEEE, Honolulu, USA (June 2017)
8. Filip, I., Pop, F., Serbanescu, C., Choi, C.: Microservices scheduling model over heterogeneous cloud-edge environments as support for IoT applications. IEEE Internet Things J. 5(4), 2672–2681 (2018)
9. Zeng, D., Gu, L., Guo, S., Cheng, Z., Yu, S.: Joint optimization of task scheduling and image placement in fog computing supported software-defined embedded system. IEEE Trans. Comput. 65(12), 3702–3712 (2016)
10. Yousefpour, A., Patil, A., Ishigaki, G., Kim, I., Wang, X., Cankaya, H.C., Zhang, Q., Xie, W., Jue, J.P.: FOGPLAN: a lightweight QoS-aware dynamic fog service provisioning framework. IEEE Internet Things J. 6(3), 5080–5096 (2019)
11. Bonomi, F., Milito, R., Natarajan, P., Zhu, J.: Fog computing: a platform for internet of things and analytics. In: Big Data and Internet of Things: A Roadmap for Smart Environments, vol. 546, pp. 169–186. Springer, Cham (2014)
12. Moreno-Vozmediano, R., Montero, R.S., Huedo, E., Llorente, I.M.: Cross-site virtual network in cloud and fog computing. IEEE Cloud Comput. 4(2), 46–53 (2017)
13. Giordano, A., Spezzano, G., Vinci, A.: Smart agents and fog computing for smart city applications. In: Smart Cities, pp. 137–146. Springer, Cham (2016)
14. Wan, J., Chen, B., Wang, S., Xia, M., Li, D., Liu, C.: Fog computing for energy-aware load balancing and scheduling in smart factory. IEEE Trans. Ind. Inform. 14(10), 4548–4556 (2018)
15. Krivic, P., Skocir, P., Kusek, M., Jezic, G.: Microservices as agents in IoT systems. In: KES International Symposium on Agent and Multi-Agent Systems: Technologies and Applications, pp. 22–31. Springer, Cham (June 2017)
16. Kubernetes Homepage. https://kubernetes.io/. Accessed 31 Jan 2020
17. ioFog Homepage. https://iofog.org/. Accessed 31 Jan 2020
18. IMUNES Homepage. http://imunes.net/about. Accessed 23 Jan 2020
19. Spring Homepage. https://spring.io/projects/spring-boot. Accessed 3 Feb 2020
On the Conception of a Multi-agent Analysis and Optimization Tool for Mechanical Engineering Parts

Paul Christoph Gembarski
Abstract A key tool for mechanical engineering is the computer-aided design (CAD) system, used to determine the product shape and specify the necessary production information. Today's CAD systems include functionalities for knowledge-based engineering which allow the partial automation of design tasks and thus shorten lead time in product development. Another way to achieve this is concurrent engineering, which means involving experts from different domains early in the design process. In order to transfer and integrate this approach into knowledge-based engineering, the implementation of a multi-agent system is a possible way. Such systems consist of entities that are capable of autonomous action, interact intelligently with their environment, communicate and collaborate. This paper proposes a concept and shows a basic implementation of such a multi-agent system as an extension of standard CAD software. The agents take the roles of domain experts such as manufacturing technologists, jig designers, etc., and analyse as well as make suggestions for optimizing the design of mechanical engineering parts.
1 Introduction

The development of technical products is highly complex due to the variety of solution principles and design constraints [1]. Product development processes are mainly based on phase models or on the idea of the product life cycle [2]. In order to raise decision quality and to avoid later changes, design departments follow the concurrent engineering paradigm, whose goal is to involve experts from different functional units such as manufacturing, service engineering, etc. early in the design process [3]. Today, products are developed in computer-aided engineering environments that comprise, among others, computer-aided design (CAD) tools to define product shape
and production information [4]. Nowadays, these systems are able to store and process engineering knowledge such as design rules or dimensioning formulae, making knowledge-based engineering available outside specialized software or expert systems [5, 6]. Routine design tasks like product configuration are subject to automation [7, 8]. In contrast, creative design tasks, due to their ill-structured nature and the size of the solution space, especially in conceptual design, involve knowledge-based engineering systems mainly as decision- or problem-solving support and as a coworker of a human designer [9–11].

An implementation that satisfies the concurrent nature of the design process and multi-objective decision-making is basically possible based on a multi-agent system [12]. These consist of entities that are capable of autonomous action, interact intelligently with their environment, communicate and collaborate [13]. Reported applications range from the control of production systems over the modelling and simulation of complex systems like smart grids to workflow management [14]. This paper derives and discusses a concept for such a multi-agent system as an extension of a standard CAD software. The agents take the roles of domain experts to analyse and make suggestions for the optimization of the design of mechanical engineering parts.

The paper is organized as follows: Sect. 2 presents the theoretical background and related work regarding knowledge-based engineering and multi-agent systems related to product development. Afterwards, in Sect. 3, the concept for an agent-based analysis and optimization tool is derived and discussed, before Sect. 4 shows part of the implementation. The final Sect. 5 presents the discussion and conclusion and points out future research potential.
2 Theoretical Background and Related Work

Knowledge-Based Engineering Systems. Knowledge-based engineering (KBE) is a set of techniques to establish product models that are easy to adapt to new requirements and which can be used for the (partial) automation of design tasks [7]. An underlying concept of KBE is the implementation of two very basic kinds of knowledge [15]: (1) domain knowledge constitutes a solution space in which a particular solution for a defined set of requirements can be found; it is modelled, e.g. by parameter constraints, formulae and design rules for product models [10]. (2) Control knowledge, like rule- or model-based reasoning, states how this solution space is explored [5].

KBE systems can be seen as an evolutionary step in computer-aided engineering, created by the combination of object-oriented programming, artificial intelligence and CAD [16]. They are built from the basic components knowledge base, inference engine and interfaces. The knowledge base is the storage for the domain and control knowledge; the inference engine applies the knowledge base to the given problem by applying inference and task knowledge [17]. Optional components are dialogues for knowledge acquisition or an explanatory component that presents the individual conclusions the system has drawn to a user in an understandable way [6]. In addition, interfaces exist for user interaction and to other hardware, software
or data storage systems. One of these interfaces usually integrates CAD systems or geometry models [9].

Multi-Agent Systems in Product Development. Introduced as a concept in the early 1980s, agent-based modelling and simulation as well as multi-agent systems have been proposed for various applications [12–14]. Although there is no standardized definition, the literature widely agrees that an agent is a software entity that operates autonomously, without the intervention of a human user, to complete a task [18]. Therefore, the agent needs the ability to perceive the environment relevant to the task, to (re)act to certain circumstances and to know about the consequences, as well as to communicate and collaborate with other parts of the system (i.e. other agents or the user) [19]. Related to product development, four core application areas for agent-based systems may be identified from the literature:
1. Workflow management in collaborative engineering, e.g. [20, 21]
2. Simulation of system behaviour and control circuits, e.g. [14]
3. Synthesis or reengineering of design artefacts, e.g. [19, 22–24]
4. Analysis of product characteristics according to design guidelines, e.g. [12, 18].
Referring to the last application area, based on the analysis, an optimization is usually either proposed to the user or executed by the system. However, the majority of approaches focus on just a narrow set of design guidelines, such as, in design for recycling, "reduce number of joints", "reduce number of parts" or "reduce number of different materials in a sub-assembly". Such guidelines are easy to check since they can be read from the feature tree or the parameters of the CAD model, usually without additional contextual information [25]. Looking at the concurrent engineering paradigm, this corresponds to the perspective of an eco-design expert as support for the designer. Only single implementations show the collaboration of different agents as domain experts taking different viewpoints. Additionally, the systems usually lack an instance of the designer himself, who can negotiate with the others depending on which optimization direction is to be taken and what consequences occur.
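As an illustration of how such structure-level guidelines can be checked without contextual information, consider the following hypothetical sketch; the part data model and the limit values are assumptions and do not correspond to a real CAD API.

```python
from dataclasses import dataclass

@dataclass
class Part:
    """Simplified stand-in for data readable from a CAD feature tree."""
    name: str
    material: str
    joints: int

def check_guidelines(parts: list[Part], max_parts=10, max_materials=2, max_joints=12):
    """Return human-readable findings for design-for-recycling guidelines
    that can be read straight from the model structure."""
    findings = []
    if len(parts) > max_parts:
        findings.append(f"Reduce number of parts ({len(parts)} > {max_parts}).")
    materials = {p.material for p in parts}
    if len(materials) > max_materials:
        findings.append(f"Reduce number of different materials ({len(materials)}).")
    joints = sum(p.joints for p in parts)
    if joints > max_joints:
        findings.append(f"Reduce number of joints ({joints} > {max_joints}).")
    return findings
```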
3 Conceptualizing an Agent-Based Analysis and Optimization Tool for Mechanical Engineering Parts

For the system design, the author follows the methodological suggestions for Design Science Research (DSR) by Sonnenberg and vom Brocke [26]. Figure 1 shows a DSR process involving a design-evaluate-construct-evaluate pattern. The ex-ante evaluation aims at validating the design of an artefact and is presented in this paper. The input for the Evaluation 1 activity is, e.g. the observation of a problem or a research need, design objectives and a design theory or an existing solution to a practical problem. The output is the justified problem statement and justified design objectives. Possible evaluation methods are, e.g. literature reviews,
Fig. 1 Evaluation activities within a DSR process (acc. to [26])
expert interviews or surveys. The result of the design activity is the design specification. Evaluation 2 is, e.g., a proof-of-concept demonstrated on a predefined test case. The goal is to prove the applicability of the chosen design methods and tools.
3.1 Evaluation 1: Design Review

To understand the problem-solving activities that are necessary for the optimization of subtractively manufactured engineering parts, the author managed and observed several design reviews. As an example, to give an impression of the complexity of the decision-making process, a discussion from a design review on the mounting bracket depicted in Fig. 2 is shown below.

Length and Shape of Slot A. The production planning engineer proposed that Slot A could be extended to full length to ease the manufacturing operation. The designer rejected this, since the bracket is manufactured from an extrusion profile without thermal treatment; from his experience, the clevis would bend up due to residual stresses, which might have an impact on the tolerance concept. The planner agreed, but noted that two radii would then have to be added at the back face of Slot A in order to manufacture
Fig. 2 Mounting bracket with annotated faces and features
it with an end mill. The designer argued that he needed a certain face length for a function, but Hole B could be moved a few millimeters to the front and the slot length could be increased by 4 mm. A 6 mm radius was added, and the availability of a 12 mm end mill with a long shank was finally checked in the tool management system.

Slope of Faces No. 1 and 2. Since it was already clarified during the review that the semi-finished material is an extrusion profile of that shape, the slope itself was not considered to be a problem. But regarding the countersinks of the four holes, the production planning engineer identified a challenge: since the surface of the face will be hard and potentially rough, the tool for countersinking might be deflected. In order to avoid a tool change, he suggested using the same tool as for the slot to mill pockets instead of the countersinks. The designer agreed. At this point of the discussion, the fixture designer asked about the lot size; since it is small, no additional fixtures will be designed, and the bracket can be clamped in a standard machine vice.

In reflection of the design reviews, the following basic requirements for an agent-based analysis and optimization tool can be summarized:

• The tool must be fully integrated into the designer's workflow and thus work directly on the CAD files.
• In addition to geometry, information about loads, surface quality and tolerances is needed to reason about machining operations and must be processed.
• For machining operations, a mapping between machine and toolset must be included in order to reason about setup planning and achievable tolerances.
• The system must work iteratively: if an adaptation is proposed, the new situation has to be analysed accordingly.
• All conclusions drawn by the system must be explainable, and consequences in the dimensions of function fulfilment, part mass, number of machining operations and number of tool changes must be shown qualitatively.
• The user is an active part of the system and can be asked for additional relevant information.
3.2 Design

The use case diagram in Fig. 3 shows part of the activities that the system has to perform, derived from the Evaluation 1 activity. Since the CAD model is the basis of the system, the use of Autodesk Inventor is set as a constraint, which enables the inclusion of object and API catalogues at an early stage. Additionally, representation mechanisms like the feature tree, parameter tables and the mathematical representation of the model's shape (a list of single faces, enriched with ID, type, parent feature, bounding box dimensions and technical information like surface quality or tolerances) form the basis for a part of the knowledge repository, namely the part repository (Fig. 4). This is complemented by a basic repository for the manufacturing
Fig. 3 Use case diagram (excerpt)
Fig. 4 Data and knowledge repositories (Overview)
knowledge. Necessary components are a translation table from feature to machining operation (chamfer → milling with a corresponding cutter head or, e.g., 4-axis machining) with alternatives and priorities, and the list of available tools. Third, the knowledge bases and inferences for the agents are modelled. As implementation, a rule base was chosen, since production rules are clearly readable
and single-agent rule bases may be split up and organized by a meta-model if they grow too large. In addition, the behaviour of the single agents is formalized: optimization criteria and event triggers are specified, e.g., on a change of the feature tree due to an accepted design adaptation, identify the new manufacturing sequence, and so on. In this context, special attention must be paid to the agent representing the designer. Particularly the perception of load application and the resulting stress distribution of the part are critical to encode as rules. The integration of a corresponding simulation tool, such as a finite element analysis coupled to the CAD system, is possible in principle but is not the aim of the prototype.
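To make the structure of this repository concrete, the following is a minimal Python sketch of such a feature-to-operation translation table with alternatives and priorities. It is an illustration only: the prototype described below stores this knowledge in MS Excel and VB.net, and all feature names, operations and priority values here are hypothetical.

```python
# Hypothetical sketch of a feature-to-machining-operation translation table
# with alternatives and priorities (lower number = preferred operation).
TRANSLATION_TABLE = {
    "chamfer": [("milling_cutter_head", 1), ("4_axis_machining", 2)],
    "fillet":  [("radius_milling", 1), ("ball_end_milling", 2)],
    "hole":    [("drilling", 1), ("helical_milling", 2)],
}

# Available tools per operation (tool diameters in mm, illustrative values).
AVAILABLE_TOOLS = {
    "radius_milling": [6.0, 12.0],
    "drilling": [5.0, 8.5, 12.0],
}

def candidate_operations(feature_type: str) -> list:
    """Return machining operations for a feature type, best priority first,
    keeping only operations for which at least one tool is available."""
    options = sorted(TRANSLATION_TABLE.get(feature_type, []), key=lambda o: o[1])
    return [op for op, _ in options if AVAILABLE_TOOLS.get(op)]

print(candidate_operations("fillet"))  # ['radius_milling']
```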
3.3 Proof-of-Concept

The proof-of-concept implements the analysis and optimization of single milling and drilling features. The prototype system was implemented in VB.net, since it offers good interoperability with the CAD system Autodesk Inventor. Part of the knowledge repository was realized in MS Excel spreadsheets for quick maintenance; the agent rule bases are hardcoded in VB.net. As an example, Fig. 5 shows the flowchart for the detection and processing of manufacturing operations with radius milling cutters.

The above process comprises activities performed by the setup planning and the tool management agent. Based upon the features of the CAD model (here fillets), the setup planner determines the position of a specific feature relative to the bounding box in order to perceive technological boundary conditions, i.e., in this case, the necessary tool length for cutting from the top or from the side. After this step, both agents negotiate about the options based on tool availability. If there is no tool available, the tool manager will suggest bigger and smaller alternatives (or, in case of
Fig. 5 Flowchart for fillet feature detection and processing
the fillet, a change to a chamfer). If a set of different tools is available, the agents create a priority list and, based upon this, determine the manufacturing operation and setup. When this has been performed for all features in the CAD feature tree, the setup planning agent starts to minimize setup operations (e.g., it organizes all cutting operations from the top of the part without relocating) and tool changes. In doing so, the agent is allowed to propose changes to the design agent.
4 Test Case

As a test case, a clevis was chosen which has features similar to the bracket described above (Fig. 6). After the part file had been loaded into the CAD system Inventor, the analysis and optimization tool was started. It first translated the geometric data into the part repository. The designer was then asked to supplement the face list with data about applied forces, surface quality, etc. Afterwards, the agents were activated, which in the prototype system worked on the chamfer and fillet features. After the agents had adapted the size of the upper fillet, the setup planning agent determined a first manufacturing sequence and stored the necessary toolset. After this, the setup planner minimized the set of different tools needed for machining using its rule base. As an example, the rule "IF count_chamfer_operations > count_fillet_operations THEN replace fillet with chamfer of same size" starts a negotiation with the design agent. Since the fillet is the superior design regarding strength aspects, it is kept and the suggestion is rejected.
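A minimal Python-style sketch of this production rule and the resulting negotiation is given below; the actual rule bases are hardcoded in VB.net, and all function and variable names here are illustrative.

```python
def setup_planner_proposal(count_chamfer_operations, count_fillet_operations):
    # The quoted rule: IF count_chamfer_operations > count_fillet_operations
    # THEN propose replacing the fillet with a chamfer of the same size.
    if count_chamfer_operations > count_fillet_operations:
        return "replace_fillet_with_chamfer_of_same_size"
    return None

def design_agent_decision(proposal, fillet_strength_relevant):
    # The design agent rejects the change if the fillet is the superior
    # design with respect to strength, as in the clevis test case above.
    return proposal is not None and not fillet_strength_relevant

proposal = setup_planner_proposal(count_chamfer_operations=3,
                                  count_fillet_operations=1)
accepted = design_agent_decision(proposal, fillet_strength_relevant=True)
print(proposal, accepted)  # proposal is made but rejected: the fillet is kept
```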
Fig. 6 Annotated clevis in Autodesk Inventor
Since the outer chamfer is located near the bounding box of the part, the setup planning agent asked the designer whether it is a cosmetic feature and whether it can be omitted. The agents' conversation is stored directly in the CAD model (Fig. 6) and can be checked by a human designer.
5 Discussion and Conclusion

The implemented prototype demonstrated the principles of operation of the designed agent-based analysis and optimization tool. In the next steps, the concrete construction of the system takes place, implementing more features and more general feature recognition abilities and adding more rules to the agent knowledge bases. Setup planning remains challenging to implement, as does the resulting context dependency, e.g., for fixture design.

From a scientific point of view, two major questions offer potential for further research. First, the features involved here are limited to prismatic geometries, which makes the check against manufacturing restrictions comparatively easy. For casting, e.g., this is different, since freeform surfaces occur and design guidelines for casting (like avoiding material accumulations) require levels of perception that cannot be realized based on the feature tree or a face list. A corresponding analysis agent must either rely on analogous models or possess more sophisticated artificial intelligence abilities; the implementation of image processing of part intersections might be a feasible way here. Second, the knowledge is modelled explicitly in the above example, so if a new tool, e.g., an end mill with another diameter, has to be added, this must be done not only in the tool list but also in the corresponding agent rule base. In order to follow the paradigm of intelligent agents, it would be beneficial to program the ability of the agents to learn and modify the rule bases themselves. A possible way to cope with this issue might be coupling a case-based reasoning system.
References 1. Pahl, G., Beitz, W., Feldhusen, J., Grote, K.-H.: Engineering Design—A Systematic Approach, 3rd edn. Springer, Berlin, Heidelberg (2007) 2. Ullman, D.G.: The Mechanical Design Process. McGraw Hill Higher Education, New York City (2009) 3. Vajna, S. (ed.): Integrated Design Engineering. Springer, Berlin Heidelberg (2014) 4. Stroud, I., Nagy, H.: Solid modelling and CAD systems. Springer, Berlin Heidelberg (2011) 5. Gembarski, P.C.: Komplexitätsmanagement mittels wissensbasiertem CAD—Ein Ansatz zum unternehmenstypologischen Management konstruktiver Lösungsräume. TEWISS, Garbsen (2019) 6. Hopgood, A.A.: Intelligent Systems for Engineers and Scientists. CRC Press, Boca Raton (2016)
7. Verhagen, W.J.C., Bermell-Garcia, P., van Dijk, R.E.C., Curran, R.: A critical review of knowledge-based engineering: an identification of research challenges. Adv. Eng. Inform. 26(1), 5–15 (2012)
8. Felfernig, A., Hotz, L., Bagley, C., Tiihonen, J.: Knowledge-Based Configuration: From Research to Business Cases. Morgan Kaufmann, Burlington (2014)
9. Skarka, W.: Application of MOKA methodology in generative model creation using CATIA. Eng. Appl. Artif. Intell. 20(5), 677–690 (2007)
10. Gembarski, P.C., Bibani, M., Lachmayer, R.: Design catalogues: knowledge repositories for knowledge-based-engineering applications. In: Marjanović, D., Štorga, M., Pavković, N., Bojčetić, N., Škec, S. (eds.) Proceedings of the DESIGN 2016 14th International Design Conference, pp. 2007–2016 (2016)
11. Li, H., Gembarski, P.C., Lachmayer, R.: Template-based design for design co-creation. In: Proceedings of the 5th International Conference on Design Creativity (ICDC2018), Bath, United Kingdom, 31.01.-02.02.2018, pp. 387–394 (2018)
12. Sun, J., Zhang, Y.F., Nee, A.Y.C.: A distributed multi-agent environment for product design and manufacturing planning. Int. J. Prod. Res. 39(4), 625–645 (2001). https://doi.org/10.1080/00207540010004340
13. Jennings, N.R., Wooldridge, M.J. (eds.): Agent Technology—Foundations, Applications, and Markets. Springer, Berlin Heidelberg (1998)
14. Taylor, S. (ed.): Agent-based Modeling and Simulation. Palgrave Macmillan, Basingstoke (2014)
15. Schreiber, G., Wielinga, B., de Hoog, R., Akkermans, H., Van de Velde, W.: CommonKADS: a comprehensive methodology for KBS development. IEEE Expert 9(6), 28–37 (1994)
16. Chapman, C.B., Pinfold, M.: The application of a knowledge based engineering approach to the rapid design and analysis of an automotive structure. Adv. Eng. Softw. 32(12), 903–912 (2001)
17. Milton, N.R.: Knowledge Technologies. Polimetrica sas, Monza (2008)
18. Dostatni, E., Diakun, J., Grajewski, D., Wichniarek, R., Karwasz, A.: Multi-agent system to support decision-making process in design for recycling. Soft. Comput. 20(11), 4347–4361 (2016)
19. Fougères, A.J., Ostrosi, E.: Intelligent agents for feature modelling in computer aided design. J. Comput. Design Eng. 5(1), 19–40 (2018)
20. Huang, C.J., Trappey, A.J., Yao, Y.H.: Developing an agent-based workflow management system for collaborative product design. Ind. Manag. Data Syst. 106(5), 680–699 (2006)
21. Juan, Y.C., Ou-Yang, C., Lin, J.S.: A process-oriented multi-agent system development approach to support the cooperation-activities of concurrent new product development. Comput. Ind. Eng. 57(4), 1363–1376 (2009)
22. Huang, C.C.: A multi-agent approach to collaborative design of modular products. Concurr. Eng. 12(1), 39–47 (2004)
23. Chen, Y., Liu, Z.L., Xie, Y.B.: A multi-agent-based approach for conceptual design synthesis of multi-disciplinary systems. Int. J. Prod. Res. 52(6), 1681–1694 (2014)
24. Plappert, S., Gembarski, P.C., Lachmayer, R.: The use of knowledge-based engineering systems and artificial intelligence in product development: a snapshot. In: Świątek, J., Borzemski, L., Wilimowska, Z. (eds.) Information Systems Architecture and Technology: Proceedings of 40th Anniversary International Conference on Information Systems Architecture and Technology—ISAT 2019. Advances in Intelligent Systems and Computing, vol. 1051, pp. 62–73
25. Gembarski, P.C., Sauthoff, B., Brockmöller, T., Lachmayer, R.: Operationalization of manufacturing restrictions for CAD and KBE-systems.
In: Marjanović, D., Štorga, M., Pavković, N., Bojčetić, N., Škec, S. (eds.) Proceedings of the DESIGN 2016 14th International Design Conference, pp. 621–630 (2016)
26. Sonnenberg, C., vom Brocke, J.: Evaluations in the science of the artificial—reconsidering the build-evaluate pattern in design science research. In: Peffers, K., Rothenberger, M., Kuechler, B. (eds.) Design Science Research in Information Systems. Advances in Theory and Practice. DESRIST 2012. Lecture Notes in Computer Science, vol. 7286, pp. 381–397
Predicting Dependency of Approval Rating Change from Twitter Activity and Sentiment Analysis

Demijan Grgić, Mislav Karaula, Marina Bagić Babac, and Vedran Podobnik
Abstract In recent years, multi-agent systems have been augmented with alternative social network data, such as Twitter, for performing target inference. Public figures tracked by agent systems usually carry a key social value in social networks, and this value can drift together with public approval driven by social network communication. To test the connection between public approval and social communication, we analyze presidential Twitter account activity during the two-year period 2017–2018. Sentiment analysis was applied to the processed tweets in order to test the data set's predictive power for a significant presidential job approval rating change 7, 14 and 21 days (1, 2 and 3 weeks) ahead. To this end, five different supervised machine learning algorithms are used: Random Forest, Xgboost, AdaBoost, AdaBag and ExtraTrees. Results indicate that the voter approval rating is slightly predictable from Twitter activity and emotional sentiment analysis, possibly consistent with the human tendency for positive news and outcomes to resonate with people for a much shorter period than negative ones.
D. Grgić (B) · M. Karaula · M. Bagić Babac · V. Podobnik
Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
e-mail: [email protected]
URL: https://www.sociallab.fer.hr

1 Introduction

Social media such as the social network Twitter are nowadays progressively used by people as a platform to voice their thoughts, feelings and points of view in short text messages. Consequently, inquiry into the communication patterns of networked human populations has become an increasingly important research agenda. Even though a single short text does not provide much insight by itself, when the data is aggregated, the sheer volume means that the information that can be extracted tends to be quite valuable. As a matter of fact, to attest to the
aforementioned volume of generated data: in the case of Twitter, statistics for 2019 report a total of 326 million monthly users, 100 million daily users and 500 million tweets sent per day, which provides a potentially huge data set no matter what the research subject is.

On the other hand, detecting sentiments in such written text, which is a content-based classification problem involving concepts from the domains of natural language processing and machine learning, has an extensive range of applications. Sentiment analysis is defined as an interdisciplinary area comprising natural language processing, text analytics and computational linguistics to identify the sentiment expressed in a text [27]. Further definitions of sentiment analysis might also use the terms opinion mining and/or subjectivity analysis [23]; however, recent research in the area focuses on the specific application of classifying texts according to their polarity (positive, negative or predominantly neutral). Complex or fine-grained sentiment analysis extends beyond the detection of polarity and deals with deep sentiment analysis that measures the causal relationships between the author's attitudes, value structures and personality, and might include the detection of emotions and of polarity change over time [12].

In this paper, the possibility of using large volumes of short texts (tweets from Twitter) and sentiment analysis to predict a significant presidential job approval rating change is explored. The paper contributes to the state of the art by showing that there is a slight but significant predictability during the analyzed period when using Twitter activity and the daily emotional sentiment context to predict a future significant change in presidential job approval. The rest of the paper is organized as follows: Sect. 2 (Related work) gives an overview of selected related work with emphasis on applied sentiment analysis and the predictive power of Twitter, Sect. 3 (Data set analysis) explains the data set used and the variable transformations, Sect. 4 (Methodology and results) breaks down the methodology and the obtained results, and Sect. 5 (Conclusion) provides the conclusion with emphasis on possible future improvements.
2 Related Work

Several papers show predictability achieved by using sentiment analysis, be it on general text or on social network data. The papers cited in this section are significant because they show the large potential of applying sentiment analysis to social data to reveal and predict trends in certain areas of life. For example, Mewari et al. [16] and Sharif et al. [24] conducted comprehensive studies to identify trends in opinion mining and sentiment analysis in social networks. A lexicon method, similar to the one used in the processing stages of this paper, has been used to analyze the role of negation words in a sentence by Mumtaz et al. [19] and Bhardwaj et al. [3].
O'Connor et al. [21] found that a fairly simple sentiment detector based on sentiment word frequencies in analyzed Twitter messages correlates with public opinion time series; their work shows that Twitter data is related to consumer confidence and presidential job approval poll estimates. Pagolu et al. [22] found a strong correlation between the rise or fall of company stock prices and the general public's opinions or emotions about the analyzed company expressed on the Twitter platform. The rise or fall of stock prices can be compared with the rise or fall of a presidential job approval rating, which provides a positive indication that this paper can achieve significant results by looking at the connection between the presidential job approval rating and Twitter data.

Logunov and Panchenko [15] constructed various Twitter sentiment series using emoticons as proxies for happy, sad, joy and cry emotions and found that all indices exhibit significant day-of-the-week effects; in particular, relatively more emoticons are used during Friday and the weekend than on the other days of the week. They also constructed an index aggregating the individual emoticon indices and showed that the aggregate index tracks important world events well, including holidays and natural disasters. Jin and Zafarani [13] investigated the relative utility of social information at the ego, triad, community and whole-network level for sentiment prediction. They found that sentiments are predictable using the analyzed structural properties of social networks alone, and showed that, in an environment of scarce computational resources, one can predict sentiments with a high degree of accuracy using only the four network properties analyzed in their paper.

Munjal et al. [20] presented a visualization-based framework that is able to detect different opinions in tweets by applying lexical analysis. The approach is demonstrated on an example from the film industry, where different views and reviews affect the cumulative opinion about a particular movie and hence the final box office collection; the obtained results correlate with the box office collections, which is a crucial step towards augmenting existing data analysis methods with analytical measures of visualizing real-time user opinion. It has also been shown, by Tumasjan et al. [26], that Twitter data analysis can serve as a relevant real-time indicator of specific political sentiments. Bhatia et al. [4] adopted a method to fetch data in real time directly from Twitter, perform lexicon-based sentiment analysis on the streamed data and determine positive, negative and neutral sentiments; they found it can potentially be used to find the number of people who are in favor of or against an issue, which also helped to predict the social pattern and social interest of every user. Soni and Mathai [25] proposed a hybrid approach that first uses unsupervised learning (k-means clustering) to cluster the tweets and then applies supervised learning methods such as decision trees and support vector machines for the final classification, so that companies can gain augmented insights from Twitter sentiment for better management decisions in the future.
Table 1 Descriptive statistics features of presidential Twitter daily activity on a beforehand processed data set

Daily tweet number    Value
Min                   1
1st quartile          4
Median                7
Mean                  7.5
3rd quartile          10
Max                   24
3 Data Set Analysis

Presidential daily tweets have been taken from the Trump Twitter Archive website (www.trumptwitterarchive.com). Only the tweets between 2017 and 2018 have been used. The year 2016 was avoided considering that it was an election year and that January 20, 2017, is when Donald Trump was inaugurated as the 45th president of the United States, while 2019 was still ongoing at the time of data acquisition. For individual tweets, the preprocessing included the following actions:
– removing all non-ASCII characters;
– removing links with http and https prefixes;
– removing Twitter usernames;
– removing English stop words;
– removing any extra spaces and numbers;
– casting all words to lowercase; and
– tokenizing words.
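A minimal Python sketch of this preprocessing chain is shown below; the stop word list here is a tiny placeholder, and the paper does not state which implementation was actually used.

```python
import re

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "at"}  # placeholder subset

def preprocess_tweet(text):
    """Apply the cleaning steps listed above to a single tweet and return tokens."""
    text = text.encode("ascii", errors="ignore").decode()   # drop non-ASCII characters
    text = re.sub(r"https?://\S+", " ", text)                # drop http/https links
    text = re.sub(r"@\w+", " ", text)                        # drop Twitter usernames
    text = re.sub(r"\d+", " ", text)                         # drop numbers
    text = re.sub(r"\s+", " ", text).strip().lower()         # extra spaces, lowercase
    return [t for t in text.split() if t not in STOP_WORDS]  # tokenize, drop stop words

print(preprocess_tweet("Check the news at https://example.com @user 500 times"))
# ['check', 'news', 'times']
```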
After preprocessing, all tokens have been aggregated at the daily level. In total, 2 years of daily data were used as input. The most significant descriptive statistics of this data set, based on the daily tweet analysis, can be seen in Table 1. It is worth noting that all retweets made by the president were removed from the data set, because they do not originate from his Twitter account and can have a different sentiment strength. For sentiment analysis, the NRC Word-Emotion Association Lexicon [17, 18] was used, which is a list of English words and their sentiment associations with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy and disgust) and two key polarity sentiments (negative and positive). In total, therefore, fourteen feature variables are extracted from the presidential tweets data set and used as predictors, including general Twitter data (e.g. average daily retweets) and sentiment variables created from the eight basic emotions and two sentiments derived from the NRC Word-Emotion Association Lexicon.
Table 2 Feature descriptions for input into machine learning algorithm

Variable name       Explanation
Tweet_numb          Total number of tweets on a given day left after processing
Is_Android_n        Total number of Android tweets (direct tweets from president)
Retweet_count_Avg   Average retweets per tweet during the day by other people
Favorite_count_Avg  Average favorites per tweet during the day by other people
Anger_R             Total anger words divided by total number of tweets in a day
Anticipation_R      Total anticipation words divided by total number of tweets (day)
Disgust_R           Total disgust words divided by total number of tweets in a day
Fear_R              Total fear words divided by total number of tweets in a day
Joy_R               Total joy words divided by total number of tweets in a day
Negative_R          Total negative words divided by total number of tweets in a day
Positive_R          Total positive words divided by total number of tweets in a day
Sadness_R           Total sadness words divided by total number of tweets in a day
Surprise_R          Total surprise words divided by total number of tweets in a day
Trust_R             Total trust words divided by total number of tweets in a day
Each word in the lexicon is associated with one or more emotions or sentiments (binary association), and an inner join is performed between the words used in the tweets of one day and the NRC lexicon. This gives a binary value of association between the words used in a day and the individual emotions. Each sentiment variable is then calculated as a ratio: the total number of words for a given sentiment or emotion on one day divided by the total number of tweets for that day left after preprocessing. This gives a normalized strength of the emotion comparable across days (e.g. days that have a higher proportion of positive words relative to the total daily number of tweets indicate a more positive daily outlook). The full list of features is given in Table 2.
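This join-and-normalize step can be sketched in Python as follows; the miniature lexicon and token rows are illustrative stand-ins for the NRC lexicon and the processed tweets (assuming pandas):

```python
import pandas as pd

# Miniature stand-in for the NRC lexicon: one row per (word, emotion) association.
nrc = pd.DataFrame({"word":    ["win", "win",      "fear", "loss"],
                    "emotion": ["joy", "positive", "fear", "negative"]})

# Tokens left after preprocessing, one row per (day, word).
tokens = pd.DataFrame({"day":  ["2017-01-21"] * 3,
                       "word": ["win", "fear", "great"]})

tweets_per_day = pd.Series({"2017-01-21": 7.0})  # tweets left after processing

# Inner join between the day's words and the lexicon, then divide the daily
# emotion counts by the daily tweet count to obtain the normalized ratios
# (Joy_R, Fear_R, ... in Table 2).
ratios = (tokens.merge(nrc, on="word", how="inner")
                .groupby(["day", "emotion"]).size()
                .div(tweets_per_day, level="day"))
print(ratios)
# day         emotion
# 2017-01-21  fear        0.142857
#             joy         0.142857
#             positive    0.142857
```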
Fig. 1 Presidential job approval rating estimate over the course of 2017 and 2018

Table 3 Target variable descriptions for input into machine learning algorithm

Target        Explanation
T-7 day Q1    Binary label: 7 day change is lower than or equal to the 1st quartile
T-7 day Q3    Binary label: 7 day change is higher than or equal to the 3rd quartile
T-14 day Q1   Binary label: 14 day change is lower than or equal to the 1st quartile
T-14 day Q3   Binary label: 14 day change is higher than or equal to the 3rd quartile
T-21 day Q1   Binary label: 21 day change is lower than or equal to the 1st quartile
T-21 day Q3   Binary label: 21 day change is higher than or equal to the 3rd quartile
On the other hand, the presidential job approval rating, which is the basis for creating the 6 target variables, has been taken from the poll-aggregating and estimating website FiveThirtyEight (www.fivethirtyeight.com); the estimate is shown in Fig. 1. Since the dispersion of the presidential approval rating polls was the lowest for the "voters and likely voters" category, only estimates for that category were taken. Furthermore, this category is also the most interesting one to analyze, because voters' opinions and presidential job approval ratings have a direct connection to the presidential elections and their results.

The target variable (approval rating) has been transformed so that it represents the change of job approval between the tweeted day and some day in the future. Future daily changes of the presidential job approval over 7 (T7), 14 (T14) and 21 (T21) days were tested, representing 1 up to 3 weeks of future approval change (the first difference of two data points). This procedure effectively transforms the problem from time-series forecasting into a supervised learning problem. Note that we impose stationarity on the target variable by analyzing the difference, and not the absolute approval value, to remove potential issues with time-series forecasting. Furthermore, all feature variables are only of lag 1, to remove potential complex information spillover. After the change has been calculated, the numeric variable is transformed into a classification problem by classifying whether the change during the observed target period was a significantly positive or a significantly negative one. To combat the noise and small changes that could reduce the predictive power, we concentrate on the 1st and 3rd quartile only for each change in the target variable; that is, we flag a change as significant only if it is below the 1st or above the 3rd quartile in size of all changes of the job approval rating during the observed period. The binary classification target variables are shown in Table 3, with the key cutoff logic as follows:

– Q1: yes if the change in presidential approval is lower than the 25% quantile within the target period, and otherwise no; and
– Q3: yes if the change in presidential approval is higher than the 75% quantile within the target period, and otherwise no.

The relevant cutoffs of the daily change in the target variable for every time span (1–3 weeks), in percentage points, are given in Table 4.
Table 4 Target variable cutoff points of the approval rating change for each time span (7, 14 and 21 day future change of the approval rating) (in %)

      T-7 day   T-14 day   T-21 day
Q1    −0.57     −0.84      −0.96
Q3     0.51      0.75       0.83
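A minimal pandas sketch of this target construction (the series name and horizon value below are illustrative):

```python
import pandas as pd

def make_targets(approval: pd.Series, horizon_days: int) -> pd.DataFrame:
    """Build the Q1/Q3 binary targets of Table 3: flag the future change of
    the approval rating over `horizon_days` as significant if it falls below
    the 1st or above the 3rd quartile of all such changes."""
    change = approval.shift(-horizon_days) - approval      # future-looking first difference
    q1, q3 = change.quantile(0.25), change.quantile(0.75)  # cutoffs as in Table 4
    return pd.DataFrame({
        f"T{horizon_days}_Q1": (change <= q1).astype(int),
        f"T{horizon_days}_Q3": (change >= q3).astype(int),
    })

# Usage on a daily approval series indexed by date:
# targets = make_targets(approval_voters, horizon_days=7)
```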
4 Methodology and Results

To test the predictability of a significant job approval rating change, we use five different machine learning classification algorithms, namely:

– Random Forest [10, 11], which consists of a large number of relatively uncorrelated random trees used as a voting committee, where each individual tree randomly samples from the data set with replacement, resulting in different trees (known as bootstrap aggregating, or bagging [5]);
– Xgboost (eXtreme Gradient Boosting [6]), an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable, which implements machine learning algorithms under the gradient boosting framework and provides parallel tree boosting (also known as GBDT, GBM);
– AdaBoost (Adaptive Boosting [8]), which is often referred to as the best out-of-the-box classifier, with decision trees internally used as weak learners [14];
– AdaBag [1], which implements Breiman's bagging algorithm using classification trees as individual classifiers; and
– ExtraTrees (extremely randomized trees [9]), whose main objective is to further randomize internal tree building when using numerical input features, where the choice of the optimal cut-point is responsible for a large proportion of the total variance of the created tree; compared to random forests, ExtraTrees drop the idea of using bootstrap copies of the learning sample and, additionally, do not try to find an optimal cut-point for each of the randomly chosen features at each split but select a cut-point at random, which, surprisingly, has been shown to be beneficial without degradation of the final results.

Predictive models are designed in such a way that, for every target variable from Table 3, we run all 5 classification model types mentioned above using the feature variables from Table 2. In total, this creates 30 different models (6 target variables predicted with 5 classification model types). Every model is optimized using fivefold cross-validation on the training data, and the results were confirmed on an out-of-sample holdout set. The training data comprised 70% of the whole set, and 30% was reserved as the holdout set. Hyper-parameters for each of the models were optimized using a grid search during the cross-validation procedure. The results of all 30 machine learning models, predicting 7, 14 and 21 day (1, 2 and 3 week) positive and negative approval rating changes, are visible in Table 5.
Table 5 Out-of-sample AUC results

Algorithm       T-7 day           T-14 day          T-21 day
                Q1 (%)   Q3 (%)   Q1 (%)   Q3 (%)   Q1 (%)   Q3 (%)
Random forest   51.4     60.8     63.2     57.1     60.3     50.6
Xgboost         50.0     61.9     60.7     51.5     59.3     52.2
AdaBoost        51.9     60.9     57.9     57.4     53.1     52.6
AdaBag          53.5     62.0     54.0     52.7     55.6     51.3
ExtraTrees      51.6     59.1     59.5     53.6     56.0     52.8
Average         51.7     60.9     59.0     54.4     56.9     51.9
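The paper does not state the software stack used (AdaBag, for instance, is an R package), but the evaluation protocol behind Table 5 can be sketched as follows, here with scikit-learn and two of the five learners as stand-ins; the parameter grids are placeholders:

```python
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

def evaluate_models(X, y, seed=0):
    """70/30 train/holdout split, fivefold cross-validated grid search on the
    training part, out-of-sample AUC on the holdout (cf. Table 5)."""
    X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3,
                                              random_state=seed)
    models = {
        "RandomForest": (RandomForestClassifier(random_state=seed),
                         {"n_estimators": [200, 500]}),
        "ExtraTrees":   (ExtraTreesClassifier(random_state=seed),
                         {"n_estimators": [200, 500]}),
    }
    aucs = {}
    for name, (estimator, grid) in models.items():
        search = GridSearchCV(estimator, grid, cv=5, scoring="roc_auc")
        search.fit(X_tr, y_tr)
        aucs[name] = roc_auc_score(y_ho, search.predict_proba(X_ho)[:, 1])
    return aucs
```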
The predictability of a significant change in the presidential job approval rating is the strongest and most stable across algorithms for the 7 day (1 week) forward change on the positive side (3rd quartile) and the 14 day (2 weeks) change of the approval rating on the negative side (1st quartile). To test the stability of this conclusion, we run a bootstrap test for two correlated ROC curves with 2000 bootstrap replicates (permutations), comparing each model's AUC-ROC curve to the other models within the same period [7]. The comparison results for the T-7 day Q3 change and the T-14 day Q1 change show that all 5 models do not produce AUCs statistically significantly different from each other (within the same period) at a p-value of 0.01. We do note that for T-14 day Q1, AdaBag (AUC 54%) and Random Forest (AUC 63%) have a test p-value of 0.019, which is significant at 5%; this is the only exception. The findings suggest that it was possible to slightly predict significant positive changes in the job approval rating 7 days (1 week) into the future and negative ones 14 days (2 weeks) into the future. Not being able to predict both positive and negative changes for the same time frames (or, rather, being able to predict positive changes only in the short term and negative ones only for the longer term) might possibly be a result of human nature, as good news and positive changes often resonate with people for a much shorter period of time than negative ones [2].
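The cited comparison [7] is a test for two correlated ROC curves, offered, e.g., by R's pROC package; a simplified paired-bootstrap approximation of the idea (not the exact procedure of [7]) can be sketched as:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_pvalue(y, scores_a, scores_b, n_boot=2000, seed=0):
    """Approximate two-sided p-value for the AUC difference of two models
    evaluated on the same holdout labels, via a paired bootstrap."""
    rng = np.random.default_rng(seed)
    n, diffs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample cases with replacement
        if len(np.unique(y[idx])) < 2:     # need both classes for an AUC
            continue
        diffs.append(roc_auc_score(y[idx], scores_a[idx])
                     - roc_auc_score(y[idx], scores_b[idx]))
    diffs = np.asarray(diffs)
    # two-sided p-value from the bootstrap distribution of the difference
    return 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
```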
5 Conclusion

Results indicate that the predictability of a significant job approval rating change was the strongest across algorithms for 7 day (1 week) and 14 day (2 weeks) future predictions. Expecting to predict 21 days (3 weeks) into the future using such a volatile data source as a social network would probably not have been realistic, which is in line with the expectation that the older the data in a time frame is, the less relevant it gets. It should be noted that this research does not take into account alternative information or different NLP processing of the Twitter social network. It is plausible
that taking into account additional sources could further strengthen the prediction of presidential job approval rating movements using sentiment analysis, and perhaps even yield higher out-of-sample AUC results. Furthermore, specific topics related to emotions during the analyzed period could have had a significant influence on the predictability (e.g. only specific topics create an emotional effect); this could either increase predictability or explain the observed predictability as a confounding variable. Additionally, specific social figures can have an "individual effect" from emotional impact (appeal to emotion) that is not persistent across other public figures. Several implications can be derived from this research for real applications: (i) when a connection between emotions and future approval exists, this indicates a stronger role of emotional resonance versus pure logic on the specific days tracked by multi-agent systems; (ii) if a significant change is predicted, this indicates a strong relevance of that day's topic to the public; and (iii) it gives multi-agent systems supplementary information to use when performing final inference (e.g. predicted future event scenarios can be favorably enforced (or discarded) if social approval is predicted to rise (or fall) beyond a certain threshold). Further research will concentrate on broadening the scope of feature variables and conclusions related to the predictability of tweets and approval ratings.
References

1. Alfaro-Cortés, E., Gámez, M.: adabag: an R package for classification with boosting and bagging. J. Stat. Softw. 54 (2013)
2. Baumeister, R., Bratslavsky, E., Finkenauer, C., Vohs, K.: Bad is stronger than good. Rev. Gen. Psychol. 5 (2001). https://doi.org/10.1037/1089-2680.5.4.323
3. Bhardwaj, P., Gautam, S., Pahwa, P.: A novel approach to analyze the sentiments of tweets related to tripadvisor. J. Inf. Optim. Sci. 39, 591–605 (2018). https://doi.org/10.1080/02522667.2017.1417726
4. Bhatia, R., Garg, P., Johari, R.: Corpus based twitter sentiment analysis (2018)
5. Breiman, L.: Bagging predictors. Mach. Learn. 24, 123–140 (1996)
6. Chen, T., Guestrin, C.: Xgboost: a scalable tree boosting system, pp. 785–794 (2016). https://doi.org/10.1145/2939672.2939785
7. DeLong, E.R., DeLong, D.M., Clarke-Pearson, D.L.: Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 44, 837–845 (1988)
8. Freund, Y., Schapire, R.E.: A short introduction to boosting (1999)
9. Geurts, P., Ernst, D., Wehenkel, L.: Extremely randomized trees. Mach. Learn. 63, 3–42 (2006)
10. Ho, T.: Random decision forests, vol. 1, pp. 278–282 (1995). https://doi.org/10.1109/ICDAR.1995.598994
11. Ho, T.: The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 20, 832–844 (1998)
12. Jang, H.J., Sim, J., Lee, Y., Kwon, O.: Deep sentiment analysis: mining the causality between personality-value-attitude for analyzing business ads in social media. Expert Syst. Appl. 40, 7492–7503 (2013)
13. Jin, S., Zafarani, R.: Sentiment prediction in social networks (2018). https://doi.org/10.1109/ICDMW.2018.00190
14. Kégl, B.: The return of adaboost.mh: multi-class hamming trees (2013)
15. Logunov, A., Panchenko, V.P.: Characteristics and predictability of twitter sentiment series (2011)
16. Mewari, R., Singh, A., Srivastava, A.: Opinion mining techniques on social media data. Int. J. Comput. Appl. 118, 39–44 (2015). https://doi.org/10.5120/20753-3149
17. Mohammad, S., Turney, P.: Emotions evoked by common words and phrases: using mechanical turk to create an emotion lexicon (2010)
18. Mohammad, S.M., Turney, P.D.: Crowdsourcing a word-emotion association lexicon. 29(3), 436–465 (2013)
19. Mumtaz, D., Ahuja, B.: A lexical approach for opinion mining in twitter. Int. J. Educ. Manag. Eng. 6, 20–29 (2016). https://doi.org/10.5815/ijeme.2016.04.03
20. Munjal, P., Narula, M., Kumar, S., Banati, H.: Twitter sentiments based suggestive framework to predict trends. J. Stat. Manag. Syst. 21, 685–693 (2018). https://doi.org/10.1080/09720510.2018.1475079
21. O'Connor, B., Balasubramanyan, R., Routledge, B., Smith, N.: From tweets to polls: linking text sentiment to public opinion time series (2010)
22. Pagolu, V.S., Challa, K.N.R., Panda, G., Majhi, B.: Sentiment analysis of twitter data for predicting stock market movements, pp. 1345–1350 (2016)
23. Pang, B., Lee, L.: Opinion mining and sentiment analysis. Found. Trends Inf. Retr. 2, 1–135 (2008). https://doi.org/10.1561/1500000011
24. Sharif, W., Samsudin, N.A., Deris, M.M., Naseem, R.: Effect of negation in sentiment analysis. In: 2016 Sixth International Conference on Innovative Computing Technology (INTECH), pp. 718–723 (2016)
25. Soni, R., Mathai, K.J.: Improved twitter sentiment prediction through cluster-then-predict model (2015)
26. Tumasjan, A., Sprenger, T.O., Sandner, P.G., Welpe, I.M.: Predicting elections with twitter: what 140 characters reveal about political sentiment (2010)
27. Vinodhini, G., Chandrasekaran, D.: A comparative performance evaluation of neural network based approach for sentiment classification of online reviews. J. King Saud Univ. Comput. Inf. Sci. 28 (2015). https://doi.org/10.1016/j.jksuci.2014.03.024
Protected Control System with RSA Encryption

Danenkov Ilya, Alexey Margun, Radda Iureva, and Artem Kremlev
Abstract The paper addresses the problem of protection and attack identification in control systems. The problem is solved by the use of the asymmetric RSA encryption algorithm. The influence of encryption on the control process and the impact of discretization on the quality of control are analyzed. The research addresses the class of linear controllers. The effectiveness of the method is demonstrated by simulation of a control system under a false data injection attack.
D. Ilya · A. Margun · R. Iureva (B) · A. Kremlev
ITMO University, Saint Petersburg 197101, Russia
e-mail: [email protected]
URL: https://www.lcps.ifmo.ru

1 Introduction

Cyberphysical systems (hereinafter CPS) are an essential part of infrastructure, such as power grids and manufacturing. Their faults can lead to enormous financial damage and threats to human lives. The problem of cybersecurity for this type of system has become relevant over the past 10 years. One representative of CPS is the Industrial Internet of Things (IIoT). Industrial IoT is created for linking production into a single control network [1]. A particular feature of IIoT is its security profile, since IIoT falls into several categories at once: being a network, it is a CPS that belongs to critical information infrastructure. However, the more IIoT spreads, the more vulnerabilities are discovered [2, 3]. The number of cyberattacks on critical infrastructure is increasing, and incidents are becoming public. Also, IIoT may link control systems; hence, the security problem of a cyberphysical system is more complex than it seems. Consider two significant incidents: an attack by the Stuxnet virus [4] was conducted against enrichment centrifuges for uranium fuel in 2009, implementing a controller falsification attack; another incident is the false data injection attack against the Ukrainian energy supply system in 2015 [5]. The above attacks were realized on control systems and had serious consequences.
Accordingly, the importance of methods for protecting control systems from such attacks, and of cybersecurity in general, is increasing. A primary method of control system protection is the use of asymmetric algorithms such as ElGamal or RSA. RSA is the first public-key cryptosystem and is widely used in various security tools; this algorithm is based on the problem of factorization of large numbers. The ElGamal scheme is based on the discrete logarithm problem [6]. The general principle of these schemes is as follows. One of the two participants in a communication session generates public and private keys and transfers the public key to the other participants. A participant who knows the public key can generate ciphertexts. The private key allows decrypting these messages, so that the receiving party can restore the original message from the ciphertext created using the corresponding public key. The private key is known only to the recipient, so that an attacker cannot get unauthorized access to the information contained in an encrypted message. Thus, malicious data injected into the control input gives inadequate values after decryption and is rejected as erroneous. This is the fundamental concept; more complex and advanced frameworks are based on it. Recently, several modifications of the considered approach have been proposed [7, 8]. For example, the use of key changes was proposed: this method assumes the presence of precomputed key pairs, with the pair changing after a particular time, so that an attacker is not able to decrypt the values and gain unauthorized access to transmitted messages during the use of a single key pair. Key pairs should be changed in a timely manner. However, using cryptography in control systems creates some problems, such as quantization [9, 10] and time delays of the control signal, because encryption supposes a discrete realization of the controller. The encryption operation changes the value of the control signal. A larger key allows transmitting more accurate values, but the encryption process becomes longer; if the key size is insufficient, the size of the transmitted values of the control signal is limited, and the discretization error can then adversely affect the control efficiency and system stability. Delays in the control signal that occur due to the use of encryption can make the system unstable, which leads to performance degradation. Thus, the problem of protecting control systems is multifaceted and requires an integrated approach, taking many aspects into account. This paper discusses the option of protecting control systems using the RSA encryption algorithm and assesses the impact of the discretization period on the quality of control.
2 Problem Statement

This section discusses the control system model and the false data injection attack. We assume that control is carried out through a control terminal; however, it is not considered in detail. The model and plant are connected by a bus, and these elements may be distributed over a significant area.
Fig. 1 Control system
2.1 Control System Model

The model consists of a linear controller and a plant. The controller is a linear dynamic system with a measurable state vector; its goal is to provide output tracking of the reference input signal. The control process includes calculating the tracking error e(t) as the difference between the setpoint and the current output value, after which the controller generates the control signal value. The control plant is a linear dynamic system with known parameters and measured output. This is depicted in Fig. 1, where y_m(t) is the reference signal, e(t) the tracking error, u(t) the control signal and y_c(t) the output.
2.2 False Data Injection

The paper [11] describes the concept of a false data injection attack: knowing the parameters of the control system and having access to the sensor readings, the attacker can generate an attack data sequence that affects the control and ultimately leads to the failure of the system (Fig. 2).

Let y_a = (y_c1 + A_1, ..., y_cm + A_m)^T be the vector of observable measurements modified by malicious data, where y_c = (y_c1, ..., y_cm)^T is the vector of unmodified measurements and A = (A_1, ..., A_m)^T is the malicious data added to the original measurements, called the attack sequence. The attacker compromises the value y_ci and replaces it with (y_ci + A_i). Traditional methods for detecting bad measurements calculate the 2-norm of the measurement residual and check whether bad measurements exist or not. A suitably constructed attack sequence modifies the data without triggering such bad measurement detection methods.
Fig. 2 False data injection
3 RSA Encryption

We consider using the ElGamal and RSA asymmetric encryption algorithms for secure control of the plant. Both algorithms may realize homomorphic multiplication, which can be used in future implementations for more secure computation of reference input signals. According to [12], RSA is significantly faster than ElGamal in the encryption operation, while ElGamal and RSA take similar time for the decryption operation, and an RSA ciphertext is smaller than an ElGamal one. Control systems require fast feedback and an absence of control signal delays, and smaller data needs less channel capacity, which potentially reduces the cost of implementation. RSA therefore has more advantages for implementation in control systems than ElGamal.
3.1 Algorithm Description

RSA [13] consists of three stages: key generation, encryption and decryption (Fig. 3).

Key Generation:
1. Two prime numbers p and q are generated.
2. The so-called modulus n is calculated:

   n = pq.                                              (1)

3. Calculate φ(n) = (p − 1)(q − 1), where φ(n) is the Euler totient function.
Fig. 3 Encrypted control
4. An integer ε is selected that satisfies the conditions

   gcd(ε, φ(n)) = 1,                                    (2)

   1 < ε < φ(n),                                        (3)

   where gcd denotes the greatest common divisor and ε is the public exponent.
5. The private exponent d is calculated such that

   dε ≡ 1 (mod φ(n)).                                   (4)

6. The pair (ε; n) is the public key, and the pair (d; n) is the private key.

Encryption: Encrypt the plaintext m ∈ Z_n using the public key k_p = (n; ε) in order to compute the ciphertext c, i.e.,

   Enc(k_p; m) = m^ε mod n = c.                         (5)

Decryption: Decrypt the ciphertext c using the private key k_s = (n; d) in order to compute the plaintext m′, i.e.,

   Dec(k_s; c) = c^d mod n = m′,  m′ = m.               (6)
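For illustration, a toy Python implementation of these steps is shown below; it reproduces the first key pair listed later in Table 1 of Sect. 5 (n = 30967 = 173 · 179). The primes are, of course, far too small for real use.

```python
from math import gcd

def keygen(p, q, e):
    """Toy RSA key generation following steps 1-6 above."""
    n = p * q                                 # step 2: modulus, Eq. (1)
    phi = (p - 1) * (q - 1)                   # step 3: Euler totient of n
    assert gcd(e, phi) == 1 and 1 < e < phi   # step 4: Eqs. (2)-(3)
    d = pow(e, -1, phi)                       # step 5: d*e = 1 (mod phi); Python >= 3.8
    return (n, e), (n, d)                     # step 6: public and private key

def encrypt(pub, m):
    n, e = pub
    return pow(m, e, n)                       # c = m^e mod n, Eq. (5)

def decrypt(prv, c):
    n, d = prv
    return pow(c, d, n)                       # m' = c^d mod n, Eq. (6)

pub, prv = keygen(p=173, q=179, e=29)
assert prv == (30967, 11613)                  # matches key pair 1 in Table 1
assert decrypt(prv, encrypt(pub, 12345)) == 12345
```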
3.2 False Data Injection Detection

The detector operates on a threshold discrimination principle; the method of calculating the threshold is described in [14]. Let the threshold be τ > 0. The attack detector d is then given by

   d = 1, if ‖y − y′‖₂ > τ, and d = 0 otherwise,

where y and y′ are the original measurements before and after transmission, respectively.
According to [11], such a modification of the state measurements does not trigger the detector; therefore, the attack is carried out successfully. To prevent this outcome, it is proposed to encrypt the control signals and the plant state information using the RSA encryption algorithm. In this case, the following holds for the detector:

   d = 1, if ‖y − Dec(k_s; Enc(k_p; y))‖₂ > τ, and d = 0 otherwise,

where Enc(·) and Dec(·) are the encryption and decryption functions, k_p and k_s are the public and private keys, and y are the original measurements. An attack on the encrypted data occurs according to the following rule:
   y_rsa = Enc(k_p; y) + a,                              (7)

where y_rsa is the modified encrypted data and a is the attack sequence. The attack fails because the decrypted data differs greatly from the original, and this difference is detected by the detector, that is,

   ‖y − Dec(k_s; Enc(k_p; y) + a)‖₂ > τ,  ∀y.            (8)
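A minimal Python sketch of this detection under a ciphertext attack, reusing the first key pair of Table 1 (the threshold value and the injected value below are illustrative, and the fixed-point coding is described in Sect. 4.1):

```python
N, E, D = 30967, 29, 11613    # first key pair from Table 1 (n, epsilon, d)
TAU = 1e-4                    # detection threshold tau (illustrative value)

def detect(y_decrypted, y_expected):
    """Threshold detector: fire when the squared residual exceeds tau."""
    return int((y_decrypted - y_expected) ** 2 > TAU)

y = 1.23
m = int(y * 10**4)                       # fixed-point coding, cf. Sect. 4.1
c = pow(m, E, N)                         # Enc(k_p; y)
c_attacked = (c + 4321) % N              # false data injected into the ciphertext, Eq. (7)
y_bad = pow(c_attacked, D, N) / 10**4    # decryption yields an implausible value
print(detect(y_bad, y))                  # 1, unless the scrambled plaintext lands near m by chance
```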
4 Implementation

The use of an encryption algorithm leads to a discrete controller implementation. It is therefore necessary to address problems such as data conversion and the effect of the discrete controller implementation on the quality of the control system.
4.1 Data Conversion

To prepare control signals for encryption, they should be converted according to the length of the encryption key and cast to an integer type. The conversion of a real value into an integer is performed as follows:

   Z = 10^o · R,                                         (9)

where Z is the converted integer, R is the value of real type and o is the order of the significant fractional part of the converted number. It is assumed that the values are written in decimal notation. The sign of the signal value can be transmitted, for example, by the last bit of the data package. The order of the significant part is determined by the degree of discretization of the signal.
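A small Python sketch of this conversion and its inverse (function names are illustrative):

```python
def to_fixed(r, order):
    """Convert a real value into an unsigned integer Z = 10^order * |R|,
    cf. Eq. (9); the sign is carried separately (e.g. in the last bit of
    the data package)."""
    return round(abs(r) * 10**order), 0 if r >= 0 else 1

def from_fixed(z, sign, order):
    """Restore the (quantized) real value from the integer and the sign bit."""
    return (-z if sign else z) / 10**order

z, s = to_fixed(-3.14159, order=4)   # (31416, 1)
print(from_fixed(z, s, order=4))     # -3.1416: quantized to 4 decimal places
```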
4.2 Analysis of the Impact of Discrete Controller Implementation on Control Quality

The Kotelnikov–Shannon sampling theorem [15] is used to select a sampling frequency. It states that an analog signal can be restored from a digital one if the sampling frequency exceeds the maximum frequency of the real signal spectrum at least two times. Two approaches to choosing the sampling rate are possible for the described control system:

1. Consideration of the plant in a discrete form. To do this, we take the fastest process in the system and, using the theorem, calculate the sampling frequency [16, 17].
2. Consideration of the control plant as a continuous system and the controller in a continuous form. This is possible only if the controller frequency is much higher than the highest frequency of the plant.

The first option is implemented more often in practice; however, it is not always possible to choose the sampling rate by the Kotelnikov–Shannon sampling theorem, and then a different approach to selecting the sampling rate is used. Consider a closed-loop control system with a sequentially connected controller and a linear stable control plant described by the equations

   ẋ = Ax + Bu,
   y_c = Cx.

Denotations: y_m(t) is the desired output, y_c(t) is the controllable output signal, u(t) is the control signal (u_min ≤ u(t) ≤ u_max) and e(t) = y_c(t) − y_m(t) is the tracking error. The implementation of encryption protocols leads to a discrete implementation of the controller with a significant value of the sampling interval (denoted by T). Let us evaluate the impact of the sampling interval on the tracking error. Let y_d(t) be the output of the closed system with a discrete controller. Since the controller and plant are linear, we can write [18]

   y_c(t) = y_d(t) + φ,                                               (10)

where φ is a Lipschitz function, which is the magnitude of the error created by the discretization of the controller. Consider the change in the output signal in the interval (t₁; t₁ + T). For the continuous controller:

   y_c(t) = Ce^{At} x(t₁) + C ∫_{t₁}^{t₁+T} e^{A(t−τ)} B u(τ) dτ.     (11)

For the discrete controller:

   y_d(t) = Ce^{At} x(t₁) + C ∫_{t₁}^{t₁+T} e^{A(t−τ)} B u(t₁) dτ.    (12)

To find bounds on φ:

   φ(t) = y_c(t) − y_d(t) = C ∫_{t₁}^{t₁+T} e^{A(t−τ)} (u(τ) − u(t₁)) dτ.   (13)

The function φ(t) takes its maximum value at the maximum value of (u(τ) − u(t₁)), which, taking the saturation into account, is (u_max − u_min), i.e.,

   |φ(t)| ≤ |(u_max − u_min) C (∫_{t₁}^{t₁+T} e^{A(t−τ)} dτ) B|
          = |(u_max − u_min) C (∫_0^T e^{Aτ} dτ) B|
          = |(u_max − u_min) C A⁻¹ (e^{AT} − I) B|.                   (14)
Let the maximum permissible deviation of a system with a discrete controller from a continuous system be denoted by δ. Then the maximum allowable sampling interval T_m is determined from the condition

   |(u_max − u_min) C A⁻¹ (e^{AT} − I) B| = δ.                        (15)
5 Numerical Example

In this section, numerical examples are considered and an experiment is performed. A continuous model with the following parameters was considered as the plant; for example, this model can describe DC-motor dynamics:
   A = |  0     1   |,   B = | 0 |,   C = | 1  0 |.                   (16)
       | −5   −1.5  |        | 1 |
Also, consider the model of the PID controller with the following coefficients: K_p = 15, K_i = 9, K_d = 6 and K_n = 37. The transfer function is

   H(s) = (237s² + 564s + 333) / (s² + 37s).                          (17)
Fig. 4 Correlation between the deviation and the sampling period
Let us trace the correlation between the deviation and the sampling period for the described plant (Fig. 4). The figure demonstrates how the permissible deviation converges towards its limit. The selection of a satisfactory permissible deviation is based on manufacturing requirements and constraints. The sampling period is accepted as T_m = 0.001 s. Evaluating its impact using formula (15) gives

   δ = 0.00002.                                                       (18)
The Simulink package was used to simulate the operation of the secure control system. The attack was carried out on the feedback. Ten keys calculated in advance were used to demonstrate the approach, with a key change occurring every T = 0.5 s. A long period between key changes, a small key length and a small number of keys were chosen to simplify the demonstration of the approach. The value of the output signal y(t) was encrypted with the parameters in Table 1. Figure 5 illustrates the impact of the discrete implementation of the PID controller and of the encryption algorithm. The following figures show the behavior of the plant output. An attacker injects an attacking sequence at time intervals T = 1 s. However, as the signal is encrypted, its modification is revealed upon decryption and alarmed by the detector; all attacks have been identified. The basis of these attacks is the substitution of a control signal by a modified one (Fig. 6).
Table 1 Encryption parameters

Label   e    d       n
1       29   11613   30967
2       17   1985    34121
3       29   19445   35621
4       23   12263   35639
5       17   16073   27661
6       19   30579   41917
7       7    4843    34277
8       37   31273   40301
9       7    41143   48443
10      31   2335    36581
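For illustration, a minimal sketch of the encryption and detection logic with the keys from Table 1; the detector rule here is a hypothetical plausibility check, since the paper does not spell out its exact test:

```python
# RSA protection of the sampled feedback signal with key rotation.
# Key triples (e, d, n) are the first three rows of Table 1.
KEYS = [(29, 11613, 30967), (17, 1985, 34121), (29, 19445, 35621)]

def encrypt(m: int, key) -> int:
    e, _, n = key
    return pow(m, e, n)          # modular exponentiation m^e mod n

def decrypt(c: int, key) -> int:
    _, d, n = key
    return pow(c, d, n)

def quantize(y: float, scale: int = 1000) -> int:
    # the plant output is real-valued; quantize so RSA operates on integers
    return int(round(y * scale))

key = KEYS[0]                    # rotated every key-change period in the real system
y = 0.731                        # a sampled output value
c = encrypt(quantize(y), key)
assert decrypt(c, key) == quantize(y)   # round trip without an attack

# A false-data-injection attack substitutes the ciphertext on the feedback line.
c_attacked = (c + 12345) % key[2]
m = decrypt(c_attacked, key)

# Hypothetical detector: a decrypted sample outside the physically feasible
# range raises an alarm; the last trusted value is then held (cf. Fig. 7).
Y_MAX = 5000
if not (0 <= m <= Y_MAX):
    print("alarm: injected data detected")
```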
Fig. 5 Comparing output values
Output values are saved every sampling period. During an attack, these stored values replace the corrupted ones until the end of the attack or until countermeasures are taken (Fig. 7). Despite the short interval of the attack, the control process is destabilized; therefore, after detecting an attack, countermeasures have to be taken (Fig. 8).
Fig. 6 Encrypted control system under attack
Fig. 7 Encrypted values of output
Fig. 8 Detector triggering
6 Conclusion

In this paper, a false data injection attack identification method using the RSA encryption algorithm was considered. Most complex control systems do not provide built-in protection against cyberattacks, particularly against false data injection. Equipping control systems with encryption tools allows identifying false data injection attacks and taking countermeasures. Two options are proposed as countermeasures: an immediate shutdown of the process occurring in the dynamic system, or a safe shutdown of the process after completion of the current task. For implementation in real systems, it is proposed to increase the key length and the number of key pairs. Modern hardware [19] allows generating keys and performing encryption and decryption functions with high performance.

Acknowledgements This work was financially supported by the Government of the Russian Federation (Grant 08-08) and by the Ministry of Science and Higher Education of the Russian Federation, passport of goszadanie no. 2019-0898.
References

1. Boyes, H., Hallaq, B., Cunningham, J., Watson, T.: The industrial internet of things (IIoT): an analysis framework. Comput. Ind. 101, 1–12 (2018)
2. The State of Industrial Cybersecurity 2018. Kaspersky Lab (2019)
3. ICS Vulnerabilities: 2018 in Review. Positive Technologies (2019)
4. Langner, R.: Stuxnet: dissecting a cyberwarfare weapon. IEEE Secur. Priv. 9, 49–51 (2011)
5. Lee, R.M., Assante, M.J., Conway, T.: Analysis of the Cyber Attack on the Ukrainian Power Grid. E-ISAC SANS (2016)
6. ElGamal, T.: A public key cryptosystem and a signature scheme based on discrete logarithms. In: Proceedings of CRYPTO 84 on Advances in Cryptology, pp. 10–18 (1985)
7. Kogiso, K., Fujita, T.: Cyber-security enhancement of networked control systems using homomorphic encryption. In: 2015 54th IEEE Conference on Decision and Control (CDC), pp. 6836–6843 (2015)
8. Kogiso, K.: Attack detection and prevention for encrypted control systems by application of switching-key management. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 5032–5037 (Dec 2018)
9. Margun, A., Bobtsov, A., Furtat, I.: Algorithm to control linear plants with measurable quantized output. Autom. Remote Control 78(5), 826–835 (2017)
10. Nair, G.N., Fagnani, F., Zampieri, S., Evans, R.J.: Feedback control under data rate constraints: an overview. Proc. IEEE 95(1), 108–137 (2007)
11. Liu, Y., Ning, P., Reiter, M.K.: False data injection attacks against state estimation in electric power grids. ACM Trans. Inf. Syst. Secur. 14(1) (2011)
12. Siahaan, A.P.U., Elviwani, E., Oktaviana, B.: Comparative analysis of RSA and ElGamal cryptographic public-key algorithms. In: Proceedings of the Joint Workshop KO2PI and the 1st International Conference on Advance and Scientific Innovation (2018)
13. Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 21(2), 120–126 (Feb 1978)
14. Monticelli, A.: State Estimation in Electric Power Systems: A Generalized Approach. Kluwer International Series in Engineering and Computer Science. Springer, US (1999)
15. Kotel'nikov, V.A.: On the transmission capacity of 'ether' and wire in electric communications. Usp. Fiz. Nauk 176(7), 762–770 (2006). https://ufn.ru/ru/articles/2006/7/h/
16. Dobriborsci, D., Margun, A., Kolyubin, S.: Theoretical and experimental research of the discrete output robust controller for uncertain plant. In: 2018 European Control Conference (ECC), pp. 533–538 (June 2018)
17. Gray, R.M., Neuhoff, D.L.: Quantization. IEEE Trans. Inf. Theory 44(6), 2325–2383 (1998)
18. Furtat, I., Fradkov, A., Liberzon, D.: Compensation of disturbances for MIMO systems with quantized output. Automatica 60, 239–244 (2015)
19. Wockinger, T.: High-Speed RSA Implementation for FPGA Platforms. Institute for Applied Information Processing and Communications, Graz University of Technology, Graz (2005)
Artificial Intelligent Agent for Energy Savings in Cloud Computing Environment: Implementation and Performance Evaluation Leila Ismail
and Huned Materwala
Abstract The growing popularity of the Internet of Things (IoT), big data analytics, and blockchain to make the digital world connected, smart, and secure in the context of smart cities has led to increasing use of cloud computing technology. Consequently, cloud data centers have become hungry for energy. This has an adverse effect on the environment, in addition to the high operational and maintenance costs of large-scale data centers. Several works in the literature have proposed energy-efficient task scheduling in a cloud computing environment. However, most of these works use a scheduler that predicts the power consumption of an incoming task based on a static model. In most scenarios, the scheduler considers only the CPU utilization of a server for power prediction and task allocation. This might give misleading results, as the power consumption of a server handling a variety of requests in smart cities depends on other metrics such as memory, disk, and network in addition to CPU. Our proposed Intelligent Autonomous Agent Energy-Aware Task Scheduler in Virtual Machines (IAA-EATSVM) uses a multi-metric machine learning approach for scheduling incoming tasks. IAA-EATSVM outperforms the mostly used Energy Conscious Task Consolidation (ECTC) scheduler, which is based on a static approach. A detailed performance analysis is elaborated in the paper.
1 Introduction

Cloud computing [1] enables on-demand elastic access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. Its adoption is increasing with the emergence of computing paradigms such as big data analytics [2], Internet of Things (IoT) [3],
and blockchain [4] in a digitally connected world in the context of smart cities. The constantly expanding size of these data centers makes energy consumption a critical issue. According to the Natural Resources Defense Council (NRDC) in the USA [5], data centers used about 91 billion kilowatt-hours of electrical energy in 2013, equivalent to the output of 34 large coal-fired plants (500 MW each). This is estimated to reach 140 billion kWh by 2020, a 53% increase. The associated expected annual electricity cost is $13 billion. This will equate to about 100 million metric tons of carbon pollution released into the atmosphere annually. According to a study, the Information and Communication Technology (ICT) industry will be responsible for up to 3.5% of global CO2 emission with almost 50 billion devices globally connected [6]. This emission is projected to rise to 14% by 2040. The issue of data centers' energy consumption can be addressed by implementing hardware solutions [7] or by deploying software solutions such as energy-efficient schedulers. An energy-efficient scheduler mainly comprises a power prediction model and an energy-efficient scheduling algorithm. The power model predicts the power consumption of an incoming task, and the scheduler places a task on the server that will consume the least energy to execute the task. The effectiveness of the algorithm depends mainly on the power model used. Researchers have proposed different server-level power models based either on the physical quantities related to a server's hardware, such as its temperature, fan speed, resistance, and heat dissipation [8], or on the server's operating system performance metrics, such as CPU, memory, disk, and network utilization [9]. Deploying hardware-specific power models at a data center level is expensive and difficult to manage due to the sensors required to measure the values of different physical quantities. Consequently, this paper focuses on scheduling using software-based power models. Most of the works on software-based energy-efficient scheduling use a static power model based on the server's minimum and maximum power (referred to here as the "Power Endpoints Model" or simply PEM) [10–15]. However, this might lead to inaccurate results, as the model does not consider the implications of the spatial distribution and behavior of the power data between the endpoints. Consequently, there is a need for an intelligent scheduler that adapts to the dynamicity of the incoming loads in smart cities. With the envisaged variety of IoT, big data analytics, and blockchain applications in smart cities, it is necessary to introduce Intelligent Autonomous Agents (IAAs) to take accurate decisions without human intervention. IAAs use Artificial Intelligence (AI) and machine learning and cooperate to make decisions autonomously. Very few works in the literature use an IAA scheduler. These works are mostly based on the CPU utilization of a server [16]. This is based on the result that a server's power consumption significantly depends on CPU utilization [17]. However, experimental results show that memory, disk, and network account for around 40–60% of a server's power consumption, depending on the workload being executed [18]. Consequently, in this paper, we focus on the Multiple Linear Regression (MLR) machine learning power model that uses CPU, memory, network, and disk utilization values as the independent variables for power prediction.
In this paper, we propose an Intelligent Autonomous Agent Energy-Aware Task Scheduler on cloud Virtual Machines (IAA-EATSVM). IAA-EATSVM uses the
MLR machine learning power model in combination with our EATSVM scheduling algorithm [11] to intelligently schedule cloud tasks without human intervention. We compare IAA-EATSVM with the mostly used Energy Conscious Task Consolidation (ECTC) scheduler [10], which uses the PEM power model. The major contributions of this paper are as follows:

1. We implement and evaluate IAA-EATSVM, which is based on machine learning, for energy savings in a cloud computing environment.
2. We compare the performance of IAA-EATSVM to ECTC, the mostly used cloud scheduler in the literature, in terms of energy consumption, electricity cost, carbon emission, and execution time. This is done by using a diverse set of benchmarks and applications. The experimental results show that IAA-EATSVM outperforms ECTC.
3. We consider the impact of an increasing number of hosts in a data center on the performance of IAA-EATSVM and ECTC.

The remainder of the paper proceeds as follows. Section 2 provides an overview of related literature. We describe the architecture of our proposed IAA-EATSVM scheduler in Sect. 3. The experimental setup, experiments, and performance evaluation, in terms of energy consumption, electricity cost, carbon dioxide emission, and execution time, are presented in Sect. 4. Section 5 concludes the paper with future research directions.
2 Related Work

As the energy consumption of cloud computing systems has become one of the most crucial issues, there have been many research efforts on energy optimization in cloud data centers. Most of the works on task scheduling and consolidation algorithms in the literature are based on a static power model [10–15] to predict power consumption. Lee and Zomaya's Energy Conscious Task Consolidation (ECTC) schedules an incoming task on the machine on which the energy consumption to execute that task is the minimum. It assigns the task to the server which has the maximum overlapping time between the incoming task and the ongoing one(s) [10], aiming to maximize CPU utilization. The algorithm considers a homogeneous data center where the execution time of the task is the same on all hosts. In our previous work [11], we extended the ECTC algorithm to consider a heterogeneous data center and the increase in the execution time(s) of the ongoing task(s) due to overlaps with the incoming task. Huai et al. [12] compared the Power Best Fit (PBF) and the Load Balancing (LB) algorithms for task scheduling. The results showed that PBF consumes more energy, as it results in a bigger number of idle servers than LB, because the latter avoids task consolidation. Ying and Yu [13] propose a genetic algorithm to formulate a bi-objective optimization problem for task scheduling to optimize power consumption and the execution time of the task on each server. Wu et al. [14] consider both
the energy and the SLA violation (SLAV) optimizations by using the Dynamic Voltage and Frequency Scaling (DVFS) technique. The algorithm places a task on a server where the frequency required by the task lies between the minimum and the maximum server frequencies. The minimum ensures the performance of the task, while the maximum prevents server overutilization. Qureshi [15] proposes a scheduling algorithm for application workloads or tasks based on Application Profiles (APs). An AP includes the application's arrival and completion times, the CPU and memory utilizations, and the power consumption. Very few works [16, 19] propose an intelligent autonomous scheduler for tasks in cloud computing with machine learning. These works use a linear regression model based on CPU utilization [16], or on CPU and memory [19]. However, smart city applications require a combination of CPU, memory, disk, and network resources. To our knowledge, there is no work in the literature that implements an intelligent autonomous cloud agent scheduler that uses machine learning on CPU, memory, disk, and network for power prediction. In this paper, we propose an intelligent autonomous agent scheduler, IAA-EATSVM, and compare its performance with the mostly used ECTC [10] cloud scheduler in terms of energy consumption, electricity cost, carbon dioxide emission, and execution time.
3 IAA-EATSVM Architecture

Figure 1 shows an overview of the architecture of the intelligent autonomous agent scheduler, IAA-EATSVM.

Fig. 1 Overview of IAA-EATSVM architecture

The main components of the architecture are the following:
• Job store: The requests submitted by the cloud users are stored in the job store component before they are scheduled.
• Power consumption monitor: The power consumption monitor records the power consumption of all the servers in the data center and sends it to the scheduler in real time.
• Resource utilization monitor: The resource utilization monitor records the values of CPU, memory, disk, and network utilization for each server and sends them to the scheduler.
• Resource utilization calculator: The ECTC scheduler works on the assumption that all the servers are homogeneous [10]. Consequently, an incoming user request will have the same utilization on all the servers. However, in a real data center scenario, the servers are heterogeneous. To address this issue, IAA-EATSVM uses a resource utilization calculator. An incoming user request has CPU utilization $U_{CPU_{ref}}$ with respect to a reference machine. Based on the reference value, the resource utilization calculator computes the utilization of the request on other machines using Eq. 1:

$$U_i = \frac{U_{CPU_{ref}} \cdot CS_{ref}}{CS_i} \qquad (1)$$
where $U_i$ is the utilization on the $i$-th machine, $CS_{ref}$ is the clock speed of the reference machine, and $CS_i$ is the clock speed of the $i$-th machine.
• Data mining engine: The IAA-EATSVM scheduler uses a machine learning power model that is based on the CPU, memory, disk, and network utilization. The model is based on the multiple linear regression approach, as shown in Eq. 2 (a fitting sketch is given after this list). The data mining engine takes the resource utilization values recorded while running users' applications and the corresponding power consumption values and feeds them to a model builder that resides within it. The model builder uses the utilization and power data to calculate the values of the regression coefficients for the power model. The data mining engine collects the utilization and power values continuously in real time, and the power model is updated continuously based on the new data.
$$P = \alpha + \beta_1 U_{CPU} + \beta_2 U_{mem} + \beta_3 U_{disk} + \beta_4 U_{net} \qquad (2)$$
where $\alpha$, $\beta_1$, $\beta_2$, $\beta_3$, and $\beta_4$ are the regression coefficients.
• Intelligent autonomous agent: The intelligent autonomous agent learns from the server environment and schedules a user's request on a server such that the overall increase in the energy consumption is the minimum.
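As an illustration, here is a minimal sketch of fitting the model in Eq. 2 by ordinary least squares; the utilization and power values are invented placeholders, not measurements from the paper's testbed:

```python
import numpy as np

# Columns: CPU, memory, disk, network utilization (fractions); y: measured power (W).
# Illustrative numbers only.
X = np.array([
    [0.10, 0.20, 0.05, 0.01],
    [0.55, 0.40, 0.10, 0.05],
    [0.90, 0.60, 0.30, 0.20],
    [0.30, 0.70, 0.50, 0.10],
    [0.75, 0.35, 0.15, 0.40],
])
y = np.array([95.0, 142.0, 198.0, 156.0, 171.0])

# Prepend a column of ones so the first coefficient is the intercept alpha.
X1 = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
alpha, b1, b2, b3, b4 = coef

def predict_power(u_cpu, u_mem, u_disk, u_net):
    """P = alpha + b1*U_cpu + b2*U_mem + b3*U_disk + b4*U_net, cf. Eq. (2)."""
    return alpha + b1 * u_cpu + b2 * u_mem + b3 * u_disk + b4 * u_net

print(predict_power(0.60, 0.50, 0.20, 0.10))
```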
To schedule an incoming task, given its resource utilization requirements, on one of the cloud's servers, IAA-EATSVM proceeds as follows [11]:

• It calculates the completion time of the task on a server based on the length of the task in terms of Millions of Instructions (MI) and the speed of the server in terms of Million Instructions per Second (MIPS).
• If the server is idle, the algorithm calculates the value of the energy function for that task by multiplying the power consumed by the task (using the power model) by the calculated completion time of the task.
• If the server is active (i.e., having ongoing task(s)), the algorithm first calculates the increase in the completion time of the ongoing task(s) and then calculates a combined energy function for the incoming task and the ongoing one(s).
• The algorithm calculates the value of the energy function for every server and places the task on the server having the minimum value of the energy function. A simplified sketch of this placement rule follows.
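The sketch below shows only the idle-server case, with a stand-in power model; the overlap handling for active servers is omitted (an assumed simplification, not the authors' full algorithm):

```python
def predict_power(u_cpu, u_mem, u_disk, u_net):
    # stand-in for the fitted MLR model of Eq. (2); coefficients are illustrative
    return 70.0 + 120.0 * u_cpu + 40.0 * u_mem + 25.0 * u_disk + 15.0 * u_net

def completion_time(task_mi, server_mips):
    # task length in Millions of Instructions / server speed in MIPS -> seconds
    return task_mi / server_mips

def energy(server, task):
    # energy function for an idle server: predicted power x completion time
    return predict_power(*task["utilization"]) * completion_time(task["mi"], server["mips"])

def place(servers, task):
    # choose the server with the minimum value of the energy function
    return min(servers, key=lambda s: energy(s, task))

servers = [{"name": "s1", "mips": 2500}, {"name": "s2", "mips": 2000}]
task = {"mi": 5_000_000, "utilization": (0.60, 0.40, 0.10, 0.05)}
print(place(servers, task)["name"])
```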
4 Performance Analysis

In this section, we implement the ECTC and IAA-EATSVM schedulers and analyze their performance. We evaluate and compare them in terms of energy consumption, electricity cost, carbon dioxide emission, and execution time.
4.1 Experimental Environment

To evaluate the performance of the schedulers in a cloud computing environment, we implemented ECTC and IAA-EATSVM in CloudSim 3.0.3 [20]. We extended the power consumption class of CloudSim to include the PEM and machine-learning-based power models. We created a heterogeneous data center made of servers of three different types and virtual machines of four different types, as shown in Tables 1 and 2, respectively. The servers in Table 1 are part of our research laboratory at the College of Information Technology of the United Arab Emirates University.

Table 1 Server types used in the experiments

Server 1  Sun Fire Intel Xeon CPU core of 2.80 GHz, dual core, with 512 KB of cache and 4 GB of memory for each core, CPU voltage rating 1.5 V, OS version CentOS 6.8 (i686)
Server 2  Sun Fire X4100 with AMD Opteron 252 CPU of 2.59 GHz, dual CPU, single core, with 1 MB of cache and 2 GB of memory for each core, CPU voltage rating of 3.3–2.9 V, OS version Red Hat Enterprise Linux Server release 7.3 (Maipo)
Server 3  CELSIUS R940 power with 2 × Intel Xeon E5-2680v4 CPU (2.40 GHz, 14 cores), 8 × 32 GB DDR4, 2 × HDD SAS 600 GB, OS version Red Hat Enterprise Linux Server RHEL 7.4, 64-bit
Table 2 VM types used in the experiments

       Type 1   Type 2   Type 3   Type 4
MIPS   2500     2000     1000     500
RAM    870      1740     1740     613
The default VM configurations of CloudSim are used in Table 2. To develop the power models for the schedulers, a training dataset consisting of loads stressing the performance metrics (CPU, memory, disk, and network) and the corresponding power consumption is required. The dataset was obtained experimentally by running different tools on our servers (Table 1). We use the CPU Load Generator 1.0.0 [21], Stress 1.0.4 [22], Vdbench 5.04.06 [23], and iperf 3.1.3 [24] to stress the CPU, the memory, the disk I/O, and the network I/O, respectively. CPU Load Generator uses a script that generates a fixed CPU load for a finite user-defined time duration. Stress uses a defined number of VM workers of a specific memory allocation size for a defined time interval to stress the memory. Vdbench generates a configurable amount of disk I/O workload on a server using the curve parameter of a specific Vdbench run definition file. Iperf3 generates a configurable network I/O rate between the server under study and a remote host server. To obtain the values of the performance metrics while the tools are running, we developed scripts for data collection. We use the Linux perf utility [25] to measure the values of CPU and memory utilization and the collectd tool [26] to measure the disk and network I/O. To obtain the power consumption of the servers, we use a two-channel digital oscilloscope of type Tektronix TDS2012B [27], 100 MHz with 1 GS/s sampling. We connect the oscilloscope to a current probe [28] and a high differential voltage probe [28] for acquiring the current and voltage signals, respectively, in real time while the tools are running, using a LabVIEW 2016 program that we developed. The power consumption is then calculated by multiplying the current and the voltage signals. We use the R environment 3.5.1 [29] for the development of the power models. We experimentally generate in our lab synthetic workloads for the scheduler to typify real-life smart city applications, using different benchmarks and applications such as the Sysbench 1.0.17 benchmark [30], the MEncoder 1.2.1 application [31], the PARSEC 3.0 benchmark's Black-Scholes model and Streamcluster [32], and an ensemble clustering application using Weka 3.8.1 [33]. Sysbench stresses the CPU of a server by calculating the prime numbers between zero and a user-defined number. Prime number calculations are required in real-world applications such as the generation of cryptographic hash codes, random numbers, and designing the number of pins on a rotor machine. MEncoder stresses the CPU and memory by performing a video compression operation on a specified video file. We use real videos of the laboratory activities in MPEG-4 video format [34] with 1920 × 1080 resolution. The Black-Scholes model utilizes the CPU, memory, and disk I/O by calculating the prices of a European options portfolio analytically using partial differential equations (PDE).
Streamcluster utilizes the CPU, memory, disk I/O, and network I/O by solving an online clustering problem. The data mining ensemble clustering application stresses the CPU, memory, and disk I/O by performing k-means clustering on a specific data set. We use forest cover data sets [35] consisting of geospatial descriptions of various forest types. The dataset includes real tree observations from four wilderness areas located in the Roosevelt National Forest of northern Colorado. We run these benchmarks and applications on each server of our experimental testbed and measure the values of the different utilization metrics and the corresponding power consumption. The measured utilization metric values are then used as the resource requirements of the cloud workloads.
4.2 Experiments

We perform two sets of experiments: first, to obtain the training dataset to develop the power models for the schedulers; second, to develop the workload for the cloud schedulers. To generate the training dataset, we performed four experiments on the servers in our testbed, each stressing CPU, memory, disk, or network, respectively, using the tools described in the previous subsection. To stress the CPU, we produce 30 configurable CPU loads between 0 and 100% at random intervals using the CPU Load Generator. To stress memory, we populate the servers' memory using VM workers of 30 random memory sizes using the Stress tool. To produce disk utilization, we generated 30 I/O rates between 0 and 100% at random intervals using Vdbench. To generate network I/O, we ping the server under experiment from a remote server using 30 random bandwidths between 0 and 100% using iperf3. Every experiment runs for 5 min, during which we measure the values of the performance metrics and the corresponding power consumption every one second. We record these values in a file and calculate the average of the performance metrics and power consumption values. Each experiment is repeated 25 times and the average of all the averages is computed. To develop the MLR power model, we use the entire training dataset, while for the PEM model in ECTC, we use the servers' power consumption values when the CPU utilization is at 0 and 100%. The PEM model to predict the power consumption is stated in Eq. 3 [10]:

$$P = P_{min} + (P_{max} - P_{min})\, U_{CPU} \qquad (3)$$

where $P_{max}$ is the server's maximum power consumption at full CPU load and $P_{min}$ is the minimum power consumption when the server is idle. To generate a synthetic and dynamic workload for the schedulers, we stress one or more performance metrics. We run Sysbench to generate a CPU-intensive workload by calculating the prime numbers between 0 and 20,000,000. To produce a CPU- and memory-intensive workload, we performed a video compression operation on video
files of increasing sizes from 5 to 50 GB at an interval of 5 GB, using the MEncoder application. The source code of MEncoder is included in the MPlayer project version SVN-r31628-4.8.5. To generate an intensive workload of CPU, memory, and disk, we use the Black-Scholes application to calculate the prices of a portfolio of 65,536 European options. We also use the ensemble clustering application to perform k-means clustering of data sets with a different number of instances (2799, 279,000, 2,790,000, and 5,580,000). We use the Streamcluster application to generate an intensive workload of CPU, memory, disk, and network by performing an online stream clustering for native input options having 1,000,000 input points and 218 dimensions. We collect the utilization metrics and the corresponding power consumption values every 1 s for each experiment and calculate the average. We repeat every experiment 25 times and calculate the average of the averages. The data set generated for each benchmark/application on each server is replicated 50 times and shuffled randomly to create a large workload. To evaluate the performance of ECTC and IAA-EATSVM, we calculate the energy consumption, electricity cost, carbon dioxide emission, and execution time by running the schedulers to schedule the synthetic workload. We first simulate the data center with an increasing number of hosts (50, 250, 500, 800, and 1000). The host types used for the simulation of the data center are of the same specifications as the ones used in our lab and are equally distributed in the simulated data center. We then create 1500 VMs with the four VM types equally distributed. In the experiments, we first schedule the workload in a data center having 50 hosts using the ECTC scheduler. The power model used during the implementation is PEM. We then schedule the workload using the IAA-EATSVM scheduler. The power model used during the implementation is multiple linear regression. We measure the energy consumption of the cloud data center post scheduling for both schedulers. We repeat both scenarios three times and calculate the average energy consumption. The energy consumption of each running VM is calculated by multiplying the power consumption of the machine by the time it has been running. The energy consumptions of the VMs are added to find that of the hosts, which in turn are added to find that of the data center. We repeat the experiments with an increasing number of hosts. To calculate the electricity costs, we use the standard US energy cost of 13.04 cents/kilowatt-hour (kWh) [36]. This is the average energy consumption cost of all the states in the USA as of November 2019. To calculate the carbon dioxide emission, we use the US average electricity source emission of 0.4483 kg CO2 per kWh [37].
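For concreteness, the cost and emission conversions reduce to two multiplications; the energy figure below is an arbitrary placeholder, not a measured result:

```python
# Cost and CO2 conversions used in the evaluation.
energy_kwh = 75.0                 # placeholder energy for one scheduling run
cost_usd = energy_kwh * 0.1304    # 13.04 cents/kWh, US average [36]
co2_kg = energy_kwh * 0.4483      # 0.4483 kg CO2 per kWh [37]
print(f"cost: ${cost_usd:.2f}, CO2: {co2_kg:.1f} kg")
```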
4.3 Experimental Results Analysis

In this section, we evaluate and compare our results on the performance of ECTC and IAA-EATSVM. We also give insights and conclusions on these evaluations. In particular, we explain the rationale behind the schedulers' performance.

Fig. 2 Energy consumption of IAA-EATSVM and ECTC schedulers in a cloud data center with increasing number of hosts

Figure 2 shows that ECTC has higher energy consumption than IAA-EATSVM for task scheduling. This is because of the following reasons: (1) The power model used by the ECTC scheduler, PEM, depends only on the server's minimum and maximum power and does not consider the implications of the spatial distribution of the power consumption between the extremes. Moreover, PEM uses only CPU utilization to predict power consumption. However, the power consumption of a server in a data center serving a variety of applications depends on memory, disk, and network utilization in addition to CPU. (2) The ECTC scheduler places an incoming task on the server where the overlapping time between the new task and the ongoing task(s) is the maximum. Consequently, the scheduler will not place a task on an idle server, leading to consolidation. Based on the experimental results of [38], server consolidation leads to more energy consumption. On the other hand, IAA-EATSVM uses a power model based on MLR machine learning. It learns from the previous loads and the corresponding power consumption of a server's CPU, memory, disk, and network utilization, and dynamically predicts the power consumption of an incoming task. Moreover, the IAA-EATSVM scheduler considers both idle and active servers and places a task on the server where the increase in energy consumption is the minimum, taking into account the new task and the ongoing one(s). In summary, our scheduler saves up to 4.12% energy compared to the non-intelligent scheduler in a cloud data center having 1000 hosts. Figure 3 shows the electricity cost incurred by the ECTC and IAA-EATSVM schedulers. It shows that IAA-EATSVM yields lower electricity bills compared to ECTC for an increasing number of servers. In summary, IAA-EATSVM reduces the electricity bill by up to 400 USD for scheduling one batch of tasks in a cloud data center with 1000 hosts. Our results on the carbon dioxide emission by the ECTC and IAA-EATSVM schedulers are shown in Fig. 4. They show that the IAA-EATSVM scheduler emits less carbon dioxide, leading to green computing, compared to ECTC, thanks
Fig. 3 Electricity cost for running IAA-EATSVM and ECTC schedulers in a cloud data center with increasing number of hosts
Fig. 4 Carbon dioxide emission by IAA-EATSVM and ECTC schedulers in a cloud data center with increasing number of hosts
to its energy savings. In summary, IAA-EATSVM emits up to 14 kg less carbon dioxide to schedule one batch of tasks in a cloud data center having 1000 hosts. Table 3 shows the total completion time for the execution of the tasks’ workload with an increasing number of hosts for IAA-EATSVM and ECTC. It shows that IAA-EATSVM performs slightly better than ECTC. This is because IAA-EATSVM considers the increase in execution time for overlapping tasks while ECTC does not.
Table 3 Total execution time by IAA-EATSVM and ECTC schedulers in the cloud data center with increasing number of hosts

Number of hosts   Execution time (minutes)
                  IAA-EATSVM   ECTC
50                4.775        5.104
250               25.820       26.759
500               44.883       54.814
800               74.943       78.837
1000              95.329       98.567
5 Conclusion

Energy-aware task scheduling in cloud computing systems has become an important approach for energy savings. In this paper, we propose a machine-learning-based IAA-EATSVM cloud scheduler and compare its performance with the mostly used ECTC scheduler, which is based on a static power model. We evaluate their performance in terms of energy savings, electricity costs, carbon dioxide emission, and execution time. Our experimental results show that IAA-EATSVM achieves 4.12% more savings in energy compared to ECTC, a reduction of $400 in electricity bills, and 14 kg lower CO2 emission for a 1000-host cloud data center. It took 95 min for the workload to complete its execution using IAA-EATSVM and 99 min using ECTC. This is because IAA-EATSVM considers the overlapping time between tasks at scheduling while ECTC does not. In addition, IAA-EATSVM uses a machine learning approach that learns from the execution of past tasks to construct a model for the prediction of power consumption, while ECTC follows a static approach. As for future research, we propose investigations in the following directions. First, we aim to integrate cognitive learning algorithms such as Particle Swarm Optimization and the Genetic Algorithm in the intelligent autonomous agent scheduler to optimize performance in addition to energy consumption while respecting the service-level agreements. Second, we would like to perform the experiments on a wide range of server architectures and configurations.

Acknowledgements This work was funded by the Emirates Center for Energy and Environment Research, United Arab Emirates University, under Grant 31R101.
References

1. Mell, P., Grance, T.: The NIST Definition of Cloud Computing. Recommendations of the National Institute of Standards and Technology. NIST Spec. Publ. 145, 7 (2011). https://doi.org/10.1136/emj.2010.096966
2. Richter, A., Khoshgoftaar, T.: Efficient learning from big data for cancer risk modeling: a case study with melanoma. Comput. Biol. Med. 110, 29–39 (2019)
3. Xia, F., Yang, L.T., Wang, L., Vinel, A.: Internet of things. Int. J. Commun. Syst. 25 (2012)
4. Al Omar, A., Alam Bhuiyan, Z.M., Basu, A., Kiyomoto, S.: Privacy-friendly platform for healthcare data in cloud based on blockchain environment. Futur. Gener. Comput. Syst. 95, 511–521 (2019)
5. Delforge, P.: America's Data Centers Consuming and Wasting Growing Amounts of Energy. NRDC (2015). https://www.nrdc.org/resources/americas-data-centers-consuming-and-wasting-growing-amounts-energy. Accessed 10 Sep 2019
6. Vidal, J.: Global carbon emission. https://www.climatechangenews.com/2017/12/11/tsunami-data-consume-one-fifth-global-electricity-2025/
7. Greenberg, S., Mills, E., Tschudi, B., Berkeley, L.: Best practices for data centers: lessons learned from benchmarking 22 data centers. ACEEE Summer Study, 76–87 (2006). https://doi.org/10.1016/j.energy.2012.04.037
8. Ham, S.W., Kim, M.H., Choi, B.N., Jeong, J.W.: Simplified server model to simulate data center cooling energy consumption. Energy Build. 86, 328–339 (2015). https://doi.org/10.1016/j.enbuild.2014.10.058
9. Dai, X., Wang, J.M., Bensaou, B.: Energy-efficient virtual machines scheduling in multi-tenant data centers. IEEE Trans. Cloud Comput. 4, 210–221 (2016). https://doi.org/10.1109/TCC.2015.2481401
10. Lee, Y.C., Zomaya, A.Y.: Energy efficient utilization of resources in cloud computing systems. J. Supercomput. 60, 268–280 (2012). https://doi.org/10.1007/s11227-010-0421-3
11. Ismail, L., Materwala, H.: EATSVM: energy-aware task scheduling on cloud virtual machines. Procedia Comput. Sci. 135, 248–258 (2018). https://doi.org/10.1016/j.procs.2018.08.172
12. Huai, W., Qian, Z., Li, X., et al.: Energy aware task scheduling in data centers. J. Wirel. Mob. Netw. Ubiquitous Comput. Dependable Appl. 4, 18–38 (2013)
13. Ying, C.T., Yu, J.: Energy-aware genetic algorithms for task scheduling in cloud computing. In: Proceedings of the 7th ChinaGrid Annual Conference, ChinaGrid 2012, pp. 43–48 (2012). https://doi.org/10.1109/ChinaGrid.2012.15
14. Wu, C.M., Chang, R.S., Chan, H.Y.: A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters. Futur. Gener. Comput. Syst. 37, 141–147 (2014). https://doi.org/10.1016/j.future.2013.06.009
15. Qureshi, B.: Profile-based power-aware workflow scheduling framework for energy-efficient data centers. Futur. Gener. Comput. Syst. 94, 453–467 (2019). https://doi.org/10.1016/j.future.2018.11.010
16. Ilager, S., Ramamohanarao, K., Buyya, R.: ETAS: energy and thermal-aware dynamic virtual machine consolidation in cloud data center with proactive hotspot mitigation. Concurr. Comput., 1–15 (2019). https://doi.org/10.1002/cpe.5221
17. Fan, X., Weber, W.-D., Barroso, L.A.: Power provisioning for a warehouse-sized computer. ACM SIGARCH Comput. Archit. News 35, 13 (2007). https://doi.org/10.1145/1273440.1250665
18. Bircher, W.L., John, L.K.: Complete system power estimation using processor performance events. IEEE Trans. Comput. 61, 563–577 (2011)
19. Kim, N., Cho, J., Seo, E.: Energy-credit scheduler: an energy-aware virtual machine scheduler for cloud systems. Futur. Gener. Comput. Syst. 32, 128–137 (2014). https://doi.org/10.1016/j.future.2012.05.019
20. Calheiros, R.N., Ranjan, R., Beloglazov, A., et al.: CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exp. 41 (2011)
21. Carlucci, G.: CPULoadGenerator. https://github.com/GaetanoCarlucci/CPULoadGenerator. Accessed 1 Sep 2019
22. Stress project page. https://people.seas.harvard.edu/~apw/stress/. Accessed 1 Sep 2019
23. Vandenbergh, H.: Vdbench Users Guide, 1–114 (2012)
24. Mortimer, M.: iperf3 Documentation (2018)
25. Linux perf Examples. http://www.brendangregg.com/perf.html. Accessed 2 Sep 2019
26. Collectd. https://collectd.org/wiki/index.php/Main_Page. Accessed 1 Sep 2019
27. Tektronix: User Manual, TDS1000- and TDS2000-Series Digital Storage Oscilloscope. 206 (2009)
28. Wedlock, B.D., Roberge, J.K., James, K.: Electronic Components and Measurements. Prentice-Hall (1969)
29. R: What is R? https://www.r-project.org/about.html
30. Kopytov, A.: Sysbench. https://github.com/akopytov/sysbench#sysbench. Accessed 14 Aug 2018
31. MPlayer—The Movie Player. http://www.mplayerhq.hu/design7/dload.html. Accessed 14 Aug 2018
32. The PARSEC Benchmark Suite. http://parsec.cs.princeton.edu/parsec3-doc.htm. Accessed 14 Aug 2018
33. Weka 3—Data Mining with Open Source Machine Learning Software in Java. https://www.cs.waikato.ac.nz/ml/weka/downloading.html. Accessed 25 Aug 2018
34. H264 Video Format. http://www.h264info.com/h264.html. Accessed 14 Aug 2018
35. UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/index.php. Accessed 25 Aug 2018
36. The energy rate report. https://www.chooseenergy.com/electricity-rates-by-state/. Accessed 25 Feb 2020
37. Carbon dioxide emission. https://carbonfund.org/calculation-methods/. Accessed 25 Feb 2020
38. Ismail, L., Materwala, H.: Energy-aware VM placement and task scheduling in Cloud-IoT computing: classification and performance evaluation. IEEE Internet Things J. 5, 5166–5176 (2018). https://doi.org/10.1109/JIOT.2018.2865612
Agent-Based Modeling and Simulation and Business Process Management
Design of Technology for Prediction and Control System Based on Artificial Immune Systems and the Multi-agent Platform JADE G. A. Samigulina
and Z. I. Samigulina
Abstract The work is devoted to the creation of a technology and an intelligent system for the prediction and control of complex nonlinear dynamic objects in the oil and gas industry, based on modified algorithms of artificial immune systems using the multi-agent platform JADE. As an example, the technological process of gas purification from acidic components at the U300 installation (medium-pressure absorber D302) of the Tengiz Chevroil enterprise is considered. A knowledge base based on ontological models implemented in the Protégé ontology editor has been developed. The knowledge base is created according to the MDA (Model-Driven Architecture) concept: first, platform-independent ontological models are created, and then platform-dependent ones. The hierarchical structure of classes and the visualization of a common ontological model for a multi-agent system are presented. A block diagram of a multi-agent intelligent prediction and control system, implemented on the JADE platform, has been developed. A description of the containers created in JADE and a fragment of the agent specification are given. Simulation results are presented.
1 Introduction

Digitalization of industry and progress in the development of modern information technologies, as well as artificial intelligence, contribute to the development of highly efficient complex distributed automated control systems (DCS—Distributed Control Systems), such as Experion PKS from the Honeywell company [1]. Large industrial enterprises such as Tengiz Chevroil, Karachaganak Petroleum Operating, and others actively use
these achievements. Complex objects of the oil and gas industry are characterized by multidimensionality, uncertainty of parameters, and redundancy of current information. The large amounts of information and the high rate of analysis of production data that a person faces in modern production increase the risk of making wrong management decisions. Technological process control at refineries is divided into several independent levels of protection. During the operation of a distributed control system, huge amounts of information are collected that are not used or analyzed in any way. There are segments of predictable and unpredictable events [2]. A promising approach is to study the segment of predictable events and to process multidimensional production data in real time based on metaheuristic algorithms of artificial intelligence. Of particular interest are recent developments in the field of artificial immune systems and the various modified algorithms based on them. For example, in [3], a hybrid approach is considered that uses the recognition algorithms of the Artificial Immune Recognition System (AIRS and AIRS2) to solve classification problems and offers reliable and powerful information processing. In [4], an algorithm is considered that combines AIRS with a deterministic version of the genetic algorithm (GA). The simulation was carried out on real data and showed that the proposed algorithm achieves higher classification accuracy and speed than the AIRS2 algorithm. The study [5] is devoted to the use of a modified artificial immune system algorithm based on clonal selection and an artificial neural network to predict electric energy consumption and peak loads. An urgent task is the selection of informative descriptors that describe the state of the system and the creation of an optimal data set. Various bio-inspired artificial intelligence optimization algorithms are widely used. For example, in [6], a new strategy for selecting informative descriptors is considered, based on the combined use of the firefly algorithm and the clonal-selection approach of artificial immune systems. The simulation results showed the promise of this modified algorithm. Recently, many new promising optimization algorithms have appeared. The study [7] is devoted to optimization based on the grey wolf algorithm for solving real engineering problems. The proposed modified mGWO algorithm with an exponential decay function shows better results than the classic GWO algorithm. In [8], the solution of the optimization problem is considered using an artificial immune network based on a cloud model (AINet-CM). A demonstration of the proposed algorithm on two industrial applications showed the practical value of the developed approach. The paper [9] provides an analytical review of the Flower Pollination Algorithm (FPA) and its applications. The work [10] is devoted to solving the problem of informative feature selection based on a binary flower pollination algorithm. Numerical experiments using publicly available data sets and a comparative analysis with particle swarm optimization and the firefly algorithm showed good simulation results. In this regard, the development of a technology and an intelligent prediction and control system based on highly effective modified algorithms of artificial immune
systems and other artificial intelligence algorithms for predicting behavior, diagnosing equipment, and operational control of a complex object is relevant [11]. Applications developed on the basis of AIS algorithms [12] and various modified algorithms [13] have several advantages: memory, autonomy, and adaptability. The literature analysis confirms the relevance of this area of research. The multi-agent systems (MAS) approach is widely used to solve the problems under consideration. Multi-agent systems are a powerful tool for introducing promising bio-inspired AI algorithms, and in particular artificial immune systems, into real production. The article [14] discusses the creation of a multi-agent decentralized control system based on the artificial immune systems approach. The study [15] is devoted to the development of a testing technology for intelligent agents in creating complex distributed systems based on the clonal selection algorithm CLONALG. The obtained results allow the developer to change the structure of the agent in order to increase its productivity. The results of a comparison with the genetic algorithm (GA) and the ant colony optimization algorithm (ACO) are presented, which show the advantages of the proposed approach. During multi-agent systems design and knowledge base development, ontological models [16] are often used in order to systematize input and output parameters and to create more effective interaction between agents. The following structure of the article is proposed: the research problem is formulated in the second section; methods of solution and the development of a knowledge base on the basis of ontological models for a multi-agent system are described in the third section; the fourth section is devoted to an intelligent system for the prediction and control of complex objects based on the artificial immune systems approach and the multi-agent platform JADE; the fifth section shows the simulation results; the conclusion is given in the sixth section; and at the end there is a list of references.
2 Statement of the Problem

The problem statement is formulated as follows: it is necessary to develop a technology and an intelligent system for the prediction and control of complex nonlinear dynamic objects based on artificial immune systems and the multi-agent platform JADE for real production facilities in the oil and gas industry. As an example, the technological process of gas purification from acidic components at the U300 installation (medium-pressure absorber D302) of the Tengiz Chevroil enterprise is considered [17]. A database of the parameters of the complex object, consisting of readings from sensors, has been developed. For example, the following are used: LIC31053—a level sensor; TI31050—a temperature sensor; PDI31003—a differential pressure sensor; FIC31005—a flow sensor; QRAH31001—an H2S-in-gas analyzer, etc. The dimension of the database is R = 11 × 700, i.e., 7700 data items [18, 19].
3 Solution Methods

The implementation of the smart system for the prediction and control of complex objects of the oil and gas industry is carried out on the multi-agent platform JADE (Java Agent Development Framework) [20]. Despite the availability of extensive software for implementing multi-agent systems (AgentBuilder, JACK Intelligent Agents, MASON, MadKIT, etc.), the open JADE platform is used quite actively; it complies with the international FIPA standard (Foundation for Intelligent Physical Agents). Applications developed in this environment are failure resistant and have a self-recovery ability, and therefore have a sufficient degree of reliability. Multi-agent systems have flexibility, scalability, high performance, and the ability to be extended with new modules. The operation and effective interaction of intelligent agents is carried out in a dynamic environment. Agents communicate among themselves by messages in the ACL (Agent Communication Language). A set of convenient graphical tools is another advantage of this software product. In JADE, a main container is created in which the special agents AMS (Agent Management System) and DF (Directory Facilitator) operate; it is always active. The remaining containers with agents are connected at the time of launch. Intelligent agents can operate with a semantic knowledge base [16] in the OWL (Web Ontology Language) format, consisting of ontologies of the used modified AIS algorithms created in the Protégé ontology editor. Representation of agent knowledge in the form of ontologies helps to facilitate communication between agents. The development of the knowledge base is carried out in the framework of the MDA (Model-Driven Architecture) concept in two stages: first, platform-independent ontological models are created, and then these models are converted into platform-dependent ones [21]. The multi-agent platform JADE, just like the MDA approach, implements the BDI (Beliefs-Desires-Intentions) model, whereby agents are created taking into account beliefs, desires, and intentions: they realize their own goals, communicate with each other, and react to changes in the software environment. A knowledge base on the basis of ontological models of the applied algorithms has been developed. Figure 1 shows the hierarchical structure of the classes for the ontological model of the multi-agent prediction and control system. During the development of the multi-agent intelligent control system, the following optimization algorithms are used:

• Random Forest algorithm (RF) [22];
• Particle Swarm Optimization algorithm (PSO) [23];
• Grey Wolf Optimization algorithm (GWO);
• Flower Pollination algorithm (FP).

The following artificial immune system algorithms are also used:

– the AIRS artificial immune system recognition algorithm;
– an artificial immune system algorithm based on clonal selection (CLONALG);
– an immune network algorithm (AIS) [22].
Fig. 1 Hierarchical structure of the classes for the ontological model of a multi-agent intelligent system in Protégé ontology editor
Figure 2 shows the visualization of the structure of the ontological model of the multi-agent intelligent system. The ontological model is used to analyze the numerous connections between intelligent agents and to take them into account during the software development of multi-agent systems.
Fig. 2 Visualization of the structure of the ontological model of a multi-agent intelligent system

4 Intelligent System of Prediction and Control of Complex Objects on the Basis of the Multi-agent Platform JADE

Figure 3 shows the developed intelligent control system. Based on information about the dynamic behavior of a complex object, a goal is formed that must be achieved during the operation of the intelligent control system. Data from the control object are stored in the parameter database (readings from sensors) and processed on the basis of the prediction and control technology, which communicates with a knowledge base consisting of ontologies of the used optimization algorithms and artificial immune system algorithms (Figs. 1 and 2). Since the technology consists of many elements interacting with each other, a multi-agent approach is used.

Fig. 3 Block diagram of an intelligent control system based on a technology and multi-agent approach

The operation of the intelligent prediction and control system is carried out according to the following algorithm (a compact orchestration sketch is given after Table 1):

Step 1. Connection to the data storage of the Experion PKS distributed enterprise control system and formation of a database of the parameters of the complex object, consisting of readings from sensors.
Step 2. Choice of a data optimization method and creation of the optimal set; formation of an optimal database of parameters (descriptors) that describe the behavior of the complex object based on the selected optimization algorithm.
Step 3. Solution of the classification problem (3 classes are selected—alarm priorities "emergency", "high", "low").
Step 4. Selection of a prediction algorithm based on artificial immune systems.
Step 5. Performance evaluation and selection of a modified AIS algorithm based on the best predictive result.
Step 6. Prediction and decision-making on the operational control of the complex object [19].

Such a system consists of a large number of interacting elements, whose practical implementation is most convenient in the class of multi-agent systems. Figure 4 shows a block diagram of the intelligent prediction and control system based on the multi-agent platform JADE. Special agents are created in the main container: AMS (Agent Management System), for managing agents and providing information about currently existing agents, and DF (Directory Facilitator), which provides information on the services of agents registered with the AMS. Container No. 1 contains agents that implement optimization algorithms (RF, PSO, GWO, FPA); container No. 2 contains agents that perform artificial immune system algorithms (AIRS, CLONALG, AIS); container No. 3 contains agents for evaluating the effectiveness of the algorithms (Precision agent, Recall agent, etc.).
Fig. 4 Block diagram of a multi-agent intelligent system based on JADE platform
Effective interaction between agents is carried out using messages in the ACL (Agent Communication Language), which consist of fields such as sender, recipient, communication, and content. Table 1 provides a fragment of the specification of the main agents of the multi-agent intelligent system.

Table 1 Fragment of the specification of the main agents

Name of the agent                      Agent designation   Agent function
RF agent                               RF_K1               Random Forest agent
PSO agent                              PSO_K1              Particle Swarm Optimization agent
GWO agent                              GWO_K1              Grey Wolf Optimization agent
FPA agent                              FPA_K1              Flower Pollination Algorithm agent
AIRS agent                             AIS_K2              AIRS artificial immune system recognition algorithm
CLONALG agent                          CLON_K2             Artificial immune system algorithm based on clonal selection (CLONALG)
Immune network modeling agent          AIS_K2              AIS immune network modeling algorithm
Evaluation of prediction model agent   EP                  Prediction model evaluation agent
Precision agent                        EC1                 Precision evaluation agent (accuracy)
Recall agent                           EC2                 Recall evaluation agent (completeness)
AUC agent                              EC4                 Area under ROC curve evaluation agent
…                                      …                   …
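To make the six-step operation flow concrete, here is a compact orchestration sketch; the interfaces (callables for optimizers, predictors, and scoring) are hypothetical, and in the real system each step is delegated to a JADE agent:

```python
ALARM_CLASSES = ("emergency", "high", "low")   # Step 3 alarm-priority classes

def run_pipeline(db, optimizers, predictors, score_subset, score_model):
    # Step 2: form the optimal descriptor set with the best optimization agent
    best_subset = max((opt(db) for opt in optimizers), key=score_subset)
    # Steps 3-5: train each AIS predictor on the reduced data, keep the best one
    best_model = max((fit(best_subset) for fit in predictors), key=score_model)
    # Step 6: predict the alarm priority of a new sensor reading
    return lambda sample: ALARM_CLASSES[best_model(sample)]
```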
5 Simulation Results Let us present the simulation results on the example of modified algorithms using the grey wolf algorithm and the immune network modeling algorithm [19, 22] based on homologous proteins. This approach has proven itself in solving the problem of pattern recognition at the class boundary; it has memory, self-organization, and high adaptability. A comparative analysis is presented in Fig. 5. Before data preprocessing based on GWO, the efficiency of the application of the algorithms is: AIRS (62%), CLONALG (63.8%), and AIS (85%). After the reduction of uninformative features based on the grey wolf optimization, the following simulation results of modified algorithms were obtained: GWO-AIRS (73.2%),
Fig. 5 Comparison of the results of applying modified AIS algorithms
GWO-CLONALG (75.4%), and GWO-AIS (93.6%). The best predictive result was shown by the modified GWO-AIS algorithm. Since the accuracy indicator alone does not always allow an objective comparison of prediction algorithms, performance assessments for each class are additionally considered, based on the ROC error curve and the AUC (Area Under ROC Curve) characteristic.
6 Conclusion
The use of modified AIS algorithms in the implementation of the technology for complex object control is a promising area of research and can improve the accuracy of model prediction by selecting the modification of the algorithms that is most effective for analyzing a specific database. The developed multi-agent technology for the prediction and control of complex objects combines the advantages of the MDA, MAC, and BDI approaches. It is a promising tool for the analysis and prediction of multidimensional production data in industrial operation and has the properties of self-organization, adaptability, and failure resistance. The research was financially supported by the Ministry of Education and Science of the Republic of Kazakhstan within the framework of scientific project No. AP05130018, "Development of cognitive Smart technology for intelligent control systems of complex objects based on artificial intelligence approaches" (2018-2020).
References
1. Industrial Automation and Control Solutions from Honeywell. https://www.honeywellprocess.com. Accessed 09 Jan 2020
2. Technical Documentation: Honeywell Server and Client Planning Guide, 37 (2008)
3. Reza, M.: A Hybrid Approach for Artificial Immune Recognition System. University of Malaya, Kuala Lumpur (2016)
4. Jenhani, I., Elouedi, Z.: AIRS-GA: a hybrid deterministic classifier based on artificial immune recognition system and genetic algorithm. In: Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, Honolulu, Hawaii, USA (2017). https://doi.org/10.1109/SSCI.2017.8280958. Accessed 17 Feb 2020
5. Omid, A., Amir, N.: A novel electric load consumption prediction and feature selection model based on modified clonal selection algorithm. J. Intell. Fuzzy Syst. 34(4), 2261-2272 (2018)
6. Kihel, B.K., Chouraqui, S.: Firefly optimization using artificial immune system for feature subset selection. Int. J. Intell. Eng. Syst. 12(4), 337-347 (2019)
7. Mittal, N., Singh, U., Sohi, B.S.: Modified grey wolf optimizer for global engineering optimization. J. Appl. Comput. Intell. Soft Comput., 1-16 (2016). https://doi.org/10.1155/2016/7950348. Accessed 17 Feb 2020
8. Wang, M., Feng, S., Li, J., Li, Z., Xue, Y., Guo, D.: Cloud model-based artificial immune network for complex optimization problem. J. Comput. Intell. Neurosci., 1-17 (2017). https://doi.org/10.1155/2017/5901258. Accessed 17 Feb 2020
9. Abdel-Basset, M., Shawky, L.A.: Flower pollination algorithm: a comprehensive review. J. Artif. Intell. Rev. 52(4), 1557-2533 (2019)
10. Rodrigues, D., Yang, X.C., Souza, A.N., Papa, J.P.: Binary flower pollination algorithm and its application to feature selection. In: Yang, X.-S. (ed.) Recent Advances in Swarm Intelligence and Evolutionary Computation. Studies in Computational Intelligence, vol. 585, pp. 85-100. Springer (2015)
11. Krishnamurthy, E.V., Murthy, V.K.: On engineering smart systems. In: Proceedings of the 9th KES International Conference on Knowledge-Based Intelligent Information and Engineering Systems, vol. 3, pp. 505-512. Australia (2005)
12. Sotiropoulos, D., Tsihrintzis, G.: Machine Learning Paradigms: Artificial Immune Systems and their Applications in Software Personalization. Intelligent Systems Reference Library, vol. 118, pp. 159-235. Springer (2017)
13. Padmanabhan, S., Chandrasekaran, M., Ganesan, S., Khan, M., Navakanth, P.: Optimal solution for an engineering applications using modified artificial immune system. In: IOP Conference Series: Materials Science and Engineering, vol. 183, pp. 1-5 (2017)
14. German, S., Shin, S., Tsourdos, A.: Immune-system inspired approach for decentralized multi-agent control. In: Proceedings of the IEEE 24th Mediterranean Conference on Control and Automation, pp. 1020-1025. Athens, Greece (2016)
15. Carneiro, S.M., Thiago, A.R., Rabêlo, R. de A.L., Silveira, F.R.V., de Campos, G.A.L.: Artificial immune systems in intelligent agents test. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 536-543. Sendai, Japan (2015)
16. Freitas, A., Bordini, R.H., Vieira, R.: Designing Multi-Agent Systems from Ontology Models. LNCS, vol. 11375, pp. 76-95. Springer (2019)
17. The technological regulations for the technological process of purification of hydrocarbon gases at installation 300. Tengiz Chevroil LLP, 1-146 (2016)
18. Samigulina, G.A., Samigulina, Z.I.: Development of Smart-technology for Complex Objects Control based on Approach of Artificial Immune Systems.
In: Materials of the 2018 Global Smart Industry Conference, Chelyabinsk, Russia (2018). https://ieeexplore.ieee.org/document/8570142/keywords#keywords. Accessed 10 Dec 2018
19. Samigulina, G.A., Samigulina, Z.I.: Development of smart technology for complex objects prediction and control on the basis of a distributed control system and an artificial immune systems approach. Adv. Sci. Technol. Eng. Syst. J. 4(3), 75-87 (2019)
20. Samigulina, G.A., Nyusupov, A.T., Shayakhmetova, A.S.: Analytical review of software for multi-agent systems and their applications. News of the Academy of Sciences of the Republic of Kazakhstan, Ser. Geol. Techn. Sci. 3(429), 173-181 (2018)
21. Bučko, B., Zábovská, K., Zábovský, M.: Ontology as a modeling tool within model driven architecture abstraction. In: Proceedings of the 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics, pp. 1525-1530. Croatia (2019)
22. Samigulina, G.A., Samigulina, Z.I.: Modified immune network algorithm based on the Random Forest approach for the complex objects control. J. Artif. Intell. Rev. 52(4), 2457-2473 (2019)
23. Samigulina, G.A., Massimkanova, Zh.A.: Development of smart technology for forecasting technical state of equipment based on modified particle swarm algorithms and immune-network modeling. In: International Conference on Computational and Experimental Engineering and Sciences, pp. 68-69. Japan (2019)
A Multi-agent Framework for Visitor Tracking in Open Cultural Places
Muhammed Safarini, Rasha Safarini, Thaer Thaher, Amjad Rattrout, and Muath Sabha
Abstract Real-time tracking of pedestrians has attracted tremendous attention for many purposes, especially studying their behavior. Perfect tracking, however, is still a challenging open research problem. In this paper, a novel approach is introduced that employs a multi-agent vision-based system to achieve an accurate real-time tracker. The proposed approach is recommended for open cultural places, where there may be obstacles including buildings, columns, and many others. The efficiency of the proposed model was assessed on four real Multiple Object Tracking (MOT) challenge benchmarks in terms of Multi-Object Tracking Accuracy (MOTA). Simulation results reveal the high efficacy of the proposed model compared with state-of-the-art techniques.
1 Introduction
Currently, the detection and tracking of pedestrians in public places is attracting considerable attention from the research community. It can be exploited for several purposes such as surveillance and security-related issues [1]. In public and open spaces, the visitors to be tracked cannot be expected to carry any transmitter or marker for tracking purposes. Consequently, computer stereo vision plays an important role in tracking multiple visitors through the analysis of real-time video streams from multiple fixed surveillance cameras [2, 3]. This process is often carried out in two stages: detection and tracking. Human detection, a branch of object detection, is utilized to identify the presence of objects and localize humans (i.e., identify the location of humans in rectangular bounding boxes) [4]. Various machine learning approaches have been introduced for human detection, such as Haar cascades [5], Histograms of Oriented Gradients (HOG) [6], and deep learning-based methods [7]. However, these techniques require high
computation power. In this paper, an efficient framework for multi-person tracking in open cultural places is proposed by combining a Multi-Agent System (MAS) with computer vision methods. The main contributions are:
• a framework that tracks humans by assigning an agent to each of them;
• a technique that can achieve real-time tracking through parallel computing;
• a successful tracking framework built on MAS.
The rest of the paper is organized as follows: related works are reviewed in Sects. 2 and 3. In Sect. 4, the proposed methodology is presented and discussed in depth. Section 5 presents the simulation results and their analysis. Finally, Sect. 6 concludes the overall results and outlines future work.
2 Review of Related Works
Perfect real-time tracking is still a challenging problem. Several aspects make this process very difficult, especially for humans. Their clothes vary in color and texture, which increases the complexity of tracking, as do the random behavior of humans and their interactions. For instance, people may move singly for a while and then move together as a group. Moreover, many challenges arise from several factors, including varying lighting, motion blur, crowded scenes, partial occlusions (the human is not entirely observed), cluttered backgrounds, and other environmental constraints. For tracking purposes, machine learning classifiers are employed to learn from the detected objects in the previous and subsequent frames; thus, additional computational power is required [4]. Due to the challenges mentioned above, the tracking-by-detection problem is still open for research. The performance of such systems is based on two contradictory metrics: the maximum accuracy of the final results and the minimum computation time [3]. Recently, Multi-Agent Systems (MASs) have attracted considerable attention from researchers in the field of computer vision. They are highly recommended for dealing with complex problems effectively [8]. Srisamosorn et al. introduced an indoor human tracking system involving a moving robot (quadrotor) and multiple fixed cameras. The fixed cameras are used to identify the position and direction of each human, as well as the position of the quadrotor, while the quadrotor follows the human's movement by tracking his/her face with an attached camera [9]. Although satisfactory results were achieved, the system is applicable only to a small closed environment with a limited number of persons. Choi et al. [10] introduced an efficient real-time multi-person tracking framework for intelligent video surveillance. The purpose is to detect and track humans without supervision. Their methodology uses the background subtraction approach to extract the Region of Interest (ROI) and utilizes particle filtering for tracking purposes.
Sabha and Nasra [11] used fuzzy logic to obtain accurate results and reduce the ambiguity in user recognition. In addition, they reduced the calculation cost by using parallel computation. Jafari et al. [12] presented a real-time multi-person tracking engine based on Red, Green, Blue, and Depth (RGB-D) vision. Their proposed system is suitable for mobile robots and head-worn cameras. The authors speed up the detection process by exploiting the depth information available from current RGB-D sensors. A system for real-time closed-space tracking of persons in a shopping mall was proposed by Bouma et al. [13]. The system was tested with multiple static cameras without overlapping fields of view. It consists of three main components: tracklet generation, a re-identification engine, and a graphical man–machine interface. Sabha and Abu Daoud suggested a system for adaptive camera placement in open cultural places using Voronoi diagrams [14]. The system applied a technique for compensating for missing drones with other existing drone cameras, so that multiple cameras produce one image. Recently, MAS has been utilized in real-time multi-person tracking tasks and has shown successful performance. However, to the best of our knowledge, MAS has not yet been exploited for tracking and predicting visitor movement in open cultural places. Previtali and Iocchi [15] presented a method for distributed multi-agent multi-object indoor tracking using a multi-clustered particle filtering technique. They used a team of mobile sensors (robots) that keeps track of several moving objects. Sanchez et al. [3] introduced a computational model for tracking and classifying moving objects (pedestrians and vehicles) through surveillance cameras. Their methodology is based on the integration of an intelligent MAS and information fusion techniques with the aim of minimizing the computational effort while producing reliable accuracy. An efficient multi-object tracking approach based on Multi-Agent Deep Reinforcement Learning (MADRL) was proposed by Jiang et al. [16]. They adopted YOLO v3 to detect the objects, which represent the agents.
3 Multi-agent System (MAS)
A MAS, or self-organized system, has recently been emerging as a promising sub-field of Distributed Artificial Intelligence (DAI) [8]. It is a computerized system composed of multiple smart and autonomous entities, called agents, which can collaborate with other entities and perceive their environment to perform the desired actions [17]. MAS is based on the idea of a cooperative work environment in order to handle complex adaptive problems that are impossible to handle with traditional centralized approaches [8]. In a MAS, the whole task is divided into multiple smaller tasks allocated to the collaborative agents. To deal with complex decision-making systems and to achieve the desired goals efficiently, the agent (which can be software, hardware, or a hybrid of both) enjoys the autonomy, sociability, and pro-activity properties [8].
MAS has received tremendous attention from the research community in various disciplines, including graphical applications, civil engineering, smart grids, networking, manufacturing, and, recently, tracking multiple moving objects [3]. In this work, a hybrid MAS model of reactive and proactive agents is exploited to tackle the problem of tracking multiple pedestrians in open cultural places.
4 The Proposed System
4.1 Overview
Existing real-time tracking approaches based on computer vision still face some problems, namely the high computational demand of computer vision, the limited field of view of the sensors used, and the possibility of losing track of pedestrians. On the other hand, in some approaches, MASs are utilized to track moving objects, but they have been applied to indoor environments. In this paper, a novel approach is proposed that combines a multi-agent system with computer vision in order to achieve real-time tracking of pedestrians, as in Fig. 1. The proposed system is designed to be applied to open historical environments with several entrances and exits. The site has to be equipped with fixed cameras. Normally, visitors of all ages and cultures enter and tour the place. The environment has the following characteristics:
Fig. 1 The right part represents the proposed hybrid MAS model for the pedestrian tracking framework shown in the left part
• The number of visitors changes dynamically over time.
• Visitors can move in groups or individually.
• There is a set of rules regulating cultural places that pedestrians have to obey.
The developed system consists of several agents that cooperate with each other to track all visitors who enter the historical place: the visitor, motion detection, histogram, control, position, tracking, search, speed, count, direction, static historical objects, and behavior prediction agents.
4.2 The Employed Agents
A hybrid model is used in the proposed MAS. The hybrid model is a combination of two subsystems, namely reactive and deliberative. The reactive subsystem is able to react quickly to events without complex reasoning, while the deliberative subsystem draws plans and makes decisions using symbolic reasoning. The agents' interactions and the proposed model are shown in Fig. 1, while Table 1 presents a detailed description of the employed agents.
4.3 Methodology
As described above, the employed agents cooperate with each other to track all visitors who enter the historical place. When a visitor arrives, the control agent detects the moving objects in the images from the cameras using the Open Source Computer Vision (OpenCV) contour function and sends the moving objects to the histogram agent. The histogram agent then calculates the histogram of each object and compares it with the stored histograms in the database, using a specific threshold, to decide whether the person in this frame is already registered in the site or is a new visitor. If the detected person is already registered, the agent updates his histogram. If the histogram does not match any of the stored histograms, the person is a new visitor: the histogram agent stores the new histogram in the database (a new visitor is registered) and sends a message to the count agent to increase the number of visitors by one. To express the matching between two histograms H_1 and H_2, we employed the OpenCV correlation metric [18], calculated as in Eq. (1):

d(H_1, H_2) = \frac{\sum_I \left( H_1(I) - \bar{H}_1 \right) \left( H_2(I) - \bar{H}_2 \right)}{\sqrt{\sum_I \left( H_1(I) - \bar{H}_1 \right)^2 \, \sum_I \left( H_2(I) - \bar{H}_2 \right)^2}}    (1)

where \bar{H}_k = \frac{1}{N} \sum_J H_k(J) and N is the total number of histogram bins.
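A minimal sketch of this comparison with the OpenCV Java bindings is shown below. The single-channel grayscale histogram and the file names are illustrative assumptions, not the authors' exact configuration; HISTCMP_CORREL is OpenCV's built-in implementation of the correlation metric in Eq. (1).

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.Arrays;

public class HistogramMatch {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Grayscale histogram of a detected person patch (N = 256 bins).
    static Mat histogramOf(Mat bgrPatch) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgrPatch, gray, Imgproc.COLOR_BGR2GRAY);
        Mat hist = new Mat();
        Imgproc.calcHist(Arrays.asList(gray), new MatOfInt(0), new Mat(),
                hist, new MatOfInt(256), new MatOfFloat(0f, 256f));
        Core.normalize(hist, hist, 0, 1, Core.NORM_MINMAX);
        return hist;
    }

    public static void main(String[] args) {
        Mat a = Imgcodecs.imread("person_frame1.png"); // placeholder paths
        Mat b = Imgcodecs.imread("person_frame2.png");
        double d = Imgproc.compareHist(histogramOf(a), histogramOf(b),
                Imgproc.HISTCMP_CORREL); // correlation metric of Eq. (1)
        System.out.println("same visitor? " + (d >= 0.8)); // threshold, cf. Sect. 5.2
    }
}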
Table 1 Description of employed agents

Agent name | Perception | Action | Goal
Motion detection agent | Moving objects in the environment | 1- Stores captured images from the camera. 2- Sends images to the control agent | Detects when visitors arrive at the historical place
Control agent | Images from the motion detection agent | 1- Analyzes images received from the motion detection agent. 2- Sends the detected objects to the histogram agent. 3- Sends the position to the position agent | 1- Detects moving objects. 2- Interacts with other agents
Histogram agent | Objects | 1- Receives objects from the control agent. 2- Calculates a histogram for each object. 3- Compares it with the stored histograms in the database | 1- Calculates histograms. 2- Stores visitor histograms in the database
Position agent | Position of the visitor agent | 1- Receives the position of a visitor, calculates the direction, and stores it. 2- Sends the position of a visitor agent to the control, search, or behavior prediction agents | Stores the position of a visitor agent
Direction agent | Direction of movement of the visitor agent | 1- Receives and stores the movement direction of the visitor agent. 2- Sends the direction of the visitor to the search agent or the behavior prediction agent | Stores the direction of movement of each visitor agent
Depth agent | Distance between the visitor agent and the camera | Receives the distance between the visitor and the camera | Stores the distance between the visitor and the camera
Speed agent | Speed of the visitor agent | 1- Receives the speed of the visitor agent. 2- Calculates and stores the speed of each visitor. 3- Sends an alarm if the speed is exceeded | Stores the speed of the visitor agents
Count agent | Message of visitor arrival or departure | 1- Receives a message from the control agent for new arrivals or departures. 2- Sends the number of visitors when needed | Stores the number of visitors in the place at any given time
Tracking agent | Movement of the visitor agent | 1- Receives the position of the visitor agent continuously. 2- Calculates and stores the visitor path. 3- Sends the path when needed | Stores the path of the visitor agent's movement
Search agent | Manages lost visitors | 1- Receives information about the lost agent from other agents (position, direction, etc.). 2- Retrieves historical data from the database. 3- Predicts where the lost agent could be | Searches for the lost visitor agent
Behavior prediction agent | All information about the visitor agent | 1- Receives information from other agents (position, direction, path, etc.). 2- Predicts the next move of the visitor agent | Predicts the next move of the visitor agent
Fig. 2 System methodology
The control agent also interacts with the depth, speed, position, direction, and tracking agents to store the needed information about the visitor. With each movement of the visitor agent, the motion detection agent keeps sending the captured images to the control agent, which in turn interacts with the other agents to update the visitor's information, keep him tracked, and store his movement path. If the visitor agent goes out of the covered area, the control agent informs the other agents to delete his stored data and decreases the count by one. The search agent comes into play when a visitor is lost within the environment limits: it retrieves historical statistical data and the last (x, y) position of the lost visitor to predict his possible location. Finally, the behavior prediction agent learns the visitors' behavior using their location history and predicts the next place a visitor could go to. The whole system is described in detail in Fig. 2.
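The histogram agent's decision step described above might look as follows. The in-memory visitor store and the specific threshold value are illustrative assumptions; the paper keeps the histograms in a database.

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import java.util.Map;

// Sketch of the histogram agent's matching logic (Sect. 4.3); the
// Map-based store stands in for the database mentioned in the paper.
public class HistogramMatcher {
    private final Map<Integer, Mat> knownVisitors; // visitor id -> stored histogram
    private final double threshold;                // e.g. 0.8, tuned per site

    HistogramMatcher(Map<Integer, Mat> knownVisitors, double threshold) {
        this.knownVisitors = knownVisitors;
        this.threshold = threshold;
    }

    /** Returns the matched visitor id, or -1 for a new visitor. */
    int match(Mat observedHist) {
        int bestId = -1;
        double bestScore = threshold; // a match must exceed the threshold
        for (Map.Entry<Integer, Mat> e : knownVisitors.entrySet()) {
            double score = Imgproc.compareHist(e.getValue(), observedHist,
                    Imgproc.HISTCMP_CORREL);
            if (score > bestScore) { bestScore = score; bestId = e.getKey(); }
        }
        if (bestId >= 0) knownVisitors.put(bestId, observedHist); // update histogram
        return bestId; // caller registers a new visitor and notifies the count agent
    }
}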
5 Simulation Results
5.1 Simulation Setup
To assess the tracking performance of the proposed algorithm, four well-known MOT challenge benchmarks were employed in this research [19]. These benchmarks were chosen carefully with different details and complexities to cover various behaviors. The prototype of the framework was designed and implemented using the Java Agent Development (JADE) platform [20] and OpenCV. The Java development environment used is a PC with an Intel Core i5-4200, 2.3 GHz CPU, and 16 GB RAM. The algorithm was applied to four real open-access MOT videos, PETS09-S2L1, AVG-TownCentre, TUD-Crossing, and Venice-1, shown in Fig. 3. The tracking performance of the proposed system was assessed and compared with other existing approaches based on the Multi-Object Tracking Accuracy (MOTA) measure.
Fig. 3 Videos where the algorithm is applied, with bounding boxes around the detected pedestrians: (a) AVG-TownCentre, (b) PETS09-S2L1, (c) TUD-Crossing, (d) Venice-1
It is a commonly used evaluation metric that indicates how many mistakes the tracker made in terms of missed targets, false positives, mismatches, and failures to recover tracks [21].
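In the CLEAR MOT formulation of [21], these error types combine into a single score over all frames t:

\mathrm{MOTA} = 1 - \frac{\sum_t \left( \mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t \right)}{\sum_t \mathrm{GT}_t}

where \mathrm{FN}_t are the missed targets, \mathrm{FP}_t the false positives, \mathrm{IDSW}_t the identity switches (mismatches), and \mathrm{GT}_t the number of ground-truth objects in frame t. A MOTA of 100% corresponds to a perfect tracker, and the score can become negative when the errors outnumber the ground-truth objects.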
5.2 The Impact of the Similarity Threshold
The threshold is the desired lower limit for the similarity of two detected objects belonging to different frames. It is a vital factor that needs to be properly tuned. Extensive experiments using different threshold values (0.65, 0.7, 0.75, 0.8, and 0.9) were conducted for each tested video. Inspecting Table 2, it can be noted that the threshold of 0.8 obtained the best accuracy for the PETS09-S2L1, AVG-TownCentre, and TUD-Crossing benchmarks, while for Venice-1 the threshold of 0.75 is better. It is therefore recommended to tune the threshold for each site when installing the system to give the best results.
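A sketch of how this per-site tuning might be automated is shown below; the evaluateMOTA callback is a hypothetical stand-in for running the full tracker over a benchmark and scoring it with the MOTA measure.

// Hypothetical threshold sweep over the candidate values used in the paper.
public class ThresholdTuner {
    interface Evaluator { double evaluateMOTA(double threshold); }

    static double bestThreshold(Evaluator tracker) {
        double[] candidates = {0.65, 0.70, 0.75, 0.80, 0.90};
        double best = candidates[0], bestMota = Double.NEGATIVE_INFINITY;
        for (double t : candidates) {
            double mota = tracker.evaluateMOTA(t);
            if (mota > bestMota) { bestMota = mota; best = t; }
        }
        return best; // e.g. 0.8 on PETS09-S2L1, 0.75 on Venice-1 (Table 2)
    }
}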
5.2.1 Computation Time
In this part, we are interested in the efficiency of using MAS. Table 3 compares the running times of the proposed approach (i.e., using MAS together with computer vision methods) and of the approach without MAS (i.e., using pure computer vision methods) on the AVG-TownCentre benchmark. The accelerated trends of both approaches are plotted in Fig. 4.
Table 2 Accuracy results for different values of the similarity threshold

Benchmark | 0.65 | 0.7 | 0.75 | 0.8 | 0.9
PETS09-S2L1 | 58 | 62 | 66 | 70 | 64
AVG-TownCentre | 65 | 70 | 75 | 80 | 72
TUD-Crossing | 63 | 68 | 73 | 78 | 70
Venice-1 | 38 | 40 | 45 | 42 | 41
Table 3 Comparison of running times on AVG-TownCentre showing the MAS effect

# of objects | 10 | 25 | 50 | 75 | 100 | 150 | 200 | 300 | 400 | 500 | 600 | 700
Without MAS | 2 | 5 | 10 | 16 | 23 | 35 | 40 | 60 | 80 | 100 | 120 | 140
With MAS | 2 | 4 | 7 | 10 | 15.5 | 23 | 27 | 37.5 | 47 | 55.75 | 63.9 | 71.55
Fig. 4 Running time results
It is clear that employing MAS speeds up the detection and tracking processes. Dividing the problem into multiple tasks that can be handled by various agents provides the ability for parallel computation and thus overcomes some computer vision limitations.
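As an illustrative analogy (not the authors' implementation), this per-visitor division of labor can be sketched with a plain Java thread pool, where each detection is handled by its own task much as each visitor is handled by its own agent:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelTrackingSketch {
    // Each bounding box is processed independently, so the work scales
    // across cores instead of running as one sequential vision loop.
    static void processFrame(List<int[]> boundingBoxes) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (int[] box : boundingBoxes) {
            pool.submit(() -> {
                // per-visitor work: histogram matching, position/speed update ...
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}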
5.2.2 Comparison with Other Techniques
To investigate the performance of the proposed approach, it is compared with 14 other state-of-the-art multi-object trackers in terms of the MOTA measure. The comparisons are presented in Table 4. The reported results clarify the superiority of the proposed tracker on the AVG-TownCentre and Venice-1 benchmarks and show competitive results on the TUD-Crossing benchmark. These results confirm that the
Table 4 Comparison between the proposed tracker and the state-of-the-art trackers in terms of MOTA% (– indicates that the benchmark has not been tested by the corresponding approach)

Tracker | PETS09-S2L1 | AVG-TownCentre | TUD-Crossing | Venice-1
Proposed algorithm | 70 | 80 | 78 | 45
FFT15 | – | 36.2 | 84.8 | 38.6
MPNTrack15 | – | 54.4 | 80.1 | 40
RNN_LSTM | – | 13.4 | 57.2 | 12.7
SiameseCNN | – | 19.3 | 73.7 | 22.3
MADRL | – | 49.8 | 79.6 | 26.5
DeepMP_15 | – | 47.3 | 73.6 | 36.8
MHTREID15 | – | 38.4 | 79.1 | 32.5
CRFTrack | – | 49 | 80.8 | 37.3
AP_HWDPL_p | – | 28.4 | 61.3 | 39.1
STRN | – | 27.9 | 68 | 39.8
AMIR15 | – | 36.2 | 73.7 | 29.1
HybridDAT | – | 29.2 | 73.3 | 37.1
INARLA | – | 32.1 | 73 | 36.4
QuadMOT | – | 30.8 | 72.1 | 31.8
employed search agent has assisted the proposed model in predicting and searching for missing persons during the tracking process, thus achieving good results compared with recent comparative techniques.
6 Conclusions and Future Work
In this paper, an efficient multi-person detection and tracking model was presented by combining computer vision and MAS. The model was tested on four real MOT challenge benchmarks. Simulation results demonstrated that the similarity threshold has a significant impact on the performance of the proposed tracker. Besides, adopting MAS has ensured excellent potential for tracking persons in efficient time. In comparison with other recent tracking methods, the proposed method showed highly competitive accuracy results. Future work on pedestrian tracking will focus on recognizing the entering person, registering him/her, and following him/her to analyze his/her behavior according to age, background, level of education, and any other information. This could be used in many applications, including marketing and visitor satisfaction.
References
1. Booranawong, A., Jindapetch, N., Saito, H.: A system for detection and tracking of human movements using RSSI signals. IEEE Sens. J. PP, 1 (2018)
2. Ray, K.S., Dutta, S., Chakraborty, A.: Detection, recognition and tracking of moving objects from real-time video via SP theory of intelligence and species inspired PSO. arXiv:1704.07312 (2017)
3. Sanchez, S., Rodríguez, S., De La Prieta, F., De Paz, J., Bajo, J.: Multi-agent system for tracking and classification of moving objects. Adv. Intell. Syst. Comput. 373, 63-74 (2015)
4. Nguyen, D.T., Li, W., Ogunbona, P.: Human detection from images and videos: a survey. Pattern Recognit. 51 (2015)
5. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1, p. I. IEEE (2001)
6. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 2 (2005)
7. Baccouche, M., Mamalet, F., Wolf, C., Garcia, C., Baskurt, A.: Sequential deep learning for human action recognition. In: International Workshop on Human Behavior Understanding, pp. 29-39. Springer (2011)
8. Dorri, A., Kanhere, S., Jurdak, R.: Multi-agent systems: a survey. IEEE Access, 1 (2018)
9. Srisamosorn, V., Kuwahara, N., Yamashita, A., Ogata, T., Ota, J.: Human-tracking system using quadrotors and multiple environmental cameras for face-tracking application. Int. J. Adv. Robot. Syst. 14, 172988141772735 (2017)
10. Choi, J.W., Moon, D., Yoo, J.H.: Robust multi-person tracking for real-time intelligent video surveillance. ETRI J. 37 (2015)
11. Sabha, M., Nasra, I.: Visitor tracking in open cultural places. In: Proceedings of the 4th Hyperheritage International Seminar. HIS.4, Jenin, Palestine (2017)
12. Jafari, O.H., Mitzel, D., Leibe, B.: Real-time RGB-D based people detection and tracking for mobile robots and head-worn cameras. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 5636-5643. IEEE (2014)
13. Bouma, H., Baan, J., Landsmeer, S., Kruszynski, C., van Antwerpen, G., Dijk, J.: Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall. In: Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2013, vol. 8756, p. 87560A. International Society for Optics and Photonics (2013)
14. Sabha, M., Daoud, J.J.A.: Adaptive camera placement for open heritage sites. In: Proceedings of the International Conference on Future Networks and Distributed Systems. ICFNDS 2017. Association for Computing Machinery, New York, NY, USA (2017). https://doi.org/10.1145/3102304.3109813
15. Previtali, F., Iocchi, L.: PTracking: distributed multi-agent multi-object tracking through multi-clustered particle filtering. In: 2015 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 110-115 (2015)
16. Jiang, M., Deng, C., Yu, Y., Shan, J.: Multi-agent deep reinforcement learning for multi-object tracker. IEEE Access PP, 1 (2019)
17. Rezaee, H., Abdollahi, F.: Average consensus over high-order multiagent systems. IEEE Trans. Autom. Control 60, 1 (2015)
18. Marín-Reyes, P.A., Lorenzo-Navarro, J., Castrillón-Santana, M.: Comparative study of histogram distance measures for re-identification (2016)
19. Leal-Taixé, L., Milan, A., Reid, I., Roth, S.: MOTChallenge 2015: towards a benchmark for multi-target tracking.
arXiv:1504.01942 (2015)
20. Bellifemine, F., Bergenti, F., Caire, G., Poggi, A.: JADE—a Java agent development framework. In: Multi-Agent Programming, pp. 125-147. Springer (2005)
21. Bernardin, K., Stiefelhagen, R.: Evaluating multiple object tracking performance: the CLEAR MOT metrics. EURASIP J. Image Video Process. (2008)
Toward Modeling Based on Agents that Support in Increasing the Competitiveness of the Professional of the Degree in Computer Science
María del Consuelo Salgado Soto, Margarita Ramírez Ramírez, Hilda Beatriz Ramírez Moreno, and Esperanza Manrique Rojas

Abstract This article presents a proposal for an agent-based model that detects patterns of behavior based on the dynamic characteristics and changing needs of the educational-work environment to be represented. After the analysis of the information obtained and the definition of the scope of the project, the next step is the modeling of the proposal, which will later allow the development of a social simulator of the relationships and interactions between agents, each of which executes a function. Together with the knowledge base, it will then be possible to obtain, as a dynamic and updated result, the new perceptions that represent opportunities for improvement in teaching and in the learning competencies of the educational process, as well as other factors that will allow the student to integrate appropriately and meet the needs of the labor sector at a given time, strengthening the competencies of the Bachelor of Computer Science program in order to increase competitiveness.
1 Introduction
Higher education institutions seek to achieve a high level of competitiveness through their study programs, focusing on all the aspects responsible for providing knowledge, skills, and attitudes to the future professional so that at any given time he can place
himself and adapt in a new environment for professional development, as well as providing the tools to keep him updated. The work environment is in constant flux due to the demands of the environment itself, the needs of society, globalization, and great technological advances; these requirements must be met by professionals who understand the trends of knowledge management, the adoption of information technologies in organizations, innovation, attention to social problems, and economic development. The educational-work environment discussed in this work can be understood as a complex system defined by the relationships among the particular characteristics of its components, in which social, economic, and environmental processes take place, and in which an exchange of information and ideas determines the environment within which competitiveness develops. This type of system exhibits certain peculiarities, such as the duration of a process; the problem or situation that must be analyzed since it arose; the presence, interaction, and organization of an agent ecosystem; the emergence of properties; as well as non-linearity, dynamism, and transformation [1]. After analyzing the above, the proposal arises to model a computer system in which reality is emulated in an artificial agent-based environment in order to identify dynamic scenarios, establish patterns, and anticipate behaviors, allowing us to observe the necessary changes, and the time needed to adopt them in the curriculum, thus generating new information and knowledge to achieve professional, dynamic, and updated training that allows the professional to contribute to solving problems, attending to situations efficiently and effectively, and consequently achieving an increase in competitiveness.
1.1 Objective
To represent, through an agent-based model, a complex artificial ecosystem defined by the work environment and higher education, leading to the proposal of a social knowledge management simulator that addresses learning competencies and labor needs through the detection of improvements and the strengthening of the competencies of the Bachelor of Computer Science program, in order to increase competitiveness.
1.2 Justification
There are social, professional, personal, and academic reasons for updating educational programs in periods defined by the higher education institutions; even so, this activity is not enough, given the changing needs of the work environment
and society, which the future professionals of the Bachelor of Computer Science face when trying to enter the labor market. Through an agent-based tool and the advantages of simulating a school-work environment, we could increase competitiveness and generate knowledge for taking action, that is, establish strategies, plan, and execute them so that the tool in some way suggests updates to the contents of the curriculum and can address the problems and emerging situations that occur in the labor sector. This would allow, through the search for patterns in interest groups, the potential of the abilities, skills, and knowledge of the future professionals of the Bachelor of Informatics to be distinguished through the identification of improvements and the strengthening of skills, focused to a large extent on the solution of the problems that organizations demand be taken care of.
1.3 Related Cases
Studies have been carried out and projects developed that show the important role played by agent-based systems, which make it possible to simulate behaviors or detect patterns and to learn from them, anticipating facts and supporting decision-making. In one work, an approach for adaptive and dynamic electronic education was proposed; this project presented a multi-agent architecture based on techniques that support the electronic education process and on artificial psychology techniques to deal with user psychology, thus making the electronic education system more effective and satisfactory, and the process of e-education adaptable, dynamic, and therefore up to date [2]. In another project, agent-based systems designed for a variety of human learning activities were examined; the authors divided them into two areas, one focused on adult learning of work-related skills and the second on the learning of children and adults in various academic environments. This allowed them to classify the main trends in the development of agents that help people and to emphasize that the work in each of these categories brings to light common situations [3]. Another work relating education and agent-based models consisted of the construction of an intelligent distance education platform that can provide a collaborative virtual teaching environment for teachers and students in different places through agent technology [4]. In a project carried out to enrich teaching and learning systems, a multi-agent pedagogical system was conceptualized and analyzed. To do this, the authors started with the activities of understanding the problem and identifying the active and passive actors; then, in an analysis, they defined the agent, task, coordination, communication, organization, use, and experience models, and the MAS-CommonKADS methodology was used to model the multi-agent system. Finally, with the results obtained, they observed the importance of artificial intelligence in education [5]. In line with the cases presented, it is proposed to contribute an agent-based model simulator as a tool capable of detecting situations of certainty, uncertainty,
and risk for future professionals, as well as offering teachers resources to prepare students based on the patterns detected in the labor sector.
2 Referential Framework
2.1 Complex Systems
Systems are formed by different elements; they are conditioned by an environment, time, and place, and include different factors, such as physical, biological, and socioeconomic ones [1]. Complexity allows us to understand that what has been created is real, but that it can go further and become another understanding. Complexity is usually taken as a synonym for difficulty; it can be considered as something that cannot be codified, the opposite of easy [6]. A complex system can be conceptualized as a system composed of several entities, interacting processes, nonlinear, out-of-balance descriptions, and computer-based simulations [7]. To define it, it is necessary to observe the interaction of the systems, subsystems, processes, and other components in the time and space of the phenomenon; the information shared between them must also be observed before any action or change that occurs in the environment [8].
2.2 Agent-Based Model
For this project, the benefits of intelligent agents were considered. The modeling of agent systems depends largely on the specific needs that must be addressed in the environment, which will allow agents to learn, socialize, and evolve. Agent-based simulation, as a computational tool, allows the complexity, emergence, and non-linearity typical of many social phenomena to be treated simply [9]. Computational simulation is considered an essential tool for testing existing models in parameter ranges impossible to reach through experimentation; in addition, it allows the visualization of the results obtained [10]. With this type of system, it is possible to predict how it will behave by analyzing the behavior of agents under various conditions through simulation [11].
2.3 Higher Education, Skills, and the Labor Sector
Education is a human right for all, throughout life, and access to education must be accompanied by quality [12]. Higher education institutions aim to train students to solve current social problems, in addition to preparing better professionals for the
future through a system [13] composed of the study plans, selected as a structured set of subjects, practices, and teaching and learning activities, and containing the general training data [14]. Higher education is of great interest for this project because it is where one of the objects of study arises: the students, as future professionals who enter the labor market, monitor their performance, and achieve their personal and professional growth.
3 Methodology
The proposed methodology for solving a complex social problem [1] is the following:
• The first phase is to identify the agents: the user, who is the subject that has the problem; the third parties involved, who are in the problem situation and may be affected by the solution; the decider, who is in the problem situation and has the power and the resources to make decisions regarding it; and the advisory agent, the research team, which is in the problem situation, identifies it, and suggests alternatives for changing the situation in order to solve it.
• The second phase is to perform a systemic analysis. This analysis includes the phases of defining the user, the system, and its environment, the applicable complexity approaches, the construction of the conceptual model, and the implementation and monitoring of the solution.
4 Modeling Proposal
Through a model for the representation of the real world, an artificial environment will be simulated with the characteristics and rules that allow new information to be obtained in order to define actions; the model presented below aims to increase the competitiveness of future professionals. The proposal, shown in Fig. 1, describes how the agents interact with each other, perceiving the particularities of the environment as input and sending new information among themselves to increase their knowledge. In this interaction, the agents WorkEnvironmentAgent, DetectorAgent, ComputerSystemDegreeAgent, StudentAgent, SubjectAgent, TeacherAgent, and TeachingAgent participate together with the knowledge base, and their interaction will yield the factors that increase competitiveness.
Definition of model agents
In the scenario defined as an educational-work environment, the proposed model is composed of the following agents, described below:
Fig. 1 Model proposal
• WorkEnvironmentAgent: represents the labor sector, which is responsible for defining the job opportunities, their characteristics, and the needs to be covered.
• DetectorAgent: responsible for detecting job opportunities and their respective characteristics in order to cover them in the applicant labor market.
• ComputerSystemDegreeAgent: represents the Computer Science educational program; it receives the request to cover a job opportunity and offers professional training to the student.
• StudentAgent: personifies the student, who has different attitudes, skills, and competencies, in addition to the knowledge acquired in the educational program.
• SubjectAgent: responsible for the content, knowledge, skills, abilities, and values that the student receives.
• TeacherAgent: represents the teacher in the educational environment, who is responsible for providing knowledge and developing skills in the student.
• TeachingAgent: responsible for receiving, validating, or providing the teaching skills and techniques of the teacher.
Interaction of the agents
The sequence diagram in Fig. 2 shows the interaction of the agents that have direct contact with the educational-work environment.
Fig. 2 Interaction of agents in the educational-work environment
1. WorkEnvironmentAgent is responsible for requesting that a job position be filled, defining the characteristics and skills of the applicant.
2. DetectorAgent detects the opportunity to fill the position, consults the knowledge base, and receives results.
3. Based on the result of the query, DetectorAgent sends a request to ComputerSystemDegreeAgent.
4. ComputerSystemDegreeAgent receives the request and the competitiveness characteristics.
5. ComputerSystemDegreeAgent updates its knowledge base and sends the recommendation to DetectorAgent.
6. DetectorAgent receives the recommendation of the professional who should fill the job and updates the knowledge base.
7. DetectorAgent sends the appropriate recommendation for the job to WorkEnvironmentAgent.
Figure 3 shows the interaction between StudentAgent and the other agents, including the queries to the knowledge base of each of them, all within the educational environment:
1. ComputerSystemDegreeAgent, with the result, sends information and a request to the agents StudentAgent and TeacherAgent.
2. StudentAgent searches the student base for the skills and abilities that define competitiveness and, with the result, sends a request to SubjectAgent.
3. SubjectAgent receives the request, searches the subject base for knowledge and skills, and returns the query to StudentAgent.
4. StudentAgent updates its database and returns the query to ComputerSystemDegreeAgent.
Figure 4 shows the interaction between TeacherAgent and TeachingAgent, including the queries to the knowledge base of each of them, all within the educational environment:
Fig. 3 Interaction of StudentAgent in the educational environment
Fig. 4 Interaction of TeacherAgent in the educational environment
5. TeacherAgent receives the request for skills, techniques, experience, and knowledge.
6. TeacherAgent consults the teacher base and receives the results, then sends a request to TeachingAgent.
7. TeachingAgent receives the request, searches the teaching base for the best skills and techniques, and returns them to TeacherAgent.
8. TeacherAgent updates its base and sends the result to ComputerSystemDegreeAgent.
9. ComputerSystemDegreeAgent receives the information, updates its knowledge base, and sends the recommendation to DetectorAgent.
10. DetectorAgent receives the recommendation of the professional who should fill the job and updates the knowledge base.
Returning to the proposed methodology for the solution of complex social problems, what is described in the previous figures corresponds to the first phase of the methodology: identifying the agents that interact in the simulated artificial environment, that meet the characteristics defined for this environment, and that are directly related to the problem. In the figures, it can be seen that the main agents interacting with the educational-work environment update their knowledge base and update that environment to achieve the objective.
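If the model were realized on an agent platform such as JADE (used elsewhere in this volume), the DetectorAgent cycle of steps 2-3 and 6-7 might be sketched as follows. The agent names match Fig. 1, while the message content format and behaviour structure are assumptions for illustration.

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Hypothetical sketch of the DetectorAgent: it forwards job requests from
// the labor sector to the degree agent and relays recommendations back.
public class DetectorAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour() {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) { block(); return; }
                if (msg.getPerformative() == ACLMessage.REQUEST) {
                    // Job opportunity from WorkEnvironmentAgent: consult the
                    // knowledge base, then forward the required competencies.
                    ACLMessage fwd = new ACLMessage(ACLMessage.REQUEST);
                    fwd.addReceiver(new AID("ComputerSystemDegreeAgent", AID.ISLOCALNAME));
                    fwd.setContent(msg.getContent());
                    myAgent.send(fwd);
                } else if (msg.getPerformative() == ACLMessage.INFORM) {
                    // Recommendation from ComputerSystemDegreeAgent: update the
                    // knowledge base and answer WorkEnvironmentAgent.
                    ACLMessage reply = new ACLMessage(ACLMessage.INFORM);
                    reply.addReceiver(new AID("WorkEnvironmentAgent", AID.ISLOCALNAME));
                    reply.setContent(msg.getContent());
                    myAgent.send(reply);
                }
            }
        });
    }
}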
5 Conclusions
Modeling reality in a system can sometimes be difficult, but from the complexity perspective one can understand the relationships that exist between each of the components and the messages communicated between them. Through the modeling of a computer system with the support of agents, the computational simulation and analysis of complex systems become possible: reality can be studied to detect patterns that will foresee or prevent undesirable situations, or address them through advance actions that make it possible to achieve the objectives proposed for the system in question. The proposal presented is a first version, and it is intended to increase competitiveness through agents, using the dynamism, independence, and self-organization that characterize them, as well as to feed the knowledge base with the needs that support the increase of the skills, abilities, and knowledge of future professionals.
6 Future Works
The proposal is an outline of an agent-based computer system that aims to increase competitiveness. Future work is to analyze the proposal presented and consider including more agents to define more specific tasks based on the properties that are defined; another activity is to formalize each of the agents in terms of the beliefs, desires, and intentions that define their tuples, and to implement the knowledge base and the fuzzy inference rules. The activities also include representation in a simulator that makes it possible to represent the linearity of the system, to formulate hypotheses, check them, and take the right path in the future development of the computer system.
Once the above has been verified, the next step is to analyze the software that allows the environment to be represented, modeling through programming to simulate the natural environment of a complex system and the social interaction between the agents, and to explore the behavior in order to support the proposal presented.
References
1. Lara-Rosano, F., Gallardo Cano, A., Almanza Márquez, S.: Teorías, métodos y modelos para la complejidad social: un enfoque de sistemas complejos adaptativos, 1st edn. Colofón Ediciones Académicas, Ciudad de México (2017)
2. Chen, J., Cheng, P.: A new multi-agent approach to adaptive e-education. In: Wang, W., Li, Y., Duan, Z., Yan, L., Li, H., Yang, X. (eds.) Integration and Innovation Orient to E-Society, Volume 2. IFIP International Federation for Information Processing, vol. 252. Springer, Boston, MA (2007)
3. Sklar, E., Richards, D.: The use of agents in human learning systems, pp. 767-774 (2006). https://doi.org/10.1145/1160633.1160768
4. Duan, W.S., Ma, Y., Liu, L.P., Dong, T.P.: Research on an intelligent distance education system based on multi-agent. In: Zhong, Z. (ed.) Proceedings of the International Conference on Information Engineering and Applications (IEA) 2012. Lecture Notes in Electrical Engineering, vol. 218. Springer, London (2013)
5. Jiménez, J., Ovalle, D., Branch, J.: Conceptualización y análisis de un sistema multi-agente pedagógico utilizando la metodología MAS-CommonKADS. Revista Dyna 76(158), 229-239, Medellín (2009). ISSN 0012-7353
6. Agazzi, E., Montecucco, L.: Complexity and Emergence: Proceedings of the Annual Meeting of the International Academy of the Philosophy of Science, Bergamo. World Scientific (2002). ISBN 9812381589
7. Schweitzer, F., Zimmermann, J.: Communication and self-organisation in complex systems: a basic approach. In: Fischer, M., Fröhlich, J. (eds.) Knowledge, Complexity and Innovation Systems. Advances in Spatial Science. Springer, Berlin, Heidelberg (2001)
8. Martínez, G.: Sistemas complejos. Revista.unam.mx (2012). [Online]. http://www.revista.unam.mx/vol.13/num4/art44/art44.pdf
9. Gutiérrez, G.: Elementos de simulación computacional (2001)
10. Saha, S.: Introducción a la Robótica. McGraw-Hill España, España (2011)
11. Durán, J.: Nociones de Simulación Computacional: Simulaciones y Modelos Científicos. CONICET, Argentina; Universidad Nacional de Córdoba, Argentina (2015)
12. UNESCO: Educación | Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura (2017). [Online]. http://www.unesco.org/new/es/santiago/education/
13. Guerrero, J., Faro, M.: Breve análisis del concepto de Educación Superior. Alternativas en Psicología, Tercera Época, Año XVI, Número 27, Agosto-Septiembre (2012)
14. Planes de estudio: Glosario. Términos utilizados en la Dirección General de Planeación y Programación. Secretaría de Educación Pública (2008). [Online]. http://cumplimientopef.sep.gob.mx/2010/Glosario%202008%2024-jun-08.pdf
Human Tracking in Cultural Places Using Multi-agent Systems and Face Recognition
Adel Hassan, Aktham Sawan, Amjad Rattrout, and Muath Sabha
Abstract Heritage places are considered among the most valuable places for any nation in maintaining its history. Computer Vision (CV) and Multi-Agent Systems (MAS) are used for preserving, studying, and analyzing cultural heritage places. This paper introduces a new technique that combines both CV and MAS to track visitors' faces in cultural places. The model consists of four layers of MAS architecture. The proposed system shows its ability to tackle the human face tracking problem and its flexibility in solving the problem with different tracking parameters. This paper also describes the ability of the agent-based system to deploy a computer vision system executing the different algorithms that fit the human face recognition and tracking problem. The proposed system can be used in any similar place as a real-time agent-based human face tracking system.
1 Introduction
Computer vision is the most critical system for detecting, recognizing, and tracking objects in a complex environment such as heritage places. The tracking approach is of two types: motion-based object tracking and recognition-based object tracking. Tracking is influenced by a variety of factors, such as the detectors, the sensors, and the environment.
The multi-agent system concept suits this complex environment of tracking humans. Facial recognition for tracking detects, identifies, locates, and tracks humans in the scene according to different parameters, such as certain faces, human positions, camera position, environment, and surrounding effects, which affect the choice of suitable computer vision algorithms used in the design. This paper suggests combining a computer vision technique with a multi-agent system to produce a state-of-the-art system that captures any movement in the environment through a recognition-based approach. The rest of this paper is organized as follows: Sect. 2 presents related works; Sect. 3 presents the problem and the proposed system; Sect. 4 presents the simulation of the proposed system and its results; Sect. 5 is the conclusion and suggested future work.
2 Related Works
To build a model for extracting data from images and videos (series of images) using computer vision, the system in general consists of two main processes: image processing and pattern recognition. These processes are used to create new categories of information understandable by the system [1]. Computer vision tries to simulate human eyes in identifying objects and perceiving the environment, but this process involves many challenges, depending on the design and software used to perform the recognition with the limited resources and functionalities of the computer vision system. Pattern recognition is responsible for identifying objects in images that have been taken by external devices such as cameras and sensors [2]. Object detection and representation is the most crucial process in finding moving entities in image sequences. Object tracking is the next necessary process, identifying the coordinates of moving objects in images or series of images (video); doing this requires robust object tracking algorithms [3]. Sabha and Abu Daoud suggested an adaptive camera placement system for covering the complete area [11]; their system could be applied to open cultural places for tracking humans. There are many well-known tracking algorithms and techniques, such as Kalman, mean-shift, CamShift, and LBP filters, that achieve good results [2]. Xi [7] divides object tracking into two categories: global or holistic visual representation and local visual representation. LBP (Local Binary Patterns) uses a local feature representation, especially in facial image analysis, and covers processes in object tracking methodologies such as face detection, face representation, facial analysis, classification, and face recognition [5]. Object tracking methodologies can be put into two categories: recognition-based tracking and motion-based tracking. Recognition-based tracking is a modification of object recognition: the object is recognized in a series of images and its coordinates are extracted. One drawback of this method is that the object can be tracked only if the identification and recognition process succeeds; otherwise
it cannot be tracked. Motion-based detection relies on the motion of the object even if the object has not been identified; this is done by comparing background and foreground images [9, 12]. Techniques used to handle the target tracking problem depend on the tracking parameters, which are described as follows: (i) Trackers: cameras, sensors, or any sensing devices. (ii) Targets: one or more objects, either stationary or moving. (iii) Environment: outdoor/heritage places (dynamic, unstructured environments) or indoor (controlled, static, structured environments). For facial tracking, LBP is considered the most robust, fastest, and most accurate method for recognizing and tracking an object in an image or video located in an outdoor environment such as a heritage place. Heritage places are complex environments that contain many objects. Tasks can be divided so that the job is done faster and more efficiently by assigning every task to one agent or multiple agents using a Multi-Agent System (MAS) [4]. An agent is an entity located in an environment that uses parameters to perform a specific action based on the goal predefined for the entity. An agent can be reactive or proactive, depending on the nature of its defined tasks. Multiple agents communicating with each other, perceiving the environment, and feeding back richer information about environmental changes formulate a MAS [10]. MAS characteristics are used to build and design a new model in the computer vision area by making each agent do a specific job, such as capturing images, identifying objects, and analyzing a modality in these images, and by feeding this information back into a knowledge base system that extracts features at a certain time, performing further filtration and classification of object types and movements, which cause changes between video frames in the perceived environment. Reference [8] presents a cooperative multi-target tracking system that introduces a set of active vision agents, called AVAs, to track each object in the environment and exchange object information between agents. Reference [13] presents how agents can cooperate with each other to assign dedicated resources that fit the limitations of the environment, using multi-sensor target tracking. Reference [6] presents different object tracking methods such as HAAR-like features [14], Histograms of Oriented Gradients (HOG) [3], and Local Binary Patterns (LBP) [15].
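To make the recognition-based approach concrete, the sketch below shows LBP-cascade face detection on a video stream with OpenCV. It is a minimal illustration, not the authors' implementation: the cascade file path and the video source are assumptions, and the detection parameters are typical defaults rather than values from the paper.

import cv2

# Path to OpenCV's stock LBP frontal-face cascade (the XML ships with the
# OpenCV source distribution); adjust to your local installation.
CASCADE_PATH = "lbpcascade_frontalface_improved.xml"

detector = cv2.CascadeClassifier(CASCADE_PATH)
capture = cv2.VideoCapture("entrance.mp4")  # hypothetical video of a site entrance

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale slides the LBP cascade over an image pyramid;
    # minNeighbors trades false positives against missed faces.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(40, 40))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("LBP face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()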
3 Proposed System

The proposed system builds a model that deals with three main categories: first, the heritage place (environment), which consists of many elements and needs to be organized in a unique form so the system can deal with them; second, computer vision (CV), which is used to perceive the environment; and third, the multi-agent system (MAS)
that acts as the interface, operating on the environment through computer vision to carry out the required actions. The layered architecture designed using MAS techniques for the proposed system, which defines the policies, procedures, and functionalities between these elements, is shown in Fig. 1.
3.1 Complexity of System

The main characteristics that define the complexity of the proposed system, shown in Fig. 2, arise from the large number of elements that must be dealt with, such as
Fig. 1 Architectural layer model
Fig. 2 Structure model
– Emergent behavior
– Self-organization
– Adaptation
– Co-evolution.
The computer vision system and the multi-agent system work together in this paper, using different agents to simplify the complex computer vision tasks of perceiving, filtering, classifying, localizing, detecting, recognizing, and tracking human actions in a sophisticated and complex environment such as a heritage place. Agent characteristics are used to communicate, collaborate, and cooperate in order to carry out the computer vision activities quickly and accurately. The MAS architecture proposed in this paper is a layered approach that uses reactive and proactive agents for different tasks, to tame the complexity of the environment and of computer vision. The computer vision part uses an LBP-based facial recognition algorithm, which draws on techniques such as template matching, support vector machines, and linear programming to examine facial expressions. A template is formed for each face class, and a nearest-neighbor classifier matches the input image to the closest template; each face image is divided into small regions from which LBP histograms are extracted and concatenated into a single, spatially enhanced feature histogram (a sketch of this construction follows the layer description below). The proposed architecture is composed of four layers: the Presentation Layer, the Middle Layer, the Operation Layer, and the Database (Knowledge Base) Layer. These layers exchange data and information with each other through different agents, developed in JADE, that communicate reactively or proactively using the Foundation for Intelligent Physical Agents (FIPA) language to capture visitors and their reactions, improving the tracking of the target object, which is the human face. Different types of agents are created with a specific role in each layer, as illustrated below:
(i) The Presentation/User Layer: the access point for a user to the tracking system; the more accuracy needed in the tracking results, the more precise the tracking parameters the user should enter into the system.
– Tourist Agent: represents tourists in the heritage area.
– CAM Agent: films tourist agents.
(ii) The Middle Layer: where decision making takes place; the success of the framework relies on the decisions made in this layer. The agent here is responsible for deciding and choosing the best tracking methodology based on the user's input tracking problem parameters.
– Tracking Agent: captures and tracks tourist agents in the heritage area.
– Path Agent: tracks the paths of tourist agents.
– Location Agent: tracks the location of a tourist agent.
(iii) The Operation Layer: includes all the agents that represent a phase in the tracking process, such as
– Similarity Agent (uncertain): tries to find missing or unrecognized agents.
– Capacity Agent: monitors the capacity of a POI.
– Social Agent: tries to spread a general message to more than one agent.
(iv) The Knowledge Layer: the knowledge base consists of a group of tracking methodologies, where each tracking methodology is a subset of the following phases: face detection, face analysis, face recognition, and face tracking.
– Knowledge Base Agent: holds the database and information about POIs.
– Avatar Agent: explains POI information to tourist agents.
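As promised above, here is a minimal sketch of the spatially enhanced LBP representation: per-region LBP histograms concatenated into one feature vector and matched with a nearest-neighbor rule. The grid size, number of sampling points, and chi-square distance are illustrative assumptions, not values fixed by the paper.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(face_img, grid=(4, 4), n_points=8, radius=1):
    """Concatenate per-region LBP histograms into one spatially enhanced vector."""
    lbp = local_binary_pattern(face_img, n_points, radius, method="uniform")
    n_bins = n_points + 2  # "uniform" LBP yields P + 2 distinct codes
    h, w = lbp.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            region = lbp[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(region, bins=n_bins, range=(0, n_bins),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

def nearest_template(feature, templates, labels):
    """Match against per-class templates with a nearest-neighbor (chi-square) rule."""
    def chi2(a, b):
        return np.sum((a - b) ** 2 / (a + b + 1e-10))
    dists = [chi2(feature, t) for t in templates]
    return labels[int(np.argmin(dists))]

# Usage (assuming gray_face is a grayscale face crop):
#   feature = lbp_feature(gray_face)
#   identity = nearest_template(feature, templates, labels)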
4 Simulation of Experiment

The simulation of the proposed system was built using JADE in Eclipse, with OpenCV (Open Source Computer Vision Library) to analyze the video stream. The primary goal of this paper is to create agents using JADE and to build interactions between these agents using Agent Communication Language (ACL) messaging. Experiments were performed on a single-node laptop with Microsoft Windows 10, a four-core Core i7 processor at 1.8 GHz, 16 GB of memory, and a 500 GB HDD. The interactions and activities between agents shown in Fig. 3 describe how they cooperate and collaborate to obtain the required information about a visitor using the face recognition approach in the heritage place. The activity sequence is as follows:
Fig. 3 Sequence activity diagram between agents
(i) Tourist Agent—Tracking Agent: Inform detection, Confirm detection, Begin tracking.
(ii) Camera Agent—Similarity Agent: Get face image, Send face image.
(iii) Avatar Agent—Knowledge Agent: Initiate type of avatar, Confirm type of avatar, Avatar begins presentation.
(iv) Capacity Agent—Social Agent: Get available location, Send location information.
(v) Path Agent—Location Agent: Get location, Send location information, Confirm location of visitor.
Different data sets and algorithms are used to build the model. A data set containing 226 images is used, with algorithms such as Haar, HOG, and LBP. The proposed algorithm combines the HOG and LBP algorithms to produce better results than the other algorithms, as shown in Table 1 and Fig. 4. The proposed system has been applied to video captured at the entrance of a heritage place, and the results are shown in Figs. 5 and 6. The images extracted from the video show that the face recognition methodology captures human faces and keeps tracking them as the visitors explore the environment.
Table 1 Face detection evaluation comparison results of 226 images

Type            Total faces   Haar   HOG   LBP   Improved
True positive   266           208    199   206   215
False positive  266           18     27    20    11
False negative  266           55     63    41    33
Accuracy rate   100%          92%    88%   91%   95%
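The paper does not spell out how the HOG and LBP detectors are combined, so the following sketch shows one plausible fusion: run an LBP cascade and a HOG-based detector (dlib's default face detector is HOG plus a linear SVM) and keep detections on which both agree, which is one way the combination could trim false positives, as in the "Improved" column above. The overlap threshold and fallback rule are our assumptions.

import cv2
import dlib  # dlib's default face detector is HOG + linear SVM

lbp = cv2.CascadeClassifier("lbpcascade_frontalface_improved.xml")  # local path, as before
hog = dlib.get_frontal_face_detector()

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def detect_fused(gray):
    """Keep boxes on which the LBP and HOG detectors agree (IoU > 0.3)."""
    lbp_boxes = [tuple(r) for r in lbp.detectMultiScale(gray, 1.1, 5)]
    hog_boxes = [(d.left(), d.top(), d.width(), d.height()) for d in hog(gray)]
    fused = [a for a in lbp_boxes if any(iou(a, b) > 0.3 for b in hog_boxes)]
    return fused or lbp_boxes + hog_boxes  # fall back to the union if no overlap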
Fig. 4 Face recognition result
Fig. 5 Face recognition source
Applying the proposed system in a complex environment like a heritage place can achieve better results for visitor tracking using the face recognition approach. The tracking agent and camera agent worked cooperatively to recognize a human face when a visitor entered the environment and kept tracking the visitor until they exited; then the similarity agent was enabled, terminated all other activated agents, and stored these activities in the knowledge base through the knowledge agent. (i) The tracking agent initiates the activity and informs the tourist agent that a new visitor is detected. (ii) The tourist agent confirms the activity and replies to the tracking agent that an agent is detected; the tracking agent then requests the camera agent to capture the visitor's face.
Fig. 6 Face recognition result
(iii) A similarity agent is then initiated to examine the detection activity and check the nature of the object; it asks the camera agent to capture the object and reports back that the object is human, detecting the face with the face recognition algorithm. (iv) The camera agent captures the face of the visitor and sends it to the similarity agent. (v) Based on face detection and recognition, the avatar agent introduces the visitor to the available areas and suggests a tour based on capacity agent information stored in or retrieved from the knowledge base agent. (vi) After that, the path agent and location agent are initiated to track the visitor in the environment and guide them until the tour is finished. (vii) Meanwhile, the social agent is activated between the visitor agents to explore different locations and suggest different tours based on information fed back from the social agents.
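The JADE/FIPA-ACL exchanges above can be pictured with a small Python stand-in; this is not JADE code, just an illustrative message-passing skeleton whose class and agent names are hypothetical.

from dataclasses import dataclass
from queue import Queue

@dataclass
class ACLMessage:
    """Simplified stand-in for a FIPA-ACL message (performative + content)."""
    performative: str   # e.g. "inform", "confirm", "request"
    sender: str
    receiver: str
    content: object

class Agent:
    def __init__(self, name, mailbox_registry):
        self.name = name
        self.registry = mailbox_registry
        self.registry[name] = Queue()

    def send(self, receiver, performative, content):
        self.registry[receiver].put(
            ACLMessage(performative, self.name, receiver, content))

    def receive(self):
        return self.registry[self.name].get()

# Steps (i)-(ii) of the sequence above, in miniature:
registry = {}
tracking = Agent("tracking", registry)
tourist = Agent("tourist", registry)

tracking.send("tourist", "inform", "new visitor detected")
msg = tourist.receive()
tourist.send("tracking", "confirm", f"ack: {msg.content}")
print(tracking.receive())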
5 Conclusion and Future Work

Heritage places are complex environments that need special treatment, which a MAS can provide by simplifying tasks and managing and controlling different types of interactions. In this research, the proposed system uses a multi-agent system and computer vision cooperatively to handle the tracking problem in the complex environment of heritage places. For the computer vision part, the HOG and LBP algorithms have been combined to improve the face recognition capabilities. The MAS uses agents for different tasks in the scene, adopting a layered hybrid multi-agent system and benefiting from the ability of agents to work in cooperative, autonomous, and self-organizing structures to manage the enormous number of changes and interactions in the environment
arising from the number of humans and goals in heritage places. Besides, the proposed system managed to combine two algorithms to enhance information retrieval and the tracking of human faces in heritage places. Some problems remain in the current human face recognition and tracking processes: although the proposed system works efficiently when a human face is directed toward the camera, when the face is not in front of the camera, or when the person is partially covered by an obstacle, the system is not able to recognize the human. To overcome such issues, we recommend using additional algorithms to filter, classify, and identify tracked humans, and initiating new agents to tackle these difficulties in recognizing humans in heritage places.
References

1. Cosido, O., Iglesias, A., Galvez, A., Catuogno, R., Campi, M., Terán, L., Sainz, E.: Hybridization of convergent photogrammetry, computer vision, and artificial intelligence for digital documentation of cultural heritage. A case study: the Magdalena Palace. In: 2014 International Conference on Cyberworlds, pp. 369–376. IEEE (2014)
2. Coşkun, M., Ünal, S.: Implementation of tracking of a moving object based on Camshift approach with a UAV. Procedia Technol. 22, 556–561 (2016)
3. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 2 (2005)
4. Dorri, A., Kanhere, S., Jurdak, R.: Multi-agent systems: a survey. IEEE Access (2018)
5. Huang, D., Shan, C., Ardabilian, M., Chen, L.: Local binary patterns and its application to facial image analysis: a survey. IEEE Trans. Syst. Man Cybern. Part C 41, 765–781 (2011)
6. Jiang, M., Deng, C., Yu, Y., Shan, J.: Multi-agent deep reinforcement learning for multi-object tracker. IEEE Access (2019)
7. Li, X., Hu, W., Shen, C., Zhang, Z., Dick, A., Hengel, A.: A survey of appearance models in visual object tracking. ACM Trans. Intell. Syst. Technol. (TIST) 4 (2013)
8. Matsuyama, T., Ukita, N.: Real-time multitarget tracking by a cooperative distributed vision system. Proc. IEEE 90, 1136–1150 (2002)
9. Murray, D., Basu, A.: Motion tracking with an active camera. IEEE Trans. Pattern Anal. Mach. Intell. 16, 449–459 (1994)
10. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, NJ (2010)
11. Sabha, M., Daoud, J.J.A.: Adaptive camera placement for open heritage sites. In: Proceedings of the International Conference on Future Networks and Distributed Systems (ICFNDS 2017). Association for Computing Machinery, New York, NY, USA (2017). https://doi.org/10.1145/3102304.3109813
12. Sabha, M., Nasra, I.: Visitor tracking in open cultural places. In: Proceedings of the 4th Hyperheritage International Seminar (HIS.4), Jenin, Palestine (2017)
13. Soh, L.K., Tsatsoulis, C.: Reflective negotiating agents for real-time multisensor target tracking. In: IJCAI International Joint Conference on Artificial Intelligence (2003)
14. Viola, P., Jones, M.: Robust real-time face detection. Int. J. Comput. Vis. 57, 137–154 (2004)
15. Wang, X., Han, T.X., Yan, S.: An HOG-LBP human detector with partial occlusion handling. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 32–39. IEEE (2009)
A Conceptual Framework for Agent-Based Modeling of Human Behavior in Spatial Design Dario Esposito, Ilenia Abbattista, and Domenico Camarda
Abstract The paper presents the design phase of an Agent-Based Model of a human user acting in a built environment, useful for the simulation of indoor and outdoor settings in urban and architectural design. It describes a conceptual framework that formalizes the main aspects of human spatial behavior in a shared environment. Moreover, it represents the cognitive abilities behind spatial behaviors and defines how physical, social, and psychological factors, as well as their interactions, affect behavioral performance through operational feedback at the decisional and executional levels. The cognition-based architecture is built on an agent pursuing goals in a grid-like layout, where each cell provides a certain feature. The agent forms goals by interpreting objectives through its internal state, and consequently the behavior takes place in the simulated environment. Action planning incorporates the memory of previous successes and failures, which allows the agent to improve over time. Action execution involves movement, interaction with space, and social interaction, all of which vary depending on environmental features and available time.
Author Contributions: Conceptualization, methodology, formalization, and writing, D.E.; visualization, D.E. and I.A.; writing – introduction and conclusions, D.E. and D.C.; review and editing, D.E. and D.C.; supervision, D.C. All authors have read and agreed to the published version of the manuscript.

D. Esposito (B) · I. Abbattista · D. Camarda
Polytechnic University of Bari, 70125 Bari, Italy
e-mail: [email protected]; [email protected]; [email protected]
1 Introduction

Nowadays, there is an increasing tendency in operations research to bring spatially (and sometimes temporally) organized systems into agent-based structures. This is often done with the aim of simulating roles, behaviors, and relationships, drawing out logical, basic operating instructions for multi-agent decision support. A multi-agent model can contain multiple human agents, artificial automatic agents, or a hybrid mix of human and artificial agents. Modeling of this type shows great potential in the environmental domain, allowing a significant reproduction of the ontological–phenomenological richness implicit in the complexity of the environment itself, and therefore allowing the knowledge necessary for decision-making to be maintained. Baseline studies on multi-agent systems in the environment are not very widespread, but various reflections, especially in terms of social simulation, are of great importance for the orientation of our research [1, 2]. Moreover, multi-agent modeling captures one of the structural aspects of environmental and land management processes, namely the hierarchical levels of tasks and reciprocal behaviors. This happens commonly, but in more complex modalities, in environmental, urban, and/or regional contexts, in which relations between community (human, natural, artificial) agents typically occur between operational levels that are often hierarchically very different from one another. Multi-agent models are intrinsically suitable for following these dimensionally complex organizational structures, thus offering potential for management support. Specifically, urban planning and architectural design involve the realization of infrastructures and public spaces. Their design is increasingly important in a context with several economic interests, user needs, and functions expressed through space. This is a task inherently oriented to considering human expectations, needs, and behaviors. Planners, architects, and civil engineers should be able to assess to what extent the designed places meet the requirements expressed by their intended users. To date, they rarely find formal methods to forecast whether infrastructures will perform from a user's perspective before they have been built and tested through use, especially for the most qualitative aspects, such as human satisfaction, productivity, comfort, and safety. Indeed, the design of the built environment and the decisions taken to manage it heavily influence operational efficiency. In early design stages, building-use evaluation requires consideration of human factors, qualitative variables deriving from the social and cognitive sciences, environmental psychology, and more. This poses a major difficulty for experts, particularly due to the unpredictable impact that a physical setting produces on human choices. A badly designed, inefficient building that fails to support the activities of its occupants will not only hinder the users' quality of life but could sometimes be dangerous [3]. Considerations about human spatial behavior and the use of space are often left to the designer's intuition and professional experience. Nonetheless, the use of the built environment can vary from what the expert had planned, because its function can differ: its use changes as a response to the context, reflecting human spatial and social behavior, and because of this it is more dynamic than functional.
The proposed approach aims at supporting practitioners in the design phase of places and spatial infrastructures through a conceptual framework that formalizes the human decision-making process and spatial behavior in indoor and outdoor public built environments, taking into account highly unstable knowledge variables, namely those affecting the choices of people in space. To this end, an established corpus of theoretical studies on human behavior in space was transposed into a coherent and practical framework, forming solid ground for simulation with an agent-based approach. The value of the research lies in improving the design of a "human-centered" built environment, facilitating the understanding and consideration of the use of the built environment through agent-based simulations. This effort is framed within a wider research project to develop systems that support design and planning and improve decision-making processes in the field of urban infrastructure management. The application area of the research is a generic built environment with affordances and services available to a virtual agent moving around it. The agent pursues purposes with determined variants of behavior, making decisions in time and executing activities. The model focuses on the strategic, tactical, and operational levels of a human agent's behavior [4]. The structure of the paper is organized as follows. The present section introduces the research domain, addressing the importance of using agent-based modeling of human behavior in space to support urban planning and architectural design. The second section discusses the agent-based research area applied to the modeling of spatial behavior in human-scale space, with references to relevant studies, outlining linked research problems and our aims. The third section presents the proposed agent-based framework for a formal layout of an agent's spatial behavior model, describing its components for the environment's properties (actions, spatial constraints, movement) and the agent's possible physiological and psychological states. The fourth section depicts how the agent's behavior is defined through a series of development phases. Finally, the outlook for the research domain of spatial design is discussed and prospects in the field of decision support systems are pointed out.
2 Relevant Background

An agent-based approach models a system as a collection of entities called agents. Several definitions of the term "agent" exist; Wooldridge and Jennings define what an agent is and does through a number of properties, such as autonomy, social ability, reactiveness, and pro-activeness [5]. Further definitions and discussions follow, adding an increasing focus on the social contextualization of agents' activities [6, 7]. Bonabeau presented a list of decisive factors in the choice of an agent-based modeling approach. It is interesting to highlight the following, as they fit human-scale agent-based modeling for design purposes most appropriately [8]: (i) when individual behavior is complex and describing complex behavior with
equations becomes intractable; (ii) when space is crucial and the agents' positions are not fixed; (iii) when the population is heterogeneous, i.e., when each individual is potentially different; (iv) when validation and calibration of the model through expert judgment is crucial. Given these factors, an agent-based model in our study is a system based on adaptive and autonomous decision-making agents. For the simulation of social systems, they are set in a spatial layout and can perceive, plan, and act. The case of a set of agents operating and interacting simultaneously in a shared environment, under different initial conditions and constraints, typically pertains to a multi-agent system [2]. A core advantage of ABM-based simulations for applications in urban and architectural studies is their capacity to represent agents' spatial locations and movements in an explicit context, which is crucial to human spatial behavior in a built environment. A realistic agent-based simulation of a built environment emulates how humans act and interact with each other and with the environment. In order to build an expressive model of a human agent using space, it is essential to understand and formalize the behavioral process performed by human users of spaces. An overall system of human behaviors depends on individuals' decisions about actions, which are generally influenced by a large number of factors intertwined in an often unpredictable way. Indeed, human behaviors in space show mixed mechanisms that raise emergent phenomena: from the natural tendency to keep a given distance (proxemics) to imitation effects; from competition for shared space and between different activities to cooperation (unwritten social norms) that prevents stall situations [9–11]. Spatial features have a significant effect on human behavior, both directly and through agents' activities and interactions. Moreover, human spatial behavior and decision-making are strongly affected by the contextual environment. Many studies have attempted to understand this close relationship; major concerns include wayfinding, queuing and crowding, territoriality, spacescape visualization, and more [7, 12–14]. To address some of these issues, we propose a conceptual framework aimed at describing the spatial behavior of agents capable of understanding their environment and acting accordingly. This specifically regards the individual, internal process through which the intention and planning phases develop a certain behavior, as well as its consequences. It formalizes the impact of the social and physical environment on an agent's decision-making process and behavior execution. It considers human factors and traits, as well as perceptive and cognitive abilities, in relation to the agent's dynamic surrounding environment. As a matter of fact, although a number of agent-based simulations exist, they are effective yet highly specific, and so are difficult to apply to other contexts. The main aim of this study is to derive a transversal conceptual framework that formalizes the main aspects of human spatial behavior, useful in the first stages of agent-based modeling for the simulation of indoor and outdoor built environments and in most cases of urban and architectural design.
3 An Agent-Based Model of Human Spatial Behavior

The starting hypothesis of this formalization is to consider a rational agent's behaviors, i.e., the agent aims to perform actions efficiently to achieve a certain objective. The agent must choose among a set of alternatives offered by the surrounding space, to change the environment or adapt to it, in order to properly develop the behavior required to achieve goals and meet needs and desires. Agents have the capacity to move through the environment and act inside it, using a behavioral process drawn from a wide range of research studying the spatial behavior of people in human-scale spaces. Accordingly, our model tries to represent how several environmental factors can influence behaviors observed in the built environment. Indeed, agents reveal behaviors by performing sequences of actions, which vary depending on their conditions and the context. When performing spatial behaviors, each agent reacts to the space configuration and the physical context, to specific environmental attractions or repulsions, and to the actions of other agents present in the same area. Studies on human behavior in space have attempted a preliminary formalization of the basic relationship between environment and behavior under the general functional form, slightly modified from Yan [15]:

B = f(S, G, R, E)    (1)

where B is the agent's behavior, S is the agent's internal state, G is the set of the agent's objectives, R is the set of the agent's behavioral rules, and E is the environment. Since we consider only the most relevant variables, the model is a simplification of reality that operates through adaptive feedback mechanisms [16]. Thus, our model reproduces human spatial behavior, which affects and is affected by the environment in a feedback loop. This refers to agents capable of understanding their environment and acting upon it accordingly. They show individual internal processes that lead to intentions and planning phases, as well as the ability to learn from experience as a consequence of their behavior. The model captures, among other things, spatial compatibility and conflict among activities and people, and the psychological and physical consequences of the spatial experience. The model is built on the interaction of three starting elements, i.e., the environment, the agent's objectives, and the agent's internal state. Hence, any behavior planned and then fulfilled in space starts from a combination of these components. In Fig. 1, their interaction is represented as an intermediate step, named "interaction effect", which transforms inputs into outputs. The conceptual framework is meant to explain how the starting components relate, how they develop in time and space, and the feedback that results from their interactions.
Fig. 1 The feedback effect of the interaction on the main components
3.1 Initial Elements

First, the environment. It is composed of two elements: external conditions and physical morphology. The first element treats the environment as a set of characteristic factors, i.e., settable design parameters such as weather conditions, temperature, time, etc., which also affect the agent's internal state. The second element is the so-called physical context, which contains fixed morphological features, properties, and attributes of the space, as well as its social context. The environment is modeled as a grid of spatial cells, each occupied by at most one agent at a time. The cell-grid composition of space allows for the permanent or transient assignment of features to each cell or cell group, i.e., a defined area, as follows: (i) the action provided due to the presence of a certain physical feature, or of other agents [17]; (ii) walkable paths and obstacle locations; (iii) the degree of spatial compatibility between the same or different activities occurring simultaneously in the same area [18]; (iv) the maximum admitted user density [19]; (v) the degree of attraction or repulsion exerted on agents [20]. The environment is an active element, since it serves as a connecting layer between agents and the activities occurring in it. Indeed, aiming at imitating human cognitive abilities, not human cognition itself, our framework embeds information in the features of the physical morphology. This simplifies the agent's complex cognitive task of information processing, while focusing on decision-making and action execution. Therefore, perceptions of environmental features are automatically recognized by the agents, providing them with information on the possibilities offered by the space, as well as on the presence of other agents acting in the same place [21]. The agent's internal state is the second element. It is influenced by several factors, namely: (i) environmental external conditions; (ii) internal static features, i.e., data defining an agent's fundamental characteristics (gender, age, average movement speed on foot, etc.); and (iii) the agent's physical and psychological state (preferences, fatigue, health, expected time available, etc.). Each internal state is associated with a level on a progressive scale that measures the degree of satisfaction the agent feels in that condition. Moreover, each value is associated with high or low agent awareness, where high awareness expresses good self-efficacy, i.e., the extent to which the agent is confident of performing the behavior effectively [22].
The final starting component concerns the objectives that the agent can achieve. Objectives are set up in the simulation as design parameters. A specific set of active objectives is selected from a list of alternatives that are plausible for the studied layout. Each one is defined as strong or weak. This duality mirrors the agent's degree of commitment to the behavior objectives, i.e., a strong commitment reflects a good attitude and a sense of confidence in performing the task.
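A minimal data-model sketch of these starting elements (cell features, internal state, strong/weak objectives) could look as follows; every field name, type, and scale here is an illustrative assumption rather than a value fixed by the framework.

from dataclasses import dataclass, field

@dataclass
class Cell:
    """One grid cell of the environment, carrying settable design features."""
    walkable: bool = True
    actions: list = field(default_factory=list)        # actions this cell affords
    compatibility: dict = field(default_factory=dict)  # activity-pair compatibility degrees
    max_density: int = 1          # maximum admitted user density
    attraction: float = 0.0       # > 0 attracts agents, < 0 repels them

@dataclass
class InternalState:
    """Agent internal state: static features plus dynamic physio/psycho levels."""
    age: int
    walking_speed: float          # average movement speed on foot (m/s)
    fatigue: float = 0.0
    expected_time_available: float = 3600.0  # seconds
    satisfaction: float = 0.5     # progressive scale in [0, 1]
    awareness: bool = True        # high awareness ~ good self-efficacy

@dataclass
class Objective:
    name: str
    strong: bool                  # strong vs. weak commitment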
3.2 Modeling Agent's Behavior

In this section, we describe the proposed agent behavior framework, which arises from the relationship between the starting elements (red in Fig. 2) and unfolds into the development of the agent's spatial behavior.
Fig. 2 Outline of the agent’s behavior framework
Behavior Planning

Once the starting conditions of the model are defined, behavior planning outcomes are derived from the combination of internal state and objectives (green in Fig. 2). They are combined in a matrix-filter, giving rise to a planned behavior only if they are consistent with each other. For each resulting behavior, the system automatically associates a set of actions that the agent should perform. The framework accounts essentially for two types of actions available in a built environment, displacement and interaction. These result in three types of system functions, where the first one is also a background function required to perform the others: navigation, physical interaction, and social interaction. Every available action is defined by the following features: (1) the information needed about the physical place or about the agent; (2) the duration required to accomplish it; (3) a label for priority classification, i.e., "needed" or "optional" [11, 21]; (4) its spatial compatibility with other activities carried out simultaneously in the same area [20]; (5) its density compatibility limit, due to the presence of other agents acting in the same area [19]; (6) the average speed of displacement (if the activity is related to the navigation function) [23]; (7) the agent's knowledge of the action's feasibility in certain conditions, originating from recorded past experiences. In the model, these attributes represent the agent's knowledge base, which is updated by the experience of similar behaviors. The fourth and fifth features are control parameters, named compatibility signals. Each one specifies a context precondition that the agent needs to verify to carry out the activity. Such parameters represent the social influence on the agent's planned behavior. Thus, even if the agent acquires this information from the active environment, it is still relative to the ongoing activity. The third feature directly characterizes the activity and indirectly influences the fourth and fifth features: it raises the limit values of the compatibility signals for "needed" activities and decreases them for "optional" ones. This means increasing or decreasing the spatial compatibility threshold between activities and the acceptable crowding for each one. The compatibility signal limit values are also influenced by the quality of behavior: they rise when the behavior results from a combination of the agent's strong commitment and high awareness, and they decrease when a combination of weak commitment and low awareness occurs. These combinations affect the behavior execution and the chance of accomplishing activities successfully [9].

Decision-Making

Once the list of actions needed to accomplish the behavior is defined, they are ordered according to functional criteria (light blue in Fig. 2). Namely, the agent observes the surrounding environment and applies a filter that defines the priority of actions based on the following process: (i) a matching process, which recognizes the presence on the spatial grid of the useful spots required to complete the actions; otherwise, the action is automatically excluded from the list. A spot is a point in space corresponding to a grid cell in the case of physical actions, or to a grid cell occupied by other agents in the case of social actions; (ii) a sorting process according to the priority classification label, i.e., "needed" or "optional" [12]; (iii) a wayfinding function
to optimize the path, calculating the shortest path between the useful spots on the cell grid; (iv) information on the action's feasibility, obtained from its seventh feature. Additionally, this function estimates the time required to move within the grid, based on the length of the path and the agent's average displacement speed.

Time Verification

Once the action sequence has been sorted, a time filter is run (yellow in Fig. 2). It estimates the time required to perform the sequence of actions as the sum of the time required to perform them and the time required for total displacement, evaluated according to the information derived from the action features. This value is compared with the expected time available, set for each agent as a starting condition. The filter directly activates the next phase if the expected time available is comparable with the estimated time required. If the expected time available is longer than the time needed, the filter increases the density compatibility limit for needed actions; this condition represents the human predisposition to endure crowding when performing key activities. Moreover, the filter activates the attraction effect of the surrounding environment during "needed" activities, an effect otherwise active only during the execution of "optional" activities. If the expected time available is less than the estimated time required, the time gap is compared with a percentage of the former. If the gap is lower, the system decreases the density compatibility limit. If the gap is higher, the function removes the last action in the run queue and starts a new time verification phase on the reduced action list.

Behavior Execution

Once past the time filter, the agent starts moving through space in order to work through the action list sequentially (blue in Fig. 2). When in contact with the useful grid cell, the agent remains on the spot for the time required to perform the action, thereby affecting the environment or the other agents with whom it interacts. The path strategically defined in the previous steps changes operationally in real time, with information coming from local movement for obstacle avoidance. The circumstances artificially embedded in the environment layout influence behavior execution in real time. The conditions indicated by compatibility signals are prerequisites to be fulfilled to complete the action; otherwise the agent desists and moves on to the subsequent one. They produce unplanned behaviors, such as a slowdown or even a stop caused by the attraction mechanism, which occurs in case of overlap between the agent's surrounding sensory area, corresponding to individual social space, and the area of influence of attractive elements [13]. They can also dissuade the agent from acting due to unsustainable crowding or the spatial incompatibility of activities.

Reflection in Action

After the completion of each action, an assessment process of the situation starts, since the execution of the interaction with the (physical and social) environment is
the cause of unpredictable consequences that push the agent to reflect on the new situation (orange in Fig. 2). In this analysis, the information arising from the action execution affects the agent's psychological and physiological state and objectives. Therefore, a new system cycle starts, and all the previous phases are rerun, with two possible outcomes. The new cycle may lead to the definition of a behavior different from the one previously processed and currently suspended; this happens if the situation has deeply influenced the agent's planned behavior, causing the beginning of a new, essentially different behavior development. Otherwise, the information has no substantial impact on the agent, i.e., the new cycle produces the same behavior, namely the sequence of actions to be carried out in the residual part of the previous cycle. This option expresses the condition that the agent's planned behavior remains unaffected: the agent resumes and proceeds with the behavior in accordance with the new sequence, which is identical to the former one.

Experience Learned

On completion of the action execution, or when the time expires, the agent concludes the behavior with three possible outcomes (purple in Fig. 2). If the behavior is totally accomplished, a feedback mechanism increases the level on the agent's satisfaction scale. If the circumstances prevented the planned actions from being performed, the behavior totally fails: a feedback mechanism acts on the agent's commitment and awareness states, raising the lower one, while decreasing the value on the satisfaction scale. The system also accepts the option that the behavior is partially completed; in this case, the remaining data, namely the residual action list and the new position information, are sent back to the decision-making phase, restarting it. This phase simulates the agent's ability to learn from experience and to update the action attributes in the knowledge base. Updating the knowledge base generates or updates the seventh action feature, which stands for the agent's memory of the feasibility of actions. Thanks to this feature, the repetition of the same action in the same spatial scenario allows the agent to minimize the discrepancy between the planned behavior and its fulfillment, driving it to achieve better outcomes.
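The phases above can be condensed into a small control-loop skeleton. This is a hedged sketch of the framework's flow, not the authors' implementation; the action names, durations, and compatibility rule in the usage example are toy stand-ins.

def behavior_cycle(actions, time_available, compatible, duration):
    """actions: ordered list of action names ("needed" first);
    compatible: callable telling whether an action's compatibility signals hold;
    duration: callable giving an action's estimated duration (s)."""
    done, failed = [], []
    # Time verification: trim the queue until it fits the available time.
    while actions and sum(duration(a) for a in actions) > time_available:
        actions.pop()                       # drop the last queued action
    # Behavior execution, with reflection after each action.
    for action in list(actions):
        if compatible(action):              # crowding / activity compatibility hold
            time_available -= duration(action)
            done.append(action)
        else:
            failed.append(action)           # agent desists, moves to the next action
    # Experience learned: the outcome feeds back into satisfaction / knowledge base.
    outcome = "accomplished" if not failed else ("failed" if not done else "partial")
    return done, failed, outcome

# Usage with toy stand-ins:
acts = ["reach_desk", "use_printer", "chat_colleague"]
done, failed, outcome = behavior_cycle(
    acts, time_available=300,
    compatible=lambda a: a != "use_printer",  # e.g., printer area too crowded
    duration=lambda a: 120)
print(outcome, done, failed)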
4 Conclusions

The presented framework fosters the research purpose of supporting planning decision-making and architectural design, suggesting a multi-agent-based spatial system and the elaboration of what-if scenarios with agent-based simulations. The agent-based approach applied in this study, focusing on human behaviors in built spaces, revealed a variety of agent rationales involved in the complexity of activities in built settings. Because of the challenging complexity of the modeling task, this research represents a preliminary effort toward building up a system
architecture to feed a software simulation. Future development foresees a case-based formalization of concepts, aims, actions, and relations, and their implementation in a simulation on the Unity 3D platform, in order to validate the system architecture. An agent-based simulation of human spatial behavior based on the proposed conceptual framework will be suitable for estimating human-related building performance. On a larger scale, human spatial behaviors could be integrated into the design process in order to understand and assess the impact of the physical and social settings of a built environment on its inhabitants. This knowledge could have positive consequences for a more concrete way of evaluating the effect of infrastructural development on the future of today's cities. This will provide better support for administrative policy-making, allowing decision-makers to envision more aware and effective spatial planning and design activities for the sustainable development of future cities.
References

1. Arentze, T., Timmermans, H.: Multi-agent models of spatial cognition, learning and complex choice behavior in urban environments. In: Portugali, J. (ed.) Complex Artificial Environments, pp. 181–200 (2006)
2. Ferber, J.: Multi-agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley, Harlow (1998)
3. Schaumann, D., Pilosof, N.P., Date, K., Kalay, Y.E.: A study of human behavior simulation in architectural design for healthcare facilities. Ann. Ist. Super. di Sanità 52, 24–32 (2016)
4. Hoogendoorn, S.P.: Normative pedestrian flow behavior theory and applications (2001)
5. Wooldridge, M., Jennings, N.R.: Intelligent agents: theory and practice. Knowl. Eng. Rev., 115–152 (1995)
6. Kennedy, W.G.: Modelling human behaviour in agent-based models. In: Agent-Based Models of Geographical Systems, pp. 167–179 (2011)
7. Helbing, D., Balietti, S.: How to do agent-based simulations in the future: from modeling social mechanisms to emergent phenomena and interactive systems design (2011)
8. Bonabeau, E.: Agent-based modeling: methods and techniques for simulating human systems (2002)
9. Hall, E.T.: The Hidden Dimension (1966)
10. Whyte, W.H.: The Social Life of Small Urban Spaces (1982)
11. Gehl, J.: Life Between Buildings: Using Public Space. Island Press (2011)
12. Lynch, K.: The Image of the City (1962)
13. Hillier, B.: Space is the Machine (1996)
14. Helbing, D., Molnár, P., Farkas, I.J., Bolay, K.: Self-organizing pedestrian movement. Environ. Plan. B Plan. Des. 28, 361–383 (2001)
15. Yan, W., Kalay, Y.: Simulating human behaviour in built environments. In: Martens, B., Brown, A. (eds.) Computer Aided Architectural Design Futures 2005, pp. 301–310. Springer, Berlin, Heidelberg (2005)
16. Miller, J.H., Page, S.E.: Complex Adaptive Systems: An Introduction to Computational Models of Social Life (2009)
17. Hadavi, S., Kaplan, R., Hunter, M.C.R.: Environmental affordances: a practical approach for design of nearby outdoor settings in urban residential areas. Landsc. Urban Plan. 134, 19–32 (2015)
18. Marcouiller, D.: Outdoor recreation planning: a comprehensive approach to understanding use interaction. CAB Rev. Perspect. Agric. Vet. Sci. Nutr. Nat. Resour. 3 (2008)
19. Stokols, D.: On the distinction between density and crowding: some implications for future research. Psychol. Rev. 79, 275–277 (1972)
20. Ostermann, F.: Modeling, analyzing, and visualizing human space appropriation (2009)
21. Langley, P., Choi, D.: A unified cognitive architecture for physical agents. In: Twenty-First National Conference on Artificial Intelligence, pp. 1469–1474. AAAI Press, Boston (2006)
22. Fishbein, M., Ajzen, I.: Predicting and Changing Behavior: The Reasoned Action Approach. Taylor & Francis (2009)
23. Fruin, J.J.: Pedestrian Planning and Design. Metropolitan Association of Urban Designers and Environmental Planners (1971)
Real-Time Autonomous Taxi Service: An Agent-Based Simulation Negin Alisoltani, Mahdi Zargayouna, and Ludovic Leclercq
Abstract Today, policymakers face increasingly complex traffic systems. While they need to ensure smooth traffic flows in cities, they also have to provide an acceptable level of service in remote areas. Autonomous Taxis (ATs) offer the opportunity to manage car traffic at low operational cost, and they can be appropriate alternatives to driven vehicles. In this paper, we propose a multi-agent system to find the best dispatching strategy for a fleet of ATs. In the dispatching process, we aim to satisfy both the passengers' and the provider's objectives and priorities. To be able to apply the method to large-scale networks, we introduce a clustering method that clusters the requests every minute and then solves the assignment problem for the requests inside each cluster. As network congestion can have significant impacts on vehicle speed and travel time, especially considering the private cars driving in the system alongside the taxis, an agent-based simulation platform is used to simulate the operation of the AT fleet. We use the trip-based Macroscopic Fundamental Diagram (MFD) to simulate the time evolution of traffic flows on the road network and update the traffic situation in the system every second to represent the real traffic dynamics. We address the problem at a large city scale of 80 km² (Lyon, France) with more than 480,000 trips over a 4-h period containing the morning peak. The experimental results with real data show that the proposed multi-agent system is efficient at serving all the requests in a short time while satisfying both passengers' and provider's objectives.
N. Alisoltani (B) · M. Zargayouna
Univ. Gustave Eiffel, IFSTTAR, GRETTIA, Boulevard Newton, Marne la Vallée Cedex 2, 77447 Paris, France
e-mail: [email protected]
N. Alisoltani · L. Leclercq
Univ. Gustave Eiffel, Université de Lyon, ENTPE, LICIT, 25 avenue François Mitterrand, 69675 Bron Cedex, Lyon, France
1 Introduction

The development of Autonomous Vehicles (AVs) will sooner or later result in commercial Autonomous Taxi (AT) services. ATs would have fewer accidents and would reduce operating costs while increasing productivity. A complete replacement of Conventional Driven Vehicles (CDVs) with ATs would be a possible solution for city traffic [1]. Nowadays, the spread of mobile devices and the development of the Global Positioning System (GPS) make it possible for transport operators to adapt the transportation supply to travel demand in real time. These new technologies have brought considerable changes to transportation modes, including taxis. Therefore, an efficient taxi dispatching system is required to quickly find the best match, in real time, between the passenger requests received by the central app-based platform and the available ATs. Several studies have developed methods for this purpose, e.g., [2–5]. The critical issue in real-time assignment is to solve the matching process very fast, especially for large-scale networks with thousands of requests, while respecting the current traffic situation in the network. In the AT dispatching problem, passengers are sensitive to waiting time and travel time, while the objective for the AT fleet provider is to reduce the travel time and travel distance of the ATs [6]. Studies in this area have focused on objectives related to passengers [7], drivers [8, 9], providers [10–12], or the network [1]. In this problem, it is sometimes more beneficial for the provider if taxis cruise in the network after serving the first request to pick up the next passenger, while at other times it is better for them to wait at stations for new trip requests. In this research, to assign taxis to passenger requests, we propose an assignment method that finds the best trade-off between the passengers' and the provider's objectives. In large-scale problems, a huge number of requests is received at every instant. It has been noted in the literature that demand patterns and supply patterns are spatially and temporally dependent [13]. Many studies in this domain use clustering methods to account for these dependencies, for instance by dividing time into several slots or dividing space into clusters, road segments, or cells [14, 15]. Several previous works have highlighted the relevance of combining data analytics with multi-agent systems [16, 17]. To make the computations fast enough to find the best assignment in real time, we propose a clustering method that clusters the requests every minute and then solves the assignment problem for the requests inside each cluster. We define a function expressing the extra travel time that an AT must spend to reach a new passenger after dropping off the current passenger, and we cluster the trips according to this function. Another serious issue is predicting travel times accurately in order to determine taxi availability and pick-up/drop-off times. We believe this important problem has not been properly tackled in the research in this field. Indeed, authors usually assume that the travel times computed during the assignment process remain the same during the execution of the vehicle schedules [18]. But network congestion can have a significant impact on vehicle speed and travel time, especially
considering the private cars driving in the system besides the ATs [19, 20]. In this research, we present a multi-agent-based solution to the real-time AT dispatching problem. In our method, we define a plant model, distinct from the prediction model, to assess the impact of traffic conditions on the performance of the real-time dispatching system for large-scale problems. The prediction model is based on the last observed travel times, while the plant model is a trip-based Macroscopic Fundamental Diagram (MFD) model able to reproduce the time evolution of average traffic conditions over a full road network [21, 22].
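To make the prediction/plant distinction concrete, a minimal sketch of the prediction side is given below: it simply freezes the last network speed observed from the plant model and uses it for the next one-minute optimization horizon. The class name, default speed, and update cadence are illustrative assumptions.

class TravelTimePredictor:
    """Prediction model: assume the last observed mean network speed holds
    for the next optimization horizon (one minute in the paper's setting)."""

    def __init__(self, initial_speed=14.0):
        self.last_speed = initial_speed  # m/s, refreshed from the plant model

    def observe(self, current_speed):
        """Called once per optimization horizon with the plant's current speed."""
        self.last_speed = current_speed

    def travel_time(self, shortest_path_length_m):
        """Predicted direct travel time along a shortest path, in seconds."""
        return shortest_path_length_m / self.last_speed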
2 Multi-agent AT Dispatch System

We define an AT multi-agent system composed of three types of agents:
1. The passenger agent, which sends a trip request to the system and defines the pick-up location and time and the drop-off location and time. Passenger agents are avatars of human travelers.
2. The operator agent, which manages a fleet of ATs. Operator agents are avatars of existing operators.
3. The dispatcher agent, which receives the requests from the passenger agents and computes the best dispatching strategy for the ATs. This agent has access to the current AT speeds and travel times in the dynamic network. The dispatcher agent is an artificial entity, mainly responsible for the system calculations.
The multi-agent system components are shown in Fig. 1.
Fig. 1 Multi-agent system for the AT dispatching
3 Dispatching Method for Real-Time AT Service

We have defined an AT dispatching algorithm that finds the best schedule for the ATs and serves the requests with minimum passenger waiting time. A taxi leaves the closest waiting location to serve the first passenger. After dropping off the passenger at the destination, the AT faces two choices: the first option is to continue to another origin, and the second is to go back to the nearest waiting location and wait for the next passenger. The designed algorithm computes all the possible branches of routes for the received requests. If the waiting time for each passenger is acceptable, the algorithm calculates the car travel time. In the end, after computing all the possible options, it chooses the optimal one. At each iteration, the algorithm removes the assigned requests and continues with the remaining requests until it has inserted all the passengers into the taxi schedules. The flowchart in Fig. 2 shows how the algorithm works. This algorithm can find the exact optimal schedule for each AT considering the requests received every minute. The exact algorithm for solving the AT dispatching problem is costly in terms of computation time, especially in dynamic environments and for large-scale problems. Therefore, we describe a heuristic method based on clustering to speed up the computations while keeping the quality of the solutions close to that of the exact solution. One central idea in this paper is to rely on exact solving while providing a fast solving algorithm. To do so, we limit the requests considered by the optimization algorithm to those most likely to form the optimal route. This narrows the search for feasible solutions and makes the algorithm faster. To introduce this limitation, when we receive the requests, we put them in separate clusters, and then we apply an exact branch-and-cut method to only the requests of each group. The clustering takes place every minute. We first define a square dissimilarity matrix over all the requests known at the time (the rows and the columns are the requests). The value in the matrix for each pair of requests is the extra travel time that a vehicle would incur if it had to serve both requests instead of only one; all the entries of the main diagonal are therefore equal to zero (no extra travel time when serving the same request). To create the clusters, we use hierarchical clustering [23] on this dissimilarity matrix.
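The sketch below illustrates this clustering step with SciPy. The dissimilarity used here, the empty leg from one request's destination to the other's origin, is a simple stand-in for the paper's extra-travel-time definition, and the linkage method and cluster-size cap are our assumptions.

import numpy as np
from collections import namedtuple
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def extra_travel_time(r1, r2, travel_time):
    """Stand-in dissimilarity: the empty leg a vehicle drives from r1's
    destination to r2's origin when chaining the two requests."""
    return travel_time(r1.destination, r2.origin)

def cluster_requests(requests, travel_time, max_cluster_size=100):
    n = len(requests)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Symmetrize, since the chaining order matters in general.
            d = min(extra_travel_time(requests[i], requests[j], travel_time),
                    extra_travel_time(requests[j], requests[i], travel_time))
            D[i, j] = D[j, i] = d
    # Hierarchical clustering on the condensed dissimilarity matrix.
    Z = linkage(squareform(D), method="average")
    # Cut the dendrogram into about n / max_cluster_size groups
    # (an indicative way to keep clusters near the chosen size cap).
    k = max(1, n // max_cluster_size)
    return fcluster(Z, t=k, criterion="maxclust")

# Toy usage: four requests on a line, travel time proportional to distance.
Request = namedtuple("Request", ["origin", "destination"])
reqs = [Request(0, 5), Request(6, 9), Request(50, 55), Request(56, 60)]
labels = cluster_requests(reqs, travel_time=lambda a, b: abs(b - a), max_cluster_size=2)
print(labels)  # nearby requests should share a cluster label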
4 AT Simulation A simulation platform is used as a model of reality to simulate the real-time AT service and the movement of the personal cars that are in the network at the same time. This simulator is able to reproduce the time evolution of traffic flows on the road network. In this research, we use the trip-based MFD [24] to accommodate individual trips while keeping a very simple description of traffic dynamics. The general principle of this approach is to derive the inflow and outflow curves, noting
that the travel distance L of a driver entering at time t − T(t), when n(t) is the number of en-route vehicles at time t and the average speed of travelers is V(n(t)) at every time t, should satisfy the following equation:

$$L = \int_{t-T(t)}^{t} V(n(s))\, ds \quad (1)$$
In our research, the state of all the vehicles is available to the system at every time t. The cars can be in two states: they are either waiting in depots for new passengers or they are serving the assigned passengers. In addition to the ATs that are moving to serve the passengers, there are other cars in the network. So the accumulation (the number of cars in the system) at each time t is the sum of the number of autonomous taxis and the number of what we call personal cars in the system. Therefore, at each time t, the average speed of travelers can be computed. At each simulation time step, the simulator computes the current speed of the cars considering the current traffic situation (the number of en-route vehicles). The simulation time step that we use in our plant model is 1 s, so the state of en-route cars is updated every second. To make travel time predictions during optimization, we predict the traffic situation for the next optimization time horizon (every minute) and we assign the passengers to the cars based on this prediction. So at each optimization time horizon, the direct travel time from each point i to j is computed, for the next minute, based on the current average speed and the shortest path between the two points. Then the optimization algorithm assigns all the requests for the next minute to the en-route ATs or to empty waiting ATs.
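The trip-based MFD update of Eq. (1) can be discretized per time step as in the sketch below; the speed-MFD function `V` and the trip bookkeeping are illustrative assumptions, not the authors' simulator.

```python
def step_trip_based_mfd(remaining, V, dt=1.0):
    """One step of a trip-based MFD plant model.

    remaining -- dict: vehicle id -> remaining trip distance (m)
    V         -- speed-MFD function: accumulation n -> mean speed (m/s)
    dt        -- time step (s); the paper uses 1 s
    Returns the ids of vehicles that finished their trip in this step.
    """
    n = len(remaining)          # accumulation: number of en-route vehicles
    v = V(n)                    # common average speed V(n(t))
    finished = []
    for vid in list(remaining):
        remaining[vid] -= v * dt   # advance every trip by V(n) * dt
        if remaining[vid] <= 0.0:  # trip distance L fully traveled
            finished.append(vid)
            del remaining[vid]
    return finished

# Example speed-MFD (a Greenshields-like assumption, not the paper's):
V = lambda n: max(1.0, 14.0 * (1 - n / 50000))
```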
5 Results In the proposed research, we apply our method to a realistic origin–destination (O-D) matrix for the city of Lyon in France. The network is loaded with travelers of all O-D pairs with given departure times to represent 4 h of the network with more than 480,000 requests, based on the study of [25]. This network has 11,310 nodes and 27,000 links over an area of 80 km². The area is shown in Fig. 3. We execute the experiments for the morning peak from 6 to 10 AM (4 h). An important point when using the heuristic method is deciding on the size of the clusters. Bigger cluster sizes can give results closer to the optimal solution, but they are expensive in terms of computation time. So it is necessary to determine the best trade-off between the quality of the objective function and the computation time in order to choose the appropriate cluster size. We execute the algorithm with different cluster sizes to find this trade-off. Figure 4 shows the computation time and the total travel time of the ATs for different cluster sizes. It is clear that a cluster size of 100 achieves this trade-off between the computation time and the quality of the solution. So we execute the simulation with this cluster size.
Fig. 3 Lyon city in France
Fig. 4 The computation time and the total travel time of ATs for different cluster sizes
Table 1 shows the ATs' travel time and distance and the passengers' waiting time for the proposed method and the basic scenario. In the basic scenario, 10% of the trip requests are served by taxis without putting the trips in sequence. The AT service shows the situation in which we serve 10% of the requests with the newly proposed AT system. The other trips in the system are personal trips that can impact the speed and travel time in the network. In the basic scenario, the average waiting time for each passenger is 23 s, meaning that the passenger pick-up time is 23 s after the requested pick-up time. The proposed AT dispatching method increases the average waiting time for each passenger by just 15 s, while it decreases the total travel time by 127 h and the total travel distance by 3,640 km.
Table 1 Simulation results

Simulation       | Total travel time (h) | Total travel distance (km) | Average passenger waiting time (s)
Basic scenario   | 76317                 | 2303544                    | 13
AT service       | 76190                 | 2299904                    | 28
6 Conclusion Autonomous Taxis (ATs) offer the opportunity to manage car traffic at low operational cost, and they can be an appropriate alternative to human-driven vehicles. We proposed a multi-agent system to find the best dispatching strategy for a fleet of ATs. To be able to apply the method to large-scale networks, we introduced a clustering method to cluster the requests every minute and then solve the assignment problem for the requests inside each cluster. As network congestion can have significant impacts on vehicle speed and travel time, especially considering the private cars that are driving in the system besides the taxis, an agent-based simulation platform is used to simulate the operation of the AT fleet. The passenger agent sends the trip request to the dispatcher agent, and the dispatcher then finds the best AT to assign to the request considering the network traffic dynamics. We cluster the requests into clusters of 100 requests and run an algorithm to find the best schedule for the ATs inside each cluster. The results show that this method can decrease the traffic congestion in the network while satisfying both the passengers' and the fleet provider's objectives. Acknowledgements This study has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 646592–MAGnUM project).
References
1. Bischoff, J., Maciejewski, M.: Simulation of city-wide replacement of private cars with autonomous taxis in Berlin. Proced. Comput. Sci. 83, 237–244 (2016)
2. Billhardt, H., et al.: Taxi dispatching strategies with compensations. Expert Syst. Appl. 122, 173–182 (2019)
3. Felt, M., Gharachorloo, N., Moshrefi, A.: Mobile taxi dispatch system. U.S. Patent Application No. 12/607,782
4. Gao, G., Xiao, M., Zhao, Z.: Optimal multi-taxi dispatch for mobile taxi-hailing systems. In: 2016 45th International Conference on Parallel Processing (ICPP). IEEE (2016)
5. Zargayouna, M., Zeddini, B.: Fleet organization models for online vehicle routing problems. In: Transactions on Computational Collective Intelligence VII, pp. 82–102. Springer, Berlin, Heidelberg (2012)
6. Seow, K.T., Lee, D.-H.: Performance of multiagent taxi dispatch on extended-runtime taxi availability: a simulation study. IEEE Trans. Intell. Transp. Syst. 11(1), 231–236 (2009)
7. Shen, W., Lopes, C.: Managing autonomous mobility on demand systems for better passenger experience. In: International Conference on Principles and Practice of Multi-Agent Systems. Springer, Cham (2015)
8. Dai, G., et al.: A balanced assignment mechanism for online taxi recommendation. In: 2017 18th IEEE International Conference on Mobile Data Management (MDM). IEEE (2017)
9. Liu, Y., et al.: Recommending a personalized sequence of pick-up points. J. Comput. Sci. 28, 382–388 (2018)
10. Hyland, M., Mahmassani, H.S.: Dynamic autonomous vehicle fleet operations: optimization-based strategies to assign AVs to immediate traveler demand requests. Transp. Res. Part C Emerg. Technol. 92, 278–297 (2018)
11. Maciejewski, M., Nagel, K.: Simulation and dynamic optimization of taxi services in MATSim. VSP Working Paper 13-0. TU Berlin, Transport Systems Planning and Transport Telematics (2013)
12. Powell, J.W., et al.: Towards reducing taxicab cruising time using spatio-temporal profitability maps. In: International Symposium on Spatial and Temporal Databases. Springer, Berlin, Heidelberg (2011)
13. Wang, S., et al.: Trajectory analysis for on-demand services: a survey focusing on spatial-temporal demand and supply patterns. Transp. Res. Part C Emerg. Technol. 108, 74–99 (2019)
14. Davis, N., Raina, G., Jagannathan, K.: Taxi demand forecasting: a HEDGE-based tessellation strategy for improved accuracy. IEEE Trans. Intell. Transp. Syst. 19(11), 3686–3697 (2018)
15. Qi, H., Liu, P.: Mining taxi pick-up hotspots based on spatial clustering. In: 2018 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People and Smart City Innovation. IEEE (2018)
16. Revilloud, M., Gruyer, D., Rahal, M.: A new multi-agent approach for lane detection and tracking. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 3147–3153. Stockholm (2016)
17. Revilloud, M., Gruyer, D., Rahal, M.: A lane marker estimation method for improving lane detection. In: 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pp. 289–295. Rio de Janeiro (2016)
18. Goel, P., Kulik, L., Ramamohanarao, K.: Optimal pick up point selection for effective ride sharing. IEEE Trans. Big Data 3(2), 154–168 (2016)
19. Alisoltani, N., et al.: Optimal fleet management for real-time ride-sharing service considering network congestion. No. 19-04863 (2019)
20. Alisoltani, N., Zargayouna, M., Leclercq, L.: A multi-agent system for real-time ride sharing in congested networks. In: Agents and Multi-agent Systems: Technologies and Applications 2019, pp. 333–342. Springer, Singapore (2020)
21. Lamotte, R., Geroliminis, N.: The morning commute in urban areas: insights from theory and simulation. No. 16-2003 (2016)
22. Mariotte, G., Leclercq, L., Laval, J.A.: Macroscopic urban dynamics: analytical and numerical comparisons of existing models. Transp. Res. Part B Methodol. 101, 245–267 (2017)
23. Ding, C., He, X.: Cluster merging and splitting in hierarchical clustering algorithms. In: 2002 IEEE International Conference on Data Mining, Proceedings. IEEE (2002)
24. Daganzo, C.F.: Urban gridlock: macroscopic modeling and mitigation approaches. Transp. Res. Part B Methodol. 41(1), 49–62 (2007)
25. Krug, J., Burianne, A., Leclercq, L.: Reconstituting demand patterns of the city of Lyon by using multiple GIS data sources. University of Lyon, ENTPE, LICIT (2017)
Modelling Timings of the Company’s Response to Specific Customer Requirements Petr Suchánek
and Robert Bucki
Abstract The paper highlights the problem of delays in the information flow in a logistics manufacturing system. Delays result from the need to send a customer's inquiry to the customer service department in order to obtain precise information on whether or not an order can be made by the logistics system. Time delays are caused either by elaborating the inquiry in separate units within the logistics chain or by the process of passing information between units as well as subunits in the system. After obtaining information from the units in question, the answer is sent back to the customer. The goal of the paper is to present one possible approach to modelling the delays in the information flow between the individual communication units of an example logistics chain when processing a response to a customer's query. The article presents a mathematical model of the problem using a heuristic approach as well as a proposal for a method of calculating the cost of servicing customers' inquiries.
1 Introduction Supply chain optimization is one of the key approaches to reducing the cost side of the budget of all types of enterprises, especially manufacturing ones [1, 2], on the basis of good management of logistics processes in all related areas [3, 4]. The article focuses on one of the partial issues of manufacturing process management, in which the usual key areas are stocks and accessibility of material resources [5], availability of technologies, competent human resources [6] and sufficient financial capital
P. Suchánek (B) School of Business Administration in Karvina, Silesian University in Opava, Karvina, Czech Republic e-mail: [email protected]
R. Bucki Institute of Management and Information Technology, Bielsko-Biala, Poland e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_19
reserves supporting production with sufficient financial performance for as long as necessary [7]. Time is an important parameter for evaluating the performance of a production system and thus of the whole enterprise. It is possible to measure the time of production itself and to take into account various causes of time increase (e.g. lack of material and resources, production equipment failures, absence of key employees), the time of product processing by dedicated devices, the time from the end of production to the delivery of finished products to customers, etc. The measurement of time is very important for establishing the production plan and order management [8], because it is always necessary to take into account the fact that production, or rather its time, quality and price, involves important parameters for the customer [9]. Time is therefore included as a key parameter in most simulation models [10], whose outputs serve as a basis for planning and are used as a standard part of Business Intelligence tools [11] operated under Industry 4.0 [12, 13]. The goal of the paper is to present the problem of modelling the delays in the flow of customers' inquiries as well as to propose a way of calculating the cost of servicing them in logistics units. A heuristic approach is used for model creation, and it is possible to implement it for similar cases or models, as it is sufficient and delivers realistic results [14, 15]. First of all, the sample logistics chain along with a graphical illustration of the flow of inquiries is shown, followed by the adequate specification assumptions. The matrices of delay times of passing the inquiry between the logistics units and their subunits as well as the times of servicing incoming and outgoing inquiries in logistics units and subunits are introduced. Each delay time is correlated with its unit cost. The amount coefficient enables us to calculate the total cost of an individual inquiry and the total cost of servicing all customers' inquiries.
2 Mathematical Model It is assumed that customers inquire about their orders before setting them. At a given moment, the contents of the inquiry matrix are analysed by the Customer Service Department (CSD), which means the customers' inquiries are subject to a thorough analysis by the appropriate units in the logistics chain responsible for making products, which results in time delays. There is also a need to send inquiries between logistics units. When the answer to the inquiry is received by the CSD and finally elaborated, the customer is informed whether or not their order can be fulfilled and what quantities are possible. The customer can then either accept or decline this. After a specified time, the order matrix is blocked and no more inquiries can be placed in it. The order inquiry matrix is transformed into the order matrix. The order inquiry matrix takes the following form (1):

$$Z^0 = \left[ z^0_{m,n} \right], \quad m = 1, \ldots, M, \; n = 1, \ldots, N \quad (1)$$

where $z^0_{m,n}$—the inquiry about the n-th order of the m-th customer expressed in contract unit numbers at the initial state. The vector of suppliers takes the following form (2):

$$D = [d_l], \quad l = 1, \ldots, L \quad (2)$$

where $d_l$—the l-th supplier for the manufacturing system. The adjustment matrix of customers' orders to suppliers takes the following form (3):

$$D^L_{M,N} = \left[ d_{l/(m,n)} \right] \quad (3)$$

where $d_{l/(m,n)}$—adjustment of the l-th supplier to the n-th order of the m-th customer. At the same time, $d_{l/(m,n)} = 1$ if the n-th order of the m-th customer can be made from the charge material delivered by the l-th supplier; otherwise, $d_{l/(m,n)} = 0$. The order inquiry matrix elements are subject to analysis to determine whether they can be made by the company or not. After the analysis, orders which can be made in the company remain in the order matrix; otherwise, they are removed from it or modified. The order inquiry matrix is transformed into the order matrix as follows (4):

$$Z^0 = \left[ z^0_{m,n} \right] \rightarrow Z^1 = \left[ z^1_{m,n} \right] \quad (4)$$

At the same time, $z^1_{m,n} = z^0_{m,n}$ if the n-th product for the m-th customer can be made; otherwise, $z^1_{m,n} \neq z^0_{m,n}$, i.e. the m-th customer's inquiry is rejected or modified. The matrix of amount coefficients is introduced (5):

$$\Gamma = \left[ \gamma^{m,n} \right], \quad m = 1, \ldots, M, \; n = 1, \ldots, N \quad (5)$$

where $\gamma^{m,n} = \xi + \frac{z^0_{m,n}}{\zeta}$, $\xi$—the base amount coefficient; $\zeta$—the minimising denominator. There is a need to minimise the amount coefficient of an inquiry in case it exceeds the set value, as follows: if $\frac{z^0_{m,n}}{\zeta} > 0.1 \cdot \xi$, then $\gamma^{m,n} = 1.1 \cdot \zeta$. To obtain the knowledge of whether or not an order can be made in the company, it is necessary to send an inquiry to all units responsible for all key manufacturing as well as storing and supply operations. There are the following approaches to solving the problem of servicing customers' inquiries: 1. Passing inquiries to the preceding units in sequence (Fig. 1). 2. Passing inquiries to the chosen units (Fig. 2). In Figs. 1 and 2, CMS—the charge material storage; CSD—the customer service department; REG—the regeneration unit; MAN—the manufacturing unit; PRS—the production support unit; RPS—the ready product storage; SUP—the supply centre. The time needed for sending inquiries from one unit to another is explained in detail in Table 1. It is assumed that the key logistics operating units are responsible for elaborating the proper questions to the preceding unit as well as the answers to these questions for the subsequent unit in the logistics chain. The formulas shown in Table 2 represent the times needed for elaborating these tasks after receiving the inquiry from the preceding unit and before sending its own inquiry to the subsequent unit. The same concerns the reverse way.
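A small numeric sketch of the amount coefficient matrix Γ of Eq. (5), including the capping rule, is given below; the sample values of ξ, ζ and Z⁰ are assumptions chosen purely for illustration.

```python
import numpy as np

def amount_coefficients(Z0, xi, zeta):
    """Gamma matrix of Eq. (5): gamma = xi + z0/zeta, capped to
    gamma = 1.1*zeta whenever z0/zeta exceeds 0.1*xi."""
    gamma = xi + Z0 / zeta
    gamma[Z0 / zeta > 0.1 * xi] = 1.1 * zeta
    return gamma

# Illustrative data: 2 customers (rows), 3 orders (columns).
Z0 = np.array([[10.0, 25.0, 0.0],
               [5.0, 40.0, 15.0]])
print(amount_coefficients(Z0, xi=1.0, zeta=50.0))
```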
Fig. 1 Passing inquiries to the preceding units in sequence
Fig. 2 Passing inquiries to the chosen units
Table 1 The costs of passing the inquiry between logistics units

Time of sending the inquiry | From | To | Unit cost of sending the inquiry | Cost of sending the inquiry
$\tau^{m,n}_{CSD \leftarrow z^0_{m,n}}$ | $z^0_{m,n}$ | CSD | $c^{unit\_m,n}_{CSD \leftarrow z^0_{m,n}}$ | $c^{m,n}_{CSD \leftarrow z^0_{m,n}} = \gamma^{m,n} \cdot \tau^{m,n}_{CSD \leftarrow z^0_{m,n}} \cdot c^{unit\_m,n}_{CSD \leftarrow z^0_{m,n}}$
$\tau^{m,n}_{CSD \rightarrow z^0_{m,n}}$ | CSD | $z^0_{m,n}$ | $c^{unit\_m,n}_{CSD \rightarrow z^0_{m,n}}$ | $c^{m,n}_{CSD \rightarrow z^0_{m,n}} = \gamma^{m,n} \cdot \tau^{m,n}_{CSD \rightarrow z^0_{m,n}} \cdot c^{unit\_m,n}_{CSD \rightarrow z^0_{m,n}}$
… | … | … | … | …
$\tau^{m,n}_{d^0_{l/(m,n)} \leftarrow SUP}$ | SUP | $d^0_{l/(m,n)}$ | $c^{unit\_m,n}_{d^0_{l/(m,n)} \leftarrow SUP}$ | $c^{m,n}_{d^0_{l/(m,n)} \leftarrow SUP} = \gamma^{m,n} \cdot \tau^{m,n}_{d^0_{l/(m,n)} \leftarrow SUP} \cdot c^{unit\_m,n}_{d^0_{l/(m,n)} \leftarrow SUP}$
$\tau^{m,n}_{d^0_{l/(m,n)} \rightarrow SUP}$ | $d^0_{l/(m,n)}$ | SUP | $c^{unit\_m,n}_{d^0_{l/(m,n)} \rightarrow SUP}$ | $c^{m,n}_{d^0_{l/(m,n)} \rightarrow SUP} = \gamma^{m,n} \cdot \tau^{m,n}_{d^0_{l/(m,n)} \rightarrow SUP} \cdot c^{unit\_m,n}_{d^0_{l/(m,n)} \rightarrow SUP}$
Table 2 Total costs of delays

Time of inquiry analysis | Preceding unit | Subsequent unit | Unit cost of delay analysis | Total cost of delays
$\tau^{m,n}_{CSD \leftarrow \bullet}$ | $z^0_{m,n}$ | RPS | $c^{unit\_m,n}_{CSD \leftarrow \bullet}$ | $c^{m,n}_{CSD \leftarrow \bullet} = \gamma^{m,n} \cdot \tau^{m,n}_{CSD \leftarrow \bullet} \cdot c^{unit\_m,n}_{CSD \leftarrow \bullet}$
$\tau^{m,n}_{CSD \rightarrow \bullet}$ | RPS | $z^0_{m,n}$ | $c^{unit\_m,n}_{CSD \rightarrow \bullet}$ | $c^{m,n}_{CSD \rightarrow \bullet} = \gamma^{m,n} \cdot \tau^{m,n}_{CSD \rightarrow \bullet} \cdot c^{unit\_m,n}_{CSD \rightarrow \bullet}$
… | … | … | … | …
$\tau^{m,n}_{d^0_{l/(m,n)} \leftarrow \bullet}$ | SUP | $d^0_{l/(m,n)}$ | $c^{unit\_m,n}_{d^0_{l/(m,n)} \leftarrow \bullet}$ | $c^{m,n}_{d^0_{l/(m,n)} \leftarrow \bullet} = \gamma^{m,n} \cdot \tau^{m,n}_{d^0_{l/(m,n)} \leftarrow \bullet} \cdot c^{unit\_m,n}_{d^0_{l/(m,n)} \leftarrow \bullet}$
$\tau^{m,n}_{d^0_{l/(m,n)} \rightarrow \bullet}$ | $d^0_{l/(m,n)}$ | SUP | $c^{unit\_m,n}_{d^0_{l/(m,n)} \rightarrow \bullet}$ | $c^{m,n}_{d^0_{l/(m,n)} \rightarrow \bullet} = \gamma^{m,n} \cdot \tau^{m,n}_{d^0_{l/(m,n)} \rightarrow \bullet} \cdot c^{unit\_m,n}_{d^0_{l/(m,n)} \rightarrow \bullet}$
The time of returning the m-th customer's inquiry for the n-th product is calculated as follows (6):

$$\tau^{m,n} = \tau^{m,n}_{\leftarrow\bullet} + \tau^{m,n}_{\rightarrow\bullet} \quad (6)$$

where

$$\tau^{m,n}_{\leftarrow\bullet} = \tau^{m,n}_{CSD \leftarrow z^0_{m,n}} + \tau^{m,n}_{RPS \leftarrow CSD} + \tau^{m,n}_{MAN \leftarrow RPS} + \tau^{m,n}_{REG \leftarrow MAN} + \tau^{m,n}_{PRS \leftarrow MAN} + \tau^{m,n}_{CMS \leftarrow MAN} + \tau^{m,n}_{SUP \leftarrow CMS} + \tau^{m,n}_{d^0_{l/(m,n)} \leftarrow SUP} + \tau^{m,n}_{CSD \leftarrow \bullet} + \tau^{m,n}_{RPS \leftarrow \bullet} + \tau^{m,n}_{MAN \leftarrow \bullet} + \tau^{m,n}_{REG \leftarrow \bullet} + \tau^{m,n}_{PRS \leftarrow \bullet} + \tau^{m,n}_{CMS \leftarrow \bullet} + \tau^{m,n}_{SUP \leftarrow \bullet} + \tau^{m,n}_{d^0_{l/(m,n)} \leftarrow \bullet}$$

$$\tau^{m,n}_{\rightarrow\bullet} = \tau^{m,n}_{CSD \rightarrow z^0_{m,n}} + \tau^{m,n}_{RPS \rightarrow CSD} + \tau^{m,n}_{MAN \rightarrow RPS} + \tau^{m,n}_{REG \rightarrow MAN} + \tau^{m,n}_{PRS \rightarrow MAN} + \tau^{m,n}_{CMS \rightarrow MAN} + \tau^{m,n}_{SUP \rightarrow CMS} + \tau^{m,n}_{d^0_{l/(m,n)} \rightarrow SUP} + \tau^{m,n}_{CSD \rightarrow \bullet} + \tau^{m,n}_{RPS \rightarrow \bullet} + \tau^{m,n}_{MAN \rightarrow \bullet} + \tau^{m,n}_{REG \rightarrow \bullet} + \tau^{m,n}_{PRS \rightarrow \bullet} + \tau^{m,n}_{CMS \rightarrow \bullet} + \tau^{m,n}_{SUP \rightarrow \bullet} + \tau^{m,n}_{d^0_{l/(m,n)} \rightarrow \bullet}$$
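The chain sum of Eq. (6) amounts to adding up the forward and reverse delay components along the unit chain; a minimal sketch with purely illustrative delay values follows.

```python
# Delay components (minutes) of one inquiry along the logistics chain;
# the numeric values below are placeholder assumptions, not measured data.
forward = {("CSD", "z"): 2.0, ("RPS", "CSD"): 1.5, ("MAN", "RPS"): 3.0,
           ("SUP", "CMS"): 2.5}   # components of tau_{<-}: toward suppliers
reverse = {("z", "CSD"): 2.0, ("CSD", "RPS"): 1.0, ("RPS", "MAN"): 2.0,
           ("CMS", "SUP"): 1.5}   # components of tau_{->}: back to customer

# Eq. (6): total return time = sum of forward + sum of reverse delays.
tau = sum(forward.values()) + sum(reverse.values())
print(f"time of returning the inquiry: {tau:.1f} min")
```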
Consequently, the matrix of costs of responses to customers' inquiries is introduced (7):

$$C = \left[ c^{m,n} \right], \quad m = 1, \ldots, M, \; n = 1, \ldots, N \quad (7)$$

where $c^{m,n}$—the cost of returning the m-th customer's inquiry for the n-th product. There is a need to generalise the process. Let us introduce the matrix of delay times of passing the information inquiry between the preceding and subsequent units (8):

$$T_{\alpha\to(\alpha+1)} = \left[ \tau^{m,n}_{\alpha\to(\alpha+1)} \right], \quad \alpha = 1, \ldots, A, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (8)$$

where $\tau^{m,n}_{\alpha\to(\alpha+1)}$—the time of the information flow between the α-th preceding logistics unit and the subsequent logistics unit α + 1. Let us introduce the matrix of delay unit costs of passing the information inquiry between the preceding and subsequent units (9):

$$C_{\alpha\to(\alpha+1)} = \left[ c^{m,n}_{\alpha\to(\alpha+1)} \right], \quad \alpha = 1, \ldots, A, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (9)$$

where $c^{m,n}_{\alpha\to(\alpha+1)}$—the delay unit cost of the information flow between the α-th preceding logistics unit and the subsequent logistics unit α + 1. Let us introduce the matrix of delay times of passing the information inquiry between the preceding and subsequent logistics units in the reverse mode (10):

$$T_{(\alpha+1)\to\alpha} = \left[ \tau^{m,n}_{(\alpha+1)\to\alpha} \right], \quad \alpha = A, \ldots, 1, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (10)$$

where $\tau^{m,n}_{(\alpha+1)\to\alpha}$—the time of the information flow between logistics units in the reverse mode. Let us introduce the matrix of delay unit costs of passing the information inquiry between the preceding and subsequent logistics units in the reverse mode (11):

$$C_{(\alpha+1)\to\alpha} = \left[ c^{m,n}_{(\alpha+1)\to\alpha} \right], \quad \alpha = A, \ldots, 1, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (11)$$
where $c^{m,n}_{(\alpha+1)\to\alpha}$—the delay unit cost of the information flow between logistics units in the reverse mode. Let us introduce the matrix of servicing times of incoming customers' inquiries in logistics units (12):

$$T^{in}_{\alpha} = \left[ \tau^{in\_m,n}_{\alpha} \right], \quad \alpha = A, \ldots, 1, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (12)$$

where $\tau^{in\_m,n}_{\alpha}$—the time of servicing the incoming inquiry for the m-th customer ordering the n-th product in the α-th logistics unit. Let us introduce the matrix of delay unit costs of servicing incoming customers' inquiries in logistics units (13):

$$C^{in}_{\alpha} = \left[ c^{in\_m,n}_{\alpha} \right], \quad \alpha = A, \ldots, 1, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (13)$$

where $c^{in\_m,n}_{\alpha}$—the delay unit cost of servicing the incoming inquiry for the m-th customer ordering the n-th product in the α-th logistics unit. Let us introduce the matrix of servicing times of outgoing customers' inquiries in logistics units (14):

$$T^{out}_{\alpha} = \left[ \tau^{out\_m,n}_{\alpha} \right], \quad \alpha = A, \ldots, 1, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (14)$$

where $\tau^{out\_m,n}_{\alpha}$—the time of servicing the outgoing inquiry for the m-th customer ordering the n-th product in the α-th logistics unit. Let us introduce the matrix of delay unit costs of servicing outgoing customers' inquiries in logistics units (15):

$$C^{out}_{\alpha} = \left[ c^{out\_m,n}_{\alpha} \right], \quad \alpha = A, \ldots, 1, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (15)$$

where $c^{out\_m,n}_{\alpha}$—the delay unit cost of servicing the outgoing inquiry for the m-th customer ordering the n-th product in the α-th logistics unit. Let us introduce the matrix of delay times of passing the information inquiry between the main logistics units of the logistics chain and their subunits (16):

$$T_{\alpha\to\beta} = \left[ \tau^{m,n}_{\alpha\to\beta} \right], \quad \alpha = 1, \ldots, A, \; \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (16)$$

where $\tau^{m,n}_{\alpha\to\beta}$—the time of the information flow between the α-th main logistics unit and its β-th logistics subunit. Let us introduce the matrix of delay unit costs of passing the information inquiry between the main logistics units of the logistics chain and their subunits (17):

$$C_{\alpha\to\beta} = \left[ c^{m,n}_{\alpha\to\beta} \right], \quad \alpha = 1, \ldots, A, \; \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (17)$$

where $c^{m,n}_{\alpha\to\beta}$—the delay unit cost of the information flow between the α-th main logistics unit and its β-th logistics subunit. Let us introduce the matrix of delay times of passing the information inquiry between subunits and their main units (18):

$$T_{\beta\to\alpha} = \left[ \tau^{m,n}_{\beta\to\alpha} \right], \quad \alpha = 1, \ldots, A, \; \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (18)$$

where $\tau^{m,n}_{\beta\to\alpha}$—the time of the information flow between the β-th logistics subunit and the α-th logistics unit. Let us introduce the matrix of delay unit costs of passing the information inquiry between subunits and the main units of the logistics chain (19):

$$C_{\beta\to\alpha} = \left[ c^{m,n}_{\beta\to\alpha} \right], \quad \alpha = 1, \ldots, A, \; \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (19)$$

where $c^{m,n}_{\beta\to\alpha}$—the delay unit cost of the information flow between the β-th logistics subunit and the main α-th logistics unit. Let us introduce the matrix of servicing times of incoming customers' inquiries in logistics subunits (20):

$$T^{in}_{\beta} = \left[ \tau^{in\_m,n}_{\beta} \right], \quad \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (20)$$

where $\tau^{in\_m,n}_{\beta}$—the time of servicing the incoming inquiry for the m-th customer ordering the n-th product in the β-th logistics subunit. Let us introduce the matrix of delay unit costs of servicing incoming customers' inquiries in logistics subunits (21):

$$C^{in}_{\beta} = \left[ c^{in\_m,n}_{\beta} \right], \quad \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (21)$$

where $c^{in\_m,n}_{\beta}$—the delay unit cost of servicing the incoming inquiry for the m-th customer ordering the n-th product in the β-th logistics subunit. Let us introduce the matrix of servicing times of outgoing customers' inquiries in logistics subunits (22):

$$T^{out}_{\beta} = \left[ \tau^{out\_m,n}_{\beta} \right], \quad \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (22)$$

where $\tau^{out\_m,n}_{\beta}$—the time of servicing the outgoing inquiry for the m-th customer ordering the n-th product in the β-th logistics subunit. Let us introduce the matrix of delay unit costs of servicing outgoing customers' inquiries in logistics subunits (23):

$$C^{out}_{\beta} = \left[ c^{out\_m,n}_{\beta} \right], \quad \beta = 1, \ldots, B, \; m = 1, \ldots, M, \; n = 1, \ldots, N \quad (23)$$
where $c^{out\_m,n}_{\beta}$—the delay unit cost of servicing the outgoing inquiry for the m-th customer ordering the n-th product in the β-th logistics subunit. The logistics cost of servicing the m-th customer's inquiry ordering the n-th product is calculated as follows (24):

$$C_{m,n} = \left( \xi + \frac{z^0_{m,n}}{\varsigma} \right) \cdot \left[ \sum_{\alpha=1}^{A} \left( \tau^{m,n}_{\alpha\to\alpha+1} \cdot c^{m,n}_{\alpha\to\alpha+1} + \tau^{m,n}_{\alpha+1\to\alpha} \cdot c^{m,n}_{\alpha+1\to\alpha} \right) + \sum_{\alpha=1}^{A} \tau^{in\_m,n}_{\alpha} \cdot c^{in\_m,n}_{\alpha} + \sum_{\alpha=1}^{A} \tau^{out\_m,n}_{\alpha} \cdot c^{out\_m,n}_{\alpha} + \sum_{\beta=1}^{B} \sum_{\alpha=1}^{A} \tau^{m,n}_{\alpha\to\beta} \cdot c^{m,n}_{\alpha\to\beta} + \sum_{\beta=1}^{B} \sum_{\alpha=1}^{A} \tau^{m,n}_{\beta\to\alpha} \cdot c^{m,n}_{\beta\to\alpha} + \sum_{\beta=1}^{B} \sum_{\alpha=1}^{A} \tau^{in\_m,n}_{\beta} \cdot c^{in\_m,n}_{\beta} + \sum_{\beta=1}^{B} \sum_{\alpha=1}^{A} \tau^{out\_m,n}_{\beta} \cdot c^{out\_m,n}_{\beta} \right] \quad (24)$$
Finally, the total logistics cost of servicing delays resulting from customers' inquiries is as follows (25):

$$C = \left( \xi + \sum_{n=1}^{N} \sum_{m=1}^{M} \frac{z^0_{m,n}}{\varsigma} \right) \cdot \left[ \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\alpha=1}^{A} \tau^{m,n}_{\alpha\to\alpha+1} \cdot c^{m,n}_{\alpha\to\alpha+1} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\alpha=1}^{A} \tau^{m,n}_{\alpha+1\to\alpha} \cdot c^{m,n}_{\alpha+1\to\alpha} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\alpha=1}^{A} \tau^{in\_m,n}_{\alpha} \cdot c^{in\_m,n}_{\alpha} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\alpha=1}^{A} \tau^{out\_m,n}_{\alpha} \cdot c^{out\_m,n}_{\alpha} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\beta=1}^{B} \sum_{\alpha=1}^{A} \tau^{m,n}_{\alpha\to\beta} \cdot c^{m,n}_{\alpha\to\beta} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\beta=1}^{B} \sum_{\alpha=1}^{A} \tau^{m,n}_{\beta\to\alpha} \cdot c^{m,n}_{\beta\to\alpha} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\beta=1}^{B} \tau^{in\_m,n}_{\beta} \cdot c^{in\_m,n}_{\beta} + \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\beta=1}^{B} \tau^{out\_m,n}_{\beta} \cdot c^{out\_m,n}_{\beta} \right] \quad (25)$$

Let us assume there are the following criteria to be implemented:
• The criterion of minimising the time of responses to customers' inquiries: $Q_T \to \min$
• The criterion of minimising the cost of responses to customers' inquiries: $Q_C \to \min$
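Under the assumption that all delay-time and unit-cost matrices are available as NumPy arrays, the aggregation of Eq. (25) reduces to elementwise products and sums, as in this sketch; the array shapes and sample dimensions are assumptions for illustration only.

```python
import numpy as np

# Assumed dimensions: M customers, N orders, A units, B subunits.
M, N, A, B = 3, 4, 5, 2
rng = np.random.default_rng(0)
shape_u, shape_s = (A, M, N), (B, M, N)

T_fwd, C_fwd = rng.random(shape_u), rng.random(shape_u)  # unit-to-unit, forward
T_rev, C_rev = rng.random(shape_u), rng.random(shape_u)  # unit-to-unit, reverse
T_in, C_in = rng.random(shape_u), rng.random(shape_u)    # incoming servicing
T_out, C_out = rng.random(shape_u), rng.random(shape_u)  # outgoing servicing
T_sub, C_sub = rng.random(shape_s), rng.random(shape_s)  # subunit servicing
Z0 = rng.integers(1, 20, (M, N)).astype(float)           # inquiry matrix

xi, varsigma = 1.0, 100.0  # base amount coefficient, minimising denominator
inner = ((T_fwd * C_fwd).sum() + (T_rev * C_rev).sum()
         + (T_in * C_in).sum() + (T_out * C_out).sum()
         + (T_sub * C_sub).sum())
C_total = (xi + (Z0 / varsigma).sum()) * inner  # bracketed sums of Eq. (25)
print(C_total)
```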
3 Conclusions The paper presents a model of delays in a logistics manufacturing system. The goals of the paper included presenting a mathematical model, based on a heuristic approach, of the problem of modelling the delays of customers' inquiries, as well as proposing a method for calculating the costs of customers' inquiries. The goals were met and the intended effect was achieved. The flow of inquiry information and its allocation among the logistics units was illustrated in detail. The elaboration of information was taken into account. However, there was a need to simplify certain assumptions in order to make the paper an illustrative case study. The example of passing customers' inquiries between logistics units directly and elaborating them in these units without skipping any of them is emphasised. It seems obvious that meeting the cost criterion requires minimising either the time of passing and elaborating inquiries or/and the unit costs of these operations. Therefore, future work should focus on rearranging the routes of passing customers' inquiries, i.e. there is a need to connect the following units directly: CSD—MAN and MAN—SUP. Simultaneously, the following connections should be removed: CSD—RPS, CMS—MAN. The criterion remains to minimise the total cost of servicing customers' inquiries. This approach is applicable to virtually all business systems that process customers' queries. Acknowledgements This paper was supported by the project SGS/8/2018—"Advanced Methods and Procedures of Business Processes Improvement" at the Silesian University in Opava, School of Business Administration in Karvina.
References
1. Aalaei, A., Davoudpour, H.: A robust optimization model for cellular manufacturing system into supply chain management. Int. J. Prod. Econ. 183, 667–679 (2017)
2. Beheshtinia, M.A., Ghasemi, A.: A multi-objective and integrated model for supply chain scheduling optimization in a multi-site manufacturing system. Eng. Optim. 50(9), 1415–1433 (2018)
3. Bucki, R., Suchánek, P.: Modelling decision-making processes in the management support of the manufacturing element in the logistics supply chain. Complexity 2017 (2017)
4. Mohammadi, M.: Challenges of business process modeling in logistics and supply chain management. Int. J. Comput. Sci. Netw. Secur. 17(6), 259–265 (2017)
5. Zhang, F., Guan, Z.L., Zhang, L., Cui, Y.Y., Yi, P.X., Ullah, S.: Inventory management for a remanufacture-to-order production with multi-components (parts). J. Intell. Manuf. 30(1), 59–78 (2019)
6. Bocquet, R., Dubouloz, S., Chakor, T.: Lean manufacturing, human resource management and worker health: are there smart bundles of practices along the adoption process? J. Innov. Econ. Manag. 30, 113–144 (2019)
7. Nanda, S., Panda, A.K.: A quantile regression approach to trail financial performance of manufacturing firms. J. Appl. Acc. Res. 20(3), 290–310 (2019)
8. Saniuk, A., Waszkowski, R.: Make-to-order manufacturing: new approach to management of manufacturing processes. In: Proceedings of the ModTech International Conference: Modern Technologies in Industrial Engineering IV, vol. 145, PTS 1–7 (2016)
9. Paprocka, I.: The model of maintenance planning and production scheduling for maximising robustness. Int. J. Prod. Res. 57(14), 4480–4501 (2019)
10. Lee, H.: Real-time manufacturing modeling and simulation framework using augmented reality and stochastic network analysis. Virtual Reality 23(1), 85–99 (2019)
11. Suchánek, P.: Business intelligence: the standard tool of a modern company. In: Proceedings of the 6th International Scientific Symposium on Business Administration: Global Economic Crisis and Changes: Restructuring Business System: Strategic Perspectives for Local, National and Global Actors, pp. 123–132 (2011)
12. Lima, F., de Carvalho, C.N., Acardi, M.B.S., dos Santos, E.G., de Miranda, G.B., Maia, R.F., Massote, A.A.: Digital manufacturing tools in the simulation of collaborative robots: towards Industry 4.0. Braz. J. Oper. Prod. Manag. 16(2), 261–280 (2019)
13. Mourtzis, D., Papakostas, N., Mavrikios, D., Makris, S., Alexopoulos, K.: The role of simulation in digital manufacturing: applications and outlook. Int. J. Comput. Integr. Manuf. 28(1), 3–24 (2015)
14. Zadeh, M.S., Katebi, Y., Doniavi, A.: A heuristic model for dynamic flexible job shop scheduling problem considering variable processing times. Int. J. Prod. Res. 57(10), 3020–3035 (2019)
15. Bektur, G., Sarac, T.: A mathematical model and heuristic algorithms for an unrelated parallel machine scheduling problem with sequence-dependent setup times, machine eligibility restrictions and a common server. Comput. Oper. Res. 103, 46–63 (2019)
Importance of Process Flow and Logic Criteria for RPA Implementation Michal Halaška and Roman Šperka
Abstract Robotic process automation (RPA) is a promising technology within the area of management of business processes. Firms are adopting RPA to digitalize and transform processes with the goal of increasing productivity and efficiency, reducing costs, and improving services. As such, it is crucial to identify processes and activities suitable for RPA. The research in this paper is focused on the process flow and logic criterion, which is neglected in the RPA literature. It is based on the analysis of a real-life event log and a common pattern composed of tasks, which is found in the process using process mining techniques with the Apromore tool and modeled using the Bizagi modeler. The BPMN model is used for the simulation of 3 scenarios with the common pattern. We found that process flow and logic is an important criterion to consider while implementing an RPA solution. More specifically, in the case of fixed resources, it is necessary, based on the productivity and efficiency of the common pattern, to first automate the task with the worst cycle time. In the case of the pool of resources, the automation of the common pattern using RPA is less restrictive regarding the cycle times of particular tasks due to its reallocation possibilities, allowing for higher flexibility of gradual automation.
1 Introduction According to van der Aalst, Bichler, and Heinzl [1], the fundamental question related to RPA nowadays is "What should be automated and what should be done by humans?" This question is not new; a similar question was asked when Straight-Through Processing (STP) arrived in the financial sector in the mid-nineties. However, developments in data science, machine learning, and artificial intelligence are the source of the need to revisit this question. The combination of data science, machine learning,
M. Halaška (B) · R. Šperka School of Business Administration in Karvina, Silesian University in Opava, Univerzitní Náměstí 1934, Karviná 733 40, Czechia e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_20
and artificial intelligence with data from different sensors enables software robots to act in different environments more autonomously, thus offering wider application possibilities in the transformation of more complex and less structured processes and less routine tasks. In general, the involvement of RPA within a business process is associated with increased process speed, reduced error rates, and higher employee motivation, similarly to the management of business processes [2]. According to Madakam, Holmukhe, and Jaiswal [3], if a company does not set up RPA technology, it will not be able to sustain future competition. It is necessary to combine RPA technology with operational and other capabilities and internal resources, because resources on their own may not create value for the company if not properly managed within the internal processes of the company [4]. Thus, here comes the need to determine the processes and activities suitable for automation using an RPA solution. Choosing an unsuitable process for the employment of RPA is reported as a significant reason for solution failure [5, 6]. There are a few research articles interested in the criteria for the implementation of an RPA solution. However, these consider criteria from several perspectives (e.g., cultural, financial, etc.) and different abstraction levels, while this research focuses on the operational and tactical level of management of business processes. Concretely, the research in this paper is focused on the process flow and logic criterion, which is neglected in the RPA literature. Process flow and logic can have a significant impact on the results of an implemented RPA solution, as shown in the case study of this research. Firstly, the goal of the paper is to show that the process flow and logic criterion has to be considered in the recommendation of RPA implementation. Secondly, it is used to make a recommendation about the implementation of RPA based on the common pattern using the process flow and logic criterion. The case study is based on the simulation of three scenarios of a common pattern discovered in a real-life event log. The paper is structured as follows: the second section presents criteria for robotic process automation based on the management of business processes that need to be considered before the implementation of RPA. The third section focuses on the methodology of the research. It describes the used BPIC 2017 event log, the applied methods, and the parametrization of the simulation experiments. The fourth section presents the results of the simulation experiments and recommendations. Finally, the last section concludes the findings.
2 Criteria for Robotic Process Automation RPA is a tool that operates on the interface of other systems in an outside-in manner. RPA thus falls into the category of lightweight IT, unlike different BPM solutions and other IS that are considered heavyweight IT [7]. A consequence of the outside-in approach is that there is no need for changes within the information system. The change is the replacement of human agents with software robots [8]. Baranauskas defines RPA as an IT-based imitation of a human's daily work where a limited number
of autonomous decisions are needed and, in most cases, great quantities should be processed in a short period of time [9]. As such, RPA is a rule-based software agent used to automate business processes involving tasks, structured data, and deterministic outcomes, with the promise of allowing workers to pay attention to more value-adding or cognitively demanding tasks instead of routine and manual tasks, which can be done faster and more accurately by software robots; especially considering that cognitively demanding tasks are not very well suited for an RPA solution. In practice, this means that a software robot uses IT systems exactly the same way a human would: repeating precise steps and reacting to events on a computer screen, instead of communicating with the system's Application Programming Interface [10]. An example is, e.g., the transfer of data from a single source to multiple documents and vice versa. Besides that, the deployment of RPA has many benefits, e.g., accuracy, productivity, flexibility, reliability, consistency, customer satisfaction, etc. [7, 11, 12]. Based on the different characteristics of RPA, there are several criteria that are used for the identification of processes and activities suitable for the implementation of RPA. Below, the criteria significant from the perspective of management of business processes are summarized based on the literature.
High volume transactions The high volume transactions criterion follows from many definitions of RPA as one of the key criteria. High volume transactions are generally routine and repetitive, whereby automation is an ideal choice [9, 13]. These transactions occur very frequently throughout the observed time period. Moreover, a high volume of transactions is easier to carry through from the perspective of automation costs. Exceptions may include highly valuable transactions, where the need for accuracy and reliability outweighs the cost of investment.
Error proneness and reworks Among the main benefits of RPA implementation are accuracy, consistency, and reliability, which counter the errors and unnecessary reworks produced by the involvement of human agents [14]. Error proneness and reworks are closely related to a high volume of transactions. High volume transactions are more susceptible to errors and reworks, especially if they require attention and involvement over a long period of time. Similarly, they are also related to productivity, contributing towards shorter cycle times and higher efficiency, resulting in higher productivity. RPA can thus enforce compliance with the process.
Manual work Manual work is again related to the error proneness and reworks to which human agents are susceptible. Moreover, it is typical for manual work to consist very often of routine and repetitive tasks requiring intensive labor. Such tasks do not need human intervention, or the need for it is limited, together with a limited need for handling exceptions. Moreover, if the workload is entirely dependent on human work, it can only be addressed during office hours, while RPA is able to handle it around the clock every day of the year. Manual work where the human agent works with one or possibly
several information systems is well suited for an RPA solution [1, 5, 8]. Thus, RPA can be utilized best on repetitive, standardized, and rule-based tasks [7, 8].
Productivity Higher productivity is one of the key expectations resulting from the implementation of an RPA solution, that is, the reduction of the workload of human agents, which is assigned to software robots [1]. These software robots are able to process high volumes of workload that would take human agents significantly more time to process, thus increasing the productivity of the process in terms of reduced cycle times and higher efficiency and flexibility in resource allocation. The allocation of resources is important to consider regarding processes and tasks showing high fluctuations in the workload of human agents due to changes in transactional demand.
Process flow and logic RPA is used to imitate the workload of human agents and does not change the flow or logic of the process. Process flow and logic plays a crucial role in the management of business processes, even though it is neglected within the RPA literature. It is necessary to consider the placement in the process, as well as the complexity of the processes or tasks. The following case study shows the importance of the process flow and logic criterion for a part of a business process during the implementation of RPA, together with the possible impacts.
3 Methodology The goal of the paper is to show that the process flow and logic criterion has to be considered in the recommendation of RPA implementation. We put emphasis on a part of a business process extracted with the use of process mining techniques. Afterward, we apply business process simulations in 3 scenarios for two types of resources: a pool of resources and fixed resources. To allow for process mining analysis, it is necessary that the processes within the process structure are supported by an appropriate information system (e.g., ERP, PAIS), which is able to provide the data needed for process mining analysis. The main objective of process discovery is to find patterns in the data allowing one to build models of the analyzed processes. Process discovery is a challenging and nontrivial task. In other words, a process discovery technique takes an event log as input and produces a process model representation of the input data. Nowadays, the predominantly used approach towards process discovery is the so-called algorithmic approach. As the most influential techniques, one can mention, e.g., different versions of the alpha-algorithm, HeuristicMiner and Fodina, inductive mining, Split miner, and Fuzzy miner [15, 16]. There are several generally used process model representations, e.g., process maps, BPMN, Petri nets, etc.
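As an illustration of this algorithmic approach, the open-source pm4py library can discover a model from an event log with a few calls; the file name below is a hypothetical placeholder, and the exact function names may differ between pm4py versions.

```python
import pm4py

# Load an event log; XES is the standard interchange format, and
# "offer_log.xes" is an assumed file name for the filtered offer log.
log = pm4py.read_xes("offer_log.xes")

# Algorithmic process discovery: the inductive miner yields a sound model
# in BPMN notation, one of the representations mentioned above.
bpmn_model = pm4py.discover_bpmn_inductive(log)
pm4py.view_bpmn(bpmn_model)

# The same log can also be mined into a Petri net representation.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
```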
Fig. 1 Process map of 2017 BPIC log—offer event log
The presented case study is based on a real-life event log from the 2017 BPIC challenge.¹ The data set is provided by a financial institution and describes a loan application process. The loan application process was chosen because processes of financial institutions were popular automation targets already with STP, and these types of processes are generally suitable candidates for automation using RPA. The loan application log consists of three types of events, namely application state changes, offer state changes, and workflow events. After applying a filter of activities on the 2017 BPIC log, a process consisting only of offer state changes, the so-called offer log (see Fig. 1), is analyzed in this research. Firstly, the process map in Disco (Fig. 1—left side) is discovered using the filter of activities with a value of 65.9 and the filter of paths with a value of 33.6. Secondly, the BPMN model in Apromore (Fig. 1—right side) is obtained using the nodes filter with a value of 100, the arcs filter with a value of 34, and the parallelism filter with a value of 40. After obtaining an idea of what the process looks like, all filters are set to 100. The common pattern depicted in Fig. 2 (upper part) was discovered within the process among other patterns. Initial analysis of this pattern suggests that all tasks in this business process fragment are suitable candidates for RPA, taking into account the criteria in Sect. 2 (e.g., common practice, frequency of their occurrence, error proneness, etc.). Subsequently, the obtained common pattern was modeled using BPMN as the process representation in the Bizagi modeler.
¹ Seventh International Business Process Intelligence Challenge (BPIC'17), https://www.win.tue.nl/bpi/doku.php?id=2017:challenge, last accessed 2020/01/03.
Fig. 2 Common pattern discovered in the BPIC 2017 log
For simulation purposes, we assume that the tasks are not automated using RPA. Through their gradual automation using RPA, we examine the impact of automation on the overall performance of the common pattern in combination with human resources. To examine the productivity and efficiency aspects of the common pattern, we alter the processing and waiting times of the BPMN simulation models. Moreover, productivity and efficiency are highly influenced by the allocation of resources, meaning that a specific type of human resource is assigned to all tasks. Besides the productivity and efficiency criteria represented by the processing and waiting times, there are human resources in the form of a pool of resources or fixed resources used across all tasks. While the pool of resources represents a more universal workforce, fixed resources represent a more specialized workforce. We conducted 11 simulation runs with both types of resources for each scenario. Within the pool, there are 6 available resources in total, and they are randomly assigned to particular tasks based on availability. When all 6 resources are processing the parallel Tasks 1–3, newly arrived events wait until one of them finishes processing.
When a resource becomes available, it is immediately assigned to a waiting event based on the first-in, first-out principle. The resources are set to achieve resource utilization as close as possible to 100% within Scenario 1. Nevertheless, they do not completely eliminate waiting times. There are 7 fixed resources, which are needed to achieve processing and waiting times of particular tasks similar to those of the pool of resources. Fixed resources are assigned to a particular task and cannot be mixed, meaning that if Resource 1 is assigned to support Task 1, it can only process Task 1. Fixed resources are assigned as follows: Task 1—2 resources, Task 2—2 resources, and Task 3—3 resources. A sketch of such a simulation setup is given below.
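The following minimal discrete-event sketch of the two resourcing schemes uses the SimPy library; the durations are placeholders, and this SimPy-based setup is our illustration of the idea, not the paper's Bizagi configuration.

```python
import random
import simpy

def task(env, name, resource, duration):
    """One task instance: queue (FIFO) for a resource, then process."""
    with resource.request() as req:
        yield req                      # wait until a resource is free
        yield env.timeout(duration())  # processing time

def arrivals(env, resources, durations):
    """Cases arrive and spawn the three parallel tasks of the pattern."""
    while True:
        yield env.timeout(random.expovariate(1 / 30))  # Poisson, mean 30 min
        for i in range(3):
            env.process(task(env, f"Task{i+1}", resources[i], durations[i]))

env = simpy.Environment()
dur = [lambda: random.uniform(5, 15)] * 3  # placeholder durations (min)

# Pool of resources: one shared pool of 6 universal workers.
pool = simpy.Resource(env, capacity=6)
pooled = [pool, pool, pool]

# Fixed resources: 2, 2 and 3 specialized workers bound to Tasks 1-3.
fixed = [simpy.Resource(env, capacity=c) for c in (2, 2, 3)]

env.process(arrivals(env, pooled, dur))  # swap in `fixed` for the other scheme
env.run(until=4 * 60)                    # simulate a 4-h horizon
```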
4 Results We present the parametrization and simulation results in this section. The arrivals of starting events were selected from the Poisson distribution with parameter mean equal to 30 min, which applies to all 3 scenarios. Minutes are the time units used in the entire simulation experiment. Processing and waiting times for particular tasks are based on the Truncated normal distribution with four parameters: mean, standard deviation, min value, max value. Thus, in Table 1 the parameter processing time (e.g., 10-3-1-40 for Task 1 in Scenario 1) means that the values are randomly generated using the Truncated normal distribution with parameters mean equal to 10 min and standard deviation equal to 3. The min value of 1 and max value of 40 are in place so that the generated values are not negative. Processing and waiting times of particular tasks are chosen so that their order of magnitude is realistic, considering real processes. Avg. time and total avg. time are averages of the values of 11 simulation runs for both types of resources. Avg. time is the result related to a particular task, while total avg. time is the result related to the entire common pattern. The result with index "P" means that all simulation runs used the pool of resources, and the result with index "F" means that all simulation runs used fixed resources (Table 2).
Table 1 Processing and waiting times of tasks within different scenarios

Scenario 1        Task 1       Task 2       Task 3
Processing time   10-3-1-40    10-3-1-40    15-5-1-40
Waiting time      25-7-1-60    30-8-1-60    60-15-1-120

Scenario 2        Task 1       Task 2       Task 3
Processing time   7-2-1-28     10-3-1-40    15-5-1-40
Waiting time      15-4-1-36    30-8-1-60    60-15-1-120

Scenario 3        Task 1       Task 2       Task 3
Processing time   7-2-1-28     7-2-1-28     11-4-1-28
Waiting time      15-4-1-36    18-5-1-36    36-9-1-72
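Sampling from such a four-parameter truncated normal can be done with SciPy as sketched below, using Task 1 of Scenario 1 (10-3-1-40) as the example; note that `truncnorm` expects the bounds in standardized form.

```python
from scipy.stats import truncnorm

def sample_times(mean, sd, lo, hi, size, seed=None):
    """Draw durations from a truncated normal given in the paper's
    mean-sd-min-max notation (e.g., 10-3-1-40)."""
    a, b = (lo - mean) / sd, (hi - mean) / sd  # standardized bounds
    return truncnorm.rvs(a, b, loc=mean, scale=sd,
                         size=size, random_state=seed)

# Processing times of Task 1 in Scenario 1: 10-3-1-40.
times = sample_times(10, 3, 1, 40, size=1000, seed=42)
print(times.min(), times.max(), times.mean())
```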
Table 2 Results of simulation experiments

Scenario 1         Task 1    Task 2    Task 3
Avg. timeP         41.82     50.25     91.39
Total avg. timeP   110.03
Avg. timeF         35.7      39.92     122.64
Total avg. timeF   144.96

Scenario 2         Task 1    Task 2    Task 3
Avg. timeP         23.43     42.62     82.27
Total avg. timeP   99.42
Avg. timeF         22.01     40.08     126.23
Total avg. timeF   150.47

Scenario 3         Task 1    Task 2    Task 3
Avg. timeP         21.96     24.74     47.73
Total avg. timeP   57.11
Avg. timeF         21.97     24.72     47.81
Total avg. timeF   57.21
We consider Scenario 1 as the basic one. Based on the comparison of Scenario 2 to Scenario 1, the 30% improvement in the processing time (7; 10) and the 40% improvement in the waiting time (15; 25) of Task 1 resulted in a 9.64% improvement in the total avg. timeP (99.42; 110.03) using the pool of resources and a 3.8% deterioration in the total avg. timeF (150.47; 144.96). The 9.64% improvement in the case of the pool of resources is statistically significant. However, the impact of the change on the productivity of the common pattern is small. In the case of fixed resources, the impact of RPA on the productivity of the common pattern is not even statistically significant (see Table 3). This means that the combination of the implementation of RPA in Task 1 and a more flexible pool of resources has a better impact on the overall performance of the common pattern (99.42 < 150.47). This is caused by the possible reallocation of resources. This effect starts to diminish with higher automation using RPA (Scenario 3). In that sense, fixed resources are more prone to waste and inefficiencies.

Table 3 ANOVA: P-values and ω² for particular scenario combinations

                          P-valueP    ω²P      P-valueF    ω²F
Scenario 1 × Scenario 2   0.0028      0.348    0.7761      0.003
Scenario 1 × Scenario 3   3.40E-14    0.984    1.65E-06    0.691
Scenario 2 × Scenario 3   1.09E-20    0.944    1.62E-16    0.691
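Statistics of this kind can be reproduced from the per-run totals with SciPy; the sketch below computes the one-way ANOVA p-value and the ω² effect size for one scenario pair, with the two sample arrays standing in as placeholders for the 11-run totals.

```python
import numpy as np
from scipy.stats import f_oneway

def omega_squared(*groups):
    """Effect size omega^2 for a one-way ANOVA over the given groups."""
    all_vals = np.concatenate(groups)
    k, n = len(groups), all_vals.size
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    ms_within = (ss_total - ss_between) / (n - k)
    return (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

# Placeholder 11-run total avg. times for two scenarios (pool of resources).
scenario1 = np.random.default_rng(1).normal(110.0, 5.0, 11)
scenario2 = np.random.default_rng(2).normal(99.4, 5.0, 11)

stat, p = f_oneway(scenario1, scenario2)
print(f"p-value = {p:.4f}, omega^2 = {omega_squared(scenario1, scenario2):.3f}")
```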
On the other hand, comparing Scenario 3 to Scenario 1, the 30% improvement in all processing times (7, 7, 11; 10, 10, 15) and the 40% improvement in all waiting times (15, 18, 36; 25, 30, 60) result in an improvement of the productivity of the common pattern by 48.01% in the case of the pool of resources (57.11; 110.03) and by 60.53% in the case of the fixed resources (57.21; 144.96). Based on Table 2, one can see that with increasing automation using RPA, the difference between the pool of resources and the fixed resources diminishes. This is caused by the barriers formed by the tasks with the worst cycle time. These barriers start to diminish only after the improvement of the productivity and efficiency of the common pattern overcomes them. This suggests that growing levels of partial automation of particular tasks can improve on the inefficient allocation of resources. However, the pool of resources provides better overall improvements in the productivity and efficiency of the common pattern. The following should be considered in the case of fixed resources:
• RPA implementation is ineffective within the common pattern unless the task with the worst cycle time (Task 3) is automated in the first place;
• RPA implementation of a task other than the one with the worst cycle time (Task 3) might worsen the productivity of the common pattern, because the inefficiencies of that task will be more influential over the entire pattern.
In the case of the pool of resources:
• if not all tasks in the common pattern are being automated using RPA, the implementation of RPA is more effective than in the case of fixed resources due to the flexibility of resources.
Thus, from the productivity and efficiency perspective, it is recommended to start implementing RPA with the task with the worst cycle time. In the case of fixed resources, if the task with the worst cycle time cannot be automated, it is not recommended to automate any tasks in the common pattern using RPA. In the case of the pool of resources, the automation of the common pattern using RPA is less restrictive regarding the cycle times of particular tasks due to its reallocation possibilities, allowing for higher flexibility of gradual automation.
5 Conclusions The research in this paper focuses on the process flow and logic criterion, which is neglected in the literature. As the results of the case study show, it is necessary to consider it for RPA implementation regarding productivity and efficiency. Thus, process flow and logic should be considered in the overall decision-making process. As shown, the implementation of RPA is directly affected by the process flow and logic criterion, as it has a significant direct impact on the productivity and efficiency of the RPA solution. One should be looking for common patterns that maximize productivity and efficiency; in this case, from the productivity perspective, it is much more effective to automate all 3 tasks. It is recommended to start implementing RPA with the tasks with the worst cycle times. In the case of fixed resources, if the task with the worst cycle time cannot be automated, it is not recommended to automate any tasks in the common pattern using RPA regarding the productivity and efficiency of the common pattern. In the case of the pool of resources, the automation of the common pattern using RPA is less restrictive regarding the cycle times of particular tasks due to its reallocation possibilities, allowing for higher flexibility of gradual automation. For the purposes of process flow and logic, process maps are not an appropriate model representation, even though they are usually the only process representations provided by the commercial process mining tools typically used by companies. Much better results are obtained through the use of BPMN. In general, process model representations with replay semantics are needed for the process flow and logic criterion, e.g., BPMN or Petri nets. A process representation with replay semantics is also an advantage for simulation purposes. Even though process maps can be used to identify attributes like productivity and efficiency, they can hardly be used on their own for implementation purposes. Acknowledgments The work was supported by the SGS/8/2018 project "Advanced methods and procedures of business process management" implemented by the Silesian University in Opava, Czechia.
times. In case of fixed resources, unless task with the worst cycle time cannot be automated it is not recommended to automate any tasks in the common pattern using RPA regarding productivity and efficiency of the common pattern. In the case of the pool of resources, the automation of the common pattern using RPA is less restrictive regarding cycle times of particular tasks due to its reallocation possibilities allowing for higher flexibility of gradual automation. For the purpose of process flow and logic, process maps are not appropriate to model representation, even though, they are usually the only process representations provided by commercial process mining tools typically used by companies. Much better results are provided through the use of BPMN. In general, process model representations with replay semantics are needed for process flow and logic criteria, e.g., BPMN or Petri nets. Process representation with replay semantics is an advantage also toward simulation purposes. Even though process maps can be used to identify attributes like productivity and efficiency, they can hardly be solely used for implementation purposes. Acknowledgments The work was supported by Project SGS/8/2018 project “Advanced methods and procedures of business process management” implemented by the Silesian University in Opava, Czechia.
References
1. van der Aalst, W.M.P., Bichler, M., Heinzl, A.: Robotic process automation. Bus. Inf. Syst. Eng. 60, 269–272 (2018). https://doi.org/10.1007/s12599-018-0542-4
2. Smart, P.A., Maddern, H., Maull, R.S.: Understanding business process management: implications for theory and practice. Br. J. Manage. 20, 491–507 (2009). https://doi.org/10.1111/j.1467-8551.2008.00594.x
3. Madakam, S., Holmukhe, R.M., Jaiswal, D.K.: The future digital work force: robotic process automation (RPA). JISTEM—J. Inf. Syst. Technol. Manage. 16 (2019). https://doi.org/10.4301/s1807-1775201916001
4. Zeng, J., Khan, Z.: Value creation through big data in emerging economies. Manage. Decis. 57, 1818–1838 (2019). https://doi.org/10.1108/MD-05-2018-0572
5. Osmundsen, K., Iden, J., Bygstad, B.: Organizing robotic process automation: balancing loose and tight coupling. https://scholarspace.manoa.hawaii.edu/bitstream/10125/60128/0688.pdf. Last accessed 22 Dec 2019
6. Lamberton, C.: Get ready for robots. Why planning makes the difference between success and disappointment. https://www.ey.com/Publication/vwLUAssets/EY_-_Get_ready_for_robots/$FILE/EY-get-ready-for-robots-2016-DK.pdf. Last accessed 29 Dec 2019
7. Lacity, M., Willcocks, L.: Robotic process automation at Telefonica O2. MIS Q. Exec. 15 (2016)
8. Lacity, M., Willcocks, L.: Robotic process automation: the next transformation lever for shared services. http://www.umsl.edu/~lacitym/OUWP1601.pdf. Last accessed 22 Dec 2019
9. Baranauskas, G.: Changing patterns in process management and improvement: using RPA and RDA in non-manufacturing organizations. Eur. Sci. J. ESJ 14, 251 (2018)
10. Asatiani, A., Penttinen, E.: Turning robotic process automation into commercial success—case OpusCapita. J. Inf. Technol. Teach. Cases 6, 67–74 (2016). https://doi.org/10.1057/jittc.2016.5
11. Aguirre, S., Rodriguez, A.: Automation of a business process using robotic process automation (RPA): a case study. In: Figueroa-García, J.C., López-Santana, E.R., Villa-Ramírez, J.L., Ferro-Escobar, R. (eds.) Applied Computer Sciences in Engineering, pp. 65–71. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66963-2_7
12. Slaby, J.: Robotic automation emerges as a threat to traditional low-cost outsourcing. https://www.blueprism.com/resources/white-papers/robotic-automation-emerges-as-a-threat-to-traditional-low-cost-outsourcing/. Last accessed 22 Dec 2019
13. Fung, H.P.: Criteria, use cases and effects of information technology process automation (ITPA). Adv. Robot. Autom. 03, 1–10 (2014)
14. Sutherland: Framing a constitution for Robotistan: racing with the machine of robotic automation. https://neoops.com/wp-content/uploads/2014/03/RS_1310-Framing-a-constitution-for-Robotistan.pdf. Last accessed 1 Jan 2019
15. van der Aalst, W.: Process Mining: Data Science in Action. Springer, Berlin, Heidelberg (2016)
16. van den Broucke, S.K.L.M., De Weerdt, J.: Fodina: a robust and flexible heuristic process discovery technique. Decis. Support Syst. 100, 109–118 (2017). https://doi.org/10.1016/j.dss.2017.04.005
Agents and Multi-agent Systems Applied to Well-Being and Health
Multiagent System as Support for the Diagnosis of Language Impairments Using BCI-Neurofeedback: Preliminary Study
Eugenio Martínez, Rosario Baltazar, Carlos A. Reyes-García, Miguel Casillas, Martha-Alicia Rocha, Socorro Gutierrez, and M. Del Consuelo Martínez Wbaldo
Abstract This work-in-progress paper focuses on determining the extent to which the electroencephalogram (EEG) signal can be subjected to treatment and classification techniques in order to determine whether it is possible to differentiate between language disorders, to learn more about the behavior of these language alterations at the brain level, and to provide a tool to support medical diagnosis. We have established the hypothesis that, through a Brain Computer Interface (BCI) together with EEG signal treatment and classification techniques, in conjunction with the application of medical neurofeedback techniques, it is possible to identify relevant information that allows the grouping of language disorders, by measuring concentration levels among patients with these conditions.
E. Martínez (B) · R. Baltazar · M. Casillas · M.-A. Rocha Instituto Tecnológico de León, Av. Tecnológico s/n, León, Guanajuato, Mexico e-mail: [email protected] URL: http://www.itleon.edu.mx R. Baltazar e-mail: [email protected] M. Casillas e-mail: [email protected] M.-A. Rocha e-mail: [email protected] C. A. Reyes-García Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro 1, Sta María Tonanzintla, 72840 San Andrés Cholula, Pue, Mexico e-mail: [email protected] S. Gutierrez Centro Estatal de Rehabilitación INGUDIS, Silao, Gto, Mexico e-mail: [email protected] M. Del Consuelo Martínez Wbaldo Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Arenal de Guadalupe, Tlalpan, 14389, Mexico © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_21
1 Introduction Over the past decade, many studies have emerged that focus on differentiating the variety of systems and subsystems involved in language development. In computational science, the analysis of the raw electroencephalogram (EEG) signal is used; this signal is acquired by systems known as Brain Computer Interfaces (BCI). A BCI system comprises a signal processing module that can be further broken down into four submodules, namely pre-processing, feature extraction, feature selection, and classification. The objective of BCI systems is the extraction of characteristics or information from brain activity, which is then translated into commands to control external devices. These systems use different paradigms such as event-related potentials (ERPs) from the cortex, P300 evoked potentials, neuronal action potentials, etc. This mainly involves processing the EEG signal through each of the four submodules mentioned above [1]. "Specific Language Impairment" (SLI) is the term for a type of developmental language disorder associated with no known sensory, neurological, intellectual, or emotional deficits, which can be identified early during the preschool period [2]. Today this disorder has reached epidemic proportions: in Mexico and the USA around 7% of children have some form of language impairment, which is being detected at increasingly earlier stages. Most of these children, diagnosed with SLI, show very definite patterns such as late speech, late vocabulary development, and late word combinations [2]. As mentioned above, language impairment may be caused by abnormalities in brain activity. These abnormalities are often located in the left hemisphere, which is the dominant side for language development. In pediatric patients aged 2–8 years who underwent an exhaustive study to determine the appearance of such abnormalities, it was concluded that generalized abnormalities in the EEG are observed at higher frequencies and that focal interictal epileptic discharges are observed only in the left hemisphere in children with speech and language disorders [3]. One of the techniques used in these studies, detailed by [4], is the event-related brain potential (ERP), applied to characterize the relevant systems involved in language processing in adults and to detect language-impaired children and language-delayed infants. An ERP is an electrical event synchronized to an occurrence such as a stimulus, and it reflects the earliest stages of sensory processing. This is one of the most popular techniques for the study of language, since it is noninvasive and can be used in infants, as it does not require an overt response. The object of this work-in-progress paper is to show the application of artificial intelligence techniques in the field of linguistics [5]. This problem will be addressed with the theoretical support that we have, both medical and computational. A multiagent system is proposed to determine the differences between language disorders,
in order to provide a tool to support the diagnosis, as well as providing knowledge to learn more about the behavior of these language disorders. Next, all the research work that was carried out to meet the proposed objectives will be described.
2 Related Work One of the most significant contributions in the area of communication through the EEG was proposed by Tonin et al. [6], who developed a BCI based on near-infrared spectroscopy that stores a set of questions modeled on the game of 20 questions, allowing the patient to communicate answers using only "YES" or "NO". This interface implements an artificial neural network for the classification process, which estimates a statement thought by the patient based on the answers to at least 20 questions; these statements are phrases that are also held in a data set. The experimental results show that this 20-questions-based system can be a valid interface for any BCI that uses a slow signal, such as near-infrared spectroscopy, or one with a low accuracy rate. It can also be applied in an EEG-based interface, as the system improves performance by predicting entire sentences from at least 20 binary inputs. The only drawback is that the system can only predict the phrases stored in the database, so the patient is not free to formulate their own phrases. With respect to the study of language disorders, the work of Kozhushko et al. [7] focuses on the study of cognitive skills and communication deficits in children with Autism Spectrum Disorder (ASD); by analyzing the spectral power of the resting-state EEG, the authors seek the correlation of abnormal brain activity with the severity of cognitive and communication dysfunction in children with ASD. To perform this analysis, 19 electrodes based on the 10–20 standard were used. Artifacts were removed from the acquired signal using the Independent Component Analysis (ICA) technique, with a 0–2 Hz filter for slow waves and 20–35 Hz for fast waves. For the spectral power analysis, theta, alpha, and beta frequencies were taken, and their values were normalized using the decimal logarithm; subsequently, an ANOVA test was performed to compare each pair of electrodes, with a significance value of p < 0.0002, and the non-parametric Spearman's test was used to check the correlation between the cognitive deficit and the physiological deficit of the patients. For better visualization of the data, the authors explain the use of software known as sLORETA (standardized Low-Resolution Electromagnetic Tomography) [8]. In the field of linguistics, we can refer to the work of Riccio et al. [12]. In this chapter, the three types of specific language disorders are described. These range from the inability to learn new vocabulary, limited oral expression, and difficulty acquiring the grammatical aspects of the language, typical of the expressive language disorder. In addition to these characteristics, there is also difficulty in comprehension, or poor retention in verbal short-term memory in adults, which is typical of mixed receptive–expressive language disorder. When
an infant has difficulty producing correct speech sounds for his or her age range, and consequently difficulty acquiring new vocabulary, we can speak of a phonological disorder. The chapter also mentions the correlation between the presence of a hearing problem and multiple language disorders, as this presents a deficit in the patient's ability to locate, discriminate, and recognize patterns in, or store the meaning of, auditory signals. In this chapter, the author points out the difficult task of evaluating, detecting, and diagnosing problems in language development, since often only those patients at an early age who present significant difficulties are identified, leaving aside children with minor or less significant problems, which leads to late diagnosis. A neurophysiological research team carried out a prospective, descriptive study in children with significant language acquisition delay, whose purpose was to identify the frequency and characteristics of epileptiform waves present in the EEG of the group of children in the study. The results demonstrate the presence of epileptiform waves in the study group, corroborating previous studies in the area. The presence of these waves was identified in 89.3% of the total population, which greatly exceeds the percentage reported for the healthy child population in other studies. This research concludes that abnormal electroencephalographic activity is more frequent in children with SLI, with a preferential location over language-related regions. This suggests that, given the semiological characteristics of epileptiform discharges, imposing a therapy for the suppression or modification of such discharges could lead to beneficial results [13].
3 Methods and Materials 3.1 Multi-agent Model Today, with the growing advances in the study of language, there is a discrepancy between the newly identified characteristics of each language disorder and the methods used for its diagnosis. Therefore, an objective tool is needed that provides support for the diagnosis of these conditions. The present article focuses on describing the acquisition and analysis of the EEG signal of a group of school-age children, aged 6–8 years, who are in the process of acquiring reading, writing, and reinforcement of the Spanish language. The model designed for this preliminary study consists of several intelligent agents, covering neurosensorial stimulation, EEG signal collection, the EEG signal analysis method, and the classification process, which is intended to give a pre-diagnosis of a possible language impairment. This design is intended as a model that supports clinical decision-making for the diagnosis of language disorders. For this reason, all the intelligent agents involved follow a hierarchical structure, organized according to the following scheme.
Fig. 1 Multi-agent model
For the model to start, all the agents will share a common situation that is given as true. A goal is then set, representing the problem to be solved. Each agent then individually carries out its own actions in order to achieve the common goal. Finally, the results of the individual actions of each agent are concentrated to make the necessary inferences [9], as can be seen in Fig. 1.
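The following minimal Python sketch illustrates this hierarchical scheme: a coordinator sets the common goal, each agent carries out its own action, and the individual results are concentrated for inference. All class names and return values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hierarchical agent scheme (illustrative names only):
# each agent pursues its own action toward a shared goal, and a coordinator
# concentrates the individual results to draw an inference.

from abc import ABC, abstractmethod

class Agent(ABC):
    def __init__(self, name):
        self.name = name

    @abstractmethod
    def act(self, shared_goal):
        """Perform this agent's individual action toward the common goal."""

class StimulationAgent(Agent):
    def act(self, shared_goal):
        return {"agent": self.name, "result": "stimulus delivered"}

class AcquisitionAgent(Agent):
    def act(self, shared_goal):
        return {"agent": self.name, "result": "EEG window recorded"}

class Coordinator:
    """Top of the hierarchy: sets the goal and concentrates the results."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, goal):
        results = [a.act(goal) for a in self.agents]
        return {"goal": goal, "evidence": results}  # inference step goes here

mas = Coordinator([StimulationAgent("stimulation"), AcquisitionAgent("acquisition")])
print(mas.run("pre-diagnose language impairment"))
```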
3.2 Agent Based on Mobile Application To evaluate the brain activity generated, the use of a mobile application is considered. This is based on one of the most widely used methods for measuring and capturing the EEG signal in real time, known as electroencephalographic neurofeedback (EEG-NFB), whose implementation began in the 1960s. This method allows the learning process to be reinforced through behavior modification using immediate feedback and positive reinforcement [10]. For this preliminary study, a mobile application whose goal is to generate brain stimulation will be developed.
3.3 EEG Data Acquisition Agent To test the proposed method of grouping and classifying language disorders, patients diagnosed with one of the language disorders of interest to the research will be recruited. The EEG signals will be recorded with the low-cost Cyton Biosensing Board from the company OpenBCI (http://www.openbci.com/), which consists of 8 channels placed on the scalp according to the 10–20 electrode-placement standard at the following locations: AF3, F3, F7, FC5, T7, P7, O1,
Fig. 2 Standard 10–20 for electrode placement
covering the Geschwind–Wernicke linguistic region, and using the P3/CMS electrode as a reference, as shown in Fig. 2 [11].
3.4 EEG Analysis and Treatment Agent

3.4.1 EEG Signal Filtering Process
Given the noise-sensitive nature of the EEG signal, it is important to subject it to a cleaning stage to remove artifacts such as blinking, heartbeat, or head movements. As a first step, Independent Component Analysis (ICA) will be carried out, which consists of finding a linear separation of the observed signals into statistically independent components. ICA assumes that a set of observed random variables x1(t), …, xn(t), where t is the time index of a given sample, was generated as a linear mixture of independent source components s1(t), …, sn(t), that is, x(t) = As(t) for an unknown mixing matrix A; in practice, however, finding such a separation is not straightforward [14].
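As a rough illustration of this mixing model, the sketch below applies FastICA (scikit-learn) to a synthetic two-channel mixture and zeroes out the most artifact-like component before reconstructing the channels. The synthetic sources and the kurtosis-based rejection rule are assumptions for the sketch, not the study's exact procedure.

```python
# A minimal sketch of the x(t) = A s(t) model using FastICA on synthetic data.

import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 1024)
brain = np.sin(2 * np.pi * 10 * t)                 # 10 Hz "alpha-like" source
blink = (rng.random(t.size) < 0.01) * 5.0          # sparse, spiky blink artifact
A = np.array([[1.0, 0.8], [0.5, 1.2]])             # unknown mixing matrix
x = np.column_stack([brain, blink]) @ A.T          # observed 2-channel "EEG"

ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)                       # estimated sources s(t)

# Reject the most "spiky" component (high kurtosis is typical of blinks).
s_hat[:, np.argmax(kurtosis(s_hat))] = 0.0
x_clean = ica.inverse_transform(s_hat)             # back to channel space
print(x_clean.shape)                               # (1024, 2)
```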
3.4.2 Feature Extraction
Once the independent components of the EEG signal have been found, the signal will undergo a transformation stage through the application of the Discrete Wavelet Transform (DWT), where two specific wavelet families, Daubechies and Symlet, will be explored. During this process, the decomposition levels and the mother wavelets of these two families will be analyzed to select the most appropriate ones for identifying the proposed language disorders.
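A possible shape of this stage is sketched below with PyWavelets: relative sub-band energies from a DWT decomposition serve as features, and several Daubechies/Symlet candidates are compared. The specific wavelets, the four decomposition levels, and the energy statistic are illustrative assumptions, not parameters fixed by the study.

```python
# Sketch of a DWT feature stage: relative energy per sub-band as features.

import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Relative energy of each DWT sub-band as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

eeg_window = np.random.randn(512)          # stand-in for one EEG epoch
print(dwt_features(eeg_window))            # e.g., 5 relative-energy features

# Candidate mother wavelets from the two families under consideration:
for w in ["db2", "db4", "sym5", "sym8"]:
    print(w, dwt_features(eeg_window, wavelet=w).round(3))
```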
3.4.3 Reduction of Characteristics
In order to reduce the dimensionality of the characteristics obtained during the extraction stage, the Principal Component Analysis (PCA) technique will be used, which tries to explain the structure of the variances and covariances of a set of variables Xi by means of a few linear combinations of them, called principal components. These principal components are uncorrelated with each other, and each one maximizes its variance. PCA aims to reduce or simplify the data to facilitate analysis and interpretation. The author explains that this reduction is possible because the variability of the data can be explained by a smaller number k of principal components, in such a way that the original data set of n observations on p variables is reduced to n observations on k components [15].
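A minimal sketch of this step with scikit-learn's PCA follows; retaining components that explain 95% of the variance is an illustrative choice, not a threshold fixed by the study.

```python
# Sketch of the dimensionality-reduction step: n observations on p variables
# are reduced to n observations on k uncorrelated principal components.

import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(200, 40)              # 200 epochs x 40 wavelet features (stand-in)
pca = PCA(n_components=0.95)              # keep k components covering 95% variance
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)     # p = 40 variables reduced to k components
print(pca.explained_variance_ratio_[:5])  # components in decreasing order of variance
```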
3.5 Classification Agent For this preliminary study, data science techniques such as clustering algorithms will be applied. Since the nature of this study is exploratory, it seeks to observe the behavior of the EEG signal in the study group and how it can be grouped based on the previous selection of characteristics. Clustering is an unsupervised learning technique capable of grouping data points that are similar to each other. To calculate these similarities, mathematical operations such as distances, angles, and averages, among others, are used. Given the purpose and characteristics of the data used in this study, several clustering techniques will be tested, such as the fuzzy c-means clustering algorithm, which is based on the distance between the centroid and the point to be grouped; membership in a group is given by values between 0 and 1 [16, 17]. Another technique proposed for this study is clustering based on neural networks, since these have been used successfully in the resolution of nonlinear problems, with both numerical and categorical variables [18].
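As a sketch of the fuzzy c-means option, the snippet below uses the scikit-fuzzy package (assuming it is installed); the two clusters and the fuzzifier m = 2 are illustrative defaults, not study parameters.

```python
# Sketch of fuzzy c-means grouping with scikit-fuzzy; memberships lie in [0, 1].

import numpy as np
import skfuzzy as fuzz

X = np.random.randn(40, 150)              # features x samples (skfuzzy convention)

cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(
    X, c=2, m=2.0, error=1e-5, maxiter=1000, seed=0)

labels = np.argmax(u, axis=0)             # hard labels from soft memberships
print("membership matrix:", u.shape)      # (2, 150), one row per cluster
print("fuzzy partition coefficient:", round(fpc, 3))
```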
4 Conclusion In this article, we proposed the design of an intelligent-agent model to support the diagnosis of language disorders, seeking to reinforce what was found in previous work on the appearance of abnormal brain waves between a control group of school-aged children and those diagnosed with Specific Language Impairment. Our model seeks to mitigate the discrepancy between the newly identified characteristics and components of SLI and the methods of evaluation and diagnosis. The preliminary study designed here is taken as a basis for future validation of the functionality of the intelligent-agent model. The purpose of this preliminary study is to provide an objective tool that supports the process of early diagnosis, thus combining
medical intervention with the techniques and computational processes of artificial intelligence. Acknowledgements To CONACYT, for the support provided during the period of study for the master's degree. To Dr. Rosario Baltazar and the members of the committee of researchers who took part in the realization of this preliminary study. Special thanks go to Dr. Socorro Gutierrez and Dr. Consuelo Martínez for their valuable contribution of knowledge in the medical area related to language.
References
1. Mutasim, A.K., Tipu, R.S., Raihanul Bashar, M., Kafiul Islam, Md., Ashraful Amin, M.: In: Pedrycz, W., Chen, S.-M. (eds.) Computational Intelligence for Pattern Recognition. Studies in Computational Intelligence, vol. 777. Springer International Publishing AG, part of Springer Nature (2018). https://doi.org/10.1007/978-3-319-89629-8_11
2. Weismer, S.E.: Specific language impairment. In: The Cambridge Handbook of Communication Disorders, pp. 73–87. https://doi.org/10.1017/CBO9781139108683.007
3. Mehta, B., Chawla, V.K., Parakh, M., Parakh, P., Bhandari, B., Gurjar, A.S.: EEG abnormalities in children with speech and language impairment. J. Clin. Diagnost. Res. (2015). https://doi.org/10.7860/JCDR/2015/13920.6168
4. Mills, D.L., Neville, H.J.: Electrophysiological studies of language and language impairments. Semin. Pediatr. Neurol. 4. https://doi.org/10.1016/S1071-9091(97)80029-0
5. Obler, L.K., Gjerlow, K., Méndez, E., Tena, P.: El lenguaje y el cerebro. ISBN: 9788483230909
6. Tonin, A., Birbaumer, N., Chaudhary, U.: A 20-questions-based binary spelling interface for communication systems. Brain Sci. https://doi.org/10.3390/brainsci8070126
7. Kozhushko, N.J., Nagornova, Z.V., Evdokimov, S.A., Shemyakina, N.V., Ponomarev, V.A., Tereshchenko, E.P., Kropotov, J.D.: Specificity of spontaneous EEG associated with different levels of cognitive and communicative dysfunctions in children. Int. J. Psychophysiol. https://doi.org/10.1016/j.ijpsycho.2018.03.013
8. Pascual-Marqui, R.D.: Standardized low resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find. Exp. Clin. Pharmacol.
9. Xiao, L.: A hierarchical agent decision support model and its clinical application. In: Agents and Multi-agent Systems: Technologies and Applications 2019, 13th KES International Conference, KES-AMSTA-2019, St. Julians, Malta, June 2019, Proceedings. https://doi.org/10.1007/978-981-13-8679-4_17
10. Omejc, N., Rojc, B., Battaglini, P.P., Marusic, U.: Review of the therapeutic neurofeedback method using electroencephalography: EEG neurofeedback. Bosn. J. Basic Med. Sci. 19(3), 213–220 (2019). Available from: http://www.bjbms.org/ojs/index.php/bjbms/article/view/3785
11. Nuwer, M.R., Comi, G., Emerson, R., Fuglsang-Frederiksen, A., Guérit, J.-M., Hinrichs, H.: IFCN standards for digital recording of clinical EEG. Int. Feder. Clin. Neurophysiol. https://doi.org/10.1016/s0013-4694(97)00106-5
12. Riccio, C.A., Sullivan, J.R., Cohen, M.J.: Specific language impairment/dysphasia. Chapter 4 in: Handbook of Child Language Disorders. https://doi.org/10.4324/9781315283531
13. Aguilar Fabré, L., Valdivia Álvarez, I., Rodriguez Valdés, R.F., Gárate Sánchez, E., Morgade Fonte, R.M., Castillo Yzquierdo, G., et al.: Hallazgos electroencefalográficos en los pacientes con trastorno específico del desarrollo del lenguaje. Rev. Cubana Neurol. Neurocir. 5(1), 13–18 (2015). Available from: http://www.revneuro.sld.cu/index.php/neu/article/view/205
14. Hyvarinen, A., Karhunen, J., Oja, E.: Independent Component Analysis. Adaptive and Cognitive Dynamic Systems: Signal Processing, Learning, Communications and Control. ISBN: 9780471464198
15. Rodriguez Hernández, O.: Temas de Analisis Estadistico Multivariado. Editorial Universidad de Costa Rica. ISBN: 9789977674902
16. Pradhan, C.K., Rahaman, S., Abdul Alim Sheikh, Md., Kole, A., Maity, T.: EEG signal analysis using different clustering techniques. In: Emerging Technologies in Data Mining and Information Security. https://doi.org/10.1007/978-981-13-1498-8_9
17. Arora, J., Khatter, K., Tushir, M.: Fuzzy C-means clustering strategies: a review of distance measures. Adv. Intell. Syst. Comput. https://doi.org/10.1007/978-981-10-8848-3_15
18. Ahmad, A., Khan, S.S.: Survey of state-of-the-art mixed data clustering algorithms. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2903568
Multi-agent System for Therapy in Children with the Autistic Spectrum Disorder (ASD), Utilizing Smart Vision Techniques—SMA-TEAVI
Ruben Sepulveda, Arnulfo Alanis, Marina Alvelais Alarcón, Daniel Velazquez, and Karina Alvarado
Abstract The present article shows the analysis and design of a Multi-Agent System architecture for emotional–social assistance, which will be implemented in a robot that will perform tasks such as therapy for children diagnosed within the Autistic Spectrum Disorder (ASD). Other tasks the robot will perform concern the development of skills in the physical education field, with the goal of improving the motivation and mobility of the children; at the same time, the system will identify the emotions the child expresses, based on Paul Ekman's basic emotions model.
1 Introduction In Mexico, there is a study that shows the prevalence of the autistic spectrum at a national level; the study determined that almost 1% of all children, around 400,000, have Autism Spectrum Disorder (ASD), which means 1 in every 115 children have this
R. Sepulveda (B) Master of Information Technology, Technological Institute of Tijuana, National Tech of México, Calzada Del Tecnológico S/N, Fraccionamiento Tomas Aquino, Tijuana, BC, Mexico e-mail: [email protected] A. Alanis · K. Alvarado Systems Computer Department, Technological Institute of Tijuana, National Tech of México, Calzada Del Tecnológico S/N, Fraccionamiento Tomas Aquino, Tijuana, BC, Mexico e-mail: [email protected] K. Alvarado e-mail: [email protected] M. A. Alarcón · D. Velazquez School of Psychology, CETYS University, Calz Cetys 813, Lago Sur, C.P 22210 Tijuana, BC, Mexico e-mail: [email protected] D. Velazquez e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_22
condition; this is a significant number that raises an urgent public health issue. The study was done by experts financed by the Autism Speaks organization [1]. Autistic Spectrum Disorder (ASD) is a lifelong condition, which means there is no cure for the disorder; this makes parents constantly look for solutions to help improve the quality of life of their children. It is important that this condition is detected at an early age so that the child has better odds of being independent and of adapting to a schooled system; however, therapies with an expert are required for this [2]. The main goal of this paper consists of integrating an emotional–social Multi-agent System (MAS) within a robot, whose purpose is to support the child in assisted therapies during physical education class in the school context. The development of this paper began upon observing insufficient motivation in children to exercise physically, as well as scarce participation of these children in their classes. This project aims to expand knowledge on increasing participation and encouraging imitation in the children of the physical education class, through identifying basic emotions based on Paul Ekman's model [3].
2 What Is Autism? At present, there is knowledge regarding the behavioral limitations that may be related to children with Autistic Spectrum Disorder (ASD); the spectrum encompasses the autistic disorder, Asperger syndrome, and the generalized non-specified developmental disorder [4]. The most common traits of a child with ASD are [5]:
• Difficulties in sensory processing.
• Speech or language difficulties.
• Social interaction problems.
This means that these children have the capacity to express themselves but have difficulties identifying what they feel and letting others know. It is known that autism can be identified in babies as early as 18 months old; however, it is still difficult to detect it before 2 years of age [2]. Early attention helps improve their development, reduce the severity of the symptoms, and better the quality of life of both the children and their families [4, 6, 7]. Normally it is the family who detects some sort of anomaly and then consults a doctor during the baby's first years. Worldwide, the incidence of autism is between three and six children for every one thousand, and the disorder is four times more likely to be found in males.
2.1 Primary Emotions As described beforehand, children with ASD have problems with social interaction, which causes them to have difficulties expressing themselves and identifying emotions. Emotions are defined as "An affective experience, pleasant or unpleasant to some extent, which supposes a phenomenological quality and compromises three response systems: subjective-cognitive, expressive-behavioral and physiological-adaptive" [8]. Paul Ekman continued Darwin's postulations about the universal nature of facial expressions and emotions, as well as their communicative function. Ekman elaborated his own theory, developing analytical tools that allow the objective description and measurement of facial expressions across cultures. Ekman proposes the existence of distinct facial expression traits for each emotion. Basic emotions are biologically defined and related to certain fundamental survival behaviors. The expression of each of the basic emotions requires the movement of certain facial muscles, which are the same globally across different cultures. Anger, disgust, fear, happiness, sadness, and surprise are the emotional states that have received the most support [9]. Secondary emotions are those that require cognitive elaboration within a social context for recognition, and they come from subtle variations and combinations of basic emotions [9], which is why they are considered mental states; it is through social cognition (SC), and specifically through the Theory of Mind (ToM), that their recognition is achieved [10]. From this, it was considered relevant for this paper to program the Multi-agent System (MAS) with social–emotional recognition inside a robot that could model the behaviors of children with ASD.
2.2 Theory of Mind (ToM) This concept refers to "the ability to comprehend and predict the behavior of other people, their knowledge, their intentions and beliefs." It is known as a "hetero-metacognitive" ability, because it requires one cognitive system to learn about the contents of another cognitive system, different from the one that performs such learning [11]. It is in this context that many studies cite, among the benefits of using social assistance robots, an improvement in the recognition of emotions and an increase in the imitation of prosocial behaviors [12].
2.3 Multi-agent Systems (MAS) A smart agent is any entity capable of perceiving an environment through sensors and acting in that same environment through effectors; agents also have the quality of being able to communicate with other agents. Systems in which several agents are integrated are called Multi-agent Systems (MAS); the agents contribute and communicate with each other so they can give a logical sequence to the resolution of a problem [13]. The most important qualities of smart agents are autonomy, rationality, proactivity, and benevolence. These characteristics are given by the programmer [14].
2.4 Artificial Vision The main function of artificial vision is being able to recognize and locate objects within an environment through image processing. Computational vision studies these processes to better understand them and build machines with similar capabilities. There exist three important aspects that must be kept in mind [15]:
• Vision is a computational process.
• The obtained description depends on the observer.
• It is necessary to discard information that is not useful [16].
2.5 Facial Recognition Facial recognition consists of identifying a person through an image. The technique dates back to its beginnings around 1960; at present its popularity has risen massively, and facial recognition systems are far more optimized thanks to complex, efficient, and effective algorithms [17].
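As a rough sketch of how such a face-detection step might look, the snippet below uses OpenCV's bundled Haar cascade on a single camera frame; the camera index and detection parameters are illustrative assumptions, and emotion recognition would be a later stage built on top of this.

```python
# Minimal face-detection sketch with OpenCV's bundled Haar cascade.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # default camera (assumed index)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:             # region of interest for later stages
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"{len(faces)} face(s) detected")
cap.release()
```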
3 Proposal Interpreting a person's emotions from their gestures, posture, intonation, or expression is difficult even for neurotypical humans; catching the small details and microexpressions in a conversation is a difficult task that requires cognitive processes. Imagining that a robot could do the same today may seem complicated, but it is not impossible; the proposed architecture aims to take steps in that direction.
To understand the proposal, it is important to know that children with ASD have difficulties expressing and communicating their emotions, and therefore find it difficult to know what it is to be happy or afraid. It is considered that, through the implementation of a robot, therapists and parents of children with ASD can be supported in modeling and identifying emotions; a robot with these characteristics could also detect early whether the user has traits of this disorder [18, 19]. An emotional robot would be of great help to parents and educators in stimulating the development of the child's emotions, allowing children to learn at their own pace, understand facial expressions, and gradually get used to having contact with other people. The following describes the proposal of the SMA-TEAVI, which will be implemented in the mechatronic system, thus obtaining the emotional robot, as shown in Fig. 1; this proposal shows an architecture designed to identify the child's face and process the basic emotions that they could express during the session. The SMA-TEAVI starts by capturing data through two input processes: the first is fed by real-time video capture, and the second draws on previously stored data—a repository of images—which allows the video processing to record the activity or activities performed by the user; the videos will contain the imitation movements made by the child, so that the repository is always kept up to date. In the next process, the information is passed through the computational vision module; this module performs the facial recognition of each child in order to identify the region of interest in the face, so that it can subsequently be processed using the mesh algorithm.
Fig. 1 Proposed model for the SMA-TEAVI
Each of the faces identified by the computational vision module will be sent to the MAS; the main functionality of our agent is to learn the emotions generated by each of the children during the session. In the next process, the emotional agent learns the emotion expressed in each face, assigning it a value that serves as an indicator and generates an emotional level in this module. Once the emotion and its emotional level are identified, the affective agent will learn the corresponding affective state(s), based on Ekman's model [12]; the resulting affective states will initially be used to identify two emotions, happiness and sadness, in the early stages of the agent, with more emotions added as the affective grading is improved. Finally, having passed through each of its modules, the SMA-TEAVI acquires the ability to identify the emotional degree of each face; in the next process, the planning agent has the task of coordinating each of the activities that the emotional robot will perform, as sketched below. This results in the ability to determine the level of satisfaction obtained by the child during physical activity. The process can be repeated as many times as necessary for the user to interact with the robot and have a good experience.
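The following Python skeleton sketches the module chain just described (vision, emotional agent, affective agent, planning agent). Every function name, emotion label, and threshold in it is an assumption for illustration; it is not the implemented SMA-TEAVI.

```python
# Illustrative skeleton of the SMA-TEAVI module chain (all names assumed).

def vision_agent(frame):
    """Stand-in for face detection; returns cropped face regions."""
    return [frame]                          # pretend the whole frame is one face

def emotional_agent(face):
    """Stand-in classifier: returns an emotion and an emotional level in [0, 1]."""
    return {"emotion": "happiness", "level": 0.8}

def affective_agent(reading):
    """Maps the emotion/level pair to one of the two initial affective states."""
    return "positive" if reading["emotion"] == "happiness" else "negative"

def planning_agent(states):
    """Coordinates the next activity from the session's affective states."""
    satisfaction = states.count("positive") / max(len(states), 1)
    return "continue routine" if satisfaction >= 0.5 else "switch activity"

frames = ["frame-1", "frame-2"]             # stand-ins for captured video frames
states = [affective_agent(emotional_agent(f))
          for frame in frames for f in vision_agent(frame)]
print(planning_agent(states))               # e.g., 'continue routine'
```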
3.1 Proposal Design To achieve a better understanding of the characteristics of users with ASD, a contextual study was done at the clinic PASITOS A.C. [20], where it was possible to attend different therapies and conduct interviews with therapists, obtaining behavioral observations of the children in their physical education class; through this process the goals and modules for the affective programming of the robot were established. All of this was done with the prior written consent of the parents. In addition, an intervention protocol for the therapy will be developed with the help of specialists in the subject; to better measure the child's performance, it is important to know the number of children in each group, which in the first phase will be a total of 10 children. It is important to know the environment in which the therapy will take place in order to have control over the movements a child makes, so that a routine may be recorded and later processed by the MAS. Every aspect mentioned above was analyzed to address the problem that exists at the clinic, with the understanding that the robot will give more effective responses and notice if the child loses patience because the therapy is not engaging.
4 Conclusions The analysis of the SMA-TEAVI proposal has been presented in this paper; it has a modular and scalable structure, giving developers the possibility of integrating new modules or reusing the agents that make up the different modules to add new functionality. In particular, the integration of artificial vision and basic emotions is an important area in which to conduct research, one which may provide an important contribution to science and technology and is becoming a promising field of knowledge. The need to integrate a MAS in autism therapy stems from the need to know the emotions children express during therapy sessions, which may prove difficult but not impossible; steps in that direction are being taken. The next step is the implementation of the architecture in the robot, where multidisciplinary work is required from the areas of psychology, neuropsychology, and medicine in order to generate improvements in the basic emotions detection model. It is hoped that in the following phases of the project the corresponding tests will be applied to observe the precision of the emotion recognition. The prevalence of ASD in many countries with few resources and tools to conduct these studies means that there is no exact figure; thus a MAS proposal such as this one will bring new solutions to more people.
References
1. Gobierno del Estado de México, Secretaría de Salud: Autismo 2017, no. 805 (2017)
2. Martos-Pérez, J., Llorente-Comí, M.: Tratamiento de los trastornos del espectro autista: Unión entre la comprensión y la práctica basada en la evidencia. Rev. Neurol. 57(Suppl 1), 185–191 (2013)
3. Ekman, P.: Emotions revealed. 328(Suppl S5) (2004)
4. Mulas, F., Ros, G., Millá, M., Etchepareborda, M., Abad, L., Téllez, M.: Modelos de intervención en niños con autismo. Rev. Neurol. 50(Suppl 3), 77–84 (2010)
5. Levy, S., Mandell, D., Schultz, R.: Autism. Lancet 374(9701), 1627–1638 (2009)
6. Gándara Rossi, C.C.: Intervención TEACCH en el autismo. 27(4), 173–186 (2007)
7. Salvadó-Salvadó, B., Palau-Baduell, M., Clofent-Torrentó, M., Montero-Camacho, M., Hernández-Latorre, M.A.: Comprehensive models of treatment in individuals with autism spectrum disorders. Rev. Neurol. 54(Suppl 1), S63–71 (2012)
8. Montañés, M.C.: Psicología de la emoción: el proceso emocional, 1–34 (2005)
9. Ekman, P.: Basic emotions. In: Handbook of Cognition and Emotion, pp. 45–60 (1999)
10. Tabernero, M.E., Politis, D.G.: Reconocimiento facial de emociones básicas y su relación con la teoría de la mente en la variante conductual de la demencia frontotemporal. Interdiscip. Rev. Psicol. y Ciencias Afines 33(1) (2017)
11. Tirapu-Ustárroz, J., Pérez-Sayes, G., Erekatxo-Bilbao, M., Pelegrín-Valero, C.: ¿Qué es la teoría de la mente? Rev. Neurol. 44(8), 479–489 (2007)
12. Ekman, P.: Facial Expression. An imprint of The Institute for the Study of Human Knowledge
13. Russell, S., Norvig, P.: Inteligencia Artificial: Un Enfoque Moderno 1(3) (2014)
14. Campis, L.E.M., Gámez, Z.J.O.: Influence of the intelligent agents in the. 11, 51–62 (2012)
15. Marr, D.: Vision (1982)
16. Sucar, L.E.: Visión Computacional. Helmholtz Zentrum München, February (2015)
17. García-Rios, E., Escamilla-Hernández, E., Nakano-Miyatake, M., Pérez-Meana, H.: Sistema de reconocimiento de rostros usando visión estéreo. Inf. Tecnol. 25(6), 117–130 (2014)
18. Obafemi-Ajayi, T., et al.: Facial structure analysis separates autism spectrum disorders into meaningful clinical subgroups. J. Autism Dev. Disord. 45(5), 1302–1317 (2015)
19. Aldridge, K., et al.: Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Mol. Autism 2(1), 1–12 (2011)
20. Pasitos. [Online]. https://pasitos.org/. Accessed 01 Feb 2020
Multiagent Monitoring System for Oxygen Saturation and Heart Rate Fabiola Hernandez-Leal, Arnulfo Alanis, and Efraín Patiño
Abstract In recent years, machine learning techniques have been the main techniques used for the early detection of anomalies in various vital signals. With the integration of machine learning and body sensors, and with the widespread use of smartwatches and cellphones, it has become possible to track a variety of physiological parameters and to visualize the obtained data easily. In this paper, a multiagent medical assistance system is proposed for the detection of cardio-respiratory abnormalities in older adults. In the data acquisition stage, heart rate and blood oxygen saturation are acquired with a pulse oximeter. Once the information is obtained, it is stored, filtered, and processed on the edge with an embedded computer. For the classification stage, a random forest algorithm is used, with a public database for training. The body signals and the classification results are displayed on a GUI.
1 Introduction The accelerated development in health technologies and the improvement of medical care have increased life expectancy in recent decades. As a result, we have witnessed a significant growth in the number of elderly people worldwide [1]. The use of F. Hernandez-Leal (B) Technological Institute of Tijuana, National Tech of Mexico, Master of Information Technology, Calzada Del Tecnológico S/N, Fraccionamiento Tomas Aquino, Tijuana, BC, Mexico e-mail: [email protected] A. Alanis Technological Institute of Tijuana, National Tech of Mexico, Systems and Computer Department, Calzada Del Tecnológico S/N, Fraccionamiento Tomas Aquino, Tijuana, BC, Mexico e-mail: [email protected] E. Patiño School of Medicine, Autonomous University of Baja California, Avenida Álvaro Obregón sin número, Colonia Nueva Mexicali, Tijuana, BC, Mexico e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_23
technology to offer a better quality of life for older adults is one of the many ways in which different monitoring systems have been implemented. The objective of assisted living environments is to give a better quality of life to people who wish to be independent but have some condition, due to age or illness, that affects their independence [2]. One of the services available in an assisted living environment is the preservation of health through constant monitoring of vital signs. Medical instrumentation has evolved in such a way that today we have sensors that measure the most common vital signs. This evolution has integrated information technologies and artificial intelligence to collect data, analyze it, and send the information to the medical provider. In recent years, several works have focused on the integration of smartphones or web services, in combination with artificial intelligence algorithms, to aid in medical analysis. Banaee et al. [3] performed a review of the latest methods and algorithms used to analyze data from portable sensors utilized for physiological monitoring of vital signs in health services, describing the most common data mining tasks that have been applied, such as anomaly detection, prediction, and decision-making, and indicating the suitability of particular data mining and machine learning methods used to process physiological data. Jiang et al. [4] examined the current state of artificial intelligence applications in medical care and discussed their future, mentioning that artificial intelligence can be applied in different data-driven areas of medical care. Other approaches have focused on the development of scalable and distributed architectures; Baljak et al. [5] integrated information that was previously discarded with real-time information for data processing.
2 Intelligent Agents Most of the problems in the medical field, especially those in patient monitoring systems, can be modeled using a multiagent system (MAS) [6]. An agent is anything capable of perceiving its environment with the help of sensors and acting in that environment using actuators [7]. The agent architectures in most systems for patient monitoring are adaptations of knowledge-based systems, whose architectures tend to be hybrid. The basic properties of intelligent agents (autonomy, proactivity, social character) and the characteristics of MAS (distributed information management, communication and coordination between autonomous entities) suggest that they are a good option to consider in the design of a patient monitoring system [6].
3 Internet of Health Things The concept of the Internet of Things (IoT) has evolved, since its original proposition in 1999, into an interconnected global network that involves sensing, wireless communication, and information processing technologies. The Internet of Health Things (IoHT) consists of interconnected objects with the capacity to exchange and process data to improve patient health. This patient-centric view involves four distinct layers: acquisition, storage, processing, and presentation. The IoHT lies in a field of research that emerges from the use of wearables, biosensors, and other medical devices to improve patient data management in hospitals, with the goal of reducing hospitalization times and improving healthcare delivery to patients [8]. IoHT can contribute to the expansion of access to quality healthcare through dynamic monitoring of the human being within his/her environment. In this way, IoHT can improve the effectiveness of treatments, prevent risk situations, and assist the promotion of good health. Furthermore, IoHT enhances the efficiency of resource management through flexibility and mobility using intelligent solutions [9].
4 Methodology The methodology has two agents that are distributed across three main tasks: data acquisition, classification, and information visualization. One of the agents is responsible for handling data input and information output; that is, it performs the acquisition and displays the information in a GUI. The second agent is responsible for the classification of the data received by the first agent, after which it sends the information back to the first agent. The system structure is illustrated in Fig. 1.
Fig. 1 System structure
For the first part, we used a pulse rate and oxygen saturation sensor on healthy individuals and patients. A medical pulse oximeter is used to obtain the blood oxygen saturation (SpO2) in real time, as well as the heart rate. All the information is stored
on an embedded computer, on which the filtering processes and the classification are carried out through the data collection agent. Blood SpO2 is a measure of the amount of oxygen bound to the hemoglobin in the cells within the circulatory system. It provides an estimate of the arterial oxygen saturation of the blood and is presented as the percentage of oxygenated hemoglobin relative to the total amount of hemoglobin in the blood (oxygenated and non-oxygenated). Normal SpO2 values vary between 95 and 100% [10]. Similarly, heart rate refers to the number of times a person's heart beats per minute; it varies from person to person, but the normal range for adults is 60–100 beats per minute (BPM) [10]. A direct encoding of these ranges is sketched after this paragraph. The relationship between oxygen saturation and heart rate as parameters of interest has made it possible to implement continuous monitoring systems in a simple way, given the ease of obtaining the needed data without jeopardizing the human body. Correlations among vital signs and general characteristics such as age, weight, and height have been studied to determine their importance for the body. The classification agent used databases that included SpO2 and measured the importance of this parameter for the patient's well-being. Tests for the determination of oxygen saturation as a parameter of interest, in addition to heart rate, were performed by analyzing open-source medical databases, which showed that these values assist decision-making about whether a patient who underwent surgery needed to be passed on to the intensive care unit or could be discharged within a shorter time [11]. In addition, databases were analyzed that aimed to find the impact of smoking on oxygen saturation and whether it was possible to regain an acceptable level of oxygen saturation once a person quit smoking [12]. It has generally been recognized that a change in heart rhythm can be considered a health problem; oxygen saturation, however, had only been studied in areas related to sleep disorders. The databases used consisted of general metrics such as age, habits, body temperature, and blood pressure. The first results are shown in Fig. 2, which shows that oxygen saturation can be a determining factor in decision-making regarding a patient's well-being, among other parameters such as the variety of body temperature (LCORE, LSURF, SURFSTBL, CORESTBL), the different stages of blood pressure (LBP, BPSTBL), and the quality of comfort for a patient (COMFORT). The data obtained directly from the sensor is saved in a .CSV file. Later, the selection of oxygen saturation and heart rate data within a given captured time frame is made, which provides several readings of both parameters from one user; this process is performed by the data collector agent. Age, weight, and height also need to be added for the inferences. Because the database resides on the embedded computer, and the classification agent processes the data on this device, no cloud services are needed. As the embedded computer we selected a Raspberry Pi 3B+, for its convenience and computational power.
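The quoted normal ranges translate directly into a simple check; the sketch below encodes them, with the flag names being illustrative.

```python
# Direct encoding of the normal ranges quoted above:
# SpO2 95-100% and adult heart rate 60-100 BPM.

def check_vitals(spo2_percent: float, heart_rate_bpm: float) -> dict:
    return {
        "spo2_normal": 95.0 <= spo2_percent <= 100.0,
        "heart_rate_normal": 60.0 <= heart_rate_bpm <= 100.0,
    }

print(check_vitals(98.0, 74.0))   # {'spo2_normal': True, 'heart_rate_normal': True}
print(check_vitals(91.0, 118.0))  # both flags False -> candidate anomaly
```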
Fig. 2 LO2: oxygen saturation importance
Once the parameters of interest are selected from the data, the analysis is carried out by k-nearest neighbors and random forest to get a prognosis of events that may jeopardize the integrity of the person; a training sketch is given below. For the display and front-end we used Flask, and for the classification we used the TensorFlow API. The complete multiagent monitoring system is illustrated in Fig. 3; as this figure indicates, the data acquisition and inference are done continuously while the user is wearing the oximeter sensor, allowing the system to learn over time the characteristics of that user in different scenarios. Having data on each individual allows the system to provide whatever assistance that person might need for their specific way of life.
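A minimal training sketch for the classification agent follows. It uses scikit-learn's RandomForestClassifier for brevity rather than the TensorFlow API mentioned above, and the synthetic records stand in for the public training database; the feature choices and the labeling rule are illustrative assumptions.

```python
# Sketch of the classification agent's training step on synthetic records.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Features: age, weight (kg), height (cm), SpO2 (%), heart rate (BPM)
X = np.column_stack([
    rng.integers(60, 90, n), rng.normal(75, 12, n), rng.normal(165, 9, n),
    rng.normal(97, 2, n), rng.normal(78, 14, n)])
y = ((X[:, 3] < 95) | (X[:, 4] < 60) | (X[:, 4] > 100)).astype(int)  # 1 = anomaly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_.round(3))
```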
5 Results and Discussions The system is in the experimentation phase, and we are currently searching for people who want to participate. Details such as the ease of use of the sensor, the comfort of the sensor, and the display of the information presented, among
Fig. 3 Multiagent monitoring system architecture
others, are needed and must be obtained from the patients. For the data collection, we are proposing at least 30 adults, who can use the system for 30 continuous days. With this, enough information on the required body signals in different stages would be generated. It is estimated that the sensor readings perform with 95% certainty and that the system, in general, manages to detect anomalies in real time. As preliminary results, we have a real-time reading system using a pulse oximeter sensor that allows the extraction of information from the person who is using the device. The information is collected in a .CSV file for easy manipulation, allowing the embedded computer to perform the analysis in real time. We also developed an initial algorithm that is being used to determine the ranges of oxygen saturation and heart rate of a person, with which the initial classification of the vital signs, shown in Fig. 4, is achieved. The initial experimentation with the algorithm was carried out with healthy people, analyzing 7114 readings of the parameters of interest; the average oxygen saturation was 98% and the average heart rate was 74 BPM, both within the ranges considered normal. Such information is required as a starting point for the detection of acceptable ranges for these vital signs. Further work will include gathering information from unhealthy people to process through the algorithm, and the personalization of the system by learning about the person wearing the device, which permits full knowledge of the user's vital signs while keeping the data distinguishable from user to user.
Fig. 4 Classification of vital signs from healthy people
References
1. He, D., Zeadally, S.: Authentication protocol for an ambient assisted living system. IEEE Commun. Mag. 53(1), 71–77 (2015)
2. Koleva, P., Tonchev, K., Balabanov, G., Manolova, A., Poulkov, V.: Challenges in designing and implementation of an effective ambient assisted living system. In: 2015 12th International Conference on Telecommunication in Modern Satellite, Cable and Broadcasting Services (TELSIKS), pp. 305–308 (2015)
3. Banaee, H., Ahmed, M., Loutfi, A.: Data mining for wearable sensors in health monitoring systems: a review of recent trends and challenges. Sensors 13(12), 17472–17500 (2013)
4. Jiang, F., et al.: Artificial intelligence in healthcare: past, present and future. Stroke Vasc. Neurol. 2(4), 230–243 (2017)
5. Baljak, V., Ljubovic, A., Michel, J., Montgomery, M., Salaway, R.: A scalable realtime analytics pipeline and storage architecture for physiological monitoring big data. Smart Health (2018)
6. Julio, C., et al.: Sistema multiagentes para el monitoreo inteligente. IFMBE Proc. 18, 501–505 (2008)
7. Russell, S., Norvig, P.: Inteligencia Artificial 2(6) (2007)
8. da Costa, C.A., Pasluosta, C.F., Eskofier, B., da Silva, D.B., da Rosa Righi, R.: Internet of health things: toward intelligent vital signs monitoring in hospital wards. Artif. Intell. Med. 89, 61–69 (2018)
9. Santos, M.A.G., Munoz, R., Olivares, R., Filho, P.P.R., Del Ser, J., de Albuquerque, V.H.C.: Online heart monitoring systems on the internet of health things environments: a survey, a reference model and an outlook. Inf. Fusion 53, 222–239 (2020)
10. Bonow, R.O., Mann, D.L., Zipes, D.P.: Braunwald's Heart Disease: A Textbook of Cardiovascular Medicine, 9th edn. (2012)
11. Dua, D., Graff, C.: UCI Machine Learning Repository (2017)
12. Bhogal, A.S., Mani, A.R.: Pattern analysis of oxygen saturation variability in healthy individuals: entropy of pulse oximetry signals carries information about mean oxygen saturation. Front. Physiol. 8, 1–9 (2017)
Multi-agent System for Obtaining Parameters in Concussions—MAS-OPC: An Integral Approach Gustavo Ramírez Gonzalez, Arnulfo Alanis, Marina Alvelais Alarcón, Daniel Velazquez, and Bogart Y. Márquez
Abstract This article shows the analysis of the architecture of a Multi-Agent System (MAS) for collecting impact data. A concussion is the brain injury that occurs when an impact is received, generating physical damage with important neuropsychological repercussions; the MAS will be implemented in a football helmet to assess the damage caused by an impact to the head, which will allow us to determine whether the athlete is in condition to return to his sports commitment and, moreover, is intended to prevent further damage.
G. R. Gonzalez (B) Technological Institute of Tijuana, National Tech of México, Master of Information Technology, Calzada Del Tecnológico S/N, Fraccionamiento Tomas Aquino, Tijuana, BC, Mexico e-mail: [email protected] A. Alanis · B. Y. Márquez Technological Institute of Tijuana, National Tech of México, Systems and Computer Department, Calzada Del Tecnológico S/N, Fraccionamiento Tomas Aquino, Tijuana, BC, Mexico e-mail: [email protected] B. Y. Márquez e-mail: [email protected] M. A. Alarcón · D. Velazquez School of Psychology, CETYS University, Calz Cetys 813, Lago Sur, C.P 22210 Tijuana, BC, Mexico e-mail: [email protected] D. Velazquez e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_24
1 Introduction Neural death is common in high-impact sports such as football and soccer. One of the factors that can generate neuronal death is impacts to the head. An undiagnosed head injury can cause the brain to start releasing a substance known as glutamate, which acts on neuronal receptors and, if not attended to, can cause neuronal death. The odds of suffering an impact of this magnitude depend on the sport that is practiced; the most affected sports areas are contact sports, martial arts, and extreme sports. This situation mostly occurs in males between the ages of 18 and 30, among professional-level players as well as amateurs; sports federations are working on protective measures, as this has become a health problem for the participants of the different disciplines. The objective of this article is to highlight the importance of evaluating and monitoring the contact sports player; the MAS aims to verify whether the athlete is in condition to continue with his training or competition [1]. As part of this study, neuropsychological evaluations will be performed to detect whether there are neurocognitive alterations in athletes who have symptoms related to an impact, as an added measure to determine cognitive recovery [2].
2 Concepts 2.1 What Is a Brain Concussion? A concussion is a specific type of brain injury. It implies a brief loss of normal brain function. It occurs when an impact to the head or body causes the head to move violently back and forth. This sudden movement can cause the brain to hit the skull. Concussions are one of the most common injuries in contact sports. Other causes include blows to the head, hitting the head after a fall, being shaken violently, or traffic accidents [3].
2.2 What Are the Symptoms? Symptoms may not occur immediately. They can start days or weeks after the injury. Symptoms may include headache or neck pain. Nausea, ringing in the ears, dizziness, or tiredness may also occur. Some of the most serious symptoms are
• Seizures
• Sleepiness or difficulty walking or sleeping
• Headache that gets worse and doesn't get better
• Weakness, numbness, or decreased coordination
• Frequent vomiting or nausea
• Confusion
• Difficulty speaking
• Loss of consciousness.
2.3 What to Do in Case of Concussion? If a concussion is detected or suspected, the SCAT5 [4] questionnaire, which provides a basic protocol for the concussion examination, is required. This questionnaire includes a cognitive evaluation, a balance and coordination assessment, and a cervical spine examination. A neurological evaluation that includes ocular motor and neurovestibular tests should also be performed [3].
2.4 Brain Injury When defining brain injury, it is important to distinguish it from a neurodegenerative condition or a neurodevelopmental one. A sudden external event that leads to compromised brain function would manifest itself in some form of alteration in consciousness which could range from feeling dazed to being confused or even loss of consciousness and responsiveness [4]. It remains an important task to correctly identify and classify such injuries via the amount of time it takes to return to consciousness or a responsive state. Such classifications would range from mild to severe.
2.5 Physical Science Physical science has long been divided into the branches of mechanics, acoustics, thermodynamics, electricity, magnetism, and optics, and, today at the forefront of research, the nature and structure of matter, atomic and nuclear physics. The more knowledge improves, the more arbitrary the boundaries between these disciplines become. The mathematical formulation of a physical phenomenon is called a physical law; such a formulation can be obtained by observing the phenomenon or by imitating it under engendered and controlled conditions (a physical experiment).
2.6 Magnitudes An observable is called a physical magnitude when it can be measured. For example, length, time, speed, acceleration, force, mass, color, etc., are examples of physical magnitudes. Beauty, taste, smell, love, satisfaction, etc., are observables that do not constitute physical magnitudes, since they cannot be measured. Magnitudes can be classified into scalar, vector, and tensorial; in turn, the former can be classified into extensive and intensive. Scalar magnitudes are those determined by a real number, accompanied by a chosen unit of this magnitude; among the extensive scalar magnitudes are mass, energy, time, electric charge, volume, amount of substance, electrical resistance, etc., and among the intensive scalar magnitudes are temperature, density, specific volume, specific load, etc.
2.7 Mathematical Representation Mathematically, a scalar is represented by a single number, and a vector by a series of coordinates, as many as the dimensions of the space in which it is rendered. Thus a vector v is represented as v = (vx, vy, vz) = vx·i + vy·j + vz·k, where vx, vy, and vz are the vector components, i.e., its projections on the x, y, and z axes. In turn, i, j, and k are the unit vectors in the directions of the x, y, and z axes, respectively.
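As a small illustration of this notation, the following Python sketch (ours, not from the paper; the component values are hypothetical) builds a vector from its components and computes its magnitude:

import math

# A vector v represented by its projections on the x, y, and z axes.
v = (3.0, 4.0, 12.0)  # (vx, vy, vz); hypothetical component values

# Magnitude |v| = sqrt(vx^2 + vy^2 + vz^2)
magnitude = math.sqrt(sum(c * c for c in v))

# Writing v in terms of the unit vectors i, j, k just labels the components.
print(f"v = {v[0]}i + {v[1]}j + {v[2]}k, |v| = {magnitude}")  # |v| = 13.0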
2.8 Deep Learning Deep learning is the field that aims to emulate the learning approach that humans use to gain certain knowledge. In other words, it is a class of machine learning techniques that exploit many layers of nonlinear processing for supervised and unsupervised feature extraction and transformation, and for pattern analysis and classification [5]. Currently, some of the applications of Deep Learning are
• Identification of company brands and logos in photos posted on social networks
• Ad targeting and prediction of customer preferences
• Identification of leads
• Fraud detection
• Medical imaging analysis (x-rays, MRIs, etc.)
• Enhanced diagnostic prediction
• Face location and identification of facial emotions
• Voice recognition.
2.9 Multi-agent System (MAS) Multi-Agent Systems (MASs) have achieved a high level of acceptance among academics in different disciplines, where they represent a means of solving complex problems by subdividing them into smaller tasks [6]. An agent can be understood as a composition of rules and levels of knowledge that is placed in an environment and has the ability to communicate with other agents to fulfill the function of detecting different parameters that will later be used to make a decision based on the objective of the entity [6]. The efficiency of Multi-Agent Systems comes from splitting the work of a complex task into multiple smaller tasks, each of which is assigned to a different agent. Of course, the associated overheads, such as processing and energy consumption, are amortized across multiple agents, often resulting in a low-cost solution compared to an approach where the whole complex problem must be solved by a single powerful entity. In the event of an agent failure, the task can easily be reassigned to other agents, which adds a high level of reliability to such systems [7]; this failure-handling idea is sketched in the example below.
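A minimal Python sketch of that failure-handling idea, with entirely hypothetical agent and task names, could look as follows:

class Agent:
    """A toy agent that may fail while performing a subtask."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def perform(self, subtask):
        if not self.alive:
            raise RuntimeError(f"{self.name} is unavailable")
        return f"{subtask} done by {self.name}"

def run(subtasks, agents):
    """Assign each subtask to an agent; reassign on failure."""
    results = []
    for subtask in subtasks:
        for agent in agents:
            try:
                results.append(agent.perform(subtask))
                break
            except RuntimeError:
                continue  # try the next available agent
    return results

agents = [Agent("a1"), Agent("a2"), Agent("a3")]
agents[0].alive = False  # simulate an agent failure
print(run(["sense", "classify", "report"], agents))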
2.10 Chronic Traumatic Encephalopathy (CTE) CTE is a neurodegenerative disease that has pathological traits similar to Alzheimer's and Parkinson's; it is attributed to repetitive impacts to the head. Constant blows to the head produce certain proteins, such as amyloid and tau, that aggregate within the brain mass, creating neurofibrillary tangles at the microscopic level. CTE consists of an aggregation of tau protein in close proximity to astrocytes, neurons, and blood vessels, forming an irregular pattern of protein accumulation. To identify a patient with CTE, this neurodegenerative condition requires an autopsy to observe its pathology. This represents a major complication in the study of people who might have symptoms during the course of their lives. The most significant feature of this pathology is the accumulation of the tau and amyloid proteins [4].
2.11 CTE in Football In the studies that have been performed, this disease manifests itself in athletes or persons who are exposed to repetitive blows to the head that can cause a brain injury. Most cases in which this disease occurs involve contact sports, that is, sports with a high rate of impact with another person or object. Sports that are listed as contact sports include football, soccer, rugby, boxing, and martial arts.
A famous case within the NFL is New England Patriots player Aaron Hernandez, whose body was found in his prison cell and who, according to analysis by Dr. Ann McKee, suffered from severe CTE for a person of his age. In an interview, Dr. McKee mentioned that of the 468 brains she had analyzed, she had never seen a brain as damaged as Aaron Hernandez's [4, 8].
3 Justification Currently, more than a million concussions occur in the United States each year, and many of these cases involve athletes dedicated to playing football. One of the risks of playing football is the strong blows received to the head, causing long-term contusions of the brain. The main idea is to develop a MAS architecture that helps obtain data on the impact caused by a collision between football helmets, to help coaches determine whether the player who received the impact can continue the game, and then to conduct clinical studies to clarify the player's health. All this with the long-term aim that the MAS can be implemented as a strategy to prevent further damage being done to a person who suffers a hit during a game.
4 Proposal Currently, it is estimated that in the United States of America there are more than one million concussions every year [2]; this situation presents itself in both professional and amateur football, at all levels.
Fig. 1 Proposed model of the MAS
The architecture of the MAS-OPC is described below. As shown in Fig. 1, the intelligent system will detect variations of movement whose magnitude is considered a strong blow; it is intended to be implemented in a wide variety of helmets, such as football helmets, and its suitability for soccer will also be reviewed. The intelligent system is designed to obtain data on the force generated in collisions between helmets and to assess the emotional impact on the player. The MAS-OPC starts by capturing data through sensing; the sensor will be placed on the helmet, and all data collection will be performed during physical activity. The impact sensor will be placed in a position such that it detects the received impact and will be calibrated correctly for proper data collection. This data will be stored in a signal database recorded chronologically, with the purpose of keeping the record continuously fed. In the next block, the signal identifier agent will process each record in the database to identify the magnitude of the impact. Once the above has been done, the signals will be classified; to do this, an algorithm based on deep learning will be used to classify the data. By obtaining the classification of the impact data, the predictor will learn the order of impact force, which will vary with the readings of the sensor that is in constant communication with the emotional agent. The last component of the architecture is the emotional agent, which is responsible for providing data on the user's mood. This data is compared against the impact data previously stored in the database in order to achieve a before-and-after comparison of sports practice; in this part, it is important to be clear about the cognitive status of the player, and it is therefore important to conduct a neuropsychological assessment in this component [9]. A simplified sketch of this data flow is given below.
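The following Python sketch outlines this data flow under our own assumptions: the deep learning classifier is replaced by a simple threshold placeholder, and all names and values are hypothetical rather than taken from the MAS-OPC implementation.

import time

signal_db = []  # chronological signal database

def sense_impact(raw_g_force):
    """Capture block: store one helmet sensor reading with a timestamp."""
    signal_db.append({"t": time.time(), "g": raw_g_force})

def identify_magnitude(record):
    """Signal identifier agent: derive the impact magnitude from a record."""
    return record["g"]

def classify_impact(magnitude, threshold=60.0):
    """Placeholder for the deep learning classifier: a single hypothetical
    threshold flags strong blows."""
    return "strong" if magnitude >= threshold else "mild"

for g in (12.0, 75.0, 30.0):  # hypothetical readings during physical activity
    sense_impact(g)

for rec in signal_db:
    print(rec["g"], "->", classify_impact(identify_magnitude(rec)))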
4.1 Proposal Design To achieve a better understanding of the injuries that occur in football, matches and practices were visited to determine the playing positions with the highest probability of a brain injury. Likewise, interviews were conducted with coaches, and from the information obtained a digital survey was proposed which can be applied on the field of play. Through this process, the objectives and methods for agent programming were established. One important thing to have is a knowledge base of the cognitive status of the players before they start their sports commitment, because in case of a strong blow or a concussion a comparison can then be performed [10]. It is important to take preventive measures into account; helmets demonstrate a reduction in impact force, although the incidence of concussions has not been reduced by the use of protection; the helmet does reduce head and facial trauma in snow sports [11]. Taking these aspects into account, a helmet prototype can be built to reduce the risk of concussion, to which, later, the MAS will be added.
5 Conclusions In contact sports, it is highly likely that over a career an athlete will suffer blows to the head, which means higher odds of having a concussion, so it is important to be attentive to the symptoms so that a doctor can make a timely diagnosis and avoid something that could impair quality of life. The performance of the intelligent system will depend on the electronic devices, such as sensors, that are implemented; these will be pre-calibrated based on the necessary parameters, so that more accurate data on the factors involved in the collision, such as mass, speed, and point of contact, will be obtained. To evaluate the effectiveness of the system, behavioral data will be acquired by an expert in neuropsychology, so that a more complete picture of the assessment can be obtained utilizing protocols to assess an athlete before and after a game. Enhancing the implementation of this architecture requires a multidisciplinary team integrating various related specialties in the health sector: doctors, neurologists, neuropsychologists, among others. This work shows the importance of timely detection of brain disorders in contact sports athletes, which will avoid greater brain damage; tools to keep impacts under control are of great importance, as they will reduce cognitive deficits in these athletes and ultimately contribute to a higher quality of life during and after their careers.
References 1. Teel, E.F., Marshall, S.W., Shankar, V., Mccrea, M., Guskiewicz, K.M.: Predicting recovery patterns after sport-related concussion. 52(3), 288–298 (2017) 2. Villalva-Sánchez, A.F.: La neuropsicología en la contusión y conmoción cerebral en el deporte. Res. Gate 1(April), 3–6 (2016) 3. Navarrete, H.: Neurología del deporte. Algunos aspectos del traumatismo craneoencefálico. Rev. Mex. Neurocienc. 2(5), 299–302 (2001) 4. Roebuck-Spencer, T., Cernich, A.: Epidemiology and societal impact of traumatic brain injury. In: Sherer, M., Sander, A. (eds.) Handbook on the Neuropsychology of Traumatic Brain Injury. Clinical Handbooks in Neuropsychology. Springer, New York, NY (2014) 5. Castillo, E.: Conocimiento acerca de la Encefalopatía Traumática Crónica en atletas de alto contacto y profesionales puertorriqueños. Universidad de Puerto Rico en Cayey, Programa de Estudios de Honor (2018) 6. Deng, L., Yu, D.: Deep learning: methods and applications. Found. Trends Signal Process. 7(3–4), 197–387 (2014) 7. Sen, S.: Multiagent systems: milestones and new horizons. Trends Cogn. Sci. 1(9), 334–340 (1997) 8. McKee, A.C., Stein, T.D., Kiernan, P.T., Alvarez, V.E.: The neuropathology of chronic traumatic encephalopathy. 25(3), 350–364 (2016) 9. Norheim, N., Kissinger-Knox, A., Cheatham, M., Webbe, F.: Performance of college athletes on the 10-item word list of SCAT5, pp. 1–5 (2018)
10. Belanger, H.G., Vanderploeg, R.D.: The neuropsychological impact of sports-related concussion: a meta-analysis. J. Int. Neuropsychol. Soc. 11(4), 345–357 (2005) 11. Liotta, C.A.: Concussion in sports: an update. Trauma Fund MAPFRE 22(2), 108–112 (2011)
Data Analysis of Sensors in Smart Homes for Applications Healthcare in Elderly People Uriel Huerta, Rosario Baltazar, Anabel Pineda, Martha Rocha, and Miguel Casillas
Abstract This work in progress focuses on a particular sector of the population, the elderly, whose cognitive and physical capacities decline as the years progress; it is essential to provide them with better living conditions through user-friendly technology. The development is a web system with agents that includes sensors in the room to regulate temperature and light, and a bracelet-shaped sensor to detect the user's heart rate. The use of learning algorithms with a proactive stage, adapted with classification algorithms, is proposed in order to select the classifier with the best performance.
1 Introduction The interaction of people with technology is more and more common; it is easy to see how computers, mobile devices, etc., are increasingly used, so we could say that they are used naturally. However, with almost all of these devices, the user has to use them continuously to become acquainted with them. Unlike current computing systems, where the user has to learn how to use the technology, an IE (intelligent environment) U. Huerta (B) · R. Baltazar · M. Rocha · M. Casillas Instituto Tecnológico de León, León, Guanajuato, Mexico e-mail: [email protected] R. Baltazar e-mail: [email protected] M. Rocha e-mail: [email protected] M. Casillas e-mail: [email protected] A. Pineda Instituto Tecnológico de Matamoros, Matamoros, Tamaulipas, Mexico e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_25
adapts its behavior to the users, even anticipating their needs, preferences, or habits. For this reason, the environment should learn how to react to the actions and needs of the users, and this should be achieved in an unobtrusive and transparent way. In order to provide personalized and adapted services, it is necessary to know the preferences and habits of users [1]. Smart devices or objects capable of communication and computation, ranging from simple sensor nodes to home appliances and sophisticated smart phones, are present everywhere around us. The heterogeneous network composed of such objects comes under the umbrella of a concept with fast-growing popularity referred to as the Internet of Things (IoT), which represents a worldwide network of uniquely addressable interconnected objects. IoT is an "interconnection of sensing and actuating devices providing the ability to share information across platforms through a unified framework, developing a common operating picture for enabling innovative applications. This is achieved by seamless ubiquitous sensing, data analytics, and information representation with Cloud computing as the unifying framework." Therefore, the Internet of Things aims to improve one's comfort and efficiency by enabling cooperation among smart objects [2]. All data from different sources is accumulated in the cloud (households' data, sensor measurements from the transmission/distribution lines or from the production sites, etc.). The cloud should provide massive data storage and processing infrastructure; it is the most advanced level of the framework [3]. As stated in Gubbi [2], the cloud "promises high reliability, scalability, and autonomy" for the next generation of IoT applications. The cloud is the central part of this system; hence our framework can be considered "cloud-centric" or "cloud-based". The standard IoT usually consists of many Wireless Sensor Networks (WSN) and Radio-frequency identification (RFID) devices. The Wireless Sensor Network is a paradigm that has been tremendously explored by the research community in the last two decades [5]. A WSN consists of smart sensing devices that can communicate through direct radio communication. RFID devices are not as sophisticated; they mainly consist of two parts: an integrated circuit with some computational capabilities and an antenna for communication. The Model View Controller (MVC) pattern has become a standard in modern software development, with the model, view, and controller layers making development easier and faster. Flask is a framework that uses the Python language with easy-to-understand code. However, the Flask framework does not enforce the MVC method by default, so files and code are not regularly organized [9]. The Foundation for Intelligent Physical Agents (FIPA) is an international organization that is dedicated to promoting the industry of intelligent agents by openly developing specifications supporting interoperability among agents and agent-based applications [11].
The present development is a web system that includes an architecture of agents with the ability to monitor the user's data. At the same time, the user operates the system remotely to carry out his daily activities. When the system learns the user's patterns, it reacts in the environment to give personalized assistance to the user. We decided to use classification algorithms and to select the one with the best performance. One of our main motivations for this work was to provide independence, privacy, and dignity to elderly persons. Another important aspect that motivated this work was the difficulty of finding the right person to provide comfort. It is important to mention that, although part of the database architecture is already developed, there are aspects of the system that are still being worked on, which presents this system as a work in progress.
2 Background The use of information and communication technologies, as well as the internet, plays a very important role in monitoring patients without caregivers having to be at home all the time. A patient-centered medical home is a promising model to improve access to high-quality care, which can generate lower costs of care and attention for family members. There are various approaches to health care, among them Ambient Assisted Living (AAL), which refers to concepts, products, and services that aim to improve various aspects of the quality of people's lives [7]. It is essential to provide better living conditions for vulnerable sectors of society using technology, and it is important to consider that the technology must be user-friendly and even adapt to users' needs and desires. In this research, we present the physical implementation of a system to assist users and patients in daily activities or duties. We select the classification algorithm with the best performance using cross validation. The pattern recognition algorithms were a back propagation neural network, Naïve Bayes, minimum distance, and KNN (k-nearest neighbor). Our motivation for this work was to help people with motor difficulties or people who use wheelchairs; for this reason, it was essential to use a wireless controller. The system was implemented in a testbed at the Leon Institute of Technology in Guanajuato, Mexico, and includes humidity and temperature sensors, window actuators, wireless agents, and other devices. Experimental tests were performed with data collected over a period of time and using use cases. The results were satisfactory because not only was remote assistance by the user possible, but it was also possible to obtain user information to learn comfort preferences using the proposed feature vectors and selecting the classification algorithm with the best performance [4].
3 Methodology There is no universally accepted definition of the term agent, and indeed there is a good deal of ongoing debate and controversy on this very subject. Thus, for some applications the ability of agents to learn from their experiences is of paramount importance; for other applications, learning is not only unimportant, it is undesirable. Nevertheless, some sort of definition is important; otherwise, there is a danger that the term will lose all meaning, as happened with "user friendly". An agent is a computer system that is situated in some environment and that is capable of flexible autonomous action in order to meet its design objectives, where flexibility means three things [10], see Fig. 1.
– Reactivity: intelligent agents are able to perceive their environment and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives.
– Pro-activeness: intelligent agents are able to exhibit goal-directed behavior by taking the initiative in order to satisfy their design objectives.
– Social ability: intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives [10].
This section describes the development of a web system with agents that goes beyond simple assistance: based on the user's interaction with the web system, the system learns the behaviors that belong to the user. All household devices are equipped with interfaces for wireless communication. In the care of vulnerable sectors of society, such as elderly people, who face declining cognitive and physical capacities as the years progress, it is essential to use friendly technologies to provide better living conditions. Deep analysis of sensor data is used to intervene with the user and adapt the computer algorithms to their needs and desires. The Model View Controller (MVC) pattern was used to achieve a robust architecture in code development and database construction, see Fig. 2. The first point that was considered was the monitoring of the patient's physical data,
Fig. 1 An agent in its environment
Fig. 2 Model view controller
since in this way their health can be taken care of, and a history of data can be saved to perform analyses that can be used to prevent future illness. The web system will have sensing, actuator, and communication agents whose data is stored in a database engine with a connection via the web, mounted on a web server; Fig. 3 shows the scheme. The ESP32 development board is an essential part of this project. Numerous models have been designed by Espressif and its counterparts. The board used in this project is the ESP32 Development Kit (ESP32 DevKitC). The board has integrated 2.4 GHz Wi-Fi, Bluetooth, Bluetooth Low Energy (BLE), and 4 Megabytes (MB) of flash memory [6]. Because of these features, among many others, the board is a popular choice of electronics hobbyists and professionals. Figure 4 shows the Espressif ESP32 development board; a rough reading-and-posting sketch follows Fig. 4.
Fig. 3 Web system with agents interaction scheme
Fig. 4 ESP32 development board
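As a rough illustration of how such a board could feed the web system, the following MicroPython-style sketch assumes the board runs MicroPython with the standard machine and urequests modules; the pin, endpoint path, and calibration are hypothetical, while the address matches the local server configuration described in Sect. 4.

from machine import ADC, Pin
import urequests, time

adc = ADC(Pin(34))  # analog temperature sensor on GPIO34 (hypothetical wiring)

while True:
    raw = adc.read()                 # 0..4095 on the ESP32's 12-bit ADC
    temp_c = raw * 40.0 / 4095.0     # hypothetical calibration to degrees C
    # POST the reading to the web system; /readings is a hypothetical endpoint.
    urequests.post("http://10.0.29.101:8080/readings",
                   json={"sensor": "room_temp", "value": temp_c}).close()
    time.sleep(60)                   # one reading per minute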
The FIPA Request Interaction Protocol (IP) allows one agent to request another to perform some action. The Participant processes the request and makes a decision whether to accept or refuse it. If a refuse decision is made, then "refused" becomes true and the Participant communicates the refusal to the other agent [11]. We present an architecture of agents that have the ability to monitor the user's data. At the same time, the user uses the system in a remote way to carry out his daily activities. When the system learns the user's patterns, it reacts in the environment to give personalized assistance to the user. We decided to use a neural network and other classification algorithms because they have been used very sparingly in the learning of thermal comfort preferences. The sketch below illustrates the request/refuse flow.
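A minimal sketch of this request/refuse flow is shown below; the classes and the busy-flag decision rule are ours, and the performative labels are simplified from the FIPA specification.

class Participant:
    """Receives a request and decides whether to accept or refuse it."""
    def __init__(self, busy=False):
        self.busy = busy

    def handle_request(self, action):
        if self.busy:
            return ("refuse", action)  # communicate the refusal
        return ("inform-result", f"{action} performed")  # agree and perform

class Initiator:
    """Asks a participant to perform some action."""
    def request(self, participant, action):
        return participant.handle_request(action)

print(Initiator().request(Participant(busy=False), "read heart rate"))
print(Initiator().request(Participant(busy=True), "read heart rate"))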
4 Development and Results The technical approach is the incorporation of intelligent capabilities into all these traditionally passive (or "dumb") objects, through specific hardware devices (or simply wireless sensors), so that they can collect data and send it to processing centers through an interconnected network structure, which allows objects of all kinds to communicate with each other and to transmit, compile, and analyze data. Due to the ubiquitous nature of communicating objects in the IoT, it is expected that an unprecedented number of devices will carry this technology, with an estimated twenty billion by the year 2026 [8]. The development is a web system with agents that includes sensors in the room to regulate temperature and light, and a bracelet-shaped sensor to detect the user's heart rate; the use of learning algorithms with a proactive stage, adapted with classification algorithms, is proposed to select the classifier with the best performance. The creation and implementation of the web system with agents started by building the database in MySQL; this step is described in Fig. 5.
Fig. 5 DataBase sensor-lia
This data structure is designed to execute asynchronous queries, which will help achieve better performance in the web development. A development server was installed on an internal network at the institution, using the Flask framework for front-end development, with Python as our main programming language for the back end; for the database architecture, MySQL Workbench was used. The flow diagram in Fig. 6 highlights the main processes of the web system. The flow starts with data collection through ambient and body temperature sensors and the light and heart rate sensors; the data is then stored in the database and analyzed using artificial intelligence techniques, a statistical analysis of the results is shown, and finally a diagnosis is given and a decision is taken on the action of an actuator and/or a final report is shown. The tools used to mount the web server were Windows Server 2019, FileZilla, Workbench (MySQL), Apache, and Visual Studio Code. Figure 7 shows the configuration of the server on port 8080 at IP 10.0.29.101 locally. The code was developed using HTML5, CSS, and Python, with templates for the web connection established as mentioned; Fig. 8 shows the interface. A sketch of this kind of endpoint is given below.
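A minimal sketch of such an endpoint, assuming the mysql-connector-python package and hypothetical table, column, and credential names:

from flask import Flask, request, jsonify
import mysql.connector

app = Flask(__name__)

def get_db():
    # Hypothetical credentials and schema name.
    return mysql.connector.connect(host="localhost", user="lia",
                                   password="secret", database="sensor_lia")

@app.route("/readings", methods=["POST"])
def store_reading():
    """Store one sensor reading sent by a sensing agent as JSON."""
    data = request.get_json()
    db = get_db()
    cur = db.cursor()
    cur.execute("INSERT INTO readings (sensor, value) VALUES (%s, %s)",
                (data["sensor"], data["value"]))
    db.commit()
    cur.close()
    db.close()
    return jsonify({"status": "ok"}), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # port used in the server configuration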
Fig. 6 Flow diagram of principal processes
Fig. 7 Server configuration
Fig. 8 Web interface
5 Conclusion and Future Work This work in progress presents a web system that allows analyzing the data taken from a user through the different sensors mentioned, such as detecting thermal comfort preferences to give automatic assistance, as well as analyzing heart rate and automatically controlling the lights to care for elderly users (through the action of actuators). It will be a system that helps prevent future illness and allows remote monitoring and assistance (through actuator action) of a patient whose health conditions force them to stay at home, which opens up new possibilities for health care. The web interface is also available on devices and computers connected to the internal network. Finally, the proposed system is intended to be developed further with the aim of sharing it with the elderly population; therefore, it seeks to test not only usability but also economic feasibility in the future. Acknowledgments To CONACYT, for the support provided during the period of study for the master's degree. To Dr. Rosario Baltazar and the members of the committee of researchers who took part in this preliminary study. Special thanks go to The University of Texas Rio Grande Valley for my research stay, and to Dr. Jesus Gonzalez and Dr. Anabel Pineda for their valuable contribution of knowledge.
References 1. Aztiria, A., Augusto, J.C., Basagoiti, R., Izaguirre, A., Cook, D.J.: Learning frequent behaviors of the users. IEEE Trans. Syst. Man Cybern. Syst. 43(6) (2013) 2. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of things (IoT): a vision, architectural elements, and future directions. Future Gener. Comput. Syst. 29, 1645–1660 (2013) 3. Da Xu, L., He, W., Li, S.: Internet of things in industries: a survey. IEEE Trans. Ind. Inf. 10, 2233–2243 (2014) 4. Lopez, S., Rosario Baltazar, M., Casillas, V., Zamudio, J.F., Miguel, A.A., Mendez, G.: Physical implementation of a customisable system to assist a user with mobility problems. Springer Int. 45, 10 (2016) 5. Oppermann, F.J., Boano, C.A., Romer, K.: A Decade of Wireless Sensing Applications: Survey and Taxonomy. In: The Art of Wireless Sensor Networks, pp. 11–50. Springer (2014) 6. https://www.espressif.com/en/products/hardware/esp32-devkitc/overview 7. Koleva, P., Tonchev, K.: Challenges in designing and implementation of an effective ambient assisted living system. In: 2015 12th International Conference on Telecommunication in Modern Satellite, Cable and Broadcasting Services (TELSIKS), pp. 305–308 (2015). https://doi.org/10.1109/TELSKS.2015.7357793 8. Barrio, M.A.: Internet de las Cosas. Reus, S.A., España (2018) 9. Mufid, M.R., Basofi, A., Al Rasyid, M.U.H.: Design an MVC model using Python for Flask framework development. IEEE (2019) 10. Weiss, G. (ed.): Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. The MIT Press, Cambridge, Massachusetts; London, England (1999) 11. Foundation for Intelligent Physical Agents. http://www.fipa.org/, Geneva, Switzerland, 1996–2002
A Genetic Algorithm-Oriented Model of Agent Persuasion for Multi-agent System Negotiation Samantha Jiménez, Víctor H. Castillo, Bogart Yail Márquez, Arnulfo Alanis, Leonel Soriano-Equigua, and José Luis Álvarez-Flores
Abstract In recent years, reaching agreements has been an important problem in multi-agent systems (MAS), which require different types of dialogues between agents. Persuasion is one of them, and it is traditionally based on first-order logic. However, this type of technique is not adequate for solving complex problems. This situation requires developing new models for optimizing an agent persuasion process. This work presents a persuasion model for MAS based on genetic algorithms (GA) for reaching agreements in problem solving. The objective of this work is to optimize the negotiation between agents that solve complex problems. First, the persuasion model was designed. Then, it was implemented and evaluated in experimental scenarios, and its results were compared against traditional models. The experimental results showed that a GA-oriented persuasion model optimizes the negotiation in MAS by improving execution time, which eventually will also optimize the processes carried out by MAS.
S. Jiménez (B) · B. Y. Márquez · A. Alanis Instituto Tecnológico de Tijuana, Tijuana, BC, Mexico e-mail: [email protected] B. Y. Márquez e-mail: [email protected] A. Alanis e-mail: [email protected] V. H. Castillo · L. Soriano-Equigua · J. L. Álvarez-Flores Universidad de Colima, Colima, Mexico e-mail: [email protected] L. Soriano-Equigua e-mail: [email protected] J. L. Álvarez-Flores e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 G. Jezic et al. (eds.), Agents and Multi-Agent Systems: Technologies and Applications 2020, Smart Innovation, Systems and Technologies 186, https://doi.org/10.1007/978-981-15-5764-4_26
1 Introduction Multi-Agent Systems (MAS) have become a new trend in research because of their potential to solve complex problems, i.e., problems where the solution is not exact and the execution time is intractable [1–5]. The negotiation process between agents is one of the main problems in MAS [6]. Wooldridge [6] defines a taxonomy of six dialogue types to reach agreements based on each agent's goal. One of these dialogue types is persuasion; in this type of communication, an agent tries to persuade another agent of the truth of something. In this dialogue type, the initial situation is a conflict between the agents that must be resolved. Several studies present different dialogue proposals between agents based on the type of problem and the agent interaction. For example, in the proposals of [2, 3, 7] the agents used a deliberative dialogue. However, this type of dialogue is not the best option if the agents face a problem where they cannot work together but instead have a conflict, and the best agent is the one that will do the task [6]. In such cases, it is relevant to analyze the use of a persuasive dialogue. Traditionally, negotiation is based on game theory, but this type of technique has some important disadvantages [6]: (1) it is difficult to justify the agreement between the agents, and (2) it is assumed that the agent's function is static. These difficulties led to the development of argumentation theory. In [6], Wooldridge defines argumentation as a process where the agents interchange premises. Game theory, like argumentation, is based on first-order logic, but this technique can be limited when solving complex problems, that is, problems that cannot be solved in polynomial time. Solving these types of problems using traditional algorithms based on first-order logic can be complicated, because this technique is composed only of a knowledge database and inference engines. These limitations led to the development of new mechanisms to optimize the negotiation process considering execution time. Nowadays, evolutionary computing, including Genetic Algorithms (GAs), is an area that optimizes processes whose solution requires exponential time [8]. A GA is a set of search methods that simulate natural selection, where the fittest individual survives [9]. GAs are faster than conventional search methods. There are two similarities between GAs and MAS [10]: (1) they can simulate the Darwinian process; and (2) their characteristics allow representing the evolution process. The literature reports different persuasive mechanisms [11], although the first-order logic technique is the most commonly used for MAS negotiation. Likewise, the literature indicates that the problems solved by these MAS were not complex [11]. For that reason, this research proposes to develop a Genetic Algorithm-Oriented Model to optimize the execution time in the agent negotiation process. The rest of the paper is organized as follows: Sect. 2 presents the design of the persuasion model based on the Gaia methodology [12]. In Sect. 3 the evaluation process is presented; we present two case studies to validate the proposal. Then,
Sect. 4 presents and discusses the results. Finally, in Sect. 5, we present the conclusions and future work.
2 Persuasion Model This section describes the design of the proposed conceptual model of persuasion. Also, based on the Gaia methodology [12], the design of its information model is presented. Later, we describe the architecture that supports the implementation and makes the model functional by transforming it into a MAS.
2.1 Conceptual Model The conceptual model abstractly exhibits the set of interrelations between the agents that make up the model [9]. It also defines the behavior of each agent that integrates the system. The model consists of the following types of agents: the Master Agent (MA), whose function is to coordinate the MAS and of which there is only one in the system; and the Genetic Agents (GAg), whose main task is to achieve goals under the GA architecture, with the system consisting of n agents of this type. This model establishes the following. The GAgs are responsible for achieving goals, that is, this type of agent achieves the goals set initially. As the system consists of n agents of this class, each of them seeks to achieve the goal that the MAS proposes. However, only the GAg that presents the best solution to the problem should persuade the others not to carry out the task they had established to complete, and only the agent who persuaded (GAg X) should carry it out. The MA has the task of determining which agent persuaded, so that it carries out the solution it found. In GAs, the selection of the fittest individual is determined by the value returned by the fitness function, called the fitness value. This represents the best solution to the problem, whether of maximization or minimization. Based on the foregoing, this investigation decided to base the persuasion on the fitness value, a characteristic of the GA. The behavior of the GAg is similar to the execution of a GA, which allows it to reach the goal using this type of algorithm, with the resulting fitness value supporting the persuasion process in the MAS. In the proposed persuasion model, GAgs obtain environmental data, which are conveniently represented according to the type of problem considered. Next, the GA is executed. It selects the most suitable individual in its population and sends this solution to the environment. The MA monitors the environment in order to obtain the fitness values resulting from each of the GAgs that integrate the system and selects the GAg that persuaded the other MAS agents. In the GAgs, the data coming from the environment is stored in a single input representation string, while for the MA the
Fig. 1 Master agent architecture
data has independent inputs for each fitness value; the number of these depends on the number of GAgs. The execution of the MA concludes when it selects the GAg that has persuaded the others. The proposed structure for the MA is illustrated in Fig. 1. All agents have established goals to achieve. Persuasion is carried out when the GAg with the highest fitness value persuades the other agents not to perform their initial task; a minimal sketch of this selection step is given below. The conceptual model supports the development of an information model, which is described in the next section.
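The sketch below shows the selection idea in Python; the function is ours, and the example fitness values anticipate those reported later in Table 2.

def select_persuader(fitness_by_agent, maximize=True):
    """MA step: return the GAg whose fitness value wins the persuasion."""
    pick = max if maximize else min
    return pick(fitness_by_agent, key=fitness_by_agent.get)

# Fitness values the GAgs posted to the environment (cf. Table 2):
environment = {"GAg1": 76, "GAg2": 86, "GAg3": 89}
print(select_persuader(environment))  # -> GAg3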
2.2 Information Model The conceptual model presented in the previous section led to the development of an information model. The design of the information model was based on the Gaia methodology [12], which is used to design MAS. In the present work, we describe only the agent and relation models. Agent Model. In Gaia, the agent model documents the types of agents that will be developed in the system, as well as the roles that agents perform [12]. Figure 2 shows the agent model, which establishes the roles that each of the MAS agents will carry out. In Fig. 2, GAg has multiplicity +, which indicates that there may be 1 or more agents of this type. This type of agent implements the following roles: SenseData, that is, obtaining data from the environment where the problem develops; Generate Initial Population, where, using the data obtained from sensing, an initial population is generated to begin the operation of the GAg; and Generate New Population, where, after the GAg performs the first iteration, a new population is generated over which the
Fig. 2 Agent model
next iteration will be performed, and so on; Evaluate, where the new population is evaluated in each iteration to determine the best individuals; and Select, where the individual with the highest fitness value is selected. In Fig. 2, it can also be seen that the MA has multiplicity 1, which indicates that there is only one agent of this type in the MAS. This type of agent has the roles of Sense Data (which works in the same way as in the GAg, obtaining data from the environment), Select (selecting the agent that proposes the best fitness value), and Activate (activating the agent who persuaded the MAS to perform the task). The relationship between agents is explained in the next section. Relation Model. The Gaia relationship model defines the communication between different types of agents [12]. Figure 3 shows the relation model, where the n GAg instances interact bidirectionally with the single MA. With the agent and relationship models, the transformation of the persuasion conceptual model, detailed in previous sections, into an information model is concluded. Fig. 3 Relation model
Through this transformation, a system model can be generated, which is detailed in the next section. System Model. This section describes the architecture of a MAS derived from the conceptual model described above. Because the MA is the system coordinator and only makes comparisons, it is implemented in a rule-based paradigm, while the GAg is implemented in an architecture for achieving goals based on a GA. The deployment diagram (Fig. 4) is based on the information model (see Sect. 2.2) and shows the physical components of the system and their interconnections. The GAg detects the environment data through the Change interface and executes each of its roles; at the end, it sends the results of its execution to the environment. Meanwhile, the MA, like the GAg, detects through the Change interface the fitness values sent by the GAgs to the environment, selects the GAg most suitable to perform the task, and designates it to execute that task through the Designate interface. The following section describes the structure of the agents involved in the deployment diagram mentioned above. Agent's Architecture. From the perspective of agents, in [13] architectures are defined as the relations that exist between the inputs of the sensors and the outputs of the actuators to the environment, as well as the internal reasoning of the agent. The first figure shows the internal structure of the GAg, while the second illustrates that of the MA. The GAg is composed of a sensor that obtains the data from the environment and associates it in a data chain. From that association, the
Fig. 4 Deployment diagram
initial population is generated for the agent's internal GA to operate. Then, the data of the initial population is evaluated to know the fitness of each individual as a solution to the problem being faced. Next, chromosome pairs (individuals) are formed to apply the genetic operators and generate the new population. The execution of the GA ends when the end criterion is met; then the individual with the best fitness value is selected, depending on the type of problem being faced, and if the criterion is not yet satisfied, the procedure described above is iterated until the end condition is met. On the other hand, the architecture of the MA is based on a reactive-type architecture. The MA consists of several sensors to obtain environmental data; internally, it has the input of the fitness values sent by the GAgs. The MA makes a comparison of these values to select the GAg that persuaded the other agents in the system. The MA actuator designates the GAg that persuaded to achieve the goal of the MAS. Transforming the information model into a system model implies that the design considerations are implemented in a MAS. For this, it is pertinent to establish the conditions of the problems where the proposed model is potentially more efficient. These conditions are shown below.
2.3 Persuasion Model Application Characteristics This section presents the five general characteristics that a problem must meet so that it can be resolved with the persuasion mechanism proposed in this article. It must be a problem that is solved in exponential time. The proposed model can solve problems with small search spaces, but its usefulness is more appreciated in large search spaces. It must be solved by a MAS. The implementation of the proposed model must be a MAS, because the reaching of agreements is carried out between two or more agents. Agents must have a genetic architecture. Another important characteristic is to internally structure the agents with a GA, since the proposed model uses its characteristics to persuade. Agents must have a data set as an environmental input. GAgs need a set of data from the environment as input to represent the initial population, which helps the GA to carry out its operation. Agents must provide a fitness value as output. This is another characteristic of relevance, since the persuasion of the GAg is done through its fitness value. For example, some problems that adhere to the characteristics specified above are combinatorial optimization [14], planning in the manufacturing industry [1, 15], task planning [16, 17], and automatic classification of defects in wood [18].
3 Experimental Design This section describes the procedure followed to validate the relevance of the proposed persuasion model. To demonstrate the advantage that it presents with respect to traditional models, it is necessary to analyze the behavior of both in the same case study. This may be the ordering of tasks, since this scenario faces the problem of classifying data in a large search space, and this problem meets the characteristics mentioned in the previous section. Taking that scenario into account, two traditional methods, Quick-sort and Bubble-sort, were chosen against which the proposed persuasion model was compared. A combinatorial optimization problem was chosen, that is, one that seeks the best solution among a finite set of possible solutions to a problem. Obesity is an important health problem that is in the focus of public policies in several countries [19]. When designing a food control program in weight control environments to avoid cardiovascular diseases, experts often do not consider the patient's taste for the foods that will be included in it. However, although considering this aspect is important to satisfy the patient, it also implies a problem, because the expert must consider reducing the number of calories while increasing the patient's taste for the foods included in the diet. The problem is to design a diet in such a way that the value of the patient's taste for food is maximized and a certain number of calories is not exceeded. As the number of foods increases, the problem becomes NP-hard. The experiments were conducted on a laptop with an Intel Core i7 2.53 GHz processor with 8 GB of memory, 1 TB of hard disk, and Windows 10 64-bit as the operating system. The implementation was designed as follows: the MA was implemented in C#, and the Genetic Agents were implemented in MATLAB, which demonstrates that the model is multi-platform. For the scenario of the present case study, the model behaves as explained in previous sections. Thus, the MA is responsible for selecting the GAg with the best solution. On the other hand, the GAgs generate a series of approximate solutions that solve the problem in question. For this case study, the GA implemented in the agents considers the parameters shown in Table 1; a simplified sketch of a GA loop with this structure follows the table. The following describes how the GAg instances were implemented to solve this problem. Table 1 Agents parameters
Parameter | Value/Characteristics
Representation | Binary
No. individuals | 100
Evaluation | Fitness function
Selection | Random
Crossover | Order one
Termination criteria | Convergence
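The following Python sketch shows a GA loop with the overall structure of Table 1 (binary representation, a population of 100, fitness-based evaluation); the selection and crossover operators are simplified stand-ins, and the convergence criterion is replaced by a fixed generation budget, so this is not the exact MATLAB implementation.

import random

def run_ga(fitness, n_genes, pop_size=100, generations=50, p_mut=0.01):
    """Maximize `fitness` over binary chromosomes of length `n_genes`."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]                      # initial population
    for _ in range(generations):                          # fixed budget
        scored = sorted(pop, key=fitness, reverse=True)   # evaluate
        parents = scored[: pop_size // 2]                 # keep the fittest
        pop = parents[:]
        while len(pop) < pop_size:                        # crossover + mutation
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]
            pop.append([g ^ (random.random() < p_mut) for g in child])
    return max(pop, key=fitness)                          # fittest individual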
4 Results A vector containing the calories of each food was taken as input (CalVector = [83, 62, 79, 296, 195, 162, 77, 78, 180, 156, 113, 69, 74, 263, 198, 188, 121, 10, 250, 11, 80, 73, 17, 20, 1]) together with another vector of qualitative values, between 0 and 10, that rate patient satisfaction with each food (SatVector = [1–8, 10]). Each food has an assigned satisfaction value between 0 and 10; the element CalVector[0] corresponds to the element SatVector[0], where 0 means the least taste and 10 the greatest. The GAg generates an initial population of 0's and 1's (e.g., [1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1]), where 0 indicates the absence of a food in the diet and 1 indicates including it, taking into account the maximum number of calories. This vector has the same dimension as CalVector and SatVector. The second condition that must be met is to select the individual with the highest value of taste for food. In the results presented in Table 2, we can see that the agent that persuaded in the MAS was GAg 3. This agent has the most optimal value for the patient's taste for food and, in turn, the lowest number of calories. It should be noted that these results were obtained with 10 iterations. In addition, a conventional algorithm that solves this problem was implemented; its results are shown in Table 3. The value of 101 obtained by the conventional algorithm is the optimal solution for the patient's taste for food. The result obtained by GAg 3 (Table 2) is acceptable, with 88% accuracy with respect to the exact result. On the other hand, the conventional algorithm obtained 1165 calories while the GAg obtained 1094 calories. As seen in Table 3, the number of iterations in which the conventional algorithm converges is markedly greater (a difference of 29,990) than that of the genetic agent. A sketch of the fitness evaluation for this encoding is given after Table 2. Table 2 The results of the agent interaction
Agent | Vector | Calories | Satisfaction
1 | [83, 62, 79, 0, 0, 0, 77, 0, 180, 0, 0, 69, 74, 0, 198, 188, 0, 10, 0, 0, 80, 0, 17, 0, 1] | 1118 | 76
2 | [0, 62, 79, 0, 0, 0, 77, 78, 180, 156, 113, 69, 74, 0, 0, 0, 121, 10, 0, 11, 0, 73, 17, 20, 1] | 1141 | 86
3 | [83, 62, 79, 0, 195, 0, 77, 0, 0, 156, 113, 69, 0, 0, 0, 0, 121, 10, 0, 11, 80, 0, 17, 20, 1] | 1094 | 89
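The sketch below evaluates one chromosome under this encoding; CalVector is taken from the text, while the satisfaction values and the calorie cap are hypothetical placeholders.

CAL = [83, 62, 79, 296, 195, 162, 77, 78, 180, 156, 113, 69, 74,
       263, 198, 188, 121, 10, 250, 11, 80, 73, 17, 20, 1]
# Hypothetical satisfaction ratings (0..10), one per food:
SAT = [5, 7, 3, 9, 8, 2, 6, 4, 7, 8, 5, 6, 3, 9, 7, 8, 6, 2, 9, 1, 5, 4, 2, 3, 1]

def fitness(chromosome, max_calories=1200):  # hypothetical calorie cap
    """Total satisfaction of the selected foods; infeasible diets score 0."""
    cal = sum(c for c, bit in zip(CAL, chromosome) if bit)
    sat = sum(s for s, bit in zip(SAT, chromosome) if bit)
    return sat if cal <= max_calories else 0

diet = [1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0,
        0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1]
print(fitness(diet))  # 53 with these placeholder values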
Table 3 Comparison between a traditional algorithm and a GAg
 | Traditional algorithm | Genetic agent 3
Vector | [83, 62, 79, 0, 0, 0, 77, 78, 0, 156, 113, 69, 0, 0, 0, 188, 121, 10, 0, 11, 80, 0, 17, 20, 1] | [83, 62, 79, 0, 195, 0, 77, 0, 0, 156, 113, 69, 0, 0, 0, 0, 121, 10, 0, 11, 80, 0, 17, 20, 1]
Calories | 1165 | 1094
Satisfaction level | 101 | 89
Iterations | 30,000 | 10
5 Conclusion and Future Work This article presents a negotiation model with a persuasive dialogue, based on the characteristics of GAs. The problem facing this work is to optimize, with respect to execution time, the negotiation process in MAS. The results obtained in the evaluation show that the proposed model optimizes the execution time of the persuasion process in a MAS that has the characteristics described in Sect. 2. As in any bio-inspired optimization strategy, in exchange for an improvement in convergence time for the solution of complex problems, the model proposed in this article sacrifices some precision. However, it is estimated that the solution generated by the model presented here is scalable to larger search spaces. In such scenarios, the choice of this model would be widely justified. This research is the basis for the development of future work. First, from the perspective of addressing the limitations of the proposed model, there is the redesign of the GA to improve the accuracy of the model. On the other hand, a future line of study is the formal analysis of the complexity of the solution proposed by the persuasion model with respect to the algorithms with which it was compared. Another future line of work is the physical implementation of the proposed persuasion model and its empirical evaluation, since in this work it was analyzed only from a software perspective. Finally, we can conclude that the proposed persuasion model can optimize the solution when MASs face a complex problem, which could eventually optimize the processes automated by those MASs.
References 1. Solari, M.D.L.Á.: Aplicación de Algoritmos Genéticos en un Sistema Multiagente de Planificación en una Industria Manufacturera. In: XXXII Conferencia Latinoamericana de Informática (2006) 2. Benedettelli, D.: A LEGO Mindstorms experimental setup for multi-agent systems. In: IEEE ASSP Magazine (2009) 3. Simonin, O.: A cooperative multi-robot architecture for moving a paralyzed robot. Mechatronics 19, 463–470 (2009) 4. Balducelli, C., Esposito, C.D.: Genetic agents in an EDSS system to optimise resources management and risk objects evacuation. Saf. Sci. 35, 59–73 (2000) 5. Dipsis, N., Stathis, K.: Ubiquitous agents for ambient ecologies. Pervasive Mob. Comput. 8, 562–574 (2011) 6. Wooldridge, M.: An Introduction to Multiagent Systems. Wiley (2002) 7. Aaron, E., Admoni, H.: Action selection and task sequence learning for hybrid dynamical cognitive agents. Rob. Auton. Syst. 58, 1049–1056 (2010) 8. Fleming, P.: Genetic Algorithms in Engineering Systems. Institution of Engineering and Technology (1997) 9. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley (1989) 10. Jean-Philippe, V.: Genetic algorithm in a multi-agent system. In: IEEE International Joint Symposia on Intelligence and Systems, pp. 17–26 (1998) 11. Wooldridge, M., Jennings, N., Kinny, D.: The Gaia methodology for agent-oriented analysis and design. Auton. Agents Multiagent Syst. 3, 285–312 (2000)
A Genetic Algorithm-Oriented Model of Agent Persuasion …
291
12. Fernando, L., Ossa, C., Car, C.R., Muñoz, G.M., Álvarez, J.A.: Análisis, diseño e implementación de un agente deliberativo para extraer contextos definitorios en textos especializados * María Mer re Resumen Intr oducción Introducción El trabajo interdisciplinario e inter-grupal ha cobrado importancia en campos de, pp. 59–84 13. Ruiz, R.E.S.: Algoritmo genético para la solución del problema de optimización combinatoria y decisión secuencial en el juego ‘but who’s counting’. Rev. Generación Digit. 9, 65–70 (2011) 14. Charalambous, C., Hindi, K.S.: Applying GAs to complex problems: the case of scheduling Multi-State intermittent manufacturing systems. In Genetic algorithms in Engineering Systems: Innovations and Applications (1997) 15. Ying, W., Bin, L.: Job-shop scheduling using genetic algorithm (1996) 16. Thomas, J., Leon, S.: The ROADMAP meta-model for intelligent adaptive multi-agent system in open environments (2003) 17. Estévez, P.: Optimizacion mediente algoritmos genéticos. In: Anales del Instituto de Ingenieros de Chile, vol. 1, pp. 83–92 (1997) 18. Gómez, P., Andres, C., Lario, F.-C.: An agent-based genetic algorithm for hybrid flowshops with sequence dependent setup times to minimise makespan. Expert Syst. Appl. 39, 8095–8107 (2012) 19. Obesity and the Economics of Prevention: Fit not Fat: Obesity Update 2014 presents an update of analyses of trends and social disparities in obesity originally presented in OECD report (2014). http://www.oecd.org/els/health-systems/obesity-update.htm
Business Informatics
Impacts of the Implementation of the General Data Protection Regulations (GDPR) in SME Business Models—An Empirical Study with a Quantitative Design

Ralf-Christian Härting, Raphael Kaim, and Dennis Ruch

Abstract The European General Data Protection Regulation (GDPR) contains a large number of legal provisions. For small and medium-sized enterprises (SMEs), implementing the GDPR is a major challenge. This paper examines the factors influencing the implementation of the GDPR in already existing business models of SMEs. In a previous qualitative research project, a hypothesis model was developed. That model is examined in this work with a quantitative approach, using structural equation modeling to evaluate the answers of 103 German experts in the field of data protection. The factors Process adaption, Costs, Know-how, Uncertainty, Provision of information, and Expenditure of time were examined for significance. The results of this study show that Know-how, Costs, Provision of information, and Process adaption have a positive as well as a significant influence on the impacts of the implementation of the GDPR in existing business models for SMEs.
1 Introduction

The European General Data Protection Regulation (GDPR) was adopted by a large majority in the European Parliament on 14 April 2016. It replaces the 1995 Data Protection Directive [1]. The GDPR became mandatory on 25 May 2018 after a two-year transitional period [2]. Until that deadline, all enterprises had to adjust their business processes to the new legal framework. The objective of the new regulation is to update European data protection legislation. In times of increasing digitization, a balance must be found between economic and customer-related concerns. Customer
relationships are, therefore, one of the key factors for small and medium-sized enterprises to meet changing customer needs [3]. More transparency and more control over their data, given to the customer, should become another important factor [4]. Furthermore, the GDPR creates a forward-looking legal framework for businesses in the processing of data and for innovative new business models [5]. In total, the regulation has six main elements: data minimization, purpose limitation, accuracy, storage limitation, fairness and legality, and integrity and confidentiality [6]. The GDPR creates a standard for data protection across Europe and strengthens customers' rights by extending the coverage of the rights of the individuals concerned, including data transferability and access [7]. This removes barriers to competition and market entry that stemmed from initially different data protection rules [8]. However, implementing the new GDPR is still an issue, especially for SMEs, since digitization is a major challenge for these companies [9]. The empirical study focuses on small and medium-sized enterprises in Germany, where they account for over 99% of all companies and differ significantly in their ownership structure from foreign SMEs. Furthermore, Germany already had a rather high level of data protection before the GDPR. It can, therefore, be assumed that the implementation process differs significantly from that of other European countries. In the run-up to 25 May 2018, many SMEs were concerned that they would not be able to meet the deadline and demonstrate compliance with GDPR processes. There are new fines for non-compliance, which can amount to up to EUR 20 million [10]. The aim of this research project is to answer the following main research question: What impacts does the implementation of the GDPR have on already existing business models for SMEs? Based on a quantitative design using structural equation modeling, we tested the constructs Know-how, Expenditure of time, Uncertainty, Costs, Provision of information, and Process adaption. These were determined in previous work by Härting et al. [11]. This paper is structured as follows: First, the research design is presented, starting with a short summary of the methodology, the data collection, and the evaluation of the preceding qualitative study. Afterward, the methodology of the quantitative research is presented. Subsequently, the evaluation of the empirical data is explained. This is followed by a discussion and the limitations of the study. The paper closes with a conclusion and an outlook on future research.
2 Research Design

2.1 Qualitative Research

In the previous qualitative study, a hypothesis model concerning the "Impacts on the Implementation of the GDPR" (IIGDPR) in already existing business models for SMEs was derived using semi-structured interviews with experts [12].
The experts had to have long experience in the field of data security or data protection and be responsible for data protection in their respective company. The company itself had to be an SME, i.e., not exceeding 500 employees or an annual turnover of EUR 50 million. The questionnaire consisted of eight semi-structured questions. The interviews, all with German experts and therefore conducted in German, were transcribed and afterwards evaluated. Thirteen experts were interviewed. The evaluation of the interviews used a hybrid of cluster analysis according to Everitt et al. [13] and structured content analysis according to Mayring [14]; combining the two techniques allows a more objective view of the data. Six constructs emerged from the analysis: Know-how, Expenditure of time, Uncertainty, Costs, Provision of information, and Process adaption. Each of these constructs has a negative influence on the dependent variable "impacts on the implementation of the GDPR in already existing business models for SMEs". Together they form the hypothesis model, which is depicted with all the single items in Fig. 1.
Fig. 1 Hypotheses model including items, constructs, and the dependent variable "impacts on the implementation of the GDPR in already existing business models for SMEs". Items per construct: Know-how (lack of technical know-how; lack of legal know-how); Expenditure of time (information procurement; time for implementation); Uncertainty (legal uncertainty; missing meaning of the information; fear of warnings; uncertainty in communication); Costs (implementation costs; loss in revenue; other opportunity costs; running costs for external data protection officer); Provision of information (missing guidelines; little support from authorities); Process adaption (elaborated processes; changed processes; slowed digitization)
The six hypotheses that are quantitatively examined for significance in this paper are thus:

1. Missing "Know-how" has a negative impact on the implementation of the GDPR in already existing business models for SMEs.
2. High "Expenditure of time" has a negative impact on the implementation of the GDPR in already existing business models for SMEs.
3. Big "Uncertainties" have a negative impact on the implementation of the GDPR in already existing business models for SMEs.
4. High "Costs" have a negative impact on the implementation of the GDPR in already existing business models for SMEs.
5. Insufficient "Provision of information" has a negative impact on the implementation of the GDPR in already existing business models for SMEs.
6. Difficult "Process adaptions" have a negative impact on the implementation of the GDPR in already existing business models for SMEs.
2.2 Quantitative Design

The next step of our sequential research design is the quantitative analysis. A quantitative model validation is needed to prove or disprove the hypotheses we set up through the analysis of the interviews. In order to examine the influences of the determinants on already existing business models for SMEs, a questionnaire was prepared and implemented via the open-source survey software "LimeSurvey" [15]. The target group of the questionnaire consists of experts such as academic scientists, CEOs, lawyers, external data protection officers, members of associations, public institutions, or experts from conferences. Most of the contacts were approached via e-mail, Twitter, and the German business social network Xing. As only German-speaking experts in Austria, Germany, and Switzerland were addressed, the questionnaire was drawn up in German. All measurement items were based on a five-point Likert scale ranging from full approval to total rejection (1: "I totally agree", 2: "I rather agree", 3: "Neither nor", 4: "I rather disagree", 5: "I totally disagree"). To obtain valid results, every question had to be answered. In addition to the questions about the 14 items depicted on the left-hand side of Fig. 1, an aggregated question for each determinant as well as sociodemographic questions were asked, in order to see whether the respondents fit the target group. A pretest was conducted to verify the understandability of the questionnaire. Since the feedback was entirely positive, there was no need to change the questions. In total, the questionnaire was answered 146 times, of which 103 responses were completely filled in and could be used after data cleaning. In order to analyze the theoretical causal model using empirical data, a structural equation modeling (SEM) approach was used. SEM enables the visualization of the relationship between various variables [16] and can be described as a second generation of multivariate analysis that allows a deeper insight into the analysis of
different (data) relationships compared to, for example, cluster analysis or linear regression [17]. There are two basic components of SEM: a measurement model and a structural model [18]. While the function of the measurement model is to validate the latent variables, the structural model analyzes the relationship between the research model and the latent variables [17]. From the various available SEM tools such as AMOS, LISREL, or SmartPLS [16], the authors chose SmartPLS because of its robustness and data requirements [19]. In addition, the SEM approach of SmartPLS is increasingly used by the information systems community [18]. In contrast to AMOS or LISREL, SmartPLS calculates the structural path significance with bootstrapping [16]. With regard to the causal model, multi-item measurement scales are used that take into account common quality criteria such as Cronbach's Alpha (CA), Average Variance Extracted (AVE), and Composite Reliability (CR) [20].
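As an illustration of how bootstrapping yields the path significance reported below, the following sketch estimates a single path coefficient by ordinary least squares and derives its t-statistic from bootstrap resamples. This is a minimal sketch of the principle, not the SmartPLS algorithm; the data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized scores: one latent predictor (e.g., "Costs")
# and the dependent construct (IIGDPR), with 103 observations as in the survey.
x = rng.standard_normal(103)
y = 0.23 * x + rng.standard_normal(103)

def path_coefficient(x, y):
    # OLS slope of y on x; with standardized data this is the path estimate.
    return np.dot(x, y) / np.dot(x, x)

estimate = path_coefficient(x, y)

# Bootstrap: resample observations with replacement and re-estimate the path.
n_boot = 5000
boot = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, len(x), len(x))
    boot[b] = path_coefficient(x[idx], y[idx])

# t-statistic = original estimate / bootstrap standard error.
t_stat = estimate / boot.std(ddof=1)
print(f"path = {estimate:.3f}, t = {t_stat:.3f}, significant: {abs(t_stat) > 1.96}")
```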
3 Empirical Results

All six determinants Know-how, Expenditure of time, Uncertainty, Costs, Provision of information, and Process adaption have a negative influence on the impacts on the implementation of the GDPR in already existing business models for SMEs. However, only the influence of Costs, Provision of information, and Process adaption is clearly significant. The results of the quantitative examination are shown in Fig. 2.

Fig. 2 Structural equation model with t-statistics

H1 (Missing "Know-how" has a negative impact on the implementation of the GDPR in already existing business models for SMEs) cannot be confirmed under the assumption that the t-value has to surpass the threshold value of 1.96. In order to confirm this hypothesis, lower threshold values have to be used. This special case will be addressed in more detail later on. Furthermore, the current
t-value (t = 1.873) could only be achieved using the aggregated question, so the factors "lack of technical and legal know-how" cannot be confirmed individually.

H2 (High "Expenditure of time" has a negative impact on the implementation of the GDPR in already existing business models for SMEs) cannot be confirmed. The construct "high expenditure of time", with its indicators "information procurement" and the "time for the implementation" required to fulfill the guidelines, has a negative impact (t = 1.514); however, the t-statistic is not significant (t < 1.96).

H3 (Big "Uncertainties" have a negative impact on the implementation of the GDPR in already existing business models for SMEs) cannot be confirmed. The big uncertainties have a negative impact (t = 1.032), but again this is not significant (t < 1.96). The missing meaning of the information, the fear of warnings, and the uncertainty in communication do not have a significant negative impact on the implementation of the GDPR in already existing business models for SMEs.

H4 (High "Costs" have a negative impact on the implementation of the GDPR in already existing business models for SMEs) can be confirmed. The results show a negative (t = 2.318) and highly significant (t > 1.96) impact, thus the hypothesis can be confirmed. It is apparent that the loss in revenues, as well as other opportunity costs faced by SMEs, have a negative impact on the implementation of the GDPR in already existing business models.

H5 (Insufficient "Provision of information" has a negative impact on the implementation of the GDPR in already existing business models for SMEs) can be confirmed, because the insufficient provision of information has a negative (t = 2.499) and highly significant (t > 1.96) impact. Similarly to H1, the t-value could only be achieved using the aggregated question; therefore, only the determinant can be confirmed, but not its indicators, namely missing guidelines and lacking support from authorities.

H6 (Difficult "Process adaptions" have a negative impact on the implementation of the GDPR in already existing business models for SMEs) can also be confirmed. The results show a negative (t = 3.990) and highly significant (t > 1.96) impact. The elaborated and changed processes and the slowed-down digitization do have a negative impact on the implementation of the GDPR in already existing business models for SMEs (Table 1).

If the path coefficient is higher than 0.2, a significant influence can be observed. The standard deviation should be as low as possible. With a value > 1.96, the t-statistic is considered significant. The last value in the table, the p-value, is the level of significance, which should correspond to a confidence level of at least 95%; hence, the p-value should be lower than 0.05 to verify the hypothesis. However, the p-value is not as crucial as the t-statistic [21]. From this point of view, H4, H5, and H6 can be confirmed, while H2 and H3 have to be rejected. For H1, which only barely misses the threshold values described above, different measures can be used: the threshold for the path coefficient can be lowered to > 0.1 [20] and that for the path t-statistic to t > 1.65 in order to confirm the hypothesis. But in doing so, the error margin increases to 10% compared to the 5% above [22].
Table 1 Empirical results

Hypothesis   SEM-Path                             Path coefficient   Standard deviation   t-statistic   p-value
H1           Know-how → IIGDPR                    0.148              0.079                1.873         0.062
H2           Expenditure of time → IIGDPR         0.167              0.110                1.514         0.131
H3           Uncertainty → IIGDPR                 −0.123             0.119                1.032         0.303
H4           Costs → IIGDPR                       0.228              0.098                2.318         0.021
H5           Provision of information → IIGDPR    0.269              0.108                2.499         0.013
H6           Process adaption → IIGDPR            0.359              0.090                3.990         0.000
Composite reliability is used as the quality criterion for measurement models with multiple indicators. The values of both constructs are clearly above the limit value of 0.6 (H4: 0.82; H5: 0.79). In addition, Cronbach's Alpha was examined to assure a high quality of the measurement models. Here too, both values are higher than 0.58 and therefore in a satisfactory range [23]. The coefficient of determination (R²) indicates how well the exogenous constructs Know-how, Costs, etc. explain the endogenous dependent construct impacts on the implementation of the GDPR (IIGDPR) in already existing business models for SMEs, i.e., how good the model is. In this case, the coefficient of determination for the overall measurement model is in a highly satisfactory range (0.559 > 0.19) according to Chin [24].
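For reference, two of the quality criteria mentioned above can be computed directly from the raw data; the sketch below shows the textbook formulas for Cronbach's Alpha and the coefficient of determination, applied to hypothetical item scores, independent of SmartPLS.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_var / total_var)

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(103, 4)).astype(float)  # hypothetical 5-point Likert items
print(cronbach_alpha(scores))
print(r_squared(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2])))
```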
4 Discussion and Limitations

Starting with the qualitative study, thirteen interviews were conducted with data protection experts from Germany in order to capture the various already perceived influences of the new regulation [11]. By clustering, summarizing, and coding the data, six determinants were found: missing Know-how, Expenditure of time, Uncertainties, Costs, insufficient Provision of information, and a difficult Process adaption. A negative impact on the implementation of the GDPR in already existing business models for SMEs was assumed for each. The aim of the subsequent quantitative study was to test the hypotheses formed to this point. For this purpose, a questionnaire based on the qualitative results was prepared, which was completed in full by 103 German-speaking experts. The findings show that all six determinants have a negative influence on the impacts on the implementation of the GDPR in already existing business models for SMEs. However, Expenditure of time, Uncertainty, and Know-how do not reach the required significance level, although Know-how misses it only very narrowly.
The other three determinants are highly significant, thereby confirming H4, H5, and H6: the high Costs, the insufficient Provision of information, and the difficult Process adaption have a negative impact on the implementation of the GDPR in already existing business models for SMEs. The study, however, has several limitations. Regarding the Provision of information, a single-item construct had to be used in the quantitative part, so the variable has a significant negative impact, but the indicators "little support from the responsible authorities" and "missing guidelines" could not be confirmed. The authors interviewed only German experts, and the questionnaire was accordingly designed only for the German-speaking area. However, the regulations discussed apply to the entire European Union, so a certain national mentality could bias the results; in other countries, the results could look different despite the same law. Furthermore, the study discusses only the negative effects of and on the GDPR. There were also notes on positive effects in the interviews, but these could not be pursued further given the scope of our research.
5 Conclusion

This paper examines the impacts of the implementation of the GDPR in already existing business models. In order to obtain insightful and reliable results, the findings already made were quantitatively investigated. Our research identified important aspects influencing the implementation of the GDPR in already existing business models for SMEs. This will help practitioners to focus their efforts when implementing this and similar regulations. Additionally, the enforcers of such regulations get an impression of where they can support smaller companies with the challenges of implementing the GDPR. Further research is needed to study not only the negative but also the positive impacts of the GDPR on SMEs. Also, an international study might give deeper insights into the topic.
References

1. de Hert, P., Papakonstantinou, V.: The new General Data Protection Regulation: still a sound system for the protection of individuals? Comput. Law Secur. Rev. 32(2), 179–194 (2016)
2. The European Parliament and the Council: Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L 119 (2016)
3. Dittert, M., Härting, R.C., Reichstein, C., Bayer, C.: A data analytics framework for business in small and medium-sized organizations. In: Czarnowski, I., Howlett, R., Jain, L. (eds.) Smart Innovation, Systems and Technologies–Proceedings of the 9th KES-IDT 2017–Part II, vol. 73, pp. 169–181. Springer (2017)
4. Sobolewski, M., Mazur, J., Palinski, M.: GDPR: a step towards a user-centric internet? Intereconomics 52(4), 207–213 (2017)
5. Chen, J.: How the best-laid plans go awry: the (unsolved) issues of applicable law in the General Data Protection Regulation. Int. Data Priv. Law 6(4), 310–323 (2016)
6. Goddard, M.: The EU General Data Protection Regulation (GDPR): European regulation that has a global impact. Int. J. Mark. Res. 59(6), 703–705 (2017)
7. Todorovic, I., Komazec, S., Krivokapic, D., Krivokapic, D.: Project management in the implementation of General Data Protection Regulation (GDPR). Eur. Proj. Manag. J. 8(1), 55–64 (2018)
8. The Federation of German Industries (BDI). https://english.bdi.eu/article/news/the-eusgeneral-data-protection-regulation-answers-to-the-most-importantquestions/. Accessed 16 May 2019
9. Härting, R., Reichstein, C., Jozinovic, P.: The potential value of digitization for business–insights from German-speaking experts. In: Eibl, M., Gaedke, M. (eds.) Informatik 2017, Gesellschaft für Informatik, Bonn (2017). https://doi.org/10.18420/in2017_165
10. Jackson, O.: Many small firms are still unprepared for GDPR. International Financial Law Review, London (2018)
11. Härting, R.-C., Kaim, R., Klamm, N., Kronenberg, J.: Impacts of the New General Data Protection Regulation for small- and medium-sized enterprises. In: Advances in Intelligent Systems and Computing, Proceedings of 5th ICICT 2020, AISC, Springer (2020). ISSN 2194-5357 (paper accepted)
12. Härting, R., Kaim, R., Klamm, N., Kroneberg, J.: Impacts of the New General Data Protection Regulation for small- and medium-sized enterprises. In: Fifth International Congress on Information and Communication Technology. Springer, Singapore (2020)
13. Everitt, B.S., Landau, S., Leese, M., Stahl, D.: Cluster Analysis, 5th edn. Wiley, Chichester (2011)
14. Mayring, P.: Qualitative content analysis. Forum Qual. Soc. Res. 1(2), Art. 20 (2000)
15. LimeSurvey [Internet]. [cited 10 Dec 2019]. https://www.limesurvey.org/de/
16. Wong, K.K.K.: Partial least squares structural equation modeling (PLS-SEM) techniques using SmartPLS. Mark. Bull. 24(1), 1–32 (2013)
17. Fornell, C., Larcker, D.F.: Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 18(1), 39–50 (1981)
18. Markus, K.A.: Principles and practice of structural equation modeling by Rex B. Kline. Struct. Equ. Model. Multidisc. J. 3, 509–512 (2012)
19. Ringle, C.M., Sarstedt, M., Straub, D.: A critical look at the use of PLS-SEM. MIS Q. 36(1) (2012)
20. Lohmöller, J.B.: Latent Variable Path Modeling with Partial Least Squares, pp. 60–63. Physica, Heidelberg (1989)
21. Weiber, R., Mühlhaus, D.: Strukturgleichungsmodellierung: Eine anwendungsorientierte Einführung in die Kausalanalyse mit Hilfe von AMOS, SmartPLS und SPSS, pp. 325–331. Springer Gabler, Berlin (2014)
22. Krafft, M., Götz, O., Liehr-Gobbers, K.: Die Validierung von Strukturgleichungsmodellen mit Hilfe des Partial-Least-Squares (PLS) Ansatzes. In: Bliemel, F., Eggert, A., Fassot, G., Henseler, J. (eds.) Handbuch PLS-Pfadmodellierung: Methode, Anwendung, Praxisbeispiele, p. 82. Schäffer-Poeschel, Stuttgart (2005)
23. Taber, K.S.: The use of Cronbach's alpha when developing and reporting research instruments in science education. Res. Sci. Educ. 48(6), 1273–1296 (2018)
24. Chin, W.W.: The partial least squares approach for structural equation modeling. Modern Methods Bus. Res. 8(2), 295–336 (1998)
A Study on the Influence of Advances in Communication Technology on the Intentions of Urban Park Users

Noriyuki Sugahara and Masakazu Takahashi
Abstract In this study, we investigated the influence of communication technology and communication devices on people's intentions regarding the use of urban parks. Urban parks are public open spaces necessary for a sustainable society because they alleviate problems caused by urbanization all over the world. People use communication devices to share their experiences in urban parks. Efficient management requires knowing how people use urban parks and how they perceive them. At the same time, the development of communication technology has dramatically changed people's lives. Examining the impact of communication technology on people's intention information is essential for understanding their behavior and emotions, and it helps to manage urban parks efficiently. The analysis compared data from 2004, 2008, and 2014. The content of the posts did not change over time, but there were differences depending on the device used. Posts from personal computers contain many topics related to tourism, while posts from smartphones contain many personal opinions on topics such as childcare and family. The portability of communication devices made a difference in intention information. Knowing the influence of the device used is essential for understanding people's behavior and emotions.
1 Introduction

In recent years, communication technology has advanced rapidly. The Internet has become widespread since around 2000. Advances in communication technology have brought great benefits and comfort to people's lives. Understanding the enormous amount of information on the Internet contributes to solving problems in our society, lives, and cities. Clearly estimating the behavior of urban park users is very important for a comfortable life in the city. However, no established method exists, and a new one is required.
On the other hand, urban parks have many problems that have remained unsolved for a long time. Many people use urban parks and want them to be comfortable. However, people's notions of comfort are diverse, and so is their behavior. Diverse behaviors in the same urban park can cause conflicts between people, and understanding this behavior is necessary to resolve such conflicts. Understanding the information on the Internet requires a new method of analyzing people's intentions. This study analyzed intention information concerning urban park users, which contributes to understanding the effect of evolving communication devices and technology on human behavior. People use different devices according to topics and situations. Posts from smartphones contain many personal and recent topics, while posts from PCs contain many topics that require comparing and searching for information. When estimating people's behavior and intentions more precisely from information on the Internet, it is useful to consider the communication devices used. This study is part of a larger effort to develop a method for achieving comfortable urban parks. The rest of the paper is organized as follows: Section 2 describes the research background. Section 3 describes the data configuration and experimental method. Section 4 describes the experimental results. Finally, Sect. 5 presents concluding remarks and future work.
2 Background

2.1 Development of Communication Technology in Japan

The Internet penetration rate in Japan grew rapidly around 2000, and at the same time it became possible to connect to the Internet using mobile phones [1]. E-mail on mobile phones is very convenient, and everyone came to use it. However, the number of characters that could be used was small, and it was challenging to convey what people thought. Therefore, "Emoji" were developed by the Japanese communications company NTT. Emoji are now used all over the world, often as a tool for efficiently communicating feelings on social networking services. Moreover, communication on the Internet now includes many movies, photos, and similar media that help people understand each other. Along with the spread of the Internet, communication devices are also spreading. The penetration rate of mobile phones, including smartphones, is almost 95%. The penetration rate of PCs is decreasing and is now about 70%. Smartphones started to spread in 2010 with the release of the iPhone and reached about 75% in 2017. Tablets began to spread with the iPad, released at the same time as the iPhone, and now stand at about 35%. Furthermore, the communication devices we use every day keep evolving. This evolution gave us convenience, and quick communication was
made possible. Posts represent human behavior and intentions. The shorter the time between a behavior and the post about it, the more realistic the content, because people's emotions are known to fade over time [2]; comparing past posts with present posts therefore allows a clearer estimation of human behavior and intention. Posting times have become shorter than in the past as a result of the evolution of communication technology and devices.
2.2 Urban Park in Japan

Japanese urban parks are maintained according to the City Planning Act [3]. There are over 100,000 urban parks in Japan, of which over 80% are small block parks. The purposes of maintaining urban parks include disaster prevention, landscape, children's play space, and so on. Many people gather in urban parks and enjoy their lives there. The purposes of urban park users are very diverse [4]: for example, dog walking, relaxation, performance, exercise, conversation, childcare, ball games, photography, flower viewing, etc. People want to do different things in the same park at the same time, and different behaviors can cause conflicts between people. We proposed avoiding such conflicts in urban parks by using the "Nudge" approach, which is effective for this purpose [5]. A "Nudge" can be expected to guide people's behavior and resolve conflicts. To take measures that make urban parks comfortable, such as nudges, it is necessary to know the actual situation of urban park users. Many surveys and reports on urban parks are conducted by national and local governments [6–8]. However, these surveys are limited in time and place and can cover only a part of all urban parks in the nation; therefore, the opinions and impressions collected from urban park users are also only partial. For example, in a survey of urban parks nationwide conducted by the Japanese government every 5–7 years [9], the number of parks investigated was only 273,

when α > 0%, the label is Positive; when α < 0%, it was set to Negative and labeled.
Fig. 2 Diagram of the labeling method
$$\text{Stock price fluctuation rate } \alpha\ (\%) = \frac{(\text{Average price 1 minute after news distribution}) - (\text{Average price 1 minute before news distribution})}{\text{Average price 1 minute before news distribution}} \times 100 \tag{1}$$

$$\text{Positive: } \alpha > 0\%, \qquad \text{Negative: } \alpha < 0\%$$
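A direct transcription of Eq. (1) into code: given the average prices one minute before and after distribution, compute α and assign the label. The function and variable names are our own, not from the paper.

```python
def label_news(avg_price_before: float, avg_price_after: float) -> str:
    """Label a news item by the stock price fluctuation rate of Eq. (1)."""
    alpha = (avg_price_after - avg_price_before) / avg_price_before * 100
    # The paper defines only alpha > 0% (Positive) and alpha < 0% (Negative);
    # the case alpha == 0 is not specified and falls into Negative here.
    return "Positive" if alpha > 0 else "Negative"

# Example: price rises from 6,500 to 6,513 yen in the minute after distribution.
print(label_news(6500.0, 6513.0))  # -> "Positive" (alpha = 0.2%)
```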
3.3 News Articles Generation

In this study, GPT-2 is used as the language generation model. We used the model with 117 million parameters, which is the version of GPT-2 that is currently publicly available. The GPT-2 model used is pre-trained on 40 GB of text data. GPT-2 is a significant scale-up of GPT: the full model has 1.5 billion parameters and was trained on a dataset of 8 million web pages (40 GB) using a 48-layer network [9]. Figure 3 shows the data used for the analysis by the constructed news article evaluation model. In addition to the original news article texts, the news article texts generated by GPT-2 were added to the data for analysis, and the news articles were evaluated.

Fig. 3 Diagram of data stored in the database
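Text can be generated with the publicly released 117M-parameter GPT-2 along the following lines, using the Hugging Face transformers library. The paper does not state its exact decoding settings, so the prompt and sampling parameters below are illustrative assumptions.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # 117M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Seed the generator with an original (here: positive) news headline.
prompt = "TOYOTA TO START SELLING NX COMPACT CROSSOVER SUV IN U.S. IN NOV"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

outputs = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,          # sample rather than greedy decode
    top_k=40,                # illustrative assumption, not the paper's setting
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```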
3.4 Vectorization of News Articles

For news vectorization, we used Skip-gram, the most widely used variant of word2vec [6]. A model that predicts the surrounding words from the central word in a document is called Skip-gram. When words appear in the order W1, W2, …, WT, Skip-gram learns vectors that maximize the average log probability defined by Eq. (2).
$$\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\ j \ne 0} \log p\left(W_{t+j} \mid W_t\right) \tag{2}$$
3.5 Classification Through the LSTM Model The LSTM model was used for classification analysis. LSTM (Long Short-Term Memory) is a kind of RNN that learns time series data. LSTM extends RNN to enable long-term learning of dependencies [5]. We used The LSTM model for classification analysis, and the cross-validation score (correct answer rate) used for accuracy verification. For classification analysis, Set Compile to loss = ’binary_crossentropy’, optimizer = ’rmsprop’, metrics = [‘accuracy’] and analyze.
4 Results 4.1 Labeling Based on Stock Price Fluctuations The news data was labeled using the formula of equation (1). Figure 4 shows the results of the labeling. The labeling results showed that there was slightly more positive news than negative news during the analysis period 2014–2016.1 As an example of the result of the labeling, the news articles about “TOYOTA TO START SELLING NX COMPACT CROSSOVER SUV IN U.S. IN NOV, AIMS TO SELL 42,000 NX SUVS ANNUALLY IN U.S. -EXEC” had a positive effect on stock prices, the news articles about “TOYOTA MOTOR SAYS NO TRUTH TO 1 In this study, the positive news and negative news are based on stock price fluctuations and are not
regarding emotional polarity values.
318
Y. Nishi et al.
Fig. 4 Labeling results
REPORT ABOUT TIE-UP TALKS WITH SUZUKI MOTOR” had a negative effect on stock prices.
4.2 Generating News Using GPT-2 Figure 5 shows news articles about Toyota Motor Corporation generated based on positive original news articles. As a result of generating news articles by GPT-2, news Fig. 5 Example of a news article generated regarding Toyota Motor Corporation
Table 2 Data set used for analysis

            Model 1 (Original news only)   Model 2 (Created news addition)
Positive    1,137                          2,137
Negative    1,122                          2,122
Total       2,259                          4,259
Table 3 Results of classification analysis through the LSTM model

                 Model 1 (Original news only)   Model 2 (Created news addition)
Accuracy score   0.615                          0.784
Table 2 shows the breakdown of the dataset for each model. Using GPT-2, we created 1,000 positive news articles and 1,000 negative news articles, 2,000 articles in total, and added them to the Model 2 dataset as new data.
4.3 Results of Model Accuracy

Table 3 shows the results of the classification analysis. The classification analysis through the LSTM confirmed that the cross-validation score (accuracy score) is 16.9 points higher for Model 2 than for Model 1.
5 Summary

This study constructed a news article evaluation system that utilizes a language generation model to analyze financial markets. It is possible to generate news articles through GPT-2 based on news articles distributed within the financial markets. Comparing the model trained only on the original news articles with the model trained on the original plus generated articles, the accuracy score was higher for the model that included the generated news articles. Building a more accurate news article evaluation model is useful for investors such as fund managers because it contributes to the correction of asset price valuations. In this study, only three companies (Toyota Motor Corporation, Honda Motor Co., Ltd., and Nissan Motor Co., Ltd.) were analyzed. Extending the analysis to all listed companies is one of our future projects.
References

1. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: visual question answering. In: Proceedings of the International Conference on Computer Vision (2015)
2. Fung, G.P.C., Yu, J.X., Lam, W.: News sensitive stock trend prediction. In: Proceedings of the 6th Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 481–493 (2002)
3. Fung, G.P.C., Yu, J.X., Lam, W.: Stock prediction: integrating text mining approach using real-time news. In: Proceedings of the IEEE International Conference on Computational Intelligence for Financial Engineering, pp. 395–402 (2003)
4. Gidófalvi, G.: Using news articles to predict stock price movements. Technical Report, Department of Computer Science and Engineering, University of California (2001)
5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
6. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: Proceedings of the International Conference on Learning Representations Workshop (2013)
7. Mittermayer, M.A.: Forecasting intraday stock price trends with text mining techniques. In: Proceedings of the 37th Hawaii International Conference on System Sciences (2004)
8. Nishi, Y., Suge, A., Takahashi, H.: Text analysis on the stock market in the automotive industry through fake news generated by GPT-2. In: Proceedings of the Artificial Intelligence of and for Business (2019)
9. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. Technical Report, OpenAI (2019)
10. Rong, X.: word2vec parameter learning explained. arXiv preprint arXiv:1411.2738 (2014)
11. Schumaker, R.P., Chen, H.: Textual analysis of stock market prediction using breaking financial news. ACM Trans. Inf. Syst. 27, 1–19 (2009)
12. Zhang, R., Guo, J., Fan, Y., Lan, Y., Xu, J., Cheng, X.: Learning to control the specificity in neural response generation. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 1108–1117 (2018)
Constructing a Valuation System Through Patent Document Analysis

Shohei Fujiwara, Yusuke Matsumoto, Aiko Suge, and Hiroshi Takahashi
Abstract In the context of Initial Public Offerings (IPOs), many prior studies have discussed underpricing, where the public offering price is below the initial market price. In this study, we propose a novel system for company valuation employing patent document analysis. Specifically, we first extracted patent document data published from 2001 to 2015 and vectorized it through Sparse Composite Document Vectors (SCDV). Similar companies in this study are those in the same industry and with similar products and business scale, although there are various theories on what should be treated as a similar company. Next, the center of gravity of each company's vectorized patent documents is calculated, and comparable companies are selected based on the resulting distances between companies. A detailed analysis is planned for further study.
1 Introduction

Firms play a significant role in the social system, and a great number of studies have tackled corporate issues through agent-based modeling and information technologies [2, 4, 19, 21, 27, 28]. Corporate value calculation is one of the important financial modeling tasks in business management. In particular, at the time of an IPO, the offering price depends on the results of the corporate valuation because a new issuer has no existing stock price. IPOs are for companies whose internal value
and borrowings from banks alone cannot cover the investment needed to increase corporate value; publicly offering shares and raising funds from an unspecified number of investors is an effective means of maximizing corporate value. However, underpricing of the offering price (the underestimation problem) has been on the agenda of IPOs for many years. Underpricing refers to the phenomenon in which the initial price of a newly listed company exceeds the offering price announced by the lead manager in the amended prospectus. Underpricing raises problems such as the inability of newly listed companies to maximize the amount of financing they could originally have obtained. The three main methods of evaluating stock prices are the market price standard method, the comparable company comparison method, and the Discounted Cash Flow (DCF) method [31]. In particular, the comparable company comparison method is used in a wide variety of applications, such as calculating corporate value in M&A and determining the offering price under the book-building method in an IPO. On the other hand, the calculation results depend heavily on the selection of comparable companies, which directly affects the financing and acquisition price. Therefore, the criteria for selecting comparable companies and the logic of the selection process are important. In this study, we try to select comparable companies based on the patents owned by companies. Specifically, the patent document data is vectorized using Sparse Composite Document Vectors [6], and the center of gravity of each company's patents is calculated from the vectorized data. The Sparse Composite Document Vector is an important model with higher accuracy than existing methods (for example, the Neural Tensor Skip-Gram Model (NTSG-1 to NTSG-3) [22], the Bag-of-Words (BoW) model [32], and the tf–idf weighted average word-vector model [23]) in multi-label classification using sentence vectors among natural language processing techniques. Next, by measuring the distances between the companies thus obtained, companies at short distances are selected as comparable companies.
2 Related Work

Underpricing has occurred in IPO offering prices determined with the comparable company comparison method. According to Ritter [25], an average underpricing of 28.4% occurred among 1,689 Japanese companies that conducted IPOs between 1970 and 2001. There are many hypotheses about the cause of underpricing. For example, among hypotheses attributed to the lead underwriter is the Agency Hypothesis [3, 12, 14–16, 20, 29], under which public prices are set at a discount to avoid litigation from investors. Among hypotheses attributed to the newly listed company is the Signaling Hypothesis, under which a company with high corporate value and strong expected future profits sends a signal to the market by bearing the cost of a lowered public price (Allen and Faulhaber [1], Courteau [5], Grinblatt and Hwang [7], Welch [30]). Investor-driven hypotheses include the Investor Sentiment Hypothesis [13]. Although
many previous studies have discussed the causes of underpricing, a unified view has not yet been reached. In the field of empirical analysis, Kim and Ritter [11] pointed out that firm values calculated with the commonly used comparable firm comparison method could not explain the initial prices of public stocks. Subsequently, Purnanandam and Swaminathan [24] reported that when the fundamental values of newly listed companies calculated by the comparable company comparison method were compared with the listing prices, the prices of the 1980–1997 issues were consistently higher. In practice, it is generally accepted that comparable companies are selected by industry when conducting the comparable company comparison method, even though each company is assigned only a single industry code. Especially in the case of diversified companies, there may be much room for improvement in selecting comparable companies by industry. Therefore, in this research, we try to use patents as a new index for selecting comparable companies in the comparable company comparison method. There are three main approaches to measuring the similarity between companies using patents. First, Jaffe [10] measures the distance between technologies by vectorizing the shares of the technology fields owned by a company. Second, Stuart and Podolny [26] measure the similarity between companies based on the status of patent citations. Third, Hall et al. [8] measure the similarity between companies based on the number of patent citations. As a recent trend in the finance field, there is growing research using alternative data such as patent document data. For example, Hoberg and Phillips [9] analyzed the relationship with M&A by measuring product similarity between companies from Form 10-K documents. Loughran and McDonald [17] created a dictionary using words in Form 10-K filings and analyzed its relationship to stock prices. The use of alternative data, which provides new analytical approaches and improves accuracy by increasing the amount of data analyzed, has thus been increasing; the utilization of patent document data in this study follows this trend.
3 Data

The patent document data to be analyzed are patents held by 763 listed Japanese companies with publication dates between 2001 and 2015. The number of patent records used for the patent document vectors is 2,446,802. The patent data of each company was obtained from Thomson Reuters, using the Derwent World Patent Index (DWPI). The DWPI is a compilation of titles and third-party abstracts written by technical experts. The parts of the DWPI that describe the novelty, detailed description, uses, and advantages of patents were used for the analysis [18] (Fig. 1).
Fig. 1 Used data from Derwent World Patent Index
4 Algorithm

In this study, we adopted the Sparse Composite Document Vectors method proposed by Dheeraj et al. [6], which is expected to improve accuracy and precision compared to mainstream methods such as Bag of Words for creating vectors from document data. We created a vector for each patent document using this method. All variables, such as the number of dimensions and the number of clusters in the Sparse Composite Document Vectors, follow Dheeraj et al. [6]: the number of dimensions (d) of the Skip-Gram model is 200, the number of clusters (k) in the mixture distribution model is 60, and the sparsity value was set to 3%.

First, a d-dimensional word vector is obtained by applying the Skip-Gram model to the four items of text data in the above abstract. Second, the word vectors are soft-clustered with the mixture model, which assigns each word a probability for each cluster; the d×k-dimensional word vector is obtained by concatenating the cluster-weighted vectors $\overrightarrow{wcv}_{ik}$. Third, the resulting d×k-dimensional word vector is weighted by the inverse document frequency (IDF) of each word to obtain $\overrightarrow{wtv}_i$. In the following equations, i indexes a word and k indexes the clusters (Eqs. 1 and 2).¹

$$\overrightarrow{wcv}_{ik} = wv_i \times P(C_k \mid w_i) \tag{1}$$

$$\overrightarrow{wtv}_i = IDF_i \times \bigoplus_{k=1}^{K} \overrightarrow{wcv}_{ik}, \qquad \oplus = \text{concatenation} \tag{2}$$

Equation 3 shows the formula for the inverse document frequency IDF, where N represents the total number of documents and $df_t$ represents the number of documents in which word t appears (Eq. 3).
¹ Refer to Dheeraj et al. [6] for details of the formulas.
Fig. 2 Sparse Composite Document Vectors
$$IDF_t = \log \frac{N}{df_t} + 1 \tag{3}$$
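A compact sketch of Eqs. (1)–(3): soft-cluster the word vectors with a Gaussian mixture, scale each word's cluster-weighted copies, concatenate, and weight by IDF. The dimensions follow the paper (d = 200, k = 60); the word vectors and document frequencies are random placeholders, not real patent data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

d, k = 200, 60                     # dimensions and clusters as in the paper
vocab_size, n_docs = 1000, 500     # placeholder corpus statistics
rng = np.random.default_rng(0)

word_vectors = rng.standard_normal((vocab_size, d))   # wv_i from Skip-Gram
df = rng.integers(1, n_docs, vocab_size)               # document frequencies
idf = np.log(n_docs / df) + 1                          # Eq. (3)

gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
gmm.fit(word_vectors)
probs = gmm.predict_proba(word_vectors)                # P(C_k | w_i)

# Eq. (1)-(2): wcv_ik = wv_i * P(C_k|w_i); wtv_i = IDF_i * concat_k(wcv_ik)
wtv = idf[:, None] * (probs[:, :, None] * word_vectors[:, None, :]).reshape(vocab_size, d * k)
print(wtv.shape)  # (1000, 12000) -- the d*k = 12,000-dimensional word vectors
```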
By summing up the word vectors $\overrightarrow{wtv}_i$ obtained through Eqs. (1)–(3), then standardizing and applying the sparsity threshold, the document vector for each patent (DWPI) was calculated (Fig. 2). The center of gravity $cv_i$ is then computed as the per-dimension average of n such 12,000-dimensional vectors $p_1, \ldots, p_n$:

$$cv_i = \left( \frac{p_1^{(1)} + p_2^{(1)} + \cdots + p_n^{(1)}}{n},\ \frac{p_1^{(2)} + p_2^{(2)} + \cdots + p_n^{(2)}}{n},\ \cdots,\ \frac{p_1^{(12000)} + p_2^{(12000)} + \cdots + p_n^{(12000)}}{n} \right) \tag{4}$$
Finally, the distance between the centers of gravity of the patents of each company, obtained by Eq. 4, is computed as the Euclidean distance between the vectors (Eq. 5):

$$\text{distance}(Company_i, Company_{i+1}) = \sqrt{\left(p_i^{(1)} - p_{i+1}^{(1)}\right)^2 + \left(p_i^{(2)} - p_{i+1}^{(2)}\right)^2 + \cdots + \left(p_i^{(12000)} - p_{i+1}^{(12000)}\right)^2} \tag{5}$$
Companies whose inter-company distance calculated from Eq. 5 is close to 0, that is, those with a high degree of patent technology similarity, are selected as comparable companies (Fig. 3). In this study, comparable companies are selected according to 11 model patterns, defined along three axes: whether to consider the type of business, the distance between companies, and the number of companies (Fig. 4).
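The centroid and distance steps of Eqs. (4) and (5) reduce to a few lines of numpy, as sketched below; the patent vectors and company codes here are stand-ins for the real SCDV document vectors.

```python
import numpy as np

def company_centroid(patent_vectors: np.ndarray) -> np.ndarray:
    """Eq. (4): per-dimension average of a company's n patent document vectors."""
    return patent_vectors.mean(axis=0)

def select_comparables(centroids: dict, target: str, top_n: int = 30) -> list:
    """Eq. (5): rank other companies by Euclidean distance to the target."""
    t = centroids[target]
    dist = {c: float(np.linalg.norm(v - t)) for c, v in centroids.items() if c != target}
    return sorted(dist.items(), key=lambda kv: kv[1])[:top_n]

rng = np.random.default_rng(0)
# Hypothetical 12,000-d SCDV vectors (20 patents each) for a handful of companies.
centroids = {code: company_centroid(rng.standard_normal((20, 12000)))
             for code in ["3694", "4452", "7203", "7267", "7201"]}
print(select_comparables(centroids, "3694", top_n=3))
```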
5 Result

The following are some of the results of the actual selection of comparable companies by the Sparse Composite Document Vectors analysis for 773 companies listed on the First Section of the Tokyo Stock Exchange, with publication dates from 2001 to 2015. Figure 5 is a two-dimensional visualization of the patents of Kao Corporation (stock code 4452) using the t-SNE method. t-distributed Stochastic Neighbor Embedding (t-SNE) is a dimension reduction algorithm suitable for visualizing high-dimensional data. Figure 6 shows the top 30 comparable companies of OPTiM (stock code 3694); the vertical axis represents the distance from OPTiM and the horizontal axis represents the securities code of each comparable company. As a result of these selections, comparable companies with similar business types and business forms were identified by this method. This suggests that, when the comparable company comparison method is implemented, the comparable companies selected using patent document data may be usable for calculating corporate value.

Fig. 3 Comparable company selection model
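The visualization in Fig. 5 can be reproduced in outline with scikit-learn's t-SNE implementation; the patent vectors below are random placeholders for one company's SCDV vectors.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
patents = rng.standard_normal((300, 12000))   # stand-in for one company's SCDV vectors

xy = TSNE(n_components=2, random_state=0).fit_transform(patents)
plt.scatter(xy[:, 0], xy[:, 1], s=5)
plt.title("Patent visualization (t-SNE)")
plt.show()
```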
Fig. 4 11 patterns of models
6 Conclusion

In this study, we first vectorized the patent data (DWPI) owned by each company through Sparse Composite Document Vectors, so that the distances between companies could be treated quantitatively. Next, we used the resulting vector representations to select comparable companies. As a result of intensive analyses, we found that comparable companies can be selected with a certain degree of accuracy by the proposed method. A detailed analysis is planned for further study.
Fig. 5 Patent Visualization (t-SNE) of Kao Corporation (stock code 4452)
Fig. 6 Top 30 comparable companies of OPTiM (stock code 3694)
References

1. Allen, F., Faulhaber, G.R.: Signaling by underpricing in the IPO market. J. Financ. Econ. 23, 303–323 (1989)
2. Axelrod, R.: The Complexity of Cooperation—Agent-Based Model of Competition and Collaboration. Princeton University Press (1997)
3. Baron, D.: A model of the demand for investment banking advising and distribution services for new issues. J. Financ. 37(4), 955–976 (1982)
4. Brealey, R., Myers, S., Allen, F.: Principles of Corporate Finance, 8E. McGraw-Hill (2006)
5. Courteau, L.: Under-diversification and retention commitments in IPOs. J. Financ. Quant. Anal. 30(4), 487–517 (1995)
6. Dheeraj, M., Vivek, G., Bhargavi, P., Harish, K.: SCDV: sparse composite document vectors using soft clustering over distributional representations. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 659–669. Association for Computational Linguistics (2017)
7. Grinblatt, M., Hwang, C.Y.: Signaling and the pricing of new issues. J. Financ. 44(2), 393–420 (1989)
8. Hall, B.H., Jaffe, A., Trajtenberg, M.: Market value and patent citations. RAND J. Econ. 36(1), 16–38 (2005)
9. Hoberg, G., Phillips, G.: Product market synergies and competition in mergers and acquisitions: a text-based analysis. Rev. Financ. Stud. 23(10), 3773–3811 (2010)
10. Jaffe, A.B.: Technological opportunity and spillovers of R&D: evidence from firms' patents, profits and market value. Am. Econ. Rev. 76(5), 984–999 (1986)
11. Kim, M., Ritter, R.J.: Valuing IPOs. J. Financ. Econ. 53(3), 409–437 (1999)
12. Loughran, T., Ritter, J.: Why don't issuers get upset about leaving money on the table in IPOs? Rev. Financ. Stud. 15, 413–443 (2002)
13. Loughran, T., Ritter, J.: The new issues puzzle. J. Financ. 50, 23–50 (1995)
14. Loughran, T., Ritter, J.: Why has IPO underpricing changed over time? Financ. Manag. 33(3), 5–37 (2004)
15. Liu, X., Ritter, R.J.: The economic consequences of IPO spinning. Forthcoming, Rev. Financ. Stud. (2009)
16. Lowry, M., Shu, S.: Litigation risk and IPO underpricing. J. Financ. Econ. 65(3), 309–335 (2002)
17. Loughran, T., McDonald, B.: When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. J. Financ. 66(1), 35–65 (2011)
18. Matsumoto, Y., Suge, A., Takahashi, H.: Construction of new industrial classification through fuzzy clustering. In: JSAI International Symposia on AI Workshops (2018)
19. Matsumoto, Y., Suge, A., Takahashi, H.: Capturing corporate attributes in a new perspective through fuzzy clustering. In: Kojima, K., Sakamoto, M., Mineshima, K., Satoh, K. (eds.) New Frontiers in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11717, pp. 19–33. Springer (2019)
20. Muzatko, R.S., Johnstone, M.K., Mayhew, W.B., Rittenberg, E.L.: An empirical investigation of IPO underpricing and the change to the LLP organization of audit firms. Aud. J. Pract. Theor. 23(1), 53–67 (2004)
21. Nishi, Y., Suge, A., Takahashi, H.: Text analysis on the stock market in the automotive industry through fake news generated by GPT-2. In: JSAI International Symposia on AI Workshops (2019)
22. Pengfei, L., Xipeng, Q., Xuanjing, H.: Learning context-sensitive word embeddings with neural tensor skip-gram model. In: Proceedings of IJCAI 2015, the 24th International Joint Conference on Artificial Intelligence, pp. 1284–1290 (2015)
23. Pranjal, S., Amitabha, M.: Words are not equal: graded weighting model for building composite document vectors. In: Proceedings of the Twelfth International Conference on Natural Language Processing (2015)
24. Purnanandam, K.A., Swaminathan, B.: Are IPOs really underpriced? Rev. Financ. Stud. 17(3), 811–848 (2004)
25. Ritter, R.J.: Investment banking and securities issuance. In: Constantinides, G., Harris, M., Stulz, R. (eds.) North-Holland Handbook of the Economics of Finance, Chapter 5 (2003)
26. Stuart, T.E., Podolny, J.M.: Local search and the evolution of technological capabilities. Strateg. Manag. J. 17(S1), 21–38 (1996)
27. Takahashi, H., Terano, T.: Agent-based approach to investors' behavior and asset price fluctuation in financial markets. J. Artif. Soc. Soc. Simul. 6, 3 (2003)
28. Takahashi, H.: An analysis of the influence of dispersion of valuations on financial markets through agent-based modeling. Int. J. Inf. Technol. Decis. Mak. 11, 143–166 (2012)
29. Tinic, M.S.: Anatomy of initial public offerings of common stock. J. Financ. 43(4), 789–822 (1988)
30. Welch, I.: Seasoned offerings, imitation costs, and the underpricing of initial public offerings. J. Financ. 44(2), 421–449 (1989)
31. Yee, K.K.: A Bayesian framework for combining valuation estimates. Rev. Quant. Financ. Account. 30(3), 339–354
32. Zellig, H.: Distributional structure. Word 10, 146–162 (1954)
Modeling of Bicycle Sharing Operating System with Dynamic Pricing by Agent Reinforcement Learning Kohei Yashima and Setsuya Kurahashi
Abstract The objective of this research is to encourage users participating in bicycle sharing schemes to return bicycles to specific stations in return for an incentive. In addition, we aim to verify a business operation model that differs fundamentally from the conventional operation based on truck allocation. We employed a reinforcement learning technique to direct user agents to respond autonomously to uncertain events. The action value in reinforcement learning is calculated by function approximation using a hierarchical neural network, whose configuration was arbitrarily chosen to be fully connected, with 3 hidden layers and 16 nodes. Function approximation is considered an effective method here because the number of state–action combinations is huge. The results indicated that the proposed model reduced the cost by approximately 7% compared to the conventional truck operation. This result suggests the usefulness of autonomous system operation using reinforcement learning.
1 Introduction

Bicycle sharing is a service that allows users to rent a bicycle at a particular station and return it to the same or any other station scattered across a specific area. The service is attracting attention as a form of public transportation and an alternative to taxis and buses. In recent years, many companies have entered the share cycle business in Japan, Europe, China, and other countries. The service is expected to reduce traffic congestion in urban areas, generate marketing data in the form of GPS data, and reduce CO2 emissions.

K. Yashima (B) · S. Kurahashi University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, Japan e-mail: [email protected]
S. Kurahashi e-mail: [email protected]
The disadvantage of the service is that, because of user demand patterns, shared bicycles typically accumulate at specific stations. Therefore, the major problem when operating a share cycle business is the labor and transportation costs that arise from the need to collect bicycles from the stations at which they accumulate and redistribute them to the other stations by truck. User demand differs depending on the situation, such as whether a station is located in an area containing offices or an area frequented by tourists. In addition, usage behavior varies according to the characteristics of each user, so effective redistribution is difficult. The purpose of this research is to encourage users to return bicycles to specific stations by paying them an incentive. The research also aims to verify a business operation model that is fundamentally different from the conventional operation based on truck allocation. The experiment is based on agent simulation, because multiple interactions occur between the users and the environment.
2 Related Work

2.1 Main Factors in Commuters' Choice of Traffic Behavior

Heinen et al. [1] conducted an online questionnaire survey of employees working for several large Dutch companies in 2008 and used factor analysis to estimate the main factors related to bicycle use during commuting. The questionnaire used a five-point Likert scale from −2 to +2 and was framed as a general survey on traffic behavior, rather than one limited to cycling, in order to avoid bias. The basic behavioral theory affecting bicycle commuting has four components: attitude, subjective norms, perceived behavioral control, and habit. In addition, the study states that weather and luggage volume affect choice behavior differently, depending on whether a bicycle is used for daily commuting. The results of the factor analysis are listed in Table 1, with loadings below 0.3 omitted. Three factors are identified. The first is "Direct benefit," with high scores for "Comfortable" and "Time-saving." The second is "Awareness," with high scores for "Environmental benefits," "Health benefits," and "Mentally relaxing." The third is "Safety," with high scores for "Socially safe" and "Traffic safety." These results show that commuters are more likely to choose a bicycle based on a "Direct benefit" such as "Time-saving," "Comfortable," and "Flexible."
Table 1 Results of the factor analysis on commuters' choice of traffic: factor scores of attitudes toward characteristics of bicycle commuting, grouped into the factors "Direct benefit," "Awareness," and "Safety" across the items Comfortable, Flexible, Mentally relaxing, Health benefits, Cheap, Suits lifestyle, Physically relaxing, Environmental benefits, Pleasant/Nice, Offers privacy, Socially safe, Time-saving, and Traffic safety (values below 0.3 are not reported)

2.2 Behavioral Change by Incentive

Lu et al. [2] used agent-based modeling to simulate the behavior of commuters in Taipei in choosing between a bike-sharing service and other modes of transportation, in order to effect
improvements such as reducing CO2 emissions. The framework of the entire model is depicted in Fig. 1. Six modes of transportation are available for commuting: "bus," "metro," "bicycle," "walking," "motorcycle," and "car." Among these modes, "bus" and "metro" may need to be connected to other modes to complete the trip, whereas "bicycle," "walking," "motorcycle," and "car" can be used for end-to-end trips. The environment used for the simulation is based on actual spatial information of Taipei City, including the locations of bus stops and metro stations. Commuting agents' choice of transportation is affected by four factors: "travel fees," "total travel time," "walking or cycling time," and "car or motorcycle ownership." The value of time is defined based on the hourly wage calculated by taking into account the socioeconomic status of each agent, and it determines which mode of transportation is used. Three experimental scenarios are set: the first entails the installation of 369 share cycle stations near bus stops, the second makes a share cycle used in transit to connect to the bus or metro free of charge, and the third involves issuing a coupon worth 2 NTD (roughly $0.66) after use, against a share cycle usage fee of 5 NTD (roughly $1.66). The experiment examines which of these scenarios leads to a sustainable approach with less environmental impact, compared to the proportions of transportation options selected for commuting in Taipei City in 2015. The results of each scenario are provided in Table 2. In Scenario 1, in which new share cycle stations are installed near bus stops, the use of share cycles increases from 5.40 to 5.79% and the use of the bus increases
Fig. 1 Model framework

Table 2 Taipei scenario simulation results

Mode (%)   2015 BAU   Scenario 1 (infrastructure extensions)   Scenario 2 (free for transit connection)   Scenario 3 (2 NTD coupon)
Bike       5.40       5.79                                     6.30                                       5.60
Walk       16.40      15.70                                    20.47                                      20.00
Motor      27.30      31.40                                    24.41                                      33.60
Car        16.90      12.40                                    10.24                                      12.80
Bus        17.20      21.49                                    19.69                                      14.40
Metro      16.90      13.22                                    18.90                                      13.60

Notes BAU represents business as usual, and NTD refers to the New Taiwan Dollar
from 17.20 to 21.49% compared to the 2015 baseline result. This indicates that the bus has captured market share from the metro. A comparison of scenarios 2 and 3 shows that the free-to-connect scenario 2 has a higher share of cycle usage than the coupon-issuing scenario 3 and has the lowest environmental impact compared to all scenarios. This related work considers sustainability by simulating transportation choices for commuters. However, the problem presented by the need to redistribute the share cycles is not addressed. The paper also mentions an extension of the model for tourists, and states that it is necessary to consider a way of thinking about “time” and “costs” that differs from that of commuters.
2.3 Agent Simulation Using Reinforcement Learning

Shimizu et al. [3] employed Q-learning, one type of reinforcement learning, for agent-based modeling and reproduced the uneven demand of the share cycle problem following a game-theoretic approach. The simulation covers five scenarios with different station locations and conditions. In the commuter scenario, agents travel downtown from a station located outside the city. Q-learning converges quickly because each agent's destination and return location are the same, but the simulation does not consider the return trip, and the departure station ends up depleted of bicycles. In another scenario, the hill scenario, one of the seven stations is on top of a hill and the remaining six are at the bottom. The simulation can reproduce the realistic event in which bicycles are not returned from the stations at the bottom of the hill to the station at the top. However, all users returned to the station at which they rented their bicycle, so this scenario does not reproduce the situation in which the hilltop station is depleted of bicycles. The user behavior in that study therefore does not correspond to reality, and the relationship between incentives and behavioral changes was not clarified.
3 Simulation Model

3.1 Behavior Selection Model

An overview of the simulation model is presented in Fig. 2. In this study, the agent that proposes an incentive and requests a dispatch is referred to as the "overlooking agent." The overlooking agent uses reinforcement learning to explore the value Q(s, a) of selecting action a in state s. In this simulation, the state s is the number of bicycles remaining at each station, obtained from the environment every hour, and the action a is the overlooking agent's dispatch request, consisting of a start station, a different end station, an incentive amount, and a number of bicycles (one or three). The value Q(s, a) is calculated by function approximation using a hierarchical neural network. The network configuration was arbitrarily chosen to be fully connected, with 3 hidden layers and 16 nodes. Function approximation is considered an effective method here because the number of state–action combinations is huge.
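As a concrete reading of this behavior selection model, the sketch below shows one plausible way to approximate Q(s, a) with the stated network shape: fully connected, 3 hidden layers of 16 nodes. The framework choice (PyTorch), the flat encoding of the dispatch action, and the reading of "16 nodes" as 16 nodes per hidden layer are our assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

N_STATIONS = 4   # stations A-D in the paper's environment (Sect. 3.2)
ACTION_DIM = 4   # assumed flat encoding: (from, to, incentive, n_bikes)

class QNetwork(nn.Module):
    """Fully connected value network with 3 hidden layers of 16 nodes,
    mapping a (state, action) pair to an approximate Q(s, a)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STATIONS + ACTION_DIM, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def select_action(qnet, state, candidate_actions):
    """Greedy selection over an enumerated set of candidate dispatches."""
    states = state.expand(len(candidate_actions), -1)
    with torch.no_grad():
        q = qnet(states, candidate_actions)
    return candidate_actions[q.argmax()]
```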
3.2 Environment

In the environment, various user demand patterns can be considered according to the locations of stations, such as in office districts or at sightseeing spots. In the environment used in this simulation, the business operation model is verified under specific conditions, based on the assumption that a sightseeing spot is located
Fig. 2 Overview of the simulation model

Fig. 3 Schematic of the simulation environment
within 2 km of a station. The weather is assumed to be sunny on a comfortable outdoor weekend. A schematic representation of the simulation environment is shown in Fig. 3. The environment contains four stations, located at the four corners of a square with a side of 1 km (the diagonal is 2 km in Manhattan distance). In the initial state, 10 bicycles are installed at each station. To simulate an environment where demand changes dynamically between the morning and the evening, station A is assumed to be a railway station, station B is stop-off facility 1, station C is stop-off facility 2, and station D is a sightseeing spot. In the morning, many users move from the railway station to the sightseeing spot, and in the evening, many users move from the sightseeing spot back to the railway station.
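A minimal sketch of this geometry follows, assuming a particular assignment of stations A–D to the corners (the paper fixes only the square, the Manhattan metric, the station roles, and the initial stock of 10 bicycles):

```python
# Station coordinates in km at the corners of a 1-km square (Sect. 3.2);
# the mapping of A-D to specific corners is an assumption for illustration.
STATIONS = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}
INITIAL_BIKES = {s: 10 for s in STATIONS}   # 10 bicycles per station

def manhattan_km(a: str, b: str) -> float:
    (x1, y1), (x2, y2) = STATIONS[a], STATIONS[b]
    return abs(x1 - x2) + abs(y1 - y2)

assert manhattan_km("A", "D") == 2.0   # the diagonal is 2 km, as stated
```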
Fig. 4 Number of bicycles remaining at each station per hour
3.3 Demand Curve

The demand at the stations changes dynamically between the morning and the evening; as a result, the number of bicycles remaining at stations A and C falls to 0. The demand at each station from 7:00 a.m. to 11:00 p.m. (17 hourly frames) was simulated. The number of bicycles remaining at each station per hour is shown in Fig. 4.
4 Experiments

4.1 Simulation Scenario

An agent who dispatches bicycles is assumed to be waiting for requests at the stations, similar to a self-employed courier for the food and beverage delivery service UberEats. This agent is referred to as a "specialist agent." UberEats sets a reward of 150 yen/km from the restaurant to the delivery destination. This simulation assumes that the specialist agent delivers the bicycles to the end-station and then
338
K. Yashima and S. Kurahashi
walks back to the start-station. The return trip is assumed to attract a further reward of 150 yen/km. Therefore, the total incentive per kilometer is 300 yen. In all experiments, the overlooking agent selects one action on the hour, every hour, and this is taken as a single step; it can thus select up to 17 actions in order from 7:00 a.m. to 11:00 p.m., so the maximum number of steps is 17. However, the step count and the environment are reset when the number of bicycles remaining at any station becomes 0, so there are cases in which the number of steps does not reach 17. (A count of 17 steps means that no station runs out of bicycles before 11:00 p.m.) One episode ends either when one of the stations is depleted of bicycles or when 17 steps are completed; the step count and the environment are then reset, and the next episode starts anew. In this study, 100,000 steps constitute one experiment, and five experiments are performed for each scenario. The results are expressed in terms of the total amount paid as incentives.
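The step and episode bookkeeping described above can be summarized as follows. The env and agent objects are hypothetical stand-ins for the paper's environment and overlooking agent, manhattan_km is the helper from the environment sketch in Sect. 3.2, and the assumption that the incentive is paid once per request (rather than per bicycle moved) is ours; the paper does not specify this.

```python
INCENTIVE_PER_KM = 150 + 150   # yen: delivery leg plus the walk back (Sect. 4.1)
MAX_STEPS = 17                 # one action on the hour, 7:00 a.m. to 11:00 p.m.

def run_episode(env, agent):
    """Run one episode: it ends when a station is depleted of bicycles
    or when all 17 hourly steps have been taken."""
    state = env.reset()                        # bikes remaining per station
    paid = 0.0
    for _ in range(MAX_STEPS):
        src, dst, n_bikes = agent.act(state)   # dispatch request (Sect. 3.1)
        # Assumed: the incentive is paid per request, proportional to distance.
        paid += manhattan_km(src, dst) * INCENTIVE_PER_KM
        state, depleted = env.step(src, dst, n_bikes)
        if depleted:                           # some station reached 0 bikes
            break
    return paid
```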
4.2 Experimental Results

When the overlooking agent makes a dispatch request to a specialist agent, it specifies the number of bicycles (either one or three) in addition to the start point and end point. Specialist agents work to earn incentives; therefore, as long as stations with available bicycles remain, they always respond to instructions from the overlooking agent. The simulation was performed iteratively for a maximum of 100,000 steps, with one action per hour considered as a single step of the overlooking agent. The changes in the number of steps and total incentives for each episode are shown in Fig. 5.
Fig. 5 Number of steps and incentive paid versus the number of episodes
Up to approximately 500 episodes, at most seven steps were completed per episode: within the seven hours from 7:00 a.m. to 1:00 p.m., the number of bicycles remaining at some station reached 0, triggering the transition to the next episode. On the other hand, from approximately 1,000 episodes onward, the number of steps is often 17. These results show that the system can be operated throughout the day without any station running out of bicycles, indicating that the agent has learned effectively and has been making appropriate dispatch requests. Next, for each experiment, the averages and standard deviations over the final 100 episodes in which no station was depleted are listed in Table 3 and plotted in Fig. 6. The average incentive amount over all the experiments was 19,825 yen and the standard deviation was 1,592 yen. The third experiment yielded a small standard deviation. In the first experiment, the minimum was approximately 15% lower than in the other experiments. The experiments also differed in the rate at which the final 17th step was reached.
Table 3 List of experimental results

                 Experiment 1  Experiment 2  Experiment 3  Experiment 4  Experiment 5  Total of all experiments
Number of data   86            95            88            80            94            443
Average          17,892        20,100        19,793        21,450        19,962        19,825
SD               1,030         1,438         777           1,276         1,102         1,592
Min              15,300        17,400        17,700        18,900        17,400        15,300
Max              21,300        23,100        21,900        26,100        23,400        26,100
Fig. 6 Box plot of the experimental results
4.3 Discussion

The varying incentive amounts paid across the experiments are likely due to the stochastic selection of actions. In actual operation, the results of each run would be expected to coincide if a non-stochastic (i.e., greedy) action were selected after learning is completed. The simulation thus realized autonomous operation by reinforcement learning in which no station ran out of bicycles. The assumptions on which the operating cost is based are listed below, to enable comparison with the cost and CO2 emissions of conventional truck operation.

Truck operating costs and CO2 emissions:
• Gasoline price: 146 yen/day (distance 10 km, gasoline 0.96 L)
• Labor cost: 16,000 yen/day (two workers)
• Car maintenance costs: 230 yen/day (various types of insurance)
• Total cost: 16,376 yen/day
• CO2 emissions: 812 kg/year (gasoline 0.96 L/day × CO2 emission factor 2.32 kg-CO2/L × 365 days).
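The totals above follow directly from the listed assumptions, as a few lines of arithmetic confirm (the 812 kg figure appears to be 0.96 × 2.32 × 365 ≈ 812.9, rounded down):

```python
# Daily truck operating cost, using exactly the values listed above (yen/day).
gasoline, labor, maintenance = 146, 16_000, 230
assert gasoline + labor + maintenance == 16_376   # matches the stated total

# Annual CO2: 0.96 L/day of gasoline x 2.32 kg-CO2/L x 365 days.
co2_kg_per_year = 0.96 * 2.32 * 365
print(f"{co2_kg_per_year:.1f} kg/year")   # 812.9, quoted as ~812 kg/year
```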
The operating costs of the truck thus total 16,376 yen per day. Compared to this, the average incentive paid (19,825 yen) does not justify the proposed scheme on cost alone. However, the minimum value was 15,300 yen, indicating that the operating cost can be lower than that of the conventional model. In addition, CO2 emissions were reduced by approximately 812 kg per year, equivalent to the amount of CO2 absorbed by approximately 58 cedar trees in one year; the approach therefore also contributes to reducing environmental impact. As future work, first, the reinforcement learning reward was selected arbitrarily, and further cost reduction could be expected from appropriate reward design. Although a method such as inverse simulation, which utilizes expert knowledge for reward design, is conceivable, it is necessary to consider how to obtain expert knowledge for a business model that does not yet exist. Second, in this study, dispatch requests are made only to specialist agents, but an operation in which requests are made directly to users is also conceivable. In that case, user acceptance would vary with the situation, and parameters such as distance, incentive amount, and weather would need to be specified. Finally, stations with no remaining bicycles do occur even in conventional operation, and it would not be feasible to dispatch trucks to all of the stations added during future expansions. With the cooperation of specialist agents, however, the operation could be performed more flexibly than with trucks. The results of this study are expected to promote the spread of bicycle sharing operations and contribute to a sustainable society.
5 Conclusion

The purpose of this study was to propose and verify a business operation model that differs fundamentally from the conventional operation based on truck allocation. The first item to be verified was whether the reinforcement learning agent could make dispatch requests to the specialist agents and operate the system without any station running out of bicycles. The second was to determine how costs and CO2 emissions would change compared to conventional operations when redistribution is delegated to specialist agents. The experimental results indicated that, when demand must be considered as in this study, specialist agents combined with an overlooking agent trained by reinforcement learning can realize an operation in which no station runs out of bicycles. In addition, it was demonstrated that the replacement of trucks by agents could significantly reduce the costs and CO2 emissions compared to conventional operations.
References
1. Heinen, E., Maat, K., Van Wee, B.: The role of attitudes toward characteristics of bicycle commuting on the choice to cycle to work over various distances. Transp. Res. Part D: Transp. Environ. 16, 102–109 (2011)
2. Lu, M., Hsu, S.-C., Chen, P.-C., Lee, W.-Y.: Improving the sustainability of integrated transportation system with bike-sharing: a spatial agent-based approach. Sustain. Cities Soc. 41, 44–51 (2018)
3. Shimizu, S., Akai, K., Nishino, N.: Modeling and multi-agent simulation of bicycle sharing. In: Serviceology for Services, pp. 39–46 (2014)
Omni-Channel Challenges Facing Small- and Medium-Sized Enterprises: Balancing Between B2B and B2C Tomohiko Fujimura and Yoko Ishino
Abstract The omni-channel strategy is one of the main topics in marketing that has been receiving attention in recent years, because smartphones and social media are significantly altering consumer behavior. In general, setting up a large-scale omni-channel system requires significant investment, so it is often thought that only large retailers are able to employ omni-channel strategies; in fact, however, such strategies feature aspects that are especially suitable for small start-ups. This study focuses on the importance of omni-channel strategies for small- and medium-sized enterprises (SMEs) that are trying to become more active in e-commerce (EC). We examine an SME developing a new health food product. Given the nature of the product, the firm employs an omni-channel strategy that consists of B2C (business-to-customer) EC and B2B (business-to-business) sales. A probability-based risk simulation is performed to find the optimal weighting between B2C EC and B2B sales. This study demonstrates that making business-weighting decisions in advance by means of simulations is an important exercise for SMEs that do not have abundant capital or labor.
1 Introduction With recent developments in Information and Communications Technology (ICT), such as cloud-based services, omni-channel distribution strategies are attracting a great deal of attention as a new growth model for the retail industry. Macy’s, a major United States (US) department store, developed the concept in 2010, and since then, many other US retailers have been promoting omni-channel approaches. Several large Japanese retailers have begun merging their physical retail stores with the Internet in a similar manner.
T. Fujimura · Y. Ishino (B) Yamaguchi University, 2-16-1 Tokiwadai, Ube, Yamaguchi 755-8611, Japan e-mail: [email protected]
This approach includes integrated services such as enabling in-store pick-up of online purchases; enabling customers to check in-store stock levels online; and centrally managing customer and inventory information. Japan's largest retail group, Seven & I Holdings, launched its own omni-channel offering, called Omni7, in November 2015. Omni7 can be used for the online purchase of roughly 3 million items offered by the group's firms (a consortium of approximately 20 companies), and the goods can be collected, paid for, or returned at any of the group's roughly 18,500 retail outlets, which consist mainly of convenience stores. In general, setting up a large-scale omni-channel system requires significant investment from major distribution and retail companies. It also requires comprehensive organizational restructuring, including the unification and optimization of inventory information at both physical stores and EC outlets, training of employees, and distribution of mobile terminals. So how do small- and medium-sized enterprises (SMEs), which do not have as much available capital and labor, respond to this challenge? In reality, most SMEs that attempt to open online marketplaces to engage in EC go deeply into debt. Hence, most SMEs have not started omni-channel distribution yet. This study focuses on the importance of omni-channel strategies for SMEs that are trying to become more active in EC. As an example, we examine the case of a new health food product and simulate how changing strategies would affect the value of the business. In conclusion, this study suggests that SMEs should adjust the weightings of multiple channels regarding logistics.
2 Related Work

2.1 Stages to Achieve an Omni-Channel Sales Strategy

The historical evolution of retail methodology and consumer spending habits began in the 1990s with the appearance of EC [1, 2]. Although convenient, EC shopping makes it impossible to see or touch the products, so the customer always faces a certain type of risk known as perceived risk. To reduce the perceived risk, consumers have begun purchasing products online at lower cost after checking the physical product in a brick-and-mortar store, a process known as "showrooming" [3]. In response to this, retailers invented the "click and mortar" system to unify their physical stores with their online stores [4]. This trend has grown to include many elements of retail channels, such as sales staff, mobile phones, catalogs, direct mail, call centers, and social media, gradually evolving into what we today know as multichannel retail, a system that enhances customer contact [5]. As the multichannel strategy is fundamentally based on the idea of channel independence, the concept of the so-called "cross-channel," which focuses on inter-channel links, has begun to emerge as the next logical step in this
evolution [6]. Omni-channel is conceptually similar to cross-channel, but whereas cross-channel is formulated from the retailer's perspective, omni-channel emerged in order to understand channel activity from the consumer's perspective. Because of this, omni-channel depends on the seamless integration of continuous and comprehensive information to promote the free movement of consumers between physical stores and online outlets [7]. Omni-channel is a channel for trade as well as communication through a range of different contact points.
2.2 Consumer Decision-Making in EC There is a range of existing academic research that focuses on the determinative factors regarding customer decision-making when purchasing products via EC stores. Usuki et al. used customer surveys to analyze customer repurchase intention for online stores, and their results indicated that factors such as the practical use of online information, store credibility, satisfaction with distribution channels, and geographic convenience were all positively correlated with consumer purchasing intent [8]. The work of Smith et al. demonstrated that “brand” was a key determinant of consumer choice when choosing products for purchase and that consumers use brands as a proxy for retailer reputation [9]. The research done by Corbitt et al. emphasized that “trust” is an important part of online retailing for consumers [10], and Lee et al. constructed a model in which “trust” comprised the online retailer’s trustworthiness, the trustworthiness of the online shopping medium, and other contextual factors [11]. As can be seen in all of the above studies, trust is an important factor in making EC decisions. This is the reason for which larger retailers often promote brands and use the previously established trust as leverage to connect them to EC purchases.
3 Research Method: Using a Case of Health Food In Japan, with its aging population, data show that elderly people frequently use EC to buy health food in order to maintain their health. We, therefore, consider the case of a new type of health food product designed to meet the health and longevity needs of the elderly that is sold by one of SMEs using an omni-channel strategy. The study was divided into two phases: developing a health food product targeted by the elderly and simulating a marketing strategy using a Monte Carlo simulation. In the first stage, a questionnaire survey was conducted to determine the needs of the elderly and the concept of health food was formulated. In the second stage, when an SME company sells the product using the omni-channel strategy, marketing measures including a sales strategy were formulated and then, simulation to examine the balance between B2C EC and B2B was conducted. Several simulation-based approaches for the financial evaluation of sales strategy decisions exist. Paisittanand and Olson used Monte Carlo simulation to evaluate the
financial risk of an IT outsourcing project [12]. They considered the net present value, rate of return, and profit of different alternatives as output values. We use Oracle Crystal Ball, which is an application of Monte Carlo simulations for predictive modeling and forecasting [13].
4 Concept Development of a Health Food Product

4.1 Customer Survey

Survey Plan. A customer survey was conducted as follows, with the aim of building a better understanding of the attitudes of the elderly regarding health food and establishing the acceptability of the proposed health food concept:
• Survey period: 20 August 2019 through 20 September 2019;
• Number of survey participants: 1,100;
• Survey area: Japanese urban centers (e.g., Tokyo, Nagoya, Fukuoka, Osaka, and Hiroshima); and
• Survey method: distributing questionnaires to adults on bus tours.

Results. The results of the questionnaire suggested that "diet" was a principal health concern of those in their 20s through their 50s, but among respondents in their 60s through their 90s, there was an increase in responses such as "strengthening bones," "joint health," "improving physical strength," and "preventing disease," as shown in Fig. 1. There was a strong desire shown in the survey results to
Fig. 1 Health concerns by age groups
"remain self-sufficient and able to walk," and so a product was developed to "work on muscle maintenance and help prevent the decline in muscle strength in order to live an independent life" (i.e., the strength needed for standing and walking).
4.2 Building a Marketing Strategy We decided to use HMBCa (calcium beta-hydroxy-beta-methylbutyrate) as the main ingredient. HMBCa is a metabolite of leucine, which is an essential amino acid that promotes muscle synthesis and suppresses muscle degradation. It has been sold to young people as a muscle-training supplement, and on this basis, it has a proven track record. However, to date, it has rarely been used in products targeted at the elderly. The product is planned to be sold in powder form because the elderly are not good at taking tablets. It is designed to be dissolved in hot water (it features a coffee flavor). It is packaged so that one sachet provides a daily dose of 1.5 g of HMBCa with 30 sachets included in each box. This means that a typical consumer would consume one box per month. The product is planned to be manufactured and sold by an SME in Hiroshima, a provincial Japanese city. Its main business to date has been B2C (business-toconsumer) EC and advertising, making this the first time that it has manufactured and sold a health food. In short, this corporation does not have the brand power of a major food company. As a result, the company decided to obtain government approval as a “Food with Function Claims” for its product in order to build consumer trust. “Foods with Function Claims” are explained by the Consumer Affairs Agency by the following statement: “Under the food business operator’s own responsibility, Foods with Function Claims can be labeled with function claims based on scientific evidence. Information on the evidence supporting the safety and effectiveness of the product is submitted to the Secretary-General of the Consumer Affairs Agency before the product is marketed.” Few SMEs submit evidence and receive approval from the Consumer Affairs Agency as the submission process is costly and time-consuming. It is believed that obtaining such an approval will provide customers with a higher level of trust and confidence in the product. Furthermore, the primary target consumers for this product are “middle-class elderly people who enjoy travel and going out.” An example of such a target customer is described as follows: “Mrs. A is a woman in her 60s. Her husband is retired, and they have two independent daughters who live with their spouses in local cities. Her main source of income is her pension and savings, but she is still in good health, so she works at a local supermarket. Her hobby is traveling with family and friends, and she often uses day bus tours as an easy way of making trips with her friends and daughters.” One point to remember when considering health food product sales strategies for SMEs is that unlike major companies, they cannot allocate a lot of money to
advertising. Therefore, the company decided to adopt an omni-channel distribution strategy. First, the firm decided to employ a B2C business. Paying attention to the fondness for bus tours described in the target customer persona above, the firm signs a contract with a bus company to allow it to advertise and sell its products on bus tours for the elderly. Flyers are distributed on buses, and when customers use their smartphones to read the flyer's QR code, the company's EC site launches, building a call-to-action mechanism (a B2C EC channel) that allows the customer to read information about the product and possibly make a purchase. Additionally, the tour operator will promote the product by allowing passengers to sample it during the bus tour. The company will also advertise on social media and send catalogs to its existing consumers to boost awareness. Second, and simultaneously, the company will make business-to-business (B2B) trades with other companies that can sell the product while explaining it directly to consumers (e.g., acupuncturists, nursing homes, consignment drug distributors, etc.), because the product's main ingredient, HMBCa, is not yet widely recognized. B2B trades require changing the packaging and setting separate prices. The measures described above will enable the firm to adopt an omni-channel strategy by centrally managing inventory and customer information while providing multiple points of customer contact, including flyers, smartphone apps, B2C EC sites, social media, product samples, and face-to-face sales by downstream B2B customers. The customer contact points are a mix of both sales and communication channels, while the sales channels themselves can be broadly divided into "B2C EC" and "B2B real sales." As B2C EC and B2B real sales have different selling prices, management costs, and profits, it is important to determine the optimum target ratio in order to establish a balance between the two methods of sales. To this end, we performed a simulation to model the data.
5 Sales Simulation 5.1 Objectives and Core Settings The aim of this simulation is to clarify the optimum ratio between two sales channels, B2C EC and B2B real sales in order to focus business efforts. The key performance indicator (KPI) for the simulation was set using net present value (NPV), and sales were scheduled to begin in April 2020. Calculations were made to cover a period of 5 years and 9 months until the end of 2025, with sales volume directly impacting profit. The unit of sale for the product is one box (equivalent to one month’s dosage), and for the sake of simplicity, bulk purchases are not made; hence, the monthly sales volume can be considered to be the total number of customers.
Table 1 Expected monthly growth rate for new customers of B2C EC

Year                                                  2020   2021   2022   2023   2024   2025
Expected monthly growth rate for new customers (%)    5.0    5.0    2.0    1.5    1.0    0.0
First, the core settings for B2C EC were made. B2C EC customers can be divided into new and repeat customers, and changes in their numbers were applied on a monthly basis. A typical case was modeled for the rate of increase/decrease, with steady growth based on data for previous health food products, as shown in Table 1. The case assumes that awareness will grow rapidly and the rate of customer increase will be large during the first two years, but will decline once competitors enter the market. The distribution data for repeat customers were obtained from existing sales data. Next, as the issue being examined, five different weightings of the ratio of "sales volume through B2C EC" to "sales volume through B2B," from 1:1 to 1:5, were modeled. Additionally, for the production volumes, two cases were modeled: one in which the production volume per year had an upper limit, and one in which there was no maximum volume. In summary, there are 10 scenarios in total, with five different weightings applied to sales and two models of production cap restrictions. Other settings are as follows:
• Three types of pricing are modeled and applied to the actual cost: "individual purchase price through EC," "price of purchase via EC subscription," and "wholesale price to traders." The relationship between the three is: individual > subscription > wholesale.
• It is assumed that 80% of repeat purchasers will take out a subscription.
• Business revenue is driven purely by product sales and is calculated by the following equation.
Sales = Product Price × Number of Pieces    (1)
• Fixed costs for subsequent years are assumed to remain equal to the first year's costs.
• The discount rate is assumed to be 10%, since the firm has never sold health food before.

Furthermore, the customer increase rate, customer repeat rate, and variable cost-to-sales ratio are assumed to fluctuate probabilistically in reality, and the simulation was run using Crystal Ball (risk analysis/decision-making support software), applying a triangular distribution, a lognormal distribution, and a normal distribution for these inputs.
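The study ran this risk simulation in Crystal Ball; for readers without that tool, the sketch below reproduces the general procedure in open-source form. The distribution families follow the text, but their pairing with the three inputs, all numerical parameters (distribution shapes, initial customers, unit price, fixed cost), and the yearly granularity are placeholders of ours, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 10_000   # the study ran 100,000 trials per scenario
DISCOUNT = 0.10     # discount rate stated above
YEARS = 6           # April 2020 to end of 2025, coarsened to years here

def npv(cashflows, rate):
    """Net present value of a sequence of yearly cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

def one_trial():
    # The three probabilistic inputs named in the text; every parameter
    # below is an illustrative placeholder.
    growth = rng.triangular(0.00, 0.02, 0.05)       # monthly customer increase rate
    repeat = min(rng.lognormal(-0.7, 0.3), 1.0)     # customer repeat rate
    var_ratio = rng.normal(0.4, 0.05)               # variable cost-to-sales ratio

    customers, cashflows = 1_000.0, []
    for _ in range(YEARS):
        customers *= (1 + growth) ** 12             # compound the monthly growth
        sales = customers * (1 + repeat) * 3_000    # placeholder unit price (yen)
        cashflows.append(sales * (1 - var_ratio) - 2_000_000)  # placeholder fixed cost
    return npv(cashflows, DISCOUNT)

npvs = np.array([one_trial() for _ in range(N_TRIALS)])
print(f"mean NPV: {npvs.mean():,.0f} yen; 5th-95th percentiles: {np.percentile(npvs, [5, 95])}")
```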
5.2 Simulation Results We ran each scenario 100,000 times and determined the NPV. Figures 2 and 3 show the distribution of NPV and total sales volume for a case without maximum limitation of production volume. In Fig. 3, we can see that production volume increases as the weighting of B2B sales increases, unless the upper limit of production volume is set. Also, as the weighting of B2B sales increases, the unreliability of output increases. This can be seen from the fact that the bar height decreases and distribution widens. The same is true for NPV (see Fig. 2). Without any limit on production, a ratio of 1:5 would require three times as many products as those in the case of a ratio of 1:1. In contrast, Figs. 4 and 5 show the distribution of NPV and total sales volume for a case where there is an upper limit of 200,000 units being produced per year.
Fig. 2 Probability distribution of NPV without maximum limitation of production
Fig. 3 Probability distribution of number of products without maximum limitation
Fig. 4 Probability distribution of NPV with maximum limitation of production
Fig. 5 Probability distribution of number of products with maximum limitation
In the real world, production volumes are often limited. Figure 5 shows that as the weighting of B2B sales increases, production and sales volumes also increase, but they approach the maximum production volume. In terms of NPV, the 1:1 ratio has the least value, and the distributions from 1:3 to 1:5 almost overlap (see Fig. 4). The average NPV was then calculated for each scenario and is represented as a line, with the highest value taken to be 100% (see Fig. 6). Figure 6 also shows the width of the NPV range as a bar graph. According to this graph, when production is capped, NPV reaches 99.7% at a weighting of 1:4, which is also where the range is narrowest. It is, therefore, optimal to set the weighting of B2C EC to B2B sales at 1:4 when there are production limits.
Fig. 6 Comparison of NPVs obtained under each condition
The simulation suggests that there is an appropriate balance to be reached when there is an upper limit on production. Recklessly continuing to increase the weighting of B2B leads to unfruitful production. This demonstrates that making business-weighting decisions in advance by means of simulations is an important exercise for SMEs that do not have abundant capital or labor.
6 Conclusions

This study examined an SME that developed a new health food product and devised an omni-channel sales and distribution strategy. A simulation was performed to find the optimal weighting between B2C EC and B2B sales using actual figures. The study showed that adopting an omni-channel strategy is important for SMEs attempting to enter the EC market, but that there are significant points to consider when introducing such a strategy. As a matter of fact, B2B sales have a smaller profit margin but have the advantage of offering large per-deal sales volume and allowing actual marketing to be left to
downstream retailers, so increasing transaction volume is desirable. However, when a business faces constraints such as an upper limit on production volume, simulations should be used to find the appropriate balance that maximizes NPV and minimizes risk, rather than recklessly increasing B2B sales. Omni-channel strategies tend to be overlooked because of the predominance of large retailers, but in fact, they feature aspects that are especially suitable for small start-ups. Focusing on social media to increase brand recognition through person-to-person connections, and expanding into other media through word-of-mouth (WOM) communication, are effective strategies for start-ups that intend to minimize risk by "starting small and growing big." At the same time, SMEs have limited labor and capital available; hence, they should devise channel policies that are aligned with their products' characteristics and their company's strengths.
References
1. Ngai, E.W.T., Wat, F.K.T.: A literature review and classification of electronic commerce research. Inf. Manag. 39(5), 415–429 (2002)
2. Webb, K.L.: Managing channels of distribution in the age of electronic commerce. Ind. Mark. Manag. 31(2), 95–102 (2002)
3. Rapp, A., Baker, T.L., Bachrach, D.G., et al.: Perceived customer showrooming behavior and the effect on retail salesperson self-efficacy and performance. J. Retail. 91(2), 358–369 (2015)
4. Bahn, D.L., Fischer, P.P.: Clicks and mortar: balancing brick and mortar business strategy and operations with auxiliary electronic commerce. Inf. Technol. Manag. 4(2), 319–334 (2003)
5. Neslin, S.A., Shankar, V.: Key issues in multichannel customer management: current knowledge and future directions. J. Interact. Mark. 23(1), 70–81 (2009)
6. Cao, L., Li, L.: The impact of cross-channel integration on retailers' sales growth. J. Retail. 92(2), 198–216 (2015)
7. Lazaris, C., Vrechopoulos, A.: From multichannel to "omnichannel" retailing: review of the literature and calls for research. In: 2nd International Conference on Contemporary Marketing Issues (ICCMI), 18–20 June 2014, Athens, Greece (2014)
8. Usuki, H., Nishio, C.: Purchasing factors of online shopping: considerations from empirical studies in 1996. Mark. J. 17(3), 23–32 (1998) (in Japanese)
9. Smith, M.D., Brynjolfsson, E.: Consumer decision-making at an internet shopbot: brand still matters. J. Ind. Econ. 49(4), 541–558 (2001)
10. Corbitt, B., Thanasankit, T., Yi, H.: Trust and e-commerce: a study of consumer perceptions. Electron. Commer. Res. Appl. 2(3), 203–215 (2003)
11. Lee, M.K.O., Turban, E.: A trust model for consumer internet shopping. Int. J. Electron. Commer. 6(1), 75–91 (2001)
12. Paisittanand, S., Olson, D.: A simulation study of IT outsourcing in the credit card business. Eur. J. Oper. Res. 175(2), 1248–1261 (2006)
13. Charnes, J.: Financial Modeling with Crystal Ball and Excel, Plus Website, 2nd edn. Wiley, United States (2017)
A Formal, Descriptive Model for the Business Case of Managerial Decision-Making Masaaki Kunigami , Takamasa Kikuchi , Hiroshi Takahashi , and Takao Terano
Abstract This paper proposes a formal descriptive model of organizational decision-making called the Managerial Decision-Making Description Model (MDDM). This model introduces visual representations to describe managerial decisions that redefine relationships between their objectives and resources. The MDDM describes various business cases and enables us to compare these decision-making processes. This paper presents the MDDM’s methodologies and describes, and compares, the decision diagrams extracted from actual business cases and organizational agent-based simulation (ABS) logs as virtual business cases.
1 Introduction

This paper proposes a formal, descriptive model to describe managerial decision-making processes that transform business organizations. This tool, named the Managerial Decision-Making Description Model (MDDM) [1], provides a common method to compare decision-making processes across business cases, as well as a means to visualize them. Here we introduce the MDDM and demonstrate how it works on actual business cases.
M. Kunigami (B) Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa, Japan e-mail: [email protected]
T. Kikuchi · H. Takahashi Keio University, 4-1-1 Hiyoshi Kohoku-ku, Yokohama, Kanagawa, Japan e-mail: [email protected]
H. Takahashi e-mail: [email protected]
T. Terano Chiba University of Commerce, 1-3-1 Konodai, Ichikawa-shi, Chiba, Japan e-mail: [email protected]
In contrast to the Object Management Group's models, namely Case Management Model and Notation (CMMN) [2], Business Process Model and Notation (BPMN) [3], and Decision Model and Notation (DMN) [4], the MDDM focuses on describing organizational decision-making that changes a business's structure. CMMN, BPMN, and DMN provide useful representations of a business's states and behaviors as long as the entire business structure is static, or at least stable. The MDDM, in contrast, focuses on a one-time transition process that changes the entire business structure. A High-Level Business Case (HLBC) [5] has been presented to describe such a one-time transition of a business structure; however, while the HLBC represents an evolution of the functions and services of a business structure, the MDDM focuses on the decision-making process driving the transition.
We begin by defining key terminology for the MDDM. First, the business structure of an organization is defined as a multi-layered structure of business objectives and their related resources. Second, managerial decision-making is the way an agent (i.e., a member of an organization) defines or redefines business objectives and their related resources in a business structure. To describe managerial decision-making that changes the business structure of an organization, the MDDM must be able to represent the following items:
(a) the multi-layered structure of a business, and its transition,
(b) the focus (or bounded scope) of an agent's observations and actions,
(c) the agent's position corresponding to each layer in the business structure, and
(d) the chronological order and the causality of an agent's decisions.
By satisfying these requirements, the MDDM enables us to describe “who” decides “what,” “when,” and “where” decisions affect the business structure, along with “how” decisions change.
2 Methodologies To represent a transition of business structures as a “Decision Diagram,” the MDDM uses three kinds of components. In placing and connecting those components, the decision diagram describes organizational decision-making as an equivalent circuit. The decision diagram satisfies the condition presented in the previous section.
2.1 Three Major Components
Fig. 1 The Business Structure Component represents a multi-layered coupling of objectives and resources in business organizations

2.1.1 Business Structure Component
The Business Structure Component represents a multi-layered structure of objectives–resources couplings tied to the organizational business process (Fig. 1). This component comprises objective symbols, resource symbols, and the connections between them. Each objective symbol represents a goal, an objective, or a target in a business layer. A resource symbol represents a resource, an operation, a product, or a means required to achieve the corresponding objective. By stacking these objectives–resources couplings, the Business Structure Component represents the multi-layered structure of a business organization.
2.1.2 Environmental Component
The environmental component describes transitions and events outside the organization (Fig. 2). This component consists of status and event symbols. Each status symbol represents a technological situation or a condition in the market or another organization. An event symbol indicates either that something happened to a status that triggers an agent's decision, or a result caused by an agent's decision. The left-to-right order of the statuses and events (Fig. 2) indicates their chronological order.

Fig. 2 The environment component represents states inside or outside the business and events caused by these states or agents' decisions
Fig. 3 The agent’s decision element redefines objectives–resources coupling in business structures
2.1.3 Agent's Decision Element
The agent's decision element describes how an agent redefines the objectives and resources in an organization's business structure. Each agent's decision is represented as a "decision element" with 2 × 2 terminals (Fig. 3), each of which has a specific function. The two left-hand terminals represent the agent's observation–action pair before the decision: the upper left terminal indicates the agent's former objective, and the lower left terminal indicates the agent's former resources or means for that objective. The two right-hand terminals represent the agent's observation–action pair resulting from the decision: the upper right terminal indicates the agent's new objective, and the lower right terminal indicates the new resources or means that facilitate it.
2.2 Composing the Decision Diagram By allocating and connecting those components, the decision diagram describes the organizational decision-making involved in a business structure’s transition (Fig. 4). To begin with, the environmental component is at the top or bottom of the decision diagram. It introduces time (from old to new) in a horizontal direction (from left to right) in the decision diagram. Next, to describe transitions in the structure, the two business structure components are on the left and right-hand sides of the decision diagram. The left-side component represents the business structure that existed before agents’ decisions and the right-side component represents the structure resulting from agents’ decisions. We call the left-side structure “Before” or “As Is,” and the right-side one “After” or “Outcome.” These business structures introduce vertical layers into the decision diagram from strategic management (upper) to field operations (lower).
Fig. 4 The Decision Diagram describes how agents’ managerial decision-making transforms the business structure by connecting the three types of components
Third, an agent’s decision elements are allocated between business structures. These allocations reflect the organizational position and chronological order of an agent’s decisions. The decision’s vertical position indicates the structural layer to which an agent belongs. The horizontal order of the decisions indicate their chronological order. Fourth, agents’ decisions connect to the other components and decision elements. The upper left terminal of each decision element connects to the symbols an agent observes as the objective or the target in the left-hand (“before”) business structure. The lower left terminal connects to the symbols an agent acted upon regarding the resource or the means in the left-hand (“before”) business structure. The upper right terminal of each decision element connects to the symbols that an agent observes as the new objective or target in the right-hand (“after”) business structure. The lower right terminal connects to the symbols an agent uses to act regarding the resources or the means in the right-hand (“after”) business structure. Finally, either an environment-agent interaction or an agent-agent interaction is represented by connecting an agent’s terminal and related event symbol. For example, when an event related to the environment triggers an agent’s decision, the event symbol is connected to the agent’s upper left terminal. Similarly, if an agent’s decision triggers another agent’s decision, the agent’s lower right terminal and the other agent’s upper right terminal are connected through the trigger event’s symbol.
360
M. Kunigami et al.
2.3 Properties of the Decision Diagram A decision diagram of the organizational decision-making enables us to describe the following properties: (a) a decision diagram represents a multi-layered structure introduced using business structure components, before and after a structure’s transition, (b) each agent decides upon specific observation–action (objectives–resources) pairs, limited by their scope and position, (c) each agent’s vertical position corresponds to a business structure layer in which the agent belongs, and (d) each agent’s horizontal position reflects the chronological order of his decision, and event symbols’ connections represent causalities between decisions and events. These properties provide points of view to compare decision diagrams as configurations of the diagrams themselves or the symbols’ meanings. “Configurations” are the distinctions between decision diagrams’ layouts and connections of the symbols or decision elements. “Meanings” are the distinctions between the content of the symbols or the decision elements.
2.4 Set-Theoretic Description of MDDM To clarify the components of MDDM, we describe them in the following format: M MDDM =(AMDDM , BsMDDM , C MDDM N MDDM , N MDDM , D MDDM , E MDDM , V MDDM , L MDDM , T MDDM , S MDDM ) Here, AMDDM is the agent, BsMDDM is the business structure, C MDDM is the connection, N MDDM is the component symbol (node), DMDDM is the decision, E MDDM is the environment, V MDDM is the event, L MDDM is the hierarchy, T MDDM is a set of time order indicators, and S MDDM is a set of business case stages (before/after). Each set is defined as follows: MDDM AMDDM = {a : l-th hierarchy}, = (i, l)| i: Agent’s name, l ∈ L MDDM = l|ltop ≺ l2nd ≺ · · · ≺ lbottom , L BsMDDM = OsMDDM , RsMDDM , C MDDM OsMDDM , RsMDDM , S MDDM = {s|before ≺ after}, OsMDDM = {ω = (o, l)| Object of business structure, o: Identifier, l ∈ L MDDM : l-th hierarchy}, RsMDDM = {ρ = (r, l)| Resource of business structure, r: Identifier, l ∈ L MDDM : l-th hierarchy}, MDDM , Nto ) = {(κ, nfm , nto )| κ: Connection, Node: nfm ∈ N MDDM , nto C MDDM (N MDDM fm fm MDDM MDDM MDDM ∈ N f m }, N f m , Nto ⊆ N MDDM ,
Fig. 5 The relationship between MDDM elements in the Venn diagram
$$D^{MDDM} = \{\Delta_a = \bigl(a,\ (obs_s^a, act_s^a)\big|_{s=before},\ (obs_s^a, act_s^a)\big|_{s=after},\ \tau\bigr) \mid a \in A^{MDDM},\ \tau \in T^{MDDM}\},$$
$$T^{MDDM} = \{\tau \mid \tau_{first} \prec \tau_{2nd} \prec \cdots \prec \tau_{last}\},$$
$$E^{MDDM} = \{\varepsilon = (e, \tau^{start}, \tau^{end}) \mid e: \text{identifier},\ \tau^{start}, \tau^{end} \in T^{MDDM}\},$$
$$V^{MDDM} = \{v = (\upsilon, \tau) \mid \upsilon: \text{identifier},\ \tau \in T^{MDDM}\},$$
$$N^{MDDM} = V^{MDDM} \cup E^{MDDM} \cup \{obs_s^a \mid a \in A^{MDDM}, s \in S^{MDDM}\} \cup \{act_s^a \mid a \in A^{MDDM}, s \in S^{MDDM}\} \cup \bigcup_{s \in S^{MDDM}} O_s^{MDDM} \cup \bigcup_{s \in S^{MDDM}} R_s^{MDDM}.$$
The relationships among these MDDM elements are shown in the Venn diagram (Fig. 5).
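As a reading aid, the set-theoretic definitions above translate almost one-to-one into typed structures. The sketch below is a partial transcription under our own naming (only $A$, $D$, $E$, $V$, and $S$ are shown); it is not part of the MDDM specification.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import FrozenSet, Tuple


class Stage(IntEnum):
    """S: the business case stages, ordered before < after."""
    BEFORE = 0
    AFTER = 1


@dataclass(frozen=True)
class Agent:
    """A: an agent is a pair (name, layer), the layer drawn from L."""
    name: str
    layer: int          # 0 = top of the hierarchy


@dataclass(frozen=True)
class Environment:
    """E: an environment is (identifier, start time, end time), times from T."""
    identifier: str
    t_start: int
    t_end: int


@dataclass(frozen=True)
class Event:
    """V: an event is (identifier, time)."""
    identifier: str
    time: int


@dataclass(frozen=True)
class Decision:
    """D: an agent's (observation, action) pairs before and after, plus a time."""
    agent: Agent
    obs_act_before: Tuple[FrozenSet[str], FrozenSet[str]]  # (obs, act)|s=before
    obs_act_after: Tuple[FrozenSet[str], FrozenSet[str]]   # (obs, act)|s=after
    time: int           # tau, drawn from the ordered set T
```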
3 Application to Actual Business Cases

Here we illustrate how the MDDM describes actual managerial decision-making, using well-known business innovation cases. The decision diagram of the Honda case shows bottom-up managerial decision-making. The Honda Super Cub case, introduced in Christensen's classic text [6], is an example of disruptive innovation. In 1959, Honda sent a team to enter the U.S. motorcycle market. After struggling to sell big highway bikes, the team's leader, Kawashima, began pursuing a new market opportunity for light bikes (the Super Cub). Afterward, Kawashima urged the Tokyo headquarters to change the company's unsuccessful big-bike strategy.
Fig. 6 The decision diagram for the Honda Super Cub case: bottom-up managerial decision-making for the transition from a big-bike strategy (left) to a light-bike strategy (right)
The case's decision diagram visualizes Honda's U.S. business model transitioning from the large highway-bike market to the smaller recreational-bike market. The diagram also shows that the transition was driven bottom-up, which is visible in the layout of the decision elements: from the manager, Kawashima, up to the Tokyo headquarters (Fig. 6). The decision diagram thus formally represents the organizational decision-making in the business case. This formalized description allows a graphical comparison of decision-making patterns between cases. For example, patterns can be read off from the difference in form between top-down and bottom-up decisions, the presence or absence of communication, and the position, timing, and character of the trigger events' inputs and outputs. These allow us to explicitly compare similarities and differences between the decision-making in different cases; a sketch of such a configuration-level comparison is given below. Figure 7 shows another decision diagram (Fujifilm [7]) as an example of a different management style. Although the details are omitted, the connections of the decision elements indicate top-down decision-making, unlike Fig. 6. Decision diagrams thus let us visually illustrate the similarities and differences between business cases.
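Such configuration-level comparisons can even be partially mechanized: given only each decision element's (layer, time) coordinates, a simple heuristic separates top-down from bottom-up diagrams. The function below is our own illustrative sketch, not an algorithm defined by the MDDM.

```python
from typing import Iterable, Tuple


def decision_style(decisions: Iterable[Tuple[int, int]]) -> str:
    """Classify a diagram from its decision coordinates.

    Each decision is a (layer, time) pair with layer 0 at the top.
    Heuristic: if the earliest decision sits in a lower layer than the
    latest one, information flowed upward (bottom-up); the reverse is
    top-down; equal layers give no verdict.
    """
    ordered = sorted(decisions, key=lambda d: d[1])   # sort by time
    first_layer, last_layer = ordered[0][0], ordered[-1][0]
    if first_layer > last_layer:
        return "bottom-up"
    if first_layer < last_layer:
        return "top-down"
    return "single-layer"


# Honda-like diagram: a layer-1 manager decides first, headquarters
# (layer 0) decides afterwards (cf. Fig. 6).
print(decision_style([(1, 0), (0, 1)]))   # -> bottom-up
# Fujifilm-like diagram: headquarters decides first (cf. Fig. 7).
print(decision_style([(0, 0), (1, 1)]))   # -> top-down
```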
Fig. 7 The decision diagram for the Fujifilm case: top-down managerial decision-making for the transition from a strategy focused on the imaging and solutions business (left) to a strategy of diversifying into new growth fields such as life science (right)
4 Application to Organizational ABS Logs as Virtual Business Cases

As a virtual case, a simulation log from an agent model of organizational environment recognition is written down with the MDDM (Fig. 8). The model is based on Axelrod's cultural dissemination model [8] and expresses how organization members perceive the external environment and propagate that perception. In the simulation settings, the initial business structure of all members is consistent with the initial external environment. When the simulation starts, the external environment changes, and the members gradually adapt to the new external environment. The initial business structure and external environment of all members are represented by "333, 383," and the external environment after the change by "888, 888." The sample simulation log can be read as a case of bottom-up decision-making, as in the Honda case: (1) a bottom-layer member (agent #8) recognized the new external environment, and (2) the company's business structure was then unified through the top agent (agent #1). The authors have shown in previous literature that both (1) individual simulation logs [9] and (2) individual simulation log types [10] can be written down with the MDDM. In this way, the decision diagram makes it possible to formally compare a written business case with a simulation log. In previous studies, the simulation log was first converted into text and then compared with the actual business case.
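For readers unfamiliar with [8], the sketch below shows the core update rule of a cultural dissemination model of this kind: members copy features from sufficiently similar sources. The grid neighborhood of Axelrod's original model is simplified here to random mixing, and the environment-observation probability, population size, and initial vectors are our own assumptions, not the settings used to produce Fig. 8.

```python
import random

F = 6          # features per culture vector (matches the six digits above)
N_AGENTS = 8   # organization members (assumed)
P_ENV = 0.2    # chance of observing the environment directly (assumed)

# External environment after the change, cf. "888, 888" in Fig. 8.
environment = [8, 8, 8, 8, 8, 8]
# All members start consistent with the initial environment, "333, 383".
agents = [[3, 3, 3, 3, 8, 3] for _ in range(N_AGENTS)]


def similarity(a, b):
    """Fraction of features on which two culture vectors agree."""
    return sum(x == y for x, y in zip(a, b)) / F


def step():
    """One interaction: a member meets the environment or another member."""
    i = random.randrange(N_AGENTS)
    if random.random() < P_ENV:
        source = environment
    else:
        source = agents[random.choice([j for j in range(N_AGENTS) if j != i])]
    # Axelrod's rule: interact with probability equal to similarity,
    # then copy one randomly chosen differing feature from the source.
    if random.random() < similarity(agents[i], source):
        differing = [k for k in range(F) if agents[i][k] != source[k]]
        if differing:
            k = random.choice(differing)
            agents[i][k] = source[k]


for _ in range(10_000):
    step()
print(agents)   # members drift toward the new environment, "888888"
```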
Fig. 8 Example of constructing a virtual case by using MDDM
A decision diagram makes the comparison points clearer than this earlier text-based approach. Making decision diagrams is therefore also a way to extract stylized facts from real cases and agent-based simulations. The MDDM could thus provide a way to extract stylized facts common to written actual cases, simulation logs, and business gaming logs.
5 Summary and Remarks

To describe business cases, the MDDM provides a decision diagram that illustrates the transition of business structures caused by the related agents' decisions. The decision diagram also represents the chronological order of, and the causalities between, the decisions and the environment. The MDDM discriminates between decision styles in a business case, e.g., top-down, bottom-up, or informal communication. We have illustrated decision diagrams for an actual business case, the Honda Super Cub, and for organizational agent-based simulation (ABS) logs treated as virtual business cases. A simulation study on Kaizen and deviation [11] indicates that no essential differences exist between cases drawn from organizational ABS logs and actual business cases. The MDDM can provide decision diagrams from business game logs as well as from actual business cases and organizational ABS logs. A paper on business gaming [12] already presented a simulated business gaming environment,
A Formal, Descriptive Model for the Business Case …
365
integrated with case learning and based on actual business cases. The MDDM will provide an effective way to describe gaming players' decisions and to compare them with the original business cases.

Acknowledgments This work is supported in part by a Grant of the Foundation for the Fusion Of Science and Technology. The authors would like to thank Enago (www.enago.jp) for the English language review.
References
1. Kunigami, M., Kikuchi, T., Terano, T.: A formal model of managerial decision making for business case description. GEAR2018 Letters of the Special Discussion on Evolutionary Computation and Artificial Intelligence (2018)
2. Object Management Group: The Case Management Model and Notation Specification (CMMN), Ver. 1.1 (2016). https://www.omg.org/spec/CMMN/. Accessed 10 March 2020
3. Object Management Group: The Business Process Model and Notation Specification, Ver. 2.0.2 (2014). https://www.omg.org/spec/BPMN/. Accessed 10 March 2020
4. Object Management Group: The Decision Model and Notation Specification, Ver. 1.2 (2016). https://www.omg.org/spec/DMN. Accessed 10 March 2020
5. Sawatani, Y., Kashino, T., Goto, M.: Analysis and findings on innovation creation methodologies, slide 15 (2016). https://www.slideshare.net/YurikoSawatani/analysis-and-findings-on-innovationcreation-methodologies. Accessed 10 March 2020
6. Christensen, C.M.: The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, pp. 149–153. Harvard Business Review Press (1997; reprint 2016)
7. Gavetti, G., Aoshima, Y., Tripsas, M.: Fujifilm: A Second Foundation. Harvard Business Publishing (2007). Case reference no. 9-807-137
8. Axelrod, R.: The dissemination of culture: a model with local convergence and global polarization. J. Conflict Resolut. 41(2), 203–226 (1997)
9. Kikuchi, T., Kunigami, M., Takahashi, H., Toriyama, M., Terano, T.: Description of decision making process in actual business cases and virtual cases from organizational agent model using managerial decision-making description model. Inform. Proc. Soc. Jap. 60(10), 1704–1718 (2019)
10. Kikuchi, T., Kunigami, M., Takahashi, H., Toriyama, M., Terano, T.: Explaining log data of agent-simulation results with managerial decision-making description model. Stud. Simul. Gaming 29(1), 36–48 (2019)
11. Kobayashi, T., Takahashi, S., Kunigami, M., Yoshikawa, A., Terano, T.: Is there innovation or deviation? Analyzing emergent organizational behaviors through an agent based model and a case design. In: The 5th International Conference on Information, Process, and Knowledge Management (eKNOW 2013), pp. 166–171 (2013)
12. Nakano, K., Matsuyama, S., Terano, T.: Research on a learning system toward integration of case method and business gaming. In: The 4th International Workshop on Agent-based Approach in Economic and Social Complex Systems (AESCS 2007), pp. 21–32 (2007)
Author Index
A Abbattista, Ilenia, 187 Alanis, Arnulfo, 245, 253, 261, 281 Alarcón, Marina Alvelais, 245, 261 Alisoltani, Negin, 199 Alvarado, Karina, 245 Álvarez-Flores, José Luis, 281
B Babac, Marina Bagić, 103 Baltazar, Rosario, 235, 271 Bucki, Robert, 209
C Camarda, Domenico, 187 Car, Zeljka, 57 Casillas, Miguel, 235, 271 Castillo, Víctor H., 281 Consuelo Salgado Soto del, María, 167
D Del Consuelo Martínez Wbaldo, M., 235 Din, Fareed Ud, 3
E Esposito, Dario, 187
F Fujimura, Tomohiko, 343 Fujiwara, Shohei, 321
G Gembarski, Paul Christoph, 93 Ghedira, Khaled, 71 Gonzalez, Gustavo Ramírez, 261 Grgić, Demijan, 103 Gutierrez, Socorro, 235
H Halaška, Michal, 221 Härting, Ralf-Christian, 27, 295 Hassan, Adel, 177 Henskens, Frans, 3 Hernandez-Leal, Fabiola, 253 Horsch, Frieder, 27 Huerta, Uriel, 271
I Ilya, Danenkov, 113 Ishino, Yoko, 343 Ismail, Leila, 127 Iureva, Radda, 113
J Jezic, Gordan, 37 Jiménez, Samantha, 281
K Kaim, Raphael, 27, 295 Kambayashi, Yasushi, 47 Karaula, Mislav, 103 Keselj, Ana, 57
Kikuchi, Takamasa, 355 Konishi, Kota, 47 Kremlev, Artem, 113 Krivic, Petar, 83 Ktata, Farah Barika, 71 Kunigami, Masaaki, 355 Kurahashi, Setsuya, 331 Kusek, Mario, 83
L Leclercq, Ludovic, 199
M Mandaric, Katarina, 37 Margun, Alexey, 113 Márquez, Bogart Yail, 261, 281 Martínez, Eugenio, 235 Materwala, Huned, 127 Matsumoto, Yusuke, 321 Moreno, Hilda Beatriz Ramírez, 167
N Nishi, Yoshihiro, 313
P Patiño, Efraín, 253 Paul, David, 3 Pineda, Anabel, 271 Podobnik, Vedran, 103
R Ramírez, Margarita Ramírez, 167 Rasan, Ivana, 57 Rattrout, Amjad, 155, 177 Reyes-García, Carlos A., 235 Rocha, Martha-Alicia, 235, 271 Rojas, Esperanza Manrique, 167 Ruch, Dennis, 295 Ryan, Joe, 3
S Sabha, Muath, 155, 177 Safarini, Muhammed, 155 Safarini, Rasha, 155 Samet, Donies, 71 Samigulina, G. A., 143 Samigulina, Z. I., 143 Sawan, Aktham, 177 Sepulveda, Ruben, 245 Skocir, Pavle, 37 Soic, Renato, 17 Soriano-Equigua, Leonel, 281 Šperka, Roman, 221 Suchánek, Petr, 209 Sugahara, Noriyuki, 305 Suge, Aiko, 313, 321
T Tago, Itsuki, 47 Takahashi, Hiroshi, 313, 321, 355 Takahashi, Masakazu, 305 Takimoto, Munehiro, 47 Terano, Takao, 355 Thaher, Thaer, 155
V Velazquez, Daniel, 245, 261 Vukovic, Marin, 17
W Wallis, Mark, 3
Y Yashima, Kohei, 331
Z Zargayouna, Mahdi, 199 Zilak, Matea, 57 Zivkovic, Jakov, 83