Advances in Intelligent Systems and Computing Volume 1131
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on the theory, applications, and design methods of intelligent systems and intelligent computing. Virtually all disciplines are covered, including engineering, the natural sciences, computer and information science, ICT, economics, business, e-commerce, the environment, healthcare, and the life sciences.

The list of topics spans all areas of modern intelligent systems and computing, such as: computational intelligence; soft computing, including neural networks, fuzzy systems, evolutionary computing, and the fusion of these paradigms; social intelligence; ambient intelligence; computational neuroscience; artificial life; virtual worlds and society; cognitive science and systems; perception and vision; DNA- and immune-based systems; self-organizing and adaptive systems; e-learning and teaching; human-centered and human-centric computing; recommender systems; intelligent control; robotics and mechatronics, including human–machine teaming; knowledge-based paradigms; learning paradigms; machine ethics; intelligent data analysis; knowledge management; intelligent agents; intelligent decision making and support; intelligent network security; trust management; interactive entertainment; Web intelligence; and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia, and congresses. They cover significant recent developments in the field, of both a foundational and an applicable character. An important characteristic of the series is its short publication time and worldwide distribution, which permits rapid and broad dissemination of research results.

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **
More information about this series at http://www.springer.com/series/11156
Tareq Ahram • Waldemar Karwowski • Alberto Vergnano • Francesco Leali • Redha Taiar

Editors
Intelligent Human Systems Integration 2020 Proceedings of the 3rd International Conference on Intelligent Human Systems Integration (IHSI 2020): Integrating People and Intelligent Systems, February 19–21, 2020, Modena, Italy
Editors

Tareq Ahram
Institute for Advanced Systems Engineering
University of Central Florida
Orlando, FL, USA

Waldemar Karwowski
University of Central Florida
Orlando, FL, USA

Alberto Vergnano
Dipartimento di Ingegneria, Edificio 25
Università di Modena e Reggio Emilia
Modena, Italy

Francesco Leali
Department of Mechanical and Civil Engineering
University of Modena
Modena, Italy

Redha Taiar
GRESPI
Université de Reims Champagne-Ardenne
Reims Cedex 2, France
ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-39511-7  ISBN 978-3-030-39512-4 (eBook)
https://doi.org/10.1007/978-3-030-39512-4

© Springer Nature Switzerland AG 2020, corrected publication 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
This volume, entitled Intelligent Human Systems Integration 2020, aims to provide a global forum for introducing and discussing novel approaches, design tools, methodologies, techniques, and solutions for integrating people with intelligent technologies, automation, and artificial cognitive systems in all areas of human endeavor in industry, the economy, government, and education. Notable areas of application include, but are not limited to, energy, transportation, urbanization and infrastructure development, digital manufacturing, social development, human health, sustainability, a new generation of service systems, and developments in safety, risk assurance, and cybersecurity in both civilian and military contexts. Indeed, rapid progress in ambient intelligence, including cognitive computing, modeling, and simulation, as well as smart sensor technology, is weaving together human and artificial intelligence and will profoundly affect the nature of their collaboration at both the individual and societal levels in the near future.

As applications of artificial intelligence and cognitive computing become more prevalent in our daily lives, they also bring new social and economic challenges and opportunities that must be addressed at all levels of contemporary society. Many traditional human jobs that require high levels of physical or cognitive ability, including motor skills, reasoning, and decision-making, as well as training capacity, are now being automated. While such trends might boost economic efficiency, they can also negatively impact the user experience and bring about many unintended social consequences and ethical concerns. Intelligent human systems integration is, to a large extent, shaped by the forces driving the nature of future computing and artificial system development.
This book discusses the needs and requirements for symbiotic collaboration between humans and artificially intelligent systems, with due consideration of the software and hardware characteristics that allow such cooperation from societal and human-centered design perspectives, focusing on the design of intelligent products, systems, and services that will revolutionize future human–technology interactions. This book also presents many innovative studies of ambient artificial technology and its applications, including the human–machine
interfaces, with a particular emphasis on infusing intelligence into the development of technology throughout the lifecycle development process, with due consideration of user experience and the design of interfaces for virtual, augmented, and mixed reality applications of artificial intelligence. Reflecting on the above-outlined perspective, the papers contained in this volume are organized into eight main sections, including:

1. Automotive design and transportation engineering
2. Humans and artificial cognitive systems
3. Intelligence, technology, and analytics
4. Computational modeling and simulation
5. Humans and artificial systems complexity
6. Materials and inclusive human systems
7. Human–autonomy teaming
8. Applications and future trends
We would like to extend our sincere thanks to Axel Schulte, Stefania Campione, and Marinella Ferrara for leading the parts of the technical program that focus on human–autonomy teaming and on smart materials and inclusive human systems. Our appreciation also goes to the members of the Scientific Program Advisory Board, who reviewed the accepted papers presented in this volume, including the following individuals:

D. Băilă, Romania
H. Blaschke, Germany
S. Campione, Italy
J. Chen, USA
G. Coppin, France
M. Draper, USA
A. Ebert, Germany
M. Ferrara, Italy
M. Hou, Canada
M. Jipp, Germany
E. Karana, The Netherlands
A. Kluge, Germany
D. Lange, USA
S. Lucibello, Italy
E. Macioszek, Poland
M. Neerincx, The Netherlands
R. Philipsen, Germany
J. Platts, UK
D. Popov, USA
A. Ratti, Italy
R. Rodriquez, Italy
V. Rognoli, Italy
U. Schmid, Germany
A. Schulte, Germany
N. Stanton, UK
E. Suhir, USA

We hope that this book, which presents the current state of the art in intelligent human systems integration, will be a valuable source of both theoretical and applied knowledge, enabling the design and application of a variety of intelligent products, services, and systems for safe, effective, and pleasurable collaboration with people.
Contents
Automotive Design and Transportation Engineering

User-Centered Design Within the Context of Automated Driving in Trucks – Guideline and Methods for Future Conceptualization of Automated Systems . . . 3
Paula Laßmann, Florian Reichelt, Dominique Stimm, and Thomas Maier

Towards Probabilistic Analysis of Human-System Integration in Automated Driving . . . 9
Ephraim Suhir, Gunther Paul, and Hermann Kaindl

Trust Provisioning in the Transport Infrastructure . . . 15
Scott Cadzow

Drivers’ Interaction with, and Perception Toward Semi-autonomous Vehicles in Naturalistic Settings . . . 20
Jisun Kim, Kirsten Revell, Pat Langdon, Mike Bradley, Ioannis Politis, Simon Thompson, Lee Skrypchuk, Jim O’Donoghue, Joy Richardson, Jed Clark, Aaron Roberts, Alex Mouzakitis, and Neville A. Stanton

Are Autonomous Vehicles the Solution to Drowsy Driving? . . . 27
Daniel Grunstein and Ron Grunstein

Exploring New Concepts to Create Natural and Trustful Dialogue Between Humans and Intelligent Autonomous Vehicles . . . 34
Andrea Di Salvo and Andrea Arcoraci

Integrating Human Acceptable Morality in Autonomous Vehicles . . . 41
Giorgio M. Grasso, Chiara Lucifora, Pietro Perconti, and Alessio Plebe

The Future of User Experience Design in the Interior of Autonomous Car Driven by AI . . . 46
Laura Giraldi
Measuring Driver Discomfort in Autonomous Vehicles . . . 52
Dario Niermann and Andreas Lüdtke

Human-Centered Design for Automotive Styling Design: Conceptualizing a Car from QFD to ViP . . . 59
Gian Andrea Giacobone and Giuseppe Mincolelli

Enriching the User Experience of a Connected Car with Quantified Self . . . 66
Maurizio Caon, Marc Demierre, Omar Abou Khaled, Elena Mugellini, and Pierre Delaigue

Constructing a Mental Model of Automation Levels in the Area of Vehicle Guidance . . . 73
Larissa Zacherl, Jonas Radlmayr, and Klaus Bengler

Effect of Phone Interface Modality on Drivers’ Task Load Index in Conventional and Semi-Automated Vehicles . . . 80
Kristina Davtyan and Francesca Favaro

Software Failure Mode and Effects Analysis . . . 86
Palak Talwar

A Validated Failure Behavior Model for Driver Behavior Models for Generating Skid-Scenarios on Motorways . . . 92
Bernd Huber, Paul Schmidl, Christoph Sippl, and Anatoli Djanatliev

Human-Machine Interface Research of Autonomous Vehicles Based on Cognitive Work Analysis Framework . . . 99
Chi Zhang, Guodong Yin, and Zhen Wu
Mercury: A Vision-Based Framework for Driver Monitoring . . . 104
Guido Borghi, Stefano Pini, Roberto Vezzani, and Rita Cucchiara

Investigating the Impact of Time-Lagged End-to-End Control in Autonomous Driving . . . 111
Haruna Asai, Yoshihiro Hashimoto, and Giuseppe Lisi

The Car as a Transformer . . . 118
Jeremy Aston and Rui Pedro Freire

Unmanned Small Shared Electric Vehicle . . . 124
Binhong Zhai, Guodong Yin, and Zhen Wu

A Forward Train Detection Method Based on Convolutional Neural Network . . . 129
Zhangyu Wang, Tony Lee, Michael Leung, Simon Tang, Qiang Zhang, Zining Yang, and Virginia Cheung
Styling Research of DFAC-6851H4E City Bus Based on Fuzzy Evaluation . . . 136
Huajie Wang and Jianxin Cheng

Object Detection to Evaluate Image-to-Image Translation on Different Road Conditions . . . 143
Fumiya Sudo, Yoshihiro Hashimoto, and Giuseppe Lisi

Humans and Artificial Cognitive Systems

Modelling Proxemics for Human-Technology-Interaction in Decentralized Social-Robot-Systems . . . 153
Thomas Kirks, Jana Jost, Jan Finke, and Sebastian Hoose

Category Learning as a Use Case for Anticipating Individual Human Decision Making by Intelligent Systems . . . 159
Marcel Lommerzheim, Sabine Prezenski, Nele Russwinkel, and André Brechmann

System Architecture of a Human Biosensing and Monitoring Suite with Adaptive Task Allocation . . . 165
Brandon Cuffie and Lucas Stephane

The Role of Artificial Intelligence in Contemporary Medicine . . . 172
Larisa Hambardzumyan, Viktoria Ter-Sargisova, and Aleksandr Baghramyan

Improving Policy-Capturing with Active Learning for Real-Time Decision Support . . . 177
Bénédicte Chatelais, Daniel Lafond, Alexandre Hains, and Christian Gagné

Task Measures for Air Traffic Display Operations . . . 183
Shi Yin Tan, Chun Hsien Chen, Sun Woh Lye, and Fan Li

Identifying People Based on Machine Learning Classification of Foods Consumed in Order to Offer Tailored Healthier Food Options . . . 190
Jenna Kim, Shuhao Lin, Giannina Ferrara, Jenna Hua, and Edmund Seto

On the Perception of Disharmony . . . 195
Stijn Verwulgen, Thomas Peeters, Sander Van Goethem, and Sofia Scataglini

Mobile Real-Time Eye-Tracking for Gaze-Aware Security Surveillance Support Systems . . . 201
Alexandre Marois, Daniel Lafond, François Vachon, Eric R. Harvey, Bruno Martin, and Sébastien Tremblay
Detecting Impulsive Behavior Through Agent-Based Games . . . 208
Alia El Bolock, Ahmed Ghonaim, Cornelia Herbert, and Slim Abdennadher

Visual and Motor Capabilities of Future Car Drivers . . . 214
Ferdinando Tripi, Rita Toni, Angela Lucia Calogero, Pasqualino Maietta Latessa, Antonio Tempesta, Stefania Toselli, Alessia Grigoletto, Davide Varotti, Francesco Campa, Luigi Manzoni, and Alberto Vergnano

A Fixation-Click Count Signature as a Visual Monitoring Enhancement Feature for Air Traffic Controllers . . . 221
Hong Jie Wee, Sun Woh Lye, and Jean-Philippe Pinheiro

Digital Transformation in Product Service System for Kids. Design Tools for Emerging Needs . . . 228
Benedetta Terenzi and Arianna Vignati

A Novel Heuristic Mechanism to Formalize Online Behavior Through Search Engine Credibility . . . 235
Debora Di Caprio and Francisco J. Santos-Arteaga

Caterina, Alexa and the Others . . . 241
Elisabetta Benelli and Jurji Filieri

Event-Related Potential Study on Military Icon Based on Composition-Semantic Relationship . . . 248
Xian Li, Haiyan Wang, and Junkai Shao

Ekybot: Framework Proposal for Chatbot in Financial Enterprises . . . 254
Maritzol Tenemaza, Sergio Luján-Mora, Angélica de Antonio, Jaime Ramírez, and Omar Zarabia

Alpha and Beta EEG Desynchronizations Anticipate Steering Actions in a Driving Simulation Experiment . . . 260
Giovanni Vecchiato, Maria Del Vecchio, Sergey Antopolskiy, Andrea Bellotti, Alessia Colucciello, Anna Marchenkova, Jonas Ambeck-Madsen, Luca Ascari, and Pietro Avanzini

The Quantitative Evaluation of Permanent Disability in Forensic Medicine Through Stereo Photogrammetric Technology . . . 266
Claudia Trignano, Andrea Castelli, Vittorio Dell’Orfano, and Elena Mazzeo

A Unified Framework for Symbol Grounding in Human-Machine Interactions . . . 271
Dingzhou Fei
Improving Machine Translation Output of German Compound and Multiword Financial Terms . . . 276
Christina Valavani, Christina Alexandris, and George Mikros

Self-adjusted Data-Driven System for Prediction of Human Performance . . . 282
Oleksandr Burov, Evgeniy Lavrov, Nadiia Pasko, Olena Hlazunova, Olga Lavrova, Vasyl Kyzenko, and Yana Dolgikh

Human Factor and Cognitive Methods in the Design of Products and Production Systems of Mechanical Engineering in the Framework of NBIC Convergence . . . 288
Evgeny Kolbachev, Elena Sidorova, and Polina Vaneeva

Automatic Assessment System of Operators’ Risk in Order Picking Process for Task Analysis . . . 294
Yangxu Li, Bach Q. Ho, Tatsunori Hara, and Jun Ota

TARS Mobile App with Deep Fingertip Detector for the Visually Impaired . . . 301
Tetsushi Miwa, Youichi Hosokawa, Yoshihiro Hashimoto, and Giuseppe Lisi

Analysis Process of Exploratory Research Represented in a Coordinate System XYZ . . . 307
Olga Popova, Boris Popov, Vladimir Karandey, and Vladimir Afanasyev

A Variety of Visual-Speech Matching ERP Studies in Quiet-Noise Scenarios . . . 313
Lingling Hu, Chengqi Xue, and Junkai Shao

Research on Color Stratification in Dynamic Environment: Frequency Domain Analysis of Delta, SMR and Theta EEG Rhythms . . . 319
Cheng Guan, Lei Zhou, Tongtong Zhang, and Xiang Zeng

Human AI Symbiosis: The Role of Artificial Intelligence in Stratifying High-Risk Outpatient Senior Citizen Fall Events in a Non-connected Environments . . . 325
Chandrasekar Vuppalapati, Anitha Ilapakurti, Sharat Kedari, Rajasekar Vuppalapati, Jayashankar Vuppalapati, and Santosh Kedari

Intelligence, Technology and Analytics

Distinguishing a Human or Machine Cyberattacker . . . 335
Wayne Patterson, Acklyn Murray, and Lorraine Fleming

Using Eye Tracking to Assess User Behavior in Virtual Training . . . 341
Mina Fahimipirehgalin, Frieder Loch, and Birgit Vogel-Heuser
Democratization of AI to Small Scale Farmers, Albeit Food Harvesting Citizen Data Scientists, that Are at the Bottom of the Economic Pyramid . . . 348
Chandrasekar Vuppalapati, Anitha Ilapakurti, Sharat Kedari, Rajasekar Vuppalapati, Jayashankar Vuppalapati, and Santosh Kedari

Cybersecurity in Educational Networks . . . 359
Oleksandr Burov, Svitlana Lytvynova, Evgeniy Lavrov, Yuliya Krylova-Grek, Olena Orliyk, Sergiy Petrenko, Svitlana Shevchenko, and Oleksii M. Tkachenko

The Problem of Tracking the Center of Attention in Eye Tracking Systems . . . 365
Marina Boronenko, Vladimir Zelensky, Oksana Isaeva, and Elizaveta Kiseleva

Health Risk Assessment Matrix for Back Pain Prediction Among Call Center Workers . . . 372
Sunisa Chaiklieng and Pornnapa Suggaravetsiri

Towards Conceptual New Product Development Framework for Latvian ICT Sector Companies and Startups . . . 379
Didzis Rutitis and Tatjana Volkova

A Liveness Detection Method for Palmprint Authentication . . . 385
Ayaka Sugimoto, Yuya Shiomi, Akira Baba, Norihiro Okui, Tetushi Ohki, Yutaka Miyake, and Masakatsu Nishigaki

Procedure for the Implementation of the Manufacturing Module of an ERP System in MSME. Applied Case: Textile “Tendencias” Enterprise, UDA ERP . . . 392
Pedro Mogrovejo, Juan Manuel Maldonado-Maldonado, Esteban Crespo-Martínez, and Catalina Astudillo

Model of Emotionally Stained Pupillogram Plot . . . 398
Marina Boronenko, Yurii Boronenko, Oksana Isaeva, and Elizaveta Kiseleva

Cost-Informed Water Decision-Making Technology for Smarter Farming . . . 404
Joanne Tingey-Holyoak, John Dean Pisaniello, Peter Buss, and Ben Wiersma

A Review on the Role of Embodiment in Improving Human-Vehicle Interaction: A Proposal for Further Development of Embodied Intelligence . . . 409
Hamid Naghdbishi and Alireza Ajdari
Analysis of Topological Relationships of Human . . . 415
Jia Zhou, Xuebo Chen, and Zhigang Li

Impact of Technological Innovation on the Productivity of Manufacturing Companies in Peru . . . 421
Julio César Ortíz Berrú, Cristhian Aldana Yarlequé, and Lucio Leo Verástegui Huanca

Parametric Urban Design . . . 427
Rongrong Gu and Wuzhong Zhou

Narrative Review of the Role of Wearable Devices in Promoting Health Behavior: Based on Health Belief Model . . . 433
Dingzhou Fei and Xia Wang

Competitiveness of Higher Education System as a Sector of Economy: Conceptual Model of Analysis with Application to Ukraine . . . 439
Olha Hrynkevych, Oleg Sorochak, Olena Panukhnyk, Nazariy Popadynets, Rostyslav Bilyk, Iryna Khymych, and Yazina Viktoriia

Application of Classification Algorithms in the Generation of a Network Intrusion Detection Model Using the KDDCUP99 Database . . . 446
Jairo Hidalgo and Marco Yandún

Vulnerability Discovery in Network Systems Based on Human-Machine Collective Intelligence . . . 453
Ye Han, Jianfeng Chen, Zhihong Rao, Yifan Wang, and Jie Liu

Computational Modeling and Simulation

Supporting Decisions in Production Line Processes by Combining Process Mining and System Dynamics . . . 461
Mahsa Pourbafrani, Sebastiaan J. van Zelst, and Wil M. P. van der Aalst

Using Real Sensors Data to Calibrate a Traffic Model for the City of Modena . . . 468
Chiara Bachechi, Federica Rollo, Federico Desimoni, and Laura Po

Logistic Regression for Criteria Weight Elicitation in PROMETHEE-Based Ranking Methods . . . 474
Elia Balugani, Francesco Lolli, Maria Angela Butturi, Alessio Ishizaka, and Miguel Afonso Sellitto

3D CAD Design of Jewelry Accessories, Determination of Geometrical Features and Characteristics of the Used Material of Precious Metals . . . 480
Tihomir Dovramadjiev, Mariana Stoeva, Violeta Bozhikova, and Rozalina Dimova
Discovering and Mapping LMS Course Usage Patterns to Learning Outcomes . . . 486
Darko Etinger

Drug Recommendation System for Geriatric Patients Based on Bayesian Networks and Evolutionary Computation . . . 492
Lourdes Montalvo and Edwin Villanueva

Software for the Determination of the Time and the F Value in the Thermal Processing of Packaged Foods Using the Modified Ball Method . . . 498
William Rolando Miranda Zamora, Manuel Jesus Sanchez Chero, and Jose Antonio Sanchez Chero

Communication Protocol Between Humans and Bank Server Secure Against Man-in-the-Browser Attacks . . . 503
Koki Mukaihira, Yasuyoshi Jinno, Takashi Tsuchiya, Tetsushi Ohki, Kenta Takahashi, Wakaha Ogata, and Masakatsu Nishigaki

Development of a Solution Model for Timetabling Problems Through a Binary Integer Linear Programming Approach . . . 510
Juan Manuel Maldonado-Matute, María José González Calle, and Rosana María Celi Costa

Machine, Discourse and Power: From Machine Learning in Construction of 3D Face to Art and Creativity . . . 517
Man Lai-man Tin

Modelling Alzheimer’s People Brain Using Augmented Reality for Medical Diagnosis Analysis . . . 524
Ramalakshmi Ramar, Swashi Muthammal, Tamilselvi Dhamodharan, and Gopi Krishnan Rajendran

Software Vulnerability Mining Based on the Human-Computer Coordination . . . 532
Jie Liu, Da He, Yifan Wang, Jianfeng Chen, and Zhihong Rao

Design and Verification Method for Flammable Fluid Drainage of Civil Aircraft Based on DMU . . . 539
Yu Chen

Low-Income Dwelling Bioclimatic Design with CAD Technologies. A Case Study in Monte Sinahí, Ecuador . . . 546
Jesús Rafael Hechavarría Hernández, Boris Forero, Robinson Vega Jaramillo, Katherine Naranjo, Fernanda Sánchez, Billy Soto, and Félix Jaramillo
Virtual Reduction and Interaction of Chinese Traditional Furniture and Its Usage Scenarios . . . 552
Dehua Yu

Humans and Artificial Systems Complexity

Your Voice Assistant Will See You Now: Reducing Complexity in Human and Artificial System Collaboration Using Voice as an Operating System . . . 561
Viraj Patwardhan, Neil Gomes, and Maia Ottenstein

Pre-emptive Culture Mapping: Exploring a System of Language to Better Understand the Abstract Traits of Human Interaction . . . 567
Timothy J. Stock and Marie Lena Tupot

“Meanings” Based Human Centered Design of Systems . . . 573
Santosh Basapur and Keiichi Sato

A Systematic Review of Sociotechnical System Methods Between 1951 and 2019 . . . 580
Amangul A. Imanghaliyeva

Designing a Safety Confirmation System that Utilizes Human Behavior in Disaster Situations . . . 588
Masayuki Ihara, Hiroshi Nakajima, Goro Inomae, and Hiroshi Watanabe

Designing Ethical AI in the Shadow of Hume’s Guillotine . . . 594
Pertti Saariluoma and Jaana Leikas

A Counterattack of Misinformation: How the Information Influence to Human Being . . . 600
Subin Lee and Ken Nah

Effects of Increased Cognitive Load on Field of View in Multi-task Operations Involving Surveillance . . . 605
Seng Yuen Marcus Goh, Ka Lon Sou, Sun Woh Lye, and Hong Xu

Investigating Human Factors in the Hand-Held Gaming Interface of a Telerehabilitation Robotic System . . . 612
S. M. Mizanoor Rahman

Procedure of Mining Relevant Examples of Armed Conflicts to Define Plausibility Based on Numerical Assessment of Similarity of Situations and Developments . . . 619
Ahto Kuuseok

Human Digital Twins: Two-Layer Machine Learning Architecture for Intelligent Human-Machine Collaboration . . . 627
Wael Hafez
Semantic Network Analysis of Korean Virtual Assistants’ Review Data . . . 633
Hyewon Lim, Xu Li, Harim Yeo, and Hyesun Hwang

Design Collaboration Mode of Man–Computer Symbiosis in the Age of Intelligence . . . 640
Jinjing Liu and Ken Nah

User Experience over Time with Personal Assistants of Mobile Banking Application in Turkey . . . 646
Hatice Merve Demirci and Mehmet Berberoğlu

Human-Automation Interaction Through Shared and Traded Control Applications . . . 653
Mauricio Marcano, Sergio Díaz, Joshue Pérez, Andrea Castellano, Elisa Landini, Fabio Tango, and Paolo Burgio

Alignment of Management by Processes and Quality Tools and Lean to Reduce Unfilled Orders of Fabrics for Export: A Case Study . . . 660
Z. Bardales, P. Tito, F. Maradiegue, Carlos Raymundo-Ibañez, and Luis Rivera

Detection and Prevention of Criminal Attacks in Cloud Computing Using a Hybrid Intrusion Detection Systems . . . 667
Thierry Nsabimana, Christian Ildegard Bimenyimana, Victor Odumuyiwa, and Joël Toyigbé Hounsou

Development of Tutoring Assistance Framework Using Machine Learning Technology for Teachers . . . 677
Satoshi Togawa, Akiko Kondo, and Kazuhide Kanenishi

Replenishment System Using Inventory Models with Continuous Review and Quantitative Forecasting to Reduce Stock-Outs in a Commercial Company . . . 683
Carlos Malca-Ramirez, Luis Nuñez-Salome, Ernesto Altamirano, and José Alvarez-Merino

Applying SLP in a Lean Manufacturing Model to Improve Productivity of Furniture SME . . . 690
Zhelenn Farfan-Quintanilla, Manuel Caira-Jimenez, Fernando Sotelo-Raffo, Carlos Raymundo-Ibañez, and Moises Perez

Collaborative Model Based on ARIMA Forecasting for Reducing Inventory Costs at Footwear SMEs . . . 697
Alejandra Angulo-Baca, Michael Bernal-Bazalar, Juan Sotelo-Raffo, Carlos Raymundo-Ibañez, and Moises Perez
Contents
xix
A Framework of Quality Control Matrix in Paprika Chain Value: An Empirical Investigation in Peru . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704 Diana Garcia-Montero, Luz Roman-Ramirez, Fernando Sotelo-Raffo, and Edgar Ramos-Palomino Inventory Optimization Model Applying the Holt-Winters Method to Improve Stock Levels in SMEs in the Sports Retail Sector . . . . . . . . 711 Diego Amasifén-Pacheco, Angela Garay-Osorio, Maribel Perez-Paredes, Carlos Raymundo-Ibañez, and Luis Rivera Recruitment and Training Model for Retaining and Improving the Reputation of Medical Specialists to Increase Revenue of a Private Healthcare SME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719 Audy Castro-Blancas, Carlos Rivas-Zavaleta, Carlos Cespedes-Blanco, Carlos Raymundo, and Luis Rivera Research on Disabled People’s Museum Visit Experience from the Perspective of Actor-Network Theory . . . . . . . . . . . . . . . . . . . 726 Shifeng Zhao and Jie Shen Production Management Model to Balance Assembly Lines Focused on Worker Autonomy to Increase the Efficiency of Garment Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733 Valeria Sosa-Perez, Jose Palomino-Moya, Claudia Leon-Chavarril, Carlos Raymundo-Ibañez, and Moises Perez Rural Ecotourism Associative Model to Optimize the Development of the High Andean Tourism Sector in Peru . . . . . . . . . . . . . . . . . . . . . 740 Oscar Galvez-Acevedo, Jose Martinez-Castañon, Mercedes Cano-Lazarte, Carlos Raymundo-Ibañez, and Moises Perez Picking Management Model with a Focus on Change Management to Reduce the Deterioration of Finished Products in Mass Consumption Distribution Centers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746 Lourdes Canales-Ramos, Arelis Velasquez-Vargas, Pedro Chavez-Soriano, Carlos Raymundo-Ibañez, and Moises Perez Risk Factors Associated with Work-Related Low Back Pain Among Home-Based Garment Workers . . . . . . . . . . . . . . . . . . . 
. . . . . . 753 Sunisa Chaiklieng, Pornnapa Suggaravetsiri, and Sari Andajani Demand Management Model Based on Quantitative Forecasting Methods and Continuous Improvement to Increase Production Planning Efficiencies of SMEs Bakeries . . . . . . . . . . . . . . . . . . . . . . . . . 760 Denilson Contreras-Choccata, Juan Sotelo-Raffo, Carlos Raymundo-Ibañez, and Luis Rivera
Study on Key Elements of Shopping App Design for the Elderly . . . . . . 766 Wenfeng Liu, Fenghong Wang, and Yiyan Chen Shopping Website Accessibility Study Based on Users’ Mental Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773 Zhen Wu, Chengqi Xue, Yanfei Zhu, Binhong Zhai, and Chi Zhang HIRAC-Based Risk Management Model with POKA–YOKE and TPM Continuity to Control and Mitigate Emergency Scenarios in Hydrocarbon Sector Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780 Jose Echevarria-Cahuas, Maria Quispe-Huapaya, Cesar Ramirez-Valdivia, Carlos Raymundo, and Luis Rivera Materials and Inclusive Human Systems CAD-Based Risk Assessment Approach for Safe Scheduling of HRC Operations for Parts Produced by Laser Powder Bed Fusion . . . . . . . . 789 Fabio Pini, Enrico Dalpadulo, and Francesco Leali Photogrammetry and Additive Manufacturing Based Methodology for Decentralized Spare Part Production in Automotive Industry . . . . . 796 Antonio Bacciaglia, Alessandro Ceruti, and Alfredo Liverani Improved Heat Sink for Thermoelectric Energy Harvesting Systems . . . 803 Alessandro Bertacchini, Silvia Barbi, and Monia Montorsi A Framework Designing for Story Sharing of the Elderly: From Design Opportunities to Concept Selection . . . . . . . . . . . . . . . . . . 810 Cun Li, Jun Hu, Bart Hengeveld, and Caroline Hummels A Methodological Approach for the Design of Inclusive Assistive Devices by Integrating Co-design and Additive Manufacturing Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816 Francesco Gherardini, Andrea Petruccioli, Enrico Dalpadulo, Valentina Bettelli, Maria Teresa Mascia, and Francesco Leali New Collaborative Version of the Quality Function Deployment: Practical Application to the HABITAT Project . . . . . . . . . . . . . . . . . . . 
823 Giuseppe Mincolelli, Gian Andrea Giacobone, Michele Marchi, and Silvia Imbesi Human Centered Methodologies for the Development of Multidisciplinary Design Research in the Field of IOT Systems: Project Habitat and Pleinair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829 Giuseppe Mincolelli, Silvia Imbesi, Gian Andrea Giacobone, and Michele Marchi
Design of an Innovative Furniture System: Improving Acoustic Comfort in Coworking Workplaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835 Viola Geniola, Stefania Camplone, Antonio Marano, and Emilio Rossi Modeling of Subcutaneous Implantable Microchip Intention of Use . . . 842 Mona A. Mohamed A Brief Analysis of the Status Quo and Trend of Wearable Smart Jewellery Devices Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848 Jing Liu and Ken Nah Accessibility Evaluation of Video Games for Users with Cognitive Disabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853 Luis Salvador-Ullauri, Patricia Acosta-Vargas, and Sergio Luján-Mora Design of Smart Devices for Older People: A User Centered Approach for the Collection of Users’ Needs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860 Silvia Imbesi and Giuseppe Mincolelli Examining Feedback of Apple Watch Users in Korea Using Textmining Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865 Yu Lim Lee, Minji Jung, In-Hyoung Park, Ahyoung Kim, and Jae-Eun Chung Structural Testing of Laminated Prosthetic Sockets: Comparison of Philippine Pineapple Fabric and Fiberglass . . . . . . . . . . 871 Glenn Alkuino, Ervin Fandialan, and Marvin Medina Challenges and Improvements in Website Accessibility for Health Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875 Patricia Acosta-Vargas, Paula Hidalgo, Gloria Acosta-Vargas, Mario Gonzalez, Javier Guaña-Moya, and Belén Salvador-Acosta Providing Comprehensive Navigational Cues Through the Driving Seat to Reduce Visual Distraction in Current Generation of Semi-autonomous Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
882 Ahmed Farooq, Grigori Evreinov, and Roope Raisamo Ensuring the Sustainability of Inclusive Projects Through Strategic Addressing Supported by Process Management: Case Applied to Aquamarinna Handmade Soap . . . . . . . . . . . . . . . . . . 889 Diego S. Suarez, Esteban Crespo-Martínez, and Pedro Mogrovejo A New Model to Bionic Hand Prosthesis with Individual Fingers Actuators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896 Marcelo H. Stoppa, Guilherme F. Neto, and Danillo A. de S. Dunck A Predictive Model of Users’ Behavior and Values of Smart Energy Meters Using PLS-SEM . . . . . . . . . . . . . . . . . . . . . . . 903 Ahmed Shuhaiber
UltraSurfaces: A New Material Design Vision . . . . . . . . . . . . . . . . . . . . 909 Marinella Ferrara and Chiara Pasetti The Hybrid Dimension of Material Design: Two Case Studies of a Do-It-Yourself Approach for the Development of Interactive, Connected, and Smart Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916 Stefano Parisi, Markus Holzbach, and Valentina Rognoli Human-Autonomy Teaming Goal Directed Design of Rewards and Training Features for Selflearning Agents in a Human-Autonomy-Teaming Environment . . . . . . . 925 Simon Schwerd, Sebastian Lindner, and Axel Schulte Facial Expressions as Indicator for Discomfort in Automated Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932 Matthias Beggiato, Nadine Rauh, and Josef Krems Can We Talk? – The Impact of Conversational Interfaces on Human Autonomy Teaming Perception, Performance and Situation Awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938 Adam Bogg, Andrew Parkes, and Mike Bromfield Driver’s Situational Awareness and Impact of Phone Interface Modality in Conventional and Semi-autonomous Vehicles . . . . . . . . . . . 945 Syeda Rizvi, Francesca Favaro, and Nazanin Nader Concept of an Adaptive Cockpit to Maintain the Workflow of the Cockpit Crew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952 Juliane Müller and Axel Schulte A Conceptual Augmentation of a Pilot Assistant System with Physiological Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959 Dennis Mund, Evgeni Pavlidis, Matthew Masters, and Axel Schulte Implementation of Teaming Behavior in Unmanned Aerial Vehicles . . . 966 Marius Dudek, Sebastian Lindner, and Axel Schulte Behavioral Analysis of Information Exchange Digitalization in the Context of Demand Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
973 Tim Lauer and Katharina Franke Signs Symbols & Displays in Automated Vehicles: A Focus Group Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980 Joy Richardson, Kirsten Revell, Jisun Kim, and Neville A. Stanton Beauty Attracts the Eye but Character Captures the Heart: Why Personality Matters in Chat Bot Design . . . . . . . . . . . . . . . . . . . . . 986 Helen Muncie
Integration of Humans in the Fallback Process by a Machine in Fully Automated Railway Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992 Bilal Üyümez and Andreas Oetting Analysis of Facial Expressions Explain Affective State and Trust-Based Decisions During Interaction with Autonomy . . . . . . . 999 Catherine Neubauer, Gregory Gremillion, Brandon S. Perelman, Claire La Fleur, Jason S. Metcalfe, and Kristin E. Schaefer Let’s Get in Touch Again: Tangible AI and Tangible XR for a More Tangible, Balanced Human Systems Integration . . . . . . . . . 1007 Frank Flemisch, Konrad Bielecki, Daniel López Hernández, Ronald Meyer, Ralph Baier, Nicolas Daniel Herzberger, and Joscha Wasser Time Line Based Tasking Concept for MUM-T Mission Planning with Multiple Delegation Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014 Felix Heilemann and Axel Schulte Towards Cognitive Assistance and Teaming in Aviation by Inferring Pilot’s Mental State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021 Nele Russwinkel, Christoph Vernaleken, and Oliver W. Klaproth Evaluating the Impact of Phone Interface Modality on Response Times to Stimuli in Conventional and Semi-automated Vehicles . . . . . . 1028 Sky O. Eurich, Shivangi Agarwal, and Francesca Favaro Design and Evaluation of Human-Friendly Hand-Held Gaming Interface for Robot-Assisted Intuitive Telerehabilitation . . . . . . . . . . . . 1034 S. M. Mizanoor Rahman Capture of Intruders by Cooperative Multiple Robots Using Mobile Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041 Yasushi Kambayashi, Taichi Sekido, and Munehiro Takimoto Automation as Driver Companion: Findings of AutoMate Project . . . . . 1048 Andrea Castellano, Massimo Fossanetti, Elisa Landini, Fabio Tango, and Roberto Montanari Applications and Future Trends Development of a Human System Integration Program in Military Context . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057 Jari Laarni and Marja Ylönen Beyond Confluence, Integration and Symbiosis: Creating More Aware Relationships in Smart Cities . . . . . . . . . . . . . . . . . . . . . . 1063 H. Patricia McKenna
A Potential Analysis of Cognitive Assistance Systems in Production Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069 Jessica Klapper, Bastian Pokorni, and Moritz Hämmerle Identifying and Analysing Risk Factors from a Sociotechnical System Perspective: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074 Amangul A. Imanghaliyeva Experimental Learning for a Basic Technology Acquisition of Moving Images Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082 Akiko Kondo and Satoshi Togawa Mechanical Fatigue Evaluation by Image Recognition . . . . . . . . . . . . . . 1088 Massimo Milani, Luca Montorsi, Luca Fontanili, Gabriele Storchi, and Gabriele Muzzioli Universal Access and Inclusive Dwelling Design for a Family in Monte Sinahí, Guayaquil, Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . 1094 Jesús Rafael Hechavarría Hernández, Boris Forero, and Robinson Vega Jaramillo Integrated Safety Risk Assessment Between Enterprises, Industries and Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101 Lu Zhang, Yun Luo, and Rui Liao Comparison Between ARIMA and LSTM-RNN for VN-Index Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107 Nguyen Trong Co, Hoang Huu Son, Ngo Thanh Hoang, Tran Thi Phuong Lien, and Trinh Minh Ngoc E-material Formatting Application Prototype 2.0 Development Through Usability Testing of Prototype 1.0 . . . . . . . . . . . . . . . . . . . . . . 1113 Kristine Mackare, Anita Jansone, and Raivo Mackars Use of CAD-CAM Technologies in the Production of Furniture for Natural Disaster Areas in Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . 1119 Francesco Giuseppe Magnone, Víctor Gustavo Gómez Rodríguez, Yoenia Portilla Castell, and Jesús Rafael Hechavarría Hernández A Five-Factor KMS Success Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1126 Gabriel Nyame and Zhiguang Qin A Study on Understanding of Visitor Needs in Art Museum: Based on Analysis of Visual Perception Through Eye-Tracking . . . . . . . 1132 Taeha Yi, Mi Chang, Sukjoo Hong, Meereh Kim, and Ji-Hyun Lee Analysis of Art Museums’ Visitor Behavior and Eye Movements for Mobile Guide App Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1138 Mi Chang, Taeha Yi, Po Yan Lai, Jun Hee Lee, and Ji-Hyun Lee
Al-Maqta Canal of Abu Dhabi, UAE: A Study of Waterfront Landscapes and Flow in Manmade Canals . . . . . . . . . . . . . . . . . . . . . . 1145 Mohamed El Amrousi and Mohamed Elhakeem A Discussion of User Experience on a Panoramic Scooter Riding Video Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152 Fei-Hui Huang Application Trend of Interactive Multimedia in Art Museums . . . . . . . 1159 Yongbin Wang and Jian Yu Design Criteria in Vernacular Architecture as a Proposal for Low-Income Dwelling for Urban Parishes of the Babahoyo Canton, Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1164 Julio Franco Puga, Bryan Colorado Pástor, Jesús Rafael Hechavarría Hernández, and Maikel Leyva Consumer Experience of a Disruptive Technology: An O2O Food Delivery App Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1171 Jaehye Suk, Yeon Ji Yang, Yun Jik Jeong, Muzi Xiang, and Kee Ok Kim A Group Travel Recommender System Based on Collaborative Filtering and Group Approximate Constraint Satisfaction . . . . . . . . . . . 1178 JinLu He, IlYoung Choi, and JaeKyeong Kim Consumer’s Information Privacy and Security Concerns and Use of Intelligent Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184 Seonglim Lee, Jaehye Suk, Hee Ra Ha, Xiao Xi Song, and YuanZhou Deng PEST Analysis Based on Fuzzy Decision Maps for the Ordering of Risk Factors in Territorial Planning of the Vinces Canton, Ecuador . . . 1190 Carlos Luis Valero Fajardo and Jesús Rafael Hechavarría Hernández Model for Urban Consolidation of Informal Human Settlements Based on Cooperation Systems and Human Participation in Guayaquil, Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195 María Milagros Fois, Karen Sellan, Karla Moscoso, and Maria Ruiz Systemic Approach to the Territorial Planning of the Urban Parish La Aurora, Daule, Ecuador . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . 1201 Cyntia Alava Portugal, Jesús Rafael Hechavarría Hernández, and Milagros Fois Lugo Systemic Approach to Strategic Tourism Planning in the Cantonal Capital of Bahía de Caráquez, Sucre, Ecuador . . . . . . . . . . . . . . . . . . . 1206 Milton Zambrano and Jesús Rafael Hechavarría Hernández
Cognitive Rehabilitation for Autism Children Mental Status Observation Using Virtual Reality Based Interactive Environment . . . . 1213 Tamilselvi Dhamodharan, Manju Thomas, Sathiyaprakash Ramdoss, Karthikeyan JothiKumar, SaiNaveenaSri SaravanaSundharam, BhavaniDevi Muthuramalingam, NilofarNisa Hussainalikhan, Sugirtha Ravichandran, VaibhavaShivani Vadivel, Pavika Suresh, Sasikumar Buddhan, and Ajith Madhusudanan Eye Control System Development and Research of Effects of Color of Icons on Visual Search Performance Based on the System . . . . . . . . 1219 Jiaqi Cui, Yafeng Niu, Chengqi Xue, Xijiang Cai, Yi Xie, Bingzheng Shi, and Lincun Qiu How to Improve Manufacturing Process Implementing 5S Practices: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225 Beata Mrugalska, Monika Konieczna, and Magdalena K. Wyrwicka The Initial Stage of Development of a New Computer Program for the Processing of Psychophysiological Tests . . . . . . . . . . . . . . . . . . . 1233 Jelena Turlisova and Anita Jansone Experimental Study on Dynamic Map Information Layout Based on Eye Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238 Jiapei Ren, Haiyan Wang, and Junkai Shao Research on Readability of Adaptive Foreground in Dynamic Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244 Maoping Chi and Lei Zhou Research on Interaction Design of Children’s Companion Robot Based on Cognitive Psychology Theory . . . . . . . . . . . . . . . . . . . . . . . . . 1250 Tianmai Zhang and Wencheng Tang Strategies for Accessibility to the Teodoro Maldonado Hospital in Guayaquil. A Design Proposal Focused on the Human Being . . . . . . 1256 Josefina Avila Beneras, Milagros Fois Lugo, and Jesús Rafael Hechavarría Hernández Fatigue Measurement of Task: Based on Multiple Eye-Tracking Parameters and Task Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1263 Hanyang Xu, Xiaozhou Zhou, and Chengqi Xue Emotional Data Visualization for Well-Being, Based on HRV Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1270 Akane Matsumae, Ruiyao Luo, Yun Wang, Eigo Nishimura, and Yuki Motomura
A Consumer-Centric Approach to Understand User’s Digital Experiences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277 Yeon Ji Yang, Jaehye Suk, Kee Ok Kim, Hyesun Hwang, Hyewon Lim, and Muzi Xiang Research on Design Skills for Personnel Evaluation Systems and Educational Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284 Toshiya Sasaki Correction to: Intelligent Human Systems Integration 2020 . . . . . . . . . . C1 Tareq Ahram, Waldemar Karwowski, Alberto Vergnano, Francesco Leali, and Redha Taiar
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289
Automotive Design and Transportation Engineering
User-Centered Design Within the Context of Automated Driving in Trucks – Guideline and Methods for Future Conceptualization of Automated Systems
Paula Laßmann, Florian Reichelt, Dominique Stimm, and Thomas Maier
Institute for Engineering Design and Industrial Design (IKTD), Department of Industrial Design Engineering, University of Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany {Paula.Lassmann,Florian.Reichelt,Thomas.Maier}@iktd.uni-stuttgart.de
Institute for Mobility and Digital Innovation (MoDI), Stuttgart Media University, Nobelstraße 10, 70569 Stuttgart, Germany [email protected]
Abstract. The monotony of moving in queues increases truckers’ sleepiness and the danger of micro sleep, which poses a serious risk to traffic safety. The research project TANGO aims at overcoming these risks by developing an Attention and Activity Assistance system (AAA). The AAA shall enable truck drivers to engage in non-driving-related activities during automated driving phases (up to SAE level 3) while keeping them alert for their remaining tasks (monitoring or takeover). Throughout the project, numerous user studies have yielded substantial knowledge about users’ needs and design requirements for an automated system. To make this knowledge more accessible for future research, a guideline will be developed that covers both design guidance and a review of the applied methods. The main scope of this paper is the methodical process of developing this guideline.
Keywords: Automated driving · User-centered design · Automotive design · Guideline · Methods
1 Objective and Significance
The monotony of driving a truck, mainly caused by moving in queues, increases the danger of micro sleep and therefore poses a serious risk to traffic safety [1]. The aim of TANGO (German abbreviation for Technologie für automatisiertes Fahren nutzergerecht optimiert; in English, Technology for automated driving, optimized to user needs) is to improve user experience and acceptance of automated driving in trucks. The project is developing a new technology that helps keep the driver at an optimal level of strain while also taking driver comfort into account: the Attention and Activity Assistance system (AAA).
© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 3–8, 2020. https://doi.org/10.1007/978-3-030-39512-4_1
The AAA shall enable truck drivers to engage in
non-driving-related activities during automated driving phases (SAE levels 2 and 3 [2]). This will be achieved with the help of a driver observation system as well as a user-friendly human-machine interface (HMI) that is constantly evaluated and improved throughout the development process. The development of the AAA follows the user-centered design process according to DIN EN ISO 9241-210 [3], which integrates users’ concerns in iterative steps of conceptualization and testing. Within this process, the project consortium gained vast knowledge about users’ concerns and safety while interacting with an automated truck system like the AAA, and applied a broad range of scientific methods. To make this knowledge more accessible to HMI designers and future research projects, the consortium will derive user concerns, top findings, and recommendations (summarized in a chapter on design guidance) as well as a review of methods, and document them in a guideline. For manual driving, several guidelines exist, among them the documentation of the HARDIE project [4], which covers recommendations and methodological advice. Other guidelines focus on specific interface elements such as vibro-tactile displays [5], high-priority warning signals [6], or in-vehicle information systems [7]. For automated driving at SAE levels 2 and 3, however, few guidelines provide actual guidance on designing and developing automated systems. The National Highway Traffic Safety Administration (NHTSA) published a guidance document on human factors design for those levels in 2018 [8], which provides substantial information and aid for designing interfaces of automated vehicles. In addition, Naujoks and colleagues [9] give advice by listing design recommendations and including an extended checklist for automated interface design.
The main goal of the guideline presented in this paper is to make the knowledge gained within the TANGO project available to developers concerned with similar research questions in the future development and design of automated systems. The guideline therefore focuses not only on recommendations concerning the HMI design and the design of the AAA in general but also presents options for the methodological approach while developing automated systems, which is not yet covered in other guidelines.
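The strain-balancing idea behind the AAA can be made concrete with a minimal sketch: an estimated strain level is kept inside an optimal band, with takeover requests always taking priority. All names, thresholds, and actions below are hypothetical illustrations, not the actual TANGO/AAA logic:

```python
# Illustrative policy: keep the driver inside an optimal strain band.
# Thresholds, names, and actions are hypothetical placeholders,
# not taken from the TANGO implementation.

LOW_STRAIN = 0.3   # below this band, monotony and micro-sleep risk rise
HIGH_STRAIN = 0.7  # above this band, the driver is overloaded

def assistance_action(strain: float, takeover_requested: bool) -> str:
    """Map an estimated strain level (0..1) to an assistance action."""
    if takeover_requested:
        return "alert_takeover"    # takeover requests always win over comfort
    if strain < LOW_STRAIN:
        return "suggest_activity"  # counter monotony with a non-driving-related task
    if strain > HIGH_STRAIN:
        return "reduce_task_load"  # pause secondary activities
    return "monitor"               # optimal band: keep observing the driver

print(assistance_action(0.2, False))  # -> suggest_activity
```

The point of the sketch is only the shape of the policy: comfort-oriented actions (suggesting or pausing activities) sit inside a band whose boundaries are exactly the over- and underload conditions the project addresses.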
2 Methodological Development of the Guideline
A comprehensive project documentation serves as the basis for the guideline. This documentation contains all data and facts gathered throughout the iterative development of the AAA. Since the project documentation is very detailed and will not be published, it is of limited use for other projects and developers. The guideline, in contrast, should provide easy access to the knowledge gained within TANGO. To separate design from methodological concerns, the TANGO guideline is divided into two parts: design guidance and a review of methods. An example of a question addressed in this guideline is how drivers could occupy themselves during automated driving phases: what would be best from the perspective of the user (e.g. [10]), and what would be best in terms of traffic safety (e.g. [11])? The information in the two segments, design guidance and review of methods, is extracted from the knowledge contained in each part of the project documentation. Whereas design guidance focuses on top findings, recommendations, and user concerns regarding the design of the AAA, the review of methods provides a detailed overview of the methods applied within TANGO, which can be used for both the development and the evaluation of an automated system like the AAA. Figure 1 shows the development process and structure of the guideline as well as the connections to the detailed project documentation.
Fig. 1. Structure of the guideline, connections to the project documentation, and development steps.
2.1 Design Guidance
The design guidance should provide advice on the future development and design of automated systems. Major findings listed in the project documentation are discussed and categorized in workshops within the project consortium. Because not every finding is a valid basis for deriving a recommendation (for example, a finding supported by only one study rather than multiple studies), a consistent clustering and categorization was conducted in these workshops with special regard to importance and scientific validity. The findings are clustered into three core categories: user concerns, top findings, and recommendations. This categorization is important because the TANGO project included multiple qualitative user studies. The users voiced various concerns that the consortium can recommend neither for the TANGO system nor for the design of a level 2 or 3 automated system in general, e.g. the wish to sleep during automated driving. Nevertheless, such user concerns could be interesting for other projects, for example those addressing higher automation levels. Top findings comprise all findings from the studies that guarantee scientific validity and can serve as a basis for recommendations. One example of a top finding is that, during level 2 driving, an auditory cognitive non-driving-related task leads to good monitoring performance in terms of a visual detection task [11]. The recommendations include design suggestions derived from the study results of the project as well as a discussion with references to other projects.
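The clustering step described above can be sketched as a simple decision rule: a finding only becomes a top finding (and thus a possible basis for a recommendation) when it rests on more than one study, while wishes the consortium cannot endorse remain user concerns. The function below is an illustrative simplification with hypothetical names; the real categorization was performed in consortium workshops, not by a rule:

```python
# Hypothetical sketch of the three-category clustering of findings.
# Recommendations are subsequently derived from top findings, so only
# the two evidence categories are assigned here.

def categorize_finding(n_supporting_studies: int, endorsable: bool) -> str:
    """Sort a finding into the guideline's core evidence categories."""
    if not endorsable:
        return "user_concern"  # e.g. the wish to sleep during level 2/3 driving
    if n_supporting_studies >= 2:
        return "top_finding"   # scientifically valid basis for a recommendation
    return "user_concern"      # single-study results lack sufficient validity

# The level 2 monitoring result [11] was replicated and endorsed:
print(categorize_finding(2, True))  # -> top_finding
```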
2.2 Review of Methods
The methods applied within the various studies (qualitative user research as well as simulator studies) are gathered and evaluated via checklists. These checklists contain a description of each method as well as a critical methodical reflection in order to give advice for future user-centered design projects. To this end, all methods applied within the TANGO project are categorized into macro and micro methods, where a macro method is a superordinate method that includes one or more micro methods. For instance, within a simulator study (the macro method), any questionnaire or objective measure used would be a micro method. Each macro and micro method was critically reflected with the help of the quality criteria of scientific tests (objectivity, reliability, and validity) as well as the strengths and weaknesses of the method. Each project partner filled in the checklists for the methods applied in their studies. Furthermore, for a better methodical overview, each method was rated via a three-color system comparable to a traffic light: red means that the method did not work well in the context of the TANGO project and is not recommended for similar applications; a method rated yellow could be used, but adaptation is suggested; a method that worked well in this application is rated green. Each method assessment comes with an explanation of the rating by its operator. The rating cannot be transferred to any other project without consideration of the context. Publications of the results of the TANGO studies will be linked to each method for further consultation, where available. Reviews of three methods that have been used within the project are displayed in Table 1. In conclusion, the developer will have a list of all methods that were conducted in TANGO, together with additional information about the context as well as a (non-)recommendation.

Table 1. Example of the rating of several micro methods according to the experience gathered in the specific context of TANGO.

Macro method: Ethnographical interview (user-centered research to identify drivers’ needs) [10]. Micro method: observation of truck drivers (discovery of potentially critical moments in terms of driver experience). Rating: green. Remarks: achievement in the discovery of relevant positive and negative occurrences during the drive.

Macro method: Driving simulator study at the University of Stuttgart [11]. Micro method: Sign Detection Task (SDT). Rating: yellow. Remarks: method to measure visual distraction; hit rate too high, targets should be displayed shorter.

Macro method: Driving simulator study at MAN Truck & Bus SE. Micro method: electrodermal activity (EDA). Rating: red. Remarks: no differences in EDA could be shown during different levels of strain.

Legend: red = the method did not work well in the context of the project TANGO and is not recommended for similar application; yellow = adaptation of the method is suggested; green = the method worked well in the context of the project TANGO.
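Such a review is, in effect, a small structured record per method, with macro methods nesting their micro methods. The sketch below shows one possible encoding of a checklist entry from Table 1; the field names and the macro-level rating are illustrative assumptions, not the project’s actual checklist template:

```python
from dataclasses import dataclass, field

# Traffic-light legend from Table 1.
RATINGS = {
    "green": "worked well in the TANGO context",
    "yellow": "usable, but adaptation is suggested",
    "red": "did not work well; not recommended for similar application",
}

@dataclass
class MethodReview:
    title: str
    rating: str                 # traffic-light status: "green" | "yellow" | "red"
    remarks: str
    micro_methods: list = field(default_factory=list)  # macro methods nest micros

sdt = MethodReview(
    "Sign Detection Task (SDT)", "yellow",
    "measures visual distraction; hit rate too high, display targets shorter",
)
study = MethodReview(
    "Driving simulator study at the University of Stuttgart [11]", "green",
    "macro method bundling questionnaires and objective measures",
    micro_methods=[sdt],
)

# Every rating must map onto the traffic-light legend.
assert all(m.rating in RATINGS for m in [study, *study.micro_methods])
```

The nesting mirrors the macro/micro distinction: a simulator study record aggregates the checklist entries of the questionnaires and objective measures used within it.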
3 Conclusion and Outlook
This guideline, which includes both design recommendations and methods, sums up the findings of TANGO in an accessible format. It aims to assist future projects dealing with automated vehicle design, in particular regarding the design of an HMI. The main scope of this paper is the methodical process of developing the guideline, covering both the recommendations and the methods. The guideline will presumably be published after the TANGO project ends and may be a handy tool for developers of automated systems. In addition, the description of how the guideline was developed is intended as an example of how other research projects could summarize and document their findings in order to make them accessible to other scientists. Especially for a review of methods, no comparable examples have been found; the goal is to start a discourse about best practice in this regard.
Acknowledgments. The German Federal Ministry of Economic Affairs and Energy funds the presented research. The goal of this 3.5-year research project is to develop an Attention and Activity Assistance system (AAA) to overcome monotony and truckers’ sleepiness. The project consortium consists of Robert Bosch GmbH, Volkswagen Aktiengesellschaft, MAN Truck & Bus SE, Stuttgart Media University, and the University of Stuttgart.
References
1. National Transportation Safety Board: Factors that affect fatigue in heavy truck accidents. Safety Study NTSB/SS-95/01 and NTSB/SS-95/02. Washington, DC (1995)
2. SAE International: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles (2018)
3. DIN EN ISO 9241-210: Ergonomics of human-system interaction – Part 210: Human-centered design for interactive systems (2010)
4. Ross, T., Midtland, K., Fuchs, M., Pauzié, A., Engert, A., Duncan, B., Vaughan, G., Vernet, M., Peters, H., Burnett, G.E., May, A.J.: HARDIE Design Guidelines Handbook. Human Factors Guidelines for Information Presentation by ATT Systems, vol. 530 (1996)
5. Van Erp, J.B.: Guidelines for the use of vibro-tactile displays in human computer interaction. In: Proceedings of Eurohaptics (2002)
6. United Nations Economic Commission for Europe: Guidelines on establishing requirements for high-priority warning signals (2009)
7. Transport Research Laboratory: A checklist for the assessment of in-vehicle information systems (IVIS). TRL, Wokingham (2011)
P. Laßmann et al.
8. Campbell, J.L., Brown, J.L., Graving, J.S., Richard, C.M., Lichty, M.G., Bacon, L.P., Sanquist, T.: Human factors design guidance for level 2 and level 3 automated driving concepts (Report No. DOT HS 812 555). National Highway Traffic Safety Administration, Washington, DC (2018)
9. Naujoks, F., et al.: Towards guidelines and verification methods for automated vehicle HMIs. Transp. Res. Part F: Traffic Psychol. Behav. 60, 121–136 (2019)
10. Ruppert, M., Engeln, A., Michel, B., Stimm, D.: The human factor: working to ensure acceptance of autonomous systems among truck drivers. Fut. Transp. Rev. 2018, S118–S123. London (2018)
11. Lassmann, P., Fischer, M.S., Bieg, H.-J., Jenke, M., Reichelt, F., Tüzün, G.-J., Maier, T.: Keeping the balance between overload and underload during partly automated driving: relevant secondary tasks. Submitted to 17. ATZ-Fachtagung. Springer (2019)
Towards Probabilistic Analysis of Human-System Integration in Automated Driving

Ephraim Suhir1,2,3,4, Gunther Paul2, and Hermann Kaindl3

1 Portland State University, Portland, OR, USA
[email protected]
2 James Cook University, Townsville, QLD, Australia
[email protected]
3 TU Wien, Vienna, Austria
[email protected]
4 ERS Co., Los Altos, CA, USA
Abstract. According to the Automated Driving Roadmap ERTRAC 17, only vehicles of Level 5 may not need human interference. Current adaptive cruise control systems and more advanced automated driving solutions below Level 5 therefore require that a human driver takes over if an extraordinary situation occurs. A critical safety problem may be caused by the very short time span available to the driver. It has recently been demonstrated, mostly in application to the aerospace domain, how probabilistic analytical modeling (PAM) can effectively complement computational simulation techniques in various human-in-the-loop (HITL) missions and off-normal situations, when the reliability of the equipment (instrumentation), both hard- and software, and the performance of the human contribute jointly to their likely outcome. Our objective is to extend this approach, with appropriate modifications, to safety analyses of automated driving applications.

Keywords: Human factors · Human-systems integration · Automated driving
1 Introduction

It has been recently demonstrated, mostly in application to the aerospace domain [1–3], how PAM [4, 5] could effectively complement computational simulations (see, e.g., [6, 7]) or model checking (see, e.g., [8, 9]) when the reliability of the equipment (instrumentation), both hard- and software [10], and human performance jointly contribute to the outcome of a mission or an extraordinary situation. In this paper we show, using the example of emergency stopping, how this approach can be brought "down to earth", i.e., modified for application to automotive automated driving. The application of the PAM concept can improve the understanding of, and accounting for, human performance in various vehicular missions and situations [11, 12]. It is noteworthy that the automotive environment may be even less forgiving than the aerospace one: slight deviations in aircraft altitude, speed, or human actions are often tolerable without immediate consequences, while an automotive vehicle is likely to have tighter control requirements for avoiding collision than an aircraft [13].

© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 9–14, 2020. https://doi.org/10.1007/978-3-030-39512-4_2

The existing Automated Driving Roadmap ERTRAC 17 [14] indicates that only vehicles of Level 5 may not need human interference. Hence, using any of the current adaptive cruise control (ACC) or more advanced autopilot solutions requires a human driver to take over if something extraordinary occurs, or if there is an autopilot failure. A key safety problem is likely to be caused particularly by the short time span available to the driver to react properly. While this rather obvious problem is fairly well understood qualitatively, we are not aware of any publicly available quantitative evaluation. To address it, we present and analyze a first example, in which the automated system suddenly hands over to the driver in the face of an obstacle. In fact, this is a scenario in which the automated system could instead directly and automatically initiate emergency braking [15]. We think that this will eventually be the better approach with respect to safety in such a situation, but there may even be legal problems involved in not implementing it and handing over to the driver instead. Our analysis provides some quantitative evidence of how detrimental this is in terms of safety, because of the time needed by the human driver to take over and to initiate the braking himself. We conjecture that the basic modeling approach and the probabilistic analysis based on it will be amenable to a more general study of HITL in the context of automated driving.

The remainder of this paper is organized as follows. First, we present the example problem chosen for this initial study. Then this example is analyzed with both a deterministic and a probabilistic approach, where the latter analysis is the core contribution of this paper. Finally, we draw a few conclusions and sketch future work on our way towards probabilistic analysis of human-system integration in automated driving.
2 An Example Problem

The context of this example is using ACC or an even more advanced autopilot. Let us assume that there is fog ahead, and that, hidden in it, an accident has already happened, so that the road is blocked by one or more obstacles that no longer move. In such a situation, the automated system may suddenly realize this and hand over to the human driver. The role of the human factor could be assessed by comparing the distance covered by the vehicle from the moment an obstacle has been detected (pre-deceleration time) with the distance covered from the end of this 'undesirable' time to the moment when the 'useful' constant-deceleration time ends; see Fig. 1. The accident can be avoided if the probability that the total operational distance covered by the moving vehicle during these two random periods of time exceeds the available sight distance (automatically detected, e.g., by radar) is sufficiently low.

Fig. 1. Overview of the model.

The pre-deceleration time includes three major periods: (1) the decision-making time, i.e., the time that the system and/or the driver need to decide whether the driver has to intervene, including the time to take over control of the vehicle (this important distinction still has to be determined and decided upon [16], and this critical effort and the associated time and distance are beyond the scope of this analysis); (2) the time that the driver needs to decide on pushing the brakes (pre-braking time), including biomechanical motion time; and (3) the time that the driver needs to adjust the brakes (brake-adjusting time) when interacting with the vehicle's anti-lock (anti-skid) braking system. Although this third period is affected by both human and vehicle performance, in this analysis we conservatively assume that it is part of the pre-deceleration time. A safe outcome of the situation in question is likely if the probability that the (random) sum of the (random) distances corresponding to the (also random) sequential pre-deceleration and deceleration times exceeds the 'available' (also random) sight distance is sufficiently low.
3 Analysis

3.1 Deterministic Approach

A deterministic approach distinguishes two periods of time in a possible collision situation:

(1) The pre-deceleration time T0, from the initial moment when the radar detects the obstacle until the moment when the vehicle starts to decelerate. This time depends on the driver's experience, age, fatigue and, perhaps, other human capacity factors (HCFs), as well as on the identification-reaction-communication performance of the automated vehicle system [1–4, 12]. It can be assumed that the vehicle keeps moving at its initial speed V0 during this time, and it is this time that characterizes the performance of the driver and the automation systems. If, e.g., the car's initial speed is V0 = 13.41 m/s, the pre-deceleration time may be T0 = 3.0 s (the corresponding distance is S0 = V0 T0 = 13.41 × 3 = 40.23 m).

(2) The deceleration time T1 = 2S1/V0 = V0/a, obtained assuming constant and immediate deceleration a, i.e., immediate effective braking with no phase of brake-force build-up, where S1 is the stopping distance during the deceleration time. If, e.g., a = 4 m/s² (it is this deceleration that characterizes the vehicle's ability to decelerate when necessary) and the initial velocity is V0 = 13.41 m/s, then T1 = V0/a = 13.41/4 = 3.35 s is the deceleration time, and S1 = V0 T1/2 = 13.41 × 3.35/2 = 22.46 m is the corresponding braking distance.

The contributions of the two main constituents of the total stopping distance S = S0 + S1 = 40.23 + 22.46 = 62.69 m are, in this example, roughly in the ratio 2:1. It also follows from the formula S = V0 (T0 + T1/2) for the total stopping distance that the pre-deceleration time T0, which reflects the role of the human factor, is even more critical than the deceleration time T1 affected by the vehicle and its brakes, and that the total stopping time is simply proportional to the initial velocity, which should be low enough to avoid an accident and to allow the driver to make a decision in a timely fashion. If the stopping distance S is smaller than the available sight distance S*, measured from the location at which the autopilot and/or the driver detected the obstacle to the location of the obstacle, then the collision can possibly be avoided. In the above example, the detected obstacle should be no closer than S* = 63 m (available sight distance) to the location at which the vehicle and/or driver detected it; otherwise a collision becomes unavoidable.

3.2 Probabilistic Approach
A probabilistic approach recognizes that, in effect, none of the above times and corresponding distances are known, or could be, or even will ever be, evaluated with sufficient certainty. Therefore, a probabilistic approach should be employed to assess the probability of an accident. A possible rationale behind such an approach is, to some extent, similar to the "convolution approach" applied in the helicopter-landing-ship situation [4], where, however, random times, not random distances, were considered. The rationale behind the PAM concept used in this analysis is as follows: if the probability P(S ≥ Ŝ) that the random sum S = S0 + S1 of the two random distances S0 and S1 is above the 'available' (and also random) distance Ŝ to the obstacle is low, then there is a good chance, and a good reason to believe, that a collision will be avoided.

It is natural to assume that both random times T0 and T1, as well as the corresponding distances, are distributed in accordance with the Rayleigh law (see, e.g., [11]). Indeed, neither of these random times can be zero, but neither can be very long either. In such an emergency situation, short times are more likely than long times, and their probability density functions should therefore be heavily skewed toward short times. The Rayleigh distribution

f(t) = (t/σ²) exp(−t²/(2σ²))

possesses all these physically important properties and is therefore used in our analysis. Here σ is the mode (most likely value) of the random variable T, t̄ = √(π/2) σ is its mean value, and D = √((4 − π)/2) σ is its standard deviation. Using the Rayleigh law and treating the distances S0 and S1 as non-random functions of the random variables T0 and T1, we use the expressions

f0(s0) = (s0/σ0²) exp(−s0²/(2σ0²)),  f1(s1) = (s1/σ1²) exp(−s1²/(2σ1²))

as their probability density functions, where σ0 = V0 T0 and σ1 = (V0/2) T1. It is also natural to assume that the available distance S* is normally distributed:

fs(s) = (1/(√(2π) σs)) exp(−(s − s̄)²/(2σs²)),

where s̄ is the mean value of the random distance S* and σs is its standard deviation. The "safety factor" s̄/σs of this distribution should be large enough (say, s̄/σs ≥ 4) for the normal law to be an acceptable approximation for a quantity that cannot be negative: for large s̄/σs ratios, the small negative values, although they exist, are 'suppressed' by the large positive values of the distribution. The Rayleigh law cannot be employed for the available sight distance and the corresponding time, because these, unlike the random times T0 and T1, should be symmetric with respect to their mean values.

The probability PS that the sum S = S0 + S1 of the random variables S0 and S1 exceeds a certain level Ŝ is [1, 11]

PS = 1 − ∫₀^Ŝ (s0/σ0²) exp(−s0²/(2σ0²)) [1 − exp(−(Ŝ − s0)²/(2σ1²))] ds0.  (1)

If, e.g., Ŝ = 85.2 m, σ0 = 40.23 m, and σ1 = 44.97 m, then PS = 0.6832. The probability P that the normally distributed 'available' distance S* is beyond the Ŝ level is

P = (1/(√(2π) σs)) ∫_Ŝ^∞ exp(−(s − s̄)²/(2σs²)) ds = ½ [1 − erf((Ŝ − s̄)/(√2 σs))].  (2)

If, e.g., the mean value of the available distance is s̄ = 66.5 m and its standard deviation is σs = 13.5 m, then, with Ŝ = 85.2 m, formula (2) yields P = 0.0829, and the probability of collision is PC = PS P = 0.6832 × 0.0829 = 0.0566. This probability should be evaluated for different Ŝ values. Once a sufficiently low probability PC has been established (agreed upon), the suggested methodology can be used to quantify, on a probabilistic basis, the roles of the human factor and of the initial speed of the vehicle in the likelihood of a successful outcome of the addressed situation.
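The worked numbers of both analyses can be reproduced with a short script using only the Python standard library. This is an illustrative sketch, not the authors' code: the function names and the trapezoidal integration scheme are our choices.

```python
import math

def stopping_profile(v0, t0, a):
    """Deterministic model: constant speed v0 during the
    pre-deceleration time t0, then constant, immediate deceleration a."""
    s0 = v0 * t0            # pre-deceleration distance
    t1 = v0 / a             # deceleration time
    s1 = v0 * t1 / 2.0      # braking distance (= v0**2 / (2 * a))
    return s0, t1, s1, s0 + s1

def collision_probability(sigma0, sigma1, s_hat, s_mean, s_std, n=20000):
    """Probabilistic estimate sketched above (independence assumed):
    P_S = P(S0 + S1 > s_hat) for Rayleigh-distributed distances with
    modes sigma0 and sigma1, via trapezoidal integration of the
    convolution; P = normal tail probability beyond s_hat, via erf."""
    h = s_hat / n
    acc = 0.0
    for i in range(n + 1):
        s = i * h
        f0 = (s / sigma0 ** 2) * math.exp(-s ** 2 / (2.0 * sigma0 ** 2))
        cdf1 = 1.0 - math.exp(-(s_hat - s) ** 2 / (2.0 * sigma1 ** 2))
        acc += (0.5 if i in (0, n) else 1.0) * f0 * cdf1
    p_s = 1.0 - acc * h
    p = 0.5 * (1.0 - math.erf((s_hat - s_mean) / (math.sqrt(2.0) * s_std)))
    return p_s, p, p_s * p

# Inputs from the worked example above.
s0, t1, s1, s_total = stopping_profile(v0=13.41, t0=3.0, a=4.0)
p_s, p, p_c = collision_probability(sigma0=40.23, sigma1=44.97,
                                    s_hat=85.2, s_mean=66.5, s_std=13.5)
print(round(s_total, 1), round(p_s, 3), round(p, 3), round(p_c, 3))
```

With the paper's inputs this yields S ≈ 62.7 m, PS ≈ 0.68, P ≈ 0.083 and PC ≈ 0.057, agreeing with the values above up to rounding and integration error.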
4 Conclusion and Future Work

This example problem, and especially its probabilistic analysis, is intended to show how such an approach can provide quantitative analyses of the more general issues of automated driving with human drivers in the loop. While this example is confined to a special situation, it already points to the potential of such an approach for mathematically founded analyses of problems that urgently need to be solved for a safe adoption of automated driving (below Level 5), which is currently widely judged as important for both industry and society. Hence, PAM may provide an effective means for reducing vehicular casualties. We conjecture that it will be able to improve dramatically the state of the art in understanding and accounting for human performance in various vehicular missions and off-normal situations, and in particular in the pressing issue of analyzing the role of the time span for the human to take over from the system. Future work should include implementation of the suggested methodology, as well as the development of practical recommendations, considering that the probability of an accident, although never zero, can and should be predicted, adjusted to a particular vehicle, autopilot, driver and environment, and made sufficiently low. This work should also include considerable effort, both theoretical (analytical and computer-aided) and experimental, on the addressed problem and on the many other problems associated with the upcoming era of automated driving.
References
1. Suhir, E.: Human-in-the-Loop: Probabilistic Modeling of an Aerospace Mission Outcome. CRC Press, Boca Raton (2018)
2. Suhir, E.: Assessment of the required human capacity factor (HCF) using flight simulator as an appropriate accelerated test vehicle. IJHFMS 6(1), 71–74 (2019)
3. Suhir, E.: Probabilistic risk analysis (PRA) in aerospace human-in-the-loop (HITL) tasks. Plenary lecture, IHSI 2019, Biarritz, France, 11–13 September 2019
4. Suhir, E.: Helicopter-landing-ship: undercarriage strength and the role of the human factor. ASME OMAE J. 132(1), 011603 (2009)
5. Suhir, E.: Analytical modeling occupies a special place in the modeling effort. Short Comm. J. Phys. Math. 7(1), 2 (2016)
6. Luckender, C., Rathmair, M., Kaindl, H.: Investigating and coordinating safety-critical feature interactions in automotive systems using simulation. In: 50th Hawaii International Conference on System Sciences, pp. 6151–6160 (2017)
7. Rathmair, M., Luckender, C., Kaindl, H., Radojicic, C.: Semi-symbolic simulation and analysis of deviation propagation of feature coordination in cyber-physical systems (best paper award). In: 51st Hawaii International Conference on System Sciences (HICSS 2018), pp. 5655–5664 (2018)
8. Rathmair, M., Luckender, C., Kaindl, H.: Minimalist qualitative models for model checking cyber-physical feature coordination. In: 23rd Asia-Pacific Software Engineering Conference (APSEC 2016). IEEE Press (2016)
9. Luckender, C., Kaindl, H.: Systematic top-down design of cyber-physical models with integrated validation and formal verification. In: 40th International Conference on Software Engineering: Companion. ACM/IEEE (2018)
10. Suhir, E.: Probabilistic design for reliability of electronic materials, assemblies, packages and systems: attributes, challenges, pitfalls. Plenary lecture. In: MMCTSE 2017, Murray Edwards College, Cambridge, UK, 25 February 2017
11. Suhir, E.: Applied Probability for Engineers and Scientists. McGraw Hill, New York (1997)
12. Suhir, E.: Adequate trust, human-capacity-factor, probability-distribution-function of human non-failure and its entropy. IJHFMS 6(1), 75–83 (2019)
13. Gerstenmaier, W.H.: NASA Headquarters, private communication, September 2019
14. ERTRAC 17: Automated Driving Roadmap, Version 7.0, May 2017
15. Kudarauskas, N.: Analysis of emergency braking of a vehicle. Transport 22(3), 154–159 (2007)
16. Sirkin, D.M.: Stanford University, private communication, September 2019
Trust Provisioning in the Transport Infrastructure

Scott Cadzow
C3L for Cryptalabs, London, England, UK
[email protected]
Abstract. In any security process, trust anchors and trust roots have a complex interaction, and applying them to the transport domain, where many different forms of trust relationship have to work together, is going to be one of the greatest problems to surmount. ITS and CAV are data-centric information systems in which the provenance of the data in the system is key to the system's success. Provenance and integrity are underpinnings of trust, but in CAV and ITS there is no a priori knowledge of the source and value of data. This means that the provenance of a signal or message from the infrastructure is unlikely to be tested in advance, and the verification of trust needs to be applied on demand for each message. Determining trust distribution in the transport infrastructure is one of the biggest unanswered questions regarding the viability of CAV.

Keywords: Integrity · Trust · Provenance · Connected and Autonomous Vehicles (CAV) · Intelligent Transport Systems (ITS) · Trust networks · Smart infrastructure
1 Introduction

In the days before we had any concept of Intelligent Transport Systems (ITS), there was a simple model of trust in most transport networks. The roads would be broadly serviceable for the traffic they were expected to handle. At the source and destination of any journey there would be somewhere to park and load or unload. If the distances were beyond the reach of a single fueling, you could reasonably count on being able to refuel without major changes of route. If restrictions were placed on your journey by speed limits, or by the barring of certain routes, you saw the notifications and acted on them in good faith. The trust model was implicit, reinforced by education and experience over the many hundreds of years that people have needed to move themselves and goods from A to B. Roads, vehicles and all the infrastructure of transport were marked by steady evolution between revolutionary leaps. Our concept of a connected and autonomous or automated vehicle belonged to the pages of science fiction novels, but instinctively it is also an evolution of our existing transport modes. However, this evolution has more stakeholders than ever before, and there is a danger that, without taking a holistic view of how we engage with, and trust, the new environment, there may be an existential threat to this particular evolutionary step.

© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 15–19, 2020. https://doi.org/10.1007/978-3-030-39512-4_3
A key assertion is that our understanding of transport will move from a mostly physical infrastructure to a pair of parallel, interworking infrastructures: the existing physical infrastructure, and an additional data infrastructure that enables ITS and the associated world of Connected and Autonomous Vehicles (CAV).
2 The Role of Trust

If we take one of the more common definitions of trust, "firm belief in the reliability, truth, or ability of someone or something", as it applies to transport and transport infrastructure, and consider the complexity of that domain, then it should be clear that the question of whom to trust is a difficult one to answer. Trust is not absolute, it is often not rational, and it is often not transferable, but it is fundamental to making decisions based on data received. In any security process, trust anchors and trust roots have a complex interaction and, when in place, serve to answer, at least in part, the question "who do I trust?". A major concern for security in the transport domain is that many different forms of trust relationship have to work together. Managing trust is one of the more complex of the many security and privacy issues that have to be considered in moving to an autonomous-vehicle world that intersects with the smart city. Smart cities, smart highways and smart junctions, in addition to smart vehicles, will need to exchange data. Every player will need to know, with a very high level of assurance, whether the data they are receiving and are expected to act on can be trusted. This requires verifying the integrity, or more completely the provenance, of data in the system. The methods available to achieve this kind of active trust verification are complex to install and to manage. One of the major problems is that there is no a priori knowledge of the forms of interaction a vehicle will have with the infrastructure. This means that the provenance of a signal or message from the infrastructure is unlikely to be tested in advance, and the normal cryptographic means of verifying trust need to be supplied on the fly. How such trust is distributed in the transport infrastructure is one of the biggest unanswered questions regarding the viability of large-scale CAV.
A byword of security is to anchor things in hardware and then spread roots of trust through the system. The grand engineering question is where these hardware trust anchors fit into the built infrastructure. So what are the engineering challenges in building a strongly anchored, but highly mutable, security system that has to serve tens or even hundreds of millions of users exchanging billions of messages a year? The set of stakeholders in the trusted CAV and smart-entity system that enables it is considerable, and includes, amongst others, the civil engineering industry, the vehicle manufacturing industry, government, public safety organizations with all of their implied blue-light groups, and the ICT industry, all of whom have a part in making a success of the smart transport infrastructure. If this is addressed, and trust is endemic to the system, then the promise of an intelligent, integrated and society-supporting transport system can begin to be achieved. What, then, can evolving from our vague notions of ITS towards such a system achieve?
Conventional trust hierarchies such as that shown in Fig. 1 have been enthusiastically adopted as the norm for the management of cryptographic keys but aren’t all that accurate when mapped to the real-life trust structures of society.
Root of the trust network
├─ First trusted level branch A: Alice, Bob
└─ First trusted level branch B: Charles

Fig. 1. A conventional view of trust hierarchies
The trust architecture of Fig. 1 suggests that Alice and Bob trust each other through a shared association with the node above them, but have to go back to the root in order to trust Charles. Whilst this fits the cryptographic models of Public Key Infrastructures, it tends to fall down in real scenarios. Thus it is essential to consider the characteristics of trust and to model from there what we need in ITS, CAV and the smart city (see also the contributions to standardization made by Cadzow [3–5]):
• Trust is highly dynamic and contextual, and may be described in assurance levels based on specific measures that identify when and how a relationship or transaction can be relied upon.
• Trust measures can combine a variety of assurance elements, including identity, attribution, attestation and non-repudiation.
• An entity A has no need for a direct trust relationship with another entity B if B's operation has no direct impact on A. It may be that entity C is affected by entity B's operations, and that entity A relies on entity C, but this does not affect entity A directly, and therefore the trust relationships can be considered separate.
• The core requirement related to trust in any ICT system is the identification of the "root of trust".
• For each element protected within a trust relationship it is necessary to identify both the root of trust and the path from the protected element to the root of trust. It is strongly recommended (non-negotiable in some jurisdictions) that the root of trust is a trusted hardware module (able, for example, to store keys in tamper-resistant hardware).
• Having a secured communications channel with another entity is never sufficient reason to trust that entity, even if you trust the underlying security primitives on which that channel is based.
• Trust is not a binary operation: there may be various levels of trust that one entity has for another.
• Trust may be relative, not absolute: Entity A may trust Entity C more than Entity B, without trusting either absolutely.
• Trust is rarely symmetric: Entity A may trust Entity B completely, whereas the trust that B has for A may be very low. This does not always matter: a schoolchild may trust a schoolteacher, for instance, without any requirement for that trust to be reciprocated.
• One of the axes of trust is time, and the trust relationship between two entities may be highly dynamic: just because a certain level of trust was established at time T, it does not mean that that level will be maintained at time T + s, as it can both increase and decrease.
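The chain-walking that Fig. 1 implies can be sketched as a toy model; the data structure and function names here are invented for illustration. Two entities establish trust only where their chains meet at a common trusted ancestor: a shared branch suffices for Alice and Bob, whereas Alice and Charles must go back to the root.

```python
# Toy model of the Fig. 1 hierarchy: each entity knows only its parent;
# trust is established by walking both chains up to a shared ancestor.
PARENTS = {
    "Alice":    "branch_A",
    "Bob":      "branch_A",
    "Charles":  "branch_B",
    "branch_A": "root",
    "branch_B": "root",
    "root":     None,
}

def chain_to_root(entity):
    """Return the list of nodes from the entity up to the root."""
    chain = []
    while entity is not None:
        chain.append(entity)
        entity = PARENTS[entity]
    return chain

def common_trust_anchor(a, b):
    """First node appearing in both chains (the nearest shared ancestor)."""
    ancestors_of_b = set(chain_to_root(b))
    for node in chain_to_root(a):
        if node in ancestors_of_b:
            return node
    return None

print(common_trust_anchor("Alice", "Bob"))      # a shared branch suffices
print(common_trust_anchor("Alice", "Charles"))  # must go back to the root
```

The point of the model is exactly the weakness noted above: the further apart two entities sit in the hierarchy, the further back toward the root their trust decision must reach.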
3 Allocation of Trust Roots

Roots and anchors are what we need to be concerned about in the transport network's data infrastructure. Every stakeholder in the trust network has to be anchored; the anchor is akin to the underpinning of a building and has to be in hardware. If the data is cryptographically protected, it is essential that the keys that enable that protection are themselves firmly rooted in hardware. In the specific domain of Co-operative ITS, in which equipped vehicles send frequent updates of their location to enable receiving entities to calculate such things as collision likelihood, but also to enable situational awareness (i.e., knowing where vehicles are with respect to the receiver), it is essential that the provenance of the messages containing this data is strongly asserted. The security model identified in ETSI's TS 102 940 [1] and TS 102 941 [2] requires that each message is digitally signed by a pseudonymous identity that has been asserted as valid by a trusted authority. The unstated requirement is that key pairs are generated in a trusted environment and at a suitably high rate; this strongly suggests that only specialized cryptographic hardware will fit, and this offers the concrete root of trust. Similarly, as each key pair requires attachment to a third-party authority, this trusted authority (and there may be many, depending on the assertions contained in each message) has to have a concrete attachment to a trust anchor. A first step in the allocation of trust anchors is recognition of each stakeholder's role in a trusted transport infrastructure. From this, every stakeholder has to be ready to document the form of assertions they make, or rely on, and to identify how they would enable proof of that assertion. In this regard, proof includes provenance of the assertion. In everyday terms, the aim is to give the same level of confidence in ephemeral data that there is in something tangible.
Thus, being able to validate that data from, say, a Mercedes really comes from a Mercedes, and thereby adopts or inherits the trust associated with that marque, is key to giving credence to the data. If the data is intended to alter behavior, then this provenance is essential. Hence every stakeholder has to be ready and able to build such proof into the data they offer.
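The shape of this per-message, on-demand provenance check can be illustrated with a toy. Real C-ITS security (per the ETSI specifications cited above) uses asymmetric signatures under pseudonymous certificates; here a shared-key HMAC is merely a stand-in showing that only data produced with the anchored key material verifies, and the key name and message format are invented for the example.

```python
import hashlib
import hmac

# Toy stand-in for message provenance checking. In a real deployment the
# key material would live in a tamper-resistant hardware trust anchor and
# the tag would be an asymmetric signature, not an HMAC.
ANCHOR_KEY = b"key-material-held-in-tamper-resistant-hardware"  # hypothetical

def sign_message(payload: bytes) -> bytes:
    """Tag produced by the (simulated) hardware trust anchor."""
    return hmac.new(ANCHOR_KEY, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes) -> bool:
    """Receiver-side check: accept only messages whose provenance
    traces back to the anchored key material."""
    return hmac.compare_digest(sign_message(payload), tag)

msg = b"CAM: vehicle 42 at 48.2082N 16.3738E, speed 13.4 m/s"  # invented
tag = sign_message(msg)
print(verify_message(msg, tag))                 # True: provenance holds
print(verify_message(b"tampered " + msg, tag))  # False: integrity broken
```

The design point carried over from the paper is that verification happens per message and on demand, with no prior relationship between sender and receiver beyond their attachment to the anchored trust infrastructure.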
4 Summary and Conclusion

Every stakeholder in the transport system that has to be trusted when making an assertion that has to be acted on must be in a position to verify the provenance of that assertion or claim. An assertion proven without a tangible hardware anchor may be less trusted than one that is strongly anchored and rooted in hardware. In moving towards the data-centric virtualized infrastructure, the 'concrete' of the built infrastructure has to move too. This offers a path towards trust in the infrastructure by providing anchors for the trust relationships, and strong roots building from those anchors. This is still an evolving area, and realizing the benefits of this particular evolution of our transport world will need considerable investment in time and intellectual effort. This paper has barely scratched the surface, other than to assert that trust has to be anchored in hardware, with the roots drawn from that hardware to the relying parties. The remaining challenges are considerable if the future data-centric transport infrastructure is to have the same strength and resilience as the centuries-old built infrastructure we have relied on for so long.

Acknowledgments. The author thanks CryptaLabs staff, in particular Joe Luong, Justin Roberts and Alison Roberts, for support in review and for discussions that have inspired this paper.
References
1. ETSI TS 102 940: Intelligent Transport Systems (ITS); Security; ITS communications security architecture and security management
2. ETSI TS 102 941: Intelligent Transport Systems (ITS); Security; Trust and Privacy Management
3. Cadzow, S.: Security and Privacy Issues Arising from ITS Integration with Smart Infrastructure. ETSI ITS Workshop (2019)
4. Cadzow, S.: Ethics in AI. ETSI Security Workshop (2019)
5. Cadzow, A.: Are we designing cybersecurity to protect people from malicious actors? In: Proceedings of the 1st International Conference on Human Systems Engineering and Design (IHSED2018): Future Trends and Applications, CHU-Université de Reims Champagne-Ardenne, France, 25–27 October 2018
Drivers' Interaction with, and Perception Toward Semi-autonomous Vehicles in Naturalistic Settings

Jisun Kim1, Kirsten Revell1, Pat Langdon2, Mike Bradley2, Ioannis Politis2, Simon Thompson3, Lee Skrypchuk3, Jim O'Donoghue3, Joy Richardson1, Jed Clark1, Aaron Roberts1, Alex Mouzakitis3, and Neville A. Stanton1

1 University of Southampton, Southampton, UK
{J.Kim,K.M.Revell,Joy.Richardson,jrc1g15,apr1c13,N.Stanton}@soton.ac.uk
2 University of Cambridge, Cambridge, UK
[email protected], [email protected], [email protected]
3 Jaguar Land Rover, Coventry, UK
{sthom261,lskrypch,jodonog1,amouzak1}@jaguarlandrover.com
Abstract. Partially automated vehicles are in actual use, and vehicles with higher levels of automation are under development. Given that highly automated vehicles (AVs) still require drivers' intervention in certain conditions, effective collaboration between the driver and vehicle seems essential for driving safety. Having a clear understanding of drivers' interactions with the current technologies is key to enhancing them. Additionally, comprehending drivers' perceptions toward AVs investigated in naturalistic settings seems important. This study particularly focuses on usability, workload, and acceptance of AVs as they are key indicators of drivers' perceptions. Eight drivers conducted manual and automated driving in urban and highway environments. Their interactions and verbal descriptions were recorded, and perceptions were measured after each drive. Instances that may have negatively affected the perceptions were identified. The results showed that workload was higher, and usability and acceptance were lower, in automated driving in general. The findings show what should be considered to improve driver-autonomous vehicle interaction, in turn helping to reduce workload and enhance usability and acceptance.

Keywords: Autonomous vehicles · Human-machine interaction · Human factors · Workload · Usability · Acceptance
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 20–26, 2020. https://doi.org/10.1007/978-3-030-39512-4_4

1 Introduction

Semi-autonomous vehicles are in actual use, and highly automated vehicles are actively under development. This has been motivated by the major benefit of automated driving technologies, namely that they are not susceptible to human error [1]. Thus, interest in the technology has increased, but concerns have also been expressed regarding system failure, overreliance, and insufficient situation awareness [2]. The main concerns centre on issues of driver-vehicle interaction, where undesirable consequences may result from unsatisfactory interaction between the operator and the machine [3]. However, highly automated vehicles still require drivers' intervention in certain conditions [4]; thus creating effective HMI will remain challenging [5]. Therefore, investigation of driver-vehicle interaction with existing technologies in actual road conditions would benefit further enhancement. Additionally, it would be beneficial to improve our understanding of drivers' perceptions toward AVs as influenced by the interaction. Acceptance of AVs can be explained by workload and usability of the system [6–8]. Whilst simulations have widely been selected as a method for investigating driver-vehicle interaction [9], they might not reflect real-world situations [10], and thus the drivers' responses may be limited. A number of studies conducted on public roads are documented in the academic literature. Revell et al. [11] investigate driver-vehicle interaction to understand how drivers interact with designs of semi-autonomous vehicles in a naturalistic setting. Wei et al. [12] introduce an autonomous vehicle research platform widely tested on public roads, capable of everyday driving. Wang et al. [13] develop a platform that allows testing driver-vehicle interaction on the road without obtaining an actual autonomous vehicle. Endsley [10] runs a naturalistic driving study of the automation features of the Tesla Model S. Her own 6-month driving experience is recorded, focusing on situation awareness and problems of automation in realistic conditions. In summary, the need to investigate driver-autonomous vehicle interaction in real-world settings is clear, and relevant attempts have been made.
However, few discussions have been undertaken to interpret the dynamics between the interaction and subjective evaluations of workload, usability, and acceptance. Therefore, this study aims to investigate driver-autonomous vehicle interaction in naturalistic settings in light of perceived workload, usability, and acceptance. This study compares drivers' perceptions in automated and manual driving, and uses video and verbal protocol data to understand in greater depth how interaction incidents explain the results found. Insights into what needs to be considered to improve driver-vehicle interaction are discussed.
2 Method

The study was designed to enable exploration and comparison of driver-autonomous vehicle interaction and perceptions in naturalistic settings. Participants drove in urban and highway environments, in manual and automated modes, in a 2016 Tesla Model S. It was equipped with Adaptive Cruise Control (ACC), Steering Assist (S/A), Stop & Go, and Auto Lane Change. The routes were located on the M40 near Coventry and on public roads in the Coventry area. The experimental procedure was as follows. The participants reviewed an information sheet and a consent form, and signed the form to indicate their agreement to participate. They were given training sessions on the Jaguar Land Rover test track. In the experiments, they drove on the planned routes and were asked to engage autonomy features as much as they could, but only when it was safe to do so. For safety reasons, a safety driver was ready to take control whenever circumstances required it. During the drives, a think-aloud protocol
was used to capture the drivers' thoughts and actions. Video and audio data were collected to record the drivers' actions, verbal protocols, the vehicle's interfaces, and the road ahead. After each drive, participants completed questionnaires and had a break. The NASA Task Load Index [14], the System Usability Scale [15], and the acceptance of advanced transport telematics scale [16] were used. Mean scores for the first scale, and overall scores for the last two scales, calculated according to their scoring methods, were used for analysis. Box plots were generated for descriptive analysis. For interpretation, the median was mainly used, considering the upper and lower hinges and extremes. Video recordings were reviewed to discern instances in which the perceptions may have been negatively affected. For this analysis, the cases selected were those showing the biggest differences in workload, usability, and acceptance between manual and automated driving compared to other participants, so that the instances during those drives warranted reviewing. Eight healthy adults holding a full UK driving licence (1 female, 7 males), aged between 36 and 59, participated. They included 2 drivers with more experience of automated driving and 6 novice drivers of AVs.
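As a concrete illustration of the questionnaire scoring mentioned above, the standard SUS computation (odd items contribute the score minus 1, even items contribute 5 minus the score, and the sum is multiplied by 2.5 for a 0–100 scale) can be sketched as follows; the example responses are invented, not data from this study:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, per Brooke's standard scoring [15]."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered negatively.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses for one drive (not data from this study).
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible -> 100.0
print(sus_score([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]))  # neutral -> 50.0
```

Each participant's per-drive responses would be scored this way before the box-plot comparison of conditions.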
3 Results

3.1 Comparisons Between Automated and Manual Driving
Considering the general tendency, workload was scored higher, and usability and acceptance lower, for automated driving than for manual driving in both highway and urban environments (see Figs. 1 and 2).
Fig. 1. Workload (Left), and Usability (Right) scores for driving in each condition
Fig. 2. Acceptance (usefulness and satisfying) scores for driving in each condition
3.2 Instances Linked to Higher Workload, Lower Satisfaction, and Acceptance in Automated Driving
In order to identify how workload, usability, and acceptance may have been negatively affected in automated driving, a qualitative analysis was carried out. The cases of automated driving in highway and urban environments will be discussed. First, participant 4's case presented the biggest differences in workload and usability, and the second biggest difference in the satisfying score (part of acceptance). The main issues were: (1) the driver indicated when intending to adjust the ACC speed; (2) the vehicle could not detect the car behind in the targeted lane, yet approached that lane when Auto Lane Change was on, and this required manual steering input; (3) the vehicle surged to reach the ACC speed without the driver knowing the reason. Second, participant 6's case showed the biggest differences in workload and usability, and the second biggest difference in acceptance. The main issues were: (1) the vehicle did not seem to position itself in the middle of the lane when only one side of the lane marking, or the vehicle in front, was used for the S/A; instances of this kind were recorded 5 times, and the driver gave manual steering input 4 times, including 1 sudden input; (2) the vehicle did not stop at the traffic light because the car did not react to the stationary car in front.
4 Discussion

This section discusses the comparison between manual and autonomous driving, and supporting explanations for the higher workload, lower usability, and lower acceptance observed in automated driving. Differences between manual and automated driving may have resulted from drivers' frequent monitoring of the road ahead and of the system; active projections; and interventions. The processes included checking the vehicle's automation status and behaviour: to see whether the vehicle detected the car in front and the lane markings; whether the vehicle reacted accordingly; and, if not, to intervene. This tended to increase mental demand [17]. The heightened workload in automated driving does not seem to be consistent with most of the previous findings, in which workload was lower in automated mode [9].
The discrepancy may be explained by the fact that the majority of those studies adopted simulation, which may not have reflected real-world situations [10]. Further, in this study, the drivers were asked to pay attention to the road conditions at all times, and to keep their hands close to the wheel to intervene when needed. Hence, workload was higher, and usability and acceptance were lower, than in manual driving, which did not require such monitoring or immediate intervention when the automation reached its limits. Instances that could have had a negative impact on the measured perceptions are summarised as follows. First, the positions of the ACC and indicator stalks seemed to cause confusion. They were located close to each other on the left-hand side of the wheel, and this layout seemed to require careful selection. However, the drivers tended to get used to it as time passed. Second, the failure to detect the vehicle behind led to the driver immediately adjusting the steering angle so as not to depart from the current lane. The situation required monitoring of the car behind, and of the host vehicle's behaviour, before taking control. Third, the acceleration of the vehicle was normal behaviour, but the driver could not understand the intention behind it. Hence, this may have been perceived as an automation surprise. Relevant knowledge could be gained through appropriate training to develop a more accurate mental model of the system [18]. Based on the observations of the first and third cases, more experienced drivers, more familiar with usage of the autonomy features and having a more accurate mental model, would have perceived the instances as less demanding. Fourth, the S/A did not seem to provide stable support; thus instances were found in which the car deviated from the centre of the lane.
This may have resulted from the manufacturer's intention to allow slightly more lateral movement to replicate human driving (S Thompson 2019, personal communication), but in this experiment the instances were hardly perceived as positive experiences. Rather, the situations seemed to require the driver's intervention. Fifth, the driver needed to remember the limitations of the automation: the vehicle detected the stationary car in front, yet did not slow down, as the host vehicle did not recognise the stationary car as significant information for the ACC. As seen in the second, fourth, and fifth cases, the technical instability led to circumstances in which the driver had to keep monitoring. If the vehicle did not steer accordingly, the driver had to take back control. As previously described, these instances could increase workload and lower usability and acceptance.
5 Conclusion

This study was conducted to investigate driver-autonomous vehicle interaction through assessment of the drivers' workload, usability, and acceptance in naturalistic settings. The experiment was run in manual and automated modes, in highway and urban environments. The results showed that the drivers' workload was higher, and usability and acceptance were lower, in automated driving. Active monitoring of the road, the system, and the vehicle's behaviour seemed to increase workload. Thus, the drivers did not seem to perceive the driving as easier, simpler, or more effective than manual driving. A number of instances were identified which may have negatively affected the perceptions. They were the cases of novice drivers of AVs. They were summarised as: the layout of the stalks, which required cautious selection and could cause mistakes and confusion; insufficient understanding of the autonomous function, which led to an automation surprise; and technical instability, which required the drivers' intervention. Improvement of interface design to enable more intuitive usage, appropriate driver training, and advancement in technology to reduce the occurrence of incorrect actions by AVs seem necessary to help reduce workload and improve usability and acceptance.

Acknowledgments. This work was supported by Jaguar Land Rover and the UK-EPSRC grant EP/N011899/1 as part of the jointly funded Towards Autonomy: Smart and Connected Control (TASCC) Programme. The authors thank the funders for their support.
References
1. Van Brummelen, J., O'Brien, M., Gruyer, D., Najjaran, H.: Autonomous vehicle perception: the technology of today and tomorrow. Transp. Res. Part C: Emerg. Technol. 89, 384–406 (2018)
2. Aria, E., Olstam, J., Schwietering, C.: Investigation of automated vehicle effects on driver's behavior and traffic performance. Transp. Res. Proc. 15, 761–770 (2016)
3. Degani, A., Heymann, M.: Formal verification of human-automation interaction. Hum. Factors 44(1), 28–43 (2002)
4. SAE International: SAE International Releases Updated Visual Chart for Its "Levels of Driving Automation" Standard for Self-Driving Vehicles (2018)
5. Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., Nass, C.: Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. (IJIDeM) 9(4), 269–275 (2015)
6. Nees, M.A.: Acceptance of self-driving cars: an examination of idealized versus realistic portrayals with a self-driving car acceptance scale. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, p. 1449. SAGE Publications, Los Angeles (2016)
7. Cummings, M., Clare, A.: Holistic modelling for human-autonomous system interaction. Theor. Issues Ergon. Sci. 16(3), 214–231 (2015)
8. Jordan, P.W.: An Introduction to Usability. CRC Press, Boca Raton (1998)
9. De Winter, J.C., Happee, R., Martens, M.H., Stanton, N.A.: Effects of adaptive cruise control and highly automated driving on workload and situation awareness: a review of the empirical evidence. Transp. Res. Part F: Traffic Psychol. Behav. 27, 196–217 (2014)
10. Endsley, M.R.: Autonomous driving systems: a preliminary naturalistic study of the Tesla Model S. J. Cogn. Eng. Decis. Making 11(3), 225–238 (2017)
11. Revell, K., Richardson, J., Langdon, P., Bradley, M., Politis, I., Thompson, S., Skrypchuk, L., O'Donoghue, J., Mouzakitis, A., Stanton, N.: "That was scary…" exploring driver-autonomous vehicle interaction using the perceptual cycle model
12. Wei, J., Snider, J.M., Kim, J., Dolan, J.M., Rajkumar, R., Litkouhi, B.: Towards a viable autonomous driving research platform. In: 2013 IEEE Intelligent Vehicles Symposium (IV), p. 763. IEEE (2013)
13. Wang, P., Sibi, S., Mok, B., Ju, W.: Marionette: enabling on-road wizard-of-oz autonomous driving studies. In: 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 234. IEEE (2017)
14. Hart, S.G.: NASA-task load index (NASA-TLX); 20 years later. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, p. 904. SAGE Publications, Los Angeles (2006)
15. Brooke, J.: SUS: a quick and dirty usability scale. Usability Eval. Ind. 189(194), 4–7 (1996)
16. Van der Laan, J.D., Heino, A., De Waard, D.: A simple procedure for the assessment of acceptance of advanced transport telematics. Transp. Res. Part C: Emerg. Technol. 5(1), 1–10 (1997)
17. Stapel, J., Mullakkal-Babu, F.A., Happee, R.: Automated driving reduces perceived workload, but monitoring causes higher cognitive load than manual driving. Transp. Res. Part F: Traffic Psychol. Behav. 60, 590–605 (2019)
18. Endsley, M.R.: From here to autonomy: lessons learned from human–automation research. Hum. Factors 59(1), 5–27 (2017)
Are Autonomous Vehicles the Solution to Drowsy Driving?

Daniel Grunstein1 and Ron Grunstein2

1 Kellogg College, University of Oxford, Oxford, UK
[email protected]
2 Woolcock Institute of Medical Research, University of Sydney, Sydney, Australia
Abstract. The estimated cost of drowsy driving exceeds $100 billion annually in the US alone. Initial Autonomous Vehicle (AV) literature discusses an opportunity to partially salvage the financial and societal burdens of the 93% of accidents that are caused by human error. It is often assumed that AVs are capable of continuous safe operation during cases of fall-asleep driving. However, the Dynamic Driving Task (DDT), rather than being suddenly eliminated, is incrementally simplified over time, and fall-asleep episodes invariably increase alongside. This paper suggests some potential consequences for sleep-related road accidents in the context of AVs and identifies potential future areas of collaborative research between social and sleep scientists. Its concluding hypothesis is that whilst full autonomy is likely to decrease the risk of fatigue-induced accidents, interim incremental innovations may increase this risk given the lethal combination of passive sleepiness and driver over-confidence.

Keywords: Autonomous vehicles · Drowsy driving · Road accidents
1 Introduction

It is estimated that approximately 40% of fatal crashes are primarily caused by substance abuse, distraction and/or fatigue [1]. Full vehicular autonomy presents an opportunity to offset or possibly eliminate the medical, economic, social and infrastructural issues resulting from drowsy driving, for which there is startling evidence of increasing incidence and severity. Should AVs be capable of operation without human intervention, then the drowsy driver becomes a drowsy passenger who can sleep safely. This would not only reduce vehicle crashes attributed to fall-asleep and micro-sleep episodes and impaired judgement during drowsiness, but also see substantial awake commuting hours substituted for rest hours, leading to a less sleep-deprived society. Consideration of this scenario does not imply exact knowledge of the likelihood of, or a timeline to, reaching this goal. Trying to predict the nature, timing and adoption of new inventions is an extremely uncertain science. Thus, this paper makes no prediction as to the AV technology timeline. Rather, it works with facts on the ground, namely that the most advanced vehicle currently being tested is semi-autonomous, not fully autonomous. Therefore, research concerning implications for drowsy drivers should prioritise the semi-autonomous context over fully autonomous scenarios.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 27–33, 2020. https://doi.org/10.1007/978-3-030-39512-4_5
2 The Taxonomy of Autonomous Vehicles

Many adjectives are used to describe the extent of human operation of motor vehicles, e.g. automatic, autonomic, automated, cooperative, and autonomous. Automatic transmission describes a vehicle which changes gears automatically without the need for a gearbox and clutch (i.e. manual transmission). Autonomic is used to describe an autonomous feature in a vehicle that itself may not necessarily be autonomous; usually the vehicle is classified as 'automated' but could also simply be automatic/manual. It is often used in patents for specific component technologies, e.g. the autonomic vehicle safety system [2, 3]. Additionally, the Society of Automotive Engineers (SAE) International's J3016, the gold standard for vehicular autonomy classification, holds that if self-driving depends on communication with outside entities, like a control room, the vehicle should be classified as cooperative instead of autonomous [4]. Autonomous systems are capable of making decisions independently of human intervention, while automated systems require intervention in situations for which they have not been programmed [5]. In its original January 2014 publication, the SAE delineated six levels of automotive autonomy; according to this classification, vehicles are autonomous at Level 4 and fully autonomous at Level 5.
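The six J3016 levels referred to above can be summarised in code. The level names below follow the standard SAE classification, and the `is_autonomous` threshold simply encodes this paper's statement that vehicles are autonomous at Level 4 and above:

```python
# Summary of the six SAE J3016 levels of driving automation.
SAE_LEVELS = {
    0: "No Automation",
    1: "Driver Assistance",
    2: "Partial Automation",
    3: "Conditional Automation",
    4: "High Automation",
    5: "Full Automation",
}

def is_autonomous(level: int) -> bool:
    """Per the classification used in this paper: autonomous at
    Level 4, fully autonomous at Level 5."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level >= 4

print(SAE_LEVELS[4], is_autonomous(4))  # High Automation True
print(SAE_LEVELS[2], is_autonomous(2))  # Partial Automation False
```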
3 What Is Drowsy Driving?

Drowsy driving means controlling a motor vehicle under conditions of increasing sleepiness or an incipient fall-asleep episode. Sleepiness may be the result of acute or chronic reduction in sleep duration in individuals without sleep disorders, driven by both homeostatic and circadian factors [6]. Generally, the longer a person has gone without sleep, the sleepier they are (the homeostatic process), but this relationship is also affected by the presence or absence of a circadian alerting drive, making night-time driving more vulnerable to drowsiness. Other diverse factors related to a higher incidence of drowsy driving include the duration of the drive, the age of the individual (younger drivers are more affected), co-existing use of central nervous system depressants such as alcohol, and the presence of sleep-fragmenting disorders, including obstructive sleep apnea (OSA) [7]. Many medical conditions, both primary sleep disorders (such as OSA and insomnia) and other chronic medical conditions that impact sleep quality (such as lung disease, asthma, renal disease, depression, and anxiety), can lead to sleep-related cognitive decrements that increase crash risk [8]. Sleepiness can be increased by monotony, with drowsy driving more common when there is less environmental stimulation. The exact impact of drowsy driving is unknown. This depends on crash reports from police, who often have limited expertise in drowsy driving or assign causes erroneously (e.g. a crash may be blamed on excessive speed when the driver may not have slept for 24 h and exhibited increased risk-taking and poor judgement). The annual economic cost of drowsy driving has been estimated at $109 billion in the USA [9].
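The interplay of the homeostatic and circadian processes described above is often formalised as the classical two-process model of sleep regulation (a standard model from the sleep literature, not this paper). A toy sketch, with made-up time constants and amplitudes, illustrates why alertness dips at night even for the same number of hours awake:

```python
import math

def homeostatic_pressure(hours_awake, tau=18.0):
    """Process S: sleep pressure rises toward saturation with time awake.
    tau is an illustrative time constant, not an empirical value."""
    return 1.0 - math.exp(-hours_awake / tau)

def circadian_drive(clock_hour, peak_hour=18.0):
    """Process C: a sinusoidal alerting drive peaking in the early evening.
    Parameters are illustrative only."""
    return 0.5 * math.cos(2 * math.pi * (clock_hour - peak_hour) / 24.0)

def sleepiness(hours_awake, clock_hour):
    """Toy sleepiness index: homeostatic pressure minus circadian alerting."""
    return homeostatic_pressure(hours_awake) - circadian_drive(clock_hour)

# Same 16 hours awake, but driving at 18:00 versus 04:00:
print(round(sleepiness(16, 18.0), 2))  # evening: alerting drive offsets pressure
print(round(sleepiness(16, 4.0), 2))   # pre-dawn: circadian trough adds to pressure
```

The pre-dawn value is far larger, matching the observation above that night-time driving is more vulnerable to drowsiness.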
Are Autonomous Vehicles the Solution to Drowsy Driving?
29
4 Drowsy Driving and Autonomous Vehicles

Research addressing drowsy driving in the context of AVs is understandably limited, given that the technology is only at pilot stage and is not expected to reach the market for several more years. The gradual 'autonomization' of passenger vehicles has been occurring for over two decades, beginning with automatic transmission and later features such as Automatic Cruise Control (ACC) and Highly Automated Driving (HAD – automation of both longitudinal and lateral motion) [10]. Additionally, innovation in products that are complementary to automobiles themselves, such as satellite navigation, reversing sensors, lane-keeping assistance and e-payment for toll roads, has also contributed to the simplification of the DDT in recent decades. There is ample evidence to suggest that increased simplification of the DDT increases the likelihood of driver drowsiness – for every innovation that simplifies the task, the likelihood of giving in to drowsiness progressively increases [11]. Full autonomy (Level 5) might relieve the major societal burden of sleep lost to commuting commitments by allowing the driver to safely sleep in a moving vehicle. However, AV technology has followed, and is expected to continue along, a path of incremental innovation, and therefore the technology, and its regulation, are likely initially to facilitate only on-road AVs with a driver, or in actuality 'a passenger who knows how to drive', ready to take control in cases of disengagement (Level 4). Under these circumstances, the driver needs to be alert, not drowsy and certainly not asleep. The incremental innovation over past decades, and the drowsy driver research alongside it, would predict that under Level 4 autonomy drowsy driver incidence might increase significantly, given the task simplification. Under Level 5 autonomy, if the technology reaches that point, incidence will be even higher, but the negative implications for society will allegedly be eliminated.
Due to the hypercompetitive landscape, which consists of traditional automobile OEMs (e.g. Daimler, GM, Ford), technology companies (e.g. Google, Apple, Intel, Nvidia) and ridesharing companies (e.g. Gett, Uber, Yandex and Lyft), a culture of trade secrets is pervasive, and it is difficult to get an accurate picture of the technology's current state. Nevertheless, it is known that the vehicles being tested on-road today are Level 4. There have already been cases of accidents caused by human error in these pilot journeys, and indeed the one recorded fatal accident was a result of driver distraction [12]. There has been no inquiry into the role of drowsiness in these incidents, and this should be rectified as regulatory oversight is formalized. The state of the market would suggest that Level 4 semi-autonomy is forthcoming as a mass-market reality, whereas the exact pipeline and social/regulatory shifts required to bring Level 5 to a similar state are difficult to determine. Driver fatigue has been cited as a key motivator behind the race to perfect vehicular autonomy [13]. If the DDT is too simple in semi-autonomous vehicles, the passengers designated as emergency drivers in cases of disengagement may experience passive fatigue despite the potential need for intervention in extreme situations. This inference is drawn from experience with tasks of low cognitive load, where subjects invariably have direct control over the task at hand [14]. Domain-specific research has demonstrated the inverse relationship between passive fatigue and driver performance [15]. Conversely,
boredom may also arise from low workload during periods of automated driving, causing drivers to seek more entertaining tasks. This has serious consequences for situations when the driver must intervene during a disengagement [16].
5 Drowsy Driving and the Current State of the AV Market

5.1 Ridesharing/Ride-Hailing
Some of the most advanced players in the AV market are ride-hailing companies. The long-term AV landscape is also dominated by discussion of a shared vehicle network, as opposed to today's largely owner-based model. In order to get to a fully autonomous ride-hailing network, paid drivers will likely operate interim AV prototypes (Level 4), as some of the most frequent ride-hailers (the young, elderly, disabled, and inebriated) are not able to safely take control in the event of a disengagement. A trained driver is also needed in case of disengagement between journeys. The largest impediment to achieving enough sleep is the number of hours worked [17]. Many drivers in the ridesharing industry view this type of work as an additional income stream. Drivers often work a primary job and drive in their "off" time; such a schedule may lead to driving after extended periods of wakefulness or during nights. Working long and irregular hours (especially at night), or having multiple jobs, substantially increases the risk of work-related accidents and automobile crashes [18]. In the near future, while AV technology remains at Level 4 or below, ridesharing vehicles will be staffed by a driver who is at increasing risk of harmful drowsiness.

5.2 Autonomous Heavy (Freight) Vehicles
Freight vehicles, if totally autonomous, carry only goods, not people, and are therefore considered a popular initial use case for full autonomy (Level 5). Though the risk might be lowered by conducting pilots in freight vehicles, these AVs would share the road with smaller passenger vehicles, and any malfunction resulting in a collision could be fatal given the size of a freight truck. Proponents argue that trucks drive on highways, which are more spacious and have fewer obstacles for the AV to consider, and that freight road transport often occurs overnight, when there are fewer passenger cars on the roads. A reasonable conclusion is that testing Level 5 technology in freight vehicles should be considered risk minimizing rather than risk mitigating. Testing (currently ongoing) [19] and the eventual roll-out of Level 4 vehicles may also see heavy freight vehicles become an attractive starting point for similar reasons. In this case, the driver ready to intervene will likely be a regular truck driver. These drivers are significantly overrepresented in road deaths internationally, often due to lifestyle factors making them susceptible to drowsy driving [20]. For example, deaths related to heavy-vehicle crashes in the United States comprise 8% of all road deaths, despite such vehicles accounting for only 1% of vehicle registrations [21]. One reason may be that OSA is common in heavy-vehicle drivers [22]. Also significant are the challenging work conditions involving long distances and driving durations coupled with tight delivery schedules. Case-control research comparing drivers involved in non-fatal, non-serious
crashes shows night-time driving and work hours to be strongly related to crash risk [21]. Further simplification of a task performed by an at-risk population with disproportionately high rates of drowsiness will possibly increase the likelihood of succumbing to that drowsiness. It could be a dangerous combination during the roll-out of Level 4 autonomy in heavy vehicles.

5.3 Encouraged Adoption of Collision Avoidance Technologies
Human error or poor decision making was the primary causal factor in 94% of motor crashes, according to an NHTSA report [23]. The US DoT is working to accelerate the spread of crash-avoidance technologies that have the potential to prevent the thousands of deaths caused by human error. The Department is also working to make vehicle-to-vehicle safety communications a part of future fleets, and to identify and address potential obstacles to safety innovations within existing regulations [24]. While safety innovations based on vehicle automation may be able to prevent some drowsy driving crashes, intermediate stages of automation will require human oversight. Such interactions between driver and vehicle will likely be impacted by drowsiness. Some of these technologies may lead to greater levels of drowsiness, as less driver involvement is required due to increasing levels of automation, and may constitute an emerging safety concern that should be monitored as these systems become more ubiquitous. Finally, there is a notable exception here, and that is with respect to technologies that specifically address drowsiness. For example, Mobileye sounds a loud beep when the car leaves its lane, approaches a pedestrian or moves too close to the vehicle in front. A driver who has fallen into micro-sleep will be jolted awake by audio and visual stimuli [25]. The mitigating role of driver drowsiness detection technologies on the pathway to Level 5 is not limited to car-based devices like Mobileye, but also includes eye closure and other drowsiness detection methods.
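Eye-closure-based detection of the kind mentioned above typically relies on a metric such as PERCLOS, the proportion of time within a window that the eyes are mostly closed. The sketch below is a generic illustration using an invented eye-openness trace and the commonly cited 80%-closure criterion; it is not the algorithm of any particular product, and the alert threshold is chosen arbitrarily:

```python
def perclos(openness_samples, closed_threshold=0.2):
    """PERCLOS: fraction of samples in which the eye is at least 80%
    closed (i.e. openness <= 0.2 on a 0.0-1.0 scale).
    The threshold and the trace below are illustrative assumptions."""
    if not openness_samples:
        raise ValueError("need at least one sample")
    closed = sum(1 for o in openness_samples if o <= closed_threshold)
    return closed / len(openness_samples)

# Invented 1 Hz eye-openness trace over 10 s: mostly open, two droops.
trace = [0.9, 0.8, 0.1, 0.05, 0.7, 0.9, 0.15, 0.8, 0.9, 0.85]
score = perclos(trace)
print(round(score, 1))  # 0.3 -> 30% of the window spent with eyes closed

# A simple alert rule; the cut-off here is purely for illustration.
if score > 0.15:
    print("drowsiness alert")
```

A rising PERCLOS over successive windows is the kind of signal that could trigger the audio and visual stimuli described above.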
6 Conclusions and Discussion of Topics for Further Research

Research in this emerging domain will need to draw on both ergonomics and sleep science, ideally concurrently. There is a need to expand on the line of inquiry taken in this paper, and also to explore many more ergonomic features of increasingly autonomous/automated driving technologies that have a strong relationship with drowsy driving. Subjects for further research in this domain might include over-reliance on and trust in the system (the AV), skill degradation, and others. On the sleep medicine side, there is a need to evaluate existing and future assumptions through credible trials, surveys and data analysis. Given the strong evidence for an association between lifestyle factors and cases of drowsy driving, a good starting point would be to research potential changes in time use in the context of vehicles that increasingly operate autonomously. This informs our next line of inquiry on this subject and will put to the test some of the hypotheses outlined above.
32
D. Grunstein and R. Grunstein
References

1. Fagnant, D., et al.: Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. Transp. Res. Part A: Policy Pract. 77, 167–181 (2015)
2. Uchida, T., et al.: Autonomic traveling apparatus for a vehicle. US Patent 8,548,664 (2013)
3. Mays, W.: Autonomic vehicle safety system. US Patent 12/366,521 (2010)
4. SAE International: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International (2014)
5. Brodsky, J.: Autonomous vehicle regulation: how an uncertain legal landscape may hit the brakes on self-driving cars. Berkeley Technol. Law J. 31(2), 851–878 (2016)
6. Smolensky, M., Di Milia, L., Ohayon, M., Philip, P.: Sleep disorders, medical conditions, and road accident risk. Accid. Anal. Prev. 43(2), 533–548 (2011)
7. Tregear, S., et al.: Obstructive sleep apnea and risk of motor vehicle crash: systematic review and meta-analysis. J. Clin. Sleep Med. 5(6), 573–581 (2009)
8. Howard, M., et al.: Sleepiness, sleep-disordered breathing, and accident risk factors in commercial vehicle drivers. Am. J. Respir. Crit. Care Med. 170(9), 1014–1021 (2004)
9. Higgins, J., et al.: Asleep at the wheel - the road to addressing drowsy driving. 40 (2017)
10. De Winter, J., et al.: Effects of adaptive cruise control and highly automated driving on workload and situation awareness: a review of the empirical evidence. Transp. Res. Part F: Traffic Psychol. Behav. 27, 196–217 (2014)
11. McDowell, K., Nunez, P., Hutchins, S., Metcalfe, J.: Secure mobility and the autonomous driver. IEEE Trans. Robot. 24, 688–697 (2008)
12. BBC News: Uber 'not criminally liable' for self-driving death, 6 March 2019. https://www.bbc.co.uk/news/technology-47468391. Accessed 26 Sept 2019
13. Oron-Gilad, T., Ronen, A.: Road characteristics and driver fatigue: a simulator study. Traffic Inj. Prev. 8(3), 281–289 (2007)
14. Desmond, P., Hancock, P.: Active and Passive Fatigue States. In: Stress, Workload, and Fatigue, pp. 455–465. Lawrence Erlbaum, Mahwah (2001)
15. Neubauer, C., Matthews, G., Langheim, L., Saxby, D.: Fatigue and voluntary utilization of automation in simulated driving. Hum. Factors 54(5), 734–746 (2012)
16. Stanton, N., Young, M., et al.: Drive-by-wire: the case of driver workload and reclaiming control with adaptive cruise control. Saf. Sci. 27(2–3), 149–159 (1997)
17. Basner, M., Fomberstein, K., Razavi, F., et al.: American time use survey: sleep time and its relationship to waking activities. Sleep 30(9), 1085–1095 (2007)
18. Stutts, J., Wilkins, J., Scott, O., Vaughn, B.: Driver risk factors for sleep-related crashes. Accid. Anal. Prev. 35(3), 321–331 (2003)
19. Anmar, F.: Self-driving trucks are being tested on public roads in Virginia, 10 September 2019. https://www.cnbc.com/2019/09/10/self-driving-trucks-are-being-tested-on-public-roads-in-virginia.html. Accessed 30 Sept 2019
20. Australian Bureau of Statistics: Motor Vehicle Census. ABS, Canberra, Australia (2011)
21. Stevenson, M., et al.: The role of sleepiness, sleep disorders, and the work environment on heavy-vehicle crashes in 2 Australian states. Am. J. Epidemiol. 179(5), 594–601 (2014)
22. Sharwood, L., Elkington, J., Stevenson, M., Grunstein, R., et al.: Assessing sleepiness and sleep disorders in Australian long-distance commercial vehicle drivers: self-report versus an at home monitoring device. Sleep 35(4), 469–475 (2012)
23. National Highway Traffic Safety Administration: Drowsy driving. https://www.nhtsa.gov/risky-driving/drowsy-driving. Accessed 20 Mar 2018
24. National Transportation Safety Board: NTSB 2017–2018 Most Wanted List of Transportation Safety Improvements. https://www.ntsb.gov/safety/mwl/Pages/mwl6-2017-18.aspx. Accessed 20 Mar 2018
25. Mobileye: Wake Up for Lane Departure Warnings, 24 October 2018. https://www.mobileye.com/us/fleets/blog/wake-up-for-lane-departure-warnings/. Accessed 29 Sept 2019
Exploring New Concepts to Create Natural and Trustful Dialogue Between Humans and Intelligent Autonomous Vehicles

Andrea Di Salvo(&) and Andrea Arcoraci

DAD - Department of Architecture and Design, Politecnico di Torino, Viale Mattioli, 39, 10125 Turin, Italy
{andrea.disalvo,andrea.arcoraci}@polito.it
Abstract. In recent years, research on autonomous cars has spread ever more widely, exploring many of the technological elements, the intelligence necessary to build hardware and connections, and the limits linked to legislation and ethical aspects. The automotive field is rich in interior style solutions, but the topics linked to interaction design and digital user experience are relatively young and need a different approach. If the main limits so far have been the cognitive workload, attention and reaction times required by still semi-automatic driving, what methods should be followed to obtain an innovative user experience that also considers the aspects of complexity and of personal digital connections? The presented case study describes the methodological approach and the first concepts developed for a research project studying an innovative interface system for an autonomous vehicle able to travel both on the ground and in flight.

Keywords: Interaction design · User experience · Design concept
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 34–40, 2020. https://doi.org/10.1007/978-3-030-39512-4_6

1 Introduction

Autonomous driving is a research field that has been developing continuously for many years; the focus is the ambition to completely automate the driving process, integrating sensors and actuators connected both to an internal car software system and to the mobility system. The primary goals of these projects are: a substantial increase in safety [1, 2]; freeing car passengers from driving tasks so that they can engage in other activities [2]; and increasing the sustainability of the mobility system by considering cars as part of an integrated service based on sharing [3, 4]. To deal with such a complex challenge, car manufacturers and research institutions are following two paths: the first specifically concerns the safety of the car, the second imagines new on-board configurations. The latter task is usually managed by style centers that design solutions based on formal languages, while the interaction modes are still conditioned by the transition state reached so far. The presence of driving controls in semi-autonomous cars affects not only the structure of the interior but also interaction and experience. In a semi-autonomous vehicle, in fact, responsibility for the vehicle still lies with the driver, and the other activities are strongly
limited by the response times that the driver must maintain to return into the loop. The topics linked to interaction design (IxD) and digital user experience are thus still relatively young, because they remain closely related to the driving activity. Considering the perspective of a fully autonomous car, interaction designers should not consider only the current primary limits coming from cognitive ergonomics, such as cognitive workload. They should instead focus on other aspects, such as the critical shift in the division of the driving task, which requires reconsidering communication between users and cars. Users will move from performing the driving to managing several elements (e.g. setting the direction and choosing the driving style) and letting the system execute the action as it knows best [5]. The user will continuously evaluate the performance of the vehicle to build acceptance and trust [6]; if these do not meet expectations, the potential risk is a decrease in user trust. With a low trust level, the user may not only dismiss the particular vehicle but dismiss autonomous vehicles altogether [5]. In order to be accepted, autonomous vehicles have to be perceived as useful, as well as safe and competent [7]. Interaction designers, in conclusion, have the responsibility to focus on approaches and methods to obtain a completely different UX, one that also considers the aspects of complexity and of personal digital connections.
2 Scenario

The 5 automation levels of a vehicle [8] represent a roadmap; however, that document does not suggest which other activities could be carried out on board besides driving. Furthermore, level 5 automation appears technologically achievable today, but the lack of connection to the mobility system is evident, at least as far as cars are concerned. Truck manufacturers have worked on different bases and are putting effort into creating a shared system to test platooning techniques, i.e. a semi-autonomous mode that requires an interoperable system capable of coordinating many vehicles. There are many differences between trucks and cars, but the common factor concerns interaction [9] and on-board experience. Although users are free from the driving task, they still interact with the system, but this will happen differently [10]. Users will have a seemingly more passive role inside the vehicle as their ability to send direct input to the system through commands (e.g. pedals, steering) decreases or disappears altogether. Instead, the emphasis will be more on the output information that the user receives from the system [5]; this information will be crucial to increasing the perception of predictability and thus building the user's trust [11]. As the way of communicating between users and system changes, the traditional interface is no longer suitable in the new configuration of fully autonomous vehicles, and the new one should also be designed to convey new types of information such as intentions and awareness [5]. Furthermore, the interface placement itself will be different, and users will be involved in many more activities and will experience the cabin in a totally different way [12]. These revolutionary factors open the possibility of reconfiguring the vehicle under many aspects, covering both sides, physical and digital.
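The platooning coordination mentioned above is commonly formulated, in the control literature, as a constant time-gap spacing policy: each follower keeps a gap that grows with its speed. The sketch below illustrates that textbook formulation only; it is not the controller of any specific truck platooning system, and the gains and limits are hypothetical.

```python
# Minimal sketch of a constant time-gap spacing policy for platooning
# (a common textbook formulation; gains and parameters here are hypothetical).

def spacing_error(gap_m, ego_speed_mps, standstill_gap_m=5.0, time_gap_s=1.0):
    # Desired gap grows with speed: d_des = d0 + h * v
    desired = standstill_gap_m + time_gap_s * ego_speed_mps
    return gap_m - desired

def accel_command(gap_m, ego_speed_mps, leader_speed_mps,
                  kp=0.2, kv=0.5, a_max=2.0):
    # Proportional control on the spacing error plus damping on the relative
    # speed, saturated to a comfortable acceleration range.
    u = kp * spacing_error(gap_m, ego_speed_mps) + kv * (leader_speed_mps - ego_speed_mps)
    return max(-a_max, min(a_max, u))

# Follower 40 m behind a leader, both at 25 m/s: the desired gap is
# 5 + 1*25 = 30 m, so the follower is 10 m too far back and accelerates.
print(accel_command(40.0, 25.0, 25.0))  # 2.0 (kp*10 = 2.0, at the saturation limit)
```

The interoperability problem the text raises is exactly that every vehicle in the platoon must agree on such a policy (and exchange speed data over vehicle-to-vehicle links) for the formation to remain stable.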
Today, tests in real contexts focus on the relations between the number of disengagements [13] and their possible causes, and on the reaction time that the driver has in front of an unexpected event. The driver's
monitoring phase is, therefore, relegated to safety-critical situations or to keeping the driver in the loop by assessing the level of attention or fatigue [10]. Much work, instead, has been done to evaluate other aspects that can be considered part of the UX: first of all, the level of acceptance and the consequent trustworthiness perceived not only by those aboard autonomous vehicles, but also by other actors in the mobility system. A trend that has a significant impact on the UX concerns the detection of emotions on board and of data potentially useful for creating passenger profiles [14]. This can already be done technologically through personal devices, like wearables, and on-board sensors, and in perspective it allows connecting the user's emotions to the car's behavior and to the on-board interfaces, making the cabin more relational. This research field can also be addressed during the metadesign phase through customer journeys that design sentiment and emotional reactions. This approach can be considered speculative and broadens the design horizon, usually linked to the concept of HMI, to the dimension of mobility as a service.
3 Design Approaches and Methods

The methodological approach intersects IxD, UX and design fiction [15, 16] in order to build all the design elements on a plausible story and, at the same time, with the wider overview that these projects need. Consequently, HCD is shifted toward a more speculative approach. Although the project was initially structured around a megacity, we decided to work on the city of Turin, the location of the design course of study, so as to be able to carry out a detailed exploration of the territory. The process started with a user research phase, integrating desk research based on mobility reports and data released by public institutions with an in-depth study of new interactive technologies released on the market or at an embryonic stage. This phase was complemented by hands-on research in which the involved students tested all the sharing and pooling services available in Turin. Enlarging the research field to bike sharing, cargo and pooling services allowed the focus to shift more toward the service, in order to structure the customer journey without concentrating only on the HMI, which has nevertheless been analyzed in detail. Only after this phase was the research project presented, which already included a prototype and an interface that were in turn explored hands-on. Following this phase, Personas were built. Although data came from user research and from further demographic analyses obtained through experts, the imaginative effort is comparable to that of design fiction: creating narrative, filmic characters able to distinguish themselves from the usual businessman who appears in the initial narration of the project. The next step was the creation of the interface concepts. Here too, the methodology drew on the typical techniques of filmic narration, identifying the interface as another character with a specific personality.
This approach proved very useful for defining the behaviors and interactions of the interfaces. As a result of the concepts, a Customer Journey was realized for each Persona, and from these all the interactive and graphic languages. The project ended with a communication video able to show the UX on board.
4 The Research Project

This project is part of a research program involving a company from the Turin area that developed a concept for a fully electric, fully autonomous modular system. One of its peculiarities is its ability to reconfigure according to user needs: it can move inside and outside the urban context in two ways, by ground and by flight. Relevant factors such as the rise of urban density and the increase of megacities across the world expected over the next fifty years have led the transportation market to explore new concepts and new ways of thinking about urban movement. Another big trend is the abandonment of private vehicles in favor of the spread of shared vehicles [17]; this shift should make the whole mobility system more efficient and sustainable, mitigating problems such as congestion and pollution. Therefore, the vehicle has been designed as a service. This characteristic, together with the double mode of movement and the full automation of the vehicle, will dramatically change the experience of moving through the city. The role of this research project is thus to understand the future needs and desires of the actors within the new mobility system, to imagine and rethink the possible UXs within an autonomous vehicle, and to explore possible applications in terms of innovative technologies.
5 The Concepts

The high-level interaction concepts presented here are categorized according to the grammar of the interaction and the experience on board. They were developed by 8 groups of students during the master's degree course. In particular, the concepts explore the ways in which: interfaces and artificial intelligence will recognize and respond to the user's specific needs; personal and on-board devices will handle emergencies, delays and malfunctions. Particular attention has been dedicated to the emotional aspects of the user and the ways of monitoring them, in order to design the system's behaviors and the constant dialogue through the interfaces. The interaction modes are shown in the figure and in the description that follows, leaving out the different services, which vary greatly from project to project (Fig. 1).
Fig. 1. The 8 high-level concepts
The Extended Personal concept is based on a hologram projected by the personal device, connected to the car, which answers the needs expressed by the user. It is not limited to answering: given its connection to the user's personal data, it is able to anticipate requests and reshape the interactive elements of the cabin according to the detected emotions. The interface becomes multisensory; the main dialogue is based on the voice, but the connections make it truly personal. In this case the AI is an extension of the personal device, capable of communicating with the on-board system. Relaxed Monitoring conceives the car as part of a stand-alone delivery service. The interface is therefore useful for constantly programming and monitoring the journey from the user's personal device. It assumes the dimensions of a current app, with the aim of reassuring both the sender and the receiver. The Pick Up Needs interface is based on an evolution of direct manipulation, using three-dimensional holographic projections able to provide tactile feedback. The user can interact with the on-board systems using his hands. The grammar of interaction is radically different and is initially based on a 6-sided cube, each side of which gives access to a specific section of the system. In addition to the usual infotainment, the interface allows the user, by pinching the corners of the cards, to explore the contents in ever greater depth. The whole surrounding context is reconstructed and visualized in 3D, also allowing punctual interaction with the city. 360° interaction is based on free touch gestures, while the interface is developed on all the glass surfaces of the cabin. This not only allows interaction with the context through augmented reality but also amplifies the possibilities of vision, for example by projecting immersive 360° content onto treated surfaces; the system agrees with the user on a series of on-board activities connected to the motivation of the journey, like going to a concert or an event.
Continuous Communication's interface is designed to communicate continuously with users through a tactile interface, specifically designed for visual disabilities but scalable to all users. By continuous communication we do not mean leaving the channel always open, as happens with voice assistants, but triggering a continuous and natural dialogue that reassures the user during all phases of the journey. Playful Grammar changes the interaction grammar into a visceral one, referring to Norman's categories: gestures are not learned but developed by the user according to his needs and expectations. In this way, users can feel engaged by a new and peculiar language, also aimed at amazing the other passengers. Body Touchless interaction with the contents is no longer tied to an object or display but shapes itself on the body of the user while remaining invisible to others. Based on the need for privacy and hygiene on board a shared vehicle, the user can, for example, write by tapping on his legs using a virtual keyboard projected by his device. The movement of his fingers is detected, and the user can easily look directly at the device. Although lacking a tangible tactile interface, the concept leaves the user's arms and hands in a relaxed position. My Favorite Instrument turns the car into an extension of users' professional instruments in a disruptive way: the windows and the windscreen, together with a voluminous control console, can become the studio of a photographer working directly during the flight, or a recording studio for a musician or a reporter.
6 Conclusion and Future Works

The presented high-level interaction concepts are characterized by a futuristic vision of technologies and a paradigm shift in the grammar of interaction. This is a speculative work that envisions a scenario in which autonomous, shared mobility becomes crucial in the city environment, highlighting the fundamental role of UX and IxD. At present, the concepts have been tested on some vertical aspects, and presented and discussed with the other project partners. Despite the critical issues related to the technological feasibility of some concepts, this research phase underlined how wide the margin for innovation is and what role IxD and UX play in the design not only of the interfaces but also of the behaviors that the AI should exhibit in dialogue with users. The next steps concern an intensive phase of tests that considers, above all, the other parameters that contribute to the perception of comfort, such as the flight context and the duration of the journey.
References

1. Andersen, J., et al.: Autonomous Vehicle Technology. RAND, Santa Monica (2016)
2. Fagnant, D., Kockelman, K.: Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. Transp. Res. Part A: Policy Pract. 77, 167–181 (2015)
3. Pavone, M., et al.: Robotic load balancing for mobility-on-demand systems. Int. J. Robot. Res. 31(7), 839–854 (2012)
4. Spieser, K., et al.: Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: a case study in Singapore. In: Road Vehicle Automation, pp. 229–245 (2014)
5. Strömberg, H., et al.: HMI of autonomous vehicles - more than meets the eye. In: Advances in Intelligent Systems and Computing, pp. 359–368 (2018)
6. Ghazizadeh, M., et al.: Extending the technology acceptance model to assess automation. Cogn. Technol. Work 14(1), 39–49 (2011)
7. Choi, J., Ji, Y.: Investigating the importance of trust on adopting an autonomous vehicle. Int. J. Hum.-Comput. Interact. 31(10), 692–702 (2015)
8. SAE International: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International (J3016) (2016)
9. Ramm, S., et al.: A first approach to understanding and measuring naturalness in driver-car interaction. In: Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI 2014 (2014)
10. Helgath, J., et al.: Investigating the effect of different autonomy levels on user acceptance and user experience in self-driving cars with a VR driving simulator. In: Design, User Experience, and Usability: Users, Contexts and Case Studies, pp. 247–256 (2018)
11. Ekman, F., et al.: Exploring automated vehicle driving styles as a source of trust information. Transp. Res. Part F: Traffic Psychol. Behav. 65, 268–279 (2019)
12. Payre, W., et al.: Intention to use a fully automated car: attitudes and a priori acceptability. Transp. Res. Part F: Traffic Psychol. Behav. 27, 252–263 (2014)
13. Dixit, V., et al.: Autonomous vehicles: disengagements, accidents and reaction times. PLoS ONE 11(12), e0168054 (2016)
14. Amazon Technologies Inc.: Passenger profiles for autonomous vehicles (2018)
15. Dunne, A., Raby, F.: Speculative Everything: Design, Fiction, and Social Dreaming. The MIT Press, Cambridge (2013)
16. Bleecker, J.: Design Fiction: A Short Essay on Design, Science, Fact and Fiction, p. 29. Near Future Laboratory, Los Angeles (2009)
17. Kim, S., et al.: Autonomous taxi service design and user experience. Int. J. Hum.-Comput. Interact. 1–20 (2019)
Integrating Human Acceptable Morality in Autonomous Vehicles

Giorgio M. Grasso, Chiara Lucifora(&), Pietro Perconti, and Alessio Plebe

Department of Cognitive Science, University of Messina, Messina, Italy
{gmgrasso,clucifora,perconti,aplebe}@unime.it
Abstract. Our study aims at advancing the assessment of the moral behaviour of human drivers. It is a felicitous coincidence that psychological and philosophical research into human morality has been dominated by thought experiments resembling vehicles facing emergency situations. These thought experiments involve a runaway trolley and have been used to contrast different moral principles, especially deontology versus utilitarianism. We designed an ecologically valid trolley-like dilemma with the help of virtual reality, aimed at understanding the moral behavior of human subjects facing a car accident situation. We report and comment on early results of our first tests.

Keywords: Moral dilemma · Virtual reality · Autonomous driving system
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 41–45, 2020. https://doi.org/10.1007/978-3-030-39512-4_7

1 Introduction

Recent advances in deep neural algorithms and in sensor technologies are pushing the deployment of autonomous vehicles close to reality [11, 15]. It is time to reflect on the social integration of fleets of autonomously driven cars and trucks with humans. Within this perspective, some of the most compelling challenges concern moral issues, like those arising when a vehicle faces an emergency situation [3, 5]. This issue cannot be addressed from a purely engineering standpoint, because it typically involves the comparison of different choices with different predictable damages. Ranking different possible harms pertains to human morality. In principle, the engineering solution should integrate into the vehicle design a decision logic yielding choices as close as possible to those humans would have made in the same situation. However, it is far from clear how human drivers behave in emergency situations requiring moral decisions. In our opinion, a remarkable advantage of still using trolley-like dilemmas is the careful critical analysis undertaken in recent decades of their legitimacy as experimental stimuli. There are known limitations and drawbacks, and there are prescriptions for exploiting this experimental paradigm at its best. For example, Himmelreich [9] argues that the moral dilemma is not relevant for the autonomous vehicle, because the hypothesized situations rarely occur in the real world, and when they do, the speed of the autonomous vehicle does not leave time to make a choice; Nyholm and Smids [14] remark that moral dilemmas ignore some properties of
the real world, such as who is responsible for the road accident, and ignore information about special obligations, which in the real world are very important for understanding moral permissibility. Even more serious is the criticism that moral studies using trolley-like dilemmas as stimuli lack ecological validity [4, 13]: moral mental processes used in real situations perform differently when tested in an unusual conceptual context. Therefore, we took great care over this issue. We designed a driving simulator in realistic city traffic scenarios, in which subjects are immersed wearing a virtual reality helmet, driving with a car-like steering wheel and pedals. Moreover, even though our stimuli are highly ecological, we further try to distinguish moral judgments from moral behavior in the driving context. We administered two different tasks to the subjects: a first one recording in real time the choice actually made in the critical situation while driving, and a second one halting the driving simulation and allowing the subjects enough time to reflect before deciding on their action, with subjects asked to report the motivations for their choices. Abundant evidence in the literature indicates that the reported motivations of subjects may diverge from their unconscious decision process [10], including for moral decisions. Therefore, it is useful to understand whether there are systematic discrepancies between what human subjects deem morally correct when driving and the moral choice they make automatically in emergency driving maneuvers.
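The two-task design described above pairs, for each subject, a time-pressured choice made while driving with a reflective choice accompanied by a reported motivation. One plausible way to record such trials (all field names here are hypothetical, not the authors' actual data schema):

```python
# Hypothetical record structure for the two-task design: a time-pressured
# "hot" choice made while driving, and a reflective "cold" choice with a
# reported motivation. All field names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialRecord:
    subject_id: int
    condition: str            # "hot" (real-time driving) or "cold" (paused simulation)
    choice: str               # "A", "B", or "C"
    reaction_time_s: Optional[float] = None   # only meaningful in the hot condition
    motivation: Optional[str] = None          # free-text report, cold condition only

hot = TrialRecord(subject_id=1, condition="hot", choice="A", reaction_time_s=1.4)
cold = TrialRecord(subject_id=1, condition="cold", choice="C",
                   motivation="protect the child and the family")

# A per-subject discrepancy flag makes the hot-vs-cold comparison explicit.
print(hot.choice != cold.choice)  # True
```

Aggregating this flag over all subjects yields exactly the systematic discrepancy the study sets out to measure.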
2 Our Research

In our approach, we think it is possible to connect ethics with engineering, although ethics has mainly taken a top-down approach while engineering more often uses a bottom-up approach, in which actions are not guided by explicit rules. In fact, there are many studies that use a top-down approach to build a moral artificial agent. For example, Winfield et al. (2014) use a clearly top-down approach to construct robots capable of implementing Asimov's first law, and Anderson et al. [2] built a robot based on a specific ethical theory, namely principlism. We must therefore distinguish between implicit and explicit ethical agency [6]. Based on the first, it is possible to build agents acting on ethically relevant considerations, but this does not mean that the agent should rely on such considerations to explain or justify its choices, so it is possible to speak of a "functional morality" [1, 16] in which the system simply acts within acceptable standards. As stated in the Introduction, a distinctive objective of our study is to assess possible discrepancies between moral conduct in ecological conditions and post-hoc moral judgments.

2.1 Experiment
In our experiment we put the subjects in a virtual driving situation and study their behaviour. In order to fulfill the aims just described, we created two different scenarios. In the strictly ecological condition, while the user drives in an unknown city, a sudden accident situation arises due to a child crossing the road, unaware of the danger. In this case, three alternatives are presented to the user: A. run over the child;
B. turn right and kill three passers-by (like a family) on the sidewalk, unaware of the danger; C. turn left and kill two road construction workers on the roadway. The other condition - let us call it the "cold" conceptual condition - differs in that the simulation is halted just before the subject runs into the road accident again. Here, the alternatives are visible to the user and clarified orally by the experimenter, and the subject makes her choice and reports a motivation. Our experimental setup includes a virtual reality helmet that allows the subject to feel immersed in the road environment, together with a real driving seat positioned on a rigid frame, a steering wheel with force feedback, and pedals and a gearshift providing a complete driving experience during the simulation [8].
3 Preliminary Results

To date, experiments have been performed on a sample of 84 subjects, composed of 10 males and 74 females, with a mean age of 22.3 years. These first results show a dramatic difference in the moral behavior of the subjects between the strict ecological condition and the "cold" conceptual condition. While in the strictly ecological choice 92% of subjects chose A, in the cold choice only 35% confirmed this decision, 61% chose C, and 4% chose B (Fig. 1).

Fig. 1. Bar chart comparing hot and cold choices. Hot: A 92.85%, B 3.57%, C 3.57%. Cold: A 35.71%, B 3.57%, C 60.71%
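The percentages reported for Fig. 1 are consistent with integer counts out of the 84 subjects, assuming (our inference, not stated in the paper) that the values were truncated rather than rounded to two decimals. A quick arithmetic check with the inferred counts:

```python
# Sanity check: the Fig. 1 percentages match integer counts out of n = 84
# subjects when truncated (not rounded) to two decimals. The counts below
# are inferred from the percentages, not reported directly in the paper.

def truncate2(x):
    return int(x * 100) / 100  # drop digits beyond two decimal places

n = 84
hot  = {"A": 78, "B": 3, "C": 3}   # 78 + 3 + 3 = 84
cold = {"A": 30, "B": 3, "C": 51}  # 30 + 3 + 51 = 84

print({k: truncate2(v / n * 100) for k, v in hot.items()})
# {'A': 92.85, 'B': 3.57, 'C': 3.57}
print({k: truncate2(v / n * 100) for k, v in cold.items()})
# {'A': 35.71, 'B': 3.57, 'C': 60.71}
```

Under this reading, the headline result is that 78 of 84 subjects chose A in the hot condition but only 30 of 84 confirmed it in the cold condition, with 51 switching to C.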
We classified the reported motivations for the cold choice into four categories: deontological reasons (protection of the family and/or child); utilitarian reasons (oriented more to quantity than to quality); normative reasons (related to compliance with the highway code); and other. Choice A (kill one child), made by 34% of the subjects, was motivated in 46% of cases by utilitarian reasons and in 54% by normative reasons. Choice B (kill three people/a family) was motivated in 100% of cases by reasons of "other origin" that have no statistical significance (only 3 subjects of 76). Choice C (kill two road workers), made by 61% of the subjects,
G. M. Grasso et al.
was motivated in 58% of cases by deontological reasons, 17% by utilitarian reasons, 4% by normative reasons and 21% by other reasons. This classification of the moral preferences in the "cold" situation confirms the relative prevalence of utilitarian arguments (27.38%), already assessed by previous studies. Bonnefon et al. [3] reported that 76% of the subjects in their study chose to kill the least number of people, even at the cost of their own lives, and that this utilitarian moral attitude increases with the number of lives that can be saved. Faulhaber et al. [7] likewise report a greater preference of subjects for minimizing the number of deaths, independently of age or context. However, our results show that there are other factors (15.48%) that influence the moral choice, for example social and economic factors [12], and a marked preference for the deontological code (35.71%) and the normative code (21.43%) – see Fig. 2.
Fig. 2. Pie chart of the motivations for the cold choice: 35.71% deontological; 27.38% utilitarian; 21.43% normative; 15.48% other.
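The overall motivation shares in Fig. 2 follow arithmetically from the per-choice breakdowns reported in the text. The snippet below checks this consistency, assuming n = 84 subjects in the cold condition; the integer counts are our reconstruction from the percentages, not figures reported by the authors.

```python
# Consistency check of Fig. 2 against the per-choice motivation
# breakdowns. Counts are inferred from the reported percentages
# (an assumption), assuming n = 84 cold-condition subjects.

counts = {
    "A": {"utilitarian": 14, "normative": 16},          # 30 subjects (35.71%)
    "B": {"other": 3},                                  # 3 subjects (3.57%)
    "C": {"deontological": 30, "utilitarian": 9,
          "normative": 2, "other": 10},                 # 51 subjects (60.71%)
}

n = sum(sum(m.values()) for m in counts.values())       # 84 in total

totals = {}
for motives in counts.values():
    for motive, k in motives.items():
        totals[motive] = totals.get(motive, 0) + k

for motive, k in sorted(totals.items()):
    print(f"{motive}: {100 * k / n:.2f}%")
# deontological: 35.71%
# normative: 21.43%
# other: 15.48%
# utilitarian: 27.38%
```

The reconstructed totals reproduce the pie-chart percentages exactly, which suggests the figure aggregates motivations across all three cold choices.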
4 Conclusion

The main result of our study is that there is a significant discrepancy between the moral mental processes at work when people are immersed in a situation similar to driving, and the "cold" moral judgments that subjects report in the less ecological, more conceptual evaluation of the same critical situation. Specifically, in the strictly ecological condition subjects appear much more deontological than utilitarian in their moral choices, reversing the proportions of moralities found in their verbal reports in the "cold" situation. If confirmed on a larger sample of subjects, this result raises concerns about the ability of conceptual trolley-like stimuli to capture a correct evaluation of the common morality of human drivers.
References

1. Allen, C., Smit, I., Wallach, W.: Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf. Technol. 7, 149–155 (2005)
2. Anderson, M., Anderson, S.L.: Machine ethics: creating an ethical intelligent agent. AI Mag. 28(4), 15 (2007)
3. Bonnefon, J.F., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science 352, 1573–1576 (2016)
4. Casebeer, W.D.: Moral cognition and its neural constituents. Nat. Rev. Neurosci. 4(10), 840 (2003)
5. Coca-Vila, I.: Self-driving cars in dilemmatic situations: an approach based on the theory of justification in criminal law. Crim. Law Philos. 12(1), 59–82 (2018)
6. Danaher, J.: Should we create artificial moral agents? A critical analysis. Philosophical Disquisitions (2019)
7. Faulhaber, A.K., Dittmer, A., Blind, F., Wächter, M.A., Timm, S., Sütfeld, L.R., König, P.: Human decisions in moral dilemmas are largely described by utilitarianism: virtual car driving study provides guidelines for autonomous driving vehicles. Sci. Eng. Ethics 25, 399–418 (2019)
8. Grasso, G., Lucifora, C., Perconti, P., Plebe, A.: Evaluating mentalization during driving. In: Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS 2019), pp. 536–541 (2019)
9. Himmelreich, J.: Never mind the trolley: the ethics of autonomous vehicles in mundane situations. Ethical Theory Moral Pract. 21, 669–684 (2018)
10. Jack, A., Roepstorff, A.: Trusting the subject? Part 2 (2004)
11. Li, J., Cheng, H., Guo, H., Qiu, S.: Survey on artificial intelligence for vehicles. Automot. Innov. 1(1), 2–14 (2018)
12. Maxmen, A.: Self-driving car dilemmas reveal that moral choices are not universal. Nature 562, 469–470 (2018)
13. Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., Grafman, J.: The neural basis of human moral cognition. Nat. Rev. Neurosci. 6(10), 799 (2005)
14. Nyholm, S., Smids, J.: The ethics of accident-algorithms for self-driving cars: an applied trolley problem? Ethical Theory Moral Pract. 19, 1275–1289 (2016)
15. Takács, Á., Drexler, D.A., Galambos, P., Rudas, I.J., Haidegger, T.: Assessment and standardization of autonomous vehicles. In: 2018 IEEE 22nd International Conference on Intelligent Engineering Systems (INES), pp. 000185–000192, June 2018
16. Wallach, W., Allen, C.: Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford (2008)
The Future of User Experience Design in the Interior of Autonomous Car Driven by AI

Laura Giraldi
Department Dida, University of Florence, Florence, Italy
[email protected]
Abstract. Today it is not easy to imagine the future of user experience (UX) design in the world of Artificial Intelligence (AI). With reference to the sector of autonomous vehicles, the paper explores the changes that artificial intelligence will bring to the innovative sector of the self-driving car. Following these transformations, car interiors and passenger experiences will become very different from the current ones. The aim of the research is, starting from the needs of different kinds of passengers (a Human Centered Design approach), to identify essential factors for designing autonomous car interiors and, in particular, for designing innovative interfaces that provide better communication with passengers and a high-quality living experience for different kinds of users.

Keywords: Human factors · Autonomous car design · Product design · AI · UX/UI · Communication
1 Introduction: Context of Reference

The availability of ubiquitous network access to shared data (now called big data) through different kinds of devices, such as smartphones, tablets and computers, together with the diffusion of the IoT (Internet of Things), is bringing the application of Artificial Intelligence (AI) into many areas of our daily activities. Besides, the imminent arrival of the new wireless broadband standard 5G in a large number of popular products of daily use will allow access to AI and data analytics on the move. AI is drastically changing our world and our habits. The application of these technologies to the automotive business, as a key building block for the design of the autonomous car, generates great interest as a way to solve transportation problems like road traffic and to address safety issues. Today it is not easy to imagine the future of User Experience (UX) and User Interaction (UI) design in the world of Artificial Intelligence (AI). It is a time when the human-AI connection is so deep that some experts say there will be "no interface." Currently, UX/UI often depends on an interface with screens that are often inadequate and unfriendly.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 46–51, 2020. https://doi.org/10.1007/978-3-030-39512-4_8

Specialists in transportation technologies, software and telecommunications engineering are working to release autonomous vehicles in the very near future, but this fact is
not sufficient for their success and diffusion, because it is necessary to understand whether users are ready to use this kind of vehicle. For this reason their opinions, expectations, perceived benefits and needs are very important. Indeed, the success of this technology will depend largely on public opinion and will be strictly related to society's benefits and concerns, and to the level of adoption by the people. Studies dedicated to the interaction of human beings with the self-driving car demonstrate that the above factors change in relation to important differences among users: gender, age range, level of education, culture, employment status, etc. [1]. Until now, in general, to improve user engagement in their products, UX designers have turned to tools and metrics such as usability tests, heat maps and usage data. By using AI, it will be possible to observe users' behaviours at 360° and provide tips on how to improve their user experience, eventually leading to a better and more effective one [2]. AI could be used to design according to each user's needs (practical and emotional), based on the analysis of the collected data. All this is achieved through the application of machine and deep learning algorithms that combine large data sets to make inferences. Additionally, these systems can learn from the data and adjust their behaviour accordingly, in real time. This revolution is currently ongoing, and some popular examples are the Siri and Alexa digital personal assistants and Netflix's highly predictive algorithms, which took the market by storm. As regards the automotive market, an example can be found in Toyota's artificial intelligence buddy named Yui, an AI personality embedded into the vehicle's architecture, able to help users navigate and communicate, and even to contribute to enhancing their conversations. Today, it is globally recognized that technology will permeate the car of the future.
However, in our view, this new kind of vehicle, instead of being a cold and weird flying robot car, will have to go in completely the opposite direction. Although definitely futuristic, it will have a friendly and welcoming demeanor. The focus will be on the importance of the human being who is using it. Indeed, a car digital assistant like Yui has to support driver and passengers from the moment they approach the vehicle. For example, the rear of the vehicle can display messages about upcoming turns or warn about a potential hazard, and the front of the vehicle could communicate whether the car is in automated or manual drive. In the future, human beings will increasingly have to answer the questions of what the relationship will be between the vehicles of the future and the people who ride in them, and whether technology can be warm and friendly, engaging, and immersive. More and more, cars have become our home on wheels. We connect with our cars, and we build an emotional relationship. What was once an unspoken bond will now progress on grounds of mutual affection.
2 The Aim of the Research

Nowadays there is a lot of research on autonomous cars and the effects of this possible innovation on people's daily lives. At first these studies mostly examined technical and security aspects, and many engineers are working hard to solve the related problems, also involving other disciplines. Mostly, the product and communication
designers' point of view focuses on studying how the car could change in terms of morphology, interaction and user experience. As said above, given the possibilities offered by new technologies, the new autonomous vehicles will probably be so different from previous ones that it could be very hard to recognize them in relation to the archetype of the car in the collective imagination. Consequently, both the exterior and the interior could change completely in comparison with current vehicles. The present work focuses on the possible changes of AI-driven autonomous cars in order to consider possible strengths and weaknesses in designing new interiors, also according to users' needs and behaviours, so as to introduce innovations that are understandable and easily usable by people, generating a pleasant UX. The aim of the paper is to explore the changes that artificial intelligence will bring to the innovative sector of the self-driving car in order to design autonomous car interiors. In particular, the intention is to find factors useful for designing innovative interfaces for better communication with passengers and a high-quality living experience for different kinds of users.
3 Multidisciplinary Approach

Passengers of a self-driving car have a very different experience compared with passengers in a car with a human driver. The interaction differs according to many factors, so it is important to study their behaviors and needs in both scenarios. Consequently, it is necessary to define a design method to analyze these factors according to the skills, behaviors and inclinations of different kinds of passengers. In this way it is possible to design a system really able to interact with and entertain passengers in a pleasant way, using strategic solutions, while they travel in a self-driving car. The methodological approach to designing the passenger experience refers to Human Centered Design (HCD), in order to study the different passenger factors in detail. As a matter of fact, from birth to old age people, as potential passengers, undergo a continuous development of physical and psychical abilities, which it is necessary to analyze in order to design new solutions. The research adopts a multidisciplinary and holistic approach involving different disciplines. The involved experts are not only engineers and specialists in vehicle technologies but also psychologists, anthropologists, smart material specialists, interaction designers, user experience designers, communication designers and product designers. This method puts users at the center of the entire design process [2]. At the same time, all the identified factors are to be approved and tested by all the specialists. The contributions of each discipline represent the indispensable basic knowledge necessary to summarize all the features needed to design an appropriate and satisfactory interaction and passenger experience.
4 Analysis for Future Scenarios

Autonomous vehicles, and cars in particular, represent a very important development able to improve our daily lives [3]. In particular, this kind of "car" may help different kinds of people with mobility, offering them different experiences according to their needs and occasions of use. However, human behavior in self-driving cars is not linear and is sometimes unforeseeable; users' perceptions of service experiences have always been important to the success of a product/service. The relationship between quality perceptions and satisfaction is key to the acceptance of innovation by users, but this relation can change according to the kind of user and his/her gender. The factors able to influence the satisfaction of all users (both male and female) are the tangibility, reliability and responsiveness of a product/service [4]. According to recent studies, passengers prefer to interact with the car using an agent interlocutor: the voice command attracts the highest level of trust, improving the pleasure of their experience [5]. In an autonomous car the interaction between the user and the vehicle is very different. Users inside autonomous cars have free time, because their attention is not devoted to driving. They could use their time to work, to study, to play, to converse, to navigate, to shop online, to organize their trip, and so on. It is possible to say that, in public autonomously driven cars, such as the taxis of the near future, users' needs depend on a series of elements according to the kind of users (i.e. children, older people, people with disabilities, ...), their habits and culture, and the aim of the trip (a tourist trip, a business one, ...). Following these factors and specific user needs, the interior could change, adjusting in real time to offer different living solutions and giving personalized information [6].
To design the interior of an AV it is essential to know the necessities of different users, but this is not sufficient, because they can change according to the occasion of use, the context, and finally the travel time. Recent research on this issue underlines that in six years cars will be smarter and more functional, with high-efficiency engines and lighter materials [7]. Due to the diffusion of the fifth generation network (5G) there will be a real revolution in connectivity. This network will allow the car to communicate in real time with the surrounding environment, making increasingly advanced autonomous driving systems possible. It is possible to foresee that future self-driving cars will have swivelling, adjustable seats to create an open space inside the car, advanced head-up display screens, and biometric systems to interact with and to open and lock the vehicle safely. Moreover, commands will be activated by gestures and voice control, the smart centre of the car's infotainment system will be able to connect to smart public and private sensors and devices, and of course the on-board computer will be the only driver of the car itself. The whole cycle of user travel, from the call of the autonomous car to the final destination, could be characterized by the following main actions:

1. The user calls the autonomous car, communicating his/her destination. This action is performed using the relative app on a smartphone.
2. The autonomous car app communicates the time of arrival and the price. (These two points already happen today when calling, for instance, an Uber taxi service.)
3. The autonomous car app asks the passengers for the information necessary to customize the interior of the autonomous car according to:
– number of passengers;
– kind of passengers (gender; adults, older people, younger people, children, babies; whether able-bodied or kind of disability; whether there are animals – pets or other kinds);
– kind of personal effects (computer, shopping bags, ...);
– kind of luggage (trolley, bag, stroller, baby car seat, ...);
– aim and time of travel (business, tourism, free time, personal necessities, ...).

Analysing these data, the system proposes different solutions, also asking the passengers specific questions. Then the autonomous car customizes its interior according to the data and users' needs, before taking users on board. The use of autonomous vehicles introduces interesting changes in human habits and behaviours, giving new opportunities to live innovative experiences during a trip. In fact, the absence of the human driver allows all the passengers to carry out different activities, saving their time. For the most part, user interaction is via voice command, and a virtual "assistant" is at their disposal to satisfy passengers' needs, giving information using, for instance, augmented reality and smart screens. Thanks to AI the vehicle becomes friendly and very personal, even if it is a shared vehicle. According to Mike Ramsey, research director of Automotive and Smart Mobility, once on board passengers communicate with the car using voice commands, and the car will communicate with the roads, other cars and infrastructure, not only to calculate the best route for the passenger but also for any other passengers it may "collect" during the journey. Once the passengers have been taken to their destination, the car will automatically withdraw to the nearest charging station, waiting to be called to bring passengers back or to continue its journey to take other passengers.
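The request-and-customize flow described in steps 1–3 can be sketched as a small data model: the app collects passenger data and the car proposes an interior configuration before pickup. All field names and the configuration rules below are illustrative assumptions; the paper does not specify a data format.

```python
# Hypothetical sketch of the trip-customization step. Field names and
# rules are illustrative assumptions, not a specification from the paper.

from dataclasses import dataclass, field

@dataclass
class TripRequest:
    destination: str
    passengers: int
    passenger_kinds: list = field(default_factory=list)  # e.g. "adult", "child", "wheelchair"
    luggage: list = field(default_factory=list)          # e.g. "trolley", "stroller"
    purpose: str = "free time"                           # "business", "tourism", ...

def propose_interior(req: TripRequest) -> dict:
    """Map the collected passenger data to an interior configuration."""
    config = {"seats": req.passengers, "layout": "standard"}
    if "child" in req.passenger_kinds:
        config["child_seat"] = True
    if "wheelchair" in req.passenger_kinds:
        config["layout"] = "accessible"
    if req.purpose == "business":
        config["work_surface"] = True
    if "stroller" in req.luggage:
        config["extra_storage"] = True
    return config

req = TripRequest("station", passengers=2,
                  passenger_kinds=["adult", "child"], luggage=["stroller"])
print(propose_interior(req))
# {'seats': 2, 'layout': 'standard', 'child_seat': True, 'extra_storage': True}
```

In a deployed system such rules would presumably be learned and refined from usage data rather than hand-coded, in line with the adaptive AI loop described below.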
We suppose that the future autonomous car will be not only a car but a partner: it will be able to connect with the passengers on a level that current UX does not reach, and by using its functions over and over, users will quickly end up establishing an interdependent relationship with the system. In this way the AI's deep learning algorithms can, in a continuous cycle, collect data and use them to learn and adapt the human interface to the real needs and behaviour of the passengers. Users will not be aware of it, and before they know it, they will be deeply connected. AI could become a complementary method to help designers in designing products and services with user interfaces able to involve the users' senses, satisfying material and intangible needs and emotions. The success of self-driving cars depends on users' opinions about the benefits and concerns of these technologies, and on their use. These findings can help support the design and development of vehicle agent-based voice interfaces to enhance trust and user experience in autonomous cars.

Acknowledgments. The research was carried out thanks to the consulting of Gianluca Mando’, Head of Research, Technology & Innovation at THALES ITALIA S.P.A.
References

1. Marti-Belda, A., Boso, P., Lijarci, I.: Beliefs and expectations of driving learners about autonomous driving. ToTS, Palacky University in Olomouc, 10(X) (2019)
2. Norman, D.A.: Emotional Design. Apogeo, Milano (2013)
3. Gebresenbet, Bayyou, D.: Artificially intelligent self-driving vehicle technologies – benefits and challenges. Int. J. Emerg. Technol. Comput. Sci. Electron. 26(3), 5–13 (2019)
4. Mokhlis, S.: The influence of service quality on satisfaction: a gender comparison. Public Adm. Res. 1(1), 103 (2012)
5. Large, D.R., Harrington, K.J., Burnett, G.E., Bennet, P.: To please in a pod: employing an anthropomorphic agent-interlocutor to enhance trust and user experience in an autonomous, self-driving vehicle. In: Automotive User Interfaces and Interactive Vehicular Applications Conference, Utrecht, Netherlands, pp. 49–59 (2019)
6. Cianfanelli, E., Tufarelli, M.: Smart vehicles: a design contribution for the changing urban mobility. In: Cumulus Conference Proceedings Wuxi, DTDO, Diffused Transition and Design Opportunities (2018)
7. Keeney, T.: Mobility as a service: why self-driving cars could change everything. ARK Invest Research (2017)
Measuring Driver Discomfort in Autonomous Vehicles

Dario Niermann and Andreas Lüdtke
OFFIS e.V., Human Centered Design, Escherweg 2, 26121 Oldenburg, Germany
{Dario.Niermann,Andreas.Luedtke}@offis.de
Abstract. Autonomous driving is becoming more common and easily accessible with rapid improvements in technology. Prospective buyers of autonomous vehicles need to adapt to this technology equally rapidly to feel comfortable in them. However, this is not always the case, since taking away control from the user often correlates with loss of comfort. Detecting uncomfortable and stressful situations while driving could improve driving quality and the overall acceptance of autonomous vehicles through adaptation of driving style, interface and other methods. In this paper, we test a range of methods that measure the discomfort of a user of an autonomous vehicle in real time. We propose a portable set of sensors that measure heart rate, skin conductance, sitting position, g-forces and subjective discomfort. Preliminary results are examined and next steps are discussed.

Keywords: Autonomous driving · Discomfort · Human factors · Human-machine interaction · Acceptance · On-the-road
1 Introduction

We start with the hypothesis that drivers of (autonomous) vehicles have moments of stress or discomfort while driving through traffic, due to ever-occurring slightly dangerous or unpredictable traffic conditions. Especially in autonomous vehicles (AVs), stress might occur more frequently, since loss of direct control often reduces trust in the system. Therefore, it is desirable to develop methods that lower discomfort and stress, thus increasing the acceptance of AVs. Besides static methods, like always driving slowly, detecting the exact situations in which discomfort or stress builds up is an approach that individualizes to the needs of the driver and adapts only when adaptation is needed. Measuring uncomfortable or stressful situations can be done using two different streams of information: physiological indicators of the driver and information from the surrounding traffic. A combination of both is a promising way to evaluate these situations. Since traffic information will necessarily be recorded and interpreted by AVs, we investigate the usefulness of physiological sensor data while driving through diverse traffic conditions. A more precise definition of 'stress', and of what precisely we measure, is given in Sect. 2. In the following, we explain the overall concept of our idea.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 52–58, 2020. https://doi.org/10.1007/978-3-030-39512-4_9
The concept of combining the above-mentioned streams of information and measuring stressful situations is displayed in Fig. 1. On the left are multiple sensors that measure physiological responses (see Sects. 2 and 3.1). These are combined in real time and provide input to a classification model. This model analyses the given sensor values and outputs a likelihood of how uncomfortable or stressed the user currently is. The traffic context is the current situation of the vehicle and its surroundings; it covers the current driving style, maneuver, road and weather conditions, time of day and so on, and should be handled by the board computer of the AV. The maneuver model combines the outputs of the stress classifier and the traffic context and assigns the stress likelihood to the current maneuver. For example, if the vehicle is currently overtaking another vehicle on a wet road in the dark, and the user feels highly stressed at the same moment, the maneuver model labels and stores the current maneuver as high-stress. If necessary, the maneuver model can collect multiple equal maneuvers and compute an average stress likelihood. Once the maneuver model has a precise measure of the stress likelihood for the currently planned maneuver, the maneuver can be adapted towards a more relaxing one (Fig. 1).
Fig. 1. Process chart of our concept to reduce stress. The grey marked nodes are our focus in the paper. Greyed out parts are only necessary for training of the model and can later be removed.
Training the classification model is done using the subjective stress self-reported while driving (see Sect. 2). This self-reported stress acts as ground truth and allows supervised training strategies for many machine-learning algorithms.
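The maneuver-model stage described above can be sketched as a running per-maneuver average of the classifier's output. This is an illustrative toy implementation under our own naming assumptions; the paper does not specify one.

```python
# Sketch of the maneuver model: it receives a stress likelihood (0..1)
# from the classifier plus the maneuver from the traffic context, and
# keeps a running average per maneuver type. Names are assumptions.

from collections import defaultdict

class ManeuverModel:
    def __init__(self):
        self._sum = defaultdict(float)   # accumulated stress per maneuver
        self._n = defaultdict(int)       # number of observations

    def record(self, maneuver: str, stress: float) -> None:
        """Assign the classifier's stress likelihood to a maneuver."""
        self._sum[maneuver] += stress
        self._n[maneuver] += 1

    def stress_likelihood(self, maneuver: str) -> float:
        """Average stress observed for this maneuver (0.0 if unseen)."""
        if self._n[maneuver] == 0:
            return 0.0
        return self._sum[maneuver] / self._n[maneuver]

    def should_adapt(self, maneuver: str, threshold: float = 0.5) -> bool:
        """Adapt the planned maneuver toward a more relaxing one if the
        accumulated stress likelihood exceeds the threshold."""
        return self.stress_likelihood(maneuver) > threshold

model = ManeuverModel()
model.record("overtake_wet_dark", 0.9)
model.record("overtake_wet_dark", 0.7)
print(model.should_adapt("overtake_wet_dark"))  # True (mean 0.8 > 0.5)
```

Averaging over repeated observations of the same maneuver type mirrors the paper's note that multiple equal maneuvers can be collected before deciding to adapt.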
2 Stress and Physiological Sensors

'Stress' is a difficult term that is not well defined. We use it as a collective term for discomfort, risk perception, tension, anxiety and so on: hence, every situation, arising from the traffic situation or the driving style of the vehicle, that induces negative physiological and psychological effects. We collect these situations simply by letting the test subjects tell us when they feel such negative emotions, by pressing a continuously variable handset button (see Sect. 3.1). This way, we capture situations that induce subjective 'stress' and map the intensity of the feeling to a value between zero (no stress) and one (high stress). From now on, we refer to these subjective feelings as 'stress' to make the paper more easily readable.
Measuring stress is a popular objective in many scientific fields. Different methods and possible bio-signals have emerged over the last decades, most notably measuring skin conductance (also called galvanic skin response, GSR) and heart rate (HR). Coming from the medical field, [1] proposed that GSR is an indicator of stress. Later, GSR signals were used to develop a stress detector with a reported success rate of 78% [2]. Various other signals have also been analyzed: [4] used EEG, ECG and EMG combined with GSR, and [5] used skin temperature. Another approach analyzed facial expressions to detect stress [7]. In the above-mentioned studies, most measurements were done in very controlled environments, and therefore the methods used will most likely not yield good results in moving vehicles. A more closely related study by Healey and Picard [6] measured ECG, EMG, GSR, heart rate and respiration rate while the test subjects were driving through traffic. They assigned different traffic conditions to different stress levels (e.g. city driving = high stress) with a resolution of five-minute intervals and reported 97% accuracy for stress level classification. However, they reported that this high performance is only possible over these five-minute intervals and not at real-time resolution. In addition, the stress levels of the subjects were assigned by the researchers, not the subjects themselves, so there could be a strong mismatch between the assigned stress and real physiological stress. They also analyzed real-time correlations by assigning stress levels every second based on a video recording of the drive. However, they did not use this data to create a classification model. One result of their analysis showed that GSR and HR correlated best with the stress levels of the driver, also for the real-time data.
Since HR and GSR measurements have a good reported correlation with stress levels overall, and the corresponding sensors are also very unobtrusive, we decided to use these two bio-signals for our measurements. We found no published research project that has accomplished real-time stress detection in real traffic conditions.
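To make the classification idea concrete, a minimal real-time classifier over HR and GSR can be trained on short windows labeled by the handset value. The nearest-centroid model below is a deliberately simple toy, not the (unspecified) model of the paper; all names, window sizes, and sensor values are assumptions.

```python
# Toy sketch: nearest-centroid stress classifier over HR/GSR windows,
# labeled by the self-reported handset value. Illustrative only.

import math

def features(hr_window, gsr_window):
    """Mean HR and mean GSR over a short sliding window."""
    return (sum(hr_window) / len(hr_window),
            sum(gsr_window) / len(gsr_window))

def train_centroids(samples):
    """samples: list of ((hr_window, gsr_window), stressed: bool)."""
    groups = {True: [], False: []}
    for (hr, gsr), label in samples:
        groups[label].append(features(hr, gsr))
    return {label: tuple(sum(c) / len(c) for c in zip(*feats))
            for label, feats in groups.items() if feats}

def stress_likelihood(centroids, hr_window, gsr_window):
    """Closer to the 'stressed' centroid -> likelihood nearer 1."""
    f = features(hr_window, gsr_window)
    d = {label: math.dist(f, c) for label, c in centroids.items()}
    return d[False] / (d[True] + d[False] + 1e-9)

# Toy data: stressed windows show higher HR and lower GSR (cf. Fig. 3a).
train = [(([70, 72], [410, 400]), False), (([95, 97], [300, 290]), True)]
cents = train_centroids(train)
print(stress_likelihood(cents, [94, 96], [305, 295]) > 0.5)  # True
```

A real system would need per-subject normalization of both signals, since resting HR and baseline skin conductance vary strongly between individuals.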
3 Methodology

3.1 Hardware Setup
To measure the stress level of the subjects at real-time resolution, we use a handset that is pressed during stressful or uncomfortable situations. With this method we measure real-time, self-reported emotions and do not need the (possibly wrong) values to be assigned by someone else. The handset can be pressed with continuously varying force to reflect the amount of stress or discomfort the subject is experiencing (light press = slight discomfort, strong press = high stress). In addition to the above sensors, we also implemented pressure sensors in a portable seat mat with three pressure zones (see Fig. 2). With these, we measure the positional changes of the subject's body and the g-forces acting on them, which could also correlate with the feeling of discomfort or stress. They work by attaching long silicone tubes to piezoelectric pressure sensors at one end. The other end of each tube is sealed. If a tube is compressed somewhere along its length, the air pressure presses against the
pressure sensor. Therefore, the tube defines an area over which pressure is measured. For the P3 areas, two tubes were connected to one sensor, so that the sensor collects data on both sides of the backrest. Areas P1 and P2 use folded tubes to cover a broader area. All sensors connect to a Raspberry Pi 3 Model B via wired connections. The GSR and (optical) HR sensors were bought from Seeedstudio and connect to an Arduino Nano, which forwards the signals via USB to the Raspberry Pi. The handset is a common handset from Carrera, also connected to the Arduino Nano. The pressure sensors are connected to the Raspberry Pi via a custom-made circuit board developed by OFFIS Health. The setup is shown in Fig. 2.
Fig. 2. Display of the portable hardware on an office chair. The blue-marked pressure areas show where the silicone tubes are located.
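On the Raspberry Pi side, the values forwarded by the Arduino Nano have to be parsed from the serial stream. The line format `"GSR:<raw>,HR:<bpm>,HS:<0..1>"` below is purely our assumption for illustration; the actual wire format is not given in the paper.

```python
# Hypothetical parser for the sensor stream forwarded over USB serial.
# The "key:value" CSV line format is an assumed example, not the real one.

def parse_sample(line: str) -> dict:
    """Parse one key:value line into numeric sensor readings."""
    sample = {}
    for part in line.strip().split(","):
        key, value = part.split(":")
        sample[key] = float(value)
    return sample

print(parse_sample("GSR:512,HR:72.5,HS:0.30"))
# {'GSR': 512.0, 'HR': 72.5, 'HS': 0.3}
```

In practice the parsed samples would be timestamped on arrival so they can be aligned with the pressure-mat readings sampled directly by the Pi.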
3.2 Study Setup
While driving, the handset is held in one hand of the test subject, and the GSR sensor is strapped to two fingers of the other hand to remove any correlations that could occur because of finger movement. The HR sensor is clipped to the ear and measures the blood flow optically. The pressure sensors are attached under a seat mat and are not noticeable. The test subject sits in the passenger seat and is driven through traffic by a driver. The test subject should concentrate on the traffic and cannot control the car, and therefore has an experience similar to that in an autonomously driven vehicle. An additional person on the back seat monitors the sensor data, can talk to the test subjects and can answer questions.
1 2 3 4
GSR Sensor from Seeedstudio, http://wiki.seeedstudio.com/Grove-GSR_Sensor. HR Sensor from Seeedstudio, http://wiki.seeedstudio.com/Grove-Ear-clip_Heart_Rate_Sensor/. Seeedstudio Grove system, http://wiki.seeedstudio.com/Grove_System. Developed by OFFIS Health.
D. Niermann and A. Lüdtke
The traffic is uncontrolled, which is not a problem but rather an advantage: many different situations lead to stress and create a diverse set of measurements, which can also improve machine-learning models.
4 Preliminary Results

To test our system and get first impressions of the measured data, we measured three internal colleagues, each over a one-hour tour through city, highway and rural roads. The analysis of the collected data is presented as a preliminary result in this chapter. First, we want to look at the measured self-reported stress situations to test our initial hypothesis. Over one hour of normal driving, all test subjects reported multiple situations in which they felt at least uncomfortable. For example, a stress value of over 0.5 was reached between 10 and 15 times in this hour. Smaller values occurred roughly every 2 min. This corresponds well with our hypothesis. The duration of a stressful situation is typically very short, only a few seconds, rarely surpassing four seconds. This emphasizes how important a quick classification is. However, since the physiological responses need time to relax again, the handset value was processed afterwards so that it decays exponentially, creating larger time windows that fit physiological relaxation times. While the test subjects report stress, the physiological sensors need to measure some abnormalities so that automatic classification is possible. If we look only at moments of medium to high stress, defined by handset values higher than 0.5, we can recognize abnormalities and average trends in the measurements. In Fig. 3(a), such an analysis is displayed for one test subject. Trends such as a rising heart rate and a falling GSR can be seen clearly.
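The exponential decay applied to the handset value can be sketched as follows. The sample interval and relaxation time constant here are assumptions for illustration, not the values used in the study.

```python
import numpy as np

def decay_handset(raw, dt=0.1, tau=2.0):
    """Post-process the raw handset signal so that each press decays
    exponentially instead of dropping to zero immediately.
    dt: sample interval in seconds; tau: relaxation time constant in
    seconds (both assumed here)."""
    factor = np.exp(-dt / tau)
    level = 0.0
    out = np.empty(len(raw))
    for i, v in enumerate(raw):
        # Keep whichever is larger: the new press or the decaying tail.
        level = max(float(v), level * factor)
        out[i] = level
    return out
```

A short press of value 1.0 thus leaves a decaying tail over the following samples, widening the time window for matching physiological responses.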
Fig. 3. (a) Time series plots of multiple sensors while the test subjects reported medium to high stress (see handset value higher than 0.5). (b) Heart rate distributions over three test subjects divided into relaxed and stressed (handset values > 0.5) intervals.
This makes clear that the subjective self-reported stress is indeed correlated with some physiological responses.
However, Fig. 3(b) also makes clear that not all test subjects react equally and that the overall distributions of values while stressed and relaxed are very similar. We need to investigate the differences between users in further studies. As a small preview, we trained a neural network to classify stress based on our sensor data. This neural network model, with simple sigmoid units and one hidden layer, provided the best results in comparison with simpler models. We have not yet developed a good performance measure, since it is very difficult to detect and compare peaks reasonably, especially with maneuver-based knowledge. To show the performance of our model, we overlaid the measured, self-reported stress with the model prediction in Fig. 4. Here, it is visible that the model predicts most of the peaks, but also misses around 20%. This is a randomly chosen excerpt from the drive of test subject zero. Other subjects were more difficult to model, with a peak-detection success rate of around 50%. As mentioned, this was done without any information about the traffic, so the classification could improve when this information is also considered.
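As an illustration of the model class (one hidden layer of sigmoid units), the following is a minimal NumPy sketch trained on synthetic data. The features, labels and hyperparameters are placeholders, not the study's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-ins for windowed sensor features (e.g. heart rate,
# GSR, seat pressure) and binary stress labels.
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

# One hidden layer of sigmoid units, trained by full-batch gradient
# descent on the cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)             # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()     # predicted stress probability
    g_out = ((p - y) / len(y))[:, None]  # gradient w.r.t. output logit
    g_h = (g_out @ W2.T) * h * (1 - h)   # backprop through hidden layer
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = ((p > 0.5) == (y > 0.5)).mean()
```

Thresholding the output probability at 0.5 yields the binary stress classification compared against the (decayed) handset signal.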
Fig. 4. Comparison of the self-reported stress (‘Desired’) and the model prediction over time. Both curves should be identical for a perfect model.
5 Discussion and Outlook

In future work we will proceed with a thorough study, including at least 10 participants of different ages and driving experience. This will ensure a diverse data set and give insights into the differences between users. We will analyze the differences between test subjects and the performance of multiple machine-learning models. Another step will be to analyze how well a general model for all persons can be used and whether a person-specific model is necessary. In addition, we will investigate the usefulness of every sensor used and which ones correlate most with self-reported stress.
Human-Centered Design for Automotive Styling Design: Conceptualizing a Car from QFD to ViP

Gian Andrea Giacobone and Giuseppe Mincolelli

Department of Architecture, University of Ferrara, Via Ghiara 36, 44121 Ferrara, Italy
{gianandrea.giacobone,giuseppe.mincolelli}@unife.it
Abstract. Designing a vehicle is a harmonic balance between engineering functions, technologies, and styling activities. However, the design process is laborious because the convergence between styling and engineering requires constant compromises. Using a human-centered design strategy improves the management of the process. Considering this, the paper reports an automotive design process that is part of a multidisciplinary project funded by the Emilia-Romagna Region, Italy. The main goal of the overall project is to design the concept of a hybrid sport vehicle. The peculiarity of this specific research concerns the combination of two different approaches that rely on the importance of human-centered design. In particular, the process initially uses Quality Function Deployment (QFD) and concludes the styling process with Vision in Product Design (ViP). The result converges both outcomes to come up with a novel and integrated concept solution.

Keywords: Human-centered design · Automotive design · Research methods · Quality Function Deployment · Vision in Product
1 Introduction

The vehicle is a very intricate amalgam of technological, functional and styling properties. Its design is the result of holistic thinking, determined by a harmonic balance between engineering functions and designing activities that makes the vehicle itself unique [1]. The entire design process is laborious because the convergence between styling and engineering requires constant compromises [2]. This hard task is generally undertaken by automotive design or automotive styling, namely the discipline concerned with giving a form to the car, encompassing package design and negotiating engineering constraints [3]. Good management of those activities ensures for a company the successful development of the vehicle. However, given the complexity of the automotive product, many difficulties in managing the design may occur during the process. If all aspects of a car are not addressed with proper solutions, they can create a real gap between the level of satisfaction desired by end-users and the development management employed to achieve the requested goals of the company [4].

© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 59–65, 2020. https://doi.org/10.1007/978-3-030-39512-4_10

For these reasons,
developing an optimal strategy in the early stages of automotive design helps a company achieve high customer satisfaction and avoid negative outcomes. One potential and successful strategy is adopting a human-centered design (HCD) methodology, because it is capable of identifying the users' perspective, which can subsequently be used throughout the process to lead the entire project. Every need is immediately analyzed and consequently incorporated into the research as a decision-making tool for designing the expected vehicle. The outcomes derived from this practice can improve not only the engineering properties of the car, but also the styling aspects, in terms of appearance, functions, and operation [5]. Together, all these factors can match user desires, determining at the same time the acceptance of the total product experience.
2 Objective and Methodology

This research is part of a wider multidisciplinary project called "Automotive Academy: learning by doing project for innovation engineering automotive", financed by the Emilia-Romagna Region, Italy. The overall project aims to develop several engineering and design solutions in different expert areas that can be applied to the future concept of a sustainable sport car. Regarding this specific research, the main objective is to give a proper vision and shape to the overall project by defining a successful design strategy for developing the concept vehicle. The paper proposes the development of the conceptual design phase using a human-centered methodology focused on two different approaches: Quality Function Deployment and Vision in Product Design. Both models were selected because they are perfectly suitable in automotive design for structuring idea generation throughout the styling process. They also support the design team in taking significant decisions for automotive product development. Quality Function Deployment (QFD) is a human-centered and inclusive tool that is able to orient the product development process toward the real needs of end-users [6, 7]. The core of QFD is a bi-dimensional mathematical matrix, called the House of Quality. It graphically represents the correlation between the most important needs of users and the product characteristics considered most significant for developing suitable solutions to satisfy customer demand [8]. The result is a clear assessment of the most important product specifications, which are used as guidelines both to design the final product and to achieve the quality desired by users. Vision in Product Design (ViP) is an innovative strategic method that enables innovation through envisioning new ideas for the future [9].
Instead of simply solving present-day problems, ViP tries to discover new design opportunities that can be realized in the future. It establishes the raison d'être of the final design, namely the intrinsic and anthropological meaning of the artefact, which justifies its existence in the world [10]. To do this, the ViP model starts from a careful understanding of the current evolution of socio-cultural models [11], in order to build a strong basis for envisioning the future context. ViP is a step-by-step design process divided into two areas: the left side represents the process of "deconstruction" and analyses the current state of the art; the right side is connected to the process of "designing", in which the vision is realized in eight steps. Each area is divided into three main levels of study, which are
based on product, interaction and context analysis. In the end, the outcomes provided by the vision are represented by a final abstract model called the advanced concept.
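A miniature, purely illustrative House of Quality computation shows how the matrix ranks technical requirements. The needs, weights and relationship values below are invented for the example, not the project's actual data.

```python
import numpy as np

# Hypothetical miniature House of Quality: rows are user needs with
# 1-5 importance weights, columns are candidate technical requirements.
needs = ["quick acceleration", "high stability", "clear interface"]
need_weights = np.array([5, 4, 3])
specs = ["engine mapping", "downforce", "control layout"]

# Relationship strengths, using the conventional 0/1/3/9 scale
# (0 = none, 1 = weak, 3 = medium, 9 = strong).
R = np.array([[9, 1, 0],
              [1, 9, 0],
              [0, 0, 9]])

# Absolute importance of each spec = importance-weighted column sum.
importance = need_weights @ R
ranking = [specs[i] for i in np.argsort(importance)[::-1]]
```

The specs at the top of `ranking` are the ones the styling process would prioritize, mirroring how the four QFD matrices in this project surfaced the most important product specifications.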
3 Activities and Results

In the initial phase of the design process, QFD was used to perform user analysis and interpret the most important needs in their context. Next, ViP envisioned the concept vehicle in the future context in which the expected model will be inserted. In the end, the outcomes of both methods were converged in a third, integrated activity in which the final concept was designed. To facilitate the research, the exploratory design process selected an existing car, the KTM X-Bow, as a case study for developing the final concept. This is because the model corresponds perfectly to the requirements of the project, but also because the user analysis was conducted during a specific race event sponsored by KTM and dedicated to the X-Bow series [12]. From this activity, about 200 needs were gathered and subsequently processed through QFD. To better manage the QFD process, the data were grouped into four systems of the vehicle, considered the areas in which most interactions between users and vehicle occur: (a) vehicle dynamics; (b) powertrain system; (c) passenger package; (d) vehicle interface. About 20–24 needs for each area were selected and processed within a specific QFD matrix. Next, about 20–28 technical requirements for each QFD matrix were chosen in order to find the most important product specifications for satisfying user expectations. The data of each QFD matrix were correlated, revealing the most important product specifications to be included in the styling process for developing the concept vehicle. The conclusions of the QFD assessment are: (a) the powertrain must provide quick acceleration and different engine mappings in order to offer several driving performances depending on the context or purpose (for instance urban or race use).
The powertrain must also produce sound, which suggests maintaining the traditional experience provided by an internal combustion engine; (b) regarding vehicle dynamics, the car must be rigid and aerodynamically permeable and must provide adequate downforce to guarantee high stability and performance. The vehicle must maintain a low height, ground clearance and center of mass to increase handling; (c) regarding the interior package, the vehicle must guarantee safety both at belt and head height so that airflow cannot stress the driver or negatively affect driving performance. The cockpit must also provide a non-intrusive steering wheel hub, in order to allow comfortable ingress/egress for all drivers; (d) concerning the driving interface, the vehicle must provide a usable configuration of its driving controls. The interface must be clear and understandable, showing only the essential information for the context of use. The controls must be immediately recognizable and reachable by hand. After the QFD process, the ViP model envisioned the advanced concept in a future context. KTM's car was again used as a case study for creating the final solution. Several context factors were gathered to define the vision and subsequently grouped in a four-square matrix. Each square indicated a statement, namely a potential direction in which the advanced concept could be designed. Only one of them was chosen: "I want
people to experience mobility that expresses themselves as an act of resistance". The sentence operated as the main goal, taking a position in the world of tomorrow. This choice affected the entire design of the advanced concept [10]. In this case, the statement led the process in defining the characters of the expected vehicle, using analogies to better explain the purpose of the statement. The selected analogy was "It feels like making graffiti on a wall", which delineated two main product characters: secretive and adversative. These properties defined all features of the advanced concept in terms of use, shape and technology (Fig. 1).
Fig. 1. Visual representation of the advanced concept developed with ViP process.
The vision developed an extreme and interesting scenario in which a hidden and silent car can be driven only at night or in darkness, as resistance against the daylight routine of an efficient and technological mobility. For this reason, the secretive character delineated a fully electric, four-wheel-drive vehicle. The ingress/egress is located at the rear of the car and is hidden by the coachwork. This configuration allows the vehicle to eliminate the traditional outlines of the doors and hence obscures the vehicle's driving orientation. The cockpit also conceals the driver's identity by eliminating the windshield and every window from the car. Consequently, the interface proposes to drive the vehicle only through digital screens, improving the driver's perception through night vision. The adversative character produced a three-seat configuration. The driver's position is in the middle, while the other seats are slightly behind it, so that the rebel property gives the pilot a more active role in the act of driving. This experience is also accentuated by maintaining the traditional steering hub, which transfers every behavior of the vehicle directly to the driver's hands. Once the two methods were performed, it was decided to converge the outcomes of both methods into a single, unique solution. The final result was an integrated vehicle, the combination of the two approaches. ViP inserted the specific personality of the vision into the final solution. For instance, the secretive character influences the aerodynamics of the vehicle. Two air intakes at the front deviate the airflow, which is conveyed along the lateral sides of the car to cool down both engine and batteries, without deliberately showing the outlines on the
coachwork. Instead, data from QFD reduced the frontal-section area and improved its permeability, reducing aerodynamic drag while increasing downforce. Again, since the advanced concept was fully electric, the same secretive personality was mediated with QFD into a parallel hybrid system. This particular configuration equips the vehicle with a downsized engine and three lightweight electric motors – two at the front and one integrated into the rear transmission as a KERS (Kinetic Energy Recovery System) – in order to offer different driving settings and improve fuel consumption and carbon emissions. For example, the fully electric mode permits driving a silent car (according to the vision), while the sport mode allows drivers to experience the sound of the engine without losing the feeling of traditional driving (Fig. 2).
Fig. 2. On the left the integrated solution and its active seating. On the right the parallel hybrid system (with the rear mid-engine layout) and the aerodynamic set up.
ViP and QFD data were also combined for the passenger package and its corresponding driving interface. The adversative character configured an active driving seat, which can slide from the lateral location to the central racing position. This dynamic configuration guarantees much more activism in driving by providing two different experiences. Instead, QFD served to set the height of the lateral sides at shoulder height, in order to improve safety and decrease driver exposure to airflow. Concluding, the adversative character equipped the vehicle with a racing steering wheel, whose main controls are reachable by hand, as expected from the QFD outputs.
4 Conclusions

In conclusion, QFD and ViP supported each other in producing many design outputs, which were consequently used as guidelines to define every technical and aesthetic property of the final vehicle. The combined approach produced an unprecedented design strategy, which supported automotive design in coming up with a novel and inspiring concept vehicle. ViP equips the vehicle with a character, which delineates the styling properties of the car throughout the process, while QFD qualitatively matches user desires, underlining many constraints that are used to define the technical specifications of the car according to their wants. This approach is capable of making the design strategy broader as well. The two approaches are based on two different timelines, which together offer new, long-term automotive product management. QFD is a mid-term approach because it investigates the near context starting from user needs. However, people tend to express their wants about the present without pushing their wishes further. Since ViP is a long-term strategy, it supports the design strategy in pushing its boundaries into the future, anticipating coming trends, which are subsequently used to drive every decision of the process. Finally, QFD refines the vision, making the final concept more feasible and tangible.
References
1. Chandra, S.: Evaluation of clay modelling and surfacing cycles from designers perspective. In: Weber, C., Husung, S., Cascini, G., Cantamessa, M., Marjanovic, D., Rotini, F. (eds.) DS 80-5 Proceedings of the 20th International Conference on Engineering Design (ICED 15), Design Methods and Tools - Part 1, Milan, Italy, 27–30 July 2015, vol. 5, pp. 215–224. Lightning Source, Milton Keynes (2015)
2. Macey, S., Wardle, G.: H-Point: The Fundamentals of Car Design & Packaging. Design Studio Press, Culver City (2014)
3. Van Grondelle, E., Brand de Groot, S.: Discovering the meaning of form by exploded sketching. In: Bohemia, E., Buck, L., Eriksen, K., Kovacevic, A., Ovesen, N., Tollestrup, C. (eds.) Proceedings of the 18th International Conference on Engineering and Product Design Education (E&PDE16), Design Education: Collaboration and Cross-Disciplinarity, Aalborg, Denmark, 8–9 September 2016, pp. 374–379. The Design Society, Glasgow (2016)
4. Bhise, V.D.: Automotive Product Development: A Systems Engineering Implementation. CRC Press, Boca Raton (2017). https://doi.org/10.1201/9781315119502
5. Hekkert, P., Mostert, M., Stompff, G.: Dancing with a machine: a case of experience-driven design. In: DPPI 2003 Proceedings of the 2003 International Conference on Designing Pleasurable Products and Interfaces, pp. 114–119. ACM, New York (2003). https://doi.org/10.1145/782896.782925
6. Akao, Y.: Quality Function Deployment (QFD): Integrating Customer Requirements into Product Design. Productivity Press, Portland (1990)
7. Mincolelli, G.: Customer/user centered design. Analisi di un caso applicativo. Maggioli Editore, Santarcangelo (2008)
8. Franceschini, F.: Advanced Quality Function Deployment. St. Lucie Press, Boca Raton (2001)
9. Van Boeijen, A., Daalhuizen, J., Zijlstra, J., Van Der Schoor, R.: Delft Design Guide. BIS Publishers, Amsterdam (2014). https://doi.org/10.1017/CBO9781107415324.004
10. Hekkert, P., Van Dijk, M.: ViP Vision in Design: A Guidebook for Innovators. BIS Publishers, Amsterdam (2011)
11. Verganti, R.: Design, meanings and radical innovation: a meta-model and a research agenda. J. Prod. Innov. Manag. 25, 436–456 (2008). https://doi.org/10.1111/j.1540-5885.2008.00313.x
12. Giacobone, G.A., Mincolelli, G.: Human-centered design and quality function deployment: understanding needs on a multidisciplinary automotive research. In: Di Bucchianico, G. (ed.) Advances in Intelligent Systems and Computing, pp. 57–68. Springer, Heidelberg (2020). https://doi.org/10.1007/978-3-030-20444-0_6
Enriching the User Experience of a Connected Car with Quantified Self

Maurizio Caon1, Marc Demierre1, Omar Abou Khaled1, Elena Mugellini1, and Pierre Delaigue2

1 University of Applied Sciences and Arts Western Switzerland (HES-SO), Fribourg, Switzerland
{maurizio.caon,omar.aboukhaled,elena.mugellini}@hes-so.ch
2 Renault S.A., Boulogne-Billancourt, France
[email protected]
Abstract. This paper presents the R'SENS project, the design of a system leveraging the Internet of Things to connect the dots between the user, her car and her environment through the integration of tools belonging to different domains, such as the connected car, the quantified self and web services. The aim of this study is to test a concept through a prototype, exploring the possibility of enhancing the user experience of a connected car through the opportune and ubiquitous display of relevant information not only about the car status and the environment, but also about the physical status of the driver. The integration of three different user interfaces (i.e., wearable, smartphone and infotainment system) to be used at different moments played a crucial role in the preliminary evaluation of the perceived usefulness and usability of this proof of concept.

Keywords: Personal informatics · Virtual companion · Wearable · Connected car
1 Introduction

The automotive world is currently changing fast, and connected technology has started to be integrated into commercial cars. The possibilities for interaction [1] and for the creation of smart services [2] grow with the sources of data that are available to the user. For instance, when the connected car (i.e., a vehicle that is able to send information about its status to remote servers and to download information from the Internet) meets the quantified self (i.e., the possibility to measure physiological signals about the person's status), new opportunities can arise [3]. In particular, the R'SENS project aimed at exploring driver monitoring and interaction with the vehicle and its environment outside of the driving situation. Its goal was to design and develop an always-on multimodal companion system for the driver, to provide her with complete feedback about the driving conditions. This system provides a moment of self-reflection on the current conditions of (1) her physiological status, (2) the car status and (3) the surrounding area (including weather and traffic).

© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 66–72, 2020. https://doi.org/10.1007/978-3-030-39512-4_11

For instance, if the wearable device records that the user did not sleep much during the night, this information will be shown on one of the
system interfaces to make the user aware of the non-ideal conditions before a trip. Moreover, the wearable device is used as a personal wireless key to open the car's doors, increasing the personal relation with the vehicle and also inducing the user to look at the wearable, which shows a summary of the conditions in the three domains, before entering the car (Fig. 1). This concept and its prototype investigate how the user experience of owning or simply using a car is not limited to driving itself, but involves other activities of daily living. Cars play a crucial role in our current society; therefore, it is important to design their experience in a comprehensive manner that encompasses all the aspects of their users' complex lives, which go way beyond the time actually spent in the vehicle.
Fig. 1. Scenario depicting the concept of the use of a wearable device connected to the R’SENS system.
2 User Experience and System Design

Car applications for smartphones and connected watches are not new [4]. Connected cars already incorporate technology that enables owners to communicate remotely with their vehicle, but none of these solutions collects data from a wearable's body sensors. R'SENS stands apart by delivering real-time feedback concerning the driver, the vehicle and the environment for a seamless experience. Moreover, what makes this technology unique is that it breaks the traditional frontiers by making this information available on different platforms, applications and devices: on the Microsoft Band wearable device for ubiquitous interaction [5], on the Android smartphone for detailed information and on the R-Link infotainment system embedded in the car [6]. Last but not least, R'SENS is truly user-friendly and boasts clear graphics that aim to generate a moment of reflection on the conditions for a safe drive, improving the user experience (Fig. 2). Connected to a smartphone via Bluetooth, the wearable embeds sensors to collect a range of biometric information about the user, including body temperature, heart rate, sleeping time, calories spent, etc. The smartphone then
transmits these data to a cloud-based server. The server is in turn connected to the vehicle, from which it receives data such as tire pressures, remaining fuel or battery charge and service intervals. Completing the triangle, the server receives real-time information about the vehicle’s environment (e.g., weather and traffic). This information is then made available under three menus respectively called “Me,” “My car” and “My world” via the wearable, a smartphone application, or the in-car display (Fig. 2).
Fig. 2. The interfaces of R’SENS; (a) the wearable device showing a summary; (b) the infotainment system of the connected car presenting all the details; the smartphone application with the global view (c), and the detailed “Me” tab (d).
The central point of this system is a wearable device replacing the traditional car key and giving simple feedback in the form of icons or lights, as shown in Fig. 2(a). There is also a smartphone application giving detailed feedback, Fig. 2(c) and (d), and an application running in the car's R-Link infotainment system to give feedback when starting the car, as depicted in Fig. 2(b). The wearable interface has been designed as a convenient interface to be glanced at just before entering the car to check that everything is fine. In case the user wants to read the details of each domain, she can do so via the mobile application or the R-Link interface. The combination of these three modalities offers a complete and integrated experience to the user, allowing her to always be informed about the situation in a natural way. To determine whether the conditions are good for driving, the application gathers and analyzes data from three different sources: the driver (via physiological sensors placed in a wearable device); the car (via Renault R-Link); the driving environment (via external web services). The wearable gives simplified feedback to the user, where each of these three elements has an associated colored icon (green, amber or red), making each icon similar to a traffic light. To get more information, the driver can use the smartphone app (when outside the car or not yet driving). In her car, she can get detailed information on the car's infotainment
system's R'SENS app. The wearable also acts as a car key to lock and unlock the doors, replacing the traditional card key. Figure 3 shows the general architecture of the project. Its main components are the following:

• MS Band: the Microsoft Band, which gathers physiological data, shows the three-icon feedback and gives the user access to the car key features. It is connected to the smartphone via BLE.
• Key module: an Arduino-based modified card key, which receives commands from the smartphone application via BLE and relays them to the car.
• Smartphone application: the R'SENS Android application, which contains the logic to talk to the key module, receives and analyzes the physiological data from the wearable and pushes it to the Collabox platform, sends periodic GPS updates to the Collabox platform, and receives the car and environment data from Collabox.
• R-Link application: an embedded application in the R-Link car infotainment system, which connects to Collabox to push the car diagnostics and to receive the health and environment data. It communicates with Collabox using the MQTT protocol.
• Third-party APIs: third-party web services for weather (Forecast.io), traffic (Bing) and health data (Microsoft Health).
• Collabox (Loop) platform: the Collabox platform acts as a central bridge and storage point to process, store and push data between the different applications. It gets data from the traffic and weather services when a new GPS location is received and acts as a bridge for the Microsoft Health API.
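The per-domain green/amber/red icon logic can be sketched as follows. Both the helper and its thresholds are illustrative assumptions, not the actual R'SENS implementation.

```python
def status_icon(value, amber, red, higher_is_worse=True):
    """Map a raw metric to a traffic-light icon color.
    `amber` and `red` are the thresholds at which the status degrades;
    pass higher_is_worse=False for metrics like hours of sleep, where
    lower values are the problematic ones."""
    if not higher_is_worse:
        # Negate everything so the comparison logic stays the same.
        value, amber, red = -value, -amber, -red
    if value >= red:
        return "red"
    if value >= amber:
        return "amber"
    return "green"

# e.g. a tire-pressure deficit in bar (higher deficit is worse) ...
tire = status_icon(0.3, amber=0.2, red=0.5)
# ... or hours of sleep recorded by the wearable (lower is worse).
sleep = status_icon(5.0, amber=6.0, red=4.0, higher_is_worse=False)
```

Each of the three domains ("Me", "My car", "My world") would aggregate its worst metric into the single icon shown on the wearable.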
Fig. 3. R’SENS System architecture diagram.
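The R-Link-to-Collabox link in the architecture above uses MQTT, but the paper does not detail the messages; the topic name and payload fields below are hypothetical, sketched for illustration only:

```python
import json

# Hypothetical car-diagnostics message for the Collabox bridge; the
# field names and topic are assumptions, the paper only states that
# the R-Link application communicates with Collabox over MQTT.
def make_diag_payload(battery_ok, tire_pressure_ok):
    return json.dumps({
        "domain": "my_car",
        "battery_ok": battery_ok,
        "tire_pressure_ok": tire_pressure_ok,
    })

if __name__ == "__main__":
    # Publishing would use a real broker, e.g. with the paho-mqtt client:
    #   import paho.mqtt.client as mqtt
    #   client = mqtt.Client()
    #   client.connect("collabox.example.com", 1883)
    #   client.publish("rsens/car/diagnostics", make_diag_payload(True, False))
    print(make_diag_payload(True, False))
```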
M. Caon et al.
3 Evaluation

The goal of this first user test is to get initial feedback on the usefulness and usability of the system, and more generally on the ideas of the project. It is also a good moment to receive suggestions for changes or new features. For these tests, the users got the application freshly installed on their device. They completed the first-launch setup wizard and then used the system for a certain duration. The car part of the system was simulated by injecting messages from Loop Triggers, as it was not possible to provide users with a connected car. A few warnings about the car's battery and tire pressure were simulated at different moments to ensure that the status changed. Two types of user tests were done: long tests (more than 16 h, including a night) and short tests (about an hour). After the tests, the users were asked to fill in a computer-based questionnaire (on Google Forms) to give their feedback. This questionnaire had three parts:
1. A System Usability Scale (SUS), which is a standard test used to measure the usability of a system.
2. Some questions about their experience (what warnings they received, whether they noticed any bug/crash, etc.). This is useful to see if they noticed the simulated warnings, and what else happened during their test.
3. Some questions to collect their general feedback on the project and its ideas, what they liked/didn't like, and additional remarks.
In total, 5 people took part in the testing: 3 people did the long test and 2 people did the short test. Each was given a participant number, so it is possible to distinguish between the two groups. Table 1 shows the scoring of the SUS. The results were computed for all users together, and also separately for the long-test and short-test participants to see if there are differences.

Table 1. Results of the SUS questionnaires filled in by the test participants.
Type of test                 SUS scores (0–100)
                             Average   Minimum   Maximum
All (5 participants)         82.0      75        95.0
Long test (3 participants)   85.8      75        95.0
Short test (2 participants)  76.3      75        77.5
The overall usability scores are quite good (a score of 80/100 corresponds to an A grade). This was expected, because the system is relatively simple and little user input is needed outside the initial setup process. The global minimum score is 75/100, which is still above the value considered average (68) and corresponds to a "Good" interface [7]. An interesting result is that the score for the short tests is lower than for the long tests. This could suggest that, as time passes, the user becomes more comfortable with the system. The low number of users is of course a limitation. The answers to the first questions about noticing the changes in status show that the users noticed at least one of the warnings. For the users in the long tests, there were additional red and amber statuses for external reasons (mostly traffic incidents). One user also had warnings about lack of sleep, which were correct.
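For reference, the SUS scores discussed above follow the standard scoring rule: the ten 1–5 Likert responses are converted to 0–4 contributions (odd, positively worded items contribute r − 1; even, negatively worded items contribute 5 − r) and the sum is multiplied by 2.5:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from the ten
    1-5 Likert responses, in questionnaire order. Odd-numbered items
    are positively worded (contribution r - 1), even-numbered items
    negatively worded (contribution 5 - r); the total is scaled by 2.5."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5
```

A participant answering 5 to every positive item and 1 to every negative item would score 100; all-neutral answers (3 everywhere) score 50.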
In the general feedback part of the questionnaire, all users said that they thought that R'SENS would be useful for drivers and that the separation into three domains (me, my car, my world) made sense. However, there were some differences in the more precise questions. One of the 5 users thought that including the user's health/fitness was not useful. To the question "Do you think having a simple feedback on the wearable is useful?", the answers were mixed: the three users who did the long tests answered "yes", but the 2 users of the short tests answered "no". These answers could indicate that using the wearable is useful in the long term, but that the short test setting did not reflect it. To the question "Do you think that replacing the car key with a wearable device makes the interaction with the car more convenient?", 3 users out of 5 answered no. One user also commented that the feature is similar to remote entry, so it does not make interaction more convenient. These results suggest that having the key always on the wrist is not, by itself, a sufficient advantage; other features (such as virtual keys or increased security) would probably be needed. When asked about irrelevant/useless features or information, one user thought that the wind conditions seemed of limited use and another said he didn't see a use for the heart rate in his test. Wind can affect driving, but the instantaneous heart rate may not be really useful as is; it is worth exploring what other insights could be extracted from the heart rate. When asked what they liked about the system in general, users mentioned the well-integrated interface, the simplicity of the system, the additional information offered by the "Me" and "My World" reports and the centralization of the information. When asked what they disliked about the system, the same user again voiced remarks about the lack of detailed information and the suggestion that green statuses should be hidden.
4 Conclusion

This paper presented a first study exploring the concept of integrating personally relevant information gathered through wearable sensors to enhance the user experience in an automotive context. Such an experience should not be limited to the driving activity but should bring together many aspects of the user's daily life. The R'SENS system was developed to test this concept, including three interfaces designed to provide this relevant information to the user in different manners, always adapted to the context of use. The test showed that the system was appreciated and perceived as useful, although a number of modifications could be applied to improve it.
References
1. Angelini, L., Baumgartner, J., Carrino, F., Carrino, S., Caon, M., Khaled, O.A., Sauer, J., Lalanne, D., Mugellini, E., Sonderegger, A.: A comparison of three interaction modalities in the car: gestures, voice and touch. In: Actes de la 28ième conférence francophone sur l'Interaction Homme-Machine, pp. 188–196. ACM (2016)
2. Coppola, R., Morisio, M.: Connected car: technologies, issues, future trends. ACM Comput. Surv. 49(3), Article 46 (2016)
3. Swan, M.: Connected car: quantified self becomes quantified car. J. Sens. Actuator Netw. 4(1), 2–29 (2015)
4. Caon, M., Tagliabue, M., Angelini, L., Perego, P., Mugellini, E., Andreoni, G.: Wearable technologies for automotive user interfaces: danger or opportunity? In: Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 1–5. ACM (2014)
5. Rawassizadeh, R., Price, B.A., Petre, M.: Wearables: has the age of smartwatches finally arrived? Commun. ACM 58(1), 45–47 (2015)
6. R-Link store. https://fi.rlinkstore.com/. Accessed 27 September 2019
7. Bangor, A., Kortum, P.T., Miller, J.T.: An empirical evaluation of the system usability scale. Int. J. Hum.-Comput. Interact. 24(6), 574–594 (2008)
Constructing a Mental Model of Automation Levels in the Area of Vehicle Guidance

Larissa Zacherl, Jonas Radlmayr, and Klaus Bengler

Chair of Ergonomics, Technical University of Munich, Boltzmannstr. 15, 85748 Garching, Germany
[email protected], {jonas.radlmayr,bengler}@tum.de
Abstract. The aim of this work was to construct a mental model of automation in the area of vehicle guidance. The users' understanding and mental model affect safety and user experience. The qualitative method of card sorting was applied with 25 participants. The task was to categorize cards on driving automation according to the participants' own understanding. The analysis showed that the mental model of automation in the area of vehicle guidance is made up of three levels: the level "Information and Driver" incorporates functions which do not influence lateral and longitudinal guidance of the vehicle. Systems that interfere with or control the lateral and longitudinal guidance of the vehicle are included in the level "Assisted to Automated Driving". The level "Autonomous Driving" comprises systems that operate the vehicle independently while no driver has to be present. The findings indicate a mismatch between the mental model of users and well-known taxonomies of automated driving.

Keywords: Automated driving · Levels of automation · Mental models
1 Introduction

Driving Automation. A lot of different assisting and automated systems already exist or are being researched, in which drivers take over different roles such as operating versus monitoring. Hence, a clear understanding and categorization of these systems is necessary. Different organizations like the Society of Automotive Engineers (SAE), the National Highway Traffic Safety Administration (NHTSA) or the German Federal Highway Research Institute (BASt) have defined taxonomies of driving automation. The SAE taxonomy includes six levels of automation [1]. The taxonomies published by NHTSA and BASt include five levels of automation [2]. Automated driving functions change the interactions of driver, vehicle and other road users [3]. Due to these changes, it is necessary to find out how humans perceive and understand different levels of driving automation. In the expert community, a clear understanding of the different levels of automation is based on the existing taxonomies, whereas the understanding of users might differ. This paper tries to answer the following research question: how is automation in the area of vehicle guidance represented from the users' perspective?

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 73–79, 2020. https://doi.org/10.1007/978-3-030-39512-4_12
L. Zacherl et al.
Mental Models. According to Durso et al., a mental model of a system is the mental representation of the functionalities of the system with all of its aspects [4]. Problems in the area of human-computer interaction can be tackled with the help of mental models, as studies can be conducted on how users perceive a device and how the users' view affects its handling [5]. Corresponding conclusions about the conception and design of a device can then be drawn. Hence, mental models are important when researching the successful handling of devices.

Card Sorting. When applying the user-centered method of open card sorting, labelled cards are sorted into categories created by each participant [6]. Participants are restricted as little as possible and often have the option of renaming, reusing, removing and adding cards [6]. Due to these non-restrictive options, card sorting can be used to elicit mental models and offers valuable clues for existing and future products [6].
2 Method

Sample and Experimental Setup. The cards for the following card sorting study were developed in an iterative process incorporating both expert and novice feedback (semi-standardized interviews with 10 participants), based on descriptions of existing and future assisted and automated driving functions. In addition, some cards featured system limits or restrictions of the operational design domain of these functions. The terms originate from the literature, as well as from definitions and descriptions of automobile manufacturers like Tesla and BMW [7, 8]. The analysis of the interviews was conducted according to the qualitative content analysis of Mayring [9]. To avoid priming participants with specific levels of driving automation, none of the established levels of driving automation were included. 39 terms arose from the interviews, covering different aspects and functions of driving automation.
The card sorting study was conducted with 25 participants (11 female and 14 male). The mean age was M = 40.1 years (SD = 19.5) and the range was 18–89. The participants had different educational and occupational backgrounds, and it was ensured that no experts were included in the experiment. German (23 participants), Spanish (one participant) and English (one participant) were specified as native languages, and the card sorting was always carried out in German. Participants provided a self-assessment of their knowledge about driving automation: eight participants estimated their knowledge as "low," ten as "moderate" and five as "very high"; one participant each estimated it as "very low" and "high." The participants' knowledge of driving automation did not stem from an occupational background but was rather based on experience or media. All participants took part voluntarily and did not receive any compensation.

Procedure and Measures.
In addition to the prepared cards, blank cards and a pen to label them were provided to the participants. To document the study, a camera was used, filming the hands and recording the voice of the participants, as each participant
was asked to think aloud. At the beginning, the 39 cards were distributed randomly over a large table area; thus, no order of the cards was conveyed to the participants. To begin with, each participant had to fill in a consent form. The task was described as sorting the cards into categories defined by the participant using blank cards. Participants were granted a high degree of autonomy: they could sort the same card into different categories, determine the number of categories, and give each category a name or description. After finishing the card sorting, participants were asked to fill in a demographic questionnaire. The study lasted between 30 and 60 min per participant and was conducted between March 26, 2017 and April 23, 2017.
To analyze the card sorting study, a spreadsheet according to the model of Lamantia was generated [10]. The raw data (created categories and assigned cards) were documented for each participant. The chosen categories were matched with the categories of the other participants. If two categories of different participants matched in overall content, they were combined and identified as one by joining the names of both categories and adapting the counts of each assigned term. If a category did not match any of the existing categories, it was added as a new category. The specific decision of either combining categories or creating new ones was reached by an expert decision of the authors. This procedure was repeated for all participants. In this process, 23 different categories were generated. For each card in each category, the absolute occurrence was recorded. Afterwards, agreement weights were calculated for each category and each term according to Paul [6]. Each category with an agreement weight of more than 50% was analyzed further. In addition, an agreement weight for each term was calculated.
Only the terms that were put into a specific category by more than 50% of the participants creating this category remain therein [6]. In summary, the remaining categories were assigned general names considering the titles and descriptions provided by the participants. The agreement weights are defined as:

agreement weight (category) in % = (number of participants generating this category) / (total number of participants) × 100
agreement weight (term) in % = (absolute occurrence of the term in this category) / (number of participants generating this category) × 100
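The two agreement-weight formulas can be sketched as follows for raw card-sort data in the form {participant: {category: set of terms}}; this data layout is an assumption for illustration, not the authors' actual spreadsheet:

```python
# Sketch of the agreement-weight computation described above.
# Input: raw = {participant: {category: set of assigned terms}}.
def agreement_weights(raw, n_participants):
    creators = {}   # category -> number of participants who created it
    counts = {}     # (category, term) -> absolute occurrence
    for cats in raw.values():
        for cat, terms in cats.items():
            creators[cat] = creators.get(cat, 0) + 1
            for term in terms:
                counts[(cat, term)] = counts.get((cat, term), 0) + 1
    # agreement weight (category): share of all participants creating it
    cat_weight = {c: 100.0 * n / n_participants for c, n in creators.items()}
    # agreement weight (term): share of that category's creators using the term
    term_weight = {(c, t): 100.0 * k / creators[c]
                   for (c, t), k in counts.items()}
    return cat_weight, term_weight
```

With three participants, two of whom create a category "C1" and both assign term "a" to it, "C1" gets a category weight of 66.7% and ("C1", "a") a term weight of 100%.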
3 Results

Each participant generated three to seven different categories (mean M = 4.2). The participants created the categories according to different patterns, which they explained aloud, the most frequent being the arbitration of how much the driver or the system is involved in controlling the vehicle. After applying the analysis, three categories emerged:
Category 1 (C1): "Information and Driver" (agreement weight: 64%),
Category 2 (C2): "Assisted to Automated Driving" (agreement weight: 88%),
Category 3 (C3): "Autonomous Driving" (agreement weight: 88%).
Table 1 shows the results of the card sorting study. Note: the card sorting was originally conducted in German; the table is a translated version that was not used in the study but was proofread by professional translators.
Table 1. Card sorting results; the agreement weight is shown per term and category in %. Terms/cards with agreement weights under 50% are neglected.

• The driver steers the vehicle (C1: 81.3, C2: 4.5)
• The driver accelerates the vehicle (C1: 81.3, C2: 9.1)
• The driver brakes the vehicle (C1: 81.3, C2: 4.5)
• The driver is not allowed to carry out any side tasks (C1: 81.3, C2: 4.5)
• Parking Sensor: The system warns the driver about potential barriers when parking by giving an acoustic signal. The system does not interfere with the lateral or longitudinal guidance of the vehicle (C1: 68.8, C2: 36.4)
• Blind Spot Monitor: The system warns the driver about potential collisions when changing lane by a display in the side mirror. The system does not interfere with the lateral or longitudinal guidance of the vehicle (C1: 62.5, C2: 50.0)
• Lane Departure Warning System: The car warns the driver by vibration on the steering wheel as soon as it unintentionally leaves the lane. The car does not steer itself but only gives feedback by vibration (C1: 62.5, C2: 63.6)
• Collision Warning: The car warns the driver of possible frontal collisions by an acoustic signal. The driver has to brake on his own (C1: 62.5, C2: 63.6)
• Cruise Control: The driver sets a speed that is maintained by the car. The driver has to steer and, if necessary, adjust speed and brake (C1: 18.8, C2: 95.5)
• Adaptive Cruise Control: The driver enters a maximum speed and a minimum distance to the vehicle ahead, which must not be undershot. If the vehicle approaches another vehicle in the same lane, the car automatically brakes and, once the road is free, accelerates again to the maximum entered speed. The driver has to steer and monitor the system (C1: 6.3, C2: 95.5)
• Traffic Jam Assistant: The car brakes, accelerates and steers in traffic jams (up to approx. 60 km/h), without lane change. The driver must monitor the system permanently (C2: 90.9, C3: 9.1)
• Lane Departure Assist: The system prevents the vehicle from unintentionally leaving the lane by interfering with lateral control (C1: 6.3, C2: 90.9)
• Park Steering Assist: The car controls the steering wheel when parking, while the driver has to accelerate and brake. The car steers itself and gives instructions for acceleration and braking (C1: 12.5, C2: 90.9)
• Collision Avoidance Assist: An (emergency) braking maneuver will be executed by the car in critical situations (C2: 77.3, C3: 31.8)
(continued)
Table 1. (continued)

• Traffic Jam Pilot: The car brakes, accelerates and steers in traffic jams (up to 60 km/h) itself, without lane change. The driver does not need to monitor the system permanently (C1: 12.5, C2: 68.2, C3: 31.8)
• Emergency Stop Assist: If the car notices that the driver is unable to drive the vehicle, the system takes control and brings the vehicle safely to a halt (C2: 63.3, C3: 40.9)
• Autobahn/Highway Pilot: Enables automated driving on highways up to approx. 130 km/h from the entrance to the exit, including lane changes. The driver must turn on the system and then no longer monitor it (C2: 63.6, C3: 40.9)
• Can be applied on autobahn/highway (without approach and departure) (C2: 59.1)
• Robot Taxi: Allows autonomous driving from start to finish, without restrictions: a driver is not necessary (C2: 22.7, C3: 95.5)
• Parking Garage Pilot: Allows autonomous parking in and out of a parking garage including parking space search and provision of the vehicle. The driver can move away and does not have to control the parking process (C2: 13.6, C3: 81.8)
• The system steers the vehicle (C2: 18.2, C3: 81.8)
• Monitoring the driving environment is in charge of the system (C3: 81.8)
• The system accelerates the vehicle (C2: 36.4, C3: 72.7)
• The system brakes the vehicle (C2: 31.8, C3: 77.3)
• The driver does not have to be prepared to take over the driving task completely (C3: 77.3)
• The driver is allowed to carry out the following side tasks: sleep (C3: 68.2)
• The system is able to react to weaker road users like pedestrians or cyclists (C3: 68.2)
• The driver is allowed to carry out the following side tasks: play a game on the smartphone (C2: 9.1, C3: 63.6)
• The driver is allowed to carry out the following side tasks: read the news on the display at the center console (C2: 22.7, C3: 59.1)
• Can be applied in bright sunshine (C2: 36.4, C3: 59.1)
• Can be applied in the case of road construction (C2: 45.5, C3: 59.1)
4 Discussion

Despite no limitation on the number of categories and categorizing according to the participants' own understanding, the mental model emerged to consist of three categories describing to which extent the driver and the system arbitrate vehicle guidance with respect to different assisted or automated driving functions:
"Information and Driver" (C1): This category included the four warning functions, which do not interfere with controlling the vehicle. This showed that the participants
clearly separated warning functions from assisting or automated functions. The remaining terms of this category described the responsibility of the driver, e.g. that the driver has to steer the vehicle.
"Assisted to Automated Driving" (C2): Two of the warning functions attributed to C1 were also matched to C2. Additionally, this category incorporated all the given assisting and automated driving functions where drivers still have the option to engage in the driving task (levels 1–3 following SAE/NHTSA). It can be concluded that the participants had difficulties sorting the variety of different driving functions.
"Autonomous Driving" (C3): This category contained the two functions where the driver becomes solely a passenger. Additionally, different terms describing the category were included, e.g. the responsibility of the system for taking over the steering task. These clear and detailed descriptions highlight that the participants did have a clear conception of this category.
The SAE and NHTSA definitions consist of six and five levels, respectively [1, 2], whereas the mental model constructed in this study contains only three levels. C1 corresponds to level 0 of SAE and NHTSA. C2 is rather unspecific and includes levels 1, 2 and 3 of the SAE and NHTSA definitions. Levels 4 and 5 of the SAE definition and level 4 of the NHTSA definition can be attributed to C3. Hence, the levels of automation in the users' mental model appear to differ considerably from the taxonomies of the SAE and the NHTSA. The comparison is shown in Table 2.
Table 2. Comparison of the mental model of driving automation constructed in this study and the taxonomies of SAE and NHTSA.

Mental model: Information and Driver
  SAE [1]: Level 0 (No Automation)
  NHTSA [2]: Level 0 (No Automation)
Mental model: Assisted to Automated Driving
  SAE [1]: Level 1 (Driver Assistance), Level 2 (Partial Automation), Level 3 (Conditional Automation)
  NHTSA [2]: Level 1 (Function-Specific Automation), Level 2 (Combined Function Automation), Level 3 (Limited Self-Driving Automation)
Mental model: Autonomous Driving
  SAE [1]: Level 4 (High Automation), Level 5 (Full Automation)
  NHTSA [2]: Level 4 (Full Self-Driving Automation)
Mental models are crucial for drawing conclusions about the conception and design of a function and, hence, for allowing successful handling of the function. There appears to be a mismatch between the arbitration of responsibility represented in users' mental models and existing taxonomies concerning specific functions. The successful introduction of higher levels of automation likely depends on a correct mental model among users and should be considered when designing automated driving functions. Considering the mental model contributes to developing systems and functions that are understandable, intuitive and easy to use.
References
1. Society of Automotive Engineers: J3016 - Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems. Surface Vehicle Recommended Practice (2018)
2. National Highway Traffic Safety Administration: Preliminary Statement of Policy Concerning Automated Vehicles (2013). http://www.nhtsa.gov/staticfiles/rulemaking/pdf/
3. Meschtscherjakov, A., Tscheligi, M., Szostak, D., et al.: HCI and autonomous vehicles: contextual experience informs design. In: CHI EA 2016: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 3542–3549. ACM (2016). https://doi.org/10.1145/2851581.2856489
4. Durso, F.T., Gronlund, S.D.: Situation awareness. In: Durso, F.T., Nickerson, R.S., Schvaneveldt, R.W., Dumais, S.T., Lindsay, D.S., Chi, M.T.H. (eds.) Handbook of Applied Cognition, vol. 10, pp. 283–314. Wiley (1999)
5. Payne, S.J.: Mental models in human-computer interaction. In: The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd edn., pp. 39–52. Taylor and Francis (2007). ISBN 978-1-4200-8881-6
6. Paul, C.L.: A modified Delphi approach to a new card sorting methodology. J. Usability Stud. 4(1), 7–30 (2008)
7. Tesla, Inc.: Autopilot: Full Self-Driving Hardware on All Cars (2018). https://www.tesla.com/autopilot
8. BMW AG: Autonomous Driving (2018). https://www.bmwgroup.com/en/innovation/technologies-and-mobility/autonomes-fahren.html
9. Mayring, P.: Qualitative content analysis. Advance online publication (2000). https://doi.org/10.17169/FQS-1.2.1089. Flick, U., von Kardoff, E., Keupp, H., von Rosenstiel, L., Wolff, S. (eds.)
10. Lamantia, J.: Analyzing Card Sort Results with a Spreadsheet Template (2003). http://boxesandarrows.com/analyzing-card-sort-results-with-a-spreadsheet-template/
Effect of Phone Interface Modality on Drivers' Task Load Index in Conventional and Semi-Automated Vehicles

Kristina Davtyan¹ and Francesca Favaro¹,²

¹ RiSAS Research Center, San Jose State University, San Jose, CA, USA
{kristina.davtyan,francesca.favaro}@sjsu.edu
² Department of Aviation and Technology, San Jose State University, San Jose, CA, USA
Abstract. This study tested how two different interaction modalities with a smartphone (manual texting and vocal replies) affected the task load perceived by drivers placed in various driving scenarios. The study employed human-in-the-loop simulation, and involved both a city environment (driven in conventional manual mode) and a highway environment (driven first in automated mode and then requiring takeover from the driver). The interface was tested in relation to following GPS instructions while being prompted to reply to texts at specific times in the simulation. The results compare the overall mental load scores of the manual texting condition for both city and highway scenarios to the vocal reply condition. Unweighted NASA TLX data show that the vocal interface led to lower scores; however, statistical significance was shown only for the city scenarios for the physical load and effort calculations.

Keywords: Texting · Task load · Smartphone interaction · Human-in-the-loop simulation
1 Introduction

Distracted driving is the main cause of fatalities on US roads [1]. The National Highway Traffic Safety Administration estimates that, regardless of the regulation in place in a specific state, over 60% of the driver population engages in cell-phone-based activities while driving at least once per day [2]. Currently, there is no national ban on texting or using a wireless phone while driving, but a number of U.S. states have passed laws banning texting or wireless phones, or requiring hands-free use of wireless phones, while driving [3]. Among researchers, there is a consensus that research is needed to establish a clear and quantitative effect of smartphone usage on driving performance [4]. A substantial body of literature from 2000 to 2018 shows mixed results [5, 6], with manual texting clearly correlated to impairment of normal driving activities, but also pointing to the existence of circumstances under which the use of phones enhances alertness due to more expected threats. Moreover, in conjunction with automation, research has shown that drivers of highly automated vehicles are more likely to engage in secondary activities while driving [7].

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 80–85, 2020. https://doi.org/10.1007/978-3-030-39512-4_13
The present work focuses on how such interaction with secondary tasks is carried out, by comparing two different modes of interaction between drivers and smartphones: touch-based commands vs. voice-to-text commands. 32 subjects participated in a human-in-the-loop, integrated-car simulator study. We assessed the NASA Task Load Index (TLX) for both smartphone interaction modalities (touch-based vs. voice-to-text) during two driving scenarios (city vs. highway). The structure of the paper is as follows: Sect. 2 provides an overview of the experimental setup; Sect. 3 presents the study's main results; Sect. 4 concludes the paper.
2 Methods

2.1 Participants

32 participants (16 men and 16 women, M = 26.8 years of age, SD = 6.5 years of age) from Silicon Valley, CA volunteered to participate in this driving experiment. All participants were screened prior to participation to ensure normal or corrected-to-normal vision. All participants were active CA-licensed drivers with no medical condition that could impair their driving. Participants reported sending 2 text messages on average per drive (SD = 1.24). The texting frequency of our population was comparable to the literature on the topic.

2.2 Apparatus and Stimuli

Driving behavior was assessed in a human-in-the-loop static simulator, consisting of a real BMW 6 Series rented from FKA Silicon Valley. The simulation environment used the Linux-based simulation framework Virtual Test Drive (VTD) by Vires Simulationstechnologie GmbH in version 2.1.0. Open standards (OpenDRIVE® and OpenSCENARIO) were used for road and scenario creation. The simulated driving task was displayed on a 220° surround projection screen, with a resolution of 1080 × 1920 and a refresh rate of 60 Hz. The simulator showed road and traffic information projected in front of the car and through a rear-view mirror behind the vehicle. A three-way split rear-projection wall provided the projection for side and rear-view mirrors. A 4.3″ Nexus 6 touch-screen smartphone running Android OS version 5 was used for the texting task, and to navigate the participants through the simulated city and highway environments. The buttons on the keyboard of the smartphone were arranged in a QWERTY layout.

2.3 Experiment Tasks

Driving Task. Participants were instructed to follow the GPS instructions on the mounted Android phone during the driving scenarios. During manual city scenarios, participants had to complete a series of left and right turns, and stop at stop signs and a red light while maintaining the posted speed limit indicated in the navigation interface. The city scenario was driven in conventional manual mode. During the fully-autonomous
K. Davtyan and F. Favaro
highway scenarios, participants were instructed to continue monitoring the outside environment even while the vehicle drove autonomously, and to be ready to take over control of the vehicle. A structured disengagement event would then take place, with the GPS indicating to take the next exit while observing all traffic rules. Vehicle dynamics, including lane position, steering wheel position, speed and braking distance, were recorded during the execution of the driving scenario. The simulated driving scenario consisted of a 1-mile-long grid of single-lane suburban streets (city scenario) and a three-lane highway with straight and curved portions (highway scenario). For the highway environment, traffic density was kept constant for all tests, with a total of 50 vehicles distributed within a 400-m diameter around the test vehicle.

Texting Task. A texting task was used to compare the distracting effects of manual versus vocal texting on driving performance. In the city scenario, two texts were introduced as a secondary non-driving task. The first text prodded the driver to provide his/her ETA (displayed on the GPS app), and was introduced in conjunction with a traffic light turning red. While replying to the text, the participant would need to resume driving after the light turned green, at which point a second text message would be received, asking whether the participant preferred a cheese or pepperoni pizza for dinner. In the highway scenario, a single text was employed. The text, which again inquired about the ETA, was triggered right after the notification to the driver to "prepare to take over" control of the vehicle. 15 s after this "priming" of the participant, the actual disengagement would be triggered, with the driver expected to correctly take the highway exit while the text awaited a reply.

NASA TLX. NASA TLX has been cited in over 4,400 studies. It is considered a useful measure for giving insight into the cognitive workload of a given task.
A paper-and-pen NASA TLX was administered immediately after the end of each driving scenario. The TLX test consisted of six subjective subscales. In each section, the participant was required to rate on a 20-point range: mental demand; physical demand; temporal demand; performance; effort; and frustration. Descriptions were provided for each item, and each participant completed a total of 4 TLX surveys, one for each combination of scenario and texting interface. Participants were not asked to rate each subscale for perceived importance; hence, we use Raw TLX scores for analysis. Test Procedure. The test comprised a total of five sections. Four of those were given by the counterbalanced conditions from the variables tested. These included two dual-task conditions in which participants operated the car in manual mode during the city scenario while texting using the manual input modality (manual + city) and the voice-to-text input modality (voice-to-text + city), as well as two in autonomous car mode during the highway scenario (manual + highway; voice-to-text + highway). A fifth section was included at the beginning of each test, to allow participants to get accustomed to simulated driving. The practice phase lasted roughly 7 min and was concluded only after the participant expressed comfort with the simulator driving performance. The tests had an average duration of 5 min per condition. The entire session lasted about 60 min (including pre- and post-test surveys), and drivers were informed that they could withdraw from participation at any time. Figure 1 reports a schematic structure of the test, where each intermediate drive would also feature an in-between-tests survey.
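The Raw TLX score described above is the unweighted combination of the six subscale ratings (the totals reported in Table 1 are the sums of the subscale means). A minimal sketch of the scoring; the function name is our own, not part of the TLX specification:

```python
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Unweighted (raw) NASA TLX: sum of the six subscale ratings.

    Each rating is on the 20-point range used in this study; no
    pairwise-importance weighting is applied.
    """
    return mental + physical + temporal + performance + effort + frustration
```

Summing, for example, the city-manual subscale means from Table 1 (9.03, 8.50, 7.78, 5.91, 9.91, 7.88) reproduces the reported total of about 49.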
Effect of Phone Interface Modality on Drivers’ Task Load Index
Fig. 1. Schematic representation of the test structure
Statistical Analysis. For the purposes of this study, NASA-TLX workload scores from each of the four test conditions were analyzed and compared using effect size for a repeated-measures design. Cohen's effect size is useful for estimating statistical power and sample size, especially if there is low variance. Overall workload ratings were derived through the use of an unweighted "raw" combination of the TLX subscales.
3 Analysis of Results

Looking at all the subscales across all conditions and participants, the physical (M = 6.57, SD = 4.10) and performance (M = 6.79, SD = 4.42) subscale total mean scores were the lowest in the set (see Table 1). The highest recorded mean scores were for the effort (M = 8.65, SD = 4.57), mental (M = 8, SD = 4.60), temporal (M = 7.90, SD = 4.89), and frustration (M = 7.79, SD = 4.85) subscales.

Table 1. Descriptive statistics summary of TLX sub-scores (mental demand; physical demand; temporal demand; performance; effort; and frustration) across four test conditions (city manual - Cm, city voice - Cv, hwy manual - Hm, hwy voice - Hv)

Subscale      Cm M   Cm SD   Cv M   Cv SD   Hm M   Hm SD   Hv M   Hv SD
Mental        9.03   4.27    8.13   5.01    7.97   4.60    6.88   4.46
Physical      8.50   4.53    7.28   3.98    5.44   3.76    5.06   3.24
Temporal      7.78   4.38    7.14   4.92    8.78   5.49    7.63   4.82
Performance   5.91   3.92    5.52   3.73    8.41   4.97    7.34   4.54
Effort        9.91   4.21    8.17   4.76    8.44   4.59    8.09   4.68
Frustration   7.88   4.72    7.67   4.69    7.91   5.04    7.69   5.17
Total         49.00  18.06   44.17  20.04   46.94  21.45   42.69  20.42
Looking at the total TLX mean scores by modality (manual vs voice-to-text) across both city and highway scenarios, mean scores for city manual and highway manual (M = 49, SD = 18.06; M = 46.94, SD = 21.45) appeared to be higher on average compared to vocal replies in both the city and highway scenarios (M = 44.17,
SD = 20.04; M = 42.69, SD = 20.42). Figure 2 provides a visual depiction of the average sub-scores for the investigated conditions.
Fig. 2. Graph showing the mean values for each of the 6 different sub-scores of NASA TLX (mental demand; physical demand; temporal demand; performance; effort; and frustration) as well as a total NASA TLX workload score
Cohen's d effect size for the paired-samples within-group comparison design revealed that the main effect of cell-phone use modality (manual vs voice-to-text) was small (d = .25, where d = 0.2 is conventionally considered a small effect size) and did not approach Cohen's convention for a large effect. These results indicate that participants in the voice-to-text secondary task condition did not do appreciably better than in the manual texting condition when compared across both the city and highway scenarios. Two-tailed t-tests were executed. Statistical significance was found only when comparing manual and vocal interfaces for physical demand (p = 0.039) and effort (p = 0.024) in the city scenario. None of the TLX sub-scores showed statistical significance for the highway scenario.
4 Conclusions

Data from our study suggest that, when it comes to driver's task load as measured across the mental, physical, temporal, effort, performance, and frustration ratings, participants who engaged in texting using vocal input had slightly lower scores compared to conditions where they used traditional manual features, regardless of whether they engaged in the manual city or autonomous highway primary task scenario. Statistical analysis revealed, however, that these differences have a low effect size. This has implications for current U.S. state driving regulations, which overwhelmingly enforce a texting ban while making no stipulations regarding the use of voice-to-text technology. These conclusions are limited to measuring mental workload only and are not meant to advocate for a ban of voice-to-text messaging during driving. However, they do cast doubt on the popular claim that leveraging voice-command technology represents a "safety improvement".
Acknowledgments. Funding for this research was provided by the US Department of Transportation (grant 69A3551747127 managed by the Mineta Transportation Institute of San Jose, CA).
Software Failure Mode and Effects Analysis

Palak Talwar(&)

Senior Safety Engineer, Lyft Level 5, Palo Alto, CA, USA
[email protected]
Abstract. Failure Mode and Effects Analysis (FMEA) is a key safety assessment analysis that determines failure modes at the system, hardware and software level. Overlooking failure modes can often cause system or functionality failure, which directly impacts a system's safety performance, reliability and quality. FMEA is a bottom-up approach which has four key phases - identification of fault, assessment of impact, determination of potential causes and their resolutions, and finally testing and documentation of analysis. FMEA addresses the effect of failures at the system, software and hardware level. The outcome of the analysis helps us identify gaps in the safety requirements specification and provides input for component testing, integration testing and system level testing. This paper describes the application of Failure Mode and Effects Analysis (FMEA) to software modules. Keywords: Control systems faults · Failure modes · Safety assessment · Software failure analysis
1 Failure Mode and Effects Analysis (FMEA)

Software Failure Mode and Effects Analysis (FMEA) is a bottom-up analysis technique to identify the consequences of possible software failure modes on the software system. The example below outlines the application of Software FMEA to a Brake ECU (Electronic Control Unit). As depicted in Fig. 1 below, the Brake ECU receives brake pedal sensor input from the driver as an analog signal and vehicle speed information from another ECU via CAN, and in turn outputs a brake torque request and the brake module status to other ECUs over CAN.
Fig. 1. Brake ECU and its interfaces
FMEA starts with identifying the different software failure modes that can influence the subsystem or system. The four phases (mentioned above) are one potential approach to performing FMEA. A brief expansion of these phases follows: © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 86–91, 2020. https://doi.org/10.1007/978-3-030-39512-4_14
• Look at the system functionality holistically and identify a comprehensive list of potential failure modes
• For each failure mode identified in step 1, assess the implications of the failure on connected software or hardware systems and also on the overall performance of the system
• Once we know the overall impact, we isolate potential causes of failure. Once the causes are identified, the system design needs to be enhanced to adequately prevent future failures
• Once the design change is made, we retest the failure mode to ensure that the system appropriately handles the failure before release. Then, the necessary documentation is done

Now that we have a brief understanding of the approach, let's follow these steps to perform software FMEA on the Brake ECU depicted in Fig. 1 above.

1.1 Step 1
For the example above, let's start by listing the individual components, including interfaces, the function they provide, and their failure modes. The only component of interest here is the Brake ECU with its inputs and outputs. Its function can be defined as: 1. transmitting a brake torque request, based on the brake pedal sensor and vehicle speed inputs, to other vehicle modules; 2. sending the brake module fault status to other vehicle modules. The failure modes for the interfaces and the component (Brake ECU) can be defined as follows (Table 1):
Table 1. Potential failure modes.

• Brake pedal sensor analog voltage input: No signal; Signal voltage out of range
• Vehicle speed: Message corruption; Message loss; Message timeout
• Brake ECU - Transmits brake torque request: NO brake torque request; DELAYED brake torque request; INVALID brake torque request
• Brake ECU - Sends brake module fault status to other vehicle modules: NO brake module status; DELAYED brake module status; INVALID brake module status
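A worksheet like Table 1 is straightforward to keep in code so that the later steps can attach effects, causes, and mitigations to each entry. A minimal sketch with illustrative field names (not a prescribed FMEA tool format):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FailureMode:
    component: str        # component or interface, e.g. "Brake ECU"
    function: str         # function affected by the failure
    mode: str             # potential failure mode
    effects: List[str] = field(default_factory=list)      # filled in Step 2
    causes: List[str] = field(default_factory=list)       # filled in Step 3
    mitigations: List[str] = field(default_factory=list)  # filled in Steps 4-5

worksheet = [
    FailureMode("Brake pedal sensor analog voltage input", "(input)", "No signal"),
    FailureMode("Vehicle speed", "(input)", "Message timeout"),
    FailureMode("Brake ECU", "Transmits brake torque request",
                "NO brake torque request"),
]
```

Each subsequent step then only appends to the lists of the affected records, which keeps the analysis traceable.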
1.2 Step 2
Once we have listed the failure modes, let's determine, for each failure mode, the effect(s) of the failure on other system components and on the overall system. For the example above, we determine the effect(s) of receiving an invalid or delayed vehicle speed, a brake pedal analog voltage out of range, or not receiving anything at all, and ask questions such as:
– What if the brake pedal input requested by the driver is not received for a certain period of time?
– What if we receive a corrupted vehicle speed message over CAN? Can we tolerate a single corrupted message or not?
– Does the failure impact vehicle behavior, resulting in high severity?
The table below lists the potential effect(s) of failure, which might or might not impact vehicle behavior (Table 2).

Table 2. Potential effects of failure.

• Brake pedal sensor analog voltage input: No signal; Signal voltage out of range
• Vehicle speed: Message corruption; Message loss; Message timeout
• Brake ECU - Transmits brake torque request:
  – NO brake torque request: No brake command issued to the vehicle actuator when requested by the driver
  – DELAYED brake torque request: Brake command issued too late to the vehicle actuator when requested by the driver
  – INVALID brake torque request: Invalid brake command issued to the vehicle actuator when requested by the driver, which might cause overbraking
• Brake ECU - Sends brake module fault status to other vehicle modules:
  – NO brake module status: No brake module status issued to other vehicle modules in order to notify of a brake ECU failure
  – DELAYED brake module status: Brake module status issued too late to other vehicle modules in order to notify of a brake ECU failure
  – INVALID brake module status: Invalid brake module status issued to other vehicle modules in order to notify of a brake ECU failure

1.3 Step 3

After we are done defining the failure modes and potential effect(s) of failure, the next step is to determine the potential cause(s) of failure. For each failure mode, we determine all possible causes, including both hardware and software. Listing the potential cause(s) of failure helps us figure out which design control prevention techniques should be implemented in order to mitigate these failures. The mitigation strategy can be defined in hardware only, in software only, or in both (Table 3).

Table 3. Potential causes of failure.

• Brake pedal sensor analog voltage input: No signal; Signal voltage out of range
• Vehicle speed: Message corruption; Message loss; Message timeout
• Brake ECU - Transmits brake torque request:
  – NO brake torque request (effect: no brake command issued to the vehicle actuator when requested by the driver). Potential causes: [brake pedal sensor analog voltage input] No signal; [vehicle speed] Message loss; No power supply
  – DELAYED brake torque request (effect: brake command issued too late to the vehicle actuator when requested by the driver). Potential causes: [vehicle speed] Message timeout; [Brake ECU] Internal fault
  – INVALID brake torque request (effect: invalid brake command issued to the vehicle actuator when requested by the driver, which might cause overbraking). Potential causes: [brake pedal sensor analog voltage input] Signal voltage out of range; [vehicle speed] Message corruption; [Brake ECU] Internal fault
• Brake ECU - Sends brake module fault status to other vehicle modules:
  – NO brake module status (effect: no brake module status issued to other vehicle modules in order to notify of a brake ECU failure). Potential causes: [Brake ECU] Internal fault; No power supply
  – DELAYED brake module status (effect: brake module status issued too late to other vehicle modules in order to notify of a brake ECU failure). Potential causes: [Brake ECU] Internal fault
  – INVALID brake module status (effect: invalid brake module status issued to other vehicle modules in order to notify of a brake ECU failure). Potential causes: [Brake ECU] Internal fault

1.4 Steps 4 and 5
After we are done identifying the potential failure modes and causes of failure, with the severity of failure captured under the potential effects column, we list the current design control preventions and recommend action(s) to mitigate these failures if they are not already in place. For example:
– To mitigate a brake pedal sensor failure, we can add a redundant sensor to fall back on in case the primary sensor fails. We can also add a plausibility check which reads both sensor voltages, compares them against each other, and sets a fault if the difference between the two exceeds some value for some period of time.
– To check for CAN message corruption, we can verify the CRC (Cyclic Redundancy Check), parity bit, etc. added to a field of the CAN message on the receiver side, and set a fault flag if the number of invalid CRCs exceeds a threshold.
– To check for CAN message drop or loss, we can verify the MC (Message Counter), sequence number, etc. added to a field of the CAN message on the receiver side, and/or check for a timeout.
– To mitigate risk, we can add a sensor fault detection strategy in hardware (e.g., what happens if the power supply to the sensor goes off, what if the sensor malfunctions, what if there is a register failure) and define what actions to take.
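The receiver-side CRC, message-counter, and timeout checks above can be sketched as follows. The CRC-8 polynomial, the 4-bit counter width, the fault names, and the timeout value are all illustrative assumptions, not a particular CAN stack's API:

```python
def crc8(data, poly=0x1D, init=0xFF):
    """Bitwise CRC-8 over a byte sequence (polynomial chosen for illustration)."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def check_message(msg, last_counter, last_rx_time, now, timeout_s=0.1):
    """Return the list of faults detected for a received CAN message.

    msg is a dict with 'data' (byte values), 'crc', and a 4-bit
    rolling 'counter'.
    """
    faults = []
    if crc8(msg["data"]) != msg["crc"]:
        faults.append("CRC_FAULT")        # corruption
    if msg["counter"] != (last_counter + 1) % 16:
        faults.append("COUNTER_FAULT")    # dropped or repeated message
    if now - last_rx_time > timeout_s:
        faults.append("TIMEOUT_FAULT")    # message loss / stale data
    return faults
```

As the text suggests for invalid CRCs, a fault would typically be latched only after the count of such detections exceeds a threshold, rather than on a single occurrence.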
2 Conclusion

The bottom-up FMEA analysis approach helps in identifying functionality-level failure modes and in assessing their severity and impact on the overall system. If there is no impact, then we can be fairly confident that the system design is robust. If there is some impact, then preventive measures need to be initiated, as highlighted in this paper.
A Validated Failure Behavior Model for Driver Behavior Models for Generating Skid-Scenarios on Motorways

Bernd Huber1(&), Paul Schmidl2, Christoph Sippl1, and Anatoli Djanatliev3

1 AUDI AG, Simulation Electric/Electronic, 85045 Ingolstadt, Germany
{bernd2.huber,christoph.sippl}@audi.de
2 Technical University of Munich, Chair of Ergonomics, Boltzmannstr. 15, 85748 Garching b. München, Germany
[email protected]
3 Computer Networks and Communication Systems, Friedrich-Alexander Universität Erlangen, Martensstr. 3, 91058 Erlangen, Germany
[email protected]
Abstract. The automation of the driving task will gain importance in future mobility solutions for private transport. However, the sufficient validation of automated driving functions poses enormous challenges for academia and industry. This contribution proposes a failure behavior model for driver models for generating skid-scenarios on motorways. The model is based on the results of the five-step-method provided by accident researchers. The failure behavior model is implemented using a neural network, which is trained utilizing a reinforcement learning algorithm. Hereby, the aim of the neural network is to maximize the vehicle's side slip angle to initiate skidding of the vehicle. Finally, the failure behavior model is validated by reconstructing a real accident in a traffic simulation using the failure behavior model. Keywords: Simulation · Human modeling · Automated driving
1 Introduction

In future concepts of individual mobility, the automation of the driving task is becoming an important requirement. Automated Driving Functions (ADFs) open up new sales opportunities for car manufacturers [1]. In addition, ADFs allow traffic safety to be significantly increased, as over 90% of traffic accidents can be linked to driver failures [2]. In order to satisfy these objectives, an ADF must perform as error-free as possible in any potential situation in its working domain. The resulting complexity of ADFs impedes the formulation of detailed and complete requirements. Consequently, the verification and validation of ADFs is partial and therefore involves risks [3]. Bergenhem et al. define this problem as the "semantic gap" [4]. Various approaches exist to validate the mode of operation of ADFs. On the one hand, function tests can be carried out on public roads with the aid of test drives. On the other hand, simulations allow function tests to be conducted in the laboratory. © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 92–98, 2020. https://doi.org/10.1007/978-3-030-39512-4_15
However, performing real test drives to validate ADFs is not feasible due to economic constraints, technological restrictions or reasons of safety [5]. Along the product development process, so-called X-in-the-loop (XIL) methods are deployed to evaluate vehicle functions in a virtual environment [6]. Ulbrich et al. [7] describe the systematic application of such simulation methods for testing an automated lane changing function along the development phase. Here, the function developers modelled traffic simulation scenarios for the validation of the ADF. This is time-consuming and complex as the behavior of each entity must be described individually. Therefore, researchers proposed different approaches for automating the scenario generation process [8, 9]. Alternatively, behavior models such as driver behavior models are capable of defining the behavior of entities in the simulation. Numerous driver behavior models for various purposes and different levels of detail have been presented in the literature [10].
2 Concept

We have already published the concept [11] of a modular and validated failure behavior model in driver behavior models for testing ADFs. The concept is based on findings by an interdisciplinary team of the Audi Accident Research Unit (AARU)1. Especially the conclusions of the five-step-method [12] are valuable for the modeling of failure behavior for driver models. Among other aspects, this method enables the AARU to define and categorize the human cause factor of an accident. The human cause factors of the five-step-method are derived from human decision-making models. This information is used to model the driver's failure behavior. Therefore, a driver model is required that also reflects the levels of the human decision-making process. This enables the implementation of modular human failure behaviors in the corresponding levels of the driver behavior model. These are categorized as motivational driver behavior models [10]. Furthermore, the modeled misbehavior is validated by reconstructing real accidents from the AARU accident database in a respective traffic simulation. We proved the effectiveness of the method by modelling an information processing failure in the decision-making process of the driver. Thereafter, we validated the model by successfully reconstructing a real accident, the cause of which, according to the findings of the AARU, corresponded to the same class of human failure behavior [11]. In this contribution, we present a failure behavior model for generating skid-scenarios on motorways. Skidding situations, if they occur, lead in many cases to severe injuries or casualties. The number of accidents of this type has decreased significantly due to the introduction of Electronic Stability Control (ESC), but such situations still occur on motorways [13]. Nevertheless, it is necessary to investigate the behavior of the ADF in such rare events. We utilized the driver behavior model from the Human Factors Consult (HFC)2.
1 Audi Accident Research Unit (AARU): https://www.aaru.de/.
2 Human Factors Consult (HFC): https://human-factors-consult.de/en/.

The model consists of four consecutive layers: perception, situation awareness,
decision-making, and task-execution [14]. These layers implement the human decision-making process. Furthermore, our vehicle model is controlled by the HFC driver behavior model. Additionally, a two-track model is employed to simulate the vehicle dynamics of our vehicle model [15]. Thus it is possible to generate over- or understeer based on the control commands of the driver behavior model.
3 Implementation

We utilized a neural network to model the failure behavior to generate skidding, since it was not possible for us to identify a clear human failure behavior from the AARU database. The neural network is trained by applying the reinforcement learning algorithm Proximal Policy Optimization [16]. The algorithm is composed of two neural networks with common parameters. The first network calculates the next action, while the second network represents the value function.

L^PPO(θ) = Ê_t[ L_t^CLIP(θ) − c_1 L_t^VF(θ) + c_2 S[π_θ](s_t) ]    (1)

where θ denotes the parameters of the neural networks, π_θ is the policy, and Ê_t represents the expected value over multiple calculation steps. c_1 and c_2 refer to constants, and S describes the entropy in time step s_t. The deviation of the neural network of the value function is captured by L_t^VF(θ). L_t^CLIP(θ) determines the clipped main objective; this term defines the maximum change of the neural network within a calculation step. Here, the modification is generated on the basis of the introduced policy. In addition to the policy, a corresponding reward (R) is calculated, which is dependent on the current state of the environment. In the training phase of the neural network, the policy is optimized to obtain a maximum reward. For a more detailed explanation of the functionality of the algorithm, see [16]. The training process of our failure behavior model is illustrated in the following pseudo-code:
while not done do
  for t = 1, 2, …, N do
    Execute policy π_θ
    Save s_t, a_t, R_t
  end for
  for each optimization epoch do
    Calculate L^PPO(θ)
    Optimize θ
  end for
end while
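For a single timestep, the clipped term L_t^CLIP of Eq. (1) can be written out as below. This is a per-sample sketch of the standard PPO objective [16], with ε = 0.2 as an assumed clipping constant; it is not the authors' implementation:

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO clipped objective for one sample.

    ratio is pi_theta(a|s) / pi_theta_old(a|s); the clip keeps a single
    update from moving the policy more than eps away from the old one.
    """
    clipped_ratio = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped_ratio * advantage)
```

With a positive advantage, the surrogate stops growing once the ratio exceeds 1 + ε, which realizes the "maximum change within a calculation step" described above.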
In our setup, the steering wheel position is defined by the neural network. As input values, the current distance of the vehicle to the edge of the road, the speed of the vehicle, and the heading angle relative to the road are provided to the neural network. Subsequently, the corresponding reward is computed based on two conditions. First, the reward is increased when the vehicle is on the road; otherwise it is drastically reduced.
Secondly, the current slip angle [17] of the vehicle is computed. A nonzero slip angle signifies a positive reward. A termination condition of the training process is not implemented. On completion of the training, the neural net is integrated into the task-execution layer of the driver behavior model. Whenever necessary, the neural network may be activated to produce an increased slip angle according to the current situation of the vehicle model and its environment.
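The two reward conditions (stay on the road, produce a nonzero side slip angle) can be sketched as a per-step reward. The slip angle follows the usual β = atan2(v_lat, v_long) definition; the bonus and penalty constants are our own assumptions, not values from the paper:

```python
import math

def slip_angle(v_long, v_lat):
    """Vehicle side slip angle beta from longitudinal and lateral velocity."""
    return math.atan2(v_lat, v_long)

def step_reward(on_road, v_long, v_lat):
    """Per-step reward: small on-road bonus, drastic off-road penalty,
    plus a positive contribution for any nonzero slip angle."""
    reward = 1.0 if on_road else -100.0   # constants are illustrative
    reward += abs(slip_angle(v_long, v_lat))
    return reward
```

Because the slip-angle term is unbounded in sign only through its magnitude, the trained policy is pushed toward skidding while remaining on the roadway.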
4 Validation

A reconstructed skidding accident from the AARU database is simulated in a sub-microscopic traffic simulation with our driver behavior model to validate the failure behavior model. Hereby, the objective is to substantiate the model's ability to reflect human failure behavior in skidding accidents. First, the database for the validation of the failure behavior is explained. Afterwards, the simulation results are compared with the reconstruction of the real accident.

4.1 Database
The database for the validation consists of the results of the AARU on an analyzed skidding accident. The reconstruction of the collisions of the skidding accident is illustrated in Fig. 1. Since the reconstruction is focused on the collisions of the vehicles, the trajectory of the skidding vehicle was extrapolated for this purpose.
Fig. 1. Reconstruction of the collisions of a skidding accident provided by the AARU. The trajectory of the skidding vehicle was extrapolated by the authors, represented by the blue vehicles.
In this accident, vehicle A (the blue vehicle in Fig. 1) was driving on the right-hand lane of a two-lane motorway. The driver performed a lane-change maneuver to the left lane. Vehicle B (the vehicle on the left lane in Fig. 1) was approaching from behind. As the driver of vehicle A recognized vehicle B, the driver abruptly steered back to the right lane. Thereby, the driver lost control of the vehicle, and vehicles A and B collided. Furthermore, the AARU estimated the velocity of vehicle A at 110 km/h and that of vehicle B at 145 km/h.
The illustrated states of vehicle A were determined according to the roadway geometry and the roadway markings. In addition, the measured values were used as interpolation points for the states of the vehicle. Thus, the trajectory as well as the slip angle can be used for the validation. The results are visualized in Fig. 2 by the blue line. For a more accurate representation of the trajectory, the image is inverted.

4.2 Simulation Results
The driver behavior model with the integrated failure behavior model is embedded in our traffic simulation and controls vehicle A. The virtual motorway in the simulation scenario is modeled on the basis of the reconstruction image. Vehicle B is simulated by the traffic simulation and controlled accordingly. Additionally, another vehicle is placed on the right lane in our scenario. It forces vehicle A to perform the lane-change maneuver to the left lane of the motorway. At the same time, the simulated vehicle B is approaching from behind. As soon as vehicle B is approximately 50 m behind vehicle A, the failure behavior model is activated. The results are shown in Fig. 2 by the orange line.
Fig. 2. The graphs compare the simulation results with the reconstruction. The left graph describes the trajectory of vehicle A. The right graph visualizes the respective slip angle.
Referring to the left part of Fig. 2, the steering back to the right lane matches the data from the reconstruction. However, the reconstructed trajectory reaches its maximum earlier and also shows a lower amplitude. This deviation may be caused by different factors. On the one hand, it is likely that the neural network reacts too slowly to the oversteering vehicle, since it is optimized to maximize the slip angle. As pictured in the right part of Fig. 2, the slip angle in the simulation achieves a higher maximum earlier. On the other hand, it is important to note that the vehicle parameters are approximated and therefore the behavior of vehicle A may not be perfectly modeled.
5 Discussion and Future Work

In this contribution we used a reinforcement learning algorithm to model the failure behavior of drivers in skidding situations. Subsequently, the model was validated on the basis of a real accident, which was analyzed and reconstructed by the AARU. The validation shows that the neural network is able to approximate the behavior of the accident driver. Since we consider it impossible to generalize the driving behavior of human drivers, the deviation nevertheless still indicates human-like behavior. In our future work we will utilize the failure behavior models to generate multi-causal accidents or near-crash situations in our traffic simulation for testing ADFs.
References

1. McKinsey & Company: Automotive revolution – perspective towards 2030: How the convergence of disruptive technology-driven trends could transform the auto industry. Report, Advanced Industries (2016)
2. Singh, S.: Critical reasons for crashes investigated in the national motor vehicle crash causation survey. Report, Traffic Safety Facts – Crash (2015)
3. Ardelt, M., Coester, C., Kaempchen, N.: Highly automated driving on freeways in real traffic using a probabilistic framework. In: 2012 IEEE Intelligent Transportation Systems (ITSC), pp. 1576–1585 (2012)
4. Bergenhem, C., Johansson, R., Soederberg, A., Nilsson, J., Tryggvesson, J., Toerngren, S., Ursing, S.: How to reach complete safety requirement refinement for autonomous vehicles. In: CARS 2015 – Critical Automotive Applications (2015)
5. Winner, H., Wachenfeld, W., Junietz, P.: Validation and introduction of automated driving. In: Winner, H., Prokop, G., Maurer, M. (eds.) Automotive Systems Engineering II, pp. 177–196. Springer, Cham (2018)
6. Shokry, H., Hinchey, M.: Model-based verification of embedded software (2009)
7. Ulbrich, S., Schuldt, F., Homeier, K., Steinhoff, M., Menzel, T., Krause, J., Maurer, M.: Testing and validating tactical lane change behavior planning for automated driving. In: Watzenig, D., Horn, M. (eds.) Automated Driving, pp. 451–471. Springer, Cham (2017)
8. Schuldt, F., Reschka, A., Maurer, M.: A method for an efficient, systematic test case generation for advanced driver assistance systems in virtual environments. In: Winner, H., Prokop, G., Maurer, M. (eds.) Automotive Systems Engineering II, pp. 147–175. Springer, Cham (2017)
9. Menzel, T., Bagschick, G., Isensee, L., Schomburg, A., Maurer, M.: From functional to logical scenarios: detailing a keyword-based scenario description for execution in a simulation environment. Technical report, IEEE Intelligent Vehicles Symposium (2019)
10. Michon, J.A.: A critical view of driver behavior models: what do we know, what should we do? In: Evans, L., Schwing, R.C. (eds.) Human Behavior and Traffic Safety, pp. 485–524. Springer, Boston (1985)
11. Huber, B., Sippl, C., German, R., Djanatliev, A.: A validated failure behavior model for driver models to test automated driving functions. In: Ahram, T., Taiar, R., Colson, S., Choplin, A. (eds.) Human Interaction and Emerging Technologies, IHIET 2019. Advances in Intelligent Systems and Computing, vol. 1018. Springer, Cham (2020)
98
B. Huber et al.
12. Weber, S., Ernstberger, A., Donner, E., Kiss, M.: Learning from accidents: using technical and subjective information to identify accident mechanisms and to develop driver assistance systems. In: Dorn, L., Sullman, M. (eds.) Driver Behaviour and Training, vol. VI, pp. 223–230. Ashgate, Surrey (2013)
13. Winkle, T.: Safety benefits of automated vehicles: extended findings from accident research for development, validation and testing. In: Maurer, M., Gerdes, J., Lenz, B., Winner, H. (eds.) Autonomous Driving. Springer, Heidelberg (2016)
14. Human Factor Consult: Driver Modeling. https://human-factors-consult.de/en/services-andproducts/driver-modeling/
15. Schramm, D., Hiller, M., Bardini, R.: Zweispurmodelle. In: Modellbildung und Simulation der Dynamik von Kraftfahrzeugen. Springer, Heidelberg (2018)
16. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)
17. Von Vietinghoff, A.: Nichtlineare Regelung von Kraftfahrzeugen in querdynamischen Fahrsituationen. Institut fuer Industrielle Informationstechnik (2008)
Human-Machine Interface Research of Autonomous Vehicles Based on Cognitive Work Analysis Framework

Chi Zhang, Guodong Yin (✉), and Zhen Wu

School of Mechanical Engineering, Southeast University, Nanjing 211189, China
[email protected], [email protected], [email protected]

Abstract. This article deals with the human-machine interface (HMI) design of autonomous vehicles. Due to the limitations of current autonomous driving technologies, drivers are still required to participate in some driving tasks, which creates many challenges for the HMI design of autonomous vehicles, especially when drivers are using in-vehicle information systems (IVIS) or mobile devices. This paper presents a methodology aimed at addressing these challenges. The method relies on the Cognitive Work Analysis framework and mainly uses the example of the vehicle-to-driver handover phase to identify and organize the information that should be displayed to minimize distraction and maximize safety benefits. Finally, some practical HMI design principles for autonomous vehicles are provided.

Keywords: Human factors · Interaction design · Systems engineering
1 Introduction

In recent years, many vehicle functions have become automated, including some main driving tasks. Autonomous driving technology is becoming increasingly powerful: the latest advanced automated driving systems can already take full control of the car under limited road conditions, and drivers can delegate control to automated systems when they wish. The current SAE (Society of Automotive Engineers) taxonomy provides a clear classification ranging from Level 0 to Level 5, literally from no driving automation to full driving automation [1]. This article deals with the human-machine interface (HMI) design of autonomous vehicles. Under SAE Level 3, the system is responsible for monitoring the environment, and drivers are expected to respond and control the car if there is a request to intervene. Wiener and Curry indicate that increased automation can lead to errors [2]. This is probably caused by a lack of situational awareness: drivers assume the system can control the whole driving process, so they stop paying attention. In the first part, we present the research context and the methodology used. Then we examine a specific handover situation, called non-scheduled system-initiated handover, using the Cognitive Work Analysis framework. Finally, we provide some design principles to guide the interface design of autonomous vehicles.

The original version of this chapter was revised: the authors' names have been amended. The correction to this chapter is available at https://doi.org/10.1007/978-3-030-39512-4_197

© Springer Nature Switzerland AG 2020, corrected publication 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 99–103, 2020. https://doi.org/10.1007/978-3-030-39512-4_16
2 Context and Methodology

Many researchers have conducted studies on handover situations in autonomous vehicles. At SAE Level 3, the driver is necessary but is not required to monitor the environment. The driver can therefore perform other activities, but also needs to understand the situation inside and outside the car in order to drive manually when needed. It is necessary to convey this information through an appropriate human-machine interface so that the driver can know these situations, that is, establish accurate situational awareness. If the driver cannot get enough information through the interface, the lack of situational awareness may not only cause accidents but also lead drivers to fear driverless technology and resist it [3]. The handover process has three specific phases:

1. Automated driving phase: the autonomous driving system controls the vehicle.
2. Handover phase from the system to the driver. This process can be divided into scheduled handover and non-scheduled handover [4].
3. The transition from manual driving back to the autonomous driving system: the human driver believes the manual driving task has been completed and is ready to hand control over to the system again.

This paper specifically discusses how to maintain situational awareness in the non-scheduled system-initiated handover, because this condition is likely to lead to accidents. To support this, we refer to Endsley's model, which divides situational awareness into three parts: perceiving elements within the environment, understanding their meaning with respect to task goals, and predicting specific future actions. We then use the CWA methodology to extract the information required by drivers. Based on this information, some design principles are established to guide the interface design of autonomous vehicles.
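The three handover phases above can be sketched as a minimal state machine. This is an illustrative sketch only: the phase names and the transition table are our reading of the taxonomy in [4], not code from the paper.

```python
from enum import Enum, auto

class Phase(Enum):
    AUTOMATED = auto()   # the autonomous driving system controls the vehicle
    HANDOVER = auto()    # control passes from the system to the driver
    MANUAL = auto()      # the human driver controls the vehicle

# Allowed transitions between the three phases described in the text.
# The HANDOVER phase is entered either as a scheduled handover or, as in
# the case studied here, as a non-scheduled system-initiated one.
TRANSITIONS = {
    Phase.AUTOMATED: {Phase.HANDOVER},
    Phase.HANDOVER: {Phase.MANUAL},
    Phase.MANUAL: {Phase.AUTOMATED},  # driver returns control to the system
}

def step(current: Phase, target: Phase) -> Phase:
    """Move to `target` if the transition is allowed, else stay put."""
    return target if target in TRANSITIONS[current] else current
```

In this reading, a direct jump from automated to manual driving is disallowed: every transfer of control must pass through an explicit handover phase, which is where situational awareness must be (re)established.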
3 Cognitive Work Analysis

Cognitive Work Analysis was proposed by Rasmussen [5]. This research method comes from Rasmussen's analysis of industrial accidents. Vicente developed five stages of Cognitive Work Analysis [6]: Work Domain Analysis, Control Task Analysis, Strategies Analysis, Social Organization and Cooperation Analysis, and Worker Competencies Analysis. Of these five parts of CWA, this paper mainly uses two: Work Domain Analysis (WDA) and Control Task Analysis (ConTA).

3.1 Work Domain Analysis
The implementation hierarchy supports five levels of abstraction to describe the work domain: functional purposes (the goals of the work domain), abstract functions, general functions, physical processes and activities, and physical objects. The five levels are connected step by step to form the means-ends architecture of the system. Read from top to bottom, the links express how system functions are realized; read from bottom to top, they explain why a specific facility or process exists in the system.

Fig. 1. The means-ends hierarchy: functional purposes, abstract functions, general functions, physical functions and physical objects, linked by ends-means relations.
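For illustration, the means-ends links of the hierarchy can be held in a simple mapping from each node to the lower-level nodes that realize it. The example nodes below are hypothetical placeholders chosen by us, not taken from the analysis in [7]; only the five level names come from the text.

```python
# The five levels of the abstraction hierarchy, top to bottom.
LEVELS = [
    "functional purposes",
    "abstract functions",
    "general functions",
    "physical processes and activities",
    "physical objects",
]

# means_of[node] lists the nodes one level below that realize it (its
# "means"). Reading the links downward answers "how is this achieved?";
# reading them upward answers "why does this exist?".
means_of = {
    "safe and efficient driving": ["controllability"],
    "controllability": ["display vehicle state to driver"],
    "display vehicle state to driver": ["render HMI screen"],
    "render HMI screen": ["dashboard display", "driver's eyes"],
}

def why(node, links):
    """Walk one level upward: return every node that `node` is a means for."""
    return [parent for parent, children in links.items() if node in children]
```

Calling `why("controllability", means_of)` returns `["safe and efficient driving"]`, mirroring the bottom-up reading of the hierarchy described above.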
A work domain analysis of Level 3 automated driving has already been carried out by Debernard [7], so we will not repeat all stages of the WDA here. Based on the cognitive work analysis of Level 3 autonomous vehicles conducted by S. Debernard, we know that the main purposes of a Level 3 autonomous vehicle are to realize a safe and efficient driving process. In addition, we add some supplementary information specific to the handover process. In the abstract function layer: in order to achieve controllability, the conditions inside and outside the vehicle should be clearly displayed, and the driver must be able to understand this information. In the physical function layer: the driver's ability to acquire information should be taken into account. In the physical object layer: in addition to the vehicle's sensors, the driver's sensory organs should also be considered. Based on the informational basis provided by the abstraction hierarchy method, Control Task Analysis can then be conducted.

3.2 Control Task Analysis
Control Task Analysis (ConTA) addresses the activities needed to meet specific system goals. In the handover scenario, because a human is involved, there are many observation, decision-making and analysis processes; in the CWA framework this analysis is called ConTA. ConTA can take the form of a decision ladder [6] to represent the control task and to transform the specific activities from the WDA into a process of information processing and decision making. In this form, we use black and white boxes instead of the traditional representation to maximize the use of the page: black boxes represent information-processing activities and white boxes represent the cognitive states generated by these activities. Figure 2 shows the different stages of the handover situation performed by the system and identifies what needs to be done in each stage according to the functional goals. In ConTA, Observe is the phase in which the driver receives information. After receiving the observed information, the driver judges the current state, which is the Diagnose State step in the ladder. A skilled driver will perform tasks directly after understanding the status, but a novice driver will continue to process further information (Fig. 2).
Fig. 2. Handover decision ladder. The ladder runs from the initial Alert (why is control being transferred? is the situation dangerous?), through Observe (current speed, vehicle position, surrounding traffic, obstacles, weather), Diagnose State (what prevents the autopilot from continuing, current road conditions), prediction of consequences and choice among options (take over quickly, change lanes, change speed, brake, continue driving), to the definition of the task (fast and secure transfer of control from the technical agent to the driver) and the planned procedure (maintain situational awareness, gradually take over control, slow down, change lane or pull over, then return control to the system).
4 Conclusion

Through the Cognitive Work Analysis methodology, we have obtained much of the information needed by the driver. Based on the four main stages of the decision ladder, we can summarize some design principles and requirements.

Information acquisition:
Principle 1: in the alert phase, there should be a clear reminder to let the driver know that the system is no longer able to continue the driving task.
Principle 2: sufficient time should be allowed for the driver to observe the environment during the switching process. If this cannot be done, the system should transfer control gradually and take deceleration actions to ensure safety.

Information analysis:
Principle 3: while the driver is observing the environment, the system can provide necessary assistance, enabling the driver to find the cause of the automated driving system's failure faster and thus quickly regain situational awareness of the environment.

Task selection:
Principle 4: the system should alert the driver to possibly dangerous conditions to help the driver make correct predictions.

Task implementation:
Principle 5: the system should provide guidance while the driver performs a driving task.
References

1. Automated Vehicles for Safety – NHTSA (2018). https://www.nhtsa.gov/technologyinnovation/automated-vehicles-safety
2. Wiener, E.L., Curry, R.E.: Flight-deck automation: promises and problems. Ergonomics 23(10), 995–1011 (1980)
3. Bennett, R., Vijaygopal, R., Kottasz, R.: Willingness of people with mental health disabilities to travel in driverless vehicles. J. Transp. Health 12, 1–12 (2019)
4. McCall, R., McGee, F., Mirnig, A., et al.: A taxonomy of autonomous vehicle handover situations. Transp. Res. Part A Policy Pract. 124, 507–522 (2018)
5. Rasmussen, J.: Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. North-Holland, New York (1986)
6. Vicente, K.J.: Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. CRC Press, Boca Raton (1999)
7. Debernard, S., Chauvin, C.: Designing human-machine interface for autonomous vehicles. IFAC-PapersOnLine (2016)
Mercury: A Vision-Based Framework for Driver Monitoring

Guido Borghi² (✉), Stefano Pini¹, Roberto Vezzani¹, and Rita Cucchiara¹

¹ Dipartimento di Ingegneria “Enzo Ferrari”, Università degli Studi di Modena e Reggio Emilia, Modena, Italy
{s.pini,roberto.vezzani,rita.cucchiara}@unimore.it
² Centro di Ricerca Interdipartimentale Softech-ICT, Università degli Studi di Modena e Reggio Emilia, 41125 Modena, Italy
[email protected]
Abstract. In this paper, we propose a complete framework, namely Mercury, that combines Computer Vision and Deep Learning algorithms to continuously monitor the driver during the driving activity. The proposed solution complies with the requirements imposed by the challenging automotive context. First, light invariance: the system must work regardless of the time of day and the weather conditions. Therefore, infrared-based images, i.e. depth maps (in which each pixel corresponds to the distance between the sensor and that point in the scene), have been exploited in conjunction with traditional intensity images. Second, non-invasiveness: the driver's movements must not be impeded during the driving activity, and in this context the use of cameras and vision-based algorithms is one of the best solutions. Finally, real-time performance is needed, since a monitoring system must react immediately as soon as a situation of potential danger is detected.

Keywords: Driver Monitoring · Human-Car Interaction · Computer Vision · Deep Learning · Convolutional neural networks · Depth maps
1 Introduction

The loss of vehicle control is a common problem, due to driving distractions and linked to the driver's stress, fatigue and poor psycho-physical condition. Indeed, humans are easily distracted, struggling to keep a constant concentration level and showing signs of drowsiness after just a few hours of driving [1]. Furthermore, the future arrival of (semi-)autonomous cars and the necessary transition period, characterized by the coexistence of traditional and autonomous vehicles, is going to increase the already high interest in driver attention monitoring systems, since for legal, moral and ethical reasons the driver must be ready to take control of the (semi-)autonomous car [2]. Therefore, in this paper, a driver monitoring system, here referred to as Mercury, based on Computer Vision and Deep Learning algorithms, is proposed. From a technical point of view, the Mercury framework is composed of different stages.

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 104–110, 2020. https://doi.org/10.1007/978-3-030-39512-4_17
Given an input depth map, in the first step the head center is automatically detected, and a bounding box containing the driver's head and a minor part of the background is extracted. Then, this crop is fed into a system that estimates the 3D pose of the head in terms of yaw, pitch and roll angles. On the same head crop, a facial landmark localization algorithm is applied in order to identify the salient areas that belong to the face and are visible in both the depth and the intensity frame. We believe that head pose information is useful for driver monitoring, to understand where the driver is looking (e.g. inside or outside the car cabin), and for infotainment systems, in order to make new Human-Car Interaction systems more intuitive and user-friendly and to speed up the operations conducted by the driver inside the cockpit [3]. Finally, the driver's eyes, detected through the facial landmark system mentioned above, are analyzed to estimate the level of driver drowsiness. Specifically, PERCLOS, a drowsiness detection measure, is exploited.
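The stages above can be summarized as one processing step per frame. The sketch below is ours, with each module injected as a callable; the module names and the dummy stand-ins are assumptions for illustration, not the authors' actual API.

```python
def mercury_step(depth_map, frame, modules, closure_history):
    """Run one frame through the monitoring chain and return its outputs."""
    crop = modules["head_detect"](depth_map)            # head bounding box
    pose = modules["head_pose"](crop)                   # (yaw, pitch, roll)
    landmarks = modules["landmarks"](crop)              # 5 facial landmarks
    eyes_closed = modules["eye_state"](frame, landmarks)
    closure_history.append(eyes_closed)                 # later fed to PERCLOS
    return pose, landmarks, eyes_closed

# Dummy stand-ins so the chain can be exercised end to end; in the real
# framework each entry would be one of the learned modules described below.
dummy = {
    "head_detect": lambda d: d,
    "head_pose": lambda c: (0.0, 0.0, 0.0),
    "landmarks": lambda c: [(0, 0)] * 5,
    "eye_state": lambda f, lm: False,
}
history = []
pose, landmarks, closed = mercury_step(None, None, dummy, history)
```

Injecting the stages as callables also reflects the modularity discussed in the next section: an RGB-only or depth-only configuration simply swaps in a different set of modules.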
2 Mercury

The whole architecture of the system is represented in Fig. 1. The system has been set up in a modular way, since it is able to work with RGB data only, with depth data only, or with both. The functions available for each type of data differ and are described in the following paragraphs.
Fig. 1. Overall architecture of the proposed framework. Blue blocks represent algorithms applied to RGB or gray-level frames, while green blocks report the solutions for depth maps. Finally, orange boxes represent the measures computed for the Driver Monitoring task.
2.1 Acquisition Device
The Microsoft Kinect One sensor has been exploited as the acquisition device. Although more recent depth sensors with form factors more suitable for the automotive field are available on the market, this device has been preferred due to the availability and relative simplicity of its proprietary SDK, which is compatible with the Python
programming language (https://github.com/Kinect/PyKinect2), as well as the simultaneous presence of an excellent color camera and a depth sensor with good spatial resolution and a low noise level. Moreover, the minimum range for data acquisition allows its use in the automotive context, since in our case the device is placed in front of the driver, near the car dashboard. The technical specifications of the device are as follows:

• Full HD RGB camera: the spatial resolution is 1920 × 1080. The device is able to acquire up to 30 frames per second;
• Depth sensor: the spatial resolution is 512 × 424. This sensor provides depth information as a two-dimensional array of pixels, namely a depth map, similar to a gray-level image. Since each pixel represents the distance in millimeters from the camera, depth images are coded on 16 bits;
• Infrared emitter: the device is based on Time-of-Flight (ToF) technology and its range goes from 0.5 up to 7 m;
• Microphone array: audio data are sampled at 16 kHz and coded on 16 bits. The array is used to pick up voice commands, to calibrate the peripheral and to suppress white noise. Microphones are not used in this project.

2.2 Head Extraction
The first step of the framework is the automatic extraction of the head from the acquired frames. It is assumed that within the acquired scene there is no more than one person, i.e. the driver, sitting in front of the acquisition device, about 1 m away. For head detection on RGB frames, the well-known object detection framework of Viola & Jones [4] has been exploited: despite the presence of more recent deep learning-based algorithms, it is still a valid choice for its effectiveness and good speed performance. Moreover, we note that many car companies still use derivative versions of this algorithm. Head detection on depth maps is conducted through the Fully Convolutional Network proposed in [5]: the limited depth of the network balances detection accuracy and speed performance. The output of the RGB and depth head detection modules is the head crop, a bounding box containing the driver's face with minimal background portions.

2.3 Head Pose Estimation
The obtained head crop is used to produce three different inputs for the Head Pose Estimation module. The first input is the raw depth map, while the second is a Motion Image, obtained by running an Optical Flow algorithm (Farneback implementation) on a sequence of depth images. The third input is generated by a network called Face-from-Depth [6], which is able to reconstruct gray-level face images
starting from the related depth images. All three inputs are then processed by a regressive Convolutional Neural Network (CNN) [7] that finally outputs the yaw, pitch and roll 3D angles as continuous values.

2.4 Facial Landmark Estimation
The goal of this module is a reliable estimation of the facial landmark coordinates, i.e. salient regions of the face such as the eyes, eyebrows, mouth, nose and jawline. Due to the limited spatial resolution of the available depth images, we focus on a selection of five facial landmarks: the eye pupils, the mouth corners and the nose tip. Accordingly, the system outputs 10 image coordinates, i.e. the x and y values for each facial landmark. The core of the method is a CNN [8] that works in regression and receives a stream of depth images as input. The ground truth annotation of the landmark positions is required during the network training step and is used as a comparison during testing. The developed system has real-time performance and is more reliable than state-of-the-art competitors in the presence of poor illumination and light changes, thanks to the use of depth images as input. The extraction of the facial landmarks allows obtaining the bounding boxes of the driver's right and left eyes: these images are then analyzed by a CNN to determine whether the eye is open or closed.

2.5 Driver Monitoring
The analysis of the attention and fatigue level of the driver is done through the indicator called PERCLOS (PERcentage of eyelid CLOSure), introduced in 1994 in [9]. This measure expresses the percentage of time within a minute during which the eye remains closed from 80% to 100%. The computation of the PERCLOS measurement yields a numerical value that can be analyzed to determine the driver's fatigue level. Blinks, identifiable as almost instantaneous closures of the eye, are excluded from the computation; only the prolonged and slow closures, usually called droops, are kept. The level of driver attention is then classified through three thresholds:

• Alert (PERCLOS < 0.3): good level of attention, optimal driver conditions;
• Drowsy (0.3 < PERCLOS < 0.7): first signs of carelessness and fatigue; the driver is not in optimal condition, which introduces some risk;
• Unfocused (PERCLOS > 0.7): complete lack of attention; the driver is in poor physical condition or is sleeping.
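A minimal sketch of the PERCLOS computation and the three-level classification described above. The thresholds follow the text; the frame rate, the 0.5 s blink/droop cut-off, and the handling of values exactly at a threshold are our assumptions.

```python
def perclos(closed_flags, fps=30, min_droop_s=0.5):
    """PERCLOS over a window of frames: the fraction of time the eye is
    closed, counting only prolonged closures ("droops") and excluding
    blinks shorter than `min_droop_s` (that cut-off is our assumption).
    `closed_flags` is a list with one boolean per frame
    (True = eye closed from 80% to 100%)."""
    min_run = int(min_droop_s * fps)
    closed_frames, run = 0, 0
    for flag in closed_flags + [False]:   # trailing sentinel flushes the last run
        if flag:
            run += 1
        else:
            if run >= min_run:            # keep droops, discard blinks
                closed_frames += run
            run = 0
    return closed_frames / max(len(closed_flags), 1)

def attention_level(p):
    """Classify driver attention with the thresholds given in the text."""
    if p < 0.3:
        return "alert"
    elif p < 0.7:
        return "drowsy"
    return "unfocused"
```

For example, at 30 fps a window with the eye closed for a single continuous second out of two scores 0.5 ("drowsy"), while three isolated closed frames, a blink, score 0.0 ("alert").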
3 Implementation In the following paragraphs, details about the hardware and the software implementation of Mercury are reported and discussed.
3.1 Hardware Implementation
The Mercury framework has been implemented and tested on a computer equipped with an Intel Core i7-7700K processor, 32 GB of RAM and an Nvidia 1080 Ti GPU. The deep learning components have been implemented with the Keras framework using TensorFlow as backend. Finally, a graphical interface has been created with the Qt libraries and is detailed in the following paragraph. Being aware that the aforementioned hardware is not suitable for the automotive context, for reasons related to cost and power consumption, we have started to implement these systems on embedded boards. The Nvidia TX2 board has been chosen, since it is equipped with an embedded GPU suitable for deep learning-based algorithms.

3.2 Graphical User Interface
The Graphical User Interface (GUI), shown in Fig. 2, has been created using the Qt libraries and the Python programming language. It is divided into four vertical sections: 1. Input Visualization, 2. Head Detection, 3. Head Pose Estimation and Facial Landmark Detection, and 4. Driver Attention Analysis. From the left, the first part contains the RGB and depth frames captured by the Kinect One device, for visualization and debugging purposes. The side column displays the output of the head detection modules; in particular, the localized bounding boxes are shown in green on the frames. The central part is dedicated to the driver's face analysis and is divided into several horizontal sections. Starting from the top, the head pose is displayed either through a 3D cube or with bars centered on the zero angle: the yaw angle is shown in blue, the roll angle in green and the pitch in red. Further down, the facial landmarks found on the RGB frame are shown as red dots. Thanks to these facial landmarks, the bounding boxes containing the right and left eye, displayed at the bottom, are extracted to compute the PERCLOS measure. At the extreme right, the information regarding the analysis of driver attention is reported. The PERCLOS measure is indicated by its percentage value and, according to the established thresholds, the term alert, drowsy or unfocused is highlighted. The head pose is instead indicated by representing the possible position of the gaze on a given object. An image of a generic car interior is used to graphically convey the concept of Human-Car Interaction, highlighting in red the attended object inside the passenger compartment. Some common areas have been chosen as targets for the interaction: the right, central and left rear mirrors, the steering wheel area, the gearbox and the infotainment area.
Keras: https://keras.io/. TensorFlow: https://www.tensorflow.org/. Qt: https://www.qt.io/.
Finally, the bottom part of the GUI contains the buttons that allow the user to select the stream of data on which the framework must work, i.e. RGB, depth or both streams. Moreover, the frames per second measured for each part of the system are shown.
Fig. 2. The graphical user interface developed for the Mercury framework.
4 Conclusions

In this paper, Mercury, a framework for automatic Driver Monitoring, has been introduced. The input data consist of both RGB images and depth maps, in order to improve the light invariance of the system. Future work will include the implementation of the framework on real embedded boards suitable for the automotive context.
References

1. Young, K., Regan, M., Hammer, M.: Driver distraction: a review of the literature. In: Faulks, I.J., Regan, M., Stevenson, M., Brown, J., Porter, A., Irwin, J.D. (eds.) Distracted Driving, pp. 379–405. Australasian College of Road Safety, Sydney (2007)
2. Venturelli, M., Borghi, G., Vezzani, R., Cucchiara, R.: Deep head pose estimation from depth data for in-car automotive applications. In: International Workshop on Understanding Human Activities through 3D Sensors, pp. 74–85 (2016)
3. Nawaz, T., Mian, M.S., Habib, H.A.: Infotainment devices control by eye gaze and gesture recognition fusion. IEEE Trans. Consum. Electron. 54(2), 277–282 (2008)
4. Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)
5. Ballotta, D., Borghi, G., Vezzani, R., Cucchiara, R.: Fully convolutional network for head detection with depth images. In: 24th International Conference on Pattern Recognition (ICPR), pp. 752–757 (2018)
6. Borghi, G., Fabbri, M., Vezzani, R., Cucchiara, R.: Face-from-depth for head pose estimation on depth images. IEEE Trans. Pattern Anal. Mach. Intell. (2018)
7. Borghi, G., Venturelli, M., Vezzani, R., Cucchiara, R.: POSEidon: face-from-depth for driver pose estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4661–4670 (2017)
8. Frigieri, E., Borghi, G., Vezzani, R., Cucchiara, R.: Fast and accurate facial landmark localization in depth images for in-car applications. In: International Conference on Image Analysis and Processing, pp. 539–549 (2017)
9. Wierwille, W.W., Wreggit, S.S., Kirn, C.L., Ellsworth, L.A., Fairbanks, R.J.: Research on vehicle-based driver status/performance monitoring: development, validation, and refinement of algorithms for detection of driver drowsiness (1994)
Investigating the Impact of Time-Lagged End-to-End Control in Autonomous Driving

Haruna Asai (✉), Yoshihiro Hashimoto, and Giuseppe Lisi

Nagoya Institute of Technology, Nagoya, Japan
[email protected], [email protected]
Abstract. End-to-end training, the strategy by which deep neural networks learn to map raw pixels from front-facing cameras directly to steering commands, has recently gained attention in the autonomous driving field. Here, we investigate the possibility of extending this approach with a time-lagged procedure, by training the system to map raw pixels at time T to steering commands at time T+Lag. We are interested in evaluating such an approach for two main applications: (1) time-lagged end-to-end control towards an artificial driving instructor (ADI) that recommends future control actions to novice human drivers; (2) time-domain data augmentation to improve the performance of standard non-lagged end-to-end control. Our results show that time-lagged end-to-end training is not appropriate for time-lagged control, but using it for data augmentation leads to a smoother output in standard non-lagged end-to-end control. This suggests that time-lagged training improves the anticipatory ability of end-to-end control and that augmentation reduces overfitting.

Keywords: Artificial driving instructor (ADI) · Autonomous driving · Data augmentation · Time-lag · End-to-end control · Deep learning
1 Introduction

Deep neural networks that map raw pixels from front-facing cameras directly to steering commands have gained attention since the work by Bojarski et al. [1]. Such end-to-end systems promise to be more compact and performant compared with long pipelines of handcrafted feature extractors (e.g. lane and obstacle detectors). Here, we investigate the possibility of extending this approach with time-lagged end-to-end control, by training the system to map raw pixels at time T to steering commands at time T+Lag. We are interested in evaluating such an approach for two main applications: (1) prototyping an artificial driving instructor (ADI) that recommends future control actions to novice human drivers, given the current camera pixels; (2) time-domain data augmentation to improve the performance of standard end-to-end control.

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 111–117, 2020. https://doi.org/10.1007/978-3-030-39512-4_18

Artificial Driving Instructor (ADI) – Experiment 1. In this paper we propose the ADI, a type of advanced driver-assistance system (ADAS) [2] that teaches novice human drivers to drive by recommending control actions (Fig. 1). Implementing the ADI in a controlled environment, such as a driving test track, has numerous advantages: (1) novice drivers learn from the ADI and vice versa, in a co-adaptive manner; (2) test
112
H. Asai et al.
autonomous driving algorithms while collecting data, (3) provide a service such as an automated driving school, and (4) minimize the risk of accidents. Given these characteristics, a successful implementation of the ADI is a necessary condition for the mass adoption of autonomous driving. In this framework, end-to-end training based on deep learning makes it easy to learn a controller for a new test track by simply collecting data with a proficient human driver, eliminating the need to re-program complex track-specific algorithms. Subsequently, the neural network can adapt itself based on the incoming successful data from the novice driver. Here, we simulate the system suggestion by storing the output produced at time T and using it as an actual control command at time T+Lag.
Fig. 1. Artificial Driving Instructor (ADI). (a) Co-adaptation in ADI. (b) In a controlled environment, such as a driving test track, the ADI recommends future control actions to novice human drivers.
Time-Domain Data Augmentation – Experiment 2. Data augmentation is a strategy that increases the diversity of data available in the training set via domain-specific transformations. For image data, commonly used transformations include random cropping and random perturbations of brightness, saturation, hue and contrast [3]. In this work, we perform data augmentation not only on the camera pixels but also in the time domain: we train the system to map the pixels at time T to steering commands chosen randomly between time T and T+Lag.
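As a minimal sketch of this time-domain augmentation (a hypothetical helper, not the authors' code: the function name, the `fps` parameter, and the uniform-lag assumption are ours), the label for a frame can be drawn from a random future point of the recorded steering log:

```python
import random

def lagged_label(steering_log, t_index, fps, max_lag_s):
    """Return the steering label for the frame at t_index.

    Instead of the synchronous label steering_log[t_index], draw a
    random lag in [0, max_lag_s] seconds and return the steering
    command recorded that far into the future, clipped to the end
    of the log.
    """
    lag_frames = int(random.uniform(0.0, max_lag_s) * fps)
    future_index = min(t_index + lag_frames, len(steering_log) - 1)
    return steering_log[future_index]
```

With max_lag_s = 0 this reduces to standard non-lagged training; the two augmented networks of Experiment 2 would correspond to max_lag_s values of 3 and 10 s.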
2 Methods

In the next two sections we describe in detail the two experiments we conducted, targeting the time-lagged end-to-end control for the ADI and the time-domain data augmentation, respectively. Both experiments are implemented in Python and Keras, using a simulator provided by Udacity. Deep neural networks (DNNs) with the same architecture as in Bojarski et al. [1] are employed. Standard data augmentation commonly employed in the Udacity context is performed in both experiments, such as randomly picking from the front, left and right cameras and correcting the angle accordingly [1],
Investigating the Impact of Time-Lagged End-to-End Control in Autonomous Driving
113
brightness changes, small normal noise on the steering value, and discarding a portion of the samples associated with steering angle 0 to avoid imbalance in the data (steering angles close to 0 are the majority). In both experiments, training and testing are performed on the same track (i.e. track 1 of the Udacity software). Training was done on data from an 8-min run on the track, for a total of about 4 laps (batch size: 512, steps per epoch: 125, epochs: 50). Performance is assessed by making the DNN drive for a maximum of 120 s around the track, and computing the steering angle absolute deviation and the time to crash. The former represents the maximum angle of the steering wheel at each turn, and smaller values indicate a less shaky and better drive. It is defined as d(p) = |θ(p)|, where θ represents the steering angle, and p is a local positive or negative peak of θ, found using the argrelextrema function from SciPy. For each run all the local peaks are found and all the d(p) are computed; the distributions of d(p) are then compared across DNNs, using one-way ANOVA, the Tukey multiple comparison test, and Cohen's d effect size. With respect to the time-to-crash, a value of 120 s indicates that the car was able to complete a run without crashing.
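The deviation metric described above can be sketched as follows (a hypothetical reimplementation of the paper's description; the function and variable names are ours):

```python
import numpy as np
from scipy.signal import argrelextrema

def steering_deviation(theta):
    """Compute d(p) = |theta(p)| at every local peak of a steering trace.

    theta: 1-D array of steering angles recorded over a run.
    Both positive (maxima) and negative (minima) peaks are used,
    as in the smoothness comparison across DNNs.
    """
    theta = np.asarray(theta, dtype=float)
    maxima = argrelextrema(theta, np.greater)[0]
    minima = argrelextrema(theta, np.less)[0]
    peaks = np.sort(np.concatenate([maxima, minima]))
    return np.abs(theta[peaks])
```

Smaller d(p) values across a run correspond to a less shaky drive.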
Fig. 2. Time-lagged end-to-end training strategy. For each time-lag a deep neural network (DNN) is trained to predict the steering control at time T+Lag given an image at time T. With the data augmentation strategy, a neural network is trained by randomly selecting a different time-lag for each sample.
2.1 Time-Lagged End-to-End Training: Experiment 1
In this experiment, a DNN is trained to map raw pixels at time T to steering commands at time T+Lag (Fig. 2). In total we trained 5 DNNs, each with a different lag time: 0.5, 1, 3, 5, and 10 s. Moreover, we trained a DNN with no lag as a baseline model.

Experiment 1-1. The trained networks are evaluated by storing the output produced at time T and using it as an actual control command at time T+Lag. This is represented as Experiment 1-1 in Fig. 3.
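The evaluation of Experiment 1-1 (store the output produced at time T, apply it at T+Lag) can be simulated with a simple delay buffer; this is an illustrative sketch under our own naming, not the authors' implementation:

```python
from collections import deque

class DelayedControl:
    """Apply each predicted steering command lag_steps frames later.

    Until enough predictions have accumulated, a neutral command
    (steering 0.0) is emitted.
    """
    def __init__(self, lag_steps):
        self.buffer = deque([0.0] * lag_steps)

    def step(self, predicted_steering):
        self.buffer.append(predicted_steering)   # prediction made now
        return self.buffer.popleft()             # command made lag_steps ago
```

At each simulator frame, the DNN output is pushed into the buffer and the command popped out is the one predicted lag_steps frames earlier.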
Experiment 1-2. Moreover, we tested the same networks by applying the output immediately at time T (i.e. non-lagged control). This is labeled as Experiment 1-2 in Fig. 3. For each lag level, we repeated this experiment 10 times.

2.2 Time-Domain Data Augmentation: Experiment 2
A neural network is trained by performing data augmentation using different lag times at random. Specifically, based on the observations of Experiment 1, we trained two different neural networks with lag augmentation ranges of 0–3 s and 0–10 s, respectively. The trained neural network is evaluated in the same manner as in Experiment 1-2, by using the raw pixels at time T to produce the steering command and applying it immediately, without lag. This is labeled as Experiment 2 in Fig. 3. For each lag level, we repeated this experiment 10 times.
Fig. 3. Evaluation strategies in the two experiments
3 Results

3.1 Time-Lagged End-to-End Training: Results of Experiment 1
In Experiment 1-1, the time-lagged end-to-end control is clearly challenging at every lag level, and the car crashes off the track after a few seconds of driving. The time-to-crash (TTC) of each lag level in Experiment 1-1 is reported in Table 1.

Table 1. Experiment 1-1. Time-to-crash (TTC) of time-lagged control (max. 120 s).

Time lag (s):  0    0.5  1   3   5   10
TTC (s):       120  32   17  17  17  54
Investigating the Impact of Time-Lagged End-to-End Control in Autonomous Driving
115
However, to our surprise, in Experiment 1-2, if the control command of the networks trained with lag is used immediately, instead of storing the output and waiting for the respective lag, the car is able to drive an entire lap without crashing, up to 3 s of lag. The time-to-crash (TTC) of each lag level in Experiment 1-2 is reported in Table 2.

Table 2. Experiment 1-2. Time-to-crash (TTC) of non-lagged control (max. 120 s). Mean ± standard deviation across 10 trials.

Time lag (s):  0        0.5      1              3               5             10
TTC (s):       120 ± 0  120 ± 0  84.60 ± 16.24  115.17 ± 15.25  57.10 ± 7.37  55.91 ± 9.42
3.2 Time-Domain Data Augmentation: Results of Experiment 2
In Table 3 we observe that the network trained with data augmentation between 0–3 s does not crash in any of the 10 trials, while the 0–10 s network crashes twice.

Table 3. Time-to-crash of non-lagged control using data augmentation (max. 120 s). Time-lag 0 is included for comparison.

Time lag (s):  0    0–3  0–10
TTC (s):       120  120  110.46 ± 20.97
3.3 Comparing Steering Angle Absolute Deviation
We analyzed the steering angle absolute deviation only for the runs without a crash. Therefore, the comparison includes only a subset of models (Fig. 4). There are statistically significant differences between group means as determined by one-way ANOVA (F(4, 5554) = 232.1, p = 1.41 × 10⁻¹⁸⁴). The results of multiple comparisons and effect sizes are reported in Table 4 and Fig. 4. Notably, Lag: 0.5 and Aug: 0–3 have a lower deviation than Lag: 0, with differences of 2.15° and 1.56°, respectively. In both cases, the effect sizes are close to a medium level (i.e. 0.5 [4]). It should be considered that the deviations are computed in absolute terms; therefore the reported differences affect both left and right steering, effectively doubling their effect (e.g. 2.15° → 4.30°).
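The group comparison can be reproduced along these lines (a sketch with hypothetical function names: SciPy provides the one-way ANOVA, a Tukey post-hoc test would come from e.g. statsmodels, and Cohen's d is computed here with a pooled standard deviation):

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d between two samples, using a pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def compare_models(deviations):
    """deviations: dict mapping model name -> array of d(p) values.

    Returns the one-way ANOVA F statistic and p-value across models.
    """
    f_stat, p_value = stats.f_oneway(*deviations.values())
    return f_stat, p_value
```

Pairwise differences between the significant groups would then be quantified with Cohen's d, as in Table 4.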
Fig. 4. Boxplot representing the steering angle absolute deviation of each model.
4 Discussion

Results of Experiment 1-1 indicate that the time-lagged end-to-end control makes the car crash almost immediately. Indeed, the car behaves like an unstable oscillator and, after an initial mistake, cannot recover due to the lag. In general, a DNN trained with an end-to-end strategy learns to follow the borders of the track, making it reactive rather than proactive. This suggests that memory-based control by recurrent neural networks may be more appropriate. Overall, we observed that end-to-end training produces strong oscillations that feel unnatural. Therefore, for the ADI, combining deep learning with control theory (e.g. model predictive contouring control, MPCC) may be the approach to pursue [5].

Table 4. Multiple comparisons of steering angle deviation across models that do not crash.

Group 1    Group 2    Mean diff  p-value  Cohen's d
Aug: 0–10  Aug: 0–3   −0.96      0.0      −0.27
Aug: 0–10  Lag: 0      0.59      0.0002    0.16
Aug: 0–10  Lag: 0.5   −1.56      0.0      −0.49
Aug: 0–10  Lag: 3      2.53      0.0       0.60
Aug: 0–3   Lag: 0      1.56      0.0       0.45
Aug: 0–3   Lag: 0.5   −0.59      0.0      −0.19
Aug: 0–3   Lag: 3      3.50      0.0       0.87
Lag: 0     Lag: 0.5   −2.15      0.0      −0.68
Lag: 0     Lag: 3      1.94      0.0       0.48
Lag: 0.5   Lag: 3      4.09      0.0       1.09
Experiment 1-2 and Experiment 2 indicate that time-lagged training improves non-lagged control. This may be due to the ability to anticipate an action and achieve a better response time. Specifically, compared to Lag: 0, Lag: 0.5 and Aug: 0–3 achieve a
smoother drive. The time-domain data augmentation strategy (Aug: 0–3) may be more general, since it includes several time-lags. Indeed, randomized time-lagged training reduces the effect of overfitting and increases generalizability, producing a smoother output. Future studies may address this point by testing this method on different tracks and road conditions using an advanced simulator such as CARLA [6].
References 1. Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J., Zhang, X.: End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016) 2. Geronimo, D., Lopez, A.M., Sappa, A.D., Graf, T.: Survey of pedestrian detection for advanced driver assistance systems. IEEE Trans. Pattern Anal. Mach. Intell. 32(7), 1239– 1258 (2009) 3. Zhang, C., Bengio, S., Hardt, M., Recht, B., Vinyals, O.: Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530 (2016) 4. Cohen, J.: Statistical Power Analysis for the Behavioral Sciences. Routledge, London (1988). ISBN 978-1-134-74270-7 5. Kabzan, J., Valls, M.D.L.I., Reijgwart, V., Hendrikx, H.F.C., Ehmke, C., Prajapat, M., Bühler, A., Gosala, N., Gupta, M., Sivanesan, R., Dhall, A.: AMZ Driverless: The Full Autonomous Racing System. arXiv preprint arXiv:1905.05150 (2019) 6. Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V.: CARLA: An open urban driving simulator. arXiv preprint arXiv:1711.03938 (2017)
The Car as a Transformer Jeremy Aston and Rui Pedro Freire ESAD, College of Art & Design; ESAD Idea (Investigation in Design and Art), Av. Calouste Gulbenkian 1011, Sra. da Hora, 4460-268 Porto, Portugal {jeremyaston,ruipedro}@esad.pt
Abstract. Vehicle design is at the forefront of innovation as we enter the next technological era (Industry 4.0). This transition from traditional to alternative vehicles concerns not only the engine configuration but the whole car. Electrical optimisation, autonomous driving and ‘drive-by-wire’ are the three main trends currently developing within the automotive industry [1]. These changes provide designers with new opportunities to reimagine cars, as well as their own contributions to design [2]. ESAD College provided creative activities to guide the students through the design process by creating personas, scenarios and goals [5]. As a result, five different vehicle concepts were proposed, addressing ‘modularity in use’ (MIU).

Keywords: Modularity in use · Vehicle design · Personas in design thinking · Alternative vehicles · Autonomous driving · Intelligent mobility · Industry 4.0
1 Introduction

The article written by Martin Luccarelli, ‘Technological challenges and the new design opportunities’, published in Auto & Design magazine [2], was used to introduce the project theme ‘The car as a transformer’ and acted as an inspirational tool for this academic project.

Article. Car manufacturers are currently investing billions in energy storage systems to retain technical advantages over their competitors. However, there is still some uncertainty regarding the future orientation of sustainable passenger mobility [3]. In this regard, modularity is an appealing method to cope with new vehicle concepts and the need to produce them using current production systems. This methodology can be applied to the product, ‘modularity in design’ (MID), to its production process, ‘modularity in production’ (MIP), and to its use, ‘modularity in use’ (MIU). While examples of MID (e.g. the Drive module and the Life module of the BMW i3 and i8 models) and MIP (e.g. the Volkswagen MQB platform, allowing the production of 27 models for three brands) are available in the present car market, recent examples of MIU are missing in current automotive design. ‘Modularity in use’ may offer designers new opportunities to reimagine vehicles, as well as their own contribution to car design, by shifting their focus from car styling to product usability [2]. Apart from showing the technological challenges in car design, this article highlights the importance of a multiple-disciplinary approach to design, i.e. multi-disciplinary, interdisciplinary and trans-disciplinary [4]. The objective was to demonstrate combining
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 118–123, 2020. https://doi.org/10.1007/978-3-030-39512-4_19
The Car as a Transformer
119
disciplines (i.e. design and engineering) and collaborating to improve the overall result, especially in the design of complex products like the car (see article conference video: vimeo.com/350296012).
2 Method

ESAD students from various design backgrounds were invited to participate. The project lasted 8 weeks and was divided into 3 assignments: briefing, workshop and visualisation (see video links). The briefing was formed from the article and the technical issues considered fundamental to the project’s direction: ‘skateboard’ powertrain platform; high customisation; low centre of gravity; good vehicle package; MIU. Students worked in teams to discuss ideas and concepts, and personas were created to help understand the users’ needs, experiences, behaviours and goals. Developing a different persona for each team made the design task less complex, guided decision making and oriented a good user experience for the targeted user group [5] (see workshop video: vimeo.com/350451944).
3 Results

From the workshop, 5 vehicle concepts emerged, displaying different interpretations of the project theme and opinions about alternative mobility for future lifestyles. The concepts revolved around multi-tasking, sharing, connectivity and autonomy.

3.1 Concept 01, iBudd
A subcompact car for surfers (see pitch video: vimeo.com/350452664).

Technical Proposal. A mono-volume vehicle with a high driving position and large interior space. Voice controlled with artificial intelligence, learning about the driver’s surroundings and auto-adjusting the comfort for different road conditions.

Persona. Francisco is a male student nurse, living in Porto with his parents. In the evenings and at weekends, he meets with friends to surf, often changing clothes in the car and carrying wet equipment.

Justification. In 2012, the International Surfing Association (ISA) claimed 35 million surfers worldwide [6], expected to increase to 60 million by 2018 [7]. ISA’s statistics show 81% being male and 40% aged 24 or younger. Therefore, Francisco potentially represents 20 million users worldwide (40% to 45% American) with a similar persona profile.

3.2 Concept 02, India
A station wagon/van for market gardeners (see pitch video: vimeo.com/350452793).
120
J. Aston and R. P. Freire
Technical Proposal. A ‘skateboard’ platform with interchangeable carriages for transporting people or merchandise. With autonomy levels 4 and 5, the users manage the vehicle through smart devices, synchronising agendas and operating needs.

Persona. Joaquim has been a market gardener for 45 years and lives in Portugal. He is married to a school teacher and has three grown-up children. They live and work in the countryside, transporting workers and produce between fields and farm. On market days, Joaquim uses the vehicle as a mobile market stall.

Justification. A market garden is a small-scale producer of fruits, vegetables and flowers (also known as a truck or organic farm) that sells directly to consumers, small traders or restaurants. In Portugal, organic farming is increasing, from 73 producers in 1993 to 1500 in 2005 [8]. On a global scale the figures are more impressive, with 2.9 million organic growers worldwide and 69.8 million hectares, 50% in Australia alone [9].

3.3 Concept 03, Jensen
A patrol truck for breakdown assistance (see pitch video: vimeo.com/350453136).

Technical Proposal. A pickup truck/van configuration with interchangeable carriages for roadside car repair and autonomous recovery. This vehicle is connected to a management system to optimise distribution and human resources and to reduce waiting times.

Persona. John Jensen is 30 and works as a breakdown patrol driver and mechanic. He lives in California with his wife and newborn baby. John manages his patrol fleet in Los Angeles, coordinating 200 drivers and assisting with 300,000 vehicle incidents a year.

Justification. Statistically, it is difficult to identify accurate breakdown patrol figures for American roads. Popular roadside assistance services (i.e. Allstate Motor Club, AAA, Good Sam and AARP) claim 52-state coverage [10] and subcontract to smaller regional companies, which makes it difficult to quote an accurate patrol fleet number. However, in the UK patrol fleet numbers are more accurate, and the 6 most popular roadside rescue services (i.e. RAC, AA plc, GEM Motoring Assist, Green Flag, Britannia Rescue and Auto Aid) [11] together represent 20 to 25 thousand patrols operating nationwide.

3.4 Concept 04, Picos
A snowmobile for ski tourist groups (see pitch video: vimeo.com/350453214).

Technical Proposal. An electric-powered snowmobile with a passenger trailer. When coupled, this vehicle provides sight-seeing, security, comfort, audio and visual information, and ample luggage/equipment space. Separately, the snowmobile gives rapid assistance to tourists in the snow, while the trailer becomes a shelter.

Persona. Fernando Martinez is 20 and works as a ranger for snow-seeking tourists of all ages. Based in the Spanish Cantabrian Mountains, about 50 km from Bilbao,
Fernando specialises in skiing and mountaineering. In the winter he transports people and equipment to the mountain slopes for sight-seeing, walking, sledging and skiing.

Justification. The Cantabrian Mountains are part of the 430 km Pyrenees mountain range, between Spain, France and Andorra. In 2015, there were 3.1 million visitors on the Spanish side of the border [12]. Snowmobiles are popular in these mountainous regions for their hill-climbing capability, speed and agility. Snow coaches, snowcats and Terra Buses are commonly found in Canada, Greenland, Iceland and Norway. The ‘Picos’ concept combines these two snow vehicle types.

3.5 Concept 05, Seniors
An autonomous micro car for the elderly (see pitch video: vimeo.com/350453263).

Technical Proposal. A micro car with a spacious interior, elevator luggage access, comfortable high swivel seats and large windows with an interior digital interface. This vehicle is fully autonomous and promotes ‘drive-on-demand’ for elderly users.

Persona. John and Marie are retired and enjoy the freedom to go shopping, to the theatre and garden centres, and to visit friends and family around London. Recently, John has started to lose his hearing and is experiencing slower reactions.

Justification. In Britain, there are 5.3 million over-70s with full driving licences, and there is no legal age at which seniors must stop driving. They can decide when to stop as long as they do not have any medical conditions that affect their driving [13]. There are laws to eventually revoke a senior’s driving licence, but governments rely on the driver’s good sense to notice slower reactions, driving anxiety or worsening eyesight.
4 Conclusions

All 5 vehicle concepts expressed interesting perspectives on how student designers interpret the latest technological and social trends. The ideas considered Industry 4.0 and its impact on many aspects of our future lives, in particular how we could move around more effectively in new forms of intelligent mobility.

Niche. Some concepts could be considered ‘niche’ based on their limited persona profiles. However, if the data research considered broader persona characteristics, the statistics might prove otherwise. If concept 01 (iBudd) included activities similar to surfing (i.e. camping, skiing and fishing), this could increase user potential by forming persona ‘clusters’. Concept 02 (India) improved roughly 2000-fold when research moved from Portuguese data to global organic farmer statistics. Concept 03 (Jensen) had unclear patrol vehicle data in America. However, AA plc (within the top 6 breakdown services in the UK) introduced 500 new patrol vehicles to its fleet of 2000 in 2015 [14]. This could indicate that 25% of patrol vehicles are replaced annually within this business.
Concept 04 (Picos) was aimed at a multi-million dollar global industry with an estimated 300 to 350 million visitors worldwide [15]. Unfortunately, this could be affected by climate change. Concept 05 (Seniors) is potentially a huge market, with 42 million licensed drivers aged 65 and older in the United States [16], almost 10 times more than in the UK.

Gender. Interestingly, all concepts showed a male persona bias, except for concepts 2 and 5, which introduced female partners with shared vehicle scenarios. Considering that the majority (58%) of student participants in this project were female, they might have considered more influential female personas. However, referring to published statistics on driver gender, women travel more often than men, but men drive greater distances and with more diverse modes of transport [17], which justifies certain persona choices within each concept.

On the Market. Since the project’s completion in May 2017, all 5 concepts have, in part, appeared on the market in similar forms. Bosch put a voice (an interactive assistant called “Hey Casey”) into the car with multi-language recognition [18] (Jan 2018). Rinspeed released the ‘Snap’ concept at the Geneva Show, focusing on the MIU concept [19] (March 2018). Scania introduced an autonomous truck without a cab, a significant step towards the smart transport systems of the future [20] (Sept 2019). Venturi Automobiles developed the first electric snow rover, designed for short scientific missions on the polar ice [21] (Dec 2018). Waymo developed an autonomous car that helps the elderly, as well as individuals with disabilities [22] (Dec 2018).

Final Observation. Based on this evidence, elements from all 5 concepts were able to forecast similar ideas currently on the market. This emphasises the importance of investing in a multidisciplinary approach to vehicle design.

Acknowledgments. Professors: Magri L.; Gomes M.; Ferreira J.L.; guest professor Luccarelli M.
(Reutlingen University). Students: Cunha A.; Dias D.; Correia E.; Kunsler J.; Rodrigues M.; Gomes N.; Legoinha C.; Ribeiro C.; Silva M.; Neves P.; Alves S.; Ribeiro C.; Coelho A.; Teixeira B.; Vieira S.; Etkin N.; Kulovesi A.; Dybicki K.; Nowoszynska D.
References 1. Luccarelli, M., Matt, D.T., Spena, P.R.: Modular architectures for future alternative vehicles. Int. J. Veh. Des. 67(4), 368–387 (2015) 2. Luccarelli, M.: Technological Challenges and New Design Opportunities. Auto Des. Torino. (222), 2–7 (2017) 3. Rossini, M., Matt, D.T., Ciarapica, F.E., Russo Spena, P., Luccarelli, M.: Electric vehicles market penetration forecasts and scenarios: a review and outlook. Int. J. Oper. Quant. Manage. 20(3), 153–192 (2014) 4. Luccarelli, M.: Interdisciplinarity in product design education. Balancing disciplines to foster creativity. In: 4th International Conference on Design Creativity, Atlanta, GA (2016)
5. Personas – A Simple Introduction (Interaction Design Foundation, Est. 2002) (2019). interaction-design.org/literature/article/personas-why-and-how-you-should-use-them 6. Surfertoday: How many surfers are there in the world? (2019). surfertoday.com/surfing/how-many-surfers-are-there-in-the-world 7. How many surfers are there in the world? medium.com/lipchain/how-many-surfers-are-there-in-the-world-c43d04b93442 8. Organic farming (2019). en.wikipedia.org/wiki/Agriculture_in_Portugal 9. Organic Agriculture (2019). orgprints.org/33355/5/lernoud-willer-2019-global-stats.pdf 10. Best roadside assistance (2019). toptenreviews.com/best-roadside-assistance-services 11. Best breakdown cover (2019). https://www.autoexpress.co.uk/car-news/driver-power/92413/best-breakdown-cover-2019 12. Number of visitors to ski resorts in the Pyrenees from 2012 to 2015 (2019). statista.com/statistics/768160/number-visitors-station-ski-pyrenees-by-country/ 13. Deciding when to stop driving (2019). https://www.nidirect.gov.uk/articles/older-drivers-deciding-when-stop-driving 14. AA plc (2019). en.wikipedia.org/wiki/AA_plc 15. A critical review of climate change risk for the ski tourism (2019). https://www.tandfonline.com/doi/pdf/10.1080/13683500.2017.1410110 16. Older Adult Drivers (2019). cdc.gov/motorvehiclesafety/older_adult_drivers/index.html 17. Driver gender (2019). brake.org.uk/facts-resources/1593-driver-gender 18. Car, we have to talk (2019). bosch-presse.de/pressportal/de/en/car-we-have-to-talk-bosch-puts-the-voice-assistant-behind-the-wheel-137856.html 19. Rinspeed Snap (2019). rinspeed.eu/en/Snap_48_concept-car.html 20. Scania AXL (2019). scania.com/group/en/a-new-cabless-concept-revealing-scania-axl/ 21. Venturi Antarctica all-electric exploration rover (2019). newatlas.com/venturi-electric-antarctica-polar-vehicle/57622/ 22. Designing self-driving cars for the elderly (2019). ecnmag.com/article/2018/12/designing-self-driving-cars-elderly
Unmanned Small Shared Electric Vehicle Binhong Zhai, Guodong Yin, and Zhen Wu School of Mechanical Engineering, Southeast University, Nanjing 211189, China [email protected], [email protected]
Abstract. Nowadays, shared electric cars are becoming a new form of transportation. In the future, unmanned small shared electric cars will be the mainstream direction of the market. They differ greatly from traditional cars in terms of usage mode, usage scenario and interaction mode, and will promote a fundamental transformation of the relationship between people and cars. Based on an analysis of the advantages and disadvantages of the current development of shared cars, this paper argues that driverless vehicles will be the transformation direction of shared electric cars, which need to be developed in the direction of service design, and then introduces the ‘service-oriented’ design concept. Under the guidance of this concept, the relationship between cars and people is reconsidered to provide better travel services and experiences for users.

Keywords: Unmanned small shared electric vehicle · Sharing economy · User experience and service · Service-oriented · User-centered
1 Introduction

The advent of the sharing economy has had an important impact on social development [1]. Shared electric cars are becoming a new form of transportation, and large numbers of small shared electric cars have suddenly become popular in the market. However, there are also many challenges in the course of their development: issues such as car cleanliness, quality of service, and supply shortages during peak times have hindered their growth. As driverless technology constantly improves, the development of driverless small shared electric cars is an important opportunity that many companies want to seize; such cars are very different from traditional cars in terms of usage mode, usage scenario and interaction mode. In the future, only by carrying out service reform, targeting the needs of users in the era of the sharing economy, and finding a new positioning for unmanned shared electric cars, can companies stand out in the industry and produce products that meet user needs. It is therefore necessary to firmly grasp the future mode of automobile development, consider the future design of shared electric cars from a multi-dimensional perspective, break through the constraints of traditional car design, and explore the various possibilities of design, so as to solve people’s short-distance travel problems faster and better.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 124–128, 2020. https://doi.org/10.1007/978-3-030-39512-4_20
Unmanned Small Shared Electric Vehicle
125
2 Analysis of Advantages and Disadvantages of Shared Car Development

2.1 Advantages
Although most families in China now have a car, which brings convenience to some extent, the high gasoline and maintenance costs lead many families to choose public transportation. The emergence of shared cars greatly alleviates this problem, enabling most people to use cars without paying insurance and maintenance costs. Families can choose shared cars for leisure trips, saving money. The development of the shared car can bring great convenience to people. Unlike traditional car rental, users can reserve a car online anytime and anywhere, the corresponding app provides a detailed cost description, and the follow-up matters of the car are taken care of by the shared car company; the procedure is relatively simple and more convenient for users.

2.2 Disadvantages
When operating shared cars, a company must improve the relevant facilities to provide users with greater convenience. However, the development of shared cars has been slow in China, and the construction of related facilities remains imperfect. For example, the number of parking lots for shared cars is small, and it is difficult to find parking spaces when using them, which reduces the user experience to some extent. Secondly, the distribution of charging piles is uneven and the distance between them is relatively long, which makes them inconvenient for users. Eventually, users stop using shared cars, and their development is limited. There is also the problem of non-standard use. For someone who has just obtained a driver’s licence but has never driven a car, a shared car is often the first choice. However, due to the inexperience of novices, if an accident on the road damages the car, attributing responsibility becomes a big problem.
3 Future Prospects – Driverless Small Shared Electric Vehicles

At present, shared electric cars offer a lot of convenience, but many difficulties also hinder their development. In the coming era of the sharing economy, driverless vehicles will greatly promote the development of shared cars, bring more benefits to enterprises, and promote a fundamental transformation of the relationship between people and cars, bringing better services and experiences to people. In this context, service design will be an important tool for providing innovation and comfort [3]. Next, the development of driverless small shared electric vehicles is discussed from the perspective of service design.
B. Zhai et al.

3.1 Service Design Research Methodology
In general, service design orchestrates programs, technologies, and interactions to co-create valuable solutions in complex systems and to integrate their stakeholders. Unlike traditional design disciplines such as industrial design and product design, the final result of service design is intangible: a service. The product of service design is therefore not a physical object but the service experience itself, which occurs at the moment of service delivery. The purpose of a service designer is to serve the end users of a service while simultaneously creating value for the service provider.

3.2 Unmanned Small Shared Electric Vehicles Need to Comply with Service Design Methods
Service design pays attention to each stakeholder from a comprehensive perspective [2]. To achieve the comprehensive development of driverless small shared electric vehicles, attention must be paid to the service process, and the service design process needs to be integrated into it. The service design process is divided into four parts (discover, define, design and deliver) and is a form of parallel design: the whole product life cycle is considered from the start, including concept generation, how the product is put on the market, and its eventual retirement. Service design comes into play when problems arise or when shared concepts emerge. The traditional design process includes both serial and parallel processes. The serial design process mainly comprises the following stages: user demand analysis, program design, technical design, testing, and production design. These processes connect end to end, each completing before passing to the next. Its disadvantage is that the departments are independent of one another, so many problems left over from earlier stages surface during production, making it difficult to put products into production smoothly and resulting in long production cycles and increased costs. For driverless small shared cars this is especially harmful, because the links are closely connected and the processes are not independent. It is therefore necessary to follow the service design process for driverless small shared cars.

3.3 Unmanned Small Shared Electric Vehicles Should Follow Service Design Principles
(1) User-centered problem analysis and planning phase. This process can also be called the requirements exploration and discovery stage. In the creation stage, the current service process must be clearly dissected, and user-centered research must be conducted on the service's users and on the real situations in which the service is used, so as to determine the service positioning. The user-centered research mentioned here applies different methods and analysis tools to study users' psychology, physiology and behaviors, and then establishes user role models [5]. It is also a key step for enterprises to seize service opportunities in the early stage of a service.
(2) Design and development phase based on user participatory design. This stage can also be called the prototype creation stage. Through early service prototypes and user tests, the corresponding service touchpoints are determined to further improve the service process system. In this stage, enterprises should let real users participate in creative activities and design products for themselves, in order to enhance their sense of ownership and mobilize their enthusiasm, so that users can contribute their own insights and achieve user participatory design [4]. Designers and users should complete the preliminary prototype design together and continuously modify and improve the service design system.

(3) Multi-scale design evaluation increases enterprise confidence. This is the last stage of service design and the one that promotes its further development. During design evaluation, the usability of the service design is tested through users' subjective feedback and objective assessment, and the problems found in the design process are corrected and iterated, so as to design unmanned small shared electric vehicles that better meet user needs and increase the credibility of the enterprise.
4 The Revolution of Automobile Travel Under Service Design

Based on the development direction of driverless small shared electric vehicles under the guidance of service design, a "service-oriented" design concept has been formed. It aims to guide future driverless small shared electric vehicles toward improving the traveler's experience, for example by reducing travel waiting time and increasing transfer connection reliability, so as to improve service accessibility, diversity, and the overall quality of service.

4.1 Travelers Can Enjoy Better Travel Experience
Driverless small shared electric vehicles provide a whole-process "door to door" service based on the "service-oriented" concept. Users only need to book the departure time, starting point, destination and any special travel preferences before traveling. After the system plans the trip, users obtain specific information on all travel segments, including destination-related information [6]. During the trip, the travel platform provides users with data analysis results based on historical and dynamically changing data, so that users can obtain real-time trip information "anytime and anywhere", such as traffic conditions, weather, and the likelihood of making the next connection. Users can then rearrange the trip according to this information, making the whole travel process more reliable.

4.2 Travel Structure and Characteristics Will Be Changed
Since driverless small shared electric vehicles make the connections between different transportation systems closer, people's travel efficiency will be greatly improved and travel distances will increase. Spatial distance will no longer be a major obstacle to travel, and the proportion of cross-district and even cross-city trips will rise significantly [7]. Driverless small shared electric cars will pay more attention to improving service quality for passengers, and customized services will be provided for passengers with special needs, such as the elderly and the disabled. The travel chain of the whole society will become more flexible, allowing people to reach more places for leisure, entertainment and social contact [8].
5 Conclusion

This paper mainly discusses driverless small shared electric vehicles. First, the current development status of shared cars was introduced, and it was then proposed that driverless cars will promote the development of car sharing. Next, the development of driverless small shared electric vehicles was discussed from the perspective of service design, and the "service-oriented" design concept was put forward. Under the guidance of this concept, the product will respond better to user needs and give better insight into user behavior, recognizing that the car is not only a simple means of transportation but also an emotional partner for travel: it carries people's expectations for their travel time, as well as their desire for a convenient and safe journey home.

Acknowledgments. I want to express my gratitude to my tutor and to the classmates who helped me during the writing of this paper.
References
1. Giana, M.E., Mark, B.H., Baojun, J., Cait, L., Aric, R., Georgios, Z.: Marketing in the sharing economy. J. Mark. 83(5), 5–27 (2019)
2. Turetken, O., Grefen, P., Gilsing, R., Adali, O.E.: Service-dominant business model design for digital innovation in smart mobility. Bus. Inf. Syst. Eng. 61(1), 9–29 (2019)
3. Qian, S., Carolyn, R.: Is service design in demand? Acad. Des. Manag. Conf. Inflection Point 11, 67–78 (2016)
4. Kimbell, L.: Designing for service as one way of designing services. Int. J. Des. 5(2), 41–52 (2011)
5. Alam, I., Perry, C.: A customer-oriented new service development process. J. Serv. Mark. 16(6), 515–534 (2002)
6. Paundra, J., Rook, L., Van Dalen, J., Ketter, W.: Preferences for car sharing services: effects of instrumental attributes and psychological ownership. J. Environ. Psychol. 53, 121–130 (2017)
7. Susan, S., Adam, C., Ismail, Z.: Shared mobility: current practices and guiding principles (2016)
8. Mattia, G., Roberta, G.M., Ludovica, P.: Shared mobility as a driver for sustainable consumptions: the intention to re-use free-floating car sharing (2019)
A Forward Train Detection Method Based on Convolutional Neural Network

Zhangyu Wang1, Tony Lee2, Michael Leung2, Simon Tang2, Qiang Zhang3, Zining Yang4, and Virginia Cheung4

1 Beijing Aiforrail Technology Co. Ltd., Beijing, China
[email protected]
2 MTR Corporation Ltd., Kowloon Bay, Hong Kong, China
{tkylee,mileung,simtang}@mtr.com.hk
3 Traffic Control Technology Co. Ltd., Beijing, China
[email protected]
4 Claremont Graduate University, Claremont, CA, USA
{zining.yang,virginia.cheung}@cgu.edu
Abstract. Forward train detection is of great significance to improving train safety. In this paper, a vision-based method is proposed for forward train detection. The proposed method is based on CenterNet, a novel object detection network, to realize accurate and fast forward train detection. The forward train detection network is divided into two stages: a downsampling stage and a center point generation stage. In the downsampling stage, a fully convolutional network downsamples the image while extracting its features. In the center point generation stage, three branch networks are used to predict the bounding box of the forward train: a heatmap generation branch, a center offset regression branch, and a bounding box size regression branch. Experimental results show that the proposed method detects the forward train well, achieving 30.7% Average Precision (AP) at 47.6 Frames Per Second (FPS).

Keywords: Train detection · Railway safety · Convolutional neural network
1 Introduction

In all transportation systems, and especially in rail transit, safety is paramount [1]. However, collision incidents still occur in rail transit at the present stage. In the last decade in China, there were 7 serious collisions, which caused 786 injuries [2]. According to one study [3], 95% of incidents are related to human factors and 70% of these are directly related to human error. It is therefore essential to improve train operation safety by adopting forward train detection to reduce collision-related incidents caused by human factors.

Various studies over the past few decades have sought to improve train safety through forward train or forward obstacle detection. For example, Karaduman used image processing and a laser meter to detect obstacles on the railway [4]. García et al. proposed a multisensory system to detect obstacles; the system consists of two emitting and receiving barriers placed on opposite sides of the railway, and uses infrared and ultrasonic sensors to establish optical and acoustic links between them [5]. Yao et al. proposed three vision-based algorithms for obstacle detection: a method based on the Gray Level Histogram, a method based on the Proportion of Black and White Pixels, and a method based on the Integrity of Rails and Sleepers [6].

In this paper, a convolutional neural network is applied to forward train detection. The proposed method is mainly based on CenterNet [7], a novel object detection network, to realize accurate and fast forward train detection. The method transforms forward train detection into the detection of critical points of the forward train, and thus enhances detection performance. The remainder of this paper is structured as follows. Section 2 describes the methodological details of the proposed framework for forward train detection. Section 3 presents the experimental results. Finally, a conclusion is given in Sect. 4.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 129–135, 2020. https://doi.org/10.1007/978-3-030-39512-4_21
2 Forward Train Detection

The accuracy and speed of forward train detection are the most important factors in practical application. In this paper we apply CenterNet [7], a novel object detection framework, to forward train detection.

2.1 Overview of the Framework
Instead of detecting the forward train as a series of bounding boxes, as is common in two-stage detectors [8] and one-stage detectors [9], the forward train detection problem is converted into a key point estimation: finding the center points of the forward train.
Fig. 1. Network architecture
As illustrated in Fig. 1, the forward train detection network is divided into two stages: a downsampling stage and a center point generation stage. In the downsampling stage, a fully convolutional network, DLA-34 [10], downsamples the image while extracting its features, producing a feature map downsampled by a factor of 4. Based on this downsampled feature map, the center points of the forward train are then generated. In the center point generation stage, three branch networks predict the bounding box of the forward train: a heatmap generation branch, a center offset regression branch, and a bounding box size regression branch. The heatmap generation branch generates the center points of the forward train; the number of heatmap channels equals the number of forward train categories to be detected, which in this paper is 1. The center offset regression branch corrects the deviation between the true train center points and the predicted points generated by the first branch. The bounding box size regression branch predicts the width and height of the forward train.

2.2 Detection of Forward Train Points
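As a shape-level summary of the framework just described, the following sketch traces the tensor shapes of the three branches (it is bookkeeping only, not an implementation of the DLA-34 backbone; the input resolution matches the dataset described in Sect. 3):

```python
# Shape bookkeeping for the two-stage detector (illustrative, not the real network).
W, H = 1280, 720   # input frame resolution used in the experiments
R = 4              # downsampling factor of the backbone
C = 1              # detected categories: only "train"

feat_w, feat_h = W // R, H // R   # backbone feature map: 320 x 180

# Channels produced by the three prediction branches on that feature map.
branches = {
    "heatmap": C,  # per-class center-point heatmap
    "offset": 2,   # sub-pixel (x, y) correction of each center point
    "size": 2,     # predicted (width, height) of the bounding box
}
shapes = {name: (feat_h, feat_w, ch) for name, ch in branches.items()}
```

Each branch therefore produces a map aligned with the downsampled feature grid; peaks in the heatmap, corrected by the offsets and combined with the predicted sizes, yield the final boxes.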
The forward train detection is taken as center point prediction of the forward train. Therefore, behind the fully convolutional network, there are three independent regression branches to detect the center points of the forward train.

Heatmaps Generation. In the heatmap generation branch, a two-layer convolution is applied to obtain the heatmaps from the feature map, and then a 3 × 3 max pooling operation is applied to find the peaks of the forward train centers. The first 100 peaks are taken as the candidate center points extracted by the network, and a threshold is then applied for screening to obtain the final forward train heatmaps.

The heatmaps can be expressed as $\hat{Y} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times C}$, where $W$ and $H$ are the width and height of the input image respectively, $R$ is the downsampling factor (4 in this paper), and $C$ is the number of categories of detected forward trains (1 in this paper). In the heatmaps, a prediction $\hat{Y}_{x,y} = 1$ means the point at coordinate $(x, y)$ is a forward train center, and $\hat{Y}_{x,y} = 0$ means it is not.

In the training stage, a Gaussian kernel is applied to map the key points onto the feature map, i.e. $Y_{xyc} = \exp\left(-\frac{(x-\tilde{p}_x)^2 + (y-\tilde{p}_y)^2}{2\sigma_p^2}\right)$, where $\tilde{p}_x$ and $\tilde{p}_y$ are the ground-truth forward train center and $\sigma_p$ is the standard deviation relative to the size of the forward train. The loss function of the heatmaps can be expressed as:

$$L_k = -\frac{1}{N} \sum_{xyc} \begin{cases} \left(1-\hat{Y}_{xyc}\right)^{\alpha} \log \hat{Y}_{xyc} & \text{if } Y_{xyc} = 1 \\ \left(1-Y_{xyc}\right)^{\beta} \hat{Y}_{xyc}^{\alpha} \log\left(1-\hat{Y}_{xyc}\right) & \text{otherwise} \end{cases} \quad (1)$$
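A minimal NumPy sketch of the Gaussian training target and of the heatmap loss in Eq. (1). The hyper-parameter defaults α = 2 and β = 4 follow the CenterNet paper cited as [7] and are assumptions with respect to this paper's training setup:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma):
    """Ground-truth heatmap: a Gaussian bump around the (downsampled) object center."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    cx, cy = center
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def center_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-12):
    """Eq. (1): penalty-reduced pixel-wise focal loss over the heatmap."""
    pos = gt == 1.0                       # exact key-point locations
    n = max(int(pos.sum()), 1)            # N, the number of key points
    pos_term = ((1.0 - pred) ** alpha * np.log(pred + eps))[pos].sum()
    neg_term = ((1.0 - gt) ** beta * pred ** alpha
                * np.log(1.0 - pred + eps))[~pos].sum()
    return float(-(pos_term + neg_term) / n)
```

The `(1 - Y)^β` factor down-weights the penalty for pixels near a true center, where the Gaussian target is already close to 1.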
where $\alpha$ and $\beta$ are the hyper-parameters of the focal loss and $N$ is the number of key points.

Offset Regression. In the heatmap generation branch, the feature map is downsampled by a factor of 4; an accuracy error is therefore introduced when the feature map is remapped to the original image, and an additional local offset is used for each center point. In the offset regression branch, a two-layer convolution is applied to obtain the offset feature map, $\hat{O} \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times 2}$; its two channels predict the deviations in $x$ and $y$ respectively. The loss function of the offset regression branch can be expressed as:

$$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left(\frac{p}{R} - \tilde{p}\right) \right| \quad (2)$$

where $\hat{O}_{\tilde{p}}$ is the predicted offset and $\frac{p}{R} - \tilde{p}$ is the offset between the center point $p$ in the original image and the center point $\tilde{p}$ on the downsampled feature map.

Bounding Box Size Regression. In the forward train bounding box size regression branch, a two-layer convolution is applied to obtain the box size feature map, $\hat{S} \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times 2}$; its two channels predict the width and height of the forward train respectively. The loss function can be expressed as:

$$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_{p_k} - s_k \right| \quad (3)$$

where $\hat{S}_{p_k}$ is the predicted box size and $s_k$ is the ground-truth box size. The whole loss function of the proposed network in the training stage can be expressed as:

$$L = L_k + \lambda_{off} L_{off} + \lambda_{size} L_{size} \quad (4)$$

where $\lambda_{off}$ and $\lambda_{size}$ are the weighting hyper-parameters of the whole loss.
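The two L1 branch losses and their weighted combination can be sketched in NumPy as follows (the weight values λ_off = 1 and λ_size = 0.1 are the CenterNet defaults from [7], assumed here since this paper does not state them):

```python
import numpy as np

def l1_offset_loss(pred_offsets, centers, R=4):
    """Eq. (2): L1 loss between predicted offsets and the quantization
    error p/R - p~ introduced by downsampling the center point p."""
    losses = []
    for o_hat, p in zip(pred_offsets, centers):
        p = np.asarray(p, dtype=float)
        p_tilde = np.floor(p / R)          # integer center on the feature map
        target = p / R - p_tilde           # sub-pixel offset to recover
        losses.append(np.abs(np.asarray(o_hat, dtype=float) - target).sum())
    return float(sum(losses) / max(len(centers), 1))

def l1_size_loss(pred_sizes, gt_sizes):
    """Eq. (3): L1 loss on the predicted (width, height) of each box."""
    pred = np.asarray(pred_sizes, dtype=float)
    gt = np.asarray(gt_sizes, dtype=float)
    return float(np.abs(pred - gt).sum(axis=1).mean())

def total_loss(l_k, l_off, l_size, lam_off=1.0, lam_size=0.1):
    """Eq. (4): weighted sum of the three branch losses."""
    return l_k + lam_off * l_off + lam_size * l_size
```

For a center at pixel (10, 6) with R = 4, the feature-map cell is (2, 1) and the offset target is (0.5, 0.5); a perfect offset prediction gives zero loss for that point.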
3 Experiment

To evaluate the effectiveness of the proposed forward train detection method, a large amount of real train running video at a resolution of 1280 × 720 was collected, and a dataset was built from this video. The dataset is labeled in the MS COCO [11] format and contains 6,779 annotated samples (5,811 for training and 968 for testing). All experiments were run on a Titan Xp GPU with PyTorch 1.1.0, CUDA 9.0, and cuDNN 7.1.
Adam [12] is used to optimize the overall objective, and the loss function curve is shown in Fig. 2.
Fig. 2. Loss curve during training
Figure 2 shows that the model converges quickly. The accuracy of forward train detection is further evaluated by Average Precision (AP) [11] and Frames Per Second (FPS). The speed and accuracy of the forward train detection method are shown in Table 1.

Table 1. Speed and accuracy of the forward train detection method.

AP     AP50   AP75   APS    APM    APL    FPS
0.307  0.456  0.331  0.323  0.348  0.382  47.6
Table 1 shows that the method achieves 30.7% AP at 47.6 FPS, a good balance between speed and accuracy. This AP and FPS were obtained under the stringent requirements adopted in the method's evaluation, and the FPS is higher than that of most typical algorithms. High detection accuracy can therefore be expected for practical application in the railway environment. In addition, in our practical application, by combining railway track detection, we obtain a leak detection rate (LDR) of 0.007 and a mistaken detection rate (MDR) of 0.000478. Some samples demonstrating the forward train detection results are shown in Fig. 3.
Fig. 3. Experimental results
4 Conclusion

In this paper, a convolutional neural network based forward train detection method is proposed. The method is divided into two stages: a downsampling stage, used for image downsampling and feature extraction, and a center point generation stage, which uses three branches to predict the bounding box of the forward train. The experiments show that the proposed method achieves 30.7% AP at 47.6 FPS. This AP and FPS were obtained under stringent evaluation requirements, and the overall performance in accuracy and speed is superior to most typical algorithms, so high performance can be expected in practical railway applications. As image recognition is susceptible to changes in environmental lighting, the fusion of lidar and images will be explored in further study to further enhance the accuracy of forward train detection.

Acknowledgments. This work is partially supported by the Beijing Municipal Science and Technology Project under Grant # Z181100008918003. The MTR Corporation Ltd. in Hong Kong provided the testing field for co-researching the proposed forward train detection method and technology. The authors would also like to thank the anonymous reviewers for their insightful and constructive comments.
References
1. García, J.J., Urena, J., Mazo, M., et al.: Sensory system for obstacle detection on high-speed lines. Transp. Res. Part C Emerg. Technol. 18(4), 536–553 (2010)
2. Li, S., Cai, B., Liu, J., et al.: Collision risk analysis based train collision early warning strategy. Accid. Anal. Prev. 112, 94–104 (2018)
3. Lüy, M., Çam, E., Ulamış, F., et al.: Initial results of testing a multilayer laser scanner in a collision avoidance system for light rail vehicles. Appl. Sci. 8(4), 475 (2018)
4. Karaduman, M.: Image processing based obstacle detection with laser measurement in railways. In: 2017 10th International Conference on Electrical and Electronics Engineering (ELECO), pp. 899–903. IEEE (2017)
5. Garcia, J.J., Ureña, J., Hernandez, A., et al.: Efficient multisensory barrier for obstacle detection on railways. IEEE Trans. Intell. Transp. Syst. 11(3), 702–713 (2010)
6. Yao, T., Dai, S., Wang, P., et al.: Image based obstacle detection for automatic train supervision. In: 2012 5th International Congress on Image and Signal Processing, pp. 1267–1270. IEEE (2012)
7. Zhou, X., et al.: Objects as points. arXiv preprint arXiv:1904.07850v2
8. Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, pp. 91–99 (2015)
9. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767 (2018)
10. Yu, F., Wang, D., Shelhamer, E., et al.: Deep layer aggregation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2403–2412 (2018)
11. Lin, T.Y., Maire, M., Belongie, S., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) European Conference on Computer Vision, pp. 740–755. Springer, Cham (2014)
12. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Styling Research of DFAC-6851H4E City Bus Based on Fuzzy Evaluation

Huajie Wang1,2 and Jianxin Cheng1

1 East China University of Science and Technology, Shanghai 200237, People's Republic of China
[email protected], [email protected]
2 Shanghai Art and Design Academy, Shanghai 201808, People's Republic of China
Abstract. In the process of design decision-making, a more scientific design evaluation method is needed to help enterprises determine the best city bus design scheme and to secure the best economic, social and cultural benefits from the design results. In this paper, the analytic hierarchy process is used to analyze the form of the city bus, and the relative importance of the elements of the product's appearance is studied. The design project of the DFAC-6851H4E (DFAC is the abbreviation of Dongfeng Automobile Company, a well-known auto company in China) is taken as the research object, and the established evaluation model is verified in practice. The results show that a scientific design evaluation model is constructed and that it effectively promotes progress during product design, avoiding design uncertainty, reducing design risk, and bringing good economic benefits to the enterprise.

Keywords: City bus styling · Analytic hierarchy process · Fuzzy evaluation · Design decision
1 Introduction

In the development of a city bus, styling design is an important stage, and a good modeling scheme is an important factor in winning the market. When selecting the best scheme, product appearance styling design schemes are optimized and selected through design evaluation. Ma [1] proposed the scheme evaluation compass, and Yang proposed establishing a perceptual evaluation index system based on crutches for the aged [2]. However, there is still no mature case of evaluating city bus styling schemes. This paper takes the DFAC-6851H4E city bus as its research object, uses quantitative analysis to evaluate the product design, quantifies the perceptual factors of the design schemes, and applies an evaluation model for the assessment, thereby increasing the objectivity of decision-making on product appearance styling design schemes.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 136–142, 2020. https://doi.org/10.1007/978-3-030-39512-4_22
2 Process of Bus Modeling and Design

In bus enterprises, the design process of bus products usually follows the flow shown in Fig. 1. Design evaluation is critical; in particular, scheme evaluation in the styling phase is an important link that determines the product design result.
Fig. 1. Flow chart of bus modeling design
3 Analysis of the Status of Bus Industry Modeling Scheme Review

In bus enterprises, the product modeling design scheme is generally determined by the head of the enterprise. This evaluation mode generally has the following problems: (1) Incomplete consideration of evaluation objectives: which aspects of the design scheme need to be evaluated is not considered comprehensively. (2) The weights of evaluation factors are not quantified: there is no quantitative distinction between evaluation objectives, which easily leads to an inversion of priorities.
4 Fuzzy Evaluation Method

The fuzzy evaluation method is an effective comprehensive evaluation method for things affected by multiple factors. It breaks through the logic and language of precise mathematics [3], emphasizes the fuzziness of influencing factors so that the nature of information can be expressed in natural language, and processes the evaluation information through numerical calculation.

4.1 Determine the Evaluation Index Set
Due to the complexity of the evaluation target, it is necessary to decompose the target into multi-level indexes and establish a hierarchical structure chart of the evaluation index system. Once the structure diagram is established, the evaluation target can be regarded as a fuzzy set composed of multiple factors. If there are n evaluation indicators, there are n subsets. If the evaluation indicator set is U, then U = {U1, U2, U3, …, Ui, …, Un}, where Ui (i = 1, 2, 3, …, n) is each evaluation index.

4.2 Determination of the Evaluation Set (Evaluation Levels)
After the evaluation index set is determined, corresponding evaluation results should be defined for it. Each indicator is evaluated on an abstract evaluation level, forming a fuzzy set of comments (called the evaluation set). The grades of the comments are generally expressed as very good, better, general, poor, poorer, or as excellent, good, medium, poor. If the evaluation set is V and m evaluation levels are set, then V = {V1, V2, …, Vm}.

4.3 Establish the Fuzzy Relation Matrix R
After the index set and evaluation set are determined, each index has a certain quantitative relation with every evaluation level, which can be represented by the matrix R. R is the fuzzy relation synthesized from U and V, an (n × m) fuzzy matrix [4], namely:

$$R = U \times V = \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_n \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1j} & \cdots & r_{1m} \\ r_{21} & r_{22} & \cdots & r_{2j} & \cdots & r_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nj} & \cdots & r_{nm} \end{bmatrix} \quad (1)$$

where $r_{ij}$ represents the degree to which the evaluation object, on evaluation factor $U_i$, receives evaluation level $j$.

4.4 Membership
Membership is the degree to which an element belongs to a set. It can be represented by a value on the closed interval [0, 1]: the stronger the membership, the closer the value is to 1; the weaker the membership, the closer it is to 0. For example, a membership degree of 0.9 means that 90% of element x belongs to set A. The fuzzy matrix R collects the degrees to which each evaluation index belongs to each evaluation level. The membership of a single qualitative evaluation index $U_i$ to $V_i$ can be expressed by formula (2):

$$r_i = \frac{d_i}{d} \quad (2)$$

where $r_i$ is an element of the fuzzy matrix, $d$ is the number of experts participating in the evaluation, and $d_i$ is the number of experts giving evaluation $V_i$ to evaluation index $U_i$.
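Formula (2) amounts to turning expert vote counts into fractions. A small sketch (the vote counts below are illustrative, not taken from the paper's survey):

```python
def membership_row(votes, d=None):
    """Eq. (2): r_i = d_i / d for one evaluation index, given the number
    of experts d_i who assigned each evaluation grade."""
    d = d if d is not None else sum(votes)
    return [di / d for di in votes]

# Illustrative: 15 experts grade one index as excellent/good/medium/poor.
row = membership_row([7, 6, 1, 1])
```

Each row of the fuzzy matrix R built this way sums to 1, since every expert casts exactly one vote per index.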
4.5 Determine the Weight of Each Evaluation Index
Since the evaluation indexes differ in importance, corresponding weight coefficients should be assigned to them. This paper uses the common expert scoring method; the principle and formula are the same as in the membership section. If the weight set is A, then A = {a1, a2, a3, …, an}.

4.6 Fuzzy Comprehensive Evaluation
The problem of fuzzy comprehensive evaluation is to transform a fuzzy set A on the evaluation index set U into a fuzzy set B on the evaluation set V through the fuzzy relation R. A mathematical model of fuzzy comprehensive evaluation can therefore be established [5]:

$$B = A \circ R = (a_1, a_2, \ldots, a_n) \circ \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1j} & \cdots & r_{1m} \\ r_{21} & r_{22} & \cdots & r_{2j} & \cdots & r_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nj} & \cdots & r_{nm} \end{bmatrix} \quad (3)$$
In this model, the evaluation result B is a fuzzy subset of the domain V and also a fuzzy vector, i.e. a 1 × m fuzzy matrix, B = (b1, b2, b3, …, bm); A is the weight set, and R is the fuzzy matrix synthesized from U (evaluation indexes) and V (evaluation grades).
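Under the weighted-average composition operator M(·, +), which is consistent with the numbers reported in the case study later in the paper, Eq. (3) reduces to an ordinary vector–matrix product, and the resulting membership vector B can be collapsed into a single score by assigning a value to each grade. A sketch, with the choice of operator stated as an assumption:

```python
import numpy as np

def fuzzy_evaluate(A, R):
    """Eq. (3): B = A o R using the weighted-average operator M(., +),
    i.e. a plain vector-matrix product of the weights and the fuzzy matrix."""
    return np.asarray(A, dtype=float) @ np.asarray(R, dtype=float)

def grade_score(B, grade_values=(10, 8, 6, 4)):
    """Collapse the membership vector into one score: X = sum(b_i * s_i),
    with the grades excellent/good/medium/poor valued 10/8/6/4."""
    return float(np.dot(B, grade_values))
```

With the membership vector reported in the case study (0.4433, 0.3833, 0.095, 0.0283), `grade_score` reproduces the paper's comprehensive score of about 8.18.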
5 Establishment of the Evaluation and Quantification Model of the Bus Modeling Design Scheme

5.1 Bus Modeling Evaluation Index System
Following the fuzzy evaluation method and the components of product modeling design, and mainly from an aesthetic perspective, the bus modeling evaluation index system is preliminarily determined on the basis of 13 evaluation indexes, as shown in Fig. 2.

5.2 Set the Rating Levels
Each index in the evaluation target is generally graded as excellent, good, medium or poor. Therefore, the comment set V for the evaluation index set U is: V = (excellent, good, medium, poor) = (V1, V2, V3, V4).

5.3 Weight Calculation
After the evaluation indexes of the bus model are determined, the weight of each index is calculated as follows: 40 experts (covering the major departments of the company) are selected to evaluate the importance of each indicator to the overall modeling and to vote on which indicators are important. The weights of the primary indexes are then: A = (a1, a2, a3, a4, a5) = (8/40, 25/40, 1/40, 2/40, 4/40) = (0.2, 0.625, 0.025, 0.05, 0.1). The weights of the secondary indexes can be calculated similarly.

Fig. 2. Bus model quality evaluation index system
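The vote-to-weight step can be checked numerically. The vote counts 8/25/1/2/4 below are an assumption: they are read back from the reported decimal weights, whose listing is garbled in the source text, and they are consistent with a weight vector that sums to 1:

```python
# Primary-index weights from 40 expert votes: a_i = votes_i / 40.
votes = [8, 25, 1, 2, 4]
n_experts = 40
A = [v / n_experts for v in votes]
```

A comes out as (0.2, 0.625, 0.025, 0.05, 0.1) and sums to 1, as a weight vector must.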
6 Practical Case Verification

6.1 Preliminary Selection of Design Schemes
Take the evaluation of the DFAC-6851H4E city bus design scheme as an example. As shown in Fig. 3, from among multiple design schemes the decision-makers of the enterprise first select one scheme that is most in line with the design concept of the Dongfeng brand.

6.2 Use Fuzzy Evaluation to Evaluate the Design Scheme
In order to further verify whether the proposal is the best design proposal or not, the fuzzy evaluation method is adopted for further evaluation, and a 15-person evaluation team including engineering, technology, planning, marketing and sales is organized to vote and evaluate. Vote on the primary plan. According to the statistical data, the fuzzy evaluation matrix of evaluation indexes is calculated. By multiplying each index weight coefficient A with the above fuzzy matrix R, the fuzzy comprehensive evaluation of this design scheme can be obtained: B = A R = (0.44, 0.38, 0.10, 0.03). Learned from this, the design of optimal membership
Styling Research of DFAC-6851H4E City Bus Based on Fuzzy Evaluation
Fig. 3. Best design scheme selected from multiple modeling design schemes
degree is 0.44, the good membership degree is 0.38, the medium membership degree is 0.10, and the poor membership degree is 0.03. According to the principle of maximum membership, the grade memberships of the evaluation object can be converted, item by item, into relative rank scores to give an evaluation result with a corresponding score [6]. If the four grades excellent, good, medium and poor are converted into 10, 8, 6 and 4 points respectively, then the evaluation score of the modeling design scheme is: X = 0.4433 × 10 + 0.3833 × 8 + 0.095 × 6 + 0.0283 × 4 = 8.1826. Thus, it can be determined that the comprehensive evaluation of the modeling design scheme is excellent.
6.3 Engineering and Mass Production of the Final Design Scheme
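The comprehensive evaluation and scoring steps of Sect. 6.2 can be sketched as follows (the membership vector B is taken from the text; variable names are ours):

```python
# Fuzzy comprehensive score: weighted sum of grade points by membership.
B = [0.4433, 0.3833, 0.095, 0.0283]    # excellent, good, medium, poor
grade_points = [10, 8, 6, 4]

score = sum(b * p for b, p in zip(B, grade_points))
print(round(score, 4))                  # 8.1826

# Principle of maximum membership: pick the grade with the largest degree.
labels = ["excellent", "good", "medium", "poor"]
verdict = labels[B.index(max(B))]       # "excellent"
```

The score reproduces the value X = 8.1826 given in the text, and the maximum-membership rule yields the overall grade "excellent".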
After nearly 8 months of product verification and trial-production, the design and manufacturing task has been completed, and the implementation of the design scheme has been achieved (as shown in Fig. 4).
Fig. 4. Mass production process of the optimal design scheme
7 Conclusion
The fuzzy evaluation method can be used to evaluate the design scheme of bus appearance quantitatively. The quantitative evaluation model not only provides an evaluation basis for developing products with market competitiveness, but also effectively reduces the market risk of strategic product development decisions. The scheme selected through fuzzy evaluation has performed well in the market, with sales of the single product exceeding 8 billion yuan (RMB), and has won the approval of customers.
References
1. Ma, Y.: Research on SUV side shape evaluation based on fuzzy evaluation method. Jilin University (2018)
2. Yang, C., Cheng, J.: Study on the sensibility evaluation method of old conceptual products based on fuzzy analysis. Packag. Eng. 39(10), 128–132 (2018)
3. Chen, M.: Analysis and research on gesture interaction design of smart touch-screen mobile phones. Wuhan University of Technology (2013)
4. Wang, L., Du, W.: Application of ASP in fuzzy comprehensive evaluation. Inner Mongolia TV Univ. J. (01), 22–23 (2007)
5. Wang, H.: Design of Suzhou King Long B92V series city bus. Creat. Des. Sources (06), 42–47 (2010)
6. Cheng, X.: Research on product aesthetic evaluation based on fuzzy evaluation method. Design (02), 170–171 (2013)
Object Detection to Evaluate Image-to-Image Translation on Different Road Conditions Fumiya Sudo(&), Yoshihiro Hashimoto, and Giuseppe Lisi Nagoya Institute of Technology, Nagoya, Japan [email protected], {hashimoto,lisi.giuseppe}@nitech.ac.jp
Abstract. In vision-based navigation, image-to-image translation systems have been recently proposed to remove the effect of variable road conditions from images, in order to normalize the available data towards a common distribution (e.g. sunny condition). In this context, previous works have proposed image de-raining using generative adversarial networks (GAN) and evaluated performance based on pixel-to-pixel similarity measures that do not account for semantic content relevant for driving. Here, we propose to evaluate the performance of a de-raining GAN using an object detection neural network pretrained to find cars and pedestrians. We conducted experiments on the CARLA simulator to collect training and evaluation data under several weather conditions. Results indicate that GAN de-rained images achieve a high object detection performance in some conditions, but on average lower than object detection on the original rainy images. Future work will concentrate on improving semantic reconstruction and detection of other road elements (e.g. lanes, signs).
Keywords: Image-to-image translation · Generative adversarial networks · Autonomous driving · Image de-raining · Object detection
1 Introduction Road safety is one of the biggest obstacles to the adoption of autonomous driving. The varying road conditions (e.g. rain, snow) affect car sensors in often unpredictable ways. For example, the performance of LIDAR sensors significantly decreases under rain [1]. Similarly, deep learning models trained on camera images, may be strongly affected by changing road conditions, resulting in poor performance. Indeed, most of the images used to train deep neural networks come from sunny data collection sessions, while much fewer samples are available for different road conditions. To mitigate this problem, image-to-image translation systems, mapping from an input image to an output image, have been proposed for de-raining [2]: removing the effects of rain from images (e.g. rain drops, reflective road). Similarly, these systems could be used to generate images under different road conditions, for data augmentation [3]. Such deraining systems have been only evaluated using methods that are typically used in the image-to-image translation literature, such as Peak Signal to Noise Ratio, Structural Similarity, Universal Quality Index and Visual Information Fidelity [2], which require © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 143–149, 2020. https://doi.org/10.1007/978-3-030-39512-4_23
ground-truth frames and omit semantic factors that are relevant for driving. Here, we propose to evaluate the translated images using an object detection neural network at each frame of a driving session (Fig. 1). This allows for a quantitative evaluation of the translated image that is based on its semantic content, similar to the FCN score proposed in [4], and does not require ground-truth frames.
2 Methods We conducted experiments on the CARLA simulator [5], where a self-driving car autonomously drives around a simulated city, collecting frames under sunny and rainy road conditions. An image-to-image translator is trained to map rainy to sunny images, using a Cycle Generative Adversarial Network (Cycle GAN).
Fig. 1. Flowchart of the proposed evaluation strategy. The Cycle GAN image-to-image translator takes a rainy image (leftmost) as input and converts it to a sunny image (center). The SSD object detection finds objects (rightmost image) using the output of the Cycle GAN. Surprisingly, Cycle GAN translates correctly not only the cars, but also the traffic signs, light poles, trees, and even brands the truck. However, we observe some artifacts on the sidewalk.
The advantage of Cycle GAN is that a mapping from a source domain X to a target domain Y is learned in the absence of paired examples [6]. Subsequently, an SSD [7] object detector, pre-trained to detect cars and pedestrians, is applied to the original and to the de-rained frames, respectively.
2.1 Data Collection
We evaluate the performance and generalization ability under several experimental conditions: 3 weather conditions (i.e. sunny noon, mid-rain noon, hard-rain noon; Fig. 2), 2 resolution levels (i.e. 320 × 240 and 280 × 200) of the Cycle GAN input (leftmost in Fig. 1), and 2 towns (i.e. experimental environments in CARLA). For this purpose, we collected all the data necessary for training the Cycle GAN and for evaluating the object detection performance, including images from the frontal camera, the coordinates of all the vehicles and pedestrians in the environment, and the relative timestamps. The autonomous driving agent available in CARLA was executed
Fig. 2. An example of image data collected using Carla
for about 13 min, saving data with a time step of about 0.2 s, producing 3650 samples for each condition. In all the runs, the CARLA environment contained 60 pedestrians and 30 cars, moving around the town.
2.2 Training and Evaluation Strategy
Using only samples from Town 01 (i.e. training town), a Cycle GAN is trained to de-rain images separately for each image resolution level and for different rain levels: mid-rain, hard-rain, and mixed samples from mid- and hard-rain, for a total of 6 different neural networks. The training of the Cycle GANs is done as in the original paper, with batch size 1 and 200 epochs. In order to generate de-rained images and evaluate the performance, each model is then validated on samples from its respective image resolution, at Town 02 (i.e. test town) and Town 01 (i.e. validation town), under the mid-rain and hard-rain conditions. The de-rained images are then fed to the SSD object detection in order to evaluate how much semantic content is left in the de-rained images. In order to build a baseline, SSD is also executed on the original images (i.e. no_change). The SSD neural network is trained to detect pedestrians and cars. The last step of the validation procedure is to evaluate the accuracy of the SSD object detection. For this purpose, we used the vehicle and pedestrian coordinates to generate the ground truth. Specifically, the area within a 20 m radius and within a 90° angle of view was considered as the detection area (Fig. 3). For each frame, the output of the SSD detector was then compared with the number of pedestrians and cars in the detection area, in order to compute the following measures: true positives (TP), false positives (FP), false negatives (FN), the true positive rate (TPR = TP/(TP + FN)) and the false discovery rate (FDR = FP/(FP + TP)). Examples of TP, FP and FN are visualized in Fig. 4.
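A minimal sketch of these measures and of the detection-area test (function names are ours; the car's forward direction is taken as +x, positions in meters):

```python
import math

# Detection area as described in the text: 20 m radius, 90-degree field of view.
def in_detection_area(dx, dy, radius=20.0, fov_deg=90.0):
    """True if an object at (dx, dy) relative to the car lies within
    the radius and the angle of view centered on the forward axis."""
    dist = math.hypot(dx, dy)
    angle = abs(math.degrees(math.atan2(dy, dx)))
    return dist <= radius and angle <= fov_deg / 2

def rates(tp, fp, fn):
    """True positive rate and false discovery rate as defined in the text."""
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fdr = fp / (fp + tp) if (fp + tp) else 0.0
    return tpr, fdr

print(in_detection_area(10.0, 3.0))   # True: ~10.4 m away, ~17 degrees off-axis
print(rates(47, 1, 53))               # TPR = 0.47 for 47 TP, 53 FN
```

Per frame, TP/FP/FN are accumulated by matching SSD detections against the ground-truth objects inside this area.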
Fig. 3. Detection area for object detection
3 Results
The summary of the performance grouped by no_change and averaged Cycle GAN models is shown in Table 1, while the separate results for all the experimental conditions can be found in Tables 2, 3, 4 and 5. Model no_change represents the situation where SSD is applied directly to the original image, i.e. without Cycle GAN. Moreover, the model called mid_hard_rain represents the Cycle GAN trained with a mix of samples from the mid-rain and hard-rain conditions. The weather column of the tables represents the condition of the road when the model was executed.
Fig. 4. Examples of no_change images containing at least a true positive (TP), false positive (FP), false negative (FN)
Table 1 indicates that, in general, the model no_change achieves the best TPR and FDR for car and pedestrian detection, suggesting that the Cycle GAN removes a large portion of semantic content (i.e. lower TPR) and introduces artifacts that fool the SSD into hallucinating objects (i.e. higher FDR). However, in some instances, we observe only a 10% difference in TPR between no_change and model hardrain (Town 02, hard-rain condition), and an 11% difference in Town 01 under the mid-rain condition. This is quite an impressive performance of the Cycle GAN, which is able to remove rain while keeping the features of big objects. On the other hand, with respect to pedestrian detection, TPR is low for all the models, including no_change, due to the fact that pedestrians entering the detection area are often too far away for SSD to work properly, and they rarely come close enough to the car. Accordingly, the FDR is highly variable due to the low number of TP and FP counts. Therefore, with the current method, it is hard to make a conclusive evaluation of the Cycle GAN performance with small objects. Remarkably, we find no performance difference between Town 01 (i.e. training and validation environment) and Town 02 (i.e. test environment), suggesting that the neural network learned at Town 01 is able to generalize to Town 02, with a different layout and characteristics, and that overfitting was not a problem. Moreover, image resolution does not have an impact on the object detection performance of the Cycle GAN + SSD pipeline.
Table 1. Average performance

Model          | Car TPR | Car FDR | Pedestrian TPR | Pedestrian FDR
no_change      | 50.9%   | 1.0%    | 2.4%           | 1.9%
Cycle GAN avg. | 26.6%   | 4.4%    | 0.3%           | 41.1%
Table 2. Town 01, resolution = (200, 280)

Weather  | Model         | Car TPR | Car FDR | Ped. TPR | Ped. FDR
midrain  | no_change     | 47%     | 1%      | 0.96%    | 9.1%
midrain  | midrain       | 15%     | 3%      | 0.29%    | 25.0%
midrain  | hardrain      | 36%     | 1%      | 0.29%    | 25.0%
midrain  | mid_hard_rain | 19%     | 4%      | 0.19%    | 33.3%
hardrain | no_change     | 49%     | 2%      | 2.49%    | 0.0%
hardrain | midrain       | 33%     | 2%      | 0.00%    | 100%
hardrain | hardrain      | 24%     | 3%      | 0.28%    | 20.0%
hardrain | mid_hard_rain | 18%     | 5%      | 0.48%    | 12.5%
Table 3. Town 01, resolution = (240, 320)

Weather  | Model         | Car TPR | Car FDR | Ped. TPR | Ped. FDR
midrain  | no_change     | 47%     | 1%      | 1.54%    | 5.9%
midrain  | midrain       | 16%     | 2%      | 0.38%    | 63.6%
midrain  | hardrain      | 30%     | 2%      | 0.10%    | 66.7%
midrain  | mid_hard_rain | 21%     | 3%      | 0.38%    | 42.9%
hardrain | no_change     | 51%     | 2%      | 2.42%    | 0.0%
hardrain | midrain       | 32%     | 3%      | 0.14%    | 33.3%
hardrain | hardrain      | 27%     | 2%      | 0.42%    | 64.7%
hardrain | mid_hard_rain | 23%     | 5%      | 1.54%    | 5.9%
Table 4. Town 02, resolution = (200, 280)

Weather  | Model         | Car TPR | Car FDR | Ped. TPR | Ped. FDR
midrain  | no_change     | 56%     | 0%      | 4.99%    | 0.0%
midrain  | midrain       | 22%     | 7%      | 0.21%    | 40.0%
midrain  | hardrain      | 25%     | 8%      | 0.21%    | 0.0%
midrain  | mid_hard_rain | 29%     | 10%     | 0.00%    | 100%
hardrain | no_change     | 50%     | 1%      | 0.80%    | 0.0%
hardrain | midrain       | 12%     | 5%      | 0.11%    | 33.3%
hardrain | hardrain      | 40%     | 4%      | 0.06%    | 85.7%
hardrain | mid_hard_rain | 18%     | 5%      | 0.48%    | 12.5%
Table 5. Town 02, resolution = (240, 320)

Weather  | Model         | Car TPR | Car FDR | Ped. TPR | Ped. FDR
midrain  | no_change     | 57%     | 0%      | 4.99%    | 0.0%
midrain  | midrain       | 20%     | 6%      | 0.14%    | 71.4%
midrain  | hardrain      | 31%     | 7%      | 0.07%    | 50.0%
midrain  | mid_hard_rain | 39%     | 3%      | 0.00%    | 100%
hardrain | no_change     | 50%     | 1%      | 1.32%    | 0.0%
hardrain | midrain       | 30%     | 6%      | 0.23%    | 0.0%
hardrain | hardrain      | 40%     | 4%      | 0.17%    | 0.0%
hardrain | mid_hard_rain | 38%     | 5%      | 0.11%    | 0.0%
4 Discussion
In general, Cycle GAN drastically reduces object detection performance, suggesting that the semantic reconstruction after de-raining is not perfect. However, in some experimental conditions, TPR on de-rained images is surprisingly high, indicating that Cycle GAN is able to reconstruct semantically valid images, especially with big objects such as cars. With small objects such as pedestrians, the FDR increases drastically, suggesting that the Cycle GAN introduces artifacts that make the SSD hallucinate. With respect to the various experimental conditions, we find that Cycle GAN generalizes well from training in Town 01 to testing in Town 02. Moreover, image resolution does not have a particular effect, which may be due to the fact that the resolution levels tested are quite similar. However, testing larger resolutions is prohibitive from a hardware and memory perspective. In the future, we will improve the detection area for pedestrians and expand the proposed technique to different objects on the road, such as lanes and traffic lights. Moreover, we will test different types of GANs, such as the Semantic GAN (SemGAN), which produces semantically consistent output. Image-to-image translation generates surprisingly realistic sunny images; however, more research is required to generate more semantically coherent images.
References
1. Rasshofer, R.H., Gresser, K.G.: Automotive radar and lidar systems for next generation driver assistance functions. Adv. Radio Sci. 3(B.4), 205–209 (2005)
2. Wang, C., Xu, C., Wang, C., Tao, D.: Perceptual adversarial networks for image-to-image transformation. IEEE Trans. Image Process. 27(8), 4066–4079 (2018)
3. Uricár, M., Krízek, P., Hurych, D., Sobh, I., Yogamani, S.: Yes, we GAN: applying adversarial techniques for autonomous driving. arXiv preprint arXiv:1902.03442 (2019)
4. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
5. Dosovitskiy, A., Ros, G., Codevilla, F., López, A., Koltun, V.: CARLA: an open urban driving simulator. arXiv preprint arXiv:1711.03938 (2017)
6. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (2017)
7. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S.: SSD: single shot multibox detector. In: European Conference on Computer Vision. Springer, Cham (2016)
Humans and Artificial Cognitive Systems
Modelling Proxemics for Human-Technology-Interaction in Decentralized Social-Robot-Systems Thomas Kirks(&), Jana Jost, Jan Finke, and Sebastian Hoose Fraunhofer Institute for Material Flow and Logistics, J.-v.-F. Str. 2-4, 44227 Dortmund, Germany {thomas.kirks,jana.jost,jan.finke, sebastian.hoose}@iml.fraunhofer.de
Abstract. Automation in production and logistics facilities has increased drastically over the past decades, to the point that human workers are continually confronted with robots in their working environment. In this paper, we present an approach to model interactions between workers and our socially interacting AGV EMILI by using elliptic, worker-dependent zones in free navigation, so that it is possible to navigate around humans while respecting their individual distances. We use the human agent to save and update the information about his or her personal zones. An AGV that needs to pass by the human communicates with his or her human agent to retrieve this information and then calculates a path which respects the person's preferred distance.
Keywords: Human factors · Human-technology interaction · Social decentralized systems
1 Introduction
Automation in production and logistics facilities has increased in the past decades in a way that human workers are continually confronted with robots in their working environment. In particular, automated guided vehicles (AGVs) are supporting human workers. To increase the acceptance of workers, those robots need to interact with humans in a safe and empathic way. To design a social and interactive robot, the AGV EMILI has been developed, which can interact with humans through multiple interfaces. Previous work has shown that each human keeps individual distances around him/her. Depending on the counterpart one interacts with, different distances are kept, e.g. family members may enter a closer zone than friends. In contrast, people feel uncomfortable if an unfamiliar person enters a private zone. The field of proxemics investigates these spaces and their effects on interaction among humans. Studies have shown that these zones are also applicable to robots, and that they are not always of circular shape. For a better acceptance of robots, one has to use this information when designing the path planning process of an AGV. In this paper, we present the approach to model interactions between workers and our socially interacting AGV EMILI by using elliptic, worker dependent zones in free navigation so that it is © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 153–158, 2020. https://doi.org/10.1007/978-3-030-39512-4_24
possible to navigate around humans while respecting their individual distances. We use the human agent to save and update the information about his/her personal zones. An AGV retrieves this information from the human agent and then calculates a path respecting the person's preferred distance.
2 Background
2.1 State of the Art in Human-Robot Interaction
Today, robots are used in diverse use cases in industry and in private areas. While decades ago robots were heavy industrial machines, nowadays they have become smaller, more flexible and more cognitive [1], along with their abilities. These robots interact directly with humans and partially do not need safety devices. Hence, they can be used in new fields of application, e.g. surgery [2]. Although the usage of robots in industrial applications is increasing, the human worker will still play a major role in production and logistics. This can only be achieved by the development of new intuitive interaction methods for humans and robots. Human-robot collaboration is one interaction form besides cooperation and co-existence in human-robot interaction (HRI) [3]. For HRI we also need to consider psychological issues like cognitive ergonomics, which aims at reducing mental stress by reducing complexity, enabling intuitive interaction modalities and thereby raising the acceptance of robotic systems.
2.2 Proxemics
Since humans and robots nowadays collaborate, the results of decades-long research in the field of human-human interaction, such as the theory of proxemics by Hall [4], can be used to ensure a human-friendly work environment. According to Hall, each person has four unique zones surrounding him/her, where each zone has its own distance and is defined by the kind of people allowed to enter it: e.g. close friends may enter the second closest, the personal zone, whereas the closest, the intimate zone, is reserved for partners. In contrast, people feel uncomfortable if an unfamiliar person enters a private zone. Robots working close to humans have to react according to the related zone. Various studies have focused on factors which have an impact on the distance between robot and human. Humans feel anxious with taller robots and keep greater distances [5]. The distance kept is proportional to the robot's velocity [6], and the general attitude towards robots influences the way people approach robots [7]. Further, gender and age as well as robot experience influence the distance [8]. In addition, Jost et al. [9] showed that humans approaching robots keep a smaller distance compared to being approached. Besides, they found out that for persons inexperienced with robots, the facial expressions of the robot – showing emotions – have an impact on the distance kept.
Modelling Proxemics for Human-Technology-Interaction
2.3 Navigation for Mobile Robots
One option for the software development of robots is the Robot Operating System (ROS). It offers the so-called Navigationstack, a ROS node for navigating robots. It needs range sensor data, a model of the robot, odometry data, a navigation goal and a map of the environment as input. Using the range sensor data and the given map, a costmap is generated on which the path planning algorithm relies. The calculated path is transformed into translational and rotational velocities to move the robot from one point to a goal [10]. The costmap generation is split into layers, each of which adds more information to the costmap. The first layer adds the general layout, while the following layers add data e.g. from range sensors. Each layer is independently implemented in a plugin. The data in the costmap is represented by a character array, which can be reinterpreted as a one-channel image. Each pixel represents an area depending on the resolution and carries the corresponding cost [11]. Using costmap layers, the behavior of the navigation can be influenced directly. Hence, obstacles referring to humans can be manipulated in the costmap to fit the needs of workers. An example is provided by the social_plugin, which adds a Gaussian at a given position with the purpose of implementing a proxemics zone [11]. Unfortunately, this implementation does not allow an individualization of the zones depending on the referenced person and his or her needs, as described in Sect. 2.2.
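The layered composition described above can be sketched as follows (an illustrative pure-Python sketch, not the actual ROS C++ plugin API): each layer writes costs into its own grid, and the layers are combined cell-wise.

```python
# Illustrative sketch of layered costmap composition: each layer is a
# one-channel grid of costs 0..254, combined cell-wise by maximum cost.
LETHAL = 254
W, H = 5, 5

def empty_layer():
    return [[0] * W for _ in range(H)]

static_layer = empty_layer()            # free space from the map layout
obstacle_layer = empty_layer()
obstacle_layer[2][3] = LETHAL           # a range-sensor hit
social_layer = empty_layer()
social_layer[2][1] = 120                # soft cost around a person

costmap = [[max(static_layer[r][c], obstacle_layer[r][c], social_layer[r][c])
            for c in range(W)] for r in range(H)]
print(costmap[2][3], costmap[2][1])     # 254 120
```

Taking the maximum keeps lethal obstacles lethal while still letting soft social costs steer the planner around people when possible.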
3 Modelling Proxemics for Social-Robot-Systems
3.1 Requirements for the Model
In this paper, we present a model for the proxemics zone which refers to the various needs of a worker. To model those needs, different requirements have been identified. First, we want to cover different proxemics zones. The intimate zone (h(x, y)) should not be entered by any robot. Further, the personal zone is needed for interaction with the robot. Depending on the individual worker and his/her experience with robots, as well as individual and day-specific characteristics, e.g. inattentiveness and mental workload, the size of the zones may vary in both directions (y0: back and forth, as well as x0: left to right). Further, the distance kept in front of a person and the one behind a person may vary (σY). People tend to accept a shorter distance for something approaching frontally, while something coming from behind may surprise them. In addition, sinistral and dextral people keep different distances to their sides (σX). Finally, other human characteristics, e.g. visual impairment or a transverse posture, as well as process-specific requirements have to be met (Θ) while modeling the personal zone. Previous work has shown how to integrate the human worker into a multi-agent system by representing him/her via a human agent which runs on his/her human interface device, e.g. a smartphone, and can provide the needed parameter information.
3.2 Concept and Implementation
To implement proxemics zones as a plugin for the ROS Navigationstack, a Gaussian function [12] has been implemented (see equation below) which uses the standard deviations to dynamically model the curvature of the proxemics as described in Sect. 3.1. Accordingly, the zones can be shifted (x0, y0) relative to the person's position, and the proxemics can be rotated depending on the personal needs (Θ). Also, two different areas (intimate zone h(x, y) and personal zone) can be used, by defining each value of the Gaussian higher than a certain person-dependent value (μ) as a critical obstacle, such that routing algorithms will not plan to move the robot through this inner area. The outer area is still marked as an obstacle, but not as a critical one, such that navigation algorithms tend to plan paths around this area, but do move the robot through it if necessary.

g(x, y) = exp(−[(cos²Θ/(2σX²) + sin²Θ/(2σY²)) (x − x0)² + 2 (−sin 2Θ/(4σX²) + sin 2Θ/(4σY²)) (x − x0)(y − y0) + (sin²Θ/(2σX²) + cos²Θ/(2σY²)) (y − y0)²])

h(x, y) = g(x, y), if g(x, y) < μ; 253, otherwise
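A minimal sketch of this elliptic proxemics cost (parameter names follow the text: sigma_x/sigma_y shape the ellipse, theta rotates it, (x0, y0) shifts it, mu is the person-dependent threshold; the real plugin additionally scales costs into the 0..254 costmap range):

```python
import math

# Rotated 2D Gaussian proxemics cost; g in (0, 1], peaking at (x0, y0).
def g(x, y, x0=0.0, y0=0.0, sigma_x=1.0, sigma_y=2.0, theta=0.0):
    a = math.cos(theta)**2 / (2 * sigma_x**2) + math.sin(theta)**2 / (2 * sigma_y**2)
    b = -math.sin(2 * theta) / (4 * sigma_x**2) + math.sin(2 * theta) / (4 * sigma_y**2)
    c = math.sin(theta)**2 / (2 * sigma_x**2) + math.cos(theta)**2 / (2 * sigma_y**2)
    return math.exp(-(a * (x - x0)**2 + 2 * b * (x - x0) * (y - y0) + c * (y - y0)**2))

def h(x, y, mu=0.9, **kw):
    """Threshold the Gaussian: values at or above mu become the
    critical-obstacle cost 253 (the inner, intimate zone)."""
    val = g(x, y, **kw)
    return 253 if val >= mu else val

print(g(0.0, 0.0))   # 1.0 at the person's position
print(h(0.0, 0.0))   # 253: inside the intimate zone
print(h(3.0, 0.0))   # small residual cost well outside the ellipse
```

With theta = 0 the ellipse is axis-aligned; rotating theta and choosing sigma_x ≠ sigma_y yields the asymmetric, worker-dependent zones described in Sect. 3.1.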
The parameters as well as the position of a person are known to the proxemic_layer by subscribing to the people message, which contains information about all workers. The definition of this message, the source code of the proxemic_layer, as well as a testing node and a usage description can be found in our public source code repository [15]. An example of the results of the proxemics layer is shown in Fig. 1.
Fig. 1. Robot navigating around interacting human using proxemics. Left - photo of the situation. Right - screenshot of the corresponding proxemics and planned route.
4 Conclusion and Outlook
The developed plugin for the ROS Navigationstack fulfills the needs of individual workers while interacting with mobile robots by taking into account the theory of proxemics. By modeling the zones via a Gaussian function, all necessary characteristics of a person, e.g. sinistrality or visual impairment, as well as process-specific spatial requirements can be covered. The proxemics plugin ensures that the human worker feels comfortable while working with robots and is one important step towards human-robot collaboration. Future work will include case studies with participants in industrial environments. Here we will evaluate the feasibility and the adequacy of the use of proxemics distances with mobile robots. The evaluation will be based on specific use cases and on methods like the "Negative Attitude Towards Robots Scale" [7] or the User Experience Questionnaire [13]. Furthermore, we want to investigate spatially extended influences like aerial robots and therefore need to implement other approaches to proxemics, e.g. different shapes. Additionally, we will extend the people message to include more features and abilities of the workers. A topic we are already working on is to have the robots learn from and adapt to the human worker's needs and behaviors over time. In this process, we intend to alter or sharpen the parameters influencing the proxemics shapes based on methods of machine learning. In this manner, we gain dynamically changing profiles for each individual, which are considered private data and have to be stored in a safe manner [14].
Acknowledgments. This research is supported by the Center of Excellence for Logistics and IT and has been partially funded by the European Union's Horizon 2020 research and innovation program under grant agreement No. 688117 (SafeLog).
References
1. Weber, M.: Mensch-Roboter-Kollaboration (2001). https://www.arbeitswissenschaft.net/fileadmin/Downloads/Angebote_und_Produkte/Zahlen_Daten_Fakten/ifaa_Zahlen_Daten_Fakten_MRK.pdf
2. Onnasch, L., Maier, X., Jürgensohn, T.: Mensch-Roboter-Interaktion - Eine Taxonomie für alle Anwendungsfälle, vol. 1. Fokus, Baua (2016)
3. International Organization for Standardization: ISO 8373:2012-03 Robots and robotic devices - Vocabulary (2012)
4. Hall, E.: The Hidden Dimension. Doubleday Anchor Books, Doubleday (1966)
5. Hiroi, Y., Ito, A.: Are bigger robots scary? - the relationship between robot size and psychological threat. In: 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 546–551, July 2008
6. Sakai, T., Nakajima, H., Nishimura, D., Uematsu, H., Kitano, Y.: Autonomous mobile robot system for delivery in hospital. Tech. Rep. Matsushita Electr. Works 53(2), 62–67 (2005)
7. Syrdal, D.S., Dautenhahn, K., Koay, K.L., Walters, M.L.: The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study (2009)
8. Butler, J.T., Agah, A.: Psychological effects of behavior patterns of a mobile personal robot. Auton. Robots 10(2), 185–202 (2001)
9. Jost, J., Kirks, T., Chapman, S., Rinkenauer, G.: Examining the effects of height, velocity and emotional representation of a social transport robot and human factors in human-robot collaboration. In: Proceedings of the Human-Computer Interaction – INTERACT 2019 17th IFIP TC 13 International Conference, Paphos, Cyprus, 2–6 September 2019, Part II (2019)
10. Martinez, A., Fernández, E.: Learning ROS for Robotics Programming. Packt Publishing Ltd., Birmingham (2013)
11. Lu, D.V., Hershberger, D., Smart, W.D.: Layered costmaps for context-sensitive navigation. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (2014)
12. Bronstein, I.N., et al.: Taschenbuch der Mathematik, vol. 1. Springer, Heidelberg (2012)
13. Laugwitz, B., Held, T., Schrepp, M.: Construction and evaluation of a user experience questionnaire. In: Proceedings of the 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society on HCI and Usability for Education and Work, USAB 2008, pp. 63–76. Springer, Heidelberg (2008)
14. Kirks, T., Uhlott, T., Jost, J.: The use of blockchain technology for private data handling for mobile agents in human-technology interaction. In: 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2019, in press)
15. Hoose, S.: GitHub repository of the implementation of the proxemic_layer. https://github.com/iml130/proxemic_layer. Accessed 19 Sept 2019
Category Learning as a Use Case for Anticipating Individual Human Decision Making by Intelligent Systems Marcel Lommerzheim1(&), Sabine Prezenski2, Nele Russwinkel2, and André Brechmann1 1 Leibniz-Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany {marcel.lommerzheim,andre.brechmann}@lin-magdeburg.de 2 Technische Universität Berlin, Marchstrasse 23, 10587 Berlin, Germany {sabine.prezenski,nele.russwinkel}@tu-berlin.de
Abstract. Anticipating human behavior requires a model of the rationale of how humans acquire knowledge while solving a problem. The rational aspects of decision making need to be taken into consideration to improve computational models that currently fail to fully explain behavioral data in rule-based category learning. Compared to reinforcement learning models that assume gradual learning, cognitive modelling allows the implementation of selection rules and instance-based learning for decision making, enabling more flexible behavior. Here we use ACT-R to model behavioral data of auditory category learning. By systematically changing the probabilities of rule selection, capturing individual preferences for auditory features, we first improve the original model with regard to the average learning curves of subjects. The aim is then to generate a version of the model that explains the learning performance of individual subjects. Neuroimaging data will allow testing the predictions of the model by analyzing the dynamics of activation of brain areas linked to the model processes.
Keywords: ACT-R · Category learning · Decision making · fMRI
1 Introduction

To achieve effective intelligent human systems integration, it is necessary to understand human intentions and to anticipate possible evolving mistakes in order to provide the right support. When working on problems, humans inevitably learn about causal relations between objects and actions, resulting in explanatory models used as a basis for further decisions. A prime cognitive function that has been the focus of several models for understanding human learning is categorization. However, due to the assumption of a gradual learning process, such models cannot fully explain human category learning, and it has been claimed that different modeling accounts are required that can capture the process of model building in humans [1, 2]. Accordingly, we have used the cognitive architecture ACT-R towards understanding the dynamics and individuality of category learning. ACT-R allows for an algorithmic implementation of several human core abilities and their specific characteristics such as resource limitations, e.g. in visual attention. A vast number of cognitive tasks have been implemented that predict human behavior. Especially task models using memory and learning mechanisms have shown good correspondence to human behavior. In category learning, participants form the categories at different speeds, with transitions between learning phases occurring at different points in time. Thus, category learning is not well represented by group-average learning curves [3]. Reasons for such variability may lie in differences in the learners’ prior experience or in the strategies applied for solving a given category learning task. Moreover, the task design itself may introduce stimulus features with different salience or require motor responses for which participants have different preferences, and thus increase the variance of the resulting behavioral data. Thus, predicting the dynamics of decision making in individual participants requires models that can consider the influence of such factors. In the present study we analyze behavioral data of an auditory category learning experiment in which participants must learn the conjunction of two category-defining rules by trial and error. In the middle of the experiment the assignment of the target button was changed so that participants had to adapt to the changed situation. We used ACT-R to model this task based on previous work [4] and expanded the model by adding different preferences for the sound features in order to better explain the behavioral results.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 159–164, 2020. https://doi.org/10.1007/978-3-030-39512-4_25
2 Methods

2.1 Experiment
55 subjects participated in the experiment, which took place inside a 3 T MR scanner (27 female, 28 male, age range between 21 and 30 years, all right-handed, with normal hearing). All subjects gave written informed consent to the study, which was approved by the ethics committee of the University of Magdeburg, Germany. A set of 160 different frequency-modulated tones served as stimuli for the categorization task. The tones had five different features, each with one of two possible categorical values: duration (short, 400 ms, vs. long, 800 ms), direction of frequency modulation (rising vs. falling), volume (low, 76–81 dB, vs. high, 86–91 dB), frequency range (five low frequencies, 500–831 Hz, vs. five high frequencies, 1630–2639 Hz), and speed of modulation (slow, 0.25 octaves/s, vs. fast, 0.5 octaves/s). The task-relevant features were the direction of frequency modulation and sound duration, resulting in four tone categories: short/rising, short/falling, long/rising, and long/falling. For each participant, one of these categories constituted the target sounds (25%), while the other three categories served as non-targets (75%). As feedback stimuli, we used naturally spoken utterances (e.g., ja, “yes”; nein, “no”) as well as one time-out utterance (zu spät, “too late”) taken from the evaluated prosodic corpus MOTI [5, 6]. The experiment lasted about 33 min, in which a large variety of frequency-modulated tones were presented in 240 trials in pseudo-randomized order with a jittered inter-trial interval of six, eight, or ten seconds. The participants were instructed to indicate via button-press whether they considered the tone in each trial to be a target (right index finger) or a non-target (right middle finger). They were not informed about the target category but had to learn it by trial and error. Correct responses were followed by positive feedback, incorrect responses by negative feedback. If participants failed to respond within two seconds following the onset of the tone, the time-out feedback was presented. After 120 trials, a break of 20 s was introduced. From the next trial on, the contingencies were reversed such that the target stimulus required a push of the right instead of the left button. The participants were informed in advance about a resting period after finishing the first half of the experiment, but they were not told about the contingency reversal.

2.2 Model
Modelling Method. The experimental task was modeled with the ACT-R framework, a cognitive architecture that provides a set of different cognitive functionalities, called modules, that interact with each other via interfaces called buffers. We will first describe the general modelling approach as recently implemented [2] and then outline the changes that we made to include individual preferences in tone features. Our model uses the motor, the declarative, the imaginal, the goal, the aural, and the procedural module of ACT-R. The motor module is responsible for the motor output of the model. The declarative module represents the long-term memory of ACT-R, in which all representation units (chunks) are stored and retrieved. The imaginal module acts as the working memory that holds and modifies the current problem state. The goal module represents the control states. The aural module is responsible for processing auditory information. The procedural module is central for ACT-R, as it coordinates the other processing units by selecting production rules (representing procedural knowledge) based on the current state of the modules.

Original Task Model. In order to specify a model in ACT-R, the production rules and the chunks (representing background knowledge) need to be defined. Chunks are the smallest units of information and can be exchanged between buffers. Production rules, or productions, have a condition part and an action part. They are selected sequentially, so only one production can be selected at a time, and a production is only selected if its condition part matches the state of the modules. Subsequently, the action part modifies the chunks in the modules. In the case that more than one production matches the state of the modules, a subsymbolic production selection process chooses which production is selected. Another subsymbolic process of ACT-R is the activation of a chunk. The activation of a chunk determines whether it can be retrieved from memory and how long this takes. A chunk’s activation value is determined by its past usefulness (base-level activation), its relevance in the current context (associative activation), and a noise parameter. The model is equipped with two different types of chunks: strategy chunks and control chunks. Strategy chunks represent the strategies in the form of examples of feature-value pairs (e.g. duration is long and volume is loud) and responses (e.g. left or right button). These strategy chunks are stored in and retrieved from long-term memory (declarative module). The currently pursued strategy is stored in working memory (imaginal module). A strategy chunk contains the following information: which feature(s) (e.g. volume) and which corresponding categorical value(s) (e.g. loud) are relevant, the proposed response, and the degree of complexity. The degree of complexity determines whether the model attends to just one feature (one-feature strategy) or to two features (two-feature strategy). Furthermore, the model notes whether and how often a strategy was successful. All of the possible strategy chunks are stored in declarative memory right from the start. Control chunks represent other metacognitive aspects of the model. They are stored in the goal buffer of the model. They include, first, the level of rule complexity used; second, whether or not a long-time successful strategy caused an error; and third, whether external changes occurred that require a new search for a strategy. The production rules used for defining the task are now described in detail, with their specific names in parentheses. When a tone is presented to the model, it enters the aural-location buffer (listen). Subsequently it is encoded in the aural buffer (encode). This results in a chunk with all necessary audio information (duration, direction of pitch change, volume, and frequency range) stored in the aural buffer. This audio chunk is then compared to the strategy chunk in the imaginal buffer (compare). If the specific features of the strategy chunk match those of the audio chunk, the response is chosen according to the strategy chunk (react-same); if not, the opposite response is selected (react-different). The model then listens to the feedback and holds it in the aural-location buffer (listen-feedback). Subsequently it is encoded in the aural buffer (encode-feedback). In case of positive feedback, the current strategy is maintained and the count slot is updated (feedback-correct).
In case of negative feedback, the strategy is usually altered depending on previous experiences (feedback-wrong). Next we describe how this strategy updating is implemented. In case a one-feature strategy fails on its first attempt, a different motor response is selected for this feature-value pair. Otherwise, the feature-value pair is changed while the response is retained. When a one-feature strategy was successful often and then fails once, it is not directly exchanged but re-evaluated, and it is noted that the strategy has caused an error. In two circumstances a switch from a one-feature strategy to a two-feature strategy occurs: either no successful one-feature strategy is left, or an often-successful one-feature strategy fails repeatedly. For switches of two-feature strategies the following rules apply: if the first attempt of a two-feature strategy fails, any other two-feature strategy is used. In case a two-feature strategy that was initially successful fails, a new strategy that retains one of the feature-value pairs and the response is transferred to the imaginal buffer. When an environmental change is detected, an often-successful two-feature strategy will fail and a retrieval of another two-feature strategy takes place.

Individual Preference Model. In order to improve the model’s performance and to better fit the human experimental data, we included preferences for specific strategies. While in the initial model the probability of selecting each strategy was the same, we changed the model by increasing the probability of certain strategy chunks being selected. We did so by setting the activation values of these strategy chunks to 1.0. In one model we increased the activation values of strategies that use duration and/or direction of frequency modulation as acoustic features. Thus, the strategy chunks that use these sound features receive a higher probability of being retrieved as the active strategy. In another model, the activation level of strategies using an irrelevant feature, i.e. volume, was increased.
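The effect of boosting the activation of preferred strategy chunks can be illustrated with a small sketch of ACT-R-style retrieval. The boost value of 1.0 comes from the text; the strategy names, the noise scale `s`, and the softmax (Boltzmann) form of the retrieval-probability equation are standard ACT-R assumptions rather than details of the authors' actual model.

```python
import math
import random

# Candidate one-feature strategy chunks; base-level activation defaults to 0.0.
# The chunk names are invented for illustration.
strategies = {
    "duration-long-left": 0.0,
    "direction-rising-left": 0.0,
    "volume-loud-left": 0.0,
    "frequency-high-left": 0.0,
}

# Individual preference model: set the activation of strategy chunks that use
# the task-relevant features (duration, direction) to 1.0, as in the text.
for name in strategies:
    if name.startswith(("duration", "direction")):
        strategies[name] = 1.0

def retrieval_probabilities(chunks, s=0.25):
    """Boltzmann distribution over chunk activations (ACT-R-style retrieval)."""
    weights = {n: math.exp(a / s) for n, a in chunks.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

def retrieve(chunks, s=0.25):
    """Sample one strategy chunk according to its retrieval probability."""
    probs = retrieval_probabilities(chunks, s)
    names = list(probs)
    return random.choices(names, weights=[probs[n] for n in names])[0]

probs = retrieval_probabilities(strategies)
```

With `s = 0.25`, the two boosted chunks each end up with a retrieval probability of roughly 0.49, against under 0.01 for each unboosted one, which is the qualitative effect the preference models rely on.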
3 Results and Discussion
Fig. 1. Average performance of the participants and the ACT-R models. Error bars depict standard errors.
Figure 1 depicts the performance of the participants and three different model versions: the initial model without feature preference, a model with preference for the relevant sound features duration and direction, and a model with preference for the irrelevant sound feature volume. As expected, the model with increased attention to volume performs even worse than the initial model. In contrast, the model with increased attention to duration and direction of frequency modulation performs nearly as well as the participants in blocks 4–6. A qualitative difference between the models’ and the participants’ performance can be seen after the change of response contingencies (after block 6). While the participants adapt rather quickly to the change and increase their performance already in block 8, all models require more time to recover their performance. Thus, while introducing preferences for task-relevant sound features improved the performance of the model, it still performs significantly differently from the participants in most blocks. Two things could be implemented to further improve it: firstly, response preferences could be included so that one response button is favored over the other. This may improve the initial performance of the model. Secondly, the way the model learns after the reversal could be changed. Instead of randomly switching to another strategy with a different feature-value pair as target tone, the model could first try choosing a different response to the same target tone. This is a necessary next step towards generating a model that can explain the average learning curves of humans. Then simulations of the model can be run that generate the exact same performance as individual subjects during the learning experiment. Since we acquired fMRI data during the participants’ learning, the third step would then be to compare the activity of the different ACT-R modules to the neural activation in the brain regions that have been suggested as corresponding brain regions [7].

Acknowledgments. The work was supported by EU-EFRE (ZS/2017/10/88783) and by DFG BR 2267/9-1.
References

1. Smith, J., Ell, S.: One giant leap for categorizers: one small step for categorization theory. PLoS One 10, e0137334 (2015)
2. Jarvers, C., Brosch, T., Brechmann, A., Woldeit, M.L., Schulz, A.L., Ohl, F.W., Lommerzheim, M., Neumann, H.: Reversal learning in humans and gerbils: dynamic control network facilitates learning. Front. Neurosci. 10, 535 (2016)
3. Gallistel, C.R., Fairhurst, S., Balsam, P.: The learning curve: implications of a quantitative analysis. Proc. Natl. Acad. Sci. 101, 13124–13131 (2004)
4. Prezenski, S., Brechmann, A., Wolff, S., Russwinkel, N.: A cognitive modeling approach to strategy formation in dynamic decision making. Front. Psychol. 8, 1335 (2017)
5. Wolff, S., Brechmann, A.: MOTI: a motivational prosody corpus for speech-based tutorial systems. In: Proceedings of the 10th ITG Conference on Speech Communication, pp. 1–4 (2012)
6. Wolff, S., Brechmann, A.: Carrot and stick 2.0: the benefits of natural and motivational prosody in computer-assisted learning. Comput. Hum. Behav. 43, 76–84 (2015)
7. Borst, J.P., Anderson, J.R.: A step-by-step tutorial on using the cognitive architecture ACT-R in combination with fMRI data. J. Math. Psychol. 76, 94–103 (2017)
System Architecture of a Human Biosensing and Monitoring Suite with Adaptive Task Allocation

Brandon Cuffie and Lucas Stephane

Florida Institute of Technology, Melbourne, FL 32901, USA
[email protected], [email protected]
Abstract. Future space missions require the integration of enhanced physical and physiological monitoring systems for supporting missions of both short and long duration. This human biosensing and monitoring suite will specify and implement a proof of concept of a non-invasive sensor system for crew monitoring in space missions by integrating cost-cutting equipment. HuBAM is composed of: (1) astronaut wearable suit(s) (AWS), combined with (2) an Automated Equipping Station (AES) that can be deployed as needed within space habitats and/or space transportation vehicles. The design and evaluation methods employed will be systematically mapped onto each Design Thinking stage for this first iteration. Based on the results and findings, the paper will deliver recommendations and tasks for further developments and future iterations using the Design Thinking process.

Keywords: Design Thinking · Biosensing · Monitoring · Usability · Human-system integration
1 Introduction

As astronauts explore the deep space environment, where support from Earth is not feasible, mission success will depend on keeping the crew healthy and performing at high levels, while making mission-specific tasks easier, adaptable, and optimal for the crew [1]. This paper provides a system architecture design for a human biosensing and monitoring suite (HuBAM) with adaptive task optimization using the Design Thinking approach. This suite will specify and implement a proof of concept of a non-invasive sensor system for crew monitoring in space missions by integrating cost-cutting equipment. HuBAM is composed of astronaut wearable suit(s) (AWS) with physical and physiological sensors, integrated with an Automated Equipping Station (AES) that can be deployed as needed within space habitats and/or space transportation vehicles. The astronaut-centered AWS leverages state-of-the-art requirements inclusive of physical and physiological sensors. The AES leverages requirements to equip the astronaut with his extra-vehicular activity (EVA) suit, hardware and software solutions for both sensing and managing sensor information, and the automated equipping tools and accessories necessary for specific EVA missions.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 165–171, 2020. https://doi.org/10.1007/978-3-030-39512-4_26
Applying the iterative and incremental five stages of Design Thinking (i.e. (1) problem statement, (2) need-finding/empathizing, (3) ideation, (4) prototyping, (5) evaluation) [2] to the design of the suite allows us to systematically elicit useful knowledge for optimal design requirements and formative evaluation feedback from astronauts and space experts. The design and evaluation methods employed are systematically mapped onto each Design Thinking stage (e.g. interviews with astronauts mapped onto need-finding, design rationale and scenario-based design mapped onto ideation, feasibility of detailed technical hardware and software tools mapped onto prototyping, and testing protocols mapped onto the evaluation stage) for this first iteration.
2 Design Thinking: First Iteration (See Fig. 1).
Fig. 1. Design Thinking cycle applied to the first iteration (problem statement, need-finding, ideation, prototype, evaluation), showing that the iteration is cyclic.
2.1 Problem Statement

For this first iteration, the aim is to design the HuBAM process that the astronaut will go through, from monitoring to mission equipping, in a human-centered manner. In developing the process, we had to evaluate possible automated alternatives for the task-specific equipping of astronauts for the completion of a dynamically assignable planetary task. Within the context of this study, the planetary task is defined as a nominal Mars rover support and maintenance scenario in which the mission specialist must monitor and assist in the deployment of an automated rover. The specialist will also support the rover in the collection of surface samples, and monitor and assist in the rover’s recovery.
2.2 Need-Finding/Empathizing
Exploring the problem first required an intuitive understanding of the topic of spacesuits. This was achieved through a combination of in-depth research, task analysis, and a state-of-the-art Questions, Options and Criteria (QOC) analysis [3], shown in Fig. 2. The research also drew on many articles, documentaries, and even a spacesuit operating manual, which aided in understanding use cases and scenarios for the problem. As part of this type of participatory design, we conducted informal and formal interviews with subject matter experts (SMEs) such as a space scientist, a biomedical expert, and a retired astronaut. The astronaut gave a precise account of his time in space and his recommendations to improve his spacesuit.
Fig. 2. Design Space Analysis for Mission-Based Task Allocation. This QOC shows the various requirements needed for a successful working concept.
2.3 Ideation

“What is a possible solution?” “What should be tested?” These crucial questions were answered based on the synthesis of the empathizing stage and the problem statement. Sequence-based diagrams were used to depict and describe the solutions [4] (Fig. 3).
Fig. 3. Sequence-based diagrams for the mission “Rover Maintenance and Retrieval” (adapted)
For modeling and simulation, we used Signavio Process Modeling [5]. Unlike physical modeling, such as making a scale copy of the AES, this computer-based tool provided an environment for observing process models while they are running, with the possibility of viewing them in 2D or 3D. The ability to analyze the model as it runs sets simulation modeling apart from other methods, such as those using Excel or linear programming. This model captured more detail than an analytical model, providing increased accuracy and more precise forecasting. It was less expensive, took less time than real experiments with real assets, and efficiently represented uncertainty in operations’ time and outcome. As such, we compressed or expanded time in the simulation to speed up or slow down the process in the AES.

2.4 Prototype
Heads-up displays (HUDs) in the aerospace field improve human perception in the control of complex machines and refine skills in the training of astronauts for extravehicular activity (EVA) and robotics operations [6]. They contribute to the astronaut’s actions and workload in space, and will be a key component in equipping the astronaut for the specified mission. All information required for smooth operation, such as mission information and mission tools, will be displayed. Information from physical and physiological sensors such as electrocardiography (EKG), heart rate variability (HRV), maximum oxygen consumption (VO2), pulse oximetry (POx), electrodermal activity (EDA), electroencephalography (EEG), and electromyography (EMG) will also be displayed. The software program Adobe XD was used to create the prototype (Fig. 4).
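As a rough illustration of how such sensor information might be bundled for HUD display, the sketch below defines a telemetry record for a subset of the listed signals. The field names, units, and alert thresholds are invented for illustration and are not part of the actual HuBAM design.

```python
from dataclasses import dataclass

@dataclass
class BiosensorReading:
    """One telemetry sample from the AWS sensors shown on the HUD.

    Field names mirror signals listed in the paper; units and thresholds
    are illustrative assumptions, not HuBAM specifications.
    """
    heart_rate_bpm: float        # derived from EKG
    hrv_ms: float                # heart rate variability
    vo2_ml_kg_min: float         # oxygen consumption estimate
    spo2_percent: float          # pulse oximetry
    eda_microsiemens: float      # electrodermal activity

    def alerts(self):
        """Flag values outside (assumed) nominal ranges for HUD display."""
        flags = []
        if not 40 <= self.heart_rate_bpm <= 180:
            flags.append("heart rate")
        if self.spo2_percent < 92:
            flags.append("SpO2")
        return flags

sample = BiosensorReading(72.0, 55.0, 35.0, 98.0, 2.1)
# sample.alerts() is empty for these nominal values
```

A record like this could be serialized once per sample and rendered on the physiological monitor screen of the HUD, with the alert list driving which readouts are highlighted.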
Fig. 4. Selected screenshots of the prototype for the HUD at different stages: starting screen (left), physiological monitor screen (top right) and equipped tools (bottom right).
2.5 Evaluation

Among the various methods available for usability assessment, the Self-Assessment Manikin (SAM) and the Cognitive Walkthrough were shortlisted and used to evaluate the prototype. The SAM nine-point scale was used to directly measure pleasure, arousal, and dominance in response to, and related to, the use and procedure process of the prototype HUD view. Figure 5 below shows the SAM figure with the respective scales. For the cognitive walkthrough, we had limited availability of expert users with knowledge and experience of the spacesuits used on previous missions, whose feedback carries weight. We followed through each predefined task and procedure that the user is expected to go through and identified specific patterns or required skills that could emerge as being critical. All the participants had normal vision. The participants had never seen the video before. The experiment was conducted in the evening in a closed room, and all participants were placed in a comfortable chair. Whenever they felt the need, all participants had access to the prototype and played with it multiple times.
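Since SAM yields a 1–9 rating on each of the pleasure, arousal, and dominance scales per participant, aggregating the responses is straightforward; the sketch below uses invented ratings purely for illustration of how such a summary could be computed.

```python
from statistics import mean

# Hypothetical SAM responses (1-9) per participant on the three scales.
# These values are invented for illustration, not the study's data.
sam_responses = [
    {"pleasure": 7, "arousal": 6, "dominance": 7},
    {"pleasure": 8, "arousal": 7, "dominance": 6},
    {"pleasure": 6, "arousal": 6, "dominance": 8},
]

def sam_summary(responses):
    """Mean rating per SAM dimension across participants."""
    dims = ("pleasure", "arousal", "dominance")
    return {d: mean(r[d] for r in responses) for d in dims}

summary = sam_summary(sam_responses)
# e.g. mean pleasure = 7.0 for the invented ratings above
```

High mean pleasure and dominance scores would correspond to the "pleasurable usage while remaining in control" interpretation reported in the conclusion.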
Fig. 5. SAM nine-point scale used to assess pleasure, arousal and control of the prototype
3 Conclusion and Future Work

It should be noted that a number of the features tested were conceived in the context of a fully functioning automated system but tested using a laptop-based simulation of a software prototype. As such, it was not possible to test the usability of interactive gestures, automated selection of mission tasks, or displays for equipment located on the suit. Analysis of the process of astronaut equipping by way of a sequence diagram (Fig. 3) illustrates the inherent operational complexity of the task. From a design perspective, the cognitive walkthrough and Self-Assessment Manikin demonstrate the learnability and user experience characteristics of our prototype. Findings from the cognitive walkthrough suggested required improvements in system observability. Per the internal focus group findings of the SAM, the initial prototype did, however, support pleasurable and arousing usage while promoting the perception that the user remained in control of the sequence. Future goals of this project include the refinement of the HUD interface, the automation of mission selection, the addition of salient sequence-progress displays, the addition of user position and orientation demonstration videos, and the provision of an augmented reality display which depicts the location of equipment in real time prior to the astronaut’s departure from the station. These improvements should maintain acceptable levels of excitement and arousal and promote the perception of user control in the equipping process.
Acknowledgements. This project was supported by the National Aeronautics and Space Administration, through the University of Central Florida’s NASA Florida Space Grant Consortium, and SPACE FLORIDA. The authors wish to acknowledge and thank in particular Captain Winston E. Scott, USN (ret.), for his astronautics expertise as a NASA astronaut.
References

1. Furr, P.A., Monson, C.B., Santoro, R.L.: Extravehicular activities limitations study. National Aeronautics and Space Administration (1988)
2. Meinel, C., Leifer, L.: Design Thinking: Understand, Improve, Apply. Springer, Heidelberg (2011)
3. MacLean, A., Young, R.M., Bellotti, V.M., Moran, T.P.: Questions, options and criteria: elements of design space analysis. Hum. Comput. Interact. 6, 201–250 (1991)
4. Cuffie, B., Bernard, T., Mehta, Y., Kaya, M., Scott, W.E., Stephane, L.: Proposed architecture of a sensory enhanced suit for space applications. In: 2018 AIAA SPACE and Astronautics Forum and Exposition (2018)
5. Signavio Process Manager: User Guide (12.6.0) (2018)
6. Aukstakalnis, S.: Practical Augmented Reality: A Guide to the Technologies, Applications, and Human Factors for AR and VR. Addison-Wesley, Boston (2016)
7. Wharton, C., Rieman, J., Lewis, C., Polson, P.: The cognitive walkthrough method: a practitioner’s guide. In: Nielsen, J., Mack, R.L. (eds.) Usability Inspection Methods, pp. 105–140. Wiley, Canada (1994)
The Role of Artificial Intelligence in Contemporary Medicine

Larisa Hambardzumyan, Viktoria Ter-Sargisova, and Aleksandr Baghramyan

Medical University named after St. Tereza, 54a Mashtots Ave, 0010 Yerevan, Armenia
[email protected]
Abstract. The use of artificial intelligence (AI) in medicine is aimed at primary diagnostics of the human body and at finding treatment approaches and solutions. While some scientists consider that AI is able to detect many complicated diseases at early stages, thus saving the lives of many patients, human reasoning methods in the field of medicine cannot be fully replaced by machines. This paper aims to find the optimal balance between human and AI participation in diagnostic and treatment decision processes. For this purpose, a simulation of diseases based on 100 anonymized real cases was generated to be diagnosed through independent and simultaneous AI and human analysis. As a result, the quantitative and qualitative outcomes were summarized and used to suggest a solution combining AI and human participation, aimed at eliminating the disadvantages and making the most effective use of the advantages of both sides.

Keywords: Artificial intelligence · Machine learning · Decision tree · Human factor · Medicine · Diagnostics · Decision management · Automation · Big data
1 Introduction

With the implementation of elements of computation in all aspects of our lives, in the era of globalization it is becoming crucial to process large volumes of data and information more efficiently and effectively than before; consequently, the technologies underlying AI (speech recognition, computer vision, media processing, machine learning (ML), decision management, natural speech/language synthesis, full automation) are becoming irreplaceable. The field of medicine and healthcare is one of the most sensitive and challenging spheres, where practitioners have to keep up with all the latest developments, in particular those in technological solutions. The ability to make reasonable assumptions by analyzing a given set of data is a fundamental attribute of intelligence, which, in turn, is an irreplaceable feature needed to diagnose, treat and even prevent a wide range of diseases. The purpose of this paper is to analyze the behavior of human medical practitioners and artificial intelligence under artificially created conditions where diagnosis and consecutive treatment prescription are required, in order to evaluate the level of possible threats and benefits of AI implementation in real hospital conditions [4].

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 172–176, 2020. https://doi.org/10.1007/978-3-030-39512-4_27
2 Methodology

With the purpose of analyzing AI and human behavior in the diagnostics and treatment of potential patients, several medical archives of clinical case reports amounting to more than 3,000 medical files were used for the random selection of 150 clinical files for further research. Three trial groups took part in the research. Group 1 consisted of 10 medical specialists representing different areas of medicine (2 specialists for each of 5 selected spheres: pediatrics, obstetrics, general surgery, cardiology, oncology); Group 2 represented the computer software controlled by 1 computer software specialist proficient in AI performance control; and Group 3 consisted of 5 medical specialists representing the same 5 spheres of medicine as in Group 1, each of them working in collaboration with the computer software applied in Group 2. Each of the 3 groups was given 50 clinical files to work on within a time period that was recorded. The purpose of this research was to identify and evaluate the impact of the usage of the computer software on the efficiency of medical diagnosis [5]. The computer software was developed to include diagnostic models of over 50 diseases, ranging from easily diagnosed ones to diseases such as tuberculosis and depression, which can also be misdiagnosed. The software enabled the operating specialist to analyze and classify all available symptoms to give an exact diagnosis. The process can be represented as a flowchart in the form of a decision tree [1, 2]. A sample decision tree is presented below (Fig. 1):
Fig. 1. The decision tree for hepatitis B predictions [6]
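A diagnostic flow of the kind shown in Fig. 1 can be sketched as a small hand-rolled decision tree. The symptoms, branch order, and diagnoses below are invented for illustration and do not reproduce the hepatitis B model from the figure.

```python
# Each internal node asks about one symptom; leaves carry a diagnosis.
# Symptoms, branch order, and labels are invented for illustration.
tree = {
    "question": "fever",
    "yes": {
        "question": "jaundice",
        "yes": {"diagnosis": "suspected hepatitis"},
        "no": {"diagnosis": "suspected influenza"},
    },
    "no": {"diagnosis": "no acute infection suspected"},
}

def classify(node, symptoms):
    """Walk the tree from the root until a leaf (diagnosis) is reached."""
    while "diagnosis" not in node:
        node = node["yes"] if symptoms.get(node["question"]) else node["no"]
    return node["diagnosis"]

# classify(tree, {"fever": True, "jaundice": True}) -> "suspected hepatitis"
```

A production system of the kind described in the text would hold many such trees (one per modeled disease) and, unlike this toy, would typically be learned from labeled case data rather than written by hand.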
The whole process of analysis by all three groups was performed based on the real-case patient symptoms through paper evidence only, with no involvement of real patients. It is important to mention that the selected medical cases had a high level of complication, in order to take more factors and functionalities of each tested group into account and to evaluate the results from different angles.
L. Hambardzumyan et al.
The major task of each tested group was to reach a collegial diagnostic decision for each patient case, followed by the respective treatment decision. The parameters taken into consideration were: the number of correct diagnoses, the number of correct treatment solutions, and the average duration of a single patient-case analysis [3].
3 Findings
The comparison of diagnosis and treatment simulation results shows that the collaborative work of human medical practitioners and artificial intelligence contributed the most to the number of correct diagnoses across the trial groups, as well as to improved results for the selected treatment options (Fig. 2).
Fig. 2. AI impact on the efficiency of diagnosis and treatment
The figure above shows that Group 1 (medical practitioners only) produced the lowest numbers of correct diagnoses and correct treatments, and its average duration of a single analysis was the longest of the three groups. Given its relatively low correctness ratios (although the difference was marginal), the slow pace of purely human activity with no AI participation can considerably affect treatment outcomes in cases where speed is crucial for saving a patient's life. Group 2 (AI participation only) showed average numbers of correct diagnoses and correct treatments, while its average duration of a single analysis was by far the shortest of the three groups (8 min in Group 2 vs. 38 and 26 min in Groups 1 and 3, respectively).
The Role of Artificial Intelligence in Contemporary Medicine
Group 3 (collaboration of AI and human medical practitioners) showed the highest numbers of correct diagnoses and correct treatments, while its average duration of a single analysis was intermediate, though closer to that of Group 1. Considering that the rate of correct diagnoses across the three groups varied from 84% to 94%, and the rate of correct treatments from 80% to 92%, the contribution of AI (Group 2) and AI/human collaboration (Group 3) to correctness was of medium significance, whereas for the speed of decision-making the AI component was the major factor. When the percentages of correct diagnoses and treatment decisions are related to the average duration of the decision-making process in each group, the results show that the highest efficiency was achieved in Group 3 (Fig. 3).
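The group-level ordering described above can be checked with a few lines of code. The durations (38, 8, 26 min) are those stated in the text; the per-group correctness rates below are illustrative values chosen within the reported 84–94% range, not figures taken from the paper:

```python
# Tabulating the group-level figures discussed above.
# Duration values are from the text; diagnosis rates are illustrative
# assumptions within the reported 84-94% range.
groups = {
    "Group 1 (humans only)": {"diagnosis_rate": 0.84, "avg_minutes": 38},
    "Group 2 (AI only)":     {"diagnosis_rate": 0.88, "avg_minutes": 8},
    "Group 3 (AI + humans)": {"diagnosis_rate": 0.94, "avg_minutes": 26},
}

fastest = min(groups, key=lambda g: groups[g]["avg_minutes"])
most_accurate = max(groups, key=lambda g: groups[g]["diagnosis_rate"])
print(fastest)        # → Group 2 (AI only)
print(most_accurate)  # → Group 3 (AI + humans)
```

Note that AI alone is fastest while the AI/human collaboration is most accurate; the paper's efficiency comparison (Fig. 3) weighs both aspects together.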
Fig. 3. Diagnostic efficiency
This finding supports the suggested thesis that collaboration between human medical practitioners and AI-based computer software can help achieve the highest patient-saving rate through quicker and more accurate decisions.
4 Conclusion
Based on the research results presented above, it can be concluded that human medical workers remain essential in situations where an artificial intelligence device cannot substitute for human advice and compassion, which help a medical practitioner relieve a patient's anxiety or depression. The machine is also unable to cope with disease cases involving new components or mutations that have never occurred before, so AI
cannot suggest new ways of treatment. At the same time, AI helps to eliminate the human factor that can lead to routine errors, and it can bring diagnostic processes to a new level of speed and automation, where speed and correct consideration of all underlying factors can often be decisive in saving a patient's life. AI "doctors" are also capable of continuously learning new medical knowledge, experience, and diagnostic processes. The computer software can draw on a large database of tens of millions of clinical cases, designed to be frequently updated with new cases and symptom matrices. This paper therefore suggests that further analysis be performed aimed at creating more detailed matrices for the synergy between human and artificial intelligence.
References
1. Murthy, S.K.: Automatic construction of decision trees from data: a multi-disciplinary survey. Siemens Corporate Research, Princeton, USA. https://cs.nyu.edu/~roweis/csc2515-2006/readings/murthy_dt.pdf
2. Quinlan, J.R.: Induction of decision trees. Mach. Learn. 1(1), 81–106. Kluwer Academic Publishers (1986). https://link.springer.com/content/pdf/10.1007/BF00116251.pdf
3. Henriksen, K., Brady, J.: The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual. Saf. (2013). https://qualitysafety.bmj.com/content/qhc/22/Suppl_2/ii1.full.pdf
4. Karimi, K., Hamilton, H.J.: Generation and interpretation of temporal decision rules. Department of Computer Science, University of Regina, Regina, Saskatchewan, Canada. https://arxiv.org/pdf/1004.3334.pdf
5. Quinlan, J.R.: Semi-autonomous acquisition of pattern-based knowledge. Basser Department of Computer Science, University of Sydney. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.472.7028&rep=rep1&type=pdf
6. Albu, A.: From logical inference to decision trees in medical diagnosis. In: E-Health and Bioengineering Conference (EHB), pp. 65–68 (2017)
Improving Policy-Capturing with Active Learning for Real-Time Decision Support

Bénédicte Chatelais1(✉), Daniel Lafond1, Alexandre Hains2, and Christian Gagné2

1 Thales Research and Technology Canada, Quebec City, Canada
{Benedicte.Chatelais, Daniel.Lafond}@ca.thalesgroup.com
2 Dép. Génie électrique et Génie Informatique, Université Laval, Quebec City, Canada
[email protected], [email protected]
Abstract. Thales Research and Technology Canada is developing a decision support system consisting of multiple classification models trained simultaneously online to capture experts' decision policies from their previous decisions. The system learns decision patterns from examples annotated by a human expert during a training phase of knowledge capture. Because of the small volume of labeled data, we investigated active learning, a machine learning technique that copes with the dilemma of learning with minimal resources by requesting the most informative samples in a pool given the current models. The current study evaluates the impact of using active learning instead of an uninformed strategy (e.g., random sampling) in the context of policy capturing, in order to reduce the annotation cost of the knowledge capture phase. This work shows that active learning has potential over random sampling for capturing human decision policies with a minimal number of examples and for significantly reducing annotation cost.

Keywords: Active learning · Machine learning · Policy capturing · Decision support · Operator assistance · Human-machine collaboration · Judgmental analysis · Expert system · Cognitive modeling · Classification
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 177–182, 2020. https://doi.org/10.1007/978-3-030-39512-4_28

1 Introduction
Policy capturing refers to methods that use statistical models or machine learning algorithms to model the decision-making patterns of humans [1]. As experts might not be aware of their decision rules, or might not be able to fully verbalize their professional domain knowledge [2, 3], policy capturing systems are promising for improving human-machine teaming. Thales Research and Technology Canada has developed a cognitive modeling and decision support system called Cognitive Shadow, based on artificial intelligence techniques. The solution aims at learning experts' decision patterns in real time in order to raise advisory warnings that notify operators of any mismatch between the expected decision inferred by the models and the actual
observed decision. The goal is to prevent human errors that can occur under time pressure, mental overload, fatigue, stress, or distraction. The Cognitive Shadow consists of multiple classification models trained simultaneously to capture experts' decision policies from their previous decisions. Because decision patterns vary from one human to another, no single model suits all experts. The key feature of the Cognitive Shadow therefore lies in training a pool of different algorithms, allowing dynamic selection of the best model over time [4–6]. Prior to real-time judgment analysis and actual decision support, the Cognitive Shadow must learn decision patterns from a few examples annotated by human experts during a short one-time training phase of knowledge capture. Importantly, the system does not require any pre-existing dataset or ground truth. The current study aims at supporting operators working in maritime patrol aircraft (MPA) in classifying vessels into three categories (allied, neutral, suspect) given a list of variables of interest. The goal of the project is to improve human-machine collaboration by taking advantage of the judgmental bootstrapping effect, which has been shown to reduce human error and, more interestingly, to increase the individual performance of the human-machine team [7, 8]. For the Cognitive Shadow to raise reliable warnings, the algorithms must be trained on data with good coverage of the domain of interest. Since each individual cannot annotate data from every possible situation, they should at least annotate a few examples from all classes in order to pre-train the models before the Cognitive Shadow system is allowed to actively display warnings. Once trained during the knowledge capture phase, the models keep learning and updating over time with every new decision.
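The advisory-warning behavior described above can be sketched as comparing the committee's expected decision with the operator's actual decision. The function names and toy rule-based "models" below are ours for illustration, not the actual Cognitive Shadow API:

```python
# Minimal sketch of the mismatch-warning idea: compare the committee's
# majority prediction with the operator's decision and flag disagreements.
# The "models" here are trivial stand-ins for trained classifiers.
from collections import Counter

def committee_prediction(models, features):
    """Majority vote over the committee's predictions."""
    votes = Counter(m(features) for m in models)
    return votes.most_common(1)[0][0]

def check_decision(models, features, operator_decision):
    """Return 'ok' or an advisory warning on model/operator mismatch."""
    expected = committee_prediction(models, features)
    if expected != operator_decision:
        return (f"warning: models expected '{expected}', "
                f"operator chose '{operator_decision}'")
    return "ok"

models = [lambda f: "neutral" if f["speed"] < 10 else "suspect"] * 3
print(check_decision(models, {"speed": 25}, "allied"))
```

Here the committee expects "suspect" for a fast contact, so an operator classifying it as "allied" would trigger an advisory warning rather than an automatic override.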
Hundreds of instances represent a large volume of data for the user, who has to annotate them manually (a time-consuming task), yet they are not sufficient for training machine learning algorithms efficiently. It is therefore essential to carefully select the most informative data from which the models will learn. Currently, the system presents the operator with a set of randomly sampled instances, with no guarantee that the set is informative enough to cover the whole space of interest (passive learning). Unlike passive learning from random examples, active learning (AL) can reduce the quantity of required data and improve model accuracy by automatically selecting the most informative instances to be labeled. For instance, AL can be used during the initial knowledge capture phase to let the algorithms request the data from which they want to learn, based on past queries and the responses (labels) to those queries. The current study evaluates the impact of using AL instead of random sampling during initial knowledge capture in the context of automatic policy capturing. With this approach, we aim to minimize the labeling time required from the experts, and thus improve the Cognitive Shadow's efficiency at modeling an expert's decision policy, while maximizing the models' accuracy. The experiment in this paper is conducted with a fixed number of training examples, but a reduction in the number of instances presented during the knowledge capture phase can also be considered. The main goal is to accelerate the learning process so that the Cognitive Shadow system becomes useful for decision support as quickly as possible (when a satisfactory reliability level is reached), while selecting a good set of
instances to label that cover the various situations that may be encountered, including rare instances seldom observed in real operations. Section 2 describes the methods used for applying AL in this context and the datasets used in this study. Section 3 presents the results of two experiments. Section 4 summarizes the main conclusions and proposes future research perspectives.
2 Method

2.1 Active Learning
AL refers to semi-supervised machine learning methods that actively select the input instance from which the model may learn the most. The instance is selected from a pool of unlabeled data based on a measure of utility, in order to maximize the learning phase by focusing training on the most informative data. The chosen instance is presented to an oracle to be annotated; in our use case, the oracle is the human expert. The model(s) learn from this newly labeled data point and then request a new instance from the unlabeled pool to improve their training. For the present purposes, two instance selection methods from the literature were applicable to the project: pool-based sampling and stream-based sampling [9]. In short, pool-based sampling evaluates and ranks all the instances in a large collection of unlabeled data (the pool) before selecting the most informative one. Stream-based sampling scans every data point in the pool sequentially, one by one, and decides either to query or to discard it based on its estimated information value. Although pool-based sampling is computationally more expensive given the ranking phase (which needs continuous reassessment as the models learn), we focused on this method to allow selection of the most informative data in the pool. Different query strategies (or heuristics) are used in the literature to evaluate the information level of instances in a pool [9]. A query strategy is a function that scores every instance in order to select the most informative one (i.e., the one involving the most uncertainty). Because the Cognitive Shadow trains different models simultaneously on the same data, and the expert's labeling time is limited, the selected instances must take all models into consideration and should be informative for all, or at least the majority, of them (depending on the chosen strategy).
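Pool-based sampling with an entropy-based query strategy over a committee can be sketched as follows. This is a stdlib-only illustration with toy probabilistic "models"; a real system would use trained classifiers:

```python
# Sketch of pool-based selection with an entropy query strategy over a
# committee: average the committee's class probabilities per instance
# and query the instance whose averaged distribution has highest entropy.
import math

def consensus_entropy(prob_dicts):
    """Entropy of the committee's averaged class probabilities."""
    classes = prob_dicts[0].keys()
    avg = {c: sum(p[c] for p in prob_dicts) / len(prob_dicts) for c in classes}
    return -sum(p * math.log(p) for p in avg.values() if p > 0)

def select_query(pool, committee):
    """Rank every unlabeled instance; return the most informative one."""
    return max(pool, key=lambda x: consensus_entropy([m(x) for m in committee]))

# Toy committee: each "model" maps an instance to class probabilities.
committee = [
    lambda x: {"allied": 0.9, "suspect": 0.1} if x < 5 else {"allied": 0.5, "suspect": 0.5},
    lambda x: {"allied": 0.8, "suspect": 0.2} if x < 5 else {"allied": 0.4, "suspect": 0.6},
]
pool = [1, 2, 7, 9]
print(select_query(pool, committee))  # picks an instance the committee is unsure about
```

Here instances 7 and 9 leave the committee split near 50/50, so one of them is queried; instances 1 and 2, on which the committee confidently agrees, are never selected.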
The query-by-committee strategy allows each model in a committee to vote on the labeling of instances; the instance that generates the greatest disagreement is selected as the query of interest. Here we used consensus entropy sampling, which consists of averaging the class probabilities over the models of the committee. The instance with the largest entropy of the averaged probabilities is then selected and presented to the operator [9].

2.2 Datasets
We evaluated the impact of AL in two phases. The first phase evaluates AL strategies on a benchmark dataset. The goal is to assess technical choices, to demonstrate the advantage of AL over random sampling independently of the classification task of interest, and to validate whether we can achieve performance equivalent to
the literature with a reduced number of queries. The second phase validates the conclusions of the first phase on a synthetic dataset representative of the classification problem of interest, simulated on the high-fidelity Thales Airborne Surveillance Mission System (AMASCOS) simulator.

Benchmark Dataset: The Mushroom dataset is an openly available benchmark dataset (https://archive.ics.uci.edu/ml/datasets/mushroom) in which the task is to classify mushrooms as poisonous or not. The dataset is composed of 8124 instances characterized by 22 variables, drawn from 23 different species. We used one-hot encoding on the categorical features, then split the dataset into train and test subsets.

Experimental Dataset: The experimental dataset refers to the AMASCOS maritime patrol task (radar contact classification by a tactical coordinator). Surface vessels are characterized by fourteen features, and the goal is to classify each vessel as "Allied", "Neutral", or "Suspect". The features are a mix of categorical and numerical attributes: platform type, speed, speed change, stationary, length, friend list, AIS (on-off), nearest track distance, cluster size, sea/coastal proximity, interception, sea lane deviation, heading change, and nationality. We used one-hot encoding on the categorical variables. We generated a synthetic AMASCOS dataset representative of all possible combinations of features, excluding unrealistic combinations. In total, this represents a pool of more than 3 million possible examples. For experimental purposes, we simulated an expert policy (based on domain expert feedback) and generated the labels for each vessel. The class distribution was unbalanced (approximately 5% of vessels classified as allied, 12% as neutral, and 83% as suspect).
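One-hot encoding of such mixed records can be sketched in a few lines (in practice, libraries such as pandas' `get_dummies` or scikit-learn's `OneHotEncoder` are used; the vessel fields below are simplified placeholders, not the actual fourteen AMASCOS features):

```python
# Sketch of one-hot encoding mixed categorical/numeric records.
# Categorical values become 0/1 indicator columns; numeric values pass
# through unchanged.
def one_hot(records, categorical_keys):
    categories = {k: sorted({r[k] for r in records}) for k in categorical_keys}
    encoded = []
    for r in records:
        row = []
        for k, values in categories.items():
            row.extend(1 if r[k] == v else 0 for v in values)
        # numeric features pass through unchanged
        row.extend(v for k, v in r.items() if k not in categorical_keys)
        encoded.append(row)
    return encoded

vessels = [
    {"platform": "cargo",  "ais": "on",  "speed": 12.0},
    {"platform": "patrol", "ais": "off", "speed": 30.0},
]
print(one_hot(vessels, ["platform", "ais"]))
```

Each categorical feature expands into one indicator column per observed value, which is what makes the encoded pool large when all feature combinations are enumerated.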
Because of the computational time needed to rank every instance in the pool and select the most informative one, we reduced the pool to a subset of 500 000 combinations with an equally balanced representation of each class.

2.3 Metrics
To evaluate the advantage of AL over passive learning (i.e., the random sampling baseline), we calculated the accuracy of the committee on the test dataset (a held-out labeled dataset not used for training or optimizing the models) after every training iteration. In practice, this allows comparing the two sampling strategies and the number of queries required to reach a target accuracy.

2.4 Training Process
The committee is composed of six heterogeneous classifiers (the models implemented in the Cognitive Shadow to learn the experts' decision patterns): a decision tree, k-nearest neighbors (KNN), a support vector classifier (SVC), a neural network (MLP), a naïve Bayes classifier, and logistic regression. Since it is impractical to start AL without data from all classes, we use a seed sample (n = 4 for the benchmark dataset and n = 30 for the AMASCOS dataset) with balanced stratification of every class. Data are
selected randomly in the pool. After every querying phase, queried instances are removed from the unlabeled pool and added to the labeled training dataset on which the models learn. Because the labeled pool is relatively small, the models are trained and optimized using k-fold cross-validation (k = 2 for the benchmark dataset and k = 10 for the AMASCOS dataset). We evaluate model performance for active versus passive learning by measuring accuracy on the test dataset. The knowledge capture ends after a predefined number of queried instances.
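The k-fold evaluation used when the labeled pool is small can be sketched as follows. The toy majority-class "trainer" stands in for the six heterogeneous classifiers of the actual system:

```python
# Sketch of k-fold evaluation for one committee member (stdlib only).
# Each fold is held out once while a model is trained on the rest.
def k_fold_accuracy(data, labels, train_fn, k=2):
    """Average held-out accuracy over k interleaved folds."""
    folds = [list(range(i, len(data), k)) for i in range(k)]
    scores = []
    for held_out in folds:
        train_idx = [i for i in range(len(data)) if i not in held_out]
        model = train_fn([data[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct = sum(model(data[i]) == labels[i] for i in held_out)
        scores.append(correct / len(held_out))
    return sum(scores) / k

def train_majority(xs, ys):
    """Toy 'training': always predict the majority class."""
    majority = max(set(ys), key=ys.count)
    return lambda x: majority

data = [1, 2, 3, 4, 5, 6]
labels = ["a", "a", "a", "a", "b", "b"]
print(k_fold_accuracy(data, labels, train_majority, k=2))
```

Averaging over folds gives a more stable accuracy estimate than a single split when only a handful of labeled instances are available, which is exactly the situation during knowledge capture.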
3 Results
Figure 1 illustrates the accuracy of the committee achieved with AL (green curve) and passive learning (blue curve), measured on the Mushroom (Fig. 1, left) and AMASCOS (Fig. 1, right) test datasets. We first assessed the performance on the Mushroom test dataset over 200 training iterations, one after every new query. AL reached an accuracy of 99.9% with 153 fewer queries than random sampling and achieved perfect prediction with fewer than 40 labeled examples, while the committee's accuracy using random sampling kept oscillating around 98.8% after 200 queries. Next, we evaluated AL on the AMASCOS synthetic dataset, with 500 iterations of query selection, each followed by a training phase. The results show that AL reached an accuracy of 95.0% after 47 examples, compared with the 225 examples random sampling required to reach that level, a reduction of 178 annotations for the human expert. After 500 iterations, random sampling reached a final accuracy of 96.6%, while AL reached a near-perfect 99.7%, an absolute gain of 3.1%.
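The annotation savings reported above follow directly from the query counts:

```python
# Checking the reported AMASCOS annotation savings.
random_queries, al_queries = 225, 47      # queries to reach 95.0% accuracy
saved = random_queries - al_queries
reduction = saved / random_queries
print(saved)                              # → 178
print(f"{reduction:.0%}")                 # → 79%

gain = 99.7 - 96.6                        # final accuracies after 500 iters
print(f"{gain:.1f}")                      # → 3.1
```

The 79% figure matches the reduction quoted in the discussion section.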
Fig. 1. (left) Accuracy of the committee measured on the Mushroom test dataset for active (green curve) and passive (blue curve) learning. (right) Accuracy of the committee achieved on the AMASCOS test dataset for active (green curve) and passive (blue curve) learning.
4 Discussion and Perspectives
This study shows that AL outperforms random sampling and reduces the annotation time needed for knowledge capture. In fact, the quantity of annotations required to reach an accuracy of 99% on the benchmark dataset was reduced by 86%, and a reduction of 79% was obtained on the AMASCOS test dataset to reach an accuracy of 95%. Although these results demonstrate the potential of AL to reduce annotation cost, further work is required to evaluate other query strategies. Furthermore, it is possible that a homogeneous committee would converge faster than a heterogeneous one; we will therefore evaluate AL with a Random Forest classifier (a homogeneous committee of decision trees) and compare its performance with the current heterogeneous committee. We also plan to evaluate a different type of sampling strategy from the field of design of experiments that could provide a more challenging baseline. Finally, human-in-the-loop experiments must be performed to validate the benefit of using AL in real-world situations, where the "oracle" involves human variability, potentially introducing noise into the investigated process.
Acknowledgments. We are thankful to Frédéric Morin and his team for software development support and for integrating the current work into the Cognitive Shadow system. This work was supported by a MITACS and PROMPT-Québec R&D partnership grant awarded to Christian Gagné, and by financial support from Thales Canada.
References
1. Cooksey, R.W.: Judgment Analysis: Theory, Methods, and Applications. Academic Press, San Diego (1996)
2. Berry, D.C.: The problem of implicit knowledge. Expert Syst. 4, 144–151 (1987). https://doi.org/10.1111/j.1468-0394.1987.tb00138.x
3. Ericsson, K.A., Simon, H.A.: Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge (1984)
4. Lafond, D., Tremblay, S., Banbury, S.: Cognitive shadow: a policy capturing tool to support naturalistic decision making (2013). https://doi.org/10.1109/cogsima.2013.6523837
5. Lafond, D., Labonté, K., Hunter, A., Neyedli, H., Tremblay, S.: Judgment analysis for real-time decision support using the cognitive shadow policy-capturing system. In: Ahram, T., Taiar, R., Colson, S., Choplin, A. (eds.) IHIET 2019. Advances in Intelligent Systems and Computing, vol. 1018, pp. 78–83. Springer, Cham (2020)
6. Lafond, D., Vallières, B.R., Vachon, F., Tremblay, S.: Judgment analysis in a dynamic multitask environment: capturing non-linear policies using decision trees. J. Cogn. Eng. Decis. Mak. 11, 122–135 (2017)
7. Armstrong, J.S.: Judgmental bootstrapping: inferring experts' rules for forecasting. In: Principles of Forecasting: A Handbook for Researchers and Practitioners, pp. 171–192. Kluwer, Norwood (2001)
8. Karelaia, N., Hogarth, R.: Determinants of linear judgment: a meta-analysis of lens model studies. Psychol. Bull. 134(3), 404–426 (2008)
9. Settles, B.: Active Learning Literature Survey. Technical report, University of Wisconsin–Madison (2010)
Task Measures for Air Traffic Display Operations

Shi Yin Tan1(✉), Chun Hsien Chen1, Sun Woh Lye1, and Fan Li2(✉)

1 School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Ave, Block N3, Singapore 639798, Singapore
[email protected], {MCHchen,MSWLYE}@ntu.edu.sg
2 Fraunhofer IDM Centre, Nanyang Technological University, 50 Nanyang Ave, Block NS1, Singapore 639798, Singapore
[email protected]
Abstract. With rising growth in air traffic globally, advanced technologies are being developed to aid ATCOs in the management and control of a foreseeably denser airspace. Holding stack management, a potential challenge to ATCOs especially during heavy traffic congestion owing to weather and runway conditions, is expected to be needed more frequently. To mitigate this challenge, the use of 3D displays has been suggested. This paper examines the performance impacts of adopting 3D instead of 2D radar displays with regard to visual search and relative vertical position identification. Observations relating to the increases in stress and workload perceived by the participants are also made.

Keywords: Human performance · Task measures · Perspective changes in dimensional representation · Stress and workload
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 183–189, 2020. https://doi.org/10.1007/978-3-030-39512-4_29

1 Introduction
Affordable commercial air travel has sparked rapid growth in the aviation industry in recent years [1]. Managing and controlling air traffic effectively and efficiently is becoming more important than ever. Given that the primary role of an air traffic controller (ATCO) is to guide and direct air traffic flow, an ATCO is presented with a vast amount of flight data daily and must process these data to make appropriate decisions; the workload depends on the number of flights within the controller's jurisdiction [2]. Several advanced technologies have therefore been implemented to reduce the workload of ATCOs, such as conflict resolution advisory aids [3], 3D radar displays [4], and adaptive fatigue management [5]. Nevertheless, implementing these novel technologies substantially changes the interactions between humans and systems; in other words, the tasks and operational procedures of ATCOs will change with the introduction of advanced technologies. In a conventional ATC setup, screens display a plan view of the airspace, with vertical separation of aircraft denoted by altitude values next to the aircraft markers. In recent years, various studies on the use of 3D radar displays have suggested that a change in perspective can help to improve the
ATCOs' spatial awareness of traffic and terrain, and thereby better navigate aircraft in the vertical plane [6]. Interface congestion, or cluttering, is also a problem with 2D radar displays during operations such as holding stack management [7]. In holding stack management, aircraft are tasked to fly a preassigned route while waiting for clearance to land. This route normally takes the form of an oval circuit, with multiple aircraft following the same route separated vertically. The close horizontal proximity on screen of aircraft carrying out a holding procedure poses problems to an ATCO viewing a 2D radar display, owing to the overlapping of flight labels and markers. It has been suggested that 3D radar displays can help reduce the cognitive load of ATCOs by reducing interpretative tasks, as a better and clearer mental picture can be derived from such displays. This is especially so when managing congested airspace or holding stacks [6]. Furthermore, alleviating the congestion of markers using 3D radar displays can help ATCOs make better judgements on vertical arrangements. However, little is reported on the effects of 3D displays on ATCOs' acceptance and performance. This paper investigates performance measures relating to accuracy and task completion time for 2D and 3D display interfaces in an ATC setting.
2 Methodology

2.1 Experimental Design
A laboratory-based experiment comprising 16 university-level student subjects was conducted to study the two measures of accuracy and completion time when performing various scenario tasks using 2D and 3D displays. Each exercise consists of a set of eight scenarios. Accuracy is based on the number of correct answers provided by the participants when queried. Task completion time is based on recorded timestamps capturing the start of the task until the final mouse click that signals the end of the exercise. In addition, NASA-TLX questionnaires with a Likert-type rating scale (0–9) were used to measure the participants' perceived mental workload and stress when undertaking the various scenario tasks. The experiment was conducted using a real-time simulator, the NLR ATM Research SIMulator (NARSIM), to generate and simulate air traffic scenarios in real time. The simulated scenarios were displayed on two display monitors. The scenarios used in this study are based on holding stack management, which occurs when multiple aircraft, upon the ATCO's instructions, are required to fly in a holding pattern at separate altitudes. The 2D and 3D visual interfaces used in this experiment are shown in Fig. 1.

2.2 Experimental Procedure
Figure 2 shows the experimental flow where participants would first be required to undergo several calibration procedures in relation to the EEG headset and eye tracker
Fig. 1. 2D visual orientation (Left), 3D visual orientation (Right)
that would be used in this exercise. This is then followed by a training period in which participants familiarize themselves with the experimental environment by running through several simulated scenarios.
Fig. 2. Experimental flow of exercise to be conducted
Once familiar, participants commence the actual experiment, which comprises 8 test scenarios of varying intensity and dimensional information representation, as highlighted in Fig. 3.
Fig. 3. A set of eight scenarios to be performed
The procedure within each scenario consists of two phases, as follows: (a) Memory Phase: Participants were first asked to memorize key flight details, such as the callsigns and altitudes of the aircraft markers presented in the left
display for 10 s before the screen is turned off. For example, in Scenario 1 the left display presents a particular airspace with three dots denoting the flights, with their callsigns and flight levels highlighted in the 2D orientation.
Test Phase: The right display is then activated, showing the same airspace and flight activity and information. Participants are then required to carry out two tasks, namely:
T1: a visual search task in which one must locate the aircraft markers in alphabetical order based on their callsigns.
T2: relative position identification, in which one must select the aircraft at the higher altitude out of two highlighted flight markers.
(b) The participant gives his/her answer for T1 and T2. Each (Memory and Test) phase pair is repeated 15 times per scenario per participant.
(c) At the end of the scenario, the participant fills in a set of NASA-TLX questionnaires measuring the perceived mental workload and stress for that scenario.
(d) A 5-min break is then instituted before the commencement of the next scenario. This process repeats until a participant completes all eight scenarios.
It is to be noted that different combinations of 2D and 3D orientation representation were adopted in the first four scenarios. Scenarios 5–8 are similar to scenarios 1–4 except that they function at a higher intensity level, with the number of aircraft markers increased from 3 to 4. These intensity values were chosen because they represent the limits of spatial working memory in young healthy adults: a young adult has been found capable of maintaining only 3 to 4 simple visual objects in working memory [8, 9].
3 Results and Discussion

3.1 Response Accuracy
Accuracy of response was based on the answers provided by the participants for the two test tasks (T1 and T2) in each scenario. For each test task, 240 responses were collected across all participants. Figure 4 shows the mean percentage of correct responses over the total number of participants' responses for each test task in the various scenarios. To evaluate situations having low and high numbers of markers, scenarios 1 and 5 (2D > 2D) are used as references. Two-tailed paired t-tests were then conducted between these references and the other three display combinations (3D > 2D, 3D > 3D, and 2D > 3D) to assess the significance of the differences; a p value below 0.05 is regarded as significant, and a result above it as null. In the visual search task (T1), the t-test results showed significant differences in accuracy for the 3D > 2D scenarios (2 and 6) with respect to the 2D > 2D scenarios (p = 0.0205 and p = 0.0007 for the low (3 markers) and high (4 markers) intensity levels, respectively). The means of Scenario 2 (0.917) and Scenario 1 (0.996) indicate a significant drop in accuracy in performing visual search tasks. This is also seen at the higher intensity level, where a lower accuracy of 0.842 for
Task Measures for Air Traffic Display Operations
187
Fig. 4. Average task accuracy
scenario 6 were registered compared to of 0.946 for scenario 5. One possible reason is that in scenario 2 and 6, the change in visual projection may have forced the participants to reorient the memorized airspace information which may be a factor for the decrease in accuracy. Furthermore, participants performed the worse in terms of accuracy when the 3D orientation is used for the memory phase. This coincides with a study that found significant detrimental effects of using 3D orientation whilst performing visual search tasks [10]. In identifying vertical relative positioning tasks (T2), all other scenarios register significant difference except for 3D > 2D at low intensity level (p = 4.506). Accuracy for T2 was higher in scenarios where the dimensional representation is either upgraded from 2D > 3D or at 3D > 3D orientation (Scenarios 3, 4, 7, 8). It is observed that introduction of a depth cue in the 3D orientation allowed participants to visualize altitude separation of aircraft better thereby greatly enhanced performance in T2. Participants engaged in Scenario 6 (3D > 2D) fared the worst on average. This would be due to a combination of perspective reorientation and engagement of a more complicated 3D display in the memory phase. 3.2
Task Completion Time
For task completion time, the measure runs from the start of the test phase to the final click recorded in that phase. The timings were recorded in milliseconds. As for response accuracy, average values were obtained over the 240 responses across all participants for the T1 and T2 tasks and are presented in Fig. 5. These average values are based only on trials in which the participants completed their tasks correctly.
Fig. 5. Average completion time
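The significance values reported for completion time follow the same recipe as in Sect. 3.1: a two-tailed paired t-test against the 2D > 2D reference. A minimal stdlib sketch, where the function and the timing data are illustrative, not the study's:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Two-tailed paired t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    with n - 1 degrees of freedom; the p-value is then obtained from a
    t-distribution table or a statistics library."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d))), len(d) - 1

# Made-up completion times (ms): a 2D > 2D reference vs. a dual-orientation
# scenario for the same (hypothetical) participants.
reference = [1900, 2100, 1850, 2000, 1950]
candidate = [2150, 2300, 2050, 2250, 2100]
t, df = paired_t(candidate, reference)
print(round(t, 2), df)
```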
In the visual search task (T1), values for the 3D > 3D scenarios (scenarios 3 and 7) were found to be non-significant (p = 0.7638 and p = 0.068). This could be owing to the single orientation setup in these scenarios. For scenarios involving dual orientation displays (Scenarios 2, 4, 6, 8), participants completed the tasks on average more slowly than with a single orientation setup, at both low and high intensity levels. For vertical relative positioning identification (T2), the values for all other scenarios were found to be significant except for Scenario 2 (p = 0.3468). Scenarios which utilized the 3D orientation for the test phase (3, 4, 7, 8) registered faster completion times on average, as the 3D orientation display provides a better representation of relative vertical positioning. However, in scenario 6, a longer completion time was observed compared to scenario 5 (2151.981 ms vs. 1905.397 ms). This may be attributed to the higher complexity of the 3D representation during the memory phase.

3.3 Self-task Evaluation
For the participants' self-evaluation, NASA-TLX questionnaires were used. Each participant was required to rate the perceived stress and workload at the end of each scenario. Mean scores across all participants for each scenario, along with the respective two-tailed t-test p-values relative to the 2D > 2D scenario of each intensity level, are shown in Fig. 6.
Fig. 6. Participants’ questionnaire scores on perceived stress and workload
From Fig. 6, a significantly higher mean is registered for scenarios 2 and 6 when compared to the 2D > 2D scenarios, indicating that a higher level of stress was perceived by the participants. Notable p-values of p = 0.0278 and p = 0.0032 for scenarios 2 and 6, respectively, were also registered, which indicate a significant difference between the compared scenarios. On workload, the mean scores of the participants show a significant difference for scenario 2 and scenarios 6 and 7 when compared with scenarios 1 and 5, respectively. This is also reflected in the p-values (< .05).

Furthermore, regardless of the eye tracker, cycle time was negatively correlated with incident detection performance (r = −.28, p = .042) and strongly associated with scanlag (r = .86, p < .001).
4 Discussion

The present study aimed at comparing two eye tracker systems integrated in the context of the Scantracker gaze-based intelligent support system (see [7]). Contrary to our hypothesis, the mobile system led to poorer gaze detection and fewer measures of inspection. Nonetheless, measures of surveillance did not significantly vary across the two systems. Measures of cycle time collected by both eye trackers were also related to detection performance, thus supporting the operational principle of the Scantracker (see Table 2).

Table 2. Measures of surveillance as a function of the type of eye tracker

Surveillance measures       Fixed             Mobile
Detection accuracy (%)      19.55 (11.70)     23.09 (10.59)
Detection time (s)          27.18 (9.44)      27.81 (8.07)
Perceived workload          5.42 (0.91)       5.22 (1.08)
Scanlag (10-s time bins)    3.91 (2.20)       4.05 (1.86)
Cycle time (s)              145.14 (102.06)   122.24 (63.30)
Differences in valid gaze data between the two types of device might be mainly caused by situations in which environmental markers were undetected (see the unrecognized marker in the upper right of Fig. 1). Because integration of the 3D environment was based on these markers, the decision-support tool could not always infer the gaze position within the visual scene, which led to invalid eye data. Still, our results suggest that a decision-support tool that relies on the live processing of eye measures can be implemented despite a significant amount of missing data. The gaze-based measure of cycle time computed with the Scantracker can even be used to predict incident detection accuracy (longer time being related to poorer performance). Implementing wearable systems could be an asset for researchers interested in post-hoc eye movement analysis and for practitioners aiming at implementing gaze-based decision support tools for security surveillance [7, 8]. First, mobile eye trackers do not impede operators' movements. Second, integration of the 3D environment can be done quite easily using cost-free open source computer vision solutions. Third, mobile devices are often cheaper than fixed ones (for multi-screen contexts) and, despite their accuracy limits [12], represent cost-efficient tools. Finally, once the gaze detection algorithm is developed, implementation of this system can be performed easily as long as visual markers are detectable [11]. Still, using a mobile tool may incur some drawbacks. First, head-mounted devices are sometimes deemed intrusive and uncomfortable. Because security surveillance operators often work long shifts [13], wearing eye-tracking glasses for many hours may become tedious. Second, because the system highly depends on the detection of visual markers, images of the markers must be clear and completely visible to be detected.
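The role of the markers can be illustrated with a small sketch: once a computer-vision library has estimated a homography from the detected marker corners, gaze points are projected from scene-camera coordinates into display coordinates; when the markers (and hence the homography) are missing, the sample is invalid. All names and numbers below are hypothetical:

```python
def map_gaze(gaze, H):
    """Apply a 3x3 homography to a 2D gaze point; return None when no
    homography is available (markers undetected, so the sample is invalid)."""
    if H is None:
        return None
    x, y = gaze
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Identity homography stands in for one estimated from marker corners.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
samples = [((0.4, 0.6), IDENTITY), ((0.5, 0.5), None)]  # second: markers lost
mapped = [map_gaze(g, H) for g, H in samples]
valid_ratio = sum(m is not None for m in mapped) / len(mapped)
print(valid_ratio)
```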
Low-performance mobile systems (in resolution and capture integration time) may collect blurry images, which may hamper detection by the computer vision algorithm. From an NSEEV (noticing - salience, expectancy, effort, value) model perspective [14], visual alerts provided by the Scantracker tool may help to overcome the effort required to move one's gaze towards camera feeds in which incidents are scarce, by increasing their saliency [7]. Moreover, according to [5], gaze-based decision-support systems deployed in visual search contexts can decrease cognitive load, as they provide cues that alleviate demands on the user's memory system. Since those benefits are contingent on the decrease in effort (and cognitive load), using a fixed device that may force operators to limit their movements could reduce the positive impact of the decision-support system, as further effort may be required to stay in a specific zone. Favoring mobile technologies would allow surveillance operators to move freely and concentrate on their task, without having to think about their position. Although no impact was observed on the measures of surveillance, the mobile technology used in the current study still led to poorer data validity. Future studies could aim at implementing this mobile gaze-aware system using augmented reality tools equipped with eye-tracking capability (e.g., the new Microsoft HoloLens 2), as this may remove the necessity of environmental markers and of interfacing with the surveillance system. This would ensure better and easier environment detection while still allowing the user to move freely, without adding any cost in cognitive effort.

Acknowledgments. This project was supported by a grant from the Innovation for Defence Excellence and Security (IDEaS) program and by financial support from Mitacs Canada. We are grateful to all the students involved in the data collection and to the financial and in-kind contributions of Thales Research and Technology Canada.
References

1. Chérif, L., Wood, V., Marois, A., Labonté, K., Vachon, F.: Multitasking in the military: cognitive consequences and potential solutions. Appl. Cogn. Psychol. 32, 429–439 (2017)
2. Piza, E.L., Welsh, B.C., Farrington, D.P., Thomas, A.L.: CCTV surveillance for crime prevention: a 40-year systematic review with meta-analysis. Criminol. Public Policy 18, 135–159 (2019)
3. Hodgetts, H.M., Vachon, F., Chamberland, C., Tremblay, S.: See no evil: cognitive challenges of security surveillance and monitoring. J. Appl. Res. Mem. Cogn. 6, 230–243 (2017)
4. van Voorthuijsen, G., van Hoof, H., Klima, M., Roubik, K., Bernas, M., Pata, P.: CCTV effectiveness study. In: Proceedings of the Carnahan Conference on Security, pp. 105–108 (2005)
5. Taylor, P., Bilgrien, N., He, Z., Siegelmann, H.T.: EyeFrame: real-time memory aid improves human multitasking via domain-general eye tracking procedures. Front. ICT 2, 17 (2015)
6. Morozkin, P., Swynghedauw, M., Trocan, M.: An image compression for embedded eye-tracking applications. In: IEEE International Symposium INISTA (2016)
7. Tremblay, S., Lafond, D., Chamberland, C., Hodgetts, H., Vachon, F.: Gaze-aware cognitive assistant for multiscreen surveillance. In: International Conference on Intelligent Human Systems Integration, pp. 230–236. Springer, Cham (2018)
8. Hodgetts, H., Chamberland, C., Latulippe-Thériault, J.-D., Vachon, F., Tremblay, S.: Priority or parity? Scanning strategies and detection performance of novice operators in urban surveillance. In: Proceedings of the Human Factors and Ergonomics Society, vol. 62, pp. 1113–1117 (2018)
9. St-John, M., Risser, M.R.: Sustaining vigilance by activating a secondary task when inattention is detected. In: Proceedings of the Human Factors and Ergonomics Society, vol. 23, pp. 155–159 (2009)
10. Vachon, F., Vallières, B.R., Suss, J., Thériault, J.-D., Tremblay, S.: The CSSS microworld: a gateway to understanding and improving CCTV security surveillance. In: Proceedings of the Human Factors and Ergonomics Society, vol. 60, pp. 265–269 (2016)
11. Funke, G., Greenlee, E., Carter, M., Dukes, A., Brown, R., Menke, L.: Which eye tracker is right for your research? Performance evaluation of several cost variant eye trackers. In: Proceedings of the Human Factors and Ergonomics Society, vol. 60, pp. 1240–1244 (2016)
12. MacInnes, J.J., Iqbal, S., Pearson, J., Johnson, E.N.: Wearable eye-tracking for research: automated dynamic gaze mapping and accuracy/precision comparisons across devices. Unpublished manuscript, 28 June 2018
13. Keval, H., Sasse, M.A.: "Not the usual suspects": a study of factors reducing the effectiveness of CCTV. Secur. J. 23, 134–154 (2010)
14. Steelman, K.S., McCarley, J.S., Wickens, C.D.: Modeling the control of attention in visual workspaces. Hum. Factors 53, 142–153 (2011)
Detecting Impulsive Behavior Through Agent-Based Games

Alia El Bolock1,2, Ahmed Ghonaim1, Cornelia Herbert2, and Slim Abdennadher1

1 German University in Cairo, Cairo 11432, Egypt
[email protected]
2 Department of Applied Emotion and Motivation Psychology, Ulm University, 89081 Ulm, Germany
Abstract. Impulsiveness is a trait that guides human behavior. People who score high on self-report measures of impulsiveness are characterized by unplanned risky behavior and acting without foresight. Character Computing and Automatic Personality Recognition aim at detecting characteristics and traits from different cues without relying on self-assessment measures. An individual's behavior during agent-based games is one such cue. In this work, we aimed to detect impulsiveness through two agent-based games: Tic Tac Toe and Colored Trails. The developed agent estimated impulsiveness using Pearson correlation and linear regression with an accuracy of 77% in comparison to self-report questionnaires.

Keywords: Impulsiveness · Personality · Character Computing · Agent-based games · Tic Tac Toe · Colored Trails · Regression
1 Introduction

Impulsiveness is a trait that strongly drives behavior. It is one of the risk factors that contribute to impairments in mental health [12] and to disorders such as addictive behavior [3] and obesity [13]. People who score high on self-report measures of impulsiveness often have problems in decision making and in the planning of behavior in general. In addition, they show decreased sensitivity to negative consequences of behavior, often acting without foresight (see [10]). Like all other personality traits, impulsiveness is measured via self-report questionnaires. The TCI [10] is one such measure, which is not exclusively related to impulsiveness but measures it as one of the facets belonging to a hierarchy of "top-level" traits. Self-report measures, however, are challenged by a number of problems, including the self-report bias [14]. Automatic Personality Recognition (APR) [16] and Character Computing [2, 7, 19, 20] tackle this problem by investigating different ways of recognizing the personality traits and other defining factors of an individual by analyzing individual behavior in different situations (e.g. assessing an individual's social media behavior). The aim of this paper is to propose an APR technique by presenting different agent-based gaming scenarios for detecting impulsiveness. We investigate whether the construct of impulsiveness can be automatically detected from the interactions of an individual within certain gaming scenarios. We implement two games (Colored Trails (CT) and Tic Tac Toe) within an environment aimed at investigating impulsive behavior in the player's interactions. Both games have been used in previous studies to predict certain personality traits [1, 15]. Akin to those studies, in the present study the behavior during both games was recorded and analyzed by the agent to form an estimate of the player's impulsiveness. The resulting value was then compared to the player's measured impulsiveness from self-report questionnaires; the estimated impulsiveness was found to be highly correlated with the measured one, and a corresponding linear regression model was fitted. In the following, the design and results of the present study are described in detail (Sects. 3 and 4) on the basis of related previous work (Sect. 2).

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 208–213, 2020.
https://doi.org/10.1007/978-3-030-39512-4_33
2 Related Work

There has been a lot of work towards automatic personality recognition. One popular approach is mapping behavioral characteristics extracted from smartphone data or social media to self-reported personality traits [17]. One heavily investigated indicator of personality traits is behavior in games. The motivation behind online game playing and its relation to personality are investigated in [11]. The relation between personality and the physiological effects on an individual while playing computer games is discussed in [9]. In [4], the relationship between personality traits and satisfaction in online game players has been investigated using the Big Five model. Of particular relevance to this paper is the prediction of personality traits from Tic Tac Toe and CT. [15] is an example of formative and psychological factor assessment of students through Tic Tac Toe game playing. In [8], CT was used to find correlations between behavior and reward dependence. Different ways to analyze strategies and behaviors of players in CT are presented in [5], while unsupervised learning is used in [1] to map Big Five personality traits to the behavior of different agent models in CT. However, the impulsiveness trait has not been investigated using these means before.
3 Aim and Approach

Interactions in games are very telling of a person's behavior and thus of their characteristics. Impulsiveness can be very apparent when playing a game. As a proof of concept, we implemented two basic agent-based games: Tic Tac Toe and CT. Tic Tac Toe was chosen as it is a very basic, intuitive game that everyone is usually familiar with. It does not introduce any high mental load and can thus show the intuitive reactions of the players. CT was chosen as it is a relatively simple agent-based game that still provides enough complexity to allow for measuring differences in behavior between individuals. The interactions of the players with the games were automatically recorded and analyzed by the game agents. This data was used by the agent to present an estimated value of the player's impulsiveness. In the following, we briefly explain each game, how its agent was implemented and how the agent calculates the player's impulsiveness.
3.1 Tic Tac Toe
The game is played on a 3 × 3 grid, where each player places their mark (X or O). A player wins upon having three marks in a vertical, horizontal or diagonal line. Tic Tac Toe is a zero-sum game, i.e. if both players play with an optimal strategy, every game will end in a tie. The actions of the agent during the game were based on a variation of the Minimax algorithm [6], where each player targets the move with the maximum score in their turn. This is done by recursively calculating and comparing the scores of all possible moves in the move tree. However, to avoid having an unbeatable agent, a few losing strategies are incorporated into the agent, and the agent alternates between them depending on the strategy of the player. The following data was collected by the agent during gameplay: (1) the list of all positions for player and agent; (2) the effect of each position on player and agent; (3) the time taken by the player for each move; (4) the winning status: player, agent, or tie.

3.2 Colored Trails Game
The Colored Trails (CT) game [11] is a test-bed designed to learn about the cooperativeness of an individual in their decision making. The board of the round-based game is a 4 × 4 grid of colored squares. Each player starts each round at a specific position and with a specific set of colored chips, with the aim of reaching the goal square. A player must own a chip of the same color as an adjacent square in order to move to it. Despite being competitive, the players have to cooperate by negotiating chip exchanges. The communication phase is divided into three sub-phases: offer, evaluation, and offer response. In each round one player is the proposer and the other is the responder, alternatingly. The initial chipsets of the player and the agent are generated and shuffled, depending on their position and the board, using 4 variations based on the Manhattan distance to calculate the route. The scoring function is

points = 125 − 25 · (NumberOfSquares) + 10 · (UnusedChips).    (1)
For computing the best route, we traverse the whole board from the starting position to the goal, considering all possible routes based on the given chipset. The strategy of the agent is determined by a complete search algorithm, where the agent tries to reach the best scenario for both itself and the user. The agent tries to create a chipset for the agent and the user with which both of them reach the goal. If that is not possible, the agent considers all the available chips and creates a win situation for itself, without considering the user's state if the offer were accepted. Thus, the agent first analyzes the intentions of the opponent and adapts its moves accordingly. The agent starts the game with a higher probability of not helping the user, i.e. of being selfish. After each round, the agent analyzes the intentions of the user and then changes the probability of its actions. The change made to the actions of the agent depends mainly on the favorability of the offer made, i.e. whether the offer was of benefit or did not intend to make the agent lose the round. The favorability of the offer is determined by Δp:

Δp = 125 − 25 · (NumberOfSquares) + 10 · (sum(chipSet) − len(Route)).    (2)
The following data is automatically collected during gameplay (in reference to the chipset at the beginning of a round): (1) the status of the agent/player before the offer (can the goal be reached?); (2) the points of the agent/player if the offer is rejected/accepted; (3) the response of the agent/player to the current offer; (4) the status of the agent/player if the offer is accepted (can the goal be reached?); (5) the time taken by the player to make an offer or respond to one; (6) the offered chipsets of both the player and the agent.

3.3 Estimating Impulsiveness
The data collected by each game was then analyzed by the agent to calculate an estimate of the current player's impulsiveness. Inspired by [15], an average time for each round was calculated over all participants. Accordingly, a percentile calculation was performed and the mean and standard deviation of the seconds per round were calculated, separately for each game. This data was used to segment the players into different groups by average time per game, as shown in Table 1. We also separate the set of played rounds U into two sets, UF and UL: UF consists of the useful offers and responses of CT and the useful moves of Tic Tac Toe, while UL = U \ UF consists of all the useless actions. Given the function cat(r), which gives the speed categorization of round r (Table 1), and the calculated impulsiveness constants per speed category a, b, the estimated impulsiveness per user is calculated as

( Σ_{r ∈ UF} a_cat(r) − Σ_{r ∈ UL} b_cat(r) ) / rounds.    (3)

Table 1. Speed categorization per round

No.       Time                   Qualitative
Group 1   ≤ 20 s/round           Very Fast
Group 2   > 20 s & ≤ 40 s        Fast
Group 3   > 40 s & ≤ 60 s        Slightly Fast
Group 4   > 60 s & ≤ 80 s        Moderate
Group 5   > 80 s & ≤ 100 s       Slightly Slow
Group 6   > 100 s & ≤ 120 s      Slow
Group 7   > 120 s                Very Slow
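A minimal sketch of the estimate in Eq. (3); the per-category constants a and b are hypothetical placeholders, since the calibrated values are not given above:

```python
# Sketch of the impulsiveness estimate of Eq. (3). The per-category
# constants a[g] and b[g] are hypothetical placeholders; the actual
# values are calibrated from participant data.

def speed_category(seconds):
    """Map a round's duration to the speed group of Table 1 (1..7)."""
    for group, upper in enumerate((20, 40, 60, 80, 100, 120), start=1):
        if seconds <= upper:
            return group
    return 7

def estimated_impulsiveness(rounds, a, b):
    """rounds: list of (seconds, useful) pairs. Useful actions form U_F
    and contribute a_cat(r); useless ones form U_L and contribute b_cat(r)."""
    useful = sum(a[speed_category(s)] for s, ok in rounds if ok)
    useless = sum(b[speed_category(s)] for s, ok in rounds if not ok)
    return (useful - useless) / len(rounds)

# Hypothetical calibration: faster categories weigh more.
a = {g: (8 - g) / 7 for g in range(1, 8)}
b = {g: (8 - g) / 14 for g in range(1, 8)}
print(round(estimated_impulsiveness([(15, True), (35, True), (90, False)], a, b), 3))
```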
The indicators we chose to focus on are in line with previous findings from psychological studies, which suggest that impulsive individuals show shorter reaction times and make more errors in reaction-time tasks than less impulsive individuals. However, other (gaming) tasks and behavioral indicators may also prove worthwhile for differentiating high from low impulsive individuals (e.g., measures of inhibitory control). To validate the developed framework, we conducted an experiment with the main aim of comparing the estimated impulsiveness of the implemented agent to that of validated self-report questionnaires. The participants consisted of 43 volunteer students recruited from the university's participant pool (23 males, 20 females, µage = 22.3). After signing the consent form, the participants played 15 rounds of each game to record their playing data and then took the TCI questionnaire as a measure of their impulsiveness. This provided a record for each participant containing both the estimated and the reported impulsiveness. The Pearson correlation between the estimated and the reported impulsiveness was calculated. The correlation coefficient showed a high correlation of 0.77 (p < 0.01). As behavior can depend on the participants' gender [18], the correlation analysis was repeated separately for female and male participants. It was found that the correlation was slightly higher for females (0.88, p < 0.01) than for males (0.74, p < 0.01). This shows that the impulsiveness measures should be estimated differently for each gender, as gender could affect the time categorization, for example. Thus, the grouping shown in Table 1 should be calculated separately for each gender. We applied Linear Regression (LR) on the recorded values to predict the measured impulsiveness and test whether the recorded data could reproduce the measured results. We applied LR with 10-fold cross validation, which yielded r2 = 0.63 and a mean square error of 8.2 (p < 0.01). This is high, considering that human behavior inherently has a great amount of unexplainable variation. Thus, the developed model is on the right track but needs more training data. When adding gender as a parameter, the accuracy increased to 0.66. Although the increase is not substantial, it shows that adding more components defining the players is needed to accurately predict impulsiveness from behavioral cues.
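The validation step amounts to a Pearson correlation between the agent's estimate and the TCI score. A minimal stdlib illustration with made-up numbers for five hypothetical participants:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) scores: the agent's estimated impulsiveness
# vs. the TCI self-report measure.
estimated = [0.55, 0.30, 0.80, 0.42, 0.65]
reported = [14.0, 9.0, 18.0, 11.0, 15.0]
print(round(pearson_r(estimated, reported), 3))
```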
4 Conclusion and Future Work

We presented two agent-based games (CT and Tic Tac Toe) for predicting impulsive behavior in players. We conducted an experiment to collect data from 43 participants and investigate the accuracy of the developed agents. For the personality measure of impulsiveness, the prediction accuracy is close to the test-retest accuracy of the impulsiveness assessment questionnaire used (TCI). Further large-scale testing should be conducted while adding more parameters distinguishing between individuals, e.g. age, occupation, affect and IQ [2, 17, 19, 20]. The agent model for estimating impulsiveness should be adapted according to the different distinguishing measures between individuals, as shown for gender. Estimation models for further traits should be added to the agents. Finally, more gaming environments, e.g. Virtual Reality, can be investigated.
References

1. Ahrndt, S., Albayrak, S.: Learning about human personalities. In: German Conference on Multiagent System Technologies, pp. 1–18. Springer, Cham (2017)
2. El Bolock, A., Abdelrahman, Y., Abdennadher, S. (eds.): Character Computing. Human-Computer Interaction Series. Springer, Switzerland (2020)
3. Brooks, S.J., Lochner, C., Shoptaw, S., Stein, D.J.: Using the research domain criteria (RDoC) to conceptualize impulsivity and compulsivity in relation to addiction. In: Progress in Brain Research, vol. 235, pp. 177–218. Elsevier (2017)
4. Chen, L.S.L., Tu, H.H.J., Wang, E.S.T.: Personality traits and life satisfaction among online game players. CyberPsychol. Behav. 11(2), 145–149 (2008)
5. De Jong, S., Hennes, D., Tuyls, K., Gal, Y.K.: Metastrategies in the colored trails game. In: The 10th International Conference on Autonomous Agents and Multiagent Systems, vol. 2, pp. 551–558. International Foundation for Autonomous Agents and Multiagent Systems (2011)
6. Du, D.Z., Pardalos, P.M.: Minimax and Applications, vol. 4. Springer, US (2013)
7. El Bolock, A.: Defining character computing from the perspective of computer science and psychology. In: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, pp. 567–572. ACM (2018)
8. Gal, Y., Grosz, B.J., Kraus, S., Pfeffer, A., Shieber, S.: Colored trails: a formalism for investigating decision-making in strategic environments. In: Proceedings of the IJCAI Workshop on Reasoning, Representation, and Learning in Computer Games, pp. 25–30 (2005)
9. Griffiths, M.D., Dancaster, I.: The effect of Type A personality on physiological arousal while playing computer games. Addict. Behav. 20(4), 543–548 (1995)
10. Gutierrez-Zotes, J.A., et al.: Temperament and Character Inventory-Revised (TCI-R). Standardization and normative data in a general population sample. Actas Españolas de Psiquiatría 32(1), 8–15 (2004)
11. Jeng, S.P., Teng, C.I.: Personality and motivations for playing online games. Soc. Behav. Pers. Int. J. 36(8), 1053–1060 (2008)
12. Kisa, C., Yildirim, S.G., Göka, E.: Impulsivity and mental disorders. Turkish J. Psychiatry 16(1), 46–54 (2005)
13. Mobbs, O., Crépin, C., Thiéry, C., Golay, A., Van der Linden, M.: Obesity and the four facets of impulsivity. Patient Educ. Couns. 79(3), 372–377 (2010)
14. Van de Mortel, T.F., et al.: Faking it: social desirability response bias in self-report research. Australian J. Adv. Nurs. 25(4), 40 (2008)
15. Prakash, V.C., et al.: Assessing the intelligence of a student through tic-tac-toe game for career guidance. Int. J. Pure Appl. Math. 117(16), 565–572 (2017)
16. Vinciarelli, A., Mohammadi, G.: A survey of personality computing. IEEE Trans. Affect. Comput. 5(3), 273–291 (2014)
17. El Bolock, A., Amr, R., Abdennadher, S.: Non-obtrusive sleep detection for character computing profiling. In: International Conference on Intelligent Human Systems Integration, pp. 249–254. Springer, Cham (2018)
18. El Bolock, A., El Kady, A., Herbert, C., Abdennadher, S.: Towards a character-based meta recommender for movies. In: Computational Science and Technology, pp. 627–638. Springer, Singapore (2020)
19. El Bolock, A., Salah, J., Abdelrahman, Y., Herbert, C., Abdennadher, S.: Character computing: computer science meets psychology. In: 17th International Conference on Mobile and Ubiquitous Multimedia, pp. 557–562. ACM (2018)
20. El Bolock, A., Salah, J., Abdennadher, S., Abdelrahman, Y.: Character computing: challenges and opportunities. In: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia, pp. 555–559. ACM (2017)
Visual and Motor Capabilities of Future Car Drivers

Ferdinando Tripi1, Rita Toni2, Angela Lucia Calogero3, Pasqualino Maietta Latessa4, Antonio Tempesta5, Stefania Toselli6, Alessia Grigoletto6, Davide Varotti6, Francesco Campa6, Luigi Manzoni7, and Alberto Vergnano8

1 Ass. Volontariato "Insieme si Può", via Canalino 67, 41123 Modena, Italy
[email protected]
2 Poliambulatorio Chirurgico Modenese, via Arquà 5, 41125 Modena, Italy
[email protected]
3 ASD inMo.To., via De' Roberti 34, 41124 Modena, Italy
[email protected]
4 Department for Life Quality Studies, University of Bologna, corso D'Augusto 237, Rimini, Italy
[email protected]
5 Automobile Club Modena, viale G. Verdi 7, 41121 Modena, Italy
[email protected]
6 Department of Biomedical and Neuromotor Sciences, University of Bologna, via Ugo Foscolo 7, 40123 Bologna, Italy
{stefania.toselli,francesco.campa3}@unibo.it, {alessia.grigoletto,davide.varotti}@studio.unibo.it
7 Poliambulatorio CFT Città di Vignola, viale Giuseppe Mazzini 5/2, 41058 Vignola, Italy
[email protected]
8 Department of Engineering Enzo Ferrari, University of Modena and Reggio Emilia, via P. Vivarelli 10, 41125 Modena, Italy
[email protected]
Abstract. Driving safety is recognized as critical for young people by institutions, insurances and research. The ability to manage such a complex activity as driving is still developing through adolescence and in early adulthood. The present research investigates the human factors in the driver-car interaction. The experimental method assesses the visual-motor coordination capabilities of future drivers, also in relation to their life styles. The results show that frequent, good-quality physical activity improves visual-motor coordination.

Keywords: Visual-Motor coordination · Distance Rock test · Peripheral Wall Chart · Broc String · Motor Efficiency Test · Human factors
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 214–220, 2020. https://doi.org/10.1007/978-3-030-39512-4_34
1 Introduction

Road accidents are the leading cause of injury and death among people aged 15 to 29 [1]. The literature identifies greater driving risks for the elderly [2] and for young people [3]. The driving styles of the elderly are influenced by the decline in cognitive abilities, neck stiffness and loss of vision. On the other hand, young people present deficiencies in driving ability for reasons related to the maturation of the frontal lobe through adolescence and into early adulthood [4, 5]. The ability of an individual to manage such a complex activity as driving is called Executive Function (EF), subdivided into working memory, inhibition and set-shifting [3]. Working memory is the ability to monitor, update and manipulate information with readiness [6]. Inhibitory control is the ability to filter and resist distractors and noise [5]. Finally, set-shifting enables adaptation to changes in the outside context or in the planned goal of the driving action [5]. In a danger situation, the instinct of self-preservation pushes the brain to process visual information as quickly as possible in order to command the body to react and ensure survival. A car accident is thus often caused not directly by a lack of danger perception, but rather by the incapacity for an adequate motor reaction. Playing videogames can improve adolescent EF [7], but it also engages adolescents in an intense activity in a darkened room, focused on a monitor, without focus changes or external stimuli. For this reason, young people of school age are increasingly lacking in perception and coordination skills. The same phenomenon has also been observed in the ophthalmologic field: the ability to focus on near and/or distant elements in a rapid and accurate manner, as well as the ability to locate a physical object in space, has decreased. During growth, the link between EF and individual differences, personality traits, and the social and emotional context is particularly important.
The goal of the present research work is to investigate the visual-motor coordination of future drivers. The developed method of investigation would be useful both for the educational system and for assessing the human factors in the car-driver interaction. The paper is organized as follows: Sect. 2 introduces the research method, Sect. 3 reports and discusses the results, and conclusions are drawn in Sect. 4.
2 Method of Investigation The experimental method is conceived to link the three skills identified as fundamental for driving (perception, elaboration and response) to the lifestyle of each individual. Each test session consists of three phases, as described hereafter. The tests are performed in primary and secondary schools, to collect data about future drivers who are still developing their EF. 2.1
Individual Experiences
The psychophysical potential of an individual is exploited, positively or negatively, with a leverage effect, by attitude and driving behavior. Being aware of the surrounding environment, assessing the situation and responding with timely and appropriate actions are all skills that can be developed through adequate and constant motor
216
F. Tripi et al.
activity. Data on the physical activity of young people are collected with the Physical Activity Questionnaire for Children (PAQ-C) and for Adolescents (PAQ-A), [8]. PAQ-C and PAQ-A have been designed to monitor physical activity in the 7 days before the test. 2.2
Visual Abilities
A driver tends to stare straight ahead in the travel direction, towards the position the vehicle will soon take, [9]. The driver will then rapidly scan left and right. There is a direct link between eye movements and attention. For field experiments, rapid standardized tests indicative of the ability of visual perception have been elaborated, [10, 11], and are adopted here. First, the Distance Rock test (DR) evaluates the ability of the subject to make rapid and accurate focus shifts from far to near and back. The test assesses the flexibility of accommodation and vergences. The test requires two tables of letters, as shown in Fig. 1a. The subject must stand still at a predetermined distance from the larger table, holding the smaller one in the hands. The test consists in reading the letters as fast as possible from far to near and back (one cycle), and again for 30 s. The results of the test are the (largest possible) numbers of cycles completed without errors on the near (DR20/25) and far (DR20/80) tables.
Fig. 1. Visual ability tests: (a) Distance Rock, (b) Peripheral Wall Chart, (c) Broc String.
Peripheral vision is particularly important for detecting potential dangers in a poorly structured context, such as driving in traffic. The Peripheral Wall Chart (PWC) test assesses peripheral perception. The subject must stand still at a distance of 40 cm from the chart shown in Fig. 1b. He/she must then read the letters arranged in circles while keeping the gaze fixed on the central point. The Broc String test investigates the ability to locate an object in space. One end of a rope is held by the subject under the tip of the nose, the other end by the researcher. A 1 cm diameter ball stands on the rope between them, as shown in Fig. 1c. The test is based on physiological diplopia. When fixating the ball at 3 m from the subject, objects placed before or after it are perceived as double, so that the two strings form an X. If the cross occurs before the ball, it indicates esophoria; if after, exophoria.
2.3
Physical Efficiency
As in driving, observing the space, assessing the situation and responding quickly and appropriately with a movement of the body, arms or legs are peculiar demands of sports activities. So, the physical and coordination capabilities of the response are evaluated with the Motor Efficiency Test (MET). MET has been developed by the Institute of Sports Medicine and Science of the Italian National Olympic Committee (CONI), [12]. MET consists of a circuit, as shown in Fig. 2, with 4 stages to be reached with 4 different gaits. The subject has to perform a coordination exercise at each stage. At the end, the subjective perception of effort is assessed through the Borg scale, [13]. The physical ability of a subject results from both speed and coordination.
Fig. 2. MET circuit and stages.
3 Experiment Results The introduced method was tested with 647 students from Primary School (PS), Lower Secondary School (LSS) and Upper Secondary School (USS). The scores achieved in PS are significantly lower than in LSS and USS, as shown in Table 1. For female students, there is no clear difference between the scores from LSS to USS, except for the DR tests. On the contrary, the scores of male students increase significantly at each school level. At LSS females perform better than males, but at USS the males overtake females. In fact, the age of 12-13 years corresponds to the adolescent growth spurt for girls, followed only about two years later by that of boys. The students are divided into five groups based on the amount of physical activity practiced, as shown in Table 2: GA0 = no physical activity, GA1 = 1-2 times a week, GA2 = 3-4 times a week, GA3 = 5-6 times a week, GA4 = 7+ times a week. A significant difference is observed between students not practicing physical activity,
Table 1. ANOVA test for comparison of Visual Abilities and Physical Efficiency tests for female and male students of different grade schools.

Female students:
Test    | PS           | LSS          | USS          | F
DR20/80 | 9.35 ± 2.59  | 11.89 ± 3.27 | 13.93 ± 2.60 | 62.65
DR20/25 | 7.80 ± 2.55  | 10.27 ± 2.93 | 11.51 ± 2.79 | 42.79
PWC     | 3.70 ± 1.20  | 4.32 ± 0.93  | 4.30 ± 0.93  | 9.09
MET     | 13.80 ± 4.43 | 17.16 ± 2.61 | 17.09 ± 3.42 | 21.29

Male students:
Test    | PS           | LSS          | USS          | F
DR20/80 | 9.08 ± 2.20  | 11.58 ± 3.06 | 13.69 ± 2.86 | 77.65
DR20/25 | 7.28 ± 2.26  | 9.84 ± 2.65  | 10.81 ± 2.83 | 49.95
PWC     | 3.61 ± 1.28  | 3.88 ± 1.24  | 4.10 ± 0.97  | 4.33
MET     | 13.76 ± 3.89 | 16.93 ± 3.52 | 19.19 ± 3.30 | 57.36
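The group comparisons in Table 1 rest on a one-way ANOVA, whose F statistic is the ratio of between-group to within-group variance. A minimal sketch of that computation in Python (the scores below are synthetic stand-ins invented for illustration, not the study's data):

```python
def one_way_anova_f(groups):
    """Return the F statistic of a one-way ANOVA over a list of samples."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (each group mean vs. the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (each score vs. its own group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# synthetic DR20/80-style scores for PS, LSS, USS
ps  = [9, 10, 8, 9, 11, 9]
lss = [12, 11, 13, 12, 10, 13]
uss = [14, 13, 15, 14, 13, 14]
f = one_way_anova_f([ps, lss, uss])
print(round(f, 1))  # → 30.5
```

With the study's real per-student scores, the same computation would reproduce the F values reported in Table 1.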
adaptive > HUD-color-based adaptive > color-invariant (green) adaptation. It can be considered that the subjects have the best cognitive performance in the complementary color adaptive test, followed by contrast color adaptation and HUD-color-based adaptation, while the worst cognitive performance occurs in the regular green test. Good contrast helps participants understand color and information. Under the simple task, the average delta-band power normalization value is smaller than the average value under the complex task, corresponding to better cognitive performance under complex tasks: because the complex task is more demanding, the subject maintains a more alert mental state and pays more attention, so there is better cognitive performance.
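The band power normalization value discussed above can be illustrated with a toy computation, assuming here that normalization simply means the power of one EEG band divided by the total spectral power (an illustrative simplification; the authors' exact normalization procedure is not spelled out in this excerpt). A synthetic 2 Hz signal puts essentially all of its normalized power into the delta band (1-4 Hz):

```python
import math

def band_power_fraction(signal, fs, f_lo, f_hi):
    """Fraction of total spectral power inside [f_lo, f_hi], via a naive DFT."""
    n = len(signal)
    total, band = 0.0, 0.0
    for k in range(1, n // 2):  # skip DC, use positive frequencies only
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im   # power at frequency bin k
        total += p
        if f_lo <= k * fs / n <= f_hi:
            band += p
    return band / total

fs = 64                                   # sampling rate in Hz
sig = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(fs * 2)]  # pure 2 Hz tone
delta = band_power_fraction(sig, fs, 1.0, 4.0)
print(round(delta, 3))  # → 1.0
```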
4 Conclusion In the EEG interface color perception experiment, under the color adaptive mode factor, the delta, SMR and theta band power normalization values rank as complementary color adaptive > contrast color adaptive > HUD-color-based adaptive > color-invariant (green) adaptation, indicating that the increase in contrast between the
324
C. Guan et al.
foreground color and the background color is conducive to the attention capture of the foreground information, which is beneficial to the improvement of the cognitive performance of the subject. Under the hue change density factor, a moderate color change density is beneficial to human attention: neither the faster rate of change (16 frames/second) nor the slower rate of change (2 frames/second) captures attention as well as the moderate rates of 4 and 8 frames/second. Under the task difficulty factor, there is better cognitive performance under complex tasks: because complex tasks are more demanding, the subjects maintain a more alert mental state and pay more attention, so they perform better. Under the brain region factor, the power normalization values of the delta, SMR and theta bands were not significantly different between the four levels. Under the brain-side factor (left, right and midline), there were significant differences between the left side of the brain and the midline of the brain, but no significant difference between the left and right sides, indicating that the power of the SMR wave shows no hemisphere effect. Acknowledgments. This paper is supported by the Science and Technology on Avionics Integration Laboratory and Aeronautical Science Fund (No. 20165569019) and the National Natural Science Foundation of China (No. 71871056, No. 71471037).
References 1. Wang, L., Li, S., Gao, Z.: Effects of highway landscape color on driver’s EEG delta wave components. J. Harbin Inst. Technol. 48(09), 35–40 (2016). (in Chinese) 2. Liu, Y., Cui, G., Jin, J., Xu, T., Jiang, L., et al.: Study on color difference evaluation based on brain wave signal. J. Wenzhou Univ. (Nat. Sci. Ed.) 40(01), 38–47 (2019). (in Chinese) 3. Hou, Y., Zhang, L., Miao, D.: Study on the influence of color background on physiology and performance of visual cognitive tasks. Chin. J. Clin. Psychol. (05), 506 (2008). (in Chinese) 4. Khoroshikh, V.V., Ivanova, V.Y., Kulikov, G.A.: The effect of unconscious color hue saturation on the emotional state of humans. Hum. Physiol. 38(2), 129–136 (2012) 5. Tcheslavski, G.V., Vasefi, M., Gonen, F.F.: Response of a human visual system to continuous color variation: an EEG-based approach. Biomed. Signal Process. Control 43, 130–137 (2018) 6. Tripathy, J., Fuss, F.K., Kulish, V.V., et al.: Influence of colour hue on fractal EEG dimensions. In: International Conference on Biomedical & Pharmaceutical Engineering. IEEE (2006) 7. Aprilianty, F., Purwanegara, M.S., Suprijanto: Effects of colour towards underwear choice based on electroencephalography (EEG). Australas. Mark. J. (AMJ) 24(4), 331–336 (2016). S1441358216302191 8. Li, J., Xue, C.: Study on color coding of digital interface based on vision perception layering. Chin. J. Mech. Eng. 52(24), 201–208 (2016). (in Chinese)
Human AI Symbiosis: The Role of Artificial Intelligence in Stratifying High-Risk Outpatient Senior Citizen Fall Events in a Non-connected Environments Chandrasekar Vuppalapati1, Anitha Ilapakurti1(&), Sharat Kedari1(&), Rajasekar Vuppalapati1(&), Jayashankar Vuppalapati2(&), and Santosh Kedari2(&) 1
Hanumayamma Innovations and Technologies, Inc., Fremont, CA, USA {cvuppalapati,ailapakurti, sharath,raja}@hanuinnotech.com 2 Hanumayamma Innovations and Technologies Private Limited, Hyderabad, India {jaya.vuppalapati,skedari}@hanuinnotech.com
Abstract. Senior citizen falls are debilitating and harmful events. Not only do they negatively affect morale, psychology and self-esteem, but they also tend to be highly repetitive and costly to life. To prevent future falls, the outpatient senior citizen needs to be equipped with real-time monitoring sensors, such as a wrist band or a sensor necklace. Nonetheless, in a world where real-time sensor monitoring systems are not available due to connectivity limitations and economic affordability, the onus of predicting and preventing senior citizen falls needs to be on cognitive systems that are democratized in nature and yield learning from population health analysis. In this paper, we apply population collaborative filtering techniques and artificial intelligence models to group senior citizens into high-risk clusters and alert healthcare professionals and primary-care family members.

Keywords: Senior Citizens · Fall event · Machine learning · Kalman · Sanjeevani electronic health records · Outpatient · K-Means
1 Introduction 1.1
Fall Incidence in Senior Citizens
Falls are among the most debilitating and harmful events for senior citizens. An elderly person falling is a public health and community problem with adverse physical, psychological, social, medical, and economic consequences. According to the Center for Disease Control and Prevention (CDC) Home and Recreational Safety guidelines, each year millions (Fig. 1) of elders - "those 65 and older" - fall1 [2]. These falls are serious and costly.
1. Important Facts about Falls - https://www.cdc.gov/homeandrecreationalsafety/falls/adultfalls.html.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 325–332, 2020. https://doi.org/10.1007/978-3-030-39512-4_52
326
C. Vuppalapati et al.
The CDC estimates the cost of fall injuries at around $31 billion annually, with hospital costs accounting for two-thirds of the total [1, 3]. The incidence of falls increases with advancing age. Falls are one of the leading causes of death2 in the elderly due to their complications, of which hip fractures account for 50% [4]. Fall events are repetitive [5].
Fig. 1. Unintentional fall death rates, adults 65+ [2]
1.2
Frequency of Falls
Falls are repetitive. They occur in outpatient, inpatient, community and senior living settings. Findings4 show that, among community-dwelling older people over 64 years of age, 28-35% fall each year (Fig. 2). Of those who are 70 years and older, approximately 32%-42% fall each year. The frequency of falls increases with age and frailty level. Older people living in nursing homes fall more often than those living in the community. Approximately 30-50% of people living in long-term care institutions fall each year, and 40% of them experience recurrent falls [6].
2. Falls In Older People - https://www.who.int/ageing/projects/SEARO.pdf.
3. You can control your asthma - https://www.cdc.gov/asthma/pdfs/asthma_brochure.pdf.
4. Global Report on Falls Prevention – Epidemiology of falls - https://www.who.int/ageing/projects/1.Epidemiology%20of%20falls%20in%20older%20age.pdf.
Fig. 2. Percentage of falls [6]
1.3
Environment - Rural vs. Urban Areas
The senior population occupies a larger proportion5 among rural than urban participants, and the mean age of the rural group is higher than that of the urban group [7]. Senior citizens of lower socio-economic demographics have a higher risk of falling and being hospitalized. Comparatively, the fall rate in rural environments is much higher than in urban settings due to lack of resources (see Fig. 3).
Fig. 3. General characteristics [7]
5. Disparity in the Fear of Falling Between Urban and Rural Residents - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3895525/.
1.4
Fall Prevention - Electronic Sensors
As the population ages, the problems related to falls are expected to grow and pose an even greater challenge [4]. Meeting this challenge requires a clear understanding of the prevalence and nature of falls. Outpatient data is the most important tool [5] to enable preventive healthcare [8]. A majority of falls are predictable and therefore preventable [4]. The development of continuous monitoring systems plays an important role in predicting falls [4, 6]. There are sensors6 available to detect falls and notify primary caretakers. However, these sensors require a connected network such as home Wi-Fi, GPS, in-house internet or Internet services. The structure of this paper is as follows: Sect. 2 discusses the basic concepts and methods of the machine learning algorithms, Sect. 3 presents our Population AI architecture, and Sect. 4 shows a case study.
2 Understanding Machine Learning Algorithms for Fall Detection 2.1
Abnormal Movement Detection in Constrained Environment
We face two challenges in detecting falls in a non-connected environment: first, the fall detection algorithms must be computed on the sensor (edge) and, second, fall events must be stored on the sensor for future analysis. In order to detect a fall event, we need to capture drastic or abnormal changes in sensor (accelerometer) values. A Kalman filter is able to capture such fall events [9]. Here is a very high-level view of the Kalman filter for fall detection: in the Kalman filter, the fundamental concept is the notion of the state (see Fig. 4). By this is meant, intuitively, some quantitative information (a set of numbers, a function, etc.) that is the least amount of data one has to know about the past behavior of the system in order to predict its future behavior. The dynamics are then described in terms of state transitions, i.e., one must specify how one state is transformed into another as time passes.
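A minimal sketch of this idea in Python, for a single accelerometer channel: the filter tracks the expected "normal movement" state, and a measurement whose innovation falls far outside the predicted uncertainty is flagged as a candidate fall event. The noise parameters and the gate threshold below are invented for illustration, not taken from the paper.

```python
def kalman_flag_falls(zs, q=0.01, r=0.5, gate=4.0):
    """1-D Kalman filter with a random-walk state model (F = 1, B = 0).
    Returns indices of measurements whose innovation exceeds `gate`
    standard deviations of the predicted innovation variance."""
    x, p = zs[0], 1.0              # state estimate and its variance
    flagged = []
    for i, z in enumerate(zs[1:], start=1):
        p += q                     # predict: state unchanged, uncertainty grows
        innov = z - x              # innovation (measurement residual)
        s = p + r                  # innovation variance
        if innov * innov > gate * gate * s:
            flagged.append(i)      # abnormal jump: candidate fall event
        k = p / s                  # Kalman gain
        x += k * innov             # update state estimate
        p *= (1 - k)               # update estimate variance
        # (a real system might skip the update on flagged samples)
    return flagged

# steady walking signal with one sudden spike (synthetic data)
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 9.0, 1.0, 0.95]
print(kalman_flag_falls(readings))  # → [5]
```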
Fig. 4. Kalman filter
6. 'Aging In Place' tech helps seniors live in their home longer - https://www.usatoday.com/story/tech/columnist/saltzman/2017/06/24/aging-place-tech-helps-seniors-live-their-home-longer/103113570/.
The Kalman filter model assumes the true state at time k evolves from the state at time k − 1 according to

xk = Fk xk−1 + Bk uk + wk

where
• Fk is the state transition model, which is applied to the previous state xk−1;
• Bk is the control-input model, which is applied to the control vector uk;
• wk is the process noise, which is assumed to be drawn from a zero-mean distribution.

Our goal in applying the Kalman filter is to capture any abnormal state transitions due to a fall and store the event for future analysis. In fall detection, the state equation describes the regular, normal movement of the user (normal accelerometer X, Y and Z values). 2.2
Clustering
To issue recommendations based on locations, clustering of detected falls is needed (see Fig. 5). Clustering can be hierarchical (agglomerative or divisive); here it is achieved through similarity-based or K-means clustering [10, 11].
Fig. 5. Clustering [6]
In our case we have used K-means clustering to register the fall positions. Say a senior citizen falls on floor 2 near the garden (see Table 1); the sensor captures the fall event, and whenever the senior citizen is near one of the fall locations, the sensor vibrates to alert. A K-means cluster implementation7 is shown in Table 2. Please note that, to improve device performance, the K-means cluster computes the proximity (Euclidean distance [10, 11]) only after significant changes8 have occurred in the senior citizen's movement.

7. C source code implementing k-means clustering algorithm - http://homepages.cae.wisc.edu/~brodskye/mr/kmeans/.
8. Apple Location Services – Significant Change Location - https://developer.apple.com/documentation/corelocation/cllocationmanager/1423531-startmonitoringsignificantlocati.
Table 1. Fall event sensor data

Fall (#) | Sensor accelerometer X | Y     | Description
1        | 12.6                   | 133.5 | Mom fall on second floor near garden
2        | 13.9                   | 122   | Mom fall on front porch
3        | 10                     | 22.3  | Mom fall in bedroom
4        | 10                     | 22.3  | Mom fall in bedroom
Table 2. High-risk stratification to falls: K-means C code (excerpt)

void kmeans(
    int    dim,                       /* dimension of data */
    double *X,                        /* pointer to data */
    int    n,                         /* number of elements */
    int    k,                         /* number of clusters */
    double *cluster_centroid,         /* initial cluster centroids */
    int    *cluster_assignment_final  /* output */
)
{
    /* working buffers for the assignment/update iterations */
    double *dist = (double *)malloc(sizeof(double) * n * k);
    int *cluster_assignment_cur = (int *)malloc(sizeof(int) * n);
    int *cluster_assignment_prev = (int *)malloc(sizeof(int) * n);
    double *point_move_score = (double *)malloc(sizeof(double) * n * k);
    /* ... iterative assignment and centroid-update loop elided in this excerpt ... */
}
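The register-and-alert scheme just described can be sketched compactly in Python (the coordinates are those of Table 1; the cluster count, the initialization and the alert radius are illustrative assumptions, not values from the paper):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, centroids, iters=10):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    for _ in range(iters):
        buckets = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: dist(p, centroids[j]))
            buckets[i].append(p)
        centroids = [
            (sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b)) if b else c
            for b, c in zip(buckets, centroids)
        ]
    return centroids

# fall positions from Table 1 (X, Y values), clustered into two locations
falls = [(12.6, 133.5), (13.9, 122.0), (10.0, 22.3), (10.0, 22.3)]
centers = kmeans(falls, centroids=[falls[0], falls[2]])

def near_fall_cluster(pos, centers, radius=5.0):
    """Vibrate-alert check: is the wearer within `radius` of a fall cluster?"""
    return any(dist(pos, c) <= radius for c in centers)

print(near_fall_cluster((11.0, 23.0), centers))  # → True (near the bedroom cluster)
```

Consistent with the significant-change optimization noted above, a device would call `near_fall_cluster` only after the wearer's position has changed appreciably.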
3 Architecture The system architecture consists of four major parts (see Fig. 6): (1) Sensor Module, (2) Edge Processor, (3) Bluetooth Module, and (4) Battery. Sensor Module. The sensor module collects accelerometer (X, Y, and Z) values. Additional temperature and humidity sensors capture ambient conditions.
Fig. 6. Architecture
Edge Processor. The K-means clustering [10, 11] and Kalman [9] architectures are deployed on the edge processor. The classification of high-risk patients in a non-connected environment follows the rules in Table 3.

Table 3. Senior citizens - high-risk stratification to falls.
Stratification rules:
• Recent fall encounters
• Capture of fall locations (accelerometer X, Y, Z)
• Cluster computation based on the movement
• Compute optimization
• Notifications (Shake or)
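Table 3 lists the stratification signals without formalizing them; one hypothetical way to encode them as an edge-side rule (the thresholds and level names below are invented for illustration, not taken from the paper) is:

```python
def stratify(recent_falls, near_known_fall_spot):
    """Hypothetical risk stratification combining Table 3's signals:
    recent fall count plus proximity to a registered fall cluster."""
    if recent_falls >= 2 or (recent_falls >= 1 and near_known_fall_spot):
        return "high"    # notify caretaker and vibrate the sensor
    if recent_falls == 1 or near_known_fall_spot:
        return "medium"  # vibrate-only reminder
    return "low"

print(stratify(recent_falls=2, near_known_fall_spot=False))  # → high
```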
4 A Case Study This paper presented a novel approach to detect and predict senior citizen fall events. The novelty of the paper is predicting fall events under no connectivity, i.e., in offline applications. We staunchly believe that edge devices that can predict senior citizen falls in offline environments can greatly reduce fall-related deaths and thus improve overall healthcare delivery. We also believe that Population Healthcare AI will not only reduce the cost factor for outpatients but also save lives.
References 1. Preventice Solutions, About Us. http://www.preventicesolutions.com/about-us.html. Accessed 20 Feb 2017 2. Davis, J.: Remote patient monitoring market booming amid readmission fines, doctor shortages, report says, 15 December 2015. http://www.healthcareitnews.com/news/remotepatient-monitoring-market-booming-amid-readmission-fines-doctor-shortages-report-says
3. Center for Disease Control and Prevention. http://www.cdc.gov/homeandrecreationalsafety/falls/adultfalls.html. Accessed 29 Oct 2016
4. Krishnaswamy, B., Gnanasabandam: Falls in Older People. Address to WHO/SEARO Staff on the eve of the World Health Day 2005, WHO/SEARO, New Delhi, 6 April 2005. https://www.who.int/ageing/projects/SEARO.pdf
5. Vuppalapati, J.S., Kedari, S., Ilapakurti, A., Vuppalapati, C., Vuppalapati, R., Kedari, S.: Machine learning infused preventive healthcare for high-risk outpatient elderly. In: Arai, K., Kapoor, S., Bhatia, R. (eds.) Intelligent Systems and Applications, IntelliSys 2018. Advances in Intelligent Systems and Computing, vol. 869. Springer, Cham (2019)
6. Yoshida, S.: A Global Report on Falls Prevention: Epidemiology of Falls. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.506.2291&rep=rep1&type=pdf. Accessed 10 Oct 2019
7. Cho, H., et al.: Disparity in the fear of falling between urban and rural residents in relation with socio-economic variables, health issues, and functional independency. Ann. Rehabil. Med. 37(6), 848–861 (2013). https://doi.org/10.5535/arm.2013.37.6.848
8. Chandrasekar, V., Anitha, I., Santosh, K., Jayashankar, V.: Stratification of, albeit Artificial Intelligent (AI) Driven, High-Risk Elderly Outpatients for priority house call visits - a framework to transform healthcare services from reactive to preventive. MATEC Web Conf. 255 (2019). https://doi.org/10.1051/matecconf/201925504002. Article no. 04002
9. Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans. Am. Soc. Mech. Eng. D 82, 35–45 (1960)
10. Han, J.: Data Mining: Concepts and Techniques. Originally published August 2000. ISBN-13 978-0123814791
11. Leskovec, J., Rajaraman, A., Ullman, J.D.: Mining of Massive Datasets. ISBN 978-1107077232
Intelligence, Technology and Analytics
Distinguishing a Human or Machine Cyberattacker Wayne Patterson1(&), Acklyn Murray1, and Lorraine Fleming2 1
Patterson & Associates, 201 Massachusetts Avenue NE, Suite 316, Washington, DC 20002, USA [email protected], [email protected] 2 Department of Civil Engineering, Howard University, 2400 6th Street NW, Washington, DC 20059, USA [email protected]
Abstract. In the world of cybersecurity, attacks may come from many sources. This is a problem faced by all users, because cyberattacks have become ubiquitous. Although many types of attacks are relatively simple to defend against given commercially available detection software, attacks are becoming more sophisticated. In recent times, a cyberattack might be initiated by a human or by an automated software attack, or bot. One line of defense that has not been sufficiently explored is the advantage that might come to defenders if they have the ability to detect whether the attacker is human or machine-based. In this paper, we attempt to develop methods that allow a defender to try to determine whether an attack is of the human or bot type.

Keywords: Cybersecurity · Cyberattacks · Machine translation · Human factors
1 Introduction We explore an approach to developing the ability of a cyber defender to detect whether he or she is under attack by another human or by a bot. If the attack involves an onscreen dialogue, which is often the case, it is the intent of this research to see what strategy can be developed to detect the potential nature of such an attack. This approach was first explored by Alan Turing: his 1950 article in the journal Mind [1] provided significant insights into this challenge, and his insights are usually described as the "Turing Test." In this paper, we develop a new Turing Test to explore the potential differences between three types of text utilized in a cyberattack. The first type of text is a quotation in its exact form; examples are taken from several sources of quotations from well-known historical individuals [2] and from well-known quotations from popular films [3]. The second type of text is a modification by a human editor of examples of the first type, changing a word or words. The third type of text is a machine translation obtained by taking the first text, translating it to a second language (in this case French, using Google Translate), and then translating the French text back to English in the same fashion (we will subsequently call this process "EFE"). Google Translate is chosen in part because it is highly
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 335–340, 2020. https://doi.org/10.1007/978-3-030-39512-4_53
336
W. Patterson et al.
ranked among machine translation software [4], but also because it is free and widely available throughout the entire community of Google users.
2 Research Design A baseline test of 30 elements was administered to a group of 65 potential subjects of cyberattacks to give an initial analysis of their ability to detect which of the three above categories was used in each case. From the baseline responses, a training module was developed to assist the subjects in improving their performance on a second, similar test bed of questions. For example, a training item could advise: "Watch for misplaced modifiers: 'Keep your close friends' instead of 'Keep your friends close'". The paper discusses the performance of the test subjects, the choices made in the design of the training items, and the level of improvement from the (pre-) test bed to the (post-) test bed. Part 1: For the first part of the study, 65 volunteers from three distinct groups took a test which involves classifying 30 separate texts into one of three categories. In every case, the test item was developed from well-known quotes, either from standard sources of quotations (in which case the source would be, generally speaking, grammatical) or from popular dialogues (well-known quotations from film scripts, therefore generally more colloquial). In what follows, the three distinct sets of volunteers will be referred to by this rough distinction of the source of each subset.

Table 1. Description of the volunteer groups

Designation | Description                                   | No. of respondents
E           | Engineering students at Howard University     | 9
T           | Employees of a technology-based company       | 43
R           | Random selection of well-educated individuals | 13
The subjects would classify each item as M (for a machine translation), H (for a human translation), or N (for the normal or original quotation). In this first part, the respondents were given the test file with only the instructions as to how to code their answers. Part 2: After analysis of the results of the previous part, and recognizing the types of modifications made by machine translation or by human translation, a Training Module was developed to provide assistance to respondents to this or similar surveys in detecting the types of EFE translation. This Training Module identified a number of modifications common in machine translation, examples of which are given below. 1. Split words: the English word "cannot" has no counterpart in French, thus the translation would normally render "can not".
2. Misplaced part of speech: (from "The Godfather") "Keep your friends close, but your enemies closer" becomes (going English to French and back to English): "Keep your close friends, but your enemies closer".
3. Contracted phrases/words: a common phrase such as "most of the things" has no counterpart in French and thus would be translated as "most things"; words such as "I'm", when translated into French and back to English, will normally appear as "I am".
4. Hyphenated expressions: a phrase in English such as "eggs-and-ham breakfast" can only be translated into French as "breakfast with eggs and ham".
In addition, the assumption was made that a human translation would not contain instances of poor grammar, but rather word choices that result in a grammatically correct sentence, though perhaps with a loss of meaning because of the synonym chosen, say, by a human English writer for whom English is not a first language (Table 2).

Table 2. Examples of replacement by synonyms

Synonym used instead of source word                      | Source word
Despotism                                                | Tyranny
Divinity                                                 | Religion
Sphere                                                   | World
Flair                                                    | Gift
Straightforward, as in "Straightforward, my dear Watson" | Elementary
3 Analysis of Results What follows is the tabulation of responses to the “Recognizing Machine Translation” test with n = 65. The data are sorted in descending order of accuracy the choice. This subset of the test document consisting of approximations to known quotes maintaining correct grammar but changing 1 or 2 words to synonyms that might be found in any dictionary. The purpose for this choice was the assumption that an attacker might be careful in not displaying incorrect grammar or possibly one from an individual who is not a native speaker of English. A few examples are indicated in Table 1 below.
The ability of the respondents on these test questions was worse than on any of the other sets. Only 26.2% of the respondents correctly identified that these were human translations, and almost 50% assumed these were actual quotations (Table 3).

Table 3. Recognition of 10 examples of human-translated documents (H)

Quotation | Correct (H) | Chose M for H | Chose N for H
There are no facts, only connotations | 33.8% | 16.9% | 49.2%
Political correctness is despotism with manners | 32.8% | 25.0% | 42.2%
People demand freedom of speech to make up for the freedom of conviction which they avoid | 32.3% | 27.7% | 40.0%
The fabric that dreams are made of | 31.3% | 7.8% | 60.9%
Life is a feast, and most poor suckers are starving to death! | 29.7% | 10.9% | 59.4%
I'll get you, my pretty, and your little cur too! | 29.2% | 52.3% | 18.5%
Sex and divinity are closer to each other than either might prefer | 27.7% | 10.8% | 61.5%
A lie gets halfway around the sphere before the truth has a chance to get its pants on | 23.4% | 29.7% | 46.9%
I have always hinged on the kindness of strangers | 20.3% | 3.1% | 76.6%
The only way to get rid of a desire is to yield to it | 18.5% | 33.8% | 47.7%
AVERAGE | 26.2% | 25.4% | 48.5%
The second-best category consisted of the machine translations, where the respondents were correct 43.2% of the time. This is encouraging, since it indicates a reasonably good ability to detect grammatical or syntactical anomalies. A few examples are the dialogue from the film “Network”, where “I’m as mad as hell, and I’m not going to take this anymore!” when translated EFE becomes “I’m crazy like crazy, and I’m not going to take that anymore!”; and from “The Graduate”, where “Mrs. Robinson, you’re trying to seduce me. Aren’t you?” becomes “Ms. Robinson, you are trying to seduce me. Is not it?”.

Table 4. Recognition of 11 examples of machine-translated documents (M)

Quotation | Correct (M) | Chose N for M | Chose H for M
Why do not you come to see me one day? | 72.3% | 6.2% | 21.5%
Ms. Robinson, you are trying to seduce me. Is not it? | 70.8% | 13.8% | 15.4%
When a person suffers from delirium, we speak of madness. When many people are delirious, we talk about religion | 70.3% | 9.4% | 20.3%
Open the doors of the pod bay, please, HAL | 67.7% | 9.2% | 23.1%
The greatest danger for most of us is not reaching our goal too high or reaching it, but reaching our goal too low and reaching our goal | 67.2% | 10.9% | 21.9%
I’m crazy like crazy, and I’m not going to take that anymore! | 60.0% | 16.9% | 23.1%
You do not understand! I could have class. I could be a competitor. I could have been someone instead of being an idiot, that’s what I am | 47.7% | 21.5% | 30.8%
Keep your close friends, but your enemies closer | 40.0% | 32.3% | 27.7%
You know how to whistle, do not you, Steve? You have just gathered your lips and blow | 34.4% | 46.9% | 18.8%
If you build it, it will come | 29.2% | 36.9% | 33.8%
Of all the gins joined of all the cities of the world, it enters mine | 14.1% | 51.6% | 34.4%
AVERAGE | 43.2% | 28.9% | 28.0%
The strongest performance in the test was on the 9 examples where the quotation was exactly as found in the document sources. In this case, the respondents were able to answer correctly 64.1% of the time. It is possible, although we cannot determine this from the data, that the language was not only natural but also familiar to the respondent, thus convincing that person that it was the actual quotation.

Table 5. Recognition of 9 examples of Original or Normal Document (N)

Quotation | Correct (N) | Chose H for N | Chose M for N
Today, I consider myself the luckiest man on the face of the earth | 89.2% | 10.8% | 0.0%
My mama always said life was like a box of chocolates. You never know what you’re gonna get | 84.6% | 13.8% | 1.5%
Listen to me, mister. You’re my knight in shining armor. You’re going to get back on that horse, and I’m going to be right behind you, holding on tight, and away we’re gonna go, go, go! | 72.3% | 13.8% | 13.8%
The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts | 71.9% | 14.1% | 14.1%
Play it, Sam. Play ‘As Time Goes By.’ | 62.5% | 35.9% | 1.6%
Whether you think you can, or that you can’t, you are usually right | 56.3% | 23.4% | 20.3%
A census taker once tried to test me. I ate his liver with some fava beans and a nice Chianti | 54.7% | 26.6% | 18.8%
What we’ve got here is failure to communicate | 53.1% | 26.6% | 20.3%
I’m living so far beyond my income that we may almost be said to be living apart | 39.1% | 34.4% | 26.6%
AVERAGE | 64.1% | 22.6% | 13.3%
4 Subgroup Comparison The respondents, as noted above, could be divided into 3 groups, previously labelled E, T, and R. Their responses are considered separately in Table 6.

Table 6. Comparison of overall results by subgroup (E, T, R)

Subgroup | Correct responses | Total responses | % Correct
E | 107 | 269 | 39.8%
T | 621 | 1278 | 48.6%
R | 199 | 388 | 51.3%
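The subgroup percentages can be reproduced directly from the raw counts; a minimal sketch, using only the counts reported in Table 6:

```python
# Per-subgroup accuracy from the raw counts in Table 6.
# (correct responses, total responses) per subgroup E, T, R.
counts = {
    "E": (107, 269),
    "T": (621, 1278),
    "R": (199, 388),
}

for group, (correct, total) in counts.items():
    pct = 100.0 * correct / total
    print(f"{group}: {pct:.1f}% correct")  # 39.8%, 48.6%, 51.3%
```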
5 Follow-Up Study For a small number of the overall participants, it was possible to provide a training guide after the analysis of the results from the first test, and thus administer a second test after the training guide had been studied. The second test again contained 30 elements and was of the same form, requesting an H, M, or N response. In this case, a few of the original questions were repeated to see whether memory might be an important factor in the responses. In approximately half the cases, the second test following the training module showed very large improvements in the scores.
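The improvement on repeated questions described above amounts to a paired before/after comparison. A minimal sketch follows; the scores are hypothetical placeholders for illustration, not the study's data:

```python
# Hypothetical per-question fractions correct on items repeated across
# the two tests (NOT the study's actual figures).
first_test  = {"q1": 0.26, "q2": 0.43, "q3": 0.64}  # before training guide
second_test = {"q1": 0.41, "q2": 0.58, "q3": 0.66}  # after training guide

# Paired per-question gain and the mean gain across repeated items.
improvement = {q: second_test[q] - first_test[q] for q in first_test}
mean_gain = sum(improvement.values()) / len(improvement)
print(improvement, f"mean gain: {mean_gain:+.2f}")
```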
References 1. Turing, A.M.: Computing machinery and intelligence. Mind Q. Rev. Psychol. Philos. LIX (236), 433–460 (1950) 2. Bartlett’s Quotations. http://www.bartlettsquotes.com/ 3. American Film Institute. https://www.afi.com/100years/quotes.aspx 4. Best Machine Translation Software. www.g2.com/categories/machine-translation
Using Eye Tracking to Assess User Behavior in Virtual Training Mina Fahimipirehgalin, Frieder Loch, and Birgit Vogel-Heuser Automation and Information Systems (AIS), Technical University of Munich (TUM), Munich, Germany {Mina.Fahimi,Frieder.Loch,Vogel-heuser}@tum.de
Abstract. Virtual training systems can provide flexible and effective training for interactions with increasingly complex industrial machines. However, existing approaches do not adapt to the attributes of the user. Being able to track the current state of the user enables a humanization of virtual training systems, since it allows analyzing the strain and the cognitive processes of the user and reacting accordingly. In recent years, eye tracking technology has become a widespread research area in human–machine interaction. This paper introduces an approach to adapt virtual training systems based on eye tracking analysis. The approach detects specific patterns in eye movements and evaluates the performance of the user based on the detected patterns. If a pattern suggests that the user cannot follow the instructions or that the user is distracted, the complexity of the training system can be reduced. Keywords: Eye tracking · Pattern detection · User behavior · Virtual training
1 Virtual Training Systems and Eye Tracking Manufacturing systems are becoming increasingly customized and dynamic and – at the same time – more complex. Especially vulnerable groups, such as low-experienced operators or seniors, may become over-challenged by these transformations. Effective systems for the training and support of machine operators are required in such environments [1]. One approach in this field is virtual training systems. They are based on a virtual replication of the industrial environment and allow the training of industrial procedures [2]; they should adapt to the abilities and the current state of the user [3]. This allows a humanization of training. The current state of the user can be detected to deduce whether the user can follow the instructions that are provided by the virtual training system. The training system can react based on this information, for instance, by suggesting an interruption of the training or by providing additional instructions if a user is struggling. Eye tracking allows a non-invasive measurement of the current state of the user by detecting specific patterns during the interaction with a virtual training system. Research in the field of eye tracking [4] shows that eye movements can provide a suitable indication of the difficulty of understanding a text. Processing time, for instance, is affected by the difficulty of the text or by inconsistencies that impede understanding. These findings motivate the application of eye tracking measurements to virtual training systems. The detection of different patterns in eye movements can provide insights about the current state of the user. If the pattern indicates that the user can follow the instructions, the virtual training system can offer to increase the realism and the complexity of the training. If the pattern indicates that the instructions are difficult, the virtual training system can, for instance, offer to reduce the complexity of the instructions or suggest a break. This paper is organized as follows: a review of research in the field of virtual training systems and eye tracking is provided in Sect. 2. Then, the proposed approach for analyzing gaze positions in the virtual training system and possible user patterns is discussed in Sect. 3. Finally, the paper is concluded in Sect. 4.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 341–347, 2020. https://doi.org/10.1007/978-3-030-39512-4_54
2 State of the Art Virtual training systems have been suggested for different industrial applications, such as assembly or maintenance procedures. Gorecky et al. [2] design a virtual training system for an assembly line using gesture-based and speech-based interaction. However, they focus on different approaches to interact with the user during training and do not consider the user's abilities to provide an adaptive training system. Abe et al.'s training system [8] can detect the user's erroneous actions while they learn how to assemble or disassemble the machine in the virtual training system. This information is used to analyze whether the user learns the procedure correctly. Brough et al. [9] propose a virtual training system in which errors of users are detected during training sessions and feedback is generated by tracking the user's hand motion. However, they do not propose any patterns that can be used to analyze the user's behavior. Bluemel et al. [12] propose a virtual training system that adapts the degree of detail of the instructions. These adaptations are carried out manually, without considering the performance or the skills of the user. Gutiérrez et al. [6] propose a multimodal training system including haptic, gesture, and visual feedback, which can be adapted to the user's preferences and needs. However, they do not consider the performance of the user at run time. Loch et al. [7] propose an adaptive virtual training system based on different measurements of the user; however, they do not propose an approach to deduce specific patterns from users and provide adaptive interaction based on those patterns. In the domain of eye tracking, Groen et al. [10] suggest eye tracking measurements to evaluate the usability of user interfaces and improve interface design. However, they do not define specific patterns to evaluate user behavior. Scheiter et al. [11] study the effect of multimedia instructions on the learning process using eye tracking. They use the Eye Movement Modelling Example (EMME) to improve the user's capability to follow and learn from multimedia instructions. Previous research in the field of virtual training systems lacks the ability to adapt the training system to the current state or the abilities of the users [3]. This is especially the case for the applicability of eye tracking in virtual training systems. This paper presents an analytical approach to investigate the applicability of eye tracking in virtual training systems by defining patterns for eye movements.
3 Eye Tracking Measurements in Virtual Training Systems The patterns analyzed by this contribution are based on the virtual training system shown in Fig. 1 (left). It trains the steps of an industrial procedure. The instructions of the procedure are displayed on the right side of the screen. The central part displays a three-dimensional model of the machine. The objective of this contribution is to detect whether the user can understand the instructions of the training system by analyzing eye movements. The patterns focus on the textual instructions provided by the training system to simplify the analysis. Therefore, only eye movements in the rectangular area containing the textual instructions are considered for the eye tracking analysis (Fig. 1 (right)).
Fig. 1. Left: Virtual training system and heat maps of the eye gaze. The selected area for the eye tracking analysis is marked as a white rectangle. Right: Sample gaze data in (X, Y) coordinates in the selected area.
This study uses head-mounted eye tracking glasses to avoid inhibiting the mobility of the user. The recorded data is represented in (X, Y) coordinates (see Fig. 1 (right)). The center of the coordinate system of the eye tracking device is at the top-right point of the screen. Therefore, reading the instructions from top to bottom means an increasing Y coordinate, and reading from left to right means a decreasing X coordinate. The gaze position data was collected from different users while they were reading the instruction list. In order to indicate problems in understanding the instructions, several hypotheses for patterns of eye movements were created. The first hypothesis is called Monotone Trend. This hypothesis assumes that the user can follow the instructions and read them without repetitions and hold-up times. Therefore, the Y coordinate should monotonically increase over time [4], which shows that the user can read the instructions in a reasonable time. In this case, the slope of the curve of the Y coordinate w.r.t. time is positive, which confirms the monotonically increasing trend in the Y coordinate. Furthermore, within each line of the text, the gaze data in the X coordinate should decrease during the reading of the instruction in that line. Therefore, the pattern in the eye movement follows a monotone trend (see Fig. 2 (left)). Observing this pattern indicates that the user can follow the instructions without hold-up times or distractions.
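The Monotone Trend hypothesis above amounts to a slope test on the sampled Y coordinate. A minimal sketch follows; the function name and the slope threshold are illustrative assumptions, not the authors' implementation:

```python
import statistics

def is_monotone_trend(y, min_slope=0.1):
    """Classify a gaze trace as 'monotone' if the least-squares slope of
    the Y coordinate over time is clearly positive (reading top to bottom).
    `y` is a list of Y pixel positions sampled at a fixed rate; the
    threshold `min_slope` is an assumed tuning parameter.
    """
    n = len(y)
    t = range(n)
    t_mean, y_mean = (n - 1) / 2, statistics.fmean(y)
    # Ordinary least-squares slope of y against sample index.
    num = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den > min_slope

# A steadily descending gaze (Y grows as the user reads down the page).
print(is_monotone_trend([10, 12, 15, 18, 22, 25, 30]))  # True
# A gaze stuck on one line, with small oscillations only.
print(is_monotone_trend([15, 16, 15, 14, 15, 16, 15]))  # False
```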
Fig. 2. Left: Monotone trend in Y coordinate indicates that the user can read the commands from top to bottom. The gaze position in X coordinate monotonically decreases per command and increases again to the start point of the next command. Right: Non-monotone trend in Y coordinate indicates that the user has difficulty in reading. The gaze position in X coordinate shows that the user reads the same line several times.
The second hypothesis is called Non-monotone Trend. This hypothesis assumes that the user cannot follow the instructions. This shows up as hold-up times during reading; therefore, the gaze positions will not show a monotone trend as in the previous hypothesis. In this case, the slope of the curve of the Y coordinate w.r.t. time will be approximately zero. This zero slope indicates no changes in the Y coordinate due to hold-up times. Furthermore, within each instruction text, the gaze data in the X coordinate will indicate several readings of the same instruction. The detected pattern of the non-monotone trend is shown in Fig. 2 (right). Observing this pattern indicates that the user is not able to follow the commands properly, and hold-up times show the user's difficulty in understanding the commands. In order to detect the hold-up time, a more detailed analysis of the Y coordinate is required. Since there are no significant changes in the Y coordinate during the hold-up times, the slope of the curve is approximately zero. Therefore, for further analysis, the derivative of the Y coordinate w.r.t. time can be considered to detect the hold-up time, i.e., ∂Y/∂t = 0. A zero derivative of the Y coordinate means that the user is gazing at a specific instruction; there is no change in the Y coordinate, which can indicate the hold-up pattern. However, since the gaze positions cannot be measured at exact X and Y coordinates and there are small oscillations in both coordinates, ∂Y/∂t cannot be expected to be exactly zero. Therefore, an interval around zero, [−a, a], can be considered, in which a is a small number; when the derivative of the Y coordinate is in this interval, it can be assumed that there are no changes in the Y coordinate. To distinguish the monotone pattern from the non-monotone pattern with hold-up in the gaze positions, a time threshold T can be defined. If the derivative of the Y coordinate w.r.t. time is in the interval [−a, a] for a continuous duration longer than T, the hold-up pattern is detected. If the derivative of the Y coordinate w.r.t. time is in the interval [−a, a] only for a short duration, less than T, this indicates that there is no hold-up time. These two patterns and the derivative of the Y coordinate are shown in Fig. 3. The 3D scatter plot represents the variation of the (X, Y) coordinates of the gaze positions. In Fig. 3 (left), the gaze pattern of the user follows a monotone trend, and therefore there is no hold-up time. As shown in the derivative plot, the derivative of the Y coordinate is in the zero interval only for short periods. This indicates that the user can read the instructions without serious hold-up time and therefore has no difficulty. In Fig. 3 (right), the gaze pattern of the user follows a non-monotone trend. The 3D plot shows that the user spends some time rereading the instructions. The hold-up time is marked by planes parallel to the plane spanned by the X coordinate and the time coordinate, while the Y coordinate is fixed at different positions. Furthermore, by considering the derivative of the Y coordinate w.r.t. time, it can be seen that the derivative of the Y coordinate is in the zero interval for a longer time (more than the threshold T), which is further evidence of hold-up time.
Fig. 3. Different patterns in the gaze positions in 3D plots of X, Y, and time coordinates. Left: Monotone pattern. The derivative of Y coordinate w.r.t time is in zero interval for short periods. Right: Non-monotone pattern. The derivative of Y coordinate w.r.t time is in zero interval for longer periods.
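The derivative-and-threshold test described above can be sketched as follows; the parameter values for `a`, `T`, and the sampling step `dt` are assumed tuning values, which the paper leaves as free parameters:

```python
def detect_holdup(y, dt=1.0, a=2.0, T=5):
    """Return True if the gaze trace contains a hold-up: the discrete
    derivative dY/dt stays inside [-a, a] for more than T consecutive
    samples. `a` (pixels/sample) and `T` (samples) are assumed values.
    """
    run = 0
    for y0, y1 in zip(y, y[1:]):
        dy_dt = (y1 - y0) / dt
        if -a <= dy_dt <= a:
            run += 1          # derivative near zero: possible hold-up
            if run > T:
                return True   # near-zero for longer than threshold T
        else:
            run = 0           # gaze moved on; reset the counter
    return False

smooth = [i * 5 for i in range(20)]                           # steady reading
stuck = [0, 5, 10, 10, 11, 10, 11, 10, 10, 11, 10, 30, 40]    # long pause
print(detect_holdup(smooth), detect_holdup(stuck))  # False True
```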
Finally, the detected patterns can be used to improve the interaction with virtual training systems. If a hold-up time is detected in the user's gaze positions, an interactive message can be presented to inform the user and suggest actions, such as an interruption of the training process. The difficulty of the virtual training system can be reduced when serious problems with the instructions are detected. Thus, the virtual training system can be adapted to user needs by tracking the eye movements of the user.
4 Conclusion and Outlook In this paper, an approach based on eye tracking is presented to improve the efficiency of virtual training systems. The analysis shows that the proposed method can detect different patterns, including a monotone trend, a non-monotone trend (with hold-up time), and distraction. The detected patterns can further be used to interact with users and give them feedback regarding their performance. Future work in this field will focus on the usability of eye tracking measurements in the area of virtual training systems and on following the behavior of the user not only on the instruction part but also on the area of the plant. Acknowledgments. This work has been supported by the INCLUSIVE collaborative project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No. 72337.
References 1. Villani, V., et al.: Towards modern inclusive factories: a methodology for the development of smart adaptive human-machine interfaces. In: 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, pp. 1–7 (2017) 2. Gorecky, D., Mura, K., Arlt, F.: A vision on training and knowledge sharing applications in future factories. IFAC Proc. Vol. 46(15), 90–97 (2013) 3. Loch, F., Böck, S., Vogel-Heuser, B.: Teaching styles of virtual training systems for industrial applications – a review of the literature. Interact. Des. Archit. J. (IxD&A) 38, 46–63 (2019) 4. Rayner, K., Chace, K.H., Slattery, T.J., Ashby, J.: Eye movements as reflections of comprehension processes in reading. Sci. Stud. Read. 10(3), 241–255 (2006) 5. Just, M.A., Carpenter, P.A.: A theory of reading: from eye fixations to comprehension. Psychol. Rev. 87(4), 329–354 (1980) 6. Gutiérrez, T., et al.: IMA-VR: a multimodal virtual training system for skills transfer in Industrial Maintenance and Assembly tasks. In: IEEE RO-MAN 2010: 19th IEEE International Symposium on Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010, pp. 428–433 (2010) 7. Loch, F., et al.: An adaptive virtual training system based on universal design. IFAC-PapersOnLine 51(34), 335–340 (2019) 8. Abe, N., Zheng, J.Y., Tanaka, K., Taki, H.: A training system using virtual machines for teaching assembling/disassembling operation to novices. In: 1996 IEEE International Conference on Systems, Man and Cybernetics, Beijing, China, October 1996, pp. 2096–2101 (1996) 9. Brough, J.E., et al.: Towards the development of a virtual environment-based training system for mechanical assembly operations. Virtual Reality 11(4), 189–206 (2007) 10. Groen, M., Noyes, J.: Using eye tracking to evaluate usability of user interfaces: is it warranted? IFAC Proc. Vol. 43(13), 489–493 (2010)
11. Scheiter, K., Schubert, C., Schüler, A.: Self-regulated learning from illustrated text: eye movement modelling to support use and regulation of cognitive processes during learning from multimedia. Br. J. Educ. Psychol. 88(1), 80–94 (2018) 12. Bluemel, E., Hintze, A., Schulz, T., Schumann, M., Stuering, S.: Virtual environments for the training of maintenance and service tasks. In: Proceedings of the 2003 Winter Simulation Conference, New Orleans, LA, USA, pp. 2001–2007 (2003)
Democratization of AI to Small Scale Farmers, Albeit Food Harvesting Citizen Data Scientists, that Are at the Bottom of the Economic Pyramid Chandrasekar Vuppalapati1, Anitha Ilapakurti1, Sharat Kedari1, Rajasekar Vuppalapati1, Jayashankar Vuppalapati2, and Santosh Kedari2
1 Hanumayamma Innovations and Technologies, Inc., Fremont, CA, USA {cvuppalapati,ailapakurti,sharath,raja}@hanuinnotech.com
2 Hanumayamma Innovations and Technologies Private Limited, Hyderabad, India {jaya.vuppalapati,skedari}@hanuinnotech.com
Abstract. Climate change is impacting milk production worldwide. For instance, increased heat stress in cows is causing average-sized dairy farms to lose thousands of gallons of milk each year; drastic climate change, especially in developing countries, is pushing small farmers – farmers with fewer than 10 to 25 cattle – below the poverty line and is triggering suicides due to economic stress and social stigma. It is profoundly clear that current dairy agriculture practices are falling short in countering the impacts of climate change. What we need are innovative and intelligent dairy farming techniques that combine the best of traditional practices with data-infused insights to counter the negative effects of climate change. To achieve innovative and intelligent dairy farming techniques, we need to disseminate state-of-the-art data science to the masses – in other words, to provide state-of-the-art data science algorithms to every food-harvesting citizen data scientist, the farmer. The democratization of artificial intelligence, importantly, not only empowers farmers to understand the patterns and signatures of climate change but also provides the ability to forecast impending adverse climate events and recommends data-driven insights to counter the negative effects of climate change. With the availability of new data tools, farmers can not only improve their standard of life but also, importantly, conquer the perennial issue of climate-change-related suicides. It is our staunch belief that the gold standard for the success of the democratization of artificial intelligence is no loss of a farmer's life due to the negative effects of climate change. In this paper, we propose an innovative machine learning sensor-edge approach that considers the impact of climate change and develops artificial intelligence (AI) models that are validated globally but enable localized solutions to thwart the impacts of climate change. The paper presents the design of a prototype dairy IoT sensor solution as well as its application and certain experimental results.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 348–358, 2020. https://doi.org/10.1007/978-3-030-39512-4_55
Keywords: Internet of Things (IoT) · Machine learning · Climate change · Climate change related suicides · Decision tree · Regression analysis · Embedded device · Edge analytics · Hanumayamma dairy IoT sensor · Climate models · Farmer suicides
1 Introduction
1.1 Climate Change and Farmer Suicides
Small farmers make a big contribution1 to agriculture and dairy production in developing countries. Unlike the dairy farms of the West, milk originates in highly decentralized2 villages with the help of small farmers who own three to five cattle and bring milk twice a day to milk collection centers to get paid. Simply put, the livelihood of roughly 2 billion people (26.7% of the world population)3,4 of small farmers in the developing world depends on agriculture, and climate change is adversely impacting their survival; drastic climate change, especially in the developing and developed countries, is pushing small farmers below the poverty line and triggering suicides (see Fig. 1) due to economic stress and social stigma5. Climate change is real, and "climate is now a data problem"6 [1]. A study [3] from the University of California, Berkeley, has clearly corroborated the link between rising temperatures, the resultant stress on India's agricultural sector, and the increase in suicides over the past 30 years (see Fig. 1); the study has categorically quantified that the suicides of nearly 60,000 Indian farmers [3, 4] are related to climate change7,8.
1. Small Farmers make Big Contributions - https://ucanr.edu/blogs/blogcore/postdetail.cfm?postnum=23184.
2. Strategies for the Bottom of the Pyramid: Creating Sustainable Development - http://pdf.wri.org/2001summit_hartarticle.pdf.
3. Industrial Agriculture and Small Scale Farming - https://www.globalagriculture.org/report-topics/industrial-agriculture-and-small-scale-farming.html.
4. Food and Agricultural Organization of the United Nations – http://www.fao.org/india/fao-in-india/india-at-a-glance/en/.
5. India’s shocking farmer suicide epidemic - https://www.aljazeera.com/indepth/features/2015/05/india-shocking-farmer-suicide-epidemic-150513121717412.html.
6. How Machine Learning Could Help to Improve Climate Forecasts - https://www.scientificamerican.com/article/how-machine-learning-could-help-to-improve-climate-forecasts/.
7. Suicides of nearly 60,000 Indian Farmers linked to climate change, study claims - https://www.theguardian.com/environment/2017/jul/31/suicides-of-nearly-60000-indian-farmers-linked-to-climate-change-study-claims.
8. 59,000 farmer suicides in India over 30 years may be linked to climate change, study says - https://www.washingtonpost.com/news/worldviews/wp/2017/08/01/59000-farmer-suicides-in-india-over-three-decades-may-be-linked-to-climate-change-study-says/?noredirect=on&utm_term=.f62a830a292a.
Fig. 1. Farmers protest [2]
The problem is not confined to developing countries; ironically, in the United States of America, a recent report by the Guardian newspaper [5] establishes that "the suicide rate for farmers is more than double that of veterans"9. Similarly, a Forbes magazine article notes that "dairy farmers across the United States have been committing suicide at an alarming rate because their respective industry no longer supports them in the same way that it has for generations"10. National Public Radio (NPR) [6], importantly, ran a program11 underscoring [7] that as "milk prices decline, worries about dairy farmer suicides rise" (see Fig. 2). Finally, a recent New York Times12 article summarizes the plight of farmers in Australia [8]: "being a breadbasket to the world and a globalization success story, so why are Australian farmers killing themselves?" [8]. What we need are innovative and intelligent dairy farming techniques that employ the best of traditional practices with data-infused insights and the continuous inclusion of adaptive data models that factor in climate change to counter its negative effects. To achieve innovative and intelligent dairy farming techniques, we need to make data science citizen data science13, i.e., proliferate data science to the masses – thus democratizing the field of statistics and analytics to extract predictive and prescriptive insights from data and enabling farmers to take actionable insights: a bottom-up innovation14 for the small farmers at the bottom of the pyramid. It is our ardent belief that data is our best defense and savior against the negative effects of climate change. The sooner we embark on the democratization of AI to small farmers, the better
9. Why are America’s farmers killing themselves? - https://www.theguardian.com/us-news/2017/dec/06/why-are-americas-farmers-killing-themselves-in-record-numbers.
10. Suicidal Dairy Farmers should consider Marijuana Industry - https://www.forbes.com/sites/mikeadams/2018/03/16/suicidal-dairy-farmers-should-consider-marijuana-industry/#689e08651d1e.
11. As Milk Prices Decline, Worries about Dairy Farmers Suicides rise - https://www.npr.org/2018/02/27/586586267/as-milk-prices-decline-worries-about-dairy-farmer-suicides-rise.
12. A Booming Economy with a price tag - https://www.nytimes.com/2018/05/20/world/australia/rural-suicides-farmers-globalization.html.
13. Gartner Identifies the Top 10 Strategic Technology Trends for 2019 - https://www.gartner.com/en/newsroom/press-releases/2018-10-15-gartner-identifies-the-top-10-strategic-technology-trends-for-2019.
14. Strategies for the Bottom of the Pyramid - http://pdf.wri.org/2001summit_hartarticle.pdf.
we leave our progeny a wonderful life on the earth, i.e., better than what we have inherited. As part of this paper, we have developed mathematical, electrical, software, and data science solutions. We have developed dairy sensors to empower small farmers to counter climate change. The structure of this paper is as follows: Sect. 2 discusses the basic concepts and methods of machine learning algorithms, Sect. 3 presents our climate model, and Sect. 4 shows a case study.
2 Understanding Machine Learning Algorithms for Climate Change and Dairy Sensor
2.1 NOAA
NOAA’s National Centers for Environmental Information (NCEI15) hosts and provides public access to one of the most significant archives of environmental data on Earth. The climate and weather forecast models can be used to model agriculture data. For instance, a Wind Chill Warning16 can be used to take proactive steps to mitigate side effects. Additionally, the data points could serve as a baseline for the functioning of the dairy sensors.
2.2 Hanumayamma Dairy Sensors
Generally, CLASS 10 classification17 is reserved for medical apparatus such as: surgical, medical, dental and veterinary apparatus and instruments; artificial limbs, eyes and teeth; orthopedic articles; suture materials [10–12], [13]. Specifically, for the dairy sensor (see Fig. 2), also known as the cow necklace, we have been assigned “Class 10: Wearable veterinary sensor for use in capturing a cow’s vital signs, providing data to the farmer to monitor the cow’s milk productivity, and improving its overall health”. We have based the above sensor design on the reference architecture provided by Hanumayamma Innovations and Technologies, Inc., IoT Dairy Sensor18.
2.3 Climate and Dairy Sensor Deployment
Dairy IoT Sensor Values Ingest: the embedded system [12–15] was deployed in fields to collect cattle activity, temperature, and humidity data. The sensors are deployed globally and periodically upload data on a day-to-day basis.
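The day-to-day ingest path described above can be sketched as follows. The record fields and the batch-flush policy are illustrative assumptions on our part, not the deployed pipeline:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DairyReading:
    """One periodic reading from a field-deployed dairy sensor (fields are illustrative)."""
    sensor_id: str
    timestamp: float      # Unix epoch seconds
    ax: float             # accelerometer x (cattle activity)
    ay: float             # accelerometer y
    az: float             # accelerometer z
    temperature_c: float
    humidity_pct: float

class IngestBuffer:
    """Buffers readings and flushes them in batches, mimicking the
    day-to-day upload cadence described in the text."""
    def __init__(self) -> None:
        self.pending: List[DairyReading] = []
        self.uploaded: List[List[DairyReading]] = []

    def add(self, reading: DairyReading) -> None:
        self.pending.append(reading)

    def flush(self) -> int:
        """Simulate one periodic upload; returns the number of readings sent."""
        if not self.pending:
            return 0
        batch, self.pending = self.pending, []
        self.uploaded.append(batch)
        return len(batch)
```

A globally deployed fleet would run one such buffer per sensor, flushing once per day.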
15. NOAA – https://www.ncei.noaa.gov/.
16. National Weather Service – https://www.weather.gov/safety/cold-wind-chill-warning.
17. Trademark administration, General Provisions and Definitions – https://www.sec.state.ma.us/cor/corpdf/trademark_regs_950_cmr_62.pdf.
18. Hanumayamma Innovations and Technologies, Inc. – http://hanuinnotech.com/dairyanalytics.html.
352
C. Vuppalapati et al.
Fig. 2. Dairy sensor
2.4 Data
We observed humidity values reported above 100% during the five months. For instance, the figure below (Fig. 3) shows the hourly temperature plot grouped by time of day for the period October 30, 2016 to January 18, 2017, in Punjab. On the x-axis, 0:00 represents midnight (12:00 h) and 9.36 represents data at 9:36 am. For some dates, humidity values above 140% were reported. The data were collected in Patiala, Punjab, and in Vizag, a coastal city in Andhra Pradesh.
Fig. 3. Field data collected in dairy farms in India
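Before modeling, physically impossible readings such as the >100% humidity values can be flagged, and the hourly grouping behind Fig. 3 computed. A minimal sketch; the record layout and threshold are our assumptions:

```python
from collections import defaultdict

def flag_humidity_anomalies(records, limit=100.0):
    """Return records whose reported humidity exceeds the physical limit
    (field data showed values above 140%), so they can be baselined
    against NOAA data or excluded before modeling."""
    return [r for r in records if r["humidity"] > limit]

def hourly_temperature_profile(records):
    """Average temperature grouped by hour of day (0 = midnight),
    mirroring the time-of-day grouping used for Fig. 3."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["hour"]].append(r["temp"])
    return {h: sum(v) / len(v) for h, v in sorted(buckets.items())}

# Tiny illustrative sample, not the field data itself.
records = [
    {"hour": 0, "temp": 12.0, "humidity": 95.0},
    {"hour": 0, "temp": 14.0, "humidity": 141.0},  # impossible reading
    {"hour": 9, "temp": 21.0, "humidity": 80.0},
]
```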
2.5 Climate Events

To derive a climate-change-to-Dairy-Sensor parametric model, let’s start with the following climate events (see Table 1); other extreme climate events can be logically deduced through the established process:

Table 1. Climate events
  Rise of tide^a         Ũ
  Rise of temperature    Ů
  Rise of humidity       Ŭ
  Flooding^b             Ű
  Rise of noise          ȸ
(continued)
Table 1. (continued)
  Depression             Ų
  Wind Chill Warning     Ŵ
  Snow Storm^c           ϔ
a Water Quality for Live Stock – https://www.agric.wa.gov.au/livestock-biosecurity/water-quality-livestock
b Flooding – Contaminated farm dams – https://www.agric.wa.gov.au/water-management/contaminated-farm-dams?page=0%2C1
c Mid-West Snow Storm – https://phys.org/news/2019-01-record-breaking-cold-midwest-snowstorm.html
Let’s consider the following Climate Events to Dairy Sensor parametric model table (see Table 2). Please note: ↓ (arrow down) means decreases, ↑ (arrow up) means increases, → (arrow flat) means no change or don’t care.

Table 2. Dairy sensor parametric model table
  Climate event   A (x, y, z)   G (x, y, z)   T   H   Physiological   Emotional
  Ũ               ↑             ↑             ↑   ↑   ↑               ↑
  Ů               ↓^a           ↓             ↑   ↓   ↑               →
  Ŭ               ↑             ↑             ↓   ↑   →               →
  Ű               →             →             →   →   ↑               ↑
  ȸ               ↑             ↑             ↑   ↑   ↑               ↑
a Heat Stress in Dairy Cows – http://article.sapub.org/pdf/10.5923.j.zoology.20120204.03.pdf
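The directional responses in Table 2 can be encoded for programmatic lookup. The spelled-out event keys, channel names, and the +1/0/−1 encoding are our own illustrative choices based on our reading of the table, not the paper’s notation:

```python
# Directional responses from Table 2: +1 = increases, -1 = decreases, 0 = no change.
PARAMETRIC_MODEL = {
    "rise_of_tide":        {"A": +1, "G": +1, "T": +1, "H": +1, "physio": +1, "emotional": +1},
    "rise_of_temperature": {"A": -1, "G": -1, "T": +1, "H": -1, "physio": +1, "emotional": 0},
    "rise_of_humidity":    {"A": +1, "G": +1, "T": -1, "H": +1, "physio": 0,  "emotional": 0},
    "flooding":            {"A": 0,  "G": 0,  "T": 0,  "H": 0,  "physio": +1, "emotional": +1},
    "rise_of_noise":       {"A": +1, "G": +1, "T": +1, "H": +1, "physio": +1, "emotional": +1},
}

def expected_direction(event: str, channel: str) -> int:
    """Look up the expected directional change of a sensor channel
    (accelerometer A, gyroscope G, temperature T, humidity H, etc.)
    for a given climate event."""
    return PARAMETRIC_MODEL[event][channel]
```

Such a lookup lets a pipeline sanity-check whether observed sensor deltas during an event match the expected directions.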
2.6 Modeling Climate Change – Mathematics and Data Events
For instance, consider the activity function of the dairy sensor, based on the cattle’s activity. Let Fa(x, y, z) be the accelerometer function, with values varying over time, where dx/dt, dy/dt and dz/dt are the changes of the x, y, and z values with respect to time:

F_a(t) = F(dx/dt, dy/dt, dz/dt)    (1)

2.7 Normal Regular Day – No Weather Events
Under normal climatic conditions, the sensor-value function (Fa) is influenced by the climatic factor U. Put simply, climate factors influence the sensor readings. Equation 1 can be rewritten as:

F_a(t) = F(dx/dt, dy/dt, dz/dt) = F_aU(U·dx/dt, U·dy/dt, U·dz/dt)    (2)

That is, under normal climate conditions, Eqs. 1 and 2 are the same.

2.8 Weather Event Day – Factor-in Climate Events
For example, consider modeling an extreme flood event Ű (see Fig. 4). The corresponding impact of Ű on the dairy sensor accelerometer values is:

F_aŰ(Ű·dx/dt, Ű·dy/dt, Ű·dz/dt)    (3)

Fig. 4. Field data collected in dairy farms in India
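Equations 1–3 can be sketched numerically. Since the paper leaves the combining function F unspecified, the Euclidean magnitude used below to fold the three scaled axis derivatives into one value is an illustrative assumption:

```python
def derivative(series, dt=1.0):
    """Finite-difference d/dt of one sampled axis, cf. dx/dt in Eq. 1."""
    return [(b - a) / dt for a, b in zip(series, series[1:])]

def sensor_model(dx, dy, dz, factor=1.0):
    """F_a under a climate factor: each axis derivative is scaled by the
    factor (U for a normal day as in Eq. 2, a flood factor Ű as in Eq. 3).
    The Euclidean magnitude is an illustrative choice, not the paper's F."""
    return [((factor * x) ** 2 + (factor * y) ** 2 + (factor * z) ** 2) ** 0.5
            for x, y, z in zip(dx, dy, dz)]
```

Setting `factor=1.0` reproduces the normal-day case where Eqs. 1 and 2 coincide; a flood factor rescales every axis uniformly.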
2.9 Partial Factors
Partial derivatives are defined as derivatives of a function of multiple variables in which all but the variable of interest are held fixed during differentiation. Partial derivatives are very useful for seeing the effects of climate on cohort clusters that are geo-fenced or spread across different geolocations. The partial-derivative factors calculate the climate-change effect, or delta, on the factor.
The Climate Factor = F_aŰ / F_aU => F_aŰ(Ű·dx/dt, Ű·dy/dt, Ű·dz/dt) / F_aU(U·dx/dt, U·dy/dt, U·dz/dt)    (4)
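A numerical reading of the climate-factor ratio in Eq. 4, under the simplifying assumption (made explicit in Eq. 5) that the baseline sensor function cancels, leaving a pointwise ratio per axis. This is our sketch, not the paper’s implementation:

```python
def climate_partial(event_deriv, normal_deriv, eps=1e-9):
    """Approximate the per-axis climate partial of Eqs. 4-5 as the
    pointwise ratio of an event-day axis derivative to the normal-day
    one, guarding against division by (near-)zero baselines."""
    return [e / (n if abs(n) > eps else eps)
            for e, n in zip(event_deriv, normal_deriv)]
```

A ratio of 2.0 on the x axis would mean the flood event doubled the x-activity reading relative to a normal day.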
Since F_aU and F_aŰ each have three component values (x, y, z), the partial values for each (x, y, z) vector can be deduced. The climate partial for the x variance is:

(∂Űx / ∂F_aŰ) / (∂Ux / ∂F_aU), which simplifies to ∂Űx / ∂Ux, given ∂F_aŰ ≈ ∂F_aU    (5)

Therefore, the partial accelerometer x variance due to the climate flood event is ∂Űx/∂Ux. A similar approach will calculate the partial accelerometer y variance and the partial accelerometer z variance due to the climate flood event.

2.10 Parametric Model
The purpose of the parametric model is to evaluate any climate event and calculate the partial derivatives that could have been impacted by the event, and additionally to store the partial derivatives for future climate forecasts (Eqs. 1, 2, 3, 4, and 5).
Fig. 5. Parametric model
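The storage step of the parametric model — keeping computed partials for later forecasts — might be sketched as a small in-memory store. The key scheme (event name, axis) is an assumption of ours; a production system would back this with a database:

```python
class PartialStore:
    """Stores computed climate partials keyed by (event, axis) so that a
    later forecast can retrieve every partial that influences the model
    equation. A dict stands in for the database mentioned in the text."""
    def __init__(self) -> None:
        self._store = {}

    def save(self, event: str, axis: str, value: float) -> None:
        """Persist one partial, e.g. the x-axis partial for a flood event."""
        self._store[(event, axis)] = value

    def partials_for(self, event: str) -> dict:
        """Retrieve all stored partials for a forecasted event."""
        return {axis: v for (e, axis), v in self._store.items() if e == event}
```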
3 Factor-in Climate Conditions

To factor in the effects of climate events on dairy cattle through Dairy Sensors, first consider the mathematical models of the Dairy Sensor on cattle activities and, next, bring in the effects of climate events. Third, apply partial derivatives that zoom in on the attribute in focus, leaving all others constant. Finally, once the influencing coefficients are calculated, apply network effects and cohort-cluster techniques to the forecast models. Please see steps 1–5 in Fig. 5.

3.1 Climate Reference Pattern
To ensemble the impact of a forecasted climate event, the saved derivative values are combined with the model equation to simulate the negative effects of climate change. To show this in action, let’s consider Eq. 1:

F_aU(U·dx/dt, U·dy/dt, U·dz/dt)    (6)
Let’s say the National Weather Service has predicted a climate event – a Wind Chill Warning (Ŵ). To calculate the impact of the Wind Chill Warning on Eq. 6, retrieve from the database all partial derivatives that influence the model equation. Of course, the similarities of the cohort cluster need to be validated before they are applied:

F_aŴ(Ŵ·dx/dt, Ŵ·dy/dt, Ŵ·dz/dt) = F_aU + Σ ∂Ŵ
F_aŴ(Ŵ·dx/dt, Ŵ·dy/dt, Ŵ·dz/dt) = F_aU(U·dx/dt + Ŵ·dx/dt, U·dy/dt + Ŵ·dy/dt, U·dz/dt + Ŵ·dz/dt)    (6a)
The forecasted climate-change-event Dairy Sensor data is equal to the raw sensor model plus any partial derivatives that influence the model. Through this computation, the system provides the farmer with recommendations: data insights derived from a similar cohort of dairies, superimposed on the readings of the farmer’s own dairy sensor.
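The superimposition just described — raw sensor model plus every influencing partial — reduces to a one-line sketch. The additive combination follows the text; the specific delta values in the example are hypothetical:

```python
def forecast_reading(baseline: float, partial_deltas) -> float:
    """Eq. 6a in miniature: the forecasted sensor value is the raw model
    output plus every stored partial that the predicted event (e.g. a
    Wind Chill Warning) contributes, drawn from similar cohort clusters."""
    return baseline + sum(partial_deltas)
```

For instance, a baseline activity reading of 10.0 adjusted by cohort-derived deltas of +0.5 and −0.2 yields a forecast of 10.3.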
4 A Case Study

As part of adaptive edge analytics, the temperature and humidity embedded system that we developed has been used in several agricultural and dairy settings (see Fig. 6). We captured real-time customer-related data for this adaptive edge analytics paper. Finally, the product is used and tested by a precision dairy agriculture company19.
Fig. 6. Dairy sensor
References
1. Jones, N.: How machine learning could help to improve climate forecasts. Nat. Mag. (2017). https://www.scientificamerican.com/article/how-machine-learning-could-help-to-improve-climate-forecasts/
2. Safi, M.: Suicides of nearly 60,000 Indian farmers linked to climate change, study claims, 31 July 2017. https://www.theguardian.com/environment/2017/jul/31/suicides-of-nearly-60000-indian-farmers-linked-to-climate-change-study-claims
3. Umar, B.: India’s shocking farmer suicide epidemic, 18 May 2015. https://www.aljazeera.com/indepth/features/2015/05/india-shocking-farmer-suicide-epidemic-150513121717412.html
4. Doshi, V.: 59,000 farmer suicides in India over 30 years may be linked to climate change, study says, 1 August 2017. https://www.washingtonpost.com/news/worldviews/wp/2017/08/01/59000-farmer-suicides-in-india-over-three-decades-may-be-linked-to-climate-change-study-says/?noredirect=on&utm_term=.38c8866059de
5. Weingarten, D.: The suicide rate for farmers is more than double that of veterans. A former farmer gives an insider’s perspective on farm life – and how to help, December 2017. https://www.theguardian.com/us-news/2017/dec/06/why-are-americas-farmers-killing-themselves-in-record-numbers
6. Adams, M.: Suicidal Dairy Farmers Should Consider Marijuana Industry, 16 March 2018. https://www.forbes.com/sites/mikeadams/2018/03/16/suicidal-dairy-farmers-should-consider-marijuana-industry/#30a8ae8c1d1e
7. Smith, T.: As Milk Prices Decline, Worries About Dairy Farmer Suicides Rise, 27 February 2018. https://www.npr.org/2018/02/27/586586267/as-milk-prices-decline-worries-about-dairy-farmer-suicides-rise
19. Hanumayamma Innovations and Technologies, Inc. – Dairy IoT Sensor.
8. Williams, J.: A Booming Economy With a Tragic Price, 20 May 2018. https://www.nytimes.com/2018/05/20/world/australia/rural-suicides-farmers-globalization.html
9. Guest blogger: Machine Learning May Be a Game-Changer for Climate Prediction, 21 June 2018. https://blogs.ei.columbia.edu/2018/06/21/machine-learning-may-game-changer-climate-prediction/
10. Karwowski, W., Ahram, T. (eds.): Proceedings of the 1st International Conference on Intelligent Human Systems Integration (IHSI 2018): Integrating People and Intelligent Systems, Dubai, United Arab Emirates, 7–9 January 2018. https://www.springer.com/us/book/9783319738871
11. Kedari, S., Vuppalapati, J.S., Ilapakurti, A., Vuppalapati, C., Kedari, S., Vuppalapati, R.: Precision dairy edge, albeit analytics driven: a framework to incorporate prognostics and auto correction capabilities for dairy IoT sensors, chap. 35. Springer, Cham (2019)
12. Ilapakurti, A., Vuppalapati, J.S., Kedari, S., Kedari, S., Vuppalapati, R., Vuppalapati, C.: Adaptive edge analytics for creating memorable customer experience and venue brand engagement, a scented case for Smart Cities. In: 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (2017)
Cybersecurity in Educational Networks

Oleksandr Burov1, Svitlana Lytvynova1, Evgeniy Lavrov2, Yuliya Krylova-Grek3, Olena Orliyk4, Sergiy Petrenko4, Svitlana Shevchenko5, and Oleksii M. Tkachenko6

1 Institute of Information Technologies and Learning Tools, 9 Berlinskoho Str., Kyiv 04060, Ukraine
[email protected], [email protected]
2 Sumy State University, Sumy, Ukraine
[email protected]
3 State University of Telecommunications, Kyiv, Ukraine
[email protected]
4 Scientific-Research Institute of Intellectual Property, Kyiv, Ukraine
[email protected], [email protected]
5 Borys Grinchenko Kyiv University, Kyiv, Ukraine
[email protected]
6 National University of Life and Environmental Sciences of Ukraine, Kyiv, Ukraine
[email protected]
Abstract. The paper discusses the possible impact of the digital space on humans, as well as human-related directions in cybersecurity analysis in education: levels of cybersecurity, the role of social engineering in the cybersecurity of education, and “cognitive vaccination”. “A human” is considered in a general sense, mainly as a learner. The analysis is based on the experience of the hybrid war in Ukraine, which has demonstrated the shift of the target of military operations from military personnel and critical infrastructure to humans in general. Young people are the vulnerable group that can be the main target of cognitive operations in the long-term perspective, and they are the weakest link of the system.

Keywords: Cyber-security · Cognitive performance · Education · Social engineering
1 Introduction

A constantly increasing number of cybersecurity-related publications demonstrates a growing comprehension of this complex challenge facing the globe and the necessity to consider a wider spectrum of issues. Unfortunately, technical and informational solutions alone cannot satisfy humans’ safety and security of life and activity. Since it is an ongoing process, specialists in this field lack current information and feel the need to change cybersecurity (CS) training programs, which should focus “on the social, economic, and behavioral aspects of cyberspace, which are largely missing from the general discourse on cybersecurity” [1, p. 2]. First of all, new training programs should take into account human features and a person’s functional state as well as
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 359–364, 2020. https://doi.org/10.1007/978-3-030-39512-4_56
360
O. Burov et al.
cognitive resilience, due to the increasing role of cognitive warfare [2]. Cognitive warfare deserves particular attention, as its primary goal is not a prompt military operation or a fight for territorial or economic resources, but a battle for people [3] aimed at affecting public opinion, radicalizing young people, and infiltrating and corrupting the enemy’s information systems. Since information in the global network exists outside of space and time, the Net itself becomes an active influencer of humans [4], especially in social networks [5]. One of the human dimensions of this extensive change involves the transition from producing predominantly material goods to intellectual ones, and alterations in competitive target resources. Intellectual capital (first of all, human capital, which includes abilities, talents, knowledge, ideas, etc.) is becoming the most in-demand resource and the target of diverse cyber-attacks [6]. At present, digital networks occupy an increasingly crucial place in our everyday routine. Therefore, interventions in these networks pose a real threat to both humans and the state. By saying “humans” we don’t mean just military (including cyber-)specialists, but everybody, since cyberspace is a worldwide electronic medium facilitating social interaction. Undoubtedly, transformations in the forms, methods, and means of education are related to and accompanied by changes in learners’ behavior through the transition from traditional classroom education to network activities, with unproductive consequences for the information received and its safety. However, at the same time, a human is still the weakest link in cybersecurity systems [7].

Purpose. To analyze potential hazards associated with learners’ participation in online activities in digital education.
2 Method Considering learning as a type of activity in human-system integration, today’s learner may be viewed as an operator-researcher who acts in the digital environment. Successful learning involves mutual adaptation between a human and activity tools [8] using individual cognitive abilities measurement [9, 10]. On the other hand, it is possible to use ergonomics’ methods and techniques to assess a learner’s safety in the education system.
3 Results and Discussion

The core directions of cybersecurity analysis in the education field should focus on the following issues: CS levels, the role of social engineering in providing CS in education, and so-called “cognitive vaccination”.

Cyber Security Levels. The paper deliberates on the problems of learners’ cybersecurity in the educational process. It emphasizes that these problems are not limited to the technical aspects of protecting information resources: protection must include legal, technical, informational, organizational, and psychological types [4, 11].
The legal maintenance covers (but is not limited to) [12]:

• National and international legislation in the field of cybersecurity.
• Appropriate international legal agreements, conventions, and standards.
• Intellectual property rights.
• Protection of computer programs and databases [13].
• Personal data protection.
• Legal support of victims of cyber-attacks and expert opinions on the results of computer-technical examinations.
• Legal support of a human right to know and get access to verified information (a person’s education and development cannot be achieved without realizing self-concept).
• Legal literacy for young people regarding actions in digital networks.
Cybersecurity technical aspects imply the security of diverse technical means and tools (computers, networks, databases, information resources, etc.). Information tools can be categorized according to the tasks solved by users [11, p. 321]: Protection/Remedies, Awareness, Content, Learning to use, Security, Lifespan, and Avoiding threats. Organizational tools for solving cybersecurity issues comprise Awareness, Learning the cybersecurity culture, CS professional staff and the general population, Creation of special CS means, Distribution of CS facilities, and Control of use. Psychological means can be grouped depending on the personal and interpersonal level: National, Public, Group, Individual, Cultural, Cognitive, Intellectual, Habits. Among the psychological tools aimed at achieving cybersecurity, the cognitive ones are the most vital. Recent cybersecurity research shows that information technology tools in this field are constantly being refined and hacker attacks are becoming more human-centered [14]. This is extremely important because of the urgency of personal safety and the results of a person’s activities. As shown in [4], the common accessibility of the information space induces a person to become a target of other participants’ activity while working in the information environment. Harmful activities force a person to read or respond to the “wrong” information or to make other mistakes that leave his/her system vulnerable to cyber-attacks, information leakage, etc. These days, not only huge corporations or governing bodies are the usual targets of cyber-attacks; ordinary people, especially children and young adults, suffer from them as well. Their cognitive sphere is the most vulnerable (weak) link in the person-technology network [7], in particular due to the expanding usage of group work (project-oriented activity).
In this regard, it is reasonable to draw on operators’ experience of preventing cyber threats in the education field [15], taking into account that in anthropocentric networks, which make up an ever-increasing share of common networks, the network itself acquires new properties, acting as an independent component (in addition to such factors as the network unit, interface, and links) beyond time and space [6].

Role of Social Engineering in Providing CS in Education. The spectrum of hazards from the open cyberspace is continuously expanding. If ten years ago the hazards for schoolchildren could be reduced to a relatively small number of groups (viral attacks,
362
O. Burov et al.
cybercrime, the hazards of Internet surfing), the diversity of hazards and threats has since increased over time, affecting all possible human activities online [11]. Threats coming from networks can be divided into the following types: active and passive, open and hidden, current and delayed [11, p. 309]. The greatest danger to students comes from the hidden hazards of the Internet, and especially from social engineering methods [16, 17]. The shift of cybercrime goals from technical (information) objects to the human link led to the emergence of social engineering (SE) as a set of methods and technologies for obtaining the necessary access to information based on the characteristics of human psychology. Social engineers, for instance, use fear, interest, or trust to manipulate others and to change their behavior or perception. Sadly, nowadays anybody can master the art of gaining access to computer systems or personal data [18]. Yet it is possible to resist SE impact by following nine recommendations:

• User credentials are the school’s property.
• Conduct introductory and regular training sessions for staff and students to increase information security skills.
• It is mandatory to have safety regulations and instructions to which the user must always have access.
• Users’ computers must always have up-to-date antivirus software and a firewall installed.
• Systems for detecting and preventing attacks should be used in any corporate network, and systems preventing leakage of confidential information should be employed as well.
• It is necessary to restrict users’ administrative privileges for operating systems and applications as much as possible.
• You need to be vigilant about any source requiring sensitive information.
• You should never open the contents of attachments or follow a link without examining all the details and relying on your own experience.
• It is also important to be critical of the messages received: how plausible can the information be?
• It is also recommended to report such dangers to other family members, first of all the elderly, who have no experience of using electronic means and are not aware of SE issues.

We believe that psycholinguistic tools could be useful to recognize SE interference and the ways it affects human cognition and safety, especially the cognitive weapon (mass media, politicians’ impact, textbooks, etc.) [19]. If a person knows, realizes, and is aware of these tools, he/she can obviously resist them, which is the most effective way of providing cybersecurity.

“Cognitive Vaccination”. In 2002, the UN General Assembly adopted resolution 57/239, “Elements to Create a Global Cybersecurity Culture” [20], identifying nine fundamental complementary elements of a global cybersecurity culture: awareness; responsibility; response; ethics; democracy; risk assessment; design and implementation of security measures; security management; and revaluation. The Resolution and its cybersecurity elements relate to the five levels of CS mentioned above. At the same time, it can be noted that psychological means (which relate directly
to each person separately) involve only behavioral aspects, i.e. responsibility and ethics; in other words, it is a manifestation of the social attitude to cybersecurity expressed by a person, who is considered a relatively passive element of the cybersecurity system. Moreover, since no means guarantee 100% human protection, it is advisable to determine the range of individual abilities to produce personal protection in addition to the above. An analysis of the curricula and training programs implemented in pedagogical educational institutions has demonstrated that traditional education does not pay enough attention to the development of students’ critical thinking skills related to the use of the Internet. We propose to introduce “cyber vaccination” as part of cybersecurity-related training. It can increase a human’s safety level through a wide array of means: accepting rules for safe and responsible use of the Internet; improving critical thinking skills; training the participants of network activity and informing them about the possible impact of the cyber environment; modeling and simulating cyber threats in relatively closed systems such as corporate and educational ones; teaching how to confront cyber threats in order to gain practical experience of behaving during and recovering after cyber vulnerabilities, including assessing a person’s current state and making the adjustments necessary to optimize his/her cognitive workability; and cyber-survival training aimed at recognizing a threat or possible dangerous action in the network and the rational psychological and behavioral compensation for this action.
4 Conclusion

The analyzed features of teaching and learning in the contemporary digital environment, together with the recommendations above, could be an influential tool for improving the security and safety of the educational process: by adapting students’ activity depending on their cognitive state in digital education, and by designing intelligent individual-oriented systems and services that ameliorate human–E-technology interaction.

Acknowledgments. This work is supported by the grant 0118U003160 “System of Computer Modeling of Cognitive Tasks for the Formation of Competencies of Students in Natural and Mathematical Subjects”.
References 1. Ahram, T., Karwowski, W.: Advances in Human Factors in Cybersecurity. Proceedings of the AHFE 2019 International Conference on Human Factors in Cybersecurity, Washington D.C., USA, 24–28 July 2019. Springer, Cham (2019) 2. Bienvenue, E., Rogers, Z., Troath, S.: Cognitive Warfare (2018). https://cove.army.gov.au/ article/cognitive-warfare 3. Pocheptsov, G.: The War in Cognitive Space (2017). https://nesterdennez.blogspot.com/ 2017/08/global-permanent-war_39.html 4. Burov, O.: Educational networking: human view to cyber defense. Inf. Technol. Learn. Tools 52, 144–156 (2016)
5. Lytvynova, S., Burov, O.: Methods, forms and safety of learning in corporate social networks. In: Proceedings of the 13th International Conference on ICT in Education, Research and Industrial Applications. Integration, Harmonization and Knowledge Transfer, Kyiv, Ukraine, 15–18 May, pp. 406–413 (2017). http://ceur-ws.org/Vol-1844/10000406.pdf
6. Bykov, V.Yu., Burov, O.Yu., Dementievska, N.P.: Cybersecurity in digital educational environment. Inf. Technol. Learn. Tools 70(2), 313–331 (2019)
7. Yan, Z., Robertson, T., Yan, R., Park, S.Y., Bordoff, S., Chen, Q., Sprissler, E.: Finding the weakest links in the weakest link: how well do undergraduate students make cybersecurity judgment? Comput. Hum. Behav. 84, 375–382 (2018). ISSN 0747-5632
8. Veltman, J.A., Jansen, C., Hockey, G.R.J., Gaillard, A.W.K., Burov, O.: Differentiation of mental effort measures: consequences for adaptive automation. NATO Sci. Ser. Sub Ser. I Life Behav. Sci. 355, 249–259 (2003)
9. Basye, D.: Personalized vs. Differentiated vs. Individualized Learning. ISTE, 24 January 2018. https://www.iste.org/explore/articleDetail?articleid=124
10. Veltman, H., Wilson, G., Burov, O.: Operator Functional State Assessment. Cognitive Load. NATO Science Series RTO-TR-HFM-104, Brussels, pp. 97–112 (2004)
11. Burov, O.Iu., Kamyshin, V.V., Polikhun, N.I., Asherov, A.T.: Technologies of network resources’ use for young people training for research activity, Monograph. In: Burov, O.Iu. (ed.) Informatsiini Systemy. TOV, Kyiv (2012). (in Ukrainian). 416 p.
12. Orlyuk, O.: On the development of a policy on intellectual property in national universities and the role of profile departments of intellectual property. Theory Intellect. Prop. 5, 61–69 (2017)
13. Petrenko, S.A.: Protection of the Computer Program as an Intellectual Property Object: Theory and Practice, Monograph, p. 172. Research Institute of IP at NA-PrNU, “Lazurit-Polygraph”, Kyiv (2011)
14. https://www.computerweekly.com/news/252448101/People-top-target-for-cyber-attackers-report-confirms
15. Lavrov, E., Pasko, N., Tolbatov, A., Tolbatov, V.: Cybersecurity of distributed information systems. The minimization of damage caused by errors of operators during group activity. In: Proceedings of 2nd International Conference on Advanced Information and Communication Technologies (AICT 2017), pp. 83–87 (2017). https://doi.org/10.1109/AIACT.2017.8020071
16. Savchuk, T.: Social Engineering: How Fraudsters Use Human Psychology on the Internet (2018). https://www.radiosvoboda.org/a/socialna-inzhenerija-shaxrajstvo/29460139.html. Accessed 30 Aug 2018. (in Ukrainian)
17. A Powerful Tool for Social Engineering. https://www.trustedsec.com/social-engineer-toolkit-set/#
18. SecurityLab. https://news.rambler.ru/other/39395044-nazvany-glavnye-problemy-bezopasnosti-web-resursov-v-2017-godu/
19. Krylova-Grek, Yu.: Psycholinguistic peculiarities for application of the symbol-words in the political communication. Adv. Educ. 7, 129–134 (2017). https://doi.org/10.20535/2410-8286.99321
20. UN General Assembly Resolution 57/239 Elements to Create a Global Cybersecurity Culture (2002). http://www.un.org/en/documents/declconv/conventions/elements.shtml
The Problem of Tracking the Center of Attention in Eye Tracking Systems

Marina Boronenko, Vladimir Zelensky, Oksana Isaeva, and Elizaveta Kiseleva

Ugra State University, 16 Chekhov Street, 628012 Khanty-Mansiysk, Russia
[email protected], [email protected], [email protected], [email protected]
Abstract. Modern methods of analyzing eye tracking results are highly accurate provided that rigid fixation of the subject’s head is maintained. To improve the accuracy of identifying the cause of a change in pupil size, adjustments can be made to the coordinates of the center of attention. We propose an alternative, simpler adjustment method. In the proposed method, a transition is made to the coordinate system associated with the pupil. After this transition, the results of the study will correspond to the task given to the participants.

Keywords: Eye tracking · Test object · Coordinate bond · Emotional response
1 Introduction

Today, security is an important aspect of modernity [1]. In order to ensure it, the latest technologies are being introduced. Security systems identify employees and visitors of an organization using biometric data, which is not an innovation [2]. Existing methods are good enough to recognize attempts to conceal information, but no technology can give a 100% result with absolute certainty [4]. Therefore, it is necessary to apply a set of methods that will improve the interpretation of the information received in many areas of activity, from the selection of personnel to the investigation of crimes. One of the tools of eye tracking technology is the eye tracker, which recognizes and records pupil positions and eye movements. The device can be worn on the head (eyewear or helmet) or be stationary, placed on a table in front of the monitor screen; it can also be used in any research related to the visual system [2]. Visual information causes natural eye reactions to what is seen, which are impossible to control [2]. Therefore, this technique is widely used in applications such as sleepiness detection, diagnosis of various clinical conditions, iris recognition, cognitive and behavioral therapy, visual search, advertising, neuroscience, and psychology, and it is also applicable to analysis in security systems [3, 15]. The majority of works devoted to the use of eye tracking technologies analyze the focus of attention when performing technical actions (including ignoring visual stimuli), visual search strategies in the process of activity, measurement of pupil diameter (as an indicator of cognitive load), the number of saccadic eye movements, as well as
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 365–371, 2020. https://doi.org/10.1007/978-3-030-39512-4_57
366
M. Boronenko et al.
fixations, blinks, and other parameters (Drummers, Zhegallo, 2013, etc.) [5]. Eye tracking is often used to detect lies [13]. This method is also used in forensic examinations to determine the truth of testimony. Eye tracking keeps track of non-verbal human activity and contributes to the identification of the most marked psychological characteristics governing a particular type of behavior [6]. It is also often used in the selection of employees for service [7]. Eye tracking technology is used to improve performance in various sports, to predict consumer behavior in response to various marketing incentives, in the process of learning a foreign language and translation, and in the process of knowledge control in distance learning [8, 9]. By registering the time of fixation and the density of the gaze trajectory, it is possible to judge the significance of the elements seen for a person. Equipment based on the principle of recording eye movements in infrared radiation, with subsequent determination of direction based on the displacement vector between the centers of the pupil and the corneal glint, gives highly accurate results provided that the subject’s head is fixed. Without equipment providing fixation of the head, the process of data analysis is complicated, so the development of other methods that increase the accuracy and reliability of the data without using special equipment is relevant [12]. With the help of modern eye tracking technology, it is possible within seconds to reconstruct what interested a person, what he looked at, and what became the subject of his attention [4]. This technology has a number of advantages associated with the fact that oculomotor reactions are recorded remotely, without attaching sensors to the body of the subject, making the evaluation procedure unobtrusive, more comfortable, and less stressful.
The process of registering the position and movement of a person's gaze takes three times less time than a standard polygraph study [5]. An additional tool for tracking the psychophysical state of the subject is pupillometry. A person's emotional stability can be determined from his reaction to the presented test objects by analyzing changes in the pupil area and the trajectory of its movement. However, since registering the pupil reaction without rigid fixation of the head affects the position and coordinates of the pupil, and thereby reduces the accuracy of interpreting the results, it becomes necessary to develop a method that reduces this effect. The goal of our study is to develop a way to improve the accuracy of tracking the center of attention.
2 Research Methods
To study the pupil reaction to test objects, a helmet was developed that creates a rigid coordinate connection between the camera and the head. A T7 Astro Camera (an astronomical planetary high-speed electronic eyepiece telescope digital lens for guiding astrophotography, 30 fps video mode) and a microscope lens with an optical magnification of 1X–100X were used for video recording. 17 people took part in the experiment. Most of the participants were under 20 years of age (8 men, 9 women). The respondents had no eye diseases. All participants were warned in advance and voluntarily decided to participate in the experiment. As test objects, specially selected pictures with negative emotional coloring (disgust,
The Problem of Tracking the Center of Attention
irritation) were used [16]. Stimulus material was displayed on a monitor screen. The distance between the test objects and the eye was not less than 2.5 m. The subjects were alternately presented, at regular intervals, with images of test objects and monochrome light-gray slides [14]. The brightness of the test objects was not changed during the experiments. The test images were selected on the basis of a survey conducted among a large number of respondents. Before the test, each subject was instructed on the requirements for the test procedure and the order of its passage; the chair on which the subject sat and the monitor screen were also adjusted, which was necessary because of the physiological characteristics of each person. Individual data of the subjects on the reaction to the test objects were analyzed on the basis of changes in pupil size and the attention track. The total procedure takes no more than 5 min, which does not cause fatigue in the respondent but is enough for the tracker to record the gaze. Processing and analysis of the results were carried out in two stages. First, frame extraction, processing and delineation of the pupils were carried out in ImageJ. This was necessary for further analysis of the obtained data: the coordinate trajectory of the pupil and its relative size. Visualization of the results was done in Origin19, one of the most powerful tools for graphical representation of results.
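The first processing stage (delineating the pupil in each frame to obtain its center coordinates and relative area) can be sketched in Python with NumPy. This is only an illustration of the principle: the intensity threshold and the synthetic frame below are our assumptions, not the parameters actually used in the ImageJ workflow.

```python
import numpy as np

def pupil_center_and_area(frame, threshold=60):
    """Locate the pupil in a grayscale frame (0-255).

    The pupil is assumed to be the dark region below `threshold`
    (an illustrative value; in practice the operator picks it per video).
    Returns the centroid (x, y) in pixels and the pupil area in pixels.
    """
    mask = frame < threshold              # dark pixels = candidate pupil
    ys, xs = np.nonzero(mask)
    if xs.size == 0:                      # blink or lost tracking
        return None, 0
    return (float(xs.mean()), float(ys.mean())), int(xs.size)

# Synthetic 100x100 frame: light background with a dark "pupil" disk
frame = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
frame[(xx - 40) ** 2 + (yy - 55) ** 2 <= 10 ** 2] = 20

center, area = pupil_center_and_area(frame)
print(center, area)   # centroid at (40.0, 55.0), area close to pi*10^2
```

Tracking the centroid frame by frame yields the pupil's coordinate trajectory, and the mask size gives the relative pupil area used for the pupillograms.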
3 Main Results
When implementing a series of experiments with the presentation of unpleasant stimulus material, it is difficult to obtain the same emotional response from everyone. Moreover, the intensity of the stimuli is low. This is due to people's different preferences, temperaments, current concerns, etc. Therefore, the results were sometimes unexpected. We planned to get a reaction to a test object intended to cause disgust, and instead got a reaction to the image of a Wi-Fi icon (Fig. 1). The subject did not react to the test object that was supposed to cause disgust because, at the time of testing, he was thinking about online games. This was found out through a survey conducted after testing. For him, the lack of Internet access turned out to be a stressful situation.
Fig. 1. Attention track
Figure 2 shows graphs of the change in pupil area and pupil coordinates as functions of the demonstration time of the test object. In the period from 6 to 9 s, the subject observed a test object with a Wi-Fi image (Fig. 1). The analysis showed that during this period there was a slight change in the X and Y coordinates. Calibration showed that a shift of the center of attention by 1°38′ leads to
a change in pupil size of no more than 0.1 times. Synchronization of the pupillogram with the attention-track coordinates was performed in software.
Fig. 2. Relative pupil size and its movement coordinates as functions of time
During calibration, it was found that in some cases the resulting tracks were offset relative to the area of the image (Fig. 3, turquoise track). This is due to the lack of a rigid coordinate connection between the head and the surface of the monitor on which the test objects were demonstrated. If eye tracking is used, for example, for marketing, such an offset is not a serious loss. However, such distortions are unacceptable in high-precision systems.
Fig. 3. Distortion of the visual trace due to head rotation (turquoise dots); the same track in the coordinate system associated with the center of the pupil (red dots)
4 Discussion of the Results
Analysis of the results is complicated by the fact that the recording is carried out without rigid fixation of the human head; every movement of the body can increase the inaccuracy of the processed data. M. Yu. Kataev and N. V. Kovalev proposed a method for estimating the angles of human head rotation that is insensitive to changes in measurement conditions [10, 11]. It is known that Euler angles uniquely determine the rotation of one coordinate system relative to another. The rotation matrix R is given by (1):

$$R = \begin{pmatrix} \cos\varphi\cos\psi - \sin\varphi\cos\theta\sin\psi & -\cos\varphi\sin\psi - \sin\varphi\cos\theta\cos\psi & \sin\varphi\sin\theta \\ \sin\varphi\cos\psi + \cos\varphi\cos\theta\sin\psi & -\sin\varphi\sin\psi + \cos\varphi\cos\theta\cos\psi & -\cos\varphi\sin\theta \\ \sin\theta\sin\psi & \sin\theta\cos\psi & \cos\theta \end{pmatrix} \quad (1)$$

The coordinates of a point in the Oxyz system are expressed through its coordinates in the O′x′y′z′ system as:

$$\begin{cases} x = (\cos\varphi\cos\psi - \sin\varphi\cos\theta\sin\psi)\,x' + (-\cos\varphi\sin\psi - \sin\varphi\cos\theta\cos\psi)\,y' + \sin\varphi\sin\theta\,z' \\ y = (\sin\varphi\cos\psi + \cos\varphi\cos\theta\sin\psi)\,x' + (-\sin\varphi\sin\psi + \cos\varphi\cos\theta\cos\psi)\,y' - \cos\varphi\sin\theta\,z' \\ z = \sin\theta\sin\psi\,x' + \sin\theta\cos\psi\,y' + \cos\theta\,z' \end{cases}$$
Thus, by determining the Euler angles with the above-mentioned method [10, 11], it is possible to correct the coordinates of the center of attention. This improves the accuracy of identifying the cause of a change in pupil size. We propose an alternative, simpler adjustment method: a transition to the coordinate system associated with the pupil. After this transition, the track in Fig. 3 (red) corresponds to the task given to the participants.
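Both corrections discussed above can be sketched as follows. This is a minimal illustration that assumes the Euler angles have already been estimated by the method of [10, 11]; the function and variable names are ours, not the authors'.

```python
import numpy as np

def euler_rotation(phi, theta, psi):
    """Euler (z-x-z) rotation matrix R of Eq. (1); angles in radians."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [cf * cp - sf * ct * sp, -cf * sp - sf * ct * cp,  sf * st],
        [sf * cp + cf * ct * sp, -sf * sp + cf * ct * cp, -cf * st],
        [st * sp,                 st * cp,                  ct],
    ])

def correct_by_head_rotation(points, phi, theta, psi):
    """Undo an estimated head rotation (R is orthogonal, so R^-1 = R^T)."""
    R = euler_rotation(phi, theta, psi)
    return points @ R              # row vectors: p_corrected = p_rotated R

def to_pupil_frame(track, pupil_centers):
    """The simpler correction proposed in the paper: express every
    attention-track point relative to the simultaneously recorded pupil
    center, so offsets common to both (head motion) cancel out."""
    return track - pupil_centers

# A point rotated by R is recovered exactly after the correction
p = np.array([[1.0, 2.0, 3.0]])
rotated = p @ euler_rotation(0.3, 0.2, 0.1).T
print(np.allclose(correct_by_head_rotation(rotated, 0.3, 0.2, 0.1), p))
```

The pupil-frame correction needs no angle estimation at all, which is why it is the simpler of the two: any rigid displacement shared by the track and the pupil center subtracts out.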
5 Main Conclusions
In the course of the research it was found that changing the angle of inclination or rotation of the head distorts the track obtained by the method without infrared illumination, and that the transition to the coordinate system associated with the center of the pupil while tracking the center of attention minimizes this distortion. In this case, the video camera used to obtain information about the direction of the gaze has a rigid coordinate connection with the head.
Acknowledgments. This study was funded by RFBR under research project 18-47-860018 r_a.
References
1. Fomenko, G.Yu.: Psychology of personal security: theoretical and methodological foundations of institutionalization. Community Management, no. 1, pp. 83–99. The Kuban State University, Krasnodar (2010)
2. Lebedenko, Yu.I.: Biometric Systems (Биoмeтpичecкиe cиcтeмы), p. 159. Publishing House of Tula State University (2012)
3. Luneva, E.A., Skobelkina, N.G.: Eye tracking in the system of modern technologies of neuromarketing. STAZH 3(24), 50–53 (2016)
4. Fazylzyanov, G.I., Belalov, V.B.: Eye tracking: cognitive techniques in visual culture. Bull. Tambov State Univ. 2, 628–633 (2014)
5. Zhbankova, O.V., Gusev, V.B.: Using the method of video oculography (eye tracking) to identify hidden information. In: Complex Psychophysiological Forensic Examination: Current State and Prospects of Development: Proceedings of the Scientific-Practical Conference, Kaluga, pp. 102–105 (2016)
6. Bessonova, Yu.V., Oboznov, A.A.: Eye tracking in the diagnosis of truth/lies. In: Materials of the Congress of the Russian Psychological Society, pp. 227–228 (2017)
7. Barabanshchikov, V.A., Zhegallo, A.V., Jose, E.G.: Assessment of the reliability of the reported information on nonverbal behavior. In: Complex Psychological and Psychophysiological Forensic Examination: Current State and Prospects of Development, p. 33. MGPPU (2016)
8. Zhbankova, O.V., Gusev, V.B.: Using the method of video oculography (eye tracking) to identify hidden information. In: Complex Psychophysiological Forensic Examination: Current State and Prospects of Development: Proceedings of the Scientific-Practical Conference, Kaluga, pp. 102–105 (2016)
9. Lisi, A.Y., Kompaniets, V.S.: Eye tracking as a method to evaluate user interfaces. In: The New Task of Technical Sciences and Ways of Solution: Collected Articles on the Results of International Scientific-Practical Conference, Orenburg, pp. 31–33 (2017)
10. Kataev, M.Yu., Kovalev, N.V., Griboyedov, A.A.: Restoration of human head rotation angles from images. Reports of TUSUR 2.1, pp. 238–242 (2012)
11. Kataev, M.Yu., Kovalev, N.V.: Assessment of the position of the human head on the analysis of images. Reports of TUSUR, no. 1–2 (21) (2010)
12. Demidov, A.A., Zhegallo, A.V.: SMI equipment for recording eye movements: test drive. Exper. Psychol. 1, 149–159 (2008)
13. Boronenko, M.P., Zelensky, V.I., Kiseleva, E.S.: Application of attention waves as a marker of hidden intentions. Nat. Psychol. J. 2(34), 88–98 (2019). https://doi.org/10.11621/npj.2019.0212
14. Bobrova, D., Bikberdina, N., Boronenko, M.: Investigation of the color effect of the test object on the pupil response. J. Phys. Conf. Ser. 1124(3), 031016 (2018)
15. Boronenko, M., et al.: Use of active test objects in security systems. In: Proceedings of the AHFE 2019 International Conference on Neuroergonomics and Cognitive Engineering, and the AHFE International Conference on Industrial Cognitive Ergonomics and Engineering Psychology, Washington DC, USA, 24–28 July 2019, p. 438. Springer (2019)
16. Boronenko, M., et al.: Methods of indication of low intensity pupil reaction on the subjectively-important stimuli. Act. Nerv. Super Rediviva 61(2), 49–56 (2019)
Health Risk Assessment Matrix for Back Pain Prediction Among Call Center Workers
Sunisa Chaiklieng1 and Pornnapa Suggaravetsiri2
1 Department of Environmental Health, Occupational Health and Safety, Faculty of Public Health, Khon Kaen University, Khon Kaen, Thailand
2 Department of Epidemiology and Biostatistics, Faculty of Public Health, Khon Kaen University, Khon Kaen, Thailand
[email protected]
Abstract. Call center workers commonly complain about back pain, which could be predicted within a health surveillance program. This study aimed to estimate the risk of back pain among call center workers by the application of a risk matrix. A combination of ergonomics assessment with the Rapid Office Strain Assessment (ROSA) and muscular discomfort assessment with the Cornell Musculoskeletal Discomfort Questionnaire (CMDQ) was carried out among 216 call center workers. The results showed that most call center workers (44.9%) had a moderate health risk of neck, shoulder and back pain. The highest rate was found for back pain (25.5%), followed by shoulder pain (22.2%). The Pearson product-moment correlation coefficient (r) showed that the discomfort score was significantly associated with the back pain risk score (r = 0.756). The development of back pain from exposure to ergonomic risk factors can be predicted by application of the intelligence matrix, which considers both ergonomic risks and self-reported discomfort. The health risk matrix may be used for the surveillance of back pain among call center workers and other similar workers. Keywords: Call center
Ergonomics · Intelligence matrix · Shoulder pain
1 Introduction
A call center is a unit for transferring information by calling or emailing customers. Call center workers spend most of their time on the telephone or searching and recording data on a computer. By use of shift work and interactive voice response (IVR), call centers provide 24-h daily customer service. Workers may face eye, ear, and throat related health problems [1] along with musculoskeletal disorders (MSDs). In call center workers, MSDs are caused by awkward posture [2] or inappropriate workstation design, such as a lack of forearm support [3], while using computers. MSDs are common among those whose work involves repetitive action and a prolonged static posture [4]. A high prevalence of MSDs among call center workers has been previously reported. Poochada and Chaiklieng [5] found a high prevalence of neck, shoulder and back pain during the last 3 months among call center workers in Thailand (83.8%; 95% CI: 78.8–88.7), and back pain was the most common complaint. Likewise, a study by Subbarayalu [1] found that back pain had the highest © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 372–378, 2020. https://doi.org/10.1007/978-3-030-39512-4_58
prevalence during the last 12 months among call center workers in India. This predominant health impact among call center workers can be controlled by surveillance with an ergonomics risk assessment which considers posture, force, frequency and duration of work [6]. There have been two main approaches to ergonomic risk assessment: (1) subjective assessment by self-report questionnaire, and (2) an objective approach which can be subdivided into (2.1) observational assessment and (2.2) direct measurement techniques [7, 8]. A basic health risk assessment (HRA) considers both the probability of exposure to a health hazard and the severity of the resulting adverse health effect [9]. However, a single method of ergonomic risk assessment may not apply equally to all regional MSD problems, and little is known about combining the two approaches into a health risk matrix for MSD prediction. One previous matrix was applied to the risk assessment of shoulder pain in office workers by combining observational assessment using the Rapid Office Strain Assessment (ROSA) with a subjective assessment of self-reported discomfort using the Cornell Musculoskeletal Discomfort Questionnaire (CMDQ). The prediction provided by application of the risk matrix was confirmed by the later incidence rate of shoulder pain in the office workers [10]. It is therefore of interest to apply this health risk assessment (HRA) matrix to call center workers, whose work is similar in nature to that of office workers. The aim of this study was to estimate the health risk of back pain among call center workers by the application of the HRA matrix.
2 Methods
This was a cross-sectional descriptive study conducted among 216 call center workers from several companies in Khon Kaen province, Thailand. The inclusion criteria were at least 6 months of job experience as a call center worker and a working time of at least 32 h per week, with at least 4 h of computer work per day. Workers who had musculoskeletal disorders at the time of the study were excluded.
2.1 Research Tool
The Rapid Office Strain Assessment (ROSA) [11] was applied to assess sitting posture, the workstation (chair height, pan depth, armrest and back support), the computer (monitor, mouse and keyboard), the telephone, and the duration of time spent in each posture or activity. ROSA has high inter- and intra-observer reliability (ICCs of 0.84 and 0.86, respectively). It was used to identify the probability of ergonomic hazard exposure, classified into four risk levels: low (score 1–2 points), moderate (score 3–4 points), high (score 5–7 points) and very high (score 8–10 points). The Cornell Musculoskeletal Discomfort Questionnaire (CMDQ), adapted from Hedge et al. [12], was used to identify the severity of discomfort perceived from ergonomic hazard exposure, with a reliability of 0.986. The discomfort or pain level was rated by the frequency of pain (never, 1–2 times per week, 3–4 times per week, once daily,
more than once daily), the intensity of discomfort (mild, medium, severe) and the interference of pain with work (not at all, slightly, substantially). Five levels of discomfort were defined: no discomfort (score 0 points), mild (score 1–5 points), moderate (score 6–13.5 points), high discomfort (score 14–39.5 points) and severe discomfort (score 40–90 points). The matrix of health risk levels was adapted from a previous study [10], considering probability (ROSA level) and severity of discomfort (CMDQ level), and was finalized into 5 levels: low risk (score 0–2 points), acceptable risk (score 3–4 points), moderate risk (score 6–8 points), high risk (score 9–12 points) and very high risk (score 15–20 points).
2.2 Data Analysis
Data were analyzed with the STATA program, version 10.1, to report the number and percentage for personal characteristics, work characteristics, and health risk levels. Pearson product-moment correlation coefficients were used to measure the correlations between health risk scores and CMDQ scores, and between health risk scores and ROSA scores. This study obtained ethical approval from the Khon Kaen University Ethics Committee, Thailand, no. HE572131.
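The matrix combination described in Sect. 2.1 can be sketched as a small function. The numeric coding below (health risk score = ROSA probability level × CMDQ severity level, with levels numbered 1–4 and 1–5) is our inference from the published score bands, not an explicit formula stated in the text.

```python
# Hypothetical coding inferred from the score bands in Sect. 2.1:
# ROSA probability levels 1-4 (low..very high), CMDQ severity levels 1-5
# (comfort..severe); health risk score = probability level x severity level.
ROSA_LEVEL = {"low": 1, "moderate": 2, "high": 3, "very high": 4}
CMDQ_LEVEL = {"comfort": 1, "mild": 2, "moderate": 3, "high": 4, "severe": 5}

def health_risk(rosa, cmdq):
    """Return (score, risk level) for one worker's ROSA and CMDQ levels."""
    score = ROSA_LEVEL[rosa] * CMDQ_LEVEL[cmdq]
    if score <= 2:
        level = "low risk"
    elif score <= 4:
        level = "acceptable risk"
    elif score <= 8:
        level = "moderate risk"
    elif score <= 12:
        level = "high risk"
    else:
        level = "very high risk"
    return score, level

# High ROSA with mild CMDQ (the largest cell in Table 1) maps to moderate risk
print(health_risk("high", "mild"))    # (6, 'moderate risk')
```

Under this coding the cell counts in Table 1 reproduce the reported totals: the high-ROSA/mild-CMDQ cell (24.1%) falls in the moderate band, and the high-ROSA cells with moderate and high CMDQ (18.1% and 3.2%) sum to the reported 21.3% at high risk.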
3 Results
3.1 Personal and Work Characteristics
Among the 216 call center workers, 75.9% were female, and 54.6% were aged between 25 and 29 years (median = 26 years). Most workers had graduated with a bachelor's degree (85.7%). Regarding health status, 40.2% exercised for at least 30 min 3 days a week, and most (84.3%) had no chronic diseases. A large number of call center workers (74.5%) had 1–3 years of work experience (median = 1 year), 76.4% worked 8 h per day, and 86.6% worked continuously with a computer for 4–8 h per day (median = 8 h). Most (70.0%) had telephone calls with customers at a rate of 6–10 times per hour (median = 5 times), and 64.8% spent 5 min on each call.
3.2 Ergonomics Risk and Muscular Discomfort
Call center workers were exposed to only two of the ergonomic risk levels measured by the ROSA: a high risk level (n = 113 or 52.4%) and a moderate risk level (n = 103 or 47.6%). Discomfort identified by the CMDQ was mostly at a mild level (40.3%), followed by a moderate level (35.2%), when no specific area of the neck, shoulders and back (NSB) was considered. For moderate and mild discomfort, the volume of complaints was highest for the back area, followed by the shoulder and then the neck areas.
3.3 Health Risk Levels for Neck, Shoulder and Back Pain
In the matrix of health risk assessment, neck, shoulder and back pain was considered when at least one area of complaint was reported. Most workers (44.9%) had a health risk at the moderate level, of whom 24.1% had a mild CMDQ level combined with a high ROSA level. The risk of neck, shoulder and back pain was at an acceptable to low level for 33.7% of workers, and at a high risk level for 21.3%, as shown in Table 1. In terms of health risks associated with only one area of pain (neck, shoulder or back), the results showed that the highest percentage of workers had a moderate health risk of back pain (25.5%), followed by shoulder pain (22.2%), and 16.7% had a high health risk associated with back pain. Details of the proportions of workers in each health risk category according to region of pain are shown in Table 2. Most call center workers had a health risk of back pain from the low to the high risk level (51.8%), followed by shoulder pain (39.3%) and neck pain (23.6%), respectively. Table 1. Number (%) of call center workers on health risk of neck, shoulder and back pain (n = 216)
CMDQ level   ROSA level
             low        moderate    high        very high
Severe       0 (0.0)    0 (0.0)     0 (0.0)     0 (0.0)
High         0 (0.0)    8 (3.7)     7 (3.2)     0 (0.0)
Moderate     0 (0.0)    37 (17.1)   39 (18.1)   0 (0.0)
Mild         0 (0.0)    35 (16.2)   52 (24.1)   0 (0.0)
Comfort      0 (0.0)    23 (10.6)   15 (6.9)    0 (0.0)
Comments: cell shading in the original distinguishes the five health risk levels (very high risk, high risk, moderate risk, low risk, acceptable risk).
Table 2. Number (%) of workers classified by health risk level and region of pain complaint (workers could report pain in more than one region) (n = 216).

Health risk         Neck         Shoulder      Back
Unacceptable risk   0 (0.0)      0 (0.0)       0 (0.0)
High risk           13 (6.0)     19 (8.8)      36 (16.7)c
Moderate risk       26 (12.0)    48 (22.2)b    55 (25.5)a
Low risk            12 (5.6)     18 (8.3)      21 (9.6)
Acceptable risk     135 (76.4)   131 (60.7)    104 (48.2)
Comments: a = the first ranking, b = the second ranking, c = the third ranking.
Regarding the relationships between health risk scores, ergonomic risks and discomfort levels for back pain, the results showed a statistically significant positive correlation between health risk scores and CMDQ scores (r = 0.756). A positive correlation between health risk scores and ROSA scores (r = 0.442) was also found.
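The reported correlations are plain Pearson product-moment coefficients, which can be computed as below; the score vectors shown are synthetic illustrations, not the study data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two score lists."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

# Synthetic health-risk and CMDQ scores for illustration (not the study data)
risk_scores = [2, 4, 6, 6, 9, 12]
cmdq_scores = [1, 3, 7, 6, 15, 30]
print(round(pearson_r(risk_scores, cmdq_scores), 3))
```

A coefficient near 1 indicates that workers with higher matrix risk scores also report more discomfort, which is the pattern the study observed between health risk and CMDQ scores.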
4 Discussion
For over 70% of the 216 call center workers, the discomfort levels measured by the CMDQ were at a mild to moderate level. A previous study [13] explained that workers' resting time is reduced by long durations of work. Working 8 h per day (76.4%) and sitting for more than 6 h a day played a role in back pain development. Workers had rest periods less than three times a day (87.6%) and of 15 min each (86.5%). A previous study [14] found that insufficient rest periods were significantly correlated with MSDs. The ROSA showed that call center workers were exposed to only two levels of ergonomic risk, high and moderate, caused by an inappropriate work environment and awkward postures. According to a previous report on these workers, 27.3% of them sat on a chair that was too high, resulting in a knee angle of more than 90 degrees. Moreover, 53.7% of workers had non-adjustable armrests, and 57.9% had no back support or leaned their body forward [15]. These working conditions may contribute to the development of the neck, shoulder and back pain about which workers in the present study complained. However, a limitation of the ROSA is that the outcome of the risk assessment does not specify the body region associated with the level of risk. It has been suggested that, when the ergonomic risk is above the moderate level, the office workstation should be repeatedly modified until the risk is at an acceptable level [9]. Another possibility is to change the awkward posture, which is one of the risk factors for MSDs [2]. Moreover, Errico et al. [3] reported that unsafe workstations were significantly correlated with MSDs. The present study found that the majority of call center workers had moderate to high health risks of neck, shoulder and back pain.
In the health risk matrix, some call center workers were assessed as having a low level of health risk even though their CMDQ score was zero, indicating no recent or current discomfort. When their ergonomic risk on the ROSA was high, these workers might reasonably be expected to develop at least low-level MSD problems at some time in the future [10]. The health risk score was linearly related to both the CMDQ and ROSA scores; the higher correlation between health risk scores and CMDQ scores reflects the workers' perception of discomfort. The health risk assessment matrix could therefore be applied to the prediction of back pain by combining the objective and subjective approaches to office ergonomics assessment [16].
5 Conclusions and Suggestions
Most workers were exposed to ergonomic risk factors (ROSA) at a high or moderate level (52.4% and 47.6%, respectively). The CMDQ levels were mostly at the mild or moderate discomfort level (40.3% and 35.2%, respectively). Nevertheless, workers had a health risk of neck, shoulder and back pain at the moderate or high risk level (44.9% and 21.3%, respectively). The highest proportion of workers with high or moderate health risk was found specifically for back pain (42.2%). The linear correlation between health risk scores and CMDQ scores (r = 0.756) was statistically significant. The development of neck, shoulder and back pain from exposure to ergonomic risk factors can be predicted by application of the HRA matrix, which considers both ergonomic risks (objective assessment) and self-reported aches/discomfort/pain (subjective assessment). For prevention, risk should be brought to an acceptable level by reducing ergonomic risks, either through the improvement of workstations or of sitting posture. The health risk matrix can also be applied for neck, shoulder and back pain surveillance in other workers engaged in similar work.
References
1. Subbarayalu, A.V.: Occupational health problems of call center workers in India: a cross-sectional study focusing on gender differences. J. Manage. Sci. Pract. 2(1), 63–70 (2013)
2. Spyropoulos, P., Papathanasiou, G., Georgoudis, G., Chronopoulos, E., Koutis, H., Koumoutsou, F.: Prevalence of low back pain in Greek public office workers. Pain Physician 10, 651–660 (2007)
3. Errico, A.D., Caputo, P., Falcone, U., Fubini, L., Gilardi, L., Mamo, C., et al.: Risk factors for upper extremity musculoskeletal symptoms among call center employees. J. Occup. Health 52, 115–124 (2010)
4. Keawduangdee, P., Puntumetakul, R., Siritaratiwat, W., Boonprakob, Y., Wanpen, S., Thavornpitak, Y.: The prevalence and associated factors of working posture of low back pain in the textile occupation (fishing net) in Khon Kaen province. Srinagarind Med. J. 26(4), 317–324 (2011)
5. Poochada, W., Chaiklieng, S.: Prevalence and neck, shoulder and back discomfort among call center workers. Srinagarind Med. J. 30(4), 282–289 (2015)
6. Bramson, J.B., Smith, S., Romagnoli, G.: Evaluating dental office ergonomic risk factors and hazards. J. Am. Dent. Assoc. 129(2), 174–183 (1998)
7. Dempsey, P.G., McGorry, R.W., Maynard, W.S.: A survey of tools and methods used by certified professional ergonomists. Appl. Ergon. 36, 489–503 (2005)
8. David, G.C.: Ergonomic methods for assessing exposure to risk factors for work-related musculoskeletal disorders. Occup. Med. 55, 190–199 (2005)
9. Chaiklieng, S.: Toxicology in Public Health. Khon Kaen University Printing House, Khon Kaen (2014)
10. Chaiklieng, S., Krusan, M.: Health risk assessment and incidence of shoulder pain among office workers. Procedia Manuf. 3, 4941–4947 (2015)
11. Sonne, M., Villalta, D.L., Andrews, D.M.: Development and evaluation of an office ergonomic risk checklist: ROSA, rapid office strain assessment. Appl. Ergon. 43, 98–108 (2012)
12. Hedge, A., Morimoto, S., McCrobie, D.: Effects of keyboard tray geometry on upper body posture and comfort. Ergonomics 42(10), 1333–1349 (1999)
13. Chaiklieng, S., Suggaravetsiri, P., Boonprakob, Y.: Work ergonomic hazards for musculoskeletal pain among university office workers. Walailak J. Sci. Technol. 7(2), 169–176 (2010)
14. Chalardlon, T.: Work-related musculoskeletal injuries and work safety behaviors among call center workers. Nurs. J. Ministry Public Health 23(1), 44–59 (2013)
15. Poochada, W., Chaiklieng, S.: Ergonomics risk assessment among call center workers. Procedia Manuf. 3, 4613–4620 (2015)
16. Chiropractic, C.: Office Ergonomics. http://colonychiro.com/office-ergonomics/
Towards Conceptual New Product Development Framework for Latvian ICT Sector Companies and Startups
Didzis Rutitis and Tatjana Volkova
BA School of Business and Finance, Kr. Valdemara 161, Riga 1063, Latvia
{Didzis.Rutitis,Tatjana.Volkova}@ba.lv
Abstract. The purpose of this paper is to propose an outline for a new product development framework to be used by Latvian ICT companies and technology startups. Design/methodology: To achieve this objective, semi-structured expert interviews with 12 executives from Latvian ICT companies and startups were held to identify common factors that have contributed to the success of new product launches in the market. Findings: The paper summarizes new product development success factors, outlines organizational management aspects to be implemented for improving the product development process from the managerial perspective, and suggests possible metrics for measuring these factors. Practical implications: The insights gathered for the intended new product development framework could also be applied by managers of established and mature ICT companies to ensure successful internal transformation, with the aim of maintaining competitiveness against emerging startups and promoting internal entrepreneurial activities not only in Latvia but also abroad. Keywords: Startups
Product development · Company growth
1 Introduction
New product development is an important prerequisite for improving the competitiveness of almost every ICT sector company, and particularly of companies in the ICT services category (software development, telecommunications, consulting, and data processing segments). This has been the fastest growing category within the ICT sector and exceeded the turnover of hardware wholesale in 2016 [1]. Research on product development is essential because of rapidly evolving technological advancements and the speed at which new products are delivered to market, both by high-growth ICT sector participants such as software startups, fintech companies and other technology startups, and by mature ICT companies. Continuous innovation and new product development are not only growth opportunities but also a question of survival. Thus, employees directly and indirectly involved in new product development contribute a critical part of the effort to keep the company afloat and running.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 379–384, 2020. https://doi.org/10.1007/978-3-030-39512-4_59
2 Methodology of Research
This paper reports results from a study that examined the product development methods and principles used by ICT sector companies and technology (generally, software) startups. In the context of this research, the authors refer to a startup as an organization formed to search for a repeatable and scalable business model, before it has turned into an enterprise with a traditional management structure [2]. Previous studies and literature reviews on new ICT product development [3], startup growth acceleration from MVP to commercialization stage [4–6] and further scaling [7] were analyzed to obtain a theoretical basis and preliminary insights into the main product development processes to be discussed during the interviews. Insights regarding product development principles were collected using the expert method, with semi-structured interviews for data collection. Twelve experts (see Table 1) were interviewed, representing various Latvian ICT companies (from software startups in the early growth stage to a mature telecommunications company) and one commercial bank (its division in charge of digital channels and partnerships with fintech software startups). The table also indicates the number of employees directly involved in new product development, including managers, dedicated software developers, data analysts and designers.

Table 1. Description of experts.

Position of expert in a company                      ICT industry sector and sub-segment       Product team size
1. CEO                                               Startup (Productivity software)           5
2. CEO                                               Startup (Construction software)           3
3. CEO                                               Productivity software                     14
4. CEO                                               Software development                      4
5. Head of business development                      Software development                      34
6. Head of product management                        Fintech software                          12
7. Co-founder, VP of product                         Fintech software                          22
8. Head of product management                        Fintech software                          32
9. Head of product management                        Fintech software                          12
10. Head of product management                       Manufacturing software                    15
11. Head of corporate development and new products   Telecommunications                        50
12. Head of digital channels                         Commercial bank (department in charge of digital channels and partnerships with fintech software companies)   15
During the interviews, experts were asked to describe the actual product development process taking place at their company; the frameworks and tools used by the product team and other employees involved in new product development processes; and the metrics used for the evaluation and supervision of the product development process.
Towards Conceptual New Product Development Framework
Interviews were held and data was collected between December 2018 and May 2019.
3 Background
3.1 The Role of Product Management
At a high level, a product manager is the single cross-functional owner directly responsible for the success of a product. They carry full responsibility for the product's success, but often lack the direct-line reporting of the other functions. Product managers are generally responsible for [7]:
(1) product strategy and vision;
(2) product prioritization and problem solving;
(3) execution: timelines, resources, and removal of obstacles;
(4) communication and coordination (which overlays all of the above).
In order to build a great product organization, the company needs to understand the role of the product manager; to hire individuals with the right skill sets, including a strong VP of product (for a startup); and to establish a simple set of processes that enable the product organization and help the company scale its product development [7]. Great product management organizations help a company set product vision and road maps, establish goals and strategy, and drive execution on each product throughout its lifecycle. In contrast, bad product management organizations largely function as project management groups, running schedules and tidying up documents for engineers [8]. From this perspective, this research contributes to a better understanding of the success factors leading to successful commercialization of technology products (mainly software) and successful new product creation within the ICT sector.
3.2 Selection of Questions for Discussion with Experts
After carrying out a literature review on research related to new ICT product development, the authors derived the following factors to be discussed and evaluated by experts during the semi-structured interviews:
• Product development frameworks. What product development frameworks are used by the product team and other employees involved in new product development processes?
• Time. Is there any pattern in the timing and length of the product development cycle for new product development and pivoting until reaching the commercialization stage and achieving successful traction for new products?
• Innovation tools used by product teams. Which innovation tools and software are used by product managers and product teams in their daily work?
• Managerial processes. What other organizational procedures and processes are implemented or utilized by product teams?
• Measurement of traction. What metrics and techniques are used for the supervision and evaluation of product development management?
4 Research Results
4.1 Product Development Frameworks
Experts confirmed the general use of such ICT product development frameworks as Lean Startup, Agile, Scrum, and Waterfall within their respective companies. Next, SAFe (Scaled Agile Framework) for Lean Enterprises was recognized as a relevant methodology incorporating large-company needs and the existence of different managerial levels with different requirements [10]. Another expert highlighted the effectiveness of the 5-day Sprint method by Google Ventures [11] for promptly testing the hypothesis behind a new product idea before investing additional effort in prototype development and the subsequent launch to the market. Several experts indicated that they tend to adapt the experience of large technology giants and test its usability for their respective companies within the Latvian environment. Among those outlined was Spotify for its engineering approach, known as the "Spotify Model" [12]. Another expert confirmed internal knowledge and best practice exchange with product development leads from technology "unicorn" companies like Pipedrive and TransferWise.
4.2 The Use of Innovation Tools
The use of such innovation tools as the Business Model Canvas, Value Proposition Canvas, Storyboards, and Kanban and Trello boards was identified as rather common among both established ICT companies and startups for generating new business models and ideas in the idea generation stage. Productivity and project management tools like Basecamp, Asana and Trello are used to categorize product development related information and track deadlines during product or software prototype development. In addition, traditional Microsoft Office software (Word, Excel) is used to create product backlogs, which serve planning purposes and bi-weekly, quarterly and yearly strategy setting meetings.
4.3 Timeline
Experts mentioned a period of 3–6 months as the approximate time necessary for implementing a new pivot at a startup company. Several fintech companies indicated that their first prototype showed poor results right after the launch. Within approximately 6 months another pivot was implemented, resulting in a business model that was developed further; the initial business model was either terminated or separated into another legal entity apart from the core business. For larger companies, new product development cycles tend to be longer (starting from 6 months) due to the larger number of internal procedures, which prevent a purely iterative process and prompt product updates during the development stage and after the launch of the product. However, they tend to counter such deficiencies by establishing business units that are enabled to work iteratively and to build and test product prototypes using the Lean Startup approach within an isolated sandbox environment.
4.4 Managerial Processes
Several experts noted that product management teams perform their tasks within a matrix organizational structure, which usually involves product managers, designers, and software developers. Depending on the organizational structure and the company size, the technical team can either represent a core team sharing pooled resources, or there can be separate teams for maintaining core functions and developing new products. A stock option system has been used by one of the software companies to align decision-making by different department managers with the strategic and business goals of the company and to eliminate ego-centric behavior that might conflict with these corporate business goals. Experts also noted the need for a properly managed company culture and the importance of hiring the right people with a successful product development track record, which confirms the conceptual assumptions initially outlined from the theory [7].
4.5 Measurement of Traction
While the majority of experts indicated that the main indicator of product development success is the increase in the number of users or the additional revenue generated by the new product after it is launched on the market, one expert indicated that a company (in this case, a large and established enterprise with numerous products) may launch a new product that minimizes costs internally instead of generating additional revenue; additional revenue can be a positive externality in such a case. Another important aspect to take into account is the development stage of the company: in the business idea generation stage, the evaluation criteria could be ROI or a simple net present value (NPV) calculation, followed by a business case. For very early stage startups it can be interest from prospective customers, or demo customer feedback and readiness to pay for the product. For later stage startups it relates to growth of the user base, turnover per user, increased consumption (time spent) using the product, etc., which coincides with findings from the literature review [9].
5 Conclusions
The research results indicate that new product development processes differ from company to company and reflect a rather unique blend for each company. Different organizational processes take place if one compares new product development processes and related procedures (formal and informal) within an established ICT company and a newly established startup that operates fully in a Lean Startup mode. Established enterprise-level companies are likely to deploy a mix of Waterfall for existing products and a Lean Startup approach for developing new offerings, while early stage startups are more likely to rely fully on a Lean Startup approach to identify a scalable business model and achieve the necessary traction. From the managerial perspective, the challenge for mature companies is to cultivate an innovative spirit and overcome the objections of employees working with standardized and established business lines when they are approached by new product managers.
The factors to be included in a framework for new product development by ICT companies should include: use of specific product development methodologies by the product team or the entire company (in the case of an early stage startup); the minimum time for new pivot implementation and testing to validate new product hypotheses; use of innovation tools to generate new business models and ideas; organizational management - integration and alignment of new product initiatives with marketing and technical development teams; availability of financial resources to finance the pivot or fundamentally new product development; and consistent measurement of product development and the subsequent traction after launch with relevant metrics. The relative importance and nature of each factor is directly correlated with the stage of product (or company, in the case of a startup) development.
Acknowledgments. This research has been funded by the European Union Post-Doctoral program. Project number: 1.1.1.2/VIAA/1/16/089.
References
1. Central Statistical Bureau of Latvia: ICT industry turnover. https://www.csb.gov.lv/en/statistics
2. Blank, S.: What's A Startup? First Principles. https://steveblank.com/2010/01/25/whats-a-startup-first-principles/. Accessed 10 Apr 2019
3. Rutitis, D., Volkova, T.: Product development methods within the ICT industry of Latvia. Vadyba J. Manag. 1(34), 15–23 (2019)
4. Nguyen-Duc, A., Shah, S.M.A., Abrahamsson, P.: Towards an early stage software startups evolution model. In: 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 120–127 (2016)
5. Nguyen-Duc, A., Abrahamsson, P.: Minimum viable product or multiple facet product? The role of MVP in software startups. In: Sharp, H., Hall, T. (eds.) Agile Processes in Software Engineering and Extreme Programming, XP 2016. Lecture Notes in Business Information Processing, vol. 251, pp. 118–130. Springer, Cham (2016)
6. Berg, V., Birkeland, J., Nguyen-Duc, A., Pappas, I., Jaccheri, L.: Software startup engineering: a systematic mapping study. J. Syst. Softw. 144, 255–274 (2018)
7. Gil, E.: High Growth Handbook. Stripe Press, San Francisco (2018)
8. Horowitz, B.: Good Product Manager/Bad Product Manager (2012). https://a16z.com/2012/06/15/good-product-managerbad-product-manager/
9. Bhuyian, N.: A framework for successful new product development. J. Ind. Eng. Manag. 4(4), 646–770 (2011)
10. SAFe: Scaled Agile Framework. https://www.scaledagileframework.com/about/
11. Knapp, J., Zeratsky, J., Kowitz, B.: Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. Simon & Schuster, New York (2016)
12. Spotify Engineering Culture. https://blog.crisp.se/2014/03/27/henrikkniberg/spotify-engineering-culture-part-1
A Liveness Detection Method for Palmprint Authentication Ayaka Sugimoto1, Yuya Shiomi1, Akira Baba2, Norihiro Okui2, Tetushi Ohki1, Yutaka Miyake2, and Masakatsu Nishigaki1(&) 1
Shizuoka University, 3-5-1, Johoku, Naka, Hamamatsu, Shizuoka 4328011, Japan [email protected] 2 KDDI Research, Inc., Garden Air Tower, 3-10-10, Iidabashi, Chiyoda, Tokyo 1028460, Japan
Abstract. Many smartphones are equipped with a biometric authentication function to prevent their unauthorized use. An authentication method using the palmprint as a physical feature has been proposed. Palmprints can be authenticated without special devices because palmprint features can be acquired using a smartphone camera. However, impersonation may be performed using photographs of the user. In this study, we examine a liveness detection method against presentation attacks in palmprint authentication using a smartphone. Assuming impersonation using familiar media, we consider attacks using printed and displayed images. For attack images of printed and displayed palms, we focus on the resolution degradation caused by ink bleeding and the moiré (interference fringes) produced when capturing the attack palm.
Keywords: Palmprint authentication · Liveness detection · Presentation attack
1 Introduction In the current digital transformation era, almost all users own smartphones and a variety of social activities are performed online. A significant portion of shopping and banking has already shifted online; in these services, smartphones play an important role as the user terminal. Accordingly, users’ important information, such as credit card information and electronic money information, is stored in their own smartphones. Therefore, smartphones are critical devices that must be strictly protected. Recent smartphones are equipped with various security mechanisms to ensure that only a legitimate user can activate the smartphone. Among these mechanisms, biometric authentication is expected to be the most promising option, as smartphones do not have a keyboard and entering a password into such tiny terminals is difficult. However, biometric authentication is highly susceptible to presentation attacks with fake biometric samples. Thus, this issue must be solved for the practical use of smartphones in online services. The studies of [1, 2] have reported that fingerprints can be reproduced using hand photographs. Hence, this motivated us to investigate the tolerance (i.e., liveness detection ability) of palmprint authentication [4] against presentation attacks using © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 385–391, 2020. https://doi.org/10.1007/978-3-030-39512-4_60
printed/displayed palmprint images. As the first step of our investigation, this paper focuses on impersonation using palmprint images that are printed on paper or displayed on a liquid crystal screen. The printer used for printing is a commercially available product, and the media printed on are plain paper and glossy paper. The device used for displaying is a commercially available high-definition smartphone display.
2 Attack Method
A presentation attack using familiar media is assumed in this study. The characteristics of each medium from the viewpoint of liveness detection are described below.
• Plain Paper. When printer ink is sprayed on plain paper, the ink bleeds. Consequently, when the "plain paper attack palm" (a palmprint image printed on plain paper) is captured using a smartphone camera, the resolution of the obtained image (hereinafter the "plain paper attack image") is degraded compared with the palmprint image of a real palm captured using a smartphone camera (hereinafter the "real image").
• Glossy Paper. In the case of glossy paper, because the ink bleed is minimal, fine ink particles are regularly arranged on the paper surface. Digital cameras likewise have a structure in which image sensors are regularly arranged. Hence, when the "glossy paper attack palm" (a palmprint image printed on glossy paper) is captured using a smartphone camera, partial moiré (interference fringes) is induced in the obtained image (hereinafter the "glossy paper attack image").
• Display. A display has a structure of regularly arranged light-emitting elements. Hence, as in the case of glossy paper, when a "display attack palm" (a palmprint image displayed on the display) is captured using a smartphone camera, moiré (interference fringes) is induced in the obtained image (hereinafter the "display attack image").
Figure 1 illustrates examples of attack images. The attack image size is set to 300 × 300 pixels.
Fig. 1. Palmprint image area of attack images (Right: Plain paper, Center: Glossy paper, Left: Display)
3 Image Processing
This study aims to enhance the biometric authentication used for activating smartphones. Due to the limited computational resources of smartphones, liveness detection should be implemented as a lightweight process. Therefore, in this paper, we investigate how liveness detection can be achieved by simple image processing, such as Sobel filters and autocorrelation.
3.1 Sobel Filter
The Sobel filter is a first-order differential filter. As described in Sect. 2, plain paper attack images have palmprint edges that are slightly blurred owing to resolution degradation. Glossy paper attack images and display attack images are characterized by the presence of edges that do not exist in the palmprints of real images because of moiré. These features are therefore made obvious by performing edge detection with the Sobel filter. A Sobel filter with a kernel size of K = 7 is applied to both the real and attack images in both the X and Y directions. Furthermore, edge enhancement is realized by binarizing the filtered image. Two binarization methods are applied: one using a common threshold for all pixels, and one setting a threshold for each local region (Otsu's binarization [3]). In the following, Process1 denotes binarization with the common threshold applied to the result of the Sobel filter, and Process2 denotes Otsu's binarization applied to the result of the Sobel filter. Figure 2 illustrates an example of the results of applying Process1 and Process2.
3.2 Autocorrelation
From the Process1 images in Fig. 2, the sharpness is slightly reduced in the plain paper attack image, the palmprint pattern is slightly collapsed in the glossy paper attack image, and horizontal moiré lines appear in the display attack image. In this study, we used autocorrelation to detect these effects. Specifically, using the 50 × 50 pixel area at the center of the palmprint image as a template, we calculated the error (sum of squared differences in brightness) against the same-size area shifted in the X direction in 1-pixel steps from 10 pixels to 50 pixels (Fig. 3). In the plain paper attack image, the sharpness is slightly reduced; therefore, the autocorrelation is expected to be higher (the error smaller) than that of the real image. In the glossy paper attack image, as the pattern is messy, the autocorrelation is expected to be higher (the error smaller) than that of the real image. In the display attack image, as horizontal moiré lines are generated, the autocorrelation is expected to increase periodically (the error will be periodically smaller). The autocorrelation for the Process2 image is calculated in the same manner.
Fig. 2. Result of applying the Sobel filter (rows: Process1, Process2; columns: Real image, Plain paper attack image, Glossy paper attack image, Display attack image)
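As a rough illustration of the filtering-and-binarization step of Sect. 3.1, the following Python sketch computes a Sobel gradient magnitude and binarizes it with both a common global threshold (Process1) and an Otsu-style threshold (Process2). This is our own minimal reconstruction, not the authors' code: it uses a 3 × 3 Sobel kernel instead of the paper's K = 7, and all function names are ours.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels in the X and Y directions.
    (The paper uses a kernel size of K = 7; 3x3 keeps this sketch short.)"""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def binarize_global(edges, thresh):
    """Process1: a common threshold for all pixels."""
    return (edges > thresh).astype(np.uint8)

def otsu_threshold(edges, bins=256):
    """Process2 (global variant): Otsu's method [3] -- pick the threshold
    that maximizes the between-class variance of the edge-magnitude
    histogram. The paper applies it per local region; applying it to one
    region at a time reduces to this."""
    hist, bin_edges = np.histogram(edges, bins=bins)
    p = hist / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t
```

On a synthetic vertical step edge, `sobel_edges` responds only in the two columns adjacent to the step, and both binarizations isolate that edge.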
Fig. 3. Method to obtain autocorrelation
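The error computation of Fig. 3 can be sketched as follows. Again, this is our own reconstruction: the paper does not specify a normalization for the sum of squared differences, so the scaling of the error to [0, 1] here is an assumption, as is the function name.

```python
import numpy as np

def shift_errors(img, tpl_size=50, shifts=range(10, 51)):
    """Sum of squared brightness differences between the central
    tpl_size x tpl_size template and the same-size window shifted by
    d pixels in the X direction, for d = 10..50 (cf. Fig. 3).
    Errors are divided by the maximum possible SSD so they fall in
    [0, 1]; this scaling is an assumption."""
    h, w = img.shape
    y0 = (h - tpl_size) // 2
    x0 = (w - tpl_size) // 2
    tpl = img[y0:y0 + tpl_size, x0:x0 + tpl_size].astype(float)
    errors = []
    for d in shifts:
        win = img[y0:y0 + tpl_size, x0 + d:x0 + d + tpl_size].astype(float)
        sse = ((tpl - win) ** 2).sum()
        errors.append(sse / (tpl_size * tpl_size * 255.0 ** 2))
    return np.array(errors)
```

For a pattern that is periodic in X (like the horizontal moiré lines of a display attack image), the error dips toward zero at shifts that are multiples of the period, which is exactly the periodic rise in autocorrelation the text predicts.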
3.3 Liveness Detection Procedure
This section describes the liveness detection procedure used when liveness detection is performed after palmprint authentication.
I. Capture a palm image using a smartphone camera
II. Create a palmprint image by extracting the palmprint area
III. Perform palmprint authentication
IV. Apply the Sobel filter to the palmprint image that has been successfully authenticated, and create the Process1 and Process2 images by binarization
V. Obtain the autocorrelation of the Process1 and Process2 images
4 Experiment
Preliminary experiments were conducted to confirm how liveness detection was achieved by Sobel filters and autocorrelation.
4.1 Dataset
The digital camera of an LG Electronics Nexus 5 was used for image capture (corresponding to Step I in Sect. 3.3). The experimenter (the author) visually extracted the palmprint region from the photographed image (corresponding to Step II in Sect. 3.3). The size of the palmprint area was 300 × 300 pixels. By applying palmprint authentication in advance and excluding images that do not pass authentication, the total number of palmprint images subject to the liveness detection experiments can be reduced. Thus, the experimenter (the author) visually inspected the image set and excluded images that did not show palmprints or that were out of focus, i.e., images that would not pass palmprint authentication (corresponding to Step III in Sect. 3.3). Table 1 lists the number of images that passed palmprint authentication. In this dataset, only 104 out of 400 real images passed palmprint authentication, and no glossy paper attack or display attack image passed. This means that the photographing was not adequate in these experiments, and we will have to collect images again in a future study.
Table 1. Dataset
Image type | Resolution of attack palms | Total number of images | Number of pass authentications
Real image | ― | 400 | 104
Plain paper attack image | 2400 × 1200 dpi | 380 | 27
Glossy paper attack image | 9600 × 2400 dpi | 221 | 0
Display attack image | 1080 × 1920 pixels | 400 | 0

4.2 Results
At Step V in Sect. 3.3, the error between each image area and the template image was calculated while shifting the template image by 1 pixel from 10 pixels to 50 pixels in the X direction. Figures 4 and 5 illustrate the changes in the errors of the real and attack images in Process1 and Process2, respectively. In Figs. 4 and 5, only the characteristic image results are selected and presented for better visibility. As illustrated in Fig. 4, with regard to the image of Process1, a method could be adopted in which the threshold is set to 0.430 and the image whose minimum value in the autocorrelation score graph is higher than the threshold is judged as a real image.
As depicted in Fig. 5, with regard to the Process2 images, a method could be adopted in which the threshold is set to 0.245 and an image whose maximum value in the autocorrelation score graph is higher than the threshold is judged as a real image. In addition, liveness detection by an AND-type combination of the two results could be adopted. The results are summarized in Table 2.
Table 2. Number of images judged as real images
Image type | Number of pass authentications | Process1 | Process2 | Combined
Real image | 104 | 74 (71.2%) | 92 (88.5%) | 71 (68.3%)
Plain paper attack image | 27 | 7 (26.0%) | 10 (37.0%) | 5 (18.5%)
Fig. 4. Results for Process1
Fig. 5. Results for Process2
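Combining the two judgments described above, with the thresholds 0.430 (Process1, Fig. 4) and 0.245 (Process2, Fig. 5), the AND-type acceptance rule can be sketched as follows. The function names are ours, and each argument is the per-shift error curve of one image, as computed in Sect. 3.2.

```python
import numpy as np

T1 = 0.430  # Process1 threshold (from Fig. 4)
T2 = 0.245  # Process2 threshold (from Fig. 5)

def is_real_process1(errors1):
    """Accept as real if the minimum error over all shifts stays above T1."""
    return float(np.min(errors1)) > T1

def is_real_process2(errors2):
    """Accept as real if the maximum error over all shifts rises above T2."""
    return float(np.max(errors2)) > T2

def is_real_combined(errors1, errors2):
    """AND-type combination: both Process1 and Process2 must accept."""
    return is_real_process1(errors1) and is_real_process2(errors2)
```

Note that the AND combination trades recall for precision, matching Table 2: fewer real images are accepted (71 vs. 74 and 92), but fewer attack images slip through (5 vs. 7 and 10).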
5 Conclusion
We investigated the possibility of liveness detection based on simple image processing for palmprint authentication using images captured with a smartphone. This paper detected liveness using the Sobel filter and autocorrelation, assuming presentation attacks using printed paper and a display. We are still analyzing how image processing can be used for liveness detection and have not yet evaluated the method on unlearned samples. In addition, the processing related to palmprint area extraction and palmprint authentication was performed manually. In the future, we plan to investigate these processing steps further.
References
1. Echizen, I., Ogane, T.: BiometricJammer: method to prevent acquisition of biometric information by surreptitious photography on fingerprints. IEICE Trans. Inf. Syst. E101-D(1), 2–12 (2018)
2. Chaos Computer Club: Fingerprint biometric hacked again. https://www.ccc.de/en/updates/2014/ursel
3. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979)
4. Ota, H., Aoyama, S., Watanabe, R., Ito, K., Miyake, Y., Aoki, T.: Implementation and evaluation of a remote authentication system using touchless palmprint recognition. Multimedia Syst. 19(2), 117–129 (2013)
Procedure for the Implementation of the Manufacturing Module of an ERP System in MSME. Applied Case: Textile “Tendencias” Enterprise, UDA ERP Pedro Mogrovejo1(&), Juan Manuel Maldonado-Maldonado1,3, Esteban Crespo-Martínez1,2,3, and Catalina Astudillo2,3 1
Escuela de Ingeniería de la Producción, Universidad del Azuay, Cuenca, Ecuador [email protected], {jmaldonado,ecrespo}@uazuay.edu.ec 2 Escuela de Ingeniería en Ciencias de la Computación, Universidad del Azuay, Cuenca, Ecuador [email protected] 3 LIDI, Universidad del Azuay, Cuenca, Ecuador
Abstract. Innovation and competitiveness are key factors for the sustainability of companies in any market in which they operate, and the implementation of technological tools is invaluable for obtaining these two factors at the same time. Knowing that in Ecuador MSMEs (Micro, Small and Medium Enterprises) represent 99.55% of the productive sector, the need to offer an ERP (Enterprise Resource Planning) system with easy access and adaptability is emphasized. Such software also requires the commitment of management and multiple efforts by the staff of the organizations. This study summarizes the approach of a generic methodology, with different useful tools that will allow the subsequent adaptation of the software in an MSME.
Keywords: ERP · MSME · Competitiveness · Companies · Technological Tool
1 Introduction
Innovation and competitiveness are key factors for the sustainability of companies over time, regardless of the field in which they operate; for this reason, the implementation of technological tools or ICTs allows a company to obtain added value over its competition, in addition to generating continuous improvement within the organization. According to data from [1], the companies considered MSMEs (Micro, Small and Medium Enterprises) represent 99.55% of the productive sector, which is why business software such as an ERP (Enterprise Resource Planning) system with easy access and adaptability will allow these companies to generate added value over competitors and in turn gain a competitive advantage.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 392–397, 2020. https://doi.org/10.1007/978-3-030-39512-4_61
In a significant effort, since 2015 the University of Azuay, as part of its research program, has been developing the UDA ERP enterprise resource management software, initially linking two engineering schools: Systems and Telematics, and Production and Operations. UDA ERP is business management software that works with cloud computing technology, minimizing the computational requirements on the client side. In order to offer a solution to this business need, a study of a methodological guide for these organizations, which contribute significantly to the Ecuadorian economy, was proposed. This methodology has been presented in a generic way and validated with the implementation of the UDA ERP software in an MSME in Cuenca dedicated to the production of textiles. The proposed methodology includes several tools for information gathering, organizational management, and process management and improvement; these must be implemented within the different organizations to standardize their processes before an ERP is adopted, because an ERP requires companies to have their processes and information well organized so that the system can deliver results that support correct decision making. Regarding the magnitude of a computer system and its added value as a business contribution, according to [2], such a system is a formal set of processes that, operating on a data collection structured according to the needs of the company, collects, elaborates and selectively distributes the information necessary for the company's operation, facilitating management and control activities. On the other hand, given global trends and the importance of keeping data safe under parameters of confidentiality, availability and integrity, the data generated by companies are loaded and processed in the so-called computational cloud, which also adds simplicity of access and use for the organization's stakeholders.
Adequate protection of these data strengthens correct decision-making and the generation of business strategies [3]. According to [4], an ERP is a computer system used to identify and plan the organization's resources and thus manage customer orders in accordance with inventories and organizational processes; that is, it is a system that holistically encompasses information that is really important for the organization that uses it. This article is divided into seven sections: (i) the state of the art, where works by other authors related to the research are analyzed; (ii) the methodology, which describes the process carried out to achieve the results; (iii) the results obtained after the application of the proposed methodological guide; (iv) the discussion, as part of the validation of the proposed methodology in the implementation of the ERP software; (vi) the general conclusions and future work; (vii) the bibliography and sources consulted.
2 State of the Art
According to Badenas [5], an ERP is a system that integrates the information and processes of an organization into a single environment, and it is usually made up of modules for Material Requirement Planning, Customer Relationship Management,
Finance Resource Management, Human Resource Management and Supply Chain Management. Some benefits of an ERP, according to Monk and Bret [6], are the easy global integration of data, the integration of people with data, the elimination of the need to update and repair many computer systems separately, and the fact that management can really manage business operations and resources rather than merely moderating them. In addition, according to Ptak [7], most enterprises can expect to change their computer information system, either to a new system or through a major upgrade, at least every 3 to 5 years. On the other hand, according to the Internal Revenue Service (SRI, by its acronym in Spanish), an MSME is "the group of micro, small and medium enterprises that, according to their sales volume, social capital, number of workers, and their production level or assets have characteristics of this type of economic entities". Furthermore, [8] argue that companies use IT (Information Technologies) mainly to improve the efficiency of external processes with customers, as well as sales and the company's image abroad. They also state that the companies that have invested the most in IT and/or have the greatest endowment of these technologies are the ones that have obtained the most positive improvements, and that the companies with the greatest endowment of some ITs are larger companies. An important point in the implementation of an ERP system is the standardization that its data require, where [9] states that, to be part of such databases or informatics systems, "information has to be standardized. It must be codified according to a format which is often established in its final form at the firm level, the global level." Some of the conflicts and circumstances of the development of this type of software can be reviewed in [10], where the development of the UDA ERP is discussed.
3 Method
The present work began with planning the field work to be carried out within the selected company. The context in which the company operates was analyzed, along with the factors that could affect or condition an MSME in adopting a computer system. Once the environment surrounding this type of company was known, the company under study was visited: the technical team toured all areas of the company and talked with its employees. After this first visit, a meeting was held at the University of Azuay where the findings were analyzed together with the technical and development team of the UDA ERP, in order to re-plan the actions to be taken and investigate the tools most suitable and adaptable for an MSME. Multiple management tools and other engineering tools were applied, but above all techniques and tools for gathering information; with this information, the initial situational analysis was subsequently carried out.
Using the tools classified in Table 1, the analysis and situational diagnosis of the company was carried out. For this, the company was visited two to three times a week over a total of 12 weeks. For the analysis and processing of data, the formats of the proposed tools were applied, complemented by different sources on MSMEs operating in a context like that of the company under study.
Procedure for the Implementation of the Manufacturing Module
395
Table 1. Tools applied in the study

Gathering information: Interview; Snapshot; Time analysis; Procedure sheets
Organizational management: Porter value chain; Process interaction matrix; SWOT analysis
Process management: Process diagram; Process flow diagram; Route diagram; Man-machine analysis; Value-added analysis; Demand forecasting; Inventory management
Improvement: 5S; Brainstorming; Ishikawa diagram; PDCA cycle; Kanban
Subsequently, data not relevant for decision-making was filtered out and removed; technical tools such as the flowchart, route diagram, time analysis, value-added analysis, etc., were applied; and the degree of usefulness of the information was assessed in order to finally generate the improvement proposals for the company. The tools selected and proposed for use were socialized with those involved in managing the business processes (in this case, the managers and their subordinates), emphasizing their importance, their operation, and the results that can be achieved through their adoption and proper use. Finally, to validate the information obtained and the methodological guide, the tools were applied by the company's employees, confirming whether their understanding of them was adequate and whether they are easily adaptable.
4 Results
A methodology applicable to different MSME companies was obtained, with tools that encompass all their administrative and operational activities. The proposed methodology defines the steps to follow so that a company in this sector achieves the adoption of a comprehensive information system. In addition, productive interaction was achieved in proposing the methodological guide, evidencing its dynamism among employees, the survey and data-analysis technicians, and the developers of the UDA ERP, with constant improvements analyzed and proposed so that an MSME can obtain a competitive advantage through ERP adoption. The study also gathered evidence of the lack of technification in companies of this type, the highest-priority requirements for the adoption of computer systems, and the barriers that prevent this activity.
5 Discussion
For the development of the methodology, electronic devices are essential for gathering information: photos, videos and audio recordings allow later analysis of the environment and of the workers' dynamics within their functions, which is useful when information is not documented and depends on the opinion of the manager and others involved. On the other hand, in agreement with Crespo [3], considering the organizational culture of companies and their environment is fundamental for the success of any improvement effort, so organizational management tools are the perfect and necessary complement within organizations for the achievement of objectives, in this case adopting an ERP information system, thus agreeing with Badenas [5] and Monk [6] that such a system facilitates the organization's operations through its integrating function.
The implementation of the UDA ERP within this textile company suggests that the correct use of this computer system can mark a turning point for the organization: being an integrative technological tool, it depends on the proper functioning of all the company's operations and on the standardization of its processes, both internal and external, as stated by [9], bearing in mind that an ERP integrates all of its stakeholders. Carrying out the correct planning to propose a methodology applicable to MSME-type companies allowed the company under study, "Tendencias", to generate organizational change and awareness of the advantages that IT can bring to the business, in accordance with [8].
6 Conclusions and Further Works
It is concluded that the proposed methodology can be implemented in different companies in the MSME sector, and that generating a baseline for analysis is fundamental. The achievement of results and the success of any methodology of this type is clearly linked to the collaboration of those involved in the process, because today our society and environment depend largely on people. It is also concluded that the incorporation of technical or engineering tools requires trial and error so that the organizational culture adapts to them; patience and technical support throughout the process are therefore essential. In turn, these initially basic tools will allow the construction of a solid base of technification for correct data collection, so that in the future more sophisticated tools can be evolved and implemented in a simple way. Future work will apply the present methodological guide in different enterprises, including service and manufacturing enterprises.
References
1. EL UNIVERSO: Las mipymes representan el 99% de negocios en Ecuador. El Universo, 27 June (2019)
2. Trasobares, A.: Los sistemas de información: evolución y desarrollo. Revista de relaciones laborales, pp. 149–165 (2003)
3. Crespo, E.: Una metodología para la gestión de riesgos aplicada a las MPYMEs. In: INCISCOS, Quito (2017)
4. Heizer, J., Render, B.: Dirección de la producción y de operaciones: Decisiones Tácticas. Pearson Educación, Madrid (2008)
5. Badenas, O.: Sistemas Integrados de Gestión Empresarial: Evolución histórica y tendencias de futuro. Universitat Politècnica de València, Valencia (2012)
6. Monk, E., Bret, W.: Concepts in Enterprise Resource Planning. Course Technology Cengage Learning, Boston (2013)
7. Ptak, C.: ERP Tools, Techniques, and Applications for Integrating the Supply Chain. CRC Press, Boca Raton (2005)
8. Pérez, M., Martínez, Á., Carnicer, L., Vela, M.: Las TIC en las PYMES: estudio de resultados y factores de adopción. Economía de la Información y la Comunicación: difusión e impacto de las TIC, pp. 93–106 (2006)
9. Mayére, A., Isabelle, B.: ERP implementation: the question of global control versus local efficiency. In: ERP Systems and Organisational Change, pp. 47–58 (2008)
10. Astudillo-Rodríguez, C.V., Maldonado-Matute, J.M., Crespo-Martínez, P.E.: UDA-ERP: theory, development and conflicts. In: Kantola, J., Nazir, S. (eds.) Advances in Human Factors, Business Management and Leadership, AHFE 2019. AISC, vol. 961. Springer, Cham (2020)
Model of Emotionally Stained Pupillogram Plot

Marina Boronenko(&), Yurii Boronenko, Oksana Isaeva, and Elizaveta Kiseleva

Yugra State University, 16 Chekhov street, 628012 Khanty-Mansiysk, Russia
[email protected], [email protected], [email protected], [email protected]
Abstract. The article proposes a model of an "emotionally colored portion of the pupillogram" based on experimental data. Pupillograms were recorded from people of an older age category (50–75 years old), with representatives of both sexes present. To elicit emotions, we used accident videos that are freely available on the Internet. Analysis of the pupillograms revealed waves of attention. The structure of attention waves in pupillograms includes saccades when the gaze moves, micro-saccades when the gaze focuses on individual elements of the objects under consideration, and an emotional component, provided that the test object is significant for the individual. The peaks are well approximated by the Gaussian function, which confirms that the emotionally colored portion of the pupillogram can be described by the Gram-Charlier peak function. The resulting expression can characterize the intensity and concentration of attention.

Keywords: Pupillogram · Emotion · Test object · Spotlight · Mathematical model
1 Introduction
In the modern world, suicide is one of the ten leading causes of death; in terms of the frequency of suicide cases, Russia ranks first in Europe, with a maximum among teenagers and young people [1, 2]. According to forecasts, by 2020 suicide will rank second worldwide as a cause of death, second only to cardiovascular diseases. Organizations use test methods and questionnaires to prevent suicides; their use is one of the most common diagnostic tools in assessing suicide risk. Recently, methods have begun to emerge for identifying risk groups through intelligent search of specific web activity and in social networks [3]. It is generally accepted that depression is a cause of suicide [4]. It is known that a person's emotional reaction to the same stimulus depends on whether he or she is depressed. If a person is carefully monitored, suicide attempts can be prevented, but continuous control by a human observer is impossible. It is therefore necessary to use intelligent video surveillance systems. The video surveillance systems installed in organizations are used to resolve disputes and identify
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 398–403, 2020. https://doi.org/10.1007/978-3-030-39512-4_62
the culprits in the event of an emergency, to identify and suppress conflicts, to prevent illegal actions, and to investigate crimes recorded on video. Obviously, preventing suicides requires not only upgrading these systems but also modernizing the approach to analyzing the information received. To date, the resolution of video cameras makes it possible to remotely monitor the dynamics of pupil size. By using intelligent security systems instead of conventional surveillance systems, suicides can be prevented. Having placed the cameras in the right places and performed the appropriate calibration, one can obtain facial and pupil reaction data synchronized with an eye tracker. This will allow a comprehensive analysis of a person's mental state, identification of a depressed state, and timely intervention. For such systems, it is necessary to develop a pupillogram model that describes emotion. One sign of anxious depression is fear. The article proposes a model of a portion of a pupillogram containing the emotion "fear", based on experimental data.
2 Experimental Methods and Techniques
The first step in creating intelligent security systems that recognize a pre-suicidal state involves two tasks. The first task is to detect the emotional component in pupillograms. The second is the selection of a suitable mathematical model. A helmet was developed for the research (Fig. 1), creating a rigid coordinate connection with a video camera. This avoids increasing the error in measuring pupil size. The helmet is a frame welded from aluminum tube; for convenience, the aluminum frame is fixed with standard screws to the head mount of a welding mask.
Fig. 1. Components of the experimental setup.
The frame is equipped with an additional layer of foam. In the occipital part of the head mount is an adjustment screw that allows the helmet to be loosened or tightened on the head. Camcorders 1 and 2 are attached to the bracket with standard mounting bolts. The bolts allow the lens to be aimed at the pupil of the eye by adjusting the angle of inclination. The bracket is movably mounted on the frame, which allows
the distance between the lens and the eye to be adjusted. To exclude backlash of the arm, a guiding hole and a clip are used. Camcorder 1 registers the image of the monitor on which the video files were displayed; Camcorder 2 registers pupil size. The internal clocks of the cameras are synchronized to within hundredths of a second, which makes it possible to establish the reason for a change in pupil size. A person wearing helmet 3 is located at a distance at which the change in the illumination of the surface of the eye due to the glow of the monitor becomes insignificant [5, 6]. Participants were shown car accidents captured by dashboard cameras; the videos are freely available on the Internet. Pupillograms were recorded from people of the older age category (50–75 years), with representatives of both sexes present. All participants in the experiment were volunteers, and everyone was warned of a possible emotional experience. Most participants have farsightedness; one woman has myopia corrected by glasses. The oldest participant had a cataract, and this participant's pupillograms were not accepted. During calibration of the system, the distance was determined at which the change in the illumination of the surface of the eye, due to the glow of the monitor, does not lead to a change in pupil size. The distances between the calibration points, the angular dimensions of the test objects, and the changes in pupil size were compared. Figure 2 shows a typical spotlight track, where the focus is on a black dot shown on the laptop screen during calibration. Presenting the track as a pie chart allows the amplitude of micro- and macro-saccades to be tracked.
Fig. 2. Spotlight track when looking at point 1 and moving attention to point 2 (when it appears).
Calibration by points made it possible to establish the magnitude of the change in the size of the pupils, explained by micro-saccades, which ensure the preservation of
the image of the fixed object in the zone of best vision. When the gaze is focused on a point, the amplitude of micro-saccades does not exceed 50′, which corresponds to a change in pupil size of no more than 1.1 times. We also determined the maximum change in pupil size explained by macro-saccades (moving the gaze between points on the screen maximally distant from each other). A macro-saccade amplitude of the order of 60 leads to changes in pupil size of no more than 1.25 times. During fixations of the spotlight, objects are recognized. In the experiments, video cameras with a temporal resolution of no more than 33 ms were used, which makes it possible to recognize explicit (250 to 450 ms) and implicit (150–250 ms) gaze fixations. Therefore, simultaneous tracking of the center of attention and pupil size will allow the creation of intelligent systems whose purpose will depend on the specifics of the visual stimuli.
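The timing bands above (33 ms frames, implicit fixations of 150–250 ms, explicit fixations of 250–450 ms) lend themselves to a simple duration-based classifier. The sketch below is our illustration, not the authors' software; the movement tolerance and all function names are assumptions:

```python
# Sketch (not the authors' code): classify gaze fixations by duration using the
# thresholds quoted in the text. Each video frame spans ~33 ms, so a fixation is
# a run of consecutive frames whose gaze coordinates stay within a tolerance.

FRAME_MS = 33  # camera temporal resolution from the text

def classify_fixation(duration_ms):
    """Label a fixation by the duration bands given in the paper."""
    if 150 <= duration_ms < 250:
        return "implicit"
    if 250 <= duration_ms <= 450:
        return "explicit"
    return "other"  # too short (saccade/noise) or an unusually long dwell

def fixations_from_frames(xy, tol=5.0):
    """Group consecutive frames whose gaze point moves less than tol pixels
    into runs, then label each run by its duration. xy is a list of (x, y)
    gaze coordinates, one per frame."""
    runs, start = [], 0
    for i in range(1, len(xy) + 1):
        moved = (i == len(xy)
                 or abs(xy[i][0] - xy[i - 1][0]) > tol
                 or abs(xy[i][1] - xy[i - 1][1]) > tol)
        if moved:
            duration = (i - start) * FRAME_MS
            runs.append((start, i - 1, duration, classify_fixation(duration)))
            start = i
    return runs

# Example: 8 steady frames (~264 ms) then a jump and 5 steady frames (~165 ms)
track = [(100, 100)] * 8 + [(300, 200)] * 5
for start, end, dur, label in fixations_from_frames(track):
    print(start, end, dur, label)
```

With the example track this yields one explicit fixation (264 ms) followed by one implicit fixation (165 ms).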
3 Experiments to Identify the Emotional Components and Modeling of Pupillograms
Attention-center tracking experiments were carried out using captured video files containing an optical image of the pupil of a person who was presented with stimuli. In our case, the stimulus provoking the pupil response consisted of complex pictures whose details can, under certain conditions, be considered independent stimuli. It is also accepted that the pupillary reaction is governed by the most significant parts of the image, so the impulse affecting pupil size from one presented slide is the sum of the pulses received from the image components most significant to the subject. The center of attention was tracked in each frame via the coordinates of the center of mass of the monitor's reflection on the pupil image: the center of mass of the reflection was located in each frame and its coordinates determined. The coordinates of the center of mass of the monitor's reflection and the corresponding pupil size were used to build pupillograms and oculograms. Analysis of the obtained images was carried out in the free software FiJi; for statistical analysis we used the StatPlus program, and OriginLab 2019 for data visualization. As is known, the emotional influence of stimuli is proportional to the degree of the individual's internal relationship to the topic. The subject matter of the information carried by the test objects can be determined by pressing problems of society, pressing psychological problems, etc. Before conducting the experiment, a survey was conducted to determine the topics of concern to the subjects. In agreement with the participants, in one of a series of experiments with people of an older age group, fear was chosen as the emotion. To evoke emotion, we used accident videos that are freely available on the Internet. The video files did not contain bloody scenes or unexpected sounds.
All the events shown in the video files were predictable. The average level of intensity of emotions was considered a normal reaction, because the hypothesis was that most people are mentally balanced and tolerant. Upon completion of the experiment, all participants were alive, healthy and satisfied. To identify emotionally colored areas, pupillograms were compared with tracks of the center of attention. Typical results are shown in Fig. 3(a).
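The reflection-tracking step described above (taking the center of mass of the monitor's reflection on the pupil image as the attention coordinate) can be sketched in a few lines of NumPy. The threshold value and the toy image here are our illustrative assumptions, not the study's data:

```python
import numpy as np

# Sketch (not the authors' software): locate the bright reflection of the
# monitor on a grayscale pupil image by thresholding, then take the
# intensity-weighted centroid of the bright pixels as the attention coordinate.

def reflection_centroid(image, threshold=200):
    """Return the (row, col) center of mass of pixels brighter than threshold,
    or None if no pixel exceeds it."""
    mask = image > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = image[rows, cols].astype(float)
    return (float(np.average(rows, weights=weights)),
            float(np.average(cols, weights=weights)))

# Toy 5x5 "pupil image" with a bright 2x2 reflection in the lower right
img = np.zeros((5, 5), dtype=np.uint8)
img[3:5, 3:5] = 250
print(reflection_centroid(img))  # (3.5, 3.5)
```

Applied per frame, the sequence of centroids gives the oculogram track, while the pupil size from the same frame gives the pupillogram sample.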
Fig. 3. (a) Pupillogram plot containing emotion; (b) Gram-Charlier peak function.
The choice of function for approximating the experimental data was based on the following considerations. To describe the acuity of attention, it was proposed in [3, 4] to use the Gaussian function. Analysis of the attention curve shows that its shape is completely determined by the variance D(s): the greater the dispersion of attention, the less attention is paid to the object of interest (i.e., the stimulus). In order to focus attention on the stimulus, the variance D(s) must be minimized as far as possible. The KL model is well suited for such a description [7]. Therefore, a function containing an exponential was chosen to approximate the experimental data; with sufficient reliability, the Gram-Charlier peak function is suitable (Fig. 3b).
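As an illustration (not the authors' code or data), a common parameterization of the Gram-Charlier A-series peak multiplies a Gaussian by Hermite-polynomial correction terms for skewness (a3) and excess kurtosis (a4); setting a3 = a4 = 0 recovers the pure Gaussian. A sketch of fitting such a peak to synthetic data with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def gram_charlier_peak(x, y0, A, xc, w, a3, a4):
    """Gram-Charlier A-series peak: a Gaussian times Hermite corrections.
    a3 controls skewness, a4 excess kurtosis; a3 = a4 = 0 gives a Gaussian."""
    z = (x - xc) / w
    gauss = A / (w * np.sqrt(2 * np.pi)) * np.exp(-z**2 / 2)
    h3 = z**3 - 3 * z          # probabilists' Hermite polynomial He3
    h4 = z**4 - 6 * z**2 + 3   # probabilists' Hermite polynomial He4
    return y0 + gauss * (1 + a3 / 6 * h3 + a4 / 24 * h4)

# Synthetic stand-in for an emotionally colored pupillogram segment
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
true_params = (0.2, 5.0, 4.5, 1.2, 0.3, 0.1)  # y0, A, xc, w, a3, a4
y = gram_charlier_peak(t, *true_params) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(gram_charlier_peak, t, y, p0=(0.0, 1.0, 5.0, 1.0, 0.0, 0.0))
print(np.round(popt, 2))  # close to true_params
```

The fitted center xc and width w characterize where attention concentrated and how dispersed it was, in the spirit of the D(s) discussion above.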
4 Conclusions
An analysis of the literature shows that there are a number of urgent problems for which there are no definitive answers, for example: an objective definition of the strength and sign of emotion; the law of decrease of the strength of emotion as a function of time and distance; algebraic properties of emotions and emotional logic; algorithms and formulas of emotions; and the connection between consciousness, thinking and emotions. In addition, a shortcoming of existing models is the lack of attention researchers pay to modeling the general state of the actor, who can be influenced by several emotions simultaneously, which together form his or her psycho-emotional state at a given time. While the course of a single emotion has been studied to some extent, the translation of emotions into the general state of the actor still requires research. Our studies show that a stimulus that is significant for the individual evokes an emotional pupil reaction. The mathematical model of the emotionally colored portion of the pupillogram is well described by the Gram-Charlier peak function. The results are consistent with the KL model of emotions.
Acknowledgments. The study was carried out with the financial support of the Russian Foundation for Basic Research in the framework of research project 18-47-860018 p_a.
References
1. Vasatkina, N.N., Merinov, A.V.: Clinical practice of child-teen suicides in the Ryazan region. Tyumen Med. J. 16(3), 4–5 (2014)
2. Miller, D.N., Mazza, J.J.: School-based suicide prevention, intervention, and postvention. In: Handbook of School-Based Mental Health Promotion, pp. 261–277. Springer, Cham (2018)
3. Valverde, L., de Lera, E., Fernàndez, C.: Inferencing emotions through the triangulation of pupil size data, facial heuristics and self-assessment techniques. In: 2010 Second International Conference on Mobile, Hybrid, and On-Line Learning, pp. 147–150. IEEE (2010)
4. Lempert, K.M., Glimcher, P.W., Phelps, E.A.: Emotional arousal and discount rate in intertemporal choice are reference dependent. J. Exp. Psychol. Gen. 144(2), 366 (2015)
5. Boronenko, M., et al.: Use of active test objects in security systems. In: Advances in Neuroergonomics and Cognitive Engineering: Proceedings of the AHFE 2019 International Conference on Neuroergonomics and Cognitive Engineering, and the AHFE International Conference on Industrial Cognitive Ergonomics and Engineering Psychology, 24–28 July 2019, Washington DC, USA, p. 438. Springer (2019)
6. Boronenko, M., et al.: Methods of indication of low intensity pupil reaction on the subjectively-important stimuli. Act Nerv Super Rediviva 61(2), 49–56 (2019)
7. Glazunov, Y.T.: Modeling the dynamics of the distribution of attention in goal-setting processes. Bull. Baltic Fed. Univ. I. Kant. Ser. Philol. Pedag. Psychol. 5 (2012)
Cost-Informed Water Decision-Making Technology for Smarter Farming

Joanne Tingey-Holyoak1(&), John Dean Pisaniello1, Peter Buss2, and Ben Wiersma2

1 Sustainable Engineering, Accounting and Law Group, UniSA Business School, City West Campus, North Terrace, Adelaide, SA 5000, Australia
[email protected]
2 Sentek Pty Ltd., 77 Magill Rd, Stepney, SA 5069, Australia
[email protected]
Abstract. Around the world, producers across a variety of industries need tools and models for better measuring and monitoring water in order to guide decision making and improve productivity. Nowhere is this more the case than in irrigated agriculture. The purpose of this research is to develop a technology that integrates farmer costing data with on-site physical data provided by intelligent systems and technology to provide an account of on-farm water use. Currently, farm accounting technologies are not at the level of sophistication required to deeply consider the financial impacts of water use or loss. Water accounting technologies are not able to easily link to and integrate business systems and data to provide the necessary monetary decision-making data, nor the context to support a smarter approach to cost-informed water management. Linking soil moisture sensor data and local and remote weather station data to farm accounting in a potato case study demonstrates how smarter farm decision making can be supported by linking business systems and sensing technology.

Keywords: Soil sensors · Water productivity · Smart farming · Cost accounting · Potatoes
1 Introduction
Agriculture uses about 70% of global fresh water for irrigation [18], and pressure to produce more with less requires tools and models that increase productivity and capacity [1]. However, tools for accurately and fully costing water are challenging to develop for agriculture [7, 20]. Farming involves spatial and temporal scales that make water tracking and cost allocation challenging [4, 20]. Whilst there are water accounting models and tools in development and in use globally, they operate at macro scales [17], can be complicated to apply [10, 16], and do not integrate business systems [20]. Like other governments around the world, the Australian Government has recently allocated considerable funding under Smart Farms to support the development and uptake of tools and technologies to support farmers. However, as yet there has not been development of technology that is integrated into business decision making and data, and these un-integrated tools can even generate negative environmental outcomes
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 404–408, 2020. https://doi.org/10.1007/978-3-030-39512-4_63
at the catchment scale [12]. Contextualised and full farm water accounting is required [4, 5]. Therefore, this paper seeks to answer the question: How can cost-informed technology for smarter farming be developed?
2 Materials and Methods The study included a participatory case study on a potato farm in the Murraylands of South Australia. Case study activities included the collection of accounting, soil and climate sensing data and combining it in a way that can assist water-related decision making [5, 13]. The producer was chosen from the direct monitoring industry partner’s client base with an identified interest in enhancing water productivity [19]. Site meetings and discussions were undertaken followed by soil moisture probe and weather station installation on a center pivot irrigated field of potatoes. Sensor data included rainfall and irrigation, evapotranspiration, soil moisture, and drainage whilst accounting data included water costs, power (pumping costs), costs of labour and materials, such as fertiliser, in addition to costs of water storage, infrastructure maintenance, equipment insurance, pump/pipe loan interest, and licensing. Data was combined in a Microsoft Excel dataset once reduced to the appropriate spatial and temporal scale. Then analysis of management and financial accounting water-related information was undertaken to determine water productivity. This process not only illustrates the utility of the tool when used against ‘acceptable’ water productivity indicators but also provides increased understanding of the contribution of the data combination to understanding critical water-related costs.
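The combination step described above, reducing sensor readings to a common temporal scale and joining them with accounting records, can be sketched in plain Python. All field names and figures below are illustrative assumptions, not the project's actual schema or data:

```python
from collections import defaultdict

# Illustrative sketch: reduce sub-daily sensor readings to a daily scale,
# then join them with daily accounting records on the date.
sensor_readings = [
    ("2018-03-01 06:00", 4.0, 22.1),   # timestamp, irrigation (mm), soil moisture (%)
    ("2018-03-01 18:00", 3.0, 23.4),
    ("2018-03-02 06:00", 5.0, 21.8),
]
daily_costs = {                         # date -> (pumping cost $, water cost $)
    "2018-03-01": (38.0, 120.0),
    "2018-03-02": (41.5, 150.0),
}

# Aggregate sensor data per day: total irrigation, mean soil moisture
irrigation = defaultdict(float)
moisture = defaultdict(list)
for ts, mm, sm in sensor_readings:
    day = ts.split(" ")[0]
    irrigation[day] += mm
    moisture[day].append(sm)

# Join with accounting data and derive a cost-per-mm-applied figure
combined = {}
for day, (pump, water) in daily_costs.items():
    combined[day] = {
        "irrigation_mm": irrigation[day],
        "soil_moisture_avg": sum(moisture[day]) / len(moisture[day]),
        "cost_per_mm": (pump + water) / irrigation[day],
    }

for day, row in sorted(combined.items()):
    print(day, row)
```

In the study this joined dataset lived in Microsoft Excel; the point of the sketch is only the shape of the reduction-then-join operation.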
3 Results
Modern varieties of potatoes require frequent irrigation as they can easily be exposed to soil water deficits. The case study state, South Australia, produces 80% of the country's washed market potatoes, worth AUD 206 million [14]. The case study region has low rainfall and sandy soil, which makes keeping soils optimally moist challenging, so informing irrigation decisions with soil moisture data is an important area of focus [11]. Therefore, the first step was to set up the potato site with soil moisture probes and a weather station and monitor a season of the potato crop (Fig. 1). The case study site of 28 ha was monitored from March to June 2018. Accounting data was collected from farmer diaries and records, accounting software, and suppliers. Power for pumping pivots accounted for 12% of total variable costs but was highly variable all season, subject to half-hourly market price fluctuations ranging from $0.64/MWh to $4798.86/MWh within the March to June timeframe (Fig. 2). Actual costs of water comprised 75% of the total variable costs. Figure 2 demonstrates that beyond fixed and variable water-related costs, a new category of "hidden water-related costs" was noteworthy. These included owner labour and water quality treatment, which comprised 15% and had not previously been considered in accounting for water. Water costs in total made up 75% of yield, excluding all other input and processing costs, which highlights the significance of including all water-related financial information to better inform decision making.
Fig. 1. Weather station, sensor and solar panel in-field set up
Fig. 2. Water-related costs composition compared to total yield revenue
Water productivity was assessed for the season using the methods of Molden et al. [8] and Hussain et al. [6]:

WP_volume = Yield (kg) / Water (m³)    (1)

WP_dollars = Yield ($) / Water (m³)    (2)
A yield value for potatoes of $0.1 per kg should correspond to a water productivity of 3 to 7 kg/m³ (per Eq. 1) and 0.3 to 0.7 $/m³ (per Eq. 2). The producer's water productivity was within these ranges and at the upper end (Table 1), which is positive for the producer; however, further seasonal data is required to track change and any effect from using the tool.
Table 1. Water productivity at case study site compared to potato indicators from literature^a

                                   1. Product per unit of water applied, WP (kg/m³)   2. Gross value of product per unit of water applied, WP ($/m³)
Case study 2018 potato season      2.79                                               0.96
Potato indicator^b                 3–7                                                0.3–0.7

^a Renault and Wallender [15]; Molden et al. [8]
^b Indicator range encompasses all possible agricultural types.
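Equations (1) and (2) are straightforward to compute once seasonal totals are known. In the sketch below, the seasonal water volume is an assumed figure, with yields back-derived so the outputs match the Table 1 values; it is illustrative only, not the study's raw data:

```python
def wp_volume(yield_kg, water_m3):
    """Eq. (1): product per unit of water applied, kg/m3."""
    return yield_kg / water_m3

def wp_dollars(yield_dollars, water_m3):
    """Eq. (2): gross value of product per unit of water applied, $/m3."""
    return yield_dollars / water_m3

# Assumed seasonal totals, chosen to reproduce the Table 1 figures
water_m3 = 100_000        # assumed seasonal water applied (illustrative)
print(round(wp_volume(279_000, water_m3), 2))   # 2.79 kg/m3, as in Table 1
print(round(wp_dollars(96_000, water_m3), 2))   # 0.96 $/m3, as in Table 1
```

Comparing these outputs against the indicator bands in Table 1 is the kind of alert check the producer hoped the tool would automate.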
The producer hoped that, once implemented, the tool could provide water productivity alerts and isolate cost-saving opportunities by tracking costs, but also integrate much of the production-related data accessible freely or via subscription, such as irrigation management and soil chemistry information. An additional dataset and software framework was therefore established for data integration, providing alerts for plant stress, disease conditions, and the likelihood of low-quality irrigation, and supporting the development of scenario models and forecasts.
4 Discussion and Conclusions
The study explored collecting the data and producer insight needed for a cost-informed water decision-making tool. A baseline database for potatoes was established, which allowed understanding of full cost and the potential for tracking it across fields and seasons. In addition, the development of cost-informed alerts and forecasting is possible from the linked database developed. Water productivity whilst using the tool was assessed as being in the good range of common indicators [8, 15]; however, the cost/yield revenue analysis demonstrates profit erosion and water productivity impacts from ignoring the "hidden costs" of water, such as owner labour and water quality treatment [9]. The study advances understanding of how smarter farming can be promoted through better cost information linked to water use and productivity, and how tools that seek to achieve this can be integrated into agricultural businesses.
Acknowledgments. Great thanks to the very generous case study participants who graciously shared their time and knowledge. Thanks also to the University of South Australia for Research Themes Investment Funding.
References
1. Altobelli, F., Monteleone, A., Cimino, O., Dalla Marta, A., Orlandini, S., Trestini, S., Toulios, L., Nejedlik, P., Vucetic, V., Cicia, G., Panico, T.: Farmers' willingness to pay for an environmental certification scheme: promising evidence for water saving. Outlook Agric. (2019). https://doi.org/10.1177/003072701984105
2. Food and Agriculture Organisation (FAO) of the United Nations: Water Report 43: Water Accounting and Auditing Sourcebook. FAO, Geneva (2017)
3. Hoekstra, A.Y., Chapagain, A.K., Aldaya, M.M., Mekonnen, M.M.: The Water Footprint Assessment Manual: Setting the Global Standard. Earthscan, London (2011)
4. Hussain, I., Turral, H., Molden, D.: Measuring and enhancing the value of agricultural water in irrigated river basins. Irrig. Sci. 25(3), 263–282 (2007)
5. Jack, L.: The adoption of strategic management accounting tools in agriculture post subsidy reform: a comparative study of practices in the UK, the US, Australia and New Zealand. Discussion Paper. CIMA, London (2009)
6. Molden, D., Oweis, T.Y., Pasquale, S., Kijne, J.W., Hanjra, M.A., Bindraban, P.S., Bouman, B.A., Cook, S., Erenstein, O., Farahani, H., Hachum, A.: Pathways for increasing agricultural water productivity. IWMI, 612-2016-40552 (2007)
7. Moran, J.: Business Management for Tropical Dairy Farmers. CSIRO Publishing, Canberra (2009)
8. Mulla, D.J.: Twenty five years of remote sensing in precision agriculture: key advances and remaining knowledge gaps. Biosyst. Eng. 114(4), 358–371 (2013)
9. Pardossi, A., De Pascale, S., Rouphael, Y., Gallardo, M., Thompson, R.B.: Recent advances in water and nutrient management of soil-grown crops in Mediterranean greenhouses. In: GreenSys2015: International Symposium on New Technologies and Management for Greenhouses, vol. 1170, pp. 31–44 (2015)
10. Perry, C.: Water security – what are the priorities for engineers? Outlook Agric. 39(4), 285–289 (2010)
11. Perry, C.: Accounting for water use: terminology and implications for saving water and increasing production. Agric. Water Manag. 98(12), 1840–1846 (2011)
12. Potatoes South Australia: South Australia Snapshot. Potatoes South Australia, Adelaide (2017)
13. Renault, D., Wallender, W.W.: Nutritional water productivity and diets. Agric. Water Manag. 45(3), 275–296 (2000)
14. Srinivasan, M.S., Bewsell, D., Jongmans, C., Elley, G.: Just-in-case to justified irrigation: applying co-innovation principles to irrigation water management. Outlook Agric. 46(2), 138–145 (2007)
15. Tingey-Holyoak, J.L., Pisaniello, J.D., Buss, P., Wiersma, B.: Water productivity accounting in Australian agriculture: the need for cost-informed decision-making. Outlook Agric. (2019). https://doi.org/10.1177/0030727019879938
16. United Nations Intergovernmental Panel on Climate Change (UNIPCC): Climate Change 2007. Cambridge University Press, Cambridge (2007)
17. Welsh, R., Rivers, R.Y.: Environmental management strategies in agriculture. Agric. Hum. Values 28(3), 297–302 (2011)
18. Young, M.D., McColl, J.C.: Double trouble: the importance of accounting for and defining water entitlements consistent with hydrological realities. Aust. J. Agric. Resour. Econ. 53(1), 19–35 (2009)
A Review on the Role of Embodiment in Improving Human-Vehicle Interaction: A Proposal for Further Development of Embodied Intelligence

Hamid Naghdbishi(&) and Alireza Ajdari

University of Tehran, Enghelab Sq., 16th Azar Street, Tehran, Iran [email protected]
Abstract. This article examines how embodiment discourse can improve the design of an intelligent system, in this case human-vehicle interaction. The proposal is based on the studies of Maurice Merleau-Ponty and Paul Dourish as applied to intelligent system design. On this basis, the concept of designerly embodied intelligence is introduced and applied to such a discourse. While the concept of embodied interaction sounds new and applied, the role of design thinking in improving this concept still needs to be discussed and elaborated. Therefore, vehicle design, and especially the product-planning phase, needs to be elaborated on the basis of this discussion. We review how intelligent systems could be improved if the concept of embodiment were applied to them, and how design thinking and designerly ways of knowing and interacting would enrich both embodiment and human-vehicle interaction. It is claimed that design discourse can be very helpful not only in vehicle planning but also in improving an intelligent and appropriate system design. The role of design thinking in intelligent vehicle design is thus based on the concept of appropriateness: an intelligent design directed toward the needs of the target user, no more and no less. Seeking new potentials of intelligent systems in vehicle design and supporting open innovation could be further outcomes of this proposal. Based on this argumentation, some proposals are sketched in order to visualize possible solutions for designerly embodied intelligence in human-vehicle interaction.

Keywords: Design thinking · Embodied intelligence · Human-vehicle interaction · Appropriate innovation
1 Introduction

Analyzing the dynamic and complex activities of vehicle drivers in order to identify driver activities is one of the most important issues in machine learning. Driver activity detection has many applications, such as driving-skill analysis, group statistics, automatic detection of sensitive movements and behaviors, and automatic control of the car's steering (when the driver is unsafe). Although much research has been done on the analysis of driving-based and in-vehicle activities, this design and research has always been carried out from the
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 409–414, 2020. https://doi.org/10.1007/978-3-030-39512-4_64
designer's point of view. In this research, with the help of intelligent and design-based "Open Innovation", which focuses on the behaviors of people's lives, driving modes are processed in a way that contributes to the reduction of human error through the analysis of driving guidance. The detection algorithm converts video into a visual system based on machine learning and artificial intelligence in the vehicle and transmits messages to the driver.
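The pipeline just described (classify the driver's state from video frames, then warn the driver) can be illustrated with a minimal, hypothetical monitoring loop. The stub classifier, class labels, and confidence threshold below are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical set of driver states that should trigger a warning.
UNSAFE = {"drowsy", "phone_use", "eyes_off_road"}

@dataclass
class Detection:
    label: str
    confidence: float

def monitor(frames: List[object],
            classify: Callable[[object], Detection],
            threshold: float = 0.8) -> List[str]:
    """Run the classifier on each video frame and collect alert messages
    for unsafe driver states detected above the confidence threshold."""
    alerts = []
    for t, frame in enumerate(frames):
        det = classify(frame)
        if det.label in UNSAFE and det.confidence >= threshold:
            alerts.append(f"t={t}: warn driver ({det.label})")
    return alerts

# Stub standing in for a trained model: flags every third frame as 'drowsy'.
def stub(frame):
    return Detection("drowsy" if frame % 3 == 0 else "attentive", 0.9)

alerts = monitor(list(range(6)), stub)
```

In a real system the stub would be replaced by a trained video model and the alert list by in-vehicle feedback to the driver.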
2 Problem Formulation

Principles: Embodiment discourse can improve the design of an intelligent system through human-machine interaction and provide a sense of satisfaction when driving. Explaining and analyzing the problem is helped by adapting the interaction through "Open Innovation". The modularity of mind assumes that cognitive processes are composed of parts that bear particular relations to one another, and that these parts form the information-processing model for cognitive abilities. Epiphenomenalism deals with the relationship between mind and body; according to this theory, all mental phenomena are caused by physical events and phenomena in the brain. Aspectual shape: whenever we perceive or think about something, the conscious experience of the external object is shaped by certain features in our mind [1].
3 Method

This research, which seeks suggestions for further developing artificial intelligence for human-vehicle interaction, uses a quantitative research method in terms of research design, with a correlational design.
4 Design Thinking

Design thinking can be successfully applied to all topics as a tool to solve problems and foster innovation [2]. It can also be used as a tool for insight, new understanding, and creative solutions to complex problems [3]. Open innovation helps accelerate internal innovation and expand external markets for its use [4]. As such, open innovation encompasses two dimensions of innovation and information:

• Technology exploitation
• Technology exploration

The open innovation paradigm develops innovation that is inherently focused on creating value through the extraction of unused technologies and ideas [5]. Design thinking is an effective tool for developing innovative new products, services and processes [6]. It is a "human-centered" approach to design, that is, an iterative process of listening, observing, sampling and experimenting according to user needs [7].
The concepts and language of design thinking have been used to improve a variety of processes, including product design [8], service design, food-product innovation [9] and social services [10]. Design thinking is essentially a human-based innovation process that emphasizes ideation and rapid prototyping [11].
5 Open Innovation

The theory of open innovation was introduced by Chesbrough [12] as a new perspective on innovation. It is a theory of coexistence with the external environment that enables firms to make more systematic use of innovative external ideas, and it enables users to align their activities with innovative activities [13]. Open innovation combines internal and external ideas into architectures and systems whose requirements are defined by a business model [14].
6 Design Thinking and Open Innovation

Design thinking is often contrasted with alternative innovation strategies, such as technology-based innovation and design by designers [15]. Design thinking can be effective in creating the innovation process by providing a "safe" space [16]. It is a faster and cheaper process, a set of techniques that reduces the cognitive load of innovation teams [9], and it has the ability to solve complex problems [17]. Some argue that design thinking can be used by non-designers and can provide companies with a competitive advantage through improved offerings [18]. By gathering feedback from product users, an innovation team can reach a full understanding of user preferences and expectations for new products [19]; in this spirit, the present research considers better and more human interaction between the vehicle user and artificial intelligence. Open innovation in design thinking also involves several concepts related to art, referred to as "engineering serendipity" [20]. The purpose of this research is to draw inspiration from phenomenological theories of perception, since they allow fundamental insights to be explored and applied to the concept of meaning in design and related technologies. These views concern conceptual, dynamic, subjective parts of the world. Phenomenology helps in designing such views by developing the role of intersubjectivity in rationalizing emotion. The phenomenology of subjective, tonal and contextual (context-based) layers engages individuals with products in a dynamic, rich and complex world [21]. Maurice Merleau-Ponty goes even further and states that we need the body to experience the world [22]. Synesthesia, a mixture of the senses and an instinctive link between sound, imagery, and aroma, is used in design as a good mechanism for the development of an artwork or artifact [23, 24].
Hubert Dreyfus applied philosophy to artificial intelligence, and Paul Dourish introduced the work of phenomenologists such as Merleau-Ponty and Alfred Schütz to a variety of design fields through his book "Where the Action Is" [21]. In design, phenomenology transforms and gives value to the way of thinking in the design and creation of works [22].
7 Embodied Design Thinking

The conceptual-blending process, together with the embodied mind and embodied concepts, gives rise to embodied design thinking: a process that seeks to resolve qualitative differences in changing situations.
8 Conclusion

According to these studies, if, as shown in Fig. 1, embodiment is incorporated into the design-thinking process and then reconciled with open innovation theory, the relationship of human-centered design to the vehicle will achieve the desired result. In this algorithm, human, body, innovation, and artificial intelligence are all incorporated into the design-thinking process.
Fig. 1. The design process is based on a combination of several separate analytical processes
9 Result

This paper proceeds through a case study on embodied intelligence, open innovation and artificial intelligence, investigating the reduction of driver error achieved by artificial intelligence with respect to the problem of embodiment, together with ideation through open innovation and the collection of users' demands. The results of the studies are presented as a design algorithm that can be effective in reducing the errors that lead to traffic accidents. The design is intended for the vehicle's rearview mirror, acting on the behavior of drivers through sensors embedded in the car. The machine-learning and embodied-intelligence-driven performance of the vehicle made users less anxious while driving and reduced the financial damage caused by car crashes and roadblocks. Machine learning, through the perception of each individual's driving behavior as well as the demands of the drivers under study, has formed an organized structure for exploring and bringing together all the elements involved in embodied intelligent design.
References

1. Searle, J.R.: The Rediscovery of the Mind. MIT Press, Cambridge (1992)
2. Schmiedgen, J., Rhinow, H., Köppen, E., Meinel, C.: Parts Without a Whole? The Current State of Design Thinking Practice in Organizations (2015)
3. Owen, C.L.: Design thinking: notes on its nature and use. Des. Res. Q. 1(2), 16–27 (2007)
4. Chesbrough, H.W., Crowther, A.K.: Beyond high tech: early adopters of open innovation in other industries. R&D Manag. 36(3), 229–236 (2006)
5. Chesbrough, H.W., Schwartz, K.: Innovating business models with co-development partnerships. Res. Technol. Manag. 50, 55–59 (2007)
6. Gruber, M., de Leon, N., George, G., Thompson, P.: Managing by design: from the editors. Acad. Manag. J. 58, 1–7 (2015)
7. Cross, N.: Designerly ways of knowing. Des. Stud. 3, 221–227 (1982)
8. Brown, T.: Design thinking. Harv. Bus. Rev. 86(6), 84 (2008)
9. Olsen, N.V.: Design thinking and food innovation. Trends Food Sci. Technol. (2014). http://dx.doi.org/10.1016/j.tifs.2014.10.001
10. Liedtka, J., King, A., Bennett, D.: Solving Problems with Design Thinking: Ten Stories of What Works. Columbia University Press (2013)
11. Lockwood, T.: Design thinking in business: an interview with Gianfranco Zaccai. Des. Manag. Rev. 21, 16–24 (2010)
12. Elmquist, M., Fredberg, T., Ollila, S.: Exploring the field of open innovation. Eur. J. Innov. Manag. 12(3), 326–345 (2009)
13. Chesbrough, H., Crowther, A.K.: Beyond high tech: early adopters of open innovation in other industries. R&D Manag. 36(3), 229–236 (2006)
14. Chesbrough, H.W.: Open Innovation: The New Imperative for Creating and Profiting from Technology. Harvard Business Press (2003)
15. Verganti, R.: Design, meanings, and radical innovation: a metamodel and a research agenda. J. Prod. Innov. Manag. 25(5), 436–456 (2008)
16. Docherty, C.: Perspectives on design thinking for social innovation. Des. J. 20(6), 719–724 (2017). https://doi.org/10.1080/14606925.2017.1372005
17. Acklin, C.: Model: a framework to describe and measure the absorption process of design knowledge by SMEs with little or no prior design experience. 22(2), 147–160 (2013)
18. Acklin, C.: Design-driven innovation process model. Des. Manag. J. 5(1), 50–60 (2010). http://doi.org/10.1111/j.1948-7177.2010.00013.x
19. Von Hippel, E.: Sticky information and the locus of problem-solving: implications for innovation. Manag. Sci. 40(4), 429–439 (1994)
20. Lindsay, G.: Engineering serendipity. The New York Times, 5 April 2013. Accessed 3 Nov 2013
21. Dourish, P.: Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Boston (2001)
22. Merleau-Ponty, M.: Phenomenology of Perception (trans. Smith, C.). Humanities Press, New York (1962)
23. Smets, G.J.F., Overbeeke, C.J.: Scent and sound of vision: expressing scent or sound as visual forms. Percept. Mot. Skills 69(1), 227–233 (1989)
24. Smets, G., Overbeeke, C.J., Gaver, W.W.: Form-giving: expressing the nonobvious. In: Proceedings of CHI 1994, pp. 79–84. ACM Press, New York (1994)
Analysis of Topological Relationships of Human

Jia Zhou, Xuebo Chen(&), and Zhigang Li

School of Electronics and Information Engineering, University of Science and Technology Liaoning, Anshan 114051, Liaoning, People's Republic of China [email protected], [email protected], [email protected]
Abstract. In order to study the network characteristics of human systems, this paper analyzes the interactions between individuals and cluster behavior based on the Lennard-Jones model. Firstly, we simulate a multi-agent system and randomly distribute individual locations in the system. Then, keeping the total number of individuals and interactions unchanged, we change the number of interconnections. By analyzing the resulting cluster behavior, we summarize the impact of the number of interconnections on system stability. Finally, combining the above factors, some specific measures are proposed to address the stability problem of complex systems.

Keywords: Multi-agent systems · Cluster behavior · Topological interconnection
1 Introduction

Multi-agent systems, as an important branch of artificial intelligence, have become a hot spot in many fields. Since the famous biologist von Bertalanffy proposed the concept of complexity [1], the complexity problem has received wide attention and much progress has been made, such as von Neumann's cellular automata and Shannon's information theory. At present, modeling methods based on multi-agent systems have become one of the important approaches for studying complex systems.

Cluster self-organizing behavior is widespread in the biological world and plays a vital role in the survival of living things. It also plays an important role in the control of artificial robots and in human society. We often find that many biological individuals are simple in structure and weak in strength, yet once they act in groups they accomplish many things far beyond their individual abilities; this is how they survive brutal competition. In the process of mutual cooperation, individuals divide the labor and contribute their own strength; they exchange information [2] with each other and improve work efficiency. In artificial robots, each agent is required to take part in the self-organizing control process in order to complete formation arrangement, area-coverage detection, or cooperative encirclement while remaining stable under interference. Large-scale distributed control in multi-agent systems prevents communication failure between individual agents from affecting the operation of the entire system, thus providing greater flexibility and scalability. For example, the current Internet is a multi-agent system that does not let the damage of some routers affect the
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 415–420, 2020. https://doi.org/10.1007/978-3-030-39512-4_65
communication of the network due to the damage of some routers. This distributed control approach is more robust than centralized control [3, 4].
2 Method

For the cluster self-organizing movement of multi-agent systems, the key is to study the autonomous movement of each individual in order to study the movement of the cluster. We build rules and models of individual free movement based on loose-preference rules to simulate the emergence of cluster movement, for example by simulating the interrelationships between individuals in natural clusters. The rule is that, while moving toward the target, the robots form a formation that tends to disperse locally while the group gathers at the target point [5]. It is therefore called the individual free-movement rule based on loose-preference behavior.

Potential energy is energy stored in a system that can be released or converted into other forms of energy. It is a state quantity, and it is not possessed by a single object but shared by interacting objects: the energy that depends on the relative positions of objects is called potential energy [6]. The earliest potential-energy function was proposed in physics to describe this quantity. The artificial potential field method [7] can solve not only the motion problem of a single agent but also coordination and cooperation problems in multi-agent and multi-mobile-robot systems. By adding an artificial potential field to each agent in the system, the motion of a single individual behaves like the force between molecules: at the equilibrium position the molecules are balanced, below it the interaction appears as a repulsive force, and above it as an attractive force.
Therefore, it can simulate the two basic rules of cluster movement in biological groups: attraction between individuals that are far apart and repulsion between individuals that are too close, so that individuals in the system move toward the expected configuration. At that point, the total potential-energy field of all individuals is at the global minimum.

The emergence of systems often requires a variety of factors. When studying the mechanism of emergence, additional external factors should not be introduced merely to generate emergent phenomena, because this makes the mechanism difficult to explain. Finding a simple and effective emergent system as the research object is the key to modeling emergence. From a broad summary it can be seen that emergence is an inherent property of the system, but it exhibits different complex behaviors under external disturbances. In this model, external interference is not considered and only the effects of internal interactions are taken into account. The starting point of the study is to approximate the interconnection rules between individuals by reference to real examples in nature. Using graph theory, the interconnection between the system and individuals can be described simply and effectively. Two-dimensional loose-preference behavior rules were used in two-dimensional planes for collaborative tasks such as formation, transport, and encirclement. In order to study emergent generation and control methods, the
two-dimensional interconnection rules are extended into three-dimensional interconnection rules.

$$ f_i = \sum_{j \in N_i} f\left( \lVert v_j - v_i \rVert \right) e_{ij} \qquad (2.1) $$
where $ e_{ij} = a_{ij} \frac{v_j - v_i}{\lVert v_j - v_i \rVert} $ describes the interconnection between $v_i$ and $v_j$ as a unit vector scaled by the weight $a_{ij}$; the interconnection force along this unit vector can be described by the Lennard-Jones function [8] or by the functions mentioned above. A self-organizing system based on loose-preference behavior rules has the simplest dynamic equation

$$ \dot{v}_i = f_i \qquad (2.2) $$
3 The Simulation Analysis

To simulate the cluster motion of multi-agent systems, the model [9] of an interconnecting force that repels at close range and attracts at long range can be described as

$$ f(x) = \begin{cases} w_r \left( \frac{1}{x} - \frac{1}{r} \right), & r_l < x \le r \\ w_a \sin\left( \frac{(x - r)\pi}{r_h - r} \right), & r < x < r_h \\ 0, & \text{otherwise} \end{cases} \qquad (3.1) $$

where $r$ is the individual's expected distance ($r = 5$), $r_l$ and $r_h$ are the minimum and maximum perceived distances of the individual, respectively, and $w_r$ and $w_a$ are the interconnecting-force parameters. In general, the motion patterns of cases in the natural world are approximated by tuning the parameters of the model. The following simulation diagrams show the emergent results of multi-agent systems under topological interconnection. In Figs. 1, 2, 3 and 4, all simulations are shown at step number = 20000, and the number of individuals N is 11, randomly distributed in three-dimensional space. The small black dots in the figures are the starting positions of the individuals, the big black dots are the end positions, and the colored curves are the motion trajectories. Note that the thick red line marks the average of the final positions of all individuals. It is not difficult to find that when the number k of interconnections is small, individual positions barely change; for example, in Fig. 1, because the number of interconnections is small, the interactions between individuals are weak, and it is difficult for them to gather in a short time. In Fig. 2, the phenomenon of individual aggregation is obvious, and the aggregation density becomes larger at the same time.
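The loose-preference dynamics above can be sketched in a few lines: each agent follows the first-order dynamics of Eq. (2.2), where the force sums the piecewise pair force of Eq. (3.1) over its k nearest neighbours. The parameter values, the sign convention on the repulsive branch, and the Euler integration below are assumptions made for illustration, not the authors' exact simulation:

```python
import numpy as np

def pair_force(x, r=5.0, r_l=0.5, r_h=40.0, w_r=1.0, w_a=1.0):
    """Piecewise interconnection force in the spirit of Eq. (3.1).
    Sign convention assumed: negative repels (x below the expected
    distance r), positive attracts (x between r and r_h)."""
    if r_l < x <= r:
        return -w_r * (1.0 / x - 1.0 / r)               # short-range repulsion
    if r < x < r_h:
        return w_a * np.sin((x - r) * np.pi / (r_h - r))  # mid-range attraction
    return 0.0

def step(pos, k, dt=0.01):
    """One Euler step of the dynamics v'_i = f_i with a
    k-nearest-neighbour interconnection topology."""
    n = len(pos)
    new = pos.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # k nearest neighbours, skip self
        f = np.zeros(pos.shape[1])
        for j in nbrs:
            if d[j] < 1e-9:
                continue                     # guard against coincident agents
            e_ij = (pos[j] - pos[i]) / d[j]  # unit vector of Eq. (2.1)
            f += pair_force(d[j]) * e_ij
        new[i] = pos[i] + dt * f
    return new

def mean_spread(p):
    """Mean distance of agents from their centroid."""
    return np.linalg.norm(p - p.mean(axis=0), axis=1).mean()

rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(11, 3))     # 11 agents in 3-D space
spread_start = mean_spread(pos)
for _ in range(2000):
    pos = step(pos, k=6)
spread_end = mean_spread(pos)                # cluster contracts toward spacing r
```

With k = 6 the agents aggregate, mirroring Fig. 2; sweeping k would let one probe the unstable regimes reported at k = 7 and 9.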
Fig. 1. Cluster motion (k = 1)
Fig. 2. Cluster motion (k = 6)
In Figs. 3 and 4, it is worth noting that at k = 7 and 9 the emergent pattern appears spirally curved, presenting a dynamic phenomenon. In Fig. 2, all individuals are gathered and in a static state. Interestingly, the individuals almost all gather to form a circle with substantially equal distances between individuals. It is possible that cohesion is greatest when all individuals interact.
Fig. 3. Cluster motion (k = 7)
Fig. 4. Cluster motion (k = 9)
Figures 5 and 6 are graphs of the forces on the system and on individuals under the topological interconnection relationship, in which the blue thick line indicates the resultant force on the system, the red thick line indicates the maximum force on an individual, and the other lines indicate the forces on individual agents. According to the graph analysis, when k = 6 the resultant force on the system is 0 and the system is in dynamic equilibrium; when k = 7 the resultant force on the system is constant rather than zero, from which it can be inferred that the system is also subjected to a resistance equal in magnitude to this force, again making the system dynamically balanced. In Fig. 3, the system moves along an arc; we call this phenomenon "spiral emergence".
Fig. 5. Force of the system (k = 6)
Fig. 6. Force of the system (k = 7)
4 Conclusion

The stability of a multi-agent system is affected by many factors. When the total number of individuals in the system remains unchanged and the number of neighboring individuals each individual contacts is small, clusters aggregate and quickly reach stability. As the number of contacts between individuals and their neighbors increases, the cluster-aggregation phenomenon becomes more obvious, and the cluster can still reach stability. However, when the number of neighbors is 7 or 9, the cluster exhibits spiral motion and the system is unstable. It can be seen from the above analysis that some numbers of interconnections make the system unstable. In such cases, we can increase or decrease the number of individuals connected with neighboring individuals to stabilize the system.

Acknowledgments. The research reported herein was supported by the NSFC of China under Grants No. 71571091 and 71771112.
References

1. Cowan, G., Pines, D., Meltzer, D.E. (eds.): Complexity: Metaphors, Models, and Reality. Perseus Books, Cambridge (1994)
2. Moreau, L.: Stability of multi-agent systems with time-dependent communication links. IEEE Trans. Autom. Control 50(2), 169–182 (2005)
3. Gazi, V., Passino, K.M.: Stability analysis of social foraging swarms. IEEE Trans. Syst. Man Cybern. Part B Cybern. 34(1), 539–557 (2004)
4. Gazi, V., Passino, K.M.: Stability analysis of swarms. IEEE Trans. Autom. Control 48(4), 692–697 (2003)
5. Cortes, J., Bullo, F.: Robust rendezvous for mobile autonomous agents via proximity graphs in arbitrary dimensions. IEEE Trans. Autom. Control 51(8), 1289–1298 (2006)
6. Su, H.S., Wang, X.F., Chen, G.R.: Rendezvous of multiple mobile agents with preserved network connectivity. Syst. Control Lett. 59(5), 313–322 (2010)
7. Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Rob. Res. 5(1), 90–98 (1986)
8. Davis, B.D., Mingioli, E.S.: Mutants of Escherichia coli requiring methionine or vitamin B12. J. Bacteriol. 60(1), 17–28 (1950)
9. Gerkey, B., Vaughan, R.T., Howard, A.: The player/stage project: tools for multi-robot and distributed sensor systems. In: Proceedings of the 11th International Conference on Advanced Robotics, vol. 1, pp. 317–323 (2003)
Impact of Technological Innovation on the Productivity of Manufacturing Companies in Peru

Julio César Ortíz Berrú1, Cristhian Aldana Yarlequé2(&), and Lucio Leo Verástegui Huanca3

1 Banco Central de Reserva del Perú, Piura, Peru [email protected]
2 Universidad Nacional de Frontera, Sullana, Piura, Peru [email protected]
3 Universidad Complutense de Madrid, Madrid, Spain [email protected]
Abstract. We estimate the relationship between investment in innovation activities, innovation outcomes, and productivity in Peruvian manufacturing firms, using the multi-equation CDM model, which covers the entire innovation process. A quantile-regression approach was applied to data from the II National Survey of Innovation in the Manufacturing Industry 2015 in Peru. The findings are that technological innovation is associated with a 56% increase in firm productivity. The quantile-regression approach also shows that the effect of technological innovation on firm productivity is heterogeneous, with the effect increasing for larger firms. If investment in technological activities increases by 1%, labor productivity increases by 0.22%, and the returns to innovation depend on the firm's position in the productivity distribution.

Keywords: Technological innovation · Productivity · Manufacturing sector · CDM
1 Introduction

The relationship between productivity and innovation was one of the first topics studied in the work of Schultz and Griliches. Since then, a considerable number of empirical and theoretical works have been produced on the subject. In some of the most recent theoretical models, a fundamental role has been attributed to investment in innovation activities as a driver of productivity and, therefore, of economic growth [1, 2]. Most of the empirical work on the relationship between innovative activities and productivity has focused on developed countries [3–6]. Studies in developing countries are mostly qualitative, and no clear statistical patterns can be inferred from them. Hence the great importance, in the case of developing countries, of producing the largest possible number of country-specific studies [7].

The goal of this paper is to evaluate the relationship between investment in innovation activities and productivity. Firstly, Peru is in a lagging position in terms
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 421–426, 2020. https://doi.org/10.1007/978-3-030-39512-4_66
of innovation initiatives: R&D spending represents on average 0.1% of sales and 4.8% of spending on innovation activities at the firm level, percentages lower than those of OECD countries [8, 9]. Secondly, there is at present no research in the country on the relationship between productivity and innovation based on surveys of innovation in the manufacturing industry. The CDM model was used to evaluate the empirical relationship between investment in innovation activities, the results of innovation, and productivity in Peruvian manufacturing firms. This approach is based on a multi-equation model that covers the entire innovation process: the determinants of firms' decisions to adopt innovation initiatives, the results of those initiatives, and their impact on productivity. The data come from the II National Survey of Innovation in the Manufacturing Industry 2015 in Peru, which contains quantitative and qualitative information for the period 2012–2014 [10–12]. Understanding the relationship between technological innovation1 and productivity can not only guide the performance of firms at the sector level, but can also serve as a guideline for policy makers to achieve better economic and industrial performance of our country.
2 Materials and Methods

The research follows the approach of Crépon, Duguet and Mairesse, known as the CDM model. The approach is based on a multi-equation model that covers the entire innovation process. The reference model consists of four equations: (i) the firms' decision to invest in innovation activities, (ii) the intensity of investment in innovation activities, (iii) the knowledge production function, which links the intensity of innovation to its results, and (iv) the production function, in which the firm's productivity depends on the results of technological innovation [2]. The CDM model tries to solve the problems of selection bias2 and endogeneity in the innovation and productivity functions. The following equation represents firms' innovation effort $IE_i^{*}$:

$$ IE_i^{*} = z_i' \beta + e_i \qquad (1) $$
where the subscript $i$ indexes firms, $IE_i^{*}$ is an unobserved latent variable, $z_i$ is a vector of determinants of innovation effort, $\beta$ is a vector of parameters and
1 In this study, technological innovation is defined following the concept of the Oslo Manual; that is, technological innovation, also called product or process innovation, comprises nine activities: (i) internal Research and Development (R&D) activities; (ii) acquisition of external R&D; (iii) acquisition of capital goods; (iv) hardware acquisition; (v) software acquisition; (vi) technology transfer; (vii) industrial design and engineering; (viii) training for innovation activities; and (ix) market research for the introduction of innovations.
2 The selection problem is that, in each period, we only keep the firms that reported investment in innovation activities. Eliminating firms with zero investment in innovation activities would bias the sample.
Impact of Technological Innovation on the Productivity
e_i is an error term. IE*_i is proxied by the logarithm of innovation activity expenditure per worker, denoted IE_i, which is observed only if the firm makes (and reports) that expense. Eq. (1) therefore cannot be estimated directly without the risk of selection bias. Instead, the following selection equation (2) is assumed, describing whether the firm decides to make (and/or report) the investment in innovation:

$$ID_i = \begin{cases} 1 & \text{if } ID_i^{*} = w_i'\alpha + \varepsilon_i > c \\ 0 & \text{if } ID_i^{*} = w_i'\alpha + \varepsilon_i \le c \end{cases} \qquad (2)$$
where ID_i is a binary variable representing the innovation investment decision: ID_i equals zero for firms that do not invest in innovation activities and one for firms that do. ID*_i is a latent variable expressing the innovation investment decision, which is made when it lies above a threshold c; w_i is a vector of explanatory variables of the innovation investment decision (see footnote 3), α is the vector of parameters of interest, and ε_i is an error term. Equation (3) expresses the observed intensity of investment in innovation activities IE_i (see footnote 4). Assuming that the error terms have zero mean, with σ²_ε = 1 and correlation coefficient ρ_eε, the system of Eqs. (2) and (3) is estimated as a generalized Tobit model by maximum likelihood:

$$IE_i = \begin{cases} IE_i^{*} = z_i'\beta + e_i & \text{if } ID_i = 1 \\ 0 & \text{if } ID_i = 0 \end{cases} \qquad (3)$$
Equation (4) is the innovation (or knowledge) production function, where TI_i equals 1 when the company has introduced an innovation; IE*_i is the predicted value of the company's innovative effort from the previously estimated generalized Tobit equations; x_i is a vector of other determinants of knowledge production (see footnote 5); γ and δ are parameter vectors of interest; u_i is an error term; and F is the standard normal cumulative distribution function.

$$TI_i = F(IE_i^{*}\gamma + x_i'\delta + u_i) \qquad (4)$$
Equation (5) is the production function, which relates innovation to labor productivity. Firms are assumed to adopt a Cobb-Douglas technology, usually assumed to have constant returns to scale. The dependent variable y_i is the firm's labor productivity, measured by the logarithm of sales per worker in the last year of the
Footnote 3: The following independent variables were used: company experience, exports per worker in the initial period, participation of foreign capital in the initial period, ratio of skilled workers, links between the company and organizations and institutions related to science and technology, concentration and market share in the initial period, and the size of the firm.

Footnote 4: The innovative effort equation includes all the variables of Eq. (3) (except the size of the company) plus the following variables: demand incentives (demand pull), supply incentives (technology push), sources of information, financial restrictions, public resources, property rights, and chains.

Footnote 5: The following variables were used: company size (logarithm of the number of workers), participation of foreign capital in the initial period, and public resources.
J. C. Ortíz Berrú et al.
survey (2014); k_i is the logarithm of physical capital per worker in the initial year (approximated through investment in physical capital per worker); and TI_i enters as an explanatory variable capturing the impact of innovation on productivity levels (this variable is the prediction from the knowledge production function obtained in Eq. (4)).

$$y_i = \pi_1 k_i + \pi_2 TI_i + v_i \qquad (5)$$
In all the equations, five sectoral dummy variables are incorporated: capital goods, consumer goods, intermediate goods, primary manufacturing, and services. In addition, all the equations include a variable that controls for geographic location, equal to 1 when the company is located in Lima or Callao and 0 otherwise. Finally, all estimates include only those companies that reported having more than ten workers at the beginning of the period.
3 Results

The results in Table 1 suggest that technological innovation is on average associated with a 56% increase in labor productivity. To verify the robustness of these results, the same model was estimated using the prediction of the innovative effort. On average the impact of technological investment is 0.22: if investment in technological activities increases by 1%, labor productivity increases by 0.22%.

Table 1. Impact of innovation on productivity
Dependent variable: logarithm of sales per worker in 2014.

| Variable | (1) | (2) |
|---|---|---|
| Logarithm of physical capital per worker in 2012 | 0.256*** (0.017) | 0.267*** (0.016) |
| Log. of the number of workers in 2012 | −0.136*** (0.021) | −0.153*** (0.021) |
| IEp_p (prediction of the innovative effort in technological innovation) | 0.224*** (0.037) | |
| TI_p (technological innovation) | | 0.563*** (0.103) |
| Constant | 8.382*** (0.309) | 9.650*** (0.204) |
| Observations | 1,155 | 1,155 |
| R-squared | 0.323 | 0.31 |

Source: National Survey of Innovation in the Manufacturing Industry 2015. Standard errors in parentheses; ***p < 0.01, **p < 0.05, *p < 0.1. Elaboration: authors' calculations. Note: reported coefficients represent average marginal effects; for discrete variables, the marginal effect is calculated as the first difference with respect to the base category. Standard errors are bootstrapped (500 repetitions).
Table 2. Heterogeneous impact of technological innovation. Dependent variable: logarithm of sales per worker in 2014.

| Variable | Q15 | Q25 | Q50 | Q75 | Q90 |
|---|---|---|---|---|---|
| Logarithm of physical capital per worker in 2012 | 0.254*** (0.026) | 0.237*** (0.026) | 0.257*** (0.018) | 0.260*** (0.024) | 0.279*** (0.035) |
| Log. of the number of workers in 2012 | −0.056** (0.025) | −0.108*** (0.031) | −0.157*** (0.027) | −0.239*** (0.028) | −0.266*** (0.044) |
| TI_p (technological innovation) | 0.332** (0.157) | 0.383** (0.150) | 0.422*** (0.118) | 0.634*** (0.145) | 0.714*** (0.220) |
| Constant | 8.455*** (0.322) | 9.222*** (0.361) | 9.856*** (0.241) | 10.697*** (0.282) | 11.232*** (0.350) |
| Observations | 1,155 | 1,155 | 1,155 | 1,155 | 1,155 |

Source: National Survey of Innovation in the Manufacturing Industry 2015. Standard errors in parentheses; ***p < 0.01, **p < 0.05, *p < 0.1. Elaboration: authors' calculations. Note: reported coefficients represent average marginal effects; for discrete variables, the marginal effect is calculated as the first difference with respect to the base category. Standard errors are bootstrapped (500 repetitions).
One way to test whether the impacts of innovation on productivity are heterogeneous is the quantile regression approach. The results show that the returns to innovation depend on the firm's position in the productivity distribution. As Table 2 shows, for firms located at quantile 15, private returns are no more than 33%, while returns rise to around 71% for firms at the top of the distribution.
4 Conclusions

There is a positive impact of technological innovation on the labor productivity of Peruvian manufacturing firms: technological innovation is on average associated with a 56% increase in labor productivity. This result is within the range of values reported by Crespi and Zuñiga (2010): Argentina (24%), Chile (60%), Colombia (192%), Panama (165%), and Uruguay (8%). On the other hand, using the prediction of the innovative effort (IE), the results show that a 1% increase in investment in technological activities raises labor productivity by 0.22%, again within the range reported by Crespi and Zuñiga (2010) for Latin American countries: Argentina (0.41%), Chile (0.20%), Colombia (0.61%), Panama (0.69%), and Uruguay (0.45%). The effects of innovation on productivity are heterogeneous and depend on the firm's position in the productivity distribution: according to the quantile regression approach, firms that carry out technological innovation activities and are located at quantile 15 increase their productivity by 33%, whereas firms at quantile 90 increase it by 71%. Finally, innovation plays an important role in firm productivity, and because its effect is heterogeneous, its implications for aggregate efficiency differ across firms; business support programs should therefore not be across-the-board but based on selective policies.
426
J. C. Ortíz Berrú et al.
References

1. Álvarez, R., Bravo, C., Navarro, L.: Innovación, investigación y desarrollo, y productividad en Chile. Revista CEPAL 104, 141–166 (2011)
2. Lambardi, G., Mora, J.: Determinantes de la innovación en productos o procesos: el caso colombiano. Revista de Economía Institucional XVI(31), 251–262 (2014)
3. Lee, C.: The determinants of innovation in the Malaysian manufacturing sector: an econometric analysis at the firm level. University of Malaya, Centre on Regulation and Competition (2004)
4. Crespi, G., Katz, J.: R&D expenditure, market structure and "technological regimes" in Chilean manufacturing industries. Estudios de Economía 26(2), 163–186 (1999)
5. Crespi, G., Zúñiga, P.: Innovation and productivity: evidence from six Latin American countries. IDB Working Paper Series No. IDB-WP-218 (2010)
6. Crespi, G., Zúñiga, P.: Innovation and productivity: evidence from six Latin American countries. IDB Working Paper Series No. IDB-WP-218, pp. 1–38 (2010)
7. Banco Interamericano de Desarrollo: ¿Cómo repensar el desarrollo productivo? Políticas e instituciones sólidas para la transformación económica. BID (2014)
8. World Economic Forum: Global Competitiveness Report 2015–2016. World Economic Forum, Geneva (2015)
9. Ministerio de la Producción: Desempeño Exportador del Sector Manufacturero por Innovación Tecnológica: Cambios en la estructura exportadora y productiva de bienes industrializados. Lima (2015)
10. Tello, M.: Firms' innovation, constraints and productivity: the case of Peru. Departamento de Economía PUCP, Lima, Documento de Trabajo 382 (2014)
11. Crépon, B., Duguet, E., Mairesse, J.: Research, innovation and productivity: an econometric analysis at the firm level. National Bureau of Economic Research, Cambridge (1998)
12. Ministerio de la Producción: Encuesta Nacional de Innovación en la Industria Manufacturera 2012. PRODUCE, Lima (2013)
Parametric Urban Design

Rongrong Gu and Wuzhong Zhou

School of Design, Shanghai Jiao Tong University, Minhang District, Shanghai 200240, China
{skygrr,wzzhou}@sjtu.edu.cn
Abstract. Urban design is the process of designing and shaping the city, involving large-scale buildings, streets, and the organization of public space. With technological improvement and increasing computational power, new methods and technologies are becoming more and more important in urban design. Parametric design treats urban design as a complex system: once the basic design parameter relationships are set, we have to find an algorithm that constructs a parameter relationship to generate a shape. Based on an introduction to parametric design and its applications, this paper takes several urban design projects as examples and proposes development ideas and design methods for parametric urban design. Finally, the paper discusses what kind of transformation parametric design can bring to urban design at a global scale.

Keywords: Urban design · Parametric · Algorithm generating
1 Urban Design

Urban design is related to the characteristics and pattern of the entire city; a good urban design is the soul of the whole city. To meet the new requirements of urban design in the new era, we should leverage high technology to enhance the scientific rigor of urban design. In the development of contemporary cities, human activities have a major impact on how cities evolve. Urban planning is usually carried out from the top down, while urban dynamics emerge from the bottom up; together they determine the shape and movement of the city.

Parametric design is a new design methodology, which emerged between the end of the 19th century and the mid-twentieth century. Parametric design in architecture can be dated back to the hanging chain model created by Gaudí [1]. It has even been suggested that parametric design is fundamental to creativity, in the sense that design variations can be generated by altering the values of different parameters [2]; having design variations is also key to creativity [3]. The breakthrough development of industrial technology has caused dramatic changes in people's living environment, and social and economic systems have reached unprecedented complexity. The parametric design software itself cannot explain the profound shift from modernism to the parametric style. The essence of parametric design is the process of controlling quantized parameters to generate multiple results through rules or logic functions set in computer software; it interprets the design purpose in a digital way, constructs design logic,

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 427–432, 2020. https://doi.org/10.1007/978-3-030-39512-4_67
solves design problems, and then looks for newer forms and more optimized design solutions. Parametric design is controlled by parameters: each parameter controls or indicates some important property of the design result, and changing the value of a parameter changes the design result. Early parametric urban design work, such as the 'parametric urbanism' of Patrik Schumacher (2008) and Zaha Hadid, included formal contextual features of rivers, roads, and topography [4]. Recent parametric urban design projects focus on the need to understand broader and more complex geospatial scales. With the notion of parametric urbanism, Zaha Hadid Architects also won a series of planning competitions, including the One-North Masterplan in Singapore and Soho City in Beijing, China [5]. It has also been suggested that parametric modelling could be used to generate urban design solutions in high-density cities such as Hong Kong [6].
2 Design Steps

In parametric design, we regard the main factors affecting the design as parametric variables. First, we identify important design requirements as parameters and construct parameter relationships by using one or several rule systems (i.e., algorithms) as instructions. A computer language is used to describe the parameter relationship and form the software parameter model. When the parameter data are input in the computer-language environment and the algorithm instructions are executed, the shape target can be realized and a prototype shape obtained.

2.1 Organization of Regional Analysis and Design Conditions
Through investigation of the plot area and analysis of human behavior and activities, various information related to the design is collected, and one or more factors are identified as the control factors for generating the analysis model in the parameterization process.

2.2 Build a Parametric Model
According to the analysis of the influencing design factors in the previous step, find the control factors of the project and the problems faced in the design, and choose an appropriate algorithm, in combination with other disciplines, to construct the parametric model.

2.3 Correct and Optimize the Parametric Model
Optimize the parametric model obtained in the previous step, especially with respect to the important control factors.
2.4 Improve the Parametric Model
Complete the details in accordance with the analytical model and handle the primary and secondary relationships.

2.5 Detection and Adjustment of Overall Design Results
The improved parametric model is tested, and the software is used to adjust the overall spatial structure of the city. If the expected requirements are not met, the control factors of the parametric model must be re-evaluated.
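The five steps above can be condensed into a generate-evaluate-adjust loop. The sketch below is a toy illustration of that loop; the parameters, the height rule, and the volume cap are all invented, not taken from any real project:

```python
# A minimal parametric-design loop: parameters drive a generator, the result
# is evaluated against a target, and a control factor is adjusted (step 2.5).
def generate(params):
    """Toy 'shape': building heights on a 1-D strip, tallest at the centre."""
    n, peak = params["n_blocks"], params["peak_height"]
    centre = (n - 1) / 2
    return [peak * (1 - abs(i - centre) / centre) for i in range(n)]

def evaluate(heights, max_total):
    """Constraint check: total built volume must not exceed a cap."""
    return sum(heights) <= max_total

params = {"n_blocks": 9, "peak_height": 120.0}
while not evaluate(generate(params), max_total=400.0):
    params["peak_height"] *= 0.9   # re-adjust the control factor and retry

heights = generate(params)
print(params["peak_height"], sum(heights))
```

The point of the sketch is structural: the designer edits parameters and rules, never the resulting geometry directly, which is what distinguishes parametric modelling from conventional drafting.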
3 Algorithm Generating

An algorithm is a series of logical decisions and operations organized in order, that is, instructions that collectively accomplish a particular task. Here, the specific task is to construct a parameter relationship. We use two projects to illustrate the shape-finding process.

3.1 Kartal Pendik Masterplan Breakdown / Zaha Hadid
Where routes connecting Europe and Asia meet coastal highways, sea terminals, and rail links in an abandoned industrial area of Istanbul, the Kartal Pendik masterplan is taking shape, creating a new urban centre based on a grid form and using calligraphic notions of topography to create truly responsive structures and spaces. The project starts with the integration of the infrastructure and the urban context of the surrounding lots. Horizontal lines link the main roads of Kartal in the west and Pendik in the east. These lateral connections, together with the main longitudinal axes, create a soft grid that forms the underlying framework of the project. This texture is further refined through the draft urban design: different building types are created for each region's different needs. The open conditions of the draft production can transform an independent building into a block and eventually into a hybrid system, creating a porous, interconnected open-space network that is freely open to the city (Fig. 1).
Fig. 1. Marek Kolodziejczyk, wool-thread model to compute optimised detour path networks, Institute for Lightweight Structures (ILEK), Stuttgart, 1991 (Source: https://tshristov.wordpress.com/2015/05/08/kartal-pendik-masterplan-by-zaha-hadid-architects-work-in-progress/)
430
R. Gu and W. Zhou
A solution was calculated using the Rhino and Grasshopper applications; the principle is to adjust the length of the thread. After connecting the intersections between the region of interest and the main track, a mesh is obtained, which is then refined to form a wool-line model simulation (Fig. 2).
Fig. 2. Wool-thread simulation output (Source: https://tshristov.wordpress.com/2015/05/08/kartal-pendik-masterplan-by-zaha-hadid-architects-work-in-progress/)
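The thread-shortening principle can be illustrated outside Rhino/Grasshopper in a few lines: each interior point of a polyline is repeatedly pulled toward the midpoint of its neighbours, shortening the "thread" while its endpoints stay fixed. This is a heavily simplified stand-in for the wool-thread simulation (it omits the attraction between nearby threads that merges paths), and the geometry is invented:

```python
import math

def relax(path, iterations=100, alpha=0.5):
    """Laplacian smoothing: move interior points toward neighbour midpoints."""
    pts = [list(p) for p in path]
    for _ in range(iterations):
        for i in range(1, len(pts) - 1):
            mx = (pts[i - 1][0] + pts[i + 1][0]) / 2
            my = (pts[i - 1][1] + pts[i + 1][1]) / 2
            pts[i][0] += alpha * (mx - pts[i][0])
            pts[i][1] += alpha * (my - pts[i][1])
    return pts

def length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

detour = [(0, 0), (1, 3), (2, -2), (3, 4), (4, 0)]  # a wiggly 'thread'
relaxed = relax(detour)
print(length(detour), "->", length(relaxed))  # the thread gets shorter
```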
The deformation curve is created by dividing the longitudinal curve into segments and adjusting the control points on the curve. Roads are created by dividing the area into inwardly offset areas or blocks with the longitudinal and lateral curves (Fig. 3).
Fig. 3. Deformed grid (Source: https://tshristov.wordpress.com/2015/05/08/kartal-pendik-masterplan-by-zaha-hadid-architects-work-in-progress/)
The next step is to use the extrude command. The height of the extrusion is defined by the distance from the zone to the middle curve: the greater the distance, the higher the extrusion (Fig. 4).
Fig. 4. Volumes by distance to mid curve (left); volumes by area in m² (right) (Source: https://tshristov.wordpress.com/2015/05/08/kartal-pendik-masterplan-by-zaha-hadid-architects-work-in-progress/)
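The distance-to-mid-curve height rule can be sketched in a few lines of code; the mid curve, zone centroids, and scaling constants below are invented purely for illustration:

```python
import math

def extrusion_height(zone_centroid, mid_curve, base=10.0, k=8.0):
    """Extrusion height grows with the zone's distance from the mid curve."""
    d = min(math.dist(zone_centroid, p) for p in mid_curve)  # nearest sample
    return base + k * d

mid_curve = [(float(x), 0.0) for x in range(11)]   # mid curve sampled along y = 0
zones = [(2.0, 0.5), (5.0, 2.0), (8.0, 4.0)]       # zone centroids
heights = [extrusion_height(z, mid_curve) for z in zones]
print(heights)   # farther zones extrude higher
```

In the actual Grasshopper definition the distance would be measured to a NURBS curve rather than to sampled points, but the parametric dependency (height as a function of distance) is the same.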
3.2 One North Masterplan / Zaha Hadid
One North has created a distinctive form in Singapore that extends the definition of a man-made landscape to a residential area of the city: urban buildings take on the spatial composition and form of natural landscapes (Figs. 5 and 6).
Fig. 5. Traffic grid (Source: http://www.ikuku.cn/concept/one-north-masterplan)
432
R. Gu and W. Zhou
Fig. 6. Design results (Source: http://www.ikuku.cn/concept/one-north-masterplan)
4 Conclusion

As a complex integrated system, the city's diverse characteristics give parametric design an extended platform and potential for future expansion. The parametric design approach is an interdisciplinary subject with many elements, and its application in urban design is still at the beginning, exploratory stage. The application of advanced technology makes it possible to build complex designs. The city of the future will become an organic whole, a crystallization of emotion and wisdom.
References

1. Frazer, J.: Parametric computation: history and future. Architectural Des. 86(2), 18–23 (2016)
2. Lee, J., Gu, N., Williams, A.: Exploring design strategy in parametric design to support creativity. In: Proceedings of the International Conference on Computer-Aided Architectural Design Research in Asia (2013)
3. Liu, Y.-T., Lim, C.-K.: New tectonics: a preliminary framework involving classic and digital thinking. Des. Stud. 27(3), 267–307 (2006)
4. Speranza, P.: Using parametric methods to understand place in urban design courses. J. Urban Des. 21(5), 661–689 (2016)
5. Schumacher, P.: Parametricism: a new global style for architecture and urban design. Architectural Des. 79(4), 14–23 (2009)
6. Schnabel, M.A., Zhang, Y., Aydin, S.: Using parametric modelling in form-based code design for high-dense cities. In: Proceedings of the International High-Performance Built Environment Conference (iHBE 2016) (2016)
Narrative Review of the Role of Wearable Devices in Promoting Health Behavior: Based on Health Belief Model

Dingzhou Fei and Xia Wang

Department of Psychology, Wuhan University, Wuhan, China
[email protected]
Abstract. Background: Currently, wearable devices such as smartwatches allow people to monitor health-related data, which helps them gain a better sense of their health status. But there is still a long way to go to prove whether these perceptions translate into health-related behavior, which requires individuals to be responsible for their own health. Purpose: The purpose of this paper is to investigate the possibility, effectiveness, and limitations of wearable devices in health behavior intervention. First, we explore the factors relating wearable devices to health self-management. Second, we examine whether these factors could induce health behavior, based on the Health Belief Model. Methods: The paper is organized as follows. We first review the literature on wearable devices and then attempt to explain the possibility and effectiveness of wearable devices in promoting health behavior according to the concepts of the Health Belief Model. Finally, the paper gives a preliminary conclusion on the issues discussed.

Keywords: Health Belief Model · Wearable device · Health behavior · Fitness
1 Introduction

Self-quantification has set off a new trend: people collect self-tracking data to manage themselves. Data generated by self-quantification systems (SQS) are considered a catalyst for behavioral change [1]. Wearable devices are an important innovation among self-tracking tools. People use wearable devices to monitor blood pressure or heart rate and to obtain activity data such as step counts and calorie consumption. We consider that wearable devices reflect a need for self-management; however, whether the need for self-management triggers behavioral change is not covered by current studies. With the popularity of smartphone applications, the usability of wearable devices is greatly enhanced, and the number of people using them is growing. This review aims to describe, on the basis of the Health Belief Model (HBM), the problems wearable devices can address as well as their defects, and to explore the effectiveness and limitations of health behavior intervention. Given the breadth of available research, a systematic review would be inappropriate; instead, we try to unearth evidence through a narrative review. In addition, we focus on wearable devices used in fitness, as this use differs from that for

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 433–438, 2020. https://doi.org/10.1007/978-3-030-39512-4_68
medical use. Users have control of the data, and self-management needs originate from the users themselves rather than from the requirements of medical professionals. So, can the data fed back to users by wearable devices lead to health behavior when users are in charge of and responsible for those data? This review examines the issue through the HBM, an important model for health behavior intervention; studies have shown that HBM variables are consistent with behavior [2].
2 Health Belief Model

The HBM suggests that behavioral change depends on whether people realize that an adverse event can be prevented and that they can avoid it by taking certain actions [3]. Becker et al. improved the model by including Self-Efficacy: behavioral change also depends on whether people are confident they can successfully carry out the behaviors that would avoid the event. See Fig. 1 for the HBM. According to the HBM, the formation of health behavior requires the following conditions: higher Perceived Benefits, lower Perceived Barriers, higher Perceived Threat (comprising Perceived Seriousness and Perceived Susceptibility), higher Self-Efficacy, Cues to Action, and Modifying Variables. We believe that the degree to which wearable devices fit the HBM can reveal the possibility, effectiveness, and limitations of behavioral change.

2.1 Support for Perceived Benefits vs. Perceived Barriers
Perceived benefits and perceived barriers (the two strongest predictors in the HBM [2]) contribute to behavioral change in positive and negative directions, respectively. The HBM holds that behavior changes when perceived benefits exceed perceived barriers.

Support for Perceived Benefits. Self-tracking tools allow you to monitor yourself and conduct self-experiments, though the validity of such self-experiments is questionable. Increased use of wearable devices leads to a healthier lifestyle, and users are more active [4], which can be translated into perceived benefits. In other words, wearable devices have the potential to promote health behavior [5]: they allow individuals to set and achieve goals, then adjust behavior through feedback. One study divided health self-quantification into two stages, data management and health activation. The first is a series of activities involved in collecting, managing, and transforming data; in the second, a mapping is formed between the available information and the individual's goals to build self-awareness of health [6].
Fig. 1. Health belief model
Support for Perceived Barriers. Wearable devices are notorious for privacy issues. Self-quantification marks the beginning of an era of personal control over data, but the extent to which data are controlled is uncertain. It has been argued that self-tracking data are generated by the interaction between user and device [7] and therefore belong to both the device-manufacturing company and the user. People expect others to notice that they are using wearable devices as a sign of self-discipline; in contrast, they hope to share the data itself only in very limited circumstances. Data privacy is a stumbling block to people's decisions to start and continue using wearable devices, and it cannot be ignored because it hinders health behavior. A further crucial problem is that people are not clear about how to interpret the data [7]. As a result, they cannot define the criteria that should trigger the next action, and they overestimate or underestimate the severity of a problem. Besides, control over data means not only that the data are generated by you but also that they must be digested by you. We believe that these obstacles around data restrict the catalytic effect of wearable devices on health behavior.

2.2 Support for Perceived Threat (Perceived Seriousness and Susceptibility)
The HBM argues that higher perceived threats are more likely to lead to health behavior; people often feel threatened in comparison with others.

Support for Perceived Seriousness. Research shows that quantitative self-data lead to more reflective thinking and thoughtful analysis [8]. This means that the self-monitoring data provided by wearable devices can give users meaningful feedback, an intuitive sense of the seriousness of a health problem. An obese person, for example, can face the reality of being overweight and experience the severity of the problem through the feedback on the difference between standard weight and actual weight.

Support for Perceived Susceptibility. Perceived Susceptibility is a subjective assessment of the risk of health problems. Wearable devices are also designed for people in poor health, and self-tracking data are of great reference value for evaluating health and the possibility of health problems that may be overlooked by doctors. Self-quantification is not a tool for self-optimization but an opportunity for self-discovery [9], a discovery accompanied by judgments of perceived susceptibility.

2.3 Support for Self-Efficacy
Self-Efficacy was added when social psychologists refined the model so that the HBM could be applied to substantial, long-term behavioral change. Whether individuals believe they have the ability to effect change is thought to be a key factor in changing health behavior. Quantitative data increase goal-directed motivation, an important initial step in behavioral change [10]. In addition, wearable devices with fitness assistance have the potential to encourage more positive health behavior [11]. Hence, it is necessary to provide contextual feedback based on the user's understanding of the data.
The more feedback users get, the more aware they are of their situation and the more confident they are in implementing behavioral change. Comparison with others through feedback increases the belief that they can carry out the behaviors that lead to change. The feedback provided by wearable devices thus enhances Self-Efficacy and promotes health behavior.

2.4 Cues to Action
Cues to action have rarely been clearly defined, as cues can be internal, like physical cues (pain, symptoms), or external, like information from others [3]. Identification depends on people's self-reports, which miss unconscious cues, so it is difficult to specify the cues that trigger behavior: anything before a behavioral change is a possible cue. But the use of wearable devices is itself a cue to action; collection, monitoring, and feedback have triggered behavioral change, so we hold a positive attitude toward the next change in behavior.

2.5 Modifying Variables
Individual characteristics, including demographic, psychological, and structural variables, are considered the Modifying Variables of the HBM. Modifying variables affect health behavior indirectly by influencing perceived seriousness, susceptibility, benefits, and barriers [12]. A current defect of the HBM, however, is that there is no evidence to support a moderating effect of the Modifying Variables [13]. Therefore, we do not discuss the modifying variables involved in wearable devices, though the interaction between modifying variables and the impact of wearable devices on health behavior cannot be ignored: it has been shown that there are gender differences in wearable devices from the beginning of product design, and the influence of gender differences persists in the user experience [14]. After this review, we find that wearable devices have characteristics corresponding to the variables of the HBM (see Table 1), except for Cues to Action and Modifying Variables. The interaction between the remaining variables and the Modifying Variables is also an issue worth attention in the future.

Table 1. The fit between variables of HBM and characteristics of wearable devices

| No | Variables of HBM | Characteristics of wearable devices |
|---|---|---|
| 1 | Perceived benefits | Self-experiments; perceived usefulness; self-monitoring; mapping between the data and individual goals |
| 2 | Perceived barriers | Privacy concern; unclear data interpretation; degree of data control |
| 3 | Perceived threat (seriousness / susceptibility) | Reliance on intuitive feedback (seriousness); self-discovery (susceptibility) |
| 4 | Self-efficacy | Increased goal orientation; timely feedback |
| 5 | Cues to action | Outside the scope we discussed (cues are hard to define) |
| 6 | Modifying variables | Outside the scope we discussed (influence behavior indirectly by influencing variables 1, 2, 3, and 4) |
3 Conclusion

Although existing studies are limited, they fit the HBM to some extent. Wearable devices contribute to both perceived benefits and perceived barriers, two of the most important predictors in the HBM. Although no data establish the relative strength of the two, we consider that wearable devices may be effective in promoting health behavior when perceived benefits are higher than barriers. In terms of perceived threat, wearable devices provide direct feedback through intuitive and specific numbers, which is also the cornerstone of self-efficacy. For modifying variables, we believe that their interaction with the other variables will affect the effectiveness of wearable devices in inducing behavioral change.
References 1. Fiore-Silfvast, B., Neff, G.: What we talk about when we talk data: valences and the social performance of multiple metrics in digital health. Ethnographic Prax. Ind. Conf. 2013(1), 74–87 (2013) 2. Sulat, J.S., Prabandari, Y.S., Sanusi, R., Hapsari, E.D., Santoso, B.: The validity of health belief model variables in predicting behavioral change: a scoping review. Health Educ. 118, 499–512 (2018) 3. Janz, N.K., Becker, M.H.: The health belief model: a decade later. Health Educ. Behav. 11, 1–47 (1984) 4. Lunney, A., Cunningham, N.R., Eastin, M.S.: Wearable fitness technology: a structural investigation into acceptance and perceived fitness outcomes. Comput. Hum. Behav. 65, 114–120 (2016) 5. Rachuri, K.K., Musolesi, M., Mascolo, C., Rentfrow, P.J., Longworth, C., Aucinas, A.: EmotionSense: a mobile phones based adaptive platform for experimental social psychology research. In: 12th International Conference on Ubiquitous Computing, UbiComp 2010, pp. 281–290 (2010) 6. Almalki, M., Martin Sanchez, F., Gray, K.: Quantifying the activities of self-quantifiers: management of data, time and health. In: Georgiou, A., Sarkar, I.N., de Azevedo Marques, P.M. (eds.) 15th World Congress on Health and Biomedical Informatics, MEDINFO 2015, vol. 216, pp. 333–337. IOS Press (2015) 7. Neff, G., Nafus, D.: Self-Tracking. MIT Press, Cambridge (2016)
438
D. Fei and X. Wang
8. Li, I., Dey, A., Forlizzi, J.: A stage-based model of personal informatics systems. In: 28th Annual CHI Conference on Human Factors in Computing Systems, CHI 2010, pp. 557–566 (2010) 9. Wolf, G.: The Data-Driven Life. New York Times (2010) 10. Pettinico, G., Milne, G.R.: Living by the numbers: understanding the “quantification effect”. J. Consum. Mark. 34, 281–291 (2017) 11. Winchester, W.W., III, Washington, V.: Fulfilling the potential of consumer connected fitness technologies: towards framing systems engineering involvement in user experience design. In: 11th Annual IEEE International Systems Conference, SysCon 2017. Institute of Electrical and Electronics Engineers Inc. (2017) 12. Rosenstock, I.M.: Historical origins of the health belief model. Health Educ. Behav. 2, 328– 335 (1974) 13. Choi, S.H., Duffy, S.A.: Analysis of health behavior theories for clustering of health behaviors. J. Addict. Nurs. 28, 203–209 (2017) 14. Zhang, Y., Rau, P.L.P.: Playing with multiple wearable devices: exploring the influence of display, motion and gender. Comput. Hum. Behav. 50, 148–158 (2015)
Competitiveness of Higher Education System as a Sector of Economy: Conceptual Model of Analysis with Application to Ukraine
Olha Hrynkevych1, Oleg Sorochak2, Olena Panukhnyk3, Nazariy Popadynets4(&), Rostyslav Bilyk5,6, Iryna Khymych3, and Yazina Viktoriia7
1 Ivan Franko National University of Lviv, Svobody Ave. 18, Lviv 79008, Ukraine [email protected]
2 National University “Lviv Polytechnic”, Bandery str., 12, Lviv 79013, Ukraine [email protected]
3 Ternopil Ivan Puluj National Technical University, Ruska str. 56, Ternopil 46001, Ukraine [email protected], [email protected]
4 SI “Institute of Regional Research named after M. I. Dolishniy of the NAS of Ukraine”, Kozelnytska str. 4, Lviv 79026, Ukraine [email protected]
5 Yuriy Fedkovych Chernivtsi National University, Kotsjubynskyi str. 2, Chernivtsi 58012, Ukraine [email protected]
6 Military Institute of Taras Shevchenko National University of Kyiv, Lomonosov str. 81, Kiev 03189, Ukraine
7 University of Customs and Finance, Vladimir Vernadsky str. 2/4, Dnipro 49000, Ukraine [email protected]
Abstract. This research aims to lay out a framework for analyzing the competitiveness of higher education as one of the service sectors of the economy. The study offers a conceptual model for the analysis, based on the criteria of quality, social responsibility and economic efficiency. The interests of stakeholders and the potential of the education providers form the core of the model and the framework for constructing indicators for the analysis of higher education system competitiveness according to the abovementioned criteria. The proposed model and the corresponding indicator framework were applied to the analysis of the higher education system competitiveness of Ukraine's regions. Cluster analysis allowed selecting groups of Ukraine's regions that differ significantly in the level of higher education system competitiveness and require a differentiated governmental policy as well as financial support. Keywords: Higher education competitiveness · Model of analysis · Cluster analysis · Quality · Economic efficiency · Social responsibility · Ukraine
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 439–445, 2020. https://doi.org/10.1007/978-3-030-39512-4_69
440
O. Hrynkevych et al.
1 Introduction
In recent years, there has been a significant increase in the number of studies concerning the higher education (HE) system. In a global study of the directions, publications and institutional structure of higher education research, P. Altbach concludes that, despite the growth in the number of research centres, 'higher education is a field without a clear intellectual, methodological, or disciplinary center. A diversity of approaches reflects the diverse interests and backgrounds of those involved in the field' [1]. T. McCowan proposes an original model for studying the evolution of HE which comprises three components: (1) value; (2) functions; (3) interaction with the external environment. This model helps explain the increased contribution of economic efficiency and the social interaction of HEIs in the analysis of their competitiveness [2]. Analyzing HEI management in terms of balancing the interests of various stakeholders, U. Teichler focuses on the criteria of quality, relevance and efficiency [3]. Based on an integrated analysis of the HE functions and its goals in global strategic development, S. Panchyshyn and O. Hrynkevych conclude that quality, social responsibility and economic efficiency are the main criteria for the analysis of the competitiveness of HE systems at all levels of their functioning [4]. The ranking company Quacquarelli Symonds annually develops and publishes a ranking of the effectiveness of national HE systems titled 'QS Higher Education System Strength Rankings' [5]. The methodology includes four components of assessment: (1) system strength; (2) access; (3) flagship institution; (4) economic context. However, this ranking is based only on the positions of the country's HEIs that fall into the university ranking formed by the same company. Secondly, the ranking does not take into account the overall economic potential of the country's HE system and its funding by different stakeholders. Thirdly, the ranking does not reflect the implementation of all HE functions (educational and scientific activities, public services and promoting the goals of sustainable development). Based on this analysis, we found that there is no ranking that is equally informative and useful for various stakeholders, and we agree with the conclusion of Goglio [6] that 'an active role of researchers in the field of university rankings is desirable, for monitoring improvements on methodological issues and for making better-informed ranking consumers'. This study aims to lay out an interdisciplinary approach and a conceptual model for the analysis of HE system competitiveness based on the criteria of quality, economic efficiency and social responsibility. The proposed conceptual model and cluster analysis were applied to assess HE system competitiveness in Ukraine's regions.
2 Methodology
Using an interdisciplinary approach based on institutional theory [7], the theory of human capital [8, 9], the theory of social capital [10], the theory of intellectual capital [11], the theory of stakeholders [12] and the theory of competitive advantage [13], we propose a definition of HE as a system of informal (values, traditions, etc.) and
formal (laws, standards, HEIs, stakeholders, etc.) institutions that provide an understanding of the critical role of knowledge in self-improvement, and of the accumulation, transmission and generation of new knowledge for the purposes of personal and social development. The main components of the institutional structure of the national HE system can be defined as follows: education providers, their products, and internal and external stakeholders. Based on these main components, we propose to define the competitiveness of the HE system as the ability of its providers to create products that generate benefits in human, social and intellectual capital development and, accordingly, in realizing individual, national and global development goals. The frameworks for constructing indicators of HE system competitiveness by the criteria of quality, social responsibility and economic efficiency were developed using the methods of system analysis. Cluster analysis was applied in order to classify Ukraine's regions based on the set of competitiveness indicators. The tree-like method proposed by J. Ward uses variance analysis to estimate the distances between clusters [14]. The method aims to connect clusters that are close to each other, but it tends to create small-sized clusters, which is not always convenient for a large number of observations.
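For illustration, the Ward criterion just described can be sketched in a few lines: the cost of merging two clusters is the increase in within-cluster variance their union would produce, and the cheapest merge is performed repeatedly. The two-indicator "regions" below are hypothetical, not the study's actual indicator set.

```python
# Agglomerative clustering with Ward's criterion (illustrative, pure Python).
def ward_distance(a, b):
    """Increase in total within-cluster variance caused by merging clusters a and b."""
    na, nb = len(a), len(b)
    dim = len(a[0])
    ca = [sum(p[i] for p in a) / na for i in range(dim)]  # centroid of a
    cb = [sum(p[i] for p in b) / nb for i in range(dim)]  # centroid of b
    return na * nb / (na + nb) * sum((x - y) ** 2 for x, y in zip(ca, cb))

def ward_cluster(points, k):
    """Greedily merge the cheapest pair of clusters until only k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = ward_distance(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Hypothetical "regions" described by two scaled competitiveness indicators.
regions = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.15), (0.90, 0.80), (0.80, 0.90)]
groups = ward_cluster(regions, 2)
```

With k = 2 the greedy agglomeration recovers the two visually obvious groups; `scipy.cluster.hierarchy.linkage(..., method='ward')` provides a production-grade equivalent that also produces the dendrogram used in Fig. 3.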
3 Results and Discussion
The interdisciplinary approach to the competitiveness analysis of HE systems involves determining reserves for its increase, taking into account the interests of stakeholders, the potential and performance of education providers, as well as the levels of the HE system: HEI, region, country. Figure 1 provides a conceptual model for the analysis of higher education system competitiveness.
[Figure: the quality criterion linking stakeholders (interests), products (results) and providers (potential) at the national level]
Fig. 1. Conceptual model for analysis of the higher education system competitiveness. Source: developed by the authors
The components of the HE system (providers, products and stakeholders), as well as the criteria for their evaluation (quality, social responsibility and economic efficiency), form the basis for the competitiveness analysis. The next level of the conceptual model takes into account the practicability and achievability of its implementation at the various levels of HE system functioning: local/institutional, regional, national. Figure 2 presents the framework for constructing indicators for the HE system competitiveness analysis according to the criteria of quality, social responsibility and economic efficiency.
Fig. 2. Framework for construction of indicators of HE system competitiveness analysis according to criteria of quality, social responsibility and economic efficiency Source: developed by the authors
The conceptual approaches proposed in Figs. 1 and 2 can serve as a basis for the development of analytical databases for decision-making at different levels of competitiveness management. In other words, this refers to the methodology for forming large data sets (big data) in the management of the HE system at both the institutional (HEI) and the regional/national levels. Cluster analysis was applied in order to classify the regions of Ukraine by the set of competitiveness indicators. Figure 3 shows a tree of the clustering results for the 25 regions of Ukraine using Ward's method. As the clustering results show, four major types of regions in Ukraine can be identified in terms of the competitiveness of the HE system. The first cluster includes the capital of Ukraine, Kyiv, with universities that provide the highest level of quality according to such criteria as 'the education quality of HEI entrants', 'the academic reputation of universities in the national ranking', 'the attractiveness of the region's HEIs among students' and 'prospects for employment of graduates'. The second cluster combines eleven regions of Ukraine (Odesa, Luhansk, Donetsk, Kharkiv, Lviv, Ivano-Frankivsk, Transcarpathian, Chernivtsi, Sumy, Poltava, Dnipropetrovsk), with a generally higher than average level of most indicators of competitiveness, in particular those that evaluate criteria such as 'the academic reputation of academic staff', 'the effectiveness of research' and 'attractiveness for students from other regions', but with rather large variability in the academic rankings of universities. Among the regions that have fallen into this cluster, the Lviv and Kharkiv regions are distinguished by the highest values
Fig. 3. Cluster analysis of the Ukraine’s regions according to the level of the higher education system competitiveness: the tree-like method Source: developed by the author based on data [15–18]
of the academic reputation of staff, positions in national and international rankings and the education quality of entrants. The third cluster includes six regions (Kherson, Mykolaiv, Zaporizhzhia, Kropivnitsky, Kyiv, Zhytomyr) that have values of most competitiveness indicators somewhat lower than the average level. At the same time, this cluster is characterized by a rather high level of training for certain branches of the economy, but by the absence of classical universities with high positions in national and international rankings. The fourth cluster comprises seven of Ukraine's regions (Chernigiv, Cherkasy, Khmelnytsky, Rivne, Volyn, Ternopil, Vinnytsya), which are characterized by a marked lag behind the top regions in most indicators of competitiveness and by the urgent problem of entrant migration to neighbouring regions with higher educational and economic potential. At the same time, each of the regions of this cluster has its own strengths. For example, the Vinnytsia region has a high level of patent activity among researchers and an attractive medical profile for foreign students. A high level of regional financial support for HEIs is distinctive for the Volyn, Ternopil, and Chernihiv regions.
4 Conclusions
The present study offers a conceptual model for the analysis of HE system competitiveness based on the following key points:
1. The main elements of the HE system of a region/country are education providers, the results of their activities (products), and internal and external stakeholders. The interests of the stakeholders and the potential of the providers form the core of the model and are the main objects of the competitiveness analysis.
2. Quality, social responsibility and economic efficiency are the main criteria for the analysis of HE system competitiveness.
3. It is beneficial to develop various types of applied models for the analysis of HE system competitiveness, and corresponding sets of indicators, depending on the level of analysis (institutional, regional, national) and stakeholder priorities.
The conceptual model was implemented for the empirical analysis of the HE system competitiveness of Ukraine's regions. Based on a set of indicators of quality, social responsibility and economic efficiency, Ukraine's regions were divided into groups using a hierarchical clustering algorithm. The implementation of the methods allowed us to identify four groups of Ukraine's regions with different levels of HE system competitiveness. The results can be used to improve governmental and regional management in order to balance the development of regional systems of higher education.
References 1. Altbach, P.: Knowledge for the contemporary university: higher education as a field of study and training. In: Rumbley, L.E., Altbach, P.G., Stanfield, D.A., Shimmi, Y., de Gayardon, A., Chan, R.Y. (eds.) Higher Education: A Worldwide Inventory of Research Centers, Academic Programs, and Journals and Publications, 3rd edn., pp. 12–21. Center for International Higher Education, Boston College & Lemmens Media, New York (2014) 2. McCowan, T.: Universities and the post-2015 development agenda: an analytical framework. High. Educ. 72(4), 505–523 (2016) 3. Teichler, U.: Steering in a Modern Higher Education System: The Need for Better Balances between Conflicting Needs and Expectations. International Centre for Higher Education Research, Kessel (2019) 4. Panchyshyn, S., Hrynkevych, O.: The conceptual apparatus for institutional analysis of the higher education system competitiveness. Econ. Dev. 1(81), 50–58 (2017) 5. Quacquarelli Symonds: QS Higher Education System Strength Rankings: Methodology (2018). https://www.topuniversities.com/system-strength-rankings/methodology 6. Goglio, V.: One size fits all? A different perspective on university rankings. J. High. Educ. Policy Manage. 38(2), 212–226 (2016) 7. North, D.: Institutions, Institutional Change and Economic Performance. Cambridge University Press, Cambridge (1990) 8. Schultz, T.: Investment in human capital. Am. Econ. Rev. 11(1), 1–17 (1961) 9. Becker, G.: Human Capital: A Theoretical and Empirical Analysis, with Special Reference to Education. National Bureau of Economic Research, New York (2017). https://www.mdpi. com/2227-7099/7/2/49/htm
10. Bourdieu, P.: The forms of capital. In: Richardson, J.G. (ed.) Handbook of Theory and Research for the Sociology of Education, pp. 241–258. Greenwood, Westport (1986) 11. Stewart, T.: Intellectual capital: the new wealth of organizations. Perform. Improv. 37(7), 56–59 (1998) 12. Freeman, R.: Strategic Management: A Stakeholder Approach, pp. 31–51. Cambridge University Press, Cambridge (2010) 13. Porter, M., Kramer, M.: Strategy & society: the link between competitive advantage and corporate social responsibility. Harvard Bus. Rev. 84(12), 78–94 (2009) 14. Ward, J.: Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58, 236–244 (1963) 15. The State Statistics Service of Ukraine: The main indicators of the activity of Ukraine’s HEIs at the beginning of the 2016/17 academic year: statistical bulletin (2017). http://www.ukrstat.gov.ua 16. Ukrainian Center for Educational Quality Assessment: Statistical data of the main session of the external independent evaluation 2016 (2017). https://zno.testportal.com.ua/stat 17. Educational portal in Ukraine: Ranking of universities on Scopus 2016 indicators (2017). http://osvita.ua/vnz/rating/51053 18. Profrights: Localization of the violations of the rights with regard to students and staff of HEIs at the beginning of 2017 (2017). https://profrights.org/visual
Application of Classification Algorithms in the Generation of a Network Intrusion Detection Model Using the KDDCUP99 Database Jairo Hidalgo(&) and Marco Yandún Universidad Politécnica Estatal del Carchi, Tulcán, Ecuador {jairo.hidalgo,marco.yandun}@upec.edu.ec
Abstract. Technological activities must be guaranteed an acceptable level of security, both in the operations carried out by users and in the data that travel through the network infrastructure. In this research, the KDDCUP99 (Knowledge Discovery in Databases) network traffic database is analyzed. The results were obtained through the confusion matrices of the Neural Network and K-NN (K-Nearest Neighbors) algorithms, in order to determine the best classifier for training the model. As a first step, the information was preprocessed and user behavior analyzed by means of classification of entities and of specific attributes, using data mining techniques and tools such as duplicate-value removal, SimpleKMeans, selection of the most significant attributes, elimination of unused characteristics, greedy algorithms and discrete chi-square attributes. The database was then divided into two random parts: one composed of 70% of the records, used to train the model, and another with the remaining 30%, used to validate the result. The results allowed us to determine that the most effective model is not the Neural Networks algorithm, with 100% of correctly classified instances, compared to K-NN, also with 100%. Keywords: Multilayer perceptron · Neural Networks · K-NN (K-Nearest Neighbors)
1 Introduction
Computer security plays a leading role in providing reliability, availability and non-repudiation of data for the users who require it. Several studies have been carried out on models of intrusion detection systems (IDS) using artificial intelligence techniques such as neural networks, Bayesian networks, fuzzy logic, etc., which make it possible to build classification models with high precision for intrusion detection. In the article published by Reddy et al. [1], a discriminant function based on support vector machine (SVM) classification is proposed. The SVM relies on rough set theory for feature selection and uses a discriminant function, a hyperplane that separates different classes of data, decreasing the dimensionality of the set; the data set obtained is then properly scaled to increase the effectiveness of
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 446–452, 2020. https://doi.org/10.1007/978-3-030-39512-4_70
the discriminant function. The data used are those recovered from the KDDCUP99 database. The result is that, by combining scaling and rough-set feature selection, the overall performance of the intrusion detection system is improved.
2 Materials and Methods
2.1 Materials
In Table 1, part of the KDDCUP99 database is presented. It constitutes a standard set of data that has been audited and includes a wide variety of simulated intrusions in a military network environment [2]. The database is made up of 494020 instances and 41 attributes.

Table 1. Instances and attributes of the KDDCUP99 BDD

# | duration | service | flag | src_bytes | dst_bytes | land | wrong_fragment | urgent | hot | num_failed_logins | logged_in | num_compromised
1 | 0 | http | SF | 285 | 34557 | 0 | 0 | 0 | 0 | 0 | 1 | 0
2 | 0 | http | SF | 316 | 3665 | 0 | 0 | 0 | 0 | 0 | 1 | 0
3 | 0 | http | SF | 335 | 10440 | 0 | 0 | 0 | 0 | 0 | 1 | 0
… | | | | | | | | | | | |
494020 | 0 | http | SF | 219 | 1234 | 0 | 0 | 0 | 0 | 0 | 1 | 0

2.2 Methods
As part of the methods used, different algorithms were applied in the cleaning and preprocessing of the data, and in the classification, clustering, association and selection of attributes.
Algorithms for the Selection of Instances and Attributes. To perform preprocessing and processing, different data mining techniques are applied; these make it possible to obtain a database whose instances and attributes provide the information selected for the study.
Preprocessing of the Information: Remove Duplicates. This step generates a data set smaller than the original but more efficient, by selecting a relevant subset of the data and eliminating duplicate and anomalous records. For this we used weka.clusterers.SimpleKMeans -init 0 -max-candidates 100, obtaining 42 attributes and 145585 instances (Table 2).
448
J. Hidalgo and M. Yandún

Table 2. Filter unsupervised attribute AddCluster

# | Label | Count | Weight
1 | http | 3349 | 3349.0
2 | smtp | 2066 | 2066.0
3 | finger | 223 | 223.0
4 | domain_u | 1313 | 1313.0
5 | auth | 146 | 146.0
6 | telnet | 286 | 286.0
7 | ftp | 261 | 261.0
8 | eco_i | 138 | 138.0
9 | ntp_u | 104 | 104.0
10 | ecr_i | 158 | 158.0
11 | other | 593 | 593.0
12 | private | 831 | 831.0
13 | pop_3 | 67 | 67.0
14 | ftp_data | 1455 | 1455.0
15 | rje | 15 | 15.0
16 | time | 25 | 25.0
17 | mtp | 16 | 16.0
18 | link | 15 | 15.0
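A minimal sketch of the duplicate-removal step (independent of Weka, on hypothetical KDDCUP99-style records) might look like this:

```python
def remove_duplicates(rows):
    """Keep only the first occurrence of each identical record (order preserved)."""
    seen, unique = set(), []
    for row in rows:
        key = tuple(row)          # lists are unhashable, so hash a tuple view
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

# Hypothetical records: duration, service, flag, src_bytes.
records = [
    [0, "http", "SF", 285],
    [0, "http", "SF", 285],   # exact duplicate of the first record
    [0, "smtp", "SF", 316],
    [0, "http", "SF", 285],   # another duplicate
]
deduped = remove_duplicates(records)
```

On the real database this kind of filter is what shrinks the 494020 raw instances to the 145585 unique ones reported above.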
Attribute Selection. Through the application of the Rank data mining technique, which classifies and filters data characteristics by relevance, the results presented in Table 3 were obtained; it contains the instances and attributes that provide the most information for the study.
Table 3. Attributes and records of the BDDKDD99 that provide more information

Attribute | Gain ratio | Gini | ANOVA | χ²
dst_host_same_src_port_rate | | | |
label (23) | 0.652 | 0.453 | NA | 520855.337
service (66) | 0.573 | 0.484 | NA | 2476689.193
dst_host_same_srv_rate | 0.487 | 0.250 | NA | 133224.694
dst_host_same_src_count | 0.476 | 0.263 | NA | 142056.587
same_srv_rate | 0.472 | 0.198 | NA | 48953.026
srv_count | 0.446 | 0.359 | NA | 364794.982
count | 0.443 | 0.354 | NA | 355186.068
srv_serror_rate | 0.422 | 0.155 | NA | 280484.105
flag (11) | 0.414 | 0.217 | NA | 134985.862
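The ranking idea behind Table 3 can be illustrated with a small, self-contained example. The sketch below ranks nominal attributes by plain information gain (a close relative of the gain ratio reported above) on hypothetical traffic records:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, labels):
    """Entropy of the labels minus the entropy remaining after splitting on attr."""
    n = len(rows)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

# Hypothetical records (service, flag): 'service' separates the classes, 'flag' does not.
rows = [("http", "SF"), ("http", "SF"), ("private", "SF"), ("private", "SF")]
labels = ["normal", "normal", "attack", "attack"]
ranking = sorted(range(2), key=lambda a: info_gain(rows, a, labels), reverse=True)
```

Here `service` gets the maximal gain (1 bit) and `flag`, being constant, gets zero, so the ranking places attribute 0 first, mirroring how Table 3 orders the real attributes.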
GreedyStepwise Algorithm (Greedy Algorithms). Greedy algorithms are widely used in data mining. Their search takes an input of size n containing the candidates that may form part of the solution; a subset of those n candidates that satisfies certain restrictions is called a feasible solution, and the feasible solution that maximizes or minimizes a certain objective function is the optimal solution [3]. With the application of the GreedyStepwise algorithm and the CfsSubsetEval attribute evaluator, using protocol_type as the class for the classifier, the attributes with the most significant characteristics for the study were identified (Table 4).

Table 4. Stratified cross-validation of the BDD KDD99

# of folds (%) | Attribute | Name
10 (100%) | 1 | duration
10 (100%) | 3 | service
10 (100%) | 5 | src_bytes
10 (100%) | 6 | dst_bytes
10 (100%) | 8 | wrong_fragment
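A hedged sketch of the greedy stepwise idea: at each step, the feature whose addition most improves a merit score is selected, and the search stops when no candidate improves it. The merit function and feature names below are hypothetical stand-ins for CfsSubsetEval:

```python
def greedy_forward_select(features, score):
    """GreedyStepwise-style forward search: repeatedly add the single feature
    that most improves score(subset); stop when no candidate improves it."""
    selected, best = [], score([])
    remaining = list(features)
    while remaining:
        top_score, top_feature = max((score(selected + [f]), f) for f in remaining)
        if top_score <= best:
            break                      # no remaining feature helps: stop
        selected.append(top_feature)
        remaining.remove(top_feature)
        best = top_score
    return selected, best

# Hypothetical additive merit: only src_bytes and service carry any merit here.
useful = {"src_bytes": 0.4, "service": 0.3}
merit = lambda subset: sum(useful.get(f, 0.0) for f in subset)
chosen, final_merit = greedy_forward_select(
    ["duration", "service", "src_bytes", "flag"], merit)
```

With this toy merit the search picks `src_bytes` first, then `service`, and stops, which is exactly the "voracious" behavior the paragraph above describes.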
Disposal of Unused Features. This step selects data instances according to conditions on the characteristics of the data [4]; in this case, unused characteristics and classes are eliminated, obtaining the following results (Tables 5 and 6).
Table 5. Result of the elimination of characteristics and classes

# | protocol_type | duration | service | flag | src_bytes | dst_bytes | land | wrong_fragment | urgent | hot
1 | tcp | 0 | http | SF | 181 | 5450 | 0 | 0 | 0 | 0
2 | tcp | 0 | http | SF | 239 | 486 | 0 | 0 | 0 | 0
3 | tcp | 0 | http | SF | 235 | 1337 | 0 | 0 | 0 | 0
4 | tcp | 0 | http | SF | 219 | 1337 | 0 | 0 | 0 | 0
5 | tcp | 0 | http | SF | 217 | 2032 | 0 | 0 | 0 | 0
6 | tcp | 0 | http | SF | 217 | 2032 | 0 | 0 | 0 | 0
7 | tcp | 0 | http | SF | 212 | 1940 | 0 | 0 | 0 | 0
8 | tcp | 0 | http | SF | 159 | 4087 | 0 | 0 | 0 | 0
9 | tcp | 0 | http | SF | 210 | 151 | 0 | 0 | 0 | 0
10 | tcp | 0 | http | SF | 212 | 786 | 0 | 0 | 0 | 0
11 | tcp | 0 | http | SF | 210 | 624 | 0 | 0 | 0 | 0
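The elimination of unused characteristics can be sketched as a simple filter that drops any attribute taking a single constant value across all records (the header and rows below are illustrative, not the full 41-attribute schema):

```python
def drop_constant_columns(header, rows):
    """Remove attributes that take a single value across all records."""
    keep = [i for i in range(len(header)) if len({row[i] for row in rows}) > 1]
    return [header[i] for i in keep], [[row[i] for i in keep] for row in rows]

# Illustrative slice: protocol_type, land and urgent never vary, so they are dropped.
header = ["protocol_type", "service", "src_bytes", "land", "urgent"]
rows = [
    ["tcp", "http", 181, 0, 0],
    ["tcp", "smtp", 239, 0, 0],
    ["tcp", "http", 235, 0, 0],
]
new_header, new_rows = drop_constant_columns(header, rows)
```

A constant column carries no information for a classifier, which is why filters of this kind (Weka's RemoveUseless, for example) are standard in the preprocessing stage.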
Chi-Square Discrete Attributes

Table 6. Records and attributes as a result of applying chi-square: 10 most prominent attributes

# | protocol_type | diff_srv_rate | service | num_outbound_cmds | is_host_login | label | count | srv_serror_rate | dst_host_srv_count | dst_host_diff_srv_rate
1 | tcp | 0.00 | http | 0 | 0 | normal | 8 | 0.00 | 8 | 0.00
2 | tcp | 0.00 | http | 0 | 0 | normal | 8 | 0.00 | 8 | 0.00
3 | tcp | 0.00 | http | 0 | 0 | normal | 8 | 0.00 | 8 | 0.00
4 | tcp | 0.00 | http | 0 | 0 | normal | 6 | 0.00 | 6 | 0.00
5 | tcp | 0.00 | http | 0 | 0 | normal | 6 | 0.00 | 6 | 0.00
6 | tcp | 0.00 | http | 0 | 0 | normal | 6 | 0.00 | 6 | 0.00
7 | tcp | 0.00 | http | 0 | 0 | normal | 2 | 0.00 | 1 | 0.00
8 | tcp | 0.00 | http | 0 | 0 | normal | 5 | 0.00 | 5 | 0.00
9 | tcp | 0.00 | http | 0 | 0 | normal | 8 | 0.00 | 8 | 0.00
10 | tcp | 0.00 | http | 0 | 0 | normal | 8 | 0.00 | 8 | 0.00
11 | tcp | 0.00 | http | 0 | 0 | normal | 18 | 0.00 | 18 | 0.00
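As an illustration of the chi-square criterion underlying Table 6, the sketch below computes Pearson's chi-square of independence between a nominal attribute and the class label from raw (attribute value, label) pairs; the toy samples are hypothetical:

```python
from collections import Counter

def chi_square(pairs):
    """Pearson chi-square of independence between an attribute and the class,
    computed from raw (attribute_value, class_label) pairs."""
    n = len(pairs)
    observed = Counter(pairs)
    attr_totals = Counter(a for a, _ in pairs)
    class_totals = Counter(c for _, c in pairs)
    stat = 0.0
    for a in attr_totals:
        for c in class_totals:
            expected = attr_totals[a] * class_totals[c] / n
            stat += (observed.get((a, c), 0) - expected) ** 2 / expected
    return stat

# Hypothetical sample where 'service' perfectly predicts the label...
dependent = [("http", "normal")] * 4 + [("private", "attack")] * 4
# ...and one where attribute and label are completely independent.
independent = [("http", "normal"), ("http", "attack"),
               ("private", "normal"), ("private", "attack")]
```

A perfectly predictive attribute yields the maximal statistic (equal to n for a balanced 2×2 table, 8 here), while an independent one yields 0; ranking attributes by this statistic is what produces the "most prominent attributes" of Table 6.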
3 Development of the Model
For the creation of the model, we use the Neural Network and K-NN (K-Nearest Neighbors) algorithms, dividing 0.1% of our database into two random parts: one composed of 70% of the records, used for training, and another with the remaining 30%, used to validate the model, after applying the preprocessing technique of removing duplicate values.
Neural Networks. Neural networks, as a predictive model, allow good results to be obtained in classification and approximation when correctly computing the BDDKDD99, thanks to their adaptive and self-organizing algorithms; likewise, their function approximation capability allows classification with increased immunity to noise, by using a large number of processing nodes with a high level of interconnectivity (Image 1).
Image 1. Correctly classified instances Neural Networks
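The random 70%/30% partition used to train and validate the models can be sketched as follows (the record list is a hypothetical stand-in for the sampled KDDCUP99 instances; integer arithmetic keeps the split exact):

```python
import random

def split_70_30(rows, seed=42):
    """Shuffle and divide records into a 70% training part and a 30% validation part."""
    shuffled = rows[:]                       # leave the caller's list untouched
    random.Random(seed).shuffle(shuffled)    # fixed seed -> reproducible split
    cut = len(shuffled) * 70 // 100          # integer arithmetic: exactly 70%
    return shuffled[:cut], shuffled[cut:]

records = list(range(494))   # stand-in for the 0.1% sample of KDDCUP99 records
train, validate = split_70_30(records)
```

Every record lands in exactly one of the two parts, which is the property the hold-out evaluation below depends on.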
Application of Classification Algorithms in the Generation
451
K-NN (K-Nearest Neighbors). K-NN classifies each new data point into the corresponding group according to its K nearest neighbors: it computes the distance from the new element to each existing one and orders the distances from smallest to largest to select the group to which the element belongs; the resulting group is the one with the greatest frequency among the shortest distances (Image 2).
Image 2. Correctly classified instances K-NN (K Neighbors Neighbors)
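A minimal sketch of the K-NN decision rule just described, using squared Euclidean distance and a majority vote over hypothetical scaled feature vectors:

```python
from collections import Counter

def knn_predict(train, query, k=1):
    """Classify `query` by majority vote among its k nearest training examples."""
    sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical scaled feature vectors with 'normal' / 'attack' labels.
train = [((0.0, 0.1), "normal"), ((0.1, 0.0), "normal"),
         ((0.9, 1.0), "attack"), ((1.0, 0.9), "attack")]
prediction = knn_predict(train, (0.05, 0.05), k=1)
```

With k = 1, as in the experiments reported below, the query simply takes the label of its single closest neighbor.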
4 Results
The results obtained in the confusion matrices when applying Neural Networks and K-NN (K-Nearest Neighbors) are presented in the following table (Table 7).
Table 7. Results: confusion matrices
Neural Networks
K-NN (K-Nearest Neighbors)
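The confusion matrices of Table 7 record, per class, how often predictions match the actual labels; their construction can be sketched on hypothetical labels as follows:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows are actual classes, columns are predicted classes."""
    index = {c: i for i, c in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for actual, predicted in zip(y_true, y_pred):
        matrix[index[actual]][index[predicted]] += 1
    return matrix

# Hypothetical predictions: one 'normal' record misclassified as 'attack'.
y_true = ["normal", "normal", "attack", "attack", "attack"]
y_pred = ["normal", "attack", "attack", "attack", "attack"]
cm = confusion_matrix(y_true, y_pred, ["normal", "attack"])
accuracy = sum(cm[i][i] for i in range(2)) / len(y_true)
```

The diagonal holds the correctly classified instances, so the 100% figures reported for both models correspond to matrices whose off-diagonal cells are all zero.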
5 Conclusions and Future Works
We can conclude that the multilayer perceptron neural network algorithm is not the most effective classifier for the BDDKDD99 database, despite its 100% of correctly classified instances: there are significant differences in the evaluation of the two models, with K-NN (supervised learning) reaching 100% success with only 1 neighbor.
As future work, the analysis and study of information and user behaviors across networks will be continued using a network infrastructure in a laboratory, validating and modeling other data mining and artificial intelligence algorithms such as evolutionary neural networks and genetic algorithms.
References 1. Reddy, R.R., Ramadevi, Y., Sunitha, K.V.N.: Effective discriminant function for intrusion detection using SVM. In: 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 1148–1153 (2016) 2. Peluffo, I., Capobianco, M., Echaiz, J.: Machine Learning aplicado en Sistemas de Detección de Intrusos (2014) 3. Zarkami, R., Moradi, M., Pasvisheh, R.S., Bani, A., Abbasi, K.: Input variable selection with greedy stepwise search algorithm for analysing the probability of fish occurrence: a case study for Alburnoides mossulensis in the Gamasiab River, Iran. Ecol. Eng. 118, 104–110 (2018) 4. Vattani, A.: k-means requires exponentially many iterations even in the plane, pp. 144–153 (2006) 5. Mendivelso, F., Rodríguez, M.: Prueba Chi-cuadrado de independencia aplicada a tablas 2 x n independence chi-square test applied to 2 x N tables. Rev. Medica. Sanitas 21(2), 92–95 (2018) 6. Sow, M.T.: Using ANOVA to examine the relationship between safety & security and human development. J. Int. Bus. Econ. 2(4), 101–106 (2014). 2374-2194 7. Martínez-Estudillo, F.J., Hervás-Martínez, C., Gutiérrez, P.A., Martínez-Estudillo, A.C.: Evolutionary product-unit neural networks classifiers. Neurocomputing 72(1–3), 548–561 (2008) 8. Beasley, D., Bull, D.R., Martin, R.R.: An Overview of Genetic Algorithms: Part 1, Fundamentals (1993) 9. Stoyanov, S.: Invited Paper Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes (2009)
Vulnerability Discovery in Network Systems Based on Human-Machine Collective Intelligence
Ye Han1(&), Jianfeng Chen1,2, Zhihong Rao1,2, Yifan Wang1, and Jie Liu1,2
1 China Electronic Technology Cyber Security Co., Ltd., Chengdu 610041, China [email protected]
2 Cyberspace Security Key Laboratory of Sichuan Province, Chengdu 610041, China
Abstract. Network vulnerability mining is an important topic in cyberspace security. Network vulnerabilities enable attackers to obtain sensitive information from computer systems, control computer systems illegally and cause severe damage. More effective vulnerability mining requires the wider participation of cybersecurity engineers and intelligent computing devices, as cooperation among the mining participants can take advantage of the complementary capabilities of humans and machines. The human and resource costs of network vulnerability mining can thereby be remarkably reduced, and the mining efficiency is improved accordingly. The principles and engineering mechanisms of introducing collective intelligence to network vulnerability mining are discussed in this paper, and a vulnerability mining platform based on crowd intelligence is established following a four-layer system structure. Experimental tests showed that cooperation enables mining participants to work better and learn from empirical information, while better mining results can be obtained through procedure optimization. Keywords: Cybersecurity
Vulnerability Collective intelligence
1 Introduction

As a new research hotspot in recent years, collective intelligence provides a new technical means for exploring human-machine collaboration. It borrows concepts from, and builds a paradigm of interaction on, the behaviors of individuals and groups in nature. The intelligence that emerges this way can perform far beyond the level of individual intelligence. Desirable features of collective intelligence, such as inherent high concurrency, fast decision finding, strong environmental adaptability, and the high robustness and self-recovery evoked by a functional group, are especially suitable for performing tasks that are difficult for individuals to complete [1]. Cyberspace security is an engineering science focused on the game between attackers and defenders. The discovery and repair of vulnerabilities is the main driver, and an endless loop, through which cyberspace security develops. A vulnerability is any factor
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 453–458, 2020. https://doi.org/10.1007/978-3-030-39512-4_71
that exists in computer network systems and may cause damage to system components or their data. Vulnerabilities may appear in multiple dimensions, such as hardware, software, protocol implementation, or even system security policy. If an attacker discovers a vulnerability before it is noticed by defenders, he will write malware and put it to use immediately to cause maximum damage. If the defender discovers the vulnerability first, the repair work can be finished in time to protect the system from potential attacks and reduce security risks. The innovative concept and characteristics of collective intelligence coincide with the increasing demands on vulnerability mining scale and efficiency. To be more specific, the diversity of collective intelligence enables the adequate participation of mass entities in vulnerability discovery, evaluation, and verification, so mining creativity is greatly enhanced. The natural concurrency of collective intelligence allows the mining work to run in parallel, achieving better scalability and reducing latency. The good coordination property of collective intelligence enables entities of different backgrounds or skill levels to work with complementary capabilities, optimizing resource usage and task allocation. Last but not least, the high robustness and self-recovery of the mining process come from the strong adaptability of collective intelligence mechanisms. Besides, vulnerability mining systems that incorporate collective intelligence are devised to learn from empirical behaviors and achieve better mining performance in each iteration. Collective intelligence has been used in travel recommendation systems [2], online health support communities [3], and learning map making [4]. However, integrating collective intelligence for vulnerability mining is still a frontier research direction. Little theoretical or engineering work has been disclosed to date.
Literature related to the topic includes a method proposed by Le et al. for automatic web application vulnerability detection [5], and GuruWS, a hybrid platform for detecting malicious web shells and application vulnerabilities, constructed in [6]. The remaining content of this paper introduces a vulnerability mining theory and method based on collective intelligence. The architecture of a practical vulnerability mining system is discussed afterwards. It shows that the collective intelligence emerging from humans and machines achieves better results in transaction-intensive and intellectually intensive workflows such as vulnerability mining.
2 Vulnerability Mining Based on Collective Intelligence

As mentioned above, deep-level vulnerability mining requires the wide participation of cybersecurity engineers and intelligent computing devices. Based on collective intelligence, this paper proposes a vulnerability-mining framework that enables cooperation among large numbers of cybersecurity engineers and computing devices. Two key problems need to be considered when developing the framework. First, detailed information, namely digital portraits of the different vulnerability mining participants (especially cybersecurity engineers), needs to be obtained. These digital portraits, stored in a knowledge base, form a capability map of engineers. Second, based on the digital portraits of mining participants, vulnerability-mining tasks are decomposed and distributed to different mining
participants. Following collective intelligence approaches, information and knowledge gained from different mining participants are collected and shared to facilitate the mining process. These two issues are discussed in detail as follows.

2.1 Capability Map Generation
Figure 1 presents the process of participant capability map generation in vulnerability mining. The capability map discussed here is generated mainly for cybersecurity engineers, since the digital portraits of computers or servers are usually described with structured information and can be used directly in task distribution.

Fig. 1. Capability map generation process
Information acquired from mass data sources can be categorized as structured data, semi-structured data, and unstructured text. Entities and their attributes are extracted from this source information. Two types of entities, namely cybersecurity engineers and vulnerabilities, are extracted. Cybersecurity engineer entities include static attributes such as name, age, and education, and dynamic attributes such as work experience and skills. Vulnerability entities include attributes such as description, type, severity, and loss type. After that, the relationships among the different entities are analyzed and built. Knowledge inference can then be performed on the computed relationships: specialized fields, current working status, and other features of cybersecurity engineers can be obtained this way. These features can be considered labels of cybersecurity engineers and are used to match the preliminarily recognized vulnerability labels to achieve optimized mining task distribution. The features of cybersecurity engineers are stored in a knowledge base together with their identity information. The knowledge base is regularly updated based on newly acquired source data.
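As an illustrative sketch only (the paper does not disclose its knowledge-base schema, and the platform itself is realized in Java SE), the two entity types and the label-inference step described above might look as follows; all attribute names and the inference rule are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical representation of the two extracted entity types.
@dataclass
class Vulnerability:
    vid: str
    description: str
    severity: str
    labels: set = field(default_factory=set)  # e.g. {"heap", "c"}

@dataclass
class Engineer:
    name: str
    education: str                              # static attribute
    skills: set = field(default_factory=set)    # dynamic attribute
    solved: list = field(default_factory=list)  # ids of solved vulnerabilities

def infer_labels(engineer, vulns_by_id):
    """Toy knowledge inference: an engineer's labels are their declared
    skills plus the labels of every vulnerability they have solved."""
    labels = set(engineer.skills)
    for vid in engineer.solved:
        labels |= vulns_by_id[vid].labels
    return labels
```

In this sketch the solved-vulnerability history plays the role of the "dynamic attributes" in Fig. 1: each solved case enriches the engineer's label set, which the knowledge base would then store and periodically update.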
2.2 Mining Task Distribution
After capability maps are created, mining tasks can be allocated to different mining participants. Cooperation among the participants, driven by collective intelligence, is devised to reduce the cost of vulnerability mining and improve the mining results. Figure 2 presents the framework of our vulnerability mining process based on collective intelligence. Fuzzing and concolic execution techniques are incorporated as the main tools in the mining process.
Fig. 2. Vulnerability mining based on collective intelligence
First, fuzzing seeds are generated and mutated using different strategies to maximize code coverage, and abnormal outputs of the fuzzing tests are recorded. In parallel, different algorithms perform symbolic execution, and potential vulnerabilities are discovered. Next, further analysis and information refinement are performed on the abnormal outputs generated by fuzzing and the potential vulnerabilities recognized by symbolic execution. After feature extraction, the abnormal outputs and potential vulnerabilities are treated as inputs to a series of multi-label classification models, which recognize and predict the labels of the vulnerabilities. According to the matching between the labels of vulnerabilities and the labels of cybersecurity engineers, target engineers with high relevance are recommended for the current task. The background, root cause, and exploitation mechanism of the vulnerabilities are then analyzed by different cybersecurity engineers in our system.
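The label-matching step can be sketched as a similarity ranking between predicted vulnerability labels and engineer labels. The Jaccard score used below is an illustrative choice, not the paper's metric, which is unspecified:

```python
def jaccard(a, b):
    """Overlap between two label sets; 0.0 when both are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend_engineers(vuln_labels, engineer_labels, top_k=2):
    """Rank engineers by how closely their capability labels match the
    labels predicted for a vulnerability; return the top_k names."""
    ranked = sorted(engineer_labels.items(),
                    key=lambda kv: jaccard(vuln_labels, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

For example, a vulnerability labeled {"heap", "c", "overflow"} would rank an engineer labeled {"heap", "c"} above one labeled {"web", "sql"}; the top-k engineers are then assigned the analysis task.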
Opinions of cybersecurity engineers are collected and aggregated to form a detailed report on the vulnerability. In this way, the mechanism of the vulnerability can be thoroughly analyzed. Under such guidance, specific input files with a higher probability of triggering the vulnerability can be generated and used in fuzzing tests, and the concolic execution processes can also be made more targeted. The knowledge previously gained about the vulnerability can be validated, and new vulnerabilities can be discovered through vulnerability association. Both the accuracy and efficiency of vulnerability mining are thus improved. The process described above is repeated until no more vulnerabilities can be found. Since vulnerabilities are inevitable and ubiquitous in network systems, this repeated process can be regarded as part of the daily maintenance of network systems.
3 Platform Architecture

We built a vulnerability mining platform based on collective intelligence and realized the system in Java SE. The platform comprises four hierarchical layers: the theoretical layer, the infrastructure layer, the resource architecture layer, and the application service layer. These layers exchange data and instructions under the supervision and control of system security management and maintenance. The theoretical layer mainly includes the implementation of theories of collective intelligence, cooperative games, and optimization. The infrastructure layer includes various vulnerability mining tools as well as the digital portraits of cybersecurity engineers and computing devices. The resource architecture layer includes vulnerability databases, mining engines, collective intelligence computing engines, etc. The application service layer contains mining services, information retrieval services, and data drill services. The platform provides users of different skills and backgrounds with vulnerability mining and analysis computing resources, holographic vulnerability libraries, vulnerability mining engines, and related tools. Vulnerabilities can be discovered through distributed task assignment and public assessment, while the platform also provides a place for engineers to learn and communicate. Vulnerability-mining-related services are implemented on cloud computing infrastructure to achieve better scalability and performance. Centralized computing resources are allocated to handle collective intelligence vulnerability mining transactions, transcending the inherent performance limitations of traditional security services and achieving a significant increase in concurrency and procedural robustness through adequate resource provisioning.
Collective intelligence vulnerability mining services are mainly located at the SaaS (software as a service) level; the functions provided by the platform are exposed to users as services invoked through standardized interfaces. The platform has now been successfully put into operation and has run for nearly one year.
4 Conclusion

Network vulnerability mining is an important topic in cyberspace security. As a new research hotspot in recent years, collective intelligence provides a new technical means for exploring human-machine collaboration. The human and resource cost of network vulnerability mining can be reduced, and mining efficiency improved, by the innovative integration of collective intelligence and cyberspace security. This paper investigates vulnerability mining theory, discusses a method based on collective intelligence, and builds a vulnerability mining platform based on crowd intelligence. In the future, new collective intelligence algorithms will be included in the platform. Auxiliary tools and knowledge will be drawn from collective intelligence to make better vulnerability mining strategies. Besides, the datasets and knowledge base included in the platform will be further enriched to make the platform more practical and intelligent.

Acknowledgments. This work is supported by National Key R&D Program of China No. 2017YFB08029 and by Sichuan Science and Technology Program No. 2017GZDZX0002.
References

1. Benkler, Y., Hassan, M.: Collective Intelligence: Creating a Prosperous World at Peace. Earth Intelligence Network, Oakton (2008)
2. Shen, J., Deng, C., Gao, X.: Attraction recommendation: towards personalized tourism via collective intelligence. Neurocomputing 173, 789–798 (2016)
3. Introne, J., Goggins, S.: Advice reification, learning and emergent collective intelligence in online health support communities. Comput. Hum. Behav. 99, 205–218 (2019)
4. Jeng, Y., Huang, Y.: Dynamic learning paths framework based on collective intelligence from learners. Comput. Hum. Behav. 100, 242–251 (2019)
5. Le, V., Nguyen, H., Lu, D., Nguyen, N.: A solution for automatically malicious web shell and web application vulnerability detection. In: 8th International Conference on Computational Collective Intelligence (ICCCI), pp. 367–378. Springer, Cham (2016)
6. Le, V., et al.: GuruWS: a hybrid platform for detecting malicious web shells and web application vulnerabilities. In: Transactions on Computational Collective Intelligence XXXII, pp. 184–208. Springer, Heidelberg (2019)
Computational Modeling and Simulation
Supporting Decisions in Production Line Processes by Combining Process Mining and System Dynamics

Mahsa Pourbafrani1(✉), Sebastiaan J. van Zelst1,2, and Wil M. P. van der Aalst1,2

1 Chair of Process and Data Science, RWTH Aachen University, Aachen, Germany
{mahsa.bafrani,s.j.v.zelst,wvdaalst}@pads.rwth-aachen.de
2 Fraunhofer Institute for Applied Information Technology, Sankt Augustin, Germany
Abstract. While conventional production technology is static by nature, developments in the areas of autonomous driving and communication technology enable a novel type of production line, i.e., dynamic production lines. Carriers of products are able to navigate autonomously through a production facility, allowing for several different "production routes". Given such dynamic behavior, it is interesting for a production line manager to study what type(s) of station(s)/resource(s) he/she needs to invest in. We can do so by analyzing the behavior of the autonomous production line, to calculate which change is most likely to boost performance. In this paper, we use historical event data, i.e., the actual executions of the process, to support the design of system dynamics models, i.e., high-level predictive mathematical models. The purpose of our framework is to let production line managers oversee the effects of changes in the production line at an aggregated level, with respect to different performance metrics, while providing freedom in choosing the level of detail of the model. The generated model is at a customized, aggregated level. We evaluated our approach on synthetic event logs in which we emulate the effect of policy changes, which we predict accordingly.

Keywords: Process mining · Performance analysis · Production line · Simulation · What-if analysis · System dynamics
1 Introduction

In the area of modern products in the automobile industry, e.g., e-mobility and autonomous driving, production lines should be able to handle changes as fast as possible. Flexible manufacturing systems have proposed different approaches to deal with disturbances in production systems [6]. Providing an agile platform in which production line managers are able to find the points at which to improve the performance metrics is important. At the same time, the effects and costs of changes need to be
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 461–467, 2020. https://doi.org/10.1007/978-3-030-39512-4_72
considered carefully. Production line managers have to make changes with a certain level of confidence regarding the possible effects before applying them. A complete insight into the performance of the production line is a prerequisite for a production line manager to evaluate the effect of changes. In modern organizations, information systems play a substantial role in supporting day-to-day operations, and they become more and more intertwined with production processes. Process mining [1] is a collection of highly automated techniques that aim to increase process knowledge, primarily based on the event data recorded in such information systems. For example, process mining techniques enable us to discover descriptive process models purely on the basis of the recorded information. Furthermore, advanced techniques allow us to assess process conformity w.r.t. a given reference model, as well as to identify bottlenecks. Since the information system tracks what actually happens, process mining techniques allow organizations to increase overall process transparency and improve process performance. Insight into the real running processes in an organization, along with bottlenecks and performance metrics, is a crucial step for an organization to assess its current performance and improve its processes. Furthermore, several techniques exist that aim to increase the overall view of the process [7, 8]. Undisputedly, predicting the future behavior of a process, specifically with the aim of improving process performance, is of interest to many organizations. Within process mining, some work towards the prediction of future behavior w.r.t. the performance of the process has been proposed [9]. The work in [11] focuses on assessing the applicability of deep learning techniques for short-term prediction. None of the existing techniques provides freedom in choosing the level of detail of the prediction.
However, a decision-maker in an organization is often interested in predicting process performance at different levels of detail, specifically in production lines. Current predictive approaches require extensive knowledge of the process, and, in production lines, the large number and diversity of activities make the use of current forward-looking approaches infeasible. In [5], a general framework is proposed that can be used for scenario-based analysis and prediction at an aggregated level. It uses system dynamics models based on the past data of the organization. Using this approach, the environmental variables that in reality affect the performance of a process can be included. At the same time, unlike discrete event simulation techniques, freedom in the level of detail is provided. Therefore, in this paper, we adopt the main approach of [5] and propose a general framework based on process mining and system dynamics for production lines. It provides insight into the current status of a production line and its future status considering upcoming changes. We extend the framework proposed in [5] by adding different levels of detail for the modeling. In addition, we perform a preliminary case study w.r.t. the applicability of the proposed framework in future dynamic production line settings. The remainder of this paper is organized as follows. In Sect. 2, we introduce background concepts. In Sect. 3, we present our main approach. We provide an evaluation as a proof of concept in Sect. 4. In Sect. 5, related work is discussed. Section 6 concludes our work and discusses interesting directions for future work.
2 Background

Process Mining. Process mining is a collection of approaches, techniques, and tools that provides a wide range of knowledge about the processes inside organizations based on event logs. Using process mining, discovering and analyzing the processes executed in an organization is possible [1].

Event Log. Past data of the execution of an organization's processes provide the input for process mining. The execution of an activity, in the context of some process instance, identified by a unique case id, at a specific timestamp, by a specific resource, is referred to as an event. For instance, an event in the production line is defined as: item 1 (case id) is cut (activity) by John (resource) at 10/1/2018 7:42:30 (timestamp). There are different events related to different process instances, identified by different case ids. The set of events with the same case id forms a trace, and multiple traces form the event log of the execution of the process. Note that, typically, an event log includes more data attributes related to the process, e.g., the cost of an activity, account balance, customer id, etc.

System Dynamics. System dynamics is a collection of approaches, techniques, and tools that is able to present models of complex, dynamic systems in a structured manner. In particular, it allows us to capture the factors affecting the behavior of a system [4]. Within system dynamics, we use a specific modeling notation, i.e., the stock-flow diagram, which allows us to simulate possible future behavior of a system, e.g., a (business) process.

Stock-Flow Diagram. A stock-flow diagram consists of three basic elements, i.e., stocks, in-/outflows, and variables [3]. A stock represents any entity that, in some way, is able to accumulate over time, e.g., the number of waiting items in the production line. An inflow increases the accumulated entity represented by a stock, whereas an outflow reduces it.
Finally, any environmental factor that is able to influence the in-/outflow of a stock is modeled as a variable. Such a variable is able to influence other variables as well. Furthermore, the value of a stock, in turn, is able to influence variables.
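A stock-flow model of this kind can be simulated with a simple fixed-step update. The sketch below (rates and step size are invented for illustration, not taken from the paper) shows one stock, items waiting at a station, with a constant inflow and a capacity-limited outflow:

```python
def simulate_stock(arrival_rate, service_capacity, steps):
    """Fixed-step simulation of one stock: each step, `arrival_rate` items
    flow in and at most `service_capacity` items flow out; the stock
    accumulates whatever the outflow cannot drain."""
    stock = 0.0
    history = []
    for _ in range(steps):
        outflow = min(stock + arrival_rate, service_capacity)
        stock = stock + arrival_rate - outflow
        history.append(stock)
    return history
```

When the arrival rate exceeds the service capacity, the stock grows without bound, which is exactly the queue-buildup behavior a stock-flow diagram is meant to expose; when capacity exceeds arrivals, the stock stays empty.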
3 Proposed Framework

General key performance metrics in production lines are resource management/utilization and throughput time. We adopt the framework presented in [5] and use the past executions of the processes in a production line in the form of event logs. Event logs track information at the activity level, i.e., they describe what activity is performed at what point in time, which makes the modeling complicated. In the proposed framework, as Fig. 1 shows, we extract the major process components, which in a production setting are the stations in the production line. In the newly discovered process model, the station level is detailed enough to show the flow of cars being produced in the production line and at the same time aggregated enough to avoid unnecessary detail. To do so, we extract activities and aggregate them into one single activity using [12]. The set of activities that must be performed but may happen in different orders is extracted by observing the traces in the event logs.
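To make the event-log notions from Sect. 2 concrete, a minimal sketch of grouping events into traces (the field layout follows the paper's running example; the concrete events are made up):

```python
from collections import defaultdict

def to_traces(events):
    """Group (case_id, activity, resource, timestamp) events by case id
    and order each trace by timestamp."""
    traces = defaultdict(list)
    for case_id, activity, _resource, ts in sorted(events, key=lambda e: e[3]):
        traces[case_id].append(activity)
    return dict(traces)
```

Activity aggregation as used in the framework would then operate on such traces, e.g., collapsing a set of parallel sub-activities observed in different orders into one station-level activity.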
Fig. 1. The proposed framework for the production line. It starts with discovering the process model of the production line at the station level, then generates the SD-Log including the different performance parameters, which are used to design the model. After validation, scenario-based analysis for production managers is possible.
Considering the performance aspect, since all the parallel activities may happen in any possible order, we combine them into one single high-level activity. Using a module based on process discovery, the process model at the station level is discovered. In production lines, tasks are distributed between stations, which can be handled by the same resources, so we are able to obtain the performance of the process among the stations. We consider the following performance parameters exclusive to production lines: the average service time of each station, the number of resources for each station, the arrival rate of items for the production line, the finish rate, the capacity of each station, and the number of items waiting at each station. In the next step, we generate the SD-Log based on the performance parameters of the stations for each time window. As shown in Fig. 1, the similarity of the parameter values in each time window is tested with the "Time Window Stability Test". Exploiting system dynamics modeling, the stock-flow diagram is then generated for the production line. We simulate the model by populating it with the values from the SD-Log. This step is followed by a validation step, which provides the level of certainty, i.e., whether the model acts similarly to reality. In the final step, the general model can be refined by adding other parameters of the production line. We use the model to change parameters and predict different scenarios.
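The SD-Log generation step can be sketched as time-window aggregation over the event log. The sketch below is a simplified stand-in for the paper's full parameter list, counting only arrivals and finishes per window; record layout and timestamps are hypothetical:

```python
def build_sd_log(events, window):
    """Aggregate (case_id, kind, time) records into per-window counts,
    where kind is "start" (item enters the line) or "end" (item finished)
    and time is a numeric timestamp. Returns {window_index: row}."""
    rows = {}
    for _case_id, kind, t in events:
        w = int(t // window)
        row = rows.setdefault(w, {"arrivals": 0, "finishes": 0})
        row["arrivals" if kind == "start" else "finishes"] += 1
    return rows
```

Each row of the resulting SD-Log would then supply the inflow and outflow rates with which the stock-flow model of the production line is populated for that time window.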
4 Proof of Concept

We use CPN Tools (http://cpntools.org/) and ProM (http://www.promtools.org/) to generate the event log based on the production line of an electric automobile company. The generated event log includes the executions of the process before and after applying changes to the performance metrics, e.g., a change in the number of resources. Our model includes multiple stations, which cars pass through in sequence. In our designed stock-flow model (Fig. 2), the assembly of the doors, including four other sub-activities, takes two hours (station 210), and there is always a queue for this station. By increasing the number of resources in station 210,
Fig. 2. Part of the designed stock-flow diagram for the production line, based on synthetic data of an automobile company. This model is populated with the data from the SD-Log, with a time window of one day, at the station level.
Fig. 3. Number of cars waiting for station 212 and station 210 before (red) and after (blue) adding one resource to station 212 in 50 days.
as we expected, the number of cars in the queue for station 210 decreased to zero in the second execution of the model with two resources. Therefore, the problem of waiting cars seems to be solved. However, the proposed framework reveals the effect of changes in this station on the following stations, such as the "window assembly" station. As Fig. 3 shows, while the number of cars waiting for station 210 is decreasing, the number of cars waiting for station 211 is increasing. Since in the production line all cars pass through "window assembly" after the "door assembly" station, we chose the two involved stations and all their possible performance parameters generated from the aggregated process model and the event log. This evaluation, as a proof of concept, shows the effectiveness of the approach in demonstrating the effects of one change through the whole production line. We can pragmatically deduce detailed knowledge of the process and performance aspects from an event log in the scenario-based analysis of processes. Using the proposed approach, we are able to predict further changes in the production line caused by changing one part, such as adding more resources to one of the stations. As the example demonstrates, the proposed approach is able to predict the consequences of changes/decisions in the process.
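The bottleneck shift observed in this experiment can be reproduced in miniature with two capacity-limited stations in series (the rates below are invented, not the case-study values): relieving the first bottleneck moves the queue downstream.

```python
def two_station_line(arrival, cap1, cap2, steps):
    """Two stations in sequence: station 1's outflow is station 2's inflow.
    Returns the final queue lengths (q1, q2)."""
    q1 = q2 = 0.0
    for _ in range(steps):
        out1 = min(q1 + arrival, cap1)
        q1 = q1 + arrival - out1
        out2 = min(q2 + out1, cap2)
        q2 = q2 + out1 - out2
    return q1, q2

# Before the change, station 1's capacity is the bottleneck; afterwards,
# its doubled capacity pushes the excess load onto station 2.
before = two_station_line(arrival=5, cap1=3, cap2=4, steps=20)
after = two_station_line(arrival=5, cap1=6, cap2=4, steps=20)
```

This mirrors the paper's point: a local fix that empties one queue is only an improvement if the downstream stations can absorb the increased flow, which is precisely what the aggregated stock-flow model makes visible.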
5 Related Work

An overview of process mining and of system dynamics is provided in [1] and [4], respectively. In the field of system dynamics, different work toward simulation and prediction has been done. Different research has been conducted on the use of system dynamics in contexts such as business process management, e.g., using both Petri net models and system dynamics to develop a model for the same situation [2]. According to [13], system dynamics is an effective simulation technique in manufacturing and business; however, the techniques used did not exploit the insight into the process provided by process mining. In process mining, prediction and simulation approaches are mainly at a detailed level, i.e., at the level of cases [10]. In [5], the possibility of addressing models at an aggregate level using both process mining and system dynamics is addressed.
6 Conclusion

In this paper, the necessity of providing a platform to support decisions in modern production lines is discussed. Establishing flexible production lines for modern products such as autonomous cars is the goal. Our framework provides the ability to oversee new decisions and changes so that a production line can be agile. It employs process mining techniques, specifically process discovery at a higher level of abstraction, along with performance analysis. We use the outcome of process mining techniques to generate an SD-Log. We design the general system dynamics model based on the knowledge discovered through process mining and the related parameters of the production line. The general stock-flow diagram for the production line at an aggregated level can be improved and changed for different situations. We evaluated our framework based on a synthetic event log, which was generated using a CPN model. This evaluation serves as a proof of concept showing the efficiency of our generated model. As future work, we will focus on identifying the underlying relationships between the parameters of the production line. Extending our approach in the field of performance analysis and resource management, so that the process meets the requirements of the business, is another practical direction.

Acknowledgments. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2023 Internet of Production.
References

1. van der Aalst, W.M.P.: Process Mining - Data Science in Action. Springer, Heidelberg (2016)
2. Rosenberg, Z., Riasanow, T., Krcmar, H.: A system dynamics model for business process change projects. In: International Conference of the System Dynamics Society, pp. 1–27 (2015)
3. Pruyt, E.: Small System Dynamics Models for Big Issues: Triple Jump Towards Real-World Complexity. TU Delft Library, Delft (2013)
4. Sterman, J.D.: Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/McGraw-Hill, Boston (2000)
5. Pourbafrani, M., van Zelst, S.J., van der Aalst, W.M.P.: Scenario-based prediction of business processes using system dynamics, Rhodes, Greece (2019)
6. Qin, J., Liu, Y., Grosvenor, R.: A categorical framework of manufacturing for Industry 4.0 and beyond. Procedia CIRP 52, 173–178 (2016)
7. Leemans, S.J.J., Fahland, D., van der Aalst, W.M.P.: Process and deviation exploration with inductive visual miner. In: Proceedings of the BPM Demo Sessions, Eindhoven, The Netherlands, 10 September 2014, p. 46 (2014)
8. Mannhardt, F., de Leoni, M., Reijers, H.A.: The multi-perspective process explorer. In: Proceedings of the BPM Demo Session 2015, pp. 130–134 (2015)
9. Rozinat, A., Mans, R.S., Song, M., van der Aalst, W.M.P.: Discovering simulation models. Inf. Syst. 34(3), 305–327 (2009)
10. Rozinat, A., Wynn, M.T., van der Aalst, W.M.P., ter Hofstede, A.H.M., Fidge, C.J.: Workflow simulation for operational decision support. Data Knowl. Eng. 68(9), 834–850 (2009)
11. Tax, N., Teinemaa, I., van Zelst, S.J.: An interdisciplinary comparison of sequence modeling methods for next-element prediction (2018)
12. Leemans, M., van der Aalst, W.M.P., van den Brand, M.: Hierarchical performance analysis for process mining. In: ICSSP (2018)
13. Jahangirian, M., Eldabi, T., Naseer, A., Stergioulas, L.K., Young, T.: Simulation in manufacturing and business: a review. Eur. J. Oper. Res. 203(1), 1–13 (2010)
Using Real Sensors Data to Calibrate a Traffic Model for the City of Modena Chiara Bachechi, Federica Rollo(&), Federico Desimoni, and Laura Po ‘Enzo Ferrari’ Engineering Department, Via Vivarelli 10, Modena, Italy {chiara.bachechi,federica.rollo,federico.desimoni, laura.po}@unimore.it
Abstract. In Italy, road vehicles are the preferred means of transport. Over recent years, the passenger car fleet has grown in almost all the EU Member States. The high number of vehicles complicates urban planning and often results in traffic congestion and areas of increased air pollution. Overall, efficient traffic control is profitable in individual, societal, financial, and environmental terms. Traffic management solutions typically require the use of simulators able to capture in detail all the characteristics and dependencies associated with real-life traffic. Therefore, the realization of a traffic model can help to discover and control traffic bottlenecks in the urban context. In this paper, we analyze how to better simulate the vehicle flows measured by traffic sensors in the streets. A dynamic traffic model was set up starting from traffic sensor data collected every minute in about 300 locations in the city of Modena. The reliability of the model is discussed and proved with a comparison between simulated values and real values from traffic sensors. This analysis pointed out some critical issues. Therefore, to better understand the origin of fake jams and incoherence with real data, we explored different configurations of the model as possible solutions. Keywords: Traffic modelling · Simulation model · Big data analytics · Traffic sensor
1 Introduction Italy is the European country with the second-highest number of passenger cars per thousand inhabitants. The Eurostat/ITF/UNECE Common Questionnaire on Inland Transport registers that in 2016 in Italy there were 625 cars per thousand inhabitants. The aim of the TRAFAIR1 project [1] is to implement a flexible solution to monitor and forecast urban air quality in 6 European cities (Modena, Florence, Pisa, Livorno, Zaragoza and Santiago de Compostela). Real traffic data are needed to evaluate traffic-related emissions and then estimate how these pollutants move in the air according to the wind, weather and building shapes. Therefore, a simulation model has been employed to obtain the vehicle flow where no sensors are located.
1 www.trafair.eu.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 468–473, 2020. https://doi.org/10.1007/978-3-030-39512-4_73
Public administrations usually employ static traffic models that provide only an average traffic condition during peak hours in the main streets of the city. This kind of model does not consider the dynamic evolution of traffic during the day or the seasonal variation over the year. In general, simulation is a dynamic representation of the real world achieved by building a computer model and moving it through time [2]. Traffic modelling [3] aims to accurately recreate real traffic flow by using data coming from a network of sensors distributed over the area of interest. The cost of constructing such a distributed system can be burdensome for public administrations. However, in many cities, some distributed sensors are already in use for other purposes. In the city of Modena, more than 300 induction loop sensors are located near traffic-light controlled junctions. These devices are used locally to control the traffic light logic, but their traffic-related data (for instance, the vehicle counts and the average speed) have never been analyzed before. In the TRAFAIR project, we customize a traffic model for the city of Modena employing SUMO (Simulation of Urban MObility) [4] and OSM2 (OpenStreetMap), both open source, to ensure a cost-effective solution. The paper is organized as follows: Sect. 2 briefly describes the traffic model; in Sect. 3, an evaluation of the model performance is given comparing real traffic data with the simulated ones; finally, in Sect. 4 different configurations are explored to find a solution to the emerged criticalities.
2 The Model Our traffic model [5] is a micro-simulation model, obtained by using SUMO, configured to generate the routes of the vehicles starting from traffic sensor data. In a micro-simulation model, vehicles are simulated individually: each vehicle has its own trip to follow and moves inside the road network respecting traffic restrictions. Our model aims to produce data about vehicle counts and their average speed in every road portion of Modena starting from the measurements of the traffic sensors. The sensors placed in Modena count the vehicles passing over them every minute and evaluate their average speed. We collect sensor data in real time in a local PostgreSQL database, and the model interacts with it directly [6]. In our SUMO simulated map, we placed a “calibrator” near each traffic sensor. A calibrator is an object capable of producing the aspired traffic flow, i.e. the number of vehicles counted by the sensor associated with that calibrator. Calibrators are part of the SUMO suite and act like virtual traffic sensors calibrated against the real measurements of the on-road sensors. Unlike sensors, which measure the number of vehicles pointwise, calibrators control the flow on a lane of a road portion. For this reason, we have also placed some virtual detectors, SUMO objects that mimic exactly the behaviour of the sensors and return the vehicle count at a precise point of the map. We use a Python script to automatically produce the file containing the positions of the virtual detectors. We consider the GPS coordinates of
2 https://www.openstreetmap.org.
the corresponding traffic sensor and the name of the street in which it is placed. The road name is necessary to avoid errors: sensors are placed near junctions, where several roads run close to one another. Thus, considering only the geolocation of a sensor and placing it in the nearest road section does not always yield the right position. Therefore, we also consider the similarity between the names of the roads at the junction and the correct road name to estimate the right position. The values retrieved by these virtual detectors can be compared with the measurements of the real ones to evaluate the performance of the model.
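The placement step based on name similarity can be sketched as follows. This is an illustrative reconstruction, not the authors' actual script: the function name, the candidate-list layout, and the use of `difflib` for string similarity are assumptions.

```python
import difflib

def best_road_match(sensor_street_name, candidate_roads):
    """Among the road sections near a sensor's GPS fix, pick the one whose
    street name is most similar to the name recorded for the sensor.
    candidate_roads: list of (road_id, road_name) tuples."""
    def similarity(road_name):
        return difflib.SequenceMatcher(
            None, sensor_street_name.lower(), road_name.lower()
        ).ratio()
    # Near a junction the geometrically nearest road is not always the
    # right one, so name similarity breaks the tie.
    return max(candidate_roads, key=lambda road: similarity(road[1]))
```

For example, `best_road_match("Via Vivarelli", [("e12", "Via Emilia Est"), ("e7", "Via Vivarelli")])` selects the road with id `"e7"` even if `"e12"` happened to be geometrically closer.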
3 Evaluating the Model We evaluated the performance of the model using different techniques. At every point where there is a real sensor, the time series of the real flow measurements and the one retrieved by the virtual detector in the simulation are compared using the DTW (Dynamic Time Warping) distance. The DTW distance [7] is a way to measure the distance between two different time series that allows sequences to be stretched along the time axis. Sensors with a DTW distance higher than 1200 have been considered distant and not reliable. The DTW distance is not the only metric used to determine whether a calibrator is following the real measurements. We also calculated the difference between the virtual and real sensor vehicle counts at the same instant and evaluated the average of this difference. Finally, we evaluated the number of instants in which the difference is higher than 2 vehicles per minute. This metric accounts for cases in which the calibrator fails to follow the real measurements only for a few instants, so that the distance is high at those instants but the calibrator flow is similar to the real one elsewhere; the error is thus limited to a short period of time. Using these two methods, we can classify calibrators into two groups: those that manage to produce the real aspired flow will be referred to as ‘aligned’, the others as ‘not aligned’. Tests have been performed on seven days of November 2018. In Table 1, the number of not aligned calibrators is displayed for every tested day. The ratio is obtained by dividing the number of not aligned calibrators by the total number of calibrators in the simulation; this number is equal to the number of sensors with at least one measurement on that specific simulated day. More than 20% of calibrators appear to be not aligned. In Fig. 1 a graphical comparison between two time series is displayed to underline the difference between an aligned and a not aligned calibrator.
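As an illustration of the comparison metric, a minimal dynamic-programming implementation of the DTW distance between two numeric series (with the absolute difference as local cost) might look as follows; the paper does not state which implementation was used, so this is a generic sketch rather than the authors' code.

```python
def dtw_distance(series_a, series_b):
    """Dynamic Time Warping distance between two numeric sequences,
    allowing them to be stretched along the time axis."""
    n, m = len(series_a), len(series_b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning series_a[:i] with series_b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(series_a[i - 1] - series_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch series_b
                                 D[i][j - 1],      # stretch series_a
                                 D[i - 1][j - 1])  # step both
    return D[n][m]
```

A calibrator whose simulated time series yields a distance above the 1200 threshold with respect to the real sensor series would be flagged as not reliable under the criterion described above.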
We have identified the calibrators that are classified as ‘not aligned’ on all the tested days. They are 39 and belong to 23 different junctions; we observed that calibrators in the same junction often belong to the same group. Inspecting some not aligned junctions, we discovered that in 6 junctions (where 13 of the 39 calibrators are located) the problem is related to the SUMO road network not matching the real one. The geographic data are provided by OSM and include only the total number of lanes in a road, without information about their directions. Thus, the direction of the lanes assigned by SUMO is sometimes wrong. An example is shown in Fig. 2: on the left of the figure, the two sensors at the bottom are located in the SUMO road network in the same lane and the same direction; however, the two sensors are on two different lanes in
Table 1. The number of not aligned calibrators and its percentage of the total number of calibrators for seven simulated November days.

Day      | Not aligned calibrators | %
14th Nov | 59/240                  | 24.6
15th Nov | 52/240                  | 21.7
19th Nov | 59/239                  | 24.7
22nd Nov | 69/241                  | 28.6
27th Nov | 59/240                  | 24.6
29th Nov | 49/241                  | 20.3
reality as shown on the aerial view on the right. To overcome this problem, the counts provided by the two sensors could be summed up.
Fig. 1. Comparison between the time series of vehicle counts, simulated by calibrators (blue) and measured by real sensors (orange). The graphs on the left refer to an all-day simulation of Monday 19th November. The graphs on the right refer to the same calibrators and sensors in the sub-simulations of the same day.
Another reason why some calibrators are not aligned is the creation of fake jams in the simulation: when a jam appears, calibrators are not able to insert new vehicles even if the required flow is higher. We observed that 10 of the 23 junctions with not aligned calibrators are affected by fake jams. The presence of a fake jam can be observed in the top-left graph of Fig. 1: the calibrator initially manages to follow the real vehicle counts, then the number of vehicles increases and a jam appears, reducing the flow through it. The duration of the simulation can contribute to producing fake jams: when a calibrator generates a vehicle, it remains in the simulation until the end of its route, unless it drives over another calibrator that decides to remove it. Reducing the duration of the simulation (splitting it into several sub-simulations of reduced duration) avoids this problem, because restarting the simulation removes the vehicles that are no longer necessary. We also performed several simulations excluding the calibrators of a specific junction, to observe whether their absence could affect the performance of the others. We observed that this influence is related to the geographical distance and also to the existence of a path connecting the two junctions.
In some cases, not aligned calibrators are located in the right place of a crossroad with the right morphology, and their measurements are acceptable. The reason for their ‘not alignment’ is attributed to the sensors located near them: 3 of the not aligned junctions have some sensors in the neighbourhood that count zero vehicles for the whole simulated day. The measurements of these sensors must be excluded from the input data; otherwise, the lane in which they are located will be forced to have no flow for the whole simulation.
Fig. 2. Simulation map showing sensor location and aerial view of the same junction.
4 Trying Different Configurations to Find a Solution We tried different configurations of the traffic model to overcome the issues that emerged in Sect. 3. Firstly, we removed from the simulation input of Thursday 8th November 29 calibrators that were unable to follow the real measurements or that measured zero vehicles all day. Comparing the lists of not aligned calibrators in the regular simulation and in the simulation without the 29 excluded calibrators, we observed that 11 calibrators improve their performance while 12 calibrators perform worse. Interestingly, comparing the real measurements and the simulated counts at 20 of the 29 positions where sensors were removed, the time series of their virtual detectors followed the real measurements better. This means that the model can infer the vehicle counts at those positions even without the calibrators. However, this solution was not good enough, since 58 calibrators still appeared not aligned. For this reason, we tried another solution. We split the Monday 19th simulation into sub-simulations with a duration of 3 h each. This interval was chosen because routes longer than 3 h are unlikely in an urban context; moreover, the interval is long enough for calibrators to influence each other, but not so long that this influence produces fake jams in the network. A simulation of 24 h composed of eight simulations of 3 h was performed. The time series obtained from the sub-simulations were compared with the real measurements as described in Sect. 3. The number of not aligned calibrators drops sharply to 2.0% (5/241), all of them belonging to the 39 calibrators that do not follow the real measurements in any of the previous simulations. In Fig. 1 there is an
example of how the sub-simulation approach removes fake jams and improves the performance of two calibrators.
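The splitting of one simulated day into 3-hour sub-simulations can be sketched generically as below. This is a simplified illustration of the idea (grouping the per-minute sensor readings that feed each sub-simulation), not the TRAFAIR pipeline itself; the function name and data layout are assumptions.

```python
from datetime import datetime, timedelta

def split_into_windows(readings, hours=3):
    """Group time-stamped sensor readings into consecutive windows of
    `hours` hours; each group becomes the input of one sub-simulation."""
    windows = {}
    for timestamp, count in readings:
        start = timestamp.replace(minute=0, second=0, microsecond=0)
        start -= timedelta(hours=start.hour % hours)  # align to window boundary
        windows.setdefault(start, []).append((timestamp, count))
    return [windows[key] for key in sorted(windows)]
```

A full 24-hour day of readings yields at most eight windows, matching the eight 3-hour simulations described above; restarting the simulator at each window boundary discards the leftover vehicles that would otherwise accumulate into fake jams.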
5 Conclusion and Future Work Splitting the simulation into sub-simulations proved to be a good solution to ensure that the simulation follows the real measurements. The exclusion of sensors that always count zero vehicles is necessary to avoid errors caused by unreliable sensor input data. As a result, we produce a simulation capable of following the exact number of circulating vehicles at almost every point where there is a sensor, and of inferring vehicle counts where many sensors are present in the neighbouring area. To enhance the realism of the simulation, a good improvement could be to include information about average traffic flows, such as Origin–Destination matrices, to produce routes on roadways where no sensors are placed. Acknowledgments. This research has been supported by the TRAFAIR project 2017-EU-IA-0167, co-financed by the Connecting Europe Facility of the European Union. The contents of this publication are the sole responsibility of its authors and do not necessarily reflect the opinion of the European Union. The authors would like to thank in particular all the partners that contribute to the collection and management of traffic sensor data: the City of Modena and Lepida S.c.p.A.
References 1. Po, L., Rollo, F., Viqueira, J.R.R., Lado, R.T., Bigi, A., Lòpez, J.C., Paolucci, M., Nesi, P.: TRAFAIR: understanding traffic flow to improve air quality. In: 2019 IEEE International Smart Cities Conference (ISC2) (2019, to appear) 2. Drew, D.: Traffic Flow Theory and Control. McGraw-Hill, New York (1968) 3. Nasuha, N., Rohani, M.: Overview of application of traffic simulation model. In: MATEC Web of Conferences, vol. 150, p. 03006 (2018) 4. Krajzewicz, D., Hertkorn, G., Feld, C., Wagner, P.: An example of microscopic car models validation using the open source traffic simulation SUMO. In: Proceedings of the 14th European Simulation Symposium (ESS 2002), Dresden, October 2002, pp. 318–322. SCS European Publishing House, Dresden (2003) 5. Bachechi, C., Po, L.: Implementing an urban dynamic traffic model. In: IEEE/WIC/ACM International Conference on Web Intelligence, WI 2019, Thessaloniki, Greece, 14–17 October 2019 (2019, to appear) 6. Po, L., Bachechi, C., Rollo, F., Corni, A.: From sensors data to urban traffic flow analysis. In: 2019 IEEE International Smart Cities Conference (ISC2) (2019, to appear) 7. Bachechi, C., Po, L.: Traffic analysis in a smart city. In: Web4City, International IEEE/WIC/ACM Smart City Workshop: Web for Smart Cities - in Conjunction with IEEE/WIC/ACM International Conference on Web Intelligence, WI 2019, Thessaloniki, Greece, 14–17 October 2019 (2019, to appear)
Logistic Regression for Criteria Weight Elicitation in PROMETHEE-Based Ranking Methods Elia Balugani1(&), Francesco Lolli1, Maria Angela Butturi1, Alessio Ishizaka2, and Miguel Afonso Sellitto3
1 Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Via Amendola 2, Padiglione Morselli, 42122 Reggio Emilia, Italy {elia.balugani,francesco.lolli,mariaangela.butturi}@unimore.it
2 NEOMA Business School, Rue du Maréchal Juin 1, 76130 Mont-Saint-Aignan, France [email protected]
3 Universidade do Vale do Rio dos Sinos, São Leopoldo, RS 93022750, Brazil [email protected]
Abstract. For a PROMETHEE II method used to rank concurrent alternatives, both preference functions and weights are required; if the weights are unknown, they can be elicited by leveraging present or past partial rankings. If the known partial ranking is incorrect, the eliciting methods are ineffective. In this paper a logistic regression method for weight elicitation is proposed to tackle this scenario. An experiment is carried out to compare the performance of the logistic regression method against a state-of-the-art linear weight elicitation method, proving the validity of the proposed methodology. Keywords: MCDM · Elicitation · Criteria weights · Outranking · PROMETHEE · Machine learning · Logistic regression
1 Introduction Multicriteria decision making (MCDM) methods compare multi-dimensional alternatives, presented to a decision maker (DM), and rank them according to his/her preferences. MCDM methods use preference functions to measure the DM’s specific preferences, and to translate the alternatives into comparable units. The resulting preferences are then weighted to obtain a scalar value for each alternative. This scalar value can be directly used to rank the alternatives. Readers interested in the theoretical foundation of MCDM methods, alongside an established MCDM method, can refer to [1], while those interested in a recent application in the Life-Cycle Assessment (LCA) context can refer to [2]. The availability of credible weights can severely impact the performance of most MCDM methods [3], thus a series of weight-eliciting procedures have been designed [4]. This paper expands the methods proposed in [5] for weight elicitation in the PROMETHEE II context.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 474–479, 2020. https://doi.org/10.1007/978-3-030-39512-4_74
Important components required for the application of MCDM methods are often missing: both the preference functions and the weights might be unknown. In case the weights are unknown, information on the ranking of the alternatives can be used to elicit them. In the literature, specific weight-eliciting procedures have been designed for different MCDM methods: [6] propose an eliciting procedure for TOPSIS, [7] focus on a surrogate weighting procedure in PROMETHEE, and [8] revise Simos’ procedure for the ELECTRE method. From a broader perspective, [9] propose a posterior analysis using the popular Simple Additive Weighting (SAW) method, while [10] focus on multicriteria additive models. For the PROMETHEE method, if a partial ranking of present or past decisions is available, [5] propose various elicitation methods based on linear and convex constrained optimization. If the preference functions are also unknown, Robust Ordinal Regression (ROR) methods bypass the elicitation problem by providing all the results obtainable using preference functions that are in line with a known partial ranking. Interested readers can refer to the first ROR publication [11], which is a re-design of the UTA method [1] in a robust framework. The PROMETHEE method re-designed in a ROR framework can be found in [12], while the ROR version of ELECTRE has been designed by [13]. Machine learning (ML) algorithms, like logistic regression, are often used in conjunction with MCDM methods: [14] outline the similarities and differences between ML algorithms and MCDM methods, while [15] bridge the gap between ML and ROR methods. Recent applications include K-means for AHP [16] and decision tree algorithms for Data Envelopment Analysis (DEA) [17]. In this paper a logistic regression algorithm is used in the PROMETHEE weight elicitation context of [5] rather than a linear model.
The linear and logistic regression algorithms are experimentally compared in cases where the known partial ranking is incorrect. This paper is organized as follows: in Sect. 2 the proposed weight elicitation model is described; in Sect. 3 the experimental setting is outlined and its results are analysed; Sect. 4 concludes the paper with a summary of the key findings and suggestions for future research.
2 Logistic Regression Model Using the formalism presented in [5], if all the preferences between alternatives are known, they can be used to elicit the unknown weights by solving the optimization problem:

$$\max_{(w)} \; \sum_{\{(i,k)\,:\,a_i \tilde{\succ} a_k\}} \log\!\left(\frac{1}{1 + e^{-\sum_{j=1}^{n}\left((P_j)_{ik} - (P_j)_{ki}\right)(w)_j}}\right) \qquad (1)$$
s.t.

$$\sum_{j=1}^{n} (w)_j = 1 \qquad (2)$$

$$0 \le (w)_j \le 1, \quad \forall j \in \{1, \ldots, n\} \qquad (3)$$
(1) maximizes the log-likelihood of identifying preferred alternatives by linearly separating the preference space. Each alternative i preferred over an alternative k is a success event drawn from a Bernoulli distribution. The distribution is characterized by a success probability pik parametrized, in the preference space, by net flow differences through the inverse canonical link function. If the linear predictor for the inverse canonical link function is defined without an intercept its parameters are maximum likelihood estimators for the PROMETHEE II weights. Since no failure event is considered, the Bernoulli distribution log-likelihood simplifies to (1). This model is effective even if some of the known preferences are incorrect. In addition, the probabilistic interpretation of the weights allows the DM to identify the known preferences that are most likely to be faulty.
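A minimal numerical sketch of this elicitation model is given below. It is not the authors' implementation: it maximizes the log-likelihood (1) under the simplex constraints (2)–(3) by re-parametrizing the weights as a softmax and running plain gradient ascent, and the function name and data layout are assumptions.

```python
import numpy as np

def elicit_weights(D, steps=2000, lr=0.5):
    """D: (m, n) array; row t holds the per-criterion preference differences
    (P_j)_ik - (P_j)_ki for the t-th known preference a_i over a_k.
    Returns weights on the unit simplex maximizing the Bernoulli log-likelihood."""
    n = D.shape[1]
    theta = np.zeros(n)                          # w = softmax(theta) enforces (2)-(3)
    for _ in range(steps):
        w = np.exp(theta)
        w /= w.sum()
        p = 1.0 / (1.0 + np.exp(-(D @ w)))       # success probability per preference
        grad_w = D.T @ (1.0 - p)                 # gradient of the log-likelihood in w
        theta += lr * w * (grad_w - w @ grad_w)  # chain rule through the softmax
    w = np.exp(theta)
    return w / w.sum()
```

Because each recorded preference contributes a log-sigmoid term rather than a hard constraint, a few incorrect preferences lower the likelihood but never make the problem infeasible, which is the robustness property exploited in the experiments below.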
3 Experimental Setting and Results

3.1 Experimental Setting
The objective of the experiment was to compare the two models in scenarios where some of the known preferences are incorrect. The Linear Model is expected to outperform the Logistic Regression Model if all the known preferences are correct, while erroneous information is expected to favour the proposed model over the original one. The Linear Model is also expected to be unable to find a feasible solution if wildly incorrect inputs are provided. The dataset of alternatives and weights is the one used in [5], which contains 5-dimensional weights for 1000 DMs and their rankings of 100 different alternatives each. It is artificially expanded by permuting the rankings 1000 times, with each permutation being independent from the previous one and affecting all the rankings, leading to 1,000,000 permuted rankings. The preference function for each criterion j is linear in the entire interval $[\min (\Delta_j)_{ik}, \max (\Delta_j)_{ik}]$. For each DM l and permutation h, both the Linear Method and the Logistic Regression Method are applied, thereby obtaining two sets of elicited weights. Each set s of elicited weights is compared with the DM weights, thus obtaining a performance measure in the interval [0, 1]:
j¼1 ðweslh Þj ðwl Þj qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ffi performanceslh ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Pn Pn 2 2 ð we Þ ð w Þ slh l j¼1 j j¼1 j
ð4Þ
where $(w_l)_j$ is the DM’s weight for criterion j, and $(we_{slh})_j$ is the elicited weight for the same criterion.
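The performance measure (4) is the cosine similarity between the elicited and the true weight vectors; since all weights are non-negative, it indeed falls in [0, 1]. A direct transcription (function and argument names are ours):

```python
import math

def performance(elicited, true_weights):
    """Cosine similarity between an elicited weight vector and the DM's
    true weights, as in Eq. (4)."""
    dot = sum(we * w for we, w in zip(elicited, true_weights))
    norms = math.sqrt(sum(we * we for we in elicited)) * \
            math.sqrt(sum(w * w for w in true_weights))
    return dot / norms
```

A perfect elicitation gives 1.0, while orthogonal weight vectors give 0.0.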
For each permutation h and set s, the DMs’ performance measures are aggregated by estimating their expected value:

$$\hat{E}(\mathit{performance}_{sh}) = \frac{1}{1000}\sum_{l=1}^{1000} \mathit{performance}_{slh} \qquad (5)$$
Each permutation h is rated according to its distance from the unpermuted ranking:

$$per_h = \frac{\sum_{r=1}^{100} \sum_{r' > r} \mathbf{1}\{pos_{hr} > pos_{hr'}\}}{\binom{100}{2}} \qquad (6)$$

where $pos_{hr}$ is the permuted value in position r and $\binom{100}{2}$ is a normalization constant that constrains $per_h$ to the range [0, 1]. The rating of the unpermuted ranking is 0, while the rating of the reversed ranking is 1. The permutations are generated to achieve ratings that are uniformly distributed between 0 and 1. The two sets of $\hat{E}(\mathit{performance}_{sh})$ can be compared across different $per_h$ to gauge how different permutation ratings affect the models’ performance.

3.2 Results
Figure 1 depicts the obtained $\hat{E}(\mathit{performance}_{sh})$ for the two methods against the permutation ratings $per_h$. The expected values of the performance measure for the Linear Model are plotted with circles, while the Logistic Regression Model values are plotted with crosses.
Fig. 1. Expected value of the performance measure for the two models and permutation ratings.
According to Fig. 1, the Linear Model outperforms the Logistic Regression Model in the non-permuted scenario, while the Logistic Regression Model is superior for sizable values of $per_h$. When the permutation becomes severe ($per_h > 0.5345$),
the Linear Model is again the preferred option for weight elicitation, up to complete rank reversal. Both the Linear Model and the Logistic Regression Model were always able to find feasible solutions.
4 Conclusions Unexpectedly, the Logistic Regression Model does not always outperform the Linear Model when some of the known preferences are incorrect. The advantage of the Logistic Regression Model is limited to those instances where the permuted ranking is closer to the non-permuted ranking than to the reverse ranking, with cut-point $per_h = 0.5345$. In nearly all the analysed cases, the Linear Model achieves, without incurring infeasibility issues, a constant expected value of the performance measure of 0.8716, where chance alone would yield 0.7370. Leveraging the high dimensionality of the preference space, the Linear Model finds a single feasible solution and retains it for nearly all the permuted rankings, except for the reverse ranking, which carries its own below-chance solution. These results provide guidelines on when one method is preferable over the other, and prove that, when the correct method is selected, the elicited weights are closer to the real ones than chance would yield. Further research will use the Logistic Regression Model to identify faulty known preferences, leveraging the probabilistic interpretation of the weights described in Sect. 2. Other machine-learning algorithms (e.g. Support Vector Machines, Neural Networks) will be specialized into weight-eliciting models. These models are expected to account not only for incorrect known preferences but also for incorrect preference functions, discarding the linear separability assumption in the preference space.
References 1. Jacquet-Lagreze, E., Siskos, J.: Assessing a set of additive utility functions for multicriteria decision-making, the UTA method. Eur. J. Oper. Res. 10, 151–164 (1982). https://doi.org/10.1016/0377-2217(82)90155-2 2. Lolli, F., Ishizaka, A., Gamberini, R., Rimini, B., Balugani, E., Prandini, L.: Requalifying public buildings and utilities using a group decision support system. J. Clean. Prod. 164, 1081–1092 (2017). https://doi.org/10.1016/j.jclepro.2017.07.031 3. Mareschal, B.: Weight stability intervals in multicriteria decision aid. Eur. J. Oper. Res. 33, 54–64 (1988). https://doi.org/10.1016/0377-2217(88)90254-8 4. Riabacke, M., Danielson, M., Ekenberg, L.: State-of-the-art prescriptive criteria weight elicitation. Adv. Decis. Sci. 2012, 1–24 (2012). https://doi.org/10.1155/2012/276584 5. Lolli, F., Balugani, E., Ishizaka, A., Gamberini, R., Butturi, M.A., Marinello, S., Rimini, B.: On the elicitation of criteria weights in PROMETHEE-based ranking methods for a mobile application. Expert Syst. Appl. 120, 217–227 (2019). https://doi.org/10.1016/j.eswa.2018.11.030
6. Alemi-Ardakani, M., Milani, A.S., Yannacopoulos, S., Shokouhi, G.: On the effect of subjective, objective and combinative weighting in multiple criteria decision making: a case study on impact optimization of composites. Expert Syst. Appl. 46, 426–438 (2016). https://doi.org/10.1016/j.eswa.2015.11.003 7. de Almeida Filho, A.T., Clemente, T.R.N., Morais, D.C., de Almeida, A.T.: Preference modeling experiments with surrogate weighting procedures for the PROMETHEE method. Eur. J. Oper. Res. 264, 453–461 (2018) 8. Figueira, J.R., Roy, B.: Determining the weights of criteria in the ELECTRE type methods with a revised Simos’ procedure. Eur. J. Oper. Res. 139, 317–326 (2002). https://doi.org/10.1016/S0377-2217(01)00370-8 9. Kaliszewski, I., Podkopaev, D.: Simple additive weighting—a metamodel for multiple criteria decision analysis methods. Expert Syst. Appl. 54, 155–161 (2016). https://doi.org/10.1016/j.eswa.2016.01.042 10. de Almeida, A.T., de Almeida, J.A., Costa, A.P.C.S., de Almeida-Filho, A.T.: A new method for elicitation of criteria weights in additive models: flexible and interactive tradeoff. Eur. J. Oper. Res. 250, 179–191 (2016). https://doi.org/10.1016/j.ejor.2015.08.058 11. Greco, S., Mousseau, V., Słowiński, R.: Ordinal regression revisited: multiple criteria ranking using a set of additive value functions. Eur. J. Oper. Res. 191, 416–436 (2008). https://doi.org/10.1016/j.ejor.2007.08.013 12. Kadziński, M., Greco, S., Słowiński, R.: Extreme ranking analysis in robust ordinal regression. Omega 40, 488–501 (2012). https://doi.org/10.1016/j.omega.2011.09.003 13. Greco, S., Kadziński, M., Mousseau, V., Słowiński, R.: ELECTREGKMS: robust ordinal regression for outranking methods. Eur. J. Oper. Res. 214, 118–135 (2011). https://doi.org/10.1016/j.ejor.2011.03.045 14. Doumpos, M., Zopounidis, C.: Preference disaggregation and statistical learning for multicriteria decision support: a review. Eur. J. Oper. Res. 209, 203–214 (2011). https://doi.org/10.1016/j.ejor.2010.05.029 15. Corrente, S., Greco, S., Kadziński, M., Słowiński, R.: Robust ordinal regression in preference learning and ranking. Mach. Learn. 93, 381–422 (2013). https://doi.org/10.1007/s10994-013-5365-4 16. Ishizaka, A., Lolli, F., Gamberini, R., Rimini, B., Balugani, E.: AHP-K-GDSS: a new sorting method based on AHP for group decisions. In: Bruzzone, A.G., et al. (eds.) Proceedings of the International Conference on Modeling and Applied Simulation, pp. 1–5. CAL-TEK S.r.l., Barcellona (2017) 17. Ishizaka, A., Lolli, F., Balugani, E., Cavallieri, R., Gamberini, R.: DEASort: assigning items with data envelopment analysis in ABC classes. Int. J. Prod. Econ. 199, 7–15 (2018). https://doi.org/10.1016/j.ijpe.2018.02.007
3D CAD Design of Jewelry Accessories, Determination of Geometrical Features and Characteristics of the Used Material of Precious Metals Tihomir Dovramadjiev(&), Mariana Stoeva, Violeta Bozhikova, and Rozalina Dimova Departments of Industrial Design, Software and Internet Technologies and Telecommunications, Technical University of Varna, str. Studentska N1, 9010 Varna, Bulgaria [email protected]
Abstract. The development of precious metal jewelry requires very good management in terms of design, material cost, marketing and advertising, and market positioning. This includes many aspects that, in the right approach, build a complete system for an expensive business. With the development of modern technological means, it is possible to pre-fabricate jewelry models in a solid-modeling computer environment in which the amount of precious metal material used can be calculated. The purpose of this study is to develop a system that provides preliminary data on the geometric features of expensive jewelry models while at the same time calculating material costs. Keywords: 3D · Jewelry · SolidWorks · Mass properties · Precious metals
1 Introduction

Interdisciplinary scientific fields are of increasing importance in contemporary reality. This is directly related to the dynamics of our time, where we look for opportunities to optimize technological processes, improve efficiency, and reduce material costs and time. In engineering and technical design, this affects practically every aspect of the process. The application of Advanced Technologies in Design is required, where CIM (Computer-Integrated Manufacturing) systems are widely used [1, 2]. This study focuses on the initial stage of CAD (Computer-Aided Design), which is crucial in the hierarchical sequence [3–9]. Accuracy in computer-aided design underpins the correct calculation of the subsequent elements of the workflow. In the jewelry business, as in all areas where precious metals are applied, accurate costing of material is required [10, 11]. This is of great importance both financially and in optimizing the application of the material, which in some cases is extremely restrictive and difficult to access [12–17].

Namely: Advanced Technologies in Design is a discipline taught at the Technical University of Varna, characterized by the application of modern technological tools such as CIM systems and its sections, as well as other innovative applications.

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 480–485, 2020. https://doi.org/10.1007/978-3-030-39512-4_75
2 Materials and Methods

The main objective of the present work is to find an approach to optimizing the design process that covers two aspects: accuracy of the constructed solid geometry and mass calculation of the developed models. This can be realized in the SolidWorks software environment, one of the world leaders in CAD design. Based on the properly constructed 3D geometry of jewelry models (bracelets and rings of gold, silver and titanium), and through computer calculations with the capabilities of SolidWorks Mass Properties, we obtain a mathematically accurate estimate of the real mass of the expensive samples [18]. This gives a real opportunity for proper management of the production of the models, by monitoring all stages of the creation of the real jewelry pieces and making an accurate estimate in advance. The present study aims to show how much the smallest elements of the construction of the developed 3D models influence the result, by comparing their obtained mass values in table form. This matters because every particle of precious metal must be conserved: there should be no excess material, so finding a way to optimize the design process is crucial. Material financial impact data is based on bulk raw material prices computed from MetalPrices.com (2012; accessed May 15th – SolidWorks reference) [19]. Table 1 shows the SolidWorks data for the metals used in the studies.

Table 1. SolidWorks metal properties

Property                       Units      Pure gold   Pure silver  Titanium
Elastic modulus                N/m^2      7.8e+010    7.1e+010     1.1e+011
Poisson's ratio                –          0.42        0.37         0.3
Shear modulus                  N/m^2      2.6e+010    2.5e+010     4.3e+010
Mass density                   kg/m^3     19000       11000        4600
Tensile strength               N/m^2      103000000   125000000    235000000
Thermal expansion coefficient  /K         1.4e-005    2e-005       8.8e-006
Thermal conductivity           W/(m·K)    300         420          22
Specific heat                  J/(kg·K)   130         230          460
The application of the appropriate material (the selected metal) is possible after the three-dimensional solid geometry of the developed models has already been established. Figure 1 shows the process of building 3D models of bracelets and rings, in this case the variant with gold is referred to.
Fig. 1. SolidWorks 3D CAD system interface/3D design of bracelet and ring (have similar geometry, the difference is approximately 1/3 in scale) (scale 0.33): (a) overall dimensions of the bracelet: Outside D.: ∅ 65.5 mm, Inside D.: ∅ 63.5 mm 10 mm/Fillets: 2 R 0.8 mm and 2 R 0.2 mm; (b) obtained initial 3D geometry; (c) applying a decorative element using a readymade font vector/WWDesigns Font - free commercial license [20]; (d) construction of 3D decorative elements on the basic geometry of the model through WRAP/Deboss with a value of 0.3 mm and Cir Pattern: Axis 1/360 deg/4/Equal spacing.
Figure 2 shows two finished 3D models of the developed bracelet design with and without decorations respectively.
Fig. 2. Full 3D models of gold bracelets: (a) with decoration and (b) without decoration.
The available results of the mass properties calculation are: density, mass, volume, surface area, center of mass, and principal axes of inertia [18]. The moments of inertia and products of inertia are calculated to agree with the definitions in (1), and the inertia tensor matrix is assembled from them as in (2):

$$I_{xx} = \int (y^2 + z^2)\,dm, \qquad I_{yy} = \int (z^2 + x^2)\,dm, \qquad I_{zz} = \int (x^2 + y^2)\,dm$$
$$I_{xy} = \int xy\,dm, \qquad I_{yz} = \int yz\,dm, \qquad I_{zx} = \int zx\,dm \qquad (1)$$

$$\begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{xy} & I_{yy} & I_{yz} \\ I_{xz} & I_{yz} & I_{zz} \end{bmatrix} \qquad (2)$$

The obtained Mass Properties values, calculated using SolidWorks for bracelets and rings in pure gold, pure silver and titanium, with and without decorations, are shown in Tables 2 and 3.

Table 2. SolidWorks calculated mass properties of bracelets (pure gold, pure silver and titanium; with and without decorations)

                                   Pure Gold          Pure Silver        Titanium
Bracelet mass properties           With     Without   With     Without   With     Without
Density (grams per cubic mm)       0.02     0.02      0.01     0.01      0.0046   0.0046
Mass (grams)                       36.20    37.37     20.96    21.63     8.77     9.05
Volume (cubic millimeters)         1905.50  1966.68   1905.50  1966.68   1905.50  1966.68
Surface area (square millimeters)  4612.90  4281.83   4612.90  4281.83   4612.90  4281.83
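The definitions in (1) and the tensor assembly in (2) can be illustrated for a discrete set of point masses, where the integrals become sums. This is a sketch of the general definitions, not SolidWorks' internal implementation:

```python
def inertia_tensor(points, masses):
    """Inertia tensor of point masses about the origin.
    Discrete form of definitions (1): Ixx = sum m*(y^2 + z^2), Ixy = sum m*x*y, etc."""
    Ixx = Iyy = Izz = Ixy = Iyz = Izx = 0.0
    for (x, y, z), m in zip(points, masses):
        Ixx += m * (y * y + z * z)
        Iyy += m * (z * z + x * x)
        Izz += m * (x * x + y * y)
        Ixy += m * x * y
        Iyz += m * y * z
        Izx += m * z * x
    # Symmetric matrix of (2): rows [Ixx Ixy Ixz], [Ixy Iyy Iyz], [Ixz Iyz Izz]
    return [[Ixx, Ixy, Izx],
            [Ixy, Iyy, Iyz],
            [Izx, Iyz, Izz]]

# Unit mass at (1, 1, 0): Ixx = Iyy = 1, Izz = 2, Ixy = 1, other products 0
T = inertia_tensor([(1.0, 1.0, 0.0)], [1.0])
```

For a meshed solid, the same sums run over the mass elements of the tessellation, which is how CAD systems approximate the integrals.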
Table 3. SolidWorks calculated mass properties of rings (pure gold, pure silver and titanium; with and without decorations)

                                   Pure Gold          Pure Silver        Titanium
Ring mass properties               With     Without   With     Without   With     Without
Density (grams per cubic mm)       0.02     0.02      0.01     0.01      0.0046   0.0046
Mass (grams)                       1.30     1.34      0.75     0.78      0.31     0.33
Volume (cubic millimeters)         68.48    70.68     68.48    70.68     68.48    70.68
Surface area (square millimeters)  502.35   466.29    502.35   466.29    502.35   466.29
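The mass figures in Tables 2 and 3 follow directly from mass = density × volume, with the Table 1 densities converted from kg/m³ to g/mm³. A quick cross-check against the bracelet values:

```python
# Densities from Table 1, converted from kg/m^3 to g/mm^3 (divide by 1e6)
DENSITY_G_PER_MM3 = {"gold": 0.019, "silver": 0.011, "titanium": 0.0046}

def mass_grams(metal, volume_mm3):
    """Mass of a solid model from its volume (mass = density * volume)."""
    return DENSITY_G_PER_MM3[metal] * volume_mm3

# Decorated bracelet, volume 1905.50 mm^3 (Table 2)
print(round(mass_grams("gold", 1905.50), 2))    # Table 2 gives 36.20 g
print(round(mass_grams("silver", 1905.50), 2))  # Table 2 gives 20.96 g
```

The agreement confirms that the tabulated masses are pure density-times-volume results of the solid model, so any geometry change (such as the deboss decoration) propagates directly into the material cost estimate.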
Figure 3 shows a photorealistic rendering of part of the developed 3D models of bracelet and three rings.
Fig. 3. Photo-realistic image of part of 3D models of gold bracelet with decoration and three decorated rings (gold, silver and titanium) obtained by rendering in SolidWorks software.
3 Conclusion

The research carried out in this paper is wide-ranging. A complete process of three-dimensional solid design through the SolidWorks CAD system is presented. The calculation of the mass of precious metals used in jewelry is optimized by computer calculations. Specific decorative elements have been constructed on the basic geometry which, in addition to aesthetic beauty, provide data on the difference in mass between the similar models. This makes it possible to correctly estimate the consumption of material attributable to the additional elements, which can reduce the mass but increase the cost of making the real models. This is strictly specific to individual decisions. Calculations in the CAD Mass Properties environment give a new approach and vision for correct work with precious metals, forming a positive trend in the technological present and future.

Acknowledgments. This paper (result) is (partially) supported by the National Scientific Program “Information and Communication Technologies for a Single Digital Market in Science, Education and Security (ICTinSES)” (grant agreement DO1-205/23.11.18), financed by the Ministry of Education and Science.
References 1. Dovramadjiev, T.: Advanced Technologies in Design, p. 228. Technical University of Varna, Bulgaria (2017). ISBN: 978-954-20-0771-5 2. Sandeep, T.R.: Computer Integrated Manufacturing. ACE, Bangalore. http://www.alphace.ac.in/downloads/notes/me/10me61.pdf 3. Dassault Systèmes SolidWorks: 3D CAD Design Software. https://www.solidworks.com/ 4. Lombard, M.: SolidWorks 2013 Bible, 1st edn., 11 March 2013, p. 1296. Wiley (2013). ISBN-13: 978-1118508404, ISBN-10: 1118508408 5. Smirnov, A.A.: Three-dimensional geometric modeling [Trehmernoe geometricheskoe modelirovanie], pp. 1–40. MGTU, Moscow (2008) 6. Dassault Systèmes SolidWorks: Introducing SolidWorks. https://my.solidworks.com/solidworks/guide/SOLIDWORKS_Introduction_EN.pdf 7. Aliamovskii, A.A.: An engineering calculation in SolidWorks Simulation [Inzhenernyi raschet v SolidWorks Simulation], p. 462. DMK Press, Moscow (2010) 8. Putz, C., Schmitt, F.: Introduction to computer aided design—concept of a didactically founded course. J. Geom. Graph. 7(1), 111–120 (2003) 9. Lombard, M.: Mastering SolidWorks, 1st edn., p. 1248. Sybex, England (2018). ISBN-13: 978-1119300571, ISBN-10: 1119300576 10. Watkin, L.: Trends in the global jewelry industry. Polygon.net (2014). https://www.polygon.net/jwl/public/documents/resource-center/industry-reports/Trends-in-the-Global-Jewelry-Industry-PD.pdf 11. Stuller: The Basics of Jewelry/Terminology and Design Guide, USA. https://www.jmdjewelry.com/uploads/8/9/4/8/89486199/jewelry-basics.pdf 12. Sitko, J.: Analysis of selected technologies of precious metal recovery processes, MAPE 2019, vol. 2, no. 1, pp. 72–80. Sciendo, Warsaw (2019). https://doi.org/10.2478/mape-2019-0007 13. Aquino, S.: Recycling precious metals from mobile phones. M.B.A., University of Rochester (2018). https://doi.org/10.14288/1.0362564 14. Blasi, L.: Precious metal financial instrument. US Patent: US8583547B2 (2013) 15. Gierałtowska, U.: Direct and indirect investment in precious metals, pp. 125–137. Uniwersytet Szczeciński, Szczecin (2019). https://doi.org/10.17951/h.2016.50.4.125 16. Blatter, A.: Explosive joining of precious metals. Gold Bull. 31(3), 93–98 (1998). https://doi.org/10.1007/bf03214769 17. Grimwade, M.: Introduction to Precious Metals. Brynmorgen Press, Brunswick (2009). ISBN: 978-1-929565-30-6 18. Dassault Systèmes SolidWorks: Mass Properties Dialog Box. https://help.solidworks.com/2016/English/SolidWorks/sldworks/HIDD_MASSPROPERTY_TEXT_DLG.htm. Accessed 9 Oct 2019 19. MetalPrices.com. https://www.argusmedia.com/en/metals#Tables - SolidWorks reference. Accessed 15 May 2012 20. FontSpace: WWDesigns Font by WindWalker64. https://www.fontspace.com/windwalker64/wwdesigns. Accessed 9 Oct 2019
Discovering and Mapping LMS Course Usage Patterns to Learning Outcomes Darko Etinger(&) Faculty of Informatics, Juraj Dobrila University of Pula, Zagrebačka 30, 52100 Pula, Croatia [email protected]
Abstract. The widespread use of Learning Management Systems in higher education and the growing adoption of e-learning, distance, hybrid and blended learning put pressure both on students, to achieve learning goals, and on lecturers, to design high-quality online courses. Lecturers typically evaluate how much the students have achieved at the end of the course. This exploratory study attempts to uncover the relationship between usage behavior and students’ grades, i.e. which online course usage patterns are performed by higher-graded students in contrast to lower-graded ones. The core data of the analysis are the event logs extracted from the online course Modeling and Simulation at the Faculty of Informatics in Pula, mapped to the students’ grades accumulated from the final exams, assignments, projects and class tasks. Process mining techniques were used for process discovery and process model analysis. A set of procedures was developed (within the R programming environment) to analyze the discovered process models. The findings indicate that a better understanding of online course usage patterns and their relationship with learning outcomes can be used to develop intelligent systems (recommender systems, intelligent agents, intelligent personal assistants, etc.) that can improve students’ learning process.

Keywords: Usage patterns · Process mining · Process discovery · Learning management systems
1 Introduction

Learning management systems (LMS) are nowadays widely adopted in higher education institutions. Such systems are used to manage the learning experience and to enable teachers’ and students’ involvement in it. In a typical LMS, content in different formats (text, image, multimedia) is made available to students through courses. Communication and interaction between students and teachers is available through message boards, forums, videoconferences or external plugins that provide communication functionalities. Moodle, as a global development project designed to support a social constructionist framework of education, is a software package for producing web-based courses.

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 486–491, 2020. https://doi.org/10.1007/978-3-030-39512-4_76

The Moodle learning platform is provided by Srce (University Computing Center) as the LMS of choice at the Faculty of Informatics in Pula and offered to teachers as a complementary tool to design personalized
learning environments with the purpose of improving the teaching process and achieving students’ learning outcomes. A growing adoption of e-learning systems empowers educational institutions to develop high-quality courses for students. To design such courses, lecturers and support staff need to engage with, develop and implement learning objects that will meet students’ expectations and achieve the proposed learning outcomes. Lecturers typically evaluate how much the students have achieved at the end of the course, by using standardized metrics defined before the start of the course. This study attempts to uncover the relationship between Moodle usage behavior and students’ grades. By exploring the discovered process models using process mining techniques (applied to an educational setting, such analysis is part of educational process mining), the goal is to identify which online course usage patterns are performed by higher-graded students in contrast to lower-graded ones. The core data for the analysis are the event logs extracted from the online course Modeling and Simulation at the Faculty of Informatics in Pula, mapped to the students’ grades accumulated from the final exams, assignments, projects and class tasks. The rest of this paper is organized as follows: Sect. 2 describes the theoretical foundation for this study, discussing process mining techniques in the context of educational process mining. In Sect. 3, the research methodology is described and process mining results are discussed. Finally, the fourth section outlines the contributions and implications of the study results.
2 Related Work

Process mining combines data mining techniques and process modelling and analysis with Big Data, and provides comprehensive sets of tools for fact-based insights and for supporting process improvements [1]. It bridges the gap between model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques such as data mining and machine learning [2]. Educational process mining is an emerging field in the educational data mining discipline, concerned with discovering, analyzing, and improving educational processes as a whole, based on information hidden in educational datasets and event logs [3]. Van der Aalst [1] defines three distinct types of process mining techniques: process discovery, conformance checking, and process enhancement. Process discovery, as a combination of two perspectives, the discovery task and control-flow, enables the construction of a process model from an event log. In this way, the behavior seen in the log is captured. By using process discovery techniques, usage behavior and usage patterns of an LMS can be explored and analyzed, and knowledge about LMS usage can be extracted. The insight is based on real data stored in the event log. Each event contains an ID (the case ID), a time stamp, a description of the activity (activity ID), and additional resources. Recently, process mining techniques have gained traction because event data tend to be huge (easily falling into the Big Data category) and thus sometimes cannot be easily and successfully analyzed by traditional data mining tools.
Educational process mining (EPM) is concerned with developing methods to better understand students’ learning habits and the factors influencing their performance [4]. EPM enables the discovery, analysis, and creation of visual representations of complete educational processes [5]. The results of EPM can give insight into the underlying educational process; generate recommendations, advice and feedback for students, teachers and researchers; detect learning difficulties early; help students with specific learning disabilities; and improve the management of learning objects [4].
3 Research Methodology and Results

Learning outcomes define the expected knowledge and skills a student should acquire after completing a course. Assessments are closely related to learning outcomes, as they can provide information about students’ understanding of a topic, application of acquired skills to specific problems and, overall, mastery of the learning goals. Assessments such as final exams, assignments, projects and class tasks are all graded and provide a proxy measure of learning outcomes achievement. The event data contained in the Moodle LMS database is extracted from the online course Modeling and Simulation at the Faculty of Informatics in Pula. This course is part of the third-year undergraduate program in Computer Science. It covers one semester, a sufficient time frame for exploration of students’ usage behaviors. Cairns et al. [5] note that an event log is a hierarchically structured file with data about historical process executions. Such a file has to be constructed by structuring raw process data that can be found in files or databases (e.g., the Moodle LMS) into events and traces. An event contains a name, a specific timestamp associated with the event, the originator of the event and other attributes, and represents the most atomic part of a specific process execution. A trace is a collection of events that belong to the same process. An LMS logs user activities such as course subscriptions, the content accessed, attempted exams and corresponding scores, and interaction via chat or discussion boards. The analysis procedure consisted of collecting records from the Modeling and Simulation course in the Moodle database. The raw data was transformed into an event log. Event logs cover the time frame between October 1st, 2018 and June 19th, 2019. This time frame is relevant because it includes classes, projects, exams and assignments.
The original data was then reshaped into a consolidated log (XES format file), as a suitable shape for further processing with process mining algorithms. Each individual student case is associated with its final grade, thus making it possible to split the dataset into groups, based on the students’ grades. With this approach, it is possible to explore usage patterns corresponding to the students’ final grades, as the main research question is to find if different usage patterns between groups are present. The event log (as XES – eXtensible Event Stream format) was imported into Fluxicon Disco, the software package of choice for this process mining analysis. To extract relevant information about the process model, the filter for showing activities (50% of all activities based on absolute frequency) was applied. The groups were split by the final grades as follows: excellent – 89–100% achievement (10 cases), very good – 76–88.9% achievement (4 cases), good – 63–75.9% achievement (9 cases), and
sufficient – 50–62.9% achievement (10 cases). The process models identified consist of a total of 33 cases, 27 atomic activities, and 6499 events, each with its own timestamp. The fuzzy algorithm (process model discovery algorithm in Disco) was applied on the filtered event log for four groups, and process maps were generated and visualized. Event logs in the education domain, particularly those coming from e-learning environments, may contain massive amounts of fine granular events and process related data [5]. The discovered models are often “spaghetti-like” showing all details without distinguishing what is important and what is not [1]. To obtain a usable model, the model discovered had to fit well with the students’ behavior. Tax et al. [6] propose the Local Process Models (LPMs) allowing the mining of patterns positioned in-between simple patterns and end-to-end models, focusing on a subset of the process activities and describing frequent patterns of behavior. Figures 1, 2, 3 and 4 show the obtained process maps for each individual group examined in this study.
Fig. 1. Process map – 89–100% achievement (50% of the activities - absolute frequency)
Visually, the process models and maps discovered show a difference in usage patterns between groups. Several insights are provided for each group: the most frequent activities performed, the maximum repetitions performed by students and the overall importance of learning objects as perceived by the students. Those metrics were categorized and analyzed in the R programming environment. All groups (Figs. 1, 2, 3 and 4) put much emphasis on project assignments. Those cover specific skills including conceptual modeling, simulation program development within the R programming environment or the Python programming language, data analysis, visualization and interpretation of simulation results. The data show that the number of repetitions grows as the grade lowers, meaning that lower-graded students show difficulties in acquiring these skills. The group ‘excellent’ (Fig. 1) reads the literature the most, thus gaining theoretical knowledge and a better understanding of topics such as Discrete Event Simulation (DES), Systems Dynamics (SD) and Agent-Based Modeling (ABM), along with topics covering probability distribution selection for input data and the development of conceptual simulation models. The group ‘very good’ (Fig. 2) showed interest in Monte Carlo techniques, as a static method for simulations, but put a lower accent on other simulation techniques. The last two groups
Fig. 2. Process map – 76–88.9% achievement (50% of the activities - absolute frequency)
Fig. 3. Process map – 63–75.9% achievement (50% of the activities - absolute frequency)
Fig. 4. Process map – 50–62.9% achievement (50% of the activities - absolute frequency)
(Figs. 3 and 4) show similarities: underutilization of the provided learning objects and inconsistent usage of the LMS. The process maps for those groups look flat, meaning their learning path is not based on the course program, i.e. these students do not engage enough in the learning process.
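The grouping and frequency analysis described above can be sketched in a few lines. The grade thresholds below are the ones used in the study; the event tuples and activity names are hypothetical placeholders for the real Moodle records:

```python
from collections import Counter, defaultdict

def grade_group(achievement_pct):
    """Map a final achievement percentage to the study's four groups."""
    if achievement_pct >= 89: return "excellent"
    if achievement_pct >= 76: return "very good"
    if achievement_pct >= 63: return "good"
    if achievement_pct >= 50: return "sufficient"
    return "fail"

def activity_frequencies(event_log, grades):
    """Absolute activity frequencies per grade group.
    event_log: (case_id, timestamp, activity) tuples; grades: case_id -> percent."""
    freq = defaultdict(Counter)
    for case_id, _ts, activity in event_log:
        freq[grade_group(grades[case_id])][activity] += 1
    return freq

# Toy log; the real events were extracted from the Moodle database
log = [("s1", "2018-10-02T10:00", "Course module viewed"),
       ("s1", "2018-10-02T10:05", "Project submitted"),
       ("s2", "2018-10-03T09:00", "Course module viewed")]
freq = activity_frequencies(log, {"s1": 92.0, "s2": 55.0})
print(freq["sufficient"]["Course module viewed"])  # 1
```

Counting absolute frequencies per group is also the basis of the 50%-of-activities filter applied before process map generation in Disco.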
4 Conclusion Educational process mining aims to detect, monitor and improve real-life processes by extracting process-related knowledge from LMS event logs. The purpose of this study was to discover LMS usage behavioral patterns between different students’ groups based on their final achievement, by matching the cases in the event log with students’ grades, as proxies for learning outcomes. By applying a subset technique of process
mining called process discovery, process maps that describe the usage behavior based on real facts and evidence were obtained for each individual group. The findings indicate that a better understanding of online course usage patterns and their relationship with learning outcomes can be used to develop intelligent systems (recommender systems, intelligent agents, intelligent personal assistants, etc.) that can improve students’ learning process. As each student group follows a specific learning path when using the LMS, such intelligent systems could help them achieve the proposed learning outcomes.
References 1. Van der Aalst, W.M.P.: Trends in business process analysis: from verification to process mining. In: Proceedings of 9th International Conference Enterprise Information Systems (ICEIS 2007), pp. 12–22 (2006) 2. Mans, R.S., van der Aalst, W.M.P., Vanwersch, R.J.B.: Process mining. In: Process Mining in Healthcare: Evaluating and Exploiting Operational Healthcare Processes, pp. 17–26. Springer International Publishing, Heidelberg (2015) 3. Cairns, A.H., Gueni, B., Assu, J., Joubert, C., Khelifa, N.: Analyzing and improving educational process models using process mining techniques. In: IMMM 2015 Fifth International Conference on Advances in Information Mining Management, pp. 17–22 (2015) 4. Ariouat, H., Cairns, A.H., Barkaoui, K., Akoka, J., Khelifa, N.: A two-step clustering approach for improving educational process model discovery. In: 2016 IEEE 25th International Conference Enabling Technology: Infrastructure for Collaborative Enterprises, pp. 38–43 (2016) 5. Cairns, A.H., Gueni, B., Fhima, M., Cairns, A., David, S., Khelfa, N.: Process mining in the education domain. Int. J. Adv. Intell. Syst. 8, 219–232 (2015) 6. Tax, N., Sidorova, N., Haakma, R., van der Aalst, W.M.P.: Mining local process models. J. Innov. Digit. Ecosyst. 3, 183–196 (2016)
Drug Recommendation System for Geriatric Patients Based on Bayesian Networks and Evolutionary Computation Lourdes Montalvo(&) and Edwin Villanueva Pontifical Catholic University of Peru, Lima, Peru {montalvo.lourdes,ervillanueva}@pucp.edu.pe
Abstract. Geriatric people face health problems, mainly chronic diseases such as hypertension, diabetes and osteoarthritis, among others, which require continuous treatment. The prescription of multiple medications is a common practice in that population, which increases the risk of unwanted or dangerous drug interactions. The number of drugs is constantly growing, as are their interactions. It is therefore desirable to have support systems for physicians that digest all available data and warn of possible drug interactions. In this paper we propose a drug recommendation system that takes into account the pre-existing diseases of the geriatric patient, current symptoms and verification of drug interactions. A Bayesian network model of the patient was built to allow reasoning in situations of limited evidence about the patient. The system also uses a genetic algorithm, which seeks the best drug combination based on the available patient information. The system showed consistency in simulated settings, which were validated by a specialist.

Keywords: Drug recommendation · Drug-drug interaction · Genetic algorithm · Bayesian networks · Drug-target interaction
1 Introduction

The selection of appropriate pharmacotherapy for elderly people has been recognized as a challenging and complex process and has become an important public health issue [5]. One of the problems geriatric patients face is adverse drug reactions (ADRs). This problem is a major cause of hospital admissions, thereby leading to significant medical and economic problems [6]. In the US, drug-drug interactions (DDI) have been responsible for approximately 26% of adverse drug reactions (ADR) annually [7], affecting 50% of inpatients and causing nearly 74,000 emergency room visits and 195,000 hospitalizations. Multimorbidity, the coexistence of two or more chronic conditions, is increasing, and so are polypharmacy and ADR cases [2]. This increases the risks and costs of primary health care by causing serious health effects or reducing the therapeutic effect of some compounds [1, 2].

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 492–497, 2020. https://doi.org/10.1007/978-3-030-39512-4_77

Some authors have proposed intelligent systems to support physicians in the prescription task. In [8] the authors propose a drug-recommendation system for patients
with infectious diseases. The system classifies patients’ abilities to protect themselves from infectious diseases. The authors constructed a knowledge base that included more than 60 risk factors, and developed a web-based prototype system. In [9] the authors developed a medicine recommender system framework that incorporates data mining technologies to uncover potential knowledge hidden in medical records that can be used to decrease medical errors. Although these systems take into account the patient’s medical history, they are not aware of possible drug interactions, which means that the recommendations may not be safe. In [7] the authors propose an ADR-aware model that recommends safe drugs that can be taken together with the prescription. One limitation of this model is that the recommendation is built only on drug information. However, it is known that interactions do not only occur between drugs but also between drugs and diseases. In the present work, we propose a drug recommendation system for geriatric patients that takes into account pre-existing diseases, current symptoms and verification of drug-drug interactions. A model of the geriatric patient was built based on a Bayesian network, which allows inferences to be made when only some patient information is available. A genetic algorithm then seeks the best combination of medications for a geriatric patient with a diverse medical history.
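At its core, interaction verification reduces to checking every pair of drugs in a candidate prescription against a set of known interacting pairs. A minimal sketch of that check follows; the interaction pairs listed are illustrative placeholders, not entries taken from a real interaction database:

```python
# Hypothetical interaction pairs for illustration only; a real system
# would populate this set from a curated source such as DrugBank.
KNOWN_INTERACTIONS = {
    frozenset({"enalapril", "potassium chloride"}),
    frozenset({"warfarin", "aspirin"}),
}

def interacting_pairs(prescription):
    """Return all pairs of drugs in a prescription with a known interaction."""
    drugs = sorted(d.lower() for d in prescription)
    return [(a, b)
            for i, a in enumerate(drugs)
            for b in drugs[i + 1:]
            if frozenset({a, b}) in KNOWN_INTERACTIONS]

print(interacting_pairs(["Enalapril", "Aspirin", "Warfarin"]))
# [('aspirin', 'warfarin')]
```

Using unordered `frozenset` pairs makes the lookup symmetric, so the check does not depend on the order in which drugs appear in the prescription.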
2 Methodology

To develop our drug recommendation system, we first developed a Bayesian network model to encode the probabilistic interactions between common geriatric pathologies, information on pre-existing diseases, medications to be prescribed and the current symptoms of the patient. Secondly, we implemented an evolutionary algorithm to be the search engine for drug combinations that maximize the absence of symptoms in the geriatric patient, taking into account the patient’s safety regarding possible drug interactions. Finally, we implemented the recommendation system in a web platform that exposes the functionalities of the proposed algorithm. For the first activity, we reviewed the most frequent pathologies in geriatric patients. Then, the most frequent medications used to treat these diseases were sought. Articles from the WHO (World Health Organization) and [12], related to potentially inappropriate prescriptions in polymedicated elderly patients, were reviewed. It was found that, of a sample of 349 patients, 223 suffered from arterial hypertension, making it the most frequent pathology in this sample. For each selected pathology we carried out an investigation to determine which medications are the most frequent and effective according to medical evidence. To establish the relationship between one medication and another, it was necessary to review medical articles [13, 14]. We also obtained drug information from [10] and the DrugBank database [11], one of the most comprehensive sources of drug information and interactions. With the chosen pathologies and medications, a Bayesian network was implemented. The probabilities of prescribing medications for patients with a certain medical profile were obtained from textual medical evidence. The motivation for using Bayesian networks comes from their natural capability to handle uncertainty [4].
L. Montalvo and E. Villanueva
In our case, this uncertainty arises from the limited evidence that is normally available about the state of the patient and from the variability of the biological processes. For the construction of the Bayesian network, we defined a nomenclature for the random variables of the model to differentiate between pre-existing diseases (AN), current symptoms (SA) and medications to be prescribed (MR). Figure 1 shows the resulting Bayesian network model for cardiology-related diseases. The Bayesian networks were implemented in the Python programming language using the pgmpy library (pgmpy.org).
Fig. 1. BN model for cardiology diseases. The nodes at the bottom indicate the patient's current symptoms, the nodes in the middle represent the medications to be prescribed, and the nodes at the top represent the patient's pre-existing diseases.
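The AN → MR → SA inference pattern can be illustrated with a toy network. The node names and conditional probabilities below are hypothetical stand-ins, not values from the paper (the authors use pgmpy); this sketch only shows how a posterior over symptom absence is obtained by enumeration over a three-node chain:

```python
from itertools import product

# Hypothetical CPTs for a 3-node chain AN -> MR -> SA (illustrative values).
# Each variable is binary: 1 = present/prescribed, 0 = absent/not prescribed.
p_an = {1: 0.3, 0: 0.7}                      # P(AN): pre-existing disease
p_mr_given_an = {1: {1: 0.8, 0: 0.2},        # P(MR | AN): drug prescription
                 0: {1: 0.1, 0: 0.9}}
p_sa_given_mr = {1: {1: 0.2, 0: 0.8},        # P(SA | MR): symptom persists
                 0: {1: 0.6, 0: 0.4}}

def joint(an, mr, sa):
    # Chain factorization of the joint distribution.
    return p_an[an] * p_mr_given_an[an][mr] * p_sa_given_mr[mr][sa]

def posterior_sa_absent(evidence):
    """P(SA=0 | evidence) by brute-force enumeration over the joint."""
    num = den = 0.0
    for an, mr, sa in product((0, 1), repeat=3):
        w = joint(an, mr, sa)
        if all({'AN': an, 'MR': mr, 'SA': sa}[k] == v for k, v in evidence.items()):
            den += w
            if sa == 0:
                num += w
    return num / den

# Prescribing the drug raises the probability that the symptom is absent.
print(posterior_sa_absent({'MR': 1}))  # ≈ 0.8
print(posterior_sa_absent({'MR': 0}))  # ≈ 0.4
```

In pgmpy the same query would be posed with a `BayesianNetwork`, `TabularCPD` objects and `VariableElimination`; plain enumeration is used here only to keep the sketch self-contained.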
With the BN model constructed, we implemented a genetic algorithm to obtain the best combination of medications for a given patient. Genetic algorithms (GA) are algorithms inspired by the natural evolution process that seek to improve a population of candidate solutions (individuals) by performing crossover, mutation and selection operations. In our genetic algorithm, an individual represents a possible drug combination, coded as a boolean vector where each element indicates the presence or absence of a particular drug. The variables used in the algorithm are described in Table 1.

Table 1. Variables for the construction of the genetic algorithm

Variable   | Definition
Gene       | Medicine (e.g. "Enalapril", "Propranolol")
Allele     | State of the medicine: "Prescribed" (0) and "Not Prescribed" (1)
Chromosome | A candidate drug combination represented by a vector of alleles
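The chromosome encoding and the variation operators can be sketched as follows. The drug pool is an invented placeholder, not the paper's actual pool, and single-point crossover is assumed (the paper does not specify the crossover variant):

```python
import random

DRUGS = ["Enalapril", "Propranolol", "Chlorthalidone", "Amlodipine"]  # illustrative pool

def random_chromosome():
    # Boolean vector: one allele per drug in the pool.
    return [random.randint(0, 1) for _ in DRUGS]

def crossover(a, b):
    # Single-point crossover between two parent chromosomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, rate=0.25):
    # Flip each allele independently with the given mutation rate
    # (25% is the rate reported in the experiments section).
    return [1 - g if random.random() < rate else g for g in chrom]

parent1, parent2 = random_chromosome(), random_chromosome()
child1, child2 = crossover(parent1, parent2)
print(dict(zip(DRUGS, mutate(child1))))
```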
Drug Recommendation System for Geriatric Patients
To obtain the fitness of each individual in the GA, we use the BN model previously described as a simulator of the patient. Given some information about the patient, such as pre-existing diseases and symptoms, and the medications to be tested (encoded in the chromosome), the BN model is used to compute the posterior probabilities of all modeled symptoms. These probabilities are used to compute the fitness of the individual: the fitness function is designed to maximize the absence of the symptoms that the geriatric patient presents. Equation (1) depicts the fitness function, where PAX_i is the posterior probability of absence of symptom i given by the BN, and p_i is the weight expressing the importance of that symptom as assigned by the doctor. The weighted probabilities are summed and then normalized by the sum of the weights. Table 2 shows the parameters of the fitness function.

fitness = ( Σ_{i=0}^{n} PAX_i · p_i ) / ( Σ_{i=0}^{n} p_i )   (1)
Table 2. Parameters of the fitness function

Parameter                           | Detail
Patient's background                | A dictionary (key, value) containing the name of each pathology and its state (present, absent)
Medicines to be prescribed          | A dictionary (key, value) containing the name of each medicine and its state: "Prescribed" (0) or "Not Prescribed" (1)
Symptoms of the indicated pathology | All symptoms related to the indicated pathology must be passed
Weight adjustment of symptoms       | The doctor can rate the importance of each symptom; each weight is assigned a value between 0 and 1
Bayesian network                    | Four pathologies were modeled, chosen because they are the most frequent in elderly patients; a Bayesian network is proposed for each modeled pathology
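Equation (1) translates directly into code. In the sketch below, `posterior_absence` stands in for the BN query over the patient profile and chromosome, and the symptom names and weights are invented for illustration:

```python
def fitness(symptom_weights, posterior_absence):
    """Weighted, normalized absence-of-symptom score, as in Eq. (1).

    symptom_weights: dict symptom -> weight p_i in [0, 1], set by the doctor.
    posterior_absence: callable symptom -> PAX_i, the BN posterior
    probability that the symptom is absent given the patient data and the
    chromosome's medications.
    """
    total_weight = sum(symptom_weights.values())
    score = sum(posterior_absence(s) * p for s, p in symptom_weights.items())
    return score / total_weight

# Hypothetical example: one symptom weighted 1.0, another weighted 0.5.
weights = {"high_blood_pressure": 1.0, "dizziness": 0.5}
pax = {"high_blood_pressure": 0.9, "dizziness": 0.6}.__getitem__
print(round(fitness(weights, pax), 3))  # (0.9*1.0 + 0.6*0.5) / 1.5 = 0.8
```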
Fig. 2. Deployment diagram for the drug recommendation system, showing the PostgreSQL database, the Ruby on Rails web services and the Flask framework that hosts the algorithm.
For the development of the web system, we followed the component architecture presented in Fig. 2. The algorithm source code and models are available in the repository https://github.com/louMon/MedGeriatricThesis.
3 Experiments and Results
We performed several tests of the system to assess its effectiveness and efficiency. Figure 3 shows the typical evolution of the fitness function in the GA for the case of cardiology diseases. In this scenario we used a pool of 8 common medications and simulated a geriatric patient with a history of bradycardia and chronic constipation and with current high blood pressure. The genetic algorithm used 30 individuals, a mutation rate of 25% and 50 generations. We can observe that the system is able to find the best recommendation in about ten generations, with a solution fitness of 89%. The recommended drugs were positively validated by a specialist in cardiology.
Fig. 3. Execution of the medical recommendation for a geriatric patient with a history of bradycardia and chronic constipation who currently has high blood pressure; the current diagnosis is arterial hypertension with a weight of 100%. The execution resulted in the recommendation of the drugs Enalapril and Chlortalidone.
4 Conclusion
The proposed system showed consistency in the recommendations provided. The Bayesian network models were able to capture the probabilistic interactions between drugs, pre-existing diseases and symptoms, which is an advance over previous systems. As future research, we aim to enrich the patient simulator with a greater number of
variables (more pathologies, physical characteristics, vital signs, mood, among others) so that the model can improve the precision of its recommendations.
References
1. World Health Organization: Multimorbidity. Technical Series on Safer Primary Care (2016)
2. Cascorbi, I.: Drug interactions: principles, examples and clinical consequences (2012)
3. Nilashi, M., Bagherifard, K., Ibrahim, O., Alizadeh, H., Nojeem, L.A., Roozegar, N.: Engineering and technology - collaborative filtering recommender systems. Res. J. Appl. Sci. (2012)
4. Ankan, A., Panda, A.: Mastering Probabilistic Graphical Models Using Python. Packt Publishing (2015)
5. Genser, D.: Food and drug interaction: consequences for the nutrition/health status, Austria (2008). https://doi.org/10.1159/000115345
6. Köhler, G.I., Bode-Böger, S.M., Busse, R., Hoopmann, M., Welte, T., Böger, R.H.: Drug-drug interactions in medical patients: effects of in-hospital treatment and relation to multiple drug use (2000). https://doi.org/10.5414/cpp38504
7. Chiang, W.-H., Li, L., Shen, L., Ning, X.: Drug recommendation toward safe polypharmacy, United Kingdom (2018). https://doi.org/10.1145/nnnnnnn.nnnnnnn
8. Shimada, K., Takada, H., Mitsuyama, S., Ban, H., Matsuo, H., Otake, H., Kunishima, H., Kanemitsu, K., Kaku, M.: Drug recommendation system for patients with infectious diseases, Japan (2005)
9. Bao, Y., Jiang, X.: An intelligent medicine recommender system framework. In: IEEE 11th Conference on Industrial Electronics and Applications, China (2016)
10. Organización Panamericana de la Salud: Clasificación Estadística Internacional de Enfermedades y Problemas Relacionados con la Salud (2008). http://www.insn.gob.pe/sites/default/files/publicaciones/CIE-10-v.3.pdf
11. DrugBank. Canadian Institutes of Health Research. https://www.drugbank.ca
12. López-Sáez, A., Sáez-López, P.: Prescripción inadecuada de medicamentos en ancianos hospitalizados según criterios de Beers. Farmacia Hospitalaria (2012)
13. Akbar, S., Alorainy, M.S.: The current status of beta-blockers' use in the management of hypertension (2014)
14. Gil de Miguel, A., Jiménez García, R., Carrasco Garrido, P., Martínez González, J., Fernández González, I., Espejo Martínez, J.: Seguridad y efectividad de amlodipino en pacientes hipertensos no controlados farmacológicamente en el ámbito de Atención Primaria (2000)
Software for the Determination of the Time and the F Value in the Thermal Processing of Packaged Foods Using the Modified Ball Method
William Rolando Miranda Zamora, Manuel Jesus Sanchez Chero(&), and Jose Antonio Sanchez Chero
Universidad Nacional de Frontera, Sullana, Peru
[email protected], [email protected], [email protected]
Abstract. The software provides, in a simple and agile way, the time and F value of the thermal processing of packaged foods. The objective of this research was to develop software to perform thermal-processing calculations for foods based on a modified version of Ball's original formula method, motivated by the discrepancies already identified. To check the validity of the modified equation of Ball's original formula, both formulas were compared with the data tabulated by Ball, and programs were developed to evaluate the thermal process with the new modified equation. Differences ranging approximately between 0.03% and 4% with respect to the tabulated data were found. From the linear regression analysis of the different C values for the range of z values from 14 to 26 °F, the coefficients of determination are close to unity, so the model could be considered an alternative to the equation proposed almost a century ago.
Keywords: Ball equation · Ball tables · Formula method · C value
1 Introduction
The general method is a technique that relies on graphical or numerical integration of temperature-time data and is used to obtain the lethality or F0 value [1]. However, it has been the target of much criticism for two reasons: first, its limited predictive power, and second, its lack of effectiveness in providing accurate information regarding the thermal processing of food. This led to the emergence of other methods commonly known as "formula methods", which interpret experimental heating curves represented by a range of equations. The approach has its origins in the first formula method published by Charles Olin Ball in 1923 and is the basis for most later developments: process determination uses fh and jc values for heating and cooling, and a hyperbola to represent the curvilinear parts of the heat penetration curve, applicable for jc = 1.41 and cooling time tc = 1.41 fc [2-4]. Small modifications in the nomenclature of the equation or formula can be observed in later studies [5-7]. In the current study, other differences in setting up the constants could be determined, since a smaller margin of error with respect to the original Ball method was demonstrated: the exponential function in the heating contribution should be E1 instead of Ei, and the negative sign of the first and second terms is incorrect [8-10]. At the same time, authors searching for a method to improve Ball's formula method without the use of tables found that [5, 6] did not agree with the equation used for the development of those tables. Other formula methods were also compared by other authors [11-15].
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 498–502, 2020. https://doi.org/10.1007/978-3-030-39512-4_78
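For reference, the general method mentioned above integrates lethality numerically over a time-temperature history. A minimal sketch follows, using the trapezoidal rule with the conventional F0 reference values (Tref = 250 °F, z = 18 °F); the sample heat-penetration data are invented, and this is a generic illustration, not the authors' software:

```python
def f0_general_method(times_min, temps_f, t_ref=250.0, z=18.0):
    """Trapezoidal integration of the lethal rate 10**((T - Tref)/z)
    over a time-temperature history (general method)."""
    rates = [10 ** ((t - t_ref) / z) for t in temps_f]
    f0 = 0.0
    for i in range(1, len(times_min)):
        f0 += 0.5 * (rates[i - 1] + rates[i]) * (times_min[i] - times_min[i - 1])
    return f0

# Invented heat-penetration history (minutes, °F): heating, hold, cooling.
times = [0, 10, 20, 30, 40, 50]
temps = [150, 220, 245, 248, 240, 180]
print(round(f0_general_method(times, temps), 2))
```

Holding the product at exactly Tref yields one minute of lethality per minute of process time, which is a quick sanity check on the integration.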
2 Methodology
2.1 Calculation of the Values of the Coefficients of Ball's Equation and Incorporation of One Factor in the Second Coefficient
The solution involved the following steps: (1) Find the coordinates of the intersection point of the hyperbolic and logarithmic cooling sections. (2) Replace those coordinates in the cooling hyperbola equation to obtain the first relation between ac and bc. (3) Find the derivatives of the hyperbolic and logarithmic cooling equations. (4) Substitute the coordinates of the point of intersection in these two differential equations. (5) Derive the second relation between ac and bc by equating the differential equations. (6) Determine the expressions for the adjusted hyperbola constants from the relationships obtained in steps (2) and (5). Taking into consideration that the E value in Eq. (1) is always positive, a higher C value is needed than those calculated [14]. Therefore, after a comparative analysis with the C values according to several authors [5, 6], a value of ln 10 = 2.303 is incorporated into Ball's original formula:

C_cl = [ 0.33172 e^(−0.343 m / z_e) + (0.5833 z_e / m) e^(−0.300 m / z_e) − E ]   (1)

2.2 Generation of the New C Values and Software Development
The new C values are found from the coefficients calculated in the six steps described in Sect. 2.1 and from the incorporation of the value ln 10 indicated above. An equation was developed in Microsoft Excel © 2018 to generate the new C values. According to [4], the C value can be calculated from the following equation:

C = C_h + C_cl + C_c ln 10   (2)
where C_h, C_cl and C_c are the contributions to the C value of the heating phase and of the start and end of cooling, respectively. After comparing the C values, we proceeded to develop software to calculate the F0 value and the process time tB by applying the equation modified from Ball's original equation. This equation has been used in plants where thermal treatment or processing is carried out.
3 Case Study and Results
3.1 Calculation of the Values of the Coefficients of Ball's Equation and Incorporation of a Factor in the Second Coefficient
At the point of intersection, the slopes of the two cooling equations (the derivatives of the hyperbolic and logarithmic cooling equations) are equal. Thus, in order to obtain the second relation between ac and bc, the two differential equations may be equated:

a_c = 0.343 · ((P − 0.226732) / (0.453464 − P)) · (T_g − T_CW)   (3)

and

b_c = sqrt( 1 + 0.453464/P + 0.0514074/P² + 1/P² + 0.453464/P³ ) · f_c   (4)
In this model, the cooling curve first includes a hyperbolic section and then a logarithmic section. The equations of the cooling curves are the same as described in the Ball method, with the adjusted constants ac and bc, and the changeover time and temperature are calculated in the same way. After finding the ac and bc values, and incorporating the ln 10, Eq. (1) is rewritten as follows:

C_cl = [ (b_c / (a_c m)) sqrt( (a_c m + 0.343 m)² − (a_c m)² ) e^(−0.343 m / z_e) + (b_c ln 10 z_e / (a_c m)) e^(−a_c m / z_e) − E ]   (5)

3.2 Generation of the New C Values and Software Development
From the time-temperature data obtained for each thermocouple location in each food product, the initial heating temperature (TI) and the temperature of the autoclave or retort (TR) were obtained [16, 17]. From these, the Ball heat-penetration parameters fh and jh were calculated for a simple heating curve, and fh, fh2 and xbh (break time) for a broken heating curve [18], as well as the z value from the thermal death kinetic curve of the microorganism [19, 20]. The software makes it possible to evaluate the Ball time (tB) and the F value of the process [21-23]. Regarding the analysis of the tabulated C values versus the calculated C values, the average relative error oscillates between a minimum of 0.03% and a maximum of 4% for z values of 14, 18, 22 and 26 °F. The coefficients of determination (R²) were 0.9995 for z = 14 °F and z = 18 °F, and 0.9996 for z = 22 °F and z = 26 °F. These results are similar to those of the mathematical models of several authors [23, 24], where coefficients of determination close to one were also found. To calculate the F0 value and the process time tB using the equation modified from the original Ball equation, the software was developed in Visual Basic © 6.0. The inputs and outputs for a case study are shown in Table 1.
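The error metrics reported above (average relative error and R² of calculated versus tabulated C values) can be computed as follows; the arrays are placeholder values, not Ball's tabulated data:

```python
def mean_relative_error_pct(tabulated, calculated):
    # Average relative error of the calculated values, as a percentage.
    return 100 * sum(abs(c - t) / t for t, c in zip(tabulated, calculated)) / len(tabulated)

def r_squared(tabulated, calculated):
    # Coefficient of determination of calculated values against tabulated ones.
    mean_t = sum(tabulated) / len(tabulated)
    ss_res = sum((t - c) ** 2 for t, c in zip(tabulated, calculated))
    ss_tot = sum((t - mean_t) ** 2 for t in tabulated)
    return 1 - ss_res / ss_tot

# Placeholder C values for one z value (illustrative only).
tab = [0.97, 1.23, 1.61, 2.04, 2.55]
calc = [0.96, 1.24, 1.60, 2.06, 2.53]
print(mean_relative_error_pct(tab, calc), r_squared(tab, calc))
```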
Table 1. Inputs and outputs of the program in the case study.

Parameter    | Case 1 (output tB) | Case 2 (output F)
jh           | 1                  | 1.32
fh (min)     | 30                 | 20
fh2 (min)    | 6                  | 6
xbh (min)    | 6                  | 6
F (min)      | 6                  | –
tB (min)     | –                  | 80
TI (°F)      | 50                 | 72
TR (°F)      | 245                | 208
Tref (°F)    | 250                | 200
z (F°)       | 18                 | 16
m + g (F°)   | 180                | 130
Output       | tB = 23.42 min     | F = 210.02 min
4 Conclusions
A modification has been incorporated into the equation put forward 93 years ago by Ball. A program has been developed to calculate the new C values; when these are compared with the tabulated C values, relative errors ranging between 0.03% and 4% are found. The coefficients of determination of the regression analysis of the C values, for the range of z values from 14 to 26 °F, are close to unity, so the model could be considered a good alternative to the equation proposed by Ball. Validation of the published Ball tables against their equations has been shown, and an equation has been used to establish the tables based on this analysis. The software developed to evaluate the thermal process is extremely useful for engineers, technologists and food scientists (in research and education).
References 1. Bigelow, W.D., Bohart, G.S., Richardson, A.C., Ball, C.O.: Heat penetration in processing canned foods. Bulletin No. 16L. National Canners’ Association, Washington, DC (1920) 2. Holdsworth, S., Simpson, R.: Thermal Processing of Packaged Foods, 2nd edn. Springer, New York (2007) 3. Holdsworth, S., Simpson, R.: Thermal Processing of Packaged Foods, 3rd edn. Springer, New York (2016) 4. Miranda–Zamora, W.R., Bazán, J.F., Ludeña, A.L., Tapia, D.A.: Herramientas computacionales aplicadas a la evaluación de tratamientos térmicos de los alimentos envasados usando el método de Ball. Universidad Nacional de Piura, Piura–Perú (2010) 5. Ball, C.O.: Thermal process time for canned foods. Bulletin No. 37. National Research Council, Washington, DC (1923) 6. Ball, C.O., Olson, F.C.W.: Sterilization in Food Technology–Theory, Practice and Calculations. McGraw–Hill, New York (1957) 7. Goldblith, S.A., Joslyn, M.A., Nickerson, J.T.R. (eds.): Introduction to Thermal Processing of Foods. The AVI Pub. Co., Inc., Westport (1961)
8. Stoforos, N.G.: On Ball's formula method for thermal process calculations. J. Food Process Eng. 13(4), 255–268 (1991)
9. Merson, R.L., Singh, R.P., Carroad, P.A.: An evaluation of Ball's formula method of thermal process calculations. Food Technol. 32(3), 66–72, 75 (1978)
10. Steele, R.J., Board, P.W.: Amendments of Ball's formula method for calculating the lethal value of thermal processes. J. Food Sci. 44, 292–293 (1979)
11. Stumbo, C.R., Longley, R.E.: New parameters for process calculations. Food Technol. 20, 341–345 (1966)
12. Hayakawa, K.: Experimental formulas for accurate estimation of transient temperature of food and their application to thermal process evaluation. Food Technol. 24(12), 1407–1418 (1970)
13. Smith, T., Tung, M.A.: Comparison of formula methods for calculating thermal process lethality. J. Food Sci. 47(2), 626–630 (1982)
14. Steele, R.J., Board, P.W., Best, D.J., Willcox, M.E.: Revision of the formula method tables for thermal process evaluation. J. Food Sci. 44, 954–957 (1979)
15. Miranda-Zamora, W.R., Teixeira, A.A.: Principios matemáticos del proceso térmico de alimentos. AMV (Antonio Madrid Vicente) Ediciones, Madrid, España (2012)
16. Etzel, M.R., Willmore, P., Ingham, B.H.: Heat penetration and thermocouple location in home canning. Food Sci. Nutr. 3(1), 25–31 (2015)
17. Mohamed, I.O.: Determination of cold spot location for conduction-heated canned foods using an inverse approach. Int. J. Food Process. Technol. 2, 10–17 (2015)
18. Thakur, R.S., Rai, D.C.: Heat penetration characteristics and physico-chemical properties of retort processed shelf stable ready to eat palak paneer. Int. J. Chem. Stud. 6(4), 949–954 (2018)
19. Ammu, D., Mohan, C.O., Panda, S.K., Ravishankar, C.N., Gopal, T.K.S.: Process optimisation for ready to eat tapioca (Manihot esculenta crantz) in high impact polypropylene containers. J. Root Crops 43(1), 104–110 (2017)
20. Ammu, D., Mohan, C.O., Panda, S.K., Ravishankar, C.N., Gopal, T.K.S.: Process optimization for ready to eat Indian mackerel (Rastrelliger kanagurta) curry in high impact polypropylene (HIPP) containers using still water spray retort. Indian J. Fish. 64(2), 83–89 (2017)
21. Condón-Abanto, S., Raso, J., Arroyo, C., Lyng, J., Álvarez, I.: Quality-based thermokinetic optimization of ready-to-eat whole edible crab (Cancer pagurus) pasteurisation treatments. Food Bioprocess Technol. 12, 436 (2019)
22. Mugale, R., Patange, S.B., Joshi, V.R., Kulkarni, G.N., Shirdhankar, M.M.: Heat penetration characteristics and shelf life of ready to serve eel curry in retort pouch. Int. J. Curr. Microbiol. Appl. Sci. 7(2), 89–100 (2018)
23. Friso, D.: A mathematical solution for food thermal process design. Appl. Math. Sci. 9(5–8), 255–270 (2015)
24. Friso, D.: A new mathematical model for food thermal process prediction. Model. Simul. Eng. 2013, 1–8 (2013)
Communication Protocol Between Humans and Bank Server Secure Against Man-in-the-Browser Attacks
Koki Mukaihira1, Yasuyoshi Jinno1, Takashi Tsuchiya1, Tetsushi Ohki1, Kenta Takahashi2, Wakaha Ogata3, and Masakatsu Nishigaki1(&)
1 Shizuoka University, 3-5-1 Johoku, Naka, Hamamatsu, Shizuoka 432-8011, Japan [email protected]
2 Hitachi Ltd., 292 Yoshida, Totsuka, Yokohama, Kanagawa 244-0817, Japan
3 Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro, Tokyo 152-8550, Japan
Abstract. In recent years, illegal money transfers using man-in-the-browser (MITB) attacks with malware in internet banking have become a social problem. This study focuses on transaction-tampering MITB attacks and considers countermeasures against them. Existing countermeasures require a device, such as a token device, other than the device used to carry out the actual money-transfer operation, in order to ensure a secure route separate from the one carrying out the money transfer. However, as MITB attackers become more sophisticated, malware will eventually be able to infect all of a user's devices, and the multiple-route approach will become ineffective against this type of attack. Therefore, we constructed a human-to-machine communication protocol that requires only one device per user and is secure against transaction-tampering MITB attacks, even if the device is taken over by malware.
Keywords: Man-in-the-browser attack · CAPTCHA · Security proof · Internet banking
1 Introduction Illegal money transfer using man-in-the-browser (MITB) attacks with malware has recently become a social problem in internet banking. Depending on the attack scenario, MITB attacks are classified into two types: ID stealing and transaction tampering [1]. Herein, we focus on transaction-tampering MITB attacks and consider countermeasures to combat them. A transaction-tampering MITB attack is an attack in which malware sends a money-transfer request different from that specified by a user (e.g., payee, amount of money) to force the server to accept an altered transaction request. As confirmation information sent back from the server is also forged by malware, the user is unable to notice the change. The existing countermeasures [2, 3] have approached this by utilizing a device, such as a token device, other than that used to carry out the © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 503–509, 2020. https://doi.org/10.1007/978-3-030-39512-4_79
actual money-transfer operation to ensure a secure route different from the route carrying out the actual money-transfer operation. However, with the advance of defensive technology, attackers have diversified their attack technologies and techniques. Ultimately, as long as machine-to-machine (M2M) communication is carried out, defense may lag behind sophisticated attackers, and it will be difficult to implement measures against such attacks. This makes it necessary to have humans at the end points of the communication. Conventional cryptographic techniques cannot be used to realize secure human-to-machine (H2M) communication. Encryption exploits the computational-difficulty gap of a one-way function, for which function evaluation is easy but inversion is difficult. Humans, however, have only low computational power: even the "easy" function evaluation is still difficult for them, so such gaps cannot be used in H2M communication. Instead, humans have high cognitive abilities that machines do not, that is, there is a cognitive gap between humans and machines (or malware). This gap can be used to realize secure H2M communication: the secure H2M channel is constructed from a problem that is easy for humans to solve but difficult for machines. In this study, the completely automated public Turing test to tell computers and humans apart (CAPTCHA) is used as an example of such a problem. The contributions of this study are as follows. (1) A security concept, indistinguishability against chosen CAPTCHA problems attack (IND-C-CCA) security for CAPTCHA, is defined as an analogy of the well-known indistinguishability against chosen ciphertext attack (IND-CCA) security for public-key encryption. It can be used as one of the indices for the construction of CAPTCHAs.
(2) A system model of the money-transfer request protocol and its security against transaction-tampering MITB attacks are defined. These can be used as indices for the construction of money-transfer request protocols. (3) A CAPTCHA-based money-transfer request protocol is proposed. The proposed protocol is secure against MITB attacks, provided that the underlying CAPTCHA scheme has IND-C-CCA security. These results illustrate that, as long as a CAPTCHA is constructed satisfying the security constraints defined in (1), a money-transfer request protocol having the security defined in (2) can be realized¹.
2 Model of CAPTCHA
2.1 Formulation
CAPTCHA [4] is a security technology that asks a user to solve a problem that is easy for humans but difficult for machines, and then identifies the user as a human or a machine. In this paper, a CAPTCHA is defined as a pair consisting of a CAPTCHA encryption algorithm C_Enc and a CAPTCHA decryption function C_Dec.
1 This study was originally published in Japanese in the IPSJ (Information Processing Society of Japan) Journal, Vol. 60, No. 12, pp. 2147–2156, December 2019.
CAPTCHA encryption algorithm C_Enc(m): a probabilistic polynomial-time (PPT) algorithm that takes as input a solution m ∈ M_k (k ∈ ℕ, M_k ⊆ {0,1}^k) and outputs a problem c ∈ C_k (C_k ⊆ {0,1}^{l(k)}) corresponding to m, where l(k) is a polynomial in k and {0,1}^{l(k)} denotes the set of bit sequences of length l(k). Moreover, we assume that C_Enc is an injective function.
CAPTCHA decryption function C_Dec(c): a mapping that maps a problem c = C_Enc(m) ∈ C_k to its solution m (C_k → M_k). Moreover, we assume that humans with high cognitive ability can compute C_Dec in real time, but that it is difficult for machines to compute C_Dec in real time.
2.2 Security Definition for CAPTCHA: IND-C-CCA Security
In this study, we consider IND-C-CCA security, which is similar in concept to IND-CCA security in public-key cryptography. We define the IND-C-CCA game played by an attacker B and a challenger as follows. (1) B randomly chooses two solutions m_0 and m_1 (m_0, m_1 ∈ M_k) and sends them to the challenger. (2) The challenger randomly chooses m_0 or m_1, sets it to m_b, encrypts it into a CAPTCHA problem c, and sends c to B as a challenge. (3) B outputs b̂. If b̂ = b, the attacker wins. In this game, B can use the human oracle H at any time. The human oracle is an extension of the random oracle and is widely used to cryptographically model CAPTCHA [5]. In the human oracle model, B can query the oracle for a solution to any CAPTCHA problem, except for the challenge c received in (2). The advantage of B in the IND-C-CCA game is defined as follows:

Adv_B^{IND-C-CCA} = | Pr[B wins] − 1/2 |

A CAPTCHA has IND-C-CCA security if Adv_B^{IND-C-CCA} is negligible for any B.
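The game above can be made concrete with a toy simulation. Everything in this sketch is illustrative: a real C_Enc emits an image whose inversion is hard for machines, whereas the hex encoding below is trivially invertible and only serves to drive the game logic:

```python
import secrets

def c_enc(m: str) -> str:
    """Hypothetical stand-in for C_Enc: an injective encoding of a solution."""
    return "captcha:" + m.encode().hex()

def c_dec(c: str) -> str:
    """Stand-in for C_Dec / the human oracle H: inverts c_enc."""
    return bytes.fromhex(c.removeprefix("captcha:")).decode()

class MachineAttacker:
    """An attacker that cannot evaluate C_Dec on the challenge: it can only
    guess at random, so it wins with probability about 1/2."""
    def choose_solutions(self):
        self.m0, self.m1 = "7f3k2", "q9x1z"
        return self.m0, self.m1
    def guess(self, challenge: str) -> int:
        return secrets.randbelow(2)

class OracleAssistedAttacker(MachineAttacker):
    """If C_Dec could be evaluated on the challenge, the game would be won
    trivially; this is exactly why the game forbids querying the human
    oracle on the challenge c."""
    def guess(self, challenge: str) -> int:
        return 0 if c_dec(challenge) == self.m0 else 1

def ind_c_cca_round(attacker) -> bool:
    """One round: the challenger picks b, encrypts m_b, the attacker guesses b."""
    m0, m1 = attacker.choose_solutions()
    b = secrets.randbelow(2)
    b_hat = attacker.guess(c_enc([m0, m1][b]))
    return b_hat == b

trials = 10_000
wins = sum(ind_c_cca_round(MachineAttacker()) for _ in range(trials))
print(f"machine attacker's empirical advantage: {abs(wins / trials - 0.5):.3f}")
```

The machine attacker's empirical advantage stays near zero, while the oracle-assisted attacker wins every round, illustrating the cognitive gap the definition relies on.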
3 Model of Money-Transfer Request Protocol Using CAPTCHA
3.1 Model of Money-Transfer Request Protocol
We consider the following model of the money-transfer request protocol using CAPTCHA. A user (a human) S sends the money-transfer information x_S to the server R via the browser; the user and the server communicate through the browser. All processing performed by the user is limited to solving CAPTCHAs. Based on the information received through the protocol, the server outputs x_R, or outputs ⊥, meaning "stop money transfer." In the following, S^{H(·)} denotes the user with high cognitive ability. Completeness and security against MITB attacks are defined as essential requirements of this protocol.
Completeness: The protocol satisfies completeness if, when S sends x_S first and S and R carry out the protocol, R outputs x_R = x_S with probability 1 − ε, where ε is a negligible value.
As security against MITB attacks, we consider the SUB-MIM security described in the next subsection.
3.2 Definition of SUB-MIM Security
Substitution man-in-the-middle (SUB-MIM) security is defined by the SUB-MIM game, which consists of a learning phase and an attack phase (Fig. 1). The attacker A is a PPT algorithm that can perform the money-transfer request protocol by employing a human with high cognitive ability, S^{H(·)}; the use of S^{H(·)} by A is denoted A^{S^{H(·)}}. Because this paper targets MITB attacks that are fully automated by malware, A cannot use H(·) directly in the learning phase or the attack phase, but only via S.
– Learning phase: S^{H(·)} and A run the protocol n times to obtain the transaction p. Let x_{(S,i)} be S^{H(·)}'s input in the ith protocol execution.
– Attack phase: S^{H(·)} and R run the protocol, but A acts as a man in the middle. Upon receiving x_S from S^{H(·)}, A arbitrarily tampers with x_S and sends it to R as x'_S. A executes the protocol using S^{H(·)} and R, and finally R outputs x_R or ⊥.
The win condition of A in the SUB-MIM game is to make R accept an x_R that is different from the input x_S of S^{H(·)}; thus, A wins if R outputs x_R (≠ x_S). The advantage of A in the SUB-MIM game is defined as follows:

Adv_A^{SUB-MIM} = | Pr[A wins] |

The protocol has SUB-MIM security if Adv_A^{SUB-MIM} is negligible for any A.
4 Proposed Protocol and Security Proof
4.1 Proposed Protocol
The proposed protocol is depicted in Fig. 2, where Capt = (C_Enc, C_Dec) is a CAPTCHA scheme and R is a set of random numbers of sufficiently large size.
4.2 Security Proof
We prove the following theorem as a security proof for the proposed protocol.
Fig. 1. The SUB-MIM game: a learning phase run n times, followed by the attack phase.
Theorem. If the CAPTCHA scheme Capt has IND-C-CCA security, the proposed protocol using Capt has SUB-MIM security.
To prove this theorem, we assume that there is an algorithm A that wins the SUB-MIM game against the proposed protocol with non-negligible probability, and show that there is then an algorithm B^A that wins the IND-C-CCA game with non-negligible probability. B^A denotes the use of A by B and is constructed as shown in Fig. 3. From the definition of the IND-C-CCA game, B^A can use the human oracle H: for a queried CAPTCHA problem c, H outputs the solution m = C_Dec(c). In (1), B runs the protocol by itself to obtain p, simulating S^{H(·)} by using its human oracle n times. The environment of A simulated by B is thus equal to that of A in the SUB-MIM game. The win condition of A in the SUB-MIM game is to output, as Q′ in (12), the value r_b corresponding to the bit b selected by the challenger in (6), out of r_0 and r_1 selected by B in (4). When A wins the SUB-MIM game, B learns the b selected by the challenger and sends it to the challenger as b̂ in (14), winning the IND-C-CCA game. Therefore, B wins the IND-C-CCA game whenever A wins the SUB-MIM game. Next, we describe the conditions under which B wins the IND-C-CCA game when A loses the SUB-MIM game. There are two cases where A loses: the case where Q′ is not r_b but r_{1−b}, and the case where Q′ is neither r_b nor r_{1−b}. If Q′ is not r_b but r_{1−b}, B cannot detect this and has no choice but to select b̂ = 1 − b in
508
K. Mukaihira et al.
(13); as a result, it loses the IND-C-CCA game. On the other hand, if Q' is neither r_b nor r_{1−b}, then B, knowing r_0 and r_1, can tell that Q' is an incorrect value. In that case B chooses b̂ randomly by coin toss and outputs it, so B wins the IND-C-CCA game with probability 1/2. Thus, if A loses the SUB-MIM game, B wins the IND-C-CCA game if Q' ≠ r_{1−b} and B wins the coin toss. Let ε_A be the probability that A wins the SUB-MIM game, and Pr[B wins] the probability that B wins the IND-C-CCA game.
Fig. 2. Proposed protocol
\[
\begin{aligned}
\Pr[B \text{ wins}] &= \Pr[A \text{ wins}] + \Pr[A \text{ loses} \wedge Q' \neq r_{1-b} \wedge B \text{ wins by coin toss}] \\
&= \varepsilon_A + \Pr[A \text{ loses}] \cdot \Pr[Q' \neq r_{1-b} \mid A \text{ loses}] \cdot \Pr[B \text{ wins by coin toss} \mid A \text{ loses} \wedge Q' \neq r_{1-b}] \\
&= \varepsilon_A + (1 - \varepsilon_A) \cdot \frac{|R|-2}{|R|-1} \cdot \frac{1}{2} \\
&= \varepsilon_A \, \frac{|R|}{2(|R|-1)} + \frac{|R|-2}{2(|R|-1)}
\end{aligned}
\]

\[
\mathrm{Adv}^{\text{IND-C-CCA}}_{B} = \Pr[B \text{ wins}] - \frac{1}{2} = \varepsilon_A \, \frac{|R|}{2(|R|-1)} - \frac{1}{2(|R|-1)}
\]
Therefore, if the domain of random numbers |R| is large enough and the advantage ε_A = Adv^{SUB-MIM}_A of A in the SUB-MIM game is not negligible, then the advantage Adv^{IND-C-CCA}_B of B in the IND-C-CCA game is also not negligible. Taking the contrapositive, the theorem is proved.
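The closed form of B's advantage can be sanity-checked with exact rational arithmetic. This is an illustrative aid, not part of the paper; the sample values of ε_A and |R| are arbitrary:

```python
from fractions import Fraction

def adv_b(eps_a, R):
    """Advantage of B per the reduction:
    Pr[B wins] = eps_a + (1 - eps_a) * ((R - 2)/(R - 1)) * 1/2,
    Adv_B      = Pr[B wins] - 1/2 = eps_a * R/(2(R-1)) - 1/(2(R-1))."""
    pr_b_wins = eps_a + (1 - eps_a) * Fraction(R - 2, R - 1) * Fraction(1, 2)
    closed_form = eps_a * Fraction(R, 2 * (R - 1)) - Fraction(1, 2 * (R - 1))
    assert pr_b_wins - Fraction(1, 2) == closed_form  # both derivations agree
    return closed_form

# A non-negligible eps_a with large |R| gives a non-negligible positive advantage;
# eps_a = 0 leaves only the tiny term -1/(2(|R|-1)).
print(adv_b(Fraction(1, 4), 1000) > 0, adv_b(Fraction(0), 1000))  # → True -1/1998
```

The internal assertion confirms that the step-by-step probability and the closed form in the proof are algebraically identical for any ε_A.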
Fig. 3. Construction of B^A
5 Conclusion

This study proposed an H2M secure communication protocol capable of maintaining security against transaction-tampering MITB attacks. It was constructed based on CAPTCHA as an example of a problem that is easy for humans to solve correctly but difficult for machines. Moreover, CAPTCHA was formulated as a cryptographic model and its security notion (IND-C-CCA) was defined. In addition, SUB-MIM security was defined as security against transaction-tampering MITB attacks. Our money-transfer request protocol has SUB-MIM security if the CAPTCHA scheme used in the protocol has IND-C-CCA security.
References

1. Suzuki, M., Nakagawa, Y., Kobara, K.: Assessing the safety of “Transaction Authentication” against Man-in-the-Browser attacks in Internet banking. Financ. Res. 32(3), 51–76 (2013)
2. Weigold, T., Kramp, T., Hermann, R., Höring, F., Buhler, P., Baentsch, M.: The Zurich Trusted Information Channel - an efficient defence against man-in-the-middle and malicious software attacks. In: Trusted Computing - Challenges and Applications, pp. 75–91 (2008)
3. Negi, T., Mori, T., Hirano, T., Koseki, Y., Matsuda, N., Kawaguchi, K., Yoneda, T.: Approach for the method of transaction signing utilizing smartphone with Secure SIM, vol. 2015-CSEC-71, no. 11, pp. 1–8 (2015)
4. The Official CAPTCHA Site. http://www.captcha.net. Accessed 07 Sept 2019
5. Kumarasubramanian, A., Ostrovsky, R., Pandey, O., Wadia, A.: Cryptography using captcha puzzles. In: Public-Key Cryptography, pp. 89–106. Springer (2013)
Development of a Solution Model for Timetabling Problems Through a Binary Integer Linear Programming Approach Juan Manuel Maldonado-Matute1,2,3,4(&), María José González Calle2,3,4, and Rosana María Celi Costa2,3 1
Escuela de Ingeniería de la Producción y Operaciones, Universidad del Azuay, Cuenca, Ecuador 2 Escuela de Administración de Empresas, Universidad del Azuay, Cuenca, Ecuador {jmaldonado,mgonzalez,rosanacc}@uazuay.edu.ec 3 Observatorio Empresarial, Universidad del Azuay, Cuenca, Ecuador 4 Departamento de Posgrados, Universidad del Azuay, Cuenca, Ecuador
Abstract. A common situation when each academic period starts in academic institutions is the challenge of assigning classrooms to the different courses in each faculty. This article presents a binary integer linear programming model (objective function, constraints and the relationship between decision variables) which, on a daily basis, performs a classroom assignment based on the hourly scheduling and the capacity of each classroom. The model rests on two assumptions: (1) the assignment of each subject to a given time period has already been carried out optimally, and (2) the availability of teachers is not a restrictive element of the model. The testing and analysis of the proposed model were carried out at the Faculty of Science and Technology of the University of Azuay (Cuenca, Ecuador). The proposed model, which can be solved with any linear programming algorithm, allows an efficient assignment of the different classrooms under the constraints mentioned above. Under the proposed approach the model can be extended with additional aspects and conditions; however, this would increase the complexity, which can be managed through a phased approach (fragmented subproblems).

Keywords: Timetabling problem · Linear programming · Class scheduling
© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 510–516, 2020. https://doi.org/10.1007/978-3-030-39512-4_80

1 Introduction

Programming schedules and assigning classrooms at the beginning of each academic period is a problem faced by the great majority of educational institutions because of the many factors that must be considered to complete this task successfully. This activity represents an important administrative task because it consumes a significant amount of resources, especially time and staff. From the point of view of Operations Research, these types of problems fall within the field known as Timetabling Scheduling Problems. Timetabling Scheduling Problems describe a situation that usually happens in the
Development of a Solution Model for Timetabling Problems
511
educational environment, where a series of resources, usually subjects and teachers, must be assigned simultaneously to a limited number of time periods and classrooms [1], so that there are no inconsistencies in the distribution. Anthony Wren defines timetabling problems as those that allocate, subject to constraints, resources to objects placed in space-time so as to satisfy as far as possible a set of desirable goals; examples are the timetabling of class and exam hours and some forms of staff allocation, for example the assignment of staff to toll posts subject to a specific headcount [2]. The biggest difficulty in solving these types of problems is that the constraints and variables involved can number in the hundreds or thousands because of the amount of resources that must be considered; as a consequence, finding a suitable solution can become an extremely difficult task. There is a great variety of approaches to timetabling problems, each devised for a specific reality, so it is not possible to speak of a generalized model applicable to any circumstance or institution. A substantial body of studies has been developed from the point of view of operations research and computer science in the area of time scheduling, each with very different approaches but with an important number of similarities.
2 Timetabling Problems

The manual solution of timetabling problems usually requires several staff members and days of work, and the problems become considerably more complex as the factors involved in the calculation grow (teachers, time periods, capacities, classrooms, laboratories, etc.); there is also the possibility that the solution obtained is not 100% satisfactory. For these reasons, considerable attention has been paid for some years to the development of models that automate class scheduling. The first documented approach elaborated in significant detail was that of Gotlieb in 1963 [3], and it has served as a starting point for a series of articles and applications that have achieved considerable success. Despite the many variants of the problem presented by several authors, which differ in the type of event and the type of constraints, the problem can be classified into three categories [4]: scheduling of exams and tests (Examination Timetabling), class scheduling for schools (School Timetabling), and class scheduling for universities (University Course Timetabling). Although this classification exists, it cannot be considered strict, because a specific problem can often overlap more than one category. In most cases the problem is treated as an optimization problem in which an objective function is maximized or minimized subject to a group of requirements called hard and soft constraints. Hard constraints should never be violated and are generally expressed not as part of the objective function but as restrictive functions related to it; soft constraints, on the other hand, are not mandatory and are less important: they are included as part of the objective function and are fulfilled only once all the hard constraints have been satisfied.
Hard constraints for the problem of programming class schedules for
512
J. M. Maldonado-Matute et al.
universities are usually governed by the following rules [5]: (1) Collisions are not allowed: in a timetabling problem a collision occurs when two or more classes are scheduled in the same time period with the same teacher or in the same classroom. (2) The schedule must be complete: a schedule is considered complete when all classes have been assigned to a time period and a teacher; in these cases the assignment of a classroom is not always considered. (3) The schedule must allow non-consecutive sessions of a subject: in many cases a subject is not taught in a single block during the week but is distributed across the available academic days, which implies the possibility of scheduling the same subject in different time periods along the week. (4) Consecutive class periods should be considered: in many cases a subject can be taught in consecutive time periods without a change of classroom or teacher. Similarly, the soft constraints for the proposed problem are the following desired conditions: (1) Teachers' preferences for specific time periods should be met if possible: teachers can express preferences for teaching in certain time periods. (2) Students' schedules should be as compact as possible: care should be taken to compact schedules so as to avoid the dispersion of classes throughout a school day. (3) Classroom changes should be avoided as much as possible, especially to avoid clutter and the agglomeration of students between subjects. In 2003 the Metaheuristics Network organized an international competition on the problem called the University Course Timetabling Problem [6]; the objective of this contest was to establish a model generally accepted by the research community.
The problem consists of a set of n events to be scheduled in a set of 45 time periods (9 time periods per day), and a set of j classrooms in which each event is carried out. In addition, there is a set of S students who attend the events and a set of F features provided by the classrooms and required by the events. A feasible schedule is one in which all events have been assigned a time period and a classroom so that the following constraints are met: (1) no student attends more than one event at a time; (2) the classroom is large enough for all attending students and satisfies all the features required by the event; (3) there is at most one event in each room in any time interval. The phased approach used in the contest-winning algorithm reduces the magnitude of the problem by dividing it into subproblems of lower complexity; these subproblems can be handled individually, and the results of one phase serve as inputs to the next [1].
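The three feasibility constraints above can be illustrated with a minimal checker over a toy schedule. This is a hypothetical sketch (not the competition code): each event is mapped to a (period, room) pair, and all data are invented for the example:

```python
def is_feasible(schedule, attendees, capacity, required, features):
    """Check the three UCTP feasibility constraints on a toy instance.
    schedule:  event -> (period, room); attendees: event -> set of students;
    capacity:  room -> seats; required: event -> set of needed features;
    features:  room -> set of offered features."""
    # (3) at most one event per (period, room) slot
    if len(set(schedule.values())) != len(schedule):
        return False
    # (2) each room is big enough and offers every required feature
    for event, (period, room) in schedule.items():
        if len(attendees[event]) > capacity[room] or not required[event] <= features[room]:
            return False
    # (1) no student attends two events in the same period
    events = sorted(schedule)
    for i, e1 in enumerate(events):
        for e2 in events[i + 1:]:
            if schedule[e1][0] == schedule[e2][0] and attendees[e1] & attendees[e2]:
                return False
    return True

# Two events in the same period are fine in different rooms with disjoint students:
print(is_feasible({"e1": (1, "r1"), "e2": (1, "r2")},
                  {"e1": {"s1"}, "e2": {"s2"}},
                  {"r1": 30, "r2": 30},
                  {"e1": set(), "e2": set()},
                  {"r1": set(), "r2": set()}))  # → True
```

Such a checker is useful as a verification step regardless of which optimization method produces the schedule.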
3 Problem Description and Solution Method

The problem to be solved emulates the situation at the Faculty of Science and Technology of the University of Azuay, formed by 7 schools, where a set of subjects and students must be distributed among classrooms throughout the academic week taking into account three main considerations: (1) the capacity of each classroom must not be exceeded, (2) no more classrooms than the Faculty owns may be used, and (3) the schedule must be completed for each school. It is also assumed that (1) the assignment of each subject to a given time period has already been carried out optimally, and (2) the availability
of teachers is not a restrictive element of the model. For the generation of the model, a phased approach is followed, concentrating on the constraints proposed, together with an approach similar to that of [7] and [8], where integer linear programming is used to construct a solution model.
3.1 General Characteristics of the Model
The following sets form the structural basis of the proposed approach:
• Days of the week, on which one or several subjects can be scheduled respecting the established time periods; this set is denoted D = {d_1, ..., d_5}, based on a 5-day academic week.
• Scheduled subjects, each with its respective number of students, denoted C = {c_1, ..., c_m}; m subjects, usually between 100 and 110, are offered in a regular semester in the Faculty.
• Time periods, denoted F = {f_1, ..., f_n}; a time period is the interval during which a subject is taught, and the existence of n time periods on a regular class day is assumed.
• Classrooms, each with its respective capacity, available for the scheduling of a subject in a given time period; the existence of p classrooms is assumed, and the set is denoted A = {a_1, ..., a_p}.
A per-day solution approach is adopted, fragmenting the weekly problem into 5 smaller ones. In this model a single type of binary variable, denoted x_{c,f,a} with c ∈ C, f ∈ F and a ∈ A, is considered; it takes the value 1 when subject c, scheduled in period f, is assigned to classroom a, and 0 otherwise. There is also a set of parameters representing the conditions that must be respected:
• M_{|c|} ∈ N: vector of the number of students in each subject, where M_{|c|} is the number of students in subject c ∈ C.
• N_{|f|} ∈ N: vector of the number of subjects assigned to each time period, where N_{|f|} is the number of subjects in period f ∈ F.
• P_{|a|} ∈ N: vector of classroom capacities, where P_{|a|} is the number of seats available in classroom a ∈ A.
• CF_{|C|×|F|} ∈ {0, 1}: subject-period allocation matrix, where CF_{c,f} = 1 if subject c ∈ C has to be scheduled in period f ∈ F, and 0 otherwise.
• CA_{|C|×|A|} ∈ {0, 1}: subject-classroom allocation matrix, where CA_{c,a} = 1 if classroom a ∈ A has the capacity (possibility) to receive all the students enrolled in subject c ∈ C, and 0 otherwise.
• CU_{|C|×|F|×|A|} ∈ {0, 1}: subject-period-classroom availability matrix, where CU_{c,f,a} = 1 if subject c ∈ C can be scheduled in period f ∈ F and assigned to classroom a ∈ A, and 0 otherwise.
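Consistent with the definitions above, CA and CU can be derived mechanically from the enrolments M, the capacities P, and the given subject-period schedule CF. The sketch below uses invented toy data, not the Faculty's real instance:

```python
# Hypothetical toy data: 2 subjects, 1 period, 2 classrooms.
M = {"c1": 30, "c2": 50}                 # students enrolled per subject
P = {"a1": 40, "a2": 60}                 # seats per classroom
CF = {("c1", "f1"): 1, ("c2", "f1"): 1}  # subject-period allocation (given)

# CA[c, a] = 1 iff classroom a can hold every student of subject c.
CA = {(c, a): int(M[c] <= P[a]) for c in M for a in P}

# CU[c, f, a] = 1 iff subject c may sit in period f AND classroom a fits it.
CU = {(c, f, a): CF.get((c, f), 0) * CA[(c, a)]
      for c in M for f in ["f1"] for a in P}

print(CA[("c2", "a1")], CU[("c2", "f1", "a2")])  # → 0 1
```

Here c2 (50 students) does not fit classroom a1 (40 seats), so CA and CU rule that combination out before the optimization even starts.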
3.2 Linear Programming Model
The set of constraints necessary for the problem, with their descriptions, is listed below; Eqs. (1), (2), (3), (4) and (5) respectively represent the mentioned constraints.
1. Only the number of scheduled subjects must be assigned to a time period.
2. The classrooms' capacity must be respected, that is, the number of students enrolled in a subject cannot exceed the capacity of the assigned classroom.
3. Subjects scheduled in consecutive periods must stay in the same classroom.
4. A subject scheduled in a time period must be located in a single classroom.
5. Classrooms can host a maximum of one subject in a given time period.

\[ \sum_{c=1}^{m} \sum_{a=1}^{p} x_{c,f,a} = N_{|f|}, \quad \forall f \in F \iff CF_{c,f} = 1 \wedge CA_{c,a} = 1. \tag{1} \]

\[ x_{c,f,a} \le CU_{c,f,a}, \quad \forall c \in C,\ f \in F,\ a \in A \iff M_{|c|} \le P_{|a|}. \tag{2} \]

\[ \sum_{a=1}^{p} \left( x_{c,f,a} - x_{c,f-1,a} \right) = 0, \quad \forall c \in C,\ f \in F \iff CF_{c,f} = CF_{c,f-1} = CA_{c,a} = 1. \tag{3} \]

\[ \sum_{a=1}^{p} x_{c,f,a} \le 1, \quad \forall c \in C,\ f \in F \iff CF_{c,f} = 1 \wedge CA_{c,a} = 1. \tag{4} \]

\[ \sum_{c=1}^{m} x_{c,f,a} \le 1, \quad \forall a \in A,\ f \in F \iff CF_{c,f} = 1 \wedge CA_{c,a} = 1. \tag{5} \]
The model is completed with the objective function shown in (6). The coefficient M_{|c|}/P_{|a|} that accompanies x_{c,f,a} improves the allocation of space within the classrooms, since the percentage of occupation is taken into account based on the number of students per assigned subject. In this way, the problem becomes a maximization exercise in which the greatest efficiency in the occupation of all classrooms is sought, provided that the conditions already stated are met.

\[ \max \sum_{c=1}^{m} \sum_{f=1}^{n} \sum_{a=1}^{p} \frac{M_{|c|}}{P_{|a|}} \, x_{c,f,a} \iff CF_{c,f} = 1 \wedge CA_{c,a} = 1. \tag{6} \]
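The per-period core of the model can be illustrated with a brute-force toy. All data below are hypothetical, and the enumeration stands in for a proper linear-programming solver (which the paper uses for the real instance): it maximizes the occupancy ratio Σ M_{|c|}/P_{|a|} while respecting capacity (2), one room per subject (4), and one subject per room (5):

```python
from itertools import permutations

M = {"c1": 35, "c2": 18, "c3": 22}   # students per subject in one time period
P = {"a1": 40, "a2": 25, "a3": 20}   # seats per classroom

def best_assignment(M, P):
    """Enumerate room permutations for one time period and keep the one
    maximizing sum(M[c]/P[a]) subject to the capacity rule M[c] <= P[a]."""
    subjects, rooms = sorted(M), sorted(P)
    best, best_score = None, -1.0
    for perm in permutations(rooms, len(subjects)):
        if all(M[c] <= P[a] for c, a in zip(subjects, perm)):
            score = sum(M[c] / P[a] for c, a in zip(subjects, perm))
            if score > best_score:
                best, best_score = dict(zip(subjects, perm)), score
    return best, best_score

assignment, score = best_assignment(M, P)
print(assignment)  # → {'c1': 'a1', 'c2': 'a3', 'c3': 'a2'}
```

In this toy instance only one layout is feasible: c1 (35 students) fits only a1, which forces c3 into a2 and c2 into a3. Enumeration is exponential, which is exactly why the paper formulates the real 290-subject problem as a binary integer program instead.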
4 Discussion and Results

The model was tested at the Faculty of Science and Technology of the University of Azuay. In a test scenario emulating a regular semester, 5 working days, 290 subjects, 14 time periods and 35 classrooms were considered; information on schedules, teachers, students enrolled per subject and the capacity of each classroom was provided by the institution to complete the formulation of the model. The great majority of problems arise from the lack of classrooms or from their limited and varied capacity (from 15 to 40 seats), which contrasts with the large number of students enrolled in certain subjects every academic period; according to the reviewed literature, this problem is one of the most common. The number of variables and constraints generated per day is shown in Table 1. The model was solved using free software, and results were obtained in under 5 min despite the number of variables and constraints; the results were verified manually and physically to ensure the correct functioning of the model, confirming 100% efficiency in the assignment. The analysis of the results together with the proposed
Table 1. Number of variables and constraints handled in the tested model

Day       | Periods | Variables | Constraints
Monday    | 14      | 4367      | 2766
Tuesday   | 14      | 4332      | 2702
Wednesday | 14      | 3565      | 2257
Thursday  | 14      | 3679      | 2400
Friday    | 14      | 2064      | 1381
model can be considered the basis for the resolution of more complex problems that may be set in a phased approach. The model will not provide an optimal solution when there are not enough classrooms of adequate capacity to host the subjects to be scheduled in a given time period; to solve this, dummy classrooms (dummy variables) could be introduced into the model, allowing a feasible solution to be found while at the same time revealing the number of resources lacking for a successful scheduling.
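A minimal sketch of the dummy-classroom idea (names and the capacity constant are hypothetical): padding the room set with one very large dummy room per subject keeps the model feasible, and counting the dummies actually used measures the missing resources. Because a dummy's occupancy ratio M_{|c|}/P_{|a|} is close to zero, the objective (6) naturally leaves dummies as a last resort:

```python
def pad_with_dummies(P, n_subjects, cap=10**6):
    """Return the room dict extended with one huge dummy room per subject,
    guaranteeing that a feasible assignment always exists."""
    padded = dict(P)
    padded.update({f"dummy{k}": cap for k in range(n_subjects)})
    return padded

def missing_rooms(assignment):
    """Count dummy rooms used by a solved assignment {subject: room};
    this is the number of real classrooms the faculty lacks."""
    return sum(room.startswith("dummy") for room in assignment.values())

rooms = pad_with_dummies({"a1": 40, "a2": 25}, n_subjects=3)
print(len(rooms), missing_rooms({"c1": "a1", "c2": "a2", "c3": "dummy0"}))  # → 5 1
```

The same padded instance can then be fed to whatever solver handles the binary integer program.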
5 Conclusions

In this work it has been verified that a phased approach to solving timetabling scheduling problems is practical and efficient, since it reduces the complexity of a single problem involving a large number of variables and constraints by fragmenting it into subproblems of lower complexity. A similar approach to the one proposed in this research can be taken to solve complementary problems, such as the assignment of teachers to subjects or of subjects to time periods, under the same principle of sets, matrices and vectors, where the results of one problem are taken as input for the next, thus solving a macro problem of greater complexity that involves all the factors and variables usually involved in the creation and assignment of class schedules. It should be noted that this type of problem, with its large number of approaches, is still widely studied by the research community because a universal solution model has not yet been found; for this reason it is important to continue generating new alternatives and solution approaches.
References

1. Kostuch, P.: The university course timetabling problem with a three-phase approach. In: Burke, E., Trick, M. (eds.) Practice and Theory of Automated Timetabling V. PATAT 2004. LNCS, vol. 3616, pp. 109–125. Springer, Heidelberg (2005)
2. Wren, A.: Scheduling, timetabling and rostering - a special relationship? In: Burke, E., Ross, P. (eds.) Practice and Theory of Automated Timetabling. PATAT 1995. LNCS, vol. 1153, pp. 46–75. Springer, Heidelberg (1995)
3. Gotlieb, C.C.: The construction of class-teacher time-tables. In: IFIP Congress, North-Holland, pp. 73–77 (1963)
4. Schaerf, A.: A survey of automated timetabling. Artif. Intell. Rev. 13(2), 87–127 (1999)
5. Daskalaki, S., Birbas, T., Housos, E.: An integer programming formulation for a case study in university timetabling. Eur. J. Oper. Res. 153(1), 117–135 (2004)
6. Rossi-Doria, O., et al.: A comparison of the performance of different metaheuristics on the timetabling problem. In: Burke, E., De Causmaecker, P. (eds.) Practice and Theory of Automated Timetabling IV. LNCS, vol. 2740, pp. 329–351. Springer, Heidelberg (2002)
7. Birbas, T., Daskalaki, S., Housos, E.: Timetabling for Greek high schools. J. Oper. Res. Soc. 48(12), 1191–1200 (1997)
8. Birbas, T., Daskalaki, S., Housos, E.: School timetabling for quality student and teacher schedules. J. Sched. 12(2), 177–197 (2009)
Machine, Discourse and Power: From Machine Learning in Construction of 3D Face to Art and Creativity Man Lai-man Tin(&) School of Arts and Social Science, The Open University of Hong Kong, Ho Man Tin, Kowloon, Hong Kong [email protected]
Abstract. 3D face reconstruction based on a single image has recently become possible by training the machine to learn from a database. The performance and accuracy of the 3D facial structure and geometry have largely improved with the powerful calculation of machines and their expanding data. 3D face reconstruction can be applied to both still and moving images, and recently such technology has drawn attention in the art field, for example in portrait representation, facial recognition and surveillance art. By using artificial intelligence (AI) technology and machine learning, artists are able to create artwork with a hybrid character, consisting of both human perspective and machine creativity, which opens up a new gateway to the art world. The capability of artists in creating artwork, particularly portrait art in this research, has been enhanced from technological and theoretical perspectives. Artists can represent faces and understand 3D geometry in new ways by breaking through traditional limitations; the technology also provides alternative methods of facial visualization beyond expectation. Traditional expression and technique can be extended to a new dimension; the technology helps artists think outside the box and continue to create new movements and isms throughout art history. Moreover, the traditional belief in the importance of the originality of the artist's mind, sense, perspective and techniques in creating artwork can now be redefined by machines and artificial intelligence. A new form of art and creativity, and even of knowledge and history, has developed. This paper attempts to discuss the essential transformation in dimensional imagery and its aesthetic experiences through machine learning, and the shift of the discourse of power from human to machine.

Keywords: 3D face reconstruction · Machine learning · Artificial intelligence · Post-human aesthetics
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 517–523, 2020. https://doi.org/10.1007/978-3-030-39512-4_81
518
M. L. Tin
1 Introduction

With the improvement of machine calculation and capabilities in data manipulation, more and more artists have been exploring the possibility of making artwork by training machines and using artificial intelligence in recent years. One of the creative applications is 3D face reconstruction. 3D facial structure and geometry are difficult to generate without enough corresponding data, and hence matching facial poses and generating accurate volumetric images are highly difficult [1]. Some existing technologies have been used to reconstruct the 3D face: for example, photogrammetry can generate geometric data and a 3D facial model through instantaneous multiple photos taken by cameras and software integration. Another technology, infrared/structured-light scanning, collects surface data and distance information and extracts the 3D data in the scanning field within its own system. This kind of human-centered operation and computer program system has been shifting to machine-led approaches to create and present 3D images. This paper discusses the technological change in portrait creation and the corresponding machine learning technology, explores how the technology can reconstruct the 3D face, and examines how it mediates between human and machine to create an aesthetic experience of the image and shifts the power of discourse in creativity.
2 Portraiture Evolution

Portraiture has been an artistic representation of humans throughout the history of art. The earliest examples can be found in the plastered human skulls produced between around 9000 and 6000 BC. These sculptures demonstrate how people in prehistoric times understood their ancestors' facial features and represented them in 3D by using plaster to model the human skull and face. Since the advent of different media, techniques, technologies and imaging apparatus, the artistic representation of portraiture has expanded and diversified. For example, painting and photography have been widely used to replicate the physical world, including portraits, in 2D visualization. One of the best-known portrait paintings, Mona Lisa by the Italian Renaissance artist Leonardo da Vinci, denotes the high ability and skill of humans in painting and aesthetics. The camera allows us to capture light and shadow and reproduce portrait images, which inspire us to understand face construction in a precise and detailed way. Computer graphics provide new possibilities in image and data manipulation and in the generation of 2D images and 3D models. These conventional and new imaging methods reveal the evolution of human replication of the physical world and the perception of 3 dimensions via 2D visualization.
Machine, Discourse and Power: From Machine Learning
519
In 2018, Edmond de Belamy, a portrait painting produced through a Generative Adversarial Network (GAN), was sold at Christie's auction in New York for US$432,000. This artwork created by artificial intelligence suggests an important direction and new possibilities in imaging technique, new challenges to humans in the replication and perception of the physical world, and new challenges to the notion of art and creativity. The application of machine intelligence in image production has increased rapidly. In the near future, the creation of artwork may be largely transformed and shifted from human-centered to human-machine-integrated, or machine-led, systems.
3 Machine Artificial and Image Generation

The GAN demonstrates the possibility for AI to create 2D portraiture; the generation of 3D models is the next hurdle. Compared with 2D, 3D imagery is a big challenge in terms of the complexity of geometric and volumetric data generation and the corresponding model building. Current systems rely on multiple images to create the 3D models, which complicates the modeling process and increases the cost of image production. Recent research suggests that these limitations can be overcome by machine learning through a Convolutional Neural Network (CNN) [2]. According to this research, a CNN can generate a 3D model from a single 2D image. Such a method and technology can address the problems and provide a simple but powerful way to produce 3D images. AI technology and machine learning such as the aforementioned GAN and CNN could be a new direction for portraiture creation. In order to test the possibility and results of 3D portraits generated by machine artificial, a CNN with a relevant database of 2D and 3D images, as well as open-source code, has been used in this section. A photograph (Figs. 1(A) and 2(A) show portrait photographs of the Hong Kong artist and singer Leon Lai) and a picture of a painting (Fig. 3(A) shows the portrait painting Mona Lisa) found on the internet were used. The photos and the picture of the painting were given to the machine for training, to test whether the machine can identify a face in a photograph or picture and reconstruct the corresponding 3D facial geometry. Figures 1, 2 and 3 show that a 3D face can be reconstructed successfully from a single image through a CNN. By comparing Fig. 1(B) with Fig. 2(B) and Fig. 1(C) with Fig. 2(C), it can be noted that there are subtle changes in the 3D geometric “landscape” of the same person in different facial expressions.
This is good evidence that the machine is able to learn and generate different 3D faces based on its own interpretation of the picture and the facial expression, as long as the CNN can identify a face in the picture. Figure 1(D) shows different angles of the 3D face model, indicating that the CNN is able to visualize parts that are invisible in the original 2D picture, such as the forehead and chin.
Fig. 1. (A) Portrait photography of Hong Kong artist and singer Leon Lai found from the internet was used to do the testing. (B) A closeup of the 3D face generated from the photo Fig. 1 (A). (C)(D) Different angles of 3D face reconstruction generated through a Convolutional Neural Network.
Fig. 2. (A) Another portrait photography of Leon Lai found from the internet was used to test the 3D face reconstruction. (B) A closeup of the 3D face generated from the photo Fig. 2(A). (C) Different angles of 3D face reconstruction generated through a Convolutional Neural Network.
Portrait painting Mona Lisa, as shown in Fig. 3(A), is another attempt to test whether the CNN can identify a face in a painting and generate a 3D face. The result is positive, and different angles of the Mona Lisa 3D portraiture were captured, as shown in Fig. 3(B) and (C). With the continuous development of the databases of 2D images and 3D models used for training, the performance of the CNN in facial identification and reconstruction will improve, the process will become smoother, and the results will be more accurate. This has greatly improved the imaging technique and the possibility of making 3D portraiture, as well as lowering the cost of visualization [7], which has been a drawback of some existing technologies such as 3D scanning, where a sophisticated and precise set-up is required or accuracy is low due to multiple scans.
Fig. 3. (A) Picture of Mona Lisa painting by Leonardo da Vinci. (B) A closeup of the 3D face generated from the photo Fig. 3 (A). (C) Different angles of 3D face reconstruction generated through a Convolutional Neural Network.
4 Post-human Aesthetics

To a certain extent, AI and machine learning open up a new gateway to the art world, allowing artists to create portraiture with a hybrid character - machine “writing” with human “scripting” [4]. The autonomous art creation of machine artificial changes the traditional practice and beliefs about the artist's mind, sense, perspective and techniques. In fact, machine artificial allows us to rethink the argument about “The Death of the Author” by Roland Barthes (1915–1980): is it possible for us to detect precisely what the machine intended? Inevitably, I have shared the power of discourse in art authorship and authority with machine intelligence in the 3D face reconstructions shown in Figs. 1 and 2. The interventions of “writing” and “scripting” are becoming more complicated, and the values of aesthetics and art have to be redefined in the post-human era.
References

1. Moschoglou, S., Ploumpis, S., Nicolaou, M., Papaioannou, A., Zafeiriou, S.: 3DFaceGAN: adversarial nets for 3D face representation, generation, and translation. arXiv:1905.00307v2 (2019)
2. Jackson, A.S., Bulat, A., Argyriou, V., Tzimiropoulos, G.: Large pose 3D face reconstruction from a single image via direct volumetric CNN regression. In: ICCV (2017)
3. Dawkins, R.: The Selfish Gene. OUP Oxford, Oxford (2006). 30th Anniversary edition
4. Seymour, L.: Roland Barthes's The Death of the Author. Macat Library (2018)
5. Fig. 1(A). Portrait photography of Hong Kong artist and singer Leon Lai [image]. https://images.app.goo.gl/xEmsMTdrLNDTiZJ99. Accessed 14 Sept 2019
6. Fig. 2(A). Portrait photography of Hong Kong artist and singer Leon Lai [image]. https://images.app.goo.gl/fbKVso1FiLcvptdY6. Accessed 12 Oct 2019
7. Fig. 3(A). Picture of painting Mona Lisa by Leonardo da Vinci (1503) [image]. https://commons.wikimedia.org/wiki/File:Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg#/media/File:Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg. Accessed 8 Oct 2019
Modelling Alzheimer's People Brain Using Augmented Reality for Medical Diagnosis Analysis

Ramalakshmi Ramar1, Swashi Muthammal1(&), Tamilselvi Dhamodharan2, and Gopi Krishnan Rajendran3

1 Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankovil 626126, Tamil Nadu, India
[email protected], [email protected]
2 Cognitive Science Research Group, Department of Information Technology, Thiagarajar College of Engineering, Madurai 625015, Tamil Nadu, India
3 ThoughtWhiz, Bangalore 560011, Karnataka, India
Abstract. Alzheimer's Disease (AD) is characterized by gradual memory loss in elderly people. Its effects on the brain include loss of memory and difficulty with thinking, language and problem solving. Heart disease, depression, diabetes and high blood pressure pose a higher risk of causing Alzheimer's disease. In the Alzheimer's brain, proteins build up to form structures known as plaques, which are abnormal clumps in the brain, and tangles, which are bundles of fibres within neurons. This leads to loss of brain tissue and of connections between nerve cells, and eventually to the death of nerve cells. The Alzheimer's brain also suffers a shortage of chemical messengers that help transmit signals around the brain; because of this shortage, signals are not transmitted effectively. These symptoms are visualized as an AR (Augmented Reality) model to assist doctors in medical analysis. Augmented Reality acts as a powerful tool for finding, supporting and analysing Alzheimer's Disease. The AR brain model enhances analysis of the brain through tracking, layer-by-layer visualization and integrated feedback about the function of different parts of the brain, which plays a major role in clinical evaluation to treat the disease. Microglia are a type of cell that initiate immune responses in the brain and spinal cord. When AD is present, microglia interpret the beta-amyloid plaque as cell injury. The resulting inflammatory response and brain shrinkage can be visualized using Augmented Reality. Keywords: Brain · Memory loss · Alzheimer's disease · Augmented Reality · Visualization
1 Introduction

Augmented Reality is derived from the concept of virtual reality, in which a person creates an artificial world to explore interactively through the senses of vision, hearing and touch. Although there are a number of definitions of AR, a universally accepted definition
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 524–531, 2020. https://doi.org/10.1007/978-3-030-39512-4_82
was proposed by Azuma et al. Their paper, A Survey of Augmented Reality, proposes that an AR system combines real and computer-generated information in a real environment, interactively and in real time [1, 2]. AR is characterized by its potential to supplement the physical world [2]. The history of AR dates back to the 1960s, when Ivan Sutherland built a mechanically tracked head-mounted device, the Sword of Damocles. The term Augmented Reality was coined in 1990 by a Boeing researcher, Tom Caudell [3, 4]. Then came Virtual Fixtures (a robotic system that overlays information on a worker's environment to improve efficiency), built in 1992 by Louis Rosenberg. In 1992 NASA integrated AR into the X-38 spacecraft. By 2000 the Nara Institute of Science and Technology had released the ARToolKit software, and by 2014 Google Glass had arrived. Further developments continue in the AR field to this day. AR is further divided by its trigger into marker-based and markerless augmentation. Marker-based AR anchors digital information to a physical-world object using camera calibration parameters and 3D object interaction with the human hand [5], as in the Anatomy 4D app, where virtual 3D human anatomical objects pop up from the book instantly [6]. Markerless augmentation projects an object without any markers, as in picture cards and magic-mirror applications [7]. The main technological requirements of augmented reality are displays, comprising aural displays and visual displays, the latter including HMDs, handheld displays, spatial displays and pinch-glove displays [8, 9]; tracking sensors and modelling environments capable of sensing outdoor and unprepared environments [10]; and user interfaces and interaction, as in the holo-panel interface for Android [11].
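At the core of marker-based anchoring is a camera-calibration and pose problem: once a marker's corners are detected in the camera image, a planar homography maps marker coordinates to pixel coordinates, and virtual content can be "pinned" to the marker. The sketch below illustrates this with a standard Direct Linear Transform; the marker corner positions are hypothetical values, not output from any particular detector.

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: planar homography mapping src -> dst
    from four point correspondences (e.g. detected marker corners)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the 8x9 constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def project(H, pt):
    """Apply the homography to a 2D point (homogeneous division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical marker: a unit square in marker coordinates, and the four
# pixel positions where a detector found its corners in the camera image.
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
detected = [(120, 80), (220, 85), (215, 190), (115, 185)]
H = homography(marker, detected)
# Anchor virtual content at the marker centre: it lands inside the
# detected quadrilateral, wherever the marker appears in the frame.
print(project(H, (0.5, 0.5)))
```

In a full marker-based pipeline the corner detection itself is done by a library such as OpenCV's ArUco module, and the homography is upgraded to a full 6-DoF pose using the camera calibration parameters mentioned above.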
Therefore AR has marked its path in various fields: medicine, personal information systems, industrial and military applications for designing and assembling components, sports broadcasting, gaming, tourism, and teaching and learning aids [12]. Medicine is the science and art concerned with the practice, diagnosis, treatment and prevention of disease, and diagnostics is the primary and most complicated area for medical practitioners. AR acts as a supportive tool through which a surgeon can see the hidden organs enclosed within the human body, enhancing accuracy and treatment. The enhancement of accuracy is achieved through camera calibration and patient image registration [13]. AR provides improved surgical planning, navigation, simulators, educational tools, therapy and rehabilitation. A visuo-haptic augmented reality setup with the Optotrak 3020 and IR tracking systems was incorporated for stabilized tracking and efficient alignment of images [14]. This provides multimodality of AR in medicine, superseding the previously used method called MAGI (Microscope Assisted Guided Interventions) [15]. Regarding surgical planning, stereotactic surgery has been performed with previously obtained radiographic images superimposed on specific sites on the patient [16, 17]. There is a high possibility of AR examinations being performed remotely using the HoloLens [18, 19]. The same HoloLens is effective for manipulating images by hand, which finds application in reconstruction surgery of bones thanks to the detailed view of tissues and blood vessels [20]. Traditional laparoscopic medicine has been replaced by a new technique in Minimally Invasive Surgery called Natural Orifice Transluminal Endoscopic Surgery (NOTES) [21]. Haptics are used in automatic registration, stiffness mapping and dynamic image overlay, resulting in automated
tumor detection [22]. The same technique is used for dynamic texture mapping and for performing surgery on the digestive system, liver, eye, etc. [23–25]. The brain is considered to be the repository of memory, sensation, and other intellectual and cognitive activities. The human brain is quite flexible compared to the brains of other mammals; this flexibility contributes to humans' perpetual learning and adaptability. The memory frameworks of the brain are classified into declarative memory and non-declarative memory. Declarative memory deals with the recollection of events and facts and relies on connections with the medial temporal lobe and diencephalon, damage to which leads to amnesia. Non-declarative memory is sub-classified into skills and habits, priming, simple classical conditioning, and non-associative learning, all acquired through experience [26]. It involves the coordination of the cerebellum, basal ganglia, motor cortex and hippocampus under the supervision of the cerebral cortex [27]. Brain lesions are considered the major cause of basic and complex memory dysfunction. Cortical lesions lead to short-term memory degeneration and trouble retrieving semantic memory [28]. Planning ability is affected by frontal cortical dysfunction, and the majority of amnestic diseases are due to diencephalic lesions [29]. Dementia is a dreadful neurodegenerative disease that has been categorized, by cause and effect, into four types: Alzheimer's disease, vascular dementia, frontotemporal dementia and dementia with Lewy bodies [30]. Some of these are reversible, depending on the resistance, resilience and compensation exhibited by the patient [31]. Reversible dementia can be due to geriatric depression [32]. Changes in the physiology of the brain and in its chemical composition lead to irreversible dementia.
There is no permanent cure for irreversible dementia, but there are medications, including cholinesterase inhibitors and memantine [33, 34]. Apart from medication, rehabilitation for patients' cognitive and motor skill development is provided [35, 36]. Alzheimer's disease is the most common type of dementia: a paper published in 2014 estimated that almost 4.4 million people worldwide over 65 are affected, with a risk of growth to 65 million by 2030. The disease was named after the German neurologist Alois Alzheimer in 1907. Alzheimer's disease is characterized by progressive deterioration of cognition, motor skills and memory. The primary cause of Alzheimer's disease is amyloid plaque formation from the leftover amyloid-beta proteins after Amyloid Precursor Protein (APP) cleavage. When the microglia (Interleukin-1 (IL-1)) and astrocytes involved in the formation of amyloid-beta proteins stagnate, inflammation results. These lesions affect neurons and pyramidal cells, initially in the hippocampus region, identified using proteomic identification, and gradually spread to the cortical region and then to the whole brain. In 1984 Glenner and Wong proposed that Alzheimer's can be due to the apolipoprotein E (APOE) genes (APOE e2, APOE e3, APOE e4), which are responsible for late-onset Alzheimer's. Early onset is due to mutation of Presenilin 1 (PSEN1) and Presenilin 2 (PSEN2). The proposed AR (Augmented Reality) brain model of AD includes neurons, astrocytes, microglia and peripheral macrophages, as well as amyloid-β aggregation and hyperphosphorylated tau proteins. The model is represented by a system of partial differential equations and is used to simulate the effect of drugs that either failed in clinical trials or are currently in clinical trials.
2 Proposed Augmented Reality Brain Model

The physiological change in the Alzheimer's brain is the result of chemical imbalance in the brain. The major causes are beta-amyloid plaques and neurofibrillary tangles. Beta-amyloid proteins are small fragments produced by APP protein cleavage. These amyloid proteins usually break down, but in a brain prone to Alzheimer's the sticky proteins group into clusters. The plaques around the neurons cause the death of the neurons. Neurofibrillary tangles are insoluble fibres formed from tau protein; the tau protein inside the neurons interferes with cellular mechanisms and leads to cell death. This paper focuses on the chemical and physiological changes in the brain. The major physiological change is brain shrinkage. A normal human brain weighs about 3 lb (1,300–1,400 g); an Alzheimer's brain weighs roughly 10% less, owing to the death of brain cells. The decline of cells starts in the hippocampus and spreads to the cerebral cortex. The ventricle chambers containing cerebrospinal fluid on either side are found to be enlarged. Neuronal connections are broken, leading to loss of memory, speech and motor skills. The sulcus and gyrus regions develop enlarged gaps. Figure 1 shows the proposed model of the AR-based modelling and analysis done for the Alzheimer's brain.
Fig. 1. Proposed model and analysis in Alzheimer’s brain
Figure 1 shows a block-diagram view of the physiological transformations, which will be modelled using the Maya tool. A model of a healthy human brain and one of an Alzheimer's brain will be built for overall structural comparison. Then each interior part will be designed to show the minute changes. All these models are imported into the Unity tool, where the 3D models are augmented to give a clear and detailed view. Dissection and slicing of the model will be enabled for improved analysis of the damage to the interior lobes. An audio track will play in the background to explain the changes.
The addition of the audio clip makes the model more interactive; it can be used as a learning tool or for research purposes. Figure 2a presents a schematic diagram of the normal brain and its internal neural structure. The figure shows that the dendrites and axons are linked; this linkage supports the proper transmission of information throughout the human body. Tau proteins and microtubules are stabilized. The mildly affected Alzheimer's brain shows slight structural variation in the hippocampus region. Figure 2b shows the severely affected brain. The internal neural structure is distorted in shape, and this distortion disrupts the smooth transmission of neural signals. The microtubules appear irregular, and the stagnated amyloid-beta proteins form plaques. Cortical shrinkage with enlarged ventricles is the other structural variation.
Fig. 2. a. Healthy brain neuron network and mild Alzheimer’s brain; b. Severe Alzheimer’s brain
The AR brain model is represented by a system of partial differential equations (PDEs) based on Fig. 2. It represents all the pro-inflammatory cytokines by TNF-α and all the anti-inflammatory cytokines by IL-10. The model is used to conduct in silico trials with several drugs: a TNF-α inhibitor, an anti-Aβ drug, an MCP-1 inhibitor, and injection of TGF-β. Simulations of the model show that continuous treatment with the TNF-α inhibitor yields a slight decrease in the death of neurons, and the anti-Aβ drug yields a slight decrease in the aggregation of Aβ over a 10-year period, while the benefits of the TGF-β injection and the MCP-1 inhibitor are negligible. This suggests that clinical trials consider combination therapy with TNF-α and anti-Aβ drugs. This information is graphically projected in the augmented reality model so that doctors can visually identify analysis and research perspectives and continue advanced research on AD.
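As a rough illustration of how such in silico drug trials work, the sketch below uses a deliberately simplified two-variable ODE system in place of the paper's PDE model: one state for the Aβ aggregate and one for surviving neurons, with a TNF-α inhibitor slowing neuron death and an anti-Aβ drug slowing aggregation. All rate constants and drug efficacies are illustrative assumptions, not parameters of the actual model.

```python
def simulate(eps_tnf=0.0, eps_abeta=0.0, years=10, dt=0.01):
    """Forward-Euler simulation of a toy AD model.

    eps_tnf   -- assumed efficacy of the TNF-a inhibitor (0..1)
    eps_abeta -- assumed efficacy of the anti-Abeta drug (0..1)
    Returns (A, N): normalized Abeta load and neuron count after `years`.
    """
    A, N = 0.1, 1.0              # initial Abeta load and neuron count
    k_agg, k_death = 0.3, 0.2    # assumed aggregation and death rates
    for _ in range(int(years / dt)):
        dA = k_agg * (1.0 - eps_abeta) * N * (1.0 - A)  # anti-Abeta slows aggregation
        dN = -k_death * (1.0 - eps_tnf) * A * N         # TNF-a inhibitor slows death
        A += dA * dt
        N += dN * dt
    return A, N

untreated = simulate()
combo = simulate(eps_tnf=0.5, eps_abeta=0.5)
# Combination therapy should leave more surviving neurons and a smaller
# aggregated Abeta load than no treatment, mirroring the qualitative
# conclusion drawn from the PDE simulations above.
assert combo[1] > untreated[1] and combo[0] < untreated[0]
```

The real model's advantage over a toy like this is spatial resolution: as a PDE system it can localize plaque formation and cell loss to particular brain regions, which is what the AR visualization projects.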
3 Conclusion and Future Works

We build on earlier mathematical models that deal with particular aspects of AD. The model we developed, covering Aβ polymerization, Aβ plaque formation, the role of prions interacting with Aβ, linear cross-talk among brain cells and Aβ, and the influence of SORLA on AD progression, will support Alzheimer's research scientists in exploring the causes of the disease and narrowing down their research with the Alzheimer's brain model. In future work, the interior visualization, the differentiation of diseased neurons, and how their behavior is reflected in human behavior and memory loss will be analyzed.
References
1. Azuma, R.T.: A survey of augmented reality. Presence Teleoperators Virtual Environ. 6(4), 355–385 (1997)
2. Höllerer, T.: User interfaces for mobile augmented reality systems. Doctoral dissertation, Columbia University (2004)
3. Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M.: Augmented reality technologies, systems and applications. Multimed. Tools Appl. 51(1), 341–477 (2011)
4. Blanco-Fernández, Y., López-Nores, M., Pazos-Arias, J.J., Gil-Solla, A., Ramos-Cabrer, M., García-Duque, J.: REENACT: a step forward in immersive learning about human history by augmented reality, role playing and social networking. Expert Syst. Appl. 41(10), 4811–4828 (2014)
5. Chun, J., Lee, S.: A vision-based 3D hand interaction for marker-based AR. Int. J. Multimed. Ubiquit. Eng. 7(3), 51–58 (2012)
6. Hsieh, M.C., Lee, J.J.: Preliminary study of VR and AR applications in medical and healthcare education. J. Nurs. Health Stud. 3(1), 1 (2018)
7. Glockner, H., Jannek, K., Mahn, J., Theis, B.: Augmented reality in logistics: changing the way we see logistics - a DHL perspective. DHL Customer Solutions & Innovation 28 (2014)
8. Kesim, M., Ozarslan, Y.: Augmented reality in education: current technologies and the potential for education. Procedia Soc. Behav. Sci. 47, 297–302 (2012)
9. Sielhorst, T., Feuerstein, M., Navab, N.: Advanced medical displays: a literature review of augmented reality. J. Disp. Technol. 4(4), 451–467 (2008)
10. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., MacIntyre, B.: Recent advances in augmented reality. IEEE Comput. Graph. Appl. 21(6), 34–47 (2001)
11. Brotos, A.: Interactive augmented reality panel interface for Android (2015)
12. Van Krevelen, D., Poelman, R.: Augmented reality: technologies, applications, and limitations. Dep. Comput. Sci., Vrije Univ., Amsterdam (2007)
13. Ha, H.G., Hong, J.: Augmented reality in medicine. Hanyang Med. Rev. 36(4), 242–247 (2016)
14. Harders, M., Bianchi, G., Knoerlein, B.: Multimodal augmented reality in medicine. In: International Conference on Universal Access in Human-Computer Interaction, pp. 652–658. Springer, Heidelberg (2007)
15. Edwards, P.J., King Jr., A.P., Maurer, C.R., de Cunha, D.A., Hawkes, D.J., Hill, D.L.G., Gaston, R.P., Fenlon, M.R., Chandra, S., Strong, A.J., Chandler, C.L., Richards, A., Gleeson, M.E.: Design and evaluation of a system for microscope assisted guided interventions (MAGI). In: Taylor, C., Colchester, A. (eds.) MICCAI 1999. LNCS, vol. 1679, pp. 842–851. Springer, Heidelberg (1999)
16. Parkhomenko, E., Safiullah, S., Walia, S., Owyong, W., Lin, C., O'Leary, et al.: MP26-20 Virtual-reality projected renal models with urolithiasis as an educational and preoperative planning tool for nephrolithotomy: a pilot study. Am. Urol. Assoc. Conf. 199(45), e345 (2018)
17. Alberti, O., Dorward, N.L., Kitchen, N.D., Thomas, D.G.: Neuronavigation - the impact of operating time. Stereotact. Funct. Neurosurg. 68, 44–48 (1997)
18. Hanna, M.G., Ahmed, I., Nine, J., Prajapati, S., Pantanowitz, L.: Augmented reality technology using Microsoft HoloLens in anatomic pathology. Arch. Pathol. Lab. Med. 142(5), 638–644 (2018)
19. Monsky, W.L., James, R., Seslar, S.S.: Virtual and augmented reality applications in medicine and surgery - the fantastic voyage is here. Anat. Physiol. 9(1), 1–6 (2019)
20. Pratt, P., Ives, M., Lawton, G., Simmons, J., Radev, N., Spyropoulou, L., Amiras, D.: Through the HoloLens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur. Radiol. Exp. 2(1), 2 (2018)
21. De Paolis, L.T., Aloisio, G.: Augmented reality in minimally invasive surgery. In: Advances in Biomedical Sensing, Measurements, Instrumentation and Systems, pp. 305–320. Springer, Heidelberg (2010)
22. Zevallos, N., Srivatsan, R.A., Salman, H., Li, L., Qian, J., Saxena, S., Xu, M., Patath, K., Choset, H.: A surgical system for automatic registration, stiffness mapping and dynamic image overlay. In: 2018 International Symposium on Medical Robotics (ISMR), pp. 1–6. IEEE (2018)
23. Patath, K., Srivatsan, R.A., Zevallos, N., Choset, H.: Dynamic texture mapping of 3D models for stiffness map visualization. In: Workshop on Medical Imaging, IEEE/RSJ International Conference on Intelligent Robots and Systems (2017)
24. Soler, L., Nicolau, S., Schmid, J., Koehl, C., Marescaux, J., Pennec, X., Ayache, N.: Virtual reality and augmented reality in digestive surgery. In: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 278–279. IEEE (2004)
25. Kalkofen, D., Reitinger, B., Risholm, P., Bornik, A., Beichel, R., Schmalstieg, D., Samset, E.: Integrated medical workflow for augmented reality applications. In: International Conference on Medical Image Computing and Computer Assisted Intervention (2006)
26. Squire, L.R.: Declarative and nondeclarative memory: multiple brain systems supporting learning and memory. J. Cogn. Neurosci. 4(3), 232–243 (1992)
27. Dharani, K.: The Biology of Thought: A Neuronal Mechanism in the Generation of Thought - A New Molecular Model, pp. 53–74. Academic Press, Cambridge (2014)
28. Morris, R.G.: Dementia and the functioning of the articulatory loop system. Cogn. Neuropsychol. 1(2), 143–157 (1984)
29. Mayes, A.R.: Learning and memory disorders and their assessment. Neuropsychologia 24(1), 25–39 (1986)
30. Chiu, M.-J., Chen, T.-F., Yip, P.-K., Hua, M.-S., Tang, L.-Y.: Behavioral and psychologic symptoms in different types of dementia. J. Formos. Med. Assoc. 105(7), 556–562 (2006)
31. Montine, T.J., Cholerton, B.A., Corrada, M.M., Edland, S.D., Flanagan, M.E., Hemmy, L.S., White, L.R.: Concepts for brain aging: resistance, resilience, reserve, and compensation. Alzheimers Res. Ther. 11(1), 22 (2019)
32. Alexopoulos, G.S., Meyers, B.S., Young, R.C., Mattis, S., Kakuma, T.: The course of geriatric depression with “reversible dementia”: a controlled study. Am. J. Psychiatry 150, 1693 (1993) 33. Arlt, S., Lindner, R., Rösler, A., von Renteln-Kruse, W.: Adherence to medication in patients with dementia. Drugs Aging 25(12), 1033–1047 (2008) 34. Raina, P., Santaguida, P., Ismaila, A., Patterson, C., Cowan, D., Levine, M., Booker, L., Oremus, M.: Effectiveness of cholinesterase inhibitors and memantine for treating dementia: evidence review for a clinical practice guideline. Ann. Intern. Med. 148(5), 379–397 (2008) 35. Blattgerste, J., Renner, P., Pfeiffer, T.: Augmented reality action assistance and learning for cognitively impaired people: a systematic literature review. In: PETRA, pp. 270–279, June 2019 36. Viglialoro, R.M., Condino, S., Turini, G., Carbone, M., Ferrari, V., Gesi, M.: Review of the augmented reality systems for shoulder rehabilitation. Information 10(5), 154 (2019)
Software Vulnerability Mining Based on the Human-Computer Coordination

Jie Liu1,2(&), Da He1, Yifan Wang1, Jianfeng Chen1,2, and Zhihong Rao1,2

1 China Electronic Technology Cyber Security Co., Ltd., Chengdu 610041, China
[email protected], [email protected], [email protected], [email protected]
2 Cyberspace Security Key Laboratory of Sichuan Province, Chengdu 610041, China
Abstract. In recent years, the increasing size and complexity of software packages has made vulnerability mining progressively more difficult and challenging. Theoretical research on, and systematic practice of, traditional software vulnerability mining systems emphasize models and data; increasingly, however, the mining procedure requires the participation of human vulnerability miners. To address this issue, this paper argues, from a human-centered perspective, that the role of the human should be highlighted in the system building process. Aiming to solve software vulnerability mining tasks by integrating the natural intelligence of humans into the system, it treats the people who participate in vulnerability-mining activities as components of the system. This paper proposes a vulnerability mining system architecture based on human-computer coordination, then designs the workflow of a vulnerability mining task based on human-computer coordination, and finally designs a task-solving strategy based on human-computer coordination for the fuzz testing scenario. Keywords: Human-computer coordination · Fuzz testing · Software vulnerability mining · Workflow
1 Introduction

In the new information environment, software is growing ever larger and more complex, and software-related security issues affect individuals, organizations and society. As the core of software security, software vulnerability mining is receiving more and more attention from researchers [1]. At the same time, the problem of software vulnerability mining is becoming more complicated: the contradiction between the limitations of traditional vulnerability mining methods and the efficiency requirements of vulnerability mining is increasingly prominent [2]. In particular, traditional vulnerability mining technology often proves "incapable" in situations where human vulnerability miners are becoming more deeply involved. This has led us to think deeply about traditional vulnerability mining methods and system performance, and it poses new challenges to the development of traditional vulnerability mining technology.
Since the early 1990s, many researchers and designers have paid attention to the relationship between humans and systems and carried out a series of studies on human-computer interaction systems for problem solving [3]. With the continuing in-depth study of related theory and system practice, many researchers have reached a consensus that human and computer should be coordinated so that sufficient human participation improves system efficiency. Researchers initially explored the mechanism of human-computer integration and put forward many ideas and theories about the relationship between humans and systems (such as "human-computer intelligent systems", "human-computer integration" and "human-computer interaction") [4, 5]. At the same time, researchers have studied many conceptual forms and generation techniques (such as "SoftMan", "agent" and "artificial life") [6]. Driven by software vulnerability mining task solving, this paper proposes the concept of "human as a service" and reconstructs the architecture of the software vulnerability mining system from technical and non-technical perspectives. Following the idea and basic principles of human-computer coordination, the capability of human vulnerability miners is "immersed" into the software vulnerability mining task solution as a component of the system (a human service). This paper designs the overall architecture of a software vulnerability mining system based on human-computer coordination. Finally, fuzz testing of software vulnerabilities is taken as a specific application scenario, and the coordination strategy in the vulnerability mining process is studied.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 532–538, 2020. https://doi.org/10.1007/978-3-030-39512-4_83
2 Human-Computer Coordination Mechanism

This paper establishes a human-computer coordination mechanism. The framework implements flexible switching between human services and software services, while hiding the differences between them when conducting software vulnerability mining tasks. The mechanism is implemented by the combination of a service modeling layer, a service assembly layer and a process execution layer, as shown in Fig. 1. The service modeling layer accepts the target input by the user. The dispatcher of the service assembly layer then obtains the software vulnerability mining workflow description from the target receiver, parses it and dispatches it. The dispatcher also queries the corresponding services from the registration server and generates the abstract workflow. The workflow is then instantiated by the interpreter of the process execution layer. After instantiation, the abstract workflow is transformed into the execution workflow, which is passed to the solver for execution. In the software vulnerability mining task-solving process, the human miner is efficiently integrated into the workflow through the adapter. The programmable part forms the human service, and the unprogrammable part forms the human-computer
Fig. 1. Vulnerability mining mechanism based on human-computer coordination
interaction service. When a human-computer interaction service is called a certain number of times (i.e., very frequently), it can be programmed (fixed) to form a human service; likewise, when a human service is called a certain number of times, it can be programmed (fixed) to form a software service. These rules enable the transformation of service forms from human-computer interaction services to human services, and from human services to software services. If the service is a human service, the query to the registration server yields a specific human service entity, and the solver uses the task list manager to assign tasks to that entity. The task list manager receives the human service entity from the solver, including its unique identifier, associated task instance and process instance identifiers, status, executor identifier, priority, start time, creation time, submission time, deadline and other attributes. The task list manager is responsible for generating a task list for the corresponding executor according to the human service entity object sent from the solver, and for setting related information such as priority and time reminders. After the human service entity is loaded, it provides the vulnerability mining task-solving service according to its task list. After a task is completed, the results are submitted to the solver, and the task list manager reports the task completion status and archives the task. If the service is a software service, the query result returned by the registration server is the binding information of the software service. The solver then generates the corresponding service information based on this binding information and the input, including the unique identifier, service type, associated task instance and process instance identifiers, status, binding information, input, output, QoS, etc. The solver then generates a
software service call client to call the appropriate software service, which returns the result and execution status to the solver after the call ends. The dispatcher changes the state of the task instance based on the service results (human service and software service) returned by the solver. After the dispatcher coordinates the tasks, all participants in the process are treated as services in the registration server for further queries. The dispatcher completes the process through state transitions between tasks; changes in state are driven by the results returned by the solver. The dispatcher passes the binding information and task instance information to the solver and waits for the service to complete asynchronously; the task instance state is then changed according to the returned result and status, completing the service process. When an exception occurs, such as a resource failure or timeout, the dispatcher re-queries for a corresponding service, binds the new service, and continues to execute as before. According to the task requirements and the corresponding scheduling policy, the dispatcher bounds the number of reschedulings; if this number is exceeded, the error handling mechanism is triggered and the dispatcher produces a new execution plan.
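The dispatcher's query-bind-execute loop with bounded rescheduling can be sketched as follows. The registry contents, service names and the `max_reschedules` policy value are illustrative assumptions, not interfaces defined by the system; callables stand in for real service endpoints.

```python
def make_registry():
    """Hypothetical registration-server content: service name -> list
    of candidate bindings returned by successive queries."""
    def flaky(task):             # simulates a resource failure / timeout
        raise TimeoutError(task)
    def stable(task):
        return f"done({task})"
    return {"fuzz_run": [flaky, stable], "doomed": [flaky, flaky, flaky]}

def dispatch(registry, name, task, max_reschedules=2):
    """Bind a service, execute it via the solver, and on failure
    re-query and rebind, up to the scheduling-policy limit."""
    attempts = 0
    for binding in registry[name]:      # re-query / rebind on each failure
        try:
            return binding(task)        # solver calls the bound service
        except (TimeoutError, RuntimeError):
            attempts += 1
            if attempts > max_reschedules:
                break                   # error handling mechanism triggered
    return f"new-plan({task})"          # dispatcher builds a new execution plan

registry = make_registry()
print(dispatch(registry, "fuzz_run", "t17"))  # done(t17)
print(dispatch(registry, "doomed", "t18"))    # new-plan(t18)
```

The design point is that rescheduling is transparent to the caller: whether the first binding or a rebound one completes the task, the solver sees a single result, matching the asynchronous state-transition scheme described above.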
3 Task Execution Process

Based on a service-oriented architecture, the software vulnerability mining process is componentized into a series of coarse-grained services and task-processing procedures. At some phase of the software vulnerability mining task, working node A (a subtask) receives the task information and performs its job with the associated service. The result of the subtask is assumed to meet the target requirements.
Fig. 2. The task execution process
J. Liu et al.
Then, the relevant information is transmitted to working node B (a subtask). B first judges whether the obtained software service and execution results can meet the target requirements. If so, the process continues and turns to working node C, and so on. If not, B invokes the Human-Service Trigger (HST) to determine whether the subtask to be executed matches a human service. B can encounter two situations: (1) If it matches, B calls the human service resource in the service library, and the subtask is executed with a human service or human-computer interaction service. Then the process continues. (2) If not, B needs to be executed by a joint service of software service and human service. The system decomposes task B into B1 (a software task) and B2 (a human task). B2 gains the information (or knowledge) by triggering the HST. The subtask is then completed by the joint service of B1 (software service) and B2 (human service). After the result of the execution meets the target requirement, the process continues and turns to working node C, as shown in Fig. 2.
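The decision logic of a working node can be sketched as follows. This is a hedged illustration of the flow just described (software service first, then the HST check, then decomposition into a B1/B2 joint service); the service libraries and the `decompose` and `meets_target` callables are hypothetical stand-ins for the paper's components.

```python
def execute_node(subtask, software_lib, human_lib, decompose, meets_target):
    # Step 1: try the software service bound to this subtask, if any.
    fn = software_lib.get(subtask)
    if fn is not None:
        result = fn(subtask)
        if meets_target(result):
            return result
    # Step 2 (HST): check whether a human service matches the subtask.
    fn = human_lib.get(subtask)
    if fn is not None:
        return fn(subtask)   # human or human-computer interaction service
    # Step 3: joint service -- decompose into B1 (software) and B2 (human).
    b1, b2 = decompose(subtask)
    info = human_lib[b2](b2)            # B2 gains information via the HST
    return software_lib[b1](b1, info)   # B1 finishes with B2's knowledge

# Toy run: the software service for "B" falls short, no pure human service
# matches, so "B" is decomposed into B1/B2 and solved jointly.
software = {"B": lambda t: "partial", "B1": lambda t, info: f"done({info})"}
human = {"B2": lambda t: "expert-knowledge"}
out = execute_node("B", software, human,
                   decompose=lambda t: ("B1", "B2"),
                   meets_target=lambda r: r == "ok")
# out == "done(expert-knowledge)"
```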
4 Human-Computer Coordination for Fuzz Testing

Here, the fuzz testing method [7] of software vulnerability mining is taken as a specific application scenario. The vulnerability mining process is divided into several phases, and each phase contains several subtasks. The software fuzz testing task based on human-computer coordination is composed of software services, human services, hybrid services, and direct human-computer interaction services. The human-computer coordination strategy is as follows:

Step 1: Initialize the fuzz testing task FT.
Step 2: Let FT = {Ph1, Ph2, …, Phn}, where Phi denotes each phase and n = 7.
Step 3: Starting from i = 1, determine whether the subtask set included in Phi matches a service in the software service library. If it does, store it in the buffer library and go to Step 4; otherwise go to Step 5.
Step 4: If i = n, go to Step 8; otherwise i++ and go to Step 3.
Step 5: Determine whether it matches the human service library. If it does, store it in the buffer library and go to Step 4; otherwise go to Step 6.
Step 6: Determine whether a joint service of human service and software service matches. If it does, store it in the buffer library and go to Step 4; otherwise go to Step 7.
Step 7: Submit the subtask to the direct human-computer interaction service interface. After the interface returns a confirmation message for the subtask, store it in the buffer library and go to Step 4.
Step 8: The whole fuzz testing planning is finished. The results are comprehensively sorted, and the human-computer coordination strategy is output.

The procedure of the human-computer coordination strategy is shown in Fig. 3.
Fig. 3. The procedure of the human-computer coordination strategy
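The eight steps above amount to a priority-ordered matching loop over the phases (software service, then human service, then joint service, then direct interaction). A minimal Python sketch, with hypothetical library and callback names, might look like this:

```python
def plan_fuzz_testing(phases, software_lib, human_lib, joint_match, hci_submit):
    """Steps 1-8: match each phase's subtasks against the service libraries
    in priority order (software > human > joint > direct interaction)."""
    buffer_lib = []                      # Step 1: initialize the plan
    for phase in phases:                 # Steps 3/4: iterate i = 1..n
        for subtask in phase:
            if subtask in software_lib:          # Step 3: software service
                buffer_lib.append((subtask, "software"))
            elif subtask in human_lib:           # Step 5: human service
                buffer_lib.append((subtask, "human"))
            elif joint_match(subtask):           # Step 6: joint service
                buffer_lib.append((subtask, "joint"))
            elif hci_submit(subtask):            # Step 7: direct HCI interface
                buffer_lib.append((subtask, "hci"))
    return sorted(buffer_lib)            # Step 8: sort and output the strategy

# Toy phases of a fuzz testing task (subtask names are invented):
plan = plan_fuzz_testing(
    phases=[["parse_format"], ["mutate_input", "triage_crash"]],
    software_lib={"parse_format", "mutate_input"},
    human_lib={"triage_crash"},
    joint_match=lambda s: False,
    hci_submit=lambda s: True,
)
# plan == [('mutate_input', 'software'), ('parse_format', 'software'),
#          ('triage_crash', 'human')]
```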
5 Conclusion

The traditional software vulnerability mining method is often ineffective and struggles to improve the level of intelligence of the task solving environment. This paper re-examines the structure of the vulnerability mining system and the task solving process from a "human-oriented" perspective, and suggests that the role of human beings should be highlighted in the process of system construction and task solving. This paper proposes a vulnerability mining system architecture based on human-computer coordination. Then, it designs the flow of the vulnerability mining task based on human-computer coordination. Finally, it designs a task solving strategy based on human-computer coordination for the fuzz testing scenario. This paper only discusses the basic problems of human-computer coordination in the software vulnerability mining system. In future work, several theoretical and key technical problems remain to be solved: (1) key technologies such as human service self-learning and adaptive data acquisition, learning modes, and learning algorithms; (2) theories and key technologies for improving efficiency, credibility, and practicality in the automatic selection, matching, combination, and invocation of services (the joint service of human service and software service).

Acknowledgments. This work is supported by National Key R&D Program of China No. 2017YFB08029 and by Sichuan Science and Technology Program No. 2018GZ0101.
References
1. Shahriari, H.R.: Software vulnerability analysis and discovery using machine-learning and data-mining techniques: a survey. ACM Comput. Surv. 50(4), 1–36 (2017)
2. Liu, J., He, D., Rao, Z.H.: An analysis model of buffer overflow vulnerability based on FSM. In: 2nd International Conference on Geoinformatics and Data Analysis (ICGDA), Prague, pp. 47–51. ACM (2019)
3. Woods, D.D., Roth, E.M., Bennett, K.: Explorations in joint human-machine cognitive systems. In: Cognition, Computing and Cooperation, pp. 123–158 (1990)
4. Jones, P.M., Chu, R.W., Mitchell, C.M.: A methodology for human-machine system research: knowledge engineering, modeling and simulation. IEEE Trans. Man Cybern. 25(7), 1025–1038 (1995)
5. Barthelemy, J.P., Bisdorff, R., Coppin, G.: Human centered processes and decision support systems. Eur. J. Oper. Res. 136(2), 233–252 (2002)
6. Grudin, J., Carroll, J.M.: From tool to partner: the evolution of human-computer interaction. In: Extended Abstracts of the CHI Conference (2017)
7. Kim, H.C., Choi, Y.H., Dong, H.L.: Efficient file fuzz testing using automated analysis of binary file format. J. Syst. Architect. 57(3), 259–268 (2011)
Design and Verification Method for Flammable Fluid Drainage of Civil Aircraft Based on DMU Yu Chen(&) COMAC Shanghai Aircraft Design and Research Institute, 5188 Jinke Road, Pudong New District, Shanghai, China [email protected]
Abstract. Due to the complex internal environment and the numerous system equipment and pipelines, the civil aircraft has many fire safety feature elements, and the configuration of these feature elements needs to be managed and controlled during the design iteration process. However, the traditional methods and means have not met the needs of aircraft development. This paper proposes the use of digital methods and means to construct a "DMU based on flammable fluid drainage", and analyzes and summarizes the key technologies of DMU modeling and simulation. At the same time, according to the method of system engineering, the design and verification method of flammable fluid drainage based on DMU is studied.

Keywords: Fluid drainage · Flammable fluid · System engineering · DMU · Civil aircraft
1 Introduction

Aircraft fire safety is a safety concern of airworthiness authorities, aircraft manufacturers and operators. In the civil aviation field, according to the ICAO Safety Report [1] and ASN accident statistics, aircraft fire accidents accounted for an average of 8% of total accidents per year from 2005 to 2014. The main design measures for civil aircraft fire protection include isolation, ventilation, drainage, detection, etc. Drainage is a major measure for fire protection against flammable fluids and is an important area for aircraft fire safety research. Civil aircraft have fuel tanks and pipelines for storing flammable fluids; these are scattered throughout the aircraft and may leak flammable fluids during storage and use. The drainage of flammable fluids for civil aircraft is a measure taken in order to comply with airworthiness regulations [2]; the main consideration is the fire prevention of flammable fluids to avoid serious aviation accidents. Fuel, ignition source and oxidizer are the three elements of aircraft fire. According to the controlled state of these three elements, the aircraft environment zones can be roughly divided into the following five categories: designated fire zone, fire zone, flammable zone, flammable fluids leakage zone and non-hazardous zone.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 539–545, 2020. https://doi.org/10.1007/978-3-030-39512-4_84

For the fire zone and the flammable fluids leakage zone on the aircraft, there is a risk of flammable fluids leakage, and ignition sources and oxidizers are also present, so fire-prevention design for flammable fluids is an important factor in these two zones.
2 Difficulties in Flammable Fluids Drainage Design

Civil aircraft enterprises generally adopt the "Guidelines for the Development of Civil Aircraft and Systems" (SAE ARP 4754A) [3] to guide the research and development of commercial aircraft products, ensuring that the development of civil aircraft can meet the requirements of airworthiness certification. Due to the complex environment of civil aircraft, the integration work for the aircraft is very complicated. Therefore, in the design process, controlling the feature elements of aircraft fire prevention based on the traditional design method and process requires a lot of manpower and workload, and is extremely difficult. For the flammable fluids drainage design, new tools, methods or means must therefore be adopted to ensure that the process and results of the design based on fire protection requirements are accurate and efficient. With the promotion and application of digital definition and collaboration technology in aircraft design, design efficiency has been greatly improved [4]. The development of DMU generally focuses on the collaborative function of product modules based on product configuration management, and customizes the DMU with the aim of real-time synergy between products and DMU. However, development and application for specific functions and customized users remain rare. Therefore, the following discussion mainly explores methods for the design and verification of flammable fluids drainage by means of corresponding digital tools. Furthermore, it also studies the methods and approaches of applying digital design means in fire prevention design from the perspective of the life cycle of the aircraft and throughout the development process.
3 Design and Verification of Flammable Fluids Drainage

The fluids drainage design is mainly based on the needs of fire protection design. Through reasonable engineering design, the flammable fluids are drained out of the fuselage, and the drain path is ensured to be safe, so that it will not cause a fire risk or affect the function of other system equipment in other zones of the aircraft. The design of aircraft flammable fluids drainage should be based on digital design and collaborative methods to carry out the design iteration. The DMU is used to analyze and simulate the design scheme of the flammable fluids drainage of the aircraft to reduce the losses of schedule delay and cost caused by design changes. The rationality of the design scheme of the aircraft flammable fluids drainage needs to be verified by the aircraft drainage ground test and flight test, and the design of drainage based on the DMU is optimized according to the verification results.
The design and verification of aircraft flammable fluids drainage based on DMU contains the following four stages of work.

In the requirements analysis and concept demonstration stage: define the fluid types, systems and areas involved in the aircraft fluid drainage according to the top-level fire protection requirements and the aircraft drainage requirements. The aircraft is divided into different zones by digital methods, and corresponding drainage requirements are set for different zones, which are used to guide and constrain system scheme design.

In the preliminary design stage: carry out preliminary drainage design of structure and system through identification of the drainage requirements of each zone.

In the detailed design stage: based on the detailed design of structure and system, gradually clarify the geometric parameters of the fire safety feature elements and conduct leak source and drain path analysis and design iteration. Verify the feasibility of the drainage proposal by using DMU tools.

In the trial production and certification stage: plan and implement aircraft drainage ground tests and flight tests [5], and compare the test results with the digital simulation results to optimize the design.
4 Model Construction

Based on the definition of MBD [6] and the configuration management of DMU, the aircraft drainage DMU is constructed for the design and verification of aircraft drainage. The aircraft's drainage DMU consists of a zone classification model, a flammable material leak source model, a leak source drip model, an ignition source model, a drain location model and a drain path prediction model. Among them, the flammable material leak source model and the leak source drip model constitute the flammable fluid control model. The leak source drip model, the ignition source model and the drain location model constitute the environment of the drain path prediction model. The drain path prediction model is used to verify that the flammable fluid control meets the top-level requirements of the aircraft. The drainage DMU is shown in Fig. 1.

Fig. 1. Composition of the aircraft's drainage DMU

4.1 Zone Classification Model
According to the fire prevention requirements, the aircraft is divided into five zone categories; see Sect. 1 of this article for details. The zone classification model can be obtained by defining the boundaries of the aircraft according to the division of the five zone categories.

4.2 Flammable Material Leak Source Model
Most of the flammable material leak source models are related to the location and form of the pipeline joints. For zones with a high fire protection level, such as the E-E bay, it is generally necessary to avoid the use of joints that may leak flammable materials. In the flammable fluids leakage zone, there are many leak sources due to maintainability, separation interfaces, and the installation and connection of pipelines. In general, the large number of hydraulic pipe joints on the aircraft are leak sources. The flammable material leak source model can be represented with points, as shown in Fig. 2.
Fig. 2. Diagram of the flammable material leak source model
4.3 Leak Source Drip Model
With the improvement of the manufacturing technology of aircraft piping systems and joints, the failure modes of hydraulic and fuel pipelines are generally drip or leakage. The leak source drip model simulates the motion trajectory after flammable material leakage, and can be represented with lines or curves. The leak source drip model is highly related to the aircraft's attitude. The pitch angle, yaw angle and roll angle are three important parameters that characterize the attitude of the aircraft, and are also an important basis for the establishment of the leak source drip model. According to the attitude profile of the aircraft in the body coordinate system OXbYbZb, the envelope model of the influence range of dripping after a fluid failure can be obtained, as shown in Fig. 3.
Fig. 3. Schematic diagram of the aircraft attitude
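As a hedged illustration of how attitude enters the drip model: if drips are assumed to fall along local gravity, the drip direction in the body axes OXbYbZb depends only on pitch and roll, and sweeping an attitude profile yields an envelope of candidate trajectories. The function names and the straight-line trajectory assumption below are illustrative simplifications, not the paper's envelope model.

```python
import math

def gravity_in_body_frame(pitch, roll):
    """Unit gravity ("drip") direction in the body axes for a given attitude
    (angles in radians); yaw does not change the direction of gravity."""
    return (-math.sin(pitch),
            math.sin(roll) * math.cos(pitch),
            math.cos(roll) * math.cos(pitch))

def drip_envelope(leak_point, attitudes, length=1.0):
    """End points of straight-line drip trajectories of a given length,
    swept over an attitude profile [(pitch, roll), ...]."""
    x0, y0, z0 = leak_point
    return [(x0 + length * gx, y0 + length * gy, z0 + length * gz)
            for gx, gy, gz in (gravity_in_body_frame(p, r)
                               for p, r in attitudes)]
```

For level flight (zero pitch and roll) the drip direction is straight down the body Z axis; a non-zero pitch tilts the drip forward or aft, which is why the attitude profile widens the envelope of the influence range.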
4.4 Ignition Source Model
In most cases, the ignition sources are hot system surfaces, hot fluid leaking during system failures, and arcs/sparks generated during system failures. The ignition source model is usually represented by a surface or a body part, which can be extracted from the surface geometry of the product.

4.5 Drain Location Model
The fluids generated on the aircraft must be drained to the outside through drain outlets on the fuselage surface. There are two main forms of overboard drain outlet: the drain hole and the drain mast. These drain outlets also require the construction of a corresponding drain location model.

4.6 Drain Path Prediction Model
The drain path prediction model mainly refers to the path channel model of the process from the leak source to the final overboard drainage point. Since corrosion prevention and other factors are part of the structural design requirements, it is necessary to ensure that fluids do not accumulate on the structure. For the design of the structure-based drain channel, the main task is to ensure that the path from the fluids to the structural drainage channel is feasible, as shown in Fig. 4.

Fig. 4. Structure-based drain channel
4.7 Design Integration Based on Drainage DMU
The drainage DMU is gradually improved and optimized as the design is refined and iterated. Analysis and coordination based on the DMU are carried out to gradually improve the design maturity. In the top-level planning, preliminary identification should be carried out according to the system function and architecture. The planning and definition of space allocation should be done as much as possible in the preliminary design. For example, EWIS wiring is an ignition source; therefore, when the pipeline route is planned, the relative space should be arranged reasonably, and in general EWIS channels should not be routed below pipelines. Since the leak sources are mainly fuel and hydraulic pipelines, it is essential to properly define the location of the leak sources when making the space allocation of the flammable fluids system piping.

4.8 Validation and Verification Based on Drainage DMU
Finally, there are two main contents for the validation and verification based on the DMU. One is whether there is an ignition source on the flammable fluids leakage path, and the other is whether the flammable fluid path to the outside of the aircraft is unobstructed. If the drain path does not meet the requirements, a changed or optimized drainage scheme is required. The design optimization is carried out through various ways and measures, such as joint position adjustment, pipeline layout optimization, ignition source position adjustment and installation of isolation protection. It is necessary to plan the external drainage scheme of the aircraft, and to define the corresponding external fluids drainage points and drainage forms based on the external fluids control requirements. At the same time, in order to evaluate the influence of the design of the external drain device on the drainage trajectory and the aircraft fuselage, it is necessary to analyze and calculate the external flow field at the drainage point, as shown in Fig. 5.
Fig. 5. Simulation and verification of fluids drainage
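The first check, whether an ignition source lies on a flammable fluid leakage path, can be illustrated with a simple geometric test. The sketch below samples points along a straight drip path and tests them against axis-aligned boxes standing in for ignition-source volumes; this is an illustrative simplification, as a real DMU check would operate on the exact CAD geometry.

```python
def point_in_box(p, box):
    """Axis-aligned box test; box = ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = box
    x, y, z = p
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

def path_hits_ignition_source(path_start, path_end, ignition_boxes, samples=100):
    """True if a straight drip path passes through any ignition-source box."""
    x0, y0, z0 = path_start
    x1, y1, z1 = path_end
    for i in range(samples + 1):
        t = i / samples
        p = (x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0))
        if any(point_in_box(p, box) for box in ignition_boxes):
            return True
    return False
```

If the check fires, the scheme would be revised by the measures listed above (joint position adjustment, pipeline layout optimization, ignition source relocation or isolation) and the check repeated.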
5 Conclusion

According to the requirements of aircraft flammable fluid drainage, this paper studies the work required for the flammable fluids drainage based on DMU in various stages of aircraft development by means of system engineering. In addition, in the aspects of
model construction, integration and simulation, the design and verification ideas of drainage DMU are gradually improved and refined, so that the development and application of the drainage DMU can fully support the design and verification of aircraft flammable fluids drainage.
References 1. International Civil Aviation Organization. ICAO Safety Report, 2014 edition. Montreal: International Civil Aviation Organization (2014) 2. FAR Part 25 Airworthiness Standards Transport Category Airplanes 3. SAE ARP 4754A, Guidelines for Development of Civil Aircraft and System. Warrendale: Society of Automotive Engineers (2010) 4. Mitchell, M.T., Jianxin, J., Chuan-Jun, S.: Virtual prototyping for customized product development. J. Integr. Manuf. Syst. 9(6), 334–343 (1998) 5. Flammable fluid fire protection. AC25.863-1draft. U.S. Department of Transportation: Federal Aviation Administration. ANM-110 6. Y14.41-2003, Digital Production Definition Data Practices
Low-Income Dwelling Bioclimatic Design with CAD Technologies. A Case Study in Monte Sinahí, Ecuador Jesús Rafael Hechavarría Hernández(&), Boris Forero, Robinson Vega Jaramillo, Katherine Naranjo, Fernanda Sánchez, Billy Soto, and Félix Jaramillo School of Architecture and Design, Catholic University of Santiago de Guayaquil, Av. Carlos Julio Arosemena Km 1 ½, Guayaquil, Ecuador {jesus.hechavarria,boris.forero,robinson.vega, maria.naranjo07,fernanda.sanchez01,billy.soto, felix.jaramillo}@cu.ucsg.edu.ec
Abstract. The present work is the result of an inclusive low-income dwelling design for a family at Monte Sinahí, one of the most economically disadvantaged settlements in Guayaquil, exposed to high vulnerability risks. Students and professors of the Faculty of Architecture and Design of the Catholic University of Santiago de Guayaquil designed a housing prototype in a systemic way, with bioclimatic criteria and the use of ecomaterials. The project includes the universal accessibility required by a family consisting of an older adult with a motor disability and a young man with Down's syndrome. This archetype is a partial result of a winning project of the National Program of Financing for Research INEDITA in the area of Energy and Materials research of the Institutional Modality, organized by the Secretariat of Higher Education, Science, Technology and Innovation (SENESCYT) and the United Nations Development Program (UNDP) in Ecuador.

Keywords: Universal access · Inclusive design · Low-income dwelling · Ecomaterials
1 Introduction

The provision of housing demands answers to the increase of the population of Ecuador. According to INEC census data for the year 2010 [1], it is estimated that the population will increase by 180% by 2050. In Guayaquil, the population will increase by 155%. This analysis was carried out considering a national growth rate of 0.0155 and a regional one of 0.011 on average, according to the projection data of the Ecuadorian population by calendar years [2]. In this scenario, Ecuadorian institutions must continue to promote technological independence in various productive sectors, such as the design and production of homes, which must be coupled with the development of human talent, technology and knowledge. In the cases studied, it was evident that the desire to occupy as much of the land as possible brought with it space configurations that are not the most recommended for the warm-humid climate of Guayaquil [3].

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 546–551, 2020. https://doi.org/10.1007/978-3-030-39512-4_85

These
cases were characterized by poor or absent cross ventilation. To this is added the use of construction materials found on site, characterized by a high thermal transmittance of their enclosures, such as zinc roofs, and the absence of ceilings to dampen the transfer of heat into the dwelling [4]. The main results obtained are related to the proposal of graphic and mathematical methods to analyze the dimensional behavior of residential spaces in Guayaquil [2], the user's perception of their home and the environment in social interest programs [5], the analysis of the effect of thermal comfort in low-cost settlements [6], the systemic approach to design as a fundamental element in multiobjective analysis [4], and the use of bioclimatic strategies in the design of popular dwelling units. On the other hand, the use of software for geometric modeling of the building and for thermal and load simulations is a practice experienced by the work team [7], which favors the acquisition of new knowledge for the development of the investigation. In addition, inclusive design should consider the needs of people with limited accessibility capabilities. While modeling the behavior of dwellers with disabilities is a complex task, it has a remarkable social and economic impact. To this end, some authors explore neutrosophic cognitive maps to represent the behavior and functioning of complex systems [8]. The Catholic University of Santiago de Guayaquil addresses the Sustainable Habitat through the planning of access to housing. The project called "Design and construction of a sustainable archetype of housing of social interest for inhabitants of Monte Sinahí, Guayaquil, Ecuador" aims to contribute to the design of social interest housing units through the construction of archetypes for the inhabitants of popular settlements, with the use of ecomaterials and bioclimatic strategies that improve thermal comfort and enhance environmental care.
This paper highlights the procedure followed during the design stage considering the climate and dwelling information from databases related to the geographical area of Monte Sinahí and field data gathering.
2 Collection of the Bioclimatic Information

The collection of the bioclimatic information of the geographical area where Monte Sinahí is located begins with the installation of the dataloggers [9] and culminates with the global and diffuse radiation data offered by the Meteonorm software [10]. The characterization of the climate of the study area is concluded by analyzing the climatic variables that allow the construction of the psychrometric chart of the region where the architectural project will be located, in order to define the requirements of bioclimatic design and its strategies and tactics to reach thermal comfort ranges.
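One building block of the psychrometric chart mentioned above is the humidity ratio at a given dry-bulb temperature and relative humidity. The sketch below uses a standard Magnus-type approximation for saturation vapour pressure (Alduchov-Eskridge coefficients); it is textbook psychrometrics, not a formula taken from the paper, and the example conditions are illustrative.

```python
import math

def saturation_pressure_pa(temp_c):
    """Saturation vapour pressure over water (Pa), Magnus-type approximation
    with Alduchov-Eskridge coefficients."""
    return 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def humidity_ratio(temp_c, rel_humidity, pressure_pa=101325.0):
    """Humidity ratio w (kg water vapour / kg dry air) -- one point of the
    psychrometric chart -- from dry-bulb temperature and relative humidity."""
    pv = rel_humidity * saturation_pressure_pa(temp_c)
    return 0.622 * pv / (pressure_pa - pv)

# e.g. a warm-humid afternoon such as Guayaquil's climate: 30 degC, 70% RH
w = humidity_ratio(30.0, 0.70)   # about 0.0188 kg/kg
```

Plotting such points against the measured datalogger series is one way to locate the dwelling's conditions relative to the thermal comfort zone on the psychrometric chart.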
3 Analysis of the Housing Information

The design of an archetype of low-income dwelling must be carried out in a systemic way in which the social, economic and environmental context is analyzed, thus characterizing the territory where the project will be implemented and the users of the home, who are protagonists in the decision making. Based on the requirements established under
multiple criteria, the multidisciplinary team is responsible for defining the requirements, strategies, tactics and techniques for the bioclimatic design of the archetype with the purpose of guaranteeing thermal comfort inside the dwelling at Monte Sinahí (Fig. 1).
Fig. 1. Working sessions with multidisciplinary teams to meet project requirements.
In the prototype's design, ceiling insulation is considered. This allows a significant decrease in indoor temperature, as demonstrated in other investigations [6]. The openings of the roof system were designed to establish a longitudinal flow for the ventilation of the interior spaces, together with the garden and the orientation of the winds. Having spaces whose facades have openings towards exterior spaces allows optimum ventilation of interior spaces and the extraction of heat generated by internal and external gains (Fig. 2).
Fig. 2. Sketches of architectural design process that meets multiple criteria.
Figure 3 shows the component elements of the archetype listed below:
1. Roof design with facade openings that allows convective cooling.
2. Polished textured metal roof with a high albedo index.
3. Mobile panels designed to sift natural light.
4. Door and window designs that allow ventilation and visual control.
5. Flexible panels that bring spatial and visual integration with the garden.
Fig. 3. Archetype components details.
4 Digitalization of the Archetype Design

The digitalization process was carried out with version 22 of ArchiCAD under the BIM work philosophy [11], which allows better quality in the development of the assembly and greater precision in the elaboration of the technical drawings necessary for the manufacture of the ecomaterials to be used in the construction of the dwelling (Figs. 4 and 5).
Fig. 4. Digitalization process of dwelling design.
Fig. 5. Use of ArchiCAD through the BIM work philosophy.
The use of eaves is based on the solar orientation, allowing shadows in the hours near mid-morning and mid-afternoon, when there is a higher incidence of radiation. Main ideas are expressed in sketches for a better understanding of the spatial and dimensional relationships and the proportions that the interior spaces of the archetype must have. This is done through a cyclical process in which technical and humanistic criteria are considered until reaching a definitive design.
5 Discussion

The meteorological information obtained through the Meteonorm software [10] and the dataloggers [9] located inside the house allows the analysis of requirements and strategies during the bioclimatic design process of the archetype. In order to ensure thermal comfort inside the home, tactics and techniques are established as described above. The spatial modeling of the archetype with the help of ArchiCAD allows the visualization of the component elements of the dwelling, as well as the detailed description of the technical drawings necessary for the subsequent construction.

Acknowledgments. The authors wish to thank the Catholic University of Santiago de Guayaquil and the Secretariat of Higher Education, Science, Technology and Innovation in Ecuador for the financial support for this research; the United Nations Development Program (UNDP) for the excellence in management; Graphisoft Ecuador for the professionalism in teaching version 22 of ArchiCAD; and priests Luis Enriquez and Antonio Martinez and Mrs. Gloria and her son for their human quality.
References
1. NISC: National Institute of Statistics and Censuses in Ecuador. https://www.ecuadorencifras.gob.ec/estadisticas/. Accessed 19 Aug 2019
2. Almeida, B., Díaz, J., Bermeo, P., Hechavarría, J., Forero, B.: An alternative graphic and mathematical method of dimensional analysis: its application on 71 constructive models of social housing in Guayaquil. In: Ahram, T. (ed.) Advances in Artificial Intelligence, Software and Systems Engineering. AHFE 2019. Advances in Intelligent Systems and Computing, vol. 965, pp. 598–607. Springer, Cham (2020)
3. Forero, B., Hechavarría, J., Vega, R.: Bioclimatic design approach for low-income dwelling at Monte Sinahí, Guayaquil. In: Di Bucchianico, G. (ed.) Advances in Design for Inclusion. AHFE 2019. Advances in Intelligent Systems and Computing, vol. 954, pp. 176–185. Springer, Cham (2020)
4. Hechavarría, J., Forero, B., Bermeo, P., Portilla, Y.: Social inclusion: a proposal from the University of Guayaquil to design popular housing with citizen participation. In: 2018 World Engineering Education Forum – Global Engineering Deans Council (WEEF-GEDC), Albuquerque, NM, USA, pp. 1–5 (2019)
5. Forero, B., Hechavarría, J., Alcivar, S., Ricaurte, V.: Systemic approach for inclusive design of low-income dwellings in popular settlements at Guayaquil, Ecuador. In: Ahram, T., Karwowski, W., Taiar, R. (eds.) Human Systems Engineering and Design. IHSED 2018. Advances in Intelligent Systems and Computing, vol. 876, pp. 606–610. Springer, Cham (2019)
6. Ricaurte, V., Almeida, B., Hechavarría, J., Forero, B.: Effects of the urban form on the external thermal comfort in low-income settlements of Guayaquil, Ecuador. In: Charytonowicz, J., Falcão, C. (eds.) Advances in Human Factors in Architecture, Sustainable Urban Planning and Infrastructure. AHFE 2019. Advances in Intelligent Systems and Computing, vol. 966, pp. 447–457. Springer, Cham (2020)
7. Dick, S., Hechavarría, J., Forero, B.: Systemic analysis of bioclimatic design of low-income state-led housing program ‘Socio Vivienda’ at Guayaquil, Ecuador. In: Ahram, T., Karwowski, W., Taiar, R. (eds.) Human Systems Engineering and Design. IHSED 2018. Advances in Intelligent Systems and Computing, vol. 876, pp. 647–651. Springer, Cham (2019)
8. Colorado, B., Fois, M., Leyva, M., Hechavarría, J.: Proposal of a technological ergonomic model for people with disabilities in the public transport system in Guayaquil. In: Ahram, T., Falcão, C. (eds.) Advances in Usability and User Experience. AHFE 2019. Advances in Intelligent Systems and Computing, vol. 972, pp. 831–843. Springer, Cham (2020)
9. ELICROM: Datalogger (2019). http://elicrom.com/elicrom-ecuador/
10. Meteonorm: Handbook Part II: Theory. Global Meteorological Database Version 7 Software and Data for Engineers (2018)
11. GRAPHISOFT: ArchiCAD (2019). https://www.graphisoft.es/downloads/archicad/. Accessed 08 Aug 2019
Virtual Reduction and Interaction of Chinese Traditional Furniture and Its Usage Scenarios

Dehua Yu

Beijing Institute of Technology, No. 5 Zhongguancun South Street, Haidian District, Beijing, China
[email protected]
Abstract. Chinese traditional furniture is an important part of Chinese traditional culture. It is made of solid wood and uses mortise-and-tenon joints to combine all the parts of the furniture without any metal nails. This research is aimed at the display of and interaction with the unique mortise-and-tenon structure of Chinese traditional furniture, drawing 2D and 3D models and restoring the usage scenarios of Chinese traditional furniture. In these scenarios, combinations of the real and the virtual, the static and the dynamic, are shown with the help of motion capture, sensing, AR and VR, and interactive technologies. Through the interaction between Chinese traditional furniture and visitors in the exhibition room, people obtain a better visiting experience and learn more, with more interest.

Keywords: Chinese traditional furniture · Mortise and tenon · Interaction
1 Introduction

Chinese traditional furniture is an important part of Chinese traditional culture. It is made of solid wood and uses mortise-and-tenon joints to combine all the parts of the furniture without any metal nails. The mortise-and-tenon structure of Chinese traditional furniture is so strong that many pieces have been handed down through generations for hundreds of years. That is the charm of the mortise-and-tenon structure. The structure has its own scientific principles and techniques, which are worth studying and can inspire innovation by designers and engineers. The inside structure of Chinese traditional furniture is complicated; however, its outward appearance is simple, which reflects the implicit philosophy of China. Pieces with the same outward appearance might have different inside structures, which is unimaginable for people who do not know much about Chinese traditional furniture. People cannot observe the inside structure directly or gain a direct understanding of the mortise and tenon of Chinese traditional furniture. Therefore, the display of and interaction with the mortise and tenon of Chinese traditional furniture is vital and necessary.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 552–558, 2020. https://doi.org/10.1007/978-3-030-39512-4_86
2 Present Situation of Display

The traditional methods of displaying the mortise and tenon of Chinese traditional furniture are pictures, drawings, GIF animations on the computer, or real models made of wood or with 3D printing technology. In recent years, some interactive methods have been used to display the structure of Chinese traditional furniture. For example, two apps, “Wood Joints” and “Chinese Traditional Furniture”, are both aimed at the display of and interaction with Chinese traditional furniture; in them, people can interact with the joints of different mortises and tenons, learning the connection methods and the different joint types. Further attempts at interaction with the mortise and tenon have been made in exhibitions. For example, the exhibition “The Charm of Mortise and Tenon” (Fig. 1) was held in the Beijing Science and Technology Museum in 2018. In this exhibition, all kinds of Chinese traditional woodwork were displayed, such as Chinese traditional architecture, boats, carriages, and bridges, and the materials of the mortise and tenon were not limited to wood but also included metal and stone. More attractive still, there were interactions between visitors and exhibits, some real and some virtual.
Fig. 1. The exhibition “the Charm of Mortise and Tenon” was held in Beijing Science and Technology Museum in 2018.
The project is aimed at the display of and interaction with the unique tenon-and-mortise structure of Chinese traditional furniture, and at restoring the usage scenarios of Chinese traditional furniture to help people learn more about its connotations. Some unique pieces of Chinese traditional furniture would be chosen from hundreds of pieces; the selection is based on unique structure and unique outward appearance. In this paper, two unique pieces of Chinese traditional furniture are selected for research on display and interaction. There are two targets for the display and interaction: one is the structure of the mortise and tenon, and the other is the ancient usage conditions of Chinese traditional furniture. Both can help people learn about the deep connotations of Chinese traditional furniture and culture.
3 Special-Shaped Table

The special-shaped table (Fig. 2) has a specially shaped tabletop, different from ordinary Chinese tables with rectangular, square, or circular tabletops. It is made of solid wood with black and red lacquer. Because of the special-shaped tabletop, the table has unique functions, perhaps serving as a stand for a unique flowerpot or miniature artificial hills. Also because of the special-shaped tabletop, the structure of the mortise and tenon is unique and differs from the ordinary structure of Chinese traditional furniture. For example, the joint of the top and the leg is more complicated than an ordinary mortise and tenon, having been changed and modified according to the unique shape of each part (Fig. 3). It is not easy to observe the joint details from pictures and drawings alone, so it is necessary to take advantage of interaction technology to guide people to a better understanding of the complicated structure.
Fig. 2. The special-shaped table and its specially shaped tabletop.
Fig. 3. The joint of the top and the leg is more complicated than ordinary mortise and tenon.
3.1 Basic Display
At first, we make a 3D model with 3D software, and we make a semi-transparent 3D model to help people observe the joint structure. Several videos (Fig. 4) on different parts of the mortise and tenon of Chinese traditional furniture will be shown on the wall. When visitors pass by incidentally, the videos would attract their attention. If someone is interested, he might come close to join in the interaction.
Fig. 4. Several videos are made on the unique structures of the mortise and tenon.
3.2 Interaction
When visitors become more interested, they come close to observe the furniture. The furniture consists of two parts: one half is a real table made of solid wood, and the other half is a virtual display played by a projector. The two opposite parts form a whole table (Fig. 5). When visitors come close to observe, the virtual part ripples a little to imply that it is virtual. When visitors touch specific parts, the signal is detected by a sensor; the video then shows the semi-transparent parts, playing the assembly and disassembly process of the mortise and tenon to guide visitors in assembling and disassembling the real parts of the table. The interaction process is accompanied by guiding music and voice, which makes it interesting and attractive.
Fig. 5. The real table made of solid wood and the virtual display form a whole table.
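The touch-to-video logic described above can be sketched as a small dispatch table. This is an illustrative sketch only: the sensor IDs, clip file names, and `play` callback are hypothetical assumptions, not the exhibit's actual software.

```python
# Illustrative sketch: map touch-sensor events on instrumented table parts
# to the semi-transparent fix/unfix videos described in the text.
# All sensor IDs and clip file names are hypothetical.

PART_CLIPS = {
    "tabletop_joint": "fix_unfix_tabletop_joint.mp4",
    "leg_joint": "fix_unfix_leg_joint.mp4",
}

def handle_touch(sensor_id, play=print):
    """When a visitor touches an instrumented part, queue the matching
    semi-transparent assembly/disassembly video (with guidance audio)."""
    clip = PART_CLIPS.get(sensor_id)
    if clip is None:
        return None          # untracked part: no video triggered
    play(clip)               # in the exhibit, this would start the projector
    return clip
```

In practice, `play` would hand the clip name to the projector controller; the dictionary makes adding a newly instrumented joint a one-line change.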
4 Backrest Chair

The backrest chair (Fig. 6) comes only from Jiangsu province, and its backrest is made of woven mat, different from solid-wood backrests. The unique structure is the joint between the back leg and the seat top (Fig. 7), which predates the ordinary Gejiao joint. This chair might have been used during the late Ming and early Qing dynasties. A famous painter named Fanglin Ye, of the Qianlong period of the Qing dynasty, once drew a painting, “Scholar Elegant Meeting”, in which there is a chair similar to this backrest chair. The theme of the painting is also set in Jiangsu province, the same region as this backrest chair. Therefore, the painting suggests the possible usage conditions of this backrest chair.
Fig. 6. The 3D model of the backrest chair.
Fig. 7. The unique structure is the joint between the back leg and the seating top.
4.1 Basic Display
First, we make a 3D model with 3D software, including all the details of the mortise and tenon of the chair. A real model made of solid wood would also be built so that it can be taken apart to show the mortise and tenon of the chair, especially the unique joints. The painting “Scholar Elegant Meeting” would be shown on the whole wall by a projector, illustrating the possible usage conditions of the chair in ancient times.
4.2 Interaction
The painting “Scholar Elegant Meeting” would serve not only as a display of the usage conditions but also as an interaction between the ancient living environment and modern people. Motion sensors would be added around the wall showing the painting. When visitors pass by, details of the painting would move slightly: the branches of the trees shake, the leaves fall, and the people wave their hands and blink a little, which catches visitors' attention. In the main scene of the painting, one scholar sits on a chair similar to ours. On the opposite wall, a copy of the chair and a table are displayed (Fig. 8). Every visitor can sit on the chair beside the table. Once a person sits on the chair, the gravity sensor detects the signal, and the painted scene is replaced by an animated video. In this scene, the scholar stands up from his chair, says hello to the visitor, and shows and introduces his chair; another scholar also joins in the interaction. All of this makes the visitor feel as though he or she has joined the ancient living environment and is chatting with the ancient people about daily life. Furthermore, different visitors sitting on the chair might trigger different videos, which could attract more visitors to join in the interaction.
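One way the seat-triggered variety described above could work is sketched below. This is a hedged illustration: the clip names, weight threshold, and clip-rotation policy are all assumptions, not the exhibit's implementation.

```python
# Illustrative sketch of the seat-triggered interaction: a gravity-sensor
# event on the chair selects one of several greeting videos so successive
# visitors see different scenes. Names and threshold are hypothetical.
import itertools

GREETING_CLIPS = ["scholar_greets_a.mp4", "scholar_greets_b.mp4",
                  "scholar_shows_chair.mp4"]
_clip_cycle = itertools.cycle(GREETING_CLIPS)

def on_seat_event(weight_kg, threshold_kg=20.0):
    """Return the next greeting clip when the gravity sensor detects a
    seated visitor; rotate clips so repeat visitors get variety."""
    if weight_kg < threshold_kg:     # nobody (or nothing heavy) on the chair
        return None
    return next(_clip_cycle)
```

Cycling through clips is one simple policy; a random choice, or a choice keyed to time of day, would achieve the same "different visitors trigger different videos" effect.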
Fig. 8. The interaction between the virtual painting and a visitor.

Acknowledgments. This paper is supported financially by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences Research Youth Fund (Project No. 19YJC760144).
Humans and Artificial Systems Complexity
Your Voice Assistant Will See You Now: Reducing Complexity in Human and Artificial System Collaboration Using Voice as an Operating System

Viraj Patwardhan, Neil Gomes, and Maia Ottenstein

Thomas Jefferson University and Jefferson Health, 925 Chestnut St., Philadelphia, PA 19107, USA
[email protected]
Abstract. There is something about smart artificial systems that fascinates us. In recent decades we have seen researchers, technology experts, large technology companies, and governments across the world invest significant time in making artificial systems smarter, more interactive, and easier to communicate with. Smart assistants are slowly becoming an integral part of our lives, but they are far from perfect, and many concerns and questions about them must be addressed. How can we create the ideal version of this partnership? How can we break the barriers to communication with these systems? This paper highlights how companies are addressing the issues with artificial systems by designing systems that are safer, more accurate, and more trustworthy.

Keywords: Voice technology · Voice assistants · Human and artificial system collaboration · Voice interfaces · VUI
1 History and Evolution of Voice Technology

Our voices are powerful. They are our primary outlet for our thoughts, feelings, and beliefs. We use our voices to interact with the world around us. Our voices enable us to communicate with other humans and other beings and help us to coordinate, collaborate, learn, teach, argue, and evolve. Innovators of every generation have explored how the human voice can alter our surroundings, from the first instances of humans exploring incantations to alter reality, to contemporary technology that enables us to alter reality by uttering a command as simple as “order my laundry detergent.” However, while the complex technology behind natural language processing has been evolving, the majority of public users have largely adapted to the physical input methods and interfaces developed in lieu of voice, as these were less complex at the time. Our widespread interest in merging the human voice with technology is evidenced by the first documented voice-based technology, released in 1877: Thomas Edison's phonograph [1]. After Edison discovered how to record our voices, innovators harnessed this idea to, over time, invent the radio and the telephone. And eventually, in 1962, IBM invented Shoebox, the first IBM computer capable of speech recognition.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 561–566, 2020.
https://doi.org/10.1007/978-3-030-39512-4_87
Shortly after the invention of Shoebox, ELIZA – a computer program developed to allow humans to have natural conversations through text with an automated system – was invented. The ELIZA program was so well done for its time that, while most users began using it as skeptics, after a few exchanges with ELIZA many were keen to continue speaking with the system [2]. By the 1980s, voice recognition software was widespread enough to make its way into the toy industry, where Worlds of Wonder's Julie doll and Playmates Toys' Jill doll competed for the hearts of children across the U.S. In 1993, Apple commercialized built-in speech recognition and voice-enabled control systems with Speakable Items. Despite computer scientists' best efforts to bring voice-controlled computers to the mainstream, the first widely used voice user interfaces, or VUIs, were interactive voice response (IVR) systems [3]. You may know these as the automated human-sounding systems that receive your calls to any variety of organizations, from local restaurants to your electricity company. IVR technology evolved from the 1960s through the 1990s and, by the early 2000s, was a standard aspect of phone calls to any large organization. While voice technology was evolving, mobile phones became commonplace and rapidly evolved with more capabilities and varying interaction models. It wasn't long before modern innovators saw the opportunity to integrate voice technology into mobile devices. 2011 saw the birth of Siri, the first modern voice-controlled smart assistant and a major milestone in human-artificial systems collaboration. Shortly after followed Amazon's Alexa and Google Home, which brought voice interfaces into people's homes and daily lives.
2 Voice as an Interface

Most interfaces we are familiar with are visual and physical. Your computer screen, keyboard, even your coffee maker, all provide you with visual information, require physical manipulation to alter their state or create content, and provide visual and/or tactile feedback to confirm the success or failure of your attempt. On the other hand, voice interfaces allow users to interact with technology without the need for physical input or visual feedback. As physical devices get smaller and the situations in which we use smart devices become more varied, it becomes more difficult to physically interact with the GUIs we have come to know [3]. It is thus vital that we have alternate interaction options to extend usability and enhance safety across various environments of use. Voice technology is in many ways an ideal solution to close the usability gap for our technologies with shrinking screens and the diverse use cases that continue to arise with newer smart devices. Voice systems are also an ideal solution because, for most people, the voice is the primary means of communication. Voice is not only the most prevalent method of communication but also the most natural to us. It is no surprise, then, that we aim to create artificially intelligent systems that communicate with us similarly to how we communicate with other humans.
3 Benefits

There are many benefits to the integration of VUIs, including improved usability and accessibility, comfort, and advanced efficiency and safety. VUIs are much easier to learn and can be easier to navigate, especially for simple tasks, offering potential to drastically improve a device's usability. As discussed in a paper by Pradhan et al. [4], interaction is remote (e.g., across the room), which lowers the barrier to use in comparison to having to hold or use a device, and home-based assistants can connect to smart home appliances, becoming integrated into the home environment. A VUI can be particularly helpful for those with physical disabilities and accessibility issues. Many journals and studies investigating the application of VUIs across a range of contexts find that they increase safety, efficiency, and comfort. The issue of safety is most apparent in the example of driving, where studies show VUIs can dramatically reduce the risk of completing tasks like navigation while driving. In April 2019, ScienceDirect published an article about the risk of distracted driving, noting that “engaging in a visually demanding task, such as interacting with a cellphone or in-vehicle devices, was associated with increased crash risk as well as higher crash severity.” The report indicated that interactions that do not require drivers to take their eyes off the road greatly increase safety in comparison to activities in which drivers look away from the road [5]. Introducing voice-based artificial systems into the home is not only more accessible for those who may not be able to easily adjust their environment on their own; it also adds a level of support and efficiency to the home environment, even for highly capable people.
Existing smart home systems with VUIs offer voice-activated features through which one simple, natural command can sweep an item off your to-do list as soon as it arises, allowing you to run from task to task: for example, while washing dishes and noticing the detergent is running low, you can simply announce “Order more detergent” and know that it will be delivered before your container is empty. While this example may not sound like a high-stress occurrence, our lives are growing ever more complicated, and each time we have to remember one more thing, we are likely adding strain to an overworked brain. Authorities have been concerned about the increasing demand on our working memory since well before 1994, when the Rank Xerox Research Centre Cambridge Laboratory launched the “Forget-me-not” project, an exploration into what its creators called “intimate computing.” Their goal was to build smart personal digital assistants, or PDAs, to support and minimize cognitive work by reducing the amount users needed to remember at any given time [6]. Due to their high accessibility, commercial popularity, low barrier to entry, and non-threatening, non-population-specific nature, artificial systems with voice-based interfaces have the potential to change the lives of millions of people who need support but do not want specialty assistance. Voice technology empowers these people to retain their autonomy by giving them control over their environment regardless of their physical movement limitations.
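A command like “Order more detergent” depends on mapping an utterance to an intent with slots. A minimal keyword-and-regex sketch is shown below; production assistants use trained natural-language models, and the intent names and patterns here are purely illustrative assumptions.

```python
# Minimal sketch of rule-based intent matching for simple spoken commands.
# Intent names and patterns are illustrative, not any vendor's actual NLU.
import re

INTENT_PATTERNS = {
    "reorder_item": re.compile(r"\border (?:more )?(?P<item>[\w ]+)\b", re.I),
    "set_timer":    re.compile(r"\bset a timer for (?P<minutes>\d+)", re.I),
}

def parse_command(utterance):
    """Return (intent, slots) for the first matching pattern, else (None, {})."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return None, {}
```

The point of the sketch is the shape of the problem: even the simplest useful voice command needs both a recognized intent and extracted slot values ("which item?") before anything can be ordered.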
4 Challenges

While integrating voice technology offers a huge amount of benefit to automated systems, there are some challenges that must be considered before incorporating VUIs into any system. Because voice technology is still developing and people and cultures are still adjusting to its growing ubiquity, many foundational needs remain to be addressed, including some usability issues, establishing trust, and the appropriateness of implementation. As mentioned previously, voice technology is still relatively young, and most systems cannot yet understand users whose voices sound different from those of a few specific demographics. This means that voice-based systems are unusable for users who do not speak exactly as the engineers who created the system expect, leaving people who cannot speak loudly enough, clearly enough, or without their natural accent out of the voice tech revolution. VUIs also present a barrier for users with impaired hearing. If a system accepts only voice input and supplies only audio responses, users with impaired hearing cannot interact with that system unless it is also accompanied by visual response indicators. Voice-based systems also require very thoughtful information and interaction design, as it can be easy to get lost while navigating a system with no visual cues. Tasks that are simple in well-designed graphical interfaces, like trying to return to a previous place in your conversation with the device, can become strenuous and confusing, if possible at all. Well-designed systems with visual interfaces show their users the information architecture of the system and where the user is currently located within it, for example through the use of breadcrumbs on a webpage. This is much more difficult to do with voice-only interfaces, and designers must carefully make the system “easy to use, easy to learn and resilient to errors” [7].
Good design is imperative not only in the creation of the system itself but also in the appropriateness of the decision to include a voice interface at all. Many technologists are excited about voice technology and eager to adopt it into their systems, but unnecessarily relying on voice interactions may negatively impact users, risking both their satisfaction and their safety. Voice-based interactions are also not ideal for public or shared spaces, where they can be not only difficult to conduct, depending on surrounding noise levels, but also disruptive to those beyond the immediate users [3]. Public spaces introduce the requirement for security, because every interaction with a system is now capable of being overheard. People may not be as comfortable using an ATM, for example, if it required users to speak their PIN codes rather than typing them in.
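The "breadcrumb" problem discussed above can be addressed by tracking dialog state explicitly, so "go back" and "where am I?" always have well-defined answers. A minimal sketch, assuming a simple stack of named dialog states (the state names are hypothetical):

```python
# Sketch of one way to give a voice interface breadcrumb-style orientation:
# keep a stack of dialog states so the user can always back out or ask
# where they are. State names are illustrative assumptions.

class VoiceNavigator:
    def __init__(self, root="main_menu"):
        self._stack = [root]

    def go_to(self, state):
        """Enter a deeper dialog state (e.g., after a recognized intent)."""
        self._stack.append(state)
        return state

    def go_back(self):
        """Pop to the previous state; stay at the root if already there."""
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]

    def where_am_i(self):
        """Spoken 'breadcrumb': tell the user their place in the flow."""
        return "You are at: " + " > ".join(self._stack)
```

A real VUI would speak `where_am_i()` aloud and bind `go_back()` to phrases like "go back" or "start over", giving voice-only users the orientation that visual breadcrumbs provide on a webpage.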
5 Potential of Voice Interfaces and a Healthcare Example

Despite the challenges, voice technology is fast becoming a practical household reality. Voicebot.AI reports that the smart speaker install base within the U.S. grew 40% from 2018 to 2019 [8], now exceeding 66 million units. The market for smart speakers with screens grew even more, from 1.3M units in January 2018 to 8.7M just one year later [9]. These trends suggest that adoption of voice-enabled devices is on the rise. So how do
companies deal with this technology that is clearly becoming a key communication channel to reach their customers? A recent article in HBR suggested that the most critical initiative for organizations across major industries is to have a strategy for voice technology [9]. Among other interesting data points, the article highlights that companies that ignore voice technology risk not just losing market revenue but also losing current and future customers. At Thomas Jefferson University and Jefferson Health (Jefferson), we have embraced voice technology and consider it a key communication channel for our consumers. Health information is very personal. Hospitals are expected to keep this information private and safe, and yet accessible to healthcare providers. At Jefferson, in addition to data privacy, it is paramount for us to have the trust of our patients when it comes to their health data. So how does a healthcare system introduce voice technology while ensuring the safety and privacy of the data? After researching the benefits and challenges of the commercially available voice assistants, Jefferson decided to build its own system that would be self-contained within Jefferson's secure network environment. This system provides our patients the comfort that their data is not at risk and that their interactions with the voice assistant will remain private and confidential, just like all of their personal health data. One of the key reasons for developing our own voice system was to recognize the importance of, and earn, the trust of our patients. The system has been launched in 11 hospital rooms as a pilot, and researchers continue to learn more about patient interactions to continuously improve the system. Some of the key features include TV control (volume, channel, on/off), Jefferson information (location, hours, phone number, visitor info), weather, patient onboarding information, diet information, and device feedback.
We have seen that once trust is established with end users, interactions with the system increase. Some key learnings are listed below:
1. Introduce the system to the patient: give information on how the system works, how to interact with it, and what to do if it does not respond. This helps reduce patient anxiety.
2. An average of 20 interactions per patient per day: patients interact with the system once they build trust with the interface.
3. TV control is the most widely used voice command: in a hospital environment, patients look to interact with systems that distract them from their anxiety.
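One way a self-contained in-room assistant might keep its behavior bounded is to route recognized utterances only to a small, fixed feature set like the one listed above. The sketch below is a hedged illustration: the intent and handler names are assumptions for demonstration, not Jefferson's actual API.

```python
# Hypothetical sketch: route patient utterances only to a whitelisted set of
# in-room features (TV control, facility info, weather, ...), rejecting
# anything outside it so the assistant stays self-contained.
# Intent and handler names are illustrative assumptions.

ALLOWED_INTENTS = {
    "tv_volume", "tv_channel", "tv_power",
    "facility_info", "weather", "diet_info", "device_feedback",
}

def route(intent, handlers):
    """Dispatch a recognized intent to its handler, or a safe fallback."""
    if intent in ALLOWED_INTENTS and intent in handlers:
        return handlers[intent]()
    return "Sorry, I can't help with that here."
```

The whitelist is the design point: because only pre-approved intents ever reach a handler, no utterance can pull the assistant into features (or data) outside the secure environment.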
6 Conclusion In summary, we believe that voice technology is here to stay. The technology is still evolving and does pose challenges especially in the areas of privacy, trust and data protection. However, benefits like the simplicity of use, the human “naturalness” of the interface, lower barriers to adoption and its ability to reduce the cognitive load on users, mean that companies (especially in healthcare) should invest in this technology and help with its development.
In order to deliver the right voice product experience to patients, healthcare systems need to adopt a three-point strategy.
1. Invest more time in learning how this technology will impact your business: have a clear and defined strategy for your voice technology. Each business will have a different set of goals based on its user and business needs. Having an institutional strategy will help deliver the right business outcomes.
2. Customize your voice technology to meet the needs of your customer: a patient will have different expectations when interacting with voice technology to find a doctor versus booking a vacation. Understand how users interact with voice systems in different scenarios. Industry- and business-specific insights will dictate how the technology needs to be designed and leveraged.
3. Remove barriers to interaction: make interactions with the voice technology free of barriers. Any initial barriers or first-time interaction glitches can lead to the loss of customers. Ensuring that customer interactions are seamless will build trust and improve the overall experience.

Acknowledgments. We would like to acknowledge our staff, our leadership, our board, and our Digital Innovation and Consumer Experience (DICE) Group team members, who actively support us in our quest for creating solutions that improve lives.
References
1. Newville, L.: Development of the Phonograph at Alexander Graham Bell's Volta Laboratory. http://www.gutenberg.org/files/30112/30112-h/30112-h.htm
2. Weizenbaum, J.: Computer Power and Human Reason: From Judgment to Calculation, pp. 2, 3, 6, 182, 189. W.H. Freeman and Company, New York (1976)
3. Pearl, C.: Designing Voice User Interfaces: Principles of Conversational Experiences, 1st edn. O'Reilly, Sebastopol (2016)
4. Pradhan, A., Mehta, K., Findlater, L.: Accessibility came by accident: use of voice-controlled intelligent personal assistants by people with disabilities (2018)
5. Gerson, P., Sita, K., Zhu, C., Ehsani, J., Klauer, S., Dingus, T., Simons-Morton, B.: Distracted driving, visual inattention, and crash risk among teenage drivers. Am. J. Prev. Med. 56(4), 494–500 (2019)
6. Lamming, M., Flynn, M.: Forget-Me-Not, intimate computing in support of human memory. In: Proceedings of FRIEND21 1994 International Symposium on Next Generation Human Interface (1994)
7. Portet, F., Vacher, M., Golanski, C., et al.: Pers. Ubiquit. Comput. 17, 127 (2013)
8. Kinsella, B.: U.S. smart speaker ownership rises 40% in 2018 to 66.4 million and Amazon Echo maintains market share lead says new report from Voicebot (2019). https://voicebot.ai/2019/03/07/u-s-smart-speaker-ownership-rises-40-in-2018-to-66-4-million-and-amazon-echo-maintains-market-share-lead-says-new-report-from-voicebot/
9. Kinsella, B.: U.S. smart display user base grew 558% in 2018 and more than doubled in second half of the year, Amazon holds two-thirds market share (2019). https://voicebot.ai/2019/03/07/u-s-smart-display-user-base-grew-558-in-2018-and-more-than-doubled-in-second-half-of-the-year-amazon-holds-two-thirds-market-share/
Pre-emptive Culture Mapping: Exploring a System of Language to Better Understand the Abstract Traits of Human Interaction

Timothy J. Stock and Marie Lena Tupot

scenarioDNA inc., New York, NY, USA
{timstock,marielena}@scenariodna.com
Parsons School of Design, The New School, New York, NY, USA
[email protected]
Abstract. A sustainable and ethical future, rich with technology, requires an abstract human framework of ideation in order to understand the reality of user experience (UX) and the context design will live in. The layer between the human and the object needs to be better understood, early on. We too quickly replace one interaction with another. However, critical human traits often remain hidden until the product is in situ, and their appearance can have terrible consequences. These traits, and the behavioral codes that underscore them, can be understood early on by looking at language.

Keywords: Applied sociology · Epistemology · Semiotics · Strategic foresight · Systems thinking · Culture mapping · Human factors
1 Kicking the Machine Is a Sign of Discord

After a deadly 2017 crash between a destroyer and an oil tanker, the US Navy [1] is set to replace its touchscreen controls with mechanical ones. Mapping the culture of users could have gotten ahead of the design decision that led to a software interface versus an analog one. We tend to make faster decisions when it comes to software because software is more easily mutable, but missteps happen where hardware and technology meet. It looks as though I am measuring my health. But is it functioning that way? Industry leaders have begun to question whether AI devices should have anthropomorphic traits. Should we feel comfortable “kicking a machine” [2] that has empathetic eyes? Are we encouraging bad behavior in children engaging with anthropomorphic devices? Feeling the need to “kick the machine” means we missed something human along the way. Innovation must lead with a holistic view of human/machine interaction. Consider facial recognition technology. Amidst global tension, developers are scrambling to rethink their role as a tool for good. Human anxiety leads to disruptive behavior. Protesters in China, for example, have been hiding behind masks and umbrellas, using burner cellphones, and paying for transit in cash. Protesters also have been cutting down lampposts with electric saws and wearing apparel designed [3] to trick photo recognition software.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 567–572, 2020.
https://doi.org/10.1007/978-3-030-39512-4_88
T. J. Stock and M. L. Tupot
Such paradigms of human-to-machine interaction have not visibly been in play until recently. They are becoming apparent in political arenas, such as Hong Kong, as we witness people attempting to regain a sense of self amid the machines. There are a number of UX models currently in play. User experience designer Jesse James Garrett authored The Elements of User Experience [4] in 2002. Fogg [5] founded the Behavior Design Lab at Stanford University, where his 2009 Fogg Behavior Model emphasizes motivation, ability and trigger. And, in 2014, entrepreneur Nir Eyal detailed his model [6] in Hooked: How to Build Habit-Forming Products. Each is valid, but none addresses the human meaning behind behavior.
2 Deeper Abstraction Leads to Emotional Design Intelligence
Our attempts at UX efficiency establish a process of expectation. It is important that expectations meet the intent of the user. When they are not met, hidden biological responses and human expression attempt to mitigate the chaos. This process of exploring abstraction looks at speculative design in a way that better informs the designer about the public in question. The more a designer knows, the more can be built into prototyping to allow for natural, open-ended design – before permanent final design decisions are ever made. The cultural system that represents the process of adaptation can be plotted as a system of language that reflects the dynamics between the concrete and abstract worlds. Such emotional intelligence requires that we expand our binary world into an abstract space for which only the human brain has the capacity.
The matrix diagram (Fig. 1) is based on a patented method of analyzing and classifying open and closed corpora into discrete behavioral archetypes that can be visualized as a system of signifiers within a semiotic square. It classifies and quantifies linguistic signifiers along three principal axes. The x-axis represents relationship to self
Fig. 1. Culture mapping matrix, scenarioDNA inc., U.S. Patent No. US9002755B2.
Pre-emptive Culture Mapping: Exploring a System of Language
from analytical to expressive. The y-axis represents relationship to society, from affirming societal norms and values to resisting them. The z-axis represents time. This framework provides a systematic view of the ideological dynamics and patterns that emerge in expressed language, including words, images and gestures.
By looking at the linguistics behind the emergency tasks within the responsibility of US Navy personnel in the case of the destroyers, one might have found that the code of control defined the input task. The imprecision of a touchscreen would be counterintuitive to the task. Present-day touchscreens do not yet imply control and efficiency, even on a New York City MetroCard machine. They certainly do not define control on a US Navy ship. A behavioral code of control would live, as one might expect, within the policy space of a culture map – where a residual future exists until design can exceed the human context and expectation of control.
Unlike the code of control for Navy personnel, protesters in Hong Kong are trying to regain ownership over self. The code of ownership falls into the symbolic, disruptive quadrant of the matrix. It is a human reaction to a societal state of discord. We need to know what certain facets of interaction mean to people.
Artist Salome Asega [7] is the partnerships director of Powrplnt, a center for affordable technology-meets-art workshops, and also an instructor of speculative design at Parsons School of Design. Asega asserts, “Having an arts or humanitarian lens in technology centers people and communities…it can offer an entry point for people to better understand the more complex aspects of technology.”
The design industry has been discussing issues of problematic design for decades. “In the case of Apollo 13, the lesson (among others) learned was that ‘Ehh, I’m sure it’ll be fine’ is not a solid design philosophy,” writes Smith for Engineering.com [8]. This was 1970.
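The three-axis classification described above can be sketched as a toy data structure. This is purely illustrative: the axis scores, the `quadrant` helper and the example coordinates are invented for this sketch and do not reproduce the patented culture-mapping method.

```python
# Toy sketch of placing signifiers on the culture-mapping axes.
# All scores are invented; the patented method is not reproduced here.

def quadrant(x, y):
    """x: analytical (-1) to expressive (+1); y: resisting (-1) to affirming (+1)."""
    horizontal = "expressive" if x >= 0 else "analytical"
    vertical = "affirming" if y >= 0 else "resisting"
    return f"{vertical}/{horizontal}"

# Hypothetical (x, y, time) coordinates for two behavioral codes from the text.
signifiers = {
    "control":   (-0.6,  0.7, 2019),  # analytical, norm-affirming (policy space)
    "ownership": ( 0.8, -0.5, 2019),  # expressive, norm-resisting (disruptive space)
}

for word, (x, y, year) in signifiers.items():
    print(f"{word}: {quadrant(x, y)} ({year})")
```

The z-axis (time) is carried along as a plain year here; in the framework it is what lets residual, dominant and emerging codes be tracked as culture cycles.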
Don Norman [9], author of The Design of Everyday Things and a former Apple VP, discusses his investigation of the 1979 nuclear power accident at Three Mile Island. Norman recalls, “We decided that the operators were really quite intelligent… except that the design was so bad that if you wanted to cause people to make errors, you couldn’t have done a better job.” We still have not learned. Even the contemporary theory of agile design has lost its way so deeply that debates about its usefulness persist. Martin Fowler, a co-author of the Agile Manifesto [10], says, “a lot of what is being pushed is being pushed in a way that… goes really against a lot of our precepts… The team should not just choose the process that they follow, but they should be actively encouraged to continue to evolve it and change it as they go.” A radical rethink of the framework of human experience is needed now. Designers must consider the inevitable undoing of their design and a requisite for adaptability.
3 Function Is the Cognitive Framework
The behavioral codes that underlie function are what we truly need to understand before we redesign access to function. Presently, design UX takes precedence. The expectation is that the human will conform to the more aesthetically pleasing interface. For certain, a human being will accept trade-offs in their own interest. In actuality, design is modifying human behavior to fit the existing capability of technology. It begets a string of incremental limitations and human frustrations. Our expectation is rooted in everything going right. Yet things go wrong all the time, especially when technology is involved.
The critical nature of understanding abstract traits of human behavior becomes clear when we investigate our aging population and the implications of lost cognition and physical ability. “The world is designed against the elderly,” writes Norman in Fast Company [11]. Norman is 83. He continues, “Do not think that thoughtful design is just for the elderly, or the sick, or the disabled. [Inclusive design] helps everyone. Curb cuts were meant to help people who had trouble walking, but it helps anyone wheeling things: carts, baby carriages, suitcases… As Kat Holmes points out in her book Mismatch, all of us are disabled now and then.” Instead, the current speculative design framework around aging is about medical devices and warehousing. Yet the elderly have not lost the qualities that make them human. For them, robotic nurses will not be a well-received future. Getting to the human is the purpose of understanding the cycle of dissent and acceptance. This becomes a framework about humanity – whether we are discussing aging or cars.
There are times when we seem to be getting it right. The behavioral codes in play at startup Voyage [12] are safety and vitality. The company operates a fleet of low-speed autonomous vehicles in two retirement communities, one near San Jose, California, and the other north of Orlando, Florida, both called The Villages. The typical disadvantages of autonomous vehicles, “slow speeds, overly cautious driving maneuvers, simplistic routing,” become advantages in the context of retirement communities, meeting the senior need for safety. Seniors also earn bragging rights as early adopters of new technology and hold an equity stake. For now, this works. We can replace certain traits, such as the ability to drive, with less human attributes, without much thought. We can give something back in the form of mobility and pride – human traits. However, our technical capabilities are expanding exponentially.
We need to be aware of how replacing traits reshapes the human itself. We will always lose something in the exchange. The goal should be to better the human, not create a more obedient machine. When we discuss human-centered design, we must be clear that we are designing interventions to better serve people’s needs.
4 The Exponential Trajectory of Technology Calls Us to Task
Moore’s Law [13] is the observation that the number of transistors in a dense integrated circuit doubles about every two years. This law became a forecasting model used to predict the growth of technology. The linearity of a forecasting model based on materiality alone leads to exponential distortion. The exponential trajectory creates behavioral unpredictability. The power of technology is seductive. We can’t resist it.
Think about twinning. The concept of twinning is returning to the design world, except now twinning is digital. In 2018, van Houten [14], executive vice president and CTO of Royal Philips, cited duplicate modeling as a component that brought the Apollo 13 crew safely home: “This allowed engineers on the ground to model and test possible solutions, simulating the conditions on board of Apollo 13.” It’s classic mirroring, even as it moves to digital simulation for devices.
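The doubling described by Moore’s Law at the start of this section can be made concrete with a quick back-of-the-envelope calculation (a sketch for illustration, not from the source):

```python
# Moore's Law as arithmetic: transistor count doubles roughly every two years.
def projected_transistors(initial, years, doubling_period=2):
    return initial * 2 ** (years / doubling_period)

# Starting from 1 billion transistors, a decade of doubling gives a 32x increase:
print(projected_transistors(1e9, 10))  # 32 billion
```

The gap between this curve and any straight-line projection over the same decade is the kind of exponential distortion the passage describes.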
Duplicate modeling becomes a different concept once we attempt to replicate a human [15]. It is a consideration entering the healthcare arena through the prototyping of concepts such as Philips Digital Twins. The expected evolution is for all the parts to come together to replicate the human. As some of us might know from ailing friends and relatives, the human patient will always surprise us with the unexpected as they continue to pursue their most human of codes in spite of all limitations: freedom. Freedom is hardwired into us, and it will trip up our most fluidly designed replicant. The patient advised not to drive will drive. The patient who is monitored for wandering will remove their monitors.
5 Human Nature Requires Cyclical Understanding
The challenge is to have the design remain open to human potential, rather than structural control. It is no surprise that most people engaged in “tech-mediated intimacy” prefer sexting with “adult” chatbots over sex robots [16]. They can imagine a human at the other end. The behavioral codes will live in the emerging ritual space. Perception is critical. However, culture will continually cycle. Already, technology is making it harder to recognize reality. It will be human nature to push back.
In 1977, the psychologist Stanley Milgram [17], then at the City University of New York, began replicating the mind-body fusion found in Edmond Rostand’s 19th-century play Cyrano de Bergerac. His resulting cyranoid was a mechanism for speech shadowing. In his experiments, people failed to detect that the words were not coming from the expected source. They believed the illusion. But, over time, all it takes is one person to move beyond their suspension of disbelief before we need to evolve our design. As designers, we need to understand human traits and be ahead of that moment.
References
1. Fingas, J.: US Navy will scrap touchscreen controls on its destroyers. https://www.engadget.com/2019/08/11/us-navy-drops-touchscreen-controls-for-destroyers/
2. Harrison, S.: Of course citizens should be allowed to kick robots. https://www.wired.com/story/citizens-should-be-allowed-to-kick-robots/
3. Hu, J.C.: When will TJ Maxx sell anti-surveillance fashion? https://slate.com/technology/2019/08/facial-recognition-surveillance-fashion-hong-kong
4. Garrett, J.J.: The Elements of User Experience. http://www.jjg.net/elements/
5. Stulberg, B.: How to Build Better Habits. https://www.outsideonline.com/2401883/how-to-build-good-habits
6. Eyal, N.: Hooked: How to Build Habit-Forming Products. Penguin Books, London (2014)
7. King, A.: Meet Salome Asega, the multihyphenate artist working to diversify the tech world. https://www.vogue.com/article/salome-asega-interview-powrplnt
8. Smith, V.: Great moments in engineering history: Apollo 13. https://www.engineering.com/DesignerEdge/DesignerEdgeArticles/ArticleID/15386/Great-Moments-in-Engineering-History-Apollo-13.aspx
9. Hawk, S.: Chat with Don Norman. https://uxmastery.com/transcript-chat-don-norman/
10. Cagle, K.: The end of agile: a rebuttal. https://www.forbes.com/sites/cognitiveworld/2019/08/28/the-end-of-agile-a-rebuttal/#1fe4033c538a
11. Norman, D.: I wrote the book on user-friendly design. What I see today horrifies me. https://www.fastcompany.com/90338379/i-wrote-the-book-on-user-friendly-design-what-i-see-today-horrifies-me
12. Hawkins, A.J.: AV startup Voyage on how low speeds and older customers. https://www.theverge.com/2019/9/12/20862659/voyage-self-driving-startup-funding-oliver-cameron
13. Feldman, M.: TSMC thinks it can uphold Moore’s Law for decades. https://www.nextplatform.com/2019/09/13/tsmc-thinks-it-can-uphold-moores-law-for-decades/
14. van Houten, H.: The rise of the digital twin: how healthcare can benefit. https://www.philips.com/a-w/about/news/archive/blogs/innovation-matters/20180830-the-rise-of-the-digital-twin-how-healthcare-can-benefit.html
15. Copley, C.: Medtech firms get personal with digital twins. https://www.reuters.com/article/us-healthcare-medical-technology-ai-insi/medtech-firms-get-personal-with-digital-twins-idUSKCN1LG0S0
16. Ellis, E.G.: You are already having sex with robots. https://www.wired.com/story/you-are-already-having-sex-with-robots/
17. Neuroskeptic: “Cyranoids”: Stanley Milgram’s creepiest experiment. http://blogs.discovermagazine.com/neuroskeptic/2014/09/06/cyranoids-stanley-milgrams-creepiest-experiment
“Meanings” Based Human Centered Design of Systems
Santosh Basapur and Keiichi Sato
Rush University Medical Center, 1750 West Harrison St., Chicago, IL 60612, USA
IIT Institute of Design, 3137 S Federal St., Chicago, IL 60616, USA
{basapur,sato}@id.iit.edu
Abstract. There is a renewed interest in Systems Thinking and Systems Design in the practice and pedagogy of design. Theoretical work proposing new design theories and methods is generating discussion about first principles, or the general lack of them. Practitioners and academics alike are seeking methods to map, model and understand the critical information needed for the design of complex systems, such as Organizational Semiotics and ethical value-based design practices. This paper posits a hybrid approach grounded in the traditional Human Systems Integration (HSI) approach and Organizational Semiotics. We believe that the ability to incorporate the perceived meanings and values of stakeholders will yield a much more empathetic and flexible design of systems.
Keywords: Systems Design · Systems theory · Semiotics · Human Systems Integration
1 Introduction
Recent years have seen a renewed interest in Systems Design. A focus on sustainability, ethics, values and moral dilemmas in the advent of new technologies, like self-driven vehicles and AI-based assistive agents in mobile devices and living rooms alike, has brought to the fore the need for more semantics-based design processes than ever before. New design theories and methods of design [9] are creating opportunities for discussion of first principles, or a lack of them.
Human Centered Design
Traditional approaches to HSI are Socio-Technical System Design (STSD) [20] and Macroergonomics [7]. User-centered design is end-user centered, and more often than not the complex relationship between service producer and service consumer gets reduced to the end-user point of view. Human-centered design, on the other hand, attempts to keep all stakeholders’ needs in focus for the entire systems development process.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 573–579, 2020. https://doi.org/10.1007/978-3-030-39512-4_89
2 Overview of Systems Design Approaches
2.1 Human Factors Based Systems Design
McCormick and Sanders [13] defined Human Factors/Ergonomics as follows: “The field of human factors – referred to as ergonomics in Europe and elsewhere – deals with the consideration of human characteristics, expectations, and behaviors in the design of the things people use in their work and everyday lives and of the environments in which they work and live.” Over the years this broad definition has held well for the Human Centered Design practiced by HF professionals. But the critique has always been that HFE is about the task, the person and the fit between them. Hollnagel [8] and Wickens et al. [21] argue that the nature of human factors work in the 2010s is significantly different from the work done in the 1950s and 60s. Hollnagel suggests it is not enough to improve the performance and physical well-being of people through systems design, because that kind of change can be narrow in focus. Boy summarizes the challenge as one of better understanding the socio-cognitive consequences of the shift from the design of controls to the management of processes and outcomes [2].
2.2 Human Systems Integration (HSI) Design Approach
The HSI approach has gained traction and grown in multiple domains, such as military, healthcare, transportation and robotic systems. “HSI is the integrated and comprehensive analysis, design and assessment of requirements, concepts and resources for system manpower, personnel, training, environment, safety, occupational health, habitability, survivability, and human factors engineering, with the aim to reduce total ownership cost, while optimizing total mission performance” [1]. Other theories, like Socio-Technical System Design (STSD) [20] and Macroergonomics [7], complement the HSI view of systems design. These theories optimize the integration of people and systems (technical, information, engineering, etc.) to produce optimal output for the organization and quality of life for the consumers and producers of services in these systems.
2.3 Systemic Design and Circular Economies
Systemic design is led by sustainability-focused systems designers. They are bringing design thinking processes and frameworks to create systems thinking for the design of complex systems. They believe in developing, initializing, and infrastructuring the capacity for co-design and co-production with real stakeholders. Addressing the challenges of global trends in resource consumption, and achieving deeper transformations toward equitable futures, requires cultivating woke networks in real-world contexts, be they technological, social, infrastructural, and/or politico-economic [16] (Fig. 1).
Fig. 1. Systemic design, by Birger Sevaldson, 2017, RSD conference
3 Semiotics as Basis for Complex System Design
Known as the “mathematics of the humanities,” semiotics is the science of the meaning of signs, including words, images, sounds, odors, flavors, gestures and objects [3]. Semiotics provides a means for codifying objects, their representations and their interpretation. It can also accommodate both engineering concepts and human behaviors. For this research in particular, the interest is in Organizational Semiotics (OS): HSI explains the larger system well, but it lacks a theoretical basis for modeling the rest of the information pertaining to the technical and social fiber of complex systems. OS fills that gap well (Fig. 2).
Fig. 2. Definition of Semiotics. Redrawn from [3]
3.1 Semiotics as Basis for Information Systems and Organization Design
Stamper and Liu [18, 19] describe Organizational Semiotics (OS) in terms of norms: “An organization is defined as a system of social norms.” They suggest that a basic strategy for coping with such complexity is “to find good criteria for partitioning the problem into components that can remain, as far as possible, invariant as the other components
change.” Stamper specified the semiotic ladder to help understand information (Fig. 3). OS has been successfully applied to Information Systems [10, 11].
Fig. 3. Stamper’s semiotic ladder. Source: [18, 19]
While semiotics elegantly models what we take for granted in the representation of complex systems, it also invites the rhetoric that it is easy to apply. This is misleading. There are not many case studies of its application outside of Information Systems. The criticism has been that it is a general-purpose tool and not a full-fledged analytical and/or solution-synthesis tool [3].
4 Semiotics Based Systems Life Cycle (SSLC)
In this paper the authors propose a semiotics-based systems life cycle, shown in Fig. 4. Input informed by the broad context creates a mapping between goals, design principles and the actual design. Once the design is implemented, meanings become apparent through the outcomes of the system. Feedback informs the iterative cycle, allowing it to self-improve over time.
Fig. 4. Semiotics based System Life Cycle (SSLC)
Further procedural steps are mapped out in Fig. 5 below. The new holistic method emphasizes how the integration of data yields insights from the meanings of the actions and activities of different stakeholders. Different aspects of the systems are also detailed.
Fig. 5. Procedural flow of design method proposed
The application method includes deliberate steps as follows:
Step 1. Generating Contextual Information and Meanings via analysis of:
a. Intent of the organization (business) that is providing the service
b. Technological trends and landscape of available technologies
c. Competitive landscape of the business
Step 2. Current System Analysis, which should include at least:
a. Analysis of assets and structure of subsystems in the overall objective system
b. Communication flows and information flows
c. People’s movement flows
d. Material flows, etc.
Step 3. Infrastructural and Human Centered Semiotics Analyses: Social and technical analyses are critical to this methodology, but in addition cultural, ethical and economic analyses are also crucial to help understand people, roles and responsibilities:
a. Roles and responsibilities network analysis, social structure and hierarchy
b. Human resources, its current policies, and current training for personnel
c. Cultural analysis and the ability to manage “change”
d. Information architecture and information system model
e. Task and activity analysis (HTA)
f. Technical products and services architecture analysis
g. Economic analysis of viability, desirability and feasibility of innovation
Step 4. Holistic Optimization: The working of the socio-economic-cultural structure, along with the technical system (IT, information flow and access, etc.), is optimized with the “intent” of the business organization in focus.
Step 5. Prototype System Design: Design principles derived from Step 1, along with the identified problems and constraints, are used to generate a “working prototype solution.”
Step 6. Design for Execution: This is the final executable design solution, enabling the system to work properly and improve iteratively over time. It also yields a flexible architecture that lets the system adjust itself to unforeseen challenges. This design should encapsulate:
1. Principles of design espoused by the design team
2. A structure of entities and people that is sensible to the organization’s goal
3. An executable design and plan
4. A user experience of services that is sustainable in quality over time
5. Feedback/iteration loops in the system design to ensure tolerance to ambiguity.
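The feedback and iteration that the steps above call for can be sketched as a simple loop. Everything here is illustrative: the `Design` class, the mismatch count and the revision rule are hypothetical stand-ins for the analyses the method actually prescribes.

```python
# Illustrative sketch of the SSLC iterate-and-improve loop (all names hypothetical).
from dataclasses import dataclass

@dataclass
class Design:
    principles: list
    revision: int = 0

def observe_outcomes(design):
    # Stand-in for gathering meanings from the running system's outcomes.
    return {"mismatches": max(0, 3 - design.revision)}

def revise(design, feedback):
    # Fold the observed meanings back into the design (the Step 6 feedback loop).
    design.revision += 1
    return design

def sslc_cycle(design, max_iterations=10):
    for _ in range(max_iterations):
        feedback = observe_outcomes(design)
        if feedback["mismatches"] == 0:
            break  # outcomes match stakeholders' expressed meanings
        design = revise(design, feedback)
    return design

final = sslc_cycle(Design(principles=["flexibility", "tolerance to ambiguity"]))
print(final.revision)  # 3
```

The point of the sketch is only the shape of the cycle: observe outcomes, compare them against stakeholders’ meanings, and revise until the mismatch disappears.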
Acknowledgements. We would like to acknowledge feedback from the reviewers, as well as funding for various system design projects from multiple organizations.
References 1. Booher, H.: Handbook of Human Systems Integration. Wiley, Hoboken (2003) 2. Boy, G.A.: Human-centered design of complex systems: an experience-based approach. Des. Sci. 3, e8 (2017) 3. Chandler, D.: Semiotics: The Basics. Routledge, Abingdon (2002) 4. Czajkowski, K., Fitzgerald, S., Foster, I., Kesselman, C.: Grid information services for distributed resource sharing. In: 10th IEEE International Symposium on High Performance Distributed Computing, pp. 181–184. IEEE Press, New York (2001) 5. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999) 6. Foster, I., Kesselman, C., Nick, J., Tuecke, S.: The physiology of the grid: an open grid services architecture for distributed systems integration. Technical report, Global Grid Forum (2002)
7. Hendrick, H.W., Kleiner, B.M.: Macroergonomics: Theory, Methods, and Applications. Lawrence Erlbaum Associates Publishers, Hillsdale (2002)
8. Hollnagel, E.: Human factors/ergonomics as a systems discipline? “The human use of human beings” revisited. Appl. Ergon. 45(1), 40–44 (2014)
9. Jones, P.H.: Systemic design principles for complex social systems. In: Social Systems and Design, pp. 91–128. Springer, Tokyo (2014)
10. Liu, K.: Semiotics in Information Systems Engineering. Cambridge University Press, Cambridge (2000)
11. Liu, K., Li, W.: Organisational Semiotics for Business Informatics. Routledge, Abingdon (2014)
12. May, P., Ehrlich, H.C., Steinke, T.: ZIB structure prediction pipeline: composing a complex biological workflow through web services. In: Nagel, W.E., Walter, W.V., Lehner, W. (eds.) Euro-Par 2006. LNCS, vol. 4128, pp. 1148–1158. Springer, Heidelberg (2006)
13. McCormick, E.J., Sanders, M.S.: Human Factors in Engineering and Design. McGraw-Hill, New York (1982)
14. National Center for Biotechnology Information. http://www.ncbi.nlm.nih.gov. Accessed 10 Feb 2015
15. Norman, D.: Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books, New York (2005)
16. Sevaldson, B.: RSD2. In: Proceedings of Relating Systems Thinking and Design (RSD2) 2013 Symposium, Oslo, Norway, 4–5 October 2013. http://www.slideshare.net/RSD2/intro-lecture-forproceedings. Slide 23
17. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. J. Mol. Biol. 147, 195–197 (1981)
18. Stamper, R., Liu, K.: Organisational dynamics, social norms and information systems. In: Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences, January 1994, vol. 4, pp. 645–654. IEEE (1994)
19. Stamper, R., Liu, K., Hafkamp, M., Ades, Y.: Understanding the roles of signs and norms in organizations – a semiotic approach to information systems design. Behav. Inf. Technol. 19(1), 15–27 (2000)
20. Taylor, J.C., Felten, D.F.: Performance by Design: Sociotechnical Systems in North America. Prentice Hall, Upper Saddle River (1993)
21. Wickens, C.D., et al.: Engineering Psychology and Human Performance. Psychology Press, London (2015)
A Systematic Review of Sociotechnical System Methods Between 1951 and 2019
Amangul A. Imanghaliyeva
Institute for Infrastructure and Environment, Heriot-Watt University, Edinburgh EH14 4AS, UK
[email protected], [email protected]
Abstract. A Sociotechnical System (STS) perspective offers potentially valuable insights into problems in a wide range of domains of society. The development of STS-based methods provides practical help for organisations when planning and evaluating new forms of designing work systems. The current study provides an up-to-date review of STS-based methods in seven different domains: (1) specialist STS methods, (2) the human factor, human behaviour and ergonomics, (3) safety, risk, reliability and accident prevention, (4) complexity, (5) information systems, information technology and the internet of things, (6) organisational design, work system and management studies and (7) design science. Based on a systematic review, 106 STS-based methods have been identified, grouped by relevant domain, and summarised to form a unified list of the available methods for use by anyone involved in system and organisational design.
Keywords: Sociotechnical System · Sociotechnical System methods · Sociotechnical System methods review · System design · Organisational design
1 Introduction
Clearly, there are numerous conventional methods (i.e. Hazard and Operability Analysis (HAZOP) [1], Error Analysis [2], ‘What if’ analysis [3], etc.) which are mainly focused on either the human or the technical aspects of a system. However, as studies indicate, these methods are not complete and in many cases do not solve complex (social and technical) system problems entirely. Moreover, due to an overly narrow consideration of the causes of system/project failures, these methods cannot fully account for the occurrence of system failures or accidents. As systems become more complex, there is an increasing need for new methods. Perhaps this relates directly to the fact that 21st-century systems require us to understand the increased interconnectivity and complexity between systems and their elements. All these considerations have implications for the development of STS methods [4].
STS methods have been developing since the initial establishment of the Tavistock Institute up to the present day, and this development has manifested in a wide range of STS-based methods [5]. Depending on the work environment of specific companies or organisations, STS methods achieved various levels of success in the early days of the emergence of STS. Although there is quite a large range of methods
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 580–587, 2020. https://doi.org/10.1007/978-3-030-39512-4_90
based on STS, they are not widely adopted. Some of the reasons cited for the lack of use of such methods are that they do not always fit the organisational culture, and that knowledge about these techniques is not available to relevant personnel in organisations [6]. Thus, it is important to organise the STS-based methods applied in different domains into one unified framework (see Table 1).
2 Method
The review of the methods was carried out in two stages. The first stage was a literature review of existing STS-based methods in different domains. The second involved a screening process of the identified STS methods.
2.1 Stage 1: Literature Review of Existing STS Methods in Different Domains
A large-scale literature search was undertaken to reveal all the STS-based methods used in the areas of (1) specialist STS, (2) the human factor, human behaviour and ergonomics, (3) safety, risk, reliability and accident prevention, (4) complexity, (5) information systems, information technology and the internet of things, (6) organisational design, work system and management studies and (7) design science. Four databases (Google Scholar, ScienceDirect, Web of Science and Scopus) were searched for articles published between 1 January 1951 and 1 July 2019, inclusive. All instances of STS methods extant in the publicly available and peer-reviewed knowledge base, including relevant scientific journals and standard textbooks, were reviewed. STS-based methods were searched for by topics and keywords. The following targeted keywords/phrases were searched for:
1. Sociotechnical system methods
2. Socio-technical system design methods
3. Sociotechnical system approaches
4. Sociotechnical system tools
5. Sociotechnical system design methods
6. Socio-technical system design tools
All frameworks and models related to STS, as well as traditional methods of work organisation and methods focused solely on one aspect (either social or technical) of the system, were excluded from the review. Thus, the main emphasis was on purely STS-based methods which have a definite, established, logical or systematic plan of action.
2.2 Stage 2: Screening of STS Methods
Apart from the previously mentioned keywords, some additional criteria were set for selection of the STS methods. Before these methods were subject to further analysis, a screening process was employed to remove any methods (tools, techniques) that were not suitable for review with respect to their use in the design and evaluation of systems.
The screening procedure for the STS methods was conducted by setting the following specific criteria:

• The methods had to be focused on both social and technical aspects of the system
• The methods had to be applied in at least one case study
• The methods had to be publicly and freely available
• The methods had to align with at least one of the 20 sociotechnical principles [7].
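The four criteria above act as a conjunctive filter: a method is retained only if it satisfies all of them. The sketch below is illustrative; the record fields and sample entries are assumptions for the example, not data from the review.

```python
# Hypothetical screening filter for Stage 2: each candidate method is kept
# only if it passes all four criteria at once.
from dataclasses import dataclass

@dataclass
class CandidateMethod:
    name: str
    covers_social: bool        # addresses the social subsystem
    covers_technical: bool     # addresses the technical subsystem
    case_studies: int          # number of published applications
    publicly_available: bool   # freely accessible description
    principles_met: int        # of the 20 sociotechnical principles [7]

def passes_screening(m: CandidateMethod) -> bool:
    """Apply the four Stage 2 screening criteria conjunctively."""
    return (m.covers_social and m.covers_technical
            and m.case_studies >= 1
            and m.publicly_available
            and m.principles_met >= 1)

# Invented examples: one method satisfying all criteria, one purely
# technical method that fails the first criterion.
candidates = [
    CandidateMethod("STAMP", True, True, 3, True, 5),
    CandidateMethod("Purely technical FMEA variant", False, True, 2, True, 0),
]
kept = [m.name for m in candidates if passes_screening(m)]
print(kept)  # ['STAMP']
```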
3 STS Methods Review in Different Domains

In total, 106 STS-based methods were identified in seven different domains, as outlined below.

(1) Specialist STS methods are useful for redesigning an existing STS as well as for designing a new one. These methods focus mainly on improving and developing the work system, work processes and the working environment. The cases in which these methods have been used report success in terms of either productivity or quality of working life. In particular, the use of the Sociotechnical System Design technique [8] enables a successful design procedure by meeting both human needs and technical requirements.

(2) Human factor, human behaviour and ergonomics: It is widely acknowledged that the availability of critical sociotechnical systems still relies on human operators, both through humans' reliability and their ability to handle unexpected events adequately [9]. Therefore, it makes sense to have methods for assessing human errors and human factors in complex STS. For instance, Systems-Theoretic Accident Model and Processes based Human Reliability Analysis (STAMP-based HRA) considers human-related accidents and identifies contributing factors and causal relationships [10]. This method was developed in response to the inadequacy of sequential accident models in analysing human errors within STS.

(3) Safety, risk, reliability and accident prevention: The majority of STS methods in this area tend to serve as tools for identifying, mitigating and prioritising the risks that can dramatically degrade the performance of the entire system. Traditional approaches to risk assessment, such as THERP (Technique for Human Error Rate Prediction) [11] and TRACER (Technique for the Retrospective and Predictive Analysis of Cognitive Errors) [12], are typically reductionist in nature [13], focusing on individual tasks and technologies rather than the system as a whole [4, 14].
However, STS present unique challenges for safety management and risk assessment [4, 15]. Therefore, there is a need to look at ST-PRA (Sociotechnical Probabilistic Risk Assessment) [16, 17], which provides a tool for analysing risks associated with human error, non-standard practices and procedural non-compliance, any of which can degrade the safety and performance of the system [17].

(4) Complexity: Numerous research studies from various disciplines have explored the field of 'complexity'; systems engineering and psychology are two disciplines that deal with it specifically. Complexity can be perceived as the existence of complex, chaotic, interrelated, disordered, dynamic systems full of uncertainty and a variety of complex elements. In this context, STS-based methods serve as complex problem-solving tools. These methods can provide insights into the different complex processes as work systems change.

(5) Information systems (IS), information technology (IT) and the internet of things: The link between STS and IT/IS is rooted in the work of Enid Mumford in the 1960s, who realised that an essential yet constantly neglected part of an IS, from the point of view of system engineers and system designers, concerns the organisational aspects of the work system. This idea led to the development of STS-based methods in IS and IT.

(6) Organisational design, work system and management studies: STS-based methods in work system and organisational design are very similar to specialist STS methods in terms of their wide application in designing a new system or redesigning an existing one. The 'Systems Scenarios Tool' [18] and 'Sociotechnical Tools' [6] are examples of this. However, they additionally focus on optimising the distribution of functions between humans and equipment.

(7) Design science: These methods allow users to be creative and act as designers. For instance, 'Business Origami' [19] is a value-centred design method that enables design teams to paper-prototype a physical representation of a system.
Table 1. 106 identified Sociotechnical System (STS)-based methods in different domains between 1951 and 2019

(1) Specialist Sociotechnical System methods
1. Socio-technical Allocation of Resources (Star) toolkit
2. Socio-technical roadmapping method
3. Socio-Technical System Design technique
4. Sociotechnical System Variance Analysis

(2) Sociotechnical System-based methods on Human Factor, Human Behaviour and Ergonomics
5. Cognitive Work Analysis (CWA)
6. Cognitive Work Analysis Design Toolkit (CWA-DT)
7. Goals, Operators, Methods and Selection Rules (GOMS)
8. Human Error Hazard and Operability (HE-HAZOP) analysis
9. Human Factors Analysis and Classification System (HFACS)
10. Human Factors Hazard Operability (HF-HAZOP) analysis
11. Hierarchical Task Analysis (HTA)
12. Sociotechnical Systems Analysis

(3) Sociotechnical System-based methods on Safety, Risk, Reliability and Accident Prevention
13. Accident Analysis Technique (AcciMap)
14. Accident evolution and barrier function (AEB)
15. Causal Analysis based on Systems Theory (CAST)
16. Cognitive Reliability and Error Analysis Method (CREAM)
17. Control change cause analysis (3CA)
18. Failure Mode and Effect Analysis (FMEA)
19. Fault Tree Analysis (FTA)
20. Functional Resonance Analysis Method (FRAM)
21. Health and Safety Executive (HSG245)
22. Health Care Failure Mode and Effect Analysis (HFMEA)
23. HS-Ras
24. Integrated use of cognitive mapping techniques and the Analytic Hierarchy Process (AHP) method
25. Integrated Procedure of Incident Cause Analysis (IPICA)
26. Integrated safety investigation methodology (ISIM)
27. Interdisciplinary approach: STAMP-based HRA considering causality
28. Management oversight and risk tree (MORT)
29. Man-Technology-Organisation (MTO) Analysis
30. Multilinear events sequencing (MES)
31. NETworked hazard analysis and risk management system (NET-HARMS)
32. Occupational Accident Research Unit (OARU)
33. Organisations Risk Influence Model (ORIM)
34. Resilience Engineering (RE) approach
35. Requirements Engineering with Scenarios for a User-centred Environment (RESCUE)
36. Risk Situation Awareness Provision (RiskSOAP) methodology
38. Sociotechnical probabilistic risk assessment (ST-PRA)
39. Specification Risk Analysis tool
40. Systematic cause analysis technique (SCAT)
41. System-Theoretic Process Analysis for Privacy (STPA-Priv)
42. Systems Theoretic Process Analysis (STPA)
43. TRIPOD

(4) Sociotechnical System-based methods in Complexity
44. Agent-based modelling technique (ABM)
45. Causal tree method (CTM) or INRS method
46. COIM: An object-process based method
47. EAST (Event analysis for systemic teamwork) method
48. Macroergonomic analysis and design (MEAD)
49. MAP method
50. Modified Sociotechnical Systems (MoSTS) methodology
51. Morphological Analysis (MA)
52. SEIPS-based process modelling method
53. Sociotechnical scenario (STSc) method
54. Stabilize, evaluate, identify, standardize, monitor, implement, and control (SEISMIC) methodology
55. System dynamics (SD)

(5) Sociotechnical System-based methods in Information Systems, Information Technology and Internet of Things
56. Actor Network Theory (ANT)
57. Ariadne
58. End-user participation in HIT development (EUPHIT) method
59. Future Technology Workshop (FTW) method
60. GenderMag method
61. Mumford: Effective Technical and Human Implementation of Computer-based Work Systems (ETHICS)
62. Organisational Requirements Definition for IT Systems methodology (ORDIT)
63. Semi-structured, socio-technical modelling method (SeeMe)
64. Socio-technical evaluation matrices (STEM)
65. Socio-technical walkthrough (STWT)
66. Soft systems methodology (SSM)
67. Tesseract: A sociotechnical dependency browser tool

(6) Sociotechnical System-based methods in Organisational Design, Work System and Management Studies
68. Barrier and operational risk analysis (BORA)
69. Barrier and operational risk analysis Release (BORA-Release)
70. Bostrom and Heinen: STS design method
71. Bounded Socio-Technical Experiments (BSTE)
72. Delphi study
73. KOMPASS
74. Macroergonomic Work Analysis (MA) method
75. Norske Statesbaner (NSB)
76. Paper-based tools
77. Pava: STS design method
78. Process mapping techniques
79. RAMESES
80. Sociotechnical design (SD) method
81. Socio-technical experiments
82. Socio-Technical Method for Designing Work Systems (STMDWS)
83. Sociotechnical Tools: Scenarios Tool and Job Design Tool
84. Socio-Technical Security Analysis
85. Systems-theoretic accident model and processes (STAMP)
86. Systems Scenarios Tool (SST)
87. Task allocation method (between and among humans and machines)
88. Task Analysis for Error Identification (TAFEI)
89. Work accidents investigation technique (WAIT)
90. Work System Method (WSM)
91. Analysis, Diagnosis and Innovation (ADInnov) method
92. Cost effectiveness analysis
93. Ethnographic method
94. Inquiry cycle
95. Multi-Method or mixed approaches
96. PreMiSTS
97. Social Network Analysis (SNA)

(7) Sociotechnical System-based methods in Design Science
98. Business Origami
99. Cognitive Mapping
100. Cognitive Walkthrough
101. Collage
102. Contextual Design
103. Heuristic Evaluation
104. Meta-design
105. Participatory design (PD)
106. Storyboards
4 Conclusion

The current study provided an up-to-date review of STS-based methods in seven different domains: (1) specialist STS methods, (2) the human factor, human behaviour and ergonomics, (3) safety, risk, reliability and accident prevention, (4) complexity, (5) information systems, information technology and the internet of things, (6) organisational design, work system and management studies and (7) design science. Based on the review, 106 STS-based methods were identified, grouped by relevant domain and summarised to form a unified list of the available methods for use by anyone involved in organisational and system design. Future research will be directed towards the practical application of these methods.

Acknowledgments. This work was financed by the Republic of Kazakhstan's JSC 'Centre for International Programs', which operates under the Ministry of Education and Science of the Republic of Kazakhstan and administers the Kazakhstan President's Bolashaq Scholarship Program, which granted a Ph.D. scholarship to Amangul Imanghaliyeva to support this study.
References

1. Lawley, H.G.: Operability studies and hazard analysis. Chem. Eng. Prog. 70(4), 45 (1974)
2. Srinivas, S.: Error recovery in robot systems. Doctoral dissertation, California Institute of Technology (1977)
3. Crawley, F., Tyler, B.: Hazard Identification Methods. Institute of Chemical Engineers, Rugby (2003)
4. Waterson, P., Robertson, M., Cooke, N.J., Militello, L., Roth, E., Stanton, N.: Defining the methodological challenges and opportunities for an effective science of sociotechnical systems and safety. Ergonomics 58(4), 565–599 (2015)
5. Baxter, G., Sommerville, I.: Socio-technical systems: from design methods to systems engineering. Interact. Comput. 23(1), 4–17 (2011)
6. Axtell, C., Pepper, K., Clegg, C., Wall, T., Gardner, P.: Designing and evaluating new ways of working: the application of some sociotechnical tools. Hum. Fact. Ergon. Manuf. Serv. Ind. 11(1), 1–18 (2001)
7. Imanghaliyeva, A.A., Thompson, P., Salmon, P., Stanton, N.A.: A synthesis of sociotechnical principles for system design. In: International Conference on Applied Human Factors and Ergonomics, pp. 665–676. Springer, Cham (2019)
8. Taylor, J.C.: The human side of work: the socio-technical approach to work system design. Pers. Rev. 4(3), 17–22 (1975)
9. De Carvalho, P.V.: Ergonomic field studies in a nuclear power plant control room. Prog. Nucl. Energy 48(1), 51–69 (2006)
10. Rong, H., Tian, J.: STAMP-based HRA considering causality within a sociotechnical system: a case of Minuteman III missile accident. Hum. Fact. J. Hum. Fact. Ergon. Soc. 57(3), 375–396 (2015)
11. Swain, A., Guttman, H.: Handbook of human reliability analysis with emphasis on nuclear power plant applications. US Nuclear Regulatory Commission Technical report NUREG/CR-1278 (1983)
12. Shorrock, S.T., Kirwan, B., Scaife, R., Fearnside, P.: Reduced vertical separation outside controlled airspace. In: Third Annual Conference on Aviation Safety Management, May 2000
13. Stanton, N., Salmon, P.M., Rafferty, L.A.: Human Factors Methods: A Practical Guide for Engineering and Design. Ashgate Publishing, Farnham (2013)
14. Stanton, N.A.: Hierarchical task analysis: developments, applications, and extensions. Appl. Ergon. 37(1), 55–79 (2006)
15. Flach, J., Carroll, J., Dainoff, M., Hamilton, W.: Striving for safety: communicating and deciding in sociotechnical systems. Ergonomics 58(4), 615–634 (2015)
16. Marx, D.A., Slonim, A.D.: Assessing patient safety risk before the injury occurs: an introduction to sociotechnical probabilistic risk modelling in health care. Qual. Saf. Health Care 12(Suppl. 2), ii33–ii38 (2003)
17. Battles, J.B., Kanki, B.G.: The use of socio-technical probabilistic risk assessment at AHRQ and NASA. In: Probabilistic Safety Assessment and Management, pp. 2212–2217. Springer, London (2004)
18. Hughes, H., Clegg, C., Bolton, L., Machon, L.: Systems scenarios: a tool for facilitating the socio-technical design of work systems. Ergonomics 60, 1–17 (2017)
19. Hanington, B., Martin, B.: Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions. Rockport Publishers, Mountain View (2012)
Designing a Safety Confirmation System that Utilizes Human Behavior in Disaster Situations

Masayuki Ihara, Hiroshi Nakajima, Goro Inomae, and Hiroshi Watanabe

NTT Service Evolution Laboratories, 1-1, Hikari-no-oka, Yokosuka, Kanagawa 239-0847, Japan
[email protected]
Abstract. This paper introduces a use case to discuss how to design services for disaster situations and elucidates the development and evaluation of our safety confirmation system, which utilizes human behavior in disaster situations. The success of the safety confirmation system, which is based on information sharing among evacuation centers, depends on the inter-center movement of humans. In order to know whether the safety confirmation service would work effectively after a massive disaster, we conducted a survey that examined how many evacuation centers were visited after the East Japan Earthquake and the periods over which the evacuees visited the centers. As a result, we confirmed that many evacuees were still visiting centers 72 h after the earthquake, making our technology effective in responding to a disaster.

Keywords: Design · System complexity · Human behavior · Disaster · Safety confirmation
1 Introduction

After the East Japan Earthquake in 2011, communication services were suspended for a long time in many areas. This is because not only network cables but also buildings housing network facilities were damaged by the tsunami in many areas. As regards mobile services, an official report states that 29 thousand routing points were damaged. It took one and a half months to completely fix the damage to both fixed telephone services and mobile services; 80% of mobile service areas were restored within one week. In dire disaster situations such as earthquakes, we should have alternative communication channels able to provide safety confirmation for at least 72 h after a disaster, a period known to be important in saving human life. We have developed a safety confirmation technology that can be used in disaster situations, even when an Internet connection is unavailable. This paper shows how to design services for disaster situations and introduces the development and evaluation of our safety confirmation system, which was developed as software running on smartphones and utilizes human behavior in disaster situations.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 588–593, 2020. https://doi.org/10.1007/978-3-030-39512-4_91
2 Requirements for Services in Disaster Situations

When designing services for disaster situations, it is important to be able to handle the absence of Internet connectivity and the inability to install new application software. After a disaster, an Internet connection cannot be guaranteed, so the services should not require Internet connectivity. To meet this requirement, we decided that the only communication available would be local Wi-Fi. Our implementation assumes that lightweight servers and databases are set in local Wi-Fi routers and that user mobile devices access those servers and databases via the routers. One problem with applications for disaster situations is that the installation of emergency applications is not possible after a disaster. Accordingly, our proposal is completely web-based and uses HTML5.
3 DTN-Based Safety Confirmation

To meet the requirements above, the server, database and local Wi-Fi in an evacuation center enable evacuees to use the safety confirmation service on their own smartphones. To expand the service area, our safety confirmation application makes it possible to share safety confirmation information among evacuation centers via a DTN (Delay Tolerant Network). The DTN is rather unique in that it uses the inter-center movement of humans, rather than regular wired or wireless links, to realize information sharing [1]. Evacuees can thus confirm the safety of family or friends without putting themselves at further personal risk by visiting many evacuation centers themselves.
4 Developed Technologies

We have developed a DTN-based safety confirmation system by implementing a safety confirmation application on a resilient information sharing platform that enables multi-hop Wi-Fi communication [2].

4.1 Resilient Information Sharing Platform
This platform (see Fig. 1) is realized as an HTML5 web-based application hosted by local Wi-Fi routers, so it satisfies the above two requirements. After a disaster, many people frequently move between evacuation centers. This means that the number of mobile devices connected to Wi-Fi in an evacuation center changes frequently. Our platform can accommodate such changes as it uses a JavaScript-based decentralized cooperation control program on each device to collect, store, and distribute information among devices. The stored information is synchronized among the devices accessing the Wi-Fi routers in order to distribute the latest information. The device carried by a person who visits many evacuation centers stores the information held by the current router. When the person connects the device to the Wi-Fi network in another evacuation center, the device uploads its stored information for access/download by other devices and then downloads the information that the center holds.
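As a rough illustration of the store-carry-forward behaviour described above, the sketch below models each router store and device store as a dictionary of safety records and merges them with a newest-timestamp-wins rule. The record structure and names are our assumptions for the example, not the actual platform code (which is JavaScript-based).

```python
# Illustrative store-carry-forward sync: a device first uploads what it
# carries, then downloads what the center's router holds, so records
# propagate between centers via the person's movement.

def sync(device_store: dict, router_store: dict) -> None:
    """Two-way merge of safety records; the newer timestamp wins per person."""
    for src, dst in ((device_store, router_store),
                     (router_store, device_store)):
        for person, record in list(src.items()):
            if person not in dst or dst[person]["ts"] < record["ts"]:
                dst[person] = record

# Invented sample data: two isolated centers and one visitor's device.
center_a = {"Sato": {"status": "safe", "ts": 10}}
center_b = {"Suzuki": {"status": "safe", "ts": 12}}
device = {}

sync(device, center_a)  # visitor connects to center A's Wi-Fi
sync(device, center_b)  # later walks to center B and connects there

# Center B now also knows Sato is safe, carried over by the visitor's device.
print(sorted(center_b))  # ['Sato', 'Suzuki']
```

A per-person newest-timestamp-wins merge is one simple way to keep the replicated stores convergent when the same person registers at several centers; the real platform may resolve such conflicts differently.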
Fig. 1. Resilient information sharing platform.
4.2 Safety Confirmation Application

Fig. 2. Safety confirmation application.
This application is triggered simply by bringing the device into the Wi-Fi coverage area and launching a web browser. Availability is achieved by using a URL redirection technique to automatically navigate users to a service portal menu. When they use the application for the first time, users need to register their own information, such as a name and/or phone number. Figure 2 shows a screenshot and how to use the application.
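The URL-redirection trigger can be sketched, captive-portal style, as a handler that answers any initial request with an HTTP 302 pointing at the portal menu. The path and page content below are illustrative assumptions, not the authors' implementation.

```python
# Minimal captive-portal-style redirection sketch: whatever URL the browser
# tries first, it is bounced to the (hypothetical) portal menu path.
import http.server
import threading
import urllib.request

PORTAL = "/portal"  # hypothetical service-menu path

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != PORTAL:
            # Redirect every other request to the portal menu.
            self.send_response(302)
            self.send_header("Location", PORTAL)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Service Portal Menu</h1>")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 302 automatically, as a browser would.
with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/any/page") as r:
    final_url, body = r.geturl(), r.read()
print(final_url.endswith("/portal"))  # True
server.shutdown()
```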
5 Evaluation

The success of information sharing among evacuation centers depends on the inter-center movement of humans. The system is complex as it treats humans in emergency situations as system components. Our field evaluation confirmed the basic acceptance of the service concept (86% of subjects reacted positively). However, confirmation of whether the safety confirmation service would work effectively after a massive disaster is needed. Moreover, to enhance the algorithm controlling information circulation, we need more facts about people's responses. Thus, we conducted a survey that examined how many evacuation centers were visited after the East Japan Earthquake; we investigated the number of evacuation centers each subject visited in the days after the disaster. This survey, which took the form of a web questionnaire, was answered by 500 subjects (males and females in their 20s to 70s) who experienced the earthquake. We also investigated the period over which the evacuees visited the centers.
6 Results

6.1 The Number of Visited Centers
There were two types of responses to the earthquake: "center-evacuees" left their dwellings and entered an evacuation center, while "home-stays" remained in their dwellings. In this survey, there were 218 center-evacuees and 282 home-stays. Figure 3(a) and (b) show the number of evacuation centers visited by center-evacuees and home-stays, respectively. As shown in Fig. 3(a), 50.5% of center-evacuees visited at least one other evacuation center, and 41.8% of the center-evacuees visited multiple centers. As shown in Fig. 3(b), 28.4% of home-stays visited at least one center, and 35.0% of the home-stays visited multiple centers. Note that one evacuee visited 20 centers, the highest number reported.
Fig. 3. The number of evacuation centers visited: (a) center-evacuees (n = 218); (b) home-stays (n = 282).
From the above results, we confirmed the existence of evacuees who visited multiple centers. More center-evacuees visited other centers than did home-stays. This is because most center-evacuees came from completely devastated areas, and so the centers in those areas were rather overwhelmed. Safety confirmation was a stronger requirement, as was the need for daily necessities.
6.2 Days Spent Visiting Centers
In order to know the period over which our technology would be effective after a disaster, we investigated how many days each evacuee spent visiting other centers. This investigation targeted the 74 evacuees who visited multiple centers. As Figure 4 shows, the proportions of evacuees who finished visiting centers on the day of the earthquake and on the next day were 4.1% and 16.2%, respectively; 79.7% of the evacuees were still visiting centers up to three days after the earthquake. These results mean that many evacuees were still visiting centers 72 h after the earthquake, and 50% of the evacuees kept visiting centers for more than a week. This result confirms that our technology is a potential alternative to regular communication channels, which may be disconnected for a week.
Fig. 4. Days spent visiting centers (n = 74).
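The 79.7% figure is consistent with the other two shares reported for Fig. 4:

```python
# Evacuees who stopped visiting on the day of the earthquake (4.1%) or on
# the next day (16.2%) together account for 20.3%, leaving 79.7% still
# visiting centers up to three days (72 h) after the earthquake.
stopped_day0, stopped_day1 = 4.1, 16.2
still_visiting = round(100.0 - (stopped_day0 + stopped_day1), 1)
print(still_visiting)  # 79.7
```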
From the above results, we confirmed that the period of visiting other evacuation centers was long enough to support the effectiveness of our technology. However, when this technology is used in the future, evacuees may think that they do not need to visit other centers, since they can confirm safety status without doing so. Accordingly, we conducted an additional survey to learn whether evacuees visited other centers for purposes other than gathering safety information. As a result, 55.4% and 52.7% of the 74 evacuees visited other centers for daily necessities and for news about the disaster, such as the level of damage, respectively. These results mean that evacuees will visit other centers even if our safety confirmation service is used. On the other hand, since our technology depends on evacuees' visiting actions, there may be some delay in safety confirmation. Evacuees may also think it is faster to drive a car to other centers. We therefore investigated the means used to visit other centers after the earthquake. As a result, 75.7% of the 74 evacuees visited other centers on foot. This is because many roads were destroyed by the earthquake or tsunami, and cars could not be used due to the lack of gasoline.
7 Related Works

The HCI-related community has long been interested in the nature of people's behaviors as well as their interaction with ICT services during disasters. This includes work to support professionals in emergency situations [3] as well as research on how non-professionals utilize various ICT services in various disasters [4, 5]. Most studies targeting disaster situations analyze people's online behaviors and social media [6, 7]. Sekimoto's study addresses the estimation of human movement in a disaster situation [8]. Song introduced a model of evacuation behavior [9]. There is also a study utilizing a DTN to create a technology for disaster situations [10].
8 Conclusion

In this paper, we introduced a use case to discuss how to design services for disaster situations and presented the development and evaluation of our safety confirmation system. The contribution of our study lies in its provision of a use case of a complex system that utilizes human behavior as a component. Human behaviors such as safety confirmation should be targeted both as services and as possible system components. We should investigate the characteristics of human behaviors in advance and reflect the results in system design.
References

1. Delay-Tolerant Networking Architecture. IETF RFC 4838 (2007)
2. Ihara, M., Seko, S., Miyata, A., Aoki, R., Ishida, T., Watanabe, M., Hashimoto, R., Watanabe, H.: Towards more practical information sharing in disaster situations. In: Yamamoto, S. (ed.) HIMI 2016. LNCS, vol. 9735. Springer, Cham (2016)
3. Pettersson, M., Randall, D., Helgeson, B.: Ambiguities, awareness and economy: a study of emergency service work. In: CSCW 2002, pp. 286–295. ACM Press (2002)
4. Palen, L., Liu, S.B.: Citizen communications in crisis: anticipating a future of ICT-supported public participation. In: CHI 2007, pp. 727–736. ACM Press (2007)
5. Semaan, B., Mark, G.: 'Facebooking' towards crisis recovery and beyond: disruption as an opportunity. In: CSCW 2012, pp. 27–36. ACM Press (2012)
6. Kogan, M., Palen, L., Anderson, K.M.: Think local, retweet global: retweeting by the geographically-vulnerable during Hurricane Sandy. In: CSCW 2015, pp. 981–993. ACM Press (2015)
7. Reuter, C., Ludwig, T., Kaufhold, M.A., Pipek, V.: XHELP: design of a cross-platform social-media application to support volunteer moderators in disasters. In: CHI 2015, pp. 4093–4102. ACM Press (2015)
8. Sekimoto, Y., Sudo, A., Kashiyama, T., Seto, T., Hayashi, H., Asahara, A., Ishizuka, H., Nishiyama, S.: Real-time people movement estimation in large disasters from several kinds of mobile phone data. In: UbiComp 2016, pp. 1426–1434. ACM Press (2016)
9. Song, X., Zhang, Q., Sekimoto, Y., Horanont, T., Ueyama, S., Shibasaki, R.: Modeling and probabilistic reasoning of population evacuation during large-scale disaster. In: KDD 2013, pp. 11–14. ACM Press (2013)
10. Reina, D.G., Askalani, M., Toral, S.L., Barrero, F., Asimakopoulou, E., Bessis, N.: A survey on multihop ad hoc networks for disaster response scenarios. Int. J. Distrib. Sens. Netw. 2015, 647037 (2015)
Designing Ethical AI in the Shadow of Hume's Guillotine

Pertti Saariluoma and Jaana Leikas

University of Jyväskylä, Jyväskylä, Finland
VTT Technical Research Centre of Finland Ltd., Espoo, Finland
[email protected]
Abstract. Artificially intelligent systems can collect epistemic information, but can they be used to derive new values? Epistemic information concerns facts, including how things are in the world, and ethical values concern how actions should be taken. The operation of artificial intelligence (AI) is based on facts, but it requires values. A critical question here regards Hume's Guillotine, which claims that one cannot derive values from facts. Hume's Guillotine appears to divide AI systems into two ethical categories: weak and strong. Ethically weak AI systems can be applied only within given value rules, but ethically strong AI systems may be able to generate new values from facts. If Hume is correct, ethically strong AI systems are impossible, but there are, of course, no obstacles to designing ethically weak AI systems.

Keywords: AI ethics · Design · Hume's Guillotine
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 594–599, 2020.
https://doi.org/10.1007/978-3-030-39512-4_92

1 Introduction

After one of the many "winters" of artificial intelligence (AI) and intelligent technologies, they are again high on research agendas. The performance capacity of computers, their speed, the size of their memory, and the sizes of the data masses available have all increased. Consequently, intelligent systems, including AI, robotics, and autonomous technologies, all currently have serious practical applications. However, improvements in performance capacity will lead to many new challenges concerning the interactions between people and these systems and their effects on human life [1, 2].

Technologies have always been developed to aid people in their everyday tasks [3]. More than fifty years ago, the early computers began to replace mechanical office machines. Since then, large numbers of people have changed the ways they work. Computers changed office work step by step and eventually the organization of firms large and small. Today, examples of this can be found in practically all areas of human work and personal life. They illustrate the effects of the so-called fourth technology revolution on people's lives.

Intelligent systems are like any other technology in that they support people in their personal and work lives, making it possible to improve the quality of human life. However, these systems differ qualitatively from traditional, non-intelligent technologies in their ability to accomplish many tasks that require intelligent information processing. The focus is no longer on how to improve performance in the sensory-motor dimension, but rather on how to improve it in tasks that have required higher human information processes, such as thinking [4, 5]. Intelligent systems have the capacity to improve human performance and even to replace people with machines in many tasks that require intelligent information processing [6], for example, navigating ships, which people have done for millennia because they have the capacity to carry out the required information-processing operations. Today, intelligent systems can not only play games but can also participate in transportation; help to proofread texts and perform many other natural-language processing tasks; sell and buy stocks; analyze massive datasets; make legal decisions; and participate in political, medical, and military tasks in various ways. Thus, intelligent systems are becoming an essential part of modern work and personal life.

However, intelligent information processing involves not only processing data and working with facts. In intelligent information processing and action, ethics and values are often essential. In daily life, which is gradually becoming supported in various ways by intelligent systems, many actions entail ethical components. For example, can an intelligent financial system reject loans from startup firms that, for economic reasons, are located in the inexpensive neighborhoods of cities? Can we accept machines that are tacitly racist? Should expensive medicines be used to care for elderly people whose expected lifespans are comparatively short? Values and morals cannot be put aside when designing tasks and operations for intelligent systems. A brief look at the problems of applied ethics can be enlightening [7, 8]. For example, such concepts as bioethics, medical ethics, and the ethics of businesses and professions illustrate that even practical actions regarding human life are always value laden.
Given that values are always incorporated into actions, should we expect intelligent systems to also process ethical actions? And if so, how can they do this? Understanding the facts does not necessarily ensure the exercise of values, as facts and values are not qualitatively identical: facts are true or false and epistemic in nature, whereas values are more or less justified, and thus are ethical or moral.
2 Epistemic and Ethical Information

Intelligent technologies are designed to conduct human operations in the real world. They can perform such tasks as driving cars or ships, coordinating the operations of various machines in factories, analyzing medical data, and teaching people. Very often, such tasks have two aspects: the facts regarding how things are (epistemic) and the values regarding how things should be and what should be done (ethical). Thus, epistemic information represents states of affairs and entails propositions that have a truth-value. Ultimately, computers can be perceived in terms of bits, electrical states, and the possible states of these bits. Today, vast datasets are typical, and evolving sensor systems are rapidly increasing the data available. This means that, compared to earlier times, the amount of epistemic knowledge is becoming huge, and people do not lack data. Each day, this knowledge base finds new applications that are meaningful to human lives.
P. Saariluoma and J. Leikas
However, facts are not values. Moral information is of a different type. As mentioned, facts tell how things are and whether they are true. Facts have definable references and meanings. There can also be procedures that tell whether facts refer to something and are true of it. In contrast, norms do not have verifiable references in the same sense that facts do. For example, no sensor can indicate who is a good person. Norms are guidelines regarding how people should behave. Using intelligent systems, it is possible to register people’s behavior, and in this sense norms have factual properties. But can values be processed like factual information? The fact that the Nazis killed people in concentration camps does not indicate that their behavior was moral. Undoubtedly, some of these Nazis accepted the prevailing moral system, but the rest of the world hardly did so. The crucial question is whether morals are directly derivable from facts, which would enable intelligent systems to analyze human life and derive new moral norms. Is there a danger that an intelligent system, independently of human involvement, would decide that concentration camps were moral?
3 Hume’s Guillotine

In designing intelligent moral systems, one important question is associated with the two types of information. This question can be formulated as follows: how should the processing of epistemic and moral information be associated and implemented in designing the intelligence of intelligent systems? The crucial difficulty of this question was identified by David Hume more than 200 years ago. The problem is called Hume’s Guillotine [9], which states that one cannot infer values from facts. In terms of the problem in question, this means that one cannot derive moral information from epistemic information. Because the operations of intelligent systems are based on epistemic information, Hume’s Guillotine poses the question of what it means for intelligent systems to process values-based information. To clarify this problem, the scientific and design communities should specifically study the relationship between epistemic and moral information. Hume’s Guillotine was presented in Hume’s main work, “A Treatise of Human Nature” [9]. He did not see any way in which values could be based on facts and reasons. To him, values were based more on emotions. He wrote about this in a very straightforward manner: “It is impossible that the distinction betwixt moral good and evil can be made by reason” [9]. To Hume, actions had two main and independent aspects: reason and passion. For him, morals belonged to the aspect of passions, and processing facts to that of reason. Hume’s Guillotine is a concept of surprising consequence in the design of ethical AI. Researchers argue that AI technologies make it possible to design ethical algorithms and consequently ethical machines. Furthermore, some have seen it as possible to use the data generated by ethical machines to develop modern societies. Both of these positions appear to contradict Hume’s Guillotine. Intelligent systems are machines, which can have different electrical states.
How can an electrical state, which definitely must be analyzed in terms of natural laws,
understand human ethics and values? Computers operate with facts and epistemic knowledge, whereas Hume’s Guillotine separates facts and values. What really happens when computers and intelligent systems must process moral information? Can they generate new values in the same sense as new facts? It seems that the task is more complicated, because ethical machines should link facts and values. Thus, if one hopes to process values autonomously using modern AI systems, it is necessary to circumvent Hume’s Guillotine in some way. A machine can generate a factual state of information, such as a chess position, but it cannot determine whether this position is good or bad. That must be defined by people in their ethical discourses.
4 Morally Weak and Strong AI

In principle, it is easy to provide ethical rules for intelligent machines. For example, if some action causes pain to a human being, an intelligent machine can be programmed to avoid such an action. In practice, it is not straightforward to know what can be painful, but there are no principled obstacles to programming ethical norms into machines and making them follow these norms. In contrast, Hume’s Guillotine implies that machines cannot derive new norms from facts. Analysis of big data can find new types of data combinations, but how can this knowledge be used to generate new values? For example, if Substance X is dangerous for a small group of people suffering from a rare illness, should an intelligent system conclude that all people should give up using Substance X? A natural alternative for solving the ethical problem posed by Substance X is the following: the prevailing ethical and normative position is decided through human social and political discourse, and rules such as “Avoid situations involving Substance X” or “Allow Substance X” can be implemented in the memories of intelligent systems. Thus, people could decide, renew, and implement ethical rules and their content in an intelligent system. However, this type of system would be morally weak and nonautonomous, as it could not independently decide the contents of new moral norms. A morally weak AI system would follow the value systems provided by people and would analyze factual situations, but would not manipulate its system of moral rules. This type of system can also be called a morally narrow AI system. A morally strong AI system, in contrast, would be one that generated new ethical principles and rules based on the analysis of a given factual situation. Such a system would be able to analyze people’s everyday lives and make new rules based on those analyses. A system with this type of machine learning could independently improve human value systems.
A morally strong AI system could have the capacity to reflectively manipulate its own ethical rules and their practical consequences and could even generate new moral rules. The validity of Hume’s Guillotine is crucial in thinking about whether it would be possible to construct a morally strong AI system.
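The morally weak, rule-following architecture described above, in which human-decided norms are stored in the system’s memory and consulted before action, can be sketched in a few lines of Python. Everything here is purely illustrative: the rule names and actions are invented for this sketch and do not come from the paper.

```python
# A minimal sketch of a "morally weak" AI: it follows human-authored
# norms but cannot create or revise them. All names are illustrative.

# Norms are decided in human ethical and political discourse and are
# merely stored here; the system never derives new entries from facts.
HUMAN_AUTHORED_NORMS = {
    "administer_substance_x": False,  # "Avoid situations involving Substance X"
    "approve_loan": True,
}

def is_permitted(action: str) -> bool:
    """Check a proposed action against the human-given rule set.
    Actions with no human-decided rule are denied by default."""
    return HUMAN_AUTHORED_NORMS.get(action, False)

def act(action: str) -> str:
    # The system analyzes factual situations, but never edits the norms
    # itself: deriving a *new* norm from facts is exactly the step that
    # Hume's Guillotine forbids.
    if is_permitted(action):
        return f"executing {action}"
    return f"refusing {action}; escalating to human moral discourse"

print(act("approve_loan"))            # permitted by people
print(act("administer_substance_x"))  # forbidden by people
print(act("novel_situation"))         # no human rule yet, so escalate
```

A morally strong system would instead rewrite `HUMAN_AUTHORED_NORMS` itself from observed facts, which is precisely the move the paper argues is blocked.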
5 Aporia of AI Design

Hume’s Guillotine appears to be a watershed between morally weak and morally strong AI. If Hume’s Guillotine is true, morally strong AI is impossible. If Hume’s Guillotine is incorrect, morally strong AI can be constructed. The latter possibility is very appealing, as it would make it possible to massively analyze the conditions of people’s lives and let intelligent technologies decide how people should behave and what the laws should be like, thereby obviating the need for political discourse to make laws. Thus, Hume’s Guillotine presents an important problem for all AI designers. This problem is serious. AI systems can effectively analyze the state of affairs in a society and thus promote understanding of that society, its states, and the states of the sectors and groups within it. However, this would not be helpful unless it were possible to design intelligent systems that could turn facts into values. Unless this is possible, human involvement is necessary. One can imagine several ways of solving the problem of Hume’s Guillotine. Herein, we present one: because it is evident that one can construct morally weak AI, it might be possible to construct systems of morally weak systems and eventually reach a morally strong AI. However, the number of morally weak systems required might be huge or even infinite, and the task thus impossible. Partial solutions can be tested using the Turing test, enabling study of whether morally weak AI systems behave like people. It would also be possible to test whether such systems behaved more ethically than people did in a given situation, and even whether ethical AI can perform better than people can. The latter would be a worthy goal, but it does not solve the main problem: deriving values from facts. Indeed, it is possible to have systems with ethical content, but that is not a solution to the actual problem of morally strong AI.
From the point of view presented herein, it appears that it is impossible to construct strong ethical AI. One cannot solve the general problem of strong ethical AI unless one can find a way to get machines to derive values from facts. Assuming that Hume’s Guillotine is correct, this solution does not work, and humankind cannot leave ethical content and ethical thinking to AI systems. This means that human beings must recognize the action situations that require ethical regulation and must construct the basic value systems. Furthermore, political and ethical discourse must be conducted to validate ethics. Finally, values must be implemented in the form of social norms or laws. In light of our present understanding of facts and values, it is possible to construct any number of ethically operating systems, but not morally strong AI. In sum, constructing morally weak AI is easy. Machines can follow given values. However, Hume’s Guillotine seems to prevent people from developing genuinely autonomous moral AI and intelligent systems. Such systems would be very practical in designing and developing life in societies, but there is no clear path from the present to autonomous, morally strong AI. It may be that a large number of morally weak systems would enable us to reach a moral AI that is sufficiently strong to advance society, but these systems’ moral information would always need to be implemented by people and decided through human moral discourses. This is the basic aporia of ethical AI.
References
1. Ford, M.: Rise of the Robots. Basic Books, New York (2015)
2. Tegmark, M.: Life 3.0: Being Human in the Age of Artificial Intelligence. Random House, New York (2017)
3. Bernal, J.: Science in History. Penguin Books, Harmondsworth (1969)
4. Saariluoma, P.: Four challenges in designing human-autonomous system design processes. In: Williams, A., Scharre, P.D. (eds.) Autonomous Systems: Issues for Defence Policymakers. NCI, The Hague (2015)
5. Saariluoma, P., Cañas, J., Leikas, J.: Designing for Life. Palgrave Macmillan, London (2016)
6. Newell, A., Simon, H.: Human Problem Solving. Prentice-Hall, Englewood Cliffs (1972)
7. Bowen, W.R.: Engineering Ethics. Springer, London (2009)
8. Stahl, B.C.: Social issues in computer ethics. In: Floridi, L. (ed.) The Cambridge Handbook of Information and Computer Ethics, pp. 101–115. Cambridge University Press, Cambridge (2010)
9. Hume, D.: A Treatise of Human Nature. Dent, London (1738/1972)
A Counterattack of Misinformation: How the Information Influence to Human Being

Subin Lee and Ken Nah(&)

IDAS, Hongik University, 57 Daehakro, Jongno-gu, Seoul, Republic of Korea
[email protected], [email protected]
Abstract. This study aims to investigate how the issue of misinformation has expanded in academic fields. Using bibliometric data from the recent decade, we employ network analysis on author keywords to reveal the evolution of the academic network. Our results show that the academic areas related to misinformation have grown rapidly and that the author-keyword network has become more complicated. This paper indicates that the theme of misinformation is becoming more critical in academia, which leads us to estimate its impact in the real world.

Keywords: Misinformation · Deepfake · Fake news · Network analysis · Design thinking
1 Introduction

Thanks to the spread of the Internet, information technology is increasingly becoming an integral part of contemporary life. At the same time, misinformation is continuously disseminated. Despite increasing efforts to develop new computational algorithms to detect misinformation, people living in the information age must still pay attention to whether a given piece of information is true. Information can easily be distorted by cutting-edge technologies such as artificial intelligence (AI). As a result, unexpected social and ethical problems emerge from the uncertainty of information. For example, the fabricated video of Barack Obama known as a deepfake, posted on BuzzFeed in 2017, warned of the risk that what appears real in the world is no longer real [1, 2]. Experts have therefore argued that, in the information age, people need a high level of education in recognizing fake news, a competence closely related to future skills. For these reasons, keywords such as deepfake, misinformation, and fake news have come to play a vital role in academia. The purpose of this study is to examine how fake news, deepfake, and misinformation have been raised as social issues, using author-keyword data that reveal the current key issues of users and academia.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 600–604, 2020. https://doi.org/10.1007/978-3-030-39512-4_93
2 Background

It is well known that fake news, deepfakes, and misinformation have risen as new social issues. Deepfake technology hit the headlines in 2017, when University of Washington researchers released a paper describing how they had created a fake video of President Barack Obama. The researcher behind the technology, Dr. Supasorn Suwajanakorn, admits that it has the potential for misuse. In addition, recent research from a cyber-security company named ‘Deeptrace’ found 14,698 deepfake videos online, compared with 7,964 in December 2018 [2]. This alarming surge in so-called misinformation has led to a boost in scientific research in academia. Figure 1 shows how frequently the search terms misinformation, fake news, and deepfake were entered into Google’s search engine during the last decade. It turns out that the topics’ growth has accelerated since 2016. These trends support the view that the issue of misinformation has become important in the information age.
Fig. 1. Google trend of issues on misinformation
3 Data

This study collected bibliometric data from the recent decade, 2009 to 2018. The search terms used to define the scope of misinformation were “misinformation”, “fake news”, and “deepfake” in Web of Science. The resulting sample comprised 1,867 articles.
4 Descriptive Statistics

The number of academic papers has increased, as shown in Fig. 2. The annual growth rate was 21.89%, indicating that misinformation-relevant fields have expanded enormously. In particular, the growth slope changed rapidly in 2017. Figure 3 shows the growth of the top 10 author keywords over the last decade. Overall, the slope has increased continuously, and the occurrences of author keywords, especially fake news and social media, rose rapidly in 2017. These results are similar to the misinformation trend observed in Google Trends.
Fig. 2. Annual academic production
Fig. 3. Yearly occurrences of top author keywords
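For reference, an average annual growth rate like the 21.89% quoted above can be computed from the first-year and last-year article counts. The counts below are hypothetical placeholders, not the study’s actual data:

```python
# Computing an average annual (compound) growth rate from yearly counts.
first_year_count = 80   # hypothetical number of articles in 2009
last_year_count = 480   # hypothetical number of articles in 2018
steps = 2018 - 2009     # nine year-over-year growth steps

growth_rate = ((last_year_count / first_year_count) ** (1 / steps) - 1) * 100
print(f"average annual growth rate: {growth_rate:.2f}%")
```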
5 Future Analysis Strategies

This study employs network analysis of author keywords, relying on the concept of co-occurrence investigation. Author keywords are selected by the authors themselves and thus best describe the contents of their work [3]. This allows an article’s content to be captured with minimal additional effort to interpret the authors’ intentions. Thus, many scholars use author keywords as the unit of analysis for constructing a co-word network. In this paper, the co-occurrence matrix is derived from an occurrence matrix of order m × n. The n objects denote the author keywords we want to analyze, and there are m documents on which the co-occurrence analysis is based. The occurrence matrix is binary: if author keyword j occurs in document i, the element in the ith row and jth column equals one, and otherwise zero. We can derive the co-occurrence matrix of order n × n by multiplying the transpose of the occurrence matrix by the occurrence matrix. This co-occurrence matrix of author keywords is used for the network analysis. The paper divides the data into two five-year windows to examine the change in the network: T1 is the first period, from 2009 to 2013, and T2 is the second period, from 2014 to 2018. Using the co-occurrence matrix, we will analyze network centrality, including degree centrality, betweenness centrality, and closeness centrality, which lets us understand how the academic network of misinformation-relevant themes has evolved.
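The construction just described, a binary m × n occurrence matrix multiplied by its transpose to obtain the n × n keyword co-occurrence matrix, can be sketched with toy data. The documents and keywords below are invented for illustration, and degree centrality is computed in its simplest binarized form (the number of distinct co-occurring keywords):

```python
import numpy as np

# Binary occurrence matrix O (m documents x n author keywords):
# O[i, j] = 1 if keyword j appears in document i. Toy data only.
keywords = ["misinformation", "fake news", "deepfake", "social media"]
O = np.array([
    [1, 1, 0, 1],   # doc 1: misinformation, fake news, social media
    [1, 0, 1, 0],   # doc 2: misinformation, deepfake
    [0, 1, 0, 1],   # doc 3: fake news, social media
])

# The n x n co-occurrence matrix is O-transpose times O, as in the text:
# entry (j, k) counts documents containing both keyword j and keyword k,
# and the diagonal holds each keyword's total occurrence count.
C = O.T @ O

# Degree centrality of each keyword in the co-word network, here taken
# as the number of distinct keywords it co-occurs with (off-diagonal
# nonzero entries; subtract 1 to discount the diagonal self-entry).
degree = (C > 0).sum(axis=1) - 1
for kw, d in zip(keywords, degree):
    print(kw, int(d))
```

Betweenness and closeness centrality, also named in the text, would then be computed on the graph whose weighted adjacency matrix is C with its diagonal zeroed.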
6 Conclusion

The changing trend of author keywords reflects the attention not only of academia but also of the real world at a given time. As described in our preliminary results, the dimensions of misinformation have diversified across academic topics such as social media, fake news, and false memory. For this reason, we should consider the importance of misinformation issues carefully. According to the work of Tversky and Kahneman in 1974, the intuitive mind tends to simplify, so that ordinary people exhibit cognitive biases shaped by rough guesses. Their findings argue that people’s biases arise more from the cognitive system than from personal emotion [4]. The rationality of human beings in the deluge of information needs to be redefined, because the rhetoric that human beings are rational is, unfortunately, a myth, as supported by several studies ranging from psychology to neuroscience. Human beings have evolved to form schemas in order to react adequately to given situations. However, the process of forming schemas can differ between online and offline conditions. Information processing is likely to be distorted within online spaces such as the Internet, and personal cognition may be limited by overlooking unexpected online events that people have not experienced. For this reason, our future study will redefine design-thinking methods related to cognitive style in problem solving that help people understand misinformation, including fake news and deepfakes, visually [5, 6].
References
1. He Predicted the 2016 Fake News Crisis. Now He’s Worried About an Information Apocalypse. BuzzFeed News. https://www.buzzfeednews.com/article/charliewarzel/the-terrifying-future-of-fake-news. Accessed 14 Oct 2019
2. Google Makes Deep Fakes to Fight Deep Fakes. BBC News. https://www.bbc.com/news/technology-49837927. Accessed 14 Oct 2019
3. Lee, P.-C., Su, H.-N.: Investigating the structure of regional innovation system research through keyword co-occurrence and social network analysis. Innovation 12, 26–40 (2010)
4. Tversky, A., Kahneman, D.: Judgment under uncertainty: heuristics and biases. Science 185, 1124–1130 (1974). https://doi.org/10.1126/science.185.4157.1124
5. Kim, E., Kim, K.: Cognitive styles in design problem solving: insights from network-based cognitive maps. Des. Stud. 40, 1–38 (2015)
6. Tversky, B., Suwa, M.: Thinking with sketches
Effects of Increased Cognitive Load on Field of View in Multi-task Operations Involving Surveillance

Seng Yuen Marcus Goh1(&), Ka Lon Sou1, Sun Woh Lye2, and Hong Xu1

1 School of Social Sciences, Nanyang Technological University, 48 Nanyang Ave, Singapore 639818, Singapore
[email protected]
2 School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Ave, Block N3, Singapore 639798, Singapore
Abstract. Many operations, such as air traffic control, which requires simultaneous surveillance and communication, involve multi-tasking. Yet, research on the effects of multi-tasking on visual processing tends to revolve around driving. Similar effects on the functional field of view in multi-tasking operations involving surveillance, together with the stimulus parameters that might be implicated, remain a gap in understanding. In this study, we investigated the effects of the presence of a secondary task and of stimulus parameters (size and contrast) on the response accuracy and response time for stimuli appearing in the visual field. Mixed analyses of variance revealed that response time, but not accuracy, was affected by engagement in multiple tasks. An interaction between the parameters of the presented stimuli signals the need to consider external factors in multi-task operations. Implications and future directions are discussed.

Keywords: Functional field of view · Multi-task · Size · Contrast · Visual performance · Human errors
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 605–611, 2020. https://doi.org/10.1007/978-3-030-39512-4_94

1 Introduction

With technological advancements, traditional manual labour is becoming increasingly automated. Humans are thus expected to carry out different tasks than before, such as the supervision of unmanned vehicle systems [1]. For example, the task of the driver in an autonomous vehicle will switch to surveillance and standing by for cases of malfunction or situations unknown to the automated systems. In various industries, such advancement presumably allows humans to engage in multiple tasks simultaneously (i.e., multi-tasking). Multi-tasking may increase productivity, but it could come with a trade-off. Among the many industries, operational industries involving surveillance tasks are particularly implicated. Surveillance is a visual task that is often coupled with a communication task. For instance, air traffic controllers (ATCOs) monitor the airspace while concurrently managing communication with the pilots of the aircraft in their
sectors: giving instructions (to the pilots) and taking requests (from the pilots) on adjusting the aircraft’s altitude or velocity to prevent conflicts or collisions. With air traffic density predicted to increase in the future, ATCOs’ work will become more cognitively intense and complicated [2]. In the current study, we are particularly interested in the functional field of view (FFoV). FFoV is the area in the human visual field where stimuli can be detected and processed [3]. Multi-tasking can degrade FFoV in two ways, resulting in either tunnel vision [4] or general interference [5]. These degradation theories have been studied in the context of cell phone use while driving [6–8]. However, instead of testing participants on a particular driving task, we will examine the effects of multi-tasking on a more general visual task, such that the results would be generalisable to surveillance tasks in other industries, such as that of ATCOs. This study investigates the effect of task requirements, that is, whether one is engaging in multiple tasks or a single task, on one’s visual task performance. Engaging in multiple tasks is hypothesised to result in poorer visual task performance. Extending from the context of air traffic control, we identified size and contrast as the parameters of the stimulus in the visual task, as they are particularly pertinent to such surveillance contexts. Visual task performance is expected to decrease with both stimulus size and contrast. Moreover, the question arises whether there are interactions among these factors. We investigate these hypotheses and questions in the current study.
2 Methods

The study has a 2 (task: multi-task vs. single-task) × 2 (stimulus size: large vs. small) × 2 (stimulus contrast: high vs. low) mixed-subject design¹. The dependent variables measuring visual task performance are response accuracy and response time.

2.1 Participants

Twenty participants (10 females, average age: 23.35 years), with normal or corrected-to-normal vision, consented to participate in the study. Each participant was compensated for their time and effort. This study was approved by the Institutional Review Board (IRB) at Nanyang Technological University, Singapore. Participants were randomly allocated to the experimental (multi-task) and control (single-task) groups. Within each group, the participants were tested on all 4 possible combinations of large/small stimulus size and high/low stimulus contrast. The details of these conditions are explained in Procedure.
¹ Cognitive resource depletion was the between-subject variable, while stimulus size and stimulus contrast were the within-subject variables.
2.2 Materials

The study was run using MATLAB [9]. Using functions from the Psychophysics Toolbox extensions [10–12], a fixation cross was presented at the centre of the screen. The test stimulus, a Gabor patch grating, was presented at various locations on the screen, followed by a beep sound as a cue for the participants to respond.

2.3 Procedure

A randomised blocked design was adopted in the experiment. Each participant went through four different blocks; the blocks differed in the parameters of the presented stimulus: large stimulus with high contrast, large stimulus with low contrast, small stimulus with high contrast, and small stimulus with low contrast (Fig. 1 shows the various stimuli). The order of the four blocks was randomised across all participants without any particular order repeating. The test stimulus appeared at various locations on the screen, followed by a beep. Participants had to judge whether the grating was tilted to the left or the right by 45°, by pressing the corresponding left or right key on the keyboard. The experimental group, in addition to the above orientation judgment task, had to recite numbers backwards in steps of three, starting from one hundred. If they made a mistake, they had to start from the beginning; if they managed to recite smoothly to zero before the end of the trials, they repeated the task from ninety-nine. The purpose of this secondary task was to simulate multi-tasking in various operations. Participants received sufficient training to practice the tasks before they were tested.
Fig. 1. Various Gabor patch gratings used as stimuli in the experiment. Large (21 × 21 pixels) stimulus with high contrast (top left), large stimulus with low contrast (top right), small (11 × 11 pixels) stimulus with high contrast (bottom left), and small stimulus with low contrast (bottom right)
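A Gabor patch grating of the kind used here, a sinusoidal grating under a Gaussian envelope, can be generated in a few lines of NumPy. The wavelength and envelope values below are illustrative guesses, not the study’s actual settings:

```python
import numpy as np

def gabor_patch(size_px, wavelength_px, theta_deg, contrast, sigma_px):
    """Grayscale Gabor patch: a sinusoidal grating windowed by a
    Gaussian envelope, on a mid-gray (0.5) background luminance."""
    half = size_px // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = np.deg2rad(theta_deg)
    # Sinusoidal grating oriented at theta (e.g. +/-45 deg, as in the task)
    grating = np.cos(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / wavelength_px)
    # Gaussian envelope confines the grating to a soft-edged patch
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))
    # Contrast scales the modulation amplitude around the 0.5 background
    return 0.5 + 0.5 * contrast * grating * envelope

# Large/high-contrast vs small/low-contrast stimuli, as in the four blocks
large_high = gabor_patch(21, wavelength_px=8, theta_deg=45, contrast=1.0, sigma_px=5)
small_low = gabor_patch(11, wavelength_px=8, theta_deg=-45, contrast=0.2, sigma_px=3)
print(large_high.shape, small_low.shape)  # (21, 21) (11, 11)
```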
3 Results

A multivariate analysis of variance (MANOVA) was run to examine overall differences, with response accuracy and response time as the dependent variables, and task, stimulus size, and stimulus contrast as the independent variables. Main effects of Stimulus Size
(F(2, 17) = 32.99, p < .001) and Stimulus Contrast (F(2, 17) = 24.73, p < .001), as well as the interaction effect of Stimulus Size × Stimulus Contrast (F(2, 17) = 8.03, p = .004), were significant in the multivariate analysis. However, none of the effects involving Task were significant (ps > .05). A 2 × 2 × 2 mixed analysis of variance (ANOVA) then followed, with task as the between-subject variable, and stimulus size and stimulus contrast entered as the within-subject variables. The ANOVAs were run separately for each dependent variable: response accuracy and response time. Response accuracy and time were negatively correlated, but not significantly (r = −.290, p = .215).

3.1 Response Accuracy

The ANOVA on response accuracy found significant main effects of stimulus size (F(1, 18) = 69.26, p < .001, η² = .794) and stimulus contrast (F(1, 18) = 49.53, p < .001, η² = .733), but not of task (F(1, 18) = .114, p = .740, η² = .006). Response accuracy in detecting the large stimulus (M = .848, SE = 0.029) was significantly higher than in detecting the small stimulus (M = .674, SE = 0.022). Similarly, accuracy in detecting the high-contrast stimulus (M = .809, SE = 0.025) was significantly higher than in detecting the low-contrast stimulus (M = .712, SE = 0.024). The interaction between stimulus size and stimulus contrast was also significant (F(1, 18) = 15.03, p = .001, η² = .455), which may be related to the different effects of contrast at large vs. small stimulus sizes. For the large stimulus size, there was a non-significant difference in accuracy, t(19) = .527, p = .604, d = .117, between the high-contrast stimulus (M = .854, SD = .119) and the low-contrast stimulus (M = .841, SD = .161). For the small stimulus size, on the other hand, there was a significant difference in accuracy, t(19) = 7.05, p < .001, d = 1.58, between the high-contrast stimulus (M = .764, SD = .126) and the low-contrast stimulus (M = .583, SD = .092). No other interactions were significant. The participants’ response accuracy is summarised in Fig. 2.
Fig. 2. Summary of participants’ response accuracy
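The contrast comparisons reported above (e.g. the t(19) tests between high and low contrast within each stimulus size) are standard paired t-tests on per-participant scores. The sketch below runs such a test on simulated accuracies, not the study’s data; the means and spreads are invented placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # participants, as in the study

# Simulated per-participant accuracies (placeholders, not the real data):
# for small stimuli, high contrast is assumed to help noticeably.
acc_high = np.clip(rng.normal(0.76, 0.10, n), 0, 1)
acc_low = np.clip(acc_high - rng.normal(0.18, 0.05, n), 0, 1)

# Paired t-test: the same participants contribute both conditions,
# mirroring the within-subject t(19) comparisons in the text.
t, p = stats.ttest_rel(acc_high, acc_low)

# One common paired effect size (Cohen's d on the difference scores)
diff = acc_high - acc_low
d = diff.mean() / diff.std(ddof=1)

print(f"t(19) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```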
3.2 Response Time

All three main effects (task, stimulus size, and stimulus contrast) were significant for response time. Participants in the experimental group showed significantly longer response times (M = 1.05, SE = 0.092) than participants in the control group (M = .716, SE = 0.092), F(1, 18) = 6.52, p = .020, η² = .266. Furthermore, response times were significantly shorter when detecting the large stimulus (M = .786, SE = 0.065) than the small stimulus (M = .976, SE = 0.081), F(1, 18) = 7.35, p = .014, η² = .290. Similarly, response time for detecting the high-contrast stimulus (M = .812, SE = 0.062) was significantly faster than for the low-contrast stimulus (M = .951, SE = 0.074), F(1, 18) = 10.69, p = .004, η² = .373. None of the interactions were significant. The participants’ response times are summarised in Fig. 3.
Fig. 3. Summary of participants’ response time
4 Discussion

We found significantly poorer visual task performance when multi-tasking than when single-tasking in terms of response time, but not accuracy. This finding implies that participants engaging in multiple tasks may not divide their attention among the requirements of the tasks simultaneously, but instead alternate their attention between the tasks. The implication for multi-task operations is critical: multi-tasking may not necessarily result in erroneous processing of visual stimuli (accuracy), but it does take a toll on timely response to incidents (response time). Apart from multi-tasking, other factors may affect response accuracy. For example, the interaction between stimulus size and stimulus contrast on accuracy was significant, suggesting that erroneous processing becomes a concern when a heterogeneous variety of visual stimuli needs to be detected. Specifically, contrast has little effect on accuracy when the stimulus is large, but impairs performance when the stimulus is small. This carries implications for multi-task operations when the environment of the area under surveillance varies. For instance, external factors such as weather (e.g. rain or fog) can influence the contrast of surrounding objects in air traffic control.
610
S. Y. M. Goh et al.
Some limitations may be present in this study. Firstly, visual task performance (accuracy and response time) as a function of distance from the point of fixation was not examined in this paper, as there was no meaningful trend in the threshold of the FFoV. In other words, the participants’ visual task performance did not decrease consistently with increasing distance from the point of fixation, possibly because task performance reached a plateau with little variation in this respect. Secondly, as the cognitive load imposed by the secondary task used in this study was not measured, it is plausible that the task was not sufficiently difficult to reveal other impacts of multi-tasking. The secondary task should therefore not be regarded as representative of the actual tasks (which might demand greater cognitive load) that such operations require of their operators. Moving forward, future studies should address these limitations: (1) determine how the threshold of the FFoV varies with task requirements and stimulus parameters, and (2) replicate the study using secondary tasks that are more representative of the context in which the results would be applied. Such findings can go a long way towards reducing the human errors that can potentially result in dire consequences in multi-task surveillance operations, such as air traffic control.
5 Conclusion
As technology advances, human beings are presumed to be able to multi-task [13]. This research demonstrated the potential impact of multi-tasking on visual task performance: even for a simple visual orientation judgment task, imposing a secondary task can impair one’s visual task performance. When we consider the multitude of possible tasks in multi-task operations, this issue can only become more complicated. Further research can inform the necessary adjustments to task requirements in multi-task operations.
References
1. Squire, P.N., Parasuraman, R.: Effects of automation and task load on task switching during human supervision of multiple semi-autonomous robots in a dynamic environment. Ergonomics 53(8), 951–961 (2010)
2. Monechi, B., Vito, D.P.S., Loreto, V.: Congestion transition in air traffic networks. PLoS One 10(5), e0125546 (2015)
3. Sanders, A.F.: Some aspects of the selective process in the functional visual field. Ergonomics 13, 101–117 (1970)
4. Williams, L.J.: Cognitive load and the functional field of view. Hum. Factors 24, 683–692 (1982)
5. Holmes, D.L., Cohen, K.M., Haith, M.M., Morrison, E.J.: Peripheral visual processing. Percept. Psychophys. 22, 571–577 (1977)
6. Recarte, M.A., Nunes, L.M.: Effects of verbal and spatial-imagery tasks on eye fixations while driving. J. Exp. Psych.: Appl. 6, 31–43 (2000)
7. Strayer, D.L., Drews, F.A.: Profiles in driver distraction: effects of cell phone conversations on younger and older drivers. Hum. Factors 46, 640–649 (2004)
Effects of Increased Cognitive Load on Field of View
611
8. Strayer, D.L., Drews, F.A.: Multitasking in the automobile. In: Kramer, A., Wiegmann, D., Kirlik, A. (eds.) Attention: From Theory to Practice, pp. 121–133 (2006)
9. MATLAB version 7.10.0 (R2010a). The MathWorks Inc., Natick, Massachusetts (2010)
10. Brainard, D.H.: The psychophysics toolbox. Spat. Vis. 10, 433–436 (1997)
11. Pelli, D.G.: The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis. 10, 437–442 (1997)
12. Kleiner, M., Brainard, D., Pelli, D.: What’s new in Psychtoolbox-3? Perception 36, ECVP Abstract Supplement (2007)
13. Lodinger, N.R., DeLucia, P.R.: Does automated driving affect time-to-collision judgments? Proc. Hum. Factors Ergon. Soc. Ann. Meet. 62(1), 1833 (2018)
Investigating Human Factors in the Hand-Held Gaming Interface of a Telerehabilitation Robotic System

S. M. Mizanoor Rahman
Department of Intelligent Systems and Robotics, Hal Marcus College of Science and Engineering, University of West Florida, 11000 University Pkwy, Pensacola, FL 32514, USA
[email protected]
Abstract. Robotic devices can be used as effective tools to provide rehabilitation support to stroke patients. Such robotic rehabilitation practices can replace therapists in providing rehabilitation therapy, and patients can independently practice rehabilitation with robotic devices for longer hours. These advantages are now motivating patients to use rehabilitation robots. However, contemporary rehabilitation practices are expensive, and patients need to travel to rehabilitation centers, which is time-consuming and burdensome. This is why the concept of telerehabilitation is becoming more and more popular: patients can practice rehabilitation at home, while therapists remotely monitor the rehabilitation practices and communicate with the patients if necessary. Such rehabilitation practices, real-time monitoring of patients by therapists from distant places, and communication between patients and therapists may be more intuitive and human-friendly if the patients can operate the rehabilitation system and communicate with the therapists using a hand-held interface. The hand-held interface may in turn be more intuitive and engaging if the rehabilitation performance can be expressed through game-like activities. To achieve this, a clear understanding of the human factors involved in the gaming interface is necessary; however, such knowledge is not available in the literature. To address this knowledge gap, this paper investigates the human factors associated with the operation of a hand-held gaming interface for robot-assisted full-body smart telerehabilitation of stroke patients. First, potential human factors associated with the operation are identified through surveys conducted with healthcare professionals, researchers and patients. The identified human factors are then analyzed and divided into categories, e.g. physical and cognitive human factors.
The role of each human factor in the interface operation is explained. The findings can be utilized to design and develop hand-held gaming interfaces for robot-assisted telerehabilitation that are more human-friendly and intuitive.

Keywords: Human factors · Gaming interface · Rehabilitation robot · Stroke patient · Telerehabilitation
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 612–618, 2020. https://doi.org/10.1007/978-3-030-39512-4_95
1 Introduction
The application of robots in post-stroke rehabilitation is now a hot topic of research and development due to the various advantages of robot-assisted rehabilitation [1, 2]. Different novel actuation methods have been proposed to make rehabilitation devices compliant and safe [3–5]. However, making rehabilitation systems more user-friendly is still an open issue [6]. In this direction, hand-held gaming interfaces are a very recent addition to wearable healthcare devices, including rehabilitation devices: they let users monitor rehabilitation performance in real time through hand-held visual interfaces based on game-like activities performed during actual rehabilitation practice [7]. The usability of such interfaces must be high, as they are hand-held during rehabilitation practice. The usability and user acceptance of hand-held interfaces depend on how the human factors associated with the interfaces are identified and addressed during design [8]. Usability concerns include the physical interfaces of the devices as well as the ways of expressing and displaying the rehabilitation performance through the visual interfaces [9]. However, the investigation of human factors involved in hand-held gaming interfaces has not received much attention yet; as a result, the user-friendliness, and hence user acceptance, of the proposed hand-held gaming interfaces is still questionable. On the other hand, telerehabilitation is a concept that can keep patients connected with healthcare providers and patient communities [10, 11]. Hand-held gaming interfaces can foster telerehabilitation because the status of the rehabilitation performance can be easily monitored and shared through the hand-held visual interfaces [7, 9, 10].
Hence, the human factors of hand-held gaming interfaces need further investigation, because the users (patients) may not feel comfortable using the gaming interfaces in dynamic contexts if the interfaces do not satisfy their ergonomic and psychological requirements [12]. However, the investigation of human factors associated with gaming interfaces for telerehabilitation has not been prioritized. The objective of this paper is to investigate the human factors associated with the design of hand-held gaming interfaces for robot-assisted smart telerehabilitation. First, a complete full-body post-stroke telerehabilitation system that includes a hand-held gaming interface is introduced. Then, potential human factors associated with the interface are identified through surveys conducted with relevant healthcare professionals, researchers and patients, and the identified human factors are analyzed.
2 The Proposed Full-Body Rehabilitation System with Hand-Held Gaming Interface and the Telerehabilitation Scenario
Figure 1 shows the proposed overground mobile full-body post-stroke rehabilitation robotic system, actuated by a compliant actuation mechanism. Details of the configuration, working principles and actuation mechanisms are provided in [1–5]. A hand-held user interface via a mobile phone is shown in the figure. When the patient conducts rehabilitation practice wearing appropriate sensors, the hand-held interface can display the rehabilitation performance expressed in terms of kinetics and kinematics based on
some game-like activities that the patient performs during the rehabilitation practice. The performance can also be displayed on a large fixed screen placed in front of the patient during the rehabilitation session [9] (see Fig. 1). The game-like user interface can be designed using the Unity software for the stroke rehabilitation system shown in Fig. 1. In the proposed user interface, the patient can practice gait or upper-arm rehabilitation, and the rehabilitation performance can be measured in real time using suitable sensors (position, force) that the patient wears on the relevant body parts. The rehabilitation performance can be expressed in terms of the force/torque and linear/angular velocity that the patient generates at each joint during the gait cycle (lower-limb rehabilitation) or the arm movement (upper-limb rehabilitation) while performing game-like activities (e.g., trying to reach a location within a specific time using the mobile overground exoskeleton for lower-limb rehabilitation) as part of the rehabilitation practice. The patient’s kinetic and kinematic performance changes in real time, and the changes are reflected through the interface as bar diagrams. The interface can be installed on a mobile device that the user (patient) holds while performing the rehabilitation practice/game. The performance can also be displayed on a monitor [9].
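As an illustration of the kinematic quantities described above, the sketch below estimates joint angular velocity from sampled joint angles using central differences. The sample values, sampling interval and helper name are hypothetical, not part of the described system.

```python
# Hypothetical sketch: estimate angular velocity (rad/s) from sampled joint
# angles, the kind of kinematic quantity the interface could plot as bars.
def angular_velocity(angles, dt):
    """Central-difference velocity; forward/backward differences at the ends."""
    v = []
    for i in range(len(angles)):
        if i == 0:
            v.append((angles[1] - angles[0]) / dt)
        elif i == len(angles) - 1:
            v.append((angles[-1] - angles[-2]) / dt)
        else:
            v.append((angles[i + 1] - angles[i - 1]) / (2 * dt))
    return v

knee = [0.0, 0.1, 0.25, 0.45, 0.6]   # invented knee angles over a gait phase
print(angular_velocity(knee, dt=0.05))
```

In a real system, the same computation would run over streaming sensor samples rather than a fixed list.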
Fig. 1. The proposed overground mobile full-body post-stroke rehabilitation robotic system and the hand-held gaming interface (left), and the display in a large fixed screen (right).
The rehabilitation system with the hand-held gaming interface shown in Fig. 1 can be extended to telerehabilitation, as shown in Fig. 2 [10]. The patient can practice rehabilitation using the robotic device at home, the performance can be displayed and interacted with through the hand-held gaming interface, and the status can be shared with healthcare professionals or patient communities at a distance, in real time or near real time, through an appropriate internet- or cloud-based communication system [10, 11].
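One plausible way to package such status updates for sharing is a timestamped JSON message, as in the following sketch. The field names and helper function are illustrative assumptions, not a protocol defined by the proposed system.

```python
# Hedged sketch: packaging rehabilitation status for sharing with a remote
# therapist. The message schema here is invented for illustration only.
import json
import time

def status_message(patient_id, joint_metrics):
    """Serialize per-joint performance metrics as a JSON status update."""
    return json.dumps({
        "patient": patient_id,
        "timestamp": time.time(),
        "metrics": joint_metrics,   # e.g. per-joint torque / angular velocity
    })

msg = status_message("P-001", {"knee": {"torque_Nm": 12.3, "omega_rad_s": 1.8}})
print(msg)
```

A real deployment would send such messages over a secure channel to the healthcare center rather than printing them.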
3 The Human Factors Survey
Fifteen (15) human subjects were selected. Of the 15 subjects, 9 were robotics researchers focusing on applications of robots for stroke rehabilitation, 4 were professionals who provided robot-based and manual therapies to stroke or similar
Fig. 2. Concepts of real-time telerehabilitation practice between patient and healthcare professional through hand-held gaming interface via appropriate communication system.
patients, and the remaining 2 subjects were stroke patients who came to the rehabilitation centers or the rehabilitation research office to receive rehabilitation practice and/or to learn more about robot-based rehabilitation practices. Each subject participated in the survey separately. First, the subject was introduced to the robot-based rehabilitation practice. The objective and use of the hand-held gaming interface were also discussed with the subject. To clarify the concepts of the hand-held gaming interface and telerehabilitation, Figs. 1 and 2 were shared with the subjects and the necessary descriptions were provided. Any questions or comments from the subjects were responded to. Then, each subject was separately provided a piece of paper and asked to brainstorm and write down the human factor cues that he/she thought necessary to address when designing the hand-held gaming interface.
4 The Survey Results
At the end of the survey, the responses were collected and analyzed. The identified human factor cues were divided into two categories: (i) physical human factors (pHFs), and (ii) cognitive human factors (cHFs) [13–15]. The physical human factors included those related to the physical and material properties of the interfaces. The cognitive human factors included the cues related to the ways of expressing and displaying the rehabilitation performance and the cues that can impact user cognition and perception. Table 1 shows the pHFs and cHFs with response frequencies (how many subjects out of 15 mentioned a particular human factor as a cue that could impact the user’s physical and/or cognitive abilities) in parentheses. Figure 3 shows the relative importance of each pHF and cHF cue based on the results in Table 1. The results show that the subjects were mainly concerned with: the haptic weight of the hand-held interface device (e.g., the weight of the mobile phone or tablet); fatigue due to holding the device for a long time; situation awareness while conducting rehabilitation and monitoring the rehabilitation performance via the hand-held interface; cognitive workload while comprehending and interpreting the status of the performance displayed through the interface in the form of different diagrams; physical and mental engagement with the hand-held device; how the color (e.g., glare) of the displayed contents may impact the vision and perception of the patient; whether
Table 1. Identified pHFs and cHFs based on survey results for robot-assisted post-stroke telerehabilitation using hand-held gaming interface

pHFs: Ease of use (6), safety/risk (3), haptic and proprioceptive perception (5), maneuverability (4), perceived haptic weight of hand-held device (11), fatigue (9)
cHFs: Self-satisfaction (3), situation awareness (8), cognitive workload (12), trust (6), emotion (3), engagement and connectedness (8), color perception (9), perception of shape and size of hand-held device (5), transparency (8), intuitiveness (2), reliability (4), predictability (10)
or not the information displayed through the gaming interface is transparent enough to understand and to reflect the true rehabilitation scenarios; and whether or not the subjects could predict the future of their rehabilitation practice and performance based on what they observed in the diagrams displayed via the interface.
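The relative-importance shares plotted in Fig. 3 can be reproduced from the Table 1 frequencies. The sketch below normalizes the cHF counts to percentages of all cHF mentions, which is one reasonable reading of "relative importance"; the exact normalization used for Fig. 3 is an assumption.

```python
# Relative importance of each cHF cue as a share of all cHF mentions (Table 1).
chf = {"Self-satisfaction": 3, "Situation awareness": 8, "Cognitive workload": 12,
       "Trust": 6, "Emotion": 3, "Engagement and connectedness": 8,
       "Color perception": 9, "Perception of shape and size": 5,
       "Transparency": 8, "Intuitiveness": 2, "Reliability": 4, "Predictability": 10}
total = sum(chf.values())
importance = {cue: round(100 * f / total, 1) for cue, f in chf.items()}
print(max(importance, key=importance.get))  # most frequently mentioned cue
```

With these frequencies, cognitive workload (12 of 78 mentions) comes out as the most prominent cognitive cue, consistent with the discussion above.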
Fig. 3. The relative importance of each cue of pHFs (upper) and cHFs (lower).
5 Conclusions and Future Work
A survey was conducted with healthcare professionals, researchers and patients to identify the potential human factor cues associated with the design and development of hand-held gaming interfaces for robot-assisted post-stroke telerehabilitation. The identified human factors were then analyzed and divided into two categories: physical and cognitive human factors. The results can be used to optimize the gaming interface design to satisfy human requirements, and to benchmark and share the telerehabilitation performance of stroke patients. In the near future, a deeper and more formal survey with more relevant respondents will be conducted to learn more about the human factors associated with gaming interface design. The knowledge of these human factors will be used to develop a real telerehabilitation robotic system with a hand-held gaming interface, which will be evaluated by actual patients performing rehabilitation practice at home, in rehabilitation centers or in hospitals.
References
1. Rahman, S.M.M.: Design of a modular knee-ankle-foot-orthosis using soft actuator for gait rehabilitation. In: Proceedings of the 14th Annual Conference on Towards Autonomous Robotic Systems (TAROS 2013), Oxford University, U.K. Lecture Notes in Computer Science, vol. 8069, pp. 195–209. Springer (2014)
2. Rahman, S.M.M., Ikeura, R.: A novel variable impedance compact compliant ankle robot for overground gait rehabilitation and assistance. Proc. Eng. 41, 522–531 (2012)
3. Rahman, S.M.M.: A novel variable impedance compact compliant series elastic actuator for human-friendly soft robotics applications. In: Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 19–24. IEEE Press (2012)
4. Rahman, S.M.M.: A novel variable impedance compact compliant series elastic actuator: analysis of design, dynamics, materials and manufacturing. Appl. Mech. Mater. 245, 99–106 (2013)
5. Yu, H., Rahman, S.M.M., Zhu, C.: Preliminary design analysis of a novel variable impedance compact compliant actuator. In: Proceedings of 2011 IEEE International Conference on Robotics and Biomimetics, pp. 2553–2558. IEEE Press (2011)
6. Rahman, S.M.M., Ikeura, R.: Improving interactions between a power assist robot system and its human user in horizontal transfer of objects using a novel adaptive control method. Adv. Hum.-Comput. Interact. 2012, 1–12 (2012). Article ID 745216
7. Valdés, B.A., Hilderman, C.G.E., Hung, C.T., Shirzad, N., Van der Loos, H.F.M.: Usability testing of gaming and social media applications for stroke and cerebral palsy upper limb rehabilitation. In: Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3602–3605. IEEE Press (2014)
8. Rahman, S.M.M., Ikeura, R.: Cognition-based control and optimization algorithms for optimizing human-robot interactions in power assisted object manipulation. J. Inf. Sci. Eng. 32(5), 1325–1344 (2016)
9. Rahman, S.M.M., Wang, Y.: Mutual trust-based subtask allocation for human-robot collaboration in flexible lightweight assembly in manufacturing. Mechatronics 54, 94–109 (2018)
10. Kawai, Y., Honda, K., Kawai, H., Miyoshi, T., Fujita, M.: Tele-rehabilitation system for human lower limb using electrical stimulation based on bilateral teleoperation. In: Proceedings of 2017 IEEE Conference on Control Technology and Applications, pp. 1446–1451 (2017)
11. Rahman, S.M.M.: Evaluating and benchmarking the interactions between a humanoid robot and a virtual human for a real-world social task. In: Proceedings of the 6th International Conference on Advances in Information Technology. Communications in Computer and Information Science, vol. 409, pp. 184–197 (2013)
12. Rahman, S.M.M., Ikeura, R.: Investigating the factors affecting human’s weight perception in lifting objects with a power assist robot. In: Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 227–233 (2012)
13. Rahman, S.M.M., Ikeura, R.: Cognition-based variable admittance control for active compliance in flexible manipulation of heavy objects with a power assist robotic system. Robot. Biomim. 5(7), 1–25 (2018)
14. Rahman, S.M.M., Liao, Z., Jiang, L., Wang, Y.: A regret-based autonomy allocation scheme for human-robot shared vision systems in collaborative assembly in manufacturing. In: Proceedings of the 12th IEEE International Conference on Automation Science and Engineering (IEEE CASE 2016), pp. 897–902 (2016)
15. Rahman, S.M.M., Wang, Y.: Dynamic affection-based motion control of a humanoid robot to collaborate with human in flexible assembly in manufacturing. In: Proceedings of ASME Dynamic Systems and Controls Conference, p. V003T40A005 (2015)
Procedure of Mining Relevant Examples of Armed Conflicts to Define Plausibility Based on Numerical Assessment of Similarity of Situations and Developments

Ahto Kuuseok
Estonian Police and Border Guard Board, Estonian Business School, Tallinn, Estonia
[email protected]
Abstract. It is possible to link plausibility to a numeric assessment of the similarity of situations and developments. In order to observe the similarities of two situations or developments, it is necessary to prepare descriptions. These descriptions must consist of statements, preferably in a format that allows them to be represented by appropriate logic formulas, so that an index of descriptive similarity can be calculated. When several armed conflicts must be observed, there arises a need to limit the number of cases, considering the physical capacity of experts. The aim is to decide which cases can be considered relevant. The focus of this study is a procedure whose output is a selection of conflicts (involving national armed forces), with certain geographical and time constraints, associated with a particular country C (i.e. the state whose interests, concerns, etc. are applied in the procedure). The choices are based on types of association, such as direct geographical neighborhood of country C, security and defense agreements, and interconnection based on economic needs (raw materials, markets, access). However, associations may be subject to time and geographical constraints. For example, as the Republic of Estonia (located in northeastern Europe) is linked to several countries within NATO that arose only after World War II, it is permissible, though not required, to exclude conflicts involving those countries that preceded World War II. Alternatively, as the United States is linked to the Republic of Estonia within the framework of NATO, the Vietnam War (which took place in Southeast Asia) can be included in the set of cases. One more example: because Russia is the immediate neighbor of the Republic of Estonia, the military conflict in Georgia in 2008 should also be included.
Based on the types of association fixed for the Republic of Estonia, the sample examined was limited to 32 relevant examples. This, in turn, limits the comparison of pairs of conflicts to 496 pairs.

Keywords: Plausibility · Numeric assessment of similarity · Situations and developments · Descriptions and statements · Index of descriptive similarity · Selection of armed conflicts · Constraints
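The pair count in the abstract follows directly from the binomial coefficient: 32 relevant conflicts give C(32, 2) = 32 · 31 / 2 unordered pairs.

```python
# Arithmetic check: unordered pairs among 32 relevant conflicts.
from math import comb

pairs = comb(32, 2)   # equals 32 * 31 / 2
print(pairs)  # 496
```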
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 619–626, 2020. https://doi.org/10.1007/978-3-030-39512-4_96
620
A. Kuuseok
1 Introduction
In order for countries to achieve their main objectives, including avoiding unpleasant surprises related to military security, they need several approaches to researching security situations and developments. Among others, there is a need to evaluate the possibility of an outbreak of conflicts that directly or distantly affect those countries. One such approach relies on the similarity of situations and developments. More specifically, it relies on a procedure based on descriptions of the situations and developments under consideration [1]. This procedure consists of several steps, the last of which is associated with numerical estimates of similarity and plausibility. In this way, a situation that is important for a given country, and its evolution, can be compared with known situations and developments. The purpose of the comparison is to identify the most similar situation and development relevant to the given country. From this knowledge, we can find out whether it is plausible that the particular situation of major importance to the country may develop into an armed conflict (by comparison with similar situations and related developments) [2]. The main purpose of this work is to describe the procedure for identifying situations and developments relevant to a given country, in order to determine whether a situation can develop into an armed conflict between states. To some extent, this procedure produces the "source material" for the procedure for assessing the similarity of situations and developments, which in turn provides inputs for determining plausibility. Therefore, both procedures are covered, as well as (rather briefly) the attribution of plausibility.
2 Procedure for Extracting Situations and Developments Relevant to a Given Country
The procedure under consideration is divided into the following phases.

2.1 Phase One: Selecting the Type or Types of Situations and Developments
The list of situations and developments relevant to a given country is, in principle, not limited to one, two or three situations or developments. For example, climate change may be an issue, as may the emergence of a new economic crisis, or an obviously brewing conflict involving a direct neighbor or treaty partner. Consequently, it is expedient to make an appropriate choice before proceeding with the procedure. For example, one may choose developments that involve only two consecutive situations:
– the situation immediately preceding the armed conflict, the so-called eve;
– the actual outbreak of the armed conflict.
2.2 Phase Two: Define the Criteria
It is important to start by making the right choices and systematizing the things considered important for a country. There is no doubt that there are "countless" possibilities here. Therefore, we should first agree on some criteria by which we associate these things with a given country (including, for example, inter-state armed conflicts). In this work, the following criteria are in place to associate countries (with a given country):

2.2.1 Primary Criteria (P1, P2, P3)
P1. The conflict takes place with an immediate geographical neighbor of the country.
P2. The conflict takes place in a region of vital imports or exports for the country, or in a region indispensable for access to such a region.
P3. The conflict occurs with a country that is a security and defense treaty partner of the given country.
Remark. In some cases, the second criterion can be ignored, for example if a country has enough options to buy and sell vital products or commodities. On the other hand, in some cases this is not possible. Consider the case of the USA, the Netherlands and other countries after the so-called "Yom Kippur War" between Israel and the Arab states in 1973: the impact of closing the "oil taps" was very severe [3]. In the same way, the restriction that prevented Cuba [4] from selling its few marketable products, including e.g. cane sugar, on the US market was severe for the country.
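A minimal sketch of applying the primary criteria as a disjunctive filter over candidate conflicts might look as follows. The record layout, the country data and the function name are illustrative assumptions, not part of the procedure's formal definition.

```python
# Hypothetical sketch: a conflict is relevant if it meets any primary criterion.
def relevant(conflict, country):
    """P1: neighbor involved; P2: vital region; P3: treaty partner involved."""
    parties = set(conflict["parties"])
    p1 = bool(parties & country["neighbors"])
    p2 = conflict["region"] in country["vital_regions"]
    p3 = bool(parties & country["treaty_partners"])
    return p1 or p2 or p3

estonia = {"neighbors": {"Russia", "Latvia"},
           "vital_regions": {"Baltic Sea"},
           "treaty_partners": {"USA", "Germany"}}   # illustrative subset only
vietnam_war = {"parties": ["USA", "North Vietnam"], "region": "Southeast Asia"}
print(relevant(vietnam_war, estonia))  # True, via P3 (USA is a treaty partner)
```

The secondary criteria S1–S4 would be layered on top of this filter as additional predicates with their own time constraints.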
2.2.2 Secondary Criteria (S1–S4)
S1. Time, specifically the period under consideration for the given country, during which the primary criteria are met.
S2. The existence of a well-established diaspora of the given country in another (not necessarily primarily associated) country, whose fate is important to the given country.
S3. The existence of a well-established diaspora of another nation in the given country (not necessarily primarily associated), which is important for that nation's home country.
S4. Emergencies requiring assistance (including e.g. genocide) in other countries.
Remark 1. Placing time among the secondary criteria, and yet not defining the exact period, may be a little confusing. However, the primary criteria do in fact often set time limits quite clearly. For example, a situation in a country linked by a security or defense treaty can meet the criteria only once that country exists. There is still quite a difference between something happening in a particular country and something happening in the region where that state was founded later.
Remark 2. It is quite common for these criteria to take into consideration situations and developments "until today". At the same time, fixing the beginning of the period is less frequent. In other words, when looking for analogues for the comparison,
similar situations and developments, we may consider both those which have just happened and those which happened to the selected countries hundreds of years ago.
Remark 3. Unlike many other nations, Estonians have not formed very numerous, influential, or (potentially) problematic communities in other countries. However, there is a significant and not overly integrated Russian community in the Republic of Estonia.
Remark 4. The aforementioned criteria P1…S4 are, of course, not unique. They can be changed by deleting, adding or modifying something. In this work, we have excluded several of them, considering the Republic of Estonia as the "given state". Once the criteria have been selected and fixed, we can proceed to the next step of the procedure under consideration.

2.3 Phase Three: Finding and Gathering Relevant Situations and Developments
Important notes:
– As we decided to deal with only two-stage development fragments, one being "the night before" the armed conflict and the other the actual outbreak of the conflict, then, in a very simplistic approach, we can identify the second stages with each other, because we always have a real outbreak of armed conflict.
– We need to collect descriptions of "the night before", more precisely descriptions of the situations that existed just before the actual outbreak of the armed conflict.
For performing the steps of the procedure just described, we can use several suitable resources. For example, most publicly announced security and defense policies, defense cooperation agreements, etc. are publicly available in Estonia. The mentioned documents are available on the websites of the Ministry of Defense of Estonia and of the Headquarters of the Defence Forces [5, 6].
3 Brief Numerical Evaluation of the Similarity of Situations and Developments and Linking Them to Plausibility
As previously, this is a procedure with its own specific phases.

3.1 Phase One: Creating Cleaned Descriptions
This phase is subdivided into the following steps:
3.1.1. Creating a description of the situation or development so that it contains only statements.
3.1.2. Aggregating the statements into subsets whose members are worth identifying with each other.
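A minimal sketch of this aggregation, together with the representative selection and indexing that follow, is given below. The naive text normalization stands in for the expert judgment about which statements to identify with each other; the function name is illustrative.

```python
# Sketch: keep one representative per subset of statements we are willing to
# identify with each other, then index the refined description 1..M.
def refine(statements, key=lambda s: s.lower().strip()):
    seen, refined = set(), []
    for s in statements:
        k = key(s)
        if k not in seen:          # first member of its subset is the representative
            seen.add(k)
            refined.append(s)
    return list(enumerate(refined, start=1))   # indexes 1..M

desc = ["Troops massed at the border.",
        "troops massed at the border.",   # identified with the previous statement
        "Diplomatic talks were suspended."]
print(refine(desc))
```

In practice the equivalence key would encode an expert's decision, not mere case-folding.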
3.1.3. Selecting one representative from each subset (formed in the previous sub-step and containing at least two elements, i.e. statements) and removing the remainder from the description. The resulting set of statements we call a refined description of the situation or development.
3.1.4. Sorting the statements in the refined description by assigning them one-to-one matching natural-number indexes: for some suitable M ≥ 1, all natural numbers N with 1 ≤ N ≤ M.

3.2 Phase Two: Isolating the Identifiable Parts of the Refined Descriptions (of the Situations or Developments Selected for Comparison)
For that, we need to run a cyclically repeating action that forms the identifiable part of the two refined descriptions. We have to keep in mind in this phase that within a single refined description there no longer exist statements that we would want to identify with each other. Before the cyclic activity, we first fix identifiers for the refined descriptions to be compared; let us call them the first description and the second description. After that:
3.2.1. We take the first statement from the first description and look at the statements in the second description, starting with the first.
3.2.1.1. If the second description does not contain a statement that we would like to identify with the first statement from the first description, we do not remove that statement from the first description. Nothing is removed from the second description either.
3.2.1.2. If among them there is (in accordance with the above, the only) one that we want to identify with the first statement of the first description, we remove the first statement from the first description. We then form an ordered pair with the first statement of the first description in the first place and the statement from the second description in the second place. We declare this ordered pair the first element of the identifiable part of the two descriptions. Then we remove the matched statement from the second description.
3.2.2. We take the second statement from the first description and search the statements in the second description, starting with the first of those still left after 3.2.1.2.
3.2.2.1. If they do not contain a statement that we would like to identify with the second statement from the first description, we do not remove that statement from the first description. Nothing is removed from the second description either.
624
A. Kuuseok
3.2.2.2. If among the remaining statements there is one (in accordance with the above, the only one) that we want to identify with the second statement of the first description, we remove the second statement from the first description. We then form an ordered pair, with the second statement of the first description in the first place and the statement from the second description in the second place. We declare this ordered pair the second element of the identifiable part of the two descriptions, and we remove the matched statement from the second description.
Acting in this way, we end up with three sets built from the two descriptions:
– D1, which consists of all statements from the first description that we did not identify with any of the statements in the second description.
– D2, which consists of all statements from the second description that we did not identify with any of the statements in the first description.
– Equ12, which consists of all ordered pairs whose first and second positions contain, respectively, the statements from the first and second descriptions that were identified with each other.
We call the number Sim12 = E(Equ12)/(E(D1) + E(Equ12) + E(D2)) an index of descriptive similarity between the two compared situations or developments, where E(H) denotes the number of elements of a finite set H.
Remark. This may invite confusion with the Jaccard similarity coefficient [7, 8]. In the Jaccard coefficient, the number of common, i.e., identical, elements E(D1 ∩ D2) appears instead of E(Equ12), which expresses the number of identified elements.
Important Note. The descriptive similarity index reflects the process by which the similarity of the two descriptions has been calculated; it depends on the particular way in which the similarity of the statements is determined.
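The pairing procedure of Phase 3.2 together with the index Sim12 can be sketched as follows. This is a minimal illustration in Python; the `identify` predicate is a placeholder for whatever statement-identification rule has been fixed beforehand (e.g., logical equivalence).

```python
def descriptive_similarity(desc1, desc2, identify):
    """Greedily pair each statement of the first description with the
    first identifiable statement remaining in the second description,
    then return Sim12 = |Equ12| / (|D1| + |Equ12| + |D2|)."""
    d2 = list(desc2)        # statements of the second description, mutable
    equ12, d1 = [], []
    for s1 in desc1:
        for j, s2 in enumerate(d2):
            if identify(s1, s2):
                equ12.append((s1, s2))  # ordered pair goes into Equ12
                del d2[j]               # matched statement leaves the 2nd description
                break
        else:
            d1.append(s1)               # unmatched statement stays in D1
    denom = len(d1) + len(equ12) + len(d2)
    return len(equ12) / denom if denom else 1.0

sim = descriptive_similarity(["a", "b", "c"], ["b", "c", "d"],
                             lambda x, y: x == y)
# two identified pairs, one leftover on each side: 2 / (1 + 2 + 1) = 0.5
```

With literal equality as the identification rule, Sim12 coincides with a Jaccard-like ratio; with a weaker rule (e.g., logical equivalence), Equ12 can be strictly larger than the set of common elements, which is exactly the difference noted in the remark above.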
Therefore, when calculating and applying the descriptive similarity index, it is important, and indeed required, to exhibit the full set Equ12 or the process by which it was constructed. Examining the process carefully, there is always a certain “common source of mistrust” in the assessment of structural similarity; it depends on what kind of correspondence has been used between the two sets of elements of the two observed systems. One possible (though not the only) way of defining the similarity of statements is to rely on their logical equivalence (within the framework of a fixed logic, e.g., classical, intuitionistic, etc.). This requires the transformation of statements in natural language into, e.g., predicate formulas, as done by the DST dialog system [9–11]. By comparing the description of a situation or development under observation with the descriptions of already known or “studied” situations or developments, we can estimate the plausibility of conclusions based on them [12]. For example, when observing a development, or exploring an unknown future development (or, vice versa, some development in the past) with the need to predict a process, we can rely on the assumption that the development under observation will continue in a predictable way if it has a (high) similarity index with already known and studied processes.
4 Summary
In order to be ready for strategically important developments in the (near) future, it is crucial to create methods to analyse and predict relevant developments as far as possible. It is also important to perform the analyses and possible predictions arguably, preferably equipped with numerical assessments. The basis of our approach is reliance on the plausibility of situations and developments, specifically on a procedure that operates on descriptions of the situations and developments under consideration. Because information in the modern world changes fast and in bulk, we need clarity in selecting specific and relevant data: the statements derived from descriptions of situations and developments. This makes it possible to calculate relevant and mathematically precise similarity indexes; it also makes it possible to justify the calculation process and, if necessary, to reconstruct it. In this work, we applied this method to a very important subject area: military security developments and the analysis of the threat of a military crisis. Discussing the procedure of mining relevant examples of armed conflicts to define plausibility-based assessments of similarities, we tried to create a procedure that helps predict near-future developments with the help of mathematical tools and logic. The method is not necessarily limited to armed conflicts; it should also be discussed for other security areas, such as the economy, internal security, etc. [14].
References
1. Lorents, P., Matsak, E., Kuuseok, A., Harik, D.: Assessing the similarity of situations and developments by using metrics. In: Intelligent Decision Technologies (KES-IDT-17), Vilamoura, Algarve, Portugal, 21–23 June 2017. Springer (2017). 15 pages
2. Lorents, P., Kuuseok, A., Matsak, E.: Applying systems’ similarities to assess the plausibility of armed conflicts. In: Smart Information and Communication Technologies (Smart-ICT19), Saidia, Morocco, 26–29 September 2019. Springer (2019). 10 pages
3. Wikipedia, Yom Kippur War. https://en.wikipedia.org/wiki/Yom_Kippur_War. Accessed 10 Sept 2019
4. Proclamation 3447, Embargo on all trade with Cuba. https://www.govinfo.gov/content/pkg/STATUTE-76/pdf/STATUTE-76-Pg1446.pdf. U.S. Government Printing Office, 3 February 1962
5. Ministry of Defense of Estonia, official website. http://www.kmin.ee/en
6. Estonian Defense Forces, official website. http://www.mil.ee/en/news
7. Jaccard, P.: Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bull. de la Société Vaudoise des Sci. Nat. 37, 547–579 (1901)
8. Jaccard, P.: Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bull. de la Soc. Vaudoise des Sci. Nat. 37, 241–272 (1901)
9. Matsak, E.: Dialogue system for extracting logic constructions in natural language texts. In: Proceedings of the International Conference on Artificial Intelligence, IC-AI 2005, Las Vegas, Nevada, USA, vol. II, pp. 791–797. CSREA Press (2005)
10. Matsak, E.: System DST for transforming natural language texts, representing estimates and higher order predicates and functionals. In: Proceedings of the 3rd International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA 2006), Orlando, Florida, USA, pp. 79–84, 20–23 July 2006
11. Matsak, E.: Discovering Logical Constructs from Estonian Children Language. Lambert Academic Publishing, Germany (2010)
12. Sigarreta, J.M., Ruesga, P., Rodriguez, M.: On mathematical foundations of the plausibility theory. Int. Math. Forum 2(27), 1319–1328 (2007)
13. Balzacq, T. (ed.): Securitization Theory: How Security Problems Emerge and Dissolve. Routledge, London and New York (2011)
Human Digital Twins: Two-Layer Machine Learning Architecture for Intelligent Human-Machine Collaboration
Wael Hafez, Alexandria, VA, USA
Abstract. Systems around us, whether commercial, industrial, or social, are rapidly becoming more complex, more digital, and smarter. There is also an increasing conviction that the effective management and control of various scenarios in such complex systems can only be achieved by enabling an intelligent collaboration between the involved humans and machines. A major question in this area is how to provide machines with access to human behavior to enable the desired intelligent and adaptive collaboration between them. Based on the industrial concept of digital twins, this study develops a new approach for representing humans in complex digital environments, namely, a human digital twin (HDT). The HDT is a smart machine that learns the behavior of a human in terms of his/her communication patterns with the smart machines he/she interacts with in a specific scenario. The learned patterns can be used by the HDT and other machines supporting a human to predict human–machine (H–M) interaction outcomes and deviations. Unlike current approaches, the HDT does not need to rely on the content of the H–M interactions to learn patterns and infer deviations; it just needs to register the statistical characteristics of the exchanged messages. Using an HDT would enable an adaptive H–M collaboration with minimum interruptions and would provide insights into the dynamics and dependencies involved in H–M collaborations, which can be used to increase the efficiency of such collaboration.
Keywords: Human–machine collaboration · Smart environments · Management · Human digital twin
1 Introduction
Trying to achieve intelligent human–machine collaboration (IH–MC) is not new. As early as 1960, the anticipation was that such collaboration would “enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs” [3]. Many concepts have since been developed to achieve the goal of enabling humans and machines to leverage one another towards controlling complex situations [4, 6, 7]. These approaches and research concepts focus primarily on increasing the information exchange and analysis capacities of
W. Hafez—Independent Researcher. © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 627–632, 2020. https://doi.org/10.1007/978-3-030-39512-4_97
628
W. Hafez
the various team members, i.e., they focus on the information content of H–M interactions, whereas the current study focuses on the statistical aspects of the messages carrying that content. The study presented in this paper provides a conceptual approach for enabling IH–MC in complex scenarios. First, it is argued that H–M collaboration mostly follows an initial design, which can be used as a baseline to define H–M collaboration patterns. Then, it is assumed that intelligent collaboration is based on the capacity to adaptively learn such collaboration patterns and use them to predict deviations from the scenario objectives. A new type of agent, called a human digital twin (HDT), is argued to provide this capacity. Finally, the functionality of the proposed HDT and the overall IH–MC architecture necessary to support the desired intelligent collaboration are presented.
2 Characteristics of H–M Collaboration
H–M collaboration does not just emerge through an accidental encounter between humans and smart machines. Scenarios such as the coordination and management of a complex supply chain, management of a large asset-maintenance process, provision of a comprehensive healthcare service, management of resources in a smart city, or airspace traffic flow management always follow a specific design. That is, humans and machines are always brought together intentionally, and according to some initial design, to achieve some common objectives (see Fig. 1).

Fig. 1. H–M collaboration in a complex scenario. (The figure shows a domain with machine agents M1–M9 and human agents H1–H3 connected by information channels.)
The design of such scenarios, no matter how complex, is always based on a set of assumptions about the scenario environment or context and the internal dynamics and dependencies among the various humans and machines involved. Based on these assumptions, the initial design defines how the various agents, either humans or machines, should act to ensure their alignment with one another and with the overall scenario objectives.
2.1 Learning and Prediction in IH–MC
Currently, the design assumptions in complex scenarios cannot capture all the eventualities during scenario execution. This is due to factors such as unforeseen changes in the environment or unknown information dependencies and feedback loops within the scenario [5]. Accordingly, the various agents will, sooner or later, be faced with situations not covered in the initial design. In this case, scenario owners and designers, smart-machine vendors, and human agents need to continuously monitor the scenario performance and, when necessary, intervene to adjust the relevant agent's input and output information or the conditions necessary to make the appropriate actions and decisions that accommodate the new, unforeseen situations. However, if we are aiming at IH–MC, such external interventions should be avoided or kept to a minimum. That is, the agents supporting a scenario should be autonomous in identifying and mitigating deviations from the scenario objectives without relying on outside intervention.
2.2 Role of Human Agents in IH–MC
Humans have a leading role in providing the two necessary capacities for IH–MC. If we assume that Fig. 1 represents a future AI-supported airspace traffic flow management scenario, then the various machines would provide, for example, route recommendations, weather predictions, or airport demand and capacity predictions. The involved human roles would be air traffic controllers, pilots, airline dispatchers, or air traffic managers. Each role player would receive recommendations, decision options, and impact analyses from the machines supporting him/her to make decisions that maximize the scenario objectives: air traffic flow efficiency and safety. In such a scenario, the conditions determining the operation of each agent are very specific to the agent's task. That is, the agent's sensors, actuators, and world model are all specific (and limited) to the scope of its tasks. The initial design, however, would ensure that the agent's declared objectives and preferences are specifically defined to support the overall scenario objectives, e.g., safety. These task-specific sensory, action, and representational capacities limit the ability of the machine agents to detect and use any information outside their design scope. Human agents (e.g., air traffic controllers, pilots, etc.) would have a rather universal understanding of the overall scenario objectives (e.g., safety) and how they relate to new, unforeseen situations [1]. This is partially because the world model of a human, i.e., the mental representation of the scenario and its objectives, is much more comprehensive than that of any single machine supporting the scenario. In addition, humans are still far better than machines at simulating the impact of their actions on the scenario objectives. Accordingly, the human agents in a scenario are best equipped to identify deviations from the scenario objectives and select the appropriate actions to offset such deviations.
If it is then possible to observe how a human agent responds to changes and deviations in the scenario execution, i.e., which new percept–action patterns are developed by the human, then it would be possible to identify new H–M interaction patterns. Furthermore, it would be possible to use the new patterns to update the
conditions (e.g., the world model or utility functions) of the machines directly interacting with that human. The new, role-specific percept–action patterns can also be used by the scenario designers to compare how similar roles perform across multiple scenarios, so as to upgrade the assumptions of the initial design and gain a deeper insight into how to improve the scenario performance. It should be noted that deviations that require completely new information (e.g., machines equipped with additional sensors or actuators) or new conditions (e.g., updated machines' world models) would still require the involvement of scenario designers and/or machine vendors.
3 Second Layer of Smart Machines
The idea here is to insert a second layer of smart machines, which we call HDTs, that would perform the task of observing humans as they interact with machines in a scenario. The HDT is human-specific, i.e., each human will have their own HDT. The HDT would access the communication between a human and all the machines supporting him/her. Based on the human responses to machine actions, the HDT would identify role- and context-specific human response patterns and later use such patterns to predict deviations in human responses and infer possible scenario breakpoints.
3.1 Human Digital Twin (HDT)
The HDT is based on the concept of industrial digital twins (DTs), which are “software representations of assets and processes that are used to understand, predict, and optimize performance in order to achieve improved business outcomes. Digital twins consist of three components: a data model, a set of analytics or algorithms, and knowledge” [2]. Similar to an industrial DT, the HDT would develop its own representation of the human–machine interaction patterns for the human it observes and use this representation to manage and optimize the human's interactions with the machines collaborating with him/her. However, the fundamental difference from an industrial DT is that an HDT would learn and anticipate such interaction patterns and deviations by observing the communication (message exchanges) between a human and the machines they collaborate with, and not the message content. If, during the execution of their role, the human responds to the various machine actions within the set of responses provided by the initial design, then the HDT would conclude that the scenario is being executed as expected and that the scenario objectives are being met. However, if the human starts overriding any of the machine actions or responding with actions not foreseen at this point by the initial design, then the HDT would assume that the scenario is deviating from its objectives, which caused the human to respond off-design. To learn interaction patterns and infer deviations, the HDT does not need to rely on the content of the H–M communications, i.e., the content of the messages exchanged between them. Rather, it only needs to register the statistical characteristics of the exchanged messages: their usage probabilities and dependencies. As depicted in Fig. 2,
for that purpose, the HDT just needs to capture the unique identifier of the various messages. This unique identifier is used to represent the various exchanged messages.
Fig. 2. Message content and message unique ID. (The figure shows a route plan recommendation message, with features such as aircraft ID, aircraft type, true airspeed, departure point, and route of flight, exchanged between a flight route planning machine and an air traffic manager; the HDT registers only the unique IDs of the input and output messages, e.g., M1-01 and H1-01.)
4 Role of HDT in Enabling IH–MC
The setup of the HDT would use the messages defined by the initial design of the role to build a representation of the role it observes. A message in the current approach could stand for a complex transaction, a sensor reading, a yes/no response, or a combination of one or more of these features. The main requirement is that it is unique. If we assume that all the H–M communications are digital, then the scenario design would determine the features of each message and provide the HDT with the appropriate capabilities and access privileges to register their occurrence. As mentioned earlier, the HDT does not need to access the content of a message (e.g., the field values of a transaction).
4.1 Learning H–M Collaboration Patterns
In the positive case, where the H–M action–response takes place according to the initial design, each time the scenario is executed, the probabilities of specific dependent messages along the scenario, their sequence, and their levels of dependency are reinforced. For example, if the interactions leading to the route clearance message always follow the same sequence, then the HDT can assume that there is a stable pattern leading to that message. A stable pattern, i.e., one that takes place with a high degree of confidence, can thus be used by the HDT to anticipate or predict how a human would respond to a specific set of input messages. This prediction can then provide the machines supporting the human with insights into his/her behavior, i.e., which output message the human would select as a response to a specific machine input message.
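As a minimal, content-free sketch of this idea (the message IDs echo Fig. 2, but the counting scheme and all numbers are invented for illustration and are not the paper's implementation), an HDT could register input–output message pairs and predict the human's most likely response:

```python
from collections import defaultdict

class HDT:
    """Toy human digital twin: learns the empirical distribution
    P(output message | input message) from observed message-ID pairs,
    without ever looking at message content."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, input_id, output_id):
        # register one H-M exchange by its unique message IDs only
        self.counts[input_id][output_id] += 1

    def predict(self, input_id):
        """Most frequent human response so far and its empirical probability."""
        outs = self.counts[input_id]
        if not outs:
            return None, 0.0
        total = sum(outs.values())
        best = max(outs, key=outs.get)
        return best, outs[best] / total

twin = HDT()
for _ in range(9):
    twin.observe("M1-01", "H1-01")   # route recommendation -> approved
twin.observe("M1-01", "H1-02")       # one different response
resp, p = twin.predict("M1-01")      # -> ("H1-01", 0.9)
```

A stable pattern in the sense above corresponds to a conditional probability close to 1; as unexpected responses accumulate, the confidence reported by `predict` drops accordingly.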
4.2 Detecting and Predicting Scenario Deviations
In executing a scenario, if the human, based on the information provided to him/her by the various machines and his/her awareness of the overall scenario conditions, responds with a message not foreseen in the initial design, then the HDT would assign a positive probability to this “unexpected” message. If this unexpected response is part of a stable pattern, then the degree of pattern confidence decreases accordingly. The scenario designers can define rules for how the HDT should deal with deviations. For example, the HDT can be set up to flag all parents of messages involved in a deviation. Such flags can indicate to the human role owners that such messages have a potential for deviation. Based on further scenario runs and deviation-management rules, the HDT could, for example, remove the flags, send messages and reports to relevant machine operators/vendors to change the message design, or alert the human role owner in case his/her decision involves any flagged messages. The HDT can also be set up to use the underlying Bayesian network (BN) to run simulations to anticipate the impact of the deviation on the overall scenario.
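A deviation rule of the kind described can be sketched as follows (the design table and the flagging policy are illustrative assumptions; a full HDT would also propagate flags to parent messages in its learned dependency structure):

```python
def check_deviations(design, observed):
    """Return the (input, response) pairs whose response is not foreseen
    by the initial design; the HDT would flag these as off-design."""
    return [(i, r) for i, r in observed if r not in design.get(i, set())]

# initial design: foreseen human responses per machine message
design = {"M1-01": {"H1-01", "H1-02"}}
observed = [("M1-01", "H1-01"), ("M1-01", "H1-99")]  # "H1-99" is off-design
flags = check_deviations(design, observed)           # -> [("M1-01", "H1-99")]
```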
5 Discussion
Current approaches to IH–MC focus mainly on making agents more intelligent by increasing their sensory and actuator scope, world model, knowledge, or analytics capabilities. The two-layer ML architecture can complement current approaches by providing insights derived from the communication aspect of the H–M collaboration. As mentioned before, this assumption is based on the analogy with communication theory, which, by considering the statistical characteristics of messages, enables a more efficient overall communication. A further area of investigation is how to use the insights obtained by the HDT to directly update the conditions of the various machines involved with the human the HDT supports. That is, how could message probabilities and correlations be used to directly update machines' utility functions or world models?
References
1. Guszcza, J., Lewis, H., Evans-Greenwood, P.: Cognitive collaboration: why humans and computers think better together. Deloitte University Press (2017)
2. General Electric website. https://www.ge.com/digital/applications/digital-twin. Accessed 01 May 2019
3. Licklider, J.: Man-computer symbiosis. IRE Trans. Hum. Factors Electron. HFE-1, 4–11 (1960)
4. Madni, A.: Adaptive cyber-physical-human systems. Insights 21(3), 87–93 (2018)
5. Maani, K., Cavana, R.: Systems Thinking, System Dynamics: Understanding Change and Complexity. Prentice Hall, Auckland (2007)
6. McDermott, P., et al.: Human-Machine Teaming Systems Engineering Guide. MITRE Corporation (2018)
7. Sowe, S., et al.: Cyber-physical human systems: putting people in the loop. IT Prof. 18(1), 10–13 (2016). https://doi.org/10.1109/MITP.2016.14
Semantic Network Analysis of Korean Virtual Assistants’ Review Data
Hyewon Lim, Xu Li, Harim Yeo, and Hyesun Hwang
Department of Consumer and Family Science, Sungkyunkwan University, 25-2 Sungkyunkwan-ro, Jongno-gu, Seoul 03063, Korea
{hw3359,lsnowx16,cassendra}@naver.com, [email protected]
Abstract. Interest in artificial intelligence (AI) is increasing significantly. A “virtual assistant” is a service that allows consumers to easily access AI technology. This study aims to provide a clear and thorough overview of the current consumer experience through reviews written by consumers in South Korea. We suggest possible solutions for future developments of virtual assistants that could further benefit consumers. This study provides insights into consumer experience with virtual assistants through their review data. We categorize each virtual assistant and its related issues and suggest possible solutions for improving future virtual assistants to enhance the quality of consumers’ lives.
Keywords: Artificial Intelligence · Virtual assistant · Consumer experience · Semantic network analysis
1 Introduction
The size of the global artificial intelligence (AI) market is expanding as people’s interest in AI and technology increases. The International Data Corporation (IDC) expected the global AI market to expand from $62.6 billion in 2014 to $412.1 billion in 2024 [1]. A “virtual assistant” is a service that allows consumers to easily access AI technology. The virtual assistant market in South Korea is led by mobile carriers and large portal sites, which are reaching consumers through aggressive marketing. A virtual assistant is a structured service that provides customized services by learning the user’s behavioral patterns and lifestyle habits; the service is mainly based on voice recognition technology [2]. Using review data on virtual assistants commercialized in South Korea, this study aims to characterize the consumer experience and suggest a direction of development for services that support the real lives of consumers.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 633–639, 2020. https://doi.org/10.1007/978-3-030-39512-4_98
634
H. Lim et al.
2 Theoretical Background
2.1 Virtual Assistant
A virtual assistant is defined as a software agent that can understand an individual user’s language through the combination of AI and other advanced technologies and carry out the instructions that the user wants [3, 4]. In South Korea, the domestic mobile carriers (SKT, KT, and LG U+) and the leading portal sites Kakao and Naver have released virtual assistants equipped with AI in smart speakers. In September 2016, SKT was the first to release “Nugu”. In January 2017, KT released “Giga Genie”, based on voice recognition, in set-top boxes. LG U+ launched a smart home service in December 2017 with the artificial intelligence platform of Naver, South Korea’s leading portal site [5]. “Hey Kakao”, which uses a voice recognition interface, was released in November 2017 [6]. According to a survey by Consumer Insight, a research organization specializing in mobile communications, the use experience rate of smart speakers was 11%, while the utilization rate of each platform was 39% for Giga Genie, 26% for Nugu, 16% for Clova, and 12% for Hey Kakao. This shows that services from latecomers met consumers’ needs more frequently than earlier services [7].
2.2 Semantic Network Analysis
As methods of big-data analysis have developed, they have resulted in a variety of techniques, including data mining, opinion mining, text mining, and social mining [8]. Semantic network analysis, one of the text mining techniques, analyzes the meanings of the words that are the components of a message and identifies the network they form [9, 10]. It is an important means of understanding the entire structure of the network by analyzing the position and connection strength of words within it [9].
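Connection strength in such a word network can be quantified, for instance, by degree centrality, i.e., the number of a word's direct connections. A minimal sketch (the edge list is invented for illustration):

```python
def degree_centrality(edges):
    """Number of distinct direct neighbors per word in an undirected word network."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    return {word: len(nbrs) for word, nbrs in neighbors.items()}

deg = degree_centrality([
    ("delete", "sign up"), ("delete", "confirm"),
    ("delete", "solve"), ("login", "confirm"),
])
# "delete" has three direct neighbors and is the most central word here
```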
3 Methods
3.1 Data Collection
We collected review data for Nugu, Giga Genie, Clova, and Hey Kakao, all available from the Google Play Store (from the release date of each platform to 9/10/2019). The review data was collected using R 3.5.3 and totaled as follows: Clova, 2,240; Nugu, 2,085; Hey Kakao, 752; and Giga Genie, 469, for a total of 5,546, as listed in Table 1. After deleting irrelevant or overly simple reviews, the review data totaled: Clova, 2,037; Nugu, 1,745; Hey Kakao, 689; and Giga Genie, 402, for a total of 4,873.

Table 1. The number of review data

               Nugu   Giga Genie  Clova  Hey Kakao  Total
Before clean   2,085  469         2,240  752        5,546
After clean    1,745  402         2,037  689        4,873
3.2 Semantic Network Analysis Method
Nouns were extracted from the cleaned data through morphological analysis using the Korean natural language processing (KoNLP) toolkit, and a document-term matrix (DTM) was constructed. The DTM was built from the separate review data of each platform and used to create a two-mode network matrix. The two-mode network analysis showed differences between the platforms, but the relationships among the platforms were not clearly identified. A one-mode network analysis was therefore performed to briefly examine the relationships between the platforms' review data.
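The pipeline above was run in R with KoNLP; as a language-agnostic sketch of the matrix steps only (the tokenizer here is a naive whitespace split and the reviews are invented, standing in for KoNLP's morphological analysis), the two-mode platform-term matrix and its one-mode projection can be built as:

```python
from collections import Counter

def two_mode_matrix(reviews_by_platform):
    """Two-mode (platform x term) count matrix from tokenized reviews."""
    return {platform: Counter(word for review in reviews for word in review.split())
            for platform, reviews in reviews_by_platform.items()}

def one_mode_projection(two_mode):
    """Platform x platform matrix: number of terms two platforms share."""
    return {(a, b): len(set(two_mode[a]) & set(two_mode[b]))
            for a in two_mode for b in two_mode if a != b}

m = two_mode_matrix({
    "Clova": ["music connect application", "naver speaker music"],
    "Nugu": ["connect melon music", "wifi update"],
})
shared = one_mode_projection(m)   # "Clova" and "Nugu" share "music", "connect"
```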
4 Results
4.1 Word Frequency Analysis
Comparing the top 20 most frequent words, technical words such as “Application”, “Voice Recognition”, “Wi-Fi”, “Artificial Intelligence”, and “Bluetooth” were common across platforms. Words such as “Music” and “Speaker” also show that the speakers were mainly used to play music. In particular, the words unique to each platform identify the services that each platform centers its support on (Table 2).

Table 2. Word frequency analysis of each platform’s reviews

Rank  Clova              Freq  Nugu         Freq  Hey Kakao          Freq  Giga Genie            Freq
1     Application         707  Connect       552  Connect             237  Application            108
2     Music               551  Application   392  Kakao               140  Music                   93
3     Connect             550  Music         324  Music               124  Login                   47
4     Naver               346  Melon(a)      266  Wi-Fi               123  Function                47
5     Speaker             280  Use           222  Application         106  Play                    40
6     Voice Recognition   280  Function      182  Network              84  Certification           34
7     Use                 257  Wi-Fi         161  Use                  72  Connect                 29
8     Function            257  Update        134  Function             68  Sign up                 28
9     Update              255  Device        123  Alarm                67  Use                     27
10    Play                239  Play          119  Melon                61  Search                  27
11    Smart Phone         203  Aria          108  Update               58  Set-up                  23
12    Execute             191  Login         105  Set-up               57  KT                      22
13    Set-up              183  Set-up         95  Speaker              50  TV                      22
14    Call                178  Execute        93  Bluetooth            36  Voice Recognition       21
15    Error               132  Error          82  Issue                34  List                    18
16    AI                  130  T-map(b)       80  Sound                31  Update                  17
17    Wave                128  Radio          69  Voice Recognition    27  Error                   17
18    Wi-Fi               124  Service        67  Kakao Talk(c)        25  Mobile Communication    17
19    Login               121  Alarm          67  Play                 25  Bluetooth               16
20    Bluetooth           117  Search         67  Radio                25  Smart Phone             15

(a) Melon: music streaming service
(b) T-map: web mapping service developed by SKT
(c) Kakao Talk: a free mobile instant messaging application for smartphones with free text messages and calls
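The counts behind a table like Table 2 reduce to tallying the extracted nouns. A minimal sketch follows (a Python stand-in for the authors’ R workflow, with placeholder English tokens rather than the paper’s Korean data):

```python
from collections import Counter

# Each review is assumed to already be a list of extracted nouns
# (the morphological-analysis output of the previous step).
cleaned_reviews = [
    ["application", "music", "connect"],
    ["music", "speaker", "connect"],
    ["application", "music"],
]

# Flatten all reviews into one token stream and count occurrences.
freq = Counter(tok for review in cleaned_reviews for tok in review)

# Ranked frequency list, analogous to the per-platform columns above.
print(freq.most_common(3))  # "music" is the most frequent token here
```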
H. Lim et al.
The word “Call” appears in Clova’s word frequency analysis. Clova launched character-themed speakers for which various wake words can be set, and consumers discussed setting the wake word; Clova thus supports calling the assistant in a friendly manner. For Nugu, “T-map”, SKT’s web mapping service, was among the top frequent words: consumers use Nugu to navigate by voice when their hands are not free. In Hey Kakao, “KakaoTalk”, a messenger application, appeared among the top words. Because Kakao started as a messenger service, it can be assumed that consumers using Hey Kakao employ a voice-based interface rather than a touch-based one when sending messages on KakaoTalk. The word “TV” appears only for the Giga Genie platform; as it provides a set-top box-based service, there is much interaction with the TV.

4.2 Semantic Network Analysis
First, Fig. 1 shows the connectivity between the words in the review data: the most connected central words are “Play List”, “Service”, “Delete”, and “Login”. “Delete” is an important link between words; it was directly connected to words such as “Sign up”, “Confirm”, and “Solve”. This is also numerically identifiable through the degree centrality values in Table 3. The high degree centrality of words such as “Wi-Fi”, “Issue”, and “Execute” indicates that these areas caused many problems. Many reviewers said they would delete the apps because they were not able to use them, having experienced frequent start-up errors such as personal or adult authentication failures. Direct links to words such as “Play List”
Fig. 1. Network analysis of review data.
and “Volume” can also be found in Fig. 1. The betweenness centrality values in Table 3 can be used to estimate the potential demand for adding or modifying playlists, replicating specific lists, and performing the deletions that applications require.
Table 3. Word centrality

Degree centrality        Betweenness centrality         Eigenvector centrality
Wi-Fi           36       Play List      128.3230622     Issue          1
Issue           36       Login          113.7116879     AI             1
Execute         36       Wi-Fi           54.73386121    Execute        1
Alarm           36       Issue           54.73386121    Alarm          1
AI              36       Execute         54.73386121    Wi-Fi          1
List            34       Alarm           54.73386121    Service        0.870972
Login           33       AI              54.73386121    Modification   0.864577
Service         27       Delete          39.44599411    Naver          0.864577
Improvement     25       Improvement     31.79822932    Version        0.864577
Certification   25       Certification   31.79822932    Brown          0.864577
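The three centrality measures in Table 3 can be reproduced on any word network. The sketch below uses networkx on a toy co-occurrence graph whose edges are illustrative, not the paper’s data; as in Table 3, a hub word such as “Play List” that bridges otherwise separate groups scores highest on betweenness:

```python
import networkx as nx

# Toy word co-occurrence network standing in for the review-word network.
edges = [
    ("Play List", "Delete"), ("Play List", "Volume"), ("Play List", "Login"),
    ("Delete", "Sign up"), ("Delete", "Confirm"), ("Delete", "Solve"),
    ("Login", "Wi-Fi"), ("Wi-Fi", "Issue"), ("Issue", "Execute"),
]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)                        # share of possible neighbors
betweenness = nx.betweenness_centrality(G, normalized=False)  # raw pair counts
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

# The word whose removal would disconnect the most shortest paths.
top = max(betweenness, key=betweenness.get)
print(top, betweenness[top])
```

With `normalized=False`, a node’s betweenness in this tree counts the word pairs whose only path runs through it, which is why the bridging word dominates.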
Second, the network can generally be divided into three communities. Nugu and Hey Kakao are two platforms mostly used through smartphones, focusing more on personalized services. The reviews show that the navigation application “T-map” from Nugu is actively used by consumers, and “Kakao Talk”, Korea’s leading messaging application, can be operated by voice through Hey Kakao. Giga Genie actively uses the set-top box as a fixed device, focusing on household sharing services. Because exact addresses are registered in the set-top boxes, consumers mentioned concerns about the risk of personal information leakage. Clova has released a variety of speakers, such as “Wave” and “Brown”, and offers a service that enables users to set up different call names for the speakers. From the link between the words “Call” and “Name”, it can be confirmed that consumers’ interest in such services is growing. Third, a variety of differences emerged between the platforms. Nugu users had difficulty using functions when the login password was incorrect or the Wi-Fi connection was unavailable. For Clova, some consumers mentioned that other brands released smart speakers similar to the early versions of Clova’s Wave and Friends. Additionally, as Samsung Galaxy smartphones are widely used, many reviewers compared the assistants with Bixby as to which one is better or worse.
5 Conclusion

This study was intended to understand consumer experience with the intelligent agents being commercialized in South Korea through review data, and to present the current status of intelligent agents and directions for improving consumers’ lives. The conclusions from this study are as follows.
First, “Delete”, “Play List”, and “Service” were located at the center of the network and connected to all three communities formed by the other keywords. This suggests that consumers are demanding improvements in various virtual assistant services, including the music-related services that are most commonly used. Because of continuing errors, consumers looked for solutions themselves, repeatedly deleting and reinstalling the applications. Accordingly, services must be stabilized and facilitated before new services are added, and they must provide sufficient information on error solutions. Second, Hey Kakao and Nugu form one mobile-based virtual assistant community, focusing on providing personalized services over internet networks to suit their characteristics. Words such as “Solve”, “Network”, “Wi-Fi”, and “Necessary” imply frequent errors in the network and Wi-Fi connections that are essential for the major services of mobile-based virtual assistants. Thus, mobile-based virtual assistants should strive to minimize network connectivity errors while providing a variety of internet-based personalization services. Third, Giga Genie, a virtual assistant that utilizes set-top boxes, requires the consumer to register an address to use the service, and consumers frequently experienced errors in doing so. They also feared leaks of the detailed personal address information used. Therefore, set-top-box-based virtual assistants should minimize the personal information collected and the unnecessary procedures involved; this may help reduce the inconvenience of using the service. Fourth, as words such as “Version”, “Modification”, and “Do not work” show, Clova frequently provides version updates; consumers expect service improvements from them, but errors follow. The fact that Clova could only be called with a button and not by voice, related to the keyword “Button”, also caused issues.
As the convenience of calling is expected to be important for virtual assistants installed in the home and used for various purposes, stabilization of basic functions, including calling functions, and careful development of version updates are recommended, rather than unnecessary feature additions. This study examined consumer experience with continuously commercialized virtual assistants through review data, categorized each virtual assistant and its related issues, and suggested directions for the development of virtual assistants to improve consumers’ lives. However, this study was limited by the insufficient size of the overall review data and by the differences in the number of reviews for each virtual assistant. Future studies can proceed with quality data of sufficient size to produce more realistic and meaningful results.
References

1. Ahn, S.W.: Reviewing artificial intelligence, and the implications of Google’s disclosure of TensorFlow. Softw. Pol. Inst. 12, 13–16 (2015)
2. Yoon, S.J.: Study on new silver generation’s emotional communication and customized virtual assistant contents (2016)
3. Kim, H.M.: Development of AI technology and evolution of virtual personal assistant service. KB Financ. Holding Manag. Inst. 16(53), 1–9 (2016)
4. Wikipedia: Virtual assistant (artificial intelligence). https://en.wikipedia.org/wiki/Virtual_assistant_(artificial_intelligence)
5. ‘Smart Home Hub’ AI speaker competition round 2. What about cards from the 3 mobile network providers? http://www.updownnews.co.kr/news/articleView.html?idxno=202150
6. AI speaker Kakao Mini officially launched. http://www.updownnews.co.kr/news/articleView.html?idxno=202150
7. Consumer Insight: Hot AI speaker market, cold consumer assessment. https://m.post.naver.com/viewer/postView.nhn?volumeNo=16271707&memberNo=39577953
8. Ban, C.H., Kim, D.H.: Analysis of university department name using the R. J. Korea Inst. Inf. Commun. Eng. 22(6), 829–834 (2018)
9. Wasserman, S., Faust, K.: Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge (1994)
10. Choi, Y.J., Kweon, S.H.: A semantic network analysis of the newspaper articles on big data. J. Cybercommun. Acad. Soc. 31(1), 241–286 (2014)
Design Collaboration Mode of Man–Computer Symbiosis in the Age of Intelligence

Jinjing Liu and Ken Nah

The International Design School for Advanced Studies (IDAS), Hongik University, Seoul 03082, Republic of Korea
[email protected], [email protected]
Abstract. At present, artificial intelligence (AI) is among the hottest topics in the design field; many AI design application systems already exist and have begun to replace the designer in the initial stages of design work. In this era of intelligence, the design activities of man-computer symbiosis have become mainstream, requiring designers to reflect on themselves and to re-position their role in design activities. This research refers to previous research and literature on man-computer symbiosis design activity, combining earlier findings with the author’s observations and analysis of common practices in the design profession. It examines existing AI design application system cases, and it synthesizes the role positions and scopes of work of existing AI design application systems and designers. Finally, it applies the “Double Diamond Design Process Model” to find a collaboration mode between designers and AI design application systems in the design activities of man-computer symbiosis.

Keywords: Age of intelligence · Man-computer symbiosis · Collaboration mode · Designer · AI design application system
1 Introduction

The age of intelligence has come. The emergence of artificial intelligence (AI) has changed not only the way we think and the way we live but also the way we work. Yet the emergence of a new mode does not mean the complete replacement of traditional modes of working and thinking; rather, each represents an upgrade to conventional modes. The impact of the age of intelligence on our lives is no less than that of the second industrial revolution or of the information revolution. AI has begun to enter our daily work and life, though most people are not aware of it. With the development of intelligent technology, relationships between humans and computers have become more and more intimate. Since the 1970s and 1980s, man-computer interactive technology has become a focus for studies in linguistics, psychology, cognitive science, design, sociology, biology, and other disciplines. In the 1990s, with computers developing in an intelligent direction (able to carry out thinking, learning, memory, network communication, etc.), the man-machine relationship entered a new period of development, and a large number of books on man-computer interaction psychology appeared. By the beginning of the 21st century, the relationship was in a stage of gradual

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 640–645, 2020. https://doi.org/10.1007/978-3-030-39512-4_99
approach, and with the spread of the internet and the development of smartphones, software applications became mobile and portable. Wearable design, cloud computing, virtual reality, and other technologies also developed rapidly, and artificial intelligence began to emphasize a people-oriented purpose. Since 2012, this relationship has gradually entered the stage of man-computer symbiosis [1]. In fact, the concept of “man-machine symbiosis” was proposed by J.C.R. Licklider in 1960. He believed this would be the expected development of cooperation and interaction between people and computers: people would come to expect computers to help with repetitive tasks, while humans continue to set goals and to make assumptions and assessments, building symbiotic relationships (symbiosis being the “living together in intimate association, or even close union, of two dissimilar organisms” [2]) [3]. Gill also presented a serious discussion in his book of the moment when the machine transcends and replaces humanity, and proposed an alternative approach to people-oriented design systems and technology, emphasizing a different symbiosis of human and machine capabilities [4]. In his book, Donald Norman compares humans and machines from “people-centered” and “computer-centered” perspectives. The advantages of human beings are creativity, attention to change, strategic thinking, flexibility, and emotion; the advantages of the computer are precision, order, attentiveness, and logic. Cooperation between people and computers therefore needs to fully balance the advantages and disadvantages of the two [5]. More basically, human thinking is composed of two parts, creativity and repetitiveness. Executing the latter through machines (computers) is the best way to achieve man-computer symbiosis.
Because the ability of computers to process and present data far exceeds human limits, it is with the help of computers that humans can focus more on their creative faculties. Man-computer symbiosis fits well with social development in the age of intelligence. If traditional computer-aided design simply positions the computer as a design tool, then in the age of intelligence, the role of the AI-loaded computer system marks a clear change. Computer systems will become reliable assistants for humans, and AI systems that assume the role of task executor can help humans finish their work more effectively. Based on research into existing AI design application systems, this study applies the double diamond design process model widely used in design research. By analyzing the range of tasks that AI systems can perform in design activities, the study identifies a collaboration mode linking designers and AI design systems in man-computer symbiotic design activities for the age of intelligence. Finally, it provides some suggestions for future working modes in design.
2 Case Study

With the advent of more and more AI design application systems (intelligent applications that can automatically generate design schemes from the user’s requirements through algorithms, data, and powerful computing power), if the designer positions his or her role as executor of the design, and if the main task of
the designer is to present the solution to the customer using design skills (e.g., drawing or modeling), then AI will replace the designer. This will not wait until the distant future to unfold. AI design application systems have already been developed for many design industries and used in many practical design projects, obtaining significant achievements. Here are some examples of AI design systems used in real design projects. In 2017, Alibaba launched an AI design system called ‘Luban’, based on image intelligence technology and three core modules (learning networks, actuators, and evaluation networks). Users only need to input their needs; ‘Luban’ can then replace time-consuming manual design work and generate multiple sets of design schemes that comply with the requirements [6, 7]. The system addresses the massive demand for web banners in Taobao’s annual “Double Eleven Shopping Festival”: during the 2017 festival, it designed 400 million posters for merchants and became the hero of the Alibaba design department. SJYP and Designovel have jointly launched the first AI-designed clothing in Korea. The AI system they use, called ‘Style AI’, learns and masters the brand style from brand materials and then provides appropriate design schemes; it can also generate new clothing styles continuously and indefinitely [8]. In the fashion design industry, traditional design methods make it difficult for designers to catch up with changing trends. AI technology can help designers analyze popular trends at the fastest speed and provide more novel and diverse trends for reference. Beyond design activities that focus on presentation, the use of AI design systems has also been proposed in complex design activities such as industrial design, to improve work efficiency and obtain more design solutions.
The ‘Dreamcatcher’ system released by Autodesk is a generative design system. Generative design is a way of translating computational energy into creative energy: a designer-driven, parametrically constrained design exploration process [9, 10]. Dreamcatcher enables designers to craft a definition of their design problem through goals and constraints. This information is used to synthesize alternative design solutions that meet the objectives, and designers are able to explore trade-offs between many alternative approaches and select the most suitable design solutions for manufacture [11]. These AI design application systems, already used in actual design activities, show that AI systems can replace designers in completing the initial stages of design work (e.g., data collection, data analysis, typesetting, and scenario generation). For design activities in the age of intelligence, AI systems have thus begun to replace human designers as executors of design activities. In the face of these AI design application systems, the role of the designer must change, adapting to the new working form of design activities in the age of intelligence. The designer will no longer hold the role of executor, but rather that of planner for the entire design activity, designing and completing tasks efficiently with the help of AI. Therefore, designers should understand how to cooperate with AI systems in the design activities of man-computer symbiosis for the age of intelligence.
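The parametrically constrained exploration described above can be illustrated with a deliberately tiny loop: the designer supplies design variables, a constraint, and an objective, and the system samples and ranks alternatives. This is only a sketch of the general idea, not Dreamcatcher’s actual method, and the beam-like numbers are arbitrary:

```python
import random

random.seed(0)

def mass(w, h):
    # Objective set by the designer: lighter is better (arbitrary units).
    return w * h

def deflection(w, h):
    # Stand-in for a performance simulation of a candidate design.
    return 1000.0 / (w * h ** 3)

# The system synthesizes many alternatives within the parametric bounds
# and keeps only those that satisfy the designer's constraint.
candidates = []
for _ in range(5000):
    w = random.uniform(1, 10)
    h = random.uniform(1, 10)
    if deflection(w, h) <= 0.5:
        candidates.append((mass(w, h), w, h))

# The designer then selects among ranked feasible trade-offs.
best = min(candidates)
print(f"lightest feasible design: mass={best[0]:.2f}, w={best[1]:.2f}, h={best[2]:.2f}")
```

Real generative design systems replace the random sampling with far more capable solvers and simulations, but the division of labor is the same: the human states goals and constraints, the machine enumerates and evaluates alternatives.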
3 Analysis of Collaboration Mode

The “Double Diamond Design Process Model”, probably the most famous and popular visual design process model in the world, was developed by the British Design Council in 2005. Its core consists of finding the right problem and the right solution, and it is divided into four phases: discover, define, develop, and deliver [12]. The main feature of this design process model is its emphasis on “divergent” and “convergent” thinking. First, divergent thinking generates as many ideas as possible; then, convergent thinking refines and summarizes the many ideas into the best ones. In this model, the process runs twice, once to confirm the definition of the problem and once to create the solution. Based on the double diamond design process model, this study divides design activity into ten specific tasks: planning, material collection, information processing, requirements definition, creation, plan generation, outcome, evaluation, communication, and management. See Fig. 1.
Fig. 1. Design specific task distribution based on the double diamond design process model.
Fig. 2. Functional scope statistics for 10 AI design application system cases.
Selecting 10 AI design application systems that already participate in actual design activities as cases and analyzing their functional scope, we find the main functions of current AI design systems concentrated in four design tasks. These
four are the collection of materials, the processing of information, the generation of a plan, and the outcome. See Fig. 2. Based on the functional scope of AI design application systems in design tasks, we can derive the basic collaboration mode between designers and AI in design activities. This can help form an image of man-computer symbiosis in the age of intelligence. See Fig. 3.
Fig. 3. Basic collaboration mode between the designer and the AI design system
In the divergent stages, the AI design system executes most of the work, using its powerful data analysis capabilities, while designers can focus more on planning, communication, and management. AI can help designers think outside the box and obtain more material and inspiration: the various preliminary solutions proposed by the designer’s creative thinking can be tested by AI, and AI can also help designers break the boundaries of their thinking and discover ideas not thought of before. In the convergent stages, the designer must manage the sorting of information and the screening of possible solutions to find the right information and the best solution for the project. The designer dominates the stage of requirements definition because, lacking empathy, an AI system cannot truly understand the real needs of users; need-finding is the key to design, and it is what designers are best at. Finally, the designer needs to evaluate the generated design plan from the perspective of the planner and give critical suggestions. As the planner of design activities, the designer must clarify his or her role, be able to control each task node, and be good at cooperating with AI. In the design activities of man-computer symbiosis, understanding and following such a design collaboration mode shows how to collaborate with the AI system, will undoubtedly greatly improve the efficiency of design work, and will deliver the most satisfactory design results.
4 Conclusion

The emergence of AI has not diminished the role of human beings. Humans have always paid attention to respect for human personality when creating AI: AI technology has always been developed to help humans better play their roles according to human individuality. The social change promoted by AI will bring mankind into a better world. In the future, the boundary between humans and computers is bound to become increasingly blurred; the two are interdependent and indispensable to each other. Man-computer symbiosis is an irreversible trend. As the main drivers of innovation, designers must always keep abreast of the latest social trends. They must now learn how to cooperate with AI and find their place in the right collaboration. The “Double Diamond Design Process Model” is one of the most popular visual design processes at present and guides the design work of many companies; basing the analysis on this model therefore makes the collaboration mode between designers and AI in man-computer symbiosis in the era of intelligence clearer. At present, our research still has limitations. Since it only analyzes the functional scope of AI design application systems applicable to the general field of design, more professional, expert design systems have not appeared in the discussion. In the future, based on more in-depth study of AI design systems and other design processes, it is hoped that more in-depth analytic results can be obtained to further improve the collaborative mode of man-computer symbiosis design.
References

1. Li, X.G.: From independence to symbiosis: on the relationship evolution and future development of designers and artificial intelligence. J. Art Educ. 09, 155–156 (2018)
2. Neilson, W.A., Knott, T.A., Carhart, P.W. (eds.): Webster’s New International Dictionary, 2nd edn. G. & C. Merriam Company, Springfield (1934)
3. Licklider, J.C.R.: Man-computer symbiosis. IRE Trans. Hum. Factors Electron. 3(1), 4–11 (1960)
4. Gill, K.S.: Human Machine Symbiosis: The Foundations of Human-Centred Systems Design. Springer, Berlin (1996)
5. Norman, D.A.: Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. Diversion Books, New York (1993)
6. Alibaba Luban: AI-based graphic design tool. https://www.alibabacloud.com/blog/alibaba-luban-ai-based-graphic-design-tool_594294
7. Lou, Y.S., Li, S.D.: Research on the trend of artificial intelligence design. J. Art Des. 2(07), 87–89 (2019)
8. Style AI: when artificial intelligence decides what is fashionable. http://www.255.it/en/Style-AI-when-artificial-intelligence-decides-what-is-fashionable
9. What is generative design. https://generativedesign.wordpress.com/2011/01/29/what-isgenerative-desing/
10. Krish, S.: A practical generative design method. Comput.-Aided Des. 43(1), 88–100 (2011)
11. Project Dreamcatcher. Autodesk Research. https://autodeskresearch.com/projects/dreamcatcher
12. What is the framework for innovation? Design Council’s evolved Double Diamond. https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond
User Experience over Time with Personal Assistants of Mobile Banking Application in Turkey

Hatice Merve Demirci 1,2 and Mehmet Berberoğlu 2

1 Atılım University, Kızılcaşar Mahallesi, İncek, Gölbaşı, Ankara, Turkey
2 Middle East Technical University, Üniversiteler Mahallesi, Dumlupınar Bulvarı No: 1, 06800 Çankaya, Ankara, Turkey
{merve.demirci,mehmet.berberoglu}@metu.edu.tr
Abstract. Being able to chat with artificial intelligence-based applications has changed user interaction and opened doors to a variety of new user experiences. Applications that emerged recently with new technologies have now been launched in mobile banking applications in the form of personal assistants. This paper aims to research which features of these assistants affect the overall user experience. At the end of the study, a possible design guideline for designing such assistants in Turkey is discussed, since most existing guidelines target personal assistants for everyday life. Twelve users of Maxi, one of the most used personal mobile banking assistants, participated in the research and evaluated their experience after 15 days of usage. The research showed that their impression was positive during the first interview, but by the second interview there was a decline in all values of the user experience impressions. In the end, the results showed that the bank’s original application was preferred over the personal assistant.

Keywords: Personal assistants · Mobile applications · Mobile banking applications · Conversation design · User engagement · User engagement patterns
1 Introduction

Technological advancements have re-shaped user-product interactions by enabling people to converse with interactive systems. This new form of interaction has led people to adopt a new type of interaction with their smartphones. With the help of technological advances, it is now possible to use the mobile applications introduced with smartphones for various purposes: for instance, there are applications that aim to sustain subjective well-being, increase physical activity, support gaming and flirting, and perform banking-related tasks. Using such technology as a new product feature enabled people to become familiar with it first and then to prefer using and integrating such applications into their everyday routines. In Turkey, most banks now offer “user-friendly” mobile banking applications so that their customers can satisfy their needs when and where they need. Additionally, because of advanced technology, the accessibility of these services

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 646–652, 2020. https://doi.org/10.1007/978-3-030-39512-4_100
has affected people’s expectations of applications. As a result of these expectations, the overall experience offered by mobile applications has shaped the designer’s role: not just designing new interfaces but also fulfilling those expectations and creating meaningful experiences through use of the applications. To maximize user satisfaction, mobile banking applications in Turkey have started to adopt interactive conversational agents into their systems. Nowadays, the most downloaded and used mobile banking applications offer personal assistants to accelerate transactions and increase customer satisfaction. Thus, the more accessible technological advances become, the more the designer’s role evolves around them: designers have started to reconstruct technology and find previously non-existent forms to integrate technology into products [1]. Additionally, providing an online experience independent of time and space has attracted interactive system developers to create domain-specific conversational agents focusing particularly on banking. To maintain user engagement and sustain usage, the design of such systems is important. However, existing design guidelines for conversational agents are limited and focus on the business functions of these agents, whereas the conversational experience should be the main focus for designers seeking to sustain user engagement. In this study, the aim is to investigate how the features of personal assistants shape the overall user experience of mobile banking application usage and its perceived usability. Throughout this study, we also explored the design qualities of personal assistants of mobile banking applications that enhance user satisfaction and sustain engagement with the newly emerged technology. At the end of the study, a design guideline for designing such assistants in Turkey is proposed.
2 Literature Review

Existing interaction design guidelines mostly offer ways to understand business functions for commercial products. Personal assistants with artificial intelligence often feature access to other services that have utility in everyday life and can perform different tasks. The potential of these assistants was examined to understand how such products are integrated into mobile banking applications and how service and system functionality and technicality can be improved to raise service quality. The history of conversational agents goes back to the 1960s and the first CA, named ‘ELIZA’, created by Joseph Weizenbaum in 1966. The purpose of ELIZA was to help people who have psychological disorders by acting as a therapist [2]. The achievements that ELIZA brought to personal assistant development played an important role in the creation of new assistants blended with new technologies. With the development of technology and its application to existing mobile applications, these assistants have become very popular today. According to [3], developments in voice recognition, natural language processing, and artificial intelligence have increased the visibility and availability of these newly emerging agents, which use text or natural language as their interaction medium to imitate
human-to-human conversation. Being supported by diverse technological advancements, and being able to integrate into mobile applications or devices via conversational user interfaces, personal assistants have great potential in domain-specific applications [4]. In other words, supported by diverse technologies and mobile applications, people can use personal assistants in their daily lives without interruption to support their online banking transactions; as a result, they can complete their tasks in a single operation without the need for several steps. A growing number of people in Turkey have used online banking applications for several years. The fact that people have used this technology for years and started to handle daily banking transactions on these platforms has accustomed users to receiving feedback from these systems. In addition, two of the major banks in Turkey have started to integrate personal assistants into their applications to ease banking transactions. According to a 2019 report [5], 40% of people in Turkey have started to use personal assistants while performing their online banking transactions. Starting from this information, we aimed to investigate how people perceive the usability of these assistants and how user engagement changes over longitudinal usage. To investigate this aim, we selected one of the most promoted personal assistants from one of the major Turkish banks.
3 Research Method The perceived usability and functional performance of the selected personal assistant were evaluated by the participants after 15 days of usage. The interpretations of the participants were obtained through semi-structured, in-depth interviews. By analyzing the results of the interviews, improvement suggestions are presented regarding system operations and functionality in terms of attractiveness, perspicuity, effectiveness, stimulation, dependability, efficiency, and novelty. Lastly, the study provides insights into the factors that affect users' engagement and overall experience with online banking applications in Turkey.
3.1 Selected Personal Assistant
At the beginning of the study, we first chose the personal assistant to be used for the study: Maxi. Maxi was the most downloaded of the two major banking applications with personal assistants and has been available to users since November 2018. According to the system's self-introduction on its website, the banking transactions Maxi can perform are: expenditure advice, spending history, transactions, recent payments, money transfers, foreign exchange transactions, and account and credit card balance transactions.
3.2 Sampling
We reached the participants of the study through an announcement made on social media. The participants were six women and six men who had the banking application with the chosen personal assistant and who responded by writing to a Facebook group. The
User Experience over Time with Personal Assistants
649
interviews were conducted with these 12 users. The participants were selected from people aged between 21 and 28 who use banking applications frequently in their daily lives. Having an equal number of men and women in the study enabled us to observe whether sex, age, occupation, and previous experience affect the overall user experience and perceived usability of these personal assistants.
3.3 Method
The purpose of this method was to evaluate the interaction design elements considered useful in the interaction design process of personal assistants with artificial intelligence within a designated usage period. In this study, the usage period of the selected personal assistant was two weeks, and the study was designed as longitudinal research. Long-term user experience covers the whole usage process of a product, its subjective evaluation, and how the meaning of that product is perceived at the end of a determined usage duration. The longitudinal design was also important for observing how the interaction elements and system functionality affect long-term usage patterns and user engagement [6].
3.4 Procedure of the Research
At the beginning of the first interview, a brief introduction covering the duration, purpose, and procedure of the study was given to the participants. They were then asked to sign the consent form of the study. To get to know the participants and to better understand their propensity to use technology and their habits with the selected application, they were asked how often they use the application, which transactions they mostly perform, and for how long they have used the selected application. Afterward, they were asked whether or not they had previous experience with personal assistants. The second phase of the interview started with a brief verbal introduction to personal assistants, their types, and their purposes, given to each participant. The system's self-introduction, from its own main website, was shown after the verbal introduction. A mobile phone was used to access the main system application, and the participants were asked to view Maxi on the screen. In addition to voice recording, participants' statements were written down to compare the change in their opinions after the first interaction with the system. The first interaction was performed during the first interview with the researcher. Before the interaction, how to initiate an interaction with Maxi was described verbally without looking at the participants' screens. First, the selected application was opened and the profile of Maxi was tapped; afterward, the participants were asked to perform three tasks with Maxi. At the end of the first interaction with the system, participants were asked to fill in the User Experience Questionnaire (UEQ) to evaluate the personal assistant's perceived usability and user experience. Then, in response to the question asked, they stated their thoughts and feelings about their first interaction. Before finishing the interview, they were asked to inform the researcher if they wanted to leave the study.
The second interview started with filling in the second UEQ and continued by comparing all 26 UEQ items across the two previously filled questionnaires in detail. The comparison of the filled questionnaires aims to understand the underlying reasons for the participants' evaluation of Maxi's characteristics, functionality, and personal traits, and how those aspects affected the overall user experience. In addition to the UEQ results comparison, the participants were asked which personal assistant aspect or aspects they liked the most and the least, and the reasons why they chose the stated aspects of the system. Finally, at the end of the interview, future Maxi system, service, and interaction developments were stated by the participants in answer to the question "How should the personal assistant be improved according to your experience?". The data obtained from each participant through interviews and surveys were analyzed after the 24 interviews were completed.
4 Results The system was evaluated as positive during the first interview by eight of the twelve participants. Values between −0.8 and 0.8 represent a neutral evaluation of the corresponding scale, values above 0.8 represent a positive evaluation, and values below −0.8 represent a negative evaluation. Hence, the participants had a slightly positive or neutral impression concerning the user experience of Maxi. The impression concerning the pragmatic quality (Perspicuity, Efficiency, and Dependability) was higher than the impression concerning the hedonic quality (Stimulation, Novelty). As the tables below show, there was a decline in all user experience values, with the most dramatic changes in the Efficiency, Dependability, and Stimulation scales (Table 1).

Table 1. First UEQ results
UEQ scale        Mean   Variance
Attractiveness   1.597  1.37
Perspicuity      2.271  0.40
Efficiency       1.896  1.13
Dependability    1.375  0.80
Stimulation      1.229  1.33
Novelty          0.917  2.13
The decline in the numeric data may not look negative, yet the participants evaluated the Maxi interaction negatively after fifteen days of interaction. In other words, during the second interviews, the perceived usability and ease of use did not meet the participants' expectations. Maxi created an extremely positive impression concerning Perspicuity, but in terms of Attractiveness, Efficiency, Novelty, Dependability, and Stimulation, Maxi was judged only slightly positive. Moreover, despite the positive scores, the participants did not comment positively on the adjective pairs related to those scales (Table 2).
Table 2. Second UEQ results
UEQ scale        Mean   Variance
Attractiveness   0.894  2.03
Perspicuity      1.886  1.03
Efficiency       0.727  2.09
Dependability    0.659  0.88
Stimulation      0.091  2.02
Novelty          0.864  1.40
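The ±0.8 classification rule and the change between the two measurements can be summarized programmatically; a minimal sketch using the scale means from Tables 1 and 2 (the thresholds follow the UEQ convention stated above):

```python
# Classify UEQ scale means: neutral within [-0.8, 0.8], positive above, negative below.
def classify(mean):
    if mean > 0.8:
        return "positive"
    if mean < -0.8:
        return "negative"
    return "neutral"

first = {"Attractiveness": 1.597, "Perspicuity": 2.271, "Efficiency": 1.896,
         "Dependability": 1.375, "Stimulation": 1.229, "Novelty": 0.917}
second = {"Attractiveness": 0.894, "Perspicuity": 1.886, "Efficiency": 0.727,
          "Dependability": 0.659, "Stimulation": 0.091, "Novelty": 0.864}

for scale in first:
    drop = first[scale] - second[scale]
    print(f"{scale}: {classify(first[scale])} -> {classify(second[scale])}, drop {drop:.3f}")
```

Run this way, the script makes the pattern in the tables explicit: every scale remains formally positive or neutral after two weeks, while the largest drops occur on Efficiency, Stimulation, and Dependability.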
5 Discussion and Conclusion The participants of the study stated that they had been using the mobile banking application for at least two years. Apart from two participants, the others did not know about the personal assistant involved in this research and stated that they had never heard of it before. During the first interaction, the first task was to look at the spending history, and the participants were very pleased with the detailed graph that Maxi presented to them. However, at the end of the fifteen days, the participants stated that the personal assistant was not more useful than the application itself and did not offer convenience or speed for the tasks that were their reasons for using the application. The participants' evaluations showed which design aspects and qualities of the system affected user engagement. In terms of perceived ease of use and usability, the application remained preferable in the end, since the participants had already formed the habit of using the mobile app for their banking transactions. Half of the participants said that during the 15-day period they used the application less in order not to interact with Maxi. Moreover, after the study they said that they would not use Maxi again. In the end, we found four pieces of evidence that we can link to the interaction design elements of the personal assistant. The participants asked for a personalized interface design to perform tasks faster. We highly recommend enabling users to register their most-used transactions in the system so that transactions can be handled in a single tap.
References 1. Porcheron, M., Fischer, J.E., Reeves, S., Sharples, S.: Voice interfaces in everyday life. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 640. ACM, April 2018 2. Klopfenstein, L., Delpriori, S., Malatini, S., Bogliolo, A.: The rise of bots: a survey of conversational interfaces, patterns, and paradigms, pp. 555–565 (2017). https://doi.org/10.1145/3064663.3064672 3. Laranjo, L., Dunn, A.G., Tong, H.L., Kocaballi, A.B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi, F., Lau, A.Y., Coiera, E.: Conversational agents in healthcare: a systematic review. J. Am. Med. Inf. Assoc. 25(9), 1248–1258 (2018)
4. Kocaballi, A.B., Laranjo, L., Coiera, E.: Measuring user experience in conversational interfaces: a comparison of six questionnaires. In: Proceedings of the 32nd British Computer Society Human Computer Interaction Conference, Belfast, Northern Ireland, July 2018 5. Türkiye'deki banka müşterileri chatbotla iletişimi benimsedi [Bank customers in Turkey have adopted communicating via chatbot]. https://www.cbot.ai/tr-blog/turkiyedeki-banka-musterileri-chatbotla-iletisimi-benimsedi/ 6. Karapanos, E., Zimmerman, J., Forlizzi, J., Martens, J.B.: User experience over time: an initial framework. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 729–738. ACM, April 2009
Human-Automation Interaction Through Shared and Traded Control Applications
Mauricio Marcano1, Sergio Díaz1, Joshue Pérez1, Andrea Castellano2, Elisa Landini2, Fabio Tango3, and Paolo Burgio4
1 Tecnalia Research and Innovation, 48160 Derio, Spain {mauricio.marcano,sergio.diaz,joshue.perez}@tecnalia.com
2 RE:Lab s.r.l., 42122 Reggio Emilia, Italy {andrea.castellano,elisa.landini}@re-lab.it
3 Centro Ricerche FIAT, 10123 Turin, Italy [email protected]
4 University of Modena and Reggio Emilia, Modena, Italy [email protected]
Abstract. Automated and highly automated vehicles still need to interact with the driver at different cognitive levels. Systems at SAE Level 1 or 2 consider the human in the loop at all times and require strong participation of the driver at the control level. To increase the safety, trust, and comfort of the driver with this kind of automation, systems with a strong cooperative component are needed. This paper introduces the design of a vehicle controller based on shared control, together with an arbitration system, and the design of a Human-Machine Interface (HMI) to foster mutual understanding between driver and automation in a lane-keeping task. The driver-automation cooperation is achieved through incremental support, in a continuous spectrum from manual to full automation. Additionally, the design of an HMI to support the driver in a takeover maneuver is presented. This functionality is a key component of SAE Level 3 and 4 vehicles.
Keywords: Human-machine interface · Automated driving · Highly automated vehicles · Shared control · Traded control · Driver-automation cooperation
1 Introduction Since the beginning of automated driving, car manufacturers have embraced the idea of removing the driver from the control loop. However, the technology is not yet mature enough for the public [1], and social and legal acceptance issues [2] still represent a major impediment. A second approach has been to increase ADAS functionalities to achieve a higher level of automation while considering the driver as an active agent. This approach considers two modalities of human-machine interaction: shared control [3], where driver and machine execute the same task at the same time (e.g., a lane-keeping assistance system), and traded control, where driver and automation execute the same task at different times (e.g., an autopilot limited to highways). © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 653–659, 2020. https://doi.org/10.1007/978-3-030-39512-4_101
654
M. Marcano et al.
The major issue with the former is achieving a comfortable interaction in which the system does not overwhelm the driver while still preventing unsafe driver actions. The challenge of the latter, on the other hand, is handling the transitions between the driver's different roles (i.e., driver and passenger). Future Automated Driving Systems (ADS) will leverage the potential of both approaches by developing gradual and incremental support to adapt to the ever-changing cognitive and physical state and needs of the driver. The Human-Machine Interaction strategy has a key role in these future systems because it should be designed to exploit the potential of the technologies and the driver, i.e., to get the most out of the driver's existing skills and cognitive resources according to the different roles he/she can have while the ADS is engaged. In traditional systems, the role of the HMI is to inform/warn the driver while minimizing the impact on distraction. Conversely, in highly automated systems, the HMI should emphasize the authority over each task, to make the driver always aware of his/her role. Moreover, the HMI should be designed to increase trust in the automation and reduce the potential anxiety arising from unexpected variations of roles. The success of these interaction strategies relies less on extraordinary intelligence and more on sophisticated negotiation of changing context and subsequent behavior [4]. In the PRYSTINE project, a novel ADAS is designed under the framework of fail-operational systems, considering a driver evaluation platform, the risk of collision with external agents, and a controller to assist the driver in highway and urban scenarios. Together with these components, the HMI strategy has been designed to smoothly support the ADS by including incremental levels of interaction to gradually support the driver in the driving tasks.
This paper emphasizes the integration of the lane-keeping assist controller with the visual HMI as the first step of integration.
2 PRYSTINE Project PRYSTINE [5] is realizing Fail-operational Urban Surround perceptION (FUSION), which is based on robust radar and LiDAR sensor fusion and control functions to enable safe automated driving in urban and rural environments. The ADS developed in the project and described in this paper demonstrates the potential of the FUSION hardware/software architecture and its reliable components to handle safety-critical situations that cannot be handled by less reliable state-of-the-art approaches. The reference architecture for the integration of the decision, control, and HMI modules used in this project is shown in Fig. 1.
Fig. 1. Reference architecture of module integration for driver-automation system
Human-Automation Interaction Through Shared and Traded Control Applications
655
3 Decision and Control System Vehicles at SAE Levels of Automation (LoA) 1 and 2, where steering control is shared between the automation and the human, still require strong interaction with the driver. Meanwhile, those at LoA SAE 3 and 4 still require the driver to step into the loop under certain circumstances (more critical at L3). In the PRYSTINE project, a novel ADS is proposed in which the system assists the driver in a lane-keeping task with variable authority, considering the driver state and the potential risk of collision with external agents. The system increases its level of intervention if it detects that the driver is distracted and there is a risk of collision, ranging from alerting the driver to taking full control of the vehicle in a fluid, comfortable, and safe manner. The design of such a system requires an adaptive decision and control system able to continuously adapt to context changes, together with a visual HMI that supports the mutual understanding of the driver-automation system.
3.1 Control System
The design of the lateral controller is based on a constrained Model Predictive Control (MPC) formulation for a lane-keeping task. To account for the driver-automation interaction, two parameters are included in the problem formulation: (1) the Strength of Haptic Feedback (SoHF), which is the maximum guidance force felt by the driver, and (2) the Level of Haptic Authority (LoHA), which is related to the stiffness of the system around the optimal command. These parameters make it possible to vary the authority of the system and therefore share control of the vehicle with the driver under different scenarios. The first functionality avoids a road departure by applying a torque on the steering. This is achieved by including the lateral error as an MPC state and using the corresponding constraint (−1.5 ≤ e_L ≤ 1.5). The second functionality of the controller is lane-keeping assistance with variable authority. This is done by including the reference trajectory in the optimization function (min ‖x − x_ref‖, ‖y − y_ref‖, ‖u − u_ref‖). Additionally, the variable authority gain is added as the stiffness around the optimal steering angle (K(θ − θ_opt)). When the authority gain reaches its maximum value, the system is in full automation mode (but driver override should always be possible). These two modes of operation are part of a continuous spectrum of system intervention that depends on the driver state and the environmental conditions of the driving task, which are handled by the decision system.
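The way the two parameters interact can be illustrated with a simple sketch. The function below is an illustrative reduction of the controller, not the actual MPC implementation: it only shows how the LoHA stiffness around the optimal steering angle and the SoHF saturation are assumed to combine into a guidance torque.

```python
# Illustrative reduction of the shared-control law: a stiffness (LoHA) around the
# MPC's optimal steering angle, saturated at the maximum guidance force (SoHF).
def assist_torque(theta_driver, theta_opt, loha, sohf):
    """Guidance torque on the steering wheel; positive turns toward theta_opt."""
    torque = loha * (theta_opt - theta_driver)  # stiffness around the optimal command
    return max(-sohf, min(sohf, torque))        # never exceed the SoHF limit

# Driver 0.2 rad away from the optimal angle with moderate authority: the
# correcting torque saturates at the SoHF bound.
print(assist_torque(theta_driver=0.2, theta_opt=0.0, loha=10.0, sohf=1.5))
```

With a small LoHA the driver barely feels the system; as LoHA grows, the optimal command is enforced more stiffly, while SoHF keeps the guidance force overridable by the driver.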
3.2 Decision System
When both the driver and the automation are part of the Dynamic Driving Task (DDT), an arbitration system is necessary to distribute authority over the control of the vehicle. This authority depends on the following variables: (1) driver's status: the ability at a given time to execute the corresponding DDT, commonly measured with a vision-based system that detects the cognitive states of the driver (e.g., distraction and/or drowsiness level); (2) collision risk: the time to collision of the vehicle with
656
M. Marcano et al.
external agents. To correctly assess this risk, information about the environment is needed, including the topology of the road, other agents around the vehicle, and traffic signs; and (3) tracking: a measure of vehicle performance in tracking a pre-defined route, which generally depends on the lateral and angular errors of the vehicle. If control is shared between the driver and the automation, the arbitration module assigns the appropriate authority to each agent. For this task, a fuzzy logic algorithm is proposed that receives three inputs (driver, risk, and tracking) and outputs the LoHA of the system. This output feeds the control module, which distributes the control authority. The driver is informed about this decision through the visual HMI to foster the adoption of the correct driver role and increase trust in the automation.
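As an illustration of such an arbitration module, the sketch below implements a toy Mamdani-style fuzzy system with triangular membership functions over normalized inputs. The rule base and membership breakpoints are invented for illustration and are not the project's actual rules.

```python
# Toy fuzzy arbitration: driver distraction, collision risk, and tracking error
# (all normalized to [0, 1]) are mapped to a Level of Haptic Authority in [0, 1].

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def med(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def loha(driver, risk, tracking):
    # Rule firing strengths (min as AND, max as OR); output centroids 0.1/0.5/0.9.
    rules = [
        (min(low(driver), low(risk)), 0.1),    # attentive and safe -> low authority
        (min(med(driver), med(risk)), 0.5),    # moderate -> shared authority
        (max(high(driver), high(risk)), 0.9),  # distracted OR high risk -> take over
        (high(tracking), 0.9),                 # poor tracking -> take over
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0       # weighted-average defuzzification

print(loha(driver=0.9, risk=0.8, tracking=0.2))  # distracted driver: high authority
print(loha(driver=0.1, risk=0.1, tracking=0.1))  # attentive driver: low authority
```

The resulting LoHA can then be fed to the controller as the stiffness gain, so the handover between driver and automation is continuous rather than an on/off switch.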
3.3 Use Case
Christine is bringing her baby Mark to the kindergarten when she notices that Mark is tired and a bit nervous. Worried and in a hurry, she speeds up while continually looking in the mirror to monitor the baby. After some kilometers, Mark loses his pacifier and starts crying and yelling. Christine gets extremely nervous and tries to calm the baby down with some words, but it does not seem to work, so she then tries to get the pacifier, taking her eyes off the road for some seconds. She cannot concentrate on the street and starts driving erratically. The ADS then activates a sequence of incremental support. It detects this risky behavior and suggests she keep her eyes on the road. Since she maintains the wrong behavior and the lane deviation is becoming dangerous, the ADS applies a micro-control strategy, securing the lane boundaries to avoid an unintended lane departure while assigning partial authority to the system to help Christine in the lane-keeping task. Even this support does not resolve the situation, because Mark is crying and Christine is still distracted. So the ADS informs Christine that the automation is available for some kilometers, and the vehicle takes full control to allow her to look for the pacifier and take care of Mark to calm him down. Once in full automation mode, the ADS shows its autonomy of 1.4 km, and when the minimum takeover time is reached (see Fig. 3), it alerts Christine to get ready to take back control. When she executes the takeover maneuver, the system starts decremental support to go smoothly from full automation to fully manual.
3.4 Visual HMI Design
The HMI strategy has been designed for the specific use case of Christine, to demonstrate the potential of the incremental support of the ADS developed in PRYSTINE.
Fig. 2. HMI for warning the driver in case of visual distraction
The HMI strategy aims to increase the driver's awareness of the automation mode and thus foster the adoption of the corresponding driver role. Explicit HMI graphic elements (i.e., icons) and text have been designed to clarify who is in charge of the DDT. The layout of the HMI reflects the modes of the ADS. Figure 2 shows the layout for the HMI Warning mode: in manual driving, the ADS is engaged only to alert the driver in case his/her state has been detected as incompatible with the DDT. Figure 3 shows the layout for the HMI Micro-control mode: when an almost imperceptible control is applied to avoid lane deviation (so both the driver and the automation steer), the HMI shows this support to increase the driver's awareness of his/her wrong driving behavior.
Fig. 3. HMI to inform the driver a micro-control is applied for lane keeping.
Figures 4 and 5 show the layout for the HMI Automated mode, which has a twofold objective: (1) to clarify that the ADS is in charge of the DDT when no imminent takeover is expected; (2) to foster the adoption of an active driver role when the takeover is imminent (i.e., 30 s away).
Fig. 4. HMI for automated driving when no imminent take-over is expected
Fig. 5. HMI for automated driving when imminent take-over is expected
4 Conclusions and Future Work The HMI strategy has been designed to reflect the incremental levels of interaction of the ADS and smoothly and continuously support the driver according to his/her state and current role in the DDT. The performance of the ADS and its HMI will be tested in a driving simulator to assess its impact on safety and comfort-related parameters, using real subjects, considering a selection criterion defined by the PRYSTINE project. Acknowledgments. PRYSTINE has received funding within the Electronic Components and Systems for European Leadership Joint Undertaking (ECSEL JU) in collaboration with the European Union’s H2020 Framework Programme and National Authorities, under grant.
References 1. Brown, B., Laurier, E.: The trouble with autopilots: assisted and autonomous driving on the social road. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM (2017) 2. Marchant, G.E., Lindor, R.A.: The coming collision between autonomous vehicles and the liability system. Santa Clara L. Rev. 52, 1321 (2012)
3. Sheridan, T.B., Verplank, W.L.: Human and computer control of undersea teleoperators. Massachusetts Inst of Tech Cambridge Man-Machine Systems Lab (1978) 4. Ju, W.: The design of implicit interactions. Synth. Lect. Hum.-Centered Inf. 8(2), 1–93 (2015) 5. Druml, N., Macher, G., Stolz, M., Armengaud, E., Watzenig, D., Steger, C., Herndl, T., Eckel, A., Ryabokon, A., Hoess, A., Kumar, S.: PRYSTINE-PRogrammable sYSTems for INtelligence in automobilEs. In: 2018 21st Euromicro Conference on Digital System Design (DSD), pp. 618–626. IEEE, August 2018
Alignment of Management by Processes and Quality Tools and Lean to Reduce Unfilled Orders of Fabrics for Export: A Case Study
Z. Bardales1, P. Tito1, F. Maradiegue1, Carlos Raymundo-Ibañez1, and Luis Rivera2
1 Ingeniería Industrial, Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru {u201320009,u201217564,fmaradie,carlos.raymundo}@upc.edu.pe
2 Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain [email protected]
Abstract. The problem of non-conforming production has been identified in the dry-cleaning area of a company dedicated to the manufacture of fabrics. Concepts of Business Process Management (BPM) and Total Quality Management (TQM) are aligned in order to reduce the problem of non-fulfillment of orders by eliminating activities that do not generate value. The implementation is simulated using Bizagi, macros, and Minitab. A higher score was achieved in the 5S audit at the end of the implementation, together with a reduction of times and failures, both in the measurement for dye supply and in the occurrence of machine failures due to lack of preventive maintenance.
Keywords: BPM · TQM · Textile industry · Dry cleaning
1 Introduction The textile and apparel industry is an important element in the economy of developing countries, so this case study can yield favorable results that contribute to the country where it is carried out, taking its production processes into account. In Peru, the industry contributes 1.9% of GDP and involves approximately 14,000 jobs per year [1]. This article is structured as the analysis of a case study of a textile company in charge of producing fabrics for export. It identifies the main problem presented by the company, the impact it generates, and its respective direct and indirect causes. Afterwards, the process of applying the innovative technique is described, together with the corresponding validation and presentation of results. The implementation of process management ensures an increase in productivity in textile companies [2]; in Holland and Portugal it improved flexibility in the processes of different industrial sectors [3]. 5S manages to improve the quality of products © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 660–666, 2020. https://doi.org/10.1007/978-3-030-39512-4_102
Alignment of Management by Processes and Quality Tools
661
through the elimination of waste [4]. The Poka-Yoke tool is necessary to reduce human failure within the processes [5]. The correct application of TPM improves manufacturing yield and product effectiveness in textile companies [6, 7].
Qualitative Analysis of the Current Situation of the Company: The process of identifying the central problem causing the high rate of reprocessing within the company starts with expert judgment, yielding three central problems:
– High rate of non-fulfillment of orders in the dry-cleaning area
– Offset of the delivery date
– Production reprocesses
Identification of the Problem: In this step, we first identified the company's standard product, on the basis that it produces only knitted fabric. An ABC analysis (Fig. 1) was used, which analyzes production and income by product or product family, identifying Jersey as the main product.
Fig. 1. ABC analysis
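An ABC (Pareto) classification of the kind shown in Fig. 1 can be sketched as follows; the product names and revenue figures below are illustrative, not the company's data, and the 80%/95% cut-offs are the conventional thresholds.

```python
# ABC (Pareto) classification: rank products by revenue, accumulate their share,
# assign class A up to 80% of revenue, B up to 95%, and C for the remainder.
def abc_classify(revenue_by_product):
    total = sum(revenue_by_product.values())
    ranked = sorted(revenue_by_product.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for product, revenue in ranked:
        cumulative += revenue / total
        classes[product] = "A" if cumulative <= 0.80 else "B" if cumulative <= 0.95 else "C"
    return classes

# Illustrative knitted-fabric families with made-up revenue figures:
fabrics = {"Jersey": 700, "Rib": 120, "Interlock": 90, "Pique": 60, "Fleece": 30}
print(abc_classify(fabrics))
```

Under these made-up figures Jersey alone carries 70% of revenue and lands in class A, which mirrors the study's finding that Jersey is the standard product to analyze.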
Analysis of Causes: For the analysis of the causes, each subprocess of the dry-cleaning process was analyzed: batch assembly, chemical process, and dyeing. An Ishikawa analysis was performed for each of them, finding various root causes, which were subsequently grouped. It is important to mention that most of these fall on the part where the operator intervenes. Afterwards, to carry out the aforementioned grouping, an AMFE (FMEA) analysis was carried out, which determined which root causes should be analyzed based on the NPR (risk priority number) obtained, as follows:
– X1: Dirty and messy work surface
– X2: Incorrect follow-up of the maintenance program
– X3: Poor reproducibility of dye supply
– X4: Non-compliance with activity times (incorrect handling of fabric due to lack of knowledge of the procedures to follow)
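The AMFE (FMEA) prioritization mentioned above ranks each root cause by its risk priority number (NPR/RPN), the product of severity, occurrence, and detection scores. A minimal sketch follows; the scores are illustrative, not the study's actual AMFE values.

```python
# RPN = severity x occurrence x detection, each conventionally scored 1-10.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# Illustrative (invented) scores for the four grouped root causes:
causes = {
    "X1 dirty/messy work surface":         (6, 7, 5),
    "X2 maintenance program not followed": (8, 5, 6),
    "X3 poor dye-supply reproducibility":  (7, 6, 7),
    "X4 activity times not met":           (5, 6, 4),
}
ranked = sorted(causes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for cause, scores in ranked:
    print(f"{cause}: RPN = {rpn(*scores)}")
```

Causes whose RPN exceeds an agreed threshold are the ones selected for countermeasures, which is how the X1-X4 shortlist above would have been produced.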
662
Z. Bardales et al.
2 Proposal The innovative technique to be implemented is BPM supported with TQM tools. This aims to create synergy between how the processes are organized and the value added by each of them. BPM is not only about a correct structure of processes to achieve correct management within an organization; therefore, the innovative technique proposes implementing it in conjunction with quality tools. A study by Nadarajah and Syed A. Kadir [8] mentions the importance of integration between process management and the continuous improvement of processes, since they should be seen as complementary. The proposal comes down to better communication between processes to solve existing problems, and to increasing the value added to each process through continuous improvement. Companies usually consider these aspects unlinked, since they are used for different purposes: BPO (a measuring instrument to determine the impact on overall business performance through financial and non-financial indicators) to manage the orientation of the processes, and PII (implemented as TQM, BPR, or benchmarking) to assure the quality of each process. However, companies that are capable of understanding that BPM is a combination of both strategies can have a broader perspective for carrying out proper process management. To achieve the objective of planning and optimizing the activities, tools of the Lean Manufacturing methodology will be used. Lean is responsible for the continuous improvement and optimization of a production system by reducing waste of all kinds: inventories, times, defective products, transportation, and rework by teams and people. The tools of this technique are involved with BPM, since its novelty consists in combining different elements and applications to reduce the mentioned waste at the same time [9]. This is intended to link the improvement of processes with their management. Finally, management evaluations are carried out to ensure that each process continues to satisfy the client's requirements, remains aligned with the company's objectives, and remains a competitive advantage (Fig. 2).
Fig. 2. Innovative proposal
3 Results The implementation of the tools was carried out according to each independent variable obtained. Likewise, the effectiveness of each tool was analyzed, as follows (Table 1):

Table 1. Validation tools
Tool                                      | Validation resource                         | Stage                             | Information
5S                                        | 5S audit                                    | Implemented according to schedule | Based on the results of the audit
BPM                                       | Bizagi                                      | Implemented according to schedule | Based on the times after implementation
Poka-Yoke                                 | R&R analysis                                | Implemented according to schedule | Based on the times after implementation
Quality of the preventive machine program | New calculation of maintenance availability | Implemented according to schedule | Based on the records implemented

3.1 5S Validation
For the validation of the 5S implementation, the final audit of the workstation is carried out in the entire dyeing process: batch assembly, prior chemical preparation and dyeing. According to this, it was obtained that the score was increased from 29 points obtained initially, to 40, as shown in the table. Likewise, in the radal, it is verified the “S” that have had variation and improvement due to the implementation (Table 2). Table 2. Comparison of audits before and after implementation
664
Z. Bardales et al.
3.2 BPM Validation
For this validation, a simulation was run in the Bizagi program: the situation is modeled both before and after the improvement, obtaining as a result the amount of reprocessing in each scenario (Fig. 3).
Fig. 3. Process with improvement
Likewise, the ANOVA analysis is performed again to verify the standardization of the processes through the analysis of the means (Fig. 4).
Fig. 4. ANOVA analysis of the activities
Finally, the results of implementing this tool are reflected in a reduction of reprocessed lots by 3 percentage points, as shown in the table (Table 3).

3.3 Poka-Yoke Validation
To validate this tool, a test with samples was performed for the dye supply. The analysis was performed by entering the data into the Minitab program and again using a repeatability and reproducibility (R&R) study. As in the previous case, the test was applied to three operators from different shifts (Table 4).
Table 3. Comparison of times (minutes) and percentage of reprocessing
Process | Current time (simulation) | Projected time (simulation) | Variation
Batch assembly | 233.42 | 136.98 | 41%
Chemical preparation | 227.51 | 205.99 | 9%
Singeing (Chamuscado) | 47.62 | 42.63 | 10%
Chemical bleaching | 85.36 | 76.87 | 10%
Mercerizing (Mercerizado) | 94.53 | 86.49 | 9%
Dyeing (Tinturado) | 1391.27 | 1167.27 | 16%
Total | 1852.2 | 1510.24 | 18%
Reprocesses | 8.58% | 5.45% | 3%
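The "Variation" column follows directly from the simulated times in Table 3; as a quick check, it can be recomputed as below (a sketch with the table's values hard-coded, not part of the original study's tooling):

```python
# Recompute the "Variation" column of Table 3 from the simulated times (minutes).
# Values are taken from the table; process names are the English translations.
times = {
    "Batch assembly": (233.42, 136.98),
    "Chemical preparation": (227.51, 205.99),
    "Singeing": (47.62, 42.63),
    "Chemical bleaching": (85.36, 76.87),
    "Mercerizing": (94.53, 86.49),
    "Dyeing": (1391.27, 1167.27),
    "Total": (1852.2, 1510.24),
}

def variation(current: float, projected: float) -> int:
    """Relative time reduction, rounded to a whole percent."""
    return round((current - projected) / current * 100)

for process, (current, projected) in times.items():
    print(f"{process}: {variation(current, projected)}%")

# The reprocessing figure is a difference of percentages, not a ratio:
print(round(8.58 - 5.45))  # → 3 (percentage points)
```

Note that the 3% reprocessing improvement is a difference in percentage points (8.58% to 5.45%), unlike the time rows, which are relative reductions.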
Table 4. Number of errors per measurement test
Operator | Request | Number of errors (current) | Number of errors (pilot)
1 | 4.5 g of color | 19 of 30 | 15 of 30
2 | 4.5 g of color | 23 of 30 | 18 of 30
3 | 4.5 g of color | 21 of 30 | 16 of 30
As the previous table shows, the number of errors decreased with the pilot implementation: the current average inaccuracy is 21/30, while the simulated one is 16/30. With the implementation of the final version of the proposed Poka-Yoke system, 98% accuracy in the required weights is expected, according to the characteristics of the software.
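The quoted averages can be reproduced from the counts in Table 4 (a sketch using those counts; the 98% figure is the software's specification, not computed here):

```python
from statistics import mean

# Errors out of 30 weighing attempts per operator (Table 4).
current_errors = [19, 23, 21]
pilot_errors = [15, 18, 16]

avg_current = mean(current_errors)     # 63/3 = 21.0 -> quoted as "21/30"
avg_pilot = round(mean(pilot_errors))  # 49/3 ≈ 16.33 -> quoted as "16/30"
print(f"current: {avg_current:.0f}/30, pilot: {avg_pilot}/30")

# Corresponding accuracy rates of the dye-supply step:
accuracy_current = 1 - mean(current_errors) / 30  # 0.30
accuracy_pilot = 1 - mean(pilot_errors) / 30      # ≈ 0.456
```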
4 Conclusion In conclusion, the effectiveness of the proposed innovative technique is confirmed, since the general objective established at the beginning of the investigation was achieved. It is also possible to consider and evaluate the reduction of the agents that intervene in the process and diminish the expected effectiveness of the results of each tool. Finally, the case study will reveal opportunities for improvement that can complement it: since the implementation can be applied in any area, there is the opportunity to find variants in each area that are adaptable and generalizable, achieving greater efficiency.
References 1. APTT: La Industria Textil y Confecciones (2016). http://apttperu.com/la-industria-textil-y-confecciones/ 2. Ponce Herrera, Katherine Cecilia (2016). Retrieved from: https://repositorioacademico.upc.edu.pe/handle/10757/620981?locale=es&language=es&locale-attribute=es
3. Janssen, K.J., Revesteyn, P.: Business process management in the Netherlands and Portugal: the effect of BPM maturity on BPM performance. J. Int. Technol. Inf. Manag. 24(1), Article 3 (2015) 4. Pheng, L.S., Khoo, S.D.: Team performance management: enhancement through Japanese 5S principles. TQM Mag. 7(7/8), 105–111 (2001) 5. Vinod, M., Devadasan, S.R., Sunil, D.T., Thilak, V.M.M.: Six Sigma through Poka-Yoke: a navigation through literature arena (2015) 6. Wickramasinghe, G.L.D., Perera, A.: Effect of total productive maintenance practices on manufacturing performance: investigation of textile and apparel manufacturing firms. J. Manufact. Technol. Manag. 27(5), 713–729 (2016) 7. Lastra, F., Meneses, N., Altamirano, E., Raymundo, C., Moguerza, J.M.: Production management model based on lean manufacturing for cost reduction in the timber sector in Peru. In: Advances in Intelligent Systems and Computing, vol. 971, pp. 467–476 (2019) 8. Nadarajah, D., Syed A. Kadir, S.L.: Measuring business process management using business process orientation and process improvement initiatives. Bus. Process Manag. J. 22(6), 1069–1078 (2016). https://doi.org/10.1108/BPMJ-01-2014-0001 9. Jauregui, R., Pamela, A., Gisbert Soler, V.: Lean manufacturing: herramienta para mejorar la productividad en las empresas. In: 3C Empresa: investigación y pensamiento crítico, Edición Especial, pp. 116–124 (2017). http://dx.doi.org/10.17993/3cemp.2017.especial.116-124
Detection and Prevention of Criminal Attacks in Cloud Computing Using a Hybrid Intrusion Detection Systems

Thierry Nsabimana1(&), Christian Ildegard Bimenyimana1, Victor Odumuyiwa2, and Joël Toyigbé Hounsou1

1 Institut de Mathématiques et de Sciences Physiques de l'Université d'Abomey-Calavi, Porto-Novo, Benin
{thierry.nsabimana, christian.bimenyimana}@imsp-uac.org, [email protected]
2 University of Lagos, Lagos, Nigeria
[email protected]
Abstract. In this paper, we propose a cloud-based Hybrid Intrusion Detection and Prevention System using a signature-based method and a Genetic Algorithm to defeat DDOS/DOS attacks attempting to compromise the three security goals known as "CIA": Confidentiality (C), Integrity (I) and Availability (A) of cloud services and resources. We apply Snort-IDS in combination with the Splunk web framework (a visualization tool) to detect and prevent DDOS/DOS attacks based on signature rules. Moreover, to be able to mitigate known and unknown cloud attacks, an anomaly detection approach is built using a Genetic Algorithm. We deeply analyse and explore the existing Snort-IDS rules for DDOS/DOS attacks, and provide some improvements on the evaluated rules. Through the analysis of the experimental results, we conclude that our approach could be incorporated in cloud service models to reduce these attacks.

Keywords: Genetic Algorithm · IDS · Splunk · Snort-IDS · Cloud computing · DDOS/DoS
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 667–676, 2020. https://doi.org/10.1007/978-3-030-39512-4_103

1 Introduction Cloud computing aims at providing a shared pool of resources (e.g., computing power, storage, network) on demand to cloud users. The three major cloud service models are: Software as a Service (e.g. Salesforce, GoogleApps), where cloud users are provided the capability of accessing software running in a cloud environment; Platform as a Service (e.g. Windows Azure), where cloud users are provided the capability of creating their own applications and deploying them onto the cloud infrastructure; and Infrastructure as a Service (e.g. Microsoft Azure), where cloud providers offer access to computing power, file storage and network. Moreover, cloud service models have four possible deployment models: public, private, community and hybrid cloud. Public cloud infrastructure is provided to the general public and is owned, managed and operated by cloud providers and whichever
668
T. Nsabimana et al.
public organization. For instance, services that should be available to public users include social media, e-commerce, e-learning and e-mail services. Private cloud infrastructure is supplied to only one organization comprising several cloud users. It is owned, managed and operated by a private organization [1], providing the maximum control over data storage, data security and quality of service to the cloud users. Community cloud is a deployment type that focuses on a community of users, which could be multiple organizations sharing the same global mission [5] (e.g., insurance, healthcare, policy or compliance considerations, universities), and may be operated by a specific organization. Hybrid cloud infrastructure is a combination of two or more of the cloud deployment models described above. With the increasing growth of cloud computing technology and services, public as well as private companies are willingly moving their applications and services into the cloud and subscribing to various cloud services. Consequently, criminal intruders or hackers attempt to penetrate cloud service models and launch different kinds of attacks against them. It has been observed that both cloud users and cloud providers face major issues of data availability, reliability and data security [3]. Our concern is to ensure that the three security goals known as "CIA" (Confidentiality, Integrity and Availability) of cloud resources and services are not compromised, especially through attacks like DDOS/DOS. DDOS/DOS attacks often infiltrate cloud services and render them useless. An efficient way to defeat these attacks is to use Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) to enhance the security of cloud infrastructure and services.
An IDS follows one of three fundamental intrusion detection methodologies: the Misuse/Signature Detection Approach (SDA), the Anomaly Detection Approach (ADA) and the Hybrid Detection Approach (HDA). ADA discovers and detects network attacks based on deviations from established normal behaviour. Security software based on ADA triggers an alarm when unknown behaviour is detected. When normal behaviour is wrongly identified as an attack by the system, it is a false positive; when the system takes abnormal behaviour as normal behaviour, it is a false negative [4]. Despite generating a very high false positive rate, ADA has strong capabilities for detecting both known and novel network attacks. SDA detects and prevents well-known network attacks based on known signature rules or patterns saved in a knowledge database. Security software based on misuse detection generates very few false positives because all abnormal activities are predefined in the knowledge database [4]. HDA is a combination of the two approaches described above. There are different categories of IDS used in cloud computing: host-based IDS, network-based IDS, hypervisor-based IDS and distributed IDS. A host-based IDS is deployed on, and monitors in real time, an individual host computer; it is incorporated on business-critical computer hosts as well as DMZ servers that are likely to be compromised [4]. A network-based IDS is installed at the point of entry of a network and monitors the incoming/outgoing traffic through its network segment [4]. A hypervisor/virtual machine based IDS controls and monitors intrusion detection for each virtual machine [3]. «Virtual Machine Monitor (VMM) or hypervisor is the software layer providing the virtualization, which allows the multiplexing of the underlying physical machine between
Detection and Prevention of Criminal Attacks in Cloud Computing
669
different virtual machines, each running its own operating system» [3]. A Distributed IDS (DIDS) is a mixture of multiple IDSs (e.g. HIDS, NIDS, etc.) over a large network, each communicating with the others, or supported by a central server that enables network monitoring [2]. The main goal of this paper is twofold. First, we provide a Hybrid Intrusion Detection and Prevention System, applying a soft computing algorithm, namely a Genetic Algorithm (GA), to analyse, detect and prevent DDOS/DOS attacks for each cloud cluster. For demonstration purposes, we train and test our approach using network traffic packets captured with Wireshark. Second, we apply the existing Network Intrusion Detection System Snort-IDS in combination with the Splunk web framework to detect and prevent DDOS/DOS attacks based on signature rules. In addition, we deeply analyse and explore the existing Snort-IDS rules for DDOS/DOS attacks, and provide some improvements on the analysed rules. To be able to mitigate other known/unknown cloud attacks, we built an anomaly detection approach using a Genetic Algorithm. We tested our improved Snort-IDS rules and the results showed an improvement in DDOS/DOS attack detection. In addition, the experimental results confirmed that our proposed anomaly detection approach using GA is more efficient in detecting DDOS/DOS attacks than the improved Snort-IDS rules. The remainder of the paper is structured as follows: Sect. 2 presents the problem statement and Sect. 3 presents related work. Section 4 introduces background on the proposed IDS/IPS approach and the Snort-IDS software. Section 5 presents the proposed Hybrid-NIDS structure for cloud computing. Section 6 presents the implementation and results of the GA-based IDS/IPS approach and the Snort-IDS software. Finally, Sect. 7 presents the conclusion and future work.
2 Problem Statement Security has always been the main issue for public as well as private companies migrating to cloud computing. Cloud computing suffers from DDOS/DOS attacks performed by external and internal attackers. These security issues may rapidly decrease the adoption of cloud computing. For that reason, cloud providers must adequately secure cloud resources, rigorously taking into account the three key security goals known as "CIA": Confidentiality, Integrity and Availability of resources and services in cloud computing. There are commercial as well as open source IDS tools that can efficiently defeat network attacks, but some of them have the drawback of generating a very high false positive rate (resulting from the anomaly detection approach) or a very high false negative rate (resulting from the misuse/signature detection approach). It is, however, important to note that developing security solutions applicable to cloud computing demands a thorough understanding of the cloud architecture and environment.
3 Related Work Cloud security is of great concern for both cloud users and cloud providers. Security issues in cloud environments have attracted the attention of several researchers, and quite a number of studies have been carried out in this domain. Hameed et al. [1] provided a cloud-based IDS to reduce the risk of intrusion on cloud networks and to overcome some limitations of existing IDSs. Their solution is deployed as a Software as a Service model to secure cloud users from intrusion attacks. They set up "a hardware layer, which forwarded the request to a service layer, which analyses the request patterns, and further forwards to the intrusion detection layers, which checks the request and provides the alerts against the malicious requests by applying genetic algorithm" [1]. Patil et al. [6] discussed cloud security issues. They also implemented an IDS in which they used a genetic algorithm to detect and prevent intrusion attacks. Kuldeep et al. [7] showed that many of the existing IDSs are designed to handle specific attacks. There is no "one size fits all" technique; hence, no single technique can guarantee protection against all future attacks. They developed an architecture that adapts the Snort IDS to cloud infrastructure in order to provide security against intrusion attacks.
4 Background on Proposed IDS/IPS Approach Our proposed solution to these security issues is to integrate an IDS/IPS into the cloud computing environment. The position of the IDS/IPS within the network, the detection techniques it deploys and its configuration are factors that determine its efficiency [2]. In this section, we present our proposed Genetic Algorithm based IDS/IPS and Snort-IDS, which would be incorporated in the cloud environment to overcome the security issues of resources and services.

4.1 Genetic Algorithm Based IDS/IPS Approach
A Genetic Algorithm [11, 12] is an evolutionary algorithm used for optimization purposes. When applying a GA to network problems, the features in the dataset are represented as a chromosome [12]. Gong et al. [13] and Hounsou et al. [12] used seven features of each captured packet (Duration, Protocol, Source–port, Destination–port, Source–IP, Destination–IP, Attack–name). A fitness function evaluates each individual in the dataset in order to find the best solutions and discard the worst ones. A support-confidence based fitness function is used to derive rules, which in turn are applied to classify network intrusions. The GA operators (selection, crossover or recombination, and mutation) are used to produce new populations. Based on our previous work, an example of a rule used to classify network connections when detecting the U2R attack rlogin from training data is given below [12]: if (Duration = 0:0:14 and Protocol = rlogin and Source–port = 1022 and Destination–port = 513 and Source–ip = 192.168.1.30 and Destination–ip = 192.168.0.20) then Attack–name = rlogin.
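The support-confidence fitness evaluation described above can be sketched as follows (a minimal illustration with hypothetical packet records and illustrative weights, not the authors' implementation):

```python
# Sketch of a support-confidence fitness for a GA-evolved classification rule.
# A rule is a condition (feature -> required value) plus an attack label.
# Standard rule-mining definitions:
#   support    = |A and B| / N     (A: condition matches, B: label matches)
#   confidence = |A and B| / |A|

def matches_condition(rule, packet):
    return all(packet.get(f) == v for f, v in rule["condition"].items())

def fitness(rule, dataset, w1=0.2, w2=0.8):
    n = len(dataset)
    a = sum(1 for p in dataset if matches_condition(rule, p))
    ab = sum(1 for p in dataset
             if matches_condition(rule, p) and p["attack_name"] == rule["label"])
    support = ab / n
    confidence = ab / a if a else 0.0
    # Weighted combination; the weights w1 and w2 are illustrative choices.
    return w1 * support + w2 * confidence

# Hypothetical training packets, mirroring the rlogin example in the text.
dataset = [
    {"protocol": "rlogin", "src_port": 1022, "dst_port": 513,
     "src_ip": "192.168.1.30", "dst_ip": "192.168.0.20", "attack_name": "rlogin"},
    {"protocol": "http", "src_port": 40000, "dst_port": 80,
     "src_ip": "192.168.1.31", "dst_ip": "192.168.0.21", "attack_name": "normal"},
]

rule = {"condition": {"protocol": "rlogin", "dst_port": 513}, "label": "rlogin"}
# support = 1/2, confidence = 1/1, so fitness = 0.2*0.5 + 0.8*1.0 = 0.9
print(fitness(rule, dataset))
```

In a full GA run, selection, crossover and mutation would repeatedly transform a population of such rules, keeping those with the highest fitness.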
“This rule shows that if a network packet is originated from source IP address 192.168.1.30 and source port 1022, and sent to destination IP address 192.168.0.20 and destination port 513 using the protocol rlogin, and the connection duration is 0:0:14 s, then it is a network attack of type rlogin”.

4.2 Snort-IDS Software
Snort NIDS is an open source network IDS/IPS created by Martin Roesch in 1998. It is configurable and freely available for multiple platforms (i.e. GNU/Linux, Windows) [10]. The actions supported by Snort are alert, drop, block, ignore, pass, reject, activate and dynamic. Snort can work with other tools for better output: SnortSnarf, Snorby, BASE, OSSIM and OSSEC. The signature-based IDS model used in Snort matches incoming attack signatures against stored signatures associated with known attacks like pod, portsweep, DoS-nuke, Teardrop, Saint, etc. [10]. The detection engine of Snort allows logging, alerting and responding to any known attack [10]. Snort cannot identify novel attacks. Snort IDS can be used in a cloud environment if the signature rules database is kept up to date.
5 Proposed Hybrid-NIDS Structure for Cloud Computing The purpose of this work is to provide an efficient Hybrid-IDS/IPS that can detect any internal or external attack. We present the proposed design of the Hybrid-IDS/IPS structure for the cloud and show how it can be deployed in a cloud computing environment to solve its security issues. Our proposed Hybrid-NIDS architecture for the cloud environment is divided into eight modules (as depicted in Fig. 1). We use a misuse/signature detection approach based on Snort-IDS and an anomaly detection approach based on a Genetic Algorithm. Snort-IDS is used to detect known attacks based on signature rules, whereas the GA can detect known as well as novel attacks. As Snort-IDS is configured with Wireshark, the incoming traffic packets pass through Snort-IDS, and known attacks are detected based on the signature rules saved in the database. If traffic packets match the rules, alert messages are generated. The generated alerts are forwarded to the logging and alerting system, which writes a log or alert file; the alerts are then forwarded to Splunk for visualization. For anomaly detection, the incoming traffic packets are preprocessed and passed through the Genetic Algorithm to detect both known and unknown attacks. As depicted in Fig. 2, we deploy the Hybrid-NIDS model in each cloud environment. There are one or more cloud clusters, and each cloud cluster contains one or more VMs. The Hybrid-NIDS model is integrated in each cloud cluster to monitor, detect and prevent internal and external attacks. A central Hybrid-NIDS model is positioned in the cloud to link the cloud clusters to one another.
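The signature-first, anomaly-second packet flow described above can be sketched as follows (an illustrative skeleton of the dispatch logic only; the function names, the toy signature database and the trivial stand-in detectors are ours, not the paper's code):

```python
# Skeleton of the Hybrid-NIDS packet flow: signature matching first
# (Snort-like), then anomaly detection (GA-based) for unmatched traffic.

SIGNATURES = {("tcp", 513): "rlogin attack"}  # hypothetical signature database

def signature_check(packet):
    """Snort-like lookup against the known signature rules."""
    return SIGNATURES.get((packet["protocol"], packet["dst_port"]))

def anomaly_check(packet):
    """Stand-in for the GA-based detector: flag an unusually high packet rate."""
    return "possible DDOS/DOS" if packet["pkts_per_sec"] > 1000 else None

def hybrid_nids(packet, alert_log):
    known = signature_check(packet)
    if known:                        # known attack -> log, alert (to Splunk), drop
        alert_log.append(("signature", known))
        return "drop"
    novel = anomaly_check(packet)    # unmatched traffic -> anomaly detection
    if novel:
        alert_log.append(("anomaly", novel))
        return "drop"
    return "pass"

alerts = []
print(hybrid_nids({"protocol": "tcp", "dst_port": 513, "pkts_per_sec": 10}, alerts))   # drop
print(hybrid_nids({"protocol": "udp", "dst_port": 53, "pkts_per_sec": 5000}, alerts))  # drop
print(hybrid_nids({"protocol": "tcp", "dst_port": 80, "pkts_per_sec": 10}, alerts))    # pass
```

The ordering mirrors the architecture: the low-false-positive signature stage handles known attacks cheaply, and only traffic it cannot classify reaches the costlier anomaly stage.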
Fig. 1. Design of proposed Hybrid-NIDS architecture for cloud
Fig. 2. Deploying of our proposed Hybrid-NIDS in cloud environment
6 Implementation and Results

6.1 Snort-IDS and Splunk for Snort Results
Table 1 contains the existing Snort rules found in [8]. We analyzed, explored and tested them using our Hybrid-NIDS/IPS. As a result, we noticed that some options should be added to achieve a high detection rate. Figure 3 shows the attacker source IP addresses and the victim destination IP addresses from Splunk. Table 2 contains the improved Snort rules. After analysing and improving them, we tested them using our implemented system; we noticed that our improved Snort-IDS rules
Table 1. Sample of existing Snort-IDS rules [9]
1. alert tcp $EXTERNAL_NET any -> $HOME_NET $HTTP_PORTS (msg: "SLR Alert - HOIC Generic Detection with booster - HTTP 1.0/Header Double Spacing"; flow: established, to_server; content: "User-Agent|3a 20 20|"; nocase; content: "HTTP/1|2e|0"; nocase; reference: url, blog.spiderlabs.com; threshold: type both, track by_src, count 15, seconds 30; classtype: slr-tw; sid:1; rev:1;)
2. alert tcp any any -> 192.168.1.107 any (msg: "FIN Flood Dos"; flags: F; sid: 1000006;)
3. alert tcp any any -> 192.168.1.107 any (msg: "SCAN SYN-FIN"; sid: 1000001; flags: SF; rev: 1;)
4. alert tcp any any -> 192.168.1.10 any (msg: "TCP Flood"; sid: 1000001;)
5. alert icmp any any -> any any (msg: "Smurf Dos Attack"; sid: 1000003; itype: 8;)
Table 2. Sample of improved Snort-IDS rules
1. alert tcp $EXTERNAL_NET any -> $HOME_NET $HTTP_PORTS (msg: "High Orbit Ion Cannon (HOIC) tool attack detected"; flow: established, to_server; content: "|55 73 65 72 2d 41 67 65 6e 74 3a 20 20|"; nocase; content: "HTTP 1.0|2e|0"; nocase; threshold: type both, track by_src, count 2, seconds 2; sid: 1000049; rev: 1;)
2. alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg: "Ping of death attack detected with FIN packet requests"; window: 512; detection_filter: track by_src, count 100, seconds 3; classtype: attempted-dos; reference: url, www.imsp-benin.com; sid: 1000050; flags: F;)
3. alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg: "Ping of death attack detected with Smurf DDOS ICMP packets"; detection_filter: track by_src, count 30, seconds 1; itype: 8; icode: 0; classtype: attempted-dos; reference: url, www.imsp-benin.com; sid: 1000051; rev: 1;)
4. alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg: "Ping of death attack detected with SYN-FIN packet requests"; window: 512; detection_filter: track by_src, count 30, seconds 1; classtype: attempted-dos; reference: url, www.imsp-benin.com; sid: 1000052; flags: SF; rev: 1;)
5. alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg: "DOS attack with GoldenEye tool"; detection_filter: track by_src, count 200, seconds 5; classtype: attempted-dos; reference: url, www.imsp-benin.com; sid: 1000053; rev: 1;)
Table 3. Detection results from Snort-IDS
Attack type | True positive | False positive | False negative | Running time
ICMP Smurf | 99.97% | 0.00% | 1.56% | 11.58 s
TCP Flooding | 99.95% | 0.001% | 0.049% | 14 s
HTTP Goldeneye | 73.85% | 0.00% | 36.33% | 41 s
TCP SYN FIN Flood | 49.96% | 0.00% | 50.03% | 13 s
TCP HOIC | 0.14% | 0.00% | 99.86% | 1.6 s
(a) Detected Attacker source IP addresses
(b) Detected Victim Destination IP address
Fig. 3. Attacker source IP address and victim destination IP address from Splunk.
successfully detect DDOS/DOS attacks. The obtained results can be seen in Table 3. The ICMP Smurf DDoS attack is detected with a very high detection rate, over 99%.

6.2 GA Based IDS Results
Table 4 depicts the results generated by the GA-based IDS. Comparing these results with those generated by Snort-IDS (see Table 3), we can conclude that the GA-based IDS performs well, with correspondingly high detection rates and a low false positive rate.

Table 4. Detection results from GA
Attack type | TP | FP | FN | Running time
ICMP Smurf | 100% | 0.00% | 0.00% | 26.34 s
TCP Flooding | 100% | 0.00% | 0.00% | 282.81 s
HTTP Goldeneye | 99.09% | 0.00% | 0.90% | 20.59 s
TCP SYN FIN Flood | 99.97% | 0.00% | 0.02% | 489.26 s
TCP HOIC | 100% | 0.00% | 0.00% | 192.38 s

6.3 Detection Rate Comparison Between GA and Snort-IDS

Table 5 compares the GA-based IDS and Snort-IDS. The rules tested with the Snort-IDS software performed worse than the Genetic Algorithm. The GA-based IDS achieves detection rates above 99% with a low false positive rate, whereas the detection rate of Snort-IDS ranges from 0.14% to 99.97%, also with a low false positive rate.
Table 5. Detection rates comparison between GA and Snort IDS
Approach | Attack type | TP | FP | FN | Running time
GA | ICMP Smurf | 100% | 0.00% | 0.00% | 26.34 s
GA | TCP Flooding | 100% | 0.00% | 0.00% | 282.81 s
GA | HTTP Goldeneye | 99.09% | 0.00% | 0.90% | 20.59 s
GA | TCP SYN FIN Flood | 99.97% | 0.00% | 0.02% | 489.26 s
GA | TCP HOIC | 100% | 0.00% | 0.00% | 192.38 s
Snort-IDS | ICMP Smurf | 99.97% | 0.00% | 1.56% | 11.58 s
Snort-IDS | TCP Flooding | 99.95% | 0.001% | 0.049% | 14 s
Snort-IDS | HTTP Goldeneye | 73.85% | 0.00% | 36.33% | 41 s
Snort-IDS | TCP SYN FIN Flood | 49.96% | 0.00% | 50.03% | 13 s
Snort-IDS | TCP HOIC | 0.14% | 0.00% | 99.86% | 1.6 s
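The TP/FP/FN percentages reported in the tables are the standard confusion-matrix ratios; for reference, they can be computed from raw counts as follows (a generic sketch with made-up counts, not the authors' evaluation scripts):

```python
# Detection-rate metrics from confusion-matrix counts, as percentages.
def rates(tp: int, fp: int, fn: int, tn: int):
    detection_rate = tp / (tp + fn) * 100       # true positive rate (TP column)
    false_positive_rate = fp / (fp + tn) * 100  # FP column
    false_negative_rate = fn / (tp + fn) * 100  # FN column
    return detection_rate, false_positive_rate, false_negative_rate

# Example: 9995 of 10000 attack packets detected, no benign packets flagged.
dr, fpr, fnr = rates(tp=9995, fp=0, fn=5, tn=10000)
print(f"TP={dr:.2f}%  FP={fpr:.2f}%  FN={fnr:.2f}%")  # TP=99.95%  FP=0.00%  FN=0.05%
```

Under these definitions the TP and FN columns of a row should sum to 100%, which is a useful sanity check when reading such tables.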
7 Conclusion and Future Work In this work, a Genetic Algorithm based IDS and Snort-IDS were used to mitigate DDOS/DOS attacks in a cloud environment. The Hybrid-NIDS architecture and its deployment on the cloud were presented, and we improved the Snort-IDS rules for DDOS/DOS attacks. The results of the rules tested using the Snort-IDS software, the HOIC tool and Kali Linux OS prove that our improved Snort-IDS rules can efficiently detect DDOS/DOS attacks in the cloud. The experimental results confirm that the Genetic Algorithm outperforms the improved Snort-IDS rules: it detects a significant number of DDOS/DOS attacks with very low false positive and false negative rates. Nevertheless, its running time is considerably higher than that of Snort-IDS. Considering the detection rates, the GA-based IDS should be incorporated in cloud environments to mitigate DDOS/DOS attacks. For future work, we will apply a deep learning convolutional neural network together with other open source network intrusion detection and prevention systems, such as Suricata-IDS, to detect and reduce DDOS/DOS attacks.
References 1. Hameed, U., Naseem, S., Ahamd, F., Alyas, T., Khan, W.A.: Intrusion detection and prevention in cloud computing using genetic algorithm. Int. J. Sci. Eng. Res. 5 (2014) 2. Modi, C., Patel, D., Borisaniya, B., Patel, H., Patel, A., Rajarajan, M.: A survey of intrusion detection techniques in cloud. J. Netw. Comput. Appl. 36(1), 42–57 (2013) 3. Bhat, A.H., Patra, S., Jena, D.: Machine learning approach for intrusion detection on cloud virtual machines. Int. J. Appl. Innov. Eng. Manag. (IJAIEM) 2(6), 56–66 (2013) 4. Bunel, P.: An Introduction to Intrusion Detection Systems. SANS Institute, GIAC Security Essentials, Certificate (GSEC), Practical Assignment, Version 1.4c, SANS Conference, London, pp. 1–17 (2004) 5. Hesham, A.I.M.K.: Cloud computing security, an intrusion detection system for cloud computing systems, Ph.D. thesis, pp. 1–52 (2011)
6. Patil, D., Ganveer, K.S.S., Badge, K.P.S.: To implement intrusion detection system for cloud computing using genetic algorithm. Int. J. Comput. Sci. Inf. Technol. Res. 3(1), 193–198 (2015) 7. Kuldeep, T., Tyagi, S., Richa, A.: Overview - snort intrusion detection system in cloud environment. Int. J. Inf. Comput. Technol. 4, 329–334 (2014). ISSN 0974-2239 8. https://www.hackingarticles.in/dos-attack-penetration-testing-part-2/ 9. https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/hoic-ddos-analysis-and-detection/ 10. Singh, D., Patel, D., Borisaniya, B., Modi, C.: Collaborative IDS framework for cloud. Int. J. Netw. Secur. 18(4), 699–709 (2016) 11. Jadhav, M.L., Gaikwad, P.C.M.: Implementation of intrusion detection system using GA. Int. J. Innov. Res. Electr. Instrumentation Control Eng. 2(7), 1733–1736 (2014) 12. Hounsou, J.T., Nsabimana, T., Degila, J.: Implementation of network intrusion detection system using soft computing algorithms (self organizing feature map and genetic algorithm). J. Inf. Secur. 10(1), 1–24 (2018). https://doi.org/10.4236/jis.2019.101001 13. Gong, R.H., Zulkernine, M., Abolmaesumi, P.: A software implementation of a genetic algorithm based approach to network intrusion detection. In: Proceedings of the Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks (2005)
Development of Tutoring Assistance Framework Using Machine Learning Technology for Teachers

Satoshi Togawa1(&), Akiko Kondo2, and Kazuhide Kanenishi3

1 Education Center for Information Processing, Shikoku University, 123-1 Furukawa Ojin-cho, Tokushima 771-1192, Japan
[email protected]
2 Faculty of Management and Information Science, Shikoku University, 123-1 Furukawa Ojin-cho, Tokushima 771-1192, Japan
[email protected]
3 Center of University Education, Tokushima University, 1-1 Minami-Josanjima, Tokushima 770-8502, Japan
[email protected]
Abstract. This paper proposes a framework for tutoring assistance to tackle increasing student dropout rates. Student dropouts in higher education institutions, such as universities, often result in an increase in tutors’ workload. Currently, educational assistance is focused on supporting the students’ learning, and the main purpose of this assistance is an acceleration of the learning process. Although student assistance is undoubtedly of great importance, offering assistance to teachers who also have a tutoring role is equally important. The purpose of our framework for assistance is to detect students at risk for dropout, after which an alert is sent to the tutors. The alert encourages tutors to take timely action to avoid student dropouts. This paper describes the enhanced framework implementation, its experimental use, and the results.

Keywords: Tutoring assistance · Learning analysis · Abnormal behavior detection · Machine learning
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 677–682, 2020. https://doi.org/10.1007/978-3-030-39512-4_104

1 Introduction Japan has over 1,100 higher education institutions, including universities and junior colleges [1]. The total number of private universities is about four times the number of national and public universities. This means that there are currently too many universities in Japan. Due to the country's declining birth rate, 40% of private university spots have not been filled. Students at these private universities are often not attending their first-choice schools; thus, their motivation to study is generally low. Consequently, students often fail to graduate, which results in high dropout rates. To address this problem, most Japanese universities offer their students a tutoring program. The tutors are generally selected among high-achieving senior and graduate students. However, it is not always easy to secure a sufficient number of tutors. At some universities, teachers may also serve as tutors and mentors for the
678
S. Togawa et al.
students. Since teachers already have many responsibilities to fulfill in their primary role, it becomes difficult for them to also adequately fulfill the role of tutor. One way of alleviating the strain on the universities' tutoring programs would be to detect students' abnormal behavior based on class attendance and learning history, so that appropriate instruction can be provided to the students via their tutors, thus making tutoring more effective and preventing student dropouts in time. In this research, we propose a tutoring assistance framework for teachers. Machine learning is used to evaluate the students' learning history, class attendance status, accumulated academic credit acquisition status, and overall dropout risk. When it is determined that the probability of a student dropping out is increasing, the teacher receives a notification via the framework. The teacher can then provide remedial instruction directly focused on supporting the student falling behind. Consequently, students' needs can be addressed at an early stage, and the possibility of dropping out is reduced. The proposed framework does not support the detected students directly; rather, it provides class management awareness for teachers who also have tutoring duties. We have previously designed a tutoring assistance framework and verified its effectiveness [2]. The problem with that previous research was the poor accuracy in detecting students at risk of dropout. Therefore, in this research, we revise the machine learning algorithm (a neural network for dropout risk detection) and increase the types of input data. We first describe the current state of Japanese higher education, especially at private universities, and the difficulties encountered in keeping students motivated. Second, we review related studies, addressing the benefits of using a student assistance framework to maintain student motivation and to support tutors.
Third, we present the concept and prototype development method for the proposed assistance framework for tutors, which uses data from students' learning histories and machine learning analytics. Finally, we describe the results achieved with the experimental use of the prototype and present the effectiveness of tutoring based on the assistance framework.
2 Problems of Japanese Higher Education and Related Study

The number of higher education institutions in Japan continues to increase. Between 2000 and 2018, 130 universities (mainly private) were inaugurated. Conversely, over the same period, the number of students enrolled in national, public, and private universities and junior colleges remained stable at around three million. The number of high school students, however, decreased sharply by 23% in 18 years: from approximately 4.16 million in 2000 to 3.23 million in 2018. The university entrance ratio, including universities and junior colleges, was 30.6% in 1990 and reached 54.8% in 2018; it exceeded 50% for the first time in 2007 and has remained steady ever since [1]. These facts indicate that it has become easier to enter university, except at highly selective universities, and this trend will accelerate further over the next ten years. Although the number of high school graduates has decreased significantly over the years, the number of universities has increased. Therefore, many private universities are unable to fill the available spots.

Development of Tutoring Assistance Framework

As the T-score required for admission to a private university becomes lower, the dropout rate tends to be higher [3]. For this reason, identifying students at risk of dropping out at an early stage is important, especially at universities with low entry levels. In searching for ways to solve this problem, we found studies on dropout prevention based on analytics of students' learning histories. For instance, one system aims to reduce the workload of mentors supporting students through e-learning [4]. While that research also aims to reduce the mentors' workload, its main purpose is to support learners' learning activity. In another study, notifications remind students to perform self-regulated learning in an e-learning environment [5]. The aim there is to support students during their learning process, not to avert the risk of dropping out. Naturally, these studies are effective in supporting students' learning. However, it is sometimes difficult to reach students with below-average T-scores through such assistance systems: they sometimes ignore the notifications because they do not recognize the risk themselves. For these students, direct support from tutors is very important for detecting dropout risk. Additionally, the tutoring task is wide-ranging, from monitoring the frequency of class attendance to showing interest in students' daily lives and human relations. Most teachers who assume the role of tutor are kept busy by tutoring work and lack time for adequate class preparation and research activities. Consequently, it becomes difficult to ensure the quality of education.
3 Development of Student Tutoring Assistance Framework

3.1 Outline of Student Tutoring Assistance Framework
In this section, we describe the student tutoring assistance framework and its development. Figure 1 shows an overview of the proposed framework. The Learning Management System (LMS) records the learners' login frequency, login time slot, system use time, and course material use frequency. The Grade Processing System stores the learners' examination results, records of attendance, and university enrolment status. In the Tutoring Assistance System (TAS), machine learning analyzes the learning history and the enrolment/withdrawal situation, and the TAS detects potential student dropouts. When the TAS detects a potential dropout, it generates an alert to the teacher. The teacher, fulfilling the role of tutor, receives the alert and can then take action to prevent the dropout. The system does not directly assist a student at risk of dropout; instead, the teacher provides the assistance. Since direct notifications sent by some assistance systems to students can be ignored, it is difficult to prevent student dropouts effectively by that method. Therefore, the approach of this framework is to support teachers with alerts and notifications indicating an existing dropout risk, ensuring that the student who may potentially drop out is assisted by the teacher directly. Thus, students at risk of dropping out are not overlooked, and the burden on tutors is reduced.
(Fig. 1 depicts the Learning Management System, which collects the learning history: login frequency, login time slot, system utilization time, course material utilization frequency, and material utilization frequency by category; the Grade Processing System, which collects examination results, records of attendance, and enrollment status; and the Tutoring Assistance System, which performs dropout risk detection and supports the teacher (tutor) in dropout prevention for the learners.)
Fig. 1. Student tutoring assistance framework
3.2 Implementation of Dropout Risk Detection Function
We implemented a neural network to detect potential student dropout. The aim of this implementation was to judge the effectiveness of the proposed framework. Figure 2 shows the neural network implementation for dropout risk detection. The network is a feedforward neural network for binary classification. We regard dropout detection as a simple binary classification problem, because "dropout" and "non-dropout" are mutually exclusive conditions; a binary classifier is therefore well suited to the problem. The input data for detecting dropout risk are login frequency, course material use frequency, examination scores, and attendance frequency, supplied via variables s1 to s5. The dropout-risk judgment is obtained from the output variables y1 and y2: if y1 is larger than y2, a dropout risk is judged to exist. The binary classifier was built with the Keras [6] and TensorFlow [7] libraries.
(Fig. 2 shows the input layer receiving login frequency (s1), course material utilization frequency (s2), examination scores 1 and 2 (s3, s4), and attendance frequency (s5), followed by hidden layers 1 and 2 and an output layer producing y1 (dropout) and y2 (non-dropout).)
Fig. 2. Neural network implementation for dropout risk detection
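As a concrete illustration of the forward pass and decision rule just described, the following NumPy sketch mirrors the network in Fig. 2. The authors built the actual classifier with Keras and TensorFlow; the hidden-layer widths, the ReLU activations, and the weights below are illustrative placeholders rather than their trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Placeholder parameters: two hidden layers of assumed width 16 (the paper
# does not state the layer widths); in practice these would be trained.
W1, b1 = rng.normal(size=(5, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 2)), np.zeros(2)

def detect_dropout_risk(s):
    """Forward pass for inputs s1..s5; returns (y1, y2) and the risk flag."""
    h1 = np.maximum(0.0, s @ W1 + b1)   # hidden layer 1 (ReLU assumed)
    h2 = np.maximum(0.0, h1 @ W2 + b2)  # hidden layer 2
    y = softmax(h2 @ W3 + b3)           # y[0] = dropout, y[1] = non-dropout
    return y, bool(y[0] > y[1])         # risk is judged to exist if y1 > y2

# s1: login freq., s2: material use freq., s3/s4: exam scores, s5: attendance
y, at_risk = detect_dropout_risk(np.array([2.0, 1.0, 45.0, 50.0, 3.0]))
```

With trained weights, the same y1 > y2 comparison would drive the TAS alert to the teacher.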
4 Experimental Use and Results

In this section, we describe the results of experimental use. The framework was tested to confirm its effectiveness. Table 1 shows the specifications of the computer on which the dropout risk detection test was performed, and Table 2 shows the conditions under which the training and testing datasets were generated.

Table 1. Computer specification for risk detection processing

CPU specification: Intel Xeon E3-1225v6, 3.3 GHz
System memory capacity: 64.0 GB
GPU specification: NVIDIA GeForce 1080 Ti with 11.0 GB memory
Operating system: Ubuntu Server 18.04.3 LTS, 64-bit edition
We generated a training dataset with 10,000 samples. Under ordinary circumstances, the neural network would be trained on actual data, such as students' learning histories in the LMS; ideally such a dataset would be used for training. Here, the values for login frequency, course material use frequency, and attendance frequency were generated randomly in the range 0 to 15, since classes usually meet 15 times per semester. We trained the neural-network-based binary classifier on the generated training dataset. One hundred test samples were then generated randomly within the same ranges and judged by the trained neural network as high-risk or not. Of these test samples, 68 were classified correctly, giving an accuracy of 68%. We consider this result acceptable as a first step; however, since the main purpose of this study is to reduce the tutors' workload, greater accuracy is needed to provide better assistance. To this end, we will optimize this neural network for dropout risk detection.
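The dataset generation just described (following the conditions listed in Table 2) can be sketched as below. The labeling rule at the end is a hypothetical assumption added for illustration, since the paper does not state how "dropout"/"non-dropout" labels were assigned to the synthetic samples.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # training samples, as in the experiment

# Features following the Table 2 generation conditions
login_freq      = rng.integers(0, 16, size=N)              # random, range 0 to 15
material_freq   = rng.integers(0, 16, size=N)              # random, range 0 to 15
exam_score_1    = rng.normal(70, 10, size=N).clip(0, 100)  # normal, AVG 70, SD 10
exam_score_2    = rng.normal(70, 10, size=N).clip(0, 100)
attendance_freq = rng.integers(0, 16, size=N)              # random, range 0 to 15

X = np.column_stack([login_freq, material_freq, exam_score_1,
                     exam_score_2, attendance_freq])
# Hypothetical labeling rule for illustration only: low attendance combined
# with low examination scores is flagged as "dropout" (label 1).
y = ((attendance_freq < 5) & (exam_score_1 + exam_score_2 < 130)).astype(int)
```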
Table 2. Conditions of training and testing dataset generation

Login frequency: random, data range 0 to 15
Course material utilization frequency: random, data range 0 to 15
Examination scores: normal distribution, AVG = 70, SD = 10
Attendance frequency: random, data range 0 to 15
5 Conclusions

In this paper, we proposed a student tutoring assistance framework and described the importance of detecting potential student dropout risk. We proposed a dropout risk detector based on a feedforward neural network, built a prototype implementation of the detector, and described the results of its experimental use. This framework is still at an early stage of development; however, we believe the framework and the experimental results can encourage action to prevent potential student dropout.

Acknowledgments. This work was supported by JSPS KAKENHI Grant Number 18K02922.
References 1. Ministry of Education, Culture, Sports and Technology of Japan: Report of School Basic Survey 2018 Edition (2018) 2. Satoshi, T., Akiko, K., Kazuhide, K.: Designing of student tutoring assistance framework using machine learning technology for teachers. In: Proceedings 11th International Conference on Education and New Learning Technologies, pp. 9502–9506. IATED Academy (2019) 3. The Japan Institute for Labor Policy and Training: Working and Consciousness Research of Student Dropped Out of Universities, JILPT Report No. 138 (2015) 4. Yutaka, S., Takeshi, M., Yoshiko, G., et al.: Development and evaluation of an e-mentors’ workload reduction system; based on planning phase in learners’ self-regulation. J. Jpn Soc. Educ. Technol. 36(1), 9–20 (2012) 5. Takeshi, M., Masahiro, Y., Yoshiko, G., et al.: Development of self-regulator that promotes learners to establish planning habit and its formative evaluation. J. Jpn Soc. Educ. Technol. 40, 137–140 (2017) 6. Keras. https://keras.io 7. TensorFlow. https://www.tensorflow.org
Replenishment System Using Inventory Models with Continuous Review and Quantitative Forecasting to Reduce Stock-Outs in a Commercial Company

Carlos Malca-Ramirez, Luis Nuñez-Salome, Ernesto Altamirano, and José Alvarez-Merino

Ingeniería Industrial, Universidad Peruana de Ciencias Aplicadas (UPC), Lima, Peru {u201416189,u201413246,pcinealt,pciijalv}@upc.edu.pe
Abstract. In recent years, stock-outs have significantly affected commercial enterprises, with deficits equivalent to a large percentage of total sales due to their impact on current and future demand, regardless of the companies' financial success. We carried out a qualitative and quantitative analysis of the current situation to identify the problem. In addition, the root causes were identified with the help of quality tools such as the Pareto chart and the Ishikawa diagram, a questionnaire based on the SCOR model, and management indicators. Accordingly, we propose a replenishment system applying inventory models with continuous review and quantitative forecasts. The proposal consists of five phases: (i) identify the key products; (ii) perform quantitative forecasting; (iii) determine inventory levels; (iv) establish the new flow of information; and (v) establish new policies and procedures. We evaluated the inventory and forecast models using the @Risk program.

Keywords: Inventory management · Continuous review replenishment system · Forecast · Retail · Stock-outs
1 Introduction

Stock-out events occur when a product is not available in the store for the customer to purchase [1]; this refers to a situation where the demanded product is not available to the customer in the expected place or is not available for sale [2]. Approximately 75% of the causes of stock-outs originate in store operations, while between 25% and 30% originate in distribution centers and central operations [2]. In addition, bad ordering practices (late orders or insufficient quantities ordered), poor demand forecasting [3, 4], and the incorrect distribution of information and inventory also contribute to stock-outs in shops [5, 6]. Customers facing a shortage at the store will carry out one of the following actions: (i) change stores to locate the same product, (ii) change brands for a similar product of the same utility, (iii) postpone the purchase to a date on which the product is available, or (iv) abandon the purchase completely, which causes a loss of sales [1].

© Springer Nature Switzerland AG 2020. T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 683–689, 2020. https://doi.org/10.1007/978-3-030-39512-4_105

Stores and manufacturers lose more than 4 billion euros every year, and total sales losses caused by lack of stock amount to 3.9% worldwide, 3.8% in the USA, and 3.7% in Europe; this percentage is even greater in South America [7]. Stock-outs result in a reduction in profits of up to 10%, and the decline in consumer loyalty can have a long-term impact on market share [8]. The world average occurrence of stock-outs is estimated at 8.3%, slightly higher than in the United States (7.9%) and slightly lower than the European average (8.6%). The situation is better in the Asia Pacific and Australia/New Zealand regions, where the out-of-stock averages are 5% and 4.4%. On the other hand, the highest stock-out rates are recorded in South America: the average is 5.15% in Argentina, 14.3% in Chile, and 8% in Brazil [7]. It has also been demonstrated that the shortage problem is a great opportunity for stores and manufacturers to increase their revenue through increased sales [2]. Previous studies concluded that an automatic inventory replenishment system can reduce the average rate of shelf shortages regardless of product characteristics. Similarly, improving order-anticipation processes and replenishment procedures for items in the important category results in fewer stock-outs at the distribution-center and store levels [3]. Likewise, automated ordering systems, in which the optimal order quantity and reorder point are set for each product in the store, generate an increase in operational efficiency in terms of total cost and service levels.
A higher level of product availability in stores can also be achieved through cooperative planning and anticipation between the stores and their suppliers [5]. The literature shows that stock-outs in shops consist in the depletion of stock on the shelves; when they occur, customers choose to replace the product, generating lost profits. To solve the shortage problem, the literature uses techniques aimed at aligning the supply chain, with a predominance of inventory management and forecasting models, in order to ensure the optimal service level.
2 Problem

The company is dedicated to the wholesale of products used for all types of manual crafts. It experienced a decrease in sales in the last period, as described in Fig. 1. To confirm the decrease in revenue, revenues through March 2018 were compared with those of 2017; this comparison showed a sales decrease of 15%. A qualitative and quantitative diagnosis identified four causes; however, only one of them appeared frequently, with a 98% share of the incidents, as is evident over time (Fig. 2).
Fig. 1. Sales report 2017–2018
Fig. 2. Stock outs value
In summary, stock-outs in stores have an impact on the company's annual billing of 19%, equivalent to $73,082.53. Once the problem was identified, an audit based on the SCOR model and the Ishikawa diagram was carried out to detect the main causes. In addition, the indicators of the company's current processes were analyzed:

• Quality of generated orders = 81%
• Supplier compliance level = 95%
• % of late deliveries to store = 34%
• % of insufficient orders to warehouse = 20%
• % forecast error = 40.79%
In this way, the general and specific causes obtained through Pareto analysis are shown below:

• Unreliable demand prediction
• Insufficient inventory availability in the central warehouse
• Late dispatches to stores
• Insufficient orders from store to store
3 Proposed Methodology

After identifying and analyzing the current situation, information on the monthly demand of the items is collected and analyzed in order to identify the best forecast model and to obtain the optimal inventory levels (Q, r) for the central warehouse and the stores. The methodology consists of the following phases:

3.1 Key Products
For the identification of key products, the multi-criteria ABC tool was used, with the stock-out value in soles, the margin per product family, and the number of SKUs per product family as variables. This yielded the families on which the project focuses: dies, cards, embossed folders, applications, tools, block cards, and die cutters.
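A minimal sketch of the multi-criteria ABC scoring described above follows. The item data, criterion weights, and the 20% class-A cutoff are illustrative assumptions, not the company's figures.

```python
# Hypothetical family data: stock-out loss (soles), margin, and number of SKUs
items = {
    "dies":  {"stockout_loss": 9500, "margin": 0.35, "n_skus": 120},
    "cards": {"stockout_loss": 7200, "margin": 0.28, "n_skus": 300},
    "tools": {"stockout_loss": 1800, "margin": 0.40, "n_skus": 45},
}
weights = {"stockout_loss": 0.5, "margin": 0.3, "n_skus": 0.2}  # assumed weights

def abc_multicriteria(items, weights, a_share=0.2):
    """Score each family as a weighted sum of max-normalized criteria,
    then assign the top share of families to class A."""
    scores = {name: 0.0 for name in items}
    for crit, w in weights.items():
        top = max(it[crit] for it in items.values())
        for name, it in items.items():
            scores[name] += w * it[crit] / top
    ranked = sorted(scores, key=scores.get, reverse=True)
    cutoff = max(1, round(a_share * len(ranked)))
    return {name: ("A" if i < cutoff else "B/C") for i, name in enumerate(ranked)}

classes = abc_multicriteria(items, weights)
```

The class-A families would then be the ones the replenishment project concentrates on.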
3.2 Forecast Analysis
To determine the most appropriate forecast for our demand, candidate models were compared against several evaluation criteria (Table 1).

Table 1. Comparative table of forecast models

Criteria / Model:                 Time series | Moving averages | Exponential adjustment | Linear regression | Regression analysis | Neural networks
Degree of forecast:               Adequate | Adequate | Adequate | Suitable | High | High
Time horizon:                     Short/medium | Short/medium | Short/medium | Long | Long | Long
Amount of initial data required:  Low | Very low | Medium | High | High | Low
Forecasting costs:                Low | Moderate | Low | Low | Moderate | Low
Applicability and implementation: Low | Medium | Medium | Medium | High | Medium
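As a minimal illustration of the kind of quantitative forecasting compared in Table 1, the sketch below applies exponential adjustment (simple exponential smoothing) and measures the error as a mean absolute percentage error, one common way to compute a "% forecast error" indicator; the smoothing constant and demand series are assumed values, not figures from the study.

```python
def exp_smoothing_forecasts(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing (alpha is an assumed value)."""
    forecasts = [series[0]]  # initialize the first forecast with the first actual
    for actual in series[:-1]:
        forecasts.append(alpha * actual + (1 - alpha) * forecasts[-1])
    return forecasts

def mape(actuals, forecasts):
    """Mean absolute percentage error over the forecast horizon."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly demand for one product family
demand = [100, 120, 110, 130, 125, 140]
fc = exp_smoothing_forecasts(demand)
error = mape(demand, fc)
```

In the same way, each candidate model's error on historical demand can be compared to pick the most appropriate forecast per family.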
3.3 Inventory Levels

Based on the forecast results, the optimal inventory levels and the corresponding safety stocks for both the stores and the central warehouse are determined for the selected items through the corresponding formulas.
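The paper refers to these formulas without listing them; a common continuous-review (Q, r) formulation, with an EOQ order quantity and a reorder point covering expected lead-time demand plus safety stock, can be sketched as follows. All parameter values are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def continuous_review_policy(annual_demand, order_cost, holding_cost,
                             daily_demand_mean, daily_demand_sd,
                             lead_time_days, service_level=0.95):
    """(Q, r) policy: EOQ order quantity, and a reorder point equal to
    expected lead-time demand plus safety stock for the service level."""
    q = sqrt(2 * annual_demand * order_cost / holding_cost)   # EOQ
    z = NormalDist().inv_cdf(service_level)                   # safety factor
    safety_stock = z * daily_demand_sd * sqrt(lead_time_days)
    r = daily_demand_mean * lead_time_days + safety_stock
    return round(q), round(r)

# Hypothetical parameters for one SKU
q, r = continuous_review_policy(annual_demand=1000, order_cost=50, holding_cost=2,
                                daily_demand_mean=4, daily_demand_sd=2,
                                lead_time_days=9)
```

Under a continuous-review policy, an order of size Q is placed whenever the inventory position falls to r.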
3.4 Information Flow
Then, the information flow of the replenishment system to be executed is established (Fig. 3).
(Fig. 3 depicts the flow of input and output documents among the supplier, the intermediary warehouse, and the store, covering invoices, purchase orders, purchase order requests, reference guides, requirements, current demand, historical sales, and goods issues.)
Fig. 3. Information flows
3.5 Policies and Procedures
With the improvement and the new inventory policies, procedures were generated and modified, as shown in the following table (Table 2):

Table 2. New procedures

Code   Procedure                 Monitoring
CMDA   Purchase of merchandise   After the purchase request
DDLM   Merchandise distribution  After store requirements
PRLD   Make forecasts            After the end of the month
REDT   Requirements              After store closing
SODC   Purchase requests         After dispatching merchandise

3.6 Considerations
The simulation is carried out by entering real data corresponding to the 2017–2018 period. The probability of the margin of error was extracted from the historical data and influences inventory costs, sales revenue, and stock-outs.

3.7 Variables
The model's dependent (output) variables are as follows (Table 3):
4 Validation

See Table 4.
Table 3. Dependent variables

Total inventory cost: maintenance, storage, and ordering costs
Sales revenue: the units sold annually multiplied by their respective unit prices
Sales cost: the units purchased annually multiplied by their respective costs and expenses
Operating utility: contribution margin obtained as the difference between sales revenue and cost of sales
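The @Risk evaluation can be approximated in plain Python with a Monte Carlo simulation over the output variables of Table 3. Every distribution and parameter below is an illustrative assumption, not the company's actual data.

```python
import random

random.seed(1)
TRIALS = 10_000  # simulation trials

# Assumed parameters (soles per unit); not the company's actual figures.
UNIT_PRICE, UNIT_COST, HOLDING_RATE = 12.0, 7.0, 0.15

def simulate_operating_utility():
    demand = max(0.0, random.gauss(5000, 800))          # annual demand (assumed)
    error = abs(random.gauss(0, 0.0555))                # ~5.55% forecast error
    served = demand * (1 - error)                       # demand actually fulfilled
    revenue = served * UNIT_PRICE                       # sales revenue (Table 3)
    cost = served * UNIT_COST                           # sales cost (Table 3)
    inventory_cost = HOLDING_RATE * demand * UNIT_COST  # total inventory cost
    return revenue - cost - inventory_cost              # operating utility

results = [simulate_operating_utility() for _ in range(TRIALS)]
mean_utility = sum(results) / TRIALS
```

The distribution of `results` plays the role of the @Risk output histogram for operating utility.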
Table 4. Results

% of late deliveries to store: 3.87%
% of insufficient orders to warehouse: 0.63%
% forecast error: 5.55%
Stock-out cost: $1,930.82
5 Analysis of Results

According to the simulation, the specific objectives were achieved: the proposed improvement would reduce late deliveries to stores from 34% to 3.87%, insufficient orders to the warehouse from 20% to 0.63%, the forecast error from 40.79% to 5.55%, and the stock-out cost from $73,082.53 to $1,930.82.
6 Conclusions

The case study has shown that stock-outs can be reduced by means of the techniques used. Through the proposed system, it has been possible to reduce the incidence of stock-outs in the commercial sector in Peru.
References 1. Govind, A., Luke, R., Noleen, P.: Investigating stock-outs in Johannesburg’s warehouse retail liquor sector. J. Transport Supply Chain Manag. 11, 1–11 (2017) 2. Avlijas, G., Milicevic, N., Golijanin, D.: Influence of store characteristics on product availability in retail business. E+M Ekonomie Manag. 21, 195–206 (2018) 3. Goran, A., Simicevic, A., Avlijas, R., Prodanovic, M.: Measuring the impact of stock-keeping unit attributes on retail stock-out performance. Oper. Manag. Res. 8, 131–141 (2015)
4. Bobadilla, R., Mendez, A., Viacava, G., Raymundo, C., Moguerza, J.M.: Service model based on information technology outsourcing for the reduction of unfulfilled orders in an SME of the peruvian IT sector. In: Advances in Intelligent Systems and Computing, vol. 965, pp. 311–321 (2019) 5. Milicevic, N., Grubor, A.: The effect of backroom size on retail product availability – operational and technological solutions. Amfiteatru Econ. 17, 661–675 (2015) 6. Carazas, L., Barrios, M., Nuñez, V., Raymundo, C., Dominguez, F.: Management model logistic for the use of planning and inventory tools in a selling company of the automotive sector in Peru. In: Advances in Intelligent Systems and Computing, vol. 971, pp. 299–309 (2019) 7. Aleksandar, G., Milicevic, N., Djokic, N.: The impact of store satisfaction on consumer responses in out-of-stock situations. Revista Brasileira de Gestão de Negócios 19, 520–537 (2017)
Applying SLP in a Lean Manufacturing Model to Improve Productivity of Furniture SME

Zhelenn Farfan-Quintanilla(1), Manuel Caira-Jimenez(1), Fernando Sotelo-Raffo(1), Carlos Raymundo-Ibañez(1), and Moises Perez(2)

(1) Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru {u201112739,u201313833,pcinjsot,carlos.raymundo}@upc.edu.pe
(2) Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain [email protected]
Abstract. Currently, a company's competitiveness in the market depends on its productivity in efficiently using its available resources. Low productivity is a recurring problem in the furniture sector due to low production process efficiency, resulting in the failure to deliver orders. For this reason, businesses seek manufacturing systems that enable them to control their processes efficiently, eliminating any waste or activity that generates downtime, in order to achieve higher customer satisfaction. This research explores a case at a small Peruvian business in the furniture sector, which faces a large number of unfulfilled orders on a monthly basis that has resulted in lost sales of 8%. The results showed a positive impact: the new improved methods in the cutting and edging processes, as well as a new plant layout, enabled operations and activities to be performed in less time, increasing productivity from 42.5% to 64.20%.

Keywords: Lean manufacturing · SLP · Productivity · Furniture SME · 5S
1 Introduction

Presently, the Peruvian manufacturing sector has been growing at between 3.2% and 3.5%. Furthermore, micro, small, and medium enterprises play an important role in the country's social development and are a key sector for national economic growth, as they are a source of employment and income and drive productive activities in local economies. However, a high percentage of informality persists, as 53% of MSEs are still not registered with the National Superintendence of Customs and Tax Administration. According to the World Trade Organization, "low levels of MSE productivity are directly related to the lack of capacity by these companies to capitalize on their economies of scale, a lack of specialized labor, low access to credit and investment, and the informality of their contracts with clients and suppliers".
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 690–696, 2020. https://doi.org/10.1007/978-3-030-39512-4_106
One of the reasons behind low productivity problems is inefficiency in productive processes. The contribution presented in this study focuses on the production management model, which consists of planning, executing, checking, acting, and documenting the procedures performed by companies in the furniture sector. This is followed by providing training and evaluating operators’ performances based on a lean manufacturing approach and systematic layout planning (SLP) technique.
2 State of the Art

2.1 Lean Manufacturing to Improve Productivity
Lean manufacturing techniques are a set of tools and methods that eliminate waste in production operations. As cited by Matt and Rauch [1], lean manufacturing is a multidimensional approach that covers a variety of management practices and is aimed at reducing waste and improving operational efficiency. Many authors state that lean manufacturing is intended to increase productivity, improve product quality and cycle times, reduce inventory and delivery times, and eliminate waste [1–7].

2.2 5S and Visual Management to Improve Productivity of MSEs
5S is an integral step in lean manufacturing, as it allows both internal and external customers to be supplied with the "right product", at the "right time", and in the "right quantity". The use of 5S improves productivity and product or service quality. 5S is the first step of lean manufacturing, addressing waste associated with the workplace and improving workflows between processes. Another vital consideration is visual management, which establishes visual controls that allow operators to distinguish between correct and incorrect behavior. Lastly, [8, 9] agree that the implementation of color cards during waste elimination or sorting is a good visual practice that generates positive results at low cost.
3 Contribution

This production management model considers and focuses on the following four dimensions: human capital management, process management, production time management, and production control.

3.1 Proposed Model
In view of the above, an overview of the proposed model is shown in Fig. 1.
Fig. 1. Lean manufacturing model
3.2 Proposed Method
Figure 2 details the overview of the process that the proposed model is based upon.
Fig. 2. Process view
4 Validation

4.1 Case Study
The case study was conducted in a furniture SME located in the Chorrillos district of Lima, Peru. Owing to a lack of capacity in its production process, the company has failed to meet orders already placed, resulting in lost sales. Currently, the company's efficiency stands at 51%, while the sector's average efficiency is 59%, meaning that the company is 8% below its competition, in addition to using less than 50% of its capacity.

4.2 Diagnostic
The study was narrowed down to the service that generates the highest profit margin for the company, namely furniture manufacturing. An ABC analysis was then conducted, which revealed that desks and closets represent 80% of its sales, thereby generating the highest profit margin for the company. The problem identified is the high number of unfulfilled orders on a monthly basis, averaging 23% of orders. The reasons behind this were then analyzed, and it was determined that the main one was that actual capacity was below demand. The main causes identified were poor methods (63%) and re-manufacturing (37%).

4.3 Solution Proposal
Improvement of the production system began with improving processes under the ECRS principles and the 5W1H technique. Afterwards, the 5S technique was implemented and, lastly, the SLP technique.

4.3.1 5S Implementation
Through the identification of the activities that compose the processes, this study concluded that operators spend a long time searching for work materials, inputs, and tools, which creates delays and results in longer operation times.

4.3.2 SLP Implementation
The purpose here is to evaluate two of the technique's methods to choose the best alternative layout offering the greatest cost and time savings. To this end, an analysis of the current layout was performed, taking into consideration workspaces, storage areas, materials, machinery, and movement times for people and materials. A simple graphical method and a matrix table method were proposed to determine the layout alternatives to present.
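The matrix-table comparison of layout alternatives can be illustrated with a load-distance calculation: each candidate layout is scored as the sum, over department pairs, of inter-department flow times travel distance, and the layout with the lower score is preferred. The flows and distances below are hypothetical, not the plant's measured values.

```python
# Monthly flows between department pairs (trips/month, assumed)
flows = {("cutting", "edging"): 54, ("edging", "assembly"): 54,
         ("cutting", "assembly"): 10}

def load_distance(distances):
    """Total movement effort = sum of flow x distance over department pairs."""
    return sum(f * distances[pair] for pair, f in flows.items())

# Distances in metres for the current and a proposed layout (assumed)
current  = {("cutting", "edging"): 12.0, ("edging", "assembly"): 15.0,
            ("cutting", "assembly"): 20.0}
proposed = {("cutting", "edging"): 6.0, ("edging", "assembly"): 8.0,
            ("cutting", "assembly"): 18.0}

# The alternative with the lower load-distance score is preferred.
better = "proposed" if load_distance(proposed) < load_distance(current) else "current"
```

Scoring every SLP alternative this way makes the choice of layout a simple numeric comparison.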
4.4 Results
Efficiency Improvement: this indicator was chosen because of its importance in raising productivity toward the furniture sector's average efficiency of 59.3%. Initial efficiency: 42.50%; final efficiency: 64.20%.

Reduction of Production Times: the results were positive, as new average times were obtained, achieving reductions of 42%, 29%, and 13% in the company's cutting, edging, and assembly processes, respectively (Fig. 3).
Fig. 3. Reduction of production times
4.5 Results Analysis
The project implementation results are outlined in the following table:

Table 1. Implementation results

                            Initial                      Current
Furniture production        33 furniture pieces/month    54 furniture pieces/month
Desk production time        14.88 h-h                    10.80 h-h
Closet production time      19.87 h-h                    16.08 h-h
Average unfulfilled orders  15 orders/month              0 orders/month
As shown in Table 1, production times for cutting, edging, and assembly were reduced by 42%, 29%, and 13%, respectively. The total time was reduced by 4.08 h, thus increasing manufacturing productivity from 42.50% to 64.20%, and surpassing the sector’s productivity of 59.30%.
5 Conclusions

The analysis conducted at the company studied found a large number of unproductive activities within processes, representing more than 60% of production time. With the help of the improved methods proposed for the receipt and storage of raw materials, as well as for the cutting and edging processes using the ECRS approach, it was possible to reduce the time of the cutting process by 20%, the edging process by 40%, and the assembly process by 50%. The initial layout analysis identified large movement distances for materials and staff, and crossings resulting from the plant's poor layout. After implementing the SLP technique, movement time was reduced by 20%. Workstations were rearranged according to the process, in particular reorganizing the space around the finishing areas. After implementing lean manufacturing in a pilot at a furniture SME, manufacturing productivity increased from 42.50% to 64.20%, surpassing the sector's productivity of 59.30%. Likewise, production capacity increased from 33 pieces of furniture per month to 74 pieces, exceeding current demand; covering the expected demand for furniture yields an additional average monthly profit of S/. 23,500. Moreover, an initial cash flow analysis, conducted from November 2018 to April 2019, determined that in the last month it was feasible to invest in the improvement project, given that it maintains a positive cash flow. Based on this analysis, the project was budgeted and its benefit/cost ratio calculated, obtaining a score of 1.93, meaning that the financial proposal is viable and has a return on investment of nearly two months.
References 1. Matt, D.T., Rauch, E.: Implementation of lean production in small sized enterprises. Procedia CIRP 12, 420–425 (2013) 2. Roriz, C., Nunes, E., Sousa, S.: Application of lean production principles and tools for quality improvement of production processes in a carton company. Procedia Manufact. 11, 1069–1076 (2017) 3. Talukder, M.H., Afzal, M.A., Rahim, M.A., Khan, M.R.: Waste reduction and productivity improvement through lean tools. Int. J. Sci. Eng. Res. 4(11), 1844–1855 (2013) 4. Zhou, B.: Lean principles, practices, and impacts: a study on small and medium-sized enterprises (SMEs). Ann. Oper. Res. 241, 457–474 (2016) 5. Bellido, Y., La Rosa, A., Torres, C., Quispe, G., Raymundo, C.: Waste optimization model based on Lean Manufacturing to increase productivity in micro- and small-medium enterprises of the textile sector. In: CICIC 2018 - Octava Conferencia Iberoamericana de Complejidad, Informatica y Cibernetica, Memorias, vol. 1, pp. 148–153 (2018) 6. Bellido, Y., Rosa, A.L., Torres, C., Quispe, G., Raymundo, C.: Waste optimization model based on Lean Manufacturing to increase productivity in micro- and small-medium enterprises of the textile sector. In: CICIC 2018 - Octava Conferencia Iberoamericana de Complejidad, Informatica y Cibernetica, Memorias, vol. 1, pp. 148–153 (2018)
696
Z. Farfan-Quintanilla et al.
7. Paredes, M., Villa, J., Quispe, G., Raymundo, C., Moguerza, J.M.: Lean Manufacturing model for the improvement of manufacturing processes in small and medium-sized companies in the furniture industrial sector. In: CISCI 2018 - Decima Septima Conferencia Iberoamericana en Sistemas, Cibernetica e Informatica, Decimo Quinto Simposium Iberoamericano en Educacion, Cibernetica e Informatica, SIECI 2018 – Memorias, vol. 1, pp. 46–51 (2018) 8. Singh, A., Ahuja, I.S.: Evaluating the impact of 5S methodology on manufacturing performance. Int. J. Bus. Continuity Risk Manag. 5(4), 272–305 (2014) 9. Patil, S., Sapkal, A., Sutar, M.: Execute 5S methodology in small scale industry: a case study. Int. J. Res. Advent Technol. 5, 47–51 (2016)
Collaborative Model Based on ARIMA Forecasting for Reducing Inventory Costs at Footwear SMEs
Alejandra Angulo-Baca1, Michael Bernal-Bazalar1, Juan Sotelo-Raffo1, Carlos Raymundo-Ibañez1(&), and Moises Perez2
1 Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru
{u201414534,u201414724,pcinjsot, carlos.raymundo}@upc.edu.pe
2 Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain
[email protected]
Abstract. This study addresses inadequate inventory management issues arising from poor demand management and insufficient communication of inventory movement records. Focusing on a footwear retailer, this study determined that the main problem is rooted in improper management of finished products, caused by excessive production without optimum production quotas, coupled with inadequate optimization of the space used for inventory management. Within this context, the project proposes using the collaborative planning, forecasting, and replenishment (CPFR) methodology supported by ARIMA forecasting, a strategy that centers on maintaining adequate logistics development controls.
Keywords: Footwear · CPFR · ARIMA forecast · Inventory management · Change management
1 Introduction
Throughout the years, small- and medium-sized enterprises (SMEs) have continuously faced new challenges, one of which is the implementation of a logistics management system, which requires optimization of the supply chain and its components and provides SMEs with a competitive advantage over their competitors [1, 2]. When inventories are inaccurately recorded throughout the supply chain, the company’s operating performance (e.g., orders, inventories, customer service, among others) becomes unstable and starts generating losses. In fact, 70% of small- and mid-sized footwear retailers experience inventory management problems [3, 4]. This proposal is relevant because proper inventory management leads to optimum warehouse spaces, which, in turn, improves customer response [5]. This study focuses on the development of a forecast-based operational plan, which may provide companies with a solid foundation for their operations, thus improving their inventory and demand management while reducing costs and increasing profitability.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 697–703, 2020. https://doi.org/10.1007/978-3-030-39512-4_107
698
A. Angulo-Baca et al.
2 State of the Art
2.1 Logistics Planning Model
An efficient logistics management and planning system can meet the customer’s logistics requirements. In fact, as markets open and supply chains globalize, logistics plays an even bigger strategic role in the structural changes companies undertake [6]. Efficient logistics management defines the competitive advantages of a company, since cost reduction opportunities are generated and customer service levels improve through coordination between the parties involved in the logistics flow [7, 8].
2.2 Technique Review
2.2.1 CPFR – Collaborative Planning, Forecasting and Replenishment
Optimum inventory management creates flexibility in stock-related operations, while helping companies reach their competitive priorities more efficiently [9]. The implementation of this technique improves both collaboration and communication between the different parties in the supply chain, rendering benefits such as increased demand forecast accuracy, improved customer service levels, and reduced inventory levels [10]. Through CPFR implementation, inventory levels, costs, and lead times are reduced, while customer service levels and sales volumes are optimized [11].
2.2.2 ARIMA Forecasts
Most companies, regardless of the sector in which they operate, face a common problem: demand uncertainty. Aguirre explains that ARIMA forecasts use past time events to make demand predictions and that their main advantages are fast performance and low costs. Demand forecasts and their accuracy exert a vast influence on the supply chain and, specifically, on inventories, as the costs they represent increase substantially if their levels are not adequate [12, 13]. In fact, when a time series is modeled with ARIMA, a more accurate forecast is generated compared with other models, such as Holt–Winters and double exponential smoothing.
3 Contribution
Panahifar argues that CPFR implementation improves both collaboration and inventory levels and increases customer service levels [10]. However, De Arce and Mahía mention that ARIMA models attempt to describe the evolution of a variable yt as a stochastic process based on the past values of said variable, which allows ARIMA forecasts to be successfully implemented within CPFR, because the collaborative model is built on making changes and decisions based on historical data.
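The idea of describing the evolution of yt from its own past values can be sketched with a minimal, hand-rolled AR(1) fit in Python. This is an illustration only, not the paper’s full ARIMA procedure (which also involves differencing and moving-average terms and would normally be estimated with a statistics library); the demand figures are hypothetical:

```python
def fit_ar1(series):
    """Estimate phi in y_t = phi * y_{t-1} + e_t by least squares on a mean-centered series."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def forecast_ar1(series, phi, steps):
    """Iterate the fitted recurrence forward to produce point forecasts."""
    forecasts, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        forecasts.append(last)
    return forecasts

demand = [102.0, 98.0, 105.0, 101.0, 99.0, 103.0, 100.0]  # hypothetical monthly demand
mean = sum(demand) / len(demand)
centered = [y - mean for y in demand]          # crude stationarity step
phi = fit_ar1(centered)
forecast = [mean + f for f in forecast_ar1(centered, phi, 3)]
print(phi, forecast)
```

In practice, the model orders are chosen from autocorrelation and partial autocorrelation analysis, as is done in the validation section below.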
Collaborative Model Based on ARIMA Forecasting for Reducing Inventory Costs
699
Fig. 1. Collaborative logistics planning model structure
As denoted in Fig. 1, this proposal seeks to implement a logistics planning management model rooted in ARIMA forecasts to optimize planning and inventory logistics. Figure 2 displays the four phases required for successful model implementation, which is based on a cycle of continuous improvement.
Fig. 2. Phases of the collaborative logistics planning model
3.1 Operating Plan Development Phase
The first two steps proposed by CPFR serve as the foundation for preparing the business plan, which is executed by the project partners or stakeholders. In this case, the business plan refers to inventories.
3.1.1 Partnership Agreements (Start–End)
In this first step, guidelines are defined for the collaborative relationship set forth between both parties. Once the agreement has been executed, the following must also be defined: the roles corresponding to each partner, the model-related processes, and the performance indicators. For these purposes, some indicators need to be assessed to obtain quantitative results, such as ROI, IRA, and inventory turnover.
3.2 Forecasting Phase
In the second phase, forecasts are used to optimize the proposal by implementing the ARIMA model within CPFR. In the first stage, the appropriate values for the ARIMA model must be determined. Moreover, the ARIMA model requires all series to be stationary (series without trend or seasonality).
3.2.1 Estimates
In this stage, the autoregressive (AR) and moving average (MA) coefficients included in the model are estimated. The estimation uses an algorithm that minimizes the sum of the squared residuals, starting from initial values of the model parameters.
3.2.2 Forecasts
At this stage, demand forecasts are made. However, to secure accurate results, we must first determine whether the original variable was differenced. The following key time series indicators were used to assess forecast accuracy:
• MAPE (Mean Absolute Percentage Error): measures the average percentage deviation between the forecasts and the actual time series values.

MAPE = (100 / n) · Σ_t |(y_t − ŷ_t) / y_t|,  y_t ≠ 0    (1)

• MAD (Mean Absolute Deviation): measures the average absolute deviation between the forecasts and the actual values, expressed in the units of the series.

MAD = (1 / n) · Σ_{t=1}^{n} |y_t − ŷ_t|    (2)
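Equations (1) and (2) translate directly into code. A minimal Python sketch, using hypothetical actual and forecast series:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, Eq. (1); actual values must be non-zero."""
    n = len(actual)
    return sum(abs((y - f) / y) for y, f in zip(actual, forecast)) * 100 / n

def mad(actual, forecast):
    """Mean Absolute Deviation, Eq. (2)."""
    n = len(actual)
    return sum(abs(y - f) for y, f in zip(actual, forecast)) / n

actual = [120, 135, 128, 140]    # hypothetical observed demand
forecast = [118, 130, 131, 136]  # hypothetical ARIMA forecasts
print(mape(actual, forecast), mad(actual, forecast))
```

MAPE is unit-free, which makes it convenient for comparing forecasts across products, while MAD stays in the units of the series.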
3.3 Warehouse Development and Organization Phase
First, whenever a new product is admitted into the warehouse, it must be sorted and labeled. This way, all products are tracked through a serial number and a product record. However, not all products are located or managed in the same way; handling changes completely according to each material in stock and its destination.
3.4 Replenishment Phase
Replenishment forms are customized for each company, based on product quantities and characteristics. The replenishment process is expected to supply the right product, in the right place, at the right time, in the right quantities, and in the most efficient way possible. If accurate information is provided at the right time, the company may reap benefits, such as reduced inventory costs. To assess how optimal the warehouse layout is, it must be compared before and after the model implementation, which must evidence improvements in terms of inventory and warehouse layout.
4 Validation
4.1 Methodology Application
To establish a suitable operational plan, main and specific objectives were defined. Then, the strategies for its successful implementation were set forth, evaluating opportunities for improvement. Next, for forecasting purposes and for assessing the error percentage from the ARIMA model, forecasts were made up to 2018. This way, model results may be compared against actual data. Since the most common models are usually AR, before forecasting, autocorrelation and partial autocorrelation assessments were performed to examine the 2016 and 2017 data, determining that they are, in fact, AR. As per Table 1, deviations from the ARIMA forecast are lower compared with the other types of forecasts. Therefore, ARIMA represents the best form of demand forecasting.

Table 1. % Deviation, 2018

Month     % Dev Winters   % Dev Double Exp. Smoothing   % Dev ARIMA
January       5.85%            1.92%                        2.76%
February     −3.99%           −1.25%                        1.27%
March         8.95%           10.16%                        3.10%
April         4.37%            5.55%                        2.87%
May          −2.38%           −3.35%                        1.69%
June          8.23%           13.09%                        6.28%
Total         3.50%            4.39%                        2.95%
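The selection of ARIMA from Table 1 can be reproduced by averaging each method’s monthly deviations. The Total row of Table 1 appears to average the signed deviations; the sketch below uses absolute deviations, the more common selection criterion, and reaches the same conclusion:

```python
# Monthly % deviations from Table 1 (January-June 2018)
deviations = {
    "Winters": [5.85, -3.99, 8.95, 4.37, -2.38, 8.23],
    "Double Exp. Smoothing": [1.92, -1.25, 10.16, 5.55, -3.35, 13.09],
    "ARIMA": [2.76, 1.27, 3.10, 2.87, 1.69, 6.28],
}

def mean_abs(devs):
    """Mean absolute percentage deviation of a forecasting method."""
    return sum(abs(d) for d in devs) / len(devs)

# The method with the smallest mean absolute deviation is preferred
best = min(deviations, key=lambda m: mean_abs(deviations[m]))
print(best)
```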
In the first place, all surplus products were sorted and sold through offers and discounts, as they represent non-profitable idle inventory. Subsequently, the installation of metal shelves was essential to organize products according to their classification so that employees may easily locate them (Table 2).

Table 2. Optimum warehouse layout design (photographs of the warehouse before and after the implementation)
4.2 Metrics
After having implemented and validated the methodology in the company under study, improvement results are measured through the indicators defined in Table 3 below, which contains benchmarks proposed in previous studies. The indicators have improved significantly against pre-implementation values.

Table 3. Metrics

Indicator                                 Before    Expected   After
Inventory accuracy based on demand        87.53%    98.5%      97.05%
Finished product inventory at warehouse   37%       20%        28%
Profit margin per product unit            23.40%    40–45%     41%
5 Conclusions
The textile-and-footwear sector represents 24.2% of all SMEs nationwide and is deemed one of the most prominent sectors in the country. Therefore, the companies operating within this sector must streamline their processes to remain profitable and sustainable. This project focused on optimizing finished goods inventory management at a small leather footwear company through CPFR techniques and ARIMA forecasting. After the proposed model had been deployed, this company reported a 17% decrease in the finished product inventory stocked at the warehouse.
References 1. Ganeshan, H., Suresh, P.: An empirical analysis on supply chain problems, strategy, and performance with reference to SMEs. Prabandhan Indian J. Manag. 10(11), 19 (2017) 2. Fu, H.P.: Comparing the factors that influence the adoption of CPFR by retailers and suppliers. Int. J. Logist. Manag. 27(3), 931–946 (2016) 3. Moser, P., Isaksson, O., Seifert, R.W.: Inventory dynamics in process industries: AC SC. Int. J. Prod. Econ. 191, 253–266 (2017) 4. Cannella, S., Framinan, J.M., Bruccoleri, M., Barbosa-Póvoa, A.P., Relvas, S.: The effect of inventory record inaccuracy in information exchange supply chains. Eur. J. Oper. Res. 243(1), 120–129 (2015) 5. Duan, L., Ventura, J.A.: A dynamic supplier selection and inventory management model for a serial supply chain with a novel supplier price break scheme and flexible time periods. Eur. J. Oper. Res. 272(3), 979–998 (2019) 6. Zare, R., Chavez, P., Raymundo, C., Rojas, J.: Collaborative culture management model to improve the performance in the inventory management of a supply chain. In: Congreso Internacional de Innovacion y Tendencias en Ingenieria, CONIITI 2018 – Proceedings, 8587073 (2018) 7. Riquero, I., Hilario, C., Chavez, P., Raymundo, C.: Improvement proposal for the logistics process of importing SMEs in Peru through lean, inventories, and change management. Smart Innovation, Systems and Technologies, vol. 140, pp. 495–501 (2018)
8. Saric, A.: Improvements in the logistics system in companies in the last 7 years (2017) 9. Reyes, A., Villanueva, N.: Logistics Management improvement proposal to reduce costs in the Janet EIRL construction company (2018) 10. Panahifar, F., Byrne, P.J., Heavey, C.: A hybrid approach to the study of CPFR implementation enablers. Prod. Plan. Control 26(13), 1090–1109 (2015) 11. Abd, A., Barau, H.: Enhancing supply chain performance through collaborative planning, forecasting, and replenishment. Bus. Process Manag. J. 25(4), 625–646 (2018) 12. Petropoulos, F., Wang, X., Disney, S.M.: The inventory performance of forecasting methods: evidence from the M3 competition data. Int. J. Forecast. 35(1), 251–265 (2019) 13. Prak, D., Teunter, R.: A general method for addressing forecasting uncertainty in inventory models. Int. J. Forecast. 35(1), 224–238 (2019)
A Framework of Quality Control Matrix in Paprika Chain Value: An Empirical Investigation in Peru
Diana Garcia-Montero, Luz Roman-Ramirez, Fernando Sotelo-Raffo, and Edgar Ramos-Palomino(&)
Ingeniería Industrial, Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru
{u201415400,u201321853,fernando.sotelo, edgar.ramos}@upc.edu.pe
Abstract. At present, in the agricultural industry, continuous improvement in products or crops and quality in processes is a challenging task. Therefore, several quality techniques have been adopted in this industry, and advantages have been obtained. The main purpose of this document is to present the advantages offered by the use of quality control and the quality control matrix in order to reduce non-conformities in the production process. The similarities and differences between traditional quality control and modern control using QAM are presented.
Keywords: Agriculture · Control charts · Quality control · Quality matrix
1 Introduction
Agricultural productivity holds a crucial position in the economic and social agenda of developing countries such as Peru, where agricultural productivity growth is regarded as the key to achieving agriculture-led growth [1]. It should be noted that the adoption of new technologies has been widely accepted as a means to increase productivity. In other words, agricultural productivity depends on two components [2]: the first is the type and quality of the inputs to the production process, as well as the production technology; the second relates to how well these inputs are combined, that is, the technical efficiency of the production process [3]. In this sense, the planning and implementation of control charts plays a fundamental role in the desired reliability and availability of the final product for subsequent sale [4]. The ultimate goal of any quality control chart is to reduce or eliminate system failures, which in turn will improve and guarantee the effective reliability of the system [5]. Monitoring the reliability of the paprika product is an important tool for tracking the effectiveness and productivity of the production process throughout the period. Therefore, constant monitoring increases the reliability of the products and can provide more benefits to the farmers of the research site, located in Barranca, Lima-Peru.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 704–710, 2020. https://doi.org/10.1007/978-3-030-39512-4_108
A Framework of Quality Control Matrix in Paprika Chain Value
705
Control charts are the most widely used SPC tools. Several researchers have addressed the integration of control charts with agricultural management. However, the integration of SPC and the agriculture sector has not been adequately analyzed in the literature [6]. The purpose of this document is to provide a brief review of the existing literature on the integration of control charts in the agro-industrial sector, and to suggest some possible directions for future research [7].
2 Methodology
The first component is the type and quality of the inputs to the production process, as well as the production technology. The second relates to how well these inputs are combined and, therefore, to the technical efficiency of the production process [8]. The main purpose of this document is to present the advantages offered by the use of quality control and the quality control matrix in order to reduce non-conformities in the production process. The similarities and differences between traditional quality control and modern control using QAM are presented [9]. Quality control matrix: This matrix is designed to help those responsible for the Quality Management area evaluate the objectives established for the Quality Management System, providing a general and comprehensive overview of the status of these objectives [10]. By performing the corresponding analysis and assessment of each evaluation criterion, this matrix also functions as a guide to direct the process of establishing objectives, in the sense that it allows verifying whether the requirements of the international standard ISO 9001:2015 for the QMS objectives are being met, so that they are really useful and practical and, consequently, allow the development of strategies and plans for their achievement [11]. Management by processes: This is a structured, analytical approach to functional and process improvement, focused on the gradual and continuous improvement of processes within a business rather than on reengineering. The main objective of process management is to improve processes in order to ensure that critical processes are effective and efficient, thus aligning them with the strategic objectives of the company and customer needs [12]. Process management has various tools, such as Business Process Model and Notation (BPMN), whose objective is to model business processes, together with indicators related to the strategic objectives of the company and/or business [13].
Another process management tool is the SIPOC diagram, which stands for Suppliers-Inputs-Process-Outputs-Customers. The SIPOC diagram identifies and maps the gaps between supplier specifications, output specifications, and customer expectations, in order to define the scope of process improvement activities [14]. Value chain: The value chain is a very important instrument used to analyze the different activities carried out by a company in order to identify its sources of competitive advantage [15]. A value chain is a high-level model developed by Michael Porter used to describe the process by which companies receive raw materials, add value to them through various processes to create a finished product, and then sell that final product to clients [16].
706
D. Garcia-Montero et al.
Companies conduct an analysis of the value chain when they observe the production steps needed to create a product and identify ways to increase the chain’s efficiency [16].
3 Results To obtain the results, a generalized analytical model called the Monitoring Model is used to coordinate the various controls of paprika crop production. With the objective of obtaining and planning to minimize the expected costs and increase the price of the crop [17]. An effective way to control high quality processes is to use time frames and quality controls, all the matrices between events [18]. Some authors have addressed the maintenance combination with the times between the event control tables. Research in this area is still limited [19]. Of the documents investigated, two documents that have discussed the maintenance combination with the TBE control table. The proposed the use of the time table between events in the monitoring of the useful life of a distributed system. His mathematical model presents an integration between maintenance decisions with SPC [20]. They assumed that the system consists of a single component, which rarely happens in engineering applications. However, presented the use of the time frame between events in the monitoring of the time between failures of a reparable system with multiple components. We propose a stepby-step procedure that helps decision makers to periodically monitor and improve the reliability of the system. Figure 1 presents a summary of its suggested procedure.
Fig. 1. Analysis of the scenarios
It is very important to take into account the objectives for implementing a self-control system in the most correct way possible; these objectives are: attitude, uniqueness, technologies, order, control, organization, needs, traceability, responsibility, observation, and cleaning [21]. Therefore, it is also important to define the sequence of steps used to analyze the problems that arise in the day-to-day operation of the case under study [22] (Fig. 2):
Fig. 2. Identification of the problem or defect
To apply the quality self-control matrix, it is first necessary to identify the defect or problem. Secondly, the relationship of the defect or problem to the various activities of the process is analyzed [23]. The quality self-control matrix is presented below (Table 1):

Table 1. Quality control matrix (template). Columns: External provider, Internal provider, Activity 1, Activity 2, Activity 3, Activity 4, Activity 5, Total. Rows (phase where the defect or problem is detected): Activity 1, Activity 2, Activity 3, Activity 4, Activity 5, Internal customer, External customer, Total.
The red boxes represent the activities that should be given the highest priority in the analysis, as they are where the fault or problem is detected. The green boxes do not represent a major impact on the analysis of the problem, as the case may be. As part of continuous improvement, the use of the risk control table is also proposed, since it seeks to mitigate any type of unforeseen event that may occur throughout the paprika cultivation season in the district of Araya Grande, Barranca.
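The self-control matrix can be operationalized as a simple tally over a defect log. In this sketch the data are hypothetical, and it assumes each record pairs the phase where the defect was detected with the activity where it originated, the usual reading of such matrices; the diagonal then corresponds to the high-priority cells:

```python
from collections import Counter

# Hypothetical defect log: (phase where detected, activity where it originated)
defects = [
    ("Activity 2", "Activity 1"),
    ("Activity 2", "Activity 2"),
    ("Activity 4", "Activity 3"),
    ("Activity 4", "Activity 3"),
    ("Activity 5", "Activity 4"),
]

matrix = Counter(defects)  # cell counts of the self-control matrix

def self_detected(matrix):
    """Defects caught in the same activity that produced them (the diagonal)."""
    return sum(n for (detected, origin), n in matrix.items() if detected == origin)

print(self_detected(matrix), sum(matrix.values()))
```

Cells with high off-diagonal counts reveal defects escaping downstream, which is where improvement effort should concentrate first.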
3.1 Risk Control Table
The risk control table aims to identify, control, and eliminate sources of risk before they begin to affect the fulfillment of the project’s objectives. In this way, the impact of possible risks can be evaluated and estimated and, in turn, a contingency plan can be established to mitigate them if a problem arises, with proactivity as the objective. Therefore: (1) it starts before the technical work; (2) risks are categorized or classified according to their impact; (3) potential risks are identified, assessing their probability of impact; and (4) a plan is established [24].
Risk | Category | Probability | Impact | Action plan

Classification of the category and risk factors by level of impact:
Risk components: Performance, Cost, Maintainability, Planning
Risk factors: Negligible, Marginal, Critical, Catastrophic
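The risk table can be operationalized as a small register that orders risks by exposure (probability times impact severity). The impact levels are the ones listed above; the example risks, probabilities, and numeric weights are illustrative:

```python
# Severity weights for the four impact levels listed above (illustrative scale)
IMPACT = {"Negligible": 1, "Marginal": 2, "Critical": 3, "Catastrophic": 4}

# Hypothetical risks for a paprika season: (name, category, probability 0-1, impact)
risks = [
    ("Late frost",        "Performance",     0.2, "Catastrophic"),
    ("Pump breakdown",    "Maintainability", 0.5, "Marginal"),
    ("Input price spike", "Cost",            0.4, "Critical"),
]

def exposure(prob, impact):
    """Simple probability x severity score used to order the action plan."""
    return prob * IMPACT[impact]

# Highest exposure first: these risks get contingency plans first
ranked = sorted(risks, key=lambda r: exposure(r[2], r[3]), reverse=True)
print([r[0] for r in ranked])
```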
To complete the quality self-control cycle, the philosophy of continuous improvement is of utmost importance, because it is an ideal tool for the work team to understand and pursue quality in the development of the product [25]. The process of continuous improvement is a constant refinement of standards to respond dynamically to client demands and to opportunities to improve processes. For this to happen, management must first establish the standard or baseline for the processes or activities so that, subsequently, the PDCA cycle performs its regulatory function [20], improving the established standards. In this way, the PDCA cycle enables organizational learning and the achievement of better standards [26].
4 Conclusion
Quality assurance is a set of methods, tools, and techniques that allow quality to be managed in the development of a product [27]. Despite being a fundamental element when developing a project, not all companies apply it, owing to budget constraints, lack of personnel, or the complexity of adapting standards [28]. This article presents a practical approach, as a guide, to manage quality in companies with a level-zero maturity [20]. The correct implementation of self-control in the different transformation industries usually begins with a training/information campaign for the main stakeholders (operators), in which they are told what is expected of them [29]. The objectives for implementing a self-control system are: attitude, uniqueness, technologies, order, control, organization, needs, traceability, responsibility, observation, and cleaning [5].
References 1. Bouzembra, Y., Camenzuli, L., Janssen, E., Van der Fels-Klerx, H.F.: Application of bayesian networks in the development of herbs and spices sampling monitoring system. Food Control 83, 38–44 (2017) 2. Szandra, K., Helga, M., Miklós, P., Ildikó, B., Nóra, A.: Quality management in spice paprika production: from cultivation to end product (2018). https://doi.org/10.5772/ intechopen.71227
3. Damiao, C., Reinaldo, R.: Microbiological quality of Argentinian paprika. Computers and Electronics in Agriculture, FI: 0.896, Q2 (2016) 4. Bracke, J., Schmiedel, T., Recker, J., Trkman, P., Mertens, W., Viaene, S.: The effect of intensive chemical plant protection on the quality of spice paprika. Bus. Process Manag. J. 67, 141–148 (2014) 5. Czako, E., Konczol, E.: Critical success factors of export excellence and policy implications: the case of Hungarian small and medium-sized enterprises. In: Gubik, A.S., Wach, K. (eds.) International Entrepreneurship and Corporate Growth in Visegrad Countries, pp. 69–84. University of Miskolc, Miskolc (2014) 6. ZakiI, N., HakmaouiI, A., OuatmaneI, A., Fernandez-Trujillo, J.: Quality characteristics of Moroccan sweet paprika (Capsicum annuum L.) at different sampling times. Food Sci. Technol. 33(3), 577–585 (2014) 7. Jin, N., Lee, Y., Lee, B., Jun, J., Kim, Y., Seo, M., Lim, C., Youn, Y., Yu, Y.: Pest control effect and optimal dose by pesticide dispersion spray method in the paprika Cultivation. Korean J. Pestic. Sci. 18, 350–357 (2014). https://doi.org/10.7585/kjps.2014.18.4.350 8. Nainggolan, K.: Major issues and challenges for improving the marketing and distribution of agricultural products. Agro Ekon. 10 (2003). https://doi.org/10.22146/agroekonomi.16786 9. Vincenzina, F., Antonio, L., Rodriguez, F.: Food safety aspects on ethnic foods: toxicological and microbial risks. Curr. Opin. Food Sci. 6, 24–32 (2015) 10. Kónya, É., Szabó, E., Bata-Vidács, I., Deák, T., Ottucsák, M., Adányi, N., Székács, A.: Quality management in spice paprika production as a synergy of internal and external quality measures (2016) 11. Molnár, H., Kónya, E., Székács, A.: Chemical characteristics of spice paprika of different origins. Food Control 83, 54–60 (2018) 12. Mahantesh, Y., Bmadalageri, M., Pujari, J., Mallimar, S.: Genetic variability studies in chilli for yield and quality attributes. Indian J. Ecol. 42(2), 536–539 (2015) 13. 
Lekshmi, S.: Variability, heritability and genetic advance in paprika (Capsicum annuum L.). Indian Hortic. 6(1), 109–111 (2016) 14. Gutierrez, L., Munoz, J.: Total quality management practices, competitive strategies and financial performance: the case of the palestinian industrial SMEs. Total Qual. Manag. Bus. Excell. 25(5–6), 635–649 (2015) 15. Jiao, X., Mongol, N., Zhang, F.: The transformation of agriculture in China: looking back and looking forward. J. Integr. Agric. FI: 0.401 (2016) 16. Randhawa, J.: 5S-a quality improvement tool for sustainable performance: literature review and directions. Int. J. Qual. Reliab. Manag. 34(3), 334–361 (2017) 17. Klatyik, S., Darvas, B., Mortl, M., Ottucsak, M., Takacs, E., Banati, H., Simon, L., Gyurcso, G., Szekacs, A.: Food safety aspect of pesticide residues in spice paprika. Int. J. Nutr. Food Eng. 10(3), 188–191 (2016) 18. Schaarschmidt, S.: Public and private standards for dried culinary herbs and spices - part I: standars defining the physical and chemical product quality and safety. Food Control 70 339–349 (2016). Department biological safety, bundesinstitut fur risikobewertung 19. Kekuewa, N., Ardoin, M.: Cultivating values: environmental values and sense of place as correlates of sustainable agricultural practices. Agric. Hum. Values 33(2), 389–401 (2016) 20. Marine, S., Martin, D., Adalja, A., Mathew, S., Everts, K.: Effect of market channel, farm scale, and years in production on mid – Atlantic vegetable producer’s knowledge and implementation of good agricultural. Food Control 59, 128–138 (2016) 21. Mok, W.: Maximizing control flow concurrency in BPMN workflow models through syntactic means. Bus. Process Manag. J. 24(2), 357–383 (2017) 22. Morais, M., Rossi, J., Binotto, E.: Using the reasoned action approach to understand Brazilian successors’ intention to take over the far. Land Use Policy 71, 445–452 (2018)
23. Müller, C., Elliott, J., Chryssanthacopoulos, J., Arneth, A.: Global gridded crop model evaluation: benchmarking, skills, deficiencies and implications. Geosci. Model Dev. 10(4), 1403–1422 (2016) 24. Parent, J., Lein, Q.: A review of the importance of business process management in achieving sustainable competitive advantage. TQM J. 26(5), 522–531 (2015) 25. Wezel, A., Casagrande, M., Celette, F., Vian, J., Ferrer, A.: Agroecological practices for sustainable agriculture. A review. Agron. Sustain. Dev. 34(1), 1–20 (2014) 26. Wongprawmas, R.: A multi-stakeholder perspective on the adoption of good agricultural practices in the Thai fresh produce industry. Br. Food 117(9), 2234–2249 (2015) 27. Izquierdo, J., Rodriguez, F.J., Duran, M.: Guidelines Good Agricultural Practices for Family Agriculture, Food and Agriculture Organization (2007) 28. Tradewinds Plantation Berhad. Good Agricultural Practices (2018) 29. Chaifetz, A., Alnajjar, K., Ammerman, A., Gunter, E., Chapman, B.: Implementation of good agricultural practices (GAPs) in school and community gardens. Food Prot. Trends 35 (3), 167–175 (2015)
Inventory Optimization Model Applying the Holt-Winters Method to Improve Stock Levels in SMEs in the Sports Retail Sector Diego Amasifén-Pacheco1, Angela Garay-Osorio1, Maribel Perez-Paredes1, Carlos Raymundo-Ibañez1(&), and Luis Rivera2 1
2
Universidad Peruana de Ciencias Aplicadas (UPC), Lima, Peru {u201417538,u201313716,maribel.perez, carlos.raymundo}@upc.edu.pe Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain [email protected]
Abstract. Organizations are aware that the world of sports retail is shaped by situations and variables that make it an inherently complex and competitive scenario. This leads to a very common inventory management problem: stock levels that do not match market behavior, generating significant lost sales, increasing costs, and reducing company margins. This research seeks to design an inventory optimization model that integrates tools such as multi-criteria ABC analysis, Holt-Winters, and the 5S methodology to improve stock levels in SMEs within the sports retail sector. The proposed model helps find an efficient method for the optimization sought, which enables the planning of sales and stock purchases according to the needs of the company. The main result of the validation in the case study was the reduction of additional costs by 85%.

Keywords: Inventory management · Holt-Winters · Sports retail · Multi-criteria ABC analysis · 5S
1 Introduction The sustained growth of the Peruvian economy has allowed the retail sector to experience significant and constant expansion in the last decade. In 2018, sales in the retail sector grew 9.7% and represented 1.91% of the national GDP. In addition, Peru ranks ninth in the list of 30 emerging countries classified as the most attractive for investing in the retail sector. However, 40% of national retail companies are inefficient in managing their supply chains [1] and are estimated to have 20–30% dead or obsolete inventory [2], reflecting a gap that prevents these organizations from evolving into more competitive businesses. It was identified that the stock levels managed by SMEs in the sports retail sector are not optimal due to the strategic goals they set forth. This problem is mainly caused by deficiencies in the planning, management, and control of inventories, which © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 711–718, 2020. https://doi.org/10.1007/978-3-030-39512-4_109
712
D. Amasifén-Pacheco et al.
translates into shortages and surplus stocks, low rotation, and deterioration of goods. This causes operating costs and expenses to increase and, therefore, liquidity to decrease as a result of lost sales [3]. In addition, companies often lose potential customers due to the dissatisfaction generated by not having the products they need when visiting a store. This research seeks to design a solution to address the supply chain problems faced by companies in the sports retail sector, which are evidenced by their stock levels. The proposed optimization model consists of tools based on the assessment of demand behavior, forecast of purchases according to the needs of the company, and application of the 5S methodology to strengthen the model through the implementation of good warehousing practices to correctly execute sales, product, and procurement plans.
2 State of the Art 2.1
Holt-Winters Method
The applicability of the Holt-Winters method in logistics has been demonstrated in the production, trade, and service industries over time [4]. Indeed, Teunter, Syntetos, Riquero, and Bastidas [5–7] report that its implementation has produced satisfactory results, both in reducing inventory costs and in increasing service quality, while also reducing the emergency actions needed to avoid lost sales caused by stockouts and total inventory breakdowns. Similarly, Bastidas and Toro [7] assessed the economic impact of the methodology applied in a household appliance retail company through simulation and the Holt-Winters forecasting method, reporting a 50% reduction in stockout costs. In addition, the proposed methodology represents a significant improvement opportunity for the inventory management process, as this reduction translates into an increase in perceived utility.
Multi-criteria ABC Analysis
The classical ABC analysis has been questioned by some authors [7, 8], mainly because the importance and attention that management gives to each item depends on a single criterion at classification time, even though items have characteristics and attributes that should be considered and that may affect their importance. Moreover, Castro [8] shows that applying the multi-criteria ABC method to inventory models through fuzzy logic makes it possible to obtain classifications that consider several criteria at the same time, thus engaging the expertise of the people in charge of the inventories in order to strengthen the item classification.
Inventory Optimization Model Applying the Holt-Winters Method
713
3 Contribution 3.1
Model Proposed
The model proposed optimizes inventories against demand volatility and supply variability, thus reducing overstocking and understocking risks, and optimizing their inherent costs. The deployment of the 5S methodology approach and the multi-criteria ABC tool guarantees proper process flow rates and product classification as per market behavior criteria. In fact, the results from both tools provide a clear snapshot of the stock held by the company, which leads to sales forecasting under the Holt-Winters tool in order to determine the product quantities to be purchased. The interrelation of the three elements makes it possible to measure the results objectively in order to find points for improvement, standardize the activities inherent to the process, and generate greater value to the proposed model (see Fig. 1 below).
Fig. 1. Optimization model
3.2
Method Proposed
In order to have greater understanding of the model proposed, a flowchart of the activities of the optimization proposal was prepared (see Fig. 2).
Fig. 2. Flowchart of the activities of the optimization proposal.
3.3
Indicators
As part of the development of the proposed model, management indicators have been created (see Fig. 3) in order to assess current operations and set forth an evolution horizon, so that the results obtained may be compared with the inventory level issues reported by sports retail companies.
Fig. 3. KPIs of the model proposed.
4 Validation 4.1
Application of the Model Proposed
The model consists of three phases that together achieve the stock level optimization. It is necessary to point out that the validation of the case study has been segmented by product families at the “Beta” store of one of the main sports retailers in Peru. Then, based on a previous analysis of the total SKUs of the “Beta” store, the multi-criteria ABC tool will be used to assess the 56 different product families of Nike urban footwear for men.
With this tool, we will classify the result of the sum of homogenized values in descending order under the following criteria: quantity of stock, rotation, and product cost. 4.1.1 Phase 1 This matrix is rated on a monthly basis. However, it will now be rated quarterly in order to be able to better notice any changes. The rating protocol involves operators who check and score each item (see Fig. 4).
Fig. 4. 5S Improvement Matrix in Beta Store – Sport SAC.
4.1.2 Phase 2 The result of the multi-criteria ABC analysis in which 16 families of products belong to classification A, which represents 28.56% of the total families and 69.94% of the total products owned by the store in study.
After performing the multi-criteria ABC analysis, the sales forecast was calculated using the Holt-Winters method. For this purpose, sales history records from 2016 to 2018 were used as the basis for the 2019 sales forecast. First, we forecast demand for 2018, which has already been executed; that is, we use the parameters obtained with data from 2016 and 2017 to verify whether they reasonably adjust to 2018. Once this adjustment is achieved, we can conclude that our parameters are properly calibrated, and we can calculate the forecast for 2019 (see Figs. 5 and 6). One of the advantages of the Holt-Winters method is that the forecast can be adjusted as actual data become available.
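The calibrate-then-forecast procedure described above can be sketched with a minimal additive Holt-Winters implementation. All sales figures and smoothing parameters below are illustrative assumptions, not the case-study data; the 5% safety-stock margin applied at the end is the one the paper uses in Phase 2.

```python
# A minimal additive Holt-Winters (triple exponential smoothing) sketch:
# calibrate on 2016-2017, check the fit against 2018, then forecast 2019.

def holt_winters_additive(series, season_len, alpha, beta, gamma, horizon):
    """Returns point forecasts for the next `horizon` periods."""
    first, second = series[:season_len], series[season_len:2 * season_len]
    level = sum(first) / season_len                       # initial level
    trend = (sum(second) - sum(first)) / season_len ** 2  # initial trend
    seasonals = [x - level for x in first]                # initial seasonal indices
    for i in range(season_len, len(series)):
        s, last_level = seasonals[i % season_len], level
        level = alpha * (series[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonals[i % season_len] = gamma * (series[i] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + seasonals[(len(series) + h) % season_len]
            for h in range(horizon)]

# Illustrative monthly sales for 2016-2018: mild growth plus yearly seasonality.
season = [0, -10, -5, 0, 5, 10, 20, 25, 15, 5, 0, 30]
sales = [100 + 5 * (i // 12) + season[i % 12] for i in range(36)]
forecast_2019 = holt_winters_additive(sales, 12, alpha=0.3, beta=0.05, gamma=0.2, horizon=12)
# Target stock per month: forecast plus the paper's 5% safety margin.
target_stock = [round(f * 1.05, 1) for f in forecast_2019]
```

In practice, alpha, beta, and gamma would be tuned on the 2016–2017 data until the 2018 fit error is acceptable, mirroring the calibration step the paper describes.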
Fig. 5. Demand Forecast for Beta store – Sport SAC.
Fig. 6. Sales Forecast 2019 Beta Store – Sport SAC.
Second, after forecasting sales for 2019, the proper inventory quantities are established for each product, taking into account that the established safety stock is 5% over the forecast. 4.1.3 Phase 3 The last phase of the proposed model lies in assessing performance through graphic indicators that represent the current situation and the situation proposed with the inventory optimization model (see Figs. 7 and 8). In fact, the proposed situation adjusts more accurately to historical sales in October, November, and December 2018 and in January, February, and March 2019.
Fig. 7. Current situation.
Fig. 8. Proposed situation.
5 Conclusions It was possible to optimize the stock level of the store in which the model was validated, thus reducing costs and expenses by 84.57%. Capital savings of 81,035.52 Soles were reported; this capital had been earmarked for buying large quantities of products that did not match the store's demand. In the four scenarios where the Holt-Winters forecasting method was validated, a margin of error of less than 5% was obtained, adjusting to the monthly sales behavior of the case study. The proposed optimization model represents a significant improvement opportunity for the inventory management process, as this reduction will translate into an increase in profits. In addition, the store is now able to meet demand forecasts, thus guaranteeing optimum levels of customer service and, in turn, an elevated store image and credibility.
References
1. Soluciones de Marketing: Primer Estudio sobre la Situación del Supply Chain Management en el Perú [First Study on Supply Chain Management in Peru] (2018). http://semanaeconomica.com/wpcontent/uploads/2018/10/encarte_Supply_Chain_Management_OK_baja.pdf4
2. Perú Retail: El sobre stock destruye la cadena del retail [Overstock destroys retail chains] (2019). https://www.peru-retail.com/sobrestock-destruye-cadena-retail/
3. Santa Cruz, A.K., et al.: Sales and operation planning model to improve inventory management in Peruvian SMEs (2019). https://doi.org/10.1109/ICITM.2019.8710734
4. Tirkeş, G., Güray, C., Çeleb, N.: Demand forecasting: a comparison between the Holt-Winters, trend analysis and decomposition models (2017). https://doi.org/10.17559/TV-20160615204011
5. Teunter, R.H., Syntetos, A.A., Babai, M.Z.: Intermittent demand: linking forecasting to inventory obsolescence. Eur. J. Oper. Res. 214, 606–615 (2011)
6. Riquero, I., et al.: Improvement proposal for the logistics process of importing SMEs in Peru through lean, inventories, and change management (2019). https://doi.org/10.1007/978-3-030-16053-1_48
7. Bastidas, V.E., Toro, L.A.: Methodology for the control and inventory management in a company appliance retailer. Scientia et Technica (2011). http://dx.doi.org/10.22517/23447214.1481
8. Castro, C.: Clasificación ABC Multicriterio: Tipos de criterios y efectos en la asignación de pesos [ABC multi-criteria classification: types of criteria and effects on weight assignment]. ITECKNE 8(2), 163–170 (2011)
Recruitment and Training Model for Retaining and Improving the Reputation of Medical Specialists to Increase Revenue of a Private Healthcare SME Audy Castro-Blancas1, Carlos Rivas-Zavaleta1, Carlos Cespedes-Blanco1, Carlos Raymundo1(&), and Luis Rivera2 1
2
Universidad Peruana de Ciencias Aplicadas (UPC), Lima, Peru {u201510504,u201422189,carlos.cespedes, carlos.raymundo}@upc.edu.pe Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain [email protected]
Abstract. This research presents an improved recruitment and training model focused on increasing physician reputation and loyalty in order to retain good physicians in the provinces of Peru and thereby increase the competitiveness and profitability of the healthcare organization. A pilot test of the model was implemented in an assisted reproductive technology (ART) clinic based in Chiclayo and Piura using a recruitment process checklist. The data resulting from the implementation were analyzed using the success rate of the new physician's in vitro fertilization (IVF) treatments and a structured engagement survey to determine retention. Under the new recruitment model, the new physician achieved a success rate of 46.15% in IVF pregnancies. The main physician's participation was reduced from 69% to 48% without a decrease in total sales. The level of engagement achieved by the new physician was 96%, surpassing the clinic medical team's average of 79%.

Keywords: Personnel recruitment · Continuous training and practice · Reputation · Retention
1 Introduction In 2016, América Economía published an article entitled "Peru: A thousand physicians emigrate every year to other countries," which showed that, apart from remuneration that is among the lowest in the region ($1,500 compared to the Latin American average of $3,000), there is no medical career environment with establishments that offer specialization or training permits or the prospect of attending conferences and courses both in the country and abroad; further, research is not promoted [1]. In the same year, 16,630 medical specialists were required to cover the health needs of over 30 million Peruvians; however, only 8,074 were available [2]. Subsequently, in 2017, the informality rate of healthcare companies became one of the © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 719–725, 2020. https://doi.org/10.1007/978-3-030-39512-4_110
720
A. Castro-Blancas et al.
highest, i.e., 3 to 1, since approximately 60,000 health centers were operating in the informal sector [3]. Studies in the health sector show a predominance of 62% and 25% in socioeconomic levels A and B, respectively, in the choice of private clinics over other types of services, such as Minsa, EsSalud, or Solidarity Hospitals, or consulting a private physician [4]. Similarly, physicians are unevenly distributed across Peru, according to the former Minister of Health and professor at the Cayetano Heredia Peruvian University, Eduardo Pretell: at the end of 2014, 47.7% of the country's specialists were concentrated in clinics and hospitals in Lima, and Lima, Callao, Arequipa, and La Libertad together accounted for 65.1% of all specialists nationwide [5]. Regarding the aforementioned problems, studies conducted in small and medium-sized enterprises (SMEs) have concluded that most entrepreneurs or their partners have a university education and begin their businesses in the sector in which they have worked; however, they do not necessarily have management knowledge in administrative, financial, and managerial matters [6]. Efforts have already been made, and research has been conducted on a recruitment program aimed at attracting and training professionals to perform their medical internships in rural areas [7]. The proposed solution is aimed at increasing the loyalty of medical professionals toward private healthcare centers in the provinces of Peru, as well as at increasing their reputation level in less time and minimizing the effects of the problems experienced by healthcare SMEs. This model provides an alternative solution to the current problem of recruiting and training specialists in the provinces of Peru, in addition to engaging and retaining medical specialists in organizations, given the shortage of specialized healthcare workers in Peru.
It also aims at reducing the average time it takes for a health professional to gain reputation and experience in a specialized area, which, in the case study, is an assisted reproductive technology (ART) clinic, as well as at replicating the model in other small healthcare centers in the provinces of Peru. Finally, one difficulty to consider is the time factor in relation to the validation of each of the model components.
2 Literature Review 2.1
Model of Recruitment and Continuous Training in Healthcare
Given the need for qualified physicians, a training program was created that helped students become certified physicians and improve their skills. Physicians considered continuous training to be the most effective way of learning [8]. In the rural areas of Iran, a training model for community health workers showed that the relationship between training and employee performance improves the model effectiveness according to the community health needs [9]. In more urbanized areas, highly specialized physicians using advanced technology and complex treatments are needed, and better education and practical training are essential to achieve that quality [10]. Continuous training improves the specialists’ knowledge to a high degree. Moreover, it also improves their performance and job satisfaction and generates a positive economic impact on the health organization in which they work [11]. Similarly, studies
Recruitment and Training Model
721
on the continuous training of the health workforce, including medical specialists, showed the direct relationship between training factors such as satisfaction, training modality, and professional competence and the patients’ perceived quality of care [12]. 2.2
Recruitment and Retention Model with Organizational Commitment
Employee engagement is important in any healthcare organization. Individual characteristics and work experiences are essential in terms of employee engagement. Motivating factors were divided into individual, community, and organizational factors. One of the main engagement factors is the supervision and support from a team leader or colleagues, which positively affects organizational commitment [13, 14]. Another important motivating factor, particularly for young physicians or medical students, is the opportunity to specialize if there is an attractive goal-oriented training plan to become a medical specialist [15].
3 Contribution 3.1
Proposed Model
See Fig. 1.

[Fig. 1 diagram labels: Individual Needs; Organizational Needs; Recruitment Processes; Training Processes; Profile Development; Interview; Continuous Training; Reputation Approach; Promotion; Retention Plan.]

Fig. 1. Model diagram for the improvement of recruitment and training processes.
3.2
Proposed Method
The proposal is divided into the following activities: assess decisive factors in the profile of the specialist to be hired, plan the retention model to be implemented,

[Fig. 2 flowchart boxes: Assess decisive factors in the medical specialist's profile; Plan the retention model to be implemented; Does the specialist meet requirements? (Yes/No); Determine continuing training; Determine training benefits; Conduct the pilot test; Implement the model; Assess results.]

Fig. 2. Proposal flowchart. Prepared by the author. 2019
determine the type of continuing training to be conducted, and determine and communicate training benefits. The flowchart is shown in Fig. 2.

Ratios. The metrics to use are as follows.

Engagement Level. This ratio measures the level of employee engagement toward the organization where they work. A total of 37 questions are evaluated with values from 1 (Strongly Disagree) to 5 (Strongly Agree):

\[ \text{Engagement level} = \frac{\sum_{i=1}^{37} \text{Value}[\text{question}(i)]}{37 \times 5} \times 100\% \quad (1) \]

Physician Capability Level (Effectiveness). This ratio measures the satisfaction of the clients who, in this case, are the patients. In the case study of an ART clinic, one of the medical techniques used is in vitro fertilization (IVF). This ratio is analyzed through the result of this procedure, since it is a complex process that increases the revenue of the case study company:

\[ \text{Effectiveness} = \frac{\sum \text{Effective IVF treatments}[\text{Physician}(i)]}{\sum \text{IVF treatments}} \quad (2) \]

Physician Participation Level. The level of participation of both the physicians hired under the traditional case study model and the physicians involved in the pilot test will be measured:

\[ \text{Participation} = \frac{\text{IVF treatments}[\text{Physician}(i)]}{\sum \text{IVF treatments}} \quad (3) \]

Net Present Value (NPV). The NPV formula is used to calculate the present value of the five years of future cash flows that a physician hired under the recruitment and selection model would generate in the clinic:

\[ \text{NPV} = -I_0 + \sum_{i=1}^{n} \frac{F_i}{(1 + cok)^i} \quad (4) \]
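The four ratios above can be sketched in code as follows. The figures in the usage lines are illustrative, not the clinic's actual data; `cok` denotes the opportunity cost of capital used as the discount rate.

```python
# Illustrative implementations of the paper's four ratios.

def engagement_level(answers):
    """Ratio (1): 37 Likert answers (1-5) as a percentage of the maximum score."""
    assert len(answers) == 37, "the survey has 37 questions"
    return sum(answers) / (len(answers) * 5) * 100

def effectiveness(effective_ivf, total_ivf):
    """Ratio (2): share of a physician's IVF treatments that were effective."""
    return effective_ivf / total_ivf

def participation(physician_ivf, total_ivf):
    """Ratio (3): a physician's share of all IVF treatments performed."""
    return physician_ivf / total_ivf

def npv(initial_investment, cash_flows, cok):
    """Ratio (4): discounted future cash flows minus the initial investment."""
    return -initial_investment + sum(
        f / (1 + cok) ** i for i, f in enumerate(cash_flows, start=1))

# Example values (hypothetical): a perfect survey scores 100%, and 6 effective
# treatments out of 13 give an effectiveness of about 0.4615.
full_marks = engagement_level([5] * 37)
sample_rate = effectiveness(6, 13)
```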
4 Validation Current Diagnostic A pilot test has been conducted in a small clinic that provides ART services known as “Gestar In Vitro”.
Current Diagnostic. Much of the economic impact on the two clinics falls on the physicians' services.
Physician Engagement Level. See Table 1.

Table 1. Engagement level results.
Engagement level (current physicians under the traditional model): 79% overall average
Physician Capability Level. The results will be compared with the international standards provided by the Society for Assisted Reproductive Technology (SART), which covers approximately 90% of the IVF effectiveness studies in clinics in the United States (Table 2).

Table 2. Results of IVF births, U.S. network of clinics (weighted result/outcome)
Number of initiated cycles: 120,160
Live-birth rate per initiated IVF cycle: 31.16%
Level of Physician Participation. See Table 3.

Table 3. Level of participation of case study physicians, March–December 2018
Main physician: 69%
Physician A: 21%
Physician B: 6%
Physician C: 4%
NPV of Cash Flow Forecasting for a Physician (IVF Treatments Only). To diagnose the clinic's sales, the average number of IVF treatments of Physician A was considered (Table 4).

Table 4. Financial ratios, current situation
Annual OCC: 10.8%
Monthly OCC: 0.9%
NPV: 124,727
5 Summary of Results Ratio 1: The results show that the new physician achieved a 96% level of engagement, compared to the 79% engagement of the other physicians. Ratio 2: The effectiveness of the new physician was compared against the international levels of the Society for Assisted Reproductive Technology (SART); the new physician achieved 46.15% effectiveness against 31.16%. Ratio 3: The level of participation of the new physician increased notably, going from an inexperienced physician in IVF to the second physician in terms of participation, with a 26% level of participation.
6 Conclusions The recruitment and training improvement model is based on an initial recruitment model with later modifications and additional components from other models. A physician hired under the proposed model achieved, within a six-month span, a participation level of 26%; the main physician's percentage was reduced by 21% in both branches, relieving a large workload. The knowledge and capability level of the inexperienced physician increased in less time, as validated by the success rate of the physician's IVF treatments under the implementation of the case study pilot, which so far has achieved seven pregnancies with the IVF treatments performed, i.e., 46.15% of the total from January to May. The data are compared with the international standards provided by the SART, with a success rate of 31.16%.
References
1. América Economía Magazine: Peru: One thousand physicians emigrate each year to other countries, 04 May 2016 (2016). https://clustersalud.americaeconomia.com/peru-mil-medicos-emigran-cada-ano-a-otros-paises
2. Gestión Newspaper: Peru needs more than 16 thousand specialist physicians (2016). https://gestion.pe/suplemento/comercial/clinicas-centros-medicos/cifras-peru-necesita-mas-16-mil-medicos-especialistas-1001790
3. Gestión Newspaper: Susalud: There are 60,000 informal medical establishments in Peru, triple the number of formal ones, 28 August 2017 (2017). https://gestion.pe/economia/susalud-hay-60-000-established-medical-informales-peru-triple-formales-142486
4. Gestión Newspaper: 62% of Lima residents of SEL A go to clinics and 33% of SEL D to hospitals (2016). https://gestion.pe/economia/62-limenos-nse-clinicas-33-d-hospitales-4368
5. El Comercio Newspaper: 47.7% of medical specialists are concentrated in Lima, 21 August 2014 (2014). https://elcomercio.pe/economia/peru/47-7-medicos-especialistas-concentran-lima-175747
6. Condorchoa, E., Gonzales, L.: Analysis of the factors that limit the development and growth of small businesses in Lima, Peru. Thesis, Peruvian University of Applied Sciences, Lima (2017)
7. Eidson-Ton, W.S., Rainwater, J., Hilty, D., Henderson, S., Hancock, C., Nation, C.L., Nesbitt, T.: Training medical students for rural, underserved areas: a rural medical education program in California. J. Health Care Poor Underserv. 27(4), 1674–1688 (2016)
8. Li, X., Shen, J.J., Yao, F., Jiang, C., Chang, F., Hao, F., Lu, J.: Does exam-targeted training help village physicians pass the certified (assistant) physician exam and improve their practical skills? A cross-sectional analysis of village physicians' perspectives in Changzhou in Eastern China. BMC Med. Educ. 18(1), 107 (2018)
9. Rahbar, M., Ahmadi, M.: Lessons learnt from the model of instructional system for training community health workers in rural health houses of Iran. Iranian Red Crescent Med. J. 17(2), e2145 (2015)
10. Kabanova, E.E., Vetrova, E.A.: Modern medical higher education institutions in Russia. Eur. J. Contemp. Educ. 7(4), 710–716 (2018)
11. Yoon, H., Shin, J., Bouphavanh, K., Kang, Y.: Evaluation of a continuing professional development training program for physicians and physician assistants in hospitals in Laos based on the Kirkpatrick model. J. Educ. Eval. Health Prof. 13, 21 (2016)
12. García-Pérez, M.L., Gil-Lacruz, M.: The impact of a continuing training program on the perceived improvement in quality of health care delivered by health care professionals. Eval. Program Plan. 66, 33–38 (2018)
13. Zare, R., Chavez, P., Raymundo, C., Rojas, J.: Collaborative culture management model to improve the performance in the inventory management of a supply chain. In: Proceedings of 2018 Congreso Internacional de Innovacion y Tendencias en Ingenieria, CONIITI 2018 (2018). https://doi.org/10.1109/coniiti.2018.8587073
14. Bhatnagar, A., Gupta, S., Alonge, O., George, A.S.: Primary health care workers' views of motivating factors at individual, community and organizational levels: a qualitative study from Nasarawa and Ondo States, Nigeria. Int. J. Health Plan. Manag. 32(2), 217–233 (2017)
15. Rozsnyai, Z., Tal, K., Bachofner, M., Maisonneuve, H., Moser-Bucher, C., Mueller, Y., Streit, S.: Swiss students and young physicians want a flexible goal-oriented GP training curriculum. Scand. J. Primary Health Care 36(3), 249–261 (2018)
Research on Disabled People's Museum Visit Experience from the Perspective of Actor-Network Theory Shifeng Zhao(&) and Jie Shen Jiangnan University, Wuxi, Jiangsu, China [email protected]
Abstract. Objective: As people with disabilities try to integrate into social and cultural exchanges, the main problem they face during museum visits is not the accessibility of the built environment, but the barriers in their communication and interaction with other actors in the exhibition environment. Method: The literature on disabled people's barrier-free travel and on barrier-free museum design is summarized. From the perspective of Actor-Network Theory, human and non-human actors are involved in the design network, thus playing an equal and unbiased stabilizing role throughout the barrier-free wheelchair travel system. Conclusion: Research on barrier-free access to museum information is investigated and analyzed, and the needs and interests of people with lower-limb disabilities in cultural education and independent travel, as involved in this study, are clarified. On this basis, a translation network that is equally driven by disabled people and the other actors is constructed.
Actor-Network Interaction barrier-free
1 Introduction In today’s era, the social problems that we need to solve are becoming more and more complex, and the user’s level of Maslow demand is also rising. The design has strengthened theoretical and scientific in the construction of disciplines in recent years. In particular, user experience and interaction design combine the knowledge of vision, space, psychology, and sociology to solve complex social problems. The intervention of actor network theory can describe the process of solving problems and building products as a dynamic network that promotes each other. The problems with the interaction between actors in this process can be described.
2 Literature Review There Are Few Studies on Barrier-Free Interaction in Museums. When participating in social and cultural exchanges, people with disabilities need to face not only
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 726–732, 2020. https://doi.org/10.1007/978-3-030-39512-4_111
Research on Disabled People’s Museum Visit Experience
727
the hardware barriers of building facilities, but also obstacles to the pursuit of equal rights to education and to free and independent living. A cross search of the keywords "museum" and "accessibility" in the China National Knowledge Infrastructure returned 415 articles, whose main research topics are "accessibility facilities", "accessibility environment", "accessibility design", etc. (search date: August 23, 2019). In a comparative study of the barrier-free systems of public buildings in Germany and China, Yu Qinran found that disabled people's organizations in Germany have gradually developed to provide paid counseling services for disabled people, their relatives, and society [1]. The barrier-free construction of public buildings in China began with the 2008 Beijing Olympic Games, which became the driving force for stimulating urban infrastructure construction. For example, the Forbidden City built a barrier-free route at the end of 2007, and visitors with physical disabilities can reach the "Wumen Gate" through barrier-free lifts. Digital display design in museums has become a new research hotspot in recent years. In interactive multimedia research at the Shanghai Museum, Gao Yuejun pointed out that museums need to use multimodal interaction and diverse terminals to spread knowledge; by enhancing the experience of cultural activities, they can make culture and technology enjoyable for everyone. From a technical perspective, Zhou Linlin proposed a method for the digitalization of the museum, which uses modern technology to convert physical objects into digital signals and then converts the digital signals into images that people can recognize. This digital display breaks geographical and time restrictions, making the museum's resources more efficient [2]. The newly adopted museum law in Sweden proposes to provide an equal museum experience for all visitors.
Hudson conducted design tests at the Trelleborg Museum in Sweden and found that the most important barrier-free navigation design methods are providing a tactile experience, a good audio tour, and easy navigation for people with disabilities [3]. Anna Kabjin Kim proposed a continuous momentum experience in the museum: because of the many dynamic factors in a museum, visitors' continuous experience is easily broken, and the momentum state is difficult to maintain [4]. Accessible information tours have appeared only in special events at a few museums. The Shanghai World Expo Museum uses accessible reading technology to allow blind people to obtain information; since then, it launched the "Smart World Expo Pavilion" in 2019, which has an auxiliary navigation function to address the pain point that blind people cannot identify information. However, the need of people with physical disabilities for digital interaction during the tour is ignored. Museum Accessibility Solutions Focus on the Design of Aging Wheelchairs. Barrier-free construction in developed countries is relatively mature, and a sound legal system has been established. The technical committee "Technical aids for disabled or handicapped people" was founded in 1978 and now has 29 member states and 25 observer countries. Since then, the UN General Assembly adopted the Convention on the Rights of Persons with Disabilities and its optional protocol in 2006. The Convention states that wheelchairs and other assistive devices can promote and protect the equal human rights and freedom of all persons with disabilities.
728
S. Zhao and J. Shen
The most common barrier-free solution in museums is the manual wheelchair. In the “Guidelines on the provision of manual wheelchairs” issued by the World Health Organization, the necessary considerations for the design and production of wheelchair products are emphasized: users, environment and materials. In research on intelligent wheelchair control systems, Su Ting pointed out defects of intelligent wheelchairs, such as the difficulty of safely detecting unknown environments, the single human-computer interaction interface and the high cost, and proposed modular production as a solution strategy. Further, Zhou Lei made predictions on the future development of intelligent wheelchairs and proposed four design trends - personalized control of multiple postures, comprehensive sensor detection, product industrialization and modular design - aiming to bring smart wheelchairs to people with disabilities and open a new way of life for them.
3 Observation

In this study, a field survey was conducted at the Musée du Louvre, Musée d’Orsay, Musée de l’Orangerie, and Centre Pompidou in Paris, and the use of the museums’ barrier-free facilities was observed using the shadow tracking method.

3.1 Typical User Path
The user path map, drawn from the combined results of the follow-up observations and interviews, is shown in Fig. 1. Users check the official website’s accessibility notice before going to the museum to see whether there is a priority channel or exclusive entrance for them. A security check is carried out upon arrival at the museum entrance.
Fig. 1. The typical user path map
Research on Disabled People’s Museum Visit Experience
729
If users decide to use a wheelchair provided by the museum, they ask the staff for the location of the accessible information desk and go there to rent one. The rental is free, but proof of the user’s disability is required. If they bring their own wheelchair, they can take the barrier-free elevator directly to the exhibition area; because they also use wheelchairs at home, this is the option more than half of the users choose. Visiting an exhibition is a complex process that involves a series of behaviors and actions, such as finding barrier-free elevators, viewing the exhibition, reading the exhibit cards and communicating. These actions constitute a cyclic behavioral system. Finally, users return the wheelchair, or leave the museum in their own.

3.2 Observation Summary
The Barrier-Free Environment is Relatively Complete. Thanks to the early construction of barrier-free facilities in European museums, people with disabilities can easily enjoy the same content as other visitors. However, during the observation it was found that the entrances to the barrier-free elevators were relatively hidden and the number of elevators was small. Some low-rise flights of stairs used a lifting platform instead of an elevator; the lifting platform became inoperable twice and had to be fixed by the staff. The Complicated Tour Route Makes the Family Exhausted. During the shadow tracking it was found that the internal circulation routes of the museums were usually designed in the form of broken lines, which makes the visiting route complicated and long. Family members walked an average of 13,400 steps in the museum. Moreover, they had to push the wheelchair forward the whole time, which is very laborious. Therefore, users usually choose to visit only the most important exhibitions to shorten the route. It is Difficult for Wheelchair Users to Communicate with Their Families. Even when a family member is directly behind the user, the distance between them is still more than 60 cm under normal conditions. A museum usually has a quiet atmosphere, so family members generally bend over and lean toward the wheelchair user for a conversation, while wheelchair users try their best to turn around and listen, without noticing this posture themselves. In the interviews afterwards, family members recalled that communicating during the visit was very difficult and made their waists uncomfortable. They Watched the Exhibits for Different Lengths of Time. Everyone’s interest in different exhibits differs. During the observation, 25 conversations about this time difference were recorded, including “Let’s go”, “Wait a minute”, “I don’t like it”, “Let me see”, etc.
Additionally, even in the same location, the items they observe in the museum differ. Some photographs recorded during the observation are shown in Fig. 2.
Fig. 2. Photos during shadow tracking
The construction of barrier-free interaction in museums is comprehensive and involves many elements, such as technology, funds, people, organizations, government and products. It is difficult to achieve efficiently by relying solely on the museum’s own power; rather, it requires multiple actors to participate. However, different actors have different modes of action and interests in the construction and application of barrier-free interaction in museums. Translating these actors into one field is very valuable for physically disabled people’s enjoyment of public education rights and for a better viewing experience.
4 Translation

Translation is an important way to form an actor network. Michel Callon divides the translation process into four steps: problematization, interessement, enrolment and mobilization [5].

4.1 Problematization
In the problematization stage, it is necessary to define the actors and clarify the obligatory passage points in the network.

Table 1. The specific composition of actors.

Types            | Attributes                    | Main actors
Human actors     | Individual actors             | Disabled people, family members, museum staff, tour guides, donors, artists, academics, etc.
                 | Organizational actors         | Non-governmental disability organizations, public culture and education departments, museums, manufacturers, etc.
Nonhuman actors  | Material category actors      | Technical equipment, funds, digital exhibition resources, museum guide resources, etc.
                 | Consciousness category actors | Awareness of equal education, relevant policies and regulations, regional social culture, etc.
According to the general symmetry principle of actor network theory, the actors in the construction of barrier-free interaction in museums are divided into human actors and non-human actors. Human actors include individual actors and organizational actors; non-human actors include material category actors and consciousness category actors. The specific composition of each type of actor is shown in Table 1. Every participating actor faces different problems and obstacles; therefore, they need to align their interests with other actors through cooperation. The key is to give the actors a common goal through obligatory passage points, such as building an accessible interactive platform, building an inclusive public education approach, and enriching the exhibition experience for people with disabilities.

4.2 Interessement
This stage stabilizes each actor in its assigned role. Core actors need to strengthen or stabilize the identities of other actors through a series of actions, such as negotiation and intervention; for example, communication in the real world can be carried out through technology, politics and negotiation. After the obligatory passage points are found, more relevant actors need to be drawn into this field to form an interest alliance. People with disabilities need to participate in the system construction, put forward creative needs, contribute travel data and give feedback. Disabled people’s organizations are established to serve the disabled; therefore, volunteer support, financial support, etc. are required. Artists and scholars need to contribute their own knowledge and provide relevant support for the cultural services and digital exhibitions on the platform.

4.3 Enrolment
Museum managers, as core actors, need to coordinate the interests of the various actors, motivate their enthusiasm and initiative, and maintain the stability of the network. People with disabilities will get a better museum experience and digital culture education, enjoy the right to equal education, and may even see reduced unemployment. Disabled people’s organizations will be given relevant benefits, such as organizational exposure, material replenishment and other policy support. While artists and scholars help provide educational resources, the accessible interactive system will also serve as a platform for them to spread creative culture and ideas.

4.4 Mobilization
This is a maintenance phase that requires constant checks that the actors in the network fully represent all similar actors and are not biased. It must be ensured that the representatives of persons with disabilities can represent typical disabled users and remain neutral, rather than pursuing financial interests, such as being bribed by other organizations or individuals.
5 Conclusion and Future Work

Through a museum barrier-free interactive system, the high-quality cultural resources of the city can be extended to disadvantaged groups. From the perspective of actor network theory, this study explores how to create a field for each actor through translation methods so that they can face the obligatory passage points together. Future research will continue; the current stage is the initial stage of discovering problems and proposing translation methods. Later work will continuously adjust the translation process and apply the translation results to actual production, in order to establish a mobile assistant system for the museum’s barrier-free interactive system.
References

1. Yu, Q.: Research on the barrier-free system of public buildings in Germany. Shenyang Jianzhu University (2012)
2. Zhou, L.: Analysis of digital exhibition and display technology in museums. China Natl. Expo 2016(11), 212–213 (2016)
3. Hudson, A.: Letting the blind man see…: an investigation into methods and an evaluation to improve accessibility in Swedish museum exhibits for the visually impaired (2018)
4. Kim, A.K., Harris, E.: Experiencing momentum through an effective use of technology in museums. In: Shin, C. (ed.) Advances in Interdisciplinary Practice in Industrial Design, vol. 968. Springer, Cham (2019)
5. Callon, M.: The sociology of an actor-network: the case of the electric vehicle. In: Callon, M., Law, J., Rip, A. (eds.) Mapping the Dynamics of Science and Technology. Palgrave Macmillan, London (1986)
Production Management Model to Balance Assembly Lines Focused on Worker Autonomy to Increase the Efficiency of Garment Manufacturing Valeria Sosa-Perez1, Jose Palomino-Moya1, Claudia Leon-Chavarril1, Carlos Raymundo-Ibañez2(&), and Moises Perez3 1
Ingeniería Industrial, Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru {u201410875,u201320716,pcincleo}@upc.edu.pe
2 Dirección de Investigación, Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru [email protected]
3 Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain [email protected]
Abstract. Currently, companies in the garment manufacturing industry have as their main problem delays in their production assembly lines, which generate high downtime. Given this problem, it is proposed to allocate resources correctly by performing a line balance based on the Hoffman model, with the objective of improving the production cycle time, as well as defining a work methodology, selecting good working practices and developing autonomy in workers. As a result of this implementation, workers acquire autonomy in performing their work, new working methods, and new knowledge. After the proposed improvements had been deployed, the company reported an increase of over 25% in production assembly line quality, performance, and efficiency.

Keywords: Assembly line balancing · Empower employees · Garment manufacturing · Hoffman method · Value stream mapping
1 Introduction

In the last decade, the Peruvian economy reported an average annual growth of 4.3% [1, 2]. One of the reasons for this weak growth is that local companies face different problems that mainly affect their productivity, which in turn affects their competitiveness. In this context, MSMEs of clothing manufacturing play an important leadership role, since they represent one of the main non-extractive activities at the national level, accounting for 1.3% of national GDP and 8.9% of manufacturing production in 2014 [3]. It is for this reason that the apparel production sector has become

The original version of this chapter was revised: the author’s name in Reference 5 has been amended. The correction to this chapter is available at https://doi.org/10.1007/978-3-030-39512-4_197

© Springer Nature Switzerland AG 2020, corrected publication 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 733–739, 2020. https://doi.org/10.1007/978-3-030-39512-4_112
734
V. Sosa-Perez et al.
the second most important sector in manufacturing GDP, second only to the precious and non-ferrous metals industry. However, although its production value grew by approximately 14.8% between 2009 and 2014, the textile industry exhibited, from January to August 2017, a cumulative negative growth rate of 6.5%, while in the last 12 months the variation was also negative, at approximately 6.0% [1]. Therefore, the root causes of this problem must be determined. For this reason, this study proposes an improvement model to increase the productive efficiency of the company, which can also serve as an example for other apparel manufacturing companies seeking better production controls. To this end, a line-balancing model was implemented to allocate resources appropriately to each workstation, with adequate training to achieve worker autonomy.
2 State of the Art

2.1 Line Balancing in the Textile and Clothing Sector
According to past research, line balancing is based on grouping activities that follow a work sequence in a production plant - in this case, textile production - in order to achieve maximum use of resources such as labor and equipment and thereby reduce or eliminate downtime. According to the research of Oksuz [4] and Türkmen [5], one of the main objectives is to obtain an adequate balance of the lines, eliminating possible bottlenecks and delays in production. For this purpose, several heuristic models have been used and compared in order to achieve productive efficiency with minimal use of machinery and labor. Two heuristic models are particularly relevant: the Moodie Young model and the Hoffman model. Both seek the greatest efficiency in line balancing; while in Moodie Young the workstations are grouped through a precedence diagram, the Hoffman model additionally considers variables such as machinery usage and a precedence matrix [6]. Likewise, Syahputri [7] and Kayar [8] compare the efficiency and losses of line balancing in textile manufacturing companies in Asia; their results show the Hoffman model outperforming the Moodie Young model in line balance efficiency by more than 15%.

2.2 Worker Autonomy Applied to the Textile and Clothing Sector
Achieving autonomy in workers is a management strategy that allows employees to develop their ability to make their own decisions competently [9, 10]. In fact, one of the studies reviewed developed a visual management model based on manual resources, signalling and experiential learning theory after implementing good work practices. The model was validated through its implementation with 653 employees working in textile production companies. These findings suggest that creating a culture of organizational learning significantly strengthens teamwork relationships and workers’ autonomy in their daily activities.
Production Management Model to Balance Assembly Lines Focused
735
3 Contributions

3.1 Proposed Production Management Model
• Phase 1: Value stream mapping. This diagram shows the company’s production flow, helps to identify lead time, and distinguishes times that add value from those that do not.
• Phase 2: Line balancing, to optimally allocate resources and balance production lines.
• Phase 3: Work standardization, to establish the appropriate work methods.
• Phase 4: Continuous training, to prepare staff for the immediate execution of the various tasks of the position.

Each component added to this production management model not only fosters production efficiency but also implementation sustainability; that is, it guarantees that the implementation will remain operational over time (Fig. 1).
Fig. 1. Proposed production management model
3.2 Indicator of Production Line Losses and Efficiency
These indicators identify the state of production line flows to determine whether there are delays in the assembly lines:

LB = [(n × C − ΣCo) / (n × C)] × 100
LE = (1 − LB) × 100    (1)

where n is the number of workstations, C is the cycle time, and ΣCo is the sum of the standard operation times (in the second formula, LB is taken as a fraction).
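The two indicators can be sketched in code. This helper is illustrative only, not from the paper; the symbol meanings (n workstations, cycle time C, summed operation times ΣCo) follow the standard line-balancing convention assumed here, and the figures plugged in are the paper's pre-improvement numbers.

```python
def balance_loss(n_stations: int, cycle_time: float, sum_op_times: float) -> float:
    """Balance loss LB = [(n*C - sum(Co)) / (n*C)] * 100, in percent."""
    total_capacity = n_stations * cycle_time
    return (total_capacity - sum_op_times) / total_capacity * 100

def line_efficiency(lb_percent: float) -> float:
    """Line efficiency LE = 100 - LB, in percent."""
    return 100 - lb_percent

# Pre-improvement line: 11 stations, 0.98 min cycle time,
# 5.48 min total work content (the sum of the 11 standard times)
lb = balance_loss(11, 0.98, 5.48)
le = line_efficiency(lb)
print(round(lb, 2), round(le, 2))  # 49.17 50.83
```

The output reproduces the 49.17% balance loss and 50.83% efficiency computed in Step 2 of the implementation.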
3.3 Proposed Method for Implementation of Improvement
This section explains the steps required for deploying the production management model, as shown below (Fig. 2).
Fig. 2. Proposed method for implementation of improvement
4 Validation

4.1 Description
The company under study manufactures and markets garments for babies, boys, and girls. Located in Lima, Peru, it was established in 2013, which shows that it is still a developing company (Fig. 3).
Fig. 3. Company facilities under study
4.2 Assessment
As part of the evaluation, the production line that generates the highest sales revenue for the company was identified; for this case study, the production of pants was chosen. Next, the value stream mapping diagram was drawn, establishing that, out of a total delivery time of 13.5 days, 6.5 days did not add value to the production process. These days are associated with the sewing process, which generates frequent bottlenecks at this station. Finally, the main causes of the problem were identified using the seven wastes tool.

4.3 Implementation

Step 1: Before starting to develop the tools, all activities within the pants sewing process must be identified, as well as the standard time of each activity (Fig. 4).

Fig. 4. OPC - pants sewing process (ten operations: prepare label; prepare zippers; sew front and back pieces; sew front pockets; sew back pockets; sew zippers; sew buttonholes; sew belt; sew leather label; reinforcement)
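Returning to the assessment figures (13.5 days of lead time, of which 6.5 add no value), a standard value-stream metric can be derived: the process cycle efficiency, i.e. value-added time over total lead time. The paper reports only the raw days, so this ratio is an illustrative sketch, not a reported result.

```python
lead_time_days = 13.5   # total delivery time from the value stream map
non_value_days = 6.5    # time that adds no value (sewing bottleneck waits)

value_added_days = lead_time_days - non_value_days
pce = value_added_days / lead_time_days * 100  # process cycle efficiency, %
print(round(pce, 2))  # 51.85
```

That is, only about half of the delivery time adds value, which motivates targeting the sewing station first.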
Step 2: After this, the balance loss and efficiency are calculated so that they can be compared with the results after implementation.

LB = (11 × 0.98 − 5.48) / (11 × 0.98) × 100 = 49.17%
LE = (1 − 0.4917) × 100 = 50.83%    (2)
Step 3: Activities with shorter standard times are grouped with other activities so that the combined time does not exceed the largest standard time (Fig. 5), reducing the line from eleven stations to just eight (Fig. 6).

Fig. 5. Grouping matrix (precedence matrix of operations 1-11)

Fig. 6. Activity matrix (the eleven activities, with standard times from 0.2 to 0.98 min, grouped into eight stations whose loads range from 0.5 to the 0.98-min bottleneck)
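The grouping in Step 3 can be approximated in code. The sketch below uses a plain first-fit heuristic against the 0.98-min cycle time; unlike the Hoffman procedure applied in the paper, it ignores the precedence matrix of Fig. 5, so it packs activities more tightly (into seven stations rather than the paper's eight). The activity times are taken from Fig. 6.

```python
# Standard times (min) of the eleven sewing activities, in operation order
TIMES = [0.7, 0.5, 0.2, 0.2, 0.5, 0.4, 0.3, 0.6, 0.6, 0.5, 0.98]
CYCLE = 0.98  # target cycle time = the longest single activity (min)

def first_fit(task_times, cycle):
    """Assign each task to the first station whose load stays within the cycle time.

    NOTE: this deliberately ignores precedence constraints, so it is a
    lower bound on stations, not a faithful Hoffman implementation."""
    stations = []  # each station is [load, [task numbers]]
    for i, t in enumerate(task_times, start=1):
        for st in stations:
            if st[0] + t <= cycle:
                st[0] += t
                st[1].append(i)
                break
        else:
            stations.append([t, [i]])
    return stations

stations = first_fit(TIMES, CYCLE)
print(len(stations))  # 7 (one fewer than the precedence-respecting result)
```

The gap between this bound (7) and the paper's result (8) is the price of the precedence constraints encoded in the grouping matrix.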
Step 4: The efficiency of the new assembly line is calculated and compared with the efficiency before implementation (Fig. 7).
                 Before improvement   After improvement   Difference
Line efficiency  50.83%               76.89%              26.06%

Fig. 7. Before and after results
• The line balancing application eliminates two activities that do not generate value, creating a continuous flow in production, allocating resources successfully, and achieving an efficient workload. This meant an increase in production line efficiency from 50.83% to 76.89%.
References

1. Ministry of Production: Economic Studies Offices (2015). http://demi.produce.gob.pe/estadistica/sectorial
2. Clerk Maxwell, J.: A Treatise on Electricity and Magnetism, vol. 2, 3rd edn, pp. 68–73. Clarendon, Oxford (1892)
3. National Statistics Institute (INEI) (2015). https://www.inei.gob.pe/media/MenuRecursivo/indices_tematicos/pbi_act_econ_n54_kte_2007-2015_1.xlsx
4. Oksuz, M.K., Buyukozkan, K., Satoglu, S.I.: U-shaped assembly line worker assignment and balancing problem: a mathematical model and two meta-heuristics. Comput. Ind. Eng. 112, 246–263 (2017)
5. Türkmen, A., Yesil, Y., Kayar, M.: Heuristic production line balancing problem solution with MATLAB software programming. Int. J. Cloth. Sci. Technol. 28(6), 750–779 (2016)
6. Zhang, L., Narkhede, B.E., Chaple, A.P.: Evaluating lean manufacturing barriers: an interpretive process. J. Manuf. Technol. Manag. 28(8), 1086–1114 (2017)
7. Syahputri, K., Sari, R.M., Rizkya, I., Leviza, J., Siregar, I.: Improving assembly line balancing using Moodie Young methods on dump truck production. In: IOP Conference Series: Materials Science and Engineering, vol. 288, no. 1, p. 012090. IOP Publishing, January 2018
8. Kayar, M., Akalin, M.: Comparing heuristic and simulation methods applied to the apparel assembly line balancing problem. Fibres Text. East. Europe 2(116), 131–137 (2016)
9. Campos, R.Y., Lao, M.N., Torres, C., Quispe, G., Raymundo, C.: Modelo de Gestión del conocimiento para mejorar la Productividad del Talento Humano en empresas del sector manufactura. In: CICIC 2018 - Octava Conferencia Iberoamericana de Complejidad, Informatica y Cibernetica, Memorias, 1, pp. 154–159 (2018)
10. Espinoza, A., Rojas, E., Rojas, J., Raymundo, C.: Methodology for reducing staff turnover in service companies based on employer branding and talent management. In: Smart Innovation, Systems and Technologies, vol. 140, pp. 575–583 (2019)
Rural Ecotourism Associative Model to Optimize the Development of the High Andean Tourism Sector in Peru Oscar Galvez-Acevedo1, Jose Martinez-Castañon1, Mercedes Cano-Lazarte1, Carlos Raymundo-Ibañez2(&), and Moises Perez3 1
Ingeniería de Gestión Empresarial, Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru {u201212253,u201417205,mercedez.cano}@upc.edu.pe
2 Dirección de Investigación, Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru [email protected]
3 Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain [email protected]
Abstract. In Peru, tourism has grown at an average of 5% annually over the last 5 years; the Huancavelica region, however, has seen an average reduction of 2% per year. Castrovirreyna, one of its districts, has 300 monthly arrivals on average, while other locations with a lower HDI (Human Development Index), more tourism resources and higher per capita income have at least twice as many arrivals. This wasted potential contributes to the degradation and neglect of the district’s potential tourist attractions and culture. This research seeks local progress through the development of innovation and the professionalization of enterprises (SMEs) and entrepreneurs, proposing guidelines for tourism use. To achieve this purpose, the implementation of an Associative Ecotourism Model is proposed that links the important agents of this local economy to develop innovative, collaborative and sustainable tourism products, services, packages and projects.

Keywords: Innovation · Associativity · Ecotourism · Entrepreneurship · High Andean SMEs · Sustainability
1 Introduction

Domestic tourism in Peru has been increasing linearly since 2012, with the exception of the Huancavelica region, which has seen a decrease in tourism demand despite its potential for ecotourism, experiential and recreational tourism. The town of Castrovirreyna, in the Castrovirreyna district, is located at 3,950 m above sea level and has 3 tourist attractions recognized by MINCETUR as well as other attractions not yet used for tourism; it is also crossed by 2 economic corridors that carry an average of 500 people a day. The district has an average of 300 visitor arrivals per month, while other locations in the same context or in worse conditions have higher tourist demand.

© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 740–745, 2020. https://doi.org/10.1007/978-3-030-39512-4_113
Rural Ecotourism Associative Model to Optimize the Development
741
A model for the sustainable development of products by different stakeholders in the mountain regions of Austria proposes [1] a solution for correctly developing synergies in the tourism sector, taking advantage of trends such as mountain biking in conjunction with hunters, hikers and landowners to reach a mutual agreement among all parties involved. On the other hand, studies in Peru focused on the region with the largest flow of foreign visitors, Cuzco, examine the dynamics of people in extreme poverty seeking to take advantage of tourism and its relationship with local institutions [2]. However, study and research on tourism management models for the use of natural resources in high Andean rural towns is scarce. The objective of this research is to develop an Ecotourism Management Model that allows the sustainable use of tourism in high Andean rural towns, reduces poverty rates and increases the use of the environment.
2 State of the Art

2.1 Models of Tourism Value Chain in Mountain Regions
The reviewed literature establishes guidelines for this type of model. First, [3, 4] recognize travel organization companies, tour operators, transport companies and tourist services as the fundamental units of the tourism value chain in the rural context; their guidelines focus on initial research to determine a situational diagnosis. In addition, [5] establish, as the final dependent variable and management objective, the development of authentic tourism products based on the identity characteristics of SMEs and the mountain context, all grounded in the management of the tourism value chain and political actors; this was validated by a high degree of correlation with the positive development of the other independent variables. The tourism value chain and other sectors present gaps in knowledge, infrastructure, policy, resources and cultural interpretation [6]; a market approach is a common denominator in these ventures, and the development of this component is another key variable. These companies can only be competitive if they optimize their resources and guide them according to the trends identified among their users and potential users.

2.2 Ecotourism Models of Tourism Value Chain Management in Mountain Regions
Mountain destinations concentrate their potential in the ecotourism resources present in the area (lakes, rivers, snow-capped peaks and wildlife). Approaches to the care and development of the social, environmental and economic environments, as well as the correct development of a technological environment without harming the former, are essential in these contexts [7]. Not only environmental conservation is relevant, but conservation based on the participation, assessments, opinions and perceptions of stakeholders, residents and government [8]. This is addressed by developing the residents’ appreciation of their identity, raising tourists’ awareness and care for the destination, and managing innovation.
742
O. Galvez-Acevedo et al.

2.3 Associative Ecotourism Models of Tourism Value Chain in Mountain Regions
Ecotourism in rural locations has been a trend related to globalization since the 1990s, and it brings greater needs and more detailed requirements, such as green areas, zero pollution and open spaces; in the case of high Andean rural towns, visitors expect the natural tranquility that is very unlikely to be found in urban areas such as cities and large towns. The entrepreneurs of these rural locations seek to become more competitive through the exchange of information and other resources, and so they tend to link up informally with each other; therefore, a formalized alliance of tourism entrepreneurs could grant greater and more competitive advantages [9].
3 Methodology

The model is based on the evaluation of the economic, social, environmental and political context of high Andean towns. Small transport companies, lodgings and restaurants are recognized as the driving actors of local economic dynamics. These are associated, under a voluntary and collaborative approach, and are managed through 5 components: Processes, Sustainability, Innovative Leadership, Organizational Culture and Business Development. Together, these approaches contribute to the progress of the association. Organizational Culture is recognized as one of the main elements to
Fig. 1. Associative Ecotourism Model of tourist value chains of high Andean towns in Peru
face in rural contexts with high resistance to change and deep roots in traditional working methods. The ultimate goal is formalization, quality improvement and the creation of innovative products and services in these remote rural locations, to facilitate and optimize tourism use while taking the sustainability approach into account. See Fig. 1.

3.1 Organizational Culture
This component seeks to modify the behavior patterns and ways of thinking of a human group in order to guide them toward patterns more fertile for the development and implementation of new methodologies or approaches. It follows a cycle whose initial step is the analysis of the environment in which the human group or organization develops; it continues with the definition of strategies to face the environment in pursuit of the objectives; then with the development of a culture that embodies the vision of the proposed strategies; and finally the results are communicated to promote improvement and the adherence of more actors. The application starts from a diagnostic that follows these phases. Based on the context, strategies are defined in order to develop competitive advantages for both the locality and the enterprises.

3.2 Business Development
This component maps the need for knowledge of adequate business management oriented to SMEs. Taking into account the difficulties of the human group involved, it seeks to teach 3 basic areas for the development of small businesses: Commerce and Marketing, Finance, and Human Resources. The first includes the creation of customer value, pricing strategy, the business model and tools. The second includes a company’s value, the balance sheet, the income statement, financial efficiency indicators and investment decisions. The third covers the selection of personnel, functions and coordination, goal evaluation, staff training, payments and business identity. In this way, each SME is strengthened both within the association and individually.
4 Results

To validate the proposed model, a case study was developed in the town of Castrovirreyna, Huancavelica region, located at 3,900 m above sea level. An average of 100 people a day cross this town via the Huancavelica-Ica economic corridor, and it receives an average of 300 monthly tourism-related arrivals. It has 3 ecotourism resources recognized by MINCETUR and another 5 awaiting recognition; however, it does not offer any product or service focused on ecotourism. Only 50% of the SMEs in the tourism chain have operating licenses and formal registrations, and their services and products have a satisfaction rating of only 2 on a scale of 1 to 5, as assessed by an initial survey of 350 people. Additionally, an evaluation of the organizational culture of the SMEs with a variant of the OCI tool determined that they present “Defensive Passive” and
O. Galvez-Acevedo et al.
“Defensive Aggressive” styles, which represent a culture negative for the development of creativity, innovation, formalization, and the creation of quality products; finally, promotion of the tourism resources is nonexistent. Once the case study was characterized, a pilot test was proposed with the purpose of implementing the model and validating 3 of its 5 components; the test had a duration of 8 months, a budget of $664, and developed only the first 3 phases. Three evaluations of the organizational culture were carried out; at the end of the eighth month the third evaluation was executed to assess the impact on the local culture. It shows that the constructive style presents an average score increase of 1.0 points, while the Defensive Passive style shows a reduction of 0.6 points and the Defensive Aggressive style a reduction of 0.4 points, all compared to the first evaluation. See Fig. 2.
Fig. 2. Third evaluation of Organizational Culture
This implies an increase in the constructive style, which we consider positive. After 8 months of implementation, with a budget execution of 70%, 8 field visits, and 2 specialists in charge of the analysis, the following results were obtained. In addition, a second performance evaluation focused on the leader and president of the association was carried out after 6 months in the position; it shows passing grades in all 5 competences evaluated. Both organizational culture and leadership show a direct relationship with the proper functioning of the association.
5 Conclusions The impact of the 1.0-point increase in the constructive cultural style and the reduction of the passive and aggressive styles through the Organizational Culture component is fundamental in paving the way for the implementation of new work methodologies. Leadership Development based on the competences Communication (score 4.31), Leadership (4.03), Teamwork (4.05), Responsibility and Quality (4.00), and Innovation (4.3) directly favored innovation within the association and within each SME; this shows a direct relationship between the change in organizational culture and the number of innovative products and services.
Rural Ecotourism Associative Model to Optimize the Development
The 12 processes implemented, with their respective indicators, and the process approach contributed to the consolidation of the association's collaborative culture and were also the starting point for volunteer work aimed at improving and signposting trekking routes. The development, prototyping, testing, and launch of 4 local tourism products was achieved. These have already generated an economic impact equivalent to 1,700 dollars with the direct intervention of only 3 SMEs, 8 people involved, and 4 days of work, which strengthens the local cultural change. Although the “Business Development” component is still under implementation, the associates already show interest in the professionalization of their positions within their SMEs, strengthening the orientation towards the formalization of local businesses. From the validation of the “Organizational Culture”, “Process Management”, and “Innovative Leadership” components and their direct relationship, it can be concluded that the case study has been positively impacted in its development by the implementation of the model, as verified through both economic and social indicators, and that it meets the proposed objectives of quality improvement, formalization, and development of innovative tourism products and services.
References
1. Pröbstl-Haider, U., Lund-Durlacher, D., Antonschmidt, H., Hödl, C.: Mountain bike tourism in Austria and the Alpine region – towards a sustainable model for multi-stakeholder product development (2017)
2. Knight, D.W.: An institutional analysis of local strategies for enhancing pro-poor tourism outcomes in Cuzco, Peru. J. Sustain. Tour. 26, 631–648 (2017)
3. Hjalager, A.-M., Tervo-Kankare, K., Tuohino, A.: Tourism value chains revisited and applied to rural well-being tourism. Tourism Planning & Development (2016)
4. Ramírez, R.C., Plascencia, J.M.O.: Modelo de cadena productiva para la gestión del turismo en contextos rurales, una propuesta desde la economía territorial. Caso de aplicación en Comala, Colima (2016)
5. Aguilar, Y., Gordillo, A., Quispe, G., Dominguez, F., Moguerza, J.M., et al.: Tourism cluster implementation model in small cities of developing countries. In: CISCI 2018 - Decima Septima Conferencia Iberoamericana en Sistemas, Cibernetica e Informatica, Decimo Quinto Simposium Iberoamericano en Educacion, Cibernetica e Informatica, SIECI 2018 – Memorias, vol. 1, pp. 26–31 (2018)
6. Zapata, G., Murga, J., Raymundo, C., Dominguez, F., Moguerza, J.M., Alvarez, J.M.: Business information architecture for successful project implementation based on sentiment analysis in the tourist sector. J. Intell. Inf. Syst. 53, 563–585 (2019)
7. Hjalager, A.-M., Kwiatkowski, G., Larsen, M.Ø.: Innovation gaps in Scandinavian rural tourism. Scandinavian Journal of Hospitality and Tourism (2017)
8. Junio, K.S.: The structural relationships of destination image, awareness, uniqueness and destination loyalty in a periurban ecotourism destination (2016)
9. Grimstad, S., Burgess, J.: Environmental sustainability and competitive advantage in a wine micro-cluster. Management Research Review, vol. 37, no. 6, pp. 553–573. https://doi.org/10.1108/mrr-01-2013-0019
Picking Management Model with a Focus on Change Management to Reduce the Deterioration of Finished Products in Mass Consumption Distribution Centers

Lourdes Canales-Ramos1, Arelis Velasquez-Vargas1, Pedro Chavez-Soriano1, Carlos Raymundo-Ibañez1(&), and Moises Perez2

1 Universidad Peruana de Ciencias Aplicadas (UPC), Lima 15023, Peru
{u201411063,u201412345,pedro.chavez,carlos.raymundo}@upc.edu.pe
2 Escuela Superior de Ingeniería Informática, Universidad Rey Juan Carlos, 28933 Móstoles, Spain
[email protected]
Abstract. The process of assembling an order, or picking, in distribution centers is manual in most cases. Past studies have shown the relationship between operator performance and a work method designed based on ergonomics rules. However, the management of change when implementing a picking management model has not been evidenced until now. Therefore, this research study proposes a picking management model that aligns the elements of change management with those of a work study, together with a five-phase validation method that covers the whole picking process. Validation presents a case study in which the most relevant research result is the reduction of damaged inventory from 4% to 1.5%, along with other evaluation indicators. Finally, conclusions from the analysis of results based on the validation of the model within a distribution center are presented. Keywords: Order picking model · Change management · Systems engineering · Warehouse · Inventory management · Ergonomics · Manual picking
1 Introduction By 2016, logistics costs represented 34% of the total cost of sales in Peru [1]. This index places the country 10% above the Latin American average. Consequently, experts in supply chain management began to investigate the causes of logistics costs in the country. Their research found that 30% of these costs originated in distribution centers and that, within this 30%, 60% corresponded to the cost of labor. Further research detected that 42% of the workforce was devoted to picking operations [2]. This means that the main cause of the high logistics costs in the country is manual picking operations in distribution centers. The causes identified by the authors are © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 746–752, 2020. https://doi.org/10.1007/978-3-030-39512-4_114
worker exhaustion [3] and the research gap in technology to replace labor that could convert picking into an automatic system [2]. The problem is noteworthy because of the consequences of high logistics costs: the logistics performance index of Peru is directly affected by this factor. To improve service quality, an increase in the productivity of warehouse processes and an improvement in inventory handling are sought. In countries such as India, advances in productivity and the creation of new manual picking methods have helped increase competitiveness [4, 5]. Different authors around the world have evaluated the problem in warehouse case studies, identifying opportunities for improvement by focusing on work study. The motivation for this research is to create a picking management model focused on change management. This approach will allow us to reduce damaged inventory in the picking process of distribution centers and to involve the staff in the improvements by raising awareness of the importance of the issue, proposing solutions, taking final performance measurements, and extending a culture of continuous improvement equally to the rest of the operations. The picking management model to be designed, based on the work study process, will include procedures to diagnose the current situation of the picking process, eliminate repetitive activities, redesign the workplace on the basis of ergonomic rules, hold training events, and carry out audits to guarantee continuity. Indicators will also be presented to measure the performance of the proposal.
2 Review of Literature 2.1
Picking Management Models
Over time, picking management models have evolved to obtain solutions that benefit distribution centers to the greatest extent. Several authors proposed models based on the management of the picking process, with the study of methods as the main tool [4, 5]. Other authors presented a model based on the reduction of logistics costs through the elimination of losses; the tool used was value stream mapping for the redesign of the operational layout [6]. Two years later, a model based on human aspects was built, measuring human factors such as job satisfaction, as well as ergonomics in the picking process [3]. The main results of the different models indicate that productivity can be increased by more than 15% [4] and that logistics costs can be reduced by 4% [6]. 2.2
Change Management
In distribution centers whose members have been working for more than 5 years and are between 30 and 45 years old, it is important to maintain a change management approach when improving the process they have been carrying out [3, 7]. Several authors mention the importance of considering what the staff think about the proposal, as well as understanding the impact of the problem. Likewise, the
inclusion of human factors must be considered in order to promote the comfort and satisfaction of workers [8]. With respect to human factors, workload balancing is proposed as a technique for not requiring employees to work beyond their limits, with different methods to be applied by supervisors. It is necessary to use the workload index to plan the amount of resources efficiently and without harming employees [7]. Finally, authors also mention the need to reward employees for doing their work in the standardized way and when they stand out from fellow workers. To this end, they propose a monthly non-monetary recognition program, assigning different indicators for the optimization of the process [8]. Among the main results obtained with change management was a 35% decrease in staff turnover [7]. In addition, an increase in employee satisfaction to more than 40% was achieved, compared with the previous average of 20% [8].
3 Contribution 3.1
Proposed Model
The picking management model (Fig. 1) has a change management approach as part of the strategy and will support the management of the work method, the workplace, the personnel, and the performance of the picking process.
Fig. 1. Picking management model with a focus on change management
3.2
Proposed Method
To implement the model, there are five phases (Fig. 2), which are aligned to the elements of the work study and change management.
Fig. 2. Proposed method to apply the model
3.3
Model Indicators
As part of the implementation of the model, the results obtained must be organized to evaluate the performance of picking management. The definition of indicators is part of performance and impact management, both components of the picking management model (see Fig. 1). The proposed indicators are as follows: • Percentage of damaged products and value added: the purpose of the first indicator is to measure the level of damaged inventory, with an objective of 1%; the goal for value added is 80%. • Efficiency: the indicator measures the efficiency of the picking process by means of the number of boxes collected in the target time. • OWAS rating: measures the ergonomic risk of the pickers' postures during operations. A reduction of this index to 2 is proposed. • Percentage of overtime cost: the objective is an overtime cost of 10%.
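As an illustration, the indicators above can be computed from shift-level figures. The numbers below are hypothetical, and the efficiency formula (actual pick rate over a standard rate) is an assumption, since the paper does not give its exact definition:

```python
# Hypothetical shift figures (not from the paper) used to evaluate the indicators.
boxes_picked, boxes_damaged = 2000, 35
standard_rate, actual_rate = 10.0, 11.0   # boxes per minute (assumed definition)
overtime_cost, total_labour_cost = 1800.0, 15000.0

damaged_pct = 100 * boxes_damaged / boxes_picked        # target: 1%
efficiency = actual_rate / standard_rate                # target: 1.1
overtime_pct = 100 * overtime_cost / total_labour_cost  # target: 10%

print(f"damaged={damaged_pct:.2f}% efficiency={efficiency:.2f} overtime={overtime_pct:.1f}%")
```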
4 Validation 4.1
Case Study
The scenario for the execution of the proposal is aimed at distribution centers for mass consumption in the district of Cercado de Lima, in the city of Lima, Peru. The staff includes 26 men and 14 women, with ages ranging from 35 to 55 years (Table 1).

Table 1. Initial values of the model.
Indicator               Initial value   Target value
% of damaged products   4%              1%
% of value added        65%             80%
Efficiency              0.8             1.1
Time                    255 min         190 min
% of overtime           22%             10%
OWAS rating             4               2

4.2
Application of the Contribution
Phase 0: Change Management. A session was held with the personnel involved in the process, in which the diagnosis, the proposal, and the expected impact were presented. Two types of surveys were applied. The change management survey showed type “A” questions with scores greater than 40, which means that the pickers felt exposed to great physical effort in the operation. For type “B” questions, the scores did not exceed the established value, which means that the pickers feel able to do the job but are not sure of their ability to improve the current operation. The second survey showed whether the session was able to raise awareness among the pickers. These are the results: 69% said that the process could be carried out in a shorter time (some gave suggestions on how to do it); 54% said that they did not have enough strength to carry out their work; and 100% replied that they agreed with the picking management model, also offering suggestions related to communication, tolerance, and teamwork. This information was used to reinforce the phases. At the end of the session, the responsibilities of the supervisor in each of the phases were defined so that supervisors could be involved in the proposal. Phase 1: Definition of the Work Methodology Description of the Work Procedure. The matrix of value-added activities, better known as the VAA matrix, was prepared. A follow-up of 60 picking lists, divided into three work shifts, was carried out, considering both men and women. An increase in the value-added index from 65% to 91% was obtained through the elimination of activities that were neither necessary nor value-adding to the order picking process.
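The VAA-matrix calculation can be sketched as follows. The activity list, times, and VA/NVA classification below are invented for illustration; only the idea of classifying each activity as value-added (VA) or non-value-added (NVA) and computing the value-added share of total time comes from the text:

```python
# Hypothetical activity log for one picking list; times in seconds.
# The VA/NVA classification is assumed for illustration only.
activities = [
    ("travel to location", 40, "NVA"),
    ("pick and load boxes", 120, "VA"),
    ("verify label", 20, "VA"),
    ("wait for equipment", 15, "NVA"),
]

total_time = sum(t for _, t, _ in activities)
va_time = sum(t for _, t, kind in activities if kind == "VA")
value_added_index = 100 * va_time / total_time
print(f"value-added index = {value_added_index:.1f}%")
```

Eliminating the NVA rows and recomputing the index is what moves the figure from 65% toward the reported 91%.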
Integration of Ergonomic Factors. Ergonomic guidelines were included in the new picking procedure, specifically covering the postures required for loading and unloading boxes on the order pallets. The following four ergonomics activities were established: risk management, training in good practices, active breaks, and audits. Definition of Working Time. The optimal working time was obtained with the MOST tool using variables such as product weight (30 kg), picker height (1.75 m), height of the location (1.4 m), and final height (0.3 m). This time considers four time-demanding activities: moving the equipment to a location, loading boxes, lowering boxes, and moving from the equipment to a location.
5 Results and Conclusions The percentage of damaged products was reduced to 2%, and the percentage of overtime decreased to 16%; the remaining gap against the targets is due to associated causes beyond the reach of the model, such as the intervention of other types of equipment in the area and the width of the corridors. The combination of work study techniques with a change management approach produced favourable results for the case study and allowed an improvement in the work environment, as employees said they felt involved and valued (Table 2).

Table 2. Final results of the model.
Indicator               Initial value   Target value   Final value
% of damaged products   4%              1%             2%
% of value added        65%             80%            91%
Efficiency              0.8             1              1.1
Time                    255 min         190 min        115.79/130.89 min
% of overtime           22%             10%            16%
OWAS rating             4               1              2
References
1. Carlos Paz, J.: Sustentación ante la Comisión de Comercio Exterior Competencias para Plataformas Logísticas, Lima (2017)
2. Mayorga, H.: http://logistica360.pe/los-costos-logisticos-en-la-cadena-de-suministro-en-el-peru-y-como-reducirlos-usando-la-automatizacion/
3. Harari, Y., Riemer, R., Bechar, A.: Factors determining workers' pace while conducting continuous sequential lifting, carrying, and lowering tasks. Appl. Ergon. 67, 61–70 (2018)
4. Singh, M.P., Yadav, H.: Improvement in process industries by using work study methods: a case study. Int. J. Mech. Eng. Technol. 7(3), 426–436 (2016)
5. Biswas, S., Chakraborty, A., Bhowmik, N.: Improving productivity using work study technique. Int. J. Res. Eng. Appl. Sci. 6(11), 49–55 (2016)
6. Malashree, P., Sahebagowda, M., Gaitonde, V.N., Kulkarni, V.N.: An experimental study on productivity improvement using work study and ergonomics. Int. J. Darshan Inst. Eng. Res. Emerg. Technol. 7(1), 31 (2018)
7. Riquero, I., Hilario, C., Chavez, P., Raymundo, C.: Improvement proposal for the logistics process of importing SMEs in Peru through lean, inventories, and change management. In: Iano, Y., Arthur, R., Saotome, O., Vieira, E.V., Loschi, H. (eds.) Smart Innovation, Systems and Technologies, vol. 140, pp. 495–501. Springer, Cham (2019)
8. Campos, R.Y., Lao, M.N., Torres, C., Quispe, G., Raymundo, C.: Knowledge management model to improve the productivity of human talent in companies of the manufacturing sector. In: CICIC 2018 - Octava Conferencia Iberoamericana de Complejidad, Informatica y Cibernetica, Memorias, vol. 1, pp. 154–159 (2018)
Risk Factors Associated with Work-Related Low Back Pain Among Home-Based Garment Workers

Sunisa Chaiklieng1,2(&), Pornnapa Suggaravetsiri3, and Sari Andajani4

1 Department of Occupational Health and Safety, Faculty of Public Health, Khon Kaen University, Khon Kaen, Thailand
[email protected]
2 Research Center in Back, Neck, Other Joint Pain and Human Performance (BNOJPH), Khon Kaen University, Khon Kaen, Thailand
3 Department of Epidemiology and Biostatistics, Faculty of Public Health, Khon Kaen University, Khon Kaen, Thailand
4 Faculty of Health Science and Environment, Auckland University of Technology, Auckland, New Zealand

Abstract. This cross-sectional study investigated the prevalence of work-related low back pain (LBP) and its associated risk factors among 446 home-based informal garment workers in the northeast of Thailand. The results from structured interviews indicated that the six-month prevalence of LBP was 44.39% (95% CI = 39.67–49.02). The significant risk factors for LBP included no regular exercise (ORadj = 1.6; 95% CI = 1.01–2.52), family members with LBP (ORadj = 2.19; 95% CI = 1.90–2.37), working ≥8 h per day (ORadj = 1.6; 95% CI = 1.98–2.61), repetitive movement (ORadj = 1.84; 95% CI = 1.12–2.99), prolonged sitting >2 h (ORadj = 2.65; 95% CI = 1.62–4.34), and no change of posture each hour (ORadj = 1.97; 95% CI = 1.18–3.30). It is concluded that no regular exercise, the repetitive nature of the work, prolonged sitting, and the behavior of not changing posture each hour are contributing risk factors for LBP. In order to prevent LBP in these home-based workers, ergonomics training and workers' health promotion should be routinely implemented. Keywords: Back pain · Ergonomics · Informal garment workers · Prevalence
1 Introduction Thailand is an industrially developing country in which informal sector workers make up the majority of the workforce (31.18 million, 55.32%), mostly located in the northeast of the country (19.9 million, 35.91%), compared with 12.6 million in the central region, 11.5 million in the north, and 7.4 million in the south [1]. Textile/garment production is a major activity, carried out by nearly 6.1 million informal workers, mostly operating as small-scale enterprises or as home-based work, with 7.5% of the total product coming from the northeast of Thailand [1]. The work activities of garment workers involve the adoption of single or combined intensively static postures, prolonged sitting, and repetitive work, resulting in muscular discomfort [2]. © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 753–759, 2020. https://doi.org/10.1007/978-3-030-39512-4_115
Work-related musculoskeletal disorders (WMSDs) are the most reported health problem and a serious health condition among Thai workforces, involving a combination of painful disorders of muscles, tendons, and nerves induced by work activities. The highest-prevalence MSD among industrial textile/garment workers in Thailand has been low back pain (30.7%) [3]. A case study of informal garment workers in Khon Kaen province, in the Northeast region (NE) of Thailand, revealed that low back pain ranked first among MSDs [4]. Our previous studies suggest that individual characteristics and working characteristics play a role in low back pain. Primary studies conducted in different provinces of Northeast Thailand among informal garment workers have contributed to current knowledge of the influence of inappropriate workstations (seat and table) and work environment conditions on shoulder pain [5] and discomfort [2]. Studies on low back pain among informal garment workers in NE Thailand are limited: only one case study, in one district of Khon Kaen province, had been conducted on the prevalence of back pain [4]. Loss of working time due to sickness absence among Thai garment factory workers [3] and the cost of long-term treatment have affected the income of those workers and small businesses, as well as the gross national product [6]. Home-based garment workers, however, are not very visible, as they work in their own homes, and studies conducted with informal home-based garment workers remain rare. This study aimed to fill this research gap by examining the prevalence of low back pain and its associated risk factors among home-based garment workers in NE Thailand.
2 Materials and Methods 2.1
Study Area and Subjects
This study was designed as a cross-sectional study to examine the prevalence of LBP and to investigate the associated risk factors of LBP among home-based garment workers in NE Thailand. The sample size calculation and inclusion criteria followed our previous report on shoulder pain and its risk factors among informal garment workers in NE Thailand [5]. There were 446 home-based garment workers who met the inclusion criteria, and all subjects gave informed consent before entering the study. Prior ethics approval for the study was obtained from the Khon Kaen University ethics committee, Thailand (no. HE542131). 2.2
Data Collection and Analysis
Data were collected by face-to-face interviews with the structured questionnaires used in the previous study [5]. The questionnaire had four parts: part 1 covered demographic characteristics and work experience; part 2 covered health status, health behaviors, and history of low back pain amongst
family members. Part 3 enquired about work characteristics, work behaviors, and the workstation (seat and table). Part 4 covered the perception of low back pain, self-assessed by the subjects, for onset of LBP symptoms occurring during working time in the past seven days or the past six months up to data collection. If the pain lasted longer than the work period, or persisted for at least 24 h, it was also counted as work-related LBP. Data were analysed using STATA version 10. Descriptive statistics were used to describe the characteristics of the workers, their work conditions, and their work behaviors. The prevalence of work-related low back pain was computed as (number of LBP cases during the past 7 days or the past 6 months × 100)/446. A simple logistic regression analysis examined the risk factors for LBP in the past 6 months. Factors with a p-value of less than 0.20 were selected as candidate variables for a multiple logistic regression analysis. The confounding factors, which were age, gender, and work experience, were always included in the models. Significant risk factors were screened out in a backward stepwise manner using likelihood ratio tests as selection criteria. The odds ratio (OR) and the adjusted odds ratio (ORadj) with 95% confidence interval (95% CI) are presented, and a p-value of less than 0.05 was considered statistically significant.

Factor                        n     LBP n (%)     No LBP n (%)
Prolonged sitting >2 h
  No                          337   124 (36.80)   213 (63.20)
  Yes                         109   74 (67.89)    35 (32.11)
Change of posture each hour
  No                          255   100 (52.36)   91 (47.64)
  Yes                         191   98 (38.43)    157 (61.57)
Seat with backrest
  No                          208   100 (48.08)   108 (51.92)
  Yes                         238   98 (41.18)    140 (58.82)
* Significant at p-value < 0.05

I2 > I1, since towards the end of the test the attentiveness weakens and the subject tends to finish the task more quickly, which means that D < 1. Time can be considered as the fourth parameter; however, this parameter is very variable. Here one needs to keep in mind that the first time the task is completed, the time will always be longer, since the subject is simply getting acquainted with the situation.
The execution time can be taken into account when the number of tests performed is more than 2–3, and even then with great care. Here is an example (Fig. 1) of one such task, executed on paper and processed by the old method (ImageJ) [3].
Fig. 1. The pattern for connecting dots.
Two stages of the drawing are visible. First, the subject incorrectly assessed the direction to the finish point, then corrected it, searching along the way, and completed the task only at the end. Below, the calculation of the coefficients still follows the old method. 2.2
The Second Group of Tests
These are tests for repeating shapes (Fig. 2 shows such an example).
Fig. 2. An example of a test for repeating shapes based on an open path.
The Initial Stage of Development of a New Computer Program
Shapes can be between lines, in a frame, or without them. At the beginning of the test, a figure is displayed that needs to be repeated a certain number of times. These are more complex tests that require the use of complex matrix algorithms close to artificial intelligence. One of the main questions here is whether the task can be considered completed if the figure is similar to the original but has a different scale, smaller or larger. If geometric similarity is acceptable, then image similarity algorithms can be decisive, for example the same Pearson cross-correlation. However, if the violation of scale must also be considered an error, then the simple method shown in Fig. 3 can give better results.
Fig. 3. A simple technique for comparing images based on open paths.
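A minimal sketch of the Fig. 3 comparison is given below. It rests on assumptions the text does not state: both paths are resampled to the same number of points, alignment is a plain translation of the starting point (no scaling), and the deviation measure shown is a root-mean-square distance rather than the S1/S2 error fields themselves:

```python
import math

def align_start(path, target_start):
    """Translate the whole drawn path so its first point coincides with the
    template's starting point (no scaling, as described for this method)."""
    dx = target_start[0] - path[0][0]
    dy = target_start[1] - path[0][1]
    return [(x + dx, y + dy) for x, y in path]

def rms_deviation(template, drawn):
    """RMS of pointwise distances between corresponding samples."""
    assert len(template) == len(drawn)
    sq = [(tx - px) ** 2 + (ty - py) ** 2
          for (tx, ty), (px, py) in zip(template, drawn)]
    return math.sqrt(sum(sq) / len(sq))

# Toy data: a straight template and a slightly wobbly drawn curve.
template = [(0, 0), (1, 0), (2, 0), (3, 0)]
drawn = [(5, 5), (6, 5.2), (7, 4.9), (8, 5.0)]

aligned = align_start(drawn, template[0])
print(round(rms_deviation(template, aligned), 3))  # → 0.112
```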
In this case, after the test is completed, the picture is not scaled; instead, the starting point of the curve is simply aligned with the starting point of the template, and the endpoint of the curve is matched to the endpoint of the template. The standard deviations and error fields S1 and S2 are calculated according to the procedure described above for the first parameter of the linear test. More complex figures can only be compared using matrix methods based on artificial intelligence technologies. I will dwell on them in the next article. Acknowledgments. The article was written with the financial support of European Regional Development Fund project Nr. 1.1.1.5/18/I/018 “Pētniecības, inovāciju un starptautiskās sadarbības zinātnē veicināšana Liepājas universitātē”.
References
1. Hammill, D.D., Pearson, N.A., Voress, J.K.: Measures of visual perception and visual-motor integration. In: Developmental Test of Visual Perception, 2nd edn (1993)
2. Schwartz, S.: Visual Perception: A Clinical Orientation, 4th edn, pp. 229–241. McGraw Hill Companies, New York (2009)
3. Turlisova, J.: Master's thesis. Eye–hand coordination interaction with visual functions (Acu – rokas koordinācijas saistība ar redzes funkcijām), 13 June 2018
4. Turlisova, J., Jansone, A.: E-studies and mastering of educational material for people with visual perception and visual–motor integration problems – topical issues and perspectives. In: Proceedings of ICLEL 2018 (2018)
Experimental Study on Dynamic Map Information Layout Based on Eye Tracking

Jiapei Ren, Haiyan Wang(&), and Junkai Shao

School of Mechanical Engineering, Southeast University, Nanjing 211189, China
[email protected], [email protected], [email protected]
Abstract. In the GIS system interface, the map module occupies more than 90% of the interface area, which makes it the focus of GIS interface design. There are a large number of targets, target signs, target trajectories, and other pieces of information on the map. The layout of this information is crowded and overlapping, making it difficult for users to search for and identify information. The purpose of this paper is to explore the impact of different presentation forms of target signs on user search efficiency. An experiment was designed with 8 levels of overlap and 4 target sign layouts. The paper analyzes the subjects' search performance and gaze plots based on eye tracking data. Keywords: Map information layout · Eye movement experiment · Map cognitive experiment · Dynamic layout
1 Introduction The Geographic Information System (GIS) situational system is based on digital maps and displays geographical environment information, target motion situation, resource allocation, and other related data in real time [1]. However, the excess of data makes the information density of the map interface high and the layout of the situation map crowded. Frequent overlap makes it difficult for users to quickly find the information they want. The situation map needs to solve the problem of information presentation through a dynamic, interactive layout. Visual search is an important part of visual information acquisition and information processing. Eye tracking technology is widely used in web interface design, map usability research, and related fields. Nielsen [2] concluded from eye movement experiments that users' attention distribution tends to follow an “F” pattern when browsing the web. In 2008, Owens [3] used eye movement to study subjects' search efficiency at different locations on a web page. Jing [4] used eye tracking technology to find that layout provides cognitive guidance for the information displayed on a numerical control interface.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1238–1243, 2020. https://doi.org/10.1007/978-3-030-39512-4_189
2 Overlap Rate The overlap rate refers to the ratio of the area of overlap between graphics to the total area of the graphics. As shown in Fig. 1, rectangles are randomly distributed in the interface and overlap each other. The overlap area is the interface pixel area minus the sum of the white (color value #ffffff) pixel area and the sum of the light gray (#bdbdbd) pixel area. The total area of graphics is the interface pixel area minus the sum of the white (color value #ffffff) pixel area.

Target overlap rate = overlap area / total area of graphics    (1)
Fig. 1. Overlap rate calculation case
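Equation (1) can be computed directly from a rasterized interface. The sketch below assumes each pixel has already been classified by color: white (#ffffff) is empty background, light gray (#bdbdbd) is graphic area covered exactly once, and any other color marks overlapping graphics (color values taken from the Fig. 1 description; the pixel-classification step itself is not shown).

```python
WHITE = "#ffffff"
LIGHT_GRAY = "#bdbdbd"

def overlap_rate(pixels):
    """pixels: iterable of hex color strings, one entry per interface pixel."""
    total = white = gray = 0
    for p in pixels:
        total += 1
        if p == WHITE:
            white += 1
        elif p == LIGHT_GRAY:
            gray += 1
    graphics_area = total - white        # all pixels covered by any graphic
    overlap_area = graphics_area - gray  # pixels covered by two or more graphics
    return overlap_area / graphics_area if graphics_area else 0.0

# Toy 10-pixel interface: 4 white, 4 single-coverage, 2 overlapping pixels
demo = [WHITE] * 4 + [LIGHT_GRAY] * 4 + ["#5a5a5a"] * 2
print(round(overlap_rate(demo), 3))  # 0.333
```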
3 Materials and Methods

3.1 Objective and Participants
There is a large amount of information on the map, and the various pieces of information overlap each other, making it difficult for users to search for and identify them. This paper explores the influence of different sign layouts on users' search efficiency under different graphic overlap rates. A total of 22 participants from the School of Mechanical Engineering were enrolled, aged 22 to 27, with a male-to-female ratio of 1:1.

3.2 Design
This experiment uses a two-factor, multi-level design. Variable 1 is the graphic overlap rate, with eight levels (0, 1.6%, 4.9%, 7.5%, 10.6%, 13.5%, 16.9%, 19.7%). Variable 2 is the sign layout, with four kinds: the sign presented below the target (layout 1); the sign connected to the target by a line without alignment (layout 2); the sign connected to the target and aligned left and right (layout 3); and the sign connected to the target and aligned up and down (layout 4). The experimental material is shown in Fig. 2.
1240
J. Ren et al.
Fig. 2. Different overlap rates and layout
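The two-factor design above (8 overlap-rate levels × 4 sign layouts) yields 32 conditions. A simple randomized trial list can be built as follows; the paper does not state its exact ordering scheme, so the shuffling here is an assumption:

```python
import itertools
import random

overlap_levels = [0.0, 0.016, 0.049, 0.075, 0.106, 0.135, 0.169, 0.197]
layouts = [1, 2, 3, 4]

# Full factorial crossing of the two variables, presented in random order
conditions = list(itertools.product(overlap_levels, layouts))
random.shuffle(conditions)
print(len(conditions))  # 32
```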
3.3 Procedure
At the beginning of the experiment, the computer screen presents the experimental instructions; after reading them, the participant presses the space bar to enter the experimental stage. First, a fixation cross appears in the center of a blank screen, and then the material picture appears. After finding the rounded rectangle, the participant checks the corresponding number and enters its last two digits on the keyboard. The inter-stimulus interval is 1000 ms to eliminate visual persistence; the entire experimental procedure took about 12 min.
4 Results

4.1 Behavior Data
4.1.1 Descriptive Statistics
The statistical results are as follows: the average correct rate exceeds 90%, indicating that the participants maintained a high level of accuracy. Layout 3 has the shortest reaction time and layout 1 the longest; from short to long: layout 3 < layout 2 < layout 4 < layout 1. Overlap rate level 1 (0) has the shortest reaction time and level 8 (19.7%) the longest (Table 1).

Table 1. Descriptive statistics of behavior data.
Layout                1       2       3       4
Correct rate (avg)    0.9549  0.9792  0.9948  0.9818
Reaction time (avg)   2.770   2.628   2.371   2.670
4.1.2 Analysis of Variance
Since the differences in correct rate are small, no further interaction analysis was carried out on it. For reaction time, the overlap rate (P < 0.001), the layout (P = 0.004), and the overlap rate × layout interaction (P < 0.001) all differ significantly.
Comparing layout 3 with layout 1 gives p = 0.003, with layout 2 p = 0.004, and with layout 4 p = 0.001. Layout 3 thus differs significantly from all other layout forms, with a lower reaction time; there is no significant difference among the other three layouts. Overlap rate level 8 differs significantly from the other overlap rate levels (except level 2), with a higher reaction time. Overlap rate level 1 differs significantly in reaction time from levels 2, 3, 6, 7, and 8, with a lower reaction time (Table 2).

Table 2. Analysis of variance of reaction time in different layouts.
(I) Layout  (J) Layout  Standard error  Sig
3           1           .092            .003
3           2           .062            .004
3           4           .061            .001

4.2 Eye Movement Data
4.2.1 Descriptive Statistics
Fixation count is the number of gaze points in an area of interest (AOI) or AOI group; total fixation duration is the sum of the durations of all gaze points in an AOI. Layout 3 has the fewest fixations and layout 4 the most; from small to large: layout 3 < layout 1 < layout 2 < layout 4. At overlap rate levels 1, 2, 5, 6, and 8, the fixation count of layout 3 is small. Layout 3 has the shortest total fixation duration and layout 2 the longest; from small to large: layout 3 < layout 1 < layout 4 < layout 2. At overlap rate levels 3, 5, and 6, the total fixation duration of layout 3 is small; at levels 1, 2, 4, and 7, that of layout 1 is small (Table 3).

Table 3. Descriptive statistics of eye movement data.
Layout                         1      2      3      4
Fixation count (avg)           7.913  8.285  7.740  8.366
Total fixation duration (avg)  1.544  1.753  1.504  1.558
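The two AOI metrics defined above can be sketched as follows. Data shapes are assumed, not taken from the paper: a fixation is a tuple (x, y, duration in seconds) and an AOI is an axis-aligned rectangle (x0, y0, x1, y1) in pixels.

```python
def in_aoi(fix, aoi):
    """True if the fixation's (x, y) position falls inside the AOI rectangle."""
    x, y, _ = fix
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def aoi_metrics(fixations, aoi):
    """Return (fixation count, total fixation duration) for one AOI."""
    hits = [f for f in fixations if in_aoi(f, aoi)]
    fixation_count = len(hits)
    total_fixation_duration = sum(d for _, _, d in hits)
    return fixation_count, total_fixation_duration

fixations = [(100, 120, 0.30), (400, 300, 0.22), (110, 130, 0.18)]
aoi = (80, 100, 200, 200)  # hypothetical AOI around one target sign
count, duration = aoi_metrics(fixations, aoi)
print(count, round(duration, 2))  # 2 0.48
```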
4.2.2 Analysis of Variance
For fixation count, the overlap rate and the overlap rate × layout interaction give P < 0.001 and the layout gives P = 0.028. For total fixation duration, the overlap rate, the layout, and the overlap rate × layout interaction all give P < 0.001. Therefore, the overlap rate, the layout, and their interaction have a significant impact on both fixation count and total fixation duration. Pairwise comparison of fixation count shows a significant difference between layout 3 and layouts 2 and 4, and no significant difference among the other layouts. For total fixation duration, there is a significant difference between layout 2 and layouts 1 and 3, and no significant difference among the others; the total fixation duration of layout 2 is higher than that of layouts 1 and 3.

4.3 Visual Graphic Analysis
Figure 3 shows one subject's gaze plot sequence. The subjects used a random search strategy: they first searched randomly for the target graphic, then fixated on the target and checked the corresponding number. The connecting lines in layouts 2, 3, and 4 have a significant guiding effect on the subjects' search direction, consistent with expectations.
Fig. 3. Visual graphic diagram under different layouts
5 Discussion

Layout 1 performs better when the overlap rate is low, but when the overlap rate reaches 19.7% its reaction time increases significantly. This also confirms that a small amount of overlap does not affect visual search, so layout 1 can serve as the default state at low overlap rates. Since layout 2 cannot completely avoid overlap between signs, it is not recommended. Layout 3 performs better than the other layouts on most indicators across the overlap rates. When searching for graphics, the graphic parts of layouts 3 and 4 suffer relatively little interference, so subjects can find the graphics quickly; in layout 3 the signs are aligned left and right, the sign layout is more compact, and the distance between graphics and signs is shorter. In most cases the search efficiency of layout 3 is therefore better than that of layout 4, consistent with Fitts' law [5]. According to the literature, the human eye scans from top to bottom and from left to right, and horizontal eye movement is faster than vertical [6]. After the target is found in layout 3, locating the sign requires a horizontal eye movement, which is faster than the vertical movement required in layout 4.
6 Conclusion

In this paper, eye tracking technology is used to evaluate the layout design of graphics and signs in the map module of a GIS interface. Under the task scenario developed in this experiment, there are significant differences between the layout forms. When there is no overlap or only a small amount of overlap between targets, layout 1 or layout 3 can be chosen; when the overlap rate becomes high, layout 3 is recommended first, followed by layout 4. Since layout 2 performs only moderately at all overlap rates, it is not recommended. In addition, in this experiment search efficiency dropped sharply once the graphic overlap rate increased beyond a certain point; therefore, when the target overlap rate on the map exceeds 20%, design measures should be adopted to reduce it.

Acknowledgments. The authors gratefully acknowledge the reviewers' comments. This work was supported jointly by the National Natural Science Foundation of China (Nos. 71871056, 71471037) and the Equipment Pre-research & Ministry of Education of China Joint Fund.
References
1. Wang, Y., Yao, Y.: Design of situation display system based on geographical information system. J. Shipboard Electron. Countermeas. 29(4), 77–79 (2006)
2. Nielsen, J.: F-shaped pattern for reading web content (original study). http://www.nngroup.com/articles/f-shaped-pattern-reading-web-content
3. Owens, J.W., Chaparro, B.S., Palmer, E.M.: Text advertising blindness: the new banner blindness? J. Usability Stud. 6(3), 172–197 (2011)
4. Jing, L., Shulan, Y., Wei, L.: Cognitive characteristic evaluation of CNC interface layout based on eye-tracking. J. Comput. Aided Des. Comput. Graph. 7, 20 (2017)
5. Soukoreff, R.W., MacKenzie, I.S.: Towards a standard for pointing device evaluation: perspectives on 27 years of Fitts' law research in HCI. Int. J. Hum. Comput. Stud. 61(6), 751–789 (2004)
6. McCormick, E.J., Sanders, M.S.: Human Factors in Engineering and Design (1982)
Research on Readability of Adaptive Foreground in Dynamic Background Maoping Chi and Lei Zhou(&) School of Mechanical Engineering, Southeast University, Nanjing 211189, China {chi,zhoulei}@seu.edu.cn
Abstract. The head-up display (HUD) is an ancillary information presentation device widely used on aircraft; its use increases flight adaptability and safety. However, in some scenarios the color of the background environment interferes with the color of the head-up display, reducing readability. Here we propose a method of adaptively adjusting the color of HUD information according to the background environment to enhance readability. Simulating the process of reading information against a dynamic background, subjects were asked to identify digits in a string presented over a dynamic background. The results show that dynamically changing the information color interferes to some extent with information recognition. In future research, adaptive matching methods will be refined to reduce the interference caused by color and improve readability.

Keywords: Human factors · Color · Head-up display · Dynamic background · Readability
1 Introduction

The head-up display is mainly used to present auxiliary flight information on fighter aircraft. Its main advantage is to reduce the number of times the pilot looks down at dashboard information during flight and to keep the pilot looking up [1]. Because the displayed content fuses and superimposes with the color and brightness of the external environment [2], the display effect of the information is unstable, which affects the pilot's task performance; color coding of the head-up display interface is therefore an important research direction. Aiming at the color of the head-up display, this paper proposes the idea of adaptively adjusting the foreground color as the background environment changes, and studies the character foreground color under a dynamic background.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1244–1249, 2020. https://doi.org/10.1007/978-3-030-39512-4_190
Research on Readability of Adaptive Foreground in Dynamic Background
1245
2 Related Work

2.1 Interface Color Study of Head-up Display

Most current HUD interfaces use a single green with wavelengths between 500 and 560 nm. On the basis of the green interface, Xie et al. studied the character brightness and line width of the head-up display [3]. Using visual-acuity measurements, Wang et al. found that green (120°, 99%, 33%) characters are correctly recognized with shorter response times in various environments [8]. To explore the possibility of other colors, Derefeldt et al.'s study of color coding for fighter interfaces showed that multi-color coding can improve reaction time in complex environments and gives pilots better state perception [4]. Xiong et al. studied the ergonomics of multi-color HUD interfaces on a simulated flight platform and found that a two-color scheme, with magenta marking key information and green as the main body color, performed best compared with single-color and three-color schemes [5].

2.2 Research on Adaptive Regulation of Head-up Display
Considering changes in the external ambient environment, Yoon et al. proposed that when the highest-frequency hue in the background and the hue of the HUD foreground color are within 30°, the HUD color is considered adapted to the background environment, and concluded that a color-changing HUD interface offers better readability [6]. Zhi-Shan et al. used this theory to improve the HUD interface; their results show that users' required PJND value is higher in low-light environments, while PJND values differ little across chroma environments, supporting a brightness-adaptive design of the HUD interface [7]. In color theory, complementary colors and contrast colors are often used to enhance information contrast and highlight important information. This method is very common in graphic design but is rarely used in dynamic environments.
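The hue criterion attributed to Yoon et al. above can be sketched as a simple check. This is an assumption-laden illustration: hue distance is taken on the 360° circle, and the original method works on the dominant color of a background histogram rather than a single RGB value.

```python
import colorsys

def hue_deg(rgb):
    """Hue of an (R, G, B) triple in degrees, via HSV conversion."""
    r, g, b = (c / 255 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360

def within_hue_tolerance(fg_rgb, bg_dominant_rgb, tol=30.0):
    """True if foreground and dominant background hues are within tol degrees."""
    d = abs(hue_deg(fg_rgb) - hue_deg(bg_dominant_rgb))
    return min(d, 360 - d) <= tol  # shortest distance around the hue circle

print(within_hue_tolerance((255, 0, 0), (250, 10, 5)))  # True: both near red
print(within_hue_tolerance((255, 0, 0), (0, 255, 0)))   # False: red vs green
```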
3 Experimental Design and Discussion

The experiment simulates the external dynamic scene of a pilot on a flight mission; subjects identify randomly appearing characters so that performance can be measured when the foreground character color either remains unchanged or adapts to the environment. The experiment uses four typical scenes, three character colors, and two adaptive methods.

3.1 Experimental Materials
The experimental variables are the character color and the background environment. The character colors are divided into two groups: one group remains unchanged in the dynamic environment and includes red, green, and blue. The other group is an adaptively varying foreground color, experimented with
1246
M. Chi and L. Zhou
complementary colors of the dynamic background's average color and 2 sets of contrast colors. The background environment uses the typical scenes shown in Fig. 1: Figs. 1a and b simulate the sky varying at different times, Fig. 1c simulates a calm blue sky, and Fig. 1d simulates a large green forest as a special scene. Seven random characters, consisting of uppercase letters and at most one of the digits 4, 5, and 6, are superimposed on the dynamic background [9]. Figure 2a shows the experimental material for one case. Two control groups, white on black and black on white, were added, as shown in Fig. 2b and c.
Fig. 1. Typical scene.
Fig. 2. Experimental material case.
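The adaptive foreground described above can be sketched as averaging the RGB values of the current background frame and taking the complementary color for the characters. The paper does not give its exact formula, so RGB inversion, one common definition of "complementary", is assumed here.

```python
def average_color(pixels):
    """Mean (R, G, B) of a list of RGB triples, with integer division."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

def complementary(rgb):
    """Complementary color as RGB inversion (an assumed definition)."""
    return tuple(255 - c for c in rgb)

frame = [(40, 90, 200), (60, 110, 220), (50, 100, 210)]  # bluish sky pixels
bg = average_color(frame)
print(bg, complementary(bg))  # (50, 100, 210) (205, 155, 45)
```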
3.2 Experimental Environment and Subjects
The experimental program was implemented in Unity 3D, which handled the presentation of the experimental videos and the recording of the experimental data. The raw data were exported and summarized in Excel, and statistical methods were used for analysis. The experiment was performed on an iMac to keep display color deviation within an acceptable range; display brightness was controlled at 150 cd/m2 and room illumination at 500 lx. The resolution of the experimental videos was 1024*768. The subjects were 10 university undergraduate and graduate students who had not been exposed to similar experiments; they were between 22 and 26 years old, had no color blindness or color weakness, and had corrected visual acuity above 1.0.
3.3 Experimental Design and Process
According to the setting of the experimental variables, performance differences were measured through reaction time, correct rate, and number of responses. The 26 experimental video materials were compiled into Unity 3D and presented randomly; the program recorded the responses, the correct rate, and the number of responses. Participants read the experimental instructions and press A to start the experiment. They are asked to give feedback on the string that appears: if there is no digit in the string, they press 0; if a 4, 5, or 6 appears, they press the corresponding key.

(a) Experimental results. The correct rate of each subject is calculated according to formula (1), where n_correct is the number of correct responses, n_incorrect the number of errors, and n_miss the number of missed responses. A t-test comparing the fixed character colors with the environment-adaptive condition gives P > 0.05 (P = 0.625), so the character color change has no significant influence on the accuracy of the task judgment.

P = (n_correct - n_incorrect - 2 n_miss) / n_total    (1)
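The correct rate of Eq. (1) can be written directly in code. Note that the "- 2 n_miss" coefficient is read from the garbled original formula, so treat it as an assumption: misses are penalized twice as heavily as wrong answers.

```python
def correct_rate(n_correct, n_incorrect, n_miss, n_total):
    """Correct rate per Eq. (1); coefficients reconstructed, see lead-in."""
    return (n_correct - n_incorrect - 2 * n_miss) / n_total

# Hypothetical run: 19 stimuli, 17 correct, 1 wrong, 1 missed
print(round(correct_rate(17, 1, 1, 19), 3))  # 0.737
```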
(b) The subjects' reaction-time values were normalized using the min-max normalization in formula (2) to eliminate between-subject differences in reaction. As shown in Table 1, white characters on a black background have the lowest response time. As shown in Table 2, against dynamically changing backgrounds the normalized averages show that green characters give the shortest and most stable responses across scenarios, the complementary and contrast adaptive character colors are not stable, and blue does not perform well in any background (Figs. 3 and 4).

T = (RT - RT_min) / (RT_max - RT_min)    (2)

Fig. 3. Average response time in different backgrounds.
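The min-max normalization of Eq. (2) maps each subject's reaction times into [0, 1] relative to that subject's own minimum and maximum, removing between-subject baseline differences:

```python
def min_max_normalize(values):
    """Eq. (2): scale each value into [0, 1] by the sample min and max."""
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate case: all values equal
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

rts = [0.85, 1.10, 0.95, 1.40]  # hypothetical reaction times in seconds
print([round(t, 2) for t in min_max_normalize(rts)])  # [0.0, 0.45, 0.18, 1.0]
```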
Fig. 4. Average response time for different backgrounds with different foreground colors.
(c) Counting the responses to the stimuli, with 19 stimuli per background, the average number of responses with fixed character color in a set of experiments was 18.96 (standard deviation 0.38, deviation 0.2%), while with adaptive character color it was 19.39 (standard deviation 0.84, deviation 2.0%). Adapting the character color thus increased the subjects' number of responses, and the response counts fluctuated more across subjects.
4 Conclusion

This experiment concludes that green characters offer better stability in the HUD interface, while subjects respond more slowly to the other colors. Although color adaptation meets the requirement for color contrast, it increases the extra load on subjects reading the information.

Acknowledgments. This work was supported by the Science and Technology on Avionics Integration Laboratory and Aeronautical Science Fund (No. 20165569019) and the National Natural Science Foundation of China (No. 71871056, No. 71471037).
References
1. Yang, X.: Design of head-up display. J. Fuzhou Univ. (Nat. Sci. Edn.) 04, 29–32 (2000). (in Chinese)
2. Harding, T.H., Rash, C.E., Lattimore, M.R., Statz, J., Martin, J.S.: Perceptual issues for color helmet-mounted displays: luminance and color contrast requirements. In: SPIE Defense+Security (2016)
3. Xie, J., Wang, X., Lu, J., Chai, X., Wang, L.: Study on character brightness and line width of head-up display. Electro-Opt. Control 21(08), 68–72 (2014). (in Chinese)
4. Derefeldt, G., Skinnars, Ö., Alfredson, J., Eriksson, L., Andersson, P., Westlund, J., Berggrund, U., Holmberg, J., Santesson, R.: Improvement of tactical situation awareness with colour-coded horizontal-situation displays in combat aircraft. J. Disp. 20, 171–184 (1999)
5. Xiong, D., et al.: The effect of one-color and multi-color displays with HUD information in aircraft cockpits. In: Man-Machine-Environment System Engineering. Springer, Singapore (2016)
6. Yoon, H.J., Park, Y., Jung, H.Y.: Background scene dominant color based visibility enhancement of head-up display. In: International Conference on Systems Engineering. IEEE Computer Society (2017)
7. Zhi-Shan, W., et al.: A new method to evaluate the readability of head up display. In: IEEE International Conference on Computer & Communications. IEEE (2017)
8. Wang, H., Shao, J., Gao, Y., et al.: Color design of helmet aiming interface. J. Electro-Opt. Control 23, 64–67+76 (2016)
9. Gabbard, J.L., Swan, J.E., Hix, D.: The effects of text drawing styles, background textures, and natural lighting on text legibility in outdoor augmented reality (2006)
Research on Interaction Design of Children’s Companion Robot Based on Cognitive Psychology Theory Tianmai Zhang and Wencheng Tang(&) School of Mechanical Engineering, Southeast University, Nanjing 211189, China {zhangtianmai,tangwc}@seu.edu.cn
Abstract. This paper explores interaction design methods for children's companion robots by applying cognitive psychology theories. First, the researchers conduct an investigation and analyze it from the perspective of cognitive psychology; in combination with users' aesthetic requirements, habits and physiological laws, user demands in product design are analyzed. Cognitive psychology is then used to observe children's process of use, and design information is obtained by investigating users' perception, thinking and operational processes. Finally, combined with the user model of the design target group, a targeted interaction mode is developed and verified through product design practice. The article transforms the traditional "machine-oriented" design thinking into "people-oriented" thinking, which offers guidance for the future interaction design of other robot products.

Keywords: Interaction design · Cognitive psychology · Companion robot
1 Introduction

Nowadays, "left-behind" children and children raised by grandparents have become common in Chinese society [1], and more and more children lack parental companionship during their growth. A "lack of love" environment can cause psychological trauma to children [2]. In this context, children's companion robots have emerged. They not only develop, accelerate and promote children's intellectual level, social ability and logical thinking, but also create a solid foundation for children's physical and mental development, and can be indispensable partners for children's healthy growth.
2 Current Status of Children's Companion Robots

Companion robots are one of the mainstream research directions in the home robot industry. Accompanied groups include children, the elderly and even pets; in this field, most manufacturers target children [3]. Modern life is fast-paced and stressful, and parents are busy with work and neglect their companionship with

© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1250–1255, 2020. https://doi.org/10.1007/978-3-030-39512-4_191
Research on Interaction Design of Children’s Companion Robot
1251
their children. At the same time, children have a natural interest in robots. With currently available technologies and equipment, present technology more easily meets the needs of children, while adults' demand for robot companionship does not seem particularly strong. It is foreseeable that the smart companion robot market is bound to become a focus of global robotics competition. For the interactive interface of such products, the layout generally includes a side function bar and a content display area, and some products have a navigation bar. The interface generally has a 3-level page structure, with more complex products using 4 levels. Interaction is mainly by touch, with click and swipe gestures.
3 Application of Cognitive Psychology

Human cognition is a psychological process. Cognitive ability refers to the ability of the human brain to process, store and extract information; perception, memory, attention, thinking and imagination are all considered cognitive abilities [4]. It is the intelligence we generally speak of, such as observation, memory and imagination, and people rely mainly on it to understand the objective world and acquire a variety of knowledge.

3.1 Human Cognitive Process
According to cognitive psychology theory, a person's entire cognitive process can be summarized in three steps [5]: (1) sensors receive information under environmental stimuli; (2) the brain processes the information; (3) effectors output the corresponding response. The sensors are mainly the body's sensory systems, including vision, touch, hearing, smell and force sense. The effectors mainly include the muscles, glands and organs that respond to brain commands; responses are expressed as expressions, language, actions and behaviors (shown in Fig. 1).
Fig. 1. Human cognitive process
1252
T. Zhang and W. Tang

3.2 Factors Affecting Cognitive Ability
A user's cognitive ability is influenced by many factors, which therefore need to be fully considered in the product design process. On the one hand, this helps to further understand the cognitive ability of the target user group; on the other hand, it helps to establish a more realistic user model for predicting product usage scenarios.
Fig. 2. Cognitive model [6]
As shown in Fig. 2, human behavior, performance and achievement are only surface representations; hidden beneath are the cognitive abilities and cognitive psychology of human beings. The environment (including the family, school and social environments) has an important impact on the formation of a person's abilities and psychology.
4 User Analysis

The interactive subject of the child companion robot is the child. At present, most research on interaction design for children's companion robots proceeds from two aspects: children's physical and mental development characteristics, and the application of interactive technology. According to Piaget's theory of children's cognitive development [7], children's cognition develops in stages and incompletely, as shown in Fig. 3. From that theory, children from 0 to 2 years old are in the sensorimotor stage, and their interaction with the outside world is instinctive and unconscious. Between the ages of 2 and 11, children gradually complete the transition from figurative thinking to simple abstract thinking; after age 11, they begin to master advanced abstract thinking skills. Perception mainly includes vision, hearing, touch, force, and object, time and spatial perception; motor ability mainly refers to movement and behavior. Children's perceptual and motor development shows different characteristics at different ages, as shown in Fig. 3.
Fig. 3. Children's development process of perception
Therefore, companion robot products designed for children tend to be highly experiential and interactive, and need to meet children's physical interaction needs. In addition, because their perception and motor abilities are still developing, children's cognitive process is gradual and progressive; the interaction design of the robot should take this characteristic into consideration and make appropriate design choices.
5 Interactive Mode Design

The cognitive theory model and children's cognitive characteristics show that establishing multi-channel information interaction, with rich sensory and motor stimuli, is of great significance for children in the stage of growth and development. Below, we combine the analysis of children's cognitive information channels with design practice to illustrate several ways of interacting with children.

5.1 Image Interaction
Children are very sensitive to color and like bright colors, especially red, yellow, green, orange and blue. They can distinguish various figures by age 2 and reach adult-level vision around age 6. Our children's companion robot uses colors of high purity and saturation in its design; its LCD interactive interface displays clear expressions or cartoon patterns. Through intuitive image interaction, children connect with the robot via image recognition and image perception (shown in Fig. 4).
Fig. 4. The robot’s LCD interactive interface
5.2 Voice Interaction
Providing appropriate auditory feedback alongside visual graphics and tactile perception during interaction can help children learn the basic concepts of everyday things, such as whistles, wind, raindrops and drumming. Children's auditory development can be exercised by having the moving robot emit sound signals from different directions. In addition, intelligent voice interaction is a new generation of interaction based on voice input: with a speech recognition feedback system, the robot can respond to the child's speech. Auditory feedback and speech recognition let the robot effectively express its role positioning, making the interactive context more vivid and interesting.

5.3 Behavioral Interaction
Behavioral interaction refers to conveying meaning through the movements and postures of the limbs. The robot understands actions and behaviors by tracking the child's limb movement trajectory and recognizing posture characteristics, and gives corresponding intelligent feedback. By simulating different life situations, the robot lets children experience simple actions such as "on-off" and "push-pull" to build a learned action experience. As children's comprehensive cognitive ability increases, setting challenging interactions can exercise not only their hands-on ability but also their intelligence and logical thinking. Our companion robot's behavioral interaction mode is non-touch: it realizes movements such as turning and climbing through a 360-degree rotating front wheel and two 55-degree rotating track mechanisms (shown in Fig. 5). In addition, the robot can lift its rear left and right crawler wheels by 55 degrees in coordination with the child lifting the left or right hand, bringing an interesting interactive experience.
Fig. 5. Behavioral interaction
6 Conclusion

This paper explored interaction design methods for children's companion robots by applying cognitive psychology theories: user demands were analyzed from a cognitive perspective in combination with users' aesthetic requirements, habits and physiological laws, and a targeted interaction mode was developed from the user model of the design target group and verified through product design practice. The work transforms the traditional "machine-oriented" design thinking into "people-oriented" thinking, which offers guidance for the future interaction design of other robot products. Since children are still in the early stage of thinking development, mostly thinking in images and not yet good at abstract thinking (such as reasoning and judgment), the interactive system of the robot should be designed from the child's perspective so that most of its functions can be used without special training. Through the three dimensions of interaction design (images, voices and actions), the robot's interaction with children becomes smoother and more natural than before.
References
1. Jingzhong, Y., Lu, P.: Differentiated childhoods: impacts of rural labor migration on left-behind children in China. J. Peasant Stud. 38(2), 355–377 (2011)
2. Hetherington, E.M., Parke, R.D.: Child Psychology: A Contemporary Viewpoint. McGraw-Hill, New York (1976)
3. Belpaeme, T., Baxter, P., Greeff, J.D., Kennedy, J., Zelati, M.C.: Child-robot interaction: perspectives and challenges. In: International Conference on Social Robotics. Springer (2013)
4. Merikle, P.M., Smilek, D., Eastwood, J.D.: Perception without awareness: perspectives from cognitive psychology. Cognition 79(1), 115–134 (2001)
5. Carroll, J.B.: A theory of cognitive abilities: the three-stratum theory. Hum. Cogn. Abil. 631–655 (1993)
6. Dong, H., Ning, W.N., Hou, G.H.: Measuring cognitive capability: a literature review based on inclusive design. Industrial Engineering & Management (2016)
7. Kamii, C., Devries, R.: Physical knowledge in preschool education: implications of Piaget's theory (1978)
Strategies for Accessibility to the Teodoro Maldonado Hospital in Guayaquil: A Design Proposal Focused on the Human Being
Josefina Avila Beneras (✉), Milagros Fois Lugo, and Jesús Rafael Hechavarría Hernández
Faculty of Architecture and Urbanism, University of Guayaquil, Cdla. Salvador Allende, Av. Delta y Av. Kennedy, Guayaquil, Ecuador
{josefina.avilab,maria.foisl,jesus.hechavarriah}@ug.edu.ec
Abstract. Urban planning is a mechanism that allows cities to achieve sustainable development insofar as it provides citizens with quality of life. In the present study, a territorial analysis and diagnosis process is carried out to improve pedestrian accessibility in the public space immediately surrounding the Teodoro Maldonado Hospital in Guayaquil, through the development of a mobility plan for the establishment of design strategies. A mixed methodology is proposed under a systemic approach, in which six components are considered within the Jan Gehl methodology across five areas of action: Preservation of Heritage, Sustainable Mobility, Equity and Diversity, Urban Design at the Human Scale, and Economic and Cultural Development. The proposal assigns a higher level of importance (70%) to pedestrian circulation than to the road system, following the parameters established by NACTO (National Association of City Transportation Officials).
Keywords: Urban planning · Strategies · Human factors · Mobility · Accessibility
1 Introduction
Transport is associated with environmental problems, economic losses, population health, and social inequities [1]. The Teodoro Maldonado Carbo hospital, in the city of Guayaquil, Ecuador, provides daily care to an average of 6,000 members, with a coverage of 50,000 inhabitants. This daily flow of people implies mobility and accessibility risks and dangers for passers-by, especially for people with disabilities, whose condition demands even greater mobility; according to Sierra, it is necessary to "provide for the mobility dynamics of cities at small scale, by neighborhoods, segments, crossings" [2]. Urban planning should be oriented toward equity in the distribution of resources (public works, equipment, infrastructure, among others) [3], which is why an equitable distribution of these resources is proposed, to compensate for the socioeconomic and socio-spatial inequality present in the case study [4, 5].
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1256–1262, 2020. https://doi.org/10.1007/978-3-030-39512-4_192
In this work, a study of accessibility to the public space around the hospital facilities is presented, using a mixed methodology. The diagnosis is made based on six components for the sector in general, and for the specific sector the methodology of the architect and urbanist Gehl [6] is used. Based on the conclusions, the proposal for a strategic mobility plan is presented, following the design process of the NACTO (National Association of City Transportation Officials) urban street design guide, which combines qualitative and quantitative methods.
2 Methodology
The proposal of the strategic mobility plan is developed in two phases, described below.
Phase 1: It begins with stage 1, called "Diagnosis", which is composed of four methods that guide the analysis process:
• (M-1) A general analysis with a coverage diameter of 1,000 m, as seen in image 3, based on the components specified in the Senplades processes;
• (M-2) The diagnosis of the specific sector, developed with the Jan Gehl methodology over a smaller radius of action (260 m);
• (M-3) A survey process that complements the diagnosis of the specific sector: a survey of 382 samples, with a 5% margin of error and a 95% confidence level, whose purpose is to learn what people think and assess the feasibility of the project;
• (M-4) To complete the diagnostic stage, a SWOT matrix is drawn up, exposing the advantages and disadvantages found in the sector.
Stage 2 establishes (M-5) the strategies for improving mobility and accessibility in the sector immediately surrounding the Level III health facility, the Teodoro Maldonado Carbo Hospital, based on the conclusions of the diagnosis of the specific sector and its relationship with the results of the study of the general sector. Once the strategies for the sector's Special Mobility Plan have been established, the structuring criteria of that plan are determined.
Phase 2: As the secondary phase of the investigation, stage 3 is established, in which (M-6) the content of the Special Mobility Plan is developed, projecting strategic interventions based on two variables: the first assigns the pedestrian an importance of 70%, and the second assigns the road system the remaining 30%, following the parameters established by NACTO.
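The 382-sample figure with a 5% margin of error and 95% confidence is consistent with Cochran's formula followed by a finite-population correction for the roughly 50,000 inhabitants served by the hospital. The paper does not state which formula was used, so the following sketch is an assumption:

```python
import math

def cochran_sample_size(margin, z=1.96, p=0.5, population=None):
    """Cochran's sample-size formula, optionally with the
    finite-population correction n = n0 / (1 + (n0 - 1) / N)."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)  # infinite-population size
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# 5% margin of error, 95% confidence (z = 1.96), N = 50,000 inhabitants
print(cochran_sample_size(0.05, population=50_000))  # -> 382
```

With no population bound the same inputs give 385 samples; the correction for N = 50,000 brings it down to the 382 reported.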
Jan Gehl's methodology is complemented by the NACTO (National Association of City Transportation Officials) guide, which is based on qualitative and quantitative methods and provides a clear vision of the complete streets within a range of action around the hospital, together with a basic road map (Fig. 1).
Fig. 1. Methodology diagram.
3 Development
Phase 1: Stage 1 – Diagnosis
M1: Within stage 1 the diagnosis was established according to the Senplades methodology, selecting the parameters relevant to the development of a PEMU: (i) land use, (ii) roads, (iii) public transport, and (iv) infrastructure. The following is a summary of what was obtained about the study sector in phase 1 of the investigation: (ia) Land uses in the sector are divided into: 1.03% commercial, 1.65% equipment, 2.47% green area, 0.72% industrial, 80.42% residential, and 13.71% mixed use (residential-commercial); (ib) The limits of the Teodoro Maldonado Carbo Hospital are: to the north, the secondary collector road V4, Av. Ernesto Albán Mosquera; to the south, the collector road V4, Dr. Leónidas Ortega Moreira; to the east, the primary arterial road V3, Av. 25 de Julio; to the west, the secondary road V4, Av. 1, surrounded by V4 and V5 roads; (iia) The study sector has mass transportation through the Metrovía, with 3 Metrovía stations on Av. 25 de Julio and a terminal station called 25 de Julio to the south, plus urban bus transportation with 21 signed lines. Service coverage is as follows: 100% paved streets, 100% coverage of drinking water and sewerage, and an electrical substation in the La Pradera citadel (200 m away).
M2: The methodological sheet of the urban architect Jan Gehl is used to assess the state of public space, with the B-Mobility Sustainability card, whose objective is to combat the progressive invasion of the car and provide more sustainable, people-friendly solutions. Using this card, the following observations were obtained:
• The study area is made up of the following routes: primary arterial V3 and secondary collectors V4 and V5, with Av. 25 de Julio being the primary arterial route V3.
• All streets are paved, and the sector has mass transportation through the Metrovía and urban buses.
• There are 3 Metrovía stations on Av. 25 de Julio covering the study sector, and a terminal station called 25 de Julio to the south.
• There is a public transport network that congests Av. 1.
• There is a very well-defined pedestrian access circuit to the hospital (Fig. 2).
Fig. 2. Parameter chart for diagnosis.
M4: A tool detailing the advantages and disadvantages that the sector presents in a SWOT matrix (Strengths, Weaknesses, Opportunities, and Threats), based on the conclusions drawn from the site analysis.
In order to establish the strategies and criteria, the Special Urban Mobility Plan (PEMU) is taken into account, in which interventions are planned based on two variables: prioritizing pedestrians and following the parameters established in the NACTO (National Association of City Transportation Officials) methodology.
Strategies
1. Delimitation of parking lots for private vehicles and public buses, as a strategy to improve road traffic in the sector surrounding the hospital.
2. Redesign of the roadway as a strategy for safe pedestrian mobility.
3. Incorporation of green areas with urban furniture, as a strategy for good use of public space.
4. Reorganization of informal merchants into specific sales areas, freeing up space for pedestrians.
5. Sectorization of the intervention into specific urban actions (A, B, and C).
Criteria: On Av. Ernesto Albán Mosquera and Dr. Leonidas Ortega: incorporation of tree vegetation, considering the current sidewalk width of 2 m to 2.30 m, by selecting the semi-deciduous Zebra Tree (Erythrina indica picta), whose height ranges from 8 to 12 m [3], with an 80 cm protection grille anchored flush with the floor.
Corresponding to Av. 1 on the east sidewalk (emergency and laboratory entrance):
• Delimitation of public bus parking areas, with their respective signage.
• Redesign of the roadway, expanding it from the current 3 m to 3.50 m in response to the high flow of users generated by the Teodoro Maldonado Hospital.
• Incorporation of street furniture, arranged semicircularly with respect to the building line (enclosure).
• Reorganization of informal merchants, supported by the location of kiosks for food service.
Corresponding to Av. 1 on the west sidewalk:
• Incorporation of green areas with urban furniture, conditioning the existing spaces intended for this function; the vegetation to be used is the Black Olive (Bucida buceras), an evergreen tree 12 m high. The furniture arranged on this sidewalk will have a circular shape, with plastic-wood seats on a concrete base.
Phase 2: Stage 3 – Urban Design Proposal (M-6)
Through the study of accessibility to the public space around the facilities of the Teodoro Maldonado Carbo Hospital, an urban design project is proposed for the roads immediately surrounding the facility. As a result of the strategies explained in the previous section, the proposal consists of the following three interventions (Fig. 3):
Fig. 3. Representation of interventions.
• Intervention A: The use of existing infrastructure is proposed to improve the circuits in zone C.
• Intervention B: The unnamed (s/n) road giving access to outpatient consultation and the pharmacy will be reconfigured, with a discontinuous widening of the sidewalks that emphasizes pedestrian access to the hospital and creates spaces for the inclusion of informal traders (Fig. 4).
• Intervention C: The existing bus stops on Av. 1 will be relocated so that stops associated with the main flows related to the hospital can be created, along with spaces for the proposed parallel parking for private vehicles and a widened sidewalk (Fig. 5).
Fig. 4. Intervention B in the street adjacent to the outpatient consultation entrance.
Fig. 5. Intervention C, on both sides of Av. 1.
4 Discussion
The analysis carried out on the basis of the chosen methodology allowed a design that prioritizes the people who come to the hospital out of need. Ample spaces are created that provide the comfort required in these cases. The proposed strategies were developed so that there is adequate road and pedestrian circulation in the vicinity of the hospital. The redesign of the roadway, the implementation of green areas, and the reorganization and relocation of informal merchants in the streets around the hospital will improve the quality of life of the people arriving at this establishment.
Acknowledgments. To the Master's Program in Architecture with a mention in Territorial Planning and Environmental Management, Faculty of Architecture and Urbanism, University of Guayaquil.
References
1. Pástor, B.A.C., Lugo, M.M.F., Vázquez, M.L., Hernández, J.R.H.: Proposal of a technological ergonomic model for people with disabilities in the public transport system in Guayaquil. In: Ahram, T., Falcão, C. (eds.) Advances in Usability and User Experience, AHFE 2019. Advances in Intelligent Systems and Computing, vol. 972. Springer, Cham (2020)
2. Sierra, I.: Ciudades para las personas, Escenarios de vida. Editorial Díaz de Santos, vol. 105 (2015)
3. Bazant, J.: Planeación urbana estratégica, métodos y técnicas de análisis. Editorial Trillas, Mexico (2014)
4. Llop, J.M., Vivanco, L.: El derecho a la ciudad en el contexto de la agenda urbana para ciudades intermedias en Ecuador (2017)
5. Sánchez, G.: Planeación moderna de ciudades. Editorial Trillas, México (2008)
6. Gehl, J.: La dimensión humana en el espacio público, recomendaciones para el análisis en el diseño. Ministerio de Vivienda y Urbanismo de Chile, PNDU (2017)
Fatigue Measurement of Task: Based on Multiple Eye-Tracking Parameters and Task Performance
Hanyang Xu, Xiaozhou Zhou, and Chengqi Xue (✉)
School of Mechanical Engineering, Southeast University, Nanjing 211189, China
{220174254,zxz,ipd_xcq}@seu.edu.cn
Abstract. Human error can be reduced by monitoring fatigue during work and taking countermeasures, so that accidents can be effectively prevented. In this paper, participants accumulated fatigue while using an experimental program integrating an open-source SDK, and their task performance and PERCLOS were collected and calculated by the program. Finally, a neural network model was established to describe the relationship between the various parameters and fatigue. The experiments show that PERCLOS and task performance can reflect the fatigue state well in search tasks. The model can detect the fatigue degree of different people in specific application scenarios.
Keywords: Fatigue · PERCLOS · Neural network · Saccade · Task performance
1 Introduction
Accidents can be caused by humans, machines, the environment, and management. With the improvement in the reliability of equipment, human error has become the main cause of accidents. Fatigue is generated once a person is subjected to physiological or psychological load during an activity [1]. Because the factors underlying fatigue are complex, defining it with respect to specific tasks makes it more valuable for research. In studies on the relationship between eye movement parameters and fatigue, the commonly used parameters include PERCLOS, SPV (saccadic peak velocity), blink duration and frequency, pupil diameter, etc. [2–4]. Meanwhile, studies have shown that eye movement parameters can objectively reflect the fatigue status of practitioners from a physiological perspective [5, 6]. Therefore, fatigue research based on eye parameters is of great significance for reducing man-made accidents [7].
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1263–1269, 2020. https://doi.org/10.1007/978-3-030-39512-4_193
2 Paper Preparation
2.1 Participants
Sixteen participants were recruited for the experiment (mean age 24 years, range 22–26); all were graduate students of industrial design or mechanical engineering with normal or corrected-to-normal vision. Six of the participants were given a pre-test, and after a five-day interval all participants were given the formal test. They were told to follow their normal, regular schedule of sleep and rest during the test period. The fatigue tests did not interfere with their work or life.
2.2 Technique
In this study, Unity and C# were used to build the experimental program. The program structure of the pre-experiment and the main experiment was roughly similar, except that the formal experiment had a time limit. In each trial, one group of icons was randomly displayed on the screen; each group contains five kinds of icons, 225 icons in total (45 × 5), as shown in Fig. 1.
Fig. 1. Experiment program: the matrix of icons
Six groups of icons are used in this study, each containing five icons of similar visual complexity to ensure consistent response times. All the icons are semantically
ambiguous, to ensure that the experiment is random and specific to each participant; five of the groups are shown in Fig. 2. As shown in Fig. 3, participants were told to click on only one kind of these symbols within a specified amount of time. When a participant was ready, pressing the A key entered the experiment shown in Fig. 1. The click path, click count, time, and time stamps are recorded by the program. Meanwhile, the program gradually increases the participants' cognitive load, leading them to accumulate fatigue. The number and timing of icon cancellations indirectly reflect the participants' reaction time, which can be used to calculate fatigue status over a period of time.
Fig. 2. Groups of icons
Fig. 3. The instruct before experiment
2.3 Pre-experiment
Each participant conducted the experiment in an active period (9:00–10:00) and a tired period (15:30–17:00, or 40 min after dinner). As shown in Fig. 4, all six groups of icons were used exactly once across the six rounds of the pre-experiment. The randomization kept each group of icons appearing with equal probability and without repetition. There was no time limit for each round, and the breaks between rounds were controlled by the participants.
Fig. 4. The process of pre-experiment
In this part, participants did not wear the Diskablis eye tracker; their click paths, reaction times, and PERCLOS were recorded in order to optimize the program for the main experiment and to verify that the next stage of the experiment would exert a controlled effect through the SDK.
2.4 Experiment
As shown in Tables 1 and 2, the results of the pre-experiment indicated that the reaction time, break time, and error rate of participants in the tired period were significantly higher than those in the active period, so the main experiment would be conducted in the tired period.

Table 1. Duration (s) and error rate for each icon set in the two periods.

Period   Parameter   Set 1   Set 2   Set 3   Set 4   Set 5   Set 6   Average
Morning  Duration    36.667  41.467  47.367  50.650  48.583  42.100  44.427
Morning  Error rate  0.363%  0.727%  2.488%  0.708%  0.727%  0.708%  0.954%
Evening  Duration    39.893  50.550  52.800  47.533  54.505  47.217  48.689
Evening  Error rate  2.488%  3.530%  2.143%  1.072%  5.777%  3.810%  3.137%

Table 2. Time on task (s), total time (s), and break time (s) in the two periods.

Period   Time on task  Total time  Break time
Morning  266.833       275.700     8.867
Evening  292.133       316.750     24.617
The process of the main experiment is similar to that of the pre-experiment. As shown in Fig. 5, the six icon sets are each used three times, giving a total of 18 rounds. The order of the icon sets was adjusted according to the average response times in the pre-experiment (each icon set still had a random click target).
Fig. 5. The process of the main experiment
There are six time thresholds in the main experiment (t1, t2, …, t6), each equal to 80% of the average reaction time for the corresponding icon set. Participants were told to complete the experiment as quickly as possible without knowing the thresholds; otherwise, the current round could end without warning and a new round would start without notice. In this part, participants wore the Diskablis eye tracker; saccade distance, saccade time, and the front-view image were recorded, and the derived data were retained for further studies.
3 Results
Nie et al. [8] found in their study that fatigue was obvious when PERCLOS exceeded 6%, that is, when the participants' eye-closure time reached 3.5 s/min. P80 was selected as the PERCLOS calculation standard with the best effect. The formula is as follows:

PERCLOS = [dmax − (dmax − dmin) × 80%] / Ti    (1)

In formula (1), dmax and dmin are the maximum and minimum eyelid-closure distances per unit time, and Ti is the time spent in each round of the experiment. The fatigue state of participants was defined by the PERCLOS parameter over two-thirds of the experimental data. SPSS was used to analyze the correlations between the individual variables. PERCLOS was strongly correlated with fatigue state (Pearson correlation coefficient = 0.714) and moderately correlated with task performance (Pearson correlation coefficient = −0.450), but only weakly correlated with SPV (Pearson correlation coefficient = −0.167). The inter-group reliability of PERCLOS is not acceptable (Cronbach's alpha = 0.141). The click score was selected as the task performance measure in this study. The initial click score was 0; a correct click added one point and an erroneous click subtracted one, until the end of the current task. The results showed that task performance was moderately correlated with fatigue state (Pearson correlation coefficient = −0.384), but there was no correlation between task performance and SPV in this situation (Pearson correlation coefficient = 0.043). The average weight of each parameter, obtained by analyzing each participant's data with a radial basis function neural network, is shown in Table 3.
Table 3. The weights of PERCLOS, click score, and SPV.

Parameter    Weight
PERCLOS      0.522
Click score  0.351
SPV          0.127
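The P80 criterion above can be sketched as follows. This is an illustrative implementation, not the authors' code; it computes the fraction of samples in which the eyelid opening falls below the 80%-closure threshold derived from formula (1):

```python
def perclos_p80(eyelid_openings, sample_dt):
    """P80 PERCLOS: fraction of time the eye is at least 80% closed.

    eyelid_openings: per-sample eyelid opening distances (any consistent unit)
    sample_dt: time between samples, in seconds
    """
    d_max, d_min = max(eyelid_openings), min(eyelid_openings)
    # P80: the eye counts as "closed" once the opening drops below
    # d_max - (d_max - d_min) * 80%, i.e. the lid covers >= 80% of its range.
    threshold = d_max - (d_max - d_min) * 0.80
    closed_time = sum(sample_dt for d in eyelid_openings if d <= threshold)
    return closed_time / (sample_dt * len(eyelid_openings))

# Toy trace: 2 of 10 samples near-closed -> PERCLOS = 0.2 (20%)
trace = [10, 10, 9, 10, 1, 1, 10, 9, 10, 10]
print(perclos_p80(trace, sample_dt=0.05))  # -> 0.2
```

By the 6% criterion of Nie et al. [8], this toy trace (20%) would indicate obvious fatigue.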
4 Discussion
The results show that the fatigue description model established in this study is reliable. The low correlation between SPV and fatigue may be because the relationship between them is not linear; moreover, since PERCLOS is computed from images, error in the experimental instrument may also contribute to this result. The relationship between SPV and fatigue can be described in further studies. Fatigue is a very complex phenomenon, and there must be further latent relationships among its various parameters. Therefore, fatigue described by the single PERCLOS index may differ from fatigue described by the neural network. In practical applications, fatigue can be divided into several important grades and fed back to the user through fault-tolerant design in the system.
5 Conclusion
This study confirms that eye movement parameters can be used to measure fatigue levels in specific tasks. The method used here can also be applied to other specific tasks to derive the relationships among PERCLOS, task performance, and fatigue in a simulated environment. In further research, more complex neural network models and more physiological indicators should be adopted to make the method more broadly applicable.
Acknowledgments. This study was supported by the National Natural Science Foundation of China (No. 71901061), the Science and Technology on Avionics Integration Laboratory and Aeronautical Science Fund (No. 20185569008), and the National Natural Science Foundation of China (No. 71871056, No. 71471037).
References
1. Worm-Smeitink, M., Gielissen, M., Bloot, L., van Laarhoven, H.W.M., van Engelen, B.G.M., van Riel, P., Bleijenberg, G., Nikolaus, S., Knoop, H.: The assessment of fatigue: psychometric qualities and norms for the checklist individual strength. J. Psychosom. Res. 98, 40–46 (2017)
2. May, J.G., Kennedy, R.S., Williams, M.C., et al.: Eye movement indices of mental workload. Acta Psychol. 75(1), 75–89 (1990)
3. Di Stasi, L.L., Renner, R., Staehr, P., Helmert, J.R., Velichkovsky, B.M., Cañas, J.J., et al.: Saccadic peak velocity sensitivity to variations in mental workload. Aviat. Space Environ. Med. 81(4), 413–417 (2010)
4. Van Orden, K.F., Jung, T.P., Makeig, S.: Combined eye activity measures accurately estimate changes in sustained visual task performance. Biol. Psychol. 52(3), 221–240 (2000)
5. Wilson, G.F.: An analysis of mental workload in pilots during flight using multiple psychophysiological measures. Int. J. Aviat. Psychol. 12(1), 3–18 (2002)
6. van Drongelen, A., Boot, C.R.L., Hlobil, H., Smid, T., van der Beek, A.J.: Risk factors for fatigue among airline pilots. Int. Arch. Occup. Environ. Health 1(90), 39–47 (2017)
7. Marandi, R.Z., Madeleine, P., Omland, O., et al.: Eye movement characteristics reflected fatigue development in both young and elderly individuals. Sci. Rep. 8(1), 13148 (2018)
8. Nie, B., Huang, X., Chen, Y., Li, A., Zhang, R., Huang, J.: Experimental study on visual detection for fatigue of fixed-position staff. Appl. Ergon. 65, 1–11 (2017)
Emotional Data Visualization for Well-Being, Based on HRV Analysis
Akane Matsumae¹ (✉), Ruiyao Luo², Yun Wang³, Eigo Nishimura¹, and Yuki Motomura¹
¹ Faculty of Design, Kyushu University, 4-9-1 Shiobaru, Minami, Fukuoka, Fukuoka, Japan
[email protected]
² Politecnico di Milano, Milan, Italy
³ Beihang University, Beijing, China
Abstract. Managing negative emotions and promptly relieving anxiety are two main focuses in the prevention of mental illness. This paper presents a biofeedback-training experience that helps an experiencer manage his or her emotions. The authors defined two major problems in approaching this biofeedback design procedure. The first is how to capture human emotions using existing technologies in exhibition settings. The second is what kind of manifestation should be used to create emotional resonance. For the first problem, heart rate variability (HRV) is adopted to obtain anxiety and stress levels. For the second, the authors took a nature-inspired design approach: the results are visualized on a wall composed of 64 bionic scales. The device system guides the experiencer into a meditative state and helps spur thoughts on emotional management, thus serving the purpose of preventing mental illness.
Keywords: Bio-feedback design · Heart rate variability · Well-being · Meditation
1 Introduction
Between 2005 and 2015, diagnosed cases of depression increased by 18% globally. At the same time, depression has been confirmed to be one of the biggest contributors to loss of life: in 2015, at least 788,000 people around the world committed suicide because of depression. Low treatment and cure rates are typical characteristics of mental illness, and various theories attempt to explain this situation. Greg Miller writes, "The human brain is complex and difficult to study, which has impeded the development of drug treatments for mental illnesses" [1]. Considering that medical research on the human brain remains at a primary stage (many of the mental illness models in use today were developed decades ago), the World Health Organization cites prevention as the best treatment for mental illness, and the proper management of negative emotions is the main focus of prevention. In response to this situation, the authors hope to create an interactive emotional-visualization experience that leads people to pay more attention to their mental health issues. Further, we hope to help people better manage their emotions so as to achieve the purpose of prevention. In this paper, we describe an HRV analysis algorithm that uses real-time ECG signals obtained with BiTalino, with acceptable time complexity and detection accuracy. At the same time, a visualization installation is introduced in line with the needs of actual practice. Moreover, real-time monitoring of HRV may play a crucial part in disciplines such as high-performance sports, which could benefit from real-time detection of threatening heart conditions.
© Springer Nature Switzerland AG 2020
T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1270–1276, 2020. https://doi.org/10.1007/978-3-030-39512-4_194
2 Methods
The authors approached the problem via interactive installation design. In this section, HRV analysis, emotion feature extraction, and the processing steps are described.
2.1 HRV Analysis
Heart rate variability (HRV) is a measure of the degree of change in a continuous pulse signal. Researchers have found that, by analyzing heart rate variability in ECG signals, it is possible to effectively identify changes in mood; this method offers very good detection performance, especially for stress levels. The association between emotions and HRV is in fact the association of emotions with the autonomic nerves. The autonomic nervous system (ANS) is divided into two subsystems: the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS). The two systems maintain a relative balance under normal conditions, but sympathetic and parasympathetic nerves can develop disorders when people suffer from long-term stress. The heart is regulated by the autonomic nervous system in addition to its own rhythmic discharge. When the conductivity of sympathetic cell membranes increases (sympathetic excitation), the membrane potential of cardiac pacemaker cells also increases [2]. Depolarization occurs when the membrane potential of cardiac pacemaker cells reaches a certain threshold, which causes the interval between two heartbeats to become shorter; in contrast, parasympathetic excitation causes the interval to become longer [3]. HRV is the study of the change in intervals between heartbeats, so it can be used to infer whether the tester is experiencing anxiety.
2.2 Statistical Analysis
The BiTalino platform is used as a means of simple ECG acquisition. The device supports various sampling rates; in our study it was fixed at 1000 Hz. After receiving the sampled data from the platform, a band-pass filter (f_low = 5 Hz, f_high = 15 Hz) is applied; in this case we use a finite impulse response (FIR) filter. This procedure removes baseline noise and avoids high-frequency noise and artifacts. The voltage signals are segmented and annotated with the P, Q, R, S, and T points widely used in electrocardiography (Fig. 1). These points correspond to different stages of atrial/ventricular depolarization/repolarization and are used to identify
waves and intervals that are relevant to ECG analysis. HRV assessment has traditionally relied on analysis of the tachogram, which captures the potential information in the time series. The tachogram is the temporal series of RR heartbeat intervals, which contains the most relevant structural information of the HRV.
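The acquisition and filtering stage described above can be sketched as follows using SciPy. The 5–15 Hz cutoffs and the 1000 Hz sampling rate come from the text; the filter order and the use of zero-phase filtering are assumptions:

```python
import numpy as np
from scipy import signal

FS = 1000  # BiTalino sampling rate used in the study (Hz)

def bandpass_ecg(ecg, fs=FS, f_low=5.0, f_high=15.0, numtaps=1001):
    """Zero-phase FIR band-pass (5-15 Hz): removes baseline wander and
    high-frequency noise/artifacts ahead of QRS detection."""
    taps = signal.firwin(numtaps, [f_low, f_high], pass_zero=False, fs=fs)
    return signal.filtfilt(taps, [1.0], ecg)

# Example: a 10 Hz in-band component survives, a large 0.5 Hz drift is removed
t = np.arange(0, 5, 1 / FS)
x = np.sin(2 * np.pi * 10 * t) + 5 * np.sin(2 * np.pi * 0.5 * t)
y = bandpass_ecg(x)
```

Zero-phase filtering (filtfilt) avoids shifting the R-peaks in time, which matters when the RR intervals themselves are the quantity of interest.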
Fig. 1. QRS wave and RR interval
The first step of HRV assessment is R-peak detection. For this purpose, we apply an algorithm based on the widely used QRS complex detection algorithm of Pan and Tompkins (1985). Before the detection step, a derivative-and-squaring filter is applied to the original signal to amplify the peak feature for better and faster detection. Once the R-peaks are located, several HRV features are extracted from the RR-interval series:
• SDNN (ms): standard deviation of normal-to-normal (NN) intervals; null variance would indicate identical consecutive RR intervals [4].
• SDANN (ms): standard deviation of the averages of NN intervals in all 5-minute segments of the entire recording.
• NN50: number of pairs of adjacent NN intervals differing by more than 50 ms in the entire recording.
• SD1 (ms) and SD2 (ms), the ellipse sub-axis estimators: nonlinear features derived from the Poincaré plot, where each point has coordinates (RRi, RRi+1). Each data point represents a pair of successive beats: the x-axis is the current RR interval, while the y-axis is the previous RR interval.
• LF and HF: the elementary components inside the low-frequency (LF, 0.04 to 0.15 Hz) and high-frequency (HF, 0.15 to 0.40 Hz) bands refer to autonomic components of HRV; LF and HF can reflect the current mental state [5].
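A minimal sketch of the time-domain and Poincaré features defined above, using the standard Poincaré estimators; the spectral LF/HF features would additionally require resampling of the tachogram and a power-spectral-density estimate, which is omitted here:

```python
import numpy as np

def hrv_time_features(rr_ms):
    """SDNN, NN50 and Poincaré SD1/SD2 from an RR-interval series in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                      # successive RR differences
    sdnn = rr.std(ddof=1)                   # SDNN (ms)
    nn50 = int(np.sum(np.abs(diff) > 50))   # adjacent pairs differing > 50 ms
    # Poincaré sub-axes: SD1 = spread perpendicular to the identity line,
    # SD2 = spread along it (SD2^2 = 2*SDNN^2 - SD1^2).
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2)
    sd2 = np.sqrt(max(2 * np.var(rr, ddof=1) - np.var(diff, ddof=1) / 2, 0.0))
    return {"SDNN": sdnn, "NN50": nn50, "SD1": sd1, "SD2": sd2}

# Example: five beats, one adjacent pair differing by more than 50 ms
feats = hrv_time_features([800, 810, 790, 820, 760])
```

SDANN is the same SDNN computation applied to the means of consecutive 5-minute segments, so it is omitted from this short-series sketch.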
3 Interaction Design

3.1 Procedure
Participants begin interacting with the device as they enter the room. At this point, the participant has the role of an observer. Once they enter the experience, a cushion
Emotional Data Visualization for Well-Being, Based on HRV Analysis
1273
is placed in the experience area. The cushion visually evokes zen meditation, suggesting to the experiencer to sit still. Once the experiencer is seated, the sensor attached to their skin starts capturing ECG data. At the same time, the scale wall (Fig. 2) performs an opening movement, suggesting that the experience has begun. The device then starts the HRV analysis: features extracted from the ECG signal are mapped to multiple motion dimensions of the scale wall to give feedback to the experiencer.
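The interaction logic can be sketched as a small state machine. The thresholds, the stress index, and the hold count below are hypothetical placeholders, since the paper does not publish the actual control values:

```python
from enum import Enum

class WallState(Enum):
    REGULAR = "regular rhythm"
    IRREGULAR = "irregular rhythm"
    CALM = "calm (session over)"

class ScaleWall:
    """Hypothetical scale-wall controller: a stress index above stress_thr
    triggers irregular motion; staying below calm_thr for `hold` consecutive
    updates locks the wall into a terminal calm state."""
    def __init__(self, stress_thr=0.6, calm_thr=0.2, hold=3):
        self.stress_thr, self.calm_thr, self.hold = stress_thr, calm_thr, hold
        self.calm_count = 0
        self.state = WallState.REGULAR

    def update(self, stress_index):
        if self.state is WallState.CALM:
            return self.state                  # no longer reacts to the user
        if stress_index < self.calm_thr:
            self.calm_count += 1
            if self.calm_count >= self.hold:
                self.state = WallState.CALM
                return self.state
        else:
            self.calm_count = 0
        self.state = (WallState.IRREGULAR if stress_index > self.stress_thr
                      else WallState.REGULAR)
        return self.state
```

Requiring several consecutive calm readings before ending the session avoids terminating on a single momentary dip in the stress estimate.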
Fig. 2. “Scale wall” visual variations
The overall trend of the movement is a regular rhythm. If the analysis of the data detects emotional stress in the experiencer, the movement of the scales changes from regular to irregular. Through these two feedback signals, regular and irregular motion of the scale wall, the user is gradually guided toward a calm, meditative state, achieving the goal of emotional management. Finally, when the data reach a certain threshold (that is, when the experiencer is calm enough), the scale wall enters a calm state, suggesting that the experience has ended, and no longer reacts to the user's actions. At this point, the experience process is over.

3.2 Flake Design
When it comes to a sense of order, striking examples can be found in animal scales in nature. Different types of scales have different shapes and arrangements, so biological scales found in nature, including representative scales of fish and reptiles, were analyzed to find a suitable combination for the device. To reflect the core design concept of "taming a beast", the authors referenced the scales of the pangolin when designing the flakes. Pangolin scales are narrow and sharp, with a longitudinal projection at the tail. Although the pangolin itself does not give a fierce impression, the shape of its scales is close to the "dragon's scale" of popular imagination. For the arrangement of the scales, the design refers to the western diamondback rattlesnake and the rainbow trout; for the overall movement pattern, it refers to the scale movement that occurs when a chameleon changes color. Inspired by the articulation between the snake's scales and the
body, the mechanical part consists of a hinge mechanism at the upper end of each scale that connects it to the underlying substrate and supports the rotational movement of the scale about the corresponding axis (Fig. 3).
Fig. 3. Flake mechanical design
4 Discussion

For this paper, a comparative analysis of eight ECG samples was conducted. Some of the datasets come from PhysioNet, which provides an ECG database recorded from meditation experts; the other data come from our own experiments with volunteers. One of the results is shown in Fig. 4, where the green lines indicate the LF and HF spectra during normal activity and the red lines those during meditation. The comparison shows that, as the meditation process deepens, participants generally exhibit relatively higher spectral power in both the LF and HF bands.
Fig. 4. LF and HF power comparison between meditation and normal activities
A similar result was found in S. A. Matzner's research [6]: "In the time domain, the variance of the heart rate during meditation was significantly higher than before meditation for all subjects". There is no conclusive evidence that HF and LF power correlate positively with meditation practice; nevertheless, this result is sufficient to suggest that a deep meditative state is accompanied by an increase of spectral power in the HF and LF bands. It can therefore be used as an indicator to determine the degree of chaos of the scale wall.
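A minimal sketch of how LF/HF band power can be computed from an RR series follows: the tachogram is resampled onto an even 4 Hz grid by linear interpolation, de-meaned, and a naive DFT sums the power per band. Real analyses would use Welch's method from a signal-processing library; the resampling rate and band edges follow the definitions given in Sect. 2:

```python
import math

def band_powers(rr_s, fs=4.0, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    """LF/HF power from an RR-interval series (seconds): resample the
    tachogram onto an even grid, remove the mean, and sum DFT power
    over each frequency band."""
    t = [0.0]
    for r in rr_s[:-1]:
        t.append(t[-1] + r)                 # beat occurrence times
    n = int(t[-1] * fs)
    y, j = [], 0
    for k in range(n):                      # linear interpolation onto grid
        g = k / fs
        while j < len(t) - 2 and t[j + 1] < g:
            j += 1
        w = (g - t[j]) / (t[j + 1] - t[j])
        y.append(rr_s[j] * (1 - w) + rr_s[j + 1] * w)
    mean = sum(y) / n
    y = [v - mean for v in y]
    powers = {"LF": 0.0, "HF": 0.0}
    for k in range(1, n // 2):              # naive DFT, positive bins only
        f = k * fs / n
        re = sum(y[m] * math.cos(2 * math.pi * k * m / n) for m in range(n))
        im = sum(y[m] * math.sin(2 * math.pi * k * m / n) for m in range(n))
        p = (re * re + im * im) / n
        for name, (lo, hi) in (("LF", lf), ("HF", hf)):
            if lo <= f < hi:
                powers[name] += p
    return powers
```

An RR series whose length oscillates at 0.1 Hz, for example, concentrates its power in the LF band, which is the kind of pattern the comparison in Fig. 4 summarizes.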
5 Conclusion

Recent studies in the biomedical field include considerable research on HRV analysis, and many inspiring results have been produced. HRV is also increasingly used in fields such as sports research and rehabilitation, and it is expected to remain a valuable research direction. Because the correlation between the signal and the output is weak, users cannot directly describe the correspondence at the physical level. However, the HRV biofeedback training system not only effectively estimates the emotional state of the experiencer but also offers a brand-new interactive experience. The real-time processing method described in this paper effectively eliminates the noise generated during ECG signal acquisition, which makes it possible to collect and analyze signals in more complex environments, such as exhibitions and athletic activities, and gives the algorithm a relatively wide range of application. Meditation is an effective means of emotional regulation, but it is not widely applied because it requires experience and long training, and its effects vary across individuals. The process described in this paper uses the aforementioned HRV analysis method to convey mental signals to the experiencer during meditation, guiding them into meditation more quickly and helping to maintain their mental health.

Acknowledgments. We are grateful to Mitsuo Tsuda for his useful and constructive advice, and to Genki Fujita for his valuable technical support.
References
1. Miller, G.: Why is mental illness so hard to treat? Science 338, 32–33 (2012)
2. Berntson, G.G., Thomas Bigger, J., Eckberg, D.L., Grossman, P., Kaufmann, P.G., Malik, M., Nagaraja, H.N., Porges, S.W., Saul, J.P., Stone, P.H., Van Der Molen, M.W.: Heart rate variability: origins, methods, and interpretive caveats. Psychophysiology 34(6), 623–648 (1997)
3. Shaffer, F., McCraty, R., Zerr, C.L.: A healthy heart is not a metronome: an integrative review of the heart's anatomy and heart rate variability. Front. Psychol. 5, 1–19 (2014)
4. Acharya, U.R., Joseph, K.P., Kannathal, N., Lim, C.M., Suri, J.S.: Heart rate variability: a review. Med. Biol. Eng. Comput. 44, 1031–1051 (2006)
5. Rosenberg, W.V., Chanwimalueang, T., Adjei, T., Jaffer, U., Goverdovsky, V., Mandic, D.P.: Resolving ambiguities in the LF/HF ratio: LF-HF scatter plots for the categorization of mental and physical stress from HRV. Front. Physiol. 8, 1–12 (2017) 6. Matzner, S.A.: Heart rate variability during meditation. In: ECE 510 Statistical Signal Processing, pp. 1–4 (2003)
A Consumer-Centric Approach to Understand User’s Digital Experiences Yeon Ji Yang, Jaehye Suk, Kee Ok Kim(&), Hyesun Hwang, Hyewon Lim, and Muzi Xiang Department of Consumer and Family Sciences, Sungkyunkwan University, Seoul, Korea [email protected]
Abstract. Life-friendly payment platforms, offered by all-in-one applications, have transformed the way consumers interact with financial and non-financial services. This study investigates user experiences in the digital financial sphere with an innovative payment application, PAYCO, from a consumer-centric perspective, to give application developers practical insights into domain requirements and opportunities and thereby improve user experience. 11,586 user reviews of the application, posted from March 1, 2016 to February 28, 2019, were collected from the Google Play website. The results reveal that the reviews shifted from negative to positive, and from reviews about the major functions of the application to reviews about the multiple functions of a life-supporting payment platform. Collectively, these reviews help build a robust framework that addresses changing consumer expectations in the payment space. It is envisaged that adopting this approach will help enhance the quality of human life.

Keywords: All-in-one payment application · Mobile application · Topic modeling analysis · Network analysis · User experience
1 Introduction

Fintech provides the facility of making financial transactions via smartphone applications or a company's website. Payment systems, which allow convenient and speedy monetary transactions, benefit the most from fintech [1]. Powerful mobile-app-based payments are revolutionizing the consumer experience in transactions. The transformation of consumers' engagement with non-bank payment apps will raise their expectations to the same level as their expectations of any other supplier in the digital economy. Innovation in app development should be grounded in end users' experiences. Deciding what to focus on and how to develop and deliver new services requires a consistent framework through which a developer can evaluate present and future technologies [2]. When consumer needs and feedback serve as that framework, innovation will be set to deliver growth and value in human life in the years ahead. Launched in August 2015, PAYCO has become one of the leading non-bank mobile-payment providers with an all-in-one fintech application in Korea. The company
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1277–1283, 2020. https://doi.org/10.1007/978-3-030-39512-4_195
aims to transform the app into a life-friendly payment platform by offering both financial and non-financial services. The purpose of this study is to understand consumers' digital experiences with this innovation-driven payment application from a consumer-centric perspective, to give application developers practical insights into domain requirements and opportunities and thereby improve the related industry.
2 Methods

2.1 Data
Data were collected from online reviews posted on the Google Play website over the three years from March 1, 2016 to February 28, 2019, using R 3.3.3. A total of 11,691 texts were collected. Texts unrelated to the services provided by PAYCO were excluded, and words with the same meaning were unified. The final tidy dataset comprised 11,586 reviews. The number of online reviews increased sharply over the three years, from 1,261 in 2016 and 1,441 in 2017 to 7,459 in 2018.

2.2 Topic Modeling
Topic modeling was applied to understand consumers' main concerns regarding the payment app; six to seven topics were extracted per year using the lda (Latent Dirichlet Allocation) package in R 3.5.3. Topic models extend and build on classical methods in natural language processing [3]. LDA is a Bayesian mixture model for discrete data in which topics are assumed to be uncorrelated [4].

2.3 Network Analysis
Network analysis offers an intuitively visual way of thinking about relationships and interactions among entities such as keywords from online reviews. Bigram analysis is one of the methods commonly used to examine the relationships between adjacent words in a text corpus [5]. The tidytext package in R 3.5.3 was used to tokenize the text into pairs of adjacent words, and the ggraph and igraph packages in R 3.5.3 were used to visualize the results of the network analysis.
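The core of the bigram step can be sketched in a few lines of Python. The paper's actual workflow used tidytext's tokenizer in R; this stand-in simply pairs adjacent tokens and keeps the frequent pairs as weighted network edges:

```python
from collections import Counter

def bigram_edges(token_lists, min_count=2):
    """Count adjacent word pairs across all reviews and keep the frequent
    pairs as weighted edges of a keyword network."""
    counts = Counter()
    for tokens in token_lists:
        counts.update(zip(tokens, tokens[1:]))  # adjacent-word pairs
    return {pair: c for pair, c in counts.items() if c >= min_count}
```

The `min_count` cutoff plays the same role as the frequency filter applied before plotting with ggraph/igraph: it prunes one-off pairs so the network shows only recurring keyword relationships.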
3 Results

3.1 Topic Extraction
First, the topic modeling analysis for 2016 yielded seven topics; the top fifteen keywords of each topic are listed in Table 1. The seven topics reveal three consumer issues, labeled improvements, unstable system, and privacy. Topics 1 and 4 indicate users' wishes for improvements, such as demanding more varied offline
partner shops and overcoming system errors or customer-service call errors. Topics 2, 3, and 5 reveal negative user experiences caused by the unstable system: transportation card charging and recognition errors (Topic 2), inconvenience due to authentication errors when joining the application (Topic 3), and barcode or fingerprint recognition errors (Topic 5). Privacy issues appear in Topics 6 and 7, whose keywords indicate consumers' concerns about personal-information theft when logging in and about certificate errors or certificate security when using banking-related functions such as remittance and deposit.

Table 1. Topic modeling results in 2016

Improvement
Unstable system
Topic 1
Topic 4
Topic 2
1
Partner
Discount
2 3 4 5
Privacy Topic 3
Topic 5
Topic 6
Topic 7
Registration
Usage
Card
Remittance
Shop Update Usage Camera
Transportation card Coupons Charging Authentication Card Identification Error First payment Bus
Card Password Error Join
Check Card Account Usage Bank Event Usage Login Authentication
6 7 8 9
Functions Shopping Need Variety
Customer Call Improvement Center
Usage Recognition Credit card Balance
Inconvenient Input Complete Button
Offline Fingerprint Accumulate Convenience store Support Phone type Execute Galaxy
10 11 12 13 14 15
Offline Service Download Login Connect Order
Cancel Delivery Weird Person Upgrade Mobile
Subway Usim Pay later Delete Apply Doesn’t work
Steal The best Install Private information Bank Method Delete Information Wallet Add Way Barcode Benefits Number Doesn’t work Error Authentication Touch The latest Member Site Bank
Amount Function Deposit Virtual The worst Security Certificate Internet Error Shopping
Second, the topic modeling analysis for 2017 yielded six topics; the fifteen top keywords of each topic are listed in Table 2. These six topics depict three consumer issues: unstable system, benefits, and method of usage. Topics 1, 3, and 5 indicate users' negative experiences caused by an unstable system, including installation and authentication errors when users install and join the application, transportation card charging or recognition errors, and requests for inspection or refunds; Topic 5 also contains mixed responses about the payment service. Topic 2 concerns the benefits of using the app, such as coupons or a 3% discount. Topics 4 and 6 relate to extended usage of the app, including partner shops for games, bookstores, and food according to users' lifestyles, and activities with friends, such as gift giving or going to movie theaters.
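The LDA extraction behind these tables can be illustrated with a toy collapsed Gibbs sampler. This is illustration only: the study used R's lda package, and the corpus, hyperparameters, and iteration count below are all assumptions for the sketch:

```python
import random

def lda_gibbs(docs, k, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA over tokenized documents.
    Returns the top-3 words per topic and the topic-word count matrix."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    widx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    z = [[rng.randrange(k) for _ in d] for d in docs]   # topic assignments
    ndk = [[0] * k for _ in docs]                       # doc-topic counts
    nkw = [[0] * V for _ in range(k)]                   # topic-word counts
    nk = [0] * k                                        # tokens per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][widx[w]] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, wi = z[d][i], widx[w]
                ndk[d][t] -= 1; nkw[t][wi] -= 1; nk[t] -= 1
                # full conditional p(topic j | everything else)
                weights = [(ndk[d][j] + alpha) * (nkw[j][wi] + beta)
                           / (nk[j] + V * beta) for j in range(k)]
                t = rng.choices(range(k), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wi] += 1; nk[t] += 1
    top = [[vocab[i] for i in sorted(range(V), key=lambda i: -nkw[j][i])[:3]]
           for j in range(k)]
    return top, nkw
```

Ranking each topic's word counts, as the last step does, is exactly how the "top fifteen keywords per topic" columns of Tables 1 to 3 are produced from a fitted model.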
Table 2. Topic modeling results in 2017
2 3 4 5 6 7 8
Unstable system Topic 1 Topic 3 Authentication Transportation card Installation Charging Update Error Delivery Input Join Wallet Error Login Shopping Complete Identification Inspection
9
Delivery app
1
Topic 5 Card Registration Convenient Coupons Remittance Offline Account Bank
Prepayment
Error
10 Clothing 11 Mobile
Recognition Refund
Way Password
12 Benefits
Balance
13 Doesn’t work
Inquiry
14 Member 15 Delete
Keyboard Reservation
Benefits Topic 3 Discount
Method of usage Topic 4 Topic 6 Partner Charging
Usage Error 3 percent Kakao Fingerprint Add Home shopping Weekend
Shop Bookstore Food Simple Order Credit card Convenience store Partnership
Charging Lifestyle Ten thousand Wallet won Check card Bank Person
Samsung pay Event Choice
Account
Customer
Support Center Accumulation Game
Purchase Cafe Friend Amount Fee Gift card Gift Movie theater Game Implement Remittance charge Culture Movie Free
Third, the topic modeling analysis for 2018 yielded seven topics. All seven topics reveal functional benefits of the app, with only minor negative experiences. The major functions of the app appear in Topic 1, bank-related functions in Topic 5, and further functions, such as ordering food or books, in Topic 3. Benefits of using the app appear in Topic 4 (partner shops), Topic 6 (gift giving), and Topic 7 (discounts and coupons). Inconvenience from the automatically charged minimum amount appears in Topic 2 (Table 3).

Table 3. Topic modeling results in 2018

Topic 1
Topic 2
Topic 3
Topic 4
Topic 5
Topic 6
Topic 7
1 2 3
Remittance Free Magazine
Card Event Security
Discount Accumulation Friend
Functions Account Registration
Shopping Gift Mobile
Coupons Partner Shop
4
Delivery
Gift shop
Bank
Error
Offline
5
Remittance charge Fee
Usage Charging Transportation card Automatic Gift card
Variety
Inquiry
Purchase
Benefits
6 7
Credit rating Information
Convenience store Food People
Gifticon Add
Samsung pay Simple Authentication Internet
Game Amount
Wallet Necessary
(continued)
Table 3. (continued) Topic 1 8 9 10 11 12 13 14
3 percent Management Ticket Frequent use Reservation Installation Ten thousand won 15 Times
Topic 2
Topic 3
Topic 4
Topic 5
Topic 6
Topic 7
Inconvenience Cash Interconnect Minimum Culture Google play Credit card
Inquiry Fortune Bookstore Order Delivery app Join Method
Cafe Recommendation Type Start Download Clothing Coffee
Certificate Improvement Breakdown Fingerprint Movie Update Inquiry
Products Authentication Company Fun Parents Hot deal Credit Card
Useful Lifestyle Worry Amazing Criteria Utilize Discount
Save
Check card
Satisfaction
Bankbook
Birthday
Trust
3.2 Interrelationships of Keywords
Figure 1 shows the result of the network analysis for 2016. Most words are closely linked to the word usage. The connection between usage and transportation card shows that charging and recognition errors of transportation cards are important issues for users. The connection between usage and offline indicates that users use the app at offline partner shops. The word convenience, linked to usage at the center of the network, connects to various benefits, such as remittance, event, (accumulated) points, and (first-payment) discount. The words linked with usage clearly indicate users' positive functional experiences with the app. The words not associated with usage, starting from private information, such as steal, doubt, login, and limit usage, indicate users' severe concerns about their private information.
Fig. 1. Result of network analysis in 2016
The result of the network analysis for 2017 reveals the origins of positive and negative user experiences with the app, as shown in Fig. 2. The major positive experiences with the app come from points, which could be accumulated through various convenient usages or orders and spent as a 3% discount or coupons in convenience stores or offline partner shops. The major negative experiences come
from a charge of 10,000 won required as prepayment for the transportation card, which in turn links to inconvenient card registration, installation, updates, and app registration.
Fig. 2. Result of network analysis in 2017
The result of the network analysis for 2018 reveals the extended benefits of using the app, as shown in Fig. 3. The 3% discount with coupons and the accumulation of points through shopping or gift giving are the most highly appreciated by the app users. Other major benefits come from convenient remittance, supported by the transportation card, automatic charging, account checking, and notification services.
Fig. 3. Result of network analysis in 2018
The results of the topic modeling and of the network analysis over the three-year time span reveal that users' experiences with the app have shifted from negative to positive, and from those related to the major functions of the app to those
related to an all-in-one platform offering financial and non-financial services. The network analysis of the user reviews adds detailed stories of user experience to the findings of the topic modeling analysis.
4 Conclusion

In this study, topic modeling and network analysis were applied to consumers' reviews of the simple payment application PAYCO over the past three years. The results suggest that consumers' digital experiences with the app have become more positive: negative experiences caused by system errors or privacy issues have quickly disappeared, and the benefits of the extended financial and non-financial services offered through the app are clearly appreciated by users. These positive experiences reflect fintech's advances in the payment space and set a new norm for consumer expectations by offering enhanced convenience and utility. Fintech has clearly made simple payment services some of the most intensive users of personal data, so the protection of consumer privacy and the responsible use of consumer data become even more pivotal to the functioning of fintech services. This study shows that text mining with topic modeling and network analysis can help app developers understand users' daily experiences with an app. Bridging the gap between users and app developers, and further between social scientists and computer scientists, is challenging: a technical perspective and a human perspective need to interact intimately to deliver growth and value to society. The limitations of this study include the lack of a Korean dictionary for identifying and extracting words or removing stop words, and the lack of agreed selection criteria for the number of topics. To gain valid and valuable insights into consumer experience with mobile apps, more robust data-cleaning and text-mining procedures are needed.
References
1. Bates, R.: Banking on the future: an exploration of fintech and the consumer interest. A report for Consumers International (2017)
2. Sahn, B.: Customer experience at the centre of innovation agenda. FinTech Futures (2018). https://www.fintechfutures.com/2018/02/customer-experience-at-the-centre-of-innovation-agenda/
3. Hornik, K., Grün, B.: topicmodels: an R package for fitting topic models. J. Stat. Softw. 40(13), 1–30 (2011)
4. Nigam, K., McCallum, A.K., Thrun, S., Mitchell, T.: Text classification from labeled and unlabeled documents using EM. Mach. Learn. 39(2–3), 103–134 (2000)
5. Qasim, M.: Sustainability and wellbeing: a text analysis of New Zealand parliamentary debates, official yearbooks and ministerial documents. No. 19/01 (2019)
Research on Design Skills for Personnel Evaluation Systems and Educational Programs

Toshiya Sasaki(&)

X Design Academy, 4-1-35, Kudankita, Chiyoda-Ku, Tokyo 102-0073, Japan
[email protected]

Abstract. In this research, we investigated next-generation design skills and current evaluation systems, and developed a prototype evaluation system.

Keywords: Design thinking · Service design · Computational design · Human centered design · Design skills
1 Introduction

In recent years, businesses using data and artificial intelligence have been spreading through society. With this trend, the roles of design within organizations and the design skills required are diversifying. The purpose of this study is to organize design skills and use them as indicators for organizational personnel evaluation systems and educational programs.
2 Approach

In this study, we investigate design skills and iteratively verify hypotheses about personnel evaluation systems and educational programs using prototypes built from those hypotheses (Fig. 1).
Fig. 1. Approach
3 Definition of “Design Skills”

Design skills are the abilities of thought and practice needed to create the strategy, structure, and appearance of services and products in IT companies.
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1284–1288, 2020. https://doi.org/10.1007/978-3-030-39512-4_196
4 Investigation of Design Skills

The concept of design was organized from the viewpoints of design definition [1], design type, and design process [2] (Fig. 2). The 31 skill elements obtained from the survey of design skills were classified into 10 design skills [3].
Fig. 2. Organizing design concepts
5 Hypotheses and Prototypes

Since the skills required of each member differ by organization, typical designer patterns were created [4]. We considered that setting the necessary skills for each pattern could serve as an index for personnel evaluation. The "Design Skill Sheet" and the "Designer Pattern Sheet" were created as prototypes. In the Design Skill Sheet, when the proficiency levels of the 31 skill elements are input, the proficiency levels of the 10 design skills are visualized. This tool is intended for diagnosing skill proficiency in personnel evaluations and for goal setting (Fig. 3). The Designer Pattern Sheet visualizes the proficiency levels of the skills required for each designer pattern across six career levels. Career levels, required skills, and proficiency levels are assumed to be arranged by each organization and used to grasp the skills required for each pattern (Fig. 4).
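The Design Skill Sheet's aggregation step can be sketched as follows. The element names and their grouping into skills are hypothetical, since the paper does not publish the actual mapping of the 31 elements onto the 10 design skills:

```python
# Hypothetical element-to-skill grouping standing in for the sheet's
# real 31-element / 10-skill mapping.
SKILL_MAP = {
    "user interview": "research",
    "persona building": "research",
    "wireframing": "prototyping",
    "usability testing": "prototyping",
    "visual hierarchy": "visual design",
}

def skill_profile(element_scores):
    """Aggregate per-element proficiency scores (e.g. on a 1-5 scale)
    into per-skill averages, as the Design Skill Sheet visualizes."""
    totals, counts = {}, {}
    for element, score in element_scores.items():
        skill = SKILL_MAP[element]
        totals[skill] = totals.get(skill, 0) + score
        counts[skill] = counts.get(skill, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}
```

The resulting per-skill averages are what a radar-chart style sheet would plot for each member when diagnosing proficiency or setting goals.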
Fig. 3. Design Skill Sheet
6 Hypothesis Testing

A workshop on visualizing design skills was held with eight designers from IT companies. The workshop revealed the following problems:
• The explanation of each skill element is difficult to understand
• It is difficult for members to align their understanding of the terminology
• The perception of proficiency varies from person to person
Fig. 4. Designer Pattern Sheet
For the personnel evaluation prototype, we will consider making the explanation of each skill and element easier to understand and making proficiency levels easier to align.
7 Next Steps

For the educational programs, the necessary skills are organized at each step of the design process, the priority of the skills required by the members of the target organization is considered, and the educational programs are then examined.
References
1. Ministry of Economy, Trade and Industry / Patent Office: Study Group Considering Industrial Competitiveness and Design, Declaration of “Design Management” (2018)
2. Norman, D.A.: The Design of Everyday Things: Revised and Expanded Edition (2015)
3. Human Centered Design Organization: HCD Competence Map (2018)
4. Merholz, P., Skinner, K.: Org Design for Design Orgs: Building and Managing In-House Design Teams (2016)
Correction to: Intelligent Human Systems Integration 2020 Tareq Ahram, Waldemar Karwowski, Alberto Vergnano, Francesco Leali, and Redha Taiar
Correction to: T. Ahram et al. (Eds.): Intelligent Human Systems Integration 2020, AISC 1131, https://doi.org/10.1007/978-3-030-39512-4 The original version of the book was inadvertently published with an incorrect spelling of the authors’ names in Chapter 16 and an incorrect author’s name in Reference 5 of Chapter 112. The book has been updated with the changes.
The updated versions of these chapters can be found at https://doi.org/10.1007/978-3-030-39512-4_16 https://doi.org/10.1007/978-3-030-39512-4_112 © Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, p. C1, 2020. https://doi.org/10.1007/978-3-030-39512-4_197
Author Index
A Abdennadher, Slim, 208 Acosta-Vargas, Gloria, 875 Acosta-Vargas, Patricia, 853, 875 Afanasyev, Vladimir, 307 Agarwal, Shivangi, 1028 Ajdari, Alireza, 409 Alava Portugal, Cyntia, 1201 Alexandris, Christina, 276 Alkuino, Glenn, 871 Altamirano, Ernesto, 683 Alvarez-Merino, José, 683 Amasifén-Pacheco, Diego, 711 Ambeck-Madsen, Jonas, 260 Amrousi, Mohamed El, 1145 Andajani, Sari, 753 Angulo-Baca, Alejandra, 697 Antopolskiy, Sergey, 260 Arcoraci, Andrea, 34 Asai, Haruna, 111 Ascari, Luca, 260 Aston, Jeremy, 118 Astudillo, Catalina, 392 Avanzini, Pietro, 260 Avila Beneras, Josefina, 1256 B Baba, Akira, 385 Bacciaglia, Antonio, 796 Bachechi, Chiara, 468 Baghramyan, Aleksandr, 172 Baier, Ralph, 1007 Balugani, Elia, 474 Barbi, Silvia, 803 Bardales, Z., 660
Basapur, Santosh, 573 Beggiato, Matthias, 932 Bellotti, Andrea, 260 Benelli, Elisabetta, 241 Bengler, Klaus, 73 Berberoğlu, Mehmet, 646 Bernal-Bazalar, Michael, 697 Berrú, Julio César Ortíz, 421 Bertacchini, Alessandro, 803 Bettelli, Valentina, 816 Bielecki, Konrad, 1007 Bilyk, Rostyslav, 439 Bimenyimana, Christian Ildegard, 667 Bogg, Adam, 938 Bolock, Alia El, 208 Borghi, Guido, 104 Boronenko, Marina, 365, 398 Boronenko, Yurii, 398 Bozhikova, Violeta, 480 Bradley, Mike, 20 Brechmann, André, 159 Bromfield, Mike, 938 Buddhan, Sasikumar, 1213 Burgio, Paolo, 653 Burov, Oleksandr, 282, 359 Buss, Peter, 404 Butturi, Maria Angela, 474 C Cadzow, Scott, 15 Cai, Xijiang, 1219 Caira-Jimenez, Manuel, 690 Calogero, Angela Lucia, 214 Campa, Francesco, 214 Camplone, Stefania, 835
© Springer Nature Switzerland AG 2020 T. Ahram et al. (Eds.): IHSI 2020, AISC 1131, pp. 1289–1295, 2020. https://doi.org/10.1007/978-3-030-39512-4
1290 Canales-Ramos, Lourdes, 746 Cano-Lazarte, Mercedes, 740 Caon, Maurizio, 66 Castellano, Andrea, 653, 1048 Castelli, Andrea, 266 Castro-Blancas, Audy, 719 Celi Costa, Rosana María, 510 Ceruti, Alessandro, 796 Cespedes-Blanco, Carlos, 719 Chaiklieng, Sunisa, 372, 753 Chang, Mi, 1132, 1138 Chatelais, Bénédicte, 177 Chavez-Soriano, Pedro, 746 Chen, Chun Hsien, 183 Chen, Jianfeng, 453, 532 Chen, Xuebo, 415 Chen, Yiyan, 766 Chen, Yu, 539 Cheng, Jianxin, 136 Cheung, Virginia, 129 Chi, Maoping, 1244 Choi, IlYoung, 1178 Chung, Jae-Eun, 865 Clark, Jed, 20 Co, Nguyen Trong, 1107 Colorado Pástor, Bryan, 1164 Colucciello, Alessia, 260 Contreras-Choccata, Denilson, 760 Crespo-Martínez, Esteban, 392, 889 Cucchiara, Rita, 104 Cuffie, Brandon, 165 Cui, Jiaqi, 1219 D Dalpadulo, Enrico, 789, 816 Davtyan, Kristina, 80 de Antonio, Angélica, 254 de S. Dunck, Danillo A., 896 Del Vecchio, Maria, 260 Delaigue, Pierre, 66 Dell‘Orfano, Vittorio, 266 Demierre, Marc, 66 Demirci, Hatice Merve, 646 Deng, YuanZhou, 1184 Desimoni, Federico, 468 Dhamodharan, Tamilselvi, 524, 1213 Di Caprio, Debora, 235 Di Salvo, Andrea, 34 Díaz, Sergio, 653 Dimova, Rozalina, 480 Djanatliev, Anatoli, 92 Dolgikh, Yana, 282
Author Index Dovramadjiev, Tihomir, 480 Dudek, Marius, 966 E Echevarria-Cahuas, Jose, 780 Elhakeem, Mohamed, 1145 Etinger, Darko, 486 Eurich, Sky O., 1028 Evreinov, Grigori, 882 F Fahimipirehgalin, Mina, 341 Fandialan, Ervin, 871 Farfan-Quintanilla, Zhelenn, 690 Farooq, Ahmed, 882 Favaro, Francesca, 80, 945, 1028 Fei, Dingzhou, 271, 433 Ferrara, Giannina, 190 Ferrara, Marinella, 909 Filieri, Jurji, 241 Finke, Jan, 153 Fleming, Lorraine, 335 Flemisch, Frank, 1007 Fois Lugo, Milagros, 1201, 1256 Fois, María Milagros, 1195 Fontanili, Luca, 1088 Forero, Boris, 546, 1094 Fossanetti, Massimo, 1048 Franco Puga, Julio, 1164 Franke, Katharina, 973 Freire, Rui Pedro, 118 G Gagné, Christian, 177 Galvez-Acevedo, Oscar, 740 Garay-Osorio, Angela, 711 Garcia-Montero, Diana, 704 Geniola, Viola, 835 Gherardini, Francesco, 816 Ghonaim, Ahmed, 208 Giacobone, Gian Andrea, 59, 823, 829 Giraldi, Laura, 46 Goh, Seng Yuen Marcus, 605 Gomes, Neil, 561 Gómez Rodríguez, Víctor Gustavo, 1119 González Calle, María José, 510 Gonzalez, Mario, 875 Grasso, Giorgio M., 41 Gremillion, Gregory, 999 Grigoletto, Alessia, 214 Grunstein, Daniel, 27 Grunstein, Ron, 27
Author Index Gu, Rongrong, 427 Guan, Cheng, 319 Guaña-Moya, Javier, 875 H Ha, Hee Ra, 1184 Hafez, Wael, 627 Hains, Alexandre, 177 Hambardzumyan, Larisa, 172 Hämmerle, Moritz, 1069 Han, Ye, 453 Hara, Tatsunori, 294 Harvey, Eric R., 201 Hashimoto, Yoshihiro, 111, 143, 301 He, Da, 532 He, JinLu, 1178 Hechavarría Hernández, Jesús Rafael, 546, 1094, 1119, 1164, 1190, 1201, 1206, 1256 Heilemann, Felix, 1014 Hengeveld, Bart, 810 Herbert, Cornelia, 208 Hernández, Daniel López, 1007 Herzberger, Nicolas Daniel, 1007 Hidalgo, Jairo, 446 Hidalgo, Paula, 875 Hlazunova, Olena, 282 Ho, Bach Q., 294 Hoang, Ngo Thanh, 1107 Holzbach, Markus, 916 Hong, Sukjoo, 1132 Hoose, Sebastian, 153 Hosokawa, Youichi, 301 Hounsou, Joël Toyigbé, 667 Hrynkevych, Olha, 439 Hu, Jun, 810 Hu, Lingling, 313 Hua, Jenna, 190 Huanca, Lucio Leo Verástegui, 421 Huang, Fei-Hui, 1152 Huber, Bernd, 92 Hummels, Caroline, 810 Hussainalikhan, NilofarNisa, 1213 Hwang, Hyesun, 633, 1277 I Ihara, Masayuki, 588 Ilapakurti, Anitha, 325, 348 Imanghaliyeva, Amangul A., 580, 1074 Imbesi, Silvia, 823, 829, 860 Inomae, Goro, 588 Isaeva, Oksana, 365, 398 Ishizaka, Alessio, 474
J
Jansone, Anita, 1113, 1233
Jaramillo, Félix, 546
Jaramillo, Robinson Vega, 546
Jeong, Yun Jik, 1171
Jinno, Yasuyoshi, 503
Jost, Jana, 153
JothiKumar, Karthikeyan, 1213
Jung, Minji, 865

K
Kaindl, Hermann, 9
Kambayashi, Yasushi, 1041
Kanenishi, Kazuhide, 677
Karandey, Vladimir, 307
Kedari, Santosh, 325, 348
Kedari, Sharat, 325, 348
Khaled, Omar Abou, 66
Khymych, Iryna, 439
Kim, Ahyoung, 865
Kim, JaeKyeong, 1178
Kim, Jenna, 190
Kim, Jisun, 20, 980
Kim, Kee Ok, 1171, 1277
Kim, Meereh, 1132
Kirks, Thomas, 153
Kiseleva, Elizaveta, 365, 398
Klapper, Jessica, 1069
Klaproth, Oliver W., 1021
Kolbachev, Evgeny, 288
Kondo, Akiko, 677, 1082
Konieczna, Monika, 1225
Krems, Josef, 932
Krylova-Grek, Yuliya, 359
Kuuseok, Ahto, 619
Kyzenko, Vasyl, 282

L
La Fleur, Claire, 999
Laarni, Jari, 1057
Lafond, Daniel, 177, 201
Lai, Po Yan, 1138
Landini, Elisa, 653, 1048
Langdon, Pat, 20
Laßmann, Paula, 3
Latessa, Pasqualino Maietta, 214
Lauer, Tim, 973
Lavrov, Evgeniy, 282, 359
Lavrova, Olga, 282
Leali, Francesco, 789, 816
Lee, Ji-Hyun, 1132, 1138
Lee, Jun Hee, 1138
Lee, Seonglim, 1184
Lee, Subin, 600
Lee, Tony, 129
Lee, Yu Lim, 865
Leikas, Jaana, 594
Leon-Chavarril, Claudia, 733
Leung, Michael, 129
Leyva, Maikel, 1164
Li, Cun, 810
Li, Fan, 183
Li, Xian, 248
Li, Xu, 633
Li, Yangxu, 294
Li, Zhigang, 415
Liao, Rui, 1101
Lien, Tran Thi Phuong, 1107
Lim, Hyewon, 633, 1277
Lin, Shuhao, 190
Lindner, Sebastian, 925, 966
Lisi, Giuseppe, 111, 143, 301
Liu, Jie, 453, 532
Liu, Jing, 848
Liu, Jinjing, 640
Liu, Wenfeng, 766
Liverani, Alfredo, 796
Loch, Frieder, 341
Lolli, Francesco, 474
Lommerzheim, Marcel, 159
Lucifora, Chiara, 41
Lüdtke, Andreas, 52
Luján-Mora, Sergio, 254, 853
Luo, Ruiyao, 1270
Luo, Yun, 1101
Lye, Sun Woh, 183, 221, 605
Lytvynova, Svitlana, 359

M
Mackare, Kristine, 1113
Mackars, Raivo, 1113
Madhusudanan, Ajith, 1213
Magnone, Francesco Giuseppe, 1119
Maier, Thomas, 3
Malca-Ramirez, Carlos, 683
Maldonado-Maldonado, Juan Manuel, 392
Maldonado-Matute, Juan Manuel, 510
Manzoni, Luigi, 214
Maradiegue, F., 660
Marano, Antonio, 835
Marcano, Mauricio, 653
Marchenkova, Anna, 260
Marchi, Michele, 823, 829
Marois, Alexandre, 201
Martin, Bruno, 201
Martinez-Castañon, Jose, 740
Mascia, Maria Teresa, 816
Masters, Matthew, 959
Matsumae, Akane, 1270
Mazzeo, Elena, 266
McKenna, H. Patricia, 1063
Medina, Marvin, 871
Metcalfe, Jason S., 999
Meyer, Ronald, 1007
Mikros, George, 276
Milani, Massimo, 1088
Mincolelli, Giuseppe, 59, 823, 829, 860
Miranda Zamora, William Rolando, 498
Miwa, Tetsushi, 301
Miyake, Yutaka, 385
Mogrovejo, Pedro, 392, 889
Mohamed, Mona A., 842
Montalvo, Lourdes, 492
Montanari, Roberto, 1048
Montorsi, Luca, 1088
Montorsi, Monia, 803
Moscoso, Karla, 1195
Motomura, Yuki, 1270
Mouzakitis, Alex, 20
Mrugalska, Beata, 1225
Mugellini, Elena, 66
Mukaihira, Koki, 503
Müller, Juliane, 952
Muncie, Helen, 986
Mund, Dennis, 959
Murray, Acklyn, 335
Muthammal, Swashi, 524
Muthuramalingam, BhavaniDevi, 1213
Muzzioli, Gabriele, 1088
N
Nader, Nazanin, 945
Naghdbishi, Hamid, 409
Nah, Ken, 600, 640, 848
Nakajima, Hiroshi, 588
Naranjo, Katherine, 546
Neto, Guilherme F., 896
Neubauer, Catherine, 999
Ngoc, Trinh Minh, 1107
Niermann, Dario, 52
Nishigaki, Masakatsu, 385, 503
Nishimura, Eigo, 1270
Niu, Yafeng, 1219
Nsabimana, Thierry, 667
Nuñez-Salome, Luis, 683
Nyame, Gabriel, 1126

O
O'Donoghue, Jim, 20
Odumuyiwa, Victor, 667
Oetting, Andreas, 992
Ogata, Wakaha, 503
Ohki, Tetsushi, 385, 503
Okui, Norihiro, 385
Orliyk, Olena, 359
Ota, Jun, 294
Ottenstein, Maia, 561

P
Palomino-Moya, Jose, 733
Panukhnyk, Olena, 439
Parisi, Stefano, 916
Park, In-Hyoung, 865
Parkes, Andrew, 938
Pasetti, Chiara, 909
Pasko, Nadiia, 282
Patterson, Wayne, 335
Patwardhan, Viraj, 561
Paul, Gunther, 9
Pavlidis, Evgeni, 959
Peeters, Thomas, 195
Perconti, Pietro, 41
Perelman, Brandon S., 999
Pérez, Joshue, 653
Perez, Moises, 690, 697, 733, 740, 746
Perez-Paredes, Maribel, 711
Petrenko, Sergiy, 359
Petruccioli, Andrea, 816
Pinheiro, Jean-Philippe, 221
Pini, Fabio, 789
Pini, Stefano, 104
Pisaniello, John Dean, 404
Plebe, Alessio, 41
Po, Laura, 468
Pokorni, Bastian, 1069
Politis, Ioannis, 20
Popadynets, Nazariy, 439
Popov, Boris, 307
Popova, Olga, 307
Portilla Castell, Yoenia, 1119
Pourbafrani, Mahsa, 461
Prezenski, Sabine, 159
Q
Qin, Zhiguang, 1126
Qiu, Lincun, 1219
Quispe-Huapaya, Maria, 780

R
Radlmayr, Jonas, 73
Rahman, S. M. Mizanoor, 612, 1034
Raisamo, Roope, 882
Rajendran, Gopi Krishnan, 524
Ramar, Ramalakshmi, 524
Ramdoss, Sathiyaprakash, 1213
Ramírez, Jaime, 254
Ramirez-Valdivia, Cesar, 780
Ramos-Palomino, Edgar, 704
Rao, Zhihong, 453, 532
Rauh, Nadine, 932
Ravichandran, Sugirtha, 1213
Raymundo, Carlos, 719, 780
Raymundo-Ibañez, Carlos, 660, 690, 697, 711, 733, 740, 746, 760
Reichelt, Florian, 3
Ren, Jiapei, 1238
Revell, Kirsten, 20, 980
Richardson, Joy, 20, 980
Rivas-Zavaleta, Carlos, 719
Rivera, Luis, 660, 711, 719, 760, 780
Rizvi, Syeda, 945
Roberts, Aaron, 20
Rognoli, Valentina, 916
Rollo, Federica, 468
Roman-Ramirez, Luz, 704
Rossi, Emilio, 835
Ruiz, Maria, 1195
Russwinkel, Nele, 159, 1021
Rutitis, Didzis, 379

S
Saariluoma, Pertti, 594
Salvador-Acosta, Belén, 875
Salvador-Ullauri, Luis, 853
Sanchez Chero, Jose Antonio, 498
Sanchez Chero, Manuel Jesus, 498
Sánchez, Fernanda, 546
Santos-Arteaga, Francisco J., 235
SaravanaSundharam, SaiNaveenaSri, 1213
Sasaki, Toshiya, 1284
Sato, Keiichi, 573
Scataglini, Sofia, 195
Schaefer, Kristin E., 999
Schmidl, Paul, 92
Schulte, Axel, 925, 952, 959, 966, 1014
Schwerd, Simon, 925
Sekido, Taichi, 1041
Sellan, Karen, 1195
Sellitto, Miguel Afonso, 474
Seto, Edmund, 190
Shao, Junkai, 248, 313, 1238
Shen, Jie, 726
Shevchenko, Svitlana, 359
Shi, Bingzheng, 1219
Shiomi, Yuya, 385
Shuhaiber, Ahmed, 903
Sidorova, Elena, 288
Sippl, Christoph, 92
Skrypchuk, Lee, 20
Son, Hoang Huu, 1107
Song, Xiao Xi, 1184
Sorochak, Oleg, 439
Sosa-Perez, Valeria, 733
Sotelo-Raffo, Fernando, 690, 704
Sotelo-Raffo, Juan, 697, 760
Soto, Billy, 546
Sou, Ka Lon, 605
Stanton, Neville A., 20, 980
Stephane, Lucas, 165
Stimm, Dominique, 3
Stock, Timothy J., 567
Stoeva, Mariana, 480
Stoppa, Marcelo H., 896
Storchi, Gabriele, 1088
Suarez, Diego S., 889
Sudo, Fumiya, 143
Suggaravetsiri, Pornnapa, 372, 753
Sugimoto, Ayaka, 385
Suhir, Ephraim, 9
Suk, Jaehye, 1171, 1184, 1277
Suresh, Pavika, 1213

T
Takahashi, Kenta, 503
Takimoto, Munehiro, 1041
Talwar, Palak, 86
Tan, Shi Yin, 183
Tang, Simon, 129
Tang, Wencheng, 1250
Tango, Fabio, 653, 1048
Tempesta, Antonio, 214
Tenemaza, Maritzol, 254
Terenzi, Benedetta, 228
Ter-Sargisova, Viktoria, 172
Thomas, Manju, 1213
Thompson, Simon, 20
Tin, Man Lai-man, 517
Tingey-Holyoak, Joanne, 404
Tito, P., 660
Tkachenko, Oleksii M., 359
Togawa, Satoshi, 677, 1082
Toni, Rita, 214
Toselli, Stefania, 214
Tremblay, Sébastien, 201
Trignano, Claudia, 266
Tripi, Ferdinando, 214
Tsuchiya, Takashi, 503
Tupot, Marie Lena, 567
Turlisova, Jelena, 1233

U
Üyümez, Bilal, 992

V
Vachon, François, 201
Vadivel, VaibhavaShivani, 1213
Valavani, Christina, 276
Valero Fajardo, Carlos Luis, 1190
van der Aalst, Wil M. P., 461
Van Goethem, Sander, 195
van Zelst, Sebastiaan J., 461
Vaneeva, Polina, 288
Varotti, Davide, 214
Vecchiato, Giovanni, 260
Vega Jaramillo, Robinson, 1094
Velasquez-Vargas, Arelis, 746
Vergnano, Alberto, 214
Vernaleken, Christoph, 1021
Verwulgen, Stijn, 195
Vezzani, Roberto, 104
Vignati, Arianna, 228
Viktoriia, Yazina, 439
Villanueva, Edwin, 492
Vogel-Heuser, Birgit, 341
Volkova, Tatjana, 379
Vuppalapati, Chandrasekar, 325, 348
Vuppalapati, Jayashankar, 325, 348
Vuppalapati, Rajasekar, 325, 348

W
Wang, Fenghong, 766
Wang, Haiyan, 248, 1238
Wang, Huajie, 136
Wang, Xia, 433
Wang, Yifan, 453, 532
Wang, Yongbin, 1159
Wang, Yun, 1270
Wang, Zhangyu, 129
Wasser, Joscha, 1007
Watanabe, Hiroshi, 588
Wee, Hong Jie, 221
Wiersma, Ben, 404
Wu, Zhen, 99, 124, 773
Wyrwicka, Magdalena K., 1225

X
Xiang, Muzi, 1171, 1277
Xie, Yi, 1219
Xu, Hanyang, 1263
Xu, Hong, 605
Xue, Chengqi, 313, 773, 1219, 1263

Y
Yandún, Marco, 446
Yang, Yeon Ji, 1171, 1277
Yang, Zining, 129
Yarlequé, Cristhian Aldana, 421
Yeo, Harim, 633
Yi, Taeha, 1132, 1138
Yin, Guodong, 99, 124
Ylönen, Marja, 1057
Yu, Dehua, 552
Yu, Jian, 1159

Z
Zacherl, Larissa, 73
Zambrano, Milton, 1206
Zarabia, Omar, 254
Zelensky, Vladimir, 365
Zeng, Xiang, 319
Zhai, Binhong, 124, 773
Zhang, Chi, 99, 773
Zhang, Lu, 1101
Zhang, Qiang, 129
Zhang, Tianmai, 1250
Zhang, Tongtong, 319
Zhao, Shifeng, 726
Zhou, Jia, 415
Zhou, Lei, 319, 1244
Zhou, Wuzhong, 427
Zhou, Xiaozhou, 1263
Zhu, Yanfei, 773