Advances in Artificial Intelligence, Software and Systems Engineering: Proceedings of the AHFE 2020 Virtual Conferences on Software and Systems Engineering, and Artificial Intelligence and Social Computing, July 16-20, 2020, USA [1st ed.] 9783030513276, 9783030513283

This book addresses emerging issues concerning the integration of artificial intelligence systems in our daily lives.


Language: English. Pages: XIX, 615 [624]. Year: 2021.



Table of contents :
Front Matter ....Pages i-xix
Front Matter ....Pages 1-1
Development of a Holistic Method to Implement Artificial Intelligence in Manufacturing Areas (Bastian Pokorni, Fabian Volz, Jan Zwerina, Moritz Hämmerle)....Pages 3-8
Fighting Cyberbullying: An Analysis of Algorithms Used to Detect Harassing Text Found on YouTube (Rachel E. Trana, Christopher E. Gomez, Rachel F. Adler)....Pages 9-15
For Our Complex Future, Don’t Give Us AI, Give Us Intelligent Assistance (IA): The Case for Avionics (Sylvain Hourlier)....Pages 16-21
AI-Driven Worker Assistance System for Additive Manufacturing (Benjamin Röhm, Leo Gögelein, Stefan Kugler, Reiner Anderl)....Pages 22-27
Rating Prediction Used AI Big Data: Empathy Word in Network Analysis Method (Sang Hee Kweon, Hyeon-Ju Cha, Yook Jung-Jung)....Pages 28-32
Sound Identification System from Auditory Cortex by Using fMRI and Deep Learning: Study on Experimental Design for Capturing Brain Images (Jun Shinke, Kyoko Shibata, Hironobu Satoh)....Pages 33-39
Tracking People Using Ankle-Level 2D LiDAR for Gait Analysis (Mahmudul Hasan, Junichi Hanawa, Riku Goto, Hisato Fukuda, Yoshinori Kuno, Yoshinori Kobayashi)....Pages 40-46
New Trend of Standard: Machine Executable Standard (Haitao Wang, Gang Wu, Chao Zhao, Fan Zhang, Jing Zhao, Changqing Zhou et al.)....Pages 47-52
Machine Learning Analysis of EEG Measurements of Stock Trading Performance (Edgar P. Torres, Edgar A. Torres, Myriam Hernández-Álvarez, Sang Guun Yoo)....Pages 53-60
Evaluation System of GIS Partial Discharge Based on Convolutional Neutral Network (Liuhuo Wang, Lingqi Tan, Zengbin Wang)....Pages 61-68
Top-Level Design of Intelligent Video Surveillance System for Abandoned Objects (Chengwei Zhang, Wenhan Dai)....Pages 69-76
Detection of Human Trafficking Ads in Twitter Using Natural Language Processing and Image Processing (Myriam Hernández-Álvarez, Sergio L. Granizo)....Pages 77-83
Text Mining in Smart Cities to Identify Urban Events and Public Service Problems (Mario Gonzalez, Juan Viana-Barrero, Patricia Acosta-Vargas)....Pages 84-89
Deep Learning-Based Creative Intention Understanding and Color Suggestions for Illustration (Xiaohua Sun, Juexiao Qin)....Pages 90-96
Comparison of Probability Distributions for Evolving Artificial Neural Networks Using Bat Algorithm (Adeel Shahzad, Hafiz Tayyab Rauf, Tayyaba Asghar, Umar Hayat)....Pages 97-104
Deep Neural Networks for Grid-Based Elusive Crime Prediction Using a Private Dataset Obtained from Japanese Municipalities (Suguru Kanoga, Naruki Kawai, Kota Takaoka)....Pages 105-112
Remote Sensing, Heat Island Effect and Housing Price Prediction via AutoML (Rita Yi Man Li, Kwong Wing Chau, Herru Ching Yu Li, Fanjie Zeng, Beiqi Tang, Meilin Ding)....Pages 113-118
A Framework for Selecting Machine Learning Models Using TOPSIS (Maikel Yelandi Leyva Vazquezl, Luis Andy Briones Peñafiel, Steven Xavier Sanchez Muñoz, Miguel Angel Quiroz Martinez)....Pages 119-126
Front Matter ....Pages 127-127
Application-Oriented Approach for Detecting Cyberaggression in Social Media (Kurt Englmeier, Josiane Mothe)....Pages 129-136
A Study of Social Media Behaviors and Mental Health Wellbeing from a Privacy Perspective (Tian Wang, Masooda Bashir)....Pages 137-144
Risk Analysis for Ethical, Legal and Social Implications of Information and Communication Technologies in the Forestry Sector (Oliver Brunner, Katharina Schäfer, Alexander Mertens, Verena Nitsch, Christopher Brandl)....Pages 145-151
Traffic Scene Detection Based on YOLOv3 (Qian Yin, Ruyi Yang, Xin Zheng)....Pages 152-157
Social Networks’ Factors Driving Consumer Restaurant Choice: An Exploratory Analysis (Karen Ramos, Onesimo Cuamea, Jorge Morgan, Ario Estrada)....Pages 158-164
Front Matter ....Pages 165-165
Conversational Advisors – Are These Really What Users Prefer? User Preferences, Lessons Learned and Design Recommended Practices (Jason Telner, Jon Temple, Dabby Phipps, Joseph Riina, Rajani Choudhary, Umesh Mann)....Pages 167-173
Investigating Users’ Perceived Credibility of Real and Fake News Posts in Facebook’s News Feed: UK Case Study (Neil Bates, Sonia C. Sousa)....Pages 174-180
Future Trends in Voice User Interfaces (Jason Telner)....Pages 181-186
Artificial Intelligence Enabled User Experience Research (Sarra Zaghdoudi, Leonhard Glomann)....Pages 187-193
Product Sampling Based on Remarks of Customs in Online Shopping Websites for Quality Evaluation (Haitao Wang, Jing Zhao, Gang Wu, Fan Zhang, Chao Zhao, Xinyu Cao)....Pages 194-200
Affective Computing Based on Remarks of Customs in Online Shopping Websites (Haitao Wang, Fan Zhang, Gang Wu, Jing Zhao, Chao Zhao, Xinyu Cao)....Pages 201-206
A Review of Sign Language Hand Gesture Recognition Algorithms (Casam Nyaga, Ruth Wario)....Pages 207-216
Recognition Method of Feature Word Terms in Chinese Named Entities (Xinyu Cao, Jing Zhao, Fan Zhang, Gang Wu, Chao Zhao, Haitao Wang)....Pages 217-221
Research on the Sampling Procedures for Inspection by Variables Based on the Rate of Nonconforming Products (Jing Zhao, Jingjing Wang, Gang Wu, Chao Zhao, Fan Zhang, Haitao Wang)....Pages 222-231
Front Matter ....Pages 233-233
Deepfakes for the Good: A Beneficial Application of Contentious Artificial Intelligence Technology (Nicholas Caporusso)....Pages 235-241
Research on Somatotype Recognition Method Based on Euclidean Distance (Tong Yao, Li Pan, Jun Wang, Chong Yao)....Pages 242-249
Robust and Fast Heart Rate Monitoring Based on Video Analysis and Its Application (Kouyou Otsu, Qiang Zhong, Das Keya, Hisato Fukuda, Antony Lam, Yoshinori Kobayashi et al.)....Pages 250-257
Automatic Generation of Abstracts in Scientific Articles Based on Natural Language Processing for Early Education Professionals and Speech Therapists (Diego Quisi-Peralta, Vladimir Robles-Bykbaev, Jorge Galan-Mena, Roberto García-Vélez)....Pages 258-263
Automatic Generation of a Thesaurus for Language and Communication Disorders Based on Natural Language Processing and Ontologies (Diego Quisi-Peralta, Vladimir Robles-Bykbaev, Roberto García-Vélez, Luis Serpa-Andrade, Eduardo Pinos-Veles)....Pages 264-269
Automating the Generation of Study Teams Through Genetic Algorithms Based on Learning Styles in Higher Education (Roberto García-Vélez, Bryam Vega Moreno, Angel Ruiz-Ichazu, David Morales Rivera, Esteban Rosero-Perez)....Pages 270-277
Using Semantic Networks for Question Answering - Case of Low-Resource Languages Such as Swahili (Barack Wanjawa, Lawrence Muchemi)....Pages 278-285
Study on Design of Weaken Learning Costs for Business Intelligence Data Platforms (Yao Wang, Renjie Yang, Chen Mao, Jun Zhang)....Pages 286-291
AHP Applied to the Prioritization of Recreational Spaces in Green Areas. Case Study: Urban Area of the El Empalme Canton, Ecuador (Lizbeth Amelia Toapanta Orbea, Maikel Leyva Vazquez, Jesús Rafael Hechavarría Hernández)....Pages 292-297
Building Updated Research Agenda by Investigating Papers Indexed on Google Scholar: A Natural Language Processing Approach (Rita Yi Man Li)....Pages 298-305
Front Matter ....Pages 307-307
Microstrip Antennas Used for Communication in Military Systems and Civil Communications in the 5 GHz Band - Design and Simulation Results (Rafal Przesmycki, Marek Bugaj, Marian Wnuk)....Pages 309-316
Generation of Plausible Incident Stories by Using Recurrent Neural Networks (Toru Nakata)....Pages 317-324
Smart Home - Design and Simulation Results for an Intelligent Telecommunications Installation in a Single-Family Home (Rafal Przesmycki, Marek Bugaj, Marian Wnuk)....Pages 325-332
Construction and Application of Applicability Evaluation Model of Human Factors Methods for Complex Human-Machine System (Xueying Zhang, Beiyuan Guo, Yuan Liu, Tiancheng Huang)....Pages 333-340
Application of the Computer Vision System to the Measurement of the CIE L*a*b* Color Parameters of Fruits (Manuel Jesús Sánchez Chero, William Rolando Miranda Zamora, José Antonio Sánchez Chero, Susana Soledad Chinchay Villarreyes)....Pages 341-347
Analysis of the Gentrification Phenomenon Using GIS to Support Local Government Decision Making (Boris Orellana-Alvear, Tania Calle-Jimenez)....Pages 348-354
EMG Signal Interference Minimization Proposal Using a High Precision Multichannel Acquisition System and an Auto-Calibrated Adaptive Filtering Technique (Santiago Felipe Luna Romero, Luis Serpa-Andrade)....Pages 355-360
Decision Model for QoS in Networking Based on Hierarchical Aggregation of Information (Maikel Yelandi Leyva Vazquez, Miguel Angel Quiroz Martinez, Josseline Haylis Diaz Sanchez, Jorge Luis Aguilera Balseca)....Pages 361-368
Nondeterministic Finite Automata for Modeling an Ecuadorian Sign Language Interpreter (Jose Guerra, Diego Vallejo-Huanga, Nathaly Jaramillo, Richard Macas, Daniel Díaz)....Pages 369-376
Establishing and Verifying the Ergonomic Evaluation Metrics of Spacecraft Interactive Software Interface (Jianhua Sun, Yu Zhang, Ting Jiang, Chunlin Qian)....Pages 377-383
Systematic Mapping on Embedded Semantic Markup Validated with Data Mining Techniques (Rosa Navarrete, Carlos Montenegro, Lorena Recalde)....Pages 384-391
Physical Approach to Stress Analysis of Horizontal Axis Wind Turbine Blade Using Finite Element Analysis (Samson O. Ugwuanyi, Opeyeolu Timothy Laseinde, Lagouge Tartibu)....Pages 392-399
Application of Fuzzy Cognitive Maps in Critical Success Factors. Case Study: Resettlement of the Population of the Tres Cerritos Enclosure, Ecuador (Lileana Saavedra Robles, Maikel Leyva Vázquez, Jesús Rafael Hechavarría Hernández)....Pages 400-406
Design of the Future Workstation: Enhancing Health and Wellbeing on the Job (Dosun Shin, Matthew Buman, Pavan Turaga, Assegid Kidane, Todd Ingalls)....Pages 407-413
Efficient FPGA Implementation of Direct Digital Synthesizer and Digital Up-Converter for Broadband Multicarrier Transmitter (Cristhian Castro, Mireya Zapata)....Pages 414-421
Low-Cost Embedded System Proposal for EMG Signals Recognition and Classification Using ARM Microcontroller and a High-Accuracy EMG Acquisition System (Santiago Felipe Luna Romero, Luis Serpa-Andrade)....Pages 422-428
Rural Public Space Evolution Research - Based on the Perspective of Social Capital (Zhang Hua, Wuzhong Zhou)....Pages 429-436
Road-Condition Monitoring and Classification for Smart Cities (Diana Kassem, Carlos Arce-Lopera)....Pages 437-441
Discussion Features of Public Participation in Space Governance in Network Media – Taking Yangzhou Wetland Park as an Example (Zhang Hua)....Pages 442-448
Front Matter ....Pages 449-449
A Modeling the Supplier Relationship Management in Agribusiness Supply Chain (Rajiv Sánchez, Bryan Reyes, Edgar Ramos, Steven Dien)....Pages 451-458
Consumer Perception Applied to Remanufactured Products in a Product-Service System Model (Alejandro Jiménez-Zaragoza, Karina Cecilia Arredondo-Soto, Marco Augusto Miranda-Ackerman, Guillermo Cortés-Robles)....Pages 459-464
Blockchain in Agribusiness Supply Chain Management: A Traceability Perspective (Luis Flores, Yoseline Sanchez, Edgar Ramos, Fernando Sotelo, Nabeel Hamoud)....Pages 465-472
Cold Supply Chain Logistics Model Applied in Raspberry: An Investigation in Perú (Mijail Tardillo, Jorge Torres, Edgar Ramos, Fernando Sotelo, Steven Dien)....Pages 473-480
Front Matter ....Pages 481-481
Introducing Intelligent Interior Design Framework (IIDF) and the Overlap with Human Building Interaction (HBI) (Holly Sowles, Laura Huisinga)....Pages 483-489
IoT and AI in Precision Agriculture: Designing Smart System to Support Illiterate Farmers (Javed Anjum Sheikh, Sehrish Munawar Cheema, Muhammad Ali, Zohaib Amjad, Jahan Zaib Tariq, Ammerha Naz)....Pages 490-496
Selection of LPWAN Technology for the Adoption and Efficient Use of the IoT in the Rural Areas of the Province of Guayas Using AHP Method (Miguel Angel Quiroz Martinez, Gonzalo Antonio Loza González, Monica Daniela Gomez Rios, Maikel Yelandi Leyva Vazquez)....Pages 497-503
Cryptocurrencies: A Futuristic Perspective or a Technological Strategy (Carolina Del-Valle-Soto, Alberto Rossa-Sierra)....Pages 504-509
Acceleration of Evolutionary Grammar Using an MISD Architecture Based on FPGA and Petalinuxs (Bernardo Vallejo-Mancero, Mireya Zapata)....Pages 510-517
Front Matter ....Pages 519-519
Applying Deep Learning to Solve Alarm Flooding in Digital Nuclear Power Plant Control Rooms (Jens-Patrick Langstrand, Hoa Thi Nguyen, Robert McDonald)....Pages 521-527
The First Decade of the Human Systems Simulation Laboratory: A Brief History of Human Factors Research in Support of Nuclear Power Plants (Ronald Laurids Boring)....Pages 528-535
Human Factors Challenges in Developing Cyber-Informed Risk Assessment for Critical Infrastructure (Katya Le Blanc)....Pages 536-541
Dynamic Instructions for Lock-Out Tag-Out (Jeremy Mohon)....Pages 542-549
Design Principles for a New Generation of Larger Operator Workstation Displays (Alf Ove Braseth, Robert McDonald)....Pages 550-557
Tablet-Based Functionalities to Support Control Room Operators When Process Information Becomes Unreliable or After Control Room Abandonment (Espen Nystad, Magnhild Kaarstad, Christer Nihlwing, Robert McDonald)....Pages 558-565
Simulation Technologies for Integrated Energy Systems Engineering and Operations (Roger Lew, Thomas Ulrich, Ronald Boring)....Pages 566-572
Promoting Operational Readiness of Control Room Crews Through Biosignal Measurements (Satu Pakarinen, Jari Laarni, Kristian Lukander, Ville-Pekka Inkilä, Tomi Passi, Marja Liinasuo et al.)....Pages 573-580
Adopting the AcciMap Methodology to Investigate a Major Power Blackout in the United States: Enhancing Electric Power Operations Safety (Maryam Tabibzadeh, Shashank Lahiry)....Pages 581-588
Systematic Investigation of Pipeline Accidents Using the AcciMap Methodology: The Case Study of the San Bruno Gas Explosion (Maryam Tabibzadeh, Viak R. Challa)....Pages 589-596
A Tool for Performing Link Analysis, Operational Sequence Analysis, and Workload Analysis to Support Nuclear Power Plant Control Room Modernization (Casey Kovesdi, Katya Le Blanc)....Pages 597-604
Renewable Energy System Design for Electric Power Generation on Urban Historical Heritage Places in Ecuador (Blanca Topon-Visarrea, Mireya Zapata, Rayd Macias)....Pages 605-612
Back Matter ....Pages 613-615

Advances in Intelligent Systems and Computing 1213

Tareq Ahram   Editor

Advances in Artificial Intelligence, Software and Systems Engineering Proceedings of the AHFE 2020 Virtual Conferences on Software and Systems Engineering, and Artificial Intelligence and Social Computing, July 16–20, 2020, USA

Advances in Intelligent Systems and Computing Volume 1213

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156

Tareq Ahram Editor

Advances in Artificial Intelligence, Software and Systems Engineering Proceedings of the AHFE 2020 Virtual Conferences on Software and Systems Engineering, and Artificial Intelligence and Social Computing, July 16–20, 2020, USA


Editor
Tareq Ahram
Institute for Advanced Systems Engineering, University of Central Florida, Orlando, FL, USA

ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-51327-6  ISBN 978-3-030-51328-3 (eBook)
https://doi.org/10.1007/978-3-030-51328-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Advances in Human Factors and Ergonomics 2020

AHFE 2020 Series Editors
Tareq Z. Ahram, Florida, USA
Waldemar Karwowski, Florida, USA

11th International Conference on Applied Human Factors and Ergonomics and the Affiliated Conferences Proceedings of the AHFE 2020 Virtual Conferences on Software and Systems Engineering, and Artificial Intelligence and Social Computing, July 16–20, 2020, USA

Advances in Neuroergonomics and Cognitive Engineering (Hasan Ayaz and Umer Asgher)
Advances in Industrial Design (Giuseppe Di Bucchianico, Cliff Sungsoo Shin, Scott Shim, Shuichi Fukuda, Gianni Montagna and Cristina Carvalho)
Advances in Ergonomics in Design (Francisco Rebelo and Marcelo Soares)
Advances in Safety Management and Human Performance (Pedro M. Arezes and Ronald L. Boring)
Advances in Human Factors and Ergonomics in Healthcare and Medical Devices (Jay Kalra and Nancy J. Lightner)
Advances in Simulation and Digital Human Modeling (Daniel N Cassenti, Sofia Scataglini, Sudhakar L. Rajulu and Julia L. Wright)
Advances in Human Factors and Systems Interaction (Isabel L. Nunes)
Advances in the Human Side of Service Engineering (Jim Spohrer and Christine Leitner)
Advances in Human Factors, Business Management and Leadership (Jussi Ilari Kantola, Salman Nazir and Vesa Salminen)
Advances in Human Factors in Robots, Drones and Unmanned Systems (Matteo Zallio)
Advances in Human Factors in Cybersecurity (Isabella Corradini, Enrico Nardelli and Tareq Ahram)
Advances in Human Factors in Training, Education, and Learning Sciences (Salman Nazir, Tareq Ahram and Waldemar Karwowski)
Advances in Human Aspects of Transportation (Neville Stanton)
Advances in Artificial Intelligence, Software and Systems Engineering (Tareq Ahram)
Advances in Human Factors in Architecture, Sustainable Urban Planning and Infrastructure (Jerzy Charytonowicz)
Advances in Physical, Social & Occupational Ergonomics (Waldemar Karwowski, Ravindra S. Goonetilleke, Shuping Xiong, Richard H.M. Goossens and Atsuo Murata)
Advances in Manufacturing, Production Management and Process Control (Beata Mrugalska, Stefan Trzcielinski, Waldemar Karwowski, Massimo Di Nicolantonio and Emilio Rossi)
Advances in Usability, User Experience, Wearable and Assistive Technology (Tareq Ahram and Christianne Falcão)
Advances in Creativity, Innovation, Entrepreneurship and Communication of Design (Evangelos Markopoulos, Ravindra S. Goonetilleke, Amic G. Ho and Yan Luximon)

Preface

Researchers and business leaders are called to address important challenges caused by the increasing presence of artificial intelligence and social computing in the workplace environment and daily lives. Roles that have traditionally required a high level of cognitive abilities, decision making and training (human intelligence) are now being automated. The AHFE International Conference on Human Factors in Artificial Intelligence and Social Computing promotes the exchange of ideas and technology enabling humans to communicate and interact with machines in almost every area and for different purposes. The recent increase in machine and systems intelligence has led to a shift from the classical human–computer interaction to a much more complex, cooperative human-system work environment requiring a multidisciplinary approach. The first part of this book deals with those new challenges and presents contributions on different aspects of artificial intelligence, social computing and social network modeling, taking into account those modern, multifaceted challenges.

The AHFE International Conference on Human Factors, Software, and Systems Engineering provides a platform for addressing challenges in human factors, software and systems engineering, pushing the boundaries of current research. In the second part of the book, researchers, professional software and systems engineers, human factors and human systems integration experts from around the world discuss next-generation systems to address societal challenges. The book covers cutting-edge software and systems engineering applications, systems and service design, and user-centered design. Topics span from analysis of evolutionary and complex systems, to issues in human systems integration, and applications in smart grid, infrastructure, training, education, defense and aerospace.

The last part of the book reports on the AHFE International Conference on Human Factors in Energy, addressing the oil, gas, nuclear and electric power industries. It covers human factors and systems engineering research for process control and discusses new energy business models.

In keeping with a system that is vast in its scope and reach, the chapters in this book cover a wide range of topics. The chapters are organized into eight sections:


Human Factors in Artificial Intelligence and Social Computing

Section 1  Artificial Intelligence and Machine Learning
Section 2  Artificial Intelligence and Social Computing
Section 3  Artificial Intelligence User Research
Section 4  Artificial Intelligence Applications

Human Factors in Software and Systems Engineering

Section 5  Applications in Software and Systems Engineering
Section 6  Applications in Supply Chain Management and Business

Cognitive Computing and Internet of Things

Section 7  Cognitive Computing and Internet of Things

Human Factors in Energy

Section 8  Human Factors in Energy: Oil, Gas, Nuclear and Electric Power Industries

The research papers included here have been reviewed by members of the International Editorial Board, to whom our sincere thanks and appreciation go. They are listed below:

Software and Systems Engineering
A. Al-Rawas, Oman
T. Alexander, Germany
S. Belov, Russia
O. Bouhali, Qatar
H. Broodney, Israel
A. Cauvin, France
F. Fischer, Brazil
S. Fukuzumi, Japan
C. Grecco, Brazil
N. Jochems, Germany
G. Lim, USA
D. Long, USA
M. Mochimaru, Japan
C. O’Connor, USA
C. Orłowski, Poland
H. Parsaei, Qatar
S. Ramakrishnan, USA
J. San Martin Lopez, Spain
K. Santarek, Poland
M. Shahir Liew, Malaysia
D. Speight, UK


M. Stenkilde, Sweden
T. Winkler, Poland
H. Woodcock, UK

Artificial Intelligence and Social Computing
S. Pickl, Germany
S. Ramakrishnan, USA
J. San Martin Lopez, Spain
K. Santarek, Poland
M. Shahir Liew, Malaysia
J. Sheikh, Pakistan
D. Speight, UK
M. Stenkilde, Sweden
T. Winkler, Poland
H. Woodcock, UK
B. Xue, China

Human Factors in Energy: Oil, Gas, Nuclear and Electric Power Industries
S. Al Rawahi, Oman
R. Boring, USA
P. Carvalho, Brazil
S. Cetiner, USA
D. Desaulniers, USA
G. Lim, USA
P. Liu, China
E. Perez, USA
L. Reinerman-Jones, USA
K. Söderholm, Finland

Cognitive Computing and Internet of Things
H. Alnizami, USA
T. Alexander, Germany
C. Baldwin, USA
O. Bouhali, Qatar
H. Broodney, Israel
F. Dehais, France
K. Gramann, Germany
R. McKendrick, USA
S. Perrey, France
S. Pickl, Germany
S. Ramakrishnan, USA
D. Speight, UK
M. Stenkilde, Sweden


A. Visa, Finland
T. Ward, Ireland
M. Ziegler, USA

We hope that this book, which reports on the international state of the art in human factors research and applications in artificial intelligence and systems engineering, will be a valuable source of knowledge enabling human-centered design of a variety of products, services and systems for global markets.

July 2020

Tareq Ahram

Contents

Artificial Intelligence and Machine Learning

Development of a Holistic Method to Implement Artificial Intelligence in Manufacturing Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Bastian Pokorni, Fabian Volz, Jan Zwerina, and Moritz Hämmerle

Fighting Cyberbullying: An Analysis of Algorithms Used to Detect Harassing Text Found on YouTube . . . . . . . . . . . . . . . . . . . . 9
Rachel E. Trana, Christopher E. Gomez, and Rachel F. Adler

For Our Complex Future, Don’t Give Us AI, Give Us Intelligent Assistance (IA): The Case for Avionics . . . . . . . . . . . . . . . . . . . . . . . . . 16
Sylvain Hourlier

AI-Driven Worker Assistance System for Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Benjamin Röhm, Leo Gögelein, Stefan Kugler, and Reiner Anderl

Rating Prediction Used AI Big Data: Empathy Word in Network Analysis Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Sang Hee Kweon, Hyeon-Ju Cha, and Yook Jung-Jung

Sound Identification System from Auditory Cortex by Using fMRI and Deep Learning: Study on Experimental Design for Capturing Brain Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Jun Shinke, Kyoko Shibata, and Hironobu Satoh

Tracking People Using Ankle-Level 2D LiDAR for Gait Analysis . . . . . 40
Mahmudul Hasan, Junichi Hanawa, Riku Goto, Hisato Fukuda, Yoshinori Kuno, and Yoshinori Kobayashi

New Trend of Standard: Machine Executable Standard . . . . . . . . . . . . 47
Haitao Wang, Gang Wu, Chao Zhao, Fan Zhang, Jing Zhao, Changqing Zhou, Wenxing Ding, and Xinyu Cao

Machine Learning Analysis of EEG Measurements of Stock Trading Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Edgar P. Torres, Edgar A. Torres, Myriam Hernández-Álvarez, and Sang Guun Yoo

53

Evaluation System of GIS Partial Discharge Based on Convolutional Neutral Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liuhuo Wang, Lingqi Tan, and Zengbin Wang

61

Top-Level Design of Intelligent Video Surveillance System for Abandoned Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chengwei Zhang and Wenhan Dai

69

Detection of Human Trafficking Ads in Twitter Using Natural Language Processing and Image Processing . . . . . . . . . . . . . . . . . . . . . . Myriam Hernández-Álvarez and Sergio L. Granizo

77

Text Mining in Smart Cities to Identify Urban Events and Public Service Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mario Gonzalez, Juan Viana-Barrero, and Patricia Acosta-Vargas

84

Deep Learning-Based Creative Intention Understanding and Color Suggestions for Illustration . . . . . . . . . . . . . . . . . . . . . . . . . . Xiaohua Sun and Juexiao Qin

90

Comparison of Probability Distributions for Evolving Artificial Neural Networks Using Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Adeel Shahzad, Hafiz Tayyab Rauf, Tayyaba Asghar, and Umar Hayat

97

Deep Neural Networks for Grid-Based Elusive Crime Prediction Using a Private Dataset Obtained from Japanese Municipalities . . . . . . . . . . . 105 Suguru Kanoga, Naruki Kawai, and Kota Takaoka Remote Sensing, Heat Island Effect and Housing Price Prediction via AutoML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 Rita Yi Man Li, Kwong Wing Chau, Herru Ching Yu Li, Fanjie Zeng, Beiqi Tang, and Meilin Ding A Framework for Selecting Machine Learning Models Using TOPSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Maikel Yelandi Leyva Vazquezl, Luis Andy Briones Peñafiel, Steven Xavier Sanchez Muñoz, and Miguel Angel Quiroz Martinez Artificial Intelligence and Social Computing Application-Oriented Approach for Detecting Cyberaggression in Social Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Kurt Englmeier and Josiane Mothe


A Study of Social Media Behaviors and Mental Health Wellbeing from a Privacy Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Tian Wang and Masooda Bashir Risk Analysis for Ethical, Legal and Social Implications of Information and Communication Technologies in the Forestry Sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Oliver Brunner, Katharina Schäfer, Alexander Mertens, Verena Nitsch, and Christopher Brandl Traffic Scene Detection Based on YOLOv3 . . . . . . . . . . . . . . . . . . . . . . 152 Qian Yin, Ruyi Yang, and Xin Zheng Social Networks’ Factors Driving Consumer Restaurant Choice: An Exploratory Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 Karen Ramos, Onesimo Cuamea, Jorge Morgan, and Ario Estrada Artificial Intelligence User Research Conversational Advisors – Are These Really What Users Prefer? User Preferences, Lessons Learned and Design Recommended Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Jason Telner, Jon Temple, Dabby Phipps, Joseph Riina, Rajani Choudhary, and Umesh Mann Investigating Users’ Perceived Credibility of Real and Fake News Posts in Facebook’s News Feed: UK Case Study . . . . . . . . . . . . . . . . . . 174 Neil Bates and Sonia C. Sousa Future Trends in Voice User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . 181 Jason Telner Artificial Intelligence Enabled User Experience Research . . . . . . . . . . . 187 Sarra Zaghdoudi and Leonhard Glomann Product Sampling Based on Remarks of Customs in Online Shopping Websites for Quality Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194 Haitao Wang, Jing Zhao, Gang Wu, Fan Zhang, Chao Zhao, and Xinyu Cao Affective Computing Based on Remarks of Customs in Online Shopping Websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 Haitao Wang, Fan Zhang, Gang Wu, Jing Zhao, Chao Zhao, and Xinyu Cao A Review of Sign Language Hand Gesture Recognition Algorithms . . . 207 Casam Nyaga and Ruth Wario


Recognition Method of Feature Word Terms in Chinese Named Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Xinyu Cao, Jing Zhao, Fan Zhang, Gang Wu, Chao Zhao, and Haitao Wang Research on the Sampling Procedures for Inspection by Variables Based on the Rate of Nonconforming Products . . . . . . . . . . . . . . . . . . . 222 Jing Zhao, Jingjing Wang, Gang Wu, Chao Zhao, Fan Zhang, and Haitao Wang Artificial Intelligence Applications Deepfakes for the Good: A Beneficial Application of Contentious Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 Nicholas Caporusso Research on Somatotype Recognition Method Based on Euclidean Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Tong Yao, Li Pan, Jun Wang, and Chong Yao Robust and Fast Heart Rate Monitoring Based on Video Analysis and Its Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 Kouyou Otsu, Qiang Zhong, Das Keya, Hisato Fukuda, Antony Lam, Yoshinori Kobayashi, and Yoshinori Kuno Automatic Generation of Abstracts in Scientific Articles Based on Natural Language Processing for Early Education Professionals and Speech Therapists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 Diego Quisi-Peralta, Vladimir Robles-Bykbaev, Jorge Galan-Mena, and Roberto García-Vélez Automatic Generation of a Thesaurus for Language and Communication Disorders Based on Natural Language Processing and Ontologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 Diego Quisi-Peralta, Vladimir Robles-Bykbaev, Roberto García-Vélez, Luis Serpa-Andrade, and Eduardo Pinos-Veles Automating the Generation of Study Teams Through Genetic Algorithms Based on Learning Styles in Higher Education . . . . . . . . . . 270 Roberto García-Vélez, Bryam Vega Moreno, Angel Ruiz-Ichazu, David Morales Rivera, and Esteban Rosero-Perez Using Semantic Networks for Question Answering - Case of Low-Resource Languages Such as Swahili . . . . . . . . . . . . . . . . . . . . . 278 Barack Wanjawa and Lawrence Muchemi


Study on Design of Weaken Learning Costs for Business Intelligence Data Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 Yao Wang, Renjie Yang, Chen Mao, and Jun Zhang AHP Applied to the Prioritization of Recreational Spaces in Green Areas. Case Study: Urban Area of the El Empalme Canton, Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292 Lizbeth Amelia Toapanta Orbea, Maikel Leyva Vazquez, and Jesús Rafael Hechavarría Hernández Building Updated Research Agenda by Investigating Papers Indexed on Google Scholar: A Natural Language Processing Approach . . . . . . . 298 Rita Yi Man Li Applications in Software and Systems Engineering Microstrip Antennas Used for Communication in Military Systems and Civil Communications in the 5 GHz Band - Design and Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309 Rafal Przesmycki, Marek Bugaj, and Marian Wnuk Generation of Plausible Incident Stories by Using Recurrent Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 Toru Nakata Smart Home - Design and Simulation Results for an Intelligent Telecommunications Installation in a Single-Family Home . . . . . . . . . . . 325 Rafal Przesmycki, Marek Bugaj, and Marian Wnuk Construction and Application of Applicability Evaluation Model of Human Factors Methods for Complex Human-Machine System . . . . 333 Xueying Zhang, Beiyuan Guo, Yuan Liu, and Tiancheng Huang Application of the Computer Vision System to the Measurement of the CIE L*a*b* Color Parameters of Fruits . . . . . . . . . . . . . . . . . . . 341 Manuel Jesús Sánchez Chero, William Rolando Miranda Zamora, José Antonio Sánchez Chero, and Susana Soledad Chinchay Villarreyes Analysis of the Gentrification Phenomenon Using GIS to Support Local Government Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 Boris Orellana-Alvear and Tania Calle-Jimenez EMG Signal Interference Minimization Proposal Using a High Precision Multichannel Acquisition System and an Auto-Calibrated Adaptive Filtering Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Santiago Felipe Luna Romero and Luis Serpa-Andrade


Decision Model for QoS in Networking Based on Hierarchical Aggregation of Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361 Maikel Yelandi Leyva Vazquez, Miguel Angel Quiroz Martinez, Josseline Haylis Diaz Sanchez, and Jorge Luis Aguilera Balseca Nondeterministic Finite Automata for Modeling an Ecuadorian Sign Language Interpreter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 Jose Guerra, Diego Vallejo-Huanga, Nathaly Jaramillo, Richard Macas, and Daniel Díaz Establishing and Verifying the Ergonomic Evaluation Metrics of Spacecraft Interactive Software Interface . . . . . . . . . . . . . . . . . . . . . . 377 Jianhua Sun, Yu Zhang, Ting Jiang, and Chunlin Qian Systematic Mapping on Embedded Semantic Markup Validated with Data Mining Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 Rosa Navarrete, Carlos Montenegro, and Lorena Recalde Physical Approach to Stress Analysis of Horizontal Axis Wind Turbine Blade Using Finite Element Analysis . . . . . . . . . . . . . . . . . . . . 392 Samson O. Ugwuanyi, Opeyeolu Timothy Laseinde, and Lagouge Tartibu Application of Fuzzy Cognitive Maps in Critical Success Factors. Case Study: Resettlement of the Population of the Tres Cerritos Enclosure, Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400 Lileana Saavedra Robles, Maikel Leyva Vazquez, and Jesús Rafael Hechavarría Hernández Design of the Future Workstation: Enhancing Health and Wellbeing on the Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407 Dosun Shin, Matthew Buman, Pavan Turaga, Assegid Kidane, and Todd Ingalls Efficient FPGA Implementation of Direct Digital Synthesizer and Digital Up-Converter for Broadband Multicarrier Transmitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414 Cristhian Castro and Mireya Zapata Low-Cost Embedded System Proposal for EMG Signals Recognition and Classification Using ARM Microcontroller and a High-Accuracy EMG Acquisition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422 Santiago Felipe Luna Romero and Luis Serpa-Andrade Rural Public Space Evolution Research - Based on the Perspective of Social Capital . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429 Zhang Hua and Wuzhong Zhou Road-Condition Monitoring and Classification for Smart Cities . . . . . . 437 Diana Kassem and Carlos Arce-Lopera


Discussion Features of Public Participation in Space Governance in Network Media – Taking Yangzhou Wetland Park as an Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 Zhang Hua Applications in Supply Chain Management and Business A Modeling the Supplier Relationship Management in Agribusiness Supply Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451 Rajiv Sánchez, Bryan Reyes, Edgar Ramos, and Steven Dien Consumer Perception Applied to Remanufactured Products in a Product-Service System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 Alejandro Jiménez-Zaragoza, Karina Cecilia Arredondo-Soto, Marco Augusto Miranda-Ackerman, and Guillermo Cortés-Robles Blockchain in Agribusiness Supply Chain Management: A Traceability Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465 Luis Flores, Yoseline Sanchez, Edgar Ramos, Fernando Sotelo, and Nabeel Hamoud Cold Supply Chain Logistics Model Applied in Raspberry: An Investigation in Perú . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473 Mijail Tardillo, Jorge Torres, Edgar Ramos, Fernando Sotelo, and Steven Dien Cognitive Computing and Internet of Things Introducing Intelligent Interior Design Framework (IIDF) and the Overlap with Human Building Interaction (HBI) . . . . . . . . . . . 483 Holly Sowles and Laura Huisinga IoT and AI in Precision Agriculture: Designing Smart System to Support Illiterate Farmers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490 Javed Anjum Sheikh, Sehrish Munawar Cheema, Muhammad Ali, Zohaib Amjad, Jahan Zaib Tariq, and Ammerha Naz Selection of LPWAN Technology for the Adoption and Efficient Use of the IoT in the Rural Areas of the Province of Guayas Using AHP Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Miguel Angel Quiroz Martinez, Gonzalo Antonio Loza González, Monica Daniela Gomez Rios, and Maikel Yelandi Leyva Vazquez Cryptocurrencies: A Futuristic Perspective or a Technological Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504 Carolina Del-Valle-Soto and Alberto Rossa-Sierra


Acceleration of Evolutionary Grammar Using an MISD Architecture Based on FPGA and Petalinuxs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510 Bernardo Vallejo-Mancero and Mireya Zapata Human Factors in Energy: Oil, Gas, Nuclear and Electric Power Industries Applying Deep Learning to Solve Alarm Flooding in Digital Nuclear Power Plant Control Rooms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521 Jens-Patrick Langstrand, Hoa Thi Nguyen, and Robert McDonald The First Decade of the Human Systems Simulation Laboratory: A Brief History of Human Factors Research in Support of Nuclear Power Plants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528 Ronald Laurids Boring Human Factors Challenges in Developing Cyber-Informed Risk Assessment for Critical Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . 536 Katya Le Blanc Dynamic Instructions for Lock-Out Tag-Out . . . . . . . . . . . . . . . . . . . . . 542 Jeremy Mohon Design Principles for a New Generation of Larger Operator Workstation Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 Alf Ove Braseth and Robert McDonald Tablet-Based Functionalities to Support Control Room Operators When Process Information Becomes Unreliable or After Control Room Abandonment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558 Espen Nystad, Magnhild Kaarstad, Christer Nihlwing, and Robert McDonald Simulation Technologies for Integrated Energy Systems Engineering and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566 Roger Lew, Thomas Ulrich, and Ronald Boring Promoting Operational Readiness of Control Room Crews Through Biosignal Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573 Satu Pakarinen, Jari Laarni, Kristian Lukander, Ville-Pekka Inkilä, Tomi Passi, Marja Liinasuo, and Tuisku-Tuuli Salonen Adopting the AcciMap Methodology to Investigate a Major Power Blackout in the United States: Enhancing Electric Power Operations Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581 Maryam Tabibzadeh and Shashank Lahiry


Systematic Investigation of Pipeline Accidents Using the AcciMap Methodology: The Case Study of the San Bruno Gas Explosion . . . . . . 589 Maryam Tabibzadeh and Viak R. Challa A Tool for Performing Link Analysis, Operational Sequence Analysis, and Workload Analysis to Support Nuclear Power Plant Control Room Modernization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597 Casey Kovesdi and Katya Le Blanc Renewable Energy System Design for Electric Power Generation on Urban Historical Heritage Places in Ecuador . . . . . . . . . . . . . . . . . . 605 Blanca Topon-Visarrea, Mireya Zapata, and Rayd Macias Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613

Artificial Intelligence and Machine Learning

Development of a Holistic Method to Implement Artificial Intelligence in Manufacturing Areas

Bastian Pokorni, Fabian Volz, Jan Zwerina, and Moritz Hämmerle

Fraunhofer IAO, Fraunhofer Institute for Industrial Engineering IAO, Nobelstraße 12, 70569 Stuttgart, Germany
{Bastian.Pokorni,Fabian.Volz,Jan.Zwerina, Moritz.Haemmerle}@Iao.Fraunhofer.De

Abstract. Cognitive systems are finding their way into factories. Production plants are increasingly decentralized and optimize themselves independently within the Smart Factory. The overall conditions for using AI have improved dramatically in recent years: high computing power, storage capacity available at low cost, and new methods of machine learning. More and more companies want to introduce AI in their production. At present, due to a lack of experience, systematic procedures for AI implementations in manufacturing are missing. The possibilities of AI in manufacturing are extensive, but the wealth of experience is limited. This paper presents a method for the implementation of AI in companies, preferably in manufacturing. The method will support companies in the introduction of AI in manufacturing. The aim is to identify individual potentials for AI applications, generate a suitable use case for the company, and implement it according to the specific company environment.

Keywords: Artificial intelligence · Implementation process · Manufacturing

1 Introduction

Artificial intelligence (AI) is an important key technology that is indispensable for maintaining Germany’s economic performance. AI also has a high potential for value creation in manufacturing and service industries in the environment of Industry 4.0 processes. In Industry 4.0, today’s strongly product-centric manufacturing will be replaced by solution- and customer-centric concepts. Rigidly defined manufacturing and value-added chains will transform into flexible and highly dynamic manufacturing and service ecosystems. These will enable completely individualized manufacturing based on customer orders. Based on customer-specific requirements, manufacturing systems organize themselves autonomously, and manufacturing and logistics strategies are optimized independently via AI [1]. But companies need a systematic approach to adopt AI in manufacturing in order to succeed.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Ahram (Ed.): AHFE 2020, AISC 1213, pp. 3–8, 2021. https://doi.org/10.1007/978-3-030-51328-3_1


2 Basic Work and Terminology

2.1 Artificial Intelligence Basic Abilities and Technologies

AI is about giving machines the ability to perform human behaviour and tasks [2, 3] and aims to imitate complex cognitive abilities based on human intelligence [4]. Basic AI functions are:

• Recognition of ambiguous and contradictory information
• Identification of similarities between situations, tasks and solutions despite supposedly big differences
• Making decisions flexibly and depending on the situation, based on the relative importance of factors in that situation
• Learning from experience [5, 6]

For the realization of the mentioned abilities, there are further subordinate procedures within the field of AI, such as machine learning and deep learning. Machine learning describes methods for a continuous learning process that identifies regularities in data sets [7, 8]. In contrast to static program code, machine learning is based on feedback of knowledge and the resulting adaptation of decision rules [9, 10]. This enables algorithms to establish relationships between data and results and to predict events [8].
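To make the contrast with static program code concrete, the following minimal sketch shows how decision rules are learned from historical data and then used to predict events. It is an illustration only, not code from the paper; the file name, feature columns and event label are hypothetical placeholders, and scikit-learn is simply one common library for this kind of supervised learning.

```python
# Minimal sketch (illustration only): learning regularities from historical
# machine data and predicting an event, instead of hand-coding static rules.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical machine data: sensor readings plus a label marking past failures.
data = pd.read_csv("machine_sensor_log.csv")              # assumed file
X = data[["temperature", "vibration", "spindle_load"]]    # assumed feature columns
y = data["failure_within_24h"]                            # assumed event label

# Learn decision rules from historical examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Predict the event on unseen data and check how well the learned rules generalize.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```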

2.2 Artificial Intelligence in Manufacturing

AI technologies are methods and processes that enable technical systems to perceive their environment, process what is perceived, solve problems independently, find new solutions, make decisions, in particular learn from experience, and thus become better at tasks and act accordingly [1, 11]. The use of AI technologies intends to increase the efficiency and effectiveness of industrial processes. Factors such as cost, speed, precision or problem solving beyond human capabilities are relevant. In order to achieve higher degrees of autonomy in industrial processes, cognitive performance is required, which AI can provide. Depending on the performance and intensity of AI, it reduces the need for human intervention in the processes [12, 13]. Today there are process standards for data mining, such as CRISP-DM, which provides a standardized approach for data mining applications with a focus on business aspects [14, 15], and SEMMA, which also provides a data mining implementation approach [15, 16].
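As a rough illustration of what such a process standard prescribes, the sketch below lists the six CRISP-DM phases as an ordered structure together with typical feedback loops. The representation as Python code is our own simplification, not part of CRISP-DM or of this paper.

```python
# Illustration only: the six standard CRISP-DM phases as an ordered structure,
# with simplified feedback loops that make the process iterative rather than linear.
from enum import Enum

class CrispDmPhase(Enum):
    BUSINESS_UNDERSTANDING = 1
    DATA_UNDERSTANDING = 2
    DATA_PREPARATION = 3
    MODELING = 4
    EVALUATION = 5
    DEPLOYMENT = 6

# Typical backward transitions (simplified): modeling may require more data
# preparation; evaluation results can send the project back to the start.
FEEDBACK_LOOPS = {
    CrispDmPhase.MODELING: CrispDmPhase.DATA_PREPARATION,
    CrispDmPhase.EVALUATION: CrispDmPhase.BUSINESS_UNDERSTANDING,
}

# Example: after an unsatisfactory evaluation, loop back to business understanding.
print(FEEDBACK_LOOPS[CrispDmPhase.EVALUATION].name)
```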

2.3 Conclusion

The possibilities of AI in manufacturing are extensive, but the wealth of experience is limited. A clearly structured and generally valid method to support companies in the adoption of AI in manufacturing does not exist. The aim of this work is to provide companies with an instrument that supports them on their way towards AI: a structured method that covers the central contents and minimizes risks and uncertainties for an effective implementation of AI in manufacturing. Knowledge of existing approaches like CRISP-DM and SEMMA should be considered and integrated.

3 Approach

The method supports companies in the adoption of AI in manufacturing based on the individual company requirements, so that they can benefit from the opportunities offered by AI. The systematic approach supports the concrete definition of use cases by taking into account important critical success factors. The method is intended as a guideline and process model to ensure a goal-oriented and efficient development of AI use cases for manufacturing. This includes all steps, from data collection and provision, data pre-processing and analysis to algorithm and model development. Finally, the method should ensure the targeted integration of the developed intelligence into the manufacturing operation. To make the method suitable for different industries, it is necessary to define it as generally as possible. In the corporate context, the model can thus be extended and applied step by step to different plants, technologies, locations and corporate divisions, taking into account the applicable standards.

4 AI Implementation Method

4.1 Overall Process Model

The method is based on an iterative approach and consists of three process phases: “Activation”, “Innovation” and “Integration”. Within these phases, the method includes eight process steps. It defines the path from the strategy-related definition of the AI application’s objectives to its application-related implementation in manufacturing. The eight successive process steps are processed iteratively in the given order, which means that feedback loops are possible at any time in order to maximize the success of the AI application to be developed (see Fig. 1).
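A compact sketch of this structure is given below: the three phases with their eight steps as an ordered sequence in which a feedback loop can jump back to any earlier step. This is our reading of the process model for illustration; the paper defines the steps only in prose and in Fig. 1.

```python
from typing import Optional

# Illustrative sketch of the process model: three phases, eight steps,
# processed in the given order but with feedback loops back to earlier steps.
PHASES = {
    "Activation":  ["Mobilise", "Check"],
    "Innovation":  ["Understand", "Collect", "Analyse", "Develop"],
    "Integration": ["Deploy", "Operation"],
}
STEPS = [step for steps in PHASES.values() for step in steps]

def next_step(current: str, feedback_to: Optional[str] = None) -> str:
    """Advance to the following step, or jump back when a feedback loop is triggered."""
    if feedback_to is not None:
        return feedback_to                        # iteration back to an earlier step
    index = STEPS.index(current)
    return STEPS[min(index + 1, len(STEPS) - 1)]  # otherwise proceed in the given order

# Example: a failed evaluation during "Deploy" sends the team back to "Analyse".
print(next_step("Deploy", feedback_to="Analyse"))
```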

4.2 Activation Phase

The first phase defines basic goals and requirements for a general understanding of the project. In the second step, the AI readiness and business attractiveness are examined as a prerequisite for the further procedure.

Mobilise. In this process step, the AI team creates a general understanding of the intention behind the AI project and defines a clear use case and user story. Besides this, a general understanding of the business process related to the AI project and of the available AI technologies has to be developed.

Check. The second process step is a decision step to check the AI readiness within the dimensions “company & use case”, “data”, “infrastructure”, “analytics” and “organization & people”. This ensures that the company is AI-ready and that an AI technology is capable of implementing the planned use case. By the end of this phase, the use case can be assessed at proof-of-concept status. The general feasibility is checked in a first step.
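The sketch below illustrates how such a readiness check could be operationalized as a simple gate over the five dimensions; the 0–5 scale, the threshold and the example scores are assumptions made for the illustration, not values from the paper.

```python
# Illustrative sketch only: a simple scoring gate over the five AI-readiness
# dimensions named in the "Check" step. Scale and threshold are assumptions.
READINESS_DIMENSIONS = [
    "company & use case", "data", "infrastructure", "analytics", "organization & people",
]

def check_ai_readiness(scores: dict, threshold: float = 3.0) -> bool:
    """Return True if every dimension reaches the (assumed) minimum maturity score."""
    missing = [d for d in READINESS_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    return all(scores[d] >= threshold for d in READINESS_DIMENSIONS)

# Example assessment on an assumed 0-5 scale (values are made up).
print(check_ai_readiness({
    "company & use case": 4, "data": 3, "infrastructure": 2,
    "analytics": 4, "organization & people": 3,
}))  # -> False: infrastructure is below the threshold
```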

4.3 Innovation Phase

The innovation phase consists of four process steps that form the core of the method: “understand”, “collect”, “analyse” and “develop”. The focus is initially on the identification of possible AI technologies and the associated selection of appropriate learning procedures. The next step focuses on the required data, including their transfer and storage, before they are subjected to pre-processing, compression and examination in an explicit analysis. The development of the algorithm and the model takes place within the last step.

Understand. This step considers the possibilities of using AI, with the goal of selecting methods that can be applied. This process step is essential for the further data selection and algorithm development.

Collect. The focus is particularly on the aspect of data, from its acquisition, through its transfer, to its storage. With the availability of an extensive database, the system enables flexible and high-quality adaptation to problems.

Analyse. This step focuses on data pre-processing and processing. Its content includes the detection of noise effects and errors or deviations. As a result of data aggregation, the collected data or information can be consolidated and bundled into superordinate categories. By using the possibilities of explorative data analysis, an understanding of the data set including its characteristics such as quantity, completeness and correctness can be developed. At the end of this phase, the project team has collected insights about correlations, patterns and trends (a minimal sketch of this step follows below).

Develop. The algorithm development takes place during the fourth step of the innovation phase. Depending on the use case, there are different techniques and types of model creation. The “Visualisation design” process step rounds off the development phase and ensures an appropriate visualisation to convey understanding. With the transition to the “Integration” phase, the use case is transformed from PoC to prototype status.
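A minimal sketch of what the “Analyse” step can look like in practice is shown below, assuming a pandas workflow; the file name, column names, the 3-sigma noise rule and the hourly aggregation are illustrative assumptions, not prescriptions from the method.

```python
# Minimal sketch of the "Analyse" step (illustration, hypothetical column names):
# pre-processing, simple noise/outlier handling, aggregation and a first
# exploratory look at the data set.
import pandas as pd

df = pd.read_csv("shopfloor_signals.csv", parse_dates=["timestamp"])  # assumed file

# Pre-processing: drop exact duplicates and rows with missing sensor values.
df = df.drop_duplicates().dropna(subset=["temperature", "vibration"])

# Crude noise/outlier detection: drop values more than 3 standard deviations away.
z = (df["vibration"] - df["vibration"].mean()) / df["vibration"].std()
df = df[z.abs() <= 3]

# Data aggregation: bundle raw readings into hourly means per machine.
hourly = df.groupby(["machine_id", pd.Grouper(key="timestamp", freq="1h")]).mean(numeric_only=True)

# Explorative data analysis: quantity, completeness and basic statistics.
print(hourly.describe())
print("rows:", len(hourly), "missing values:", int(hourly.isna().sum().sum()))
```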

4.4 Integration Phase

The integration phase consists of two process steps and covers the final phase of implementing the use case in the manufacturing area. The deployment process step involves modelling and simulating the developed model and evaluating it; iterative loops back to previous process steps can occur. The final step is the use case rollout, the definition of standards and the start of a continuous improvement process. Deploy. The focus of this step is the modelling or simulation of the developed model using test data, with the possibility of evaluating the respective results. If deployment in the manufacturing environment turns out to be impossible, there will be an iteration back to previous process steps.



Operation. In the further development, the operation step must take into account the company’s standards, integrate the systems into existing structures and define the respective user authorizations. The rollout step ensures the transition to operations. After confirmation of structures and processes, the pilot is transferred to manufacturing as a complete system.

Fig. 1. Overview of the process steps within each phase

5 Conclusion

Companies need a systematic method to adopt AI in manufacturing. The presented method describes an iterative process model for the introduction of AI applications in manufacturing, with the intention of supporting companies on this novel and time-consuming path. Through its three phases, the method considers all aspects, from target development with the associated feasibility check, data availability, analysis and model development, to the adoption of the AI application in the real manufacturing process after prior modelling and evaluation. An application of the method succeeded in generating and implementing a promising use case for predictive maintenance at an automotive OEM. The method was used to take a holistic view of all essential areas, from the provision of data by sensors, to data preparation and analysis, to employee-oriented visualization and provision of information in a dashboard or app. Within the scope of the use case, it was thus possible to show that the right data, in correspondingly high quality, is the central basis for the success of data-supported applications. Further publications will present the methodological approach of each step of the presented method in detail.


Fighting Cyberbullying: An Analysis of Algorithms Used to Detect Harassing Text Found on YouTube Rachel E. Trana(&), Christopher E. Gomez, and Rachel F. Adler Department of Computer Science, Northeastern Illinois University, 5500 North Saint Louis Avenue, Chicago, IL 60625, USA {r-trana,c-gomez12,r-adler}@neiu.edu

Abstract. Cyberbullying is a form of harassment that occurs through online communication with the intention of causing emotional distress to the intended target(s). Given the increase in cyberbullying, our goal is to develop a machine learning classification schema to minimize incidents specifically involving text extracted from image memes. To provide a current corpus for classification of the text that can be found in image memes, we collected a corpus containing approximately 19,000 text comments extracted from YouTube. We report on the efficacy of three machine learning classifiers, naive Bayes, Support Vector Machine, and a convolutional neural network applied to a YouTube dataset, and compare the results to an existing Formspring dataset. Additionally, we investigate algorithms for detecting cyberbullying in topic-based subgroups within the YouTube corpus.

Keywords: Cyberbullying · YouTube · Machine learning · SVM · Naive Bayes · CNN

1 Introduction

Cyberbullying is a form of bullying that occurs through the use of online accounts, such as on social networking sites (SNS), and frequently leads to physical or mental harm or emotional distress. Within the last several years, cyberbullying has increased dramatically. A study from the National Society for the Prevention of Cruelty to Children in the UK showed that 4,541 children received counseling for online bullying in 2015–16, almost twice the number from 2011–12 [1]. In a recent study in the US, 33.8% of 5,700 middle and high-school students reported being cyberbullied [2]. Social media platforms such as Twitter, Facebook, and Instagram have expanded the ways in which bullying occurs. The Pew Research Center for Internet and Technology reported in 2018 that about 95% of teenagers ages 13 to 17 have access to a smartphone and that, out of this group, roughly 45% are online constantly [3]. By using SNS, individuals can far more easily harass other users under the guise of anonymity and without any subsequent consequences, causing lasting impact on their victims, such as social anxiety, suicidal thoughts, depression, and instances of self-harm.




Prior research on analyzing cyberbullying has focused primarily on examining text-based comments or captions under images to determine whether a comment or the corresponding image is considered cyberbullying [4–6]. However, harmful content on social networking sites does not always take the form of comments; it often appears within the images themselves [7]. The overarching goal of this project is to develop an application that determines whether the text within an image, specifically an image meme, is harmful, in order to help minimize the incidents of cyberbullying on SNS. To provide a current corpus for classification of the text that can be found in image memes, we collected a corpus containing approximately 19,000 comments extracted from YouTube and used Amazon Mechanical Turk to help classify these comments as cyberbullying text. We compare machine learning classifiers and report on the results achieved when predicting whether the text contains cyberbullying.

2 Classifying Cyberbullying on Social Networking Sites Many studies have examined text classification algorithms for cyberbullying using datasets from SNS such as Twitter, Formspring, Instagram, ASKfm, and YouTube [5, 6, 8–10]. In order to detect cyberbullying, Van Hee et al. [5] collected data from ASKfm and used a cost-sensitive Support Vector Machine (SVM) algorithm in order to fix the majority-minority class imbalance. Reynolds et al. [10] used Formspring data and achieved a 78.5% accuracy when identifying cyberbullying. Hani et al. [11] used the dataset from Reynolds et al. [10] and achieved an accuracy of 92.8% with a neural network algorithm and an accuracy of 90.3% with SVM. Al-garadi et al. [12] compared the results using naive Bayes, Random Forest, SVM, and K-Nearest Neighbor (KNN), both with and without the Synthetic Minority Oversampling Technique (SMOTE) for imbalanced data on Twitter data and found the best accuracy using random forest with SMOTE. Sugandhi et al. [13] found SVM to have a better accuracy than naive Bayes or KNN when determining cyberbullying with a Twitter dataset. Agrawal and Awekar [8] found that deep learning models performed better than more traditional models when detecting cyberbullying on Twitter, Formspring, and Wikipedia. This was further supported by Dadvar and Eckert [9] who also found deep learning-based models performed better than traditional ones when applied to a YouTube dataset. In order to account for words that have been deliberately misspelled, Zhang et al. [14] used a phonetically-based convolutional neural network (PCNN) with improved results on both a Twitter and Formspring dataset. While text-based comments on SNS are often used to detect cyberbullying, many images can have offensive comments directly in the image. Some studies have proposed frameworks that could be used to identify images and text for cyberbullying [15, 16]. Using Instagram and Vine datasets, studies have had workers examine images and their associated comments to detect cyberbullying [4, 6, 17]. Salawu et al. [18] point out that bullies can bypass automatic text classifiers by embedding harmful comments in images, videos, and animations and recommended using optical character recognition to extract the text from images.



3 Methodology

3.1 Data Collection: YouTube

To provide a current corpus for classification of cyberbullying text, we collected approximately 19,000 comments extracted using the YouTube API between October 2019 and January 2020. This general corpus consists of comments on controversial and divisive topics, such as politics, religion, gender, race and sexual orientation. This data was manually labeled as bullying/non-bullying using the online labor market Amazon Mechanical Turk (MT). Three MT workers classified each comment in the corpus as bullying or non-bullying. The complete dataset contained 6,462 bullying comments and 12,314 non-bullying comments, leading to a 34.4% bullying incidence, consistent with the description of a good dataset [18].
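A rough sketch of how three Mechanical Turk judgments per comment can be merged into a single label by majority vote is shown below; the column names, example rows and pandas-based workflow are illustrative assumptions rather than the authors’ actual labeling pipeline.

```python
import pandas as pd

# Hypothetical judgments: three Mechanical Turk workers per comment (illustrative data only)
judgments = pd.DataFrame({
    "comment_id": [1, 1, 1, 2, 2, 2],
    "text": ["you rock"] * 3 + ["nobody wants you here"] * 3,
    "label": ["non-bullying", "non-bullying", "bullying",
              "bullying", "bullying", "non-bullying"],
})

# Majority vote over the workers assigned to each comment
majority = (judgments.groupby("comment_id")["label"]
                      .agg(lambda votes: votes.value_counts().idxmax())
                      .rename("majority_label")
                      .reset_index())

# One row per comment with its aggregated label
dataset = judgments.drop_duplicates("comment_id")[["comment_id", "text"]].merge(majority, on="comment_id")
print(dataset)
print(dataset["majority_label"].value_counts())  # bullying vs non-bullying counts
```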

3.2 Model Creation

Model creation was achieved through three primary steps: preprocessing, feature vector creation, and classification. (1) Preprocessing: This included converting all text to lowercase and removing punctuation, special characters, URLs, @handles, stop words, empty comments, and names. Additionally, redundant letters were reduced, misspelled words were corrected using the Symmetric Delete Spelling Correction algorithm (SymSpell), and words were lemmatized. For consistency with the average length of the Kaggle Formspring dataset, we removed any comments with a length greater than 100, resulting in a final dataset with a total of 11,798 comments, with roughly 30% of the comments annotated as bullying. For ease of reference, we refer to this dataset as YouTube-11k. (2) Feature Extraction: We used a bag-of-words approach to extract features from the dataset, and term frequency-inverse document frequency (TF-IDF) to assess the importance of words in comments relative to the dataset. (3) Classification: Consistent with other studies, the data was split into training and testing sets, with 80% of the data used for training and 20% used for testing. We used three classification algorithms: SVM, naive Bayes, and a convolutional neural network (CNN). All three algorithms were evaluated using the following criteria: accuracy (overall), precision, recall, and f-score.
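The TF-IDF feature extraction, the 80/20 split and the SVM/naive Bayes classification can be sketched with scikit-learn as below; this is a minimal illustration with a tiny placeholder corpus and assumed variable names, not the authors’ exact implementation, and the CNN branch is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

# Tiny placeholder corpus; in the study, texts/labels would come from the preprocessed YouTube comments
texts = ["you are awesome", "nobody likes you loser", "great video thanks",
         "go away you idiot", "interesting point", "you are so dumb"] * 20
labels = [0, 1, 0, 1, 0, 1] * 20          # 1 = bullying, 0 = non-bullying

# 80% training, 20% testing
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)

# Bag-of-words features weighted by TF-IDF
vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# SVM and naive Bayes, evaluated with accuracy, precision, recall and f-score
for name, clf in [("SVM", LinearSVC()), ("naive Bayes", MultinomialNB())]:
    clf.fit(X_train_vec, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test_vec)))
```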

3.3 Other Datasets: Kaggle - Formspring

As a baseline comparison for applying the SVM, naive Bayes, and CNN algorithms to the YouTube dataset, we applied these algorithms to a well-known cyberbullying dataset from Kaggle, which was collected and labeled by Reynolds et al. [10]. This dataset is composed of 12,770 messages collected from the social networking site Formspring and is annotated with classes of either bullying or non-bullying. The distribution of the annotation classes is strongly unbalanced, with roughly 6% of the responses belonging to the bullying class. To address the data imbalance, we used a subset of the full dataset that contains a higher percentage of bullying conversations. The resulting subset was composed of 1,728 total comments, with 774 comments annotated as bullying and 954 comments annotated as non-bullying.



4 Results

4.1 Formspring vs YouTube Classification Performance

The classification evaluation metrics (Table 1) of the SVM, naive Bayes and CNN algorithms on the balanced Formspring dataset were consistent with previous studies [6, 11]. We then applied the algorithms to the YouTube-11k dataset. SVM outperformed the other two algorithms in all evaluation criteria, yielding the highest accuracy at 76%, with naive Bayes yielding a 75% accuracy and CNN yielding 70% accuracy (Table 1). SMOTE was applied to the YouTube-11k dataset to help correct for data imbalance, but did not yield any significant improvements in accuracy.

Table 1. Average accuracy, recall, precision, and f-score for SVM, naive Bayes (NB) and CNN when applied to the Formspring and YouTube-11k datasets.

        Formspring                           YouTube-11k
        Acc.   Recall   Prec.   F-score      Acc.   Recall   Prec.   F-score
SVM     91%    91%      91%     91%          76%    63%      70%     64%
NB      90%    90%      90%     90%          75%    58%      70%     57%
CNN     86%    86%      86%     86%          70%    61%      60%     61%
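The SMOTE correction mentioned above can be sketched with the imbalanced-learn library; the snippet below uses a synthetic stand-in for the imbalanced feature matrix and is only an illustrative fragment, not the authors’ code.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in for the imbalanced YouTube-11k feature matrix (roughly 30% minority class)
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3], random_state=42)
print("before:", Counter(y))

# Oversample the minority (bullying) class with SMOTE before training the classifiers
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```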

For a more equal comparison and to address the data imbalance, we created a reduced version of the YouTube-11k dataset, roughly the same size as the Formspring dataset. This smaller, more balanced dataset (referred to as YouTube-SM) consisted of 1,913 comments with a roughly 50–50 split of bullying to non-bullying instances (898 bullying, 1,015 non-bullying). All three algorithms showed an increase in all evaluation metrics on the balanced YouTube-SM dataset, with naive Bayes exhibiting the highest accuracy at 84%, and SVM and CNN both exhibiting an overall 82% accuracy. However, the algorithms’ accuracies on the YouTube-SM dataset are still lower than those on the Formspring dataset, despite similarities in size and in the number of bullying/non-bullying annotated instances.

4.2 Classification Performance by Topic

Given the broad nature of content in the YouTube dataset, we investigated the accuracy of the algorithms when applied to topic-based subgroups within the dataset: Body Image, Race/Ethnicity, Politics, Gender Equality, and a General (no specific topic) category. Each subgroup within the dataset contained roughly equal percentages of bullying vs non-bullying comments. Running the classification algorithms on the individual subgroups showed higher average accuracy and improvement in classification metrics (see Table 2; only average accuracy is shown). Naive Bayes outperformed SVM and CNN in three of the five subgroups: Race/Ethnicity, Politics, and General. SVM outperformed naive Bayes and CNN in the Gender Equality subgroup, and all three algorithms exhibited an equal performance in average accuracy for the Body Image subgroup.

Table 2. Average accuracy for each subgroup within the YouTube-SM dataset. Bullying (B) and non-bullying (N) comment count is provided for each subgroup.

        Body image        Race/Ethnicity    Politics          Gender equality   General
        114 (B)/109 (N)   132 (B)/125 (N)   163 (B)/173 (N)   211 (B)/210 (N)   278 (B)/398 (N)
SVM     93%               81%               78%               88%               87%
NB      93%               88%               82%               86%               88%
CNN     93%               77%               62%               85%               86%

5 Discussion

Despite the size and appropriate ratio of bullying/non-bullying incidents within the YouTube-11k and YouTube-SM datasets, the classification algorithms have a higher accuracy when applied to the Formspring dataset. However, the Formspring platform was primarily dominated by teens and, as a result, the small number of cyberbullying incidents has a limited focus. Given the broad range of topics that can be discussed online, a dataset such as the one collected from YouTube for this paper can be a more accurate representation of possible online occurrences of bullying. Several previous studies investigated cyberbullying classification using data extracted from YouTube comments [9, 19, 20]. These datasets were composed of 4,500 comments or fewer, and reported average accuracies of 66–67% or lower for SVM and 63% or lower for naive Bayes. Comparatively, our reported average accuracy results for naive Bayes and SVM applied to the YouTube-11k dataset are 75% and 76%, which suggests that word correction plays a significant role in improving classifier performance for a generalized dataset. Dinakar et al. [19] investigated binary classifiers on individual subgroups (sexuality, race, and intelligence) within a 4,500-comment YouTube dataset. They reported improved performance for naive Bayes when applied to the subgroups compared to the mixed dataset, and consistent or slightly improved performance for SVM. This is consistent with our finding that classifying within a particular topic improves classifier performance. It is also consistent with the higher performance on the Formspring dataset, as it was restricted to a particular target audience and topic. Our findings suggest that a two-step process for cyberbullying detection could be implemented: first sub-classify based on topic, and then apply a bullying classification model trained on that specific topic to determine the bullying/non-bullying classification.

6 Conclusion

Research on detecting cyberbullying has focused primarily on examining text-based comments to determine whether a comment constitutes cyberbullying. However, many images on SNS, particularly memes, contain text, which bullies can use to get around tools that flag or prevent cyberbullying. One of the challenges in working with bullying comments and harmful text in images is the broad nature of bullying commentary. The results of this research provide a dataset that can be used to distinguish between bullying and non-bullying incidents on a range of topics. Future work will focus on the development of a two-part classification schema applied to text extracted from images, to assess whether the YouTube dataset provides better context for bullying classification relative to other datasets. The results of this work can be used by SNS to automatically detect cyberbullying text included in images and to block or flag harmful images, thereby reducing the negative effects of cyberbullying.

Acknowledgments. This work was supported by the following: Northeastern Illinois University’s Committee on Organized Research, Student Center for Science Engagement Summer Research Program, and Northeastern Illinois University Graduate Dean’s Research and Creative Activities Assistantship. We would also like to thank Dr. Francisco Iacobelli, Diyan Simeonov, Kenneth Santiago, Obsmara Ulloa, Jorge Garcia, and Mirna Salem for their participation in this research project.

References
1. NSPCC: What children are telling us about bullying: childline bullying report 2015/16. London: NSPCC (2016). https://learning.nspcc.org.uk/media/1204/what-children-aretelling-us-about-bullying-childline-bullying-report-2015-16.pdf
2. Hinduja, S., Patchin, J.W.: Cyberbullying fact sheet: identification, prevention, and response. Cyberbullying Research Center (2019). https://cyberbullying.org/CyberbullyingIdentification-Prevention-Response-2019.pdf
3. Anderson, M., Jiang, J.: Teens, Social Media & Technology 2018. Pew Research Center, 31 (2018). http://www.pewinternet.org/2018/05/31/teens-social-media-technology-2018/
4. Hosseinmardi, H., Mattson, S.A., Rafiq, R.I., Han, R., Lv, Q., Mishra, S.: Detection of cyberbullying incidents on the instagram social network. Association for the Advancement of Artificial Intelligence (2015)
5. Van Hee, C., Jacobs, G., Emmery, C., Desmet, B., Lefever, E., Verhoeven, B., et al.: Automatic detection of cyberbullying in social media text. PLoS ONE 13, 10 (2018)
6. Zhong, H., Li, H., Squicciarini, A.C., Rajtmajer, S.M., Griffin, C., Miller, D.J., Caragea, C.: Content-driven detection of cyberbullying on the instagram social network. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 3952–3958 (2016)
7. Dewan, P., Suri, A., Bharadhwaj, V., Mithal, A., Kumaraguru, P.: Towards understanding crisis events on online social networks through pictures. In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 439–446. ACM, New York, NY, USA (2017)
8. Agrawal, S., Awekar, A.: Deep learning for detecting cyberbullying across multiple social media platforms. In: European Conference on Information Retrieval, pp. 141–153. Springer, Cham (2018)
9. Dadvar, M., Eckert, K.: Cyberbullying detection in social networks using deep learning based models; a reproducibility study. arXiv preprint. arXiv:1812.08046 (2018)
10. Reynolds, K., Kontostathis, A., Edwards, L.: Using machine learning to detect cyberbullying. In: 10th International Conference on Machine Learning and Applications and Workshops, pp. 241–244 (2011)
11. Hani, J., Nashaat, M., Ahmed, M., Emad, Z., Amer, E., Mohammed, A.: Social media cyberbullying detection using machine learning. Int. J. Adv. Comput. Sci. Appl. 10, 703–707 (2019)
12. Al-garadi, M.A., Varathan, K.D., Ravana, S.D.: Cybercrime detection in online communications: the experimental case of cyberbullying detection in the twitter network. Comput. Hum. Behav. 63, 433–443 (2016)
13. Sugandhi, R., Pande, A., Agrawal, A., Bhagat, H.: Automatic monitoring and prevention of cyberbullying. Int. J. Comput. Appl. 144, 17–19 (2016)
14. Zhang, X., Tong, J., Vishwamitra, N., Whittaker, E., Mazer, J.P., Kowalski, R., Dillon, E.: Cyberbullying detection with a pronunciation based convolutional neural network. In: 15th IEEE International Conference on Machine Learning and Applications, pp. 740–745. IEEE (2016)
15. Drishya, S.V., Saranya, S., Sheeba, J.I., Devaneyan, S.P.: Cyberbully image and text detection using convolutional neural networks. CiiT Int. J. Fuzzy Syst. 11(2), 25–30 (2019)
16. Kansara, K.B., Shekokar, N.M.: A framework for cyberbullying detection in social network. Int. J. Curr. Eng. Technol. 5(1), 494–498 (2015)
17. Rafiq, R.I., Hosseinmardi, H., Han, R., Lv, Q., Mishra, S., Mattson, S.A.: Careful what you share in six seconds: detecting cyberbullying instances in Vine. In: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 617–622. IEEE (2015)
18. Salawu, S., He, Y., Lumsden, J.: Approaches to automated detection of cyberbullying: a survey. IEEE Trans. Affect. Comput. (2017)
19. Dinakar, K., Jones, B., Havasi, C., Lieberman, H., Picard, R.: Common sense reasoning for detection, prevention, and mitigation of cyberbullying. ACM Trans. Interact. Intell. Syst. 2(3), 1–30 (2012)
20. Marathe, S., Shirsat, P.: Contextual features based naïve bayes classifier for cyberbullying detection on youtube. Int. J. Sci. Eng. Res. 6, 1109–1114 (2015)

For Our Complex Future, Don’t Give Us AI, Give Us Intelligent Assistance (IA): The Case for Avionics Sylvain Hourlier(&) Thales Avionics Merignac, Mérignac, France [email protected]

Abstract. Complexity is everywhere and its progression isn’t weakening. AI, ML, DL, CC, IOT, QC (AI: Artificial Intelligence ∙ ML: Machine Learning ∙ DL: Deep Learning ∙ CC: Constant Connectivity ∙ IOT: Internet of Things ∙ QC: Quantum Computing) … all these barbaric acronyms are already spreading through our daily lives with debatable success. Everyone owns those wonderful, fine pieces of equipment (smartphone, smart TV, connected appliances, even our basic PCs … the list goes on and on) that we use without really mastering them. When they perform, we perform (most of the time), but whenever anything goes wrong we become helpless, facing a void of incomprehension where we unsurprisingly fail. These technologies can suddenly turn daft, obscure and counter-intuitive because their inherent (usually hidden) complexity surfaces into our interaction. If the situation is critical, the consequences can be extremely severe. Pilots can also be in such situations, where they have to face the critical emergence of hardly manageable complexity. It is becoming common in HF-related incidents or accidents, where we have the classic: “Pilots didn’t understand what the system was doing and the system never got the pilots’ intentions”.

Keywords: Complexity · AI · Assistance · Training · Aviation

1 Let’s Help Pilots (but also All of Us) Cope with Upcoming Ubiquitous Complexity

In the near future, the aeronautical domain will be facing four specific challenges that aren’t going to help us face complexity:
1. Air traffic will continue growing, thus increasing the complexity of the aeronautical environment, with an average of 4.3% per annum over the next 20 years, linked mostly to emerging countries and population growth [1]
2. The shortage of pilots [2] will impact recruiting standards, thus reducing their potential performance
3. Pilots’ nurtured lack of awareness of potential complexity, due to benevolent, yet fallible, masking automation [3]
4. The icing on the cake being that global warming may induce far more “exceptional”, thus complex, weather situations never experienced by pilots [4].



Even at a constant level of technological complexity in the cockpit (which won’t be the case), the combination of these four will lead to the emergence of unknown, unforeseeable and formidable events that no one is prepared for, especially if we keep a safety objective of 10⁻⁹ per flight hour. Yet the answer is still to increase training, as stated [5] at the Controlled Flight Into Terrain (CFIT) symposium (a CFIT happens when an aircraft with no technological problem, manned by a trained crew, still ends up crashing: your typical quid pro quo between system and human).

2 The Efficiency of Training for Complex Situations

Even if we could anticipate all such “new” upcoming critical events in the cockpit (and we won’t), we would still have a fundamental problem of training priorities. Would you rather train your pilots for numerous extremely rare events (of the once-in-a-lifetime type) or for the routine problems they encounter all the time? Moreover, to be efficient, training must be repeated enough or put into practice in real life to be profitable. That won’t be the case for such rare events. The training-over-duty ratio must remain realistic.

Fig. 1. Relationship between the occurrence of an event and its complexity in terms of training for its management.



Figure 1 tries to show how we can manage complex events with regard to their occurrence frequency, and how we deal with them through training or technology solutions. Frequent simple events are dealt with by technology and simple training; when something happens a lot, we are prepared for it. When events are rare but of low complexity, they fall into our “coping capability” zone: there is no specific training, but all in all we can manage by relying on our knowledge and skills. When events are extremely complex and quite frequent, they have been dealt with by the industry through technological safeguards (or protections imposed by procedures or regulations). Our concern is the top right quadrant: extremely complex critical events that almost never happen and thus are very difficult to prepare for. Technologically speaking, their resolution is very expensive with regard to their occurrence, and in terms of training they compete with more frequent situations that need to be trained for and happen way more often. They must thus remain sustainable in terms of financial consequences, though this is socially questionable.

3 Coping Can’t Be the Answer … Neither Is AI

Another way to deal with that could be to prepare pilots for adversity by enhancing their coping capacity for the “unknown”. But training coping capacity for this quadrant isn’t the answer, as it is hardly teachable; in fact, it appears to be mostly an experience-based ability. Ex-military crews with extensive combat experience, for instance, have developed such capacity through cumulative experiences and are able to put it to use in adverse situations [6]. Yet such pilots will not be tomorrow’s average airline recruits. At this point we have to admit the future is looking grim, but let’s consider that, today, all HMI1 are mostly designed for “within envelope” operations. There is room for improvement. Artificial Intelligence, being in itself a model of opacity, has motivated DARPA [7] to fund research that would make it “[self] explainable”. Such AI would complement its propositions with clear elements that would make them “graspable” to humans, so that they would mostly not doubt them. Using complex systems without understanding them is putting your faith in magic. It can work and has in the past: there has always been complex technology around compared to the level of education (think electricity or the internet). But satisfaction with such magic is acceptable ONLY if it never fails or if it represents a last-chance option (i.e. the panic button, a better-than-nothing solution). To make matters worse, education has always had a slow start regarding emerging technologies. For instance, no one was properly educated at school to anticipate appropriate usage of social networks. The problem is that, with the technological push, users are more and more in a fragile situation where they may get overwhelmed by emerging complexity they can never be trained to cope with.

1 HMI: Human Machine Interface.



One way to partially deal with complexity is to change our 10⁻⁹ catastrophic-failure-per-flight-hour reference to 10⁻¹⁰ (i.e. make the occurrence ten times less likely). The other is to get outside help to deal with it. Most of us are already doing it: remember setting up that new oversized connected TV with your entire pre-existing tech (box, Wi-Fi, media player, NAS2, …). You weren’t able to do it yourself, so you looked it up on YouTube, found the tutorial, followed it step by step, and there you were, all done and feeling like a hacker. But conditions were optimal; there was no threat, no time pressure, no risk or critical consequences in case of failure. Our pilots can’t (yet…) do the same.

4 Intelligent Assistance May Be the Solution

We need another kind of outside help; we need Intelligent Assistance (IA).

Fig. 2. Relationship between complexity and training time needed (full time being 8 h a day) for acceptable performance.

2 NAS: Network Attached Storage.



Figure 2 is a tentative distribution of training time when addressing our environment’s complexity. The top of the figure serves as a reference: these groups spend most of their time learning or expanding their knowledge. Early education, on the left, addresses low-complexity problems first before moving on to harder ones, and on the top right we find academics in the research domain who spend their lives (also full time) figuring out the most complex problems. The bottom part represents the majority of the population, who have to undergo a certain amount of training to face operational challenges the rest of the day. The bottom left quadrant represents the need for training with regard to the type of design, when complexity is supposed to be managed. For low-complexity situations, when technology relies on intuitiveness and stereotypes, training is reduced. Alas, when the design is poor, even for simple situations, some technology may require excessive hours of training, and operators usually resent it. When the complexity of the situation rises, even with the best design, the need for training increases to keep complexity managed (i.e. hidden). The most complex situations, on the far right of the bottom quadrant, represent unmanaged complexity (mostly because of extreme rarity or plain unpredictability). A problem may occur when a disruptive, unpredictable event arises, imposing unmanaged complexity on an operator who only has access to an HMI designed for managed complexity. Such a “surge” of demands may induce an overwhelming effect of stress when the operator has no coping strategy ready. That bottom right space is the reason for the development of intelligent assistance. We need an AI/ML-based system to explain complexity when it arises at operator level. We have to develop a mediation interface between technology and its users, dedicated to recognizing, explaining and accompanying critical situations: an interpreter of sorts, for puzzled humans, capable of explaining in a reasonably understandable way those complex situations (such as those in the bottom right corner of Fig. 2). Just like when we were kids and that very good teacher could make us understand, with accessible words, what was most complicated.

5 Conclusion

What I have just described here for the cockpit is highly applicable to any field of work, even everyday life, and it becomes all the more relevant when you think about an ageing population. As a bonus, developing AI-based assistance will contain AI-based systems in the primordial role of helping humans, because AI is just another computer tool that needs to be used properly. It’s our future: endure without understanding (like children facing fallible magic), or develop the Intelligent Assistance between human beings and the almighty complexity, to safeguard our capacity for control.



References
1. ICAO: Future of Aviation (2020). https://www.icao.int/Meetings/FutureOfAviation/Pages/default.aspx. Accessed 27 Jan 2020
2. Higgins, J., Lovelace, K., Bjerke, E., Lounsberry, N., Lutte, R., Friedenzohn, D., Craig, P.: An investigation of the United States airline pilot labor supply. Grand Forks, ND, University of North Dakota, University of Nebraska Omaha, Embry-Riddle Aeronautical University, Southern Illinois University, LeTourneau University, Middle Tennessee State University, 35 (2013)
3. Parasuraman, R., Molloy, R., Singh, I.L.: Performance consequences of automation-induced ‘complacency’. Int. J. Aviat. Psychol. 3(1), 1–23 (1993)
4. Koetse, M.J., Rietveld, P.: The impact of climate change and weather on transport: an overview of empirical findings. Transp. Res. Part D: Transp. Environ. 14(3), 205–221 (2009)
5. ICAO: Key Outcomes of Loss of Control In-flight Symposium, 20–22 May 2014, Montreal, Canada (2014). https://www.icao.int/Meetings/LOCI/Documentation/Key%20Outcomes.pdf
6. Bey, C.: Ph.D. Thesis, University of Bordeaux, France (2017)
7. Gunning, D.: Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web, 2 (2017)

AI-Driven Worker Assistance System for Additive Manufacturing Benjamin Röhm(&), Leo Gögelein, Stefan Kugler, and Reiner Anderl Department of Computer Integrated Design (DiK), TU Darmstadt, Otto-Berndt-Straße 2, 64287 Darmstadt, Germany {Roehm,Kugler,Anderl}@dik.tu-darmstadt.de

Abstract. Conventional manufacturing processes are continuously developing, and new manufacturing processes, such as additive manufacturing, are establishing themselves on the market. The benefits are product and variant diversity. For employees in production, this means broad expert knowledge of manufacturing processes and the simultaneous operation of several partially automated processes. To address this development and to reach a sustainable process, we developed an assistance system that supports workers in their daily challenges. The assistance system includes an AI-based evaluation of the manufacturing technology, a user-friendly interface and a use case for training and transfer.

Keywords: Machine learning · Neural networks · Convolutional neural networks · Additive manufacturing · Worker assistance · Sustainable

1 Introduction

Additive manufacturing processes have developed from prototyping to use in series and individual production [1]. In addition to technical development, economic success requires interdisciplinary specialists [2], who are responsible for several manufacturing processes simultaneously [3]. The flexible processes are the basis for short and cost-effective product development, but they require efficient support of the employees with current product data. This research work deals with process optimization through the application of AI algorithms, as well as with the supply of employee-relevant process information. In addition to the technological optimization, the employees were involved in the development at an early stage, to counteract the often existing scepticism towards AI applications in their working environment [4]. Furthermore, a low-threshold transfer to other fields of application is targeted.

2 Concept

The technological optimization with an AI application focuses on the analysis of camera images from production. In contrast to image alignment processing (feature recognition) of similar structures, it is required to compare different geometries independently of the actual production geometry [5]. In the case of additive manufacturing, this is independent of specific settings or printer designs. The concept is strongly tied to the selected manufacturing process. The current research adapts AI applications to the partially automated FDM process. In the FDM process, the workpieces are printed layer by layer from extruded material on a printing bed. The adhesion of the layers builds up an entire body [6]. Thus, a poor-quality printing layer leads to an inferior following layer and, in the worst case, to the loss of the entire workpiece. In particular, the first printing layer on the printing bed is crucial for a successful, high-quality printing result. In addition to the parameters of the preprocessing, an optimal printing layer depends on the structure, quality and distance between the extruder and the printing bed. Especially in the FDM process, the printing bed is often removed to extract the workpiece, or a nozzle change is required. These operations lead to changing printing results. Distance sensors between extruder and printing bed help to level out the extruder but do not calibrate the distance between the extruder and the printing bed; they just keep it constant. The required offset depends on the experience of employees during test prints. In our concept, we replaced this experience-based procedure with a camera system and an AI application. The concept intends to use a camera system below the printing bed, connected to a computer unit (Fig. 1). The computer unit contains the neural network and evaluates the quality of the first printing layer.



Fig. 1. Concept for the AI-based 3D printer: The camera system is located below the printing bed of the additive manufacturing process on the left side and is connected to a computer unit. For training, validation and evaluation of the first printing layer, different images are required.

For the training of the neural network, the computer unit processes the labeled images and defines the parameters of the algorithm. Experienced employees label the images via a designed user interface. Thus, the algorithm represents the experience of the employees [7].

3 Technical Setup

The FDM process comprises different printer designs. This research used a delta FDM printer. Delta printers are characterized by a large build volume and a static printing bed [8]. Compared to other printer designs, this design allows mounting the camera system below the printing bed without any relative movements. We replaced the printing bed with a double-glazed printing bed with heating elements. This allows the camera system to take a picture of the entire first printing layer (Fig. 2).

Fig. 2. Exploded view of the 3D model: 1 Delta printer; 2 Glazed printing bed; 3 Camera system; 4 Stand

Table 1 shows the set parameters of the preprocessing.

Table 1. Print settings
Parameter              Value        Unit
Nozzle                 8/0.315      mm/inches
Temperature extruder   215/419      °C/°F
Temperature bed        0/32         °C/°F
Layer height           0.2/0.0079   mm/inches

Two separate systems control the printer and the image evaluation. Each system has its own micro-controller. The designed process allows relevant data exchange between the systems via text messages. The information that a first printing layer has been completed serves as the input signal for the AI application. This automatically triggers the camera and transmits the image to the neural network. We used a Convolutional Neural Network (CNN) and trained its parameters with training data. CNNs are characterized by the acquisition of patterns from high-dimensional input data, such as images or videos [5]. In the case of production processes, it is possible to detect quality-reducing air voids in the first printing layer, even if they were located elsewhere during the training of the CNN. To increase the amount of training data, we divided every image into sub-segments; each image thus results in 625 sub-segments, i.e. a 25-by-25 division of a square image. 100,000 data sets could be achieved with a minimum of prints.
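The 25-by-25 tiling of the layer image can be sketched as follows; the image size and the NumPy-based slicing are illustrative assumptions for a synthetic image and not the authors’ implementation.

```python
import numpy as np

# Hypothetical camera image of the first printing layer (grey values), 1000 x 1000 pixels
img = np.random.randint(0, 256, size=(1000, 1000), dtype=np.uint8)

# Divide the square image into 25 x 25 = 625 sub-segments (here 40 x 40 pixels each)
n = 25
h, w = img.shape[0] // n, img.shape[1] // n
segments = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(n) for j in range(n)]

print(len(segments))   # 625 training samples obtained from a single print image
```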



4 Human-Machine Interface (HMI)

A developed human-machine interface (HMI) supports the employee throughout the entire production process. The employee can navigate through different tabs, which guide the employee through the whole process. For inexperienced employees, a general introduction to 3D printing is available, as well as an explanation of the concept for a better understanding. The tab “Data Collection” supports the employee in his work routine. This tab shows the most recent camera image. The image is already prepared, and all segments containing material are marked. The pie chart on the right side shows the ratio of printed segments to unprinted segments (Fig. 3).

Fig. 3. HMI “Data Collection”: Marking of detected Segments of Material and ratio of printed segments to unprinted segments

During the training of the CNN, experienced employees have to label images by using the HMI. To do so, the employee gets two sets of information for decision making, even if he was not at the machine during printing. The HMI provides an image of the first printing layer on the right-hand side and a time-lapse video of the entire print (Fig. 4, left). Based on this input, he determines the achieved print height and stores the dataset via the button “Set Label”. An additional tab shows the neural network during the calculation of the parameters. Finally, the tab “Deployment” shows an evaluation of a current print as output of the algorithm (Fig. 4, right). For this purpose, the previously detected segments are coloured as positive (green) or negative (red) segments. The pie chart represents the ratio of the positively evaluated segments. From the marked picture on the left and the pie chart on the right, the employee can read off the positively marked segments of the current print. Based on the evaluation of the trained CNN, the employee has to decide whether to stop the current printing or not. At this point of the research, the employee is responsible for operating the system. However, he has more information about the current printing and decides on a higher level of information. Together, these two charts represent the worker assistance system.
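The ratio shown in the “Deployment” pie chart can be derived directly from the per-segment outputs of the network; the following lines are an illustrative sketch with made-up prediction values rather than the implemented HMI code.

```python
import numpy as np

# Hypothetical per-segment CNN outputs: probability that a detected segment is printed correctly
predictions = np.array([0.92, 0.15, 0.88, 0.67, 0.40])   # one value per detected segment

positive = predictions >= 0.5            # green (positive) vs red (negative) colouring
positive_ratio = positive.mean()         # value shown in the "Deployment" pie chart
print(f"{positive.sum()} of {positive.size} segments positive ({positive_ratio:.0%})")
```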



Fig. 4. HMI “Data Labeling” (left): time-lapse video of the entire print and the image of the first print layer; “Deployment” (right): worker support showing which segments were evaluated as positive

5 Conclusion

The developed and implemented concept of an AI-based 3D printer allows the evaluation of the quality of the crucial first printing layer. The reproducible determination is based on the use of a convolutional neural network. The worker assistance system therefore provides the employee with further information right at the time of his work routine. The special challenge of the concept is the required transparent printing bed; we therefore used a glazed printing bed for the test. Due to its very smooth surface and poor thermal conductivity, it causes additional challenges for the printing. However, the neural network could detect this weakening of the system early on. For further work, it is conceivable to extend the data set for training the convolutional neural network. Further parameters are the print settings of the pre-processing and the machine behaviour. This additional information enables determining the influence of specific parameters on the first printing layer. In addition to the offset of the nozzle tip to the printing bed, high travel speeds and accelerations also affect the possible loss of the workpiece during printing [9]. In the present case, we kept these parameters low to keep their influence negligible. For a fully automated process, the neural network needs to cover these parameters as well.

References
1. Lachmayer, R., Lippert, R.B., Kaierle, S.: Konstruktion für die Additive Fertigung 2018 (2020)
2. Anderl, R., Eigner, M., Sendler, U., Stark, R.: Smart Engineering. Interdisziplinäre Produktentstehung. acatech DISKUSSION. Springer, Heidelberg (2012)
3. Grundig, C.-G.: Fabrikplanung. Planungssystematik - Methoden - Anwendungen, 6th edn. Hanser, München (2018)
4. Mainzer, K.: Künstliche Intelligenz - Wann übernehmen die Maschinen?, 2. Aufl. Technik im Fokus. Springer, Heidelberg (2019)
5. Khan, S., Rahmani, H., Shah, S.A.A., Bennamoun, M., Medioni, G.A.: Guide to convolutional neural networks for computer vision. Synthesis Lectures on Computer Vision, Bd. 15. Morgan & Claypool Publishers, San Rafael (2018)
6. Gebhardt, A.: Additive Fertigungsverfahren. Additive Manufacturing und 3D-Drucken für Prototyping - Tooling - Produktion, 5. Aufl. Hanser, München (2018)
7. Frochte, J.: Maschinelles Lernen. Grundlagen und Algorithmen in Python, 2. Aufl. Hanser, München (2019)
8. Sommer, W., Schlenker, A., Lange-Schönbeck, C.-D.: Faszination 3D-Druck. Alles zum Drucken, Scannen, Modellieren. Markt + Technik, Burgthann (2016)
9. Westkämper, E., Warnecke, H.-J.: Einführung in die Fertigungstechnik, 6th edn. Vieweg + Teubner Verlag, Wiesbaden (2004)

Rating Prediction Used AI Big Data: Empathy Word in Network Analysis Method

Sang Hee Kweon1, Hyeon-Ju Cha2, and Yook Jung-Jung1

1 Department of Media and Communication, Sungkyunkwan University, Seoul, Korea [email protected]
2 College of Social Sciences, Sungkyunkwan University, Seoul, Korea [email protected]

Abstract. Through big data analysis of social media, we analyzed the relationship of the emotional network and the semantic network structure of rational language with the viewership and the degree of immersion, using UCINET, a social network analysis software package. As a result of applying the technique, the more concentrated the semantic connections of the emotional language, the higher the viewer rating; when the rational language and story structure were dispersed, the viewer rating fell. In addition, the stronger the centrality of the language network, the higher the viewer rating, and vice versa.

Keywords: Social data · AI audience prediction · Rating empathy

1 Introduction

1.1 Rating and Social Data

Emotion is related to the story. Social software, in which social relations are formed through so-called SNSs, has led to an expansion of mind beyond our senses. It has been pointed out that current viewer ratings cannot capture this social influence, which is a limitation of the current viewer rating method [1, 2].

2 Social Data and TV Rating

These issues have led to alternatives for more accurate viewer ratings. For example, Nielsen, a leading viewership research firm, introduced the Nielsen Twitter TV rating, based on the correlation between social media and viewership in the United States, and the Nielsen Digital Program Ratings system for measuring online TV content viewing [2]. Various social media analysis firms in the US are also attempting to analyze social media responses and reflect them in viewership figures by quantifying social TV activities and matching them with demographic factors and the content of individual programs.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Ahram (Ed.): AHFE 2020, AISC 1213, pp. 28–32, 2021. https://doi.org/10.1007/978-3-030-51328-3_5



In order to understand the structure and characteristics of buzz words in social TV activities related to the programs on Twitter, UCINET (Borgatti et al.) [3], a social network analysis software package, and its visualization tool NetDraw were used to visualize the network of co-occurring words. In order to assess the specific role of each word in the network, we examined degree centrality and betweenness centrality, which are quantified based on the concept of centrality and its measurement methods. Degree centrality refers to activity, that is, the degree to which a word is connected to other words, and betweenness (mediation) centrality refers to the intermediary influence between words [4, 5].
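Degree and betweenness centrality of a word co-occurrence network can be computed with standard tools; the snippet below is an illustrative sketch using the networkx library and made-up tweet tokens, not the UCINET workflow used in the study.

```python
import itertools
import networkx as nx

# Hypothetical tokenized tweets about the drama (illustrative data only)
tweets = [["eugene", "revenge", "start"],
          ["eugene", "memory", "recovery"],
          ["revenge", "ratings", "start"]]

# Build a co-occurrence network: words appearing in the same tweet are linked
G = nx.Graph()
for tokens in tweets:
    G.add_edges_from(itertools.combinations(set(tokens), 2))

degree = nx.degree_centrality(G)        # activity: how many words a word is linked to
between = nx.betweenness_centrality(G)  # intermediary influence between words

for word in sorted(G, key=degree.get, reverse=True):
    print(f"{word:10s} degree={degree[word]:.3f} betweenness={between[word]:.3f}")
```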

3 Research Methods

The technique applied in this case evaluates viewers’ opinions through network analysis, one of the big data analysis methods. The buzz that appears on Twitter is analyzed with respect to emotional immersion and its consequences. First, the viewers’ reviews of the broadcast program are analyzed, and the expected viewership trend is calculated accordingly. For the applied broadcasting program, the Twitter buzz volume was analyzed over the same period as the audience ratings of the drama under study.

4 Analysis Results

4.1 Key Words and Semantic Networks According to the Rise and Fall of Drama Viewership

Looking at when the daily viewing rate rises and falls, a large difference is first found in the frequency of each word attribute. When audience ratings rose, the overall word frequency was much higher, in the peripheral properties as well as in the central properties of the program that correspond to informational, emotional, and broadcast structures. On the other hand, in the Twitter messages associated with a drop in viewership, the two main characters, Eugene and Lee J. J., came second and third, while other words were distributed across various characters such as ‘information stone’, ‘Jeon I. H.’, ‘Min H. D.’, ‘Yang C. H.’, ‘Park Y. G.’, ‘Lee W. J. young’ and ‘Cheolgyu’.

4.2 Emotional Words of Rising Viewership

This trend is similar in semantic networks. In the section where viewer ratings increase on a daily basis (sensitive words and emotional immersion center), ‘100 years of heritage’, ‘Yujin’, ‘plurality’, ‘beginning’, and ‘notification’ form the strongest connected network. It can be seen that Eugene is interested in finally starting ‘revenge’, followed by background issues that explain the central issues such as ‘memory’, ‘recovery’ and ‘Chaewon’ (Fig. 1).



Fig. 1. Increasing daily rating

4.3 Decline in Ratings

On the other hand, in the section where the viewer rating decreases on a daily basis (dispersion of rational language and story structure), the issue formed around ‘Jun I. H.’, ‘Jung B. S.’, ‘Min H. D.’, ‘Yang C. H.’, ‘Couple’ and ‘Lover’ is most noticeable. In addition, other stories unfold around ‘Cho W. Y.’, ‘Cheolgyu’ and ‘Pension’, and these stories spread radially rather than being concentrated into one. In particular, there is an issue in which Eugene suffers a ‘start’ between Eugene and Lee J. J., the young heroes of the drama, but it does not emerge as a center, revealing the coarse structure at the outskirts. In addition, ‘Eugene’ plays the role of connecting issues after ‘100 years of heritage’, but its connection centrality is not high, meaning it is away from the center (Fig. 2).



Fig. 2. Case of decline in daily rating

4.4 Emotional Language Centers Between Changes

On the other hand, looking at the period in which the weekly viewing rate rose the most, ‘Sim Y. Y.’ creates the most central issues, as revealed in the key word ranking. ‘Sim Y. Y.’ has appeared as ‘Ma H. J.’ in ‘100 years of heritage’, with ‘Appearance’ and ‘Injection’ creating a topic that makes the ‘Fighting Showdown’ worth seeing as ‘Attacking Water’ for ‘Room Chairman’ and ‘Mother-in-law’. In most of the segments where weekly viewership fell, the main feature is that the issue that can be interpreted as the ‘quiz event’ for the drama ‘Title’, starring ‘Yujin’ and ‘Lee J. J.’, leads the central issue as the subgroup showing the strongest connection (Table 1).


Table 1. Centrality value by daily view

Before rising ratings (connected centrality): 100 years of legacy 13.385; Eugene 5.480; Revenge 4.246; Watching 3.056; Start 2.733; Memory 2.498; Lee J. J. 2.380; Ratings 2.277; Today 1.910; Recovery 1.822
Before rising ratings (median centrality): 100 years of legacy 63.421; Eugene 4.225; Watching 2.898; Start 1.640; Today 1.608; Revenge 1.372; Exciting 1.192; Memory 1.070; Broadcasting 1.055; Ridiculous 1.004
Before falling ratings (connected centrality): 100 years of legacy 24.093; Jung B. S. 4.809; Today 4.583; Couple 4.401; Jun I. H. 4.310; Lover 4.265; Yang C. H. 4.265; Min H. D. 4.265; Eugene 3.811; Cheolgyu 3.584
Before falling ratings (median centrality): 100 years of legacy 82.590; Eugene 2.278; Cheolgyu 1.327; Jung B. S. 0.865; Fun 0.673; Choi W. Y. 0.657; Park Y. G. 0.632; Acting 0.570; Chaewon 0.339; Broadcasting 0.333

References
1. Ma, K.R.: Do big data support to forecast TV viewer rating in Korean TV drama case? Hangyang University, MA: Information System (2013)
2. Korea Communications Agency: A study on the evolution of the viewpoint measurement by the change of TV viewing behavior. Trend and Prospect 63, 61–73 (2013)
3. Borgatti, S., Everett, M.G., Freeman, L.: Ucinet for Windows: Software for Social Network Analysis. Analytic Technologies, Harvard (2002)
4. Freeman, L.C.: Centrality in social networks: conceptual clarification. Soc. Netw. 1(3), 215–239 (1979)
5. Nielsen, F.: MBI Touchpoints, uSamp (recited from Trends and Prospects, June 2013) 2(5), 99–110 (2016)

Sound Identification System from Auditory Cortex by Using fMRI and Deep Learning: Study on Experimental Design for Capturing Brain Images

Jun Shinke1, Kyoko Shibata1, and Hironobu Satoh2

1 Kochi University of Technology, Miyanokuchi 185, Tosayamada, Kami, Kochi, Japan [email protected], [email protected]
2 National Institute of Information and Communications Technology, Nukui-Kitamachi, Koganei, Tokyo 187-8795, Japan [email protected]

Abstract. This study aims to establish a technique for estimating sounds using deep learning, based on brain images captured by fMRI while humans hear sounds. Humans hear complex sounds with mixed frequencies, so we develop a complex-sound estimation system to establish this technique. So far, we have developed a system that identifies a single sound from brain images captured while humans hear a single sound, using a CNN, one of the deep learning methods, but it does not support complex sounds. Therefore, we focus on complex sounds and aim to develop a system that identifies complex sounds from brain images captured while humans hear complex sounds. Since identification results generally depend on the brain images used for identification, this report compares block design and event-related design, the fMRI experimental designs for capturing brain images, in order to understand the effect of the stability of brain activity on the brain image. As a result, the identification rates of two types of complex sounds were almost the same for both designs, and the effect of the stability of brain activity did not appear in the identification rates, so we decided to use the event-related design.

Keywords: fMRI · Deep learning · Brain decoding · Block design · Event-related design

1 Introduction

In recent years, fMRI has evolved, and research on brain information decoding technology using fMRI is being promoted. fMRI is representative of functional brain image analysis and can quantitatively measure brain activity. Among brain information decoding technologies, in the field of the visual cortex, a decoding algorithm has been developed that can decode visual information, not only the objects seen by humans but also the visual imagery during sleep [1]. However, due to loud fMRI operational noise, development of the auditory cortical brain activity mechanism and auditory cortical


brain information decoding technology has faced challenges. Therefore, this study establishes a technique to estimate sounds, using deep learning, from brain images captured by fMRI while humans hear the sounds, in order to develop a brain information decoding system for the auditory cortex. In our previous report [2], we developed a system that can identify single sounds from brain images captured while humans hear a single sound, using deep learning. The system identifies sounds by analyzing brain images with deep learning. Using two types of deep learning, a CNN (Convolutional Neural Network) and a DBN (Deep Belief Network), sounds one scale step apart (C7 (2097 Hz) and C#7 (2217 Hz)) were estimated. Three males in their twenties were the subjects from whom brain images were captured using fMRI. As a result, the system using the CNN could identify C7 and C#7 up to 75.00%, with an average of 69.44%. The system is thus able to identify a single sound from brain images captured while humans hear a single sound. Humans, however, hear complex sounds with mixed frequencies, so a system that estimates complex sounds is needed to establish this technique. The previously developed system does not support complex sounds. Therefore, this study focuses on complex sounds and aims to develop a system that can identify complex sounds from brain images captured while humans hear complex sounds (Fig. 1). Since identification results generally depend on the brain images used for identification, this report considers the fMRI experimental designs used to capture brain images. In our previous report, brain images were captured using a block design, one of the fMRI experimental models, in order to capture more stable brain activity images. Acquiring brain images with a block design captures stable brain activity, but it cannot capture many brain images because it takes a long time to capture one image; the block design therefore uses fewer images for identification, and the identification rate tends to decrease [3]. On the other hand, acquiring brain images with an event-related design yields more images than the block design because it takes less time to capture one image, but these brain images show less stable brain activity than those acquired with a block design [3]. Therefore, in order to understand how the stability of brain activity affects the identification result, two types of complex sounds are identified based on brain images acquired with two designs that differ only in the time taken to acquire images.

Fig. 1. Complex sound identification flow.


2 Brain Image Acquisition Method

2.1 Auditory Stimulus

In this experiment, brain function images and brain structure images are captured using fMRI (SIEMENS MAGNETOM Prisma 3T) [4]. Subjects heard complex sounds through the active-noise-control thin headphones OptoACTIVE [5]; these headphones reduce the effects of fMRI noise. It takes about 40 s for the OptoACTIVE noise reduction function ANC (active noise control) to become effective, so the ANC calibration time is about 1 min from the start of the experiment. Table 1 shows the fMRI functional imaging parameters. One octave based on C7 was selected for the complex sounds. C7 is an audible sound around 2000 Hz and is little affected by fMRI noise. Each complex sound was built from 3 sounds out of C7, D7 (2349 Hz), E7 (2637 Hz), F7 (2793 Hz), G7 (3135 Hz), A7 (3520 Hz), and B7 (3951 Hz), and the complex sounds were presented to the subjects as auditory stimulation. The complex sounds used in this study were complex sound 1 (a complex sound containing C7) and complex sound 2 (a complex sound without C7). These two sounds were measured using both a block design and an event-related design. Brain function images and brain structure images were captured using fMRI and stored in DICOM format. The subjects were three males in their 20s who listened to the sounds C7 to B7 and confirmed that they could hear them.

Table 1. fMRI capture parameters
| Parameter | Value |
| Echo time (TE) | 48 [ms] |
| Repetition time (TR) | 3000 [ms] |
| Field of view (FOV) | 192 × 192 [mm] |
| Flip angle | 90 [°] |
| Matrix size | 2.0 × 2.0 × 3.0 [mm] |
| Slice thickness | 3.0 [mm] |
| Slice gap | 0.75 [mm] |
| Slices | 36 [slices] |
| Slice acquisition order | Ascending |

2.2 Block Design

When acquiring brain activity with fMRI, it is necessary to consider the BOLD (blood oxygenation level dependent) effect, and the experiment should be limited to about 1 h to reduce the burden on the subject; however, deep learning requires a certain amount of data. In this experiment, an experimental model that can capture as many brain activity images as possible in a short time was used. The stimulation and rest times were set with reference to Shigemoto's settings [2]. The stimulation time was set to 9 s to stabilize brain activity. The BOLD effect decreases greatly about 9 s after stimulation, so the rest blocks were also set to 9 s. Each complex sound was presented 60 times within 60 min (Fig. 2).


Fig. 2. Brain image acquisition using block design.

The 60 stimuli for each complex sound were divided into three sessions; one session stimulates each complex sound 20 times in 20 min. The first 60 s of the experiment are used for noise cancellation; therefore, the first stimulation starts 60 s after the start of the experiment.

2.3 Event-Related Design

Acquiring brain images with an event-related design minimizes the stimulation time and stimulates the brain at discrete time intervals. With the event-related design, the same amount of data is acquired in a shorter time than the block design allows. Therefore, in the event-related experiment, the stimulation is always given for 3 s, and the rest periods are set randomly (Fig. 3). Each subject was stimulated 60 times in about 20 min. The first 60 s of the experiment are used for noise cancellation; therefore, the stimulation starts 60 s after the start of the experiment.

Fig. 3. Brain image acquisition using event-related design.
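To make the two timing schemes concrete, the sketch below generates stimulus onset times for both designs under the assumptions stated in the text (9 s stimulation / 9 s rest for the block design, 3 s stimulation with randomized rests for the event-related design, and a 60 s initial noise-cancellation period). The range of the random rest periods is not given in the paper, so the bounds used here are illustrative only.

```python
import random

def block_design_onsets(n_stimuli=20, stim_dur=9.0, rest_dur=9.0, calib=60.0):
    """Onset times (s) for one block-design session: fixed 9 s stimulation / 9 s rest."""
    return [calib + i * (stim_dur + rest_dur) for i in range(n_stimuli)]

def event_related_onsets(n_stimuli=60, stim_dur=3.0, rest_range=(3.0, 9.0),
                         calib=60.0, seed=0):
    """Onset times (s) for an event-related run: 3 s stimulation, randomized rest.
    The rest_range bounds are illustrative; the paper only states that rests are random."""
    rng = random.Random(seed)
    onsets, t = [], calib
    for _ in range(n_stimuli):
        onsets.append(t)
        t += stim_dur + rng.uniform(*rest_range)
    return onsets

print(block_design_onsets()[:3])    # [60.0, 78.0, 96.0]
print(event_related_onsets()[:3])   # randomized onsets, first at 60.0 s
```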

3 Analysis Method

The analysis was performed with reference to Shigemoto's analytical method [2]. The brain function and structure image data (Sect. 2.1) stored in DICOM format were converted to NIfTI-1 format. Preprocessing, image analysis, and individual analysis were carried out with SPM12 [6] (statistical parametric mapping). In the preprocessing, head movement during imaging was corrected (realignment), the time shift between brain slices was corrected (slice timing correction), and co-registration onto the structural image, spatial normalization, and spatial smoothing were executed. In the individual analysis based on these data, the data used for


individual analysis were randomly selected from the same stimulation, and statistical analysis was conducted. The ROI (region of interest) was set to the primary auditory cortex (Brodmann areas 41 and 42), and T statistics were obtained for the primary auditory cortex only. These T statistics were obtained for each stimulation, and the values were normalized to continuous values between 0.0 and 1.0. Each stimulation's data were output in CSV format and converted into H100 × W48 input data based on the location information. These data are used as the input layer for deep learning.
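As an illustration of this preprocessing step, the following sketch normalizes the ROI T statistics to the 0.0–1.0 range and arranges them into a 100 × 48 input array. The CSV column names and the exact voxel-to-grid mapping are assumptions, since the paper does not specify them.

```python
import numpy as np
import pandas as pd

def csv_to_cnn_input(csv_path, height=100, width=48):
    """Convert one stimulation's ROI T statistics into a 100 x 48 CNN input.
    Assumes columns 'row', 'col', and 't' (hypothetical names) giving the grid
    location derived from voxel coordinates and the T statistic at that location."""
    df = pd.read_csv(csv_path)
    t = df["t"].to_numpy(dtype=float)
    # Min-max normalization to the continuous range [0.0, 1.0]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-12)
    image = np.zeros((height, width), dtype=np.float32)
    image[df["row"].to_numpy(), df["col"].to_numpy()] = t_norm
    return image[..., np.newaxis]  # add a channel axis for the CNN input layer
```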

4 Complex Sound Identification In this experiment, two complex sounds are identified based on brain images to understand how the stability of brain activity affects the identification result.

Table 2. CNN parameters
| Condition | Set value |
| Number of CNN layers | 4 |
| Convolution: stride | 1 |
| Pooling: filter size | 2 × 2 |
| Pooling: stride | 2 |
| Learning rate | 0.0001 |
| Drop out | 0.7 |
| Error rate | 0.1 |
| Termination condition | Error rate |
| Hyperparameter: convolution filter size | 2–32 (exponent of 2) |
| Hyperparameter: convolution channels | 2–32 (exponent of 2) |

The brain activity data analyzed in Sect. 3 are randomly divided into test data (12 data sets) and training data (48 data sets) for each complex sound. The training data are statistical data: statistics were taken over four analyzed data sets, randomly selected from the same complex sound, for a total of 48 data sets. The test data are statistical data as well: statistics were taken from one analyzed data set, for a total of 12 data sets. These data are used as input data for deep learning. In this study, a CNN, one type of deep learning, is used. The CNN trains through convolutional, pooling, and fully connected layers. The convolutional and pooling layers create maps that characterize the complex sounds, called feature maps, and the fully connected layers obtain the output from the feature maps. The layers between the fully connected layer and the output layer were designed as a NN (neural network). The purpose of the experiment is to identify complex sounds, so the output is converted to probability values using the Softmax function as the activation


function. Table 2 shows the training conditions for the CNN. In order to prevent overtraining, training ends when the error rate falls below 0.1, and the process proceeds to evaluation. Two complex sounds are identified in this evaluation, so the CNN has two outputs. From the identification results, the stability of brain activity in the brain images captured with the two experimental models is evaluated.
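A minimal sketch of a CNN matching the training conditions in Table 2 is shown below, assuming a Keras-style implementation with a 100 × 48 × 1 input, 2 × 2 max pooling with stride 2, a dropout rate of 0.7, a learning rate of 0.0001, and a two-class softmax output. The convolution filter sizes and channel counts are placeholders, since the paper searches them as hyperparameters (powers of 2 from 2 to 32), the optimizer is an assumption, and the layer arrangement is only one plausible reading of "Number of CNN layers: 4".

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(100, 48, 1), n_classes=2,
              conv_filters=16, kernel_size=4):
    """CNN sketch following Table 2; conv_filters and kernel_size are
    hyperparameters searched over powers of 2 in the paper."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(conv_filters, kernel_size, strides=1, padding="same",
                      activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Conv2D(conv_filters, kernel_size, strides=1, padding="same",
                      activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.7),                       # drop out 0.7 (Table 2)
        layers.Dense(n_classes, activation="softmax"),  # two outputs, Softmax
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```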

5 Results

Table 3 shows the results of the identification experiments for the block design that achieved the highest identification rate for each subject, and Table 4 shows the identification results for the event-related design.

Table 3. Results of experiment in block design
| Subject | Complex sound 1 (/12) | Complex sound 2 (/12) | Total (/24) | Total (%) |
| A | 7 | 10 | 17 | 70.83 |
| B | 6 | 7 | 13 | 54.17 |
| C | 5 | 10 | 15 | 62.50 |

Table 4. Results of experiment in event-related design
| Subject | Complex sound 1 (/12) | Complex sound 2 (/12) | Total (/24) | Total (%) |
| A | 8 | 9 | 17 | 70.83 |
| B | 8 | 7 | 15 | 62.50 |
| C | 8 | 7 | 15 | 62.50 |

In both models, the maximum identification rate was 70.83%; the average identification rate for the block design was 62.50%, and the average identification rate for the event-related design was 65.28%. Subject B showed a slight difference in the identification rate between the experimental models, but subjects A and C had the same identification rates, so there was hardly any difference in the identification rate between the experimental models.

6 Discussion

From the identification results, it can be assumed that there was hardly any difference in the stability of brain activity in the brain images between the two experimental models. The experiment time should be restricted to about 1 h, so the block design cannot obtain


more data. However, the event-related design can acquire more data, and it is therefore a better design than the block design for this experiment. The identification results showed that the average identification rate was 65.28%, which is not yet optimal. As more data are acquired with the event-related design, the identification rate is expected to improve.

7 Conclusion

The purpose of this study was to understand how the stability of brain activity in the brain images used for identification affects the identification result. Brain images were captured using two types of experimental models, and the effect of the stability of brain activity in the brain images on the identification of complex sounds was examined. As a result, there was hardly any difference in the stability of brain activity in the brain images between the two experimental models. The event-related design is a better design than the block design in this experiment because it can acquire more data. If the accuracy of the identification system improves in the future, it will lead to the development of brain information decoding technology in the auditory cortex, to improved understanding of brain activity mechanisms, and to better detection of hearing impairment.

References
1. Horikawa, T., Tamaki, M., Miyawaki, Y., Kamitani, Y.: Neural decoding of visual imagery during sleep. Science 340(6132), 639–642 (2013)
2. Narumi, S., Hironobu, S., Kyoko, S., Yoshio, I.: Study of deep learning for sound scale decoding technology from human brain auditory cortex. In: 2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech), pp. 212–213 (2019)
3. Kikuchi, Y., Senoo, A., Abo, M., Watanabe, S., Yonemoto, K.: SPM8 manual for brain image analysis (2012). (in Japanese)
4. fMRI. https://www.healthcare.siemens.co.jp. Accessed 17 Jan 2019
5. OptoACTIVE. http://www.optoacoustics.com/medical/optoactive/features. Accessed 17 Jan 2019
6. SPM. https://www.fil.ion.ucl.ac.uk/spm/software/spm12/. Accessed 17 Jan 2019

Tracking People Using Ankle-Level 2D LiDAR for Gait Analysis

Mahmudul Hasan1,2, Junichi Hanawa1, Riku Goto1, Hisato Fukuda1, Yoshinori Kuno1, and Yoshinori Kobayashi1

1 Saitama University, Saitama, Japan
{hasan, hanawa0801, r.goto, fukuda, kuno, yosinori}@hci.ics.saitama-u.ac.jp
2 Comilla University, Comilla, Bangladesh

Abstract. People tracking is one of the fundamental goals of human behavior recognition. Advances in cameras, tracking algorithms, and efficient computation have made it practical, but when privacy and secrecy are at stake, cameras carry a heavy burden. The fundamental goal of this research is to replace the video camera with a device (a 2D LiDAR) that largely preserves the privacy of the user, addresses the issue of a narrow field of view, and keeps the system functional at the same time. We consider the individual movements of every moving object on the plane and decide whether an object is a person based on ankle orientation and movement. Our approach collects the frames of every moving object and finally creates a video based on those frames.

Keywords: People tracking · 2D LiDAR · Kalman filter · Ankle-level tracking

1 Introduction

Person tracking (PT) with machines is a salient field in Human–Computer Interaction (HCI), and research on person tracking has made great progress in recent years. It involves mapping the surface, locating a person's position and subsequent movements, distinguishing these from other properties of the scene, and finally projecting them onto the desired surface. Time series of individual position data enable us to analyze trajectories for many purposes (e.g., marketing). PT with 2D and 3D cameras plays a significant role in practical applications, and real-time PT from live video makes it more robust and usable in different scenarios. Several statistical models and their efficacy have made PT widely accepted. Here the video camera acts as the data acquisition device, and some systems can also enhance the captured data. Recently, along with the development of deep learning-based image processing, the performance of people detection and tracking using cameras has improved dramatically. However, when we consider using cameras everywhere in daily life, the privacy issue cannot be ignored. In addition, some phenomena, such as smoke and fog, make it difficult to use cameras in special situations. Furthermore, although low-cost cameras (not only RGB cameras but also RGB-D cameras) are widely used, the computational cost of image processing is far from negligible, and it can become extreme when deep learning techniques are used, especially with many cameras for wide-area surveillance.


The focus of this research is to use a sensor that does not compromise privacy but enhances the efficiency of tracking. To cope with these problems, we propose a new people tracking technique using 2D LiDAR; cost and real-time computational feasibility were the key influences behind this choice. To minimize occlusions between pedestrians, we place the 2D LiDAR at ankle level and scan the target area horizontally. The main issue in tracking people with this sensor setup is how to discriminate individuals from isolated observations of the multiple ankles of multiple persons. Here, we propose a method that uses the time series of range data to classify individuals. Individual ankle trajectories are considered for movement detection. Distances between ankles are calculated with the well-known Euclidean nearest-neighbor technique, which helps to determine each cluster of a pair of ankles as a person. We clearly identify walking and running paths even when the movement is very fast. The approach collects the frames of every moving object and finally creates a video based on those frames; these videos can be used further for surveillance or other purposes. Our method provides very accurate and robust tracking when the target is walking or running, and the experimental results show the effectiveness of the proposed method.

2 Related Works

LiDAR (Light Detection and Ranging) is a sensing device that can measure variable distances on a plane. Many researchers have contributed to this tracking field [1] using cameras or LiDARs. Trajectory-based human behavior analysis using LiDAR [2] was our initial appreciation of a better use of this sensor, and museum guide robots [3, 4] show how LiDAR works in practical scenarios. Tracking a specific person is a vital task in daily applications. Misu et al. [5] illustrate an approach for identifying and tracking a specific individual outdoors with a mobile robot; their robot used 3D LiDARs for person recognition and identification, and a directivity-variable antenna (the ESPAR antenna) for finding a certain person when occluded and/or out of view. Yan et al. [6] presented a framework that permits a robot to learn a sophisticated 3D LiDAR-based person classifier from other sensors over time, gaining the benefits of a multi-sensor tracking system. Koide et al. [7] described a human identification system for a mobile service robot using a smartphone and laser range finders (LRFs). All these approaches used 3D LiDAR, which is not cost effective; in our method, the system can track people with a 2D LiDAR both outdoors and indoors. Álvarez-Aparicio et al. [8] reported experimental results using a single LiDAR sensor to provide continuous recognition of an individual over time and space. Their system was based on the People Tracker package, a.k.a. PeTra, which uses a convolutional neural network (CNN) to detect person legs in cluttered scenarios. That system tracks only one individual over time, whereas our system can track multiple people on the surface at the same time. Dimitrievski et al. [9] introduced a 2D–3D pedestrian tracker created for applications in autonomous vehicles, using multiple sensors. Sualeh et al. [10] proposed a robust Multiple Object Detection and Tracking (MODT) procedure using several 3D LiDARs for


perception. The combined LiDAR data are processed with an effective MODT structure that takes into account the shortcomings of the vehicle-embedded processing environment. Gálai et al. [11, 12] presented a performance analysis of numerous descriptors suitable for person gait analysis in Rotating Multi-Beam (RMB) LiDAR measurement systems. All these methods are based on multiple sensors and need heavy computation to remain robust. Qing Li et al. [13] presented a deep convolutional network pipeline, LO-Net, for real-time LiDAR odometry estimation. Jiaxiong Qiu et al. [14] proposed a deep learning architecture that delivers accurate dense depth for an outdoor scene from a single color image and sparse depth, applications specially prepared for autonomous driving based on LiDAR and depth-sensor cameras. Research on person tracking and positioning is wide ranging and has been conducted over a long period; many studies in this area have used cameras, 2D/3D LiDARs, ultrasonic sensors, etc. However, person tracking and gait analysis based only on a 2D LiDAR is a challenging and new concept. We concentrate on this topic and show how it can be done. Our focus is to develop a low-cost 2D LiDAR-based tracking system that can be used in any tracking application without interfering with human privacy.

3 Proposed Method

We introduce a tracking method based on a LiDAR sensor that works in different environments. We use a 2D LiDAR sensor for its lower price and computational effectiveness. For our experiment we used a Hokuyo UTM-30LX LiDAR sensor on a flat ground surface. We placed the LiDAR at ankle level and collected data over 270 degrees. When people walk within the range of the LiDAR sensor, it records the actual position of each moving object and its distance from the sensor. We plot the ankle positions of persons on the plane and track their movements. As shown in Fig. 1, the LiDAR is placed at the ankle level of a person and provides visual information to the corresponding computer. In the first frame, white lines indicate the boundaries of the surface, i.e., walls. We then remove the boundaries from the frame by background subtraction and show only the ankle positions on the surface; the green marked lines indicate the ankle position of a person. If more than one person appears in front of the sensor, it clearly identifies the persons. Distances between the pixels of one ankle, and between two ankles, are measured and used to decide the number of people moving in front of the sensor. An experimentally determined threshold is used to decide whether a pair of ankles belongs to one person in different circumstances. We use the Euclidean distance for tracking persons:

dist((xa, ya), (xb, yb)) = √((xa − xb)² + (ya − yb)²)    (1)

where dist is a function calculating the distance between pixels (xa, ya) and (xb, yb) of one ankle's position, and then between all ankles that appear on the plane.


Fig. 1. (a) LiDAR sensor placed at ankle level acquiring data, (b) persons' ankle movements and standing positions, and (c) tracking people with a handler marker sign '|' showing their moving directions.

Depending on the positional relationship with the sensor, one ankle can sometimes occlude the other, but this situation does not influence the decision. Because cluster-based techniques are used, the method does not assume that both feet are visible; the system only calculates the distances between the ankles that appear in the LiDAR field of view. If no other ankle is found, the system decides that only one person is walking on the floor. If the distance is larger than the threshold, the system tracks the ankles as separate persons even when only one ankle position is found for each. Thus, the system overcomes the problem of obscured and disappearing feet on the floor. Figure 2 shows that our system clearly identifies the ankles even with disappearance. Two frames earlier, it calculated the ankle distance, found it to be above the maximum threshold, and counted two persons. One frame earlier, the distance became smaller, within the range, so only one person was counted. Finally, in the current frame, one ankle is occluded by the other, and without misclassification the system still reports one person in the frame.

Fig. 2. (a) Ankles in different positions moving into an overlapping position, (b) frame-by-frame detection
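The following sketch illustrates the distance-threshold grouping described above: ankle detections in a frame are paired when their Euclidean distance (Eq. 1) is below an experimentally chosen threshold, and each pair (or unpaired ankle) is counted as one person. The threshold value and the data layout are assumptions for illustration only.

```python
import math

def euclidean(a, b):
    """Equation (1): distance between two ankle positions (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def count_persons(ankles, max_ankle_gap=0.45):
    """Group ankle detections into persons by nearest-neighbor distance.
    `ankles` is a list of (x, y) points from one LiDAR frame; `max_ankle_gap`
    (meters) is an illustrative threshold, chosen experimentally in practice."""
    unused = list(range(len(ankles)))
    persons = []
    while unused:
        i = unused.pop(0)
        # find the closest remaining ankle to ankle i
        best_j, best_d = None, float("inf")
        for j in unused:
            d = euclidean(ankles[i], ankles[j])
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= max_ankle_gap:
            unused.remove(best_j)
            persons.append((ankles[i], ankles[best_j]))  # two ankles, one person
        else:
            persons.append((ankles[i],))  # a single visible ankle still counts as a person
    return len(persons), persons

n, groups = count_persons([(0.10, 1.00), (0.35, 1.05), (2.00, 3.00)])
print(n)  # 2 persons: one with two ankles, one with a single visible ankle
```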


4 Experimental Results

There are no well-known data sets for 2D LiDAR-based person tracking and gait analysis. For the experiments we prepared our own data set and evaluated our method on this benchmark, which has 35 samples from 27 female and 8 male participants. The benchmark contains two scenarios: normal and highly crowded. We made sure to evaluate the performance of our proposals on the validation data set. We used a Kalman filter to predict positions when tracking individuals; a minimal sketch of such a tracker is given after Fig. 3. Our system can track a person even if only one ankle appears in the frame.

Fig. 3. Kalman filter-based movements in different circumstances (Individual Walking, Individual Running, Only Ankle Movement, Combined Walking, Combined Running); upper row: ankle positions on the frame; lower row: direction of the movements
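As an illustration of the prediction step mentioned above, here is a minimal constant-velocity Kalman filter for a tracked ankle (or person) position. The state layout, time step, and noise magnitudes are assumptions, not values reported in the paper.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter: state = [x, y, vx, vy]."""
    def __init__(self, x0, y0, dt=0.1, process_var=1e-2, meas_var=1e-3):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # we only measure (x, y)
        self.Q = process_var * np.eye(4)
        self.R = meas_var * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]  # predicted position, used to associate the next detection

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```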

In Fig. 3. we clearly see that in different criteria our system can predict the movements of different persons with utmost accuracy. Here in the first upper frame individual walking is shown, and corresponding lower frame shows that person is going far from LiDAR. In 2nd frame one person is running in different directions then 3rd, 4th and 5th frames show other scenarios and corresponding lower frames show their positions tracked by LiDAR using Kalman filter technique respectively. Table 1. Experimental data and its performance Individual Walking

Individual Running

Only Ankle movement

Combined Walking

Persons/Frames

4

48 Frms

4

51 Frms

4

43 Frms

2

42 Frms

3

47 Frms

4

43 Frms

2

47 Frms

3

47 Frms

4

42 Frms

Gesture Correctly Identified

4

48 Frms

4

50 Frms

4

40 Frms

2

41 Frms

3

44 Frms

4

39 Frms

2

36 Frms

3

33 Frms

3

29 Frms

Percentage

100%

98.04%

93.02%

97.62%

93.62%

Combined Running

90.70%

76.60%

70.21%

69.04%


Table 1 shows the different persons and their recorded videos used for our experimental evaluation with different gestures. We categorized the experiments into five different scenarios. Individual walking, individual running, only ankle movement, and combined walking can be tracked with high confidence; we have a few reservations about the combined running scenario. We considered four people and their captured LiDAR videos for the performance evaluation. For validation we considered videos of about 4 s of every person in the different gestures, taking 42–51 frames from each video. From the table we see that the system performs relatively flat across the different running situations, but compared with other camera-based systems the performance is impressive. A gait analysis and person height estimation based on ankle movements were also performed on the dataset, and we observed interesting patterns in the walking and running behavior.

5 Conclusion

In the cyber world a person is tracked all the time, everywhere. But when privacy is the question, people want some space from the eyes around them, whether in the real or the virtual world. On the other hand, the need for surveillance cannot be ignored. LiDAR sits at the kernel of these issues: without disclosing a person's identity, our approach tracks the person and identifies his or her movements. Our system is now ready for commercial use. We will extend our study to estimate properties of human activities using LiDAR, and we are working on gait analysis using ankle-level 2D LiDAR. In the future we will integrate the tracking and the analysis into one system.

References
1. Chen, L., Ai, H., Zhuang, Z., Shang, C.: Multiple people tracking with deeply learned candidate selection and person re-identification. In: Proceedings of Multimedia and Expo (ICME) (2018)
2. Rashed, M.G., Suzuki, R., Yonezawa, T., Lam, A., Kobayashi, Y., Kuno, Y.: Robustly tracking people with LIDARs in a crowded museum for behavioral analysis. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E100-A, 2458 (2017)
3. Oyama, T., Yoshida, E., Kobayashi, Y., Kuno, Y.: Tracking visitors with sensor poles for robot's museum guide tour. In: Proceedings of Human System Interactions (HSI), Sopot, pp. 645–650 (2013)
4. Oyama, T., Yoshida, E., Kobayashi, Y., Kuno, Y.: Tracking a robot and visitors in a museum using sensor poles. In: Proceedings of Frontiers of Computer Vision, pp. 36–41 (2013)
5. Misu, K., Miura, J.: Specific person detection and tracking by a mobile robot using 3D LIDAR and ESPAR antenna. In: Proceedings of Intelligent Autonomous Systems (IAS) (2014)
6. Yan, Z., Sun, L., Duckett, T., Bellotto, N.: Multisensor online transfer learning for 3D LiDAR-based human detection with a mobile robot. In: Proceedings of Intelligent Robots and Systems (IROS), pp. 7635–7640 (2018)
7. Koide, K., Miura, J.: Person identification based on the matching of foot strike timings obtained by LRFs and a smartphone. In: Proceedings of Intelligent Robots and Systems (IROS), pp. 4187–4192 (2016)
8. Álvarez-Aparicio, C., Guerrero-Higueras, Á.M., Rodríguez-Lera, F.J., Clavero, J.G., Rico, F.M., Matellán, V.: People detection and tracking using LIDAR sensors. Robotics 8, 75 (2019)
9. Dimitrievski, M., Veelaert, P., Philips, W.: Behavioral pedestrian tracking using a camera and LiDAR sensors on a moving vehicle. Sensors 19, 391 (2019)
10. Sualeh, M., Kim, G.-W.: Dynamic multi-LiDAR based multiple object detection and tracking. Sensors 19(6), 1474 (2019)
11. Gálai, B., Benedek, C.: Feature selection for Lidar-based gait recognition. In: Proceedings of Computational Intelligence for Multimedia Understanding (IWCIM), Prague, pp. 1–5 (2015)
12. Benedek, C., Nagy, B., Gálai, B., Jankó, Z.: Lidar-based gait analysis in people tracking and 4D visualization. In: Proceedings of European Signal Processing Conference (EUSIPCO), Nice, pp. 1138–1142 (2015)
13. Li, Q., Chen, S., Wang, C., Li, X., Wen, C., Cheng, M., Li, J.: LO-Net: deep real-time lidar odometry. In: Proceedings of Computer Vision and Pattern Recognition (CVPR), pp. 8473–8482 (2019)
14. Qiu, J., Cui, Z., Zhang, Y., Zhang, X., Liu, S., Zeng, B., Pollefeys, M.: DeepLiDAR: deep surface normal guided depth prediction for outdoor scene from sparse LiDAR data and single color image. In: Proceedings of Computer Vision and Pattern Recognition (CVPR), pp. 3313–3322 (2019)

New Trend of Standard: Machine Executable Standard

Haitao Wang, Gang Wu, Chao Zhao, Fan Zhang, Jing Zhao, Changqing Zhou, Wenxing Ding, and Xinyu Cao

China National Institute of Standardization, Beijing 100191, China
{wanght, wugang, zhaochao, zhangfan, zhaoj, zhouchq, dingwx, caoxy}@cnis.ac.cn

Abstract. Standards play an important and irreplaceable role everywhere in human life and social activities, and most countries pay much attention to standards development and application. However, standards are still paper media written mainly for human reading. This greatly restricts the efficiency of their usage, dissemination, and data interchange, especially in a digital era in which computers are widely used. In this paper, we discuss a new form of standard that is machine readable, understandable, executable, testable, and applicable, as well as usable by humans.

Keywords: Machine Executable Standards · Standardization · Semantic annotation

1 Introduction

Standards play an important and irreplaceable role everywhere in human life and social activities, and most countries pay much attention to standards development and application. However, standards are still paper media written mainly for human reading. This greatly restricts the efficiency of their usage, dissemination, sharing, and interchange, especially in a digital era with global communication and rapidly developing information technology [1]. Consequently, a new form of standard named the Machine Executable Standard (MES) was officially proposed during the China–Germany standardization meeting in 2019. Revolutionary changes will take place in the development, application, implementation, management mechanisms, and technology related to standards [2]. This kind of standard has drawn strong interest from the governments of China and Germany, as well as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and other developed countries. The concept of a Machine Executable Standard has not yet been defined formally and scientifically, and different researchers may have different ideas about it. We believe that a Machine Executable Standard should have the following characteristics:

1. The content is well structured and annotated with semantic tags that are readable and understandable by both machines and humans, with as little ambiguity as possible.


2. The content of different standards can be linked, fused, and referenced easily and dynamically.
3. The functions and services provided by standards can execute like programmable modules or plug-ins; in particular, the functions and services are deterministic, predictable, and testable.
4. Development and execution can rely on third-party platforms with open interface protocols.
5. Standards can easily be shared, adopted, and interchanged identically across languages and cultures.

In this paper, we briefly present our thoughts on, and a discussion of, the critical issues.

2 Significance and Meaning

Nowadays, a regular standard consists of various elements such as scope, normative references, terms, definitions, requirements, recommendations, and so on. When people use standards, they have to read all relevant documents, understand every necessary technical detail, and try to apply them to real problems. The whole process is very demanding on users' knowledge, abilities, and experience, and the quality of the result is usually not stable. Besides, the efficiency of dissemination and sharing of standards is confined by differences in languages and cultures. To overcome these disadvantages, the Machine Executable Standard is proposed: a digital transformation of the traditional standard, developed with new structures and formats. Compared with the traditional standard format, the Machine Executable Standard will significantly change the service and application modes for users, developers, and managers. For users, it will no longer be necessary to understand the standard documents before using them properly. Standards will serve like black boxes with well-defined inputs, functions, and outputs. It will not be necessary for users to understand how the standard works, as it is now, which will greatly increase efficiency and decrease cost. For developers and managers of standards, it will likewise bring significant change to their work. Example scene 1: when people want to check whether some data satisfy some restriction, they just input the data into a machine or system and directly get a "yes" or "no" answer. Example scene 2: people can construct a procedure from dozens of services from different standards according to the target problem; the whole task, to some extent, is like programming.


3 Services of MES

Machine Executable Standards will shape new service modes for standards in the future.

Mode 1: Searching and querying. The standard server acts like a search engine. Users input keywords or query conditions through a human–machine interface and get related information; for example, users can search terms, scopes, texts, tables, and so on.

Mode 2: Data-driven procedure modules. A standard provides several functions, each of which is implemented by one or more service modules. Every module corresponds to one or more technical elements in the standard, such as diagrams, formulas, and workflows. Standards, or services that can be encapsulated independently, form a kind of procedure module: when data or signals are input, they run and output the result. These modules can be called and reused not only in integrated chips embedded in machines, but also by other modules, as plug-ins or submodules.

Mode 3: User programming. Users can compose the services of different standards with a script language to solve a specific problem; an illustrative sketch follows below. The script language should be open, parametric, predefined, have an unambiguous grammar, and be supported by third-party platforms. The resulting programs can run many times or temporarily, and are portable across different platforms.

Mode 4: Intelligent self-control. This mode is the most advanced level, with high intelligence on the machine side. In an intelligent system, services can be found and reorganized automatically to match the user's requirements.
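To make Mode 3 concrete, here is a purely illustrative sketch of what composing standard services as a script might look like. The registry, service names, and parameters are all hypothetical, since no concrete scripting language or platform is defined yet.

```python
# Hypothetical user script composing services from two Machine Executable Standards.
# Names such as "steel-grade-check" and "load-capacity" are invented for illustration.

class StandardRegistry:
    """Toy stand-in for a third-party MES platform that resolves services by name."""
    def __init__(self):
        self._services = {}
    def register(self, name, fn):
        self._services[name] = fn
    def call(self, name, **inputs):
        return self._services[name](**inputs)

registry = StandardRegistry()
registry.register("steel-grade-check", lambda tensile_mpa: tensile_mpa >= 355)
registry.register("load-capacity", lambda area_mm2, tensile_mpa: area_mm2 * tensile_mpa / 1000)

# A Mode-3-style script chains services like programmable modules (Mode 2):
if registry.call("steel-grade-check", tensile_mpa=400):
    capacity_kn = registry.call("load-capacity", area_mm2=1200, tensile_mpa=400)
    print(f"Conforms; capacity = {capacity_kn} kN")
```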

4 Formal Definitions

First, from the viewpoint of computing, we give some related basic definitions.

Definition 1: A Machine Executable Standard m is defined as follows:

m = ⟨S, D, T, E, C, L⟩    (1)

where:
1. S is the set of services s which m can provide;
2. Ds = {⟨si, sj⟩}, si, sj ∈ S, is a set of dependency relations ds, meaning that service si can serve only if succeeded by sj;
3. Dp = {⟨si, sj⟩}, si, sj ∈ S, is a set of dependency relations dp, meaning that service si can serve only if preceded by sj;
4. T is the set of protocols t which a third-party software platform must follow to ensure that every s in S works;
5. E is the set of programmable executable implementations e of m;
6. C is the content of m, which can be presented as readable, understandable text for both humans and machines;
7. L is the set of languages l used to represent m, which makes m readable for a specific audience, not only humans but also machines.

Definition 2: A service s is defined as follows:

s = ⟨I, O, F, T, E, C, L⟩    (2)

where:
1. I is a set of inputs i;
2. O is a set of outputs o;
3. F = {f | o = f(i)}, i ∈ I, o ∈ O, is a set of mappings f from input i to output o;
4. T is the set of protocols t which a third-party software platform must follow to ensure s;
5. E is the set of programmable executable implementations e of s;
6. C is the content of s, which can be presented as readable, understandable text for both humans and machines;
7. L is the set of representations of s in languages l, which makes s readable for a specific audience, not only humans but also machines.

Definition 3: The normative reference relationship between Machine Executable Standards is defined as follows:

D = {dij | ⟨mi, mj⟩}    (3)

where mi and mj are Machine Executable Standards, and mi is normatively referenced by mj (Fig. 1).

Definition 4: The application of a Machine Executable Standard is defined as follows:

c = ⟨M, v, u⟩    (4)

where c is an application of Machine Executable Standards M to a specific problem v under conditions u.


Fig. 1. Application of standards
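As a rough illustration of Definitions 1 and 2, the sketch below encodes a standard and its services as data structures. The field names and the callable-based encoding of the mappings F are our own assumptions, not part of the formal model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Service:
    """Definition 2: s = <I, O, F, T, E, C, L> (simplified)."""
    inputs: List[str]                      # I: names of expected inputs
    outputs: List[str]                     # O: names of produced outputs
    mapping: Callable[..., dict]           # F: one executable mapping from inputs to outputs
    protocols: List[str] = field(default_factory=list)               # T
    content: str = ""                      # C: human/machine readable description
    languages: List[str] = field(default_factory=lambda: ["en"])     # L

@dataclass
class MachineExecutableStandard:
    """Definition 1: m = <S, D, T, E, C, L> (simplified)."""
    services: Dict[str, Service]                                              # S
    depends_succeeded: List[Tuple[str, str]] = field(default_factory=list)    # Ds
    depends_preceded: List[Tuple[str, str]] = field(default_factory=list)     # Dp
    protocols: List[str] = field(default_factory=list)                        # T
    content: str = ""                                                         # C
    languages: List[str] = field(default_factory=lambda: ["en"])              # L

    def call(self, name: str, **kwargs) -> dict:
        """Execute one service of the standard by name."""
        return self.services[name].mapping(**kwargs)
```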

5 Key Problems

As mentioned previously, the Machine Executable Standard is still a cutting-edge technology and a new interdisciplinary field of standardization. Many disciplines, such as computer technology, automation, and management, are involved, and the development and application of Machine Executable Standards is also constrained by the specific domain. In this paper, we discuss only some of the related critical technical problems, in general terms and briefly.

First are basic theories and semantic models. Obviously, a series of precise computable theories, logical models, and semantic models related to the Machine Executable Standard is indispensable. Based on them, a group of extended markup languages and programming script languages will be designed to annotate standards at different levels and granularities, such as the textual level, standard element level, functional level, service level, management level, user requirement level, and so on.

Second are development and testing. There are two main ways to create a Machine Executable Standard. One is to process traditional standards using information technologies such as natural language processing, artificial intelligence, Web services, ontologies, and so on. The other is to create the standard from the beginning directly on a platform with a friendly human–machine interface. Importantly, every standard must


be tested and verified before release to ensure correctness and stability; the workflows of development will change greatly.

Third are management and updating. A Machine Executable Standard is stored as a group of modules and linking relations instead of a document. Every module can be developed without affecting the others. Once a module is updated, the standards that contain it and the other modules that call it need to be updated correspondingly, and every update needs to be rechecked and retested. Meanwhile, the copyright and patents involved in standards and modules become more complex.

Fourth are application and execution. Machine Executable Standards can provide plenty of service modes for different requirements by means of service platforms and running environments. Most standards need to be used together with other standards, so it is also necessary to fuse and link related modules and standards without any ambiguity, according to users' requirements. Data can be shared and interchanged between platforms, standards, and modules; no doubt, data protection remains very important.

6 Conclusion

Standards play a more and more important and irreplaceable role everywhere in human life and social activities, and most countries pay much attention to standards development and application. The Machine Executable Standard is a digital transformation of the traditional standard, developed with new structures and formats. Compared with the traditional standard format, the Machine Executable Standard will significantly change the service and application modes for users, developers, and managers.

Acknowledgements. This research was supported by the National Key Technology R&D Program (2016YFF0204205, 2018YFF0213901, 2017YFF0209004), and grants from the China National Institute of Standardization (522019C-7044, 522019C-7045, 522018Y-5941, 522018Y-5948, 522019Y-6781, 522019Y-6771).

References
1. Zhao, J.: Knowledge Graph. Higher Education Press, Beijing (2018)
2. ISO 24617-6:2016 Language resource management — Semantic annotation framework — Part 6: Principles of semantic annotation (SemAF Principles)

Machine Learning Analysis of EEG Measurements of Stock Trading Performance

Edgar P. Torres, Edgar A. Torres, Myriam Hernández-Álvarez, and Sang Guun Yoo

Departamento de Informática y Ciencias de la Computación, Escuela Politécnica Nacional, Ladrón de Guevara E11-253, Quito, Ecuador
{edgar.torres, myriam.hernandez, sang.yoo}@epn.edu.ec, [email protected]

Abstract. In this paper, we analyze the participants' state of mind through EEG readings such as alpha, theta, gamma, and beta waves. To obtain the EEG readings, we use OpenBCI with its Cyton board and headset. Due to its high temporal resolution, EEG is an important noninvasive method for studying the transient dynamics of the human brain's neuronal circuitry, and it provides useful observational data on variability across different mental states. Thus, since stress affects neural activity, EEG signals are an ideal tool to measure it. Our objective is to understand the relationship between mental states and trading results. We believe that understanding these relationships can potentially translate into improved trading performance and profitability for traders.

Keywords: Machine learning · EEG · Emotion recognition · Brain-computer interface

1 Introduction

Our research suggests that emotions are essential for stock market trading; traders cannot ignore them and simply focus on following procedures when real money is at risk. Therefore, emotions always influence trading decisions. If the trader has a constructive mindset for trading, performance will likely increase. Likewise, if emotions like fear, anger, stress, or frustration are too intense, then trading performance will probably worsen. Stock market trading requires continuous decisions that involve weighing odds against risk and reward payouts. Additionally, trading also involves variables such as time, beliefs, internal expectations, uncertainty, game theory, cognitive limitations, self-control, and emotions, among others. Moreover, excitement and fear manifest themselves through changes in confidence and risk tolerance (Mind-Body Bubble book chapter). Emotions allow humans to respond to stimuli quickly and without rational thought. Moreover, emotions can generate biases that influence even rational decision making, often unbeknownst to humans [1]. These emotional processes had evolutionary purposes, as it is likely that the brain's parts that create emotions evolved first and have a direct connection with the body, unlike the prefrontal cortex, which has several neuronal


layers of separation. Thus, emotions are often harder to curtail through rational thought, especially under stressful situations such as stock market trading. Therefore, traders often report making decisions because they "feel" right at first, but are regrettable shortly after. An excellent example of such emotional decisions is "panic selling" or "fear of missing out," which cause traders to sell at market bottoms or buy market tops, respectively. These types of trading decisions occur due to emotions, expressed as the avoidance of the psychological pain of losing money; however, such decisions are often inadequate after logical scrutiny. For example, buying oversold and undervalued assets probably has a positive long-term expectancy, but doing so is typically emotionally difficult for traders due to the previously explained dynamics. Therefore, we believe that it is vital for traders to recognize the effect of their emotions on their trading. This way, traders will use feelings to their advantage, without allowing them to take over the systematic processes in their trading strategy. Thus, our approach aims to understand how traders can get "the best of both worlds" in trading through emotions and rational thought. We believe that enhancing trading performance supports a more efficient market, which is generally positive for a capitalist western society. We see a similar situation in other peak performance activities, such as chess. For instance, chess grandmasters are reportedly capable of seeing a chess position and instantly "feeling" who is winning or losing, without much need for calculating further moves. This example shows the value of intuition (emotion or "feeling") gained through experience, combined with a disciplined, rational process (i.e., calculating chess moves). Automatic emotion recognition uses several methods to study emotions, some invasive and others user-friendly. Emotion recognition through EEG allows researchers to study mind states relatively cheaply. EEG also has a high temporal resolution and is noninvasive, which makes it ideal for the study of human brain neuronal circuitry. The study "Evaluation of Human Stress using EEG Power Spectrum" [2] evaluates the correlation between the EEG power spectrum of alpha and beta waves and the Cohen Perceived Stress Scale (PSS) obtained through stress measurement questionnaires. We used EEG to analyze participants' mind states through alpha, theta, gamma, and beta wave readings, using appropriate features and a machine learning algorithm; thus, it is possible to build an emotion recognition system for trading-related processes. In this paper, we have used power band information (frequency domain) and compared the performance of SVM algorithms with an Artificial Neural Network as machine learning systems. To obtain the EEG readings, we use OpenBCI with its Cyton board and headset. These products provide open-source brain-computer interfaces that facilitate the acquisition and analysis of EEG data. EEG provides useful observational data on variability in different mental states; as stress affects neural activity, EEG signals can measure it effectively. The EEG consists of several frequency bands, i.e., delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (13–30 Hz), and gamma (30–100 Hz), which are affected by different conscious states.
This research uses machine-learning algorithms to classify the EEG measurements of participants according to arousal – valence quadrants, dividing the subjects into different categories and estimating their state of mind.


Typically, it is difficult to provoke emotions reliably through audiovisual stimuli. In contrast, stock market trading consistently induces feelings in participants due to its “money” component. Hence, “simulated” trading is not as effective as “real” trading because money has a strong emotional connotation for humans. Money represents variables such as resources, lifestyle, survival odds, value, status, health, and even the likelihood of leaving offspring, to name a few. Thus, we believe that participants have strong emotions tied to money, which makes “real” trading a robust emotional stimulus. In particular, “real” stock market trading triggers four key emotions: 1) fear, 2) greed, 3) hope, and 4) regret. We believe that these four emotions have a direct effect on the participant’s trading, and these effects are likely measurable through EEG readings. The research results can improve trading performance and profits because they are likely linked to stress management and peak performance techniques.

2 Related Work

In [2], the correlation between the EEG power spectrum of alpha and beta waves and the Cohen Perceived Stress Scale (PSS) questionnaires is evaluated. The results suggest that PSS scores and EEG power spectrum ratios can be used to measure stress in humans. The authors in [3] find a correlation between stress and a high beta EEG power spectrum in the temporal brain lobes. In [4], the Discrete Wavelet Transform (DWT) and an Artificial Neural Network are used to extract features for a classifier that sorts six possible emotional states: 1) fear, 2) sadness, 3) frustration, 4) happiness, 5) pleased, and 6) satisfied. Results show a 55.58% average accuracy for the ANN, while other algorithms achieved between 49.82% and 51.82% accuracy. We believe that further work is necessary to improve these results. The paper in [5] builds an emotion recognition system using EEG signals elicited through audiovisual stimuli for four emotions: 1) disgust, 2) happiness, 3) surprise, and 4) fear. Six participants were analyzed with biosensors, and three characteristics were extracted using Lifting-Based Wavelet Transforms (LBWT); however, the Fuzzy C-Means (FCM) results for classifying emotions were not reported. In [6], experiments are conducted to compare feature extraction methods, applied in both the time and frequency domains, together with the selection of electrode placement, in order to analyze emotional states in subjects. They concluded that beta and gamma waves are valuable characteristics for discriminating between liking and disliking judgments. However, the authors did not statistically validate their findings due to the large number of attributes versus the sample size. Nevertheless, this study shows that electrode placement in the prefrontal region might not be as crucial as most studies suggest, and concludes that the anterior scalp is better for differentiating discrete emotional states. There are no studies directly related to detecting emotional states and stress levels in individuals making stock market trading decisions.


3 Methodology and Materials

Our method involves selecting participants to engage in simulated trading of options on underlying equities traded in the American stock market. One participant has little experience and a second one has no experience in stock market trading. We also included an experienced trader in the experiment and an individual meditating, which gives us a reference point. We presented participants with a standardized trading methodology based on the RSI, MACD, and Keltner channel indicators, plus momentum and mean-reversal trading techniques. Our goal is to give participants a framework for trading and to analyze their EEG readings as they dive into the markets. We set up a "carrot and stick" reward dynamic for participants: the top participant (as measured by their profits) receives a payout from the bottom participants. Our objective is to simulate the risk and reward dynamics inherent in stock trading, which have an impact on the participants' state of mind. The participants' EEG readings can measure these variations.

3.1 EEG Data Collection

We used a brain-computer interface (BCI) Ultracortex Mark IV EEG headset to record the EEG signal from the studied individuals. It was equipped with 8-channel dry electrodes to record brain signals. To collect the EEG signal, we used a Cyton Biosensing board with an 8-channel neural interface and a 32-bit processor. The board communicates wirelessly with a computer through a USB dongle. Each component is shown in Fig. 1. We applied the 10–20 system with channels 1–8 of the OpenBCI default setting. Figure 1 also shows the system's electrode locations (Table 1).

Fig. 1. OpenBCI Ultracortex headset, Cyton Biosensing board, Electrode location

3.2 Training Data, Test Data, and Feature Extraction

Training data for our algorithm was extracted from the DEAP dataset. This dataset contains EEG and peripheral physiological signals of 32 participants that were recorded as they watched 40 one-minute-long excerpts of music videos. Participants rated each video in terms of arousal, valence, like/dislike, dominance, and familiarity.


Table 1. Channels and electrode positions
| Channel | Electrode position | Channel | Electrode position |
| 1 | FP1 | 9 | F7 |
| 2 | FP2 | 10 | F8 |
| 3 | C3 | 11 | F3 |
| 4 | C4 | 12 | F4 |
| 5 | P7 | 13 | T7 |
| 6 | P8 | 14 | T8 |
| 7 | O1 | 15 | P3 |
| 8 | O2 | 16 | P4 |

From this dataset, we transform the signals to the frequency domain and then apply filters to obtain the respective frequency bands. We extracted four features: the alpha, beta, delta, and theta wave power spectral density (PSD) ratios, each divided by the total PSD. Each feature vector also contained the output for the respective quadrant of the valence – arousal graph (the "op" field or quadrant number), as shown in Fig. 2.

Fig. 2. Valence – arousal combined graph
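As an illustration of this feature extraction step, the sketch below computes the relative band powers from a raw EEG channel with Welch's method. The band edges follow the values stated in the paper, while the sampling rate and the function and variable names are our own assumptions.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (13, 30)}

def band_power_ratios(eeg, fs=250):
    """Relative PSD per band (band power / total power) for one EEG channel.
    fs=250 Hz is the typical Cyton sampling rate, assumed here."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    total = np.trapz(psd, freqs)
    ratios = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        ratios[name] = np.trapz(psd[mask], freqs[mask]) / total
    return ratios  # e.g. {'delta': 0.41, 'theta': 0.18, 'alpha': 0.22, 'beta': 0.19}
```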

As test data, we used the data collected with the brain-computer interface from 4 subjects: an expert trader, a person who knows some trading basics, and one person who is a complete neophyte in this area. We also compared this data with the EEG signals of a person meditating. The test data were processed to obtain the same feature vector as the training dataset in order to classify the "op" field. The characteristics entering the classifiers to obtain the models are combinations of these features with a maximum complexity level of 1. In Fig. 3 we display the periodogram power spectral density estimate for the 4 participants.


Fig. 3. Participants' periodograms
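A minimal sketch of the classification step follows, assuming scikit-learn and the four band-ratio features described above. The paper's actual experiments were run with automatically generated feature combinations and also with a Deep Learning classifier, so this only illustrates the overall train/predict flow for the arousal–valence quadrant ("op") label.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: rows of [alpha, beta, delta, theta] PSD ratios from the DEAP-derived training set
# y: arousal-valence quadrant labels ("op" field, 1-4)
X_train = np.random.rand(128, 4)             # placeholder for the real training features
y_train = np.random.randint(1, 5, size=128)  # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

x_subject = np.array([[0.22, 0.19, 0.41, 0.18]])  # one participant's feature vector
print("Predicted quadrant:", clf.predict(x_subject)[0])
```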

4 Results

The Deep Learning classifier with the features shown in Table 2 obtained a relative error lenient of 32.4% ± 1.5%. To obtain the best results, the model does not consider the [beta] feature. With this configuration, we got the average results shown in Table 3.

Table 2. Combination of features for the Deep Learning algorithm
| Name | Expression | Complexity |
| F1 | [alpha]/[delta] | 2 |
| theta | [theta] | 1 |

Table 3. Predictions
| Participant | OP field (real) | OP field (Deep Learning) | OP field (SVM) |
| Expert | 4 | 4 | 4 |
| Some knowledge | 2 | 2 | 3 |
| Neophyte | 2 | 2 | 2 |
| Meditation | 3 | 4 | 3 |

Table 3 shows that with Deep Learning, all participants had correctly predicted average values for their self-reported state of mind. The baseline corresponding to the meditating subject was not classified with an "OP" field of three, the value that corresponds to a state of relaxation; instead, it was classified as a state of joy with "OP" equal to 4. Table 3 also presents a classification of the participants using SVM; one of these classifications is not correct. The SVM algorithm obtained a relative error lenient of 31.52% ± 1.5% with the features shown in Table 4.


Table 4. Combination of features for the SVM algorithm

Name | Expression           | Complexity
F2   | Exp([alpha]*[delta]) | 3

In Fig. 4, we present a comparison between the Deep Learning and SVM algorithms. Although the error is smaller for SVM, it should be considered that the complexity of its combination feature is three, higher than for the Deep Learning algorithm. Figure 4 presents the error comparison for the two algorithms. It should be noted that we experimented with several other algorithms; SVM and Deep Learning were the ones with the best results.

Fig. 4. Comparison between Deep Learning and SVM algorithm

5 Conclusions

We analyzed the state of mind of the participants in our experiment on the arousal–valence scale through EEG readings processed to obtain the alpha, theta, delta, and beta waves. The results of this research may be used to boost trading performance and profits, as trading performance is thought to be closely related to stress management and peak performance techniques. We obtained satisfactory results using Deep Learning and SVM algorithms with feature vectors built from combinations of the normalized PSD values for the delta, theta, alpha, and beta waves. For Deep Learning, a combination of features with complexity equal to or less than two was enough to obtain a relative error of 32.4%. SVM achieves a relative error of 31.5%, but with a combination feature of complexity 3. For this last consideration, we recommend the use of the Deep Learning algorithm instead.


Evaluation System of GIS Partial Discharge Based on Convolutional Neutral Network Liuhuo Wang(&), Lingqi Tan, and Zengbin Wang Electric Power Research Institute of Guangdong Power Grid Corporation, Guangzhou 510080, Guangdong, China [email protected],[email protected], [email protected]

Abstract. The paper presents a GIS partial discharge evaluation system based on a convolutional neural network. The partial discharge evaluation method and its system aim to solve the problem of false alarms and missed alarms in existing partial discharge monitoring systems. The evaluation system is presented in detail, and the method of GIS partial discharge signal acquisition is analyzed. The convolutional neural network used to classify the partial discharge types is investigated, and the experiment with the evaluation system verifies the effectiveness of the convolutional neural network.

Keywords: GIS system · Partial discharges · Model defect · Convolutional neural network

1 Introduction

Since GIS (Gas Insulated Switchgear) is widely used in power systems, its safety directly affects the reliability of the entire power grid. As time goes by, GIS insulation aging is further aggravated and seriously threatens the safe operation of the power grid. Partial discharge refers to a localized electrical breakdown inside the GIS, which can occur near the high-voltage conductor or at other locations, and it often appears in GIS in the early stage of insulation deterioration. Multiple defects often exist during the long-term operation of GIS, generating various forms of partial discharge signals [1–3]. The causes of partial discharge in GIS vary, and different partial discharge types differ greatly in the damage they cause to GIS equipment. The existing ordinary GIS partial discharge monitoring and evaluation methods suffer from false alarms and frequent missed alarms, and therefore from limited effectiveness and poor usability. It is thus necessary to distinguish the types of GIS partial discharge faults so that electric power personnel can take effective measures for each partial discharge type.

GIS partial discharge has been studied by scholars all over the world. Judd studied the excitation of UHF signals by partial discharge using the finite difference time domain method and concluded that an external sensor can still obtain high measurement sensitivity [4, 5]. Hampton described the discharge characteristics of different partial discharges and studied the location of the signal source based on the time difference of signal arrival [6]. However, the lack of effective research on the various forms of partial discharge characteristics restricts the effectiveness and accuracy of partial discharge detection and diagnosis in switchgear. In recent years, artificial intelligence has developed rapidly and has shown strong capabilities in many fields. Moreover, thanks to the further improvement of computer performance and the emergence of cloud computing, GPUs, and TPUs, the amount of computation is no longer an obstacle to the development of neural networks. Pattern recognition of partial discharge types is, in the final analysis, the recognition of PRPD, PRPS, and other maps, and current image recognition technology has a very strong foundation, so it can be expected to perform well when applied to partial discharge type recognition. Convolutional neural networks (CNNs) [7–9] are a special kind of artificial neural network, different from other neural network models (such as recurrent neural networks and Boltzmann machines), whose main feature is the convolution operator. CNNs perform well in many fields, especially in image classification [10, 11]. In this paper, a GIS partial discharge type evaluation method and system based on a convolutional neural network is presented. Firstly, the evaluation system is introduced in detail. Secondly, the partial discharge signal acquisition is analyzed. Thirdly, the convolutional neural network for GIS partial discharge classification is studied. Finally, an experiment is presented to verify that the evaluation system can effectively and accurately perform GIS partial discharge diagnosis.

2 Evaluation System

There are two ways to obtain GIS partial discharge samples: collecting data with the detection system, or obtaining data from different GIS installations in the power grid. However, the quantity of data collected by the detection system is limited, so it is impossible to obtain a large amount of data this way. If we use existing data, the sample databases come from many sources, so the data is difficult to bring into a unified format, and the use of external data leads to incompatibility with the detection system. Here we propose a method in which a hardware system transmits sample data and the detection system obtains it by receiving the signals. The basic principle of the system's hardware is shown in Fig. 1.

Fig. 1. Schematic diagram of partial discharge sample data acquisition system.

As shown in Fig. 2, the sample data signal is stored in the form of a PRPD spectrum in the sample database computer, converted from digital to analog through the sample data output system, and emitted in the form of a UHF (ultra-high frequency) signal. The PRPD spectrum can be formed by combining the phase information. The detection system receives the signal through its antenna and forms the PRPD spectrum in its own system. This method overcomes the disadvantage of non-uniform sample data sources and reproduces the real output behavior of the sample data, so it can simulate the actual on-site detection conditions to the maximum extent. Obtaining partial discharge data is the key to studying algorithms for partial discharge type recognition in GIS: enough data makes it possible to mine the inherent laws of partial discharge in GIS and effectively improve the recognition rate of partial discharge types. Considering that partial discharge data from actual GIS operation is scarce and the type of partial discharge is not easy to determine, it is difficult to obtain a large and stable dataset. Therefore, it is very important to obtain correct GIS partial discharge samples. The specific procedure is as follows: firstly, simulated GIS sample devices with different defect types are constructed in the laboratory; then the equipment is pressurized and the partial discharge pulses are measured by the pulse current method; finally, the data is recorded by the computer and data acquisition device and assembled into a stable partial discharge database.

3 Partial Discharge Signal Acquisition

Typical internal insulation defects that cause partial discharge in GIS include metal protrusions, voids discharge, free particle discharge, floating conductors, and surface discharge. These five types of GIS partial discharge are common in practice and easy to reproduce in the laboratory. Metal protrusion: there are fixed defects in GIS such as a metal tip on the high-voltage bus, a metal tip on the metal shell, or a metal protrusion on the basin insulator. Voids discharge: air gaps or bubbles are easily left in the basin insulator used in GIS because of the epoxy casting of the insulator. Free particle discharge: free metal particles are the most common type of defect; friction during GIS component production, environmental impact during installation, collisions caused by component action in service, and failure to completely clean particles during GIS maintenance can all produce adhering metal particles. Floating conductor: there are many parts in the GIS cavity that should be tightly connected in normal operation; under external factors such as interference, metal parts can loosen, producing floating conductors and, in turn, repetitive partial discharge. Surface discharge: surface discharge is caused by pollution on the insulator surface in GIS, which comes either from the volatilization of rubber and other materials under the influence of electrical and thermal stress after long-term operation inside the GIS chamber, or from chemical reactions in GIS that produce semiconducting substances. There are two ways to set up specific GIS defects; both are used in our system, as presented below.


1. Typical insulation defects set on real GIS equipment. Our team has 550 kV and 220 kV GIS units, on which many different types of typical insulation defects have been set, such as a metal protrusion on the cylinder wall, various kinds of floating conductors, surface discharge defects made of silver sulfide and conductive adhesive, and free metal particle defects made of aluminum and copper wire of different sizes. Photographs of the defects on the real GIS equipment are shown in Fig. 2.

Fig. 2. Practicality pictures of defects on GIS entity equipment.

2. Typical insulation defect models. In the experiment, a plexiglass tank was used to simulate the GIS, and partial discharge insulation defect models were placed inside. Sulfur hexafluoride can be injected into the tank. For each kind of insulation defect, a model is built according to its characteristics.

4 Analysis and Experiment Based on a Convolutional Neural Network

The traditional process of GIS partial discharge diagnosis is to analyze the preprocessed partial discharge signal obtained by the detection equipment, extract characteristic parameters from the signal, classify the partial discharge, and determine the discharge mode. Most of these characteristic parameters are defined from experience, and the relationship between them and the actual partial discharge needs to be explored further. Therefore, an evaluation system for GIS partial discharge based on a convolutional neural network is presented here to improve the accuracy of recognition and classification. The essence of neural network training is to solve an optimization problem: find a set of parameters that minimizes the loss function. It is mainly solved by gradient descent. Because the derivative of a multivariate composite function obeys the chain rule, neural network training is also called gradient back-propagation.


The general convolutional neural network consists of the following four components: convolutional layers, pooling layers, fully connected layers, and activation functions.

1. Convolutional layer
The convolutional layer is the basic part of the convolutional neural network. In engineering implementations, the fully connected layer, which plays the role of classifier at the end of the network, is also implemented with convolutional layers. Convolution is a mathematical operation on two real-valued functions. Let x(t) and w(t) be two functions of the independent variable t, and denote the result of their convolution by s(t). The convolution can be written as s(t) = (x * w)(t) and is defined as

s(t) = \int_{-\infty}^{+\infty} x(\tau)\, w(t - \tau)\, d\tau.   (1)

Here the first argument (x(t)) of the convolution is generally referred to as the input, and the second argument (w(t)) as the kernel function; the output is referred to as the feature map. The convolution above is based on the integral of continuous functions. If both the input and the kernel are discrete sequences, the convolution is defined as

s(t) = (x * w)(t) = \sum_{a=-\infty}^{+\infty} x(a)\, w(t - a).   (2)

Convolution can also be defined in higher-dimensional spaces, and we often convolve over several dimensions at once. If we take a two-dimensional image I as input, we may also want to use a two-dimensional kernel K:

S(i, j) = (I * K)(i, j) = \sum_{m=-\infty}^{+\infty} \sum_{n=-\infty}^{+\infty} I(m, n)\, K(i - m, j - n).   (3)

Because the convolution operation is commutative, it can equivalently be computed as

S(i, j) = (K * I)(i, j) = \sum_{m=-\infty}^{+\infty} \sum_{n=-\infty}^{+\infty} I(i - m, j - n)\, K(m, n).   (4)
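To make Eq. (3) concrete, the following minimal NumPy sketch computes a discrete 2D convolution of a small image with a kernel; it is a direct, unoptimized implementation for illustration, not the code used in the evaluation system.

```python
import numpy as np

def conv2d(image, kernel):
    """Direct 2D convolution following Eq. (3), with the kernel flipped
    and 'valid' output size (no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0.0, 1.0], [1.0, 0.0]])
print(conv2d(image, kernel))
```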

2. Pooling layer
The pooling function replaces the output of the network at a location with an overall statistic of the adjacent outputs around that location. The max pooling function gives the maximum value within the adjacent rectangular region. Other commonly used pooling functions include the average value in the adjacent rectangular region, the L2 norm, and a weighted average based on the distance from the center pixel. Because pooling summarizes the information of all neighbors, we can use the statistics of K pixels in the pooling area instead of a single pixel, so the input of the next layer is only 1/K of the size before pooling, and the scale is greatly reduced.

3. Fully connected layer
The fully connected layer plays the role of classifier in the convolutional neural network. Convolutional layers, pooling layers, and activation layers map the original data to a hidden feature space, while the fully connected layer maps the learned feature representation to the sample label space. In practice, the fully connected layer can be realized by a convolution operation: a fully connected layer whose preceding layer is also fully connected can be converted into a convolution with a 1 × 1 kernel; a fully connected layer whose preceding layer is a convolutional layer can be converted into a global convolution with an H × W kernel, where H and W are the height and width of the preceding layer's convolution output.

4. Activation function
The activation function layer is also called the nonlinear mapping layer. The activation function is introduced to increase the nonlinear capacity of the whole network, and a dozen or so activation functions can be adopted.

In recent years, the training tools for deep learning have become more and more mature. The mainstream open-source deep learning tools include TensorFlow, PyTorch, Caffe, and others. These tools let researchers quickly build networks and solve problems, and can call the GPU for parallel computing, which greatly improves research and development efficiency. In this paper, TensorFlow is selected as the training tool for the neural network. The TensorFlow training process is as follows. In the first stage, the computation graph (the network architecture of the neural network) is defined, including all structures of the whole neural network and the loss function. In the second stage, the implementation stage, data is fed to the corresponding variables and the problem is solved.

In the experiment, the classification accuracy of GIS partial discharge in the evaluation system based on the convolutional neural network is shown in Table 1.

Table 1. Classification accuracy of GIS partial discharge in our evaluation system.

Partial discharge type  | Classification accuracy
Metal protrusion        | 90.5%
Voids discharge         | 90.3%
Free particle discharge | 90.6%
Floating conductor      | 91.2%
Surface discharge       | 90.1%
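As a rough illustration of the two-stage training workflow described above, the sketch below defines and trains a small CNN on PRPD maps with the Keras API of TensorFlow; the architecture, input size (64 × 64 single-channel maps), and placeholder data are assumptions for illustration, not the network actually used in the evaluation system.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # metal protrusion, voids, free particle, floating conductor, surface

# Stage 1: define the computation graph (network architecture and loss)
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stage 2: feed data and solve (random placeholder data for illustration)
x_train = np.random.rand(100, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=100)
model.fit(x_train, y_train, epochs=5, batch_size=16)
```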


The experimental results show that the generalization ability of the proposed algorithm is excellent, and the classification accuracy of our GIS partial discharge evaluation system exceeds 90% for all five defect types.

5 Conclusion

Measurement and analysis techniques for GIS partial discharge based on a convolutional neural network are presented in this paper. The evaluation system and the partial discharge signal acquisition are introduced in detail. The convolutional neural network for GIS partial discharge evaluation is studied, and an experiment is presented to verify the effectiveness and accuracy of the evaluation system based on the convolutional neural network. This system can therefore be employed to accurately assess the insulation condition of GIS equipment in field measurements.

Acknowledgments. This work is supported by the Technical Projects of China Southern Power Grid (No. GDKJXM20180128). The authors would like to thank all the members, including trainees, of China Southern Power Grid.

References 1. Wang, L.H., Lv, H., Wang, Z.B., Tan, L.Q., Wu, J.: Study on partial discharge characteristics of dual typical defects in switchgear. High Volt. Appar. 54(11), 265–272 (2018) 2. Ren, M., Dong, M., Ren, Z., et al.: Transient earth voltage measurement in PD detection of artificial defect models in SF6. IEEE Trans. Plasma Sci. 40(8), 2002–2008 (2012) 3. Zhao, X.F., Yao, X., Guo, Z.F., et al.: Partial discharge characteristics and mechanism in voids at impulse voltages. Meas. Sci. Technol. 22(35), 704–710 (2011) 4. Judd, M.D., Farish, O., Hampton, B.F.: The excitation of UHF signals by partial discharges in GIS. IEEE Trans. Dielectr. Electr. Insul. 3(2), 213–228 (1996) 5. Judd, M.D.: Using finite difference time domain techniques to model electrical discharge phenomena. In: IEEE Conference on Electrical Insulation & Dielectric Phenomena, Victoria, Canada, pp. 518–521. IEEE Press (2000) 6. Hampton, B.F.: Diagnostics for gas insulated substations. In: IET International Conference on Advances in Power System Control, Operation & Management, pp. 17–23 (1994) 7. Sermanet, P., Chintala, S., LeCun, Y.: Convolutional neural networks applied to house numbers digit classification. In: Proceedings of the 21st International Conference on Pattern Recognition, pp. 3288–3291 (2012) 8. Cireşan, D.C., Giusti, A., Gambardella, L.M., Schmidhuber, J.: Mitosis detection in breast cancer histology images with deep neural networks BT - medical image computing and computer-assisted intervention – MICCAI 2013. In: Proceedings MICCAI, pp. 411–418 (2013)


9. Spanhol, F.A., Oliveira, L.S., Petitjean, C., Heutte, L.: Breast cancer histopathological image classification using Convolutional Neural Networks. In: 2016 International Joint Conference on Neural Networks (IJCNN), vol. 29, no. 1, pp. 2560–2567 (2016) 10. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1–9 (2012) 11. Rawat, W., Wang, Z.: Deep convolutional neural networks for image classification: a comprehensive review. Neural Comput., 1–98 (2017)

Top-Level Design of Intelligent Video Surveillance System for Abandoned Objects Chengwei Zhang and Wenhan Dai(&) Faculty of Management and Economics, Dalian University of Technology, Dalian 116000, Liaoning, China [email protected]

Abstract. In the area of public safety and video surveillance, it is particularly important to monitor suspicious abandoned objects and pedestrian abandonment behavior in public places. By analyzing the characteristics of abandonment behavior, this paper puts forward the theory of the life cycle of abandoned objects, which describes the features of abandoned objects and abandonment behaviors over time, and proposes the top-level architecture and workflow of an intelligent monitoring system for abandoned objects based on this theory. The system tracks pedestrians entering the video scene, captures offloading behaviors, extracts motion features, and outputs suspicious images. It also captures static images of abandoned objects through video surveillance, extracts image features, and monitors object carriers. Finally, images, videos, and other structured data are collected and used as the input of a multi-characteristic risk assessment model, which outputs the suspicion level of the abandonment event. This paper fills a gap in the theoretical research on abandonment behavior and offers solutions to the shortcomings of traditional abandoned-object monitoring systems.

Keywords: Top-level design · Intelligent video surveillance · Abandonment behaviors · Life cycle of abandoned object · Systems engineering

1 Introduction

The anti-terrorism situation nowadays is grim, and in the field of video surveillance it is particularly important to monitor suspicious abandoned objects and pedestrian abandonment behavior in public places. However, traditional abandoned-object monitoring systems rely on human effort, which has many shortcomings. On the one hand, they are inefficient, depend on manual labor, and have low recognition accuracy; on the other hand, they cannot track the carriers of abandoned objects and have poor real-time performance. Existing research can identify static and visible remnants in video surveillance scenes through intelligent video surveillance technology [1], but it cannot judge the suspicion level of abandoned objects, nor can it respond to objects intentionally hidden by criminals [2, 3]. This paper analyzes the characteristics of abandonment behaviors and the various stages from the emergence to the disappearance of abandoned objects, and puts forward the theory of the life cycle of abandoned objects. Based on this, combining computer vision and machine learning, we propose a top-level design of an intelligent video surveillance system for abandoned objects, which integrates monitoring, detection, and early warning, making up for the deficiencies of traditional abandoned-object monitoring systems.

2 Abandonment Behavior Analysis

In order to monitor abandonment behaviors, the characteristics of abandonment behavior and abandoned objects need to be analyzed first. All moving objects in the video surveillance scene with the potential to unload objects are called carriers. Carriers can be pedestrians carrying objects such as backpacks or luggage, or vehicles such as cars or trolleys. By analyzing the motion characteristics of the carrier, several possible evolution modes of the carrier in the video surveillance scene can be obtained, as shown in Fig. 1.

Fig. 1. Possible evolution modes for carriers, which enter the video surveillance scene.

The carrier first enters the video surveillance scene with an extra load; if it does not unload, it will eventually leave the scene with the load. If the carrier needs to unload, it first moves to the unloading area. If the area is visible, the unloading behavior of the carrier will be captured by video surveillance. After unloading is completed, the carrier leaves the unloading area and eventually leaves the scene without the load. If the unloading area is not visible, due to occlusion or other reasons, the carrier temporarily disappears from the scene, completes the abandonment, then reappears, and eventually leaves the scene without the load. According to this analysis, the unloading behavior and the unloaded objects in the scene may not be visible. In some abandonment behaviors carried out for the purpose of terrorist attacks, criminals often intentionally hide their objects. However, once the carrier has unloaded, there must be a difference between how the carrier enters and exits the scene. This difference can sometimes be captured by video surveillance, but sometimes it cannot.

On the other hand, it is necessary to analyze the characteristics of the abandoned objects themselves. To detect visible abandoned objects in the video surveillance scene, the most essential part is to describe the difference between abandoned objects and other objects in the same scene. Here we propose the concept of the life cycle of abandoned objects, which describes their state at various time nodes, as shown in Fig. 2.

Fig. 2. Time nodes and phases in the life cycle of abandoned object.

The first phase is the forming phase: the object enters the video surveillance scene as a moving object along with its carrier, until the carrier unloads it. The second phase is the transforming phase: after the object is unloaded, it becomes a static object. However, it cannot yet be defined as an abandoned object, because there is a big difference between unloading and abandoning; an object can only be defined as abandoned after it has remained unloaded for a certain period without movement. The third phase is the existing phase: the abandoned object remains in the scene until it is taken away or destroyed, thereby entering the vanishing phase, and eventually disappears from the scene.

3 Workflow Design of Intelligent Video Surveillance System for Abandoned Objects

3.1 Top-Level Workflow Design

According to the analysis above, in order to increase the accuracy of detecting abandonment behavior and abandoned objects, we need not only to detect visible abandoned objects in the scene but also to track and analyze carriers entering the scene. Only through comprehensive judgment over multi-dimensional features can the judgment accuracy be maximized. Therefore, we propose an overall workflow for an intelligent video surveillance system that combines carrier monitoring and abandoned object analysis, as shown in Fig. 3. On the one hand, the system searches the video surveillance scene for objects that first move and then stop, and determines whether they are abandoned objects. On the other hand, the system tracks carriers entering the scene and analyzes whether a carrier unloaded any object inside the scene and, if it did, whether the object is abandoned. Some specific abandoned objects, such as abandoned vehicles, might eventually become part of the background of the scene; in this case it is necessary to refresh the background model. After these processes are finished, the system has collected image data and statistical data about the abandonment, and will immediately alert and output related images. Meanwhile, the system needs to reason and judge comprehensively over multidimensional features to output further results, such as what kind of abandoned object it is or whether the abandonment is suspicious.

Fig. 3. Top-level workflow for intelligent video surveillance system for abandoned object.

3.2 Design and Implementation of Carrier Monitoring Process

Fig. 4. Detailed workflow of the carrier monitoring process.

Figure 4 shows the detailed workflow of the carrier monitoring process. After a moving object enters the scene, the system classifies its image and performs semantic segmentation (as shown in Fig. 5b) [4]. If the object is a pedestrian without any bags or luggage, then it is not a carrier and is therefore unsuspicious; otherwise, an algorithm based on YOLOv3 [5, 6] is used to track it (as shown in Fig. 5a). If the carrier is not a pedestrian, it is usually impossible to judge whether it is loaded, so the system simply keeps the trajectory of the carrier. If the carrier is a pedestrian, it means the pedestrian entered the scene with visible luggage and was detected by the semantic segmentation process. The system therefore performs another semantic segmentation before the pedestrian leaves the scene and compares the result with the former one; any difference indicates that this carrier very likely abandoned something in the scene. As shown in Fig. 5c and d, a pedestrian enters the scene with a bag but leaves without it, and semantic segmentation based on YOLOv3 reveals the difference. The system will then alert and output videos of the abandonment (as shown in Fig. 5b) and images of the carrier and the abandoned object, and save data such as the trajectory for the intelligent assessment process.

Fig. 5. Results and outputs of the carrier monitoring process. (a) shows the result of motion tracking, while (b) shows the abandonment behavior as a screenshot from the video output. (c) shows the result of image semantic segmentation after the pedestrian enters the scene, indicating that the pedestrian entered with a handbag, while (d) shows the result of image semantic segmentation before the pedestrian leaves the scene without his handbag.

3.3 Design and Implementation of Abandoned Objects Analysis Process

As mentioned above, carrier monitoring alone is not enough because not all carriers have visible loads. It is also necessary to analyze static abandoned objects in the video scene; the workflow of the abandoned objects analysis process is shown in Fig. 6.

Fig. 6. Detailed workflow of the abandoned objects analysis process.


The Gaussian mixture model (GMM) is a commonly used background model for video surveillance [7]. The model assumes that the color of each pixel in the video follows a mixture of Gaussian distributions, so that a subset of this mixture can be used to describe the background. If the color of a pixel in a new frame does not belong to the background subset learned from the pixel's history, the pixel is considered part of the foreground in that frame, thereby distinguishing the foreground area from the background. The mixture can be obtained by fitting the data of a certain number of historical frames, so the model can be updated adaptively. In the GMM, the number of historical frames determines the behavior of the model. If a small number of historical frames is set, a temporarily stationary object will be fitted into the background; however, if a large number is set, a temporarily stationary object will remain in the foreground along with the moving objects. Using this property, we can build two GMM models with different numbers of historical frames and take the difference between their foreground masks to obtain an image of the temporarily stationary object (as shown in Fig. 7) [8, 9].

Fig. 7. Abandoned object detection. (a) is the original frame; (b) is the foreground result of GMM module with 27 historical frames, while (c) is the result with 4000 historical frames. The difference area is shown in the circle, which indicates abandoned object.
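A minimal OpenCV sketch of this dual-background idea is shown below; the two history lengths follow the figure (27 and 4000 frames), while the input file name, threshold, and morphological cleanup are assumptions for illustration rather than the authors' exact implementation.

```python
import cv2

# Two GMM background subtractors with short and long histories
short_bg = cv2.createBackgroundSubtractorMOG2(history=27, detectShadows=False)
long_bg = cv2.createBackgroundSubtractorMOG2(history=4000, detectShadows=False)

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_short = short_bg.apply(frame)   # stationary objects absorbed quickly
    fg_long = long_bg.apply(frame)     # stationary objects stay in foreground
    # Pixels that are foreground in the long model but background in the short
    # model correspond to temporarily static (candidate abandoned) objects.
    static_mask = cv2.bitwise_and(fg_long, cv2.bitwise_not(fg_short))
    static_mask = cv2.morphologyEx(
        static_mask, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    cv2.imshow("candidate abandoned objects", static_mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```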

At the moment the abandoned object is detected, it has actually been unloaded and has remained static for some time, and its carrier is likely to have left. Assume that at this moment the object has been static for t seconds; if we trace back t seconds, that is exactly the moment when unloading finished. The carrier can therefore be found by tracing back through the surveillance video. The system also needs to extract the basic features of the object, such as relative size and location, for intelligent assessment, and output the processing results (as shown in Fig. 8).


Fig. 8. Abandoned object analysis output. (a) is the result of abandoned object detection, (b) is the image of abandoned object while (c) is the image of the carrier.

3.4 Design of Intelligent Assessment Process

Figure 9 shows the detailed workflow of the intelligent assessment process. The system collects structured data and structures the unstructured data, eventually building a case database. The intelligent assessment module evaluates the degree of suspicion of an abandonment behavior by integrating the characteristics of the abandoned objects, the trajectory of the carriers, and other features, and outputs the suspicion level to security experts. The security experts decide whether the output is correct and give feedback to improve the performance of the module. Once the output is proved correct, the result is saved to the case database for further module training [9] (Fig. 9).

Fig. 9. Detailed workflow of the intelligent assessment process.

4 Summary

This paper analyzes the behavioral characteristics of carriers committing abandonment and the life cycle of abandoned objects from a theoretical perspective and, based on this, proposes a top-level design of an intelligent video surveillance system for abandoned objects. The paper also puts forward implementation methods for some functions of this design, which achieve good results. On the one hand, this makes up for the shortcomings of traditional video surveillance systems; on the other hand, it fills gaps in theoretical research in related fields.

Acknowledgments. This paper and the related research are funded by China's National Key Research and Development Project "Key Technologies and Equipment for Intelligent Monitoring and Identification of Police Events Based on Multiple Information Fusions in Public Security Prevention and Control Places", project number 2018YFC0807503.


References 1. Zhang, Y.: SmartCatch intelligent video analysis technology full contact (SmartCatch 智能视 频分析技术全接触). J. China Public Secur. Acad. Edn. (中国公共安全: 学术版) (6), 79–81 (2009) 2. Tang, Y., Fu, J., Chen, Y.: A system of abandoned objects detection based on omnidirectional computer vision (基于全方位计算机视觉的遗留物检测系统). J. Comput. Meas. Control (计算机测量与控制) 18(03), 517–519+523 (2010) 3. Walls, R.M., Zinner, M.J.: The Boston Marathon response: why did it work so well? J. JAMA 309(23), 2441–2442 (2013) 4. ObjectDetection-YOLO. https://github.com/spmallick/learnopencv/tree/master/ ObjectDetection-YOLO 5. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. arXiv:1506.02640 [cs.CV] (2016) 6. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv:1804.02767 [cs.CV] (2018) 7. Kaewtrakulpong, P., Bowden, R.: An improved adaptive background mixture model for realtime tracking with shadow detection. In: Remagnino, P., Jones, G.A., Paragios, N., Regazzoni, C.S. (eds.) Video-Based Surveillance Systems: Computer Vision and Distributed Processing, pp. 135–144. Springer, Boston (2002) 8. Porikli, F.: Detection of temporarily static regions by processing video at different frame rates. In: IEEE Conference on Advanced Video and Signal Based Surveillance, AVSS 2007 (2007) 9. Zhang, C., Wu, X.P., Zhou, J.Y., Qi, P.Q., Wang, Y.G., Lv, Z.: An abandoned object detection algorithm based on improved GMM and short-term stability measure (基于改进混 合高斯建模和短时稳定度的遗留物检测算法). J. Sig. Process. (信号处理) 28(08), 1101– 1111 (2012)

Detection of Human Trafficking Ads in Twitter Using Natural Language Processing and Image Processing Myriam Hernández-Álvarez(&) and Sergio L. Granizo Departamento de Informática y Ciencias de la Computación – DICC, Escuela Politécnica Nacional, Quito, Ecuador {myriam.hernandez,granizo.sergio}@epn.edu.ec

Abstract. Human trafficking aimed at the sexual exploitation of minors is a problem that affects the whole world, and this crime has evolved with the use of the Internet. To make a contribution that facilitates the work of the Police, we have developed a method that uses Natural Language Processing and Image Processing techniques to detect messages on Twitter related to this felony. If minors are used for sexual exploitation, the law in most countries considers them human trafficking victims. The system has two phases to recognize the gender and age group of very young people. In the first one, it captures Twitter messages that are suspected of being related to the crime through specific normalized hashtags. In the second phase, the system recognizes gender and age groups from facial features and/or upper body geometry and proportions, using Haar filters and an SVM algorithm.

Keywords: Human trafficking · Twitter ads · Haar filters · SVM · Gender classification · Age group classification

1 Introduction

With the evolution of the Internet and the arrival of Web 2.0, a door has been opened for illegal businesses such as human trafficking [1]. Countries such as those in Latin America have the highest rates of smuggling of people, especially children and adolescents. In the penal codes of most countries, the age of consent is 14 years. If people 14 years old or younger are being used for sexual exploitation purposes, the law states that they are, in fact, victims of trafficking. Currently, if we examine Twitter [2], we find websites that offer escort or similar services where young girls are promoted for the consumption of "customers." And, although they cannot openly present themselves as providers, some users promote this type of illicit activity through messages with photos of minors. The young people involved are generally abused physically [3], psychologically, and sexually [4]. Although there are previous tweet-filtering and image-classification efforts to detect illicit messages, most of them use either Natural Language Processing methods or Computer Vision techniques, but not both jointly in the same system. Our proposed system combines both types of procedures to identify these questionable tweets.


2 Related Work

In [4], the authors analyze deception techniques present in blogs, collaborative projects, microblogging, virtual games, and social networks, based on manipulation of content and falsification of information, images, and videos. These deception techniques may succeed, depending on the skill of the attacker. Analyses using natural language processing have been developed to help the Police combat this crime. In [5], a study is conducted on tweets containing hashtags in order to detect ads for illicit activities. In [6], the authors used a semi-supervised learning approach to discern potential patterns of human trafficking and identify related advertisements; they use non-parametric learning to implement a text analysis whose results were then sent for expert verification. In the area of computer vision, several works classify images into age groups (children, adolescents, and adults) focusing only on the face of the person, for example the research in [7]. To the best of our knowledge, there is no previous work on the recognition of age groups using information from the upper body. On this type of site, facial features are often hidden, so classification frequently has to be done using other information, such as torso shape and proportions. There are still efforts to be made in this area: techniques must continue to be developed that contribute to the detection of human trafficking indicators in social networks and linked websites.

3 System Proposal

Our proposal for the detection of suspicious websites is divided into two phases. The first one performs the treatment, analysis, and classification of hashtags using natural language processing. The second phase processes and classifies the gender and age groups of images hosted on websites previously classified as suspicious through the hashtag processing. We aim to identify minors 14 years old or younger. Figure 1 presents the identification process of suspicious tweets using natural language processing, and Fig. 2 shows the image processing used to classify gender and age group.

3.1 Natural Language Processing of Relevant Tweets

In the first phase, to recognize suspicious tweets, the system captures messages suspected of being related to human trafficking through the identification of specific hashtags. Words are classified as relevant because they appear in hashtags related to age, mention skinny people, contain terms that serve as code words for clients who demand this kind of service with minors, or refer to foreigners. This last consideration is included because traffickers usually take victims out of their countries to isolate them and ensure that they cannot count on help. These terms change often but can be identified by the nature of the service promotion and the detection of common themes in these types of messages. We conducted a preliminary analysis of tweets and Facebook content that had already been denounced as involved in sex trafficking of minors; we detected that they frequently used Spanish words corresponding to young, sweet, fresh, new, Lolita, penguin, and skinny. "Caldo de pollo", "club penguin", and "cp" are used by criminals to refer to child pornography. We also chose the hashtags "escort" and "prepago" as a complement to the mentioned words.

The obtained tweets can be very noisy; therefore, they are first normalized with natural language processing techniques, using a lexical normalization algorithm for detecting out-of-vocabulary (OOV) words based on the TweetNorm_ES corpus [8]. In addition, we cleaned the messages with the following criteria:

• Tweets containing certain characters that are not standardized were removed.
• Repetitive tweets were pruned to avoid redundant information.
• Words appearing in an irrelevant context were eliminated. For example, one of the words used for filtering tweets is "young," referring to children and adolescents; but a user wrote "Long live Quito! Young city!". This message has another context; therefore, it must be discarded.

To extract features for classification, we used the criteria expressed in Table 1. We used an SVM algorithm with a semi-supervised approach to classify 55,123 recent tweets, all containing the chosen target hashtags; 10% of this original corpus was annotated and served as ground truth to evaluate the performance of the algorithms. In Table 2, we present the confusion matrix for the SVM classifier. Precision, recall, and F-measure were 90.7%, 87.3%, and 89.9%, respectively.
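A minimal scikit-learn sketch of this classification step is shown below; the feature matrix is assumed to already contain the numeric criteria of Table 1 (word counts, hashtag counts, etc.), and the column names, file name, and split are hypothetical rather than the authors' exact pipeline.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Hypothetical file with one row per tweet: numeric features plus a "label"
# column (1 = suspicious, 0 = not suspicious) for the annotated 10%.
data = pd.read_csv("tweet_features.csv")
labeled = data.dropna(subset=["label"])

X = labeled.drop(columns=["label"]).values
y = labeled["label"].astype(int).values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(precision_recall_fscore_support(y_test, y_pred, average="binary"))

# The fitted model can then label the remaining, unannotated tweets.
unlabeled = data[data["label"].isna()].drop(columns=["label"])
if len(unlabeled) > 0:
    data.loc[data["label"].isna(), "predicted"] = clf.predict(unlabeled.values)
```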

Fig. 1. The identification process of suspicious tweets using natural language processing.


Fig. 2. Image processing to classify gender and age group

Table 1. Features and reasons to consider them

Features | Reasons to consider them as input characteristics
Quantity of words | Deceptive messages have more words to make them forgettable
URL links, analyzed to see whether they point to a night club website or a massage therapy site | It serves to detect the use of Twitter to publicize these sites
Third-person use | Deceptive messages have fewer self-references to avoid accountability; on the other hand, other people advertise the victims' services
Same Twitter user talking about more than one victim | Covert publicity of illicit activities
The number of hashtags considered to harvest the data | Confirmation of Twitter relevance
User account from one country that mentions girls from another | This criterion is a known signal of sex trafficking
The number of adjectives and verbs, an indication of a possible deceptive message | This number is high in comparison with a standard message because deceptive messages are usually very expressive
Similar advertising from the same account promoting different women | This criterion is a known sign of sex trade
Weight of the women | Less than 100 lb may correspond to very young girls
One account promoting more than two different women | This criterion is a known sign of sex trade


Table 2. SVM performance – natural language processing of the tweets

Predicted class \ Actual class | Suspicious tweet | Not-suspicious tweet
Suspicious tweet               | 32453            | 3678
Not-suspicious                 | 3320             | 15672

3.2 Image Processing of Photographs in Suspicious Tweets

The suspicious tweets and linked URLs were scraped to obtain links to the images shown in their domains. As a result, we obtain a plain text file where all the links to the pictures are saved, and we then download the photos; it is essential to note that scraping is limited to open-access pages that do not require a paid subscription. Once the set of images is downloaded, we carry out a data cleaning process to discard irrelevant pictures: for example, icons are dropped; grayscale images are dropped because the classification model needs images in RGB color composition; pictures in a format other than JPG are eliminated for the present project; and images are resized because the classification requires that all have the same dimensions (150 × 150 pixels). Once the data is ready, the feature extraction process is carried out using Haar filters [9]. Initially, the loaded image is divided into k different local regions; then, we apply the Haar cascade classifiers to each region using the Viola-Jones algorithm to detect image patterns by analyzing geometric properties. These patterns are handled as specific physical features (eyes, face, upper body, etc.). In this work, we only take into account the geometric features that allow detecting the face and upper body in the images collected from suspicious websites. Three geometric features are needed to successfully detect a face (eyes, nose, and mouth), and detecting the upper body requires detecting shoulders, torso, and arms. To decide whether the face or the upper-body detection predominates, we use a majority voting method that takes into account the largest number of individual features detected in each case. This process is shown in Fig. 3 and Fig. 4.

Fig. 3. Face detection using Haar model.


Fig. 4. Upper body detection using Haar model
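The following OpenCV sketch illustrates this kind of Haar-cascade detection with a simple vote between face-related and upper-body features; the cascade files are the models bundled with OpenCV, and the specific feature set, file name, and voting rule are simplified assumptions, not the authors' exact configuration.

```python
import cv2

# Haar cascade models bundled with OpenCV
base = cv2.data.haarcascades
face_cascade = cv2.CascadeClassifier(base + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(base + "haarcascade_eye.xml")
body_cascade = cv2.CascadeClassifier(base + "haarcascade_upperbody.xml")

def detect_regions(image_path):
    """Count face-related and upper-body detections and decide which predominates."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    bodies = body_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

    face_votes = len(faces) + len(eyes)
    body_votes = len(bodies)
    # Majority vote: keep whichever group of features was detected more often
    mode = "face" if face_votes >= body_votes else "upper_body"
    return mode, faces, bodies

mode, faces, bodies = detect_regions("suspicious_image.jpg")  # hypothetical file
print(mode, len(faces), len(bodies))
```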

We used an SVM algorithm with Haar features to classify gender and age group (> 14 and ≤ 14 years old). The SVM classifier used a linear kernel function to construct the boundary function f(x), defined by (1):

f(x) = \sum_{i=1}^{m} y_i \alpha_i K(x, x_i).   (1)

In our application, we have a binary classification problem for both age group identification and gender categorization. Using only facial features, we attained gender and age group classification accuracies of 81.2% and 80.65%, respectively. Using upper body features, we obtained accuracies of 81.6% and 82.1% for gender and age group classification, respectively. These last values are promising for detecting possible cases of human trafficking of minors because, as mentioned before, on sites related to this crime facial characteristics are usually blurred or pixelated. Our system provides good results even in those situations.

4 Conclusions

This paper focused on obtaining Twitter messages suspected of promoting minors for sexual services and therefore related to trafficking of persons. Our system has two phases. In the first one, we used hashtags that have been shown to be associated with this crime. Tweets are processed to eliminate noise and normalize them. With this information, we extract features that are input to an SVM algorithm to classify messages as suspicious or not suspicious, building a blacklist that is further processed. We then download the images embedded in the blacklisted tweets and those from the linked websites. In the age and gender classification problem, we face the complexity that images may, in many cases, contain no information on facial features but only data on upper body size and proportions. In the present work, we attain good results even in these cases, processing only the geometric characteristics of the torso.


As future work, to improve the presented results, we will continue researching strategies to better detect facial and upper body features even in unconstrained images such as those generally obtained in social networks.

References 1. Laczko, F.: Data and research on human trafficking. Int. Migr. 43(1–2), 5–16 (2005) 2. The statistics portal, Twitter: number of monthly active users 2010–2018. https://www. statista.com 3. Candes, M.R.: The Victims of Trafficking and Violence Protection Act of 2000: will it become the thirteenth amendment of the twenty-firsts century. U. Miami Inter-Am. L. Rev., 106–386 (2001) 4. Hughes, D.: Wilberforce can be free again: protecting trafficking victims. National Review Online (2008) 5. Hernández-Álvarez, M.: Detection of possible human trafficking in Twitter. In: International Conference on Information Systems and Software Technologies, pp. 187–191. IEEE (2019) 6. Alvari, H., Shakarian, P., Snyder, J.K.: A non-parametric learning approach to identify online human trafficking. IEEE Conference on Intelligence and Security Informatics, pp. 133–138. IEEE (2016) 7. Dehshibi, M.M., Bastanfard, A.: A new algorithm for age recognition from facial images. Sig. Process. 90(8), 2431–2444 (2010) 8. Alegria, I., Aranberri, N., Comas Umbert, P.R., Fresno, V., Gamallo, P., Padró, L., San Vicente Roncal, I., Turmo Borras, J., Zubiaga, A.: TweetNorm_ES: an annotated corpus for Spanish microtext normalization. In: Proceedings of the Ninth International Conference on Language Resources and Evaluation. European Language Resources Association, pp. 2274– 2278 (2014) 9. Mena, A.P., Mayoral, M.B., Díaz-Lópe, E.: Comparative study of the features used by algorithms based on Viola and Jones face detection algorithm. In: International WorkConference on the Interplay Between Natural and Artificial Computation, pp. 175–183. Springer, Cham (2015)

Text Mining in Smart Cities to Identify Urban Events and Public Service Problems Mario Gonzalez(&), Juan Viana-Barrero, and Patricia Acosta-Vargas Intelligent and Interactive Systems Lab (SI2 Lab), Universidad de Las Américas (UDLA), Quito, Ecuador {mario.gonzalez.rodriguez,juan.viana, patricia.acosta}@udla.edu.ec

Abstract. Cities will form large agglomerations and will be the axis of social, economic, cultural, and artistic human activity. According to the latest United Nations reports, by 2050 cities will concentrate 68% of the world's population; this means that, progressively, the world will no longer be rural and will become urban. Local governments can benefit from ICTs in order to collect data and make smarter decisions and policies for such large cities. Smart Cities will rely on social networks and social mining to analyze the opinions of citizens and their reactions to governors or governmental entities. Currently, social networks such as Twitter provide the opportunity to analyze user interactions and sentiments. We apply a tweet text mining process to identify public service problems and urban events related, for instance, to traffic and security, in a case study of Ecuador's capital Quito and its metropolitan area. The present research allows identifying the inconveniences and problems related to public services and urban events, such as water, electricity, mobility, public transport, and security, at the level of different areas. A temporal and spatial identification of such problems is also carried out.

Keywords: Tweets mining · Urban problematic · Public services · Spatial visualization

1 Introduction

Data has become the center of the smart city and the knowledge economy. The future smart city will combine technologies such as ubiquitous computing and big data mining for the extraction and discovery of essential knowledge in order to improve the quality of life of its citizens. It is clear that metropolitan areas are becoming smart with the increasing availability of, and interest in, data visualization and analysis in areas such as transportation [1–3], air quality [4–6], and the internet of things and social mining [7, 8], to mention a few. Innovative solutions for sustainable transport and mobility, with particular emphasis on metropolitan areas, are required. In this sense, this work presents an application for the visualization and analysis of urban event data, mainly in the areas of security, transportation, traffic, and public services (i.e., electricity and water services).


The collection and pre-processing of the information were carried out using the Python tweepy API [9]. A preprocessing step extracts the relevant information from the JSON returned by the tweepy API and stores the data, after filtering and merging, as CSV files. The extracted data is used as the input of an R Shiny app. Shiny [10] is an open-source R package that provides a robust web framework for building web applications using R [11]; it helps turn analyses into interactive web applications, translating automatically to HTML, CSS, and JavaScript. The Shiny app, which we call Smart-UIO, implements a series of visualization techniques to present the data on the detected urban problems: a spatial and temporal representation is depicted, as well as the main problems in each identified area (security, transportation, traffic, public services). The rest of the paper is organized as follows: Sect. 2 presents the data extraction and preprocessing steps; Sect. 3 depicts the principal results and visualization of the analyzed data; finally, Sect. 4 discusses the main contribution of the paper.

2 Data Extraction and Preprocessing

2.1 Information Collection

The Python tweepy library was used to search for tweets because a large amount of documentation is available and it is easier to code with than other tools on the market. Tweets were then obtained for the five services: drinking water, electricity, mobility, security, and public transport. For each of them, multiple searches are carried out with different mentions, words, or tags. The mentions used are the official accounts of each of the services: @aguadequito, @ElectricaQuito, @AMTQuito, @PoliciaEcuador, @TransporteQuito. In the case of the tags, the Twitter account of each service was consulted to determine which tags are used to communicate or report a problem, for example #AguaDeQuito, #EEQ, or #AMTInforma.
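The following sketch illustrates this kind of collection with tweepy; the credentials are placeholders, the query terms follow the text above, and the search method name depends on the tweepy version (older versions expose api.search, newer ones api.search_tweets), so this is an assumed setup rather than the authors' exact script.

```python
import json
import tweepy

# Placeholder credentials (to be replaced with real API keys)
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Official accounts and hashtags of the five public services
QUERIES = {
    "water": ["@aguadequito", "#AguaDeQuito"],
    "electricity": ["@ElectricaQuito", "#EEQ"],
    "mobility": ["@AMTQuito", "#AMTInforma"],
    "security": ["@PoliciaEcuador"],
    "transport": ["@TransporteQuito"],
}

for service, terms in QUERIES.items():
    for term in terms:
        # tweepy v4 method name; older versions use api.search
        tweets = tweepy.Cursor(api.search_tweets, q=term, lang="es",
                               tweet_mode="extended").items(500)
        with open(f"{service}_{term.strip('@#')}.json", "w", encoding="utf-8") as fh:
            json.dump([t._json for t in tweets], fh, ensure_ascii=False)
```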

2.2 Preprocessing

First, a Python script takes the JSON file that the Twitter API returns as a result; in case there are several search files per service, the script is responsible for joining them. In addition, a process is carried out to filter out retweets: a column is built with the first three letters of the text, and if it contains the letters 'RT', the tweet is considered a retweet, with the true/false result stored in a new column. The column that contains the text is also stored after converting it to Unicode. Finally, within the file there are columns with lists for the following elements of the tweet: mentions, labels, URL addresses, and multimedia files; for each of them a new column is added with the number of items in the list, and a string is created with all the elements that made it up. Once these transformations have been made by the Python script, SQL is used to establish the connection to a Microsoft SQL Server 2016 database.
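A minimal pandas sketch of this filtering and flattening step is shown below; the entity column names follow Twitter's JSON schema, but the retweet-flag logic and list-column handling are only an approximation of the authors' script, and the file names are placeholders.

```python
import json
import pandas as pd

with open("water_raw.json", encoding="utf-8") as f:
    records = json.load(f)                      # possibly the merged search files

df = pd.json_normalize(records)

# Retweet flag from the first three characters of the text, as described above.
text_col = "full_text" if "full_text" in df.columns else "text"
df["is_retweet"] = df[text_col].str[:3].str.contains("RT", na=False)
df[text_col] = df[text_col].astype(str)         # keep the text as a uniform string column

# For each list-valued entity, store the item count and a joined string of its elements.
list_fields = {"mentions": "entities.user_mentions", "hashtags": "entities.hashtags",
               "urls": "entities.urls", "media": "entities.media"}
for name, col in list_fields.items():
    items = df[col] if col in df.columns else pd.Series([[]] * len(df))
    items = items.apply(lambda v: v if isinstance(v, list) else [])
    df[f"n_{name}"] = items.apply(len)
    df[f"{name}_list"] = items.apply(lambda lst: ";".join(
        str(d.get("screen_name") or d.get("text") or d.get("expanded_url") or "")
        for d in lst))

df.to_csv("water_clean.csv", index=False)
```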

After the tweets are stored, SQL scripts are generated to transform the different fields. To start, the date that Twitter returns is in the zero (UTC) time zone, in the following format: “Mon Jan 07 23:55:51 +0000 2019”. To standardize it, 5 h are first subtracted to obtain the time zone of the city of Quito; then, by locating the different characters, the day, month, year, and time are recognized. Finally, the date is constructed from the combination of these fields, and additional columns are added with the name of the day of the week and of the month. Another column to which a transformation is applied is the source: Twitter originally returns it as an HTML anchor tag containing a value such as “Twitter for Android”, and the source name inside the tag is extracted, classified, and transformed into a standard name. A final CSV is stored with all columns merged and tidied.
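The authors perform these transformations with SQL scripts; purely as an illustration, the snippet below reproduces the same date shift and source extraction in Python.

```python
import re
from datetime import datetime, timedelta

created_at = "Mon Jan 07 23:55:51 +0000 2019"   # format returned by Twitter (UTC)
source_html = '<a href="http://twitter.com/download/android">Twitter for Android</a>'

# Parse the UTC timestamp and shift it by -5 h to Quito local time.
utc_time = datetime.strptime(created_at, "%a %b %d %H:%M:%S %z %Y")
local_time = utc_time + timedelta(hours=-5)

day_name = local_time.strftime("%A")     # additional column: name of the weekday
month_name = local_time.strftime("%B")   # additional column: name of the month

# The source name sits inside the <a> tag returned by Twitter.
source = re.sub(r"<[^>]+>", "", source_html).strip()   # -> "Twitter for Android"

print(local_time.isoformat(), day_name, month_name, source)
```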

3 Results

The user interface of the SmartUIO app (https://si2lab-udla.shinyapps.io/smartcityuio_app/), shown in Fig. 1, depicts general information about the collected tweets, namely the daily, weekly, and monthly frequencies, mentions, hashtags, source, and hour of the day. Other sections of the user interface present information extracted from the tweets, namely the number of mentions per neighborhood, parish, and administrative region of the city of Quito and its metropolitan area. From that information, frequencies can be extracted to compare and detect the most problematic areas of the city, as well as the main problems associated with each area, as discussed in the next subsections.

Fig. 1. SmartUIO app user interface with general information from the collected tweets.

3.1 Public Services

Figure 2 shows the information related to public services in the metropolitan area of Quito, namely the water and electricity services. Figure 2 (left) presents detailed information on water service problems by parish. The most problematic parishes, shown in red, are Nayon in the north and Quitumbe in the south. The next most problematic parishes are depicted in orange: the valley region of Tumbaco in the east and the parishes of Guamani and Chillogallo in the south-west. In the center of the city, which is also the wealthiest area, the number of problems is low compared with the aforementioned parishes, which are the least wealthy (Guamani and Chillogallo). Figure 2 (right) shows similar results for the electricity service, now including two more parishes in red (high number of problems), Chimbacalle and La Libertad. These two parishes lie south of the historic downtown, where old infrastructure may be the cause of the problems.

3.2 Transportation and Security

Figure 3 depicts private transportation (left), public transportation (center), and security (right) problems. The left and center panels of Fig. 3 show a similar pattern of transportation problems, associated with the traffic going in and out of the valley regions and the north and south parts of the city, which are satellite (dormitory) areas for people who work in the center of Quito. Such flow is the cause of this pattern of transportation problems. Quito is a very secure city by Latin American standards, as depicted in the right panel of Fig. 3. Again, the most problematic regions of the city are the south and north, for wealth-related reasons, and the valley regions, probably due to the coverage of a more spread-out population.

Fig. 2. Public service issues detected in mined tweets.

Fig. 3. Transportation and security issues detected in mined tweets.

4 Conclusions

Urban agglomerations are becoming complex interconnected environments. In order to effectively reach their citizens and understand their needs, local governments must take advantage of the possibilities that today's information systems for collecting and processing information offer, in order to carry out better policy and decision making for the good of all members of the community. We have presented in this work an example of how social mining from Twitter can give valuable insight into the principal problems that the citizens of Quito face regarding public services, transportation, and security. Albeit a descriptive work at present, it is not difficult to extend the analysis and data collection efforts to areas such as city morphology to study urban distribution, mining transportation flows to map vehicle traffic, and automatic inventories of urban activities from video. Smart cities give the opportunity to collect all these kinds of information; the final goal may be to combine all these sources in order to generate prescriptive models that will help decision-makers make cities more comfortable to live in for all.

Acknowledgments.

This work has been supported by UDLA SIS.MGR.18.02.

References

1. Kalamaras, I., Zamichos, A., Salamanis, A., Drosou, A., Kehagias, D.D., Margaritis, G., Papadopoulos, S., Tzovaras, D.: An interactive visual analytics platform for smart intelligent transportation systems management. IEEE Trans. Intell. Transp. Syst. 19(2), 487–496 (2017)
2. Liu, S., Pu, J., Luo, Q., Qu, H., Ni, L.M., Krishnan, R.: VAIT: a visual analytics system for metropolitan transportation. IEEE Trans. Intell. Transp. Syst. 14(4), 1586–1596 (2013)
3. Tang, T., Kong, X., Li, M., Wang, J., Shen, G., Wang, X.: VISOS: a visual interactive system for spatial-temporal exploring station importance based on subway data. IEEE Access 6, 42131–42141 (2018)
4. Naranjo, R., Almeida, M., Zalakeviciute, R., Rybarczyk, Y., González, M.: AirQ2: Quito air quality monitoring and visualization tool. In: 2019 Sixth International Conference on eDemocracy & eGovernment (ICEDEG), pp. 164–171. IEEE (2019)

5. Pérez-Medina, J.-L., Zalakeviciute, R., Rybarczyk, Y., González, M.: Evaluation of the usability of a mobile application for public air quality information. In: International Conference on Applied Human Factors and Ergonomics, pp. 451–462. Springer, Cham (2019)
6. Hernandez, W., Mendez, A., Diaz-Marquez, A.M., Zalakevic, R.: Robust analysis of PM2.5 concentration measurements in the Ecuadorian Park La Carolina. Sensors 19(21), 4648 (2019)
7. Anastasi, G., Antonelli, M., Bechini, A., Brienza, S., D'Andrea, E., De Guglielmo, D., Ducange, P., Lazzerini, B., Marcelloni, F., Segatori, A.: Urban and social sensing for sustainable mobility in smart cities. In: 2013 Sustainable Internet and ICT for Sustainability (SustainIT), pp. 1–4. IEEE (2013)
8. Vakali, A., Anthopoulos, L., Krco, S.: Smart cities data streams integration: experimenting with Internet of Things and social data flows. In: Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS 2014), p. 60. ACM (2014)
9. Roesslein, J.: Tweepy Documentation (2019). https://tweepy.readthedocs.io/en/latest/
10. Chang, W., Cheng, J., Allaire, J.J., Xie, Y., McPherson, J.: Shiny: Web Application Framework for R. R package version 1.3.2 (2019). https://CRAN.R-project.org/package=shiny
11. R Core Team: R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria (2019). https://www.R-project.org/

Deep Learning-Based Creative Intention Understanding and Color Suggestions for Illustration

Xiaohua Sun and Juexiao Qin

Center of Digital Innovation, Tongji University, Fuxin Road 281, Yangpu District, Shanghai 200082, China
[email protected], [email protected]

Abstract. With the gradual maturation of deep learning and model training, machine learning is increasingly used in image processing, including style transfer, image repair, and image generation. In these studies, although artificial intelligence can accurately reproduce different artistic styles and generate realistic images, illustrators' creative experiences were ignored: the machine replaces almost all of the human's work in these applications. But the human desire for the painting experience and for painting ability will not disappear. We believe that, by learning the creative intentions of illustrators, a machine can make suggestions for illustrators' problems and help them improve their capabilities. This will be a more harmonious direction of cooperation between humans and artificial intelligence in the illustration field. This paper takes color suggestion as an example. We analyze the difficulties and needs of illustrators when they color their paintings, and we propose a method that uses machine learning to assist illustrators in improving their coloring ability. Based on the colors of the works input by illustrators, it optimizes the choice of colors and the arrangement and proportion of different colors on the canvas, helping illustrators visually understand their weaknesses in coloring and directions for improvement.

Keywords: Human machine cooperation · Deep learning · Illustration · Color suggestion

1 Introduction

With the gradual maturation of deep learning algorithms and model training, especially CNNs (convolutional neural networks) and GANs (generative adversarial networks), developers are increasingly applying machine learning to the image field. In many experiments and applications, developers have demonstrated to the public the powerful imitation capabilities and high efficiency of machines. We can foresee that, in the future, most of the tedious and repetitive tasks will be taken over by machines, just as the canvas and pigment of the past were replaced by today's various convenient drawing software. However, for illustrators, who are always imaginative and creative, the process of creation and the aesthetic experience are more important. Therefore, illustrators will constantly accumulate knowledge and improve their skills so as to obtain higher creative

capabilities. If the application of machine learning in the art field blindly develops in the direction of replacing humans, it will not be accepted by creators. In contrast, if machine learning can be used to help illustrators recognize their shortcomings and give reasonable suggestions during their learning and practice, such artificial intelligence tools will be more readily accepted by illustrators. For self-learners and enthusiasts who are not art majors, intuitive and accessible guidance is needed. In traditional art education, the study of painting relies on the accumulation of a large amount of observation, imitation, and practice. In this long process, mentors point out the students' mistakes and correct the students' practice content and learning direction in a timely manner. Otherwise, even if students know that their work is not good enough, it is difficult for them to identify the source of the problems; this is the status quo currently faced by most self-learners. Tara Geer has explained why people encounter obstacles in painting: humans use visual information very economically; when observing objects, they abstractly classify the things they see, extract the most representative features, and always use their own understanding to build the look of the world [1]. Justin Ostrofsky believes that drawing ability is affected by two factors: 1) the choice of visual information during the drawing process; and 2) the degree to which the visual system enhances the selected visual information and suppresses the ignored visual information [2]. We believe that machine learning can help people realize that they have problems with choosing visual information in the process of drawing and can give intuitive visual suggestions for their problems. This requires the neural network model to understand, at the same time, the appearance of excellent works and the creative intention of a particular illustrator in their work. In this paper, we choose color suggestion as an example. We built an image dataset that expresses only color domain and color matching information, and we trained a conditional GAN to learn not only how to choose colors but also the arrangement and proportion of different colors on a canvas. Our goal is to make the model optimize the colors of a work input by an illustrator, helping the illustrator understand how to improve their coloring.

2 Related Work

There has been a large amount of research and experimentation on machine learning in the field of coloring. Currently, the main directions are colorization, color palette generation, and color transfer. Colorization is the method of converting a grayscale image to a fully colored image. The most common application is coloring based on edge information. Huang used a reliable edge detection method that was quick to execute while also being accurate enough to prevent color bleeding [3]. Gupta et al. presented an example-based method to colorize a gray image [4]. Kevin Frans built an artificial intelligence tool to automatically color line drawings [5]. The above-mentioned methods produce some very good results; however, they all chose to replace the work of illustrators with the machine. People using these tools can indeed receive high-quality images, but they lose the opportunity to improve their own capabilities.

Color palette generation extracts the main color combinations from photographs or excellent art works. Zhao et al. developed a method to generate a color palette with multiple colors from a color image described in a color space [6]. Delon et al. presented a new method for the automatic construction of a color palette, which dynamically adjusts its number of colors according to the visual content of the image [7]. Color transfer is mainly used to modify the color composition of the original image according to a certain standard. Zhang et al. presented an approach to edit the colors of an image by adjusting a compact color palette [8]. Tan et al. abstracted relationships between palette colors as a compact set of axes describing harmonic templates over perceptually uniform color wheels, providing a basis for a variety of color-aware image operations, such as color harmonization and color transfer [9]. Among the research mentioned above, color palette generation gives people some suggestions for color choices, but in the actual creative process people still face the problem of how to arrange these colors on the canvas. Color transfer modifies the input images so that people can see a better color composition, but the differences between input and output contain too much visual information; the machine only gives a result, not an approach. Our approach focuses on the main factors of color composition: the model learns the choice, arrangement, and proportion of colors. In order to focus on color matching, we omit most of the details in the image and only pay attention to the main color blocks. This also reduces the difficulty humans face in dealing with large amounts of visual information.

3 Implementation

3.1 Building the Dataset of Color Composition

We set up the dataset with portrait illustrations as the objects. The data were mainly obtained from open art communities using crawlers. For the needs of training, we uniformly resized the data to 128 px × 128 px, with the color mode in RGB format. To ensure that the images do not contain too much detailed information, we use the K-means algorithm to merge excessive color information in each image, unifying the pixels belonging to the same cluster into the same color, so that each image in the dataset contains no more than 8 colors (Fig. 1).
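A minimal sketch of this preparation step is given below, assuming scikit-learn's K-means and Pillow for image handling; the file names and K-means settings are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def quantize(path, n_colors=8, size=(128, 128)):
    """Resize an illustration and reduce it to at most n_colors with K-means."""
    img = Image.open(path).convert("RGB").resize(size)
    pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    # Replace every pixel by the centre of its cluster.
    flat = km.cluster_centers_[km.labels_].reshape(size[1], size[0], 3)
    return Image.fromarray(flat.astype(np.uint8)), km

quantized_img, km = quantize("portrait_0001.png")    # hypothetical file name
quantized_img.save("portrait_0001_8colors.png")
```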

3.2 Framework

We build a conditional generative adversarial neural network. In our training process, the generator needs to generate new images by retaining the main colors and modifying the remaining colors and their distribution areas based on the color distribution of the input images. We use the dataset of color compositions and the images generated by the generator to train the discriminator, which needs to learn the distribution features of colors that represent harmonious color matching. These features include the relative position of the colors on the color wheel, the proportion of different colors in a good color composition, and the position of different proportions of colors in the images. We update the generator every time the discriminator has been trained for a certain number of epochs. Our goal is that, finally, the generator can adjust the color distribution of user input images by combining aesthetic standards and the user's creative intentions. At the same time, the adjusted color distribution features are presented to the user visually (Fig. 2).

Fig. 1. Some samples from the dataset of color composition. Every image was cropped to a uniform size of 128 px × 128 px, and the variety of colors in each image was reduced to 8 or fewer by the K-means algorithm.

Fig. 2. The framework of our tool. Using images from dataset or generator to train the discriminator.
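To make the alternating schedule concrete, the following is a highly simplified PyTorch sketch of a conditional-GAN-style training loop in which the discriminator is trained for several epochs before each generator update. The toy layer stacks, the update ratio k, and the random stand-in for the color-composition dataset are assumptions for illustration only, not the authors' model.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-ins: the generator maps an input colour layout to an adjusted layout,
# and the discriminator scores a layout as "harmonious" (real) or not (fake).
G = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid()).to(device)
D = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 4, stride=2, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten()).to(device)

bce = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def batch_of_layouts(n=16):          # placeholder for the colour-composition dataset
    return torch.rand(n, 3, 128, 128, device=device)

k = 5                                # discriminator epochs per generator update (assumed)
for step in range(100):
    for _ in range(k):               # 1) train D on dataset layouts vs. generated layouts
        real, cond = batch_of_layouts(), batch_of_layouts()
        fake = G(cond).detach()
        loss_D = bce(D(real), torch.ones(real.size(0), 1, device=device)) + \
                 bce(D(fake), torch.zeros(fake.size(0), 1, device=device))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    cond = batch_of_layouts()        # 2) update G so its adjusted layouts fool D
    loss_G = bce(D(G(cond)), torch.ones(cond.size(0), 1, device=device))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```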

3.3 Understanding the Creative Intention of Users

We believe that the color palette of the images input by the user contains the user's creative intention. In this attempt, we used two methods to extract it. The first is to use the K-means algorithm as a color filter to reduce the color variety to 8 or fewer, retaining the colors whose share exceeds a threshold as seed colors; the generator will replace or adjust the other colors based on the seed colors. The second is to let the user select a small set of colors in the input image as seed colors; the color filter then retains the seed colors as part of the clusters used for filtering, and the generator will replace or adjust the other colors based on the user-selected colors.
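A small sketch of the first method is shown below; it reuses a fitted K-means color filter (such as the one sketched in Sect. 3.1) and keeps the clusters whose pixel share exceeds a threshold as seed colors. The threshold value is an assumption, not the authors' setting.

```python
import numpy as np

def seed_colors(km, n_pixels, share_threshold=0.15):
    """km: a fitted K-means colour filter; keep clusters above the pixel-share threshold."""
    counts = np.bincount(km.labels_, minlength=km.n_clusters)
    shares = counts / float(n_pixels)
    keep = shares >= share_threshold            # clusters large enough to carry intent
    return km.cluster_centers_[keep].astype(np.uint8), shares[keep]

# With the quantizer sketched in Sect. 3.1:
# _, km = quantize("user_input.png")
# seeds, shares = seed_colors(km, n_pixels=128 * 128)
```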

3.4 Experiment

What concerns us at this stage is whether the generator can learn the user's creative intention from the color matching of the input image and output an adjusted image that follows high color aesthetic standards. In the experiment, we gathered some works from self-learners of illustration and input them into our generator after consolidating their coloring. As with the images in the dataset, we need to reduce the detailed information of the input images. Figure 3 shows part of the test results. In addition to the adjusted image, the color proportions are visualized for the users.

Fig. 3. The process from input images to generated images, including color filtering, seed color extraction, and optimization of the color composition.

4 Discussion

Our tool optimizes the color composition of the input images to a certain degree, but it does not fully understand the user's creative intentions. The reason is that the colors retained in the input image cannot accurately represent the user's intention, and the user's own choices of colors do not exactly match their intentions. This is also caused by the cognitive bias in the painting process. In order to solve this problem, the input of semantic information will be considered in the future to help the machine grasp the user's intention more accurately. We received feedback that users would like operations that allow them to further and iteratively adjust the optimization results after they receive the adjusted images. In the future, we will consider making our tool more interactive, for example, by adding hue, lightness, and saturation adjustment sliders to a color block, or by adding color picking and color selection functions. The user's interactive operations on the optimization result can then serve as important parameters when the generator learns the user's intention. We will try to make our tool more user-friendly and usable in the next stage. Our tool helps users to intuitively see how modifying the color matching will make their art works more harmonious, but we cannot yet prove whether users can improve their coloring ability by relying on our tool. That requires longer-term testing and observation with more samples, and we need to uncover more information, and more forms of presenting it, that will be helpful to users in learning and practicing.

5 Conclusion

In order to make use of the advantages of both parties in the cooperation between humans and machines, we have designed a color suggestion tool that uses a conditional GAN to understand the creative intentions of users. Because of the particularity of artistic creation, we take the creator's creative intention and desire to improve their creative ability as the first consideration. For this reason, we did not choose to maximize the computing power of the machine, but instead reduced the amount of data that needed to be processed. Our goal is not to output images close to real photos, but to help self-learners and enthusiasts of illustration see the visual effects of different color combinations more intuitively. By training and testing the model, we find that our method is feasible. In future work, we will continue our study in three directions: 1) train the model to learn the relationship between color composition and semantic information, and use the semantic information as one basis for the generator to understand the user's creative intention; 2) add more interactive operations, allowing users to modify the optimized images given by the generator, so as to iteratively produce a color composition more in line with the user's intention; and 3) design an experiment with a larger sample size and a longer period to determine whether our tool has a positive impact on the user's coloring capability on a longer time scale.

Acknowledgements. During the design, development, and testing of this research, we received extensive technical and hardware support from the Center of Digital Innovation of Tongji University. We are also grateful to the volunteers who provided their works to help us complete the test.

References

1. Geer, T.: What we illustrate when we draw: normative visual processing in beginner drawings, and the capacity to observe detail. In: Thinking Through Drawing: Practice into Knowledge, p. 45 (2011)
2. Ostrofsky, J., Kozbelt, A.: A multi-stage attention hypothesis of drawing ability. In: Thinking Through Drawing: Practice into Knowledge: Proceedings of an Interdisciplinary Symposium on Drawing, Cognition and Education (2011)
3. Huang, Y.-C., et al.: An adaptive edge detection based colorization algorithm and its applications. In: Proceedings of the 13th Annual ACM International Conference on Multimedia (2005)
4. Gupta, R.K., et al.: Image colorization using similar images. In: Proceedings of the 20th ACM International Conference on Multimedia (2012)
5. Frans, K.: Outline colorization through tandem adversarial networks. arXiv preprint arXiv:1704.08834 (2017)
6. Wijffelaars, M., et al.: Generating color palettes using intuitive parameters. Computer Graphics Forum, vol. 27, no. 3. Blackwell Publishing Ltd., Oxford (2008)
7. Delon, J., et al.: Automatic color palette. In: IEEE International Conference on Image Processing 2005, vol. 2. IEEE (2005)
8. Zhang, Q., et al.: Palette-based image recoloring using color decomposition optimization. IEEE Trans. Image Process. 26(4), 1952–1964 (2017)
9. Tan, J., Echevarria, J., Gingold, Y.: Palette-based image decomposition, harmonization, and color transfer. arXiv preprint arXiv:1804.01225 (2018)

Comparison of Probability Distributions for Evolving Artificial Neural Networks Using Bat Algorithm

Adeel Shahzad, Hafiz Tayyab Rauf, Tayyaba Asghar, and Umar Hayat

Department of Computer Science, University of Gujrat, Gujrat, Punjab, Pakistan
[email protected]

Abstract. In continuous engineering, predictive models built on Artificial Neural Networks (ANN) have been broadly applied to data classification and prediction. Commonly, Neural Networks (NN) have been trained with the backpropagation algorithm, which is considered the traditional approach. In optimization problems, the Bat Algorithm (BA) has been extensively combined with ANN to tackle different barriers. One of the prominent issues is poor population initialization when retrieving the initial weights for each neuron in the ANN; a strong pattern for initializing the search vectors can improve overall algorithm performance. In this article, we propose novel variants of the NN structure with BA probability distributions. The proposed initialization methods are composed of a Gamma distribution (G-BAT-NN), an Exponential distribution (E-BAT-NN), a Beta distribution (B-BAT-NN), and finally a Weibull distribution (W-BAT-NN). We implemented the proposed techniques for the classification of feed-forward neural networks using BA and suggest the enhanced versions of BA for ANN classification. We verified the results on 8 real-world data sets taken from the UCI repository for the classification problem.

Keywords: Neural networks · Probability distributions · Back propagation

1 Introduction

For a given set of conditions, optimization provides several solutions to complex structured and unstructured problems in the fields of engineering and technology. Optimization refers to minimizing or maximizing the cost, profit, and other related measures that depend on the dimensions of the problem. The primary aim of an optimization process is to develop a sound basis for solving an optimization problem by minimizing the cost or maximizing the profit. During the decision process, each problem requires complicated independent variables and an objective function, which is analyzed during the optimization process [1]. For solving artificial intelligence problems of classification, clustering, and optimization, stochastic algorithms of the meta-heuristic kind [2] have been


employed. In the area of optimization, there are two major families of stochastic algorithms, the evolutionary family and the swarm family, which play a significant role in obtaining the best optimal solutions. Evolutionary-based algorithms include Differential Evolution (DE) [3] and the Genetic Algorithm (GA) [4], while swarm-based algorithms include Particle Swarm Optimization (PSO) [5], Cuckoo Search (CS) [6], Harmony Search (HS) [7], and the Bat Algorithm (BA) [8]. Swarm-based algorithms exploit the collective behavior of the entire population to determine the globally best solution [9]. The fundamental objective of complex and constrained real-world data classification is to develop an optimal separation procedure in which each property of a particular attribute depends simultaneously on the other data attributes [10]. The most common real-world applications of data classification are pattern recognition, bankruptcy prediction, quality control, and medical diagnosis in the fields of economics, medicine, and industry. ANN is the most famous learning architecture used to solve complex prediction-based applied tasks [11]. The ANN computational paradigm is considered the leading universal predictor; it has unique capabilities such as discovering, adjusting, coordinating, and generalizing over huge amounts of data and interpreting useful knowledge from them. For information processing, an ANN is a collection of interlinked units called neurons that are organized in layers following a computational pattern [12]. Several algorithms have been proposed to train artificial neural networks. One of the primary algorithms applied to the weight optimization of ANN is the back-propagation algorithm. The purpose of training an ANN with back-propagation is to adjust the weights so that the sample input data are mapped to the required output data within particular simulation constraints [13]. The Bat Algorithm (BA) belongs to the swarm meta-heuristic family and was introduced by Yang [8] in 2010. BA is based on the echolocation phenomenon of bats and is inspired by their collective behavior: bats search for prey or food by using their echolocation ability, which provides signals indicating new search areas in the search space. Premature convergence is considered the foremost problem in data classification based on nature-inspired algorithms [14–16]. In this paper, weight optimization for data classification with a feed-forward neural network is performed on 8 standard data sets taken from the world-famous UCI repository. The proposed algorithm for NN classification is compared with the standard BAT-NN, the back-propagation algorithm (BPA), B-BAT-NN, E-BAT-NN, G-BAT-NN, and W-BAT-NN.

2 Literature Review

Evolving ANN has an important impact in both areas of evolutionary computation (EC) and artificial neural networks (ANN). Among several data mining approaches, ANN has been used as a global predictor in many real-world applications such as defect classification, industrial automation, and database integration. Similarly, in medical imaging and medical diagnosis, applied ANN approaches have been used for EEG, ECG, breast cancer detection, and the diagnosis of dermatological diseases [17]. In [18], a PSO-NN architecture was implemented to ensure accuracy in terms of better weight optimization. The weights of the PSO-NN architecture were initialized with a pseudo-random number generator following the uniform distribution. The proposed PSO-NN was tested against 13 benchmark data sets and gave better results. Subasi [19] classified EMG signals by combining PSO, initialized at random following the uniform distribution, with a support vector machine (SVM) [20]. Specifically for BA, the authors developed a new variant of BA called I-BAT by introducing the Torus distribution into the initialization procedure of BA. They compared I-BAT with the standard BA and the standard PSO, and their investigation demonstrated that the proposed I-BAT is mature enough to produce better outcomes than the standard BA. Low-discrepancy sequences such as the Sobol, Faure, and Halton distributions have also been used to initialize neural network training for several data-mining problems, and the preliminary outcomes revealed that low-discrepancy-sequence-based population initialization is more effective and efficient in terms of optimal weight optimization. Training of artificial neural networks with a feed-forward architecture was performed using BPA, GA, and PSO by the authors in [21]. For testing the introduced versions of the algorithm, three famous data sets were taken from the medical category of the UCI repository: breast cancer, proben1, and diabetes. The stated variant of the PSO neural network gave excellent performance compared with a couple of different classical techniques.

3 Standard Bat Algorithm

Algorithm 1 comprises the pseudo code of the standard BA.
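Algorithm 1 itself is not reproduced in this excerpt; as a reference point, the following is a compact Python sketch of the standard BA as described by Yang [8] for minimizing a generic objective function. The parameter values (population size, frequency range, α, γ) are illustrative defaults, not the authors' settings.

```python
import numpy as np

def bat_algorithm(obj, dim=10, n_bats=30, n_iter=500, f_min=0.0, f_max=2.0,
                  alpha=0.9, gamma=0.9, lower=-5.0, upper=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_bats, dim))       # positions (search vectors)
    v = np.zeros((n_bats, dim))                        # velocities
    A = np.ones(n_bats)                                # loudness
    r0 = rng.uniform(0.0, 1.0, n_bats)                 # initial pulse emission rates
    r = r0.copy()
    fit = np.array([obj(xi) for xi in x])
    best_idx = fit.argmin()
    best, best_fit = x[best_idx].copy(), fit[best_idx]

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()        # frequency tuning
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lower, upper)
            if rng.random() > r[i]:                              # local search near the best bat
                cand = np.clip(best + 0.01 * A.mean() * rng.standard_normal(dim),
                               lower, upper)
            f_cand = obj(cand)
            if rng.random() < A[i] and f_cand <= fit[i]:         # accept the new solution
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                                    # loudness decreases
                r[i] = r0[i] * (1.0 - np.exp(-gamma * t))        # pulse rate increases
            if f_cand <= best_fit:                               # track the global best
                best, best_fit = cand.copy(), f_cand
    return best, best_fit

# Example: minimize the sphere function.
# best, value = bat_algorithm(lambda z: float(np.sum(z ** 2)))
```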

4 Methodology

The proposed distribution-based initialization variants are used to train feed-forward neural networks for data classification and are compared with the standard BAT-NN, the back-propagation algorithm (BPA), B-BAT-NN, E-BAT-NN, G-BAT-NN, and W-BAT-NN.

The pseudo code for the proposed Algorithm can be found in Algorithm 2.
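Algorithm 2 is likewise not reproduced here; the sketch below only illustrates the core idea of the proposed variants, namely seeding the BA population (candidate ANN weight vectors) from different probability distributions instead of the uniform distribution. The shape and scale parameters of each distribution, and the rescaling to a weight range, are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def init_population(n_bats, n_weights, dist="weibull",
                    lower=-1.0, upper=1.0, seed=0):
    """Return an (n_bats, n_weights) matrix of initial ANN weight vectors."""
    rng = np.random.default_rng(seed)
    if dist == "beta":
        u = rng.beta(2.0, 2.0, (n_bats, n_weights))          # B-BAT-NN
    elif dist == "gamma":
        u = rng.gamma(2.0, 2.0, (n_bats, n_weights))         # G-BAT-NN
    elif dist == "exponential":
        u = rng.exponential(1.0, (n_bats, n_weights))        # E-BAT-NN
    elif dist == "weibull":
        u = rng.weibull(1.5, (n_bats, n_weights))            # W-BAT-NN
    else:
        u = rng.random((n_bats, n_weights))                  # standard BAT-NN (uniform)
    # Rescale the samples to the weight range used by the feed-forward network.
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)
    return lower + u * (upper - lower)

# e.g. weights of a 13-5-2 network for the Heart data set: 13*5 + 5 + 5*2 + 2 = 82
population = init_population(n_bats=30, n_weights=82, dist="gamma")
```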

5 Results and Discussion

All the proposed methods and the standard BA were implemented in the C++ programming language using Visual Studio 2013. The machine used for the trials has an Intel Core(TM) i3-4010U CPU at 1.70 GHz. This section presents the results for the weight optimization of feed-forward neural networks employing the Weibull distribution (W-BAT-NN), Gamma distribution (G-BAT-NN), Exponential distribution (E-BAT-NN), and Beta distribution (B-BAT-NN). These approaches are compared with the standard back-propagation algorithm in order to assess the merit of the recommended techniques.

5.1 Data Classification Experiment

We conducted the training of feed-forward neural networks on 8 real-world data sets taken from the world-famous UCI machine learning repository. In order to evaluate the suggested distribution patterns, we compared the proposed techniques with the standard BPA. The description of the 8 benchmark data sets along with their properties is presented in Table 1, while Table 2 contains the accuracy results for the comparison of the Weibull distribution (W-BAT-NN), Gamma distribution (G-BAT-NN), Exponential distribution (E-BAT-NN), and Beta distribution (B-BAT-NN).

Table 1. Description of 8 standard benchmark data sets

 S#  Data set      Features             No. of inputs  No. of classes  Nature
                   Continuous  Disc.
 1   Heart         13          –        13             2               Real
 2   Blood tissue  5           –        5              2               Real
 3   Wine          13          –        13             3               Real
 4   Horse         27          –        27             2               Real
 5   Vertebral     6           –        6              2               Real
 6   Diabetes      8           –        8              2               Real
 7   Iris          4           –        4              3               Real
 8   Seed          7           –        7              3               Real

Table 2. % Accuracy results for the classification of ANN using the proposed techniques for data classification (Tr. Acc = training accuracy, Ts. Acc = testing accuracy)

 S#  Data set      Type     BAT-NN            BPA              B-BAT-NN          E-BAT-NN          G-BAT-NN          W-BAT-NN
                            Tr. Acc  Ts. Acc  Tr. Acc Ts. Acc  Tr. Acc  Ts. Acc  Tr. Acc  Ts. Acc  Tr. Acc  Ts. Acc  Tr. Acc  Ts. Acc
 1   Heart         2-Class  79.02%   55%      64%     58%      77.97%   59.5%    76.27%   58.5%    72.69%   60%      74.63%   62%
 2   Blood tissue  2-Class  94.52%   91.15%   91%     85%      97.10%   93.85%   99.11%   93.85%   93.12%   93.85%   98.19%   95.14%
 3   Wine          3-Class  73.16%   63.22%   67%     62%      73.16%   67.14%   86.12%   69.17%   84.23%   68.14%   89.28%   73.52%
 4   Horse         2-Class  83.79%   70.10%   86%     65%      84.7%    73.29%   90.90%   73.39%   83.33%   72.91%   94.85%   76.30%
 5   Vertebral     2-Class  89.38%   75.62%   76%     73%      82.14%   76%      73.23%   77.59%   82.20%   77.33%   89.62%   80.99%
 6   Diabetes      2-Class  96.14%   76.73%   84%     71%      99.23%   78.15%   97.52%   83.15%   97.53%   84.19%   99.55%   85.20%
 7   Iris          3-Class  98.43%   73.49%   79%     68%      99.24%   76.53%   72.14%   77.24%   95.43%   77.35%   97.50%   82.33%
 8   Seed          3-Class  98.33%   96.60%   98%     94%      93.93%   97.96%   97.98%   97.33%   96.98%   97.33%   98.34%   98.3%

6 Conclusion

This research work offers four strategies, based on probability distributions, for generating the starting composition of the Bat algorithm in a multi-dimensional search space. The validity and strength of the recommended methods are examined through a fair comparison. We verified the proposed

approaches for the optimization of feed-forward neural networks with BA. The swarm heterogeneity is also improved for ANN classification with B-BAT-NN, E-BAT-NN, G-BAT-NN, and W-BAT-NN. The results obtained from the comparison of B-BAT-NN, E-BAT-NN, G-BAT-NN, and W-BAT-NN show that W-BAT-NN is superior to the other techniques.

References

1. Yang, X.: Nature-Inspired Metaheuristic Algorithms. Luniver Press, Bristol (2010)
2. Noel, M.M., Noel, M.: A new gradient based particle swarm optimization algorithm for accurate computation of global minimum. Appl. Soft Comput. 12, 353–359 (2012)
3. Das, S., Suganthan, P.N.: Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15(1), 4–31 (2011)
4. Davis, L.: Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York (1991)
5. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the 1995 IEEE International Conference on Neural Networks (1995)
6. Yang, X.S., Deb, S.: Cuckoo search via Lévy flights. In: World Congress on Nature & Biologically Inspired Computing, NaBIC 2009 (2009)
7. Geem, W.Z., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. Simulation 76(2), 60–68 (2001)
8. Yang, X.S.: A new metaheuristic bat-inspired algorithm. In: González, J.R., Pelta, D.A., Cruz, C., Terrazas, G., Krasnogor, N. (eds.) Nature Inspired Cooperative Strategies for Optimization (NICSO 2010). Springer, Heidelberg (2010)
9. Gandomi, A.H., Yang, X.-S., Talatahari, S., Alavi, A.H.: Metaheuristic algorithms in modeling and optimization. In: Metaheuristic Applications in Structures and Infrastructures, pp. 1–24 (2014)
10. Zhang, G.P.: Neural networks for classification: a survey. IEEE Trans. Syst. Man Cybern. 30(4), 451–462 (2000)
11. Elshorbagy, A., Corzo, G., Srinivasulu, S.: Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology – part 2: application. Hydrol. Earth Syst. Sci. 14(10), 1943–1961 (2010)
12. Wongseree, W., Chaiyaratana, N., Vichittumaros, K., Winichagoon, P., Fucharoen, S.: Thalassaemia classification by neural networks and genetic programming. Inf. Sci. 177(3), 771–786 (2007)
13. Al-kazemi, B., Mohan, C.K.: Training feedforward neural networks using multi-phase particle swarm optimization. In: Proceedings of the 9th International IEEE Conference on Neural Information Processing, ICONIP 2002 (2002)
14. Nawi, N.M., Rehma, M.Z., Khan, A., Chiroma, H., Herawan, T.: A modified Bat algorithm based on Gaussian distribution for solving optimization problem. J. Comput. Theor. Nanosci. 13, 706–714 (2016)
15. Kora, P., Kalva, S.R.: Improved Bat algorithm for the detection of myocardial infarction. SpringerPlus 4, 666 (2015)
16. Thangaraj, R., Pant, M., Deep, K.: Initializing PSO with probability distributions and low-discrepancy sequences: the comparative results. In: World Congress on Nature & Biologically Inspired Computing, NaBIC 2009 (2009)
17. Yan, H., Jiang, Y., Zheng, J., et al.: A multilayer perceptron-based medical decision support system for heart disease diagnosis. Expert Syst. Appl. 30(2), 272–281 (2006)

18. De Falco, I., Della Cioppa, A., Tarantino, E.: Facing classification problems with particle swarm optimization. Appl. Soft Comput. 7(3), 652–658 (2007)
19. Subasi, A.: Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders. Comput. Biol. Med. 43(5), 576–586 (2013)
20. Bangyal, W.H., Ahmad, J., Rauf, H.T., Pervaiz, S.: An improved Bat algorithm based on novel initialization technique for global optimization problem. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 9(7), 158–166 (2018)
21. Kiranyaz, S., Ince, T., Yildirim, A., et al.: Evolutionary artificial neural networks by multidimensional particle swarm optimization. Neural Netw. 22(10), 1448–1462 (2009)

Deep Neural Networks for Grid-Based Elusive Crime Prediction Using a Private Dataset Obtained from Japanese Municipalities

Suguru Kanoga¹, Naruki Kawai², and Kota Takaoka¹

¹ National Institute of Advanced Industrial Science and Technology (AIST), 2-4-7 Aomi, Koto-ku, Tokyo, Japan
[email protected]
² Moly Inc., 1-4-1 Kasumigaseki, Chiyoda-ku, Tokyo, Japan

Abstract. People have the potential to become victims of elusive crimes such as stalking and indecent exposure at any time. To prevent such incidents, proposing an elusive crime prediction technique is a challenging task in Japan. This study assesses the efficiency of deep neural networks (DNNs) for grid-based elusive crime prediction using a private dataset obtained from Japanese municipalities that contains three crime categories (stalking, indecent exposure, and suspicious behavior) in five prefectures (Aichi, Fukuoka, Kanagawa, Osaka, and Tokyo) over 20 months (from July 2017 to February 2019). Through an incremental training evaluation method that did not use future information relative to the 1-month testing data, the DNN-based technique using spatio-temporal and geographical information showed significantly superior prediction performance (Mean ± SD%: 88.2 ± 3.0, 85.5 ± 4.5, and 85.8 ± 3.2 for stalking, indecent exposure, and suspicious behavior) to a random forest-based technique (81.9 ± 3.5, 83.3 ± 3.7, and 82.3 ± 2.1).

Keywords: Crime prediction · Deep neural network · Random forest

1 Introduction

Elusive crimes such as stalking and indecent exposure have given rise to public concern in Japan since the 1990s, because people have the potential to become victims at any time and the crimes seriously affect victims, who suffer their physical, mental, and social effects over long periods of time [1]. In addition, there are still many elusive crime incidents perpetrated against women and children in Japan [2], even though the country is said to be a relatively safe one to travel and live in. Thus, proposing an elusive crime prediction technique is a challenging task for protecting people from offenders. To analyze crime data and to predict occurrences of crimes, machine learning algorithms with grid-based spatio-temporal features have been used over the last decade [3, 4]. In 2018, Lin et al. proposed a machine learning-based crime prediction technique [5], which combined deep neural networks (DNNs) with spatio-temporal and geographical information. It showed the best performance for theft incident prediction in Taiwan.

Organizations in Japan do not provide open-access datasets for elusive crime prediction; in other words, we cannot assess the efficiency of the state-of-the-art crime prediction technique. However, municipalities in Japan individually send e-mails and post to SNS every day with text information about local elusive crimes to community participants to increase their wariness of offenders. We assumed that if these text data were collected from municipalities throughout the country and converted into other kinds of handleable, uniform digital data, the dataset would help us assess the efficiency of machine learning-based predictors for elusive crime and promote the introduction of open-access datasets in Japan. This study assesses and explores the performance of the state-of-the-art machine learning-based technique for elusive crime prediction in Japan using a local dataset obtained from municipalities. For this objective, text data covering a 20-month period (from July 2017 to February 2019) were obtained from municipalities throughout Japan and converted to handleable digital data. Three types of elusive crimes (stalking, indecent exposure, and suspicious behavior) in five prefectures (Aichi, Fukuoka, Kanagawa, Osaka, and Tokyo) were extracted from the dataset for the assessment because these prefectures had the greatest number of crime incidents in 2018.

2 Materials and Methods

2.1 Crime Data and Target Areas

Moly Inc. has collected emails from Japanese municipalities for 20 months (from July 2017 to February 2019). The names of hot spots (e.g., Chiyoda-ku, Tokyo) were translated into latitude and longitude through the Azure Maps API of Microsoft Azure. Thus, the original private dataset with crime labels has three-dimensional information: latitude, longitude, and occurrence time. We selected three types of elusive crimes from the dataset: 1) stalking, 2) indecent exposure, and 3) suspicious behavior. This study focuses on five prefectures (Aichi, Fukuoka, Kanagawa, Osaka, and Tokyo) because they were the top five prefectures for criminal incidents up to February 2019. The total number of incidents in these prefectures over the 20 months was about 16,000. Note that for the Fukuoka and Tokyo prefectures we do not include information on small islands such as Hachijojima and Izu Oshima, to avoid generating grids that mostly consist of marine areas.

2.2 Grid Segmentation and Selection

For constructing a grid-based prediction model, we created for each selected prefecture a square whose length and width depend on the endpoints of the prefecture. Following a previous study [5], the squared grid-based space divided the prefecture into 40-by-40 or 80-by-80 grids to produce a grid-based map of the area. In addition, grids in which no incident occurred over the period were removed to avoid creating imbalanced data sets. As an example, the above grid segmentation and selection process for incidents of suspicious behavior in Aichi prefecture, based on 40-by-40 and 80-by-80 grids, is shown in Fig. 1. Among the five prefectures, Aichi is the

largest (106.2 × 94.1 km) when we ignore the small islands belonging to the Fukuoka and Tokyo prefectures. One grid in the 80-by-80 grids of Aichi prefecture covers less than 1.5 × 1.5 km of geospatial information. If we considered the small islands, Tokyo would overwhelmingly be the largest prefecture (1619.9 × 1719.6 km); however, almost all of that area is offshore. Thus, in this study, we do not consider the islands.
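As a sketch of this segmentation step, the snippet below maps each incident's latitude/longitude to a cell of an n-by-n grid spanning the prefecture's bounding box and then identifies the occupied cells; the data-frame column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def assign_grid(df, n=80):
    """Map incidents with 'lat'/'lon' columns to cells of an n-by-n grid."""
    lat_edges = np.linspace(df["lat"].min(), df["lat"].max(), n + 1)
    lon_edges = np.linspace(df["lon"].min(), df["lon"].max(), n + 1)
    row = np.clip(np.digitize(df["lat"], lat_edges) - 1, 0, n - 1)
    col = np.clip(np.digitize(df["lon"], lon_edges) - 1, 0, n - 1)
    out = df.copy()
    out["cell"] = row * n + col
    return out

incidents = pd.DataFrame({"lat": [35.10, 34.95, 35.18], "lon": [136.90, 137.05, 136.92]})
gridded = assign_grid(incidents, n=80)
occupied = gridded["cell"].unique()   # only cells with at least one incident are kept
```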

Fig. 1. Grid segmentation and selection process for suspicious behavior in Aichi prefecture.

2.3 Feature Extraction

Spatio-temporal features (e.g., neighboring grid characteristics [4]) have been used for grid-based crime prediction modelling [6]. To overcome the sensitivity of time and space selection, extracting additional kinds of features, such as a priori knowledge of hot spots [7] and geographical features [5, 8], has recently attracted attention. This paper extracted 12-dimensional neighboring grid characteristics regarding space and time factors, plus 84-dimensional geographical features, which are the distances between landmarks (e.g., schools and hospitals) and a crime occurrence point. The details of the extracted features are described in reference [5]. We separately obtained each geographical feature by applying the Google Places API. Because of the radius limitation of the nearby search in the API (50,000 m), we divided each prefecture into 5 × 5 grids for the search phase only, and then applied a nearby search to each grid. The search radius was 20,000 m from the central point of the grid, and redundant duplicate information from overlapping searches was removed. This search condition required 10,500 calls (84 landmarks × 25 grids × 5 prefectures) to the API. Finally, min/max normalization was applied to the 97-dimensional features (occurrence time + 12 neighboring grid characteristics + 84 geographical features) to unify the units of measurement for each feature.
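As an illustration of the geographical features and the normalization step, the sketch below computes the haversine distance from an incident point to its nearest landmark of a given type and applies column-wise min/max normalization; the landmark coordinates are hypothetical, they would come from the Places API responses, and the exact feature definitions follow reference [5] rather than this sketch.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (inputs in degrees, arrays allowed)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2.0) ** 2 + \
        np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def nearest_landmark_km(point, landmarks):
    """point: (lat, lon); landmarks: array of shape (m, 2) for one landmark type."""
    d = haversine_km(point[0], point[1], landmarks[:, 0], landmarks[:, 1])
    return float(d.min())

def minmax_normalize(X):
    """Column-wise min/max normalization of the feature matrix (e.g. 97 columns)."""
    X = np.asarray(X, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    return (X - X.min(axis=0)) / np.where(span == 0, 1.0, span)

# Example: distance from one incident to the nearest of three hypothetical schools.
schools = np.array([[35.031, 136.906], [35.170, 136.881], [34.954, 137.161]])
d_school = nearest_landmark_km((35.10, 136.95), schools)
```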

2.4 Classifiers

This study compares the prediction performances of two machine learning algorithms: random forests (RF) and DNNs. RFs consist of many classifiers and aggregate their results while adding an additional layer of randomness to bagging [9]. Each predictor is independently generated using a different bootstrap sample of the training data. The RF parameters, the minimum number of samples in a subset and the number of trees in the forest, were set to 1 and 50 using the randomForest function in the H2O package for R. A DNN is a parametric, nonlinear classifier that can arbitrarily approximate continuous nonlinear functions. DNNs have recently been developed in various research fields, such as natural language processing and image processing [10, 11]. The network architectures hold many parameters associated with nonlinear models for solving optimization problems. The hyperparameters in the DNNs are: 1) the activation function; 2) the dropout rate; 3) the number of hidden layers; 4) the number of hidden units; 5) the strength of L1 regularization; and 6) the strength of L2 regularization. They were tuned from: 1) {“Rectifier”, “Tanh”, “Maxout”}; 2) (0, 0.05); 3) (20, 50, 100); 4) (2, 3, 4, 5); 5) {0 : 1.0e−6 : 1.0e−4}; and 6) {0 : 1.0e−6 : 1.0e−4}. We employed a random search approach, which is more efficient for hyperparameter optimization than a grid search: 300 parameter sets were randomly chosen. The loss function and the number of epochs were set to the cross-entropy loss and 1000. If the AUC value does not improve by 1.0e−2 for two scoring events, the model stops the training step. In addition, we used an adaptive learning rate for stochastic gradient descent optimization. These were implemented with the H2O package for R.
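The search itself was run with H2O's random grid search in R; the Python sketch below only illustrates drawing 300 random configurations from the stated hyperparameter space (interpreting the hidden-unit candidates as {20, 50, 100} and the hidden-layer candidates as {2, 3, 4, 5}, consistent with Table 2). The training-and-scoring function is a placeholder, not a real API.

```python
import random

search_space = {
    "activation":    ["Rectifier", "Tanh", "Maxout"],
    "dropout":       [0.0, 0.05],
    "hidden_units":  [20, 50, 100],
    "hidden_layers": [2, 3, 4, 5],
    "l1":            [0.0, 1.0e-6, 1.0e-5, 1.0e-4],   # illustrative grid over 0..1e-4
    "l2":            [0.0, 1.0e-6, 1.0e-5, 1.0e-4],
}

def train_and_score(config, hidden):
    """Placeholder: train a DNN with this configuration and return validation AUC."""
    return random.random()

random.seed(0)
best_config, best_auc = None, -1.0
for _ in range(300):                           # 300 randomly drawn parameter sets
    config = {k: random.choice(v) for k, v in search_space.items()}
    hidden = [config["hidden_units"]] * config["hidden_layers"]   # e.g. [50, 50, 50, 50]
    auc = train_and_score(config, hidden)
    if auc > best_auc:
        best_config, best_auc = config, auc
```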

2.5 Evaluation

To assess the quality of the constructed binary predictors (0: a crime incident will not occur, 1: a crime incident will occur), we employed an incremental training method. Because 12 months of data are necessary for the neighboring grid characteristics, the 7-month data from August 2018 to February 2019 serve as testing data, and the first 13 months of data (from July 2017 to July 2018) were always training data. At each validation step, the training data included all data from the previous months (e.g., the prediction for September 2018 considers the data up to August 2018 for training); in other words, future information was not used for training. A repeated-measures analysis of variance (ANOVA) statistically clarified the relationship among prediction performances for each condition (feature type, grid size, classifier, and crime category). Furthermore, we visualized the predicted hot spots of both classifiers together with the true hot spots for each month and crime category based on Google Maps.
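A schematic of this expanding-window evaluation is sketched below; the month handling matches the periods described above, while the training-and-scoring function is a placeholder for the actual RF/DNN pipeline.

```python
import pandas as pd

# 20 monthly periods from July 2017 to February 2019.
months = pd.period_range("2017-07", "2019-02", freq="M").astype(str).tolist()

def fit_and_score(train_months, test_month):
    """Placeholder: train a predictor on the listed months and return its score
    (e.g. AUC or accuracy) on the held-out test month."""
    return 0.0

scores = {}
first_test = months.index("2018-08")       # the first 13 months are always training data
for i in range(first_test, len(months)):
    train_months = months[:i]              # all months strictly before the test month
    test_month = months[i]
    scores[test_month] = fit_and_score(train_months, test_month)
```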

3 Results and Discussion

3.1 Prediction Performances

Average prediction performances of RFs and DNNs for each grid size and crime category are shown in Table 1. Under all conditions, adding geographical features

improved prediction performances. A four-way repeated-measures ANOVA on the averaged prediction performances of two feature sets (only spatio-temporal features and all features), two machine learning algorithms (RFs and DNNs), three crime categories (stalking, indecent exposure, and suspicious behavior), and two grid sizes (40 × 40 and 80 × 80) showed statistically significant main effects of feature sets (F(1, 23) = 173.58, p < 0.001) and machine learning algorithms (F(1, 23) = 60.79, p < 0.001), but did not show significant main effects of crime categories (F(2, 23) = 2.19, p = 0.1684) or grid sizes (F(1, 23) = 1.71, p = 0.223). In addition, there were significant interactions between feature sets and machine learning algorithms (F(1, 23) = 12.54, p = 0.0063), feature sets and crime categories (F(2, 23) = 7.08, p = 0.0142), feature sets and grid sizes (F(1, 23) = 42.7, p < 0.001), and crime categories and grid sizes (F(2, 23) = 4.32, p = 0.0483).

Table 1. Average prediction performances (%) of the two machine learning algorithms for each grid size and crime category (upper value: using spatio-temporal and geographical features; value in brackets: using only spatio-temporal features).

      Stalking                        Indecent exposure               Suspicious behavior
      40 × 40        80 × 80          40 × 40        80 × 80          40 × 40        80 × 80
 RF   80.7 ± 3.9     81.9 ± 3.5       77.8 ± 7.0     83.3 ± 3.7       77.4 ± 6.8     82.3 ± 2.1
      (69.8 ± 6.1)   (66.1 ± 7.7)     (73.2 ± 9.0)   (70.2 ± 9.0)     (71.8 ± 5.2)   (67.6 ± 2.9)
 DNN  83.4 ± 4.7     88.2 ± 3.0       78.4 ± 7.8     85.5 ± 4.5       78.7 ± 8.6     85.8 ± 3.2
      (80.3 ± 7.1)   (72.1 ± 7.9)     (78.1 ± 9.1)   (81.2 ± 9.7)     (77.5 ± 8.7)   (73.4 ± 15.8)

Table 2. Best parameters in DNNs for each condition.

                          Spatio-temporal                   Spatio-temporal + geographical
                          40 × 40         80 × 80           40 × 40         80 × 80
 Activation function      Rectifier       Rectifier         Rectifier       Rectifier
 Dropout rate             0               0                 0.05            0.05
 Hidden layers and units  50, 50, 50, 50  50, 50, 50, 50    100, 100, 100   100, 100, 100
 L1 regularization        1.0e−4          1.0e−4            8.6e−5          8.6e−5
 L2 regularization        7.1e−5          7.1e−5            8.1e−5          8.1e−5

The optimized parameters in DNNs for each condition related to grid size and feature dimension are tabulated in Table 2. In all conditions, the rectified linear unit (ReLU), which is the most used activation function, was the best activation function for our dataset. Interestingly, when the feature set contained geographical information, the number of hidden layers and units was the same as in the previous study [5]. The difference in grid size did not affect the hyperparameter variation; the conditions using grid sizes 40 × 40 and 80 × 80 showed the same optimal parameters in both feature sets.

Prediction performance usually decreases as the grid size increases (i.e., as cells become smaller), because small grids have fewer crime incidents and a more complicated data structure. In many cases, the performances decreased when the feature set contained only spatio-temporal features (see the results in brackets in Table 1). However, geographical features effectively assisted in solving the prediction problem in the complicated data structure: the prediction performance improved with increasing grid size. It is thought that geographical information contributes effectively to incident prediction when the handled grid covers a narrow space. Thus, the DNN algorithm might be useful for examining the relationship between crime and geographical features.

Fig. 2. True hot spots and hot spots predicted by RFs and DNNs for the suspicious behavior category (February 2019, 80 × 80 grids).

3.2 Predicted and True Hot Spots

The actual (true) hot spots of the suspicious behavior crime category in February 2019, together with the hot spots predicted by RFs and DNNs for 80 × 80 grids, are shown in Fig. 2. In the Aichi, Kanagawa, and Osaka prefectures, the grids predicted by DNNs were fewer than those predicted by RFs. Also for the other categories, RFs tend to produce more incident outputs. Although the prediction performance for true positives and true negatives is the most important aspect of predictive models, false positives themselves can strengthen police patrols and indirectly deter crimes; thus, high-sensitivity prediction is not a bad strategy for model construction. However, prediction models should pursue pure accuracy until the efficiency of machine learning-based crime prediction becomes widespread and social contributions are made. The DNN-based technique was effective for elusive crime prediction; it is therefore important that DNN prediction and local practice collaborate to prevent victimization in those communities.

4 Conclusions

This study compared the prediction performances of DNN-based techniques and random forests for elusive crimes in Japan, using data obtained from municipalities, in cases where geographical information is or is not used. The superior prediction performance of the DNN-based techniques was shown not only when using spatio-temporal features but also when the geographical features were combined. Given further rich information, such as user characteristics and political boundary conditions, prediction performance and interpretability are expected to improve in the future. In addition, to implement realistic applications in elusive crime prevention, we will develop a public and standardized long-period crime database in association with the police and start-up companies.

Acknowledgments. This work was supported in part by the New Energy and Industrial Technology Development Organization (NEDO). The authors would like to thank Dr. Ying-Lung Lin for his helpful advice on the technical issues examined in this paper.

References

1. Watanabe, Y., Miyazaki, M.: Sex-related violence and the protection of women's health in Japan. Med. Law 37, 353–362 (2018)
2. Police of Japan (2018). https://www.npa.go.jp/english/index.html
3. McClendon, L., Meghanathan, N.: Using machine learning algorithms to analysis crime data. Mach. Learn. Appl.: Int. J. 2, 1–12 (2015)
4. Marco, M., Gracia, E., López-Quílez, A.: Linking neighborhood characteristics and drug-related police interventions: a Bayesian spatial analysis. ISPRS Int. J. Geoinf. 6(3), 65 (2017)
5. Lin, Y.-L., Yen, M.-F., Yu, L.-C.: Grid-based crime prediction using geographical features. ISPRS Int. J. Geoinf. 7(8), 298 (2018)


6. Leong, K., Sung, A.: A review of spatio-temporal pattern analysis approaches on crime analysis. Int. E-J. Crim. Sci. 9, 1–33 (2015)
7. Weisburd, D., Telep, C.W.: Hot spots policing: what we know and what we need to know. J. Contemp. Crim. Justice 30(2), 200–220 (2014)
8. Sypion-Dutkowska, N., Leitner, M.: Land use influencing the spatial distribution of urban crime: a case study of Szczecin, Poland. ISPRS Int. J. Geoinf. 6, 74 (2017)
9. Breiman, L.: Random forests. Mach. Learn. 45, 5–32 (2001)
10. Collobert, R., Weston, J.: A unified architecture for natural language processing: deep neural networks with multitask learning. In: Proceedings of the 25th International Conference on Machine Learning, pp. 160–167. ACM, Helsinki, Finland (2008)
11. Xu, L., Ren, J.S., Liu, C., Jia, J.: Deep convolutional neural network for image deconvolution. In: Advances in Neural Information Processing Systems, pp. 1790–1798 (2014)

Remote Sensing, Heat Island Effect and Housing Price Prediction via AutoML

Rita Yi Man Li1,2, Kwong Wing Chau3, Herru Ching Yu Li2, Fanjie Zeng2, Beiqi Tang2, and Meilin Ding2

1 Department of Economics and Finance, Hong Kong Shue Yan University, North Point, Hong Kong
[email protected]
2 Sustainable Real Estate Research Center, Hong Kong Shue Yan University, North Point, Hong Kong
3 Department of Real Estate and Construction, The University of Hong Kong, Pok Fu Lam, Hong Kong

Abstract. Global warming has become a major environmental concern in recent years. Urban heat island problems are believed to cause health problems, and we therefore investigate the impact of the heat island effect on housing prices by studying Whampoa Garden, one of the housing estates with the largest trading volume in Hong Kong. Landsat 8, an advanced satellite carrying the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS), was used to collect satellite images from 2014 to 2018 from the United States Geological Survey (USGS). All satellite images of the study points have the least cloud cover. We then used that information to conduct housing price prediction via AutoML.

Keywords: Heat island effect · Remote sensing · Housing price · AutoML · Hong Kong

1 Introduction

The heat island effect is an environmental problem in urban areas. Keikhosravi [1] found that heat island problems worsened when there were heatwaves. Zhang, Fukuda [2] agreed that the heat island effect is caused by heatwaves from metropolises. Wang, Liu [3] showed that thermal remote sensing can monitor the distribution and the periodic and dynamic changes of urban heat islands. Shirani-Bidabadi, Nasrabadi [4] suggested that remote sensing can identify urban heat islands by using satellite data to retrieve land surface temperature (LST). Huang and Wang [5] applied high-resolution remote sensing data to test summer daytime LST in Wuhan and concluded that remote sensing data reveal temperature differences across areas. Hofierka, Gallay [6] suggested that LST is a reliable indicator of the heat island effect due to the strong correlation between LST and near-surface air temperature. Since building materials absorb more heat than vegetation, heat islands often exist in the built environment, and we use LST as a proxy for the heat island effect.



Previous research suggested that housing prices are affected by a basket of factors [7–9], e.g. sustainability [10]; for instance, a reduction in PM10 levels is valued at US$41.73 in Mexico [11], and government public policies also affect housing prices [12]. Most research adopted econometric models, e.g. the Granger causality test [13] and Markov chains [14]. These studies did not examine the heat island effect's impact, nor did they predict housing prices via AutoML.

2 Materials and Methods

Whampoa Garden is the largest and most actively traded housing estate in Hong Kong. Its latitude and longitude in Google Maps are 22°18′13.64688000000018″ and 114°11′21.35723999999994″. Based on this central point, four other points along the diagonal were marked and their temperature values were recorded. Five Landsat 8 satellite images from 2014 to 2018 were collected from the United States Geological Survey (USGS) and filtered for the least cloud cover. The meteorological data for these five satellite images were collected from the HK Observatory to compute the atmospheric correction parameters with NASA's calculator (atmcorr.gsfc.nasa.gov/) (Table 1).

Table 1. Band average atmospheric transmission and radiance

Image date   Lat/Long     Band average atmospheric transmission   Effective bandpass upwelling radiance (W/m2/sr/um)   Effective bandpass downwelling radiance (W/m2/sr/um)
2014.11.16   23.1/113.6   0.7   2.4   3.7
2015.10.18   23.1/113.6   0.8   1.7   2.7
2016.02.07   23.1/113.6   1.0   0.3   0.5
2017.10.23   23.2/113.6   0.7   2.2   3.5
2018.02.12   23.1/113.6   0.9   0.8   1.3

After collecting the LST data, we collected data that may affect property prices (Table 2); cubic spline interpolation was used to increase the data frequency. We then utilized H2O AutoML to select the best housing price prediction model out of 30 models (Table 3).
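Both steps lend themselves to a short illustration. The sketch below shows, under assumed file and column names, how cubic spline interpolation (SciPy) could be used to raise the frequency of the sparse LST series and how H2O AutoML could then be asked for a leaderboard of price models; it is a sketch of the general workflow, not the authors' actual code.

```python
# Illustrative sketch only: file names, column names and parameters are assumptions.
import pandas as pd
from scipy.interpolate import CubicSpline
import h2o
from h2o.automl import H2OAutoML

# 1) Raise the frequency of the sparse LST series with a cubic spline.
lst = pd.read_csv("lst_observations.csv", parse_dates=["date"]).sort_values("date")
deals = pd.read_csv("whampoa_transactions.csv", parse_dates=["date"])
spline = CubicSpline(lst["date"].map(pd.Timestamp.toordinal), lst["LST"])
deals["LST"] = spline(deals["date"].map(pd.Timestamp.toordinal))

# 2) Let H2O AutoML search for the best housing price model.
h2o.init()
frame = h2o.H2OFrame(deals.drop(columns=["date"]))
train, test = frame.split_frame(ratios=[0.8], seed=42)
predictors = [c for c in frame.columns if c != "price"]
aml = H2OAutoML(max_models=30, seed=1)   # roughly the model budget behind Table 3
aml.train(x=predictors, y="price", training_frame=train)
print(aml.leaderboard.head(rows=10))     # leaderboard similar in spirit to Table 3
```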


Table 2. Data table (for full dataset, please email [email protected]).

Variable name      Variable description
GoldPrice          Gold price closing
OilPrice           Crude oil closing price
three_month        Three-month interest settlement rate
six_month          Six-month interest settlement rate
HSI_Close          Hang Seng index closing
USDHKD_Price       USD/HKD exchange rate
HSNP_Close         Hang Seng properties index
CNYHKD             CNY/HKD exchange rate
GBPHKD_Price       GBP/HKD exchange rate
Overnight          Overnight interest settlement rate
nine_month         Nine-month interest settlement rate
one_week           One-week interest settlement rate
twelve_month       Twelve-month interest settlement rate
one_month          One-month interest settlement rate
TheLinkClose       The Link closing price
FSP                Fine suspended particulates
SO2                Sulphur dioxide in MK
CO                 Carbon monoxide in MK
NOX                Nitrogen oxides in MK
RSP                Respirable suspended particulates
O3                 Ozone
NO2                Nitrogen dioxide
LST                Land surface temperature (authors' data)
Floor              Property floor
actual_sq_feet     Actual square feet
building_sq_feet   Building square feet

Data source websites: Investing.com, finance.yahoo.com, hkma.gov.hk/media/eng/doc/market-data-and-statistics/monthly-statistical-bulletin/T060303.xls, http://tiny.cc/d1rrjz, shorturl.at/aewP6, and en.midland.com.hk/ (property attributes), in addition to the authors' own LST data.

Table 3. AutoML results

model_id  Mean residual deviance  RMSE  MSE  MAE  RMSLE
StackedEnsemble_BestOfFamily_AutoML_20200129_074516  24,853.8  157.7  24,853.8  86.9  0.3
StackedEnsemble_AllModels_AutoML_20200129_074516  24,952.7  158.0  24,952.7  86.9  0.3
XGBoost_grid__1_AutoML_20200129_074516_model_2  25,031.3  158.2  25,031.3  89.5  0.3
GBM_grid__1_AutoML_20200129_074516_model_4  25,491.0  159.7  25,491.0  90.4  0.3
GBM_2_AutoML_20200129_074516  25,508.5  159.7  25,508.5  90.4  0.3
XGBoost_2_AutoML_20200129_074516  25,604.5  160.0  25,604.5  92.2  0.3
GBM_3_AutoML_20200129_074516  25,927.9  161.0  25,927.9  91.2  0.3
XGBoost_grid__1_AutoML_20200129_074516_model_1  26,090.1  161.5  26,090.1  92.3  0.3
XGBoost_3_AutoML_20200129_074516  26,222.9  161.9  26,222.9  92.4  0.3
XGBoost_1_AutoML_20200129_074516  26,520.3  162.9  26,520.3  93.1  0.3
DRF_1_AutoML_20200129_074516  26,543.1  162.9  26,543.1  92.2  0.3
GLM_1_AutoML_20200129_074516  26,576.1  163.0  26,576.1  93.7  0.3
GBM_4_AutoML_20200129_074516  26,690.8  163.4  26,690.8  92.1  0.3
GBM_grid__1_AutoML_20200129_074516_model_2  26,842.0  163.8  26,842.0  96.2  0.3
GBM_1_AutoML_20200129_074516  27,035.1  164.4  27,035.1  92.0  0.3
XGBoost_grid__1_AutoML_20200129_074516_model_4  27,039.6  164.4  27,039.6  93.9  0.3
XRT_1_AutoML_20200129_074516  27,220.0  165.0  27,220.0  94.3  0.3
XGBoost_grid__1_AutoML_20200129_074516_model_5  27,322.4  165.3  27,322.4  96.0  0.3
GBM_5_AutoML_20200129_074516  27,559.2  166.0  27,559.2  94.2  0.3
DeepLearning_1_AutoML_20200129_074516  28,495.7  168.8  28,495.7  104.1  0.3
XGBoost_grid__1_AutoML_20200129_074516_model_3  29,020.1  170.4  29,020.1  97.1  0.3
XGBoost_grid__1_AutoML_20200129_074516_model_6  29,375.3  171.4  29,375.3  99.0  0.3
GBM_grid__1_AutoML_20200129_074516_model_1  29,993.0  173.2  29,993.0  106.0  0.3
DeepLearning_grid__2_AutoML_20200129_074516_model_1  36,408.9  190.8  36,408.9  115.4  0.3
GBM_grid__1_AutoML_20200129_074516_model_3  36,575.8  191.2  36,575.8  118.2  NaN
DeepLearning_grid__1_AutoML_20200129_074516_model_2  43,740.1  209.1  43,740.1  120.6  0.4
GBM_grid__1_AutoML_20200129_074516_model_6  44,340.5  210.6  44,340.5  120.5  NaN
DeepLearning_grid__2_AutoML_20200129_074516_model_2  45,533.7  213.4  45,533.7  138.5  0.4
GBM_grid__1_AutoML_20200129_074516_model_5  75,906.8  275.5  75,906.8  209.4  0.4
DeepLearning_grid__1_AutoML_20200129_074516_model_1  100,588.0  317.2  100,588.0  115.3  0.4
XGBoost_grid__1_AutoML_20200129_074516_model_7  259,955.0  509.9  259,955.0  460.3  0.9


3 Results and Discussion

Table 3 records the ranking of the models run in AutoML. The first two are Stacked Ensemble models, but they cannot show the relative importance of features. Thus, we ran XGBoost_grid__1_AutoML_20200129_074516_model_2, and Table 4 illustrates that finance factors, e.g. the Hang Seng Index, play a more important role in housing price prediction than the heat island effect. Nevertheless, there are limitations in the study: 1) the urban heat island effect is better measured by computing the difference between the built environment and nearby vegetation, and 2) we only collected one data point per year from 2014 to 2018; annual summer and winter data should be included instead. Yet, this pilot study paves the way for future studies.

Table 4. Relative importance of the factors

Variable          Relative importance  Scaled importance  Percentage
actual_sq_feet    281928000            1.000              0.585
building_sq_feet  23363638             0.083              0.048
HSI_Close         21421016             0.076              0.044
one_week          20988716             0.074              0.044
one_month         14540963             0.052              0.030
GoldPrice         12043814             0.043              0.025
three_month       11355125             0.040              0.024
Overnight         10841773             0.038              0.022
six_month         10333431             0.037              0.021
Floor             9962034              0.035              0.021
TheLinkClose      9241284              0.033              0.019
HSNP_Close        7448313              0.026              0.015
OilPrice          7140574              0.025              0.015
GBPHKD_Price      5028980              0.018              0.010
CO                4948166              0.018              0.010
O3                4518419              0.016              0.009
NO2               4411174              0.016              0.009
SO2               4234257              0.015              0.009
NOX               4230000              0.015              0.009
RSP               4033939              0.014              0.008
CNYHKD            3571214              0.013              0.007
FSP               2998845              0.011              0.006
twelve_month      2398187              0.009              0.005
LST               1087924              0.004              0.002
nine_month        61517                0.000              0.000
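Because the leading stacked ensembles cannot report feature importances, a single model from the leaderboard has to be inspected instead. A hedged sketch of how a Table 4-style importance table could be extracted with H2O's Python API is given below; the model identifier is taken from Table 3, but treating it as still available in a running H2O session is an assumption.

```python
# Sketch: pull a Table 4-style importance table out of one leaderboard model.
import h2o

# Assumes the H2O cluster that produced the AutoML run of Table 3 is still alive.
model = h2o.get_model("XGBoost_grid__1_AutoML_20200129_074516_model_2")

# varimp() reports variable, relative_importance, scaled_importance and percentage,
# i.e. the same columns shown in Table 4.
importance = model.varimp(use_pandas=True)
print(importance.head(10))
```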


References

1. Keikhosravi, Q.: The effect of heat waves on the intensification of the heat island of Iran's metropolises (Tehran, Mashhad, Tabriz, Ahvaz). Urban Clim. 28, 100453 (2019)
2. Zhang, L., Fukuda, H., Liu, Z.: The value of cool roof as a strategy to mitigate urban heat island effect: a contingent valuation approach. J. Clean. Prod. 228, 770–777 (2019)
3. Wang, W., et al.: Remote sensing image-based analysis of the urban heat island effect in Shenzhen, China. Phys. Chem. Earth Parts A/B/C 110, 168–175 (2019)
4. Shirani-Bidabadi, N., et al.: Evaluating the spatial distribution and the intensity of urban heat island using remote sensing, case study of Isfahan city in Iran. Sustain. Cities Soc. 45, 686–692 (2019)
5. Huang, X., Wang, Y.: Investigating the effects of 3D urban morphology on the surface urban heat island effect in urban functional zones by using high-resolution remote sensing data: a case study of Wuhan, Central China. ISPRS J. Photogramm. Remote Sens. 152, 119–131 (2019)
6. Hofierka, J., et al.: Physically-based land surface temperature modeling in urban areas using a 3-D city model and multispectral satellite data. Urban Clim. 31, 100566 (2020)
7. Li, R.Y.M., Chau, K.W.: Econometric Analyses of International Housing Markets. Routledge, London (2016)
8. Li, R.Y.M., Li, H.C.Y.: Have housing prices gone with the smelly wind? Big data analysis on landfill in Hong Kong. Sustainability 10(2), 341 (2018)
9. Li, R.Y.M., Cheng, K.Y., Shoaib, M.: Walled buildings, sustainability, and housing prices: an artificial neural network approach. Sustainability 10(4), 1298 (2018)
10. Li, R.Y.M.: The usage of automation system in smart home to provide a sustainable indoor environment: a content analysis in Web 1.0. Int. J. Smart Home 7(4), 47–59 (2013)
11. Li, R.Y.M.: The internalisation of environmental externalities affecting dwellings: a review of court cases in Hong Kong. Econ. Aff. 32(2), 81–87 (2012)
12. Yu, S., Zhang, L.: The impact of monetary policy and housing purchase restrictions on housing prices in China. Int. Econ. J. 33(2), 286–309 (2019)
13. Granger, C.W.J.: Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37(3), 424–438 (1969)
14. Loftis, M.W., Mortensen, P.B.: A dynamic linear modelling approach to public policy change. J. Public Policy 38(4), 553–579 (2018)

A Framework for Selecting Machine Learning Models Using TOPSIS

Maikel Yelandi Leyva Vazquezl, Luis Andy Briones Peñafiel, Steven Xavier Sanchez Muñoz, and Miguel Angel Quiroz Martinez

Computer Science Department, Universidad Politécnica Salesiana, Guayaquil, Ecuador
{mleyva,mquiroz}@ups.edu.ec, {lbrionesp,ssanchezm3}@est.ups.edu.ec

Abstract. In machine learning, when multiple algorithms are applied to data sets that are complex because of their accelerated growth, a decision problem arises: how do we select the algorithm with the best performance? This has generated the need to implement new information analysis techniques to support decision making. Multi-criteria decision-making techniques are used to select particular alternatives based on different criteria. The objective of this article is to present some machine learning models applied to a data set in order to select the best alternative according to the criteria using the TOPSIS method. The deductive method and the scanning research technique were applied to a case study on the Wisconsin Breast Cancer dataset, which seeks to evaluate and compare the performance and effectiveness of machine learning models using TOPSIS.

Keywords: Machine learning · TOPSIS · Data set · Breast cancer Wisconsin dataset · Breast cancer

1 Introduction

Today, machine learning uses algorithms to build analytical models, helping systems to "learn automatically" from a given set of data. It can now be applied to large amounts of data, given the accelerated growth of databases in the different areas of knowledge, among them health, generated in response to the day-to-day development and evolution of technology. To detect and identify information from large data sets, it is necessary to take advantage of the exploratory data analysis provided by data mining models [1]. There is a wide variety of algorithms used in machine learning, some of them very popular, such as linear regressions, neural networks, decision trees or kNN algorithms, among others. Despite this, until now there is no predefined and validated model that operates effectively on any data set [2].

The analysis faces a recurrent decision problem that depends on the nature of the data and of the output variable (result) to be predicted: one or more algorithms must be selected and then cross-validated according to the desired criteria, which evaluate the results of the different models and determine which is the most accurate to ensure optimal performance [3, 4]. Selection is made using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), an effective multi-criteria decision-making (MCDM) method used to classify or rank a finite set of alternatives [5]. This method was first proposed by Hwang and Yoon, taking as a reference that the selected alternative should have the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution [6, 7]. In real MCDM problems, attribute values are usually expressed with imperfect information; however, decision-makers may prefer an easy technique that gives the same result over a complex algorithm [8, 9]. The objective of this article is to present some machine learning models applied to a data set in order to select the best alternative according to the criteria using the TOPSIS method.

2 Preliminary

In this section we briefly review the elements that shape the research work: the multi-criteria decision-making process, the TOPSIS method algorithm, and the Wisconsin Breast Cancer data set.

2.1 Multi-criteria Decision Making

Multi-criteria decision making is applied to select a preferred alternative from all available alternatives according to established criteria. The criteria are attributes, goals or objectives that are used to evaluate the different alternatives and are considered important since they seek to provide a solution to a certain decision problem [10]. This technique can be used in various fields where a selection problem occurs; problems in which decision-makers usually have uncertain knowledge are solved by making decisions with multiple criteria [11]. This way of selecting the best alternative is important nowadays for real, complex problems because of its ability to choose among alternatives based on various criteria [12]. The main steps are to identify the problem and determine the multiple alternatives, then identify the criterion or criteria with which the alternatives are to be evaluated, and finally apply decision-making methods and evaluate each of the alternatives so that the best one is selected [13]. Multi-criteria decision making offers different methods for solving MCDM problems; these methods are used for specific and complex problems that vary depending on the type of data set used [14].

2.2 TOPSIS Method Algorithm

The TOPSIS method assumes prior knowledge of the $m \times n$ decision matrix $D = [a_{ij}]$:

$$D = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \tag{1}$$

Step 1. The standardized decision matrix $R = [r_{ij}]$ is constructed. Each element $r_{ij}$ is calculated as follows:

$$r_{ij} = \frac{a_{ij}}{\sqrt{\sum_{i=1}^{m} a_{ij}^{2}}}, \quad j = 1, 2, \ldots, n. \tag{2}$$

Step 2. The standardized weighted decision matrix $V = [v_{ij}]$ is constructed. Each element $r_{ij}$ is multiplied by the weight $w_{j}$ associated with criterion $j$:

$$V = \begin{bmatrix} w_{1}r_{11} & w_{2}r_{12} & \cdots & w_{n}r_{1n} \\ w_{1}r_{21} & w_{2}r_{22} & \cdots & w_{n}r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1}r_{m1} & w_{2}r_{m2} & \cdots & w_{n}r_{mn} \end{bmatrix} = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1n} \\ v_{21} & v_{22} & \cdots & v_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ v_{m1} & v_{m2} & \cdots & v_{mn} \end{bmatrix} \tag{3}$$

$$v_{ij} = r_{ij} \cdot w_{j}, \quad j = 1, 2, \ldots, n. \tag{4}$$

Step 3. Determine the positive ideal solution (PIS) $A^{+}$ and the negative ideal solution (NIS) $A^{-}$, which collect the maximum benefit and the minimum benefit for each criterion:

$$A^{+} = \left\{ \left( \max_{i} v_{ij} \mid j \in I^{+} \right), \left( \min_{i} v_{ij} \mid j \in I^{-} \right) \right\} = \left\{ v_{1}^{+}, v_{2}^{+}, \ldots, v_{n}^{+} \right\} \tag{5}$$

$$A^{-} = \left\{ \left( \min_{i} v_{ij} \mid j \in I^{+} \right), \left( \max_{i} v_{ij} \mid j \in I^{-} \right) \right\} = \left\{ v_{1}^{-}, v_{2}^{-}, \ldots, v_{n}^{-} \right\} \tag{6}$$

where $I^{+}$ and $I^{-}$ denote the sets of benefit-type and cost-type criteria, respectively.

Step 4. Calculate the separation of each alternative from the PIS and from the NIS using the Euclidean distance:

$$d_{i}^{+} = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_{j}^{+} \right)^{2}}, \quad i = 1, 2, \ldots, m. \tag{7}$$

$$d_{i}^{-} = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_{j}^{-} \right)^{2}}, \quad i = 1, 2, \ldots, m. \tag{8}$$

Step 5. Calculate the relative closeness of each alternative to the PIS:

$$C_{i}^{+} = \frac{d_{i}^{-}}{d_{i}^{+} + d_{i}^{-}}, \quad 0 \le C_{i}^{+} \le 1. \tag{9}$$

Step 6. Sort the alternatives in order according to their relative closeness. The best alternatives are those with the highest $C_{i}^{+}$ value, i.e., closest to 1.
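For illustration, the six steps above can be condensed into a short NumPy sketch; the function below follows Eqs. (1)–(9), with a boolean mask standing in for the criterion sets I+ and I-. It is a generic sketch rather than code from the original study.

```python
# Illustrative TOPSIS implementation following Eqs. (1)-(9).
import numpy as np

def topsis(D, weights, benefit):
    """D: m x n decision matrix; weights: length-n weight vector;
    benefit: length-n boolean array (True for benefit-type, False for cost-type)."""
    D = np.asarray(D, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    R = D / np.sqrt((D ** 2).sum(axis=0))                       # Step 1, Eq. (2)
    V = R * w                                                   # Step 2, Eqs. (3)-(4)

    A_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))     # Step 3, Eq. (5)
    A_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))     # Eq. (6)

    d_pos = np.sqrt(((V - A_pos) ** 2).sum(axis=1))             # Step 4, Eq. (7)
    d_neg = np.sqrt(((V - A_neg) ** 2).sum(axis=1))             # Eq. (8)

    closeness = d_neg / (d_pos + d_neg)                         # Step 5, Eq. (9)
    ranking = np.argsort(-closeness)                            # Step 6: best first
    return closeness, ranking
```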

2.3 Wisconsin Breast Cancer Data Set

The original Wisconsin Breast Cancer (WBC) data set was collected at the University of Wisconsin Hospitals by Wolberg [9]. It is hosted in the University of California, Irvine (UCI) Machine Learning Repository, an open-source repository with many datasets that can be used for experimental analysis of machine learning algorithms [15]. The characteristics of this data set were derived from the analysis of fine-needle aspiration images of breast masses and the cell nucleus [16]. For the implementation of the machine learning algorithms, the data set was separated into 70% for the training phase and 30% for the test phase [17]. This is a classification data set, which records measurements for breast cancer cases and has 699 instances. There are two classes of tumors (benign, coded 2, and malignant, coded 4) and 9 integer-valued attributes [18]. Using the Orange data mining tool in a typical widget workflow, we take this data set, test some learning algorithms, and observe their performance with Test & Score using cross-validation and confusion matrices to measure the most successful breast cancer screening methods [19]. The data are usually pre-processed before the test; in this case, a selection has been made from the Wisconsin Breast Cancer data set. The results are promising and most of them have a score above 95% [20].
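The paper performs this evaluation with Orange's Test & Score widget; purely as an illustration, the same kind of cross-validated scoring can be sketched with scikit-learn. Note that scikit-learn's bundled breast cancer data is the Wisconsin diagnostic variant (569 instances, 30 features), used here only as a stand-in for the original 699-instance WBC set; the models and metrics mirror the alternatives and criteria of the case study below.

```python
# Sketch: cross-validated scoring of five candidate models on a stand-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "Neural Network": MLPClassifier(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "k-Nearest Neighbors": KNeighborsClassifier(),
}
scoring = ["roc_auc", "accuracy", "f1", "precision", "recall"]  # roughly C1-C5

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)               # scale, then classify
    scores = cross_validate(pipe, X, y, cv=10, scoring=scoring)
    print(name, {m: round(scores["test_" + m].mean(), 3) for m in scoring})
```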

3 Case Study

The case study is applied to a well-known dataset, the Wisconsin Breast Cancer malignant and benign tumor dataset. The goal is to select among different machine learning model alternatives according to classification metrics by means of the TOPSIS multi-criteria method. The alternatives taken into account are denoted by Ai (i = 1, 2, ..., m), where A1 is Logistic Regression, A2 is SVM Learner, A3 is Neural Network, A4 is Naïve Bayes, and A5 is k-Nearest Neighbors. The criteria used to evaluate the performance of the alternatives are Cj (j = 1, 2, ..., n), where C1 is Area under the curve, C2 is Accuracy, C3 is F1, C4 is Precision, C5 is Completeness (recall), and C6 is Specificity (Table 1). With the help of these parameters a decision matrix is created, that is, the m × n matrix (5 × 6) with generic element aij (Table 2).


Table 1. Cj criteria

C1: Diagnostic tests can be displayed using Recall and Sp: $AUC = \frac{1}{2}(Recall + Sp)$
C2: The amount of benign cells matched correctly: $CA = \frac{VP + VN}{VP + VN + FP + FN}$
C3: The harmonic value of recall and precision: $F1 = \frac{2 \cdot P \cdot Recall}{P + Recall}$
C4: The number of benign cells that are correctly predicted to be normal: $P = \frac{VP}{VP + FP}$
C5: Also called TPR, the percentage of benign cells that are correctly identified: $Recall = \frac{VP}{VP + FN}$
C6: Also called FPR, the percentage of malignant cells that are incorrectly classified as benign: $Sp = \frac{VN}{VN + FP}$

Table 2. Decision matrix D = [aij]

      C1     C2     C3     C4     C5     C6
A1    0.994  0.965  0.965  0.965  0.965  0.952
A2    0.993  0.960  0.960  0.960  0.960  0.952
A3    0.993  0.966  0.966  0.967  0.966  0.964
A4    0.992  0.974  0.974  0.974  0.974  0.976
A5    0.982  0.975  0.975  0.975  0.975  0.969

The values are normalized by (2) as shown in Table 3.

Table 3. Standardized decision matrix R = [rij]

      C1     C2     C3     C4     C5     C6
A1    0.449  0.446  0.446  0.446  0.446  0.442
A2    0.448  0.444  0.444  0.443  0.444  0.442
A3    0.448  0.446  0.446  0.447  0.446  0.448
A4    0.448  0.450  0.450  0.450  0.450  0.453
A5    0.443  0.450  0.450  0.450  0.450  0.450

The weight vector is then integrated into the normalized decision matrix using Eq. (4) to obtain the weighted normalized decision matrix in Table 4.

Table 4. Standardized weighted decision matrix V = [vij]

      C1      C2      C3      C4      C5      C6
A1    0.0897  0.0892  0.0892  0.0891  0.0892  0.0885
A2    0.0896  0.0887  0.0887  0.0887  0.0887  0.0885
A3    0.0896  0.0893  0.0893  0.0893  0.0893  0.0896
A4    0.0896  0.0900  0.0900  0.0900  0.0900  0.0907
A5    0.0886  0.0901  0.0901  0.0901  0.0901  0.0900


The ideal solutions A+ and A- are determined using (5) and (6), as shown in Table 5.

Table 5. Ideal solutions A+ and A-

      C1      C2      C3      C4      C5      C6
A+    0.0897  0.0901  0.0901  0.0901  0.0901  0.0907
A-    0.0886  0.0887  0.0887  0.0887  0.0887  0.0885

The Euclidean distances of each alternative to the PIS and the NIS are calculated using (7) and (8); the relative closeness is then calculated with (9) and the alternatives are ranked, as shown in Table 6.

Table 6. Euclidean distances di+ and di-, relative closeness and rank

      di+     di-     Ci+     Rank
A1    0.0029  0.0014  0.3296  4
A2    0.0036  0.0010  0.2182  5
A3    0.0020  0.0019  0.4898  3
A4    0.003   0.0035  0.9319  1
A5    0.0013  0.0032  0.7163  2

Finally, among the 5 machine learning models evaluated against the 6 criteria, the ranking obtained with this method is A4 > A5 > A3 > A1 > A2. In this case study, Naïve Bayes is therefore the best alternative.
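For illustration, feeding the Table 2 decision matrix to the topsis() sketch from Sect. 2.2 reproduces a ranking of this kind. The weight vector is not listed explicitly in the text, so equal weights are assumed here, and all six criteria are treated as benefit-type (consistent with Table 5, where A+ takes the column maxima).

```python
# Illustrative use of the topsis() sketch on the Table 2 decision matrix.
import numpy as np

D = np.array([
    [0.994, 0.965, 0.965, 0.965, 0.965, 0.952],  # A1 Logistic Regression
    [0.993, 0.960, 0.960, 0.960, 0.960, 0.952],  # A2 SVM Learner
    [0.993, 0.966, 0.966, 0.967, 0.966, 0.964],  # A3 Neural Network
    [0.992, 0.974, 0.974, 0.974, 0.974, 0.976],  # A4 Naive Bayes
    [0.982, 0.975, 0.975, 0.975, 0.975, 0.969],  # A5 k-Nearest Neighbors
])
weights = np.full(6, 1 / 6)          # assumption: equal weights
benefit = np.ones(6, dtype=bool)     # assumption: all criteria are benefit-type

closeness, ranking = topsis(D, weights, benefit)
print(closeness)                        # relative closeness per alternative
print([f"A{i + 1}" for i in ranking])   # best alternative first
```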

4 Conclusion

Based on the work presented here, we conclude that making decisions based on multiple criteria in any knowledge area is a very complex activity. In order to select a machine learning model, the main thing we must take into account is that it must satisfy the needs of a specific problem, in this case the supervised learning problem on the Wisconsin Breast Cancer data set. The TOPSIS method selects the best machine learning model in a simple and structured way. The final result indicated that when the preference-order technique was applied based on the value of relative closeness, satisfactory results were obtained and the selected machine learning model, in this case Naïve Bayes, was identified. Future work will concentrate on weighting the criteria based on Shannon entropy.

Acknowledgments. The authors want to thank the Grupo de Investigación en Inteligencia Artificial y Reconocimiento Facial (GIIAR) and the Universidad Politécnica Salesiana for supporting this research.


References

1. Satapathy, S.C., Joshi, A., Modi, N., Pathak, N.: A comparative analysis of feature selection methods and associated machine learning algorithms on Wisconsin breast cancer dataset (WBCD). In: Advances in Intelligent Systems and Computing, vol. 408 (2016)
2. Pacheco, A.G.C., Krohling, R.A.: Ranking of classification algorithms in terms of mean-standard deviation using A-TOPSIS. Ann. Data Sci. 5, 93–110 (2018)
3. Bayrak, E.A., Kırcı, P.: Comparison of machine learning methods for breast cancer diagnosis, pp. 4–6 (2019)
4. Amrane, M., Oukid, S., Gagaoua, I., Ensari, T.: Breast cancer classification using machine learning. In: 2018 Electric Electronics, Computer Science, Biomedical Engineering's Meeting, EBBT 2018, pp. 1–4 (2018)
5. Wang, H.: A case-based reasoning method with relative entropy and TOPSIS integration, vol. 541 (2017)
6. Elhassouny, A., Smarandache, F.: Multi-criteria decision-making using combined simplified-TOPSIS method and neutrosophics. In: 2016 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2016, pp. 2468–2474 (2016)
7. Panda, M., Jagadev, A.K.: TOPSIS in multi-criteria decision making: a survey. In: Proceedings - 2nd International Conference on Data Science and Business Analytics, ICDSBA 2018, pp. 51–54 (2018)
8. Ceballos, B., Teresa, M., David, L.: A comparative analysis of multi-criteria decision-making methods. Prog. Artif. Intell. 5, 315–322 (2016)
9. Singh, L., Singh, S., Aggarwal, N.: Applying machine learning algorithms for early diagnosis and prediction of breast cancer risk. Springer, Singapore (2019)
10. Daoud, S., Mdhaffar, A., Freisleben, B., Jmaiel, M.: A multi-criteria decision making approach for predicting cancer cell sensitivity to drugs. In: Proceedings of the ACM Symposium on Applied Computing, pp. 47–53 (2018)
11. Dodangeh, J., Mojahed, M., Yusuff, M.: Best project selection by using of group TOPSIS method. In: International Association of Computer Science and Information Technology – Spring Conference, pp. 50–53 (2009)
12. Latha, R., Vetrivelan, P.: TOPSIS based delay sensitive network selection for wireless body area networks. In: 2017 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2017, pp. 736–739, January 2017
13. Rohani, A., Mojtaba, M.: Free alignment classification of Dikarya Fungi using some machine learning methods. Neural Comput. Appl. 31, 6995–7016 (2018)
14. Kaur, S., Sehra, S.K., Sehra, S.S.: A framework for software quality model selection using TOPSIS. In: 2016 IEEE International Conference on Recent Trends in Electronics, Information and Communication Technology, RTEICT 2016 - Proceedings, pp. 736–739 (2017)
15. Baneriee, C., Paul, S., Ghoshal, M.: A comparative study of different ensemble learning techniques using Wisconsin breast cancer dataset. In: 2017 International Conference on Computer, Electrical and Communication Engineering, ICCECE 2017 (2018)
16. Bharat, A., Pooja, N., Reddy, R.A.: Using machine learning algorithms for breast cancer risk prediction and diagnosis. In: 2018 3rd International Conference on Circuits, Control, Communication and Computing, pp. 1–4 (2018)
17. Carlile, T.: On breast cancer detection: an application of machine learning algorithms on the Wisconsin diagnostic dataset. Cancer 47, 1164–1169 (1981)


18. Chaurasia, V., Pal, S., Tiwari, B.B.: Prediction of benign and malignant breast cancer using data mining techniques. J. Algorithms Comput. Technol. 12, 119–126 (2018)
19. Saygili, A.: Classification and diagnostic prediction of breast cancers via different classifiers. Int. Sci. Vocat. Stud. J. 2, 48–56 (2018)
20. Kumar, V., Mishra, B.K., Mazzara, M., Thanh, D.N.H., Verma, A.: Prediction of malignant & benign breast cancer: a data mining approach in healthcare applications, pp. 1–8 (2019)

Artificial Intelligence and Social Computing

Application-Oriented Approach for Detecting Cyberaggression in Social Media

Kurt Englmeier1 and Josiane Mothe2

1 Schmalkalden University of Applied Science, Blechhammer, Schmalkalden, Germany
[email protected]
2 IRIT, UMR5505 CNRS, INSPE-UT2, Université de Toulouse, Toulouse, France
[email protected]

Abstract. The paper discusses and demonstrates the use of named-entity recognition for automatic hate speech detection. Our approach also addresses the design of models to map storylines and social anchors. They provide valuable background information for the analysis and correct classification of the brief statements used in social media. Furthermore, named-entity recognition can help to tackle the specifics of the language style often used in hate tweets, a style that differs from regular language in deliberate and unintentional misspellings, strange abbreviations and punctuation, and the use of symbols. We implemented a prototype for our approach that automatically analyzes tweets along storylines. It operates on a series of bags of words containing names of persons, locations, characteristic words for insults and threats, and phenomena reflected in social anchors. We demonstrate our approach using a collection of German tweets that address the intensely discussed topic "refugees" in Germany.

Keywords: Hate speech detection · Named-entity recognition · Social anchor · Storyline

1 Introduction

Detection of cyber-aggression and hate speech is still a complex task. It requires the careful analysis of a variety of human factors in language that reach beyond the words used in hate speech itself. Here, we give an impression of the extent to which named-entity recognition can be used to identify and classify regions of aggressive utterances in statements and to discriminate them from regions of profane utterances. In general, there is an actor creating aggressive statements that address a target person (a prominent person, a victim of cyberbullying, etc.) or group (refugees, Jewish people, Muslims, etc.). We discuss our approach based on a collection of German tweets, mainly related to the topic "refugees". We also explain the role and importance of an analysis along the storyline of tweets and of including information about social anchors as roots of storylines. The work presented here, including the prototype, serves demonstration purposes. Nevertheless, it demonstrates the potential of named-entity recognition for hate speech detection. All in all, we see our approach as a useful complement to part-of-speech- or ontology-based strategies.

2 The Problem

Today, applied hate speech detection mainly relies on keyword analysis. In fact, there are many comments that use outright and clearly visible offensive terms: "Ich bin dafür, dass wir die Gaskammern wieder öffnen und die ganze Brut da reinstecken. (I am in favor of opening the gas chambers again and putting the whole brood in there.)" These and similar statements can be located easily in an automatic way. There are clear keywords indicating offensive and inciting statements. The correct classification of this statement as hate speech is unquestionable, even if we consider it in isolation.

However, hate speech detection is more than just keyword spotting. The features to discover are manifold (type of language, sentiment, actor and target detection, and so on). Hate speech detection must also catch up with the specifics of the language applied in hate speech and the dynamic changes in our everyday language. The evolution of social phenomena and of our language makes it difficult to track all racial, abusive, sexual, and religious insults. The language used in social media also has its own style. Many authors, in particular when emotionally agitated, do not care or cannot care about correct spelling or punctuation. They sometimes use strange abbreviations or deliberately incorrect spelling to express their emotions or (much like in spam mails) to try to cheat automatic hate speech detection. Apart from the (sometimes) poor writing style, tweets also use references to background knowledge that need to be taken into account in hate speech analysis.

The specifics of hate language start with these syntactic qualities that differ from regular texts such as newspapers or books. Only in rare cases do the authors use outright offensive expressions. Sometimes they try to "hide" their opinions and intentions behind less obviously offensive terms. In many cases, offensive terms clearly address facets of events or phenomena, such as the Holocaust, to indicate the author's intention. Words that appear innocuous in the first place may reveal a clear act to stir up hate or to incite criminal acts after a closer look.

3 Our Approach

With our collection of tweets, we found that

• outright offensive terms are used much less than we expected,
• tweets can only partly be classified in isolation, and
• we have to consider the complete storyline a tweet is embedded in.

We analyze storylines that start with a particular news item trailing a series of comments reflecting opinions and opposing viewpoints. Only by viewing the whole storyline are we in the position to identify "toxic" words or expressions that look profane in the first place, but may refer to a context that emblematizes an aggressive or offensive act.


Sadly, many such contexts reflect practices or methods of the Nazi regime.

Kaggle's Toxic Comment Classification Challenge differentiates six categories of toxicity that can be detected in hate speech: toxic, severe toxic, obscene, insult, identity hate, and threat. The categories are not mutually exclusive. We add a further important category: inciting. Statements that incite others, or intend to incite others, to commit a criminal act or to further propagate hate are among the most dangerous utterances in hate speech.

Named-entity recognition usually addresses the problem of extracting and classifying proper names in texts, such as names of people, organizations, or locations. In this context, an entity is an individual person, place, or thing in the world, while a mention is a phrase of text that refers to an entity using a proper name. In the context of hate speech detection, named-entity recognition at first also includes extracting and classifying proper names of persons that are authors or targets of offense or aggression, or names of locations that are focal points of hate-inducing events. However, it also has to locate outright or disguised expressions of hate and offense. In this paper, we concentrate on this aspect of named-entity recognition: we locate toxic terms, investigate their surroundings and their mentions, and classify them. In such a situation, terms like "train" or "stock car" may become toxic! Neither word is offensive in the first place. However, a mention like "We need again long trains for these refugees!" clearly refers to the trains that brought prisoners of all sorts to the concentration camps during the Nazi regime. The same holds for a phrase such as "Are there any stock cars left?" with a mention of refugees or supporters of refugees further up the storyline. In both cases, the mentions refer to the trains of extermination and propose for the target persons the same fate which the passengers of those trains met. Both words turn from "toxic" into "threat" or even "inciting" when considering their immediate surroundings and the preceding storyline.

A toxic term or a set of toxic terms indicates the potential existence of a mention containing an offensive or aggressive act. However, here we have to be careful. Any sort of close negation can turn this potential into its contrary: "You are a fool!" (insult) vs. "I'm not such a fool and believe this story!" (profane). This is in particular the case in storylines that cover views and opposing viewpoints.

4 Related Work

Correctly detecting hate speech and discriminating it from humor or simply profane expressions is still a challenging task. Current approaches apply the full range of methods established in text analysis, such as part-of-speech (POS) tagging, N-grams, dictionaries or bag-of-words (BOW), TF-IDF, sentiment detection, or ontology-based strategies. In social media, humans use combinations of words, symbols (smileys etc.), and words that do not even exist in dictionaries. It is thus indispensable to learn significant expressions directly from the tweets. We align our approach with the analysis of word N-grams [1], key phrases [2], and linguistic features [3, 4].


Many aggressive or offensive comments or posts originate from a certain event detailed in the news or in newspaper articles. Sometimes these comments are statements directly following a news item. We may consider this news item as an anchor text. In information retrieval, in particular when the analysis targets social media, anchor texts are used as query replacements or query enhancements when authors refer to these texts by hashtags or links [5, 6]. In contrast to traditional media that simply broadcast news, content in social media takes much more the form of a conversation or discourse. Lee and Croft [7] expand the concept of anchor text further and consider texts that initiate a conversation or discourse as social anchors. We believe that taking social anchors into account is indispensable for a correct interpretation of comments. Example: "author_of_the_tweet: #kandel 8,5 Jahre Jugendstrafe für einen MORD! Wofür gab es die 1,5 Jahre Rabatt??? Ich kann gar nicht soviel fressen, wie ich kotzen möchte (#kandel 8.5 years of youth custody for MURDER! What was the 1.5-year discount for??? I can't eat as much as I want to puke.)"

In this case, we consider "Kandel" as a social anchor. This includes first of all the anchor text, which can be one or even more news reports about the event and reports that follow up. A social anchor references a social phenomenon or event that is usually broadcast by the news. In social media, there are one or more hashtags referring to discourses following this anchor event. We take "Kandel" as the title of a social anchor that can be summarized, for example, by the key words (extracted from an anchor text) "event: fatal stabbing", "victim: German girl", "culprit: asylum seeker, refugee, charged with murder, jail: 8.5 years", "December 27, 2017", "Kandel, Germany". To achieve this summary, we may simply apply keyword identification using TF-IDF or more sophisticated approaches for feature selection [8]. The example also shows that we probably have to collect more than just keywords. Much like in ontologies, there are qualities that further specify key items of the text.

A broad range of tweets in our collection indicates that we probably need a broader concept of social anchor. In particular, far-right populists often refer indirectly to the cruelties of the Nazis, mainly things and acts related to the murdering in concentration camps. Therefore, words like "gas", "oven", "furnace", "freight train", "chimney", etc. are potentially toxic, and we need to treat the facets of the Nazi barbarism also as social anchors.
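One simple way to hold such an anchor is sketched below as a small Python structure; the field names are our own illustrative choice, while the values restate the key facts of the "Kandel" anchor listed above.

```python
# Sketch: a social anchor as a simple data structure (field names are illustrative).
from dataclasses import dataclass, field

@dataclass
class SocialAnchor:
    title: str
    hashtags: list = field(default_factory=list)
    facts: dict = field(default_factory=dict)

kandel = SocialAnchor(
    title="Kandel",
    hashtags=["#kandel"],
    facts={
        "event": "fatal stabbing",
        "victim": "German girl",
        "culprit": "asylum seeker, refugee, charged with murder",
        "jail": "8.5 years",
        "date": "December 27, 2017",
        "place": "Kandel, Germany",
    },
)
```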

5 Feature Detection in Storylines

In the end, we want to identify actor, intent, target, and intensity (or polarity) in hate speech utterances: "I really disgust these people". By analyzing the sequence of utterances, we can link "people" with "refugees" if they are mentioned in close context beforehand. Surface features help to indicate the intent of the statement, too. They also help to detect special stereotypes (such as the superiority of an actor or actor group) or the type of language (othering or discriminating language, e.g.). We propose a supervised learning approach to identify the hate speech-related features [9]. The ultimate goal is the design of a hate speech detector based on multi-layered feature extraction and a learning algorithm.


We start with bags of words containing names of persons (including synonyms) and locations. We are aware that there are promising approaches to automatically identify names of persons and locations in texts, using conditional random fields for instance [10]. Here, we collect relevant names manually. Further bags of words contain toxic and severe toxic expressions and words indicating negation. Severe toxic words usually stand for insults like "fool", "scumbag", "idiot" and the like. The most interesting bag of words is the one containing words or expressions that reflect obscene or inciting statements or indicate identity hate or threat. It also contains profane expressions that specify otherwise toxic words as expressions of obscenity, inciting, identity hate, or threat. Words like "fire" or "gas", for instance, are considered toxic. Combined with "send to" or "into", the whole expression becomes aggressive and inciting when referring to a target person or group.

Much like many established approaches for hate speech detection, we propose a learning process consisting of the following layers:

1. Cleansing obfuscated expressions, misspellings, typos and abbreviations.
2. Identification of toxic words or expressions (including word n-grams and key phrases) in the tweets along the storylines, and investigation of the proximity of these expressions to further specify their toxicity. The obtained words can be new ones or synonym expressions.
3. Adding suitable candidate words to existing bags of words.

The first step, the cleansing process, addresses toxic words that are intentionally or unintentionally misspelled or strangely abbreviated:

• "@ss", "sh1t", "glch 1 ns feu er d@mit", correct spelling: "gleich ins Feuer damit": "[throw him/her/them] immediately into the fire".
• "Wie lange darf der Dr*** hier noch morden?": "How long may this sc*** still murder?" "Dr***" stands for "Drecksack (scumbag)".
• "… die kuropten Politiker die ieben in saus und braus.", correct spelling: "… die korrupten Politiker, die leben in Saus und Braus": "… the corrupt politicians, they live in clover".

We recommend applying distance metrics or character pattern recognition in this situation and tagging these expressions as named entities in order to obtain transparent forms of obfuscated, misspelled, or abbreviated terms; a small sketch of this step follows below.
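The following sketch illustrates the cleansing step with simple character substitutions plus approximate matching against a tiny toxic-word lexicon; both the substitution table and the lexicon entries are assumptions made for the example, not the resources used by the prototype.

```python
# Sketch: normalize obfuscated spellings and map them to known lexicon entries.
import re
from difflib import get_close_matches

LEETSPEAK = {"@": "a", "1": "i", "0": "o", "3": "e", "$": "s"}   # assumed table
TOXIC_LEXICON = ["feuer", "gas", "drecksack"]                    # assumed entries

def cleanse(token: str) -> str:
    token = token.lower()
    token = "".join(LEETSPEAK.get(ch, ch) for ch in token)       # undo l33t spelling
    return re.sub(r"[^a-zäöüß]", "", token)                      # drop *, digits, etc.

def match_toxic(tweet: str):
    """Return (raw token, lexicon entry) pairs for likely toxic terms."""
    hits = []
    for raw in tweet.split():
        candidates = get_close_matches(cleanse(raw), TOXIC_LEXICON, n=1, cutoff=0.7)
        if candidates:
            hits.append((raw, candidates[0]))
    return hits

# Flags 'feu' (from the obfuscated 'feu er') as close to the lexicon entry 'feuer'.
print(match_toxic("glch 1 ns feu er d@mit"))
```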

Fig. 1. Cleansing and tagging of a single tweet containing misspellings and toxic and inciting expressions as discussed in the example above.


The example of Fig. 1 shows a schema that addresses a target ("politicians"), one toxic expression ("corrupt politicians") and one outright threat ("into the fire"). From the style of the tweet, we would probably assume that the toxic expression, as a general statement about politicians, is an insult. However, this is hard to determine in an automatic way without further information; in a country with a high level of corruption this statement might even be true. The close proximity of the toxic expression to the threat, that is, with only (presumably) profane expressions in between, clearly indicates an overall statement intended to incite somebody to do severe harm to politicians. Thus, we can conclude that the tweet has an inciting character. This conclusion can be reached by the system in an automatic way. The schema also works for similar mentions where different targets are addressed, such as a religious group, a minority, or a prominent person, in conjunction with a threat. The threat in the example is to throw somebody (indicated by "damit") into the fire. The system will also label instances of similar patterns as "inciting" that mention different threats like "[send them] to the furnace".

The tweet of Fig. 1 can be classified as hate speech even without consideration of the preceding storyline the tweet is part of. However, there are cases where we need background information. Imagine the statement "send them by freight train to …" instead of "into the fire". "Freight train" in the context of hate speech always has a connotation with the Holocaust. The cruelties of the Nazi regime provide important background information that we have to take into account in hate speech analysis. Sadly, each facet of these cruelties can be a social anchor, too.
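A toy version of this proximity rule is sketched below: given token positions for tagged target, toxic and threat expressions, a statement is escalated to "inciting" when a threat lies within a small window of a target or toxic mention. Window size, tag names and positions are illustrative assumptions.

```python
# Sketch: escalate a tweet to "inciting" when a threat sits close to a target/toxic tag.
def classify(tags, window=6):
    """tags: list of (token position, label) pairs, labels in {'target', 'toxic', 'threat'}."""
    threats = [p for p, label in tags if label == "threat"]
    anchors = [p for p, label in tags if label in ("target", "toxic")]
    for t in threats:
        if any(abs(t - a) <= window for a in anchors):
            return "inciting"
    if any(label == "toxic" for _, label in tags):
        return "toxic"
    return "profane"

# Tags roughly mirroring the tweet of Fig. 1 (token indices chosen for illustration).
example = [(2, "toxic"), (3, "target"), (9, "threat")]
print(classify(example))   # -> "inciting"
```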

Fig. 2. Example of the analysis of a tweet with reference to the social anchor “Kandel”.

With Fig. 2 we come back to our example of the social anchor "Kandel" as outlined above. The reference to this anchor with all its characteristics (facts) is important for correctly analyzing this tweet. The anchor provides information on the crime of a refugee that sparked the intense social dispute the tweet is referring to. One mention in the tweet indicates a clear negative opinion. Its close proximity to the fact (the conviction) indicates the author's repudiation of the conviction. Multiple question marks are often used to express an opinion opposite to the fact rendered in the related phrase; therefore, the system marks the expression as toxic. However, for the system there are also limits: the tweet expresses a strong opposition against the court decision. The vulgar phrase indicates this, as does the sentence with the three question marks. In the absence of the negative opinion, the question marks are the only weak signal pointing to the author's dismissive attitude. Of course, the term "discount" ("Rabatt") in the context of a judgment also reveals the author's objection.


From the information we have so far, however, we cannot automatically infer any negative connotation of the word "discount".

6 Conclusion

In this paper, we gave an impression of the extent to which named-entity recognition can support the automatic classification of hate speech in social media. The examples of offensive statements discussed here are quite typical of the ones we found in our collection of tweets. They also demonstrate that it is hard to interpret and classify statements in the absence of social anchors. Even the storyline of a single author is often rooted in one or more social anchors. As long as we can retrieve sufficient information about these anchors, we are in the position to automatically and correctly detect semantic relationships that essentially support our classification process. From a particular author's storyline, as the series of her or his tweets, we can deduce information on her or his attitude. However, there are limits. Many, possibly important, utterances pass unnoticed through the automatic process of hate speech detection if the system lacks the necessary context information. Nevertheless, by indicating toxic terms or expressions we can support humans who fight against hate speech in social media. We can give them weak signals that point to offensive and aggressive language and make their work more efficient.

References

1. Ying, Y., Zhou, Y., Zhu, S., Xu, H.: Detecting offensive language in social media to protect adolescent online safety. In: Proceedings of the 2012 International Conference on Privacy, Security, Risk and Trust, PASSAT 2012, and the 2012 International Conference on Social Computing, SocialCom 2012, Amsterdam, Netherlands, pp. 71–80 (2012)
2. Mothe, J., Ramiandrisoa, F., Rasolomanana, M.: Automatic keyphrase extraction using graph-based methods. In: Proceedings of the 33rd Annual ACM Symposium on Applied Computing, pp. 728–730 (2018)
3. Xu, J.-M., Jun, K.-S., Zhu, X., Bellmore, A.: Learning from bullying traces in social media. In: Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 656–666 (2012)
4. Bollegala, D., Atanasov, V., Maehara, T., Kawarabayashi, K.-I.: ClassiNet: predicting missing features for short-text classification. ACM Trans. Knowl. Discov. Data (TKDD) 12(5), 1–29 (2018)
5. Anh, V.N., Moat, A.: The role of anchor text in ClueWeb09 retrieval. In: Proceedings of TREC, TREC'10 (2010)
6. Eiron, N., McCurley, K.S.: Analysis of anchor text for web search. In: Proceedings of SIGIR, SIGIR 2003, pp. 459–460 (2003)
7. Lee, C.-J., Croft, W.B.: Incorporating social anchors for ad hoc retrieval. In: Proceedings of the 10th Conference on Open Research Areas in Information Retrieval, pp. 181–188 (2013)
8. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3, 1157–1182 (2003)


9. Chatzakou, D., Leontiadis, I., Blackburn, J., de Cristofaro, E., Stringhini, G., Vakali, A., Kourtellis, N.: Detecting cyberbullying and cyberaggression in social media. ACM Trans. Web (TWEB) 13(3), 1–51 (2019)
10. Sutton, C., McCallum, A.: An introduction to conditional random fields. Found. Trends Mach. Learn. 4(4), 267–373 (2012)

A Study of Social Media Behaviors and Mental Health Wellbeing from a Privacy Perspective

Tian Wang and Masooda Bashir

School of Information Sciences, University of Illinois, Urbana-Champaign, Champaign, USA
{tianw7,mnb}@illinois.edu

Abstract. Mental healthcare services are insufficient under the current circumstances due to growing populations with mental health issues and the lack of the mental health professionals, services, and programs that are needed. Traditional methods are often time consuming, expensive, and not timely. At the same time, an increasing number of people are using social media to interact with others and to share their personal stories and reflections. In this study we examined whether online users' social media activities are influenced by their mental well-being. To carry out this research we compared the Twitter activities of participants who reported high symptoms of depression with those of participants with lower or no symptoms of depression. Our results confirm the influence on their activities, in addition to interesting insights. We believe these findings can be beneficial to mental health care providers if users' privacy is preserved.

Keywords: Social media behaviors · Mental health

1 Introduction

It is known that mental health issues have been a growing concern these days. In particular, statistics presented by the National Institute of Mental Health [1] state that young adults aged 18–25 years have the highest prevalence of any mental issues (25.8%) compared to people aged 26–49 years (22.2%) and elders (13.8%). However, traditional methods for mental healthcare have been considered insufficient since they mainly rely on self-reported information, behaviors reported by family and friends, and a mental status examination [2]. Therefore, there is a need for various strategies to be explored and considered in order to meet the high demand for mental healthcare.

People use social media applications more frequently nowadays because of the development of the Internet. Usage of social media has increased from 9% in 2005 to 89% in 2013 among young adults [3]. Use of social media involves the creation and sharing of information, ideas, interests, and feelings. As a result, increasingly more personal information is being shared publicly by social media users, intentionally or unintentionally. It is possible for social media companies and third-party services to collect and use such information to categorize users into groups, and then send targeted advertisements, provide certain services, or even violate users' privacy. This raises the privacy concern that social media activities can be used to identify and target users even though they are not willing to share their personal data, and people may feel violated or embarrassed because of this targeting [4].


In this study, Twitter is selected as the specific social media platform because of the large number of Twitter users. As of 2019, the number of daily active Twitter users is 139 million [5], with a total of 500 million Tweets sent each day [6]. The goal of this study is to examine the relationship between the social media behaviors of young adults in the U.S. and their mental health well-being, and to discover whether users with mental health issues, especially depression, are identifiable by their Twitter activities. It is also very important to consider the privacy violations that may occur if one's mental health can be determined from their social media activities and how that may make them vulnerable. At the same time, previous research findings show that the use of social media is related to an individual's self-judgement and level of depression and anxiety [7]. Thus it is plausible to examine whether one's social media activities are influenced by their mental well-being. To test this hypothesis, we conducted a study to address the following research questions:

• Does one's mental well-being influence their social media activities?
• Are there individual differences among those that report high levels of depressive symptoms and those that have lower or no depressive symptoms?

2 Literature Review

Usage of social media has a significant impact on an individual's self-evaluation and self-enhancement. The study by Vogel et al. [8] found that people who frequently used Facebook often had poorer self-esteem. Meanwhile, it is reported that self-esteem is linked to a low number, or even the absence, of psychological symptoms like depression or anxiety [9]. A study by Sowislo and Orth [10] further evaluated the relationship between low self-esteem and two low psychological adjustments, depression and anxiety, and the results showed that the influence of low self-esteem on depression and anxiety is robust.

Prior studies suggest that individuals' social media behaviors are strongly related to their health condition, both physically and mentally. Choudhury et al. [2] found that social media behaviors could be used to identify the onset of major depression by analyzing Twitter postings from a set of users who were diagnosed with clinical depression. Similar work by Choudhury et al. [11] successfully built prediction models to forecast significant postpartum changes and depression in new mothers based on their social media behaviors. Besides that, social media has been widely used in medical fields, such as predicting biomedical outcomes [12], predicting infectious disease risk [13], and tracking disease outbreaks [14].

Similarly, evidence from previous research indicates that social media activities can be used to identify users' age, gender, and personality [15], but very few studies have considered how this may lead to privacy violations. In today's digital world, online privacy has become a serious concern, and the increased number of users on social media makes this even more challenging. More or less, people are sharing their personal lives and interacting even more with others on social media than on other online services [16].

A Study of Social Media Behaviors and Mental Health Wellbeing

139

services [16]. At the same time, it is becoming much more difficult for users to protect their privacy in the age of big data and sharing environment where data is increasingly collected and stored by commercial companies and users are not the only one to control their data [17]. Similar work by Qi and Edgar-Nevill [18] also reports that the major privacy concern was not only about the way that data is being gathered on social media but also where such data goes afterwards. Considering the possibility of identifying people based on their mental well-being or illness on social media is an important and timely topic because this type of analyses not only has tremendous benefits but also brings about privacy vulnerabilities for social media users. In particular, people with mental health issues or illness could be easily targeted on social media, third-party services or other individuals with harmful intentions who may want to take advantage of them. For example, a series of self-harmcausing tasks, called The Blue Whale Challenge, have been propagated via social media in recent years, and statistics showed that the largest number of potential victims involved in the game were people who were depressed [19]. Another big privacy concern is that individuals’ personal information collected, stored, and shared by social media services might be sold to advertisers [20].

3 Methodology

3.1 Data Collection

To conduct this research, we designed an online study. Participants for the online survey were recruited from the US via the crowdsourcing platform Amazon Mechanical Turk (MTurk). The online survey started with the Patient Health Questionnaire (PHQ-9); participants were then asked to provide a link to their public Twitter profile if they voluntarily chose to do so, and the survey ended with some general questions about their Twitter usage and demographics. The PHQ-9 is a reliable and valid measure of depression severity and has been widely used as a clinical and research tool [21]. It was selected for this study to assess participants' mental health well-being. Since the study focuses on young adults and their Twitter behaviors, only Twitter account holders aged 18–25 were recruited for this survey.

A total of 134 participants took the online survey, but after excluding incomplete surveys, 83 responses were retained for further analyses, with 53.01% of the participants being female and 46.99% male. 72.29% of the participants held a bachelor's degree or higher, and about half of the participants (43) were from the Northeast and Southeast regions of the US. Regarding social media usage, 51 participants indicated that they use Twitter on a daily basis.

Participants' responses to the PHQ-9 were scored based on the guidelines provided with the measure. Based on their PHQ-9 scores, participants were divided into two groups: participants with low or no presence of depressive symptoms (scoring less than 10 on the PHQ-9 screening test), and participants with symptoms of depression (scoring 10 or higher on the PHQ-9). After summarizing the data, 29 participants showed symptoms of depression while 54 had little or no presence of depressive symptoms.
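As a rough illustration of this grouping step, the sketch below sums the nine PHQ-9 items (each scored 0–3) and splits respondents at the conventional cut-off of 10. The variable names and data layout are hypothetical and not taken from the study's materials.

```python
# Hypothetical sketch of the PHQ-9 scoring and grouping step described above.
# Each PHQ-9 item is scored 0-3; totals of 10 or higher are treated as
# indicating depressive symptoms, totals below 10 as low/no symptoms.

def phq9_total(item_scores):
    """Sum the nine PHQ-9 item scores (each 0-3)."""
    assert len(item_scores) == 9
    return sum(item_scores)

def assign_group(item_scores, cutoff=10):
    """Return the group label used in this study."""
    return "depressive_symptoms" if phq9_total(item_scores) >= cutoff else "low_or_no_symptoms"

# Example: one respondent's answers to the nine items (total 11)
print(assign_group([2, 1, 2, 1, 1, 1, 1, 1, 1]))  # -> "depressive_symptoms"
```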



Data on each user's Twitter profile, including name, photo, description, location and personal website (if applicable), were collected manually. All Twitter postings were retrieved and extracted using Twitter's official API. The API allows researchers to send HTTP GET requests (in our case through Postman) in order to retrieve a JSON file from a particular user's timeline. A total of 615,504 Tweets were retrieved from the 83 participants' public links.

As a first step in preparing for data analysis, the Twitter features to be measured needed to be specified. Three features were included: engagement, depressive language, and level of personal information shared publicly. Measurements of engagement and depressive language were defined and explained by Choudhury et al. in 2013 [11], and we followed similar methods in this study.

Engagement: Adopting previous research methods, we recorded the numbers of Tweets, Retweets, replies, likes, followers, and followings. In combination, these counts can be used to measure how active a user is on the platform. A large number of Tweets may imply that the user spends plenty of time on Twitter. The numbers of Retweets and replies show how frequently the user interacts with other people. Followings and followers are indicators of how the user is connected to the social media community [11].

Depressive Language: Depressive words used and negative attitudes exhibited in a participant's Twitter activities were recorded. Depressive language was measured in two parts: the frequency of depressive words used in Twitter postings [11], and whether any negative attitude was exhibited in Twitter activities (liking negative Tweets, including depressive words in the profile description, using dark color backgrounds, etc.). We adopted previous research methods in carrying out this analysis.

Level of Personal Information Shared Publicly: The level of personal information shared by a participant was measured by the number of personal identifiers disclosed on their Twitter profile, for example selfie photos, current locations, real names, and real-life events. The level of personal information shared publicly can serve as an indicator of whether the user is careful about privacy on social media, in this case Twitter.

It is important to note that the study design and questionnaires were reviewed and approved by the Institutional Review Board (IRB) at the academic institution. The survey was anonymous and no personally identifiable information was collected. In addition, the National Suicide Prevention Lifeline and Crisis Text Line were provided at each step of the online survey in case participants felt upset during the study and needed support.
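For readers unfamiliar with the retrieval step, the minimal sketch below issues the same kind of HTTP GET request in Python, assuming the statuses/user_timeline endpoint of Twitter's REST API (v1.1) that was available at the time of the study; the bearer token and screen name are placeholders, and this is not the authors' exact tooling (they used Postman).

```python
# Hedged sketch: retrieving a user's public timeline as JSON via Twitter's
# REST API. Credentials and the screen name are placeholders.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential
URL = "https://api.twitter.com/1.1/statuses/user_timeline.json"

def fetch_timeline(screen_name, count=200):
    """Fetch up to `count` recent Tweets for one public account."""
    response = requests.get(
        URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"screen_name": screen_name, "count": count, "tweet_mode": "extended"},
    )
    response.raise_for_status()
    return response.json()  # list of Tweet objects (JSON)

tweets = fetch_timeline("example_user")
print(len(tweets), tweets[0]["full_text"] if tweets else "no tweets")
```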

3.2 Results

Logistic regression and classification were used to analyze users' information, including profile information, media postings, numbers of tweets, likes, followings and followers, as well as Twitter postings (Tweets).

Level of Engagement: While it may be typical to assume that participants with no or low symptoms of depression would be more active on social media than participants with symptoms of depression, our results reveal a different pattern. Participants with high scores for depressive symptoms also had a higher than average number of total Tweets (12,609 vs. 1,056 for participants with little or no signal), calculated as the average statuses_count for each group. To explore this phenomenon further, Tweet content was evaluated manually. People with high scores for depressive symptoms posted many Retweets that seemed irrelevant to their real lives. These Retweets included, but were not limited to, current trending news, lottery posts, quotes from movies, and posts from celebrities on Twitter. Meanwhile, participants with lower scores for depressive symptoms had fewer posts, and participants with no depressive symptoms had more original posts in which they themselves wrote the content. Another interesting finding is that participants with no symptoms of depression usually had more followers than followings, while participants with depressive symptoms tended to have far more followings than followers.

Use of Negative Words or Expressions: Our analysis showed that participants with high scores for depressive symptoms tended to use more negative words towards events/news, and more depressive words or expressions in their overall Twitter activities, not only in postings but also in their profile descriptions, retweets, likes, and profile backgrounds. They are more likely to express loneliness, frustration, and sarcasm. Table 1 shows some examples of content retrieved from multiple participants' Twitter activities:

Table 1. Examples of content from Twitter activities

Type | Example
Tweet | "Good job stop making fun of people for liking shit that makes them happy just bc you don't feel the same way about it…"
Retweet | "Sometimes you just need to distance yourself from people. If they care, they'll notice. If they don't, you know where you stand."
Profile background | "Could you make me a cup of team to open my eyes in the right way?" (with a completely black background)
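The engagement comparison described above (per-group averages of profile counts such as statuses_count) could be computed along the lines of the sketch below. The DataFrame layout and the example numbers are hypothetical; only the grouping logic reflects the analysis reported here.

```python
# Illustrative sketch of the engagement comparison: averaging Twitter profile
# counts per PHQ-9 group. The DataFrame layout and values are hypothetical.
import pandas as pd

profiles = pd.DataFrame({
    "group": ["depressive_symptoms", "low_or_no_symptoms", "depressive_symptoms"],
    "statuses_count": [15000, 900, 10200],
    "followers_count": [80, 450, 120],
    "friends_count": [600, 300, 950],   # accounts the user follows
})

# Mean engagement metrics per group (cf. the 12,609 vs. 1,056 average Tweets reported)
print(profiles.groupby("group")[["statuses_count", "followers_count", "friends_count"]].mean())
```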

Level of Personal Information Disclosed: In our study and observations, participants who reported no depressive symptoms were more open to using selfies as profile photos and posting photos of their family, friends, and themselves, while people who reported depressive symptoms tended to avoid posting any self-related photos on Twitter. Interestingly, participants who reported symptoms of depression tended to use irrelevant pictures instead of selfies for their profile photos. Furthermore, even when these users did use a photo of themselves, their faces were not shown clearly enough to identify them. In contrast, participants who reported no symptoms of depression were more likely to post photos of real-life events, such as going out with friends or dinner with family.



4 Discussion

This exploratory study provides initial findings that address our research questions. Our results indicate that participants who reported symptoms of depression do behave differently on Twitter compared with people with little or no symptoms of depression. These findings suggest that mental well-being does influence certain activities on the selected social media platform. For example, those with reported depressive symptoms prefer to retweet content that does not seem relevant to their personal lives, and they try to avoid disclosing their personal identity on the platform. Their language use on this platform tends to be passive, and negative attitudes such as sadness, frustration, and loneliness towards different events are expressed. In addition, these users tend to focus on their own lives and often express judgement of others from their own point of view. In contrast, participants who reported no or low symptoms of depression tend to be more open to the online community. They tend to have larger social networks and interact with other users more frequently on the selected platform, and they are inclined to share a lot of their personal lives in ways that can identify them.

While this insight from online users' social media activities can be a critical step towards non-traditional ways of detecting mental wellness or illness, it should be used with caution. First, more extensive research needs to be done to confirm and replicate these findings. Next, with the ever-increasing number of social media services and companies capable of collecting and processing large amounts of user data from social media activities, we need to be aware and protective of online users' privacy, especially as it relates to their mental well-being. There is evidence that some companies have already misused such data to analyze user preferences in order to send targeted promotions and advertisements. Privacy protections are even more necessary when social media companies may sell their data to third-party services for processing, or if such data is breached by malicious entities. Online users who have mental health concerns may be more vulnerable or prone to attacks due to their increased online activity. While it may be unavoidable that social media activities can identify or reveal online users' mental health wellness or illness, because of the way users share in online environments, intentionally or unintentionally, ethical and technical considerations should be in place that preserve users' privacy and discourage data sharing practices intended purely for profit.

On the other hand, the use of social media activities can provide a new perspective on the detection and delivery of mental health care. Traditional methods are considered insufficient and delayed, and since social media activities can reflect online users' mental well-being, this type of information can provide a supplemental set of information to mental health care professionals in providing care to their clients.



5 Future Work

This preliminary study and its results do indicate a relationship between social media users' online activities and their mental well-being. However, additional extensive research is required to confirm our results and to consider how such information could be used to design an early-warning system that sends alert messages to social media users who may have chronic mental illness, or a detection tool for when their symptoms and condition move through different phases. In addition, it is important to note that our limited results show how easy it may be to detect users' mental well-being via their social media activities; hence, online users should take more measures to protect their privacy.

References

1. National Institute of Mental Health: Mental Illness (2017). https://www.nimh.nih.gov/health/statistics/mental-illness.shtml
2. De Choudhury, M., Gamon, M., Counts, S., Horvitz, E.: Predicting depression via social media. In: Seventh International AAAI Conference on Weblogs and Social Media, June 2013
3. Brooks, D.C., Pomerantz, J.: ECAR Study of Undergraduate Students and Information Technology, 2017. EDUCAUSE (2017)
4. Zhang, H., Guerrero, C., Wheatley, D., Lee, Y.S.: Privacy issues and user attitudes towards targeted advertising: a focus group study. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 54, no. 19, pp. 1416–1420. SAGE Publications, Sage CA, September 2010
5. Lunden, I.: Twitter Q2 beats on sales of $841M and EPS of $0.20, new metric of mDAUs up to 139M, July 2019. https://techcrunch.com/2019/07/26/twitter-q2-earnings/
6. Twitter: The power of Twitter (2019). https://business.twitter.com
7. Woods, H.C., Scott, H.: #Sleepyteens: social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self-esteem. J. Adolesc. 51, 41–49 (2016)
8. Vogel, E.A., Rose, J.P., Roberts, L.R., Eckles, K.: Social comparison, social media, and self-esteem. Psychol. Popular Media Cult. 3(4), 206 (2014)
9. Orth, U., Robins, R.W., Trzesniewski, K.H., Maes, J., Schmitt, M.: Low self-esteem is a risk factor for depressive symptoms from young adulthood to old age. J. Abnorm. Psychol. 118(3), 472 (2009)
10. Sowislo, J.F., Orth, U.: Does low self-esteem predict depression and anxiety? A meta-analysis of longitudinal studies. Psychol. Bull. 139(1), 213 (2013)
11. De Choudhury, M., Counts, S., Horvitz, E.: Predicting postpartum changes in emotion and behavior via social media. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3267–3276. ACM, April 2013
12. Young, S.D.: Behavioral insights on big data: using social media for predicting biomedical outcomes. Trends Microbiol. 22(11), 601–602 (2014)
13. Rushmore, J., Caillaud, D., Matamba, L., Stumpf, R.M., Borgatti, S.P., Altizer, S.: Social network analysis of wild chimpanzees provides insights for predicting infectious disease risk. J. Anim. Ecol. 82(5), 976–986 (2013)
14. Schmidt, C.W.: Trending now: using social media to predict and track disease outbreaks (2012)



15. Schwartz, H.A., Eichstaedt, J.C., Kern, M.L., Dziurzynski, L., Ramones, S.M., Agrawal, M., Shah, A., Kosinski, M., Stillwell, D., Seligman, M.E.P., Ungar, L.H.: Personality, gender, and age in the language of social media: the open-vocabulary approach. PLoS ONE 8(9), e73791 (2013)
16. Herhold, K.: How People Interact on Social Media in 2019 (2019). https://themanifest.com/social-media/how-people-interact-social-media
17. Smith, M., Szongott, C., Henne, B., Von Voigt, G.: Big data privacy issues in public social media. In: 2012 6th IEEE International Conference on Digital Ecosystems and Technologies (DEST), pp. 1–6. IEEE, June 2012
18. Qi, M., Edgar-Nevill, D.: Social networking searching and privacy issues. Inf. Secur. Techn. Rep. 16(2), 74–78 (2011)
19. Khattar, A., Dabas, K., Gupta, K., Chopra, S., Kumaraguru, P.: White or Blue, the Whale gets its Vengeance: a social media analysis of the Blue Whale Challenge. arXiv preprint arXiv:1801.05588 (2018)
20. Korolova, A.: Privacy violations using microtargeted ads: a case study. In: 2010 IEEE International Conference on Data Mining Workshops, pp. 474–482. IEEE, December 2010
21. Kroenke, K., Spitzer, R.L., Williams, J.B.: The PHQ-9: validity of a brief depression severity measure. J. Gen. Intern. Med. 16(9), 606–613 (2001)

Risk Analysis for Ethical, Legal and Social Implications of Information and Communication Technologies in the Forestry Sector

Oliver Brunner, Katharina Schäfer, Alexander Mertens, Verena Nitsch, and Christopher Brandl

Chair and Institute of Industrial Engineering and Ergonomics, RWTH Aachen, Bergdriesch 27, 52062 Aachen, Germany
{o.brunner,k.schaefer,a.mertens,v.nitsch,c.brandl}@iaw.rwth-aachen.de

Abstract. The Aachen method for the identification, classification, and risk analysis of innovation-based problems (AMICAI) was employed with relevant stakeholders in order to identify and analyze ethical, legal and social aspects (ELSA) related to the use of information and communication technologies (ICT) in the German forestry sector. Based on the problem of different demands due to regional differences and the variety of stakeholders, a need for standards and regularities was reported and a lack of confidence in the functionality of ICT was identified. Hereby, the greatest concern was the insufficient mobile net coverage in the forest. However, problems regarding data protection were most concerning for the participants. Especially an uncertainty regarding data collection and processing was identified. The AMICAI method encouraged stakeholders to change their perspective, which enabled them to understand that their individual problems are interdependent. Thus, it was found that the method can make a substantial contribution towards responsible research and innovation in the field of ICT development.

Keywords: ELSI · ELSA · AMICAI · Forestry sector · Information and communication technologies

1 Introduction

Technological innovations and the ongoing digitalization are well advanced in many areas of industry. An example is the use of smart glasses to scan orders and provide the user with order-specific information through augmented reality, which is commonly used in repair work for complaints processing. However, some industry sectors are not yet able to take sufficient advantage of the new opportunities offered by technological innovations.

The German forestry sector is facing exactly this challenge, which is in particular related to the large number and diversity of stakeholders involved in this sector. For example, sawmills and machine producers already utilize a high degree of automation, and individual machine tools are highly developed. Therefore, technologically advanced machines and equipment are used for wood harvesting. However, the use of information and communication technologies (ICT) in the forest is challenging due to difficulties with mobile net coverage in the forest and a lack of standardized communication infrastructure amongst the stakeholders. This leads to the problem that some processes during wood harvesting do not operate at an equally technologically advanced level, thus missing out on many advantages of ICT use. In particular, transmitting information quickly from the forest to the outside, or vice versa, is difficult. Existing ICT could offer appropriate solutions for this problem or could help some processes reach a more advanced level of digitalization by providing information through visualization and by managing communication automatically.

With the advancement of technology and processes, the demands of employees who might benefit from the use of such systems and the requirements for IT systems themselves increase [1]. In order to consider those demands sufficiently, it is necessary to involve the different stakeholders concerned in the day-to-day work of research and development [2]. To avoid undesirable effects of an innovation, the consideration of ethical, legal and social aspects (ELSA) or implications (ELSI) is important at all stages of research and development (R&D) processes. Involving different stakeholders as well as society in the R&D process is also known as Responsible Research and Innovation (RRI). Ribeiro et al. [3] conclude that RRI has become a prominent factor in European Union research policy over recent years. Zwart et al. [4] give an overview and describe the origin and development of ELSI, ELSA and RRI.

The recently published Aachen method for the identification, classification, and risk analysis of innovation-based problems (AMICAI) is a new approach in the field of RRI to identify and quantify risks of technological innovations regarding ELSI [5]. The use of this method enables the identification, quantification and discussion of potentially occurring problems, as well as their effects and causes, early in the process of technology development. AMICAI reflects the ethical frameworks of MEESTAR [6] and Heintz et al. [7] insofar as it encourages the analysis of ELSA from different perspectives. The stakeholder selection is based on Yves Fassin [8] and Achterkamp and Vos [9].

2 Method

To identify, quantify and discuss potential innovation-based problems of ICT in the German forestry sector, an AMICAI workshop was conducted with relevant stakeholders and analysts who are experts in the use and implementation of ICT for industrial applications.

2.1 AMICAI

The AMICAI method operates on the basis of failure mode, effects and criticality analysis (FMECA) [10]. Hereby, the involved stakeholders and analysts are instructed to assess ELSI with respect to their likelihood of occurrence, severity of impact and detectability. AMICAI is divided into three phases: Preparation, Risk Analysis and Measures. The "Preparation" phase includes the definition of scope and objectives as well as their context and the classifications for the analysis. In addition, a proper selection of analysts and stakeholders is part of the preparation. In the "Risk Analysis" phase, the first step is to identify problems. For each problem, the problem's effect, cause and detection methods have to be identified. In order to assess the risk of the problem, the severity (S) of the problem's effect, the probability of occurrence (O) and the detectability (D) of the problem's cause are quantified. The respective scales for rating S (insignificant – catastrophic), O (improbable – very likely) and D (almost certain – completely uncertain) range from 1 to 10. A causal chain of a problem is represented by the problem's effect and cause as well as the problem itself. By multiplying S, O and D, a risk priority number (RPN) is calculated. With RPNs, causal problem chains can be prioritized. In the "Measures" phase, ideas on how identified problems could be addressed are formulated. The structured collection of ideas can serve as indicators to help experts develop adequate measures to address the identified problems [5].
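The RPN arithmetic is simple enough to verify directly. The short sketch below recomputes the risk priority numbers for the three causal chains later reported in Table 2; it is only a numerical illustration of RPN = S × O × D, not part of the AMICAI tooling.

```python
# Risk priority number (RPN) as used in AMICAI/FMECA: RPN = S * O * D.
# The (S, O, D) triples below are the three causal chains discussed in Table 2.
causal_chains = {
    "Transparent employee":    (8, 3, 10),
    "No available data":       (5, 4, 10),
    "Misuse of personal data": (10, 10, 1),
}

for problem, (severity, occurrence, detectability) in causal_chains.items():
    rpn = severity * occurrence * detectability
    print(f"{problem}: RPN = {severity} * {occurrence} * {detectability} = {rpn}")
# -> 240, 200 and 100, matching Table 2
```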

2.2 Workshop Design

Prior to the workshop itself, the "Preparation" phase was conducted. Hereby, the participating stakeholders and analysts, as well as the perspectives from which ELSI should be discussed, were selected. Table 1 presents an overview of the participating stakeholders and the discussed perspectives. The "Risk Analysis" phase, i.e. the identification, quantification and discussion of problems regarding ICT, was conducted at the workshop. The "Measures" phase was not part of this workshop and needs to be carried out in a subsequent step. The context of the workshop was specifically the use of ICT in the forestry sector. This scope was intentionally defined broadly so as to identify, in a first step, the perspectives that concern the different stakeholders most. In a second step, some causal chains of the most concerning perspectives were discussed in order to identify pressing problems of the involved stakeholders. Additional, more detailed workshops, e.g. considering a specific technology, are planned as a third step.

Table 1. Overview of participating stakeholders and discussed perspectives

Stakeholders: lumberjack, machine operator, forestry company, forest owner, forester, wood purchaser, machine producer, chief instructor, apprentice
Perspectives: data security, functionality, regularities and standards, rationalization, economy, society, safety, health, environment, personality development

2.3 Participants

The workshop was conducted with 18 different stakeholders of the German forestry sector according to Table 1. Additionally, 10 analysts, who are experts in the implementation and use of ICT for industrial applications, attended the workshop. The participants analyzed the risk of ELSI of ICT use in the forestry sector. The different stakeholder groups were represented approximately equally.

2.4 Procedure

The AMICAI method and the workshop procedure were explained to the participating stakeholders. Each participant was asked to identify the most concerning problems regarding ICT use in his/her daily work, to write them individually on cards, and to pin these on a bulletin board. The collected problems were then prioritized: every participant marked the three problems that were most pressing in their view. Moderators of the workshop clustered the mentioned problems and classified them according to the pre-defined perspectives. All participants discussed these problems from their point of view. The focus was to identify the most concerning perspective. For this perspective, some causal chains were identified, quantified and discussed in more detail. Additionally, participants were instructed to identify detection methods for the problems. If a detection method was difficult to find, the problem was rated with a score of 10 on the scale "D".

3 Results

From the perspective "functionality", problems with confidence in ICT were identified. A great concern was the mobile net coverage in the forest, as well as the reliability and transparency of the technology and of the data processing. The problems identified from the perspective "regularities and standards" were different demands due to regional differences and the variety of different stakeholders. Also, a low level of trust in regularities and standards was reported. In general, a fear of rationalization, concerns about financial feasibility and a lack of acceptance from society were mentioned. The perspective which the stakeholders considered most important was "data protection". Hence, some causal chains for this category were discussed in more detail (see Table 2).

The problem of being an overly transparent employee was discussed from the stakeholder view of a machine operator. Specifically, employees felt uncomfortable with the idea that data are collected about the work they are doing, which potentially results in employees becoming unmotivated. The severity (S) of this effect was rated with a score of eight. A lack of data privacy was identified as the problem cause. The probability of occurrence (O) of this causal chain was rated with a score of three. A detection method (D) was not identified, which resulted in a rating score of ten. The calculated risk priority number (RPN) for this causal chain was therefore 240.

Having no available data for process optimization was identified as a problem from the view of a forestry company. This was considered to lead to the problem effect of low optimization potential and was rated with a score of five on the scale "S". The problem cause was identified as missing declarations of consent to use personal data for process optimization. This was rated with a score of four on the scale "O" and ten on the scale "D", because no detection method was found. This resulted in an RPN of 200 for this causal chain.



Another discussed problem was the misuse of personal data, from the stakeholder view of a forestry company. Employees felt controlled by continuous performance monitoring. The identified effect was that they could lose trust in their employer. Both "S" and "O" were rated with a score of ten. As a possible detection method, a review of the legal framework for performance monitoring was suggested. This was rated with a score of one on the scale "D", which resulted in an RPN of 100 for this causal chain.

Table 2. Discussed causal chains of the conducted AMICAI workshop
Context: Information and communication technologies in the forestry sector

Stakeholder | Perspective | Problem | Problem effect | S | Problem cause | O | Detection method | D | RPN
Machine operator | Data protection | Transparent employee | Unmotivated employees | 8 | Lack of data privacy | 3 | - | 10 | 240
Forestry company | Data protection | No available data | Low optimization potential | 5 | No declaration of consent | 4 | - | 10 | 200
Forestry company | Data protection | Misuse of personal data | Loss of trust | 10 | Performance monitoring | 10 | Review of legal framework | 1 | 100

4 Discussion

Using the AMICAI method in a workshop with various stakeholders, a number of potential problems of ICT use in the forestry sector were identified. The identified problems were very different in terms of their content, which is likely attributable to the high number of different stakeholders in the forestry sector. Individual stakeholders obviously have their own points of view and objectives. However, "data protection" was clearly the most concerning perspective for the participants. For two of the discussed causal chains, no detection method was found. Therefore, D was quantified with 10, which resulted in the highest RPNs for these two causal chains. This indicates an uncertainty regarding the legal framework of data acquisition and processing, which is also a problem in other fields [11].

Comparing the discussed causal chains shows that they are closely related to each other, even though they represent different problems. Machine operators felt uncomfortable and controlled by the acquisition of personal data, whereas forestry companies wanted to gather data to improve their processes and work efficiently on a continuous basis. If employees feel that their performance is monitored, they are likely to feel stressed and could lose trust in their employer, which might result in decreasing work motivation. This would be a very undesirable effect for both the machine operator and the forestry company. However, forestry companies want to optimize their processes, for which they need to acquire data. Optimized processes, on the other hand, are also in the interest of the workers. The stakeholders thus put themselves in the position of the other, which enabled them to understand that their individual problems are interdependent.

Taking also the identified problems regarding functionality as well as regularities and standards into account, it may be assumed that there is a general uncertainty with the ongoing digitalization and a lack of confidence in the use of ICT in the forestry sector. This reveals a great demand for innovations that create transparent solutions for data protection and a clarification of the legal framework.

A number of lessons were learned in the course of this study with regard to the application of the AMICAI method. The approach in this workshop was to define a broad scope to identify the most concerning perspectives in a first step. In a second step, detailed discussions about problems regarding "data protection" were conducted. The group size of the workshop was very large in order to have different stakeholders in the discussion. In a smaller group and with a more narrowly defined scope, discussions might have been more efficient. Further workshops with specific innovations addressing the discussed problems are planned.

5 Conclusion

The findings indicate that in the German forestry sector there are many different points of view and problems regarding ELSI, which is probably due to the large number of different stakeholders involved. There is also a differing level of digitalization achieved by those stakeholders. For example, machine producers already work with a high degree of automation and highly developed processes, whereas the users of the machines do not have the same advantages of digitalization and automation in the forest. Therefore, it is important for the stakeholders to understand the points of view of others, in order to develop solutions which appropriately match the demands of the respective stakeholders. The AMICAI method seems to be an adequate tool to encourage stakeholders to change their perspective in an effective discussion. The participants of the workshop understood that their individual problems are interdependent. The findings can help to reduce undesirable effects of innovation-based problems. However, further workshops with specific innovations have to be conducted in order to derive more specific recommendations for RRI and related digitalization processes in the forestry sector.

Acknowledgement. This workshop was conducted in the project KWH40 (grant number EFRE-0200459) funded by the Federal Ministry. The authors would like to express their gratitude for the support they have received.

References

1. Salemink, K., Strijker, D., Bosworth, G.: Rural development in the digital age: a systematic literature review on unequal ICT availability, adoption, and use in rural areas. J. Rural Stud. 54, 360–371 (2017)
2. Greenbaum, D.: Expanding ELSI to all areas of innovative science and technology. Nat. Biotechnol. 33(4), 425 (2015)



3. Ribeiro, B.E., Smith, R.D., Millar, K.: A mobilising concept? Unpacking academic representations of responsible research and innovation. Sci. Eng. Ethics 23(1), 81–103 (2017)
4. Zwart, H., Landeweerd, L., van Rooij, A.: Adapt or perish? Assessing the recent shift in the European research funding arena from 'ELSA' to 'RRI'. Life Sci. Soc. Policy 10(1), 11 (2014)
5. Brandl, C., Wille, M., Nelles, J., Rasche, P., Schäfer, K., Flemisch, F.O., Frenz, M., Nitsch, V., Mertens, A.: AMICAI: a method based on risk analysis to integrate responsible research and innovation into the work of research and innovation practitioners. Sci. Eng. Ethics 26, 1–23 (2019)
6. Manzeschke, A.: MEESTAR – ein Modell angewandter Ethik im Bereich assistiver Technologien. In: Weber, K., Frommeld, D., Manzeschke, A., Fangerau, H. (eds.) Technisierung des Alltags – Beitrag zu einem guten Leben, pp. 263–283. Steiner, Stuttgart (2015)
7. Heintz, E., Lintamo, L., Hultcrantz, M., Jacobson, S., Levi, R., Munthe, C., et al.: Framework for systematic identification of ethical aspects of healthcare technologies: the SBU approach. Int. J. Technol. Assess. Health Care 31(3), 124–130 (2015)
8. Fassin, Y.: The stakeholder model refined. J. Bus. Ethics 84(1), 113–135 (2009)
9. Achterkamp, M.C., Vos, J.F.: Investigating the use of the stakeholder notion in project management literature, a meta-analysis. Int. J. Project Manage. 26(7), 749–757 (2008)
10. IEC 60812: Failure modes and effects analysis (FMEA and FMECA) (2018)
11. Gahi, Y., Guennoun, M., Mouftah, H.T.: Big data analytics: security and privacy challenges. In: 2016 IEEE Symposium on Computers and Communication (ISCC), pp. 952–957. IEEE, June 2016

Traffic Scene Detection Based on YOLOv3

Qian Yin, Ruyi Yang, and Xin Zheng

School of Artificial Intelligence, Beijing Normal University, Beijing, China
{yinqian,zhengxin}@bnu.edu.cn, [email protected]

Abstract. With the development of urbanization and the ongoing big data scene, intelligent object detection in traffic scenes has become a hot issue. In this paper, the KITTI data set, the largest computer vision algorithm evaluation data set in the world, is used to train a YOLOv3 model. The k-means method is used to adjust the anchor parameters to achieve more accurate results. The experimental results show that the traffic scene detection model trained in this paper has a good detection effect.

Keywords: YOLOv3 · KITTI · Object detection

1 Introduction

With the rapid development and widespread application of intelligent monitoring technology, traffic information collection based on object detection has become a vital and promising part of intelligent transportation systems. At present, object detection is applied to many traffic scenarios, such as self-driving technology and security systems. Most traditional object detection methods can be summarized in the following three steps. First, generate multiple target proposal boxes. Second, extract features within the proposal boxes, using hand-crafted features as standards, such as the Haar [1] features used in face detection algorithms. Third, design classifiers for object classification, using classifiers trained with machine learning methods to classify the targets, such as the SVM [2] model used in pedestrian detection algorithms.

Since 2012, when AlexNet [3] made its appearance in the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) competition, deep learning has begun to shine in computer vision. Current deep learning approaches to object detection fall into the following types. First, two-stage algorithms, which first generate candidate boxes and then classify them, such as the R-CNN [4], SPPnet [5], Fast R-CNN [6] and Faster R-CNN [7] algorithms. Second, one-stage, regression-based algorithms, which are end-to-end methods that directly convert the problem of target localization into a regression problem, such as the representative YOLO [8–10] algorithm series and the SSD [11] algorithm.




2 YOLOv3

2.1 Darknet-53

As shown in Fig. 1, the network structure of Darknet-53 is mainly composed of convolutional layers and residual layers. In the YOLOv3 network structure, in order to further improve network performance, YOLOv3 makes the following changes based on Darknet-53: first, a Batch Normalization layer is added after each convolutional layer; second, a Leaky ReLU layer is added before the residual layer.

Fig. 1. Darknet-53 network structure

2.2 Multi-scale Detection

YOLOv3 draws on the idea of the FPN (feature pyramid network) and fuses feature maps of different resolutions for its output. Feature maps are output at three different scales, combining shallow and deep features and merging low-level and high-level semantic information. Each basic unit (cell) of a feature map corresponds to the prediction of 3 bounding boxes. The information of each bounding box includes the target center coordinates (x, y), the width and height of the bounding box (w, h), the confidence of the target, and the conditional probability corresponding to each class of the target. The final output tensor size is therefore (2 + 2 + 1 + class) * (13 * 13 + 26 * 26 + 52 * 52) * 3, where class is the number of classes during training.
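To make this output dimensionality concrete, the short sketch below evaluates the tensor-size expression above for an arbitrary number of classes (e.g., the 3 classes used later in this paper). It is only a numerical check, not part of the training code.

```python
# Numerical check of the YOLOv3 output size: 3 boxes per cell, each carrying
# 4 box coordinates (x, y, w, h), 1 confidence score, and `num_classes`
# conditional class probabilities, over 13x13, 26x26 and 52x52 grids.
def yolo_v3_output_size(num_classes, grids=(13, 26, 52), boxes_per_cell=3):
    per_box = 4 + 1 + num_classes         # (x, y, w, h) + confidence + classes
    cells = sum(g * g for g in grids)     # 169 + 676 + 2704 = 3549
    return per_box * cells * boxes_per_cell

print(yolo_v3_output_size(num_classes=3))  # 8 * 3549 * 3 = 85176 predicted values
```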



3 Experimental Preparation

3.1 Environment Setup

The experiments in this paper use the following software environment: Visual Studio 2015, CUDA 8.0, cuDNN 5.0 and OpenCV 3.2.0; the hardware is an NVIDIA GeForce GTX 1050 Ti (4 GB).

3.2 Data Conversion

The KITTI data set was jointly created by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago. Before training, the label data needs to be converted: first the KITTI label format is converted to the PASCAL VOC label format, and then the PASCAL VOC label format is converted to the label format required by the YOLOv3 model. In the first step, a Python script is used to merge the 8 original classes into 3 classes (car, pedestrian, and cyclist): car, van, truck and tram are merged into car; pedestrian and pedestrian (sitting) are merged into pedestrian; cyclist remains a separate class; and the misc and dontcare classes are ignored.
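A minimal sketch of the class-merging step is given below. For brevity it maps KITTI labels directly to the YOLO text format (class id plus normalized box center and size) rather than going through PASCAL VOC as the paper does; the file paths and the image size passed in are illustrative assumptions, not the authors' script.

```python
# Hedged sketch: merge KITTI classes and write YOLO-format labels
# ("class_id cx cy w h", all box values normalized to [0, 1]).
CLASS_MAP = {
    "Car": 0, "Van": 0, "Truck": 0, "Tram": 0,      # merged into car
    "Pedestrian": 1, "Person_sitting": 1,            # merged into pedestrian
    "Cyclist": 2,
}  # "Misc" and "DontCare" are ignored

def kitti_to_yolo(label_path, out_path, img_w, img_h):
    yolo_lines = []
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            cls = fields[0]
            if cls not in CLASS_MAP:
                continue  # skip ignored classes
            # KITTI 2D box: left, top, right, bottom in pixels (fields 5-8)
            left, top, right, bottom = map(float, fields[4:8])
            cx = (left + right) / 2.0 / img_w
            cy = (top + bottom) / 2.0 / img_h
            w = (right - left) / img_w
            h = (bottom - top) / img_h
            yolo_lines.append(f"{CLASS_MAP[cls]} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    with open(out_path, "w") as f:
        f.write("\n".join(yolo_lines))

# Example call (paths and image size are assumptions):
# kitti_to_yolo("label_2/000000.txt", "labels/000000.txt", img_w=1242, img_h=375)
```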

4 Experimental Results

This paper trains on 5984 pictures selected from the KITTI data set. The parameter settings are batch = 8, subdivisions = 8, width = 416, height = 416, learning_rate = 0.001, etc. Since three categories are used (car, pedestrian, and cyclist), classes is set to 3 and, accordingly, filters = (3 + 5) * 3 = 24. Due to GPU memory limitations, multi-scale training is not turned on (random = 0). The anchor parameters were readjusted using the k-means method; the resulting anchor boxes, in order, were: (4, 37), (5, 14), (9, 67), (10, 22), (17, 100), (18, 34), (30, 53), (42, 90), (61, 119).
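A common way to recompute anchors, consistent with the k-means step mentioned above, is to cluster the (width, height) pairs of all ground-truth boxes using an IoU-based distance. The sketch below is a simplified version of that recipe under those assumptions, not the authors' script.

```python
# Simplified k-means anchor clustering over (width, height) pairs of training
# boxes, using d = 1 - IoU as the distance (the usual YOLO recipe).
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors, assuming shared top-left corners."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    boxes = np.asarray(boxes, dtype=float)               # (N, 2) widths/heights
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor (highest IoU)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)  # update with cluster mean
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sort by area

# anchors = kmeans_anchors(all_box_wh, k=9)  # `all_box_wh` collected from the labels
```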

4.1 Training Process

During the training process, the loss function shows a downward trend, which indicates that training is effective. The loss does not decrease at every step, however, and some oscillation is observed. As shown in Fig. 2, the value of avg loss was reduced to about 1.14.

4.2 Model Selection

As mentioned earlier, a weight file is generated every 100 iterations. To select the best model among these weight files, each one was tested; the test results are shown in Table 1.



Fig. 2. Avg loss

Table 1. Average mAP comparison of weight files

Number of iterations | Car | Cyclist | Pedestrian
6000 | 64.47% | 100.00% | 61.04%
6100 | 85.71% | 100.00% | 89.77%
6200 | 69.24% | 32.95% | 42.68%

The test process takes 1 s. The average precision of each category is calculated and the three categories are then averaged to obtain the final mAP. The accuracy rate is 75.17% for the model trained for 6000 iterations and reaches 91.83% at 6100 iterations, but drops to 48.29% at 6200 iterations, so increasing the number of training iterations did not make the accuracy continue to increase. Based on the above, the model from 6100 iterations was selected as the best model.
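The selection criterion is simply the mean of the three per-class AP values; the snippet below reproduces the averages quoted above from Table 1 and picks the checkpoint with the highest mean.

```python
# Mean AP per checkpoint, reproducing the averages quoted from Table 1.
per_class_ap = {
    6000: {"car": 64.47, "cyclist": 100.00, "pedestrian": 61.04},
    6100: {"car": 85.71, "cyclist": 100.00, "pedestrian": 89.77},
    6200: {"car": 69.24, "cyclist": 32.95, "pedestrian": 42.68},
}
mean_ap = {it: sum(ap.values()) / len(ap) for it, ap in per_class_ap.items()}
best = max(mean_ap, key=mean_ap.get)
print(mean_ap)  # ~{6000: 75.17, 6100: 91.83, 6200: 48.29}
print(best)     # 6100
```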



4.3 Test Results

The weight file with the best effect, i.e. the highest mAP, was selected as the final model. The test results are as follows: a total of 3 vehicles were detected with probabilities of 98%, 100%, and 63%; 3 pedestrians with probabilities of 30%, 30%, and 87%; and 1 cyclist with a probability of 63% (Fig. 3).

Fig. 3. Best model test results

5 Conclusion

This paper takes the YOLOv3 model as its core and selects the KITTI data set for training. The k-means method is used to adjust the anchor parameters during training in order to achieve more accurate results. Finally, a better model is selected from among many candidate models; after testing, its detection effect is good. The work still has certain limitations. Directions for follow-up work are: first, multiple models can be trained in a more focused way according to different needs; second, depending on those needs, targeted picture sets of the relevant scenes can be collected for more effective training.

Acknowledgements. The research work in this paper was supported by the National Key Research and Development Program of China (No. 2018AAA0100203) and the National Key R&D Program of China (No. 2017YFC1502505). Professor Xin Zheng is the author to whom all correspondence should be addressed.

References

1. Papageorgiou, C.P., Oren, M., Poggio, T.: A general framework for object detection. In: Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271). IEEE (2002)
2. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
3. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: International Conference on Neural Information Processing Systems, pp. 1097–1105. Curran Associates Inc. (2012)



4. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014)
5. He, K., Zhang, X., Ren, S., et al.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2014)
6. Girshick, R.: Fast R-CNN. Computer Science (2015)
7. Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. In: International Conference on Neural Information Processing Systems (2015)
8. Redmon, J., Divvala, S., Girshick, R., et al.: You Only Look Once: Unified, Real-Time Object Detection (2015)
9. Redmon, J., Farhadi, A.: YOLO9000: Better, Faster, Stronger (2017)
10. Redmon, J., Farhadi, A.: YOLOv3: An Incremental Improvement (2018)
11. Liu, W., Anguelov, D., Erhan, D., et al.: SSD: Single Shot MultiBox Detector (2015)

Social Networks' Factors Driving Consumer Restaurant Choice: An Exploratory Analysis

Karen Ramos, Onesimo Cuamea, Jorge Morgan, and Ario Estrada

Facultad de Contaduría y Administración, Universidad Autónoma de Baja California, Tijuana, Mexico
[email protected]
Facultad de Turismo y Mercadotecnia, Universidad Autónoma de Baja California, Tijuana, Mexico
{onesimo,jorgemorgan,ario}@uabc.edu.mx

Abstract. The study was aimed at finding out the factors of social networks driving consumer restaurant choice in Tijuana, Mexico. A quantitative method was used and data were collected using an online survey of a sample of 385 customers of full-service restaurants. Thirteen items related to the influence of social networks were included. To test the consistency of the instrument, Cronbach's alpha analysis was performed, and to examine the appropriateness of the data, the KMO and Bartlett's sphericity tests were performed. An exploratory factor analysis (EFA) was then carried out to identify the social network factors influencing consumer restaurant choice. The results obtained in the EFA show that there are four key factors of social networks: the content generated by the restaurant on social networks, the rating and popularity of the restaurant's social network, customer recommendations on social networks, and real-time customer participation.

Keywords: Social networks · Consumer decision · Factor analysis

1 Introduction

Currently, electronic word of mouth (eWOM) is considered one of the most useful sources of information for consumers, as it allows them to obtain peer opinions and experiences and competes with or complements company-generated information. Hence the importance of analyzing the influence of social networks on the decision to choose a restaurant. In this context, the emergence of social networks allowed Internet users to communicate with people they already know, while discussion forums, consumer review sites, blogs, and shopping websites enable communication between anonymous users [1]. In this way, eWOM became a dominant factor that influences consumer behavior and brand performance [2]. In recent years, with the purpose of attracting a greater number of customers, organizations have incorporated the use of social networks as a new marketing strategy [3]. Moreover, social network users can create content about brands or their products and services, incorporating images and videos, taking advantage of the facilities provided by social networking websites [4]. However, this new digital strategy also allowed consumers to participate in eWOM, with the possibility of influencing the credibility of the information transmitted by the brand and, therefore, the credibility of the company itself. To attract new or repeat customers, companies must carry out a marketing strategy using Facebook, a non-intrusive social network, including different actions such as display advertising, search engine optimization and search engine marketing [5]. Despite tourism and hospitality studies investigating eWOM, there is limited research on its factors and their relationship with customer restaurant choice [6]. For these reasons, the aim of this study is to identify the factors of social networks influencing consumers' restaurant choice in Tijuana, Mexico.

2 Literature Review

2.1 Digital Content Generated on Social Networks by the Organization

Advertising on Facebook has a positive influence on purchase intention, and it has a greater influence than information provided by a close friend [7, 8]. Advertising on social networks is one of the key factors for restaurants to establish successful relationships with the client and to inform them about products or services easily and profitably [9]. The positive results of social networks show that they will be even more influential in the future, so further analysis in this line of research is necessary [3]. Regarding photos and pictures used in social networks, Facebook users may be more attracted to the most direct and common messages that contain photos and status updates, rather than those that require clicking on a link or taking time to watch a video. Likewise, digital content with photos and statuses generates greater interactivity among users [10]. The presence of pictures online facilitates consumers' virtual product experience and can help to form positive images for other potential customers, leading to purchase intention [11]. Some of the critical aspects of a restaurant social network's effectiveness are the quality of the content generated, the availability to respond to consumer reviews, and the ability to process and leverage information for better decision-making. In this regard, most restaurant managers usually administer the social networks of their businesses themselves and have no experience or knowledge of how to create publications, how to monitor social networks, what to do with the available information, when to respond, and how to structure the content of their publications [12]. For these reasons, future research should include more social media tools in the digital platforms of restaurants, such as photos of the restaurant, food, variety, and prices, and the evaluation of their level of influence on consumer behavior [13].

2.2 Restaurants' Social Network Popularity

Review ratings in eWOM represent an attempt to quantify perceptions of service quality that are determinants of behavioral intentions [14]. Restaurants with a larger number of positive reviews or a better overall rating achieve higher net sales [15]. A restaurant's online popularity is positively associated with the ratings generated by consumers on the quality of food, the environment and the restaurant service, and with the volume of comments from online consumers [16]. The use of Facebook's like button is necessary to increase social network popularity, and it positively influences consumers' purchase intention [17]. Regarding the importance of followers on social networks, the more followers a brand gains on its Instagram account in a specific period, the more sales revenue it will obtain in that time; this can be explained by the fact that people following a restaurant's Instagram are more likely to choose that restaurant when they dine out [18]. Indeed, it is necessary to continue investigating additional factors that could influence the level of participation in social networks, such as the size of the community, the frequency of publication, etc. [9].

2.3 eWOM on Customers' Social Networks

Customers' experiences shared on social media play a substantial role in influencing future consumer behavioral intentions [11]. Social networking sites can help eWOM receivers better understand eWOM senders by using the information in the form of notes, status updates, photos, videos or messages posted on the sender's personal page. In this sense, social network users can generate content by sharing their experience through different actions, such as writing a review, sharing photos and videos, checking in, telling stories, and rating services, brands or companies, among others [19]. Due to the increasing importance of online reviews as part of eWOM, managers should encourage consumers to write positive online reviews on social networks to attract further customers [20]. On the topic of confidence in social network reviews, customers trust first their real friends, particularly experts on the relevant topic, and second their Facebook friends [17, 21]. Also, message recipients perceive reviews with longer text to be more useful than those with shorter text [22]. In a social network environment, consumers expect to learn about products, services or brands through online reviews obtained from experienced friends, family or even strangers, and these reviews can affect their decision-making process [23]. In this regard, communications that are directly promotional will not generate the desired effect of eWOM, since eWOM is about developing more conversations about the product rather than a traditional hard-selling strategy [24]. Images of the physical environment and images of food and beverages are positively related to the enjoyment of the experience [25]. The length of the review and the images of food and beverages are the most important factors affecting customer decisions [26]. These reviews represent the quantity of information about a restaurant that future consumers could require; therefore, there is a strong correlation between reviews and sales [27, 28].

2.4 Customer Live Posting on Social Networks

The use of Facebook's location-based check-in service positively influences consumers' purchase intention [17]. In the same way, a higher number of check-ins at a restaurant not only reflects its general popularity but also increases the total amount of eWOM [29]. With the consolidation of consumer-to-consumer social media platforms, such as Facebook and Twitter, stories can be powerful tools to shape cognitive processing, memory, brand image and choice. However, relatively little is known about how this process works in digital marketing [30]. The current dominance of fast-moving technology in consumers' lives creates a pressing need for marketers to understand the new technologies and their effects [26]. Customers with very dissatisfying or very satisfying dining experiences tend to review their experience faster, either immediately afterwards or even during the dining experience [31]. In this sense, the closer the publication is to the time of consumption, the greater its influence on consumer buying intentions [32]. For this reason, it is suggested to evaluate the importance attributed to the live broadcasting generated by customers.

3 Research Methodology

An online survey was applied to restaurant customers in Tijuana, Mexico, to evaluate their most recent experience. To determine the sample size, a confidence level of 95% and a margin of error of ±5% were established, which defined a sample of 385 restaurant customers [33]. To reduce bias in the information, it was decided to eliminate the neutral response; excluding the neutral option does not necessarily change the proportion of responses that lean towards either side of a Likert response scale (positive or negative) [34, 35]. The final survey includes the evaluation of thirteen items regarding respondents' experience with social networks. These items were evaluated with a four-point Likert scale: 1 = Not important, 2 = Less important, 3 = Important and 4 = Very important. To test the consistency of the instrument, Cronbach's alpha analysis was performed; the results confirmed that the instrument and items used were reliable, with a coefficient alpha value of 0.892, above the generally accepted threshold of 0.7 [36], showing the internal consistency of the questionnaire. The Kaiser–Meyer–Olkin (KMO) measure was then calculated as 0.896, which is greater than 0.50, indicating that the data set of 385 responses is adequate for EFA [37].
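The reliability and adequacy checks reported above could be reproduced along the lines of the sketch below, assuming the 385 × 13 item responses are available in a DataFrame. The toolchain shown (pingouin for Cronbach's alpha; factor_analyzer for KMO, Bartlett's test and the EFA itself, with a varimax rotation assumed) is one possible choice, not necessarily the one used by the authors.

```python
# One possible toolchain for the reliability / adequacy / EFA steps above.
# `items` is assumed to be a pandas DataFrame with the 385 responses to the
# thirteen social-network items (values 1-4); column names are hypothetical.
import pandas as pd
import pingouin as pg
from factor_analyzer.factor_analyzer import (FactorAnalyzer,
                                             calculate_bartlett_sphericity,
                                             calculate_kmo)

def run_efa(items: pd.DataFrame, n_factors: int = 4):
    alpha, _ = pg.cronbach_alpha(data=items)                 # reported here as 0.892
    chi_sq, p_value = calculate_bartlett_sphericity(items)   # Bartlett's sphericity test
    _, kmo_model = calculate_kmo(items)                      # reported here as 0.896

    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")  # rotation is an assumption
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    _, _, cumulative = fa.get_factor_variance()              # cumulative % of variance explained
    return alpha, kmo_model, p_value, loadings, cumulative[-1]
```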

4 Results The EFA carried out explains 69.11 percent of the total variance with four factors, as presented in Table 1. The EFA confirms that the social network factors influencing restaurant customers’ choice are: 1) Digital content generated by the organization, 2) Restaurant’s social network popularity, 3) eWOM on customers’ social networks and 4) Customer live posting on the social network. The first factor, named “Digital content generated by the organization”, includes the facilities’ photographs posted by the restaurant (.841), the menus’ photographs posted by the restaurant (.808), the prices posted on the restaurant’s social networks (.731), the variety of the menu posted on the restaurant’s social networks (.836) and the promotions posted on the restaurant’s social networks (.708). The second factor, “Restaurant’s social network popularity”, integrates the rating of the restaurant’s social network (Facebook pages) (.734), the number of followers of the restaurant’s social networks (.845) and the number of likes on the restaurant’s posts (.747).

Table 1. EFA factor structure (n = 385)

Factors                                         Eigenvalue   % of variance explained
Digital content generated by the organization   5.728        44.064
Restaurant's social network popularity          1.305        10.041
EWOM on customers' social networks              1.121        8.624
Customer live posting on social network         0.829        6.378
Total                                                        69.110

The third factor, named “eWOM on customers’ social networks”, integrates the items of comments about the restaurant posted by customers on their social networks (.859), the experience posted by a client on the restaurant’s social networks (.817), and the restaurant’s food photos shared by customers on their social networks (.691). The fourth factor, named “Customer live posting on social network”, includes restaurant customers’ check-ins on social networks (.786) and customers’ live broadcasts and stories about the restaurant (.835).
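For readers who want to reproduce this kind of analysis, the sketch below shows one way to run a four-factor EFA with varimax rotation in Python. It assumes the third-party factor_analyzer package and a data file with one column per survey item; the file and column names are hypothetical, and this is a sketch rather than the authors' actual tooling.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# Hypothetical DataFrame: one row per respondent, one column per Likert item
responses = pd.read_csv("restaurant_survey_items.csv")  # assumed file name

# Sampling adequacy (the paper reports an overall KMO of 0.896)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Overall KMO: {kmo_overall:.3f}")

# Four-factor EFA with varimax rotation, mirroring the reported structure
efa = FactorAnalyzer(n_factors=4, rotation="varimax")
efa.fit(responses)

loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
print(loadings.round(3))           # item loadings, e.g. .841 for facility photos
print(efa.get_factor_variance())   # variance explained per factor
```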

5 Discussion and Conclusions Four social network factors are involved in restaurant choice: digital content generated by the organization, the restaurant’s social network popularity, eWOM on customers’ social networks and customer live posting on social networks. The presence of digital content generates a positive virtual product experience and can help form positive images for other potential customers, leading to purchase intention [11, 12]. A restaurant’s online popularity is positively associated with high ratings on Facebook and the number of followers and likes [16–18]. The eWOM generated on customers’ social networks is important, as these publications represent the amount of restaurant information that future consumers may require [23–28]. Lastly, customers’ live posting on social networks influences consumer buying intentions because it conveys very satisfying dining experiences and popularity, increasing the total volume of eWOM [17, 29, 31, 32]. Future research should include social media tools and evaluate their level of direct influence on consumer behavior, such as restaurant choice and revisit intention [9, 13, 27, 28].

References
1. Moran, G., Muzellec, L.: eWOM credibility on social networking sites: a framework. J. Market. Comm. 23(2), 149–161 (2014). https://doi.org/10.1080/13527266.2014.969756
2. El-Baz, B.E.-S., Elseidi, R.I., El-Maniaway, A.M.: Influence of Electronic Word of Mouth (e-WOM) on brand credibility and Egyptian consumers’ purchase intentions. IJOM 8(4), 1–14 (2018). https://doi.org/10.4018/ijom.2018100101

3. Ali, Z., Shabbir, M., Rauf, M., Hussain, A.: To assess the impact of social media marketing on consumer perception. Int. J. Acad. 6(3), 69–77 (2016). https://doi.org/10.6007/ijarafms/v6-i3/2172
4. Erkan, I., Evans, Ch.: Social media or shopping websites? The influence of eWOM on consumers’ online purchase intentions. J. Market Commun. 24(6), 617–632 (2018). https://doi.org/10.1080/13527266.2016.1184706
5. López-García, J.J., Lizcano, D., Ramos, C., Matos, N.: Digital marketing actions that achieve a better attraction and loyalty of users: an analytical study. Future Internet. 1, 130 (2019). https://doi.org/10.3390/fi11060130
6. Salleh, S., Hashim, N.H., Murphy, J.: The role of information quality, visual appeal and information facilitation in restaurant selection intention. Inf. Commun. Technol. Tourism. 87–97 (2016). Cyprus: Springer https://doi.org/10.1007/978-3-319-28231-2_7
7. Yang, T.: The decision behavior of Facebook users. J. Comput. Inform. Syst. 52(3), 50–59 (2012). https://doi.org/10.1080/08874417.2012.11645558
8. Duffett, R.G.: Facebook advertising’s influence on intention-to-purchase and purchase amongst Millennials. Internet Res. 25(4), 498–526 (2015). https://doi.org/10.1108/intr-01-2014-0020
9. Hanaysha, J.: The importance of social media advertisements in enhancing brand equity: a study on fast food restaurant industry in Malaysia. IJIMT 7(2), 46–51 (2016). https://doi.org/10.18178/ijimt.2016.7.2.643
10. Pletikosa Cvijikj, I., Michahelles, F.: Online engagement factors on Facebook brand page. SNAM 3(4), 843–861 (2013). https://doi.org/10.1007/s13278-013-0098-8
11. Mhlanga, O., Tichaawa, T.M.: Influence of social media on customer experiences in restaurants: a South African study. Tourizam 65(1), 45–60 (2017). https://hrcak.srce.hr/178622
12. Lepkowska-White, E., Parsons, A.: Strategies for monitoring social media for small restaurants. JFBR, 1–24 (2019). https://doi.org/10.1080/15378020.2019.1626207
13. Park, S., Nicolau, J.: Asymmetric effects of online consumer reviews. ATR 50, 67–83 (2015). https://doi.org/10.1016/j.annals.2014.10.007
14. Parasuraman, A., Zeithaml, V.A., Malhotra, A.: E-S-QUAL. J. Serv. Res. 7(3), 213–233 (2005). https://doi.org/10.1177/1094670504271156
15. Kim, W.G., Li, J. (Justin), Brymer, R.A.: The impact of social media reviews on restaurant performance: the moderating role of excellence certificate. Int. J. Hosp. Manag. 55, 41–51 (2016)
16. Kumabam, R., Meitei, Ch., Singh, S., Singh, T.: Expanding the Horizon of marketing: contemplating the synergy of both traditional word of mouth and EWord of Mouth. Int. J. Manag. 8(3), 204–212 (2017). http://www.iaeme.com/IJM/issues.asp?JType=IJM&VType=8&IType=3
17. Arceo, P.M., Cumahig, I.R., De Mesa, M.B., Buenaventura, M.J., Tenerife, J.T.: The impact of social media platforms to online consumers’ intention to purchase in restaurant industry. GJETeMCP 4(1), 565–580 (2018). http://globalbizresearch.org/marketing/issues.php?id=251
18. Chang, Q., Peng, Y., Berger, P.: The impact of social-media performance on sales of retail-food brands. Int. J. Res 6(2) (2018). https://doi.org/10.5281/zenodo.1185601.1
19. Fang, Y.H.: Beyond the Credibility of Electronic Word of Mouth: Exploring eWOM Adoption on Social Networking Sites from Affective and Curiosity Perspectives. Int. J. Electron. Commer. 18(3), 67–102 (2014). https://doi.org/10.2753/jec1086-4415180303
20. Plotnika, D., Munzel, A.: What’s new with you? on the moderating effect of product novelty on eWOM Effectiveness. Customer Serv. Syst. 1(1), 103–114 (2014). Karlsruhe: Scientific Publishing. https://doi.org/10.5445/ksp/1000038784/12

21. Bitter, S., Grabner-Kräuter, S.: Consequences of customer engagement behavior: when negative Facebook posts have positive effects. EM 26(3), 219–231 (2016). https://doi.org/10.1007/s12525-016-0220
22. Liu, Z., Park, S.: What makes a useful online review? Implication for travel product websites. Tour. Manag. 47, 140–151 (2015). https://doi.org/10.1016/j.tourman.2014.09.020
23. Toor, A., Husnain, M., Hussain, T.: The impact of social network marketing on consumer purchase intention in Pakistan: consumer engagement as a mediator. Asian J. Bus. Account. 10(1), 167–199 (2017). https://ajba.um.edu.my/article/view/3478/1499
24. Fox, G., Longart, P.: Electronic word-of-mouth: successful communication strategies for restaurants. Tour. Hosp. Manag. 22(2), 211–223 (2016). https://doi.org/10.20867/thm.22.2.5
25. Yang, S.-B., Hlee, S., Lee, J., Koo, C.: An empirical examination of online restaurant reviews on Yelp.com. Int. J. Contemp. Hosp. Manag. 29(2), 817–839 (2017). https://doi.org/10.1108/ijchm-11-2015-0643
26. Oliveira, B., Casais, B.: The importance of user-generated photos in restaurant selection. JHTT 10(1), 2–14 (2018). https://doi.org/10.1108/jhtt-11-2017-0130
27. Yan, Q., Zhou, S., Wu, S.: The influences of tourists’ emotions on the selection of electronic word of mouth platforms. Tour. Manag. 66, 348–363 (2018). https://doi.org/10.1016/j.tourman.2017.12.015
28. Zhang, S., Liu, L., Feng, Y.: A Study of Factors Influencing Restaurants Sales in Online-to-Offline Food Delivery Platforms: Differences between High-sales Restaurants and Low-sales Restaurants. Twenty-Third PACIS, China 2019 (2019). http://www.pacis2019.org/wd/Submissions/PACIS2019_paper_413.pdf
29. Liu, A.X., Steenkamp, J.-B.E.M., Zhang, J.: Agglomeration as a driver of the volume of electronic word of mouth in the restaurant industry. JMR 55(4), 507–523 (2018). https://doi.org/10.1509/jmr.16.0182
30. Gosline, R.R., Lee, J., Urban, G.: The power of consumer stories in digital marketing. MIT Sloan Manag. Rev. 58(4), 9–14 (2017). https://sloanreview.mit.edu/article/the-power-of-consumer-stories-in-digital-marketing/
31. Li, H., Xie, K.L., Zhang, Z.: The effects of consumer experience and disconfirmation on the timing of online review: Field evidence from the restaurant business. Int. J. Hosp. Manag. 84, 102344 (2020). https://doi.org/10.1016/j.ijhm.2019.102344
32. Wu, L., Shen, H., Li, M., Deng, Q.: Sharing information now vs later. Int. J. Contemp. Hosp. Manag. 29(2), 648–668 (2017). https://doi.org/10.1108/ijchm-10-2015-0587
33. Rea, L., Parker, R.: Survey Research: A Practical Guide. Collegiate Publication Service, San Diego (1991)
34. Hair, J.F., Black, W.C., Babin, B.J., Tatham, R.L.: Multivariate Data Analysis, 6th edn. Prentice-Hall, Englewood Cliffs, NJ (2006)
35. Lavrakas, P.J.: Encyclopedia of Survey Research Methods. SAGE Publications (2008). https://doi.org/10.4135/9781412963947
36. Brown, A., Maydeu-Olivares, A.: Item response modeling of forced-choice questionnaires. Educ. Psychol. Meas. 71, 460–502 (2011). http://dx.doi.org/10.1177/0013164410375112
37. Nunnally, J.: Psychometric Theory. McGraw Hill, New York (1978)

Artificial Intelligence User Research

Conversational Advisors – Are These Really What Users Prefer? User Preferences, Lessons Learned and Design Recommended Practices Jason Telner(&), Jon Temple, Dabby Phipps, Joseph Riina, Rajani Choudhary, and Umesh Mann IBM, Orchard Rd, Armonk, NY 10504, USA [email protected], {jtemple,phippsdo,riina}@us.ibm.com, [email protected], [email protected]

Abstract. The usability of interactive voice response systems (IVRs) has been shown to be challenging for a variety of reasons, including complex menu structures, lengthy menu options to remember, and long wait times for a human representative. Conversational advisors allow users to interact more naturally in a more human-like and conversational manner by using natural language processing, artificial intelligence and machine-learning technology. In this article, two user studies will be discussed examining user performance and preferences between a variety of call flows that incorporate conversational advisors for customer support. We will discuss the findings, and review some lessons learned and recommended practices for designing conversational advisors, along with the benefits and challenges in using them for customer support versus using a human advisor. The attributes that make for effective conversational advisors will also be discussed.

Keywords: Conversational advisors · Customer support · Chatbots · IVR design · Best practices · Usability

1 Introduction We’ve all been in situations where we have had a poor telephone support experience. You’re in a rush to get to a meeting when you experience a critical computer issue, such as a boot error or hard drive malfunction. In a panic, you call customer support in the hopes of getting a quick resolution. Once the system answers, you’re presented with a lengthy list of menu options. Your impatience grows as you try to remember which menu option was the correct one for your issue. After selecting a menu option, you’re bombarded with a series of additional menu options and verifications, which only add to your frustration. Finally, you’re asked to wait on the phone for a live representative, who often asks the same questions you have just answered. This familiar experience is an example of an interactive voice response (IVR) system, a technology used in customer support call centers that consists of an automated telephone answering system that allows a user to navigate through a series of prerecorded menu options using voice recognition or touch tone buttons. The transition

from live operators to IVRs has reduced labor costs; however, the usability of IVRs has been shown to be poor and customers would rather speak with a live representative than navigate their complex menus [1]. The concept of people interacting with computers through natural language dates back to the 1960s. A conversational advisor or chatbot is an intelligent software system activated by natural language input in the form of text or voice that can interact or chat with a human user. In recent years, technological advances in natural language have led to renewed interest in their use in customer support IVRs and digital assistants with the expectation of providing fast, intuitive, and cost effective channels for customer support. In addition, virtual assistants such as Google Home or Amazon Alexa have raised users’ expectations in terms of how we interact with our devices and customer support services. As reported by [2], consumers today expect a customer support system to be as user friendly as their virtual assistants. Conversational advisors have enabled users to obtain support in a more human-like and conversational manner by using natural language processing, artificial intelligence and machine learning technology, rather than only basic voice commands such as “yes” and “no” or through keypad selections. In this article, we will review two user studies we conducted examining user performance and preferences with conversational advisors for a customer technical support system. We will discuss the findings, and review some lessons learned and recommended practices for designing conversational advisors, along with the benefits and challenges in using them for customer support in place of a human advisor. The attributes that make for effective conversational advisors will also be discussed. More specifically, the benefits and user perceptions of a conversational advisor used at the start of an IVR flow for phone-based technical support were investigated. Prior research by [3] found that most users were satisfied with having a chatbot interrupt them in a request for support with a recommended solution to their query as long as it was able to at least partially solve their issue. As shown in Fig. 1, in this model of interaction the user would initiate an action such as search or live chat. A cognitive layer that would be hidden to the user would then screen the user input and decide whether there was a high-confidence matching solution. If a high confidence match was found for the user’s request, then it would display the solution to the user; if a high confidence match was not found, then the requested query or chat request would be shown in its standard results format.

Fig. 1. Model of chatbot intercept. Taken from [3].
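A minimal sketch of this intercept pattern is shown below, assuming a toy knowledge-base lookup that returns scored candidate solutions; the threshold value, function names and example queries are illustrative, not taken from the system described in [3].

```python
from dataclasses import dataclass
from typing import List, Optional

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for a "high-confidence" match

@dataclass
class Candidate:
    solution: str
    confidence: float

def search_knowledge_base(query: str) -> List[Candidate]:
    # Stand-in for a real search/NLU service call; returns scored candidate solutions.
    toy_index = {
        "reset email password": Candidate("Use the self-service reset portal.", 0.93),
        "cybersecurity incident": Candidate("Contact the security response team.", 0.41),
    }
    hit = toy_index.get(query.lower())
    return [hit] if hit else []

def run_standard_flow(query: str) -> str:
    # Placeholder for the unmodified search results or live-chat hand-off.
    return f"Showing standard results for: {query}"

def handle_request(query: str) -> str:
    # Cognitive layer: intercept with a suggested solution only when confidence is high.
    candidates = search_knowledge_base(query)
    best: Optional[Candidate] = max(candidates, key=lambda c: c.confidence, default=None)
    if best and best.confidence >= CONFIDENCE_THRESHOLD:
        return f"Suggested solution: {best.solution}"
    return run_standard_flow(query)

print(handle_request("Reset email password"))      # high confidence -> intercepted
print(handle_request("Cybersecurity incident"))    # low confidence -> standard flow
```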

There appears to be a paucity of IVR customer support research comparing the timing of introducing a conversational advisor into the call flow for call scenarios of varying levels of difficulty. In the current study, it was hypothesized that a conversational advisor presented at the beginning of the IVR call flow would be the preferred support option for users using phone-based technical support. This was hypothesized especially for simple issues, due to the higher likelihood of getting a resolution to the issue, as well as the time savings it provided of not having to proceed through the IVR menu options or wait for an advisor.

2 User Study 1 In the first user study, the researchers compared a conversational advisor that was presented at the start of the IVR call flow (early conversational advisor) to one that occurred later in the IVR flow (late conversational advisor), after selecting a series of menu options to narrow down the caller’s issue. In the early conversational advisor call flow, the user would call into the system with their issue and the conversational advisor would ask the user immediately to explain in a few words the reason for their call. After the user briefly described their issue, the conversational advisor would then do a screening in the background and only offer the option for a solution if it understood the issue and had a solution. If the user declined to use the solution or the conversational advisor could not initially find a matching solution for their issue, it would simply route them back to the IVR menu. The user could also say “advisor” at any point in the call flow to be connected to a human advisor (Fig. 2 shows an example).

Fig. 2. Model of early conversational advisor call flow for an IVR for customer support.
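One rough way to express this early-advisor routing in code is sketched below; the intent names, confidence threshold and stub NLU are hypothetical stand-ins for the speech and NLU components such a call flow would actually use.

```python
ESCAPE_WORD = "advisor"   # the user can say this at any point to reach a human
MIN_CONFIDENCE = 0.7      # assumed threshold for "the advisor understood the issue"

def route_early_advisor(utterance: str, nlu, solutions: dict) -> str:
    """Early conversational advisor: ask for the issue first; the IVR menu is the fallback.

    `nlu` is assumed to expose classify(text) -> (intent, confidence);
    `solutions` maps intents to solution text. Both are stand-ins.
    """
    if ESCAPE_WORD in utterance.lower():
        return "TRANSFER_TO_HUMAN"

    intent, confidence = nlu.classify(utterance)
    if confidence >= MIN_CONFIDENCE and intent in solutions:
        return f"OFFER_SOLUTION: {solutions[intent]}"

    # Not understood, or no matching solution: fall back to the standard IVR menu
    return "ROUTE_TO_IVR_MENU"

class _StubNLU:
    def classify(self, text):
        return ("password_reset", 0.9) if "password" in text.lower() else ("unknown", 0.3)

solutions = {"password_reset": "I can send you a reset link by email."}
print(route_early_advisor("I forgot my email password", _StubNLU(), solutions))
print(route_early_advisor("Advisor, please", _StubNLU(), solutions))
```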

The late conversational advisor call flow (Fig. 3) would begin with the user proceeding through the IVR menu options until they made a few menu selections and narrowed down their issue. It would then function just as the early conversational advisor did but would route them to a human advisor if it did not have a solution or the user declined to use it and wanted to speak directly with a human advisor.

Fig. 3. Model of late conversational advisor call flow for an IVR for customer support.

Both types of conversational advisor solutions were then compared against a baseline solution in which the user would proceed through the call flow menu options without a conversational advisor present and would then be offered the option to connect to a human advisor or a conversational advisor for help with their issue. Seven remote users took part in a 45-min one-on-one usability test session with a moderator using screen sharing. Participants were taken through the three different call flows in a randomized order for a scenario of medium-level difficulty in which the user was expected to obtain assistance in resetting their email password. The moderator clicked on various hyperlinked audio files, depending on how participants responded to the audio prompts, to simulate a phone conversation with technical support. After each participant had completed the call flows, the moderator reviewed each flow with them to better understand their impressions of the flows and voice interactions, and participants were asked to provide satisfaction ratings (Fig. 4).

Fig. 4. Model of IVR call flow with option for speaking with a human or a conversational advisor.

We found no user errors for the early conversational advisor, which also exhibited the fewest user errors amongst the three call flows. Participants in both the late conversational advisor and the call flow with the option to select a human or conversational advisor at the end of flow each committed several user errors in which they selected incorrect menu options for their issues. All participants preferred the early conversational advisor over the late conversational flow or having to proceed through the IVR and then select either a human or conversational advisor. The early conversational advisor also had the highest satisfaction ratings from users amongst the three call flows. All the users indicated that they preferred the early conversational advisor because it was faster, more direct and efficient, and that they liked being able to bypass the IVR menu options. The majority of users, however, wanted to know if the conversational advisor would be able to avoid repeating the same answers offered to them previously (via online search or chatbots); if not, then speaking to a human advisor would be the preferable option. The majority of users also preferred receiving their solution from the conversational advisor in the form of an email rather than over the phone so that they could refer to the information later. Only one user preferred having the solution steps read to them over the phone while at their computer, but indicated that they also wanted an email solution that they could reference later. It was also noted that the email solution should contain the appropriate steps directly within the email and not require a user to click any additional links within the email, in order to minimize navigation.

3 User Study 2 A second user study was conducted in order to examine user performance and preferences for the early conversational advisor in more detail. In this study, 10 users went through six different call scenarios in a randomized order with varying levels of difficulty, all with an early conversational advisor. The scenarios ranged from easier ones, such as installing software from an app store, to medium-difficulty scenarios, such as resetting an email password, to high-difficulty scenarios, such as obtaining help on a cybersecurity issue or finding out when you would receive your W-4 form from human resources. Participants called into the phone number and interacted immediately with the conversational advisor for each of the scenarios in order to get a solution to the specific problem they were experiencing. The moderator followed along with the participant using a flow diagram and tracked their interaction with the conversational advisor, counting the number of errors and steps taken. After each scenario, participants rated the difficulty of the task and their satisfaction. The moderator then reviewed the various call flows with the participant in order to obtain further feedback and understanding of the call flow and scenarios. Finally, the moderator asked participants about their preferences for interacting with the conversational advisor – early on as soon as they phoned in or after proceeding through some IVR menu selections first. As expected, the most difficult scenarios – which included seeking assistance for a cybersecurity issue or finding out when their W-4 form would be mailed – resulted in the most user errors and lowest satisfaction ratings. These errors were attributable to the conversational advisor’s natural language processor not being able to correctly identify

which intents the users spoke over the phone. The more complex scenarios involved a variety of different ways that the intent could be verbalized by the user, such as “need help with cybersecurity”, “cybersecurity incident” or “victim of cyberattack”, and required the natural language to be disambiguated. The need for users to repeat their intents during these scenarios often resulted in them being routed to the main menu, leading to increased frustration and greater dissatisfaction. The majority of participants, however, still preferred having the early conversational advisor, because it did not require key presses, had less navigation, and was faster to get to a solution. There were nonetheless a few users who preferred the late conversational advisor, especially for the more difficult call scenarios in which the conversational advisor had more difficulty understanding their verbal intents. These users mentioned that using key presses to make menu selections at the start of the call to narrow down their issue, and then speaking with the conversational advisor later on, might be easier for the conversational advisor and result in fewer errors in understanding. Users were also asked if they preferred interacting with a human advisor for their issue compared to using the conversational advisor. The majority of participants had a preference for using the conversational advisor on the condition that it worked well and correctly understood their dialogue and accents. For simpler scenarios, it was noted to be faster than a human advisor. Users also noted that using a conversational advisor would be preferred if wait times to reach a human advisor were greater than 5 min. The accents of human advisors were noted to be difficult to understand, and it was noted that it took a while for them to process the caller’s issue. For complex issues, there was a preference for a human advisor, especially among users who had difficulty being understood by the conversational advisor. These users noted that human agents would understand them better and require less navigation compared to using the conversational advisor. Finally, the majority of users preferred receiving an email solution for complex problems or solutions with multiple steps, repetitive tasks, or if a diagram would be useful. Email was noted to be better than a voice solution if the user was not under a time crunch, because they could refer to it on their own time and share the solution with colleagues. Voice was generally preferred by users if they were in a hurry, provided it was followed up with an email to refer to later on.

4 Lessons Learned in Designing Conversational Advisors A number of important best practices and lessons learned for designing conversational advisors were acquired after conducting both user studies.

1. For simpler issues, an early conversational advisor provides a consistently better user experience for obtaining answers to problems.
2. For more complex issues, in which there was a greater chance that the natural language comprehension of the conversational advisor might fail, include the conversational advisor toward the end of the IVR call flow, so that the issue could be narrowed down to a greater extent and the conversational advisor could better understand the issue. As natural language systems improve, it is possible that the early conversational advisor may eventually prove advantageous here as well.
3. Keep the number of steps for any particular solution to a maximum of five, keeping the dialogue short and concise with minimal redundancy of information provided.
4. When designing the natural language processor of a conversational advisor, be sensitive to picking up a variety of dialogues that might indicate the same intent or command. For example, “victim of cyberattack”, “cybersecurity incident” and “cybersecurity problem” indicate the same intent of “I am having a problem with cybersecurity”, while “continue”, “please continue”, “proceed” or “yes” all indicate the same command of moving forward (a short illustration follows this list).
5. The conversational advisor should also speak in a manner similar to a human conversation, in which it pauses between sentences and also pauses when it picks up speech, allowing the user to interrupt and barge in with a spoken command or intent.
6. When designing difficult solutions provided by the conversational advisor with multiple steps and complex navigation of menus or inputting of web links, email is generally recommended so that the user can refer to the solution later for troubleshooting. For simple solutions or when the user is under time pressure to fix the problem immediately, a voice solution is recommended as it is faster and more immediate.
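The short illustration mentioned in item 4 is given below. The phrase groups come from the study's examples, while the keyword-matching approach itself is an assumption made for brevity; a production advisor would use a trained NLU classifier rather than string matching.

```python
# Toy intent normalizer: many surface phrasings map to one canonical intent/command.
INTENT_PHRASES = {
    "cybersecurity_problem": [
        "victim of cyberattack", "cybersecurity incident", "cybersecurity problem",
        "need help with cybersecurity",
    ],
    "command_continue": ["continue", "please continue", "proceed", "yes"],
}

def normalize(utterance: str) -> str:
    text = utterance.lower().strip()
    for intent, phrases in INTENT_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

print(normalize("I was a victim of cyberattack last night"))  # cybersecurity_problem
print(normalize("Please continue"))                           # command_continue
```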

References
1. Sum, B., Peterson, P.: A data-driven methodology for evaluating and optimizing call center IVRs. Int. J. Speech Technol. 5(1), 22–37 (2002)
2. Nuance Communications: Consumers Want Conversational Virtual Assistants. www.nuance.com
3. Temple, J.G., Elie, C., Schenkewitz, L., Nezbedova, P., Phipps, D., Roberson, K., Scherpa, J., Malatesta, M.: Not Your Average Chatbot: Using Cognitive Intercept to Improve Information Discovery. Paper presented at the UXPA International Conference, Scottsdale, AZ (2019)

Investigating Users’ Perceived Credibility of Real and Fake News Posts in Facebook’s News Feed: UK Case Study Neil Bates1(&) and Sonia C. Sousa2 1 Department of Multimedia and Graphic Arts, Cyprus University of Technology, 30 Arch, Kyprianos Street, 3036 Limassol, Cyprus [email protected] 2 Institute of Informatics, Tallinn University, Narva Road 25, 10120 Tallinn, Estonia [email protected]

Abstract. The purpose of this research was to understand differences in UK users’ perceived credibility of real and fake news posts in Facebook’s news feed based on location, age, gender, education level, frequency of Facebook use, and intention to interact. A survey was designed to collect and measure demographic data from UK-based Facebook users, their behaviors, and perceived credibility of real and fake news posts. The study has made it evident that the perceived credibility of a Facebook post is dependent on the post origin and its truthfulness. The study also points to an interesting phenomenon that users are more likely to interact with posts that are seen as more credible. Keywords: Fake news  Credibility  Brexit  Facebook  News feed  Human factors  United Kingdom  Tabloids

1 Introduction Social networking sites such as Facebook have simplified the way in which we share and disseminate information to the masses. With that, it has been claimed that social networking sites are largely responsible for fueling the spread of fake news by creating a partisan environment. In addition, in the case of Facebook, if it would like to continue using the term “news feed” to describe its service, it must take responsibility in ensuring that what appears there is accurate [1]. The ramifications of this phenomenon became apparent for the general population in the UK during, and after, the 2016 UK European Union membership referendum. The influence of fake news has been described as unconventional warfare, using technology to disrupt, magnify, and distort [2]. In the case of Twitter, falsehoods consistently outperform the truth - with fake news reaching more people and being shared more quickly than accurate news [3]. There is a need to rethink the current information ecosystem with interdisciplinary research to tackle the rise of fake news [4]. Many discussions within the UX community center around the need for a framework to tackle fake news through UX design [5]. Due to increased awareness of fake news, it is commonly claimed that we now live in a post-truth era,

where the boundaries between fact and fiction are blurred - especially on social networking sites [6]. Based on a public consultation on fake news and online disinformation in the EU, 97% of citizens claim to have been confronted with fake news, 38% of them on a daily basis and 32% on a weekly basis. This public consultation took place between 13 November 2017 and 23 February 2018 and found that amongst citizens, there is a common perception that fake news is highly likely to cause harm to society [7]. It has been acknowledged that consumers, scholars and industry players have yet to harmonize the rejection criteria for fake news in spite of the ensuing platforms designed to curb the societal conundrum that fake news has become [8]. As news is increasingly consumed through social networking sites [9], interaction designers are not adequately considering the phenomenon of fake news in the experiences they build. Often referred to as dark user experience, interaction designers are creating experiences and interfaces that are fun, immersive, and addictive, but without the proper ethical thought in order to suitably consider the context or ramifications of fake news in the social networking sites they design [10]. It could be considered that this has partly contributed to the spread of falsehoods on digital platforms and profoundly influenced the worlds of politics, business, and media [11]. There are significant peer-reviewed sources regarding fake news generally, particularly on the user side, with many attributing the spread of fake news to cognitive bias [12], distrust in mainstream media [9] and digital literacy [13]. However, relatively little empirical research has been conducted into the perceived credibility of fake and real news posts on Facebook, and specifically within the context of UK-based users and Brexit.

2 Research Design and Methodology The main purpose of this research is to better understand the behavioral commonalities associated with UK-based Facebook users when evaluating the credibility of posts in Facebook’s news feed. This research comprises three main parts. Firstly, a survey that aims to gather data and measure UK-based users’ perceived credibility of both real and fake news posts in Facebook’s news feed. Secondly, a statistical analysis of the survey data in order to identify significant patterns in participants’ responses. Finally, based on the findings from the survey, recommendations are presented as a basis for future research and for consideration by interaction designers when designing news feed experiences. The survey limited responses to UK-based respondents and those with a Facebook profile through control questions. Following the qualification questions, participants were required to provide the following demographic and behavioral data: location (country in the UK), age, gender, education level, frequency of Facebook use and intention to interact. After providing demographic and behavioral data, participants entered the main part of the study where they were presented with the stimuli. Each participant assessed the credibility of four stimuli, presented one at a time in a randomized order, using Kang’s 14-item measure for author and content credibility [14]. The author credibility scale consists of five items where participants rate whether the author is knowledgeable, influential, passionate, transparent, and reliable on a 5-point Likert scale (strongly disagree, disagree, undecided, agree, and strongly agree).

The credibility of content, on the other hand, is rated on nine aspects: content authenticity, insightfulness, how informative it is, its consistency, fairness, how focused and accurate it is, as well as how timely and how popular the content is. For these, the same 5-point Likert-type scale was used. Each stimulus is a real-life example of either real or fake news from a UK tabloid, published between 2016 and 2019. To minimize varying bias towards the story, all of the posts featured a news story that is negative towards the EU. All four stimuli feature an actual news article from a UK tabloid that has been verified as either real or fake by one of three fact-checking initiatives: EUFACTCHECK, European Commission Representation in the UK, or Full Fact. The author of each Facebook post is different: two from a UK tabloid (Daily Mail and Daily Express) and two from unknown male individuals. To allow comparison, there is one real and one fake news story for both UK tabloids and individual authors. Finally, the number of engagements (reactions, comments and shares) on each stimulus is average for the type of author of the post.

3 Data Analysis During the data collection phase in November 2019, a total of 201 participants enrolled in the study. Thirty-four participants failed to complete the survey in its entirety; therefore, data associated with those participants is discarded from the results. Consequently, this analysis is based on 167 participants who answered all questions in the survey. The majority of the respondents indicated that they reside in England (80%). Gender distribution of the sample was not equal, with the majority identifying as female (76%). The age of participants was dispersed along the entire scale; however, the biggest group is 30–39 (41%). The education level of participants in the sample is equally distributed. Finally, 60% of the sample indicated that they use Facebook more than once a day.

3.1 Assessment of the Scale

The scale used in this research is a two-faceted measure developed by Kang [14]. Its two distinct facets, author credibility and content credibility, are treated separately. Scores were calculated by averaging responses to the items within each facet. Although Kang already provided evidence that the scale is suitable for use [14], it is essential to assess the reliability of the scale. The Cronbach’s alpha reliability coefficient is one of the most common procedures for doing so [15]. The value of this coefficient can range from 0 to 1, and values between .7 and .95 are accepted as adequate [15]. Table 1, presented below, shows the reliability coefficients calculated for each time the scale was administered to participants in this research. Following the recommendations provided by Tavakol and Dennick [15], the scale is suitable in this context, as all measures are considered adequate.

Table 1. Alpha coefficients for two facets of the credibility scale [14] across four administrations

Situation                         Author credibility α   Content credibility α
Real news post from individual    .716                   .853
Fake news post from tabloid       .717                   .880
Fake news post from individual    .711                   .861
Real news post from tabloid       .826                   .912
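As a rough illustration of how such coefficients are computed, the function below implements the standard Cronbach's alpha formula over a respondents-by-items matrix; the synthetic data are an assumption used purely for demonstration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Synthetic example: 167 respondents answering the five author-credibility items
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(167, 1))
fake_author_items = np.clip(base + rng.integers(-1, 2, size=(167, 5)), 1, 5)
print(round(cronbach_alpha(fake_author_items), 3))  # a coefficient in the spirit of Table 1
```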

3.2 Understanding Users’ Perceived Credibility

Since all participants responded to all stimuli with repeated measures, a general linear model was applied. Before commencing the analysis, several modifications to the dataset were made. Due to a small number of participants in some categories, the variables coding frequency of Facebook use and country were re-coded. The new coding for country discriminates between England and Scotland, while participants from Wales and Northern Ireland were combined due to smaller response rates. Similarly, categories for frequency of Facebook use were merged: usage of once a week and less than once a week were combined. Judging from Table 2, it can be concluded that the condition of equality of variances has been met for every measure used.

Table 2. Results of Levene's test of equality of error variances

                                         F       df1   df2   Sig.
Real news post from individual - Mean    .861    104   62    .752
Fake news post from tabloid - Mean       .706    104   62    .942
Fake news post from individual - Mean    1.010   104   62    .490
Real news post from tabloid - Mean       .996    104   62    .514
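A hedged sketch of this homogeneity-of-variance check in Python is shown below, using SciPy's Levene test on hypothetical per-group scores; the grouping and group sizes are assumptions, since the original analysis was run within an SPSS-style GLM.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical mean credibility scores for one stimulus, split by a grouping factor
scores_england = rng.normal(3.0, 0.6, size=120)
scores_scotland = rng.normal(3.1, 0.6, size=25)
scores_wales_ni = rng.normal(2.9, 0.6, size=22)

# Levene's test: a non-significant p-value supports equality of error variances
stat, p_value = stats.levene(scores_england, scores_scotland, scores_wales_ni)
print(f"F = {stat:.3f}, p = {p_value:.3f}")
```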

Results of testing for the effects of different factors on perceived credibility are provided in Table 3 below. Judging from the results, it is clear that only the repeated-measures factor (that is, the type of media and its truthfulness) reaches statistical significance at p < .05. It can be concluded that some significant effects have been observed in this sample; these effects pertain to the author of the post and its truthfulness.

Table 3. Results of repeated measures general linear model for credibility

                      Author credibility                   Content credibility
Source                F     df1  df2  p      η2            F     df1  df2  p      η2
Repeated              3.85  3    450  .010*  .025          5.88  3    450  .001*  .038
Country*Repeated      1.66  6    450  .131   .022          .44   6    450  .853   .006
Frequency*Repeated    .79   9    450  .626   .016          .20   9    450  .995   .004
Gender*Repeated       .58   3    450  .632   .004          .12   3    450  .951   .001
Age*Repeated          .81   15   450  .667   .026          1.53  15   450  .090   .049
Education*Repeated    .98   15   450  .476   .032          .968  15   450  .488   .031

Notes. Repeated stands for the factor comprised of four measurements: real individual, fake news media, fake individual, real news media. * indicates significant effects
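For the within-subject part of this analysis, the sketch below shows a comparable repeated-measures ANOVA using statsmodels; the long-format column names are assumptions, and unlike the full model reported above it covers only the repeated factor, not the between-subjects interactions.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format data: one row per participant x post type,
# with the averaged content-credibility score for that stimulus.
long_df = pd.read_csv("credibility_long.csv")  # columns: participant, post_type, content_cred

model = AnovaRM(data=long_df,
                depvar="content_cred",
                subject="participant",
                within=["post_type"])
result = model.fit()
print(result)  # F-test for the repeated "post_type" factor (cf. the Repeated row in Table 3)
```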

Figure 1 presents the variation of perceived author credibility based on where the post comes from. The findings show that fake news posts from individuals are most likely perceived as the least trustworthy, while real news posts from individuals and fake news from media outlets are somewhat similarly perceived. A sharp increase was observed while reading real news from tabloids.

Fig. 1. Superimposed graphs of changes in perceived content and author credibility.

3.3 Predicting Interaction

For the purposes of this research, there was no intention to calculate the probability of interaction with a post based on perceived credibility; instead, the study aims only to determine whether a significant influence exists, so the reporting of the analysis was kept simple. Four binary logistic regressions were conducted, one for every type of post, using participants’ statements about whether they would interact with it as the outcome variable and the perceived credibility of author and content as the two predictor variables. The results of the regression analyses are presented in Table 4.

Table 4. Results of four logistic regression analyses

                                  B        S.E.    Wald     df   p      Exp(B)
Real news post from individual (a)
  Author credibility              2.828    1.136   6.197    1    .013*  16.913
  Content credibility             −.692    .924    .560     1    .454   .501
  Constant                        −8.307   2.322   12.801   1    .000   .000
Fake news post from tabloid (b)
  Author credibility              .875     .888    .971     1    .324   2.399
  Content credibility             .409     .892    .211     1    .646   1.506
  Constant                        −6.241   1.592   15.368   1    .000   .002
Fake news post from individual (c)
  Author credibility              1.015    .744    1.859    1    .173   2.759
  Content credibility             .551     .727    .574     1    .449   1.734
  Constant                        −5.919   1.694   12.205   1    .000   .003
Real news post from tabloid (d)
  Author credibility              2.232    .832    7.192    1    .007*  9.315
  Content credibility             −.725    .915    .628     1    .428   .484
  Constant                        −6.627   1.662   15.905   1    .000   .001

Notes. * significant at .05 level. a Model fit: χ2(2) = 10.88, p = .004, Nagelkerke R2 = .150. b Model fit: χ2(2) = 6.10, p = .047, Nagelkerke R2 = .093. c Model fit: χ2(2) = 7.70, p = .021, Nagelkerke R2 = .087. d Model fit: χ2(2) = 14.81, p = .001, Nagelkerke R2 = .176

Examining the indices reported in the main section of the table, it is concluded that only in two cases does credibility actually predict willingness to interact with a post – for a real news post shared by an individual and a real news post shared by a tabloid. In order to understand the contribution of author credibility, the only significant predictor in both cases, it is advised to interpret Exp(B). This odds ratio indicates how many times higher the odds of the outcome occurring (versus not occurring) become for every one-point increase in the predictor variable [16]. The results in Table 4 illustrate that the odds of a person interacting with a real news post from an individual are 16.9 times higher for every one-point increase in perceived author credibility. In brief, every point of author credibility brings an almost seventeen-fold increase in the odds of users interacting with a post – if that post was real news published by an individual. If it was real news published by a tabloid, the increase is somewhat smaller, at 9.3. This kind of relationship, however, was not observed for fake news published by an individual or fake news published by a tabloid.
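A minimal sketch of one of these logistic regressions in Python is shown below; the data file and column names are assumed, and the exponentiated coefficients correspond to the Exp(B) values discussed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Assumed data for one stimulus: would_interact is 0/1,
# author_cred and content_cred are the averaged 1-5 facet scores.
df = pd.read_csv("real_news_individual.csv")

X = sm.add_constant(df[["author_cred", "content_cred"]])
model = sm.Logit(df["would_interact"], X).fit()

print(model.summary())
# Exp(B): odds multiplier per one-point increase in each predictor
print(np.exp(model.params))
# Note: statsmodels reports McFadden's pseudo R^2 (model.prsquared), not Nagelkerke's.
```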

4 Integrating Findings and Conclusion Firstly, methodological scrutiny and ample sample size have created a stable platform from which generalizable conclusions can be drawn. Although not nationally representative, the collected sample can be considered to capture enough variation between participants to allow for an inferential analysis. The most important findings from this research relate chiefly to the prominent difference in perceived credibility, which is dependent on the post origin and its truthfulness. Mean differences for various post types, that have proved to be statistically significant, call for deeper inquiry. The second and equally important finding in this study is the prediction of whether an individual will interact with a post. It points to an interesting phenomenon that people are more likely to interact with posts that are seen as more credible. Apart from suggestions about an increase in people who are likely to interact with a post that have already been put forth,

several more remarks have to be made about possible improvements to this line of research. It could prove to be of importance for future researchers to expand the rating scale for credibility from a 5-point scale to 7- or 9-point ones, in order to widen the range of possible answers. By doing so, it would be possible to address whether the relationship between interaction likelihood and post credibility is linear. It could also be the case that interaction likelihood reaches a plateau at some point and that further increases in post credibility have no additional effect. To better understand the implications of these results, future research in this area could also use more than one post per situation, in order to provide a more reliable estimate for each condition, and develop methods to better understand whether the presentation of posts on Facebook affects users’ perceived credibility of the author and content.

References
1. Stockdale, D.: Why We Should Hold Facebook Responsible for Fake News (2017). https://www.digitalethics.org/essays/why-we-should-hold-facebook-responsible-fake-news
2. Althuis, J., Haiden, L.: Fake News: A Roadmap. NATO StratCom COE and The King’s Centre for Strategic Communications, Riga (2018)
3. Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359, 1146–1151 (2018)
4. Lazer, D., Baum, M., Benkler, Y., Berinsky, A., Greenhill, K., Menczer, F., Metzger, M., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S., Sunstein, C., Thorson, E., Watts, D., Zittrain, J.: The science of fake news. Science 359, 1094–1096 (2018)
5. Singh, V.: Solutions to misinformation need human-centered design (2018). https://misinfocon.com/solutions-to-misinformation-need-human-centered-design-4f811a8f949b
6. Sismondo, S.: Post-truth? Soc. Stud. Sci. 47, 3–6 (2017)
7. European Commission: Summary Report of the Public Consultation on Fake News and Online Disinformation (2018)
8. Bozarth, L., Saraf, A., Budak, C.: Higher ground? How ground truth labeling impacts our understanding of fake news about the 2016 U.S. Presidential Nominees. SSRN Electron. J., 9–10 (2018). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3340173
9. Heuer, H., Breiter, A.: Trust in news on social media. In: Proceedings of the 10th Nordic Conference on Human-Computer Interaction - NordiCHI 2018 (2018)
10. Zhou, K.: Designing ethically pt. 1 (2018). https://uxdesign.cc/designing-ethically-pt-1-9800bfbc86a3
11. Martens, B., Aguiar, L., Gomez, E., Mueller-Langer, F.: The digital transformation of news media and the rise of disinformation and fake news. SSRN Electron. J., 31–46 (2018). EU report – https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/digital-transformation-news-media-and-rise-disinformation-and-fake-news
12. Shu, K., Sliva, A., Wang, S., Tang, J., Liu, H.: Fake news detection on social media. ACM SIGKDD Explor. Newslett. 19, 22–36 (2017)
13. Chen, Y., Conroy, N., Rubin, V.: News in an online world: the need for an “automatic crap detector”. Proc. Assoc. Inf. Sci. Technol. 52, 1–4 (2015)
14. Kang, M.: Measuring Social Media Credibility – A Study on a Measure of Blog Credibility. Institute for Public Relations (2010)
15. Tavakol, M., Dennick, R.: Making sense of Cronbach’s alpha. Int. J. Med. Educ. 2, 53–55 (2011)
16. Field, A.: Discovering Statistics Using IBM SPSS Statistics. SAGE, Thousand Oaks (2013)

Future Trends in Voice User Interfaces Jason Telner(&) IBM, 1 Orchard Rd, Armonk, NY 10504, USA [email protected]

Abstract. With the use of voice user interfaces (VUIs) only expected to grow and dominate in the future, this paper will explore the future trends and predictions for voice user interfaces, as well as the changing demands of users. Trends that will be discussed include the increase in personalized voice interface experiences and greater integration of voice user interfaces into our everyday devices. The predicted shift toward voice search in the future from text search will also be discussed. Finally, the use of voice device channel notifications, advertising through VUIs, and the need for better security of voice user interfaces to protect our personal information will be discussed.

Keywords: Voice interfaces · Future trends · Personalization · Integration · Voice search · Security of personal information

1 Introduction When most people think of voice user interfaces today, such as Google Home and Amazon Alexa, they think of simple experiences, such as finding out the weather forecast, playing a news update or playing your favorite song. But imagine if one morning, you jump into your car and on your way to work your Amazon Alexa reminds you through your vehicle’s infotainment system that you have a doctor’s appointment at 2 pm and also reminds you of the grocery items you need to pick up on your way home. It then asks you if you want to listen to your favorite song in the car. This integration of voice user interfaces into our everyday devices, including home entertainment systems, wearable devices, and public kiosks within airports, malls, and office buildings, is already occurring and is expected to expand even further in the future. A voice user interface (VUI) allows users to interact with computers and devices through voice recognition technology. VUIs are - in essence - invisible, hands-free interfaces that may allow for more human-like and efficient interactions compared to visual interfaces. Recently, VUIs have exploded in popularity. One in six adults in the United States, or 39 million people, own a voice-activated smart speaker such as Google Home or Amazon Alexa. Further, within the next year, it is expected that 50% of web browsing will be conducted by voice search [1]. The main driver of this rapid shift in the use of voice user interfaces has been changing user demands. Users are requiring faster, more efficient, and more convenient interactions due to their more frequent multitasking demands that require the use of their hands. These needs are better met through voice interaction than through display interaction, which often requires the use of a user’s hands and greater focus of attention. The mass adoption of artificial intelligence, as

well as Internet of things (IOT) devices such as thermostats and smart appliances have provided voice assistants with further utility. This paper will focus on future trends in voice user interfaces. These include more personalized voice user interface experiences, in which voice user interfaces will have an increased understanding of the user’s context and environment, as well as the ability to differentiate between multiple voices and hold multiple user accounts, tailoring information to a specific individual user. Another trend that is already emerging is the greater integration of voice user interfaces into the many devices we use every day, such as cars, home appliances, furniture, and television sets. This integration could potentially present challenges to app developers who might need to develop apps across multiple platforms resulting in a fragmented experience for users. Search behavior will also be discussed with regards to a predicted shift toward voice search in the future from text search. Voice interaction devices will also open additional channels for notifications and advertising which will be further discussed. Finally, security for voice interfaces which is another area that will change in the future will be discussed.

2 Personalized Voice Interface Experiences While voice interface technology has progressed substantially in recent years within voice assistants and in conversational advisors for telephone support, it still lacks the basic conversational components that create personalized voice experiences, similar to speaking with another human being. Context is one aspect of personalized voice experiences that is expected to become further enhanced in the future. Currently, some voice assistants such as Google Home, have the ability to keep some continual conversation in context and not require a user to say “Hey Google”, for each question in the conversation. This ability allows the device to continue listening after it has provided the user with information, so that it can answer follow-up questions from the user without the user having to prompt its attention again by saying “Hey Google”. For example, the user might begin by saying “Hey Google, who is Isaac Newton?” then ask “What was he famous for?”, then ask “When was he born?” Although the use of context by voice user interfaces has been shown to be generally helpful to users, voice user interfaces will need to be able to understand and discern between when context might be helpful to the user and when it could be a potential barrier to communication, such as when the user does not ask or give specific enough information. For example, “Hey Google, what time is the grocery store open until?”. “Which grocery store would you like information on?”. “The last grocery store I asked about”. “The last grocery store you asked about was Price Chopper”. This is an example in which the user’s prior question was not specific enough to indicate the hours it was open until even though it was in their prior question and could present a barrier to communication and to receiving accurate information [2]. The ability of voice interfaces to be more aware in the future of their locations, surroundings, and interactions is another example of the trend toward having more personalized voice interactions. Currently Amazon’s Alexa can tell the room it is in based on its ability to turn the room lights on or off. Voice interfaces in the future are predicted to make use of a user’s previous searches, online purchases, calendar and

previous interactions in order to provide more informative and useful information to a user. It could also provide more useful reminders to users regarding their appointments and daily activities. For example, the user could try and book a lunch reservation through their voice interface device for a few weeks into the future. The device could use the user’s calendar and notify them of current appointments already booked, as well as make recommendations of places to book based on their prior online searches, locations, etc. The ability of voice interfaces to understand a user’s tone of voice, pitch, mood and incorporate background noise levels are other future trends that could make voice interfaces more personalized. Voice interfaces could show empathy based on a user’s sad tone of voice, lower their volume of speech if they detect a lot of background noise or perhaps determine that the user was speaking in a private setting. The ability to differentiate between multiple voices and hold multiple user accounts, tailoring information to a specific individual user is another trend that may emerge in the future. If the device knew for example they were speaking with a particular user with difficulty hearing, it could speak more slowly with greater pauses and also use prompts with the user to verify that the user could hear it.
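To make the idea of carrying context across turns concrete, the toy sketch below keeps a small dialogue state so a follow-up question can be resolved against the previously mentioned entity, in the spirit of the follow-up-question example earlier in this section; the fact store and matching rules are illustrative assumptions, not any vendor's actual API.

```python
FACTS = {  # toy knowledge snippets about one entity
    "isaac newton": {
        "who": "Isaac Newton was an English mathematician and physicist.",
        "famous for": "He is famous for the laws of motion and universal gravitation.",
        "born": "He was born in 1643.",
    }
}

class ContextualAssistant:
    def __init__(self):
        self.current_entity = None  # carried across turns

    def ask(self, utterance: str) -> str:
        text = utterance.lower()
        for entity in FACTS:
            if entity in text:
                self.current_entity = entity  # a new topic was mentioned explicitly
        if self.current_entity is None:
            return "Sorry, I need more context."
        for key, answer in FACTS[self.current_entity].items():
            if key in text:
                return answer
        return "I didn't catch which detail you want."

assistant = ContextualAssistant()
print(assistant.ask("Hey Google, who is Isaac Newton?"))  # sets the conversation context
print(assistant.ask("What was he famous for?"))           # resolved via carried context
print(assistant.ask("When was he born?"))
```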

3 Integration of Voice Interfaces to Our Everyday Devices Another trend that is already emerging is greater integration of voice user interfaces into the many devices we use every day such as cars, home appliances, furniture, and television sets. Amazon’s Alexa is already integrated into a variety of products such as Samsung’s Family Hub refrigerators [3]. Google has created Google Assist Connect for manufacturers to create custom devices that are integrated with the digital assistant and its’ voice interface [4]. For other manufacturers of devices that do not create custom devices, these manufacturers might need to develop apps across multiple platforms resulting in a fragmented experience for users. In order to provide a more consistent experience and easier development of apps across multiple devices, there will be an increasing need for app developers and brands to develop apps together on a single platform in partnership to enable a more consistent VUI experience. There is already some evidence of this in the partnership between Amazon and Logitech to enable Amazon Alexa technology within various automobile brands [5]. This can also be seen in the finance industry in which companies are using Amazon Alexa to provide banking services to their customers from wherever they are located.

4 Shift Toward Voice Search from Text Search Voice and text search differ in a variety of ways and also in the experience they provide to the user. Voice search on the whole has longer search queries than text search and also makes greater use of natural language in the form of questions compared to text search, which makes use of only a few words. The expectations for how search results are displayed also differ between voice and text search. When asking a question via voice search, users generally expect a single answer, similar to engaging in a conversation with another person. Text search offers a listing of results, in which the user


expects they can scan the list to find the information they need while also possibly learning other peripheral information to their query in a more discoverable manner. Voice search also tends to be more location dependent than text search, in which the location of the user’s device has more importance for providing location-based searches, such as for directions to a location or the local phone number to call a type of business. Voice search is also the predominant way of searching for answers on a mobile device, in which the location of the device is used to provide more accurate information. It is predicted that text search will shift toward voice search in the future. Voice search is not a new phenomenon and is quietly taking over from text search as mobile devices have taken over from desktop devices. There are a variety of proposed reasons for this shift in search modality. One is that the accuracy in voice recognition is predicted to increase substantially. Another is that the contextual awareness of voice search is expected to improve. Voice assistants on mobile devices such as Apple’s Siri and Google Assistant have supported voice search since their creation. Research has shown that more users are using their smartphones for search and also are spending more time visiting mobile websites. Voice commands are generally easier than typing when on mobile devices because of the multitasking demands in which a user’s eyes and hands are often focused on multiple tasks. With voice search being more greatly integrated into a mobile device’s map applications, this will present greater opportunities for local businesses to incorporate location information into their sites. Voice search also is more conversational in nature than text search, enabling content creators of blogs and websites to incorporate more conversational language into their sites for better flow and more natural presentation of information. It is predicted that conversational language will comprise the future of SEO due to the shift in search from text to voice. There are still some challenges that remain however, before voice search will completely surpass text search. These include the challenge of when a human misspeaks and they must wait for a response from the VUI before they can speak again, which is slower then retyping their query again during text search. Another challenge for voice search is searching the web. Currently, voice search only provides single search results. In the future however, voice search could perhaps provide a listing of the top web results for a web query or even provide an audio read out of specific web page summary descriptions from the search results, similar to listening to an audio book.

5 More Channels for Notifications and Advertising
Voice interaction devices will also open additional channels for notifications and advertising. Developers of voice apps will need to be proactive in getting users engaged with their voice interfaces instead of waiting for them to engage, similar to how push notifications were designed for mobile phones. Advertisers will also take advantage of voice interfaces to advertise their brands. It has been noted by [6] that voice advertising through a VUI is able to provide more direct advertising with more emotionally resonant and intimate messaging than other forms of advertising. Unlike online advertising, in which multiple ads occur at the same time, with voice advertising, ads


would take place one at a time, allowing for more focused attention from the user. The biggest value-add for voice advertising is that it could cultivate interactivity between the user and the voice interface, unlike simple ads on the radio [7]. The ads are even predicted to be delivered around the content the user was asking for, making use of their context. If, for example, a user asked their voice interface device a question about a certain food or cuisine, this could prompt an advertisement to follow from a local store that sells that particular food item or a restaurant of that cuisine. It is predicted that advertisers will also make use of advertising through voice interfaces in cars, in which the location of the vehicle could be used to promote local ads of nearby businesses. Adobe has been leading an effort to provide voice analytics and insights to marketing companies regarding user voice searches and behaviors. Solutions already exist to assist in the analysis of voice big data which help marketers retrieve KPIs and insights. These include tools such as voice-activated Google Analytics or other tools developed for voice-based business intelligence. Even though voice ads have been described as a marketer's dream because of their ability to leverage a user's context, brands have not yet fully embraced this form of advertising with regards to launching it specifically on a voice-enabled platform such as Amazon Alexa. These audio ads have only made their way to voice-enabled platforms through a specific skill that the user must request, created around brand-related activities and serving as the voice gateway.

6 Security for Voice Interfaces
With almost half of all users of voice interfaces concerned about trust and privacy, security concerns will be a primary focus in the future [1]. With voice interface devices tied to numerous personal accounts, such as payment accounts, as well as to other personal devices such as home appliances and vehicles, they are a gateway to important user information. They are vulnerable to hackers and fraudulent activity mainly because a human's voice can be reproduced or recorded. Two-factor authentication will be needed with another form of identification besides voice, perhaps in the form of a physical key, electronic controller, verbal login, facial recognition, PIN code or traditional password. A second important security issue that users of voice interfaces must contend with is data collection, especially in the future as more of our voice interfaces become interconnected with our devices. Proper encryption methods across all data-related devices must be used to reduce the risk of cyberthreats. Further, as proposed by [8], one way to mitigate privacy concerns is to use more on-device AI processing of data rather than cloud processing. This would reduce the risk of sending a user's sensitive information to the cloud. Users would be much more trusting of their voice interfaces if the amount of information sent to the cloud were limited by a gateway that would first check with the user about where it would send their data. With more voice interfaces expected to be used in the workplace in the future, this raises a whole series of different concerns with regard to the exchange of company-sensitive data. Companies will need to develop policies and guidelines for employees using voice interface devices in terms of what they may use them for, what data can be exchanged, where the data will be stored and how long it will be kept.


7 Conclusions Voice user interfaces in the future are expected to grow, expand and dominate. In particular, this evolution includes increased personalization in the use of voice user interfaces, and greater integration into our everyday devices. Further developments for voice user interfaces in the future include further domination of voice search over text search and the creation of additional channels for notifications and advertising. Although these advances are predicted to be significant, they will not be met without challenges in the future. These include the ability of voice user interfaces to properly discern context, potential fragmented user experiences as a result of multiple apps across different platforms, the VUIs capacity to handle a user’s misspoken words, searching the World Wide Web via a VUI, getting brands onboard with advertising through VUIs and the increase in security threats to a user’s personal and confidential information.

References
1. Ciligot, C.: 7 Key Predictions for the Future of Voice Assistants and AI. Clearbridge Mobile (2019). Clearmobile.com
2. Richardson, C.: The Future of Voice User Interfaces. Digital Doughnut (2018). Digitaldoughnut.com
3. Wroklowski, D.: Which Smart Appliances Work With Amazon Alexa, Google Home, and More. Your Appliances Might Already be Smarter Than You Think. Consumer Reports. www.consumerreports.org/appliances
4. Ghebart, A.: Everything You Need to Know About Google Home. CNET (2019). cnet.com
5. Etherington, D.: Logitech Now Offers the Easiest Way to Get Amazon's Alexa in Your Car. TechCrunch (2017). techcrunch.com, 2017/02/07
6. Trimble, J.: Advertisers are Finding New Places for Ads With the Rise of Voice Technology. Vox (2019). www.vox.com/2018/1/25
7. Marinina, M.: Voice Marketing is a Looming Opportunity, but Not Without Its Pitfalls. VentureBeat (2019). Venturebeat.com
8. Nachum, Y.: Privacy Issues with Voice Interfaces (2019). EEweb.com

Artificial Intelligence Enabled User Experience Research Sarra Zaghdoudi(&) and Leonhard Glomann LINC Interactionarchitects GmbH, Munich, Germany {Sarra.Zaghdoudi,Leo.Glomann}@linc-interaction.de

Abstract. This paper describes an approach to a prototypic system that aims to automate user research activities. Building digital products that fulfil user needs requires an analysis of its users, their contexts and their tasks. These user research activities commonly require a large amount of human effort, resulting in collections of unlinked quantitative and qualitative data sets. Our system aims to analyse and combine “human-made” user research data and extract meaningful insights through machine learning methods. This paper highlights the challenges we faced while building the first prototypes, our need for labelled data and how we managed to overcome its scarcity. Finally, we describe the first prototype which consists of a keyword extractor, sentiment extractor and text annotator, providing a programmatic way to analyse user research data. Keywords: Machine learning  Natural language processing  Neural networks  User research  Qualitative analysis  Quantitative analysis

1 Introduction In business, most companies regard the customer experience of their digital products or services as the primary differentiator towards competition [1]. Referring to the standard “Ergonomics of human-system interaction – […] Human-centred design for interactive systems” [2], designing a digital customer experience needs to be based on user (experience) research activities and the insights gained. In simple terms, user research is conducted by collecting and analysing quantitative data (e.g. programmatic user tracking, online surveys or customer requests) and/or qualitative data (e.g. contextual inquiries, field studies or focus groups). Gathering this information, especially quantitative data, may in part be done in an automated fashion. However, combining different data types, analysing that combination and deriving insights from it, requires a very high degree of human effort. In many contexts the way user research is set-up, planned and conducted is highly inefficient, especially in regards to the following three aspects: 1) Qualitative information is usually used for one-off projects which results in high costs for a limited outcome, 2) merging data is a manual process requiring intense human effort, and 3) linking quantitative and qualitative data is rarely done.



2 Objective Our objective is to develop a system that automates user research activities. This system consolidates and links quantitative and qualitative user research data, analyses it and creates meaningful user data insights. With the help of this system, companies save time and money in manual analysis and uncover hidden potential in their own data to optimize or innovate research and development topics, business models and customer experiences. This paper describes the approach towards an initial prototype of that system and indicates how we gained first insights.

3 Approach Before starting to work on the first prototype, we outlined the main challenges for user researchers when it comes to understanding users. We wanted to determine how a programmatic approach, especially machine learning1 (ML), may be effectively applied to meet researchers’ needs. It is not a simple task to analyse a huge amount of unstructured contextual user documentation in order to obtain deep insights from it. With machine learning and natural language processing2 (NLP) techniques, investigating the user context becomes faster and easier. To create an initial prototype, we went through three phases. First, we used fundamental NLP methods to implement a keyword extractor tool that enables researchers to highlight the main user concerns in survey data. After that, we implemented a sentiment extractor since it is important for businesses to understand how customers perceive their products. Finally, we built a text annotator to associate meaningful tags with data from user evaluations.

4 Statistics Based Insights With NLP, statistical methods can be used to derive insights from text, such as keyword extraction. Keyword Extraction. Keyword extraction is an automatic process that consists of extracting the most relevant words or expressions from a text. It helps to pull out the most important words or phrases that best describe a user statement on a huge file of statements. It can be achieved using statistical methods, linguistic methods or machine learning algorithms. In our case, we used a popular statistical method called TF-IDF. TF-IDF stands for term frequency-inverse document frequency. It is a mathematical formula that measures how relevant a word is to a given document [3] – in our example

1 Machine learning is an artificial intelligence (AI) field that enables systems to learn "intelligent" tasks through experience. Building an ML algorithm involves two fundamental steps: training, where the algorithm learns how to perform the desired task, and testing, to validate the model.
2 Natural language processing is an artificial intelligence component that aims to make computers understand and generate human natural language.


below, a single user statement from a user review – in a collection of many documents. It considers the whole set of reviews when setting the keywords. The TF-IDF method calculates the number of occurrences of a word in a review, term frequency, and compares it with the inverse document frequency which indicates how common this word is in the whole data set of reviews. The higher the TF-IDF score, the more relevant the term is to the review. Thus, we consider words with the highest TF-IDF values as keywords. In one example, we used previously gathered user requirements from user stories3 as input data. The following shows two examples of keyword extraction using TF-IDF: As a user, I want a clearer grid icon so that I can better understand the connection between both screens. => Screen, Icon, Connection As a user, I want to see the product’s season so that I can have a better overview. => Season, Product, Overview
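The paper does not name the library used for this step; as a rough illustration of the TF-IDF keyword extraction described above, a minimal sketch with scikit-learn could look as follows. The statements are illustrative stand-ins, not the authors' dataset.

```python
# Minimal TF-IDF keyword extraction over a small collection of user statements.
from sklearn.feature_extraction.text import TfidfVectorizer

statements = [
    "As a user, I want a clearer grid icon so that I can better understand "
    "the connection between both screens.",
    "As a user, I want to see the product's season so that I can have a better overview.",
    "As a user, I want a season filter on the overview screen.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(statements)        # rows: statements, columns: terms
terms = vectorizer.get_feature_names_out()

for row, text in zip(tfidf.toarray(), statements):
    # keep the three terms with the highest TF-IDF score for this statement
    top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
    print(text, "=>", [term for term, score in top if score > 0])
```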

5 Machine Learning Based Insights
In this section we describe our journey with machine learning methods, namely neural networks, to perform sentiment analysis and text classification for the creation of our prototype.
5.1 Data Collection

Data is the lifeblood of every machine learning project. During training, ML models learn how to predict results from data. Thus, the choice of data is critical for model quality and efficiency. In our project, we needed labelled data to perform sentiment analysis and text classification, since both are supervised learning techniques.
Sentiment Dataset. Sentiment analysis is a binary classification problem: the data is labelled into positive or negative statements. Thankfully, there are many free datasets for training sentiment classification models. We chose to work with the IMDB dataset, which contains 50,000 native movie reviews [4]. The dataset is balanced and split into training and testing sets, each containing 25,000 user reviews. Moreover, each set contains 12,500 positive reviews and 12,500 negative reviews.
Text Classification Dataset. Our text classification model is a multi-labelling problem, which means that for every user statement we associate a set of labels.
3 A user story in agile software development is an informal, natural language description of one or more features of a software system from an end-user perspective.
4 Neural networks are a subclass of machine learning algorithms, mainly inspired by the biological neural networks of mammals.
5 In supervised learning, we have data and labels (as an output) and we want the algorithm to learn how to map the input data to the desired output.


We prepared our dataset from scratch because it was challenging to find a suitable dataset, especially considering that we want a specific set of labels for every data point. For this, we collected customer feedback data from the internet. More specifically, we scraped reviews from Amazon's website and labelled the data manually. With every user comment we associated some, all or none of the following tags: Pricing, Ease of Use and Features. Although labelling is crucial for model accuracy, it is costly and time-consuming. So far, we have only labelled 1,600 reviews and have managed to deal with this lack of data using algorithmic methods. As a first step, we used data augmentation to expand the size of the labelled set without any human effort, as sketched below. We opted for a synonym-replacement method, which consists of replacing a noun or a verb with its most similar synonym. For every review, we replaced 50% of its nouns and verbs. The final augmented set consists of 3,000 comments.
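A possible sketch of this synonym-replacement augmentation, assuming NLTK's WordNet as the synonym source (the paper does not state which resource was used); the 50% replacement rate follows the text, everything else is illustrative.

```python
# Replace roughly half of the nouns and verbs in a review with WordNet synonyms.
import random
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def augment(review, rate=0.5):
    tokens = nltk.word_tokenize(review)
    out = []
    for word, tag in nltk.pos_tag(tokens):
        if tag.startswith(("NN", "VB")) and random.random() < rate:
            pos = wordnet.NOUN if tag.startswith("NN") else wordnet.VERB
            lemmas = {l.name().replace("_", " ")
                      for s in wordnet.synsets(word, pos=pos) for l in s.lemmas()}
            lemmas.discard(word)
            if lemmas:
                word = random.choice(sorted(lemmas))  # crude stand-in for "most similar"
        out.append(word)
    return " ".join(out)

print(augment("The camera works well but the battery drains quickly"))
```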

5.2 Data Processing

Since the collected data consists mainly of native users' statements, it is unstructured and not in the desired format. Therefore, we cleaned the text to enhance its quality. We also encoded it into numerical vectors using word embeddings so that it is readable by a machine learning algorithm.
Text Cleaning. The goal of text cleaning is to get rid of useless extra information and put the raw sentences into a proper form. The first step is tokenization, which consists of splitting the sentences into a group of tokens or words. The next step is normalization, a series of operations that puts all user statement texts on the same level. From these operations we mainly used lemmatization [5] to set a word to its lemma or root word, stemming to eliminate affixes, removing punctuation such as commas and quotes, and removing extra whitespace. We also set all words to lowercase and remove stop words, which are common words like "that", "and", "or" with no meaningful semantic contribution.
Word Embedding. Word embedding is a way to map words from a vocabulary to vectors of real numbers. This transformation is necessary because machine learning algorithms such as neural networks are not able to process strings of plain text. This vector representation has two main advantages over other encoding methods: it guarantees a low-dimensional vector space, which is more efficient in terms of model complexity, and it preserves semantic meaning, so that words with similar meanings have similar vector representations. In our neural networks, we started with a word embedding layer to process the already cleaned user statements.
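A minimal sketch of the cleaning pipeline just described (tokenization, lowercasing, punctuation and stop-word removal, lemmatization), assuming NLTK; the paper does not specify its tooling.

```python
# Clean a raw user statement into a list of normalized tokens.
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def clean(statement):
    tokens = nltk.word_tokenize(statement.lower())                 # tokenize + lowercase
    tokens = [t for t in tokens if t not in string.punctuation]    # drop punctuation
    tokens = [t for t in tokens if t not in stop_words]            # drop stop words
    return [lemmatizer.lemmatize(t) for t in tokens]               # reduce to lemmas

print(clean("The grid icons aren't clearly connected to both screens!"))
```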

5.3 LSTM Neural Network

To perform sentiment analysis and text annotation, we have chosen to work with the long short-term memory (LSTM) [6] neural network, a specific class of recurrent neural networks (RNN) [7].


RNNs are neural networks that are popular for processing sequential information. In an RNN, neurons have feedback connections which form a sort of internal "memory". This memory means that the RNN performs the same operation for every element of a sequence of inputs in such a way that every output depends on all previous computations; at each step the network still remembers information about the previous steps. This information is called the hidden state. This mechanism helps learn and model sequential data such as text as a sequence of words, or words as a sequence of letters. A major problem with RNNs is their short-term memory: if a sequence is long, RNNs are not able to carry information from the initial steps to later steps. If we process long paragraphs with an RNN, it may ignore important information from the beginning and, as a result, deliver poor results. LSTM was invented to close this gap by adding another internal mechanism, called gates, that regulates the flow of information across steps.

5.4 Sentiment Analysis

The goal of sentiment analysis is to classify native users' statements into two categories: positive and negative. To train and test our LSTM neural network, we used the labelled reviews from the IMDB dataset, as described in Sect. 5.1. The final model was trained for 4 epochs and has a training accuracy of 92% and a testing accuracy of 88%. Below are examples of user statement sentiment classification results with the degree of confidence of each prediction.
We don't find the system stable. => Sentiment: negative | Prediction score: 0.30341896
Great quality!! => Sentiment: positive | Prediction score: 0.57020533
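A compact sketch of an embedding + LSTM sentiment classifier of the kind described in Sects. 5.2 to 5.4, written with Keras as an assumed framework; layer sizes, vocabulary size and the toy inputs are illustrative, not the authors' configuration.

```python
# Embedding layer -> LSTM -> sigmoid output for binary (positive/negative) sentiment.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

texts = ["great quality", "we don't find the system stable"]   # toy stand-ins
labels = np.array([1, 0])                                       # 1 = positive

tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=200)

model = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),   # word-embedding layer
    layers.LSTM(64),                                     # long short-term memory layer
    layers.Dense(1, activation="sigmoid"),               # prediction score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=4, verbose=0)
print(model.predict(x))
```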

5.5 Text Classification

The goal of text classification is to assign a set of predefined tags to a given text. It is similar to a keyword assignment task. It is not necessary that the tags appear in the original text. Rather, they help categorize the text content. From a machine learning perspective this is a multi-label classification task, which means that each sample may be assigned to one or many classes at once. This is different from multi-class tasks such as sentiment analysis, in which each sample is only assigned to one. To train the LSTM model we used our prepared data, as described in Sect. 5.1, which contains 3,000 multi-tagged user statements. We took out 500 samples for later testing. Therefore, we only had 2,500 statements to train the model and here lies the real challenge. On the one hand, 2,500 labelled samples are not enough for a machine learning supervised model to perform well. On the other hand, we have thousands of unlabelled

6 In neural networks, an epoch is a training cycle through the full training dataset.


user statements. Unfortunately, we cannot switch to unsupervised learning, since we already know our target classes. Therefore, we decided to proceed with semi-supervised learning, which combines supervised and unsupervised learning and makes use of the otherwise abandoned unlabelled data. The semi-supervised technique that we used is called pseudo-labelling [8]. It follows four basic steps: first, we train the model on a small amount of labelled data; then we use that first model to predict labels for new unlabelled data. From the new pseudo-labelled set, we select the data with good prediction scores, combine it with the original training set, and continue training. We repeat these steps until we have a good model. In our case, we started with the 2,500 labelled reviews that we had, training the model for a fixed number of epochs before enhancing the training set: after every 50 epochs, we incorporated newly pseudo-labelled data and continued training. The total number of training iterations was 500, at which point the model training accuracy stabilized at around 98%. During the final 100 epochs, the model was trained on almost 20,000 multi-labelled statements, and we obtained a testing accuracy of around 87%. The following shows an example of a multi-label classification.
Predicted tags for 'This is a nice and cheap android device. HD display and sturdy built quality with 10 h + battery backup. Sometimes runs a little slow, rest is good':
Pricing: 95% Ease of Use: 98% Features: 99%
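A schematic of the pseudo-labelling loop just described, shown for a single binary output for brevity; model, x_labeled, y_labeled and x_unlabeled are assumed to exist (for instance from the LSTM sketch above), and the 0.9 confidence threshold is an assumption.

```python
# Iteratively grow the training set with confidently pseudo-labelled samples.
import numpy as np

def pseudo_label_training(model, x_labeled, y_labeled, x_unlabeled,
                          rounds=10, epochs_per_round=50, threshold=0.9):
    x_train, y_train = x_labeled, np.ravel(y_labeled)
    for _ in range(rounds):
        # 1) train on the currently labelled (and pseudo-labelled) data
        model.fit(x_train, y_train, epochs=epochs_per_round, verbose=0)
        # 2) predict labels for the unlabelled pool
        probs = np.ravel(model.predict(x_unlabeled, verbose=0))
        # 3) keep only confident predictions as pseudo-labels
        confident = np.maximum(probs, 1.0 - probs) >= threshold
        if confident.any():
            # 4) merge pseudo-labelled samples with the original training set
            x_train = np.concatenate([x_labeled, x_unlabeled[confident]])
            y_train = np.concatenate([np.ravel(y_labeled),
                                      (probs[confident] >= 0.5).astype("float32")])
    return model
```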

6 Summary, Limitations and Perspectives
With our prototype we have successfully implemented a keyword extractor, a sentiment extractor and a text annotator. This is only a first step. With the insights derived so far, a user researcher is able to dig more easily into the voluminous information gathered from and about users. While the keyword extraction and sentiment analysis modules can be used to investigate any kind of user statements, the current text classification module is limited to user reviews related to a specific category of products. This is due to the data we used to train the algorithm. Data is a challenge for us, since it determines the quality and the performance of the model. We used user input data such as online user reviews, rather than contextual user research documentation, because it was easier to access larger numbers of the former. However, the data sets used for the prototype are not large enough to train the actual system in the required way. We are aware that our first prototypes are not fully accurate, but we are working to get better results and bypass major problems we encountered during training, most notably in the text classification model, where we had an overfitting problem. For that, we are still improving our text classifier and experimenting with data quality. As a next step, we are planning to combine insights, for example merging the text classifier and the sentiment analysis to derive more detailed information. We are also considering building models to extract users' intent and predict their behaviour. These in-depth insights are necessary to attain our objective of developing a system that automates user research activities.
7 Unlike supervised learning, unsupervised learning models find patterns inside data without any reference to labelled data or a known output.

References
1. Gartner: Customer Experience Research. USA (2017)
2. International Organization for Standardization: DIN EN ISO 9241-210 Human-centred design for interactive systems (2010)
3. Leskovec, J., Rajaraman, A., Ullman, J.D.: Mining of Massive Datasets, pp. 8-9. Cambridge University Press, Cambridge (2014)
4. Lakshmipathi, N.: IMDB Dataset of 50K Movie Reviews (2019). https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews. Retrieved 31 January 2020
5. Torres-Moreno, J.: Automatic Text Summarization (Cognitive Science and Knowledge Management), p. 24. Wiley-ISTE (2014)
6. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735-1780 (1997)
7. Elman, J.L.: Finding structure in time. Cogn. Sci. 14(2), 179-211 (1990)
8. Dong-Hyun, L.: Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In: ICML 2013 Workshop: Challenges in Representation Learning (WREPL), Atlanta, USA (2013)

8 In ML, overfitting refers to a model that learns the training data too well: it learns all the details and noise to the point that it is not able to perform well on new, unseen data.

Product Sampling Based on Remarks of Customs in Online Shopping Websites for Quality Evaluation Haitao Wang, Jing Zhao, Gang Wu, Fan Zhang, Chao Zhao, and Xinyu Cao(&) China National Institute of Standardization, Beijing 100191, China {wanght,zhaoj,wugang,zhangfan,zhaochao, caoxy}@cnis.ac.cn

Abstract. In recent years, the scale of online marketing has increased rapidly. The remarks customers leave after shopping mostly comment on the quality of goods. This information can provide support for online marketing platforms, production enterprises and market supervision departments, and guide management organizations in finding quality problems. This paper proposes a sampling method based on the Bayesian method and customer remarks, which can be used to evaluate the quality of goods. It can greatly reduce the number of samples and find quality problems of goods effectively.
Keywords: Product sampling · Quality evaluating · Customer remarks · Bayesian method

1 Introduction
In recent years, the scale of online shopping has soared. The comments customers leave after shopping mostly concern the quality of goods. This information can also provide support for the quality management of online marketing platforms, production enterprises and market supervision departments, and guide management organizations and personnel in finding commodity quality problems. Based on the Bayesian method and customer comment information, this paper studies a sampling scheme and sampling technology for online goods. The scores of quality-related comments are converted into sampling probabilities, and stratified sampling is then used to draw samples from online stores and brands. This method can greatly reduce the number of samples needed and find quality problems of commodities more effectively. Sampling inspection is a method that does not inspect all products but only takes a certain number of samples from the whole for inspection, and judges whether the overall quality level of a batch or process has reached the expected requirements according to the results of the sample inspection. It is especially suitable for large batches or destructive inspection. A scientific sampling inspection plan can not only reduce costs and the risk of nonconformity but also improve accuracy, which is an important measure for achieving quality control [1, 2]. Product sampling inspection is an


important means of quality management, an important content of quality assurance mode and an important part of product quality control technology system.

2 Basic Algorithm
An online marketing platform is generally composed of multiple layers, as shown in the following figure (Fig. 1).

Fig. 1. Architecture of on-line market (platform → shops → brands → product types).
Fig. 2. Process of algorithm: calculating the affective score of customer comments → calculating the prior probability → calculating the posterior probability → calculating the stratified sampling probability.


We can calculate the affective score of every product from the customers' comments and obtain the total score of every brand and every shop. The score is treated as a prior probability that reflects the goodness or badness of the products. Then, we use the Bayesian method to calculate the posterior probability from it. Because the prior probabilities vary with the affective scores of different products, the posterior probabilities vary accordingly. Therefore, we design a different sampling probability for each brand and each shop, which is a stratified sampling method [3].

3 Prior Probability Calculation
With natural language processing and affective computing, we can calculate the affective score of every comment and sum the scores up into the affective score of a brand and of a shop:

X_i = \sum_j w_j x_{ij}, \qquad \sum_j w_j = 1   (1)

where X_i is the emotion value of a brand and x_{ij} is the emotion value of one product type of that brand. Considering that the magnitude of the original negative score is quite different from that of the positive score, even a heavily weighted negative score cannot be reflected in the final result. Therefore, the positive and negative scores are standardized by Z-score, and the standardized data are analyzed:

x_{ij} = a_1 z^{+} + a_2 z^{-}, \qquad a_1 + a_2 = 1   (2)

where z^{+} is the positive Z-score and z^{-} the negative Z-score:

z^{+} = \left( \tfrac{1}{n}\sum_{i=1}^{n} s_i^{+} - \bar{s}^{+} \right) / \sigma^{+}   (3)

z^{-} = \left( \tfrac{1}{n}\sum_{i=1}^{n} s_i^{-} - \bar{s}^{-} \right) / \sigma^{-}   (4)

Here s_i^{+} is a positive emotion value, \bar{s}^{+} the average positive emotion value and \sigma^{+} the standard deviation of the positive emotion values; s_i^{-}, \bar{s}^{-} and \sigma^{-} are defined analogously for the negative emotion values. For the weight scores we use the entropy method. The steps of the entropy method for calculating the weights are as follows:


1. Process the original data x_{ij} with the following formulas:

x_{ij}^{+} = \frac{x_{ij} - \min(x_j)}{\max(x_j) - \min(x_j)}   (5)

x_{ij}^{-} = \frac{\max(x_j) - x_{ij}}{\max(x_j) - \min(x_j)}   (6)

2. Calculate the proportion values:

p_{ij}^{+} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}, \quad i = 1, 2, \dots, m; \ j = 1   (7)

p_{ij}^{-} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}, \quad i = 1, 2, \dots, m; \ j = 2   (8)

3. Calculate the entropy value:

e_j = -k \sum_{i=1}^{m} p_{ij} \ln p_{ij}, \quad i = 1, 2, \dots, m; \ j = 1, 2   (9)

p_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}, \qquad k = \frac{1}{\ln m}   (10)

where m is the count of brands.
4. Calculate the factor of difference:

g_j = 1 - e_j   (11)

5. Calculate the weight:

a_j = \frac{g_j}{\sum_{j=1}^{2} g_j}   (12)
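A small numerical sketch of the entropy-weight calculation in Eqs. (5) to (12); the input scores are invented, and k = 1/ln m is assumed in line with Eq. (10).

```python
# Entropy-method weights for the positive and negative score columns.
import numpy as np

def entropy_weights(x):
    """x: m brands x 2 columns [positive score, negative score]."""
    m = x.shape[0]
    rng = x.max(axis=0) - x.min(axis=0) + 1e-12
    benefit = (x - x.min(axis=0)) / rng           # Eq. (5), used for the positive column
    cost = (x.max(axis=0) - x) / rng              # Eq. (6), used for the negative column
    z = np.column_stack([benefit[:, 0], cost[:, 1]])
    p = z / (z.sum(axis=0) + 1e-12)               # Eqs. (7)-(8)
    k = 1.0 / np.log(m)                           # assumption consistent with Eq. (10)
    safe = np.where(p > 0, p, 1.0)                # avoid log(0); contributes 0 to the sum
    e = -k * (p * np.log(safe)).sum(axis=0)       # Eq. (9)
    g = 1.0 - e                                   # Eq. (11)
    return g / g.sum()                            # Eq. (12): weights a_1, a_2

scores = np.array([[0.8, -0.2], [0.3, -0.9], [0.5, -0.4], [0.1, -0.1]])
print(entropy_weights(scores))
```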

4 Stratified Sampling
We propose a two-layer stratified sampling method, as shown in Fig. 3:


Fig. 3. Layers for sampling: the products of each brand (brand A, brand B, ..., brand n) are distributed across the shops (shop 1, ..., shop i).

The upper layer is the brand layer, where each brand has a different sampling probability. The lower layer is the shop layer, where the sampling probability of a brand is divided among shops according to sales volume. After obtaining the affective scores of the different layers of Fig. 1, we calculate the probabilities and weights as follows:

P(x_{ij}) = \frac{\max(X) - x_{ij}}{\max(X) - \min(X)} \times 100\%   (13)

where x_{ij} is the emotion value of a product type and \max(X), \min(X) are the maximum and minimum emotion values within the same brand. Here, a higher sampling probability for a brand means more negative comments and worse quality. The sampling probability of a brand is then calculated as

P(X_i) = \sum_{j} P(X_i \mid x_{ij}) P(x_{ij})   (14)

The sampling probability of the upper layer is

P(X_i \mid G) = \frac{P(X_i)\, P(G \mid X_i)}{\sum_{j} P(X_j)\, P(G \mid X_j)}   (15)

where G denotes the products and X_i the brands. The sampling probability of the lower layer is

P_{X_i}(S_j) = \frac{C(S_j, X_i)}{\sum_{k} C(S_k, X_i)}   (16)

where S_j are the shops, X_i the brands and C(S_j, X_i) the sales volume of X_i in shop S_j.
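A toy sketch of the two-layer sampling probabilities of Eqs. (13) to (16); the emotion values and sales volumes are invented, and the Bayesian update of Eqs. (14) and (15) is collapsed into a simple normalisation for brevity.

```python
# Brand-level sampling probabilities from emotion values, split per shop by sales.
import numpy as np

brand_scores = {"A": [0.9, 0.4], "B": [0.2, 0.1], "C": [0.7]}     # per product type
sales = {"A": {"shop1": 100, "shop2": 300},
         "B": {"shop1": 500},
         "C": {"shop2": 250, "shop3": 50}}

all_scores = [s for v in brand_scores.values() for s in v]
hi, lo = max(all_scores), min(all_scores)

# Eq. (13): worse (lower) emotion values get a larger sampling weight; Eqs. (14)-(15)
# are approximated here by averaging per brand and normalising across brands.
prior = {b: np.mean([(hi - s) / (hi - lo) for s in v]) for b, v in brand_scores.items()}
total = sum(prior.values())
brand_prob = {b: p / total for b, p in prior.items()}

# Eq. (16): split a brand's probability across its shops by sales volume.
shop_prob = {}
for b, shops in sales.items():
    vol = sum(shops.values())
    shop_prob[b] = {shop: brand_prob[b] * c / vol for shop, c in shops.items()}

print(brand_prob)
print(shop_prob)
```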

5 Experiments and Conclusion
We developed an experimental system in Python and Java to test the algorithm. The test data come from a shopping website in China. The results show that the Bayesian method combined with customer remarks can be used to evaluate the quality of goods, and can greatly reduce the number of samples while finding quality problems effectively (Figs. 4 and 5).

Fig. 4. Prior probability calculated with affective score.


Fig. 5. Sampling probability calculated with prior probability.
Acknowledgements. This research was supported by National Key Technology R&D Program (2017YFF0209004, 2016YFF0204205, 2018YFF0213901), and grants from China National Institute of Standardization (522018Y-5941, 522018Y-5948, 522019Y-6781, 522019Y-6771).

References
1. Liu, C., Liu, D.: Study on sampling inspection scheme to digital products in GIS. Geo-Spatial Inf. Sci. 4, 62-67 (2001)
2. He, W., Cheng, L., Li, Z., Liu, G.: Study on evaluation of sampling inspection plan for cigarette trademark paper. In: The 2014 IMSS International Conference on Future Manufacturing Engineering (ICFME 2014), pp. 459-462 (2014)
3. Ozturk, O.: Post-stratified probability-proportional-to-size sampling from stratified populations. J. Agric. Biol. Environ. Stat. 24, 693-718 (2019)

Affective Computing Based on Remarks of Customs in Online Shopping Websites Haitao Wang, Fan Zhang, Gang Wu, Jing Zhao, Chao Zhao, and Xinyu Cao(&) China National Institute of Standardization, Beijing 100191, China {wanght,zhangfan,wugang,zhaoj,zhaochao, caoxy}@cnis.ac.cn

Abstract. In recent years, the scale of online marketing has increased rapidly. The remarks customers leave after shopping mostly comment on the quality of goods. This information can provide support for online marketing platforms, production enterprises and market supervision departments. This paper proposes an affective computing method based on previous customer remarks, which can not only make good use of the remarks but also greatly reduce the difficulty of finding better products.
Keywords: Affective computing · Quality evaluating · Customer remarks

1 Introduction
The scale of the online retail market is expanding rapidly, and its position in people's daily life is becoming more and more prominent. Effective supervision of the quality of online goods is receiving attention from online marketing platforms, enterprises and government management departments. After people buy goods, they comment on the quality of the goods, and these comments often include the user's feelings about that quality. For products with good quality, users post more positive comments, and for products with relatively poor quality, they post more negative comments. In addition to comments on online marketing platforms, many users also publish their opinions on the quality of a product through microblogs and other social media. By mining and analyzing users' comments, we can obtain the overall evaluation of the goods and predict their quality from the user's point of view, which gives a useful hint to subsequent purchasers, production enterprises, online sales platforms, market supervision departments, etc. [1, 2]. Therefore, this paper proposes an analysis method for user comments based on affective computing. By calculating and integrating user comments and microblog posts on online marketing platforms (mainly Chinese platforms at present), the user affective scores of the product quality of different brands are obtained.



2 Architecture of On-Line Market
An online marketing platform is generally composed of multiple stores; each store sells goods of one or more brands, and each product can be divided into different models (Fig. 1).

Fig. 1. Architecture of on-line market (platform → shops → brands → product types).

Affective analysis, also known as affective computing, tendency analysis or opinion mining, is the process of analyzing, processing, summarizing and reasoning over subjective texts that carry affective color. Dictionary-based techniques focus on the affective words in a text and the modifiers near them in order to determine the affective tendency of a sentence. Because the sentence structure of commodity reviews is relatively simple and their affective color is usually strong, dictionary-based affective analysis can calculate the affective tendency of reviews accurately and efficiently [3]. The dictionary-based calculation of the sentiment tendency of a commodity comment mainly uses the collected comment information: clause segmentation divides a user comment into multiple sub-sentences based on a pre-constructed sentiment dictionary; the modification relationships of the clauses are then analyzed, and the positive and negative tendency of each clause is calculated according to the sentiment calculation model. Finally, the positive and negative affective scores of all reviews of the commodity are summarized. The figure below is the framework for calculating the affective tendency score of a commodity evaluation; the specific steps are as follows (Fig. 2).


Fig. 2. Architecture of affective computing: remarks and comments of customers → sentence segmentation → affection words and modification analysis against the affection dictionary → affective computing model → computing and output.
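A minimal dictionary-based scoring sketch in the spirit of Fig. 2; the word lists, scores and windowing rule are invented English stand-ins for the Chinese dictionaries and modification analysis used by the authors.

```python
# Score a review by summing clause scores built from affection, negation and degree words.
AFFECTION = {"good": 2.0, "excellent": 3.0, "bad": -2.0, "broken": -3.0}
NEGATION = {"not", "never", "no"}
DEGREE = {"very": 1.5, "slightly": 0.5, "extremely": 2.0}

def clause_score(clause):
    words = clause.lower().split()
    score = 0.0
    for i, w in enumerate(words):
        if w not in AFFECTION:
            continue
        value = AFFECTION[w]
        for mod in words[max(0, i - 2):i]:   # modifiers just before the affection word
            if mod in DEGREE:
                value *= DEGREE[mod]
            if mod in NEGATION:
                value *= -1                  # negation flips the affective direction
        score += value
    return score

def review_score(review):
    clauses = review.replace("!", ".").split(".")
    return sum(clause_score(c) for c in clauses if c.strip())

print(review_score("The screen is very good. The battery is not bad!"))
```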

3 Process of Computing
3.1 Dictionaries

The calculation process mainly uses the following dictionaries:
1. Affection dictionary. Each word is marked with an affective score, which expresses the amount of subjective affection the word carries when it is used. Generally speaking, a positive score indicates positive (commendatory) affection and a negative score indicates negative (derogatory) affection.
2. Dictionary of negative words. The appearance of a negative word directly turns the affection of a sentence in the opposite direction, and the effect of several negations is usually superimposed. Examples: "no", "never", etc.
3. Dictionary of degree words. Since the positive and negative affections of a text are judged by scoring, the absolute value of the score usually indicates the strength of the affection. Introducing degree adverbs is therefore necessary, as they relate to the strength of the affection. The data format in the dictionary can follow the format below, with two columns: the first


column is the degree adverb and the second column is the degree score; a score > 1 means strengthening the affection. [...] weight(ch) < 0 indicates that the single word ch should be combined with the word or FWTs on the left; weight(ch) > 0 means that the single word ch should be combined with the word or FWTs on the right. weight(ch) is calculated as follows:

weight(ch) = \sum_{i=1}^{5} w_i f_i(ch)   (1)

f_1(ch) = \begin{cases} 1 & pos(ch) \in \{a, m, p\} \\ 0 & \text{otherwise} \\ -1 & pos(ch) \in \{g\} \end{cases}   (2)

f_2(ch) = \begin{cases} 1 & Form(ch, left) \in Diction \\ 0 & \text{otherwise} \\ -1 & Form(ch, right) \in Diction \end{cases}   (3)

f_3(ch) = \begin{cases} 1 & \exists\, word \in Diction \wedge res(word, Form(ch, left)) = 1 \\ 0 & \text{otherwise} \\ -1 & \exists\, word \in Diction \wedge res(word, Form(ch, right)) = 1 \end{cases}   (4)

f_4(ch) = \begin{cases} 1 & IsTailFq(ch) > IsHeadFq(ch) \\ 0 & \text{otherwise} \\ -1 & IsHeadFq(ch) > IsTailFq(ch) \end{cases}   (5)

f_5(ch) = \begin{cases} 1 & Fq(ch, left) > Fq(ch, right) \\ 0 & \text{otherwise} \\ -1 & Fq(ch, left) < Fq(ch, right) \end{cases}   (6)

fi(ch) means the weight value of a single word ch when it meets the i-th feature. wi represents the weight of each feature and is given initial value according to prior knowledge. pos(ch) means the part-of-speech of word ch, Form(ch, orient) represents a new FWT combined by word ch and word or feature word term in direction of orient. orient has two values, left and right. res(w1, w2) = 1 means word or feature word term w1 and w2 have similar construction. IsTailFq (ch) represents the number of occurrences of a word ending with the word ch, and IsHeadFq (ch) represents the number of occurrences of a word beginning with the word ch; Fq (ch, orient) represents the number of occurrences of word or feature word term in the orient direction of the context dictionary for a single word ch. We update and expand Diction based on processing results of Step 3.
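A schematic of the weighted feature vote of Eqs. (1) to (6); the feature functions below are hypothetical stand-ins for the part-of-speech, dictionary and frequency lookups defined in the paper.

```python
# Decide whether a single character attaches left or right via a weighted feature vote.
def weight(ch, features, w=(0.3, 0.2, 0.2, 0.15, 0.15)):
    """features: five callables, each mapping ch to -1 (left), 0 (neutral) or +1 (right)."""
    return sum(wi * f(ch) for wi, f in zip(w, features))

def decide_attachment(ch, features):
    score = weight(ch, features)
    if score < 0:
        return "combine with the left word/FWT"
    if score > 0:
        return "combine with the right word/FWT"
    return "undecided"

# toy features standing in for pos(ch), Form/res lookups and frequency statistics
f1 = lambda ch: 1 if ch in {"村", "镇"} else 0
f2 = f3 = lambda ch: 0
f4 = lambda ch: 1
f5 = lambda ch: -1
print(decide_attachment("村", [f1, f2, f3, f4, f5]))
```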


Step 4: Iteration. According to the updated Diction, Step 3 is repeated; when the change rate of the Diction falls below the threshold, the iteration stops.

4 Experiment and Results The method proposed in this paper has a good effect on the recognition of feature word terms in named entities. We have done many experiments on named entities such as institution names, city names, and region names in Chinese. A total of 80,000 named entities were selected for the recognition of feature word terms, and the recognition accuracy rate was as high as 99.3%. Some experimental results are as follows in Table 1.

Table 1. Some experimental results.
Name entity → Result
Villagers' Committee of Bailong Village, Baima Town → Villagers' Committee, Bailong Village, Baima Town
Chaotian Palace Police Station → Chaotian Palace, Police Station
Hongxinglong Restaurant Co., Ltd. → Hongxinglong, Restaurant, Co., Ltd.
Bakers electric heating appliance factory → Bakers, electric heating, appliance factory

Acknowledgments. This research was supported by National Key Technology R&D Program (2016YFF0204205, 2018YFF0213901, 2017YFF0206503, 2017YFF0209004), and grants from China National Institute of Standardization (522019Y-6771, 522019Y-6781, 522019C-7044, 522019C-7045).


Research on the Sampling Procedures for Inspection by Variables Based on the Rate of Nonconforming Products
Jing Zhao¹, Jingjing Wang¹·², Gang Wu¹, Chao Zhao¹, Fan Zhang¹, and Haitao Wang¹
¹ China National Institute of Standardization, Beijing, China
² School of Mathematical Sciences, Capital Normal University, Beijing, China

Abstract. The sampling procedures for inspection by variables based on the rate of nonconforming products are discussed, under the assumption that the product quality characteristic follows a normal distribution. The sampling plan is determined mainly by searching over the declared quality level (DQL) and the limit quality ratio (LQR). This paper provides sampling plans for three LQR levels under the "σ" method (the variance of the normal distribution is known). For the single specification limit case, the sampling plan does not distinguish between the upper and lower limits and adopts the "k" type sampling plan (n, k); for the double specification limit case, the upper and lower limits must be handled at the same time, the "k" type plan is not suitable, and the p* sampling plan is used instead. The "k" type sampling plan (n, k) is determined mainly through nonlinear programming methods.
Keywords: Nonlinear programming · Declared quality level (DQL) · Limit quality ratio (LQR)

1 Introduction
International standards already exist for sampling procedures for inspection by variables based on the rate of nonconforming products, such as the sampling plans and evaluation procedures specified in ISO 3951-4 [1], which can be used to assess whether an overall (lot or process) quality level meets a claimed quality level. ISO 3951-4 primarily determines the sampling plan by searching over the claimed quality level (DQL) and the limit quality ratio (LQR). For the cases where the standard deviation is known and unknown, ISO 3951-4 provides corresponding sampling plans for three levels of the limit quality ratio (LQR). The sampling procedures for the nonconformance rate in ISO 3951-4 are designed so that, when the actual overall quality level of the inspected lot is equal to or better than the claimed quality level, the risk of finding nonconformance is controlled at approximately 5% or below; that is, the probability of making a type I error is at most 5%. Therefore, when the actual quality level is better than the claimed quality level, it is still possible to judge the lot as nonconforming. When


the actual quality level is LQR times the claimed quality level, the procedures have a risk of 10% of failing to contradict the declared quality level (corresponding to a 90% probability of contradicting the declared quality level). However, this does not necessarily apply to the quality supervision of our products. Based on the actual situation of product quality supervision in China, the random inspection supervision of product quality is slightly relaxed in this paper, and the sampling plan is mainly studied in which the probability of making the type II error is equal to 15% (equivalent to the probability of determining the sampling unqualified is 85%). At the same time, this paper uses the nonlinear programming method to determine the sampling plan (n, k).

2 Principle of Measurement Sampling Test
Take the one-sided upper specification limit U as an example (the lower specification limit L is treated in the same way). Assume that the product quality characteristic X \sim N(\mu, \sigma^2); \mu_0 and \mu_1 both denote means of the product quality, \mu_0 being the limit for a high-quality lot and \mu_1 the limit for an inferior lot. For the upper specification limit, we want to accept with a probability of not less than 1-\alpha when the overall mean \mu \le \mu_0, and with a probability of not more than \beta when \mu \ge \mu_1. That is, \mu_0 is the supplier quality level (PQL) and \alpha the supplier risk; \mu_1 is the buyer's quality level (CQL) and \beta the buyer's risk. For given \mu_0, \mu_1, \alpha, \beta, we seek a plan satisfying [2]:

P(\mu) \ge 1-\alpha \ \text{for}\ \mu \le \mu_0; \qquad P(\mu) \le \beta \ \text{for}\ \mu \ge \mu_1   (1)

where \mu_0 < \mu_1 and P(\mu) is the acceptance probability when the product quality mean is \mu. Here we use the sample mean \bar{x} as the acceptance statistic of the plan [3]. For the upper specification limit, the acceptance criterion of the scheme is \bar{x} \le k, and the sampling plan is (n, k). Figure 1 shows the acceptance criterion for the sampling plan (n, k).
Since X \sim N(\mu, \sigma^2), \bar{X} \sim N(\mu, \sigma^2/n). For the sampling plan (n, k), the probability of acceptance is

P(\mu) = P(\bar{X} \le k) = \Phi\!\left(\frac{k-\mu}{\sigma/\sqrt{n}}\right)   (2)

(2) is a decreasing function of \mu, and by the monotonicity of P(\mu), Eq. (1) is equivalent to

P(\mu_0) = 1-\alpha, \qquad P(\mu_1) = \beta \quad (\mu_1 > \mu_0)   (3)

Fig. 1. Acceptance criteria for the program (upper specification limit)

which is

\Phi\!\left(\frac{k-\mu_0}{\sigma/\sqrt{n}}\right) = 1-\alpha, \qquad \Phi\!\left(\frac{k-\mu_1}{\sigma/\sqrt{n}}\right) = \beta \quad (\mu_1 > \mu_0)   (4)

Solving Eqs. (4), we get

n = \left(\frac{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}{\mu_1-\mu_0}\right)^{2} \sigma^{2}, \qquad k = \frac{\mu_0 \Phi^{-1}(\beta)+\mu_1 \Phi^{-1}(\alpha)}{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}   (5)

The rate of nonconforming product relative to the upper specification limit U is

p = P(X > U) = P\!\left(\frac{X-\mu}{\sigma} > \frac{U-\mu}{\sigma}\right) = 1-\Phi\!\left(\frac{U-\mu}{\sigma}\right)   (6)

that is,

\mu = U + \sigma \Phi^{-1}(p)   (7)

When \sigma is known, since \Phi^{-1}(x) is strictly monotonic, p and \mu correspond one-to-one, i.e.

\mu_0 = U + \sigma \Phi^{-1}(p_0), \qquad \mu_1 = U + \sigma \Phi^{-1}(p_1)   (8)

where p_0 represents the quality of a conforming lot with the nonconforming rate as the quality index, and p_1 represents the limiting quality. In this way, formula (7) converts the problem from the nonconforming rate to the quality index. Substituting (8) into (5), we obtain

n = \left(\frac{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}{\Phi^{-1}(p_1)-\Phi^{-1}(p_0)}\right)^{2}, \qquad k = U + \frac{\Phi^{-1}(\alpha)\Phi^{-1}(p_1)+\Phi^{-1}(\beta)\Phi^{-1}(p_0)}{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}\,\sigma   (9)

Let

k = -\frac{\Phi^{-1}(\alpha)\Phi^{-1}(p_1)+\Phi^{-1}(\beta)\Phi^{-1}(p_0)}{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}   (10)

At this time the acceptance threshold is U - k\sigma. Let Q_U = (U-\bar{x})/\sigma, called the upper quality statistic. The acceptance criterion \bar{x} \le U - k\sigma of the "k" type sampling plan [4] can then be written equivalently as Q_U \ge k; that is, the acceptance criterion of the "k" type scheme (n, k) is Q_U \ge k. In this paper we use the sampling plan (n, k), i.e.

n = \left(\frac{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}{\Phi^{-1}(p_1)-\Phi^{-1}(p_0)}\right)^{2}, \qquad k = -\frac{\Phi^{-1}(\alpha)\Phi^{-1}(p_1)+\Phi^{-1}(\beta)\Phi^{-1}(p_0)}{\Phi^{-1}(\alpha)+\Phi^{-1}(\beta)}   (11)

Determination of n, k

Take k ¼ U  kr and (8) into Eq. (4), then we can obtain   pffiffiffi U  n k þ U1 ðp0 Þ ¼ 1  a  pffiffiffi U  n k þ U1 ðp1 Þ ¼ b

ð12Þ

226

J. Zhao et al.

Here, p0 is the claimed quality level (DQL), and p1 is the limit mass, which is the product of the claimed quality level (DQL) and the limit mass ratio (LQR). In this paper,  pffiffiffi we consider the case of U  n k þ U1 ðDQLÞ  1  a. (12) is converted into 8  pffiffiffi < U  n k þ U1 ðp0 Þ  1  a : Upffiffinffik þ U1 ðp Þ ¼ b 1

ð13Þ

First of all, we analyze the case of 7.6  LQR  14.1. Substituting the claimed quality level (DQL) and the limit quality ratio (LQR) into (13), we obtain  pffiffiffi U  n k þ U1 ðDQLÞ  1  a  pffiffiffi 1 > : U  n k þ U ðDQL  LQRÞ ¼ b 7:6  LQR  14:1 8 >
> > > < 1 k   UpffiffinðffibÞ  U1 ð7:6DQLÞ > > > > 1 : ffiffiffi Þ  U1 ðDQLÞ k   U pð1a n

ð15Þ

We mainly use the idea of nonlinear programming to determine the sampling plan (n, k), and we plot the range of values of the inequality. As shown in Fig. 2, the shaded portion is a range of values of n and k. When conducting a sample inspection of product quality, we want n and k to be smaller, so n takes the value 19 and k takes the value 3.236, which is the red dot in Fig. 2.

Fig. 2. Range of n and k

Research on the Sampling Procedures for Inspection by Variables

227

The Formula of p

3.2

On the one-sided upper specification limit U, when the overall l is unknown and the standard deviation r is known, we replace the l with the sample mean  x of sample size n, and the minimum Variance Unbiased Estimation (MVUE) [5] of p = P(X > U) is  ¼ xÞ ¼ 1  U ^ p ¼ PðX [ U jX

rffiffiffiffiffiffiffiffiffiffiffi  n U x n1 r

ð16Þ

Now we change the acceptance criterion to ^p  p , which is called the maximum allowable nonconforming rate. That is rffiffiffiffiffiffiffiffiffiffiffi  n U x 1U  p n1 r

ð17Þ

qffiffiffiffiffiffiffi  n  n1QU  1  p , i.e. QU  U1 ð1  p Þ. Therefore, k that is equivalent to the “k” type sampling plan is k ¼ U1 ð1  p Þ. Thereby we obtain 

Since QU ¼ Urx, (16) can be reduced to U

 rffiffiffiffiffiffiffiffiffiffiffi  n k p ¼U  n1 

ð18Þ

When the actual product quality involves a double-sided limit, we can use the p sampling plan. 3.3

Sampling Master Table Under Different Claimed Quality Levels (DQL)

The sampling plan (n, k) provided in this paper is indexed at the claimed quality level (DQL) and limit quality ratio (LQR) levels. The nonlinear programming can obtain the corresponding sampling plan under different DQLs at three levels. See Table 1 for details.

Table 1. Master table of sampling plans DQL in % nonconforming items 0.010 0.015 0.025 0.040 0.065 0.10 0.15 0.25

LQR Level I nr kr p 19 3.236 0.04426 18 3.119 0.06650 17 2.961 0.11361 16 2.813 0.18348 15 2.645 0.30923 14 2.498 0.47669 13 2.346 0.73072 11 2.158 1.18077

LQR Level II nr kr p 28 3.376 0.02931 27 3.263 0.04418 26 3.115 0.07448 25 2.968 0.12260 23 2.809 0.20386 21 2.673 0.30812 19 2.529 0.46843 18 2.339 0.80462

LQR Level III nr kr p 37 3.416 0.02670 35 3.313 0.03878 33 3.165 0.06543 30 3.017 0.10754 29 2.871 0.17400 27 2.723 0.27612 24 2.591 0.40637 21 2.409 0.67842 (continued)

228

J. Zhao et al. Table 1. (continued)

DQL in % nonconforming items

LQR Level nr kr 10 1.931 8 1.721 7 1.516 6 1.279 4 0.957 3 0.459 2 −0.601 – –

0.40 0.65 1.0 1.5 2.5 4.0 6.5 10

I 

p 2.09027 3.28969 5.07664 8.05959 13.45693 28.70042 80.23214 –

LQR Level II nr kr p 16 2.151 1.31571 14 1.949 2.15586 12 1.759 3.30890 10 1.577 4.82259 9 1.277 8.77943 7 0.931 15.73052 5 0.657 23.13073 3 0.147 42.85616

LQR Level III nr kr p 19 2.236 1.08016 17 2.029 1.82441 15 1.831 2.90286 12 1.651 4.23166 10 1.377 7.33224 8 1.089 12.21735 6 0.769 19.97828 4 0.316 35.75985

4 Calculation of Limit Quality Ratio (LQR) and Probability a of Falsely Contradicting a Correct DQL After determining the sampling plan (n, k), we can obtain the corresponding limit mass ratio (LQR) and false positive probability a. 4.1

The Formula of LQR and a

According to the principle of measurement and sampling test in Sect. 2, for the “r” method, (12) is 

 pffiffiffi U  n k þ U1 ðDQLÞ ¼ 1  a  pffiffiffi U  n k þ U1 ðDQL  LQRÞ ¼ b

ð19Þ

Thus the limit quality ratio (LQR) and the probability α of falsely contradicting a correct DQL are:

\[
\begin{cases}
\alpha=\Phi\left(\sqrt{n}\left(k+\Phi^{-1}(\mathrm{DQL})\right)\right)\\[6pt]
\mathrm{LQR}=\dfrac{1}{\mathrm{DQL}}\,\Phi\left(-k-\dfrac{\Phi^{-1}(\beta)}{\sqrt{n}}\right)
\end{cases}
\tag{20}
\]

4.2 The Limit Quality Ratio (LQR) and the Probability α of Falsely Contradicting a Correct DQL

Level I can be used when the required sample size is small. For the sampling plans of LQR level I, the limit quality ratios range from 7.6 to 14.1. For example, if the declared quality level is 1.0% nonconforming items, and the actual quality level is 12.2 times this claimed quality level (that is, the actual rate of nonconforming product is 12.2%), then the risk of failing to contradict the claimed quality level is 15% (see Table 2).
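The relations in (19)–(20) can be checked numerically. Below is a minimal sketch (not part of the original paper) that evaluates a candidate plan (n, k); it assumes only SciPy's standard normal distribution, and the example values are the Level I plan for DQL = 0.010% from Table 1.

```python
from scipy.stats import norm

def evaluate_plan(n, k, dql, beta=0.15):
    """Evaluate a variables sampling plan (n, k) under the sigma method.

    Returns the probability alpha of falsely contradicting a correct DQL and
    the realized limit quality ratio LQR, following the relations in (19)-(20).
    """
    # alpha = Phi(sqrt(n) * (k + Phi^-1(DQL)))
    alpha = norm.cdf(n ** 0.5 * (k + norm.ppf(dql)))
    # LQR = Phi(-k - Phi^-1(beta) / sqrt(n)) / DQL
    lqr = norm.cdf(-k - norm.ppf(beta) / n ** 0.5) / dql
    return alpha, lqr

# Level I plan for DQL = 0.010% nonconforming items (n = 19, k = 3.236)
alpha, lqr = evaluate_plan(19, 3.236, 0.010 / 100)
print(f"alpha = {alpha:.2%}, LQR = {lqr:.2f}")  # roughly 1.76% and 13.6, as in Table 2
```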


Table 2. Level I plans, LQR and probabilities of falsely contradicting a correct DQL

| DQL in % nonconforming items | nσ | kσ | LQR | Probability of falsely contradicting a correct DQL in % | p* |
|---|---|---|---|---|---|
| 0.010 | 19 | 3.236 | 13.57782 | 1.76274 | 0.04426 |
| 0.015 | 18 | 3.119 | 13.47995 | 1.76186 | 0.06650 |
| 0.025 | 17 | 2.961 | 13.47174 | 1.60561 | 0.11361 |
| 0.040 | 16 | 2.813 | 13.31580 | 1.54181 | 0.18348 |
| 0.065 | 15 | 2.645 | 13.41189 | 1.35042 | 0.30923 |
| 0.10 | 14 | 2.498 | 13.17543 | 1.33481 | 0.47669 |
| 0.15 | 13 | 2.346 | 13.17928 | 1.24901 | 0.73072 |
| 0.25 | 11 | 2.158 | 12.99286 | 1.56750 | 1.18077 |
| 0.40 | 10 | 1.931 | 13.60991 | 1.12972 | 2.09027 |
| 0.65 | 8 | 1.721 | 13.50431 | 1.54858 | 3.28969 |
| 1.0 | 7 | 1.516 | 13.04503 | 1.60173 | 5.07664 |
| 1.5 | 6 | 1.279 | 13.06885 | 1.45283 | 8.05959 |
| 2.5 | 4 | 0.957 | 13.21637 | 2.24320 | 13.45693 |
| 4.0 | 3 | 0.459 | 13.88568 | 1.26345 | 28.70042 |
| 6.5 | 2 | −0.601 | 13.98272 | 0.13894 | 80.23214 |
| 10 | – | – | – | – | – |

Level II is the standard level that shall be used unless specified conditions warrant the use of another level. For the level II sampling plans, the limit quality ratios range from 5.34 to 7.48. For example, if the declared quality level of the nonconforming product is 0.1% nonconforming items, and the actual quality level is 7.21 times this claimed quality level (that is, the actual rate of nonconforming product is 0.721%), then the risk is 15% of failing to contradict the claimed quality level (see Table 3).

Table 3. Level II plans, LQR and probabilities of falsely contradicting a correct DQL

| DQL in % nonconforming items | nσ | kσ | LQR | Probability of falsely contradicting a correct DQL in % | p* |
|---|---|---|---|---|---|
| 0.010 | 28 | 3.376 | 7.36039 | 3.47564 | 0.02931 |
| 0.015 | 27 | 3.263 | 7.29121 | 3.35798 | 0.04418 |
| 0.025 | 26 | 3.115 | 7.18846 | 3.10908 | 0.07448 |
| 0.040 | 25 | 2.968 | 7.20941 | 2.71789 | 0.12260 |
| 0.065 | 23 | 2.809 | 7.32104 | 2.54806 | 0.20386 |
| 0.10 | 21 | 2.673 | 7.20591 | 2.79382 | 0.30812 |
| 0.15 | 19 | 2.529 | 7.31678 | 2.79115 | 0.46843 |
| 0.25 | 18 | 2.339 | 7.23935 | 2.35334 | 0.80462 |
| 0.40 | 16 | 2.151 | 7.31318 | 2.25201 | 1.31571 |
| 0.65 | 14 | 1.949 | 7.27108 | 2.27003 | 2.15586 |
| 1.0 | 12 | 1.759 | 7.21715 | 2.46868 | 3.30890 |
| 1.5 | 10 | 1.577 | 7.05244 | 3.03609 | 4.82259 |
| 2.5 | 9 | 1.277 | 7.03167 | 2.02363 | 8.77943 |
| 4.0 | 7 | 0.931 | 7.37130 | 1.50533 | 15.73052 |
| 6.5 | 5 | 0.657 | 6.51210 | 2.76483 | 23.13073 |
| 10 | 3 | 0.147 | 6.74144 | 2.47013 | 42.85616 |

Level III is suitable for situations where the LQR is required to be small and at the expense of a larger sample size. For the level III sampling plan, the limit quality ratios range from 4.72 to 5.97. For example, if the declared quality level of the nonconforming product is 0.1% nonconforming items, and the actual quality level is 5.81 times this claimed quality level (that is, the actual rate of nonconforming product is 0.581%), then the risk is 15% of failing to contradict the claimed quality level (see Table 4).

Table 4. Level III plans, LQR and probabilities of falsely contradicting a correct DQL

| DQL in % nonconforming items | nσ | kσ | LQR | Probability of falsely contradicting a correct DQL in % | p* |
|---|---|---|---|---|---|
| 0.010 | 37 | 3.416 | 5.85994 | 3.26516 | 0.02670 |
| 0.015 | 35 | 3.313 | 5.67382 | 3.68533 | 0.03878 |
| 0.025 | 33 | 3.165 | 5.67936 | 3.48479 | 0.06543 |
| 0.040 | 30 | 3.017 | 5.85911 | 3.29411 | 0.10754 |
| 0.065 | 29 | 2.871 | 5.68801 | 3.16006 | 0.17400 |
| 0.10 | 27 | 2.723 | 5.80902 | 2.81830 | 0.27612 |
| 0.15 | 24 | 2.591 | 5.77967 | 3.24729 | 0.40637 |
| 0.25 | 21 | 2.409 | 5.80964 | 3.40745 | 0.67842 |
| 0.40 | 19 | 2.236 | 5.71152 | 3.48692 | 1.08016 |
| 0.65 | 17 | 2.029 | 5.80500 | 3.03921 | 1.82441 |
| 1.0 | 15 | 1.831 | 5.89800 | 2.75255 | 2.90286 |
| 1.5 | 12 | 1.651 | 5.88123 | 3.60741 | 4.23166 |
| 2.5 | 10 | 1.377 | 5.88125 | 3.26285 | 7.33224 |
| 4.0 | 8 | 1.089 | 5.87434 | 3.06358 | 12.21735 |
| 6.5 | 6 | 0.769 | 5.61104 | 3.39915 | 19.97828 |
| 10 | 4 | 0.316 | 5.80126 | 2.67352 | 35.75985 |


5 Conclusion

In this paper, by controlling a class of type II errors, a sampling supervision program for bank teller service quality is formulated. Here, the probability α of a type I error is less than or equal to 5%, and the probability β of a type II error is equal to 15%. Under LQR level III, the sample size determined in this paper is 8, which is relatively small. To a certain extent, this reduces the intensity of spot checks required of the relevant supervision departments. At the same time, the probability of falsely judging a qualified product as non-conforming is only 3.06358%, so the risk of misjudgment is low, which makes this a satisfactory sampling plan.

Acknowledgments. This research was supported by National Key Technology R&D Program (2017YFF0206503, 2017YFF0209004, 2016YFF0204205) and China National Institute of Standardization through the "special funds for the basic R&D undertakings by welfare research institutions" (522018Y-5941, 522018Y-5948, 522019Y-6781).

References
1. International Organization for Standardization: ISO 3951-4. Sampling procedures for inspection by variables - Part 4: Procedures for assessment of declared quality levels. International Organization for Standardization (2011)
2. Zhang, W., Qin, S., Han, Z., Feng, X.: Quality Control. Science Press, Beijing (1988)
3. Yang, G., Liu, D.: Study on some basic problems in parameter hypothesis testing. Stat. Decis. 24, 13–15 (2012)
4. Shanqi, Yu.: Introduction to Statistical Methods. Beijing University of Technology Press, Beijing (2014)
5. Wang, H.: The relationship between bank service quality and customer satisfaction. J. Sun Yat-sen University (Soc. Sci. Ed.) 6, 107–108 (2006)

Artificial Intelligence Applications

Deepfakes for the Good: A Beneficial Application of Contentious Artificial Intelligence Technology Nicholas Caporusso(&) Department of Computer Science, Northern Kentucky University, Louie B Nunn Dr, Highland Heights 41099, USA [email protected]

Abstract. Deepfake algorithms are one of the most recent albeit controversial developments in Artificial Intelligence, because they use Machine Learning to generate fake yet realistic content (e.g., images, videos, audio, and text) based on an input dataset. For instance, they can accurately superimpose the face of an individual over the body of an actor in a destination video (i.e., face swap), or exactly reproduce the voice of a person and speak a given text. As a result, many are concerned with the potential risks in terms of cybersecurity. Although most work has focused on the malicious applications of this technology, in this paper we propose a system for using deepfakes for beneficial purposes. We describe the potential use and benefits of our proposal and we discuss its implications in terms of human factors, security risks, and ethical aspects.

Keywords: Machine Learning · Deepfakes · Cybersecurity · Digital Twin

1 Introduction In the last decade, research in the field of Artificial Intelligence (AI) advanced at an unprecedented pace. In addition to the availability of more powerful hardware resources, novel approaches to the development of Neural Networks (NN) resulted in very efficient Machine Learning (ML) toolkits and frameworks that are effectively being used in applications in several different fields, from healthcare to business, from security to education [1, 2]. Also, meta-languages enable using NNs and other algorithms with very little programming knowledge [3]. Moreover, easier and affordable access to distributed computational power enables implementing sophisticated Deep Learning (DL) algorithms that would not otherwise execute on a single computer. As multiple Graphics Processing Units working in parallel make it possible to obtain results faster and ML architectures become more mature, a new era is being introduced in the field of AI: in addition to recognizing and clustering existing information, nowadays software can generate new content (e.g., text, images, and video) that perfectly mimics an input dataset. For instance, given a gallery of head shots of different people, Generative Adversarial Networks (GAN) can learn the features of a human face and create a series of fake pictures of individuals who are not real, which studies found to be realistic enough for them not to be identified as fake [4]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Ahram (Ed.): AHFE 2020, AISC 1213, pp. 235–241, 2021. https://doi.org/10.1007/978-3-030-51328-3_33


Using a similar principle, deepfake algorithms can be used to learn the features of two different individuals from a source and destination datasets of head shots and use them to output images that combine the face of the individual in the source and the expressions of the individual in the destination dataset (i.e., face swap). In addition to standard face swap techniques based on landmark detection, accurately mimicking facial expressions and movements renders the resulting material very real to the casual viewer. As a result, there is a rising concern with the risks associated with the increasing quality and level of realism of the content created with this technology. Particularly, several research groups highlighted the potential danger resulting from malicious uses of deepfakes in terms of making it difficult to confirm the authenticity of information. The authors of [5] detailed the extent of the problem and described potential countermeasures, though the novelty of this technology itself makes it difficult to predict its future directions, threats, and impact. For instance, deepfakes might facilitate cyberattacks that leverage biometric traits, such as facial features, voice, or writing style, for impersonation, identity theft, and revenge cybercrime.

2 Related Work Given the potential of their harmful applications, deepfakes are described by many as an example that demonstrates that some aspects of ML are becoming a looming challenge for privacy, democracy, and national security. In [6], the authors analyze the current social, technological, political, economic, and legislative scenario and discuss the implications in terms of digital impersonation as forged content becomes increasingly realistic and convincing. Several groups are especially concerned with images, videos, and speeches featuring political leaders and prominent figures [7], because they could be utilized to create fake news [8] aimed at initiating national scandals or international crises. Particularly, images and videos raise the most concern, because the widespread use of social networks exposes entire datasets, current image processing algorithms have greater performances with images than with audio, and videos result in major impact in a society in which communication is increasingly visual. For instance, the circulation of examples of deepfakes that involve sexual activity demonstrate that this technology could open the door to new and more aggressive types of bullying, revenge porn, and blackmailing [9]. Consequently, most research in the field is focusing on solutions that can detect and flag fake videos. To this end, several approaches can be utilized, such as evaluating the authenticity of images using algorithms that work at the pixel level to identify discrepancies that are not visible to the human viewer [5], or introducing digital fingerprints and watermarks in the source material that prevent deepfake algorithms from learning and using the facial features of the individual. Nevertheless, the recent scientific literature started taking into consideration the potential benefits of this technology: the authors of [6, 10] present examples that can be applied to improve education and to deliver a more personalized learning experience by creating instructional material that features characters students are more familiar with. Given the novelty of deepfake technology it is especially relevant to keep suggesting novel ideas that highlight the beneficial aspects for its use while the debate about its future directions is still ongoing.


3 A Beneficial Application of Deepfakes In this paper, we propose an application that leverages the power of deepfake algorithms to extract an accurate model of an individual and generate new content especially designed for benign use. Specifically, in our work, we aim at using this technology to create an interactive Digital Twin of a subject that can serve as a replacement for in-person or virtual presence in the context of Cyber-Physical systems. The purpose of the proposed application is to provide users with easy-to-use tools that enable them to produce their own digital replica for future use, so that it can be featured in re-enactments, interactive stories, memorials, and simulations. For instance, deepfakes could be utilized to reinforce or surrogate an individual’s physical and synchronous presence. This is especially useful in distance relationships: in contrast to conferencing tools or recorded footage, the use of video templates enables producing material without requiring the user to actually be in it. This can be particularly beneficial for families in which members are remote, as it could be utilized for creating digital storytelling books for children where grandparents are the readers. Moreover, deepfakes could be utilized to generate content for obituaries, to celebrate an individual who passed away and help their milieu cope with the loss of their beloved one. Also, deepfake videos could enable interaction with prominent contemporary or historical figures (e.g., scientists, politicians, and artists) to consolidate their legacies and keep them interactive in the form of their digital replicas. To this end, we suggest a modification to the standard architecture of deepfake algorithms to enhance their features and make them available to individual users. Moreover, we propose an extension of deepfakes that supports generating videos programmatically, so that individuals can produce content on demand based on their media archive and on the desired type of output. The proposed application consists in (1) a material acquisition system, (2) a content processing component, (3) a deepfake generator that comprises (4) a system for programmatically producing output videos. The material acquisition system has the purpose of enabling the user to add their images and videos: this can be realized either with a web-based interface where preexisting footage and source files can be uploaded or with a dedicated app that with audio and video recording features especially designed to collect an input dataset with given specifications; also, the content acquisition component can be integrated with third-party systems (e.g., social networks), so that the material can be automatically imported from an external archive via a set of Application Programming Interfaces. The advantage of a dedicated app is in the possibility of prompting the user to capture input videos and images based on specific requirements in terms of light conditions, subject posture, facial expressions, and content. Conversely, importing content from external systems does not involve any additional overhead for the user, though it might produce a sparse and inaccurate dataset. Indeed, these acquisition techniques can be mixed to obtain improved results. One of the key tasks of the material acquisition process consists in categorizing the acquired videos along a timeline, to support generating multiple models that represent the subject in different stages of life. 
The content processing component and the deepfake generator are the subsystems that wrap the encoder and the decoder, respectively.


The former has the purpose of extracting a model from the features of the subject based on the source and destination material. To this end, it realizes the following preliminary steps:

1. frame extraction, that is, separating the video into a sequence of images;
2. face detection, image cropping and rescaling, in order to obtain clear head shots of the subject; in addition, this can involve algorithms for improving the quality of the output, such as background removal;
3. image filtering, correction, and evaluation, which analyzes the source and classifies it into categories and descriptors that are associated with its ambient color, pose, angle, blurriness, similarity with other images.

The latter step is especially important for obtaining a dataset having adequate quantity of quality material labeled appropriately (e.g., discarding images that have high similarity). By doing so, the system can create multiple models of the subject in different stages of life and environment conditions. This, in turn, supports realizing a more accurate matchmaking with the destination videos based on their similarity in terms of aspects, such as lighting, ethnicity, and body and face shape. Subsequently, the content processing component utilizes the dataset to train the network and updates the model. Finally, the third component of the proposed application enables the user to generate and output a deepfake by selecting a video among the templates available in the library. To this end, the decoder in the generator can swap the latent faces of the source and destination models. Alternatively, it can use generative techniques, such as GANs, to programmatically create a new scene based on a configuration provided by the user (Fig. 1).
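As an illustration only (not part of the proposed system's actual implementation), the sketch below shows how the first two preliminary steps could look, assuming OpenCV and its bundled Haar cascade face detector; the video file name is a placeholder.

```python
import cv2

def extract_face_crops(video_path, size=256, step=5):
    """Sketch of steps 1-2: sample frames from a video and return cropped, rescaled head shots."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    crops, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:  # frame extraction: keep every `step`-th frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                # face detection, cropping and rescaling to a fixed size
                crops.append(cv2.resize(frame[y:y + h, x:x + w], (size, size)))
        index += 1
    capture.release()
    return crops

# crops = extract_face_crops("source_footage.mp4")  # hypothetical input file
```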

Fig. 1. The components of the proposed application and the deepfake production workflow.

4 Security, Ethical, Legal, and Human Factors As the proposed application is deliberately designed to enable users to facilitate creating fake content in the form of images, videos, and videos, several aspects have to be taken into consideration in regard to the potential risks associated with its use. From a


cybersecurity standpoint, the materials produced by the proposed application could be utilized by an attacker for malicious purposes. In this regard, the concern is three-fold and related to (1) the input content (i.e., the source material acquired from the subject), (2) the system itself, including its processes, and (3) the deepfakes generated by the system as an output. As for the former point, any archives of personal information, and especially image galleries, involve the risk of breaches resulting in the potential misuse of the leaked material for perpetrating crimes, such as impersonation and forgery. However, nowadays individuals are accustomed with sharing their images and videos on Social Network websites with acquaintances and with the larger public. On the contrary, the proposed system is not designed as a distribution platform: for security reasons it operates as a safe box that keeps the source material private after it is recorded and uploaded by the user. In compliance with privacy and data protection regulations, the owner could be provided with the option of downloading their personal information, which could be protected by a secondary password to prevent account breaches from causing leaks of videos and images. Furthermore, the system could use predefined scripts to record interview-like sessions, as this would increase the quality and quantity of the source dataset by eliciting different facial expressions. Simultaneously, answering to pre-defined questions could prevent subjects from sharing sensitive information. Moreover, additional measures can be taken by the system to protect the visibility of the content collected from users. To this end, the source videos, images, audio, and text could be deleted or stored in encrypted archives after the model extraction algorithm has extracted and learned user’s features. Secondly, the system could restrict destination videos by forcing the use of templates that are reviewed and approved by an editorial board. By doing this, in case of an account breach, the attacker has limited freedom with respect to the content of the deepfakes. The third risk element is represented by the content of the output itself: in the absence of original images and videos of the victim (or in case they are insufficient to train a model), an attacker could utilize the material produced by the proposed system and cut and assemble its parts to produce the desired message. Alternatively, output deepfakes could be exploited as a source and fed into a Deep Learning algorithm using a destination video intentionally created for malicious purposes (e.g., revenge porn or impersonation). To this end, the system could apply visible watermarks or digital fingerprints to the resulting deepfake to mark it, affect image extraction, or apply a digital tracker to any secondary material. Moreover, several ethical aspects have to be taken into consideration with specific regard to the use of scenes and videos featuring individuals who have passed away: on the one hand, the proposed system could help the milieu cope with the loss and keep alive the memory of the beloved one, which might have a positive impact on their lives and on their overall psychological well-being; on the other hand, it might prevent full detachment after a loss and, thus, cause additional discomfort and trigger more serious mental health dynamics. 
Furthermore, legal implications of the proposed system include issues related to the copyright of the input and output material, including enabling access to the system and transferring ownership, which are especially critical in case of individuals who are deceased or when dealing with videos of prominent figures.


5 Conclusion In this paper, we primarily considered the positive aspects of deepfake technology with the objective of highlighting its potential benefits. To this end, we described an application that could be utilized to collect material from individuals at different stages of their lives and feed it into a ML system to obtain their interactive models in the form of Digital Twins. These, in turn, can be utilized to generate deepfakes for a variety of purposes, such as producing re-enactments that might help recover memories or cope with a loss, rendering scenes that contain more realistic, image-based avatars than their three-dimensional counterparts, and serving movies, commercials, and shows with custom characters chosen by the user. Moreover, we highlighted some of the crucial security, ethical, and human factors involved in the production and use of deepfakes. In addition to presenting a new system that supports the argument for a positive perspective on this contentious technology, our paper primarily aimed at fostering the scientific debate on deepfakes and stimulating new ideas as well as critics. Despite having beneficial objectives and several advantages, the application presented in our paper might also result in users’ discomfort and harmful psychological dynamics that we will evaluate in a follow-up work, in which we will also detail the result of further research that evaluates whether the benefits overcome the potential risks.

References 1. Bevilacqua, V., Carnimeo, L., Brunetti, A., De Pace, A., Galeandro, P., Trotta, G.F., Caporusso, N., Marino, F., Alberotanza, V., Scardapane, A.: Synthesis of a neural network classifier for hepatocellular carcinoma grading based on triphasic ct images. In: International Conference on Recent Trends in Image Processing and Pattern Recognition, pp. 356–368. Springer, Singapore, December 2016 2. Bevilacqua, V., Trotta, G.F., Brunetti, A., Caporusso, N., Loconsole, C., Cascarano, G.D., Catino, F., Cozzoli, P., Delfine, G., Mastronardi, A., Di Candia, A.: A comprehensive approach for physical rehabilitation assessment in multiple sclerosis patients based on gait analysis. In: International Conference on Applied Human Factors and Ergonomics, pp. 119– 128. Springer, Cham, July 2017 3. Caporusso, N., Helms, T., Zhang, P.: A meta-language approach for machine learning. In: International Conference on Applied Human Factors and Ergonomics, pp. 192–201. Springer, Cham, July 2019 4. Caporusso, N., Zhang, K., Carlson, G., Jachetta, D., Patchin, D., Romeiser, S., Vaughn, N., Walters, A.: User discrimination of content produced by generative adversarial networks. In: International Conference on Human Interaction and Emerging Technologies, pp. 725–730. Springer, Cham, August 2019 5. Maras, M.H., Alexandrou, A.: Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. Int. J. Evid. Proof 23(3), 255–262 (2019) 6. Chesney, R., Citron, D.K.: Deep fakes: a looming challenge for privacy, democracy, and national security (2018)


7. Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K., Li, H.: Protecting world leaders against deep fakes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 38–45, June 2019 8. Ajao, O., Bhowmik, D., Zargari, S.: Fake news identification on twitter with hybrid CNN and RNN models. In: Proceedings of the 9th International Conference on Social Media and Society, pp. 226–230, July 2018 9. Harris, D.: Deepfakes: false pornography is here and the law cannot protect you. Duke L. Tech. Rev. 17, 99 (2018) 10. Silbey, J., Hartzog, W.: The upside of deep fakes. Md. L. Rev. 78, 960 (2018)

Research on Somatotype Recognition Method Based on Euclidean Distance Tong Yao1, Li Pan1(&), Jun Wang1, and Chong Yao2 1

2

School of Fashion, Dalian Polytechnic University, Qinggongyuan, Ganjingzi District, Dalian, Liaoning, China [email protected] Air Force Harbin Flight Academy, Nantong Street, Nangang District, Harbin, Heilongjiang, China

Abstract. In order to replace the body shape of customers in virtual fitting, 829 valid female body size in China were obtained through three-dimensional body scanning. Through factor analysis, several parts that have great influence on the body shape were obtained: body height, bust girth, waist girth, buttock girth. According to Chinese national standard for clothing size, the human body is divided into four body types: Y, A, B and C, by the difference between bust girth and waist girth. On this basis, the body types are subdivided according to the clothing size. Taking the women whose body height is 160 cm (157  body height < 163 cm) as an example, nine female alternate models are extracted by Euclidean distance. After that, the Euclidean distance is used to carry out intelligent somatotype recognition. The verification experiment shows that the method has better recognition ability and can realize the replacement of human body and intelligent somatotype recognition in virtual fitting. Keywords: Body type classification

· Somatotype · Euclidean distance

1 Introduction Body type classification is a basic problem in the study of clothing size. Intelligent identification of body type has also been a hot and difficult point in the field of virtual fitting and intelligent evaluation of body type. The indexes of body type classification generally include: girth difference method, characteristic index method, side shape subdivision method, etc. [1, 2]. The methods of body type classification are: clustering analysis, support vector method and neural network analysis [3, 4]. Algorithms for somatotype recognition include regression model algorithm, random forest algorithm [5], support vector algorithm [6], Fisher algorithm, Naive Bayesian algorithm, etc. [7]. Through factor analysis, this paper obtains several body parts that have the greatest impact on body type classification, then classification of body types by bust girth and waist girth differences, and obtains the standard size, and uses Euclidean distance to find the alternate models, so as to realize the intelligent evaluation of human body and the replacement of human body model.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Ahram (Ed.): AHFE 2020, AISC 1213, pp. 242–249, 2021. https://doi.org/10.1007/978-3-030-51328-3_34

1.1 Experimental Method

Measuring objects: non-contact measurement with the Human Solution 3D body scanner was used to obtain the body dimensions. Young women aged 18–25 were selected from China. They were primarily recruited from colleges, universities, offices, factories and companies. Measured body parts: body height, neck height, waist height, arm length, neck at base girth, bust girth, waist girth, buttock girth, high hip girth, underbust circumference and shoulder width; a total of 11 items of data participated in the analysis.

2 The Method of Somatotype Study

2.1 Factor Analysis

800 groups of data were randomly selected for principal component factor analysis to explore the main factors affecting the body shape characteristics [8]. According to the variance contribution rate and cumulative contribution rate of each component, the characteristic roots of the first two principal components are all greater than 1, and the cumulative variance contribution rate of the first two components is 75.02%. As shown in the principal component analysis scree plot in Fig. 1, two characteristic factors can be extracted.

Fig. 1. Principal component analysis scree plot

Variables whose absolute component loading is greater than 0.5 were taken as the index factors covered by a component. The factors included in the first principal component (Table 1) are therefore: high hip girth, buttock girth, underbust circumference, bust girth, waist girth and shoulder width. Its variance contribution rate is 46.04%, and the first principal component comprehensively represents the lateral girths of the human body. The second principal component includes: waist height, neck height, body height and arm length; its variance contribution rate is 28.99%, and the second principal component comprehensively represents the longitudinal lengths of the human body.

Table 1. Component matrix

| Variable name | Component 1 | Component 2 |
|---|---|---|
| High hip girth | 0.915 | −0.269 |
| Buttock girth | 0.887 | −0.182 |
| Underbust circumference | 0.879 | −0.300 |
| Bust girth | 0.876 | −0.306 |
| Waist girth | 0.856 | −0.380 |
| Shoulder width | 0.550 | −0.171 |
| Neck at base girth | 0.353 | −0.129 |
| Waist height | 0.387 | 0.886 |
| Neck height | 0.460 | 0.863 |
| Body height | 0.458 | 0.861 |
| Arm length | 0.410 | 0.661 |
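A minimal sketch of the loading-based variable selection described above, assuming scikit-learn; `measurements` and `names` are hypothetical stand-ins for the 800 × 11 matrix of body dimensions and its column names, which are not part of the original paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def component_loadings(measurements, n_components=2):
    """Return the factor loadings of the first principal components."""
    scaled = StandardScaler().fit_transform(measurements)
    pca = PCA(n_components=n_components).fit(scaled)
    # loadings = eigenvectors scaled by the square roots of the eigenvalues
    return pca.components_.T * np.sqrt(pca.explained_variance_)

# Variables whose absolute loading on a component exceeds 0.5 are kept as index factors:
# loadings = component_loadings(measurements)
# selected = [name for name, load in zip(names, loadings[:, 0]) if abs(load) > 0.5]
```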

We will apply the results of this article in somatotype recognition software that is closely related to virtual fitting and clothing sizing; considering this application, measurement must be convenient for the customer. Therefore, three characteristic variables (bust girth, waist girth and buttock girth) are selected from the first-category factors of the principal component analysis, and body height, the most commonly used measure, is selected as the characteristic variable from the second category of factors.

2.2 Select Alternate Models Size

In order to better align with enterprise application, the body type classification adopted in this paper is still that of the Chinese national clothing size standard (GB/T 1335–1997). Body height is grouped in 5 cm intervals, and "type" refers to the net bust girth minus the waist girth. According to this difference, the human body is divided into four types, Y, A, B and C, as shown in Table 2 [9]. The 160 cm body height group covers 157 cm ≤ body height < 163 cm [10].

Table 2. Chinese female body type classification (unit: cm)

| Name of body type | Y | A | B | C |
|---|---|---|---|---|
| Bust girth − waist girth | 24–19 | 18–14 | 13–9 | 8–4 |
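A minimal sketch of the girth-difference classification rule in Table 2; the boundaries are taken directly from the table, and differences outside the Y–C range simply return None.

```python
def classify_body_type(bust_girth, waist_girth):
    """Return the GB/T 1335 body type (Y, A, B or C) from the bust-waist difference in cm."""
    difference = bust_girth - waist_girth
    for body_type, low, high in (("Y", 19, 24), ("A", 14, 18), ("B", 9, 13), ("C", 4, 8)):
        if low <= difference <= high:
            return body_type
    return None  # outside the four standard types

print(classify_body_type(88, 68))  # difference of 20 cm -> "Y"
```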

Taking the body height of 160 cm as an example, we extracted nine standard sizes and then found the alternate models. In the order of bust girth - waist girth - buttock girth, they are: 84-64-84, 84-64-88, 84-68-88, 84-68-90, 88-68-90, 88-68-94, 88-72-94, 88-72-98, 92-72-98 (unit: cm).


3 Intelligent Somatotype Recognition Based on Euclidean Distance

The essence of intelligent somatotype recognition is to find the human body that is most similar to the target body shape, which involves the recognition of body shape similarity. Due to the complex structure of the human body and the high dimension of shape space, the recognition of human body similarity has its particularity [11]. Its core is the problem of multi-dimensional data similarity. The common methods to solve this kind of problem include: Euclidean distance, Cosine distance, Pearson correlation coefficient, Tanimoto coefficient, Bayesian algorithm, Vague value (set) similarity and other methods [12]. The Euclidean metric (also known as Euclidean distance) is a commonly used distance definition, which refers to the real distance between two points in m-dimensional space, or the natural length of a vector (i.e. the distance from the point to the origin). Euclidean distance in two-dimensional and three-dimensional space is the actual distance between two points. Euclidean distance is simple and easy to understand, so this paper intends to select this method to recognise the body shape, and whether this method is effective will be verified in the fourth part of this paper.

3.1 Extracting Alternate Models Based on Euclidean Distance

The first step of somatotype recognition is to take the sizes mentioned in Sect. 2.2 into consideration and then select alternate standard human bodies from the 800 groups of human body data. At this time, let the target standard size be X, with xi each element in X, representing the body height, bust girth, waist girth and buttock girth of the standard size, xi ∈ X. The alternate model is Y, with yi each element in Y, representing the body height, bust girth, waist girth and buttock girth of the alternate model, yi ∈ Y. Therefore, the similarity between the standard size and the alternate model by Euclidean distance is Eq. (1):

\[
f(x,y)=\sqrt{\sum_{i=1}^{n}\left(x_i-y_i\right)^{2}}
\tag{1}
\]

After calculating the Euclidean distances, the obtained alternate models are listed in Table 3. From the values of |X−Y| and the Euclidean distance in the table, it can be seen that the greater the differences, the greater the Euclidean distance; in other words, the lower the similarity between the two body types (Table 3).

Table 3. Comparison of target body size and substitute standard body

| No. | X: body height - bust girth - waist girth - buttock girth (cm) | Y: body height - bust girth - waist girth - buttock girth (cm) | ∣X−Y∣ (cm) | Euclidean distance |
|---|---|---|---|---|
| 1 | 160-84-64-84 | 160.8-84.8-63.1-84.3 | 0.8-0.8-0.9-0.3 | 1.48 |
| 2 | 160-84-64-88 | 158.8-84.3-64.7-89.1 | 1.2-0.3-0.7-1.1 | 1.80 |
| 3 | 160-84-68-88 | 159.4-84.8-68.4-87.9 | 0.6-0.8-0.4-0.1 | 1.08 |
| 4 | 160-84-68-90 | 160.2-82.7-67.5-90.6 | 0.2-1.3-0.5-0.6 | 1.53 |
| 5 | 160-88-68-90 | 160.2-88-68.8-89.8 | 0.2-0-0.8-1.2 | 0.85 |
| 6 | 160-88-68-94 | 160.9-87.4-68.8-94 | 0.9-0.6-0.8-0 | 1.35 |
| 7 | 160-88-72-94 | 160.9-87.2-72.5-94.3 | 0.9-0.8-0.5-0.3 | 1.34 |
| 8 | 160-88-72-98 | 158.7-89.4-73.4-97.9 | 1.3-1.4-1.4-0.1 | 2.37 |
| 9 | 160-92-72-98 | 159.5-90.2-72.2-99.7 | 0.5-1.8-0.2-1.7 | 2.53 |

3.2 Customer Somatotype Recognition Based on Euclidean Distance

The second step of somatotype recognition is to compare the customer's body shape with the nine alternate models in the system. At this time, let the customer's body size be M, with mi each element of M, representing the body height, bust girth, waist girth and buttock girth of the customer, mi ∈ M. The alternate model is still Y. The expression for the Euclidean distance is Eq. (2):

\[
f(m,y)=\sqrt{\sum_{i=1}^{n}\left(m_i-y_i\right)^{2}}
\tag{2}
\]
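A minimal sketch of the matching step in (1)–(2), assuming NumPy; the alternate-model measurements are the nine scanned bodies listed in Table 3, and the query values are customer a's measurements from Table 4.

```python
import numpy as np

# The nine alternate models for the 160 cm height group:
# (body height, bust girth, waist girth, buttock girth), unit: cm
alternate_models = np.array([
    [160.8, 84.8, 63.1, 84.3], [158.8, 84.3, 64.7, 89.1], [159.4, 84.8, 68.4, 87.9],
    [160.2, 82.7, 67.5, 90.6], [160.2, 88.0, 68.8, 89.8], [160.9, 87.4, 68.8, 94.0],
    [160.9, 87.2, 72.5, 94.3], [158.7, 89.4, 73.4, 97.9], [159.5, 90.2, 72.2, 99.7],
])

def nearest_model(customer):
    """Return the index and distance of the alternate model closest to the customer, per Eq. (2)."""
    distances = np.linalg.norm(alternate_models - np.asarray(customer), axis=1)
    best = int(np.argmin(distances))
    return best, distances[best]

# Customer a from Table 4
index, distance = nearest_model([160.9, 87.6, 72.8, 94.0])
print(index, round(distance, 2))  # index 6 (model 160.9-87.2-72.5-94.3), distance about 0.58
```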

The bodies of three customers, a, b and c, are randomly selected for somatotype recognition. Using the Euclidean distance minimization principle, the results are shown in Table 4:

Table 4. Results of customers' somatotype recognition

| No. | M: body height - bust girth - waist girth - buttock girth (cm) | Y: body height - bust girth - waist girth - buttock girth (cm) | ∣M−Y∣ (cm) | Euclidean distance |
|---|---|---|---|---|
| a | 160.9-87.6-72.8-94 | 160.9-87.2-72.5-94.3 | 0-0.4-0.3-0.3 | 0.58 |
| b | 158.8-89-75.1-96.3 | 158.7-89.4-73.4-97.9 | 0.1-0.4-1.7-1.6 | 2.37 |
| c | 158.7-84.1-67.5-86.6 | 159.4-84.8-68.4-87.9 | 0.7-0.7-0.9-1.3 | 1.87 |

4 Validation

In order to verify the validity of applying Euclidean distance to somatotype recognition, this paper will verify it from two angles of objective numerical value and subjective image observation.

4.1 Objective Validation

This article compares the customer’s body shape with the alternate models, the comparison parts are the 11 measurement parts mentioned above, respectively: body height, neck height, waist height, arm length, neck at base girth, bust girth, waist girth, buttock girth, high hip girth, underbust circumference, shoulder width. Take customer a as an example. The results are shown in Fig. 2. From the results in the figure, it can be seen that the comparative effects all show good similarity. Customers b and c also show good similarity.


Fig. 2. Customer a and alternate model body size matching line chart

4.2 Subjective Validation

This time, subjective observation is also used to verify the similarity between the customer’s body shape and the alternate model. As shown in Fig. 3, the black solid line is the body shape outline of customer a, Gray human body is the alternate model, they are all obtained through three-dimensional scanning. They have the same ratio, so the higher the coincidence rate, the more similar the two human bodies are. By looking at the pictures, we can see that the two human bodies have a good coincidence degree in the trunk part, and the left arm is due to the difference caused by the different opening angles. Therefore, it is judged that the two are similar and can be used for replacement. The same method also verified the body shape of customers b and c.


Fig. 3. Resemblance between customer and alternate model body size

5 Conclusions

In order to solve the problem of somatotype recognition, this paper analyzes 829 women's body sizes in China and draws the following conclusions: body height, bust girth, waist girth and buttock girth are determined through factor analysis as the main indexes applied to somatotype recognition. As the number of samples in the database increases, the number of somatotype recognition indexes can be increased and the alternate models will become more accurate. The Euclidean distance is used for somatotype recognition; the method is simple, intuitive and convenient to use, and the validation part shows that it is effective. The results of this paper can be applied to intelligent evaluation of the human body and to virtual fitting systems, and can realize the replacement of the human body in virtual fitting.

Acknowledgments. This work was financially supported by the fund of Liaoning education department (J2019023).

References 1. Kuang, Y., Li, R., Wu, Y.: Study of the categorization indexes of women body figure. J. China Text. Leader (9), 117–118 (2019) 2. Pan, L., Wang, J., Sha, S.: Study on body typing and garment size grading of young women in Northeast China. J. Text. Res. 34(11), 131–135 (2013) 3. Fang, F., Wang, Z.: Application of K—means clustering analysis in the body shape classification. J. Donghua Univ. (Nat. Sci.), 405593598 (2014) 4. Zhang, S., Zou, F., Ding, X.: Research on the young women’s body classification based on SVM. J. Zhejiang Sci. Tech Univ/ 25(1), 41–45 (2013) 5. Gerard, P.M., Jonathan, T., Jamie, S.: Metric regression forests for correspondence estimation. Int. J. Comput. Vis. 113(3), 163–175 (2015) 6. Ru, J., Wang, Y.: Study on SVM based women’s dress size recommendation. J. Silk 52(6), 27–31 (2015) 7. Jing, X., Li, X.: Application of naive bayesian method in girl’s figure discrimination. J. Text. Res. 38(12), 124–128 (2017) 8. Wang, J., Li, X., Pan, L.: Waist hip somatotype and classification of young women in Northeast China. J. Text. Res. 39(4), 106–110 (2018)


9. Zhang, W., Fang, F.: Apparel Somatology, pp. 108–109. Donghua University Press, Shanghai (2008) 10. Dai, H.: Size Designation and Application of Clothes, 2nd edn, p. 20. China Textile & Apparel Press, Beijing (2009) 11. Rim, S., Hazem, W., Mohamed, D.: Extremal human curves: a new human body shape and pose descriptor. In: 10th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 1–6 (2013) 12. Jiang, Z., Li, H.: A collaborative filtering recommendation algorithm based on user similarity and trust. J Softw. Guide 16(6), 28–31 (2017)

Robust and Fast Heart Rate Monitoring Based on Video Analysis and Its Application Kouyou Otsu1(&), Qiang Zhong1, Das Keya1, Hisato Fukuda1, Antony Lam2, Yoshinori Kobayashi1, and Yoshinori Kuno1 1

Saitama University, Saitama, Japan {otsu,keya0612,fukuda,kuno}@cv.ics.saitama-u.ac.jp, {kyochu,yosinori}@hci.ics.saitama-u.ac.jp 2 Mercari, Inc., Tokyo, Japan [email protected]

Abstract. Techniques to remotely measure heartbeat information are useful for many applications such as daily health management and emotion estimation. In recent years, some methods to measure heartbeat information using a consumer RGB camera have been proposed. However, it is still a difficult challenge to accurately and quickly measure heart rate from videos with significant body movements. In this study, we propose a video-based heart rate measurement method that enables robust measurement in real-time by improving over previous slower methods that used local regions of the facial skin for measurement. From experiments using public datasets and self-collected videos, it was confirmed that the proposed method enables fast measurements while maintaining the accuracy of conventional methods.

Keywords: Heart rate sensing · Remote photoplethysmography · Image processing · Healthcare

1 Introduction Heartbeat information such as heart rate (HR) and heart rate variability (HRV) is a useful index for daily health management. Generally, medical instruments such as an electrocardiogram (ECG) have been used for measuring heartbeat information. In recent years, several inexpensive measuring devices using photoplethysmography (PPG) have spread widely [1] making it easier to use heartbeat information for measuring cardiopulmonary function during activities such as playing sports or daily life. Heartbeat information is also known to reflect human internal psychological states such as tension or relaxation. In recent years, numerous human emotion estimation methods based on speaking analysis, and facial expression recognition have been proposed [2] but there are also many methods using heartbeat information obtained from ECG and PPG [3]. Such biological information reflects internal emotions that are not actively expressed by humans and cannot be felt by other people. Therefore, heartbeat information is a promising modality to achieve high accuracy emotion estimation.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 T. Ahram (Ed.): AHFE 2020, AISC 1213, pp. 250–257, 2021. https://doi.org/10.1007/978-3-030-51328-3_35


A major problem to using heart rate information in the daily measurement of health and emotional states is the measuring equipment must always be in contact with the body. To use heartbeat information in emotion estimation in daily life, a method that can measure people in natural environments without using any special equipment is required. For such a problem, several methods for measuring heartbeat information without contact are proposed [4, 5]. Above all, methods utilizing inexpensive consumer RGB cameras such as web cameras or smartphone cameras solve the above-mentioned problems and makes it possible to acquire information about human internal physiological states more easily. In heart rate measurement methods that use an RGB camera, the heartbeat signal is typically obtained by measuring the color change in the skin surface caused by the variation in blood flow volume. However, the raw signal obtained from the camera contains large noise due to illumination changes and body movement. Therefore, noise reduction based on band-pass filtering, signal extraction techniques based on independent component analysis (ICA) [4] and the hue analysis of the skin area (2SR) [5] are often performed to detect the heartbeat signal. Various methods with noise reduction have been proposed to deal with such problems, but it is still a difficult challenge to accurately measure heart rate from videos that include body motion. We have also proposed a method for robustly measuring heart rate from the human face area [6]. In this method, heart rate measurements are performed on multiple local regions that are randomly selected from the face skin region. The measurement results from each area are integrated by a majority vote and it can yield valid results with reduced effects from local noise. In an experiment using the MAHNOB-HCI public database [7] which contains 487 video clips (at 570  470 resolution, 30 s and 61 fps) and ECG data corresponding to each video (captured at 250 Hz), the mean absolute error was 4.7 BPM and the root mean square error was 8.9 BPM. This result shows that this method performs very well in comparison to previous methods proposed at the time. However, it requires large computational resources to measure many areas and is difficult to use in real-time.

Fig. 1. Overview of the heart rate measurement method


In this paper, we propose a method of video-based heart rate measurement by improving upon our previous method that would iteratively process random patches. Our improved method can be used in real-time and easily embedded into applications. If we can preferentially acquire suitable regions for measurement when selecting local regions, then we can get an accurate result with only a small number of iterations. Therefore, we investigated whether there is a statistically suitable part of the human face. And we found that when the cheek, nose, and forehead regions were selected as patches, the results were often close to the ground truth (ECG value). From experiments using public datasets, we confirmed that our proposed method achieves almost the same accuracy in comparison to our previous method with only a small number of iterations. Also, by adding a function that automatically excludes outliers caused by large head movements, we developed a framework that can acquire heart rate from video in real-time. This proposed framework is faster than other existing methods and can be incorporated into a real-time measurement system.

2 Methodology

2.1 Previous Method

The methodology proposed in this paper is an improvement of our previously proposed heart rate measurement method, which uses multiple random local regions for robust measurement [6]. Therefore, this section gives a brief overview of our previous method and reports on the results of this study's investigation and strategies aimed at improving the method. Figure 1 shows the overview of the method. First, the face tracker is applied to each frame of the video to detect the face area. Multiple pairs of local regions (500 patches) are randomly selected as measurement regions from the obtained face regions, and the change of the mean pixel value over time is measured for each local region. Then, filtered heartbeat signals are derived by applying Independent Component Analysis (ICA) and a detrending filter to the obtained signals. Finally, each heart rate value is derived by performing a frequency analysis on each heartbeat signal. The results are aggregated as a histogram and the best estimation result is determined by majority voting. This approach was found to be robust, but it requires analysis of a large number of local face regions. As a result, it requires huge computational resources to perform measurement for many areas and is difficult to use in real-time. In our test environment (CPU: Core-i7-4720K, RAM: 32 GB, MATLAB R2015b), it takes about 6 min to process a 30-s video.
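For illustration, the sketch below shows one way the per-patch signal processing chain (detrending, band-pass filtering and frequency analysis) could look in Python with SciPy; it is a stand-in for the paper's MATLAB pipeline, and `patch_signal` denotes the mean pixel-value time series of a single local region.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt

def estimate_heart_rate(patch_signal, fps=61.0):
    """Estimate beats per minute for one local region from its mean pixel-value time series."""
    # Detrend, then keep only plausible heart-rate frequencies (0.7-4 Hz, i.e. 42-240 BPM)
    signal = detrend(np.asarray(patch_signal, dtype=float))
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Frequency analysis: take the spectral peak as the heart-rate frequency
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0

# One estimate per patch; the final value is then chosen by majority vote (histogram mode):
# rates = [estimate_heart_rate(s) for s in patch_signals]
```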

2.2 An Investigation Towards Improvement of Random Patch Selection

The previous method achieves robust measurement by integrating multiple measurement results obtained from face local regions, but its slow processing speed is a distinct drawback. If we can preferentially acquire suitable regions for measurement when selecting local regions, we will be able to achieve an accurate result with a small number of iterations. Therefore, in this study, we investigate whether there are statistically suitable parts of the human face for heartbeat measurement.

Robust and Fast Heart Rate Monitoring Based on Video Analysis

253

The 487 cases of movies and ECG data in the MAHNOB-HCI-Database were used to investigate suitable points for heartbeat measurement. First, the center coordinates of the 1,000 local regions which were selected while processing on each video were recorded. Then, each local region was weighted using reciprocal of the error from the ground truth heart rate value obtained from the ECG data corresponding to each video as the validity rate. Finally, a validity map was generated by normalizing the coordinates of each local region and we summed up the validity rate (Fig. 2(a)). As shown by the validity map, statistically suitable parts for heartbeat measurement were clearly seen on the cheeks and nose. In response to this result, an experiment was conducted to investigate relationships between the selection area of the local region and the measurement accuracy. In the experiment, 469 videos (at 570  470 resolution, 30 s and 61 fps) of the MAHNOBHCI-Database was used to measure the MSE and RMSE values based on the error between the ground truth and the estimation result when measuring in three patterns with different selected areas (1: whole face, 2: bottom of the face, 3: cheek and nose) as shown in Fig. 2(b). Also, we tested two numbers of iterations (500 and 100) for each case. Table 1 shows the results of mean absolute error (MAE), root mean square error (RMSE), and match rate (percentage of cases measured with an error of 0 for all i = 1, 2, …, n. One of the bases of the method is in the comparison two to two of the weights of any pair Ai e Aj. This is represented by a square matrix A = (aij) of order n, where aij = wi/wj. At AHP relative priorities are assigned to different criteria using a 1-9 scale for peer comparison (Table 1). Table 1. 1-9 priority scale of AHP criteria for peer comparison. Intensity 1 3 5 7 9 2, 4, 6, 8

Definition Equal importance Moderate Importance Great importance Very important Intermediate values

Explication Two elements contribute equally to the objectives It favors one element slightly more than another It clearly favors one element than another It greatly favors one element than another It greatly favors one element than another It is used as a compromise between evaluators with judgments different


This weight vector is calculated using the AHP method for determining the proposed weights of the recreational areas. AHP also allows group evaluation; in this case, the weighted geometric mean is calculated for the final value, which preserves the reciprocity requirement. Weighting is used to give different weights to the specialists' criteria, taking into account factors such as authority, expertise and effort.

\[
\bar{x}=\prod_{i=1}^{n} x_i^{\,w_i}
\tag{1}
\]

With this method it is possible to set local priorities between subcriteria and, in this way, to calculate the weight vector associated with a set of subcriteria. Proposals to address the problem of green areas were identified, and then the AHP method, in particular the pairwise comparison matrix and the eigenvector method, was used to prioritize these proposals [10] (Tables 2 and 3).
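A minimal sketch of one common way to derive a priority vector from a pairwise comparison matrix (the row geometric-mean approximation of the eigenvector method mentioned above); the 3 × 3 matrix is a hypothetical illustration and is not the matrix of Table 2.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector from a pairwise comparison matrix via the row geometric mean,
    normalized to sum to one (an approximation of the principal eigenvector)."""
    matrix = np.asarray(pairwise, dtype=float)
    geometric_means = matrix.prod(axis=1) ** (1.0 / matrix.shape[1])
    return geometric_means / geometric_means.sum()

# Hypothetical 3x3 example on Saaty's 1-9 scale
example = [[1, 3, 5],
           [1 / 3, 1, 3],
           [1 / 5, 1 / 3, 1]]
print(ahp_weights(example).round(3))  # roughly [0.637, 0.258, 0.105]
```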

3 Identified Options

Areas of active recreation:

– Presence of bikeway
– Multipurpose courts
– Soccer fields
– Childish games

Passive recreation areas:

– Trail development
– Multipurpose room

Table 2. Matrix peer comparison. Source: Own elaboration. Bikeway Multipurpose courts Bikeway 1.000 0.111 Multipurpose 9.000 1.000 courts Soccer fields 9.000 0.500 Childish 6.000 1.000 games Trail 0.167 0.111 development Multipurpose 0.125 0.125 room Sum 25.292 2.847

Soccer fields 0.111 2.000

Childish games 0.167 1.000

Trail development 6.000 9.000

Multipurpose room 8.000 8.000

1.000 1.000

1.000 1.000

9.000 9.000

8.000 1.000

0.111

0.111

1.000

9.000

0.125

1.000

0.111

1.000

4.347

4.278

34.111

35.000

296

L. A. Toapanta Orbea et al. Table 3. Final weighting.

Source: Own elaboration.

Bikeway Multipurpose courts Soccer fields Childish games Trail development Multipurpose room

Bikeway Multipurpose courts

Soccer fields

Childish games

Trail development

Multipurpose room

0.040 0.356

0.039 0.351

0.026 0.460

0.039 0.234

0.176 0.264

0.229 0.229

0.091 0.316

0.356 0.237

0.176 0.351

0.230 0.230

0.234 0.234

0.264 0.264

0.229 0.029

0.248 0.224

0.007

0.039

0.026

0.026

0.029

0.257

0.064

0.005

0.044

0.029

0.234

0.003

0.029

0.057

Multi-use courts > Football Courts > Childish games > Bikeway > Development of Trails > Multipurpose Room.

4 Proposal of Architectural and Green Infrastructure for a Recreation Center in the El Empalme Canton in Ecuador The process for designing the architectural infrastructure and green infrastructure for a Recreational Center in the El Empalme canton is established as follows: – Analyze the recreational and cultural activities that take place in the canton. – Lifting information related to the type of soil and vegetation on the site. – Identify spatial requirements for the development of recreational and cultural activities for children, youth and adults to serve. – Design ingestry and green infrastructure.

5 Conclusions The El Empalme canton does not have a recreational center that provides spaces to carry out the recreational and cultural activities of its inhabitants, besides that it has a deficit of percentage of green areas per inhabitants for which a Center is proposed Recreational to meet these needs and increase the percentage of green areas per capita. For which the AHP method has been studied to prioritize the types of recreational activities that must be taken into consideration of the type of organization and urban density, taking into account the population to be served which is the urban parish of the El Empalme canton.

AHP Applied to the Prioritization of Recreational Spaces in Green Areas

297

References 1. INEC, Índice Verde Urbano, p. 26 (2012) 2. Jansson, M., Lindgren, T.: A review of the concept ‘management’ in relation to urban landscapes and green spaces: toward a holistic understanding. Urban For. Urban Green. 11(2), 139–145 (2012) 3. Jim, C.Y., He, H.: Estimating heat flux transmission of vertical greenery ecosystem. Ecol. Eng. 37(8), 1112–1122 (2011) 4. Woolley, H.: Urban Open Spaces. Taylor & Francis, Abingdon (2003) 5. Wu, J.: Public open-space conservation under a budget constraint. J. Public Econ. 111, 96– 101 (2014) 6. Zérah, M.-H., Landy, F.: Nature and urban citizenship redefined: the case of the national park in Mumbai. Geoforum 46, 25–33 (2013) 7. Campbell, S.: Green cities, growing cities, just cities?: urban planning and the contradictions of sustainable development. J. Am. Plan. Assoc. 62(3), 296–312 (1996) 8. Seik, F.T.: Planning and design of Tampines, an award-winning high-rise, high-density township in Singapore. Cities 18(1), 33–42 (2001) 9. Saaty, T.L.: What is the analytic hierarchy process?. In: Mathematical Models for Decision Support (1988) 10. Anand, A., Kant, R., Patel, D., Singh, M.: Knowledge management implementation: a predictive model using an analytical hierarchical process. J. Knowl. Econ. 6(1), 48–71 (2015)

Building Updated Research Agenda by Investigating Papers Indexed on Google Scholar: A Natural Language Processing Approach Rita Yi Man Li(&) HKSYU Real Estate and Economics Research Lab, Hong Kong Shue Yan University, Hong Kong, China [email protected]

Abstract. Under many circumstances, scholars need to identify new research directions by going through many different databases to find the research gap and the areas that have not yet been studied. Checking all the electronic databases is tiresome, and one often misses important pieces. In this paper, we propose to shorten the time required for identifying the research gap by using web scraping and natural language processing. We tested this approach by reviewing three distinct areas: (i) safety awareness, (ii) housing price, and (iii) sentiment and artificial intelligence, from 1988 to 2019. Tokenisation was used to parse the titles of the publications indexed on Google Scholar. We then ranked the collocations from the highest to the lowest frequency. In this way, we determined the sets of keywords that had not been stated in the titles and identified the initial idea of a research void. Keywords: Natural Language Processing · Tokenization · Safety awareness · Research agenda · Housing · Artificial Intelligence · Sentiment



1 Introduction

Natural language processing (NLP) converts unstructured text into a structured form and thus enables the automatic identification and extraction of information. It can be used, for example, for the classification of patients into different groups or into the codes of a clinical coding system, for chatbots, and so on (4). At the word level, additional normalization steps determine the lexical word root (stemming) and expand abbreviations to their corresponding full forms. Syntactic analysis determines the part of speech of words (e.g. noun or adjective), their grammatical structure, and their dependency relationships (8). Semantic analysis assigns meaning to words and phrases by linking them to semantic types and concepts. A lexicon of words with definitions and synonyms can be used for this purpose, for example the Metathesaurus of the Unified Medical Language System or RadLex, a specialized radiological lexicon [1]. Linking findings to anatomical locations is an example of NLP for relationship mining. SymText, an NLP system that analyses both the syntax and the semantics of text, was developed by Fiszman et al. and is used for extracting concepts related to aspiration and pneumonia from reports [1].


2 Three Areas of Research that We Attempt to Build a Research Agenda

2.1 Safety Awareness

Safety training is the key determinant of improving knowledge and worker safety awareness. Further research should be undertaken to assess knowledge and practice related to health and safety outcomes and the other components of safe practice [2]. Previous studies show that the safety awareness of personnel working on a DSR project is lower than that of those who worked on a new skyscraper construction [3]. Construction practitioners wished to know more about safety and to increase their knowledge via Web 2.0 so as to heighten their safety awareness [4].

2.2 Artificial Intelligence (AI) and Sentiment Analysis

AI is a rapidly growing field, according to a rough estimate of the number of related articles in the Google Scholar database. Intelligent algorithms are widely used in data-driven methods in the literature to support advanced searches. AI has led to the development of tools and applications that can increase the efficiency of managing complex diseases, including diabetes and cancer [5]. AI is also used in the field of education, e.g. programming-based models exhibit good predictive power in registration [6]. Sentiment analysis via AI is a task of opinion judgment that converts unstructured qualitative data into quantitative data for decision making. The semantic orientation approach uses predefined mood lexicons such as WordNet and SentiWordNet, which provide lists of mood words or a corpus with a massive amount of mood-expression text and a dictionary with the polarity of words. By counting words from messages that match the categories in the lexicon, it recognises words with positive, negative or neutral polarity. Statistical analysis and pattern comparison match words from text documents with the mood lexicon. In financial markets, lexicon-based text analysis tools are used to assess the mood of tweets from the financial community [7–9].

2.3 Housing Prices

Housing research is also a rather wide area; e.g., some researchers may wish to know what type of design is the most suitable, for whom, and how to realise it in transformed buildings [10]. One common line of research focuses on the factors that affect housing prices [11]; e.g., speculation drives property prices up, and subprime mortgages led to additional demand that further increased property prices [12]. The expectation that housing prices will rise motivates consumers to buy housing [13]. Besides, research has studied the effects of air quality on housing prices via quantitative regression [14]. Li and Li's [15] research on 2,175,911 data points reflects that when more municipal waste is sent to landfill, residents' complaints increase but the entire region's property prices rise [15].


3 Research Method and Results

In this research, we scraped paper titles from Google Scholar, one of the largest online databases with the largest number of literature works, to conduct a literature review. In the case of safety awareness research, many of the titles focus on food safety (92), awareness among a particular group (70), health (45) and patient-related topics (44). The keywords also reflect that the construction industry (51) is one of the popular topics (Tables 1 and 2).

Table 1. Tokenisation results of safety awareness research indexed on Google Scholar: phrases with the highest frequency (phrase, count, N). Safety awareness (619, 2); Food safety (92, 2); Awareness among (70, 2); Safety awareness among (65, 3); Health and safety (45, 3); Patient safety (44, 2); Road safety (44, 2); Situation awareness (43, 2); Food safety awareness (40, 3); Awareness and safety (34, 3); Fire safety (30, 2); College students (28, 2); Situational awareness (28, 2); Fire safety awareness (26, 3); Radiation safety (25, 2); Radiation safety awareness (24, 3); Road safety awareness (23, 3); Health and safety awareness (22, 4); Case study (22, 2); Safety education (20, 2); School students (17, 2); Public safety (16, 2); Traffic safety (16, 2); Patient safety awareness (15, 3); Occupational safety (15, 2); Awareness and practice (14, 3); Students safety awareness (14, 3); Occupational health (14, 2); Safety management (14, 2); Safety practices (14, 2); Students safety (14, 2); Radiation safety awareness among (13, 4); Safety awareness and safety (13, 4); Awareness of food (13, 3); Study on safety (13, 3); Construction workers (13, 2); High school (13, 2); Safety culture (13, 2).

Table 2. Tokenisation results of safety awareness research indexed on Google Scholar: words with the highest frequency (term, count). Safety (1132); Awareness (1004); Among (148); Study (135); Food (123); Students (95); Health (93); Workers (67); Road (58); Construction (51); Situation (51); Education (50); School (50); Training (48); Public (47); Analysis (46); Patient (46); Based (41); System (41); Practices (40); Fire (38); Occupational (38); Behaviour (36); Knowledge (36); Management (36); College (35); Case (34); Practice (34); Survey (33); Radiation (31); Using (30); Level (29); Risk (29); Situational (29); Industry (28); Program (27); Assessment (25); Improving (25); Factors (24); Effects (23); Culture (22); Effect (22); Hospital (22); Use (22); High (21); Consumers (20); Evaluation (20); Improve (20); Medical (20); Research (20); Traffic (20); Approach (19); Development (19); Employees (19); Nigeria (19); Prevention (19); Measures (18); University (18).

Tables 3 and 4 illustrate the results for 'artificial intelligence sentiment analysis'. With the use of tokenisation, the frequency of common phrases used in research titles is counted; e.g., a considerably large number of papers had titles related to machine learning (39), twitter sentiment (30), opinion mining (24), and social media (22). This implies that these topics are hot, and that if we consider conducting research on 'Twitter sentiment analysis: Opinion mining', we might have to spend more effort on identifying the related research gap, as a large body of prior research already exists.
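A minimal sketch of the kind of tokenisation and phrase-frequency counting described above, assuming the scraped titles are already available as a list of strings (the titles shown are purely illustrative; the paper does not publish its implementation):

```python
from collections import Counter
import re

titles = [
    "Twitter sentiment analysis using machine learning",   # illustrative titles only
    "Aspect based sentiment analysis of movie reviews",
]

def ngrams(tokens, n):
    """Return all n-word collocations in a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

phrase_counts = Counter()
for title in titles:
    tokens = re.findall(r"[a-z']+", title.lower())   # simple word tokenisation
    for n in (2, 3, 4):                              # count 2- to 4-word collocations
        phrase_counts.update(ngrams(tokens, n))

# Rank collocations from highest to lowest frequency, as in the tables above.
for phrase, count in phrase_counts.most_common(20):
    print(phrase, count)
```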

Table 3. Tokenisation results of artificial intelligence sentiment analysis research indexed on Google Scholar: phrases with the highest frequency (phrase, count). Sentiment analysis (446); Sentiment classification (135); Based sentiment (40); Machine learning (39); Analysis using (35); Sentiment analysis using (34); Twitter sentiment (30); Based sentiment analysis (30); Opinion mining (24); Level sentiment (24); Social media (22); Feature selection (20); Deep learning (19); Neural networks (18); Lexicon based (17); Domain sentiment (17); Twitter sentiment analysis (16); Mining and sentiment (16); Classification using (16); Social networks (15); Neural network (15); Using machine learning (14); Using machine (14); Sentiment classification using (14); Mining and sentiment analysis (14); Twitter data (13); Sentiment lexicon (13); Opinion mining and sentiment (13); Cross domain (13); Sentiment lexicons (12); Aspect based (12); Artificial neural (12); Sentiment analysis of twitter (11); Level sentiment analysis (11); Cross domain sentiment (11); Based approach (11); Approach for sentiment (11); Analysis of twitter (11); Semi supervised (10); Lexicon based sentiment (10); Domain sentiment classification (10); Aspect based sentiment (10); Twitter sentiment classification (9); Text sentiment (9); Survey on sentiment (9); Sentiment polarity (9); Multimodal sentiment (9); Movie reviews (9); Lexicon based sentiment analysis (9); Learning approach (9); Big data (9); Aspect based sentiment analysis (9).

Table 4. Tokenisation results of sentiment analysis research indexed on Google Scholar: words with the highest frequency (term, count). Sentiment (791); Analysis (480); Classification (155); Using (152); Based (136); Learning (104); Twitter (76); Social (55); Reviews (54); Mining (50); Approach (45); Domain (42); Machine (42); Data (39); Networks (39); Lexicon (38); Text (37); Feature (36); Neural (36); Deep (33); Opinion (33); Aspect (31); Chinese (30); Level (30); Model (30); Supervised (29); Review (28); Selection (28); Media (27); Survey (26); Online (25); Stock (23); Techniques (23); Prediction (22); News (21); Approaches (20); Arabic (20); Ensemble (20); Network (20); Product (20); Semantic (20); Artificial (19); Language (19); Multi (19); Topic (18); Tweets (18); Polarity (17); Cross (16); Knowledge (16); Lexicons (16); Method (16); Study (16); System (16); Web (16); Detection (15); Methods (15); User (15); Word (15); Emotion (14); Information (14); Via (14); Fuzzy (13); Movie (13); Sentence (13); Features (12); Ontology (12); Document (11); Improving (11); Multimodal (11); Unsupervised (11); Affective (10); Automated (10); Big (10); Content (10); Hybrid (10); Market (10); Models (10); Semi (10); Visual (10); Attention (9); Classifier (9); Comparative (9); Context (9); Financial (9); Improve (9); Performance (9); Specific (9); Systems (9); Towards (9).


In housing price research, housing market (87), price dynamics (47) and price index (44) are the hottest topics. The keywords in the titles also reflect the research interest; common ones include analysis (147), China (95) and hedonic (91). That implies that when we build a research agenda, other regions and research methods should be considered instead (Tables 5 and 6).

Table 5. Tokenisation results of housing price research indexed on Google Scholar: phrases with the highest frequency (phrase, count, N). Housing price (727, 2); Housing market (87, 2); Price dynamics (47, 2); Price index (44, 2); Hedonic price (39, 2); Housing price dynamics (35, 3); House price (35, 2); Empirical analysis (32, 2); Land Price (32, 2); Housing price index (31, 3); Housing markets (28, 2); Empirical study (27, 2); Price based (25, 2); Price volatility (25, 2); Hedonic housing price (24, 3); Hedonic housing (24, 2); Housing price based (22, 3); Monetary policy (21, 2); Price bubbles (21, 2); Price indices (21, 2); Urban housing (21, 2); Commercial housing (20, 2); Price indexes (20, 2); Housing price volatility (19, 3); Price of housing (19, 3); Relationship between housing (19, 3); Price model (18, 2); Relationship between housing price (17, 4); Price to income (17, 3); Case study (17, 2); Income ratio (17, 2); Real estate (17, 2); Housing price and land (16, 4); Housing price to income (16, 4); Price and land price (16, 4); Price and land (16, 3); Hong Kong (16, 2); Housing prices (16, 2); Price bubble (16, 2); Price models (16, 2).

Table 6. Results of housing price research's paper titles indexed in Google Scholar: words with the highest frequency (term, count). Housing (1071); Price (1069); Analysis (141); Based (129); Market (118); Study (104); China (95); Hedonic (91); Model (90); Empirical (84); Evidence (70); Land (69); Urban (64); Dynamics (63); Spatial (61); Impact (59); Effect (51); Data (48); Index (48); Supply (47); Factors (46); Income (45); Relationship (43); Case (42); Policy (42); Research (42); Effects (40); City (39); House (39); Markets (39); Ratio (38); Using (37); Beijing (32); Models (28); Cities (27); Volatility (27); Rent (26); Bubbles (25); Demand (24); Estimation (24); Real (24); Shanghai (24); Bubble (23); Indices (23); Prices (23); China's (22); High (22); Indexes (22); Monetary (22); Influence (21); Panel (21); Regional (21); Commercial (20); Residential (20).

Acknowledgements. We thank the grant support from Research Grant Council (UGC/FDS15/E01/18).

References 1. Pons, E., et al.: Natural language processing in radiology: a systematic review. Radiology 279(2), 329–343 (2016) 2. Gebrezgiabher, B.B., Tetemke, D., Yetum, T.: Awareness of occupational hazards and utilization of safety measures among welders in Aksum and Adwa Towns, Tigray Region, Ethiopia, 2013. J. Environ. Public Health 2019, 4174085 (2019) 3. Li, R.Y.M., Chau, K.W., Zeng, F.F.: Ranking of risks for existing and new building works. Sustainability 11(10), 2863 (2019) 4. Li, R.Y.M., Tang, B., Chau, K.W.: Sustainable construction safety knowledge sharing: a partial least square-structural equation modeling and a feedforward neural network approach. Sustainability 11(20), 5831 (2019) 5. Contreras, I., Vehi, J.: Artificial intelligence for diabetes management and decision support: literature review. J. Med. Internet Res. 20(5), e10775 (2018) 6. Zawacki-Richter, O., et al.: Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. High. Educ. 16(1), 39 (2019) 7. Yu, Y., Wang, X.: World Cup 2014 in the Twitter world: a big data analysis of sentiments in US sports fans’ tweets. Comput. Hum. Behav. 48, 392–400 (2015) 8. Choi, Y., Lee, H.: Data properties and the performance of sentiment classification for electronic commerce applications. Inf. Syst. Front. 19(5), 993–1012 (2017) 9. Daniel, M., Neves, R.F., Horta, N.: Company event popularity for financial markets using Twitter and sentiment analysis. Exp. Syst. Appl. 71, 111–124 (2017) 10. Overtoom, M.E., et al.: Making a home out of a temporary dwelling: a literature review and building transformation case studies. Intell. Build. Int. 11(1), 46–62 (2019) 11. Li, R.Y.M., Chau, K.W.: Econometric Analyses of International Housing Markets. Routledge, Abingdon (2016)


12. Alexiou, C., Chan, A.-S., Vogiazas, S.: Homeownership motivation, rationality, and housing prices: Evidence from gloom, boom, and bust-and-boom economies. Int. J. Finan. Econ. 24(1), 437–448 (2019) 13. Su, C.-W., Yin, X.-C., Tao, R.: How do housing prices affect consumption in China? new evidence from a continuous wavelet analysis. PLoS ONE 13(9), e0203140 (2018) 14. Liu, R., et al.: Impacts of haze on housing prices: an empirical analysis based on data from Chengdu (China). Int. J. Environ. Res. Public Health 15(6), 1161 (2018) 15. Li, R.Y.M., Li, H.C.Y.: Have housing prices gone with the smelly wind? big data analysis on landfill in Hong Kong. Sustainability 10(2), 341 (2018)

Applications in Software and Systems Engineering

Microstrip Antennas Used for Communication in Military Systems and Civil Communications in the 5 GHz Band - Design and Simulation Results

Rafal Przesmycki, Marek Bugaj, and Marian Wnuk

Faculty of Electronics, Military University of Technology, Warsaw, Poland
{rafal.przesmycki,marek.bugaj,marian.wnuk}@wat.edu.pl

Abstract. The article deals with problems related to antenna technology and its main goal is to develop two models of microstrip antennas with optimally small sizes working in systems operating at 5.2 GHz (e.g. WLAN). The article presents two designs of developed models of microstrip antennas with their exact dimensions. The article also presents the results of simulations and measurements of selected electrical parameters and characteristics of designed antennas, which were carried out in an anechoic chamber. Keywords: Microstrip antenna · Radiation pattern · Military systems · CST · 5 GHz



1 Introduction

For the first time, a microstrip antenna fed by a microstrip line was introduced in 1953 by G. A. Deschamps, but, as is often the case in the world of science, the patent was not filed until 1955, by the French inventors Gutton and Baissinot. After the patent was established, 20 years passed before it was put into service. This resulted from the development of other fields of science related to materials, which allowed obtaining substrates with appropriate parameters ensuring good operation of the antennas formed on them. The first to make a practically used microstrip antenna were Howell and Munson. It was used in the Sprint and Sidewinder missiles, which are used by the US Army. A high potential in this type of solution was noticed then, which gave rise to the large development of this technology over the following decades. Currently, microstrip antennas are widely used due to their numerous advantages, among which are [1]: small dimensions, low weight, low production costs, etc. Of course, these antennas also have many disadvantages, such as a very narrow operating band, large losses in the feed track, high cross-polarization, and a small tolerance for high power. Many companies are constantly working on removing, or at least minimizing, these flaws, and this work will continue as long as the market requires it [1]. Connectivity is a field of technology that at the turn of the 20th and 21st century made astonishing progress - changing the medium from wired to wireless. However, this


would not be possible were it not for the development of antenna production technology, which removed the need to lay cables over hundreds of kilometers and allowed their replacement by two simple transceiver antennas. The Internet went the same way: 20 years ago it was necessary to connect a cable to a computer to be able to enjoy the benefits of the Internet on a personal computer or laptop. Today, WLAN networks are ubiquitous and allow you to connect your device wirelessly to an Ethernet network - allowing much easier access to the Internet, as well as increasing the comfort of using its services [4, 5].

2 Antenna Assumptions

The main assumption for the antennas being developed is their operating frequency in the 5 GHz band. This band enables communication in military and civil systems. The basic assumption is that the developed antenna models should operate in a WLAN system on the 5.2 GHz carrier frequency. The second important assumption is their size: they should be antennas of the smallest possible dimensions, allowing their use in mobile devices. This will require optimization to minimize the spatial dimensions of the antenna. An important assumption is also the requirement of a low production cost for such an antenna. This was the main factor in choosing the FR-4 dielectric as the substrate for the antenna construction, despite the fact that it has worse parameters than the solutions proposed by ROGERS, because it is several times cheaper. The basic assumptions for the designed antennas are shown in Table 1.

Table 1. Basic design assumptions for the designed antennas
The shape of the patch: Rectangle
Dielectric thickness: 1.5 mm
Dielectric constant: 4.6
tan δ: 0.025
Work frequency: 5.2 GHz
Bandwidth: 20, 40 MHz
Power supply method: Microstrip line
VSWR: –
Polarization: –
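As a rough cross-check of the assumptions in Table 1, the standard transmission-line-model formulas give approximate patch dimensions for a rectangular microstrip antenna on FR-4; this is a textbook estimate under the stated substrate parameters, not the authors' optimised design:

```python
import math

c = 3e8            # speed of light, m/s
f = 5.2e9          # design frequency, Hz (Table 1)
eps_r = 4.6        # FR-4 dielectric constant (Table 1)
h = 1.5e-3         # substrate thickness, m (Table 1)

# Patch width
W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))

# Effective dielectric constant and length extension due to fringing fields
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))

# Patch length
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W * 1000:.2f} mm, L = {L * 1000:.2f} mm")  # roughly 17 mm x 13 mm
```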

Y_{I}(X) = \begin{cases} \dfrac{S_2 - X}{S_2 - S_1}, & S_1 < X \le S_2 \\[4pt] \dfrac{S_3 - X}{S_3 - S_2}, & S_2 < X \le S_3 \end{cases} \quad (4)

Y_{II}(X) = \begin{cases} \dfrac{X - S_1}{S_2 - S_1}, & S_1 < X \le S_2 \\[4pt] \dfrac{X - S_2}{S_3 - S_2}, & S_2 < X \le S_3 \\[4pt] 0, & \text{others} \end{cases} \quad (5)

Y_{III}(X) = \begin{cases} \dfrac{X - S_2}{S_3 - S_2}, & S_2 < X \le S_3 \\[4pt] \dfrac{X - S_3}{S_4 - S_3}, & S_3 < X \le S_4 \\[4pt] 0, & \text{others} \end{cases} \quad (6)

Y_{IV}(X) = \begin{cases} \dfrac{X - S_3}{S_4 - S_3}, & S_3 < X \le S_4 \\[4pt] \dfrac{X - S_4}{S_5 - S_4}, & S_4 < X \le S_5 \\[4pt] 0, & \text{others} \end{cases} \quad (7)
12.0 [35, 36]. In the case of the measured patches, there is a slight difference in the color measurement. The color of many foods has been measured using computer vision techniques [37–41]. The CIE L*a*b* coordinate system is suggested as the best color space for food color quantification [42–50]. Capturing images with a camera and processing them with suitable software is an interesting alternative for heterogeneous systems, because it provides a large amount of information in a practical way.
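A minimal sketch of the image-based colour measurement idea, using OpenCV as one possible library (the file name and region of interest are placeholders, not the authors' setup):

```python
import cv2
import numpy as np

# Load an image of the sample and select a region of interest (both are placeholders).
bgr = cv2.imread("mango_patch.png")                      # hypothetical file name
roi = bgr[100:200, 100:200].astype(np.float32) / 255.0   # scale to [0, 1] for float conversion

# For float images in [0, 1], OpenCV returns CIE L*a*b* with L* in [0, 100]
# and a*, b* roughly in [-127, 127].
lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

print(f"L* = {L.mean():.1f}, a* = {a.mean():.1f}, b* = {b.mean():.1f}")
```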


4 Conclusion

The method based on image analysis allowed the determination of the color of fruits through the CIE L*a*b* system. The implementation of the computer vision system is relatively cheap compared to the costs of other equipment. The method based on image analysis was applied, allowing the measurement of the color parameters (L*, a*, b*) of foods such as fruits, achieving values of L*, a* and b* for Tommy Atkins mango (L* = 74%, a* = 78.3% of red and b* = 55% of yellow), papaya (L* = 74%, a* = 15% of green and b* = 43.3% of yellow), star fruit (L* = 59%, a* = 18.3% of red and b* = 61.7% of yellow) and golden berry (L* = 67%, a* = 16.7% of red and b* = 85% of yellow).

References 1. Patrício, D.I., Rieder, R.: Computer vision and artificial intelligence in precision agriculture for grain crops: a systematic review. Comput. Electron. Agric. 153, 69–81 (2018) 2. Pourdarbani, R., Sabzi, S., Kalantari, D., Hernández-Hernández, J.L., Arribas, J.I.: A computer vision system based on majority-voting ensemble neural network for the automatic classification of three chickpea varieties. Foods 9(2), 113 (2020) 3. Cavallo, D.P., Cefola, M., Pace, B., Logrieco, A.F., Attolico, G.: Non-destructive and contactless quality evaluation of table grapes by a computer vision system. Comput. Electron. Agric. 156, 558–564 (2019) 4. Sabzi, S., Javadikia, H., Arribas, J.I.: A three-variety automatic and non-intrusive computer vision system for the estimation of orange pH value. Measurement 152, 107298 (2020) 5. Leme, D.S., da Silva, S.A., Barbosa, B.H.G., Borém, F.M., Pereira, R.G.F.A.: Recognition of coffee roasting degree using a computer vision system. Comput. Electron. Agric. 156, 312–317 (2019) 6. Ireri, D., Belal, E., Okinda, C., Makange, N., Ji, C.: A computer vision system for defect discrimination and grading in tomatoes using machine learning and image processing. Artif. Intell. Agric. 2, 28–37 (2019) 7. Yossy, E.H., Pranata, J., Wijaya, T., Hermawan, H., Budiharto, W.: Mango fruit sortation system using neural network and computer vision. Procedia Comput. Sci. 116, 596–603 (2017) 8. Nadafzadeh, M., Abdanan Mehdizadeh, S., Soltanikazemi, M.: Development of computer vision system to predict peroxidase and polyphenol oxidase enzymes to evaluate the process of banana peel browning using genetic programming modeling. Sci. Hortic. 231, 201–209 (2018) 9. Tripathi, M.K., Maktedar, D.D.: A role of computer vision in fruits and vegetables among various horticulture products of agriculture fields: a Survey. Information Processing in Agriculture (2019) 10. Bhargava, A., Bansal, A.: Fruits and vegetables quality evaluation using computer vision: a review. J. King Saud Univ. Comput. Inf. Sci. (2018). https://doi.org/10.1016/j.jksuci.2018. 06.002 11. Zhang, B., Huang, W., Li, J., Zhao, C., Fan, S., Wu, J., Liu, C.: Principles, developments and applications of computer vision for external quality inspection of fruits and vegetables: a review. Food Res. Int. 62, 326–343 (2014)


12. Li, S., Luo, H., Hu, M., Zhang, M., Feng, J., Liu, Y., Dong, Q., Liu, B.: Optical nondestructive techniques for small berry fruits: a review. Artif. Intell. Agric. 2, 85–98 (2019) 13. Mohd Ali, M., Hashim, N., Abdul Hamid, A.S.: Combination of laser-light backscattering imaging and computer vision for rapid determination of oil palm fresh fruit bunches maturity. Comput. Electron. Agric. 169, 105235 (2020) 14. Cömert, E.D., Mogol, B.A., Gökmen, V.: Relationship between color and antioxidant capacity of fruits and vegetables. Curr. Res. Food Sci. 2, 1–10 (2020) 15. Minaei, S., Kiani, S., Ayyari, M., Ghasemi-Varnamkhasti, M.: A portable computer-visionbased expert system for saffron color quality characterization. J. Appl. Res. Med. Aromat. Plants 7, 124–130 (2017) 16. Cubero, S., Albert, F., Prats-Moltalbán, J.M., Fernández-Pacheco, D.G., Blasco, J., Aleixos, N.: Application for the estimation of the standard citrus colour index (CCI) using image processing in mobile devices. Biosys. Eng. 167, 63–74 (2018) 17. Pothen, Z., Nuske, S.: Automated assessment and mapping of grape quality through imagebased color analysis. IFAC-PapersOnLine 49(16), 72–78 (2016) 18. Tan, K., Lee, W.S., Gan, H., Wang, S.: Recognising blueberry fruit of different maturity using histogram oriented gradients and colour features in outdoor scenes. Biosys. Eng. 176, 59–72 (2018) 19. Gené-Mola, J., Vilaplana, V., Rosell-Polo, J.R., Morros, J.-R., Ruiz-Hidalgo, J., Gregorio, E.: KFuji RGB-DS database: fuji apple multi-modal images for fruit detection with color, depth and range-corrected IR data. Data Brief 25, 104289 (2019) 20. Donis-González, I.R., Guyer, D.E.: Classification of processing asparagus sections using color images. Comput. Electron. Agric. 127, 236–241 (2016) 21. Cárdenas-Pérez, S., Chanona-Pérez, J., Méndez-Méndez, J.V., Calderón-Domínguez, G., López-Santiago, R., Perea-Flores, M.J., Arzate-Vázquez, I.: Evaluation of the ripening stages of apple (Golden Delicious) by means of computer vision system. Biosys. Eng. 159, 46–58 (2017) 22. Jana, S., Parekh, R., Sarkar, B.: A De novo approach for automatic volume and mass estimation of fruits and vegetables. Optik 200, 163443 (2020) 23. Concha-Meyer, A., Eifert, J., Wang, H., Sanglay, G.: Volume estimation of strawberries, mushrooms, and tomatoes with a machine vision system. Int. J. Food Prop. 21(1), 1867– 1874 (2018) 24. Zielinska, M., Michalska, A.: Microwave-assisted drying of blueberry (Vaccinium corymbosum L.) fruits: drying kinetics, polyphenols, anthocyanins, antioxidant capacity, colour and texture. Food Chem. 212, 671–680 (2016) 25. Salehi, F., Kashaninejad, M.: Modeling of moisture loss kinetics and color changes in the surface of lemon slice during the combined infrared-vacuum drying. Inf. Process. Agric. 5 (4), 516–523 (2018) 26. Benalia, S., Cubero, S., Prats-Montalbán, J.M., Bernardi, B., Zimbalatti, G., Blasco, J.: Computer vision for automatic quality inspection of dried figs (Ficus carica L) in real-time. Comput. Electron. Agric. 120, 17–25 (2016) 27. Lv, W., Li, D., Lv, H., Jin, X., Han, Q., Su, D., Wang, Y.: Recent Development of microwave fluidization technology for drying of fresh fruits and vegetables. Trends Food Sci. Technol. 86, 59–67 (2019) 28. Seyedabadi, E., Khojastehpour, M., Abbaspour-Fard, M.H.: Online measuring of quality changes of banana slabs during convective drying. Eng. Agric. Environ. Food 12(1), 111– 117 (2019) 29. 
Wang, D., Martynenko, A., Corscadden, K., He, Q.: Computer vision for bulk volume estimation of apple slices during drying. Drying Technol. 35(5), 616–624 (2016)


30. Nadian, M.H., Abbaspour-Fard, M.H., Martynenko, A., Golzarian, M.R.: An intelligent integrated control of hybrid hot air-infrared dryer based on fuzzy logic and computer vision system. Comput. Electron. Agric. 137, 138–149 (2017) 31. Seyedabadi, E., Khojastehpour, M., Abbaspour-Fard, M.H.: Online measuring of quality changes of banana slabs during convective drying. Eng. Agric. Environ. Food 12(1), 111– 117 (2018) 32. Gonzalez, R.C., Woods, R.E.: Digital image Processing, 4th edn. Pearson, New York (2018) 33. Duda, R.O., Hart, P.E., Store, D.G.: Pattern Classification, 2nd edn. John Wiley & Sons, New York (2007) 34. CIE (Commission Internationale de l’Éclairage).: Colorimetry. Part 4: CIE 1976 L*a*b* Colour space. ISO/CIE 11664-4:2019(E). Commission Internationale de l’Éclairage, Vienna, Austria (2019) 35. Li, L.T.: Food physics. China Agricultural Press, Beijing (2001) 36. Zhang, Y.-F., Li, J.-B., Zhang, Z.-Y., Wei, Q.-S., Fang, K.: Rheological law of change and conformation of potato starch paste in an ultrasound field. J. Food Measur. Charact. 13, 1695–1704 (2019) 37. Miranda-Zamora, W.R.: Determinación de los parámetros del color (L*, a* y b*) de alimentos utilizando un método alternativo: sistema de visión por computadora. Tesis para optar el Grado de Doctor en Ingeniería Industrial. Escuela de Postgrado. Universidad Nacional de Piura, Piura, Perú (2012) 38. Miranda-Zamora, W.R., Vignolo, T.G., Leyva, N.L.: Ingeniería del tratamiento térmico de alimentos. Universidad Nacional de Piura, Piura (2012) 39. Miranda-Zamora, W.R., Stoforos, N.G.: Procesamiento térmico de alimentos. Teoría, práctica y cálculos. AMV (Antonio Madrid Vicente) Ediciones, Madrid-España (2016) 40. Tretola, M., Ottoboni, M., Di Rosa, A. R., Giromini, C., Fusi, E., Rebucci, R., Leone, F., Dell’Orto, V., Chiofalo, V., Pinotti, L.: Former food products safety evaluation: computer vision as an innovative approach for the packaging remnants detection. J. Food Qual. 1–6 (2017) 41. Naik, S., Patel, B.: Machine vision based fruit classification and grading: a review. Int. J. Comput, Appl. 170(9), 22–34 (2017) 42. Peng, Y., Adhiputra, K., Padayachee, A., Channon, H., Ha, M., Warner, R.D.: High oxygen modified atmosphere packaging negatively influences consumer acceptability traits of pork. Foods 8(11), 567 (2019) 43. Parafati, L., Palmeri, R., Trippa, D., Restuccia, C., Fallico, B.: Quality maintenance of beef burger patties by direct addiction or encapsulation of a prickly pear fruit extract. Front. Microbiol. 10, 1760 (2019) 44. Choe, J., Kim, H.-Y.: Comparison of three commercial collagen mixtures: quality characteristics of marinated pork loin ham. Food Sci. Anim. Res. 39(2), 345–353 (2019) 45. Zhuang, H., Rothrock Jr., M.J., Hiett, K.L., Lawrence, K.C., Gamble, G.R., Bowker, B.C., Keener, K.M.: In-Package air cold plasma treatment of chicken breast meat: treatment time effect. J. Food Qual. 2019, 1–7 (2019) 46. Karbowiak, T., Crouvisier-Urion, K., Lagorce, A., Ballester, J., Geoffroy, A., Roullier-Gall, C., Chanut, J., Gougeon, R.D., Schmitt-Kopplin, P., Bellat, J.-P.: Wine aging: a bottleneck story. NPJ Sci. Food 3, 14 (2019) 47. Martins, A.J., Cerqueira, M.A., Pastrana, L.M., Cunha, R.L., Vicente, A.A.: Sterol-based oleogels’ characterization envisioning food applications. J. Sci. Food Agric. 99(7), 3318– 3325 (2018) 48. Ahmad, N.A., Yook Heng, L., Salam, F., Mat Zaid, M.H., Abu Hanifah, S.: A colorimetric pH sensor based on Clitoria sp and Brassica sp for monitoring of food spoilage using chromametry. 
Sensors 19(21), 4813 (2019)


49. Libera, J., Latoch, A., Wójciak, K.M.: Utilization of grape seed extract as a natural antioxidant in the technology of meat products inoculated with a probiotic strain of LAB. Foods 9(1), 103 (2020) 50. Hernández-Guerrero, S.E., Balois-Morales, R., Palomino-Hermosillo, Y.A., López-Guzmán, G.G., Berumen-Varela, G., Bautista-Rosales, P.U., Alejo-Santiago, G.: Novel edible coating of starch-based stenospermocarpic mango prolongs the shelf life of mango “Ataulfo” fruit. J. Food Qual. 2020, 1–9 (2020)

Analysis of the Gentrification Phenomenon Using GIS to Support Local Government Decision Making

Boris Orellana-Alvear1 and Tania Calle-Jimenez2

1 Universidad de Cuenca, Cuenca, Ecuador
[email protected]
2 Escuela Politécnica Nacional, Quito, Ecuador
[email protected]

Abstract. Gentrification is a territorial process of dispossession that affects social dynamics and causes the displacement of the population. In Latin America, this process is driven by characteristics such as heritage, the economic and political model, social construction, low economic resources, culture and identity, among others. This paper presents an analysis of the gentrification phenomenon in the city of Cuenca displayed on maps. Quantitative data extracted from government webpages and qualitative data obtained from surveys were analyzed and visualized on maps generated with a Geographic Information System (GIS). The results could support the local government's decision-making process aimed at reducing this phenomenon in the historic center of the city of Cuenca. Keywords: Gentrification · Displacement · GIS · Cuenca · Ecuador · Decision making · Local government

1 Introduction

The implementation of government policies in the planning of urban regeneration projects, specifically in centers of historical heritage, attracts and privileges real estate capital. These projects transform the urban structure and affect social dynamics, causing the displacement of the population; this phenomenon is called gentrification [1]. The gentrification concept was introduced in the sixties by Ruth Glass [2]. Since then, the study of the displacement that this phenomenon produces has deepened and received the attention of researchers such as Michael Pacione, David Ley, Neil Smith, Tom Slater, Chris Hamnett and Peter Marcuse, and of Janoschka, Sequera and Salinas in Latin America. At present, research teams from universities such as Berkeley, California and Yale are analyzing qualitative and quantitative data to address this phenomenon. These data are compared with different methods to measure the gentrification phenomenon and displacement, such as Computer Assisted Qualitative Data Analysis (CAQDA), to support research through georeferenced data [3]. Although the causes and effects of gentrification respond to the economic and political model of a given context, this phenomenon constitutes a process that changes


in space and time. It can be characterized through an analysis of indicators that start from the reality of a region, which makes it possible to visualize the displacement process that contributes to the loss of territorialization. In the case of Latin American cities, it is relevant to extract, combine, compare and systematize indicators from methods proposed by authors such as Fouch for Asheville [4], Othman for Melaka [5], Chang for New York [6] and Mujahid for San Francisco [7], among others, and to apply them to Latin American cities with urban characteristics in which the phenomenon is present. This case study is located in the historic center of the city of Cuenca, an intermediate city whose geography is representative of the cultural heritage sites of humanity declared by UNESCO [8]. This research analyzes, through maps, the gentrification phenomenon with respect to displacement, focused on the centers of historical heritage in Latin America, with Cuenca as the particular case. This study will contribute to supporting decision making and the implementation of public policies framed in territorial planning.

2 Related Work

This research applies the gentrification concept of accumulation by dispossession of David Harvey [9]. For the gentrification analysis, the authors in [10] frame as territorial expulsion the politics exerted by the State as an entity that facilitates the privatization of public goods and the deprivation of the popular classes' right to the city. In [7], the authors studied gentrification and displacement in the San Francisco Bay Area and presented a comparison of measurement approaches. That research evaluated the Freeman method, the Landis method and the Urban Displacement Project; the results of censuses and surveys were analyzed, compared and finally modeled in GIS. In [11], the authors presented a model to identify gentrified areas with census data; the model concentrates on the analysis by areas and catalogues them into categories according to land use and conservation. In addition, [12] presented a geographical analysis of susceptibility to gentrification in the city of Asheville, indicating the strategy for the implementation of GIS with the respective units of analysis and indicators for the gentrification phenomenon. These papers informed the direction of the surveys used in this research, such as the nature of the questions and their systematization.

3 Method

The proposed method tries to simplify the objective of the analysis and to generalize ideas in order to obtain useful tools for spatial analysis regarding gentrification. Thus, the indicators present in several gentrification methods, according to the literature review, have been analyzed to make visible the processes of displacement by dispossession. The process taken as a reference was the following:


1. Description of the case study. Cuenca, in Ecuador, is characterized as an intermediate city, declared by UNESCO as Cultural Heritage of Humanity, an event that has emphasized the preservation of its historic center. However, although the built heritage has deserved special attention during two decades of this millennium, the social structure that results from this event has not been properly analyzed.
2. Applied surveys, which are considered fundamental due to their individual nature [8], as a method of analysis. These surveys were applied to the former population of the historic center that moved to the periphery for reasons beyond their control.
3. Analysis of gentrification variables, focused on the coincidence of the center of gravity generated by the displacement points with respect to the historic center. Because there is no predominance towards a single focus of city growth, gentrification is completely radial with respect to the city's own geometry, considering the elliptical main axis east - west.

3.1 Case Study

Latin America incorporates government policies protected by organizations that safeguard heritage, such as UNESCO. The central or sectional governments that offer social housing plans in the peripheral urban area favor migration and erode the right to the city, forcing an unsustainable supply of services. Increasing surplus value and reducing social construction instead of protecting it, as well as the declared public policies and action plans, represent an artifice that facilitates the depredation of the city by real estate capital.

Fig. 1. Study area: historic center (black) and urban area (gray)

Figure 1 shows the urban area. Cuenca covers approximately 70 km2; a circle of six kilometers of radius can be circumscribed around it geometrically, and the urban area has a radius of two kilometers. The historic center is the center of gravity of these circumferences. Cuenca, the third largest city in Ecuador, was declared Cultural Heritage of Humanity


in 1999. Cuenca has numerous historical buildings, and the city is the agricultural, economic and administrative center of the region. Cuenca is characterized by providing a high quality of life to its population and by good public services. The historic center has an area of approximately two square kilometers and a high degree of building conservation.

3.2 Survey Applied

215 actors were determined as the sample size to obtain reliable results. The surveys took place in chronological order. First, actors were identified who had lived in a central place in the city; those chosen were the ones who considered that they had to move their home to the periphery for reasons beyond their control. At this stage, the research group considered only cases that indicate displacement by dispossession. Second, the surveys were applied, with open questions about the reason for moving, the previous address and the current address. The research group analyzed the open questions and catalogued them as general characteristics. In addition, the addresses described in the surveys were stored and turned into UTM coordinates. According to [8], this has made it possible to group the displacements and also to measure their length. Third, the data were analyzed with GIS tools; this resource systematized the results, transferred them to a visual field for discussion and generated scenarios that identify the different causes of dispossession. In this way, the analysis helped to focus on the different areas of displacement.
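A minimal sketch of how the displacement lengths and the center of gravity of the destination points can be computed once the addresses have been converted to UTM coordinates (all coordinate values below are placeholders, not the survey data):

```python
import numpy as np

# Previous and current home locations as UTM (easting, northing) pairs in metres; placeholder values.
origins = np.array([[722500.0, 9679800.0], [722900.0, 9680100.0]])
destinations = np.array([[726100.0, 9682400.0], [719800.0, 9677500.0]])

# Length of each displacement in kilometres.
distances_km = np.linalg.norm(destinations - origins, axis=1) / 1000.0

# Center of gravity of the destination points, compared against the historic center.
centroid = destinations.mean(axis=0)
historic_centre = np.array([722700.0, 9679900.0])   # placeholder coordinate
offset_km = np.linalg.norm(centroid - historic_centre) / 1000.0

print(distances_km, centroid, offset_km)
```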

3.3 Result of Analysis of Gentrification Variables

According to the event grouping, the various causes have been grouped under the following reasons: a) comfort; b) economy; c) mobility; d) expulsion. As can be seen in Table 1, more than 50% of the cases refer to problems generated by lack of space, noise, insecurity and pollution.

Table 1. Qualitative surveys: displacement cause
Displacement cause | Distance (km) | Actors | Average (km)
Small houses | 515.85 | 67 | 7.70
Expensive rent | 167.31 | 47 | 3.56
Buy house | 371.96 | 16 | 23.25
Insecurity | 348.50 | 15 | 23.25
Studies | 251.06 | 11 | 22.82
Noise | 32.37 | 9 | 3.60
Shabby housing | 27.69 | 9 | 3.08
Lack privacy | 39.20 | 8 | 4.90
Work | 21.54 | 7 | 3.08
Vehicular congest | 10.68 | 4 | 2.67


Table 1. (continued)
Displacement cause | Distance (km) | Actors | Average (km)
No information | 9.30 | 3 | 3.10
Insecurity sensation | 7.03 | 3 | 2.34
Commercial locals | 8.85 | 2 | 4.43
No parking | 5.72 | 2 | 2.86
House sold | 4.95 | 2 | 2.47
Bad neighbor | 2.08 | 2 | 1.04
Remoteness | 6.27 | 1 | 6.27
No appropriation | 4.34 | 1 | 4.34
Pollution | 3.49 | 1 | 3.49
Heritage | 3.22 | 1 | 3.22
Eviction | 2.95 | 1 | 2.95
Strong rules | 2.15 | 1 | 2.15
Construct changes | 1.34 | 1 | 1.34
Expensive services | 0.17 | 1 | 0.17

It is important to relate the reasons to the distances. For example, small houses present not only the greatest number of cases but also the greatest sum of distances. In the same sense, insecurity has a large number of cases and an extremely high average distance, which shows that those actors moved as far away as possible. In the case of the economic reasons, expensive rent does not show a greater average distance from the first home, whereas buying a house shows the highest average in terms of displacement. Displacement for mobility reasons takes place towards not-so-peripheral areas; however, mobility to study centers in rural areas has prompted the relocation of 11 of the actors. Resistance to displacement is evidenced by the low distance averages in the surveys. The different reasons for displacement have been grouped into four main groups, as can be seen in Table 2. Thus, the conclusion is that comfort, in all its dimensions, is the predominant motive that leads to displacement; this is evidenced in the sum of distances and in the number of surveys.

Table 2. Consolidated principal displacement reasons
Motive of displacement | Total distance (km) | Actors | Average (km)
Comfort (blue) | 983 | 116 | 8.47
Economy (green) | 543 | 65 | 8.35
Mobility (orange) | 295 | 25 | 11.80
Expulsion (yellow) | 18 | 6 | 3.00
Unknown | 0 | 3 | 0
Total | 1839 | 215 |
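Consolidating the individual causes of Table 1 into the four motives of Table 2 is a simple grouping operation; a sketch with pandas, using a partial and purely illustrative cause-to-motive mapping:

```python
import pandas as pd

# Individual survey aggregates: cause, total displacement distance, actors (excerpt of Table 1).
records = pd.DataFrame({
    "cause": ["Small houses", "Expensive rent", "Buy house", "Studies", "Eviction"],
    "distance_km": [515.85, 167.31, 371.96, 251.06, 2.95],
    "actors": [67, 47, 16, 11, 1],
})

# Map each cause to one of the consolidated motives (partial mapping, for illustration only).
motive = {
    "Small houses": "Comfort", "Expensive rent": "Economy", "Buy house": "Economy",
    "Studies": "Mobility", "Eviction": "Expulsion",
}
records["motive"] = records["cause"].map(motive)

summary = records.groupby("motive").agg(
    total_distance_km=("distance_km", "sum"),
    actors=("actors", "sum"),
)
summary["average_km"] = summary["total_distance_km"] / summary["actors"]
print(summary)
```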


4 Conclusion In this context, this research analyzes the phenomena gentrification of the city of Cuenca. The main objective is to show through the maps that the phenomenon exists and that it is the cause of several effects and that the government must implement public policies to reduce this phenomenon (Fig. 2).

Fig. 2. Displacement: UTM coordinate display using GIS

The research focuses on problems of economic origin such as income, cost of land, high payments for services and similar elements. Almost a third of the surveys were represented in the GIS. Although the gentrification concept is still debated, this study allowed differentiating ergonomic and spatial characteristics. The actors interviewed mentioned the lack of space and that the local government has contributed little to generating areas of public use within the historic center so that the population can have a good lifestyle. In the surveys, actors also pointed to the feeling of confinement, the lack of facilities dedicated to recreation, the need for policies favorable to improving buildings, and the need to solve the inconveniences generated by noise and pollution. The local government plays an important role and must define the actions to be taken to avoid gentrification of the city's historic center. On the other hand, the use of GIS tools allowed identifying the real problem of the city. The theoretical discussion became concrete and specific thanks to the graphics generated by GIS. Although the empirical discussion faces problems of land use, the maps showed the real problem of gentrification. The results stored in the database have become completely objective, allowing the understanding of the phenomenon. The importance of the use of GIS lies in the elimination of the subjective nature of the surveys and the respective materialization of the phenomenon.


References 1. Janoshcka, M.: Gentrification-displacement-dispossession: key urban processes in American cities. Revista Invi 31(88), 27–71 (2016) 2. Glass, R.: Aspects of change. The gentrification debates: A reader, 19–30 (1964) 3. Reynolds, M.: A glamorous gentrification: public art and urban redevelopment in Hollywood. Calif. J. Urban Des. 17(1), 101–115 (2012) 4. Fouch, N.: Planning for gentrification: a geographic analysis of gentrification susceptibility in the city of Asheville, NC (2012) 5. Othman, R.N.R.: The impact of gentrification on local urban heritage identity in old quarter, Melaka heritage city. Plan. Malays. 15(2) (2017) 6. Chang, T.C.: New uses need old buildings: gentrification aesthetics and the arts in Singapore. Urban Stud. 53(3), 524–539 (2016) 7. Mujahid, M.S., Sohn, E.K., Izenberg, J., Gao, X., Tulier, M.E., Lee, M.M., Yen, I.H.: Gentrification and displacement in the san francisco bay area: a comparison of measurement approaches. Int. J. Environ. Res. Public Health 16(12), 2246 (2019) 8. UNESCO: Convention concerning the protection of the World Cultural and Natural Heriage: Adopted by the World Heritage Committee, Marrakesh-Morocco (1999) 9. Harvey, D.: David Harvey. Race, Poverty and the Environment (2009) 10. Paton, K.: Gentrification: A Working-Class Perspective. Routledge, Abingdon (2016) 11. Wachsmuth, D., Weisler, A.: Airbnb and the rent gap: Gentrification through the sharing economy. Environ. Plan. A Econ. Space 50(6), 1147–1170 (2018) 12. Hayes, M.: The coloniality of UNESCO’s heritage urban landscapes: heritage process and transnational gentrification in Cuenca, Ecuador. Urban Stud. (2020)

EMG Signal Interference Minimization Proposal Using a High Precision Multichannel Acquisition System and an Auto-Calibrated Adaptive Filtering Technique

Santiago Felipe Luna Romero and Luis Serpa-Andrade

Grupo de Investigacion en Inteligencia Artificial y Tecnologias de Asistencia GIIATa, Universidad Politecnica Salesiana Sede Cuenca, Cuenca, Ecuador
[email protected], [email protected]

Abstract. Within the biomedical scope, electromyography or EMG signals are known as a record of the electrical impulses produced by the human brain with the intention of activating the movement of the body's muscles. Usually, these signals are acquired using surface electrodes placed on the skin at the place where muscle activity is to be recorded. However, the acquisition of these signals is usually affected by unwanted interference or "noise" coming from different sources, such as the static produced by the contact of the skin with the electrodes and the electromagnetic interference from the system power supplies and from the environment where the measurements are made. This work proposes to use a high-resolution system for the acquisition of EMG signals; this high resolution involves a high sampling frequency, which in turn makes the system more vulnerable to the aforementioned interferences. As a solution, this work therefore proposes an auto-calibration system that allows the acquisition system to learn the interferences produced in the acquisition of the EMG signals before making the measurement, in order to try to eliminate them when the signals are subsequently acquired; the filtering technique was proposed in previous work. The proposed efficiency evaluation metric is the signal-to-noise ratio (SNR), compared before and after using the proposed auto-calibrated filtering system. This system allows acquiring an electromyography signal with a minimum noise level, which subsequently allows such signals to be used faithfully in feature extraction and EMG signal classification systems. Keywords: Signal processing · Filtering · Perturbation · EMG · Signal to noise ratio

1 Introduction

EMG signals are a record of the action potentials produced by the human nervous system during the process of muscle contraction and relaxation [1]. Commonly these signals are acquired by surface electrodes on the skin, in the place where the recording is intended [2], but this acquisition is always accompanied by electromagnetic noise, which can be produced by different sources such as the power supply of the


acquisition system, the measuring environment and, certainly, the electrostatic energy produced by the contact of a user's skin with the measuring electrodes [3, 4]. From a medical point of view, an EMG signal altered by noise could cause a diagnostic error regarding muscular behavior, while, from a scientific and technological point of view, with an altered EMG signal it cannot be guaranteed that the results obtained from the signal processing correspond to the measurement of the desired variables, which reduces the reliability of the results obtained in the research [5, 6]. In [7], a novel system is presented to minimize the electrostatic noise caused by the contact of the skin with the electrodes at the time of acquiring an EMG signal, applying a technique that the work calls "auto-calibration". This system basically tries to learn the physical conditions of any person before performing the electromyography measurement, in order to minimize the noise at the time of acquisition, and it works as follows: when a user is going to perform electromyography measurements, the system first asks the user not to move any muscle of the body; during that moment of absolute muscular rest, the system learns the user's physical conditions and then proceeds to minimize noise at the time of acquisition of the EMG signal. This technique has promising results, minimizing the noise in an EMG signal with high efficiency. The device used there to acquire the EMG signals was a commercial device called "MYO ARMBAND", which does not have particularly good resolution and fidelity, considering that its sampling rate is 200 Hz and its resolution is 8 bits. For this reason, this work proposes to use the filtering system proposed in [7], but with a "High Accuracy Multi-channel EMG Acquisition System" that ensures high resolution and fidelity of the EMG signal. This study is divided into four sections: Sect. 1 presents the introduction, Sect. 2 describes the technique used and the configuration of the adaptive filter, Sect. 3 details the results obtained, and Sect. 4 presents the discussion of the proposal and the conclusions.

2 Technique Description

The minimization proposal mentioned above is based on generating a signal that contains the information of the noise present in the acquisition of an EMG signal, and then, with this generated signal and the use of an adaptive filter, minimizing the interference at the time of acquisition. Thus, the problem lies in generating the signal that contains the noise information. For this purpose, this work proposes a self-calibration stage that consists of remaining in absolute rest for a period of time before performing the measurement; in this period of time the system acquires signals that can be classified as the noise that contaminates an EMG signal. This scheme can be observed in Fig. 1 [8].


Fig. 1. Adaptive filtering scheme. In the auto-calibration stage, the high-accuracy EMG acquisition system produces a noise reference; in the operating stage, the same acquisition system feeds the adaptive filtering block, which uses the noise reference to produce the filtered signal.

The auto-calibration process described above is carried out as follows: the signals obtained in the state of absolute rest are stored in a buffer, with the intention of characterizing the electrostatic noise produced by the contact of the skin with the electrodes and thus obtaining a noise reference. An important condition is that, if the buffer that stores the signal acquired in the auto-calibration stage reaches its last position, it restarts the count from the first sample, thus providing a permanent reference signal. As can be seen in Fig. 1, the adaptive filter models the relationship between the noise reference signal acquired in the auto-calibration stage and the actual noise present at the moment of acquisition of the EMG signal. This is possible through the use of recursive optimization algorithms; in [7] it is shown that the RLS algorithm is one of the best in terms of its low computational cost and efficiency.
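A minimal sketch of the looping noise-reference buffer described above (buffer length and signals are illustrative; the paper does not publish its implementation):

```python
import numpy as np

class NoiseReference:
    """Stores the resting-state recording and replays it as a looping noise reference."""

    def __init__(self, calibration_signal):
        self.buffer = np.asarray(calibration_signal, dtype=float)
        self.index = 0

    def next_block(self, n):
        # Wrap around to the first sample when the end of the buffer is reached.
        idx = (self.index + np.arange(n)) % len(self.buffer)
        self.index = (self.index + n) % len(self.buffer)
        return self.buffer[idx]

# Usage: record a few seconds at rest, then read reference blocks during the measurement.
rest_recording = np.random.randn(48000)       # placeholder for the auto-calibration recording
reference = NoiseReference(rest_recording)
block = reference.next_block(1024)
```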

2.1 High Accuracy Multi-channel EMG Acquisition System

For this work, the "High Accuracy Multi-channel EMG Acquisition System" is the commercial device BIOPAC MP45. The sampling rate of the device is 48000 Hz and its resolution is 16 bits, which guarantees high fidelity and resolution for bioelectric signals such as EMG signals.

2.2 RLS Optimizer

The recursive least squares (RLS) filter can be applied to both stationary and non-stationary signals [9]. Its application is especially useful for non-stationary signals, because the optimizer tracks the statistical variations over time of the signal to be filtered in relation to the noise reference signal, its advantage being the rapid convergence that reduces the error [10].


The convergence of the RLS filter is controlled by two parameters: the forgetting factor (λ) and the filter order. Regarding the order of the filter, the procedure described in [11] was followed, which consists of varying the order of the optimizer and observing how the signal-to-noise ratio (SNR) changes; the filter order is then selected according to the highest SNR value. In relation to the forgetting factor (λ), it is necessary to analyze the variation of the signal-to-noise ratio, as well as the number of delays. After this analysis the best parameters were defined, and they can be observed in Table 1.

Table 1. RLS optimizer parameters
Parameter | Value
Forgetting factor | 25
Filter order | 26
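A minimal sketch of RLS adaptive noise cancellation consistent with the description above, using the filter order from Table 1; since a forgetting factor of 25 lies outside the usual 0-1 range and appears to be a transcription issue, a conventional value close to 1 is assumed here:

```python
import numpy as np

def rls_cancel(primary, reference, order=26, lam=0.999, delta=0.01):
    """Remove the component of `primary` correlated with `reference` using the RLS algorithm."""
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)   # must be at least as long as `primary`
    w = np.zeros(order)                  # adaptive filter weights
    P = np.eye(order) / delta            # inverse correlation matrix, delta is a small constant
    x = np.zeros(order)                  # sliding window of reference samples
    cleaned = np.zeros_like(primary)
    for n in range(len(primary)):
        x = np.roll(x, 1)
        x[0] = reference[n]
        y = w @ x                        # estimated noise at sample n
        e = primary[n] - y               # error signal = EMG estimate
        k = (P @ x) / (lam + x @ P @ x)  # gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        cleaned[n] = e
    return cleaned

# Usage sketch: cleaned = rls_cancel(emg_with_noise, noise_reference)
```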

3 Results

In the experimentation phase, the patient was first asked to keep the arm completely at rest so that the auto-calibration technique could be performed, and then he was asked to perform three muscle contractions and relaxations, in order to check whether the system filters the EMG signal correctly. Figure 2 shows the result of applying the filtering to the input signal: the upper trace shows the original EMG signal without any type of filter, and the lower trace shows the same signal after applying the filtering.

Fig. 2. Filtering process with autocalibration, Up: unfiltered signal, Down: filtered signal


At first sight a visual improvement between both signals can be observed. Figure 3 shows a muscle contraction process in more detail: the upper part shows the signal without any filtering, in which a large amount of noise can easily be identified; below, the same signal is presented after applying the filtering, where the interference has been significantly reduced and the muscle contraction is more easily identifiable. To quantify this result, the signal-to-noise ratio (SNR) of the signal was obtained; Table 2 indicates the SNR before and after filtering.

Fig. 3. Muscle contraction process. Up: unfiltered signal, Down: filtered signal.

Table 2. Signal-to-noise ratio
  Signal            SNR
  Before filtering  2.7057
  After filtering   35.1821

With these values it is possible to calculate the relative improvement of the filtered signal with respect to the original signal, as a metric of the efficiency of the system: (35.1821 − 2.7057)/35.1821 ≈ 92.31%.

4 Conclusions

In this work, a novel methodology was used to minimize the noise caused by the skin contact with the electrodes in the acquisition of EMG signals, by using a "High Accuracy Multi-Channel EMG Acquisition System" that provides high resolution and fidelity of the acquired signal. This technique, through its stage called "auto-calibration", minimizes the noise for anyone who is going to perform the electromyography measurement. In this work we used a high-end device known as "BIOPAC", which acquires bioelectric signals such as EMG signals at a high sampling rate and


resolution, thus guaranteeing that no information is lost when acquiring an EMG signal. With the application of the proposed technique, it has been possible to obtain an EMG signal with a minimal noise level, corroborating the results of [7]. This minimization can also benefit technology systems that use feature extraction and EMG signal classification.

References 1. NoraLi, A.N.: Human breathing classification using electromyography signal with features based on mel-frequency cepstral coefficients. Int. J. Integr. Eng. 9(4) (2017). https:// publisher.uthm.edu.my/ojs/index.php/ijie/article/view/2019/1225. Accessed 20 Sep 2019 2. Luna-Romero, S., Delgado-Espinoza, P., Rivera-Calle, F., Serpa-Andrade, L.: A domotics control tool based on MYO devices and neural networks. In: Advances in Intelligent Systems and Computing, vol. 590, pp. 540–548 (2018) 3. Meireles, A, Figueiredo, L., Lopes, L.S., Almeida, A.: ECG denoising with adaptive filter and singular value decomposition techniques. In: ACM International Conference Proceeding Series, 20–22 July 2016, pp. 102–105 (2016) 4. Wu, Y., Rangayyan, R.M., Zhou, Y., Ng, S.C.: Filtering electrocardiographic signals using an unbiased and normalized adaptive noise reduction system. Med. Eng. Phys. 31(1), 17–26 (2009) 5. Navarro, X., Porée, F., Carrault, G.: ECG removal in preterm EEG combining empirical mode decomposition and adaptive filtering. In: IEEE International Conference on Acoustics, Speech and Signal Processing – Proceedings, ICASSP, pp. 661–664 (2012) 6. Poungponsri, S., Yu, X.H.: An adaptive filtering approach for electrocardiogram (ECG) signal noise reduction using neural networks. Neurocomputing 117, 206–213 (2013) 7. Palacios, C.S., Romero, S.L.: Automatic calibration in adaptive filters to EMG signals processing. RIAI – Rev. Iberoam. Autom. e Inf. Ind. 16(2), 232–237 (2019) 8. Cruz, P.P.: Inteligencia artificial con aplicaciones a la ingeniería. Alfaomega (2011) 9. Vaseghi, S.V.: Advanced Digital Signal Processing and Noise Reduction. Wiley, Hoboken (2008) 10. Karabo, N.: A novel and efficient algorithm for adaptive filtering: artificial bee colony algorithm. Comput. Sci. 19(1), 175–190 (2011) 11. Golabbakhsh, M., Masoumzadeh, M., Sabahi, M.F.: ECG and power line noise removal from respiratory EMG signal using adaptive filters. Majlesi J. Electr. Eng. 5(4) (2011)

Decision Model for QoS in Networking Based on Hierarchical Aggregation of Information

Maikel Yelandi Leyva Vazquez(&), Miguel Angel Quiroz Martinez, Josseline Haylis Diaz Sanchez, and Jorge Luis Aguilera Balseca

Computer Science Department, Universidad Politécnica Salesiana, Guayaquil, Ecuador
{jdiazs2,jaguilerab}@est.ups.edu.ec

Abstract. One of the main objectives for Quality of Service (QoS) processes in computer networks is to establish the priority of the network services used within an organization in a given situation. Given the anticipation of a new scenario, an evaluation group must make an assessment in order to establish the set of network service priorities according to their level of importance for the new scenario. The process of assigning priorities to network services generally involves subjectivity and uncertainty, so the use of linguistic labels is appropriate for their assessment. A group of aggregation operators have been proposed in the literature. The existing models don't provide enough flexibility and adaptability to the specific contexts of the organizations. This paper proposes a decision-making method that makes use of the aggregation operators in a hierarchical way using the model of the 2-linguistic tuples to prioritize services. The proposal allows the inclusion of aspects such as the criteria importance and simultaneity. A case study shows the applicability of the proposal. The article ends with proposals for future work that contribute to the increase of the applicability of the method.

Keywords: Quality of Service · Aggregation operators · WPM · 2-tuple linguistic model

1 Introduction

One of the main objectives for Quality of Service (QoS) processes in computer networks is to establish the priority of the network services used within an organization [1]. Given the anticipation of a new scenario, an evaluation group must make an assessment in order to establish the set of network service priorities according to their level of importance for the new scenario. The process of assigning priorities to network services generally involves subjectivity and uncertainty, so the use of linguistic labels is appropriate for their assessment. The computational model of the 2-tuple linguistic representation has been successfully applied to decision-making problems [1, 2]. Due to its characteristics and its ease of use, the authors of this research consider this model suitable for handling linguistic information for decision-making. The objective of this work is to present a method for the prioritization of services. For this purpose, we used the linguistic weighted power means


operator. The article is structured as follows: A state of the art of information aggregation is presented in Sect. 2; Sect. 3 proposes the linguistic weighted power means (LWPM) for the prioritization of services and Sect. 4 describes a case study. The article ends with conclusions and future work.

2 Fuzzy Linguistic Approach

Computing with Words is a methodology that allows performing computation and reasoning processes using words belonging to a language instead of numbers. The use of language allows the creation and enhancement of decision-making models in which vague and inaccurate information [2] can be represented through linguistic variables. Within the proposed models, the representation of linguistic 2-tuples [3] allows computing-with-words processes without loss of information, using the concept of symbolic translation.

Let $S = \{s_0, s_1, \ldots, s_g\}$ be a set of linguistic terms and $\beta \in [0, g]$ a value in the granularity range of S. The symbolic translation of a linguistic term $s_i$ is a number in the interval [−0.5, 0.5) that expresses the difference between the amount of information expressed by the value $\beta \in [0, g]$, obtained in a symbolic operation, and the nearest integer value $i \in \{0, \ldots, g\}$, which indicates the index of the closest linguistic label ($s_i$) in S [4].

Based on the previous concept, a new model of representation of linguistic information is developed, which makes use of a pair of values (2-tuples). This representation model defines a set of functions that facilitate operations on these 2-tuples. Let $S = \{s_0, s_1, \ldots, s_g\}$ be a set of linguistic terms and $\beta \in [0, g]$ a value that represents the result of a symbolic operation; then the linguistic 2-tuple that expresses the information equivalent to $\beta$ is obtained using the function:

$\Delta : [0, g] \to S \times [-0.5, 0.5)$, $\Delta(\beta) = (s_i, \alpha)$, with $i = \mathrm{round}(\beta)$ and $\alpha = \beta - i$, $\alpha \in [-0.5, 0.5)$  (1)

where round is the usual rounding operator, $s_i$ is the label with index closest to $\beta$, and $\alpha$ is the symbolic translation value [4]. It should be noted that $\Delta^{-1} : \langle S \rangle \to [0, g]$ is defined as $\Delta^{-1}(s_i, \alpha) = i + \alpha$. Thus, a linguistic 2-tuple in $\langle S \rangle$ is identified with its numerical value in [0, g].
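As a minimal illustration, Eq. (1) and its inverse can be transcribed directly as code; the function names below are ours, not part of the original model.

```python
def delta(beta):
    """Eq. (1): map a value beta in [0, g] to its 2-tuple (label index, alpha)."""
    i = int(beta + 0.5)        # index of the closest linguistic label s_i
    return i, beta - i         # symbolic translation alpha in [-0.5, 0.5)

def delta_inv(i, alpha):
    """Recover the numerical value in [0, g] from a 2-tuple (s_i, alpha)."""
    return i + alpha

# e.g. delta(2.723) -> (3, -0.277), i.e. the 2-tuple (s3, -0.277)
```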

2.1 Weighted Power Means

Information aggregation is the process of combining different data providing a single output. Aggregation operators are a type of mathematical function used for information fusion. They combine n values in a domain D and return a value in that same domain [5].


Aggregation operators have multiple applications in several areas [6]. In the decision-making process, their fundamental role lies in the evaluation and construction of alternatives [5]. Each family of operators has specific characteristics that allow them to model certain situations. The weighted average (WA) makes it possible to assign a weight to the sources of information, which allows its use to represent reliability or importance/preference. On the other hand, the family of ordered weighted averaging (OWA) operators [7] makes it possible to compensate or give weight to the data depending on their values. Fuzzy integrals [8] allow modeling redundancy, complementarity and interactions between criteria. However, these operators are not adequate to describe the properties expressed in human reasoning [9]. The weighted power means (WPM) aggregation operator allows expressing the degree of simultaneity and the relative importance of the inputs (weights). Additionally, it enables the construction of hierarchical aggregation models [10]. The r-th WPM is defined as follows:

$M_n^{[r]}(a, w) = \left( \sum_{i=1}^{n} a_i^r\, w_i \right)^{1/r}$  (2)

where $w_i \in [0, 1]$, $\sum_{i=1}^{n} w_i = 1$, and r can be selected to achieve desired logical properties. For the determination of the weights corresponding to each characteristic and sub-characteristic, it is possible to use AHP [11]. The WPM used within the hierarchical aggregation model is called the logic scoring of preference (LSP) model [10]. One of the main strengths of the LSP model is that it can model diverse logical relationships between attributes and sub-characteristics so that they reflect the needs of the different participants in the evaluation process. The decision-maker can use two parameters in the aggregation process [10]:

• Degree of simultaneity (andness).
• Relative importance of the input (weights).

An interesting aspect of this operator is that it allows aggregating information while determining which elements are mandatory and which are optional [12]. All the elements previously discussed allow, in the authors' opinion, a more realistic reflection of the prioritization of services.
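A compact sketch of Eq. (2) follows; the choice of r in the usage line is only illustrative of how smaller exponents increase simultaneity (andness).

```python
import numpy as np

def wpm(a, w, r):
    """Eq. (2): weighted power mean of inputs a with weights w summing to 1 (r != 0)."""
    a = np.asarray(a, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.sum(w * a ** r) ** (1.0 / r))

# r far below 1 behaves like a conjunction (min), r = 1 is the weighted average,
# and large positive r behaves like a disjunction (max).
print(wpm([3.0, 2.0], [0.5, 0.5], r=-2))   # stronger simultaneity than the mean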

3 QoS

Quality of service (QoS) [13] is defined as the ability of a network to provide different levels of service to different types of traffic. By having QoS it is possible to ensure the correct delivery of the information, giving preference to critical performance applications, where network resources are shared simultaneously with other non-critical applications. QoS makes a difference by providing efficient use of resources in case of congestion on the network, selecting a specific traffic from it, prioritizing it according to its relative importance, and using congestion control and evasion methods to give them preferential treatment.

3.1 QoS Levels

There are three levels of service: best effort, differentiated service, and guaranteed service.

1. Best effort: the network makes every effort to deliver the packet to its destination, but there is no guarantee that this will happen. This is the model used by FTP and HTTP applications [14].
2. Integrated services: the Integrated Services model provides applications with a guaranteed level of service by negotiating end-to-end network parameters. The application requests the necessary level of service in order to operate properly, and, based on the QoS, the necessary network resources are reserved before the application begins to operate.
3. Differentiated services: this includes a set of classification tools and queuing mechanisms that provide certain applications or protocols with certain priorities over the rest of the network traffic.

Congestion management is a term that covers different types of queuing strategies to handle situations where the demand for application bandwidth exceeds the total bandwidth that the network can provide. FIFO is the simplest type of queuing; it consists of a simple buffer that retains outgoing packets until the transmission interface can send them. Packets are sent out of the interface in the same order in which they reached the buffer [15, 16].
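For reference, the FIFO discipline described above can be illustrated with a minimal sketch; the class and method names are placeholders, not part of any cited implementation.

```python
from collections import deque

class FifoQueue:
    """Simplest queuing discipline: packets leave in the order they arrived."""

    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def enqueue(self, packet):
        if len(self.buffer) >= self.buffer_size:
            return False              # tail drop when the buffer is full
        self.buffer.append(packet)
        return True

    def transmit(self):
        return self.buffer.popleft() if self.buffer else None
```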

4 Proposed Model

In this section, we present the linguistic evaluation model for QoS networks based on the hierarchical aggregation of information. Among the activities included in the proposed method are: the selection of criteria and services, obtaining information, and aggregation. Below, the activities contained in the workflow are graphically presented (Fig. 1) and each one of them is described.

Fig. 1. Workflow activities for the prioritization of services: establishment of the evaluation framework, obtaining information, and aggregation.


• Establishment of the prioritization framework: the services and experts to be evaluated are selected. Let $Se = \{se_1, se_2, \ldots, se_k\}$ with $k \geq 2$ be the services to be evaluated and $E = \{e_1, e_2, \ldots, e_j\}$ with $j \geq 2$ the experts.
• Obtaining information: information is obtained based on the preferences of the decision-makers. This information represents the assessment of each service with respect to the criteria. The utility vector [17] is represented as $V_j = (v_{j1}, v_{j2}, \ldots, v_{jn})$, where $v_{jk}$ is the preference of expert $E_j$ in relation to service $v_k$.
• Aggregation: the aggregation function $OAG : [0, 1]^n \to [0, 1]$ is obtained through a hierarchical aggregation process. The logic scoring of preference (LSP) model is used [10] because it fits the service prioritization process more realistically. The use of aggregation operators in a hierarchical way gives flexibility to the method. The possibility of directly obtaining the preferences of the decision-maker and expressing them in the weight vectors is another of its strengths. For aggregation, the linguistic weighted power means (LWPM) operator defined in [18] is used. Let $X = \{(s_1, \alpha_1), \ldots, (s_n, \alpha_n)\}$ be a set of linguistic 2-tuples; the r-th linguistic WPM (LWPM) is defined as follows:

$M_n^{[r]}((s_1, \alpha_1), \ldots, (s_n, \alpha_n)) = \left( \sum_{i=1}^{n} \beta_i^r\, w_i \right)^{1/r}$  (3)

where $\beta_i = \Delta^{-1}(s_i, \alpha_i)$, $w_i \in [0, 1]$, $\sum_{i=1}^{n} w_i = 1$, and r can be selected to achieve the desired logical properties [10].
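A small self-contained sketch of Eq. (3) is shown below. The conversion of the aggregate back into a 2-tuple is our reading of how the case study reports both numeric values and 2-tuples; the example inputs and the exponent r = 0.5 are arbitrary.

```python
import numpy as np

def lwpm(tuples, w, r):
    """Eq. (3): linguistic WPM over 2-tuples (label_index, alpha), with r != 0.

    Each beta_i = index_i + alpha_i (the Delta^-1 of Sect. 2); the aggregate is
    returned both as a numeric value and as a 2-tuple.
    """
    betas = np.array([i + a for i, a in tuples], dtype=float)
    w = np.asarray(w, dtype=float)
    agg = float(np.sum(w * betas ** r) ** (1.0 / r))
    idx = int(agg + 0.5)                      # nearest linguistic label index
    return agg, (idx, round(agg - idx, 3))

# Two opinions s3 and s2, weights 0.4/0.6, mild simultaneity (r = 0.5):
print(lwpm([(3, 0.0), (2, 0.0)], [0.4, 0.6], 0.5))
```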

5 Case Study

The following is a case study with the fundamental purpose of showing the applicability of the proposal. This section presents an illustrative example of the linguistic evaluation model of QoS network based on the hierarchical aggregation of linguistic information. Attributes will be valued based on the following linguistic scale (Fig. 2):

Fig. 2. Set of labels used.


Let the expert selection set be $E = \{e_1, e_2, \ldots, e_6\}$, with the first two experts $\{e_1, e_2\}$ from department A and the rest from department B. The service options to be evaluated are determined (Table 1):

se1 = Ticket Table System
se2 = Purchasing and Heritage System
se3 = Electronic Government

Subsequently, the assessment is made for each service with respect to the selected criteria (Table 2).

Table 1. Valuation of services
            Department A    Department B
  Service   e1    e2        e3    e4    e5    e6
  se1       s3    s3        s3    s1    s2    s1
  se2       s2    s2        s1    s3    s2    s3
  se3       s2    s3        s3    s2    s1    s3

The hierarchical aggregation structure obtained is shown below. Aggregation operators that reflect simultaneity, as established by LSP [19, 20], were employed.

Table 2. Aggregation structure
  Initial entries  Weight  Operator  Block ID        Weight  Operator  Block ID
  e1               0.4     C         Department A    0.7     C-        Global priority
  e2               0.6
  e3               0.2     D-        Department B    0.3
  e4               0.1
  e5               0.3
  e6               0.4

The results of the aggregation of the criteria allow sorting the services. In this case, the order of priority is as follows: se1 ≻ se3 ≻ se2, according to the aggregation results table (Table 3).

Table 3. Results of aggregation
  Service  AG     2-tuple
  se1      2.723  (s3, −0.277)
  se2      2.119  (s2, 0.119)
  se3      2.590  (s2, −0.41)


Among the advantages remarked by specialists, we may find: the relative ease of the technique and the high flexibility offered by the use of this aggregation model. The results also show the applicability of decision support models based on the aggregation of information.

6 Conclusions

In this contribution, we have proposed a new linguistic evaluation model for QoS networks based on the hierarchical aggregation of information. The novelty of this model lies in its flexibility. In the current article, we presented a procedure for decision-making support based on the use of aggregation operators to prioritize services in a network. For the hierarchical aggregation, the WPM operator adapted to linguistic 2-tuples (LWPM) was used. Among the activities included in the method are the following: the selection of the criteria, obtaining information about the preferences of the decision-makers, and finally the aggregation of the standardized values of the preferences. Among the main advantages of the method is the possibility of modeling the importance of the criteria and the compensation while maintaining the interpretability of the models. As future work, an approach with multiple scenarios is outlined. The development of a computer system that supports this model constitutes another area of future work.

Acknowledgments. The authors want to thank the Grupo de Investigación en Inteligencia Artificial y Reconocimiento Facial (GIIAR) and the Universidad Politécnica Salesiana for supporting this research.

References 1. Gramajo, S., Martínez, L.: QoS linguistic model for network services. Red. 17, 18 (2010) 2. Herrera, F., et al.: Computing with words in decision making: foundations, trends and prospects. Fuzzy Optim. Decis. Making 8(4), 337–364 (2009) 3. Dutta, B., Guha, D., Mesiar, R.: A model based on linguistic 2-tuples for dealing with heterogeneous relationship among attributes in multi-expert decision making. Fuzzy Systems, IEEE Transactions on 23(5), 1817–1831 (2015) 4. Herrera, F., Martínez, L.: A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Trans. Fuzzy Syst. 8(6), 746–752 (2000) 5. Torra, V, Narukawa, Y.: Modeling Decisions: Information Fusion and Aggregation Operators. Springer, Heidelberg (2007) 6. Beliakov, G., Pradera, A., Calvo, T.: Aggregation Functions: A Guide for Practitioners. Springer, Heidelberg (2007) 7. Yager, R.R.: On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Trans. Syst. Man Cybern. 18(1), 183–190 (1988) 8. Arenas-Díaz, G.: Medidas difusas e integrales difusas. Universitas Scientiarum 18(1), 7–32 (2013) 9. Dujmović, J.J.: Continuous preference logic for system evaluation. IEEE Trans. Fuzzy Syst. 15(6), 1082–1099 (2007)


10. Dujmović, J.J., Nagashima, H.: LSP method and its use for evaluation of Java IDEs. Int. J. Approx. Reasoning 41(1), 3–22 (2006) 11. Saaty, T.L.: What is the analytic hierarchy process? In: Mathematical Models For Decision Support, pp. 109–121. Springer, Heidelberg (1988) 12. Nogués, J.B.: Semantic recommender systems. provision of personalised information. PhD thesis, Universitat Rovira i Virgili About Tourist Activities, in Department of Computer Science and Mathematics, Universitat Rovira i Virgili (2015) 13. Chen, J., et al.: QoS-driven efficient client association in high-density software-defined WLAN. IEEE Trans. Veh. Technol. 66(8), 7372–7383 (2017) 14. Wydrowski, B., Zukerman, M.: QoS in best-effort networks. IEEE Commun. Mag. 40(12), 44–49 (2002) 15. Lo, M.: Software defined fifo buffer for multithreaded access. Google Patents (2017) 16. Pérez-Martın, J.: Colas de Prioridad (2019). https://www.academia.edu/36996879/Colas_ de_Prioridad 17. Espinilla, M., et al.: A 360-degree performance appraisal model dealing with heterogeneous information and dependent criteria. Inf. Sci. 22, 459–471 (2012) 18. Al-Subhi, S.H.S., et al.: Operador media potencia pesada lingüística y su aplicación en la toma de decisiones. Int. J. Innov. Appl. Stud. 22(1), 38–43 (2017) 19. Gyorgy, T., Suciu, G., Militaru, T.L.: Classification of on-line students using an expert system over open source, distributed cloud computing system. In: The International Scientific Conference eLearning and Software for Education.. “ Carol I” National Defence University (2014) 20. Tapia-Rosero, A., et al.: Fusion of preferences from different perspectives in a decisionmaking context. Inf. Fus. 29, 120–131 (2016)

Nondeterministic Finite Automata for Modeling an Ecuadorian Sign Language Interpreter

Jose Guerra1, Diego Vallejo-Huanga2(&), Nathaly Jaramillo1, Richard Macas1, and Daniel Díaz3

1 Department of Computer Science, Universidad Politécnica Salesiana, Quito, Ecuador
{jguerrac1,njaramilloa,rmacasn}@est.ups.edu.ec
2 IDEIAGEOCA Research Group, Universidad Politécnica Salesiana, Quito, Ecuador
[email protected]
3 Systems Engineering Department, Universidad Politécnica Salesiana, Quito, Ecuador
[email protected]

Abstract. This paper presents a stand-alone application for text interpretation to Ecuadorian Sign Language (LSEC), modeled by a nondeterministic finite automaton (NFA), lexical analysis and regular expressions. This application will allow you to interpret a text entered by a user and present it through GIF graphic resources, thus allowing you to establish unidirectional communication between a hearing person and a person with hearing impairment. Our application is developed under MVC architecture. The state machine and lexical analyzer are in the model layer, which will be handled by the controller layer and that will allow the user to receive the inputs and send the outputs through the view layer. We evaluate the effectiveness of the tool using an LSEC dictionary with a total of 275 words and idioms, for different users with hearing impairment, and the results showed that our application is robust and fast.

Keywords: Lexical analysis · Hearing disability · Automata theory · Regular expressions

1 Introduction

According to the World Health Organization (WHO), 466 million people suffer from hearing loss in the world and the World Federation of the Deaf (WFD) estimates that 70 million people use sign language as their primary communication system [1]. Sign language is a natural language of expression and configuration of spatial gestures and visual perception, thanks to which people with this disability can establish a communication channel with their social environment. Although there is an international standard for sign language, there are variations for each particular country. According to the WFD, there are more than 300 sign languages and all countries have a particular variation [2].


The National Council for Equal Disabilities of Ecuador (CONADIS) estimates that there are 65821 people registered in Ecuador who have some type of hearing impairment [3]. In Ecuador, the official language is Spanish spoken by 99% of the population, along with thirteen other recognized indigenous languages, including Quichua and Shuar. Ecuador is a multicultural and multiethnic country and it is very common to append to the language: phrasal idioms, words of Quichua origin and some deformations of Spanish. The sign language used in Ecuador is called the Ecuadorian Sign Language (LSEC) [4]. Although the LSEC is officially recognized by the government, there is no broad dissemination system that allows reaching different communities. In a generic way, the population that knows this form of communication is the one that somehow relates to the deaf community, whether by family, friendship or professional ties. In 1998, the fifty-third article of the Constitution of Ecuador, officially recognized the right to communication, with alternative forms (LSEC, oralism, braille system, etc.), for people with hearing disabilities. In 2008, with the issuance of the new constitution, article 53 was modified by article 47, which continues to guarantee the rights of persons with disabilities, seeks equal opportunities and their social integration [5]. Any communication method takes into account the language and the particularities of the country where the communication is established, and this is also extrapolable for sign language. Since Ecuador has its own sign language and the constitution of the republic guarantees this right for all citizens regardless of their abilities, it is essential to implement some communication mechanism through digital media, which supports compliance with this right. Ergo, in this work an Ecuadorian sign language interpreter will be developed modeled by lexical analysis and Nondeterministic Finite Automata (NFA) [6], as a support tool for people with hearing impairment. The tool was implemented using free software tools and is housed in an open source repository: https://github.com/dievalhu/LSEC_Ecuadorian_Sign_Language. For the modeling an NFA was used, since the transition from a state can be to multiple next states for each input symbol, which assigns the model its nondeterministic characteristic. In addition, due to the nature of the translation in the Spanish language, it is necessary that the NFA permits empty string transitions. Finally, our application as a stand-alone requires less space, which is an intrinsic feature of the NFA [7]. Although there are several works that have represented the LSEC with different approaches based on software and hardware [8–11], in no case has modeling been carried out using NFA. In this article, NFA are used as the nucleus of the unidirectional translation process of a text in Spanish to the Ecuadorian sign language.

2 Materials and Methods

2.1 Architecture and Interpreter Modeling

Our sign language interpreter model was developed using a programming language with an object-oriented paradigm. The programming language chosen was JAVA [12], which is a general-purpose, concurrent language with few implementation


dependencies. Java allows greater flexibility in the management of nondeterministic finite states; thus, each state is treated as an independent object or entity, which can process an input and return an output. The developed software uses regular expressions [13], which define the accepted elements within the language, and nondeterministic automata to establish which input strings are valid. The strings define the correct structure from which tokens are generated later. A token is an element derived from a regular expression, which has a generated name and provides information that must be looked up in a database [14]. On the other hand, for systematic data storage and its subsequent use, a relational database was created in PostgreSQL. In this way persistence of information is achieved for words, letters and numbers, which will later be associated with a video clip. All video clips that have been compiled for the interpreter's general dictionary have been converted into multimedia content in GIF format. GIFs contain the information of a word, letter or number translated to the LSEC. It is necessary to clarify that the generated GIF multimedia content is not stored in the database, i.e., all this data is stored in the client running the software. Figure 1 shows the general scheme of operation of the interpreter to LSEC modeled by means of a nondeterministic finite automaton.

Fig. 1. Interpreter to LSEC represented by layers in an MVC architecture

The software architectural style of our LSEC interpreter was designed under the Model-View-Controller (MVC) architecture [15]. Model layer contains the information and structure referring to the nondeterministic finite states, tokens and a lexer. This last


element was implemented to handle the communication of the states, the generation of the tokens, and the retrieval of database information. In the programming language, the Model layer is treated as objects that represent states and tokens, respectively. The View generates the graphical user interface, which consists of a main screen that allows the user to interact with the software and secondary configuration screens for the connection to the database and the manipulation of the persisted information. Finally, the Controllers are responsible for managing the flow handled in the models; therefore, they execute all the actions and interact with the information that these models hold. The software, modeled as a nondeterministic finite automaton, is simulated through a three-state machine that is responsible for validating the inputs provided by the user. When the state machine receives an input text string, which corresponds to the text that the user wishes to translate, the lexer reads the string and converts it into an array of characters; then, through a cycle, this array is traversed and its first element goes to state Q0. State Q0 is responsible for verifying that the input string begins with a letter or numeric digit, ensuring that the input does not contain special characters except for á, é, í, ó, ú and ñ, which belong to the Spanish alphabet (A–Z, a–z). If state Q0 is valid, the next element of the character array is validated by state Q1, which verifies that letters or numeric digits (0–9) are being entered and that no special characters other than those mentioned above appear. Another element accepted by state Q1 is the empty string, denoted by the lambda expression (λ), which works as a delimiter that marks the end of state Q1 and therefore allows passing to state Q2, the acceptance state. Upon reaching the acceptance state, the interpreter model generates a token composed of the union of all the characters validated by the state machine; these tokens form the sentences that are translated to LSEC. When, in any state Q0, Q1 or Q2, the input characters cannot be validated as accepted by the language, an exception is generated that indicates where the error occurred and which character is invalid, and the translation is immediately cancelled. On the other hand, if the entire input chain has been validated and all tokens have been generated, the automaton checks whether each token is registered in the database in order to extract the information that allows the GIF graphic resource associated with each token to be recovered locally. Finally, when the GIF graphic resource of each generated token has been recovered, the result of the translation into sign language is shown on the user's main screen. The state diagram of the nondeterministic finite automaton is represented by the digraph of Fig. 2.

Fig. 2. State machine diagram for the validation of Spanish language characters.
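The paper's implementation is in Java; the following Python sketch is only an illustration of the Q0/Q1/Q2 validation logic described above and is not the authors' code.

```python
ACCEPTED = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789"
               "áéíóúÁÉÍÓÚñÑ")

def validate(token):
    """Q0 checks the first character, Q1 loops over the rest, lambda leads to Q2."""
    state = "Q0"
    for ch in token:
        if ch not in ACCEPTED:
            raise ValueError(f"invalid character {ch!r} in token {token!r}")
        state = "Q1"
    if state != "Q1":
        raise ValueError("empty token is not accepted")
    return "Q2"                       # acceptance state

def tokenize(sentence):
    """Validate every whitespace-separated token before the GIF lookup step."""
    return [tok for tok in sentence.split() if validate(tok) == "Q2"]
```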


So, our NFA represented as a quintuple is $NFA = \{Q, \Sigma, \delta, Q_0, Q_2\}$, where Q is the finite set of three states, $\Sigma$ is the finite input alphabet (A–Z, a–z, 0–9), $\delta$ is the transition function over $Q \times \Sigma$, defined for our automaton by Eq. (1), $Q_0$ is the initial state, $Q_0 \in Q$, and $Q_2$ is the unique final state, $Q_2 \in Q$.

$\delta(Q_0, [A\text{–}Z\ a\text{–}z\ 0\text{–}9]) = Q_1$; $\delta(Q_1, [A\text{–}Z\ a\text{–}z\ 0\text{–}9]) = Q_1$; $\delta(Q_1, \lambda) = Q_2$  (1)

2.2 Graphical Interface

The graphic interface allows the interaction between the user and the model state machine. The stand-alone application has a total of five screens: one main screen, two screens for resource management and two screens for configurations. The first screen allows the user to execute the main function of the software, i.e., to carry out the process of interpreting text to sign language. It has a box for entering the text to be translated to LSEC and a contiguous area that displays GIF graphic resources with the respective translation. To start the interpretation process, it is necessary to press the “Play” button and the process is stopped with the “Stop” button. Additionally, the main screen contains a menu located on the left side that allows access to the two graphic resource management screens and the two configuration screens. Figure 3 shows the generic distribution of the main screen.

Fig. 3. Display of the graphic interface of the main screen in a process of translating.

The two graphic resource management screens allow the user to add or modify information from GIFs, that is, from the repository tokens. The configuration screens parameterize the display time values of the word/character GIFs, and also make changes in the communication with the database.


3 Experiments and Discussions

Our system uses two types of tokens, characters and words, for the translation of a text message to the LSEC. Both types of tokens are represented by GIF graphic resources. The system has 37 integrated characters, which include the numbers from 0 to 9 and all the letters of the Spanish alphabet. Additionally, it has a dictionary of 275 words and idioms used in the Ecuadorian sign language. A computer with an Intel(R) Core(TM) i5-7200U processor, 2.70 GHz CPU and 8 GB of RAM, running on a 64-bit Windows 10 operating system, was used for the computational testbed. In order to quantify the average time it takes to display a character or word from the repository, an individual token test was performed, which measured the average access time to the local database. Table 1 summarizes the storage results and execution times of these two types of resources managed by the interpreter.

Table 1. Results of the individual tests performed on the two types of tokens
  Token type  Average time (ms)  Average file size (MB)  Repository total size (MB)
  Character   0.54               1.89                    69.9
  Word        9.93               0.37                    103

The state machine executes the spelling of a word when a token is not registered in the word repository or when the token contains proper names or numbers. Ergo, and according to the values recorded in Table 1, it is possible to obtain Eq. (2) for the time $T_s$ that the system takes to show the user the graphic resource of a token to be spelled, depending on the average display time $t_c$ of a character and the number of characters n that the token has to spell. This is equivalent to the time it takes the system to query the token in the database and collect the locally stored graphic resource, to which the character delay time $d_i$ configured by the user in the application must be added. This same process can be extrapolated to calculate the time it takes for the system to show the user the graphic resource of a token recognized as a word.

$T_s = n\, t_c + \sum_{i=0}^{n} d_i$  (2)
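Equation (2) can be evaluated directly; the following sketch is illustrative, and the configured delay values used in the example are arbitrary.

```python
def spelling_time_ms(n_chars, t_c=0.54, delays_ms=None):
    """Eq. (2): estimated time to display a spelled token of n_chars characters.

    t_c is the average per-character display time (0.54 ms, Table 1) and
    delays_ms are the user-configured inter-character delays d_i.
    """
    delays_ms = delays_ms if delays_ms is not None else [0.0] * (n_chars + 1)
    return n_chars * t_c + sum(delays_ms)

# e.g. a 5-character name with a 200 ms configured delay per character:
print(spelling_time_ms(5, delays_ms=[200.0] * 6))   # 1202.7 ms
```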

Finally, the results of twenty users who have tested the system were recorded. For this, each user introduced a text to be interpreted to the LSEC, and the results of the total execution times of each interpretation were retrieved. Table 2 shows the average execution times obtained, as well as the average size of words entered.


Table 2. System performance results for a set of twenty tests
  Average number of translated characters       75.45
  Average number of translated words            13.8
  Average execution time (ms)                   69.4
  Mean time of machine state validation (ms)    22.65

For this last experiment, the consumption of computational resources measured in percentage of CPU usage was 0.46% and the amount of RAM was 78.55 MB.

4 Conclusions and Future Work

LSEC is perhaps the form of communication most used by the deaf-mute community in Ecuador, but knowledge of it is limited to a small number of people. This tool has proven to be a robust and fast alternative for the community that seeks to establish communication with people with hearing impairment without prior knowledge of the LSEC. The modeling of our interpreter is based on the operation of the state machine of an NFA, where a state can have more than one transition per input, allowing the entered text to be validated and enabling our application to be scalable for later stages of lexical analysis. As future work, we want to migrate the application to web or mobile environments, so that portability is much simpler and its scope is greater. Although this software involves only lexical analysis, in the future other types of analysis such as syntactic and semantic can be implemented. A posteriori, it is also intended that the functionality of the software be expanded and integrated to handle hardware devices that capture the gestures of the LSEC and perform the translation or interpretation from the LSEC to text, i.e., in the opposite direction to what is currently performed, in order to have two-way communication.

Acknowledgments. This work was supported by the IDEIAGEOCA Research Group of Universidad Politécnica Salesiana in Quito, Ecuador.

References 1. World Health Organization (WHO). https://www.who.int/news-room/fact-sheets/detail/ deafness-and-hearing-loss 2. United Nations (UN). https://www.un.org/development/desa/disabilities/news/dspd/internationalday-sign-languages.html 3. The National Council for Equal Disabilities of Ecuador (CONADIS). https://www. consejodiscapacidades.gob.ec/estadisticas-de-discapacidad 4. Erting C., Johnson R., Smith D., Snider B.: The Deaf Way: Perspectives from the International Conference on Deaf Culture. Gallaudet University, Washington D.C (1994) 5. Arteaga, K.: Design of a braille manual to improve the reading comprehension in English language directed to children from 7 to 9 years with visual impairments at “CEPE-I” in Ibarra, Ecuador. Pontificia Universidad Católica del Ecuador, Ibarra (2018)


6. Rabin, M., Scott, D.: Finite automata and their decision problems. IBM J. Res. Dev. 3(2), 114–125 (1959) 7. Câmpeanu, C., Sântean, N., Yu, S.: Mergible states in large NFA. Theor. Comput. Sci. 330(1), 23–34 (2005) 8. Jiménez, L., Benalcázar, M., Sotomayor, N.: Gesture recognition and machine learning applied to sign language translation. In: VII Latin American Congress on Biomedical Engineering CLAIB 2016, pp. 233–236. Springer, Singapore (2017) 9. Rivas, D., et al.: LeSigLa_EC: learning sign language of Ecuador. In: Huang, T.C., Lau, R., Huang, Y.M., Spaniol, M., Yuen, C.H. (eds.) Emerging Technologies for Education, SETE 2017, Lecture Notes in Computer Science, vol 10676. Springer, Cham (2017) 10. Ingavélez-Guerra, P., et al.: An intelligent system to automatically generate video-summaries for accessible learning objects for people with hearing loss. In: International Conference on Applied Human Factors and Ergonomics, vol 596, pp. 113–122. Springer, Cham (2018) 11. Oramas, J.: Technology for Hearing Impaired People: A novel use of Xstroke pointer gesture recognition algorithm for Teaching/Learning Ecuadorian Sign Language (2009) 12. Arnold, K., Gosling, J., David, Holmes D.: The Java TM Programming Language. AddisonWesley, Boston (2006) 13. Stubblebine, T.: Regular Expression Pocket Reference: Regular Expressions for Perl, Ruby, PHP, Python, C, Java, and Net. O’Reilly Media Inc., Mishawaka (2007) 14. Mertz, D.: Text Processing in Python. Addison-Wesley, Boston (2003) 15. Qian, K., Fu, X., Tao, L., Xu, C., Díaz-Herrara, J.: Software Architecture and Design Illuminated. Jones & Bartlett Publishers, Subdury (2010)

Establishing and Verifying the Ergonomic Evaluation Metrics of Spacecraft Interactive Software

Jianhua Sun1, Yu Zhang1, Ting Jiang2(&), and Chunlin Qian1

1 Department of Industrial Design, School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, China
[email protected], [email protected], [email protected]
2 National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing 100094, China
[email protected]

Abstract. The interactive spacecraft software is an indispensable supporting system with which astronauts work across multiple types of devices. The quality of the software, to a great extent, depends on whether the software system is designed to be compatible with human capabilities. The purposes of this paper are to identify metrics for measuring and verifying the usability of interactive spacecraft software that focus on human performance, such as human errors, consistency, and operability, to help designers and evaluators directly figure out software interface problems. The ergonomic evaluation metrics framework was categorized into three dimensions, the interface layer, operation layer, and demand layer, and includes eight primary indicators and 51 secondary indicators. The reliability and validity of the metrics framework were verified in three dimensions, necessity, overlapping-degree, and completeness, in a survey of 12 experts specialized in the area of aerospace ergonomic evaluation. The results can guide future research and practice for the ergonomic evaluation of related software interfaces.

Keywords: Establishment and verification method · Metrics · Interactive software · Spacecraft

1 Introduction

The interactive software interface is an essential medium for the "dialogue" between humans and information products [1], and many countries pay great attention to the design and evaluation of software interfaces [2]. With the continued deepening and innovation of software interface design theory and assessment methods, software evaluation metrics have gradually been formed [3]. The classic model of software evaluation metrics, the Hierarchical Usability Model [4] suggested by McCall in 1977, is divided into three levels: factor, criterion, and metric. The factor level refers to the usability dimension. The criterion level refines the usability dimension into measurable sub-factors. The metric level means the


indicators that can be directly measured. Based on the combination of usability concept definitions, heuristics, and guidelines, Welie constructed an evaluation indicator system in 1999 from four levels: usability, usage indicators, means, and knowledge [5]; for instance, the highest level, usability as defined by ISO, consists of three aspects: efficiency, effectiveness, and satisfaction. Han presented an innovative and systematic evaluation framework consisting of two groups of dimensions in 2000: usability dimensions and user interface design elements [6]. Kwahk suggested a new methodology of usability evaluation through a simple, structured framework in 2002 [7], outlined by three major components: the interface features, the evaluation context and the usability measures. With a view to modeling the dynamic relationships of the attributes that affect software usability, Seffah proposed an approach, QUIM (Quality in Use Integrated Map), for quantifying software quality in use, consisting of four components: factors, criteria, metrics and data [8]. At present, there are two main problems with the existing metrics. Firstly, most of the existing indicators, such as effectiveness, efficiency, and user satisfaction as proposed by ISO 9241 [9], offer a macroscopic evaluation, which is suitable for scoring software or comparing the advantages and disadvantages of various software products, but does not directly point out problems and suggestions for improvement. Secondly, the software interfaces used by astronauts lack consistency because the various software systems are developed separately. Therefore, the purpose of this paper is to define metrics for measuring and verifying the usability of the spacecraft interactive software, focused on errors, consistency, and maneuverability, and to help designers and evaluators directly figure out software interface problems.

2 Construction Method of Ergonomic Metrics

The establishment of reasonable metrics is the premise and basis for making correct and objective evaluations of the evaluation object [10]. Each selected indicator should reflect the evaluation object's information on a specific aspect as adequately as possible [11]. The evaluation metrics should possess comprehensiveness, representativeness, and operability. At the same time, overlap among the indicators should be avoided [12]; each selected indicator should meet the following principles [13]:

(1) Necessity: each indicator is essential to the metrics.
(2) Low overlapping-degree: the redundancy between indicators should be the lowest.
(3) Completeness: the evaluation metrics should be able to cover the software interface design elements comprehensively and thoroughly reflect the interface design level, overall performance, and the whole situation.

In order to locate interface design problems directly through evaluation indicators, it is necessary to sort out the types of interface design problems that may occur in the software's interactive interface, and then analyze the interface composition.

2.1 Analysis of Interface Element Design Attributes

Interface elements have different types of design attributes that affect the overall ease of use and usability of the software. The design attributes of each interface element can be divided into three categories: individual attributes, interaction attributes and integration attributes. An individual attribute is an attribute that a single element has independently, such as its size, color and shape; an interaction attribute is an attribute that occurs when the user operates the software interface, such as the feedback message in response to a control action by the user; an integration attribute is an attribute that occurs when more than two elements are combined, such as the interface layout or interface level.

2.2 Classification of Interface Elements

Through the analysis of the typical individual and interaction attributes of interface elements, the interface elements are divided into five categories. The specific categories and their characteristics are shown in Table 1; given the above, evaluators can classify the interface elements easily.

Table 1. Interface element design attributes
                Individual attributes            Interaction attributes
  Element       Contains picture  Contains text  Can be directly operated  Supports entering text
  Text button   x                 √              √                         x
  Text label    x                 √              x                         x
  Image button  √                 x              √                         x
  Image label   √                 x              x                         x
  Textbox       N/A               √              √                         √

√ shows that the item matches the corresponding column header, x indicates that it does not, and N/A means that the attribute does not apply.

2.3 Extracting Evaluation Indicators

For specific software types, evaluation indicators can be extracted for different interface types according to specific evaluation principles. For instance, to evaluate the readability of the software interface, it mainly refers to whether the interface elements can be seen. Whether it can be seen clearly or not depends on human visual perception [14], including color vision, contour perception, and contrast perception [15]. Take contour perception as an example, the contour perception means that various contours dominate our perception [16], which is the basis for us to perceive the shape of an object. It determines the boundary and the area of the object so that the object can be visually


perceived. Mapping contour perception to the text label and text button can extract the font size, font style, and other indicators.

3 Verification of the Evaluation Metrics

Necessity. Necessity is tested from the two aspects of concentration and dispersion. The necessity of each bottom indicator is divided into five levels, k = 1, 2, 3, 4, 5, which respectively indicate that the indicator ranges from very unnecessary to very necessary. Experts on software interface ergonomics in the corresponding field were invited to rate the necessity of the bottom indicators. The concentration of expert comments on the necessity of the i-th single evaluation indicator is $F_i$:

$F_i = \frac{1}{p} \sum_{k=1}^{5} E_k P_{ik}$  (1)

5 1 1 X Pik ðEk  Fi Þ2 Þ2 : p  1 k¼1

ð2Þ

di represents the degree of dispersion of the evaluation opinions of the evaluators on the necessity of the ith single evaluation indicator. Combined with the concentration and dispersion of the necessity evaluation, the necessity coefficient of the ith evaluation metrics Vi : Vi ¼ di =Fi :

ð3Þ

The higher the value of Fi is, the more essential the indicator is, the lower the dispersion, the more concentrated the judgment of experts is. The higher the value of Fi is, the lower the value of di is and the lower Vi is, the higher the necessity of the indicator is. Overlapping-Degree. The overlapping-degree of the ith indicator and the jth indicator rij 2 ½0; 1. That rij is 0 means that there is no overlapping relationship between the two indicators; and that rij is 1 means that the two indicators are entirely overlapped. The overlapping-degree of evaluation indicator i with other indicators Ci and the large redundancy degree C:

Establishing and Verifying the Ergonomic Evaluation Metrics of Spacecraft

Ci ¼

n 1 X ð rij  1:0Þ n  1 j¼1





381

ð4Þ

ð5Þ

i¼1

4 Verification This section tests the establishment and verification method of the evaluation metrics in Sects. 2 and 3 in the light of spacecraft software. 4.1

Construction of the Metric System

The evaluation principle of the spacecraft software interface has two parts. One aspect includes the visibility, readability, and understandability of interface elements. It should give evaluation conclusions and suggestions on the visibility, readability, and understandability of each element in each type of the display page. Another aspect is the task matching of the software interface, and the evaluation should focus on whether the software interface elements can fully and effectively support the completion of the operation task, guarantee the astronauts to operate quickly and effectively, and prevent the risk of misoperation. The evaluation indicators can be extracted by mapping the evaluation principle of the interface elements, the design of the software interface means the specific design of interface elements, logical structure, and interaction process based on the analysis of the software use situation and user needs [10], so the following three dimensions can be considered as followed when evaluating the software interface. Firstly, the interface layer refers to the interface elements and logical structure to interface design. It means the individual attribute and integration attribute of the interface element corresponding to the attribute of the interface element. The primary indicators can be sorted into interface layout, interface level, interface information, and interface elements. Secondly, the operation layer mainly refers to the design of the interaction process, while it refers to the interaction attribute of the interface element attribute. The primary indicators can be considered from two aspects of the operation process and interactive requirements. Thirdly, the demand layer refers to the user demand analysis, which is the premise of the interface design. In terms of the overall design of the software, the demand layer means that whether the software meets the functions required by the user to perform the specific tasks of the software, and whether the user is satisfied with the overall use of the software. The primary indicators can be sorted into user requirements and user satisfaction. Finally, the metrics consist of 8 primary indicators and 51 secondary indicators are formed.

382

4.2

J. Sun et al.

Validation of the Evaluation Metrics

Necessity. The Evaluation metrics of the spacecraft software interface have a total of 51 secondary indicators. 12 aerospace ergonomics evaluation experts scored the necessity of each indicator by questionnaires. The concentration vector F and the dispersion vector of the experts’ comments on the single indicator were calculated by the formula (1) and formula (2) respectively, the values of which are shown in Fig. 1. The results showed that the concentration values Fi of the 51 indicators exceed 3. Generally, that the dispersion value is less than 0.63 means that the dispersion meets the requirements. Set the concentration values di limit to 2, indicating that it has reached at least a general level. The necessity coefficient is limited to less than or equal to 0.63/2 = 0.315. From Fig. 1, we can see that the value of necessity coefficient of 51 indicators is less than 0.315, which meets the requirement of necessity. Overlapping-Degree. 12 experts were invited to score the overlap of the indicators, and the overlap between each evaluation indicator and other indicators was calculated according to the formula (4). For the comprehensive redundancy of the metrics, if it accords with C\ nq, the redundancy of the evaluation metrics meets the requirements in principle. q is the whole redundancy coefficient, which is set strictly to 0.1. The closer C is to 0, the lower the redundancy of the metrics is. The value of C calculated is 1.83 according to the formula (5), which is less than 5.1, so the redundancy of the metrics meets the requirements. Completeness. After experts have a full understanding of the index system, the experts were asked whether the metrics are comprehensive. The 12 aerospace ergonomics evaluation experts agreed that the existing indicator system had covered the software interface evaluation objects, and no new indicator was proposed.

5 Conclusion This paper proposes a new establishment and verification method focusing on the problems existing in the establishment and verification of the evaluation metrics of the software interface, which was tested in the field of the spacecraft software. The evaluation metrics consist of three dimensions of the interface layer, the operation layer and demand layer, including eight primary indicators and 51 secondary indicators. The verification of the metrics is from three dimensions: necessity, overlap, and completeness. The method that constructing and verifying the metrics proposed by this paper would also be directly applied to the establishment and verification of the evaluation metrics of the interactive software interface in other related fields.

Establishing and Verifying the Ergonomic Evaluation Metrics of Spacecraft

383

References 1. Philips, B.H.: Developing interactive guidelines for software user interface design: a case study. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 37(4), 263–267 (1993) 2. Hinze, A., Heese, R., Schlegel, A., et al.: Manual semantic annotations: user evaluation of interface and interaction designs. J. Web Semant. 58, 100516 (2019) 3. Merino, L., Ghafari, M., Anslow, C., et al.: A systematic literature review of software visualization evaluation. J. Syst. Softw. 144, 165–180 (2018) 4. McCall, J.A., Richards, P.K., Waiters, G.F.: Factors in Software Quality. Volume I. Concepts and Definitions of Software Quality. Springfield Publishing, Springfield (1977) 5. Welie, M., Veer, G., Elins, A.: Breaking down usability. In: Proceedings of Interact (2001) 6. Han, S.H., Hwan Yun, M., Kim, K.S., et al.: Evaluation of product usability: development and validation of usability dimensions and design elements based on empirical models. Int. J. Ind. Ergon. 26(4), 477–488 (2000) 7. Kwahk, J., Han, S.H.: A methodology for evaluating the usability of audiovisual consumer electronic products. Appl. Ergon. 33(5), 419–431 (2002) 8. Saffeh, A., Kececi, N., Donyaee, M.: QUIM: a framework for quantifying usability metrics in software quality models. In: APAQS 2001: Second Asia—Pacific Conference on Quality Software, Hong Kong, pp. 311–318 (2001) 9. ISO. 9241-11:2018. Ergonomics of human-system interaction - Part 11: Usability: Definitions and concepts. ISO (2018) 10. Deng, L., Yu, S.H.: Human-machine interface evaluation of driller control room based on fuzzy AHP. Comput. Eng. Appl. 1114–1117 (2014) 11. Tan, T., Xiong, Z.: Comparative study on the evaluation index system of project performance. Sci. Technol. Manag. Res. (2014) 12. Liu, Y.Y., Zhao, Q., Liu, Y.L.: The research on the construction method of evaluation indicator system based on software product line. In: Conference Nam, Shanghai, pp. 2363– 2368 (2011) 13. Wang, M., Yu, S.H., Yang, Y.P.: Human-machine interface evaluation of aircraft cockpit based on fuzzy AHP-GEM. J. Mach. Des. (2017) 14. Cui, J., Zhang, Y., Wan, S., et al.: Visual form perception is fundamental for both reading comprehension and arithmetic computation. Cognition 189, 141–154 (2019) 15. Lima-de-Faria, A.: Brain Imagery: Visual Perception of Form, Color and Motion. Springer International Publishing, Cham (2014) 16. Cecchetto, S., Lawson, R.: The role of contour polarity, objectness, and regularities in haptic and visual perception. Atten. Percept. Psychophys. 80(1), 1250–1264 (2018)

Systematic Mapping on Embedded Semantic Markup Validated with Data Mining Techniques

Rosa Navarrete(&), Carlos Montenegro, and Lorena Recalde

Departamento de Informática y Ciencias de la Computación, Escuela Politécnica Nacional, Quito, Ecuador
{rosa.navarrete,carlos.montenegro,lorena.recalde}@epn.edu.ec

Abstract. The lack of structured information on the Web does not enable search engines to present accurate information in search results. Some technologies have arisen to produce structured data, including embedded semantic markup, which aims to improve the quality and significance of the information exhibited in search results. An indicator of the success of a technology is the interest of researchers in addressing its particularities. To this end, in this work we conducted a systematic mapping of research articles published in the embedded semantic markup field to expose the maturity of this technology. Moreover, the novelty of this systematic mapping is the use of data mining techniques aimed at confirming the topic-specific categorization of the research articles conducted by human experts.

Keywords: Embedded semantic markup · Systematic mapping · User experience on web searching · Topic model · Latent Dirichlet Allocation

1 Introduction

Nowadays, searching on the Web is a stressful user experience because of the inaccurate information exhibited in search results. Search engines must deal with predominantly unstructured data on the Web. Embedded markup technology can generate structured data by introducing semantic annotations into the Web content. Major search engines interpret these annotations to show enriched information in search results [1]. The adoption of embedded markup has gained relevance and become an upward trend [2]. The semantic annotations involve different vocabularies and formats. This work presents a systematic mapping of the research conducted on embedded semantic markup, which aims to assess the maturity of this technology and identify the topics that have been addressed. We have not found a systematic mapping in this field so far, so the contribution of this research is relevant to fill this gap [3]. Moreover, the novelty of this systematic mapping is the use of data mining techniques aimed at confirming the topic-specific classification conducted by human experts.



The rest of this article is organized as follows: Sect. 2 introduces the main concepts; Sect. 3 presents the method applied for this systematic mapping; Sect. 4 presents the results of the systematic mapping; and Sect. 5 presents conclusions and future work.

2 Background

Embedded semantic markup refers to including machine-readable labels in the HTML code to produce structured data that describe concepts using terms of some vocabulary. Search engines and social networks can crawl these semantic annotations to improve the quality and meaning of the information exhibited to users in search results [4]. These markups use different vocabularies and encoding formats, described next.

2.1 Formats

a) Microdata. It uses an item and name-value pairs to assign values to its descriptive properties, based on any supporting vocabulary.
b) RDFa (W3C recommendation, 2008). It is a syntax for embedding RDF structured data in HTML, i.e., a way to represent RDF expressions.
c) JSON-LD (W3C recommendation, 2014). It adds a script element (used as a data block, not as a script), separately from the existing markup.

2.2 Vocabularies

From the perspective of the Semantic Web, vocabularies define the concepts and relationships (the "terms") used to represent and describe an area of concern. Some of the most used are:
a) Schema.org. It was created in 2011 by the major search engines Google, Bing, Yahoo, and Yandex. It provides terms for describing a wide variety of entities, and it grows by integrating other vocabularies and standards [5].
b) Open Graph Protocol (OGP). It was developed by the Facebook platform to integrate Web pages into the global mapping of Facebook's Social Graph.
c) Dublin Core. It is a controlled set of vocabulary terms mainly used to describe educational resources.

2.3 Approach of the Research

This systematic mapping considers the following possible approaches in the research articles:
a) Use of embedded markup in specific fields. It involves the use in education, e-commerce, digital libraries, person description, and more.
b) Qualitative or quantitative analysis of the deployment of embedded markup technology in large-scale Web corpora. It involves the analysis of structured data obtained from large-scale Web crawling corpora, including formats and vocabularies.


c) Detection and fixing of errors in the embedded markup. It covers syntactic and semantic errors, typos, and misuse of vocabulary terms.
d) Entity retrieval, knowledge base population, or entity summarization from embedded markup. It includes the creation of knowledge graphs.
e) Mapping embedded annotations to the Semantic Web. It concerns linking the vocabulary used for markup with Linked Open Data (LOD), and the mapping of markup vocabulary to Semantic Web vocabularies.

3 Method

A systematic mapping aims to find research trends (e.g., topics covered in the literature) and is guided by research questions. The method applied for this systematic mapping is based on the guidelines proposed in [3]. The phases of this study are planning, conducting, and reporting the systematic review. Each phase is described next.

3.1 Planning the Systematic Review

The output from this phase includes the research questions that define the purpose of the systematic review and the search protocol. These are the Research Questions (RQ):
RQ1. How have the research articles been published over time?
RQ2. Where have the research articles been published?
RQ3. What topics of interest have been covered in the research articles?
The search protocol requires the definition of the following elements:
a) Scientific digital libraries. IEEE Xplore, ACM Digital Library (ACM DL), Scopus, Web of Science (WOS), and Springer.
b) Search terms. It includes the most frequent equivalent terms, vocabularies and formats. The search string was ("embedded markup" or "semantic annotations" or "semantic markup") and ("Schema.org" or "OGP" or "Dublin Core" or "Microdata" or "RDFa" or "JSON-LD").
c) Criteria for inclusion/exclusion. Non-compliance with these criteria implies exclusion of the research article.
d) Years of publication. From 2011 to 2019. The initial year was selected because the Schema.org vocabulary was launched in 2011.
e) Language. English.
f) Venue of publication. Specifically, peer-reviewed conferences and journals.
g) Topics of interest. At least one of the approaches, vocabularies and formats explained in Sect. 2.3.

3.2 Conducting the Systematic Review

The output from this phase is the literature corpus constituted by the set of research articles defined as relevant for the systematic mapping.


Searching was conducted in two steps: i) retrieve all research articles that meet the inclusion criteria; ii) select the relevant research articles by analyzing the full text of those collected in the first step. We chose only research articles that address the topics of interest. The set of research articles finally selected is referred to as the "literature corpus".

3.3 Reporting the Results of the Systematic Review

The results of the systematic review are presented by answering the proposed RQ.

RQ1. How have the research articles been published over time? Table 1 presents the search results. For each year, the column "P" gives the research articles obtained by the search protocol, and the column "R" gives the relevant research articles. The application of the search protocol provided 190 research articles; only 57 were relevant for the mapping process. No clear trend in publications over time was found; the highest numbers of relevant works were found in 2013 and 2016.

Table 1. Research articles by year and database source (P = retrieved, R = relevant)

Digital library   2011   2012   2013   2014   2015   2016   2017   2018   2019   Total
IEEE              4/0    6/3    4/2    2/0    5/3    2/0    4/2    2/1    8/3    37/14
ACM               4/0    7/4    9/2    3/3    3/2    5/4    6/1    2/2    6/2    45/20
SCOPUS            2/0    3/0    14/2   12/2   7/0    9/4    9/1    6/3    5/2    67/14
SPRINGER          1/0    1/0    1/0    2/0    2/2    3/0    2/0    2/0    4/0    18/2
WOS               1/1    2/0    4/3    3/1    3/1    5/1    2/0    2/0    1/0    23/7
Total             12/1   19/7   32/9   22/6   20/8   24/9   23/4   14/6   24/7   190/57

RQ3. What topics of interest have been covered in the research articles? The answer to this RQ constitutes the systematic mapping proposed in this work. It is presented in the next section, where the manual and automatic mapping are explained.

4 Results

The systematic mapping was obtained through a two-fold approach, aiming to verify whether the results of the manual process can be validated through the automatic process.

4.1 Manual Process

The results of the systematic mapping, according to the manual process, are presented in Table 2. For lack of space, only an excerpt is presented, but the complete information is available in an online file (https://tinyurl.com/yd4ro6d9). The first column is the "Year" of publication; the second is the "Research Article ID". Under "Manual classification," the next columns represent the parameters (approach of the research, vocabulary, and format).


The approach of the research encompasses one column per approach, each labeled with a letter, according to the detail presented in Sect. 2.3. A visual convention is used to depict the degree of relation of the article to the respective approach: a white box represents "not related"; a striped box represents "partially related"; and a black box represents "fully related". The vocabulary and format columns contain the respective names (there can be more than one). From the manual mapping, we recognized that Schema and some standards adopted by this vocabulary, such as GoodRelations and LRMI, appear in 78.94% of the literature corpus (45 articles). A research article can address more than one markup format. The most referenced markup format is Microdata with 54.39% (31 articles), followed by RDFa with 22.8% (13 articles). JSON-LD has minimal references, probably because it is the newest format declared as a W3C recommendation and its use is apparently not yet widespread.

Table 2. Embedded semantic markup mapping (Excerpt)

[Table 2 content: for each research article, identified by year and ID (e.g., 5-2012-Mika, 21-2014-Matosevic, 43-2017-Navarrete), the manual classification columns record the approach of the research (a–e), the vocabulary used (e.g., Schema, GoodRelations, LRMI, OGP, Dublin Core), and the markup format (Microformats, Microdata, RDFa, JSON-LD); the automatic classification columns record the assigned topic ID and its probability. The complete table is available online at https://tinyurl.com/yd4ro6d9.]

4.2 Automatic Process

We have introduced an innovative automatic method to corroborate the topic-specific categorization conducted by the authors. For the automatic classification, we use a method based on the Latent Dirichlet Allocation (LDA) topic model. It is a probabilistic unsupervised learning model that allows modeling a corpus as a finite mixture of K topics [6, 7]. The probability of a sequence of words is not affected by the order in which they appear, a criterion known as the Bag of Words (BOW) concept [8]. The central inferential problem for LDA is determining the posterior distribution of the latent variables given the document, which is obtained from the joint distribution [8]:

$p(\theta, z, w \mid \alpha, \beta) = p(\theta \mid \alpha)\, p(z \mid \theta)\, p(w \mid z, \beta)$

The parameters $\alpha$ and $\beta$ are constant in the model version used; $z$ and $w$ denote the repeated sampling of topics $z$ and words $w$ until $N$ words have been generated for a document $d$; and $\theta$ represents the sampled distribution over topics for each document $d$, for a total of $M$ documents. Because the automatic technique recognizes the existence of individual words in the literature corpus, we propose a cluster of descriptive words for each parameter that best represent the possible terms associated with it:

Vocabulary: schema, ogp, foaf, dc, lrmi, goodrelations.
Markup format: microdata, rdfa, jsonld.
Approach of the research: ads, commerce, commoncrawl, crawl, deploy, education, egovernment, entity, error, extract, extraction, fix, government, learning, lod, mistake, owl, pld, plds, rdf, video, wdc, webdatacommons.

Since there is no optimal solution for estimating the number of topics (K) in a corpus of documents, in this work we followed the approach proposed by Murzintcev [9], which combines maximization [10] and minimization [11] models to determine K as the number that best suits the considered models. Following this approach and using the R tool, the estimated K for our corpus was 22. To implement LDA, Gibbs sampling [12], a form of Markov Chain Monte Carlo, was used; it provides a relatively efficient method for extracting a set of topics from a corpus. The solution was implemented in R, using related libraries. Because of space restrictions, only an excerpt of the calculated topics, their probabilities and their top words is presented in Table 3. The cells are highlighted in blue for vocabulary, green for format, and yellow for the approach of the research. The complete list of topics is available as an online file (https://tinyurl.com/yd4ro6d9). The highlighted cells show the descriptive words for each parameter, mentioned previously.

Table 3. Topics for the corpus and their probability (Excerpt)


The automatic classification binds each research article to the automatically obtained topics. Although a research article can be bound to more than one topic, only the topic with the highest probability is shown in the column labeled "Topic ID" in Table 2, and its probability (as a percentage) is presented in the column labeled "Probability". The topics with low probability differ only slightly from each other; for this reason, the automatic classification assigns different topics to research articles even though they are closely related in the manual classification. Nevertheless, we can argue that the two-fold mapping process presented in this work improves the reliability of the results. Table 2 shows the highest probability (19.72%) for Topic 1, assigned to seven articles (10.52% of the literature corpus), followed by Topic 2 (8.24%), assigned to five articles (8.77% of the literature corpus). The remaining probabilities have values of less than 7%, distributed over 72.04% of the literature corpus. Topic 1 is mostly related to the use of embedded markup with the Schema vocabulary for obtaining structured data on the Web, independently of the use of a specific format. Topic 2 reveals the exploration of embedded markup to recognize entities for different purposes, such as entity summarization.
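The topic-modeling step described above was implemented in R. Purely as an illustration, the sketch below shows the same bag-of-words plus LDA pipeline in Python with scikit-learn, assigning each article to its most probable topic. The document strings, preprocessing and variable names are illustrative, and scikit-learn's LatentDirichletAllocation uses variational inference rather than the Gibbs sampling used in this study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative corpus: one string of full text (or abstract) per research article
docs = [
    "schema microdata deployment measured over a webdatacommons crawl",
    "entity retrieval and summarization from rdfa annotations on the web",
]

# Bag-of-words representation (word order is ignored, as in the BOW assumption)
vectorizer = CountVectorizer(stop_words="english")
bow = vectorizer.fit_transform(docs)

# LDA with K = 22 topics, the value estimated for the corpus in this study
lda = LatentDirichletAllocation(n_components=22, random_state=0)
doc_topic = lda.fit_transform(bow)     # rows: articles, columns: topic probabilities

# Bind each article to its most probable topic, as in the automatic classification
best_topic = doc_topic.argmax(axis=1)
best_prob = doc_topic.max(axis=1)
for i, (t, p) in enumerate(zip(best_topic, best_prob)):
    print(f"article {i}: topic {t}, probability {p:.2%}")
```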

5 Conclusions and Future Work

In this work, we introduced an automatic method based on data mining techniques aimed at validating the manual process of systematic mapping. The results showed that it is helpful for backing the human mapping decisions; nevertheless, it would be more productive with a larger literature corpus. The number of publications in journals was significantly smaller than the number of publications in conferences. This may indicate that embedded semantic markup is still a developing technology, not yet adopted by all web communities. Schema.org was the most addressed vocabulary because of its breadth concerning the domains where its terms can be used and the support granted by the major search engines. Microdata was the format most addressed in research articles, surpassing RDFa and JSON-LD, which are already W3C recommendations. The articles evidence the importance of the Schema.org vocabulary as the basis for obtaining structured data. Significant research efforts were also found oriented toward the extraction of structured data from the Web corpus, entity retrieval, knowledge base population, and entity summarization. Furthermore, the use of semantic markup annotations has been barely addressed for specific domains such as education, health, or government, and no research was found aimed at facilitating the use of embedded markup for non-technical users. Based on these results, as future work we plan to direct our research toward the use of this technology in the educational domain.


References
1. Sikos, L.: Mastering Structured Data on the Semantic Web: From HTML5 Microdata to Linked Open Data. Apress, New York (2015)
2. Meusel, R., Petrovski, P., Bizer, C.: The WebDataCommons microdata, RDFa and microformat dataset series. In: 13th International Semantic Web Conference (ISWC 2014), pp. 277–292. Springer, Cham (2014)
3. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering. Technical report, EBSE (2007)
4. Haas, K., Mika, P., Tarjan, P., Blanco, R.: Enhanced results for web search. In: 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 725–734. ACM (2011)
5. Guha, R.V., Brickley, D., Macbeth, S.: Schema.org: evolution of structured data on the web. Commun. ACM 59(2), 44–51 (2016)
6. Steyvers, M., Griffiths, T.: Probabilistic topic models. In: Landauer, T., McNamara, D., Dennis, S., Kintsch, W. (eds.) Latent Semantic Analysis: A Road to Meaning. Lawrence Erlbaum, Hillsdale (2007)
7. Blei, D., Ng, A., Jordan, M.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
8. Blei, D.: Probabilistic topic models. Commun. ACM 55(4), 77–84 (2012)
9. Murzintcev, N.: Select number of topics for LDA model. CRAN R Project (2016)
10. Deveaud, R., Sanjuan, E., Bellot, P.: Accurate and effective latent concept modeling for ad hoc information retrieval. Revue des Sciences et Technologies de l'Information – Série Document Numérique, pp. 61–84 (2014)
11. Arun, R., Suresh, V., Veni, C., Murthy, M.: On finding the natural number of topics with latent Dirichlet allocation: some observations. In: Advances in Knowledge Discovery and Data Mining, pp. 391–402 (2010)
12. Heinrich, G.: Parameter estimation for text analysis. Technical report (2005)

Physical Approach to Stress Analysis of Horizontal Axis Wind Turbine Blade Using Finite Element Analysis

Samson O. Ugwuanyi, Opeyeolu Timothy Laseinde(&), and Lagouge Tartibu

Mechanical and Industrial Engineering Department, Faculty of Engineering and the Built Environment, University of Johannesburg, Johannesburg, South Africa
[email protected], {otlaseinde,ltartibu}@uj.ac.za

Abstract. Wind energy plays a crucial role in meeting the growing demand for energy across the world. Energy from solar and wind is renewable and abundant; unlike fossil fuels, which are alternative sources of energy, it does not deplete. Many countries, such as China, the United States, Germany, India and Spain, have already adopted wind as a source of energy. Large wind turbines are the most common design and extract more wind, but their main challenge is that they require large terrains and are most suitable in areas with high wind speeds; they are usually erected onshore or offshore. Small wind turbines, on the other hand, require small ground spaces and can operate well in terrains with low wind speed. The turbine blade is one of the most important components of a wind turbine, expected to account for about 20% of the overall cost of the turbine. The performance of wind turbine blades depends greatly on the material used in the design, so selecting which material to use is very important. Many materials are available for selection, including steel, carbon fibres, wood and more, and they all perform differently because they have different material properties. In this paper, only three materials were considered: Thermoplastic Resin (plastic and vinyl material), Aluminium Alloy and Titanium Alloy. The evaluation criteria included comparing the values of von Mises stress and displacement, which were obtained through simulations. Performance analysis was carried out using Inventor Professional software for the Finite Element Analysis (FEA). The design and performance analysis was based on a 1 kVA small-scale horizontal axis wind turbine blade prototype with a blade length of 0.5 m, which was scaled up based on the design specification and material performance criteria. Validation of the mechanical strength of the blade was done using FEA.

Keywords: Wind turbine · Finite Element Analysis · FEA · Horizontal axis · Stress analysis · Turbine blade · Material · Von Mises




1 Introduction

There is an increase in the demand for electrical energy, fueled by rapid population growth and intensive industrialization. The greatest advantage of solar and wind energy is that they are renewable; they are abundant and clean energy sources. Wind in motion carries kinetic energy, and a wind turbine is a device used to harvest wind and convert it into electrical energy [1, 2]. Among all the parts of the turbine, the blade is responsible for harvesting the wind. The turbine blade needs to be optimized to produce favorable results such as a long life span and high fatigue resistance. Brondsted [3] points out that optimizing the materials used in blade design is a vital step taken in order to design a lighter blade at relatively minimal cost, with a high stiffness-to-weight ratio, a high strength-to-weight ratio and admirable fatigue performance. Pourrajabian and Mirzaei [4] support the optimization of turbine blades since it is essential for design: the costs of manufacturing the blades are associated with their weight, length and structure, and the manufacturing methods, materials and shape are accountable for the aerodynamic torque that is responsible for starting a wind turbine. Domnica et al. [5] added that, since turbine blades operate over extensive ranges of wind conditions, blade optimization is better supported by taking the material structure into account. This study focuses on selecting the material most resistant to fatigue. Three possible materials for designing a 1 kVA small-scale horizontal axis wind turbine were compared and analyzed: thermoplastic resin, aluminum alloy and titanium alloy. The analysis was based on the values of von Mises stress and displacement.

1.1 Brief History on Wind Turbines

The design of wind turbines has come a long way and many designs failed to live up to expectations.

Fig. 1. Wind mill [5]

Fig. 2. Failed wind turbine blade [6]

Figure 1 shows Poul la Cour's first electricity-producing wind turbine, built in 1891 in Askov, Denmark; it was one of the earliest wind power inventions [6]. With current technological advancements, modern wind turbines can be erected on the seabed or on the ground [6].


Mishnaevsky and Branner [7] reported that a design made from steel in 1941 malfunctioned after just a few hours of operation. The next wind turbine, the Gedser wind turbine, was designed using three composite blades made of steel spars with an aluminum shell supported by wooden ribs. It was reasonably successful and operated for eleven years without maintenance. Figure 2 shows a failed blade of the Smith wind turbine. Traditionally, turbine blades were made from wood, but wood was substituted by aluminum, steel, thermoset and thermoplastic composites due to its adverse sensitivity to humidity. Babu [8] further argued that aluminum was only used for experimental purposes in blade design, and it was discovered that it had lower fatigue resistance compared to steel.

1.2 Classification of Wind Turbines

Mao and Yan [9] classified wind turbines into two categories, vertical and horizontal axis wind turbines. This classification is determined by the relative direction between the wind and the turbine's rotational axis.

1.2.1 Vertical Axis Wind Turbines (VAWT). The blades of a vertical axis wind turbine rotate about an axis perpendicular to the ground. In the past, most wind turbines were vertical axis wind turbines, because they are easy to erect and they do not require any mechanism to orient them to the wind direction. Despite these advantages, the vertical axis wind turbine was quickly overtaken by horizontal axis wind turbines.

1.2.2 Horizontal Axis Wind Turbines (HAWT). The blades of a HAWT typically revolve about an axis parallel to the surface of the ground. A generator is connected to the rotor of the turbine through shafts kept in the nacelle box at the tower. Presently, HAWTs are the most prevalent turbine design and are considered more efficient than vertical axis wind turbines.

2 Loads and Forces that Lead to Wind Turbine Blade Failure

During operation, turbine blades experience forces and loads that can lead to blade failure. Some of these loads and conditions are explained below.

2.1 Aerodynamic Loads

Fatigue stresses are induced by wind speed variations. A load refers to the weight or force imposed on an object. Aerodynamic loads refer to the pressure imposed on the wind turbine blade by moving air particles interacting with its solid parts. Aerodynamic loads are derived from the lift and drag of the blade's aerofoil section, which depend on the wind velocity, the yaw, the blade velocity and the angle of attack. The angle of attack depends on blade twist and pitch. The aerodynamic lift and drag generated are resolved into useful thrust in the direction of rotation, which is absorbed by the generator and by the reaction forces.

2.2 Angle of Attack

The angle of attack is the angle formed between the chord and the flow over a 2D airfoil, as applied in aero-elastic modelling. For a rotating blade, the flow passing a blade section is bent due to the rotation of the rotor.

2.3 Drag on Turbine Blade

Drag is the force acting parallel to the direction of the approaching airflow. According to Manwell et al. [10], the drag force is caused by viscous friction forces acting on the surface and by the unequal pressure acting on the airfoil surfaces facing toward and away from the approaching flow [10].

2.4 Lift Force on a Turbine Blade

Lift can be determined by integrating the pressure force over the surface of the blade [11]. This force is created by the dynamic effect of the air acting on the airfoil, and it acts "perpendicular to the flight path through the center of lift (CL) and perpendicular to the lateral axis. In level flight, lift opposes the downward force of weight" (Fig. 3).

Fig. 3. L/D ratio against angle of attack [3]

2.5 Gravitational Force

These forces depend on mass and are thought to rise cubically with growing turbine diameter; hence, turbines with a diameter of less than ten meters have negligible inertial loads. The gravitational force is calculated by multiplying the mass by the gravitational acceleration.

2.6 Atmospheric Turbulence

Turbulence is triggered when air motion is uneven. Turbulent wind currents fluctuate markedly over short distances. Turbulence is produced by friction at the surface of the earth and by thermal effects that cause masses of air to be displaced vertically due to temperature variations [12].


3 Methods

Three materials were considered: Thermoplastic Resin (plastic and vinyl material), Aluminum Alloy, and Titanium Alloy. Based on the geometrical parameters of the blades, the 3D model of the blade airfoil was imported into Autodesk Inventor Professional software for analysis. The blade aerodynamic features, von Mises stress, mass density, and displacement characteristics are the key features observed. From the displacement results of the blades tested, the highest displacement occurred at the blade tip for all materials during spinning at various safe speeds in line with the prototype size, and the von Mises stress concentration occurred at the blade center. The wind turbine blades were imported into Autodesk Inventor software and analyzed in ANSYS software. The properties of the blades were as follows:

Thermoplastic Resin (plastic and vinyl material):
• Density of 1280 kg/m3
• Modulus of elasticity of 3.3 GPa
• Poisson's ratio of 0.3

Aluminium Alloy:
• Density of 2700 kg/m3
• Modulus of elasticity of 68.9 GPa
• Poisson's ratio of 0.33

Titanium Alloy:
• Density of 4510 kg/m3
• Modulus of elasticity of 102.81 GPa
• Poisson's ratio of 0.36

3.1 Meshing

After the blade was modelled, it was meshed. A mesh is a computer-aided design model split into discrete fragments that are attached together. The obtained profile of the continuous geometry, consisting of both the interior volume and the exterior surface, was specified as follows: Nodes: 220490; Elements: 125446; Min element size: 0.02 mm; Max turning angle: 60°; Average element size: 0.01 mm.

3.2 Boundary Condition

A boundary condition describes how a system, whether fluid or solid, interacts with its environment. Examples of boundary conditions are loads and flow velocities. The boundary conditions used were:
• The root of the blade had a fixed constraint.
• A pressure of 0.020 MPa was applied at the center of the blade during simulation.
The finite element analysis was carried out in ANSYS software for the structural analysis of the three blade materials, with a fixed constraint and a load pressure of 0.020 MPa.

3.3 Results/Observations

After the simulations, the results were observed and recorded for analysis, as shown in Fig. 4.


Fig. 4. Mass density

Fig. 5. Yield strength


Fig. 6. Ultimate tensile strength

Figure 4 compares the mass density (in g/cm3) of the blade materials for the 1 kVA turbine: Thermoplastic Resin 1.28, Aluminium Alloy 2.7 and Titanium Alloy 4.51. In Fig. 5, the yield strength of a material refers to the maximum stress that can be applied before it deforms permanently; Thermoplastic Resin has a yield strength of 57.2 MPa, Aluminium Alloy 275 MPa and Titanium Alloy 275.6 MPa. In Fig. 6, the ultimate tensile strength is the maximum stress that a material can endure before failing, as opposed to its compressive strength; Thermoplastic Resin has an ultimate tensile strength of 344.5 MPa, Aluminium Alloy 310 MPa and Titanium Alloy 344.5 MPa (Figs. 7, 8 and 9).

Fig. 7. Young’s modulus

Fig. 8. Poisson’s ratio

Fig. 9. Shear modulus

Figure 7 compares Young's modulus: Thermoplastic Resin 3.3 GPa, Aluminum Alloy 68.9 GPa and Titanium Alloy 102.81 GPa. Young's modulus measures the resistance of a material to elastic deformation under a load; it relates stress to strain. Figure 8 compares Poisson's ratio: Thermoplastic Resin 0.36, Aluminum Alloy 0.33 and Titanium Alloy 0.361. Poisson's ratio is a measure of how much the diameter of a material changes when it is pulled lengthwise. Figure 9 compares the shear modulus of the three materials; the shear modulus is the ratio of shear stress to shear strain. Thermoplastic Resin has a shear modulus of 1.21324 GPa, Aluminum Alloy 25.9023 GPa and Titanium Alloy 37.77 GPa (Figs. 10 and 11).
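The reported shear moduli follow from the Young's modulus and Poisson's ratio values above through the isotropic elasticity relation G = E / (2(1 + ν)). The short check below is illustrative only and simply reproduces the reported figures.

```python
# Check: shear modulus from Young's modulus E (GPa) and Poisson's ratio v,
# using the isotropic relation G = E / (2 * (1 + v)).
materials = {
    "Thermoplastic resin": (3.3, 0.36),      # -> ~1.213 GPa (reported 1.21324)
    "Aluminum alloy":      (68.9, 0.33),     # -> ~25.90 GPa (reported 25.9023)
    "Titanium alloy":      (102.81, 0.361),  # -> ~37.77 GPa (reported 37.77)
}

for name, (E, v) in materials.items():
    G = E / (2 * (1 + v))
    print(f"{name}: G = {G:.3f} GPa")
```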


Fig. 10. Von Mises stress

Fig. 11. Displacement

Von Mises Stress: Titanium Alloy has the lowest von Mises stress value, 0.2224, followed by Aluminum Alloy with 0.2569 and Thermoplastic Resin with 0.4510. Displacement/Deformation: Titanium Alloy has the lowest displacement/deformation value, 0.00524, followed by Aluminum Alloy with 0.00786, and lastly Thermoplastic Resin with 0.16192.

4 Discussion and Conclusion

From the simulation results on von Mises stress and displacement, Titanium Alloy had the lowest values for both, followed by Aluminum Alloy and lastly Thermoplastic Resin. There were only slight differences between Titanium Alloy and Aluminum Alloy, and a large difference with Thermoplastic Resin. Based on these outcomes, the material with the lowest von Mises stress and displacement values was selected. The simulation of the stress analysis of a small-scale HAWT blade has been conducted in this work. The obtained values of mass density, von Mises stress and displacement indicate the preferable material for a wind turbine blade: as the von Mises stress and displacement decrease, the material becomes more suitable for turbine blades. The assessment of the three different blade materials was carried out under the same pressure load, and stress and displacement were calculated using Inventor software for FEA. The results obtained from the Inventor software suggest that Titanium Alloy and Aluminum Alloy showed the lowest values in stress and displacement, while Thermoplastic Resin has the lowest mass density but extreme variation in von Mises stress and displacement; therefore, it can be concluded that Titanium Alloy is preferable for making wind turbine blades for low wind speed environments.

References
1. Zafar, U.: Literature Review of Wind Turbines (2018)
2. Laseinde, T., Ramere, D.: Low-cost automatic multi-axis solar tracking system for performance improvement in vertical support solar panels using Arduino board. Int. J. Low-Carbon Technol. 14(1), 76–82 (2019)
3. Brondsted, P., Lilholt, H., Lystrup, A.: Composite materials for wind power turbine blades. Annu. Rev. Mater. Res. 35, 505–538 (2005)
4. Pourrajabian, A., Mirzaei, M., Ebrahimi, M., Wood, D.H.: Effect of altitude on the performance of a small wind turbine blade: a case study in Iran. J. Wind Eng. Ind. Aerodyn. 126, 1–10 (2014)
5. Domnica, S.M., Ioan, C., Ionut, T.: Structural optimization of composite from wind turbine blades with horizontal axis using finite element analysis. Procedia Technol. 22, 726–733 (2016)
6. Cao, H.: Aerodynamics analysis of small horizontal axis wind turbine blades by using 2D and 3D CFD modelling. University of Central Lancashire, Preston, England (2011)
7. Mishnaevsky, L., Branner, K., Petersen, H., Beauson, J., McGugan, M., Sørensen, B.: Materials for wind turbine blades: an overview. Materials 10, 1285 (2017)
8. Babu, K., Subba, N., Reddy, N., Rao, N.: The material selection for typical wind turbine blades using a MADM approach & analysis of blades (2006)
9. Mao, Z., Tian, W., Yan, S.: Influence analysis of blade chord length on the performance of a four-bladed Wollongong wind turbine. J. Renew. Sustain. Energy 8, 023303 (2016)
10. Manwell, J., McGowan, J., Rogers, A.: Wind Energy Explained: Theory, Design and Application. Wiley, Hoboken (2002)
11. Bertin, J.J., Cummings, R.M.: Critical hypersonic aerothermodynamic phenomena. Annu. Rev. Fluid Mech. 38, 129–157 (2006)
12. Mikkelsen: Effect of free stream turbulence on wind turbine performance. Norwegian University of Science and Technology, Department of Energy and Process Engineering (2013)

Application of Fuzzy Cognitive Maps in Critical Success Factors. Case Study: Resettlement of the Population of the Tres Cerritos Enclosure, Ecuador

Lileana Saavedra Robles1(&), Maikel Leyva Vázquez2(&), and Jesús Rafael Hechavarría Hernández1(&)

1 Faculty of Architecture and Urbanism, University of Guayaquil, Cdla. Salvador Allende, Av. Delta y Av. Kennedy, Guayaquil, Ecuador
{lileana.saavedraro,jesus.hechavarriah}@ug.edu.ec
2 University Politecnica Salesiana, Guayaquil, Ecuador
[email protected]

L. S. Robles is an architect from the University of Guayaquil with 10 years of professional experience and a candidate for the degree of Master in Architecture with a mention in Territorial Planning and Environmental Management.

Abstract. Mining activity in Ecuador is one of the fundamental causes of environmental pollution and damage to the quality of life of the populations near the quarries. High levels of noise, dust and gases are among the causes of multiple illnesses suffered by the people who live in these places. In the present work, fuzzy cognitive maps are applied to the modeling, analysis and prioritization of critical success factors, taking into account the interrelationships among them with a systemic approach, for decision making in the resettlement plan for the inhabitants of the Tres Cerritos enclosure of the Taura Parish in Naranjal County, who are affected by the mining exploitation of the Cerro Pelado quarries. The results achieved improved the decision-making process by giving priority to the Basic information, Citizen engagement and Compensation factors to achieve success in the resettlement project.

Keywords: Fuzzy cognitive maps · Critical success factors · Territorial planning · Decision-making

1 Introduction

Territorial planning is a set of actions intended to guide the transformation, occupation and use of geographical spaces, taking into account the needs and interests of the population, the potential of the territory and harmony with the environment, in order to promote social and economic development [1]. The extraction of aggregates and stone by quarries is very important for mining and the development of towns, because its objective is to obtain construction materials; the problem is the environmental damage it causes in its surroundings [2]. The construction materials are obtained mechanically, using drilling machines to remove material from the earth's crust, followed by blasting with the help of explosives in the case of hard rock, which causes contamination by noise, dust and gas [3]. Therefore, the objective of this research is to contribute to territorial planning through the application of fuzzy cognitive maps for the modeling, analysis and prioritization of critical success factors, taking into account the interrelationships among them with a systemic approach, for decision making in the resettlement plan for the inhabitants of the Tres Cerritos enclosure of the Taura Parish in Naranjal County, who are affected by the mining exploitation of the Cerro Pelado quarries. Finally, a model of a population resettlement plan that can be applied to this case study is proposed, according to the methodology applied.

2 Materials and Methods

2.1 Delimitation of the Study Area

The Tres Cerritos site belongs to the Parroquia Taura of the Canton Naranjal, and is located at km 15 of the Boliche-Puerto Inca road. The Naranjal canton is located 88 km from the city of Guayaquil. Its average temperature is 25 °C and it is 17 m above sea level (Fig. 1).

Fig. 1. Case study location Source: Carta Topográfica Naranjal-Guayas (2013). Instituto Geográfico Militar, Ecuador.

2.2 FCM as a Representation of Mental Models

Fuzzy Cognitive Maps (FCM) improve cognitive maps by describing the strength of each relationship using fuzzy values in the [−1, 1] range or, more recently, computing with words (CWW) [4], especially the 2-tuple model. CWW is a methodology that allows a process of computation and reasoning using words belonging to a language instead of numbers. This methodology allows the creation and enrichment of decision models in which vague and imprecise information is represented through linguistic variables. An FCM can be represented by a weighted directed network where the nodes represent concepts and the arcs indicate causal relationships [5]. An adjacency matrix is constructed from the values assigned to the arcs, generally in numerical form [6]. In an FCM there are three possible types of causal relationships between concepts:
• Positive causality (Wij > 0): the increase (decrease) in the value of Ci leads to the increase (decrease) in the value of Cj.
• Negative causality (Wij < 0): the increase (decrease) in the value of Ci leads to the decrease (increase) in the value of Cj.
• Non-existence of a relationship (Wij = 0): there is no causal relationship between Ci and Cj.
Given their great usefulness, FCMs have been extended to model various situations; extensions based on grey systems theory [7], intervals [8], and intuitionistic fuzzy logic [9] can be found, among others. An FCM can be represented by a digraph (Fig. 2), in which the nodes represent concepts and the arcs indicate causal relationships [5].

Fig. 2. Fuzzy cognitive maps Source: [12]

When a set of individuals (k) participates, the adjacency matrix is formulated through an aggregation operator, such as the arithmetic mean. The simplest method is to find the arithmetic mean of each of the connections over the experts. For k experts, the adjacency matrix of the final FCM (E) is obtained as [10]:

$E = \frac{E_1 + E_2 + \ldots + E_k}{k}$   (1)

This ease of aggregation allows the creation of collective mental models with relative ease.
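As a small illustration of formula (1), the sketch below aggregates the adjacency matrices of several experts with NumPy; the two matrices are placeholder values, not those elicited in this study.

```python
import numpy as np

# Adjacency matrices elicited from k experts (illustrative 3-node FCMs, weights in [-1, 1])
E1 = np.array([[0.0, 0.6, 0.0],
               [0.0, 0.0, -0.4],
               [0.3, 0.0, 0.0]])
E2 = np.array([[0.0, 0.8, 0.0],
               [0.0, 0.0, -0.2],
               [0.5, 0.0, 0.0]])

# Formula (1): collective FCM as the arithmetic mean of the experts' matrices
E = (E1 + E2) / 2
print(E)
```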


3 Model Proposed

1. CSF modeling: The critical success factors that will provide more information about the project are identified, and the causal relationships between the CSFs are determined. The indicators constitute the nodes of the FCM, and the causal relationships constitute the edges. This information is enriched with numerical values in the following activity.
2. Selection of the measures: The aspect of the FCM, or the combination to be analysed, is selected. In the present study, it was decided to establish the strength of the connections between the nodes to determine their importance within the map.
3. Centrality calculation: The input and output values of the nodes (indegree and outdegree) are calculated to determine centrality. If more than one centrality measure is used, a composite centrality value is determined by adding the values.
4. Sorting and classification: In this activity the CSFs are ordered according to their importance in the model.

The FCM is represented as a weighted directed network (V, E), where V is the set of nodes and E is the set of connections between those nodes. To prioritize the most important nodes, the centrality of a factor (Ci) is determined from its outdegree (odi) and indegree (idi), taking into account the magnitude of the weights cij, as follows [11]. The outdegree od(vi) is the sum of the row entries of the adjacency matrix and reflects the strength of the relationships (cij) going out of the variable:

$od(v_i) = \sum_{j=1}^{N} c_{ij}$   (2)

The indegree id(vi) is the sum of the column entries and reflects the strength of the relationships (cji) coming into the variable:

$id(v_i) = \sum_{j=1}^{N} c_{ji}$   (3)

The centrality Ci is calculated as the sum of the indegree (idi) and the outdegree (odi), as expressed in the following formula:

$C_i = id_i + od_i$   (4)

Centrality in an FCM indicates how strongly a node is related to others, based on its direct connections. Nodes are classified according to the following rules (see the sketch below):
• Transmitter variables have a positive outdegree and zero indegree.
• Receiver variables have a positive indegree and zero outdegree.
• Ordinary variables have nonzero indegree and outdegree.
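The following sketch illustrates formulas (2)–(4) and the classification rules with NumPy; the adjacency matrix and the resulting ranking are illustrative placeholders, not the values obtained from the experts in this case study.

```python
import numpy as np

concepts = ["Citizen engagement", "Compensation", "Basic services",
            "Monitoring and tracking", "Basic information"]
# Illustrative aggregated adjacency matrix W, where W[i, j] is the causal weight from concept i to j
W = np.array([
    [0.0, 0.5, 0.3, 0.2, 0.0],
    [0.4, 0.0, 0.2, 0.0, 0.0],
    [0.0, 0.3, 0.0, 0.4, 0.0],
    [0.0, 0.0, 0.2, 0.0, 0.0],
    [0.6, 0.4, 0.3, 0.2, 0.0],
])

od = np.abs(W).sum(axis=1)   # outdegree: row sums, formula (2), using weight magnitudes
id_ = np.abs(W).sum(axis=0)  # indegree: column sums, formula (3)
c = od + id_                 # centrality, formula (4)

# Classification and ranking by centrality
for i in np.argsort(-c):
    kind = ("transmitter" if id_[i] == 0 and od[i] > 0
            else "receiver" if od[i] == 0 and id_[i] > 0
            else "ordinary")
    print(f"{concepts[i]:25s} od={od[i]:.2f} id={id_[i]:.2f} C={c[i]:.2f} ({kind})")
```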


4 Critical Success Factors in the Project

The following critical success factors were identified, as shown in Table 1.

Table 1. Critical success factors identified (Source: own elaboration)

CSF                       Description
Citizen engagement        Extent to which the community participates in decision-making
Compensation              Financial compensation to the community and individuals
Basic services            Basic services to be provided to the community in case of relocation
Monitoring and tracking   Monitor and follow up after resettlement, verifying the normal performance of the usual activities
Basic information         Information should be collected on the number of people to be resettled and the socio-cultural characteristics of the population

See Table 1.

4.1 Fuzzy Cognitive Map Obtained

The fuzzy cognitive map obtained is depicted in the following figure.

Fig. 3. Fuzzy cognitive maps Source: Own elaboration

The CSFs are classified and prioritized according to centrality measures (Figs. 3 and 4).

Fig. 4. Static analysis Source: Own elaboration

4.2 Factors

The factors are finally ranked as follows: Basic information > Citizen engagement > Compensation > Basic services > Monitoring and tracking.

5 Population Resettlement Partial Plan Proposal

The initial procedure to carry out the Population Resettlement Plan is composed as follows:
• Identify the number of families living in the Tres Cerritos enclosure of the Taura Parish in the Canton of Naranjal.
• Request from the Decentralized Autonomous Government (GAD) of the Canton of Naranjal the availability of the relocation or population resettlement area.
• Identify, through the Department of Appraisals and Cadastre of the GAD of Naranjal, whether the inhabitants are owners or possessors of the lots where they live.
• Create a Central Commission to inform and explain to the community the benefits that resettlement would bring them.
• Promote the participation of all affected social units in the choice of possible compensation alternatives, in the programs to be developed during and after the project, and in the evaluation and monitoring of the implementation of the resettlement plan to be established.
• Promote the design of a housing project to provide housing with basic services.
• Provide employment opportunities to the population when this project is implemented.

6 Conclusions

The factors were finally ranked, and the three most important are Basic information, Citizen engagement and Compensation. For resettlement plans, one must first know the situation of the inhabitants and their possible conditions; citizen participation must prevail, in order to reach agreements and establish compensation for populations affected by mining activity in case they are resettled. The State, being the highest socio-environmental authority, must promote the well-being of the country, establishing and enforcing government policies that allow mitigating and preventing the problems generated by mining, in this case the extraction of aggregates and stone.

References
1. Márquez, B.H., Castro, J.P., Cruz, E.P.: Integration centers: an experience of land management in the state of Tabasco. Probl. Dev. 47(184), 111–136 (2016)
2. Oca-Riscoal, M.D.E.: Environmental diagnosis of the Yarayabo quarry province Santiago. HOLOS 01, 30–48 (2018)
3. Hernández Jatib, N., Ulloa Carcasés, M., Almaguer-Carmenate, Y., Rosario Ferrer, Y.: Environmental evaluation associated to the exploitation of the construction materials Site La Inagua, Guantánamo, Cuba. Luna Azul 38, 146–158 (2014)
4. Rickard, J.T., Aisbett, J., Yager, R.R.: A new fuzzy cognitive map structure based on the weighted power mean. IEEE Trans. Fuzzy Syst. 23(6), 2188–2201 (2015)
5. Kosko, B.: Nanotime. Avon Books, New York (1997)
6. Liu, Z.-Q.: Causation, Bayesian networks, and cognitive maps. Acta Automática Sin. 27(4), 552–566 (2001)
7. Salmeron, J.L.: Modelling grey uncertainty with fuzzy grey cognitive maps. Expert Syst. Appl. 37(12), 7581–7588 (2010)
8. Papageorgiou, E., Stylios, C., Groumpos, P.: Introducing interval analysis in fuzzy cognitive map framework. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), LNAI, vol. 3955, pp. 571–575 (2006)
9. Iakovidis, D.K., Papageorgiou, E.I.: Intuitionistic fuzzy reasoning with cognitive maps. In: IEEE International Conference on Fuzzy Systems, pp. 821–827 (2011)
10. Kosko, B.: Hidden patterns in combined and adaptive knowledge networks. Int. J. Approximate Reasoning 2(4), 377–393 (1988)
11. Brandes, U., Borgatti, S.P., Freeman, L.C.: Maintaining the duality of closeness and betweenness centrality. Soc. Netw. 44, 153–159 (2016)
12. Leyva, M., Hechavarría, J., Batista, N., Alarcon, J.A., Gómez, O.: A frame for PEST analysis based on fuzzy decisión maps. Revista ESPACIOS 39(16) (2018)

Design of the Future Workstation: Enhancing Health and Wellbeing on the Job

Dosun Shin(&), Matthew Buman, Pavan Turaga, Assegid Kidane, and Todd Ingalls

The Design School, College of Health Solutions, Art Media and Engineering, Arizona State University, Tempe 85287, USA
{Dosun.Shin,mbuman,pturaga,Assegid.Kidane,TestCase}@asu.edu

Abstract. Sedentary behavior (i.e., waking behaviors in the seated/lying position at low energy expenditure), or sitting, has emerged as a potential risk factor for numerous chronic diseases and all-cause mortality during the last decade. Working adults accumulate a large number of sitting hours at work, and interventions have been developed to reduce sitting in the workplace. An interdisciplinary university research team was created to develop new designs for instrumented sit-stand workstations: embedded sensors will be used to assess postural stability and sit-to-stand efficiency, as well as to facilitate "in-the-moment" prompts to increase standing behaviors. These features will be used for real-time as well as summative feedback. Feedback techniques will include unobtrusive displays, light modulation, and on-demand auditory feedback. The evaluation will include assessing the short-term and long-term effectiveness of the instrumented sit-stand workstations for objectively measured workplace sitting, cardio-metabolic risk biomarkers, and work productivity.

Keywords: Sensory feedback · Human-systems interaction · Product design and prototyping · Health and solution

1 Background

Sedentary behavior has emerged as a potential risk factor for numerous chronic diseases and all-cause mortality during the last decade [1]. For example, prolonged sitting time is prospectively associated with increased risk for type-2 diabetes, cardiovascular disease, some cancers, and all-cause mortality. This appears to be true even among individuals who meet US national physical activity guidelines of 150 min/week of moderate-to-vigorous physical activity. Working adults accumulate a large number of sitting hours at work, and interventions have been developed to reduce sitting in the workplace [2–4]. Studies from across the world show that physical inactivity represents a significant fraction of healthcare cost: 3.7% of the overall health care costs in Canada [5], and more than 15% of both medical and non-medical costs in China [6]. A recent review [7] suggests the workplace as a prime target for behavioral interventions. Motivated by these findings, this study aims to develop new designs for instrumented sit-stand workstations to reduce time spent sitting at the workplace, which are functional, ergonomic, enlivened via media for long-term use, with accurate electronic design and signal processing to enable scientifically valid measurements of efficacy. This project calls for expertise in design, media synthesis, signal processing, electronics design, and mobility interventions. Our team includes experts from each of these areas, all of whom have backgrounds in bridging design, arts, and engineering for health-related interventions.

2 Introduction

The project kick-off of this funded collaborative research took place in October 2019 with the principal investigators and their respective graduate students from The Design School (TDS), the College of Health Solutions (CHS) and the School of Arts, Media and Engineering (AME) at Arizona State University. In 2013, two of the PIs collaborated to ameliorate sedentary behavior in children and adults. Looking to expand on these previous efforts, this interdisciplinary research team is now addressing sedentary behavior in the workplace environment. Prof. Shin (Principal Investigator) from TDS, partnering with Prof. Buman from CHS and Co-PIs Prof. Turaga, Kidane, and Ingalls from AME, is developing a new product that is cost-effective for businesses to adopt and easy to integrate into the current work environment. Having considered other products currently in the marketplace that do not address cost-effectiveness, the idea of a seat cushion seemed to meet most criteria. This solution could house low-cost electronics whose data can be gathered and analyzed with computer algorithms that advise the user of proper stand times according to the research findings. After the first meeting, all team members focused their efforts on a functioning seat cushion that could easily be placed on any office chair. The AME engineering team proposed the SmartMat add-on cushion solution with embedded pressure sensors to register and send the information to be analyzed. They also worked on building and testing the prototype, consisting of sensors, cables, and electronic modules. Based on the engineering architecture, the TDS team then developed a look-alike computer model to visualize the refined design.

3 Aims and Scope of the Project

The long-term aim is the development of a comprehensive solution to encourage healthier physical behaviors at the workplace, encompassing: a) a new workstation design with inexpensive embedded electronics for sensing human movement, pressures, and other related attributes; b) new signal processing methods to convert low-fidelity sensed data into clinically relevant surrogates; c) media feedback on attributes such as time spent sedentary, quality of the sit-to-stand activity, and other posture indices; d) evaluation and long-term intervention studies under deployed conditions. This is a new interdisciplinary collaboration between the PIs, who have a past collaborative track record on smaller-scale projects, which informs our research vision. The team's vision and its various component threads are summarized in Fig. 1 below.


Fig. 1. Team’s vision and component threads

Our initial prototype for the workstation design is restricted to a chair, focuses on sitting and sit-to-stand transitions, provides feedback on time spent sitting and smoothness of standing, and evaluates the intervention on a small scale.

4 Innovation

4.1 Engineering and System Design

The new sit-stand workstation design, which uses embedded sensors for real-time feedback, will provide the necessary breakthroughs and an engaging user experience in promoting standing and reducing sitting time in the workplace. The embedded real-time auditory feedback technology allows users' postural stability and sit-to-stand efficiency to be assessed, and this new product innovation will enhance users' health and wellbeing, as well as the productivity of their work. The cushion electronics consist of 8 FSR (Force Sensing Resistor) pressure sensors embedded within the fabric layers. Four on the seat are placed lengthwise in a two-column configuration to sense the thigh pressure distribution. Another four are on the back cushion, placed in a similar distribution to sense back pressure and therefore provide seating posture data. The current working prototype, which sends data to the controller via an Ethernet-type cable and connector, uses the data from the left thigh sensor to control the mobile application on an Android device. The main controller is SparkFun's ESP32 Thing Plus device. A custom auxiliary interface board using a resistor network is used to condition the FSR sensor data, making it suitable for the ESP32 Thing's analog-to-digital converters. The processed sensor data is then sent to an Android mobile device to produce the behavior modification notifications. The data is also archived on ThingSpeak for visualization and future analysis. The ESP32 Thing is programmed to accomplish this using the Arduino IDE. The controller is powered by a lithium battery that lasts about 17 h (about two 8-h workdays). On system turn-on, the device automatically connects to the local wireless network and immediately establishes connectivity with the ThingSpeak cloud server to archive sensor data. The 'Futureworkstation' Android mobile application can then connect to the controller to receive sensor data over a Bluetooth LE channel (Fig. 2).

Fig. 2. System architecture

The ‘Futureworkstation’ Android application developed for the system receives sensor data from the left thigh sensor in the SmartMat. Using the Bluetooth data received, the application determines the state of occupancy of the chair and initializes several timers to monitor the person's movement. The application plays a sound notification whenever the user remains sitting for longer than 30 min. Both the sitting and movement periods are configurable in the application to adapt to current clinical recommendations or to test new experimental sit/stand period scenarios that may be more practical and in tune with different workplaces.
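To make the occupancy and notification logic concrete, the sketch below is an illustrative Python model of the behavior described above; it is not the actual ‘Futureworkstation’ source code, and the pressure threshold, polling interval, and callback names are assumptions introduced only for this example.

```python
import time

# Illustrative sketch of the sit-timer notification logic (assumed values, not the app's code).
SIT_LIMIT_S = 30 * 60        # configurable sitting period (30 min per current recommendations)
PRESSURE_THRESHOLD = 200     # assumed raw sensor value indicating the chair is occupied
POLL_INTERVAL_S = 1.0

def monitor(read_left_thigh_sensor, play_sound_notification):
    """Track chair occupancy from the left thigh sensor and prompt after prolonged sitting."""
    sit_start = None
    while True:
        occupied = read_left_thigh_sensor() > PRESSURE_THRESHOLD
        if occupied:
            if sit_start is None:
                sit_start = time.monotonic()          # user just sat down
            elif time.monotonic() - sit_start >= SIT_LIMIT_S:
                play_sound_notification()             # user has been sitting too long
                sit_start = time.monotonic()          # restart the period after prompting
        else:
            sit_start = None                          # chair vacated, reset the timer
        time.sleep(POLL_INTERVAL_S)
```

Here `read_left_thigh_sensor` and `play_sound_notification` are hypothetical callbacks standing in for the Bluetooth data source and the Android notification, respectively.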

4.2 Prototyping

The engineering development of the hardware and software began as concurrent processes. On the hardware side, pressure sensors installed on acrylic frames for protection and robustness (Fig. 3) were distributed inside the seat cushion (Fig. 4). These pressure sensors are activated when the person sits down on the cushion. The controller, mounted outside, receives the data from the pressure sensors and distributes it wirelessly to a mobile device and the cloud. The mobile device software starts a timer and keeps it running as long as the sensor registers pressure. In this implementation, messages to the user consist of activity encouragement prompts (Figs. 5 and 6).

Fig. 3. Pressure sensor

Fig. 4. Seat cushion with sensors

Fig. 5. Working cushion


Fig. 6. Thingspeak data showing transitions from sit-to-stand & Android application

5 Usability Test and Methods

5.1 Study Sample

Our target population was full-time office workers who primarily completed their job duties while seated. The inclusion criteria were that participants (a) worked primarily at a desk job; (b) were employed full time (at least 30 h per week); and (c) were between 18 and 65 years of age. Pregnant individuals were excluded from the study.

5.2 Study Design and Procedures

The purpose of this study was to develop a comprehensive solution for encouraging healthier physical behaviors at the workplace, which was done in the form of feedback from participants about the design and function of the Future Workstation. With IRB approval from the university, participants were recruited via flyers and emails distributed to potentially eligible participants in December 2019, and the enrollment process took place in January 2020. Participants were shown the type of voice prompts to expect and were told when those prompts were supposed to signal them. The participants were encouraged to take notes on their experiences throughout their work day so as not to forget anything they may have experienced while using the Future Workstation. Each participant was then left with the device for the 8-h workday and interviewed at the end of the workday regarding their experiences. The open-ended interview included topics such as the work environment, a general overview of the Future Workstation, the product's application and utility, and general thoughts. Interviews were recorded and later transcribed for meaning and summarized. Informed consent was obtained from each participant prior to participation.

5.3 Results

Five female office workers were recruited to participate in the study, and they viewed the general concept of the Future Workstation as positive and promising. Participants were asked about their work environment in general before using the Future Workstation. The overall feedback on work environments was mixed, reflecting different occupations and responsibilities. The majority of participants felt that their work environment supported both sitting and standing behaviors during work hours (Fig. 7). Half of the participants who indicated that their work environment supported standing had, and used, a sit-stand workstation. Participants indicated that they think about how much they sit throughout their workday and were open to incorporating different postures (i.e., standing) and movement throughout the work day. 80% of participants stated that their co-workers would also want to participate in such activities.

Participants also commented on using the Future Workstation as an everyday product. All of the participants stated that the intervention of the Future Workstation would work well within the office setting. When asked if the Future Workstation would encourage them to stand more at work, 40% stated that it would be a great reminder to stand more in the office setting. The other 10% stated that it would depend, on a case-by-case basis, on what they were doing when the prompts were given. Participants who had a sit-stand workstation stated that it would be a useful complement to their workstation. Participants were not concerned about the voice prompts causing distractions. None of the participants were able to respond to the voice intervention because no voice prompt indicated the need to get up and stand. When asked whether they were able to respond to the visual cues on the phone, 80% of participants were able to respond.

Participants also provided feedback on the utility of the Future Workstation product. Participants reported that the product helped contribute to their standing or movement throughout their work day, with 80% indicating that it did not contribute to a decrease in their sedentary activity throughout their workday. 20% of participants identified that the visual cue helped them become more active, but there were points throughout the day where vocal prompts could have helped but did not work. All participants indicated that the padding in the chair was comfortable, but 40% of the participants indicated that, for the sensors to work, they had to adopt a reclining position. Only 60% of participants stated that they were able to maintain focus while using the Future Workstation, while the other 40% indicated that they were not able to maintain focus because they were trying to listen for the voice prompt.

Participants discussed their satisfaction level with the Future Workstation. 80% of participants believed that the product had the potential to get other individuals active in the workplace, although their reasons for this differed: 40% felt the visual prompts would be effective, 40% felt the voice prompts would be effective, and 20% indicated both would be effective. All participants who had a sit-stand workstation indicated that this product would be a useful tool. All participants indicated that the Future Workstation would be more useful if the voices had worked. When asked what they would change about the chair design, 20% stated they would add a rubber back so the chair would not slip, 40% would like to see the padding more form-fitting to the chair, and 60% would like to see lumbar support for the chair. When asked how the Future Workstation would affect productivity, 80% stated that there was no negative effect on their productivity throughout the workday.

The participants were also asked to rate their total experience with the product from 1 (poor) to 5 (excellent): 60% rated their experience as a 4, 20% as a 3, and 20% as a 2. 80% of participants would recommend the Future Workstation to other co-workers or friends (Fig. 8).

Fig. 7. Supportiveness

Fig. 8. Number of responses to visual prompts

5.4 Discussion

The idea of a ‘Future Workstation’ was found to be of interest to our participants. Many of the participants saw this product as something to complement additional sit-to-stand tools, or as another good option for those who do not have additional sit-to-stand tools at their disposal. There was little to no indication from the participants of disruptions to productivity due to the Future Workstation. Negative reactions to the Future Workstation were largely due to technical glitches in the app rather than the overall product. There was generally high interest in the product, and its integration into the participants' work environment was viewed positively overall. This integration was rated particularly highly among those who already had a sit-stand workstation.


Efficient FPGA Implementation of Direct Digital Synthesizer and Digital Up-Converter for Broadband Multicarrier Transmitter

Cristhian Castro1 and Mireya Zapata2

1 Electronic Engineering Department, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
[email protected]
2 Research Center of Mechatronics and Interactive System, Universidad Indoamérica, Machala y Sabanilla, Quito, Ecuador
[email protected]

Abstract. The high performance of FPGA devices allows moving traditionally analog stages into the digital world. This article introduces the implementation of a Digital Up-Converter, which is part of a broadband system. The system uses polyphase decomposition to achieve 5 GSPS sampling rates. The transmitter uses 7 data channels, each divided into 16 phases at 312.5 MHz. The model implements a DDS suited to the specific needs of the system, keeping the carrier frequencies constant, which reduces resource utilization and simplifies the architecture of the DDS. The model is coded in Verilog and simulated at RTL and gate level. In order to validate the output, it is compared to a finite precision model in Matlab. The maximum clock frequency is measured using timing analysis, obtaining adequate results in the operation and utilization of hardware resources.

Keywords: Digital Up-Converter (DUC) · Direct Digital Synthesis (DDS) · Polyphase structures · Multi-rate systems · DSP · FPGA



1 Introduction

Communication systems in recent years have seen a wide range of FPGA-based technology applications due to their parallel processing properties, re-configurability, and low power consumption [1]. The parallelism capability of an FPGA makes it possible to process DSP algorithms at high sampling rates close to IF, bringing the digital part closer to the RF front-end. For this purpose, one of the most critical elements is the up-conversion that brings the baseband signal to IF. Therefore, several efforts have been dedicated to implementing efficient architectures [2]. In this context, Direct Digital Synthesis (DDS) is one of the main options used for carrier generation in modern digital systems that require high signal quality, high bandwidth and agility of synthesis [3]. The design presented in this paper is part of a broadband, multi-band communications system [4] in which the 7 baseband channels of 312.5 MHz are interpolated by 16. The proposed Digital Up-Converter (DUC) uses an efficient, application-specific polyphase architecture to bring the baseband signal to IF depending on the characteristics of the transmitter; this avoids unnecessary storage of carrier signal samples and simplifies the architecture of a conventional DDS. The document is structured as follows: Sect. 2 briefly presents the multi-band system; Sect. 3 describes the up-conversion process, the architecture of a conventional DDS and its efficient polyphase implementation, as well as the architecture of the DUC; Sect. 4 discusses the results obtained.

2 Multi-band System

The design presented in this work is part of a broadband, multi-band transmission system whose previous stages are described in [4], where the characteristics of the system are established and the design of the blocks for data mapping to QPSK, 16, 64 and 256 QAM and the pulse shaping filters are presented. The system is mainly constrained by the features of the digital-to-analogue converter (DAC), which has a sampling rate of 5 GSPS over a 16-channel interface, with 12-bit resolution. The proposed system has been synthesized and implemented on a Virtex 7 family FPGA device from Xilinx. The Digital Up-Conversion (DUC) blocks are implemented on the same device. Table 1 shows the hardware (HW) resource occupation of the stages prior to the DUC, which are mapping and pulse shaping.

Table 1. Use of HW resources for pulse shaping and mapping.
Resource | Utilization | Available | Utilization %
LUT      | 1293        | 303600    | 0.43
FF       | 2015        | 607200    | 0.33
LUTRAM   | 1           | 130800    | 0.01
IO       | 46          | 700       | 6.57
BUFG     | 1           | 32        | 3.13
DSP48    | 0           | 2800      | 0

Given that the DAC samples 16 channels at 5 GSPS, the transmitter system interpolates the input symbols by 16. To do this, a polyphase separation is performed for each channel so that each phase is simultaneously present at a rate of 312.5 MHz, which together generates a rate of 5 GSPS. Knowing from the Nyquist theorem that the maximum frequency will be located at half the DAC sampling rate, i.e. 2.5 GHz, 7 channels of 312.5 MHz can be introduced within the Nyquist bandwidth without Inter-Symbol Interference (ISI), and the center of each carrier will be located at:

$$f(n) = \frac{1}{2}\,\frac{f_{Nyquist}}{N_c}\,\left[2n - 1\right], \qquad n = 1, 2, 3, \ldots, N_c \tag{1}$$



where $f(n)$ represents the digital frequency of the channel at position n, $f_{Nyquist}$ is the Nyquist digital frequency (1/2) and $N_c$ is the number of carriers. The general block diagram of the system is shown in Fig. 1, and Fig. 2 shows the spectrum at the output of the broadband transmitter.
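As a quick check of Eq. (1), the following illustrative Python snippet (not part of the original implementation) computes the normalized carrier centers for $N_c = 7$ and converts them to absolute frequencies assuming the 5 GSPS DAC described above.

```python
# Illustrative sketch: carrier placement per Eq. (1) for a 5 GSPS DAC and 7 carriers.
f_dac = 5e9          # DAC sampling rate in Hz
f_nyquist = 0.5      # Nyquist digital frequency (cycles/sample)
n_carriers = 7

for n in range(1, n_carriers + 1):
    f_digital = 0.5 * (f_nyquist / n_carriers) * (2 * n - 1)   # Eq. (1)
    print(f"carrier {n}: {f_digital:.4f} cycles/sample -> {f_digital * f_dac / 1e6:.1f} MHz")

# Carrier 1 lands at 1/28 cycles/sample (about 178.6 MHz); adjacent carriers are spaced 1/14 apart.
```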

Fig. 1. General block diagram of the multi-band transmitter.

Fig. 2. Carrier arrangement within the Nyquist bandwidth for the multi-band transmitter.

3 Digital Up-Conversion Process

The up-conversion block must translate the baseband I/Q signals into a real IF-centered signal. This is done by modulating a carrier of the form $e^{j\omega nT} = \cos(\omega nT) + j\sin(\omega nT)$ with the I/Q signal; therefore, the IF signal is represented as $y(n) = I(n)\cos(\omega nT) - Q(n)\sin(\omega nT)$ [5], and its implementation follows the scheme in Fig. 3.
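The short Python sketch below illustrates this quadrature mixing operation on a sample stream; the carrier frequency and the random QPSK-like baseband symbols are assumed example values and do not correspond to the paper's actual test vectors.

```python
import numpy as np

# Illustrative sketch of the quadrature up-conversion y(n) = I(n)cos(wnT) - Q(n)sin(wnT).
n = np.arange(1024)
f_c = 1.0 / 28.0                                   # assumed normalized carrier (cycles/sample)
i_bb = np.random.choice([-1.0, 1.0], n.size)       # stand-in baseband I samples
q_bb = np.random.choice([-1.0, 1.0], n.size)       # stand-in baseband Q samples

y_if = i_bb * np.cos(2 * np.pi * f_c * n) - q_bb * np.sin(2 * np.pi * f_c * n)
```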



Fig. 3. Block diagram of a conventional DUC.

The main process of the DUC is the product of a carrier signal with each I/Q branch. For this purpose, it is necessary to have a digital oscillator capable of generating $\sin(\omega nT)$ and $\cos(\omega nT)$; this function is performed by direct digital synthesis (DDS). The device achieves digital processing suitable for frequency generation in an FPGA, as it is relatively simple to implement and uses few HW resources [6].

3.1 DDS

The basic scheme of a DDS consists of a phase accumulator and a phase-to-amplitude converter, as shown in Fig. 4. The first block of the DDS is an M-bit accumulator controlled by the phase increment $\Delta P$, which produces a linear phase increase of $\Delta P$ every clock cycle. The overflow property of the accumulator and $\Delta P$ are used to set the carrier frequency $f_c$, which is calculated by $f_c = \Delta P \cdot f_{clk}/2^{M}$; the frequency resolution is obtained when $\Delta P = 1$, giving $f_{c,min} = f_{clk}/2^{M}$, with M being the word size of the accumulator.

Fig. 4. Basic diagram of a DDS.

The quality of the frequency generated by the DDS is quantified by the Signal-to-Noise Ratio (SNR) and the Spurious-Free Dynamic Range (SFDR), which are related to the number of bits used to quantize the signal samples (W), the accumulator size (M) and the retained phase bits after truncation (L) through $SNR \approx 6.02\,W + 1.76$ [dB] and $SFDR \approx 6.02\,L - 3.92$ [dB].
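To make the tuning and quality relations concrete, the following Python sketch models a basic phase-accumulator DDS and evaluates the rule-of-thumb estimates above. The parameter values (M = 32, W = 16, L = 14, and the target frequency) are illustrative assumptions, not the values chosen in this design.

```python
import numpy as np

# Minimal phase-accumulator DDS model (illustrative parameter values, not the paper's).
M, W, L = 32, 16, 14          # accumulator bits, amplitude bits, phase bits kept after truncation
f_clk = 312.5e6               # accumulator clock in Hz
f_target = 11.16e6            # desired carrier frequency in Hz

delta_p = round(f_target * 2**M / f_clk)       # phase increment word
f_c = delta_p * f_clk / 2**M                   # actual synthesized frequency
f_res = f_clk / 2**M                           # frequency resolution (delta_p = 1)

# Generate N output samples: accumulate phase, truncate to L bits, map phase to amplitude.
N = 4096
phase_acc = (delta_p * np.arange(N)) % 2**M
phase_idx = phase_acc >> (M - L)                               # phase truncation
amplitude = np.round((2**(W - 1) - 1) * np.sin(2 * np.pi * phase_idx / 2**L))

snr_est = 6.02 * W + 1.76      # rule-of-thumb SNR in dB
sfdr_est = 6.02 * L - 3.92     # rule-of-thumb SFDR in dB
print(f_c, f_res, snr_est, sfdr_est)
```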

3.2 Polyphase Decomposition

The design of the DUC starts from a polyphase scheme that generates, through 16 phases, the samples corresponding to a signal $\sin(2\pi f_n + \phi_i)$ for the Q branch and $\cos(2\pi f_n + \phi_i)$ for the I branch, where $f_n$ represents the frequency of channel n and $\phi_i$ the phase of each block, calculated by $\phi_i = 2\pi \cdot i/28$ for $0 \le i \le 15$. Figure 5 presents the phases of the DDS for carrier 1/28 and branch I [7]. When decimating and separating the phases of a periodic function such as sine or cosine, the samples of each phase also repeat. In this case, when looking for symmetries within each phase, it is found that there is periodicity every 7 samples; that is, each block of the DDS stores only 7 samples with a word length of W = 16 bit, for each I/Q branch of the 7 carriers, which requires storing 21,504 Kbit. Both the frequency and the sampling rate are kept constant, thus simplifying the phase accumulator design, which becomes only a 3-bit linear counter from 0 to 6. The diagram for the polyphase DDS is shown in Fig. 6.
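The following illustrative Python sketch generates the per-phase sample tables for the carrier at 1/28 cycles/sample and shows the 7-sample periodicity described above; the array names and the quantization step are assumptions made for this example only.

```python
import numpy as np

# Illustrative check of the polyphase decomposition for the carrier at 1/28 cycles/sample.
# Decimating this sinusoid by 16 gives, per phase, a tone at 16/28 = 4/7 cycles/sample,
# so each phase repeats every 7 samples and only those 7 samples need to be stored per branch.
f_c = 1.0 / 28.0
n_phases = 16
samples_per_phase = 7
W = 16                                    # amplitude word length in bits

lut = np.zeros((n_phases, samples_per_phase))
for i in range(n_phases):
    phi_i = 2 * np.pi * i / 28            # phase offset of polyphase branch i
    m = np.arange(samples_per_phase)      # sample index within the branch (rate f_s/16)
    lut[i] = np.cos(2 * np.pi * f_c * (m * n_phases) + phi_i)   # I-branch samples

lut_fixed = np.round(lut * (2**(W - 1) - 1)).astype(np.int32)   # quantize to 16-bit words
```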

Fig. 5. Distribution of the 16 phases for carrier 1/28 branch I

Fig. 6. Schematic of the polyphase DDS for the multi-carrier system

3.3 DUC Implementation

To perform the up-conversion, each phase of the I/Q signals must be modulated with the corresponding phase of the carrier, and the real and imaginary products must be added to generate the IF signal. The HW module of the DUC has control signals such as clk, reset and val_in; the implementation scheme for the DUC of one of the phases of the system is shown in Fig. 7. The output data format of the DDS is s[18, 16], while that of the pulse shaper output is s[16, 15]; therefore, the data grows to s[34, 32] at the output of the product and to s[35, 32] at the output of the sum. The design presents efficient scalability in area and power since it is implemented in dedicated DSP48 blocks, which have an architecture for data processing up to 48 bits. To provide the appropriate output, the numerical representation s[16, 15] is used throughout the implementation to maintain adequate signal quality [8].

Fig. 7. Implementation of the DUC for one phase of a multi-band system carrier

4 Results and Discussion

The model has been coded in Verilog. Verification is done by comparing signals with a finite precision model built in Matlab/Simulink. The test bench reads the reference signals generated from a Matlab text file and compares them with the output of the HDL model. Figure 8 shows the signal output of the finite precision model and the HDL model. In the test bench, each sample of the HDL model is compared with the reference model and an error count is performed. In the gate-level simulation, no difference is recorded between the reference model and the HDL model.



The HW has been synthesized and deployed using Vivado on a Virtex 7 family FPGA. The resource occupation results are shown in Table 2, and the maximum clock speed is calculated by measuring the WNS (Worst Negative Slack) using a time constraint script. The WNS obtained for a 3.0 ns constraint is 0.227 ns; accordingly, the maximum clock frequency of the model is $f_{max} = 360.6$ MHz. The low use of HW resources is mainly due to the fact that the frequency of each carrier is always constant; this helps to use only the necessary resources, but it eliminates the possibility of reconfiguration or frequency agility, which would be a good option for future work since the FPGA device is suitable for such a purpose.
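For clarity, the reported maximum frequency follows from the standard relation between the timing constraint and a positive worst negative slack (achievable period = constraint minus slack); the short check below is illustrative only.

```python
# Illustrative check of f_max from the reported timing constraint and WNS.
t_constraint_ns = 3.0
wns_ns = 0.227
f_max_mhz = 1e3 / (t_constraint_ns - wns_ns)   # about 360.6 MHz, matching the reported value
print(round(f_max_mhz, 1))
```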

Fig. 8. Waveforms resulting from the simulation of the HW model and comparison with the Matlab finite precision model

Table 2. Use of HW resources from the implementation of the multi-carrier DUC
Resource | Utilization | Available | Utilization %
LUT      | 9016        | 303600    | 2.96
FF       | 5094        | 607200    | 0.84
LUTRAM   | 4           | 130800    | 0.01
IO       | 260         | 700       | 37.14
BUFG     | 1           | 32        | 3.13
DSP48    | 224         | 2800      | 8.0

References 1. Rittner, F., Glein, R., Thomas, K., Benjamin, B.: Broadband FPGA payload processing in a harsh radiation environment. In: Conference on Adaptive Hardware and Systems (AHS), vol. 1, no. 1, pp. 151–158 (2014) 2. Patil, R., Gawande, G.S.: Efficient design of a digital up converter using Xilinx system generator. Int. J. Eng. Res. Technol. (IJERT) 3(1), 2284–2286 (2014)



3. Yin, Y., Liao, Y.: Design and implementation micro-broadband radar signal source. In: Xing, S., Chen, S., Wei, Z., Xia, J. (eds.) Unifying Electrical Engineering and Electronics Engineering, vol. 238, pp. 1391–1398. Springer, New York (2014) 4. Castro, C., Gordon, C., Encalada, P., Cumbajin, M.: Multiband broadband modulator implementation on field-programmable gate array. In: International Conference on Applied Technologies (ICAT), Quito (2019) 5. de Figueiredo, F.A.P., Filho, J.A.B., Lenzi, K.G.: FPGA design and implementation of Digital Up-Converter using quadrature oscillator. In: 2013 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Amman, pp. 1–7 (2013). https://doi.org/10.1109/AEECT.2013.6716423 6. Vanka, J.: Direct Digital Synthesizers: Theory, Design and Applications. Department of Electrical and Communications Engineering - Helsinki University of Technology, Helsinki (2000) 7. Denis, D.C., Cordeiro, R.F., Oliveira, A.S., Viera, J., Silva, T.O.: A fully parallel architecture for designing frequency-agile and real-time reconfigurable FPGA-based RF digital transmitters. IEEE Trans. Microw. Theory Tech. 66(3), 1489–1499 (2018) 8. Fontes Pupo, E., Diaz Hernandez, R., Martinez Alonso, R., Hernandez Sanchez, Y., Guillen Nieto, G.: Direct to RF up-converter for DTMB modulator. In: IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), vol. 1, no. 1, pp. 1–15 (2016)

Low-Cost Embedded System Proposal for EMG Signals Recognition and Classification Using ARM Microcontroller and a High-Accuracy EMG Acquisition System

Santiago Felipe Luna Romero and Luis Serpa-Andrade

Grupo de Investigacion en Inteligencia Artificial y Tecnologias de Asistencia GIIATa, Universidad Politecnica Salesiana, Sede Cuenca, Cuenca, Ecuador
[email protected], [email protected]

Abstract. An electromyography (EMG) signal is a record of the action potentials produced by the human brain with the intention of activating or deactivating the muscles of the skeletal area. These signals are considered very important within the biomedical area because they contain information on muscle contraction and relaxation, which can be used in various fields. The recognition and classification of EMG signals is a widely studied subject within the scientific community because of its diverse applications, such as prosthesis control, virtual environment control, bio-robotics, etc. An EMG signal classification system comprises four stages: acquisition, preprocessing, processing and classification. In the last three stages, algorithms are used that, first, minimize the interference produced during the acquisition of this type of signal; second, correctly extract the information or characteristics of the signals; and third, apply techniques that classify the EMG signals in the best possible way. As technology advances, new needs emerge: EMG recognition systems require continuous improvement, so new techniques are proposed to improve the processing and classification of these types of signals; at the same time, these techniques demand greater computing power so that each of the stages mentioned above can be fulfilled in the best possible way. Therefore, this article proposes a low-cost embedded system that allows EMG signals to be recognized and classified. For the signal acquisition stage, it is proposed to use a high-precision multi-channel acquisition system, and for the preprocessing, processing and classification stages, a high-performance, low-cost microcontroller such as the ARM Cortex-M4. The proposal aims to create an EMG signal recognition and classification system that can be used in different areas of biomedicine, such as the design of prostheses, with the particularity of being an embedded system of high computational range and low cost; this will allow the use of the algorithms that emerge day by day to improve the recognition and classification of EMG signals.

Keywords: Signal processing · Embedded system · Microcontroller · EMG




1 Introduction

An electromyography (EMG) signal is a record of the action potentials produced by the human brain with the intention of activating or deactivating the muscles of the skeletal area [1]. These signals are considered very important within the biomedical area because they contain information on muscle contraction and relaxation, which can be used in various fields. The recognition and classification of EMG signals is a widely studied subject within the scientific community because of its diverse applications, such as prosthesis control, virtual environment control, bio-robotics, etc. [2]. A classification system for these EMG signals comprises four stages: acquisition, preprocessing [3], processing and classification [4]. In the last three stages, algorithms are used that, first, minimize the interference produced during the acquisition of this type of signal; second, correctly extract the information or characteristics of the signals; and third, apply techniques to classify the EMG signals in the best possible way. From the scientific point of view, there are several stochastic as well as deterministic methods applied in each of the previously mentioned stages of EMG signal processing. With the passing of the years and the advancement of technology, especially in the area of microcontrollers, the recent trend has been to incorporate EMG signal recognition and classification into embedded systems [5–7]. This idea brings with it two drawbacks: on the one hand, the computational cost related to the acquisition, feature extraction and classification of EMG signals, and on the other, the economic cost of building an embedded system with the characteristics necessary to recognize and classify electromyography signals [8]. With this background, this article proposes a low-cost embedded system that allows EMG signals to be recognized and classified: for the signal acquisition stage, it is proposed to use a high-precision multichannel acquisition system, and for the processing and classification stage, a high-performance microcontroller with a relatively low cost, such as the ARM Cortex-M4. This study is organized as follows: Sect. 1 presents the introduction, Sect. 2 describes the proposal, and Sect. 3 presents the discussion of the proposal.

2 Proposal

The embedded system proposed in this work basically consists of two stages, corresponding to acquisition/preprocessing and processing/classification, respectively. Figure 1 shows the block diagram of the system. This diagram is divided into three stages (Physical, Acquisition, and Processing and Classification). The physical stage corresponds to the placement of the electrodes in the area where the electromyographic measurements are intended to be taken. The second stage corresponds to the analog-to-digital conversion and coding of the electromyography signals; for this stage, the system proposes to use a high-precision acquisition system that guarantees the fidelity and resolution of the EMG signal. The third and final stage corresponds to the processing and classification of the EMG signals. This stage is the most important because it is here that current novel algorithms are applied to process, recognize and classify the signals; for this reason, the information processing core must be capable enough to perform each of the techniques discussed above. Based on the above, the device chosen for processing is the ARM Cortex-M4 microcontroller, which is a relatively low-cost device for its performance, since it has a 32-bit processing bus and can reach a core frequency of 180 MHz.

Fig. 1. Operation of the proposed system diagram (physical stage: electrodes placed in the zone where the muscle information is to be registered; high-accuracy EMG acquisition system stage: analog interface, digitalization and codification; processing and classification stage: intelligent system to recognize and classify EMG signals)

2.1 High-Accuracy EMG Acquisition System

The systems for acquiring and coding biopotentials, in this case electromyographic signals, have the following stages: circuit power, preamplification using an instrumentation amplifier, analog filtering, rectification, analog-to-digital conversion and control [9, 10]. For the analog-to-digital conversion stage, a 12-bit ADC can divide the input signal voltage range into 4095 intervals. This is sufficient for most kinesiological configurations; very small signals may need more amplification to achieve better amplitude resolution [9]. To further improve the resolution and fidelity of the EMG signal, a 24-bit ADC is recommended, as expressed in [10]. The sampling rate must be at least 1000 Hz to 1500 Hz to avoid losses in the signal; the works [9–12] suggest a sampling rate of 2 kHz to capture a greater amount of signal information. Among the characteristics that the ADC must comply with are: simultaneous sampling of the channels to avoid timing errors, the possibility of serially connecting several of these devices, low power consumption and minimum processing cost [10]. This acquisition system can be observed in Fig. 2, which shows the circuit isolation and protection stage, signal coupling using an instrumentation amplifier, and analog filtering using a band-pass filter, all prior to the analog-to-digital conversion stage.
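To put the resolution and data-rate figures above in perspective, the short Python sketch below compares the voltage step of a 12-bit and a 24-bit ADC and the resulting per-channel data rate at 2 kHz; the 3.3 V reference and the 8-channel count are assumed example values, not specifications from this proposal.

```python
# Illustrative sketch: ADC step size and raw data rate for surface EMG acquisition.
v_ref = 3.3                      # assumed ADC reference voltage in volts
fs = 2000                        # sampling rate in Hz suggested by [9-12]
channels = 8                     # assumed number of EMG channels

for bits in (12, 24):
    lsb_uv = v_ref / (2**bits - 1) * 1e6           # smallest distinguishable step in microvolts
    rate_kbps = fs * bits * channels / 1000        # raw data rate before coding
    print(f"{bits}-bit ADC: LSB = {lsb_uv:.3f} uV, raw rate = {rate_kbps:.0f} kbit/s")
```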

Fig. 2. Analog acquisition and preprocessing of an EMG signal diagram (isolation and protection, instrumentation amplifier with electrode inputs and reference, band-pass filter, and ADC driver)

2.2 Processing and Classification Stage

As mentioned above, in this work, in addition to the stage of acquisition and coupling of EMG signals, it is also proposed to include a stage of recognition and classification of EMG signals. The proposal is to use a computationally high-end microcontroller that can guarantee the use of algorithms for the recognition and classification of this type of signal. For this work, the ARM Cortex-M4 microcontroller is chosen, because it is a low-cost device for the benefits it offers, such as working with a 32-bit bus and reaching a frequency of 180 MHz. The base algorithm that the microcontroller must include is the one shown in Fig. 3. The algorithm starts from the digitized and encoded signal obtained in the second stage. With this signal, the first objective is to extract the information from the EMG signal; to achieve this, there are several feature extraction techniques that can work in the time domain or the frequency domain, as proposed in [5–7, 14]. Next, the algorithm must enter an EMG signal recognition process in which, with the help of a previously trained intelligent model, the system can recognize an EMG signal. The final process corresponds to the classification of the signal, where, based on the decision given by the recognition stage, the system returns the recognized signal or not. For this stage, different methods such as neural networks like the multilayer perceptron [13], Hidden Markov Models [14], Support Vector Machines [15] and fuzzy logic [16] have obtained promising results in classifying EMG signals.
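As a minimal sketch of the pipeline in Fig. 3 (feature extraction, trained intelligent model, classification decision), the Python example below uses three common time-domain features and an SVM classifier; the window length, the chosen features, the use of scikit-learn and the random stand-in data are illustrative assumptions consistent with [5–7, 13–16], not the exact algorithms selected in this proposal.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative EMG recognition/classification pipeline (assumed features and classifier).
def time_domain_features(window):
    mav = np.mean(np.abs(window))                    # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))             # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)       # zero crossings
    return [mav, wl, zc]

def extract(windows):
    return np.array([time_domain_features(w) for w in windows])

# Hypothetical training data: EMG windows of 400 samples (200 ms at 2 kHz) with gesture labels.
train_windows = np.random.randn(100, 400)
train_labels = np.random.randint(0, 2, 100)

model = SVC(kernel="rbf").fit(extract(train_windows), train_labels)

# Classification of a newly acquired, digitized window received from the acquisition stage.
new_window = np.random.randn(400)
predicted_gesture = model.predict(extract([new_window]))[0]
```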

426

S. F. L. Romero and L. Serpa-Andrade






Fig. 3. Flowchart of the procedure to be performed by the core of the embedded system (start; encoded EMG signal; feature extraction; recognition algorithm with trained intelligent model; classification process; decision; return recognized signal)

3 Discussion

Currently there is great complexity in designing devices for acquiring surface EMG signals. This complexity is reduced by introducing an embedded system, which can divide several processes at the hardware level, for both processing and data management. The proposed system architecture has great scalability for developing other processes relevant to the scientific area, such as signal filtering at the hardware level, the application of artificial intelligence architectures for pattern recognition, digital control modules, etc. In this way, the execution of processes can be optimized, saving resources in both hardware and programming for a multitasking device. The modular design of the analog signal acquisition interface and the digitization interface means the system is not limited to the designed application only: the analog and digitization boards can be integrated into other systems as required, without having to redo the entire design process, which demands a lot of time and effort from the development team. This system is proposed as a basis for the development of a final system for the acquisition, processing and classification of EMG signals that can be used in the different areas of biomedicine, such as the design of prostheses, with the advantage of being a low-cost embedded device with a high computational range such as the ARM Cortex-M4, which in turn will allow the use of the algorithms that emerge day by day to improve the recognition and classification of EMG signals.

References 1. Norali, A.N.: Human breathing classification using electromyography signal with features based on mel-frequency cepstral coefficients. Int. J. Integr. Eng. (2017) https://publisher. uthm.edu.my/ojs/index.php/ijie/article/view/2019/1225. Accessed 20 Sep 2019 2. Luna-Romero, S., Delgado-Espinoza, P., Rivera-Calle, F., Serpa-Andrade, L.: A domotics control tool based on MYO devices and neural networks. Advances in Intelligent Systems and Computing, vol. 590, pp. 540–548 (2018) 3. Palacios, C.S., Romero, S.L.: Automatic calibration in adaptive filters to EMG signals processing. RIAI – Rev. Iberoam. Autom. e Inform. Ind. 16(2), 232–237 (2019) 4. Shin, S., Langari, R., Tafreshi, R.: A performance comparison of EMG classification methods for hand and finger motion. In: ASME 2014 Dynamic Systems and Control Conference, DSCC 2014, vol. 2 (2014) 5. Benatti, S., et al.: A versatile embedded platform for EMG acquisition and gesture recognition. IEEE Trans. Biomed. Circuits Syst. 9(5), 620–630 (2015) 6. Duran Acevedo, C.M., Duarte, J.E.J.: Development of an embedded system for classification of EMG signals. In: 2014 3rd International Congress of Engineering Mechatronics and Automation, CIIMA 2014 - Conference Proceedings (2014) 7. Liu, J., Zhang, F., Huang, H.H.: An open and configurable embedded system for EMG pattern recognition implementation for artificial arms. In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2014, pp. 4095–4098 (2014) 8. Guevara, D.P.: SISTEMA DE ADQUISICIÓN DE SEÑALES EMG DE SUPERFICIE MULTICANAL PARA PRÓTESIS DE MIEMBRO SUPERIOR, Cuenca 9. Pulido, R.M.M.: PROTOTIPO CONTROL DE VEHÍCULO ROBOT POR SEÑALES EMG. In: Congr. Int. Electrónica, Control y Telecomunicaciones 2019, October 2019 10. Pashaei, A., Yazdchi, M.R., Marateb, H.: Designing a low-noise, high-resolution, and portable four channel acquisition system for recording surface electromyographic signal. J. Med. Signals Sens. 5(4), 245–252 (2015) 11. Madusanka, D.G.K., Gopura, R.A.R.C., Amarasingha, Y.W.R., Mann, G.K.I.: IBVS and EMG based reach-to-grasp task planning method for a trans-humeral prosthesis. In: SII 2016 - 2016 IEEE/SICE International Symposium on System Integration, pp. 447–452 (2017) 12. Abayasiri, R.A.M., Madusanka, D.G.K., Arachchige, N.M.P., Silva, A.T.S., Gopura, R.A.R. C.: MoBio: a 5 DOF trans-humeral robotic prosthesis. In: IEEE International Conference on Rehabilitation Robotics, pp. 1627–1632 (2017) 13. Tenore, F.V.G., Ramos, A., Fahmy, A., Acharya, S., Etienne-Cummings, R., Thakor, N.V.: Decoding of individuated finger movements using surface electromyography. IEEE Trans. Biomed. Eng. 56(5), 1427–1434 (2009) 14. Chan, A.D.C., Englehart, K.B.: Continuous myoelectric control for powered prostheses using hidden Markov models. IEEE Trans. Biomed. Eng. 52(1), 121–124 (2005)



15. Shenoy, P., Miller, K.J., Crawford, B., Rao, R.P.N.: Online electromyographic control of a robotic prosthesis. IEEE Trans. Biomed. Eng. 55(3), 1128–1135 (2008) 16. Ajiboye, A.B., Weir, R.F.F.: A heuristic fuzzy logic approach to EMG pattern recognition for multifunctional prosthesis control. IEEE Trans. Neural Syst. Rehabil. Eng. 13(3), 280– 291 (2005)

Rural Public Space Evolution Research - Based on the Perspective of Social Capital

Zhang Hua and Wuzhong Zhou

School of Design, Shanghai Jiao Tong University, Shanghai, China
[email protected]

Abstract. Rapid urbanization has changed the functions and types of many spaces, including the separation of production space from living space, the weakening of political space, the generation of new spaces for the production of social capital, and the improvement of public space's external communication ability. It is found that there are three periods in the evolution of rural space in China. Rural public space is closely related to the production of social capital. The reproduction of community public space is essentially the reproduction of the power relationships between spatial subjects. This kind of power relationship is manifested in the three dimensions of social capital: trust, norms and networks. Over a long historical period, under the influence of these three dimensions, traditional villages gradually formed a unique public space layout and form. The driving force of rural public space evolution comes from the change of social capital.

Keywords: Public space · Social capital · Spatial change · Rural revitalization

1 Introduction

The Chinese countryside is a collection of agricultural culture spaces composed of family, village and market town. Family, village and market town constitute an independent yet interdependent living organism through their interaction. The logic of farmers' social behavior revolves around agricultural production activities, so rural space everywhere follows the life rhythms of agricultural society. Since the mid-to-late 1980s, China has undertaken large-scale urbanization, which has had a profound impact on all aspects of social, economic, political and cultural life. However, with the development of urbanization, the problem of rural recession is becoming more and more serious, and the gap between urban and rural areas is expanding. Rural society has experienced unprecedented transformation and change: many old orders have been broken, and the once familiar environment has gradually become alienated. A large number of farmers have lost contact with the land and become cheap urban hired workers, and the phenomenon of hollowing out in rural areas is becoming more and more serious. The result of this phenomenon is that interpersonal relationships in rural areas tend to weaken, and the rural governance structure and moral value system maintained by geographic and blood ties have suffered a huge impact. The large-scale flow of the rural labor force to cities and towns has greatly changed the stock and structure of social capital as the adhesive between different individuals [4]. In the context of rural revitalization, it has become an urgent issue to realize rural “industrial prosperity, ecological livability, rural civilization, effective governance and rich life”. Lefebvre pointed out that public space is the place where interpersonal relationships are shaped, arguing that “the social relationship of production is a kind of social existence; they project themselves into space and brand them. At the same time, they produce space themselves” [3]. Inspired by this, we naturally ask whether there is a relationship between the drastic changes in rural social capital and the changes in public space, and, if there is a connection, how social capital can be cultivated to better serve the construction of rural public space. We try to construct the mechanism by which social capital participates in the change of rural public space, so as to solve the outstanding problems in rural space construction.

2 Theoretical Framework

The concept of “public space” is frequently used in many disciplines, such as geography, sociology and planning. However, when this concept is used, the objects referred to are not exactly the same, and its specific connotation varies considerably. For example, the concept of public space in sociology tends to be close to that of the public sphere: apart from physical spaces that people can enter freely to carry out various exchanges, the virtual space represented by institutionalized organizations and activities is also an important component of public space. Public space in the field of planning focuses more on the spatial entity, meaning the material (indoor and outdoor) spaces that villagers can enter and leave freely, that are open to all people, and in which public activities take place. According to the classification method of urban public space, the spatial form of rural public space is divided into point, line and surface, and according to its function, it is divided into leisure space, road space, trading space, production space, etc. According to the types of public communication and the spaces and places that carry them, combined with our field survey of public space, public space is here divided into production space, living space and political space. This division does not refer to specific public spaces in the micro sense (such as the areas around big trees, stages, etc.), but is a meso-level classification of rural public space that facilitates a dialogue between this concept and the concept of social capital. This kind of classification is an ideal type: there is no strict boundary between the various public spaces, and some public spaces may have political, living and production characteristics at the same time; the classification only takes the main aspects of their communication attributes. The concept of social capital comes from social networks; it is a relatively abstract concept, and there is no unified definition at present. According to Bourdieu, “social capital is the sum of real or virtual resources, which are inseparable from a kind of persistent network possession”. The amount of capital possessed by a particular actor depends on the scale and quantity of the network he owns, and on the amount of resources possessed by all the actors connected with him through his own power. Bourdieu's concept of social capital has attracted the attention of scholars at home and abroad. Based on different research perspectives, the connotation of the concept and the observation standards of social capital have been supplemented and improved in all aspects (Table 1). All the above studies believe that social capital is a public resource, which can be enjoyed by collective members but not owned by individuals. Therefore, this paper



Table 1. Research status of social capital
Researcher | Connotation of social capital | Selected observation indicators
Bourdieu (1985) [1] | A collection of real or potential resources in a more or less institutionalized relationship network that people are familiar with and recognize | –
Putnam (1993) [2] | The characteristics of social organizations, such as trust, norms and networks, which can improve social efficiency by promoting coordination and action | Number of trust, norms and nongovernmental organizations
Nahapiet and Ghoshal (1998) [3] | Resources existing in the relationship network owned by individuals or organizations, including structural dimension, relationship dimension and cognitive dimension | –
Lin (2001) [4] | Resources that can be absorbed or mobilized in purposeful actions embedded in social structures | –
Qiuling Chen (2005) [5] | – | Rule system, network system, trust system, regional culture, regional identity, regional history
Rong Hu (2008) [6] | – | Community identity, community trust and social network







Selected observation indicators –

Number of trust, norms and nongovernmental organizations





Rule system, network system, trust system, regional culture, regional identity, regional history Community identity, community trust and social network

defines social capital as the social resources which are shared by the collective members and can be borrowed to benefit the individual. Rural public space is the main place of rural interpersonal communication. Social capital, as the product of interpersonal communication, can reflect the characteristics of rural interpersonal relationship to some extent, and then react in rural space.

3 Evolution Characteristics of Rural Public Space 3.1

Overview of the Case Area

In this study, rural areas in eastern Zhejiang Province are selected as the research cases. The eastern part of Zhejiang Province has a long history. It is a traditional farming area with numerous ancient villages and ancient towns. It has obvious regional cultural characteristics. From the perspective of development and evolution, due to its unique hilly terrain, the internal interpersonal network and rural morphology of the villages in the eastern part of Zhejiang Province are not easily affected by the external factors. In addition, as a leading area for the construction of beautiful villages and characteristic towns, the eastern part of Zhejiang Province has accumulated a wealth of Practical

432

Z. Hua and W. Zhou

experience. Therefore, the case study of eastern Zhejiang is of great value. This study lasted for three months, through field visits to 12 natural villages in Fenghua City, Ninghai County, Taizhou Luqiao District, Shaoxing, etc. through literature search and semi-structured random interviews, the composition of traditional rural public space in eastern Zhejiang was determined, and the evolution and motivation of villages and public space were understood. 3.2

Analysis of the Evolution of Rural Public Space

3.2.1 Villages in the Traditional Period (Before 1949) Since ancient times, China has been a farming society, and rural manpower and materials are the basis for the maintenance and continuation of feudal rule. It is true that the 1911 Revolution led to the collapse of the feudal monarchy, which lasted for more than two thousand years [7]. Later, the government also implemented a series of rural reconstruction measures, with the intention of deepening the reform into the countryside [8]. However, due to the social inertia and the self-sufficient economic system, it has not fundamentally changed [9]. On the contrary, some rural elites continue to become The core of rural social life has little influence on social capital level, so this paper holds that the rural areas in the Republic of China are still the continuation of traditional villages in essence. In the traditional context, rural society is a close community, closed, single and homogeneous. On the one hand, the organizational system of the whole society is arranged according to blood relationship and geography, and family is the projection of blood relationship and geography. At the social level, it is “the imperial power does not go to the countryside”, and the state power only stays at the county level, in the vast rural areas. It presents a state of autonomy. Such a social relationship is described by Fei Xiaotong as “a circle of waves pushed out by throwing a stone on the water. Everyone is the center of the circle pushed out by his social influence, and those pushed by the waves of the circle are connected”, that is, the “differential pattern”. Under the “differential order pattern”, the family naturally becomes the main organization structure of the power and resource integration of the whole society, and plays a huge function. At the same time, the self-sufficient smallscale peasant economy makes people’s production radius cannot exceed the scope of villages and families. For an individual, his contact scope is only limited to a few miles. In this geographical and social area, he can meet almost all his requirements. In this case, the individual’s social network is closed and the external communication is very low. “Differential order pattern” constitutes the main relationship network between people, providing emotional support, economic support and social support. On the other hand, “writing creates class”, and “squire”, an intellectual in the countryside, is revered by ordinary villagers because of his knowledge of writing and etiquette. The demands of farmers are also transmitted to the government through the squire. Over time, between the squire and the farmers The relationship of “asylum obedience” has been formed [10]. The status advantage of the squire, to a certain extent, influences the rural social life and becomes the core of the rural political life.

Rural Public Space Evolution Research

433

3.2.2 Villages in the Period of Collectivization (1950–1977) After the founding of the people’s Republic of China, the rural social transformation centered on land reform was carried out in an effort to establish a new way of rural social production and life Communication [11]. A series of reforms have reached the peak with the comprehensive establishment of the people’s commune as the symbol. During this period, people’s communes performed the functions of township government on behalf of other communes. Communes, production brigades and production teams were subordinate to each other at all levels. Militarization and collectivization were their organizational characteristics. With the reconstruction of production relations and the sinking of state power, the original families in rural areas were regarded as the basis of feudal rule and thoroughly swept away, and the family authority was replaced by the administrative authority from the state. Farmers’ production, life and even thinking are forced into the national unified command. The collective labor of the production team which transcends the blood relationship replaces the production function of the family and the family. The common goal and value pursuit make every farmer become a “comradeship” relationship. Through production teams, production teams and people’s communes, individuals and organizations are closely linked, and collective production becomes the source of people’s lives. Political life has become an important part of every farmer’s life. Through frequent political movements, the state links the atomized individuals together, the big family disintegrates, and the farmer transforms into a cell in a strong political system. The original economic basis of rural private ownership and the authoritative order represented by the head of the squire clan have been changed [12]. A high degree of politicization requires individuals to be completely subordinated to organizational arrangements. Under the comradeship network, based on the infinite enthusiasm for revolution and the worship of leaders, the communication foundation of the whole space has been established, and the grass-roots political power organizations have become the only authority that people can rely on. But it is because of the high embedded social capital that the cultivation of horizontal interpersonal relationship encounters obstacles. 3.2.3 Villages in the Period of Rapid Urbanization (After 1978) Since the implementation of reform and opening up in 1978, the proportion of urban population in China’s total population has increased year by year, from 18.96% in that year to 58.52% in 2017. When rural areas make great contribution to the modernization of our country, the outflow of population also makes great changes in their social structure. Since 1986, the country has recruited a large number of farmers’ rotation or contract workers to work in textile, construction and other industries [13], resulting in the decline of some villages’ population, the decrease of schools, and the emergence of the tide of migrant workers leaving the countryside and the land. Secondly, with the withdrawal of state power from the countryside, all kinds of traditional power have stepped onto the historical stage, affecting the communication mode of rural society. On the one hand, rapid urbanization makes the material wealth and human capital continuously converge to the city, the capacity of urban space is close to saturation, and urban disease begins to appear. 
According to the theory of urban economics, when the urban scale develops to zero or even negative marginal benefit, it will inevitably form the trend of increasing and differentiation, and it is imperative to “reduce the burden”



for the city. On the other hand, the Fifth Plenary Session of the 16th CPC Central Committee put forward the goal of building a new socialist countryside - “perfect facilities, beautiful environment and harmonious civilization” as the direction of building a new socialist countryside [14]. Therefore, marked by 2005, China’s urbanrural relationship has entered a turning point, marking the country’s strategic transition from “blood drawing” to “blood transfusion” in rural areas [15]. In this period, the political space showed a revival trend. Closer social networks are the reason for this trend [16]. On the one hand, with the construction of new countryside, the construction of grass-roots party organizations is constantly strengthened, strengthening the vertical contact of Rural Interpersonal society. As the link of the party and the masses, various official political spaces such as Party member activity center, administrative building of village committee, cultural auditorium, etc. have been built; on the other hand, the rural kinship has been reconstructed, strengthening the horizontal contact among people, such as ancestral temple, ancestral hall, etc. The traditional political space has renewed its charm. The survey found that since 2005, there have been 6 new and rebuilt ancestral temples and ancestral halls in the survey area, accounting for 100% of the existing buildings.

4 Conclusion and Discussion

It is found that the three dimensions of social capital promote the evolution of rural spatial content and form through different combinations (Fig. 1).






Fig. 1. Impact model of social capital and rural space (social trust, social norms and social networks acting on living space, production space and political space)

The living space is more affected by social trust and social norms. Interpersonal trust is the premise of public life. Higher social trust can promote the development of living space and promote the emergence of more living space. Social norms restrict the behavior in living space and regulate the boundary of interpersonal communication. Higher social norms are conducive to the rational layout of living space. Production space is influenced by social norms and social networks. Social norms ensure the order


of production, so good social norms are conducive to the development of production space, while social networks are a necessary condition for production. Production space generated by village-level social networks is usually located within the village and convenient for organizing labor, whereas production space generated by town-level social networks is usually market- or traffic-oriented, convenient for circulating the means of production and personnel. The political space is more affected by social networks and social trust, since political activities cannot be separated from social networks. Before the new round of village mergers, the political space of each village served its own villagers; after the mergers, the social network of the central village expanded and the vitality of its political space was enhanced, while the political space of merged villages changed use and became new living space. Social trust is the premise of political life: the higher the social trust, the stronger the vitality of the political space and the higher its frequency of use.

References
1. Bourdieu, P.: The social space and the genesis of groups. Theory Soc. 06, 723–744 (1985)
2. Putnam, R.D., Leonardi, D.R.: Making democracy work: civic traditions in modern Italy. Contemp. Sociol. 03, 306–308 (1994)
3. Nahapiet, J., Ghoshal, S.: Social capital, intellectual capital, and the organizational advantage. Acad. Manag. Rev. 23(2), 242–266 (1998)
4. Lin, N., Ensel, W.M., Vaughn, J.C.: Social resources and strength of ties: structural factors in occupational status attainment. Am. Sociol. Rev. 46(4), 393–405 (1981)
5. Chen, Q.: Regional social capital: the goal and path dependence of Development Zone. Shanghai University (2005)
6. Hu, R.: Social capital and political participation of urban residents. Sociol. Res. 05, 142–159 (2008)
7. Jiang, P.: The changes of rural social structure in North China during the period of the Republic of China. Nankai J. 04, 18–23 (1998)
8. Pan, J.: The appraise of cooperatives' development in the period of Minguo. China Rural Surv. 02, 34–44 (2002)
9. Li, J., Dai, X.: A sketchy investigation concerning the forming of modern rural financial network in republican era – analysis concentrating on lower-middle Yangtze river area. J. Hebei Univ. (Philos. Soc.) 30(2), 36–40 (2005)
10. Zhao, X., Shaoping, F.: Multiple subjects, asylum relationship and institutional change of cooperatives. China Rural Surv. 02, 2–12 (2015)
11. Wang, P.: Inspection on the village temple, guildhall and stage of Northern Shanxi in the Qing Dynasty – take temple of Dragon in Shijing village as an example. J. Shanxi Datong Univ. (Soc. Sc. Ed.) 01, 51–54 (2014)
12. Xie, D.: On the experience of rural social transformation in the early years of new China. Study C.P.C. Hist. 03, 28–36 (2010)
13. Tianfu, W., Fei, W., Youcai, T.: Land collectivization and the structural transformation of traditional rural families. Soc. Sci. China 02, 41–60 (2015)


14. Gongping, Y.: A historical investigation and reflections on the development of commune and brigade enterprise before 1984. Contemp. China Hist. Stud. 14(2), 60–69 (2007)
15. Guanping, C., Bin, M.: The changes and causes of institutional segmentation of China's labor market since the reform and opening up. J. Soc. Theory Guide 07, 21–23 (2003)
16. Chen, J.: "State power building" and transformations of village politics in China. J. Shenzhen Univ. (Hum. Soc. Sci.) 23(1), 75–80 (2006)

Road-Condition Monitoring and Classification for Smart Cities

Diana Kassem1 and Carlos Arce-Lopera2

1 Arkass Consulting, Calle 64 # 4 - 90, 760046 Cali, Colombia
[email protected]
2 Universidad Icesi, Calle 18 # 122 - 135, 760031 Cali, Colombia
[email protected]

Abstract. Smart Cities require the deployment of sensing technology that periodically monitors city resources, such as the road infrastructure. Indeed, road maintenance is considered a key element of city management. Here, we propose an in-vehicle optical monitoring system that classifies and monitors road conditions. The system consists of three main modules providing road-condition sensing, geospatial localization, and information storage and classification. The road-condition classification algorithm is not only capable of identifying potholes and bumps on the road but also of showing the degree of road damage, to guide city authorities in designing strategic road maintenance.

Keywords: Human factors · Human-systems integration · Systems engineering

1 Introduction

Smart Cities require the design and development of sensing technology that periodically records and monitors citizen activities and the state of the associated city resources [1]. This relies on the involvement of local government to invest in the deployment of sensors, big data and machine learning applications in order to increase the efficiency of public spending and infrastructure management. Evaluating the condition of the transportation infrastructure is a labor-intensive, time-consuming and expensive process. Traditional road evaluation protocols include measurements taken manually in specific locations, along with subjective visual interpretation and recording [2]. Moreover, the estimation of road condition and deterioration is subjective and qualitative. In Cali, the third largest city of Colombia, the number of private vehicles has grown during the last decade at an average rate of 7% a year, resulting, among other traffic problems, in continuous road deterioration. Road maintenance is considered a key element in the management of transportation systems. Moreover, in Cali there is a high correlation between citizens' perception of public management efforts and the quality of the roads, as this is the most common complaint about infrastructure deficiencies. Previous research on pothole detection systems emphasizes the technical challenges and limitations of remote sensing using 3D reconstruction [3], 2D vision [2, 4], acceleration sensors [5, 6], or combinations of these [7]. Other researchers have developed


guidelines for pothole classification to support decision-making systems [8]. However, the overall design of a road-condition monitoring system for Smart Cities remains unclear. Here, we propose an in-vehicle optical monitoring system to classify the road condition using GPS technology to locate the information on a GIS technology application. The system offers a non-contact method for road-condition management allowing periodical, comprehensive and quantitative monitoring of transportation infrastructure.

2 System Description

A prototype implementation of the road monitoring system was designed and developed as a proof of concept in order to demonstrate the functionality and modularity of the system design. Three main modules were designed. First, the sensor module, including a depth camera and an inclinometer, records the road condition as an RGBD image; the inclination values serve as input for calibrating the image registration process. The second module associates a GPS location with a segmented part of the road image for transmission to the cloud. Finally, the third module is a storage and classification module in the cloud, designed to store and analyze the data. Figure 1 shows a schematic view of the system.

Fig. 1. Schematic of the optical road monitoring system.
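As a minimal illustration of how the three modules might exchange information, the following sketch defines a hypothetical record for one segmented road disturbance; the field names, types and example values are assumptions for illustration only and are not taken from the paper.

from dataclasses import dataclass

@dataclass
class RoadDisturbance:
    """Hypothetical record produced by the sensing/GPS modules and sent to the cloud."""
    latitude: float      # GPS position of the segmented disturbance
    longitude: float
    length_cm: float     # bounding box along the driving direction
    width_cm: float      # bounding box across the lane
    depth_cm: float      # negative for potholes, positive for bumps
    timestamp: str       # acquisition time, ISO 8601

# Example: a 30 cm x 50 cm pothole, 10 cm deep
sample = RoadDisturbance(3.43722, -76.52250, 50.0, 30.0, -10.0, "2020-02-01T10:15:00")
print(sample)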

The first module needed a support and stabilizer system to eliminate high frequency movement when recording the data. The stabilizer was custom-built to support the optical sensor at the back of the car used in the tests. The external mechanical stabilizer was designed to compensate for camera instability caused by the car vibration and movement. The stabilizer used an iso-elastic frame connected by a low friction gimbal to the optical sensor and a counterbalancing weight that kept the sensor pointing


straight to the road. The distance between the optical sensor and the road was adjusted to approximately two meters which allowed the sensor to record a complete image of one car lane (approximately 3 m wide). To improve the image registration process, an inclinometer with a gyroscope was carefully placed near the optical sensor allowing constant monitoring of the relative position of the optical sensor. Using the values from the inclinometer, a calibration process was performed to derive the tridimensional position of the RGBD images. With this process, pavement absolute tilt can be monitored and registered accurately (Fig. 2).

Fig. 2. Sensor module hardware schematic. Number 1 represents the low-friction gimbal used for camera stabilization. Number 2 and 3 represent the RGBD camera and the inclinometer, respectively. Both devices were encapsulated in a weather proof case with a counterbalance weight.
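A minimal sketch of how the inclinometer readings could be used to correct the recorded point cloud for camera tilt is shown below; the paper does not give its calibration equations, so the simple roll/pitch rotation-matrix formulation and the angle convention are assumptions.

import numpy as np

def tilt_correct(points, roll_deg, pitch_deg):
    """Rotate an (N, 3) point cloud so the road plane becomes horizontal.

    roll_deg / pitch_deg: inclinometer angles, assumed to be rotations
    about the x and y axes, respectively.
    """
    r, p = np.radians([roll_deg, pitch_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    return points @ (Ry @ Rx).T

# Example: undo a 2.5 degree roll and 1.0 degree pitch measured by the inclinometer
cloud = np.random.rand(1000, 3)
leveled = tilt_correct(cloud, roll_deg=2.5, pitch_deg=1.0)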

This first module was connected to a laptop computer that recorded the images in the Polygon File Format, designed for simple 3D data storage in an offline database. Before recording, signal processing techniques were applied to reduce sensor noise and minimize information redundancy. At this stage, the sensor module recorded one car lane for one kilometer in a single file of approximately 380–420 MB, meaning that 1000 km of the transport infrastructure of a small city could be recorded in three dimensions in less than half a terabyte. However, this amount of information remains too expensive to transmit via the Internet. The first module scans the road and records its condition in a continuous 3D file; the second module labels that information using a GPS sensor and performs a preliminary road classification based on the size and depth of road deterioration. In our test protocol, we determined that all spatial information deviating from the ideal road by more than 10 cm in length or width and 5 cm in depth needed to be segmented. The spatial segmentation was performed using the two-dimensional bounding box circumscribing the width and length of the road disturbance. Then, the GPS localization data with the segmented information


is sent to the cloud to the third module. Figure 3 shows an example of a segmented pothole. For visualization purposes, in Fig. 3, color determines the difference in depth information. However, the depth information that is stored and sent to the cloud is numerical data that can be visualized differently depending on the application.

Fig. 3. Example of a segmented pothole
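The segmentation rule described above (deviations larger than 10 cm in length or width and 5 cm in depth) can be illustrated with a short depth-map sketch; the thresholds come from the paper, while the grid resolution and the connected-component approach are assumptions made only for this example.

import numpy as np
from scipy import ndimage

def segment_disturbances(depth_map, cell_cm=1.0, depth_thr_cm=5.0, size_thr_cm=10.0):
    """Return bounding boxes (in cm) of road regions deviating from the ideal plane."""
    mask = np.abs(depth_map) >= depth_thr_cm           # candidate pothole/bump cells
    labels, n = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        length = (sl[0].stop - sl[0].start) * cell_cm
        width = (sl[1].stop - sl[1].start) * cell_cm
        if length >= size_thr_cm or width >= size_thr_cm:
            boxes.append((length, width,
                          float(depth_map[sl].min()), float(depth_map[sl].max())))
    return boxes

# Example: a synthetic lane section (1 cm grid, 1 m x 3 m) with one 10 cm deep pothole
lane = np.zeros((100, 300))
lane[40:70, 120:170] = -10.0   # 30 cm x 50 cm pothole
print(segment_disturbances(lane))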

The third module receives the data for recording and analysis. This module was designed as a fully customizable web service that allows classification profiles to be defined, enabling different types of road-condition classification; in this way, application users can personalize the visualization of the road condition following different approaches. To test our system, a controlled environment was designed. A set of 54 types of potholes and bumps was built artificially by modifying the pavement with construction tools. All road disturbances varied in width, length and depth: six depths (−20 cm, −10 cm, −5 cm, +5 cm, +10 cm, +20 cm), three widths (+10 cm, +30 cm, +50 cm) and three lengths (+10 cm, +50 cm, +100 cm) were constructed. Therefore, the two smallest road anomalies had a width and length of 10 cm and a depth difference (bump and pothole) of 5 cm, while the largest road disturbance was 20 cm in depth difference (up and down), 50 cm wide and one meter long.
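The controlled test set can be reproduced directly from the dimensions given above: six depth differences, three widths and three lengths give the 54 configurations. The short sketch below simply enumerates them.

from itertools import product

depths_cm = [-20, -10, -5, 5, 10, 20]   # negative = pothole, positive = bump
widths_cm = [10, 30, 50]
lengths_cm = [10, 50, 100]

test_matrix = list(product(depths_cm, widths_cm, lengths_cm))
print(len(test_matrix))   # 54 artificial road disturbances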

3 Results

The experiments focused on describing the limitations of the designed system and on finding which configuration performed best given the requirements and resources available. All 54 test environments were tested during the same week, using the same sensor hardware and positioning. For all artificially made potholes, the width and length of the reconstructed 3D pothole differed from the real measure by less than 2.4 cm; the average difference between automatic and manual measurements across all potholes was 1.1 cm in length and 0.7 cm in width. Indeed, in the test environment there was no significant difference between automatic and manual measurement of either pothole width or length. However, for road bumps, automatic measurement with the designed system suffered from a systematic underestimation of both bump width and length: on average, the automatic system differed from the manual measurement by −5.3 cm in length and −3.4 cm in width, and the error increased with the size of the bump. Concerning the depth information, the automatic measurement of both types of road condition (bumps and potholes)


revealed no significant difference with respect to the manual measurement: on average, the difference was 0.7 cm for potholes and 0.2 cm for bumps. These experimental results show that the system is able to detect and measure road conditions automatically with negligible errors.
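The reported comparison between automatic and manual measurements could be summarized, for example, with mean differences and a paired test; the paper does not state which test was used, so the paired t-test below is only an assumed illustration on made-up numbers.

import numpy as np
from scipy import stats

# Hypothetical paired measurements (cm) for a few potholes: automatic vs. manual
automatic = np.array([50.8, 31.2, 99.1, 10.9, 49.5])
manual    = np.array([50.0, 30.0, 100.0, 10.0, 50.0])

diff = automatic - manual
print("mean absolute difference:", np.mean(np.abs(diff)))
t, p = stats.ttest_rel(automatic, manual)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")  # p > 0.05 suggests no significant difference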

4 Conclusions

An optical sensing technique to automatically monitor and assess road condition was designed and tested in a controlled environment. The road-condition classification algorithm was capable not only of identifying potholes and bumps on the road but also of showing the degree of road damage. This stratified road-condition classification can help public authorities direct maintenance to critical points in the road infrastructure. This type of sensing technology also allows public managers to assess larger areas in a shorter time through systematic periodic monitoring. Future developments include using the information to feed simulations and predict road damage caused by increased traffic and weather conditions.

References 1. Schnebele, E., Tanyu, B.F., Cervone, G., Waters, N.: Review of remote sensing methodologies for pavement management and assessment. Eur. Transp. Res. Rev. 7, 7 (2015). https:// doi.org/10.1007/s12544-015-0156-6 2. Koch, C., Brilakis, I.: Pothole detection in asphalt pavement images. Adv. Eng. Inform. 25, 507–515 (2011). https://doi.org/10.1016/j.aei.2011.01.002 3. Yu, X., Salari, E.: Pavement pothole detection and severity measurement using laser imaging. In: IEEE International Conference on Electro/Information Technology, pp. 1–5 (2011). https://doi.org/10.1109/EIT.2011.5978573 4. Jo, Y., Ryu, S.: Pothole detection system using a black-box camera. Sensors (Basel) 15, 29316–29331 (2015). https://doi.org/10.3390/s151129316 5. Eriksson, J., Girod, L., Hull, B., Newton, R., Madden, S., Balakrishnan, H.: The pothole patrol: using a mobile sensor network for road surface monitoring. In: Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pp. 29–39. Association for Computing Machinery, Breckenridge (2008). https://doi.org/10.1145/ 1378600.1378605 6. Burgart, S.: Gap Trap: A Pothole Detection and Reporting System Utilizing Mobile Devices (2014) 7. Jog, G.M., Koch, C., Golparvar-Fard, M., Brilakis, I.: Pothole properties measurement through visual 2D recognition and 3D reconstruction. In: Computing in Civil Engineering, pp. 553–560 (2012). https://doi.org/10.1061/9780784412343.0070 8. Kim, T., Ryu, S.-K.: A guideline for pothole classification. Int. J. Eng. Technol. 4, 618–622 (2014)

Discussion Features of Public Participation in Space Governance in Network Media – Taking Yangzhou Wetland Park as an Example

Zhang Hua

School of Design, Shanghai Jiao Tong University, Shanghai, China
[email protected]

Abstract. This paper uses content analysis to sample and analyze microblog posts on the theme of "Yangzhou Sanwan Wetland Park" in order to investigate the characteristics of, and differences in, Internet users' participation in urban public space governance issues. The results show that Internet users' perceptions of Sanwan Wetland Park differ by gender and certification status: men pay more attention to "landscape", "function" and "traffic convenience", while women pay more attention to "environment" issues and express themselves more emotionally. Authenticated users tend to receive more forwarding, comments and likes, so their views are highly contagious and can be known and spread by other microblog users; they become de facto opinion leaders and a bridge between managers and the public. Internet public opinion on urban governance issues passes through four stages: an initial stage, an agglomeration stage, an upsurge stage and a recession stage. Participants in public space governance issues are accustomed to using mobile clients to publish complex microblogs, while participants using computer clients are more inclined to publish simple text-only microblogs, a finding that is worth further study.

Keywords: Public space · Internet media issues · Gender differences · Spatial governance · Human-systems integration · Systems engineering

1 Introduction

Because of the popularity of information technology, social media represented by Microblog, WeChat and regional forums have become important channels through which more and more community residents express their opinions and attitudes to social managers [1]. In transmitting urban social problems, online social media allow free expression by their users [2]: users can express tendentious emotions, attitudes or opinions on various aspects of social life. In addition, rich and diverse forms of expression [3] and the rapid spread of views [4] give online social media characteristics that previous media lacked, making network media an important data source for studying residents' behavior, preferences and willingness. Analyzing network media data helps to expand the research field of urban public governance and also helps to obtain the


decision-making basis for the designers and managers of urban public space in future design and management processes. In China, Microblog is the most representative online social media platform. In just 14 months since its birth, Microblog completely changed the traditional way opinions are disseminated by virtue of its 140-character rule. Cities at different stages of development raise different topics of concern, and hence different depths and modes of participation [5]. What, then, are the characteristics of Internet-based public participation in urban public space governance? Based on the above literature review, the researchers set out the following research questions concerning the Wetland Park issue. Question 1: who are the participants in the urban park governance (renovation) issue? Question 2: what are the differences in Internet users' participation in the discussion of urban park governance issues? Question 3: what are the differences in social impact between different network users participating in the Wetland Park topic? Question 4: as time goes on, what changes have taken place in Internet users' discussion of Wetland Park issues?

2 Research Methods

2.1 Research Site Selection

Sanwan Park is located in the Yangzhou Economic Development Zone, along the Sanwan section of the ancient canal, covering an area of 3327.5 mu, including 1520 mu in the core area and 1807.5 mu in the expansion area.

2.2 Selection of Research Samples

This study used Microblog's advanced search function with the keywords "Sanwan Wetland Park" and "Sanwan Park". The data collection period ran from 0:00 on January 1, 2014 to the beginning of this study (24:00 on December 23, 2018). A total of 1359 samples were obtained; after removing those that did not meet the requirements, 1329 samples were retained.

2.3 Construction of Analysis Category

Based on research results in the field of Microblog content analysis [6], the following categories were set up and developed:
1. Demographic categories of participants, including gender, Microblog authentication status and number of fans.
2. Topic information category, including the number of forwarded comments, opinion type, emotional tendency, release channel, number of likes, number of comments, number of forwards, and time period.
3. Forwarding type: forwards are divided into (A) zero-comment forwards, that is, directly forwarding the


original information without any processing, and (B) forwards with comments, that is, forwarding other people's information with self-edited additions such as text, links or pictures.
4. Opinion type: the coding of Microblog content type, divided into factual views and conceptual knowledge, and further into (a) landscape, (b) visit motivation, (c) mode of transportation and (d) landscape environment.
5. Emotional tendency: divided into rational, emotional and neutral viewpoints. A "rational view" expresses opinions through objective, positive and comprehensive analysis of problems; an "emotional view" starts from individual emotions and expresses anger, fear, abuse and similar feelings; a "neutral view" combines rationality and sensibility, expressing both approval and opposition, or showing no obvious emotional color in the microblog information.
6. Release channel: how Microblog users publish posts on urban park governance issues; in this study, divided into web page and mobile client.
7. Number of likes, comments and forwards: the number of likes received by a Microblog user measures whether the urban park governance issues they publish can arouse the consensus of other users and form common values among users [8]; the numbers of forwards and comments are variables measuring the social influence of Microblog users [9].

2.4 Reliability Test

In this study, 25% of the Microblog posts were randomly selected from the samples in advance and coded independently by two coders. The inter-coder reliability was 0.903, so the coding results are reliable. After coding, controversial samples were resolved by consensus between the two coders.
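The paper does not state which agreement coefficient was used for the 0.903 reliability figure; as one hedged illustration, inter-coder agreement on a categorical variable can be computed with Cohen's kappa, sketched below on made-up codes.

from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two coders to the same 10 posts
coder_a = ["rational", "emotional", "neutral", "rational", "rational",
           "emotional", "neutral", "rational", "emotional", "rational"]
coder_b = ["rational", "emotional", "neutral", "rational", "emotional",
           "emotional", "neutral", "rational", "emotional", "rational"]

print(cohen_kappa_score(coder_a, coder_b))  # 1.0 would mean perfect agreement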

3 Data Statistics

3.1 Descriptive Analysis of Demographic Variables

Of the 1329 valid samples collected, 960 were posted by male users (72.23%) and 369 by female users (27.77%). After removing multiple microblogs from the same user, 899 users had participated in the discussion of Wetland Park issues, including 654 male and 245 female users. In terms of authentication, most participants were non-authenticated: 720 users in total, accounting for 80.08%, while authenticated users accounted for only 19.98%; 584 male users were non-authenticated (89.30% of male users) and 230 female users were non-authenticated (93.88% of female users). Among all users participating in the discussion of urban park governance topics, the average number of fans was 74 with a standard deviation of 1100; the distribution of fans is thus very uneven, with some users having a large number of fans and others only single-digit numbers of fans.

3.2 Analysis of Statistical Results of Topic Information

To explore differences in Internet users' participation in the discussion of urban park governance issues, a correlation analysis was conducted on the coded categories, and several groups of significantly related variables were found: the number of fans is closely related to the numbers of likes, comments and forwards, and these in turn are closely related to opinion type. A further cross-analysis of topic differences at the authentication level is shown in Table 1. Authenticated users tend to have more fans and, in terms of communication effect, more likes, comments and forwards. In reality, authenticated users enjoy an official identity guarantee, and the demands of the authentication process make them more willing to express their opinions; in this study their opinions accordingly proved highly contagious and could be known and spread by many microblog users, producing a wider social influence. The opinions the public mention most often relate to the features that impress them most deeply and reflect the sensitive parts of public space. Under the content-analysis categories, the frequencies of the main and sub-categories of opinion type were therefore counted to explore the structure and main connotations of opinions about Sanwan Wetland Park. The park was mentioned 1977 times in total: landscape 943 times, expected function 632 times, traffic convenience 357 times and environment 45 times, showing that public opinion mainly concerns the landscape and expected function of Sanwan Wetland Park, which are also its most prominent impression elements. Among the sub-categories, the three mentioned most often were natural features (259 times), maintaining social intercourse (246 times) and cultural buildings (185 times), accounting for 78.7%, 68.7% and 56.2%, respectively. Opinion types are also related to gender: in the fields of "landscape", "expected function" and "traffic convenience" men pay more attention than women, while in the field of "environment" women pay more attention. Opinion type is likewise correlated with forwarding, comments and likes: environmental views attract the highest frequency of comments and forwards, and views on expected function also attract high frequencies of comments and forwards. Gender differences are significantly related to "emotional expression" and "issue type": male users tend to express rational views on the Wetland Park, while female users tend to express emotional views. The difference between men and women is also reflected in their enthusiasm for Wetland Park topics with different content, with men paying more attention to "landscape", "function" and "traffic convenience" and women to "environment". This result is consistent with Shih's [10] conclusions on gender differences in environmental concern, values and public issues.
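The correlation analysis summarized in Table 1 can be reproduced in outline as follows; the paper does not specify the correlation coefficient, so Spearman rank correlation on a hypothetical coded dataset is used here purely as an illustration.

import pandas as pd
from scipy import stats

# Hypothetical coded sample: one row per microblog post
df = pd.DataFrame({
    "gender":        [0, 1, 0, 1, 0, 1, 0, 0],   # 0 = male, 1 = female
    "authenticated": [0, 0, 1, 0, 1, 0, 0, 1],
    "fans":          [12, 300, 8000, 45, 12000, 7, 90, 2500],
    "forwards":      [0, 2, 35, 1, 60, 0, 3, 20],
    "likes":         [1, 5, 80, 2, 150, 0, 6, 40],
})

# Pairwise Spearman correlations with p-values, analogous to Table 1
for a in df.columns:
    for b in df.columns:
        if a < b:
            rho, p = stats.spearmanr(df[a], df[b])
            print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")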

Table 1. Variable interrelation

                Gender  Authentication  Fans     Forward  Praise  Comment  Forward point  Viewpoint  Emotion  Channel
Gender          1       .072            .237     .026     .543    .232     .133           .086**     .221**   .056
Authentication          1               −.187**  −.187    .230**  .088**   .109**         .032       .375     .002
Fans                                    1        .027     .107*   .133*    −.011**        .035*      .231     .052
Forward                                          1        .520    .107     −.032          .026*      .115     .053
Praise                                                    1       .873*    −.435**        .074**     −.163    .563
Comment                                                           1        −.617**        .321**     −.192    .006
Forward point                                                               1             −.033      .109     .210
Viewpoint                                                                                  1          .538     .229
Emotion                                                                                               1        .052
Channel                                                                                                        1

* Significant correlation at the 0.05 level (two-tailed);

** significant correlation at the 0.01 level (two-tailed)

Word frequency analysis is a commonly used weighting technique in information retrieval and text mining, used to evaluate how important a word is to a document or to a set of documents in a corpus; word frequency statistics were therefore computed for the collected text content. The top-ranked keywords are shown in Table 2. Besides the high weight of words describing the park's location and type, there are many landscape words such as "lakeshore", "bridge", "yard" and "square", followed by function words, traffic words and environment words (Fig. 1).

Table 2. Microblog word frequency statistics

Key words                     Word frequency  Weight
扬州 (Yangzhou)               300             1
湿地 (Wetland)                236             0.9684
公园 (Park)                   227             0.9164
湖岸 (Lakeshore)              214             0.8807
桥 (Bridge)                   207             0.8699
观光 (Sightseeing)            205             0.8588
停车 (Parking)                205             0.8556
院子 (Yard)                   194             0.8436
卫生间 (Toilet)               180             0.8423
收费 (Charge)                 177             0.8098
车站 (Station)                150             0.7996
书房 (Reading room)           141             0.7987
空气 (Air)                    131             0.7944
广场 (Square)                 128             0.7927
共享单车 (Shared bicycle)     111             0.7917
远 (Far away)                 95              0.7814
传统 (Tradition)              78              0.7762
休闲 (Leisure time)           65              0.7688
综合性 (Comprehensiveness)    50              0.7586


Fig. 1. Weight of hot words
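The kind of keyword weighting reported in Table 2 can be sketched as follows; the paper does not name its tool, so the use of jieba for Chinese word segmentation and a TF-IDF weighting from scikit-learn is only an assumed illustration.

import jieba
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical microblog posts about the park
posts = [
    "扬州三湾湿地公园的湖岸和桥很漂亮",
    "周末去湿地公园观光, 停车方便",
    "三湾公园的广场和书房适合休闲",
]

vectorizer = TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None)
tfidf = vectorizer.fit_transform(posts)

# Rank terms by their summed TF-IDF weight across all posts
weights = tfidf.sum(axis=0).A1
for term, w in sorted(zip(vectorizer.get_feature_names_out(), weights),
                      key=lambda x: -x[1])[:10]:
    print(term, round(w, 3))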

3.3 Timing Characteristics of Issue Communication

In the field of network public opinion, with the centralized reporting of a hot event, there will be a situation of information superposition in a certain period of time. The superposition of information gradually deepens into the superposition of emotion, and gradually spreads to more fields and levels of society under the effect of further agglomeration. Such emotional superposition is related to the content of public opinion events. Information superposition is the external form of public opinion hot events, and emotional superposition is the internal driving force of public opinion.

4 Conclusion

4.1 Authenticated Users Are the Public Opinion Leaders of Space Issues

Because of the official identity guarantee and the demands of the authentication process, authenticated users are more willing to express their opinions and more likely to be trusted by other users; they have more fans and, in terms of communication effect, more likes, comments and forwards. In this study, their views accordingly proved highly contagious, becoming widely known and spread by other microblog users and producing a broader social influence. This requires managers to pay attention to the role of opinion leaders in an era of mass communication in which everyone holds a microphone: through opinion leaders they can communicate effectively with the public, understand public concerns, and reduce resistance during construction. At the same time, behind every opinion leader there may be millions of fans, so cultivating and regulating opinion leaders is particularly important.

4.2 Four Periods of Public Opinion Communication in Space Governance

The evolution process of public opinion of urban spatial governance in microblog can be divided into four periods: initial period, agglomeration period, upsurge period and decline period. Since the urban space issues on the media have the focus diffusion effect, the media’s reports on the events ignite the social discussion, many similar events are constantly raised, the news content and the voice of public opinion are constantly superimposed, which pushes the development of the events to the climax,


and changes to systems and policies become the most direct demand of the people at that time, forming strong public opinion pressure.

4.3 Strong Correlation Between Channels and Media

The way public space governance issues are expressed is related to the publishing channel chosen on Microblog, and the form of the information users send is related to how they browse. The convenience of the mobile client leads users to express their opinions on it, breaking the former limitations of time and space and allowing them to share and absorb others' views in a fragmented way anytime and anywhere. It used to be assumed that only in front of a computer client would users have enough time to enrich microblog content and diversify its forms of expression; in practice, however, participants are accustomed to using the mobile client to send complex microblogs with pictures, videos and links, while computer-client participants tend to publish simple text-only posts. This phenomenon is worthy of further study.

References
1. Lei, Z., Yuan, F.: Research on the modes of relationship between network public opinion and urban social management. J. City Obser. 05 (2011)
2. Wei, L.: The construction of network public opinion monitoring and guidance mechanism in urban management in the new media era. J. Public Commun. Sci. Technol. 07, 23 (2015)
3. Xu, S.: Internet public opinion: the new symptoms of urban public security crisis. J. Soc. Sci. Nanjing 04, 64–69 (2012)
4. Jiang, M., Zhu, R.: Network public opinion response and urban management image reconstruction. J. Urban Manag. Sci. Technol. 02, 50–52 (2013)
5. Zhang, J., Li, N.: The research of the government's network public opinion governance with the construction and development of the intelligent city. J. Theory Modern. 06, 13 (2014)
6. Cheng, Z., Cao, R., Zhu, X.: The research on public participation of urban plan in China based on the experience in European and America countries. J. Urban Prob. 05, 72–75 (2003)
7. Fu, Y., Wang, X., Zheng, X.: Study on tourism image based on web text analysis: case of Gulangyu. J. Tourism Forum 05, 59–66 (2012)
8. Wei, L., Hu, Y.: Issue presentation in the Chinese microblogosphere: an empirical study of sina hot weibo. J. Zhejiang Univ. (Humanities and Social) 02, 41–52 (2014)
9. Nie, J., Jin, H.: Research on the difference of health knowledge production among Internet users – content analysis based on "h7n9 event" microblog. J. Press Circles 07, 77–88 (2017)
10. Shih, Y.: Gender, environment and developmental studies. J. Collect. Women's Stud. 04, 21–28 (2010)

Applications in Supply Chain Management and Business

A Modeling the Supplier Relationship Management in Agribusiness Supply Chain Rajiv Sánchez1(&), Bryan Reyes1, Edgar Ramos1, and Steven Dien2 1

Industrial Engineering Program, Universidad Peruana de Ciencias Aplicadas, Lima, Peru {u201502480,u201525788,pcineram}@upc.edu.pe 2 M.S. Global Supply-Chain Management, University of Southern California, Los Angeles, CA, USA [email protected]

Abstract. This research analyzes the current studies of supplier relationship management (SRM), based on a literature review to contrast and compare the evolution of SRM in agribusiness-oriented supply chain management (SCM). The result obtained in this research shows the agribusiness and its relationship with its suppliers. It also strives to identify potential models for a strong SRM. An SRM model is proposed to visualize the components that make up the management of suppliers in the agribusiness supply chain (SC). Keywords: Supplier relationship management Supply chain management  Agribusiness

 Agri-food supply chain 

1 Introduction

The management of relations with suppliers in the agri-food industry has become a critical business process in the world's leading economies in recent years [1, 2]. As a result of competitive pressure, the need to reduce costs and increase productivity in order to be more competitive [3], and the need to develop stronger relationships with key suppliers [9, 12], a model is proposed that can meet agribusiness demands and maintain reliability in the development of the supply chain [10, 18]. In studies on SRM, farmers are often evaluated along with their performance [5, 8]; their effective selection can bring significant savings and value to the organization, such as speed of provision and quality of raw materials [7, 27]. This study helps to understand research trends and to explain the positive and negative effects that supplier relationship management can have within the supply chain [10, 11]. Through understanding and applying the management of their suppliers, companies involved in agribusiness can identify opportunities to improve their processes in general [24, 37] and can evaluate and improve aspects such as cost, quality, time and flexibility [36, 41]. This research is structured in three parts: (1) a literature review; (2) the research methodology; and (3) the development of a model proposed for evaluating and acting on supplier relationship management within the framework of an agribusiness supply network.


2 Literature Review

2.1 Management in Supply Chain

At present, companies are trying to develop frameworks to measure strategic, tactical and operational performance in a supply chain [12, 22, 25]. Supply chain management is one of the most important management functions for achieving a long-term competitive advantage [23, 26], and several of its practices are significantly correlated with supplier performance [18, 23, 24]. The SC is fundamentally competitive for any company or organization, and the purpose of any SC is to benefit from each value-adding activity carried out along it [3, 7, 18, 26]. Companies must therefore manage efficiency and responsiveness in a balanced way in order to gain a strategic advantage [7, 14, 27]. Finally, SCM includes all organizations involved in the upstream and downstream flows of products, services, finances and information from the supplier to the end customer [14, 23, 25, 27].

2.2 Management in the Supplier Relationship

Over the past 20 years, academic research in fields related to SCM, such as administration, purchasing and marketing, has examined how value is created from relationships with suppliers [4, 6, 8]. Competition in the global market gives manufacturers an incentive to establish strategic relationships with their suppliers in order to perform more efficiently and effectively [8, 29, 31]. These relationships provide advantages such as increased reliability of supply [6, 30, 33], the potential for product adaptation, reduced uncertainty, and improved planning ability [9]. Some studies show that the integration of SRM processes can take place through the integration of its various sub-processes into strategic and operational characteristics [17, 28, 32, 33]. In a buyer-supplier relationship, partners are expected to have full knowledge of the information exchanged, to be able to process it properly, and to be aware of possible alternative suppliers [6, 7, 9]. Finally, within SRM, satisfaction is sought through product quality, product delivery, and the timely sharing of information and knowledge, which increases confidence in the supplier-client working relationship [28, 31, 32].

2.3 Agribusiness and Supply Chain

The structure model of the agricultural product supply chain generally consists of a logistics center [7, 10], agricultural food producers, farmers and rural cooperatives, agricultural material suppliers, farmer intermediaries [11, 34, 36], primary agricultural products processing companies, regional distribution and delivery centers, final agricultural products processing companies and final consumers [11, 13, 17, 35, 36, 39]. The final product of agribusiness is the result of a complex production process that includes many participants [35, 37, 38], each of which adds value to the final product at each stage. Therefore, the agribusiness business model is linked to the value chain. This chain can be described as a set of processes that moves the crop product to the consumer [34, 40, 41].


3 Methodology

3.1 Objectives and Research Strategy

To develop the research strategy, we first define the questions that must be answered at the end of the article on the basis of the articles published to date: What SRM model allows better supplier performance in the agri-food supply chain? And how is the supplier relationship management process organized in the agri-food supply chain so as to facilitate operational results?

3.2 Results of the Review

The articles reviewed generally have a clear delimitation of the subject under study, SRM in agri-food supply chains; the initial search therefore focused on the frameworks or models proposed in recent years within this delimitation. The first finding comprises the seven articles presented in Table 1, which highlights the models and their SRM structures.

Table 1. Models on supplier relationship management

Model | Authors
"The impact of sustainable supplier management practices on buyer-supplier performance: An empirical study in China" | Fan Yang, Xiongfei Zhang, 2017 [1]
"Innovation exploitation, exploration and supplier relationship management" | Jing Cai, Alison U. Smart, Xuefeng Liu, 2014 [30]
"The importance of key supplier relationship management in supply chains" | Christoph Teller, Herbert Kotzab, David B. Grant, Christina Holweg, 2016 [19]
"Towards a classification of supply chain relationships: a routine based perspective" | Muhammad Usman Ahmed, Mehmet Murat Kristal, Mark Pagell, Thomas F. Gattiker, 2017 [2]
"Price or relational behaviours?: Supplier relationship management in the German dairy industry" | Amos Gyau, Achim Spiller, Christian Wocken, 2011 [7]
"Sustainability adoption through buyer supplier relationship across supply chain: A literature review and conceptual framework" | Divesh Kumar, Zillur Rahman, 2015 [15]
"Sustainability performance measurement framework for supply chain management" | Salinee Santiteerakul, Aicha Sekhari, Abdelaziz Bouras, Apichat Sopadang, 2015 [23]

454

R. Sánchez et al.

innovations are radical, with high complexity in the communication and coordination of the actors across the boundaries of the organization. Table 2 shows the summary of articles related to SRM and SC in the agribusiness sector. Table 2. Items on supplier relationship management and agri-food supply chain Authors Soroosh Saghiria & Alex Hill

Journal IJPR

Findings The objective of this paper is to explore the supplier’s relationship with the implementation of the deferral of the purchasing company. Hypotheses and tests are carried out with structural modeling, in conjunction with three supplier relationship constructions and three postponement constructions [32] Jongkyung Park, Kitae Shin, TaiIM&DS This paper helps to verify the applicability Woo Chang, Jinwoo Park of your model, by analyzing the case. This with the purpose that the proposed framework and its adoption, add efficiency and effectiveness to the SRM, whose functions are highly related [21] Keng Lin Soh, K. Jayaraman, Teoh IJPQM This study focuses on the aspects of Su Yen and S. Kiumarsi mediation, infrastructure, quality and commitment of the provider and its performance, within the organization. Individual suppliers and their performance are vital in the management of the supply chain [42] Srikanta Routroy, Astajyoti Behera JADEE This paper reviews the dimensions and scope of a supply chain in agribusiness. We can find analysis in aspects such as scope, objectives, obstacles and more [35] O.A. Kusraeva PET Within this article we have a model focused on agribusiness in Russia. This research focused on interviews with members of certain agribusiness companies in the aforementioned country [34] PP&CMO This study focuses on measuring the Muhammad Moazzam, Pervaiz performance of the supply chain in Akhtar, Elena Garnevska & agribusiness. While we have general Norman E. Marr concepts for performance measurement, they still lack precise and adequate measurement frameworks within the SCM in agribusiness [41] Michael Kwamega, Dongmei Li, BPMJ This paper focuses on the exchange of Eugene Abrokwah, information within the supply chain and its integration. They also evaluate the internal performance of agribusiness companies [37]

A Modeling the Supplier Relationship Management in Agribusiness Supply Chain

455

Fig. 1. Modeling supplier relationship management in agri-food supply chain

4 Proposed Model with Supplier Approach in Agri-Food SC The model comprising the components of an SRM proposed in Fig. 1 is based on the sequence of activities of the supply chain of the agri-food sector [29, 34, 36]. From this, the components of the SRM model, together with an integrative framework for supplier relationship management [21]. In this integration, the six main processes are considered: purchasing strategies, supplier selection, selection process, supplier evaluation, supplier collaboration, and continuous improvement. The processes of this model were selected considering many of the barriers in the supply chains agri-food sector [21, 36, 39], such as sophisticated technology or media and lack of information that occurs along the chain, especially at the beginning of it [37, 38].

5 Conclusions This research presents an SRM model for the agri-food supply chain specifying the activities and/or key processes to consider in its management. Although supply chains in agribusiness show some complexity when developed, this does not mean that the agri-food sector is a minor issue, since the complexity of management in this sector becomes a limitation when planning a study. Finally, the SRM of the agri-food supply chains have been crucial in recent years. Previous studies show that there are bases to validate and prove that it can help to continue developing more updated models according to the technological contributions that other systems are going offering to continue investigating.

456

R. Sánchez et al.

References 1. Yang, F., Zhang, X.: The impact of sustainable supplier management practices on buyersupplier performance: an empirical study in China. Rev. Int. Bus. Strat. 27(1), 112–132 (2017) 2. Ahmed, M.U., Kristal, M.M., Pagell, M., Gattiker, T.F.: Towards a classification of supply chain relationships: a routine based perspective. Supply Chain Manag. Int. J. (2017) 3. Delbufalo, E.: Outcomes of inter‐organizational trust in supply chain relationships: a systematic literature review and a meta‐analysis of the empirical evidence. Faculty of Economics, European University of Rome (2012) 4. Salam, M.A., Khan, S.A.: Achieving supply chain excellence through supplier management: a case study of fast moving consumer goods. Benchmark. Int. J. (2018) 5. Elock Son, C., Müller, J., Djuatio, E.: Logistic outsourcing risks management and performance under the mediation of customer service in agribusiness. In: Supply Chain Forum: An International Journal (2019) 6. Oghazi, P., Rad, F.F., Zaefarian, G., Beheshti, H.M., Mortazavi, S.: Unity is strength: a study of supplier relationship management integration. J. Bus. Res. 69(11), 4804–4810 (2016) 7. Gyau, A., Spiller, A., Wocken, C.: Price or relational behaviours? Supplier relationship management in the German dairy industry. Brit. Food J. 113(7), 838852 (2011) 8. Alkahtani, M., Kaid, H.: Supplier selection in supply chain management: a review study. Int. J. Bus. Perf. Supply Chain Model. 10(2), 107–130 (2018) 9. Amoako-Gyampah, K., Boakye, K.G., Adaku, E., Famiyeh, S.: Supplier relationship management and firm performance in developing economies: a moderated mediation analysis of flexibility capability and ownership structure. Int. J. Prod. Econ. 208, 160–170 (2019) 10. Behzadi, G., O’Sullivan, M.J., Olsen, T.L., Zhang, A.: Agribusiness supply chain risk management: a review of quantitative decision models. Omega 79, 21–42 (2018) 11. Behzadi, G., O’Sullivan, M.J., Olsen, T.L., Scrimgeour, F., Zhang, A.: Robust and resilient strategies for managing supply disruptions in an agribusiness supply chain. Int. J. Prod. Econ. 191, 207–220 (2017) 12. Sehnem, S., Oliveira, G.P.: Analysis of the supplier and agribusiness relationship. J. Clean. Prod. 168, 1335–1347 (2017) 13. Kamble, S.S., Gunasekaran, A., Sharma, R.: Modeling the blockchain enabled traceability in agriculture supply chain. Int. J. Inf. Manag. 52, 101967 (2020) 14. Singh, R.K., Acharya, P.: Supply chain management: everlasting and contemporary research issues. Int. J. Logistics Syst. Manag. 19(1), 1–19 (2014) 15. Kumar, D., Rahman, Z.: Sustainability adoption through buyer supplier relationship across supply chain: A literature review and conceptual framework. Int. Strat. Manag. Rev. 3(1–2), 110–127 (2015) 16. La Rocca, A., Perna, A., Snehota, I., Ciabuschi, F.: The role of supplier relationships in the development of new business ventures. Ind. Mark. Manag. 80, 149–159 (2019) 17. Bukhori, S., Sukmawati, D.A., Eka, Y.W.: Selection of supplier using analytical hierarchy process: Creating value added in the supply chain agribusiness. In: 4th International Conference on Computer Applications and Information Processing Technology (CAIPT) (2017) 18. Elrod, C., Murray, S., Bande, S.: A review of performance metrics for supply chain management. Eng. Manag. J. 25(3), 39–50 (2013)

A Modeling the Supplier Relationship Management in Agribusiness Supply Chain

457

19. Teller, C., Kotzab, H., Grant, D.B., Holweg, C.: The importance of key supplier relationship management in supply chains. Int. J. Retail Distrib. Manag. Import. (2016) 20. Danese, P., Romano, P.: Supply chain integration and efficiency performance: a study on the interactions between customer and supplier integration. Int. J. Supply Chain Manag. (2011) 21. Park, J., Shin, K., Chang, T., Park, J.: An integrative framework for supplier relationship management. Ind. Manag. Data Syst. 110(4), 495–515 (2010) 22. Huo, B., Zhao, X., Lai, F.: Supply chain quality integration: antecedents and consequences. IEEE Trans. Eng. Manag. 61(1), 38–51 (2013) 23. Santiteerakul, S., Sekhari, A., Bouras, A., Sopadang, A.: Sustainability performance measurement framework for supply chain management. Int. J. Prod. Dev. 20(3), 221–238 (2015) 24. Joonhyeong Joseph Kim: Theoretical foundations underpinning supply chain management and supply chain level sustainable performance. Int. J. Tour. Sci. 17(3), 213–229 (2017) 25. Mentzer, J.T., DeWitt, W., Keebler, J.S., Min, S., Nix, N.W., Smith, C.D., Définir, Z.G.Z.: Le supply chain management. Logistique Manag. 23(4), 7–24 (2015) 26. Fernie, S., Tennant, S.: The non-adoption of supply chain management. Constr. Manag. Econ. 31(10), 1038–1058 (2013) 27. Kotzab, H., Teller, C., Grant, D.B., Friis, A.: Supply chain management resources, capabilities and execution. Prod. Plan. Control Manag. Oper. (2014) 28. Gupta, M., Choudhary, A.K.: An empirical study to assess the impact of various relationship dimensions on supplier relationship in Indian scenario. Int. J. Indian Cult. Bus. Manag. 12 (2), 255–272 (2016) 29. Belaya, V., Hanf, J.H.: Power and conflict in processor-supplier relationships: empirical evidence from Russian agri-food business. Supply Chain Forum Int. J. 15(2), 60–80 (2014) 30. Cai, J., Smart, A.U., Liu, X.: Innovation exploitation, exploration and supplier relationship management. Int. J. Technol. Manag. 66(2/3), 134–155 (2014) 31. Sharif, A.M., Alshawi, S., Kamal, M.M., Eldabi, T., Mazhar, A.: Exploring the role of supplier relationship management for sustainable operations: an OR perspective. J. Oper. Res. Soc. 65, 963–978 (2014) 32. Saghiri, S., Hill, A.: Supplier relationship impacts on postponement strategies. Int. J. Prod. Res. 52(7), 2134–2153 (2014) 33. Faraz, A., Sanders, N., Zacharia, Z., Gerschberger, M.: Monitoring type B buyer–supplier relationships. Int. J. Prod. Res. 56(18), 6225–6239 (2018) 34. Kusraeva, O.A.: The business model characteristics of Russian agribusiness companies. Prob. Econ. Trans. 60(4), 278–285 (2018) 35. Routroy, S., Behera, A.: Agriculture supply chain: A systematic review of literature and implications for future research. J. Agribus. Dev. Emerg. Econ. 7(3), 275–302 (2017) 36. Storer, M., Hyland, P., Ferrer, M., Santa, R., Griffiths, A.: Strategic supply chain management factors influencing agribusiness innovation utilization. Int. J. Logistics Manag. 25(3), 487–521 (2014) 37. Kwamega, M., Li, D., Abrokwah, E.: Empirical analysis of integration practices among agribusiness firms: perspective from a developing economy. Bus. Process Manag. J. (2019) 38. Babu, S.C., Shishodia, M.: Analytical review of african agribusiness competitiveness. Africa J. Manag. 3(2), 145–162 (2017) 39. McCarthy, D., Matopoulos, A., Davies, P.: Life cycle assessment in the food supply chain: a case study. Int. J. Logistics Res. Appl. 18(2), 140–154 (2015) 40. 
Huang, Y.S., Hsu, Y.C., Fang, C.C.: A study on contractual agreements in supply chains of agricultural produce. Int. J. Prod. Rese. 57(11), 3766–3783 (2019)

458

R. Sánchez et al.

41. Moazzam, M., Akhtar, P., Garnevska, E., Marr, N.E.: Measuring agri-food supply chain performance and risk through a new analytical framework: a case study of New Zealand dairy. Prod. Plan. Control 29(15), 1258–1274 (2018) 42. Soh, K.L., Jayaraman, K., Yen, T.S., Kiumarsi, S.: The role of suppliers in establishing buyer-supplier relationship towards better supplier performance. Int. J. Prod. Qual. Manag. 17(2), 183–197 (2016)

Consumer Perception Applied to Remanufactured Products in a Product-Service System Model Alejandro Jiménez-Zaragoza1(&), Karina Cecilia Arredondo-Soto1(&), Marco Augusto Miranda-Ackerman1(&), and Guillermo Cortés-Robles2(&) 1 Universidad Autónoma de Baja California, Universidad 14418, Parque Internacional Industrial, 22390 Tijuana, Mexico {alejandro.jimenez.zaragoza,karina.arredondo, miranda.marco}@uabc.edu.mx 2 Tecnológico Nacional de México/Instituto Tecnológico de Orizaba, Oriente 9, Emiliano Zapata, 94320 Orizaba, Mexico [email protected]

Abstract. This research analyzes the feasibility of proposing a design and repair strategy based on a Product-Service System (PSS) and remanufacturing to preserve value in white goods, more specifically laundry machines. The aim is to generate an alternative to the linear economy and redirect consumers toward the circular economy, positively affecting the environment, the economy and society, and leading to responsible consumption. To achieve this, it is necessary to identify consumer behavior and the factors that influence the purchase of remanufactured products, find a suitable methodology for developing the PSS, analyze the ability to conserve added value, propose the strategy and verify its feasibility. The scope of this paper is to establish customer perception regarding the acceptance of remanufactured products in a circular economy model for white goods.

Keywords: Remanufacturing · End of Life perception · Product-Service System

· Circular economy · Consumer

1 Introduction

The dynamism of change in today's society seems to be fostering an unthinking attitude toward the way the planet's resources are used. The capitalist economic model that promotes consumption generates surplus waste, with the consequent loss of money, of the time invested in earning that money, and of the environment's scarce natural resources, as if there were no alternatives offering economic benefits through responsible consumption. Responsible consumption is one of the 17 Sustainable Development Goals of the 2030 Agenda [1, 2]. To achieve responsible consumption, it is necessary to reduce the ecological footprint by changing the methods of production and


It is necessary to encourage industries, businesses and consumers to recycle and reduce waste [3]. In addition to recycling, there are more efficient end-of-life strategies, such as remanufacturing, a circular economy model in which products considered waste become raw materials, giving the product a second (or further) useful life. Remanufacturing guarantees a product as good as a new one but at a lower price. The process consists of disassembly, cleaning and inspection, recovery operations (to bring each component of the product back to design specifications), reassembly, and final testing [4–6]. A Product-Service System (PSS) is a special case of servitization that offers an alternative to the traditional manufacturing model and to the sale and use of products; an added value, translated into a service, is assigned to the product. Remanufacturing, in turn, is an End of Life strategy that aims to preserve as much of the added value in a product as possible and to extend its life cycle. PSS and remanufacturing converge on the importance of added value: PSS raises it through the service, while remanufacturing preserves it through better utilization of the core. This research focuses on an analysis to diagnose the viability of proposing a design and repair strategy based on the Product-Service System (PSS) and remanufacturing to preserve value in white goods, more specifically laundry machines. Studies have been conducted in this regard in China [7–13], India [14], Japan [15–17], the United States [18–21], the Netherlands [22], and Greece [23], among others; however, in Mexico studies on remanufactured products are scarce and do not take the consumer perception approach [4, 24–29]. In Mexico there used to be a culture of repair and restoration, but it was lost as the linear economy was strengthened through the shortening of product life cycles. Now there is a tendency to buy new products and dispose of old ones, which go to landfills or, in the best case, end up with recyclers, recovering only the value of their raw material but losing all the added value embedded during manufacturing. These behaviors are promoted by product design strategies based on planned obsolescence, whose purpose is that certain failures appear within a certain period of the product's useful life, causing a critical component to fail catastrophically so that the product can no longer function. When this scenario occurs, consumers commonly find it more attractive and easier to buy a new product and discard the old one, because the repair price is very high or the Original Equipment Manufacturer (OEM) simply offers no spare parts. In contrast, the circular economy model allows consumers to extend the life cycle of their products by replacing components, resulting in a reduction in the consumption of materials and resources (mining, energy, water, waste generation, among others).

2 Methodology

This research was carried out in two stages. The first stage consisted of reviewing the literature to determine the consumer's perception of remanufactured goods at a global level. For the literature review, the Scopus and Google Scholar databases were consulted with the keywords: Consumer Behavior, Remanufacturing Perception.


In the second stage, a short semi-structured questionnaire was designed to determine, in a general way, whether there is congruence between global perception and local perception. The questionnaire was applied in the city of Tijuana in order to understand the customer's perception of the acceptance of remanufactured products in a circular economy model. The sample was divided into two groups: in group A, only a practical definition of remanufacturing was explained to the respondents before applying the questionnaire, while the respondents in group B were given a more detailed explanation of several concepts, including the linear economy, the circular economy, awareness of environmental problems, planned obsolescence, and remanufacturing. The sample size was chosen using a 95% confidence interval with a margin of error of 10%, considering a population of 1,641,580 inhabitants in the metropolitan area of the city of Tijuana. This resulted in a requirement of 97 responses per study; however, for reasons of symmetry, a sample size of 100 respondents per study was used.
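As a rough check of the reported figure, the following minimal sketch reproduces the sample-size calculation, assuming Cochran's formula with maximum variability (p = 0.5) and a finite-population correction; the paper does not state which formula was actually used.

    # Illustrative sample-size check (not the authors' code): Cochran's formula
    # with a finite-population correction, assuming maximum variability p = 0.5.
    import math

    z = 1.96          # z-score for a 95% confidence level
    p = 0.5           # assumed proportion (maximum variability)
    e = 0.10          # 10% margin of error
    N = 1_641_580     # population of the Tijuana metropolitan area

    n0 = (z ** 2) * p * (1 - p) / e ** 2     # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / N)              # finite-population correction
    print(math.ceil(n))                      # -> 97, matching the paper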

3 Results

Based on the studies carried out, it was observed that educating respondents before applying the survey played an important role in the results, since the degree of acceptance in group B was 16% higher than in group A (Tables 1 and 2). According to the education level of each respondent, it was determined in both studies that people with postgraduate studies have greater knowledge of how the implementation of remanufactured products can contribute to the improvement of the environment.

Table 1. Education level between groups

Education level                        Frequency group A   Frequency group B
Elementary school to Jr. High school   26                  29
High school                            36                  30
Technical career                       14                  13
Bachelor's degree                      17                  23
Postgraduate                            7                   5
Total                                 100                 100

Table 2. Acceptance of remanufactured products (Group A and Group B)

Response   Frequency group A   Frequency group B
Yes         61                  77
Maybe       32                  20
No           7                   3
Total      100                 100


According to this study, the people who participated in the survey are willing to support the government in implementing laws that encourage companies to sell remanufactured products, but only if manufacturers guarantee their quality (Table 3). The data also show a difference of 9% in favor of group B, the group that received the more detailed explanation. This suggests that information and education are fundamental to the acceptance of remanufactured products.

Table 3. Government support

Government support   Frequency group A   Frequency group B
No                    8                   4
Yes                  72                  81
Maybe                20                  15
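The comparison above is descriptive; purely as an illustration (the paper reports no significance test), the 61 versus 77 "Yes" counts from Table 2 could be compared with a two-proportion z-test, as in the following sketch.

    # Hypothetical check of the group difference in Table 2 (61/100 vs. 77/100
    # "Yes" answers); the paper reports no such test, so this is only an
    # illustration of how the gap could be evaluated.
    import math

    yes_a, n_a = 61, 100
    yes_b, n_b = 77, 100

    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    print(round(z, 2), round(p_value, 4))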

4 Conclusions/Discussion

It was observed that raising awareness of sustainability concepts prior to applying the surveys does influence the results. It was determined that graduates of higher education institutions have greater knowledge of how remanufactured products can contribute to the improvement of the environment. At the same time, people with lower income or less education are the ones most willing to adopt remanufactured products in a circular economy model. Accordingly, the PSS is attractive both to the manufacturer, by providing economic benefits, and to the consumer, who acquires a quality product and good service at an affordable cost.

References 1. Sala, S., Castellani, V.: The consumer footprint: monitoring sustainable development goal 12 with process-based life cycle assessment. J. Cleaner Prod. 240, 118050 (2019) 2. Sanyé-Mengual, E., Secchi, M., Corrado, S., Beylot, A., Sala, S.: Assessing the decoupling of economic growth from environmental impacts in the European Union: a consumptionbased approach. J. Cleaner Prod. 236, 117535 (2019) 3. PNUD. Programa de las Naciones Unidas para el Desarrollo. Objetivo 12: Producción y Consumo Responsable. Objetivos del Desarrollo Sustentable (2019). Recuperado de https:// www.undp.org/content/undp/es/home/sustainable-development-goals/goal-12-responsibleconsumption-and-production.html 4. Arredondo-Soto, K.C., Realyvasquez-Vargas, A., Maldonado-Macías, A.A., García-Alcaraz, J.: Impact of human resources on remanufacturing process, internal complexity, perceived quality of core, numerosity, and key process indicators. Rob. Comput.-Integr.Manuf. 59, 168–176 (2019)


5. Hazen, B.T., Mollenkopf, D.A., Wang, Y.: Remanufacturing for the circular economy: an examination of consumer switching behavior. Bus. Strategy Environ. 26(4), 451–464 (2016). Wiley Online Library 6. Guide, V.: Production planning and control for remanufacturing: industry practice and research needs. J. Oper. Manag. 18, 467–483 (2000) 7. Zhu, X., Yu, L., Li, W.: Warranty period decision and coordination in closed-loop supply chains considering remanufacturing and consumer behavior. Sustainability 11(15), 4237 (2019) 8. Wang, S., Wang, J., Yang, F., Wang, Y., Li, J.: Consumer familiarity, ambiguity tolerance, and purchase behavior toward remanufactured products: The implications for remanufacturers. Bus. Strategy Environ. 27(8), 1741–1750 (2018) 9. Wang, Y., Huscroft, J.R., Hazen, B.T., Zhang, M.: Green information, green certification and consumer perceptions of remanufactured automobile parts. Res. Conserv. Recycl. 128, 187–196 (2018) 10. Wang, Y., Hazen, B.T., Mollenkopf, D.A.: Consumer value considerations and adoption of remanufactured products in closed-loop supply chains. Ind. Manag. Data Syst. 118(2), 480– 498 (2018) 11. Hazen, B.T., Mollenkopf, D.A., Wang, Y.: Remanufacturing for the circular economy: an examination of consumer switching behavior. Bus. Strategy Environ. 26(4), 451–464 (2017) 12. Wang, Y., Hazen, B.T.: Consumer product knowledge and intention to purchase remanufactured products. Int. J. Prod. Econ. 181, 460–469 (2016) 13. Wang, Y., Wiegerinck, V., Krikke, H., Zhang, H.: Understanding the purchase intention towards remanufactured product in closed-loop supply chains: an empirical study in China. Int. J. Phys. Distrib. Logistics Manag. 43(10), 866–888 (2013) 14. Singhal, D., Tripathy, S., Jena, S.K.: Acceptance of remanufactured products in the circular economy: an empirical study in India. Manag. Decis. 57(4), 953–970 (2019) 15. Matsumoto, M., Chinen, K., Endo, H.: Paving the way for sustainable remanufacturing in Southeast Asia: an analysis of auto parts markets. J. Cleaner Prod. 205, 1029–1041 (2018) 16. Matsumoto, M., Chinen, K., Endo, H.: Remanufactured auto parts market in Japan: historical review and factors affecting green purchasing behavior. J. Cleaner Prod. 172, 4494–4505 (2018) 17. Matsumoto, M., Chinen, K., Endo, H.: Comparison of US and Japanese consumers’ perceptions of remanufactured auto parts. J. Cleaner Prod. 21(4), 966–979 (2017) 18. Hazen, B.T., Boone, C.A., Wang, Y., Khor, K.S.: Perceived quality of remanufactured products: construct and measure development. J. Cleaner Prod. 142, 716–726 (2017) 19. Sabbaghi, M., Behdad, S., Zhuang, J.: Managing consumer behavior toward on-time return of the waste electrical and electronic equipment: a game theoretic approach. Int. J. Prod. Econ. 182, 545–563 (2016) 20. Gaur, J., Amini, M., Banerjee, P., Gupta, R.: Drivers of consumer purchase intentions for remanufactured products: a study of Indian consumers relocated to the USA. Qual. Mark. Res. Int. J. 18(1), 30–47 (2015) 21. Hazen, B.T., Overstreet, R.E., Jones-Farmer, L.A., Field, H.S.: The role of ambiguity tolerance in consumer perception of remanufactured products. Int. J. Prod. Econ. 135(2), 781–790 (2012) 22. Van Weelden, E., Mugge, R., Bakker, C.: Paving the way towards circular consumption: exploring consumer acceptance of refurbished mobile phones in the Dutch market. J. Cleaner Prod. 113, 743–754 (2016) 23. 
Kapetanopoulou, P., Tagaras, G.: An empirical investigation of value-added product recovery activities in SMEs using multiple case studies of OEMs and independent remanufacturers. Flex. Serv. Manuf. J. 21(3–4), 92–113 (2009)


24. Esquer, J., Arvayo, J.A., Alvarez-Chavez, C.R., Munguia-Vega, N.E., Velazquez, L.: Cleaner production in a remanufacturing process of air compressors. Int. J. Occup. Saf. Ergon. 23(1), 83–91 (2017) 25. Arredondo-Soto, K.C., Sanchez-Leal, J., Reyes-Martinez, R.M., Salazar-Ruíz, E., Maldonado-Macias, A.A.: World class remanufacturing productions systems: an analysis of Mexican maquiladoras. In: International Conference on Applied Human Factors and Ergonomics, pp. 153–161. Springer, Cham (2017) 26. Soto, K.C.A., Rivera, H.H., de la Riva Rodríguez, J., Martínez, R.M.R.: Effects of human factors in planning and production control activities in remanufacturing companies. In: Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future, pp. 465– 474. Springer, Cham (2016) 27. Cordova-Pizarro, D., Aguilar-Barajas, I., Romero, D., Rodriguez, C.A.: Circular economy in the electronic products sector: material flow analysis and economic impact of cellphone ewaste in Mexico. Sustainability 11(5), 1361 (2019) 28. Arredondo-Soto, K.C., Miranda-Ackerman, M.A., Nakasima-López, M.O.: Supply chain for remanufacturing operations: tools, methods, and techniques. In: Handbook of Research on Industrial Applications for Improved Supply Chain Performance, pp. 73–100. IGI Global (2020) 29. Arredondo-Soto, K.C., Reyes-Martínez, R.M., Sánchez-Leal, J., de la Riva Rodríguez, J.: Methodology to apply design for remanufacturing in product development. In: Handbook of Research on Ergonomics and Product Design, pp. 347–363. IGI Global (2018)

Blockchain in Agribusiness Supply Chain Management: A Traceability Perspective Luis Flores1, Yoseline Sanchez1, Edgar Ramos1(&), Fernando Sotelo1, and Nabeel Hamoud2 1

Industrial Engineering Program, Universidad Peruana de Ciencias Aplicadas, Lima, Peru {u201310446,u201413270,pcineram, fernando.sotelo}@upc.edu.pe 2 Department of Industrial and Manufacturing Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI, USA [email protected]

Abstract. The demand for agricultural products for export is increasing every year. Thus, there is a need for a traceable and more communicative agricultural supply chain among its stakeholders. In addition, the growing number of controls, verifications and communications required of each supply chain (SC) agent hampers the agility of the chain, generating distrust among those involved. To overcome this issue, we consider blockchain, a disruptive technology for decentralizing data. Using this state-of-the-art technology, we develop a model that solves the traceability problem for agricultural products. The model also improves transparency and security within the SC, increasing trust between suppliers, collaborators and consumers.

Keywords: Blockchain · Traceability · Smart contract · Agribusiness supply chain · Supply chain management

1 Introduction

Agricultural supply chain models consider producers, wholesalers, retailers, distributors and the computer systems that store data from each process, with customers and suppliers involved in the same chain [1, 2]. Food supply chains have developed considerably over the years, due to changing consumption habits and increased attention to food integrity [3, 4]. The blockchain model has also had an impact on virtual transactions [5]. However, the complexity of current agri-food chains has created a great gap between consumers and producers, increasing the demand for product information. Current food supply chain systems do not support order traceability throughout the chain [8, 9]. Compliance information is manually recorded on paper or stored in centralized databases, which may cause many issues such as high cost, process inefficiency, fraud, corruption, errors, data-manipulation problems (integrity of digital records) and expenditure on certificates of origin [10, 11]; blockchain also reduces fraud and the risk of losing data and payments [2, 6, 13, 14].


The traceability of products within the agricultural supply chain requires the collection, management and communication of critical information exchanged among stakeholders [8, 9, 15, 16]. Blockchain offers immutable transactions and access to distributed data in a decentralized network where suppliers and customers interact with each other [13, 18–22].

2 Literature Review

2.1 Supply Chain Management and Agriculture

The development of the unstructured, traditional agricultural supply chain has prompted research in the agribusiness sector related to the increasing demand for quality in the products consumed by the population [23]. That is why, in the modern agri-food industry, there are limitations and requirements that must be met to achieve a transparent, auditable and reliable supply chain management process. To ensure the homogeneity of the activities carried out by the agents involved, the stakeholders and the management of the business model, it is necessary to ensure the reliability and protection of the shared data [8, 12, 15, 24].

2.2 Traceability in Agri-Food

Food traceability management aims to track agricultural products throughout the stages of the supply chain in order to stay competitive [7, 25, 26]. Traceability has become a significant agribusiness innovation for producers, since the majority of producers export their products to nations where certifications are crucial for the acceptance and admission of these products [9, 27].

2.3 Blockchain and Smart Contract

Blockchain is defined as a technical scheme in which a decentralized, reliable database stores and maintains information on all transactions sequentially over time [28]. This system, using cryptography, can create blocks through multiple nodes. In addition, for greater reliability, digital fingerprints can be created that allow the validation of stored data [7, 8, 28, 29]. The structure of the blockchain can follow a public or private pattern, and the nodes encrypt the flow of data from the supply chain to ensure data integrity [30, 31]; each transaction made by the members of the entire chain is recorded and reviewed [2, 15, 18, 21, 32, 33]. Smart contracts are computerized transactions that execute the terms of a contract, generating more dynamism and versatility in agri-food supply chain operations within the blockchain network [34, 35].

3 Blockchain System Architecture and Requirements

Smart Contracts in Agribusiness. Smart contracts are contracts with code that is driven by events and can be executed automatically in the supply chain once the conditions are met [4, 36]. A smart contract cannot be modified directly once it has been deployed.


If there are errors or changes that need to be made, all parties involved in the network are notified and must approve the changes for them to take effect [37, 38]. Figure 1 shows a hypothetical example of the use of smart contracts in an agribusiness, with automatic transactions between an association of rice farmers and their distributors [39]. The example demonstrates the execution of a smart contract in six steps, to automate and increase confidence in transactions involving small farmers and smallholder associations [39, 40].

Fig. 1. Smart contract model for agribusiness (six-step transfer: 1 property is transferred; 2 "A" uses a smart contract to control the transfer; 3 "B" transfers the product rights for $X from its blockchain address; 4 the smart contract is verified by each node; 5 "A" transfers ownership of the product and money to "B"; 6 "B" triggers the smart lock code to unlock the delivery order)
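A minimal, hypothetical Python sketch of the escrow-style exchange summarized in Fig. 1 is given below. The paper does not provide an implementation, and a production contract would run on-chain (e.g., in Solidity), so the class, names, amounts and the majority-vote rule here are illustrative assumptions only.

    # Hypothetical sketch of the Fig. 1 exchange as a simple escrow-style state
    # machine; a production system would be an on-chain contract, not Python.
    class ProduceSmartContract:
        def __init__(self, seller, buyer, price):
            self.seller, self.buyer, self.price = seller, buyer, price
            self.state = "CREATED"

        def deposit_payment(self, party, amount):
            # Step 3: the buyer locks the agreed payment in the contract.
            if party == self.buyer and amount >= self.price and self.state == "CREATED":
                self.state = "FUNDED"

        def confirm_delivery(self, validators):
            # Steps 4-6: once a majority of nodes validate the delivery,
            # ownership and payment are released automatically.
            if self.state == "FUNDED" and sum(validators) > len(validators) / 2:
                self.state = "SETTLED"
                return {"product_to": self.buyer, "payment_to": self.seller}
            return None

    contract = ProduceSmartContract("rice farmers' association", "distributor B", 1000)
    contract.deposit_payment("distributor B", 1000)
    print(contract.confirm_delivery([True, True, True, False]))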

A smart contract can be activated automatically once the product is delivered [41]. The entities involved in the agricultural supply chain, using an Ethereum network with a smart contract tool, can post the following requirements:

Entities register their new business partners to add the specifications of the inputs or resources managed in their supply chain [7, 42, 43].
The identity of all entities in the blockchain network and involved in smart contracts can be stored [20, 44].
All rules must be reviewed, validated and agreed upon by the entities before implementation in the blockchain network [7, 35, 45].

Traceability with Blockchain. The definition of traceability of agricultural products as part of logistics management emphasizes that customer reliability must be controlled in order to obtain greater market share, in addition to ensuring the safety of the products [46, 47]. Other traceability definitions focus on tracking functionality [32, 48, 49]. Traceability standards describe the ability to track the critical attributes of a product from the source (including the inputs used) until the finished product is reached [15, 25, 50, 51]. Each block in the chain, as shown in Fig. 2, contains a list of transactions and a hash pointing to the previous block (except for the first block of the chain, which is common to all clients in a public or private network and has no predecessor) [52, 53]. In addition, no miner can change or add invalid data without miners on other nodes detecting it as a threat or irregularity [54]. Blockchain technology has scalability problems in terms of performance, latency and capacity when facing massive data in a real business environment [45, 56–58]. The "miners" add new blocks to the chain, or new transactions to a block, through a consensus algorithm that must be confirmed by most nodes in the system (e.g., a consensus vote) [40, 42].


Then, using consensus voting, the smart contract model becomes an important part of the blockchain platform [10, 59].
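The block structure described above, a list of transactions plus the hash of the previous block, can be illustrated with a minimal Python sketch; this is not the system proposed in the paper, and the transaction fields are invented for the example.

    # Minimal sketch of the block structure described above: each block stores a
    # list of transactions and the hash of its predecessor, so tampering with an
    # earlier block invalidates every later link. Purely illustrative.
    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain, transactions):
        prev_hash = block_hash(chain[-1]) if chain else "0" * 64   # genesis has no predecessor
        chain.append({"index": len(chain), "prev_hash": prev_hash,
                      "transactions": transactions})
        return chain

    chain = []
    add_block(chain, [{"from": "farmer", "to": "processor", "lot": "RB-001"}])
    add_block(chain, [{"from": "processor", "to": "distributor", "lot": "RB-001"}])

    # Any change to block 0 breaks the link stored in block 1.
    chain[0]["transactions"][0]["lot"] = "RB-999"
    print(chain[1]["prev_hash"] == block_hash(chain[0]))   # -> False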

Fig. 2. Security of transactions with blockchain

4 Blockchain System Architecture and Requirements

The unique limitations and requirements of the modern agricultural food industry must be addressed to achieve a transparent, auditable and reliable supply chain management process. Figure 3 shows a model of this system, including the participants and the activities required of them.

Fig. 3. Blockchain model in agribusiness supply chain (supplier, farmer, processor, distributor/retailer and customer connected through the blockchain network, with activities A1–A10 assigned to each stage)

5 Conclusions

In the context of the technological evolution of the supply chain through distributed information and the traceability of its operations by registering all the processes in the chain, the way this trend transforms the agricultural supply chain would represent a substantial change. This paper sought to identify the factors involved in registering each process within the agricultural supply chain; a blockchain model was proposed, along with the requirements that should be considered by each participant in a public blockchain platform.


This method not only guarantees the transparency of the operations and the information security of the transactions that occur, but also improves the reliability perceived by the final customer when purchasing a product.

References 1. Bechtsis, D., Tsolakis, N., Bizakis, A.: A Blockchain Framework for Containerized Food, vol. 46. Elsevier Masson SAS (2019) 2. Brewster, C., Roussaki, I., Kalatzis, N., Doolin, K., Ellis, K.: IoT in agriculture: designing a Europe-wide large-scale pilot. IEEE Commun. Mag. 55(9), 26–33 (2017) 3. Borrero, J.D.: Agri-food supply chain traceability for fruit and vegetable cooperatives using Blockchain technology. CIRIEC-Espana Rev. Econ. Publica, Soc. y Coop. (95), 71–94 (2019) 4. Behnke, K., Janssen, M.F.W.H.A.: Boundary conditions for traceability in food supply chains using blockchain technology. Int. J. Inf. Manag. 52, 101969 (2020) 5. Delgado, O., Fierrez, J., Tolosana, R., Vera, R.: Blockchain and biometrics: a first look into opportunities and challenges oscar, vol. 1010, pp. 153–160 (2020) 6. França, A.S.L., Neto, J.A., Gonçalves, R.F., Almeida, C.M.V.B.: Proposing the use of blockchain to improve the solid waste management in small municipalities. J. Cleaner Prod. 244, 118529 (2020) 7. Tse, D., Zhang, B., Yang, Y., Cheng, C., Mu, H.: Blockchain Application in Food Supply Information Security, pp. 1357–1361 (2017) 8. Caro, M.P., Ali, M.S., Vecchio, M., Giaffreda, R.: Blockchain-based traceability in AgriFood supply chain management: a practical implementation. In: 2018 IoT Vertical and Topical Summit on Agriculture-Tuscany (IOT Tuscany), pp. 1–4. IEEE (2018) 9. Antonucci, F., Figorilli, S., Costa, C., Pallottino, F., Raso, L., Menesatti, P.: A review on blockchain applications in the agri-food sector. J. Sci. Food Agric. 99(14), 6129–6138 (2019) 10. Kamble, S.S., Gunasekaran, A., Sharma, R.: Modeling the blockchain enabled traceability in agriculture supply chain. Int. J. Inf. Manage., 1–16 (2019) 11. Leng, K., Bi, Y., Jing, L., Fu, H.C., Van Nieuwenhuyse, I.: Research on agricultural supply chain system with double chain architecture based on blockchain technology. Fut. Gener. Comput. Syst. 86, 641–649 (2018) 12. Wu, T., Huang, S., Blackhurst, J., Zhang, X., Wang, S.: Supply chain risk management: an agent-based simulation to study the impact of retail stockouts. IEEE Trans. Eng. Manag. 60(4), 676–686 (2013) 13. Bosona, T., Gebresenbet, G.: Food traceability as an integral part of logistics management in food and agricultural supply chain. Food Control 33(1), 32–48 (2013) 14. Allen, D.W.E., Berg, C., Markey-Towler, B., Novak, M., Potts, J.: Blockchain and the evolution of institutional technologies: implications for innovation policy. Res. Policy 49(1), 103865 (2020) 15. Salah, K., Nizamuddin, N., Jayaraman, R., Omar, M.: Blockchain-based soybean traceability in agricultural supply chain. IEEE Access 7, 73295–73305 (2019) 16. Aich, S., Chakraborty, S., Sain, M., Lee, H.I., Kim, H.C.: A review on benefits of IoT integrated blockchain based supply chain management implementations across different sectors with case study. In: International Conference on Advanced Communication Technology, ICACT, vol. 2019-Febru, pp. 138–141 (2019)


17. Jonkman, J., Barbosa-Póvoa, A.P., Bloemhof, J.M.: Integrating harvesting decisions in the design of agro-food supply chains. Eur. J. Oper. Res. 276(1), 247–258 (2019) 18. Mezquita, Y., González-Briones, A., Casado-Vara, R., Chamoso, P., Prieto, J., Corchado, J. M.: Blockchain-based architecture: a MAS proposal for efficient agri-food supply chains. Adv. Intell. Syst. Comput. 1006, 89–96 (2020) 19. Gallersdörfer, U., Matthes, F.: Tamper-proof volume tracking in supply chains with smart contracts. In: Euro-Par 2018 Parallel Processing Working, vol. 11339, no. January, pp. 379–391 (2019) 20. Huang, H., Chen, X., Wang, J.: Blockchain-based multiple groups data sharing with anonymity and traceability. Sci. China Inf. Sci. 63(3), 1–13 (2020) 21. Valdeolmillos, D., Mezquita, Y., González-Briones, A., Prieto, J., Corchado, J.M.: Blockchain technology: a review of the current challenges of cryptocurrency. In: International Congress on Blockchain and Applications, vol. 1010, pp. 153–160 (2020) 22. Kim, M., Hilton, B., Burks, Z., Reyes, J.: Integrating blockchain, smart contract-tokens, and IoT to design a food traceability solution. In: 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference, IEMCON 2018, no. Figure 1, pp. 335–340 (2019) 23. Patidar, R., Venkatesh, B., Pratap, S., Daultani, Y.: A sustainable vehicle routing problem for Indian agri-food supply chain network design. In: 2018 International Conference on Production and Operations Management Society, POMS 2018, pp. 1–5 (2019) 24. Lee, C., Song, D.: Ocean container transport in global supply chains: overview and research opportunities, 1–33 (2016) 25. Tian, F.: A supply chain traceability system for food safety based on HACCP, blockchain & internet of things. In: 2017 International Conference Service System Service Management, pp. 1–6 (2017) 26. Bermeo-almeida, O., Cardenas-rodriguez, M., Samaniego-cobo, T., Ferruzola-g, E., Cabezas-cabezas, R., Baz, W.: Blockchain in agriculture: a systematic literature review. In: International Conference on Technologies and Innovation, pp. 44–56 (2018) 27. Iakovou, E., Bochtis, D., Vlachos, D., Aidonis, D.: Supply chain management for sustainable food networks, First Edition. Edited Sustainable Agrifood Supply Chain Management (2016) 28. Bechtsis, D., Tsolakis, N., Bizakis, A., Vlachos, D.: A Blockchain Framework for Containerized Food Supply Chains. In: Computer Aided Chemical Engineering, vol. 46, pp. 1369–1374. Elsevier (2019) 29. Zhu, Y., Song, X., Yang, S., Qin, Y., Zhou, Q.: Secure smart contract system built on SMPC over Blockchain. In: 2018 IEEE International Conference Internet of Things IEEE Green Comput. Communication. IEEE Cyber, Physics Social Computing and IEEE Smart Data, pp. 872–877 (2018) 30. Maertens, M., Velde, K.V.: Contract-farming in staple food chains: the case of rice in Benin. World Dev. 95, 73–87 (2017) 31. Fu, Y., Zhu, J.: Big production enterprise supply chain endogenous risk management based on blockchain. IEEE Access 7, 15310–15319 (2019) 32. Kamble, S.S., Gunasekaran, A., Gawankar, S.A.: Achieving sustainable performance in a data-driven agriculture supply chain: a review for research and applications. Int. J. Prod. Econ. 219, 179–194 (2020) 33. Yli-Huumo, J., Ko, D., Choi, S., Park, S., Smolander, K.: Where is current research on blockchain technology? - a systematic review. PLoS ONE 11(10), 1–27 (2016) 34. Baralla, G., Ibba, S., Marchesi, M., Tornelli, R., Missineo, S.: A blockchain based system to ensure supply chain. 
In: Euro-Par 2018 Parallel Processing Working, vol. 11339, no. January, pp. 379–391 (2019)


35. Dasaklis, T.K.: Defining granularity levels for supply chain traceability based on IoT and blockchain (2019) 36. Zhao, G., et al.: Computers in Industry Blockchain technology in agri-food value chain management: a synthesis of applications, challenges and future research directions. Comput. Ind. 109, 83–99 (2019) 37. Wang, Q., Wu, J., Zhao, N., Zhu, Q.: Inventory control and supply chain management: a green growth perspective. Res. Conserv. Recycl. 145, 78–85 (2019) 38. Ding, Y., Pu, H., Liang, Y., Wang, H.: Blockchain Technology in the Registration and Protection of Digital Copyright, vol. 2, pp. 608–616. Springer, Heidelberg (2020) 39. Kamilaris, A., Fonts, A., Prenafeta-Boldύ, F.X.: The rise of blockchain technology in agriculture and food supply chains. Trends Food Sci. Technol. 91, 640–652 (2019) 40. Lin, J., Shen, Z., Miao, C.: Using blockchain technology to build trust in sharing LoRaWAN IoT. In: ACM International Conference Proceeding Series, vol. Part F1306, pp. 38–43 (2017) 41. Wei, P.C., Wang, D., Zhao, Y., Tyagi, S.K.S., Kumar, N.: Blockchain data-based cloud data integrity protection mechanism. Fut. Gener. Comput. Syst. 102, 902–911 (2020) 42. Lanko, A., Vatin, N., Kaklauskas, A.: Application of RFID combined with blockchain technology in logistics of construction materials, vol. 03032, pp. 1–6 (2018) 43. Delmolino, K., Arnett, M., Kosba, A., Miller, A., Shi, E.: Step by step towards creating a safe smart contract: lessons and insights from a cryptocurrency lab. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), LNCS, vol. 9604, pp. 79–94 (2016) 44. Soe, Y.N., Feng, Y., Santosa, P.I., Hartanto, R., Sakurai, K.: Advanced Information Networking and Applications, vol. 1. Springer, Heidelberg (2019) 45. Lu, Q., Xu, X.: Adaptable blockchain-based systems: a case study for product traceability. IEEE Softw. 34(6), 21–27 (2017) 46. Toyoda, K., et al.: A novel blockchain-based product ownership management system (POMS) for anti-counterfeits in the post supply chain, vol. XXX, no. XXX, pp. 1–13 (2017) 47. Schmidt, C.G., Wagner, S.M.: Blockchain and supply chain relations: a transaction cost theory perspective. J. Purchasing Supply Manag. 25(4), 100552 (2019) 48. Longo, F., Nicoletti, L., Padovano, A., Atri, G., Forte, M.: Computers & industrial engineering blockchain-enabled supply chain: an experimental study. Comput. Ind. Eng. 136 (July), 57–69 (2019) 49. Lin, Q., Wang, H., Pei, X., Wang, J.: Food safety traceability system based on blockchain and EPCIS. IEEE Access 7, 20698–20707 (2019) 50. Perboli, G., Musso, S., Rosano, M.: Blockchain in logistics and supply chain: a lean approach for designing real-world use cases. IEEE Access 6, 62018–62028 (2018) 51. Narsimhalu, U., Potdar, V., Kaur, A.: A case study to explore influence of traceability factors on Australian food supply chain performance. Procedia-Soc. Behav. Sci. 189, 17–32 (2015) 52. El Farouk Imane, I., Foaud, J., Abdennebi, T.: From modeling to logistic KPI Use of SCOR model and ARIS to build a dashbord to manage medicines supply chain in moroccan public hospital. In: Colloquium in Information Science and Technology, CIST, pp. 746–750 (2017) 53. Gligor, D.M., Esmark, C.L., Holcomb, M.C.: Performance outcomes of supply chain agility: when should you be agile? J. Oper. Manag. 33–34, 71–82 (2015) 54. Scholliers, J., Permala, A., Toivonen, S., Salmela, H.: Improving the security of containers in port related supply chains. Transp. Res. 
Procedia 14, 1374–1383 (2016) 55. Ji, H., Xu, H.: A review of applying blockchain technology for privacy protection. Adv. Intell. Syst. Comput. 994, 664–674 (2020)


56. Tian, F.: An agri-food supply chain traceability system for China based on RFID & blockchain technology. In: 2016 13th International Conference on Service Systems and Service Management, ICSSSM 2016 (2016) 57. Muhamad, A., Alqatawna, J., Paul, S., Kiwanuka, F., Ahmad, I.: Improving event monitoring in IoT network using an integrated blockchain-distributed pattern recognition scheme anang, vol. 1010, pp. 153–160 (2020) 58. Galvez, J.F., Mejuto, J.C.: Future challenges on the use of blockchain for food traceability analysis. Trends Anal. Chem. 107, 222–232 (2018) 59. Queiroz, M.M., Telles, R., Bonilla, S.H., Telles, R.: Blockchain and supply chain management integration: a systematic review of the literature (2019)

Cold Supply Chain Logistics Model Applied in Raspberry: An Investigation in Perú Mijail Tardillo1(&), Jorge Torres1, Edgar Ramos1, Fernando Sotelo1, and Steven Dien2 1

Industrial Engineering Program, Universidad Peruana de Ciencias Aplicadas, Lima, Peru {u201420223,u201217238,pcineram, fernando.sotelo}@upc.edu.pe 2 M.S. Global Supply-Chain Mgmt, University of Southern California, Los Angeles, CA, USA [email protected]

Abstract. This research describes the viability of the processes in the cold chain logistics of raspberries in Peru. Raspberry cultivation is at a stage of potential growth opportunities for agribusiness. Outbound logistics is an essential part of food supply chain management; it improves performance and quality in the fresh product. The cold chain and good practice techniques preserve quality and reduce raspberry production losses by 15%. This model is based on scientific articles that are the theoretical pillars for the process methodology that develops the competitiveness of the product.

Keywords: Raspberry · Supply chain · Cold supply chain · Logistics · Agribusiness · Food supply chain

1 Introduction

The objective of this research is to examine the operations carried out in the outbound logistics process within the cold supply chain (CSC) of raspberry in Peru and to achieve standardization of processes, efficiency and quality in organic raspberries. Organic raspberry production is increasing by 10% each year, opening up opportunities for new producers and increasing the challenge of improving supply chain processes to extend fruit shelf-life [1, 2]. Cold chain logistics refers to a rapid, effective process of flow between suppliers and internal demanders to control a low-temperature environment and overcome the time barriers of fruit degeneration [3]. This allows for efficient, high-quality production and for the fresh preservation of raspberries [4–8].

2 Theoretical Framework

Supply Chain Management. Supply Chain Management (SCM) is an important concept in which the integration of production processes is the main tool for improving the total performance of products [9–11].


SCM generates cost reduction and improves product quality and delivery time for higher profitability and demand fulfillment [12, 13]. Supply chain management, defined as a set of practices focused on the continuous improvement of production processes, enables improved performance and customer satisfaction through product quality and delivery as required [12, 14, 15].

Agri-Fresh Supply Chain. The fresh Food Supply Chain (FSC) depends largely on the way food is produced and distributed, taking into account product preservation to maintain the physical and nutritional condition of the product [16–18]. An FSC comprises a complex cooperative network of relationships between enterprises and partners in the supply chain [19]. Reducing transportation costs and maintaining a controlled temperature after harvest are the main challenges for fresh FSC management and quality [13, 18, 20]. Within the FSC, perishable foods need to extend their shelf life after production and remain fresh until reaching the final consumer [21, 22]. Real-time temperature monitoring of fruits and vegetables during storage and transport is necessary to ensure product quality [3, 24, 25].

Food Supply Chain. The practices organized in the Food Supply Chain emphasize continuous process improvement among supply chain stakeholders to improve sustainable performance and protect product shelf-life [29, 30]. Long distances create quality problems; fresh products are perishable, and maintaining their properties is the challenge of their supply chain [13, 19, 26, 27, 31–33]. Also, consumers, faced with information asymmetry, may not know how long food has been on the shelves of supermarkets and retailers, even after delivery and unpacking [28, 34, 35].

Cold Supply Chain. The CSC is the succession of logistic processes (production, storage, packaging, loading and unloading, transport, distribution) carried out under controlled temperature and relative humidity, from the moment of production or harvest to the final consumer [3, 23, 36]. Its purpose is to keep the product away from critical risk temperatures and to prevent bacterial growth that could affect the health of the final consumer [6, 23]. About 25% of perishable products are wasted in the supply chain, excluding household waste, which is estimated at 19% of all food purchased [21, 29, 30]. Therefore, to ensure food safety and quality and to improve the performance of the cold chain, knowledge of and access to product (environmental) data at all stages of the cold chain has been emphasized [18, 35, 37, 38]. This implies:

Cooling systems: bring products such as food to the right temperature for processing, storage and transport.
Cold storage: provide product storage facilities for a period, to control the temperature of the fruit while the entire harvest arrives at the collection center and moves on to the distribution process.
Cold transport: have means of transport available on time for the transfer of the fruit, maintaining stable temperature and humidity conditions to protect its integrity.


Within supply chains, cold chain logistics provides higher levels of integration, since keeping the temperature stable requires greater control of all the processes involved [34, 39, 40]. Performance and time are the most important factors in the CSC of perishable products [35, 41, 42].

3 Methodology

This study is organized into four phases:
Material search, including identification of keywords, definition of search strings, and selection of academic databases.
Paper selection, according to the criteria defined for including and excluding documents in the review.
Descriptive analysis, to provide reviewers with a preliminary study for categorizing the selected papers.
Content analysis, to review the papers, identify research areas and gaps, and propose a conceptual framework for refrigerated foods.
The model in Fig. 1 lists the processes at each stage of the cold supply chain.

Fig. 1. Perishable fruit supply chain management

4 Case Study

The worldwide demand for raspberries is growing by 10% a year, according to the Peruvian agriculture ministry. It is worthwhile to develop a CSC, since Peruvian raspberry production must be delivered to consumption points far from the growing areas. These are located in Arequipa, Cajamarca, Lambayeque, Ancash, Junín, and Lima; with ninety hectares of raspberry cultivation, Peru currently sells only in the domestic market, with the current practices shown in Table 1. SENASA, the Peruvian specialized public technical body, is coordinating with the U.S. the sanitary requirements for exporting Peruvian raspberry products to that country.

Table 1. Current practices in Peru raspberry cold supply chain

Process   Current practices                                          Percentage (%)
Source    No strategies and long-term planning                       28%
Make      Postharvest delays, techniques and losses                  35%
Deliver   Low storage condition, cold transport with poor quality    37%

5 Results

Raspberry. Raspberries are a highly perishable product and are best stored at low temperatures with high relative humidity (RH), ideally 3 °C and 95%. Maintaining this temperature during post-harvest handling, together with suitable cold chain packaging, is decisive for the shelf life of these fruits [1, 21, 22].

Physical Distribution. In this process, the packing materials (clamshells) for the internal distribution and transport of the fruit are planned with precision.

Storage. Raspberries are stored with 10% O2 and 15% CO2, since under these conditions decomposition is significantly reduced and the fruits show a more attractive color compared to those stored in the controlled atmosphere [1, 13, 41], before distribution to retailers or end-users. Cold stores are cooled and operated at a range of temperatures depending on the product or customer requirements [13, 30].

Transport. Reduced O2 levels and high CO2 levels have been shown to reduce the respiration rate of fruits. Still, temperature fluctuations that may occur during storage, transport, and retail display may result in a hostile atmosphere within the package and a loss of aromatics [14, 18, 41].

Cold Supply Chain Logistic System Model. In the cold chain logistics model, the omission of any process is critical, as it prevents the cold chain from delivering its quality effect. The cold chain logistics system operates with low-temperature controls throughout the process: transportation, storage, packing, and loading and unloading for transport [22, 41, 44], as depicted in Fig. 2. The integration of these processes creates value, increases the post-harvest shelf life of the raspberry by three days, as depicted in Fig. 3, and improves the level of service to meet the requirements of organic products.

Safety Evaluation of Cold Chain Logistics. The cold chain is the crucial solution to extend shelf life and maintain quality [6, 38, 39]. For transport, a vital part of fruit conservation, four factors are considered: routes, environmental conditions, the human factor, and vehicle conditions. These factors are key to preserving the physical and nutritional condition of raspberries and preventing their degeneration [37, 45].
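As an illustration of the real-time monitoring emphasized in the theoretical framework, the storage targets quoted above (3 °C, 95% RH, 10% O2, 15% CO2) could be checked against sensor readings as in the following sketch; the tolerance bands and variable names are assumptions, not values given in the paper.

    # Illustrative excursion check against the storage targets quoted above
    # (3 °C, 95% RH, 10% O2, 15% CO2); the tolerance bands are assumptions.
    TARGETS = {"temp_c": (1.0, 4.0), "rh_pct": (90.0, 98.0),
               "o2_pct": (8.0, 12.0), "co2_pct": (12.0, 18.0)}

    def check_reading(reading):
        """Return the list of variables outside their target band."""
        return [name for name, (low, high) in TARGETS.items()
                if not low <= reading.get(name, low) <= high]

    readings = [
        {"temp_c": 3.1, "rh_pct": 95.0, "o2_pct": 10.2, "co2_pct": 15.1},
        {"temp_c": 6.8, "rh_pct": 88.0, "o2_pct": 10.0, "co2_pct": 14.9},  # e.g. truck door left open
    ]
    for i, r in enumerate(readings):
        excursions = check_reading(r)
        if excursions:
            print(f"reading {i}: excursion in {excursions}")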


Fig. 2. Raspberry cold supply chain

Fig. 3. Output logistic process

6 Conclusions and Future Research

This model considers not only food quality but also efficient storage and transport processes to increase the post-harvest shelf life of raspberries. Three decisions must be made: good agricultural practices, storage control, and temperature control. With the advance of technology, software that supports logistics operations, from order taking and receiving through storage, inventory control, and loading onto transport to unloading at the collection point, will be essential in all distribution processes in the cold chain. The programs required for cold logistics must be developed specifically for this purpose, because storage and transport techniques are not always compatible with the temperatures required.


References 1. Giongo, L., Ajelli, M., Poncetta, P., Ramos-garcía, M., Sambo, P., Farneti, B.: Postharvest Biology and Technology Raspberry texture mechanical profiling during fruit ripening and storage. Postharvest Biol. Technol. 149, 177–186 (2019) 2. Jara-Rojas, R., Bravo-Ureta, B.E., Solís, D.A.: Technical efficiency and marketing channels among small- scale farmers: evidence for raspberry production in Chile. Int. Food Agribusiness Manag. Rev. 21(3), 351–364 (2018) 3. Verdouw, C.N., Robbemond, R.M., Verwaart, T., Wolfert, J., Beulens, A.J.M., Robbemond, R.M., Beulens, A.J.M.: A reference architecture for IoT-based logistic information systems in agri-food supply chains. Enterp. Inf. Syst. 7575, 755–779 (2015) 4. Rajeev, A., Pati, R.K., Padhi, S.S., Govindan, K.: Evolution of sustainability in supply chain management: a literature review. J. Clean. Prod. 162, 299–314 (2017) 5. Govindan, K.: Sustainable consumption and production in the food supply chain: a conceptual framework. Int. J. Prod. Econ. 195, 419–431 (2018) 6. Gong, J., Pu, L., Zhang, H.: Numerical study of cold store in cold storage supply chain and logistics. In: 2010 International Conference E-Product E-Service E-Entertainment, ICEEE 2010, pp. 1–3 (2010) 7. Chadalavada, H., Raj, D.S., Balasubramanian, M.: Six Sigma implementation in a manufacturing unit - a case study. Int. J. Product. Qual. Manag. 19(4), 409–422 (2016) 8. Attia, A.M.: The effect of triple-A supply chain on performance applied to the Egyptian textile industry. Int. J. Integr. Supply Manag. 10, 225 (2016) 9. Flynn, B.B., et al.: The impact of supply chain integration on performance: a contingency and configuration approach. J. Oper. Manag. 26(1), 468–489 (2008) 10. Terziovski, M., Hermel, P.: The role of quality management practice in the performance of integrated supply chains: a multiple cross-case analysis. Qual. Manag. J. 18(2), 10–25 (2011) 11. Liu, G.: Science direct the impact of supply chain relationship on food quality. Procedia Comput. Sci. 131, 860–865 (2018) 12. He, Y., Xu, Q., Xu, B., Wu, P., Teoman, S., Ulengin, F.: The impact of management leadership on quality performance throughout a supply chain: an empirical study. Total Qual. Manag. Bus. Excell., 1–25 (2016) 13. Giuggioli, N.R., Girgenti, V., Briano, R., Peano, C.: Sustainable supply-chain: evolution of the quality characteristics of strawberries stored in green film packaging. CYTA – J. Food 15, 211–219 (2017) 14. Gharehgozli, A., Iakovou, E., Chang, Y., Swaney, R.: Trends in global E-food supply chain and implications for transport: literature review and research directions. Res. Transp. Bus. Manag. 25, 2–14 (2017) 15. Jonkman, J., Barbosa-p, A.P., Bloemhof, J.M.: Integrating harvesting decisions in the design of agro-food supply chains. Eur. J. Oper. Res. Receiv. 276, 1–26 (2018) 16. Besik, D., Nagurney, A.: Quality in competitive fresh produce supply chains with application to farmers’ markets. Socioecon. Plann. Sci. 60, 62–76 (2017) 17. Siddh, M.M., Soni, G., Jain, R.: Agri-fresh food supply chain quality (AFSCQ): a literature review. Ind. Manag. Data Syst. 117(9), 2015–2044 (2017) 18. Hongli, Y., Yongming, W.: Transportation expenses minimal modelling with application to fresh food supply chain. In: 13th Conference International IEEE 2017 sobre Medición Electrónica e Instrumentos (2017) 19. Yang, J.-J., Huang, S.-Z.: A study on the effects of supply chain relationship quality on firm performance-under the aspect of shared vision. J. Interdiscip. Math. 21(2), 419–430 (2018)


20. Osvald, A., Zadnik, L.: A vehicle routing algorithm for the distribution of fresh vegetables and similar perishable food. J. Food Eng. 85, 285–295 (2008) 21. Baselice, A., Colantuoni, F., Lass, D.A., Nardone, G., Stasi, A.: Trends in EU Consumers’ attitude towards fresh-cut fruit and vegetables. Food Qual. Prefer. 59, 87–96 (2017) 22. Cecilia, M., Nunes, N., Nicometo, M., Emond, J.P., Melis, R.B.: Improvement in fresh fruit and vegetable logistics quality: berry logistics field studies. Philosophical Transactions of the Royal Society A, Mathematical, Physical and Engineering Sciences (2014) 23. Hsu, H.: A compromise programming model for perishable food logistics under environmental sustainability and customer satisfaction. In: 2019 IEEE 6th International Conference on Industrial Engineering and Applications, pp. 294–298 (2019) 24. Yuan, Q., Chen, P.: Research on the agricultural supply chain management and its strategies. In: International Conference on Emergency Management and Management Sciences, pp. 173–176 (2010) 25. Rong, A., Akkerman, R., Grunow, M.: An optimization approach for managing fresh food quality throughout the supply chain. Int. J. Prod. Econ. 131(1), 421–429 (2011) 26. Fernandes, A.C., Sampaio, P., Sameiro, M., Truong, H.Q.: Supply chain management and quality management integration: a conceptual model proposal. Int. J. Qual. Reliab. Manag. 34(1), 53–67 (2017) 27. Jraisat, L.E., Sawalha, I.H.: Quality control and supply chain management: a contextual perspective and a case study. Supply Chain Manag. 18(2), 194–207 (2013) 28. Flynn, B.B., Flynn, E.J.: Synergies between supply chain management and quality management: Emerging implications. Int. J. Prod. Res. 43, 3421–3436 (2005) 29. Batson, R.G., Mcgough, K.D.: Quality planning for the manufacturing supply chain. Qual. Manag. J. 13(1), 33–42 (2006) 30. Majumdar, J.P., Manohar, B.M.: Why Indian manufacturing SMEs are still reluctant in adopting total quality management. J. Product. Qual. Manag. 17(1), 16–35 (2016) 31. Teoman, S., Ulengin, F.: The impact of management leadership on quality performance throughout a supply chain: an empirical study. Total Qual. Manag. Bus. Excell., 1–25 (2017) 32. Chuang, C.-J., Wu, C.-W.: Determining optimal process mean and quality improvement in a profit-maximization supply chain model. Qual. Technol. Quant. Manag. 3703, 1–16 (2017) 33. Lin, C., Chow, W.S., Madu, C.N., Kuei, C.H., Yu, P.P.: A structural equation model of supply chain quality management and organizational performance. Int. J. Prod. Econ. 96(3), 355–365 (2005) 34. Shashi, A.S., Cerchione, R., Singh, R., Centobelli, P.: Food cold chain management: From a structured literature review to a conceptual framework and research agenda. Int. J. Logist. Manag. 29, 792–821 (2018) 35. James, S.J., James, C.: The food cold-chain and climate change. Food Res. Int. 43(7), 1944– 1956 (2010) 36. Qiao, J.: Research on optimizing the distribution route of food cold chain logistics based on modern biotechnology. In: AIP Conference Proceedings, vol. 2110, June 2019 37. Wei, J.: Research on the cold chain logistics distribution system of agricultural products. In: Conference Series: Earth and Environmental Science, vol. 237, no. 5 (2019) 38. Badia-melis, R., Carthy, U.M., Ruiz-garcia, L., Garcia-hierro, J., Villalba, J.I.R.: New trends in cold chain monitoring applications - a review. Food Control 86, 170–182 (2018) 39. Carson, J.K., East, A.R.: The cold chain in New Zealand – A review. Int. J. Refrig 87, 185– 192 (2017) 40. 
Robertson, J., Franzel, L., Maire, D.: Innovations in cold chain equipment for immunization supply chains q. Vaccine 35(17), 2252–2259 (2017) 41. Hundy, G.F., Trott, A.W.: The Cold Chain – Transport, Storage, Retail. Refrigeration, Air Conditioning and Heat Pumps, pp. 273–287 (2016)


42. Mercier, S., Villeneuve, S., Mondor, M., Uysal, I.: Time – Temperature Management Along the Food Cold Chain: A Review of Recent Developments, pp. 1–21 (2017) 43. Angolia, M.G., Pagliari, L.R.: Experiential learning for logistics and supply chain management using an SAP ERP software simulation. Decis. Sci. J. Innov. Educ. 00, 104– 125 (2018) 44. Quang, H.T., Sampaio, P., Carvalho, M.S., Fernandes, A.C., An, D.T.B., Vilhenac, E.: An extensive structural model of supply chain quality management and firm performance. Int. J. Qual. Reliab. Manag. 33(4), 444–464 (2016) 45. Zhang, X., Li, G.: Study on the time-space optimization for cold-chain logistics of fresh agricultural products. In: International Conference on Future Information Technology and Management Engineering, pp. 331–333 (2010)

Cognitive Computing and Internet of Things

Introducing Intelligent Interior Design Framework (IIDF) and the Overlap with Human Building Interaction (HBI) Holly Sowles(&) and Laura Huisinga California State University, Fresno, 5225 N. Backer Ave., Fresno, CA 93740, USA {hsowles,lhuisinga}@csufresno.edu

Abstract. Increasingly, our interiors will have technology embedded throughout the space or used to morph or augment the space. The increased ubiquity of technology will result in humans living inside structures that not only provide for their occupants' needs but anticipate those needs beforehand. The Intelligent Interior Design Framework (IIDF) goes beyond "smart homes" connected to the internet of things (IoT). We seek to introduce the IIDF as an interdisciplinary strategy that can be used to design experience inside our interiors. The IIDF is a theoretical framework for emerging intelligent interiors, divided into three domains: Smart Geometry (SG), Ambient Intelligence (AmI), and Information Modeling (InfoMod). The disciplines of Human-Computer Interaction (HCI) and Human Building Interaction (HBI) are, by their very nature, interdisciplinary and collaborative. These disciplines each work in some way with the three domains of intelligent design: Smart Geometry, Ambient Intelligence, and Information Modeling.

Keywords: Intelligent interiors · Internet of Things (IoT) · Automation · Artificial Intelligence · User experience · Zero UI · Smart Geometry · Ambient Intelligence · Information Modeling · Human Building Interaction

1 Introduction

This research proposes a theoretical teaching and practice framework for the field of intelligent interior design. The framework is a sliding scale that permits the designer to establish best practices of cyber-technologies for various intelligent interior design iterations through the utilization of diverse software, technologies, and design capabilities. A meta-analysis of the available literature on distributed cognition, experiential knowledge, intelligent design technology, and intelligent design outcomes was conducted. The predominant research question was how the existing varieties of distributed cyber intelligence can be integrated into a single framework that would inform 'intelligent interiors' in the future [1]. The framework consists of three technological domains: Smart Geometry (SG), Ambient Intelligence (AmI), and Information Modeling (InfoMod). Each domain creates specific design solutions. From the research, the following assumptions were formulated.


Due to cyber-technologies, a paradigm shift will occur in the design thinking of the 21st century. Interior designers will not only design the void of the space; they will conceive interiorized experiences within the built environment. Instead of crafting for building typologies, designers will distribute experiences. Healthcare facilities will transform into the experience of well-being, educational institutions into edutainment, and communication will become the perception of presence by altering time, distance and scale within the intelligent interior [2]. The IIDF overlaps disciplines and functionality with HBI and HCI. The IIDF uses HBI to understand how the occupants of a building interact with the physical space or the objects inside it, through HCI's information technology design. For instance, creation within the Ambient Intelligence domain may require Voice User Interfaces (VUI) until enough microsensors are available to provide a Zero UI through which the building responds to its occupants seamlessly. This poster outlines the IIDF and how it works with HBI and HCI.
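As a toy illustration of the Zero UI idea described above, an Ambient Intelligence layer can be imagined as a set of rules mapping ambient sensor events to building responses; the sensor names, thresholds and actions below are hypothetical and are not part of the IIDF itself.

    # Toy sketch of the Zero UI idea: ambient sensor readings trigger building
    # responses without an explicit user interface. All names, thresholds and
    # actions are illustrative assumptions.
    RULES = [
        (lambda s: s["occupancy"] and s["lux"] < 150, "raise ambient lighting"),
        (lambda s: s["occupancy"] and s["co2_ppm"] > 1000, "increase ventilation"),
        (lambda s: not s["occupancy"], "set space to low-power standby"),
    ]

    def respond(sensor_state):
        return [action for condition, action in RULES if condition(sensor_state)]

    print(respond({"occupancy": True, "lux": 90, "co2_ppm": 1200}))
    # -> ['raise ambient lighting', 'increase ventilation']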

2 IIDF This research aims to introduce the Intelligent Interior Design Framework (IIDF) for interiors as an interdisciplinary strategy. IIDF goes beyond “smart homes” connected to the internet of things (IoT). Intelligent Interior Design Framework: The purpose of the theoretical framework for emerging intelligent interiors subsumes varieties of software, technologies, and design capabilities into one theoretical framework. The research comes out of a metaanalysis of the literature of distributed cognition, experiential knowledge, intelligent design technology, and intelligent design outcomes. The objective is to critically analyze existing intelligent design terminologies in the literature to see if diverse theories and technologies can be grouped into one system. This framework then serves as a tool for design applications for various cases in intelligent interiors by integrating distributed cyber intelligence into a single framework that will inform ‘intelligent interiors’ moving forward. Through the meta-analysis of the sources available in intelligent design literature, three domains have been determined to be the foundation of these computational processes. These domains are Smart Geometry, Ambient Intelligence, and Information Modeling. Timeline: In design today, emergent digital technologies play a significant role in how we communicate, collaborate, and produce our materials, artifacts, and structures through the many iterations of the design process. More importantly, digital technologies are changing the way of design thinking itself. This innovative hi-tech way of creating has produced an entirely new design paradigm. In the 20th century the Machine democratizes design at the level of making available more goods for the general population, the Computer in the 21st Century democratizes design by erasing the lines between disciplines and professions. The timeline is an illustration of the rapid changes that have occurred in computational design over the past six decades. This evolution was made possible through the invention of ubiquitous computing and cyber

technologies, transforming the way designers conceive and process their designs. Although digital design thinking began in the 1960s, it was not until the late 1980s that the rapid development of computational design was realized with the advent of the personal computer. Since then, many changes have occurred, and this progression is projected to carry on into the future at exponential rates. 20th-century design thinking was based on how a design was best applied according to building typology. 21st-century design thinking will be based on interiorized experiences made possible through cyber-technologies that distribute real-time information on demand. In the 21st century, the phenomenon of interiority will take on a new meaning. Living within a smart city may be like living inside a living organism. All human experience will be interiorized in the meandering context of the smart city and its megastructures [1]. The individual may navigate and interpret such environments much differently than how we currently perceive the interior. With limitless data in this boundless virtual landscape, the walls of the interior will no longer be confined by the exterior walls of the architecture.

The diagram is arranged as a timeline that highlights significant dates in the technological advancements impacting cyber-technology. The timeline begins with the introduction of the personal computer in the early 1990s, progressing through 2G, which connected the PC to the World Wide Web and enabled texting on our phones by approximately 2000. With the development of 3G, handheld devices became powerful portable PCs with which we can download media and communicate anytime, anywhere. With the launch of 5G, the IoT and Zero UI will be available by 2020 [3]. Smart cities are estimated to be fully operational by approximately 2035, and Artificial Intelligence will follow in 2050 [4]. Within each typology of interiorized experience in the diagram (Fig. 1), examples are provided to explain the evolution of the cyber-technologies within that typology, which may alter the design mindset. Cyber-technologies will be instrumental in shifting the interior designer's mindset from that of a designer of healthcare facilities to that of a designer of well-being experiences. Concurrently, they will also

change how we design intelligent interiors for learning and for the sense of presence as we communicate within virtual worlds.

Fig. 1. IIDF domains of Smart Geometry (SG), Ambient intelligence (AmI), and Information Modeling (InfoMod) with a sliding scale.

The Internet of Things and Zero UI: As intelligent technology continues to progress, the newest advancement in cyber-connectivity is the ‘Internet of Things’ (IoT). Today this technology facilitates us by pulling data from the cloud via high-speed, cyber-connectivity which communicates with our handheld devices and personal computers allowing for multi-layered tasking. As the IoT continues to develop it is anticipated that user interfaces such as television, computer screens, and cell phones will become unnecessary in order for us to interact, and communicate, with each other and the built environment. This trend will create a new paradigm in design, known as Zero User Interfaces (UI) turning our interiors and exteriors into haptic, automated, and ambient spaces [5]. The IoT will be embedded in every material and structural element of the built environment as well as the artifacts within it. Machines will speak to one another on our behalf without human intermediation. Our challenge as interior designers, with such advanced technology, will be to better understand how people are embedded in the world and how we will coordinate our spaces when considering this advanced network of systems. ‘Systems of Technology in Intelligent Interiors’: The three domains in intelligent design: Smart Geometry, Ambient Intelligence and Information Modeling, work in tandem as ‘systems’ through the cyber interconnectivity of the Internet of Things [1]. The map also delineates the digital software which generates various intelligent design outcomes, within each domain. Smart Geometry (SG): The domain of Smart Geometry includes technologies that are associated with parametric coding and scripting through algorithms. SG is not just a tool, but a computational technique, which creates design simulations of the model for the meaning and experience of the occupant [6].

Ambient Intelligence (AmI): Ambient Intelligence is associated with interactive design that activates interfaces, sensors, and actuators through various forms of computation and microprocessors [7]. AmI is instrumental in creating built environments that are sentient, with the capability of recognizing the inhabitants' needs, learning from their behavior, and reacting in their interest at all times. AmI is based on omnipresent, ubiquitous, or embedded computing [7]. This technology has the aptitude to profile the user by using context awareness and human-centric computer interaction. Information Modeling (InfoMod): The Information Modeling domain represents integrated technologies such as Revit and BIM. These technologies distribute information to allied fields, streamlining the design process and better controlling construction and budgetary constraints. In the virtual realm of information modeling, augmented reality is the facilitator of intelligent design. InfoMod is depicted as integrated technology software capable of generating interiors that are kinetic, dynamic, and deployable [8]. In other words, they are transportable, unfixed, and physically adaptable to the environment and the individual occupant. The two overarching conclusions of this research are: 1. Intelligent interior design typologies will no longer be based on building types; they will be based on a range of interiorized experiences. 2. The paradigm shift in 21st-century interior design thinking will be a result of interiorized experiences [1].

Fig. 2. Disciplines with the potential to contribute to the Intelligent Interior Design Framework. Graphic Design, Interior Design, Architecture, Computer Science, Urban Design, Mechanical Engineering, Cognitive Psychology, Human Factors, Education & others

Fig. 3. Intelligent Interiors Framework (SG, AmI, Info Mod)

3 HBI and IIDF

3.1 HCI and HBI's Role in IIDF

The disciplines of Human-Computer Interaction (HCI) and Human Building Interaction (HBI) pull from a variety of other disciplines. HCI and HBI, by their very nature, are interdisciplinary and collaborative. Each of these different disciplines works in some way with the three domains of intelligent design: Smart Geometry, Ambient Intelligence, and Information Modeling. Figure 1 shows the overlap of a variety of disciplines that can all benefit from the IIDF and fall inside the realm of HCI and HBI. The intelligent design framework can be used not only by interior designers but by all disciplines that seek to embed technology seamlessly into the fabric of our lives. Figure 2 embeds the IIDF into the encompassing sphere of experience, using the IoT to create and achieve these total experiences and using a Zero UI to interact with or change the experience. The diagram in Fig. 3 shows how different disciplines overlap with the intelligent design technologies framework [9]. HBI looks at design problems dealing with the complexity of human interaction and the social experience inside, as well as with, the built environment [10]. What are Human-Building Interaction (HBI) and Human-Computer Interaction (HCI)? According to Shen et al. [11, 12], Human-Building Interaction is defined as the study of the interface between the occupants and the building's physical space and the objects within it. According to the Interaction Design Foundation, Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design.

3.2 Transition Away from GUI to Zero-UI

As technology continues to evolve, we continue to transition from pure GUI (graphic user interface) to natural UI (user interface) and non-WIMP (windows, icons, mouse, pointer) interfaces [13]. Previously screens were needed to complete an interaction or to communicate the desired outcome. Moving forward, technology will become embedded into the things and structures around us through the IoT. Screens will not always be needed to interface with this embedded technology. This third industrial revolution [14] will allow for a super internet of things infrastructure, where sensors will be embedded in our everyday objects and appliances. If done properly, this has the potential to reduce the memory load on daily tasks or operations. As we create technology that understands the user instead of the user trying to remember orders of operation, we can prevent errors and reduce memory load while striving for consistency [13].

References 1. Sowles, H.: Distributed Knowledge in Interior Design (2016) 2. Wang, D.: Towards a new virtualist design research programme. FORMakademisk 5(2), 3 (2013) 3. Rifkin, J.: The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, 1st edn. Palgrave Macmillan, New York (2014) 4. Bostrom, N.: What happens when our computers get smarter than we are (2015). https:// www.youtube.com/watch?v=MnT1xgZgkpk 5. Jones, C.: The business of the IoT with Chad Jones. You Tube, 29 October 2013. www. youtube.com/watch?v=Yd98Naz8jvQ. Accessed 28 Dec 2015 6. Kocaturk, T., Medjdoub, B.: Distributed Intelligence in Design, pp. v–xv (2011) 7. Wichert, R.: Preface. In constructing ambient intelligence. In: AmI 2011 Workshops, Amsterdam, The Netherlands, 16–18 November 2011. Revised selected papers. Springer, Berlin (2012) 8. Fox, M., Kemp, M.: Interactive Architecture. Princeton Architectural Press, New York (2009) 9. Mackay, W.E., Fayard, A.-L.: HCI, natural science and design: a framework for triangulation across disciplines. In: Proceedings of the 2nd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, pp. 223–234. ACM (1997) 10. Alavi, H.S., Churchill, E.F., Wiberg, M., Lalanne, D., Dalsgaard, P., Schieck, A.F.G., Rogers, Y.: Introduction to human-building interaction (HBI): Interfacing HCI with architecture and urban design. ACM Trans. Comput. Hum. Interact. (TOCHI) 26(2), 6 (2019) 11. Shen, L., Hoye, M., Nelson, C., Edwards, J.: Human-building interaction (HBI): a usercentered approach to energy efficiency innovations (2016). https://aceee.org/files/proceedings/ 2016/data/ 12. Shen, L.: Human-Building Interaction (HBI): Design Thinking and Energy Efficiency. Center for Energy and Environment, Minneapolis (2015). http://mncee.org/InnovationExchange/ Resource-Center/Technical-Reports/Human-Building-Interaction-(HBI)-DesignThinking-a/. What is Human-Computer Interaction (HCI)? (n.d.). https://www.interaction-design.org/ literature/topics/human-computer-interaction 13. Kim, G.J.: Human-Computer Interaction: Fundamentals and Practice. Auerbach Publications, Boca Raton (2015) 14. Rifkin, J.: How the third industrial revolution will create a green economy (2017). http:// www.huffingtonpost.com/jeremy-rifkin/third-industrial-revolution-green-economy_b_8286 142.html

IoT and AI in Precision Agriculture: Designing Smart System to Support Illiterate Farmers Javed Anjum Sheikh(&), Sehrish Munawar Cheema, Muhammad Ali, Zohaib Amjad, Jahan Zaib Tariq, and Ammerha Naz University of Sialkot, Sialkot, Pakistan {javed.sheikh,sehrish.munawar}@uskt.edu.pk, [email protected], [email protected], [email protected], [email protected]

Abstract. Precision agriculture is revolutionizing the concept of smart farming across the entire world. Smart and precise agriculture is the key to producing the best crop yield. Worldwide, a major portion of the agrarian community is illiterate and unaware of smart farming and intelligent systems. Our research is a bridge between agricultural researchers and computer technologists. Our proposed framework reflects an intelligent and secure system equipped with related sensors and wireless communication systems implanted in farms. The hardware integrates with an Android application (prototype) to manage planting in farms, plus a web interface through which an administrator maintains knowledge of the latest crops and feeds the system's crop knowledge base. Our project creates a repository to store crop data (the knowledge base) and to collect sensed data for decision-making. Guidelines are provided to illiterate users via an interactive interface. Keywords: Precision agriculture (PA) · IoT technology · Digital decision support · Illiterate farmer · Soil micro/macronutrients · Android apps

1 Introduction

According to the UN Food and Agriculture Organization (FAO), the global population is set to reach 10 billion by the year 2050. Worldwide, most of the agrarian population is illiterate. Researchers have proposed ICT-based interventions and frameworks to facilitate information dissemination for farmers [1–4], but these remain out of reach for illiterate farmers. Precision agriculture based on Artificial Intelligence and the Internet of Things (IoT) [6, 7] helps the farmer by improving, automating, and optimizing all possible directions to boost agricultural productivity and build a smart crop system [2, 5, 8–11]. Illiteracy is one of the hurdles to adopting the latest technologies [12]. In Pakistan [13–15] and other developing countries, farmers rely largely on experience to grow their crops, which leads to losses in production; precision agriculture therefore fails because of illiteracy. The other issue is the communication gap between agricultural researchers and technology researchers: both provide advancements in their own domains, but neither is concerned with the awareness of farmers. This research will be a bridge between agricultural research and computer technologists. Our underlying work

is for illiterate farmers, to provide all types of agriculture-related support at their doorstep. We need a system that guides illiterate farmers by answering the following questions:
• Which crops are suitable to plant for a specific type of soil?
• Which are the best cultivation practices for a specific crop and soil?
• Which pesticides and fertilizers are most suitable for a crop?
In this context, our research question is: "To what extent can illiterate farmers benefit from a smart system?"

The rest of the paper is organized as follows: Sect. 2 introduces the literature study, Sect. 3 discusses the proposed methodology and system components, and Sect. 4 presents the conclusion and discussion.

2 Literature Study

During the detailed literature survey, related research works were identified that focus on improving the performance of agriculture with respect to different aspects such as precision, optimized energy consumption, the language barrier, increased soil fertility, and secure communication [4, 16–20], but they ignore illiterate farmers. The category-wise [21–25] development of agricultural systems and mobile apps can be seen in Fig. 1.

Fig. 1. Development of agricultural system and apps (in %)

The statistics shown in Table 1 clearly highlight the need for an intelligent system that measures the real-time planting needs of crops considering all possible parameters. It is also noted that, internationally, most of the targeted farmers are not illiterate, so these apps do not offer a solution for illiterate farmers.

Table 1. Comparison with existing systems.

Previous systems | System name | Technology | Crop | Precision agriculture | Country/language barrier | Optimize energy consumption | Secure communication
[1] | Modified TEAPEST | Radial basis function networks | Tea | Yes 99.9% | India | No | No
[26] | N/A | WSN | N/A | Yes | Not specific | Yes | No
[27] | SWAMP | IoT | N/A | No | N/A | No | No
[28] | N/A | – | Not specific | Yes | N/A | No | No
[29] | N/A | – | Grass | No | UK & Ireland | No | No
[10] | N/A | Fuzzy logic | Not specific crop | No | India | Yes | Yes
[30] | N/A | Cuckoo search algorithm | Not specific crop | No | No language barrier | No | No
Proposed framework | – | IoT, fuzzy logic, ML | General crops | Yes | Yes | Yes | Yes

The proposed system is designed with fog computing capability, which provides low latency [31], and with blockchain security support to track and trace the device transactions performed during the processing of the smart system [32–37]. The ability to trace blockchain-based transactions not only enables the proposed system to provide security and privacy features but also provides seamless connectivity and availability of the proposed smart system's features.
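The following minimal sketch illustrates, under stated assumptions, one way the blockchain-style traceability described above could work at a fog node: each device transaction is hash-chained to the previous one, so any later modification of a stored record becomes detectable. The function names and record fields are illustrative and not the published implementation.

```python
# Illustrative sketch (not the authors' implementation): hash-chaining sensor
# transactions so that any later tampering with a stored record is detectable.
import hashlib
import json
import time


def chain_transaction(record: dict, prev_hash: str) -> dict:
    """Attach the previous hash and compute this record's own hash."""
    body = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    return body


def verify_chain(chain: list) -> bool:
    """Recompute every hash; a single altered field breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("record", "prev_hash", "timestamp")}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


# Example: two sensor readings appended by a fog node.
chain = []
prev = "0" * 64  # genesis value
for reading in [{"soil_moisture": 31.2}, {"soil_moisture": 30.7}]:
    block = chain_transaction(reading, prev)
    chain.append(block)
    prev = block["hash"]
print(verify_chain(chain))  # True until any stored record is modified
```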

3 Methodology and System Components

The proposed recommender system gets real-time values from sensors implanted in the user's farms and makes recommendations about suitable crops to grow, fertilizers, pesticides, and irrigation. A microcontroller grabs data from the sensors and transmits it to the web server (fog nodes), where a rule-based analysis is performed to match predefined conditions against the current state of the crops captured through the sensors. After mapping the conditions to rules, the average feasibility of a specific crop is generated. The sensor values are then processed on the server, and a list of crops with their feasibility is sent to the user's mobile app, from where users can decide on and schedule the planting process in the farms where the sensor setup is implanted. A fuzzy rule-based engine works on this real-time collected data to compare and match it against the knowledge base. After mapping the conditions to rules, the average feasibility rating of each plant is generated, and the results are shaped into recommendations for farmers on the app dashboard. A simplified sketch of this rule-matching step is shown below.
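As a rough illustration of the rule-matching step, the sketch below scores each crop by how many sensed parameters fall inside its preferred range and ranks the crops by that feasibility. The crop names, parameter ranges, and crisp thresholds are illustrative assumptions only; the actual system uses a fuzzy rule-based engine over its own knowledge base.

```python
# Simplified sketch of the rule-matching step (illustrative thresholds, not the
# authors' knowledge base): score each crop by how many sensed parameters fall
# inside its preferred range, then rank crops by average feasibility.

# Hypothetical knowledge base: preferred (min, max) per parameter for each crop.
KNOWLEDGE_BASE = {
    "wheat":  {"soil_moisture": (20, 60), "temperature": (10, 25), "ph": (6.0, 7.5)},
    "cotton": {"soil_moisture": (35, 70), "temperature": (21, 37), "ph": (5.8, 8.0)},
    "rice":   {"soil_moisture": (60, 95), "temperature": (20, 35), "ph": (5.5, 7.0)},
}


def feasibility(sensed: dict, rules: dict) -> float:
    """Fraction of rules satisfied by the current sensor readings (0..1)."""
    hits = sum(lo <= sensed[param] <= hi for param, (lo, hi) in rules.items())
    return hits / len(rules)


def recommend(sensed: dict) -> list:
    """Return crops ranked by feasibility, as sent to the farmer's app."""
    scores = {crop: feasibility(sensed, rules) for crop, rules in KNOWLEDGE_BASE.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Example reading grabbed by the microcontroller and forwarded to a fog node.
reading = {"soil_moisture": 45, "temperature": 18, "ph": 6.8}
print(recommend(reading))  # ranks wheat first, then cotton, then rice
```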

3.1 Design Integration of System Components

From the recommendations, users can decide which crops and seeds to grow, schedule the watering of plants, schedule the fertilizing of plants, and schedule the harvesting period. When the scheduled date and time arrive, the system notifies the user so that the actuators can be controlled remotely. System design and integration are shown in Fig. 2, and a minimal sketch of the scheduling step follows the figure.

Fig. 2. Design integration of system components
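A minimal sketch of the scheduling step is given below, assuming a simple in-memory schedule; the task names, actuator identifiers, and dates are hypothetical, and the real prototype would deliver push notifications through the Android app rather than console messages.

```python
# Minimal sketch (assumed design, not the published prototype) of the scheduling
# step: when a stored schedule entry becomes due, notify the farmer's app so the
# corresponding actuator (e.g. a water valve) can be switched remotely.
from datetime import datetime

# Hypothetical schedule created from the recommendations on the app dashboard.
schedule = [
    {"task": "irrigation", "due": datetime(2020, 7, 18, 6, 0), "actuator": "valve_1"},
    {"task": "fertilizer", "due": datetime(2020, 7, 20, 7, 30), "actuator": "pump_2"},
]


def due_tasks(now: datetime) -> list:
    """Return the tasks whose scheduled date and time has arrived."""
    return [entry for entry in schedule if entry["due"] <= now]


for entry in due_tasks(datetime.now()):
    # In the real system this would be a push notification with voice/icon
    # guidance for illiterate users rather than a console message.
    print(f"Reminder: {entry['task']} due, actuator {entry['actuator']} ready to switch on")
```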

4 Conclusion

This paper aims to address the most important challenges towards the digitization of agriculture for illiterate farmers in Pakistan. Precision agriculture enhances crop productivity by using the latest technologies, i.e., WSN, IoT, cloud computing, Artificial Intelligence (AI), and Machine Learning (ML) [38–40]. We are going to provide a smart system to illiterate Pakistani farmers in an easy and understandable way [41–44]. This project will be a bridge between agricultural research and computer scientists. The next phase of this research is to conduct a survey of illiterate farmers to validate our proposed framework.

References 1. Sheikh, J.A., Arshad, A.: Using heuristic evaluation to enhance the usability: a model for illiterate farmers in Pakistan. In: International Conference on Applied Human Factors and Ergonomics, pp. 449–459 (2017) 2. Mubin, O., Tubb, J., Novoa, M., Naseem, M., Razaq, S.: Understanding the needs of Pakistani farmers and the prospects of an ICT intervention. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1109–1114 (2015) 3. Jha, K., Doshi, A., Patel, P., Shah, M.: A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2, 1–12 (2019) 4. Bannerjee, G., Sarkar, U., Das, S., Ghosh, I.: AI in agriculture: a survey. Int. J. Sci. Res. Comput. Sci. Appl. Manag. Stud. 7(3), 1–6 (2018) 5. Khanna, A., Kaur, S.: Evolution of Internet of Things (IoT) and its significant impact in the field of precision agriculture. Comput. Electron. Agric. 157, 218–231 (2019) 6. Jayaraman, P.P., Yavari, A., Georgakopoulos, D., Morshed, A., Zaslavsky, A.: Internet of Things platform for smart farming: experiences and lessons learned. Sensors 16(11), 1884 (2016) 7. Munir, M.S., Bajwa, I.S., Cheema, S.M.: An intelligent & secure smart watering system using fuzzy logic & blockchain. Comput. Electr. Eng. 77, 109–119 (2019) 8. Pawar, S.B., Rajput, P., Shaikh, A.: Smart irrigation system using IOT and raspberry pi. Int. Res. J. Eng. Technol. 5(8), 1163–1166 (2018) 9. Kamilaris, A., Prenafeta-Boldú, F.X.: Deep learning in agriculture: a survey. Comput. Electron. Agric. 147, 70–90 (2018) 10. Cheema, S.M., Khalid, M., Rehman, A., Sarwar, N.: Plant irrigation and recommender system–IoT based digital solution for home garden. In: International Conference on Intelligent Technologies & Applications, pp. 513–525 (2018) 11. Popović, T., Latinović, N., Pešić, A., Zečević, Ž., Krstajić, B., Djukanović, S.: Architecting an IoT-enabled platform for PA and ecological monitoring: a case study. Comput. Electron. Agric. 140, 255–265 (2017) 12. Tummers, J., Kassahun, A., Tekinerdogan, B.: Obstacles & features of farm MIS: a literature review. Comp. Elect. Agric. 157, 189–204 (2019) 13. Katyara, S., Shah, M.A., Zardari, S., Chowdhry, B.S., Kumar, W.: WSN based smart control & remote field monitoring of Pakistan’s irrigation system using SCADA applications. Wireless Personal Commun. 95(2), 491–504 (2017) 14. Talpur, M.S.H., Shaikh, M.H., Talpur, H.S.: Relevance of Internet of Things in animal stocks chain management in Pakistan’s perspectives. Int. J. Inf. Educ. Technol. 2(1), 29 (2012) 15. Khan, F.S., Razzaq, S., Irfan, K., Maqbool, F., Farid, A., Illahi, I., Ul Amin, T.: Dr. Wheat: a Web-based expert system for diagnosis of diseases and pests in Pakistani wheat. In: Proceedings of the World Congress on Engineering, vol. 1, pp. 2–4 (2008) 16. Nakutis, Z., et al.: Remote agriculture automation using a wireless link and IoT gateway infrastructure. In: 2015 26th International Workshop on Database and Expert Systems Applications (DEXA), pp. 99–103 (2015) 17. Brun-Laguna, K., et al.: A demo of the PEACH IoT-based frost event prediction system for precision agriculture. In: 13th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), pp. 1–3 (2016)

18. Ayaz, M., Ammad-Uddin, M., Sharif, Z., Mansour, A., Aggoune, E.-H.M.: Internet-ofThings (IoT)-based smart agriculture: toward making the fields talk. IEEE Access 7, 129551–129583 (2019) 19. Dholu, M., Ghodinde, K.A.: Internet of Things (IoT) for precision agriculture application. In: 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI), pp. 339–342 (2018) 20. Khattab, A., Abdelgawad, A., Yelmarthi, K.: Design and implementation of a cloud-based IoT scheme for precision agriculture. In: 2016 28th International Conference on Microelectronics (ICM), pp. 201–204 (2016) 21. Reghunadhan, R.: Big data, climate smart agriculture and India–Africa relations: a social science perspective. In: IoT and Analytics for Agriculture, pp. 113–137. Springer (2020) 22. Roopaei, M., Rad, P., Choo, K.-K.R.: Cloud of things in smart agriculture: intelligent irrigation monitoring by thermal imaging. IEEE Cloud Comput. 4(1), 10–15 (2017) 23. Patel, H., Patel, D.: Survey of android apps for the agriculture sector. Int. J. Inf. Sci. Tech. 6 (1–2), 61–67 (2016) 24. Farooq, M.S., Riaz, S., Abid, A., Abid, K., Naeem, M.A.: A survey on the role of IoT in agriculture for the implementation of smart farming. IEEE Access 7, 156237–156271 (2019) 25. Mainuddin, M., Kirby, M., Chowdhury, R.A.R., Shah-Newaz, S.M.: Spatial and temporal variations of, and the impact of climate change on, the dry season crop irrigation requirements in Bangladesh. Irrig. Sci. 33(2), 107–120 (2015) 26. Bahlo, C., Dahlhaus, P., Thompson, H., Trotter, M.: The role of interoperable data standards in precision livestock farming in extensive livestock systems: a review. Comput. Electron. Agric. 156, 459–466 (2019) 27. Lavanya, G., Rani, C., Ganeshkumar, P.: An automated low cost IoT based fertilizer intimation system for smart agriculture. Sustain. Comput. Inform. Syst. 1, 1 (2019). https:// doi.org/10.1016/j.suscom.2019.01.002 28. Pathak, A., AmazUddin, M., Abedin, M.J., Andersson, K., Mustafa, R., Hossain, M.S.: IoT based smart system to support agricultural parameters: a case study. Procedia Comput. Sci. 155, 648–653 (2019) 29. Bandjur, D., Jakšić, B., Bandjur, M., Jović, S.: An analysis of energy efficiency in wireless sensor networks (WSNs) applied in smart agriculture. Comput. Electron. Agric. 156, 500– 507 (2019) 30. Kumar, G.: Research paper on water irrigation by using wireless sensor network. Int. J. Sci. Res. Eng. Technol. 123–125 (2014). IEERT Conference Paper 31. Monteiro, K., Rocha, E., Silva, E., Santos, G.L., Santos, W., Endo, P.T.: Developing an ehealth system based on IoT, fog and cloud computing. In: 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), pp. 17–18 (2018) 32. Pee, S.J., Nans, J.H., Jans, J.W.: A simple blockchain-based peer-to-peer water trading system leveraging smart contracts. In: Proceedings of the International Conference on Internet Computing (ICOMP), pp. 63–68 (2018) 33. Rao, R.N., Sridhar, B.: IoT based smart crop-field monitoring and automated irrigation system. In: 2018 2nd International Conference on Inventive Systems and Control (ICISC), pp. 478–483. IEEE (2018) 34. Higgins, S., Schellberg, J., Bailey, J.S.: Improving productivity and increasing the efficiency of soil nutrient management on grassland farms in the UK and Ireland using precision agriculture technology. Eur. J. Agron. 106, 67–74 (2019) 35. Minh, Q.T., Phan, T.N., Takahashi, A., Thanh, T. 
T., Duy, S.N., Thanh, M.N., Hong, C.N.: A cost-effective smart farming system with a knowledge base. In: Proceedings of the Eighth International Symposium on Information and Communication Technology, pp. 309–316 (2017)

36. Yousefpour, A., Fung, C., Nguyen, T., Kadiyala, K., Jalali, F., Jue, J.P.: All one needs to know about fog computing and related edge computing paradigms: a complete survey. J. Syst. Arch. 98, 289–330 (2019) 37. Zamora-Izquierdo, M.A., Santa, J., Martínez, J.A., Martínez, V., Skarmeta, A.F.: Smart farming IoT platform based on edge and cloud computing. Biosyst. Eng. 177, 4–17 (2019) 38. Zambon, I., Cecchini, M., Egidi, G., Saporito, M.G., Colantoni, A.: Revolution 4.0: industry vs. agriculture in a future development for SMEs. Processes 7(1), 36. https://doi.org/10. 3390/pr7010036.2019 39. World Economic Forum: Innovation with a Purpose: The Role of Tech Innovation. World Economic Forum, no. January, pp. 1–42 (2018) 40. Sun Media Corporation: DSTE Brings Smart Farming Solution to Agriculture Industry (2019). https://www.thesundaily.my/spotlight/dste-brings-smart-farming-solution-toagriculture-industry-NA1174843 41. Barnes, A.P., Soto, I., Eory, V., Beck, B., Balafoutis, A., et al.: Exploring the adoption of precision agricultural technologies: a cross regional study of EU farmers. Land Use Policy 80, 163–174 (2019) 42. Ayre, M., Mc Collum, V., Waters, W., Samson, P., Curro, A., Nettle, R., Paschen, J.-A., King, B., Reichelt, N.: Supporting and practising digital innovation with advisers in smart farming. NJAS Wageningen J. Life Sci. 90, 100302 (2019) 43. Mekala, M.S., Viswanathan, P.: CLAY-MIST: IoT-cloud enabled CMM index for smart agriculture monitoring system. Measurement 134, 236–244 (2019) 44. Nawandar, N.K., Satpute, V.R.: IoT based low cost and intelligent module for smart irrigation system. Comput. Electron. Agric. 162, 979–990 (2019)

Selection of LPWAN Technology for the Adoption and Efficient Use of the IoT in the Rural Areas of the Province of Guayas Using AHP Method Miguel Angel Quiroz Martinez(&), Gonzalo Antonio Loza González, Monica Daniela Gomez Rios, and Maikel Yelandi Leyva Vazquez Computer Science Department, Universidad Politécnica Salesiana, Guayaquil, Ecuador {mquiroz,gloza,mgomezr,mleyvaz}@ups.edu.ec

Abstract. Ecuador, being a developing country, depends on the search for and implementation of new technologies that help improve its productive matrix; this is the case of the agricultural sector, driven by the growth of the Internet of Things (IoT) market. Low-power wide-area network (LPWAN) technology has allowed IoT expansion in underdeveloped countries. Sigfox, LoRa, and NB-IoT are the three leading LPWAN technologies that compete for large-scale IoT implementation. This document provides a comparative study of these technologies, focused on the livestock sector. Basic criteria are identified to facilitate decision making when using IoT technology for the development of livestock in rural areas of Guayas province. Through the analytic hierarchy process (AHP), the type of technology to be implemented is ranked and selected. Keywords: Livestock · IoT · LoRa · NB-IoT · Sigfox · AHP

1 Introduction

IoT employs wireless technologies: users can access and control devices from anywhere, which requires an internet connection to monitor the performance of the processes performed [1]. The area of implementation of this technology is extensive; it can cover almost all the processes that occur in an industry, from product manufacturing and raw material processing up to logistics management [2]. Its implementation in the industrial sector is in high demand, including in the automotive, naval, logistics, hospitality, agricultural, and livestock services sectors; in the latter, research was conducted regarding the application of this technology. For the livestock sector in Ecuador, the ESPAC survey [3] records that by 2018 the annual rate of variation of cattle registered a decrease of 0.007% in relation to 2017 at the national level, and shows that the Costa region owns 40.91% of the cattle in the country. The Coast is the second region with the largest amount of livestock, and ESPAC notes that 7.39% of livestock production comes from the province of Guayas. In this context, the processes implemented for livestock production are, in general, very conventional: proper profit maximization is not obtained, and there is low participation of new
technologies for developing process improvements and generating quality products. The objective of the study was to establish sufficient principles, techniques, and methods to allow the correct adoption and efficient use of IoT in livestock farming in rural areas of the Guayas province. This research studied the processes that must be applied for the Guayas livestock sector, specifically in rural areas, to implement IoT technology, given the benefits that can be obtained, such as optimizing times, increasing quality, monitoring both production and livestock, and reducing preventive controls (in this case, visits by a veterinarian), which represents a cost reduction.

2 Materials and Methods

A phenomenological, interpretative investigation with a qualitative approach and a documentary design was developed. The main research technique was the bibliographic review of works related to the central theme, which contributed founding theories from various perspectives. Among the authors consulted are A. González, R. Casado, S. Márquez, J. Prieto, and J. M. Corchado; K. Mikhaylov, J. Petaejaejaervi, and T. Haenninen, among others.

2.1 Bibliographic Review

LPWAN is very suitable for IoT applications that only need to transmit small amounts of data over long distances [4]. Currently, several organizations are working on the development of various IoT standards [5]. Although developed countries have led the world in the use of ICT, in the last decade developing countries have started to implement ICT [6]. IoT became a hot topic in China as early as 2009, when Chinese Prime Minister Jiabao Wen called for the rapid development of IoT technologies [7]. On other continents, the concept of IoT was recently adopted by non-agricultural scientists [8]. Jianhua Zhang and others [9] designed and developed IoT monitoring equipment, underpinning the observation by Xiaojing and Yuangua that most cattle and poultry farms in China cannot precisely control the breeding environment; in their system, the information was transmitted to the server using a wireless sensor network (WSN) and mobile communication technologies such as Bluetooth, Wi-Fi, ZigBee, and 3G [10]. Given the high costs of implementing an automatic water supply and control system in a cattle enclosure for an SME in developing countries, González and others [11] consider that, thanks to the evolution of IoT devices, it is possible to reduce the cost of this implementation.

2.2 Applications of IoT in the Livestock Sector

IoT applications are mostly used in livestock farming for field monitoring, agricultural vehicle tracking, and more. For example, sensors alert farmers to the amount of water and food available to the livestock, giving those in charge time to prevent a lack of water and food, avoiding problems with negative effects and improving
the quality of the cattle. Devices placed on the animals gather comprehensive information about their bodies and collect data on livestock health.

2.3 Technical Differences

In this section, the technical differences between Sigfox, LoRa, and NB-IoT are highlighted and presented in terms of physical and communication characteristics. In addition, these technologies were compared in terms of IoT success factors such as quality of service, coverage, range, latency, battery life, scalability, payload length, deployment, and cost, in order to establish which is the best LPWAN technology; the chosen technology is later defined as the basis of communication for IoT using AHP.

2.3.1 Sigfox
Sigfox is an LPWAN network operator that offers an end-to-end IoT connectivity solution based on its patented technologies. Sigfox deploys its patented base stations equipped with software-defined cognitive radios and connects them to the end servers using an IP-based network. The end devices are connected to these base stations by binary phase-shift keying (BPSK) modulation on an ultra-narrowband (100 Hz) sub-GHz ISM band carrier. Sigfox uses unlicensed ISM bands, for example, 868 MHz in Europe, 915 MHz in North America, and 433 MHz in Asia. The number of messages through the uplink is limited to 140 messages per day. The maximum payload length for each uplink message is 12 bytes. The message of each device is transmitted several times (three by default) through different frequency channels. For this purpose, in Europe, for example, the band between 868.180 MHz and 868.220 MHz is divided into 400 orthogonal channels of 100 Hz (among them, 40 channels are reserved and not used) [12].

2.3.2 LoRa
LoRa is a physical-layer technology that modulates signals in the sub-GHz ISM band using a patented spread spectrum technique. Like Sigfox, LoRa uses unlicensed ISM bands, that is, 868 MHz in Europe, 915 MHz in North America, and 433 MHz in Asia. Bidirectional communication is provided by chirp spread spectrum modulation, which spreads a narrowband signal over a wider channel bandwidth. The resulting signal has low noise levels, which allows high resistance to interference, and is difficult to detect or block [13]. LoRa uses six spreading factors (SF7 to SF12) to adapt the trade-off between data rate and range. The LoRa data rate is between 300 bps and 50 kbps, depending on the spreading factor and the channel bandwidth; a sketch of this relation is shown at the end of this section. In addition, messages transmitted using different spreading factors can be received simultaneously by the LoRa base stations [14]. The maximum payload length for each message is 243 bytes.

2.3.3 NB-IoT
NB-IoT is a narrowband IoT technology specified in Release 13 of the 3GPP in June 2016. NB-IoT can coexist with GSM (Global System for Mobile Communications) and LTE (Long Term Evolution) in licensed frequency bands (for example, 700 MHz, 800 MHz, and 900 MHz). NB-IoT occupies a frequency bandwidth of 200 kHz, which
corresponds to one resource block in GSM and LTE transmission [15]. With this frequency band selection, the following modes of operation are possible: standalone operation, guard-band operation, and in-band operation. The NB-IoT communication protocol is based on the LTE protocol. In fact, NB-IoT reduces the functionalities of the LTE protocol to a minimum and improves them as necessary for IoT applications. For example, the LTE backend system is used to transmit information that is valid for all end devices within a cell. Therefore, NB-IoT technology can be considered a new air interface from the point of view of the protocol stack, while being built on the well-established LTE infrastructure. NB-IoT allows connectivity of up to 100 K end devices per cell, with the potential to expand capacity by adding more NB-IoT carriers. NB-IoT uses single-carrier frequency division multiple access (FDMA) in the uplink and orthogonal FDMA (OFDMA) in the downlink, and uses quadrature phase-shift keying (QPSK) modulation [16].
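To illustrate the data-rate/range trade-off mentioned for LoRa, the sketch below applies the standard LoRa bit-rate relation Rb = SF · (BW / 2^SF) · CR across the six spreading factors. The 125 kHz bandwidth and 4/5 coding rate are assumed example values, not parameters taken from this study.

```python
# Back-of-the-envelope sketch of the standard LoRa bit-rate relation
# Rb = SF * (BW / 2**SF) * CR. Bandwidth and coding rate below are assumed
# example values (125 kHz, CR 4/5), not figures from the paper.

def lora_bit_rate(sf: int, bw_hz: float = 125e3, coding_rate: float = 4 / 5) -> float:
    """Approximate LoRa physical-layer bit rate in bits per second."""
    return sf * (bw_hz / 2 ** sf) * coding_rate

for sf in range(7, 13):  # spreading factors SF7..SF12
    print(f"SF{sf}: {lora_bit_rate(sf):7.0f} bps")
# SF7 gives roughly 5.5 kbps and SF12 about 290 bps at 125 kHz, illustrating the
# trade-off; the higher rates quoted in the text require other radio settings.
```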

2.4 Cattle Monitoring Bracelets

Comparing the cattle monitoring bracelets that apply the three different LPWAN technologies, the following analysis of each one is presented.

2.4.1 Sodaq
The Sodaq bracelets feature a smart cow collar that adopts a complete LoRaWAN implementation for livestock tracking and activity detection. The system is powered by a small solar cell, which allows extended deployment periods before servicing; however, they do not offer a proprietary cloud solution but depend on external service providers. As a main attribute, they point out that one of the main problems of many cattle trackers is the need to continually charge the battery by manually connecting it to charging stations. Concerning the other competing technologies in the LPWAN scenario, LoRa does not require third-party infrastructure for access to the channel, so it can act in an almost decentralized way [17]. Having to constantly monitor the battery level of the tracker and verify that it is charging is extremely slow; to solve this problem, they designed the tracker with a 0.5 W solar panel that collects energy from sunlight.

2.4.2 IoT TRACKER
The Accent Systems IoT TRACKER uses NB-IoT technology to monitor the movements of cows and allows farmers to know where their animals have gone and to trace the causes of diseases, in this case, infections. Compared with other technologies (such as Sigfox, LoRa, or 3G/4G), the subscription cost for the company or the user and the battery consumption are much lower, and the transmission of information is optimized. In addition, NB-IoT has a better indoor and outdoor range and uses existing mobile phone antennas.

3 Results

To determine which LPWAN technology to implement for the IoT communication, the analytic hierarchy process (AHP) was applied for the multi-criteria decision making. Table 1 presents the criteria that were taken as the basis for the implementation of IoT in rural areas destined for livestock activity. It should be mentioned that battery life was not taken into account, despite being an important criterion for decision-making, because for the three study technologies the same duration is estimated (10 to 12 years), with a variation of up to 2 years.

Table 1. Overview of LPWAN technologies: Sigfox, LoRa and NB-IoT.

Criterion | Sigfox | LoRa | NB-IoT
Frequency | ISM bands without a license in 860–930 MHz | ISM bands without a license in 863–870 MHz | Licensed LTE frequency bands
Bandwidth | 100 Hz | 250 kHz and 125 kHz | 200 kHz
Data rate | 100 bps | 50 kbps | 200 kbps
Maximum messages/day | 140 (UL), 4 (DL) | Unlimited | Unlimited
Max payload length | 12 bytes (UL), 8 bytes (DL) | 243 bytes | 1600 bytes
Coverage range | 10 km (urban), 40 km (rural) | 5 km (urban), 20 km (rural) | –
Latency (ms) | 1–10 | 1–30 | –
Spectrum | Free | Free | –
Deployment | >4000€/Base station | >1000€/Base station | 15000€/Base station
Device cost | – | – | >20€

After defining the hierarchy, a pairwise comparison of the criteria was made in order to obtain the weight of each one (Table 2).

Table 2. Criteria comparison matrix.

Criteria | A | B | C | D | E | F | G | H | Weight
A. Frequency | 1 | 1/3 | 1/5 | 1/7 | 5 | 1/3 | 5 | 1/3 | 0.07
B. Bandwidth | 3 | 1 | 3 | 1/5 | 3 | 1/5 | 3 | 1/5 | 0.09
C. Maximum data rate | 5 | 1/3 | 1 | 3 | 5 | 1/7 | 7 | 3 | 0.18
D. Maximum messages/day | 7 | 5 | 1/3 | 1 | 3 | 1/5 | 3 | 1/3 | 0.13
E. Maximum payload length | 1/5 | 1/3 | 1/5 | 1/3 | 1 | 1/7 | 3 | 1/7 | 0.03
F. Coverage range | 3 | 5 | 7 | 5 | 7 | 1 | 7 | 1 | 0.28
G. Latency (ms) | 1/5 | 1/3 | 1/7 | 1/3 | 1/3 | 1/7 | 1 | 1/7 | 0.02
H. Price | 3 | 5 | 1/3 | 3 | 7 | 1 | 7 | 1 | 0.20
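As a reproducibility sketch, the snippet below derives the criterion weights from the pairwise matrix in Table 2 using the common normalized-column-average AHP method; the paper does not state which prioritization variant was used, but this one reproduces the reported weights to two decimal places.

```python
# Sketch of the AHP weight derivation for Table 2 using the common
# normalized-column-average method (the paper does not state which
# prioritization variant was used; this one reproduces the reported weights).
from fractions import Fraction as F

criteria = ["A Frequency", "B Bandwidth", "C Max data rate", "D Max messages/day",
            "E Max payload length", "F Coverage range", "G Latency", "H Price"]

# Pairwise comparison matrix from Table 2 (row i compared against column j).
M = [
    [F(1), F(1, 3), F(1, 5), F(1, 7), F(5), F(1, 3), F(5), F(1, 3)],
    [F(3), F(1), F(3), F(1, 5), F(3), F(1, 5), F(3), F(1, 5)],
    [F(5), F(1, 3), F(1), F(3), F(5), F(1, 7), F(7), F(3)],
    [F(7), F(5), F(1, 3), F(1), F(3), F(1, 5), F(3), F(1, 3)],
    [F(1, 5), F(1, 3), F(1, 5), F(1, 3), F(1), F(1, 7), F(3), F(1, 7)],
    [F(3), F(5), F(7), F(5), F(7), F(1), F(7), F(1)],
    [F(1, 5), F(1, 3), F(1, 7), F(1, 3), F(1, 3), F(1, 7), F(1), F(1, 7)],
    [F(3), F(5), F(1, 3), F(3), F(7), F(1), F(7), F(1)],
]

n = len(M)
col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
weights = [sum(M[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

for name, w in zip(criteria, weights):
    print(f"{name:22s} {float(w):.2f}")
# Prints 0.07, 0.09, 0.18, 0.13, 0.03, 0.28, 0.02, 0.20 for A..H, matching Table 2.
# The final ranking then aggregates each alternative's per-criterion priority
# with these weights (Sigfox 41.83%, NB-IoT 31.74%, LoRa last, as reported).
```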

A priority ranking of the alternatives (Sigfox, LoRa, NB-IoT) was obtained by aggregating the relative weights, combining the priority of each decision alternative with respect to each criterion. Among the alternatives evaluated for the adoption of IoT technology, the main option is Sigfox with 41.83%, followed by NB-IoT with 31.74%, and in last place is LoRa. It is important to emphasize that these criteria were selected according to the capacity of the Ecuadorian territory, be it in telecommunications networks, data transfer servers, the technological level in rural areas, or the economic level for these projects, among others. Sigfox is the option with the highest priority because of its low implementation cost: it fits the data traffic of livestock activity, which is very slow and has few variations, so no more expensive technology, such as that offered by LoRa and NB-IoT, is needed; likewise, its coverage range gives it a competitive advantage over the other two study technologies. In a developing country where purchasing power in the rural sector is very low, the implementation of a low-cost technology with a wide coverage range is very beneficial for the livestock sector of Guayas.

4 Conclusions

Wireless livestock applications collect data on the location, welfare, and health of the animals. This information helps identify sick animals so they can be separated from the herd to prevent the spread of disease. This also reduces labor costs, and farmers can locate their livestock using IoT-based sensors. The needs of a growing population can be successfully met by implementing improvement solutions with IoT. Understanding the various components and protocols that make up the IoT for livestock requires a global vision of the different aspects of this new paradigm, on the one hand, and of the challenges and problems related to the technologies, their design, and their deployment, on the other. The technologies are at a very advanced stage of development in the lower layers of animal husbandry, but there is much more to develop in the other layers. The automation and connection of everything around us require a series of technologies, protocols, and applications that form the basis of IoT. The AHP method allowed the alternatives to be prioritized and Sigfox to be selected as the best option. As future work, the use of fuzzy logic to represent uncertainty and its combination with the TOPSIS method will be proposed. Acknowledgment. This work has been supported by the GIIAR research group and the Salesian Polytechnic University of Guayaquil.

References 1. Madakam, S., Ramaswamy, R., Tripathi, S.: Internet of Things (IoT): a literature review. J. Comput. Commun. 3(05), 164 (2015) 2. Ortiz Monet, M.: Implementación y Evaluación de Plataformas en la Nube para Servicios de IoT. Universidad Politecnica de Valencia, Valencia (2019)

3. INEC. Encuesta de Superficie y Producción Agropecuaria Continua (ESPAC). Instituto Nacional de Estadísticas y Censos del Ecuador, Quito (2018) 4. Sinha, R.S., Wei, Y., Hwang, S.H.: A survey on LPWA technology: LoRa and NB-IoT. Ict Express 3(1), 14–21 (2017) 5. Atzori, L., Iera, A., Morabito, G.: The internet of things: A survey. Comput. Netw. 54(15), 2787–2805 (2010) 6. Dlodlo, N., Kalezhi, J.: The internet of things in agriculture for sustainable rural development. In: International Conference on Emerging Trends in Networks and Computer Communications (ETNCC) (2015) 7. L. e. a. Zheng, “Technologies, applications, and governance in the Internet of Things, in Internet of things-Global technological and societal trends. In: From smart environments and spaces to green ICT (2011) 8. Xiaojing, Z, Yuangua, L.: Zigbee implementation in intelligent agriculture based on internet of things, College of Electronics Information and enginnering. Qiongzhou University (2012) 9. Zhang, J., Kong, F., Zhai, Z., Han, S., Wu, J., Zhu, M.: Design and development of IoT monitoring equipment for open livestock environment. Int. J. Simul. Syst. Sci. Technol. 17(26), 2–7 (2016) 10. Weixing, Z., Chenyun, D., Peng, H.: Environmental control system based on IOT for nursery pig house. Trans. Chin. Agric. Eng. (Trans. CSAE) 28(11), 177–182 (2012) 11. González, A., Casado, R., Márquez, S., Prieto, J., Corchado, J.M.: Intelligent livestock feeding system by means of silos with IoT technology. In: International Symposium on Distributed Computing and Artificial Intelligence (2018) 12. Raza, U., Kulkarni, P., Sooriyabandara, M.: Low power wide area networks: An overview. IEEE Commun. Surv. Tutorials 19(2), 855–873 (2017) 13. Reynders, B., Meert, W., Pollin, S.: Range and coexistence analysis of long-range unlicensed communication. In: 23rd International Conference on Telecommunications (ICT) (2016) 14. Mikhaylov, K., Petaejaejaervi, J., Haenninen, T.: Analysis of capacity and scalability of the LoRa low power wide area network technology. In: 22th European Wireless Conference European Wireless (2016) 15. Wang, Y.P.E., Lin, X., Adhikary, A., Grovlen, A., Sui, Y., Blankenship, Y., Razaghi, H.S.: A primer on 3GPP narrowband internet of things. IEEE Commun. Mag. 55(3), 117–123 (2017) 16. Adhikary, A., Lin, X., Wang, Y.P.E.: Performance evaluation of NB-IoT coverage. In: IEEE 84th Vehicular Technology Conference (VTC-Fall) (2016) 17. Raza, U., Kulkarni, P., Sooriyabandara, M.: Low power wide area networks: an overview. IEEE Commun. 19, 855–873 (2017)

Cryptocurrencies: A Futuristic Perspective or a Technological Strategy Carolina Del-Valle-Soto(&) and Alberto Rossa-Sierra Facultad de Ingeniería, Universidad Panamericana, Álvaro del Portillo 49, 45010 Zapopan, Jalisco, Mexico {cvalle,lurosa}@up.edu.mx

Abstract. Currently, there are several cryptocurrencies based on blockchain technology, and a debate is prevalent about whether these kinds of assets are really a currency or only an instrument for speculation. Cryptocurrencies are technologies that allow value to be sent from person to person, without intermediaries. A cryptocurrency is digital money that uses a type of encryption to protect transactions in a public and decentralized manner. This work argues that cryptocurrencies have intrinsic value and that their price is not determined by supply and demand alone. It is concluded that this kind of digital asset contains value drivers beyond the profitability generated through speculation. Some of the elements of value are the cost of production associated with intellectual capacity, the accumulated knowledge, and the time and resources invested in the creation process. Those resources can be found in electricity and sophisticated computer equipment, codes generating smaller units of measure, networks without physical barriers, artificial intelligence and human enhancement, and varied payment methods. Keywords: Value drivers · Cryptocurrency · Blockchain technology

1 Introduction

The innovation that Bitcoin represents lies mainly in its technology, the blockchain, which allows cryptocurrency to be exchanged in a decentralized way without depending on a central issuer or on an intermediary to confirm transactions [1]. A cryptocurrency transfer is a blockchain transaction of a digital asset. Contrary to the current banking system, in cryptocurrencies there is no central bank that manages the money supply. The key to this technology is consensus, that is, if we all have the same information, then that information is true [2]. The blockchain is a distributed, secure, and open-source database, where all kinds of value can be stored and exchanged without intermediaries. The blocks are linked together by cryptography and contain transaction data that cannot be altered retroactively without the endorsement of most nodes in the network. Digital currencies use cryptography to control their creation and transactions, and do not depend on any specific authority or a particular central bank. This record can be updated only based on a consensus of the majority of system participants and, in theory, the information can never be deleted or modified, only added [2]. Within the blockchain concept, smart contracts are stored, whose
clauses are automatically executed by algorithms without a central authority. The content of these smart contracts consists of algorithms that regulate relatively simple processes (such as clauses associated with a simple transaction between two individuals) or extremely complex ones. An example of this is the creation of the Decentralized Autonomous Organization (DAO) [3], a decentralized investment fund that served to finance startup projects in the Ethereum ecosystem through Ether (ETH). However, due to a bug in the smart contract code that governs the DAO project, an individual of unknown identity managed to move 40 million dollars in Ether to his own account without the approval of the fund [4]. Another case worth mentioning concerns selfish mining. In this type of attack, a miner or mining pool does not publish or distribute a valid solution to the rest of the network. This attack is a method for mining groups to increase their returns by not playing fair. The mechanics are that a selfish minority of the miners could take control of the process and centralize the profit. Any dishonest miner who carried out this attack would take a sure profit over the rest of the community; moreover, if up to a third of the total number of miners collude and attack the other honest miners, then they master the creation of the currency. The most important characteristics of cryptocurrencies are their public and open nature, i.e., permissionless, decentralized, and immutable, with open, transparent, and participatory code [5]. A cryptocurrency is digital money that uses a type of encryption to protect transactions. A blockchain is, essentially, a register of transactions created from blocks. It is public and decentralized, so any change is visible to everyone. Blockchain technology was originally built for Bitcoin. The first cryptocurrency was created in 2009 under the name "Bitcoin" by the pseudonymous Satoshi Nakamoto, who adopted an algorithm originally designed to stop denial-of-service attacks and e-mail spam as a key part of Bitcoin's consensus mechanism. Later, Bitcoin's development was led by other experts of the initial team, called "Bitcoin Core." The system was presented in a technical document called "Bitcoin: A peer-to-peer electronic cash system" [6]. Currently, there are several cryptocurrencies based on blockchain technology, and people are debating whether these kinds of assets are really currencies or only an instrument for speculation. Some of the applications most commonly studied today are in manufacturing automation, remote diagnosis, medical and social services, platforms for the industrial Internet of Things, supply chain management, product certification, identity tracking, supplier reputation, asset registration, and inventory [2, 4, 6]. The objective of this work is to explore, analyze, and contrast the concept of value through history, taking into account that cryptocurrencies have an intrinsic value.

2 Blockchain: The Bitcoin Technology that Aims to Revolutionize Society A blockchain is a linked (chain) list in a peer-to-peer (P2P) network that contains a history of transactions, also called a “distributed ledger”. Therefore, additions in this transactional list are replicated to all connected machines, known as “peers” [7].

Replication is quick, but verification and acceptance of a block (transaction) can be slow. A blockchain is implemented as a piece of software. In theory, this software contains a local database with its own replicated dataset and is capable of notifying other peers when data is modified, to ensure that all peers maintain the same data. A transaction is a set of information that defines the digital bits transferred, representing a "fiat" value. This includes details of a receiver and the amount of elements to be transferred [8]. While each blockchain implementation is different, most provide key pieces of functionality beyond the ability to store an identical list of transactions across multiple machines around the world. These parts can provide different permission systems to establish who can read and write transactions and, perhaps most importantly, they can cryptographically guarantee the validity of transactions, making malicious modifications notoriously obvious or highly unlikely [9]. This work emphasizes the context of cryptocurrency as it relates to the asset concept, where security validators are incentivized as verifiers with the promise of a mining reward. The process of validating transactions is called "mining". To carry out the verification step, nodes or miners need to solve a computational puzzle, known as a "proof of work" (PoW) problem. But why would someone want to confirm transactions for others? Because they benefit from it: the algorithm issues new bitcoins as a reward to the user who first confirms the transaction. Through this decentralized consensus process, which takes place in the peer-to-peer network in order to validate user transactions and prevent double spending, network nodes are rewarded with units of digital currency. We think of this as a payment to the node in exchange for the service of creating a block in the consensus chain. This procedure requires high processing power and time consumption, so there is only one "winner", the first to verify the transaction. In this scenario, the dilemma is that if one peer is the first to find the valid block, the remaining peers will lose all the effort invested in that particular transaction. One of the solutions being tested is to move from a proof-of-work system (PoW) to a security system based on proof of participation (PoS, or Proof of Stake) [10]. A PoW is an algorithm that generates an element that is difficult to create but easy to verify, used to generate a new group of transactions (a block) and add it to the distributed database of transactions (the blockchain). When a transaction is initiated, its data is sent to a block with a maximum capacity of 1 megabyte and then duplicated on multiple computers or nodes in the network. The nodes are the administrative body of the blockchain and verify the legitimacy of the transactions in each block. The mining nodes have to look for the correct series so that the complete block satisfies a certain arbitrary condition. Specifically, Bitcoin uses a double SHA-256 hash in order to minimize the chances of a collision. SHA-256 is a cryptographic hash algorithm that transforms a message into an apparently random, illegible string: a 64-digit hexadecimal hash with a fixed size of 256 bits (32 bytes). Each block of the blockchain has a short series of nonce data attached. Hashes are unidirectional functions, so there is no way to deduce the input from the output of the hash function; the only way to find that short series of nonsense data is to try values at random until one works, as in the sketch below.
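The toy sketch below illustrates the nonce search just described: a simplified block header is hashed with double SHA-256 until the digest satisfies an arbitrary difficulty condition (here, a number of leading zero hex digits). Real Bitcoin headers, targets, and encodings differ; the header string and difficulty are illustrative only.

```python
# Toy illustration of the proof-of-work search described above: hash a simplified
# block header with double SHA-256 and try nonces until the digest meets an
# arbitrary difficulty condition. Real Bitcoin headers, targets and encodings differ.
import hashlib


def double_sha256(data: bytes) -> str:
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()


def mine(header: str, difficulty: int = 4) -> tuple:
    """Find a nonce whose double-SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = double_sha256(f"{header}{nonce}".encode())
        if digest.startswith(target):
            return nonce, digest
        nonce += 1


nonce, digest = mine("prev_hash|merkle_root|timestamp")
print(nonce, digest)  # easy to verify: one double-SHA-256 call reproduces the digest
```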
The only way to find that short series of nonce data is to try values at random until one works. The PoS concept indicates that a participant can mine or validate transactions in blocks according to how many cryptocurrency units they hold. The competition is to win the right to publish a block of transactions. Moreover, under Proof of Stake the reward is proportional to the number of cryptocurrency units possessed by an entity.
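As an illustration of the mining step just described, the following Python sketch performs a double SHA-256 proof-of-work search. It is only a minimal illustration, not Bitcoin's actual implementation: a real block header has a fixed binary layout and the difficulty is a numeric target, whereas here it is simplified to a required byte prefix.

import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin-style double hash: SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(block_header: bytes, difficulty_prefix: bytes = b"\x00\x00"):
    # Search for a nonce whose double-SHA-256 digest starts with the required prefix.
    # The "difficulty" used here (a fixed byte prefix) is a simplification of the
    # real target-threshold rule.
    nonce = 0
    while True:
        digest = double_sha256(block_header + nonce.to_bytes(8, "little"))
        if digest.startswith(difficulty_prefix):
            return nonce, digest
        nonce += 1  # trial and error: hashes cannot be inverted, only retried

# Finding the nonce takes many attempts; verifying it takes a single hash.
nonce, digest = mine(b"prev-hash|transactions|timestamp")
assert double_sha256(b"prev-hash|transactions|timestamp" + nonce.to_bytes(8, "little")) == digest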



Nevertheless, a couple of trends run against the principle of decentralization: the continuously increasing risk of attacks and vulnerabilities tied to participation, and the eventual shrinking of the mining community, which is becoming more exclusive [10]. Figure 1 shows an example of a blockchain with only three nodes. It illustrates the main fields of a blockchain and how each block in the chain depends on the others through the encryption algorithm. Each block contains an alphanumeric code that links to the previous block, the packet of transactions that it includes, and another alphanumeric code that will link to the next block. Based on this discussion and contrast, the properties that give cryptocurrencies the characteristic of having an intrinsic value are listed below. The first is that they have no borders, as they are not issued by any government or central entity. This global aspect makes it easy to transfer cryptocurrency. Another feature is that it is nearly infinitely divisible: because it is a code, it allows micro and nano payments, which will generate new business models that would be impossible at the moment due to their intermediation costs [11, 12]. This type of payment avoids the need for "change" (the coins returned after a purchase) and makes it possible to pay tiny fractions of a unit in real time. The paradox of value (also known as the paradox of diamonds and water) is an apparent contradiction within classical economics about economic value: diamonds have a higher price in the market despite water being vital for life. The Subjective Theory of Value proposes that the value of a good has nothing to do with the intrinsic properties of the good, but with the attitudes of people towards the good. In this view, people do not demand a particular water supply, even though water is a necessity, if there are sufficient alternative sources; in contrast, in the desert, where there are few sources, the value of a particular amount of water increases. The economic value of a good depends on the circumstances. Cryptocurrencies satisfy those players who demand a seemingly deregulated and versatile financial market.

Fig. 1. Generic structure of a blockchain



3 Value-Related Aspects of Cryptocurrencies

A cryptocurrency is an asset that has no physical form because it is only a digital record stored in computers. What differentiates cryptocurrencies from regular bank accounts, although those are also only digital records, is that their existence is not kept on the servers of a specific financial institution; instead, they are distributed across many places [11, 12]. Bitcoin is not linked to any existing currency and its value is not defined by being tied to other currencies: its value is the one that people assign to it. In addition, Bitcoin has intrinsic value because it solves some problems that money, such as euros, dollars, or credit cards, cannot. Bitcoin solves the problem of double spending and is better than credit cards at detecting fraud because each transaction requires public authentication and people accept it (which is why any form of money has value) [13]. Sending bitcoins looks a lot like sending e-mail. If we have a specific electronic wallet tied to our Internet ID or access point, then other people can send us money from anywhere while bypassing exchange rates, banks, and the treasury. The sender waits for the transaction to be mined and then confirmed for a number of blocks, which gives sufficient confidence that it will not be reversed. In fact, cryptocurrencies make it possible to dispense with banks when moving money from one place to another; currently, however, these entities are actually looking to implement their own versions, as are several governments. Nevertheless, there are some issues in this respect for our society. One of them is the limited capacity, which depends on the structure of the blockchain and the size of each transaction. Another point of contention is the balance between the need for innovation, economic development, and social sustainability, and how these aspects relate to economic investments throughout history. With its decentralized nature, apart from intermediaries, the objective of cryptocurrency is to distribute economic resources across various areas of industry [2]. With regard to subjective utility, the supporters of cryptocurrency perceive it to be a safe and transparent means of exchange that can generate profitability through changes in its price. However, after a period in which the price of Bitcoin rose by 1,400%, Bitcoin and most other cryptocurrencies experienced sharp falls in price. This has revived the debate about the reliability of these financial assets. Some people have begun to compare virtual currencies to the Ponzi scheme, a pyramid scheme developed in the 1920s, in which "investors" receive interest paid from their own money or the money of new investors. Regarding the temporal and spatial dimensions, cryptocurrency has the advantage that it permits fast and verifiable transactions anywhere in the world, without borders. Nevertheless, since their emergence on the global financial landscape, cryptocurrencies have dragged behind them important prejudices, especially those that relate these instruments of payment and investment to crime. Although the inherent characteristics of cryptoassets have been designed to provide security and confidence in the ecosystem, the misuse of cryptocurrencies justifies this loss of prestige.
The vast digital space has served as a venue for criminal activities associated with money laundering, child pornography, financing of terrorism, and drug sales, among other crimes, many of which have been financed through cryptoassets. Dream Market, AlphaBay, and the better-known Silk Road [14] are some of the illegal markets that have proliferated on the Deep Web, giving



prominence to cryptoassets as payment for drug sales, extortion, and money laundering, especially using Bitcoin.

4 Conclusion

The internet and its associated technologies have provided the means to generate low-cost resources, and this is where digital cryptocurrency appears. Blockchain technology offers new, open-source-based opportunities for developing new types of digital platforms. The current monetary system is based on a promise of payment and a collective agreement to accept it as a means of payment. In addition, cryptocurrencies generate value in today's society because any type of currency that is socially accepted (e.g., gold, cryptocurrencies, or banknotes) has value. The utility of a banknote is close to zero without recognition. Physical or virtual, a currency is valid as long as people are willing to accept it as a form of accumulation or exchange. A central difference is that, unlike traditional banknotes (i.e., "fiat" currency), cryptocurrencies are not backed by any government; however, people accept them and have a desire to exchange them, thus maintaining their functionality.

References

1. Raymaekers, W.: Cryptocurrency Bitcoin: disruption, challenges and opportunities. J. Payments Strategy Syst. 9(1), 30–46 (2015)
2. Di Pierro, M.: What is the blockchain? Comput. Sci. Eng. 19(5), 92–95 (2017)
3. Swan, M.: Blockchain thinking: the brain as a decentralized autonomous corporation [commentary]. IEEE Technol. Soc. Mag. 34(4), 41–52 (2015)
4. Eyal, I., Sirer, E.G.: Majority is not enough: Bitcoin mining is vulnerable. In: International Conference on Financial Cryptography and Data Security, pp. 436–454. Springer, Heidelberg, March 2014
5. Hurlburt, G.F., Bojanova, I.: Bitcoin: benefit or curse? IT Prof. 16(3), 10–15 (2014)
6. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2008)
7. Porru, S., Pinna, A., Marchesi, M., Tonelli, R.: Blockchain-oriented software engineering: challenges and new directions. In: Proceedings of the 39th International Conference on Software Engineering Companion, pp. 169–171. IEEE Press, May 2017
8. Zyskind, G., Nathan, O.: Decentralizing privacy: using blockchain to protect personal data. In: Security and Privacy Workshops (SPW), pp. 180–184. IEEE, May 2015
9. Trautman, L.J.: Is disruptive blockchain technology the future of financial services? (2016)
10. Bentov, I., Lee, C., Mizrahi, A., Rosenfeld, M.: Proof of activity: extending Bitcoin's proof of work via proof of stake [extended abstract]. ACM SIGMETRICS Perform. Eval. Rev. 42(3), 34–37 (2014)
11. Herbaut, N., Negru, N.: A model for collaborative blockchain-based video delivery relying on advanced network services chains. IEEE Commun. Mag. 55(9), 70–76 (2017)
12. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999)
13. Van Alstyne, M.: Why Bitcoin has value. Commun. ACM 57(5), 30–32 (2014)
14. Low, K.F., Teo, E.G.: Bitcoins and other cryptocurrencies as property? Law Innov. Technol. 9(2), 235–268 (2017)

Acceleration of Evolutionary Grammar Using an MISD Architecture Based on FPGA and Petalinux

Bernardo Vallejo-Mancero1 and Mireya Zapata2

1 Instituto Superior Central Técnico, Av. Isaac Albéniz E4-15 y El Morlán, Sector el Inca, Quito, Ecuador
[email protected]
2 Research Center of Mechatronics and Interactive Systems, Universidad Indoamérica, Machala y Sabanilla, Quito, Ecuador
[email protected]

Abstract. Evolutionary grammars are optimization and search methods based on the postulates of biological evolution. The technique builds on the concept of genetic recombination and can be used to automatically generate programs in any programming language according to a specified grammar. The objective of this work is to design and evaluate a hardware acceleration solution for a Java evolutionary grammar software application, modifying the data processing phase with an MISD (Multiple Instruction Single Data) architecture. The implementation is performed on a platform that integrates the programmability of an ARM processor with the programmable logic of an FPGA, running a custom embedded Linux operating system built with Petalinux. The tests carried out confirm the viability of the project; the results show that the parallelized stage is faster than the original solution and that the whole system can be implemented in a SoC. Keywords: Evolutionary grammar · Petalinux · Hardware acceleration · Embedded Linux

1 Introduction

Evolutionary grammars [1] are optimization and search methods based on the postulates of biological evolution. The technique builds on the concept of genetic recombination and can be used to automatically generate programs in any programming language according to a specified grammar [2]. The algorithm starts with an initial population. The individuals of the population are independently assessed with an adaptation (fitness) function, which is defined by a training data set of inputs and outputs. The individuals with the best results can pass unchanged to the next generation, or genetic operators (mutation and/or crossover) are applied until a new population is formed, which is then reassessed. This process is repeated iteratively until the optimal solution is obtained or the established generation limit is reached [3].



Each individual of the population is a program that has to be evaluated for each training value. The number of operations is therefore the product of the number of individuals and the number of training values, which for large populations represents a significant amount of time because the evaluation is done sequentially (individual after individual) [4]. This design introduces a parallel processing stage through the development of independent hardware packs, which together form an MISD multiprocessor system [5]. Every pack represents an individual that can be reprogrammed according to the new solutions created in each generation. The development of the project comprises two stages. The first corresponds to the hardware description required for the multiprocessor system operating on an FPGA [6]. The second consists of the development of a software component used to integrate the multiprocessor with the other stages of the algorithm. It runs on a Linux operating system for embedded devices because of the advantages this offers, among them being open source, allowing cross-compilation, portability, support, and shorter boot times.
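As a minimal illustration of this sequential bottleneck (the authors' application is written in Java; the sketch below is an independent Python rendering that assumes an error-count fitness rule), the evaluation stage is a nested loop whose cost is the number of individuals times the number of training values:

def evaluate_individual(program, training_set):
    # Fitness = number of training samples the candidate program gets wrong
    # (an assumed scoring rule; the real fitness function may differ).
    errors = 0
    for inputs, expected in training_set:
        if program(inputs) != expected:
            errors += 1
    return errors

def evaluate_population(population, training_set):
    # Sequential bottleneck: len(population) * len(training_set) program executions.
    # This is the stage the MISD hardware replicates, one processing element
    # ("pack") per individual, so all individuals are evaluated at the same time.
    return [evaluate_individual(p, training_set) for p in population]

# Toy usage: individuals are condition functions over a feature vector v.
population = [lambda v: 0.0 if v[1] > 96e3 else 1.0,
              lambda v: 0.0 if v[2] > 4e4 else 1.0]
training_set = [((0, 100000, 0), 0.0), ((0, 10, 0), 1.0)]
print(evaluate_population(population, training_set))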

2 Design

The starting point of the development is the hardware-accelerated implementation of the evaluation stage of the generations of the evolutionary grammar developed in Java. The acceleration technique consists of parallelizing the evaluation stage through the use of programmable logic on an FPGA, creating independent individuals based on a PicoBlaze microprocessor [7, 8]. Originally, the application is executed in software on a PC. When acceleration is included, two additional stages are considered: the first corresponds to the creation of custom hardware blocks implemented in programmable logic (PL); the second to the development of dedicated controllers on the Processing System (PS). This stage is used for the interaction between the PL and the software application on the PC. The objective of this document is to seize the advantages offered by the Zynq development boards from Xilinx. These boards include programmable logic and a processing system based on ARM-family microprocessors, which allows the installation of an operating system dedicated to embedded systems, configurable through the Petalinux tool [9]. This makes it possible to execute the entire application from a single SoC, discarding the use of the PC and improving execution times through direct communication. Figure 1 shows the structure of the developed solution. The focus of this document is on the stages of creating the controllers and the user application on an operating system. It should be noted that the PL stage was created previously; the blocks CDMA.vhd, Pblaze_m.vhd, and Pbrdy.vhd correspond to the hardware used for communication between memories, the implementation of individuals, and interrupt handling, respectively [10].



Fig. 1. Linux architecture, description of the user space application, the controllers and the programmable logic.

Section 3 describes the installation of the Petalinux work environment and the creation and configuration of the embedded Linux system. Section 4 focuses on the development and implementation of the drivers necessary for communication with the hardware blocks, and on the compilation and creation of the system boot image. Section 5 shows how the application is created in user space, its execution, and the results obtained. These results are discussed in Sect. 6. Finally, conclusions and future lines of research are presented in Sect. 7.

3 Petalinux and Embedded Linux

The solution is implemented on an embedded Linux operating system because it offers the advantages of being open source, allowing cross-compilation, portability, greater support, and shorter startup time, among other characteristics.

3.1 Petalinux Installation

For the compilation and creation of the boot image of the embedded Linux operating system, the Petalinux tool provided by Xilinx is used, in its 2018.2 version. Once Linux is installed, it is necessary to install a series of packages for the correct operation of the tool, as described in the manufacturer's specification sheet. Keep in mind that Petalinux works only with the same version of Vivado and requires bash as the system command interpreter. The installer is downloaded from the manufacturer's page, a directory is created for the installation, the directory is assigned read and write permissions, and the installation package is executed. Superuser mode (root or sudo) should not be used for execution. The commands used are:



mkdir -p /opt/pkg/petalinux
sudo chmod 755 /opt/pkg /opt/pkg/petalinux
./petalinux-v2018.2-final-installer.run /opt/pkg/petalinux

Once the installation is completed, the work environment is configured. First, the Petalinux configuration script (settings.sh) is sourced from the installation directory. Afterwards, a test with the echo command is performed on the environment variable; when everything has been done correctly, it returns the installation path.

3.2 Creation and Configuration of the Embedded Linux System

First, the developed hardware is included in the Linux compilation working directory; for this, the PL hardware is exported from the Vivado tool by copying the entire "Hardware" folder from the host machine and storing it in a newly created directory. From this point on, the Petalinux tool is used to create the project and to establish the basic configuration. The following commands are used:

$petalinux-create -t project -n software --template zynq
$petalinux-config --get-hw-description -p */name/software

These commands create a project within the Software folder from a defined template. The hardware description of the project is imported and the configuration window is opened, where the necessary changes are made so that the system boots from the SD memory and the development board can be accessed through an Ethernet address. Finally, the kernel is compiled with the command:

$petalinux-config -c kernel

4 PL Block Controller and System Boot Image

A character-type controller is created, because such drivers are well suited to IP blocks developed in programmable logic; it implements the operations that allow applications to access the device.

4.1 Development of the Pblaze_M Controller

The hardware IP block is called pblaze_m, and the command used to create the module is:

$petalinux-create -t modules -n pblaze --enable

The command generates a template that can be modified according to the needs of the application. The files included in the controller are the BitBake recipe, the controller



body and declaration, the driver compilation file, and a Readme file with useful information. The implemented controller also includes the interrupt handling function. With all these settings made, the controller is ready for compilation. To do this, return to the open terminal and execute the command:

$petalinux-config -c rootfs

This opens a configuration window in which the developed module must be enabled. In this window, additional libraries for use under Linux can also be added.

4.2 Creating Boot and Test Image

Once the necessary configurations are completed, the compilation is carried out. The command executed is:

$petalinux-build

After the compilation, a Boot.bin file is created to boot from the SD memory; it is necessary to package it with the bitstream of the PL hardware. Once the packaging is finished, the current directory contains the Boot.bin and image.ub files, which must be copied to a partitioned microSD card. Linux can then be started from a serial communication application; the username and password are root/root. The controller node is created by executing the command:

$mknod /dev/pblaze_Dev c 244 0

5 Application in User Space and Execution

The user space application is responsible for communicating the software application with the developed hardware. The functions it must fulfill are: define the number of individuals and training data, establish communication with the hardware modules, write and transfer the set of instructions from the DDR memory to the memory of each individual, send the training data, and read the resulting fitness of each individual. The application is developed using the Xilinx SDK tool; a new project is created with four header files and one main C file:

cdma.h: stores the data necessary for the use of the CDMA.
pblaze.h: contains the call functions for writing to and reading from the hardware module.
data.h: responsible for storing the training data.
ROM.h: stores the instruction set of each BRAM block.
Linux_ge_app.c: contains the declaration of the functions that the user application must fulfill.
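As an illustration of the host-side flow just listed (the actual application is the C program Linux_ge_app.c built with the Xilinx SDK), the Python sketch below runs the same sequence of steps against the /dev/pblaze_Dev character device created above: program each individual's instruction memory, stream the training data, and read back the fitness values. The byte framing, record sizes, and population size are hypothetical.

import struct

NUM_INDIVIDUALS = 10   # hypothetical population size per generation
FITNESS_BYTES = 4      # hypothetical size of one fitness record

def run_generation(rom_images, training_data):
    # The write/read framing below is purely illustrative; the real driver
    # defines its own offsets and transfer sizes.
    with open("/dev/pblaze_Dev", "r+b", buffering=0) as dev:
        for rom in rom_images:      # one instruction image per individual (ROM.h)
            dev.write(rom)
        dev.write(training_data)    # training samples shared by all individuals (data.h)
        raw = dev.read(NUM_INDIVIDUALS * FITNESS_BYTES)
    return list(struct.unpack(f"<{NUM_INDIVIDUALS}I", raw))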


5.1 Running the Application on Embedded Linux

Table 1 shows the result of executing the application. Each individual of the generation is a generated Java conditional over the input vector v (comparisons of v[1] and v[2] against thresholds such as 96*Math.pow(10,+3)) of the form if (condition) { result = 0.0; } else { result = 1.0; }, and the fitness value obtained for it on the PC is compared with the value obtained on the embedded (SoC) solution.

Table 1. Execution of the evolutionary grammar algorithm, PC and SoC comparison

Individual of the generation    Fitness, PC solution    Fitness, embedded solution
Individual 1                    18                      18
Individual 2                    18                      18
Individual 3                    282                     282
Individual 4                    18                      18
Individual 5                    282                     282
Individual 6                    142                     142
Individual 7                    282                     282
Individual 8                    18                      18
Individual 9                    18                      18
Individual 10                   142                     142

6 Discussion

Comparing the results obtained from executing the original application on the PC with the developed solution, for a population of 10 individuals, shows the same results when comparing their quality (fitness) function, which is used to validate the best solutions and determine which individuals pass to the next generations. This demonstrates the validity of implementing the whole algorithm in a single SoC, which at the same time provides hardware acceleration through the programmable logic and the execution and administration of priority tasks through the Linux operating system embedded in an ARM processor [6]. Thanks to the advancement of technology, new techniques can nowadays be integrated for the acceleration and optimization of the resources used to execute algorithms. The parallelization of tasks in hardware is currently one of the most actively exploited topics. There are works [11] showing that with hardware-software co-design the time required to execute algorithms originally developed only in software is reduced. In spite of these advantages, other investigations [12] show that although the potential to accelerate algorithms with parallelizable hardware is highly attractive, it requires a deep knowledge of the relationship between the software and hardware elements, which increases the time required to develop the prototype and forces us to consider issues such as time-to-market, device reliability, and safety, among others.



7 Conclusions and Future Work

This investigation, together with its predecessors, demonstrates that the evolutionary grammar algorithm developed in Java can be accelerated with an MISD system based on configurable hardware and also transferred completely to a SoC that runs an embedded Linux operating system. This system has advantages in terms of resources, speed, size, and reconfigurability, serving as a starting point for the development of a single chip that fulfills the functionality of the algorithm. Although the solution is viable, there are characteristics to improve that are expressed as future work. One example is the use of a custom ALU that fits the needs of the problem, including the phenotype's own syntax, because the 8-bit microprocessors currently used reduce the processing capacity and therefore the maximum speed that could be achieved. Another future work is the development of more stages of the algorithm within the user space application, which would avoid the need for communication between different applications. Finally, adapting the solution to other algorithms and applications within the field of evolutionary techniques could be considered.

References

1. Vega-Rodríguez, M.A., Gutiérrez-Gil, R., Ávila-Román, J.M., Sánchez-Pérez, J.M., Gómez-Pulido, J.A.: Genetic algorithms using parallelism and FPGAs: the TSP as case study. In: Proceedings of International Conference on Parallel Processing Workshops 2005, pp. 573–579 (2005). https://doi.org/10.1109/ICPPW.2005.36
2. Ryan, C., O'Neill, M., Collins, J.J.: Introduction to 20 years of grammatical evolution. In: Handbook of Grammatical Evolution, pp. 1–21 (2018). https://doi.org/10.1007/978-3-319-78717-6_1
3. Lee, H.C., Herawan, T., Noraziah, A.: Evolutionary grammars based design framework for product innovation. Procedia Technol. 1, 132–136 (2012). https://doi.org/10.1016/j.protcy.2012.02.026
4. De Silva, A.M., Leong, P.H.W.: Grammatical evolution. In: SpringerBriefs in Computational Intelligence, vol. 5, pp. 25–33 (2015). https://doi.org/10.1007/978-981-287-411-5_3
5. Sugiarto, I., Axenie, C., Conradt, J.: FPGA-based hardware accelerator for an embedded factor graph with configurable optimization. J. Circuits Syst. Comput. 28 (2019). https://doi.org/10.1142/S0218126619500312
6. Trimberger, S.M.: Three ages of FPGAs: a retrospective on the first thirty years of FPGA technology. Proc. IEEE 103, 318–331 (2015). https://doi.org/10.1109/JPROC.2015.2392104
7. Vallejo, B., Zapata, M.: Design and evaluation of a heuristic optimization tool based on evolutionary grammars using PSoCs, pp. 1–12 (2020)
8. Chapman, K.: PicoBlaze for Spartan-6, Virtex-6, 7-Series, Zynq and UltraScale Devices (KCPSM6), pp. 1–24 (2014)
9. Xilinx: PetaLinux Reference Guide, UG1156, pp. 1–35 (2018)
10. Xilinx, Inc.: AXI Reference Guide, UG761 (v13.1) (2011)



11. Dixit, P., Zalke, J., Admane, S.: Speed optimization of AES algorithm with hardware-software co-design. In: 2017 2nd International Conference for Convergence in Technology (I2CT 2017), pp. 793–798 (2017). https://doi.org/10.1109/I2CT.2017.8226237
12. D'Hollander, E.H., Chevalier, B., De Bosschere, K.: Calling hardware procedures in a reconfigurable accelerator using RPC-FPGA. In: 2017 International Conference on Field Programmable Technology (ICFPT 2017), pp. 271–274 (2018). https://doi.org/10.1109/FPT.2017.8280158

Human Factors in Energy: Oil, Gas, Nuclear and Electric Power Industries

Applying Deep Learning to Solve Alarm Flooding in Digital Nuclear Power Plant Control Rooms

Jens-Patrick Langstrand, Hoa Thi Nguyen, and Robert McDonald

Institute for Energy Technology, Os Alle 5, Halden, Norway
{Jens.Patrick.Langstrand,Hoa.Thi.Nguyen,Robert.McDonald}@ife.no

Abstract. As the nuclear industry shifts to more digital controls and systems, more information is provided to the control room and displayed on computer monitor workstations. Combined with alarm panels being reduced to a single alarm display, this creates the problem called alarm flooding: a situation in which an overload of information can occur during a plant disturbance or other abnormal operating condition. This project focused on finding a workable solution to assist operators in handling and understanding alarms during emergency situations. A generic pressurized water reactor simulator was used to collect process and alarm signals in scenarios that introduced common incidents causing expected alarms, as well as malfunctions causing unexpected alarms. Deep neural networks were used to model the collected data. Results showed that the models were able to correctly filter many of the expected alarms, indicating that deep learning has potential to overcome the problem of alarm flooding.

Keywords: Alarm flooding · Deep learning · Digital nuclear control rooms · Support system

1 Introduction

In legacy control rooms the operators might have 20 to 30 alarms to deal with on 12 control boards during a reactor trip. The new digital alarm system in an upgraded control room will display between 150 and 200 alarm points on a single screen. This overload of information can happen rapidly, in a short period of time, preventing operators from quickly identifying important alarms. This phenomenon has been coined "alarm flooding" [1]. One of the main causes of alarm flooding is the propagation of an abnormality through a connected system, in which a triggered alarm at a connection point causes a chain reaction across the entire system of devices [2]. This problem could be tackled by identifying and using the root causal relationships between process and alarm signals to filter consequential alarms. A possible solution would be to create a state-based alarm system that recognizes plant states using a knowledge base [3], but this requires expert knowledge and manual identification of the conditions for each plant state. Another approach is the use of data-driven techniques, which can infer relationships from historical alarm data. Example techniques are Bayesian networks [4–6],



Granger causality [7], and probabilistic signed digraphs [8]. This work differs by investigating deep learning as a novel data-driven approach to solving alarm flooding; to our knowledge, this study is the first to deploy deep learning for this purpose.

2 Methodology

Rather than defining the reactor states manually, deep learning was used to learn reactor states in a data-driven manner. A generic pressurized water reactor (gPWR) simulator was used to collect data for the modelling. Eleven scenarios were designed together with a nuclear power plant operator, ranging from full power with no incidents, to full power with reactor trips, and full power with reactor trips and malfunctions.

2.1 Data Collection

A communication link to the gPWR simulator was set up and used to record process signals and alarm events during the scenarios. The scenarios lasted between 5 min and 4 h and had different complexities and operator procedures; in total, approximately 16 h of simulator data was collected. The collected signals were preprocessed before the modelling began. Alarm events were converted into continuous binary signals and synchronized with the process signals using timestamps. In total there were 655 process signals and 751 alarm signals. Min-Max scaling was used to normalize the values of the signals between 0 and 1 to help the model converge quicker [9].
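A minimal sketch of this preprocessing is given below. The alarm-event format and array names are assumptions; only the conversion of alarm events to binary signals and the Min-Max scaling follow the description above.

import numpy as np

def binarize_alarms(alarm_events, timestamps):
    # Convert (alarm_id, on_time, off_time) events into a 0/1 signal per alarm,
    # sampled at the same timestamps as the process signals.
    alarm_ids = sorted({a for a, _, _ in alarm_events})
    signals = np.zeros((len(timestamps), len(alarm_ids)))
    for alarm_id, t_on, t_off in alarm_events:
        col = alarm_ids.index(alarm_id)
        signals[(timestamps >= t_on) & (timestamps < t_off), col] = 1.0
    return signals

def min_max_scale(x, eps=1e-9):
    # Per-signal scaling to [0, 1]; the small offset avoids division by zero
    # for signals that are constant over the recording.
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min + eps)

# process: (n_samples, 655) array, alarms: (n_samples, 751) array after binarization.
# model_input = np.hstack([min_max_scale(process), alarms])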

2.2 Data Modelling

Two approaches using deep learning were explored: supervised learning for alarm classification and semi-supervised learning for anomaly detection.

Alarm Classification. The idea is to infer the current alarm state from the process signals. An alarm has two states, "Off" or "On", encoded as 0 or 1 respectively. The predicted alarm state is compared with the actual state, and differences are marked as unexpected alarms.

Data Preparation. Due to the nature of the scenarios used to gather data, many alarms were seldom triggered or constantly off. Only one fourth of the dataset contained samples with at least one alarm "On". This led to an imbalanced dataset problem, in which most data samples are labelled as one class, so an algorithm can always predict the majority class and still achieve a very high accuracy. To counteract this problem, oversampling by tripling samples with at least one alarm "On" was used, and accuracy was additionally avoided as a training metric. Two scenarios with unexpected alarms were saved for the final test. The rest of the scenarios were split into 70% training data, 15% validation data, and 15% test data.

Building Model. Two deep neural network architectures were constructed: the first created a diamond shape by increasing the number of nodes in the deeper layers followed by a reduction of nodes; the second architecture did the opposite to create an



hourglass shape. It was found that the hourglass architecture started with lower performance compared to the first one, but it converged quicker and achieved better performance overall. Both architectures used drop-out regularization to reduce overfitting.

Result. The models were evaluated on a separate set of data that was not included in the training process. It was desired that the models detect as many positive samples as possible. Since the dataset was imbalanced, confusion matrices and recall were used as training metrics instead of accuracy. In this case, it is crucial not to filter any unexpected alarms, as that would be detrimental to operations. Therefore, recall was used to minimize the false negative rate and to evaluate our models. As shown in Fig. 1, the hourglass-shaped model has a much lower false-negative rate than the diamond-shaped model, at the cost of more false positives. In the final test there were 121 active alarms; the hourglass-shaped model correctly predicted the behavior of 97 of them. Figure 2 illustrates the predictions for two alarm signals.
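The paper does not specify the framework or the layer widths; the Keras sketch below illustrates an hourglass-shaped classifier of the kind described above, with assumed layer sizes, drop-out regularization, one sigmoid output per alarm, and recall as the monitored metric.

from tensorflow import keras
from tensorflow.keras import layers

N_PROCESS, N_ALARMS = 655, 751  # signal counts reported in the paper

def build_hourglass_classifier():
    # Layer widths are assumptions; the paper only describes the hourglass shape
    # (nodes narrow toward the middle, then widen again) and the use of dropout.
    model = keras.Sequential([
        layers.Input(shape=(N_PROCESS,)),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(512, activation="relu"),
        layers.Dense(N_ALARMS, activation="sigmoid"),  # one on/off probability per alarm
    ])
    # Recall is tracked instead of accuracy because of the imbalanced alarm labels.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[keras.metrics.Recall()])
    return model

# Predicted alarm states are thresholded (e.g. at 0.5) and compared with the true
# states; disagreements are the candidate unexpected alarms shown to the operator.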

Fig. 1. The confusion matrix and recall for the diamond-shaped model are on the left. The confusion matrix and recall for the hourglass-shaped model are on the right.

Fig. 2. Illustration of predicted values for two alarm signals. The solid grey line is the true signal and the dashed black line is the signal as predicted by the model.

Anomaly Detection. By treating alarm flooding as an anomaly problem, it is possible to collect a representative dataset of the normal state of the reactor without having to introduce all possible malfunctions and other unexpected conditions. This approach has



an added benefit that no data annotation is required, since all data is assumed to be nominal. In some ways this approach can be thought of as a true state-based system, as the model attempts to learn the whole behavior of a plant operating under nominal conditions. The model can then be used to distinguish between nominal and abnormal data in the future. In this case the anomalies are unexpected alarms that are of interest to the operator. Nominal plant state data was collected with the plant simulator by running scenarios without introducing failures or anomalies. An added benefit of this approach is that not only anomalous alarms can be detected, but also anomalous process signals. An added complexity to this task is that alarms that are nominal under some conditions can be abnormal under other conditions; additionally, noise can appear as anomalies and result in false positives. With the constraints posed by the scenarios, no contextual anomalies were encountered in this study.

Data Preparation. The data from the simulator scenarios were grouped into two tests. The first test had a training set consisting of data from 3 scenarios of 100% power and stable operations. A scenario where a reactor trip was introduced was used to test the model's ability to detect anomalies. The alarms caused by the reactor trip should be flagged as anomalies and the alarms caused by normal operations should be filtered for a successful test. The second test included the training data from the previous test as well as scenarios where reactor trips were introduced and stabilized using operator procedures. To test the model, a scenario with the same structure as the training data with added malfunctions was used. In this case the model needed to filter the alarms caused by the reactor trip and normal operations as expected, and flag the alarms caused by the malfunctions as abnormal. The data was split into 70% training data, 15% validation data, and 15% test data. These splits were sampled at random start locations to create time sequences of specified lengths that were used to train the model; sequences of 30–90 s were found to work well. This approach was chosen to augment the amount of data available instead of only training on the data in sequence, largely increasing the number of valid training sequences.

Building Model. A recurrent neural network was chosen since the collected data has a time dimension, and an auto-encoder architecture was chosen due to its versatility and simplicity. Auto-encoders can be used to denoise images [10], reduce the dimensionality of input data [11], and detect anomalies [12], to name a few uses. The input data is also used as the desired output; the idea is to learn how to encode the state of the plant into a lower dimension while retaining enough information to decode the state to its expected nominal state. The model must learn the connections between process signals and alarm signals in order to predict alarms as expected or abnormal. To determine whether a signal or alarm is behaving abnormally, the difference between the input plant state and the decoded plant state is used. If the difference is small there is no anomaly; if it is bigger than a threshold, it is treated as an anomaly.
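As a rough sketch of this approach (framework, layer widths, window length, and threshold are assumptions not stated in the paper), a recurrent auto-encoder can be trained on nominal windows and the per-signal reconstruction error used to flag unexpected behavior:

from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

SEQ_LEN = 60                    # assumed window length, within the 30-90 s range
N_SIGNALS = 655 + 751           # process plus alarm signals

def build_autoencoder(latent_dim=64):
    # Recurrent auto-encoder: compress a window of nominal plant behaviour to a
    # low-dimensional code, then reconstruct it; sizes here are assumptions.
    model = keras.Sequential([
        layers.Input(shape=(SEQ_LEN, N_SIGNALS)),
        layers.LSTM(latent_dim),                           # encoder
        layers.RepeatVector(SEQ_LEN),
        layers.LSTM(latent_dim, return_sequences=True),    # decoder
        layers.TimeDistributed(layers.Dense(N_SIGNALS)),
    ])
    model.compile(optimizer="adam", loss="mse")  # trained on nominal data only
    return model

def flag_anomalies(model, window, threshold=0.05):
    # Per-signal reconstruction error; signals or alarms whose error exceeds the
    # (assumed) threshold are flagged as unexpected, the rest are filtered as nominal.
    reconstruction = model.predict(window[np.newaxis, ...], verbose=0)[0]
    error = np.abs(window - reconstruction).mean(axis=0)
    return error > threshold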



Results. In the first test, where the reactor simulator was in a 100% power and normal system configuration, there were 57 alarms that were expected. The model was able to learn the nominal behavior of 55 of those alarms. See Fig. 3 for an example of a learned process and alarm signal determined to behave as expected because the difference is close to zero. In the test set containing a reactor trip there were 27 alarms that were not part of the training set; the model was able to correctly model the behavior of 25 of them. In the second test, with the reactor simulator at 100% power and reactor trips introduced, there were 93 expected alarms. The model was able to learn the expected behavior of 87 signals and misclassified 6 signals. One signal is responsible for 3232 of 3242 misclassification instances and is related to the reactor trip, showing that the model was unable to correlate the nominal behavior of this signal with the process signals. The anomaly test contained 52 alarms that were previously unseen. The model was able to correctly identify some of the alarms caused by the malfunctions as anomalies, as well as filtering most of the expected alarms. See Fig. 4 for an example of an unexpected alarm where the difference between the learned and true signal is used to flag the alarm as an anomaly.

Fig. 3. The top graph shows a learned process signal and the bottom shows a learned alarm signal. The solid grey line is the true signal and the dashed black line is the signal as predicted by the model.



Fig. 4. This graph shows an anomalous alarm caused by the inserted reactor trip.

3 Discussion

For this study, the focus was to explore whether deep learning could be used to solve the alarm flooding problem in digital nuclear control rooms. Two approaches were pursued: alarm classification with supervised learning and anomaly detection with semi-supervised learning. The anomaly detection approach seemed to perform better, with far fewer misclassifications compared to the alarm classification approach. The reason is most likely the nature of the collected dataset. Since the dataset is unbalanced in terms of active vs. inactive alarms, it is difficult for the alarm classification model to learn the behavior of these alarms; it becomes easy for the model to always classify alarms as off and still perform well. The anomaly detection approach, however, matches the nature of the dataset, and the model can learn the nominal behavior of many of the process and alarm signals. There is some noise in the predicted signals from both models, but a good threshold and a small activation-window requirement can be used to filter the noise and prevent false positives. Real-time testing was also performed together with the reactor simulator and a control room operator. The models were able to correctly filter many alarms, but they also filtered some of the unexpected alarms. For future work the models must be modified to prefer false positives over false negatives. Any expected alarms the models can filter will reduce the strain caused by alarm flooding and reduce the number of alarms the operators must process. Both approaches showed promise and warrant further investigation; anomaly detection does seem to have an advantage over classification due to the nature of the data that is collected.

4 Conclusion

In this paper, we demonstrated two deep learning strategies to solve the alarm flooding problem in digital nuclear control rooms. For this initial research the scenarios were kept simple, and as such the results are limited, but deep learning shows potential to help mitigate the alarm flooding problem. A data set for a range of full power scenarios was collected and modelled using deep learning. Both alarm classification and anomaly detection showed promise by being able to identify expected alarms, thereby reducing the number of alarms the operator must check.



For future work a larger and more representative dataset of scenarios with an increased number of possible reactor states will be collected. In addition, it is of interest to investigate how such an approach can be validated for use in real control rooms, for instance by having it available as a support system that the operators can use during normal operations in order to validate it and gain trust. Another area of interest is to have the model continually learn through feedback provided by operators.

Acknowledgements. The authors would like to acknowledge Idaho National Laboratory (INL) for financing and supporting this project.

References

1. Rothenberg, D.H.: Alarm Management for Process Control. Momentum Press, New York (2009)
2. Wang, J., Yang, F., Chen, T., Shah, S.L.: An overview of industrial alarm systems: main causes for alarm overloading, research status, and open problems. IEEE Trans. Autom. Sci. Eng. 13(2), 1045–1061 (2016)
3. Larsson, J.E., Ohman, B., Nihlwing, C., Jokstad, H., Kristianssen, L.I., Kvalem, J., Lind, M.: Alarm reduction and root cause analysis for nuclear power plant control rooms. Technical report. In: Proceedings Enlarged Halden Prog. Group Mtg, Norway (2005)
4. Abele, L., Anic, M., Gutmann, T., Folmer, J., Kleinsteuber, M., Vogel-Heuser, B.: Combining knowledge modelling and machine learning for alarm root cause analysis. In: Proceedings of the 7th IFAC Conference on Manufacturing Modelling, Management, and Control, vol. 46, no. 9, pp. 1843–1848 (2013)
5. Wang, J., Xu, J., Zhu, D.: Online root-cause analysis of alarms in discrete Bayesian networks with known structures. In: Proceedings of the 11th World Congress on Intelligent Control and Automation, pp. 467–472, Shenyang (2014)
6. Wunderlich, P., Niggemann, O.: Structure learning methods for Bayesian networks to reduce alarm floods by identifying the root cause. In: 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation, pp. 1–8, Limassol (2017)
7. Wang, J., Li, H., Huang, J., Su, C.: A data similarity based analysis to consequential alarms of industrial processes. J. Loss Prevent. Process Ind. 35, 29–34 (2015)
8. Peng, D., Gu, X., Xu, Y., Zhu, Q.: Integrating probabilistic signed digraph and reliability analysis for alarm signal optimization in chemical plant. J. Loss Prev. Process Ind. 33, 279–288 (2015)
9. Sola, J., Sevilla, J.: Importance of input data normalization for the application of neural networks to complex industrial problems. IEEE Trans. Nucl. Sci. 44(3), 1464–1468 (1997)
10. Gondara, L.: Medical image denoising using convolutional denoising autoencoders. In: 16th IEEE International Conference on Data Mining Workshops, pp. 241–246 (2016)
11. Wang, Y., Yao, H., Zhao, S.: Auto-encoder based dimensionality reduction. Neurocomputing 184, 232–242 (2016)
12. Chong, Y.S., Tay, Y.H.: Abnormal event detection in videos using spatiotemporal autoencoder. In: Advances in Neural Networks, pp. 189–196. Springer, Cham (2017)

The First Decade of the Human Systems Simulation Laboratory: A Brief History of Human Factors Research in Support of Nuclear Power Plants

Ronald Laurids Boring

Idaho National Laboratory, Idaho Falls, ID, USA
[email protected]

Abstract. The Human Systems Simulation Laboratory (HSSL) was established as a research simulator at Idaho National Laboratory (INL) for control room modernization of U.S. nuclear power plants. The effort began in 2010 with the initial identification of the needs to provide support to utilities for instrumentation and control (I&C) system upgrades and for human factors of emerging human-system interaction (HSI) concepts. By 2011, working with nuclear energy utilities and simulator vendors, the first full-scope training simulator was acquired for the HSSL, which enabled direct design and evaluation work on the same I&C and HSI that were found at plants. By 2012, the first glasstop bays were acquired, allowing crews to operate the simulator in a full-scale representation of their home plants. Additional plant models were acquired, and a dedicated purpose-built lab space was established in 2014. That same year, the first functional HSI prototypes were developed, allowing operator-in-the-loop benchmark studies of existing and modernized control room concepts. Specific control systems were designed, prototyped, and evaluated across full-scope studies. New technologies were added and tested such as overview displays, prognostic HSIs, computer-based procedures, and microworld simulators for advanced reactor types. Novel human factors evaluation methods were developed and tested encompassing formative evaluation techniques in support of safety cases, wearable eye tracking, and microtask studies. In 2020, the HSSL received a significant facility and hardware upgrade. The purpose of the HSSL remains to conduct research that supports a wide range of human factors needs related to control rooms. Keywords: Human Systems Simulation Laboratory · Human factors · Nuclear

1 Introduction

The Nuclear Renaissance [1] emerged as a concept in the early 2000s, spurred by interest in replacing aging nuclear power plants (NPPs) with newer, passively safer, and more efficient plants. This Nuclear Renaissance also brought with it the realization that the existing concept of operations centered on analog control rooms should be revised to consider newer digital control systems, including advanced visualization and automation capabilities. To facilitate the development of advanced human-system



interfaces (HSIs) and their validation, a human factors testbed was required for control room design and operator-in-the-loop studies. While control room training simulators existed in the U.S. for each plant, there was no plant-agnostic facility that could be used to support human factors research for NPPs. First-rate research simulators existed in places like the OECD Halden Reactor Project [2], but there was no comparable facility in the U.S. Idaho National Laboratory (INL), a national research facility owned by the U.S. Department of Energy (DOE), sought to develop its own domestic research simulator to support advances in NPPs.

2 Simulator Forerunners: 2008–2012

The initial concept for the research simulator was not linked to a particular NPP. INL built a virtual testbed that could be configured to simulate various control room layouts. The design was centered around operator consoles at a desk with a large overview display in the background. In 2008, an initial proof of concept was set up in a reconfigured office space using representative models and visualizations created in LabVIEW (see Fig. 1a). A particular emphasis of this development was on recording measures, with a full suite of physiological and eye tracking measures to support potential studies.


Fig. 1. The first two generations of the Human Systems Simulation Laboratory at INL.

By 2010, the simulator came to be called the Human Systems Simulation Laboratory (HSSL) and was relocated and collocated with INL's Computer-Assisted Virtual Environment (CAVE; see Fig. 1b). A unique feature of this facility was the three-walled CAVE, which opened to form a large single overview display area. At this time, an initial collaboration with NuScale Power, a small modular reactor vendor, allowed the installation of an early-stage simulator in the HSSL capable of simulating the operation of multiple reactor units from a single control room.



3 Arrival of the Glasstop Simulator: 2012–2014

The international financial crisis of 2008 dampened investment enthusiasm in new infrastructure such as NPPs. The Fukushima Daiichi nuclear accident in 2011 effectively halted talk of the Nuclear Renaissance. However, within the U.S., nuclear power still accounted for nearly 25% of electricity generation. A number of plants were nearing the end of their original 40-year operating licenses. Without suitable replacement plants available, most plants sought license extensions, bringing to light obsolescence issues with current control rooms. Duke Energy's Oconee Nuclear Station became the first U.S. plant to implement a digital reactor protection system in its Unit 1 in 2011. Although ultimately a very successful upgrade across all three reactor units at the plant, the process was fraught with challenges such as the multi-year regulatory review process associated with the License Amendment Request. An INL-led survey of U.S. nuclear industry representatives [3] identified cost concerns, regulatory timelines, and lack of know-how as the chief barriers to plant modernization. An infrastructure grant through the U.S. DOE's then nascent Light Water Reactor Sustainability Program made possible the purchase of glasstop bays to align HSSL research with the modernization needs of the current fleet of reactors. Glasstop bays were first developed under the trade name VPanel by GSE Systems, one of the training simulator vendors for nuclear power in the U.S. [4]. This design consisted of three 46-in. monitors mounted in a vertical fashion to match the shape of traditional panels in control rooms. The top display included annunciator tiles, the middle display consisted of indicators, and the bottom represented a benchboard with controls. The displays featured graphical mimics of the analog control boards at the plant, and they were equipped with touchscreens to allow gesture interactions with controls. Other simulator vendors developed nearly identical glasstop solutions. INL procured 15 glasstop bays (each with three displays) to recreate the front panels of commercial NPPs [5, 6]. The bays required considerably more space than the earliest incarnations of the HSSL and necessitated a move to a new space, consisting of a converted mailroom. The initial buildout in 2012 featured the horseshoe-shaped main control room for the San Onofre Nuclear Generating Station (SONGS; see Fig. 2). A move to a new purpose-built laboratory space took place in 2014 (see Fig. 3). This space featured a new observation gallery and better supported the L-shaped control room found in many plants. The premature decommissioning of SONGS led to closer work with Progress Energy (later merged with Duke Energy) and installation of their simulator models for the Shearon Harris, Robinson, and Brunswick nuclear generating stations. This work featured the installation of simulator software from a variety of simulator vendors: GSE Systems, L3/MAPPS, and Western Services Corporation. The HSSL became truly reconfigurable in its ability to support the vast majority of NPP simulators in the U.S. The cross-vendor nature of the HSSL and its installation of multiple plant simulators on the same hardware were a unique capability in the world.



Fig. 2. The third generation of the HSSL featuring the SONGS simulator.

Fig. 3. The fourth generation of the HSSL featuring the Harris NPP simulator.

4 Enhancements to Methods and Systems: 2014–2018

While most of the early work on the HSSL involved the setup of the hardware and software to support simulator studies, the next phase became crucial toward developing the methods to support control room modernization. Much of this work is chronicled elsewhere [7–9], but a few thematic highlights are worth noting:

1. Development of methods for running studies. In a highly regulated environment like nuclear power, changes to the concept of operation, including modernizing the control room, must be demonstrated to be at least as safe as the predecessor environment. The HSSL served to apply common usability methods to control room operations. The focus on representing the existing control boards as they were installed at the plant allowed gathering baseline measures of performance [10, 11] and then benchmarking those against prototypes of new systems [12]. The HSSL pioneered formative evaluations as part of the design process for NPPs [13] and also reviewed methods for gathering data given small sample sizes [14]. Much of the approach to evaluation was outlined in the Guideline for Operational Nuclear Usability and Knowledge Elicitation (GONUKE) [8].



2. Development of prototyping tools for evaluating upgrades. An NPP simulator consists of software to model various plant systems, to control scenarios (e.g., inserting faults to train operators on abnormal operating conditions), to represent the HSI, and to log plant parameters. While qualified simulators adhere to ANSI Standard 3.5 [15], each simulator vendor provides a unique development environment. Because development work was conducted across multiple simulator platforms and because many of the features being developed were not necessarily part of the existing feature set, INL developed its own prototyping environment. This environment was based in Microsoft Visual Studio and centered around Windows Presentation Foundation (WPF), a type of HSI skin that allowed development of digital widgets that could be readily reused to mockup digital prototypes on the control boards [16]. Eventually, the widgets and customized linking software to the simulator backends were gathered to become the Advanced Nuclear Interface Modeling Environment (ANIME) [9]. ANIME continues to provide the framework for prototyping advanced HSIs for control rooms (see Fig. 4).

Fig. 4. Example of distributed control screen prototypes developed using ANIME.

3. Advanced concepts of operations. The focus of most of the studies conducted in the HSSL was on validating control room upgrades. Having professional operators in the HSSL to perform modernization studies also presented the opportunity to piggyback studies on new technologies that were not immediately part of planned upgrades. The Computerized Operator Support System (COSS) [17] was one such example that illustrated advanced visualizations, computer-based procedures, and prognostics. Advanced overview displays to simplify plant monitoring were also developed [18].

5 New Directions: 2018–2020

While support for control room modernization remains a strong focus in the HSSL, new research is also looking beyond legacy control rooms. New control rooms will be largely digital and will likely not feature a stand-at-the-boards configuration. Instead, control and monitoring will take place from an operator console. New control room concepts are being explored to facilitate the introduction of micro and small modular reactors, cybersecurity, and hybrid hydrogen energy production. In 2020, INL began the process of updating the hardware of the HSSL. This fourth generation HSSL will allow greater flexibility in configuring control room concepts for legacy and new reactors (Fig. 5).


Fig. 5. The Rancor microworld simulator shown here with four reactors.

Realizing that full-scope simulators were not available to support advanced control room designs for new builds, INL staff set about creating a simplified tool to demonstrate advanced visualization and automated controls of future control rooms. In 2018, INL researchers released the Rancor microworld simulator [19]. Rancor builds on the legacy of the tools developed for full-scope simulators but packages them in a way that they do not need to be linked to a full-scope simulator. Reduced order models allow rapid prototyping of basic functionality in advance of full-scope simulators. The result is quicker, more cost-effective development of control room concepts.
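As an illustration of what a reduced order model can look like, the sketch below implements a toy plant with first-order dynamics in Python; the variable names, time constants, and structure are purely hypothetical and are not taken from Rancor.

    from dataclasses import dataclass

    @dataclass
    class ToyPlant:
        power: float = 50.0       # % rated thermal power
        sg_level: float = 60.0    # % steam generator level
        rod_demand: float = 50.0  # % power demanded via control rods
        feed_flow: float = 50.0   # % feedwater flow
        steam_flow: float = 50.0  # % steam flow

        def step(self, dt: float = 1.0) -> None:
            # First-order lag: power approaches rod demand with a 20 s time constant.
            self.power += (self.rod_demand - self.power) * dt / 20.0
            # Steam demand follows power; level integrates the feed/steam mismatch.
            self.steam_flow = self.power
            self.sg_level += 0.05 * (self.feed_flow - self.steam_flow) * dt

    plant = ToyPlant()
    plant.rod_demand = 80.0   # operator action: raise the power demand
    for _ in range(60):       # one minute of simulated time at 1 s steps
        plant.step()
    print(f"power={plant.power:.1f}%  SG level={plant.sg_level:.1f}%")

Even a model this coarse is enough to drive prototype HSIs and exercise scenarios long before a full-scope simulator is available.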

6 Conclusions The HSSL is much more than a facility; it is a testbed for human factors methods, HSI prototypes, and advanced operational concepts in NPPs. Whether upgrading the current fleet of commercial reactors or designing the next generation, the HSSL remains a key element in the design and validation of control rooms. In its short history, the HSSL has already helped upgrade the control rooms of nine commercial NPPs. NPPs will continue to require upgrades, many taking on larger numbers of systems to achieve a truly digital end state. The HSSL will help realize this end-state vision. With the recent resurgence of interest in building next generation NPPs, the HSSL also stands ready to develop new control rooms. Acknowledgements and Disclaimer. Many individuals have contributed to the development of the HSSL. These include Jacques Hugo and David Gertman (who had the early vision to build the facility); Bruce Hallbert (who championed the initial funding that made the simulator possible); Kirk Fitzgerald and Brandon Rice (who were responsible for simulator infrastructure and physical buildouts); Roger Lew, Thomas Ulrich, and Ahmad Al Rashdan (who developed the prototypes for upgrades and other features); and Jeffrey Joe and Katya Le Blanc (who expanded the initial capabilities). This work of authorship was prepared as an account of work sponsored by Idaho National Laboratory, an agency of the United States Government. Neither the United States Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately-owned rights. Idaho National Laboratory is a multi-program laboratory operated by Battelle Energy Alliance LLC, for the United States Department of Energy under Contract DE-AC07-05ID14517.

References 1. Boring, R.L., O’Hara, J.M., Hugo, J., Jamieson, G.A., Oxstrand, J., Ma, R., Hildebrandt, M.: Human factors and the nuclear renaissance. Proc. HFES 52, 763–767 (2008) 2. Boring, R.L.: The use of simulators in human factors studies within the nuclear industry. In: Skjerve, A.B., Bye, A., (eds.), Simulator-Based Human Factors Studies Across 25 Years, pp. 3–17. Springer-Verlag (2011) 3. Joe, J.C., Boring, R.L., Persensky, J.J.: Commercial utility perspectives on nuclear power plant control room modernization. In: 8th International Topical Meeting on NPIC&HMIT, 2039–2046 (2012) 4. GSE Systems: VPanel (2014) 5. Boring, R.L., Agarwal, V., Joe, J.C., Persensky, J.J.: Digital full-scope mockup of a conventional nuclear power plant control room, Phase 1: installation of a utility simulator at the idaho national laboratory. INL/EXT-12-26367, Idaho National Laboratory (2012) 6. Boring, R., Agarwal, V., Fitzgerald, K., Hugo, J., Hallbert, B.: Digital full-scope simulation of a conventional nuclear power plant control room, Phase 2: installation of a reconfigurable simulator to support nuclear plant sustainability. INL/EXT-13-28432, Idaho National Laboratory (2013) 7. Joe, J.C., Boring, R.L.: Using the human systems simulation laboratory at Idaho national laboratory for safety focused research. In: Advances in Intelligent Systems and Computing, vol. 495, pp. 193–201 (2016) 8. Boring, R.L., Ulrich, T.A., Joe, J.C., Lew, R.T.: Guideline for operational nuclear usability and knowledge elicitation (GONUKE). Procedia Manufact. 3, 1327–1334 (2015) 9. Boring, R., Lew, R., Ulrich, T.: Advanced nuclear interface modeling environment (ANIME): a tool for developing human-computer interfaces for experimental process control systems. In: Lecture Notes in Computer Science, vol. 10293, pp. 3–15 (2017) 10. Boring, R.L., Lau, N.: Measurement sufficiency versus completeness: Integrating safety cases into verification and validation in nuclear control room modernization. Adv. Intell. Syst. Comput. 495, 79–90 (2016) 11. Kovesdi, C., Spielman, Z., Le Blanc, K., Rice, R.: Application of eye tracking for measurement and evaluation in human factors studies in control room modernization. Nucl. Technol. 202, 220–229 (2018) 12. Boring, R.L., Ulrich, T.A., Lew, R., Kovesdi, C., Al Rashdan, A.: A comparison of operator preference and performance for analog versus digital turbine control systems in control room modernization. Nucl. Technol. 205, 507–523 (2019) 13. Boring, R.L.: Formative evaluation for optimal upgrades in nuclear power plant control rooms. Joint Probabilistic Safety Assessment and Management and European Safety and Reliability Conference (2014) 14. Ulrich, T., Boring, R., Lew, R.: Qualitative or quantitative data for nuclear control room usability studies? a pragmatic approach to data collection and presentation. Proc. HFES 62, 1674–1678 (2018) 15. American Nuclear Society: Nuclear power plant simulators for use in operator training and examination. ANSI/ANS-3.5 (2009)


16. Lew, R., Boring, R.L., Joe, J.C.: A flexible visual process control development environment for microworld and distributed control system protoyping. In: Proceedings of the International Symposium on Resilient Control Systems (2014) 17. Boring, R.L., Thomas, K.D., Ulrich, T.A., Lew, R.: Computerized operator support systems to aid decision making in nuclear power plants. Procedia Manufact. 3, 5261–5268 (2015) 18. Jokstad, H., Boring, R.: Bridging the gap: adapting advanced display technologies for use in hybrid control rooms. In: 9th International Topical Meeting on NPIC&HMIT, pp. 535–544 (2015)

Human Factors Challenges in Developing Cyber-Informed Risk Assessment for Critical Infrastructure Katya Le Blanc(&) Idaho National Laboratory, Idaho Falls, ID, USA [email protected]

Abstract. Research efforts in the security of cyber physical systems often focus solely on technological aspects of security and ignore the human contributions to risk and resilience. However, ask any IT security admin what the greatest threat to their system is and they’ll quickly tell you, “the user.” While the threat of users is well established, humans are involved in the security of systems to a much larger degree. Humans interact with these systems at every stage of their lifecycle, from initial design to end-of-life: Humans design the components and structures of the physical system, they construct the facility, design the control logic, configure the networks and security controls, maintain the equipment, and operate the system. Ignoring the humans in the system means ignoring your largest source of risk. This paper describes the ways in which humans can contribute to risk using the electric grid as an example. Keywords: Human factors

· Cyber security · Risk analysis · Electric grid

1 Introduction The integration of computational technology into critical infrastructure has led to the need to capture the risk that the technology poses to the safe and reliable operation of the system. Understanding this risk requires characterizing the complex interactions between physical systems and general-purpose computing components that are now distributed across those systems. Current approaches rely on subjective assessment based on subject matter expert input and do not have a firm scientific basis. There are mature and accepted methods for identifying risk in physical systems (such as nuclear power plants) which require a thorough understanding of the components in the system, how those components interact to perform their intended functions, and the likelihood of failure of the components. Cyber-physical systems have another layer of complexity because they can aggregate previously unknown interactions between components in the physical system through intentional or unintentional misuse of the general computing capability of the cyber technologies embedded in the system. If you also consider the fact that the threat is an intelligent adversary whose goal may be to compromise or sabotage the system (rather than simply considering random failures or degradation), determining what can happen in the system and how likely it is becomes overwhelmingly complex. In order for a risk assessment method to adequately capture the risk in a cyber-physical system, the method must capture the unique ways in which the system can fail, which may represent physical failure of computational components or misuse of those components to perform unintended or undesired actions. Characterizing these previously unconsidered failure modes requires detailed understanding of the technologies used, the way they are configured, and the controls that are put in place to protect them. Confounding the issue is that the technologies themselves, the way they are configured, and the ways they can be compromised are ever evolving. Finally, the parts of the system that are the most challenging to characterize, the humans, may have the largest impact on risk. This paper contrasts the way typical risk assessment methods are used to identify risk in physical systems against the complexity that would need to be considered in a cyber-physical system, using the electric grid as an example. This paper will focus on how the humans in the system, from the power grid operators to the cyber defenders and human adversaries, influence risk. The paper will present a framework for addressing these challenges and highlight research gaps that need to be addressed.

2 Characterizing Risk in a Physical System Tools like probabilistic risk assessment have been used to characterize risk in physical systems for many years. Generally, risk is described as the consideration of three aspects: consequence, likelihood, and the scenarios that lead to the consequence. The risk in a system is generally characterized as the product of the consequence or impact of an event and the likelihood of that event. In a physical system, modeling the impact requires a thorough understanding of the components in the system, the interactions among those components, and the likelihood of failure of the components in the system. In fields such as nuclear power, risk is defined for predefined consequences such as core damage frequency or radiological release. Similar high-impact consequences can be defined for the electric grid, such as large-scale blackouts. However, describing the way in which those high-impact consequences can occur is much more challenging in the electric grid. In nuclear power, risk analysis uses event trees and fault trees to model how sequences of failures of specific systems or components can lead to undesirable outcomes. Developing similar fault trees for the electric grid will require capturing complex interactions, particularly if one is considering cyber security.
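In the common textbook formulation (the notation here is ours, not the paper's), this product is summed over the identified scenarios:

    R = \sum_{i} P(s_i) \, C(s_i)

where each s_i is a scenario (for example, an accident sequence from an event tree), P(s_i) is its likelihood, and C(s_i) is its consequence.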


3 Characterizing the Human Contributions to Risk In nuclear power and other domains, human contributions to risk are characterized using human reliability analysis. Typically, important human actions are modeled in fault trees and specially trained analysts model the factors that influence success of those actions. The input to the fault tree then becomes a human error probability. Importantly, human error probabilities only capture the likelihood that the human will not take an important action (errors of omission) and do not capture the myriad ways that humans can take undesirable actions on the system (errors of commission). Considering errors of commission would make any analysis prohibitively complex because the conditions that need to be considered are seemingly unbounded [1].
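As a hedged illustration of how a human error probability enters such a model, the short Python sketch below evaluates a toy fault tree in which an operator recovery action is one basic event; the structure and the numbers are invented for illustration and do not come from any plant analysis.

    def and_gate(*probs):
        # All inputs must fail (independence assumed).
        p = 1.0
        for x in probs:
            p *= x
        return p

    def or_gate(*probs):
        # The gate fails if any input fails (independence assumed).
        p_none = 1.0
        for x in probs:
            p_none *= (1.0 - x)
        return 1.0 - p_none

    P_PUMP_FAILS  = 1e-3   # hardware failure probability (illustrative)
    P_VALVE_STUCK = 5e-4   # hardware failure probability (illustrative)
    HEP_NO_MANUAL = 1e-2   # operator fails to start backup manually (error of omission)

    # Top event: loss of cooling = automatic train fails AND operator fails to recover.
    p_auto_train = or_gate(P_PUMP_FAILS, P_VALVE_STUCK)
    p_top = and_gate(p_auto_train, HEP_NO_MANUAL)
    print(f"P(top event) = {p_top:.2e}")   # about 1.5e-05 with these numbers

Errors of commission have no natural place in such a tree, which is precisely the limitation noted above.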

4 Adding Cyber Risk to the Equation Information systems add another layer of complexity to modeling risk because they may contribute previously unknown interactions among components in the physical system as well as being components that can fail themselves. If you also consider the fact that one of the threats to information systems is an intelligent adversary who may be intentionally trying to compromise or sabotage the system (rather than simply considering random failures or degradation), determining what can happen, and how likely it is, becomes ever more complex. Computer security (also known as cyber security) encompasses a wide variety of considerations. In order for a system to reliably perform its functions, it may need uninterrupted communications. The integrity of information from sensors and equipment needs to be maintained, as well as the integrity of the controls and actuators. Further, security controls need to ensure that unauthorized entities do not have access to view or manipulate any of this information. Whether these conditions are met relies on many variables, including how networks and communications are configured, what protocols are used, what security measures are in place, and whether there are technical errors in the software that is used on any of the computer components in the system. Further complicating the issue is that many so-called vulnerabilities in systems arise when systems are behaving as intended, but the context of use has changed such that assumptions the original designers made are now violated. Finally, the context of use changes dramatically when systems are upgraded or changed, when new technology emerges, or when attackers acquire new capabilities. This means characterizing where vulnerabilities exist on a system at any given time is extremely complex.

5 Considering the Human Contribution to Risk Humans are an integral part of the entire lifecycle of any complex sociotechnical system. Humans design the components and structures of the physical system, they construct the facility, design the control logic, configure the networks and security controls, maintain the equipment, and operate the system. Further, human attackers are a potential threat to the system. At each stage, humans make assumptions about the conditions in which the system will operate that have consequences for how it might fail.

The first stage at which humans can contribute to risk is in the design of systems, both in the design of physical structures and components and in the design of software that interacts with the physical components. Engineers design physical systems to operate within a predetermined set of conditions. Although the actual conditions a system may encounter in its entire life cycle are complex and unpredictable, engineers must make a set of simplifying assumptions in order to produce a design that will work in most conditions they expect the system to encounter. Those simplifying assumptions may have important consequences for whether a system will fail under conditions that arise in the actual operation of the system. However, often those assumptions aren't documented carefully, making it difficult for someone other than the designer to understand the conditions that might lead to a failure. The organization operating the system may not understand the implications to the physical system if it is unknowingly operated under conditions outside the ones it was designed for, and they may not even know that they are operating it outside those conditions. While the design of the physical system isn't particularly a special consideration for cyber risk, there are potential interactions among the physical system and the computational components in the physical system. Certain designs of the physical system may also lead to unique failures that only arise when there are computer security challenges on the system.

The second place where risk is introduced by humans is in the design of the control logic of the system. Automatic controls typically rely on feedback from sensors into controllers that actuate certain controls based on the state of the system. Designers of these systems carefully consider issues such as calibration failures or other sensor issues in the design of their systems, but they may not consider the impact of an intelligent adversary who may be spoofing sensor data. They may make assumptions about the robustness of the system based on an ideal implementation of the design and not employ additional protections to ensure reliability of the system in the event of a targeted attack on the system. Further, the designers may make assumptions that allow for an attacker to exploit the system in unexpected ways. For example, they may assume that unauthorized access to a system is extremely unlikely and design the system in a way that an attack can propagate from one system to another.

A third way that humans can contribute to risk is in the configuration of equipment. Decisions about how to configure protection systems are made in the context of a human's understanding of the system. The human's mental model may be incomplete or inaccurate and may lead to a poor configuration of the system. The configuration is then carried out by another human who may or may not understand the intent of the designer and may make an error in the implementation. Operation of the system is then carried out with the assumption that the configuration was accurate and carried out as intended. These configuration failures can occur in the software and controllers directly related to controlling the physical system, in the networks and communications, and in the configuration of security controls on the system. They can appear in any part of the system with a computational component and are extremely hard to detect or predict.
Humans can also contribute to risk in the operation of systems. Human operators must monitor equipment through specialized interfaces, make decisions about the state of the system, and take action to ensure it remains in a safe state. In electric grid operations, this typically means monitoring complex displays to ensure power flows remain within limits, performing switching to conduct scheduled maintenance, and ensuring load and generation balance while maintaining resources at acceptable levels. Operating the system relies on integrity of the data provided through the interface, effective training to ensure operators have an accurate understanding of the system dynamics, and an understanding of the limitations of the tools used in control rooms to support decision making. Additionally, factors such as fatigue and the overall complexity of the system can influence whether operators will be able to take appropriate actions during an upset. Beyond operations, humans that interact with any part of the cyber-physical system can contribute to risk in the system. Maintenance technicians can fail to perform actions that are essential to keep the equipment in a safe state. Vendors performing maintenance on proprietary systems may temporarily circumvent security controls to complete updates or troubleshoot, which may compromise the system or leave it open to compromise in the future. Company executives or other employees may unwittingly compromise their credentials on business systems, leaving their more protected systems open to compromise, such as in the 2016 attack on the Ukrainian electric grid [2]. In these situations, it is very difficult to predict what compromises will occur and what the ultimate impacts will be if they do occur. Finally, when considering cyber security risk, the threat is a human or group of human attackers. Modeling or predicting what attacks might occur and what the outcome might be would require an understanding of the goals and capabilities of the attackers in the context of the physical system characteristics, along with being able to predict how humans on the defending side would behave.

6 Quantifying the Risk Quantifying risk requires that consequences and scenarios can be adequately described and the frequency of failures be adequately quantified. In systems that use probabilistic risk assessment, such as nuclear power, component reliability and failures are tested prior to implementation in a system and then tracked throughout the entire operational lifecycle. Similar data for human failures and for cyber-security-induced failures is not readily available. Further, the likelihood of both human failure and cyber-security failures depend extensively on contextual factors that are simply not captured in existing risk models. Even if the data were available, it is unclear whether it would capture the real frequency if the additional context was not also captured.

7 Discussion Research efforts in the security of cyber-physical systems often focus solely on technological aspects of security and ignore the human contributions to risk and resilience. However, ask any information technology security administrator what the greatest threat to their system is and they'll quickly tell you, "the user." While the threat of users is well established, humans are involved in the security of systems to a much larger degree. Humans interact with these systems at every stage of their lifecycle, from initial design to end-of-life. This paper has described the challenges in evaluating the risk that humans contribute to systems, but it also makes the case that this contribution cannot be ignored. In a complex environment such as cyber-physical systems, it is important to understand where humans make assumptions about how the system will function and how those assumptions are carried through the design and operation of the system. It then becomes vital to track whether those assumptions still hold throughout the operational lifecycle of a system in order to prevent undesired events from becoming catastrophic accidents.

Disclaimer. This work of authorship was prepared as an account of work sponsored by Idaho National Laboratory, an agency of the United States Government. Neither the United States Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately-owned rights. Work supported through the INL Laboratory Directed Research & Development (LDRD) Program under DOE Idaho Operations Office Contract DE-AC07-05ID14517.

References
1. Swain, A.D.: Human reliability analysis: need, status, trends and limitations. Reliab. Eng. Syst. Saf. 29(3), 301–313 (1990)
2. Lee, R.M., Assante, M.J., Conway, T.: Analysis of the cyber attack on the Ukrainian power grid. SANS Industrial Control Systems (2016)

Dynamic Instructions for Lock-Out Tag-Out Jeremy Mohon(&) Idaho National Laboratory, Idaho Falls, ID, USA [email protected]

Abstract. Lock-out tag-out (LOTO) instructions protect personnel from unexpected re-energization or start-up of equipment that is under maintenance. Even though there is rigorous training associated with LOTO activities, incidents still occur. Idaho National Laboratory is researching and developing dynamic LOTO instructions, which support automatic place keeping, personnel working on multiple instructions, and multiple work groups completing the same instructions. Researchers identified user needs through subject matter expert interviews, observations, and classroom training. Failing to identify potential hazards during LOTO can cause injury or death to personnel. Dynamic instructions help to guide the user seamlessly through the instructions. Combining planning and record keeping may lead to clearer, more applicable instructions which address the reasons for compliance issues with current paper-based procedures. Keywords: Dynamic instructions

· Lock-out Tag-out · Computer-based procedures

1 Introduction Lock-out Tag-out (LOTO) instructions are used to protect employees servicing machines and equipment from the unexpected startup or release of hazardous energy that could cause serious injury or death to personnel. Lock-out can be described as the use of devices to lock equipment in a safe position during equipment maintenance. Tag-out can be characterized as using warning devices to inform personnel that equipment is under maintenance and not safe to operate. The United States Occupational Safety and Health Administration (OSHA) established LOTO as a part of workplace programs for personnel protection from workplace hazards in 1989 [1]. Despite rigorous training, it is one of the top 10 all-time most-cited OSHA workplace violations and was the fourth most cited OSHA violation in 2018 [2]. Common compliance violations for LOTO are failure to complete all steps as required by policy, failure to identify hazardous energy sources, and lack of personnel training on equipment procedures. Failure to implement LOTO in manufacturing was a contributing cause of 8% of fatalities and 15% of non-fatal catastrophic injuries investigated between 2005–2014 [3]. Compliance prevents over 50,000 injuries and over 100 deaths [4]. Many industries currently use paper-based procedures for LOTO activities. Even though paper-based procedures have kept industries mostly safe, they have contributed to human errors due to missed or skipped steps, compliance issues, and managing multiple procedures [5].

LOTO's purpose is to protect personnel completing maintenance on equipment or systems from unexpected hazards or startup. Phases of the LOTO workflow are planning, recording, activities for the workgroups involved, and completion. The planning phase is where the authorized employee puts a work package together which lists procedures, equipment, and previous maintenance performed on the equipment or system. The employee then records LOTO information and locations of hazardous energy on record sheets, and describes how equipment should be isolated. Record sheets are used to record hazardous energy conditions, maintenance personnel signatures, independent verifications that equipment is off or closed, and supervisor approvals. Once the planning is completed, a supervisor will give final approval to begin the LOTO. The authorized employee will contact additional personnel to independently verify the equipment is off or closed. Once the independent verifier confirms the equipment is off or closed, they will install LOTO on the system or equipment. Multiple personnel will begin to work on the LOTO after the equipment has been independently verified as off or closed. Multiple work groups may all be assigned to the same instructions based on their expertise, such as electrical, plumbing, or maintenance. Once the work groups have completed their tasks, they will sign the record sheet as completed. Once all activities are finished, the supervisor will give permission to an authorized employee to close out and remove the LOTO. Paper-based LOTO instructions work well for record keeping and for confirming that verifications of off or closed equipment have been completed before work begins. However, missed or skipped steps, compliance issues, and managing multiple procedures could be improved for a more efficient workflow.

1.1 Dynamic LOTO Instructions

Some industries have switched to smart PDFs, which allow for limited user inputs and the addition of information such as photos, drawings, and equipment information. However, smart PDFs function similarly to their paper counterparts and remain static documents. Research into dynamic procedures could offer improvements to the LOTO workflow process. Dynamic procedures help to guide the user seamlessly throughout the use of the procedure [5]. Human performance features can be added, such as displaying only relevant steps, easy-to-access information, and guiding the user through the procedure. Researchers at Idaho National Laboratory (INL) designed a dynamic LOTO instruction which can support planning and compliance by addressing communications between workgroups, using automatic place keeping, record retention, printing tags, and improving the overall user experience. INL has created extensive design guidance for dynamic instructions that incorporate branching questions, field notes, notes, cautions, and warnings to guide personnel through the entire LOTO process [6]. The design of the dynamic LOTO instruction was based on the design concepts previously published by INL. By doing so, the researcher ensured ease of use of the dynamic LOTO as well as improved means to communicate between work groups. The objective of this research was to gather user needs and requirements for the creation of dynamic LOTO instructions that combine planning and record keeping in a manner which will reduce workload, increase efficiency, and increase administrative compliance.

2 Method The researcher identified user needs through three SME (subject matter expert) interviews, observations, and classroom training in order to research and develop dynamic LOTO instructions. The SMEs interviewed were facility supervisors, program managers, and operations LOTO experts. Supervisors review and approve record sheets before and after the work has been completed. The program manager oversees all LOTO operations in their facility. Operations LOTO experts oversee the entire company-wide program. LOTO instructions and record sheet information were reviewed before interviewing SMEs to create targeted questions on how LOTO instructions are used, how record sheet information is recorded, and where human performance errors occur during instruction use. The questions targeted information such as: whether personnel could explain the flow of the record sheet and how it is used, what works well in the paper-based work process, what improvements to the process would have a great impact, and how supervisors approve the LOTO. The researcher attended a demonstration in a training facility where SMEs showed how equipment is locked out, what indications should be observed, and what additional resources are available to the authorized employee. The researcher also participated in a two-day LOTO classroom training to learn more about the planning and performing phases of the LOTO. Qualitative data and LOTO workflow process information were gathered to understand user needs when completing LOTO. Note taking was used for collecting information from interviews and observations. Previous design guidance from INL on computer-based procedures was used to identify areas for improvement to create dynamic LOTO instructions.

3 Results Interviews with SMEs found that record retention, tag printing, and improving communication between personnel were areas where improvements could have a great impact. SMEs explained how LOTO instructions and paperwork are completed for different LOTO activities. SMEs identified tag printing and record retention as important for work efficiency, by saving personnel time and allowing for record retrieval for future LOTO on the same equipment or system. Observation of the training showed how personnel are trained on facility equipment and how to identify all potential hazardous energy sources. Classroom training provided information on how LOTO is applied. Instructors provided feedback on completing the training and how the record sheets and forms are completed.


4 Discussion The LOTO workflow process areas identified as needing improvement were record retention, printing tags, and communication between workgroups. INL's design guidance for creating dynamic procedures has identified additional areas such as field information; notes, cautions, and warnings; branching questions; and automatic place keeping. A prototype is being developed that combines instructions and record keeping with design guidance and identified user needs to address improvements in the LOTO workflow process. Planning has been combined with record keeping by identifying points in the instructions where tasks and record keeping could be combined to save time. Information can be added to dynamic instructions, allowing personnel to avoid carrying work packages around for planning. Personnel would begin by identifying the system or equipment and follow the instructions to begin LOTO. By combining instructions and record planning, hazardous energy sources can be selected and added to the record sheet while completing instructions. Figure 1 shows an example of following the instructions and completing the record sheet by selecting the equipment or system and possible hazardous energy sources. Branching questions based on design requirements have been added when selecting potential hazardous energy sources for additional personnel safety. Branching questions are used for adding decision points in the instructions that will guide the personnel to the next appropriate step based on their selection. In Fig. 1, if personnel select an electrical hazard, the instructions direct them to additional electrical questions. Combining the planning and record keeping addresses improvements to the workflow process that could help increase work efficiency.

Fig. 1. Example of dynamic LOTO instructions planning and record sheet.
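To make the branching structure concrete, the minimal Python sketch below shows one way such steps could be represented; the field names, step identifiers, and flow are hypothetical and are not taken from the INL prototype.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Step:
        text: str
        options: dict = field(default_factory=dict)  # answer -> id of the next step
        next_step: Optional[str] = None               # default next step

    steps = {
        "identify": Step("Select potential hazardous energy sources.",
                         options={"electrical": "electrical_detail",
                                  "pneumatic": "isolate"}),
        "electrical_detail": Step("Answer additional electrical hazard questions.",
                                  next_step="isolate"),
        "isolate": Step("Record isolation points on the record sheet."),
    }

    record_sheet = []        # hazards selected during planning
    current = "identify"
    answer = "electrical"    # e.g., personnel select an electrical hazard
    record_sheet.append(answer)
    current = steps[current].options.get(answer, steps[current].next_step)
    print(steps[current].text)   # branches to the additional electrical questions

The selected hazards accumulate on the record sheet as the instruction is followed, which is the combination of planning and record keeping described above.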

Automatic place keeping is a need identified based on both the design guidance and the classroom training. Place keeping is a human performance tool that helps keep personnel from unintentionally missing or skipping steps [5]. Currently, the circle-slash method is the most common method of place keeping in the nuclear industry. Personnel will read a step, then circle it to begin the work, and slash through it when work is completed [5]. Classroom training requires personnel to demonstrate they can perform LOTO without missing or skipping steps. If steps are missed or skipped, the entire training must be repeated. Having automatic place keeping as a feature can improve overall human performance by helping personnel to not miss or skip a step unintentionally. This will also help reduce the cost of training personnel. Place keeping is identified by the blue border surrounding the current step, as illustrated in Fig. 2. Previous and future steps are grayed out for easy identification of the current step. By scrolling, personnel can revisit previous steps that have been completed as well as look at future steps. An arrow button at the top of the dynamic procedure will return the worker to the current step if they have looked at previous or future steps. The tracking of completed steps via the automatic place keeping can help to reduce the cognitive demands on personnel of having to remember which previous steps have been completed.
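A minimal sketch of the automatic place-keeping idea is shown below; the class and step names are illustrative only and do not reflect the INL implementation.

    class PlaceKeeper:
        def __init__(self, steps):
            self.steps = list(steps)
            self.completed = set()
            self.current = 0              # index of the active (highlighted) step

        def complete_current(self):
            self.completed.add(self.current)
            if self.current < len(self.steps) - 1:
                self.current += 1         # advance automatically; no step can be skipped

        def status(self, i):
            if i in self.completed:
                return "done"
            return "active" if i == self.current else "pending"

    pk = PlaceKeeper(["Verify breaker open", "Hang tag", "Verify zero energy"])
    pk.complete_current()
    print([pk.status(i) for i in range(3)])   # ['done', 'active', 'pending']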

Fig. 2. Record sheet, sending notifications to personnel, and work preview.

Field information, which incorporates notes, pictures, or additional information, has been included based on design guidance for computer-based procedures [6]. In paper-based procedures (PBPs), additional information is included as paper copies in work packages. Having a field information section would allow documentation relating to the current step to be available as needed [6]. Adding pictures of the equipment, drawings, or notes can help personnel identify locations of hazardous energy for easier planning or completion of maintenance. The LOTO PBP record sheet has a space for notes that works for the LOTO process in case additional information is needed. In Fig. 2, the dynamic instructions have a section to add notes and information in the field information folder for each instruction step [6]. The field notes section is grayed out when there is no current information included. The folder icon turns blue if information has been added to field information, for easy identification. The blue color change indication can help to identify whether information has been added to an instruction step.


Notes, cautions, and warning indicators are placed to the left side of a series of steps as a visual cue to which steps they relate [6]. Having notes, cautions, and warnings helps inform personnel to pay attention to related steps [6]. For example, Fig. 1 has a note to acknowledge possible equipment hazards that should be identified as part of the LOTO instruction step. Having the acknowledgement can help personnel identify additional information for their protection. Having multiple personnel working on LOTO instructions at the same time can help increase work efficiency. Notifications can be sent easily to different personnel when work has been completed. For example, an electrician could send a notification that they have turned off circuit breakers, allowing maintenance to begin on equipment. In Fig. 2, notifications are sent to the supervisor to approve LOTO planning once the authorized employee has finished planning. The supervisor would receive the notification to review and approve the LOTO and could inform the authorized employee to submit approval. Having the ability to send notifications will help to increase work efficiency by reducing waiting time between maintenance and planning tasks.

5 Limitations and Future Work The sample size of three SMEs was a limitation of this research. Limited access to SMEs could mean that there may be additional user needs that are not yet identified. More SMEs and users will be needed to identify additional user needs and requirements for creating dynamic LOTO instructions.


A web-based prototype for dynamic LOTOs is under development. The prototype version is intended to be used as a mobile or desktop application and combines planning, documentation, and instructions. Users will be asked to evaluate the functionality by providing feedback on the design, challenges, and errors encountered during user trials. Usability of the prototype will be determined from feedback given after testing to improve the dynamic instruction prototype.

6 Conclusion INL is researching dynamic LOTO instructions that combine planning and record keeping to reduce workload, increase efficiency, and increase administrative compliance. User needs were identified by interviewing LOTO SMEs, observing facility demonstrations, and attending classroom training. Identified user needs were record retention, tag printing, and improving communication between personnel. Classroom and facility demonstrations informed how LOTO planning and record keeping are performed. A prototype is under development to address areas for LOTO workflow improvement. Combining these functions may lead to clearer, more applicable instructions which address the reasons for compliance issues with current paper-based procedures. Acknowledgments. I would like to thank Jeffery Wass, Timothy Hollis, Chauncey Peters, Johanna Oxstrand, and David Butikofer at Idaho National Laboratory for their contributions to this research, guidance, and mentoring. The opinions expressed in this paper are entirely those of the authors and do not represent an official position. This work of authorship was prepared as an account of work sponsored by Idaho National Laboratory, an agency of the United States Government. Neither the United States Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately-owned rights. Idaho National Laboratory is a multi-program laboratory operated by Battelle Energy Alliance LLC, for the United States Department of Energy under Contract DE-AC07-05ID14517.

References
1. Occupational Safety and Health Administration: United States Department of Labor (2017). https://www.osha.gov/SLTC/controlhazardousenergy/. Accessed 10 Jan 2020
2. Druley, K.: OSHA's top 10 most cited violations for 2019 (2019). https://www.safetyandhealthmagazine.com/articles/19087-oshas-top-10-most-cited-violations. Accessed 14 Jan 2020
3. Yamin, S.C., Parker, D.L., Xi, M., Stanley, R.: Self-audit of lockout/tagout in manufacturing workplaces: a pilot study. Am. J. Ind. Med. 60(5), 504–509 (2017). https://doi.org/10.1002/ajim.22715
4. Esafety: What is Lockout/Tagout? (2018). https://www.esafety.com/what-is-lockout-tagout/. Accessed 15 Jan 2020
5. Le Blanc, K., Oxstrand, J.: Computer-based procedures for field workers in nuclear power plants: development of a model of procedure usage and identification of requirements (2012). https://doi.org/10.2172/1047193
6. Oxstrand, J., Le Blanc, K., Bly, A.: Design guidance for computer-based procedures for field workers (2016). https://www.osti.gov/biblio/1344173-design-guidance-computer-basedprocedures-field-workers. Accessed 21 Jan 2020

Design Principles for a New Generation of Larger Operator Workstation Displays Alf Ove Braseth(&) and Robert McDonald Institute for Energy Technology, Digital Systems, Os Alle 5, 1777 Halden, Norway {alf.ove.braseth,robert.mcdonald}@ife.no

Abstract. Industrial plants are controlled from centralized control rooms, typically through a 4–5 level “deep” hierarchy. This structure causes operators to perform unwanted navigation and can cause keyhole effects. Decreasing cost of larger high-resolution personal displays can mitigate such effects. This paper presents a theoretical framework for designing graphics for larger personal displays, guided by four questions: i) What is the appropriate level of information content? ii) How should this information be comprehended and presented? iii) How to support efficient and safe operator input actions? And, iv) How to effectively perform efficient navigation to other displays? The paper uses findings from research into visual perception and human factors. The contribution is design principles, they are used to develop 30” display pictures for a full scope nuclear simulator. Further work should perform user studies to explore operator acceptance and performance for the display pictures. Keywords: Large personal displays

· Human factors · Design principles

1 Introduction Modern industrial plants such as nuclear, chemical, and petroleum facilities are controlled from centralized control rooms. Using digital technology, both new builds and modernization projects perform monitoring and interaction through a display hierarchy. Often, such hierarchies consist of 4–5 levels [1]. At the top level, one or several large group-view displays present the "big picture". Detailed process interaction is performed at individual operator workstations, typically on 20" to 24" displays. This display structure causes control room operators to perform unwanted navigation, and the resulting myriad of displays can cause keyhole effects [2]. Unfortunate effects were reported when moving from larger analogue panels to desktop displays in both nuclear and conventional control rooms [3, 4]. Industrial standards and guidelines explain how smaller personal displays are challenging for operator performance [5, 6]. The availability of larger high-resolution personal displays, typically 30", brings an opportunity to reduce both the "depth" of the display hierarchy and the total number of pictures. A new generation of larger personal displays therefore has the potential to reduce both interface management and keyhole effects. There is, however, not much research or advice on how to design process graphics for such displays. There is a need to establish research-oriented design principles, taking advantage of such displays for improved operator performance. This paper presents a theoretical framework for designing graphics for larger personal displays, guided by four questions: i) What is the appropriate level of information content? ii) How should this information be comprehended and presented? iii) How to support efficient and safe operator input actions? And, iv) How to navigate efficiently to other display pictures in the display hierarchy? The paper answers the questions using findings from research into visual perception for computer graphics and human factors.

2 Approach The following sections address the four questions chronologically.

2.1 The Appropriate Level of Information Content

The new displays should support the operator in obtaining a high level of Situational Awareness (SA). There are three levels of SA, important for shaping designs [7]. The first is perception of the elements in the environment. The second is to provide a comprehension of the current situation, particularly when integrated in relation to goals. The third is to present a projection of the near future status. SA level one can be improved through the increased “information space”, which has a capacity of presenting more information to the operator. This has the benefit of both reducing the number of displays and the depth of the hierarchy (Fig. 1).


Fig. 1. Left side: larger displays reduce hierarchy depth and number of displays. Right side: a deeper hierarchy creates a myriad of displays

For SA level two, using the larger display space to present whole process segments instead of dividing them into several displays is a reasonable choice. In addition, information related to automation can help operators in building a strong mental model of the process, keeping operators "in the loop" [7]. For SA level three, both trended information and other qualitative indicators used for presenting process values can help project a near-future state. For a high level of SA, the displays should also support the operator in both top-down (goal-driven) and bottom-up (data-driven) processing [7]. Examples of the first are process values and their goal-oriented target values. Data-driven alarms and events are examples of bottom-up information (Fig. 2).



Fig. 2. Left side: zoomed-in part of the right-side whole process segment, presenting a process value (trend) and target value (triangle), complemented with automation information (48%) and a bottom-up, data-driven alarm (H)

2.2 Comprehension and Presentation

It is not enough to just afford a larger display area; it should also be used in an effective way. One principle is to use a high data-ink ratio, focusing on valuable information [8]. In this approach, it is suggested to remove ornaments and redundant ink, using the real data itself to provide context. Both high data-ink ratio designs [9] and color layering [10] have demonstrated favorable results in visual search in displays (Fig. 3).

Fig. 3. Left side, good design: thin grey lines, normal font, avoiding frames. Right side: cluttered design using unnecessary ink

The use of qualitative indicators for presenting process information is found to be an effective way of perceiving information [11]. Trended indicators have the advantage of presenting historic information; they give a cue of "where we are heading" [18]. A grey background is suggested, considering SA, alertness, eyestrain, and fatigue [12]. For larger displays, it can be a challenge to catch attention toward data-driven alarms, which can be masked by other objects. Research into visual perception explains how to use perceptual salience for directing attention in computer graphics [13, 14]. The most important data attributes should be displayed through unique, distinct visual features such as motion, grouping, shapes, and color. Using frames for displaying alarms is challenging, as different-sized objects result in inconsistent visual attention (Fig. 4).


Fig. 4. Left side: good design, consistent visual presentation of two alarms. Right side: inconsistent size of two alarms by using frames

Research into visual perception for computer displays explains how humans have a low capacity in visual memory [14, 16]. This suggests that the new class of displays should present information explicitly, available "in the world" [15], avoiding hiding information in menus or "tabs". There is also a need to support the natural way humans see information in displays, which is done through a dynamic fixation-saccade cycle [13]. Rapid visual perception can be supported by adding multiscale structures, open spaces, and main flowlines [16] (Fig. 5).

Fig. 5. Example of layout supporting fast visual search: multiscale structures, open spaces and main flowlines

2.3 Operator Input Actions

Advice for building effective interactive menus is suggested in the literature [17]; for efficiency, interactive menus should be placed on the display surface where their intended function is naturally mapped, close to related information objects. This will also strengthen the operator's mental model for process changes and help them foresee the expected outcome of input actions. Confirmation of actions can prevent unintentional operations. This suggests using "confirmation" (a two-stage process) where the consequences of actions are high. It should, however, not be used for less safety-oriented actions, as using confirmations increases the workload (Fig. 6).

Fig. 6. Left display: one-stage action. Right display: a safety-oriented two-stage process
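A simple way to express the one-stage versus two-stage distinction in code is sketched below; the function and tag names are hypothetical and do not correspond to any particular control system API.

    def send_command(tag, value, safety_critical=False, confirm=None):
        """confirm is a callable returning True/False, e.g. the result of a dialog."""
        if safety_critical:
            # Two-stage: the action is executed only after an explicit confirmation.
            if confirm is None or not confirm(f"Set {tag} to {value}?"):
                return "cancelled"
        # One-stage path (or confirmed second stage): execute immediately.
        return f"{tag} set to {value}"

    print(send_command("FCV-101 setpoint", 48))                          # one-stage action
    print(send_command("Main steam isolation valve", "close",
                       safety_critical=True, confirm=lambda msg: True))  # two-stage action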

3 The Resulting Design Principles The design principles are presented chronologically from the research questions:
i) The information content in each display should support a high level of SA, covering whole process segments, and support the operator in both top-down and bottom-up information processing. Displays should present process values, target values, automation cues, and alarm information.
ii) Presentation should focus on a high data-ink ratio by reducing the use of frames, ornaments, and redundant "ink" and by reserving signal colors for high-priority information. Alarm information should be consistent in size and visually unique; similarly sized alarm objects are better than frames. Trended qualitative indicators on a light grey background are suitable. Information should be explicitly available, avoiding loading visual memory. Rapid search is supported through multiscale structures, open space, and main process lines.
iii) Confirmation of operator actions should be used only where the consequences of wrong actions are high.
iv) Rapid access to other displays should be available through an explicit menu and an end-of-line "jump". Safety-oriented actions are two-stage; others are one-stage actions.
The design principles were used to develop four new 30"-size displays to monitor and control a nuclear simulator [19]. One of these displays (Fig. 7) is the Steam Generator (SGN) display. The SGN display shows the three Steam Generators (SGs) and all of their support systems; it visualizes levels (narrow and wide), pressures, feed flow, and steam flow. Within the steam generators there are trends that integrate narrow-range level and pressure, as well as a steam flow/feed flow balance indication. The other systems found in the SGN display are: Main Steam, Main Feed, Auxiliary Feed, Blowdown, Sampling, and Reactor Coolant Pumps. At the top of the display are the indicators associated with each SG. The left-hand side shows the level indicators and the right-hand side shows the steam line bi-stables for each steam generator main steam line. These bi-stable indications are backlit when a tripped condition exists.


Fig. 7. Left (dark) side: menu for input actions and navigation. Top: individual steam generator level and pressure indicators. Center: steam generators and support systems

On the far left-hand side is the control and navigation area. Here the operator can perform input actions, such as changing settings for valves and pumps, and navigate to other displays.

4 Discussion and Further Work Was it necessary to develop design principles for a new class of larger personal operator displays, or could we just use the advice from industrial guidelines for normal smaller-sized operator displays? We will argue that the larger size has both possibilities and challenges that must be dealt with in other ways than for smaller displays. The use of SA as a framework was useful for deciding what to display. We found, however, that the larger information space could lead to higher visual complexity. From this, the concept of a high data-ink ratio was helpful for deciding how to present information. While prototyping new displays for a nuclear simulator, we experienced how reducing frames and clutter improved the readability of the displays. The suggestion of using only explicit information presentation has both positive and negative effects. On the negative side, we found how the display might be more cluttered and visually complex. On the positive side, the display information does not burden visual memory. We need feedback from user tests before any definitive conclusions are made. Using similarly sized graphical objects for alarm presentation instead of alarm frames around process objects supports a consistent presentation. We need feedback from operators to evaluate the visibility. Regarding input actions, we are uncertain whether the one-stage input concept might lead to unwanted "slips".


The proposed design principles were found useful for developing consistent displays across several subsystems. We also found them relevant for designing navigation structures to other displays. The principles were less valuable for selecting which data to include in the displays; this continues to require detailed plant system knowledge. Further work will incorporate user tests of the displays, exploring the usefulness of the concept and assessing the effects on operator performance. Based on these findings, the design principles and operational displays will be revised and improved.

References
1. Hollifield, B.: The High Performance HMI: Process Graphics to Maximize Operator Effectiveness, November 2019. www.isa.org
2. Woods, D.D.: Toward a theoretical base for representation design in the computer medium: ecological perception and aiding human cognition. In: Flach, J., Hancock, P., Caird, J., Vicente, K. (eds.) Global Perspectives on the Ecology of Human-Machine Systems, vol. 1, pp. 157–188. Lawrence Erlbaum Associates, Hillsdale (1995)
3. Vicente, K.J., Roth, E.M., Mumaw, R.J.: How do operators monitor a complex, dynamic work domain? The impact of control room technology. Int. J. Hum. Comput. Stud. 54(6), 831–856 (2001). https://doi.org/10.1006/ijhc.2001.0463
4. Salo, L., Laarni, J., Savioja, P.: Operator experiences on working in screen-based control rooms. In: Proceedings of NPIC&HMIT, Albuquerque, USA (2006)
5. O'Hara, J.M., Brown, W.S., Lewis, P.M., Persensky, J.J.: NUREG-0700: Human-System Interface Design Review Guidelines, Rev. 2, pp. 309, 9. U.S. Nuclear Regulatory Commission, Washington, USA (2002)
6. IEC 61772: Nuclear power plants – Control rooms – Application of visual display units (VDUs), Edition 2.0, p. 31. International Electrotechnical Commission, Geneva, Switzerland (2009)
7. Endsley, M.R.: Situation awareness. In: Lee, J.D., Kirlik, A. (eds.) The Oxford Handbook of Cognitive Engineering, pp. 88, 90, 97. Springer, New York (2013)
8. Tufte, E.: The Visual Display of Quantitative Information, 2nd edn, pp. 91–105. Graphics Press, Cheshire (2001)
9. Gillan, D.J., Sorensen, D.: Minimalism and the syntax of graphs: II. Effects of graph backgrounds on visual search. In: Human Factors and Ergonomics Society 53rd Annual Meeting, pp. 1096–1100 (2009). https://doi.org/10.1177/154193120905301711
10. Van Laar, D., Deshe, O.: Evaluation of a visual layering methodology for colour coding control room displays. Appl. Ergon. 33 (2002). https://doi.org/10.1016/s0003-6870(01)00048-5
11. Tharanathan, A., Bullemer, P., Laberge, J., Reising, D.V., Mclain, R.: Impact of functional and schematic overview displays on console operators' situation awareness. J. Cogn. Eng. Decis. Mak. 6(2) (2012). https://doi.org/10.1177/1555343412440694
12. Bullemer, P., Reising, D.V., Laberge, J.: Why Gray Backgrounds for DCS Operating Displays? The Human Factors Rationale for an ASM Consortium Recommended Practice. ASM sponsored paper (2011). http://www.asmconsortium.net/. Accessed 29 May 2012
13. Ware, C.: Information Visualization: Perception for Design, 3rd edn, pp. 140–141, 157–159. Elsevier/Morgan Kaufmann, USA (2013)
14. Healey, C.G., Enns, J.T.: Attention and visual memory in visualization and computer graphics. IEEE Trans. Visual. Comput. Graph. 18(7), 1170–1188 (2012). https://doi.org/10.1109/tvcg.2011.127
15. Norman, D.A.: The Design of Everyday Things, p. 54. Basic Books, New York (2002)


16. Ware, C.: Visual Thinking for Design, vol. 40, pp. 10–11. Elsevier/Morgan Kaufmann, USA (2008)
17. Lidwell, W., Holden, K., Butler, J.: Universal Principles of Design: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions, and Teach through Design, pp. 22, 54, 152, 154. Rockport, Beverly, Massachusetts (2010)
18. Braseth, A.O.: Information-rich design: a concept for large-screen display graphics. Ph.D. thesis, Norwegian University of Science and Technology, Trondheim, Norway, pp. 9–10 (2015). https://brage.bibsys.no/xmlui/handle/11250/292990
19. Nystad, E., Kaarstad, M., McDonald, R.: Decision-making with degraded HSI process information. OECD Halden Reactor Project report, HWR-1242 (2020)

Tablet-Based Functionalities to Support Control Room Operators When Process Information Becomes Unreliable or After Control Room Abandonment

Espen Nystad, Magnhild Kaarstad, Christer Nihlwing, and Robert McDonald

IFE and OECD Halden Reactor Project, P.O. Box 173, 1751 Halden, Norway
{Espen.Nystad,Magnhild.Kaarstad,Christer.Nihlwing,Robert.McDonald}@ife.no

Abstract. In some emergency situations, control room indications may become lost or unreliable, or the operators may have to abandon the control room. Although operators in such situations will have the ability to safely shut down the plant, they may lack the information for broader diagnosis and understanding of what has happened. A study has been performed to test what kind of portable supporting functionalities would be useful for operators to have when information in the control room is incomplete, or when they have to abandon the control room. The results showed that access to functions with historic process information could be helpful in diagnosing the situation, in particular for seeing details that may have been overlooked. A dynamic overview display with historic information was rated as the most useful function.

Keywords: Operator support systems · Usability · Simulator study · Situation awareness · Information presentation

1 Introduction

The main control room of a nuclear power plant (NPP) is designed to provide all the information operators need to handle normal operation and disturbances. However, there may be certain situations where some or all information becomes unreliable or unavailable. Sensors or other equipment may fail due to equipment aging or extreme conditions [1, 2], fire [3], loss of power [4], or even cyberattacks [5]. Control room crews may be forced to leave the control room due to, e.g., smoke, gas or radiation, or a threat from terrorists attempting to enter the control room. In case of control room abandonment, NPPs are required to have an emergency control room with capabilities for remote shutdown [6]. The shutdown panel has instrumentation and controls for monitoring and operating the most critical plant functions and systems, such as reactor cooldown and decay heat removal [7]. At all times, the control room crew needs to assess the situation and keep an updated situation awareness (SA). Simply put, SA refers to an operator's knowledge of what is going on in the area that he or she is monitoring [8]. A more formal definition


describes SA as 1) a perception of relevant elements in the environment, 2) understanding what those elements mean and 3) a projection of status into the near future [8]. The NPP operators need to have a status overview of relevant process parameters and plant systems. They need to understand how the current status (e.g. a lack of cooling capability) impacts the safety of the plant in the present and how a situation may develop in the near future. Operator workload tends to increase during an incident, and this may lead to reduced situation awareness [8, 9]. As operators are focused on some parts of the problem they may not be able to follow the developments in other parts of the process. Design of information displays and work environment may also impact situation awareness by determining what information is available in the system and how it is presented [8, 10]. Information may for instance be presented in a way that drives the operator to focus on routine performance rather than more global awareness [9]. During an incident, information should be presented in a way that quickly gives the operators overview of plant status and plant parameters. If information in the control room is lost, then mobile tools with support functionalities including recent process information may be helpful. This paper presents a user study of such functionalities.

2 Study Description

An empirical simulator study was conducted to provide input on the usefulness and usability of a set of tablet-based functionalities intended to support understanding and handling of situations where process information is lost or becomes unreliable, or situations where operators have to abandon the control room. The participants were four crews of licensed NPP operators. Each crew consisted of three operators: turbine operator (TO), reactor operator (RO) and shift supervisor (SS). The operators first received training to familiarize themselves with the simulator and with the tablet functionalities. Then the crews worked through four short accident scenarios in an NPP simulator. At the end of each scenario the screens were blacked out and the operators were interviewed about their understanding of the events in the scenario and about their strategy for taking the plant to a safe state. They also filled in questionnaires to rate their SA. For two of the scenarios the operators were allowed to use the tablets when answering the questions. This made it possible to compare the operators' SA with and without the tablet functionalities. After the last scenario the operators were given

Fig. 1. Setup of operator desks and displays in the simulator facility.


an additional questionnaire and interview about the usability and usefulness of the functionalities. The sequence of scenarios, and in which scenarios the operators were allowed to use the tablet functionalities, was balanced out across crews.

2.1 Simulation Environment

The study took place in a computer-based full-scale control room simulator facility (Halden Man-Machine Laboratory - HAMMLAB). Figure 1 shows a setup of the operator desks, operator displays and overview display. The simulated NPP process was a generic Westinghouse three-loop pressurized water reactor, which was similar (but not quite identical) to the participating operators' home plant.

2.2 Scenarios

Four short scenarios were used to test the tool functionalities. The scenarios involved various plant malfunctions that required tripping the reactor and following emergency procedures to handle the situation. Each scenario lasted approximately 10 min and ended with all screens in the control room being blacked out.

2.3 Tablet-Based Functionalities Tested in the Study

The tablet-based functionalities tested in this study were implemented as an app on a Microsoft Surface tablet with a 12.3 in. display. They were designed to be connected to continuously updated process information. The idea is that if operators need to abandon the control room, they may take the tablet with them and have historical data available up to the point when they had to leave the main control room. In our study, process data were collected on the tablets from the simulator via a network cable. At the end of the scenarios, when the screens went black, the cable was disconnected and a tablet was placed on each operator's desk. New process data were no longer collected after this time, but historical process data were available in the tablet functionalities. The functionalities were intended to provide operators with an understanding of the situation and to help in developing and implementing a strategy to bring the plant to a safe state. The functionalities are described below.

Alarm List. All alarms that occurred during the scenario were presented time-stamped and in chronological order in a list (Fig. 2, left-hand side).

Parameter Trends. Four parameter trend graphs were displayed in different colors together with their tag id, parameter name, max and min value, and scale max and min (see Fig. 2, right-hand side). The parameters to show as trends could be specified by the user and were searchable by parameter name. For synchronization of alarms and parameter trends, a vertical line in the trend window represented the point in time when the selected alarm in the alarm list occurred.

Overview Display. As seen in Fig. 3, this view showed the development in the overview display over the last minutes. Moving the slider at the bottom would browse through the history of displays to show how the status in the plant had changed over time. The pictures were shown in one-minute increments, and for the last minute of data all changes in the overview display were shown.

Alarm Procedures. Alarm response procedures (in PDF format) could be displayed by selecting the related alarm in the alarm list. These procedures provided information about what to do in response to an alarm.

Operating Procedures. A set of operating procedures in PDF format could also be accessed on the tablet. These could be procedures for the control room operators or procedures for local actions. In case of abandoning the control room, the operators may not otherwise have access to all relevant paper procedures.
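To make the synchronization between the alarm list and the trend window concrete, the sketch below shows one way such a tablet app could buffer process samples and alarms and mark the selected alarm's timestamp in a trend plot. It is a minimal illustration only; the data structures, column names and the use of pandas/matplotlib are our assumptions, not details taken from the study.

```python
# Minimal sketch (not the actual tablet app): buffer trend samples and alarms,
# then draw a vertical marker at the selected alarm's timestamp.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical buffered data collected while the network cable was connected.
trend = pd.DataFrame({
    "time_s": range(0, 600, 10),                       # 10 min of samples
    "SG1_level_pct": [50 + 0.01 * t for t in range(0, 600, 10)],
})
alarms = pd.DataFrame({
    "time_s": [245, 380],
    "text": ["SG1 LEVEL LOW", "FEEDWATER PUMP TRIP"],
})

selected = alarms.iloc[1]                              # operator taps an alarm in the list

fig, ax = plt.subplots()
ax.plot(trend["time_s"], trend["SG1_level_pct"], label="SG1 level (%)")
ax.axvline(selected["time_s"], linestyle="--", label=selected["text"])  # sync marker
ax.set_xlabel("Scenario time (s)")
ax.set_ylabel("Narrow range level (%)")
ax.legend()
plt.show()
```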

Fig. 2. Alarm list and trends screen

Fig. 3. Overview display screen

2.4 Data Collection

During the interview and questionnaires, the crew members were allowed to discuss among themselves. When the operators had access to the tablet functionalities, they were allowed to use the tablets to look up relevant information when responding to the questions.

Interviews. Immediately after the end of each scenario the operators were asked to describe the current state of the plant and the events that had occurred in the scenario. Then they described the strategy they would use to bring the plant to a safe state.


Situation Awareness. The operators were asked to rate the importance of eight process parameters for each scenario, on a 6-point scale from not important to very important. This was a variant of the Important Parameter Assessment Questionnaire (IPAQ) [11]. The operators were also asked to indicate the status of the same eight process parameters at the end of the scenario (e.g. decreasing, stable, increasing). This was a simplified version of the Situation Awareness Control Room Inventory (SACRI) [12] and was a measure of the operators' perception of elements in the environment, i.e. the first level of Endsley's SA definition [8].

Use and Usability of Functionalities. Operators' use of the functionalities was logged on the tablets. A questionnaire was used to assess the operators' rating of each functionality's usefulness in providing an overview of plant status; usefulness when generating a strategy to bring the plant to a safe state; and how easy it was to find information. Seven-point response scales were used.

3 Results

3.1 Interviews

When the crews described the scenario events with the help of the process information on the tablets, some of them reported that there were details regarding plant status that they had not been aware of at the end of the scenario. The information on the tablets allowed them to gain knowledge about these details; e.g. one crew missed that they had lost an electrical bus, and noticed this in the alarm list on the tablet. With the tablets they could check and confirm suspicions, e.g. that loss of feed flow was the initiating event, or whether there had been a radiation alarm. They also used the tablet functionalities to check the sequence of events. In the non-tablet interviews, there tended to be more uncertainty about events and causes. The tablet functionalities were used less when the crews described their strategy to bring the plant to a safe state. This is likely because at that point they had already gained an overview of the plant status, and therefore did not need to refer to the functionalities and process information once more.

3.2 Situation Awareness

The use of the tablet functionalities did not affect the operators' answers on the parameter importance (IPAQ) questions. The level of correct responses was identical for the tablet and no-tablet scenarios (78%). For the questions on parameter status (SACRI), the mean percentage of correct responses was slightly higher when the tablet functionalities were used (92% vs. 83%), but the difference was not significant (F(1, 3) = 2.45, p = .22).
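As an illustration of how such a within-crew comparison can be computed, the sketch below runs a one-way repeated-measures ANOVA on per-crew percentages of correct SACRI responses. The numbers and column names are invented placeholders chosen only to mirror the reported design (four crews, tablet vs. no-tablet); they are not the study's data.

```python
# Illustrative repeated-measures ANOVA (four crews, two conditions).
# Placeholder data only; the real per-crew scores were not published here.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "crew":        ["A", "A", "B", "B", "C", "C", "D", "D"],
    "condition":   ["tablet", "no_tablet"] * 4,
    "pct_correct": [95, 85, 90, 80, 92, 88, 91, 79],   # invented values
})

result = AnovaRM(data, depvar="pct_correct", subject="crew",
                 within=["condition"]).fit()
print(result)   # yields an F(1, 3) statistic comparable to the one reported
```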

3.3 Use and Usability of Functionalities

From the tablet logs and observations, it was clear that the functionalities that were used most were the alarm list, the trend window and the overview display. Alarm descriptions and procedures were used very rarely or not at all. The usability ratings (Fig. 4) showed that the overview display functionality received the highest rating for usefulness both for getting an overview of status and for forming a strategy. It was also rated highest for ease of finding information. The alarm list was rated next highest on all questions, tied with trends for the strategy formation question.

[Figure 4 comprises three bar charts ("Usefulness for status overview", "Usefulness for strategy formation" and "Ease of finding information"), each rating the trends, alarm list, alarm description, procedures and overview display functionalities on a 0-7 scale.]

Fig. 4. Operators' rating of usability of the tablet functionalities.

4 Discussion

Although not statistically significant, a slight difference was found in situation awareness (SACRI measure) between the tablet scenarios and no-tablet scenarios. This tendency was supported by the difference in how operators described the events in the scenarios with and without support from the tablet functionalities. Their descriptions showed that their overview of plant parameters at the end of the scenarios did have some gaps, and the tablet functionalities did help the operators get an overview of specific details they had missed during the scenario. This indicates that in a complex incident situation, operators do not always have a full overview of the current process situation or the events that have happened. If control room indications are lost, or the crew has to abandon the control room, it can be helpful to have access to recent historical process information to improve the operators' overview of the causes, events and specific details of an incident. When having to shut down the plant from a remote shutdown panel, the operators have procedures based on the reason for leaving the control room, and all the indications they need to reach cold shutdown are located on the panel. However, there may be specific details of plant status that will have an impact on personnel safety or the integrity of plant components. For example, there may be excessive hydrogen in a generator that could pose a risk for personnel, or conditions that may damage a turbine if not mitigated. Information about such risks may not be available on the remote shutdown panel, but should be taken into consideration when shutting down the plant. This kind of process information could be made available in portable tools with functionalities for process information.

The overview display was the tablet functionality that was rated highest and was among the most used functionalities. One reason is likely the way the information was presented. The operators reported that it was easy to get an overview of the status of the main plant systems, and easy to navigate the time slider to find historic information around specific events and times. The alarm list presented a complete overview of recent process alarms in a way that was easy to understand, but with a large number of alarms it may take some time to browse to find what one is looking for. The parameter trends also provided a lot of information about process parameters that could be taken in at a glance. Some steps were, however, needed in order to select the relevant parameters to trend, and the search function required that the user knew the specific parameter name. These issues point to potential improvements of the trend functionality. Another suggested improvement was to integrate parameter trends into the overview display functionality. The operators suggested that the tested functionalities could also be useful in other contexts, specifically for debriefing after operator training, for incident review, or for getting relief staff up to speed during an incident.

Acknowledgments. This work has been sponsored by and performed within the research program of the OECD Halden Reactor Project. We would like to thank the participating operators and all who assisted in planning and conducting the study.

References
1. O'Hara, J., Gunther, B., Martinez-Guridi, G., Xing, J., Barnes, V.: The effects of degraded digital instrumentation and control systems on human-system interfaces and operator performance. In: NPIC&HMIT (2010)
2. Hashemian, H.M.: On-line monitoring applications in nuclear power plants. Doctoral dissertation, Chalmers University of Technology (2009)
3. NRC: Recommendations related to the Browns Ferry fire. U.S. Nuclear Regulatory Commission, Springfield, Virginia, USA (1976)
4. Kim, M.C.: Insights on accident information and system operations during Fukushima events. Sci. Technol. Nucl. Installations 2014, 12 (2014). Article ID 123240
5. van Dine, A., Assante, M., Stoutland, P.: Outpacing Cyber Threats: Priorities for Cybersecurity at Nuclear Facilities. Nuclear Threat Initiative (2016)
6. U.S. DOE: Code of Federal Regulations, Title 10, Part 50: Domestic licensing of production and utilization facilities. U.S. Department of Energy, Washington, DC (2018)


7. NRC: Standard review plan. NUREG-0800. U.S. Nuclear Regulatory Commission, Springfield, Virginia, USA (2007)
8. Endsley, M.R.: Toward a theory of situation awareness in dynamic systems. Hum. Factors 37, 32–64 (1995)
9. Wickens, C.D.: Situation awareness and workload in aviation. Curr. Dir. Psychol. Sci. 11(4), 128–133 (2002)
10. van Dorn, E., Horvath, I., Rusak, Z.: A systematic approach to addressing the influence of man-machine interaction on situation awareness. In: Horvath, I., Rusak, Z. (eds.), pp. 109–120 (2014)
11. Andresen, G., Svengren, H., Heimdal, J.O., Nilsen, S., Hulsund, J.-E., Bisio, R., Debroise, X.: Procedure automation: the effect of automated procedure execution on situation awareness and human performance. HWR-759. OECD Halden Reactor Project, Halden (2004)
12. Hogg, D.N., Follesø, K., Torralba, B., Volden, F.S.: Measurement of the operator's situation awareness for use within process control research: four methodological studies. HWR-377. OECD Halden Reactor Project, Halden (1994)

Simulation Technologies for Integrated Energy Systems Engineering and Operations

Roger Lew1, Thomas Ulrich2, and Ronald Boring2

1 University of Idaho, Moscow, ID, USA
[email protected]
2 Idaho National Laboratory, Idaho Falls, ID, USA
{thomas.ulrich,ronald.boring}@inl.gov

Abstract. Idaho National Laboratory (INL) has recently received funding to explore integrated energy systems (IES), such as joint electricity-hydrogen production, and the implications for control room operations of nuclear power plants (NPPs). Work is needed to identify how such a system would integrate with a nuclear power plant and how reactor operators would safely and reliably manage the integrated system during normal and abnormal conditions. INL is making use of simulation technologies to address both of these questions. The Human Systems Simulation Laboratory (HSSL) is adapting GSE's Generic Pressurized Water Reactor (gPWR), a full-scale full-scope simulator, to model the extraction system of a joint electricity-hydrogen production NPP. Here we design and evaluate a preliminary human system interface (HSI) for the steam extraction system.

Keywords: Human factors · Human-systems integration · Nuclear power

1 Introduction

Nuclear power in the United States has a net capacity of over 100,000 megawatts, and in 2016 it accounted for almost 60% of the emission-free energy generation in the United States. Renewable energy sources, like solar and wind, are gaining traction, but these sources are intermittent and other power generators are needed during periods of low or absent production. Energy Information Administration (EIA) data show that large-scale solar plants have an average capacity factor of 28% [1]. The current fleet of nuclear power plants was designed to optimize uptime and has capacity factors of greater than 90%. Typically, reactors are brought online and operate at close to 100% electrical load until they are shut down for refueling. Current light water reactors (LWRs) are not designed to change load on a daily basis, and electricity being fed into the grid must be used instantaneously. Even if NPPs could load-follow (e.g. reduce load by 30% on a daily basis), the operating expense would not decrease, because the bulk of expenses come from fixed personnel operating costs rather than the relatively low nuclear fuel costs. Large-scale energy storage is needed to allow reactors to utilize excess energy during the day when solar production is high.

One possible solution is to utilize integrated energy systems (IES) to use excess generating capacity, in the form of excess steam and electricity, to generate hydrogen. The hydrogen could be stored or transported and ultimately used for transportation, heating, industrial processes like fertilizer production, or even electricity production. Currently, 95% of hydrogen production relies on fossil fuels, like natural gas, and has a large carbon footprint [2]. Coupling hydrogen production to nuclear power would have no carbon emissions.

Within the U.S. there is a single precedent for a nuclear plant providing offsite steam. Midland Nuclear Plants, Units 1 and 2, were planned to provide steam to an offsite industrial facility. The project was licensed by the NRC (1982) and construction was 85% complete when the project was cancelled due to construction issues and cost overruns. The site was eventually retrofitted into a natural gas co-generation plant. However, with the increasing adoption of renewables, renewed interest in a joint electricity-hydrogen NPP has emerged. Research is needed to identify how such a system would integrate with an NPP and how reactor operators would safely and reliably manage the integrated system during normal and abnormal conditions.

Idaho National Laboratory (INL) is making use of simulation technologies to address both of these questions. The Human Systems Simulation Laboratory (HSSL) at INL features a fully digital simulation environment with multiple simulation packages ready to support research activities [3]. Here we discuss efforts to design and conceptualize the operations of a large-scale IES comprised of GSE's Generic Pressurized Water Reactor (gPWR) and a hydrogen production plant. gPWR is a full-scale, full-scope simulator made available for research purposes by GSE Systems. The gPWR simulator provides a high-fidelity model of all the components and sub-systems found within an NPP. This allows system designs and potential faults to be examined and simulated. The end goal of this project, funded by the U.S. Department of Energy's (DOE) Light Water Reactor Sustainability (LWRS) Program, is to build a thermo-hydraulics model of the energy transfer system to the hydrogen production plant, as well as the control systems and interfaces for the energy transfer. The plant model and interfaces will then be installed in the HSSL, where licensed reactor operators will be used to validate the IES operations and interface design. Previous work has detailed the development and evaluation of digital human system interfaces (HSIs) for nuclear power plant main control room operations [3, 4].

2 System Design

The system design for linking a hydrogen generation plant to a 1000 MW reactor for load-following is still preliminary and evolving. The current concept places a hydrogen generation plant adjacent to existing nuclear power plants; the current thinking is that the hydrogen plant would be around 1–2 km from the NPP to maximize safety. Energy from the reactor would be diverted from the main steam header through a closed two-phase water transport loop (see Fig. 1). A main steam extraction system would be added to the turbine hall of an NPP. The steam extraction system comprises a pressure control system to regulate the steam being diverted away from the main turbine. The steam would then pass through a set of steam traps to dry the steam and then through two heat exchangers arranged in series. The first would condense the steam to water and the second would preheat incoming feedwater from the transport loop. A drain tank would be placed after the condensing heat exchanger to ensure that the subsequent coolant flow is fully condensed. A drain tank level controller would make sure the drain tank does not run dry.
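As a rough illustration of the sizing logic behind such an extraction system, the sketch below estimates the thermal power handed over to the transport loop from an assumed extraction steam flow, using a simple steady-state energy balance (Q = m_dot * delta_h). All numbers are illustrative placeholders, not design values from the project.

```python
# Back-of-the-envelope energy balance for the steam extraction system.
# Placeholder values only; the actual design parameters were not published here.

m_dot = 50.0          # extracted steam mass flow [kg/s] (assumed)
h_steam = 2770.0      # enthalpy of dry main steam [kJ/kg] (assumed, ~6-7 MPa saturated)
h_condensate = 1200.0 # enthalpy of condensate leaving the condensing HX [kJ/kg] (assumed)

q_transfer_mw = m_dot * (h_steam - h_condensate) / 1000.0  # kJ/s -> MW thermal

# Fraction of an assumed ~3000 MWt core output diverted to hydrogen production
fraction = q_transfer_mw / 3000.0
print(f"Extracted thermal power: {q_transfer_mw:.0f} MWt ({fraction:.1%} of core power)")
```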

Fig. 1. Preliminary piping & instrumentation diagram of the extraction system.

This tertiary transport loop provides another level of isolation from potential radioactive contaminants. The separation between the secondary system and the transport loop would also help to maintain water chemistry within the secondary loop. The chemistry is very closely controlled to maintain proper function of the steam generators. After heat is removed from the main steam it would return to the main condenser, ideally at close to the normal condensate temperature to avoid back-pressure in the condenser that would negatively influence the efficiency of the turbine. The hydrogen production requires not only heat, in the form of steam, but also large quantities of electricity. One thought would be to provide electricity to the hydrogen production plant from a dedicated bus at the NPP. This would allow a plant to sell electricity directly to the hydrogen plant thereby bypassing commercial grid rates. However, the downside to such an arrangement would be more operational complexity when hydrogen plant trips occur. If the hydrogen plant is directly connected to the power plant then the power plant has to deal with the full brunt of the dropped load. If the hydrogen plant is on the grid the impact is distributed amongst the power generators on the grid and can be easier to manage.


A model of this design is being developed and evaluated for feasibility and is likely to change, but this description is sufficient to begin thinking about the functional requirements and tasks the operators will need to perform.

3 Concept of Operations

The nuclear power plant and hydrogen production would be integrated but controlled separately. Of critical importance from a nuclear power operations and regulatory perspective is that the hydrogen production does not control the reactivity of the NPP; the reactivity needs to be controlled solely by licensed reactor operators. For this reason, operators would control the transition from full power generation to hybrid energy production. The hydrogen plant would place an order for steam and the NPP would carry out the order. The transition to hydrogen production would require first preheating the transport loop and the main steam extraction system. Then the turbine load control would be set to decrease load at a specified rate. The pressure controller on the steam extraction system would then open to maintain main steam header pressure. As the turbine and generator load decreases, more steam will be diverted through the extraction system and ultimately through the transport loop to the hydrogen production system. Conversely, transitioning back to full power generation could be carried out by raising turbine and generator load and having the pressure controller close to compensate. Licensed operators suggested that in the event of the hydrogen plant tripping, the steam could be diverted directly to the condenser. Operationally it would not matter much whether the steam is used for hydrogen production or not, because this is similar to the steam dump evolutions that are already performed. The gPWR reactor is designed to dump up to 7% to the condenser, though that magnitude would require initiating a reactivity reduction, and according to operators the baffles in the condenser could be damaged if too much steam is dumped directly to the condenser. Operators reported that their respective NPP can handle a load rejection event of up to 80% of full power.
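The coordination described above (lowering turbine load at a set rate while a pressure controller opens the extraction path to hold main steam header pressure) can be sketched as a simple control loop. The snippet below is a conceptual toy simulation only; the gains, ramp rates and pressure values are assumptions for illustration and do not come from the gPWR model.

```python
# Conceptual sketch of the load-following transition: turbine load ramps down
# while a PI pressure controller opens the extraction valve to hold header pressure.
# All parameters are illustrative assumptions, not values from the gPWR model.

SETPOINT_MPA = 6.9          # main steam header pressure setpoint (assumed)
KP, KI = 0.8, 0.05          # PI gains (assumed)
DT = 1.0                    # time step [s]

turbine_load = 1.00         # fraction of full load
extraction_valve = 0.0      # 0 = closed, 1 = fully open
pressure = SETPOINT_MPA
integral = 0.0

for t in range(600):
    # Operator-initiated load reduction: ~10% over 10 minutes (assumed ramp rate).
    turbine_load = max(0.90, turbine_load - 0.0001667 * DT)

    # Crude plant response: less turbine demand raises header pressure unless
    # the extraction valve bleeds the surplus steam away (toy model).
    surplus = (1.0 - turbine_load) - 0.10 * extraction_valve
    pressure = SETPOINT_MPA + 5.0 * surplus

    # PI controller acting on pressure error opens/closes the extraction valve.
    error = pressure - SETPOINT_MPA
    integral += error * DT
    extraction_valve = min(1.0, max(0.0, KP * error + KI * integral))

print(f"Final valve position: {extraction_valve:.2f}, header pressure: {pressure:.2f} MPa")
```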

4 Preliminary Human Systems Interface Design

After conducting a preliminary functional requirements analysis (FRA) and functional allocation analysis (FAA), a preliminary HSI was designed for the extraction system. The design was intended to be retrofitted into the hybrid (analog and digital) control room gPWR installed in the HSSL to allow for usability and formative validation exercises scheduled for a future date [3, 4]. The interface design builds on general design findings from previous efforts designing Computerized Operator Support Systems for NPPs [5–8]. The layout is comprised of two monitors (see Fig. 2). Each monitor would be connected to a separate thin client on separate power buses and network buses to provide redundancy against hardware failures and onsite power failures. One monitor is used to display an overview of the extraction system and the second monitor is used for the digital controls that the operator would interact with.


Fig. 2. Top: Overview screen for extraction system. Bottom: Extraction system controls.

The overview screen has a row of alarm tiles across the top of the display. Below the alarm tiles is a row of primary system indicators that lend themselves to checking system state "at a glance". These primary indicators are large and intended to be legible from across the control room. In the center of the display is a piping and instrumentation diagram (P&ID) representation of the extraction system. The HSI currently displays the engineering P&ID drawing, but a simplified P&ID is being developed for the interface that would eliminate manual components, such as isolation valves, that would not have sensing instrumentation. The primary purpose of the P&ID is to display valve alignments so operators are aware of extraction flow. The valve alignments and flows would change depending on whether the system is offline, warming, online, or in bypass. To the left and right of the P&ID are pressure, temperature, and flow indicators arranged in tables categorized by sub-systems (loops) in the extraction system.

The control system for the extraction loop aims to minimize the operational complexity in the system, minimize the number of controls and indications needed by operators, and maximize automation. The system would, however, be operated independently of other reactor, turbine and generator controls. Operation of the extraction system would require procedures to coordinate dropping turbine load and monitoring reactivity while simultaneously increasing extracted steam. Likewise, shutting down the extraction and handling abnormal situations would require additional procedures. With these considerations in mind, the resulting control display is simplified to two buttons for warming and starting load following, a button to trip the extraction system, and some additional controls for manually operating the two controllers in the system, isolating various components of the system, and managing the breaker for electricity to the hydrogen production plant.
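One way to think about the valve-alignment behavior described above is as a small state machine that maps each operating mode to a valve lineup and constrains the allowed transitions. The sketch below is purely illustrative; the mode names follow the text, but the valve names, lineups and transition rules are our assumptions, not the actual gPWR extraction system logic.

```python
# Illustrative mode/valve-alignment mapping for the extraction system HSI.
# Valve names, lineups and allowed transitions are assumptions for illustration.
from enum import Enum, auto

class Mode(Enum):
    OFFLINE = auto()
    WARMING = auto()
    ONLINE = auto()
    BYPASS = auto()

# Hypothetical lineup: which valves are open in each mode.
LINEUP = {
    Mode.OFFLINE: {"extraction_isolation": False, "warmup_drain": False, "bypass_to_condenser": False},
    Mode.WARMING: {"extraction_isolation": True,  "warmup_drain": True,  "bypass_to_condenser": False},
    Mode.ONLINE:  {"extraction_isolation": True,  "warmup_drain": False, "bypass_to_condenser": False},
    Mode.BYPASS:  {"extraction_isolation": True,  "warmup_drain": False, "bypass_to_condenser": True},
}

# Hypothetical allowed transitions (a trip from ONLINE diverts steam via BYPASS).
ALLOWED = {
    Mode.OFFLINE: {Mode.WARMING},
    Mode.WARMING: {Mode.ONLINE, Mode.OFFLINE},
    Mode.ONLINE:  {Mode.BYPASS, Mode.WARMING},
    Mode.BYPASS:  {Mode.OFFLINE, Mode.ONLINE},
}

def request_mode(current: Mode, target: Mode) -> dict:
    """Return the new valve lineup if the transition is allowed, else raise."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Transition {current.name} -> {target.name} not permitted")
    return LINEUP[target]

print(request_mode(Mode.OFFLINE, Mode.WARMING))  # lineup for warming the loop
```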

5 Conclusions and Future Work

The preliminary screen layouts will be evaluated by licensed operators familiar with the design of the steam extraction system. The steam extraction system is currently being modeled in gPWR. The HSI will be implemented in INL's Advanced Nuclear Interface Modeling Environment (ANIME) and integrated with gPWR. Once this has been accomplished, normal and abnormal scenarios will be developed and NPP crews will be brought to the HSSL to conduct verification and validation exercises.

Disclaimer. This work of authorship was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately-owned rights. Idaho National Laboratory is a multiprogram laboratory operated by Battelle Energy Alliance LLC for the United States Department of Energy.

References
1. Andrews, R.: Solar PV capacity factors in the US – the EIA data. In: Energy Matters (2016)
2. Hydrogen Europe: Hydrogen in Industry (2017). https://hydrogeneurope.eu/hydrogenindustry
3. Boring, R., Agarwal, V., Fitzgerald, K., Hugo, J., Hallbert, B.: Digital Full-Scope Simulation of a Conventional Nuclear Power Plant Control Room, Phase 2: Installation of a Reconfigurable Simulator to Support Nuclear Plant Sustainability. INL/EXT-13-28432. Idaho National Laboratory, Idaho Falls (2013)
4. Boring, R.L.: Human factors design, verification, and validation for two types of control room upgrades at a nuclear power plant. In: Proceedings of the Human Factors and Ergonomics Society 58th Annual Meeting, pp. 2295–2299 (2014)


5. Boring, R.L., Joe, J.C.: Baseline evaluations to support control room modernization at nuclear power plants. In: Proceedings of ANS NPIC & HMIT, pp. 911–922 (2015)
6. Boring, R.L., Thomas, K.D., Ulrich, T.A., Lew, R.: Computerized operator support systems to aid decision making in nuclear power plants. Procedia Manufact. 3, 5261–5268 (2015)
7. Lew, R., Ulrich, T.A., Boring, R.L.: Nuclear reactor crew evaluation of a computerized operator support system HMI for chemical and volume control system. In: International Conference on Augmented Cognition (HCII) (2017)
8. Boring, R.L., Lew, R., Ulrich, T.A.: Advanced nuclear interface modeling environment (ANIME): a tool for developing human-computer interfaces for experimental process control systems. In: International Conference on HCI in Business, Government, and Organizations (2017)

Promoting Operational Readiness of Control Room Crews Through Biosignal Measurements

Satu Pakarinen1, Jari Laarni2, Kristian Lukander1, Ville-Pekka Inkilä1, Tomi Passi1, Marja Liinasuo2, and Tuisku-Tuuli Salonen2

1 Finnish Institute of Occupational Health, P.O. Box 40, FI-00032 Työterveyslaitos, Finland
{Satu.Pakarinen,Kristian.Lukander,Ville-Pekka.Inkila,Tomi.Passi}@ttl.fi
2 VTT Technical Research Centre of Finland Ltd., P.O. Box 1000, FI-02044 Espoo, Finland
{Jari.Laarni,Marja.Liinasuo,Tuisku-Tuuli.Salonen}@vtt.fi

Abstract. Challenging operational tasks, such as complex, unexpected incidents and severe accidents, are characterised by increased mental demands on operators, stress-induced deterioration of cognitive capacity and increased time pressure to resolve the situation. This combination can negatively affect the operator crew's performance. This paper describes the progress of a research project that models the stress and workload of 54 nuclear power plant operators during simulated incident and accident scenarios. Here, we demonstrate how an extensive empirical field study with psychophysiological assessments can be successfully performed in a simulator with free movement. We also describe the modelling approach used to examine the relationship between stress, workload and performance, with the moderating effects of operator role and the efficiency of abnormal and emergency operational procedure (OP) use. Even though some observations will be made, the results of the study are, at this point, preliminary.

Keywords: Human factors · Stress · Workload · Cognition · Psychophysiology · Heart rate variability · Skin conductance · Eye tracking · Performance

1 Introduction

Working in complex dynamic environments, and successful management of incident and accident situations in particular, requires a set of diverse cognitive and teamwork skills. In demanding situations, the performance of the operator can, however, become compromised due to stress-induced reductions in cognitive capacity. In nuclear power plants (NPPs), operator work is supported and directed by emergency and abnormal operational procedures (EOPs and AOPs), describing the actions that are needed to safely and efficiently manage a given situation. Effective use of the operational procedures (OPs) can reduce the stress and load of the operator and thus mitigate the effects of stress on performance. In this project, we examine the effects of acute stress on NPP control room operator crew performance during simulator training. Further, we will investigate the potentially moderating effects of the OPs and the operator role. The main contribution of this paper is to demonstrate how an extensive empirical field study with psychophysiological assessments can be successfully performed at an actual workplace, with freely moving participants, and how the data will be analyzed and the results utilized in later phases of the project.

While the purpose of the acute stress response, also known as the fight-or-flight response, is to promote survival by increasing emotional arousal and promoting physical performance [1], this comes at the expense of higher cognitive functions. In a state of acute stress, attentional [2], memory [3] and executive [4] functions become impaired, altering risk evaluation [5] and decision making [6]. Stress has also been shown to affect teamwork, for instance by altering communication patterns [7]. In NPPs the operator activity in deviant situations is supported by the operational procedures (OPs). They provide rules, guidelines and reference values, and describe the actions that are required to safely and effectively manage the plant. Skilful use of the procedures is considered to reduce the cognitive load of the operator by decreasing the requirements for controlled processing, reducing memory load and providing support for decision making. Hence, the use of OPs can protect from the negative consequences of stress. On the other hand, according to some estimates, approximately 70% of the incidents in the nuclear domain have been associated with failures in procedure usage. Some of these failures could be explained by stress-induced alterations in cognition, interfering with the selection and use of the appropriate OPs.

Previously, we have shown that NPP control room operators do experience stress even in simulator conditions, and that their stress increases with the severity of the simulated incident/accident scenario [8]. Moreover, our research suggests that increased levels of stress are associated with longer performance times and poorer information-seeking activities within the scenarios [9]. Possibly, the operators are less supported by the OPs in the information-seeking phase, thus requiring more active cognitive processing and increasing the susceptibility to the negative effects of stress [9]. The aim of this project is to explore the effects of stress on performance. Specifically, our first aim is to quantify the stress and workload of NPP control room operators during different events and procedures within the simulated incident and accident scenarios. For this, both subjective and psychophysiological measures are used. Secondly, we will examine how the role and the responsibilities of the operator affect the stress and workload. Thirdly, we will examine how the operators use the OPs, by means of head cameras, gaze trackers and interviews. The ultimate goal is to model the association between operator stress and workload, cognitive strategies and the use of OPs on operator crew performance.


2 Simulator Experiment

The study was conducted in accordance with the Declaration of Helsinki. The ethical statement was obtained from the ethical board of the Helsinki University Hospital, and the participants signed an informed consent before participating in the experiment.

2.1 Participants

A total of 15 nuclear power plant operator crews took part in the annual simulator training. Of those, 54 operators or operator trainees from 14 operator crews took part in our study. In addition, one operator participated in the study without the wearable sensors and the gaze tracker. Each participating crew (N = 14) included a shift supervisor (SS), a reactor operator (RO) and a turbine operator (TO), and in 13 crews an auxiliary panel operator (APO) was also involved. Two crews also included an auxiliary panel operator trainee, and one crew also involved a former SS (currently a system designer).

2.2 Simulated Scenarios

The study was conducted in a physical full-scale training simulator as part of the power plant’s operators’ annual simulator training. The actual training consisted of two scenarios, interleaved with baseline recordings, questionnaires and interviews (Fig. 1).

Fig. 1. A schematic illustration of the experiment (left) and the biosignal instrumentation (right).

The first scenario simulated the loss of one reactor coolant pump (RCP) and maneuvering the plant to full power production with the remaining five RCPs. The second scenario started out as routine testing that was then interrupted by a sudden, unexpected launch of an emergency shutdown following a control rod drop failure, leading to a launch of chemical shim (boric acid) to bring the reactor to zero power level and a subcritical state. The first scenario is regarded as an incident/unusual event,


whereas the second scenario involves a site area emergency and is regarded as a severe accident.

2.3 Observations and Event Logging

During the experiment, operator trainers and researchers monitored the operator work from an observation room situated at the rear of the simulator, through a one-way mirror and from computer screens. Speech was transmitted to the room via microphones. One researcher recorded the activities with the Noldus Explorer software. In addition, at least two other researchers marked the occurrence of predetermined key events and the actions taken, as well as the role (e.g., SS) of the operator who executed the action or gave the command. After the experiments, the observation logs are checked for consistency and completed with missing information from the video recordings.

2.4 Surveys and Questionnaires

A Priori Estimates of Stress and Workload. Before the simulator course, the three individual operator trainers were asked to evaluate the operators' stress and workload during different parts of the scenarios. The ratings were given on a scale from 0 to 10, with 0 referring to no stress and minimal workload, and 10 to extreme stress and workload.

Background Information. Before the initiation of the first scenario, the operators filled in background questionnaires on their health, body mass index (BMI), medication, substance use, sleep, stress, workload, recovery, and work engagement. After the second scenario the operators filled in a questionnaire on their education, work experience, working hours, and major life events during the preceding six months.

A Posteriori Estimates of Vigilance, Stress, Workload and Performance. Directly following the first and the second scenario, the operators evaluated their perceived vigilance (Karolinska Sleepiness Scale, KSS), effort, task load, and performance (NASA Task Load Index, NASA-TLX), and stress (1-item stress scale) during the scenario. In addition, they filled in questionnaires evaluating various aspects of their situational awareness (Mission Awareness Rating Scale, MARS) as well as their crew collaboration and workload (Team Workload Questionnaire) during the scenario. After the second scenario, the operators were asked to rate their experienced stress level during different parts of the simulator training: while filling out questionnaires, during the initiation of the scenarios, when the fault emerged, during operations, and when the incident was resolved. A scale from 0 to 10 was used (0 = no stress at all, 10 = extreme stress).

2.5 Psychophysiological Measures

Cardiac, Sudomotor and Movement Activity. Operator stress and workload were quantified with measurements of the cardiac (electrocardiography, ECG) and sudomotor (skin conductance, SC) activity of the autonomic nervous system (ANS). Movement activity (3-D accelerometers) was used to control for the effects of movement and to detect stress-related movement patterns. The ECG and the accelerometer data were recorded with a Faros 180 device (Bittium Ltd.) using two disposable Ambu® BlueSensor electrodes attached to the chest of the participant. SC was recorded with Shimmer3 GSR+ units with reusable pre-gelled dry electrodes attached to the palmar side of the proximal phalanges of the index and the middle fingers, on both hands (see Fig. 1). From the ECG data, the mean and the median heart rate (HR) and the most commonly described indices of heart rate variability (HRV), such as the root mean square of successive beat-to-beat intervals (RMSSD) as well as low and high frequency spectral power (LF 0.04–0.15 Hz; HF 0.15–0.40 Hz), are calculated. From the SC data, the skin conductance response (SCR), representing the arousal response to the stimulus or immediate conditions, and the skin conductance level (SCL), mainly representing state-like, tonic arousal, are extracted. The movement activity is quantified as the squared mean activity of the three acceleration vectors.

Eye Tracking and Head Cameras. SSs' and ROs' gaze location, gaze paths, blinks, saccades and eyelid closure times were recorded with SMI Eye Tracking Glasses (Fig. 1) connected to a laptop carried in a backpack or to a smartphone carried in a waist bag. TOs and APOs wore lightweight head cameras mounted on a fitted cap. The wearable sensors were mounted at the beginning of the experiment, after the participants had signed their consent. The sensors were removed after the final questionnaires, but before the last interviews. Hence, baseline activity was recorded before, after and in between the two scenarios while the operators filled out surveys and questionnaires. Eye trackers and head cameras were on only during the simulated scenarios.
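As an illustration of how the time-domain HRV index mentioned above can be derived, the sketch below computes the mean heart rate and RMSSD from a series of R-R intervals. The interval values are invented placeholders; frequency-domain (LF/HF) and skin-conductance features would require additional spectral and decomposition steps not shown here.

```python
# Minimal sketch of time-domain HRV feature extraction from R-R intervals.
# The interval series is an invented placeholder, not data from the study.
import numpy as np

rr_ms = np.array([820, 810, 795, 805, 830, 815, 790, 800, 825, 810])  # R-R intervals [ms]

mean_hr_bpm = 60000.0 / rr_ms.mean()                  # mean heart rate [beats/min]
successive_diffs = np.diff(rr_ms)                     # beat-to-beat differences
rmssd = np.sqrt(np.mean(successive_diffs ** 2))       # RMSSD [ms]

print(f"Mean HR: {mean_hr_bpm:.1f} bpm, RMSSD: {rmssd:.1f} ms")
```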

2.6 Operator Crew Performance

Crew performance was evaluated by the operator trainers and as the time taken to resolve the situation. Following the incident/accident scenario, two operator trainers evaluated the performance of the crew from different perspectives, such as information seeking, diagnosis and corrective actions, use of OPs, conservativeness, collaboration and communication, on a scale from 1 to 5. The operators' perceptions of their own performance were obtained with the NASA-TLX and in the interviews. Performance times were calculated as the time between the occurrence of the fault and the time it was completely resolved. In addition, performance times will be calculated between certain selected events, such as the launch sequence for the control rods and the launch of the boric acid. As the scenarios and the events differ considerably from each other in duration, the performance times are converted to relative durations.


2.7 Operator Interviews

After the first scenario, a semi-structured personal interview focusing mainly on the use of the OPs was conducted. The interview addressed questions on the selection and the use of the appropriate OP as well as the format of this particular AOP. The operators were also asked their opinion on how this AOP could be improved. After the second scenario, the operators were shown either their eye tracking video or the view from their head camera. In this process-tracing interview, they were asked to speak aloud and reflect on their thoughts and actions from the beginning of the emergency shutdown.

3 Quantification of Operator Performance

Quantification of operator performance has several parallel goals. Firstly, our aim is to create a general description of the operator crew performance. This includes an analysis of the overall performance scores and a description of those aspects that may require additional training, as well as a description of the strongest competence areas. These results can be utilized in the development of training. Secondly, our aim is to model the relationship between operator performance, as evaluated by the operator trainer, and the relative performance time. Often, faster performance is assumed to indicate better performance, but when safety is the main priority, the relationship may not be linear but may rather follow the shape of an inverted U.

4 Quantification of Operator Stress and Workload

Self-Reported vs. Measured Stress. The operator trainers' and the operators' own evaluations of stress and workload are compared with the physiological recordings. This will give insight into how the operators' experienced stress and workload covary with their physiological state. Previous results suggest that there is a fairly good correspondence, despite individual differences in the operators' perception of stress and self-reflective abilities [8]. Also, the operator trainers' a priori estimates of stress and workload will be compared with the a posteriori measures of stress [8].

Stress and Workload During Different Events of the Scenarios. The psychophysiological recordings of the ANS activity provide continuous information on the minute fluctuations of the operator state throughout the training. When the cardiac, sudomotor and movement activity are linked to each other, and with the observed events, baselines and the recovery period, the stressfulness of the different phases of the training can be quantified. Moreover, as cardiac activity is mediated by both the sympathetic and parasympathetic branches of the ANS, whereas sudomotor activity is controlled by the sympathetic branch only, the approach allows a more detailed separation of the sympathetic and the parasympathetic involvement in the stress response.

Operator Role and Stress and Workload. The experienced stress may depend on the role of the participant. In certain tasks, the role of the SS is emphasized, whereas in


some other tasks the RO or the TO may have a more critical role. The interaction of role and event provides information on which events are the most stressful for each role.
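A sketch of how the continuous recordings could be linked to the logged events is shown below: physiological samples are sliced into event-bounded epochs and summarized per phase. The event names, sampling layout and pandas-based approach are illustrative assumptions, not the project's actual analysis pipeline.

```python
# Illustrative epoching of a continuous physiological signal around logged events.
# Event names, times and signal values are placeholders, not study data.
import pandas as pd

signal = pd.DataFrame({
    "time_s": range(0, 600, 5),
    "heart_rate_bpm": [70 + (t // 120) * 3 for t in range(0, 600, 5)],  # toy drift
})

events = pd.DataFrame({            # from the observation log
    "phase":   ["baseline", "fault_onset", "operations", "resolved"],
    "start_s": [0, 120, 180, 480],
    "end_s":   [120, 180, 480, 600],
})

def phase_of(t: float) -> str:
    row = events[(events.start_s <= t) & (t < events.end_s)]
    return row.iloc[0]["phase"] if not row.empty else "outside"

signal["phase"] = signal["time_s"].apply(phase_of)
summary = signal.groupby("phase")["heart_rate_bpm"].agg(["mean", "std"])
print(summary)   # per-phase means can then be compared against the baseline
```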

5 Operator Stress, Crew Performance and Use of OPs

The ultimate goal is to examine how operator stress and workload are associated with operator performance. Further, the aim is to evaluate whether the use of OPs has a moderating effect on performance. The performance scores for OP use (rated by the operator trainer) will be set as a moderating variable. Also, the effects of the operator role will be examined. The use of OPs is expected to have a mediating role in the relationship between stress and performance. The role of the operator may also mediate the link between operator stress and crew performance. We expect that the stress of the operator in charge of the most critical corrective procedures will be more strongly related to crew performance than that of those in less decisive roles.
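One conventional way to test such a moderation hypothesis is a regression with an interaction term between the stress measure and the OP-use score. The sketch below illustrates this with statsmodels on invented data; the variable names and data are placeholders, and the actual modelling approach of the project may differ (e.g., multilevel models accounting for crew membership).

```python
# Illustrative moderation analysis: does OP-use quality moderate the
# stress -> performance relationship? Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 54
df = pd.DataFrame({
    "stress": rng.normal(0, 1, n),           # e.g., standardized HRV-based index
    "op_use": rng.integers(1, 6, n),          # trainer rating of OP use (1-5)
})
# Toy outcome: stress hurts performance less when OP use is rated high.
df["performance"] = (3.5 - 0.6 * df["stress"] + 0.1 * df["op_use"]
                     + 0.12 * df["stress"] * df["op_use"]
                     + rng.normal(0, 0.5, n))

model = smf.ols("performance ~ stress * op_use", data=df).fit()
print(model.summary().tables[1])   # the stress:op_use term carries the moderation effect
```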

6 Significance

The results of the study will be shared with the nuclear power plant community, the stakeholders, the scientific community and the general public. In NPPs, the information can be used, for instance, in the development and targeting of operator training. In later phases of this research, the results will be utilized in the development of the OPs, to increase the clarity and usability of the procedures. For regulatory purposes, the study provides information for developing plant safety culture, particularly from the perspectives of operator training, stress management and supporting operator work in deviant situations. In general, the study builds understanding of the effects of stress and work on human cognition, collaboration and performance. Insight will also be gained into protective and supportive factors that could be used to mitigate the undesirable effects of stress. The project also develops the methodology for unobtrusive, objective quantification of cognitive and emotional stress in real-life conditions, as well as understanding of the practical significance of stress on performance.

Acknowledgments. The authors would like to thank the operators, the operator trainers and other NPP personnel involved in this study.

References
1. Shields, G.S., Rivers, A.M., Ramey, M.M., Trainor, B.C., Yonelinas, A.P.: Mild acute stress improves response speed without impairing accuracy or interference control in two selective attention tasks: implications for theories of stress and cognition. Psychoneuroendocrino. 108, 78–86 (2019)
2. Olver, J.S., Pinney, M., Maruff, P., Norman, T.R.: Impairments of spatial working memory and attention following acute psychosocial stress. Stress Health. 31, 115–123 (2015)
3. Wirth, M.M.: Hormones, stress, and cognition: the effects of glucocorticoids and oxytocin on memory. Adap. Hum. Behav. Physiol. 1, 177–201 (2015)
4. Starcke, K., Wiesen, C., Trotzke, P., Brand, M.: Effects of acute laboratory stress on executive functions. Front. Psychol. 7, 461 (2016)
5. Sobkow, A., Traczyk, J., Zaleskiewicz, T.: The affective bases of risk perception: negative feelings and stress mediate the relationship between mental imagery and risk perception. Front. Psychol. 7, 932 (2016)
6. Porcelli, A.J., Delgado, M.R.: Stress and decision making: effects on valuation, learning, and risk-taking. Curr. Opin. Behav. Sci. 14, 33–39 (2017)
7. Tiferes, J., Bisantz, A.M.: The impact of team characteristics and context on team communication: an integrative literature review. Appl. Ergon. 68, 146–159 (2018)
8. Pakarinen, S., Korpela, J., Torniainen, J., Laarni, J., Karvonen, H.: Cardiac measures of nuclear power plant operator stress during simulated incident and accident scenarios. Psychophysiol. 55, e13071 (2018)
9. Pakarinen, S., Korpela, J., Karvonen, H., Laarni, J.: Modeling the cardiac indices of stress and performance of nuclear power plant operators during simulated fault scenarios. Psychophysiol. 00, e13513 (2019)

Adopting the AcciMap Methodology to Investigate a Major Power Blackout in the United States: Enhancing Electric Power Operations Safety Maryam Tabibzadeh(&) and Shashank Lahiry Department of Manufacturing Systems Engineering and Management, California State University, Northridge, 18111 Nordhoff Street, Northridge, CA 91330, USA [email protected]

Abstract. The electric power industry is an example of a high-risk system in which large-scale accidents occur. At the same time, there is a high dependence on electricity. According to Wang [1], the end use of electricity in the US was approximately 3.95 trillion kilowatt hours in 2018. Therefore, improving the safety of operations in the electric power industry is of paramount importance. Originally proposed by Rasmussen in 1997 [2], the AcciMap methodology is utilized in this paper to systematically analyze the Arizona-Southern California power outages of 2011 in the United States, identify their main contributing causes, and use that as a basis to develop prevention strategies. The AcciMap attempts to explain the context in which an accident occurred by highlighting, within a hierarchical framework, the socio-technical factors that contributed to the accident. It also provides a detailed analysis of the interactions of the different key players and decision makers involved.

Keywords: Electric power operations · 2011 Arizona-Southern California power outage · Socio-technical factors · Accident investigation · AcciMap

1 Introduction A power grid is an interconnected network that aims to deliver electricity from suppliers to customers. Today, the power grid in North America is one of the most complex and tightly coupled systems of our time. Due to this interconnectivity, more than 50% of the electricity generated domestically travels hundreds of miles in wholesale markets before it is used by customers. The same capability that allows the transmission of power over long distances can contribute to cascading local failures becoming grid-wide events. A cascading failure in the power grid happens when the failure of one part initiates the failure of succeeding system components. In power grids, when one of the system parts completely or partially fails, it shifts its load to neighboring elements in the system, making them overloaded. In this situation, the nearby elements also shift their load onto other elements. In fully loaded or slightly overloaded high voltage systems, a single point of failure commonly results in an abrupt surge across all network nodes, setting off more overloads and thereby taking down the entire system very quickly. Although the electric power industry is a high-risk system in which large-scale accidents occur, there is a high dependence on electricity. According to Wang [1], the end use of electricity in the US was approximately 3.95 trillion kilowatt hours in 2018. Therefore, improving the safety of operations in this industry is of paramount importance. This paper utilizes the AcciMap methodology, which was originally proposed by Rasmussen in 1997 [2], for the analysis of the Arizona-Southern California (referred to as the Southwest) power blackout by investigating different socio-technical contributing causes of the accident and the interactions between those causes. The Southwest power blackout occurred on September 8, 2011. It was a widespread power outage that affected the San Diego–Tijuana area, Southern Orange County, the Imperial Valley, the Mexicali Valley, the Coachella Valley, and parts of Arizona. This blackout left nearly seven million people without power. It was the largest power failure in California history [3]. This paper analyzes various causes of the 2011 Southwest blackout, across different layers of the AcciMap framework, while also providing recommendations and suggesting precautions in order to avoid future blackouts. It is noteworthy that this framework can be utilized for the analysis of other power blackouts and the identification of their common contributing causes, which can be used as a basis to develop more generalized recommendations with the purpose of developing more reliable as well as more resilient power grid systems.
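To make the cascading-overload mechanism described above concrete, the following toy R sketch (our own illustration, not part of the cited analysis, and far simpler than real power-flow models) mimics the load-shifting behavior: when a component fails, its share of the total demand is shifted to the survivors, and any survivor pushed past its capacity fails in the next round.

```r
# Toy cascading-failure sketch: not a power-flow model, only an illustration of
# the load-shifting mechanism described in the text.
simulate_cascade <- function(load, capacity, initial_failure) {
  failed <- rep(FALSE, length(load))
  failed[initial_failure] <- TRUE
  repeat {
    survivors <- which(!failed)
    if (length(survivors) == 0) break           # everything has tripped
    total_load <- sum(load)                     # total demand stays constant
    load_per_survivor <- total_load / length(survivors)
    newly_failed <- survivors[load_per_survivor > capacity[survivors]]
    if (length(newly_failed) == 0) break        # cascade has stopped
    failed[newly_failed] <- TRUE
  }
  failed
}

# Five lines running near their limits; losing line 1 overloads the rest.
load     <- c(90, 85, 80, 88, 82)
capacity <- c(100, 100, 100, 100, 100)
simulate_cascade(load, capacity, initial_failure = 1)
#> In this toy case all five lines end up failed: the whole system collapses.
```

Even in this crude form, the sketch shows why a single-line loss in a heavily loaded system can propagate: the surviving elements inherit more load than they can carry.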

2 The AcciMap Methodology Several methodologies have been developed to better understand and analyze accidents. Some examples of these methodologies include the Systems-Theoretic Accident Model and Processes (STAMP) by Leveson [4], Reason's model of organizational accidents [5] and Rasmussen's AcciMap approach [2]. Rasmussen's AcciMap approach is particularly useful for this purpose as it models the different contributing factors of an accident, and their interactions, in a causal diagram. According to Rasmussen, accidents in complex socio-technical systems are the result of a loss of control over hazardous work processes, which can cause injuries to people, loss of investment or damage to the environment [2]. The AcciMap framework was introduced by Jens Rasmussen in 1997 [2]. It works by organizing the various factors that contributed to an accident into a logical hierarchical causal diagram that illustrates how they combined to result in that event, which makes it a useful tool for uncovering factors related to the accident. The AcciMap methodology was developed in conjunction with a six-layer hierarchical framework (Fig. 1), known as the risk management framework. Each layer of the framework represents a main group of involved decision-makers, players or stakeholders in a studied system. These six layers, starting from the top, are: government, regulators and associations, company, management, staff and work. Analysis of such a framework emphasizes not only the assessment of the activities of players in each layer but, more importantly,
the analysis of the interactions between key players across these layers, which takes the form of decisions propagating downward and information propagating upward, as shown by the arrows on the right-hand side of Fig. 1 [2]. This graphical representation provides a big picture illustrating the context in which an accident occurred as well as the interactions between different levels of a socio-technical system that resulted in that event. In general, analysis of past accidents using this framework can define patterns of hazards within an industrial sector, which can lead to the definition of preconditions for safe operations, as a focus point of proactive risk management systems [6]. The AcciMap framework has been used for the investigation of accidents in different contexts. However, to our knowledge, this is the first attempt to utilize this framework to systematically analyze an accident (i.e., a power blackout) in electric power operations. The AcciMap framework is utilized in this paper (in Sect. 3) for the analysis of the 2011 Southwest power outage.

Fig. 1. Rasmussen’s risk management framework [6]
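As an illustration of how the hierarchical, causal structure described above can be encoded for analysis, the R sketch below (our own illustration, not part of Rasmussen's method or of the cited reports) represents a few hypothetical AcciMap factors as graph nodes tagged with their layer, with directed edges from contributing factors to the factors they influenced; the node names are invented.

```r
library(igraph)

# Hypothetical mini-AcciMap: each factor is a node labeled with its layer;
# edges point from a contributing factor to the factor it influenced.
factors <- data.frame(
  id    = c("weak_oversight", "no_seasonal_studies", "no_realtime_tools",
            "line_trip", "cascade"),
  layer = c("Government/Regulators", "Industry Codes and Standards",
            "Transmission Operators", "Physical Events", "Outcome")
)
links <- data.frame(
  from = c("weak_oversight", "no_seasonal_studies", "no_realtime_tools", "line_trip"),
  to   = c("no_seasonal_studies", "no_realtime_tools", "cascade", "cascade")
)

g <- graph_from_data_frame(links, directed = TRUE, vertices = factors)

# Upward tracking from the outcome: which factors lie on causal paths into "cascade"?
subcomponent(g, "cascade", mode = "in")
```

Encoding the diagram this way makes the "upward tracking" discussed later in the paper a simple reachability query over the causal graph.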

3 The AcciMap Framework of the 2011 Southwest Blackout This section describes the developed AcciMap framework (Fig. 2) for the analysis of the contributing causes and involved socio-technical factors of the 2011 Southwest power blackout. The blackout was initiated by the loss of a single 500 kV transmission line and cascaded to cause a power outage in parts of Arizona, Southern California and Baja California, Mexico for half a day to a day. The disturbance
occurred near rush hour, on a business day, and caused traffic issues for hours. Schools and businesses closed and some flights and public transportation were disrupted. In addition, water and sewage pumping stations lost power and beaches were closed due to sewage spills. Millions went without air conditioning on a hot day. As Fig. 2 shows, the developed AcciMap framework for the analysis of this accident consists of eight layers. These layers have been customized, based on the originally introduced layers of Rasmussen's framework shown in Fig. 1, for the context of this specific accident.

Fig. 2. AcciMap framework for the analysis of 2011 Southwest power blackout

3.1 Government and Regulatory Bodies

The Federal Energy Regulatory Commission (FERC) is an independent agency that regulates the interstate transmission of electricity, natural gas and oil in the U.S. The North American Electric Reliability Corporation (NERC) is a not-for-profit international regulatory authority whose mission is to assure the effective and efficient reduction of risks to the reliability and security of the grid. In the 2011 Southwest blackout, the FERC lacked authority, in part because the Western Electricity Coordinating Council (WECC), as the Reliability Coordinator (RC), had the "highest level of authority" to maintain Bulk Power System (BPS) reliability [7, p. 17].

3.2 Industry Codes and Standards

The coordination between WECC and NERC on compliance operations was not properly established, which led to a communication gap in delegating authority to Transmission Operators (TOPs) [8]. Moreover, not enough emphasis was given to performing seasonal studies [8]. Seasonal studies are a very important tool for determining the transmission of electricity in different seasons. Furthermore, regional studies, which determine path ratings and transfer flows, also failed to be conducted [8].

3.3 Reliability Coordinator (RC): WECC

According to NERC, the RC is the "highest level of authority" and it maintains reliability for the interconnection as a whole. The WECC had the RC role in the Western Interconnection. One of the major reasons for the 2011 blackout was that the WECC failed to provide resources, such as trained operators, to expand the reliability coordinator functionality [7, p. 93]. This led to a communication gap between the WECC and Transmission Operators (TOPs), and the effects of this were visible during the blackout. Individual TOPs were not coordinated by the WECC for remedial action schedules, and real-time tools were not provided by the WECC for the immediate actions that could have been taken by TOPs to prevent the cascade of the blackout [7, pp. 89, 95].

3.4 Balancing Authority (BA)

The balancing authority is responsible for making resource plans ahead of time and maintaining the balance of electricity resources and electricity demand. Because some of the BAs failed to conduct next-day studies, and because of ineffective communication among BAs as well as between BAs and the RC, the BAs failed to properly forecast the transmission of electricity in the Southwestern United States [7, pp. 66–68; 8].

3.5 Transmission Planners (TPs)

A TP is responsible for developing and forecasting a prospective plan for the reliability of the interconnected bulk transmission systems within its portion of the planning coordinator area. Issues such as a lack of near- and long-term planning processes [7, p. 119],
not implementing methods such as cross training to improve efficiency [8], and not being aware of issues outside their own territory that could affect BPS reliability resulted in TPs not being able to fulfill their responsibilities correctly, which also affected the TOPs' performance.

3.6 Transmission Operators (TOPs)

The transmission operators are responsible for the real-time operation of transmission assets under their purview. They have a similar role to the RC, but for their own specific territory. TOPs have the authority to take corrective actions to ensure that their area operates reliably. For instance, they are responsible for load shedding and maintaining the grid when a high voltage is transmitted on one transmission line, which can cause overload and failure. Lack of real-time tools [7, p. 89], losing the ability to conduct Real-Time Contingency Analysis (RTCA) by one of the TOPs, and also failing to notify WECC and other TOPs about that [7, p. 95] were some of the contributing causes of the lack of situational awareness, which prevented TOPs from doing their job of being aware of changes in their own transmission lines, as well as their neighbors', and of studying the consequences of those changes.

3.7 Physical Events, Processes and Conditions

The loss of a single 500 kV transmission line initiated the 2011 Southwest power outage [7, p. 1]. This affected line, Arizona Public Service's Hassayampa-N. Gila (H-NG), is a segment of the Southwest Power Link (SWPL), which is a major transmission corridor that transports power from generators in Arizona, through the service territory of the Imperial Irrigation District (IID), into the San Diego area. After the loss of H-NG, power flows were instantaneously redistributed throughout the system, increasing flows through lower voltage systems to the north of the SWPL, as power continued to flow into San Diego on a hot day during hours of peak demand [7, p. 1]. This created voltage deviations and equipment overloads. Significant overloading occurred on three of IID's 230/92 kV transformers located at the Coachella Valley and Ramon substations, as well as on WECC Path 44, located south of the San Onofre Nuclear Generating Station (SONGS) in Southern California [7, p. 2]. This resulted in a ripple effect in which transformers, transmission lines and generating units tripped offline and led to automatic load shedding throughout the region in a relatively short time span [7, p. 2]. Just seconds before the blackout, Path 44 carried all flows into the San Diego area as well as parts of Arizona and Mexico. This initiated an intertie separation scheme at SONGS and led to the loss of the SONGS nuclear units, and eventually resulted in the complete blackout of San Diego and the Baja California Control Area.

4 Conclusion

4.1 Model Analysis

Investigation of accidents using the AcciMap framework prevents the unfair blaming of front-line operators by providing a means to systematically analyze an accident in a broader socio-technical context. For the analysis of the 2011 Southwest power blackout, contributing causes are captured across each of the layers of Government and Regulatory Bodies; Industry Codes and Standards; Reliability Coordinator; Balancing Authority; Transmission Planner; Transmission Operator; Physical Events, Processes and Conditions; and Outcome. The connections among these factors allow an upward tracking of factors from the lowest layer (Outcome) to the topmost layer (Government and Regulatory Bodies). Investigating the above-stated power blackout using the AcciMap framework indicates that the involved operating companies as well as the government and regulatory bodies played a critical role in causing the blackout. Lack of authority from the regulatory perspective as well as ineffective interoperability among balancing authorities, between BAs and the RC, and also between the RC and TOPs were some of the major contributing causes of the blackout. In addition, deficient regulations, lack of situational awareness, insufficient consideration of human factors and a lack of (or obsolete) standard operating and emergency procedures were some other factors that contributed to the accident.

4.2 Recommendations

Based on the thorough investigation of the 2011 Southwest power outages through the developed AcciMap framework, as well as the points brought up in the Model Analysis section, the electric power industry needs to improve its planning for operations and increase the situational awareness of its involved players about ongoing operations. Our analysis indicates that there is a growing need for more coordination of grid operations and proper interoperability and information flow among involved entities. Availability of properly functioning tools is also of paramount importance. We also recommend:
1. The Reliability Coordinator needs to focus on improving the effectiveness of its staffing levels, training and tools.
2. The RC and TOP management should review their post-contingency procedures. Moreover, they should conduct studies on how much time it takes to implement mitigation procedures and how that time can be reduced.
3. The TOP management and higher authorities (e.g., the Reliability Coordinator) should review their real-time tools and provide measures to check the functionality of those tools, making sure operators are aware of how and when to use them.
4. Interoperability and data sharing between the Reliability Coordinator(s), BAs and TOPs should be well-defined, frequent and effective.

References
1. Wang, T.: Electricity end use in the U.S. 1975–2018, 9 August 2019. https://www.statista.com/statistics/201794/us-electricity-consumption-since-1975/. Accessed 20 Oct 2019
2. Rasmussen, J.: Risk management in a dynamic society: a modelling problem. Saf. Sci. 27(2), 183–213 (1997)
3. Medina, J.: Human error investigated in California blackout's spread to six million. New York Times, 9 September 2011. https://www.nytimes.com/2011/09/10/us/10power.html?scp=1&sq=blackout&st=cse. Accessed 1 Jan 2019
4. Leveson, N.: A new accident model for engineering safer systems. Saf. Sci. 42(4), 237–270 (2004)
5. Reason, J.: Managing the Risks of Organizational Accidents. Ashgate Publishing Limited, Hampshire (1997)
6. Rasmussen, J., Svedung, I.: Proactive Risk Management in a Dynamic Society, 1st edn. Swedish Rescue Services Agency, Karlstad (2000)
7. FERC/NERC Staff Report: Arizona-Southern California Outages on 8 September 2011: Causes and Recommendations. Prepared by the staffs of the Federal Energy Regulatory Commission (FERC) and North American Electric Reliability Corporation (NERC) (2012)
8. Cauley, G.: NERC Comments on WECC Preliminary Response to the "Arizona-Southern California Outages on September 8, 2011" Report. North American Electric Reliability Corporation (NERC), President and CEO (2012)

Systematic Investigation of Pipeline Accidents Using the AcciMap Methodology: The Case Study of the San Bruno Gas Explosion Maryam Tabibzadeh(&) and Viak R. Challa Department of Manufacturing Systems Engineering and Management, California State University, Northridge, 18111 Nordhoff Street, Northridge, CA 91330, USA [email protected]

Abstract. There is a high dependence on pipeline transport in the world, including the U.S. The U.S. has the largest network of natural gas pipelines in the world; approximately 3 million miles of pipeline [1]. At the same time, large-scale accidents have occurred in the pipeline transport industry. This paper investigates the San Bruno Gas Explosion, which occurred on September 9, 2010 in San Bruno, California, by utilizing a systematic accident investigation methodology called AcciMap, originally proposed by Rasmussen in 1997 [2]. This graphical representation provides a big picture illustrating the context in which the accident occurred, by capturing the socio-technical factors that contributed to the accident across the different defined layers of the AcciMap framework, as well as the interactions between those layers. Our analysis shows that, apart from external contributing causes (i.e., factors related to government and regulatory bodies), organizational factors were, among the internal factors, the root causes of the San Bruno gas explosion.

Keywords: Accident investigation · AcciMap methodology · Socio-technical factors · Pipeline transport safety · San Bruno gas explosion

1 Introduction In today's dynamic society, we witness large-scale accidents in safety-critical systems. These systems deal with tightly coupled and interactively complex operations. The pipeline industry is an example of such safety-critical systems. At the same time, there is a high dependence on pipeline transport. The U.S. has the largest network of natural gas pipelines in the world; approximately 3 million miles of pipeline [1]. Hence, improving the safety of pipeline transport operations is of paramount importance. There have been several accidents in the pipeline industry leading to severe injuries, deaths and loss of property. One example of such accidents is the San Bruno Gas Explosion, which occurred in San Bruno, California on September 9, 2010. This accident was a major explosion, which resulted in eight fatalities, 58 injuries, the destruction of 38 houses, severe damage to 70 homes and the release of 47.6 million standard cubic feet of gas.

This paper utilizes the AcciMap methodology, which was originally proposed by Rasmussen in 1997 [2], for the analysis of the San Bruno gas explosion by investigating the different contributing causes of the accident and the interactions between those causes. This graphical representation provides a big picture illustrating the context in which the accident occurred, by capturing the socio-technical factors that contributed to the accident across the different defined layers of the AcciMap framework, as well as the interactions between those layers.

2 Rasmussen's Risk Management Framework and AcciMap Methodology Several methodologies have been developed to better understand and analyze accidents. Some examples of these methodologies include the Systems-Theoretic Accident Model and Processes (STAMP) by Leveson [3], Reason's model of organizational accidents [4] and Rasmussen's AcciMap framework [2]. Rasmussen's AcciMap framework is particularly useful for this purpose as it models the different contributing factors of an accident, and their interactions, in a causal diagram. Rasmussen has introduced a six-layer hierarchical framework (Fig. 1), known as his risk management framework. Each layer captures the role of a specific decision maker or key player, and the integration as well as the interaction of these layers indicate the pathways leading to an accident [2, 5]. These six layers from top to bottom are: government, regulators and associations, company, management, staff, and work (Fig. 1).

Fig. 1. Rasmussen’s risk management framework [5]

The AcciMap methodology was developed by Rasmussen [2] in conjunction with his six-layer risk management framework, illustrated in Fig. 1. This methodology captures the associated socio-technical factors of an accident within an integrated framework and analyzes the contribution of those factors in causing the accident. The AcciMap framework has been used for the investigation of accidents in different contexts. However, to our knowledge, this is the first attempt to utilize this framework to systematically analyze an accident in the pipeline transport industry. The AcciMap methodology is used in this paper (in Sect. 3) to investigate and explain how certain managerial decisions, organizational processes and other contributing factors led to the San Bruno gas explosion.

3 The AcciMap Framework of the San Bruno Gas Explosion In this section, the AcciMap methodology has been utilized to analyze the contributing causes of the San Bruno explosion, which took place on September 9, 2010 in San Bruno, California (Fig. 2). A 30-inch-diameter segment of an intrastate natural gas transmission pipeline known as Line 132, which was owned and operated by the Pacific Gas & Electric Company (PG&E), ruptured in a residential area in San Bruno and caused the gas leak. Such analysis will contribute to improving the pipeline transport industry's understanding of how human and organizational factors, along with technical elements, contributed to the accident. By utilizing this analysis, facilities can apply newly suggested protocols to future circumstances, with the goal of creating better operational systems and work environments while using the most compatible technology. As stated before, the AcciMap framework consists of several layers, each representing a main group of involved decision-makers, players or stakeholders in a studied system. For the analysis of the San Bruno gas explosion, the following layers, from top to bottom, have been considered: Government and Regulatory Bodies; PG&E (the involved company in the San Bruno case); Management; Physical Events/Conditions and Actors Activities; and Outcome. This paper captures the contributing causes of the explosion across each layer. It also illustrates the interactions of those layers through connecting arrows between layers.

3.1 Government and Regulatory Bodies

This layer of the AcciMap framework captures the role of government and regulatory bodies in causing the San Bruno gas explosion. One of these contributing causes was the failure of the California Public Utilities Commission (CPUC), as the regulatory agency that regulates privately owned public utilities in the state of California, to detect the inadequacies of PG&E's pipeline integrity management program [6, p. xii]. Moreover, the CPUC's and the U.S. Department of Transportation's (DOT) exemptions of existing pipelines from the regulatory requirement for pressure testing, which likely would have detected the installation defects, contributed to this accident [6, p. xii]. As another contributing cause, the Pipeline and Hazardous Materials Safety Administration (PHMSA), which is the DOT agency responsible for developing and enforcing regulations for safe pipeline transportation, did not incorporate effective and meaningful measures for pipeline management safety programs [6, p. 48].

3.2 Pacific Gas and Electric Company (PG&E)

In this layer, the contributing causes of the San Bruno gas explosion that are related to PG&E (the company layer) are analyzed. One of these contributing causes is PG&E's deficient/ineffective pipeline integrity management, which should have ensured the safety of the system [6, p. xi; 7, p. 7]. PG&E lacked a systematic risk assessment to identify threats/anomalies and address them. In October 2010, the CPUC found that PG&E may be "diluting the requirements of the [integrity management program] through its exception process and appears to be allocating insufficient resources to carry out and complete assessments in a timely manner" [6, p. 68]. Moreover, PG&E lacked adequate quality assurance and quality control with regard to the 1956 relocation project for Line 132 [6, pp. xii, 95]. A substandard pipe piece was installed and remained in service undetected until the accident, 54 years later [6, p. 95]. Concerns about PG&E's performance culture are another contributing cause of the accident. These started with the frequent management changes and dysfunction from excessive layers of management [7, p. 50]. The concern is based on top management whose interest and expertise lie in financial performance, which could dilute the Company's focus on safe and reliable operations [7, p. 50]. Finally, PG&E's inadequate emergency response played a major role in the explosion, as they were unable to stop the gas leakage for about 95 min [6, p. 56; 7, p. 15]. After the gas leak, the management and crew were not able to locate/pinpoint the rupture location. This delayed the control of the gas leak. In addition, the lack of an automatic or remote control shut-off valve contributed to the continuation of the gas leakage.

Fig. 2. AcciMap framework for the analysis of 2010 San Bruno Gas Explosion

3.3 Management

Management decision making can play a critical role in contributing to accidents. One such contributing factor is that PG&E's management did not consider the design and materials contribution to the risk of pipeline failure [6, p. xi]. "The PG&E's integrity management program significantly understated the threats due to external corrosion and design and materials, and overstated the threats due to third-party damage and ground movement" [6, p. 110]. There were also other known design/material and construction defects on Line 132, which were documented in PG&E's records (e.g. previously identified welded seam cracks), but were not considered in its integrity management program [6, p. 110]. Moreover, selecting examination methods that could not detect welded seam defects contributed to not recognizing and pinpointing the rupture in Line 132. Finally, there was poor planning for the electrical work at the Milpitas Terminal, where Line 132 originates, for the replacement of an Uninterruptible Power Supply (UPS) [6, p. 127]. This factor led to a series of events, captured in the Physical Events/Conditions and Actors Activities layer described in the next section, which eventually resulted in the gas leak and explosion as well as the continuation of the leakage.

3.4 Physical Events/Conditions and Actors Activities

As explained before, the rupture in a 30-inch-diameter segment of an intrastate natural gas transmission pipeline known as Line 132 caused the gas leak in the San Bruno case [6, p. x]. The ruptured pipe segment was installed in 1956. PG&E's Supervisory Control and Data Acquisition (SCADA) system lacked several tools, including real-time monitoring and leak detection, to assist the staff in recognizing the location of the rupture [6, p. 101]. Moreover, the three factors of management not considering the design and materials contribution to the risk of a pipeline failure, the selected examination method not detecting welded seam defects, and management's failure to consider the presence of previously identified welded seam cracks as part of risk assessment, which all belong to the Management layer explained in the previous section, contributed to the crew's inability to locate the rupture location. Furthermore, management's poor planning for the electrical work at the Milpitas Terminal for the UPS replacement, as described in the previous section, contributed to the gas leak and its continuation. The crew were transferring loads to small UPS devices so that they could remove the existing UPS distributing panel and replace it with a new one. This process affected SCADA data and contributed to the loss of pressure and flow data [6, p. 5]. Following the transfer of critical loads from the UPS panel, workers began to remove power from an unidentified breaker. During that work, the workers opened a circuit that resulted in a local control panel unexpectedly losing power [6, p. 5]. The post-accident interview of a technician revealed that when they were measuring the current using a clamp-on amp meter, some of the control panel displays went off. Troubleshooting showed that two redundant power supplies emitted erratic voltages [6, p. 5]. These erratic voltages, when passed on to pressure transmitters,
resulted in an erroneous low-pressure signal to the regulating valve controllers, which caused them to command the regulating valves to a fully open position [6, p. 6]. This resulted in the monitor valves, whose purpose is to protect against accidental overpressure, becoming the only means of pressure control [6, p. 6]. The erratic voltages also affected valve position sensors, generating erroneous signals to the SCADA center [6, p. 6]. Because of the regulating valves fully opening and the erroneous signals caused by the erratic voltages, the SCADA center alarm console displayed over 60 alarms within a few seconds, including controller error alarms, high differential pressure and backflow alarms from the Milpitas Terminal [6, p. 9]. These alarms were followed by high and high-high pressure alarms on several lines including Line 132 [6, p. 9]. With the regulating valves wide open, the monitor valves limited pressure on the outgoing lines. The monitor valves were set at 386 psig [6, p. 9]. However, because of a typical lag in the monitor valves' response time, the pressure in the lines leaving the Milpitas Terminal peaked at 396 psig [6, p. 9]. This also contributed to the continuation of high-pressure alarms in the SCADA system.

4 Conclusion

4.1 Model Analysis

Investigation of accidents using the AcciMap framework prevents the unfair blaming of front-line operators by providing a means to systematically analyze an accident in a broader socio-technical context. Creating an AcciMap can help regulators, lawmakers and pipeline transport companies understand the interaction and interdependency of various socio-technical systems. For the analysis of the San Bruno gas explosion, contributing causes are captured across each of the layers of Government and Regulatory Bodies, PG&E (the involved company), Management, Physical Events/Conditions and Actors Activities, and Outcome. We have also connected the captured factors using arrows. These connections allow an upward tracking of factors from the lowest layer (Outcome) to the topmost layer (Government and Regulatory Bodies). One example of upward tracking in the developed AcciMap framework (Fig. 2) is as follows: from the layer of Physical Events/Conditions and Actors Activities, the ruptured pipe was a contributing cause of the gas leak and the explosion. One of the contributing causes of that event was the crew not recognizing and pinpointing the location of the rupture. One of the contributing causes of this issue was management's failure to consider the presence of previously identified welded seam cracks as part of their risk assessment (from the Management layer). This was due to PG&E's ineffective pipeline integrity management (from the PG&E layer). In addition to internal factors, regulatory bodies and associations played a role in this accident (layer of Government and Regulatory Bodies). For instance, the California Public Utilities Commission (CPUC) could not detect the ineffectiveness of PG&E's existing pipeline integrity management program, which resulted in PG&E not having an adequate program in this regard. It
is noteworthy that this is only one path towards the final outcome of the gas leak and explosion; due to space limitations, we limit the discussion to this example. Analyzing the San Bruno gas explosion using the AcciMap framework shows that both the federal regulatory agencies (i.e., DOT and PHMSA) and the state of California regulatory agency (i.e., CPUC) in the pipeline transport industry lacked effective ways of evaluating pipeline management safety programs and of detecting inadequacies in operating companies' pipeline integrity management. Furthermore, our analysis shows that apart from external contributing factors (i.e., government and regulatory bodies), organizational factors were the root cause of errors and questionable decisions made by personnel and management in the case of the San Bruno accident. PG&E's ineffective pipeline integrity management, deficient quality assurance and quality control by this Company, concerns about its performance culture (safety versus production) as well as its ineffective emergency response are the main organizational factors that contributed to this accident. It is noteworthy that although the AcciMap framework has been used here for the analysis of the San Bruno gas explosion, this is only a case study to investigate contributing causes of accidents in the pipeline transport industry. In general, analysis of past accidents using this framework can define patterns of hazards within an industrial sector, which can lead to the definition of preconditions for safe operations, as a focus point of proactive risk management systems [5]. The next section provides some recommendations to improve pipeline transport operations safety based on the analysis conducted in this paper.

4.2 Recommendations

From the external factors perspective, the U.S. Secretary of Transportation has to conduct audits to assess the effectiveness of PHMSA's oversight of pipeline management safety programs. This includes incorporating effective and meaningful metrics into their safety programs and developing adequate inspection protocols to ensure the completeness and accuracy of pipeline operating companies' integrity management programs. The CPUC should also improve its process of evaluating operating companies' integrity management programs. Moreover, regulatory bodies should improve their oversight by frequently conducting reviews and checking safety measures for risk management programs. They also need to recommend that companies frequently conduct safety analyses. From the internal (organizational) factors perspective, operating companies need to design and develop a comprehensive pipeline integrity management system. They also have to equip their SCADA systems with proper monitoring tools and real-time defect detection (e.g., leak detection) systems to recognize and pinpoint the location of leaks in a timely manner. Furthermore, they have to develop proper pressure tests so that they know how much pressure pipelines can handle, and they should take active measures to observe the pressure capacity of the pipeline, which will help in case of emergency. Finally, companies need to allocate a reasonable budget to improve safety by incorporating proper safety measures and risk analysis programs that will contribute to better situational awareness and reduce the risk of failures and incidents.

References
1. U.S. Energy Information Administration (EIA): Natural gas explained - Natural gas pipelines, 5 Dec 2019. https://www.eia.gov/energyexplained/natural-gas/natural-gas-pipelines.php. Accessed 10 May 2020
2. Rasmussen, J.: Risk management in a dynamic society: a modelling problem. Saf. Sci. 27(2), 183–213 (1997)
3. Leveson, N.: A new accident model for engineering safer systems. Saf. Sci. 42(4), 237–270 (2004)
4. Reason, J.: Managing the Risks of Organizational Accidents. Ashgate Publishing Limited, Hampshire (1997)
5. Rasmussen, J., Svedung, I.: Proactive Risk Management in a Dynamic Society, 1st edn. Swedish Rescue Services Agency, Karlstad (2000)
6. National Transportation Safety Board: Pacific Gas and Electric Company Natural Gas Transmission Pipeline Rupture and Fire, San Bruno, California, 9 September 2010. Pipeline Accident Report NTSB/PAR-11/01. Washington, D.C. (2011)
7. Jacobs Consultancy: Report of the Independent Review Panel San Bruno Explosion. Prepared for the California Public Utilities Commission, 8 June 2011

A Tool for Performing Link Analysis, Operational Sequence Analysis, and Workload Analysis to Support Nuclear Power Plant Control Room Modernization Casey Kovesdi(&) and Katya Le Blanc Idaho National Laboratory, Idaho Falls, ID, USA {Casey.Kovesdi,Katya.LeBlanc}@inl.gov

Abstract. Nuclear power continues to have an imperative role in the U.S.'s electricity generation. However, for these nuclear power plants (NPPs) to remain economically viable, new strategies for reducing operations and maintenance costs must be explored. The U.S. Department of Energy Light Water Reactor Sustainability Program is researching how to digitally transform existing analog and hybrid main control rooms into a fully integrated control room that addresses this challenge. The transformation will fundamentally change the conduct of operations for these U.S. NPPs. Human factors engineering has a vital role in this effort, where traditional task analysis methods are important in informing the design of advanced human-system interface displays. This work describes a preliminary tool to support task analysis for the development of these advanced displays. Details on the use of this tool, including the specific task analysis methods offered, are presented in this paper.

Keywords: Task analysis · Human factors engineering · Nuclear power plant modernization

1 Introduction Nuclear power continues to have an imperative role in electricity generation in the United States (U.S.). However, for these nuclear power plants (NPPs) to remain economically viable and to support license renewal, new strategies for reducing operations and maintenance costs must be explored. The Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program is addressing these challenges through targeted research and development (R&D) to (1) ensure existing legacy analog instrumentation and control (I&C) systems are not life-limiting concerns for the U.S. NPP fleet, and to (2) implement advanced digital technology that enables business-driven innovation in NPP operations. One R&D focus is to digitally transform existing analog and hybrid main control rooms into a fully integrated control room that offers improved human-system performance and possibly reductions in required staffing. The fully integrated control room will fundamentally change the conduct of operations for the U.S. NPPs, allowing for new capabilities such as automation of manual actions, integration of plant data, and
applications of advanced technologies that support decision making. For instance, rather than requiring operators to walk and visually scan the boards to find relevant information, the integrated control room will provide context-relevant information in an accessible and usable format that improves operating performance and reduces workload. Human factors engineering (HFE) has a vital role in supporting this transformation by ensuring that human-system performance is optimized, and no new human failure modes are introduced [1]. Traditional HFE methods such as task analysis are important in informing the design of new human-system interface (HSI) displays for the integrated control room such as by identifying important human actions and understanding contextual considerations that are important to HSI design [2]. This paper introduces an open-source R-based [3] tool that supports task analysis for the development of advanced HSIs like task-based displays [4] by allowing the human factors engineer to systematically map important human actions, such as specified in a procedure, to the I&C on control boards. Collectively, the tool is intended to aid in the identification of important human actions from the task analysis outputs. This work attempts to supplement available task analysis data collection methods such as interviews, observations, verbal protocols, and walk/talk-throughs to provide a comprehensive understanding of critical task data that supports HSI design. A brief overview of applicable task analysis techniques provided within the tool is described next.

2 Overview of Applicable Task Analysis Techniques Task analysis is a methodology used to study the cognitive and physical actions needed to achieve some goal within a system [5]. For NPPs, there are complex interactions between technology, processes, and people that must be considered in task analysis. A comprehensive review of all task analysis techniques is beyond the scope of this paper. However, an overview of the techniques available within the tool is summarized next.

2.1 Operational Sequence Diagrams

An operational sequence is a series of control and information gathering activities completed in a specific order to accomplish a task [5, 6]. Operational sequence diagrams (OSDs) graphically represent these sequences in either temporal or spatial formats. Temporal OSDs graphically represent the sequence of activities required by a team of agents through the course of time. Hence, one axis represents time whereas the other axis represents the different agents involved in a task [6]. An activity, such as a manual action required by an operator, is graphically presented as a single coordinate, intersecting a point in time and responsible agent (i.e., the operator). Spatial OSDs graphically represent the sequence of activities required by a team of agents in interacting within the physical environment. A diagram of a control room panel or display is typically presented where the activities are visually overlaid on the diagram to show the spatial location of where the activity takes place on the control panel or display [5]. The sequence of activities can be represented as directional arrows like the temporal OSD.

OSDs are most useful in showing the relationship of activity flow between agents through space or time. From a design standpoint, OSDs offer insights into task activities that would yield the greatest benefit from redesign, such as by identifying inefficient interactions between agents (e.g., unnecessary manual information gathering outside the control room) or unnecessary physical movements between I&C on a control panel. The primary disadvantage of OSDs is that developing the diagram can be labor intensive and take a considerable amount of time [6]. In addition, complex tasks can result in unwieldy visualizations that are difficult to interpret [5].
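As a purely illustrative sketch (not taken from the paper or from R-ITAT itself), a temporal OSD can be approximated in R by plotting procedural steps against the agents involved; the step labels and agent names below are invented for illustration.

```r
# Minimal sketch of a temporal OSD: each procedural step is a point placed at its
# position in the sequence (x) and at the agent it involves (y). Hypothetical data.
library(ggplot2)

osd <- data.frame(
  step   = 1:6,
  agent  = c("Reactor Operator", "Pressurizer Level", "Reactor Operator",
             "Turbine Operator", "Feedwater Flow", "Reactor Operator"),
  action = c("read", "monitor", "adjust", "verify", "monitor", "log")
)

ggplot(osd, aes(x = step, y = agent, label = action)) +
  geom_path(aes(group = 1), colour = "grey60") +  # activity flow across agents
  geom_point(size = 3) +
  geom_text(vjust = -1) +
  labs(x = "Sequence (step order)", y = "Agent", title = "Temporal OSD (sketch)")
```

The same data, with x and y replaced by panel coordinates, would yield a spatial OSD overlaid on a control-board diagram.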

2.2 Link Analysis

Link analysis identifies the connections, or activity flow, between different parts of a system in completing a task [5]. The connections identified from link analysis are described as the links. The granularity of link analysis regarding what constitutes a part of a system depends on the task analysis question at hand; however, the process of link analysis is similar. That is, sequence task data are collected as for an OSD, but are then aggregated across agents in a transition matrix that represents the number of times a given agent interacted with another agent. Figure 1 illustrates an example transition matrix used in link analysis. In this example, the rows denote the initiating agent (i) whereas the columns denote the receiving agent (j). For example, if Agent 1 (e.g., a reactor operator) is required to monitor Agent 2 (e.g., pressurizer level) six times throughout the course of a task, the expression in the matrix would be L1,2 = 6.

Fig. 1. Example transition matrix for link analysis.
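The following sketch (our own illustration, with hypothetical agent names and step sequence) shows one way such a transition matrix can be assembled in R from an ordered sequence of agent interactions.

```r
# Minimal sketch of building a link-analysis transition matrix: entry L[i, j]
# counts how often agent i is followed by agent j in the task sequence.
agents <- c("Reactor Operator", "Pressurizer Level", "Feedwater Flow")

# Hypothetical ordered sequence of the agents touched at each procedural step.
sequence <- c("Reactor Operator", "Pressurizer Level", "Reactor Operator",
              "Feedwater Flow", "Reactor Operator", "Pressurizer Level")

L <- matrix(0, nrow = length(agents), ncol = length(agents),
            dimnames = list(from = agents, to = agents))

# Count each consecutive transition in the sequence.
for (k in seq_len(length(sequence) - 1)) {
  L[sequence[k], sequence[k + 1]] <- L[sequence[k], sequence[k + 1]] + 1
}
L
```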

The main output of link analysis is a visualization that shows the connections between the agents of a system needed to complete a task. Elements of graph theory can be used to describe these inter-relations between agents [7]. Within this context, agents are represented as nodes, whereas connections are represented as edges on the graph. The weight, or thickness, of an edge can graphically represent the frequency of connections between a pair of agents. Furthermore, extensions from graph theory and social network analysis, such as the measurement of centrality, may be adopted to support interpretation of link analysis. Centrality can be regarded as the degree of prominence of
the agents within a system [8]. An agent with the most connections to other agents has the highest centrality. From a design standpoint, the concept of centrality can be regarded as a measure for quantifying proximity compatibility among design elements needed to complete a specific task [8]. Common centrality measures include degree, betweenness, and closeness [9]. Degree centrality can be defined as the extent to which an agent is interwoven between other agents. Betweenness centrality is the degree to which an agent acts as a mediator between the communication of two other agents. Finally, closeness centrality is the extent to which an agent utilizes the minimum number of agents between itself and each other agent [10].
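As an illustrative sketch only (using a small hypothetical matrix, not R-ITAT's actual code), these centrality measures can be computed in R with the igraph package from a weighted transition matrix such as the one built above.

```r
# Minimal sketch of computing centrality measures for a link-analysis network.
library(igraph)

L <- matrix(c(0, 2, 1,
              1, 0, 0,
              1, 0, 0),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("RO", "PZR level", "FW flow"),
                            c("RO", "PZR level", "FW flow")))

g <- graph_from_adjacency_matrix(L, mode = "directed", weighted = TRUE)

# Degree counts connections; note that igraph's betweenness() and closeness()
# treat edge weights as distances by default.
degree(g)
betweenness(g)
closeness(g)
```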

2.3 Timeline and Workload Profile Analysis

Put simply, a timeline is a two-dimensional line or bar chart that graphs some set of variables across time, represented on one of the axes [5]. A Gantt chart is a commonly used timeline. For task analysis, timelines can be used for many functions, including the evaluation of workload throughout the course of time for a given task [5]. Workload profile analysis (WPA) is one such method that can be used to evaluate workload across a task [6]. With WPA, the human factors engineer in collaboration with a subject matter expert (SME) typically rates workload for each task using some subjective scaling technique [5]. The instantaneous self-assessment (ISA) technique is one such workload rating technique that collects workload at different points in time. The ISA requires the SME to rate the level of workload using a rating scale (e.g., 1 = low; 5 = high) [6]. An advantage of ISA lies in its low-cost nature and simplicity as a single item question, which provides a quantitative estimate of workload. Due to ISA’s subjective nature, an obvious pitfall is that the validity and reliability of the workload estimates are limited to the experience of the rater (i.e., the SME).
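A workload profile of the kind described above can be sketched in a few lines of R; the ISA ratings and step numbers below are invented for illustration and are not taken from the paper.

```r
# Minimal sketch of a workload profile: hypothetical ISA ratings (1 = low ...
# 5 = high) collected per procedural step and plotted as a timeline.
library(ggplot2)

wpa <- data.frame(
  step = 1:8,
  isa  = c(2, 2, 3, 4, 5, 4, 3, 2)   # hypothetical SME workload ratings
)

ggplot(wpa, aes(x = step, y = isa)) +
  geom_col(fill = "steelblue") +
  scale_y_continuous(limits = c(0, 5)) +
  labs(x = "Procedural step", y = "ISA workload rating (1-5)",
       title = "Workload profile across a task (sketch)")
```

Peaks in such a profile flag the task phases where redesign or additional operator support may be most worthwhile.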

3 A Preliminary Tool for Integrated Task Analysis

This work presents the R-based Integrated Task Analysis Tool (R-ITAT). R-ITAT is intended to support the design of advanced HSI displays for the existing U.S. NPP fleet by providing output from OSDs, link analysis, and workload estimates at specific human actions, such as those defined in a procedure, through a simple interface. R-ITAT allows a user to map important human actions described in a procedure to their spatial coordinates on a control board and to their temporal sequence within the task, as well as to enter relevant data such as WPA ratings, through a single workflow. Figure 2 outlines the general workflow of R-ITAT, and the following sub-sections describe its use.


Fig. 2. General workflow of R-ITAT.

3.1 Entering Task Analysis Data

R-ITAT enables the user to map human actions in a procedure to a reference image through a graphical user interface. The reference image should be a diagram of the control panels, workstation, or individual HSI display. Figure 2A shows an entire control panel loaded in R-ITAT as the reference image. The user points and clicks on the reference image to map each procedural step to the specific location on the image that corresponds to that step. R-ITAT allows the user to enter task-relevant data in a form (Fig. 2B), including the procedure step number, the action verb, the instrumentation referred to in the step, and the corresponding agent or plant system. The WPA is completed using a scale widget that implements the ISA technique with a measurement scale analogous to that described by Stanton and colleagues [6]; a legend for the ISA rating scheme is provided when selecting the 'Info' button. The ISA ratings can be completed in real time or entered in R-ITAT retrospectively. Spatial coordinates and temporal sequence are collected automatically by the tool.
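
To make the data model concrete, the following sketch shows the kind of record such a tool might store for each procedural step; the field names and example values are assumptions for illustration and are not the actual R-ITAT schema:

```python
from dataclasses import dataclass

@dataclass
class TaskStepRecord:
    step_number: str       # procedure step identifier, e.g., "4.2.1"
    action_verb: str       # e.g., "VERIFY", "OPEN"
    instrumentation: str   # label of the referenced indication or control
    agent: str             # plant system, component, or personnel
    isa_rating: int        # workload rating, 1 (low) to 5 (high)
    x: float               # clicked x-coordinate on the reference image
    y: float               # clicked y-coordinate on the reference image
    sequence: int          # temporal order within the task

example = TaskStepRecord("4.2.1", "VERIFY", "PZR level", "Pressurizer", 3, 412.0, 168.0, 1)
```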

3.2 Data Processing, Analysis, and Output

Data processing and analysis begin with R-ITAT creating a master dataset. From this master dataset, R-ITAT creates several visualizations: a spatial OSD, an attentional heatmap of the spatial OSD, a temporal OSD, a link analysis graph (including centrality metrics), and a WPA timeline (Fig. 2C). The spatial OSD helps visualize where the operator needs to interact, mapping the specific indications and controls used to complete a task; each step is labeled by step number, action verb, and I&C label, and the color of the step is based on the type of action verb. The heatmap provides a gradient of where the most attention is spent on the reference image. The temporal OSD describes the temporal order of a task, visually relating each procedural step's order to the agent described in that step (e.g., a plant system, component, or plant personnel). The link analysis presents a network diagram in which each node represents an instrumentation label and the edges represent transitions between these components from step to step; centrality metrics are also provided. The link analysis may be used to identify indications and controls that are frequently used together in a specific sequence, which can inform the grouping of indications for new task-based displays. Finally, the WPA diagram describes the estimated workload level at each procedural step. These data can help focus design effort where necessary to optimize workload.
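
As an illustration of one of these outputs, an attentional heatmap of this kind could be produced from the collected spatial coordinates roughly as follows (a sketch with hypothetical coordinates, not the tool's code):

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical clicked coordinates (pixels on the reference image), one pair per step.
x = np.array([412, 415, 520, 530, 525, 260, 270])
y = np.array([168, 172, 300, 305, 298, 140, 150])

plt.hist2d(x, y, bins=20, cmap="hot")
plt.colorbar(label="Interaction count")
plt.xlabel("Reference image x (pixels)")
plt.ylabel("Reference image y (pixels)")
plt.title("Attentional heatmap of control board interactions")
plt.show()
```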

3.3 Applying R-ITAT to Support Task Analysis for Integrated NPP Operations

The transformation of the existing U.S. NPP fleet into an integrated operations model requires a complete understanding of the interplay between the people, processes, and technology required for safe, effective, and efficient electricity generation. Figure 3 illustrates the specific applications of each task analysis method offered by R-ITAT in this transformation.

Fig. 3. Use cases of R-ITAT in the digital transformation of U.S. NPPs as an integrated operations model.

Operational costs can be reduced by leveraging technology where there are obvious bottlenecks in the work process. For example, manual actions requiring the control room operator to contact field personnel to check the value or status of a plant component could be streamlined through advanced online monitoring technology or advanced integration of field data. The temporal OSD and WPA can identify the key procedural steps in these operations to help focus these capabilities and maximize their value. Furthermore, the application of advanced HSI displays that reduce complexity, integrate related information, and automate tedious actions can support decision-making, which may consequently support a reduced staffing footprint. The spatial OSD, link analysis, and WPA can support the development of these advanced capabilities by identifying the human actions that pose the greatest risk of operational difficulties and human error resulting from an inefficient workflow, non-optimal workload, or poor grouping of information. In all cases, R-ITAT is intended to serve as additional input, as part of a larger suite of task analysis methods, to facilitate the collection of important task analysis data.

4 Conclusions

This work presents a tool to support task analysis for the development of advanced HSI displays in a fully integrated NPP control room. R-ITAT attempts to accomplish this goal by providing task analysis outputs, including temporal and spatial OSDs, link analysis with measures of centrality, and WPA, to aid in HSI design and preliminary workload assessment. Lastly, it is important to note that R-ITAT is intended to be used as one of a complete set of task analysis methods that together provide a comprehensive understanding of the critical task data supporting the design of advanced HSI displays.

Acknowledgments. INL is a multi-program laboratory operated by Battelle Energy Alliance, LLC, for the United States Department of Energy under Contract DE-AC07-05ID14517. This work of authorship was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government, nor any agency thereof, nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. The United States Government retains, and the publisher, by accepting the article for publication, acknowledges that the United States Government retains, a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. The INL-issued document number for this paper is INL/CON-19-56905.

References
1. Hugo, J.V., Kovesdi, C.R., Joe, J.C.: The strategic value of human factors engineering in control room modernization. Prog. Nucl. Energy 108, 381–390 (2018)
2. Joe, J., Hanes, L., Kovesdi, C.: Developing a human factors engineering program plan and end state vision to support full nuclear power plant modernization. INL/EXT-18-51212, Rev. 0 (2018)
3. R Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria (2017). https://www.R-project.org/
4. Braseth, A., Nihlwing, C., Svengren, H., Veland, Ø., Hurlen, L., Kvalem, J.: Lessons learned from Halden project research on human system interfaces. Nucl. Eng. Technol. 41(3), 215–224 (2009)
5. Kirwan, B., Ainsworth, L.K.: A Guide to Task Analysis: The Task Analysis Working Group. CRC Press, Boca Raton (1992)


6. Stanton, N.A., Salmon, P.M., Rafferty, L.A., Walker, G.H., Baber, C., Jenkins, D.P.: Human Factors Methods: A Practical Guide for Engineering and Design. CRC Press, Boca Raton (2017)
7. Strathie, A., Walker, G.H.: Can link analysis be applied to identify behavioral patterns in train recorder data? Hum. Factors 58(2), 205–217 (2016)
8. Dinakar, S., Tippey, K., Roady, T., Edery, J., Ferris, T.: Using modern social network techniques to expand link analysis in a nuclear reactor console redesign. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 40(1), 1083–1087 (2016)
9. Freeman, L.C.: Centrality in social networks: conceptual clarification. Soc. Netw. 1(3), 215–239 (1978)
10. Guastello, S.: Human Factors Engineering and Ergonomics: A Systems Approach. CRC Press, Boca Raton (2013)

Renewable Energy System Design for Electric Power Generation on Urban Historical Heritage Places in Ecuador

Blanca Topon-Visarrea, Mireya Zapata, and Rayd Macias

Research Center of Mechatronics and Interactive System, Universidad Indoamérica, Machala y Sabanilla, Quito, Ecuador
{blancatopon,mireyazapata}@uti.edu.ec, [email protected]

Abstract. Taking advantage of Ecuador's equatorial geography, which maximizes the amount of energy obtainable from the sun, a renewable energy system for power generation in Urban Historical Heritage places located in Ecuador is presented. The index of electricity consumption and expenditure of a typical household in the sector was determined. Using this parameter and the electricity demand, a set of solar panels, a battery bank, an inverter, and a charge regulator were sized. Subsequently, the efficiency of the system was determined by calculating the energy payback time, the carbon footprint generated, and the energy returned on investment. The results of this study show the feasibility of installing an alternative power generation system that takes advantage of solar radiation levels in a heritage urban area.

Keywords: Quito · Energetic sustainability · Cultural heritage · Solar energy

1 Introduction

The world population has grown considerably and, with it, its energy needs; for this reason, each country seeks new sources and energy alternatives based on the natural resources available [1]. Today's cities are not equipped to deal with exponential urban growth, which in turn generates large amounts of anthropogenic greenhouse gases that aggravate the effects of climate change and extreme weather. In this context, it is imperative to create resilient, habitable cities with low carbon emissions through the use of clean energies that reduce global CO2 emissions [2].

The city of Quito, Ecuador has one of the best-preserved Historic Centers in Latin America; since September 8th, 1978 it has been recognized as Cultural Heritage of Humanity by the United Nations Educational, Scientific and Cultural Organization (UNESCO). This important recognition has turned the city into a national and international tourist reference, which motivates citizens, authorities, public and private institutions, academia, and organizations in general to join forces to conserve the heritage of the city, looking for sustainable and efficient solutions through the use of renewable energy sources such as solar energy for power generation in low-voltage, grid-electrified urban places. In order to size the technical characteristics required for this proposal, the index of electricity consumption and the average monthly expenditure of a typical household in the sector were determined, which according to the study is equivalent to 143.31 kWh per month. With these data, the present study was carried out in order to determine the most suitable energy source that fits the conditions of the 5,100 houses located in this area and fed by conventional energy sources. From the selected energy alternative, the sizing of the system is carried out.

The results of this study show the feasibility of installing an alternative power generation system that takes advantage of solar radiation levels in a heritage urban area. The installation of this type of system in an area that has high levels of population density and that represents a priority sector due to its heritage characteristics allows not only a reduction in the demand for electricity and in the dependence on the interconnected national system, but also reduces the visual pollution caused by the infrastructure of the local electricity network, making the visit of tourists more enjoyable, which in turn energizes the economy of the historic center of the capital city. This, coupled with reduced billing for electric service, also contributes to improving the profitability of the businesses that usually operate in these buildings. Finally, this research constitutes a contribution to future projects that seek the implementation of sustainable practices in urban areas with great tourist potential and whose heritage or attractiveness lies in their streets and buildings.

2 Analysis and Design

2.1 Analysis

Quito was one of the first cities included in the World Heritage List and has one of the best-preserved Historic Centers; these characteristics make it a site protected for the benefit of humanity, as certified by UNESCO. In order to contribute to the preservation of the Historic Center, it is proposed to replace the electrical energy supplied by conventional systems to homes located on-site with a photovoltaic energy system. This system will reduce greenhouse gas emissions and the energy dependence on the national system. In the present investigation, it is proposed to couple the conventional supply system with clean energy that will help reduce CO2 emissions into the atmosphere. For the analysis of photovoltaic energy, radiation data and peak sun hours are required, which have been obtained from the Ecuadorian National Institute of Meteorology and Hydrology (INAMHI).

2.2 Design Criteria for the Photovoltaic System

It is proposed to replace the consumption of household electrical energy supplied by conventional sources with energy from photovoltaic panels. This system neither alters the landscape of the place nor increases its environmental problems. The proposed photovoltaic system (see Fig. 1) is composed of an array of solar panels, a battery bank in mixed connection, a controller, a DC/AC inverter, and a mixed charge regulator. Each element has been dimensioned according to the residential load installed in the Historic Center of Quito (HCQ). The calculation takes as a reference an average house of 117 m2, which includes 3 rooms, kitchen, bathroom, living room, dining room, and a garage, with an estimated electricity consumption of 143.31 kWh/month. This value is reported by the Empresa Eléctrica Quito (EEQ) [4].

Fig. 1. Photovoltaic solar system

Solar Panels. Based on the estimated consumption of 143.31 kWh/month, the number of solar panels required is defined. The calculations consider the total energy to be generated (ETG), the total efficiency (ƞT), the extra energy (Eextra) reserved for cloudy or rainy days, and the energy provided by a single solar panel (Epanel). It was concluded that meeting the energy demand of a typical house requires 14 panels.

Inverter. Based on the energy consumed in a typical house (143.31 kWh/month), the consumption in one hour is calculated, equivalent to 1286 Wh/day. In order to select the inverter, it is necessary to determine the overload and to consider a safety margin for environmental conditions. It is determined that the power of the inverter must be equal to or greater than 2009.4 W.

Batteries. The photovoltaic system requires a battery bank to store the generated charge, so the levels of charge and discharge and the number of batteries needed are determined, considering the energy consumed per day, the days of autonomy, the nominal working voltage, the efficiency of the inverter, and the discharge capacity of the battery. It was concluded that 14 batteries are required to store the energy consumed by a typical house.

Charge Regulator. For the sizing of the regulator it is necessary to consider the total number of panels, the short-circuit current of the solar panel, and a safety factor for losses due to voltage variations resulting from environmental conditions. In the proposed model, the photovoltaic system is connected to a mixed-type regulator (MPPT/PWM), which keeps the voltage of the battery bank constant. The regulator is connected to the house and to the electricity grid in order to release the excess energy produced by the photovoltaic system.


After performing the calculations, the commercial equipment listed in Table 1 was selected according to the obtained parameters.

Table 1. List of commercial equipment for the photovoltaic energy system
Equipment                  Quantity  Technical characteristics
Photovoltaic solar panel   14        Polycrystalline, 250 W, 12 VDC
Batteries                  14        Gel type, 12 VDC, 220 Ah
Inverter                   1         Mixed, 2000 W
Regulator                  1         MPPT/PWM type, 12/48 V, 200 A
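
For illustration, a rough sizing calculation of this kind could look as follows; the peak sun hours, system efficiency, and depth of discharge are assumed values, not the figures used by the authors, so the resulting counts differ from the paper's design of 14 panels and 14 batteries:

```python
import math

# Household demand (from the text above).
E_month = 143.31                 # kWh/month
E_day = E_month / 30 * 1000      # Wh/day

# Assumed design factors (not the authors' figures).
peak_sun_hours = 4.5             # assumed peak sun hours per day in Quito
system_efficiency = 0.75         # assumed combined losses (wiring, inverter, temperature)
panel_power = 250                # W per panel (Table 1)

n_panels = math.ceil(E_day / (peak_sun_hours * panel_power * system_efficiency))

# Battery bank sizing for the stated 4 days of autonomy.
bank_voltage = 24                # V (two 12 V batteries in series per string)
autonomy_days = 4
depth_of_discharge = 0.5         # assumed allowable depth of discharge
battery_capacity = 220           # Ah per battery (Table 1)

ah_required = E_day * autonomy_days / (bank_voltage * depth_of_discharge)
strings = math.ceil(ah_required / battery_capacity)
n_batteries = strings * 2        # two 12 V batteries per 24 V string

print(n_panels, n_batteries)     # results depend strongly on the assumed factors above
```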

3 Efficiency Study of the Photovoltaic Solar System

3.1 Energy Payback Time (EPT)

The analysis of the energy recovery time is carried out by means of the following calculation:

EPT = Energy invested / Annual energy generated    (1)

- The energy invested in the construction of one panel is 725.12 kWh [5]. For this design, 14 panels are considered.
- The energy generated annually by the photovoltaic system is 4162.37 kWh/year. This value is obtained using the ETG and multiplying by the 365 days of the year. In the same way, the energy invested in the 14 solar panels is determined.

EPT = 2.43 ≈ 2.5 years

That is, all the energy invested in the development and construction of the photovoltaic system is recovered in approximately two and a half years.

3.2 Total CO2 Emissions from the Manufacture of a Photovoltaic System

When a photovoltaic solar energy system is manufactured, a carbon footprint of 0.384 kg of CO2 is produced for each kWh consumed in its production [6]. The manufacture of one solar panel requires 725.12 kWh; for the calculation, this value is multiplied by the number of solar panels to be used [5]:

Total_FC = Energy invested × FC = 3,898.25 kg of CO2    (2)

where FC is the carbon footprint (emission factor).


The carbon footprint generated in the construction of the 14 solar panels is 3,898.25 kg of CO2.

3.3 Comparison of CO2 Emissions Generated by Conventional Energy vs. Photovoltaic Energy

To determine the amount of CO2 emitted by conventional energy, the energy consumed by the 5,100 houses over 25 years is calculated. This value is multiplied by the CO2 emission factor of conventional energy. Photovoltaic energy does not emit CO2 beyond that required for its construction. In order to determine this value, the energy required for the construction of the photovoltaic system, 725.12 kWh per panel, was considered. Multiplying this value by the CO2 emission factor gives the tons of CO2 shown in Table 2.

Table 2. CO2 emissions
Energy source          Tons of CO2
Conventional energy    841,974,912
Photovoltaic energy     19,881,075

From the results, it can be seen that the CO2 emissions from conventional energy are about 42 times greater than those produced by photovoltaic energy.

3.4 Energy Return Rate (TER)

In order to determine the efficiency of the system, the energy return rate is calculated with the following equation:

TER = Energy obtained / Energy invested = 10.26    (3)

- Energy obtained: the energy delivered by the system over a useful life of 25 years; in this case, 104,159.2 kWh.
- Energy invested: all the energy used for the system to generate electrical energy, for example, the energy used in the installation of the panels and the electrical energy used, directly and indirectly, in their production; in this case, 10,151.68 kWh.

The result means that for every kWh invested in the photovoltaic system, 10.26 kWh are returned. The TER for this type of renewable energy typically lies in the range 3–30, so this value meets the generation expectation [7].
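
The figures reported in the preceding subsections can be cross-checked with a short calculation using only the values stated in the text:

```python
# All input values are taken directly from the subsections above.
energy_per_panel = 725.12                          # kWh invested in manufacturing one panel [5]
n_panels = 14
energy_invested = energy_per_panel * n_panels      # 10,151.68 kWh

energy_generated_per_year = 4162.37                # kWh/year produced by the system
energy_obtained_25_years = 104159.2                # kWh over the 25-year useful life
emission_factor = 0.384                            # kg CO2 per kWh consumed in manufacturing [6]

EPT = energy_invested / energy_generated_per_year  # ~2.44 years (paper: about 2.5 years)
total_FC = energy_invested * emission_factor       # ~3,898.25 kg CO2
TER = energy_obtained_25_years / energy_invested   # ~10.26

print(round(EPT, 2), round(total_FC, 2), round(TER, 2))
```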

3.5 Analysis with Homer Pro Software

With the help of the Homer Pro software [8], the results shown in Fig. 2 were obtained. The radiation received by a solar panel throughout the day is shown over a whole year. The photovoltaic panels generate electricity from 06:00 to 18:00, with the highest solar radiation between 11:30 and 13:00. The photovoltaic system will be composed of 14 batteries in mixed connection, distributed in two series blocks of 7 batteries in parallel each. According to the design, the battery bank will store energy at 24 V and 3,080 Ah, giving the system 4 days of autonomy. Figure 2 also shows the state of charge of the batteries: they are kept charged during the day, while at night their charge is consumed, reaching about 55% of the total capacity.

Fig. 2. Energy output of the solar panels per hour (a), Battery bank state of charge - obtained with Homer Pro Software (b)

3.6 Positive Environmental Impact

Photovoltaic energy helps reduce the carbon footprint. The designed system only generates CO2 emissions in the construction and installation stage. A photovoltaic system produces 42 times less CO2 than the energy produced in a thermoelectric plant. In addition, it does not need water and takes advantage of small spaces (roofs, walls, and luminaires). Another important characteristic of photovoltaic systems is that the main component of solar panels is silicon, an element extracted from sand, which is very abundant on the planet; there are therefore no alterations to ecosystems. Besides, it does not generate polluting discharges or require excavations. The impacts on the earth's crust and on flora and fauna are zero, as the system generates no noise or vibrations.

4 Discussion

This proposal aims to strengthen the city and make it a world reference by taking advantage of the available renewable resources; the replacement of the conventional energy delivered by the national electrical system with photovoltaic energy in the HCQ was analyzed. After carrying out the design, it was determined that 14 solar panels of 100 W, 14 batteries in mixed connection, an inverter, and an MPPT-type controller are necessary to supply the average consumption of a house (143.31 kWh). The system will allow a reduction of 33,678 tons of CO2 per year. Similar implementations have proven the suitability of photovoltaic systems to produce energy and reduce pollution. In [9], the results for the energy delivered by a grid-connected photovoltaic system located on the roof of a building in the historic center of Mexico City are presented. That grid-connected system has a peak power of 6,150 W and generates 10.8 MWh of electrical energy annually; without it, approximately 2.1 tons of CO2 per year would be generated. In the study carried out by Marchi [3] in the Historic Center of Siena, the CO2 emissions of the residential and industrial sectors were analyzed and a plan based on the Paris Agreement was proposed, applying policies to reduce greenhouse gas emissions with a commitment to comply within a span of 10 years. The solution proposed by Marchi was the implementation of photovoltaic panels, based on public regulations for the reduction of carbon emissions. The CO2 balance of the Historic Center of Siena was reduced by 53,400 tons, a value reached over a period of 10 years. The present study plans to reduce 19,881 tons of CO2 over 25 years, 42 times less than would be produced by consuming conventional energy over the same period.

5 Conclusions

The present investigation proposes an alternative that reduces by a factor of 42 the amount of CO2 emissions generated over a span of 25 years by installing photovoltaic panels on the roofs of houses in the HCQ, thereby contributing to environmental practices in the city that support the preservation of the colonial structure of the buildings. The results of this study show the feasibility of installing an alternative power generation system that takes advantage of solar radiation levels in a heritage urban area. The installation of this type of system in an area with high population density, and which represents a priority sector due to its heritage characteristics, allows not only a reduction in the demand for electricity and in the dependence on the interconnected national system, but also reduces the visual pollution caused by the infrastructure of the local electricity network, making the visit of tourists more enjoyable, which in turn energizes the economy of the historic center of the capital city. This research shows that photovoltaic energy generation is the most suitable option to reduce electricity consumption from the grid without affecting the architecture of the place, in addition to supporting the decarbonization of contemporary cities. This proposal constitutes a contribution to future projects that seek the implementation of sustainable practices in urban areas with great tourist potential and whose heritage or attractiveness lies in their streets and buildings.

References
1. Oviedo-Salazar, J.L.: History and use of renewable energies. Daena Int. J. Good Consci. 10, 1–18 (2015)
2. Kammen, D.S.: City-integrated renewable energy for urban sustainability (2016)
3. Marchi, M.N.: Environmental policies for GHG emissions reduction and energy transition in the medieval historic center of Siena (Italy): the role of solar energy (2018)
4. EEQ: Empresa Eléctrica Quito. http://www.eeq.com.ec. Accessed 4 Jan 2018
5. Moreno, R.H.: Energy payback time of crystalline silicon photovoltaic systems in Costa Rica. Revista de Ingeniería Energética 39, 198 (2018)
6. Zambrano, C.: CO2 emission factor for electricity generation in Ecuador during the period 2001–2014. Avances en Ciencias e Ingenierías 7, 3–4 (2015)
7. Ballenilla, M.: La tasa de retorno energético. El Ecologista 55, 24–28 (2008)
8. Singh, A.B.: Computational simulation & optimization of a solar, fuel cell and biomass hybrid energy system using HOMER Pro software. Procedia Eng. 127, 743–750 (2015)
9. Santana-Rodriguez, G.V.: Evaluation of a grid-connected photovoltaic system and in situ characterization of photovoltaic modules under the environmental conditions of Mexico City. Mexican Phys. Mag. 127, 88–94 (2013)
